Look into contractual API restrictions, training data bias and the manual effort needed to properly tag assets in your DAM system.
With the wave of AI tools and vendors crashing at our doors, it can be easy to forget that there may be much more raw material available to us, right inside our own organization. Generative AI, automated intelligence that can help you create new marketing assets, represents a huge leap forward in marketing technology. Imagine creating the images, tech art, blogs, stories and pages of important information like FAQs, definitions, product specifications or pricing tables in a fraction of the time your team needed before.
But there could be serious dangers lurking under the surface of generative AI. Every organization's products and services should have unique selling points and provide compelling and distinguishing features. The assets your firm has already housed in your digital asset management (DAM) solution can provide a proprietary learning environment that will prevent AI-generated assets from sounding like your competitors and, well, robotic.
1. Are there contractual restrictions that hinder your genAI API?
Will your AI API run only on the DAM, or can it make use of other content? Perhaps you only get one API instance rather than allowing for connection to multiple tools. Does the pricing plan that fits your budget include a workable number of API calls, or is there a chance that you might incur overage fees by creating more assets over time?
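Overage math is worth doing before you sign. Here is a back-of-envelope sketch; every figure is an illustrative assumption, not any real vendor's pricing:

```python
# Back-of-envelope check for API overage risk.
# Every figure below is an illustrative assumption, not vendor pricing.
assets_per_month = 1200      # assets the team expects to generate monthly
calls_per_asset = 3          # e.g., draft, revise, finalize
plan_quota = 3000            # API calls included in the plan
overage_fee = 0.02           # assumed cost per call beyond the quota

total_calls = assets_per_month * calls_per_asset
overage_calls = max(0, total_calls - plan_quota)
print(f"Projected calls: {total_calls}, over quota: {overage_calls}, "
      f"extra cost: ${overage_calls * overage_fee:.2f}")
```

Even a crude projection like this tells you whether a plan's quota would survive a busy quarter as asset production grows.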
In addition, make sure to ask if there are restrictions on this genAI's content. Some tools can only generate art with no text or with limited text. Others may be unable to search for relevant stock images or find images or digital videos within your DAM.
How much dynamic media do you need, and are you satisfied with this tool's quality? In addition, you might want to include programmatic marketing material like media buys, ad rates, campaign themes and calendars, checklists, mailing lists or other data commonly attached to assets in your DAM.
2. On what data was your genAI trained?
What types of metadata (the data that describes your assets) have been used to train the AI, and was that data a balanced set? This is a big one because many AI tools are notoriously biased.
- In 2021, the iSchool at UC Berkeley demonstrated in one report that a browser search for "professional haircut" images showed a clear gender and racial bias.
- Research at the University of Pittsburgh discovered that Google Jobs showed higher-paying job ads to men more than it did to women. (For more, see this recent blog post from IBM on the sources of bias in AI, with examples).
Adherence to FAIR data principles and conformity to other standards necessary in your industry should be guaranteed.
I have used AI that tagged images of white women as "beautiful" and "happy" while merely tagging Asian or Black people as "ethnic." Will these kinds of tags affect your customer satisfaction? I would think so.
3. What manual work is involved in training your genAI?
There may be much manual work before your genAI can produce "clean," properly marked-up marketing assets that meet your exacting standards for brand adherence, content quality and more.
Depending on one's industry, metadata fields can be a short list of names and email addresses or the two pages of information one must fill in at a doctor's office. Marketing metadata would outnumber both in a battle of the numbers.
Claravine has identified a handy list of 125 fields of marketing metadata (which you can download or copy into your own list), but I'm trained as a librarian specializing in metadata, taxonomy and information organization. My list includes all of the possible metadata schemas that could be used to tag assets; it's currently over 1,600 fields.
The catch with all that metadata is that most of your assets will be missing it, and missing metadata translates to missing assets in search results. Most organizations have not had the time or the inclination to properly attach metadata. Even standard file names that explain what documents are about are rare, and most PDFs have not been made machine-readable with optical character recognition (OCR) technology.
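A quick audit of a DAM export makes the gap concrete. This is a minimal sketch that assumes a hypothetical CSV export and an illustrative set of required fields; substitute your own DAM's schema for both:

```python
import csv

# Illustrative required fields; replace with your DAM's actual schema.
REQUIRED_FIELDS = ["title", "keywords", "alt_text", "rights"]

def audit_metadata(rows, required=REQUIRED_FIELDS):
    """Count how many assets are missing each required metadata field."""
    missing = {field: 0 for field in required}
    for row in rows:
        for field in required:
            if not row.get(field, "").strip():
                missing[field] += 1
    return missing

# Hypothetical usage against a DAM export file:
# with open("dam_export.csv", newline="") as f:
#     print(audit_metadata(list(csv.DictReader(f))))
```

Even a one-afternoon audit like this shows which fields need backfilling before any genAI training run can find the right assets.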
What will it take to train the genAI on your assets? For example, does a typical training batch consist of only images, only text or a mixture of images and text? How does the AI read unstructured assets like social media posts?
Typically, "teaching" AI entails reviewing your DAM and selecting assets matching a tag (keyword). That search could be long if your DAM assets lack the metadata that speeds findability. Minutes add up to hours fast.
But the real hours come when you need to teach the AI not to tag your assets with certain terms that denote bias or don't match your audience, industry or niche. One city's fries are another's frites; context matters.
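One way to operationalize that review is a simple blocklist-and-locale pass over AI-suggested tags before they land in your DAM. A sketch with placeholder terms; the blocklist and locale map are illustrative assumptions, not recommendations:

```python
# Placeholder terms; build these lists from your own audience and market review.
BLOCKLIST = {"ethnic", "exotic"}       # terms flagged as biased for your brand
LOCALE_MAP = {"fries": "frites"}       # per-market vocabulary substitutions

def review_tags(tags, blocklist=BLOCKLIST, locale_map=LOCALE_MAP):
    """Drop blocklisted tags and swap in locale-appropriate terms."""
    cleaned = []
    for tag in tags:
        t = tag.lower().strip()
        if t in blocklist:
            continue
        cleaned.append(locale_map.get(t, t))
    return cleaned
```

Running every AI-suggested tag set through a filter like this is cheap compared with re-tagging assets after biased or off-market terms reach production.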
It's all about your data, not their AI
No matter how your organization chooses to apply AI tools, how genAI will fit into your technology stack and integrate with other solutions will definitely be top of mind.
GenAI promises to reduce the time it takes to create assets. But it could increase the time you spend administering those assets. Consider the needs of the assets you produce and the audiences you produce them for, and your choices will become focused and clear.
The post 3 DAM considerations before adopting genAI appeared first on MarTech.