Why can’t AI content generators just follow the rules?

Maybe one step in the right direction would be ensuring that AI content generators properly cite their sources.



“Students in Europe are just now writing research papers versus students in the U.K. and U.S. who start writing them in high school.”


This comment was made by a British professor to my son, who is a graduate student in Italy. The professor went on to say that, as a result, he and other professors are more lenient when it comes to plagiarism.


The professor's point of view seemed to apply equally well to the current state of AI content generators. We are at the beginning of what will be a long road of generative AI tools producing content, and there are lessons to be learned about how to use them correctly.


Last month, I wrote an article for MarTech that ended up appearing on a digital agency’s website, presented at first glance as their own content (it has since been removed). The article was based on research our firm conducted on the best-in-class social media practices of over 40 companies. Oddly, the website claimed that the article had been written by an AI bot, yet also referenced MarTech as the site where it was first published.


Plagiarism and AI detection


As new AI tools are rolled out, and “rolled in” to existing tools, the discussion has focused on how to regulate the content they generate. There are now numerous AI content detector tools that can be used to determine whether content was created by AI and whether it was plagiarized. Consider this a public service announcement for marketers, college students and, apparently, college presidents.


OpenAI and Microsoft are now being sued by The New York Times for training their AI models on Times content. The issue is that OpenAI and Microsoft used copyrighted content owned by the Times without paying for the rights to use it. Many believe the outcome of this lawsuit could decide the fate of AI.


The key point in this argument is input versus output. The argument for OpenAI and Microsoft is that the copyrighted material is being used for learning purposes, to train their tools. This, proponents argue, is something humans have done for hundreds of years. Attorneys are a great example: trained on case history in law school, they build their written arguments on precedent.


Where things change is in the output. If the tools produce content that has been lifted verbatim from New York Times articles, or that fails to cite the source, then you have issues with copyright infringement and/or plagiarism.


Just cite your sources


Our firm invested in and conducted the research, but the digital agency received the benefit of the insights without properly crediting the source, ostensibly using an AI bot to lift passages from my original article. Any new technology ushers in a time of uncertainty and unknown change. AI tools are disruptive, and the ethical issues around their use (see the writers’ strike as further evidence) are complex.


But maybe, in some ways, it’s not that complicated after all. If we consider an AI tool to be, in a sense, a “digital” student consuming vast amounts of content to become knowledgeable and useful, then the issue of how to manage AI-generated content isn’t that difficult.


If you prompt an AI tool to source the information it is using, it will return an answer within seconds, confirming it can track back to the original information it used to create the output.
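As an illustration, here is a minimal sketch of what that kind of prompt might look like in code, using OpenAI’s Python SDK. The model name, prompt wording and example question are assumptions for illustration, not a prescribed method:

```python
# A minimal sketch: asking a generative AI tool to cite its sources.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name and prompt wording here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any current chat model would work
    messages=[
        {
            "role": "system",
            "content": (
                "For every factual claim in your answer, cite the original "
                "source (publication and article title). If you cannot "
                "identify a source, say so explicitly."
            ),
        },
        {
            "role": "user",
            "content": "Summarize best-in-class social media practices for B2B firms.",
        },
    ],
)

print(response.choices[0].message.content)
```

The same instruction works just as well typed into a chat interface; the point is simply to make source attribution an explicit requirement of every request.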


As with the OpenAI/Microsoft case, this comes down to the output and, even more specifically, the user. If, like new students, users are naive or lackadaisical, you will limit the effectiveness of the tools and your team and, potentially, invite someone from the legal department to come for a visit.


On the other hand, if users are trained to treat the output of the tools like any other article or research paper they would write for high school or college, a huge productivity increase is possible. Simply prompting the tool to include the sources of the content it used does the trick.


Perhaps the more things change, the more they stay the same. Giving credit where credit is due has always been right, no matter the situation…or the technology.



About the author

Scott Gillum
Contributor
Scott is the Founder and CEO of Carbon Design. Prior to founding Carbon Design, he was the President of the Washington, DC office for Merkle (a Dentsu agency), the world’s largest B2B agency.

His career has followed the pipeline: he started at the bottom closing deals as a sales rep, then worked as a management consultant after graduate school, helping clients build sales and marketing channels. Advertising broadened his knowledge and experience in building brands and creating awareness.

Along the way, he’s been the head of marketing for an Inc. 500 company and an interim CMO for a Fortune 500 company. Today, Scott helps clients improve the effectiveness of their marketing efforts up and down the funnel, from transitioning to digital to finding new ways to communicate with, connect with and motivate audiences.

Scott has been a member of the Gartner for Marketing Leaders Council, and he writes a monthly column on business marketing for several publications. In the past, he has been a regular contributor to publications such as Forbes, Fortune, Ad Age and the Huffington Post, and he has contributed to various books on marketing. Additionally, his work on sales and marketing integration was made into a Harvard Business School case study and is taught at leading business schools across the nation.
