The generative AI bill is coming due, and it’s not cheap


By Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

The generative AI bill is coming due, and it’s not cheap

As AI developers try to commercialize and monetize their models, their customers are coming to grips with the fact that the technology is expensive. The up-front costs of developing AI models are significantly higher than those of traditional software. Developing large AI models requires highly talented (and highly paid) researchers. Training the models requires lots of expensive servers, usually powered by Nvidia chips. And, increasingly, AI developers will have to pay for the text, image, and knowledge-base data used to train their models. SemiAnalysis analyst Dylan Patel has estimated that running ChatGPT costs OpenAI about $700,000 a day, for example. And a recent Reuters report says that in the first few months of 2023, Microsoft was losing about $20 per user per month on GitHub Copilot, its AI coding assistant, for which users pay $10 per month.

As the developers try to commercialize their models, those high costs must eventually be passed on to customers. The prices of the first AI products available to enterprises are already getting attention. Both Microsoft and Google have announced that they will charge $30 per user per month for the AI assistants in their respective productivity suites. That's on top of the license fees customers already pay. Enterprises can also access large language models from companies like OpenAI, Anthropic, and Cohere by calling them through an application programming interface (API). Those services typically charge for both the text sent to the model and the text the model generates, and the costs can add up quickly.
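
To see how fast, here's a minimal back-of-the-envelope sketch in Python. The per-token rates below are illustrative assumptions roughly in line with published 2023 list prices for a top-tier model, not any provider's actual quote, and the usage figures are hypothetical:

```python
# Back-of-the-envelope math for token-based LLM API pricing.
# Rates are illustrative placeholders; check the provider's
# current price list before relying on these numbers.

INPUT_RATE = 0.03 / 1000   # assumed $ per token sent to the model
OUTPUT_RATE = 0.06 / 1000  # assumed $ per token the model generates

def monthly_cost(users: int, queries_per_user_per_day: int,
                 input_tokens: int, output_tokens: int,
                 days: int = 30) -> float:
    """Estimate a month of API spend across a workforce."""
    per_query = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    return users * queries_per_user_per_day * per_query * days

# 1,000 employees, 20 queries a day, ~500 tokens in and ~500 out:
print(f"${monthly_cost(1000, 20, 500, 500):,.0f} per month")
```

At those assumed rates, a 1,000-person company making fairly modest use of a frontier model would spend roughly $27,000 a month, before any volume discounts.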

For its part, OpenAI seems to be making a successful business out of selling subscriptions to ChatGPT and API access to its GPT-3.5 Turbo and GPT-4 LLMs. Bloomberg reported in late August that the company is making $80 million per month, putting it on track for $1 billion in revenue in 2023. In 2022, the company lost $540 million during the development of ChatGPT and GPT-4, The Information reported.

But the economics described above apply to the commercialization of huge, general-purpose models designed to do everything from summarizing long emails to writing computer code to discovering new cancer drugs. OpenAI, for example, explains that it's trying to offer enterprises a generalized "intelligence layer" that can be used across business functions and knowledge areas. But that's not the only approach. Many in the open-source community believe that enterprises can instead build and use a number of smaller, more specialized models that are cheaper to train and operate.

Clem Delangue, CEO of the popular open-source model sharing platform Hugging Face, tweeted Tuesday: “My prediction: in 2024, most companies will realize that smaller, cheaper, more specialized models make more sense for 99% of AI use-cases. The current market & usage is fooled by companies sponsoring the cost of training and running big models (especially with cloud incentives).” 

AI disinformation in 2024: New details about what U.S. voters might encounter next year 

Senator Mark Warner, one of the smartest members of Congress when it comes to AI, fears AI-generated disinformation could wreak havoc during next year's election season. "[Russia's actions were] child's play, compared to what either domestic or foreign AI tools could do to completely screw up our elections," he told Axios.

A new study from Freedom House puts some facts behind the fear. The researchers found that generative AI has already been used in at least 16 countries to "sow doubt, smear opponents, or influence public debate." Surprisingly, the two most recent examples of widely distributed AI-generated disinformation were audio. Politico notes that in Slovakia's recent election, right-wing operatives released fake audio clips depicting the voice of a liberal candidate talking about plans to rig the election and raise the price of beer. And Poland's centrist opposition party used AI-generated audio clips mimicking the country's right-wing prime minister in a series of attack ads.

Generative AI tools, including image, text, audio, and even meme generators, have quickly become more available, more affordable, and easier to operate over the past few years. And social media platforms such as X and Facebook provide ready distribution networks that can reach millions of people very quickly. To make matters worse, the U.S. and many other countries have no binding regulations requiring the developers and users of these tools to make clear that their output is AI-generated.

Americans want the government to develop its own AI braintrust, not rely on big tech, consulting firms

New polling on AI policy from the Vanderbilt Policy Accelerator finds that most people want the government to develop its own braintrust for regulating AI and for deciding how federal agencies should use the technology. The government has traditionally relied on tech companies and consulting firms for the technical expertise these tasks require, but much of the public seems to believe the stakes of AI regulation are too high to let tech companies define the rules, or regulate themselves. More than three-quarters (77%) of the 1,000-plus people surveyed support the creation of a dedicated team of government AI experts to improve public services and advise regulators. But that number dropped to 62% when respondents were confronted with the argument that such a team might amount to "big government."
