AI Risky Or Not? 21st Century’s ‘Steam Engine Of The Mind’
Reid Hoffman, a partner at venture capital firm Greylock Partners, refers to generative AI (GAI) as the “steam engine of the mind” during this “cognitive industrial revolution,” and compares it with the advent of steam power in the 18th century.
The comparison is intended to get people thinking at a higher level, scale, and importance relative to society, industry, and their own life. (It caught my attention.)
“The steam engine gave us a tremendous number of physical superpowers in manufacturing, transport, and construction by ultimately creating machinery that was more powerful and mobile than simple watermills,” Hoffman told Lareina Yee, senior partner at McKinsey & Company, in an interview. “The same thing is happening now with cognitive capabilities in anything that we do that uses language, be it communication, reasoning, analysis, selling, marketing, support, and services.”
Hoffman also co-founded LinkedIn and, in 2023, a technology company called Inflection AI that uses GAI and machine learning (ML) to create digital experiences, apps, and hardware.
The fascinating interview facilitated by Yee delves into Hoffman’s views on cognitive and emotional intelligence. Emotional intelligence, also known as emotional quotient (EQ), can include empathy, active listening, staying positive, and being receptive to feedback.
“The most natural thing when you’re doing engineering is to get IQ correct,” Hoffman told Yee. “But one of the things that’s really essential for people is how we bring EQ into it.”
Hoffman believes this revolution will prove more powerful than the era of the steam engine or the printing press because of the speed at which it will move. When someone builds a new AI agent, or an offering based on one, the internet and mobile infrastructure make it possible to reach billions of people within days.
Transitions are difficult. When Yee asked Hoffman how those just starting their careers can stay in demand within enterprises, he said, “Ask yourself, ‘What kinds of things might I experiment with? What did other people experiment with, and how do I learn from them?’”
The caveat to all this greatness resides in how the companies building the technology proceed. Analysts have long said that the risk factors sections of U.S. Securities and Exchange Commission filings contain standard, repetitive verbiage, but in the past year companies have moved beyond boilerplate to specific warnings about AI.
Bloomberg first pointed out the warnings earlier this month, and earnings calls have been full of them. Meta Platforms explained its AI could be used to create misinformation during elections. Microsoft noted it could face copyright claims related to AI training and output. Oracle warned its AI products may simply not work as well as the competition.
Alphabet, Google’s parent, wrote use of its AI tools may negatively affect human rights, privacy, employment, or other social concerns, and lead to lawsuits or financial damage.
Products are being affected. Adobe argues that programs like Photoshop will remain central to creative professionals, but earlier this year it added a warning that the proliferation of AI could disrupt the workforce and demand for its existing software.
Palo Alto Networks, Dell Technologies, and Uber Technologies are among those recently adding AI-focused risk factor language.