"We have to be careful not to get distracted by sci-fi issues and focus on concrete risks that are the most pressing"
The explosion of artificial-intelligence systems has given users tools for a mind-boggling array of tasks. Thanks to the proliferation of large-language-model (LLM) generative AI tools like ChatGPT-4 and Bard, it’s now easier than ever to create text, images, and videos that are often indistinguishable from those produced by people.
As has been well-documented, because these models are calibrated to create original content, inaccurate outputs are inevitable—even expected. Thus, these tools also make it a lot easier to fabricate and spread misinformation online.
The speed of these advances has some very knowledgeable people nervous. In a recent open letter, tech experts including Elon Musk and Steve Wozniak called for a six-month pause on AI development, asking, “should we let machines flood our information channels with propaganda and untruth?”
Regardless of how quickly these AI systems advance, we’re going to have to get used to a good deal of misinformation, says William Brady, professor of Management and Organization at the Kellogg School, who researches online social interactions.
“We have to be careful not to get distracted by sci-fi issues and focus on concrete risks that are the most pressing,” Brady says.
[This article has been republished, with permission, from Kellogg Insight, the faculty research & ideas magazine of Kellogg School of Management at Northwestern University]