A researcher's experiments raise uncomfortable questions about the rapid evolution of large language models
Artificial intelligence is advancing so rapidly that this article may be obsolete by the time you read it. That’s Michal Kosinski’s concern when asked about his recent experiments with ChatGPT and the text-generation engine that powers it.
Kosinski, a computational psychologist and professor of organizational behavior at Stanford Graduate School of Business, says the pace of AI development is accelerating beyond researchers’ ability to keep up (never mind policymakers and ordinary users). We’re talking two weeks after OpenAI released GPT-4, the latest version of its large language model, grabbing headlines and making an unpublished paper Kosinski had written about GPT-3 all but irrelevant. “The difference between GPT-3 and GPT-4 is like the difference between a horse cart and a 737 — and it happened in a year,” he says.
Kosinski has been tracking AI’s evolutionary leaps through a series of somewhat unnerving studies. Most notably, he’s found that facial recognition software could be used to predict your political leaning and sexual orientation.
Lately, he’s been looking at large language models (LLMs), the neural networks that can hold fluent conversations, confidently answer questions, and generate copious amounts of text on just about any topic. In a couple of non-peer-reviewed projects, he’s explored some of the most urgent — and contentious — questions surrounding this technology: Can it develop abilities that go far beyond what it’s trained to do? Can it get around the safeguards set up to contain it? And will we know the answers in time?
This piece originally appeared in Stanford Business Insights from Stanford Graduate School of Business. To receive business ideas and insights from Stanford GSB, sign up at https://www.gsb.stanford.edu/insights/about/emails.