What if humans are no longer Earth's most intelligent beings?

University of Virginia economist Anton Korinek believes that the kind of AI that Hawking refers to — “general artificial intelligence” that can equal or surpass human intelligence — could be just a few decades away.

By Anton Korinek and Caroline Newman
Published: Dec 13, 2018 03:18:33 PM IST
Updated: Dec 13, 2018 03:24:35 PM IST

In his final, posthumously published book, famed physicist Stephen Hawking raises an alarm about the dangers of artificial intelligence, or AI, and the existential threat it could pose to humanity.

In Brief Answers to the Big Questions, Hawking writes, “a super-intelligent AI will be extremely good at accomplishing goals, and if those goals aren’t aligned with ours, we’re in trouble.”

University of Virginia economist Anton Korinek could not agree more, and he believes that the kind of AI that Hawking refers to — “general artificial intelligence” that can equal or surpass human intelligence — could be just a few decades away.

“I believe that, by the second half of this century, AI — robots and programs — will be better than us humans at nearly everything,” said Korinek, who holds a joint appointment in UVA’s Economics Department and the Darden School of Business. “The fundamental question becomes, ‘What will happen to humans if we are no longer the most generally intelligent beings on Earth?’”

Korinek has written and co-written several published and forthcoming papers on the economic impact of advancing artificial intelligence, including a paper published by the National Bureau of Economic Research and several works in progress. He will teach a course on the topic, “Artificial Intelligence and the Future of Work,” in the spring.

We sat down with him to discuss the fundamental question he posed, the possible future of artificial intelligence and what humans can do now to shape that future.

Q: Hawking and others concerned about the rise of AI programs and devices focus on “general artificial intelligence.” What does that mean, exactly?
A:
The AI systems we currently have are referred to as “narrow artificial intelligence.” We have computer programs that are each built for a specific, narrow purpose, whether that is targeting ads to consumers to entice sales, screening credit card applicants or even driving a car. But these programs cannot generalize what they have learned beyond their narrow field of application.

“General AI” refers to artificial intelligence systems that could work across a wide range of fields, as humans can. We are not there yet. For example, devices like Alexa or Google Home, even though they may seem quite powerful in some respects and are getting more powerful by the day, do not have the general understanding that humans do.

Q: Are we close to that point?
A:
There is considerable uncertainty, but I believe we may reach what most would agree is general artificial intelligence 15 to 50 years from now.

Human intelligence levels are pretty constant, barring a bit of wiggle room for advances in education. For AI, on the other hand, progress has been exponential. Though humans are still writing the computer programs that are effectively the brains of any AI operation, many AI programs are already capable of training themselves and adapting on their own.

I do not see an end to that exponential progress in the near term. I would say the odds are high that AI meets and surpasses general human intelligence sometime this century.

Q: What are the economic implications of having technology eclipse human intelligence?
A:
In some ways, we are already seeing the impact. Over the past half-century, unskilled workers in the U.S. have barely seen any income gains. The average real wage of non-supervisory workers, for example, has declined over the past 40 years, even as the economy has almost tripled in size. Most of those gains have gone to the so-called superstars of the economy, with some estimates suggesting that the richest three Americans now own more wealth than the bottom 50 percent of the U.S. population combined. Technological advances have been an important driver of such inequality, and I would expect that to continue.

Going forward, as AI systems come closer and closer to human intelligence, the labor market impacts will become more and more severe, and people higher up the skill ladder will increasingly be affected: doctors, lawyers, managers and, yes, professors. Machines could generate an explosion in non-human labor supply and lead to exponential economic growth, but cause real wages to plummet for a large portion of human workers as they are displaced. This is the dilemma: Left unchecked, the economy will produce ever more, but the gains from economic growth may go to fewer and fewer.

Ultimately, there is the risk, pointed out by Stephen Hawking, that super-intelligent AI systems could no longer be controlled by humans and may pursue their own agendas. Even if they respect human life, they are likely to consume significant resources, like land and energy, creating competition with humans for those resources and raising their prices, which would further impoverish humans.

Beyond the economic implications, there is also the social question of what happens when humans are no longer the most intelligent beings on the planet. Will we still be able to rule the planet as we have so far? Can we direct the AI or merge with AI in a way that will advance humanity and help us rather than hurt us?

Q: What are some steps government and business leaders could take now to shape this future?
A:
First, we need our government leaders to be aware of these important questions. They should be front and center in our policy debates, given how quickly technology is advancing. That is not happening in the United States right now, but it is happening in countries like China and Russia.

China, for example, has declared that it wants to be the world leader in artificial intelligence by 2030, and it is investing enormous resources to reach this goal. In Russia, Vladimir Putin sees AI as the key to the future, declaring that “whoever becomes the leader in this sphere will become the ruler of the world.” If we want the U.S. to lead AI developments, and to put our values of freedom and democracy at the center of those developments, our leaders need to be more active in that sphere. We need a master plan for AI to steer technological progress in a direction that is desirable for us.

As an economist, I see inequality as one of the greatest challenges posed by future advances in AI. I have studied several economic policy measures to mitigate increases in inequality.

Q: What are those?
A:
There are three specific ideas I have looked at in my work.

The first is guarding against monopoly power. In the digital economy, there is a strong tendency for digital platforms to turn into monopolies. If you are the first decent social network, or the first decent search engine, or the first decent e-commerce site, you tend to capture the entire market because of so-called network externalities; each user that you add gives you more information and allows you to serve your existing users better. As a result, firms and their top management have enormous market power. Having such power tightly concentrated is actually inefficient for the market and calls for more aggressive antitrust rules and enforcement.

Secondly, as part of this, we should also re-examine intellectual property laws. In our knowledge economy, intellectual property laws encourage innovation, which is certainly important and a good thing. However, they also feed monopolies, given that they may grant firms exclusive access to new technologies for up to two decades. One sector where we see this clearly is the drug industry, where pharmaceutical companies are raising the prices of patented drugs to incredible levels and reaping enormous profits, often for innovations that are really insignificant.

At this point in time, given the rise of AI, it might be wise to revisit intellectual property laws and recalibrate the balance between encouraging innovation and guarding against harmful monopolies.

Finally, there are ways to steer technological progress in a direction that helps lesser-educated workers instead of replacing them. Some economists have begun referring to this as intelligence assistance, or IA. For example, AI could equip human call center workers to respond better to customer problems, drawing on their unique human qualities to help solve them rather than eliminating those workers altogether. Consciously and intentionally steering progress in that direction would improve conditions in the human labor market.

Q: Is there anything else that you would like to see happening now to better prepare humans for the future?
A:
One of my goals is to increase awareness of the radical opportunities and threats created by advanced AI. At present, we have more questions than answers, and we are pressed for time — technological progress does not stop to give us more time to come up with good answers to the challenges posed by advanced AI. I hope that some of our best and brightest young minds will work on these questions, to the benefit of all humanity.

I will discuss many of these questions with our students in a course on “Artificial Intelligence and the Future of Work” that I will offer to economics and computer science majors this spring.

As wonderful as many of the innovations in the field of AI are — our phones, for example, can certainly do a lot of cool tricks now — it is important to reflect on how far we have come over the past 10 years and how far we could go in the next 10, 20 or 50 years. How can we individually prepare ourselves for this “Age of Artificial Intelligence,” and what will it mean for us as individual human beings, for our country and for humanity as a whole?

This article originally appeared on UVA Today.

[This article has been reproduced with permission from University of Virginia's Darden School of Business. This piece originally appeared on Darden Ideas to Action.]