AI, a boon or bane? That is the question

Regardless of the cheers and fears that it brings, we are a long way from it being truly safe and acceptable

Published: Jul 13, 2018 08:11:44 AM IST
Updated: Jul 13, 2018 03:04:15 PM IST

Visitors at the 2nd World Intelligence Congress, in Tianjin, China. Humans trusting AI systems will lead to their greater acceptance
Image: Zhang Peng / Lightrocket via Getty Images

There is tremendous excitement in the air about artificial intelligence (AI) across the world today. Everyone seems to be talking about AI, from industry leaders and scientific thinkers to heads of state and the grandmother next door. The subject has been widely hailed as the salvation of humankind and, at the same time, criticised as a potential harbinger of its doom.

Taking away jobs
According to a 2017 Gartner report, AI could lead to a staggering 1.8 million job losses, particularly in the manufacturing sector. In the long run, however, it is expected to create more than 2 million jobs in other sectors, including health care and education, which will continue to be skill-oriented. This underscores that the impact of AI on employment will vary from industry to industry.

As is the case with any disruptive technology, AI will change the landscape of employment, along with the human resource and skill requirements across sectors. Only time will tell which jobs are likely to be replaced and what new skills will become valuable in the future. No one could have imagined the role of a BPO executive 25 years ago. The job market is always evolving and diversifying, creating demand for new skills in roles that may not be easily automated.

Therefore, both the advantages of AI and the risks associated with it are a long way from being fully realised. We are several years, if not decades, away from achieving true human-level AI, and there are many pressing, real problems that need to be addressed before AI can truly be trusted to function at scale. Let us take a look at the challenges the AI community needs to address before we can claim the arrival of an all-pervasive AI.

Adversarial attacks  

AI-enabled systems have achieved superhuman performance in several domains, such as recognising voice commands, identifying objects in images, and even diagnosing medical conditions. In all of these settings, the implicit assumption has been that everyone wants the AI to succeed. No one would want their home assistant to misunderstand them. If, however, someone did want it to fail, they could easily confuse it by speaking in a different accent. There have been instances of AI agents mistaking a dog for a ball because of noise added to the images. What is worrying about such instances is that the noise is almost imperceptible to humans, who have no difficulty identifying the objects correctly.
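To make the idea concrete, here is a minimal sketch of such an attack on a hypothetical linear classifier. Everything in it is illustrative, not a real system: real attacks target deep networks, but the principle of nudging each pixel slightly in the direction the model is most sensitive to is the same.

```python
# A fast-gradient-style adversarial perturbation on a toy linear classifier.
# The "model" here is just a random weight vector, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=10_000)   # hypothetical learned weights of the classifier
x = rng.normal(size=10_000)   # a clean input, e.g. a flattened image

def label(score: float) -> str:
    return "dog" if score > 0 else "ball"

clean_score = w @ x

# Nudge every pixel by a tiny amount in the direction that flips the score.
# Each individual change is far too small for a human to notice.
epsilon = 0.05
x_adv = x - epsilon * np.sign(clean_score) * np.sign(w)
adv_score = w @ x_adv

print(f"clean input : score {clean_score:+8.1f} -> {label(clean_score)}")
print(f"perturbed   : score {adv_score:+8.1f} -> {label(adv_score)}")
print(f"largest per-pixel change: {np.abs(x_adv - x).max():.3f}")
```

Against a deep network, the sign of the loss gradient plays the role of np.sign(w) here; the attack works because, in high-dimensional inputs, thousands of tiny changes can add up to a large change in the model's output.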

This phenomenon is not limited to the current state-of-the-art methods; it is an issue the AI community has grappled with for more than a decade. With AI poised for wide deployment, addressing such adversarial attacks has taken on a new urgency. If we are going to use an AI-enabled authentication system, it is quite likely that someone will try to attack it. The academic community is making good progress on defences.

Biases in AI
Another worrying aspect of AI systems is that they tend to reflect the socio-cultural biases present in the data they are trained on. There have been reports of AI-enabled systems considering only white men as the typical demographic for CEOs of companies, or using a person's ethnicity as an indicator of criminal intent. Unfortunately, the bias in the data is a manifestation of the biases embedded in human behaviour.

With the ever-increasing use of AI in decision-making in critical areas, it is crucial to put in place mechanisms that avoid such biases and ensure that decisions are fair. This would, however, mean holding AI to an ethical standard higher than the one humans are known to hold themselves to.

One way of approaching this question scientifically is to ensure that all decisions are fair with respect to some protected attribute, such as gender. This means that, all else being equal, gender does not play a role in the decision made using AI. The goal is to ensure this even when there is significant bias in the data.
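As an illustration, here is a minimal sketch of one common formalisation of this idea, demographic parity, which checks whether approval rates differ across groups. The data here are synthetic and deliberately biased; a real audit would use the system's actual decisions.

```python
# Checking demographic parity on synthetic, deliberately biased data.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

gender = rng.integers(0, 2, size=n)        # protected attribute: group 0 or 1
score = rng.normal(size=n) + 0.5 * gender  # scores inherit a bias from the data
approve = score > 0.5                      # the AI's decision rule

# Demographic parity asks: are approval rates equal across the two groups?
rate_0 = approve[gender == 0].mean()
rate_1 = approve[gender == 1].mean()
print(f"approval rate, group 0: {rate_0:.1%}")
print(f"approval rate, group 1: {rate_1:.1%}")
print(f"parity gap: {abs(rate_0 - rate_1):.1%}")  # a large gap signals unfairness
```

A fairness-aware system would correct the scores or adjust the decision thresholds so that this gap shrinks, even though the underlying data remain biased.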

Explainability in AI

Another important feature that will affect the acceptability of AI is the ability of AI systems to explain their outcomes. In a future where AI systems take on an increasingly large share of repetitive decision-making, people will not want to be told, for instance, that they are being denied a loan because a black box called AI decided so. We would ideally like the AI to offer explanations that are meaningful to a layperson. In this case, for example, an explanation such as, ‘Since your annual income is below the required level, and you already have significant outstanding loans, we are unable to sanction you the loan’ would be far more meaningful.
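With a transparent, rule-based model, producing such an explanation is trivial, as this minimal sketch shows. The thresholds and field names are hypothetical, chosen only to mirror the loan example above.

```python
# A loan decision that explains itself, using a transparent rule-based model.
# The thresholds below are hypothetical and purely for illustration.
MIN_ANNUAL_INCOME = 500_000
MAX_OUTSTANDING_LOANS = 200_000

def decide_loan(annual_income: int, outstanding_loans: int) -> str:
    """Return a decision along with reasons a layperson can follow."""
    reasons = []
    if annual_income < MIN_ANNUAL_INCOME:
        reasons.append("your annual income is below the required level")
    if outstanding_loans > MAX_OUTSTANDING_LOANS:
        reasons.append("you already have significant outstanding loans")
    if reasons:
        return "Since " + " and ".join(reasons) + ", we are unable to sanction you the loan."
    return "Your loan has been sanctioned."

print(decide_loan(annual_income=350_000, outstanding_loans=250_000))
```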

The current trend in AI is to use complex models where such clear and comprehensible explanations are not easy to obtain. While we have made some progress in explaining perception, i.e., ‘Why did the AI identify that as a dog?’, we are a long way from producing such explainable behaviour for other tasks.

The other side of being able to explain outcomes is being able to trust them. Consistent explanations go a long way in engendering trust in AI systems, which in turn leads to greater acceptance.

AI as a menace

Is AI a menace to humankind? AI is a tool, a technology, that humans will use for whatever purposes they choose. Is nuclear technology a threat to humankind? Yes, without a shred of doubt. It is, however, also beneficial when used responsibly. Likewise, AI is a disruptive technology that should be handled carefully. We have a long way to go before we can ensure truly safe and acceptable AI. It is possible that people will rush to deploy AI systems to gain a first-mover advantage, and that could result in catastrophic failures. But the notion of a fully autonomous AI system evolving to take over the world belongs, as of today, in the realm of science fiction.

Balaraman Ravindran heads the Robert Bosch Centre for Data Science and AI at IIT Madras

(This story appears in the 20 July, 2018 issue of Forbes India.)