
The AI risks we should be worried about

AI has contributed significantly to the spread of misinformation, the automation of existing biases, and threats to privacy. And as it continues to be used in a legal vacuum, it is important to understand the harms associated with it to ensure effective regulation

Published: Jun 23, 2023 10:29:56 AM IST
Updated: Jun 23, 2023 11:04:00 AM IST

The development of AI-based surveillance technology can lead to privacy violations. Illustration: Chaitanya Dinesh Surpur

The conversation around Artificial Intelligence (AI) and all it can do has accelerated in the last year. From ChatGPT to eerily realistic deepfakes to AI-generated music and art, AI is everywhere, so much so that it is becoming increasingly difficult to visit any website today without encountering AI-generated content in one way or another. Conversations around its use, the ethics surrounding that use, and the possible harms arising from it have also picked up pace in recent months, to the point that researchers and industry leaders have written an open letter conveying their concerns that AI could pose a “risk of extinction” and “destroy humanity”.
 
However, AI ethicists like Timnit Gebru and Margaret Mitchell have criticised the open letter, saying it focuses on hypothetical future concerns while failing to identify and respond to the real harms of AI that are already being felt. So what are these harms? AI has contributed significantly to the spread of misinformation, the automation of existing biases, and threats to privacy. One of the foremost examples of the harm AI is causing right now is the use of AI-based surveillance technology by law enforcement agencies.
 
Movies like Tom Cruise’s Minority Report and TV shows like Person of Interest showcase worlds where humans and advanced (almost sentient) technology team up to catch the bad guys. Predictive policing, facial recognition, and social media monitoring work in concert to give the good guys (the police) as much information as possible, without any concern for citizens’ right to privacy, to aid them in fighting crime. A similar narrative has now gripped law enforcement agencies around the world, and companies like US-based Clearview AI, Israel-based Zencity, and China-based Huawei have capitalised on this shift. With questions about the fairness and objectivity of traditional policing cropping up in various jurisdictions, AI, with its promise of objectivity, seems like the answer. However, questions of privacy and accuracy remain.
 
Let’s take the example of the Delhi Police’s use of facial recognition technology to examine these questions. The first reported use of facial recognition by the Delhi Police was in 2019. However, responses to RTI requests reveal that the technology was acquired by the Delhi Police in 2017 on the basis of an order issued by the Delhi High Court, which authorised its use to find missing children. Despite this, by their own admission, the Delhi Police is now using the technology for investigative purposes, a clear instance of widening the technology’s scope without any legal authorisation, which raises severe privacy concerns. These concerns are only heightened by the absence of a data protection law in India, which would protect citizens, and the lack of a specific regulation for facial recognition, which would define the boundaries within which its use would be contained.
 
RTI responses from the Delhi Police also reveal that they consider an 80 percent similarity match to be a positive result. While it is unclear how the Delhi Police arrived at this threshold, the information leads to the chilling conclusion that their use of facial recognition carries a 20 percent margin of error. Even where a match falls below 80 percent similarity, the result is treated as a false positive that is nevertheless investigated further by the Delhi Police for additional “corroborative evidence”. Such an operating procedure not only whitewashes the inaccuracy of the technology but also enables the continued harassment and privacy violation of communities in which family members may share facial features.
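
To make the threshold arithmetic concrete, here is a minimal sketch, in Python, of how a similarity cutoff turns a face comparison into a match decision. Everything below is an illustrative assumption, not the Delhi Police’s actual pipeline; only the 80 percent figure comes from the RTI responses.

```python
# Illustrative sketch only: the functions and values below are assumptions
# built around the 80 percent figure reported in RTI responses; they are
# not the Delhi Police's actual system.
import numpy as np

MATCH_THRESHOLD = 0.80  # reported similarity cutoff for a "positive" result

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, ranging over [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def positive_matches(probe: np.ndarray, gallery: dict[str, np.ndarray]) -> list[str]:
    """Return every gallery identity whose similarity clears the cutoff."""
    return [name for name, embedding in gallery.items()
            if cosine_similarity(probe, embedding) >= MATCH_THRESHOLD]

# Two different people with similar features (siblings, for instance) can
# both clear a loose cutoff, so a single probe image may "positively"
# match several innocent people at once.
```

The point of the sketch is that the cutoff is a policy choice, not a property of the technology: lowering it surfaces more candidates, and more false positives, without any change to the underlying system.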
 
It is worth noting that not just the use but also the development of AI-based surveillance technology can lead to privacy violations. AI-based systems are data-hungry: they need ever-larger amounts of new data to be trained, to identify patterns, and to improve their accuracy over time. This data is usually purchased online or scraped from the web. In the case of Indian surveillance systems such as the National Automated Facial Recognition System (AFRS), currently being developed by the National Crime Records Bureau, access to “dynamic police databases” will be provided to create the AFRS database. Such vague terminology, coupled with the absence of regulatory mechanisms, exposes the data held in Indian government databases to non-consensual access and processing.
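
To illustrate how low the barrier to non-consensual collection is, here is a hedged sketch of web scraping with standard Python tooling; the URL is a placeholder, and the snippet is not tied to any specific system such as the AFRS.

```python
# Minimal sketch of why web-scraped training data raises consent issues:
# a few lines of standard tooling can harvest every image URL on a page,
# and nothing in the process checks whether the people depicted agreed
# to their faces being used as training data. The URL is a placeholder.
import requests
from bs4 import BeautifulSoup

page = requests.get("https://example.com/public-photo-gallery")
soup = BeautifulSoup(page.text, "html.parser")

# Every <img> tag's source becomes a candidate training sample.
image_urls = [img["src"] for img in soup.find_all("img") if img.get("src")]
print(f"Harvested {len(image_urls)} candidate training images")
```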


So what can be done to fix these issues with AI? The European Union is currently drafting an AI Act that will put in place a classification system to determine the level of risk an AI use case could pose to the health, safety, or fundamental rights of a person. The risk categories are unacceptable, high, limited, and minimal. In the most recent iteration of the draft, systems that carry out real-time biometric identification in public spaces have been categorised as posing an unacceptable risk and are proposed to be prohibited, with few exceptions.
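
As a rough illustration of the Act’s approach, the draft’s four tiers can be pictured as a simple mapping from use case to obligation. The tier names follow the draft; the example use cases and obligation summaries below are illustrative assumptions, not the Act’s text.

```python
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative mapping only: under the draft Act, annexed lists and legal
# definitions, not a lookup table like this, decide where a system falls.
USE_CASE_RISK = {
    "real-time biometric identification in public spaces": Risk.UNACCEPTABLE,
    "AI used in law enforcement investigations": Risk.HIGH,
    "chatbot that must disclose it is an AI": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}

for use_case, risk in USE_CASE_RISK.items():
    print(f"{use_case}: {risk.name} ({risk.value})")
```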

India currently has no regulation for AI. However, frameworks appear to be in development. The NITI Aayog has released three papers in its series on Responsible AI, the first of which lays down the principles for Responsible AI. These papers form part of NITI Aayog’s National Strategy for AI, #AIforAll. There are also multiple independent initiatives underway at the Ministry of Electronics & Information Technology and the Ministry of Commerce & Industry. When these frameworks will be put in place, however, remains to be seen.


To regulate AI successfully, India should adopt a framework that effectively assesses the risks attached to each AI system. Here, it is essential to look at the nature and sensitivity of the data that will be collected and processed by a system in order to measure the harms that may occur in case of breach or misuse. The framework should also assess the impact the AI system will have on individual rights and liberties, with uses such as law enforcement, which can have severe consequences for individual liberty, categorised as high-risk.
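
One hedged way to picture such an assessment is as a toy scoring rubric. The factors and thresholds below are invented purely for illustration and carry no legal weight; a real framework would be far more granular.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    data_sensitivity: int  # 1 (public data) .. 5 (biometric or health data)
    rights_impact: int     # 1 (negligible) .. 5 (individual liberty at stake)

def risk_tier(system: AISystem) -> str:
    """Toy rubric: the more severe of the two factors drives the tier."""
    score = max(system.data_sensitivity, system.rights_impact)
    if score >= 5:
        return "high-risk"
    if score >= 3:
        return "limited-risk"
    return "minimal-risk"

# A law enforcement use that processes biometric data lands in the top tier.
print(risk_tier(AISystem("police facial recognition", 5, 5)))  # high-risk
```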

These technologies are being used in a legal vacuum. While AI may seem innovative and amusing to most, it is important that those in charge of regulating it understand the harms associated with it to ensure effective regulation.

(Anushka Jain is Policy Counsel, Internet Freedom Foundation)
