Implementing technical solutions to social problems does not stand us in good stead to understand fairness or mitigate bias
Delhi Police’s facial recognition tool does not ‘simply see’, but rather struggles to identify individuals, with an accuracy rate of less than 1 percent, as stated by the Centre before the Delhi High Court
In February 2020, a Union minister urged the Ministry of Home Affairs against the use of facial recognition technology on innocent people. Responding to this, Home Minister Amit Shah stated, “This is a software. It does not see faith. It does not see clothes. It only sees the face and through the face the person is caught.”
This statement’s connotation of an artificial intelligence (AI) system like facial recognition being a neutral tool runs contrary to much of what we have witnessed in recent years. For starters, it is well-documented that the Delhi Police’s facial recognition tool does not ‘simply see’, but rather struggles to identify individuals, with an accuracy rate of less than 1 percent, as stated by the Centre before the Delhi High Court.
Secondly, the legality and necessity of facial recognition’s use by the Delhi Police have come under the scanner. Beyond that, as AI applications crop up across sectors—from education to health care and law enforcement to lending—it is apparent that while states and companies promote AI through promises of modernisation, economic growth and efficiency, these systems can also lead to harmful outcomes, including the violation of fundamental rights, disproportionate impacts on vulnerable communities and the exacerbation of historical discrimination in societies.
(This story appears in the 13 August, 2021 issue of Forbes India.)