BIOMETRICS: HOW GOVERNMENTS CAN MISUSE TECHNOLOGY

It’s the stuff of sci-fi dreams: we have always wanted technology to evolve and grow more capable, but experts are not so sure that one of the latest ways in which machines are learning is a step in the right direction. Artificial intelligence, and facial recognition in particular, is a brainchild of humanity that needs to be reined in and checked, as claims that this technology perpetuates the long-held biases of its creators are on the rise.

One prominent study that calls out the bias embedded in facial recognition systems is MIT’s Gender Shades project. It found that facial analysis technology from prominent companies such as IBM, Microsoft and Face++, while relatively accurate overall, faltered when classifying certain genders and skin tones, performing worst on darker-skinned women. It seems surprising that something non-human can retain the biases that plague human society, but the answer lies in how these programmes are created.
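
Audits like Gender Shades work by running a classifier over a labelled benchmark set and reporting accuracy separately for each demographic subgroup, rather than as a single overall number. The following is a minimal sketch of such a disaggregated check in Python; the records and subgroup labels are hypothetical stand-ins, and a real audit would use a vendor’s actual predictions on a curated benchmark.

```python
from collections import defaultdict

# (true_label, predicted_label, subgroup) - hypothetical audit records
results = [
    ("female", "female", "darker-skinned female"),
    ("female", "male",   "darker-skinned female"),
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "male",   "lighter-skinned male"),
    # ...one record per benchmark image
]

correct = defaultdict(int)
total = defaultdict(int)
for true_label, predicted, group in results:
    total[group] += 1
    correct[group] += int(true_label == predicted)

# Overall accuracy can look fine while one subgroup lags badly.
for group, n in total.items():
    print(f"{group}: {correct[group] / n:.0%} accuracy on {n} images")
```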

Simply put, facial recognition technology is software that detects and classifies a person’s face. It works through machine learning: the programme is shown a sample data set of images and told which pictures are of human faces and which are not. The larger the data set, the better the programme gets at recognising faces. This is where we run into the first possible source of bias in the system: if the sample data set used to teach the software contains a disproportionately high number of white, cisgender, male faces, the system will naturally learn to recognise those faces better than others. This points to a larger, more pervasive issue, one that could trip up how effectively the software identifies minority groups at every step of the process: are those building the technology even aware of their own biases, or of how their oversights could seep into it?
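
To illustrate the mechanism, here is a minimal sketch using purely synthetic data: a simple classifier is trained on a set dominated by one group, and a balanced test set then exposes the accuracy gap. Nothing here models real faces; the groups, thresholds and the scikit-learn classifier are illustrative assumptions.

```python
# A sketch of how a skewed training set yields group-dependent accuracy.
# All data is synthetic and hypothetical; nothing here models real faces.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, threshold):
    # One feature per example; the true label flips at `threshold`,
    # which differs between the two hypothetical groups.
    x = rng.normal(loc=threshold / 2, scale=1.0, size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Training set: 9,000 examples from group A, only 1,000 from group B.
xa, ya = sample(9000, threshold=0.0)   # group A's boundary sits at 0.0
xb, yb = sample(1000, threshold=1.5)   # group B's boundary sits at 1.5
model = LogisticRegression().fit(np.vstack([xa, xb]), np.hstack([ya, yb]))

# Balanced test sets reveal the disparity the skewed training data hid:
# the learned boundary hugs group A's, so group B fares far worse.
for name, threshold in [("group A", 0.0), ("group B", 1.5)]:
    x_test, y_test = sample(5000, threshold)
    print(name, "accuracy:", round(model.score(x_test, y_test), 3))
```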

Your smartphone has it, your photos app has it, and now your government wants it. Here in India, the government seems enamoured by the promise facial recognition holds, having approved a plan in 2020 to set up the National Automated Facial Recognition System. The trouble is that facial recognition technology in the hands of the government is poised to be a huge violation of privacy and digital rights, something digital rights advocates have been shouting themselves hoarse about for the past few years. A chilling use of this technology is in China, where facial recognition software has been used to racially profile and surveil minorities such as the Uighur Muslims living there. In the hands of the Indian government, which has in the past shown a majoritarian bias against Muslims and people of lower castes, there is no guarantee of how justly this technology will be put to use.

If facial recognition is to be a technology that companies and governments invest in, the best course of action from here on would be to ensure transparency: in how the software is built, in the data sets used to train it, and in how, and for what purpose, it is to be used.

10 Jan 2022
Shruti Menon