
Response to NITI Aayog's Latest FRT Framework
The past decade has seen rapid advancements in the development and adoption of Facial Recognition Technologies (FRT) across a range of sectors globally, most notably in law enforcement and transportation. In India, instances such as the use of FRT under the Digi Yatra scheme at select airports and the Delhi Police's use of the technology to trace and identify missing children have demonstrated the willingness of both public and private actors to rely on these systems.
Despite the growing use of FRT within the country, a clear regulatory framework or enforceable set of guidelines outlining the permissibility and safeguards needed for these technologies remains strikingly absent. This is particularly concerning given the many risks and challenges associated with FRT that have been flagged by civil society bodies and experts from around the world.
Amidst this rapidly evolving landscape, NITI Aayog has released its draft discussion paper, "Adopting the Framework: A Use Case Approach on Facial Recognition Technology", which articulates a framework for the deployment of FRT systems in India grounded in principles of Responsible AI and AI ethics. In line with NITI Aayog's call for public comments, Aman Nair, Dona Mathew and Urvashi Aneja of DFL have authored this policy brief outlining various points of response and feedback.
The brief is divided into two sections:
- Section one identifies existing tensions and gaps within NITI Aayog’s report.
- Section two makes a broader case for the banning of FRT in public spaces.
Section 1 - Report Specific Response
The brief asserts that while the NITI Aayog report does recognise some of the limitations of FRT systems (such as their unreliability, biased outcomes and threats to privacy and rights), it does not adequately address the implications of these shortcomings. Our analysis broadly follows four key points:
- The distinction drawn by the report between security and non-security use cases of FRT is an inadequate framework and downplays the harms associated with FRT
- The use of consent as a legitimising framework ignores the limitations of consent given the lack of digital rights awareness and the unequal power structures between individuals and institutions
- The report inadequately addresses the questionable legality of existing FRT systems, and its framing of rights as merely challenges to be accounted for is dangerous
- The selection of Digi Yatra as a case study is a suboptimal choice. Given the report's own acknowledgement of the harms of security and law enforcement use cases of FRT, its focus on a non-law-enforcement use case such as Digi Yatra fails to paint an accurate picture of the harms that would arise from the legitimisation of FRT systems
Section 2 - General Comments on FRT
This section interrogates the underlying proposition of NITI Aayog's paper: that FRT can be adequately regulated and that there exist circumstances in which its use is warranted. Our analysis begins by assessing the inherent limitations of FRT systems and the foundational shortcomings associated with their use:
- FRT systems are fallible and have repeatedly been shown to produce biased results
- The adoption of FRT can have a normalising effect on other equally dangerous technologies
- The increased use of FRT will have systemic effects, expanding surveillance and undermining privacy rights and freedom of expression
The second stage of our analysis moves beyond state-backed uses of FRT to examine its use by private actors. In doing so, we demonstrate that the use of these systems by corporations and community organisations can infringe on individuals' rights in much the same way as its use by the state. For example, a fallible FRT system used by a business to check employee attendance can have clear economic repercussions for employees whom the system misidentifies. Moreover, the increased use of these technologies can foster an environment of lateral surveillance, wherein citizens and citizen bodies use them to monitor others.
With these considerations in mind, we put forth a series of prompts to help assess whether the use of FRT in a situation should be considered permissible. These include:
- Whether the system is deployed in a public space
- Whether there is an element of coercion involved in its deployment
- Whether individuals have the option to opt out of being surveilled by the system
- Whether the system's impact falls on a single individual or on multiple people
Applying these prompts, we find that many existing use cases of FRT do not meet the threshold required to be considered permissible. While certain exceptions, such as facial verification on phones, do meet this standard, the use of FRT by law enforcement, corporations, and infrastructure providers (such as schools and airports) falls short.
We therefore recommend a complete ban on the use of FRT systems deployed by either the state or private actors in public spaces. We further recommend a ban on FRT systems deployed on individuals in private spaces (by either state or private actors) without an option to opt out and without the consent of the individual being surveilled, or where the individual is subject to coercion that renders their consent meaningless.
