In 2025, the US Immigration and Customs Enforcement (ICE) began deploying facial recognition in the field for the first time – a technology it had previously used only in investigatory settings.
That a government agency would treat facial recognition matches as definitive identifications in the field is grossly irresponsible. On the other end of the spectrum, Dominick Skinner, a Netherlands-based immigration activist, used facial recognition and artificial intelligence (AI) to reveal the identities of masked ICE agents.
The problem? The ICE agent who shot a US citizen was misidentified by various outlets, because AI-generated reconstructions of what he might look like unmasked resembled many different men.
While ICE's actions have been slammed and caused a public outcry worldwide, Skinner's campaign is equally troubling – not the action per se, but what it reveals about the consequences facial recognition technology (FRT) can have.

AI Facial Recognition – The Good, The Bad, And The Grey
In India's Maharashtra state, a predictive AI tool called MahaCrimeOS AI, designed with Microsoft's support, has been assisting the state police in investigations. The Delhi Police, too, has been planning to deploy AI-based facial recognition technology. As generative AI (GenAI) use takes off in India's law enforcement architecture, the appeal is immense.
For one, it can sift through digital evidence and CCTV feeds to flag suspects, link cases, and spot patterns in real time, seemingly helping agencies manage their ever-rising workloads. In fact, the Delhi Police is all set to analyse live CCTV feeds to flag vehicles, track missing persons, and identify suspects using FRT, predictive analytics, and automated number-plate recognition.
Such tools could become more pervasive – and their outputs treated as more conclusive than they are – posing challenges to the personal autonomy of citizens, especially in public spaces. These real-time analytics could lead to heightened surveillance of specific groups, wrongful suspicion, and unfair targeting, and even allow law enforcement to build detailed profiles of individuals.
Concerns also remain around the lack of clear laws governing AI decisions, as well as data quality, accuracy, and transparency.

Take the slowly dying paper passport and the rise of digital travel documents. While using smartphones and FRT to confirm identities against travel details might reduce "airport friction" and waiting times, the technology being deployed isn't very transparent, and it increases the chances of surveillance and data breaches.
Did you ever think that seeking freelance jobs on Upwork could possibly violate civil liberties? Recently, it came to light that Flock – a US company whose AI-powered automatic license plate reader cameras have become all-pervasive – employed overseas workers from Upwork, a freelance and gig work platform, to train its machine learning (ML) algorithms.
In fact, local police departments use Flock to investigate unlawful activities such as carjackings, and have even performed many lookups for ICE in the system. This raised serious questions about who exactly has access to the footage collected by Flock's surveillance cameras, which are put up practically everywhere across multiple neighbourhoods and communities.
What happens when facial recognition doesn't recognise your face as a face? Nearly 100 million people live with facial differences – from craniofacial conditions and birthmarks to surgical changes in facial structure. As identity verification software becomes commonplace, they increasingly struggle to participate in modern life, blocked from essential services and systems when facial verification fails.
Having borne stigma their entire lives, they are now forced to re-experience it as the technology locks them out of necessary financial and public services – with even phone-unlocking systems and social media filters failing them.

What’s Good – And What’s Not
We're just scratching the surface when it comes to AI facial recognition – or its reverse, AI facial reconstruction, for that matter. These tools aren't 100% reliable even when computer scientists run experiments under controlled testing conditions. After all, AI's job is to predict the likeliest outcome, not a certainty.
In fact, a study on forensic facial recognition tools found that when AI attempted to clarify and enhance photos of celebrities, the results often looked nothing like them.
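Under the hood, most facial recognition systems don't compare pictures directly; they convert each face into a numerical "embedding" and flag a match when two embeddings are similar enough, according to a threshold the deployer chooses. The sketch below is a toy illustration of that idea, with invented 4-dimensional embeddings and made-up names (real systems use deep networks producing hundreds of dimensions); it shows how a lookalike can sit just below – or, with a laxer threshold, above – the match cutoff.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, invented for illustration only.
probe = [0.9, 0.1, 0.3, 0.2]                # face captured from a CCTV frame
gallery = {
    "person_a": [0.88, 0.12, 0.28, 0.22],   # the actual person
    "person_b": [0.85, 0.20, 0.40, 0.10],   # an unrelated lookalike
    "person_c": [0.10, 0.90, 0.20, 0.70],   # clearly different face
}

THRESHOLD = 0.99  # operating point chosen by whoever deploys the system

matches = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
flagged = [name for name, score in matches.items() if score >= THRESHOLD]
```

With the strict 0.99 threshold, only `person_a` is flagged; lower it to 0.98 and the lookalike `person_b` gets flagged too – a false positive. That single tunable number is where much of the "match" vs "no match" risk the article describes actually lives.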
Chances are that Dominick Skinner – and many others using AI tools off the web – run faces through PimEyes, an off-the-shelf facial recognition service that has long been mired in controversy over its ethics, model training practices, and use for dangerous behaviour. Skinner's group has said it can reveal a face using AI as long as 35% or more of it is visible. This in itself is a dangerous practice, even before adding government, law enforcement, and state-sponsored surveillance to the mix.
Today, FRT stands at a crossroads: it could transform many aspects of society even as it continues to be used irresponsibly. The picture is stark, and we need to find a balance between innovation and accountability.
The responsibility might start largely with those who govern, but we as a collective need to ensure that FRT is developed and used in ways that build trust, promote justice, and protect people's rights – with transparency and accountability around discrimination, bias, surveillance, and privacy.