In 2018, Singapore announced its plans for nationwide monitoring with a project to embed facial recognition cameras in lampposts.
Around the same time, Malaysia was preparing to partner with China’s Yitu Technology to equip its police with an artificial intelligence (AI)-powered facial recognition system linked to a central database. The reason? To identify citizens in real time from body camera footage.
These aren’t isolated examples. A new breed of digital eyes is keeping watch over citizens around the world. Mass surveillance isn’t a new concept, but AI-powered systems have allowed governments to keep far more efficient tabs on the public. According to the 2019 AI Global Surveillance Index, at least 75 countries had already employed AI for surveillance in one form or another.

However, by the time we entered 2025, at least 69 other countries had proposed more than 1,000 AI-related legal frameworks and policy initiatives to address public concerns around AI governance and safety.
It’s a double-edged sword: the public might feel safer when public security agencies use AI, but the question everyone is now asking is, how far is too far?
The Good: AI-Powered Surveillance in Public Spaces
CCTV (closed-circuit television) cameras have been around since the 1940s, but they didn’t enter the public sphere until 1968, when Olean, New York, installed them for civilian purposes, becoming the first American city to do so.
Today, machine learning (ML) surveillance and security systems have moved well beyond military use, penetrating civilian life at an unprecedented pace and scale, and not all of it has been bad. Cities are exploring AI’s ability to predict crime by analysing surveillance data, improving safety and security.

AI has also helped build trust by creating and delivering innovative police services, connecting police forces with citizens, and strengthening ties with communities. According to studies, smart solutions such as video surveillance systems, smart cameras, facial recognition, and biometrics could help reduce crime by 30-40% while cutting emergency services’ response times by 20-35%. This extends to real-time crowd management, crime mapping, and even gunshot detection.
On top of that, big data analytics and ML now make it possible to sift through massive amounts of data on terrorism and crime and identify correlations, patterns, and trends.
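To make that concrete, here is a minimal, purely illustrative sketch of one such pattern-finding task: clustering the locations of reported incidents to surface spatial hotspots. The coordinates, the clustering parameters, and the notion of a “hotspot” here are all invented for the example, not taken from any real deployment.

```python
# Illustrative sketch only: grouping reported incident locations to surface
# spatial "hotspots", one of the simpler pattern-finding tasks described above.
# The incident list is made-up sample data, not a real dataset.
import numpy as np
from sklearn.cluster import DBSCAN

# (latitude, longitude) of hypothetical incident reports
incidents = np.array([
    [1.3521, 103.8198], [1.3525, 103.8201], [1.3518, 103.8195],  # dense cluster
    [1.3000, 103.8500], [1.3002, 103.8503],                      # second cluster
    [1.2800, 103.9000],                                          # isolated report
])

# eps is the neighbourhood radius in degrees (roughly 550 m here); min_samples
# sets how many nearby reports are needed before an area counts as a hotspot.
labels = DBSCAN(eps=0.005, min_samples=2).fit_predict(incidents)

for cluster_id in sorted(set(labels)):
    points = incidents[labels == cluster_id]
    if cluster_id == -1:
        print(f"{len(points)} isolated report(s): no pattern detected")
    else:
        print(f"Hotspot {cluster_id}: {len(points)} reports centred at {points.mean(axis=0)}")
```

Real systems work on far larger and richer datasets, but the underlying idea is the same: let the algorithm surface concentrations and correlations that would be tedious to find by hand.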
That’s not all: AI is also being employed at urban emission and tolling zones to reduce air pollution. Preventing health crises is another emerging application; Paris used AI during the pandemic to check whether passengers on public transport were wearing masks. The idea wasn’t to punish rule-breakers, but to help authorities anticipate future outbreaks. AI has also proved invaluable in overseeing adherence to operational protocols and hygiene standards, contributing to a safer and more efficient healthcare environment.
It would be too simplistic, though, to associate intelligent surveillance only with crime prevention and law enforcement. Its real value is far-reaching, spanning industries from logistics and transportation to retail, each with its own benefits. How to achieve these gains while respecting liberties and privacy remains the crucial question.

Concerns About Privacy and Civil Liberties
Remember the 2002 Tom Cruise movie “Minority Report”? It showed a society where the police deployed psychics to predict and prevent murders. Today, we have the likes of “Dejaview,” developed by South Korea’s Electronics and Telecommunications Research Institute (ETRI). It blends AI with real-time CCTV footage to discern anomalies and patterns, allowing it to anticipate incidents ranging from drug trafficking to petty offences with a sci-fi-esque accuracy rate of 82%.
Its “brain” is a massive predictive crime map (PCM) comprising more than 33,000 CCTV clips of crimes and other incidents captured in Seoul’s Seocho District between 2018 and 2021.
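For a rough sense of what anomaly detection over CCTV-derived data can look like in principle, consider the sketch below. This is emphatically not ETRI’s method: the per-clip features (people count, loitering time, hour of day) and every number in it are invented purely to illustrate the idea of learning a baseline of “normal” footage and flagging deviations from it.

```python
# Conceptual sketch, NOT ETRI's actual method: score simple per-clip features
# against a baseline learned from ordinary footage and flag outliers for review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline: invented features summarising 1,000 ordinary CCTV clips
# columns: [people_in_frame, seconds_loitering, hour_of_day]
normal_clips = np.column_stack([
    rng.poisson(5, 1000),        # typical pedestrian counts
    rng.exponential(20, 1000),   # typical loitering durations
    rng.integers(7, 23, 1000),   # mostly daytime and evening activity
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_clips)

# A new clip: one person loitering for ten minutes at 3 a.m.
new_clip = np.array([[1, 600, 3]])
print("flag for review" if model.predict(new_clip)[0] == -1 else "looks normal")
```

A production system obviously works on raw video and far richer behavioural signals, but the principle is the same: learn what normal looks like, then route the exceptions to a human for review.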

This is just one example, and ETRI’s invention could well become the reality citizens face in the future. It’s a two-sided challenge: pre-emptively preventing crime is an enticing prospect, but it raises obvious concerns about encroachment on civil liberties and privacy.
The use of AI is largely unregulated, and oversight of surveillance is equally thin. This introduces a new dimension of scrutiny in which the public might be under constant observation by an invisible agent, à la George Orwell’s 1984. It could trigger heightened self-consciousness and unease over even the most innocuous activities, such as taking a shortcut on the way home or using a cash machine.

We certainly cannot stop this blitzkrieg of technology, whose overarching aim is to make our lives easier. But a line needs to be drawn when it comes to citizens’ privacy, and it is up to all concerned parties, including technology makers, governments, and other authorities, to map out where that line lies.
Uncle Ben from Spider-Man famously said, “With great power comes great responsibility,” and it couldn’t fit the AI surveillance scenario better. Even when an application like Dejaview is categorised as “high-risk,” exceptions typically permit real-time biometric identification where it is needed to prevent threats to physical safety. With the right relationships and safeguards in place, AI is a layer that could help law enforcement agencies do their job and trigger behaviour change for the better.
The ultimate goal should be agile AI security systems that can detect suspicious activity and crime without threatening citizens’ privacy and civil liberties, contributing to a safer society and more effective justice systems.