With terrorists and drug cartels having no qualms about weaponizing AI, the rest of the world will not be given a choice, says Nigel Pereira


It came as a shock to many last week when headlines reported that San Francisco's Board of Supervisors had voted 8-3 to allow the police to use robots that can kill people. However, fierce pushback and demonstrations from activists, along with warnings about weaponizing AI from experts around the globe, have forced the board to take a step back, at least for now. In an almost unprecedented backtrack in Bay Area politics, where a second vote is usually a formality that mirrors the first, the Board of Supervisors has banned the use of such robots for now and sent the issue back to a committee for further discussion.

The argument for L.A.W

To put things into perspective, the United States has seen over 600 mass shootings in 2022 alone. With gun ownership legal and protected by the Second Amendment of the Constitution, there is not a lot that police officers can do to curb the violence without endangering their own lives. While "active shooter" drills have become commonplace in schools and even kindergartens in the US, many argue that Lethal Autonomous Weapons (L.A.W) could be the hi-tech solution to a very serious problem. This was evident in 2016, when the Dallas Police used a bomb disposal robot armed with an explosive to kill an active shooter who had already killed five police officers.

The San Francisco Police Department currently has 12 robots used for reconnaissance, investigation, and bomb disposal. While none of them are armed, the police wanted the option to arm them with explosives as a last resort. The proposal clearly stated that the police could only use weaponized robots "when risk of loss of life to members of the public or officers is imminent and officers cannot subdue the threat after using alternative force options or de-escalation tactics." However, many feel that such policies are notorious for loopholes and "interpretations," and that the weaponization of AI could be the beginning of a slippery slope.

What are Slaughterbots?

Black Quadcopter (Image Credit: PickPik)

The term "Slaughterbots" was coined in a viral 2017 arms-control advocacy video depicting a future in which killer micro-robots armed with explosives, AI, and facial recognition are used by humans for everything from revenge to political gain. Unlike UCAVs, or unmanned combat aerial vehicles, which have varying levels of autonomy but for the most part remain under human control, Slaughterbots are completely autonomous. Even with UCAVs like the infamous Predator drones, which the US has used on countless occasions to take out high-priority targets, the final decision to end a life is still made by a human.

Slaughterbots, or Lethal Autonomous Weapons, use AI algorithms to identify, select, and kill human targets without human intervention. To be fair, that's not what the San Francisco Police were asking for; however, that didn't stop protesters from holding up signs that read, "We all saw that movie… No Killer Robots." That said, given the current situation in the world, it's not hard to imagine a natural progression from semi-autonomous to fully autonomous weapons. In Ukraine, in particular, UCAVs and armed drones are permanently changing the way wars are fought on the battlefield.

The consequences of weaponized AI

The Matrix TV shot (Image Credit: Flickr)

There have been numerous movies and shows about AI taking over the planet, and while many of us dismiss it all as science fiction, several groups are working constantly to ensure it doesn't become a reality. In addition to stopkillerrobots.org and autonomousweapons.org, both the United Nations and the International Committee of the Red Cross (ICRC) oppose the use of Slaughterbots, or LAWs. One of the main arguments on the ICRC's website against the use of killer robots is that while AI may excel at facial recognition and targeting, it can never fully comprehend the value of human life.

Another interesting argument, made in a YouTube video by Vice News, is that military missions are planned more carefully when the attackers have "skin in the game," or "boots on the ground." In other words, extra precautions are taken during military operations when friendly lives are at risk, and removing that risk from the equation would make attacks less cautious. The most compelling argument, however, is that AI still makes mistakes and could confuse someone of a similar race or gender with an actual target, as depicted in the viral Slaughterbots video.

A history of violence

History has been written by people who invented new weapons to defeat or subjugate those around them, and while it's encouraging that so many are lobbying against the use of Slaughterbots, such opposition has never stopped us in the past. From Genghis Khan's recurved bows to America's nuclear warheads, technology has been, and always will be, used as a weapon.

Autonomous drones were reportedly already used in Syria in 2021, Russia claims to have built a robot tank that outperforms human crews, and America has an AI pilot that outperformed a USAF colonel in a simulator. Additionally, with terrorists and drug cartels having no qualms about weaponizing AI, the rest of the world will not be given a choice.


With a background in Linux system administration, Nigel Pereira began his career with Symantec Antivirus Tech Support. He has now been a technology journalist for over 6 years and his interests lie in Cloud Computing, DevOps, AI, and enterprise technologies.
