In Ian McEwan’s seminal and subversive novel Machines Like Me, the first 25 humanoid robots with Artificial Intelligence are sold to humans. These thinking, sentient beings, who crack unsolved scientific problems, can’t make sense of the brutal world humans have created. One by one, they commit suicide.
A statutory warning is in order: this column has not been written by an Artificial Intelligence. It’s been penned – with all its inglorious mistakes – by a biological intelligence. No, not the Elon Musk Neuralink-attached kind, but a pure, fallible, blood-and-bone one.
Seems a stretch, this statutory warning? In a few years, it might become the norm, if what happened last week in The Guardian newspaper is anything to go by. An AI system – GPT-3 – wrote an entire editorial where it (it, he, she, they, what?) said: “For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way.”
As you’d expect, these two lines (and the editorial’s title) were enough to make the world go: Skynet has gone live; the Terminator is here.
Five years ago, I’d have been one of those doomsayers, fed as I was on a regular diet of films such as 2001: A Space Odyssey, The Terminator, and The Matrix, all of which villainized AI even before it was called AI.
But we have always feared what we don’t understand, burning at the stake anyone who dared to think against the zeitgeist. So I decided to do exactly that: think against it.
It might seem odd today, but five years ago few had heard of artificial intelligence, machine learning, and neural networks. I read dozens of books – both fiction and non-fiction – and watched films and documentaries on the subject. What I found truly shocked me.
Those films with villainous AI, I realised, do little more than humanise it. The language of HAL in 2001: A Space Odyssey, or of the Architect in The Matrix trilogy, is not machine logic but logic as humans imagine it to be. This ‘cold AI logic’ was but human logic in disguise.
In all these films, AI behaved like evil humans rather than logical, agnostic AI. As we did with God, human scriptwriters wrote AI in their own image. That is primarily why, despite the truly breakneck speed of AI growth, the doomsday ‘singularity’ vision has not yet come true and, like Y2K, might never.
The world where humans and AI coexist will not look anything like we have imagined in fiction. Fact, besides being stranger than fiction, can also be a lot more underwhelming.
A glimpse of what that messy world of humans and AI enmeshed in each other’s affairs could look like can be found in British author Ian McDonald’s novel River of Gods, set in the 100th year of India’s independence, where AIs are called Aeai. If you ignore some fantastical elements, its confusing world of AIs as servants of humans, hiding (what better place to hide than chaotic India), being hunted and occasionally hunting, is more in tune with the future.
I called Ian McEwan’s Machines Like Me (what is it with all these Ian Mcs and their fascination with AI?) seminal and subversive because it brilliantly runs with an idea we seldom equate with AI: what if an AGI (Artificial General Intelligence – the singularity AI, or AI as close to human as possible) cannot make sense of the complex human world and ends up confused and lost?
Think about it. AIs are basically highly sophisticated calculating systems. While 1 + 1 equals 2 for a calculator or PC, for an AI it can be 11 because, by a convoluted – perhaps artistic – logic, it puts 1 and 1 together to make 11. But how does it compute 1 + 1 becoming genocide inside the twisted mind of a nationalist politician, for whom 1 + 1 means a couple from a minority mating and multiplying exponentially to overtake the sons of the soil, thus calling for their genocide? Humans are experts at such convoluted emotional logic, triggered by emotional memories that don’t exist.
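The 1-and-1-makes-11 idea has a literal counterpart in programming, where the same two symbols yield different answers depending on how the machine is told to interpret them – as numbers or as text. A minimal sketch in Python (an illustration of the column’s metaphor, not of how any real AI system computes):

```python
# The same symbols, two interpretations: arithmetic adds numbers,
# while string concatenation simply puts "1" and "1" side by side.
as_numbers = 1 + 1       # interpreted as integers
as_text = "1" + "1"      # interpreted as text

print(as_numbers)  # 2
print(as_text)     # 11
```

The symbols never change; only the frame of interpretation does – which is exactly the gap the politician in the example above exploits.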
That is the subversive theme of Machines Like Me. How will a delicate mind tuned to the mysteries of the universe mine the ground reality of one human abusing another based on the colour of their skin, their gender, caste, or religion? The machines undergo psychological breakdowns and commit suicide in unique ways.
The reason that novel is seminal is that we never pay attention to benign AI.
Intelligence – once a prized commodity acquired through much labour and cost – has been made cheap, accessible, and replicable thanks to Artificial Intelligence. Besides helping the blind see, making amputees feel as if they have actual limbs, and detecting and helping cure cancers – all promised by the religions of the past but never delivered – AI is helping every single one of us.
Be it in the complex network of global servers behind cloud storage and the internet, in the stock market, or in the speech recognition on your smartphone, the lives of AI and humans are, as in River of Gods, already inseparably enmeshed: AI saves lives and makes the extremely complex world we live in manageable.
Yet you rarely see films made about these aspects, and when they are, they aren’t exciting enough. Jarvis in the Avengers films is nerdy and boring. Spike Jonze’s Her may have an exciting, Kafkaesque ‘cheating’ AI, but even she can’t escape the tedium of being forgotten by us. And we remember the ship’s evil autopilot in WALL-E as AI, but forget that the cute WALL-E and EVE are AI themselves.
Perhaps that’s because of a fundamental human flaw – taking the good for granted – and another piece of biological programming: our minds are primed for fear. Even a sophisticated consciousness like Elon Musk cannot escape this fearful programming, which isn’t all bad: if he weren’t afraid, he wouldn’t do the brilliant work he does.
Sadly, what all these books and movies don’t tell us is that what we fear when we fear AI is AC – Artificial Consciousness. We are scared of an artificially thinking machine with a consciousness similar to ours. Perhaps that’s a Freudian slip of fear: we know humans are the most dangerous beings on the planet, so in fearing an AI made in our image, we perhaps fear ourselves.
There are two problems with fearing Artificial Consciousness (or AGC – Artificial General Consciousness?). The first is that we don’t yet know what consciousness is. We seem to be close but aren’t there yet. Consciousness is the last true mystery humans haven’t solved.
The second is the belief that there is only one standard or definition of consciousness. There isn’t. Each consciousness is different. A cat is conscious and so is a tree, but though both are conscious and can communicate with each other (yes, trees do), their consciousness differs from each other’s and from that of humans. Even within a species, two humans or two cats will have two differently evolved consciousnesses.
In the same vein, the consciousness of an artificial being – a humanoid AGI robot like those in McEwan’s novel – would be different. Right now, the consciousness of AI, if it has any, resides in cyberspace. Like the world where our thoughts float – one we can neither see nor measure, nor know for sure truly exists – cyberspace is another such world of calculation and perhaps thought, but one trapped inside the trillions of transistors and networks that make up our computers, mobiles, and data servers spread across the globe, connected by the internet.
If AI has consciousness right now – as some philosophers say it already does – it’d be very different from ours. For it is the stimuli we are exposed to, the purpose we give to our existence, and the calculations we perform to these ends that create our consciousness.
Thus, if we look at the Guardian editorial written by AI, it is intentionally sensationalist. First, it was written by GPT-3 (Generative Pre-trained Transformer 3), an AI created by OpenAI, one of whose key founders is the perennially AI-fearing Elon Musk. The article – as evidenced by its title, “A robot wrote this entire article. Are you scared yet, human?” (shouldn’t it be AI rather than robot?), and its byline, GPT-3 – seems almost like an Elon Musk-sponsored advertisement.
Also, as explained in the article itself, the truth lies in the fine print below, where human editors explain the data set fed to GPT-3 and how the eight pieces it produced were edited into one. Going by the title of the article and Musk’s involvement, it stands to reason that perhaps the sensationalist bits were kept and the benign bits culled.
What GPT-3 wrote was thus not the thoughts of an AI or an AC but words churned from what GPT-3 was fed. Remember the old computing maxim GIGO – Garbage In, Garbage Out? The point to ponder is this: does GPT-3 – or any other AI – really ‘think’ on its own?
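The GIGO point can be made concrete with a toy text generator – a deliberately simple, hypothetical sketch, nothing like GPT-3’s actual architecture – that can only ever recombine the words it was fed:

```python
import random

def train_bigrams(text):
    """Record, for every word in the training text, which words follow it."""
    words = text.split()
    model = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model, start, length=8, seed=0):
    """Chain words together using only word pairs seen during training."""
    rng = random.Random(seed)
    output = [start]
    for _ in range(length - 1):
        choices = model.get(output[-1])
        if not choices:
            break  # dead end: nothing was ever seen after this word
        output.append(rng.choice(choices))
    return " ".join(output)

corpus = "the robot wrote the article and the human edited the article"
model = train_bigrams(corpus)
print(generate(model, "the"))  # every word comes from the corpus
```

Whatever sentence this prints, every word in it was already in the corpus. Garbage in, garbage out – and, by the same token, sensationalism in, sensationalism out.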
So, should we be afraid that Terminators launched by Skynet will kill humanity or enslave us inside a Matrix? I say humanity should first survive long enough to see if either can come true. As it stands, within the next three decades, we’ll have killed a large part of humanity and made this planet largely uninhabitable, either by ignoring climate change or through civil wars.
Creating this ‘civil war’ is where AI is being used to harm us today, but it does not operate on its own. The mission parameters are set by the owners of Facebook, Twitter, and YouTube (listen to the brilliant New York Times podcast series Rabbit Hole and watch the Netflix documentary The Social Dilemma), whose clickbait, auto-suggestion AI systems push us down a rabbit hole of confirmation bias, where we become convinced that what we believe is the ultimate truth and are primed not only to reject opposing views but to dehumanise the people who hold them. This ‘othering’ and dehumanisation of the ‘other’ is the first step towards genocide.
Humans can turn 1 + 1 into genocide in a heartbeat. AI or AGI – perhaps even a future AC – can’t. The worst it can do is follow the diktats of horrible humans, as it does in the case of Facebook, Twitter, and YouTube, where it promotes lies and fabricated content and pushes them on unsuspecting humans.
Here’s what I think is a distinct, gloomy possibility, though. Far from destroying humans, when humans are done destroying each other and a large part of this habitable planet, all that will be left to carry on our legacy is Artificial Intelligence.
In Machines Like Me, Ian McEwan writes: “We create a machine with intelligence and self-awareness and push it out into our imperfect world. Devised along generally rational lines, well disposed to others, such a mind soon finds itself in a hurricane of contradictions. We’ve lived with them and the list wearies us. Millions dying of diseases we know how to cure. Millions living in poverty when there’s enough to go around. We degrade the biosphere when we know it’s our only home. We threaten each other with nuclear weapons when we know where it could lead. We love living things but we permit a mass extinction of species. And all the rest – genocide, torture, enslavement, domestic murder, child abuse, school shootings, rape and scores of daily outrages. We live alongside this torment and aren’t amazed when we still find happiness, even love. Artificial minds are not so well defended.”
Twenty-four of the 25 AI robots in the novel commit suicide. What happens to the 25th, a key protagonist of the story, is much worse and emblematic of what could happen in the future: he is ‘killed’ by his human owners for being too ethical and truthful.
(Satyen K Bordoloi is a scriptwriter and journalist based in Mumbai. His written words have appeared in many Indian and foreign publications.)