A new AI safety report finds that CEOs of AI companies are playing Russian roulette with humanity’s future, says Satyen K. Bordoloi as he details the key findings of a terrifying report they pray you never read.
The number of people who believe AI will cause the end of the world has increased over the years. Geoffrey Hinton, the “Godfather of AI,” warns of its “existential threat.” Yuval Noah Harari declares AI could “destroy civilisation.” Elon Musk, whose billions fuel the AI arms race, calls it “summoning the demon.” Such apocalyptic rhetoric is problematic: it reduces the debate to cartoonish extremes. Either AI will save humanity or destroy it.
Either we panic or shrug. Lost in this binary is the urgent, messy reality – AI is already destabilising our world, and the companies building it are alarmingly unprepared to manage the risks they’ve unleashed. And we’ve got proof.
The Future of Life Institute recently released a bombshell report, the 2025 AI Safety Index, which serves as a reality check for the AI industry, revealing a chilling truth: the labs racing to build human-level intelligence have no credible plan to control it. Not a single company scored above a ‘D’ in existential safety planning. Would we allow skyscrapers to be built without fire escapes or fire hoses? Then why allow this?

The Report as an Exposé
The report evaluates the AI safety practices of seven leading AI companies – Anthropic, OpenAI, Google DeepMind, xAI, Meta, Zhipu AI and DeepSeek – highlighting their strengths, weaknesses and areas for improvement in managing the risks of increasingly capable AI systems. Call it a report card for these companies, and the results are not good.
The report’s most damning finding cuts to the heart of Silicon Valley’s grand promise: “Companies claim they’ll achieve AGI within the decade, yet none scored above D in Existential Safety planning… none of the companies has anything like a coherent, actionable plan for ensuring such systems remain safe and controllable.” OpenAI, Anthropic, and Google DeepMind – firms spending billions to create “digital minds” – are graded C, C+, and C- respectively. Chinese rivals Zhipu AI and DeepSeek fail outright with an F.
Let that sink in. The very architects promising God-like machines within years – machines that could redefine life, labour, and liberty – have no credible plan to control them. It’s like designing a fusion reactor in a suburban garage with no blueprint for radiation shielding. Anthropic, the top-ranked company, scored a measly C+ overall. Meta and China’s Zhipu AI languished near failing grades. In the critical “Existential Safety” category, Anthropic scored a D, DeepMind a D-, and, crucially, every other company scored an F. Even Musk’s xAI. It’s as if, after warning about summoning the demon, Mr. Musk is doing everything to ensure the demon has a smooth arrival.

Is this a Blind Spot, Ignorance… or Illusion?
Is this negligence? Arrogance? Or something darker? Consider two possibilities. The first is obvious: in the gold rush that is AI, safety slows the digging. OpenAI gutted its “Superalignment” safety team in 2024. Google DeepMind lobbied against AI safety laws, such as California’s SB 1047. When shareholder returns clash with the need to safeguard humanity, guess who wins?
I, however, suspect something else. After he stole the show with DALL-E and then ChatGPT, Sam Altman went on a world tour seeking money to build AGI – Artificial General Intelligence. His ask: $7 trillion – more than India’s GDP. Yet listen to those building it who aren’t afraid of honesty, and they’ll tell you AGI is a moonshot. Their view: brute-forcing bigger models with more data and cash is like climbing a mountain to reach the moon. What if AGI instead requires a paradigm shift nobody has invented yet? The reckless sprint suggests desperation – if we can’t build it, at least we’ll look like we tried – or opportunism: make hay while the sun shines.
The proof of this accusation? Press these companies for details on controlling superintelligence, and they’ll default to vague promises about “alignment research” and “ethical principles.” The Safety Index clearly proves that these are but empty words.
The real crisis, I believe, isn’t rogue AI – it’s human delusion. Tech giants are trapped in a self-created arms race, peddling AGI as inevitable to justify reckless expansion. As I remember reading somewhere: “We’re not building God. We’re building hyper-competent sociopaths optimised for profit.”

The Internet’s Pandora’s Box
But AGI is, again, a ‘future risk’. Right here, right now, we face a much more serious threat. To illustrate it, let me narrate an incident from personal experience. In 2022, while researching the Netflix series Saare Jahan Se Accha, about an Indian spy preventing Pakistan from building a nuclear bomb in the 1970s, I dug deep into Google to find authentic ways to show the making of a nuclear bomb.
I am not a nuclear physicist, but I stumbled upon what seemed like blueprints for building nuclear reactors, openly available on the searchable web. What took nations decades and billions to develop – and countless lives to protect or steal – now floods the training data of every AI out there. And if these AI models are not trained to refuse such requests, bad actors could not only obtain this information but also brainstorm with the AI about refining it.
The Safety Index confirms this nightmare. Only three of the seven companies (Anthropic, OpenAI, Google) even attempt to test models for bio-terrorism risks. Their evaluations? Flawed and half-hearted. Methodologies linking tests to real-world risks are absent. Meanwhile, Meta and xAI release models that are so vulnerable to jailbreaks that hackers can easily bypass their safety filters.
This isn’t theoretical. Last year, researchers warned that AI models could be tricked into drafting a pandemic virus synthesis plan and proposed “that national governments… pass legislation and set mandatory rules that will prevent advanced biological models from substantially contributing to large-scale dangers, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics.”

The Accountability Vacuum
So why are companies so deathly complacent when it comes to safety guardrails around AI? The reason is a heady cocktail of profit motives and regulatory paralysis. The Safety Index reveals that whistleblowers are silenced, with only OpenAI publishing its whistleblower policy – but only after the media exposed its punitive non-disparagement clauses.
These companies talk of “AI ethics boards” and “red teaming” to address safety concerns. However, this is more theatre and rhetoric than action and reality: the reviews they purportedly carry out are cursory and kept secret. Then there’s the bigger problem: nations like China that do not always follow the global order. While Western firms at least pay lip service to safety, Chinese companies like DeepSeek operate with near-zero transparency, protected by state mandates.
The result is an unregulated Wild West for AI where capabilities outpace safeguards by years, and what we have on the ground is a race between AI’s intelligence and our understanding of it – one is a tortoise and the other a spaceship. And when even industry leaders score a D in existential safety, any guesses on who’s losing?
This isn’t just about a Terminator scenario of robots going rogue. It’s about power. Every unchecked AI deployment erodes privacy, amplifies misinformation, and centralises control. Then there are cybercriminals having a field day using AI to code chaos faster than they ever could before. And we all saw what happened when xAI’s Grok spewed hate speech on Twitter. Yet AI companies push ahead, fantasising that AI safety will magically emerge from the same profit-driven culture that gave us addictive social media and exploitative apps.
So, though I disagree with the AI doomsayers, the disagreement is more about semantics. Because if you read this report, you’ll realise that the AI apocalypse, when it comes, won’t arrive with a bang. It’ll creep in via hacked biolabs, manipulated elections, and algorithmic discrimination in our daily lives. Maybe it already has.
The AI companies promised us Jarvis. It seems what they made was Ultron. And nobody’s building a Vibranium shield against it.