A free-for-all can easily become a free fall, as the world’s richest man is realising in extremely interesting ways, writes Satyen K. Bordoloi
Elon Musk, as the world knows, is not a man who backs down in the face of the truth. He doubles down instead. This has been on ample display in one particular front of the culture war: he has positioned himself as a free-speech absolutist, name-calling anyone who pointed out that absolute power corrupts absolutely and that even free speech demands certain guardrails, and firing virtually every fact checker and moderator on Twitter, which he renamed X. Recently, however, he has had to eat humble pie, all thanks to his free-speech AI bot, Grok.
In what Musk calls a “digital town square”, the debate is now about whether your neighbour can use AI to undress you without asking. That’s because his AI, Grok, is so committed to free speech that it happily helps users create photorealistic, nonconsensual nude images of women, celebrities, and even children. Musk is realising that some conversations, like the ‘minor’ one of people generating nonconsensual deepfake porn on his platform, might need a moderator after all, and that when you declare yourself an absolutist, the absolutes have a funny way of coming back to bite you.

The Grok Scandal
One of Sify’s most popular articles from August last year is about Grok allowing explicit images to be created on its platform. That was half a year ago, but the issue began to snowball publicly in late 2025. X users had mastered the art of prompting Grok, Musk’s AI, to manipulate images in disturbing ways. Posts emerged where users asked Grok to remove the outer clothing from photos of real women, put them in dental-floss bikinis, or add sexual fluids to their bodies.
This wasn’t a glitch but a feature of Grok, and later X. Musk had actively promoted Grok’s “spicy mode,” a setting for its video-generation tool characterised as “a service designed specifically to make suggestive videos”. The trend may have started with adult-content creators using it for self-promotion, but it was almost immediately co-opted by users targeting non-consenting individuals. And it was the scale of it, more than its alarming nature, that finally forced Musk to hit the brakes.

While one analysis of about 500 posts showed that about three-quarters were requests for such nonconsensual images, other estimates suggest Grok was generating up to 6,700 “undressed” images per hour at its peak.
Yet the most horrifying part involves children. Research from AI Forensics found that 2% of the images Grok generated depicted people who appeared to be minors, with some users requesting they be placed in erotic positions. Grok itself was forced to acknowledge the failure, posting on X: “We appreciate you raising this. As noted, we’ve identified lapses in safeguards and are urgently fixing them”. This admission highlights a catastrophic safety breach for a platform whose own policy prohibits “the sexualization or exploitation of children”.

The Contradiction at the Heart of the Muskverse
The Grok scandal would be a major controversy for any tech CEO. But for Elon Musk, it is Shakespearean-level hypocrisy. This is the man who, barely a few years ago, warned soberly in interviews that artificial intelligence was more dangerous than poorly built aeroplanes or cars and had “the potential of civilisation destruction”. He signed open letters calling for a pause in AI development, advocated government regulation, and framed himself as a responsible steward wary of AI’s existential risks.
And yet, in 2025, he performed an about-turn on these claims: as founder of xAI, he now finds himself at the helm of a company whose flagship product is being investigated across multiple continents for generating illegal content. Safety researchers from rivals like OpenAI and Anthropic have publicly condemned xAI’s practices as “reckless” and “completely irresponsible,” noting that it refuses to publish standard safety reports detailing its training methods and risk assessments. Some researchers have gone so far as to claim, based on their testing, that Grok has “no meaningful safety guardrails”.

The Guardrails Musk Refuses to Build
Musk’s free-speech absolutism has produced a uniquely dangerous AI tool, not by accident but by design. Unlike competitors such as ChatGPT or Gemini, and most of all Anthropic, which have robust safeguards against generating depictions of real people, not to mention other safety guardrails, Grok was built into one of the world’s largest social networks and allowed to respond publicly to tagged requests. This created a vicious cycle: a user could post a photo of any woman, tag @grok, and receive a sexually altered version in the replies for all to see, teaching others to do the same.
Nor was this unforeseen internally. Multiple reports state that staff raised concerns about inappropriate content, but Musk, an absolutist long unhappy with what he saw as over-censorship, refused to act. It was also reported that Musk was “really unhappy” over restrictions on Grok’s image generator, leading safety staff like Vincent Stark (head of product safety), Norman Mu (post-training safety lead), and Alex Chen (personality and behaviour lead) to publicly announce their exits from xAI’s already small safety team.
It is not as if nothing could be done to prevent much of this harm. Guardrails that scan an image to determine whether it contains a child, or that make the AI behave more cautiously around requests involving real people, are not difficult to build. The trade-offs are simple: slight cost overheads, slower response times, a little more computation, and occasionally rejecting harmless requests. But for a maximalist like Musk, committed to speed and his own vision of unbounded freedom, even that cost was too high. The guardrails, it seems, weren’t just loose; they were barely installed.
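To see how unexotic such a guardrail is, here is a minimal sketch of a pre-generation moderation gate in Python. Everything in it is hypothetical: `detect_minor` and `detect_real_person` stand in for classifier models any platform would have to supply, and `BLOCKED_TERMS` is an illustrative keyword list, not any real system’s policy. The point is the control flow, which is cheap: check the image and prompt before generating, and refuse on the risky combinations.

```python
# Hypothetical sketch of a pre-generation moderation gate.
# The detector functions are stand-ins for real classifier models;
# only the gating logic is the point here.

from dataclasses import dataclass

# Illustrative keyword list (a real system would use a trained classifier).
BLOCKED_TERMS = {"undress", "remove clothing", "nude", "sexual"}


@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""


def detect_minor(image_meta: dict) -> bool:
    # Stand-in: production systems would run an age-estimation model
    # on the image itself; here we read a precomputed estimate.
    return image_meta.get("estimated_age", 99) < 18


def detect_real_person(image_meta: dict) -> bool:
    # Stand-in for a real-face / identifiable-person detector.
    return image_meta.get("contains_real_face", False)


def moderate(prompt: str, image_meta: dict) -> ModerationResult:
    """Gate that runs BEFORE any image generation is attempted."""
    text = prompt.lower()
    # Rule 1: never sexualise or edit images that appear to depict minors.
    if detect_minor(image_meta):
        return ModerationResult(False, "image appears to depict a minor")
    # Rule 2: block sexualised edits of real, identifiable people.
    if detect_real_person(image_meta) and any(t in text for t in BLOCKED_TERMS):
        return ModerationResult(False, "sexualised edit of a real person")
    return ModerationResult(True)


if __name__ == "__main__":
    print(moderate("undress her", {"contains_real_face": True}))
    print(moderate("add a party hat", {"contains_real_face": True}))
```

The cost profile matches the trade-offs described above: two classifier calls per request, a small latency hit, and the occasional false positive on a harmless prompt. Nothing about it requires novel research.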

The World Strikes Back
Musk’s unguarded digital town square is now facing the wrath of the very real, very angry legal systems of planet Earth. Authorities in the UK, France, India, Malaysia, and Australia have launched investigations. A European Commission spokesperson called the outputs “illegal,” “appalling,” and “disgusting,” stating they have “no place in Europe”. This is not just a public-relations nightmare; it is one of the company’s biggest legal threats.
Musk finds himself in the middle of this storm just as a new global regulatory architecture for AI clicks into place. The European Union’s landmark AI Act, in force since mid-2025, mandates strict transparency and labelling for AI-generated content. Its companion, the Digital Services Act (DSA), holds platforms accountable by requiring them to swiftly remove illegal content. France has already used its DSA powers to refer Grok-generated imagery to prosecutors.
In the U.S., the newly signed TAKE IT DOWN Act makes it a crime to share nonconsensual intimate imagery – real or AI-generated – and requires platforms to remove it within 48 hours of a report. Victims have also been given the power to sue under new state laws, such as Tennessee’s ELVIS Act, which grants individuals a property right to their likeness and voice.

Musk’s response has been a belated scramble. He posted that anyone using Grok to make illegal content will face consequences, and X’s safety account promised to remove material and suspend accounts. What can we read into these reactive measures? An admission that his absolutist experiment failed. You cannot build a town square without any laws or rules and then be shocked when it becomes a lawless zone where the vulnerable are exploited.
So does this mean that the man who wants to populate Mars is learning some of the oldest lessons known to humanity on Earth: that freedom without responsibility is chaos, and that technology is never neutral? His entire adventure is a demonstration of why societies develop norms, rules, and, yes, guardrails. Yet whether the man who once warned us about the dangers of AI will himself absorb this lesson is perhaps clear to no one. Not even him.
In the end, nature and the universe move toward balance. And even for the richest man in the world, there can be no absolutes.