The Government of India can’t find a generative AI tool it can trust, as all of them are foreign, writes Satyen K. Bordoloi as he outlines five ways in which the government can still use them.
A senior secretary in the Defence Ministry repeatedly uses an LLM to research hypersonic missiles: their range, their capabilities, and the time one would take to reach a particular country. A foreign nation obtains a dump of just the prompts he types into the LLM. Analysing these prompts helps it figure out not only that India is trying to build a hypersonic missile of its own, but also which country it has in mind while doing so.
Another employee in the same ministry uploads a sensitive document into an LLM, thinking it doesn’t matter. Not only does the document and its contents now reside on foreign servers, but it could also become training data, allowing other users to glean insights from it in the future.
These might seem like innocent mistakes, yet espionage built on seemingly innocuous LLM queries can reveal state secrets. Thankfully, as an article in The Indian Express points out, the Government of India (GoI) understands this. Foreign-made generative AI models pose security risks via inference: foreign Gen AI platforms could analyse prompts from senior officials to piece together government priorities, policy timelines, and even institutional vulnerabilities.

Think of it as behavioural surveillance through prompt analysis: digital footprints that could reveal strategic patterns in government thinking. Add to that the issues of document confidentiality and data integrity, and the national security implications become massive.
This puts the GoI in a catch-22: it cannot avoid using LLMs either, as lacking AI capabilities could leave India at a competitive disadvantage. A middle ground does exist, and here I present practical tips for government personnel in sensitive positions to use foreign LLMs while minimising exposure. No approach can be foolproof, but these strategies offer pragmatic safeguards.

- Strictly Avoid Chinese LLMs: The most crucial rule any Indian official should keep in mind is never to use Chinese LLMs for government work. Powerful models like DeepSeek impress with capabilities at times superior even to US ones, yet for GoI employees the risks far outweigh the benefits. China is not just a strategic competitor and a nation we have gone to war with; it has, when it suits its interests, operated outside international norms.
And let us not forget the security apparatus surrounding Chinese technology companies, which is not only opaque but also directly tied to the state. This makes using them dangerous for any government official. American LLMs are the “lesser evil” among foreign options, yet even these demand careful consideration. Precaution, in service of a secure AI strategy, should be the default.

- Master the Art of Being Oblique: The human mind possesses remarkable complexity in understanding meaning and context, capabilities that even advanced LLMs have not fully mastered. Therefore, when queries are necessary, framing them obliquely rather than with specificity can help hide one’s tracks while using LLMs.
Let’s say a government official wants to research cybersecurity protocols for critical infrastructure. Instead of asking: “What are the most vulnerable points in India’s power grid system and how might they be protected?” – which directly reveals your focus, try a more generalised approach: “What are general best practices for securing distributed energy infrastructure in geographically diverse regions?” This way, you can obtain valuable information without disclosing specific national concerns or vulnerabilities.
This technique creates semantic distance, i.e., it maintains enough separation between your actual interests and how you frame your queries, protecting sensitive information while still harnessing the AI’s knowledge base for your purpose.

- Use Task Fragmentation to Reduce Pattern Recognition Risks: Though most people know only the major LLMs like ChatGPT, Gemini, or Grok, there are in truth hundreds worldwide, not to mention models specialised in specific tasks. AI excels at pattern recognition, but if you feed only fragments of the whole to any one system, it becomes tough, maybe even impossible, for it to figure out your intention. You do this by distributing your task across multiple LLMs.
For example, when working on a sensitive policy initiative, you could use one LLM for background research on international best practices, another for analysing potential economic impacts, a third for reviewing implementation challenges, and a fourth for communication strategy suggestions.
This fragmented approach means that no single LLM receives enough contiguous data to reconstruct your strategic intent or identify patterns that could potentially reveal confidential initiatives. While not foolproof, this makes it difficult for any platform to meaningfully infer your activities.
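The fragmentation idea above can be sketched as a simple router: split one initiative into neutral sub-queries and assign each to a different platform. This is a minimal illustration, not a real deployment; the provider names and prompts are hypothetical placeholders, not actual APIs.

```python
# Sketch: route fragments of one sensitive task to different LLM providers,
# so no single platform sees enough context to infer the overall intent.
# Provider names and prompts below are illustrative placeholders.

FRAGMENTS = {
    "background": "General best practices for large-scale public programmes",
    "economics": "Typical macroeconomic effects of infrastructure spending",
    "rollout": "Common implementation challenges for nationwide initiatives",
    "comms": "Communication strategies for public-sector announcements",
}

PROVIDERS = ["llm_a", "llm_b", "llm_c", "llm_d"]  # four separate platforms

def assign_fragments(fragments, providers):
    """Map each fragment to a provider, round-robin, so queries are spread out."""
    assignment = {}
    for i, (task, prompt) in enumerate(fragments.items()):
        assignment[task] = (providers[i % len(providers)], prompt)
    return assignment

plan = assign_fragments(FRAGMENTS, PROVIDERS)
for task, (provider, prompt) in plan.items():
    print(f"{task} -> {provider}")
```

With four fragments and four providers, each platform receives exactly one sub-query, which is the property the strategy relies on.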

- Establish a Strict No-Upload Policy for Sensitive Documents: This is sacrosanct – never upload sensitive documents to foreign LLMs. Once uploaded, documents potentially reside indefinitely on foreign servers and may become part of their training datasets, accessible to other users. This exposure of confidential information is irreversible.
In unavoidable situations where document analysis is necessary, employ structured obfuscation, i.e., replace specific names, designations, locations, and dates with generic placeholders before upload. For instance, change “Ministry of Defence 2025 procurement plan” to “large organisation annual acquisition strategy.” The fundamental content remains, but the specific context is obscured. The information would still exist somewhere in the system, yet without contextual markers it becomes practically impossible for others to connect it back to its original meaning or importance.
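Structured obfuscation of this kind can be partly automated with a substitution table. The sketch below assumes a hand-maintained mapping (the entries are illustrative); a real deployment would use a vetted, centrally managed table and ideally a human review step before anything leaves the building.

```python
import re

# Sketch: replace sensitive names, places, and dates with generic placeholders
# before any unavoidable upload. The mapping is illustrative only.
PLACEHOLDERS = {
    r"\bMinistry of Defence\b": "large organisation",
    r"\b2025 procurement plan\b": "annual acquisition strategy",
    r"\bNew Delhi\b": "headquarters city",
    # Dates like "14 March 2025" collapse to a generic marker.
    r"\b\d{1,2} (January|February|March|April|May|June|July|August|"
    r"September|October|November|December) \d{4}\b": "a recent date",
}

def obfuscate(text: str) -> str:
    """Apply each placeholder substitution in order."""
    for pattern, generic in PLACEHOLDERS.items():
        text = re.sub(pattern, generic, text)
    return text

sample = "Ministry of Defence 2025 procurement plan, reviewed in New Delhi on 14 March 2025."
print(obfuscate(sample))
# -> large organisation annual acquisition strategy, reviewed in headquarters city on a recent date.
```

A fixed table like this catches only what it anticipates; anything not listed passes through unchanged, which is why the manual check described above still matters.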

- Build Sovereign Capabilities via Indigenous AI Development: The most crucial long-term answer to this dilemma is to accelerate India’s domestic AI ecosystem through initiatives like the IndiaAI Mission. We have promising programmes like Make in India, Skill India, and Digital India, but their implementation often falters due to bureaucratic inefficiencies that have plagued us since independence.
AI is such a transformative leap for nations that missing its development could consign us to the backwaters of technological dependency. The government’s IndiaAI Mission, with its ₹10,300 crore allocation, aims to democratise access to computing, enhance data quality, and promote ethical AI adoption. Yet without the will to deploy with unprecedented efficiency, there’s a risk it will get buried in the “red tape trap” that has hindered past initiatives.

India’s AI goals stand at a strategic crossroads: a choice between technological dependence on others on the one hand and strategic autonomy on the other. The strategies outlined here provide a practical roadmap for government agencies to navigate current risks while we, as a nation, build our own capabilities in the field. The ultimate solution lies not in tiptoeing around foreign systems, hoping they don’t catch our drift, but in confidently embracing and championing our own AI revolution.
The same nation that democratised digital payments through UPI, and whose people now lead some of the world’s most successful AI companies as CEOs, possesses both the innovative spirit to overcome these challenges and the people to execute them. The only ask is for the government to encourage them. And as its own recent catch-22 shows, it is ultimately in its self-interest to do so.