NEW DELHI — In a significant move to regulate artificial intelligence, India’s government unveiled new AI governance guidelines in November 2025, aimed at making apps and digital services safer for everyday users. The framework promises clearer transparency, stronger protections against fake videos, and better safeguards for personal data.
What’s Changing for You
Under the new framework, companies using AI in their apps must be more transparent about how their algorithms work. When an app recommends a video, product, or news story, the company must explain why—whether the recommendation is based on your search history, location, or browsing habits.
The government also mandated watermarking for AI-generated videos. This means deepfakes and fake celebrity videos will be digitally labeled as “AI-generated,” helping users identify manipulated content before sharing it.
Additionally, if you face unfair treatment from an AI system or your data is misused, you can now file a complaint with the company involved. Importantly, companies must handle these complaints in Indian languages—including Hindi, Tamil, Telugu, and Marathi—not just English.
The Real Picture
However, experts caution that these are guidelines, not binding legal requirements. Companies are expected to follow them voluntarily, but enforcement remains unclear. The actual legal power comes from the existing Digital Personal Data Protection (DPDP) Act, 2023.
Tech policy experts have described the framework as “a balanced approach prioritizing innovation,” though critics argue it gives companies too much flexibility.
What’s Still Pending
Women’s groups have raised concerns about inadequate protections against non-consensual AI-generated content. Deepfake watermarking technology remains imperfect, and a comprehensive AI law is still being drafted.
The Bottom Line
While India’s new AI rules mark progress toward safer digital experiences, real change depends on consistent government enforcement and genuine company compliance. Users are advised to remain cautious online and verify information before sharing.