India is taking a big swing at stopping fake news and AI-generated deepfake videos from messing things up online. The government is now telling tech giants like OpenAI, Meta, Google, and X (yeah, the platform formerly known as Twitter) that they need to let users know when something is made by AI, not by humans.
Picture this: anything made by AI—videos, images, even audio—has to come with a super-visible label. Not just a dinky little watermark tucked in the corner, either. The label has to cover at least 10% of the video or image, or show up during the first 10% of any audio clip. No sneaky AI edits without calling them out.
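To make the 10% rule concrete, here's a minimal sketch of what the math looks like. The function names and the exact way of measuring "10%" (pixel area for visuals, playback time for audio) are illustrative assumptions, not anything from the draft rules themselves:

```python
def min_label_area(width_px: int, height_px: int) -> int:
    """Smallest label size in pixels, assuming '10% of the visual'
    means 10% of total pixel area (an assumption for illustration).
    Uses integer ceiling division so we never round below 10%."""
    return -(-(width_px * height_px) // 10)

def audio_label_window(duration_s: float) -> float:
    """How early the audio disclosure must finish, assuming the label
    has to play within the first 10% of the clip's duration."""
    return 0.10 * duration_s

# A 1920x1080 frame: the label must cover at least 207360 pixels.
print(min_label_area(1920, 1080))   # 207360
# A 60-second clip: the disclosure belongs in roughly the first 6 seconds.
print(audio_label_window(60.0))
```

The point of the ceiling division is just caution: if the area isn't evenly divisible by ten, rounding down would technically put the label under the 10% threshold.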
And it’s not just about slapping a label on things. The social networks themselves have to collect a yes-or-no declaration from everyone uploading content—is what they’re sharing AI-generated or not? Plus, they’ll need to run automated checks on their platforms to make sure folks aren’t dodging the rules.
Officials say these new requirements are meant to keep things honest online, especially as India serves a massive, diverse population that relies on the internet. Fake information could inflame tensions, especially around elections, and deepfakes have already got people worried.
The Indian government’s move follows what’s happening in places like the EU and China, and they want everyone—companies and regular people—to pitch in their ideas before the rules go final in November. The goal? Make it blatantly obvious which content is cooked up by computers, all while keeping things transparent with metadata every step of the way.
So, next time you scroll past a wild video, don’t be surprised to see a giant “AI-generated” badge splashed right there. That’s the new normal India’s shooting for: making it easy for everyone to spot the bots.

