New Delhi: India has decided to backtrack on its recent AI advisory following criticism from both local and global entrepreneurs and investors. The Ministry of Electronics and IT issued an updated AI advisory to industry stakeholders, removing the requirement for government approval before launching or deploying an AI model to users in the South Asian market. Instead, firms are now encouraged to label under-tested and unreliable AI models to inform users of potential issues. This revision comes after severe criticism earlier this month, with Martin Casado, a partner at venture firm Andreessen Horowitz, calling India’s initial move “a travesty.”
The episode marks a shift from India’s previous hands-off approach to AI regulation: less than a year ago, the ministry declined to regulate AI growth, citing its importance to India’s strategic interests. Like the original, the revised advisory has not been published online, but TechCrunch has reviewed a copy. The ministry clarified that while the advisory is not legally binding, it signals the “future of regulation” and that it expects compliance. The advisory also stresses that AI models should not be used to share content that is unlawful under Indian law and should not permit bias, discrimination, or threats to the integrity of the electoral process.
Intermediaries are advised to use consent pop-ups or similar mechanisms to explicitly inform users about the unreliability of AI-generated output. The ministry continues to prioritize the identification of deepfakes and misinformation, advising intermediaries to label AI-generated content or embed it with unique metadata or identifiers. However, the earlier requirement that firms devise a way to identify the “originator” of any particular message has been dropped.