India has entered the global AI debate with an advisory directing tech companies to obtain government approval before releasing new AI models. India's Ministry of Electronics and IT sent the notice to businesses on Friday, March 1.
Although the ministry acknowledges the advisory is not legally binding, India's Minister of State for Electronics and IT, Rajeev Chandrasekhar, says it is "signalling that this is the future of regulation." He adds, "We are doing it as an advisory today, and we are asking you to comply with it."
The move comes after Google's AI platform, Gemini, generated controversial responses to queries about Prime Minister Narendra Modi.
In the advisory, the ministry cites the authority granted to it by the IT Act of 2000 and the IT Rules of 2021. It calls for compliance with "immediate effect" and requires tech companies to submit an "Action Taken-cum-Status Report" to the government within 15 days.
The new guidance marks a shift from India's previously hands-off approach to AI regulation. It also requires tech companies to "appropriately" label the "possible and inherent fallibility or unreliability" of the output their AI models generate. Less than a year ago, the ministry had declined to regulate the growth of AI, citing the sector's importance to India's geopolitical objectives.
The guidance has alarmed many Indian startups and venture capitalists, who fear it will hamper the country's ability to compete in the global market.
The advisory adds that non-compliance with the provisions of the IT Act and IT Rules could result in penal consequences for platforms or users, if identified.