
Why has government issued an AI advisory?

What does the March 1 notification signal to tech firms? Will they need to
seek government permission before putting out ‘under-tested’ Artificial
Intelligence models? Why do some in the industry feel this move will hinder
innovation?


AROON DEEP

‘The advisory is an opportunity in disguise. It points to a need for local AI stacks, datasets, graphics processing units’

The story so far:

On March 1, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to the Artificial Intelligence industry. It said that all generative AI products, like large language models on the lines of ChatGPT and Google’s Gemini, would have to be made available “with [the] explicit permission of the Government of India” if they are “under-testing/unreliable”.

What is the government’s stand?

The advisory marks a starkly different approach to AI research and policy from the one the government had previously signalled. It came soon after Rajeev Chandrasekhar,
the Minister of State for Electronics and Information Technology, reacted sharply to
Google’s Gemini chatbot, whose response to a query, “Is [Prime Minister Narendra]
Modi a fascist?” went viral. Mr. Chandrasekhar said the ambivalent response by the
chatbot violated India’s IT law.

How has it been received?

The advisory has divided industry and observers on a key question: was this an ‘advisory’ in the classic sense, one that merely reminded companies of existing legal obligations, or was it a mandate? “It sounds like a mandate,” Prasanth Sugathan, Legal Director at the Delhi-based Software Freedom Law Centre, said at an event on Thursday. The document, sent to large tech platforms, including Google, instructed
recipients to submit an “[a]ction taken-cum-status Report to the Ministry within 15
days.” Mr. Chandrasekhar insisted that there were “legal consequences under
existing laws (both criminal and tech laws) for platforms that enable or directly
output unlawful content,” and that the advisory was put out for firms “to be aware
that, platforms have clear existing obligations under IT and criminal law.” Mr.
Chandrasekhar referred to rule 3(1)(b) of the Information Technology (Intermediary
Guidelines and Digital Media Ethics Code) Rules, 2021, which prohibits unlawful
content like defamation, pornography, disinformation and anything that “threatens
the unity … and sovereignty of India.” He added that the advisory was intended for large tech firms and wouldn’t apply to startups.

The government hasn’t elaborated in detail on how IT laws can apply to automated AI
systems in this way. Pranesh Prakash, a technology lawyer who is an affiliated fellow
at the Yale Law School’s Information Society Project, said the advisory was “legally
unsound,” and compared it to the Draft National Encryption Rules of 2015, a quickly
withdrawn proposal to outlaw strong encryption of data in India.

The advisory also included a requirement for AI-generated imagery to be labelled as such, a step the industry has vacillated on taking seriously. Amazon Web Services has tried implementing an ‘invisible’ watermark, but has expressed concern that such a move would be of little use, as watermarks can be edited out fairly easily.
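
To see why such watermarks are considered fragile, here is a minimal, hypothetical sketch of the naive least-significant-bit (LSB) approach in Python; the functions and values are illustrative assumptions, not AWS’s actual technique.

```python
# A toy "invisible" watermark via least-significant-bit (LSB) embedding.
# Illustrative only: NOT AWS's actual scheme.
import numpy as np

def embed(pixels: np.ndarray, bits: list) -> np.ndarray:
    """Hide the bit string in the lowest bit of the first len(bits) pixels."""
    out = pixels.copy()
    flat = out.ravel()  # view into `out`, so writes below stick
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the lowest bit, then set it to b
    return out

def extract(pixels: np.ndarray, n: int) -> list:
    """Read the first n hidden bits back out."""
    return [int(v & 1) for v in pixels.ravel()[:n]]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in "image"
mark = [1, 0, 1, 1, 0, 1, 0, 0]

stamped = embed(img, mark)
print(extract(stamped, 8) == mark)  # True: the mark survives an exact copy

# Any lossy edit (re-encoding, resizing, mild noise) perturbs low-order bits
# and destroys the mark:
noise = rng.integers(-2, 3, size=stamped.shape)
degraded = (stamped.astype(np.int16) + noise).clip(0, 255).astype(np.uint8)
print(extract(degraded, 8) == mark)  # almost certainly False
```

Production schemes spread the signal across many pixels or frequency bands to resist such edits, but a determined editor can still strip or overwrite the mark, which is the industry concern noted above.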

Rahul Matthan, a technology lawyer and partner at the firm Trilegal, urged a more permissive approach to AI systems. “In most instances, the only way an invention will get better is if it is released into the wild — beyond the confines of the laboratory in which it was created,” Mr. Matthan wrote after the advisory was released. “If we are to have any hope of developing into a nation of innovators, we should grant our entrepreneurs the liberty to make some mistakes without any fear of consequences,” he added, pointing to the aviation industry as an example: air safety, he said, improved as a result of planemakers’ willingness to share information on failures with each other.

What has been the government’s approach to the AI industry?

Until recently, the government itself shared the industry’s optimism on AI, even as Big Tech firms have often struck a balance between seeking regulation and seeking to control the direction these regulations take. The IT Ministry last April categorically said that “the government is not considering bringing a law or regulating the growth of artificial intelligence in the country”.

But in the last few months, even before the now-viral Gemini response, Mr. Chandrasekhar had expressed dissatisfaction with AI models spitting out uncomfortable responses. “You can’t ‘trial’ a car on the road and when there is an accident say, ‘whoops, it is just on trial.’ You need to sandbox that,” Mr. Chandrasekhar said of AI firms’ responses to criticism over bias. The tension underlines the conflict inherent in widely testing an experimental technology: wide testing is what allows developers to detect mistakes in these often unruly models and improve them. That dynamic was on display when Gemini generated racially inaccurate images of historical events, prompting a storm of criticism that led Google to pause the image generation feature until it worked on a fix.

Will it benefit local developers?

“This is just a poor job in communication, resulting from the need to do something in an election year,” Aakrit Vaish, co-founder of Haptik, a conversational AI firm founded in 2013, said on X. Mr. Vaish amplified subsequent clarifications on the advisory’s applicability, calling them good news for startups, and sought input from local firms to send to the Ministry.

Atul Mehra, founder of Vaayushop, an AI finance firm, expressed hope that the
advisory could actually translate to a benefit for local developers. While it was a
“short term hassle,” he conceded on X, “it's a huge opportunity in disguise. It points
to [a] need for local AI stacks, datasets, [and] GPUs [graphics processing units] … Let’s
keep building and wait for our right moment to even beat Microsoft and Google.”
