Why AI is generating so much excitement and so many worries
TNN | May 3, 2023, 01.42 PM IST

Technology changes rapidly. But even then, there are some seminal moments, those that are truly transformative. In the past three decades, the emergence of the internet was one of them. Smartphones were another. And now we seem to be at the threshold of yet another – artificial intelligence. It is causing both enormous excitement and an almost equal amount of anxiety. On May 1, we heard that Geoffrey Hinton, who served as a Google vice president and who has been called the “godfather of AI” for his work on deep learning, had left the company to speak out on the dangers of AI. The world doesn’t have a lot of answers on AI yet, and, on a suggestion from Nasscom, Times Techies decided to initiate a discussion on the subject. We started last week with a webinar featuring four speakers who are deeply immersed in the world of AI.

The starting point of the discussion, obviously, was the launch of OpenAI’s ChatGPT late last year, because that was clearly the moment when people at large began to see the wonders of AI. Debjani Ghosh, president of Nasscom, noted that ChatGPT gained its first million users in just five days, and 100 million users in two months. “This is the fastest we have ever seen for any consumer internet app till now. By comparison, TikTok took nine months (for 100 million users), and Instagram more than two years. And every day, there’s a new surprise. The scale at which innovation is happening, the scale at which products and platforms are getting created, is mind-boggling,” she said.

Rohini Srivathsa, national technology officer for India at Microsoft, the biggest investor in OpenAI, noted that while we were all already using AI in multiple applications every day, the difference ChatGPT made was in its user interface. “It created access to AI that was earlier hidden away. And now, the use cases are increasing all the time. Take this meeting we are having. If for some reason you are not able to attend it, you can later ask an AI agent what happened in this meeting, who said what, is there any action item for me. You can have a conversation with that agent, which is truly game-changing,” she said.
Akhilesh Tuteja, partner for national alliances and TMT (technology, media & telecom) industry leader at KPMG in India, said many companies are today doing proofs of concept with ChatGPT to figure out how it could change their business. The easiest use cases, he said, relate to customer service. “All of us have been living with some dumb chatbot in an airline, banking or insurance service, and we invariably wish we were talking to a human. Now these companies are looking at these newer kinds of chatbots (like ChatGPT),” he said. A second major use case is documentation services – when you have to summarise an input or create a brief. Another major use case, Tuteja said, is research augmentation. “If you are planning to write on a topic, you could ask whether these are the things you should look at, and whether there are other things you should also consider. Are there outliers or boundary conditions one has missed? It’s not replacing your research, but augmenting it,” he said.
Srivathsa said these technologies have democratised creativity, and that allows us to expand the frontier of what we can solve, whether it is targeted medical delivery or personalised education. “We have to think about expanding the pie of problems to solve, instead of thinking that machines are taking away part of a limited pie,” she said.

LET'S DISCUSS, BEFORE WE REGULATE

There are also, however, big worries about what bad actors can do with technologies like generative AI, and about the pace of AI development – whether it would end up creating technologies that could overwhelm humanity, bring massive job losses, or even overtake human intelligence. Some worry that ChatGPT-like models may generate personal or sensitive data that could be used to identify or harm individuals. Others worry about the models’ ability to create deepfake audio or text, or to spread false or misleading information. Content-rating tool NewsGuard warned this week that it had found some 49 news sites publishing content that appeared to be almost entirely fabricated by AI, and using it to generate ad money. Earlier, these sites would have had to pay humans to do this; now they can do it practically free, on a mass scale.
Balaraman Ravindran, professor of computer science at IIT Madras, says that earlier, if his students were making up stuff, he would be able to figure that out from the self-consistency of the argument. But ChatGPT, he says, is so good at “hallucinating” output – giving confident answers even when it is wrong, and doing so in a very self-consistent way – that it would be difficult to figure out whether a student had used ChatGPT or done the work themselves. Ravindran said GPT-4, the latest version of OpenAI’s large language model (ChatGPT emerged from the earlier GPT-3.5 series), is even better at hallucinating output than ChatGPT.

Responsible AI movement

Such concerns have generated a movement for building ‘Responsible AI’. Ghosh said Nasscom has worked with the tech industry to create a responsible AI framework. The basic idea of the framework, she said, is for developers and companies to ask at each stage of product development – right from ideation – whether the product is inclusive, transparent, accountable, and keeps human interest first. “And we are making sure the tech industry in India has access to this framework, has access to each other’s learnings – what works, what does not, where do you really face a challenge, how do you overcome the challenge. The idea is to create a community of collective learning because that is what is needed more than anything else today,” she said.

Srivathsa noted that it’s far from easy to put principles – like fairness or transparency – into actual practice. First, she said, you need to understand what these values mean in your context. And when that understanding finally reaches a product designer, she will expect concrete requirements to adhere to. Those requirements then need tools, techniques and processes, which in turn need policies, practices and governance. “Microsoft’s two-year journey, from version 1 of our Responsible AI standard that came out in 2019, to version 2 that came out in 2021, involved going through this entire exercise. And even now we don’t have all the answers,” she said.

Ravindran said large organisations like Microsoft may be able to do such exercises, but outside of that, the challenge is huge. “Suppose you have to make sure your large language model does not generate any derogatory language towards Indians. Do I even know what constitutes derogatory language in, say, Bihar! Do I have it written down somewhere for me to enforce that in my system? That’s a lot of work that needs to be done,” he noted.

Time for regulation?

Industry self-regulation may not always work, and may not deliver all the desired social outcomes. So, should governments step in?

Ghosh said it’s early days for governments to step in, and it’s best to wait a bit to see how AI plays out and how it impacts our lives and jobs. But, she said, the initial conversations should start. “We are proposing a multi-disciplinary task force that looks at all of these things, and maybe even comes out with the design principles for regulating tech. I, for instance, don’t think it makes sense to regulate research. If you limit research, you’ll put a stop to things before you know what can happen. But you should absolutely regulate the products that get commercialised, make sure companies don’t commercialise them to do harm. The design principles for that are very, very important,” she said, adding that with India leading the G20 this year, “it is a fantastic opportunity for our government to take this dialogue to the G20 countries and try and get at least some broad alignment among them.”

Ravindran said deciding how to regulate has to be a national collaborative effort, one that includes not just government but also academia and industry.

Tuteja too agreed that regulation must evolve slowly. He said AI’s potential for good versus its potential for not-so-good is 100 to one. Human beings, he said, are generally not prone to self-destruction – “otherwise we would have by now built a nuclear reactor at a cost of $100.” He also said it may be unfair to put all the onus of ‘Responsible AI’ on developers, and that we must attach equal responsibility to the humans who use the outcomes of developers’ work. “If someone thinks ChatGPT is a search engine and uses it like that, I don’t think OpenAI is at fault. The person who thinks of it and uses it that way is at fault,” he said.
