
AI chatbots write seminar papers and scientific articles, and can even design experiments.

The Office of Technology Assessment has evaluated the consequences of AI chatbots like ChatGPT for education and science in an extensive study. We spoke with Steffen Albrecht, the author of the study.

The AI chatbot ChatGPT delivers texts that appear as if a human had written them. This radically changes everyday life at schools and in research: the chatbot facilitates fraud as well as access to knowledge. The Office of Technology Assessment at the German Bundestag (TAB) has studied the effects on behalf of the parliament. The office is run by the Karlsruhe Institute of Technology (KIT) and advises the parliament on research and technology policy issues. Steffen Albrecht is a research associate at TAB.

Mr. Albrecht, AI researchers and AI developers recently published an open letter demanding a moratorium: no AI systems more powerful than the latest version of ChatGPT should be trained anywhere in the world. Many renowned experts have signed the appeal, which now has almost 28,000 signatures. Have you signed it as well?

No, for that I find the open letter a bit too detached from reality: the moratorium is supposed to last only six months. During this time, legislators worldwide are supposed to enact rules for the use of models like ChatGPT, but political processes don't move that fast. Even we researchers will not be able to reach a final verdict on the effects within half a year. The moratorium also gives the impression that only new AI systems pose risks while existing ones are completely unproblematic, which is not true. I am also surprised that so many AI developers have signed the appeal: after all, they could react immediately and stop possible undesirable developments in their own companies. Instead, Elon Musk, for example, signed the moratorium and founded an AI company at the same time. In this respect, the moratorium leaves me with very mixed feelings.

Are your feelings really mixed? You appear to reject the letter entirely.

Steffen Albrecht. Picture: TAB/Konstantin Börner

No, because the call for a moratorium has generated a lot of public attention. It makes sense to have a broad debate now about how we want to deal with AI systems. Society needs to be clear about what it is getting into and what rules we want to agree on. The open letter prompted many reactions: numerous experts pointed out the opportunities that such programs offer, while others emphasized that we tend to overestimate the capabilities of programs like ChatGPT, which can appear more intelligent, and thus more threatening, than they actually are.

However, some cities, states, and universities have even banned ChatGPT or allow it only in a restricted way. Do you think that makes sense?

In my view, a general ban is too sweeping, because systems like ChatGPT harbor dangers and potential in equal measure. On the one hand, they can take over tedious routine tasks for us or simplify access to knowledge. On the other hand, they can also deliver fake news or reproduce certain prejudices, for example by reinforcing gender stereotypes. Even for us as experts on the impact of technology, it is still too early to make a final assessment of the programs' effects, a point we also make in our recently published study.

Pressing questions are already being raised, for example in research. The software is likely to be tempting for scientists who want to cheat.

Of course, this is a big issue! Programs like ChatGPT could increase the number of fraud cases in science, because there is already enormous pressure to publish as much as possible. It is easy to imagine researchers being tempted to let an AI system write their studies. However, many scientific publishers categorically reject this; after all, ChatGPT cannot take responsibility for the content of a text. At the same time, there are indications that the system could also be helpful in scientific writing, for example when it comes to getting an overview of the relevant literature or publishing in a language other than one's native tongue.

Can ChatGPT already write entire studies or scientific articles?

In the future, such systems could not only formulate research results but also design and conduct chemical experiments, as one study has shown: ChatGPT planned a series of experiments on a given question and forwarded them to an automated pipetting system. Although this was so far only a demonstration of feasibility, the study illustrates how fundamentally scientific practice could be affected. The articles published so far that were partly or largely written using ChatGPT or related systems, in contrast, tend to show the system's limitations, as the results have not turned out to be all that original. But of course this can only be judged in the cases where the use of ChatGPT was made transparent.

Will it be technically possible to detect whether texts originate from an AI in the foreseeable future?

I highly doubt it, because ChatGPT mostly generates unique texts: they are not composed of set pieces from different sources but are, so to speak, reassembled word by word on each request. Existing plagiarism-detection software therefore fails here. New detection programs are being trained, but so far with only moderate success. A kind of watermark would be more promising: certain patterns are interspersed in the texts that do not bother us humans when reading but are recognizable by machines. However, a number of technical challenges remain, and the developers of the AI systems must also play along and build the method into their systems.
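
To make the watermarking idea concrete: one approach discussed in current research (a "green list" watermark, as proposed by Kirchenbauer and colleagues in 2023) nudges the model during generation toward a pseudorandom half of the vocabulary that depends on the preceding word. A detector that knows the secret seed counts how many words fall on that green list; watermarked text scores well above the roughly 50 percent expected by chance. The Python sketch below shows only the detection side; the seed, function names, and sample text are illustrative assumptions, not part of any deployed system.

import hashlib

SECRET_SEED = "demo-seed"  # shared between generator and detector (illustrative)

def is_green(prev_word: str, word: str) -> bool:
    # Pseudorandomly assign `word` to the green or red half of the
    # vocabulary, seeded by the preceding word and the secret seed.
    digest = hashlib.sha256(f"{SECRET_SEED}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words come out green

def green_fraction(text: str) -> float:
    # Fraction of words on the green list: close to 0.5 for ordinary,
    # unwatermarked text, noticeably higher for watermarked output.
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / (len(words) - 1)

sample = "the chatbot writes fluent answers to almost any question"
print(f"green fraction: {green_fraction(sample):.2f}")  # near 0.5: no watermark evidence

Because the pattern lives in the statistics of word choice rather than in visible markers, it does not disturb human readers, which is exactly the property described above.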

How can we still assess performance fairly, for example, at schools and
universities?

For example, by designing examinations differently: pupils and students no longer just hand in a finished text that is then graded as good or bad; instead, they exchange ideas with their teachers at a much earlier stage. For example, they discuss the development of a research question, the search for sources, or the structure of an argument. An AI system like ChatGPT can only help to a very limited extent in dealing with sources. These intermediate steps are then also assessed by the instructors. The focus thus shifts more toward supporting the learners, which is particularly important in schools.

Why?

Because otherwise the programs could reinforce social inequalities. This has already been shown with other digital teaching and learning tools: according to studies, it is primarily high-achieving students who benefit from such apps, while weaker children and young people tend to learn less effectively with them. Teachers tell us similar things about ChatGPT. At TAB, however, we are also concerned about another problem: data protection.

What risks do you see?

With ChatGPT, students are using a system that is operated by a private company in the USA. They may feed it a large amount of very personal data and disclose information about their own performance. We see that as very questionable. The Italian data protection authority has therefore even banned ChatGPT and is demanding improvements from the developers. However, we at TAB also see great opportunities in such programs.

Which ones?

Let's stay with schools for a moment: there, language-based systems like ChatGPT could help prepare differentiated learning materials. They can generate tasks on the same content at different levels of difficulty, depending on the learning level of the respective child. I see a lot of potential here!

And at universities?

There, the programs are primarily of interest for subjects that work a lot with language, for example, the social sciences. Some teachers are already using ChatGPT as a sparring partner: the app drafts coherent counter-arguments to a position, to which the students then have to respond. This trains their own thinking. The situation is more difficult in the natural sciences: logical thinking and mathematical skills are central there, and ChatGPT has been weak in these fields so far.

Are you considering using the program in research as well?

Yes, for example, in medicine: AI could be helpful in genome analysis, because the models read genetic code in a way similar to human languages and recognize patterns and irregularities in it. However, this branch of research already uses other, far more advanced AI systems, and whether ChatGPT can outperform their results remains to be seen. The system can also write program code, a capability that could likewise be interesting for science, as researchers often use custom scientific software as well as smaller utilities. ChatGPT masters various programming languages and delivers code that, while not always directly usable, works quite convincingly after some rework, or at least points the way toward more elegant solutions. This is where systems like ChatGPT show their great value, because they make our work easier, not only in science but in many other areas as well.
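
To illustrate that workflow: a researcher might ask a chat model for a first draft of a small utility and then rework it by hand. The sketch below is a minimal example, assuming the openai Python package (1.x interface) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative choices, not a recommendation from the study.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative request for a small research utility; the prompt and
# model name are assumptions for this sketch.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a short Python function that reads a CSV file "
                   "of measurements and returns the mean and standard "
                   "deviation of a given column.",
    }],
)

# The draft typically needs the rework mentioned above (error handling,
# adapting to local data formats) before it is usable in practice.
print(response.choices[0].message.content)

The rework step is where the caveat above applies: the generated code often runs convincingly only after a human fixes edge cases and adapts it to the data at hand.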

Do you have an example?

Yes, legal advice. Lawyers often do very routine work, for example when they check contracts. In the future, an AI system may be able to do this just as well; a pilot project is underway at the Stuttgart Higher Regional Court. Positive effects could also be seen, for example, in the inclusion of people with disabilities: there are still only a few texts translated into plain language, and ChatGPT or related systems could make an important contribution here in the future. The system thus holds numerous opportunities as well as risks, and we point out both in our study. It offers a juxtaposition of pros and cons, which cannot be avoided at the moment: it is still too early for firm conclusions.

25.04.2023
Interview: Jenny Niederstadt
