VitalLaw™

Cybersecurity Policy Report, Regulators, Advocacy Groups Weigh Privacy Concerns Over ChatGPT (Apr 4, 2023)

By Tony Foley

As the use of artificial intelligence chatbot ChatGPT continues to raise concerns around the world, several regulators have weighed in on the dangers posed by the technology with respect to privacy and data protection. In addition, industry and consumer groups have asked regulators in the U.S. and European Union to consider whether the use of AI is compliant with privacy and consumer protection laws.

UK ICO issues guidance. Steven Almond, executive director–regulatory risk at the United Kingdom’s Information Commissioner’s Office (ICO), issued a news release yesterday in which he outlined questions that developers and users should ask before using generative AI and large language models (LLMs) such as ChatGPT.

While these technologies have captured the world’s imagination, Mr. Almond said, a step back to reflect on how
personal data is being used by the technology is warranted. Organizations developing and using generative AI should
be considering their data protection obligations from the outset, taking a data protection by design and default
approach. Specifically, he outlined the following eight questions to be asked and answered before developing or using generative AI that processes personal data:

1. What is the lawful basis for processing?

2. Is the developer/user a controller, joint controller, or processor?

3. Has a data protection impact assessment (DPIA) been prepared?

4. How will transparency be assured?

5. How will security risks be minimized?

6. How will unnecessary processing be limited?

7. How will individual rights requests be fulfilled?

8. Will generative AI be used to make automated decisions?

The ICO noted that it would ask those questions of developers or users of generative AI and would act in cases where organizations were not following data protection law and considering the impact of the technology on individuals. The release also includes links to resources, such as the ICO’s guidance on AI and data protection and its accompanying risk toolkit, and encourages innovators to use the ICO’s regulatory sandbox and innovation advice services, among other resources.

Swiss DPA issues statement. The Swiss Federal Data Protection and Information Commissioner (FDPIC) issued a
statement today in which it acknowledged the recent ban of ChatGPT by Garante, the Italian data protection authority
(CPR, March 31), and advised users to make conscious use of AI-based applications. The statement recognizes the
opportunities that the technology offers but clarifies that it also entails risks for the private sphere and informational
self-determination.

The statement urges users to check, before entering text or uploading images into generative AI applications, the purposes for which the data will be processed. It also said companies using AI-based apps must ensure that legal
data protection requirements were met, including informing users in a transparent and understandable manner about
the data processed and the purposes of processing. The FDPIC has not yet examined ChatGPT in the context of a
fact-finding proceeding, so it did not comment on whether the app was compliant with data protection law, but the
agency is in contact with Garante to obtain further information.

CAIDP files FTC complaint. On March 30, the Center for Artificial Intelligence and Digital Policy (CAIDP) filed a
complaint with the Federal Trade Commission urging an investigation into ChatGPT and asserting that the bot was
“biased, deceptive, and a risk to privacy and public safety.” The complaint alleged that ChatGPT’s outputs cannot
be proven or replicated and that no independent assessment was conducted prior to its deployment.

In addition, CAIDP said OpenAI, the developer of the bot, had acknowledged that there were dangers associated with
cybersecurity, as well as disinformation and influence operations, although CAIDP claims that OpenAI has already
disclaimed any liability for the consequences that may follow.

The complaint also says that the FTC has declared that the use of AI should be “transparent, explainable, fair and empirically sound while fostering accountability,” arguing that the latest iteration of the bot, GPT-4, meets none of these requirements.

CAIDP calls on the FTC to provide independent oversight and evaluation of commercial AI products in the U.S. and
urges the agency to open an investigation into OpenAI regarding potential violations of section 5 of the FTC Act, to
enjoin further commercial releases of GPT-4, and to ensure the establishment of guardrails to protect consumers and
businesses.

Canada initiates investigation. The Office of the Privacy Commissioner of Canada (OPC) announced today that it
had launched an investigation into OpenAI in response to a complaint alleging the collection, use, and disclosure of
personal information without consent. “AI technology and its effects on privacy is a priority for my Office,” Privacy
Commissioner Philippe Dufresne said. “We need to keep up with—and stay ahead of—fast-moving technological
advances, and that is one of my key focus areas as Commissioner.” Because the inquiry is an active investigation,
the OPC did not offer any additional details.

BEUC asks for investigation. Citing the filing of the CAIDP complaint, the European Consumer Organization
(BEUC) issued a press release on March 30 calling for EU and national authorities to launch an investigation into
ChatGPT and similar chatbots. Noting that the EU is currently working on the AI Act, the first legislation concerning AI,
BEUC nevertheless expressed concern that it will take years for the AI Act to take effect if it is adopted, leaving
consumers at risk from technology that will not be regulated during the interim period.

“For all the benefits AI can bring to our society, we are currently not protected enough from the harm it can cause
people. In only a few months, we have seen a massive take-up of ChatGPT and this is only the beginning,” said
BEUC Executive Director Ursula Pachl. “Waiting for the AI Act to be passed and to take effect, which will happen
years from now, is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots
might deceive and manipulate people.”

MainStory: TopStory FederalLegislation InternationalLegislation LitigationEnforcement DataPrivacy DataSecurity

© 2023 CCH Incorporated and its affiliates and licensors. All rights reserved. Apr 5, 2023 from VitalLaw™
