
Artificial Interference: The Critical State of National Democracy Entering 2024

In a rare moment of agreement, members of Congress, irrespective of partisan affiliation, expressed palpable concerns regarding a critical threat that stands to compromise the foundation of our nation's election systems – artificial intelligence. The use of AI to target our election security, seeking to unravel the fabric of our nation's democracy, has already begun chipping away at the American public's trust in the integrity of their vote. Now, more than ever, we must turn our attention towards those advocating for increased appropriations for election security – before it's too late.

Critical to consider, especially entering an election year, is that these threats, particularly with the growing role of AI, have origins both foreign and domestic. One of the most effective means of interfering in our elections, and the most difficult to regulate, is the spread of misinformation intended to manipulate voter behavior. The AI bots that disseminate and amplify misinformation on platforms like X, formerly known as Twitter, and Facebook, now operated by Meta, are deployed not only by foreign actors, as we saw with Russia and China in the past two election cycles, but also by domestic interest groups, or even opposing campaign camps, attempting to cripple the platform of their opponents. The threat from these domestic actors is growing as we approach November of 2024. Photographs of Republican frontrunner and former president Donald J. Trump embracing infectious disease specialist Anthony Fauci have circulated through the recent American media cycle, even after being debunked by AI specialists as deep-fakes produced by affiliates of the Republican challenger Ron DeSantis.

The advent of AI and its role in election interference pose unique and nuanced legislative challenges, and our current rulemaking is not keeping pace with the velocity and evolution of these threats. The automated generative powers of AI make it particularly difficult both to target and halt the dissemination of this misinformation and to identify its source in order to prevent continued interference attempts. Furthermore, the pattern-tracking and recognition capabilities of AI allow adversaries to disguise their activity in a manner that is not readily detectable by analysts, making it more difficult to extract actionable intelligence from these events. Actors are also employing increasingly sophisticated tactics to impede attribution, such as the use of virtual private networks (VPNs) and false identities that further obfuscate the true source of these attacks on our elections.

As the law currently stands, no legislation explicitly prohibits or regulates the use of AI in campaigning and political ads. However, nested within the Federal Election Campaign Act (FECA) are two provisions that arguably pertain to the threat of AI in our elections: the prohibition of "fraudulent misrepresentation" and the requirement of explicit disclaimers when deploying a politically financed ad. Given the concerning ambiguity of this language, the Federal Election Commission (FEC) has accepted a petition for rulemaking to clarify whether the statute encompasses generative AI.

Those resistant to increased federal funding and congressional attention towards election security cite two primary arguments. The first is the fear that such regulation of campaign behavior could risk a violation of First Amendment rights. At a September 2023 Senate Rules Committee hearing on AI and its implications for upcoming elections, Senator Deb Fischer (R-NE) worried that such rulemaking would serve as "a prohibition of politically protected speech" and wondered whether there is a way forward in which lawmakers can protect "the public, innovation, and speech." Another argument, cited by budgetary leaders and campaign finance pundits on the Hill, is the issue of underspending: if Congress were to allocate more financial resources for states to fortify their elections against cyber threats and foreign adversaries, how can it ensure that this financing goes towards that cause, rather than being reallocated at the state level or set aside in some reserve? The constitutional argument is a sound one and will require careful consideration going forward in the rulemaking process, but the budgetary concern is a question of politicking. Many of the states that have historically underspent such funds have been unable to spend the money due to political nuances at the local level. Take Oklahoma, for example: the state is cited as the most glaring case of election security underspending, but Oklahoma's election department, per the state constitution, cannot update its systems until the state's DMV does so first, and the DMV lacks the financing to do so. As such, underspending of these appropriations is not a demonstration of waning need or state indifference, but rather an issue of local politics.

It is unclear just how potent the impact of AI will be on the results of the upcoming presidential election, but it would be negligent not to expect it to take center stage throughout the campaign. While voters would be prudent to remain vigilant against misinformation, Congress must also remain aware that the fate of this election, of future elections, and of American democracy as a principle lies in their law-making hands. Without legislative guardrails to protect our elections from the looming threat of generative AI and misinformation, our next president could be decided by lines of code.

The adversary writing that code could be an individual domestic antagonist, who in coming elections will be able to harness AI to attack election offices with far fewer resources than ever before; or it could be a nation-state like China, Russia, or Iran, all of whom have meddled in recent American elections, and all of whom are developing their own AI technologies capable of targeting American networks. Microsoft analysts have warned that Chinese operatives have already used artificial intelligence to "generate images . . . for influence operations meant to mimic U.S. voters across the political spectrum and create controversy along racial, economic, and ideological lines."

There is bipartisan consensus in Congress on the need for AI safeguards, particularly when it comes to elections. One bill with bipartisan co-sponsors, for example, would prohibit distributing materially deceptive AI-generated video, images, or audio related to candidates for federal office. It would also allow federal candidates targeted by deceptive content to have it taken down and to seek damages in federal court. But the legislative process may not move quickly enough to make a difference in next year's elections.

In the meantime, it will fall to business leaders—as trusted employers and prominent members of their communities—to play an important role in restoring trust in our elections. Resistance to misinformation and disinformation starts with an informed citizenry.
