Critical to consider, especially entering into an election year, is that these threats,
particularly with the growing role of AI, have origins both foreign and domestic. One of the most
effective means of interfering in our elections, and the most difficult to regulate, is the spread of
misinformation to intentionally manipulate voter behavior. The AI bots that disseminate and
amplify misinformation on platforms like X, formerly known as Twitter, and Meta, formerly
known as Facebook, are deployed both by foreign actors, as we saw with Russia and China in the
past two election cycles, and by domestic interest groups, or even opposing campaign
camps, attempting to cripple the platform of their opponents. The threat from these
domestic actors is growing as we approach November 2024. Photographs of Republican frontrunner and
former president Donald J. Trump embracing infectious disease specialist Anthony Fauci have
circulated through the recent American media cycle, even after being debunked by AI specialists
as deep-fakes produced by affiliates of the Republican challenger Ron DeSantis.
The advent of AI and its role in election interference poses unique and nuanced legislative
challenges, and our current rulemaking is not keeping pace with the velocity and evolution of
these threats. The automated generative powers of AI make it particularly difficult both to
halt the dissemination of this misinformation and to identify its source in order to
prevent continued interference attempts. Furthermore, the pattern-tracking and recognition
capabilities of AI allow adversaries to disguise their activity in ways that are less
readily detectable by analysts, making it more difficult to extract actionable intelligence
from these events. Actors are also using increasingly sophisticated tactics
to impede attribution efforts, such as the use of virtual private networks (VPNs) and the
adoption of false identities, to further obfuscate the true source of these attacks on our elections.
As the law currently stands, there is no legislation that explicitly prohibits or regulates the
use of AI in campaigning and political ads. However, nested within the Federal Election
Campaign Act (FECA) are two provisions that, arguably, pertain to the threat of AI in our
elections: the prohibition of “fraudulent misrepresentation,” and the requirement of explicit
disclaimers when deploying a politically financed ad with AI. Given the concerning ambiguity of
this language, the Federal Election Commission (FEC) has accepted a petition for rulemaking to
clarify that the statute’s jurisdiction encompasses generative AI.
Those resistant to increased federal funding and congressional attention toward
election security cite two primary arguments. The first is the fear that such regulation of
campaign behavior could violate First Amendment rights. At a September 2023 Senate
Rules Committee hearing on concerns regarding AI and its implications for upcoming elections,
Senator Deb Fischer (R-NE) worried that such rulemaking would serve as “a prohibition of
politically protected speech,” and wondered if there is a way forward in which lawmakers can
protect “the public, innovation, and speech.” Another argument cited by budgetary leaders and
campaign finance pundits on the Hill is the issue of underspending: if Congress were to allocate
more financial resources for states to fortify their elections against cyber threats and foreign
adversaries, how can they ensure that this financing is going towards that cause, and not being
reallocated on a state level, or set aside in some reserve? The constitutional argument is a sound
one, and will require careful consideration going forward in the rulemaking process, but the
budgetary concern is a question of politicking. Many of the states that have historically
underspent such funds have not been able to spend this money due to political nuances on a local
level. Take Oklahoma, for example: the state is cited as the most prominent case of election
security underspending, but Oklahoma’s election department, per the state constitution,
cannot update its systems until its DMV does so first, and the DMV does not have the financing
to do so. As such, underspending of these appropriations is not a demonstration of waning need
nor state indifference, but rather, an issue of local politics.
It is unclear just how potent the impact of AI will be on the results of the upcoming
presidential election, but one would be negligent to assume it will not take center stage
throughout the campaign process. While voters would be prudent to remain vigilant against
misinformation, Congress must also remain aware that the fate of this election, future elections,
and the future of American democracy as a principle, lies in their lawmaking hands. Without
legislative guardrails to protect our elections from the looming threat of generative AI and
misinformation, our next president could be decided by lines of code.