
From Deepfakes to National Narratives: The Impact of AI on Political Landscapes

The rapid pace of technological progress has given rise to new risks. Since its conception in the 1960s, the internet has opened up hitherto unexplored frontiers, growing into a user- and developer-friendly arena of communication and entertainment. That same growth, however, exposed the downsides of social media. Even as progress benefits humans, it frequently overlooks the suffering it leaves in its wake. Artificial intelligence, with its capacity to assist and even replace human work, creates opportunities for exploitation owing to the inherent complexity of such systems. In the information age, that complexity poses ethical problems, since it can be turned to malicious ends.
The "Pentagon Bombing"
The incident of May 22, 2023, when an AI-generated image of an explosion near the Pentagon circulated online, illustrates the far-reaching effects of AI. Social media platforms such as Twitter (now X) spread the false image rapidly, and once seemingly authoritative accounts repeated it, it generated widespread worry and distress. The incident also exposed the public's limited capacity to fact-check the news, leaving people vulnerable to believing and sharing erroneous information with real-world consequences.
The financial repercussions were notable: the US stock market dipped sharply within hours of the image's dissemination. This is a prime example of the damage that AI-generated misinformation can do to a country's economy, with repercussions felt by traders and investors alike. The event also raised concerns about the use of AI-generated content for political manipulation, and it underscored the need for better cybersecurity measures and public awareness to resist the spread of deepfakes and misinformation in an increasingly digital society.
Propaganda producers use artificial intelligence (AI) to fabricate videos of politicians saying things they never uttered in order to push a narrative against them. Meanwhile, public faith in AI-generated content has grown, perhaps more than is widely recognized. In their article "Deepfakes and the New Disinformation War," Robert Chesney and Danielle Citron predicted a kind of informational anarchy as a consequence of the rise of social media and the availability of deepfakes that can create plausible fake news. When word spread that Chinese authorities had begun severely restricting residents' movement in and out of Wuhan during the COVID-19 period, the country became an easy target for Western accusations of human rights breaches. Western audiences, not yet convinced of how rapidly the virus was spreading, used the crisis to reinforce already negative stereotypes of China.
Television broadcasts showed images of the filthy conditions in which quarantined patients were allegedly forced to live, and reports described how restrictions on movement were leading to deaths by cutting people off from food, water, and medical care. Fake news helped the West form ruthless, unfavorable perceptions of the Chinese government, even as Western countries eventually found lockdowns of their own necessary.
Like most technologies, AI is neutral in itself; its effects depend on context and application. Strategic planners can use AI to map out ideas, strategies, and content production, and to assess the societal impact of putting those plans into action. With the right instructions and data management, an AI-equipped system can be put to many uses: discovering social trends, undertaking political analysis and news reporting, administering cyberspace, and aiding a state in crisis management, risk prevention, and security deterrence.
Unfortunately, many AI applications are not so benign. They take us closer to the future Stephen Hawking warned of, in which fully autonomous systems might destroy humanity. The use of AI to stifle discussion of certain issues while manufacturing others has reached a national scale, and gender inequality, political instability, and extremism have flourished as a result. Similarly, governments are employing AI-generated products to influence political campaigns, target particular communities, disseminate fake information, and promote phony democracy.
Earlier this year, the Republican National Committee (RNC) released an ad created in part with artificial intelligence to depict what life in the United States would supposedly look like if Joe Biden were re-elected. Though only about 30 seconds long, the video, released in response to Biden's re-election announcement, portrayed a United States with more crime, more migrants and terrorists let in, an escalated confrontation with China, and a collapsed economy. As a result, Biden faced more scrutiny and saw his standing in the public sphere diminish. AI has the potential to shape future political campaigns much as it has already shaped media coverage of candidates and issues. It may also help disseminate information in ways that nudge undecided voters toward a particular candidate. By the same token, AI can help politicians spread misinformation and win popular support by enabling defamation at scale through deepfakes.
Conclusion
In a society that relies on the internet for ever more of its functions, and where individuals are often more reachable online than offline, AI can shape thousands of such spaces with little to no effort. As social media users and responsible citizens, it is therefore our obligation to build cyber awareness and prevent the propagation of material that can be fundamentally harmful. Nothing found online should be shared before it has been confirmed as genuine: the situation on the ground must be taken into account, and confirmation sought from multiple independent sources.
Analyzing visual content takes attention, but AI-generated items can often be spotted on closer inspection. Monitoring who is uploading what material, and what their history suggests, may reveal further instances of disinformation and deception. It takes work, but if we are willing to put in the time and energy, we can break out of the vicious cycle of believing falsehoods. A clip of Biden performing "Baby Shark" on national television might be amusing, but we must recognize it as one more way a false narrative about the ineptitude of the sitting US president is disseminated.