
Name: Suprova Tasneem Hossain

ID: 2112283630
Faculty: HMB
Course Code: POL101

Challenges of AI in Politics
Today, almost no one questions that AI systems will have an increasing impact on how we conduct our
lives. Growing interaction between humans and non-human intelligent systems is both a source of danger
and a source of promise. We first examine the ethical issues raised by the development and application of
both weak and strong AI systems, and then the legal issues that arise when lawmakers try to regulate a
phenomenon as ambiguous and unpredictable as AI. We show that, despite these difficulties, suitable
legislation is the best defense against the dangers posed by AI and the best way to realize its opportunities.
We also note that the advancement of AI will present significant political difficulties. Politicians will have
to strike a tough balance between accepting technological progress that is inevitable and largely positive
for mankind on the one hand, and hazards and dangers that are likely inherent in the particular way AI
functions on the other.
In light of rising populism, extremism, digital surveillance, and data manipulation, there has been a
movement towards more critical views of digital media, of both its creators and its consumers. This shift,
together with calls for a path toward digital wellbeing, demands a closer look at the moral dilemmas
raised by artificial intelligence (AI) and Big Data. Big Data and AI applications in digital media
frequently violate fundamental human rights and democratic values. Instead of digital wellbeing grounded
in the equal and active involvement of educated citizens, the prevalent paradigm is one of covert
exploitation, degradation of individual agency and autonomy, and a blatant lack of transparency and
accountability, reminiscent of authoritarian dynamics. By offering a thorough examination of the
difficulties that stakeholders confront when attempting to mitigate the negative effects of Big Data and
AI, our work makes a valuable contribution to the growing research landscape that aims to address these
ethical issues. Rich empirical data from six focus groups conducted across Europe with stakeholders who
are influential in shaping the ethical dimensions of technology help elucidate the complex conflicts,
tensions, and challenges these stakeholders face when tasked with addressing the ethical issues of digital
media, with a focus on AI and Big Data. Before academics and policymakers can imagine and develop
strategies and policies to address these difficulties, they must first recognize, discuss, and explain them.
Our research contributes to the academic conversation and is useful for practitioners pursuing responsible
innovation that protects users' wellbeing while safeguarding threatened democratic tenets.
A more critical approach to digital media emerged in response to unexpected political events involving
two influential global actors, the United States and Britain, and may have peaked with the riots that
stormed the US Capitol in 2021. Words like "cracked down," "suspended," "allowed back," "banned,"
"locked," and "pulled the plug" appeared across the news in reference to users' accounts (and particularly
those of Trump), all of them terms associated with power and control. This may have been the first time
that so-called Big Tech companies like Facebook and Twitter were associated with a position of authority
akin to that of a global power. While the rest of the world watched to see what these firms would do next,
they were the ones making the decisions. These events were significant because, up until this point,
intergovernmental organizations with legal membership structures and democratic procedures, such as the
United Nations, the World Bank, and the International Criminal Court, had dominated discussions of
global governance in political discourse. All eyes were now focused on a select group of people: the self-
made, democratically unelected CEOs of these social media companies. Trump's temporary and
permanent suspensions from social media, in the words of a New York Times commentator, "clearly
illustrated" that power in the current digital society "resides not just in the precedent of law or the checks
and balances of government, but in the ability to deny access to the platforms that shape our public
discourse."
Unquestionably, debates over digital media peaked after the Capitol riots, emphasizing issues of
unaccountable and unfettered power that are central to democracies. Following these events, tech firms
were labeled "corporate autocracies posing as mini-democracies", and concerns about the Capitol riots
were raised both domestically and abroad. Ursula von der Leyen, the president of the European
Commission, expressed worry about how "the business model of online platforms has an impact not just
on free and fair competition, but also on our democracies" shortly after the events. It's vital to keep in
mind the larger backdrop and ongoing challenges to democracy despite the significance of these events
and their contribution to the shift towards a more critical outlook. Even before the Capitol riots, regulators
had criticized Big Tech corporations for their overwhelming power and for abusing it, for example by
adopting illegal measures to suppress competition. The US government and the attorneys general of 48
states even brought broad-based lawsuits against Facebook in December 2020, arguing that it should be
forced to divest itself of its dominant position in the social networking sector. In February 2021, Google sacked
the co-head of its AI ethics division. This came after another prominent AI ethics researcher claimed she
had been fired and accused Google of "silencing marginalized voices," sparking further backlash against
the corporation. Many saw these departures as an attempt to restrict research critical of the company's
products, since both researchers had campaigned for greater diversity inside the company and had spoken
out against the harmful consequences of its technology.
All of the aforementioned examples arguably show signs that the democratic institutions created to deal
with dissent are failing in the current digital age. This increases the urgency of our work, because it shows
that the existing means and modes of negotiating disagreement are neither effective nor constructive.
Ignoring people or attempting to restrict them are hallmarks of authoritarian administrations that do not
support ambiguity, plurality, or tolerance. In the long run, such actions are likely to cause more harm than
good, with enraged users turning to more specialized extremist sites. Taking Trump as an example, the
decision to ban a former president with millions of followers is symbolic both of the perilous populist
power granted to this person through digital tools, the power to influence the "hearts and minds" and
behavior of hundreds of millions of people, and of the unchecked powers of a small group of unelected
entrepreneurs. At the same time, social media's popularity as a tool for "giving voice" highlights the
shortcomings of representative democracy's frequently distant, bureaucratic, and staged communication,
as well as the desire for tools of more direct democracy, something Trump recognized right away.
