
“Artificial Intelligence as a Threat”

Artificial intelligence (AI) has been part of our daily lives for decades. When you use a
smartphone app or a search engine, you’re using artificial intelligence. But are we ready for
machines to take over our jobs? Artificial intelligence is the technology that is going to change
the world. While it has the potential to help humans do things that they can’t do alone, the
question is: is AI a threat? Many experts are concerned that AI poses a threat to humanity, and it is time for us to take a closer look at the relationship between AI and humans.

According to the study Artificial Intelligence vs. Human Intelligence, AI seeks to offer a method of work efficiency that makes it easier to address issues. In contrast to human intelligence, which takes a long time to become accustomed to a task’s mechanics, AI can solve such problems in the blink of an eye. The primary distinction between natural and artificial intelligence, then, is how long each takes to function. Even so, today’s AI can be dangerous without any evil intent. For instance, racial biases have been found in healthcare allocation algorithms in the US, and similar biases have been discovered in facial recognition technologies used by law enforcement. Despite AI’s ‘limited’ ability, these biases have wide-ranging harmful effects.

Bias is caused by the data used to train AI. In the racial-bias cases above, the training data was not representative of the population. Another instance occurred in 2016, when an AI-powered chatbot was discovered to be sending extremely rude and racist comments; this turned out to be the result of the bot learning from the offensive messages people sent it.
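The mechanism described above can be sketched in a few lines of Python. The groups, labels, and toy “model” below are hypothetical, a minimal illustration of how unrepresentative training data produces skewed outcomes, not a depiction of any real system:

```python
from collections import Counter, defaultdict

# Hypothetical training data: (group, qualified) pairs.
# Group "A" is well represented; group "B" barely appears,
# and the few "B" examples happen to be negative.
training_data = [("A", True)] * 80 + [("A", False)] * 20 + [("B", False)] * 3

# "Training": record the outcomes observed for each group.
counts = defaultdict(Counter)
for group, outcome in training_data:
    counts[group][outcome] += 1

def predict(group):
    """Predict the majority outcome observed for this group."""
    return counts[group].most_common(1)[0][0]

# The model now rejects every "B" applicant, not out of malice,
# but because its training data was not representative.
print(predict("A"))  # True
print(predict("B"))  # False
```

Nothing in the code mentions race or intent; the harmful pattern emerges purely from the skewed sample, which is exactly the failure mode found in the healthcare and facial-recognition cases.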

The American political blogger and journalist Kevin Drum recently wrote an essay titled “You Will Lose Your Job to a Robot—and Sooner Than You Think,” in which he makes a strong case for why the future of labor is dismal. His main thesis is that artificial intelligence (AI) is growing exponentially. According to his article, by 2035 AI will be one-tenth as powerful as a human brain, and by 2045 it will be fully human-level.
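A quick back-of-the-envelope check shows why those two dates are consistent with exponential growth. The numbers below simply restate the article’s claim (one-tenth of a brain in 2035, a full brain in 2045), so capability must grow tenfold per decade:

```python
import math

# The article's claim implies a 10x capability increase between
# 2035 and 2045, i.e. tenfold growth per decade.
growth_per_decade = 10

# Solve 2^(10 / doubling_time) = 10 for the implied doubling time.
doubling_time = 10 * math.log(2) / math.log(growth_per_decade)
print(round(doubling_time, 1))  # 3.0
```

In other words, the article’s timeline amounts to AI capability doubling roughly every three years, which is why the final jump from “one-tenth of a brain” to “a full brain” takes only a single decade.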
As history has shown, the definition of work will evolve in the future. The following category shifts are to be anticipated in the context of AI:
Automation is the most popular application of AI and is currently gaining ground quickly in the market. According to a 2018 analysis by the World Economic Forum (WEF), the Swiss think tank, AI would eliminate 75 million jobs globally by 2022 while also creating 133 million new ones. The new job descriptions will require data-science-specific abilities, such as expertise in programming, data mining, data wrangling, software engineering, and data visualization.

AI will also take over jobs that employ people and make them obsolete. For example, in healthcare fields such as radiology or dermatology, where AI is already being used as a diagnostic tool, hospitals will need fewer radiologists or pathologists on staff, because computers can now do the same work as humans with far lower training and experience requirements.
In addition to taking over our jobs as workers, artificial intelligence is also changing how we live our lives:
• We’ll have new ways of doing things. For example, you might use a ride-hailing app like Uber, or a navigation app like Waze, instead of driving yourself around town, especially if your own car has become unreliable due to poor maintenance or even theft (which happens more often than you might think!).

How can engineers design strong artificial intelligence (AI) systems that are good for
society? Our AI systems must “do what we want them to do”; therefore, human control over AI
must be maintained. The necessary research is interdisciplinary, drawing from fields such as
economics, law, and various branches of computer science, including computer security and
formal verification. There are four categories of challenges: verification (“Did I create the system right?”), validity (“Did I design the right system?”), security, and control (“OK, I built the system badly; can I repair it?”). Some concerns are already on the near horizon, such as self-driving cars and civilian drones. For instance, in an emergency, a self-driving car could have to choose between a high likelihood of a minor accident and a low risk of a severe one. Other difficulties include how best to manage the economic effects of occupations being replaced by AI, and privacy concerns as AI grows more capable of analyzing big surveillance datasets.
There is still much work to be done to develop and evaluate a reliable solution to the "control dilemma," as existing AI methods like reinforcement learning and straightforward utility functions are insufficient to address this issue.

Numerous experts concurred that AI posed a risk if misused. “Robots and AI systems do not need to be sentient to be harmful; they just need to be effective instruments in the hands of people who seek to hurt others,” says Dr. George Montanez, an AI expert from Harvey Mudd College.

AI has the potential to be a threat, but it also has the potential to be a great thing for our future. It’s up to us, as programmers and as a species, whether we want AI or not.
Sources:
https://www.sciencealert.com/here-s-why-ai-is-not-an-existential-threat-to-humanity
https://en.m.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
https://rockcontent.com/blog/artificial-intelligence-pros-and-cons/
