
Artificial Evolution

Cechi Dan Alexandru


Spiru Haret National College, XI A 2023
Coordinator: Ana Durac
I. Introduction
This paper will discuss the evolution of AI from its early
beginnings to the present day, including the first wave of AI, the
rise of machine learning, the emergence of cognitive computing,
and the current state of this technology. The paper will also
examine some of the ethical considerations surrounding AI and
what the future of its development holds.

AI is a branch of computer science that involves the
development of intelligent machines capable of performing tasks
that typically require human intelligence, such as perception,
reasoning, learning, decision-making, and natural language
processing. AI systems use algorithms and statistical models to
analyze and make predictions from data, learn from experience,
and adapt to new situations.

Nowadays, AI is becoming increasingly important in our lives.
This disruptive technology has already transformed many
industries, including healthcare, finance, manufacturing, and
transportation, and it is on a trajectory to transform every aspect
of our existence. Concrete examples of its present use cases
include designing new molecules, predicting regions of poverty
by analyzing satellite imagery, and even AI-powered cancer
screenings that have already saved thousands of lives. Its impact
on society as a whole cannot be overstated.

II. The Emergence of AI

a. Origins

The origins of AI can be traced back to the 1950s, with the
development of the first electronic computers. The first AI
pioneers, including John McCarthy, Marvin Minsky, and Claude
Shannon, laid the groundwork for the development of AI.

One of the first real applications of AI was Theseus, a
remote-controlled, life-sized robotic mouse created by Claude
Shannon, who is considered by many the father of information
theory. The robotic mouse and the maze were designed in such a
way that when Theseus was placed inside the maze and switched
on from its controller, it could autonomously learn the most
efficient path from point A to point B by going through the maze
multiple times. Fig. 1 shows the mouse's first run through the
maze, its path highlighted with the yellow line, and Fig. 2 shows
its second run, with notable improvements. (Klein, 2018)
This invention also inspired one of the best-known robotics
competitions, "Micromouse", in which competitors build the
fastest maze-solving robotic mouse they can out of electronics.
(Fig. 1) (Fig. 2)
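Shannon's Theseus found its route with relay circuits and trial and error; on a modern computer, the shortest path through such a maze is classically found with breadth-first search. The sketch below is a modern analogue, not Shannon's mechanism, and the maze layout is invented for illustration:

```python
from collections import deque

# A small illustrative maze: '#' is a wall, '.' is open space,
# 'A' is the start and 'B' is the goal.
MAZE = [
    "#######",
    "#A..#.#",
    "#.#.#.#",
    "#.#...#",
    "#.###.#",
    "#....B#",
    "#######",
]

def shortest_path(maze):
    """Breadth-first search from 'A' to 'B'; returns the list of visited cells."""
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "A")
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if maze[r][c] == "B":
            return path  # first path to reach B is guaranteed shortest
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if maze[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None
```

Unlike Theseus, which improved over repeated runs, breadth-first search finds the optimal route in a single systematic sweep; the comparison mainly shows how far the tooling has come.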
b. The first wave

The first wave of AI started in the 1950s and 1960s, with the
development of the first electronic computers. The term
"Artificial Intelligence" was coined by John McCarthy in 1956
at the Dartmouth Conference, where a group of researchers came
together to discuss the possibility of creating intelligent
machines. Although this breakthrough came about recently in
the history of mankind, the concept of the computer, which
powered the advent of AI, is far from modern. For example,
(Fig. 3) shows the Antikythera mechanism, an Ancient Greek
device designed to calculate and display information about
astronomical phenomena, which is considered the first ever
analogue computer.

The focus of the first wave of AI was on the development of
expert systems, which were rule-based systems that could solve
complex problems in a particular domain. Expert systems used a
knowledge base and a set of rules to reason about a problem and
provide a solution. They were used to take over experts' routine
tasks, freeing the experts to work on more demanding
problem-solving tasks.
DENDRAL was such a system. Developed in 1965 by Edward
Feigenbaum and Joshua Lederberg at Stanford University
(Fig. 4), it was used as a chemical-analysis expert system,
designed to identify unknown organic molecules with the help
of their mass spectra and a vast knowledge base of chemistry.
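The knowledge-base-plus-rules pattern that systems like DENDRAL embodied can be sketched in a few lines. The facts and rules below are invented toy chemistry, not DENDRAL's actual knowledge base; the point is the forward-chaining loop, which keeps firing rules until no new conclusions appear:

```python
# A toy rule-based system in the spirit of first-wave expert systems.
# Each rule is (set of required facts, conclusion to add).

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules for illustration only.
RULES = [
    ({"has OH group", "carbon backbone"}, "is an alcohol"),
    ({"is an alcohol", "one carbon"}, "is methanol"),
]

derived = forward_chain({"has OH group", "carbon backbone", "one carbon"}, RULES)
```

The brittleness described below follows directly from this design: any observation not anticipated in the rule set simply never fires anything.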

However, the limitations of first-wave AI soon became apparent.
Expert systems were brittle, meaning that they could only solve
problems within a specific domain and could not handle
unexpected situations or new data. They were also difficult and
expensive to develop and maintain, and they could not learn or
adapt to new situations. These limitations led to the decline of
first-wave AI in the 1980s, and the focus shifted to machine
learning and neural networks in the second wave of AI.

c. The second wave

The second wave of AI began in the 1980s and was focused on
the development of machine learning algorithms and neural
networks.

Machine learning algorithms are mathematical models that learn
to uncover the underlying patterns embedded in data. They are
automated and self-modifying, so they keep improving over time
as they are exposed to more data. Neural networks, one
influential class of such models, consist of interconnected nodes,
or neurons, arranged in a layered structure loosely resembling
the human brain.
There are many different types of machine learning algorithms,
which can be broadly grouped into three categories: supervised
learning algorithms, unsupervised learning algorithms, and
reinforcement learning algorithms.
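The idea of a neuron adjusting its weights from data can be shown with the simplest possible case: a single perceptron learning the logical AND function from examples. This is only a sketch; real second-wave networks stacked many such neurons in layers, and the data and hyperparameters here are invented for illustration:

```python
import random

# Training examples for logical AND: ((input1, input2), target output).
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(weights, bias, x):
    # The neuron "fires" (outputs 1) when the weighted sum exceeds the threshold.
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

def train(epochs=50, lr=0.1, seed=0):
    """Perceptron learning rule: nudge weights in the direction of the error."""
    random.seed(seed)
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for x, target in DATA:
            error = target - predict(w, b, x)  # -1, 0, or +1
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b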

Of the three main types, reinforcement learning algorithms are
perhaps the most widely recognizable; they involve a computer
learning to make decisions in a pre-defined environment in
order to maximize a reward signal.
Game-playing programs are the classic showcase of machine
intelligence: IBM's Deep Blue defeated Garry Kasparov, the
world chess champion, in 1997 (Fig. 5), although Deep Blue
itself relied mainly on brute-force search, while later
game-playing systems learned their strategies through
reinforcement learning.
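The reward-maximizing loop described above can be sketched with tabular Q-learning, one of the classic reinforcement learning algorithms. The environment (an agent walking a five-cell corridor toward a reward in the last cell) and the hyperparameters are invented for illustration:

```python
import random

N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2    # learning rate, discount, exploration

def train(episodes=300, seed=1):
    """Learn a table Q[state][action] estimating future reward for each choice."""
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # Epsilon-greedy: explore occasionally, otherwise act greedily.
            if random.random() < EPSILON:
                a = random.randrange(2)
            else:
                a = 0 if Q[state][0] >= Q[state][1] else 1
            nxt = max(0, min(N_STATES - 1, state + ACTIONS[a]))
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            # Q-learning update: move the estimate toward
            # (immediate reward + discounted best future value).
            Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
            state = nxt
    return Q
```

After training, acting greedily on the learned table walks the agent straight to the reward; no rule ever told it that "right" was correct, only the reward signal did.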

However, the limitations of second-wave AI were also apparent.
Neural networks were difficult to train and required a large
amount of labeled data. They were also black boxes, meaning
that it was difficult to understand how they arrived at a decision,
which impeded further advancements. All these limitations
combined led to the emergence of the third wave of AI in the
2010s, with cognitive computing becoming the industry's next
focus.

d. The third wave

Cognitive computing is a combination of machine learning,
natural language processing, and knowledge representation that
enables computers to understand and reason with human-like
intelligence.

IBM Watson is one of the most well-known examples of
cognitive computing. Watson is a question-answering system
that uses natural language processing and machine learning
algorithms to understand and answer questions posed in natural
language. As shown in (Fig. 6), Watson competed in "Jeopardy!",
an American TV show where players need to answer quiz-like
questions, and won.

The emergence of cognitive computing has raised significant
ethical considerations surrounding AI.
One concern is the potential for AI systems to perpetuate biases
and discrimination in decision-making. One area in which this
effect is already felt is the hiring process, where candidates with
"white-sounding names" are given a far greater chance of
ending up on the hiring manager's screen, while people with
less common names receive less consideration.

Another, even bigger concern, predicted to grow in the near
future, is that an estimated 80% of workers will have their jobs
impacted by AI, while a staggering 25% of them may have their
jobs taken over entirely by these machines. While this is not the
first such shift in the economy, as is often the case with new
technologies, the sheer magnitude and implications are not to be
taken lightly.
On a more positive note, as the technology will likely replace
repetitive and monotonous work first, there may be a
productivity boom in the coming decades unlike any that has
come before.

The third wave of AI is still in its early stages, and the future of
AI is uncertain. However, it is clear that AI will continue to
transform our lives and, if we act carefully and keep the
technology under our control, create a wealth of new
opportunities and prosperity for all of us. The development of
AI will require collaboration between researchers, policymakers,
and industry leaders to ensure that AI is developed and used in a
responsible and ethical manner. This is what is called
"The Alignment Problem", which, in short, poses the question:
"How will we ensure that the goals of AI systems are the same
as our own?"

This is not a question to be posed lightly.

III. The paradigm of the present

Considering we are only at the beginning of the third AI wave,
we cannot be sure what the next couple of years will hold in
terms of AI development, but even so, there are already many
remarkable tools available today, including:
1. ChatGPT
ChatGPT is an AI chatbot developed by
OpenAI and released on the 30th of November
2022 that uses natural language processing to
create human-like conversational dialogue. It is
the most widespread AI tool, as it was the first to give users a
seamless conversational interaction with a machine. It is a form
of generative AI that can
respond to questions and compose various written content,
including articles, social media posts, essays, code, emails, and
all sorts of other creative use cases.

2. Jasper
Considered to be the best marketing tool out
there, Jasper AI is most well known for its
amazing digital copywriting capabilities.
While ChatGPT can also create general
writing, companies like Jasper have honed
in on personalized writing for businesses
and individuals to use. Moreover, the more this tool is used,
the more accurately its responses match the style you want it
to convey.
3. Perplexity AI
If ChatGPT had a more sophisticated brother
with capabilities such as querying the most
relevant web articles, journals, Wikipedia links
and even academic research papers, that would be Perplexity AI.
As a matter of fact, it was the main tool used to create this
paper, and it can forever change the way someone searches for
real, factual information, as every citation used in a generated
response is there for the user to check for validity.

4. Strofe
On a more musical note, Strofe is a modern AI
tool that lets its users create and hone in on the
perfect melody, without any prior experience
creating music. The aim of this tool is to give
everyone the ability to express themselves
through music while also streamlining the
process of creating that music.

5. Descript
While on the topic of user-generated sounds, what about
generating a whole new artificial persona that
can say and act out exactly the words you give
it without you ever needing to say anything?
That's what Descript offers. You give it a name, a script to
speak, and a model of how you want the artificial persona to
look, and the tool outputs a video of the persona delivering
your text
6. Tome.AI
Maybe generating a whole persona wasn't enough, and you want
your persona to actually have a presentation to deliver? Well,
hop on to Tome.AI and, with one single prompt, get a whole
PowerPoint presentation on the specific topic you want covered,
filled with AI-generated pictures and almost perfect factual
information. Creating a presentation has never been easy, but
now, in just 30 seconds, you can have the bare bones of a
professional one.

7. Scribe
The idea of having to learn the exact
ways in which a company does its
business when entering a new job can
be daunting. Moreover, it’s usually
impossible for new employees to be trained by their managers
and be shown all the exact ways in which they should interact
with the company software, which is why many companies
choose to create documentation. Fortunately, with a tool like
Scribe, anyone can record a step-by-step guide for any computer
task without ever having to write anything down. The tool
records the user's actions, creates a shareable guide, and lets
the user rename steps and add only the essential text needed to
describe what is happening in each screenshot; the rest is taken
care of automatically.

IV. The future: Where is AI heading?


There are still many challenges facing AI, including the
limitations of current algorithms, ethical concerns surrounding
AI, and the need for greater collaboration and regulation in the
development and use of AI.

The future of AI is likely to involve the development of more
sophisticated algorithms and models, as well as the deeper
integration of AI with other emerging technologies, such as
blockchain and the Internet of Things. Additionally, there will
be a growing focus on the ethical and social implications of AI,
as well as the need for greater transparency and accountability in
AI decision-making.
The need for such considerations is reflected in the creation of
"Universal Basic Income" research efforts at leading firms such
as OpenAI, which aim to analyze the possibility of giving
everyone a universal income in an age when many jobs will be
taken over by machines, so as to attenuate the effect these
technologies will have. The CEO of OpenAI wrote a lengthy
blog post on the subject called "Moore's Law for Everything".

The development and adoption of AI will have significant
implications for society, including changes in the job market,
new opportunities for innovation and growth, and potential risks
and challenges related to privacy, security, and accountability,
and so the need for careful planning is of paramount importance.

In conclusion, the evolution of AI from its dawn to the present
paradigm and beyond has been a fascinating journey, with many
twists and turns along the way. While there are still many
challenges and uncertainties facing AI, there is also great
potential for AI to transform our lives in positive ways. As we
continue to develop and refine AI technologies, it is important to
ensure that we do so in a responsible and ethical manner, with a
focus on the well-being of society as a whole.
