
MANUEL MARÍA TEJADA ROCA HIGH SCHOOL

Topic:
ARTIFICIAL INTELLIGENCE

Prepared by: CRISTIAN J. CARRERA


Grade: 12ª A SCIENCE
OCTOBER 23, 2023
INTRODUCTION

Today, artificial intelligence (AI) is a very popular topic. There is much talk
about how this technology will revolutionize areas such as transportation,
health, and finance, and also about the risks it may pose to humanity, such as
destructive use or misinformation. But what is AI, really? In this article I
will give you a basic overview of AI and its history, so that you can
understand it and explain it to anyone in a simple way. This is an important
subject that we must understand to advance in an increasingly technological world.
WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial Intelligence refers to the ability of machines to perform tasks that would normally require
human intelligence, such as learning, problem solving, and decision making. This is achieved by
developing algorithms and computer programs that simulate human cognition and behavior. AI
technology has the potential to revolutionize industries and transform society's way of life.

AI takes many forms, each with its own capabilities and limitations. Rule-based AI involves creating
a set of rules that the computer must follow to produce a specific result. Machine learning is the
process by which a computer learns and improves its performance through exposure to data. Deep
learning uses neural networks to process and analyze large amounts of data. Natural language
processing is another type of AI that teaches machines to understand and interpret human language.
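The difference between rule-based AI and machine learning can be sketched with a toy spam filter. This is only an illustration with hypothetical function names, not code from any real system: in the first function the programmer writes the rule by hand, while in the second the rule (a per-word score) is derived from labeled examples.

```python
# Rule-based AI: the programmer hand-writes the rule the computer follows.
def rule_based_spam(message):
    banned_words = {"prize", "winner", "free"}
    return any(word in message.lower() for word in banned_words)

# Machine learning: the rule is derived from data instead of written by hand.
def learn_word_scores(examples):
    """examples: list of (message, is_spam) pairs; returns per-word scores."""
    scores = {}
    for message, is_spam in examples:
        for word in message.lower().split():
            scores[word] = scores.get(word, 0) + (1 if is_spam else -1)
    return scores

def learned_spam(message, scores):
    # Classify by summing the learned scores of the words in the message.
    return sum(scores.get(word, 0) for word in message.lower().split()) > 0
```

The rule-based filter can only ever catch the words its author thought of; the learned filter picks up whatever words the training data associates with spam, which is exactly the trade-off the paragraph above describes.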
HISTORY OF ARTIFICIAL INTELLIGENCE
The journey of AI began in the mid-20th century, when pioneering researchers strove to create
machines that could imitate human behavior. Since then, the AI sector has made significant
advances in areas such as machine learning, natural language processing, and robotics.

The main historical events that occurred during these more than 70 years of evolution have been
the following:

1. Dartmouth Conference (1956): At this conference at Dartmouth College, several
scientists, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon,
coined the term "artificial intelligence" and laid the foundations for the study of systems that
simulate human intelligence.
2. Lisp programming language: Developed by John McCarthy in the late 1950s. What makes it special
is its structure based on linked lists, which gives it great flexibility and the ability to manipulate
data efficiently. Furthermore, it is an extensible language, which means that programmers can modify
and extend it according to the needs of the program. In the field of artificial intelligence, Lisp
especially shines in areas such as natural language processing and knowledge representation.

3. The IBM Chess Program: In the late 1950s, IBM researchers created a chess program that ran on
the IBM 704 computer. Although primitive compared to modern programs, it was an important step in
applying logic and programming to simulate strategic thinking.
4. Turing Test (1950): Alan Turing, a British mathematician and computer scientist, proposed
this test as a way to evaluate a machine's ability to exhibit human-like intelligent behavior. The
Turing Test has become an important benchmark in artificial intelligence.

5. Neural networks: This technology dates back to the 1940s and 1950s. In 1943, the neurologist
and cyberneticist Warren McCulloch, together with the mathematician Walter Pitts, proposed a
mathematical model of a simplified neural network based on the functioning of biological
neurons. This was one of the first theories that laid the foundation for the development of
artificial neural networks.

6. Learning-capable neural networks: In the late 1950s, the psychologist and computer scientist
Frank Rosenblatt developed the Perceptron, considered the first artificial neural network model
capable of learning. The Perceptron could learn to recognize simple patterns and was used in
various image recognition applications.
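The two neuron models in items 5 and 6 above can be sketched in a few lines. This is a minimal illustration with hypothetical function names, not historical code: a McCulloch-Pitts-style threshold unit that simply fires when enough binary inputs are active, and the Perceptron rule, which adjusts its weights from labeled examples.

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """McCulloch-Pitts model (1943): fires (1) when the sum of binary inputs
    reaches the threshold. With threshold 2 and two inputs it computes AND;
    with threshold 1 it computes OR. It has no learning mechanism."""
    return 1 if sum(inputs) >= threshold else 0

def train_perceptron(samples, epochs=10, lr=1.0):
    """Perceptron rule: start from zero weights and nudge them toward the
    correct answer on every mistake. samples: list of (inputs, target)."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation >= 0 else 0
            error = target - prediction  # 0 when correct, +1 or -1 when wrong
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0
```

For example, trained on the four input/output pairs of logical OR, the Perceptron finds weights that classify all four correctly, something the fixed-threshold model can only do if the programmer picks the threshold by hand.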
WHAT IS ARTIFICIAL INTELLIGENCE FOR?

Artificial intelligence has been used in fields as varied as robotics, computer
science, finance, health, autonomous transportation systems, video games, and
communications. In these environments, machines are capable of handling large
amounts of data, which allows them to identify and understand verbal commands
and images and to perform complex calculations and actions very quickly. These
systems, consequently, serve to perceive their environment and interact with it,
as well as to act toward a specific objective after exhaustive data collection
and processing. In other words, it is technology applied to solving real-world tasks.
Some examples of how artificial intelligence is applied in different sectors:

Personal: assistance through smartphones, tablets, and computers.
IT: strengthening cybersecurity.
Productive: assembly and automation in factories and laboratories.
Financial: fraud detection.
Climate: reduction of deforestation and energy consumption.
Health: identification of genetic factors that allow earlier detection of diseases.
Transportation: manufacturing of autonomous and intelligent vehicles.
Agricultural: anticipation of environmental impact and improvement of agricultural performance.
Commercial: sales forecasting.
CONCLUSION
In conclusion, artificial intelligence is a rapidly growing field whose history
spans many years of research, with vast possibilities and real risks. While
there are legitimate concerns about the potential negative effects of AI, it is
important to apply ethical principles and create regulations that ensure AI is
used responsibly. By understanding AI, we can mitigate its negative effects and
use it to make our lives easier and improve the world. It is up to us to ensure
that artificial intelligence operates ethically and responsibly, so as to
achieve the greatest benefit for society as a whole.
Thank you…
