
WHEN TECHNOLOGY AND HUMANITY CROSS

PRESENTED BY GROUP 4
• The ever-growing society has made people see technology as a necessity. Tracing back its origins, the word “technology” comes from the Greek words techne and logos, which mean art and word respectively.
TELEVISION SETS, MOBILE PHONES,
COMPUTERS AND HUMANITY
• In 1907, two inventors, Alan Archibald Campbell-Swinton, an English scientist, and Boris Rosing, a Russian scientist, created a new television system by using the cathode ray tube in addition to the mechanical scanner system.
• This success story gave rise to two types of television systems, the
mechanical and electronic television.
• According to Kantar Media, one of the most trusted television audience measurement providers, 92 percent of urban homes and 70 percent of rural homes in the Philippines own at least one television set.
• The count of households with a television set has already reached 15.135 million (Noda, 2012).
TELEVISION SYSTEMS
• Mechanical
• Electronic
FIRST MOBILE PHONE

• Mobile phones have a very interesting background story. On April 3, 1973, Martin Cooper, a senior engineer at Motorola, made the world’s first mobile phone call.
• He called their rival telecommunications company and informed them that he was making the call from a mobile phone.
• The mobile phone used by Cooper weighed 1.1 kilograms and measured 228.6 x 127 x 44.4 mm.
• The device was capable of 30 minutes of talk time. However, it took 10 hours to charge.
• In 1983, Motorola made its first commercial mobile phone available to the public.
• It was known as the Motorola DynaTAC 8000X.
COMPUTERS
• Charles Babbage, a 19th-century English mathematics professor, designed the Analytical Engine, which has served as the basic framework of computers even to the present time.
• In general, computers can be classified into three generations. Each generation was used for a certain period of time, and each gave people a new and improved version of the previous one.
• Laptops have been available to the public for even less time than personal computers.
• Early computer designs were so big that a computer could occupy whole floors of buildings.
• The first true portable computer, released in April 1981, was called the Osborne 1.
Roles Played by These Technological
Advancements

• Television, for instance, is mainly used as a platform for advertisements and information dissemination. In fact, television remains the most used avenue for different advertising companies, not only in the Philippines but all over the world.
• Television is also a good platform for different propaganda and advocacy campaigns. Lastly, it can be a good way to bond with one’s family members.
• Mobile phones, on the other hand, also have their own roles in people’s lives. They are primarily used for communication, offering services like texting and calling.
• At present, people use their mobile phones to surf the internet and take pictures more than to text or call.
• A mobile phone is like an all-in-one device: very portable and convenient because it can fit into any space, be it a pocket or a bag.
• Personal computers and laptops also have a useful set of functions and roles.
• People prefer to do their jobs using either a PC or a laptop rather than a mobile phone. One reason is that a laptop has a wider keyboard than a mobile phone, especially when the phone has a small screen.
• Another reason is that the availability of a mouse or a touchpad makes them easier to maneuver than mobile phones.
• Lastly, for the youth who love to play different computer games, PCs are really the better choice because they allow them to play with comfort and convenience.
ETHICAL DILEMMA FACED BY THESE
TECHNOLOGICAL ADVANCEMENTS
• Most parents would argue that these devices make their children lazy and unhealthy.
• Some people are more likely to experience alienation because they no longer take time to get out of their houses and mingle with other people.
• Moral dilemma: people, especially children who are not yet capable of rationally deciding for themselves what is right or wrong, are freely exposed to different things on television, mobile phones, PCs, etc.
ROBOTICS AND HUMANITY
• A robot is an actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment to perform intended tasks.
• Autonomy is the ability to perform intended tasks based on current state and sensing, without human intervention.
SERVICE ROBOTS – perform useful tasks for humans

PERSONAL SERVICE ROBOTS – used for non-commercial tasks

PROFESSIONAL SERVICE ROBOTS – used for commercial tasks and operated by a properly trained operator
Roles Played by Robotics
• Robots play different roles not only in the lives of people but also in society as a whole.
• They are primarily used to ease the workload of mankind. They were invented to make life more efficient and less stressful.
• They perform complicated activities which human beings are incapable of doing, and they perform the simplest tasks at home so that their masters can perform the complex ones without stressing themselves over the simple tasks.
• There are also robots made for pleasure; these types of robots perform activities to entertain people.
ETHICAL DILEMMAS FACED BY
ROBOTICS
• SAFETY – the safety not only of the owner of the technology but of all the people inside the house should be the priority more than anything else.
• EMOTIONAL COMPONENT – it would be just right for robots to be given their own set of rights should they develop the ability to feel different kinds of emotion. It can be argued that the same thing happened with animals.
DEVELOPMENT OF ARTIFICIAL INTELLIGENCE
WHAT IS ARTIFICIAL INTELLIGENCE?
• Artificial Intelligence is a method of making a computer, a computer-controlled robot, or software think intelligently like the human mind. AI is accomplished by studying the patterns of the human brain and by analyzing the cognitive process; the outcome of these studies is intelligent software and systems.
A Brief History of Artificial Intelligence
• 1956 - John McCarthy coined the term ‘artificial intelligence’ and held the first AI conference. He defined AI as “the science and engineering of making intelligent machines”.
• 1969 - Shakey, the first general-purpose mobile robot, was built. It was the first mobile robot able to perceive and reason about its surroundings. This early robot became an archetype from which subsequent robots were built and significantly influenced modern robotics and AI techniques.
• 1997 - The supercomputer ‘Deep Blue’ was designed, and it defeated the world champion chess player in a match. Creating this large computer was a massive milestone for IBM.
• 2002 - The first commercially successful robotic vacuum cleaner was created.
• 2005-2019 - Speech recognition, robotic process automation (RPA), a dancing robot, smart homes, and other innovations made their debut.
• 2020 - Baidu released the LinearFold AI algorithm to scientific and medical teams developing a vaccine during the early stages of the SARS-CoV-2 (COVID-19) pandemic. The algorithm can predict the secondary structure of the virus’s RNA sequence in only 27 seconds, which is 120 times faster than other methods.
USES AND CHALLENGES OF ARTIFICIAL INTELLIGENCE
AI is used in different domains to give insights into user behavior and to give recommendations based on the data. For example, Google’s predictive search algorithm uses past user data to predict what a user will type next in the search bar. Netflix uses past user data to recommend what movie a user might want to see next, keeping the user hooked on the platform and increasing watch time. Facebook uses users’ past data to automatically suggest tags for your friends, based on the facial features in their images. AI is used everywhere by large organizations to make an end user’s life simpler.
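
As a toy illustration of recommending from past user data, here is a minimal, hypothetical sketch of frequency-based “predictive search” in Python; the query history and the suggestion rule are assumptions for the example, not Google’s actual algorithm:

```python
# Minimal sketch of predictive search: suggest the most frequent past
# query that starts with what the user has typed so far. Purely
# illustrative; real systems use far richer models and signals.
from collections import Counter

past_queries = [
    "weather today", "weather tomorrow", "weather today",
    "news today", "weather forecast",
]  # hypothetical search history

def suggest(prefix, history):
    """Return the most common past query beginning with the prefix."""
    matches = Counter(q for q in history if q.startswith(prefix))
    return matches.most_common(1)[0][0] if matches else None

print(suggest("weather", past_queries))  # -> 'weather today'
```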
Common Challenges of AI
1. Computing Power
The amount of power these power-hungry algorithms use is a factor keeping most developers away. Machine Learning and Deep Learning are the stepping stones of Artificial Intelligence, and they demand an ever-increasing number of cores and GPUs to work efficiently.
2. Human-Level Performance
This is one of the most important challenges in AI, one that has kept researchers on edge for AI services in companies and start-ups. These companies might boast of above 90% accuracy, but humans can do better in all of these scenarios. For example, let our model predict whether an image is of a dog or a cat. A human can predict the correct output nearly every time, reaching a stunning accuracy of above 99%. For a deep learning model to achieve similar performance would require unprecedented fine-tuning, hyperparameter optimization, a large dataset, and a well-defined and accurate algorithm, along with robust computing power, uninterrupted training on training data, and testing on test data.
3. Data Privacy and Security
The main factor on which all deep and machine learning models are based is the availability of data and resources to train them. Yes, we have data, but as this data is generated by millions of users around the globe, there are chances it can be used for bad purposes. For example, suppose a medical service provider offers services to 1 million people in a city, and due to a cyber-attack the personal data of all one million users falls into the hands of everyone on the dark web. This data includes data about diseases, health problems, medical history, and much more. To make matters worse, we are now dealing with planet-size data.
TYPES OF ARTIFICIAL INTELLIGENCE
SYMBOLIC ARTIFICIAL INTELLIGENCE

An approach that trains Artificial Intelligence (AI) the same way the human brain learns: it learns to understand the world by forming internal symbolic representations of its “world”.
Benefits of Symbolic AI
Symbolic artificial intelligence is very convenient for settings where the rules are very clear cut and you can easily obtain input and transform it into symbols. In fact, rule-based systems still account for most computer programs today, including those used to create deep learning applications.
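
To make the idea concrete, here is a minimal, hypothetical sketch of a rule-based (symbolic) system in Python; the rules and facts are illustrative assumptions, not taken from any particular system:

```python
# Minimal sketch of a rule-based (symbolic AI) system: knowledge is
# encoded as explicit if-then rules over symbols rather than learned
# weights. Rules and facts below are illustrative assumptions.
RULES = [
    (lambda f: "has_fever" in f and "has_cough" in f, "possible_flu"),
    (lambda f: "has_fever" in f, "possible_infection"),
]

def infer(facts):
    """Fire the first rule whose condition matches the symbolic facts."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "unknown"

print(infer({"has_fever", "has_cough"}))  # -> 'possible_flu'
```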
CONNECTIONIST ARTIFICIAL
INTELLIGENCE
An approach to artificial intelligence (AI) that developed
out of attempts to understand how the human brain
works at the neural level and, in particular, how people
learn and remember.
A system made with connectionist AI gets smarter as it
is exposed to more data and learns the patterns and
relationships.
The most popular technique in this category is the Artificial Neural Network (ANN). This consists of multiple layers of nodes, called neurons, that process some input signals, combine them together with some weight coefficients, and squash them to be fed to the next layer.

[Figure: A section of an artificial neural network. The weight, or strength, of each input is indicated by the relative size of its connection. The firing threshold for the output neuron, N, is 4 in this example; hence, N is quiescent unless a combination of input signals is received from W, X, Y, and Z that exceeds a weight of 4.]
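
To make the figure’s description concrete, here is a minimal Python sketch of such a threshold neuron; the input signals and weight values are assumptions chosen so the weighted sum crosses the threshold of 4:

```python
# Minimal sketch of the figure's output neuron N: inputs from W, X, Y, Z
# are combined with weight coefficients, and N fires only when the
# weighted sum exceeds its firing threshold of 4.
def neuron_output(inputs, weights, threshold=4.0):
    """Weighted sum of the inputs; fire (1) if it exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

signals = [1, 1, 0, 1]          # activity on inputs W, X, Y, Z (assumed)
weights = [2.0, 1.5, 3.0, 1.0]  # connection strengths (assumed)
print(neuron_output(signals, weights))  # 2.0 + 1.5 + 1.0 = 4.5 > 4 -> 1
```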
EVOLUTIONARY ARTIFICIAL INTELLIGENCE

An evolutionary algorithm is an AI-based computer application that solves problems by employing processes that mimic the behaviors of living things. As such, it uses mechanisms typically associated with biological evolution, such as reproduction, mutation, and recombination.
How Does Evolutionary AI Work?
EAs are inspired by the concepts of Darwinian evolution. In EAs, the solutions play the role of individual organisms in a population. The population of potential solutions to a problem is first populated randomly. Then the population is tested for fitness – how well and how quickly it solves the problem. Next, the fittest individuals are selected for reproduction. The cycle begins again as the fitness of the population is evaluated and the least fit individuals are eliminated.
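
Here is a minimal, hypothetical sketch of this cycle in Python: a random population is scored for fitness, the fittest half is selected, and offspring are produced by recombination and mutation while the least fit are eliminated. The toy problem (maximize the number of 1-bits in a bit string) is an illustrative assumption:

```python
import random

GENES, POP, GENERATIONS = 20, 30, 100

def fitness(individual):
    """Toy fitness: count of 1-bits in the bit string."""
    return sum(individual)

def crossover(a, b):
    """Recombination: splice two parents at a random cut point."""
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(individual, rate=0.05):
    """Flip each gene with a small probability."""
    return [1 - g if random.random() < rate else g for g in individual]

# The population of potential solutions is populated randomly first.
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)  # test population for fitness
    parents = population[: POP // 2]            # fittest selected to reproduce
    offspring = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP - len(parents))
    ]
    population = parents + offspring            # least fit are eliminated

print(fitness(population[0]), "of", GENES)      # best solution found
```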
PHILOSOPHICAL DEBATE OVER
ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) would be the possession of intelligence, or the
exercise of thought, by machines such as computers. Philosophically,
the main AI question is “Can there be such?” or, as Alan Turing put it,
“Can a machine think?” What makes this a philosophical and not just a
scientific and technical question is the scientific recalcitrance of the
concept of intelligence or thought and its moral, religious, and legal
significance.
Since computers give every outward appearance of performing
intellectual tasks, the question arises: “Are they really thinking?” And if
they are really thinking, are they not, then, owed similar rights to
rational human beings?
A complication arises if humans are animals and if animals are
themselves machines, as scientific biology supposes. Still, “we wish to
exclude from the machines” in question “men born in the usual
manner” (Alan Turing). And if nonhuman animals think, we wish to
exclude them from the machines, too.
Accordingly, the scientific discipline and engineering enterprise of AI
has been characterized as “the attempt to discover and implement the
computational means” to make machines “behave in ways that would
be called intelligent if a human were so behaving” (John McCarthy), or
to make them do things that “would require intelligence if done by
men” (Marvin Minsky).
These standard formulations duck the question of whether deeds
which indicate intelligence when done by humans truly indicate it when
done by machines: that’s the philosophical question. So-called weak
AI grants the fact (or prospect) of intelligent-acting machines; strong
AI says these actions can be real intelligence. Strong AI says some
artificial computation is thought. Computationalism says that all
thought is computation. Though many strong AI advocates are
computationalists, these are logically independent claims: some
artificial computation being thought is consistent with some thought
not being computation, contra computationalism.
THE FUTURE OF ARTIFICIAL INTELLIGENCE
Undoubtedly, Artificial Intelligence (AI) is a revolutionary field of computer science, which is ready to become the main component of various emerging technologies like big data, robotics, and IoT. It will continue to act as a technological innovator in the coming years. In just a few years, AI has turned from fantasy into reality. Machines that help humans with intelligence are not just in sci-fi movies but also in the real world.
We are using AI technology in our daily lives either unknowingly or knowingly, and it has become a part of our life. Ranging from Alexa/Siri to chatbots, everyone carries AI in their daily routine.
Things that will change in the Future:
• Transportation: Although it could take some time to perfect them, autonomous cars will
one day ferry us from place to place.
• Manufacturing: AI-powered robots work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive analysis sensors keep equipment running smoothly.
• Healthcare: In the comparatively AI-nascent field of healthcare, diseases are more
quickly and accurately diagnosed, drug discovery is sped up and streamlined, virtual
nursing assistants monitor patients and big data analysis helps to create a more
personalized patient experience.
• Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist
human instructors and facial analysis gauges the emotions of students to help determine
who’s struggling or bored and better tailor the experience to their individual needs.
INFORMATION AGE
Influences of the past on the Information Age
What is the Information Age?
It is defined as a “period starting in the last quarter of the 20th century when information became effortlessly accessible through publications and through the management of information by computers and computer networks.”
The Renaissance influenced the Information Age through the ideas behind its inventions: while too advanced for their time, their basic ideas were used to develop modern inventions. The Renaissance also changed literature. At first, only books that told stories of religion and religious heroes were written. During the Renaissance, people began to write realistic books and not just religious stories. People’s mindset about themselves changed: it was no longer about what humans could do for God, but what humans could do for themselves. This way of thinking is called humanism.
HISTORY
The table below traces the history and emergence of the Information Age.

YEAR       EVENT
3000 BC    Sumerian writing system used pictographs to represent words
2900 BC    Beginnings of Egyptian hieroglyphic writing
100 AD     Book (parchment codex)
105 AD     Woodblock printing and paper were invented by the Chinese
1455       Johannes Gutenberg invented the printing press with movable metal type
1802       Invention of the carbon arc lamp
1830s      First viable design for a digital computer
1861       Motion pictures were projected onto a screen
1899       First magnetic recordings were released
1902       Motion picture special effects were used
1923       Television camera tube was invented by Zworykin
1926       First practical sound movie
1940s      Beginnings of information science as a discipline
1946       ENIAC computer was developed
1971       Intel introduced the first microprocessor chip
1975       Altair Microcomputer Kit was released: the first personal computer for the public
1977       RadioShack introduced the first complete personal computer
1984       Apple Macintosh computer was introduced
Mid-1980s  Artificial Intelligence was separated from information science
1987       HyperCard was developed by Bill Atkinson (recipe box metaphor)
1991       Four hundred fifty complete works of literature on one CD-ROM were released
1997       RSA (encryption and network security software) internet security code cracked for a 48-bit number
THANK YOU FOR
LISTENING!!!
