
Republic of the Philippines

UNIVERSITY OF EASTERN PHILIPPINES


University Town, Northern Samar
Website: http://uep.edu.ph Email: ueppres06@gmail.com

COLLEGE OF AGRICULTURE, FISHERIES, AND NATURAL RESOURCES


LIVING IN THE IT ERA
Name: SALAZAR, JACINT ROD P. Course & Year: BSA – AgEcon – 3

ARTIFICIAL INTELLIGENCE (AI)

Artificial Intelligence, or AI, has proven to be one of today's most transformative technological developments, and in the current world scenario it seems more promising than ever. Trends in AI technology such as AlterEgo, a mind-reading wearable, and citizen robots like Sophia are indicators of how big AI will become in the coming years. Drone and robot delivery are being tested by companies like Domino's and DoorDash, even though these emerging technologies are still riddled with flaws that pose a threat to human safety. AI is certain to remain a technological trend in the future given its immense potential; needless to say, companies like Nvidia, Google, Microsoft, and others are already adopting artificial intelligence in some way or another.

Artificial intelligence (AI) is the ability of a digital computer or a computer-controlled robot to accomplish tasks that are generally associated with intelligent beings. The term is usually applied to the endeavor of producing systems with human-like cognitive processes, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be taught to perform extremely complex jobs with great proficiency, such as discovering proofs for
mathematical theorems or playing chess. Nonetheless, despite ongoing increases in computer
processing speed and memory capacity, there are no programs that can match human flexibility
across broader areas or in tasks requiring a high level of everyday knowledge. On the other
hand, some programs have surpassed the performance levels of human experts and
professionals in performing specific tasks, so artificial intelligence in this limited sense can be
found in applications ranging from medical diagnosis to computer search engines to voice or
handwriting recognition.

Stanford’s John McCarthy, a founding figure in artificial intelligence, died in 2011 at the age of 84. McCarthy coined the term “artificial intelligence” and was a towering figure in computer science at Stanford for the majority of his career. During his
career, he created the programming language LISP, played
computer chess with opponents in Russia via telegraph, and
devised computer time-sharing.

The origins of modern artificial intelligence can be traced back to classical philosophers’ attempts to characterize human thought as a symbolic system. However, the field of artificial
intelligence was not formally established until 1956, at a meeting
at Dartmouth College in Hanover, New Hampshire, where the
phrase “artificial intelligence” was invented.
Most people associate artificial intelligence (A.I.) with machines learning to think like humans in sci-fi films like Star Wars and Terminator. A.I. and machine learning may appear to be a fantastical, far-flung concept, but in reality there is a vast range of current technological breakthroughs that make use of A.I., all of which empower us and assist us in carrying out our daily obligations. Here are some interesting examples of artificial intelligence that play a role in our daily lives.

Navigation apps, for instance: even your regular trip to and from work, believe it or not, involves artificial intelligence. A.I. is used in navigation apps such as Google Maps to examine the pace of traffic movement. It also uses user-reported incidents, such as traffic accidents or road construction, to anticipate how long it will take you to get to your destination, and it recommends the shortest route.
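To make that idea concrete, here is a minimal Python sketch (not Google's actual system; the road network, speeds, and incident penalty are invented for illustration). It estimates the travel time of each road segment from its current average speed plus a penalty for user-reported incidents, then uses Dijkstra's algorithm to pick the quickest route.

import heapq

# Hypothetical road network: (from, to) -> (distance_km, avg_speed_kmh, reported_incidents)
ROADS = {
    ("home", "junction"): (4.0, 40.0, 0),
    ("junction", "office"): (6.0, 50.0, 1),
    ("home", "highway"): (3.0, 60.0, 0),
    ("highway", "office"): (9.0, 80.0, 0),
}

INCIDENT_PENALTY_MIN = 5.0  # assumed extra minutes per reported incident

def travel_minutes(distance_km, speed_kmh, incidents):
    """Estimate minutes for one segment from live speed data and incident reports."""
    return distance_km / speed_kmh * 60.0 + incidents * INCIDENT_PENALTY_MIN

def fastest_route(start, goal):
    """Dijkstra's algorithm over estimated travel times."""
    graph = {}
    for (a, b), data in ROADS.items():
        graph.setdefault(a, []).append((b, travel_minutes(*data)))
    queue = [(0.0, start, [start])]   # (minutes so far, current node, path taken)
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph.get(node, []):
            heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

print(fastest_route("home", "office"))  # e.g. (9.75, ['home', 'highway', 'office'])

Real navigation apps work at a vastly larger scale and learn their travel-time estimates from live data, but the underlying routing idea is the same.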

Artificial intelligence is critical to our future since it is the cornerstone of computer learning. AI enables computers to harness huge volumes of data and utilize their learned
intelligence to make optimal judgments and discoveries in fractions of the time that people
would take. Artificial intelligence is now responsible for everything from cancer research
advances to cutting-edge climate change research.

AI is expected to have a long-term impact on almost every business imaginable. We already see artificial intelligence in our smart devices, cars, healthcare systems, and favorite
apps, and its influence will continue to permeate further into many other industries in the
foreseeable future.

EDGE COMPUTING

Edge computing is a current technology trend that brings data storage and computation closer to businesses, improving response times and reducing bandwidth usage. It is considered one of the latest technological trends because enterprises are rapidly gaining access to sophisticated and specialized resources, which is sure to reduce latency. Edge computing raises security to new heights by addressing local compliance, privacy legislation, and data sovereignty concerns. Although many people feel that edge computing increases the attack surface, it can actually reduce an enterprise's exposure. Other benefits of edge computing include enhanced speed and reduced costs, and top-tier companies like Dell, AWS, Google Cloud Platform, HPE, and IBM, among others, are readily adopting edge computing.

Edge computing is a distributed information technology (IT) architecture in which client data is processed at the periphery of the network, as close to the originating source as possible.

Data is the lifeblood of modern business, providing valuable business insight and
supporting real-time control over critical business processes and operations. Today's
businesses are awash in an ocean of data, and huge amounts of data can be routinely collected
from sensors and IoT devices operating in real time from remote locations and inhospitable
operating environments almost anywhere in the world.

But this virtual flood of data is also changing the way businesses handle computing. The traditional computing paradigm built on a centralized data center and everyday internet isn't well suited to moving endlessly growing rivers of real-world data. Bandwidth limitations, latency issues and unpredictable network disruptions can all conspire to impair such efforts. Businesses are responding to these data challenges through the use of edge computing architecture.

In simplest terms, edge computing moves some portion of storage and compute resources out of the central data center and closer to the source of the data itself. Rather than transmitting raw data to a central data center for processing and analysis, that work is instead performed where the data is actually generated, whether that's a retail store, a factory floor, a sprawling utility or across a smart city. Only the result of that computing work at the edge, such as real-time business insights, equipment maintenance predictions or other actionable answers, is sent back to the main data center for review and other human interactions.
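As a rough sketch of that pattern, the Python snippet below assumes a hypothetical temperature sensor on a factory floor: a large batch of raw readings is summarized locally on the edge device, and only the small summary (plus an alert flag) is forwarded, instead of streaming every sample to the central data center.

import random
import statistics

ALERT_THRESHOLD_C = 80.0  # assumed safe limit for this hypothetical sensor

def read_sensor_batch(n=1000):
    """Stand-in for raw readings from a local sensor (random data for illustration)."""
    return [random.gauss(65.0, 10.0) for _ in range(n)]

def process_at_edge(readings):
    """Reduce thousands of raw samples to a small summary, computed on site."""
    return {
        "count": len(readings),
        "mean_c": round(statistics.mean(readings), 2),
        "max_c": round(max(readings), 2),
        "alert": max(readings) > ALERT_THRESHOLD_C,
    }

def send_to_datacenter(summary):
    """Placeholder for the upstream network call; only the summary leaves the site."""
    print("uploading", summary)

if __name__ == "__main__":
    batch = read_sensor_batch()                  # raw samples stay local
    send_to_datacenter(process_at_edge(batch))   # a few fields go upstream

The design choice is the one described above: bandwidth is spent on the result of the computation, not on the raw data that produced it.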

Thus, edge computing is reshaping IT and business computing. It is worth taking a comprehensive look at what edge computing is, how it works, the influence of the cloud, edge use cases, tradeoffs and implementation considerations.

Edge computing continues to evolve, using new technologies and practices to enhance
its capabilities and performance. Perhaps the most noteworthy trend is edge availability, and
edge services are expected to become available worldwide by 2028. Where edge computing is
often situation-specific today, the technology is expected to become more ubiquitous and shift
the way that the internet is used, bringing more abstraction and potential use cases for edge
technology.

This can be seen in the proliferation of compute, storage and network appliance products specifically designed for edge computing. More multivendor partnerships will enable better product interoperability and flexibility at the edge. One example is the partnership between AWS and Verizon to bring better connectivity to the edge.

Wireless communication technologies, such as 5G and Wi-Fi 6, will also affect edge
deployments and utilization in the coming years, enabling virtualization and automation
capabilities that have yet to be explored, such as better vehicle autonomy and workload
migrations to the edge, while making wireless networks more flexible and cost-effective.

HUMAN AUGMENTATION

The field of human augmentation (sometimes referred to as “Human 2.0”) focuses on creating cognitive and physical improvements as an integral part of the human body. An example is using active control systems to create limb prosthetics with characteristics that can exceed the highest natural human performance.

Human augmentation is the ability to perform actions, whether physical or mental, with the help of tools that practically integrate into our bodies. Here the word "practically" has an ambiguous meaning, since not every enhancement of this type is directly grafted onto the body. And it has not been difficult to see this kind of human performance improvement in our daily lives.

We are already experiencing an increase in our natural abilities thanks to wearable devices, such as smartwatches and smartphones. This extends our ability to communicate "where no man has gone before." Sounds familiar?

"Collaboration and augmentation are the foundational principles of innovation."


Vaclav Smill

The human augmentation market is attracting media attention because it can let us become "superhumans," or Human 2.0. With advances in brain-computer interfaces, we are approaching augmented human intelligence. But not all forms of human augmentation technology will grant us "superpowers."

Innovations in human augmentation will occur on three action fronts.

Replicating

Perhaps the most essential category is replicating. We now have human augmentation technologies that can replace some part of a compromised human ability.

One good example of a tech player that is making an immense impact on the market is Naked Prosthetics, which builds finger prosthetics for people who have suffered an accident or an amputation.

Certainly, Naked Prosthetics is one of the benchmark companies when it comes to combining technological materials and human health, not to mention the purpose of the business, which attracts many candidates to participate in the transformation of people's lives.
Supplementing

As the name already suggests, supplementing is augmentation tech that adds to an individual's existing capacity. This could include a wide variety of items, from hearing devices to tools capable of enhancing our vocabulary.

Imagine smart glasses that show you a holographic display providing eye-to-eye contact with remote people or places. The good news is that it already exists, and Google bought it.

Exceeding

The human augmentation market faces its most significant challenge when developing products that exceed human potential. As humans, we know our limits and the things we can do naturally, so in this area some companies are allowing us to become heroes we could never be on our own.

Consider what Elon Musk is doing at Neuralink and how the company could push the boundaries of interaction between a human mind and a computer. And this is not the whole picture; many organizations are developing other tools that can change the status quo of human possibilities.

The augmentation scenario is not exclusive to big tech companies; there is room for everybody. Small and medium-sized businesses are also getting involved in the revolution. No wonder a study by Capterra found that 54% of these companies expect to adopt wearable computing technology within a short period of up to two years.

And we can't forget the startups in the fundraising spotlight. MindPortal is one such case: it aims to develop a wearable device capable of allowing the brain to create fully immersive realities for us to experience alone or with others. According to Pitchbook's Emerging Tech Indicator report (Q2 2021), the $5 million investment in MindPortal is a notable move in the tech industry worth keeping an eye on.

QUANTUM COMPUTING

Quantum computing is a type of computation that harnesses the collective properties of quantum states, such as superposition, interference, and entanglement, to perform calculations. The devices that perform quantum computations are known as quantum computers. Though current quantum computers are too small to outperform usual (classical) computers for practical applications, they are believed to be capable of solving certain computational problems, such as integer factorization (which underlies RSA encryption), substantially faster than classical computers. The study of quantum computing is a subfield of quantum information science.

There are several types of quantum computers (also known as quantum computing
systems), including the quantum circuit model, quantum Turing machine, adiabatic quantum
computer, one-way quantum computer, and various quantum cellular automata. The most
widely used model is the quantum circuit, based on the quantum bit, or "qubit", which is
somewhat analogous to the bit in classical computation. A qubit can be in a 1 or 0 quantum
state, or in a superposition of the 1 and 0 states. When it is measured, however, it is always 0 or
1; the probability of either outcome depends on the qubit's quantum state immediately prior to
measurement.
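A toy Python calculation can make that measurement rule concrete. A single qubit's state is described by two complex amplitudes, alpha for the 0 state and beta for the 1 state, with |alpha|^2 + |beta|^2 = 1; a measurement returns 0 with probability |alpha|^2 and 1 with probability |beta|^2. The sketch below simply simulates repeated measurements of such a state; it is an illustration, not a quantum-computing library.

import math
import random

def measure(alpha, beta, shots=10000):
    """Simulate measuring a qubit in the state alpha|0> + beta|1> many times."""
    p0 = abs(alpha) ** 2                      # probability of reading 0
    p1 = abs(beta) ** 2                       # probability of reading 1
    assert math.isclose(p0 + p1, 1.0), "amplitudes must be normalized"
    outcomes = [0 if random.random() < p0 else 1 for _ in range(shots)]
    return outcomes.count(0) / shots, outcomes.count(1) / shots

# Equal superposition (|0> + |1>) / sqrt(2): roughly 50/50 outcomes.
print(measure(1 / math.sqrt(2), 1 / math.sqrt(2)))

# A biased superposition: mostly 0, occasionally 1.
print(measure(math.sqrt(0.9), math.sqrt(0.1)))

Running it on the equal superposition prints roughly (0.5, 0.5), matching the rule that the outcome probabilities depend only on the state immediately prior to measurement.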

Efforts toward building a physical quantum computer focus on technologies such as transmons, ion traps and topological quantum computers, which aim to create high-quality qubits. These qubits may be designed differently depending on the full quantum computer's computing model, and on whether quantum logic gates, quantum annealing, or adiabatic quantum computation is employed. There are currently a number of significant obstacles to constructing useful quantum computers. It is particularly difficult to maintain qubits' quantum states, as they suffer from quantum decoherence and loss of state fidelity. Quantum computers therefore require error correction.

Any computational problem that can be solved by a classical computer can also be solved by a quantum computer. Conversely, any problem that can be solved by a quantum computer can also be solved by a classical computer, at least in principle given enough time. In other words, quantum computers obey the Church–Turing thesis. This means that while quantum computers provide no additional advantages over classical computers in terms of computability, quantum algorithms for certain problems have significantly lower time complexities than corresponding known classical algorithms. Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any feasible amount of time, a feat known as "quantum supremacy." The study of the computational complexity of problems with respect to quantum computers is known as quantum complexity theory.

HISTORY

Quantum computing began in 1980 when physicist Paul Benioff proposed a quantum
mechanical model of the Turing machine. Richard Feynman and Yuri Manin later suggested that
a quantum computer had the potential to simulate things a classical computer could not feasibly
do. In 1986 Feynman introduced an early version of the quantum circuit notation. In 1994, Peter
Shor developed a quantum algorithm for finding the prime factors of an integer with the potential
to decrypt RSA-encrypted communications. In 1998 Isaac Chuang, Neil Gershenfeld and Mark
Kubinec created the first two-qubit quantum computer that could perform computations. Despite
ongoing experimental progress since the late 1990s, most researchers believe that "fault-
tolerant quantum computing [is] still a rather distant dream." In recent years, investment in
quantum computing research has increased in the public and private sectors. On 23 October
2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration
(NASA), claimed to have performed a quantum computation that was infeasible on any classical
computer, but whether this claim was or is still valid is a topic of active research.

A December 2021 McKinsey & Company analysis states that "investment dollars are
pouring in, and quantum-computing start-ups are proliferating". They go on to note that "While
quantum computing promises to help businesses solve problems that are beyond the reach and
speed of conventional high-performance computers, use cases are largely experimental and
hypothetical at this early stage."

IBM Q System One (2019), the first circuit-based commercial quantum computer.

COMPUTER VISION

Computer vision is a field of artificial intelligence that trains computers to interpret and
understand the visual world. Using digital images from cameras and videos and deep learning
models, machines can accurately identify and classify objects — and then react to what they
“see.”

Early experiments in computer vision took place in the 1950s, using some of the first
neural networks to detect the edges of an object and to sort simple objects into categories like
circles and squares. In the 1970s, the first commercial use of computer vision interpreted typed
or handwritten text using optical character recognition. This advancement was used to interpret
written text for the blind.

As the internet matured in the 1990s, making large sets of images available online for
analysis, facial recognition programs flourished. These growing data sets helped make it
possible for machines to identify specific people in photos and videos.

Today, a number of factors have converged to bring about a renaissance in computer vision. The effects of these advances on the computer vision field have been astounding. Accuracy rates for object identification and classification have gone from 50 percent to 99 percent in less than a decade, and today's systems are more accurate than humans at quickly detecting and reacting to visual inputs.
Computer vision works in three basic steps: acquiring an image, processing the image, and understanding it.

Today’s AI systems can go a step further and take actions based on an understanding of
the image. There are many types of computer vision that are used in different ways:

• Image segmentation partitions an image into multiple regions or pieces to be examined separately.
• Object detection identifies a specific object in an image. Advanced object detection recognizes many objects in a single image: a football field, an offensive player, a defensive player, a ball and so on. These models use X,Y coordinates to create a bounding box and identify everything inside the box.
• Facial recognition is an advanced type of object detection that not only recognizes a human face in an image, but identifies a specific individual.
• Edge detection is a technique used to identify the outside edge of an object or landscape to better identify what is in the image.
• Pattern detection is a process of recognizing repeated shapes, colors and other visual indicators in images.
• Image classification groups images into different categories.
• Feature matching is a type of pattern detection that matches similarities in images to help classify them.

Simple applications of computer vision may only use one of these techniques, but more
advanced uses, like computer vision for self-driving cars, rely on multiple techniques to
accomplish their goal.
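As a small illustration of two of the techniques listed above, edge detection and a very crude form of object detection with bounding boxes, here is a Python sketch using the OpenCV library (assuming OpenCV 4; the file names and thresholds are placeholders, and a real object detector would use a trained model rather than raw contours).

import cv2  # OpenCV; typically installed with: pip install opencv-python

# Load an image (placeholder path) and convert it to grayscale.
image = cv2.imread("sample.jpg")
if image is None:
    raise SystemExit("sample.jpg not found")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Edge detection: trace the outlines of objects in the frame.
edges = cv2.Canny(gray, 100, 200)

# Crude "object detection": turn connected edge regions into bounding boxes.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    if w * h > 500:  # ignore tiny regions (arbitrary area threshold)
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", image)

Self-driving systems and other advanced applications combine several of these techniques, typically with deep learning models, but the pipeline of acquiring an image, processing it, and acting on what is found is the same.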
