EDGE COMPUTING
Edge computing is a current technology development that brings data storage and
computation closer to businesses, improving reaction times and reducing bandwidth usage. It is
also considered one of the most recent technological trends because enterprises are rapidly
gaining access to sophisticated and specialized resources, which is sure to reduce latency.
Edge computing raises security to new heights by addressing local compliance, privacy
legislation, and data sovereignty concerns. Although many people feel that edge computing
increases the attack surface, it actually reduces the impact on an enterprise. Other benefits of
edge computing include enhanced speed and reduced costs, and top-tier companies such as Dell,
AWS, Google Cloud Platform, HPE, and IBM, among others, are readily adopting edge computing.
Data is the lifeblood of modern business, providing valuable business insight and
supporting real-time control over critical business processes and operations. Today's
businesses are awash in an ocean of data, and huge amounts of data can be routinely collected
from sensors and IoT devices operating in real time from remote locations and inhospitable
operating environments almost anywhere in the world.
Edge computing continues to evolve, using new technologies and practices to enhance
its capabilities and performance. Perhaps the most noteworthy trend is edge availability, and
edge services are expected to become available worldwide by 2028. Where edge computing is
often situation-specific today, the technology is expected to become more ubiquitous and shift
the way that the internet is used, bringing more abstraction and potential use cases for edge
technology.
This can be seen in the proliferation of compute, storage and network appliance
products specifically designed for edge computing. More multivendor partnerships will enable
better product interoperability and flexibility at the edge. One example is the partnership
between AWS and Verizon to bring better connectivity to the edge.
Wireless communication technologies, such as 5G and Wi-Fi 6, will also affect edge
deployments and utilization in the coming years, enabling virtualization and automation
capabilities that have yet to be explored, such as better vehicle autonomy and workload
migrations to the edge, while making wireless networks more flexible and cost-effective.
HUMAN AUGMENTATION
Human augmentation is the ability to perform actions, whether physical or mental, with
the help of tools that practically integrate into our bodies. Here the word "practically" is used
loosely, since not every enhancement of this type is directly grafted onto the body. And it has
not been difficult to see this kind of human performance improvement in our daily lives.
The human augmentation market is attracting media attention because it can let us become
"superhumans," or Human 2.0. With advances in brain-computer interfaces, we are approaching
augmented human intelligence. But not all forms of human augmentation technology will grant
us "superpowers."
Replicating
Perhaps the most essential category is replicating. We now have human
augmentation technologies that can replace part of a compromised human ability.
One good example of a tech player that is making an immense impact on the market is Naked
Prosthetics, which builds finger prosthetics for people who have suffered an accident or an
amputation.
Certainly, Naked Prosthetics is one of the benchmark companies when it comes to combining
technology and human health. Not to mention the purpose of the business, which attracts many
candidates eager to take part in transforming people's lives.
Supplementing
Imagine smart glasses that show you a holographic display, providing eye-to-eye contact
with remote people or places. The good news is that this technology already exists, and Google bought it.
Exceeding
The human augmentation market finds its most significant challenge in developing
products that exceed human potential. As humans, we know our limits and the things we can do
naturally, so in this area some companies are letting us become the heroes we were never born to be.
Consider what Elon Musk is doing at Neuralink and how it could push the boundaries of
interaction between a human mind and a computer. And that is not the whole picture;
many organizations are developing other tools that can change the status quo of human
possibilities.
The augmentation scenario is not exclusive to big tech companies. There is room for
everybody. Small and medium-sized businesses are also getting involved in the revolution. No
wonder a study by Capterra found that 54% of such companies expect to adopt wearable
computing technology within the next two years.
And we can't forget the startups in the fundraising spotlight. MindPortal is
one such case: it aims to develop a wearable device that allows the brain to
create fully immersive realities for us to experience alone or with others. According to
Pitchbook's Emerging Tech Indicator report (Q2 2021), the $5 million investment in MindPortal
is a notable move in the tech industry to keep an eye on.
QUANTUM COMPUTING
There are several types of quantum computers (also known as quantum computing
systems), including the quantum circuit model, quantum Turing machine, adiabatic quantum
computer, one-way quantum computer, and various quantum cellular automata. The most
widely used model is the quantum circuit, based on the quantum bit, or "qubit", which is
somewhat analogous to the bit in classical computation. A qubit can be in a 1 or 0 quantum
state, or in a superposition of the 1 and 0 states. When it is measured, however, it is always 0 or
1; the probability of either outcome depends on the qubit's quantum state immediately prior to
measurement.
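As a rough illustration of that measurement rule, the short Python sketch below (not part of the original text) simulates repeated measurements of a single qubit stored as a two-component state vector; the equal-superposition amplitudes and the use of NumPy are illustrative assumptions. The observed frequencies of 0 and 1 approach the squared magnitudes of the corresponding amplitudes.

    import numpy as np

    # Illustrative amplitudes: an equal superposition of the 0 and 1 states.
    alpha = 1 / np.sqrt(2)
    beta = 1 / np.sqrt(2)
    state = np.array([alpha, beta], dtype=complex)

    # A measurement always yields 0 or 1; the probability of each outcome
    # is the squared magnitude of the corresponding amplitude.
    probabilities = np.abs(state) ** 2

    # Repeating the measurement many times recovers those probabilities.
    samples = np.random.choice([0, 1], size=10_000, p=probabilities)
    print("Estimated P(0):", np.mean(samples == 0))
    print("Estimated P(1):", np.mean(samples == 1))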
HISTORY
Quantum computing began in 1980 when physicist Paul Benioff proposed a quantum
mechanical model of the Turing machine. Richard Feynman and Yuri Manin later suggested that
a quantum computer had the potential to simulate things a classical computer could not feasibly
do. In 1986 Feynman introduced an early version of the quantum circuit notation. In 1994, Peter
Shor developed a quantum algorithm for finding the prime factors of an integer with the potential
to decrypt RSA-encrypted communications. In 1998 Isaac Chuang, Neil Gershenfeld and Mark
Kubinec created the first two-qubit quantum computer that could perform computations. Despite
ongoing experimental progress since the late 1990s, most researchers believe that "fault-tolerant
quantum computing [is] still a rather distant dream." In recent years, investment in
quantum computing research has increased in the public and private sectors. On 23 October
2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration
(NASA), claimed to have performed a quantum computation that was infeasible on any classical
computer, but whether this claim was or is still valid is a topic of active research.
A December 2021 McKinsey & Company analysis states that "investment dollars are
pouring in, and quantum-computing start-ups are proliferating". They go on to note that "While
quantum computing promises to help businesses solve problems that are beyond the reach and
speed of conventional high-performance computers, use cases are largely experimental and
hypothetical at this early stage."
COMPUTER VISION
Computer vision is a field of artificial intelligence that trains computers to interpret and
understand the visual world. Using digital images from cameras and videos and deep learning
models, machines can accurately identify and classify objects — and then react to what they
“see.”
Early experiments in computer vision took place in the 1950s, using some of the first
neural networks to detect the edges of an object and to sort simple objects into categories like
circles and squares. In the 1970s, the first commercial use of computer vision interpreted typed
or handwritten text using optical character recognition. This advancement was used to interpret
written text for the blind.
As the internet matured in the 1990s, making large sets of images available online for
analysis, facial recognition programs flourished. These growing data sets helped make it
possible for machines to identify specific people in photos and videos.
The effects of these advances on the computer vision field have been astounding.
Accuracy rates for object identification and classification have gone from 50 percent to 99
percent in less than a decade — and today’s systems are more accurate than humans at
quickly detecting and reacting to visual inputs.
Computer vision works in three basic steps: acquiring an image (from a camera, video, or 3D
source), processing the image (deep learning models automate much of this work), and
understanding the image (identifying or classifying the object).
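As a rough illustration of these three steps, the Python sketch below is only an assumed example, not a method described in the original text: it relies on the Pillow and torchvision libraries (torchvision 0.13 or later) with a pretrained ResNet-18 classifier, and "example.jpg" is a placeholder file name.

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Step 1 - acquire the image ("example.jpg" is a placeholder path).
    image = Image.open("example.jpg").convert("RGB")

    # Step 2 - process the image: resize, crop, and normalize it into a tensor
    # in the format the pretrained network expects.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    batch = preprocess(image).unsqueeze(0)

    # Step 3 - understand the image: a pretrained ResNet-18 classifier
    # predicts which object category the image most likely contains.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()
    with torch.no_grad():
        predicted_index = model(batch).argmax(dim=1).item()
    print("Predicted ImageNet class index:", predicted_index)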
Today’s AI systems can go a step further and take actions based on an understanding of
the image. There are many types of computer vision that are used in different ways, including
image segmentation, object detection, facial recognition, edge detection, pattern detection,
image classification, and feature matching.
Simple applications of computer vision may only use one of these techniques, but more
advanced uses, like computer vision for self-driving cars, rely on multiple techniques to
accomplish their goal.