
ARTIFICIAL INTELLIGENCE

Start thinking in LIFE 3.0

The impact of Artificial Intelligence (AI) on the future of life on Earth and beyond.

by Omar Shaaban, DBI&T Master Student, IMC | October 23, 2020

Index
1. What is AI, where it works, and why Life 3.0
2. The Gartner Hype Cycle explains how artificial intelligence is reaching enterprises
3. Uses of AI
4. The future of AI
5. Ethics for AI
6. Risks of AI
7. Will AI replace humans?
8. Conclusion
9. Extra links

1. What is AI, where it works, and why Life 3.0
1.1. Artificial Intelligence (AI) is a new discipline, a kind of technology and artificial life form, entering a rapid transition from theory to reality. The machine processes data after learning and interprets the collected data into useful digital results using lines of code and sets of algorithms, written by humans or under human supervision; simply put, it is collecting, processing, and reacting. AI is not traditional software like a spreadsheet program or WhatsApp; it gives intelligence to the software, like a personal AI chatbot that simulates real-life human conversation. Furthermore, what a machine has learned is not forgotten and can spread in fractions of a second without borders.
Artificial intelligence is a reflection of the human intelligence and knowledge accumulated over tens of thousands of years, and it will reach a stage very close to human intelligence. But AI will not be able to recognize itself, especially because its intelligence does not come from lived learning, and it will not have the ability to imagine and dream as humans do.

“new discipline, a kind of technology and artificial life form, entering a rapid transition from theory to reality”

1.2. AI will join humans in whatever they do and wherever they live, because it is a kind of thinking and cognition designed to assist humans and facilitate their lives, anticipating problems or dealing with issues as they come up. Not only in material fields, but also in spiritual fields such as comparative religion, psychology, and metaphysics, AI helps each person create a private world. The Replika AI chatbot, as one example, lets you build a reflected companion as your mind envisions it. What is more, with hologram technology we may even be able to converse with our ancestors, or their ancestors before them.

1.3. Life 3.0 is a term from Max Tegmark's 2017 book of the same name: “Life 1.0 referring to biological origins, Life 2.0 referring to cultural developments in humanity, and Life 3.0 referring to the technological age of humans”.

“AlphaZero beat Stockfish, one of the most important electronic chess engines”

After humans grew up and adjusted to nature, they taught themselves and formed their way of life. Now they have begun teaching the machine what makes it able to support them and to shape and continually renew their lifestyle. This is the third period of human life on this planet, and it is the best thing that has happened to humans so far. For example, DeepMind's AlphaGo defeated the human world champion at Go, the ancient game whose roots reach back some 3,000 years of play and wisdom, and its successor AlphaZero also beat Stockfish, one of the most important electronic chess engines.

“After humans grew up and adjusted to nature, they taught themselves and formed their way of life; now they have begun teaching the machine”

https://en.wikipedia.org/wiki/Life_3.0
https://course.elementsofai.com/
https://futureoflife.org/ai-news/
https://doi.org/10.1007/BF00142926
https://en.wikipedia.org/wiki/AlphaGo

2. The Gartner Hype Cycle explains how artificial intelligence is reaching enterprises and illustrates the five stages of innovation in AI fields

Innovation trigger: AI's innovation trigger came in 1956, at a conference at Dartmouth College.
Peak of inflated expectations: begins around 2016 with Sophia, a social humanoid robot developed by the Hong Kong-based company Hanson Robotics and widely publicised as the "smartest" robot.
Trough of disillusionment: begins with NLP (Natural Language Processing), which gives machines the ability to read, understand, and derive meaning from human languages.
Plateau of productivity: as expected, AI will reach productivity within 2 to 5 years for voice recognition, Composite AI, and Adaptive ML, but within 5 to 10 years for Responsible and Explainable AI, Self-Supervised Learning, and Augmented Development.
The five trends driving the Gartner Hype Cycle, as published:
Augmented intelligence is a human-centered partnership model of people and AI working together to enhance cognitive performance. It focuses on AI's assistive role in advancing human skills.
AI interacting with people and improving what they already know reduces mistakes and routine work and can improve customer interactions, citizen services, and
patient care. The goal of augmented intelligence is to be more efficient with automation while complementing it with a human touch and common sense to manage
the risks of decision automation.
Chatbots are the face of AI and impact all areas where there is communication with humans, such as carmaker KIA, which talks to 115,000 users per week, or Lidl's Winebot Margot, which provides guidance on which wine to buy and tips on food pairings.
Chatbots can be text- or voice-based, or a combination of both, and rely on scripted responses that involve few people.
Common applications exist in HR, IT help desk, and self-service, but customer service is where chatbots are already having the most impact, notably changing the
way customer service is conducted. The change from “the user learns the interface” to “the chatbot is learning what the user wants” means greater implications for
onboarding, productivity, and training inside the workplace.
Read more: Chatbots Will Appear to Modern Workers
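To make the idea of "scripted responses" concrete, here is a minimal sketch of a rule-based text chatbot in Python. The intents, patterns, and replies are invented purely for illustration; real enterprise chatbots such as KIA's assistant or Lidl's Margot sit on much richer NLP and dialogue-management pipelines.

```python
import re

# A few hypothetical intents: each maps a keyword pattern to a scripted reply.
SCRIPTS = {
    "greeting": (re.compile(r"\b(hi|hello|hey)\b", re.I),
                 "Hello! How can I help you today?"),
    "wine_pairing": (re.compile(r"\b(wine|pairing)\b", re.I),
                     "A dry Riesling pairs well with spicy food."),
    "it_helpdesk": (re.compile(r"\b(password|reset|login)\b", re.I),
                    "I can help you reset your password. Please confirm your employee ID."),
}
FALLBACK = "Sorry, I did not understand that. A human colleague will follow up."

def reply(user_message: str) -> str:
    """Return the first scripted reply whose pattern matches the message."""
    for intent, (pattern, answer) in SCRIPTS.items():
        if pattern.search(user_message):
            return answer
    return FALLBACK

if __name__ == "__main__":
    print(reply("Hi, which wine goes with curry?"))   # matched by the "greeting" rule first
    print(reply("I forgot my login password"))
```

The point of the sketch is the design trade-off: such a bot does not understand language, it only matches patterns, which is why the shift from "the user learns the interface" to "the chatbot learns what the user wants" requires layering real NLP on top of scripts like these.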

“The five trends driving the Gartner Hype Cycle:
Augmented intelligence
AI chatbots
Machine learning
AI governance
Intelligent applications”

Machine learning​ can solve business problems, such as personalized customer treatment, supply chain recommendations, dynamic pricing, medical diagnostics, or
anti-money laundering. ML uses mathematical models to extract knowledge and patterns from data. The adoption of ML is increasing as organizations encounter
exponential growth of data volumes and advancements in computing infrastructure.
Currently, ML is being used in multiple fields and industries to drive improvements and find new solutions for business problems. American Express uses data
analytics and ML algorithms to help detect fraud in near-real-time in order to save millions in losses. Volvo uses data to help predict when parts might fail or when
vehicles need servicing, improving its vehicle safety.
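As a hedged illustration of the pattern behind such use cases (not American Express's or Volvo's actual systems), the sketch below trains an unsupervised anomaly detector on synthetic transaction data and flags the unusual ones, which is the basic shape of near-real-time fraud detection.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "transactions": amount (EUR) and hour of day. Most are ordinary...
normal = np.column_stack([rng.normal(60, 20, 1000), rng.normal(14, 3, 1000)])
# ...a few are unusually large and happen in the middle of the night.
suspicious = np.column_stack([rng.normal(5000, 500, 10), rng.normal(3, 1, 10)])
X = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector; ~1% of points are assumed anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)          # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(flags == -1)} of {len(X)} transactions for review")
```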
AI governance
Organizations should not neglect AI governance. They need to be aware of the potential regulatory and reputational risks. “AI governance is the process of creating
policies to fight AI-related biases, discrimination, and other negative implications of AI,” says Sicular.
Identify transparency requirements for data sources and algorithms to reduce risks and grow confidence
To develop AI governance, data and analytics leaders and CIOs should focus on three areas: trust, transparency, and diversity. They need to focus on trust in data
sources and AI outcomes to ensure successful AI adoption. They also need to identify transparency requirements for data sources and algorithms to reduce risks and
grow confidence in AI. They should ensure data, algorithms, and viewpoint diversity to pursue AI ethics and accuracy.
Intelligent applications
Most organizations’ preference for acquiring AI capabilities is shifting in favor of getting them in enterprise applications. Intelligent applications are enterprise
applications with embedded or integrated AI technologies to support or replace human-based activities via intelligent automation, data-driven insights, and guided
recommendations to improve productivity and decision making.
Today, enterprise application providers are embedding AI technologies within their offerings as well as introducing AI platform capabilities — from enterprise
resource planning to customer relationship management to human capital management to workforce productivity applications.
CIOs should challenge their packaged software providers to outline in their product roadmaps how they are incorporating AI to add business value in the form of
advanced analytics, intelligent processes, and advanced user experiences.

Gartner sets a timeline of up to ten years, but it did not anticipate the new technologies into which artificial intelligence will enter, nor did it take into account the new measures of speed, time, and development that differ from the measurement standards it follows in the study of other innovations.
https://en.wikipedia.org/wiki/Dartmouth_workshop
https://www.gartner.com/smarterwithgartner/top-trends-on-the-gartner-hype-cycle-for-artificial-intelligence-2019
https://analyticsindiamag.com/gartner-hype-cycle-2020-artificial-intelligence/
https://en.wikipedia.org/wiki/Sophia_(robot)
https://en.wikipedia.org/wiki/Natural_language_processing

3. Uses of AI
3.1. Artificial Intelligence Areas

AI will generate new areas as never before.

https://www2.deloitte.com/se/sv/pages/technology/articles/part1-artificial-intelligence-defined.html#

3.2. AI in industries

https://www.oneragtime.com/24-industries-disrupted-by-ai-infographic/ ,
https://www.brookings.edu/series/a-blueprint-for-the-future-of-ai/

3.3. What Can Artificial Intelligence Do with Big Data?

Big data is too big without AI. Big data and artificial intelligence are two important branches of computer science today, and research in both fields has never stopped in recent years. Big data overlaps with artificial intelligence.
First, the advancement of big data technology depends on artificial intelligence, since it uses many AI ideas and techniques.
Second, artificial intelligence development must also be based on big data technology, since it takes a lot of data to train and maintain it.
The ways AI fuels better insights, according to Kevin Casey of The Enterprisers Project, include:

A. AI is creating new methods for analyzing data


B. Data analytics is becoming less labor-intensive
C. Humans still matter plenty
D. AI/ML can be used to alleviate common data problems
E. Analytics become more predictive and prescriptive
For instance, when a human creates an artificial intelligence program that exploits big data on the state of the Earth's climate, it can give an accurate report on the state of each geographical region years ahead, which pushes governments to formulate policies that preserve the life and health of their citizens. When Google provides its map information and NASA provides its satellite data to artificial intelligence applications, traffic accidents, which currently kill more than a million people annually, could be reduced dramatically. And when data on patients with a specific disease is collected and fed into a medical artificial intelligence program, we will be closer to discovering a drug than we imagined.
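As a toy illustration of how analytics become predictive (point E above), the sketch below fits a simple trend model to synthetic regional temperature data and projects it a decade ahead. Real climate or health analyses use vastly larger datasets and far more sophisticated models, but the shape of the workflow is the same: collect data, fit a model, project forward.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for "big data": 50 years of mean annual temperature
# for one hypothetical region, with a slow warming trend plus noise.
years = np.arange(1970, 2020).reshape(-1, 1)
temps = 14.0 + 0.02 * (years.ravel() - 1970) + rng.normal(0, 0.3, 50)

# Fit a simple trend model and project a decade ahead.
model = LinearRegression().fit(years, temps)
future = np.arange(2020, 2031).reshape(-1, 1)
projection = model.predict(future)

print(f"Estimated warming trend: {model.coef_[0]:.3f} degrees C per year")
print(f"Projected mean temperature in 2030: {projection[-1]:.2f} degrees C")
```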

https://enterprisersproject.com/article/2019/10/how-big-data-and-ai-work-together?page=1
IBM Cancer research
Google AI energy
World economic forum

4. The future of AI
4.1. Investment in AI
Worldwide spending on artificial intelligence is expected to more than double in four years (a surge of about 120%), reaching $110 billion in 2024, according to a new IDC Spending Guide.
Software and services will each account for a little more than one third of all AI spending this year with hardware delivering the remainder. The largest share of
software spending will go to AI applications ($14.1 billion) while the largest category of services spending will be IT services ($14.5 billion). Servers ($11.2 billion)
will dominate hardware spending. Software will see the fastest growth in spending over the forecast period with a five-year CAGR of 22.5%.
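To see what a 22.5% five-year CAGR means in practice, here is the compounding arithmetic, applied purely for illustration to the $14.1 billion AI applications figure cited above; the year-by-year numbers are a projection for illustration, not IDC's.

```python
# Compound a starting spend forward at the IDC software CAGR of 22.5%.
# Applying it to the $14.1B AI applications figure is an illustrative assumption.
cagr = 0.225
spend = 14.1  # $ billions in 2020
for year in range(2020, 2025):
    print(f"{year}: ${spend:.1f}B")
    spend *= 1 + cagr
# After four years of compounding: 14.1 * (1 + 0.225)**4 is roughly $31.8B
```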

4.2. Human-AI integration: Neuralink

“Neuralink will wire the human brain to a huge universal database via the internet”

The first tests on humans are expected to begin this year. The project is backed by Neuralink, which has previously stated that its goal is simply to "understand and treat different forms of brain or spine-related disorders." For instance, paralyzed humans could use the implanted device to control phones or computers. Musk envisions an alternative use: utilizing the implant as a means of enhancing our own brain, giving humans the option to achieve a symbiosis with artificial intelligence (AI).

Neuralink is building a fully integrated brain machine interface (BMI) system. Sometimes you'll see this called a brain computer interface (BCI). Either way, BMIs are
technologies that enable a computer or other digital device to communicate directly with the brain. For example, through information readout from the brain, a
person with paralysis can control a computer mouse or keyboard. Or, information can be written back into the brain, for example to restore the sense of touch. Our
goal is to build a system with at least two orders of magnitude more communication channels (electrodes) than current clinically-approved devices. This system
needs to be safe, it must have fully wireless communication through the skin, and it has to be ready for patients to take home and use on their own. Our device,
called the Link, will be able to record from 1024 electrodes and is designed to meet these criteria.
​https://medium.com/@tylerwaddell/the-first-attempt-ai-human-integration-starting-in-2020-99b4e31763ae
https://neuralink.com/

4.3. The next generation of AI

“The next AI revolution:
Unsupervised learning
Federated learning
The OpenAI Transformer (GPT-3)”

Many AI leaders see unsupervised learning as the next great frontier in artificial intelligence. In the words of AI legend Yann LeCun: “The next AI revolution will not be supervised.” UC Berkeley professor Jitendra Malik put it even more colorfully: “Labels are the opium of the machine learning researcher.”
Privacy-preserving artificial intelligence—methods that enable AI models to learn from datasets without compromising their privacy—is thus becoming an
increasingly important pursuit. Perhaps the most promising approach to privacy-preserving AI is​ federated learning​.
The concept of federated learning was first formulated by researchers at Google in early 2017. Over the past year, interest in federated learning has exploded: more
than 1,000 research papers on federated learning were published in the first six months of 2020, compared to just 180 in all of 2018.
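A minimal sketch of the federated averaging idea in plain NumPy: each client runs a few gradient steps on its own private data, and the server only ever sees and averages the model weights, never the raw data. This is a toy linear-regression version of the concept, not Google's production algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])        # ground-truth weights shared by all clients

def local_update(w, X, y, lr=0.1, steps=20):
    """A few steps of local gradient descent on one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients, each with private data that never leaves the "device".
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(0, 0.1, 100)
    clients.append((X, y))

w_global = np.zeros(2)
for round_ in range(10):                       # federated rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)  # server averages parameters only

print("Learned weights:", np.round(w_global, 2), "vs. true", true_w)
```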
OpenAI’s release of GPT-3, the most powerful language model ever built, captivated the technology world this summer. It has set a new standard in NLP: it can
write impressive poetry, generate functioning code, compose thoughtful business memos, write articles about itself, and so much more.
GPT-3 is just the latest (and largest) in a string of similarly architected NLP models—Google’s BERT, OpenAI’s GPT-2, Facebook’s RoBERTa and others—that are
redefining what is possible in NLP.
The key technology breakthrough underlying this revolution in language AI is the ​Transformer.
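The Transformer's core operation is scaled dot-product self-attention: every token builds its new representation as a weighted mix of all other tokens. Below is a minimal single-head NumPy sketch of just that operation; GPT-3, BERT and RoBERTa stack many multi-head versions of it with learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one sequence (single head, no masking)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how much each token attends to each other token
    weights = softmax(scores, axis=-1)
    return weights @ V                        # each output is a weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                       # 5 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8): one context-aware vector per token
```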
https://www.forbes.com/sites/robtoews/2020/10/12/the-next-generation-of-artificial-intelligence/#7bfb864559eb

4.4. AGI: Artificial General Intelligence

Artificial General Intelligence (AGI) can be defined as the ability of a machine to perform any task that a human can. Although the aforementioned applications
highlight the ability of AI to perform tasks with greater efficacy than humans, they are not generally intelligent, i.e., they are exceedingly good at only a single
function while having zero capability to do anything else. Thus, while an AI application may be as effective as a hundred trained humans in performing one task it
can lose to a five-year-old kid in competing over any other task. For instance, computer vision systems, although adept at making sense of visual information,
cannot translate and apply that ability to other tasks. On the contrary, a human, although sometimes less proficient at performing these functions, can perform a
broader range of functions than any of the existing AI applications of today.

Some experts have predicted that artificial general intelligence could be achieved as early as 2030, while a recent survey of AI experts predicted the emergence of AGI, or the singularity, by the year 2060. Many experts use the terms AGI and the AI singularity almost interchangeably.
The greatest fear about AI is singularity (also called Artificial General Intelligence), a system capable of human-level thinking. According to some experts, singularity
also implies machine consciousness. Regardless of whether it is conscious or not, such a machine could continuously improve itself and reach far beyond our
capabilities. Even before artificial intelligence was a computer science research topic, science fiction writers like Asimov were concerned about this and were
devising mechanisms (i.e. Asimov’s Laws of Robotics) to ensure benevolence of intelligent machines.
In 2019, 32 AI experts participated in a survey on AGI timing:
● 45% of respondents predict a date before 2060
● 34% of all participants predicted a date after 2060
● 21% of participants predicted that singularity will never occur.

https://www.forbes.com/sites/cognitiveworld/2019/06/10/how-far-are-we-from-achieving-artificial-general-intelligence/#786d6b606dc4
https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
http://www.steve-wheeler.co.uk/2019/05/our-digital-future-5-artificial.html

4.5. Quantum Machine Learning (QML): Quantum Computing and AI

“Google has a quantum computer they claim is 100 million times faster than any of today's systems”

Quantum computing is similar to traditional computing in that it relies on bits, the 0's and 1's used to encode information, but it has its own version of this: the quantum bit, or qubit, whose information can be in multiple states at the same time. The reason for this is the effects of quantum mechanics, like superposition and entanglement. Companies like IBM, Microsoft, Google, and Honeywell have been investing aggressively in the technology.
Google announced a "quantum supremacy" machine that it claims is 100 million times faster than any classical computer in its lab. Every day, we produce 2.5 exabytes of data; that number is equivalent to the content on 5 million laptops.
Researchers are bringing quantum computing and AI together. The main goal is to achieve a so-called quantum advantage, where complex algorithms can be calculated significantly faster than with the best classical computer. This would be a game-changer in the field of AI.
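To make "multiple states at the same time" a little more concrete, here is a tiny classical simulation of a single qubit in NumPy: a Hadamard gate puts the |0> state into an equal superposition, and sampling from the resulting probabilities gives 0 and 1 about half the time each. This only simulates the linear algebra on an ordinary machine; it is not quantum hardware and says nothing about speed-ups.

```python
import numpy as np

rng = np.random.default_rng(7)

# A qubit is a 2-component complex state vector; |0> is (1, 0).
state = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate creates an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# Measurement probabilities are the squared amplitudes (Born rule).
probs = np.abs(state) ** 2
samples = rng.choice([0, 1], size=1000, p=probs)
print("P(0), P(1) =", probs.round(3))          # [0.5, 0.5]
print("Fraction of shots measuring 1:", samples.mean())
```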
Quantum computers could change the future of AI in four ways
Handling HUGE Amounts of Data
Machine learning and AI eat data. Lots of data. Quantum computers are designed to manage huge amounts of data. With each iteration of quantum computer design and improvements to quantum error-correction code, programmers are able to better master the potential of qubits to manage exponentially more data, according to Lorenzo of BBVA.
Building Better Models
Many industries require complex models that classical computers just can't generate. Quantum computers, on the other hand, have the potential processing power to model the most complex situations. If quantum technology can create better models, it may lead to better treatments for disease, decreased risk of financial implosions, and improved logistics.
More Accurate Algorithms

Quantum computing should have an immediate impact on traditional AI models and algorithms, such as non-supervised learning and reinforcement learning,
according to the researcher.
“Dimensionality reduction algorithms are a particular case. These algorithms are used to represent our original data in a more limited space, but preserving most of
the properties of the original dataset,” said Lorenzo.
He added that quantum computing's particular skill will help pinpoint certain global properties in a dataset, not so much specific details.
“In this context, some theoretical proposals have already been laid out to accelerate this training using quantum computers, which may contribute to developing an extremely powerful artificial intelligence in the future,” said Lorenzo.
Using Multiple Datasets
“The promise is that quantum computers will allow for quick analysis and integration of our enormous data sets, which will improve and transform our machine learning and artificial intelligence capabilities,” writes Bernard Marr in Forbes.

AI and quantum collaborations are happening now. The natural extension that quantum computers offer machine learning and artificial intelligence is not lost on entrepreneurs, who are busy learning ways to exploit the technical combination. Recently, The Quantum Daily reported on a deal signed between C-DAC and Atos, two organisations investigating the match between quantum computing and AI.
https://bernardmarr.com/default.asp?contentID=1178
https://www.forbes.com/sites/tomtaulli/2020/08/14/quantum-computing-what-does-it-mean-for-ai-artificial-intelligence/#d7f74833b4c8
https://www.raconteur.net/technology/artificial-intelligence/quantum-computing-ai/
https://thequantumdaily.com/2020/01/23/four-ways-quantum-computing-will-change-artificial-intelligence-forever/
https://www.bbva.com/en/quantum-computing-how-it-differs-from-classical-computing/

4.6. Accumulation of AI

“Humans will come to form the largest artificial brain in their history”

If cancer researchers from all over the world contributed to a central database of millions of photos and medical reports, then AI with complicated algorithms could work on that big data to detect the disease.
It is hard to imagine the results that could be obtained from connecting the artificial intelligence programs of the whole world together, and with the digital libraries that contain humanity's historical and scientific archives. I know it sounds like science fiction, but when machines learn from that huge data and store the results in a central cloud, we can say that humans have come to form the largest artificial brain in their history.
https://deepcognition.ai/ , https://cancercenter.ai/ , https://www.nature.com/articles/d41586-020-00847-2 ,
https://www.cancer.gov/research/areas/diagnosis/artificial-intelligence

4.7. Knowledge-Growing System (KGS)

“(KGS) is a system that has the capability to develop its own knowledge”

Knowledge Growing System (KGS) is a system that has the capability to develop its own knowledge along with the accretion of information over time. The ultimate result of the knowledge-growing mechanism is new knowledge regarding a certain phenomenon, measured with a parameter called Degree of Certainty (DoC).
This mechanism was researched and gave birth to a new perspective in AI called the Knowledge Growing System (KGS) in 2009. KGS is simply a system capable of growing its own knowledge as the information it receives accretes over time. At the end of the learning phase, the system's "brain" generates new knowledge regarding the phenomenon and can use this knowledge to predict or estimate possible future occurrences of the same or similar phenomena. Knowledge generation represents a cognitive characteristic of the human brain; KGS emulates this characteristic and answers the second purpose of AI as described by Herbert Simon. This is why KGS is called the main engine of Cognitive AI.
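The cited papers define KGS and its Degree of Certainty formally; as a loose analogy only (not the KGS algorithm itself), the sketch below shows the general flavour of certainty about a phenomenon growing as supporting observations accumulate, using a simple Bayesian update with invented probabilities.

```python
# Loose analogy only: certainty about a hypothesis growing as evidence accumulates.
# (The actual KGS "Degree of Certainty" is defined differently in the cited papers.)
belief = 0.5                # initial belief that the phenomenon is present
sensitivity = 0.8           # assumed P(observation | phenomenon present)
false_alarm = 0.3           # assumed P(observation | phenomenon absent)

for step in range(1, 6):    # five consecutive supporting observations
    numerator = sensitivity * belief
    belief = numerator / (numerator + false_alarm * (1 - belief))
    print(f"after observation {step}: certainty = {belief:.3f}")
```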
https://www.researchgate.net/publication/330397678_KNOWLEDGE_GROWING_SYSTEM_A_NEW_PERSPECTIVE_ON_ARTIFICIAL_INTELLIGENCE
https://iopscience.iop.org/article/10.1088/1757-899X/732/1/012037/pdf
https://www.cognitivesoftware.com/cognitiveAI

4.8. Leadership qualities in the AI age

The leadership skills intended here do not begin with teaching university students, as happens now, but rather start from the pre-primary stage, because they grow with the person and form his or her personality; in fact, they determine his or her professional and personal future. They center around advanced IT skills and programming, advanced literacy skills, critical thinking, and problem solving; companies are more likely to hire from outside for less-complex skills.
The current challenge in the third stage of human life is that we need leaders who can lead local communities, various institutions, and smart machines. The existence of smart machines and programs is what forces a change in the traditional view of the concepts of leadership and management.
Successful leaders in the intelligence revolution will need to cultivate the following 10 leadership skills:
1. Agility
The pace of change, particularly with AI, is astonishing. Leaders must therefore be able to embrace and celebrate change (including new technologies). And,
importantly, they should not view change as a burden, but see it as an opportunity to grow and innovate, both at an individual and organizational level.
2. Emotional intelligence
As more and more workplace activities become automated, softer skills like emotional intelligence and empathy will become more critical for human workers. And if
we expect the workplaces of the future to prioritize such human skills, it stands to reason that leaders must model these behaviors themselves.
3. Cultural intelligence
The workplaces of the future will be even more diverse, global and dispersed than they are today. Effective leaders will be able to appreciate and leverage the
differences individuals bring to the table, and to respect and work well with people from all backgrounds – even when they share a different world view.
4. Humility
Confidence will still be an important trait in leaders, but the successful leaders of the future will be able to strike a balance between confidence and humility. They
will see themselves as facilitators and collaborators, rather than critical cogs to success. In other words, they’ll encourage others to shine.
5. Accountability
Flatter organizational structures, more project-based teams, partnership working – all of these things will lead to organizations becoming more transparent and
collaborative. Leaders will therefore need to be more transparent and hold themselves accountable. What’s more, their actions must be in clear alignment with the
company’s goals.
6. Vision
To understand the impact of AI on the business and all of its stakeholders, leaders in the intelligence revolution will need that big-picture vision. How will AI
transform the organization and lead to new business opportunities? It’s up to leaders to determine this, while managing stakeholders’ needs effectively.
7. Courage
We’ve barely scratched the surface of what AI can do, so leaders will need the courage to face the uncertain, the courage to fail fast, and the courage to change
course when the situation calls for a new strategy. As part of this, they’ll need the courage to identify their own weaknesses and be open to coaching and learning.
(In fact, as skills become outdated even more quickly in the future, successful leaders will need to cultivate a culture of learning right across the business.)
8. Intuition
There’s no doubt that data-driven decision making is the way forward, but that doesn’t mean intuition and instinct will become obsolete, far from it. Particularly as
workplaces undergo rapid change, leaders will still require that uniquely human skill of intuition, of being able to “read” what’s not being said.
9. Authenticity
Any new technology brings with it issues around ethics and misuse, not to mention issues around change management. Leaders in the intelligence revolution will
therefore need to be able to build trust with customers, employees, and other stakeholders – and that means exuding authenticity. This will become especially
important in times of uncertainty, change, or failure
10. Focus. Finally, with the incredible pace of change, and the continual need to adapt, future leaders will need to maintain a laser-like focus on the organization’s
strategic objectives. They’ll need to be able to cut through the chaos and hype to identify what’s really important – especially the initiatives and technology that
will help the organization deliver on its goals.
https://www.forbes.com/sites/bernardmarr/2020/10/12/10-essential-leadership-qualities-for-the-age-of-artificial-intelligence/#d40b4477f79e
https://www.mckinsey.com/featured-insights/future-of-work/skill-shift-automation-and-the-future-of-the-workforce
https://swisscognitive.ch/2020/10/15/essential-leadership-qualities-for-the-age-of-ai/​ , h​ ttps://futureofleadership.ai/leadership-in-the-age-of-ai/

5. Ethics for AI

“The singularity thus raises the problem of the concept of AI again. It is remarkable how imagination or 'vision' has played a central role since the very beginning of the discipline at the 'Dartmouth Summer Research Project'”

We need to establish ethical guidelines before technology catches up with us. AI professor Jürgen Schmidhuber predicts artificial intelligence will be able to control robotic factories in space, the Swedish-American physicist Max Tegmark warns against a totalitarian AI surveillance state, and the philosopher Thomas Metzinger predicts a deadly AI arms race. But Metzinger also believes that Europe in particular can play a pioneering role on the threshold of this new era by creating a binding international code of ethics.
Various initiatives and councils are carried out by the IEEE to endorse AI ethics, including:
ECPAIS: An overview of the IEEE’s ethics certification program for autonomous systems, comprising its objectives and applications
AuroraAI: A pilot program carried out by the Government of Finland and the ECPAIS in order to gauge the effect of the implementation of ethics principles for AI’s
use in the public sectors
Ethically Aligned Design, OCEANIS, AI Commons, and CXI: How the IEEE is currently battling certain awareness challenges in incorporating AI ethics
https://emerj.com/ethics-and-regulatory/establishing-ai-ethics-public-and-private-sector/
https://plato.stanford.edu/entries/ethics-ai/

6. Risks of AI

6.1. The effects of the "filter bubble" phenomenon on user exposure

Dangers of filter bubbles


This can lead to people becoming used to hearing what they want to hear, which can cause them to react more radically when they see an opposing viewpoint. The filter bubble may cause a person to see any opposing viewpoint as incorrect, and could allow the media to force views onto consumers.
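A toy sketch of the mechanism behind the filter bubble: a recommender that always serves the items most similar to what a user already consumed keeps narrowing what the user sees. The articles, topic vectors, and similarity measure below are invented purely to illustrate the feedback loop, not any real platform's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# 20 hypothetical articles, each described by a 2-D "topic" vector.
articles = rng.normal(size=(20, 2))
liked = [0]                                    # the user starts by liking article 0

def recommend(liked_ids, k=1):
    """Recommend the unseen article closest (cosine similarity) to the user's profile."""
    profile = articles[liked_ids].mean(axis=0)
    sims = articles @ profile / (np.linalg.norm(articles, axis=1) * np.linalg.norm(profile))
    sims[liked_ids] = -np.inf                  # never re-recommend what was already seen
    return list(np.argsort(sims)[-k:])

for _ in range(5):                             # the user accepts every recommendation
    liked += recommend(liked)

spread = articles[liked].std(axis=0).mean()
print("Articles seen:", liked, "| topic diversity:", round(float(spread), 2))
```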
https://www.pwc.ch/en/insights/risk/what-is-the-future-of-risk/strengthen-awareness-of-artificial-intelligence-risks.html

https://en.wikipedia.org/wiki/Filter_bubble
6.2. Fake news, fake visuals
Facebook Fact Checking
Especially after the recent backlash against Facebook, the company is on a mission to regain user trust. Facebook has been working with four independent fact-checking organizations—Snopes, Politifact, ABC News and FactCheck.org—to verify the truthfulness of viral stories. New tools designed to avert the spread of misinformation will notify Facebook users when they try to share a story that has been flagged as false by these independent fact-checkers.
Facebook has also recently announced its plan to open two new AI labs that will work on creating an AI safety net for its users, tackling fake news, political propaganda, and bullying on its platform.
https://bernardmarr.com/default.asp?contentID=1440

6.3. Automation-spurred job loss


According to PwC:
In the long run, less well educated workers could be particularly exposed to automation, emphasising the importance of increased investment in lifelong learning and retraining.
Financial services jobs could be relatively vulnerable to automation in the shorter term, while transport jobs are more vulnerable to automation in the longer term.
The three waves of automation, their description and impact:
Wave 1: Algorithmic wave (to early 2020s). Automation of simple computational tasks and analysis of structured data, affecting data-driven sectors such as financial services.
Wave 2: Augmentation wave (to late 2020s). Dynamic interaction with technology for clerical support and decision making; also includes robotic tasks in semi-controlled environments such as moving objects in warehouses.
Wave 3: Autonomous wave (to mid-2030s). Automation of physical labour and manual dexterity, and problem solving in dynamic real-world situations that require responsive actions, such as in transport and construction.

AI is not going to take away jobs. It will displace some jobs, yes, but it will more likely change what human workers do. AI will actually create a plethora of new jobs, many of which we cannot even imagine yet.
PwC predicts (via the Guardian) that AI will create over 7.2 million jobs in the U.K. alone over the next two decades. And according to McKinsey, about 77% of
companies "expect no net change in the size of their workforces in either Europe or the United States as a result of adopting automation and AI technologies.
Indeed, more than 17% expect their workforces on both sides of the Atlantic to grow."
We need to think about educating our current and future workforce to learn the skills that machines cannot replicate, such as creative, social, and emotional skills.

https://www.theguardian.com/technology/2018/jul/17/artificial-intelligence-will-be-net-uk-jobs-creator-finds-report
https://www.mckinsey.com/featured-insights/future-of-work/skill-shift-automation-and-the-future-of-the-workforce
https://www.pwc.com/hu/hu/kiadvanyok/assets/pdf/impact_of_automation_on_jobs.pdf

6.4. Privacy violations and deepfakes

“Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be
invoked to make the real appear fake,” says Henry Ajder, one of the authors of the report. “The hype and rather sensational coverage speculating on deepfakes’
political impact has overshadowed the real cases where deepfakes have had an impact.”
In recent months many research groups, and tech companies like Facebook and Google, have focused on tools for exposing fakes, such as databases for training
detection algorithms and watermarks that can be built into digital photo files to reveal if they are tampered with. Several startups have also been working on ways
to build trust through consumer applications that verify photos and videos when they’re taken, to form a basis for comparison if versions of the content are
circulated later. Gregory says tech giants should integrate both kinds of checks directly into their platforms to make them widely available.
https://www.technologyreview.com/2019/10/10/132667/the-biggest-threat-of-deepfakes-isnt-the-deepfakes-themselves/
https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf

6.5. Lethal autonomous weapons (LAWs)

Lethal autonomous weapons are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons, killer robots, or slaughterbots. LAWs may operate in the air, on land, on water, under water, or in space. The autonomy of current systems as of 2018 was restricted in the sense that a human gives the final command to attack, though there are exceptions with certain "defensive" systems.
The group Campaign to Stop Killer Robots formed in 2013. In July 2015, over 1,000 experts in artificial intelligence signed a letter warning of the threat of an
artificial intelligence arms race and calling for a ban on autonomous weapons. The letter was presented in Buenos Aires at the 24th International Joint Conference
on Artificial Intelligence (IJCAI-15) and was co-signed by Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn and Google
DeepMind co-founder Demis Hassabis, among others.
In addition, we need new taxes and financial laws that channel part of the surplus value resulting from the use of technology into supporting educational and development programs, as well as increased investment in artificial intelligence safety research, treated as an integral part of computer science research budgets rather than as an add-on.
https://futureoflife.org/lethal-autonomous-weapons-pledge/?submitted=1#confirmation
https://en.wikipedia.org/wiki/Lethal_autonomous_weapon

6.6. Safety Engineering for Artificial General Intelligence
It is essential to supplement philosophy with applied science and engineering aimed at creating safe machines: a new field which we will term "AI Safety Engineering." For brain-inspired AIs, the focus will be on preserving the essential humanity of their values, without allowing moral corruption or technical hardware and software corruption to change them for the worse. For de novo AIs, the focus will be on defining goal systems that help humanity, and then preserving those goals under recursive self-improvement toward superintelligence.
The Consortium on the Landscape of AI Safety (CLAIS) is a global not-for-profit organisation which oversees the production and use of the AI Safety Landscape.
This initiative aims at defining an AI safety landscape that provides a "view" of the current needs, challenges, state of the art, and practice of this field, as a key step towards developing an AI Safety body of knowledge. Recognizing the need for an AI Safety Landscape is pivotal for the following reasons:

● More consensus is crucial: Achieving more consensus in terminology and meaning is a key step towards aligning the understanding of engineering and
socio-technical concepts, existing/available theory and technical solutions and gaps in the diversity of AI safety. Increasing conceptual consensus has the
power of accelerating the mutual understanding of the multiple disciplines working on how to actually create, test, deploy, operate and evolve safe
AI-based systems, as well as ensuring awareness of broader strategic, ethical and policy issues. Also in any consensus there are many trade-offs and
compromises we must make.

● Focus on generally accepted knowledge: "Generally accepted" means that the knowledge described is applicable to most AI Safety problems, while still expecting that some considerations will be more relevant to certain applications or algorithms. We also expect to be somewhat forward-looking in the different interpretations, taking into consideration not only what is generally accepted today but also what we expect will be generally accepted in a longer timeframe, with the dawn of systems whose cognitive capabilities approach those of humans.
I believe that guarding against the dangers of artificial intelligence should start from humans themselves, not from a position of defense, as many believe, but from a position of building a new ethical culture for the leaders of technological transformation, the programmers, and the artificial intelligence scientists, in other words those who make artificial intelligence itself, because in the end it is a reflection of who made it.

In response to the changing threat landscape, the report's authors make four high-level recommendations:


1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.
https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf
http://intelligence.org/files/SafetyEngineering.pdf

7. Will AI replace humans?

The future we'll see tomorrow is in our hands, and it entirely depends on us. That's exactly what AI has in store for us. Despite all the hoopla surrounding it, AI will not invade our capabilities; instead it will make us more efficient and create new industries and jobs.
As the new labour market takes shape over the 2018–2022 period, governments, businesses, and individuals will find themselves confronted with a range of wholly new questions, and a range of immediate implications and priorities stand out for different stakeholders.
For governments, firstly, there is an urgent need to address the impact of new technologies on labour markets through upgraded education policies aimed at rapidly raising the education and skill levels of individuals of all ages.
Secondly, improvements in education and skills provision must be balanced with efforts on the demand side.
Thirdly, to the extent that new technologies and labour augmentation will boost productivity, incomes and wealth, governments may find that increased tax revenues provide scope to enhance social safety nets to better support those who may need support to adjust to the new labour market.

For industries, firstly, it will pay to realize that, as competition for scarce skilled talent equipped to seize the opportunities of the Fourth Industrial Revolution intensifies and becomes more costly over the coming years, there is an opportunity to support the upskilling of their current workforce toward new (and technologically reorganized) higher-skilled roles, ensuring that the workforce achieves its full potential.
Secondly, the need to ensure a sufficient pool of appropriately skilled talent creates an opportunity for businesses to truly reposition themselves as learning organizations and to receive support for their reskilling and upskilling efforts from a wide range of stakeholders.
Thirdly, with the increasing importance of talent platforms and online workers, conventional industries, too, should be thinking strategically about how these action items could be applied to the growing 'gig' and platform workforces as well.
For workers, there is an unquestionable need to take personal responsibility for one's own lifelong learning and career development. It is equally clear that many individuals will need to be supported through periods of job transition and phases of retraining and upskilling by governments and employers.

Ultimately, the core objective for governments, industries and workers alike should be to ensure that tomorrow’s jobs are fairly remunerated, entail treatment with
respect and decency and provide realistic scope for personal growth, development and fulfilment.

http://reports.weforum.org/future-of-jobs-2018/conclusions/

8. Conclusion

AI changes our cognitive thinking in terms of time and human effort. Learning to pick up an object with your hand cost millions of years of evolution and several years of childhood practice; artificial intelligence does it with many lines of code and a set of algorithms.
I can imagine how the world will be if an AI computer passes the Turing Test again with a higher score than Eugene Goostman, if motion learning aligns with cognitive machine learning in the same rhythm, and if AI uses the IoT through 5G. I think the Earth will no longer be the planet our fathers knew; it will be another planet.
In such a scenario, we need to start thinking deeply about what outcome we prefer and how we steer humanity in that direction — because, as Tegmark rightly
points out, “if we don’t know what we want, we’re unlikely to get it.” The impact of AI on the future of humanity is much too important to be left in the hands of
parochial politicians fanning the flames of nationalism and corporations worried about quarterly revenues. This requires all of us to be more aware and to make
better-informed decisions about the kind of future we want. Max Tegmark’s book is a grand and exhilarating study of the subject and an excellent place to start.

Governments and institutions must move from the idea that we would like to have a technological shift to the stance that we must have a technological shift. We want to build the power of artificial intelligence, steer it to serve humanity, and set a specific goal for it. The education required to lead AI is centered around research, engineering, and design, aligned with the traditional sciences.

Indeed, here in 2020, I have noticed that the current educational curricula for my children, who are in primary school, are only an improved version of what I studied thirty years ago! This is not enough to keep pace with the rapid development in the uses of artificial intelligence; educational institutions must start to modify their curricula in a revolutionary and completely different way, as a few schools around the world already do.

What I would like to say is that artificial intelligence will preserve human life and improve its quality at a speed that could not be imagined by a person who lived in the last century. I think it is wise for governments and scientific research institutions to lead this development rather than ignore it, and also to explain the meaning and importance of artificial intelligence to groups that refuse to adapt to the new stage or consider it a threat, and to show that the movement of human development will not stop no matter how harsh the conditions; this is what we have learned from history. New legislation should start in line with the era of technological transformation, and it should use the surplus income generated by technology to finance the training and rehabilitation of workers who have lost their jobs.

Finally, I believe it is not wise to shut down Airbus and Boeing because their aircraft were crashed into buildings on September 11th. As many scientists, and Tegmark himself, say: let's build AI that empowers us, but does not overpower us.

https://medium.com/awecademy/review-life-3-0-being-human-in-the-age-of-artificial-intelligence-by-max-tegmark-b3c129aae8da
https://www.telenor.com/the-perfect-storm-5g-iot-and-ai/

Extra links

https://cbmm.mit.edu/
https://valohai.com/success-stories/

https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/google-large-scale-robotic-grasping-project

https://futurism.com/google-artificial-intelligence-built-ai
https://en.wikipedia.org/wiki/Marvin_Minsky
​https://www.scottaaronson.com/blog/?p=1858
https://digital.hbs.edu/platform-rctom/submission/google-duplex-does-it-pass-the-turing-test/
https://www.digitaltage.swiss/programm/
http://bonsai.hgc.jp/people/miyano/profile.html
https://scholar.google.com/citations?hl=en&user=gLnCTgIAAAAJ&view_op=list_works&sortby=pubdate
https://medium.com/vsinghbisen/where-is-artificial-intelligence-used-areas-where-ai-can-be-used-14ba8c092e73
https://autonomousweapons.org/
https://futureoflife.org/lethal-autonomous-weapons-pledge/?submitted=1#confirmation
https://www.tomorrow.city/

