
ARTIFICIAL INTELLIGENCE

A report submitted in partial fulfilment of the requirements for the degree of

MASTER OF BUSINESS ADMINISTRATION

(2018 - 2020)

Submitted to:
Mrs. Shivinder Kaur
(Assistant Professor)

Submitted by:
Akshay Sharma
Section – D
Roll No.: 18421010

SCHOOL OF MANAGEMENT STUDIES


PUNJABI UNIVERSITY PATIALA


What is Artificial Intelligence?

Artificial intelligence (AI) is the broad term for computer systems that are programmed to exhibit human-like intelligence, such as problem solving and learning.


Artificial Intelligence History

The term artificial intelligence was coined in 1956, but AI has become more popular today
thanks to increased data volumes, advanced algorithms, and improvements in computing
power and storage.

Early AI research in the 1950s explored problem solving and symbolic methods. In the 1960s, the US Department of Defence took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defence Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri and Alexa were household names.

This work paved the way for the automation and formal reasoning that we see in computers
today, including decision support systems and smart search systems that can be designed to
complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take
over the world, the current evolution of AI technologies isn’t that scary – or quite that smart.
Instead, AI has evolved to provide many specific benefits in every industry. It is now being
used in sports, health care, retail and many other industries.

Why is artificial intelligence important?

• AI automates repetitive learning and discovery through data. But AI is different from hardware-driven, robotic automation. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks reliably and without fatigue. For this type of automation, human inquiry is still essential to set up the system and ask the right questions.


• AI adds intelligence to existing products. In most cases, AI will not be sold as an individual application. Rather, products you already use will be improved with AI capabilities, much like Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies at home and in the workplace, from security intelligence to investment analysis.

• AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that the algorithm acquires a skill: the algorithm becomes a classifier or a predictor. So, just as the algorithm can teach itself how to play chess, it can teach itself what product to recommend next online. And the models adapt when given new data. Backpropagation is an AI technique that allows the model to adjust, through training and added data, when the first answer is not quite right.

• AI analyses more and deeper data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers was almost impossible a few years ago. All that has changed with enormous computing power and big data. You need lots of data to train deep learning models because they learn directly from the data. The more data you can feed them, the more accurate they become (a minimal code sketch after this list illustrates the idea).

• AI achieves incredible accuracy through deep neural networks, which was previously impossible. For example, your interactions with Alexa, Google Search and Google Photos are all based on deep learning, and they keep getting more accurate the more we use them. In the medical field, AI techniques such as deep learning, image classification and object recognition can now be used to find cancer on MRIs with the same accuracy as highly trained radiologists.

• AI gets the most out of data. When algorithms are self-learning, the data itself can become intellectual property. The answers are in the data; you just have to apply AI to get them out. Since the role of the data is now more important than ever before, it can create a competitive advantage. If you have the best data in a competitive industry, even if everyone is applying similar techniques, the best data will win.
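
To make these points concrete, here is a minimal, illustrative sketch (not taken from the report) of a small neural-network classifier with five hidden layers that learns from data via backpropagation; the synthetic "fraud-like" dataset, the layer sizes and all parameters are assumptions chosen only for demonstration.

# Illustrative sketch only (not from the report): a small neural-network classifier
# with five hidden layers, trained by backpropagation on synthetic "fraud-like" data.
# The dataset, layer sizes and parameters are assumptions chosen for demonstration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic transactions: 20 numeric features; label 1 = fraudulent, 0 = legitimate.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Five hidden layers; the weights are adjusted by backpropagation during fit().
model = MLPClassifier(hidden_layer_sizes=(64, 64, 32, 32, 16),
                      max_iter=300, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", round(model.score(X_test, y_test), 3))

Feeding such a model more labelled records typically improves its held-out accuracy, which is the point the bullets above make about data volume.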

Role of business analyst in Artificial Intelligence

Artificial intelligence (AI) is an overarching term used to describe how computers are
programmed to exhibit human-like intelligence such as problem solving and learning. This
definition of AI is broad and non-specific, which is part of the reason why the scope of AI can
sometimes be confusing. As machines become increasingly capable of performing
"intelligent" tasks, those tasks slowly become commonplace and as such are removed from
the scope of what is generally accepted as artificial intelligence. This is known as the AI
effect. A more precise definition might be any device that takes in information from its
environment and acts on it to maximize the chance of achieving its goal.

Imagine a computer program that accepts loan applicant information, applies several complex
decisioning rules, and determines whether to approve the applicant for a loan based upon the
probability of default. This is a form of AI, or at least it used to be. But most of us probably
no longer find this type of behaviour complex enough to rise to the level of AI. There is a
saying that goes "AI is whatever hasn't been done yet".

The spectrum of artificial intelligence runs from narrow AI to general AI. Determining
whether to approve a loan applicant is narrow AI. It's a program built with very specific rules
to solve a very specific problem. General AI is on the other end of the spectrum. It's what
people think about when they imagine a fully independent and reasoning superhuman-like
machine.

Two rapidly expanding areas of AI are machine learning and deep learning. They are best
described as techniques for achieving artificial intelligence and are driving massive and
accelerating progress in the field. You can no longer speak about AI without mentioning
them.


Machine learning is an approach that goes beyond programming a computer to exhibit "smart" behaviour. Machine learning programs learn from the environment and improve
their performance over time. Most machine learning techniques require the programmer to
examine the dataset ahead of time and identify the important features. Features are attributes
of the data that best correlate to successfully predicting the desired output. For example, a
credit score is likely an important feature of the loan applicant dataset when determining the
risk of loan default. The programmer then determines the best models for the machine
learning program to apply to the features such that the error rate of predicted outputs is
minimized. It's important to understand that a machine learning program must be trained.
Hundreds or thousands of well-defined data records need to be fed into the program so the
predictive model can refine itself over time. With each record it learns to more accurately
predict outputs when given a new input.
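
As an illustrative sketch (not the report's own example), the workflow described above might look like this in Python, with the programmer hand-picking two features, credit score and debt-to-income ratio; the records, labels and model choice are assumptions made up for demonstration.

# Illustrative sketch only: classic machine learning with hand-picked features.
# The feature choice, records, labels and model are assumptions made up for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hand-picked features per applicant: [credit_score, debt_to_income_ratio]
X = np.array([[720, 0.20], [580, 0.55], [690, 0.35], [610, 0.60],
              [750, 0.10], [560, 0.70], [640, 0.40], [700, 0.25]])
y = np.array([0, 1, 0, 1, 0, 1, 1, 0])        # 1 = defaulted, 0 = repaid

# Training refines the model's weights so the error on these labelled records is minimized.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Predict the default risk for a new applicant.
new_applicant = np.array([[650, 0.45]])
print("Probability of default:", round(model.predict_proba(new_applicant)[0, 1], 2))

With hundreds or thousands of labelled records instead of eight, the fitted model would keep refining itself and its predictions would become more reliable.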

Another popular AI technique, which is a subset of machine learning itself, is deep learning.
Just like machine learning, deep learning programs learn and improve their performance over
time. Deep learning programs get their name due to the "deep" multi-layered neural networks
they use to learn and predict outcomes. Much like the structure of the human brain, neural
networks are made up of many nodes (like neurons) that receive inputs, perform a function,
and pass the result on to another node. By chaining many nodes together in a web-like or tree-like structure, complex decisioning can be achieved. Unlike other types of machine
learning programs, deep learning neural nets do NOT require the programmer to pre-identify
the important features of the data. They are capable of automatically extracting the data
features that are most influential to creating successful predictive outputs. Deep learning
programs require substantial computing power and massive amounts of data to be trained.
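
By contrast, a deep learning sketch can take raw inputs and let the hidden layers discover the useful features on their own; the example below, which uses scikit-learn's small handwritten-digits dataset and an arbitrarily chosen layer layout, is an illustrative assumption rather than a production design.

# Illustrative sketch only: a deep, multi-layered network trained on raw pixel values,
# with no hand-picked features. The layer layout is an arbitrary assumption.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                        # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Raw pixels go straight in; the hidden layers learn the useful features themselves.
net = MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("Test accuracy:", round(net.score(X_test, y_test), 3))

No one tells the network which pixels matter; the multi-layered structure extracts the influential features itself, which is what distinguishes it from the hand-engineered approach sketched earlier.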

So, what does all of this mean for the business analyst? According to a recent Oxford University study, nearly 47% of all jobs are at high risk of being automated by around 2035. Will business analysts be among them? This is a frightening projection indeed, but let's put it into perspective. First, people in their mid-40s have much less to worry about, since they will likely be approaching retirement. For those who are younger, two decades is a lot of time to adapt and to focus on continuing education and retraining as needed.


Keep in mind that many jobs won't disappear with a single AI advancement. Instead, various
aspects of a job will slowly be replaced by AI over time.

Baidu chief scientist, Coursera co-founder, and Stanford adjunct professor Andrew Ng is a
respected leader in the AI field. During a speech at Stanford he addressed what he saw as
some of the more immediate ways that business analysts and product managers will need to
evolve as they support AI projects. Traditional applications tend to get their information
through keyboard inputs, mouse clicks, and input files in text form. But AI programs
typically require vastly larger quantities of data to be successful and, therefore, get their
information in alternative formats such as voice streams, video and photographs, much of it arriving in real time. There is not yet consensus on how best to define and communicate requirements for these kinds of sources. This is perhaps the first and most immediate opportunity for business analysts: adapting our role to AI projects. One thing is for certain:
the safest place to be when AI starts wiping out jobs is working in AI.

Business analytics is a core asset for companies focused on delivering on business objectives of growth and revenue. On an annual basis, marketers now spend as much as fifty billion dollars on business analytics. By providing data-driven solutions, business analytics is essential for smarter decisions, helping companies understand their customers and operations. Also, business analytics should not be confused with business intelligence. While business intelligence provides information for identifying aspects of the business, business analytics explains the reasons behind business performance and uses that information to forecast future results.

An excellent example of business analytics can be seen in Google's HR (Analytics at Google). By creating a People Analytics department, Google's HR uses data to make decisions. To learn about the effectiveness of its managers, Google turned to business analytics as well. Through Project Oxygen, a codename for the initiative to learn about management, data revealed that teams with better managers had more content and productive employees. To learn which incentives drive better management, the impact of the Great Manager Award was observed. Through the business analytics used in this process, Google HR decided to continue the Great Manager Award and made revisions to its training program.


Though business analytics is often associated with "big data" and "big industry", it has played a crucial role in SMBs as well. Adriana Papell, a fashion company, had difficulties in using data to make business decisions. By applying business analytics practices, the company was able to learn which of its products sell the most during specific times. The data helped increase its sales by 15%. Columbus Foods also used business analytics to track historical sales data for its meat. Through this information, it was able to understand the buying patterns of its customers and the exact demand for specific meats.

Such examples help us better understand the significance of business analytics. It is the user stories, and the way teams actually use data, that show the impact analytics makes on business decisions.

Robotics, AI and machine learning will have social, political and business effects, transforming many modern industries and displacing jobs. A research paper published in 2017 estimated that 47% of all US jobs are at "high risk" of being automated in the next 20 years. Does that mean technology will be a net job destroyer? Past revolutions have in fact brought increased productivity and resulted in net job creation. Of course, the nature of work has constantly evolved over time; some tasks have been delegated to technology while new tasks have emerged. It is still unclear what new tasks AI will create, but current trends suggest some jobs are reasonably "safe" in the short term, particularly those requiring:

• Extensive human contact
• Social skills
• Strategic and creative thinking
• Being comfortable with ambiguity and unpredictability.

It is common to view new technologies as competitors to humans rather than as complements, especially amidst growing fear that AI threatens our employment. The reality is that machines are better than us at crunching numbers, memorizing, predicting, and executing precise moves; robots relieve us of tedious, dangerous and physically demanding tasks. A study has even shown that computer-based personality judgements can be more accurate than those of humans. Based on "Facebook likes," a computer-based prediction tool was able to beat a human colleague after just 10 likes. It needed 70 likes to beat a friend or roommate, 150 to beat a family member, and 300 to beat a spouse. But AI cannot (yet) replace humans when
creativity, perception and abstract thinking are required. Hence, AI systems can serve as
partners that can augment and improve many aspects of work and life. Table 1 provides
examples of products and AI-related technologies and their potentially relevant industries. In
a data-centric world, these systems can synthesize tons of information and help us make
better informed decisions. They can also free up time that we can then spend doing what is
valuable to us.

As more and more tasks and decisions are delegated to algorithms, there is growing concern
about where responsibility will lie. For example, who is responsible when an algorithmic
system, initially implemented to improve fairness in employee performance assessment, ends
up reinforcing existing biases and creating new forms of injustice? And who is accountable
for the algorithmic decisions when human lives are at stake, as in recent accidents involving
self-driving cars? Should we differentiate between decisions taken by an AI vs. a human
being? Humans are not expected to justify all their decisions, but there are many cases where
they have an ethical or legal obligation to do so.

And that is where the shoe pinches. Advanced algorithms can be so complex that even the
engineers who created them do not understand their decision-making process. Consider
deep neural networks, a type of machine learning method inspired by the structure of the
human brain. All you do is feed the algorithm some inputs and let it figure out the output. We have no idea what goes on in between. As illustrated in Figure 2, there are many different pathways that could lead to the outcome, and most of the "magic" happens in the hidden layers. Moreover, this "magic" could even involve a way of processing information that is completely different from that of the human brain. A famous illustration
of this reality is Facebook’s experience with negotiating bots, which, after several rounds
of negotiations, realized that it was not necessary to use a human language to bargain.

Coming back to the question of algorithmic accountability: How do we assess the trustworthiness of algorithmic decisions when the algorithm's decision-making process is a black box? What kind of technical/legal/policy-oriented mechanisms should we implement as a solution? A straightforward option is to design these algorithms so that their "thought process" is "human-readable." If we could understand how these algorithms make their decisions, we could also potentially adjust their "thinking" to match humans' legal, moral, ethical and social standards, thus making them accountable under the law.

So far, our discussion on AI has focused on "narrow AI," which is specialized by design to perform a specific task. But what about artificial general intelligence (AGI), which could perform any cognitive task as well as a human? What would that look like? Would it have its own character and emotions? Imagine this AGI has been trained on the whole of human history. Have we always been kind to each other? Have we always treated people equally? If this machine's input is our history, why would it behave differently from us? Imagine we simply asked an AGI to calculate the number pi. What would prevent this machine from killing us at some point to create a more powerful machine to calculate pi (i.e. to carry out our instruction)? After all, in building past civilizations, humans did not really care about the ants they destroyed along the way, so why would an AGI care about humans and their rules? "Just pull the plug," we hear you say. But if the AGI is smarter than you, it will have anticipated that and found a way around it, by spreading itself all over the planet, for example. For now, this question is, of course, highly philosophical. Max Tegmark, author of the book Life 3.0, identified the following schools of thought, depending on one's opinion of what AGI would mean for humanity and when (if ever) it comes to life:

• The Techno Skeptics: the only group that thinks AGI will never happen.
• The Luddites: strong AI opponents who believe AGI is definitely a bad thing.
• The Beneficial AI Movement: harbour concerns about AI and advocate AI-safety research and discussion in order to increase the odds of a good outcome.
• The Digital Utopians: say we should not worry; AGI will definitely be a good thing.


What are the challenges of using artificial intelligence?

Artificial intelligence is going to change every industry, but we have to understand its limits.

The principal limitation of AI is that it learns from the data. There is no other way in which
knowledge can be incorporated. That means any inaccuracies in the data will be reflected in
the results. And any additional layers of prediction or analysis have to be added separately.

Today’s AI systems are trained to do a clearly defined task. The system that plays poker
cannot play solitaire or chess. The system that detects fraud cannot drive a car or give you
legal advice. In fact, an AI system that detects health care fraud cannot accurately detect tax
fraud or warranty claims fraud.

In other words, these systems are very, very specialized. They are focused on a single task
and are far from behaving like humans.

Likewise, self-learning systems are not autonomous systems. The imagined AI technologies
that you see in movies and TV are still science fiction. But computers that can probe complex
data to learn and perfect specific tasks are becoming quite common.
