
National Institute of Business Management

Chennai - 020
EMBA/ MBA

Elective: Artificial Intelligence (Part I)

1. Explain the problems of Artificial Intelligence. How are artificial intelligence problems different from others?

Ans:- Artificial Intelligence (AI) is the toast of every technology-driven company. Integrating AI gives a business a massive range of opportunities to transform its value chain. Yet adopting and integrating AI technologies is a roller-coaster ride, however business-friendly it may sound: a Deloitte report says that around 94% of enterprises face potential Artificial Intelligence problems while implementing it.
As consumers and developers of AI technology, we must know both the merits and the challenges associated with its adoption. Knowing the nitty-gritty of any technology helps users and developers mitigate the risks linked to it and take full advantage of it.
It is also very important to know how a developer should address AI problems in the real world. AI technologies must be accepted as a friend, not as a foe.

1. Lack of technical knowledge


To integrate, deploy, and implement AI applications in the enterprise, an organization must understand current AI advancements and technologies as well as their shortcomings. A lack of technical know-how is hindering the adoption of this niche domain in most organizations; currently, only 6% of enterprises have a smooth ride adopting AI technologies. Enterprises require specialists to identify roadblocks in the deployment process, and skilled human resources also help the team track the return on investment of adopting AI/ML solutions.
2. The price factor
Small and mid-sized organizations struggle a lot when it comes to adopting AI technologies, as it is a costly affair. Even big firms like Facebook, Apple, Microsoft, Google, and Amazon (FAMGA) allocate separate budgets for adopting and implementing AI technologies.
3. Data acquisition and storage
One of the biggest Artificial Intelligence problems is data acquisition and storage. Business AI systems depend on sensor data as their input, and a mountain of sensor data is collected to validate the AI. Irrelevant and noisy datasets cause obstruction, as they are hard to store and analyze.
AI works best when it has a good amount of quality data available to it. The algorithm becomes strong and performs well as the relevant data grows; the AI system fails badly when enough quality data isn't fed into it.
With small input variations in data quality having such profound results on outcomes and
predictions, there’s a real need to ensure greater stability and accuracy in Artificial Intelligence.
Furthermore, in some industries, such as industrial applications, sufficient data might not be
available, limiting AI adoption.
4. Rare and expensive workforce
As mentioned above, adopting and deploying AI technologies requires specialists like data scientists, data engineers, and other SMEs (Subject Matter Experts). These experts are expensive and rare in the current marketplace, and small and medium-sized enterprises often cannot stretch their tight budgets to bring in the manpower a project requires.
5. Issue of responsibility
The implementation of AI applications comes with great responsibility: some specific individual must bear the burden of any hardware malfunction. Earlier, it was relatively easy to determine whether an incident was the result of the actions of a user, a developer, or a manufacturer; with self-learning AI systems, that attribution becomes much harder.
6. Ethical challenges
One of the major AI problems yet to be tackled is ethics and morality. Developers are technically grooming AI bots toward a perfection where they can flawlessly imitate human conversation, making it increasingly tough to spot the difference between a machine and a real customer-service rep.
An artificial intelligence algorithm makes predictions based on the training given to it, labeling things according to the assumptions in the data it is trained on. It will simply ignore the correctness of that data: if the algorithm is trained on data that reflects racism or sexism, the predictions will mirror it back instead of correcting it automatically. Some current algorithms have mislabeled black people as 'gorillas'. Therefore, we need to make sure that algorithms are fair, especially when they are used by private and corporate entities.
7. Lack of computation speed
AI, machine learning, and deep learning solutions require a high degree of computation speed, offered only by high-end processors. The larger infrastructure requirements and pricing associated with these processors have become a hindrance to general adoption of AI technology. In this scenario, cloud computing environments and multiple processors running in parallel offer a potent alternative to cater to these computational requirements. As the volume of data available for processing grows exponentially, computation-speed requirements will grow with it, so it is imperative to develop next-generation computational infrastructure solutions.
8. Legal Challenges
An AI application with an erroneous algorithm and poor data governance can cause legal challenges for a company. This is yet again one of the biggest Artificial Intelligence problems a developer faces in the real world. A flawed algorithm built on an inappropriate set of data can leave a colossal dent in an organization's profits, since an erroneous algorithm will keep making incorrect and unfavorable predictions. Problems like data breaches can also be a consequence of weak data governance: to an algorithm, a user's PII (personally identifiable information) acts as feedstock, which may slip into the hands of hackers and land the organization in legal trouble.
9. AI Myths & Expectation:
There’s a quite discrepancy between the actual potential of the AI system and the expectations of
this generation. Media says, Artificial Intelligence, with its cognitive capabilities, will replace
human’s jobs.
However, the IT industry has a challenge on their hands to address these lofty expectations by
accurately conveying that AI is just a tool that can operate only with the indulgence of human
brains. AI can definitely boost the outcome of something that will replace human roles like
automation of routine or common work, optimizations of every industrial work, data-driven
predictions, etc.
However, in most of the occasions (particularly in highly specialized roles), AI cannot substitute
the caliber of the human brain and what it brings to the table.
Not everything you hear about AI is true; AI is often over-hyped.
10. Difficulty of assessing vendors
Tech procurement in any emerging field is quite challenging, and AI is particularly vulnerable. Businesses struggle to know exactly how they can use AI effectively, because many non-AI companies engage in 'AI washing' and overstate the AI capabilities of their offerings.
It’s true that AI technology is a luxurious retreat because you cannot oversee the radical changes
it brings in to the organization. However, to implement it an organization needs experts who are
hard to find. For successful adoption, it needs a high-degree computation processing. Enterprises
should concentrate on how they can responsibly mitigate these Artificial Intelligence problems
rather than staying back and ignore this ground-breaking technology.
The key lies in minimizing the Artificial Intelligence problems and maximizing the benefits
through the creation of an extensive technology adoption roadmap that understands the core
capabilities of artificial intelligence.

2. There are many types of reasoning methodologies which generate knowledge in artificial intelligence. Explain the major types of reasoning.

Ans:- Reasoning is the mental process of deriving logical conclusions and making predictions from available knowledge, facts, and beliefs. Or we can say, "Reasoning is a way to infer facts from existing data." It is the general process of thinking rationally to find valid conclusions.

In artificial intelligence, reasoning is essential so that the machine can think rationally, like a human brain, and perform like a human.
Types of Reasoning

In artificial intelligence, reasoning can be divided into the following categories:

o Deductive reasoning
o Inductive reasoning
o Abductive reasoning
o Common Sense Reasoning
o Monotonic Reasoning
o Non-monotonic Reasoning

1. Deductive reasoning:

Deductive reasoning is deducing new information from logically related known information. It is
the form of valid reasoning, which means the argument's conclusion must be true when the
premises are true.

Deductive reasoning is a type of propositional logic in AI, and it requires various rules and facts. It is sometimes referred to as top-down reasoning, the opposite of inductive reasoning.

In deductive reasoning, the truth of the premises guarantees the truth of the conclusion.

Deductive reasoning mostly starts from the general premises to the specific conclusion, which
can be explained as below example.

Example:

Premise-1: All humans eat veggies.

Premise-2: Suresh is human.

Conclusion: Suresh eats veggies.

The general process of deductive reasoning runs from theory, to hypothesis, to observed patterns, to confirmation.
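As a toy illustration (not from the text), the Suresh example can be run as a tiny forward-chaining program, where a rule fires whenever its premise matches a known fact; the names `rules`, `facts`, and `deduce` are hypothetical:

```python
# Toy deductive reasoner: rules are (premise_property, conclusion_property)
# pairs, facts are (entity, property) pairs. Hypothetical names throughout.

rules = [("human", "eats_veggies")]    # Premise-1: all humans eat veggies
facts = {("Suresh", "human")}          # Premise-2: Suresh is human

def deduce(facts, rules):
    """Repeatedly apply modus ponens until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for entity, prop in list(derived):
            for premise, conclusion in rules:
                if prop == premise and (entity, conclusion) not in derived:
                    derived.add((entity, conclusion))
                    changed = True
    return derived

print(("Suresh", "eats_veggies") in deduce(facts, rules))  # True
```

Because the premises guarantee the conclusion, every fact this sketch derives is certain, which is exactly what distinguishes deduction from the other reasoning types below.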


2. Inductive Reasoning:

Inductive reasoning is a form of reasoning that arrives at a conclusion from a limited set of facts by the process of generalization. It starts with a series of specific facts or data and reaches a general statement or conclusion.

Inductive reasoning is a type of propositional logic, also known as cause-effect reasoning or bottom-up reasoning.

In inductive reasoning, we use historical data or various premises to generate a generic rule, for
which premises support the conclusion.

In inductive reasoning, premises provide probable supports to the conclusion, so the truth of
premises does not guarantee the truth of the conclusion.

Example:

Premise: All of the pigeons we have seen in the zoo are white.

Conclusion: Therefore, we can expect all the pigeons to be white.
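The generalization step in the pigeon example can be sketched as a toy program (illustrative only; `induce` is a hypothetical helper). Note that the conjecture it produces is only probable, never guaranteed:

```python
# Toy inductive generalizer: if every observed instance shares the same
# value, conjecture that the value holds universally. Illustrative only.

observations = ["white", "white", "white"]  # pigeons seen in the zoo so far

def induce(observations):
    """Generalize from specific observations to a universal conjecture."""
    values = set(observations)
    if len(values) == 1:
        return values.pop()   # e.g. "all pigeons are white" -- probable, not certain
    return None               # conflicting observations: no universal rule

print(induce(observations))   # white
```

A single non-white pigeon would make `induce` return `None`, showing why the truth of the premises does not guarantee the truth of an inductive conclusion.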

3. Abductive reasoning:

Abductive reasoning is a form of logical reasoning which starts with single or multiple
observations then seeks to find the most likely explanation or conclusion for the observation.

Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning the premises do not guarantee the conclusion.

Example:

Implication: Cricket ground is wet if it is raining

Axiom: Cricket ground is wet.

Conclusion: It is raining.
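A minimal sketch of this pattern, assuming a hypothetical table of cause-to-effect implications (the sprinkler entry is an invented extra cause): abduction runs the implications backwards, collecting every cause that would explain the observation, which is why "it is raining" is only the most likely explanation rather than a guaranteed one:

```python
# Toy abductive reasoner: implications map a cause to its effect, and
# abduction collects every cause that would explain an observation.

implications = {"raining": "ground_wet", "sprinkler_on": "ground_wet"}

def abduce(observation, implications):
    """Run the implications backwards: which causes produce this effect?"""
    return [cause for cause, effect in implications.items() if effect == observation]

print(abduce("ground_wet", implications))  # ['raining', 'sprinkler_on']
```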
4. Common Sense Reasoning

Common sense reasoning is an informal form of reasoning, which can be gained through
experiences.

Common sense reasoning simulates the human ability to make presumptions about events which occur every day.

It relies on good judgment rather than exact logic and operates on heuristic
knowledge and heuristic rules.

Example:

1. One person can be at one place at a time.


2. If I put my hand in a fire, then it will burn.

The above two statements are the examples of common sense reasoning which a human mind
can easily understand and assume.

5. Monotonic Reasoning:

In monotonic reasoning, once a conclusion is drawn, it remains the same even if we add other information to the existing information in our knowledge base. In monotonic reasoning, adding knowledge does not decrease the set of propositions that can be derived.

To solve monotonic problems, we can derive the valid conclusion from the available facts only,
and it will not be affected by new facts.

Monotonic reasoning is not useful for the real-time systems, as in real time, facts get changed, so
we cannot use monotonic reasoning.

Monotonic reasoning is used in conventional reasoning systems, and a logic-based system is monotonic.

Any theorem proving is an example of monotonic reasoning.

Example:

o Earth revolves around the Sun.

It is a true fact, and it cannot be changed even if we add another sentence to the knowledge base, such as "The moon revolves around the earth" or "Earth is not round."
Advantages of Monotonic Reasoning:

o In monotonic reasoning, each old proof will always remain valid.


o If we deduce some facts from the available facts, they will always remain valid.

Disadvantages of Monotonic Reasoning:

o We cannot represent real-world scenarios using monotonic reasoning.


o Hypothesis knowledge cannot be expressed with monotonic reasoning, which means
facts should be true.
o Since we can only derive conclusions from the old proofs, new knowledge from the
real world cannot be added.

6. Non-monotonic Reasoning

In non-monotonic reasoning, some conclusions may be invalidated if we add more information to our knowledge base.

Logic will be said as non-monotonic if some conclusions can be invalidated by adding more
knowledge into our knowledge base.

Non-monotonic reasoning deals with incomplete and uncertain models.

"Human perceptions for various things in daily life, "is a general example of non-monotonic
reasoning.

Example: Suppose the knowledge base contains the following knowledge:

o Birds can fly


o Penguins cannot fly
o Pitty is a bird

So from the above sentences, we can conclude that Pitty can fly.

However, if we add another sentence to the knowledge base, "Pitty is a penguin", we conclude "Pitty cannot fly", which invalidates the above conclusion.

Advantages of Non-monotonic reasoning:

o For real-world systems such as Robot navigation, we can use non-monotonic reasoning.
o In Non-monotonic reasoning, we can choose probabilistic facts or can make assumptions.
Disadvantages of Non-monotonic Reasoning:

o In non-monotonic reasoning, the old facts may be invalidated by adding new sentences.
o It cannot be used for theorem proving.

3.Why does the EM (expectation maximization) algorithm necessarily converge?

Ans:- Mitchell, Keller, and Kedar-Cabelli (1986) present a unifying framework for an explanation-based approach to generalization. Its underlying idea is to form an explanation structure (such as a plan or a proof tree) for a specific situation, and to generalize the explanation structure so that it applies to a wider range of situations. Explanation-based generalization (EBG) uses a logical representation for knowledge and an inferential view of problem solving. DeJong and Mooney (1986) suggest a more general term, explanation-based learning (EBL), to also cover systems that may specialize knowledge using information from an explanation structure.
Explanation-based learning (EBL) is a technique by which an intelligent system can learn by observing examples. EBL systems are characterized by the ability to create justified generalizations from single training instances. They are also distinguished by their reliance on background knowledge of the domain under study. Although EBL is usually viewed as a method for performing generalization, it can be viewed in other ways as well. In particular, EBL can be seen as a method that performs four different learning tasks: generalization, chunking, operationalization, and analogy.

The EBL Process


Example

Domain theory:

Fixes(u,u) → Robust(u) // An individual that can fix itself is robust

Sees(x,y) ∧ Habile(x) → Fixes(x,y) // A habile individual that can see another entity can
// fix that entity

Robot(w) → Sees(w,w) // All robots can see themselves

R2D2(x) → Habile(x) // R2D2-class individuals are habile

………
Facts:

Robot(Num5)

R2D2(Num5)

Age(Num5,5)

………

Goal:

Robust(Num5)
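As an illustrative sketch (not part of the original example), the proof of the goal can be reproduced by forward chaining over the four domain-theory rules: Robot(Num5) yields Sees(Num5,Num5), R2D2(Num5) yields Habile(Num5), those two yield Fixes(Num5,Num5), and Fixes(u,u) yields Robust(u):

```python
# Forward-chaining sketch over the domain theory above; facts are tuples
# whose first element is the predicate name.

facts = {("Robot", "Num5"), ("R2D2", "Num5")}

def forward_chain(facts):
    """Apply the four domain-theory rules until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for pred, *args in derived:
            if pred == "Robot":                       # Robot(w) -> Sees(w, w)
                new.add(("Sees", args[0], args[0]))
            if pred == "R2D2":                        # R2D2(x) -> Habile(x)
                new.add(("Habile", args[0]))
        for fact in derived:
            if fact[0] == "Sees" and ("Habile", fact[1]) in derived:
                new.add(("Fixes", fact[1], fact[2]))  # Sees(x,y) & Habile(x) -> Fixes(x,y)
            if fact[0] == "Fixes" and fact[1] == fact[2]:
                new.add(("Robust", fact[1]))          # Fixes(u,u) -> Robust(u)
        if not new <= derived:
            derived |= new
            changed = True
    return derived

print(("Robust", "Num5") in forward_chain(facts))  # True
```

In an EBL system, the resulting proof tree would then be generalized (e.g. from Num5 to any x with Robot(x) ∧ R2D2(x)) so the explanation applies to a wider range of situations.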
EBL methods also address a more theoretical issue: EBL may be viewed as an attempt to solve the problem of inductive bias. As described by Mitchell in [Mitchell 80], every system that learns from examples requires some sort of bias. Mitchell defines bias to be "any basis for choosing one generalization over another, other than strict consistency with the observed training instances" [Mitchell 80]. A system lacking inductive bias would not be capable of making predictions beyond the training examples it has already seen. Typical types of bias include using a restricted vocabulary in the generalization language [Utgoff 86] and restricting the form of concept descriptions to conjunctive expressions [Vere 75]. EBL may be viewed as an attempt to use "background knowledge" or a "domain model" as a type of bias. The EBL method is biased toward making generalizations that can be justified by explaining them in terms of the domain model. EBL programs usually represent domain knowledge in a declarative style; EBL may therefore be said to utilize a declarative bias representation.

Several advantages result from representing bias in terms of a declarative domain model. To begin with, a declarative bias can be interpreted in terms of direct statements about the domain. For this reason, the bias is subject to evaluation by human experts even before it is used to process training examples. By comparison, a bias such as a restricted vocabulary is not immediately interpretable as a statement about the domain [Dietterich 86]; it therefore cannot be easily evaluated except by testing its consistency with the training examples [Russell and Grosof 87]. A declarative bias also offers advantages of domain independence. As observed in [Dietterich and Michalski 81], greater domain independence is achieved if the bias is contained in a separate module. The declarative domain models used by EBL systems are usually kept separate and can be easily modified. Traditional types of bias, such as the two cited above, are normally built into the representation and procedures used by the learning system and are therefore not easily modifiable. A declarative bias representation also helps to integrate diverse sources of background knowledge into the learning process [Russell and Grosof 87].

Relation to other Machine Learning Research

EBL is characterized by the fact that it makes use of extensive background knowledge to guide
the learning process. A number of researchers outside the area of EBL have also used such
knowledge-intensive approaches to machine learning. Some early examples include Lenat's AM
program [Lenat 82], Sussman's HACKER program [Sussman 75] and Soloway's program for
learning rules of competitive games [Soloway 77]. These systems are difficult to compare since
they use diverse program architectures. Their background knowledge is embedded in specialized,
domain dependent heuristics, such as Lenat's heuristics for creating and evaluating concepts and
Sussman's knowledge base of bugs and patches. Additional programs using knowledge-intensive
learning techniques include [Buchanan and Mitchell 78; Vere 77; Lebowitz 83; Stepp and
Michalski 86; Lenat et al. 86]. The search control technique known as "dependency-directed backtracking" (DDB) provides an interesting comparison to EBL. This technique is used to control the process of backtracking when a contradiction or failure is encountered during a search process [Doyle 79; Stallman 77]. DDB may also be interpreted as a type of explanation-based learning: it uses data dependencies to generalize the context of a contradiction, or search failure, in much the same manner as EBL uses explanations to generalize from training examples.

4. How will AI change the practice of law? Explain.


Ans:- AI is rapidly infiltrating the practice of law. A recent survey of managing partners of U.S.
law firms with 50 or more lawyers found that over 36 percent of law firms, and over 90 percent
of large law firms (>1,000 attorneys), are either currently using or actively exploring use of AI
systems in their legal practices. The following summary describes some of the major categories
and examples of such applications.
Technology-Assisted Review

Technology-assisted review (TAR) was the first major application of AI in legal practice, using technology solutions to organize, analyze, and search very large and diverse data sets for e-discovery or record-intensive investigations. Going far beyond keyword and Boolean searches, studies show that TAR provides a fifty-fold increase in document-review efficiency over human review.
For example, predictive coding is a TAR technique that can be used to train a computer to
recognize relevant documents by starting with a “seed set” of documents and providing human
feedback; the trained machine can then review large numbers of documents very quickly and
accurately, going beyond individual words and focusing on the overall language and context of
each document. Numerous vendors now offer TAR products.
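As a deliberately simplified sketch of the seed-set idea (real TAR products use far richer models; every name and document here is invented), a reviewer labels a few documents, the system counts which words distinguish the classes, and unreviewed documents are then ranked by score:

```python
# Toy predictive-coding sketch: train a bag-of-words scorer on a
# hand-labeled "seed set", then rank unreviewed documents by relevance.

from collections import Counter

seed_set = [
    ("contract breach damages payment", 1),   # reviewed: relevant
    ("invoice payment overdue breach",  1),
    ("office party lunch schedule",     0),   # reviewed: not relevant
    ("holiday schedule lunch menu",     0),
]

def train(seed_set):
    """Count how often each word appears in relevant vs. irrelevant documents."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in seed_set:
        counts[label].update(text.split())
    return counts

def score(counts, text):
    """Score a document: positive means its words lean toward 'relevant'."""
    return sum(counts[1][w] - counts[0][w] for w in text.split())

counts = train(seed_set)
docs = ["payment terms and breach of contract", "team lunch this holiday"]
ranked = sorted(docs, key=lambda d: score(counts, d), reverse=True)
print(ranked[0])  # the contract-related document ranks first
```

In a real TAR workflow this loop is iterative: the reviewer corrects the machine's top-ranked suggestions, the model is retrained, and accuracy improves with each round of human feedback.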
Legal Analytics

Legal analytics use big data, algorithms, and AI to make predictions from or detect trends in
large data sets. For example, Lex Machina, now owned by LexisNexis, uses legal analytics to
predict trends and outcomes in intellectual property litigation, and is now expanding to other
types of complex litigation. Wolters Kluwer leverages a massive database of law firm billing
records to provide baselines, comparative analysis, and efficiency improvements for in-house
counsel and outside law firms on staffing, billing, and timelines for various legal matters.
Ravel Law, also recently purchased by LexisNexis, uses legal analytics of judicial opinions to
predict how specific judges may decide cases, including providing recommendations on specific
precedents and language that may appeal to a given judge. Law professor Daniel Katz and his
colleagues have utilized legal analytics and machine learning to create a highly accurate
predictive model for the outcome of Supreme Court decisions.
Practice Management Assistants

Many technology companies and law firms are partnering to create programs that can assist with
specific practice areas, including transactional and due diligence, bankruptcy, litigation research
and preparation, real estate, and many others. Sometimes billed as the first robot lawyer, ROSS
is an online research tool using natural language processing powered by IBM Watson that
provides legal research and analysis for several different law firms today, and can reportedly
read and process over a million legal pages per minute. It was first publicly adopted by the law
firm BakerHostetler to assist with its bankruptcy practice, but is now being used by that firm and
several others for other practice areas as well.
A similar system is RAVN developed in the United Kingdom and first publicly adopted by the
law firm Berwin Leighton Paisner in London in 2015 to assist with due diligence in real estate
deals by verifying property details against the official public records. According to the law firm
attorney in charge of implementation: “once the program has been trained to identify and work
with specific variables, it can complete two weeks’ work in around two seconds, making [it] over
12 million times quicker than an associate doing the same task manually.” Kira is another AI
system that has already been adopted by several law firms to assist with automated contract
analysis and data extraction and due diligence in mergers and acquisitions.
Legal Bots

Bots are interactive online programs designed to interact with an audience to assist with a
specific function or to provide customized answers to the recipient’s specific situation. Many law
firms are developing bots to assist current or prospective clients in dealing with a legal issue
based on their own circumstances and facts. Other groups are developing pro bono legal bots to
assist people who may not otherwise have access to the legal system.
For example, a Stanford law graduate developed an online chat bot called DoNotPay that has
helped over 160,000 people resolve parking tickets, and is now being expanded to help refugees
with their legal problems.
Legal Decision Making

AI is enabling judicial decision making in a number of ways. For example, the Wisconsin
Supreme Court recently upheld the use of algorithms in criminal sentencing decisions. While
such algorithms represent an early use of primitive AI (some may not consider such algorithms
AI at all), they open the door to use more sophisticated AI systems in the sentencing process in
the future. A number of online dispute resolution tools have or are being developed to
completely circumvent the judicial process.
For example, the Modria online dispute resolution tool, developed from the eBay dispute
resolution system, has been used to settle many thousands of disputes online using an AI system.
The U.K. government is developing an Internet-based dispute resolution system that will be used
to resolve minor (<£25,000) civil legal claims without any court involvement. Microsoft and the
U.S. Legal Services Corporation have teamed up to provide machine learning legal portals to
provide free legal advice on civil law matters to people who cannot afford to hire lawyers.
The Future of AI and the Law

These initial applications of AI to legal practice are just the early beginnings of what will be a
radical technology-based disruption to the practice of law. AI “represents both the biggest
opportunity and potentially the greatest threat to the legal profession since its formation.” The
transformative impacts of AI on legal practice will continue to accelerate going forward. AI will
take over a steadily increasing share of law firm billable hours, be applied to an ever-expanding
set of legal tasks, and require knowledge and abilities outside the existing skill set of most
current practicing attorneys. Today AI represents an opportunity for a law firm or an attorney to
be a leader in efficiency, cost-effectiveness, and productivity, but soon incorporation of AI into
practice will be a matter of keeping up rather than being a leader.
AI in the practice of law raises many broader issues that can only be briefly listed here. How will
AI change law firm billing, where a smart AI system can conduct searches and analyses in a few
seconds that formerly would have taken several weeks of an associate’s billable time? If AI
eliminates many of the more routine tasks in legal practice that are traditionally performed by
young associates, how will this affect hiring and advancement of young attorneys? How will
legal training and law schools need to change to address the new realities of AI-driven legal
practice? How will AI affect the competitive advantage of large law firms versus small and
medium-sized firms? Will companies start obtaining legal services directly from legal
technology vendors, skipping law firms altogether? Will AI systems be vulnerable to charges of
unauthorized practice of law? Given that AI systems increasingly use their own self-learning
rather than preprogrammed instructions to make decisions, how can we ensure the accuracy,
legality, and fairness of AI decisions? Will lawyers be responsible for negligence for relying on
AI systems that make mistakes? Will lawyers be liable for malpractice for not using AI that
exceeds human capabilities in certain tasks? Will self-learning AI systems need to be deposed
and take the stand as witnesses to explain their own independent decision making?
One thing is certain: there will be winners and losers among lawyers who do and do not adopt AI, respectively. As one senior lawyer recently remarked, "Unless private practice lawyers start to engage with new technology, they are not going to be relevant even to their clients." The AI train is leaving the station; it is time to jump on board.

******
