EMBA/ MBA
1.Explain the problems of Artificial Intelligence. How are Artificial Intelligence problems
different from others?
Ans:- Artificial Intelligence (AI) is the toast of every technology-driven company. Integrating
AI gives a business a massive range of transformation opportunities across the value chain. Yet
adopting and integrating AI technologies is a roller-coaster ride, no matter how business-friendly
it may sound. According to a Deloitte report, around 94% of enterprises face potential Artificial
Intelligence problems while implementing it.
As AI technology consumers and developers, we must know both the merits and the challenges
associated with adopting AI. Knowing the nitty-gritty of any technology helps the user or
developer mitigate the risks linked to the technology and take full advantage of it.
It is also important to know how a developer should address and tackle AI problems in the real
world. AI technologies must be accepted as a friend, not as a foe.
2.There are many reasoning methodologies that generate knowledge in artificial
intelligence. Explain the major types of reasoning.
Ans:- Reasoning is the mental process of deriving logical conclusions and making predictions
from available knowledge, facts, and beliefs. Or we can say, "Reasoning is a way to infer facts
from existing data." It is the general process of thinking rationally to find valid conclusions.
In artificial intelligence, reasoning is essential so that the machine can think rationally, as a
human brain does, and can perform like a human.
Types of Reasoning
o Deductive reasoning
o Inductive reasoning
o Abductive reasoning
o Common Sense Reasoning
o Monotonic Reasoning
o Non-monotonic Reasoning
1. Deductive reasoning:
Deductive reasoning is deducing new information from logically related known information. It is
a form of valid reasoning, which means the argument's conclusion must be true when the
premises are true.
Deductive reasoning is a type of propositional logic in AI, and it requires various rules and facts.
It is sometimes referred to as top-down reasoning, and is the counterpart of inductive reasoning.
In deductive reasoning, the truth of the premises guarantees the truth of the conclusion.
Deductive reasoning mostly proceeds from general premises to a specific conclusion, as the
example below shows.
Example:
Premise-1: All humans are mortal.
Premise-2: Socrates is a human.
Conclusion: Socrates is mortal.
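The deductive pattern here (modus ponens over known rules and facts) can be sketched in a few lines of Python; the facts, rules, and `deduce` helper below are illustrative, not from any particular AI system:

```python
# A minimal forward-chaining sketch of deductive reasoning (modus ponens).
# The facts and rules are invented for illustration.
facts = {"human(Socrates)"}
rules = [("human(Socrates)", "mortal(Socrates)")]  # (premise, conclusion)

def deduce(facts, rules):
    """Repeatedly apply modus ponens until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(deduce(facts, rules))  # the derived set includes "mortal(Socrates)"
```

Because the premises are true and the rule is valid, the derived conclusion is guaranteed, which is exactly the property that distinguishes deduction.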
2. Inductive reasoning:
Inductive reasoning is a form of reasoning that arrives at a conclusion from a limited set of facts
by the process of generalization. It starts with a series of specific facts or data and reaches a
general statement or conclusion.
In inductive reasoning, we use historical data or various premises to generate a generic rule for
which the premises support the conclusion.
In inductive reasoning, the premises provide only probable support for the conclusion, so the
truth of the premises does not guarantee the truth of the conclusion.
Example:
Premise: All of the pigeons we have seen in the zoo are white.
Conclusion: Therefore, we can expect all pigeons to be white.
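The generalization step can be sketched as follows; the pigeon observations and the `generalize` helper are invented, and, as the text notes, the induced rule is only probable:

```python
# Illustrative sketch of inductive generalization: induce a general rule
# from a limited sample of observations. The data is made up.
observations = [("pigeon", "white"), ("pigeon", "white"), ("pigeon", "white")]

def generalize(observations, category):
    """If every observed member of `category` shares one property,
    tentatively induce that all members have it (not guaranteed!)."""
    props = {prop for cat, prop in observations if cat == category}
    if len(props) == 1:
        return f"All {category}s are {props.pop()}"
    return None  # observations disagree, or no observations: no rule

print(generalize(observations, "pigeon"))  # "All pigeons are white"
```

A single non-white pigeon added to `observations` would make `generalize` return `None`, illustrating why the premises do not guarantee the conclusion.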
3. Abductive reasoning:
Abductive reasoning is a form of logical reasoning which starts with one or more observations
and then seeks to find the most likely explanation or conclusion for the observation.
Example:
Premise: The cricket ground becomes wet if it rains.
Observation: The cricket ground is wet.
Conclusion: It is raining.
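The "most likely explanation" step can be sketched as choosing among candidate causes; the candidate explanations and their prior probabilities below are invented for illustration:

```python
# Hypothetical sketch of abductive reasoning: given an observation, pick the
# candidate explanation that best accounts for it. Priors are invented.
explanations = {
    "it is raining":         {"effects": {"ground is wet"}, "prior": 0.6},
    "sprinkler was on":      {"effects": {"ground is wet"}, "prior": 0.3},
    "someone spilled water": {"effects": {"ground is wet"}, "prior": 0.1},
}

def abduce(observation, explanations):
    """Return the most probable explanation whose effects cover the observation."""
    candidates = {name: e["prior"] for name, e in explanations.items()
                  if observation in e["effects"]}
    return max(candidates, key=candidates.get) if candidates else None

print(abduce("ground is wet", explanations))  # "it is raining"
```

Note that, like the example in the text, the conclusion is only the best available explanation, not a logically guaranteed one.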
4. Common Sense Reasoning
Common sense reasoning is an informal form of reasoning, which can be gained through
experience.
Common sense reasoning simulates the human ability to make presumptions about events that
occur every day.
It relies on good judgment rather than exact logic and operates on heuristic
knowledge and heuristic rules.
Example:
1. One person can be at only one place at a time.
2. If I put my hand in a fire, it will burn.
The above two statements are examples of common sense reasoning, which a human mind can
easily understand and assume.
5. Monotonic Reasoning:
In monotonic reasoning, once a conclusion is drawn, it will remain the same even if we
add other information to the existing information in our knowledge base. In monotonic
reasoning, adding knowledge does not decrease the set of propositions that can be derived.
To solve monotonic problems, we can derive valid conclusions from the available facts only,
and they will not be affected by new facts.
Monotonic reasoning is not useful for real-time systems, because in real time facts change, so
we cannot use monotonic reasoning.
Example:
"Earth revolves around the Sun."
It is a true fact, and it cannot be changed even if we add another sentence to the knowledge base
like "The moon revolves around the earth" or "Earth is not round."
Advantages of Monotonic Reasoning:
o In monotonic reasoning, each old proof always remains valid.
o Facts deduced from available facts remain valid forever, which makes monotonic reasoning
suitable for theorem proving.
6. Non-monotonic Reasoning
A logic is said to be non-monotonic if some conclusions can be invalidated by adding more
knowledge to our knowledge base.
Human perception of various things in daily life is a general example of non-monotonic
reasoning.
Example: Suppose the knowledge base contains the following knowledge:
o Birds can fly.
o Penguins cannot fly.
o Pitty is a bird.
So from the above sentences, we can conclude that Pitty can fly.
However, if we add another sentence to the knowledge base, "Pitty is a penguin," which
concludes "Pitty cannot fly," it invalidates the above conclusion.
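The Pitty example can be sketched as a default rule that is defeated when new knowledge arrives; the predicate tuples and the `can_fly` helper below are illustrative:

```python
# Sketch of non-monotonic (default) reasoning: birds fly by default,
# unless an exception (being a penguin) is known.
def can_fly(kb, name):
    """Default rule with an exception: the exception defeats the default."""
    if ("penguin", name) in kb:      # known exception: retract the default
        return False
    return ("bird", name) in kb      # default conclusion for birds

kb = {("bird", "Pitty")}
print(can_fly(kb, "Pitty"))          # True: Pitty can fly, by default

kb.add(("penguin", "Pitty"))         # new knowledge arrives...
print(can_fly(kb, "Pitty"))          # False: the old conclusion is invalidated
```

Adding a fact changed (shrank) the set of conclusions, which is exactly what monotonic reasoning forbids.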
Advantages of Non-monotonic Reasoning:
o For real-world systems such as robot navigation, we can use non-monotonic reasoning.
o In non-monotonic reasoning, we can choose probabilistic facts or can make assumptions.
Disadvantages of Non-monotonic Reasoning:
o In non-monotonic reasoning, the old facts may be invalidated by adding new sentences.
o It cannot be used for theorem proving.
3.What is Explanation-Based Learning (EBL)? Explain.
Ans:- Mitchell, Keller, and Kedar-Cabelli (1986) present a unifying framework for an
explanation-based approach to generalization. Its underlying idea is to form an explanation
structure (such as a plan or a proof tree) for a specific situation, and generalize the explanation
structure so that it applies to a wider range of situations. Explanation based generalization (EBG)
uses a logical representation for knowledge, and an inferential view of problem solving. DeJong
and Mooney (1986) suggest a more general term, explanation-based learning (EBL), to also
cover systems that may specialize knowledge using information from an explanation structure.
"Explanation-Based Learning" (EBL) is a technique by which an intelligent system can learn by
observing examples. EBL systems are characterized by the ability to create justified
generalizations from single training instances. They are also distinguished by their reliance on
background knowledge of the domain under study. Although EBL is usually viewed as a method
for performing generalization, it can be viewed in other ways as well. In particular, EBL can be
seen as a method that performs four different learning tasks: generalization, chunking,
operationalization and analogy.
Domain theory:
Sees(x,y) ∧ Habile(x) → Fixes(x,y)   // A habile individual that can see another entity
                                     // can fix that entity
………
Facts:
Robot(Num5)
R2D2(Num5)
Age(Num5,5)
………
Goal:
Robust(Num5)
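The core EBG step, building an explanation for a specific case and then generalizing it by variablizing constants, can be sketched using only the Sees/Habile/Fixes fragment above (the rest of the domain theory is elided); the facts and helper functions are assumptions for illustration:

```python
# Hedged sketch of explanation-based generalization (EBG).
# Rule fragment from the domain theory: Sees(x, y) AND Habile(x) -> Fixes(x, y)

def explain(facts, actor, target):
    """Build a ground explanation that `actor` fixes `target`,
    if the rule's premises hold in the given facts."""
    if ("Sees", actor, target) in facts and ("Habile", actor) in facts:
        return {"conclusion": ("Fixes", actor, target),
                "premises": [("Sees", actor, target), ("Habile", actor)]}
    return None

def generalize(explanation, constants):
    """EBG step: replace the specific constants appearing in the explanation
    with variables, yielding a rule that covers a wider range of situations."""
    vars_ = {c: f"?{chr(ord('x') + i)}" for i, c in enumerate(constants)}
    sub = lambda lit: tuple(vars_.get(t, t) for t in lit)
    return {"conclusion": sub(explanation["conclusion"]),
            "premises": [sub(p) for p in explanation["premises"]]}

facts = {("Sees", "Num5", "Door"), ("Habile", "Num5")}
exp = explain(facts, "Num5", "Door")             # explanation for one case
general = generalize(exp, ["Num5", "Door"])      # generalized structure
print(general["conclusion"])                     # ('Fixes', '?x', '?y')
```

The generalized rule is justified because it was obtained by explaining a single training instance in terms of the domain theory, which is the distinguishing feature of EBL noted above.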
EBL methods also address a more theoretical issue. EBL may be viewed as an attempt to solve the
problem of inductive bias. As described by Mitchell in [Mitchell 80], every system that learns
from examples requires some sort of bias. Mitchell defines bias to be "any basis for choosing one
generalization over another, other than strict consistency with the observed training instances"
[Mitchell 80]. A system lacking inductive bias would not be capable of making predictions
beyond the training examples it has already seen. Typical types of bias include using a restricted
vocabulary in the generalization language [Utgoff 86] and restricting the form of concept
descriptions to conjunctive expressions [Vere 75]. EBL may be viewed as an attempt to use
"background knowledge" or a "domain model" as a type of bias. The EBL method is biased
toward making generalizations that can be justified by explaining them in terms of the domain
model. EBL programs usually represent domain knowledge in a declarative style. EBL may
therefore be said to utilize a declarative bias representation.
Several advantages result from representing bias in terms of a declarative domain model. To
begin with, a declarative bias can be interpreted in terms of direct statements about the domain.
For this reason, the bias is subject to evaluation by human experts even before it is used to
process training examples. By comparison, a bias such as a restricted vocabulary is not
immediately interpretable as a statement about the domain [Dietterich 86]. It therefore cannot be
easily evaluated except by testing its consistency with the training examples [Russell and Grosof
87]. A declarative bias also offers advantages of domain independence. As observed in
[Dietterich and Michalski 81], greater domain independence is achieved if the bias is contained
in a separate module. The declarative domain models used by EBL systems are usually kept
separate and can be easily modified. Traditional types of bias, such as the two cited above, are
normally built in to the representation and procedures used by the learning system. For this
reason they are not easily modifiable. A declarative bias representation also helps to integrate
diverse sources of background knowledge into the learning process [Russell and Grosof 87].
EBL is characterized by the fact that it makes use of extensive background knowledge to guide
the learning process. A number of researchers outside the area of EBL have also used such
knowledge-intensive approaches to machine learning. Some early examples include Lenat's AM
program [Lenat 82], Sussman's HACKER program [Sussman 75] and Soloway's program for
learning rules of competitive games [Soloway 77]. These systems are difficult to compare since
they use diverse program architectures. Their background knowledge is embedded in specialized,
domain dependent heuristics, such as Lenat's heuristics for creating and evaluating concepts and
Sussman's knowledge base of bugs and patches. Additional programs using knowledge-intensive
learning techniques include [Buchanan and Mitchell 78; Vere 77; Lebowitz 83; Stepp and
Michalski 86; Lenat et al. 86]. The search control technique known as "dependency-directed
backtracking" (DDB) provides an interesting comparison to EBL. This technique is used to
control the process of backtracking when a contradiction or failure is encountered during a
search process [Doyle 79; Stallman 77]. DDB may also be interpreted as a type of explanation-
based learning. DDB uses data dependencies to generalize the context of a contradiction, or
search failure, in much the same manner as EBL uses explanations to generalize from training
examples.
4.Explain how Artificial Intelligence is being applied in legal practice.
Ans:-
Technology-assisted review (TAR) was the first major application of AI in legal practice, using
technology solutions to organize, analyze, and search very large and diverse data sets for e-
discovery or record-intensive investigations. Going far beyond keyword and Boolean searches,
TAR has been shown in studies to provide a fifty-fold increase in efficiency in document review
over human review.
For example, predictive coding is a TAR technique that can be used to train a computer to
recognize relevant documents by starting with a “seed set” of documents and providing human
feedback; the trained machine can then review large numbers of documents very quickly and
accurately, going beyond individual words and focusing on the overall language and context of
each document. Numerous vendors now offer TAR products.
Legal Analytics
Legal analytics use big data, algorithms, and AI to make predictions from or detect trends in
large data sets. For example, Lex Machina, now owned by LexisNexis, uses legal analytics to
predict trends and outcomes in intellectual property litigation, and is now expanding to other
types of complex litigation. Wolters Kluwer leverages a massive database of law firm billing
records to provide baselines, comparative analysis, and efficiency improvements for in-house
counsel and outside law firms on staffing, billing, and timelines for various legal matters.
Ravel Law, also recently purchased by LexisNexis, uses legal analytics of judicial opinions to
predict how specific judges may decide cases, including providing recommendations on specific
precedents and language that may appeal to a given judge. Law professor Daniel Katz and his
colleagues have utilized legal analytics and machine learning to create a highly accurate
predictive model for the outcome of Supreme Court decisions.
Practice Management Assistants
Many technology companies and law firms are partnering to create programs that can assist with
specific practice areas, including transactional and due diligence, bankruptcy, litigation research
and preparation, real estate, and many others. Sometimes billed as the first robot lawyer, ROSS
is an online research tool using natural language processing powered by IBM Watson that
provides legal research and analysis for several different law firms today, and can reportedly
read and process over a million legal pages per minute. It was first publicly adopted by the law
firm BakerHostetler to assist with its bankruptcy practice, but is now being used by that firm and
several others for other practice areas as well.
A similar system is RAVN, developed in the United Kingdom and first publicly adopted by the
law firm Berwin Leighton Paisner in London in 2015 to assist with due diligence in real estate
deals by verifying property details against the official public records. According to the law firm
attorney in charge of implementation: “once the program has been trained to identify and work
with specific variables, it can complete two weeks’ work in around two seconds, making [it] over
12 million times quicker than an associate doing the same task manually.” Kira is another AI
system that has already been adopted by several law firms to assist with automated contract
analysis and data extraction and due diligence in mergers and acquisitions.
Legal Bots
Bots are interactive online programs designed to interact with an audience to assist with a
specific function or to provide customized answers to the recipient’s specific situation. Many law
firms are developing bots to assist current or prospective clients in dealing with a legal issue
based on their own circumstances and facts. Other groups are developing pro bono legal bots to
assist people who may not otherwise have access to the legal system.
For example, a Stanford law graduate developed an online chat bot called DoNotPay that has
helped over 160,000 people resolve parking tickets, and is now being expanded to help refugees
with their legal problems.
Legal Decision Making
AI is enabling judicial decision making in a number of ways. For example, the Wisconsin
Supreme Court recently upheld the use of algorithms in criminal sentencing decisions. While
such algorithms represent an early use of primitive AI (some may not consider such algorithms
AI at all), they open the door to the use of more sophisticated AI systems in the sentencing
process in the future. A number of online dispute resolution tools have been or are being
developed to completely circumvent the judicial process.
For example, the Modria online dispute resolution tool, developed from the eBay dispute
resolution system, has been used to settle many thousands of disputes online using an AI system.
The U.K. government is developing an Internet-based dispute resolution system that will be used
to resolve minor (<£25,000) civil legal claims without any court involvement. Microsoft and the
U.S. Legal Services Corporation have teamed up to provide machine learning legal portals to
provide free legal advice on civil law matters to people who cannot afford to hire lawyers.
The Future of AI and the Law
These initial applications of AI to legal practice are just the early beginnings of what will be a
radical technology-based disruption to the practice of law. AI “represents both the biggest
opportunity and potentially the greatest threat to the legal profession since its formation.” The
transformative impacts of AI on legal practice will continue to accelerate going forward. AI will
take over a steadily increasing share of law firm billable hours, be applied to an ever-expanding
set of legal tasks, and require knowledge and abilities outside the existing skill set of most
current practicing attorneys. Today AI represents an opportunity for a law firm or an attorney to
be a leader in efficiency, cost-effectiveness, and productivity, but soon incorporation of AI into
practice will be a matter of keeping up rather than being a leader.
AI in the practice of law raises many broader issues that can only be briefly listed here. How will
AI change law firm billing, where a smart AI system can conduct searches and analyses in a few
seconds that formerly would have taken several weeks of an associate’s billable time? If AI
eliminates many of the more routine tasks in legal practice that are traditionally performed by
young associates, how will this affect hiring and advancement of young attorneys? How will
legal training and law schools need to change to address the new realities of AI-driven legal
practice? How will AI affect the competitive advantage of large law firms versus small and
medium-sized firms? Will companies start obtaining legal services directly from legal
technology vendors, skipping law firms altogether? Will AI systems be vulnerable to charges of
unauthorized practice of law? Given that AI systems increasingly use their own self-learning
rather than preprogrammed instructions to make decisions, how can we ensure the accuracy,
legality, and fairness of AI decisions? Will lawyers be responsible for negligence for relying on
AI systems that make mistakes? Will lawyers be liable for malpractice for not using AI that
exceeds human capabilities in certain tasks? Will self-learning AI systems need to be deposed
and take the stand as witnesses to explain their own independent decision making?
One thing is certain—there will be winners and losers among lawyers who do and do not adopt
AI, respectively. As one senior lawyer recently remarked, "Unless private practice lawyers start
to engage with new technology, they are not going to be relevant even to their clients." The AI
train is leaving the station—it is time to jump on board.
******