
Unit # 2

Case Studies: ELIZA & LUNAR
Case study - ELIZA
1. ELIZA
• ELIZA is a computer program that simulates the behavior of a therapist. It was one of the
first programs of its kind, developed back in 1966 at MIT. The program interacts with the
user in simple English and simulates a conversation with an imaginary therapist. Although
many concepts of Artificial Intelligence had not yet been developed at the time, ELIZA
surprised a number of people because its users attributed human-like feelings to it.
• ELIZA listened to what the user said, parsed the sentence in a very basic way, and then
presented a question that was somehow related to the user's statement. In the mid-1960s,
people who were told that a real live therapist was responding from a second computer
were fooled by ELIZA.
• A program like ELIZA requires knowledge of three domains.
1. Artificial Intelligence
2. Expert System
3. Natural Language Processing
• Weizenbaum, the developer of the program, was shocked to learn that the MIT lab staff
thought the machine was a real therapist and spent hours revealing their problems to the
program. When Weizenbaum informed them that he had access to logs of all the
conversations, the community was outraged at this invasion of their privacy. He himself
was shocked that such a simple program could so easily deceive a naive user into
revealing personal information.
Case study – ELIZA (contd)
• Although ELIZA did not understand context or meaning and was limited to far simpler
syntactic analysis than current-generation chatbots, it set the stage for the development of
more sophisticated AI and chatbots that use complex NLP and machine learning techniques
to interact with users.

The technical blocks of ELIZA can be broken down into the following components:

Input Processing: ELIZA starts with input processing, where the user's input is scanned
for keywords or phrases that the system can recognize. This is typically a simple
string-matching process without any understanding of the language.
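
As a minimal sketch (not Weizenbaum's actual code), this scanning step can be pictured as plain string matching in Python; the keyword list here is purely illustrative:

# Toy keyword scan: plain string matching, no understanding of language.
# The keyword list is illustrative, not from the original program.
KEYWORDS = ["mother", "father", "dream", "always", "because"]

def scan_for_keywords(user_input):
    words = user_input.lower().split()
    return [w for w in words if w in KEYWORDS]

print(scan_for_keywords("I had a dream about my mother"))
# -> ['dream', 'mother']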

Pattern Matching: ELIZA uses a pattern-matching technique to identify the key elements of
the user's statements. This is done using a script, which is essentially a collection of
pattern-response pairs.
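
For illustration, a hypothetical script of pattern-response pairs might look like this in Python; the patterns and wording are assumptions, not the original DOCTOR rules:

import re

# Illustrative pattern-response pairs in the spirit of an ELIZA script.
SCRIPT = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".* sorry .*", re.I), "Please don't apologize."),
]

def match(user_input):
    for pattern, response in SCRIPT:
        m = pattern.match(user_input)
        if m:
            return response.format(*m.groups())
    return "Please tell me more."   # default when nothing matches

print(match("I need some rest"))
# -> Why do you need some rest?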

Decomposition Rules: Once a pattern is identified in the user's input, ELIZA applies
decomposition rules to break down the input into smaller parts. These rules are used to
transform the input into a form that can be more easily manipulated to generate a response.
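
A decomposition rule can be sketched as a regular expression that splits the input into reusable parts; the rule below is hypothetical:

import re

# Toy decomposition rule: split the input around the keyword "my" so the
# pieces can be reused when building a response. Illustrative only.
DECOMP = re.compile(r"(.*)my (.*)", re.I)

def decompose(user_input):
    m = DECOMP.match(user_input)
    return m.groups() if m else None

print(decompose("I worry about my exams"))
# -> ('I worry about ', 'exams')
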
Case study – ELIZA (contd)
Reassembly Rules: After decomposition, reassembly rules are used to construct the response.
These rules take the decomposed input and reassemble it into a statement that reflects what
the user has said. The reassembly process often involves rephrasing the user's input and asking
for further information.
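
Continuing the sketch above, reassembly fills the decomposed parts into a response template; ELIZA also swapped pronouns (e.g. "I" to "you") so the echo reads naturally. The reflection table and template are assumptions:

# Toy reassembly: reflect pronouns in each fragment, then slot the
# fragments into a response template. Table and template are illustrative.
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

def reassemble(template, fragments):
    return template.format(*[reflect(f) for f in fragments])

parts = ("I am proud of", "work")   # e.g. from decomposing "... my work"
print(reassemble("{0} your {1}?", parts).capitalize())
# -> You are proud of your work?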

Script Database: ELIZA operates using a script, a database of pre-defined patterns and
responses. The most famous script, known as DOCTOR, simulates a Rogerian
psychotherapist, which means it primarily uses the user's own statements to form questions.
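
As a sketch, such a script can be held as plain data: keyword entries with a rank (deciding which keyword wins when several appear) and their pattern-response rules. The entries below are illustrative, not the actual DOCTOR script:

# Toy script database: each keyword carries a rank and a list of
# pattern -> response-template rules. Entries are illustrative.
SCRIPT_DB = {
    "mother": {
        "rank": 5,
        "rules": [(r".*mother.*", "Tell me more about your family.")],
    },
    "always": {
        "rank": 3,
        "rules": [(r".*always.*", "Can you think of a specific example?")],
    },
}

# The highest-ranked keyword present in the input is handled first.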

Response Generation: The response is generated based on the matched patterns and their
associated rules. ELIZA selects an appropriate response from the script that corresponds to
the identified pattern.

Output: Finally, the generated response is output to the user, continuing the conversation.
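
Putting the pieces together, here is a minimal end-to-end sketch of the loop: read input, match a pattern, build a reply, print it. All rules are illustrative:

import re

# Toy end-to-end ELIZA loop. Rules are illustrative only.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"(.*)my (.*)", re.I), "Tell me more about your {1}."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return "Please go on."   # fallback keeps the conversation moving

if __name__ == "__main__":
    while True:
        line = input("> ")
        if line.lower() in ("quit", "bye"):
            break
        print(respond(line))
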
Case study – ELIZA (contd)
Strengths of ELIZA:
• User Engagement: ELIZA was effective at engaging users in conversation, particularly with
the DOCTOR script, which could lead users to continue the conversation for extended
periods.
• Foundational: It set the stage for the development of more sophisticated conversational
agents and contributed significantly to the fields of artificial intelligence and NLP.
• Simplicity: The simple design and rule-based system made it easy to understand and
implement, providing a model for the early development of NLP applications.

Weaknesses of ELIZA:
• Lack of Real Understanding: ELIZA could not truly understand the conversation; it operated
purely on pattern matching and substitution, which meant it could not handle complex or
nuanced conversations.
• No Contextual Awareness: It had no ability to remember past interactions, which meant it
could not maintain context over a conversation.
• Repetitive: The responses could be very repetitive, and the illusion of intelligence broke
down once users noticed the patterns in ELIZA's responses. These are the reasons why it
never really took off and sustained as a practical application.
Case study - LUNAR
2. LUNAR
• It was developed by Woods in 1970. It is one of the largest and most successful question-
answering systems and a better performer than ELIZA, its predecessor. LUNAR exploited AI
techniques better than ELIZA: it had a separate syntax analyzer and a semantic interpreter. It
was designed to allow users to query a database of information about rock samples collected
during the Apollo moon missions, hence the name LUNAR.

• LUNAR is a classic example of a natural language database interface system. It was
capable of translating elaborate natural language expressions into database queries and
handling requests without errors.

The building blocks of LUNAR's NLP are described below:

Database Queries: LUNAR allowed users to interact with a database using natural
language queries. This was revolutionary because, at the time, most interactions with
databases required knowledge of specific query languages.
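
LUNAR itself predates SQL and used its own formal query language, but the idea can be illustrated with a toy translator that maps a canned English question onto a SQL query over a hypothetical samples table:

# Toy NL-to-query translator over a hypothetical table
# samples(rock_type, element, concentration). LUNAR's real query
# language was its own formalism, not SQL.
ELEMENTS = {"aluminium": "Al", "titanium": "Ti", "iron": "Fe"}
ROCKS = {"breccias": "breccia", "basalts": "basalt"}

def to_sql(question):
    q = question.lower()
    element = next(sym for word, sym in ELEMENTS.items() if word in q)
    rock = next(r for word, r in ROCKS.items() if word in q)
    return ("SELECT AVG(concentration) FROM samples "
            f"WHERE element = '{element}' AND rock_type = '{rock}';")

print(to_sql("What is the average aluminium concentration in breccias?"))
# -> SELECT AVG(concentration) FROM samples WHERE element = 'Al' AND rock_type = 'breccia';
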
Semantic Grammar: LUNAR used a form of semantic grammar that was more flexible
than the keyword matching used by ELIZA. This allowed it to parse a variety of natural
language inputs and map them to database queries. LUNAR's semantic grammar was
tailored to the domain of lunar geology.
Case study – LUNAR (contd)
It included specific vocabulary related to rock types, chemical elements, and geological
processes, which is crucial for interpreting queries correctly. Syntactic grammar rules were
mapped to semantic patterns; this involved identifying key verbs, nouns, and modifiers, and
using the structure of the query to deduce its meaning.
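
As a sketch of what a domain-tailored vocabulary looks like in practice, the toy lexicon below tags query words with semantic categories rather than generic parts of speech; the categories and entries are assumptions for illustration:

# Toy domain lexicon: words are tagged with semantic categories rather
# than generic parts of speech. Categories and entries are illustrative.
LEXICON = {
    "ELEMENT": {"aluminium", "titanium", "iron"},
    "ROCK": {"breccia", "basalt", "sample"},
    "MEASURE": {"average", "concentration"},
}

def tag(word):
    for category, words in LEXICON.items():
        if word in words:
            return category
    return "OTHER"

query = "average aluminium concentration in basalt"
print([(w, tag(w)) for w in query.split()])
# -> [('average', 'MEASURE'), ('aluminium', 'ELEMENT'),
#     ('concentration', 'MEASURE'), ('in', 'OTHER'), ('basalt', 'ROCK')]
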
Fragment Assembly: To handle a broad range of questions, LUNAR broke down queries into
fragments that could be recombined in various ways, accommodating the different ways
people might phrase their questions. This was done by first parsing the user's natural language
query into its constituent parts, identifying key words and phrases that indicate the query's
intent and the information sought. The system could then break down the parsed query into
smaller fragments. Each fragment represented a piece of the query that could be mapped to
specific information in the database or to a particular way of structuring a database query.
LUNAR's use of fragment assembly was a pioneering effort in natural language processing,
demonstrating how systems could interpret and respond to complex queries in specialized
knowledge domains.
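
A rough sketch of fragment assembly: each tagged fragment maps to one clause of the eventual database query, and the clauses are recombined regardless of how the question was phrased. The mapping below is hypothetical:

# Toy fragment assembly: each fragment contributes one clause, and the
# clauses recombine into a full query. The mapping is illustrative.
FRAGMENT_MAP = {
    ("MEASURE", "average"): "SELECT AVG(concentration)",
    ("ELEMENT", "aluminium"): "element = 'Al'",
    ("ROCK", "breccia"): "rock_type = 'breccia'",
}

def assemble(fragments):
    select = next(FRAGMENT_MAP[f] for f in fragments if f[0] == "MEASURE")
    where = [FRAGMENT_MAP[f] for f in fragments if f[0] != "MEASURE"]
    return f"{select} FROM samples WHERE {' AND '.join(where)};"

parts = [("MEASURE", "average"), ("ELEMENT", "aluminium"), ("ROCK", "breccia")]
print(assemble(parts))
# -> SELECT AVG(concentration) FROM samples WHERE element = 'Al' AND rock_type = 'breccia';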

Response Generation: LUNAR generated responses based on the results retrieved from the
database, formatting them into coherent natural language for the user. This step involved
organizing the data into sentences and paragraphs that were grammatically correct and
understandable, so that they conveyed the answer to the user's query.
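
A minimal sketch of this final step, turning a retrieved value into an English sentence; the field names, template, and numeric value are illustrative:

# Toy response generation: format a retrieved value as a sentence.
# Field names, template, and the numeric value are illustrative.
def render_answer(element, rock_type, value):
    return (f"The average {element} concentration in {rock_type} "
            f"samples is {value:.2f} percent.")

print(render_answer("aluminium", "breccia", 13.25))
# -> The average aluminium concentration in breccia samples is 13.25 percent.
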
Case study – LUNAR (contd)
Strengths of LUNAR:
User-Friendly: By enabling natural language queries, LUNAR made database search
accessible to users without technical backgrounds.
Flexible Query Handling: LUNAR's use of semantic grammar allowed it to understand a
variety of query formulations, making it robust against different phrasing styles.
Domain-Specific Focus: LUNAR was an early example of a successful domain-specific NLP
application, demonstrating the potential for specialized information retrieval systems.

Weaknesses of LUNAR:
Limited Scope: LUNAR was designed for a very specific domain. While effective within its
scope, it lacked the generalizability of modern NLP systems.
Lack of Learning: Like many early systems, LUNAR did not learn from interactions. Each
session was independent, and the system did not improve over time.
Complex Queries: While capable of handling a range of questions, LUNAR could struggle
with very complex queries that required understanding beyond its programmed grammar.
Thanks
