
Practical – 3

Constraint Satisfaction Problem


3.1 Constraint Satisfaction Problem: Cryptarithmetic
[SEND + MORE = MONEY]

● Software requirements:
Python Programming Language [Jupyter Notebook/Google Colab]
● Theory:
A cryptarithmetic puzzle is a constraint satisfaction problem in which the goal is
to assign a distinct digit to each letter so that the arithmetic equation holds. Solving it
involves propagating constraints and making educated guesses, with heuristics such as
preferring the letter with the fewest remaining candidate digits. The output is
(SEND, MORE, MONEY): (9567, 1085, 10652).
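As a concrete illustration of the per-column constraints that a propagation-based solver enforces, the reported solution can be checked column by column with carries (the digit assignment below is the one from the output above):

```python
# Verify SEND + MORE = MONEY column by column with carries -- these are the
# per-column constraints a propagation-based solver works with.
digits = {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}

carry = 0
# Columns from units to thousands: a + b + carry_in = s + 10 * carry_out
for a, b, s in [('D', 'E', 'Y'), ('N', 'R', 'E'), ('E', 'O', 'N'), ('S', 'M', 'O')]:
    total = digits[a] + digits[b] + carry
    assert total % 10 == digits[s], f"column {a}+{b}={s} fails"
    carry = total // 10

# The final carry out of the thousands column is the leading M of MONEY
assert carry == digits['M']
print("9567 + 1085 = 10652 holds in every column")
```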

● Code (Handwritten):

from itertools import permutations

def solve_cryptarithmetic():
    # Define the letters to which digits will be assigned
    letters = ['S', 'E', 'N', 'D', 'M', 'O', 'R', 'Y']
    digits = range(10)

    # Generate all possible permutations of digits for the letters
    for perm in permutations(digits, len(letters)):
        # Assign each digit to its corresponding letter
        mapping = {letter: digit for letter, digit in zip(letters, perm)}

        # Leading digits S and M may not be zero
        if mapping['S'] != 0 and mapping['M'] != 0:
            send = mapping['S']*1000 + mapping['E']*100 + mapping['N']*10 + mapping['D']
            more = mapping['M']*1000 + mapping['O']*100 + mapping['R']*10 + mapping['E']
            money = (mapping['M']*10000 + mapping['O']*1000 + mapping['N']*100
                     + mapping['E']*10 + mapping['Y'])

            # Check if the mapping satisfies the equation
            if send + more == money:
                # Return the solution
                return {'SEND': send, 'MORE': more, 'MONEY': money}

    return None

D22DCE179 CE357 – Artificial Intelligence 25

# Get the solution
solution = solve_cryptarithmetic()

# Print output
if solution:
    print('(SEND, MORE, MONEY):',
          (solution['SEND'], solution['MORE'], solution['MONEY']))
else:
    print("No solution found.")

● Output (Screenshot)

● Conclusion: Cryptarithmetic puzzles are solved as constraint satisfaction problems
by assigning distinct digits to letters. Constraint propagation and strategic guessing,
guided by heuristics, prune the search space, so a systematic search reaches the
solution efficiently.
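The search-space reduction mentioned above can be made concrete: since SEND + MORE < 20000 and MONEY has five digits, propagation forces M = 1 before any search begins, shrinking the candidate assignments from 10P8 = 1,814,400 to 9P7 = 181,440. A minimal sketch under that assumption (illustrative, not the handwritten code above):

```python
from itertools import permutations

# M is forced to 1: SEND + MORE < 20000, and MONEY has five digits.
letters = ['S', 'E', 'N', 'D', 'O', 'R', 'Y']

def solve_with_m_fixed():
    # Digit 1 is taken by M, so the remaining letters draw from the other nine
    for perm in permutations([0, 2, 3, 4, 5, 6, 7, 8, 9], len(letters)):
        m = dict(zip(letters, perm))
        m['M'] = 1
        if m['S'] == 0:  # leading digit of SEND may not be zero
            continue
        send = m['S']*1000 + m['E']*100 + m['N']*10 + m['D']
        more = m['M']*1000 + m['O']*100 + m['R']*10 + m['E']
        money = m['M']*10000 + m['O']*1000 + m['N']*100 + m['E']*10 + m['Y']
        if send + more == money:
            return send, more, money
    return None

print(solve_with_m_fixed())  # (9567, 1085, 10652)
```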

Practical – 4
Natural Language Processing
4.1 Regular Expression

● Perform Natural Language Processing tasks [Text Reading, Text Analysis,
Text Pre-processing, Text Classification, EDA, Stemming, Lemmatization]
using NLTK in Python.

● Software requirements:
Python Programming Language

● Theory:
Regular expressions (regex) are central to pattern matching and text manipulation
in Python. They support tasks such as text reading, analysis, pre-processing,
classification, and exploratory data analysis. Combined with the NLTK library, they
allow developers to build robust NLP pipelines for a wide range of applications.
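A short sketch of the two regex operations this practical leans on, `re.findall` for pattern search and `re.sub` for clean-up (the sample text here is illustrative):

```python
import re

text = "NLTK processing, more Processing!"

# Word-boundary search: \b keeps 'processing' from matching inside longer words
hits = re.findall(r"\bprocessing\b", text, flags=re.IGNORECASE)
print(hits)  # ['processing', 'Processing']

# Remove everything except word characters and whitespace
clean = re.sub(r"[^\w\s]", "", text)
print(clean)  # 'NLTK processing more Processing'
```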

● Code (Handwritten):

import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

text = ("The quick brown fox jumps over the lazy dog, exploring the lush "
        "green forest with curious eyes.")

# Text Reading
print("Text Reading:")
print(text)

# Text Pre-processing
print("\nText Pre-processing:")
# Convert text to lowercase

text_lower = text.lower()

# Remove punctuation and special characters
text_clean = re.sub(r"[^\w\s]", "", text_lower)

# Tokenize text
tokens = word_tokenize(text_clean)

# Remove stopwords
stop_words = set(stopwords.words("english"))
filtered_tokens = [word for word in tokens if word not in stop_words]

# Text Analysis (Word Frequency)
print("\nText Analysis (Word Frequency):")
word_freq = nltk.FreqDist(filtered_tokens)
print(word_freq.most_common())

# Text Classification (Identify specific words)
print("\nText Classification (Identify specific words):")

# Define patterns for classification
patterns = {
"nltk": r"\bnltk\b",
"processing": r"\bprocessing\b"
}

# Classify text based on patterns
classified_text = {}
for category, pattern in patterns.items():
classified_text[category] = re.findall(pattern, text_lower)
print(classified_text)

# Exploratory Data Analysis (EDA)
print("\nExploratory Data Analysis (EDA):")

# Sentence tokenization
sentences = nltk.sent_tokenize(text)
print("Number of sentences:", len(sentences))

# Average words per sentence

avg_words_per_sentence = (sum(len(word_tokenize(sentence)) for sentence in sentences)
                          / len(sentences))
print("Average words per sentence:", avg_words_per_sentence)

# Stemming
print("\nStemming:")
porter = PorterStemmer()
stemmed_words = [porter.stem(word) for word in filtered_tokens]
print(stemmed_words)

# Lemmatization
print("\nLemmatization:")
lemmatizer = WordNetLemmatizer()
lemmatized_words = [lemmatizer.lemmatize(word) for word in filtered_tokens]
print(lemmatized_words)
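For comparison, the word-frequency step above can be reproduced with the standard library alone; `nltk.FreqDist` behaves much like `collections.Counter` here (the token list below is illustrative, with one word repeated):

```python
from collections import Counter

# Token list similar to filtered_tokens above, with 'quick' repeated
tokens = ['quick', 'brown', 'fox', 'jumps', 'lazy', 'dog',
          'exploring', 'lush', 'green', 'forest', 'curious', 'eyes', 'quick']

word_freq = Counter(tokens)
print(word_freq.most_common(1))  # [('quick', 2)]
```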

● Output:

● Conclusion: Regular expressions are crucial for natural language processing in
Python, enabling efficient text reading, analysis, pre-processing, classification,
EDA, stemming, and lemmatization. Together with NLTK, they support pattern matching,
tokenization, and the identification of linguistic features, improving the efficiency
and accuracy of text-processing workflows.
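The tokenization step can even be approximated with a regex alone; this minimal sketch (`simple_tokenize` is an illustrative name, and NLTK's `word_tokenize` applies far richer rules) shows the idea:

```python
import re

def simple_tokenize(text):
    # Lowercase, then pull out runs of letters -- a crude stand-in for word_tokenize
    return re.findall(r"[a-z]+", text.lower())

print(simple_tokenize("The quick brown fox jumps over the lazy dog."))
```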

