
LAB NO: 01

OBJECTIVE: To Understand the Basics of Python

TASK NO: 01- Write a Python program which accepts the radius of a circle from the user and computes the
area.

Program:
pi = 3.14
radius = float(input("Enter radius of circle: "))  # float allows fractional radii
area = pi * radius * radius
print("Area =", area)

TASK NO: 02- Write a Python program to guess a number between 1 and 9. Hint: use the random module to
randomly generate numbers.

Program:
import random

a = int(input("Enter any number between 1 and 9: "))
b = random.randint(1, 9)   # randint is inclusive on both ends
if a == b:
    print("Numbers matched")
else:
    print("Numbers did not match")
print("Number was", b)

TASK NO: 03- Write a Python program that accepts a word from the user and reverses it.

Program:

a = input("Enter any word: ")

# reversed() yields the characters of the string in reverse order
for x in reversed(a):
    print(x, end="")
print()

TASK NO: 04- Write a Python program that prints all the numbers from 0 to 6 except 3 and 6. Note: use the
'continue' statement.

Program:
for x in range(0, 7):   # range(0, 7) covers 0 to 6 inclusive
    if x == 3 or x == 6:
        continue        # skip 3 and 6
    print(x)

TASK NO: 05- Write a Python program which iterates the integers from 1 to 50. For multiples of three print
“Multiple of 3" instead of the number and for the multiples of five print “Multiple of 5". For numbers which are
multiples of both three and five print “Multiple of both 3 & 5".

Program:
for x in range(1, 51):   # 1 to 50 inclusive
    if x % 3 == 0 and x % 5 == 0:
        print("Multiple of both 3 & 5")
    elif x % 3 == 0:
        print("Multiple of 3")
    elif x % 5 == 0:
        print("Multiple of 5")
    else:
        print(x)

TASK NO: 06- Write a Python program to check whether a letter of the alphabet is a vowel or a consonant.

Program:
l=input("Enter any letter of the alphabet: ")
if l in ('a', 'e', 'i', 'o', 'u'):
print("%s is a vowel" % l)
elif l=='y':
print("Sometimes letter y stand for vowel, sometimes y stands for consonant")
else:
print("%s is a consonant" % l)

TASK NO: 07- Write a Python program to convert temperatures to and from Celsius and Fahrenheit.

Program:
l=input("Press F for Converion from Fahrenheit to Celsius or Press C for Converion from Celsius to Fahrenheit
")
if l=='F':
fahrenheit=int(input("Enter temperature in fahrenheit= "))
celsius=(fahrenheit-32)*5.0/9.0
print("Temperature:", fahrenheit, "Fahrenheit=", celsius, "C")
elif l=='C':
celsius=int(input("Enter temperature in celsius= "))
fahrenheit=(9.0/5.0)*celsius+32
print("Temperature:", celsius, "Celsius=", fahrenheit, "F")

else:
print("Invalid operation")
LAB NO: 02
OBJECTIVE: Introduction to Data Structures, Functions and Recursion

TASKS:
Execute at least 3 functions from each:

• Mathematical Functions
import math
# Fractional number.
n = -99.99
m=50
print("number n is", n)
print("number m is", m)
print("Floor",math.floor(n))
print("ceil",math.ceil(n))
# Absolute value.
print("absolute",abs(n))
print("power",pow(n,2))
print("square root",math.sqrt(m))

• Random Functions
import random
print ('Random integers between 0 and 5: ',random.randint(0, 5))
print('Random numbers between 0 and 100: ',random.random() * 100)
myList = [2, 109, False, 10, "Lorem", 482, "Ipsum"]
print('Random choice from the list',random.choice(myList))

• Trigonometric Functions
import math
a = math.pi/6
print ("cos(math.pi) : ", math.cos(math.pi))
print ("sin(2*math.pi) : ", math.sin(2*math.pi))
print ("The value of tangent of pi/6 is : ", end=" ")
print (math.tan(a))

• Mathematical Constants
import math
print("pi",math.pi, "e", math.e)

Execute at least 5 built-in functions of arrays (Python lists) of your own choice.


brands = ["Coke", "Apple", "Google", "Microsoft", "Toyota"]
num_brands = len(brands)
print("Length of Array: ",num_brands)
brands.append('Honda')
print("New Element Added ",brands)
brands.remove("Honda")
print("New Element Removed ",brands)
concat = brands + ["Mechanical", "Software", "Civil"]
print("Array Concatenated ", concat)
concat.insert(4, "Hello")
print("New Element Inserted ", concat)

• Execute at least 5 built-in functions of lists


odd = [1, 3, 5]
print("List: ",odd)
odd.append(7)
print("List after append: ",odd)
odd.extend([9, 11, 13])
print("List after extend: ",odd)
odd.insert(1,17)
print("List after insert: ",odd)
del odd[1]
print("List after delete: ",odd)
print("minimum number in the list ",min(odd))

• Create tuples and manipulate them using built-in functions, operations, and slicing.
my_tuple = ('p','e','r','m','i','t')
print("First Tuple",my_tuple)
#delete
del my_tuple
print("First Tuple deleted")
new_tuple = ('p','r','o','g','r','a','m','i','z')
#count
print(new_tuple.count('p'))
# Index
print(new_tuple.index('g'))
print("slicing")
# elements 2nd to 4th
print(new_tuple[1:4])
# elements beginning to 2nd
print(new_tuple[:-7])
# elements 8th to end
print(new_tuple[7:])
# elements beginning to end
print(new_tuple[:])

• Create dictionaries and manipulate them using built-in functions and operations.


my_dict = {'name':'Jack', 'age': 26}
print(my_dict['name'])
print(my_dict.get('age'))
squares = {1:1, 2:4, 3:9, 4:16, 5:25}
print(squares)
squares.pop(4)
print("removing a particular item",squares)
squares.popitem()
print("removing an arbitrary item",squares)
squares.clear()
print("clearing the list",squares)

• Write a program to create, read, and write files.

# Open a file for writing; create it if it does not exist
file = open("testfile.txt", "w+")
file.write("Hello World\n")
file.write("This is our new text file\n")
file.write("and this is another line.\n")
file.write("Why? Because we can.\n")
file.close()

# Open the file back and read the contents
f = open("testfile.txt", "r")
if f.mode == 'r':
    contents = f.read()
    print(contents)
f.close()
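
As a side note, the more idiomatic way to read the file back uses a with statement, which closes the file automatically:

# 'with' closes the file automatically, even if an error occurs
with open("testfile.txt") as f:
    print(f.read())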

LAB TASKS:

Write a program for the Fibonacci series with a simple function and with recursion.
def fibonacci(n):
    # recursive definition: fib(n) = fib(n-1) + fib(n-2)
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

n = int(input("Enter number of terms: "))
print("Fibonacci sequence:")
for i in range(n):
    print(fibonacci(i))
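
Since the task asks for both a recursive version and a simple (non-recursive) function, a minimal iterative sketch:

def fibonacci_iterative(n):
    # builds the sequence bottom-up instead of recursing
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for i in range(10):
    print(fibonacci_iterative(i))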

• Write a Python function to sum all the numbers in a list


def sum_list(numbers):   # renamed to avoid shadowing the built-in sum()
    total = 0
    for x in numbers:
        total += x
    return total

print(sum_list((8, 2, 3, 0, 7)))
• Write a Python program to reverse a string.
a = input("Enter a string: ")   # input() already returns a string
print("Reverse of the string is:")
print(a[::-1])

• Write a Python function that accepts a string and calculates the number of upper case and lower
case letters.
def string_test(s):
    d = {"UPPER_CASE": 0, "LOWER_CASE": 0}
    for c in s:
        if c.isupper():
            d["UPPER_CASE"] += 1
        elif c.islower():
            d["LOWER_CASE"] += 1
    print("Original String :", s)
    print("No. of Upper case characters :", d["UPPER_CASE"])
    print("No. of Lower case characters :", d["LOWER_CASE"])

string_test('Artificial Intelligence')

• Write a Python program to check if a given key already exists in a dictionary.


d = {'A': 1, 'B': 2, 'C': 3}
key = input("Enter key to check: ")
if key in d:   # membership test on a dict checks its keys
    print("Key is present and value of the key is:")
    print(d[key])
else:
    print("Key isn't present!")

CLASS TASKS:
• Write a Python program to sum all the items in a list.

lst = []
num = int(input('How many numbers: '))
for n in range(num):
    number = int(input('Enter number: '))
    lst.append(number)
print("Sum of elements in given list is:", sum(lst))
• Write a Python script to concatenate following dictionaries to create a new one. Sample Dictionary:
dic1={1:10, 2:20} dic2={3:30, 4:40} dic3={5:50,6:60} Expected Result : {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6:
60}
dic1 = {1: 10, 2: 20}
dic2 = {3: 30, 4: 40}
dic3 = {5: 50, 6: 60}
dic4 = {}
for d in (dic1, dic2, dic3):
    dic4.update(d)   # later dictionaries overwrite duplicate keys
print(dic4)
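
In Python 3.5+, the same merge can also be written in one line with dictionary unpacking:

dic4 = {**dic1, **dic2, **dic3}
print(dic4)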

LAB NO: 03
OBJECTIVE: To Become Familiar with Searching Algorithms

TASK NO 1: Execute breadth-first search and the BFS shortest-path algorithm.

Creating graph:

Breadth-first Search

graph = {'A': ['B', 'C', 'E'],
         'B': ['A', 'D', 'E'],
         'C': ['A', 'F', 'G'],
         'D': ['B'],
         'E': ['B', 'A'],
         'F': ['C'],
         'G': ['C']}

def bfs_connected_component(graph, start):
    explored = []
    queue = [start]          # nodes still to be explored
    while queue:
        node = queue.pop(0)  # FIFO: oldest node first
        if node not in explored:
            explored.append(node)
            for neighbour in graph[node]:
                queue.append(neighbour)
    return explored

print("\nHere are the nodes of the graph visited by "
      "breadth-first search, starting from node 'C':",
      bfs_connected_component(graph, 'C'))

BFS Shortest Path


def bfs_shortest_path(graph, start, goal):
    explored = []
    queue = [[start]]            # queue of paths rather than single nodes
    if start == goal:
        return "goal has been achieved"
    while queue:
        path = queue.pop(0)
        node = path[-1]          # last node on the current path
        if node not in explored:
            for neighbour in graph[node]:
                new_path = list(path)
                new_path.append(neighbour)
                queue.append(new_path)
                if neighbour == goal:
                    return new_path
            explored.append(node)
    return "So sorry, but a connecting path doesn't exist"

if __name__ == '__main__':
    graph = {'A': ['B', 'C', 'E'],
             'B': ['D', 'E'],
             'C': ['A', 'F', 'G'],
             'D': ['B'],
             'E': ['B', 'A'],
             'F': ['C'],
             'G': ['C']}

    print("\nHere is the shortest path between nodes 'G' and 'D':",
          bfs_shortest_path(graph, 'G', 'D'))

TASK NO 2: Execute depth-first search and the DFS shortest-path algorithm.

DEPTH-FIRST SEARCH

graph = {'A': ['B', 'C', 'E'],
         'B': ['A', 'D', 'E'],
         'C': ['A', 'F', 'G'],
         'D': ['B'],
         'E': ['B', 'A'],
         'F': ['C'],
         'G': ['C']}

def dfs(graph, node, visited):
    if node not in visited:
        visited.append(node)
        for n in graph[node]:    # recurse into every neighbour
            dfs(graph, n, visited)
    return visited

visited = dfs(graph, 'A', [])
print("Here is the order in which DFS visits the nodes, starting from 'A':", visited)

DFS Shortest Path

graph = {'A': ['B', 'C', 'E'],
         'B': ['A', 'D', 'E'],
         'C': ['A', 'F', 'G'],
         'D': ['B'],
         'E': ['B', 'A'],
         'F': ['C'],
         'G': ['C']}

def dfs_paths(graph, start, goal):
    stack = [(start, [start])]   # LIFO stack of (vertex, path so far)
    visited = []
    while stack:
        (vertex, path) = stack.pop()
        if vertex not in visited:
            if vertex == goal:
                return path      # first path found; DFS does not guarantee the shortest
            visited.append(vertex)
            for neighbour in graph[vertex]:
                stack.append((neighbour, path + [neighbour]))

print("Here is a DFS path from A to F:", dfs_paths(graph, 'A', 'F'))

TASK NO 3: Execute the Uniform Cost Search algorithm on a weighted graph.

import heapq

class Graph:
    def __init__(self, node):
        self.edges = {node: []}
        self.weights = {}

    def addEdges(self, node, child, cost):
        if node not in self.edges:
            self.edges[node] = []
        self.edges[node].append(child)
        self.weights[node + "->" + child] = cost

    def neighbours(self, node):
        return self.edges[node]

    def get_cost(self, from_node, to_node):
        return self.weights[from_node + "->" + to_node]

def uniformCostSearch(graph, start, goal):
    visited = []
    # Priority queue keyed on accumulated cost, so the cheapest frontier
    # node is always expanded first (a plain FIFO pop would turn this
    # into breadth-first search and could return a costlier path).
    queue = [(0, start, start)]          # (cost so far, path string, node)
    while queue:
        cost, path, node = heapq.heappop(queue)
        if node not in visited:
            visited.append(node)
            if node == goal:
                print("The shortest path is : ", path, " with cost of ", cost, " units.")
                return
            for i in graph.neighbours(node):
                if i not in visited:
                    total_cost = cost + graph.get_cost(node, i)
                    heapq.heappush(queue, (total_cost, path + "->" + i, i))

graph = Graph("S")
graph.addEdges("S","A",1); graph.addEdges("S","B",5) ;graph.addEdges("S","C",8)
graph.addEdges("A","D",3); graph.addEdges("A","E",7)
graph.addEdges("A","G",9); graph.addEdges("B","G`",4)
graph.addEdges("C","G``",5); graph.addEdges("D","A",3)
graph.addEdges("E","A",7); graph.addEdges("G","A",9)
graph.addEdges("G`","B",4); graph.addEdges("G``","C",5)
print("The graph is : \n",graph.edges)
print("\n")

uniformCostSearch(graph,"S","G``")
uniformCostSearch(graph,"S","G`")
uniformCostSearch(graph,"S","G")
uniformCostSearch(graph,"S","G`")

LAB NO: 04
OBJECTIVE: To Become Familiar With Informed Search Algorithms

TASK NO: 01 Execute the code for A* provided in the handouts.


TASK NO: 02 Execute the A* algorithm on any map of your own choice.
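
The handout's A* code is not reproduced here. A minimal sketch of A* on a small grid map with a Manhattan-distance heuristic; the grid, start, and goal below are illustrative assumptions, not from the handouts:

import heapq

def astar(grid, start, goal):
    # A* on a 4-connected grid; 1 = wall, 0 = free cell
    def h(a, b):                       # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    open_heap = [(h(start, goal), 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)     # cheapest f first
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(open_heap,
                               (g + 1 + h((nr, nc), goal), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))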
LAB NO: 05
OBJECTIVE: To become familiar with the NLTK toolkit
TASK NO 1: Perform each task included in the handouts on a text of your own choice.

Tokenization:

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.tokenize import word_tokenize, sent_tokenize
data="All work and no play makes jack a dull boy, all work and no play"
print(word_tokenize(data))

Tokenizing Sentences

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.tokenize import word_tokenize, sent_tokenize
data="All work and no play makes jack a dull boy. all work and no play"
print(sent_tokenize(data))

Get Synonyms from Wordnet

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import wordnet
syn=wordnet.synsets("pain")
print(syn[0].definition())
print(syn[0].examples())
Getting Synonyms

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import wordnet
synonyms=[]
for syn in wordnet.synsets("Software"):
for lemma in syn.lemmas():
synonyms.append(lemma.name())
print(synonyms)

Getting Antonyms

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import wordnet
antonyms=[]
for syn in wordnet.synsets("Large"):
for l in syn.lemmas():
if l.antonyms():
antonyms.append(l.antonyms()[0].name())
print(antonyms)
NLTK and Arrays

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.tokenize import word_tokenize, sent_tokenize
data="All work and no play makes jack a dull boy. All work and no play makes jack dull boy"
phrases=sent_tokenize(data)
words=word_tokenize(data)
print(phrases[0])
print(phrases[1])
print(words[0])

Stop Words

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import stopwords
print(set(stopwords.words('english')))   # the stopwords fileids are lowercase

Removing Stop Words

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
text="In this program, I using NLTK. It is an interesting toolkit to use"
stop_words=set(stopwords.words('English'))
words=word_tokenize(text)
new_sentence=[]
for word in words:
if word not in stop_words:
new_sentence.append(word)

print(new_sentence)

Searching

import nltk
nltk.data.path.append(r"E:\nltk_data")
file=open('wordstem.txt','r')
read_file=file.read()
text=nltk.Text(nltk.word_tokenize(read_file))

text.concordance('word')   # concordance() prints its matches directly; it returns None

Stemming

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize

words=["Go","Going","Gone","Goes"]
ps=PorterStemmer()

for word in words:
    print(ps.stem(word))

Stemming For Sentences

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize

ps=PorterStemmer()
sentence="gaming, the gamers play games"
words=word_tokenize(sentence)

for word in words:
    print(word + ":" + ps.stem(word))

Lemmatizing words using wordnet


import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.stem import PorterStemmer
stemmer=PorterStemmer()
print(stemmer.stem('increases'))

import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.stem import WordNetLemmatizer
lemmatizer=WordNetLemmatizer()
print(lemmatizer.lemmatize('increases'))

POS Tags

import nltk
nltk.data.path.append(r"E:\nltk_data")
text=nltk.word_tokenize("Dive into NLTK: Part-of-speech tagging and POS Tagger")
print(nltk.pos_tag(text))

TASK NO: 02 Define and perform chunking in Python using NLTK

Chunking

The basic technique we will use for entity detection is chunking, which segments and labels multi-token
sequences. Word-level tokenization and part-of-speech tagging operate on individual tokens, while chunking
groups them into larger, higher-level units; each of these larger units is called a chunk. Like tokenization,
which omits whitespace, chunking usually selects a subset of the tokens. Also like tokenization, the pieces
produced by a chunker do not overlap in the source text.

Noun Phrase Chunking: We will begin by considering the task of noun phrase chunking, or NP-chunking,
where we search for chunks corresponding to individual noun phrases.

import nltk
nltk.data.path.append(r"E:\nltk_data")
sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"),
("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]

grammar = "NP: {<DT>?<JJ>*<NN>}"

cp = nltk.RegexpParser(grammar)
result = cp.parse(sentence)
print(result)
"""(S
(NP the/DT little/JJ yellow/JJ dog/NN)
barked/VBD
at/IN
(NP the/DT cat/NN))"""
result.draw()

LAB NO: 06
OBJECTIVE: To Become Familiar With Parsing, Classification Of Text And Sentiment
Analysis
TASK NO: 01 Execute all examples provided in the lab handouts. (The parsing example
should include your roll number in the sentence.)

Parsing:

With roll number


Bayes Theorem:
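
For reference, Bayes' theorem states:

P(A|B) = P(B|A) * P(A) / P(B)

that is, the posterior probability of A given B equals the likelihood of B given A times the prior probability of A, normalized by the probability of B.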
Sentiment Analysis:
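
The handout's sentiment-analysis code is likewise not reproduced here. A minimal sketch using NLTK's VADER analyzer (assumes the vader_lexicon resource has been downloaded):

import nltk
# nltk.download('vader_lexicon')   # one-time download
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
# polarity_scores returns negative/neutral/positive/compound scores
print(sia.polarity_scores("NLTK is an interesting toolkit to use"))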

TASK NO: 02 Explore different types of parsers (at least 2).


DEPENDENCY PARSING

Dependency parsing (DP) is a modern parsing mechanism. The main concept of DP is that each linguistic
unit (word) is connected to another by a directed link; these links are called dependencies in linguistics.
There is a lot of ongoing work in the parsing community: while phrase structure parsing is still widely used,
dependency parsing has turned out to be more efficient for free word order languages (such as Czech and
Turkish).

A very clear distinction can be made by looking at the parse trees generated by a phrase structure grammar
and a dependency grammar for a given example, such as the sentence "The big dog chased the cat"; a
dependency analysis of this sentence is sketched below.
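
The parse-tree figure from the source is not reproduced here. As a rough sketch, NLTK's projective dependency parser can produce a dependency analysis of the same sentence; the toy grammar below is an illustrative assumption, not part of the original handout:

import nltk

# Toy dependency grammar for "the big dog chased the cat":
# each head word lists the words that may depend on it
dep_grammar = nltk.DependencyGrammar.fromstring("""
'chased' -> 'dog' | 'cat'
'dog' -> 'the' | 'big'
'cat' -> 'the'
""")
pdp = nltk.ProjectiveDependencyParser(dep_grammar)
for tree in pdp.parse('the big dog chased the cat'.split()):
    print(tree)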

RECURSIVE DESCENT PARSER :

A recursive descent parser is a top-down parser, so called because it builds a parse tree from the top (the start
symbol) down, and from left to right, using an input sentence as a target as it is scanned from left to right.
The actual tree is not constructed but is implicit in a sequence of function calls. (In the same way, the actual
tree is implicit in the sequence of reductions used by a shift-reduce parser.) This type of parser was very
popular for real compilers in the past, but is not as popular now. A recursive descent parser is often written
entirely by hand and does not require any sophisticated tools. It is a simple and effective technique, but is not
as powerful as some of the fancier shift-reduce parsers called LR parsers. There also exists a table-driven type
of top-down parser that is sometimes used.

A recursive descent parser uses a recursive function corresponding to each non-terminal symbol in the
language. For simplicity, one often uses the name of the non-terminal as the name of the function. The body
of each recursive function mirrors the right side of the corresponding rule. If there are several rules with the
same non-terminal on the left side, the code mirrors all those possibilities. In order for this method to work,
one must be able to decide which function to call based on the next input symbol. (This parser, like most,
looks one token ahead all the time.)

Surprisingly, one hard part of even small recursive descent parsers is the scanning: repeatedly fetching the
next token from the scanner. It is tricky to decide when to scan, and the parser doesn't work at all if there is
an extra scan or a missing scan.

This parser has another advantage: the writer has complete control over a relatively simple program. If a
problem comes up, one can often "cheat" by hacking in specialized code to solve the problem. As one
common example, you can "peek ahead", further than the next token, to decide what to do.
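
A minimal sketch with NLTK's RecursiveDescentParser; the toy grammar and sentence are illustrative assumptions:

import nltk

# Toy grammar; the parser dedicates one recursive call per non-terminal
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the'
N -> 'dog' | 'cat'
V -> 'chased'
""")
rd_parser = nltk.RecursiveDescentParser(grammar)
for tree in rd_parser.parse("the dog chased the cat".split()):
    print(tree)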
TASK NO: 03 Perform parsing using the generate module on the demo grammar provided by
NLTK, and perform sentence generation using a CFG with n=10.
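
A minimal sketch of this task, using the demo grammar shipped with nltk's generate module:

from nltk import CFG
from nltk.parse.generate import generate, demo_grammar

# demo_grammar is a small CFG string bundled with nltk
grammar = CFG.fromstring(demo_grammar)
print(grammar)

# generate() enumerates sentences derivable from the grammar;
# n=10 limits the output to the first ten
for sentence in generate(grammar, n=10):
    print(' '.join(sentence))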
TASK NO: 04 Modify the text-classification example using Muslim male and female
names.
muslim_male.txt muslim_female.txt
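
The name files themselves are not included. A hedged sketch of the modified classifier, following the NLTK book's names-corpus example and assuming muslim_male.txt and muslim_female.txt each contain one name per line:

import nltk
import random

def gender_features(name):
    # NLTK-book style feature: the last letter of the name
    return {'last_letter': name[-1].lower()}

# Assumes each file lists one name per line
male = open('muslim_male.txt').read().split()
female = open('muslim_female.txt').read().split()
labeled = [(n, 'male') for n in male] + [(n, 'female') for n in female]
random.shuffle(labeled)

featuresets = [(gender_features(n), g) for (n, g) in labeled]
train_set, test_set = featuresets[100:], featuresets[:100]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print("Accuracy:", nltk.classify.accuracy(classifier, test_set))
print(classifier.classify(gender_features('Ahmed')))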
