TASK NO: 01- Write a Python program which accepts the radius of a circle from the user and computes the area.
Program:
pi = 3.14
radius = float(input("Enter radius of circle= "))
area = pi * radius * radius
print(area)
TASK NO: 02- Write a Python program to guess a number between 1 and 9. Hint: use the random module to randomly generate numbers.
Program:
import random
a = int(input("Enter any number between 1 to 9: "))
b = random.randint(1, 9)
if a == b:
    print("Numbers matched")
else:
    print("Numbers did not match")
print("Number was")
print(b)
TASK NO: 03- Write a Python program that accepts a word from the user and reverses it.
Program:
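The program body is missing in this copy; a minimal sketch using string slicing (the helper name reverse_word is my own):

```python
def reverse_word(word):
    # A slice with step -1 walks the string backwards
    return word[::-1]

print(reverse_word("python"))
```

In an interactive version, the argument would come from input() as in the other tasks.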
TASK NO: 04- Write a Python program that prints all the numbers from 0 to 6 except 3 and 6. Note: use the 'continue' statement.
Program:
for x in range(7):
    if x == 3 or x == 6:
        continue
    print(x)
TASK NO: 05- Write a Python program which iterates the integers from 1 to 50. For multiples of three print
“Multiple of 3" instead of the number and for the multiples of five print “Multiple of 5". For numbers which are
multiples of both three and five print “Multiple of both 3 & 5".
Program:
for x in range(1, 51):
    if x % 3 == 0 and x % 5 == 0:
        print("Multiple of both 3 & 5")
    elif x % 3 == 0:
        print("Multiple of 3")
    elif x % 5 == 0:
        print("Multiple of 5")
    else:
        print(x)
TASK NO: 06- Write a Python program to check whether an alphabet is a vowel or consonant.
Program:
l = input("Enter any letter of the alphabet: ").lower()
if l in ('a', 'e', 'i', 'o', 'u'):
    print("%s is a vowel" % l)
elif l == 'y':
    print("Sometimes the letter y stands for a vowel, sometimes for a consonant")
else:
    print("%s is a consonant" % l)
TASK NO: 07- Write a Python program to convert temperatures to and from Celsius and Fahrenheit.
Program:
l = input("Press F for conversion from Fahrenheit to Celsius or press C for conversion from Celsius to Fahrenheit: ")
if l == 'F':
    fahrenheit = float(input("Enter temperature in Fahrenheit= "))
    celsius = (fahrenheit - 32) * 5.0 / 9.0
    print("Temperature:", fahrenheit, "Fahrenheit =", celsius, "C")
elif l == 'C':
    celsius = float(input("Enter temperature in Celsius= "))
    fahrenheit = (9.0 / 5.0) * celsius + 32
    print("Temperature:", celsius, "Celsius =", fahrenheit, "F")
else:
    print("Invalid operation")
LAB NO: 02
OBJECTIVE: Introduction to Data Structures, Functions and Recursion
TASKS:
Execute at least 3 functions from each:
• Mathematical Functions
import math
# Fractional number.
n = -99.99
m=50
print("number n is", n)
print("number m is", m)
print("Floor",math.floor(n))
print("ceil",math.ceil(n))
# Absolute value.
print("absolute",abs(n))
print("power",pow(n,2))
print("square root",math.sqrt(m))
• Random Functions
import random
print ('Random integers between 0 and 5: ',random.randint(0, 5))
print('Random numbers between 0 and 100: ',random.random() * 100)
myList = [2, 109, False, 10, "Lorem", 482, "Ipsum"]
print('Random choice from the list',random.choice(myList))
• Trigonometric Functions
import math
a = math.pi/6
print ("cos(math.pi) : ", math.cos(math.pi))
print ("sin(2*math.pi) : ", math.sin(2*math.pi))
print ("The value of tangent of pi/6 is : ", end=" ")
print (math.tan(a))
• Mathematical Constants
import math
print("pi",math.pi, "e", math.e)
Create tuples and manipulate them using built-in functions, operations and slicing.
my_tuple = ('p','e','r','m','i','t')
print("First Tuple",my_tuple)
#delete
del my_tuple
print("First Tuple deleted")
new_tuple = ('p','r','o','g','r','a','m','i','z')
#count
print(new_tuple.count('p'))
# Index
print(new_tuple.index('g'))
print("slicing")
# elements 2nd to 4th
print(new_tuple[1:4])
# elements beginning to 2nd
print(new_tuple[:-7])
# elements 8th to end
print(new_tuple[7:])
# elements beginning to end
print(new_tuple[:])
LAB TASKS:
Write a program for Fibonacci series with simple function and recursion.
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

n = int(input("Enter number of terms:"))
print("Fibonacci sequence:")
for i in range(n):
    print(fibonacci(i))
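The task asks for a simple function as well as recursion, but only the recursive version appears above; a possible iterative counterpart (function name my own):

```python
def fibonacci_iterative(n):
    # Carry two running values instead of making recursive calls
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for i in range(10):
    print(fibonacci_iterative(i))
```

Unlike the recursive version, this runs in linear time, since each term is computed once.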
• Write a Python function that accepts a string and calculates the number of upper case letters and lower case letters.
def string_test(s):
    d = {"UPPER_CASE": 0, "LOWER_CASE": 0}
    for c in s:
        if c.isupper():
            d["UPPER_CASE"] += 1
        elif c.islower():
            d["LOWER_CASE"] += 1
    print("Original String : ", s)
    print("No. of Upper case characters : ", d["UPPER_CASE"])
    print("No. of Lower case Characters : ", d["LOWER_CASE"])

string_test('Artificial Intelligence')
CLASS TASKS:
Write a Python program to sum all the items in a list.
lst = []
num = int(input('How many numbers: '))
for n in range(num):
    numbers = int(input('Enter number '))
    lst.append(numbers)
print("Sum of elements in given list is :", sum(lst))
• Write a Python script to concatenate following dictionaries to create a new one. Sample Dictionary:
dic1={1:10, 2:20} dic2={3:30, 4:40} dic3={5:50,6:60} Expected Result : {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6:
60}
dic1={1:10, 2:20}
dic2={3:30, 4:40}
dic3={5:50,6:60}
dic4 = {}
for d in (dic1, dic2, dic3):
    dic4.update(d)
print(dic4)
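Since Python 3.5, the same merge can also be written in a single expression with dictionary unpacking:

```python
dic1 = {1: 10, 2: 20}
dic2 = {3: 30, 4: 40}
dic3 = {5: 50, 6: 60}
# ** unpacks each dictionary into a new literal; later keys win on collision
dic4 = {**dic1, **dic2, **dic3}
print(dic4)
```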
LAB NO: 03
OBJECTIVE: To Become Familiar with Searching Algorithms
TASK NO 1: Execute breadth first search and the BFS shortest-path algorithm.
Creating graph:
Breadth-first Search
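The BFS program itself is not reproduced in this copy; a minimal sketch of breadth-first search with a shortest-path variant, using a plain adjacency-list dictionary (the example graph and function name are my own, not the handout's):

```python
from collections import deque

# Example adjacency list (illustrative only)
graph = {
    "S": ["A", "B"],
    "A": ["C"],
    "B": ["C", "D"],
    "C": ["G"],
    "D": ["G"],
    "G": [],
}

def bfs_shortest_path(graph, start, goal):
    # Each queue entry carries the whole path so far, so the first
    # time the goal is dequeued we already hold a shortest path (by edge count).
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(bfs_shortest_path(graph, "S", "G"))
```

Because BFS explores the graph level by level, the first path that reaches the goal uses the fewest edges.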
TASK NO 2: Execute depth first search and the DFS shortest-path algorithm.
import heapq

class Graph:
    def __init__(self, node):
        self.edges = {node: []}
        self.weights = {}

    def addEdges(self, node, child, cost):
        if node not in self.edges:
            self.edges[node] = []
        self.edges[node].append(child)
        self.weights[node + "->" + child] = cost

    def neighbours(self, node):
        return self.edges.get(node, [])

    def get_cost(self, from_node, to_node):
        return self.weights[from_node + "->" + to_node]

def uniformCostSearch(graph, start, goal):
    visited = []
    # Priority queue of (cost so far, node, path so far); heapq always
    # pops the cheapest frontier entry, which is what makes this UCS.
    queue = [(0, start, start)]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node not in visited:
            visited.append(node)
            if node == goal:
                print("The shortest path is : ", path, " with cost of ", cost, " units.")
                return
            for i in graph.neighbours(node):
                if i not in visited:
                    total_cost = cost + graph.get_cost(node, i)
                    heapq.heappush(queue, (total_cost, i, path + "->" + i))

graph = Graph("S")
graph.addEdges("S", "A", 1); graph.addEdges("S", "B", 5); graph.addEdges("S", "C", 8)
graph.addEdges("A", "D", 3); graph.addEdges("A", "E", 7)
graph.addEdges("A", "G", 9); graph.addEdges("B", "G`", 4)
graph.addEdges("C", "G``", 5); graph.addEdges("D", "A", 3)
graph.addEdges("E", "A", 7); graph.addEdges("G", "A", 9)
graph.addEdges("G`", "B", 4); graph.addEdges("G``", "C", 5)
print("The graph is : \n", graph.edges)
print("\n")
uniformCostSearch(graph, "S", "G``")
uniformCostSearch(graph, "S", "G`")
uniformCostSearch(graph, "S", "G")
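Note that the program above implements uniform-cost search; the depth-first part of the task can be sketched with a simple recursive function over a plain adjacency list (example graph and names are my own):

```python
def dfs_path(graph, start, goal, visited=None):
    # Depth-first: follow one branch all the way down before backtracking
    if visited is None:
        visited = set()
    visited.add(start)
    if start == goal:
        return [start]
    for neighbour in graph.get(start, []):
        if neighbour not in visited:
            sub_path = dfs_path(graph, neighbour, goal, visited)
            if sub_path:
                return [start] + sub_path
    return None

graph = {"S": ["A", "B"], "A": ["D"], "B": ["G"], "D": ["G"], "G": []}
print(dfs_path(graph, "S", "G"))
```

Unlike BFS or UCS, the path DFS returns depends on neighbour order and is not guaranteed to be shortest.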
LAB NO: 04
OBJECTIVE: To Become Familiar With Informed Search Algorithms
Tokenization:
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.tokenize import word_tokenize, sent_tokenize
data="All work and no play makes jack a dull boy, all work and no play"
print(word_tokenize(data))
Tokenizing Sentences
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.tokenize import word_tokenize, sent_tokenize
data="All work and no play makes jack a dull boy. all work and no play"
print(sent_tokenize(data))
Getting Definitions and Examples
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import wordnet
syn=wordnet.synsets("pain")
print(syn[0].definition())
print(syn[0].examples())
Getting Synonyms
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import wordnet
synonyms=[]
for syn in wordnet.synsets("Software"):
for lemma in syn.lemmas():
synonyms.append(lemma.name())
print(synonyms)
Getting Antonyms
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import wordnet
antonyms=[]
for syn in wordnet.synsets("Large"):
for l in syn.lemmas():
if l.antonyms():
antonyms.append(l.antonyms()[0].name())
print(antonyms)
NLTK and Arrays
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.tokenize import word_tokenize, sent_tokenize
data="All work and no play makes jack a dull boy. All work and no play makes jack dull boy"
phrases=sent_tokenize(data)
words=word_tokenize(data)
print(phrases[0])
print(phrases[1])
print(words[0])
Stop Words
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import stopwords
print(set(stopwords.words('english')))
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
text="In this program, I am using NLTK. It is an interesting toolkit to use"
stop_words=set(stopwords.words('english'))
words=word_tokenize(text)
new_sentence=[]
for word in words:
if word not in stop_words:
new_sentence.append(word)
print(new_sentence)
Searching
import nltk
nltk.data.path.append(r"E:\nltk_data")
with open('wordstem.txt', 'r') as file:
    read_file = file.read()
text = nltk.Text(nltk.word_tokenize(read_file))
# concordance() prints its matches directly and returns None
text.concordance('word')
Stemming
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
words = ["Go", "Going", "Gone", "Goes"]
ps = PorterStemmer()
for w in words:
    print(w, "->", ps.stem(w))
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
ps = PorterStemmer()
sentence = "gaming, the gamers play games"
words = word_tokenize(sentence)
for w in words:
    print(w, "->", ps.stem(w))
Lemmatization
import nltk
nltk.data.path.append(r"E:\nltk_data")
from nltk.stem import WordNetLemmatizer
lemmatizer=WordNetLemmatizer()
print(lemmatizer.lemmatize('increases'))
Pos Tags
import nltk
nltk.data.path.append(r"E:\nltk_data")
text=nltk.word_tokenize("Dive into NLTK: Part-of-speech tagging and POS Tagger")
print(nltk.pos_tag(text))
Chunking
The basic technique we will use for entity detection is chunking, which segments and labels multi-token
sequences as illustrated in 2.1. The smaller boxes show the word-level tokenization and part-of-speech tagging,
while the large boxes show higher-level chunking. Each of these larger boxes is called a chunk. Like
tokenization, which omits whitespace, chunking usually selects a subset of the tokens. Also like tokenization,
the pieces produced by a chunker do not overlap in the source text.
Noun Phrase Chunking: We will begin by considering the task of noun phrase chunking, or NP-
chunking, where we search for chunks corresponding to individual noun phrases.
import nltk
nltk.data.path.append(r"E:\nltk_data")
grammar = "NP: {<DT>?<JJ>*<NN>}"
sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"),
            ("dog", "NN"), ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]
cp = nltk.RegexpParser(grammar)
result = cp.parse(sentence)
print(result)
"""(S
(NP the/DT little/JJ yellow/JJ dog/NN)
barked/VBD
at/IN
(NP the/DT cat/NN))"""
result.draw()
LAB NO: 06
OBJECTIVE: To Become Familiar With Parsing, Classification Of Text And Sentiment
Analysis
TASK NO: 01 Execute all examples that are provided in the lab handouts. (The parsing example should include your roll no: in the sentence.)
Parsing:
Dependency parsing (DP) is a modern parsing mechanism. The main concept of DP is that each linguistic
unit (word) is connected to the others by directed links. These links are called dependencies in
linguistics. There is a lot of work going on in the current parsing community. While phrase structure
parsing is still widely used, for free word order languages (such as Czech and Turkish) dependency parsing
has turned out to be more efficient.
A very clear distinction can be made by looking at the parse trees generated by phrase structure grammar and
dependency grammar for a given example, such as the sentence "The big dog chased the cat". The parse tree for
the preceding sentence is:
A recursive descent parser is a top-down parser, so called because it builds a parse tree from the top (the start
symbol) down, and from left to right, using an input sentence as a target as it is scanned from left to right.
The actual tree is not constructed but is implicit in a sequence of function calls. (In the same way, the actual
tree is implicit in the sequence of reductions used for a shift-reduce parser.) This type of parser was very
popular for real compilers in the past, but is not as popular now. A recursive descent parser is often written
entirely by hand and does not require any sophisticated tools. It is a simple and effective technique, but is not
as powerful as some of the shift-reduce parsers -- not the one presented in class, but fancier similar ones
called LR parsers. There also exists a table-driven type of top-down parser that is sometimes used.

This parser uses a recursive function corresponding to each non-terminal symbol in the language. For simplicity one often
uses the name of the non-terminal as the name of the function. The body of each recursive function mirrors
the right side of the corresponding rule. If there are several rules with the same non-terminal on the left side,
the code mirrors all those possibilities. In order for this method to work, one must be able to decide which
function to call based on the next input symbol. (This parser, like most, looks one token ahead all the time.)

Surprisingly, one hard part of even small recursive descent parsers is the scanning: repeatedly fetching
the next token from the scanner. It is tricky to decide when to scan, and the parser doesn't work at all if there
is an extra scan or a missing scan.

This parser has another advantage: the writer has complete control over a
relatively simple program. If a problem comes up, one can often "cheat" by hacking in specialized code to
solve the problem. As one common example, you can "peek ahead", further than the next token, to decide
what to do.
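The idea above can be made concrete with a toy recursive descent parser for a tiny arithmetic grammar (expr -> term ('+' term)*, term -> digit), written from scratch rather than with NLTK; the grammar, class and method names are my own:

```python
class RecursiveDescentParser:
    # One method per non-terminal, one-token lookahead via peek()
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0  # index of the next unconsumed token

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def expect(self, token):
        if self.peek() != token:
            raise SyntaxError("expected %r at position %d" % (token, self.pos))
        self.pos += 1

    def parse_expr(self):
        # expr -> term ('+' term)*
        value = self.parse_term()
        while self.peek() == "+":
            self.expect("+")
            value += self.parse_term()
        return value

    def parse_term(self):
        # term -> single digit
        tok = self.peek()
        if tok is None or not tok.isdigit():
            raise SyntaxError("expected digit at position %d" % self.pos)
        self.pos += 1
        return int(tok)

print(RecursiveDescentParser(list("1+2+3")).parse_expr())
```

Note how each function mirrors the right side of its rule, and how the parser decides what to do by looking at the single next token.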
TASK NO: 03 Perform parsing using the "generate" module's demo grammar provided by NLTK, and perform sentence generation using a CFG with n=10.
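A possible sketch of this task, assuming nltk is installed (demo_grammar is the sample grammar shipped in nltk.parse.generate):

```python
from nltk import CFG
from nltk.parse.generate import generate, demo_grammar

grammar = CFG.fromstring(demo_grammar)
# n=10 limits generation to the first 10 sentences of the language
for sentence in generate(grammar, n=10):
    print(' '.join(sentence))
```

generate() yields each sentence as a list of tokens, so join() turns it back into text.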
TASK NO: 04 Modify the classification-of-text example using Muslim male and female names.
muslim_male.txt muslim_female.txt