S.No Date Experiment Name Marks (Total 50M) Faculty Signature
Marks per experiment: Pre-Lab (10M); In-Lab (25M) = Program/Procedure (5M) + Data and Results (10M) + Analysis & Inference (10M); Post-Lab (10M); Viva Voce (5M).
1. Introductory Session (-NA-)
2. Tokenization_of_text (Experiment #1)
3. Text_2_Sequences (Experiment #2)
4. One_Hot_Encoding (Experiment #3)
5. Vectorization_of_texts (Experiment #4)
6. Databases_how_to_Use (Experiment #5)
7. Parsing_nltk_toolbox (Experiment #6)
8. TF_Testing_fail (Experiment #7)
9. IDF_Why (Experiment #8)
10. TFIDF_Vectorization (Experiment #9)
11. TF_IDF_Failure_meaning (Experiment #10)
12. Distance_Metrics (Experiment #11)
Aim/Objective:
The aim is to compare and evaluate different tokenization techniques or libraries, such as NLTK,
SpaCy, and TensorFlow, to determine their effectiveness in handling various types of text data.
Description:
Tokenization is the first step in any NLP pipeline. The experiment explores how tokenization
using NLTK, spaCy, and TensorFlow can be integrated into a broader NLP pipeline or used as a
preprocessing step for tasks such as sentiment analysis, machine translation, named entity
recognition, or text summarization. The focus is on understanding the impact of tokenization choices
on downstream model performance, along with the performance characteristics of tokenization in
NLTK and TensorFlow.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
This Section must contain at least 5 Descriptive type questions or Self-Assessment Questions which
help the student to understand the Program/Experiment that must be performed in the Laboratory
Session.
2. How can you tokenize a sentence into individual words using NLTK?
5. How can you tokenize a text document into sentences using NLTK?
In-Lab:
1. Apply tokenization methods from the NLTK library to 5-line text data available in NLTK.
2. Apply tokenization methods from the TensorFlow library to the same 5-line text data.
3. Draw comparisons based on their text-handling capabilities.
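As a warm-up for the comparison in task 3, the contrast between the two tokenization styles can be sketched without any library. The two toy tokenizers below approximate TensorFlow's default behaviour (lowercase, filter punctuation, split on spaces) and NLTK's Treebank-style rules (punctuation and clitics split off as tokens); both functions and the sample lines are illustrative stand-ins, not the libraries' actual implementations.

```python
import re

# Five sample lines standing in for the 5-line NLTK text data.
lines = [
    "Don't hesitate to ask questions.",
    "Good muffins cost $3.88 in New York!",
    "Tokenization splits text into tokens.",
    "I can't wait; it's great.",
    "Mr. Smith arrived yesterday.",
]

def whitespace_tokenize(text):
    # TensorFlow-style: lowercase, drop punctuation, split on whitespace.
    return re.sub(r"[^\w\s']", " ", text.lower()).split()

def treebank_style_tokenize(text):
    # Rough Treebank-style rules: split off punctuation and clitics.
    text = re.sub(r"([.,!?;])", r" \1 ", text)
    text = re.sub(r"(n't|'s)\b", r" \1", text)
    return text.split()

for line in lines:
    print(whitespace_tokenize(line))
    print(treebank_style_tokenize(line))
```

Comparing the two outputs already shows the trade-off the lab asks about: the whitespace style loses punctuation and contractions entirely, while the Treebank style keeps them as separate tokens.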
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
1. What is tokenization?
2. According to your experiment, which tokenizer API is the best?
3. How do NLTK and TensorFlow handle tokenization for different languages?
4. List the metrics used to evaluate tokenization techniques.
5. Can you tokenize multiple text documents simultaneously using TensorFlow?
Post-Lab:
1. Try tokenization in the spaCy library and compare it with NLTK and TensorFlow.
2. Try tokenization on the large corpus dataset given below.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to evaluate different techniques or libraries, such as NLTK, SpaCy, and TensorFlow, to
determine their effectiveness in converting text to a sequence of numbers.
Description:
The objective of converting text to a sequence of numbers is a fundamental step in natural language
processing (NLP) tasks. The primary goal of this conversion is to represent textual data in a numerical
format that machine learning models can process effectively. To convert text to a numerical format
that enables the application of machine learning and NLP techniques.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
This Section must contain at least 5 Descriptive type questions or Self-Assessment Questions which
help the student to understand the Program/Experiment that must be performed in the Laboratory
Session.
3. Are all sentences in the text considered to have the same length? If not, what did you do?
In-Lab:
1. Apply tokenization and convert a sequence of sentences in the NLTK library to a sequence of
numbers.
2. Convert a 10-sentence dataset with multiple-length sentences into a number array of equal
size for ML model training.
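The two tasks above can be sketched in plain Python; the steps mirror what the Keras `Tokenizer` and `pad_sequences` utilities do, without importing them (the sample sentences are illustrative).

```python
sentences = [
    "the cat sat on the mat",
    "the dog barked",
    "a cat and a dog",
]

# 1. Build a word index over the corpus (1-based; 0 is reserved for padding).
vocab = {}
for s in sentences:
    for w in s.split():
        vocab.setdefault(w, len(vocab) + 1)

# 2. Replace each word with its integer id.
sequences = [[vocab[w] for w in s.split()] for s in sentences]

# 3. Post-pad with 0 so every row has equal length for ML model training.
max_len = max(len(seq) for seq in sequences)
padded = [seq + [0] * (max_len - len(seq)) for seq in sequences]

print(vocab)
print(padded)
```

Step 3 is the answer to the unequal-length question in the Pre-Lab: shorter sentences are padded (here with trailing zeros) so the whole dataset becomes one rectangular number array.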
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to convert the text into numbers and eventually code those converted numbers into
encodings for downstream NLP tasks using NLTK, SpaCy, and TensorFlow.
Description:
One hot encoding of text data is a process of transforming categorical data, such as words or symbols,
into numerical data that can be used by machine learning models. It involves creating a binary vector
for each categorical value, where only one element is 1 and the rest are 0. The length of the vector is
equal to the number of unique categories in the data. One hot encoding allows the representation of
categorical data as multidimensional binary vectors that can be fed to models that require numerical
input.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
This Section must contain at least 5 Descriptive type questions or Self-Assessment Questions which
help the student to understand the Program/Experiment that must be performed in the Laboratory
Session.
3. Are all sentences in the text considered to have the same length? If not, what did you do?
In-Lab:
1. Apply One Hot Encodings and convert a sequence of sentences in the NLTK library to a
sequence of numbers and then OHE.
2. Convert a 10-sentence dataset with multiple-length sentences into an OHE array of equal size
for ML model training.
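A minimal, library-free sketch of the In-Lab flow: build a vocabulary, one-hot encode each word as a binary vector with a single 1, and pad the sentences with all-zero vectors to equal size (the sample sentences are illustrative).

```python
sentences = ["deep learning is fun", "learning is hard"]

# Build a word index (0-based; one vector dimension per unique word).
vocab = {}
for s in sentences:
    for w in s.split():
        vocab.setdefault(w, len(vocab))

def one_hot(word):
    # Binary vector of vocabulary size, with a single 1 at the word's index.
    vec = [0] * len(vocab)
    vec[vocab[word]] = 1
    return vec

encoded = [[one_hot(w) for w in s.split()] for s in sentences]

# Pad shorter sentences with all-zero vectors so the array has equal size.
max_len = max(len(row) for row in encoded)
pad = [0] * len(vocab)
padded = [row + [pad] * (max_len - len(row)) for row in encoded]

print(padded)
```

Note how the array size grows as vocabulary times sentence length; this dimensionality cost is worth recording for the later comparison with TF vectors.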
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try using the OHE data to train a simple neural network model.
2. Try text-to-OHE on the large corpus dataset given below and train an ANN model.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to convert text into vectors by computing term frequencies and create a corpus.
Description:
The objective is to convert text to a sequence of numbers using a term-frequency (TF) vectorizer.
The primary goal of this conversion is to represent textual data in a numerical format that machine
learning models can process effectively, enabling the application of machine learning and NLP
techniques.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
This Section must contain at least 5 Descriptive type questions or Self-Assessment Questions which
help the student to understand the Program/Experiment that must be performed in the Laboratory
Session.
In-Lab:
1. Apply tokenization and convert a sequence of sentences in the NLTK library to a sequence of
numbers. Use those sequences and calculate term frequencies for representing text data on
a small corpus.
2. Convert a 10-sentence dataset with multiple-length sentences into TF representations and
compare them with OHE.
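The TF computation behind task 1 (count of a term divided by the document length) can be sketched directly in plain Python; the two documents below are illustrative stand-ins for a small corpus.

```python
from collections import Counter

# TF = (occurrences of the term in the document) / (total terms in the document).
docs = ["the cat sat on the mat", "the cat ate the fish"]
vocab = sorted({w for d in docs for w in d.split()})

def tf_vector(doc):
    words = doc.split()
    counts = Counter(words)
    return [counts[t] / len(words) for t in vocab]

for d in docs:
    print(tf_vector(d))
```

Unlike a padded OHE array, each document here becomes a single vector of vocabulary size regardless of sentence length, which is the dimensionality difference task 2 asks you to compare.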
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to use the online resources of text data to test NLP applications.
Description:
A text corpus is a large and structured collection of texts, typically stored in a digital format, that
serves as a linguistic resource for language analysis and research. It consists of a diverse range of
written or spoken texts from various sources and domains, such as books, articles, newspapers,
websites, social media, conversations, and more.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
1. How can I create a text corpus from a collection of documents using Python?
2. What Python libraries can I use to tokenize and preprocess text data for corpus creation?
3. How can I handle different file formats (e.g., PDF, Word documents) when building a text
corpus in Python?
4. What are the steps involved in cleaning and preprocessing text data for corpus creation?
5. How can I remove stopwords and punctuation from text documents when creating a corpus
in Python?
In-Lab:
1. From the NLTK library, download and use the built-in WordNet corpus; extract the required
text dataset and tokenize the text.
2. From spaCy, use the en_core_web_sm (small English) pipeline and tokenize text with it.
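Both tasks assume the corpora are already on disk. A setup sketch, based on the NLTK and spaCy documentation linked in the pre-requisites (it needs network access, and `python -m spacy download en_core_web_sm` must be run beforehand):

```python
import nltk
nltk.download("wordnet")              # fetch the built-in WordNet corpus once
from nltk.corpus import wordnet
print(wordnet.synsets("dog")[:3])     # a few WordNet entries for "dog"

import spacy
nlp = spacy.load("en_core_web_sm")    # the small English pipeline
doc = nlp("Tokenize this sentence with spaCy.")
print([token.text for token in doc])  # spaCy's tokens for the sentence
```

Both loads are one-time setup; the tokenization work itself then follows the same pattern as the earlier experiments.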
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try to encode the WordNet text into TF vectors and OHE, and measure the memory occupied
by each representation.
2. Try to find some text datasets available online and load into your current program.
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to analyze the grammatical structure of sentences in natural language text data using
NLTK and spaCy.
Description:
To perform parsing in NLTK, you typically start by defining a grammar as a context-free grammar
(CFG), then apply a parsing algorithm to a sentence to obtain a parse tree or dependency-tree
representation. NLTK provides functions and methods to assist in these tasks, such as nltk.CFG for
defining a CFG, nltk.ChartParser for chart parsing, and nltk.DependencyParser for dependency parsing.
By utilizing NLTK's parsing capabilities, you can analyze sentence structure, extract syntactic
information, and facilitate further natural language understanding and processing tasks.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
1. What is parsing in natural language processing (NLP), and what is its goal?
2. Explain the concept of Context-Free Grammars (CFG) and their role in parsing.
In-Lab:
1. Analyze the grammatical structure of sentences and extract syntactic information from a small
text corpus to evaluate the performance of NLTK's parsing libraries.
2. Show that the parsers in the spaCy and NLTK libraries can extract semantic
information.
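For intuition before reaching for `nltk.CFG` and `nltk.ChartParser`, the sketch below hand-rolls a tiny parser over a toy grammar. It shows the kind of parse tree a chart parser returns, not NLTK's actual algorithm (this naive recursive descent does not backtrack across sub-choices, so it only handles simple grammars like this one).

```python
# Toy CFG: productions for non-terminals, plus a word lexicon.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["N"]],
    "VP": [["V", "NP"], ["V"]],
}
LEXICON = {"Det": {"the", "a"}, "N": {"dog", "cat"}, "V": {"chased", "sat"}}

def parse(symbol, tokens, i):
    """Return (tree, next_index) if `symbol` derives a prefix of tokens[i:], else None."""
    if symbol in LEXICON:
        if i < len(tokens) and tokens[i] in LEXICON[symbol]:
            return (symbol, tokens[i]), i + 1
        return None
    for production in GRAMMAR[symbol]:
        children, j = [], i
        for part in production:
            result = parse(part, tokens, j)
            if result is None:
                break
            subtree, j = result
            children.append(subtree)
        else:  # every part of the production matched
            return (symbol, children), j
    return None

tokens = "the dog chased a cat".split()
tree, end = parse("S", tokens, 0)
print(tree)   # nested (symbol, children) tuples form the parse tree
```

A full parse is one where `end == len(tokens)`; sentences the grammar cannot derive return `None`, which is the structural "grammaticality" signal the lab asks you to extract.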
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try parsing with a context-free grammar on WordNet text data in NLTK.
2. Try parsing on the large corpus dataset given below.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to evaluate term frequency (TF) on large text corpora and note its breaking point.
Description:
TF = (Number of occurrences of the term in the document) / (Total number of terms in the
document)
The TF value reflects the relative importance or prevalence of a term within a specific document. It
helps to identify which terms are more frequently used and potentially carry more significance or
relevance in the context of that document. However, term frequency alone does not consider the
significance of the term in the overall corpus.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
3. How can you calculate the term frequency of a specific term in a document using NLTK?
4. How can you count the occurrences of a specific term in a list of tokens using NLTK?
5. How do you normalize the term frequency to account for the document length in NLTK?
In-Lab:
1. Compute TF vectors on a large corpus in the NLTK library and identify why they cannot
capture the semantic information in the text data.
2. Investigate the above process more deeply on a text corpus of your choice to arrive at the
solution faster.
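One way to expose the limitation named in task 1 without a large corpus: vectorize two paraphrases that share no vocabulary. Their TF vectors are exactly orthogonal even though the meaning is nearly identical (the toy sentences are illustrative).

```python
from collections import Counter

# TF treats each word as an independent symbol, so paraphrases that share
# no words get orthogonal vectors: the failure this experiment probes.
a = "the film was great".split()
b = "that movie is excellent".split()

vocab = sorted(set(a) | set(b))

def tf(words):
    counts = Counter(words)
    return [counts[t] / len(words) for t in vocab]

va, vb = tf(a), tf(b)
dot = sum(x * y for x, y in zip(va, vb))
print(dot)  # 0.0: TF sees no relation at all between the paraphrases
```

On a real corpus the same effect is compounded by vocabulary size: the vectors grow with the number of unique words while staying blind to synonymy, which is the breaking point the aim refers to.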
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try to compare the dimensionality of TF and OHE; show through a program which is better.
2. Try to explain how TF fails on the large corpus dataset given below.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to evaluate the importance of IDF as an alternative to TF; IDF is an information-retrieval
measure that quantifies the importance or rarity of a term in a collection of documents.
Description:
The IDF of a term is calculated as the logarithm of the ratio between the total number of documents
in the collection and the number of documents that contain the term. The formula for IDF is as follows:
IDF = log(N / DF), N: Total number of documents in the collection, DF: Number of documents that
contain the term. The IDF value increases as the term becomes less frequent in the document
collection. It helps to identify terms that are relatively rare and potentially carry more important or
distinctive information. Terms with higher IDF scores are considered to have more discriminative
power.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
3. How can you calculate IDF for a specific term using Python and a given collection of
documents?
5. Can you handle the presence of stop words during IDF calculations in NLTK?
In-Lab:
1. Compute IDF on small text data and show how IDF is better than TF in the context of text
discrimination in documents of a corpus.
2. Use the WordNet dataset in NLTK and show how IDF beats TF as a text discriminator.
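The description's formula IDF = log(N / DF) can be checked on a three-document toy corpus before moving to WordNet; the documents below are illustrative.

```python
import math

# IDF = log(N / DF): a word in every document scores 0 (no discriminative
# power); a word in one document scores highest.
docs = [
    "the cat chased the mouse",
    "the stock market fell",
    "the cat slept",
]
N = len(docs)

def idf(term):
    df = sum(term in d.split() for d in docs)  # DF: documents containing term
    return math.log(N / df)

print(idf("the"))    # log(3/3) = 0.0
print(idf("stock"))  # log(3/1), the largest score in this corpus
```

This is the discrimination argument of task 1 in miniature: TF would rank "the" as the most important term of every document, while IDF correctly zeroes it out and promotes the rare terms.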
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to transform a collection of documents into a numerical representation with TF-IDF
vectors.
Description:
The TF-IDF value is computed by multiplying the term frequency (TF) of a term in a document by the
inverse document frequency (IDF) of the term. Each document is represented as a vector, where each
dimension corresponds to a unique term in the collection. The TF-IDF value for each term in the
document becomes the value of the corresponding dimension in the vector.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
This Section must contain at least 5 Descriptive type questions or Self-Assessment Questions which
help the student to understand the Program/Experiment that must be performed in the Laboratory
Session.
In-Lab:
1. Apply TF-IDF vectorization model in NLTK on a small set of text data and show the
representation is better than TF and IDF.
2. Convert the NLTK WordNet corpus into a TF-IDF data representation.
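A textbook-formula sketch of the vectorization in task 1, combining the TF and IDF computations of the previous two experiments. Note that scikit-learn's `TfidfVectorizer`, if used in the lab, applies a smoothed IDF variant, so its numbers will differ from these.

```python
import math
from collections import Counter

# TF-IDF = TF * IDF, per term, per document (illustrative toy corpus).
docs = ["the cat sat", "the dog barked", "the cat and the dog"]
N = len(docs)
vocab = sorted({w for d in docs for w in d.split()})

def tfidf_vector(doc):
    words = doc.split()
    counts = Counter(words)
    vec = []
    for t in vocab:
        tf = counts[t] / len(words)                   # term frequency
        df = sum(t in d.split() for d in docs)        # document frequency
        vec.append(tf * math.log(N / df))             # TF weighted by IDF
    return vec

for d in docs:
    print([round(x, 3) for x in tfidf_vector(d)])
```

The output shows why TF-IDF beats TF and IDF alone: "the" gets weight 0 in every vector (IDF at work), while document-specific words keep weights proportional to how often they occur (TF at work).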
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to evaluate the reasons TF-IDF vectors fail, and to characterize what that failure means
for representing text data in terms of semantics, context extraction, and corpus size.
Description:
TF-IDF does not capture the semantic meaning of words or the context in which they are used. It treats
each term independently, without considering their relationships within the document or across the
collection. This can lead to issues when dealing with tasks that require a deeper understanding of
language, such as sentiment analysis or question-answering. TF-IDF is influenced by document length,
as longer documents generally have higher term frequencies. This bias can result in longer documents
dominating the similarity or importance measures, overshadowing shorter and potentially relevant
documents. TF-IDF treats documents as bags of words, disregarding the order and context in which
the words appear. This can be problematic in tasks like text generation or language translation, where
word order and context play a crucial role.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
In-Lab:
1. Apply TF-IDF and evaluate the vectors to check their failure related to context and semantic
representation.
2. Show the reason for the failure of TF-IDF on large datasets such as WordNet in NLTK.
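The bag-of-words failure described above can be demonstrated in a few lines: reordering the words of a sentence leaves its TF-IDF vector unchanged, so the representation cannot distinguish "dog bites man" from "man bites dog" (toy corpus, textbook TF-IDF formula).

```python
import math
from collections import Counter

# TF-IDF is a bag-of-words model: word order and context are discarded.
docs = ["dog bites man", "man bites dog", "the sun rose"]
N = len(docs)
vocab = sorted({w for d in docs for w in d.split()})

def tfidf(doc):
    words = doc.split()
    counts = Counter(words)
    return [counts[t] / len(words) * math.log(N / sum(t in d.split() for d in docs))
            for t in vocab]

print(tfidf("dog bites man") == tfidf("man bites dog"))  # True: order is lost
```

This is the concrete evidence task 1 asks for: two sentences with opposite meanings map to the identical vector, so no downstream model fed TF-IDF features can tell them apart.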
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try normalizing the TF-IDF vectors converted from the text data and check whether they still fail.
2. Try TF-IDF on the large corpus dataset given below and use an ANN for classification.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to evaluate different distance-based techniques and their effectiveness for text-encoded
information such as TF-IDF.
Description:
Cosine Distance: It measures the cosine of the angle between two vectors in a high-dimensional space.
In NLP, it is commonly used to compute the similarity between documents represented as TF-IDF
vectors or word embeddings. Euclidean Distance: It measures the straight-line distance between two
points in Euclidean space. It is often employed to quantify the dissimilarity between word embeddings
or document vectors.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
4. What does a cosine distance of ‘0’ between two word vectors indicate?
In-Lab:
1. Apply different distance metrics to TF-IDF-vectorized text data and verify which captures the
closeness between words most effectively.
2. Use a large corpus and evaluate the effectiveness of the distance metrics defined in the
objective.
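The two metrics from the description, written out from their definitions and applied to toy TF-IDF-style vectors (`scipy.spatial.distance.cosine` and `.euclidean` compute the same quantities, if SciPy is available in the lab).

```python
import math

def cosine_distance(u, v):
    # 1 minus the cosine of the angle between the two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1 - dot / (norm_u * norm_v)

def euclidean_distance(u, v):
    # Straight-line distance between the two points.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

u = [0.4, 0.0, 0.2]   # toy TF-IDF-style vectors
v = [0.8, 0.0, 0.4]   # same direction as u, twice the magnitude

print(cosine_distance(u, v))     # ~0: only the angle matters to cosine
print(euclidean_distance(u, v))  # > 0: magnitude still differs
```

The contrast between the two printed values is the point of the comparison: cosine distance ignores document length (vector magnitude), which is usually what you want for TF-IDF text, while Euclidean distance penalizes it.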
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try to compare the Jaccard distance metric with the two previously used methods.
2. Try comparing all three distance metrics on the large corpus dataset given below.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to extract a piece of text by calculating similarities between the words using NLTK.
Description:
Word similarities are measures that quantify the degree of relatedness or similarity between
words based on their meanings, contexts, or semantic properties. Cosine similarity measures the
cosine of the angle between two vectors, which indicates the similarity between their directions in a
high-dimensional space. It is commonly used with word embeddings or TF-IDF representations to
compute word similarities.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
1. What is word similarity, and why is it important in natural language processing (NLP)?
2. Explain the concept of word embeddings and how they are used to measure word similarity.
3. What is cosine similarity, and how is it applied to compute word similarity using word
embeddings?
4. Describe the distributional hypothesis and how it relates to measuring word similarity.
5. How does WordNet contribute to measuring word similarity, and what are some common
similarity metrics used in WordNet?
In-Lab:
1. Extract a small piece of text from a tiny corpus using the NLTK word similarities model.
2. Use the WordNet dataset and apply word similarities to extract words close to at least 10 target words.
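A library-free sketch of similarity-based extraction for the tasks above: rank candidate words by cosine similarity to a query word. The 3-dimensional "embeddings" are invented for illustration; in the lab itself you would use WordNet similarities or trained word vectors.

```python
import math

# Toy word vectors: invented values, chosen so animals and vehicles cluster.
embeddings = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.0],
    "car":   [0.0, 0.9, 0.3],
    "truck": [0.1, 0.8, 0.4],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def nearest(word, k=2):
    # Rank every other word by similarity to the query and keep the top k.
    query = embeddings[word]
    ranked = sorted((w for w in embeddings if w != word),
                    key=lambda w: cosine(query, embeddings[w]), reverse=True)
    return ranked[:k]

print(nearest("cat"))  # "dog" ranks first: closest direction to "cat"
```

Repeating `nearest` over 10 or more query words gives exactly the extraction exercise of task 2, just with toy vectors in place of WordNet scores.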
• Procedure/Program:
This section is meant for the student to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution; present the results as tables, graphs, or visualizations.
• Analysis and Inferences:
This section is meant for the students to analyse their data and perform calculations; it includes questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try to design an experiment to compare different word-similarity metrics provided by NLTK's
WordNet, such as path similarity, Wu-Palmer similarity, or Leacock-Chodorow similarity.
2. Try investigating the impact of context on word similarity using NLTK on the large corpus dataset given below.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to analyse text using TF-IDF vectors and apply them to information retrieval.
Description:
Clean and preprocess the text data by removing any unnecessary elements like punctuation, stop
words, and special characters. Apply techniques such as tokenization, stemming, or lemmatization to
reduce words to their base form. Represent each document as a vector using the computed TF-IDF
values. Each dimension of the vector corresponds to a term in the corpus, and the value represents
the TF-IDF weight of the term in the document. This vectorization process creates TF-IDF vectors for
all documents in the corpus. Identify important terms or phrases in a document using the TF-IDF
vectors. High TF-IDF values indicate terms that are specific to a document and may carry important
information. Extracting these terms can help in tasks such as keyword extraction or named entity
recognition.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
In-Lab:
1. Apply TF-IDF vectors to a five-document text dataset and analyse the relationships between similar phrases by applying the cosine similarity metric.
2. Cluster texts with similar meaning using the TF-IDF vectors developed in the above objective.
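One possible sketch of these two tasks, assuming scikit-learn is available; the five documents are invented placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

# Five short placeholder documents: two about weather, two about
# football, one about cooking.
docs = [
    "it rained heavily and the sky was grey",
    "sunny weather with a clear blue sky",
    "the team won the football match",
    "a thrilling football game ended in a draw",
    "the recipe needs flour, eggs and sugar",
]

X = TfidfVectorizer(stop_words='english').fit_transform(docs)

sims = cosine_similarity(X)            # (5, 5) document-document similarities
print(sims.round(2))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_)                      # documents with shared terms share a label
```

Documents with no overlapping vocabulary have cosine similarity exactly zero, which is one of the TF-IDF limitations worth discussing in the Post-Lab.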
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try the same on the WordNet dataset in NLTK and comment on the limitations of TF-IDF vectors.
2. Try vectorizing with TF-IDF on the large corpus dataset given below and apply clustering.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to evaluate Zipf’s law on small and large corpus text data and use it as a corpus analysis
tool for building a text dataset specific to a university exam portal chatbot.
Description:
Zipf's law states that the frequency of any word in a text is inversely proportional to its rank in the
frequency table. The most common word will occur about twice as often as the second most common
word, three times as often as the third most common word, and so on. Zipf's law has been observed
in a wide variety of corpora, including natural language texts, computer code, and even the population
of cities. The law has been used in a variety of applications, including information retrieval, natural
language processing, and network analysis.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
1. Why does Zipf's law hold for a wide variety of data sets?
3. How can Zipf's law be used to improve the performance of machine learning algorithms?
5. Why is the frequency of the most frequent word approximately twice that of the second most frequent word?
In-Lab:
1. Apply Zipf's law to the Brown corpus in NLTK and show that it holds.
2. Apply Zipf's law to your own dataset of not more than 100 words and check whether it holds.
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try verifying Zipf's law on the exam answers you wrote during an in-semester exam.
2. Try Zipf's law on the large corpus dataset given below.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
Description:
Topic modelling is a type of statistical modelling used to discover the abstract "topics" that occur in a collection of documents. It is a frequently used text-mining tool for the discovery of hidden semantic structures in a text body. The "topics" produced by topic modelling techniques are clusters of similar words. A topic model captures this intuition in a mathematical framework.
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
4. Who decides the weights across the words for topic clustering?
In-Lab:
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
This section is meant for the students to write the program/procedure for the experiment.
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to compute PCA from scratch on a corpus of text data transformed using TF-IDF vectors and reduce its dimensionality. Use the dimensionally reduced features to reconstruct the original information and report the reconstruction error.
Description:
Principal Component Analysis (PCA) is a statistical procedure that uses an orthogonal
transformation to convert a set of observations of possibly correlated variables into a set of values of
linearly uncorrelated variables called principal components. PCA is a widely used technique in data
analysis for dimensionality reduction, feature extraction, and visualization. It can be used to reduce
the number of variables in a dataset while preserving as much of the variation in the data as possible.
This can be useful for making the data easier to visualize and interpret, and for improving the
performance of machine learning algorithms.
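A from-scratch PCA sketch along these lines, using a random matrix as a stand-in for TF-IDF features:

```python
import numpy as np

def pca_reduce(X, k):
    """PCA via eigendecomposition of the covariance matrix (from scratch)."""
    mu = X.mean(axis=0)
    Xc = X - mu                                   # centre the data
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]                      # top-k principal directions
    Z = Xc @ W                                    # reduced representation
    X_hat = Z @ W.T + mu                          # reconstruction in original space
    return Z, X_hat

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                      # stand-in for a TF-IDF matrix

Z, X_hat = pca_reduce(X, k=5)
print(np.abs(X - X_hat).max())                    # ~0: lossless when k = full rank

Z2, X_hat2 = pca_reduce(X, k=2)
err = np.linalg.norm(X - X_hat2) / np.linalg.norm(X)
print(round(err, 3))                              # relative reconstruction error
```

Swapping the random matrix for a dense TF-IDF matrix (e.g. `X.toarray()` from `TfidfVectorizer`) gives the experiment described in the aim.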
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
5. Which is the most widely used dimensionality reduction model for text analysis?
In-Lab:
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try building a small 100-word text dataset and apply PCA on its TF-IDF vectors. Train an ANN model on the dimensionally reduced features and verify the performance of the model.
2. Try the same on the large corpus dataset given below.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to implement SVD from scratch and apply it to reduce the dimensionality of text data.
Description:
SVD can be used to reduce the dimensionality of a document-term matrix while preserving as much
of the variation in the data as possible. This can be useful for making the data easier to visualize and
interpret, and for improving the performance of machine learning algorithms. SVD can also be used
to extract features from a document-term matrix. This can be useful for tasks such as topic modelling
and text classification. Latent semantic analysis (LSA) is a technique that uses SVD to identify the latent
semantic structures in a document-term matrix. This can be useful for tasks such as text clustering and
information retrieval.
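The truncated-SVD idea behind LSA can be sketched with NumPy on a toy document-term matrix (the counts below are invented):

```python
import numpy as np

# Toy document-term matrix (rows = documents, columns = term counts).
A = np.array([
    [2, 1, 0, 0],
    [1, 2, 0, 0],
    [0, 0, 1, 2],
    [0, 0, 2, 1],
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Truncated (rank-k) reconstruction: keep only the largest singular values.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.round(A_k, 2))

# The Frobenius-norm error of the rank-k approximation equals the energy
# in the discarded singular values (Eckart-Young theorem) -- which is
# exactly why SVD loses information when few singular values are kept.
err = np.linalg.norm(A - A_k)
print(round(err, 4), round(np.sqrt((s[k:] ** 2).sum()), 4))
```

In LSA, the rows of `U[:, :k] @ np.diag(s[:k])` serve as low-dimensional document representations for clustering or retrieval.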
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
3. Why does SVD lose information when only a small number of singular values are retained?
In-Lab:
1. Apply SVD from scratch on a text corpus and construct an analysis framework.
2. Compare SVD and PCA on the text data used earlier and conclude which has performed better.
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
This section is meant for the students to write the program/procedure for the experiment.
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to develop pseudo-code in Python for spam SMS classification using semantic representation (LSA) and to construct an SMS spam elimination algorithm with the LSA model.
Description:
SMS spam detection is the process of identifying and filtering out unwanted or malicious SMS
messages. Spam messages can be a nuisance, and they can also be dangerous, as they can contain
malware or phishing links.
The dataset is available at:
https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
2. What is the function of the t-SNE representation, and what is its role in the analysis of NLP pipelines?
3. Explain how polysemy, homonymy, and zeugma impact the outcome of a text-based NLP application.
In-Lab:
1. Download the data from the link in the description. Build an NLP pipeline for pre-processing the text in the dataset. Convert the text to a vector corpus.
2. Develop an SMS spam filter model with a Bayes classifier and test the trained model. Report the failure rate of the model.
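One possible sketch of such a filter, assuming scikit-learn is available; the eight messages below are invented stand-ins for the Kaggle SMS spam dataset, and a real experiment should report the failure rate on a held-out test split:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny placeholder messages standing in for the SMS spam dataset.
texts = [
    "win a free prize claim now", "urgent free cash offer click",
    "congratulations you won a lottery", "free entry in weekly draw",
    "are we still meeting for lunch", "see you at the lecture tomorrow",
    "can you send me the notes", "happy birthday have a great day",
]
labels = ["spam", "spam", "spam", "spam", "ham", "ham", "ham", "ham"]

# TF-IDF features feeding a (multinomial naive) Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["free prize offer click now"]))
print(model.predict(["shall we meet for lunch tomorrow"]))

# Failure rate = 1 - accuracy (here on the training set, for brevity only).
print("failure rate:", round(1 - model.score(texts, labels), 3))
```

To bring in the LSA representation from the aim, insert a `TruncatedSVD` step between the vectorizer and the classifier.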
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
This section is meant for the students to write the program/procedure for the experiment.
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.
Aim/Objective:
The aim is to develop an end-to-end NLP pipeline for sentiment analysis using a time-series model trained on Twitter sentiment data.
Description:
Sentiment analysis using RNNs can be implemented in a variety of ways. One common approach is to
use a Long Short-Term Memory (LSTM) network. LSTMs are a type of RNN that is specifically designed
to address the problem of long-term dependencies. This makes them well-suited for tasks such as
sentiment analysis, where it is important to be able to understand the context of the text.
https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment
Pre-Requisites:
1. https://pip.pypa.io/en/stable/installation/
2. https://packaging.python.org/en/latest/tutorials/installing-packages/
3. https://pypi.org/project/nltk/
4. https://www.tensorflow.org/install/pip
5. https://spacy.io/usage
6. https://pypi.org/project/gensim/
Pre-Lab:
In-Lab:
1. Download the data from the link in the description. Build an NLP pipeline for pre-processing the text in the dataset. Convert the text to a vector corpus.
2. Develop a sentiment analysis model using VADER and SentiWordNet. Report the results
using data visualization techniques.
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Post-Lab:
1. Try converting the above model into a sentiment analysis model using TensorFlow with LSTMs.
2. Try an LSTM model for text classification on the large corpus dataset given below.
https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter
• Procedure/Program:
This section is meant for the students to write the program/procedure for the experiment.
• Data and Results:
This section is meant for the students to collect and record the results generated during the program/experiment execution. Include instructions on how to present the results, such as creating tables, graphs, or visualizations.
• Analysis and Inference:
This section is meant for the students to analyse their data and perform calculations. Include questions or prompts to encourage critical thinking and interpretation of the data.
Evaluator MUST ask Viva-voce prior to signing and posting marks for each experiment.