
Course Code: 18CSE393T
Course Name: TEXT MINING
Course Category: E (Professional Elective)
L-T-P-C: 3-0-0-3

Pre-requisite Courses: Nil
Co-requisite Courses: Nil
Progressive Courses: Nil
Course Offering Department: CSE
Data Book / Codes / Standards: Nil

Course Learning Rationale (CLR): The purpose of learning this course is to:
CLR-1: Understand the fundamentals of text mining
CLR-2: Utilize text for prediction techniques
CLR-3: Understand the relevance between information retrieval and text mining
CLR-4: Understand the goals of information extraction
CLR-5: Analyze different case studies related to text mining

Each CLR is mapped against the Program Learning Outcomes (PLO 1-15), which include Engineering Knowledge; Design & Development; Analysis, Design, Research; Modern Tool Usage; Society & Culture; and Environment & Sustainability; together with the Level of Thinking (Bloom), Expected Proficiency (%), and Expected Attainment (%).

Course Learning Outcomes (CLO): At the end of this course, learners will be able to:
CLO-1: Acquire knowledge on fundamentals of text mining (Bloom Level 2; Expected Proficiency 80%; Expected Attainment 85%; PLO mapping: H)
CLO-2: Perform prediction from text and evaluate it (Bloom Level 2; Expected Proficiency 80%; Expected Attainment 80%; PLO mapping: H)
CLO-3: Perform document matching (Bloom Level 2; Expected Proficiency 80%; Expected Attainment 75%; PLO mapping: H)
CLO-4: Identify patterns and entities from text (Bloom Level 2; Expected Proficiency 75%; Expected Attainment 85%; PLO mapping: H, M, M)
CLO-5: Understand how text mining is implemented (Bloom Level 2; Expected Proficiency 80%; Expected Attainment 85%; PLO mapping: H)

Duration: 9 hours per unit; 5 units; 45 hours total. Each unit runs over nine sessions (S-1 to S-9) with two session learning outcomes (SLO-1, SLO-2) per session; topics below are listed in session order.

Unit 1 (9 hours): Overview of Text Mining; What is Special about Text Mining; Structured Data; Unstructured Data; Is Text Different from Numbers?; Types of Problems That Can Be Solved; Document Classification; Information Retrieval; Prediction and Evaluation; From Textual Information to Numerical Vectors; Collecting Documents; Document Standardization; Tokenization; Lemmatization; Inflectional Stemming; Stemming to a Root; Vector Generation for Prediction; Multiword Features.

Unit 2 (9 hours): Labels for the Right Answers; Feature Selection by Attribute Ranking; Sentence Boundary Determination; Part-of-Speech Tagging; Word Sense Disambiguation; Phrase Recognition; Named Entity Recognition; Parsing; Feature Generation; Using Text for Prediction; Recognizing that Documents Fit a Pattern; Document Classification; Learning to Predict from Text; Similarity and Nearest-Neighbor Methods; Document Similarity; Decision Rules; Decision Trees; Scoring by Probabilities.

Unit 3 (9 hours): Linear Scoring Methods; Evaluation of Performance; Estimating Current and Future Performance; Getting the Most from a Learning Method; Errors and Pitfalls in Big Data Evaluation; Graph Models for Social Networks; Information Retrieval and Text Mining; Keyword Search; Nearest-Neighbor Methods; Measuring Similarity; Shared Word Count; Word Count and Bonus; Cosine Similarity; Web-Based Document Search; Link Analysis; Document Matching; Inverted Lists; Evaluation of Performance.

Unit 4 (9 hours): Clustering Documents by Similarity; Similarity of Composite Documents; K-Means Clustering; Hierarchical Clustering; The EM Algorithm; Goals of Information Extraction; Finding Patterns and Entities from Text; Entity Extraction as Sequential Tagging; Tag Prediction as Classification; The Maximum Entropy Method; Linguistic Features and Encoding; Local Sequence Prediction Models; Global Sequence Prediction Models; Coreference and Relationship Extraction; Template Filling and Database Construction; Commercial Extraction Systems: Application; Criminal Justice: Application; Intelligence: Application.

Unit 5 (9 hours): Ideal Model of Data; Practical Data Sourcing; Prototypical Examples; Hybrid Example; Mixed Data in Standard Table Format; Case Study: Market Intelligence from the Web; Case Study: Lightweight Document Matching for Digital Libraries; Case Study: Generating Model Cases for a Help Desk Application; Case Study: Assigning Topics to News Articles; Case Study: E-mail Filtering; Case Study: Search Engines; Extracting Named Entities from Documents; Mining Social Media; Customized Newspapers; Emerging Directions; Different Ways of Collecting Samples; Learning from Unlabeled Data; Distributed Text Mining.
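The Unit 3 similarity and keyword-search topics (shared word count, cosine similarity, inverted lists) can be illustrated with a short sketch. The whitespace tokenizer and the function names here are simplified assumptions for teaching purposes, not the textbook's implementation:

```python
import math
from collections import Counter

def tokenize(text):
    # Simplified tokenization (Unit 1): lowercase and split on whitespace.
    return text.lower().split()

def shared_word_count(doc_a, doc_b):
    # Shared word count: number of distinct words common to both documents.
    return len(set(tokenize(doc_a)) & set(tokenize(doc_b)))

def cosine_similarity(doc_a, doc_b):
    # Represent each document as a word-count vector, then take the
    # cosine of the angle between the two vectors.
    va, vb = Counter(tokenize(doc_a)), Counter(tokenize(doc_b))
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def build_inverted_list(docs):
    # Inverted list: map each word to the IDs of documents containing it,
    # the core structure behind keyword search.
    index = {}
    for doc_id, text in enumerate(docs):
        for word in set(tokenize(text)):
            index.setdefault(word, []).append(doc_id)
    return index

a = "text mining finds patterns in text"
b = "data mining finds patterns in data"
print(shared_word_count(a, b))                 # prints 4
print(round(cosine_similarity(a, b), 3))       # prints 0.5
print(build_inverted_list([a, b])["mining"])   # prints [0, 1]
```

Both measures operate on the bag-of-words vectors produced in Unit 1 ("Vector Generation for Prediction"); cosine similarity normalizes for document length, which a raw shared word count does not.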

Learning Resources
1. Sholom M. Weiss, Nitin Indurkhya, Tong Zhang, Fundamentals of Predictive Text Mining, Springer.

Learning Assessment
Continuous Learning Assessment (CLA-1 to CLA-4) carries 50% weightage; the Final Examination carries 50%. All components are assessed in Theory mode; there is no Practice component.

Bloom's Level of Thinking            CLA-1 (10%)   CLA-2 (15%)   CLA-3 (15%)   CLA-4 (10%)#   Final Examination (50%)
Level 1: Remember, Understand        40%           30%           30%           30%            30%
Level 2: Apply, Analyze              40%           40%           40%           40%            40%
Level 3: Evaluate, Create            20%           30%           30%           30%            30%
Total                                100%          100%          100%          100%           100%

# CLA-4 can be drawn from any combination of these: Assignments, Seminars, Tech Talks, Mini-Projects, Case Studies, Self-Study, MOOCs, Certifications, Conference Papers, etc.

Course Designers
Experts from Industry: -
Experts from Higher Technical Institutions: -
Internal Experts: Dr. E. Poovammal, SRMIST; Mr. L. N. B. Srinivas, SRMIST; Mr. D. Vivek, SRMIST
