MLA NLP Lecture1
[Venn diagram: machine learning combined with domain expertise — see https://en.wikipedia.org/wiki/Data_science]
What is Machine Learning?
“Programming computers to learn from experience should
eventually eliminate the need for much of this detailed
programming effort”
Arthur Samuel (1959) – Computer Scientist
Classical Programming: Problem + Rules (if/else, etc.) → Answers
Machine Learning: Problem + Answers → ML Algorithms → Trained ML Models (Rules) → Answers
Some Important ML Terms
• Clustering: putting similar things together
• Anomaly Detection: finding uncommon things
(Both are unsupervised learning: data is provided without labels.)
Supervised vs. Unsupervised Learning

Supervised Learning: data is provided with the correct labels; the model learns by looking at these examples.
• Regression (Quantity) and Classification (Category)
• Algorithms: Linear, Logistic, Trees, SVM, KNN, Neural Net

Unsupervised Learning: data is provided without labels; the model finds patterns in data.
• Algorithms: K-Means, Collaborative Filtering, PCA
Supervised Learning: Regression
Data is provided with the correct labels; the model learns by looking at these examples. Regression predicts a quantity, e.g. house Price from features such as SqFootage.
Algorithms: Linear, Trees, SVM, KNN.

Label (Price)   Features
280.000         3       3292    14
210.030         2       2465    6
…               …       …       …
Supervised Learning: Classification
Data is provided with the correct labels; the model learns by looking at these examples. Classification predicts a category, e.g. Class 1 = star, Class 0 = not star, from features (Feature 1, Feature 2).
Algorithms: Logistic, Trees, SVM, KNN.

Label   Features
1       5       10<     750
0       0       >9      150
…       …       …       …
Unsupervised Learning: Clustering
Data is provided without labels; the model finds patterns in the data (Feature 1, Feature 2).
Algorithms: K-Means, Collaborative Filtering, PCA.

Features:
Age     Music       Books
21      Classical   Practical Magic
47      Jazz        The Great Gatsby
…       …           …
Class Imbalance
[Bar chart: star rating counts in millions for the Amazon review dataset, ratings 1–5]
• Number of samples per class is not equally distributed.
• The ML model may not work well for the infrequent classes.
• Examples: Fraud Detection, Anomaly Detection, Medical Diagnosis
Amazon review dataset: the number of 5-star reviews almost equals the total of the other 4 types of star reviews combined.
Class Imbalance
How do we address the class imbalance problem?
Training – Validation – Test Sets
The original dataset is split into three sets:
• Training Set: used to train the model.
• Validation Set: used for unbiased evaluation while training, tuning and validating the model (multiple times!).
• Test Set: used for the final evaluation of the model. The test set is not available to the model for learning; it is only used to ensure that the model generalizes well on new "unseen" data.
K-Fold Cross-Validation
Use K different holdout samples to validate the model, each time training with the remaining samples:
• Split the training dataset into K independent folds.
• Repeat the following K times:
  ▪ Set aside the kth fold of the data for validation.
  ▪ Train the model on the other folds, the training set.
  ▪ Test the model on the validation set.
• Average or combine the model performance metrics.
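The K-fold procedure above can be sketched in plain Python. This is a minimal illustration, not the lecture's notebook code: the "model" is a hypothetical stand-in that predicts the mean label of its training folds, and the metric is MSE.

```python
# Sketch of K-fold cross-validation (assumed stand-in model: predict the training mean).

def k_fold_indices(n_samples, k):
    """Split sample indices into k roughly equal, independent folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(y, k=5):
    """Repeat k times: hold out one fold, 'train' on the rest, score on the holdout."""
    folds = k_fold_indices(len(y), k)
    scores = []
    for i in range(k):
        val_idx = folds[i]                                        # kth fold: validation
        train_idx = [j for n, f in enumerate(folds) if n != i for j in f]  # the rest: training
        mean_pred = sum(y[j] for j in train_idx) / len(train_idx)  # "train" the stand-in model
        mse = sum((y[j] - mean_pred) ** 2 for j in val_idx) / len(val_idx)
        scores.append(mse)
    return sum(scores) / len(scores)                               # average the fold metrics
```

In practice a library helper (e.g. a cross-validation splitter) would replace `k_fold_indices`, but the loop structure is the same.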
Model Evaluation: Underfitting
Underfitting: Model is not good enough to describe the relationship
between the input data (x1, x2) and output y: {Class 1, Class 2}.
Model Evaluation: Overfitting
Overfitting: Model memorizes or imitates training data, and fails to
generalize well on new “unseen” data (test data).
Model Evaluation: Good Fit
Appropriate fitting: Model captures the general relationship between the
input data (x1, x2) and output y: {Class 1, Class 2}.
Model Evaluation
Regression Metrics
yᵢ: data values, ŷᵢ: predicted values, ȳ: mean value of the data values, n: number of data records
• Mean Squared Error (MSE) = (1/n) Σᵢ (yᵢ − ŷᵢ)²
• Root Mean Squared Error (RMSE) = √MSE
• R Squared (R²) = 1 − Σᵢ (yᵢ − ŷᵢ)² / Σᵢ (yᵢ − ȳ)²
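These three regression metrics are straightforward to compute directly; a minimal sketch in plain Python:

```python
import math

def mse(y_true, y_pred):
    """Mean Squared Error: average squared difference between data and predictions."""
    n = len(y_true)
    return sum((yi - pi) ** 2 for yi, pi in zip(y_true, y_pred)) / n

def rmse(y_true, y_pred):
    """Root Mean Squared Error: square root of the MSE."""
    return math.sqrt(mse(y_true, y_pred))

def r_squared(y_true, y_pred):
    """R^2: 1 minus (residual sum of squares / total sum of squares)."""
    y_mean = sum(y_true) / len(y_true)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y_true, y_pred))
    ss_tot = sum((yi - y_mean) ** 2 for yi in y_true)
    return 1 - ss_res / ss_tot
```

A perfect model gives MSE = RMSE = 0 and R² = 1; a model no better than predicting the mean gives R² = 0.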
Classification Metrics
Confusion matrix (Prediction vs. Actual):
• True Positive (TP): predicted 'Positive' when the actual is 'Positive'
• False Positive (FP): predicted 'Positive' when the actual is 'Negative'
• False Negative (FN): predicted 'Negative' when the actual is 'Positive'
• True Negative (TN): predicted 'Negative' when the actual is 'Negative'

Precision = TP / (TP + FP)    Recall = TP / (TP + FN)
F1 Score = 2 · (Precision · Recall) / (Precision + Recall)
F1 is low when one or both of Precision and Recall are low, and high when both Precision and Recall are high.
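These definitions translate directly into code; a minimal sketch for binary labels (1 = Positive, 0 = Negative):

```python
def classification_metrics(y_true, y_pred):
    """Compute precision, recall and F1 from binary labels (1 = Positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 1)
    precision = tp / (tp + fp)   # of the predicted positives, how many were right
    recall = tp / (tp + fn)      # of the actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1
```

A production version would also guard against zero denominators (no predicted or no actual positives), which this sketch omits for brevity.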
Introduction to Natural Language
Processing (NLP)
Some NLP Terms
• Corpus: a large collection of words or phrases, which can come from different
sources: documents, web sources, databases
▪ Common Crawl Corpus: web crawl data composed of over 5 billion web pages
(541 TB)
▪ Reddit Submission Corpus: publicly available Reddit submissions (42 GB)
▪ Wikipedia XML Data: complete copy of all Wikimedia wikis, in the form of
wikitext source and metadata embedded in XML. (500 GB)
▪ Etc.
Some NLP Terms
• Token: a smaller unit of text produced by splitting (tokenizing) a sentence.
Example:
Sentence: "I don't like eggs."
Tokens: "I", "do", "n't", "like", "eggs", "."
MLA-NLP-Lecture1-Text-Process.ipynb
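The notebook presumably uses a library tokenizer; as a minimal sketch, a single regular expression can reproduce the split above (the pattern and function name here are illustrative, not the lecture's actual code):

```python
import re

# Minimal tokenizer sketch: keeps the "n't" contraction as its own token,
# matches word runs, and separates punctuation characters.
TOKEN_PATTERN = re.compile(r"\w+(?=n't)|n't|\w+|[^\w\s]")

def tokenize(sentence):
    """Return the list of tokens found left-to-right in the sentence."""
    return TOKEN_PATTERN.findall(sentence)

print(tokenize("I don't like eggs."))
# ['I', 'do', "n't", 'like', 'eggs', '.']
```

Real tokenizers (e.g. NLTK's word tokenizer) handle many more contractions and edge cases than this single pattern.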
Text Vectorization
Bag of Words (BoW)
▪ The Bag of Words method converts text data into numbers.
▪ It does this by
• Creating a vocabulary from the words in all documents
• Calculating the occurrences of words:
o binary (present or not)
o word counts
o frequencies
Simple example using word counts:
"It is a dog." → 1 0 1 1 1 0 0 0 0
MLA-NLP-Lecture1-BOW.ipynb
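The two BoW steps (build a vocabulary, then count occurrences per document) can be sketched in a few lines of plain Python; the notebook itself likely uses a library vectorizer, so treat this as an illustration with a toy two-document corpus:

```python
from collections import Counter

def bag_of_words(documents):
    """Build a vocabulary from all documents, then vectorize each by word count."""
    # Step 1: vocabulary = sorted set of every word seen in any document
    vocab = sorted({word for doc in documents for word in doc.lower().split()})
    # Step 2: one count vector per document, aligned with the vocabulary order
    vectors = []
    for doc in documents:
        counts = Counter(doc.lower().split())
        vectors.append([counts[word] for word in vocab])
    return vocab, vectors

docs = ["it is a dog", "it is not a dog it is a cat"]
vocab, vectors = bag_of_words(docs)
```

Swapping the count `counts[word]` for `int(counts[word] > 0)` gives the binary (present or not) variant; dividing by the document length gives frequencies.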
K Nearest Neighbors (KNN)
K Nearest Neighbors (KNN) predicts new data points based on the K most similar records from a dataset.
What class does "?" belong to? Look at the K closest data points:
• Choose K (e.g., K = 3).
• Calculate the distances from "?" to the labeled points.
• Assign "?" the majority class among its K nearest neighbors.
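The nearest-neighbor vote can be sketched in a few lines of plain Python (the lecture's notebook likely uses a library implementation; this is an illustrative from-scratch version using Euclidean distance):

```python
import math
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every training sample
    dists = [math.dist(query, x) for x in X_train]
    # Indices of the k closest training points
    nearest = sorted(range(len(X_train)), key=lambda i: dists[i])[:k]
    # Majority class among those neighbors
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

Note that KNN does no real "training": all work happens at prediction time, which is one reason it gets expensive on large or high-dimensional datasets.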
KNN: Choosing K
What is the effect of K on the model?
• Low K (like K = 1): predictions based on only one data sample can be greatly impacted by noise in the data (outliers, mislabeling).
• Large K: more robust to noise, but the nearest "neighborhood" can get too inclusive, breaking the "locality", and a class with only a few samples in it will always be outvoted by the other classes.
• Rule of thumb for selecting K: K = √n, where n is the number of data samples.
KNN: Choosing the Distance Metric
Data samples are considered similar to each other if they are close to
each other, as determined by a specific distance metric.
How to choose the distance metric?
• Real-valued features:
• Similar types: p = 2, Euclidean
• Mixed types (lengths, ages, salaries): p = 1, Manhattan (taxi-cab)
• Binary-valued features: Hamming
• number of positions where the values of two vectors are different
• Boolean-valued features: Jaccard
• ratio of shared values to the total number of values of two vectors
• High dimensional feature space: cosine similarity
• the angle between two vectors (similarity irrespective of their sizes)
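The metrics listed above are all short formulas; a minimal sketch of each in plain Python (for real work you would use a library's distance functions instead):

```python
import math

def euclidean(a, b):            # p = 2: real-valued features of similar types
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):            # p = 1: mixed real-valued types (taxi-cab)
    return sum(abs(x - y) for x, y in zip(a, b))

def hamming(a, b):              # binary features: count of differing positions
    return sum(x != y for x, y in zip(a, b))

def jaccard_similarity(a, b):   # boolean features: shared 1s over union of 1s
    intersection = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return intersection / union

def cosine_similarity(a, b):    # high-dimensional: cosine of the angle, ignores magnitude
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```

Note that Jaccard and cosine are similarities (higher = more alike), while the first three are distances (lower = more alike), so they plug into KNN with opposite orderings.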
KNN: Curse of Dimensionality
With too many features, KNN becomes computationally expensive and
difficult to solve.
• Data closeness: in high dimensions, distances grow and points stop being meaningfully "close".
• Data sparsity: samples spread thinly across the high-dimensional feature space.
KNN: Feature Scaling
Features should be on the same scale when using KNN.
[Figure: the same query "?" with K = 1. Unscaled features: d1 = 5 and d2 = 2, so "?"'s nearest neighbor lies along d2. Scaled features: d1 = 0.05 and d2 = 0.2, so the nearest neighbor flips to the point along d1 — scaling changed the two distances by different amounts.]
Feature Scaling
Motivation: many algorithms are sensitive to features being on different
scales, such as metric-based algorithms (KNN, K-Means) and gradient
descent-based algorithms (regression, neural networks).
Note: tree-based algorithms (decision trees, random forests) do not have
this issue.
Common transforms:
• Standardization: x' = (x − mean) / std
• Min-max scaling: x' = (x − min) / (max − min)
MLA-NLP-Lecture1-KNN.ipynb
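The two transforms are one-liners per value; a minimal sketch in plain Python (library scalers additionally remember the training-set statistics so the same transform can be applied to new data):

```python
def min_max_scale(values):
    """Min-max scaling: map each value to [0, 1] via (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

def standardize(values):
    """Standardization: (x - mean) / std, giving mean 0 and unit variance."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((x - mean) ** 2 for x in values) / n) ** 0.5
    return [(x - mean) / std for x in values]
```

Each feature column is scaled independently, so a feature measured in thousands no longer dominates one measured in single digits when distances are computed.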