
Natural Language Processing

An Application of Deep Learning


What is NLP?
● Natural language processing is a field at the intersection of computer science, artificial
intelligence and linguistics.
● Goal: for computers to process or “understand” natural language in order to perform useful tasks
such as question answering.
● Applications of NLP:
○ Spell checking
○ Keyword search
○ Extracting information from internet sources
○ Sentiment analysis
○ Complex question answering
○ Language translation
○ Speech recognition
● Voice-enabled applications such as Alexa, Siri, and Google Assistant use
NLP and Machine Learning (ML) to answer our questions, add activities to
our calendars and call the contacts that we state in our voice commands.
How is NLP applied?
● Translating languages is more complex than a simple word-to-word
replacement method. Since each language has grammar rules, the
challenge of translating a text is to do so without changing its meaning
and style. Since computers do not understand grammar, they need a
process in which they can deconstruct a sentence, then reconstruct it in
another language in a way that makes sense.
● Speech recognition is a machine’s ability to identify and interpret phrases
and words from spoken language and convert them into a machine-
readable format. It uses NLP to allow computers to simulate human
interaction, and ML to respond in a way that mimics human responses.
● Sentiment analysis uses NLP and ML to interpret and analyze
emotions in subjective data like news articles and tweets. Positive,
negative, and neutral opinions can be identified to determine a
customer’s sentiment towards a brand, product, or service. Sentiment
analysis is used to gauge public opinion, monitor brand reputation,
and better understand customer experiences.
● Automatic text summarization is the task of condensing a piece of text
to a shorter version, by extracting its main ideas and preserving the
meaning of content. This application of NLP is used in news headlines,
result snippets in web search, and bulletins of market reports.
Advantages and Disadvantages of NLP
● Advantages:
○ Very efficient and less expensive than manual processing
○ Faster customer service for organizations
○ Easy to implement because many pre-trained models are already
available
● Disadvantages:
○ Training is time consuming
○ Not reliable all the time
● NLP is constantly evolving.
● With the recent popularity and success of word embeddings (low
dimensional, distributed representations), neural-based models have
achieved superior results on various language-related tasks as
compared to traditional machine learning models like SVM or logistic
regression.
● Word Embeddings: Distributional vectors, also called word
embeddings, are based on the so-called distributional hypothesis —
words appearing in similar contexts have similar meanings. Word
embeddings are pre-trained on a task where the objective is to predict a
word based on its context, typically using a shallow neural network.
● A wide variety of NLP tasks, such as sentiment analysis and modeling sentence
compositionality, are performed using these embeddings.
● One of the challenges with word embedding methods is when we want to
obtain vector representations for phrases such as “hot potato” or “Boston
Globe”. We can’t simply combine the individual word vector
representations since these phrases don’t represent the combination of
meaning of the individual words. And it gets even more complicated
when longer phrases and sentences are considered.
● Word2vec: is a technique for natural language processing published in
2013. The word2vec algorithm uses a neural network model to learn word
associations from a large corpus of text. Once trained, such a model can
detect synonymous words or suggest additional words for a partial
sentence.
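As an illustration, here is a minimal sketch of training word2vec embeddings with the gensim library (gensim, the toy corpus, and the hyperparameter values are illustrative choices, not something specified in these slides):

    from gensim.models import Word2Vec

    # Tiny tokenized corpus; a real model needs a large corpus of text.
    sentences = [
        ["nlp", "is", "a", "field", "of", "artificial", "intelligence"],
        ["word", "embeddings", "capture", "meaning", "from", "context"],
        ["similar", "words", "appear", "in", "similar", "contexts"],
    ]

    # vector_size is the embedding dimension; sg=1 selects the skip-gram objective.
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

    vector = model.wv["word"]                      # 50-dimensional embedding for "word"
    print(model.wv.most_similar("word", topn=3))   # nearest neighbours in embedding space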
● To overcome these drawbacks of word-level embeddings, convolutional neural networks (CNNs) are used.
● A CNN is a neural-based approach that applies a feature function to
constituent words or n-grams to extract
higher-level features. The resulting abstract features have been effectively
used for sentiment analysis, machine translation, and question
answering, among other tasks.
● The goal of early CNN-based methods was to transform words into a vector
representation via a lookup table, which resulted in a primitive word
embedding approach whose weights are learned during the training of the
network.
● In order to perform sentence modeling with a
basic CNN, sentences are first tokenized into
words, which are further transformed into a
word embedding matrix (i.e., the input embedding
layer) of dimension d.
● Then, convolutional filters are applied on this
input embedding layer which consists of
applying a filter of all possible window sizes to
produce what’s called a feature map.
● This is then followed by a max-pooling
operation which applies a max operation on
each filter to obtain a fixed length output and
reduce the dimensionality of the output.
● One of the shortcomings of basic CNNs is
their inability to model long-distance
dependencies, which is important for various
NLP tasks. To address this problem, CNNs have
been coupled with time-delayed neural
networks (TDNN) which enable larger contextual
range at once during training.
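A minimal sketch of this sentence-modeling CNN, written with tf.keras (the vocabulary size, sequence length, embedding dimension d, filter window sizes, and the sentiment-style output below are illustrative assumptions):

    import tensorflow as tf
    from tensorflow.keras import layers

    vocab_size, seq_len, d = 10000, 50, 128        # d = embedding dimension

    inputs = layers.Input(shape=(seq_len,), dtype="int32")   # token ids for one sentence
    x = layers.Embedding(vocab_size, d)(inputs)              # input embedding layer

    # One convolutional filter bank per window size; each produces a feature map.
    pooled = []
    for window in (3, 4, 5):
        fmap = layers.Conv1D(filters=100, kernel_size=window, activation="relu")(x)
        pooled.append(layers.GlobalMaxPooling1D()(fmap))     # max-pooling -> fixed-length vector

    features = layers.Concatenate()(pooled)
    outputs = layers.Dense(1, activation="sigmoid")(features)   # e.g. sentiment polarity

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])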
NLP using RNN
● The main strength of an RNN is the capacity to memorize the results of
previous computations and use that information in the current computation.
This makes RNN models suitable to model context dependencies in inputs of
arbitrary length so as to create a proper composition of the input. RNNs have
been used to study various NLP tasks such as machine translation, image
captioning, and language modeling, among others.
● Compared with a CNN model, an RNN model can be similarly effective or
even better at specific natural language tasks, but not necessarily superior. This
is because they model very different aspects of the data, which only makes
them effective depending on the semantics required by the task at hand.
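For comparison with the CNN sketch above, here is a minimal tf.keras sketch of an RNN (here an LSTM) text classifier; the sizes and the sentiment-style output are again illustrative assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers

    vocab_size, d = 10000, 128

    model = tf.keras.Sequential([
        layers.Embedding(vocab_size, d),        # token ids -> word embeddings
        layers.LSTM(64),                        # hidden state carries context across time steps
        layers.Dense(1, activation="sigmoid"),  # e.g. sentiment polarity
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])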
Speech Recognition
The first step in speech recognition is obvious — we need to feed sound waves into a computer.

Sound waves are one-dimensional. At every moment in time, they have a single value based on the
height of the wave. Let’s zoom in on one tiny part of the sound wave and take a look:
To turn this sound wave into numbers, we simply record the height of the wave at equally-spaced
points:

This is called sampling. We are taking a reading thousands of times a second and recording a number
representing the height of the sound wave at that point in time.
“CD Quality” audio is sampled at 44.1 kHz (44,100 readings per second). But for speech recognition, a
sampling rate of 16 kHz (16,000 samples per second) is enough to cover the frequency range of human
speech.
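A minimal sketch of this sampling step using the librosa library (librosa and the file name "speech.wav" are assumptions for illustration):

    import librosa

    # Load the recording as an array of amplitudes, resampled to 16 kHz.
    samples, sample_rate = librosa.load("speech.wav", sr=16000, mono=True)

    print(sample_rate)     # 16000 readings per second
    print(samples.shape)   # one amplitude value per sample
    print(samples[:10])    # the first few wave heights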
Pre-processing our Sampled Sound Data
We now have an array of numbers with each number representing the sound wave’s amplitude at
1/16,000th of a second intervals.
We could feed these numbers right into a neural network. But trying to recognize speech patterns by
processing these samples directly is difficult. Instead, we can make the problem easier by doing some
pre-processing on the audio data.
Let’s start by grouping our sampled audio into 20-millisecond-long chunks. Here’s our first 20
milliseconds of audio:
Plotting those numbers as a simple line graph gives us a rough approximation of the original sound
wave for that 20 millisecond period of time:

This recording is only 1/50th of a second long. But even this short recording is a complex mish-mash
of different frequencies of sound. There are some low sounds, some mid-range sounds, and even some
high-pitched sounds sprinkled in. But taken all together, these different frequencies mix together to
make up the complex sound of human speech.
To make this data easier for a neural network to process, we are going to break apart this complex
sound wave into its component parts. We’ll break out the low-pitched parts, the next-lowest-pitched
parts, and so on. Then by adding up how much energy is in each of those frequency bands (from low
to high), we create a fingerprint of sorts for this audio snippet.
You can see the mix of frequencies that makes up our 20 millisecond sound snippet in this fingerprint.
If we repeat this process on every 20 millisecond chunk of audio, we end up with a
spectrogram (each column from left-to-right is one 20ms chunk):

A neural network can find patterns in this kind of data more easily than raw sound waves.
So this is the data representation we’ll actually feed into our neural network.
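A minimal numpy sketch of this pre-processing: split the 16 kHz samples into 20 millisecond chunks and measure the energy in each frequency band with an FFT (numpy, librosa, and the file name are assumptions; real systems often use mel-scaled filter banks instead of a raw FFT):

    import numpy as np
    import librosa

    samples, sample_rate = librosa.load("speech.wav", sr=16000, mono=True)

    chunk_size = int(0.020 * sample_rate)                 # 20 ms -> 320 samples per chunk
    num_chunks = len(samples) // chunk_size
    chunks = samples[:num_chunks * chunk_size].reshape(num_chunks, chunk_size)

    # FFT magnitude of each chunk = energy per frequency band, from low to high.
    spectrogram = np.abs(np.fft.rfft(chunks, axis=1)).T   # one column per 20 ms chunk
    print(spectrogram.shape)                              # (frequency bands, time chunks)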
Recognizing Characters from Short Sounds
Now that we have our audio in a format that’s easy to process, we will feed it into a deep neural
network. The input to the neural network will be 20 millisecond audio chunks. For each little audio
slice, it will try to figure out the letter that corresponds to the sound currently being spoken.
We’ll use a recurrent neural network — that is, a neural network that has a memory that influences future
predictions. That’s because each letter it predicts should affect the likelihood of the next letter it will predict
too. For example, if we have said “HEL” so far, it’s very likely we will say “LO” next to finish out the word
“Hello”. It’s much less likely that we will say something unpronounceable next like “XYZ”. So having that
memory of previous predictions helps the neural network make more accurate predictions going forward.
After we run our entire audio clip through the neural network (one chunk at a time), we’ll end up with a
mapping of each audio chunk to the letters most likely spoken during that chunk. Here’s what that mapping
looks like for me saying “Hello”:
Our neural net is predicting that one likely thing I said was “HHHEE_LL_LLLOOO”. But it also
thinks that it was possible that I said “HHHUU_LL_LLLOOO” or even “AAAUU_LL_LLLOOO”.
First, we’ll replace any repeated characters with a single character, and then we’ll remove any blanks:
● HE_L_LO becomes HELLO
● HU_L_LO becomes HULLO
● AU_L_LO becomes AULLO
That leaves us with three possible transcriptions — “Hello”, “Hullo” and “Aullo”. If you say them out
loud, all of these sound similar to “Hello”.
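A minimal Python sketch of this clean-up step (the underscore stands in for the network’s blank symbol, as in CTC-style decoding):

    from itertools import groupby

    def collapse(prediction: str, blank: str = "_") -> str:
        deduped = "".join(char for char, _ in groupby(prediction))  # squeeze repeated characters
        return deduped.replace(blank, "")                           # then drop the blanks

    for raw in ["HHHEE_LL_LLLOOO", "HHHUU_LL_LLLOOO", "AAAUU_LL_LLLOOO"]:
        print(raw, "->", collapse(raw))   # HELLO, HULLO, AULLO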

The trick is to combine these pronunciation-based predictions with likelihood scores based on a large
database of written text (books, news articles, etc.). You throw out transcriptions that seem the least
likely to be real and keep the transcription that seems the most realistic.
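A minimal sketch of that rescoring idea; the tiny word_counts table below is a made-up stand-in for word statistics gathered from a large text corpus:

    # Hypothetical counts standing in for statistics from books and news articles.
    word_counts = {"hello": 1_200_000, "hullo": 4_000, "aullo": 0}

    candidates = ["HELLO", "HULLO", "AULLO"]
    best = max(candidates, key=lambda word: word_counts.get(word.lower(), 0))
    print(best)   # HELLO, the transcription most likely to appear in real text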

So we’ll pick “Hello” as our final transcription instead of the others. Done!
Thank you
Introduction to Autoencoders
Autoencoders
An autoencoder is a neural network that is trained to attempt to
copy its input to its output. Internally, it has a hidden layer h that
describes a code used to represent the input. The network may be
viewed as consisting of two parts: an encoder function h = f (x)
and a decoder that produces a reconstruction r = g(h).

Traditionally, autoencoders were used for dimensionality
reduction or feature learning.

Encoder – transforms the high-dimensional input into a short,
compact code.
Decoder – transforms the short code back into a high-dimensional
reconstruction of the input.
Undercomplete Autoencoders
An autoencoder whose code dimension is less than the input
dimension is called undercomplete. Learning an undercomplete
representation forces the autoencoder to capture the most salient
features of the training data.

The learning process is described simply as minimizing a loss
function L(x, g(f(x))).

If the encoder and decoder are allowed too much capacity, the
autoencoder can learn to perform the copying task without
extracting useful information about the distribution of the data.
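A minimal sketch of an undercomplete autoencoder in tf.keras, with a 784-dimensional input (for example a flattened 28x28 image) squeezed through a 32-dimensional code; the sizes and the mean-squared-error loss are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers

    input_dim, code_dim = 784, 32                           # code_dim < input_dim -> undercomplete

    x = layers.Input(shape=(input_dim,))
    h = layers.Dense(code_dim, activation="relu")(x)        # encoder: h = f(x)
    r = layers.Dense(input_dim, activation="sigmoid")(h)    # decoder: r = g(h)

    autoencoder = tf.keras.Model(x, r)
    autoencoder.compile(optimizer="adam", loss="mse")       # minimizes L(x, g(f(x)))
    # autoencoder.fit(train_x, train_x, epochs=10)          # trained to copy its input (train_x is hypothetical)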
Regularized Autoencoders
A similar problem arises in the overcomplete case, in which the hidden code has dimension greater than the input. In these
cases, even a linear encoder and linear decoder can learn to copy the input to the output without
learning anything useful about the data distribution.

Ideally, one could train any architecture of autoencoder successfully, choosing the code dimension
and the capacity of the encoder and decoder based on the complexity of distribution to be modeled.
Regularized autoencoders provide the ability to do so. Rather than limiting the model capacity by
keeping the encoder and decoder shallow and the code size small, regularized autoencoders use a
loss function that encourages the model to have other properties besides the ability to copy its input
to its output.
Sparse Autoencoders
◦ A sparse autoencoder tries to ensure that its hidden neurons are inactive most of the time.
◦ A sparse autoencoder is simply an autoencoder whose training criterion involves a sparsity penalty Ω(h) on the
code layer h, in addition to the reconstruction error:
Training loss -> L(x, g(f(x))) + Ω(h)
where L(x, g(f(x))) is the regular reconstruction loss and Ω(h) is the penalty that keeps activations close to zero.

◦ Sparse autoencoders are typically used to learn features for another task such as classification.
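A minimal tf.keras sketch of a sparse autoencoder: an L1 activity regularizer on the code layer plays the role of the sparsity penalty Ω(h), pushing most activations toward zero; the sizes and the 1e-4 penalty weight are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers, regularizers

    input_dim, code_dim = 784, 64

    x = layers.Input(shape=(input_dim,))
    h = layers.Dense(code_dim, activation="relu",
                     activity_regularizer=regularizers.l1(1e-4))(x)   # adds Ω(h) to the loss
    r = layers.Dense(input_dim, activation="sigmoid")(h)

    sparse_autoencoder = tf.keras.Model(x, r)
    sparse_autoencoder.compile(optimizer="adam", loss="mse")          # total loss: L(x, g(f(x))) + Ω(h)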
Denoising autoencoders
Rather than adding a penalty Ω to the cost function, we can obtain an autoencoder
that learns something useful by changing the reconstruction error term of the cost
function. Traditionally, autoencoders minimize some function L(x, g(f(x)))

where L is a loss function penalizing g(f (x)) for being dissimilar from x, such as
the L^2 norm of their difference. This encourages g ◦ f to learn to be merely an
identity function if they have the capacity to do so. A denoising autoencoder or
DAE instead minimizes L(x, g(f(x˜)))

where x˜ is a copy of x that has been corrupted by some form of noise. Denoising
autoencoders must therefore undo this corruption rather than simply copying their
input.
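A minimal tf.keras sketch of a denoising autoencoder: a GaussianNoise layer corrupts x into x˜ during training, while the loss compares the reconstruction with the clean input x; the sizes and the noise level are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers

    input_dim, code_dim = 784, 32

    x = layers.Input(shape=(input_dim,))
    x_tilde = layers.GaussianNoise(0.2)(x)                   # corrupted copy x~ (only active during training)
    h = layers.Dense(code_dim, activation="relu")(x_tilde)   # encoder f(x~)
    r = layers.Dense(input_dim, activation="sigmoid")(h)     # decoder g(f(x~))

    dae = tf.keras.Model(x, r)
    dae.compile(optimizer="adam", loss="mse")                # minimizes L(x, g(f(x~)))
    # dae.fit(train_x, train_x, epochs=10)                   # targets are the clean inputs (train_x is hypothetical)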
