RNN Lecture


Saket Anand

Multilayer Neural Networks

Structure      Types of Decision Regions
Single-Layer   Half plane bounded by a hyperplane
Two-Layer      Convex open or closed regions
Three-Layer    Arbitrary (complexity limited by the number of nodes)

[Figure: each structure illustrated on the Exclusive-OR problem, on classes with meshed regions, and with its most general region shapes]

Convolutional Neural Network: Key Idea

Exploit

1. Structure

2. Local Connectivity

3. Parameter Sharing

To Give

1. Translation Invariance

2. Minor Distortion and Scale Invariance

3. Occlusion Invariance

Handling sequences through NNs

• How do we capture sequential information using a neural network?

• Traditional feed-forward networks assume all inputs are independent of each other.

• But in a sequence we understand each word based on our understanding of previous words.

• For tasks like language modeling, machine translation, etc., assuming that there is no dependency between inputs is a bad idea.

Handling sequences through NNs

[Figure: one network per time step: inputs x_{t-1}, x_t, x_{t+1}; hidden states s_{t-1}, s_t, s_{t+1}; outputs o_{t-1}, o_t, o_{t+1}; separate weights W_{t-1}, W_t, W_{t+1} at each step]

Handling sequences through NNs

• Separate weights at every time step mean too many parameters.

• Tie the weights across time steps, as with parameter sharing in Convolutional Neural Networks.

[Figure: the same network with weights U (input to hidden), W (hidden to hidden), V (hidden to output) shared across all time steps]

Recurrent Neural Network

[Figure: the unrolled network (left) collapses into a single recurrent cell (right): input x_t, hidden state s_t with a self-loop through W, output o_t, and shared weights U, W, V]

Recurrent Neural Network

• RNNs are called recurrent because they perform the same task for every element of a sequence.

• The output at each step depends on the computations calculated so far.

Recurrent Neural Network - Unrolling

• Unrolling/unfolding means that we write out the network for the complete sequence. For example, if the RNN is used for language modeling, then a sequence will contain N words and it would be unrolled into an N-layer neural network, one layer for each word.

[Figure: the recurrent cell (left) unrolled into N copies, with inputs x_1 … x_N, hidden states s_1 … s_N, outputs o_1 … o_N, and weights U, W, V shared by every copy]

Notation

• x_t is the input at time step t.

• s_t is the hidden state at time step t. It's the "memory" of the network.

• s_t is calculated based on the previous hidden state and the input at the current step: s_t = f(U x_t + W s_{t-1}).

Notation (contd.)

• s_{-1}, which is required to calculate the first hidden state, is typically initialized to all zeros.
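The recurrence above can be sketched in NumPy. This is a minimal illustration, not the lecture's code: tanh as the nonlinearity f, a softmax output layer (introduced later in these slides), and all dimensions are assumptions for the toy example.

```python
import numpy as np

def rnn_forward(xs, U, W, V):
    """Run a vanilla RNN over a sequence of input vectors.

    U: input-to-hidden weights,  shape (hidden, input)
    W: hidden-to-hidden weights, shape (hidden, hidden)
    V: hidden-to-output weights, shape (output, hidden)
    """
    s = np.zeros(W.shape[0])          # s_{-1} initialized to all zeros
    states, outputs = [], []
    for x in xs:
        s = np.tanh(U @ x + W @ s)    # s_t = f(U x_t + W s_{t-1}), with f = tanh
        z = V @ s
        y = np.exp(z - z.max())       # softmax(V s_t), numerically stabilized
        outputs.append(y / y.sum())
        states.append(s)
    return states, outputs

# Toy usage with random weights.
rng = np.random.default_rng(0)
U, W, V = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), rng.normal(size=(5, 4))
states, outputs = rnn_forward([rng.normal(size=3) for _ in range(6)], U, W, V)
```

Note how the same U, W, V are reused at every step — this is the weight tying discussed earlier.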

RNN – Model

• The main feature of an RNN is its hidden state, which captures information about what happened in all the previous time steps of a sequence.

• Unlike a traditional deep network, an RNN shares the same parameters across all steps, performing the same task at each step. This reduces the number of parameters to learn.

• Depending upon the task, we may want an output at each time step or just one final output at the last time step.

RNN Modeling Based on Input/Output

[Figure: RNN input/output configurations illustrated by example tasks, including image captioning, sentiment analysis, translation, speech, video, and house-number reading]

Sequential Processing of Non-sequential Data

[Figure: an RNN sequentially reads house numbers from an image, processing non-sequential data step by step]

Training: Backpropagation Through Time

• The basic equations of the RNN are

  s_t = f(U x_t + W s_{t-1})
  y'_t = softmax(V s_t)

• The cross-entropy loss over the sequence is

  E(y, y') = -Σ_t y_t log(y'_t)

• y_t is the ground truth word at time step t, and y'_t is the predicted word.

Backpropagation Through Time (BPTT)

• Gradients of the error are calculated w.r.t. the parameters U, V and W, which are updated using stochastic gradient descent (or mini-batch or any other variant).

• Gradients are summed up over the time steps for one training example:

  ∂E/∂W = Σ_t ∂E_t/∂W

• Because the chain rule is applied in the backward direction through the unrolled network, this is the backpropagation algorithm, hence called backpropagation through time (BPTT).
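The summed-gradient idea can be sketched for the output weights V, where no chain through earlier states is needed. A minimal NumPy sketch under assumed dimensions; the gradients for U and W additionally require backpropagating through earlier hidden states and are omitted for brevity:

```python
import numpy as np

def loss_and_grad_V(xs, ys, U, W, V):
    """Total cross-entropy loss over a sequence and its gradient w.r.t. V,
    summed across time steps: dE/dV = sum_t dE_t/dV."""
    s = np.zeros(W.shape[0])
    dV = np.zeros_like(V)
    loss = 0.0
    for x, y in zip(xs, ys):            # y is a one-hot ground-truth vector
        s = np.tanh(U @ x + W @ s)
        z = V @ s
        p = np.exp(z - z.max()); p /= p.sum()
        loss += -float(y @ np.log(p))   # E_t = -sum_k y_t[k] log y'_t[k]
        dV += np.outer(p - y, s)        # softmax+CE gives dE_t/dz_t = y'_t - y_t
    return loss, dV

# Toy usage with random weights and random one-hot targets.
rng = np.random.default_rng(1)
U, W, V = rng.normal(size=(3, 2)), rng.normal(size=(3, 3)), rng.normal(size=(4, 3))
xs = [rng.normal(size=2) for _ in range(5)]
ys = [np.eye(4)[int(rng.integers(4))] for _ in range(5)]
loss, dV = loss_and_grad_V(xs, ys, U, W, V)
```

A finite-difference check on any single entry of V confirms the summed analytic gradient.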

RNN Example: Character-Level Language

Model

RNN Example: Word-Level Language Model

RNN Example: Sentiment Classification

RNN Example: Machine Translation

Long Term Dependencies

I grew up in France… I speak fluent French

[Figure: the unrolled RNN over the sentence, with shared weights U, W, V; information from "France" must survive many time steps before "French" is predicted]

Difficulties involved in BPTT

• RNNs trained with BPTT have difficulty learning long-term dependencies due to what is called the vanishing gradient problem.

• When there are many hidden layers, the error gradient weakens as it moves from the back of the network to the front, because the derivative of the sigmoid decays toward zero at its extremes.

• The updates as you move to the front of the network therefore contain less information.

Difficulties involved in BPTT (Cont.)

• The problem exists in CNNs too, but RNNs amplify it: the effective number of layers traversed by backpropagation grows dramatically with sequence length.

• This makes it hard for the network to learn correlations between temporally distant events.
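The geometric decay can be seen in a toy numeric sketch (made-up numbers, purely illustrative): the sigmoid derivative is at most 0.25, and backpropagating through T time steps multiplies roughly T such factors together.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)) peaks at 0.25 (at x = 0) and
# decays toward zero in saturation. One derivative factor per step back in
# time shrinks the gradient signal geometrically with sequence length.
grad = 1.0
for _ in range(20):
    a = sigmoid(2.0)           # a unit operating slightly into saturation
    grad *= a * (1.0 - a)      # multiply by sigmoid'(x) = a(1-a) each step
```

After only 20 steps the surviving gradient factor is vanishingly small.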

Some Partial Solutions

• Vanishing gradients can be partially solved by properly initializing the weight matrices (U, V, W) or performing regularization.

• Using ReLU activations in place of sigmoid/tanh is a more preferred solution (typically done in deep CNNs).

Solution Sketch

• What could be the simplest solution to handle vanishing/exploding gradients in long-term dependencies?

[Figure: the LSTM cell with its Forget, Insert (input), and Recurrent (output) gates: three sigmoid (σ) layers and a tanh layer controlling pointwise multiplications (*) and an addition (+) on the path from x_t to h_t]

Long Short-Term Memory

LSTM Architecture

• Each line carries an entire vector.

• Merging lines denote concatenation; a forking line denotes its content being copied, with the copies going to different locations.

LSTM Networks

• LSTMs are explicitly designed to avoid the long-term dependency problem.

• Like all RNNs they form a chain of repeating modules, but with a more complex structure inside the repeating module.

Unrolling LSTMs

[Figure: three LSTM cells chained in time: each cell takes x_t together with the previous cell's hidden state h_{t-1} and cell state c_{t-1}, and passes h_t and c_t onward]

LSTM Networks

[Figure: anatomy of one LSTM cell: the cell state runs c_{t-1} → c_t along the top through a pointwise multiply (*) and add (+); the gates f_t, i_t, o_t (σ layers) and the candidate c̃_t (tanh layer) are computed from h_{t-1} and x_t; the output is h_t]

ResNet Analogy

How does the LSTM cell work? (Cont.)

• The key to LSTMs is the cell state, the horizontal line running through the top of the cell.

• This line runs straight through the entire chain, with only minor linear interactions along the way.

• The LSTM has the ability to add or remove information to the cell state, regulated by gates.

• Gates consist of a sigmoid neural network layer and a pointwise multiplication operation.

Gate

• The sigmoid layer outputs numbers between zero and one, representing how much of each component should be let through.

• A value of zero means "let nothing through," while a value of one means "let everything through!"
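The gating idea can be seen in isolation with made-up numbers (a toy sketch, not from the slides): a sigmoid activation near 1 lets its component through, near 0 blocks it, and 0.5 lets half through.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

info = np.array([2.0, -1.0, 3.0])             # some vector of information
gate = sigmoid(np.array([10.0, 0.0, -10.0]))  # gate activations: ~1, 0.5, ~0
passed = gate * info                          # pointwise: keep / halve / block
```

The first component passes almost unchanged, the second is halved, and the third is almost entirely blocked.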

LSTM Operations: Forget

• The first step is to decide what information to throw away from the cell state.

• A sigmoid layer called the "forget gate layer" makes this decision.

• It looks at the past state output h_{t-1} and the current input x_t, and outputs a number between 0 and 1 telling how much to keep:

  f_t = σ(W_f · [h_{t-1}, x_t] + b_f)

LSTM Operations: Input/Insert

• The next step is to decide what new information should be stored in the cell state.

• First, a sigmoid layer called the "input gate layer" decides which values should be updated, i_t.

• Next, a tanh layer creates a vector of new candidate values, c̃_t, that could be added to the state.

  i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
  c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c)

LSTM Operations: Update

• The old cell state c_{t-1} is updated into the new cell state c_t.

• The old state is multiplied by the forget gate output f_t.

• The input gate output i_t is multiplied with the candidate values c̃_t, and the result is added to the product above.

• The result of these computations is the new cell state c_t:

  c_t = f_t * c_{t-1} + i_t * c̃_t

LSTM Operations: Output/Recurrent

• The final step is to decide what to output.

• The output is based on the current cell state c_t.

• First, a sigmoid layer decides which parts of the cell state are going to be output.

• Then the cell state is passed through a tanh layer.

• That result is multiplied by the output of the sigmoid gate:

  o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
  h_t = o_t * tanh(c_t)
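The four operations (forget, input, update, output) fit together into one cell update. A minimal NumPy sketch of a single step; the dimensions, random weights, and function names are assumptions for illustration:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, Wf, Wi, Wc, Wo, bf, bi, bc, bo):
    """One LSTM time step, following the four gate equations above.
    Every weight matrix acts on the concatenation [h_{t-1}, x_t]."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    hx = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ hx + bf)          # forget gate f_t
    i = sigmoid(Wi @ hx + bi)          # input gate i_t
    c_tilde = np.tanh(Wc @ hx + bc)    # candidate values c~_t
    c = f * c_prev + i * c_tilde       # c_t = f_t * c_{t-1} + i_t * c~_t
    o = sigmoid(Wo @ hx + bo)          # output gate o_t
    h = o * np.tanh(c)                 # h_t = o_t * tanh(c_t)
    return h, c

# Toy usage: run a few steps with random weights.
rng = np.random.default_rng(2)
H, D = 3, 2
Ws = [rng.normal(size=(H, H + D)) for _ in range(4)]
bs = [np.zeros(H) for _ in range(4)]
h, c = np.zeros(H), np.zeros(H)
for _ in range(4):
    h, c = lstm_step(rng.normal(size=D), h, c, *Ws, *bs)
```

Note that the cell state c can grow without bound across steps, while h stays bounded because it passes through tanh and the output gate.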

LSTM Example

Variants of LSTMs

• No Input Gate (NIG)

• No Forget Gate (NFG)

• No Output Gate (NOG)

• No Input Activation Function (NIAF)

• No Output Activation Function (NOAF)

• Peepholes

• Coupled Input and Forget Gate (CIFG)

• Full Gate Recurrence (FGR).

Peephole

[Figure: peephole LSTM: the gate layers f_t, i_t, o_t also receive the cell state as input, in addition to h_{t-1} and x_t]

Coupled Input and Forget Gates

[Figure: coupled input and forget gates: a single forget gate f_t decides both what to forget and what to add, with the input gate tied to it as i_t = 1 − f_t]

Gated Recurrent Unit (GRU)

• Combine the forget and input gates into a single “update gate.”

Gated Recurrent Unit (GRU)

[Figure: the GRU cell: hidden state flows h_{t-1} → h_t; the reset gate r_t and update gate z_t (σ layers) and the candidate h̃_t (tanh layer) are computed from h_{t-1} and x_t; h_t interpolates between h_{t-1} and h̃_t]

Gated Recurrent Unit (GRU)

• z_t = σ(W_z · [h_{t-1}, x_t])
• r_t = σ(W_r · [h_{t-1}, x_t])
• h̃_t = tanh(W · [r_t * h_{t-1}, x_t])
• h_t = (1 − z_t) * h_{t-1} + z_t * h̃_t

Gated Recurrent Unit (GRU)

• Simplifies the design
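A minimal NumPy sketch of one GRU step under one common convention (bias terms omitted; the weight shapes and names are assumptions):

```python
import numpy as np

def gru_step(x, h_prev, Wz, Wr, Wh):
    """One GRU time step. The update gate z_t interpolates between the old
    state and the candidate; the reset gate r_t modulates how much of the
    old state feeds into the candidate."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    hx = np.concatenate([h_prev, x])
    z = sigmoid(Wz @ hx)                                     # update gate z_t
    r = sigmoid(Wr @ hx)                                     # reset gate r_t
    h_tilde = np.tanh(Wh @ np.concatenate([r * h_prev, x]))  # candidate h~_t
    return (1.0 - z) * h_prev + z * h_tilde                  # h_t

# Toy usage with random weights.
rng = np.random.default_rng(3)
H, D = 3, 2
Wz, Wr, Wh = (rng.normal(size=(H, H + D)) for _ in range(3))
h = np.zeros(H)
for _ in range(5):
    h = gru_step(rng.normal(size=D), h, Wz, Wr, Wh)
```

Compared with the LSTM step, there is no separate cell state and one fewer gate, which is the simplification the slide refers to.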

Alternate Representation

Effect of various LSTM structures

• The most commonly used LSTM architecture (vanilla LSTM) performs reasonably well on various datasets; using any of the eight possible modifications does not significantly improve LSTM performance.

• Certain modifications such as coupling the input and forget gates or

removing peephole connections simplify LSTM without significantly

hurting performance.

• The forget gate and the output activation function are the critical

components of the LSTM block. While the first is crucial for LSTM

performance, the second is necessary whenever the cell state is

unbounded.

RNN Extensions

• There are several variants of RNNs available; some of them are:

• Bidirectional RNNs

• Deep Bidirectional RNNs

Bidirectional RNNs

• Model that the current state depends on both previous states and future states in the sequence.

• They are simple: two RNNs stacked on top of each other, one running forward over the sequence and one backward.

• The output is then computed based on the hidden states of both RNNs.
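A minimal sketch of the idea, assuming vanilla tanh RNNs for the two directions (names and shapes are illustrative, not from the slides):

```python
import numpy as np

def bidirectional_rnn(xs, Uf, Wf, Ub, Wb):
    """Run two vanilla RNNs over the same sequence, one forward and one
    backward, and concatenate their hidden states at each time step."""
    def run(seq, U, W):
        s, out = np.zeros(W.shape[0]), []
        for x in seq:
            s = np.tanh(U @ x + W @ s)
            out.append(s)
        return out
    fwd = run(xs, Uf, Wf)
    bwd = run(xs[::-1], Ub, Wb)[::-1]   # backward pass, re-aligned in time
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Toy usage: each combined state sees both the past and the future.
rng = np.random.default_rng(4)
xs = [rng.normal(size=2) for _ in range(5)]
Uf, Wf = rng.normal(size=(3, 2)), rng.normal(size=(3, 3))
Ub, Wb = rng.normal(size=(3, 2)), rng.normal(size=(3, 3))
hs = bidirectional_rnn(xs, Uf, Wf, Ub, Wb)
```

The output layer would then read from the concatenated state at each step.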

Deep Bidirectional RNNs

• Similar to bidirectional RNNs; the only difference is the layered architecture.

• Multiple layers of bidirectional RNNs are stacked.

• In practice this gives a higher learning capacity.

• But it requires a lot of training data.

Feature-based approaches to Activity Recognition

• Dense trajectories and motion boundary descriptors for action recognition:

Wang et al., 2013

• Action Recognition with Improved Trajectories: Wang and Schmid, 2013

Dense Trajectories

Dense Trajectories

• Dense trajectories and motion boundary descriptors for action recognition:

Wang et al., 2013

[Figure: densely sampled points are tracked, and optical flow is described in the (stabilized) coordinate system of each tracklet]

Spatio-Temporal ConvNets

• 3D Convolutional Neural Networks for Human Action Recognition: Ji et al.,

2010

Spatio-Temporal ConvNets

• Sequential Deep Learning for Human Action Recognition: Baccouche et al.,

2011

Spatio-Temporal ConvNets

• Large-scale Video Classification with Convolutional Neural Networks,

Karpathy et al., 2014

1 million videos

487 sports classes

Spatio-Temporal ConvNets

• Large-scale Video Classification with Convolutional Neural Networks,

Karpathy et al., 2014

Spatio-Temporal ConvNets

• Large-scale Video Classification with Convolutional Neural Networks,

Karpathy et al., 2014

Spatio-Temporal ConvNets

• Learning Spatiotemporal Features with 3D Convolutional Networks: Tran et

al. 2015

3D VGGNet, basically.

Spatio-Temporal ConvNets

• Two-Stream Convolutional Networks for Action Recognition in Videos:

Simonyan and Zisserman 2014

Spatio-Temporal ConvNets

• Two-Stream Convolutional Networks for Action Recognition in Videos:

Simonyan and Zisserman 2014

Long-time Spatio-Temporal ConvNets

Action Classification in Soccer Videos with Long Short-Term Memory

Recurrent Neural Networks: ICANN'10

Long-time Spatio-Temporal ConvNets

• Long-term Recurrent Convolutional Networks for Visual Recognition and

Description: Donahue et al., 2015

Long-time Spatio-Temporal ConvNets

• Beyond Short Snippets: Deep Networks for Video Classification: Ng et al.,

2015

Beyond Short Snippets: Deep Networks for

Video Classification: Ng et al., 2015

• Deep Video LSTM takes as input the output from the final CNN layer at each consecutive video frame.

• CNN outputs are processed forward through time and upwards through five layers of stacked LSTMs.

• A softmax layer predicts the class at each time step.

Beyond Short Snippets: Deep Networks for

Video Classification: Ng et al., 2015

Combining predictions:

• Return the prediction at the last time step

• Max-pool the predictions over time

• Sum the predictions over time and return the max

• Linearly weight the predictions over time

There is less than 1% difference in output among the 4 choices.
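The four strategies can be compared on hypothetical per-time-step class scores (numbers invented purely for illustration):

```python
import numpy as np

# Hypothetical per-time-step class scores from the LSTM, shape (T, classes).
preds = np.array([[0.10, 0.70, 0.20],
                  [0.30, 0.50, 0.20],
                  [0.20, 0.60, 0.20],
                  [0.35, 0.45, 0.20]])

last     = preds[-1]                        # 1) prediction at the last step
maxpool  = preds.max(axis=0)                # 2) max-pool over time
summed   = preds.sum(axis=0)                # 3) sum over time, take the max
t_weight = np.arange(1, len(preds) + 1.0)   # 4) weight later frames linearly more
weighted = (t_weight[:, None] * preds).sum(axis=0) / t_weight.sum()
```

On these made-up scores all four strategies pick the same class, consistent with the slide's observation that the choice makes little difference.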

Bi-Directional RNN

A Multi-Stream Bi-Directional Recurrent Neural Network for Fine-Grained

Action Detection, CVPR 2016

Image Captioning

Image Sentence Datasets

• Microsoft COCO: Lin et al., 2014. www.mscoco.org

• Currently: ~120K images, ~5

sentences each

RNNs for Image Captioning

Soft Attention for Captioning

Soft Attention

Show Attend and Tell: Xu et al., 2015

• RNN attends spatially to different parts of images while generating

each word of the sentence

Soft Attention

Soft Attention for Everything!

Attending to Arbitrary Regions

Attention mechanism from Show, Attend, and Tell only lets us softly attend

to fixed grid positions … can we do better?

Spatial Transformer Networks

Jaderberg et al, “Spatial Transformer Networks”, NIPS 2015

Spatial Transformer Networks

Spatial Transformer Networks

Attention: Recap

• Soft attention:

• Easy to implement: produce distribution over input locations,

reweight features and feed as input

• Attend to arbitrary input locations using spatial transformer

networks

• Hard attention:

• Attend to a single input location

• Can’t use gradient descent!

• Need reinforcement learning!

Other Image Captioning Works

• Explain Images with Multimodal Recurrent Neural Networks, Mao et

al.

• Deep Visual-Semantic Alignments for Generating Image Descriptions,

Karpathy and Fei-Fei

• Show and Tell: A Neural Image Caption Generator, Vinyals et al.

• Long-term Recurrent Convolutional Networks for Visual Recognition

and Description, Donahue et al.

• Learning a Recurrent Visual Representation for Image Caption

Generation, Chen and Zitnick

Learning Representation

Unsupervised Learning with LSTMs, Arxiv 2015.

Pose Estimation

Recurrent Network Models for Human Dynamics, ICCV 2015

Reidentification

Recurrent Convolutional Network for Video-based Person Re-Identification,

CVPR 2016

OCR

Recursive Recurrent Nets with

Attention Modeling for OCR in the

Wild, CVPR 2016

• Recursive convolutional layers extract encoded image features.

• Output characters are produced by recurrent neural networks.
