
SVM Tutorial

Introduction

This is the first article from a series of articles I will be writing about the math behind SVM. There is a lot to talk about, and a lot of mathematical background is often necessary. However, I will try to keep a slow pace and give in-depth explanations, so that everything is crystal clear, even for beginners.

If you are new and wish to know a little bit more about SVMs before diving into the math, you can read the article: an overview of

Support Vector Machine.

The goal of a support vector machine is to find the optimal separating hyperplane which maximizes the margin of the training

data.

The first thing we can see from this definition is that a SVM needs training data, which means it is a supervised learning algorithm.

It is also important to know that SVM is a classification algorithm, which means we will use it to predict if something belongs to a particular class.

Figure 1

We have plotted the size and weight of several people, and there is also a way to distinguish between men and women.

With such data, using a SVM will allow us to answer the following question:

Given a particular data point (weight and size), is the person a man or a woman?

For instance: if someone measures 175 cm and weighs 80 kg, is it a man or a woman?

Just by looking at the plot, we can see that it is possible to separate the data. For instance, we could trace a line and then all

the data points representing men will be above the line, and all the data points representing women will be below the line.

Even though we use a very simple example with data points lying in R², the support vector machine can work with any number of dimensions!

in two dimensions, it is a line

in three dimensions, it is a plane

in more dimensions, you can call it a hyperplane

What is the optimal separating hyperplane?

The fact that you can find a separating hyperplane does not mean it is the best one! In the example below there are several separating hyperplanes. Each of them is valid, as it successfully separates our data set with men on one side and women on the other side.

Suppose we select the green hyperplane and use it to classify real life data.

This hyperplane does not generalize well

This time, it makes some mistakes as it wrongly classifies three women. Intuitively, we can see that if we select a hyperplane which is close to the data points of one class, then it might not generalize well.

So we will try to select a hyperplane as far as possible from the data points of each category:

This one looks better. When we use it with real life data, we can see it still makes perfect classifications.

The black hyperplane classifies more accurately than the green one

That's why the objective of a SVM is to find the optimal separating hyperplane: because it correctly classifies the training data, and because it is the one which will generalize better with unseen data.

What is the margin and how does it help choosing the optimal hyperplane?

Given a particular hyperplane, we can compute the distance between the hyperplane and the closest data point. Once we have

this value, if we double it we will get what is called the margin.

Basically the margin is a no man's land: there will never be any data point inside the margin. (Note: this can cause some problems when data is noisy, and this is why soft margin classifiers will be introduced later.)

As you can see, Margin B is smaller than Margin A.

The further a hyperplane is from a data point, the larger its margin will be.

This means that the optimal hyperplane will be the one with the biggest margin.

That is why the objective of the SVM is to find the optimal separating hyperplane which maximizes the margin of the

training data.

This concludes this introductory post about the math behind SVM. There were not a lot of formulas, but in the next article we will put in some numbers and try to get a mathematical view of this using geometry and vectors.

SVM - Understanding the math - Part 2 : Calculate the margin

Alexandre KOWALCZYK

I am passionate about machine learning and Support Vector Machines. I like to explain things simply to share my knowledge with people from around the world. If you wish, you can add me on LinkedIn; I like to connect with my readers.

November 2, 2014



This is Part 2 of my series of tutorial about the math behind Support Vector Machines.

If you did not read the previous article, you might want to start the series at the beginning by reading this article: an overview of Support Vector Machine.

In the first part, we saw what the aim of the SVM is: its goal is to find the hyperplane which maximizes the margin. That means it is important to understand vectors well and how to use them.

We will first see:

What is a vector? (its norm, its direction)

How to add and subtract vectors?

What is the dot product?

How to project a vector onto another?

Once we have all these tools in our toolbox, we will then see:

What is the equation of the hyperplane?

How to compute the margin?

What is a vector?

Figure 1: a point

Definition: Any point x = (x1, x2), x ≠ 0, in R² specifies a vector in the plane, namely the vector starting at the origin and ending at x.

This definition means that there exists a vector between the origin and A.

Figure 2 - a vector

If we say that the point at the origin is the point O(0, 0), then the vector above is the vector OA. We could also give it an arbitrary name such as u.

Note: You can notice that we write vectors either with an arrow on top of them, or in bold. In the rest of this text I will use the arrow when there are two letters, like OA, and the bold notation otherwise.

Ok, so now we know that there is a vector, but we still don't know what IS a vector.

Definition: A vector is an object that has both a magnitude and a direction.

1) The magnitude

The magnitude or length of a vector x is written ∥x∥ and is called its norm.

For our vector OA, ∥OA∥ is the length of the segment OA.

Figure 3

Using the Pythagorean theorem in the right triangle OAB:

OA² = OB² + AB²
OA² = 3² + 4²
OA² = 25
OA = √25
∥OA∥ = OA = 5

2) The direction

Definition: The direction of the vector u(u1, u2) is the vector w(u1/∥u∥, u2/∥u∥).

To find the direction of a vector, we need to use its angles.

Naive definition 1: The direction of the vector u is defined by the angle θ with respect to the horizontal axis, and by the angle α with respect to the vertical axis.

This is tedious. Instead of that, we will use the cosine of the angles.

In a right triangle, cos(β) = adjacent / hypotenuse

In Figure 4 we can see that we can form two right triangles, and in both cases the adjacent side will be on one of the axes. This means that the definition of the cosine implicitly contains the axis related to an angle. We can rephrase our naive definition to:

Naive definition 2: The direction of the vector u is defined by the cosine of the angle θ and the cosine of the angle α.

cos(θ) = u1 / ∥u∥

cos(α) = u2 / ∥u∥

Hence the original definition of the vector w. That's why its coordinates are also called direction cosines.

Computing the direction vector

We will now compute the direction of the vector u(3, 4) from Figure 4:

cos(θ) = u1 / ∥u∥ = 3 / 5 = 0.6

and

cos(α) = u2 / ∥u∥ = 4 / 5 = 0.8

The direction of u(3, 4) is therefore the vector w(0.6, 0.8).

We can see that w has indeed the same look as u, except it is smaller. Something interesting about direction vectors like w is that their norm is equal to 1. That's why we often call them unit vectors.

How to add and subtract vectors?

Figure 6: two vectors u and v

u + v = (u1 + v1 , u2 + v2 )

Which means that adding two vectors gives us a third vector whose coordinates are the sum of the coordinates of the original vectors.

u − v = (u1 − v1 , u2 − v2 )

Figure 8: the difference of two vectors

Since the subtraction is not commutative, we can also consider the other case:

v − u = (v1 − u1 , v2 − u2 )

The last two pictures describe the "true" vectors generated by the difference of u and v.

However, since a vector has a magnitude and a direction, we often consider that parallel translates of a given vector (vectors with the same magnitude and direction but with a different origin) are the same vector, just drawn in a different place in space.

Figure 10: another way to view the difference v − u

If you do the math, it looks wrong, because the end of the vector u − v is not at the right point, but it is a convenient way of thinking about vectors which you'll encounter often.
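Addition and subtraction act coordinate by coordinate, which a couple of tuple-based helpers make concrete (a sketch; the names and example values are mine):

```python
def add(u, v):
    """Coordinate-wise sum of two 2-D vectors."""
    return (u[0] + v[0], u[1] + v[1])

def subtract(u, v):
    """Coordinate-wise difference u - v."""
    return (u[0] - v[0], u[1] - v[1])

u, v = (1, 3), (4, 1)
print(add(u, v))       # (5, 4)
print(subtract(u, v))  # (-3, 2)
print(subtract(v, u))  # (3, -2): subtraction is not commutative
```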

What is the dot product?

Definition: Geometrically, the dot product is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them.

Which means if we have two vectors x and y and there is an angle θ (theta) between them, their dot product is :

x ⋅ y = ∥x∥∥y∥cos(θ)

Why ?

Figure 12

In the definition, they talk about cos(θ); let's see what it is.

In a right triangle, cos(θ) = adjacent / hypotenuse

However, if we take a different look at Figure 12, we can find two right-angled triangles formed by each vector with the horizontal axis.

Figure 13

and

Figure 14

Figure 15

θ = β − α

There is a special formula, called the difference identity for cosine, which says that:

cos(β − α) = cos(β)cos(α) + sin(β)sin(α)

From Figures 13 and 14:

cos(β) = adjacent / hypotenuse = x1 / ∥x∥
sin(β) = opposite / hypotenuse = x2 / ∥x∥
cos(α) = adjacent / hypotenuse = y1 / ∥y∥
sin(α) = opposite / hypotenuse = y2 / ∥y∥

So:

cos(θ) = cos(β − α) = (x1 / ∥x∥)(y1 / ∥y∥) + (x2 / ∥x∥)(y2 / ∥y∥)

cos(θ) = (x1y1 + x2y2) / (∥x∥∥y∥)

Multiplying both sides by ∥x∥∥y∥:

∥x∥∥y∥cos(θ) = x1y1 + x2y2

∥x∥∥y∥cos(θ) = x ⋅ y

x ⋅ y = x1y1 + x2y2 = ∑(xiyi) for i = 1 to 2

The dot product is called like that because we write a dot between the two vectors.

Talking about the dot product x ⋅ y is the same as talking about the inner product or the scalar product, because we take the product of two vectors and it returns a scalar (a real number).
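We can verify numerically that the geometric and algebraic definitions agree. A quick sketch, using two example vectors of my own choosing (the helper names are also mine):

```python
import math

def dot(x, y):
    """Algebraic dot product: x1*y1 + x2*y2."""
    return x[0] * y[0] + x[1] * y[1]

def norm(x):
    return math.sqrt(dot(x, x))

x, y = (3, 5), (8, 2)

# Angles beta and alpha of each vector with the horizontal axis,
# and theta = beta - alpha, as in the derivation above.
beta = math.atan2(x[1], x[0])
alpha = math.atan2(y[1], y[0])
theta = beta - alpha

algebraic = dot(x, y)                            # 34
geometric = norm(x) * norm(y) * math.cos(theta)  # also 34, up to rounding
print(algebraic, round(geometric, 6))
```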

How to project a vector onto another?

Given two vectors x and y, we would like to find the orthogonal projection of x onto y.

Figure 16

Figure 17

By definition:

cos(θ) = ∥z∥ / ∥x∥

∥z∥ = ∥x∥cos(θ)

We also know that:

cos(θ) = (x ⋅ y) / (∥x∥∥y∥)

So:

∥z∥ = ∥x∥ (x ⋅ y) / (∥x∥∥y∥)

∥z∥ = (x ⋅ y) / ∥y∥

If we define the vector u = y / ∥y∥ as the direction of y, then:

∥z∥ = u ⋅ x

Since the vector z is in the same direction as y, it has the direction u:

u = z / ∥z∥

z = ∥z∥u

Why are we interested in the orthogonal projection? Well, in our example, it allows us to compute the distance between x and the line which goes through y.

Figure 19

∥x − z∥ = √((3 − 4)² + (5 − 1)²) = √17
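The projection formulas reproduce these numbers: x(3, 5) projects to z(4, 1), at distance √17 from x. A sketch, taking y(8, 2) as the vector defining the line (an assumption on my part; any positive multiple of z's direction gives the same result):

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def norm(a):
    return math.sqrt(dot(a, a))

x = (3, 5)
y = (8, 2)  # assumed: a vector along the line we project onto

u = (y[0] / norm(y), y[1] / norm(y))     # unit vector in the direction of y
z_len = dot(u, x)                        # ||z|| = u . x
z = (z_len * u[0], z_len * u[1])         # z = ||z|| u, the projection of x onto y
dist = norm((x[0] - z[0], x[1] - z[1]))  # distance from x to the line: sqrt(17)
```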

What is the equation of the hyperplane?

You probably learnt that an equation of a line is y = ax + b. However, when reading about hyperplanes, you will often find that the equation of a hyperplane is defined by:

wᵀx = 0

In the hyperplane equation you can see that the names of the variables are in bold, which means that they are vectors! Moreover, wᵀx is how we compute the inner product of two vectors, and if you recall, the inner product is just another name for the dot product!

Note that

y = ax + b

is the same as

y − ax − b = 0

Given two vectors w(−b, −a, 1) and x(1, x, y):

wᵀx = −b × (1) + (−a) × x + 1 × y

wᵀx = y − ax − b

The two equations are just different ways of expressing the same thing.

It is interesting to note that w0 is −b, which means that this value determines the intersection of the line with the

vertical axis.

This notation has two advantages:

it is easier to work in more than two dimensions with this notation,

the vector w will always be normal to the hyperplane. (Note: I received a lot of questions about this last remark. w will always be normal because we use this vector to define the hyperplane, so by definition it will be normal. As you can see on this page, when we define a hyperplane, we suppose that we have a vector that is orthogonal to the hyperplane.)

And this last property will come in handy to compute the distance from a point to the hyperplane.

How to compute the margin?

Figure 20

As you can see in Figure 20, the equation of the hyperplane is:

x2 = −2x1

which is equivalent to

wᵀx = 0

with w(2, 1) and x(x1, x2).

Note that the vector w is shown in Figure 20 (w is not a data point).

We would like to compute the distance between the point A(3, 4) and the hyperplane.

This is the distance between A and its projection onto the hyperplane

Figure 21

If we project it onto the normal vector w

Figure 22 : projection of a onto w

We get the vector p

Our goal is to nd the distance between the point A(3, 4) and the hyperplane.

We can see in Figure 23 that this distance is the same thing as ∥p∥ .

Let's compute this value.

We start with two vectors, w = (2, 1) which is normal to the hyperplane, and a = (3, 4) which is the vector between

the origin and A.

∥w∥ = √(2² + 1²) = √5

u = (2/√5, 1/√5)

p = (u ⋅ a)u

p = (3 × 2/√5 + 4 × 1/√5)u

p = (6/√5 + 4/√5)u

p = (10/√5)u

p = (10/√5 × 2/√5, 10/√5 × 1/√5)

p = (20/5, 10/5)

p = (4, 2)

∥p∥ = √(4² + 2²) = 2√5

Now that we have the distance ∥p∥ between A and the hyperplane, the margin is defined by:

margin = 2∥p∥ = 4√5
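The whole computation can be replayed in a few lines of Python (helper names are mine):

```python
import math

w = (2, 1)   # normal vector of the hyperplane x2 = -2*x1
a = (3, 4)   # vector from the origin to the point A

norm_w = math.sqrt(w[0] ** 2 + w[1] ** 2)  # sqrt(5)
u = (w[0] / norm_w, w[1] / norm_w)         # unit vector of w
p_len = u[0] * a[0] + u[1] * a[1]          # u . a = 10/sqrt(5)
p = (p_len * u[0], p_len * u[1])           # projection of a onto w: (4, 2)
norm_p = math.sqrt(p[0] ** 2 + p[1] ** 2)  # 2*sqrt(5), about 4.472
margin = 2 * norm_p                        # 4*sqrt(5), about 8.944
```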

Conclusion

This ends Part 2 of this tutorial about the math behind SVM.

There was a lot more math, but I hope you have been able to follow the article without problems.

What's next?

Now that we know how to compute the margin, we might want to know how to select the best hyperplane. This is described in Part 3 of the tutorial: How to find the optimal hyperplane?


This is Part 3 of my series of tutorials about the math behind Support Vector Machine.

If you did not read the previous articles, you might want to start the series at the beginning by reading this article: an overview of Support Vector Machine.

The main focus of this article is to show you the reasoning allowing us to select the optimal hyperplane.

How do we calculate the distance between two hyperplanes?

What is the SVM optimization problem?

At the end of Part 2 we computed the distance ∥p∥ between a point A and a hyperplane. We then computed the margin which

was equal to 2∥p∥ .

However, even if it did quite a good job at separating the data it was not the optimal hyperplane.

As we saw in Part 1, the optimal hyperplane is the one which maximizes the margin of the training data.

In Figure 1, we can see that the margin M1 , delimited by the two blue lines, is not the biggest margin separating perfectly the

data. The biggest margin is the margin M2 shown in Figure 2 below.

Figure 2: The optimal hyperplane is slightly on the left of the one we used in Part 2.

You can also see the optimal hyperplane in Figure 2. It is slightly to the left of our initial hyperplane. How did I find it? I simply traced a line crossing M2 in its middle.

Right now you should have the feeling that hyperplanes and margins are closely related. And you would be right!

If I have a hyperplane, I can compute its margin with respect to some data point. If I have a margin delimited by two hyperplanes (the dark blue lines in Figure 2), I can find a third hyperplane passing right in the middle of the margin.

Finding the biggest margin is the same thing as finding the optimal hyperplane.

It is rather simple:

1. you have a dataset

2. select two hyperplanes which separate the data with no points between them

3. maximize their distance (the margin)

The region bounded by the two hyperplanes will be the biggest possible margin.

As always, though, this simplicity requires some abstraction and mathematical terminology to be well understood.

Step 1: You have a dataset D and you want to classify it

Most of the time your data will be composed of n vectors xi.

Each xi will also be associated with a value yi indicating if the element belongs to the class (+1) or not (−1).

Note that yi can only have two possible values, −1 or +1.

Moreover, most of the time, for instance when you do text classification, your vector xi ends up having a lot of dimensions. We can say that xi is a p-dimensional vector if it has p dimensions.

So your dataset D is the set of n couples of elements (xi, yi):

D = {(xi, yi) ∣ xi ∈ Rᵖ, yi ∈ {−1, 1}} for i = 1, …, n
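Concretely, such a dataset is just a collection of labeled vectors. A sketch in Python (the points are made up purely for illustration):

```python
# A toy dataset D = {(x_i, y_i)}: each x_i is a p-dimensional point,
# each y_i is its class, either +1 or -1. Values are invented.
D = [
    ((1.0, 2.0), 1),
    ((2.0, 3.0), 1),
    ((4.0, 0.5), -1),
    ((5.0, 1.0), -1),
]

n = len(D)        # number of examples
p = len(D[0][0])  # number of dimensions of each x_i
assert all(y in (-1, 1) for _, y in D)
```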

Step 2: You need to select two hyperplanes separating the data with no points between them

Finding two hyperplanes separating some data is easy when you have a pencil and a paper. But with some p -dimensional data

it becomes more difficult because you can't draw it.

Moreover, even if your data is only 2-dimensional it might not be possible to find a separating hyperplane !

Figure 3: Data on the left can be separated by a hyperplane, while data on the right can't

So let's assume that our dataset D IS linearly separable. We now want to find two hyperplanes with no points between them,

but we don't have a way to visualize them.

We saw previously that the equation of a hyperplane can be written

wᵀx = 0

However, in the Wikipedia article about Support Vector Machine it is said that a hyperplane is the set of points x satisfying w ⋅ x + b = 0.

First, we recognize another notation for the dot product: the article uses w ⋅ x instead of wᵀx.

You might wonder... Where does the +b come from? Is our previous definition incorrect?

Not quite. Once again it is a question of notation. In our definition the vectors w and x have three dimensions, while in the Wikipedia definition they have two dimensions.

Given two 3-dimensional vectors w(b, −a, 1) and x(1, x, y):

w ⋅ x = b × (1) + (−a) × x + 1 × y

w ⋅ x = y − ax + b   (1)

Given two 2-dimensional vectors w′(−a, 1) and x′(x, y):

w′ ⋅ x′ = (−a) × x + 1 × y

w′ ⋅ x′ = y − ax   (2)

Now if we add b to both sides of equation (2):

w′ ⋅ x′ + b = y − ax + b

w′ ⋅ x′ + b = w ⋅ x   (3)

For the rest of this article we will use 2-dimensional vectors (as in equation (2)).

w ⋅ x + b = 0

We can select two other hyperplanes H1 and H2 which also separate the data and have the following equations:

w ⋅ x + b = δ

and

w ⋅ x + b = −δ

However, here the variable δ is not necessary, so we can set δ = 1 to simplify the problem.

w ⋅ x + b = 1

and

w ⋅ x + b = −1

We won't select just any hyperplanes: we will only select those which meet the two following constraints:

w ⋅ xi + b ≥ 1 for xi having the class 1

or

w ⋅ xi + b ≤ −1 for xi having the class −1

On the following figures, all red points have the class 1 and all blue points have the class −1.

So let's look at Figure 4 below and consider the point A . It is red so it has the class 1 and we need to verify it does not violate

the constraint w ⋅ xi + b ≥ 1

When xi = A we see that the point is on the hyperplane so w ⋅ xi + b = 1 and the constraint is respected. The same applies

for B .

When xi = C we see that the point is above the hyperplane so w ⋅ xi + b > 1 and the constraint is respected. The same

applies for D , E, F and G.

With an analogous reasoning you should find that the second constraint is respected for the class −1.

Figure 4: Two hyperplanes satisfying the constraints

And now we will examine cases where the constraints are not respected:

Figure 6: The right hyperplane does not satisfy the first constraint

Figure 7: The left hyperplane does not satisfy the second constraint

Figure 8: Both constraints are not satisfied

What does it mean when a constraint is not respected? It means that we cannot select these two hyperplanes. You can see that every time the constraints are not satisfied (Figures 6, 7 and 8) there are points between the two hyperplanes.

By defining these constraints, we found a way to reach our initial goal of selecting two hyperplanes without points between

them. And it works not only in our examples but also in p -dimensions !

In mathematics, people like things to be expressed concisely. We can combine our two constraints into a single one. Take the constraint for the class −1:

w ⋅ xi + b ≤ −1

Multiplying both sides by yi = −1 flips the inequality:

yi(w ⋅ xi + b) ≥ yi(−1)

which gives yi(w ⋅ xi + b) ≥ 1. For the class +1, yi = 1, so the first constraint w ⋅ xi + b ≥ 1 gives exactly the same expression.

We now have a single constraint instead of two, but they are mathematically equivalent. So their effect is the same (there will be no points between the two hyperplanes).
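The combined constraint is straightforward to check in code. A sketch with a made-up hyperplane (w, b and the points are illustrative, not taken from the article's figures):

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def respects_constraint(w, b, x, y):
    """The single combined constraint: y_i (w . x_i + b) >= 1."""
    return y * (dot(w, x) + b) >= 1

w, b = (1.0, 0.0), 0.0  # hypothetical hyperplane x1 = 0
print(respects_constraint(w, b, (2.0, 1.0), 1))    # True:  1*(2+0) >= 1
print(respects_constraint(w, b, (0.5, 3.0), 1))    # False: point inside the margin
print(respects_constraint(w, b, (-3.0, 1.0), -1))  # True: -1*(-3+0) >= 1
```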

Step 3: Maximize the distance between the two hyperplanes

This is probably the hardest part of the problem. But don't worry, I will explain everything along the way.

Before trying to maximize the distance between the two hyperplanes, we will first ask ourselves: how do we compute it?

Let:

H0 be the hyperplane having the equation w ⋅ x + b = −1
H1 be the hyperplane having the equation w ⋅ x + b = 1
x0 be a point in the hyperplane H0

We will call m the perpendicular distance from x0 to the hyperplane H1. By definition, m is what we are used to calling the margin.

You might be tempted to think that if we add m to x0 we will get another point, and this point will be on the other hyperplane !

But it does not work, because m is a scalar, and x0 is a vector and adding a scalar with a vector is not possible. However, we

know that adding two vectors is possible, so if we transform m into a vector we will be able to do an addition.

We can find the set of all points which are at a distance m from x0 . It can be represented as a circle :

Figure 10: All points on the circle are at the distance m from x0

Looking at the picture, the necessity of a vector becomes clear. With just the length m we are missing one crucial piece of information: the direction. (Recall from Part 2 that a vector has a magnitude and a direction.)

We can't add a scalar to a vector, but we know that if we multiply a scalar by a vector we will get another vector. We need this vector:

1. to have a magnitude of m

2. to be perpendicular to the hyperplane H1

Figure 11: w is perpendicular to H1

Let's define u = w / ∥w∥, the unit vector of w. As it is a unit vector, ∥u∥ = 1, and it has the same direction as w, so it is also perpendicular to the hyperplane.

Figure 12: u is also perpendicular to H1

If we multiply u by m we get the vector k = mu, and:

1. ∥k∥ = m

2. k is perpendicular to H1 (because it has the same direction as u)

From these properties we can see that k is the vector we were looking for.

Figure 13: k is a vector of length m perpendicular to H1

k = mu = m(w / ∥w∥)   (9)

We did it! We transformed our scalar m into a vector k, which we can use to perform an addition with the vector x0.

If we start from the point x0 and add k, we find that the point z0 = x0 + k is in the hyperplane H1, as shown in Figure 14.

Figure 14: z0 is a point on H1

The fact that z0 is in H1 means that:

w ⋅ z0 + b = 1   (10)

We can replace z0 by x0 + k, because that is how we constructed it:

w ⋅ (x0 + k) + b = 1   (11)

We can now replace k using equation (9):

w ⋅ (x0 + m(w / ∥w∥)) + b = 1   (12)

We now distribute:

w ⋅ x0 + m(w ⋅ w) / ∥w∥ + b = 1   (13)

The dot product of a vector with itself is the square of its norm, so:

w ⋅ x0 + m∥w∥² / ∥w∥ + b = 1   (14)

w ⋅ x0 + m∥w∥ + b = 1   (15)

w ⋅ x0 + b = 1 − m∥w∥   (16)

As x0 is in H0, we know that w ⋅ x0 + b = −1, so:

−1 = 1 − m∥w∥   (17)

m∥w∥ = 2   (18)

m = 2 / ∥w∥   (19)
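Equation (19) says the margin depends only on the norm of w, which is easy to confirm numerically (a sketch; the example vectors are mine):

```python
import math

def margin(w):
    """m = 2 / ||w||: distance between w.x + b = 1 and w.x + b = -1."""
    return 2 / math.sqrt(sum(wi * wi for wi in w))

print(margin((2, 1)))  # 2/sqrt(5), about 0.894
# Maximizing the margin is therefore the same as minimizing ||w||:
print(margin((1, 0.5)) > margin((2, 1)))  # True: smaller norm, bigger margin
```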
