
Chapter 7: Classification

Introduction
  Classification problem, evaluation of classifiers
Bayesian Classifiers
  Optimal Bayes classifier, naive Bayes classifier, applications
Nearest Neighbor Classifiers
  Basic notions, choice of parameters, applications
Decision Tree Classifiers
  Basic notions, split strategies, overfitting, pruning of decision trees
Scalability to Large Databases
  SLIQ, SPRINT, RainForest
Further Approaches
  Neural networks, genetic algorithms, rough set approach, fuzzy set approaches, support vector machines, prediction

WS 2003/04 Data Mining Algorithms 7 – 81

Motivation

So far, we considered classification on small data sets.
New requirement: scalability to larger and larger commercial databases.

Scalability to Large Databases: Approaches

Sampling
  Use a subset of the data as training set, such that the sample fits into main memory
  Evaluate only a sample of all potential splits (for numerical attributes)
  → poor quality of the resulting decision trees
Use all data as training set (not just a sample)


Storage and Indexing Structures

Evaluation of potential splits and selection of the best split
  Evaluation of attribute values as potential split points
  For categorical attributes: O(2^m) potential binary splits for m distinct attribute values
Partitioning of the training data according to the selected split point


SLIQ: Introduction

Binary splits, evaluated by using the Gini index:

  gini(T) = 1 − Σ_{j=1..k} p_j²   for k classes c_j with relative frequencies p_j

Avoid sorting of the training data at each node:
Attribute lists
  values of an attribute in ascending order
  sequential access
  secondary storage resident
Class list
  contains, for each training object, its class label and the leaf node it currently belongs to
  random access
  main memory resident
Histograms
  for each leaf node of the decision tree
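The Gini index above, together with the weighted evaluation of a binary split, can be sketched directly from the formula (a minimal illustration; the function names are ours, not part of SLIQ):

```python
def gini(class_counts):
    """Gini index of a node T: gini(T) = 1 - sum_j p_j^2 over the k classes,
    where p_j is the relative frequency of class c_j in T."""
    n = sum(class_counts)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in class_counts)

def gini_split(left_counts, right_counts):
    """Weighted Gini index of a binary split, used to compare candidate splits."""
    n_left, n_right = sum(left_counts), sum(right_counts)
    n = n_left + n_right
    return n_left / n * gini(left_counts) + n_right / n * gini(right_counts)

print(gini([3, 3]))                # evenly mixed node -> 0.5
print(gini_split([3, 0], [0, 3]))  # perfect split -> 0.0
```

A pure node has index 0; the split with the smallest weighted index is the best candidate.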

SLIQ: Example

Training data:            Attribute list "Age":    Attribute list "Income":
Id  Age  Income  Class    Age  Id                  Income  Id
1   30   65      G        23   2                   15      2
2   23   15      B        30   1                   40      4
3   40   75      G        40   3                   60      6
4   55   40      B        45   6                   65      1
5   55   100     G        55   5                   75      3
6   45   60      G        55   4                   100     5

Class list (all objects currently in leaf N1):
Id  Class  Leaf
1   G      N1
2   B      N1
3   G      N1
4   B      N1
5   G      N1
6   G      N1

SLIQ: Algorithm

Standard decision tree classifiers follow a depth-first strategy; SLIQ instead evaluates splits for all leaf nodes on the same level of the decision tree in one pass:
  Sequentially scan the attribute list of attribute a
  For each entry, determine the respective entry e in the class list
  Let k be the value of the "leaf" attribute of e
  Update the histogram of leaf k based on the value of the "class" attribute of e
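The scan described above can be sketched as follows, using the Age attribute list and class list from the example; the split value 35 and the left/right histogram layout are illustrative choices, not part of the original algorithm description:

```python
def update_histograms(attribute_list, class_list, split_value):
    """One sequential scan over a sorted attribute list, as in SLIQ: for each
    entry, look up the respective class-list entry e, take the leaf k of e,
    and update the class histogram of k for a candidate split point."""
    histograms = {}  # leaf k -> {"left": {class: count}, "right": {class: count}}
    for value, obj_id in attribute_list:
        cls, leaf = class_list[obj_id]
        h = histograms.setdefault(leaf, {"left": {}, "right": {}})
        side = "left" if value <= split_value else "right"
        h[side][cls] = h[side].get(cls, 0) + 1
    return histograms

# Data from the SLIQ example: attribute list "Age" and the class list
age_list = [(23, 2), (30, 1), (40, 3), (45, 6), (55, 5), (55, 4)]
class_list = {1: ("G", "N1"), 2: ("B", "N1"), 3: ("G", "N1"),
              4: ("B", "N1"), 5: ("G", "N1"), 6: ("G", "N1")}
hist = update_histograms(age_list, class_list, split_value=35)
# hist["N1"] == {"left": {"B": 1, "G": 1}, "right": {"G": 3, "B": 1}}
```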


SPRINT: Introduction

Shortcomings of SLIQ
  The size of the class list grows linearly with the size of the database, i.e. with the number of training examples
  SLIQ scales well only if sufficient main memory for the entire class list is available
Goals of SPRINT
  Scalability to arbitrarily large databases
Class list
  There is no class list any longer; the class labels are stored in the attribute lists (resident in secondary storage)
  No main memory data structures any longer → scalable to arbitrarily large databases
Attribute lists
  No single attribute list for the entire training set; separate attribute lists for each node of the decision tree instead
  Waiving central data structures supports a simple parallelization of SPRINT

SPRINT: Example

Attribute lists for node N1:
Age  Class  Id        Car type  Class  Id
17   high   1         family    high   0
20   high   5         sportive  high   1
23   high   0         sportive  high   2
32   low    4         family    low    3
43   high   2         truck     low    4
68   low    3         family    high   5

Split at N1: Age ≤ 27.5 → N2, Age > 27.5 → N3

Attribute lists for node N2:          Attribute lists for node N3:
Age  Class  Id                        Age  Class  Id
17   high   1                         32   low    4
20   high   5                         43   high   2
23   high   0                         68   low    3

Car type  Class  Id                   Car type  Class  Id
family    high   0                    sportive  high   2
sportive  high   1                    family    low    3
family    high   5                    truck     low    4

[Figure: runtime in seconds (0–8000) vs. number of objects in millions (0–3.0) for SLIQ and SPRINT]

SLIQ scales well only as long as the class list fits into main memory.
SLIQ is not applicable for data sets with more than one million entries.

RainForest: Introduction

Shortcomings of SPRINT
  Does not exploit the available main memory
Goals of RainForest
  Exploit the available main memory to increase the efficiency
  Separate scalability aspects from quality aspects of a decision tree classifier
AVC sets (Attribute-Value, Classlabel)
  AVC set of an attribute a for node k: contains a class histogram for each value of a
  AVC group of node k: set of the AVC sets of node k for all attributes
  An AVC set is significantly smaller than the attribute lists

RainForest: Example

Training data:
Id  Age     Income  Class
1   young   65      G
2   young   15      B
3   young   75      G
4   senior  40      B
5   senior  100     G
6   senior  60      G

AVC set "age" for N1:     AVC set "income" for N1:
value   class  count      value  class  count
young   B      1          15     B      1
young   G      2          40     B      1
senior  B      1          60     G      1
senior  G      2          65     G      1
                          75     G      1
                          100    G      1

Split at N1: age = young → N2, age = senior → N3

AVC set "age" for N2:     AVC set "income" for N2:
value   class  count      value  class  count
young   B      1          15     B      1
young   G      2          65     G      1
                          75     G      1
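An AVC set can be built in a single scan over a node's training data; a minimal sketch using the training data of the example (the function name and dictionary layout are ours):

```python
from collections import Counter

def avc_set(records, attribute):
    """AVC set (Attribute-Value, Classlabel) of one attribute for a node:
    a class histogram per attribute value, built in a single scan."""
    counts = Counter()
    for row in records:
        counts[(row[attribute], row["class"])] += 1
    return counts

# Training data from the example
data = [
    {"id": 1, "age": "young",  "income": 65,  "class": "G"},
    {"id": 2, "age": "young",  "income": 15,  "class": "B"},
    {"id": 3, "age": "young",  "income": 75,  "class": "G"},
    {"id": 4, "age": "senior", "income": 40,  "class": "B"},
    {"id": 5, "age": "senior", "income": 100, "class": "G"},
    {"id": 6, "age": "senior", "income": 60,  "class": "G"},
]
avc_age = avc_set(data, "age")
# e.g. avc_age[("young", "G")] == 2, avc_age[("senior", "B")] == 1
```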

RainForest: Algorithms

Assumption
  The entire AVC group of the root node fits into main memory
  (then, the AVC groups of all other nodes also fit into main memory)
Algorithm RF_Write
  Construct the AVC group of node k in main memory by a sequential scan over the training set
  Determine the optimal split for node k by using the AVC group
  Read the training set and distribute (write) it to the partitions

RainForest: Algorithms

Algorithm RF_Read
  Avoids explicit writing of the partitions to secondary storage where possible
  The training database is read multiple times, once for each level of the tree
Algorithm RF_Hybrid
  Use RF_Read as long as the AVC groups of all nodes of the current level of the decision tree fit into main memory
  Subsequently, materialize the partitions by using RF_Write

[Figure: runtime in seconds (up to 20,000) vs. number of training objects in millions for SPRINT and RainForest]

The runtime increases with the number n of training objects.
RainForest is significantly more efficient than SPRINT.

Boosting and Bagging

Bagging
  Basic idea: learn a set of classifiers and decide by a vote over the individual predictions
Boosting
  Basic idea: learn a series of classifiers, where each classifier concentrates on the examples misclassified by its predecessor
Applicable to decision trees or Bayesian classifiers

Boosting: Algorithm

Assign every example an equal weight 1/N
For t = 1, 2, …, T do
  Obtain a hypothesis (classifier) h(t) under the current weights w(t)
  Calculate the error of h(t) and re-weight the examples based on the error
  Normalize w(t+1) to sum to 1.0
Output a weighted sum of all the hypotheses, each weighted according to its accuracy on the training set

Boosting requires only linear time and constant space.
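The loop above can be sketched in AdaBoost style; the decision-stump base learner and the specific re-weighting factor are illustrative assumptions, not prescribed by the slide:

```python
import math

def stump_learner(xs, ys, w):
    """Hypothetical base learner: the threshold stump on 1-d inputs
    with minimal weighted error."""
    best_err, best_h = None, None
    for thr in xs:
        for sign in (1, -1):
            h = (lambda t, s: (lambda x: s if x <= t else -s))(thr, sign)
            err = sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)
            if best_err is None or err < best_err:
                best_err, best_h = err, h
    return best_h

def boost(xs, ys, learn, T):
    """AdaBoost-style version of the loop above: equal initial weights 1/N,
    re-weighting based on the error of h(t), normalization, weighted vote."""
    n = len(xs)
    w = [1.0 / n] * n
    hypotheses = []
    for _ in range(T):
        h = learn(xs, ys, w)
        err = sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)
        if err == 0:                       # perfect hypothesis: use it alone
            hypotheses.append((1.0, h))
            break
        if err >= 0.5:                     # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        # raise the weight of misclassified examples, lower the others
        w = [wi * math.exp(alpha if h(x) != y else -alpha)
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]       # normalize w(t+1) to sum 1.0
        hypotheses.append((alpha, h))
    return lambda x: 1 if sum(a * h(x) for a, h in hypotheses) >= 0 else -1

H = boost([1, 2, 3, 4], [1, 1, -1, -1], stump_learner, T=5)
print(H(1.5), H(3.5))  # 1 -1
```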

Chapter 7: Classification

Introduction
  Classification problem, evaluation of classifiers
Bayesian Classifiers
  Optimal Bayes classifier, naive Bayes classifier, applications
Nearest Neighbor Classifiers
  Basic notions, choice of parameters, applications
Decision Tree Classifiers
  Basic notions, split strategies, overfitting, pruning of decision trees
Scalability to Large Databases
  SLIQ, SPRINT, RainForest
Further Approaches
  Neural networks, genetic algorithms, rough set approach, fuzzy set approaches, support vector machines, prediction

Neural Networks

Advantages

prediction accuracy is generally high

fast evaluation of the learned target function

Criticism

long training time

not easy to incorporate domain knowledge

A Neuron

[Diagram: input vector x = (x1, …, xn), weight vector w = (w1, …, wn), bias µk, weighted sum Σ, activation function f, output y]

The n-dimensional input vector x is mapped into the variable y by means of the scalar product and a nonlinear activation function.
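This mapping can be sketched in a few lines; the logistic sigmoid as activation function and the concrete numbers are illustrative assumptions:

```python
import math

def neuron_output(x, w, bias):
    """Weighted sum of the inputs plus bias, passed through a nonlinear
    activation function f (here: the logistic sigmoid)."""
    s = sum(xi * wi for xi, wi in zip(x, w)) + bias   # scalar product + bias
    return 1.0 / (1.0 + math.exp(-s))                 # y = f(s)

y = neuron_output([1.0, 0.5], [0.4, -0.2], 0.1)
# s = 0.4 - 0.1 + 0.1 = 0.4, so y = sigmoid(0.4) ≈ 0.599
```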


Network Training

Goal: obtain a set of weights that makes almost all the training examples classified correctly

Steps
  Initialize the weights with random values
  Feed the training examples into the network; for each unit, compute the net input as a linear combination of all the inputs to the unit
  Compute the output value using the activation function
  Compute the error
  Update the weights and the bias


Multi-Layer Perceptron

Structure: input vector x_i → input nodes → hidden nodes → output nodes → output vector

Net input of unit j:        I_j = Σ_i w_ij · O_i + θ_j
Output of unit j:           O_j = 1 / (1 + e^(−I_j))
Error of output node j:     Err_j = O_j · (1 − O_j) · (T_j − O_j)
Error of hidden node j:     Err_j = O_j · (1 − O_j) · Σ_k Err_k · w_jk
Weight update:              w_ij = w_ij + (l) · Err_j · O_i
Bias update:                θ_j = θ_j + (l) · Err_j

((l): learning rate, T_j: true output)


Network Pruning

A fully connected network is hard to articulate
  N input nodes, h hidden nodes, and m output nodes lead to h·(m+N) weights
  Pruning: remove some of the links without affecting the classification accuracy of the network

Extracting rules from a trained network
  Discretize activation values; replace each individual activation value by the cluster average, maintaining the network accuracy
  Enumerate the output from the discretized activation values to find rules between activation values and output
  Find the relationship between the input and activation values
  Combine the above two steps to obtain rules relating the output to the input


Genetic Algorithms

GA: based on an analogy to biological evolution
Each rule is represented by a string of bits
  e.g., "IF A1 AND NOT A2 THEN C2" can be encoded as 100
An initial population is created, consisting of randomly generated rules
A new population is formed that consists of the fittest rules and their offspring
The fitness of a rule is represented by its classification accuracy on a set of training examples
Offspring are generated by crossover and mutation
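Crossover and mutation on bit-string rules can be sketched as follows; the single-point crossover variant and the mutation-rate parameter are common choices, assumed here rather than taken from the slide:

```python
import random

def crossover(parent1, parent2, point):
    """Single-point crossover: swap the tails of two bit-string rules."""
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def mutate(rule, rate, rng=random):
    """Flip each bit independently with probability `rate`."""
    return "".join(("1" if b == "0" else "0") if rng.random() < rate else b
                   for b in rule)

# "IF A1 AND NOT A2 THEN C2" encoded as "100" (the slide's example)
child1, child2 = crossover("100", "011", point=1)
print(child1, child2)  # 111 000
```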

Rough Set Approach

Rough sets are used to approximately or "roughly" define equivalence classes
A rough set for a given class C is approximated by two sets: a lower approximation (certain to be in C) and an upper approximation (cannot be described as not belonging to C)
Finding the minimal subsets (reducts) of attributes for feature reduction is NP-hard, but a discernibility matrix is used to reduce the computation intensity

Fuzzy Set Approaches

Fuzzy logic uses truth values between 0.0 and 1.0 to represent the degree of membership (e.g., using a fuzzy membership graph)
Attribute values are converted to fuzzy values
  e.g., income is mapped into the discrete categories {low, medium, high}, with fuzzy membership values calculated for each
For a given new sample, more than one fuzzy value may apply
Each applicable rule contributes a vote for membership in the categories
Typically, the truth values for each predicted category are summed


© and acknowledgements: Prof. Dr. Hans-Peter Kriegel and Matthias Schubert (LMU Munich)

and Dr. Thorsten Joachims (U Dortmund and Cornell U)

Support Vector Machines: Setting

Objects belong to exactly one of two respective classes
For the sake of simpler formulas, the class labels used are y = −1 and y = +1
Goal: determine a hyperplane which separates both vector sets with "maximal stability"
Assign unknown elements to the halfspace of the separating hyperplane in which they reside

Support Vector Machines: Issues

Definition and efficient determination of the maximum stable hyperplane
Classes are not always linearly separable
Computation can be expensive
Restriction to two classes
…

Support Vector Machines (SVMs) [Vapnik 1979, 1995]

Observation: there is no unique hyperplane separating the two point sets p1 and p2
Question: which hyperplane separates the classes best?

[Figure: two panels showing the point sets p1 and p2 with different separating hyperplanes]

Criterion: stability at insertion

Support Vector Machines: Principle

Maximum Margin Hyperplane (MMH)
  The distance to the points from either of the two sets is maximal, i.e. at least ξ
  Minimal probability that the separating hyperplane has to be moved due to an insertion
  Best generalization behaviour
  → the MMH is "maximally stable"

[Figure: point sets p1 and p2 separated by the MMH, with margin ξ on both sides]

The MMH only depends on the points p_i whose distance to the hyperplane is exactly ξ; such a p_i is called a support vector.

Recall some algebraic notions for the feature space FS:

Inner product of two vectors x, y ∈ FS: ⟨x, y⟩
  e.g., the canonical scalar product: ⟨x, y⟩ = Σ_{i=1..d} x_i · y_i

Hyperplane: H(w, b) = { x ∈ FS | ⟨w, x⟩ + b = 0 }

Distance of a vector x to the hyperplane H(w, b):
  dist(x, H(w, b)) = (1 / √⟨w, w⟩) · (⟨w, x⟩ + b)


Computation of the Maximum Margin Hyperplane

Two conditions:

1) The classification error is zero:
   y_i = −1 ⇒ ⟨w, x_i⟩ + b < 0
   y_i = +1 ⇒ ⟨w, x_i⟩ + b > 0
   ⇔  y_i · (⟨w, x_i⟩ + b) > 0

2) The margin ξ is maximal, where ξ is the minimal distance of a training object x_i ∈ TR to the hyperplane H(w, b):
   ξ = min_{x_i ∈ TR} y_i · (1 / √⟨w, w⟩) · (⟨w, x_i⟩ + b)

Then: maximize ξ subject to ∀i ∈ [1..n]: y_i · (1 / √⟨w, w⟩) · (⟨w, x_i⟩ + b) ≥ ξ

Maximize ξ subject to ∀i ∈ [1..n]: y_i · (1 / √⟨w, w⟩) · (⟨w, x_i⟩ + b) ≥ ξ

Let ξ = 1 / √⟨w, w⟩ and reformulate the condition:

  ∀i ∈ [1..n]: y_i · ξ · (⟨w, x_i⟩ + b) ≥ ξ
  ∀i ∈ [1..n]: y_i · (⟨w, x_i⟩ + b) ≥ 1

Maximization of 1 / √⟨w, w⟩ corresponds to a minimization of ⟨w, w⟩:

Primal optimization problem:
Find a vector w that minimizes ⟨w, w⟩
subject to ∀i ∈ [1..n]: y_i · (⟨w, x_i⟩ + b) ≥ 1

Dual Optimization Problem

Transform the primal optimization problem into a dual one by using Lagrange multipliers:

Maximize L(α) = Σ_{i=1..n} α_i − ½ Σ_{i=1..n} Σ_{j=1..n} α_i · α_j · y_i · y_j · ⟨x_i, x_j⟩
subject to Σ_{i=1..n} α_i · y_i = 0 and 0 ≤ α_i

Up to now: only linearly separable data.
Problem of maximum margin optimization: how to treat non-linearly separable data?
If the data is not linearly separable: Soft Margin Optimization

[Figure: two typical problem cases]

Soft Margin Optimization

Additionally regard the number of training errors when optimizing:
  ξ_i is the distance from p_i to the margin (often called slack variable)
  C controls the influence of the individual training errors

[Figure: points p1, p2 inside the margin with slack values ξ1, ξ2]

Find a w that minimizes ½ · ⟨w, w⟩ + C · Σ_{i=1..n} ξ_i
subject to ∀i ∈ [1..n]: y_i · (⟨w, x_i⟩ + b) ≥ 1 − ξ_i and ξ_i ≥ 0

Dual optimization problem with Lagrange multipliers:

Maximize L(α) = Σ_{i=1..n} α_i − ½ Σ_{i=1..n} Σ_{j=1..n} α_i · α_j · y_i · y_j · ⟨x_i, x_j⟩
subject to Σ_{i=1..n} α_i · y_i = 0 and 0 ≤ α_i ≤ C

  α_i = C:  p_i is a support vector with ξ_i > 0
  α_i = 0:  p_i is not a support vector

Decision rule:
  h(x) = sign( Σ_{x_i ∈ SV} α_i · y_i · ⟨x_i, x⟩ + b )
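The decision rule above is a plain weighted sum over the support vectors; a minimal sketch, where the support vectors, multipliers, and bias are made-up illustrative values (in practice they come out of the optimization):

```python
def svm_decide(x, support_vectors, alpha, y, b):
    """Decision rule h(x) = sign( sum over support vectors of
    alpha_i * y_i * <x_i, x> + b )."""
    s = sum(a * yi * sum(si * xi for si, xi in zip(sv, x))
            for a, yi, sv in zip(alpha, y, support_vectors)) + b
    return 1 if s >= 0 else -1

# Hypothetical support vectors and multipliers of a trained model
support_vectors = [[1.0, 1.0], [-1.0, -1.0]]
alpha = [0.5, 0.5]
y = [+1, -1]
b = 0.0
print(svm_decide([2.0, 0.5], support_vectors, alpha, y, b))    # 1
print(svm_decide([-1.0, -2.0], support_vectors, alpha, y, b))  # -1
```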

Kernel Machines: Non-Linearly Separable Data Sets

Problem: for real data sets, a linear separation with a high classification accuracy often is not possible
Idea: transform the data non-linearly into a new space, and try to separate the data in the new space linearly (extension of the hypotheses space)

Kernel Machines: Extension of the Hypotheses Space

Principle: input space —φ→ extended feature space

Example: (x, y, z) —φ→ (x, y, z, x², xy, xz, y², yz, z²)
A linear separation in the extended feature space corresponds to a polynomial of degree 2 in the input space.

Kernel Machines: Example

Example 1: x = (x1, x2)
  φ(x) = (x1², x2², √2·x1, √2·x2, √2·x1·x2, 1)

Example 2: x = (x1, x2)
  φ(x) = (x1², x2², √2·x1·x2)

[Figures: data sets that are not linearly separable in the input space (x1, x2) become linearly separable in the transformed space]

Kernel Machines

The introduction of a kernel corresponds to a feature transformation φ(x): FS_old → FS_new

Maximize L(α) = Σ_{i=1..n} α_i − ½ Σ_{i=1..n} Σ_{j=1..n} α_i · α_j · y_i · y_j · ⟨φ(x_i), φ(x_j)⟩
subject to Σ_{i=1..n} α_i · y_i = 0 and 0 ≤ α_i ≤ C

A kernel K is a function with K_φ(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩

Examples:
  Radial basis kernel: K(x, y) = exp(−γ · ‖x − y‖²)
  Polynomial kernel of degree d: K(x, y) = (⟨x, y⟩ + 1)^d
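For the degree-2 feature map of the earlier example, one can check numerically that the polynomial kernel computes exactly the inner product in the transformed space, without ever materializing φ; the sample vectors are arbitrary:

```python
import math

def phi(x1, x2):
    """Explicit degree-2 feature map from the example:
    (x1^2, x2^2, sqrt(2)x1, sqrt(2)x2, sqrt(2)x1x2, 1)."""
    r2 = math.sqrt(2)
    return (x1 * x1, x2 * x2, r2 * x1, r2 * x2, r2 * x1 * x2, 1.0)

def poly_kernel(x, y, d=2):
    """Polynomial kernel K(x, y) = (<x, y> + 1)^d."""
    return (sum(xi * yi for xi, yi in zip(x, y)) + 1) ** d

x, y = (1.0, 2.0), (3.0, -1.0)
lhs = poly_kernel(x, y)                             # kernel in the input space
rhs = sum(a * b for a, b in zip(phi(*x), phi(*y)))  # inner product after phi
# lhs == rhs == 4.0 (up to floating point)
```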

Support Vector Machines: Discussion

+ relatively weak tendency to overfitting (generalization theory)
+ efficient classification of new objects
+ compact models
− expensive training (the transformed feature space may be very high-dimensional)
− expensive implementation
− resulting models rarely provide an intuition

What Is Prediction?

First, construct a model; second, use the model to predict unknown values
  The major method for prediction is regression
    linear and multiple regression
    non-linear regression
Prediction is different from classification
  Classification predicts categorical class labels
  Prediction models continuous-valued functions

Predictive Modeling in Databases

Predictive modeling: predict data values or construct generalized linear models based on the database data
  One can only predict value ranges or category distributions
Method outline:
  Minimal generalization
  Attribute relevance analysis
  Generalized linear model construction
  Prediction
Determine the major factors which influence the prediction
  Data relevance analysis: uncertainty measurement, entropy analysis, expert judgement, etc.
Multi-level prediction: drill-down and roll-up analysis


Models in Prediction

Linear regression: Y = α + β X
  The two parameters α and β specify the line and are estimated by applying the least squares criterion to the known values Y1, Y2, …, X1, X2, …
Multiple regression: Y = b0 + b1 X1 + b2 X2
  Many nonlinear functions can be transformed into the above
Log-linear models:
  The multi-way table of joint probabilities is approximated by a product of lower-order tables
  Probability: p(a, b, c, d) = α_ab · β_ac · χ_ad · δ_bcd

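The least squares estimates for Y = α + βX have a closed form; a minimal sketch on exactly linear sample data:

```python
def least_squares(xs, ys):
    """Estimate alpha and beta of Y = alpha + beta * X by least squares:
    beta = sum((x - mx)(y - my)) / sum((x - mx)^2),  alpha = my - beta * mx."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
         / sum((x - mx) ** 2 for x in xs)
    alpha = my - beta * mx
    return alpha, beta

alpha, beta = least_squares([1, 2, 3, 4], [3, 5, 7, 9])  # data follows Y = 1 + 2X
print(alpha, beta)  # 1.0 2.0
```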

Locally Weighted Regression

Construct an explicit approximation to f over a local region surrounding the query instance x_q

Locally weighted linear regression:
  The target function f is approximated near x_q using the linear function:
    f̂(x) = w_0 + w_1·a_1(x) + … + w_n·a_n(x)
  Minimize the squared error over the k nearest neighbors of x_q, weighted by a distance-decreasing kernel K:
    E(x_q) ≡ ½ · Σ_{x ∈ kNN(x_q)} (f(x) − f̂(x))² · K(d(x_q, x))
  Gradient descent update:
    Δw_j ≡ η · Σ_{x ∈ kNN(x_q)} K(d(x_q, x)) · (f(x) − f̂(x)) · a_j(x)
  In general, the target function could also be approximated by a constant or a quadratic function.

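The gradient update above can be sketched for 1-d inputs; the Gaussian kernel, learning rate, and step count are illustrative assumptions:

```python
import math

def lwr_predict(xq, data, k=3, eta=0.05, steps=200):
    """Locally weighted linear regression at query point xq for 1-d inputs:
    fit f^(x) = w0 + w1*x on the k nearest neighbors of xq, where each
    squared error is weighted by a distance-decreasing kernel K, using the
    update Dw_j = eta * sum K(d(xq, x)) * (f(x) - f^(x)) * a_j(x)."""
    neighbors = sorted(data, key=lambda p: abs(p[0] - xq))[:k]
    kernel = lambda d: math.exp(-d * d)        # distance-decreasing weight K
    w0 = w1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, fx in neighbors:
            err = fx - (w0 + w1 * x)           # f(x) - f^(x)
            g0 += kernel(abs(x - xq)) * err        # a_0(x) = 1
            g1 += kernel(abs(x - xq)) * err * x    # a_1(x) = x
        w0 += eta * g0
        w1 += eta * g1
    return w0 + w1 * xq

data = [(x, 2 * x + 1) for x in range(6)]      # samples of f(x) = 2x + 1
prediction = lwr_predict(2.0, data)            # close to the true value f(2) = 5
```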

Prediction: Categorical Data

Chapter 7 – Conclusions

Classification is an extensively studied problem (mainly in statistics, machine learning & neural networks)
Classification is probably one of the most widely used data mining techniques, with a lot of extensions
Scalability is still an important issue for database applications; thus, combining classification with database techniques should be a promising topic
Research directions: classification of non-relational data, e.g., text, spatial, multimedia, etc.

References (I)

C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future Generation Computer Systems, 13, 1997.
L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.
P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for scaling machine learning. In Proc. 1st Int. Conf. Knowledge Discovery and Data Mining (KDD'95), pages 39-44, Montreal, Canada, August 1995.
U. M. Fayyad. Branching on attribute values in decision tree generation. In Proc. 1994 AAAI Conf., pages 601-606, AAAI Press, 1994.
J. Gehrke, R. Ramakrishnan, and V. Ganti. RainForest: A framework for fast decision tree construction of large datasets. In Proc. 1998 Int. Conf. Very Large Data Bases, pages 416-427, New York, NY, August 1998.
T. Joachims. Learning to Classify Text using Support Vector Machines. Kluwer, 2002.
M. Kamber, L. Winstone, W. Gong, S. Cheng, and J. Han. Generalization and decision tree induction: Efficient classification in data mining. In Proc. 1997 Int. Workshop Research Issues on Data Engineering (RIDE'97), pages 111-120, Birmingham, England, April 1997.

References (II)

J. Magidson. The CHAID approach to segmentation modeling: Chi-squared automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research, pages 118-159. Blackwell Business, Cambridge, Massachusetts, 1994.
M. Mehta, R. Agrawal, and J. Rissanen. SLIQ: A fast scalable classifier for data mining. In Proc. 1996 Int. Conf. Extending Database Technology (EDBT'96), Avignon, France, March 1996.
S. K. Murthy. Automatic construction of decision trees from data: A multi-disciplinary survey. Data Mining and Knowledge Discovery, 2(4): 345-389, 1998.
J. R. Quinlan. Bagging, boosting, and C4.5. In Proc. 13th Natl. Conf. on Artificial Intelligence (AAAI'96), pages 725-730, Portland, OR, August 1996.
R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building and pruning. In Proc. 1998 Int. Conf. Very Large Data Bases, pages 404-415, New York, NY, August 1998.
J. Shafer, R. Agrawal, and M. Mehta. SPRINT: A scalable parallel classifier for data mining. In Proc. 1996 Int. Conf. Very Large Data Bases, pages 544-555, Bombay, India, September 1996.
S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, 1991.
