
useR! 2014

Max Kuhn, Ph.D.

Pfizer Global R&D

Groton, CT

max.kuhn@pfizer.com

Outline

Conventions in R

Data Splitting and Estimating Performance

Data Pre-Processing

Over–Fitting and Resampling

Training and Tuning Tree Models

Training and Tuning A Support Vector Machine

Comparing Models

Parallel Processing


Predictive Modeling

Predictive modeling (aka machine learning, aka pattern recognition, . . .) aims to generate the most accurate estimates of some quantity or event. These models are not generally meant to be descriptive and are usually not well–suited for inference.

Good discussions of the contrast between predictive and descriptive/inferential models can be found in Shmueli (2010) and Breiman (2001).

Frank Harrell's Design package is very good for modern approaches to interpretable models, such as Cox's proportional hazards model or ordinal logistic regression.

Hastie et al (2009) is a good reference for theoretical descriptions of these models, while Kuhn and Johnson (2013) focus on the practice of predictive modeling (and use R).


Modeling Conventions in R: The Formula Interface

There are two main conventions for specifying models in R: the formula interface and the non–formula (or "matrix") interface.

For the former, the predictors are explicitly listed in an R formula that looks like: y ~ var1 + var2 + . . .

For example, the formula

> modelFunction(price ~ numBedrooms + numBaths + acres,
+               data = housingData)

would predict the closing price of a house using three quantitative characteristics.

The Formula Interface

The shortcut y ~ . can be used to indicate that all of the columns in the data set (except y) should be used as a predictor.

The formula interface has many conveniences. For example, transformations, such as log(acres), can be specified in–line.

It also automatically converts factor predictors into dummy variables (using a less than full rank encoding). For some R functions (e.g. klaR:::NaiveBayes, rpart:::rpart, C50:::C5.0, . . . ), predictors are kept as factors.

Unfortunately, R does not efficiently store the information about the formula. Using this interface with data sets that contain a large number of predictors may unnecessarily slow the computations.
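One way to see this machinery at work is model.matrix, which expands a formula into the design matrix, including the in-line transformation and the dummy-variable encoding of factors. A sketch with a made-up data frame:

```r
dat <- data.frame(y     = c(1.2, 3.4, 2.2, 4.1),
                  size  = c(10, 20, 15, 30),
                  color = factor(c("red", "blue", "blue", "green")))

## log(size) is computed in-line; color becomes dummy columns
## (less than full rank: one factor level serves as the baseline)
model.matrix(y ~ log(size) + color, data = dat)
```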

The Matrix or Non–Formula Interface

The non–formula interface specifies the predictors for the model using a matrix or data frame (all the predictors in the object are used in the model). The outcome data are usually passed into the model as a vector object. For example:

> modelFunction(x = housePredictors, y = price)

In this case, transformations of data or dummy variables must be created prior to being passed to the function.

Note that not all R functions have both interfaces.

Building and Predicting Models

Modeling in R generally follows the same workflow:

1 Create the model using the basic function:
  fit <- knn(trainingData, outcome, k = 5)
2 Assess the properties of the model using print, plot, summary or other methods.
3 Predict outcomes for samples using the predict method: predict(fit, newSamples).

The model can be used for prediction without changing the original model object.
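The knn call above is schematic; caret's knn3 is one concrete function with this kind of matrix interface, so the three steps can be sketched as:

```r
library(caret)
data(iris)

## 1. Create the model (k-nearest neighbors, k = 5)
fit <- knn3(as.matrix(iris[, 1:4]), iris$Species, k = 5)

## 2. Assess the model object
print(fit)

## 3. Predict new samples (here, a few training rows for illustration)
predict(fit, newdata = as.matrix(iris[1:3, 1:4]), type = "class")
```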

Model Function Consistency

Since there are many modeling packages written by different people, there are some inconsistencies in how models are specified and predictions are made. For example, many models have only one method of specifying the model (e.g. formula method only).

Generating Class Probabilities Using Different Packages

Class        Package   predict Function Syntax
lda          MASS      predict(obj) (no options needed)
glm          stats     predict(obj, type = "response")
gbm          gbm       predict(obj, type = "response", n.trees)
mda          mda       predict(obj, type = "posterior")
rpart        rpart     predict(obj, type = "prob")
Weka         RWeka     predict(obj, type = "probability")
LogitBoost   caTools   predict(obj, type = "raw", nIter)

type = "what?" (per package)

The caret Package

The caret package was developed to:

- create a unified interface for modeling and prediction (interfaces to 150 models)
- streamline model tuning using resampling
- provide a variety of "helper" functions and classes for day–to–day model building tasks
- increase computational efficiency using parallel processing

First commits within Pfizer: 6/2005. First version on CRAN: 10/2007.

Website: http://topepo.github.io/caret/
JSS Paper: http://www.jstatsoft.org/v28/i05/paper
Model List: http://topepo.github.io/caret/bytag.html
Applied Predictive Modeling Blog: http://appliedpredictivemodeling.com/

Illustrative Data: Image Segmentation

We'll use data from Hill et al (2007) to model how well cells in an image are segmented (i.e. identified) in "high content screening" (Abraham et al, 2004).

Cells can be stained to bind to certain components of the cell (e.g. the nucleus) and fixed in a substance that preserves the natural state of the cell. The sample is then interrogated by an instrument (such as a confocal microscope) where the dye deflects light and the detectors quantify the degree of scattering for that specific wavelength. If multiple characteristics of the cells are desired, then multiple dyes and multiple light frequencies can be used simultaneously. The light scattering measurements are then processed through imaging software to quantify the desired cell characteristics.

Illustrative Data: Image Segmentation

In these images, the bright green boundaries identify the cell nucleus, while the blue boundaries define the cell perimeter. Clearly some cells are well–segmented, meaning that they have an accurate assessment of the location and size of the cell. Others are poorly segmented.

If cell size, shape, and/or quantity are the endpoints of interest in a study, then it is important that the instrument and imaging software can correctly segment cells. Given a set of image measurements, how well can we predict which cells are well–segmented (WS) or poorly–segmented (PS)?

Illustrative Data: Image Segmentation

[Figure: example cell images with segmentation boundaries.]

Illustrative Data: Image Segmentation

The authors scored 2019 cells into these two bins. They used four stains to highlight the cell body, the cell nucleus, actin and tubulin (parts of the cytoskeleton). These correspond to different optical channels (e.g. channel 3 measures actin filaments).

The data are in the caret package. The authors designated a training set (n = 1009) and a test set (n = 1010).

Data Splitting and Estimating Performance

Model Building Steps

Common steps during model building are:

- estimating model parameters (i.e. training models)
- determining the values of tuning parameters that cannot be directly calculated from the data
- calculating the performance of the final model that will generalize to new data

How do we "spend" the data to find an optimal model? We typically split data into training and test data sets:

- Training Set: these data are used to estimate model parameters and to pick the values of the complexity parameter(s) for the model.
- Test Set (aka validation set): these data can be used to get an independent assessment of model efficacy. They should not be used during model training.

Spending Our Data

The more data we spend, the better estimates we'll get (provided the data is accurate). Given a fixed amount of data:

- too much spent in training won't allow us to get a good assessment of predictive performance. We may find a model that fits the training data very well, but is not generalizable (over–fitting)
- too much spent in testing won't allow us to get a good assessment of model parameters

Statistically, the best course of action would be to use all the data for model building and use statistical methods to get good estimates of error. From a non–statistical perspective, many consumers of these models emphasize the need for an untouched set of samples to evaluate performance.

Spending Our Data

There are a few different ways to do the split: simple random sampling, stratified sampling based on the outcome, by date, and methods that focus on the distribution of the predictors.

The base R function sample can be used to create a completely random sample of the data. The caret package has a function createDataPartition that conducts data splits within groups of the data:

- For classification, this would mean sampling within the classes so as to preserve the distribution of the outcome in the training and test sets.
- For regression, the function determines the quartiles of the data set and samples within those groups.

The segmentation data comes pre-split and we will use the original training and test sets.
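A sketch of a stratified split with createDataPartition (not needed for the segmentation data, which comes pre-split; shown here for illustration):

```r
library(caret)
data(segmentationData)

set.seed(1)
## p = .75: 75% to training, stratified by the class variable
inTrain  <- createDataPartition(segmentationData$Class, p = .75, list = FALSE)
trainDat <- segmentationData[ inTrain, ]
testDat  <- segmentationData[-inTrain, ]

## Class frequencies should be roughly preserved in both sets
prop.table(table(trainDat$Class))
prop.table(table(testDat$Class))
```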

Illustrative Data: Image Segmentation

> library(caret)
> data(segmentationData)
> # get rid of the cell identifier
> segmentationData$Cell <- NULL
>
> training <- subset(segmentationData, Case == "Train")
> testing  <- subset(segmentationData, Case == "Test")
> training$Case <- NULL
> testing$Case <- NULL

Since channel 1 is the cell body, AreaCh1 measures the size of the cell.

Illustrative Data: Image Segmentation

> str(training[, 1:9])
'data.frame': 1009 obs. of 9 variables:
 $ Class                  : Factor w/ 2 levels "PS","WS": 1 2 1 2 1 1 1 2 2 2 ...
 $ AngleCh1               : num 133.8 106.6 69.2 109.4 104.3 ...
 $ AreaCh1                : int 819 431 298 256 258 358 158 315 246 223 ...
 $ AvgIntenCh1            : num 31.9 28 19.5 18.8 17.6 ...
 $ AvgIntenCh2            : num 207 116 102 127 125 ...
 $ AvgIntenCh3            : num 69.9 63.9 28.2 13.6 22.5 ...
 $ AvgIntenCh4            : num 164.2 106.7 31 46.8 71.2 ...
 $ ConvexHullAreaRatioCh1 : num 1.26 1.05 1.2 1.08 1.08 ...
 $ ConvexHullPerimRatioCh1: num 0.797 0.935 0.866 0.92 0.931 ...

> cell_lev <- levels(testing$Class)

Estimating Performance

Later, once you have a set of predictions, various metrics can be used to evaluate performance. For regression models:

- R2 is very popular, but does not penalize complexity. In many complex models, the notion of the model degrees of freedom is difficult. Unadjusted R2 can be used. (caret:::Rsquared, pls:::R2)
- the root mean square error is a common metric for understanding the performance (caret:::RMSE, pls:::RMSEP)
- Spearman's correlation may be applicable for models that are used to rank samples (cor(x, y, method = "spearman"))

Of course, honest estimates of these statistics cannot be obtained by predicting the same samples that were used to train the model. A test set and/or resampling can provide good estimates.
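These metrics are simple to compute directly; a sketch with made-up observed and predicted values:

```r
obs  <- c(1.5, 2.3, 3.1, 4.8, 5.2)
pred <- c(1.7, 2.1, 3.5, 4.4, 5.6)

## Root mean squared error
sqrt(mean((obs - pred)^2))

## Unadjusted R^2, computed as the squared correlation
cor(obs, pred)^2

## Rank-based association, useful when only the ordering matters
cor(obs, pred, method = "spearman")
```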

Estimating Performance For Classification

For classification models:

- overall accuracy can be used, but this may be problematic when the classes are not balanced.
- the Kappa statistic takes into account the expected error rate:

  Kappa = (O - E) / (1 - E)

  where O is the observed accuracy and E is the expected accuracy under chance agreement (psych:::cohen.kappa, vcd:::Kappa, . . . )
- For 2–class models, Receiver Operating Characteristic (ROC) curves can be used to characterize model performance (more later)
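The Kappa statistic is easy to compute by hand from a confusion matrix; a sketch with made-up counts:

```r
## Rows: predicted class, columns: observed class (made-up counts)
tab <- matrix(c(40, 10,
                 5, 45), nrow = 2, byrow = TRUE)

n <- sum(tab)
O <- sum(diag(tab)) / n                      # observed accuracy
E <- sum(rowSums(tab) * colSums(tab)) / n^2  # expected accuracy under chance
(O - E) / (1 - E)                            # Kappa; here (0.85 - 0.5)/0.5 = 0.7
```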

Estimating Performance For Classification

A "confusion matrix" is a cross–tabulation of the observed and predicted classes. R functions for confusion matrices are in the e1071 package (the classAgreement function), the caret package (confusionMatrix), the mda package (confusion) and others.

ROC curve functions are found in the ROCR package (performance), the verification package (roc.area), the pROC package (roc) and others.

We'll use the confusionMatrix function and the pROC package later in this class.

Estimating Performance For Classification

For 2–class classification models we might also be interested in:

- Sensitivity: given that a result is truly an event, what is the probability that the model will predict an event result?
- Specificity: given that a result is truly not an event, what is the probability that the model will predict a negative result?

(an "event" is really the event of interest)

These conditional probabilities are directly related to the false positive and false negative rates of a method. Unconditional probabilities (the positive–predictive values and negative–predictive values) can be computed, but require an estimate of what the overall event rate is in the population of interest (aka the prevalence).

Estimating Performance For Classification

For our example, let's choose the event to be the poor segmentation (PS):

Sensitivity = (# PS predicted to be PS) / (# true PS)
Specificity = (# true WS predicted to be WS) / (# true WS)

The caret package has functions called sensitivity and specificity.

ROC Curve

With two classes the Receiver Operating Characteristic (ROC) curve can be used to estimate performance using a combination of sensitivity and specificity. Given the probability of an event, many alternative cutoffs can be evaluated (instead of just a 50% cutoff). For each cutoff, we can calculate the sensitivity and specificity. The ROC curve plots the sensitivity (i.e. true positive rate) by one minus specificity (i.e. the false positive rate). The area under the ROC curve is a common metric of performance.
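A sketch of building such a curve with the pROC package, using made-up class probabilities (the truth labels and probabilities below are invented for illustration):

```r
library(pROC)

## Made-up data: true classes and predicted probabilities of "PS"
truth <- factor(c("PS", "PS", "WS", "PS", "WS", "WS", "PS", "WS"))
prob  <- c(0.9, 0.8, 0.6, 0.55, 0.4, 0.3, 0.7, 0.2)

## levels = c(control, case): WS is the control class, PS the event
rocCurve <- roc(response = truth, predictor = prob, levels = c("WS", "PS"))
auc(rocCurve)     # area under the ROC curve
plot(rocCurve)    # sensitivity vs specificity across all cutoffs
```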

ROC Curve From Class Probabilities

[Figure: ROC curve (sensitivity vs specificity) with labeled cutoffs at 0.250, 0.500, and 0.750.]

Data Pre–Processing

Pre–Processing the Data

There are a wide variety of models in R. Some models have different assumptions on the predictor data and may need to be pre–processed. For example, methods that use the inverse of the predictor cross–product matrix (i.e. (X'X)^-1) may require the elimination of collinear predictors. Others may need the predictors to be centered and/or scaled, etc.

If any data processing is required, it is a good idea to base these calculations on the training set, then apply them to any data set used for model building or prediction.

Pre–Processing the Data

Examples of pre–processing operations:

- centering and scaling
- imputation of missing data
- transformations of individual predictors (e.g. Box–Cox transformations of the predictors)
- transformations of groups of predictors, such as:
  - the "spatial–sign" transformation (i.e. x' = x / ||x||)
  - feature extraction via PCA or ICA

Centering, Scaling and Transforming

There are a few different functions for data processing in R:

- scale in base R
- ScaleAdv in pcaPP
- stdize in pls
- preProcess in caret
- normalize in sparseLDA

The first three functions do simple centering and scaling. preProcess can do a variety of techniques, so we'll look at this in more detail.

Centering and Scaling

The input is a matrix or data frame of predictor data. First, estimate the standardization parameters:

> trainX <- training[, names(training) != "Class"]
> ## Methods are "BoxCox", "YeoJohnson", "expoTrans", "center", "scale",
> ## "range", "knnImpute", "bagImpute", "medianImpute", "pca", "ica",
> ## "spatialSign"
> preProcValues <- preProcess(trainX, method = c("center", "scale"))
> preProcValues

Call:
preProcess.default(x = trainX, method = c("center", "scale"))

Created from 1009 samples and 58 variables
Pre-processing: centered, scaled

Centering and Scaling

Once the values are calculated, the predict method can be used to do the actual data transformations. Apply them to the data sets:

> scaledTrain <- predict(preProcValues, trainX)

Pre–Processing and Resampling

To get honest estimates of performance, all data transformations should be included within the cross–validation loop. This would be especially true for feature selection as well as pre–processing techniques (e.g. imputation, PCA, etc).

One function considered later, called train, can apply preProcess within resampling loops.

Over–Fitting and Resampling

Over–Fitting

Over–fitting occurs when a model inappropriately picks up on trends in the training set that do not generalize to new samples. When this occurs, assessments of the model based on the training set can show good performance that does not reproduce in future samples.

Some models have specific "knobs" to control over-fitting:

- neighborhood size in nearest neighbor models is an example
- the number of splits in a tree model

Often, poor choices for these parameters can result in over-fitting.

For example, the next slide shows a data set with two predictors. We want to be able to produce a line (i.e. decision boundary) that differentiates two classes of data. Two model fits are shown; one over–fits the training data.

The Data

[Figure: scatterplot of Class 1 and Class 2 points on Predictor A vs Predictor B.]

Two Model Fits

[Figure: the same data with two decision boundaries, Model #1 and Model #2; one of them over–fits the training data.]

Characterizing Over–Fitting Using the Training Set

One obvious way to detect over–fitting is to use a test set. However, repeated "looks" at the test set can also lead to over–fitting. Resampling the training samples allows us to know when we are making poor choices for the values of these parameters (the test set is not used).

Resampling methods try to "inject variation" in the system to approximate the model's performance on future samples. We'll walk through several types of resampling methods for training set samples.

K–Fold Cross–Validation

Here, we randomly split the data into K distinct blocks of roughly equal size.

1 We leave out the first block of data and fit a model.
2 This model is used to predict the held-out block.
3 We continue this process until we've predicted all K held–out blocks.

The final performance is based on the hold-out predictions. K is usually taken to be 5 or 10, and leave-one-out cross–validation has each sample as a block. Repeated K–fold CV creates multiple versions of the folds and aggregates the results (I prefer this method).

caret:::createFolds, caret:::createMultiFolds
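createFolds returns the held-out indices; a sketch of a manual 10-fold loop built on it (using knn3 and the iris data purely as stand-ins):

```r
library(caret)
set.seed(1)

y     <- iris$Species
folds <- createFolds(y, k = 10)   # list of held-out index vectors

acc <- sapply(folds, function(holdOut) {
  fit  <- knn3(as.matrix(iris[-holdOut, 1:4]), y[-holdOut], k = 5)
  pred <- predict(fit, as.matrix(iris[holdOut, 1:4]), type = "class")
  mean(pred == y[holdOut])        # accuracy on this held-out block
})
mean(acc)                         # cross-validated accuracy
```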

K–Fold Cross–Validation

[Figure: schematic of K–fold cross–validation, with each block held out in turn.]

Repeated Training/Test Splits

(aka leave–group–out cross–validation)

A random proportion of data (say 80%) are used to train a model while the remainder is used for prediction. This process is repeated many times and the average performance is used. These splits can also be generated using stratified sampling.

With many iterations (20 to 100), this procedure has smaller variance than K–fold CV, but is likely to be biased.

caret:::createDataPartition

Repeated Training/Test Splits

[Figure: schematic of repeated training/test splits.]

Bootstrapping

Bootstrapping takes a random sample with replacement. The random sample is the same size as the original data set. Samples may be selected more than once and each sample has a 63.2% chance of showing up at least once. Some samples won't be selected and these samples will be used to predict performance. The process is repeated multiple times (say 30–100).

This procedure also has low variance but non–zero bias when compared to K–fold CV.

sample, caret:::createResample
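The 63.2% figure is easy to verify by simulation: each sample's chance of being drawn at least once is 1 - (1 - 1/n)^n, which tends to 1 - exp(-1). A sketch:

```r
set.seed(1)
n <- 1000
inBag <- replicate(200, {
  idx <- sample(n, replace = TRUE)   # bootstrap sample, same size as the data
  length(unique(idx)) / n            # fraction appearing at least once
})
mean(inBag)                          # close to 1 - exp(-1) = 0.632
```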

Bootstrapping

[Figure: schematic of bootstrap resampling.]

The Big Picture

We think that resampling will give us honest estimates of future performance, but there is still the issue of which model to select.

One algorithm to select models:

Define sets of model parameter values to evaluate;
for each parameter set do
  for each resampling iteration do
    Hold–out specific samples;
    Fit the model on the remainder;
    Predict the hold–out samples;
  end
  Calculate the average performance across hold–out predictions;
end
Determine the optimal parameter set.

K–Nearest Neighbors Classification

[Figure: Class 1 and Class 2 points plotted on Predictor A vs Predictor B, with a highlighted sample.]

The Big Picture – KNN Example

Using k–nearest neighbors as an example:

Randomly put samples into 10 distinct groups;
for i = 1 . . . 30 do
  Create a bootstrap sample;
  Hold–out data not in sample;
  for k = 1, 3, . . . 29 do
    Fit the model on the bootstrapped sample;
    Predict the i-th holdout and save results;
  end
end
Calculate the average accuracy across the 30 hold–out sets of predictions;
Determine k based on the highest cross–validated accuracy.

The Big Picture – KNN Example

[Figure: holdout accuracy (roughly 0.60 to 0.85) versus the number of neighbors, for k up to 30.]

A General Strategy

There is usually an inverse relationship between model flexibility/power and interpretability. In the best case, we would like a parsimonious and interpretable model that has excellent performance. Unfortunately, that is not usually realistic.

One strategy:

1 start with the most powerful black–box type models
2 get a sense of the best possible performance
3 then fit more simplistic/understandable models
4 evaluate the performance cost of using a simpler model

Training and Tuning Tree Models

Classification Trees

A classification tree searches through each predictor to find a value of a single variable that best splits the data into two groups. Typically, the best split minimizes impurity of the outcome in the resulting data subsets. For the two resulting groups, the process is repeated until a hierarchical structure (a tree) is created.

In effect, trees partition the X space into rectangular sections that assign a single value to samples within the rectangle.

An Example — First Split

[Figure: tree with a single split on TotalIntenCh2 at 45324, giving Node 2 (n = 454) and Node 3 (n = 555) with class-proportion bars.]

The Next Round of Splitting

[Figure: after the TotalIntenCh2 split, the left branch splits on IntenCoocASMCh3 (cut near 0.6) and the right branch on FiberWidthCh1 at 9.67, giving terminal nodes of size 447, 7, 154, and 401.]

An Example

There are many tree–based packages in R. The main packages for fitting single trees are rpart, party, RWeka, evtree and C50. rpart fits the classical "CART" models of Breiman et al (1984).

To obtain a shallow tree with rpart:

> library(rpart)
> rpart1 <- rpart(Class ~ ., data = training,
+                 control = rpart.control(maxdepth = 2))
> rpart1
n= 1009

node), split, n, loss, yval, (yprob)
      * denotes terminal node

1) root 1009 373 PS (0.63033 0.36967)
  2) TotalIntenCh2< 4.532e+04 454 34 PS (0.92511 0.07489)
    4) IntenCoocASMCh3< 0.6022 447 27 PS (0.93960 0.06040) *
    5) IntenCoocASMCh3>=0.6022 7 0 WS (0.00000 1.00000) *
  3) TotalIntenCh2>=4.532e+04 555 216 WS (0.38919 0.61081)
    6) FiberWidthCh1< 9.673 154 47 PS (0.69481 0.30519) *
    7) FiberWidthCh1>=9.673 401 109 WS (0.27182 0.72818) *

Visualizing the Tree

The rpart package has functions plot.rpart and text.rpart to visualize the final tree. The partykit package also has enhanced plotting functions for recursive partitioning. We can convert the rpart object to a new class called party and plot it to see more in the terminal nodes:

> rpart1a <- as.party(rpart1)
> plot(rpart1a)

A Shallow rpart Tree Using the party Package

[Figure: the maxdepth = 2 tree plotted via partykit — root split on TotalIntenCh2 at 45324, then IntenCoocASMCh3 and FiberWidthCh1, with class-proportion bars in the four terminal nodes.]

Tree Fitting Process

Splitting would continue until some criterion for stopping is met, such as the minimum number of observations in a node.

The largest possible tree may over-fit, and "pruning" is the process of iteratively removing terminal nodes and watching the changes in resampling performance (usually 10–fold CV). There are many possible pruning paths: how many possible trees are there with 6 terminal nodes?

Trees can be indexed by their maximum depth, and the classical CART methodology uses a cost-complexity parameter (Cp) to determine best tree depth.

The Final Tree

Previously, we told rpart to use a maximum of two splits. By default, rpart will conduct as many splits as possible, then use 10–fold cross–validation to prune the tree. Specifically, the "one SE" rule is used: estimate the standard error of performance for each tree size, then choose the simplest tree within one standard error of the absolute best tree size.

> rpartFull <- rpart(Class ~ ., data = training)
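The Cp table behind this pruning can be inspected directly on the fitted object; a sketch using rpartFull from above (note this picks the minimum cross-validated error for simplicity, whereas the "one SE" rule described above chooses the simplest tree within one SE of it):

```r
## Cross-validated error for each cost-complexity value
printcp(rpartFull)

## Prune back to the Cp with the smallest cross-validated error (xerror)
bestCp      <- rpartFull$cptable[which.min(rpartFull$cptable[, "xerror"]), "CP"]
rpartPruned <- prune(rpartFull, cp = bestCp)
```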

**Tree Growing and Pruning
**

1

TotalIntenCh2

< 45324

[Figure: "Full Tree" — the unpruned rpart tree for the cell segmentation data, splitting on TotalIntenCh2, FiberWidthCh1, ConvexHullAreaRatioCh1, VarIntenCh4, ShapeP2ACh1 and other predictors, with PS/WS class-proportion bars at each terminal node, shown alongside the accuracy-versus-cp resampling profile.]

[Figure: "Start Pruning" — a partially pruned version of the tree, again with PS/WS class-proportion bars at the terminal nodes and the accuracy-versus-cp profile.]

[Figure: "Too Much!" — an over-pruned tree retaining only the TotalIntenCh2 and FiberWidthCh1 splits, with the accuracy-versus-cp profile.]

Max Kuhn (Pfizer)

Predictive Modeling

62 / 132

**The Final Tree**

> rpartFull
n= 1009

node), split, n, loss, yval, (yprob)
      * denotes terminal node

  1) root 1009 373 PS (0.63033 0.36967)
    2) TotalIntenCh2< 4.532e+04 454 34 PS (0.92511 0.07489)
      4) IntenCoocASMCh3< 0.6022 447 27 PS (0.93960 0.06040) *
      5) IntenCoocASMCh3>=0.6022 7 0 WS (0.00000 1.00000) *
    3) TotalIntenCh2>=4.532e+04 555 216 WS (0.38919 0.61081)
      6) FiberWidthCh1< 9.673 154 47 PS (0.69481 0.30519)
       12) AvgIntenCh1< 323.9 139 33 PS (0.76259 0.23741) *
       13) AvgIntenCh1>=323.9 15 1 WS (0.06667 0.93333) *
      7) FiberWidthCh1>=9.673 401 109 WS (0.27182 0.72818)
       14) ConvexHullAreaRatioCh1>=1.174 63 26 PS (0.58730 0.41270)
         28) VarIntenCh4>=172 19 2 PS (0.89474 0.10526) *
         29) VarIntenCh4< 172 44 20 WS (0.45455 0.54545)
           58) KurtIntenCh3< 4.05 24 8 PS (0.66667 0.33333) *
           59) KurtIntenCh3>=4.05 20 4 WS (0.20000 0.80000) *
       15) ConvexHullAreaRatioCh1< 1.174 338 72 WS (0.21302 0.78698)
         30) ShapeP2ACh1>=1.304 179 53 WS (0.29609 0.70391)
           60) AvgIntenCh4>=375.2 17 2 PS (0.88235 0.11765) *
           61) AvgIntenCh4< 375.2 162 38 WS (0.23457 0.76543)
            122) LengthCh1< 20.93 10 3 PS (0.70000 0.30000) *
            123) LengthCh1>=20.93 152 31 WS (0.20395 0.79605)
              246) NeighborMinDistCh1< 22.03 32 14 WS (0.43750 0.56250)
                492) AvgIntenCh1< 110.6 16 3 PS (0.81250 0.18750) *
                493) AvgIntenCh1>=110.6 16 1 WS (0.06250 0.93750) *
              247) NeighborMinDistCh1>=22.03 120 17 WS (0.14167 0.85833) *
         31) ShapeP2ACh1< 1.304 159 19 WS (0.11950 0.88050) *


**The Final rpart Tree**

[Figure: the final pruned rpart tree, with splits on TotalIntenCh2, IntenCoocASMCh3, FiberWidthCh1, AvgIntenCh1, ConvexHullAreaRatioCh1, VarIntenCh4, ShapeP2ACh1, KurtIntenCh3, AvgIntenCh4, LengthCh1 and NeighborMinDistCh1, and PS/WS class-proportion bars at each terminal node.]

Test Set Results

> rpartPred <- predict(rpartFull, testing, type = "class")
> confusionMatrix(rpartPred, testing$Class)   # requires 2 factor vectors

Confusion Matrix and Statistics

          Reference
Prediction  PS  WS
        PS 561 108
        WS 103 238

               Accuracy : 0.791
                 95% CI : (0.765, 0.816)
    No Information Rate : 0.657
    P-Value [Acc > NIR] : <2e-16

                  Kappa : 0.535
 Mcnemar's Test P-Value : 0.783

            Sensitivity : 0.845
            Specificity : 0.688
         Pos Pred Value : 0.839
         Neg Pred Value : 0.698
             Prevalence : 0.657
         Detection Rate : 0.555
   Detection Prevalence : 0.662
      Balanced Accuracy : 0.766

       'Positive' Class : PS

Manually Tuning the Model

CART conducts an internal 10-fold CV to tune the model to be within one SE of the absolute minimum. We might want to tune the model ourselves for several reasons:

10-fold CV can be very noisy for small to moderate sample sizes

We might want to risk some over-fitting in exchange for higher performance

Using a performance metric other than error may be preferable, especially with severe class imbalances. We can manually make tradeoffs between sensitivity and specificity for different values of Cp

To this end, we will look at the train function in the caret package.

Tuning the Model

There are a few functions that can be used for this purpose in R:

the errorest function in the ipred package can be used to resample a single model (e.g. a gbm model with a specific number of iterations and tree depth)

the e1071 package has a function (tune) for five models that will conduct resampling over a grid of tuning values

caret has a similar function called train for over 150 models. Different resampling methods are available, as are custom performance metrics and facilities for parallel processing.

The train Function

The basic syntax for the function is:

> rpartTune <- train(formula, data, method)

Looking at ?train, using method = "rpart" can be used to tune a tree over Cp, so we can use:

> rpartTune <- train(Class ~ ., data = training,
+                    method = "rpart")

We'll add a bit of customization too.

The train Function

By default, the function will tune over 3 values of the tuning parameter (Cp for this model). For rpart, the train function determines the distinct number of values of Cp for the data.

The tuneLength argument can be used to evaluate a broader set of models:

> rpartTune <- train(Class ~ ., data = training,
+                    method = "rpart",
+                    tuneLength = 10)

The train Function

The default resampling scheme is the bootstrap. Let's use repeated 10-fold cross-validation instead. To do this, there is a control function that handles some of the optional arguments. To use five repeats of 10-fold cross-validation, we would use:

> cvCtrl <- trainControl(method = "repeatedcv", repeats = 5)
> rpartTune <- train(Class ~ ., data = training,
+                    method = "rpart",
+                    tuneLength = 10,
+                    trControl = cvCtrl)

The train Function

Also, the default CART algorithm uses overall accuracy and the one standard-error rule to prune the tree. We might want to choose the tree complexity based on the largest absolute area under the ROC curve.

A custom performance function can be passed to train. The package has one that calculates the ROC curve, sensitivity and specificity:

> ## Make some random example data to show usage of twoClassSummary()
> fakeData <- data.frame(pred = testing$Class,
+                        obs = sample(testing$Class),
+                        ## Requires a column for class probabilities
+                        ## named after the first level
+                        PS = runif(nrow(testing)))
>
> twoClassSummary(fakeData, lev = levels(fakeData$obs))

   ROC   Sens   Spec
0.5221 0.6822 0.3902

The train Function

We can pass the twoClassSummary function in through trainControl. However, to calculate the ROC curve, we need the model to predict the class probabilities. The classProbs option will also do this:

> cvCtrl <- trainControl(method = "repeatedcv", repeats = 5,
+                        summaryFunction = twoClassSummary,
+                        classProbs = TRUE)

Finally, we tell the function to optimize the area under the ROC curve using the metric argument:

> set.seed(1)
> rpartTune <- train(Class ~ ., data = training,
+                    method = "rpart",
+                    tuneLength = 10,
+                    metric = "ROC",
+                    trControl = cvCtrl)

train Results

> rpartTune

CART

1009 samples
  58 predictors
   2 classes: 'PS', 'WS'

No pre-processing
Resampling: Cross-Validated (10 fold, repeated 5 times)
Summary of sample sizes: 909, 909, 908, 907, 908, 908, ...

Resampling results across tuning parameters:

  cp     ROC  Sens  Spec  ROC SD  Sens SD  Spec SD
  [10 rows, for cp values from 0.005 to 0.3]

ROC was used to select the optimal model using the largest value.
The final value used for the model was cp = 0.005.

Working With the train Object

There are a few methods of interest:

plot.train can be used to plot the resampling profiles across the different models

print.train shows a textual description of the results

predict.train can be used to predict new samples

there are a few others that we'll mention shortly

Additionally, the final model fit (i.e. the model with the best resampling results) is in a sub-object called finalModel. So in our example, rpartTune is of class train and the object rpartTune$finalModel is of class rpart.

Resampled ROC Profile

plot(rpartTune, scales = list(x = list(log = 10)))

[Figure: resampled ROC (repeated cross-validation) versus the complexity parameter cp, on a log10 x-axis.]

Resampled ROC Profile

ggplot(rpartTune) + scale_x_log10()

[Figure: the same resampled ROC profile versus cp, drawn with ggplot2.]

Predicting New Samples

There are at least two ways to get predictions from a train object:

predict(rpartTune$finalModel, newdata, type = "class")
predict(rpartTune, newdata)

The first method uses predict.rpart. If there is any extra or non-standard syntax, this must also be specified. predict.train does the same thing, but takes care of any minutia that is specific to the predict method in question.

Test Set Results

> rpartPred2 <- predict(rpartTune, testing)
> confusionMatrix(rpartPred2, testing$Class)

Confusion Matrix and Statistics

          Reference
Prediction  PS  WS
        PS 539  89
        WS 125 257

               Accuracy : 0.788
                 95% CI : (0.762, 0.813)
    No Information Rate : 0.657
    P-Value [Acc > NIR] : <2e-16

                  Kappa : 0.541
 Mcnemar's Test P-Value : 0.0167

            Sensitivity : 0.812
            Specificity : 0.743
         Pos Pred Value : 0.858
         Neg Pred Value : 0.673
             Prevalence : 0.657
         Detection Rate : 0.534
   Detection Prevalence : 0.622
      Balanced Accuracy : 0.777

       'Positive' Class : PS

Predicting Class Probabilities

predict.train has an argument type that can be used to get predicted class probabilities for different models:

> rpartProbs <- predict(rpartTune, testing, type = "prob")
> head(rpartProbs)

      PS     WS
1 0.9396 0.0604
5 0.9396 0.0604
6 0.9396 0.0604
7 0.8805 0.1195
8 0.8805 0.1195
9 0.9396 0.0604

Creating the ROC Curve

The pROC package can be used to create ROC curves. The function roc is used to capture the data and compute the ROC curve. The functions plot.roc and auc.roc generate the plot and area under the curve, respectively.

> library(pROC)
> rpartROC <- roc(testing$Class, rpartProbs[, "PS"],
+                 levels = rev(cell_lev))
> plot(rpartROC, type = "S", print.thres = .5)

Call:
roc.default(response = testing$Class, predictor = rpartProbs[, "PS"],
    levels = rev(cell_lev))

Data: rpartProbs[, "PS"] in 346 controls (testing$Class WS) < 664 cases (testing$Class PS).
Area under the curve: 0.836

Classification Tree ROC Curve

[Figure: the ROC curve for the classification tree; the 0.500 probability threshold is marked at specificity 0.743, sensitivity 0.812.]

Pros and Cons of Single Trees

Trees can be computed very quickly and have simple interpretations. Also, they have built-in feature selection; if a predictor was not used in any split, the model is completely independent of that data.

Unfortunately, trees do not usually have optimal performance when compared to other methods. Also, small changes in the data can drastically affect the structure of a tree.

This last point has been exploited to improve the performance of trees via ensemble methods where many trees are fit and predictions are aggregated across the trees. Examples are bagging, boosting and random forests.

Boosting Algorithms

A method to "boost" weak learning algorithms (e.g. single trees) into strong learning algorithms.

Boosted trees try to improve the model fit over different trees by considering past fits (not unlike iteratively reweighted least squares).

The basic tree boosting algorithm:

Initialize equal weights per sample;
for j = 1 ... M iterations do
    Fit a classification tree using sample weights (denote the model equation as fj(x));
    forall the misclassified samples do
        increase sample weight
    end
    Save a "stage-weight" (βj) based on the performance of the current model;
end

Boosted Trees (Original "adaBoost" Algorithm)

In this formulation, the categorical response yi is coded as either {−1, 1} and the model fj(x) produces values of {−1, 1}.

The final prediction is obtained by first predicting using all M trees, then weighting each prediction:

f(x) = (1/M) Σ_{j=1}^{M} βj fj(x)

where fj is the jth tree fit and βj is the stage weight for that tree.

The final class is determined by the sign of the model prediction. The weights are based on quality of each tree.

In English: the final prediction is a weighted average of each tree's prediction.
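The weighted vote above is easy to express in code. Below is a minimal sketch (in Python, since the idea is language-agnostic); the one-split "trees", stage weights and query points are all invented for illustration, not fitted models:

```python
# Sketch of the adaBoost prediction rule: a stage-weighted vote of M trees,
# each returning -1 or 1; the predicted class is the sign of the average.

def boosted_predict(trees, stage_weights, u):
    """f(u) = (1/M) * sum_j beta_j * f_j(u); classify by the sign of f."""
    M = len(trees)
    f = sum(beta * tree(u) for tree, beta in zip(trees, stage_weights)) / M
    return 1 if f > 0 else -1

# Three hypothetical one-split "trees" on a single predictor:
trees = [lambda u: 1 if u > 0.3 else -1,
         lambda u: 1 if u > 0.5 else -1,
         lambda u: -1 if u > 0.9 else 1]
stage_weights = [0.8, 1.4, 0.3]  # better trees earn larger stage weights

print(boosted_predict(trees, stage_weights, 0.95))  # 1: the weak third tree is outvoted
```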

Advances in Boosting

Although originally developed in the computer science literature, statistical

views on boosting made substantive improvements to the model.

For example:

different loss functions could be used, such as squared-error loss,

binomial likelihood etc

the rate at which the boosted tree adapts to new model fits could be

controlled and tweaked

sub–sampling the training set within each boosting iteration could be

used to increase model performance

The most well known statistical boosting model is the stochastic gradient

boosting of Friedman (2002).

However, there are numerous approaches to boosting. Many use non–tree

models as the base learner.


Boosting Packages in R

Boosting functions for trees in R:

gbm in the gbm package

ada in ada

blackboost in mboost

C5.0 in C50

There are also packages for boosting other models (e.g. the mboost package)

The CRAN Machine Learning Task View has a more complete list:

http://cran.r-project.org/web/views/MachineLearning.html


Boosting via C5.0

C5.0 is an evolution of the C4.5 algorithm of Quinlan (1993). Compared to CART,

some of the differences are:

A di↵erent impurity measure is used (entropy)

Tree pruning is done using pessimistic pruning

Splits on categorical predictors are handled very differently

While the stochastic gradient boosting machines diverged from the original

adaboost algorithm, C5.0 does something similar to adaboost.

After the first tree is created, weights are determined and subsequent

iterations create weighted trees of about the same size as the first.

The final prediction is a simple average (i.e. no stage weights).


C5.0 Weighting Scheme

[Figure: sample weight versus boosting iteration (1–200), shown separately for correctly and incorrectly classified samples; weights grow for misclassified samples and shrink toward zero for correctly classified ones.]

C5.0 Syntax

This function has pretty standard syntax:

> C5.0(x = predictors, y = factorOutcome)

There are a few arguments:

trials: the number of boosting iterations

rules: a logical to indicate whether the tree(s) should be collapsed into rules

winnow: a logical that enacts a feature selection step prior to model building

costs: a matrix that places unbalanced costs of different types of errors

The prediction code is fairly standard:

> predict(object, trials, type)
> ## type is "class" or "prob"
> ## trials can be <= the original argument

Tuning the C5.0 Model

Let's use the basic tree model (i.e. no rules) with no additional feature selection.

We will tune the model over the number of boosting iterations (1 ... 100).

Note that we do not have to fit 100 models for each iteration of resampling. We fit the 100 iteration model and derive the other 99 predictions using just the predict method.

We call this the "sub-model" trick. train uses it whenever possible (including blackboost, C5.0, cubist, earth, enet, gamboost, gbm, glmboost, glmnet, lars, lasso, logitBoost, pam, pcr, pls, rpart and others).
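To see why the sub-model trick works, note that an additive boosted model's prediction after k iterations is just a running sum over the first k fits. A toy sketch (Python; the "trees" here are stand-in functions, not fitted models):

```python
# Sketch of the "sub-model" trick: fit one M-iteration additive ensemble,
# then read off predictions for every smaller number of iterations by
# truncating the running sum -- no refitting per candidate value.

def staged_scores(trees, u):
    """Return the ensemble score after 1, 2, ..., M iterations."""
    scores, total = [], 0.0
    for tree in trees:
        total += tree(u)
        scores.append(total)
    return scores

# Five stand-in "boosting iterations":
trees = [lambda u: 0.5 if u > 0 else -0.5] * 3 + [lambda u: -0.25] * 2
print(staged_scores(trees, 1.0))  # [0.5, 1.0, 1.5, 1.25, 1.0]
```

A tuning routine can then pick the best number of iterations from this single fit rather than from one hundred separate ones.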

Tuning the C5.0 Model

train was designed to make the syntax changes between models minimal. Here, we will specify exactly what values the model should tune over. A data frame is used with one row per tuning variable combination.

> grid <- expand.grid(model = "tree",
+                     trials = 1:100,
+                     winnow = FALSE)
> set.seed(1)
> c5Tune <- train(trainX, training$Class,
+                 method = "C5.0",
+                 metric = "ROC",
+                 tuneGrid = grid,
+                 trControl = cvCtrl)

Model Output

> c5Tune

C5.0

1009 samples
  58 predictors
   2 classes: 'PS', 'WS'

No pre-processing
Resampling: Cross-Validated (10 fold, repeated 5 times)
Summary of sample sizes: 909, 909, 908, 907, 908, 908, ...

Resampling results across tuning parameters:

  trials  ROC  Sens  Spec  ROC SD  Sens SD  Spec SD
  [100 rows, for trials = 1 ... 100]

Tuning parameter 'model' was held constant at a value of tree
Tuning parameter 'winnow' was held constant at a value of FALSE
ROC was used to select the optimal model using the largest value.
The final values used for the model were trials = 96, model = tree and winnow = FALSE.

Boosted Tree Resampling Profile

ggplot(c5Tune)

[Figure: resampled ROC (repeated cross-validation) versus the number of boosting iterations (1–100).]

Test Set Results

> c5Pred <- predict(c5Tune, testing)
> confusionMatrix(c5Pred, testing$Class)

Confusion Matrix and Statistics

          Reference
Prediction  PS  WS
        PS 562  83
        WS 102 263

               Accuracy : 0.817
                 95% CI : (0.792, 0.84)
    No Information Rate : 0.657
    P-Value [Acc > NIR] : <2e-16

                  Kappa : 0.599
 Mcnemar's Test P-Value : 0.186

            Sensitivity : 0.846
            Specificity : 0.760
         Pos Pred Value : 0.871
         Neg Pred Value : 0.721
             Prevalence : 0.657
         Detection Rate : 0.556
   Detection Prevalence : 0.639
      Balanced Accuracy : 0.803

       'Positive' Class : PS

Test Set ROC Curve

> c5Probs <- predict(c5Tune, testing, type = "prob")
> head(c5Probs, 3)

       PS     WS
1 0.78337 0.2166
5 0.86967 0.1303
6 0.07672 0.9233

> library(pROC)
> c5ROC <- roc(predictor = c5Probs$PS,
+              response = testing$Class,
+              levels = rev(levels(testing$Class)))

col = "#9E0142") Max Kuhn (Pfizer) Predictive Modeling 96 / 132 .default(response = testing$Class.891 > plot(rpartROC. Area under the curve: 0. predictor = c5Probs$PS. type = "S") > plot(c5ROC.Test Set ROC Curve > c5ROC Call: roc. add = TRUE. levels = rev(levels(testing$Class))) Data: c5Probs$PS in 346 controls (testing$Class WS) < 664 cases (testing$Class PS).

Test Set ROC Curves

[Figure: test-set ROC curves for the CART and C5.0 models.]

Test Set Probabilities

histogram(~c5Probs$PS | testing$Class,
          xlab = "Probability of Poor Segmentation")

[Figure: histograms of the predicted probability of poor segmentation, paneled by true class (PS, WS).]

Support Vector Machines (SVM)

This is a class of powerful and very flexible models. SVMs for classification use a completely different objective function called the margin.

Suppose a hypothetical situation: a dataset of two predictors and we are trying to predict the correct class (two possible classes). Let's further suppose that these two predictors completely separate the classes.

[Figure: two completely separable classes plotted against Predictor 1 and Predictor 2.]

**Support Vector Machines (SVM)**

There are an infinite number of straight lines that we can use to separate

these two groups. Some must be better than others...

The margin is defined by equally spaced boundaries on each side of the

line.

To maximize the margin, we try to make it as large as possible without

capturing any samples.

As the margin increases, the solution becomes more robust.

SVMs determine the slope and intercept of the line which maximizes the

margin.


Maximizing the Margin

[Figure: the two classes with the maximum-margin line and its equally spaced margin boundaries, plotted against Predictor 1 and Predictor 2.]

**SVM Prediction Function**

Suppose our two classes are coded as –1 or 1.

The SVM model estimates n parameters (α1, ..., αn) for the model.

Regularization is used to avoid saturated models (more on that in a

minute).

For a new point u, the model predicts:

f(u) = β0 + Σ_{i=1}^{n} αi yi xi′u

The decision rule would predict class #1 if f(u) > 0 and class #2 otherwise.

Data points that are support vectors have αi ≠ 0, so the prediction equation is only affected by the support vectors.

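The prediction equation translates almost directly into code. A hypothetical sketch (in Python; the α values, intercept and data points below are invented for illustration, not estimated by any solver):

```python
# Sketch of the SVM decision function f(u) = b0 + sum_i alpha_i * y_i * <x_i, u>.
# Only samples with alpha_i != 0 (the support vectors) contribute to the sum.

def svm_decision(u, X, y, alpha, b0):
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    return b0 + sum(a * yi * dot(xi, u)
                    for xi, yi, a in zip(X, y, alpha) if a != 0)

X     = [[1.0, 1.0], [-1.0, -1.0], [3.0, 0.0]]
y     = [1, -1, 1]
alpha = [0.5, 0.5, 0.0]   # the third point is not a support vector
b0    = 0.0

f = svm_decision([2.0, 0.5], X, y, alpha, b0)
print(f, "-> class", 1 if f > 0 else -1)   # 2.5 -> class 1
```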

Support Vector Machines (SVM)

When the classes overlap, points are allowed within the margin. The number of points is controlled by a cost parameter. The points that are within the margin (or on its boundary) are the support vectors.

Consequences of the fact that the prediction function only uses the support vectors:

the prediction equation is more compact and efficient

the model may be more robust to outliers

The Kernel Trick

You may have noticed that the prediction function was a function of an inner product between two sample vectors (xi′u). It turns out that this opens up some new possibilities.

Nonlinear class boundaries can be computed using the "kernel trick". The predictor space can be expanded by adding nonlinear functions in x. These functions, which must satisfy specific mathematical criteria, include common functions:

Polynomial: K(x, u) = (1 + x′u)^p
Radial basis function: K(x, u) = exp(−σ‖x − u‖²)

We don't need to store the extra dimensions; these functions can be computed quickly.
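Both kernels are one-liners over the inner product or distance between two samples. A small sketch (Python; the choices p = 2 and σ = 0.5 are arbitrary):

```python
# The two kernel functions from the slide, written out directly.
import math

def polynomial_kernel(x, u, p=2):
    """K(x, u) = (1 + <x, u>)^p"""
    return (1 + sum(a * b for a, b in zip(x, u))) ** p

def rbf_kernel(x, u, sigma=0.5):
    """K(x, u) = exp(-sigma * ||x - u||^2)"""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, u))
    return math.exp(-sigma * sq_dist)

x, u = [1.0, 0.0], [0.0, 1.0]
print(polynomial_kernel(x, u))   # inner product is 0, so (1 + 0)^2 = 1.0
print(rbf_kernel(x, x))          # identical points give the maximum value, 1.0
```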

SVM Regularization

As previously discussed, SVMs also include a regularization parameter that controls how much the regression line can adapt to the data; smaller values result in more linear (i.e. flat) surfaces. This parameter is generally referred to as "Cost".

If the cost parameter is large, there is a significant penalty for having samples within the margin, and the boundary becomes very flexible.

Tuning the cost parameter, as well as any kernel parameters, becomes very important as these models have the ability to greatly over-fit the training data. (animation)

SVM Models in R

There are several packages that include SVM models:

e1071 has the function svm for classification (2+ classes) and regression with 4 kernel functions

klaR has svmlight, which is an interface to the C library of the same name. It can do classification and regression with 4 kernel functions (or user-defined functions)

svmpath has an efficient function for computing 2-class models (including 2 kernel functions)

kernlab contains ksvm for classification and regression with 9 built-in kernel functions. Additional kernel function classes can be written. Also, ksvm can be used for text mining with the tm package.

Personally, I prefer kernlab because it is the most general and contains other kernel method functions (e1071 is probably the most popular).

Tuning SVM Models

We need to come up with reasonable choices for the cost parameter and any other parameters associated with the kernel, such as

polynomial degree for the polynomial kernel

the scale parameter (σ) for radial basis functions (RBF)

We'll focus on RBF kernel models here, so we have two tuning parameters. However, there is a potential shortcut for RBF kernels: reasonable values of σ can be derived from elements of the kernel matrix of the training set. The manual for the sigest function in kernlab has "The estimation [for σ] is based upon the 0.1 and 0.9 quantile of |x − x′|²."

Anecdotally, we have found that the mid-point between these two numbers can provide a good estimate for this tuning parameter. This leaves only the cost parameter for tuning.
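The quantile shortcut can be sketched directly (Python; kernlab's actual sigest samples pairs and handles scaling differently, so treat this only as an illustration of the idea): collect the pairwise squared distances, take their 0.1 and 0.9 quantiles, and use a value between the two implied scales.

```python
# Sketch of a sigest-style heuristic for the RBF scale parameter sigma:
# look at the distribution of ||x - x'||^2 over training pairs, take its
# 0.1 and 0.9 quantiles, and return the mid-point of the two inverse
# quantiles as a plausible sigma (a hypothetical simplification).

def rbf_sigma_estimate(X):
    sq_dists = sorted(
        sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
        for i in range(len(X)) for j in range(i + 1, len(X))
    )
    lo = sq_dists[int(0.1 * (len(sq_dists) - 1))]   # ~0.1 quantile
    hi = sq_dists[int(0.9 * (len(sq_dists) - 1))]   # ~0.9 quantile
    return 0.5 * (1 / lo + 1 / hi)

X = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.5, 1.5]]
print(rbf_sigma_estimate(X))
```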

SVM Example

We can tune the SVM model over the cost parameter:

> set.seed(1)
> svmTune <- train(x = trainX,
+                  y = training$Class,
+                  method = "svmRadial",
+                  # The default grid of cost parameters go from 2^-2,
+                  # 0.5 to 1.
+                  # We'll fit 9 values in that sequence via
+                  # the tuneLength argument.
+                  tuneLength = 9,
+                  preProc = c("center", "scale"),
+                  metric = "ROC",
+                  trControl = cvCtrl)

SVM Example

> svmTune

Support Vector Machines with Radial Basis Function Kernel

1009 samples
  58 predictors
   2 classes: 'PS', 'WS'

Pre-processing: centered, scaled
Resampling: Cross-Validated (10 fold, repeated 5 times)
Summary of sample sizes: 909, 909, 908, 907, 908, 908, ...

Resampling results across tuning parameters:

  C    ROC  Sens  Spec  ROC SD  Sens SD  Spec SD
  [9 rows, for cost values from 2^-2 to 2^6]

Tuning parameter 'sigma' was held constant at a value of 0.01466
ROC was used to select the optimal model using the largest value.
The final values used for the model were sigma = 0.01466 and C = 1.

SVM Example

> svmTune$finalModel

Support Vector Machine object of class "ksvm"

SV type: C-svc (classification)
 parameter : cost C = 1

Gaussian Radial Basis kernel function.
 Hyperparameter : sigma = 0.0146629801604564

Number of Support Vectors : 545

Objective Function Value : -395.9
Training error : 0.125867
Probability model included.

Note that 545 training data points (out of 1009 samples) were used as support vectors.

SVM ROC Profile

plot(svmTune, metric = "ROC",
     scales = list(x = list(log = 2)))

[Figure: resampled ROC versus cost, on a log2 x-axis from 2^-2 to 2^6.]

Test Set Results

> svmPred <- predict(svmTune, testing[, names(testing) != "Class"])
> confusionMatrix(svmPred, testing$Class)

Confusion Matrix and Statistics

          Reference
Prediction  PS  WS
        PS 574  96
        WS  90 250

               Accuracy : 0.816
                 95% CI : (0.791, 0.839)
    No Information Rate : 0.657
    P-Value [Acc > NIR] : <2e-16

                  Kappa : 0.589
 Mcnemar's Test P-Value : 0.714

            Sensitivity : 0.864
            Specificity : 0.723
         Pos Pred Value : 0.857
         Neg Pred Value : 0.735
             Prevalence : 0.657
         Detection Rate : 0.568
   Detection Prevalence : 0.663
      Balanced Accuracy : 0.794

       'Positive' Class : PS

Test Set ROC Curves

[Figure: test-set ROC curves for the CART, C5.0 and SVM models.]

Comparing Models

Comparing Models Using Resampling

Notice that, before each call to train, we set the random number seed. That has the effect of using the same resampling data sets for the boosted tree and support vector machine. Effectively, we have paired estimates for performance.

Hothorn et al (2005) and Eugster et al (2008) demonstrate techniques for making inferential comparisons using resampling.

"scale").train(Class ~ .1. + trControl = cvCtrl) Loading required package: Loading required package: Loaded mda 0. 10.4-4 mda class > > set. 20. + tuneLength = 12.frame(decay = c(0.seed(1) > fdaTune <. 40)). + metric = "ROC".A Few Other Models > set.. + tuneGrid = data. data = training. + trControl = cvCtrl) Max Kuhn (Pfizer) Predictive Modeling 117 / 132 .. + preProc = c("center". + method = "fda".train(Class ~ .seed(1) > plrTune <. maxit = 1000. + metric = "ROC". + method = "multinom". + trace = FALSE. data = training. 1.

Collecting Results With resamples

caret has a function and classes for collating resampling results from objects of class train, rfe and sbf.

> cvValues <- resamples(list(CART = rpartTune, SVM = svmTune,
+                            C5.0 = c5Tune, FDA = fdaTune,
+                            logistic = plrTune))

Collecting Results With resamples

> summary(cvValues)

Call:
summary.resamples(object = cvValues)

Models: CART, SVM, C5.0, FDA, logistic
Number of resamples: 50

[tables of the resampled ROC, Sens and Spec distributions (Min., 1st Qu., Median, Mean, 3rd Qu., Max., NA's) for the five models]

Visualizing the Resamples

There are a number of lattice plot methods to display the results: bwplot, dotplot, parallelplot, splom, xyplot. For example:

> splom(cvValues, metric = "ROC", pscales = 0)

Visualizing the Resamples

[Scatter plot matrix of the resampled ROC values for CART, SVM, C5.0, FDA and logistic]

Visualizing the Resamples

dotplot(cvValues, metric = "ROC")

[Dotplot of the mean resampled ROC for C5.0, FDA, SVM, logistic and CART with 0.95 confidence intervals]
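The intervals in the dotplot are confidence intervals for the mean resampled metric. A base-R sketch of the underlying computation, using a hypothetical vector of resampled ROC values:

```r
## Hypothetical resampled ROC values for one model
roc <- c(0.85, 0.88, 0.90, 0.86, 0.89, 0.87, 0.91, 0.84, 0.88, 0.87)

n  <- length(roc)
m  <- mean(roc)              # point estimate shown as the dot
se <- sd(roc) / sqrt(n)      # standard error of the mean

## t-based 95% confidence interval around the mean ROC
ci <- m + qt(c(0.025, 0.975), df = n - 1) * se
```

Each model's dot and interval in the plot summarize its resampling distribution in exactly this way, which makes overlapping (or non-overlapping) intervals easy to scan.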

Comparing Models

We can also test to see if there are differences between the models:

> rocDiffs <- diff(cvValues, metric = "ROC")
> summary(rocDiffs)

Call:
summary.diff.resamples(object = rocDiffs)

p-value adjustment: bonferroni
Upper diagonal: estimates of the difference
Lower diagonal: p-value for H0: difference = 0

[Matrix of pairwise ROC differences and Bonferroni-adjusted p-values for CART, SVM, C5.0, FDA and logistic]

There are lattice plot methods, such as dotplot.
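The Bonferroni adjustment in the output multiplies each pairwise p-value by the number of comparisons (capped at 1) to control the family-wise error rate. Base R's p.adjust does this directly; the three raw p-values below are made up for illustration:

```r
## Three hypothetical pairwise p-values from model comparisons
raw_p <- c(0.010, 0.040, 0.200)

## Bonferroni: p * number of comparisons, truncated at 1
adj_p <- p.adjust(raw_p, method = "bonferroni")
adj_p   # 0.03 0.12 0.60
```

With five models there are ten pairwise comparisons, so an unadjusted p-value of 0.01 becomes 0.10 after the correction; the adjustment is what keeps the lower triangle of the summary honest.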

Visualizing the Differences

dotplot(rocDiffs, metric = "ROC")

[Dotplot of the pairwise differences in ROC (SVM − logistic, SVM − FDA, SVM − C5.0, FDA − logistic, CART − SVM, CART − logistic, CART − FDA, CART − C5.0, C5.0 − FDA, C5.0 − logistic) with 0.995 multiplicity-adjusted confidence intervals]

Clustering the Models

The models can be clustered and used in a PCA too.

plot(caret:::cluster.resamples(cvValues))

[Dendrogram of CART, logistic, SVM, FDA and C5.0 clustered by their resampling results]
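The clustering works on the per-resample performance values: models whose resampled results track each other closely end up on the same branch. A base-R sketch of the same idea with hypothetical resampled ROC columns (the numbers are invented; caret's cluster.resamples wraps an equivalent computation):

```r
## Rows = resamples, columns = models (values are hypothetical)
roc <- cbind(CART = c(0.70, 0.72, 0.68, 0.71),
             SVM  = c(0.88, 0.90, 0.86, 0.89),
             FDA  = c(0.87, 0.89, 0.85, 0.88))

## Distance between models is computed across their resampling
## profiles, then hierarchically clustered
hc <- hclust(dist(t(roc)))
plot(hc)
```

In this sketch SVM and FDA have nearly identical resampling profiles, so they merge first, with CART joining at a much greater height.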

Parallel Processing

Since we are fitting a lot of independent models over different tuning parameters and sampled data sets, there is no reason to do these sequentially.

R has many facilities for splitting computations up onto multiple cores or machines.

See Schmidberger et al (2009) for a recent review of these methods.
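As a minimal illustration of splitting independent computations across cores, the base parallel package (used here as a stand-in; the slides' own examples use foreach backends) can run a cross-platform socket cluster:

```r
library(parallel)

## Two workers; each element of 1:4 is processed independently,
## just as independent model fits over tuning parameters could be.
cl <- makeCluster(2)
res <- parLapply(cl, 1:4, function(i) i^2)
stopCluster(cl)

unlist(res)  # 1 4 9 16
```

The key property is that the tasks share no state, so the results are identical to a sequential lapply; the same independence is what lets caret farm out resampled model fits.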

foreach and caret

To loop through the models and data sets, caret uses the foreach package, which parallelizes for loops.

foreach has a number of parallel backends which allow various technologies to be used in conjunction with the package. On CRAN, these are the "do" packages, such as doMC, doMPI, doSMP and others.

For example, doMC uses the multicore package, which forks processes to split computations (for Unix and OS X only for now).

foreach and caret

To use parallel processing in caret, no changes are needed when calling train, rfe or sbf. The parallel technology must be registered with foreach prior to calling train. For multicore:

> library(doMC)
> registerDoMC(cores = 2)

Note that, when using parallel processing with train, rfe or sbf, the underlying model function (e.g. avNNet) may also use parallel processing, and you can end up with workers^2 processes. train, rfe and sbf have an option called allowParallel that lets you choose where the parallelism occurs (or should not).

Training Times and Speedups

HPC job scheduling data from Kuhn and Johnson (2013) and the multicore package:

[Plots of training time (min) and speed-up versus number of workers (1-15) for RF, SVM and LDA]

References

Breiman L, Friedman J, Olshen R, Stone C (1984). Classification and Regression Trees. Chapman and Hall, New York.

Breiman L (2001). "Statistical Modeling: The Two Cultures." Statistical Science, 16(3), 199-231.

Eugster M, Hothorn T, Leisch F (2008). "Exploratory and Inferential Analysis of Benchmark Experiments." Tech. Rep. 30, Ludwig-Maximilians-Universität München, Department of Statistics.

Friedman J (2002). "Stochastic Gradient Boosting." Computational Statistics and Data Analysis, 38(4), 367-378.

Hastie T, Tibshirani R, Friedman J (2008). The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, New York.

References

Hill A, LaPan P, Li Y, Haney S (2007). "Impact of Image Segmentation on High-Content Screening Data Quality for SK-BR-3 Cells." BMC Bioinformatics, 8(1), 340.

Hothorn T, Leisch F, Zeileis A, Hornik K (2005). "The Design and Analysis of Benchmark Experiments." Journal of Computational and Graphical Statistics, 14(3), 675-699.

Kuhn M, Johnson K (2013). Applied Predictive Modeling. Springer.

Quinlan R (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers.

Schmidberger M et al (2009). "State-of-the-art in Parallel Computing with R." Journal of Statistical Software, 31(1).

Shmueli G (2010). "To Explain or to Predict?" Statistical Science, 25(3), 289-310.

Versions

R Under development (unstable) (2014-06-02 r65832), x86_64-apple-darwin10.8.0

Locale: en_US.US-ASCII

Base packages: base, datasets, graphics, grDevices, grid, methods, parallel, splines, stats, utils

Other packages include: AppliedPredictiveModeling, C50, caret, class, cluster, ctv, doMC, e1071, earth, foreach, Formula, ggplot2, Hmisc, kernlab, knitr, lattice, MASS, mda, mlbench, nnet, plotmo, plotrix, pROC, reshape2, rpart, survival and svmpath, plus further packages loaded via a namespace and not attached.
