The metrics that you choose to evaluate your machine learning algorithms are very important.
Your choice of metrics influences how the performance of machine learning algorithms is measured and compared. It influences how you weight the importance of different characteristics in the results, and your ultimate choice of algorithm.
In this post, you will discover how to select and use different machine learning performance metrics in Python with scikit-learn.
Discover how to prepare data with pandas, fit and evaluate models with scikit-learn, and more in my new book Machine Learning Mastery With Python, with 16 step-by-step tutorials, 3 projects, and full Python code.
Update Jan/2017: Updated to reflect changes to the scikit-learn API in version 0.18.
Update Mar/2018: Added alternate link to download the dataset as the original appears to have been taken down.
Metrics To Evaluate Machine Learning Algorithms in Python
Photo by Ferrous Büller, some rights reserved.
Each recipe is designed to be standalone so that you can copy-and-paste it into your project and use it immediately.
Metrics are demonstrated for both classification and regression type machine learning problems.
For classification metrics, the Pima Indians onset of diabetes dataset is used as demonstration. This is a binary classification problem where all of
the input variables are numeric (update: download from here).
For regression metrics, the Boston House Price dataset is used as demonstration. This is a regression problem where all of the input variables are also numeric (update: download data from here).
In each recipe, the dataset is downloaded directly from the UCI Machine Learning repository.
All recipes evaluate the same algorithms: Logistic Regression for classification and Linear Regression for regression. A 10-fold cross-validation test harness is used to demonstrate each metric, because this is the most likely scenario in which you will be employing different algorithm evaluation metrics.
A caveat in these recipes is the cross_val_score function used to report the performance in each recipe. It does allow the use of the different scoring metrics that will be discussed, but all scores are reported so that they can be sorted in ascending order (the largest score is best).
Some evaluation metrics (like mean squared error) are naturally descending scores (the smallest score is best) and as such are reported as negative by the cross_val_score() function. This is important to note, because some scores will be reported as negative even though, by definition, they can never be negative.
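For example, here is a minimal sketch of the sign convention on a small synthetic dataset (this snippet is illustrative only and not one of the recipes below):

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# a small synthetic regression problem, just to show the sign convention
X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=7)
kfold = KFold(n_splits=10, shuffle=True, random_state=7)
scores = cross_val_score(LinearRegression(), X, y, cv=kfold, scoring='neg_mean_squared_error')
# mean squared error comes back negated: negate it again to recover the MSE
print("neg MSE: %.3f, MSE: %.3f" % (scores.mean(), -scores.mean()))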
You can learn more about machine learning algorithm performance metrics supported by scikit-learn on the page Model evaluation: quantifying the
quality of predictions.
Classification Metrics
Classification problems are perhaps the most common type of machine learning problem and as such there are a myriad of metrics that can be used to evaluate predictions for these problems. In this section we will review how to use the following metrics:
1. Classification Accuracy.
2. Logarithmic Loss.
3. Area Under ROC Curve.
4. Confusion Matrix.
5. Classification Report.
1. Classification Accuracy
Classification accuracy is the number of correct predictions made as a ratio of all predictions made.
This is the most common evaluation metric for classification problems, and it is also the most misused. It is really only suitable when there are an equal number of observations in each class (which is rarely the case) and when all predictions and prediction errors are equally important, which is often not the case.
Below is an example of calculating classification accuracy.

import pandas
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression

# load the Pima Indians diabetes data (alternate link, as the original UCI page was taken down)
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
seed = 7
# shuffle=True is required for random_state to take effect in recent scikit-learn versions
kfold = model_selection.KFold(n_splits=10, random_state=seed, shuffle=True)
model = LogisticRegression(max_iter=1000)  # raise max_iter so the default solver converges
scoring = 'accuracy'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Accuracy: %.3f (%.3f)" % (results.mean(), results.std()))
You can see that the ratio is reported. This can be converted into a percentage by multiplying the value by 100, giving an accuracy of approximately 77%.
2. Logarithmic Loss
Logarithmic loss (or logloss) is a performance metric for evaluating the predictions of probabilities of membership to a given class.
The scalar probability between 0 and 1 can be seen as a measure of confidence for a prediction by an algorithm. Predictions that are correct or incorrect
are rewarded or punished proportionally to the confidence of the prediction.
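For example, a small sketch with scikit-learn's log_loss function (not part of the recipe below) makes the reward/punishment concrete:

from sklearn.metrics import log_loss

# true classes for two examples; each prediction row is [P(class 0), P(class 1)]
y_true = [0, 1]
print(log_loss(y_true, [[0.9, 0.1], [0.2, 0.8]]))  # confident and correct: ~0.16
print(log_loss(y_true, [[0.6, 0.4], [0.4, 0.6]]))  # correct but hesitant: ~0.51
print(log_loss(y_true, [[0.1, 0.9], [0.9, 0.1]]))  # confident and wrong: ~2.30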
You can learn more about logarithmic loss in the Loss functions for classification article on Wikipedia.
Below is an example of calculating logloss for Logistic regression predictions on the Pima Indians onset of diabetes dataset.
# ... same data loading, KFold and model setup as the accuracy recipe ...
scoring = 'neg_log_loss'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Logloss: %.3f (%.3f)" % (results.mean(), results.std()))
Smaller logloss is better with 0 representing a perfect logloss. As mentioned above, the measure is inverted to be ascending when using the
cross_val_score() function.
3. Area Under ROC Curve
Area Under the ROC Curve (or AUC for short) is a performance metric for binary classification problems.
The AUC represents a model's ability to discriminate between positive and negative classes. An area of 1.0 represents a model that made all predictions perfectly. An area of 0.5 represents a model as good as random. Learn more about ROC here.
ROC can be broken down into sensitivity and specificity. A binary classification problem is really a trade-off between sensitivity and specificity.
Sensitivity is the true positive rate, also called the recall. It is the number of instances from the positive (first) class that were actually predicted correctly.
Specificity is also called the true negative rate. It is the number of instances from the negative (second) class that were actually predicted correctly.
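As an aside, both measures can be read straight off a confusion matrix. A minimal sketch (the toy labels here are illustrative, not from the recipes):

from sklearn.metrics import confusion_matrix

# class 1 is the positive class
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate (recall)
specificity = tn / (tn + fp)  # true negative rate
print("sensitivity: %.2f, specificity: %.2f" % (sensitivity, specificity))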
Below is an example of calculating AUC.

# ... same data loading and KFold setup as the accuracy recipe ...
model = LogisticRegression()
scoring = 'roc_auc'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("AUC: %.3f (%.3f)" % (results.mean(), results.std()))
You can see that the AUC is relatively close to 1 and greater than 0.5, suggesting some skill in the predictions.
4. Confusion Matrix
The confusion matrix is a handy presentation of the accuracy of a model with two or more classes.
The table presents predictions on the x-axis and accuracy outcomes on the y-axis. The cells of the table are the number of predictions made by a
machine learning algorithm.
For example, a machine learning algorithm can predict 0 or 1 and each prediction may actually have been a 0 or 1. Predictions for 0 that were actually 0
appear in the cell for prediction=0 and actual=0, whereas predictions for 0 that were actually 1 appear in the cell for prediction = 0 and actual=1. And so
on.
You can learn more about the Confusion Matrix on the Wikipedia article.
Below is an example of calculating a confusion matrix for a set of predictions made by a model on a test set.
# ... same data loading as the accuracy recipe, plus a train/test split ...
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=7)
model = LogisticRegression()
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
matrix = confusion_matrix(Y_test, predicted)
print(matrix)
Although the array is printed without headings, you can see that the majority of the predictions fall on the diagonal line of the matrix (which are correct
predictions).
[[141  21]
 [ 41  51]]
5. Classification Report
Scikit-learn does provide a convenience report when working on classification problems to give you a quick idea of the accuracy of a model using a
number of measures.
The classification_report() function displays the precision, recall, f1-score and support for each class.
The example below demonstrates the report on the binary classification problem.
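A minimal sketch of that recipe (assuming the same train/test split and fitted model as the confusion matrix recipe):

from sklearn.metrics import classification_report

report = classification_report(Y_test, predicted)
print(report)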
You can see good precision and recall for the algorithm.
Regression Metrics
In this section we will review 3 of the most common metrics for evaluating predictions on regression machine learning problems:
1. Mean Absolute Error.
2. Mean Squared Error.
3. R^2.
1. Mean Absolute Error
The Mean Absolute Error (or MAE) is the mean of the absolute differences between predictions and actual values. It gives an idea of how wrong the predictions were. The measure gives an idea of the magnitude of the error, but no idea of the direction (e.g. over or under predicting).
The example below demonstrates calculating mean absolute error on the Boston house price dataset.
import pandas
from sklearn import model_selection
from sklearn.linear_model import LinearRegression

# load the Boston house price data (alternate link, as the original UCI page was taken down)
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataframe = pandas.read_csv(url, delim_whitespace=True, names=names)
array = dataframe.values
X = array[:,0:13]
Y = array[:,13]
kfold = model_selection.KFold(n_splits=10, random_state=7, shuffle=True)
model = LinearRegression()
scoring = 'neg_mean_absolute_error'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("MAE: %.3f (%.3f)" % (results.mean(), results.std()))
A value of 0 indicates no error or perfect predictions. Like logloss, this metric is inverted by the cross_val_score() function.
2. Mean Squared Error
The Mean Squared Error (or MSE) is much like the mean absolute error in that it provides a gross idea of the magnitude of the error.
Taking the square root of the mean squared error converts the units back to the original units of the output variable and can be meaningful for description and presentation. This is called the Root Mean Squared Error (or RMSE).
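A minimal version of the MSE recipe (reusing the Boston data setup from the MAE example above):

scoring = 'neg_mean_squared_error'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("MSE: %.3f (%.3f)" % (results.mean(), results.std()))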
This metric too is inverted so that the results are increasing. Remember to take the absolute value before taking the square root if you are interested in
calculating the RMSE.
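For example, continuing from the MSE recipe above:

import numpy

# scores come back negated, so flip the sign (take the absolute value) before the square root
rmse = numpy.sqrt(-results)
print("RMSE: %.3f (%.3f)" % (rmse.mean(), rmse.std()))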
3. R^2 Metric
The R^2 (or R Squared) metric provides an indication of the goodness of fit of a set of predictions to the actual values. In statistical literature, this
measure is called the coefficient of determination.
This is a value between 0 and 1 for no-fit and perfect fit respectively.
You can learn more in the Coefficient of determination article on Wikipedia.
The example below provides a demonstration of calculating the mean R^2 for a set of predictions.
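A minimal version of that recipe (again reusing the Boston data setup from the MAE example):

scoring = 'r2'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("R^2: %.3f (%.3f)" % (results.mean(), results.std()))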
You can see that the predictions have a poor fit to the actual values with a value close to zero and less than 0.5.
Summary
In this post, you discovered metrics that you can use to evaluate your machine learning algorithms.
Classification metrics:
Accuracy.
Logarithmic Loss.
Area Under ROC Curve.
Confusion Matrix.
Classification Report.
Regression metrics:
Mean Absolute Error.
Mean Squared Error.
R^2.
Do you have any questions about metrics for evaluating machine learning algorithms or this post? Ask your question in the comments and I will do my
best to answer it.
Learn how in my new Ebook:
Machine Learning Mastery With Python
Evaluate the Performance of Machine Learning Algorithms in Python using Resampling
Evaluate the Performance Of Deep Learning Models in Keras
94 Responses to Metrics To Evaluate Machine Learning Algorithms in Python
Arek May 12, 2017 at 6:35 am #
Hello Jason
Thanks for this tutorial, but I have one question about computing AUC. I'm doing binary classification with imbalanced classes and then computing AUC, but I have one problem. I'm using Keras.
In the 3rd point I'm loading an image and then using predict_proba for the result. Results are always between 0 and 1, but should I use predict_proba? This method is from
http://stackoverflow.com/questions/41032551/how-to-compute-receiving-operating-characteristic-roc-and-auc-in-keras
(Eka's solution).
Jason Brownlee May 12, 2017 at 7:52 am #
Looks good. I would recommend predict_proba(); I expect it normalizes any softmax output to ensure the values add to one.
Evy May 18, 2017 at 9:11 am #
Jason,
Long time reader, first time writer. I am having trouble picking which model performance metric will be useful for a current project. Let me give you some background.
I have a classification model for which I really want to maximize my Recall results. The reasoning is that, if I say something is 1 when it is not 1, I lose a lot of time/$, but when I say something is 0 and it is not 0, I don't lose much time/$ at all. I.e. I want to reduce False Negatives. Also, the distribution of the dependent variable in my training set is highly skewed toward 0s; less than 5% of all my dependent variables in the training set are 1s. Normally I would use
an F1 score, AUC, VIF, Accuracy, MAE, MSE or many of the other classification model metrics that are discussed, but I am unsure what to use now. Currently I am using LogLoss as my model performance metric, as I have found documentation that this is the correct metric to use in cases of a skewed dependent variable, as well as situations where I mainly care about Recall and don't care much about Precision or vice versa. I received this information from people on the Kaggle forums.
Thank you for your expert opinion, I very much appreciate your help. If you don't have time for such a question I will understand.
Jason Brownlee May 19, 2017 at 8:08 am #
I would suggest tuning your model and focusing on the recall statistic alone.
I would also suggest using models that make predictions as a probability and tune the threshold on the probability too to optimize the recall (ROC curves
can help understand this).
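For instance, a minimal sketch of tuning the threshold on a synthetic imbalanced problem (the data and names here are illustrative, not from the post):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# imbalanced problem: roughly 10% positives
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=7)
model = LogisticRegression().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
# lowering the decision threshold trades precision for recall
for threshold in (0.5, 0.3, 0.1):
    y_pred = (probs >= threshold).astype(int)
    print("threshold %.1f -> recall %.2f" % (threshold, recall_score(y_test, y_pred)))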
Jeppe June 9, 2017 at 7:39 pm #
Hey Jason,
Thanks for the great articles, I just have a question about the MSE and its properties. When building a linear model, adding features should always lower the
MSE in the training data, right?
It's just, when I use the polynomial features method in SciKit and fit a linear regression, the MSE does not necessarily fall; sometimes it rises as I add features.
Is it because of some innate properties of the MSE metric, or is it simply because I have a bug in my code?
Jason Brownlee June 10, 2017 at 8:21 am #
Adding features has no guarantee of reducing MSE as far as I know. Where did you get that from?
JONATA PAULINO DA COSTA March 18, 2019 at 12:30 am #
Hello. I live in Brazil and I always read your posts. I have an LSTM recurrent neural network and I am doing binary classification on a Twitter dataset. I am using accuracy to evaluate my model. Could you suggest another way to evaluate this model? I am using Keras and Python. If you can help me with an example, I would appreciate it.
Jason Brownlee March 18, 2019 at 6:06 am #
For example, if you are classifying tweets, then perhaps accuracy makes sense. If you are predicting words, then perhaps BLEU or ROGUE
makes sense.
Anubhav September 7, 2019 at 5:37 am #
Hi Jason,
I think where Jeppe is coming from is that by increasing features, we are increasing the complexity of our model, hence we are moving towards overfitting.
Now in an overfitted model, the predicted data points will be much closer to the actual data points, and hence the MSE should decrease.
Jason Brownlee September 7, 2019 at 5:39 am #
I disagree.
More features can better expose the structure of the problem and can result in a better fit. The model may or may not overfit, it is an orthogonal
concern.
Cheng June 14, 2017 at 3:45 am #
Hi Jason,
Thank you for this article. Very helpful! Now I am using Python SciKit Learn to train an imbalanced dataset. I am looking for a good metric embedded in
Python SciKit Learn already that works for evaluating the performance of model in predicting imbalanced dataset. Do you have some recommendations or
ideas? Alternatively, I know of a judging criterion, the balanced error rate (BER), but I have no idea how to use it as a scoring parameter in Python.
Cheng
Jason Brownlee June 14, 2017 at 8:50 am #
Great question.
Huyen August 8, 2017 at 9:17 pm #
Hi Jason,
I still have some confusion about the metrics used to evaluate regression problems. In cross_val_score for cross validation, the final results are the negative mean squared error and negative mean absolute error, so what does that mean? (Does it mean the model performs poorly, or is it a good sign that the model can minimize the metrics?)
Additionally, I used some regression methods and they returned very good results such as R_squared = 0.9999 and very small MSE, MSA on the testing part. However, the result of cross_val_score is 1.00 +- 00 for example, so does that mean the model is overfitting?
So in general, I suppose when we use cross_val_score to evaluate a regression model, we should choose the model which has the smallest MSE and MSA, is that true or not?
Thank you so much for your answer, that will help me a lot
Jason Brownlee August 9, 2017 at 6:29 am #
Good question.
Generally, the interpretation of the score is specific to the problem. A good score is really only relative to scores you can achieve with other methods.
Choosing a model depends on your application, but generally, you want to pick the simplest model that gives the best model skill.
Stef August 20, 2017 at 4:39 am #
Hi Jason,
I recently read some articles that were completely against using R^2 for evaluating non-linear models (such as in the case of ML algorithms). Given that it is still common practice to use it, what's your take on this?
Cheers
Jason Brownlee August 20, 2017 at 6:08 am #
I recommend using a few metrics and interpret them in the context of your specific problem.
emily October 5, 2017 at 1:14 am #
Jason Brownlee October 5, 2017 at 5:24 am #
You need a metric that best captures what you are looking to optimize on your specific problem.
Maybe you need to talk to domain experts. Maybe you need to try out a few metrics and present results to stakeholders. It could be an iterative process.
kono November 12, 2017 at 4:17 am #
Jason,
What are the differences between loss functions and evaluation metrics? Loss function = evaluation metric – regularization terms?
Kono
Jason Brownlee November 12, 2017 at 9:08 am #
Great question.
A loss function is minimized when fitting a model.
A loss function score can be reported as a model skill, e.g. an evaluation metric, but does not have to be.
Regularization terms are modifications of a loss function to penalize complex models, e.g. to result in a simpler and often better/more skillful model.
kono November 12, 2017 at 4:02 pm #
Jason Brownlee November 13, 2017 at 10:12 am #
You’re welcome!
Robert December 5, 2017 at 9:17 pm #
Hi Jason,
I have the following question. Instead of using the MSE in the standard configuration, I want to use it with sample weights, where basically each datapoint
would get a different weight (it is a separate column in the original dataframe, but clearly not a feature of the trained model). How would I incorporate those
sample weight in the scoring function?
Jason Brownlee December 6, 2017 at 9:01 am #
Great question, I believe the handling of weights will be algorithm specific.
Shabnam December 10, 2017 at 2:16 pm #
Jason Brownlee December 11, 2017 at 5:21 am #
Thanks.
Rizwan Mian January 2, 2018 at 3:13 pm #
In the general case, I see a sensitivity and specificity tradeoff when the classes overlap [1].
– How can I find the optimal point where both values are high algorithmically using python?
– Would the classifier give the highest accuracy at this point assuming classes are balanced?
Thanking in advance
[1] https://www.youtube.com/watch?v=vtYDyGGeQyo
Jason Brownlee January 2, 2018 at 4:01 pm #
You might want to look into ROC curves and model calibration.
Matthieu February 24, 2018 at 10:25 am #
Hi Jason,
Thank you for this detailed explanation of the metrics. However, I have a question about my problem. I have a binary classification problem, where I am interested in the accuracy of prediction for both negative and positive classes, and the negative class has more instances than the positive class.
1) In that case, would it be better to use the "roc_auc" or "f1-score" metric to optimize the accuracy of the classifier?
2) Would it be better to use class or probability predictions? In the latter case, how do I optimize the calibration of the classifier?
Jason Brownlee February 25, 2018 at 7:39 am #
Dan April 3, 2018 at 4:23 pm #
Thanks Jason, very helpful information as always! Which one of these tests could also work for non-linear learning algorithms? Or are you aware of any sources that might help answer this question? E.g. results produced from SVC with an RBF kernel?
Jason Brownlee April 4, 2018 at 6:07 am #
David April 23, 2018 at 2:08 am #
Hey Jason
Are MSE and MAE only used to compare models of the same dataset? The reason I ask is that I used an autoregression on sensor data from, let's say, t = 0s to t = 50s and then used the autoregression parameters to predict the time series data from t = 50s to t = 100s. The values are very small and so I get small MSE and MAE values, but it doesn't really mean anything. Is there any way to get an absolute score of your predictions? MSE and MAE seem to be highly dependent on your dataset magnitude, and I can only see them as a way to compare models of the same dataset.
Jason Brownlee April 23, 2018 at 6:18 am #
Perhaps you can rescale your data to the range [0-1] prior to modeling?
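e.g., a minimal sketch with scikit-learn's MinMaxScaler (the values here are illustrative):

import numpy
from sklearn.preprocessing import MinMaxScaler

y = numpy.array([[45.0], [120.0], [3000.0]])  # toy target values
scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(y)
print(scaled.ravel())  # all values now lie in [0, 1]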
vaibhav kumar May 28, 2018 at 6:02 pm #
Dear Jason,
For categorical variables with more than two potential values, how are their accuracy measures and F-scores calculated?
I have a dataset with the variables (population class, building type, total floors): building type with possible values (Residential, Commercial, Industry, Special Buildings), population class (High, MED, LOW), and total floors is a numerical variable with values ranging from 1 to 35. After training on the data I wanted to predict the "population class". I applied SVM on the datasets. How are the accuracy measures and F-scores calculated for my case? Are accuracy and F-score good metrics for a categorical variable with more than two values? Am I doing the correct thing by evaluating the classification of a categorical variable (population class) with more than two potential values (High, MED, LOW)? And if a variable is ordinal, should the same metrics and classification algorithms be applied as for binary variables?
Jason Brownlee May 29, 2018 at 6:24 am #
Reed Guo June 7, 2018 at 5:17 pm #
Hi, Jason
I have a question and cannot find a good answer on the Internet. And it is not mentioned in this post either.
I use R^2 as the metric to evaluate regression models. In which range does it indicate a good model?
For example:
Thank you very much.
Jason Brownlee June 8, 2018 at 6:06 am #
Good question, I have seen tables like this in books on “effect size” in statistics.
ND June 20, 2018 at 12:49 pm #
Hi Jason,
I’m working on a classification problem with unbalanced dataset. I’m using recall/precision and confusion matrix as my evaluation metrics. Initially in my
dataset, the observation ratio for class ‘1’ to class ‘0’ is 1:7 so I use SMOTE and up-sample the minority class in training set to make the ratio 3:5 (i.e. 60%
class ‘1’ observations).
Recall score: 0.79
Precision score: 0.54
f1 score: 0.64
AUC score: 0.845674177201395
My question is: is it ok to select a different threshold for test set for optimal recall/precision scores as compared to the training/validation set?
Also could you please suggest options to improve precision while maintaining recall.
Thanks,
ND
Jason Brownlee June 21, 2018 at 6:07 am #
No, threshold must be chosen on a validation set and used on a test set.
When using a test set, we are assuming we do not know the answers and the result we get is the result we get.
ND June 21, 2018 at 2:15 pm #
Thanks Jason. Could you recommend some options to explore in order to improve precision while maintaining recall scores for ML models based on imbalanced datasets?
Appreciate your blogs. I've referred to a few of them and they've been really helpful in building my ML code.
ND
Jason Brownlee June 21, 2018 at 4:58 pm #
gautham July 15, 2018 at 12:40 pm #
Hello guys… I'm trying to tag the parts of speech for a text using the pos_tag function that was implemented by the perceptron tagger. After tagging the text I want to calculate the accuracy of the output against any corpus, either Brown, CoNLL2000, or Treebank. How do I find that accuracy? Can anyone please help me out with this problem…
Jason Brownlee July 16, 2018 at 6:10 am #
Claire August 18, 2018 at 10:34 pm #
Hi Jason,
This page looks at classification and regression problems. I’m working on a segmentation problem, classifying land cover from remotely sensed imagery.
What do you think is the best evaluation metric for this case?
Jason Brownlee August 19, 2018 at 6:21 am #
Talk to stakeholders and nut out what is the most important way of evaluating skill of a model?
Review the literature and see what types of metrics are being used on similar problems?
Try a few metrics and see if they capture what is important?
J.Straub September 18, 2018 at 7:25 pm #
Hi Jason,
I'm working on a multivariate regression problem. Which regression metrics can I use for evaluation?
Thanks in advance!
Jason Brownlee September 19, 2018 at 6:18 am #
dy October 4, 2018 at 8:03 pm #
Hi Jason, it's me again. -34.705 (45.574): what's the value in brackets? Thank you!
Jason Brownlee October 5, 2018 at 5:33 am #
omar October 20, 2018 at 9:16 pm #
How can we print the classification report of more than one model through an array?
Jason Brownlee October 21, 2018 at 6:11 am #
Use a for loop and enumerate over the models calling print() for each report you require.
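For example, a minimal self-contained sketch (the synthetic data and model names are illustrative, not from the post):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=7)
# loop over the models and print one report each
for name, model in [("LR", LogisticRegression()), ("CART", DecisionTreeClassifier())]:
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))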
Felipe October 24, 2018 at 1:54 pm #
Is it possible to plot the ROC curve by using the cross_val_score function? Because I see many examples making a for instead of using the function.
Jason Brownlee October 24, 2018 at 2:49 pm #
I don’t think so, a curve is for a single set of predictions. With CV, you would have k curves I guess.
salma December 18, 2018 at 1:55 am #
How do I get the performance for each class (if binary, for class 0 and for class 1) using the cross_val_score function?
And thank you.
Jason Brownlee December 18, 2018 at 6:03 am #
Josh Zastrow January 9, 2019 at 5:51 am #
So what if you have a classification problem where the categories are ordinal? For example, classify shirt size but there is XS, S, M, L, XL, XXL.
Accuracy or ROC curves wouldn’t tell the whole truth… does MAE or MSE make more sense?
Jason Brownlee January 9, 2019 at 8:50 am #
Perhaps. Some cases/testing may be required to settle on a measure of performance that makes sense for the project.
Atharva Thanekar February 4, 2019 at 5:58 pm #
Jason Brownlee February 5, 2019 at 8:14 am #
You cannot calculate accuracy for a regression problem, I explain this more here:
https://machinelearningmastery.com/classification-versus-regression-in-machine-learning/
Ghofrane February 10, 2019 at 6:47 am #
Hi Jason,
thank you for this kind of posts and comments!
I'm working on a regression problem with a cross-sectional dataset. I'm using RMSE and NAE (Normalized Absolute Error).
– How do we interpret the values of NAE and compare the performances based upon them (I know the smaller the better but I mean interpretation with regard
to the average)?
I got these values of NAE for different models:
Model1: 0.629
Model2: 1.02
Model3: 0.594
Model4: 0.751
– what could be the reason of different ranking when using RMSE and NAE?
Jason Brownlee February 10, 2019 at 9:46 am #
Prashant Priyadarshi April 9, 2019 at 2:34 pm #
Sir,
What should be the type of all input variables (numeric or categorical) for Linear Regression, Logistic Regression, Decision Tree, Random Forest, SVM, Naive Bayes, KNN, etc.?
Jason Brownlee April 9, 2019 at 2:44 pm #
Prashant Priyadarshi April 10, 2019 at 4:17 pm #
E.g. for Linear Regression our predictor (independent) variables should be numeric and hence our target (dependent) variable would also be numeric.
In the same way, I want to know about the other models.
Jason Brownlee April 11, 2019 at 6:29 am #
Gilles Xiberras April 29, 2019 at 3:23 am #
Hello Jason,
you wrote :
“The Mean Absolute Error (or MAE) is the sum of the absolute differences between predictions and actual values. It gives an idea of how wrong the
predictions were.”
I suppose that you forgot to mention "the sum … divided by the number of observations", or to replace "sum" with "mean".
Cheers Gilles.
Jason Brownlee April 29, 2019 at 8:25 am #
Michael May 22, 2019 at 3:21 pm #
Hello, how can one compare the minimum spanning tree algorithm, the shortest path algorithm, and the travelling salesman problem using an evaluation metric?
Jason Brownlee May 23, 2019 at 5:52 am #
Perhaps based on the min distance found across a suite of contrived problems scaling in difficulty?
Abhijit Ghosh June 10, 2019 at 1:55 pm #
Hi, nice blog. Can you suggest a review article on the different kinds of error metrics in ML and Deep Learning? Thanks
Jason Brownlee June 10, 2019 at 2:05 pm #
Thanks.
And this:
https://machinelearningmastery.com/how-to-choose-loss-functions-when-training-deep-learning-neural-networks/
Suvi August 8, 2019 at 9:14 pm #
Hi Jason, excellent post! I am a biologist in a team working on developing image-based machine learning algorithms to analyse cellular
behavior based on multiple parameters simultaneously. For me the most “logical” way to present whether our algorithm is good at doing what it’s
meant to do is to use the classification accuracy. However, the non-biologists argue we should use the R-squared value for this purpose. How can we
decide which is the best metric to use? And also: what is the most-used one for this type of data, when we want most of our audience to understand how amazing our algorithm is? Thank you.
Jason Brownlee August 9, 2019 at 8:14 am #
Great question.
You have to start with an idea of what is valued in a model and then how to measure that. It may require using best practices in the field or talking
to lots of experts and doing some hard thinking.
Sometimes it helps to pick one measure to choose a model and another to present the model, e.g. minimize loss on validation dataset then
classification accuracy on a test set.
Mwh August 17, 2019 at 12:02 am #
Thanks Jason.
How can I print all three metrics for regression together? I do not want to run cross_val_score three times.
Thanks
Jason Brownlee August 17, 2019 at 5:47 am #
Mwh August 17, 2019 at 12:58 am #
Also, what do you think about Mean Absolute Percentage Error (MAPE), https://en.wikipedia.org/wiki/Mean_absolute_percentage_error, as a way to report the accuracy of a regression model? It does not sound like an academic approach to report as a result, but it is easier to interpret; MAE gives large numbers, e.g. 150, since the y values in my dataset are usually >1000. Thanks
Jason Brownlee August 17, 2019 at 5:50 am #
It’s great.
Anam September 8, 2019 at 4:36 pm #
Hi Jason,
Amazing and helpful content… I have a query: I am applying deep neural networks such as LSTM, BiLSTM, BiGRU, GRU, RNN, and SimpleRNN, and all these models give the same accuracy on the dataset.
I want to know why this happens. Can you please guide me on this issue? Thanks in advance.
Jason Brownlee September 9, 2019 at 5:13 am #
Mohit October 9, 2019 at 3:51 am #
Hi, Jason
Jason Brownlee October 9, 2019 at 8:15 am #
Hi!
taissir October 17, 2019 at 7:20 pm #
Thanks for your good paper. I want to know how to use the yellowbrick module for multiclass classification using a specific model that doesn't exist in the module, i.e. our own model.
Thanks
Jason Brownlee October 18, 2019 at 5:48 am #