which metric is better for boosting methods
I am working with a dataset of 300,000 samples and I am comparing logistic regression (trained with gradient descent) against a LightBoost model for binary classification, in order to choose the better one.
Which metric should I use for this comparison, and why?
Accuracy?
AUC on the test set?
RMSE?
LogLoss?
machine-learning logistic-regression gradient-descent supervised-learning boosting
asked 13 hours ago by Nirmine
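A minimal sketch of the kind of comparison being described, assuming scikit-learn's SGDClassifier as the gradient-descent logistic regression and LightGBM's LGBMClassifier as the boosting model ("LightBoost" is taken here to mean LightGBM; the data, class names and parameters are placeholders, not the actual setup):

```python
# Hedged sketch: dataset, model classes and parameters are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier            # logistic regression fit by gradient descent
from sklearn.metrics import accuracy_score, roc_auc_score, log_loss, mean_squared_error
from lightgbm import LGBMClassifier                       # assuming "LightBoost" refers to LightGBM

# Stand-in data; the real dataset has ~300,000 samples.
X, y = np.random.rand(5000, 20), np.random.randint(0, 2, 5000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    # use loss="log" instead of "log_loss" on older scikit-learn versions
    "logistic regression (SGD)": SGDClassifier(loss="log_loss", max_iter=1000, random_state=0),
    "LightGBM": LGBMClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]                # probability of the positive class
    pred = (proba >= 0.5).astype(int)                      # hard labels at the default threshold
    print(f"{name}: "
          f"accuracy={accuracy_score(y_te, pred):.3f}  "
          f"AUC={roc_auc_score(y_te, proba):.3f}  "
          f"logloss={log_loss(y_te, proba):.3f}  "
          f"RMSE={np.sqrt(mean_squared_error(y_te, proba)):.3f}")
```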
2 Answers
It depends.
The first thing to be clear about is that you are running an experiment, which means you need to measure both models with the same metric.
Which one? That depends on the underlying problem you are solving. If what you are doing is determining which algorithm is better, keep in mind that your conclusion will only be applicable to your specific dataset.
Accuracy: It is possible to use accuracy as the comparison metric, but it becomes misleading if your dataset is imbalanced, i.e. you have many more positives than negatives or vice versa. Accuracy is appropriate when the dataset is balanced and when a mistake on a positive is as costly as a mistake on a negative. It also has the problem of being too dependent on the threshold used to define positives/negatives.
Area Under the Curve: AUC is one of the most robust metrics for measuring a model's ability to separate positives from negatives; it is insensitive to the threshold and largely unaffected by class imbalance. I would use this (see the small sketch below).
RMSE: I only know RMSE as a metric for continuous regression, not for classification.
LogLoss: Its use is mainly in multinomial classification.
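As a toy illustration of the threshold point (the scores below are made up, not taken from the question's data), accuracy moves with the chosen cut-off while AUC stays fixed:

```python
# Made-up scores to illustrate: accuracy depends on the threshold, AUC does not.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.45, 0.5, 0.6, 0.35, 0.55, 0.8, 0.9])

print("AUC:", round(roc_auc_score(y_true, y_score), 3))      # a single threshold-free number
for t in (0.3, 0.5, 0.7):
    acc = accuracy_score(y_true, (y_score >= t).astype(int))
    print(f"threshold={t:.1f}  accuracy={acc:.2f}")           # changes as the cut-off moves
```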
answered 12 hours ago (edited 11 hours ago) by Juan Esteban de la Calle – New contributor
Doesn't LogLoss give a fair comparison between the overall performance of two probabilistic classifiers that is unrelated to threshold / data imbalance? I don't think it suffers from the same problems as accuracy, but I would be keen to know if I have missed a detail. – ajrwhite, 12 hours ago
Thank you, Juan, for your answer. So, in conclusion, because I work with imbalanced data I should just compare the two of them based on AUC? – Nirmine, 11 hours ago
You are right, @ajrwhite, I will correct my answer. – Juan Esteban de la Calle, 11 hours ago
I would say AUC is the best overall metric for classification, but it does not have to be the only metric; accuracy is useful too. For reference, you can check this Quora answer on accuracy vs. AUC:
They both measure different things, so they are complementary.
Accuracy: Measures, for a given threshold, the percentage of points correctly classified, regardless of which class they belong to.
AUC: Measures the likelihood that, given two random points (one from the positive and one from the negative class), the classifier will rank the point from the positive class higher than the one from the negative class (it really measures the quality of the ranking).
Log loss can also be a good candidate as an overall metric; the reasoning can be read here, from fast.ai:
Log Loss vs Accuracy
Accuracy is the count of predictions where your predicted value equals the actual value. Accuracy is not always a good indicator because of its yes-or-no nature.
Log Loss takes into account the uncertainty of your prediction based on how much it varies from the actual label. This gives us a more nuanced view into the performance of our model.
RMSE, on the other hand, is a regression metric and should not be used for classification.
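A small made-up example of that difference (the probabilities below are invented for illustration): two classifiers that get the same hard labels right, and therefore have the same accuracy, can still have very different log loss depending on how confident their predictions were:

```python
# Invented probabilities: identical accuracy, different log loss.
import numpy as np
from sklearn.metrics import accuracy_score, log_loss

y_true = np.array([1, 1, 0, 0])
confident = np.array([0.95, 0.90, 0.05, 0.10])   # right and sure
hesitant  = np.array([0.55, 0.60, 0.45, 0.40])   # right, but barely past 0.5

for name, p in [("confident", confident), ("hesitant", hesitant)]:
    acc = accuracy_score(y_true, (p >= 0.5).astype(int))
    print(f"{name}: accuracy={acc:.2f}  logloss={log_loss(y_true, p):.3f}")
```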
answered 12 hours ago (edited 6 hours ago) by Simon Larsson
I fail to see the difference between an optimization metric and an "evaluation metric". There may be statistics that are well suited for validation/evaluation because they can be adapted to the train/test methodology, but there is no reason to believe that log loss cannot be used to evaluate a classifier. In fact, log loss is incredibly popular in a large number of Kaggle competitions, as is the Brier score. Another similar metric is RMSE: it is optimized in countless regression algorithms to derive parameter estimates/splits, but it is also used to evaluate the performance of a regression model. – aranglol, 6 hours ago
That is because I was wrong. I was under the faulty impression that log loss was rarely used for evaluation and was mostly used as a loss function. Thank you for fact-checking; I will edit to avoid misleading others! :) – Simon Larsson, 6 hours ago