Using Random Forest variable importance for feature selection
I'm currently trying to convince my colleague that their method of doing feature selection is causing data leakage, and I need help doing so.
The method they are using is as follows:
They first run a random forest on all variables and obtain the feature importance measure MeanDecreaseAccuracy. They then remove all variables that score low on this measure, re-run the forest, and report the out-of-bag (OOB) error rate as the error for the model.
They argue that since MeanDecreaseAccuracy is calculated using the bootstrap and the out-of-bag records, there is no data leakage. I am trying to convince them that since the variable importance measure uses ALL of the data (in-bag records to build the trees, out-of-bag records to obtain the decrease in accuracy), there is data leakage if they use this measure to do feature selection in this manner.
My suggestion was that they cannot use the out-of-bag error if they want to do feature selection this way; they would have to set up a proper cross-validation split and perform the feature selection on the training sets only.
Am I incorrect here? Can anyone think of a convincing argument (an example or a paper) that I can show my colleague?
feature-selection random-forest bootstrap data-leakage

asked 12 hours ago by astel
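To make the leak concrete, here is a minimal simulation sketch in R using the randomForest package (where MeanDecreaseAccuracy comes from). Everything here is hypothetical: the predictors are pure noise, and the sizes (n = 100, p = 200, keep the top 10) are arbitrary choices for illustration. Because the labels are independent of the predictors, any honest error estimate should sit near 50%; the two-step procedure described above typically reports something noticeably lower, because the importance ranking was computed on all of the data.

    library(randomForest)

    set.seed(42)
    n <- 100
    p <- 200
    x <- data.frame(matrix(rnorm(n * p), n, p))     # pure-noise predictors
    y <- factor(sample(0:1, n, replace = TRUE))     # labels independent of x

    ## Step 1: rank variables on the FULL data set via permutation importance
    rf_full <- randomForest(x, y, importance = TRUE)
    imp     <- importance(rf_full, type = 1)        # type = 1: MeanDecreaseAccuracy
    top     <- order(imp, decreasing = TRUE)[1:10]  # the 10 "best" noise variables

    ## Step 2: refit on the selected variables and report the OOB error,
    ## mirroring the procedure described above
    rf_sel <- randomForest(x[, top], y)
    rf_sel$err.rate[rf_sel$ntree, "OOB"]            # typically well below the true 0.5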
You might like to read up on Boruta.
– Sycorax, 12 hours ago

Yes, I am familiar with Boruta, but this isn't about the merits of Boruta versus traditional random forest importance; it's about the correct implementation of feature selection. From what I understand of Boruta, it would have the same issue if applied in the manner I describe above.
– astel, 9 hours ago

Sure, there's a way to misuse any tool. But I've found it helpful to "yes-but" the bad ideas other people have instead of saying "no", as in: "Yes, we can use random forest to do feature selection. Here's a good way that we can do that..."
– Sycorax, 9 hours ago
1 Answer
You are entirely correct!
A couple of months ago I was in the exact same position, justifying a different feature-selection approach in front of my supervisors. I will quote the sentence I used in my thesis, although it has not been published yet.
"Since the ordering of the variables depends on all samples, the selection step is performed using information of all samples and thus, the OOB error of the subsequent model no longer has the properties of an independent test set as it is not independent from the previous selection step."
- Marc H.
For references, see section 4.1 of 'A new variable selection approach using Random Forests' by Hapfelmeier and Ulm, or 'Application of Breiman's Random Forest to Modeling Structure-Activity Relationships of Pharmaceutical Molecules' by Svetnik et al., who address this issue in the context of forward/backward feature selection.
answered 11 hours ago by bi_scholar
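For comparison, here is a sketch of the leak-free procedure proposed in the question, of the resampling-embedded kind that the references above discuss: the importance ranking and the cutoff are recomputed inside each training fold, and error is measured only on the held-out fold. It continues the hypothetical simulation sketched under the question (same x, y, and arbitrary top-10 cutoff).

    ## Leak-free version: repeat the selection inside every cross-validation fold
    k_folds <- 5
    folds   <- sample(rep(1:k_folds, length.out = n))

    cv_errs <- sapply(1:k_folds, function(k) {
      tr    <- folds != k                          # training rows for this fold
      rf_tr <- randomForest(x[tr, ], y[tr], importance = TRUE)
      top_k <- order(importance(rf_tr, type = 1), decreasing = TRUE)[1:10]
      rf_k  <- randomForest(x[tr, top_k], y[tr])
      pred  <- predict(rf_k, newdata = x[!tr, top_k])
      mean(pred != y[!tr])                         # error on rows the selection never saw
    })

    mean(cv_errs)                                  # stays near 0.5 on pure noise

On pure-noise data, the gap between this number and the OOB error of the leaky procedure is exactly the selection bias the question describes.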