Using Random Forest variable importance for feature selection




I'm currently trying to convince my colleague that their method of feature selection is causing data leakage, and I need help doing so.



The method they are using is as follows: they first run a random forest on all variables and obtain the feature-importance measure MeanDecreaseAccuracy. They then remove all variables that score low on this measure, re-run the forest, and report the out-of-bag (OOB) error rate as the error for the model.



They argue that because MeanDecreaseAccuracy is calculated using the bootstrap and the out-of-bag records, there is no data leakage. I am trying to convince them that since the variable-importance measure uses ALL of the data (in-bag records to build the trees and out-of-bag records to obtain the decrease in accuracy), there is data leakage if they use this measure to do feature selection in this manner.



My solution for them was that they cannot use the out-of-bag error if they want to do feature selection; they will have to set up a proper cross-validation split and perform the feature selection on the training sets only.



Am I incorrect here? Can anyone think of a convincing argument (example or paper) that I can show my colleague?
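To make the leakage concrete, here is a small simulation I would show the colleague. It is a sketch under assumed tooling: scikit-learn stands in for R's randomForest, and sklearn's `feature_importances_` is impurity-based rather than the permutation-based MeanDecreaseAccuracy, but the leakage mechanism is the same, because the ranking comes from a forest that has seen every row.

```python
# Sketch (assumed setup): rank features on ALL the data, keep the top ones,
# refit, and report OOB error -- the contested procedure from the question.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 100, 300
X = rng.normal(size=(n, p))        # pure noise features
y = rng.integers(0, 2, size=n)     # labels independent of X

# Step 1: compute importances from a forest fit on the full data set.
full = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(full.feature_importances_)[-10:]   # keep 10 "best" features

# Step 2: refit on the selected features and report the OOB accuracy.
sel = RandomForestClassifier(n_estimators=200, oob_score=True,
                             random_state=0).fit(X[:, top], y)
print(f"OOB accuracy after selecting on all data: {sel.oob_score_:.2f}")
```

Since the features carry no signal at all, an honest error estimate should sit near 50% accuracy; any systematic excess reported by this procedure is the leakage in question, because the selection step already used the rows that later serve as "out of bag".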










  • you might like to read up on Boruta – Sycorax, 12 hours ago










  • Yes, I am familiar with Boruta, but this isn't about the merits of Boruta vs. traditional random-forest importance; it's about the correct implementation of feature selection. From what I understand of Boruta, it would have the same issues if applied in the manner I describe above. – astel, 9 hours ago










  • Sure, there's a way to misuse any tool. But I've found it helpful to "yes-but" the bad ideas that other people have instead of saying "no", as in "Yes, we can use random forest to do feature selection. Here's a good way that we can do that..." – Sycorax, 9 hours ago

















feature-selection random-forest bootstrap data-leakage






asked 12 hours ago









astel

1 Answer
You are entirely correct!



A couple of months ago I was in exactly the same position, justifying a different feature-selection approach in front of my supervisors. I will quote the sentence I used in my thesis, although it has not been published yet.




Since the ordering of the variables depends on all samples, the selection step is performed using information of all samples and thus, the OOB error of the subsequent model no longer has the properties of an independent test set as it is not independent from the previous selection step.



- Marc H.




For references, see section 4.1 of 'A new variable selection approach using Random Forests' by Hapfelmeier and Ulm, or 'Application of Breiman's Random Forest to Modeling Structure-Activity Relationships of Pharmaceutical Molecules' by Svetnik et al., who address this issue in the context of forward/backward feature selection.






        answered 11 hours ago









bi_scholar
