How to reduce predictors the right way for a logistic regression model


So I have been reading some books (or parts of them) on modeling (F. Harrell's "Regression Modeling Strategies", among others), since my current task is to build a logistic model on binary response data. My data set contains continuous, categorical, and binary predictors. I currently have around 100 predictors, which is obviously far too many for a good model. Moreover, many of these predictors are related, since they are often based on the same underlying metric, just measured slightly differently.



Anyhow, from what I have been reading, univariate screening and stepwise techniques are among the worst things you can do to reduce the number of predictors. I think the LASSO is quite okay (if I understood it correctly), but obviously you can't just use it on 100 predictors and expect any good to come of it.



So what are my options here? Do I really just have to sit down with my supervisors and the smart people at work and think hard about what the top five predictors could/should be (we might be wrong), or are there other approaches I should consider instead?



And yes, I also know this topic is heavily discussed (online and in books), but it can seem a bit overwhelming when you are new to this modeling field.










Tags: logistic, predictive-models, modeling, predictor






Asked by Denver Dang; edited by Ben Bolker.




















2 Answers


















Answer by Ben Bolker (score 3):

+1 for "sometimes seems a bit overwhelming". It really depends (as Harrell clearly states; see the section at the end of Chapter 4) whether you want to do:




• confirmatory analysis ($\to$ reduce your predictor complexity to a reasonable level without looking at the responses, by PCA or subject-area considerations or ...)


• predictive analysis ($\to$ use appropriate penalization methods). Lasso could very well work OK with 100 predictors, if you have a reasonably large sample. Feature selection will be unstable, but that's OK if all you care about is prediction. I have a personal preference for ridge-like approaches that don't technically "select features" (because they never reduce any parameter to exactly zero), but whatever works ...



You'll have to use cross-validation to choose the degree of penalization, which will destroy your ability to do inference (construct confidence intervals on predictions) unless you use cutting-edge high-dimensional inference methods (e.g. Dezeure et al. 2015; I have not tried these approaches, but they seem sensible ...). A sketch of this penalized route follows the list.




          • exploratory analysis: have fun, be transparent and honest, don't quote any p-values.
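To make the penalized option concrete, here is a minimal sketch in Python with scikit-learn (the answer names no library, so the choice of scikit-learn, the synthetic data, and all tuning values are assumptions for illustration). Cross-validation chooses the penalty strength; with penalty="l1" some coefficients are driven exactly to zero (lasso-style selection), while penalty="l2" gives the ridge-like shrinkage described above.

    # Sketch of penalized logistic regression with a CV-chosen penalty.
    # Assumptions: scikit-learn as the library; synthetic stand-in data.
    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 100))    # placeholder for ~100 mixed predictors
    y = rng.integers(0, 2, size=500)   # placeholder binary response

    model = make_pipeline(
        StandardScaler(),              # penalties assume comparable scales
        LogisticRegressionCV(
            Cs=20,                     # grid of inverse penalty strengths
            cv=5,                      # cross-validation picks the penalty
            penalty="l1",              # "l2" for the ridge-like variant
            solver="saga",             # saga supports the l1 penalty
            max_iter=5000,
        ),
    )
    model.fit(X, y)
    n_kept = np.sum(model[-1].coef_.ravel() != 0)
    print(f"predictors with non-zero coefficients: {n_kept}")

Note that with penalty="l2" no coefficient is ever exactly zero, which matches the ridge-like preference above: every predictor stays in the model, just shrunk.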





Answer by resnet (score 0; new contributor):

There are many different approaches. What I would recommend is trying some simple ones, in the following order (a short sketch of the second item follows the list):

• L1 regularization (with an increasing penalty: the larger the regularization coefficient, the more features will be eliminated)

• Recursive Feature Elimination (https://scikit-learn.org/stable/modules/feature_selection.html#recursive-feature-elimination): removes features incrementally by eliminating those associated with the smallest model coefficients (assuming those are the least important ones; it is obviously crucial here to normalize the input features)

• Sequential Feature Selection (http://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/): removes features based on how important they are for predictive performance
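A minimal sketch of the Recursive Feature Elimination item, since the answer links to scikit-learn (the synthetic data, n_features_to_select=10, and step=5 are placeholder choices, not from the answer). The scaler provides the normalization the answer stresses:

    # Sketch: RFE around a logistic model, dropping the 5 weakest
    # features per round until 10 remain (placeholder numbers).
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 100))    # placeholder predictors
    y = rng.integers(0, 2, size=500)   # placeholder binary response

    selector = make_pipeline(
        # Scale first: RFE ranks features by |coefficient|, which is
        # only meaningful when the inputs share a common scale.
        StandardScaler(),
        RFE(
            LogisticRegression(max_iter=1000),
            n_features_to_select=10,
            step=5,
        ),
    )
    selector.fit(X, y)
    print("kept feature indices:", np.flatnonzero(selector[-1].support_))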




