DiscoverStatLearn 2012 - Workshop on "Challenging problems in Statistical Learning"
4.1 Data-driven penalties: heuristics, results and thoughts... (Pascal Massart)

Update: 2014-12-03

Description

The idea of selecting a model by penalizing a log-likelihood-type criterion goes back to the early seventies, with the pioneering work of Mallows and Akaike. The literature contains many consistency results for such criteria. These results are asymptotic in the sense that one deals with a fixed collection of models while the number of observations tends to infinity. In recent years, a non-asymptotic theory for this type of criterion has been developed, which allows both the size and the number of models to depend on the sample size. For these methods to be practically relevant, it is desirable to have a precise expression for the penalty terms involved in the penalized criteria on which they are based. We will discuss some heuristics for designing data-driven penalties, review some new results, and discuss some open problems.
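To make the penalized-criterion idea concrete, here is a minimal sketch (not from the talk) of AIC-style model selection: fit a family of polynomial regression models and pick the degree minimizing a penalized negative log-likelihood. The simulated data, the unit penalty weight, and the polynomial model family are arbitrary illustrative choices, not Massart's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression data: the true signal is a cubic polynomial
# observed with Gaussian noise (an arbitrary illustrative setup).
n = 200
x = np.linspace(-1.0, 1.0, n)
y = 1.0 + 2.0 * x - 1.5 * x**3 + rng.normal(scale=0.3, size=n)

def penalized_criterion(degree, penalty_weight=1.0):
    """AIC-type criterion: negative log-likelihood plus penalty * model dimension."""
    # Least-squares fit of a polynomial of the given degree (the candidate model).
    coefs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coefs, x)
    sigma2 = np.mean(resid**2)  # Gaussian MLE of the noise variance
    # Maximized Gaussian log-likelihood in terms of the residual variance.
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    dim = degree + 1            # number of free mean parameters
    return -loglik + penalty_weight * dim

# Select the degree minimizing the penalized criterion.
degrees = range(9)
best = min(degrees, key=penalized_criterion)
```

With a fixed penalty weight this is the classical Akaike-style recipe; the data-driven penalties discussed in the talk replace the hand-chosen `penalty_weight` by a quantity calibrated from the data itself.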

Charles Bouveyron