Outliers and gross errors in the data, as well as deviations from the normality assumption on the error distribution, adversely affect statistical analyses based on classical procedures such as least squares and maximum likelihood. We briefly review the cross‐validation (CV) approach to model selection and discuss its robust extension based on weighted likelihood methodology. The main advantage of weighted CV is its computational speed: it is much faster than many of the robust model selection procedures proposed in the literature. We present the weighted CV algorithm and its operating characteristics, and illustrate its performance under symmetric and asymmetric contamination. In the absence of contamination and under mild conditions, the procedure is asymptotically equivalent to the CV method for model selection. It is also asymptotically loss efficient, and it differs from the robust CV procedure introduced in the literature in that it downweights only those observations that do not fit the full model well.
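The idea described above can be sketched in a few lines: fit the full (largest) candidate model, downweight observations with large standardized residuals under that fit, and then score every candidate model by weighted leave‐one‐out CV. This is only a minimal illustration, not the paper's weighted likelihood procedure; in particular, the Huber‐type weight function and the example data below are stand‐in assumptions, whereas the article derives weights from weighted likelihood estimating equations.

```python
import numpy as np

def huber_weights(residuals, c=1.345):
    # Stand-in weight function: downweight observations whose standardized
    # residuals under the FULL model exceed the Huber cutoff c.
    scale = np.median(np.abs(residuals - np.median(residuals))) / 0.6745
    u = np.abs(residuals) / (scale + 1e-12)
    return np.where(u <= c, 1.0, c / u)

def weighted_cv_score(X, y, w):
    # Weighted leave-one-out CV: each held-out squared prediction error
    # is weighted by how well that observation fits the full model.
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        m = np.arange(n) != i
        sw = np.sqrt(w[m])
        beta, *_ = np.linalg.lstsq(X[m] * sw[:, None], y[m] * sw, rcond=None)
        errs[i] = (y[i] - X[i] @ beta) ** 2
    return np.sum(w * errs) / np.sum(w)

def weighted_cv_select(designs, y):
    # designs: list of candidate design matrices, largest (full) model last.
    X_full = designs[-1]
    beta, *_ = np.linalg.lstsq(X_full, y, rcond=None)
    w = huber_weights(y - X_full @ beta)       # weights from the full model only
    scores = [weighted_cv_score(X, y, w) for X in designs]
    return int(np.argmin(scores)), w, scores

# Hypothetical demo: true model is linear, contaminated by two gross outliers.
rng = np.random.default_rng(0)
n = 40
x = np.linspace(0.0, 1.0, n)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(n)
y[[5, 20]] += 8.0                              # symmetric-in-x gross errors
designs = [np.ones((n, 1)),                    # intercept only
           np.column_stack([np.ones(n), x]),   # linear (true model)
           np.column_stack([np.ones(n), x, x ** 2])]  # quadratic (full)
best, w, scores = weighted_cv_select(designs, y)
```

Because the weights are computed once from the full model, the candidate‐specific work is just ordinary weighted least‐squares CV, which is why the procedure stays fast; the outlying observations receive weights close to zero and barely influence the selection.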
This article is categorized under:
Statistical and Graphical Methods of Data Analysis > Modeling Methods and Algorithms
Statistical and Graphical Methods of Data Analysis > Robust Methods
Statistical Models > Linear Models
Statistical Models > Model Selection
Classical cross‐validation method for model selection and its robustification via the weighted likelihood estimating equations method.