CROSS-VALIDATION

K-fold cross-validation is a model selection tool. For each model estimated on a given data set (the training-validation set), k-fold cross-validation produces a score that estimates the model's performance on a new data set (the testing set). The procedure splits the training-validation set into k equally sized blocks. At each stage, k-1 blocks are used to estimate the model and the remaining block is used to compute its average prediction error. The blocks used for estimation are called the training data, and the block used for prediction diagnostics is called the validation data. The process repeats until each block has served as the validation data exactly once. The results of the k validation procedures (corresponding to the k different validation blocks) are averaged to produce the overall cross-validation score, which estimates the true prediction error on the training-validation set.
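To make the procedure concrete, below is a minimal sketch in Python/NumPy; it is an illustration written for this page, not part of any standard library. The model is assumed to be ordinary least squares regression, the per-fold score is the mean squared prediction error, and the function name kfold_cv_score is invented for the example.

    import numpy as np

    def kfold_cv_score(X, y, k=5, seed=0):
        """Overall k-fold cross-validation score (mean squared prediction error)."""
        n = len(y)
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(n), k)    # k (nearly) equal random blocks
        errors = []
        for i in range(k):
            val = folds[i]                               # validation block
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            # Estimate the model on the k-1 training blocks ...
            beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
            # ... and record its average prediction error on the validation block.
            errors.append(np.mean((y[val] - X[val] @ beta) ** 2))
        return np.mean(errors)                           # average over the k stages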

To use cross-validation for model selection, we choose the model with the smallest cross-validation score over the whole set of candidates. Cross-validation has the advantage over the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) of not making any assumptions about the structure and distributions of the true model. On the flip side, it tends to perform worse on small data sets, where setting aside a validation block leaves too little data for reliable estimation.
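As a hypothetical illustration of this selection rule, the snippet below reuses kfold_cv_score from the sketch above to compare polynomial regressions of degrees 1 through 4; the simulated data-generating process is invented purely for the example.

    degrees = [1, 2, 3, 4]
    x = np.linspace(0, 1, 100)[:, None]
    y = np.sin(2 * np.pi * x[:, 0]) + 0.3 * np.random.default_rng(1).standard_normal(100)
    scores = {}
    for d in degrees:
        X = np.hstack([x ** p for p in range(d + 1)])   # columns 1, x, ..., x^d
        scores[d] = kfold_cv_score(X, y, k=5)
    best_degree = min(scores, key=scores.get)           # smallest CV score wins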

The split of the data into blocks can be done in either a deterministic or a random fashion. Still, even when the split is random, cross-validation is not to be confused with the bootstrap. In the bootstrap we sample N data points with replacement, where N is the sample size of the training-validation set. In cross-validation each validation block is a sample of int(N / k) data points drawn without replacement, where "int()" stands for the integer part.
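The contrast between the two sampling schemes can be seen in two lines of code (again an assumed illustration, not from the original text): the bootstrap draws a sample of full size N with replacement, while cross-validation partitions the indices into k disjoint blocks of roughly N / k points each.

    rng = np.random.default_rng(0)
    N, k = 100, 5
    boot_sample = rng.choice(N, size=N, replace=True)    # bootstrap: with replacement
    cv_blocks = np.array_split(rng.permutation(N), k)    # CV: disjoint blocks, no replacement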


CROSS-VALIDATION REFERENCES

Efron, B., & Hastie, T. (2017). Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. Cambridge University Press.

James, G., Witten, D., Hastie, T., & Tibshirani, R. (2017). An Introduction to Statistical Learning: with Applications in R (Corr. 7th printing). Springer New York.

Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. J. Roy. Statist. Soc. Ser. B 36, pp. 111-147.

Stone, M. (1977). An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion. J. Roy. Statist. Soc. Ser. B 39, pp. 44-47.

Zhang, P. (1993). Model selection via multifold cross-validation. Ann. Statist. 21, pp. 299-311.

Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy assessment and model selection. International Joint Conference on Artificial Intelligence.

Efron, B., & Tibshirani, R. (1997). Improvements on cross-validation: the .632+ bootstrap method. J. Amer. Statist. Assoc. 92, pp. 548-560.

Bishop, C. (1995). Neural Networks for Pattern Recognition. Clarendon Press, Oxford.

Breiman, L., Friedman, J., Olshen, R., & Stone, C. (1984). Classification and Regression Trees. Wadsworth.

Ripley, B. D. (1996). Pattern Recognition and Neural Networks. Cambridge University Press.



