Automated Model Selection in Principal Component Analysis: A New Approach Based on the Cross-Validated Ignorance Score
Dataset posted on 18.07.2019, 18:45 by Stefania Russo, Guangyu Li, Kris Villez
Principal component analysis (PCA) is by far the most widespread tool for unsupervised learning with high-dimensional data sets. It is widely used for exploratory data analysis and online process monitoring. Unfortunately, tuning PCA models, and particularly selecting the number of components, remains a challenging task. Today, this selection is often based on a combination of guiding principles, experience, and process understanding. Unlike the case of regression, where cross-validation of the prediction error is a widespread and trusted approach for model selection, no tool for PCA model selection enjoys this level of acceptance. In this work, we address this challenge and evaluate the utility of the cross-validated ignorance score with both simulated and experimental data sets. Application of this model selection criterion is based on the interpretation of PCA as a density model, as in probabilistic principal component analysis. With simulation-based benchmarking, it is shown to be (a) the overall best performing criterion, (b) the preferred criterion at high noise levels, and (c) very robust to changes in noise level. Tests on experimental data sets show that the ignorance score is sensitive to deviations from the PCA model structure, indicating that the criterion is also useful for detecting model–reality mismatch.
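The abstract's core idea, interpreting PCA as a probabilistic density model and cross-validating its held-out log-likelihood, can be sketched in a few lines. This is not the authors' implementation; it is a minimal illustration that relies on scikit-learn's `PCA.score`, which returns the average log-likelihood of samples under the probabilistic PCA model of Tipping and Bishop. The simulated data (3 latent components in 10 dimensions) and all parameter choices are assumptions for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import KFold

# Hypothetical simulated data: 3 latent components embedded in 10
# dimensions, plus isotropic Gaussian noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))
loadings = rng.normal(size=(3, 10))
X = latent @ loadings + 0.1 * rng.normal(size=(200, 10))

def cv_ignorance(X, n_components, n_splits=5):
    """Mean ignorance score (negative held-out log-likelihood) of a
    probabilistic PCA model with the given number of components."""
    scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(X):
        pca = PCA(n_components=n_components).fit(X[train])
        # PCA.score gives the average log-likelihood under the PPCA
        # density; the ignorance score is its negation.
        scores.append(-pca.score(X[test]))
    return float(np.mean(scores))

# Select the number of components that minimizes the CV ignorance score.
best_k = min(range(1, 8), key=lambda k: cv_ignorance(X, k))
print(best_k)
```

In this setup the criterion should favor a model near the true latent dimensionality, since extra components mostly fit noise and raise the held-out ignorance score.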