Fundamentals of Statistics contains material from various lectures and courses of H. Lohninger on statistics, data analysis and chemometrics.
See also: eigenvectors, PC loadings and scores, applications of PCA, factor analysis, Exercise: Detection of mixtures of two different wines by PCA, Exercise: Classification of unknown wine samples by PCA, Confidence Interval
Principal Component Analysis
Principal component analysis (PCA) is a versatile method which not only allows one to examine high-dimensional datasets (with more than three variables) but which also has special mathematical properties that support the calculation of multiple linear regression models even in the case of strongly correlated variables. One of the problems with multivariate datasets is that they cannot be displayed on two-dimensional paper or computer screens. The more variables (dimensions) a dataset has, the more complicated and opaque the situation becomes, until finally no relationships can be recognized at all.
If we assume that information can be gained from the dataset only in directions which show a maximum of variation, we simply have to find those directions which exhibit the maximum variation and align the rotated axes along them. In addition, these new axes should again be orthogonal to each other. To find the new axes, we first search for the direction of the maximum extent of the dataset; this becomes the direction of the first axis. Thereafter we take another axis normal to the first and rotate it around the first axis until the variation along this new axis is a maximum. Then we add a third axis, again orthogonal to the other two and in the direction of the maximum remaining variation, and so on. This procedure is repeated until all dimensions have been "used up".

The process described above is called principal component analysis (PCA) and results in a rotated coordinate system whose axes show a maximum of variation in their directions. This somewhat simplified picture can be mathematically condensed to a so-called eigenvalue problem: the eigenvectors of the covariance matrix constitute the principal components, and the corresponding eigenvalues indicate how much "information" is contained in the individual components. As an illustration, consider a three-dimensional data set and its principal components. Note that the principal components are orthogonal to each other, and the correlation between any two principal components is zero.
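The eigenvalue formulation above can be sketched in a few lines of NumPy. This is a minimal illustration, not the e-book's own implementation; the synthetic three-dimensional data set and all variable names are assumptions. It computes the eigenvectors of the covariance matrix, projects the centered data onto them, and checks the two properties stated in the text: the principal components are orthogonal, and the correlation between any two components is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical three-dimensional data set with correlated variables
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.5, 0.3],
                                          [0.0, 1.0, 0.4],
                                          [0.0, 0.0, 0.5]])
Xc = X - X.mean(axis=0)                 # center the data

cov = np.cov(Xc, rowvar=False)          # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices
order = np.argsort(eigvals)[::-1]       # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs                   # project data onto the components

# The eigenvectors (principal components) are orthonormal:
print(np.round(eigvecs.T @ eigvecs, 6))
# The scores on any two components are uncorrelated:
print(np.round(np.corrcoef(scores, rowvar=False), 6))
```

Each eigenvalue equals the variance of the scores along its component, so the fraction `eigvals / eigvals.sum()` tells how much of the total variation each component captures.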


Last Update: 2012-10-08