PCA

Today I'm summarizing the PCA material from last semester's pattern recognition course ^ ^ since it really is quite important.

Background of dimensionality reduction

  1. Curse of Dimensionality: (1) Keep adding features → the dimensionality of the feature space grows → the feature space becomes sparser → it becomes much easier to find a seemingly perfect solution on the training data → overfitting → the model generalizes poorly and its performance on unseen data decreases. (2) Computational cost grows with the dimensionality.

  2. How: Dimensionality reduction → project the data from the high-dimensional space into a lower-dimensional space → remove redundant and irrelevant features without incurring much loss of information → obtain a set of principal features (this helps avoid overfitting and redundancy).

  3. Advantages : Visualization, Data Compression, Noise Removal

Mathematical Background:

  1. Variance: Variance measures how spread out the dataset is, i.e. how far each value in the dataset is from the mean. $$\sigma^2_x = var(x) = \sum_i (x_i - x_{mean})^2/N$$

  2. Covariance: Covariance measures how much two variables change together. Positive covariance means X and Y are positively related, i.e. as X increases Y also increases. When feature x and feature y are independent (or merely uncorrelated), $$\sigma(x,y) = 0$$

    $$\sigma(x,y) = cov(x,y) = \sum_i (x_i - x_{mean})(y_i - y_{mean})/N$$

  3. Covariance Matrix: $$C_{i,j} = \sigma(x_i,x_j)$$ where $$C \in R^{d \times d}$$ and d is the number of dimensions (random variables) of the data. The covariance matrix together with linear transformations is an important building block for understanding and using PCA, SVD, and the Bayes classifier.
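    As a quick sanity check, here is a minimal NumPy sketch (toy data and variable names are my own) that computes the variance, the covariance, and the full covariance matrix exactly as defined above (note that np.cov divides by N-1 by default, while the formulas here divide by N):

    ```python
    import numpy as np

    # Toy data: n = 5 samples of d = 2 features (values chosen only for illustration)
    x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
    y = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
    X = np.stack([x, y])                                   # shape (d, n), one row per feature

    n = X.shape[1]
    var_x = np.sum((x - x.mean()) ** 2) / n                # variance of x
    cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / n   # covariance of x and y

    Xc = X - X.mean(axis=1, keepdims=True)                 # center each feature
    C = Xc @ Xc.T / n                                      # d x d covariance matrix

    print(var_x, cov_xy)                                   # matches C[0, 0] and C[0, 1]
    print(C)
    ```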

  4. Eigenvectors & Eigenvalues:

    Eigen decomposition: $Ax = \lambda x$, where $x$ is an eigenvector of A and $\lambda$ is the corresponding eigenvalue.

    We can obtain the eigenvalues and eigenvectors by decomposing the covariance matrix (via eigendecomposition, or equivalently via Singular Value Decomposition (SVD) of the centered data matrix). The eigenvectors are the unit vectors that represent the directions of largest variance of the data, while the eigenvalues represent the magnitude of the variance in the corresponding directions. The k-th largest eigenvalue of S is the variance of the k-th PC.
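    A minimal sketch of this decomposition step with NumPy (the covariance matrix values are toy numbers of my own; np.linalg.eigh is used because a covariance matrix is symmetric):

    ```python
    import numpy as np

    # Toy symmetric covariance matrix S (values for illustration only)
    S = np.array([[8.0, 3.2],
                  [3.2, 2.0]])

    # eigh returns eigenvalues in ascending order, so sort them descending
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Each column of eigvecs is a unit-length principal direction;
    # the k-th eigenvalue is the variance of the data along that direction.
    print(eigvals)
    print(eigvecs)

    # Sanity check of S v = lambda v for the leading eigenpair
    v, lam = eigvecs[:, 0], eigvals[0]
    print(np.allclose(S @ v, lam * v))   # True
    ```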

  5. Projection:

    Given a matrix A whose (orthonormal) columns span a subspace and a matrix B to be projected, the projection of B onto the subspace of A is given by $A^TB$.
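    A minimal sketch of this projection (the matrix values are my own toy numbers; A must have orthonormal columns for $A^T B$ to give the coordinates in the subspace):

    ```python
    import numpy as np

    # A: d x p matrix with orthonormal columns (here a single direction, p = 1, d = 2)
    A = np.array([[1.0],
                  [0.0]])

    # B: d x n data matrix, one data point per column
    B = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

    coords = A.T @ B      # (p, n) coordinates of B in the subspace spanned by A
    print(coords)         # [[1. 2. 3.]] -- the component of each point along A
    ```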

PCA (often used in feature engineering):

  1. Work-flow

    In ML problems, the input features are sometimes correlated. This means the input data carries redundant dimensions, and we can convert the correlated features into orthogonal features to reduce the feature dimensionality. Equivalently, the correlation between the new features is zero, so the principal components are a set of values of linearly uncorrelated variables.

    PCA is based on the sample covariance: we compute the eigenvectors of the covariance matrix S (an orthonormal basis for the data, so each principal component is orthogonal to all the previous ones). Projecting the data onto this orthonormal basis and discarding the low-significance dimensions greatly reduces the computational cost and complexity. Because PCA does not use the labels of the dataset, it is an unsupervised method, and the criterion for a good representation in unsupervised learning is to minimize information loss.

    More specifically, compute the PCs via SVD of the centered data matrix: $$X_{d,n} = [(x_1 - \bar x), \dots, (x_n - \bar x)]$$ $$X_{d,n} = U_{d,d} D_{d,n} V_{n,n}^T$$ where $U, V$ are orthogonal matrices and $D$ is a (rectangular) diagonal matrix. The columns of $U$ are the eigenvectors of the covariance matrix, and the squared singular values divided by $n$ are the corresponding eigenvalues.

    Then the first p eigenvectors form $$ G = [a_1, a_2, \dots, a_p]$$ and we project both the training and the test data into the PC space: $$ y = G^T x$$ (centering both with the training mean and using the same G).
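    Putting the workflow together, here is a minimal NumPy sketch of PCA via SVD of the centered data matrix (function names, toy data, and the choice p = 2 are my own), following the $X = UDV^T$ and $y = G^T x$ notation above:

    ```python
    import numpy as np

    def pca_fit(X, p):
        """Fit PCA. X: (d, n) data matrix with one sample per column; p: number of PCs."""
        mean = X.mean(axis=1, keepdims=True)
        Xc = X - mean                               # centered data matrix
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        G = U[:, :p]                                # first p eigenvectors (principal directions)
        var = s ** 2 / X.shape[1]                   # variances of the PCs (eigenvalues of S)
        return G, mean, var

    def pca_transform(X, G, mean):
        return G.T @ (X - mean)                     # y = G^T (x - mean), shape (p, n)

    # Toy usage: reduce 3-D data to p = 2 principal components
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(3, 100))
    X_test = rng.normal(size=(3, 20))

    G, mean, var = pca_fit(X_train, p=2)
    Y_train = pca_transform(X_train, G, mean)       # project training data
    Y_test = pca_transform(X_test, G, mean)         # project test data with the same G and mean
    print(Y_train.shape, Y_test.shape, var[:2])
    ```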

  2. Data Visualization

    Data points are represented in the rotated orthogonal coordinate system. The origin is the mean of the data points and the axes are provided by the eigenvectors.
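    As a sketch (the correlated toy data and the matplotlib plotting choices are my own), plotting the same points in the original coordinates and in the PC coordinates shows this rotation and centering:

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Correlated 2-D toy data
    rng = np.random.default_rng(0)
    X = rng.multivariate_normal([0, 0], [[3.0, 1.5], [1.5, 1.0]], size=200).T   # (d, n)

    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Y = U.T @ Xc          # coordinates in the rotated (eigenvector) system, centered at the mean

    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    axes[0].scatter(X[0], X[1], s=10)
    axes[0].set_title("original coordinates")
    axes[1].scatter(Y[0], Y[1], s=10)
    axes[1].set_title("PC coordinates")
    plt.show()
    ```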

  3. How many principal components are enough for good performance of the predictive model?

    Choose a threshold (e.g. 0.95) on the cumulative explained variance. If adding one more principal component increases the explained variance by a large margin, include it; otherwise stop. Also check the prediction accuracy obtained with the included principal components.
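    A minimal sketch of this selection rule with scikit-learn's PCA (the toy data and the redundant-feature construction are my own; sklearn expects one sample per row):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Toy data: 200 samples x 10 features, where the last 5 features are nearly
    # copies of the first 5, so ~5 components should explain almost all variance
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    X[:, 5:] = X[:, :5] + 0.01 * rng.normal(size=(200, 5))

    pca = PCA().fit(X)
    cum_var = np.cumsum(pca.explained_variance_ratio_)
    k = int(np.searchsorted(cum_var, 0.95)) + 1     # smallest k reaching 95% explained variance
    print(cum_var.round(3), k)

    # Equivalently, let sklearn pick the number of components from the threshold
    pca95 = PCA(n_components=0.95).fit(X)
    print(pca95.n_components_)
    ```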

  4. In what situations is PCA not a good approach for dimensionality reduction?

    PCA is a linear transformation technique: if the features are non-linearly related, PCA cannot capture the non-linear structure. Additionally, PCA is not always an optimal feature extraction procedure for classification problems. Suppose there are C classes in the training data; PCA ignores the class information, so the projection axes chosen by PCA might not provide good discriminant power.

---------------------- End of post ----------------------