Dimension reduction is critical in many areas of data mining and machine learning. In this paper, we propose a Covariance-Preserving Projection Method (CPM) for dimension reduction. CPM maximizes class discrimination while approximately preserving the class covariance. The optimization involved in CPM can be formulated as a low-rank approximation of a collection of matrices, which can be solved iteratively. Our theoretical and empirical analysis reveals the relationships between CPM and Linear Discriminant Analysis (LDA), the Sliced Average Variance Estimator (SAVE), and Heteroscedastic Discriminant Analysis (HDA), giving new insight into the nature of these different algorithms. We evaluate the effectiveness of the proposed algorithm on both synthetic and real-world datasets.
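To give a concrete feel for the kind of iteration involved, the sketch below illustrates one generic way to compute a low-rank approximation shared by a collection of symmetric matrices: find an orthonormal basis G maximizing the sum of ||G^T M_i G||_F^2 via a fixed-point eigenvector update. The objective and the function `common_lowrank_subspace` are assumptions chosen for illustration, not the paper's exact CPM formulation.

```python
import numpy as np

def common_lowrank_subspace(mats, k, n_iter=50, seed=0):
    """Find an orthonormal G (d x k) that jointly captures a collection of
    symmetric matrices by maximizing sum_i ||G^T M_i G||_F^2.

    NOTE: a generic illustrative scheme, not the paper's CPM objective.
    """
    d = mats[0].shape[0]
    rng = np.random.default_rng(seed)
    # Random orthonormal starting basis.
    G, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(n_iter):
        # Fixed-point operator: S(G) = sum_i M_i G G^T M_i.
        S = sum(M @ G @ G.T @ M for M in mats)
        # Update G to the top-k eigenvectors of S(G); eigh sorts ascending.
        _, vecs = np.linalg.eigh(S)
        G = vecs[:, -k:]
    return G
```

Each update solves a d x d symmetric eigenproblem and monotonically improves the shared-subspace objective, which is the sense in which such collection-level low-rank approximations are "solved iteratively".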