Sliced inverse regression (SIR) is a widely used non-parametric method for supervised dimension reduction. Conventional SIR handles vector-valued predictors but is ill-suited to data with array (tensor)-valued predictors. Such data are commonly encountered in modern biomedical imaging and social network applications, where dimension reduction is generally needed to extract useful information from abundant measurements. In this article, we propose higher-order sufficient dimension reduction by extending SIR to general tensor-valued predictors, and we refer to the result as tensor SIR. Tensor SIR is built on tensor decompositions that reduce a tensor-valued predictor's multiple dimensions simultaneously. The proposed method provides fast and efficient estimation and circumvents the high-dimensional covariance matrix inversion that often hampers analysis of such data. We further investigate its asymptotic properties and demonstrate its advantages through simulation studies and a real data application.
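To fix ideas, a minimal sketch of the classical (vector-predictor) SIR procedure that tensor SIR generalizes is given below. It slices observations by the response, forms the between-slice covariance of standardized predictor means, and takes its leading eigenvectors as estimated central-subspace directions. The function name and parameters are illustrative, not from the article.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_directions=1):
    """Classical SIR: estimate central-subspace directions from the
    inverse regression E[X | y] (illustrative sketch)."""
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Partition observations into slices by the order of y
    slices = np.array_split(np.argsort(y), n_slices)
    # Weighted covariance of the slice means of Z
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original X scale
    _, v = np.linalg.eigh(M)
    B = inv_sqrt @ v[:, ::-1][:, :n_directions]
    return B / np.linalg.norm(B, axis=0)
```

For a tensor-valued predictor, the standardization step above requires inverting a covariance matrix whose dimension is the product of all tensor modes; tensor SIR avoids exactly this by operating on each mode separately.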
Bibliographical note

Funding Information:
This work was supported in part by National Science Foundation grant DMS-1007547. The authors are grateful to the Associate Editor and the reviewers for their insightful comments that have led to significant improvement of the article. In particular, the authors thank a referee for a reference to Kim's thesis and for pointing out the connection between two-tensor SIR and dimension folding SIR described in Section 5.2. The authors also thank Bing Li and Min Kyung Kim for sharing their code and the EEG data.
© 2014 Elsevier Inc.
- Central dimension folding subspace
- Central subspace
- Sliced inverse regression
- Sufficient dimension reduction
- Tensor data
- Tensor decomposition