[1] A. Alkenani and K. Yu, Sparse MAVE with oracle penalties, Adv. Appl. Stat., 34 (2013) 85-105.
[2] R. D. Cook, Regression Graphics: Ideas for Studying Regressions Through Graphics, Wiley, New York, 1998.
[3] R. D. Cook and B. Li, Dimension reduction for the conditional mean in regression, Ann. Stat., 30 (2002) 455-474.
[4] R. D. Cook and S. Weisberg, Discussion of Li (1991), J. Am. Stat. Assoc., 86 (1991) 328-332.
[5] J. Fan and R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, J. Am. Stat. Assoc., 96 (2001) 1348-1360.
[6] L. Li, Sparse sufficient dimension reduction, Biometrika, 94 (2007) 603-613.
[7] K. C. Li, Sliced inverse regression for dimension reduction (with discussion), J. Am. Stat. Assoc., 86 (1991) 316-342.
[8] K. C. Li, On principal Hessian directions for data visualization and dimension reduction: Another application of Stein's lemma, J. Am. Stat. Assoc., 87 (1992) 1025-1039.
[9] L. Li, R. D. Cook and C. J. Nachtsheim, Model-free variable selection, J. R. Stat. Soc. Ser. B, 67 (2005) 285-299.
[10] L. Li and C. J. Nachtsheim, Sparse sliced inverse regression, Technometrics, 48 (2006) 503-510.
[11] L. Li and X. Yin, Sliced inverse regression with regularizations, Biometrics, 64 (2008) 124-131.
[12] W. D. Mangold, L. Bean and D. Adams, The impact of intercollegiate athletics on graduation rates among major NCAA Division I universities: Implications for college persistence theory and practice, J. Higher Educ., 74(5) (2003) 540-562.
[13] G. C. McDonald and R. C. Schwing, Instabilities of regression estimates relating air pollution to mortality, Technometrics, 15(3) (1973) 463-481.
[14] L. Ni, R. D. Cook and C. L. Tsai, A note on shrinkage sliced inverse regression, Biometrika, 92 (2005) 242-247.
[15] D. B. Sharma, H. D. Bondell and H. H. Zhang, Consistent group identification and variable selection in regression with correlated predictors, J. Comput. Graph. Stat., 22(2) (2013) 319-340.
[16] B. W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman and Hall, 1986.
[17] R. Tibshirani, Regression shrinkage and selection via the Lasso, J. R. Stat. Soc. Ser. B, 58 (1996) 267-288.
[18] Q. Wang and X. Yin, A nonlinear multi-dimensional variable selection method for high dimensional data: Sparse MAVE, Comput. Stat. Data Anal., 52 (2008) 4512-4520.
[19] T. Wang, P. Xu and L. Zhu, Variable selection and estimation for semiparametric multiple-index models, Bernoulli, 21(1) (2015) 242-275.
[20] Y. Xia, H. Tong, W. Li and L. Zhu, An adaptive estimation of dimension reduction space, J. R. Stat. Soc. Ser. B, 64 (2002) 363-410.
[21] Z. Yu and L. Zhu, Dimension reduction and predictor selection in semiparametric models, Biometrika, 100 (2013) 641-654.
[22] T. Wang, P. Xu and L. Zhu, Penalized minimum average variance estimation, Statist. Sinica, 23 (2013) 543-569.
[23] C. H. Zhang, Nearly unbiased variable selection under minimax concave penalty, Ann. Stat., 38 (2010) 894-942.
[24] H. Zou, The adaptive Lasso and its oracle properties, J. Am. Stat. Assoc., 101 (2006) 1418-1429.
[25] H. Zou and T. Hastie, Regularization and variable selection via the elastic net, J. R. Stat. Soc. Ser. B, 67 (2005) 301-320.