Plenary Speech III.
Speaker: Yuantao Gu, Professor, Tsinghua University, China
Speaker bio: Yuantao Gu received the B.E. degree from Xi'an Jiaotong University in 1998 and the Ph.D. degree with honors from Tsinghua University in 2003, both in Electronic Engineering. He joined the faculty of Tsinghua University in 2003 and is now a professor with the Department of Electronic Engineering. He was a visiting scientist at the Research Laboratory of Electronics at the Massachusetts Institute of Technology from 2012 to 2013 and at the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor, in 2015. His research interests include high-dimensional signal processing, optimization, sparse signal recovery, and temporal-space and graph signal processing. He is a Senior Area Editor for the IEEE Transactions on Signal Processing and an Elected Member of the IEEE Signal Processing Theory and Methods (SPTM) Technical Committee. He received the Best Paper Award of the IEEE Global Conference on Signal and Information Processing (GlobalSIP) in 2015, the Award for Best Presentation of Journal Paper of the IEEE International Conference on Signal and Information Processing (ChinaSIP) in 2015, and the Zhang Si-Ying (CCDC) Outstanding Youth Paper Award (with his student) in 2017.
Title: Compressed Subspace Learning based on Canonical Angles Preserving Property
Abstract: Many data analysis tasks in machine learning deal with real-world data presented in high-dimensional spaces, which brings high computational complexity. To make these tasks easier to handle, data scientists have discovered many low-dimensional structures, among which the Union of Subspaces (UoS) model is a popular one; it assumes that high-dimensional data points actually lie on a few low-dimensional linear subspaces. The task of subspace learning is then to extract useful information from the UoS structure of the data. Since the UoS structure involves only a collection of low-dimensional subspaces, which cost much less to describe, a natural question is: why do we go to so much effort to process the redundant high-dimensional representation? Motivated by this, in this plenary speech I first prove that each canonical angle between subspaces is approximately preserved by a random projection with the Johnson-Lindenstrauss property, which we call the Canonical Angle Preserving (CAP) property. As canonical angles best characterize the relative positions of subspaces, the CAP property implies that the subspace structure also remains almost unchanged. Based on the CAP property, we propose a Compressed Subspace Learning (CSL) framework, which aims to reduce the time and storage cost of UoS-based algorithms by taking advantage of the computational efficiency and subspace-structure-preserving property of random projection. We demonstrate the power of this framework on three subspace learning tasks, namely subspace visualization, active subspace detection, and subspace clustering. We empirically show that applying CSL can successfully circumvent the curse of dimensionality, and we theoretically analyze its performance using the CAP property.
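The CAP idea described in the abstract can be illustrated numerically: compute the canonical (principal) angles between two low-dimensional subspaces, apply a Gaussian random projection (which satisfies the Johnson-Lindenstrauss property), and compare the angles before and after. The sketch below is not the speaker's implementation; the dimensions `N`, `d`, and `m` are arbitrary values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def canonical_angles(A, B):
    """Canonical angles between the column spans of A and B.

    Orthonormalize each basis with QR; the singular values of
    Qa^T Qb are the cosines of the canonical angles.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Illustrative dimensions (assumed, not from the talk):
# ambient dimension N, subspace dimension d, projected dimension m.
N, d, m = 2000, 5, 200

# Two random d-dimensional subspaces of R^N, given by N x d bases.
A = rng.standard_normal((N, d))
B = rng.standard_normal((N, d))

# Gaussian JL random projection from R^N down to R^m.
P = rng.standard_normal((m, N)) / np.sqrt(m)

orig = canonical_angles(A, B)        # angles in the ambient space
proj = canonical_angles(P @ A, P @ B)  # angles after projection

# The two angle vectors should be close entry-wise,
# even though m is an order of magnitude smaller than N.
print("max deviation (rad):", np.max(np.abs(orig - proj)))
```

The deviation shrinks as the projected dimension `m` grows, consistent with the claim that a JL projection approximately preserves every canonical angle, and hence the relative positions of the subspaces.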