Robust Nonnegative Matrix Factorization using L1, L21 Norms

Speaker: 
Chris Ding
Abstract: 

Nonnegative matrix factorization (NMF) and related models are now widely used in text mining, bioinformatics, knowledge transfer, recommender systems, and semi-supervised and unsupervised learning. However, the basic models so far use a least-squares formulation, which is sensitive to large noise and outliers. After reviewing the major NMF models, we present robust NMF models using L1 and L21 norms, which exhibit stability and robustness with respect to large noise. We present computational algorithms for these models with rigorous theoretical analysis. These algorithms are as efficient as those for the least-squares formulations, avoiding the significant computational complexity routinely associated with L1 and L21 formulations. Experiments on image data demonstrate the strong robustness of the robust NMF models.
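As a rough illustration of the L21-norm idea, the sketch below implements the multiplicative updates for L21-norm NMF (in the style of Kong, Ding & Huang, 2011): each data column is reweighted by the inverse of its residual norm, so outlier columns contribute less. This is a minimal NumPy sketch for intuition, not the speaker's reference implementation; the function names and iteration counts are illustrative choices.

```python
import numpy as np

def l21_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Robust NMF minimizing the L21 norm sum_i ||x_i - F g_i||_2,
    factoring X (p x n) as F (p x k) @ G (k x n), F, G >= 0.

    Sketch of the multiplicative updates with per-column reweighting;
    a small eps guards against division by zero.
    """
    rng = np.random.default_rng(seed)
    p, n = X.shape
    F = rng.random((p, k))
    G = rng.random((k, n))
    for _ in range(n_iter):
        # Per-column weights d_i = 1 / ||x_i - F g_i||_2:
        # columns with large residuals (outliers) get small weight.
        R = X - F @ G
        d = 1.0 / (np.linalg.norm(R, axis=0) + eps)
        XD = X * d  # scale column i of X by d_i
        GD = G * d  # scale column i of G by d_i
        # Weighted-least-squares multiplicative updates.
        F *= (XD @ G.T) / (F @ (GD @ G.T) + eps)
        G *= (F.T @ XD) / (F.T @ F @ GD + eps)
    return F, G

def l21_objective(X, F, G):
    """Sum of per-column residual norms (the L21 objective)."""
    return np.linalg.norm(X - F @ G, axis=0).sum()
```

Setting all weights `d` to 1 recovers the standard least-squares multiplicative updates, which is why the robust variant keeps essentially the same per-iteration cost.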

Bio: 
Chris Ding obtained his Ph.D. from Columbia University and did research at the California Institute of Technology, the Jet Propulsion Laboratory, and Lawrence Berkeley National Laboratory before joining the University of Texas at Arlington as a professor in 2007. His research areas are data mining, bioinformatics, and high performance computing, focusing on matrix/tensor approaches. He has served on the NIPS, ICML, KDD, IJCAI, AAAI, ICDM, and SDM conference committees, and has reviewed research grants for the National Science Foundations of the U.S., Israel, Ireland, and Hong Kong. He has given invited seminars at Berkeley, Stanford, Carnegie Mellon, the University of Waterloo, the University of Alberta, Google Research, IBM Research, and Microsoft Research. He has published 180 research papers with 6,180 citations.