Latest News and Events
Learning from Data using Matchings and Graphs
Many machine learning problems can naturally be formulated as problems on graphs. For example, dimensionality reduction and visualization are related to graph embedding: given a sparse graph over N high-dimensional data nodes, how do we faithfully embed it in low dimension? We present an algorithm that improves dimensionality reduction by extending the Maximum Variance Unfolding method. But given only a dataset of N samples, how do we construct a graph in the first place? The space to explore is daunting, with 2^(N(N-1)/2) possible graphs to choose from, yet two interesting subfamilies are tractable: matchings and b-matchings. By placing distributions over matchings and using loopy belief propagation, we can efficiently infer maximum-weight subgraphs. These fast generalized matching algorithms leverage integral LP relaxations and perfect graph theory. Applications include graph reconstruction, graph embedding, graph transduction, and graph partitioning, with emphasis on data from text, network, and image domains.
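As a minimal sketch of the graph-construction idea above (not the talk's implementation), the snippet below builds a sparse graph over N samples by solving a maximum-weight matching on pairwise similarities, rather than using k-nearest neighbours. It assumes networkx and scikit-learn are available, and it substitutes networkx's blossom-based max_weight_matching for the loopy-belief-propagation b-matching solver described in the talk.

```python
# Sketch: sparse graph construction via maximum-weight matching.
# Assumes networkx + scikit-learn; the blossom algorithm stands in
# for the loopy-BP generalized matching solver from the talk.
import numpy as np
import networkx as nx
from sklearn.metrics.pairwise import rbf_kernel


def matching_graph(X, gamma=1.0):
    """Connect each sample to exactly one partner by maximum-weight matching."""
    W = rbf_kernel(X, gamma=gamma)          # pairwise similarities as edge weights
    n = len(X)
    G = nx.Graph()
    G.add_weighted_edges_from(
        (i, j, W[i, j]) for i in range(n) for j in range(i + 1, n)
    )
    matching = nx.max_weight_matching(G, maxcardinality=True)
    H = nx.Graph()
    H.add_nodes_from(range(n))
    H.add_edges_from(matching)              # sparse 1-regular graph on the data
    return H


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 3))
    H = matching_graph(X)
    print(sorted(H.edges()))                # 5 disjoint edges covering all 10 nodes
```

A b-matching, as in the talk, would allow each node up to b partners instead of exactly one, yielding a denser but still degree-constrained graph suitable for the embedding, transduction, and partitioning applications listed above.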