Learning latent structures via Bayesian nonparametrics: new models and efficient inference

Publication Type dissertation
School or College School of Computing
Department Computer Science
Author Rai, Piyush
Title Learning latent structures via Bayesian nonparametrics: new models and efficient inference
Date 2013-08
Description Latent structures play a vital role in many data analysis tasks. By providing compact yet expressive representations, such structures can offer useful insights into the complex and high-dimensional datasets encountered in domains such as computational biology, computer vision, and natural language processing. Specifying the right complexity of these latent structures for a given problem is an important modeling decision. Instead of using models with an a priori fixed complexity, it is desirable to have models that can adapt their complexity as the data warrant. Nonparametric Bayesian models are motivated by precisely this desideratum: they offer a flexible modeling paradigm for data without limiting the model complexity a priori. The flexibility comes from the model's ability to adjust its complexity adaptively with the data. This dissertation is about nonparametric Bayesian learning of two specific types of latent structures: (1) low-dimensional latent features underlying high-dimensional observed data, where the latent features could exhibit interdependencies, and (2) latent task structures that capture how a set of learning tasks relate to each other, a notion critical in the paradigm of multitask learning, where the goal is to solve multiple learning tasks jointly in order to borrow information across similar tasks. Another focus of this dissertation is on designing efficient approximate inference algorithms for nonparametric Bayesian models. Specifically, for the nonparametric Bayesian latent feature model, where the goal is to infer the binary-valued latent feature assignment matrix for a given set of observations, the dissertation proposes two approximate inference methods. The first is a search-based algorithm to find the maximum-a-posteriori (MAP) solution for the latent feature assignment matrix.
The second is a sequential Monte Carlo based approximate inference algorithm that processes the data one example at a time while being space-efficient in terms of the storage required to represent the posterior distribution of the latent feature assignment matrix.
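The binary-valued latent feature assignment matrix described above is typically given a nonparametric Bayesian prior whose canonical example is the Indian Buffet Process (IBP), which lets the number of latent features grow with the data. As a hedged illustration only (this is not code from the dissertation, and the dissertation's own inference methods are MAP search and sequential Monte Carlo, not prior sampling), a minimal sketch of drawing a feature-assignment matrix Z from the standard restaurant-style IBP construction:

```python
import numpy as np

def sample_ibp(n_objects, alpha, seed=0):
    """Draw a binary feature-assignment matrix Z from the Indian Buffet
    Process prior with concentration parameter alpha.

    Object i takes each existing feature k with probability m_k / i
    (m_k = number of earlier objects with that feature), then creates
    Poisson(alpha / i) brand-new features.
    """
    rng = np.random.default_rng(seed)
    counts = []  # m_k: how many objects have taken feature k so far
    rows = []    # per-object feature indicators (ragged, grows over time)
    for i in range(1, n_objects + 1):
        # Revisit existing features with probability proportional to popularity.
        row = [rng.random() < m / i for m in counts]
        for k, took in enumerate(row):
            if took:
                counts[k] += 1
        # Sample brand-new features for this object.
        k_new = rng.poisson(alpha / i)
        row.extend([True] * k_new)
        counts.extend([1] * k_new)
        rows.append(row)
    # Pad ragged rows into a dense binary matrix of shape (n_objects, K).
    K = len(counts)
    Z = np.zeros((n_objects, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z
```

The key property this sketch illustrates is that the number of columns K is not fixed in advance: it is itself random and tends to grow (roughly logarithmically) with the number of objects, which is the "complexity adapts to the data" behavior the abstract refers to.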
Type Text
Publisher University of Utah
Subject Bayesian learning; Bayesian nonparametrics; Machine learning
Dissertation Name Doctor of Philosophy
Language eng
Rights Management Copyright © Piyush Rai 2013
Format Medium application/pdf
Format Extent 924,147 Bytes
Identifier etd3/id/3460
ARK ark:/87278/s6bs21fj
Setname ir_etd
Date Created 2015-05-12
Date Modified 2017-12-21
ID 197014
Reference URL https://collections.lib.utah.edu/ark:/87278/s6bs21fj