Improving accuracy of learning models using disjunctive normal form and semisupervised learning

Title Improving accuracy of learning models using disjunctive normal form and semisupervised learning
Publication Type dissertation
School or College College of Engineering
Department Electrical & Computer Engineering
Author Mohammadabadi, Sayed Mehdi Sajjadi
Date 2017
Description The goal of machine learning is to develop efficient algorithms that use training data to create models that generalize well to unseen data. Learning algorithms can use labeled data, unlabeled data, or both. Supervised learning algorithms learn a model using labeled data only. Unsupervised learning methods learn the internal structure of a dataset using only unlabeled data. Lastly, semisupervised learning is the task of finding a model using both labeled and unlabeled data. In this research work, we contribute to both supervised and semisupervised learning.

We contribute to supervised learning by proposing an efficient high-dimensional space coverage scheme based on the disjunctive normal form. We use conjunctions of a set of half-spaces to create a set of convex polytopes; the disjunction of these polytopes can provide desirable coverage of the space. Unlike traditional methods based on neural networks, we do not initialize the model parameters randomly. As a result, our model minimizes the risk of poor local minima, and higher learning rates can be used, which leads to faster convergence.

We contribute to semisupervised learning by proposing two unsupervised loss functions that form the basis of a novel semisupervised learning method. The first loss function is called Mutual-Exclusivity. It is motivated by the observation that an optimal decision boundary lies between the manifolds of different classes, where there are no or very few samples. Decision boundaries can be pushed away from training samples by maximizing their margin, and the class labels of the samples need not be known to maximize the margin. The second loss is named Transformation/Stability and is based on the fact that the prediction of a classifier for a data sample should not change under transformations and perturbations applied to that sample. In addition, internal variations of a learning system should have little to no effect on the output. The proposed loss minimizes the variation in the network's predictions for a specific data sample. We also show that the same technique can be used to improve the robustness of a learning model against adversarial examples.
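The disjunctive-normal-form idea described in the abstract — intersecting half-spaces to form convex polytopes, then taking their union — can be illustrated with a minimal sketch. This is not the dissertation's implementation; the function name, the half-space convention `W @ x + b > 0`, and the hard (non-differentiable) AND/OR are illustrative assumptions:

```python
import numpy as np

def dnf_classifier(x, polytopes):
    """Return 1 if x lies in the union (disjunction) of convex polytopes.

    polytopes: list of (W, b) pairs; each pair defines one convex
    polytope as the intersection (conjunction) of the half-spaces
    W @ x + b > 0, one half-space per row of W.
    """
    x = np.asarray(x, dtype=float)
    for W, b in polytopes:
        # Conjunction: x must satisfy every half-space of this polytope.
        if np.all(W @ x + b > 0):
            # Disjunction: membership in any one polytope suffices.
            return 1
    return 0
```

For example, the unit square in 2-D is a single polytope cut out by the four half-spaces x > 0, y > 0, x < 1, y < 1; a trainable model would replace the hard threshold with a smooth approximation so the half-space parameters can be learned by gradient descent.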
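The two unsupervised losses described above can also be sketched numerically. The exact formulations below are assumptions for illustration, not the dissertation's code: Mutual-Exclusivity is written as a polynomial that is minimized when exactly one class probability is 1, and Transformation/Stability as the sum of pairwise squared differences between predictions for the same sample under different transformations:

```python
import numpy as np

def mutual_exclusivity_loss(f):
    """Unsupervised loss on a softmax output vector f.

    Minimized (value -1) when exactly one entry of f is 1 and the
    rest are 0, pushing the decision boundary away from the sample.
    """
    f = np.asarray(f, dtype=float)
    total = 0.0
    for k in range(len(f)):
        # Probability that class k is on and every other class is off.
        total += f[k] * np.prod(np.delete(1.0 - f, k))
    return -total

def transformation_stability_loss(preds):
    """Unsupervised loss over multiple predictions for one sample.

    preds: prediction vectors from several forward passes of the same
    sample under different transformations/perturbations; the loss
    penalizes pairwise squared differences, so it is 0 when the
    classifier's output is invariant to the transformations.
    """
    preds = [np.asarray(p, dtype=float) for p in preds]
    loss = 0.0
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            loss += np.sum((preds[i] - preds[j]) ** 2)
    return loss
```

In a semisupervised setup these terms would be added, with weights, to the usual supervised loss on the labeled subset, since neither term requires class labels.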
Type Text
Publisher University of Utah
Subject Electrical engineering
Dissertation Name Doctor of Philosophy
Language eng
Rights Management (c) Sayed Mehdi Sajjadi Mohammadabadi
Format application/pdf
Format Medium application/pdf
ARK ark:/87278/s68w804s
Setname ir_etd
ID 1440394
Reference URL https://collections.lib.utah.edu/ark:/87278/s68w804s