Wed 7 Nov 2018 18:16 - 18:18 at Georgian - Poster & SRC
Machine learning (ML) has become increasingly ubiquitous in the modern world. Expanded computational power and more sophisticated algorithms have enabled the field to tackle an increasing variety of problems, such as computer vision and predictive modelling. However, because these algorithms are so complex, human understanding of their underlying processes is limited. This gap between human knowledge and the complexity of ML algorithms raises a problem: when ML algorithms err, how do we fix them? For a human, adjusting tens of thousands of parameters in a neural network to correct poor performance is a seemingly impossible task, given the sheer size and intricacy of these models. To address this problem, we propose using SMT solvers to correct errors in ML prediction models while maintaining their accuracy and generalization capabilities. In particular, we use Rosette, a powerful solver-aided programming language, to solve these complex optimization problems. We represent a neural network and its outputs as symbolic values in Rosette, which can then be solved for under the assertion that a given data point is classified correctly. Our goal is to iteratively correct ML errors using this method.
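The core idea above — treating model parameters as unknowns and asking a solver for values that make a misclassified point correct while preserving behavior on other points — can be sketched in plain Python. This is a hypothetical illustration, not the authors' Rosette implementation: it replaces the SMT solver with a brute-force search over discretized weight perturbations for a tiny linear classifier, but the constraints it checks mirror the assertions an SMT query would discharge symbolically.

```python
# Illustrative stand-in for a solver-based repair (hypothetical; the actual
# work uses Rosette/SMT on neural networks). We search for the smallest
# perturbation `delta` to the weights `w` of a linear classifier such that:
#   (1) the previously misclassified point gets its correct label, and
#   (2) every other training point remains correctly classified.
from itertools import product

def predict(w, x):
    # Sign of the dot product: returns label 1 or -1.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

def repair(w, data, bad_point, bad_label, step=0.25, radius=2.0):
    """Brute-force analogue of the solver query: find new weights
    w + delta satisfying the correctness assertions, minimizing the
    squared magnitude of delta (so accuracy on `data` is preserved
    with as small a change as possible)."""
    n = int(radius / step)
    grid = [i * step for i in range(-n, n + 1)]
    best = None
    for delta in product(grid, repeat=len(w)):
        w2 = [wi + di for wi, di in zip(w, delta)]
        # Assertion 1: the erroneous point must now be labeled correctly.
        if predict(w2, bad_point) != bad_label:
            continue
        # Assertion 2: all other points must stay correctly labeled.
        if any(predict(w2, x) != y for x, y in data):
            continue
        cost = sum(d * d for d in delta)
        if best is None or cost < best[0]:
            best = (cost, w2)
    return best[1] if best else None  # None means no repair in range

data = [((1, 1), 1), ((-1, -1), -1)]
fixed = repair([1.0, 0.0], data, (0.0, 1.0), -1)
```

An SMT solver such as the one behind Rosette performs this search symbolically over real- or bitvector-valued weights rather than a coarse grid, which is what makes the approach plausible at the scale of neural-network parameters.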