In a recent blog post, DeepMind, an artificial-intelligence research company, discusses three approaches to eliminating bugs in learned predictive models. Founded in London in 2010 and acquired by Google in 2014, the company also has research centers in Edmonton and Montreal, Canada, and a DeepMind Applied team in Mountain View, California.
“Bugs and software have gone hand in hand since the beginning of computer programming,” the post reads. “Over time, software developers have established a set of best practices for testing and debugging before deployment, but these practices are not suited for modern deep learning systems. Today, the prevailing practice in machine learning is to train a system on a training data set, and then test it on another set. While this reveals the average-case performance of models, it is also crucial to ensure robustness, or acceptably high performance even in the worst case. In this article, we describe three approaches for rigorously identifying and eliminating bugs in learned predictive models: adversarial testing, robust learning, and formal verification.”
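The first of the three approaches, adversarial testing, searches for small input perturbations that flip a model's prediction, exposing worst-case rather than average-case behavior. The sketch below is an illustrative toy, not DeepMind's implementation: it applies the well-known fast gradient sign method (FGSM) to a hand-built logistic-regression "model" with assumed weights, shifting the input in the direction that most increases the loss.

```python
import math

# Toy logistic-regression model: f(x) = sigmoid(w . x + b).
# The weights below are illustrative assumptions, not trained values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))

def fgsm_attack(w, b, x, eps):
    # FGSM: x_adv = x + eps * sign(dL/dx).
    # For logistic loss with true label 1, dL/dx_i = (p - 1) * w_i.
    p = predict(w, b, x)
    grad = [(p - 1.0) * wi for wi in w]
    return [xi + eps * (1.0 if g > 0 else -1.0)
            for xi, g in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x = [0.4, 0.2]                      # clean input, classified positive
x_adv = fgsm_attack(w, b, x, eps=0.5)

print(predict(w, b, x) > 0.5)       # True: clean input is positive
print(predict(w, b, x_adv) > 0.5)   # False: perturbed input flips class
```

A test passes only if no perturbation within the allowed budget (here, `eps` per coordinate) changes the decision; finding such a perturbation, as above, counts as a discovered bug.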