Talk
Developing Bug-Free Machine Learning Systems With Formal Mathematics
Daniel Selsam · Percy Liang · David L Dill

Tue Aug 8th 04:42 -- 05:00 PM @ C4.1

Noisy data, non-convex objectives, model misspecification, and numerical instability can all cause undesired behaviors in machine learning systems. As a result, detecting actual implementation errors can be extremely difficult. We demonstrate a methodology in which developers use an interactive proof assistant both to implement their system and to state a formal theorem defining what it means for their system to be correct. The process of proving this theorem interactively in the proof assistant exposes all implementation errors, since any error in the program would cause the proof to fail. As a case study, we implement a new system, Certigrad, for optimizing over stochastic computation graphs, and we generate a formal (i.e., machine-checkable) proof that the gradients sampled by the system are unbiased estimates of the true mathematical gradients. We train a variational autoencoder using Certigrad and find its performance comparable to training the same model in TensorFlow.
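
The methodology can be sketched with a deliberately tiny example (hypothetical, and not Certigrad's actual code): the developer writes a program and a theorem stating its correctness in the same proof assistant, and the proof only goes through if the implementation is right. In Lean 4 syntax:

```lean
-- Minimal illustration of the methodology (hypothetical example,
-- not Certigrad's actual code): implement a function, then state and
-- prove a theorem defining what it means for it to be correct.
def double (n : Nat) : Nat := n + n

-- The specification: `double` computes multiplication by two.
-- If the definition above were buggy (say, `n + n + 1`),
-- this proof attempt would fail, exposing the error.
theorem double_correct (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

In Certigrad the same pattern is applied at much larger scale: the implementation is a system for sampling gradients of stochastic computation graphs, and the correctness theorem states that those sampled gradients are unbiased estimates of the true mathematical gradients.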

Author Information

Daniel Selsam (Stanford University)
Percy Liang (Stanford University)
David L Dill (Stanford University)

Researcher in formal verification, computational biology, and voting technology policy. Donald E. Knuth Professor in the School of Engineering, Fellow of the ACM and IEEE, member of the National Academy of Engineering and the American Academy of Arts and Sciences.
