Workshop on Hammers for Type Theory 2016

Jasmin Blanchette, Cezary Kaliszyk (eds)

Proceedings First Workshop on Hammers for Type Theory, HaTT 2016, EPTCS 210, 2016.


Automated reasoning and theorem proving have recently become major challenges for machine learning. In other domains, representations that abstract over unimportant transformations, such as translations and rotations in vision, are becoming more common. Standard methods of embedding mathematical formulas for learning theorem proving are, however, still unable to handle many important transformations. In particular, embedding previously unseen labels, which often arise in definitional encodings and in Skolemization, has so far been handled poorly. Similar problems appear when transferring knowledge between known symbols. We propose a novel encoding of formulas that extends existing graph neural network models. This encoding represents symbols only by nodes in the graph, without giving the network any knowledge of the original labels. We provide additional links between such nodes that allow the network to recover the meaning and therefore correctly embed such nodes irrespective of the given labels. We test the proposed encoding in an automated theorem prover based on the tableaux connection calculus, and show that it improves on the best previously used characterizations. The encoding is further evaluated on the premise selection task and a newly introduced symbol guessing task, and shown to correctly predict 65% of the symbol names.
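The core idea of the label-free encoding can be illustrated with a small sketch. This is a hypothetical Python illustration of the general principle (anonymous symbol nodes shared across occurrences), not the paper's actual graph neural network architecture; the term representation and edge types are assumptions made for the example.

```python
def term_to_graph(term):
    """Encode a nested-tuple term as a label-free graph.

    A term is either a name (str) or a tuple (symbol, arg1, ..., argN).
    Every symbol gets ONE shared node identified only by an integer id;
    the name itself is dropped, so the graph carries no label information.
    Each subterm also gets its own application node, linked to its
    head-symbol node and to its argument nodes.
    """
    sym_node = {}   # symbol name -> shared anonymous node id (names discarded afterwards)
    edges = []      # (src, dst, edge_type)
    next_id = [0]

    def fresh():
        i = next_id[0]
        next_id[0] += 1
        return i

    def node_for_symbol(name):
        if name not in sym_node:
            sym_node[name] = fresh()
        return sym_node[name]

    def walk(t):
        head, args = (t, ()) if isinstance(t, str) else (t[0], t[1:])
        app = fresh()
        edges.append((app, node_for_symbol(head), "head"))
        for i, a in enumerate(args):
            edges.append((app, walk(a), f"arg{i}"))
        return app

    root = walk(term)
    return root, next_id[0], edges

# f(g(X), X): both occurrences of X link to the same anonymous symbol
# node, so the structure survives even though no names are stored.
root, n_nodes, edges = term_to_graph(("f", ("g", "X"), "X"))
```

Because repeated occurrences of a symbol share one node, a network operating on this graph can, in principle, recover the symbol's role from its connections alone, which is exactly what makes the embedding invariant to renaming.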




@proceedings{EPTCS210,
  editor    = {Jasmin Blanchette and Cezary Kaliszyk},
  title     = {Proceedings First Workshop on Hammers for Type Theory, HaTT 2016},
  booktitle = {Proceedings First Workshop on Hammers for Type Theory, HaTT 2016},
  series    = {{EPTCS}},
  volume    = {210},
  year      = {2016},
  url       = {},
  doi       = {10.4204/EPTCS.210},
}