Property Preserving Embedding of First-order Logic

Julian Parsert, Stephanie Autherith, Cezary Kaliszyk

6th Global Conference on Artificial Intelligence (GCAI 2020), EPiC Series in Computing 72, pp. 70 – 82, 2020.

Abstract

Logical reasoning as performed by human mathematicians involves an intuitive understanding of terms and formulas. This includes properties of formulas themselves as well as relations between multiple formulas. Although vital, this intuition is missing when supplying atomically encoded formulas to (neural) downstream models. In this paper we construct continuous dense vector representations of first-order logic which preserve syntactic and semantic logical properties. The resulting neural formula embeddings encode six characteristics of logical expressions present in the training set and further generalise to properties they have not explicitly been trained on. To facilitate the training, evaluation, and comparison of embedding models, we extracted and generated data sets based on TPTP's first-order logic library. Furthermore, we examine the expressiveness of our encodings by conducting toy-task as well as more practical deployment tests.

PDF | doi:10.29007/18t1 | © Copyright easychair.org

BibTeX 

@inproceedings{jpsack-gcai20,
author = {Julian Parsert and Stephanie Autherith and Cezary Kaliszyk},
title = {Property Preserving Embedding of First-order Logic},
booktitle = {6th Global Conference on Artificial Intelligence (GCAI 2020)},
editor = {Gregoire Danoy and Jun Pang and Geoff Sutcliffe},
series = {EPiC Series in Computing},
volume = {72},
pages = {70--82},
year = {2020},
publisher = {EasyChair},
url = {https://easychair.org/publications/paper/Cwgq},
doi = {10.29007/18t1}
}