Towards Probabilistic Inductive Logic Programming with Neurosymbolic Inference and Relaxation
Article
Many inductive logic programming (ILP) methods are incapable of learning programs from probabilistic
background knowledge, for example knowledge derived from sensory data or from neural networks
that output probabilities. We propose Propper, which handles flawed and probabilistic background knowledge
by extending ILP with a combination of neurosymbolic inference, a continuous criterion
for hypothesis selection (binary cross-entropy), and a relaxation of the hypothesis constrainer
(NoisyCombo). For relational patterns in noisy images, Propper can learn programs from as few
as 8 examples, outperforming both binary ILP and statistical models such as a graph neural network.
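The continuous hypothesis-selection criterion named in the abstract is binary cross-entropy, which scores a candidate hypothesis by how well the probabilities it assigns to examples match their positive/negative labels. The sketch below is illustrative only (the function name and example values are not from the paper):

```python
import math

def binary_cross_entropy(probs, labels):
    """Mean binary cross-entropy between predicted probabilities
    and binary example labels; lower means a better-fitting hypothesis."""
    eps = 1e-12  # clamp probabilities away from 0 and 1 to avoid log(0)
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
    return total / len(probs)

# Illustrative: probabilities a candidate hypothesis assigns to 4 examples
probs = [0.9, 0.8, 0.2, 0.1]   # inferred P(example covered)
labels = [1, 1, 0, 0]          # ground-truth positive/negative labels
score = binary_cross_entropy(probs, labels)
```

Unlike a binary coverage count, this score changes smoothly as the inferred probabilities change, which is what allows hypothesis selection over probabilistic background knowledge.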
TNO Identifier
1005241
Source
Theory and Practice of Logic Programming, 24(4), pp. 628-643.
Publisher
TNO
Pages
628-643