LNNs are a novel Neuro-Symbolic framework designed to seamlessly provide key properties of both neural networks (learning) and symbolic logic (knowledge and reasoning).
- Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
- Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic theorem proving as a special case.
- The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge.
- It also enables the open-world assumption by maintaining bounds on truth values which can have probabilistic semantics, yielding resilience to incomplete knowledge.
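The first and last points above can be made concrete with a small sketch. The following is an illustrative toy implementation, not the library's API: it shows a weighted real-valued conjunction in the style of weighted Łukasiewicz logic, propagating `[lower, upper]` truth bounds so that an unknown input (bounds `[0, 1]`) yields appropriately wide output bounds under the open-world assumption. All function names and parameters here are hypothetical.

```python
# Hypothetical sketch of a weighted real-valued AND with truth bounds;
# not the lnn package's API.

def weighted_and(truths, weights=None, beta=1.0):
    """Weighted Lukasiewicz-style AND: clamp(beta - sum(w_i * (1 - x_i)))."""
    if weights is None:
        weights = [1.0] * len(truths)
    value = beta - sum(w * (1.0 - x) for w, x in zip(weights, truths))
    return max(0.0, min(1.0, value))  # clamp to the unit interval

def weighted_and_bounds(bounds, weights=None, beta=1.0):
    """Propagate [lower, upper] truth bounds through the monotone AND."""
    lowers = [lo for lo, _ in bounds]
    uppers = [up for _, up in bounds]
    return (weighted_and(lowers, weights, beta),
            weighted_and(uppers, weights, beta))

# Fully known inputs behave classically:
print(weighted_and([1.0, 1.0]))  # -> 1.0 (True AND True)
print(weighted_and([1.0, 0.0]))  # -> 0.0 (True AND False)

# An unknown input, bounds (0, 1), keeps the output bounds wide:
print(weighted_and_bounds([(1.0, 1.0), (0.0, 1.0)]))  # -> (0.0, 1.0)
```

Because each such neuron computes a named logical connective over named inputs, its activation has a direct reading as the truth of a formula, which is the sense in which the representation is interpretable and disentangled.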
To install the LNN:
- Make sure that the Python version you use is in line with our setup file; using a fresh environment is always a good idea:

  ```shell
  conda create -n lnn python=3.9 -y
  conda activate lnn
  ```
- Install the `master` branch to keep up to date with the latest supported features:

  ```shell
  pip install git+https://github.com/IBM/LNN
  ```
Contributions to the LNN codebase are welcome!
Please have a look at the contribution guide for more information on how to set up the LNN for contributing and how to follow our development standards.
Read the Docs | Academic Papers | Educational Resources | Neuro-Symbolic AI | API Overview | Python Module |
---|---|---|---|---|---|
If you use Logical Neural Networks for research, please consider citing the reference paper:
```bibtex
@article{riegel2020logical,
  title={Logical neural networks},
  author={Riegel, Ryan and Gray, Alexander and Luus, Francois and Khan, Naweed and Makondo, Ndivhuwo and Akhalwaya, Ismail Yunus and Qian, Haifeng and Fagin, Ronald and Barahona, Francisco and Sharma, Udit and others},
  journal={arXiv preprint arXiv:2006.13155},
  year={2020}
}
```