Results 1 – 4 of 4
Unification Neural Networks: Unification by Error-Correction Learning
Abstract

Cited by 4 (4 self)
We show that the conventional first-order algorithm of unification can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of adaptation of the network corresponds to a single iteration of the unification algorithm. We present this result together with a library of learning functions and examples fully formalised in the MATLAB Neural Network Toolbox.
Neurons or symbols: why does OR remain exclusive?
 in: Proceedings of ICNC’09, 2009
Abstract

Cited by 3 (3 self)
Neuro-Symbolic Integration is an interdisciplinary area that endeavours to unify neural networks and symbolic logic. The goal is to create a system that combines the advantages of neural networks (adaptive behaviour, robustness, tolerance of noise and probability) and symbolic logic (validity of computations, generality, higher-order reasoning). Several different approaches have been proposed in the past. However, the existing neuro-symbolic networks provide only a limited coverage of the techniques used in computational logic. In this paper, we outline the areas of neuro-symbolism where computational logic has been implemented so far, and analyse the problematic areas. We show why certain concepts cannot be implemented using the existing neuro-symbolic networks, and propose four main improvements needed to build neuro-symbolic networks of the future.
Unification by Error-Correction
Abstract

Cited by 2 (2 self)
The paper formalises the famous algorithm of first-order unification by Robinson by means of error-correction learning in neural networks. The significant achievement of this formalisation is that, for the first time, the first-order unification of two arbitrary first-order atoms is performed by a finite (two-neuron) network.
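For readers unfamiliar with the symbolic side of this result, the following is a minimal sketch of Robinson's classical first-order unification algorithm — the procedure the paper simulates with error-correction learning. The term representation (variables as strings, compound terms as functor/argument tuples) is a hypothetical choice for illustration, not the encoding used by the paper's neural networks.

```python
# Minimal sketch of Robinson-style first-order unification.
# Representation (an assumption for this sketch): variables are plain
# strings; compound terms and constants are (functor, [args]) tuples.

def is_var(t):
    return isinstance(t, str)

def walk(t, subst):
    # Follow variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    if is_var(t):
        return v == t
    _, args = t
    return any(occurs(v, a, subst) for a in args)

def unify(t1, t2, subst=None):
    # Returns a most general unifier as a dict, or None on failure.
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if is_var(t1):
        if t1 == t2:
            return subst
        if occurs(t1, t2, subst):
            return None  # no finite unifier exists
        return {**subst, t1: t2}
    if is_var(t2):
        return unify(t2, t1, subst)
    f1, a1 = t1
    f2, a2 = t2
    if f1 != f2 or len(a1) != len(a2):
        return None  # functor clash or arity mismatch
    for x, y in zip(a1, a2):
        subst = unify(x, y, subst)
        if subst is None:
            return None
    return subst
```

For example, unifying `f(X, a)` with `f(b, Y)` yields the substitution `{X ↦ b, Y ↦ a}`; each such binding step corresponds, in the paper's construction, to one iteration of network adaptation.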
Sound and complete SLD-resolution for bilattice-based annotated logic programs
 In Proceedings of the International Conference INFORMATION-MFCSIT’06
Abstract

Cited by 2 (2 self)
We introduce the class of normal bilattice-based annotated first-order logic programs (BAPs) and develop declarative and operational semantics for them. In particular, SLD-resolution for these programs is defined, and its soundness and completeness are established.
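To illustrate the classical refutation procedure that the paper extends, here is a hypothetical sketch of SLD-resolution restricted to propositional definite programs. Annotations, bilattices, and first-order unification are all omitted; this shows only the backbone goal-reduction strategy whose soundness and completeness the paper establishes for the richer BAP setting.

```python
# Sketch of SLD-resolution for propositional definite (Horn) programs.
# A program is a list of clauses (head, [body atoms]); a goal is a list
# of atoms to refute. Returns True iff the empty goal is derivable.
# Note: the search is depth-unbounded, so left-recursive programs
# (e.g. p :- p.) would not terminate in this naive sketch.

def sld_prove(program, goals):
    if not goals:
        return True  # empty goal clause: refutation found
    selected, rest = goals[0], goals[1:]
    for head, body in program:
        # Resolve the selected atom against a matching clause head,
        # replacing it with the clause body (leftmost selection rule).
        if head == selected and sld_prove(program, body + rest):
            return True
    return False

# Example program:  p :- q, r.   q.   r.
prog = [("p", ["q", "r"]), ("q", []), ("r", [])]
```

Here `sld_prove(prog, ["p"])` succeeds, while querying an atom with no matching clause, such as `s`, fails finitely.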