LEARNING ALGORITHMS OF LAYERED NEURAL NETWORKS VIA EXTENDED KALMAN FILTERS
WATANABE, K
TZAFESTAS, SG
FUKUDA, T
Learning algorithms are described for layered feedforward-type neural networks, in which each unit generates a real-valued output through a logistic function. The problem of adjusting the weights of internal hidden units can be regarded as a problem of estimating (or identifying) constant parameters with a non-linear observation equation. The present algorithm, based on the extended Kalman filter, has a time-varying learning rate, whereas the well-known back-propagation (or generalized delta rule) algorithm, based on gradient descent, has a constant learning rate. Simulation examples show that when a sufficiently trained network is desired, the proposed algorithm learns faster than the traditional back-propagation algorithm.
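
The following sketch illustrates the idea stated in the abstract: the packed weight vector of a one-hidden-layer logistic network is treated as a constant parameter observed through the non-linear network output, and an extended Kalman filter update adjusts it, with the Kalman gain playing the role of a time-varying learning rate. The network size, noise covariances, initialization, and XOR training data are illustrative assumptions, not values taken from the paper.

# Hypothetical EKF-based weight estimation for a one-hidden-layer logistic network.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class EKFTrainer:
    def __init__(self, n_in, n_hid, p0=100.0, r=0.1):
        rng = np.random.default_rng(0)
        # Hidden weights (n_hid x n_in) and output weights (n_hid) form the state vector.
        self.W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
        self.w2 = rng.normal(scale=0.5, size=n_hid)
        n_w = n_hid * n_in + n_hid
        self.P = p0 * np.eye(n_w)   # parameter-error covariance (assumed initial value)
        self.r = r                  # assumed observation-noise variance

    def forward(self, x):
        h = sigmoid(self.W1 @ x)    # hidden-unit outputs
        y = sigmoid(self.w2 @ h)    # scalar network output
        return h, y

    def update(self, x, d):
        # One EKF step for a single training pair (x, d): the weights are the
        # "constant parameters", the network output is the non-linear observation.
        h, y = self.forward(x)
        dy_dnet = y * (1.0 - y)
        # Linearized observation: Jacobian of the output w.r.t. the packed weights.
        g_w2 = dy_dnet * h
        g_W1 = np.outer(dy_dnet * self.w2 * h * (1.0 - h), x)
        H = np.concatenate([g_W1.ravel(), g_w2])
        # Kalman gain: acts as the time-varying learning rate of the method.
        S = H @ self.P @ H + self.r
        K = (self.P @ H) / S
        err = d - y
        dw = K * err
        self.W1 += dw[: self.W1.size].reshape(self.W1.shape)
        self.w2 += dw[self.W1.size:]
        self.P -= np.outer(K, H @ self.P)
        return err

# Toy usage example (assumed data, not from the paper): learn XOR.
if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    D = np.array([0.0, 1.0, 1.0, 0.0])
    net = EKFTrainer(n_in=2, n_hid=3)
    for epoch in range(200):
        for x, d in zip(X, D):
            net.update(x, d)
    print([round(net.forward(x)[1], 3) for x in X])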