Perceptrons and neural networks
The perceptron with two layers of links, connecting S-A and A-R units,
designed by Prof. Frank Rosenblatt, is the pioneering network for
pattern recognition. Similar multilayered algorithms for process
forecasting are known as parametric GMDH algorithms. Rosenblatt's second
theorem is the fundamental basis for all pattern-recognition neural
networks. But his proposals for learning the perceptron links now seem
ineffective:
he proposed to vary the number of hidden A-units and to choose
the coefficients of the S-A links at random. It is now clear that the
number of A-units should be equal to the number of realizations in the
learning subsample. Then it is easy to choose the S-A link coefficients
so as to obtain zero error on all realizations of this subsample;
instead of the A-R links, an Indicator of Minimal Value (IMV) [4]
should be used. Information about which A-unit produces the minimal
signal can be used for an accurate solution of pattern-recognition problems.
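The IMV scheme can be sketched as follows: each A-unit stores one learning realization and outputs a signal measuring its mismatch with the input, and the IMV returns the label of the unit with the minimal signal. Taking Euclidean distance as the A-unit signal is an assumption for illustration, as is the function name `imv_classify`:

```python
import numpy as np

def imv_classify(x, prototypes, labels):
    """One A-unit per learning realization: each unit outputs its
    distance to the input; the Indicator of Minimal Value (IMV)
    selects the unit producing the smallest signal."""
    signals = np.linalg.norm(prototypes - x, axis=1)  # A-unit outputs
    winner = np.argmin(signals)                       # IMV selection
    return labels[winner]
```

By construction, every realization of the learning subsample is classified by its own A-unit with zero signal, so the error on the learning subsample is zero.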
We can conclude that the alpha- and beta-learning procedures developed
by F. Rosenblatt, as well as backpropagation learning, are not necessary
for perceptron design. Perceptrons can simply be calculated directly
by algebraic means. We consider linear equations and some nonlinear
equations that can be linearized for small deflections from equilibrium
points. In general, all iterative procedures should be replaced
by the solution of a system of Gauss normal equations, because there
are no constraints on the number of realizations that can be taken
from the data sample.
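The direct algebraic calculation amounts to least squares: form the Gauss normal equations X^T X w = X^T y over the learning subsample and solve them. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def solve_normal_equations(X, y):
    """Compute link coefficients w directly from the Gauss normal
    equations X^T X w = X^T y -- no iterative learning involved."""
    return np.linalg.solve(X.T @ X, X.T @ y)
```

If X^T X is ill-conditioned, `np.linalg.lstsq` is the numerically safer way to obtain the same least-squares solution.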
An ordinary single-layered neural network can be considered as
a committee of perceptrons. Here, instead of one equation, a system
of equations should be solved. But all iterative learning procedures
can be excluded as less effective. Algebraic calculation settles
all the questions of once-multilayered neural network design.
Twice-multilayered neural networks with active neurons unite
perceptrons (for pattern recognition) or GMDH algorithms (for
forecasting and interpolation) into a multilayered structure. The
mathematical description of a neural network with active neurons is a
system of equation systems, which provides an accurate algebraic solution.
To use any iterative procedure, such as stochastic approximation
or backpropagation, is simply pointless.
Algebraic calculation gives zero error on all realizations of the
learning subsample. But when we want to minimize the error on samples
that will be received in the near future, an optimal clusterization
of the learning subsample should be performed. The coordinates
of the cluster centers should be used as a sample of the initial data
filtered from noise. This follows from the concept of optimal physical
clusterization.
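The clusterization step can be illustrated with a plain k-means sketch; using k-means here is an assumption, since the source only calls for an optimal clusterization. The returned centers replace the noisy realizations as the filtered learning sample:

```python
import numpy as np

def cluster_centers(data, k, iters=20, seed=0):
    """Simple k-means: returns k cluster centers, used as the
    noise-filtered replacement for the raw learning subsample."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # assign each realization to its nearest center
        d = np.linalg.norm(data[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers
```

Each center averages several noisy realizations, so the resulting sample is smoother than the raw data while preserving its overall structure.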
