The code structure can be partitioned into three main functions, which simulate the behavior of the input, hidden, and output units.
Each of these functions receives its input parameters from the command line and, after an initialization phase (socket opening and so on), waits on its assigned input TCP ports.
The port numbers for each waiting daemon are selected through a command-line option, specifically the "S=..." option of the "sc LOAD" command. As mentioned above, these are TCP ports, because the whole neural network is implemented on top of a connection-oriented protocol.
Any problem during socket creation or connection setup causes the daemon to fail.
Once the connections with the previous layer have been set up, the connections with the following layer are created and the learning process starts.
The whole learning phase, i.e. the pattern submission and the error evaluation, is managed by an external "Front-End" process, which in our case runs on our ABONE node galileo.cere.pa.cnr.it.
At the end of the learning phase, each neural unit receives a SIGPIPE Unix signal, which resets its status and makes it ready for the next session.
The following command line starts a network daemon unit:

neurald m NUM_INPUT NUM_OUTPUT bg1 ... bgn ip1 fwd1 ... ipn fwdn
m = i | h | o, selecting respectively an input, hidden, or output unit.
NUM_INPUT = number of input ports of the elementary unit (max 20).
NUM_OUTPUT = number of output ports of the elementary unit (max 20).
bgi = back gate, i.e. the number of the i-th listening port.
ipi = IP address of the i-th node in the following layer.
fwdi = port number on which the i-th node in the following layer listens for the connection.