Neural Networks

Does somebody have source code for a Neural Network in Pascal?

Thanks.

 

Re:Neural Networks


Kusanagi wrote:
> Does somebody have source code for a Neural Network in Pascal?

I made something once which recognized handwritten numbers. You wouldn't
get much out of it, though. A neural network is simply a curve-fitter,
where the fitted function is complex enough to represent any function,
combined with a routine for fitting it to the data.

I suggest that you look it up at the library and find your own way to do
the programming. My network was a "multi-layer perceptron" net (MLP net),
which used the "back-propagation of error" algorithm (BPE algorithm) for
error reduction. I can recommend this net for similar applications, but
you may want a more dynamic network, e.g. if you want to do control
engineering.
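
To give a flavour of the BPE weight update, here is a minimal sketch of
the delta rule for a single sigmoid cell (a one-cell toy, not my old
program; the names NInp, Eta, CellOut and Train are invented for this
illustration):

CONST NInp = 4;
      Eta  = 0.1;                        { learning rate }
VAR Wgt: Array[0..NInp] of Double;       { Wgt[0] is the bias }
    Inp: Array[1..NInp] of Double;

FUNCTION CellOut: Double;
VAR Sum: Double;
    j: Integer;
BEGIN
    Sum := Wgt[0];                       { bias input = 1.0 }
    For j := 1 to NInp Do
       Sum := Sum + Wgt[j] * Inp[j];
    CellOut := 1.0 / (1.0 + Exp(-Sum));  { sigmoid activation }
END;

PROCEDURE Train(Target: Double);
VAR Y, Delta: Double;
    j: Integer;
BEGIN
    Y := CellOut;
    Delta := (Target - Y) * Y * (1.0 - Y);  { error * sigmoid derivative }
    Wgt[0] := Wgt[0] + Eta * Delta;         { bias sees a fixed 1.0 input }
    For j := 1 to NInp Do
       Wgt[j] := Wgt[j] + Eta * Delta * Inp[j];
END;

For a real MLP you repeat this layer by layer, propagating the deltas
backwards through the net.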

--

Best regards,
Claus Futtrup

Re:Neural Networks


Kusanagi <wide...@jet.es> wrote:
>Does somebody have source code for a Neural Network in Pascal?

What flavor?  Hopfield, Kohonen, Grossberg,
backpropagation, counterpropagation, associative,
perceptron, cognitron, neocognitron, ...

    ...red

Re:Neural Networks


R.E.Donais wrote:

> Kusanagi <wide...@jet.es> wrote:

> >Does somebody have source code for a Neural Network in Pascal?

> What flavor?  Hopfield, Kohonen, Grossberg,
> backpropagation, counterpropagation, associative,
> perceptron, cognitron, neocognitron, ...

>     ...red

A perceptron would be best.

Thanks.

Re:Neural Networks


R.E.Donais wrote:

> Kusanagi <wide...@jet.es> wrote:

> >Does somebody have source code for a Neural Network in Pascal?

> What flavor?  Hopfield, Kohonen, Grossberg,
> backpropagation, counterpropagation, associative,
> perceptron, cognitron, neocognitron, ...

>     ...red

Everything ;) will be fine!
But Kohonen and backpropagation would be best!
//jan

Re:Neural Networks


Kusanagi <wide...@jet.es> wrote:
>A perceptron would be best.

A perceptron is really easy to throw together.

Define two one-dimensional arrays, one for input and the
other for output.  Then define a matrix whose first
dimension (rows) corresponds to the number of input
elements, with a 0-based second dimension whose upper
bound corresponds to the number of output elements.

For example, a perceptron with 9 inputs and 4 outputs
would have:

CONST MxINP = 9;
      MxOUT = 4;

VAR Inp: Array[1..MxINP] of Double;
    Out: Array[1..MxOUT] of Double;
    Wgt: Array[1..MxINP, 0..MxOUT] of Double;

The 0-th weight is the cell's bias which is multiplied by a
fixed input of 1.0.  This allows the cell to have a non-zero
output even if all inputs should be zero.

You initialize the network by assigning random numbers to
the weight matrix.
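
Initialization might look like the following sketch (Randomize and
Random are standard Pascal routines; the range [-0.5, 0.5) is just a
common choice, not mandatory):

PROCEDURE InitWeights;
VAR i, j: Integer;
BEGIN
    Randomize;                        { seed the generator }
    For i := 1 to MxINP Do
       For j := 0 to MxOUT Do
          Wgt[i, j] := Random - 0.5;  { small values in [-0.5, 0.5) }
END;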

You would then fill the input array with the appropriate
values, and call something like the following:

PROCEDURE Solve;
VAR Sum: Double;
    i,j: Integer;
BEGIN
    For i := 1 to MxINP Do Begin
       Sum := Wgt[i, 0]; { cell bias, i.e. weight * 1.0 }
       For j := 1 to MxOUT Do
          Sum := Sum + Wgt[i,j];
       { you now apply an activation function to Sum }
       { and assign the result to the cell's output; }
       { use one of the following:                   }
       { Threshold: }
       {   If Sum > 0.5 Then Out[i] := 1 Else Out[i] := 0; }
       { Sigmoid: }
         Out[i] := 1.0 / (1.0 + Exp(-Sum));
    End;
END;

You compare the output vector against the desired output,
and if the results are wrong (or not within tolerance), you
feed the difference (error) back through the network,
modifying the weights with an algorithm that tends to reduce
the error.
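
The comparison step might be sketched like this (Des, for the desired
output vector, is a name made up for this example):

VAR Des: Array[1..MxOUT] of Double;

FUNCTION NetError: Double;
VAR Sum: Double;
    i: Integer;
BEGIN
    Sum := 0.0;
    For i := 1 to MxOUT Do
       Sum := Sum + Sqr(Des[i] - Out[i]);  { sum of squared errors }
    NetError := Sum;
END;

You would train until NetError falls below your tolerance, adjusting
Wgt after each pass.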

Since this is most likely a school assignment, I'm going to
let you set up the training exemplars and program the
learning algorithm.

If you still have problems putting it all together, then
post what you've done and explain the problem you are having
and we'll see what we can do to help.

    ...red

Re:Neural Networks


I don't believe I did this!  Must be a bad day.  Anyway, I
stated things backwards.  I should have said the matrix
should have one row for each output, and each row should
have a bias plus one weight for each input.  See following
corrections:

rdon...@southeast.net (R.E.Donais) wrote:
>Kusanagi <wide...@jet.es> wrote:

>>A perceptron would be best.

>A perceptron is really easy to throw together.

>Define two one-dimensional arrays, one for input and the
>other for output.  Then define a matrix whose first
>dimension (rows) corresponds to the number of *output*
>elements, with a 0-based second dimension whose upper
>bound corresponds to the number of *input* elements.

>For example, a perceptron with 9 inputs and 4 outputs
>would have:

>CONST MxINP = 9;
>      MxOUT = 4;

>VAR Inp: Array[1..MxINP] of Double;
>    Out: Array[1..MxOUT] of Double;

     Wgt: Array[1..MxOUT, 0..MxINP] of Double;

>The 0-th weight is the cell's bias which is multiplied by a
>fixed input of 1.0.  This allows the cell to have a non-zero
>output even if all inputs should be zero.

>You initialize the network by assigning random numbers to
>the weight matrix.

>You would then fill the input array with the appropriate
>values, and call something like the following:

>PROCEDURE Solve;
>VAR Sum: Double;
>    i,j: Integer;
>BEGIN

     For i := 1 to MxOUT Do Begin
>       Sum := Wgt[i, 0]; { cell bias, i.e. weight * 1.0 }
>       For j := 1 to MxINP Do

           Sum := Sum + Wgt[i,j] * Inp[j];

>       { you now apply an activation function to Sum }
>       { and assign the result to the cell's output; }
>       { use one of the following:                   }
>       { Threshold: }
>       {   If Sum > 0.5 Then Out[i] := 1 Else Out[i] := 0; }
>       { Sigmoid: }
>         Out[i] := 1.0 / (1.0 + Exp(-Sum));
>    End;
>END;

>You compare the output vector against the desired output,
>and if the results are wrong (or not within tolerance), you
>feed the difference (error) back through the network,
>modifying the weights with an algorithm that tends to reduce
>the error.

>Since this is most likely a school assignment, I'm going to
>let you set up the training exemplars and program the
>learning algorithm.

>If you still have problems putting it all together, then
>post what you've done and explain the problem you are having
>and we'll see what we can do to help.

>    ...red

Re:Neural Networks


On Mon, 07 Apr 1997 16:05:01 +0200, Kusanagi <wide...@jet.es> wrote:
>Does somebody have source code for a Neural Network in Pascal?

>Thanks.

I recommend the book "Neural Networks: Algorithms, Applications, and
Programming Techniques" by Freeman and Skapura (published by
Addison-Wesley).

Michael Glover
(Surrey, UK http://www.users.globalnet.co.uk/~glover/ )
