## Getting Started

Neural networks can be seen as a black box: a series of inputs go in, something happens inside the box, and a series of outputs come out.

But there is something special about the box that represents a neural network: a series of analog controls, some turned left and some turned right, in such a way that their positions affect the outputs.

As an example, suppose we have the following desired inputs and outputs:

This means that if the inputs are 4, 5, 7, and 8 (example 1), the outputs must be 18 and 32. Then you have to adjust those analog controls (turning them clockwise or counterclockwise) until that output is obtained.

Once done, it is tested with the inputs 7, 2, 3, and 6, which must produce 21 and 54. If it doesn't work for this second set of inputs, we turn those controls again and start over from the beginning.


And so on, until it fits all the examples. In the case of the table, the three sets of inputs must all give the desired outputs. In other words, the 6 analog controls must be set so that the whole table (all three examples) is satisfied. The goal is to find those particular settings.

And how do you find those settings? At first, the controls are turned at random, and little by little they are adjusted. There is a mathematical procedure that helps a lot here, so that the adjustment is not done blindly. It should be clarified that this box has 6 analog controls; other implementations may have many more.
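The "turn the controls little by little" idea can be sketched in code. Everything below is an assumption for illustration: the text never specifies how the 6 controls are wired to the 4 inputs and 2 outputs, so the `caja` function invents one possible wiring, and the adjustment is a naive keep-the-turn-only-if-it-helps random search rather than the mathematical procedure mentioned above.

```python
import random

# Hypothetical wiring (an assumption, not specified in the text):
# output 1 uses controls p0..p2, output 2 uses controls p3..p5.
def caja(entradas, pesos):
    e0, e1, e2, e3 = entradas
    p0, p1, p2, p3, p4, p5 = pesos
    return [p0 * e0 + p1 * e1 + p2 * e2,
            p3 * e1 + p4 * e2 + p5 * e3]

# The two examples from the text: inputs -> desired outputs
ejemplos = [([4, 5, 7, 8], [18, 32]),
            ([7, 2, 3, 6], [21, 54])]

def error_total(pesos):
    # squared distance between what the box gives and what we want
    return sum((s - d) ** 2
               for entradas, deseadas in ejemplos
               for s, d in zip(caja(entradas, pesos), deseadas))

pesos = [random.uniform(-3, 3) for _ in range(6)]
inicial = error_total(pesos)
mejor = inicial
for _ in range(20000):
    # turn every control a little, at random...
    candidato = [p + random.uniform(-0.1, 0.1) for p in pesos]
    # ...and keep the turn only if the outputs got closer to the table
    if error_total(candidato) < mejor:
        pesos = candidato
        mejor = error_total(pesos)
```

After enough accepted turns, the error shrinks and the box reproduces both examples approximately; the procedure used later in this tutorial adjusts weights far more directly.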

## The "Hello World" of neural networks: The simple perceptron

To start with neural networks, begin with the simplest case: a single neuron, known as the simple perceptron. It is presented like this:

Two inputs, one output, and three analog controls. What is it for? It demonstrates that an algorithm can learn the AND and OR tables. This is the truth table of the AND operation.

Let's make a perceptron learn that table; that is, if True and True are presented at the inputs, the algorithm learns that it should show True at the output, and so on for the whole table. The first step is to make the table quantitative:

The input and output data must be numeric, because inside that box there are mathematical formulas and procedures. For this example, 1 represents true and 0 represents false:

| E1 | E2 | S |
|----|----|---|
| 1  | 1  | 1 |
| 1  | 0  | 0 |
| 0  | 1  | 0 |
| 0  | 0  | 0 |

And now? This is what the box looks like inside.

An analog control is attached to each input, and an extra internal input, called the threshold, is added: it has a fixed value of 1 and its own analog control. These analog controls are called weights. Now let's name each part.

- E1 and E2 are the inputs
- P1 and P2 are the weights of the inputs
- U is the weight of the threshold
- S is the output
- f( ) is the function that gives the value to S

Then the output is calculated like this:

- S = f ( E1 * P1 + E2 * P2 + 1 * U )

To understand it better, let's assign some values:

- E1 = 1 (true)
- E2 = 1 (true)
- P1 = 0.9812 (a random real value)
- P2 = 3.7193 (a random real value)
- U = 2.1415 (a random real value)

So the output would be:

- S = f ( E1 * P1 + E2 * P2 + 1 * U )
- S = f ( 1 * 0.9812 + 1 * 3.7193 + 1 * 2.1415 )
- S = f ( 6.842 )
And what is f()? A function that could be implemented like this:
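A minimal version of that step (threshold) function, consistent with the outputs used in this example:

```python
# step (threshold) activation: 1 if the weighted sum is positive, else 0
def f(x):
    if x > 0:
        return 1
    return 0
```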

Continuing with the example:

S = f ( 6.842 )

S = 1

And that's the expected value. The weights work for those inputs.

Will those weights work for the other inputs? Let's try!

- E1 = 1 (true)
- E2 = 0 (false)
- S = f ( E1 * P1 + E2 * P2 + 1 * U )
- S = f ( 1 * 0.9812 + 0 * 3.7193 + 1 * 2.1415 )
- S = f ( 3.1227 )
- S = 1

No, it didn't work; it should have given zero.

And then? You will have to use other values for the weights. One way is to assign new random values, run the process again, and test with all the inputs until the expected outputs are finally obtained.
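That brute-force strategy, drawing new random weights and re-testing the whole table until a set works, can be sketched like this (helper names are illustrative; for the AND table this usually succeeds after a few dozen draws, but the learning rule used in the code further down is far more efficient):

```python
import random

datos = [[1, 1, 1], [1, 0, 0], [0, 1, 0], [0, 0, 0]]  # AND table: E1, E2, S

def f(x):
    # step activation
    return 1 if x > 0 else 0

def acierta_toda_la_tabla(pesos):
    # True when these weights reproduce every row of the table
    return all(f(e1 * pesos[0] + e2 * pesos[1] + pesos[2]) == s
               for e1, e2, s in datos)

# draw random weights over and over until a set satisfies the whole table
pesos = [random.uniform(-1, 1) for _ in range(3)]
while not acierta_toda_la_tabla(pesos):
    pesos = [random.uniform(-1, 1) for _ in range(3)]
```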

## Now comes the Python code

First we need to import the random library to generate random numbers. We also store the values of the table in an array, and the random weights in another array.

```python
import random

# AND table: each row is [E1, E2, desired output]
datos = [[1, 1, 1], [1, 0, 0], [0, 1, 0], [0, 0, 0]]
# random initial weights: P1, P2 and the threshold weight U
pesos = [random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1)]
aprendiendo = True
salidaEntera = 0
iteracion = 0
tasaAprende = 0.3
```

We will add a while loop that repeats as long as the neural network is still learning. Inside this loop the activation function is evaluated, using the randomly generated weights and the data from the AND table.

To accelerate learning, we calculate the error in each iteration. With it, in each new round the weights are adjusted toward the correct values; when the error reaches zero, no adjustment is needed, and we exit the while loop with the correct weights for running the neural network.

```python
while aprendiendo:
    iteracion = iteracion + 1
    aprendiendo = False
    for cont in range(0, 4):
        # weighted sum: E1 * P1 + E2 * P2 + 1 * U
        salidaReal = (datos[cont][0] * pesos[0]
                      + datos[cont][1] * pesos[1]
                      + pesos[2])
        # step activation function
        if salidaReal > 0:
            salidaEntera = 1
        else:
            salidaEntera = 0
        print(salidaEntera)
        # error between the desired output and the one obtained
        error = datos[cont][2] - salidaEntera
        if error != 0:
            # adjust each weight in proportion to the error and its input
            pesos[0] += tasaAprende * error * datos[cont][0]
            pesos[1] += tasaAprende * error * datos[cont][1]
            pesos[2] += tasaAprende * error * 1
            aprendiendo = True
```

Once we have the correct weights, we can print them on screen to view and store their values, and also show the number of iterations it took the neuron to learn.

```python
print("iteraciones: ", iteracion)
print("peso 1: ", pesos[0])
print("peso 2: ", pesos[1])
print("peso 3: ", pesos[2])
```

In addition, we can do a check, running the values through the neural network to verify that it actually learned to solve the AND table. The code is the following:

```python
for cont in range(0, 4):
    salidaReal = (datos[cont][0] * pesos[0]
                  + datos[cont][1] * pesos[1]
                  + pesos[2])
    if salidaReal > 0:
        salidaEntera = 1
    else:
        salidaEntera = 0
    print("entradas: ", datos[cont][0], " y ", datos[cont][1],
          " = ", datos[cont][2], " perceptron: ", salidaEntera)
```

Now we run the code and check the result: for each row of the table, the program prints the inputs, the expected output, and the output the perceptron produced.

## Final Recommendations

In this test we observed that the neuron took 10 iterations to find the correct weights, which are shown on screen. We also checked the AND table, comparing the expected values against the values the neural network produced, and they match.

Note that it is normal for the number of iterations to vary from run to run, since the initial weights are random.

Good: we have learned how a neuron works, how it learns, and how to program a simple neural network, or perceptron, to solve the AND table.

To learn more, you can replace the values of the AND table with those of the OR table and check whether the perceptron is also able to learn to solve it; you can even modify parts of the code to increase the learning speed.
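For example, swapping in the OR truth table only changes the output column of the data; the rest of the code stays the same:

```python
# OR table: the output is 1 when at least one input is 1
datos = [[1, 1, 1], [1, 0, 1], [0, 1, 1], [0, 0, 0]]
```

OR, like AND, is linearly separable, so the same perceptron learning loop will converge on it.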

You can download the source code of this tutorial from the GitHub repository.

Leave your comments about this practice and suggestions for new tutorials.
