Let's Build Skynet: Building your 1st Neural Network

In the previous post, we discussed the inner workings of a Neural Network. Now that we have an understanding of how data flows through one, let's try building a simple Neural Network.

We'll be coding in Python.

Before we begin, let's go over the things you'd need to start coding the network, apart from the Judgement Day dreams we've all been having...
  • First, let's start with Python: head over to python.org, select your OS, and follow the installation instructions.
  • Next, let's get something to code in; I'd suggest something like VSCode or Atom. If you're just starting out, you can use Notepad++.

Now with all that done, let's dive straight into coding! Be sure to grab a drink...


Well then, we'll start with the imports...
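Here's a minimal sketch of what those imports likely look like (the original snippet isn't shown here, and I'm assuming numpy's random module, though the description below fits Python's built-in random too):

```python
# exp    -> the exponential function (for the Sigmoid activation)
# array  -> to create matrices
# dot    -> to compute dot products
# random -> to generate random starting weights
from numpy import exp, array, dot, random
```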


numpy is a library that, as the name suggests, is all about numbers. From it, we're going to import functions that let us compute the exponential function, take dot products, and create arrays.
random is no random import; it's very essential, as it's a library that contains functions to generate random numbers. We'll get into why we need random numbers in a bit...

Before we begin designing, let's have a look at the network we're trying to build.


So we've got 4 inputs and 1 output. We'll represent the inputs and outputs as matrices, which would give us something like...
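As a sketch (the exact numbers from the original aren't shown, so these values are made up), with each row of the input matrix holding one training example:

```python
from numpy import array

# 4 training examples; each row holds that example's 4 inputs
training_inputs = array([[0, 0, 1, 1],
                         [1, 1, 1, 0],
                         [1, 0, 1, 1],
                         [0, 1, 1, 0]])

# The matching desired output for each example, as a 4x1 column matrix
training_outputs = array([[0, 1, 1, 0]]).T
```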



Here's the code for the whole program; we'll break it down in a bit. I'd recommend giving it one read and trying to understand the overall flow.
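The original listing isn't reproduced here, but based on the walkthrough below, a complete version might look something like this (the class and method names are my own):

```python
from numpy import exp, array, dot, random


class NeuralNetwork:
    def __init__(self):
        # Seed the generator so every run starts from the same 'random' weights
        random.seed(1)
        # 4 inputs feeding 1 output neuron -> a 4x1 weight matrix,
        # initialized to random values in the range -1 to 1
        self.weights = 2 * random.random((4, 1)) - 1

    def sigmoid(self, x):
        # Squashes any value into the range 0 to 1
        return 1 / (1 + exp(-x))

    def sigmoid_derivative(self, x):
        # Gradient of the Sigmoid, given x is already a Sigmoid output
        return x * (1 - x)

    def think(self, inputs):
        # Forward pass: dot product of inputs and weights, then activation
        return self.sigmoid(dot(inputs, self.weights))

    def train(self, inputs, desired_outputs, iterations):
        for _ in range(iterations):
            actual_outputs = self.think(inputs)
            # Error = Desired Value - Actual Value
            error = desired_outputs - actual_outputs
            # Adjustment = Error * Inputs * Gradient
            adjustment = dot(inputs.T,
                             error * self.sigmoid_derivative(actual_outputs))
            self.weights += adjustment


if __name__ == "__main__":
    network = NeuralNetwork()
    training_inputs = array([[0, 0, 1, 1],
                             [1, 1, 1, 0],
                             [1, 0, 1, 1],
                             [0, 1, 1, 0]])
    training_outputs = array([[0, 1, 1, 0]]).T
    network.train(training_inputs, training_outputs, 10000)
    print(network.think(array([1, 0, 0, 1])))  # prediction for a new example
```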



Now, that looks daunting at first, but it's pretty simple once you start breaking it down...
First, let's look at how our input and weights interact.


We just take the dot product of the two matrices.
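As a tiny, self-contained illustration (these weight values are made up):

```python
from numpy import array, dot

inputs = array([1, 0, 0, 1])                    # one training example (4 inputs)
weights = array([[0.5], [-0.2], [0.1], [0.7]])  # hypothetical weight values

print(dot(inputs, weights))  # 1*0.5 + 0*-0.2 + 0*0.1 + 1*0.7 -> [1.2]
```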
That wasn't so hard now, was it? Onward to the next one then...

For the weight values, we initially set random values; instead of writing down random values by hand each time, we make use of the random library.
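Here's roughly what that looks like, assuming numpy's random module; seeding is optional but makes runs reproducible:

```python
from numpy import random

random.seed(1)  # fix the seed so every run produces the same 'random' weights

# A 4x1 weight matrix with values in the range -1 to 1:
# random.random() gives values in 0..1, so scale by 2 and shift down by 1
weights = 2 * random.random((4, 1)) - 1
```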

We use an activation function at this point. An activation function squeezes the output from a neuron into a range of values. We're using the Sigmoid function, which gives you outputs in the range of 0 to 1. This makes processing the outputs easier, and even makes computation easier when you go for more than one layer.
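In code, the Sigmoid is just one line; here's a sketch applying it to the weighted sum from the earlier example:

```python
from numpy import exp

def sigmoid(x):
    # 1 / (1 + e^-x): maps any real number into the range 0 to 1
    return 1 / (1 + exp(-x))

print(sigmoid(1.2))  # ~0.77, the neuron's output for a weighted sum of 1.2
```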

Now that we're done understanding the initialization of our data, let's try to understand how a model is 'trained'. Always remember: you can't teach your dog tricks in one go...



You'll have to give it some time and sit down for multiple sessions, unless your dog gets it in one go 'cause it's a terminator bot from the future, maybe 🤔

Even machines need time and practice to get stuff right. They have to be trained 'n' number of times.
So we want our algorithm to run multiple times, though too many runs are bad too. You'll have to figure out the right number by some experimentation; here we'll do it 10,000 times.
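In code, that's just a loop around the training step (the step itself is broken down below):

```python
TRAINING_ITERATIONS = 10000  # found by experimentation for this toy problem

for iteration in range(TRAINING_ITERATIONS):
    ...  # one full training step per pass, covered next
```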

Now, going with our dog analogy, each time you show your dog how to fetch, you'd probably have to go fetch all by yourself, then watch how the dog does it, and then, based on the mistakes it makes, try correcting it. It's the same with machines too (at least when you're doing supervised learning).

We calculate the error, which is obtained from the formula:
Error = Desired Value - Actual Value
We're calculating the error, so it's like knowing your dog isn't running after the ball and showing him that that's the mistake he's making. Similarly, we know there's a difference between the Desired and Actual Values, so we have to correct those mistakes too. We do this by 'adjusting' the weights, following another simple formula...
Adjustment = Error * Inputs * Gradient
The Gradient is a number that tells us how confident we are in the current weight values; it's obtained by computing the derivative of the activation function you're using, which here is the Sigmoid function.
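Putting the two formulas together, one training loop might look like this sketch (reusing the made-up matrices from earlier so it runs on its own):

```python
from numpy import exp, array, dot, random

random.seed(1)
training_inputs = array([[0, 0, 1, 1],
                         [1, 1, 1, 0],
                         [1, 0, 1, 1],
                         [0, 1, 1, 0]])
training_outputs = array([[0, 1, 1, 0]]).T
weights = 2 * random.random((4, 1)) - 1

def sigmoid(x):
    return 1 / (1 + exp(-x))

for iteration in range(10000):
    actual = sigmoid(dot(training_inputs, weights))      # forward pass
    error = training_outputs - actual                    # Desired - Actual
    gradient = actual * (1 - actual)                     # Sigmoid derivative
    weights += dot(training_inputs.T, error * gradient)  # Error * Inputs * Gradient

print(weights)  # after training, these should favour the inputs that matter
```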
Now, have a look at the code from above.
Seems much simpler, doesn't it? It's the same code, go check...
I've packaged it into modules, as that makes it easier to move the code to production or use it as an import; how you write your code is up to you...

Now, here's some stuff you can try out:

  • Try different training lengths
  • Try fixed weights instead of random
  • And try adding more data.
