
The heart of Artificial Neural Networks

Source: https://mc.ai/the-heart-of-artificial-neural-networks/

Perceptron

A simple artificial neuron with an input layer and an output layer is called a perceptron.

What does this neuron contain?

  1. Summation function
  2. Activation function

The inputs to a perceptron are processed by the summation function and then passed through the activation function to produce the desired output.
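As a rough sketch, a single perceptron could be written in Python like this (the weight and bias values, and the choice of sigmoid as the activation, are illustrative assumptions, not fixed parts of the definition):

```python
import numpy as np

def perceptron(x, w, b):
    """A minimal perceptron: summation followed by activation."""
    z = np.dot(w, x) + b             # summation function: w1*x1 + w2*x2 + ... + b
    return 1.0 / (1.0 + np.exp(-z))  # activation function (sigmoid, as an example)

# Example with two inputs and hypothetical weights and bias
x = np.array([0.5, -1.2])
w = np.array([0.8, 0.3])
print(perceptron(x, w, b=0.1))
```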

Perceptron

This is a simple perceptron, but what if we have many inputs and a huge amount of data? A single perceptron is not enough, so we keep adding neurons. This gives us the basic neural network, with an input layer, hidden layers, and an output layer.

Neural network

We should always remember that a neural network has a single input layer and a single output layer, but it can have multiple hidden layers. In the figure above, we can see a sample neural network with one input layer, two hidden layers, and one output layer.

As a prerequisite for neural networks, let us look at what an activation function is and at the types of activation functions.

Activation Function

The main purpose of the activation function is to convert the weighted sum of a neuron's input signals into its output signal. This output signal then serves as input to the next layer.

Any activation function should be differentiable, since we use the backpropagation mechanism to reduce the error and update the weights accordingly.

Types of Activation Functions


Sigmoid

  1. Ranges between 0 and 1.
  2. A small change in x around zero results in a large change in y (the curve is steepest near the middle).
  3. Usually used in the output layer of binary classification.
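For reference, sigmoid(x) = 1 / (1 + e^(-x)), which maps any real input smoothly into the range (0, 1).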

Tanh

  1. Ranges between -1 and 1.
  2. Output values are centered around zero.
  3. Usually used in hidden layers.
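For reference, tanh(x) = (e^x − e^(-x)) / (e^x + e^(-x)), which maps any real input into the range (−1, 1).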

ReLU (Rectified Linear Unit)

  1. Outputs max(0, x), so it ranges from 0 to infinity.
  2. Computationally inexpensive compared to sigmoid and tanh functions.
  3. Default function for hidden layers.
  4. It can lead to dying neurons (neurons stuck at zero output), which can be mitigated by using the Leaky ReLU function, shown in the sketch below.
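To make these concrete, here is a minimal NumPy sketch of the four functions described above (the leak coefficient alpha = 0.01 in Leaky ReLU is a common but arbitrary choice):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))       # squashes any input into (0, 1)

def tanh(x):
    return np.tanh(x)                     # squashes any input into (-1, 1), zero-centered

def relu(x):
    return np.maximum(0.0, x)             # 0 for negative inputs, identity for positive ones

def leaky_relu(x, alpha=0.01):            # alpha is an illustrative leak coefficient
    return np.where(x > 0, x, alpha * x)  # small slope for negatives avoids dead neurons
```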
So far, we have learned the prerequisites: the perceptron and activation functions. Now let us dive into the working of a neural network (the core of this article).

Working of Neural Network

A neural network works based on two principles:

  1. Forward Propagation
  2. Backward Propagation

Let’s understand these building blocks with the help of an example. Here I am considering a single input layer, one hidden layer, and an output layer to keep the explanation clear.

Forward Propagation

  1. Suppose we have data and would like to apply binary classification to get the desired output.
  2. Take a sample with features X1 and X2; these features will pass through a series of operations to predict the outcome.
  3. Each feature is associated with a weight: here X1, X2 are the features and W1, W2 are the weights. These serve as input to a neuron.
  4. A neuron performs two functions: a) summation and b) activation.
  5. In the summation, each feature is multiplied by its weight and the bias is added: Y = W1X1 + W2X2 + b.
  6. This sum is passed through an activation function. The output of this neuron is multiplied by the weight W3 and supplied as input to the output layer.
  7. The same process happens in each neuron; we may vary the activation function across the hidden-layer neurons, but not in the output layer.
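Putting these steps together, a minimal sketch of this forward pass with a single hidden neuron might look like the following (the feature values, the zero biases, and the use of sigmoid in both layers are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Features of one sample and randomly initialized weights (illustrative values)
x1, x2 = 0.7, 0.3
w1, w2, w3 = np.random.randn(3)
b_hidden, b_out = 0.0, 0.0

# Hidden neuron: summation, then activation
y = w1 * x1 + w2 * x2 + b_hidden    # Y = W1X1 + W2X2 + b
a = sigmoid(y)

# Output neuron: hidden output times W3, then activation
y_hat = sigmoid(w3 * a + b_out)     # predicted probability for binary classification
print(y_hat)
```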
We just initialized the weights randomly and continued the process; there are many techniques for initializing weights. But you may be wondering how these weights get updated. This will be answered using backpropagation.

Backward Propagation

Let us get back to our calculus basics: we will use the chain rule, learned in our school days, to update the weights.

Chain Rule
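The loss L depends on the prediction (call it ŷ), the prediction depends on the weighted sum Y, and Y depends on a weight such as W1. The chain rule lets us decompose the gradient of the loss with respect to that weight:

dL/dW1 = (dL/dŷ) × (dŷ/dY) × (dY/dW1)

Each weight is then nudged in the direction that reduces the error, W1 = W1 − η × (dL/dW1), where η is the learning rate. Applying this decomposition layer by layer, from the output back to the input, is what backpropagation does.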

