
    What Is A Perceptron And How Does It Work?

    "Any man could, if he were so inclined, be the sculptor of his own brain." And of someone else's brain, too. One could even give a brain to a brainless machine; in a sense, that has already been done, hence deep learning. In 1958, a fellow neuro geek created the very first neural network: the perceptron. So, what is a perceptron and how does it work?

    What Is A Perceptron?

    A perceptron is a single-layer neural network: a linear binary classifier used in supervised learning. Psychologist Frank Rosenblatt developed it in 1958 and documented it in his paper "The perceptron: a probabilistic model for information storage and organization in the brain".

    Rosenblatt used it to enable a computer to distinguish between cards marked on the left and cards marked on the right.

    How Does It Work?

    Let's say we want our model to recognize triangles. It would classify each input as either "triangle" or "not a triangle". A more practical application would classify emails as "spam" or "not spam". But how does a perceptron actually work?

    The perceptron incorporates four parts:

    1. Inputs
    2. Weights (W) and bias (B)
    3. Weighted sum
    4. Activation function

    [Image: Structure of a Perceptron]

    So, we feed the network data, known as inputs. Each input has a "weight", which determines how much influence that input has on the output. The network then calculates the weighted sum, which is the sum of each input multiplied by its weight.

    [Image: Weighted Sum]
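    Here's a minimal sketch of that step in Python; the input and weight values below are made up purely for illustration:

    # Hypothetical inputs and weights, chosen only for illustration
    inputs = [0.5, 1.0, -0.3]
    weights = [0.8, -0.2, 0.4]

    # Weighted sum: each input multiplied by its weight, then summed
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    print(weighted_sum)  # 0.5*0.8 + 1.0*(-0.2) + (-0.3)*0.4 = 0.08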

    This sum is then passed through an activation function, which maps it onto the values we want to classify between. Take the unit step activation function: the output is 0 if the input is negative and 1 if the input is positive, so every input gets mapped to either 0 or 1.

    [Image: Unit Step Activation Function]
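    As a rough sketch, a unit step function could be written like this (here zero and positive inputs map to 1, which is one common convention):

    def unit_step(z):
        # Output 1 for non-negative input, 0 otherwise
        return 1 if z >= 0 else 0

    print(unit_step(0.08))  # 1
    print(unit_step(-0.5))  # 0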

    However, without a bias the decision boundary would always have to pass through the origin. To give the perceptron more flexibility in modeling the input data, we add a "bias", which shifts that boundary. So, the weighted sum plus the bias goes through the activation function, which then gives us the output.

    [Image: Activation function]
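    Putting the pieces together, a single forward pass of a perceptron might look like the sketch below; the inputs, weights, and bias are invented for the example:

    def unit_step(z):
        # Output 1 for non-negative input, 0 otherwise
        return 1 if z >= 0 else 0

    def perceptron(inputs, weights, bias):
        # Weighted sum plus bias, passed through the activation function
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return unit_step(z)

    # Made-up example values
    print(perceptron([0.5, 1.0, -0.3], [0.8, -0.2, 0.4], bias=-0.5))  # 0.08 - 0.5 < 0, so output 0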

    Advantages and Disadvantages of Perceptrons

    Perceptrons were definitely a revolutionary invention back in 1958. Nevertheless, they do have some drawbacks. 

    Pros

    • Simple (simpler than multilayer neural networks)
    • Performs well on linearly separable problems, such as simple binary classification tasks

    Cons

    • Limited expressive power and generalization ability
    • Cannot learn data that is not linearly separable, such as the XOR problem (see the sketch after this list)
    • Sensitive to noisy data and prone to overfitting
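    To make that last limitation concrete, here's a rough sketch of the classic perceptron learning rule, trained on the linearly separable AND problem and on the non-linearly separable XOR problem (the learning rate and epoch count are arbitrary choices for the example):

    def train_perceptron(data, epochs=20, lr=0.1):
        # data: list of (inputs, target) pairs with 0/1 targets
        weights = [0.0] * len(data[0][0])
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in data:
                z = sum(x * w for x, w in zip(inputs, weights)) + bias
                prediction = 1 if z >= 0 else 0
                error = target - prediction
                # Perceptron learning rule: nudge weights and bias toward the target
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    def accuracy(data, weights, bias):
        hits = 0
        for inputs, target in data:
            z = sum(x * w for x, w in zip(inputs, weights)) + bias
            hits += (1 if z >= 0 else 0) == target
        return hits / len(data)

    AND_DATA = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # linearly separable
    XOR_DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # not linearly separable

    print(accuracy(AND_DATA, *train_perceptron(AND_DATA)))  # reaches 1.0
    print(accuracy(XOR_DATA, *train_perceptron(XOR_DATA)))  # stays below 1.0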

    Conclusion

    Perceptrons were the first attempt at classifying data into two groups, and they led to the development of multilayer perceptrons, commonly known today as neural networks. Despite their drawbacks, perceptrons laid the groundwork for deep learning algorithms.
