Table of Contents:
Introduction
Perceptron
The Perceptron Convergence Theorem
Relation between the Perceptron and Bayes Classifier for a Gaussian Environment
The Batch Perceptron Algorithm
The perceptron is the simplest form of a neural network, used for classifying linearly separable patterns. Patterns are called linearly separable if they lie on opposite sides of a hyperplane.
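Stated a little more formally (this is the standard definition; the symbols w, b, C1, C2 are introduced here only for illustration), two classes C1 and C2 are linearly separable if some weight vector w and bias b satisfy

```latex
\mathbf{w}^{\mathsf{T}}\mathbf{x} + b > 0 \quad \text{for every pattern } \mathbf{x} \in \mathcal{C}_1,
\qquad
\mathbf{w}^{\mathsf{T}}\mathbf{x} + b < 0 \quad \text{for every pattern } \mathbf{x} \in \mathcal{C}_2 .
```

For example, the logical AND of two binary inputs is linearly separable, whereas XOR is not.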
Basically, the perceptron consists of a single neuron with adjustable synaptic weights and a bias. The algorithm used to adjust the free parameters of this neural network first appeared in a learning procedure developed by Rosenblatt (1958, 1962).
Rosenblatt proved that if the patterns (vectors) used to train the perceptron are drawn from two linearly separable classes, then the perceptron algorithm converges and positions the decision surface in the form of a hyperplane between the two classes.
The perceptron built around a single neuron is limited to performing pattern classification with only two classes. By expanding the output layer of the perceptron to include more than one neuron, we may correspondingly perform classification with more than two classes. However, the classes have to be linearly separable for the perceptron to work properly.
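The multi-neuron extension is not detailed above; as a hedged sketch, one common arrangement gives each output neuron its own weight vector and bias and assigns the input to the class whose neuron responds most strongly. The function name, the matrix layout, and the argmax tie-breaking below are illustrative assumptions, not a prescription from the text.

```python
import numpy as np

def multiclass_perceptron_output(x, W, b):
    """Perceptron output layer with one neuron per class (illustrative sketch).

    x : (num_inputs,) input vector
    W : (num_classes, num_inputs) weight matrix, one row per output neuron
    b : (num_classes,) biases
    Returns the index of the winning class.
    """
    # One linear combination (plus bias) per output neuron.
    v = W @ x + b
    # Assign the pattern to the class whose neuron has the largest response.
    return int(np.argmax(v))

# Example with made-up weights: 3 classes, 2 inputs.
W = np.array([[1.0, -1.0], [-1.0, 1.0], [1.0, 1.0]])
b = np.zeros(3)
print(multiclass_perceptron_output(np.array([0.5, 2.0]), W, b))  # -> 2
```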
Rosenblatt’s perceptron is built around the McCulloch–Pitts model of a neuron.
The summing node of the neural model computes a linear combination of the inputs, to which the bias is added. The resulting sum is applied to a hard-limit activation function.
The neuron produces an output equal to 1 if the hard limiter input is positive, and -1 if it is negative.
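A minimal sketch of this forward computation is given below. The function name, the choice of returning -1 when the sum is exactly zero, and the hand-picked AND weights in the usage example are assumptions made for illustration.

```python
import numpy as np

def perceptron_output(x, w, b):
    """Forward pass of a single McCulloch-Pitts-style neuron (illustrative sketch).

    x : input vector, w : synaptic weights, b : scalar bias.
    """
    # Summing node: linear combination of the inputs plus the bias.
    v = np.dot(w, x) + b
    # Hard limiter: +1 if the sum is positive, -1 otherwise (zero case is a chosen convention).
    return 1 if v > 0 else -1

# Example: weights chosen by hand so the neuron computes logical AND on {-1, +1} inputs.
w = np.array([1.0, 1.0])
b = -1.0
print(perceptron_output(np.array([1.0, 1.0]), w, b))   # -> 1
print(perceptron_output(np.array([-1.0, 1.0]), w, b))  # -> -1
```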