Linear Classifiers
We will explore 3 major algorithms in linear binary classification.
Perceptron
In the perceptron, we take a weighted linear combination of the input features and pass it through a thresholding function that outputs 1 or 0. The sign of $w^Tx$ tells us which side of the plane $w^Tx=0$ the point $x$ lies on. Thus, by taking the threshold as 0, the perceptron classifies data based on which side of the plane a new point lies.
The task during training is to arrive at the plane (defined by $w$) that accurately classifies the training data. If the data is linearly separable, perceptron training always converges.
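The training procedure above can be sketched in plain Python. The toy dataset, learning scheme, and epoch count here are made-up illustrations, not part of the original text:

```python
# Perceptron training on a toy linearly separable dataset (illustrative
# sketch; the data and epoch count are made up for this example).

def train_perceptron(samples, labels, epochs=20):
    """Learn weights w and bias b with the classic perceptron update."""
    n_features = len(samples[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: nudge the plane
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    """Classify by which side of the plane w.x + b = 0 the point lies on."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Two separable clusters in 2-D
X = [(2.0, 1.0), (3.0, 2.0), (-1.0, -1.5), (-2.0, -1.0)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
```

Because the clusters are separable, the updates stop once every training point falls on the correct side of the learned plane.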
Logistic Regression
In logistic regression, we take a weighted linear combination of the input features and pass it through a sigmoid function, which outputs a number between 0 and 1. Unlike the perceptron, which only tells us which side of the plane a point lies on, logistic regression gives the probability of a point lying on a particular side of the plane. This probability approaches 1 or 0 as the point moves far away from the plane, and is close to 0.5 for points very near the plane.
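A small sketch shows this behavior. The weights and test points below are made up; real weights would be learned from data:

```python
import math

# Sigmoid squashes the score w.x + b into (0, 1); points far from the
# plane w.x + b = 0 get probabilities near 1 or 0, points on it get 0.5.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

w, b = [1.0, 1.0], 0.0  # illustrative plane: x1 + x2 = 0

print(predict_proba(w, b, (5.0, 5.0)))    # far on the positive side -> near 1
print(predict_proba(w, b, (0.1, -0.1)))   # on the plane -> 0.5
print(predict_proba(w, b, (-5.0, -5.0)))  # far on the negative side -> near 0
```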
SVM
There can be multiple hyperplanes that separate linearly separable data. An SVM finds the optimal separating hyperplane geometrically: the one that maximizes the margin, i.e., the distance to the closest training points on either side.
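One simple way to approximate this is subgradient descent on the hinge loss. This is a hedged sketch: the data, learning rate, and regularization strength are made up, and a real application would use a library such as scikit-learn rather than this loop:

```python
# Minimal linear SVM sketch: subgradient descent on
#   lam/2 * ||w||^2 + average of max(0, 1 - y * (w.x + b)).
# All hyperparameters and data here are illustrative assumptions.

def train_linear_svm(samples, labels, lr=0.1, lam=0.01, epochs=100):
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 or -1
            w = [(1 - lr * lam) * wi for wi in w]  # regularization shrink
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # inside the margin or misclassified: push plane away
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

X = [(2.0, 1.0), (3.0, 2.0), (-1.0, -1.5), (-2.0, -1.0)]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```

Unlike the perceptron, which stops at any separating plane, the hinge loss keeps pushing until every point sits at least a unit margin from the plane, which is what drives the solution toward the maximum-margin separator.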
Perceptron Model
Definition of Perceptron Model
In machine learning, the perceptron is a supervised learning algorithm for binary classifiers. Modeled as a single neuron, it determines which of two classes a given input belongs to.
Loosely modeled on a biological neuron in the human brain, the perceptron acts as an artificial neuron. As a linear ML algorithm, it performs binary (two-class) classification by learning from the information in its inputs.
A perceptron model has four constituents:
- Input values
- Weights and bias
- Net sum
- Activation function
The perceptron model enables machines to learn the weight coefficients automatically, which is what lets them classify the inputs. Also known as a linear binary classifier, the perceptron is an efficient way to separate input data into two classes.
How perceptron model operates
- Feed the input values into the first layer (the input layer).
- Multiply each input value by its weight (a learned coefficient) and sum the products.
- Add the bias value to this weighted sum.
- Pass the result to the activation (thresholding) function.
- The activation function's output determines the class: it fires (outputs 1) or does not (outputs 0).
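The steps above can be sketched as a single forward pass. The weights, bias, and input vector here are made-up values for illustration:

```python
# One perceptron forward pass: weighted sum + bias, then threshold.
# The weights, bias, and input are illustrative, not learned values.

def perceptron_output(x, w, b):
    net = sum(wi * xi for wi, xi in zip(w, x)) + b  # net sum plus bias
    return 1 if net > 0 else 0                      # threshold activation

w, b = [0.5, -0.4, 0.3], 0.1
print(perceptron_output([1.0, 2.0, 3.0], w, b))  # 0.5 - 0.8 + 0.9 + 0.1 = 0.7 > 0 -> 1
```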