About Single Neuron / Perceptron
The perceptron is the fundamental building block of neural networks. A single neuron takes multiple inputs, multiplies each by a learned weight, sums the results, adds a bias term, and passes the total through an activation function. This visualization walks through each computation step, showing how inputs are transformed into an output, and how changing weights and bias affects the decision.
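The computation steps described above can be sketched in a few lines of Python. The function names (`sigmoid`, `neuron`) and the input, weight, and bias values are illustrative choices, not part of the visualization itself:

```python
import math

def sigmoid(z):
    """Squash the pre-activation z into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, plus bias, passed through the activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Example: two inputs with hand-picked weights and bias.
# z = 1.0*0.6 + 0.0*(-0.4) + 0.1 = 0.7, then sigmoid(0.7) ≈ 0.668
output = neuron([1.0, 0.0], weights=[0.6, -0.4], bias=0.1)
print(round(output, 3))
```

Changing any weight or the bias shifts `z`, and therefore the output, which is exactly the effect the visualization lets you explore interactively.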
Complexity Analysis
- Time Complexity: O(n), where n is the number of inputs
- Space Complexity: O(1)
- Difficulty: Beginner
Key Concepts
Weighted Sum
Each input is multiplied by its weight. The weight controls how much influence that input has.
Bias Term
The bias shifts the activation threshold. It allows the neuron to fire even when all inputs are zero.
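A quick way to see this: when every input is zero, the weighted sum vanishes and the bias alone determines the output. This sketch (using a sigmoid activation as an illustrative choice) shows the output moving with the bias:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# With all inputs at zero, the weighted sum is 0 and only the bias remains,
# so the bias alone decides how strongly the neuron fires.
for bias in [-2.0, 0.0, 2.0]:
    print(bias, round(sigmoid(0.0 + bias), 3))
```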
Activation Function
Without activation, a neuron is just a linear function. Activation functions introduce non-linearity for learning complex patterns.
The Perceptron's Limitation
A single perceptron can only learn linearly separable patterns. It can learn AND and OR, but not XOR, because XOR's positive and negative cases cannot be separated by a single straight line.
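This limitation can be demonstrated concretely. The weights below are illustrative hand-picked values (not the only valid choice) that realize AND and OR with a step activation, followed by a small brute-force search showing that no weight/bias combination on the grid gets all four XOR cases right:

```python
import itertools

def step(z):
    return 1 if z >= 0 else 0

def perceptron(x1, x2, w1, w2, b):
    return step(w1 * x1 + w2 * x2 + b)

# AND and OR are linearly separable: one set of weights suffices for each.
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
assert all(perceptron(x1, x2, 1, 1, -1.5) == y for (x1, x2), y in AND.items())
assert all(perceptron(x1, x2, 1, 1, -0.5) == y for (x1, x2), y in OR.items())

# XOR is not: no combination of weights and bias on this grid matches it
# (and, provably, none exists anywhere in the real numbers).
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
grid = [v / 2 for v in range(-6, 7)]  # -3.0 .. 3.0 in steps of 0.5
assert not any(
    all(perceptron(x1, x2, w1, w2, b) == y for (x1, x2), y in XOR.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
```

Solving XOR requires at least one hidden layer with a non-linear activation, which is precisely what motivates multi-layer networks.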
Common Pitfalls
Weight Initialization Matters
If all weights in a layer start at zero (or at any identical value), every neuron computes the same output and receives the same gradient update, so they can never learn distinct features. Small random initialization breaks this symmetry.
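A minimal sketch of the symmetry problem, reusing a sigmoid neuron (the input values are arbitrary):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    return sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)

x = [0.7, -1.2, 0.4]

# Two "different" neurons, both initialized at zero...
out_a = neuron(x, [0.0, 0.0, 0.0], 0.0)
out_b = neuron(x, [0.0, 0.0, 0.0], 0.0)

# ...produce identical outputs (sigmoid(0) = 0.5) regardless of the input,
# and would receive identical gradients, so gradient descent keeps them
# forever in lockstep.
assert out_a == out_b == 0.5
```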
Sigmoid Saturation
When |z| is large, the sigmoid flattens out and its derivative, sigmoid(z) * (1 - sigmoid(z)), is nearly 0, so gradient-based weight updates become vanishingly small: the vanishing gradient problem.
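The saturation effect is easy to measure. This sketch evaluates the sigmoid derivative at the steepest point (z = 0, where it peaks at 0.25) and deep in the saturated region:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    """d/dz sigmoid(z) = sigmoid(z) * (1 - sigmoid(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid_grad(0.0))   # 0.25: the maximum, at the steepest point
print(sigmoid_grad(10.0))  # ~4.5e-5: saturated, the gradient has nearly vanished
```

This is one reason activations like ReLU, whose derivative does not decay for large positive inputs, are often preferred in deep networks.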