
Feedforward Neural Network

Category: Deep Learning
Difficulty: Intermediate
Time Complexity: O(Σ n_l × n_{l-1})
Space Complexity: O(Σ n_l)

A feedforward neural network passes data through multiple layers of neurons. Each layer applies a linear transformation (weights × inputs + bias) followed by a non-linear activation function. This visualization shows the forward propagation process layer by layer, revealing how raw inputs are transformed into predictions through successive non-linear transformations.

{
  "inputValues": [0.5, 0.8],
  "layerSizes": [2, 3, 1],
  "activationFunction": "sigmoid",
  "seed": "eigenvue-ff"
}
{
  "inputValues": [0.3, 0.7, 0.5],
  "layerSizes": [3, 4, 4, 2],
  "activationFunction": "relu",
  "seed": "deep-net"
}
{
  "inputValues": [1.0, 0.0],
  "layerSizes": [2, 2, 1],
  "activationFunction": "sigmoid",
  "seed": "minimal"
}
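Each preset above specifies the inputs, the layer sizes, the activation function, and a seed for reproducible weights. The visualization's actual initialization scheme isn't shown; the sketch below assumes seeded standard-normal weights and zero biases, and hashes the seed string with CRC32 so runs are deterministic (`init_network` and that scheme are illustrative, not the tool's real code):

```python
import zlib
import numpy as np

def init_network(config):
    """Build weight matrices and bias vectors from a preset config dict.

    Assumed scheme: standard-normal weights, zero biases, seeded by a
    CRC32 hash of the config's seed string for reproducibility.
    """
    rng = np.random.default_rng(zlib.crc32(config["seed"].encode()))
    sizes = config["layerSizes"]
    # One (n_out, n_in) weight matrix per consecutive layer pair
    weights = [rng.standard_normal((n_out, n_in))
               for n_in, n_out in zip(sizes, sizes[1:])]
    biases = [np.zeros(n_out) for n_out in sizes[1:]]
    return weights, biases

config = {"inputValues": [0.5, 0.8], "layerSizes": [2, 3, 1],
          "activationFunction": "sigmoid", "seed": "eigenvue-ff"}
weights, biases = init_network(config)
print([W.shape for W in weights])  # [(3, 2), (1, 3)]
```

Note the shapes: each weight matrix maps the previous layer's activations to the next layer's pre-activations, so a `[2, 3, 1]` network needs a 3×2 matrix followed by a 1×3 matrix.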
function forward(x, weights, biases, activation_fn):
    a = x                                // input layer activations
    for L = 1 to num_layers - 1:
        z = weights[L] @ a + biases[L]   // linear transform
        a = activation_fn(z)             // non-linear activation
    return a                             // network output
import numpy as np

def forward(x, weights, biases, activation='sigmoid'):
    a = np.array(x, dtype=float)      # input layer activations
    for W, b in zip(weights, biases):
        z = W @ a + b                 # linear transform
        a = 1 / (1 + np.exp(-z)) if activation == 'sigmoid' else np.maximum(0, z)
    return a                          # network output
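A quick check of this function on the `[2, 3, 1]` preset, with hand-picked weights (the values are illustrative, not the visualization's; `forward` is repeated so the snippet runs standalone):

```python
import numpy as np

def forward(x, weights, biases, activation='sigmoid'):
    a = np.array(x, dtype=float)
    for W, b in zip(weights, biases):
        z = W @ a + b
        a = 1 / (1 + np.exp(-z)) if activation == 'sigmoid' else np.maximum(0, z)
    return a

# [2, 3, 1] architecture: W1 is 3x2 (input -> hidden), W2 is 1x3 (hidden -> output)
W1 = np.array([[0.1, 0.4], [-0.3, 0.2], [0.5, -0.1]])
b1 = np.array([0.0, 0.1, -0.2])
W2 = np.array([[0.3, -0.5, 0.2]])
b2 = np.array([0.05])

out = forward([0.5, 0.8], [W1, W2], [b1, b2])
print(out.shape)           # (1,)
print(0.0 < out[0] < 1.0)  # True: sigmoid output always lies in (0, 1)
```

Whatever the weights, the sigmoid squashes the final pre-activation into (0, 1), which is why this architecture is a natural fit for binary classification.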

Each layer takes the previous layer’s output, applies the linear transformation W·a + b, and passes the result through the activation function.

Hidden layers learn internal representations of the input data.

A network with a single hidden layer and enough neurons can approximate any continuous function on a compact domain — a result known as the universal approximation theorem.

  • Depth vs Width: More layers allow hierarchical features; more neurons increase per-layer capacity.

Q1: A network has layer sizes [3, 4, 2]. How many weight parameters does it have (excluding biases)?

  • A) 8
  • B) 12
  • C) 20
  • D) 24

Answer: C) 20

Layer 0→1: 3×4 = 12 weights. Layer 1→2: 4×2 = 8 weights. Total: 20.
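The counting rule generalizes to any architecture: between consecutive layers of sizes n_{l-1} and n_l there are n_{l-1} × n_l weights (plus n_l biases). A one-liner to verify (`count_weights` is a hypothetical helper for this check):

```python
def count_weights(layer_sizes):
    # Weights between each consecutive layer pair: n_in * n_out
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(count_weights([3, 4, 2]))  # 20, matching the quiz answer
print(count_weights([2, 3, 1]))  # 9, for the first preset above
```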