Feedforward Neural Network
Category: Deep Learning
Difficulty: Intermediate
Time Complexity: O(Σ n_l × n_{l-1})
Space Complexity: O(Σ n_l)
Overview
A feedforward neural network passes data through multiple layers of neurons. Each layer applies a linear transformation (weights × inputs + bias) followed by a non-linear activation function. This visualization shows the forward propagation process layer by layer, revealing how raw inputs are transformed into predictions through successive non-linear transformations.
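The forward pass described above can be sketched in a few lines of NumPy. This is a hedged sketch: the layer sizes and inputs mirror the 2-3-1 default below, but the randomly initialized weights here are illustrative, not the visualization's seeded ones.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed, not the "eigenvue-ff" seed

# 2-3-1 network: each weight matrix has shape (n_l, n_{l-1})
layer_sizes = [2, 3, 1]
weights = [rng.standard_normal((n, m)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

a = np.array([0.5, 0.8])          # input layer activations
for W, b in zip(weights, biases):
    z = W @ a + b                 # linear transformation
    a = 1 / (1 + np.exp(-z))      # sigmoid activation
print(a)                          # final prediction, shape (1,)
```

Because sigmoid squashes every pre-activation into (0, 1), the final output is always a value strictly between 0 and 1.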
Try It
- Web: Open in Eigenvue →
- Python:
```python
import eigenvue
eigenvue.show("feedforward-network")
```
Default Inputs
```json
{
  "inputValues": [0.5, 0.8],
  "layerSizes": [2, 3, 1],
  "activationFunction": "sigmoid",
  "seed": "eigenvue-ff"
}
```
Input Examples
2-3-1 network (sigmoid)
```json
{
  "inputValues": [0.5, 0.8],
  "layerSizes": [2, 3, 1],
  "activationFunction": "sigmoid",
  "seed": "eigenvue-ff"
}
```
3-4-4-2 deep network
```json
{
  "inputValues": [0.3, 0.7, 0.5],
  "layerSizes": [3, 4, 4, 2],
  "activationFunction": "relu",
  "seed": "deep-net"
}
```
2-2-1 minimal
```json
{
  "inputValues": [1.0, 0.0],
  "layerSizes": [2, 2, 1],
  "activationFunction": "sigmoid",
  "seed": "minimal"
}
```
Pseudocode
```
function forward(x, weights, biases, activation_fn):
    a = x                               // input layer activations
    for L = 1 to num_layers - 1:
        z = weights[L] @ a + biases[L]  // linear transform
        a = activation_fn(z)            // non-linear activation
    return a                            // network output
```
Python
```python
import numpy as np

def forward(x, weights, biases, activation='sigmoid'):
    a = np.array(x, dtype=float)  # input layer activations
    for W, b in zip(weights, biases):
        z = W @ a + b             # linear transform
        a = 1 / (1 + np.exp(-z)) if activation == 'sigmoid' else np.maximum(0, z)
    return a                      # network output
```
Key Concepts
Layer-by-Layer Transformation
Each layer takes the previous layer’s output, applies the affine map W·a + b, then passes the result through the activation function.
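One such step can be traced by hand. The weights and inputs below are illustrative toy numbers, not the visualization's seeded values:

```python
import numpy as np

W = np.array([[1.0, -1.0],
              [0.5,  0.5]])       # shape (n_l, n_{l-1}) = (2, 2)
b = np.array([0.0, 1.0])
a_prev = np.array([2.0, 1.0])     # previous layer's output

z = W @ a_prev + b                # linear transform: [1.0, 2.5]
a = 1 / (1 + np.exp(-z))          # sigmoid squashes each entry into (0, 1)
```

Note that W has one row per neuron in the current layer and one column per neuron in the previous layer, which is where the O(n_l × n_{l-1}) cost per layer comes from.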
Hidden Representations
Hidden layers learn internal representations of the input data.
Universal Approximation
By the universal approximation theorem, a network with a single hidden layer and a non-polynomial activation can, given enough hidden units, approximate any continuous function on a compact domain to arbitrary accuracy.
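As a small illustration of expressive power, a 2-2-1 sigmoid network can represent XOR, a function no single linear layer can. The weights below are hand-picked for clarity, not learned:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Hidden layer: first unit approximates OR, second approximates NAND
W1 = np.array([[ 20.0,  20.0],
               [-20.0, -20.0]])
b1 = np.array([-10.0, 30.0])
# Output layer: AND of the two hidden units yields XOR
W2 = np.array([[20.0, 20.0]])
b2 = np.array([-30.0])

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    h = sigmoid(W1 @ np.array(x, dtype=float) + b1)
    y = sigmoid(W2 @ h + b2)
    print(x, round(float(y[0])))  # rounds to 0, 1, 1, 0
```

The large weight magnitudes push the sigmoids toward their saturated 0/1 regions, so each unit behaves like a soft logic gate.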
Common Pitfalls
- Depth vs Width: More layers allow hierarchical features; more neurons increase per-layer capacity.
Q1: A network has layer sizes [3, 4, 2]. How many weight parameters does it have (excluding biases)?
- A) 8
- B) 12
- C) 20
- D) 24
Answer: C) 20
Layer 0→1: 3×4 = 12 weights. Layer 1→2: 4×2 = 8 weights. Total: 20.
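The counting in the answer generalizes: for layer sizes n_0, …, n_L, the weight count is Σ n_l × n_{l-1} (matching the time complexity stated at the top), and biases add Σ n_l over the non-input layers. A quick check, using a hypothetical helper name:

```python
def count_parameters(layer_sizes):
    """Weight and bias counts for a fully connected feedforward network."""
    weights = sum(m * n for m, n in zip(layer_sizes, layer_sizes[1:]))
    biases = sum(layer_sizes[1:])
    return weights, biases

print(count_parameters([3, 4, 2]))  # (20, 6): 3*4 + 4*2 weights, 4 + 2 biases
```

Applied to the 2-3-1 default network, this gives 2×3 + 3×1 = 9 weights and 3 + 1 = 4 biases.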