Backpropagation Derivation - Delta Rule

I enjoyed writing my background, but the part I was really surprised to enjoy writing up is the derivation of back-propagation. I've read many books, articles and blogs that venture to do the same, but I didn't find any of them particularly intuitive. The best I did find were probably those of Bishop (1995) and Haykin (1994), on which I based my derivation.

Below I include this derivation of back-propagation, starting with the so-called 'delta rule', the update rule for a network with a single hidden layer, and then expanding the derivation to multiple hidden layers, i.e. back-propagation.

The Delta Rule: Learning with a Single Hidden Layer

We start by describing how to learn with a single hidden layer, a method known as the delta rule. The delta rule is a straightforward application of gradient descent (i.e. hill climbing), and is easy to apply because in a neural network with a single hidden layer, the neurons have direct access to the error signal.

The delta rule for single-layered neural networks is a gradient descent method: it uses the derivative of the output error with respect to the network's weights to adjust the weights to better classify training examples.

Training is performed on a training dataset $X$, where each training sample $\mathbf{x}^n \in X$ is a vector $\mathbf{x}^n = (x^n_0, \ldots, x^n_N)$. Assume that for a given training sample $\mathbf{x}^n$, the $j$th neuron in our single-layer neural network has output $y^n_j$, target (desired) output $t^n_j$, and weights $\mathbf{w}_j = (w_{j0}, \ldots, w_{jN})$, as shown in the above figure. We can consider the bias to be an extra weight with a unit input, and thus we can omit the explicit bias from the derivation.

We want to know how to change a given weight $w_{ji}$ given the output of node $j$ for a given input data sample $\mathbf{x}^n$,

$$y^n_j = f(a^n_j), \label{eqn:output}$$

where $f$ is a differentiable activation function and the net activation $a^n_j$ is,

$$a^n_j = \sum_{i=0}^{N} w_{ji} x^n_i. \label{eqn:weightsum}$$

To do so, we must use the error of our prediction for each output $y^n_j$ and training sample $\mathbf{x}^n$ as compared to the known target $t^n_j$,

$$e^n_j = y^n_j - t^n_j. \label{eqn:error}$$
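As a concrete sketch of the forward pass and per-output error above (my own NumPy example; the sigmoid activation, the two-neuron layer, and all numbers are toy choices, not prescribed by the text):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy single-layer network: two real inputs plus a bias treated as an
# extra input fixed at 1, and two output neurons (j = 0, 1).
x = np.array([1.0, 0.5, -0.3])          # x_0 = 1 is the bias input
W = np.array([[0.1, -0.2,  0.4],        # w_{ji}: row j holds neuron j's weights
              [0.3,  0.1, -0.5]])
t = np.array([1.0, 0.0])                # target outputs t_j

a = W @ x                               # net activations a_j = sum_i w_{ji} x_i
y = sigmoid(a)                          # outputs y_j = f(a_j)
e = y - t                               # errors e_j = y_j - t_j
print("errors e_j:", e)
```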

For this derivation, we assume the error for a single sample is calculated by the sum of squared errors of each output,

$$E^n = \frac{1}{2} \sum_j (e^n_j)^2, \label{eqn:errorsum}$$

where the factor of $\frac{1}{2}$ is included for later convenience. In fact, the derivation holds as long as our error function is in the form of an average over per-sample errors (Bishop, 1995).

The chain rule allows us to calculate the sensitivity of the error to each weight $w_{ji}$ in the network,

$$\frac{\partial E^n}{\partial w_{ji}} = \frac{\partial E^n}{\partial e^n_j} \frac{\partial e^n_j}{\partial y^n_j} \frac{\partial y^n_j}{\partial a^n_j} \frac{\partial a^n_j}{\partial w_{ji}}.$$

Differentiating \eqref{eqn:errorsum} with respect to $e^n_j$,

$$\frac{\partial E^n}{\partial e^n_j} = e^n_j,$$

\eqref{eqn:error} with respect to $y^n_j$,

$$\frac{\partial e^n_j}{\partial y^n_j} = 1,$$

\eqref{eqn:output} with respect to $a^n_j$,

$$\frac{\partial y^n_j}{\partial a^n_j} = f'(a^n_j),$$

and finally \eqref{eqn:weightsum} with respect to $w_{ji}$,

$$\frac{\partial a^n_j}{\partial w_{ji}} = x^n_i,$$

since only one of the terms in the sum is related to the specific weight $w_{ji}$.

Thus the sensitivity is,

$$\frac{\partial E^n}{\partial w_{ji}} = e^n_j f'(a^n_j) x^n_i. \label{eqn:deltasensitivity}$$
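This sensitivity $e^n_j f'(a^n_j) x^n_i$ can be checked numerically against a finite-difference estimate of $\partial E^n / \partial w_{ji}$. The sketch below is my own (sigmoid activation, arbitrary toy values), not part of the original derivation:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def error(W, x, t):
    # E^n = (1/2) * sum_j (y_j - t_j)^2
    y = sigmoid(W @ x)
    return 0.5 * np.sum((y - t) ** 2)

x = np.array([1.0, 0.5, -0.3])
W = np.array([[0.1, -0.2,  0.4],
              [0.3,  0.1, -0.5]])
t = np.array([1.0, 0.0])

# Analytic gradient: dE/dw_{ji} = e_j * f'(a_j) * x_i,
# where for the sigmoid f'(a) = f(a) * (1 - f(a)).
a = W @ x
y = sigmoid(a)
e = y - t
analytic = np.outer(e * y * (1.0 - y), x)

# Central-difference estimate of the same gradient.
eps = 1e-6
numeric = np.zeros_like(W)
for j in range(W.shape[0]):
    for i in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[j, i] += eps
        Wm[j, i] -= eps
        numeric[j, i] = (error(Wp, x, t) - error(Wm, x, t)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-8)
```

If the two gradients agree to within the finite-difference error, the chain-rule result above is consistent.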

Typically, what is variously called the local gradient, error, or simply delta is then defined,

$$\delta^n_j = e^n_j f'(a^n_j), \label{eqn:delta}$$

such that \eqref{eqn:deltasensitivity} can be rewritten,

$$\frac{\partial E^n}{\partial w_{ji}} = \delta^n_j x^n_i.$$

The delta rule adjusts each weight $w_{ji}$ proportional to the sensitivity, stepping in the direction that decreases the error,

$$\Delta w_{ji} = -\gamma \frac{\partial E^n}{\partial w_{ji}},$$

where $\gamma$ is a constant called the learning rate or step size. Using the delta defined in \eqref{eqn:delta}, this is simply written,

$$\Delta w_{ji} = -\gamma \, \delta^n_j x^n_i.$$
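Putting the pieces together, a minimal delta-rule training loop might look like the following. This is my own NumPy sketch: the sigmoid activation, the learning rate, and the toy OR dataset are all invented for illustration, not taken from the text:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)

# Toy dataset: learn OR of two binary inputs.
# The leading column of ones plays the role of the bias input x_0 = 1.
X = np.array([[1, 0, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [1.0]])   # one output neuron

W = rng.normal(scale=0.1, size=(1, 3))       # w_{ji}: one row per output neuron
gamma = 0.5                                  # learning rate

for epoch in range(2000):
    for x, t in zip(X, T):
        a = W @ x                            # net activation a_j
        y = sigmoid(a)                       # output y_j = f(a_j)
        delta = (y - t) * y * (1.0 - y)      # delta_j = e_j * f'(a_j)
        W -= gamma * np.outer(delta, x)      # w_{ji} <- w_{ji} - gamma * delta_j * x_i

print(sigmoid(X @ W.T).round(2))
```

After training, thresholding the outputs at 0.5 recovers the OR truth table; the sigmoid outputs themselves only approach 0 and 1 asymptotically.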

Continue to Multi-layer Backpropagation Derivation
