# Backpropagation Derivation - Multi-layer Neural Networks

# The Limitations of Single-Layer Networks

A single-layer neural network, such as the perceptron shown in fig. 1, is only a linear classifier, and as such is ineffective at learning a large variety of tasks. Most notably, in the 1969 book *Perceptrons*, Minsky and Papert showed that single-layer perceptrons could not learn to model functions as simple as the XOR function, amongst other non-linearly separable classification problems.

As shown in fig. 2, no single line can separate even a sparse sampling of the XOR function — i.e. *it is not linearly separable*. Instead, only a composition of lines is able to correctly separate and classify this function, and other non-linearly separable problems.

At the time, it was not obvious how to train networks with more than one layer of neurons, since the existing methods for learning neuron weights — the *perceptron learning rule* (Rosenblatt, 1961) for perceptrons, and the *delta rule* (Widrow & Hoff, 1960) for general neurons, as we derived in the previous post — only applied to single-layer networks. This became known as the *credit-assignment problem*.

# Backpropagation

The credit-assignment problem was solved with the discovery of *backpropagation* (also known as the *generalized delta rule*), allowing learning in multi-layer neural networks. It is somewhat controversial who first “discovered” backpropagation, since it is essentially the application of the chain rule to neural networks; however, it is generally accepted that it was first demonstrated experimentally in neural networks by Rumelhart et al., 1986. Although it is “just the chain rule”, to dismiss this first demonstration of backpropagation in neural networks is to understate the importance of the discovery to the field, and to dismiss the practical difficulties of first implementing the algorithm — as anyone who has since attempted it can attest.

The following is a derivation of backpropagation loosely based on the excellent references of Bishop (1995) and Haykin (1994), although with different notation. This derivation builds upon the derivation for the delta rule in the previous section, although it is important to note that, as shown in fig. 3, the indexing we will use to refer to neurons of different layers differs from that in fig. 1 for the single-layer case.

We are interested in finding the sensitivity of the error $E$ to a given weight in the network $w_{ij}$. There are two classes of weights for which we must derive different rules,

1. **Output Neurons:** those belonging to *output layer* neurons, i.e. neurons lying directly before the output, such as $w_{kj}$ in fig. 3, and
2. **Hidden Neurons:** those belonging to *hidden layer* neurons, i.e. neurons lying in any layer before the output layer, such as $w_{ji}$ in fig. 3.

### Output Layer

The sensitivities for the output weights are relatively easy to find, since these weights correspond to the same types of weights found in single-layer networks, and have direct access to the error signal, i.e. $e^n_j$.

Indeed our derivation of the delta rule also describes the sensitivity of the weights in the output layer of a multi-layer neural network. With some change of notation (now indexing by $k$ rather than $j$ to match fig. 3), we can use the same delta rule sensitivity,
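The displayed equation did not survive extraction; reconstructing it in standard notation (writing $g$ for the activation function, $a^n_k$ for the net activation of output neuron $k$, and $z^n_j$ for the output of hidden neuron $j$ — notation assumed here):

$$\frac{\partial E^n}{\partial w_{kj}} = \frac{\partial E^n}{\partial a^n_k}\frac{\partial a^n_k}{\partial w_{kj}} = -e^n_k\, g'(a^n_k)\, z^n_j = \delta^n_k\, z^n_j, \label{eqn:outputlayer}$$

with $\delta^n_k \equiv \partial E^n / \partial a^n_k$, assuming the squared error $E^n = \tfrac{1}{2}\sum_k (e^n_k)^2$ and $e^n_k = t^n_k - y^n_k$.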

### Hidden Layer

We will first derive the partial derivative $\frac{\partial E^n}{\partial w_{ji}}$ for a single-hidden-layer network, such as that illustrated in fig. 3. Unlike in the case of a single-layer network, as covered in the previous derivation of the delta rule, the weights belonging to hidden neurons have no direct access to the error signal; instead, we must calculate the error signal from all of the neurons that indirectly connect the neuron to the error (i.e. every output neuron $y_k$).

Following from the chain rule, we can write the partial derivative of the error $E^n$ with respect to a hidden weight $w_{ji}$,
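The displayed equation is missing here; expanding the chain rule in the standard way (a reconstruction), it reads

$$\frac{\partial E^n}{\partial w_{ji}} = \sum_k \frac{\partial E^n}{\partial e^n_k}\frac{\partial e^n_k}{\partial y^n_k}\frac{\partial y^n_k}{\partial a^n_k}\frac{\partial a^n_k}{\partial z^n_j}\frac{\partial z^n_j}{\partial a^n_j}\frac{\partial a^n_j}{\partial w_{ji}},$$

where $z^n_j = g(a^n_j)$ denotes the output of hidden neuron $j$ (notation assumed).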

where the sum arises from the fact that, unlike the weight $w_{kj}$ in delta rule: (10), which affects only a single output, the hidden weight $w_{ji}$ affects all neurons in the subsequent layer (see fig. 3).

We already know how to calculate the partials for the output layer from the derivation of the delta rule for single-layer networks, and we can substitute these from \eqref{eqn:outputlayer} for the output neuron and error partial derivatives,
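A reconstruction of the missing equation, substituting $\partial E^n / \partial a^n_k = -e^n_k\, g'(a^n_k)$ for the first three factors of the chain (assuming the squared-error conventions of the delta rule):

$$\frac{\partial E^n}{\partial w_{ji}} = -\sum_k e^n_k\, g'(a^n_k)\,\frac{\partial a^n_k}{\partial z^n_j}\frac{\partial z^n_j}{\partial a^n_j}\frac{\partial a^n_j}{\partial w_{ji}}. \label{eqn:twolayer2}$$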

Recall from delta rule: (2) that the net activation $a$ is a weighted sum of the previous layer's outputs. Thus,
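Reconstructing the missing step (assuming the standard definition of the net activation, $a^n_k = \sum_j w_{kj}\, z^n_j$):

$$\frac{\partial a^n_k}{\partial z^n_j} = w_{kj}.$$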

and substituting from delta rule: (8) and delta rule: (10) into \eqref{eqn:twolayer2},
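Reconstructing the result of these substitutions (with $x^n_i$ the $i$-th input and $g'$ the activation derivative, both assumed notations for the quantities in delta rule: (8) and delta rule: (10)):

$$\frac{\partial E^n}{\partial w_{ji}} = -\sum_k e^n_k\, g'(a^n_k)\, w_{kj}\; g'(a^n_j)\, x^n_i.$$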

This bears some resemblance to the expression derived for a single-layer network, and just as in delta rule: (14), we can use our definition of the delta to simplify it. For hidden layers this evaluates as
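The missing definition, reconstructed in standard form (the hidden delta is the activation derivative times a weighted sum of the output deltas):

$$\delta^n_j \equiv \frac{\partial E^n}{\partial a^n_j} = g'(a^n_j) \sum_k w_{kj}\, \delta^n_k. \label{eqn:deltahidden}$$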

This leaves us with the more convenient expression (as we will see when deriving for an arbitrary number of hidden layers),
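The expression referred to, reconstructed under the same assumed notation:

$$\frac{\partial E^n}{\partial w_{ji}} = \delta^n_j\, x^n_i.$$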

### Arbitrary Number of Hidden Layers

The derivation above was based on the specific case of a single hidden layer network, but it is trivial to extend this result to multiple hidden layers. There is a recursion in the calculation of the partial derivatives in \eqref{eqn:deltahidden} which holds for a network with any number of hidden layers, and which we will now make explicit.

The delta is defined,
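A reconstruction in the notation suggested by the surrounding text, with $i$ the earlier and $j$ the later of two adjacent layers:

$$\delta^n_i \equiv \frac{\partial E^n}{\partial a^n_i} = g'(a^n_i) \sum_j w_{ji}\, \delta^n_j,$$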

for any adjacent neural network layers $i, j$, including the output layer where the outputs are considered to have an index $j$. The sensitivity is then,
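Reconstructing the final expression (with $z^n_i$ the output of neuron $i$, equal to the input $x^n_i$ when layer $i$ is the input layer — notation assumed):

$$\frac{\partial E^n}{\partial w_{ji}} = \delta^n_j\, z^n_i.$$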
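As a concrete illustration (not from the original post), the two rules derived above — the output-layer delta from the delta rule and the backward recursion for hidden deltas — can be sketched in plain Python on the XOR problem from the introduction. The class and function names are my own, and sigmoid activations with a squared-error loss are assumed:

```python
import math
import random

random.seed(0)

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

class TwoLayerNet:
    """Minimal one-hidden-layer network trained with backpropagation.

    Notation follows the derivation: w_ji are input->hidden weights,
    w_kj are hidden->output weights, and the deltas are dE/da.
    """
    def __init__(self, n_in, n_hidden, n_out):
        # +1 column per row for a bias weight.
        self.w_ji = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                     for _ in range(n_hidden)]
        self.w_kj = [[random.uniform(-1, 1) for _ in range(n_hidden + 1)]
                     for _ in range(n_out)]

    def forward(self, x):
        x = x + [1.0]  # append bias input
        z = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in self.w_ji]
        y = [sigmoid(sum(w * zi for w, zi in zip(ws, z + [1.0])))
             for ws in self.w_kj]
        return x, z, y

    def train_step(self, x, t, lr):
        xb, z, y = self.forward(x)
        zb = z + [1.0]
        # Output deltas: delta_k = -e_k * g'(a_k); g'(a) = y(1-y) for sigmoid.
        delta_k = [-(t[k] - y[k]) * y[k] * (1 - y[k]) for k in range(len(y))]
        # Hidden deltas: delta_j = g'(a_j) * sum_k w_kj * delta_k
        delta_j = [z[j] * (1 - z[j]) *
                   sum(self.w_kj[k][j] * delta_k[k] for k in range(len(y)))
                   for j in range(len(z))]
        # Gradient descent: w <- w - lr * dE/dw, with dE/dw = delta * input.
        for k in range(len(y)):
            for j in range(len(zb)):
                self.w_kj[k][j] -= lr * delta_k[k] * zb[j]
        for j in range(len(z)):
            for i in range(len(xb)):
                self.w_ji[j][i] -= lr * delta_j[j] * xb[i]

def mse(net, data):
    return sum((t - net.forward(x)[2][0]) ** 2 for x, t in data) / len(data)

# XOR: not linearly separable, hence unlearnable by a single-layer network.
xor_data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

net = TwoLayerNet(2, 4, 1)
before = mse(net, xor_data)
for _ in range(5000):
    for x, t in xor_data:
        net.train_step(x, [t], lr=1.0)
after = mse(net, xor_data)
print(f"MSE before: {before:.3f}, after: {after:.3f}")
```

The error on XOR should fall substantially during training, which a single-layer network cannot achieve regardless of the number of updates.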
