
Some limits on neural networks - Hopfield net

In this section, I look at an example of limits on the amount of information that can be learnt.

Now to a different kind of net, the Hopfield net. We have a set of neurons, and each neuron has a synaptic connection to every other neuron. These connections also have weights, and the weights are symmetric: neuron 1 affects neuron 2 in just the same way that neuron 2 affects neuron 1. There is an updating rule by which each neuron in turn adjusts its activation, given the weighted activations of the others, so as to reduce the net's ``computational energy''. This rule is applied repeatedly until the whole system settles into a stable state.
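The notes do not give the formulas, but in the standard formulation (assuming two-state activations s_i = +1 or -1, symmetric weights w_{ij} = w_{ji}, and no self-connections) a neuron updates by

    s_i \leftarrow \mathrm{sgn}\Bigl( \sum_j w_{ij} s_j \Bigr)

and the ``computational energy'' being minimised is

    E = -\frac{1}{2} \sum_{i \neq j} w_{ij} s_i s_j .

Each such update can only lower E or leave it unchanged, which is why the repeated updating eventually settles into a stable state.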

The best way to think of this is via a physical analogy (which Hopfield nets were developed from). Consider a set of compass needles arranged on a board. A needle's position corresponds to its activation. Each needle affects the others via its magnetic field. The strength of this connection is usually fixed, determined by the rate at which magnetic fields attenuate in the air. However, it could be changed by interposing something which blocks off such fields, such as soft iron. If you position two needles so that they repel each other, they will be in a state of high potential energy. They'll swing round so as to minimise their PE, ending up attracting each other, i.e. with one's south pole next to the other's north pole. If you have more than two needles, the whole set will adjust itself to find the state of lowest PE.

You can consider all possible positions of the needles, and plot PE against these positions. This gives you a potential energy landscape. The points of lowest energy or minima are, not surprisingly, the lowest points in the landscape.

Going back to nets, these points of lowest energy correspond to stored memories or patterns. Start the net off anywhere else, and it will ``roll down the energy landscape'' until it falls into one of the minima. The shape of the energy landscape is determined by the synaptic weights, which are set during training. Retrieving a memory from a partial input corresponds to starting somewhere high on the landscape, and rolling into the nearest minimum.
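To make this concrete, here is a minimal sketch in Python (not taken from the notes; the function names and parameters are invented for illustration) of storing patterns with the usual Hebbian rule and then retrieving one from a corrupted input:

    import numpy as np

    def train(patterns):
        # Hebbian storage: w_ij = (1/N) * sum over patterns of x_i * x_j, no self-connections.
        n = patterns.shape[1]
        w = patterns.T @ patterns / n
        np.fill_diagonal(w, 0.0)
        return w

    def recall(w, state, n_sweeps=10):
        # Asynchronous updating: each neuron in turn takes the sign of its weighted input.
        s = state.copy()
        for _ in range(n_sweeps):
            for i in np.random.permutation(len(s)):
                s[i] = 1 if w[i] @ s >= 0 else -1
        return s

    # Store two random +/-1 patterns over 100 neurons, then recall one from a corrupted copy.
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(2, 100))
    w = train(patterns)

    noisy = patterns[0].copy()
    flipped = rng.choice(100, size=15, replace=False)   # corrupt 15 of the 100 units
    noisy[flipped] *= -1

    print(np.array_equal(recall(w, noisy), patterns[0]))  # usually True: the net rolls into the stored minimum

Flipping many more bits of the input, or storing many more patterns, makes the final check start to fail; that failure is the capacity limit discussed next.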

So how many memories can be stored? You can do a mathematical analysis which shows that if you try to store too many, the local minima interfere. In effect, different landscapes get superimposed, and create false minima which don't correspond to real memories. Worse, the positions where the memories should be may no longer be minima anyway.
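The notes do not quote a figure, but the standard analysis (for random +1/-1 patterns stored with the Hebbian rule sketched above) gives a capacity of roughly

    p_{\max} \approx 0.138\,N

patterns for a net of N neurons; try to store many more than that and the superimposed landscapes and false minima described above wreck retrieval.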

Furthermore, in any real system of compass needles, the needles will never be completely still, because of thermal excitation. If the temperature is too great, this once again causes the same kind of effect as above: the system gets jostled out of the true minima. The same kind of thing happens in a Hopfield net with noisy updating. This is explained in detail, with diagrams, on page 40 of Hertz. For a nice general introduction to Hopfield nets, with an explanation of potential energy landscapes, see the article on Neural networks in the Encyclopaedia of AI.
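For readers who want the rule itself, one standard way of making the updating noisy (Glauber dynamics, an assumption on my part rather than anything taken from the notes) is to make each update probabilistic, setting neuron i to +1 with probability

    P(s_i = +1) = \frac{1}{1 + \exp(-2 h_i / T)}, \qquad h_i = \sum_j w_{ij} s_j ,

where T plays the role of temperature. At T = 0 this reduces to the deterministic rule above; as T rises the updates become more random, and above a critical temperature the stored patterns can no longer be retrieved reliably.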





Jocelyn Ireson-Paine
Wed Feb 14 23:47:23 GMT 1996