NPB 163/PSC 128

Lab #8

(due Friday 2/27)


1. Winner-take-all learning

In the last assignment, you saw that linear Hebbian learning, which converges to the principal components of the data, did not provide a particularly useful description of the data in the array D2. In this problem, you will see whether a competitive Hebbian learning network can discover the clusters in this data.

a) Plot the data contained in the array D2.
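
A minimal plotting sketch, assuming D2 holds one two-dimensional data point per column (a 2 x N array); adjust the indexing if the points are stored as rows instead:

    plot(D2(1,:), D2(2,:), '.')     % scatter plot of the data points
    axis equal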

b) Now train a network consisting of four neurons on this data using the standard winner-take-all learning rule (eqs. 9.6/9.7 of HKP chapter 9). Replot the weight vectors on each update so you can watch them evolve (create a graphics handle with erase mode 'xor' and use set(h,...) as in past assignments). Note: you may need to employ one of the techniques listed on p. 221 to avoid dead units.
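
One possible sketch of the training loop is given below; it assumes D2 is 2 x N and stores the weight vectors as the columns of a 2 x 4 matrix W (the variable names are illustrative, not prescribed):

    [ndim, N] = size(D2);
    W = randn(ndim, 4);                            % four units, one weight vector per column
    W = W ./ (ones(ndim,1) * sqrt(sum(W.^2)));     % normalize the weight vectors
    eta = 0.05;
    h = plot(W(1,:), W(2,:), 'ro', 'EraseMode', 'xor');   % (use hold on first if the data are already plotted)
    for t = 1:10000
        x = D2(:, ceil(N*rand));                   % draw a random data point
        [mx, istar] = max(W' * x);                 % winner = unit with the largest response
        W(:,istar) = W(:,istar) + eta * (x - W(:,istar));   % move only the winner toward x
        set(h, 'XData', W(1,:), 'YData', W(2,:));  % redraw the weight vectors
        drawnow
    end

Note that picking the winner by maximum dot product is equivalent to picking it by minimum Euclidean distance only when all the weight vectors have the same norm.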

c) Why are four neurons necessary here rather than just two? How would you change the competition and the learning rule so that just two units are needed?


2. Self-organizing maps

a) Download the script kohonen.m from the web site and make sure you understand its implementation of Kohonen's algorithm (eqs. 9.21/9.22 of HKP chapter 9).
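
For reference, a self-contained sketch of a single update step (eqs. 9.21/9.22) is shown below; the variable names and parameter values are illustrative and need not match those used in kohonen.m:

    M = 20;  ndim = 2;  eta = 0.1;  sigma = 2;     % illustrative sizes and parameters
    W = rand(ndim, M);                             % one weight vector per map unit (columns)
    pos = 1:M;                                     % map coordinate of each unit (1-D chain)
    x = rand(ndim, 1);                             % one input sample
    d2 = sum((W - x*ones(1,M)).^2);                % squared distance from x to every unit
    [mn, istar] = min(d2);                         % best-matching ("winning") unit
    Lambda = exp(-(pos - pos(istar)).^2 / (2*sigma^2));        % neighborhood function
    W = W + eta * (ones(ndim,1)*Lambda) .* (x*ones(1,M) - W);  % pull units toward x

In the full algorithm this step is repeated over many samples, with eta and sigma shrinking over the course of training.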

b) Run the program and observe how the map adapts to the distribution of the data.

c) Now create a lesion in the input array, so that a specific region receives no stimulation. How does the map change? Alternatively, overtrain the network in one area of the array; how does the map change now?
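
One way to implement the lesion, assuming the script draws its training samples uniformly from the unit square (check how kohonen.m actually generates its inputs and adapt accordingly), is to reject any sample that falls in a forbidden region:

    x = rand(2,1);                          % candidate sample from the unit square
    while all(abs(x - 0.5) < 0.1)           % reject anything inside the central lesion
        x = rand(2,1);
    end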


3. Hopfield nets

For this problem you will need to download the scripts hopnet.m and genpat.m, and the data file patterns.mat.

a) Look over the script hopnet.m and make sure you understand its implementation of eq. 1 of the Hopfield paper (note that here we are using values of ±1 instead of 1/0).
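
For reference, one asynchronous update of eq. 1 with ±1 units might look like the sketch below, assuming T is the 100x100 weight matrix and net_state is the current 100x1 state vector (hopnet.m may differ in detail):

    i = ceil(100*rand);                     % pick one unit at random
    u = T(i,:) * net_state;                 % summed input to unit i
    if u > 0
        net_state(i) =  1;
    elseif u < 0
        net_state(i) = -1;                  % when u == 0 the unit is left unchanged
    end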

b) Load the patterns in patterns.mat and display each of them so you know what they look like. Each is a 100x1 vector that you will need to reshape into a 10x10 array in order to display it as an image. Use axis image!
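
A minimal display sketch, assuming the file stores the patterns as the columns of a 100xP array named patterns (check the actual variable names after loading):

    load patterns                            % loads patterns.mat
    imagesc(reshape(patterns(:,1), 10, 10))  % first pattern as a 10x10 image
    axis image
    colormap gray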

c) Train the weight matrix T using the outer product rule (eq. 17 of the handout).
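
A sketch of the outer-product rule, assuming the patterns to be stored are the columns of a 100xP array X of ±1 values (some formulations also scale T by 1/N):

    [N, P] = size(X);
    T = zeros(N, N);
    for s = 1:P
        T = T + X(:,s) * X(:,s)';            % add the outer product of each stored pattern
    end
    T = T - diag(diag(T));                   % zero the diagonal (no self-connections)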

d) Run the network on one of the trained patterns to verify that it is a stable fixed point (attractor) of the dynamics (i.e., set net_state equal to one of the patterns you loaded in and check that it does not change).

e) Now corrupt one of the patterns by randomly flipping some of the components. Run the network starting from this initial condition and observe what happens. How far can you corrupt the pattern before it can no longer be restored?
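
One way to corrupt a pattern is to flip k randomly chosen components, as in this sketch (again assuming ±1 patterns stored as the columns of an array called patterns):

    k = 10;                                       % number of components to flip
    net_state = patterns(:,1);                    % start from one of the stored patterns
    idx = randperm(100);
    net_state(idx(1:k)) = -net_state(idx(1:k));   % flip k randomly chosen components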

f) Generate some of your own patterns using the script genpat.m, and add these into the weight matrix T. How many patterns can you store in the network before they start to collide with each other?