NPB 261B - Lab #1


Due Monday, Jan. 24 (in class)


In this assignment you will create a reconstruction of an image that reflects the spatial resolution conveyed by parvo- and magno-cellular retinal ganglion cells, assuming they act roughly as linear spatial filters.  The dendritic field diameters of these cells increase with eccentricity roughly according to the following relations:
Parvo diameter ~  .01*E
Magno diameter ~ .03*E
where ~ denotes "approximately equal" and E is eccentricity measured in degrees of visual space.  Interestingly, the spacing between adjacent retinal ganglion cells of the same class also increases with eccentricity roughly according to the same relation.  In order for this sub-sampling to occur without aliasing, retinal ganglion cells would essentially need to spatially lowpass filter the image appropriately - i.e., they would have to summate over space, and the areal extent of the summation would have to increase with eccentricity in much the same way as dendritic field diameter.  The suggestion then is that retinal ganglion cells are using their dendritic fields (perhaps in addition to other retinal circuitry) to do this spatial summation.  We can demonstrate the consequences of this summation by simulating an array of retinal ganglion cells processing an image, and then reconstructing an image from the outputs of these retinal ganglion cells.
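If it helps to visualize these scaling relations before going further, the following short snippet (not part of the lab scripts; the only numbers in it are the two constants quoted above) plots the parvo and magno diameter functions over the central 40 degrees:
% Sketch: plot the two dendritic field diameter relations vs. eccentricity.
E = 0:0.5:40;                                  % eccentricity in degrees
plot(E, 0.01*E, 'b', E, 0.03*E, 'r')
xlabel('eccentricity (deg)'), ylabel('dendritic field diameter (deg)')
legend('parvo ~ .01*E', 'magno ~ .03*E', 'Location', 'northwest')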

A useful tool for doing this simulation is the Laplacian pyramid, which essentially provides a convenient way to produce a multi-resolution representation of an image, without loss.   If you have engineering inclinations of any kind I highly recommend reading the full paper:
Burt PJ, Adelson EH (1983)  The Laplacian pyramid as a compact image code.   IEEE Transactions on Communications, 31, 532-540.
You can proceed with this assignment though without knowing how it works in detail (although it is really a very simple and elegant idea - among the top 10 papers I have ever read).  You will use the Laplacian pyramid first to construct a multi-resolution representation of an image.  Then you will window out the center portion of each level of the pyramid to simulate how resolution falls off with eccentricity in the retina.  Finally, you will reconstruct the image from the windowed pyramid.  The result will be an image that reflects the image content conveyed by retinal ganglion cells which (hypothetically) scale their summation zones as described above.
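If you would like a feel for what each pyramid level contains before running the provided scripts, here is a conceptual sketch of a single build step.  It uses Matlab's imresize (Image Processing Toolbox) for the blur/downsample and expand operations; the provided buildpyr may use different filters, so treat this purely as an illustration of the idea, not as the actual implementation:
% Conceptual sketch of one Laplacian pyramid step (illustration only;
% buildpyr may differ in its filtering details).
% img: any grayscale image stored as double (e.g., the einstein image read in below).
lo  = imresize(img, 0.5, 'bilinear');          % lowpass and downsample by two
up  = imresize(lo, size(img), 'bilinear');     % expand back to full size
lap = img - up;                                % fine detail kept at this level
% Recursing on lo gives the coarser levels; keeping lap at each level
% plus the final lowpass image is what makes exact reconstruction possible.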

First, you must download the Matlab scripts and some example images from the following URL:
http://redwood.ucdavis.edu/bruno/npb261b/lab1/
Just grab the whole directory lab1 and put it somewhere convenient on your computer (such as in the work folder in Matlab).  Then start up Matlab and read in one of the images.  For example, to read the image einstein.jpg you type
im=double(imread('einstein.jpg'));
The double command is needed to convert the image to double precision floating point, which is the format in which Matlab does all arithmetic operations.  To display this image type
imagesc(im), axis image
colormap gray

You should see a 256 x 256 pixel rendition of the most famous scientist of the 20th century.  

Now, build a Laplacian pyramid from this image and display the pyramid as follows:
pyr=buildpyr(im,4);
showpyr(pyr)

This will build a pyramid composed of four levels.  You will see that the different levels of the pyramid contain different amounts of spatial detail, proceeding from coarse to fine (left to right in the display).  The leftmost image forms the top of the pyramid.  It contains a compact rendition of the original image that has been lowpass filtered and downsampled to 32 x 32 samples.  The next level contains the additional detail needed to expand it to 64 x 64.  The next level after that contains the additional detail needed for 128 x 128, and so on.  Moving up or down a level of the pyramid decreases or increases, respectively, the level of resolution by a factor of two.

To reconstruct an image from the pyramid, you first expand the top level of the pyramid, then add it to the next level of detail, then expand this and add to the next level, etc. until you get to the bottom of the pyramid.  At this point you will have the original image back.  You can use the function reconpyr to do this as follows:
imh=reconpyr(pyr);
You can see what reconpyr does by opening it up in the editor window - it is a very simple function.  
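For intuition, here is a sketch of the expand-and-add loop described above.  It assumes, purely for illustration, that the pyramid is stored as a cell array ordered from coarsest to finest (the variable name pyrcell is hypothetical), and it uses imresize for the expand step; the provided reconpyr may organize things differently:
% Sketch of pyramid reconstruction by repeated expand-and-add
% (illustration only; see reconpyr for the actual implementation).
imrec = pyrcell{1};                                       % top (coarsest) level
for k = 2:numel(pyrcell)
  imrec = imresize(imrec, size(pyrcell{k}), 'bilinear');  % expand by a factor of two
  imrec = imrec + pyrcell{k};                             % add the next level of detail
end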

Now, to create an image that depicts what is represented by the retina we will not want a full reconstruction, because we know that the retina is throwing away a lot of information via summation and subsampling by the ganglion cells.  We can simulate this process by windowing each level of the pyramid as follows:
pyrw=windowpyr(pyr);
Display the windowed pyramid using showpyr(pyrw).  You will see that the center portion of each level of the pyramid has been windowed out using a Gaussian window (i.e., a circular window with smoothly tapered boundaries).  Importantly, the size of the window within each level is exactly the same - it subtends 32 x 32 sample nodes (i.e., the standard deviation of the Gaussian window is 16).

But what this translates into in terms of image pixels depends on which level of the pyramid you are in.  At the bottom level of the pyramid, each sample node corresponds to a pixel in the image.  In the next level up, a sample node corresponds to a 2 x 2 region of image pixels.  In the next level up from that it corresponds to a 4 x 4 region, and so on.  Each time you move up a level of the pyramid, you increase a sample node's region of coverage by a factor of two.  So while a 32 x 32 window in the bottom level of the pyramid corresponds to a 32 x 32 window in the image, a 32 x 32 window in the next level up corresponds to a 64 x 64 window in the image, etc.  By the time you get to the top level, a 32 x 32 window subtends the entire image.

When we expand and add all the levels together, the result is an image which contains detail in the center and gets more blurry as you move away from the center.  You can compute and display this result as follows:
imh=reconpyr(pyrw);
imagesc(imh), axis image
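To make the windowing step itself concrete, here is a sketch of how a single level might be windowed with a circular Gaussian.  The variable lev is hypothetical (one level pulled out of the pyramid), and the exact parameters used by windowpyr may differ:
% Sketch: apply a circular Gaussian window to one pyramid level
% (illustration only; windowpyr does this for every level).
[n, m] = size(lev);
[x, y] = meshgrid(1:m, 1:n);
r2 = (x - m/2).^2 + (y - n/2).^2;        % squared distance from the center
sigma = 16;                               % in sample nodes, as described above
w = exp(-r2 / (2*sigma^2));               % smoothly tapered circular window
levw = lev .* w;                          % center passes, periphery is attenuated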

Based on the information above, you should be able to convince yourself that the diameter of the region of spatial summation used to create the blur increases roughly linearly with eccentricity (in a piecewise fashion) according to D = r/16, where r is the distance from the center of the image in pixels, and D is the diameter of the blur.
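If you want to check this numerically, the little sketch below uses the simplification that level k's window covers a radius of about 16*2^(k-1) image pixels and contributes a blur diameter of about 2^(k-1) pixels; these round numbers are an approximation for illustration, not something the scripts compute:
% Rough check of D = r/16: for a few radii, find the finest level whose
% window still covers that radius and the blur diameter it contributes.
r     = [20 50 100];                 % example distances from center, in pixels
level = ceil(log2(r/16)) + 1;        % finest covering level (level 1 = bottom)
D     = 2.^(level - 1);              % blur diameter at that level, in pixels
[r; D; r/16]                         % D tracks r/16 in a piecewise fashion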

Now, how do we make it so that the diameter of the summation zone follows the relations above for the diameters of parvo- and magno-cellular dendritic arbors?  First, you need to figure out the viewing distance for which each pixel subtends .01 degrees.  Then, for simulating the parvo-cellular lattice you will need to set the diameter of the window so that the constant of proportionality in the relation between D and r is .01.  In the example above, the diameter of the window was 32, which yielded a constant of proportionality of 1/16 - that is, the constant is simply 1 over the window's radius.  So to get a constant of proportionality of .01 the window needs to have a diameter of 200 sample nodes.  The function windowpyr lets you set the radius of the window by passing a second argument.  To set the radius to 100 you would type
pyrw=windowpyr(pyr,100);
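As a sketch of the viewing-distance step mentioned above, the snippet below uses an assumed screen pixel size of 0.25 mm; measure your own monitor's pixel pitch and substitute it before reporting a number:
% Sketch of the viewing-distance calculation (pixel_mm is an assumed value).
pixel_mm = 0.25;                      % assumed screen pixel size in mm
dist_mm  = pixel_mm / tand(0.01);     % distance at which one pixel subtends .01 deg
dist_mm / 10                          % in cm (about 143 cm for this pixel size)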
With a radius of 100, however, the window for the einstein image now subtends the entire image already at the bottom level of the pyramid, so we won't see the effect of resolution falling off with eccentricity.  We need a bigger image.  For this, try logpicture.jpg, which is 1024 pixels wide.  To properly view the result you will need to use another image viewer, though, because unfortunately Matlab scales the image to fit within the figure window, so it doesn't actually fill 1024 screen pixels.  You will need to save the image as a jpeg file and then open the file in an image viewer that lets you view the image at a ratio of 1:1 (for example Photoshop).  To save the image as a jpeg file type
imwrite(uint8(imh),'logpicture-parvo.jpg','Quality',100)
Now do the same thing for the magno-cellular lattice.

What you need to turn in:

You should turn in a lab write-up that includes the following:
  1. A printout of the filtered image for both parvo- and magno-cellular lattices, along with the window size you used for simulating the magno-cellular lattice.
  2. The viewing distance you used and how you calculated it.
  3. Your observations.  For example, can you tell that the image content in the periphery is blurred when you fixate the center of the image?  If so, what factors do you suppose have not been properly taken into account?