Thursday, 14 June 2012

After Images: Amateur Adventures in Neuroscience

     Many years ago (perhaps as early as the 1980s?), I remember reading, in a book about artificial intelligence, about attempts to develop vision systems where researchers were surprised to find the processed image essentially going blank after a few seconds if the camera were left pointed at a static scene. I don't remember the details of the book or the experiment, but it's something I've found myself thinking about from time to time, and of course it makes a certain amount of sense when you think about how neural networks work. After all, visual processing isn't just a matter of detecting the intensity of each pixel; it involves picking out patterns: edges, lines, movement, and so on. Layer upon layer of neurons performs this task, each neuron responding to a particular, relatively simple combination of inputs from several neighbouring neurons to decide things like whether or not its tiny area of the visual field includes a boundary between different regions.
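     To make that a little more concrete, here is a rough sketch of the kind of computation one such layer might do. This is only my own toy illustration, not the system from that book: each output unit looks at a small neighbourhood of the input image and combines those few values with fixed weights (Sobel-style kernels, chosen just for the example) to decide whether its little patch contains a boundary.

    # A toy single "layer": each output unit combines a 3x3 neighbourhood of
    # inputs with fixed weights and reports how strongly its patch looks like
    # a boundary. The kernels and test image are invented for illustration.
    import numpy as np

    def edge_response(image):
        """Return, for each interior pixel, the strength of any local edge."""
        kx = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)   # responds to vertical boundaries
        ky = kx.T                                   # responds to horizontal boundaries
        h, w = image.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                patch = image[i:i + 3, j:j + 3].astype(float)
                gx = (patch * kx).sum()             # weighted sum of neighbouring inputs
                gy = (patch * ky).sum()
                out[i, j] = np.hypot(gx, gy)        # "does my patch contain an edge?"
        return out

    # A perfectly uniform, featureless patch produces no response at all:
    flat = np.full((10, 10), 128)
    print(edge_response(flat).max())                # -> 0.0

     Note that a unit like this reports only local differences between neighbouring inputs, not absolute intensities; if nothing in its patch differs from anything else, it has nothing to say.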

     It has recently occurred to me that I may have observed this very same phenomenon with the visual system in my own brain. I had tried to duplicate the result by staring very intently at a single point long enough for things to fade, but that never worked; even in a very calm environment where nothing appears to be moving, the semi-random saccadic eye movements force your field of vision to twitch just enough to keep things dynamic for your neural net.
     But there is a way to imprint a static image directly onto your retina, without any possibility of its moving around. I noticed this one morning as I woke up, and the high-contrast image of the daylight through my window against the darkness of the rest of my room left an afterimage when I closed my eyes again. At first the image was a very sharp outline of my window, with every slat of the Venetian blind distinctly visible. But soon it began to blur and fade.
     Afterimages are caused by the retinal cells in your eye becoming fatigued. The cells fire when exposed to light, sending a signal through the optic nerve to the brain, but firing uses metabolic resources, and it takes time to regenerate them. Now, I had thought the fading of the afterimage, after I closed my eyes again, was simply a matter of my retinal cells recovering from the effort. But then I found I could restore the afterimage for a short time by opening my eyes without looking in the direction of the window. That is, I controlled for the possibility of simply re-burning a similar afterimage by looking at my darkened room rather than at the window. And the image came back, though as before only for a few seconds before it faded. I was able to make this happen again and again, for several minutes. Evidently it can take quite a while for retinal cells to recover fully.
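     Just to put that fatigue-and-recovery story into rough numbers, here is a toy model of my own, with completely invented rates rather than real physiology: each cell spends a metabolic "resource" while it fires under light and slowly regenerates it in darkness, and the afterimage corresponds to the gap between a fatigued cell and a rested neighbour.

    # A toy fatigue model (invented numbers, not actual physiology): firing under
    # light depletes a cell's metabolic "resource"; darkness regenerates it, but
    # much more slowly, so the gap between cells persists for minutes.
    def simulate(resource, light, seconds, depletion_rate=0.05, recovery_rate=0.01):
        """Return the resource level (0..1) after `seconds` of light or darkness."""
        dt = 0.1
        for _ in range(int(seconds / dt)):
            if light:
                resource -= depletion_rate * resource * dt         # firing costs resources
            else:
                resource += recovery_rate * (1.0 - resource) * dt  # slow regeneration
        return resource

    bright = simulate(1.0, light=True, seconds=60)    # cells that stared at the window
    dark = simulate(1.0, light=False, seconds=60)     # neighbours that saw only darkness
    print(round(bright, 2), round(dark, 2))           # a large gap -> a strong afterimage
    print(round(simulate(bright, light=False, seconds=180), 2))  # still not fully recovered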

     So here's what I think is happening. The afterimage on my retina is completely static, because it's fixed to particular retinal cells. The layers in my neural net quickly cease to recognize it as a pattern, and stop reporting it, precisely because it is so static. In other words, the brain automatically filters out these artifacts of the optic system itself, compensating for the fact that one retinal cell might be giving less than its normal response while another nearby one is giving its full signal, and reconstructing what the scene "really" looks like without those local variations in sensitivity.
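     Here is a crude sketch of that filtering idea, again just a toy of my own with invented rates and signal levels: suppose what gets reported onward is each cell's raw signal minus a slowly adapting per-cell baseline. A perfectly static difference between a fatigued cell and a rested one then fades away, but any overall change in the input, like opening your eyes onto the dark room, throws the baselines out of date and makes the fixed retinal pattern visible again for a moment.

    # A toy adaptation model (invented numbers): "perception" is each cell's raw
    # signal minus a per-cell baseline that drifts toward the current input, so
    # static patterns fade while changes get through.
    def perceive(raw_over_time, adapt_rate=0.3):
        """raw_over_time: list of (fatigued_cell, rested_cell) signals per time step."""
        baseline = [0.0, 0.0]
        contrast = []
        for fatigued, rested in raw_over_time:
            p_fatigued = fatigued - baseline[0]                   # what each cell "reports"
            p_rested = rested - baseline[1]
            contrast.append(round(p_rested - p_fatigued, 2))      # apparent afterimage strength
            baseline[0] += adapt_rate * (fatigued - baseline[0])  # baselines adapt toward
            baseline[1] += adapt_rate * (rested - baseline[1])    # whatever is current
        return contrast

    # Eyes closed (the fatigued cell responds less even in darkness), then eyes
    # opened onto the dark room (both cells get a little more light):
    closed = [(0.02, 0.10)] * 6
    opened = [(0.10, 0.30)] * 6
    print(perceive(closed + opened))
    # -> the contrast fades toward zero, pops back when the overall input
    #    changes, then fades again as the baselines catch up

     That fade, then reappear, then fade again cycle is essentially what I kept producing by opening and closing my eyes.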
     And when you think about it, it makes perfect sense that our visual system would adapt this way, since it's supposed to tell us what's happening in the world around us, not irrelevant administrative details about how each retinal cell is performing.
