Hotwiring the Visual System
May 2, 2007 | Posted by Johan in Neuroscience, Sensation and Perception.
A recent paper in PNAS by Pezaris and Reid (2007) outlines a potential way of providing artificial vision to the blind. This in itself is nothing new – other teams are working on producing artificial input to the visual system via the retina, or by stimulating the primary visual cortex directly. The novel aspect of this paper is that Pezaris and Reid (2007) are probing the lateral geniculate nucleus (LGN) instead.
The LGN (pictured in a macaque monkey above, courtesy of the wonderful BrainMaps) serves as something of a relay point, halfway between the retina and the primary visual cortex. Conveniently, it has a retinotopic organisation, meaning that the receptive fields of the neurons in this nucleus match the layout of the retina – so a straight line across the retina would be represented by a straight line (more or less!) of firing neurons in the LGN. While the primary visual cortex and even higher-level areas share this organisation, the neurons in those areas handle more complex processing. As early as the primary visual cortex, neurons respond preferentially to specific angles of orientation, which may make it hard to provide useful artificial input. In the LGN, simple blobs still elicit solid responses – far closer to the pixels that a camera could provide as input.
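To make the retinotopy idea concrete, here is a toy sketch (entirely my own illustration, not anything from the paper) of why this organisation is so convenient for a prosthesis: if each camera pixel maps one-to-one onto a stimulation site, a shape on the "retina" comes out as the same shape in the "LGN".

```python
# Toy model of retinotopic mapping: an 8x8 camera pixel grid maps
# one-to-one onto a grid of hypothetical LGN stimulation sites.
# All numbers are invented; a real LGN map is far messier (the fovea
# is hugely magnified, for one thing).

def pixel_to_electrode(x, y, width=8):
    """Map a camera pixel (x, y) to an electrode index, row-major."""
    return y * width + x

# A straight horizontal line across the 'retina' (the row y = 2)...
line = [(x, 2) for x in range(8)]

# ...activates a contiguous run of 'LGN' electrodes, preserving its shape.
electrodes = [pixel_to_electrode(x, y) for x, y in line]
print(electrodes)  # consecutive indices 16..23
```

In cortex this simple lookup would break down: an orientation-tuned neuron cares about the angle of the whole line, not about any single pixel.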
The Pezaris and Reid (2007) design is simple. Macaque monkeys were trained to make saccades (fast orienting eye movements) towards spots of light. The experimenters inserted electrodes into the LGN, and used a clever technique to pinpoint the location in the monkey's visual field that each electrode corresponded to. The normal light-orienting trials were mixed with trials where a small current was passed through the electrode, stimulating neurons in the LGN. The monkeys produced the same gaze-orienting response to this stimulation, even though they viewed no spot of light whatsoever. The figure below shows the response patterns.
This result could be interpreted as a simple reflex – we know that primates will orient their eyes automatically to an unexpected stimulus, if it is salient. Such orienting responses may be mediated by the retinotectal pathway, which connects the retina to the superior colliculus and on to the dorsal visual stream, without passing through the LGN. However, Pezaris and Reid (2007) showed that this was no mere orienting reflex with a second task, in which the monkey was flashed two lights and trained to look at one followed by the other. The monkeys could carry out this task just fine even when the second stimulus was electrical rather than visual.
It’s worth noting that since the monkeys responded to the electrical stimulus the way they had been trained to respond to the visual stimulus, the two types of stimuli must have been perceived as fairly similar – stimulus generalisation only goes so far. While we can’t know what the percept was actually like until trials are run on humans, it’s tempting to imagine what the monkeys might have seen. Given what is known about processing in the LGN, it was probably just a highly specific blob of light, similar to the afterimage you get after staring at a light bulb.
Another interesting thought is what the monkeys may have perceived when they moved their eyes. Unfortunately, this experiment used an 80–200 ms stimulus presentation, meaning that the monkeys simply didn’t have time to move their eyes before the stimulus was gone. Given Helmholtz’s Outflow Theory, this is probably a good thing, as we will see next!
Essentially, Helmholtz tried to understand why the world doesn’t seem to move when you move your eyes. Objectively, the images that fall on your retina as you move your eyes are the same that would be produced by the entire world shifting slightly to the side. Yet your perceptual experience is quite different – try poking at the side of your eye and note how different it feels when your eye moves this way. Helmholtz assumed that when the brain sends a signal to the eye muscles to move, it also sends a copy of that signal to a comparator, which compares the copy of the movement order with the perceived retinal movement. As long as the movement signal copy and the retinal movement match, the comparator cancels the perceived motion, making the world seem stable. When you poke at your eye, there is no movement signal from the brain, so the comparator can only conclude that the retinal image is moving because the world is moving. While Helmholtz first outlined this theory back in the 19th century, it has been largely confirmed by subsequent research.
So what does this mean for our monkeys, had we given them more time to view the stimuli? Well, the monkeys are moving their eyes, but there is no change in the retinal image – the same spot of the LGN is being stimulated, no matter where they look. So the comparator gets a movement signal from the eye, but no retinal movement signal. The outcome of this, according to Outflow Theory, is that every time the poor monkey shifts its gaze, it will appear as if the entire world moved, just like when you poke your eye.
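The comparator logic can be sketched in a few lines of toy Python (my own illustration, with made-up scalar "degrees of motion" as the signals): perceived world motion is whatever retinal motion is left over once the efference copy of the eye-movement command has been accounted for.

```python
# Toy sketch of Helmholtz's outflow comparator. Signals are scalar
# degrees of motion; positive = rightward. Purely illustrative.

def perceived_world_motion(efference_copy, retinal_motion):
    """World motion = retinal image motion not explained by the eye movement.
    A rightward eye movement shifts the retinal image leftward, so the two
    cancel exactly when the world is stationary."""
    return retinal_motion + efference_copy

# Normal saccade: eyes move 5 deg right, image shifts 5 deg left.
print(perceived_world_motion(efference_copy=5, retinal_motion=-5))  # 0: world seems stable

# Poking the eye: the retinal image shifts, but no movement command was sent.
print(perceived_world_motion(efference_copy=0, retinal_motion=-5))  # -5: world seems to move

# LGN stimulation: the 'image' is pinned to the electrode, so a saccade
# produces an efference copy with no compensating retinal shift.
print(perceived_world_motion(efference_copy=5, retinal_motion=0))   # 5: world seems to jump
```

The last case is also, in effect, the curare situation described below: a movement command with no matching retinal change, and the world appears to move.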
But how do we know this is what happens? Rather good evidence comes from experiments where (unfortunate) participants have their eyes rendered immobile. This can be done by various means, all of them unpleasant. One favourite is injection of curare, which produces temporary paralysis. When the paralysed participant tries to move their eyes, the experience is one of the world moving, although neither their eyes nor the retinal image moves (e.g., Matin et al., 1982). Presumably, the movement order copy is still sent to the comparator, which concludes that since your eyes are (supposedly) moving but the retinal image isn’t changing, the world must be moving. As an aside, the effect isn’t as strong under daylight conditions, when movement can be judged relative to the background, which suggests that the visual system is doing something a bit more complex than Helmholtz’s comparator.
As a technical demonstration, the Pezaris and Reid (2007) paper is quite impressive. However, some hurdles remain before anything like this will help the blind. As outlined above, a device using this technology would likely be disturbing and even nauseating to users at first, as every eye movement would make the world appear to lurch violently. Perhaps it would be possible to learn to suppress eye movements, but as mentioned above, the orienting response in primates is quite automatic and is unlikely to disappear with training. Also, Pezaris and Reid (2007) don’t make much of the fact that the alternative input points – the retina and the primary visual cortex – are more or less available to direct probing, being covered at most by bone. The LGN, on the other hand, sits pretty much in the middle of the brain. While single electrodes can be passed down through the cortex without causing damage (as was done in this study), you soon run into problems when you want to achieve something better than a few blobs of light. To achieve a modest 640×480-pixel resolution, you would need 307,200 input points, i.e., electrodes. While I’m no neurobiologist, running that much wire through the brain seems like a challenge.
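The back-of-the-envelope arithmetic runs as follows (the 640×480 target is from the text above; the per-array channel count is my own order-of-magnitude assumption for comparison):

```python
# One electrode per pixel, under the simplest possible assumption of a
# one-to-one pixel-to-electrode mapping.
width, height = 640, 480
electrodes_needed = width * height
print(electrodes_needed)  # 307200

# For scale: implanted research arrays carry on the order of 100 channels
# (an assumed round figure), so you would need thousands of them.
channels_per_array = 100
print(electrodes_needed // channels_per_array)  # 3072 arrays
```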
On the other hand, this line of research could lead to important insights into the workings of the retinotectal pathway. As mentioned above, this pathway goes from the retina to the superior colliculus, terminating in the dorsal stream, which codes spatial localisation or visually guided action (depending on whether you ask Ungerleider and Mishkin or Milner and Goodale, respectively). With electrode stimulation in the LGN, you could essentially present stimuli selectively to the geniculostriate pathway (LGN to primary visual cortex, and on). The dorsal stream appears to receive input from both the retinotectal and the geniculostriate pathways, so by looking at performance differences when the stimulation is visual versus electrical, you could start to tease apart the relative contributions of these two pathways to dorsal-stream processing.
Matin, L., Picoult, E., Stevens, J.K., Edwards, M.W., Young, D., & MacArthur, R. (1982). Oculoparalytic illusion: visual-field dependent spatial mislocalizations by humans partially paralyzed with curare. Science, 216, 198–201.
Pezaris, J.S., & Reid, R.C. (2007). Demonstration of artificial visual percepts generated through thalamic microstimulation. Proceedings of the National Academy of Sciences of the United States of America, 104, 7670–7675.