Early split in the processing of faces and their expressions
May 7, 2007 · Posted by Johan in Emotion, Face Perception, Neuroscience, Sensation and Perception, Social Neuroscience.
You might suppose that processing faces and facial expressions should happen in the same place – a face centre, perhaps the conveniently named fusiform face area? On the other hand, recognising a face should reasonably require a different type of processing from recognising an expression – while recognising an individual face calls on constant features and fine detail, recognising an expression requires an ability to process motion quickly, and not necessarily in as much detail.
Vuilleumier et al (2003) capitalised on the fact that the situation I just described sounds a lot like the broad split in processing that occurs early in the visual system, in the lateral geniculate nucleus (LGN). In order to understand this paper, you may need a crash course on the basic visual system – skip ahead 4 paragraphs if you’re comfortable with the geniculostriate and retinotectal pathways. You may also want to look at a previous post on the LGN, where I also tried to explain what it’s all about.
The LGN is part of the thalamus, located close to the centre of each hemisphere, and serves as a relay station for visual input. It receives input from the retina (via the optic chiasm) and sends this signal on to the primary visual cortex. The LGN is laid out in a retinotopic fashion, meaning that a circle that falls on the retina should result in a circle of firing neurons in the LGN. Most important to the topic at hand, the LGN is divided into 6 layers, producing what is essentially 6 maps of the retina stacked on top of one another. As an aside, smaller koniocellular neurons have been discovered between the layers, though most textbooks still pretend that these cells do not amount to yet another 6 layers.
The first two layers (counting from the bottom in the figure above) are known as magnocellular. The cells here are not so good for detail or colour, but they respond well to motion. The remaining 4 layers are parvocellular. The cells in these layers have the converse characteristics: they are sensitive to detail and colour, but not to motion.
Unfortunately, it gets even more complicated. It turns out that there is yet another visual pathway, which bypasses the LGN and the primary visual cortex altogether. This retinotectal pathway is believed to project from the retina onward via the superior colliculus and the pulvinar. It appears to have much the same characteristics as the magnocellular pathway.
So to summarise: some retinal ganglion cells are wired to cells in the magnocellular layers, others to the parvocellular layers, and yet others to the retinotectal pathway. The different layers of the LGN each project on separately to the primary visual cortex, where this tidy distinction is quickly blurred (to the dismay of anyone who was hoping for a tidy explanation of the visual system). There is also a separate retinotectal pathway, which is believed to have the same characteristics as the magnocellular layers of the LGN.
Still with me? We are now at a point where we can get to the topic at hand. To test the relative contributions of these various pathways to the perception of faces and facial expressions, Vuilleumier et al (2003) generated sets of faces that were altered to be preferentially processed by one of the pathways. Examples of the faces are below.
By taking out all low spatial frequency information from the faces, you end up with the pictures in the middle column below. These faces would be detected easily by the parvocellular pathway, but not by the magnocellular or retinotectal pathways. Conversely, the low spatial frequency faces in the right column contain less information than the normal faces in the left column – and this loss falls entirely on the parvocellular pathway. The magnocellular and retinotectal pathways presumably operate at such coarse resolution that they cannot discriminate between the faces in the left and right columns.
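For the curious, this kind of spatial frequency filtering is straightforward to try yourself. The sketch below is my own illustration, not the authors' code, and the cutoff values in the usage example are just placeholders: it low- or high-pass filters a grayscale image (a 2D numpy array) in the Fourier domain by zeroing out frequencies on one side of a radial cutoff.

```python
import numpy as np

def spatial_frequency_filter(image, cutoff, keep="low"):
    """Low- or high-pass filter a grayscale image in the Fourier domain.

    cutoff is in cycles per image. keep="low" retains frequencies at or
    below the cutoff (the blurry, coarse information the magnocellular
    pathway favours); keep="high" retains those above it (the fine
    detail the parvocellular pathway favours).
    """
    rows, cols = image.shape
    # 2D spectrum with the zero-frequency component shifted to the centre
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Frequency of each coefficient, converted to cycles per image
    fy = np.fft.fftshift(np.fft.fftfreq(rows)) * rows
    fx = np.fft.fftshift(np.fft.fftfreq(cols)) * cols
    radius = np.sqrt(fx[np.newaxis, :] ** 2 + fy[:, np.newaxis] ** 2)
    # Hard circular mask: keep one side of the cutoff, zero the other
    mask = radius <= cutoff if keep == "low" else radius > cutoff
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# Hypothetical cutoffs for illustration only
# lsf_face = spatial_frequency_filter(face, 6, keep="low")
# hsf_face = spatial_frequency_filter(face, 24, keep="high")
```

Because the two masks partition the spectrum, the low-pass and high-pass outputs at the same cutoff sum back to the original image, which makes the "split" nature of the manipulation easy to see.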
Vuilleumier et al (2003) used fMRI to investigate the neural responses to the three types of faces (normal, high spatial frequency, low spatial frequency), which could be either neutral or fearful. The participants were asked to judge the gender of the faces, but this task mainly served to keep their attention on the faces, which were presented in random order. The researchers hypothesised that areas in the ventral visual cortex (i.e., the fusiform face area) would be sensitive to the high spatial frequency faces, while the amygdala would be sensitive to the low spatial frequency faces.
This latter hypothesis is based on the idea that there is a kind of shortcut in the brain, which allows potentially dangerous stimuli such as fearful faces to reach the amygdala as quickly as possible. There, a fear response is produced that then guides behaviour (fight, flee, etc).
Vuilleumier et al (2003) found ample support for both hypotheses – across neutral and fearful expressions, the fusiform face area responded more strongly to the high spatial frequency faces than the low spatial frequency faces, while the opposite was true of the amygdala.
To strengthen the evidence for low spatial frequency coding in the amygdala, Vuilleumier et al (2003) also showed that the amygdala only responded more to fearful faces (as compared to neutral faces) when the faces were presented at low spatial frequency. No significant difference appeared for the same fearful-versus-neutral comparison at high spatial frequency. Better yet, the same activation pattern appeared in a part of the thalamus corresponding to the pulvinar and the superior colliculus, which, as you may recall, was hypothesised to be part of the retinotectal pathway that enables the amygdala response. The figure below gives the mean activity across the conditions for the amygdala (d) and the pulvinar-colliculus area (e).
I think this study is a nice example of how vision research can be used to generate hypotheses for affective neuroscience. While this study is by no means conclusive (there were a few other brain activation patterns that are not so easily interpreted, which I have spared you), it does provide rather good evidence of a coarsely-coded quick shortcut in the visual system, which may enable fast amygdala-mediated fear responses.
While the idea of a continued strict division between the magno- and parvocellular pathways in “higher” visual areas has been abandoned (it turns out that even a recognition-based area like the fusiform face area receives some input from the magnocellular pathway as well), it is interesting to think that an amygdala with blurry visual input could explain why we are so easily scared of sudden movements. Before your higher-resolution, geniculostriate-mediated visual system has determined that it’s actually just clothes blowing in the wind on a laundry line, your retinotectal amygdala shortcut has already produced quite a fright based on its own, low-resolution input.
Vuilleumier, P., Armony, J.L., Driver, J., & Dolan, R.J. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nature Neuroscience, 6, 624-631.