Learning to recognise faces: perceptual narrowing? January 11, 2008Posted by Johan in Animals, Developmental Psychology, Face Perception, Sensation and Perception.
That image certainly piques your interest, doesn’t it? Sugita (2008) set out to address one of the ancient debates in face perception: the role of early experience versus innate mechanisms. In a nutshell, some investigators hold that face perception is a hardwired process, while others argue that every apparently special face perception result can be explained by the massive expertise we all possess with faces, compared to other stimuli. Finally, there is some support for a critical period during infancy, in which a lack of face exposure produces irreparable face recognition deficits (see for example Le Grand et al, 2004). Unfortunately, apart from the rare children who are born with cataracts, there is no real way to address this question in humans.
Enter the monkeys, and the masked man. Sugita (2008) isolated monkeys soon after birth and raised them in a face-free environment for 6, 12 or 24 months. After this, the monkeys were exposed exclusively to either monkey or human faces for an additional month.
At various points during this time, Sugita (2008) tested the monkeys on two tasks that were originally pioneered in developmental psychology as a means of studying pre-lingual infants. In the preferential looking paradigm, two items are presented, and the time spent looking at each item in the pair is recorded. The monkeys viewed human faces, monkey faces, and objects, in various combinations. It is assumed that the monkey (or infant) prefers whichever item it looks at more. In the paired-comparison procedure, the monkey is primed with the presentation of a face, after which it views a face pair, where one of the faces is the same as the one viewed before. If the monkey looks more at the novel face, it is inferred that the monkey has recognised the other face as familiar. So the preferential looking paradigm measures preference between categories, while the paired-comparison procedure measures the ability to discriminate items within a category.
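For concreteness, here is how both looking-time measures boil down to a single ratio. This is a toy Python sketch with made-up looking times (the function name and every number are illustrative, not Sugita's actual data):

```python
def preference_index(time_a: float, time_b: float) -> float:
    """Fraction of total looking time spent on item A."""
    return time_a / (time_a + time_b)

# Preferential looking: monkey face vs human face, side by side.
pref = preference_index(time_a=7.2, time_b=4.8)   # hypothetical seconds
print(f"monkey-face preference: {pref:.2f}")      # > 0.5 => prefers monkey faces

# Paired-comparison: a novel face vs the face shown during priming.
novelty = preference_index(time_a=8.1, time_b=3.9)  # novel vs familiar
# A novelty score reliably above 0.5 is taken as evidence that the
# familiar face was recognised, i.e. within-category discrimination.
print(f"novelty preference: {novelty:.2f}")
```

The same index does double duty: computed over two categories it measures preference, computed over a novel/familiar pair it measures recognition.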
Immediately following deprivation, the monkeys showed equal preference for human and monkey faces. By contrast, a group of control monkeys who had not been deprived of face exposure showed a preference for monkey faces. This finding suggests that at the very least, the orthodox hard-wired face perception account is wrong, since the monkeys should then prefer monkey faces even without previous exposure to them.
In the paired-comparison procedure, the control monkeys could discriminate between monkey faces but not human faces. By contrast, the face-deprived monkeys could discriminate between both human and monkey faces. This suggests the possibility of perceptual narrowing (the Wikipedia article on it that I just linked is probably the worst I’ve read – if you know this stuff, please fix it!), that is, a tendency for infants to lose their ability to discriminate within categories that are not distinguished in their environment. The classic example occurs in speech sounds: infants can initially discriminate phoneme boundaries that aren’t used in their own language (boundaries as distinct to native speakers of other languages as /b/ versus /p/ is to English speakers), but this ability is lost relatively early on in the absence of exposure to those boundaries (Aslin et al, 1981). But if this is what happens, surely the face-deprived monkeys should lose their ability to discriminate faces of the non-exposed species after a month of exposure to faces of the other species?
Indeed, this is what Sugita (2008) found. When monkeys were tested after one month of exposure to either monkey or human faces, they now preferred the face type that they had been exposed to over the other face type and non-face objects. Likewise, they could now only discriminate between faces from the category they had been exposed to.
Sugita (2008) didn’t stop there. The monkeys were now placed in a general monkey population for a year, where they had plenty of exposure to both monkey and human faces. Even after a year of this, the results were essentially identical to those obtained immediately after the month of face exposure. This implies that once the monkeys had been tuned to one face type, that developmental door was shut, and no re-tuning occurred. Note that in this case, one month of exposure to one face type trumped one year of exposure to both types, which shows that as far as face recognition goes, what comes first seems to matter more than what you get the most of.
Note a little quirk in Sugita’s (2008) results – although the monkeys were face-deprived for durations ranging from 6 to 24 months, these groups did not differ significantly on any measure. In other words, however the perceptual narrowing system works for faces, it seems to be flexible about when it kicks in – it’s not a strictly maturational process with a genetically specified onset. This conflicts quite harshly with the cataract studies I mentioned above, where human infants seem to lose face processing ability quite permanently when they miss out on face exposure in their first year. One can’t help but wonder whether Sugita’s (2008) results could be replicated with cars, houses, or any other object category instead of faces, although this is veering into the old ‘are faces special’ debate… It’s possible that the perceptual narrowing observed here is a general object recognition process, unlike the (supposedly) special mechanism with which human infants learn to recognise faces particularly well.
On the applied side, Sugita (2008) suggests that his study indicates a mechanism for how the other-race effect occurs – that is, the advantage that most people display in recognising people of their own ethnicity. If you’ve only viewed faces of one ethnicity during infancy (e.g., your family), perhaps this effect has less to do with racism or living in a segregated society, and more to do with perceptual narrowing.
Sugita, Y. (2008). Face perception in monkeys reared with no exposure to faces. Proceedings of the National Academy of Sciences (USA), 105, 394-398.
Discriminating individual faces from neural activation December 29, 2007Posted by Johan in Face Perception, Neuroscience.
How do we recognise faces? The vast majority of research into face perception has attempted to answer this question by restricting investigation to a small section of the fusiform gyrus, which Kanwisher and colleagues named the Fusiform Face Area (FFA) in 1997. It is commonly proposed that the FFA handles not only the detection but also the recognition of individual faces. A recent paper by Kriegeskorte et al (2007) suggests that instead, a region in the right anterior inferotemporal cortex (aIT – ahead of and above the FFA) encodes information about different faces, while the FFA does not. In order to understand the finer points of this finding, it is necessary to explain the basic assumptions of univariate neuroimaging analysis, and how it is used to identify the FFA. Skip ahead a paragraph if this is familiar territory.
The classic fMRI or PET analysis consists of taking an experimental condition and a control condition, and asking “which areas respond significantly more to the experimental condition than the control?” The resulting activations can be said to constitute areas that are specifically implicated in the experimental condition. For example, the FFA is usually defined as the part of the fusiform gyrus that responds more to faces than to houses. Note that there is an element of inference or assumption involved in then concluding that this bit of brain is the bit that does faces, since other areas might also respond to faces without being detected in a relatively insensitive univariate whole-brain analysis. The common acceptance of this type of contrast analysis stems in part from its practical utility. For example, the FFA corresponds closely to the critical lesion site that causes prosopagnosia (an inability to recognise faces), and activation in this area can be correlated with behavioural performance at various face recognition tasks.
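For the statistically minded, a univariate contrast of this kind boils down to running a t-test at every voxel and keeping the voxels that clear a threshold. A toy Python sketch with simulated data (the numbers, region layout, and threshold are all illustrative, not taken from any actual study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 face trials and 20 house trials, 500 voxels.
# Voxels 0-9 stand in for a face-selective region ("FFA"-like).
n_trials, n_voxels = 20, 500
faces = rng.normal(0.0, 1.0, (n_trials, n_voxels))
houses = rng.normal(0.0, 1.0, (n_trials, n_voxels))
faces[:, :10] += 1.5  # extra response to faces in the first 10 voxels

# Voxelwise two-sample t statistic: mean difference over pooled standard error.
diff = faces.mean(0) - houses.mean(0)
pooled_var = (faces.var(0, ddof=1) + houses.var(0, ddof=1)) / n_trials
t = diff / np.sqrt(pooled_var)

# "Activations" are the voxels whose t value exceeds a threshold.
active = np.where(t > 3.0)[0]
print(f"{len(active)} voxels respond more to faces than to houses")
```

Note that the analysis is run independently at each voxel, which is exactly why it can miss information that is spread across a pattern of voxels, a point that becomes important below.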
In this study, contrasts were used to identify the FFA in each participant, in addition to a region in the aIT that also responded more to faces than to objects. To do this, Kriegeskorte et al (2007) used only four stimuli, as shown below.
Although contrasting faces and houses revealed the previously mentioned activations in the FFA and aIT, contrasting the two different faces produced no activations.
Kriegeskorte et al (2007) next used a type of pattern analysis, where the FFA and aIT voxels were used as input. The specifics of this type of analysis are too complex to discuss in detail here (see this review by Norman et al, 2006, which concerns a related technique, and a previous post), but essentially, this analysis uses multivariate statistics to assess whether the overall pattern of activation in an area differs significantly between conditions. If it does, it can be inferred that the area processes information about the categories. Pattern analyses are far more sensitive than traditional contrasts when it comes to differences within a region, but they achieve this sensitivity by sacrificing spatial localisation. Kriegeskorte et al (2007) used a range of pattern analyses, but their results are nicely summarised by the analysis depicted in this figure.
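To give a flavour of the logic, here is a toy Python sketch of one simple pattern analysis – a split-half correlation in the style of Haxby et al (2001), not Kriegeskorte et al’s actual method – showing how two conditions with identical mean activation can still evoke distinguishable patterns (all data simulated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy region: 100 voxels. The two faces evoke the SAME mean response
# (so a univariate contrast finds nothing), but slightly different
# spatial patterns across voxels.
n_voxels, n_trials = 100, 30
pattern_a = rng.normal(0, 0.5, n_voxels)
pattern_b = rng.normal(0, 0.5, n_voxels)
pattern_a -= pattern_a.mean()  # equal mean activation for both faces
pattern_b -= pattern_b.mean()

trials_a = pattern_a + rng.normal(0, 1.0, (n_trials, n_voxels))
trials_b = pattern_b + rng.normal(0, 1.0, (n_trials, n_voxels))

# Univariate view: average over voxels -> no difference between faces.
print("mean response, face A vs face B:", trials_a.mean(), trials_b.mean())

# Multivariate view: correlate the mean pattern from one half of the
# data with the mean pattern from the other half, within and between
# conditions.
half = n_trials // 2
train_a, test_a = trials_a[:half].mean(0), trials_a[half:].mean(0)
train_b, test_b = trials_b[:half].mean(0), trials_b[half:].mean(0)

within = np.corrcoef(train_a, test_a)[0, 1] + np.corrcoef(train_b, test_b)[0, 1]
between = np.corrcoef(train_a, test_b)[0, 1] + np.corrcoef(train_b, test_a)[0, 1]
# within > between means the region carries face-identity information.
print("within:", within, "between:", between)
```

The within-condition correlations come out far higher than the between-condition ones, even though voxel-averaged activation is indistinguishable – the multivariate signature of identity information that a contrast would miss.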
In this analysis, Kriegeskorte et al (2007) attempted to discriminate between the two faces based on an increasing number of voxels, expanding outwards from the FFA and aIT regions that were revealed by the contrast. The lines show whether the patterns evoked by the two faces differ significantly within the included voxels. Only the voxels in the right aIT respond significantly differently to the two faces, and this difference emerges early, when around 200 voxels are included. By contrast, even when 4000 voxels around the FFA are included, encompassing much of the temporal and occipital lobes, the activation there cannot discriminate between the two faces.
So to summarise, both the FFA and the aIT (among other areas) respond more to faces than to houses, but only the aIT responds differentially to specific faces. Although these results lend themselves to the conclusion that the FFA does some type of face detection while the aIT is involved in encoding the identity of faces, Kriegeskorte et al (2007) suggest that it probably isn’t that simple. Previous studies have found identity-specific activations in the FFA using other paradigms (e.g., Rotshtein et al, 2005), so Kriegeskorte et al (2007) go for the classic neuroimaging cop-out of suggesting that identity information nevertheless exists in the FFA, but at a resolution beyond that of current scanners. However, the fact that the identity effects in the aIT were detectable suggests that this area might play a larger role in this task than the FFA does, at least. Kriegeskorte et al (2007) note that prosopagnosia may be caused by lesions to the FFA region, but also by aIT lesions, and suggest that face recognition depends on interactions between (among others) these two areas.
From a more methodological standpoint, it is interesting to note that although a contrast between the two faces yielded no significant effects, differences appeared in a pattern analysis. This is a nice example of how pattern analysis may be a more sensitive measure.
The aIT has not received a great deal of attention previously as a face recognition region, so Kriegeskorte et al (2007) are probably going to face close scrutiny, as they have essentially posited that the region plays a leading role in the holy grail of face perception – the recognition of individual faces. It is interesting to note, however, that these findings do offer a means of reconciling fMRI results from humans with data from single-cell recording studies in monkeys, which have revealed identity-specific face responses primarily in anterior temporal regions. Such monkey regions correspond far better to the aIT than to the FFA, which has been something of a problem for the conventional account of the FFA as a Swiss army knife of face perception (but see Tsao et al, 2006 for evidence of a better monkey homologue of the FFA).
Really though, the most striking thing about this study is that current neuroimaging techniques enable us to discriminate between the neural representations of these two faces. When you look at the faces above, it is clear that physically, they are quite similar. It is quite inspiring to think that it is nevertheless possible to pick out these undoubtedly subtle differences in the evoked neural response pattern.
Kriegeskorte, N., Formisano, E., Sorger, B., and Goebel, R. (2007). Individual faces elicit distinct response patterns in human anterior temporal cortex. Proceedings of the National Academy of Sciences (USA), 104, 20600-20605.
Amygdala-Orbitofrontal interplay in cognitive flexibility November 29, 2007Posted by Johan in Learning, Neural Networks, Neuroscience.
This rat doesn’t get sucrose, but is probably happier than Stalnaker et al’s rats
Today’s title may be the least accessible yet, but bear with me; this is an interesting paper. Stalnaker et al (2007) investigated the neural basis of what they call cognitive flexibility – this is a very fancy term for a rat’s ability to handle a conditioning paradigm known as reversal learning. The method that Stalnaker et al used serves as a good example of the paradigm.
Rats were first trained on an odour discrimination task. Poking at one little door that is laced with odour A produces delivery of a tasty sucrose solution. Poking at another little door that is laced with odour B produces delivery of an unpleasant quinine solution (incidentally, quinine is a component of vermouth, but we’ll assume that these particular rats like their martinis very dry). The door that is associated with each odour is varied, so that the rats have to rely on the odour cues alone to learn how to get their treat. Once the rats have achieved a criterion level of accuracy at this task, the contingency reverses, so that odour B now produces a treat while odour A produces quinine. The standard finding is that the rats are slower to learn the reversal than they were to learn the original task.
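The sluggishness of reversal learning falls out of even the simplest error-driven learning model: the values acquired in the first phase must be unlearned before the new contingency can be acquired. A minimal Rescorla-Wagner sketch in Python (the function, learning rate, and criterion are illustrative choices, not Stalnaker et al’s model):

```python
# A minimal Rescorla-Wagner sketch of reversal learning.

def trials_to_criterion(v, rewards, alpha=0.2, criterion=0.8):
    """Update odour values until both estimates are close to the true outcomes.

    v       : dict of current odour values (modified in place)
    rewards : dict mapping odour -> outcome (+1 sucrose, -1 quinine)
    """
    trials = 0
    while abs(v["A"] - rewards["A"]) > 1 - criterion or \
          abs(v["B"] - rewards["B"]) > 1 - criterion:
        for odour in ("A", "B"):
            # Rescorla-Wagner delta rule: move each value towards the
            # obtained outcome in proportion to the prediction error.
            v[odour] += alpha * (rewards[odour] - v[odour])
        trials += 1
    return trials

values = {"A": 0.0, "B": 0.0}
initial = trials_to_criterion(values, {"A": +1, "B": -1})
reversal = trials_to_criterion(values, {"A": -1, "B": +1})
print(f"initial learning: {initial} trials, reversal: {reversal} trials")
# The reversal takes more trials than the initial discrimination,
# because learning starts from values of the wrong sign.
```

With these toy parameters the reversal takes more trials than the original discrimination, purely because the prediction errors at reversal start out roughly twice as large in magnitude as at the outset of training.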
Stalnaker et al were interested in investigating the role of orbitofrontal cortex (OFC) and the basolateral amygdala (ABL) in bringing about this reversal. There are two basic ideas on how this might work: the OFC might directly encode the preferred stimulus, or the OFC might play an indirect role where it facilitates changes in downstream areas, such as the ABL. So in other words, downstream areas bring about the actual behaviour, while the OFC plays more of a modulatory role in telling the downstream areas when contingencies change.
To test these notions, Stalnaker et al lesioned the OFC in one group of rats, the ABL in another group, and both the OFC and the ABL in a third group. After this, the rats learned the odour discrimination task. The three groups did not differ significantly at this point. In other words, neither area nor the combination of the two was necessary to learn the task. Next, the rats went through two serial reversals – odour A switched places with odour B, and then back again. The effect of the brain lesions was measured by the number of trials taken to learn the reversals to the same accuracy level as the initial odour task.
Rats with OFC damage were slower to learn the reversals than the other groups. However, rats with ABL lesions and rats with combined OFC and ABL lesions were not significantly impaired. So in other words, although OFC lesions in isolation cause impairments, this effect is abolished when the ABL is lesioned as well.
Stalnaker et al interpret these findings as support for an indirect role for the OFC in reversal learning. The ABL is stubborn, simply put. Without the modulatory influence of the OFC, the ABL persists in responding as though the contingency had not reversed, which produces slower reversal learning. By removing the ABL as well, this persistent influence is gone and reversal learning can occur normally. It is somewhat counter-intuitive that lesioning more of the brain helps, but there you go.
This is a nice study because it answers one question but raises a number of new ones. If the rats can carry out reversal learning normally without either the OFC or the ABL, why is this circuit even involved in the paradigm – that is, why should OFC lesions have an effect if the pathway as a whole is not needed? Also, if the ABL produces such deficient behaviour when the OFC is lesioned, why don’t lesions to the ABL alone affect behaviour? And most importantly, if behaviour is normal after combined ABL and OFC lesions, which other area must be lesioned to impair behaviour yet again? And what happens if that area is lesioned in isolation?
Enough questions to make your head spin, but the take-home message for those studying humans is that there is an entire range of complex interactions in the brain that fMRI, with its blurry temporal resolution and lack of experimental manipulation, can only hint at. We know much about functional localisation in the human brain, but the issue of how these areas connect and interact is largely uncharted territory.
Stalnaker, T.A., Franz, T.M., Singh, T., and Schoenbaum, G. (2007). Basolateral Amygdala Lesions Abolish Orbitofrontal-Dependent Reversal Impairments. Neuron, 54, 51-58.