
Learning to recognise faces: perceptual narrowing? January 11, 2008

Posted by Johan in Animals, Developmental Psychology, Face Perception, Sensation and Perception.

Blogging on Peer-Reviewed Research That image certainly piques your interest, doesn’t it? Sugita (2008) was interested in addressing one of the ancient debates in face perception: the role of early experience versus innate mechanisms. In a nutshell, some investigators hold that face perception is a hardwired process, while others argue that every apparently special face perception result can be explained by invoking the massive expertise we all possess with faces, compared to other stimuli. Finally, there is some support for a critical period during infancy, where a lack of face exposure produces irreparable face recognition deficits (see for example Le Grand et al, 2004). Unfortunately, save for the few children who are born with cataracts, there is no real way to address this question in humans.

Enter the monkeys, and the masked man. Sugita (2008) isolated monkeys soon after birth, and raised them in a face-free environment for 6, 12 or 24 months. After this, the monkeys were exposed to strictly monkey or human faces for an additional month.

At various points during this time, Sugita (2008) tested the monkeys on two tasks that were originally pioneered in developmental psychology as tools for studying pre-lingual infants. In the preferential looking paradigm, two items are presented, and the time spent looking at either item in the pair is recorded. The monkeys viewed human faces, monkey faces, and objects, in various combinations. It is assumed that the monkey (or infant) prefers whichever item it looks at longer. In the paired-comparison procedure, the monkey is primed with the presentation of a face, after which it views a face pair, where one of the faces is the same as that viewed before. If the monkey views the novel face more, it is inferred that the monkey has recognised the other face as familiar. So the preferential looking paradigm measures preference between categories, while the paired-comparison procedure measures the ability to discriminate items within a category.
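For concreteness, the inference logic behind the two paradigms can be sketched in a few lines of code. The looking times below are entirely made up for illustration, not Sugita's data:

```python
def preferential_looking(look_a_ms, look_b_ms):
    """Infer category preference: longer looking is taken to mean preference."""
    if look_a_ms == look_b_ms:
        return None  # no preference either way
    return "A" if look_a_ms > look_b_ms else "B"

def paired_comparison(look_familiar_ms, look_novel_ms):
    """Infer recognition: longer looking at the novel face implies the
    primed face was recognised as familiar."""
    return look_novel_ms > look_familiar_ms

# Hypothetical looking times in milliseconds:
print(preferential_looking(4200, 2100))  # longer looking at A -> inferred preference for A
print(paired_comparison(1800, 3200))     # novelty preference -> inferred recognition
```

Note that both measures rest on the same assumption: looking time is an index of interest, so a systematic looking bias licenses an inference about what the animal perceives.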

Immediately following deprivation, the monkeys showed equal preference for human and monkey faces. By contrast, a group of control monkeys who had not been deprived of face exposure showed a preference for monkey faces. This finding suggests that at the very least, the orthodox hard-wired face perception account is wrong, since the monkeys should then prefer monkey faces even without previous exposure to them.

In the paired-comparison procedure, the control monkeys could discriminate between monkey faces but not human faces. By contrast, the face-deprived monkeys could discriminate between both human and monkey faces. This suggests the possibility of perceptual narrowing (the Wikipedia article on it that I just linked is probably the worst I’ve read – if you know this stuff, please fix it!), that is, a tendency for infants to lose their ability to discriminate between categories which are not distinguished in their environment. The classic example occurs in speech sounds, where infants can initially discriminate phoneme boundaries that aren’t used in their own language (e.g., a voicing contrast like that between /bah/ and /pah/ can at first be discriminated even by infants whose language does not place a phoneme boundary there), although this ability is lost relatively early on in the absence of exposure to those boundaries (Aslin et al, 1981). But if this is what happens, surely the face-deprived monkeys should lose their ability to discriminate non-exposed faces after exposure to faces of the other species?

Indeed, this is what Sugita (2008) found. When monkeys were tested after one month of exposure to either monkey or human faces, they now preferred the face type that they had been exposed to over the other face type and non-face objects. Likewise, they could now only discriminate between faces from the category they had been exposed to.

Sugita (2008) didn’t stop there. The monkeys were now placed in a general monkey population for a year, where they had plenty of exposure to both monkey and human faces. Even after a year of this, the results were essentially identical to those obtained immediately after the month of face experience. This implies that once the monkeys had been tuned to one face type, that developmental door was shut, and no re-tuning occurred. Note that in this case, one month of exposure to one type trumped one year of exposure to both types, which shows that as far as face recognition goes, what comes first seems to matter more than what you get the most of.

Note a little quirk in Sugita’s (2008) results – although the monkeys were face-deprived for durations ranging from 6 to 24 months, these groups did not differ significantly on any measure. In other words, however the perceptual narrowing system for faces works, it seems to be flexible about when it kicks in – it is not a strictly maturational process that begins at a genetically specified time. This conflicts sharply with the cataract studies I discussed above, where human infants seem to lose face processing ability quite permanently when they miss out on face exposure in their first year. One can’t help but wonder whether Sugita’s (2008) results could be replicated with cars, houses, or any other object category instead of faces, although this is veering into the old ‘are faces special’ debate… It’s possible that the perceptual narrowing observed here is a general object recognition process, unlike the (supposedly) special mechanism with which human infants learn to recognise faces particularly well.

On the applied side, Sugita (2008) suggests that his study indicates a mechanism for how the other-race effect occurs – that is, the advantage that most people display in recognising people of their own ethnicity. If you’ve only viewed faces of one ethnicity during infancy (e.g., your family), perhaps this effect has less to do with racism or living in a segregated society, and more to do with perceptual narrowing.

Sugita, Y. (2008). Face perception in monkeys reared with no exposure to faces. Proceedings of the National Academy of Sciences (USA), 105, 394-398.


Discriminating individual faces from neural activation December 29, 2007

Posted by Johan in Face Perception, Neuroscience.

How do we recognise faces? The vast majority of research into face perception has attempted to answer this question by restricting investigation to a small section of the fusiform gyrus, which Kanwisher and colleagues named the Fusiform Face Area (FFA) in 1997. It is commonly proposed that the FFA handles not only the detection but also the recognition of individual faces. A recent paper by Kriegeskorte et al (2007) suggests that instead, a region in the right anterior inferotemporal cortex (aIT – ahead of and above the FFA) encodes information that distinguishes different faces, while the FFA does not. In order to understand the finer points of this finding, it is necessary to explain the basic assumptions of univariate neuroimaging analysis, and how it is used to identify the FFA. Skip ahead a paragraph if this is familiar territory.

The classic fMRI or PET analysis consists of taking an experimental condition and a control condition, and asking “which areas respond significantly more to the experimental condition than the control?” The resulting activations can be said to constitute areas that are specifically implicated in the experimental condition. For example, the FFA is usually defined as the part of the fusiform gyrus that responds more to faces than to houses. Note that there is an element of inference or assumption involved in then concluding that this bit of brain is the bit that does faces, since other areas might also respond to faces without being detected in a relatively insensitive univariate whole-brain analysis. The common acceptance of this type of contrast analysis stems in part from its practical utility. For example, the FFA corresponds closely to the critical lesion site that causes prosopagnosia (an inability to recognise faces), and activation in this area can be correlated with behavioural performance at various face recognition tasks.
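To make the contrast logic concrete, here is a toy simulation of a voxelwise faces-versus-houses contrast. This is my own sketch with fabricated data – it has nothing to do with the authors' actual pipeline – in which a handful of voxels are given a genuine face response, and a per-voxel t test with a crude uncorrected threshold picks them out:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
n_trials, n_voxels = 40, 30
# Voxels 0-4 respond more strongly to faces: a toy "FFA".
face_effect = [1.5 if v < 5 else 0.0 for v in range(n_voxels)]

faces = [[random.gauss(face_effect[v], 1.0) for _ in range(n_trials)]
         for v in range(n_voxels)]
houses = [[random.gauss(0.0, 1.0) for _ in range(n_trials)]
          for v in range(n_voxels)]

def t_stat(a, b):
    """Two-sample t statistic (equal group sizes)."""
    se = ((stdev(a) ** 2 + stdev(b) ** 2) / len(a)) ** 0.5
    return (mean(a) - mean(b)) / se

# Crude z-approximation threshold at p < .001, uncorrected.
threshold = NormalDist().inv_cdf(0.999)
active = [v for v in range(n_voxels) if t_stat(faces[v], houses[v]) > threshold]
print(active)  # largely the voxels that were given a genuine face response
```

The `active` list is the simulated analogue of the thresholded statistical map: voxels where the experimental condition reliably exceeds the control.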

In this study, contrasts were used to identify the FFA in each participant, in addition to a region in the aIT that also responded more to faces than to objects. To do this, Kriegeskorte et al (2007) used only four stimuli, as shown below.

Although contrasting faces and houses revealed the previously mentioned activations in the FFA and aIT, contrasting the two different faces produced no activations.

Kriegeskorte et al (2007) next used a type of pattern analysis, where the FFA and aIT voxels were used as input. The specifics of this type of analysis are too complex to discuss in detail here (see this review by Norman et al, 2006 which concerns a related technique, and a previous post), but essentially, this analysis uses multivariate statistics to assess whether the overall pattern of activation in an area differs significantly between conditions. If it does, it can be inferred that the area processes information about the categories. Pattern analyses are far more sensitive than traditional contrasts when it comes to differences within a region, but they achieve this sensitivity by sacrificing spatial localisation. Kriegeskorte et al (2007) used a range of pattern analyses, but their results are nicely summarised by the analysis depicted in this figure.

In this analysis, Kriegeskorte et al (2007) attempted to discriminate between the two faces based on an increasing number of voxels, expanding outwards from the FFA and aIT regions that were revealed by the contrast. The y-axis shows whether the activation patterns evoked by the two faces differ significantly within the included voxels. Only the voxels around the right aIT respond significantly differently to the two faces, and this difference becomes significant early, when around 200 voxels are included. By contrast, even when 4000 voxels around the FFA are included, encompassing much of the temporal and occipital lobes, the activation there cannot discriminate between the two faces.
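The underlying intuition – a pattern across voxels can carry information even when the regional mean does not distinguish the conditions – can be illustrated with a toy simulation. This is my own sketch with fabricated data and a simple nearest-centroid classifier, not the authors' analysis:

```python
import random

random.seed(1)
n_voxels, n_train, n_test = 60, 25, 25
pattern_a = [random.gauss(0, 1) for _ in range(n_voxels)]
pattern_b = [-x for x in pattern_a]  # same mean activation, opposite fine-grained pattern

def trial(pattern, noise=2.0):
    """One noisy measurement of the voxel pattern."""
    return [x + random.gauss(0, noise) for x in pattern]

train_a = [trial(pattern_a) for _ in range(n_train)]
train_b = [trial(pattern_b) for _ in range(n_train)]
centroid_a = [sum(t[v] for t in train_a) / n_train for v in range(n_voxels)]
centroid_b = [sum(t[v] for t in train_b) / n_train for v in range(n_voxels)]

def classify(t):
    """Assign a trial to whichever training centroid it lies closer to."""
    da = sum((x - c) ** 2 for x, c in zip(t, centroid_a))
    db = sum((x - c) ** 2 for x, c in zip(t, centroid_b))
    return "A" if da < db else "B"

hits = sum(classify(trial(pattern_a)) == "A" for _ in range(n_test))
hits += sum(classify(trial(pattern_b)) == "B" for _ in range(n_test))
accuracy = hits / (2 * n_test)
print(accuracy)  # far above the 0.5 chance level
```

A univariate contrast averaging over these voxels would find nothing (both conditions have a mean of zero by construction), yet the multivariate classifier separates them easily – which is exactly the kind of sensitivity gap the paper exploits.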

So to summarise, both the FFA and the aIT (among other areas) respond more to faces than to houses, but only the aIT responds differentially to specific faces. Although these results lend themselves to the conclusion that the FFA does some type of face detection while the aIT is involved in encoding the identity of faces, Kriegeskorte et al (2007) suggest that it probably isn’t that simple. Previous studies have found identity-specific activations in the FFA using other paradigms (e.g., Rotshtein et al, 2005), so Kriegeskorte et al (2007) go for the classic neuroimaging cop-out of suggesting that identity information nevertheless exists in the FFA, but at a resolution beyond that of current scanners. Still, the fact that the identity effects in the aIT were detectable suggests that this area might play a larger role in this task than the FFA does, at least. Kriegeskorte et al (2007) note that prosopagnosia may be caused by lesions to the FFA region, but also by aIT lesions, and suggest that face recognition depends on interactions between (among others) these two areas.

From a more methodological standpoint, it is interesting to note that although a contrast between the two faces yielded no significant effects, differences appeared in a pattern analysis. This is a nice example of how pattern analysis may be a more sensitive measure.

The aIT has not received a great deal of attention previously as a face recognition region, so Kriegeskorte et al (2007) are probably going to face close scrutiny, as they have essentially posited that the region plays a leading role in the holy grail of face perception – the recognition of individual faces. It is interesting to note, however, that these findings do offer a means of reconciling fMRI results from humans with data from single-cell recording studies in monkeys, which have revealed identity-specific face responses primarily in anterior temporal regions. Such monkey regions correspond far better to the aIT than the FFA, which has been something of a problem for the conventional account of the FFA as a Swiss army knife of face perception (but see Tsao et al, 2006 for evidence of a better monkey homologue of the FFA).

Really though, the most striking thing about this study is that current neuroimaging techniques enable us to discriminate between the neural representations of these two faces. When you look at the faces above, it is clear that physically, they are quite similar. It is quite inspiring to think that it is nevertheless possible to pick out these undoubtedly subtle differences in the evoked neural response pattern.

Kriegeskorte, N., Formisano, E., Sorger, B., and Goebel, R. (2007). Individual faces elicit distinct response patterns in human anterior temporal cortex. Proceedings of the National Academy of Sciences (USA), 104, 20600-20605.

Encephalon #37 is out December 3, 2007

Posted by Johan in Links, Neuroscience.

The latest issue of neuroscience blogging carnival Encephalon has just been posted by Bora over at Blog Around The Clock. Lots of interesting stories in a concise write-up.

Amygdala-Orbitofrontal interplay in cognitive flexibility November 29, 2007

Posted by Johan in Learning, Neural Networks, Neuroscience.

This rat doesn’t get sucrose, but is probably happier than Stalnaker et al’s rats

Today’s title may be the least accessible yet, but bear with me; this is an interesting paper. Stalnaker et al (2007) investigated the neural basis of what they call cognitive flexibility – a very fancy term for a rat’s ability to handle a conditioning paradigm known as reversal learning. The method that Stalnaker et al used serves as a good example of the paradigm.

Rats were first trained on an odour discrimination task. Poking at one little door that is laced with odour A produces delivery of a tasty sucrose solution. Poking at another little door that is laced with odour B produces delivery of an unpleasant quinine solution (incidentally, quinine is a component in Vermouth, but we’ll assume that these particular rats like their martinis very dry). The actual door that is associated with each odour is varied, so that the rats have to rely on the odour cues alone to learn how to get their treat. Once the rats have achieved a criterion level of accuracy at this task, the contingency reverses, so that odour B now produces a treat while odour A produces quinine. The standard finding is that the rats will be slower to learn the reversal than they were to learn the original task.
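The slower reversal falls out of even the simplest error-driven learning model: the previously learned associations must be unlearned before the reversed contingency can reach criterion. Here is a minimal Rescorla-Wagner-style sketch – my own illustration, not a model the authors fit – using arbitrary parameter values:

```python
def trials_to_criterion(v_a, v_b, alpha=0.2, criterion=0.8):
    """Learn 'odour A -> sucrose (+1), odour B -> quinine (-1)' by simple
    error-driven updating; count trials until v_a - v_b exceeds criterion."""
    trials = 0
    while v_a - v_b < criterion:
        v_a += alpha * (1.0 - v_a)   # A paired with sucrose
        v_b += alpha * (-1.0 - v_b)  # B paired with quinine
        trials += 1
    return trials, v_a, v_b

# Initial acquisition: both associative values start at zero.
acq, v_a, v_b = trials_to_criterion(0.0, 0.0)

# Reversal: the roles of A and B swap, so learning starts from the
# previously acquired (now wrong) values.
rev, _, _ = trials_to_criterion(v_b, v_a)

print(acq, rev)  # reversal takes more trials than acquisition
```

The point is simply that starting from wrong values costs extra trials; the interesting empirical question, which this toy model cannot address, is which brain circuits that extra cost depends on.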

Stalnaker et al were interested in investigating the role of orbitofrontal cortex (OFC) and the basolateral amygdala (ABL) in bringing about this reversal. There are two basic ideas on how this might work: the OFC might directly encode the preferred stimulus, or the OFC might play an indirect role where it facilitates changes in downstream areas, such as the ABL. So in other words, downstream areas bring about the actual behaviour, while the OFC plays more of a modulatory role in telling the downstream areas when contingencies change.

To test these notions, Stalnaker et al lesioned the OFC in one group of rats, the ABL in another group, and both the OFC and the ABL in a third group. After this, the rats learned the odour discrimination task. The three groups did not differ significantly at this point. In other words, neither area nor the combination of them was necessary to learn the task. Next, the rats went through two serial reversals – odour A switched places with odour B, and then back again. The effect of the brain lesions was measured by the number of trials taken to learn the reversals to the same accuracy level as the initial odour task.

Rats with OFC damage were slower to learn the reversals than the other groups. However, rats with ABL lesions and rats with combined OFC and ABL lesions were not significantly impaired. In other words, although OFC lesions in isolation cause impairments, this effect is abolished when the ABL is lesioned as well.

Stalnaker et al interpret these findings as support for an indirect role for the OFC in reversal learning. The ABL is stubborn, simply put. Without the modulatory influence of the OFC, the ABL persists in responding as though the contingency had not reversed, which produces slower reversal learning. By removing the ABL as well, this persistent influence is gone and reversal learning can occur normally. It is somewhat counter-intuitive that lesioning more of the brain helps, but there you go.

This is a nice study because it answers one question, but asks a number of new questions. If the rats can carry out reversal learning normally without either the OFC or the ABL, why is this circuit even involved in the paradigm – that is, why should OFC lesions have an effect, if the pathway as a whole is not needed? Also, if the ABL produces such deficient behaviour when the OFC is lesioned, why don’t lesions to the ABL alone affect behaviour? And most importantly, if behaviour is normal after ABL and OFC lesions, which other area must be lesioned to impair behaviour yet again? And what happens if that area is lesioned in isolation?

Enough questions to make your head spin, but the take-home message for those studying humans is that there is an entire range of complex interactions in the brain that fMRI, with its blurry temporal resolution and lack of experimental manipulation, can only hint at. We know much about functional localisation in the human brain, but the issue of how these areas connect and interact is largely uncharted territory.

Stalnaker, T.A., Franz, T.M., Singh, T., and Schoenbaum, G. (2007). Basolateral Amygdala Lesions Abolish Orbitofrontal-Dependent Reversal Impairments. Neuron, 54, 51-58.

Evidence for shallow voters, or mere exposure? November 15, 2007

Posted by Johan in Applied, Face Perception, Social Psychology.

Picture by Brandt Luke Zorn, Wikimedia Commons

Iacoboni has gotten in trouble recently for some bizarre, non-peer-reviewed and much-publicised studies investigating voters’ neural reactions to the different presidential candidates. Vaughan noted that it is a little surprising that Iacoboni, who has done some fantastic work, would put his name on such weak research. I couldn’t help but be reminded of a post over at Dr Petra Boynton’s blog on the shameless proposals she has received from marketing companies. Essentially, the business model is that you as a researcher either gather some junk data yourself for handsome compensation, or alternatively, you simply sign off on a ready-made article. It is a credibility-for-cash transaction.

Unfortunately, such spin doctor stories might get in the way of real research on voter behaviour. In the latest issue of PNAS, Ballew and Todorov (2007) report that election outcomes can be predicted from fast face judgements in participants who know neither of the candidates. In other words, to some extent voting behaviour is influenced by quick judgments of appearance – maybe the guy with the better hair really does win. Although this study is very interesting, there are a few shortcomings that will be discussed at the end of this post.

Ballew and Todorov gathered pictures of the winner and the runner-up from 89 gubernatorial races. The pairs were shown to participants, who picked the candidate that seemed more competent (other measures were also used, but I’ll spare you the details). In order to avoid familiarity effects, Ballew and Todorov also included a check for whether the participants recognised any of the candidates. Trials in which the participant did recognise a candidate were excluded. The paper contains three experiments, of which I will cover the first two.

In experiment 1, participants were specifically instructed to base their decision on their gut feeling of which candidate would be more competent. The stimuli were presented for 100 ms, 250 ms, or until the participants responded.

Across all conditions, the competence judgements were significantly above chance (50 percent) in predicting the elected candidate. The three conditions did not differ significantly amongst themselves. Looking across all races, the participants’ averaged “vote” achieved an accuracy of 64 percent in predicting the election outcome. This may seem like a trivial increase over chance, but keep in mind that the participants based this decision on only a very brief exposure to an unfamiliar face. The fact that they could predict the winner suggests that voter behaviour is to some extent determined by the same type of fast, automatic evaluations.
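As a back-of-the-envelope check (my own calculation, not an analysis reported in the paper), 64 percent correct over 89 races really is unlikely under coin-flip guessing. An exact one-sided binomial tail makes the point:

```python
from math import comb

n = 89               # gubernatorial races
k = round(0.64 * n)  # about 57 races called correctly

# P(X >= k) for X ~ Binomial(n, 0.5): exact tail probability under chance.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(p_value)  # well below the conventional .05 level
```

So even though 64 percent sounds close to 50, with 89 races the aggregate "vote" is a statistically reliable predictor.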

In experiment two, Ballew and Todorov sought to investigate whether this effect could be modulated by the instructions that the participants received. Since Ballew and Todorov are advocating the notion that these judgments are automatic and fast, it becomes important to show that participants gain nothing when they have more time to plan their response. Thus, one group was instructed to deliberate carefully over their decision, and were given no time limits for viewing or responding. A response deadline group viewed the stimulus until they responded, which they had to do within 2 seconds. Finally, the 250 ms condition from experiment 1 was replicated for comparison.

In addition to this, Ballew and Todorov restricted the candidate photos to pairs in which the candidates shared the same gender and ethnicity. This was done because results in experiment 1 indicated that predictions were stronger for such pairs.

As in experiment 1, participants in all conditions were significantly more likely to pick a winning candidate. However, when investigating how each group’s “vote” predicted the election outcome, the deliberation group was not significantly above chance, while the two short-exposure non-deliberation groups were above chance, achieving an average accuracy of 70.9 percent between the two. In other words, careful deliberation and slow responding actually hindered performance.

I think these results are nice, since they offer an explanation for why candidates are so well-groomed (particularly the winners), even though no voter would ever admit to basing their choice on a candidate’s appearance. However, I see two issues with this research. First, although Ballew and Todorov asked their participants to rate competence, was this really what the participants were responding to? Given the fast processing that was necessary in the conditions where the participants performed well, it is perhaps unlikely that they were able to incorporate the instructions. Ballew and Todorov compared the ‘gut feeling’ instructions to a condition where participants were asked to deliberate, but unfortunately they confounded the instructions variable by also giving the deliberation group unlimited viewing time, so the groups differed in exposure duration as well as instructions. It would also have been nice to see a control condition where participants indicated which face was more attractive rather than more competent, to show that participants were responding to something more abstract than attractiveness.

The second problem is more fundamental. Ballew and Todorov used participants from the US who viewed US gubernatorial candidates. In other words, it is likely that participants had been exposed to some of the candidates beforehand. We know from a phenomenon called the mere exposure effect that we tend to like things that we know better. It is not unlikely that winning candidates received more media exposure, so the participants may simply have responded to their increased familiarity with the winning candidate.

Ballew and Todorov tried to control for this by removing trials where the participants reported that they recognised the candidate, but this may be insufficient. Research on the mere exposure effect shows that even subliminal exposure to an object can increase self-rated liking for it. So even if the participants didn’t recognise the face, they may still have been exposed to it, and this may have biased their ratings. You might also think that winning candidates may have gained more exposure simply by acting as governor following the election. However, this account can be ruled out by the third experiment, which I haven’t reported here. Essentially, Ballew and Todorov replicated their findings with voters before an election.

To rule out mere exposure effects more conclusively, Ballew and Todorov would have done well to use candidates from local elections in other countries, where any kind of prior exposure would be more unlikely. You can’t help but feel that in using US voters and US gubernatorial candidates, Ballew and Todorov are sacrificing accuracy of measurement for face validity and impact. It is quite powerful to show that US voters respond this way to US candidates – it drives home the point that this is an effect that likely operates outside the lab too. That being said, I’m not sure if this is a reasonable trade-off to make.

Finally, it’s worth noting that even if Ballew and Todorov’s results really do measure mere exposure (we would need to carry out more research to confirm that), that doesn’t render the findings invalid. It merely means that the mechanism that brings about the behaviour isn’t fast, automatic judgment of facial features, but fast, unconscious biasing based on prior exposure.

Ballew, C.C., and Todorov, A. (2007). Predicting political elections from rapid and unreflective face judgments. Proceedings of the National Academy of Sciences (USA), 104, 17948-17953.