
Learning to recognise faces: perceptual narrowing? January 11, 2008

Posted by Johan in Animals, Developmental Psychology, Face Perception, Sensation and Perception.

That image certainly piques your interest, doesn’t it? Sugita (2008) set out to address one of the ancient debates in face perception: the role of early experience versus innate mechanisms. In a nutshell, some investigators hold that face perception is a hardwired process, while others argue that every apparently special face perception result can be explained by invoking the massive expertise we all possess with faces compared to other stimuli. Finally, there is some support for a critical period during infancy, in which a lack of face exposure produces irreparable face recognition deficits (see for example Le Grand et al., 2004). Unfortunately, apart from the rare children who are born with cataracts, there is no real way to address this question in humans.

Enter the monkeys, and the masked man. Sugita (2008) isolated monkeys soon after birth and raised them in a face-free environment for 6, 12 or 24 months. After this, the monkeys were exposed exclusively to either monkey or human faces for an additional month.

At various points during this time, Sugita (2008) tested the monkeys on two tasks that were originally pioneered in developmental psychology as a means of studying preverbal infants. In the preferential looking paradigm, two items are presented, and the time spent looking at each item in the pair is recorded. The monkeys viewed human faces, monkey faces, and objects, in various combinations. It is assumed that the monkey (or infant) prefers whichever item it looks at more. In the paired-comparison procedure, the monkey is first familiarised with a face, after which it views a face pair in which one of the faces is the same as the one viewed before. If the monkey looks more at the novel face, it is inferred that the monkey has recognised the other face as familiar. So the preferential looking paradigm measures preference between categories, while the paired-comparison procedure measures the ability to discriminate items within a category.
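
For concreteness, here is a minimal sketch of how looking times translate into these two measures. The function names and all the numbers are hypothetical, purely for illustration; Sugita (2008) of course used proper trial structures and statistics.

```python
# Hypothetical scoring of looking times; illustration only, not Sugita's code.

def preference_index(time_on_a: float, time_on_b: float) -> float:
    """Preferential looking: fraction of total looking time spent on item A."""
    return time_on_a / (time_on_a + time_on_b)

def novelty_preference(time_on_novel: float, time_on_familiar: float) -> float:
    """Paired comparison: fraction of looking time on the novel face.
    Values reliably above 0.5 imply the previously seen face was recognised."""
    return time_on_novel / (time_on_novel + time_on_familiar)

# E.g., a control monkey looks 6 s at a monkey face and 4 s at a human face:
print(preference_index(6.0, 4.0))    # 0.6 -> preference for monkey faces
print(novelty_preference(7.0, 3.0))  # 0.7 -> the familiar face was recognised
```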

Immediately following deprivation, the monkeys showed equal preference for human and monkey faces. By contrast, a group of control monkeys who had not been deprived of face exposure showed a preference for monkey faces. This finding suggests that at the very least, the orthodox hard-wired face perception account is wrong, since the monkeys should then prefer monkey faces even without previous exposure to them.

In the paired-comparison procedure, the control monkeys could discriminate between monkey faces but not human faces. By contrast, the face-deprived monkeys could discriminate between both human and monkey faces. This suggests the possibility of perceptual narrowing (the Wikipedia article on it that I just linked is probably the worst I’ve read – if you know this stuff, please fix it!), that is, a tendency for infants to lose the ability to discriminate within categories that are not distinguished in their environment. The classic example occurs in speech sounds: infants can initially discriminate phoneme boundaries that aren’t used in their own language (e.g., the English /ba/–/pa/ contrast, for infants whose native language doesn’t distinguish it), but this ability is lost relatively early on in the absence of exposure to those boundaries (Aslin et al., 1981). But if perceptual narrowing is what is happening here, surely the face-deprived monkeys should lose their ability to discriminate faces of the non-exposed species after exposure to faces of the other?

Indeed, this is what Sugita (2008) found. When monkeys were tested after one month of exposure to either monkey or human faces, they now preferred the face type that they had been exposed to over the other face type and non-face objects. Likewise, they could now only discriminate between faces from the category they had been exposed to.

Sugita (2008) didn’t stop there. The monkeys were now placed in a general monkey population for a year, where they had plenty of exposure to both monkey and human faces. Even after a year of this, the results were essentially identical to those obtained immediately after the month of face exposure. This implies that once the monkeys had been tuned to one face type, that developmental door was shut, and no re-tuning occurred. Note that in this case, one month of exposure to one type trumped one year of exposure to both types, which suggests that as far as face recognition goes, what comes first matters more than what you get the most of.

Note a little quirk in Sugita’s (2008) results – although the monkeys were face-deprived for durations ranging from 6 to 24 months, these groups did not differ significantly on any measure. In other words, however the perceptual narrowing system works for faces, it seems to be flexible about when it kicks in – it is not a strictly maturational process triggered at a genetically specified time. This conflicts quite harshly with the cataract studies I mentioned above, where human infants seem to lose face processing ability quite permanently when they miss out on face exposure in their first year. One can’t help but wonder whether Sugita’s (2008) results could be replicated with cars, houses, or any other object category instead of faces, although this is veering into the old ‘are faces special’ debate… It’s possible that the perceptual narrowing observed here is a general object recognition process, unlike the (supposedly) special mechanism with which human infants learn to recognise faces particularly well.

On the applied side, Sugita (2008) suggests that his study indicates a mechanism for how the other-race effect occurs – that is, the advantage that most people display in recognising people of their own ethnicity. If you’ve only viewed faces of one ethnicity during infancy (e.g., your family), perhaps this effect has less to do with racism or living in a segregated society, and more to do with perceptual narrowing.

References
Sugita, Y. (2008). Face perception in monkeys reared with no exposure to faces. Proceedings of the National Academy of Sciences (USA), 105, 394-398.

What do you know, additives really do cause hyperactivity September 29, 2007

Posted by Johan in Abnormal Psychology, Developmental Psychology, Psychopharmacology.

This post is about a very different E211.

A few months back, the menu of a local Chinese takeaway caught my eye. Apart from the lengthy questionnaire, which customers could complete to receive £2 off (a pretty smart way of gathering customer data for a non-chain takeaway), the menu also made numerous claims that all products were absolutely free of additives, including the ubiquitous monosodium glutamate (MSG) and colourings. This is a good thing, the menu claimed, because additives cause ADHD in children.

My initial reaction was to silently promise myself never to order from that takeaway, just as I wouldn’t buy my aspirin in a pharmacy that sells magnet bracelets (although this is a hard rule to follow in the UK, where homeopaths are funded by the NHS), or books from the Christian Science Reading Room. However, it turns out these guys weren’t far off the mark, as a recent study in The Lancet shows (for the record, this is apparently by no means the first study to report such an effect).

McCann et al (2007) recruited two groups of kids (ages 3 and 8–9), who received two additive cocktails and a placebo in different sequences, all disguised in juice. While the exact makeup of the mixes varied, both featured sodium benzoate (aka E211). For reference, the food colouring content of one of these mixes was about equivalent to that in two 56-gram packets of sweets for the 3-year-olds, so the doses were not far outside what a kid might consume on a daily basis.

Using a range of behavioural and rating measures, McCann et al were able to show that, on the whole, one of the mixes was associated with increased hyperactive behaviour in the 3-year-olds, while both mixes were associated with increased hyperactive behaviour in the 8–9-year-olds. So keeping your kids away from food colouring may not be such a bad idea after all.

I think this is a beautiful finding, because it’s just the sort of result that I would dismiss as spurious had it been obtained in an association study, e.g., “hyperactive kids consume more additives than non-hyper kids” (a topic I touched upon recently). It is quite easy to suppose that, for instance, hyperactive kids like sweet, sugary foods with lots of additives more than other kids do, but apparently that isn’t the whole story. This is a prime example of the power of the randomised, double-blind, controlled trial in ruling out alternative accounts.
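
To see why randomisation matters so much here, consider a toy simulation (entirely my own, nothing to do with the study’s data) in which additives have no true effect at all, but hyperactive kids simply seek out more of them. The observational correlation looks alarming; random assignment makes it vanish.

```python
# Toy simulation: a spurious observational association that an RCT removes.
# All numbers are made up for illustration; additives have NO effect here.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
hyperactivity = rng.normal(0, 1, n)   # underlying trait, unaffected by diet

# Observational world: intake is driven partly by the trait itself
intake_observed = 0.5 * hyperactivity + rng.normal(0, 1, n)
print(np.corrcoef(intake_observed, hyperactivity)[0, 1])   # ~0.45, looks causal

# Randomised trial: intake is assigned by coin flip, independent of the trait
intake_randomised = rng.integers(0, 2, n).astype(float)
print(np.corrcoef(intake_randomised, hyperactivity)[0, 1]) # ~0, confound gone
```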

So either the Chinese takeaway is lucky enough that a belief they held for the wrong reason happens to be true, or someone on staff reads medical journals. I know where to get my Sichuan chicken next time, anyhow.

References
McCann, D., Barrett, A., Cooper, A., Crumpler, D., Dalen, L., Grimshaw, K., Kitchin, E., Lok, K., Porteous, L., Prince, E., Sonuga-Barke, E., Warner, J.O., & Stevenson, J. (in press). Food additives and hyperactive behaviour in 3-year-old and 8/9-year-old children in the community: a randomised, double-blinded, placebo-controlled trial. The Lancet.

Detecting genetic disorders with 3D face scans September 16, 2007

Posted by Johan in Abnormal Psychology, AI, Applied, Behavioural Genetics, Developmental Psychology, Face Perception.

Following on from last week’s post on smile-measuring software, The Scotsman (via Gizmodo) reports on the work by Hammond and colleagues at UCL, who are developing 3D face scans as a quick, inexpensive alternative to genetic testing. This is not as crazy as it sounds at first, since it is known that in a number of congenital conditions, the hallmark behavioural, physiological or cognitive deficits are also (conveniently) accompanied by characteristic facial appearances. The classic example of this is Down syndrome, which you need no software to recognise. More examples appear in the figure above, where you can compare the characteristic appearances of various conditions to the unaffected face in the middle.

Hammond’s software can be used to identify 30 congenital conditions, ranging from Williams syndrome (a sure topic of a future post) to autism, according to The Scotsman. I know of no facial characteristics of autism, so I would take that part of the story with a grain of salt. The system claims an accuracy rate of over 90 percent, which is not conclusive, but certainly good enough to inform a decision to carry out the genetic tests that are. The UCL press release gives some more information about how the software works:

The new method compares a child’s face to similarly aged groups of individuals with known conditions and selects which condition looks the most similar. In order to do this, collections of 3D face images of children and adults with the same genetic condition had to be gathered, as well as controls or individuals with no known genetic condition.
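
In effect, that description amounts to a nearest-centroid classifier over face-shape measurements. Here is a hedged sketch of that logic; the landmark representation and the Euclidean distance metric are my assumptions for illustration, and Hammond’s actual dense surface models are considerably more sophisticated.

```python
# A guess at the comparison logic described above, not Hammond's implementation.
import numpy as np

def classify_face(face: np.ndarray, centroids: dict) -> str:
    """Return the condition whose age-matched mean face is closest to `face`.

    face      -- flattened (x, y, z) landmark coordinates for one child
    centroids -- maps condition name -> mean landmark vector for that group
    """
    return min(centroids, key=lambda cond: np.linalg.norm(face - centroids[cond]))

# Hypothetical usage with 3 landmarks (9 coordinates) per face:
centroids = {
    "unaffected": np.zeros(9),
    "Williams":   np.full(9, 0.3),
}
print(classify_face(np.full(9, 0.25), centroids))  # -> "Williams"
```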

It really is too bad that the software uses 3D images – those cameras are neither cheap nor ubiquitous, which somewhat defeats the point of using this software as an affordable alternative to (or initial screening for) genetic testing. I can’t help but wonder whether it would be possible to achieve similar accuracy using normal portraits. If you can tell how much someone is smiling in a photo, you should be able to pick up on that extra chromosome…

Big brother knows best? Maybe not June 24, 2007

Posted by Johan in Behavioural Genetics, Developmental Psychology, Rants.

As a big brother, it is tempting to accept the conclusions that Kristensen and Bjerkedal (2007) draw in their recent article in Science. According to these researchers, IQ is associated with birth order, more specifically social birth order. This measure was created by looking at families where the oldest sibling had died, thus leaving what was biologically a middle child as the “social big brother” (note that the IQ data come from army conscripts, so all the tested siblings were male). Kristensen and Bjerkedal found that even in these families, the elder surviving sibling tended to have a slightly higher IQ than the younger sibling, as the figure below shows.

Note that differing ages is not a factor here, since all siblings were tested at the same age. Kristensen and Bjerkedal argue that this supports a social interpretation of the IQ difference in terms of the family environment, rather than a biological account based on the notion that the first-born might have experienced a better pre-natal environment.

This story has made the rounds both in the news and in blogs, and generally there is surprisingly little criticism. I think Kristensen and Bjerkedal fail to consider an alternative explanation for their results.

Consider the factors that go into deciding whether or not to have another child. It is likely that you will consider your experiences with the child or children you already have. Parents who are less satisfied with their current child or children are less likely to have another, and this presumably works the same way when deciding whether to have child number two, three, and so on.

Note that I’m assuming here that parents pick up on a child’s IQ, and that this trait expresses itself relatively early on, before the parents decide whether to have another child. If so, families with less smart first-borns are underrepresented among the larger families in this analysis, since those parents went on to have more kids less often.

But here’s the catch: each time you procreate, your chance of hitting the jackpot (i.e., all Smart Kids) decreases. This follows from basic probability: if the chance of having one Smart Kid with your genes is x, then since x < 1, the chance of having two Smart Kids (x²) must be smaller, and the chance that all your kids are Smart Kids continues to decline in this fashion. You’re playing with the same chromosomes each time, so it’s reasonable to treat the probability as constant across births.

So if the parents consider their luck before deciding to have another kid, and if they count their luck in the number of Smart Kids, you would expect IQ to drop off as it does in the figure above. Parents who have a first Smart Kid are more likely to have a second child, and parents who have two Smart Kids are more likely to have a third. But with each new child, the chance of the jackpot (all Smart Kids) declines.
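
This selection story is easy to demonstrate with a toy model. In the sketch below (my own assumptions throughout, not the paper’s), children’s IQs are drawn independently, but parents are slightly more likely to have a second child when the first one is smart. That alone makes first-borns in two-child families outscore their younger siblings by roughly three points on average, about the size of the published effect.

```python
# Toy stopping-rule model; all parameters are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(1)
n_families = 200_000

iq_first = rng.normal(100, 15, n_families)          # first-born IQ
# Gentle selection: a smarter first child raises the odds of a sibling
p_continue = 1 / (1 + np.exp(-(iq_first - 100) / 30))
has_second = rng.random(n_families) < p_continue
iq_second = rng.normal(100, 15, n_families)         # independent of the first!

print(iq_first[has_second].mean())   # ~103.4: first-borns selected upwards
print(iq_second[has_second].mean())  # ~100.0: no selection on the youngest
```

Note that the two IQ draws are statistically identical; the gap comes entirely from which families choose to continue.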

To summarise: The parents’ decision to have more children is determined in part by the IQ of the existing children, which means that more intelligent children are also more likely to have younger siblings. But conversely, this selection won’t operate on the youngest child, and will operate to a lesser extent on middle children.

With this account of the data, there is nothing particularly surprising about Kristensen and Bjerkedal’s essential finding that the social big brother (ie, the middle sibling) is smarter than the third sibling. You would expect this, given the interaction between the genetic lottery and the parental choice to procreate.

Let me pre-empt one criticism that could be raised against this account: you might think that the younger siblings in families where the first-born died should have about the same IQ as the younger siblings in families where the first-born survived. This is not the case, as a comparison between the black and the green dots in the figure above shows.

Before you conclude anything from that, look in the supplementary materials for the article. You will find that Kristensen and Bjerkedal restricted their dead-first-born sample to cases where the first-born was stillborn or died before the age of 1. So maybe the first-born simply wasn’t alive for long enough for the parents to base their further procreation decisions on this kid’s smartness. If this is true, the same parental decision-making process that would normally centre on the first-born instead centres on the second-born: Smart Kids are more likely to get younger siblings, while not so Smart Kids are less likely to.

It’s worth emphasising that these are subtle effects. The difference in this study was around 3 points (with M = 100 and SD = 15, that is an effect size of only 0.2 standard deviations), so while my discussion of Smart Kids and not so Smart Kids above may sound categorical, I’m only trying to make my point salient. Even if my little theory above is correct, IQ is likely to play only a small role in whether parents choose to procreate; otherwise, we would see larger effect sizes in studies such as this one.

If nothing else, I think this study highlights the issue of effect sizes in psychology. Is an IQ difference of 3 points worth discussing? What is the relevance of such a small effect? Can it form the basis of policy changes, or advice to parents? Surely not. I’m not even sure if it has theoretical relevance – surely there are factors out there that explain a bigger part of the IQ variability, and where the exact underlying causes are equally unknown. Yet, this study is treated as if it is hugely important. Look at the quotes below from the New York Times, for instance:

Three points on an I.Q. test, experts said, amount to a slight edge that could be meaningful for someone teetering between an A and a B, for instance, or even possibly between admission to an elite liberal-arts college and the state university, some experts said. They said the results are likely to prompt more intensive study into the family dynamics behind such differences.

“I consider this study the most important publication to come out in this field in 70 years; it’s a dream come true,” said Frank J. Sulloway, a psychologist at the Institute of Personality and Social Research at the University of California in Berkeley.

The edge between liberal-arts college or state university? The most important study in the field for the past 70 years? Don’t believe the hype.

References
Kristensen, P., Bjerkedal, T. (2007). Explaining the Relation Between Birth Order and Intelligence. Science, 316, 1717.

Encephalon #25 arrives June 20, 2007

Posted by Johan in Developmental Psychology, Links, Social Neuroscience.

Grab it now at Psyblog. Some favourite posts:

Developing Intelligence reports on some evidence that children have a difficult time telling fantasy from reality. This notion may seem like common sense, but this is one of the first empirical demonstrations of it that I’ve heard of.

Omnibrain posted a video of a laughing rat, which almost made me run out and get one for myself.

Finally, Memoirs of a Postgrad explains what embodiment means, as applied in embodied cognition but also in AI research. With mirror neurons being all the rage these days, embodied theories are everywhere. This post is a nice introduction to what the fuss is all about. It’s worth emphasising that embodiment theories preceded the discovery of mirror neurons, and indeed, it’s not clear that mirror neurons are necessary for embodied representations at the neural level.
