Light blogging: James Haxby talk
June 3, 2007 · Posted by Johan in Face Perception, Neuroscience, Sensation and Perception.
In this video, James Haxby talks about how a multivariate technique called pattern classification can be used to extract more information from fMRI than was previously thought possible.
Haxby’s research has an interesting angle: while most cognitive neuroscientists pursue strict modularity, designing contrasts that show how individual areas contribute to a given process (for instance, the fusiform face area), Haxby looks at distributed representations. The central idea is that information can be coded in a distributed pattern of activation, so that face processing, for example, is not exclusively localised to the FFA, but instead arises from a distributed pattern of activation that includes but is not limited to the FFA. This was the result reported in Haxby et al.’s 2001 paper in Science. Haxby et al. were able to show, among other things, that they could predict from the patterns of nonmaximal responses (i.e., excluding the FFA and areas that responded maximally to objects) what type of stimulus the participant was looking at. So the brain may well be using the information conveyed in areas that aren’t conventionally considered important for faces or objects.
As I understand it, the key difference between this technique and the classic subtraction method is as follows:
In a typical fMRI subtraction analysis, you look at which areas are involved in task A but not task B. For instance, if you subtract the activation when the participant views houses from the activation when they view faces, the FFA emerges, while the reverse contrast produces an area known as the Parahippocampal Place Area. This yields the tidy, isolated spots of activation that are so typical of fMRI results, but in reality the activation produced by faces extends far beyond the FFA. Essentially, to continue the example, the subtraction technique discards any activation that does not contribute uniquely to faces as opposed to places.
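The subtraction logic can be sketched on simulated data. This is a toy illustration, not Haxby’s actual pipeline: the voxel counts, effect sizes, noise levels, and uncorrected threshold are all made up for demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100

# Simulated activation: voxels 0-9 respond more to faces ("FFA-like"),
# voxels 10-19 respond more to houses ("PPA-like"), the rest are noise.
faces = rng.normal(0.0, 1.0, (n_trials, n_voxels))
houses = rng.normal(0.0, 1.0, (n_trials, n_voxels))
faces[:, :10] += 2.0
houses[:, 10:20] += 2.0

# Subtraction/contrast: a per-voxel t-test of faces minus houses.
t, p = stats.ttest_ind(faces, houses, axis=0)

# Keep only voxels surviving a (naive, uncorrected) threshold in each
# direction -- everything the two conditions share is discarded.
face_voxels = np.where((p < 1e-4) & (t > 0))[0]
house_voxels = np.where((p < 1e-4) & (t < 0))[0]
print("face > house voxels:", face_voxels)
print("house > face voxels:", house_voxels)
```

The two printed lists recover the selectively responsive voxels, while any distributed signal carried by the remaining voxels never enters the picture.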
In pattern classification, you instead carry out statistical tests to see whether the pattern of activation when viewing faces differs from the pattern of activation when viewing objects. This makes the neural activation patterns far more difficult to interpret with the naked eye, but if Haxby and other critics of strict modularity are correct in arguing that information is coded in distributed representations rather than in separate modules, this is pretty much the only way of getting at those representations.
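The split-half correlation scheme from Haxby et al. (2001) can be sketched in a few lines: average each condition’s training trials into a template, then classify each test pattern by which template it correlates with more strongly. Again a toy simulation with made-up templates and noise levels, not real fMRI data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_trials = 50, 20

# Two "category templates": weak, distributed response patterns with no
# single strongly selective voxel (purely illustrative values).
face_template = rng.normal(0.0, 1.0, n_voxels)
house_template = rng.normal(0.0, 1.0, n_voxels)

def simulate_run(template, noise=1.0):
    """Noisy trials drawn around a category's distributed pattern."""
    return template + rng.normal(0.0, noise, (n_trials, n_voxels))

# Training half: average the trials of each condition into one pattern.
train_face = simulate_run(face_template).mean(axis=0)
train_house = simulate_run(house_template).mean(axis=0)

def classify(pattern):
    """Label a pattern by which training pattern it correlates with more."""
    r_face = np.corrcoef(pattern, train_face)[0, 1]
    r_house = np.corrcoef(pattern, train_house)[0, 1]
    return "face" if r_face > r_house else "house"

# Test half: classify held-out trials and score against the truth.
preds = [classify(p) for p in simulate_run(face_template)] + \
        [classify(p) for p in simulate_run(house_template)]
truth = ["face"] * n_trials + ["house"] * n_trials
accuracy = np.mean([p == t for p, t in zip(preds, truth)])
print(f"classification accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

Note that no individual voxel here is strongly selective; the category information lives in the overall pattern, which is exactly what a subtraction contrast would miss.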
The talk is a little bit technical, but I think anyone with more than a passing interest in neuroscience will find it worthwhile to spend half an hour watching it.
UPDATE: I wrote this post at a much earlier point in my studies, and it’s now clear that the comparison between the two analyses is a bit inaccurate. The video is still worth watching, so I’ve kept this post on file, but be aware that the post itself may be way off. See the review by Norman et al. (2006) for a more authoritative take.
Haxby, J.V., Gobbini, M.I., Furey, M.L., Ishai, A., Schouten, J.L., & Pietrini, P. (2001). Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex. Science, 293, 2425–2430.
James Haxby lecture: “Implications of decoding for theories of neural representation”