
Object- or Viewer-centered Coding in the Superior Temporal Sulcus April 22, 2007

Posted by Johan in Neuroscience, Sensation and Perception, Social Neuroscience.

The Superior Temporal Sulcus (STS) is a groove in the temporal lobe (shown on a macaque monkey brain above) that appears to play a special role in visual perception: neurons in this area respond selectively to various forms of biological motion. For instance, the figure above shows the location of STS neurons that respond to walking and to bending of the knees (we’ll get back to this figure shortly).

Undoubtedly, some of you will assume that you’re now reading yet another post about mirror neurons, but this is not the case. Mirror neurons are found anterior to the STS (i.e., toward the front), in the ventral premotor cortex, and above the STS in the anterior inferior parietal lobule (all this in macaques, mind). As you’re all painfully aware by now, mirror neurons are so called because they respond both to the perception of (mainly goal-directed) actions in others and to the corresponding actions performed by the monkey itself. In contrast, neurons in the STS respond only to the actions of others. So it would seem that the monkey (and presumably also the human) brain has at least three distinct areas that deal more or less directly with perceiving the actions of other individuals, though it’s far from clear how these areas divide up the work, or how they are interconnected.

The figure above is from a paper by Jellema and Perrett (2006) that sought to investigate whether STS neurons use object- or viewer-centered coding. These abstract ideas boil down to the following: an object-centered neuron will respond to the same stimulus no matter which way it’s turned, while a viewer-centered neuron will only respond to the stimulus when it is presented in a particular orientation. Jellema and Perrett (2006) also argue that a third coding category exists, namely goal-directed coding. Goal-directed coding can be seen in mirror neurons that only respond when the experimenter picks up an object, not to the pantomime of picking up without an object, or to the object viewed in isolation. An STS-relevant example would be the neurons that Jellema and Perrett (2006) found, which respond to bending of the knees only when the legs are standing on a surface. Bending of the knees in the air produced no response.
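To make the distinction a bit more concrete, here is a toy sketch in Python (my own illustration, not anything from the paper) of how one might label a recorded cell from its mean firing rates across conditions. The rates, the 2x-baseline criterion, and all the names are invented for the example.

# Toy illustration (not from Jellema & Perrett): classify a cell's coding
# scheme from its mean firing rates (spikes/s) across stimulus conditions.

BASELINE = 5.0      # assumed spontaneous firing rate
THRESHOLD = 2.0     # a condition "drives" the cell if rate > 2x baseline

def drives(rate):
    """Crude criterion: the condition excites the cell above baseline."""
    return rate > THRESHOLD * BASELINE

def classify(rates_by_view, rate_with_goal, rate_without_goal):
    """rates_by_view: mean rate for the same action seen from each orientation."""
    driven_views = [v for v, r in rates_by_view.items() if drives(r)]
    if drives(rate_with_goal) and not drives(rate_without_goal):
        return "goal-directed"       # e.g. knee flexion only with feet on a surface
    if len(driven_views) == len(rates_by_view):
        return "object-centered"     # responds regardless of viewing angle
    if len(driven_views) == 1:
        return "viewer-centered"     # responds from one viewing angle only
    return "unclassified"

# Hypothetical cell: fires to knee flexion from every angle, but only
# when the feet are in contact with a surface.
views = {"front": 32.0, "side": 28.0, "back": 30.0}
print(classify(views, rate_with_goal=30.0, rate_without_goal=6.0))
# -> goal-directed

The real classification is of course messier than a single threshold, but the logic of the comparisons is the same: vary the viewpoint and the context, and see which manipulations abolish the response.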

But how can they know that the neuron responded to the knee bend itself, and not to some other aspect of the stimulus? Well, the knee-bending case turns out to illustrate nicely how the work in these biological-motion single-cell recording studies is actually carried out:

The simplest explanation for the cell’s responsiveness would of course be that it responded to an object moving downward. […] Therefore, we presented an agent jumping from a 40 cm high elevation while keeping the knees straight. This constituted a lowering of the body without knee flexing, and produced a very much reduced response (as well as sore knees for the agent!)

As you may have feared, the basic design is to have someone stand in front of the monkey doing stuff, all the while listening to the neuronal firing rate over speakers to spot when the recorded neuron responds (not that I can think of a better way of testing this).
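In modern practice the same judgement is usually also made offline from the recorded spike trains rather than by ear alone. The sketch below is my own illustration of that kind of check, not the authors’ analysis: it counts spikes in a pre-stimulus baseline window and a stimulus window on each trial, and calls the cell responsive if the stimulus-window rate clearly exceeds baseline. The windows, the criterion, and the spike times are all made up.

# Illustrative only: decide whether a cell "responded" in a condition by
# comparing firing rates in a pre-stimulus baseline window and a stimulus
# window, averaged over trials. Numbers and criterion are invented.
from statistics import mean, stdev

def rate(spike_times, start, end):
    """Firing rate (spikes/s) within [start, end) seconds."""
    return sum(start <= t < end for t in spike_times) / (end - start)

def responds(trials, baseline=(-0.5, 0.0), window=(0.05, 0.55), z=3.0):
    """trials: list of spike-time lists, aligned so the action starts at t=0."""
    base = [rate(t, *baseline) for t in trials]
    stim = [rate(t, *window) for t in trials]
    spread = stdev(base) or 1.0   # avoid division by zero on silent baselines
    return (mean(stim) - mean(base)) / spread > z

# Hypothetical trials for "knee flexion, feet on surface":
trials = [
    [-0.40, -0.10, 0.07, 0.12, 0.18, 0.25, 0.31, 0.44],
    [-0.30, 0.06, 0.11, 0.19, 0.28, 0.39, 0.50],
    [-0.45, -0.20, 0.09, 0.15, 0.22, 0.33, 0.41],
]
print(responds(trials))   # -> True for this made-up cell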

And how was the goal-directed knee-bending response discovered?

Knee flexion with the principal body axis oriented horizontally was achieved by the experimenter lying on a mobile table, at 2 m distance from the subject. Two different versions were presented. In the first, the feet made contact with the wall of the testing room throughout the flexing and straightening cycles. In this scenario, knee extension was achieved by exerting force onto the wall, thereby pushing the mobile table (with the experimenter on top) away from the wall. To achieve knee bending, a second experimenter (who remained out of sight) pushed the table towards the wall. In the second version, the feet never touched the wall (50 cm minimal distance between feet and wall). The experimenter simply flexed and straightened the knees in the air just above the surface of the table. Interestingly, only knee bending with the feet maintaining contact with the wall produced a response (Fig. 3g), while knee flexing without the feet making contact with the wall produced no response at all.

This must have been one confused monkey. Do note the subtle distinction in who’s carrying out the task: the comfy knee-bending is done by an “experimenter,” while the 40 cm knees-straight drop is performed by an “agent,” which I presume is a euphemism for “undergraduate research assistant.”

Jellema and Perrett (2006) report on a range of STS neurons that respond to various behaviours, but their principal findings are well summarised by the figure at the top of this post. The series of coronal STS slices runs from anterior (left) to posterior (right). Note that Jellema and Perrett (2006) only recorded from the anterior part of the STS. As you can see, it’s a bit of a mixed bag: some cells are object-centered, others viewer-centered. The principal finding is the large group of object-centered neurons, visible as grey rings and triangles in the slices marked +18 and +16. Previous investigations had not found such large groups of object-centered neurons, which Jellema and Perrett (2006) attribute to the relatively anterior recording sites they used. This matches the overall trend in their data for viewer-centered coding to appear in posterior sections and object-centered coding in anterior ones. It is also consistent with previous findings that response latencies are longer at anterior sites, and that the main outputs from the STS leave from the anterior region. The authors suggest that, since object-centered representations likely build on viewer-centered ones, processing in the STS may proceed from back to front.

Another thing to note from the figure above is that all the responding neurons were found in the upper bank of the STS. Jellema and Perrett (2006) did record from both banks, so it’s an open question what the lower-bank neurons respond to – apparently it isn’t knee-bending and shattered knees, at least.

Finally, Jellema and Perrett (2006) argue that the goal-directed neurons they found may represent a basic understanding of causal relations (“he picked up the ball”), if the STS projects onto mirror neurons in premotor cortex. They also note, however, that behavioural evidence shows that monkeys have little capacity to understand or imitate the actions of others; that is apparently the territory of apes and humans. With this in mind, the neurons they’ve found may, in concert with previously reported mirror neurons, form the basis of what rudimentary social learning and imitation monkeys are capable of. But the fact that apes and humans are capable of visually-guided imitation suggests that we have something the macaques lack. Thus, it would seem that macaque monkeys (on whom the bulk of the mirror neuron research was carried out, incidentally) are somewhat limited in this respect as models of the human brain.

References
Jellema, T., & Perrett, D.I. (2006). Neural representations of perceived bodily actions using a categorical frame of reference. Neuropsychologia, 44, 1535-1546.
