
Amygdala-Orbitofrontal interplay in cognitive flexibility November 29, 2007

Posted by Johan in Learning, Neural Networks, Neuroscience.

This rat doesn’t get sucrose, but is probably happier than Stalnaker et al’s rats

Today’s title may be the least accessible yet, but bear with me; this is an interesting paper. Stalnaker et al (2007) investigated the neural basis of what they call cognitive flexibility – a fancy term for a rat’s ability to handle a conditioning paradigm known as reversal learning. The method that Stalnaker et al used serves as a good example of the paradigm.

Rats were first trained on an odour discrimination task. Poking at one little door that is laced with odour A produces delivery of a tasty sucrose solution. Poking at another little door that is laced with odour B produces delivery of an unpleasant quinine solution (incidentally, quinine is a component in vermouth, but we’ll assume that these particular rats like their martinis very dry). The door that is associated with each odour is varied, so that the rats have to rely on the odour cues alone to learn how to get their treat. Once the rats have achieved a criterion level of accuracy at this task, the contingency reverses, so that odour B now produces the treat while odour A produces quinine. The typical finding is that rats are slower to learn the reversal than they were to learn the original task.
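Why should the reversal be harder than the original learning? Even a bare-bones error-driven learner shows the effect: after acquisition, the associative values enter the reversal at the wrong extremes, so they must be unlearned before the new contingency can be acquired. Here is a minimal delta-rule sketch of that logic (my toy illustration, not anything from the paper; the learning rate and criterion are arbitrary):

```python
def trials_to_criterion(v_a, v_b, alpha=0.2, criterion=0.9):
    """Delta-rule learner: v_a and v_b track how strongly each odour
    predicts sucrose. Train until odour A is clearly preferred."""
    trials = 0
    while not (v_a > criterion and v_b < 1 - criterion):
        v_a += alpha * (1.0 - v_a)   # odour A currently predicts sucrose (target 1)
        v_b += alpha * (0.0 - v_b)   # odour B currently predicts quinine (target 0)
        trials += 1
    return trials, v_a, v_b

# Acquisition: both odours start out neutral.
n_acq, v_a, v_b = trials_to_criterion(0.5, 0.5)

# Reversal: the learner enters with the values at the wrong extremes
# (what it learned about A now applies to B, and vice versa).
n_rev, _, _ = trials_to_criterion(v_b, v_a)

print(f"acquisition: {n_acq} trials, reversal: {n_rev} trials")
# acquisition: 8 trials, reversal: 10 trials
```

With these arbitrary settings the simulated reversal takes a few more trials than acquisition – the same qualitative pattern the rats show.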

Stalnaker et al were interested in the roles of the orbitofrontal cortex (OFC) and the basolateral amygdala (ABL) in bringing about this reversal. There are two basic ideas about how this might work: the OFC might directly encode the preferred stimulus, or the OFC might play an indirect role in which it facilitates changes in downstream areas such as the ABL. In other words, downstream areas bring about the actual behaviour, while the OFC plays more of a modulatory role, telling the downstream areas when contingencies change.

To test these notions, Stalnaker et al lesioned the OFC in one group of rats, the ABL in another group, and both the OFC and the ABL in a third group. After this, the rats learned the odour discrimination task. The three groups did not differ significantly at this point – in other words, neither area, nor the combination of the two, was necessary to learn the task. Next, the rats went through two serial reversals – odour A switched places with odour B, and then back again. The effect of the lesions was measured as the number of trials taken to learn each reversal to the same accuracy level as the initial odour task.

Rats with OFC damage were slower to learn the reversals than the other groups. However, rats with ABL lesions and rats with combined OFC and ABL lesions were not significantly impaired. In other words, although OFC lesions in isolation cause impairments, this effect is abolished when the ABL is lesioned as well.

Stalnaker et al interpret these findings as support for an indirect role for the OFC in reversal learning. The ABL is stubborn, simply put. Without the modulatory influence of the OFC, the ABL persists in responding as though the contingency had not reversed, which produces slower reversal learning. By removing the ABL as well, this persistent influence is gone and reversal learning can occur normally. It is somewhat counter-intuitive that lesioning more of the brain helps, but there you go.
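This interpretation lends itself to a toy simulation (my sketch of the authors’ verbal account, not a model from the paper): suppose the value of responding to the old odour is a blend of a flexible pathway that always tracks the current contingency and an ABL pathway that updates only when an intact OFC signals the change. The lesion pattern then falls out of the arithmetic:

```python
def simulate_reversal(ofc_intact, abl_intact, alpha=0.3, n_trials=20):
    """Toy account: value of responding to the old odour after reversal
    (its true value is now 0). The flexible pathway always updates; the
    ABL pathway updates only when the OFC can signal the change."""
    flexible = abl = 1.0   # both pathways start out trusting the old odour
    for _ in range(n_trials):
        flexible += alpha * (0.0 - flexible)
        if abl_intact and ofc_intact:
            abl += alpha * (0.0 - abl)
    return 0.5 * (flexible + abl) if abl_intact else flexible

for ofc, abl, label in [(True, True, "sham"),
                        (False, True, "OFC lesion"),
                        (True, False, "ABL lesion"),
                        (False, False, "OFC + ABL lesion")]:
    print(f"{label:17s} residual value of old odour: {simulate_reversal(ofc, abl):.2f}")
```

Only the OFC-lesioned “rats” keep a stubbornly high value on the old odour; removing the ABL as well takes the perseverating pathway out of the blend, and behaviour normalises.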

This is a nice study because it answers one question while raising a number of new ones. If the rats can carry out reversal learning normally without either the OFC or the ABL, why is this circuit even involved in the paradigm – that is, why should OFC lesions have an effect if the pathway as a whole is not needed? Also, if the ABL produces such deficient behaviour when the OFC is lesioned, why don’t lesions to the ABL alone affect behaviour? And most importantly, if behaviour is normal after combined ABL and OFC lesions, which other area must be lesioned to impair behaviour yet again? And what happens if that area is lesioned in isolation?

Enough questions to make your head spin, but the take-home message for those studying humans is that there is an entire range of complex interactions in the brain that fMRI, with its blurry temporal resolution and lack of experimental manipulation, can only hint at. We know much about functional localisation in the human brain, but the issue of how these areas connect and interact is largely uncharted territory.

References
Stalnaker, T.A., Franz, T.M., Singh, T., & Schoenbaum, G. (2007). Basolateral amygdala lesions abolish orbitofrontal-dependent reversal impairments. Neuron, 54, 51-58.

Light blogging: Jeff Hawkins on the state of neuroscience June 11, 2007

Posted by Johan in Connectionism, Neural Networks, Neuroscience.

The video below is a nice introduction to some theoretical issues in neuroscience. It comes from a non-science conference, so it should be quite easy to keep up with even if you know little about neuroscience. More about the lecturer and his theories below the video.

Jeff Hawkins is a somewhat controversial figure in neuroscience. Best known for his work at Palm, where he helped create some of the first handheld computers, he has recently moved himself (and his considerable assets) into neuroscience. As the lecture outlines, this wasn’t his first foray into the area, but still – the guy has no Ph.D., and would clearly not be taken seriously by anyone if he didn’t bring his bags of money along for the ride.

Hawkins argues that what’s lacking is a theoretical framework. We have lots of data, but little in the way of theory. This is where Hawkins seeks to contribute, most famously with his hierarchical temporal memory (HTM) model. The Wikipedia article is a bit thin (though note the criticisms of the work) – Memoirs of a Postgrad has a far better summary of the theory here. The theory holds that the brain is basically a big memory system that uses past experiences to make predictions about the future – this is Hawkins’ definition of intelligence. So in this view, we literally learn from experience.
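The memory-prediction idea is easy to caricature in code. The sketch below (my toy illustration – the real HTM architecture is hierarchical and far richer) just stores the transitions it has experienced and predicts the most frequent continuation:

```python
from collections import Counter, defaultdict

class SequenceMemory:
    """Toy memory-prediction loop: remember transitions seen in the past,
    predict the next input as the most frequent continuation."""
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, token):
        if self.prev is not None:
            self.transitions[self.prev][token] += 1
        self.prev = token

    def predict(self):
        options = self.transitions[self.prev]
        return options.most_common(1)[0][0] if options else None

mem = SequenceMemory()
for token in "the cat sat on the mat the cat sat".split():
    mem.observe(token)
print(mem.predict())   # after "sat" -> "on", predicted purely from past experience
```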

The lecture above also gives you a glimpse into why Hawkins’ efforts have been less than well received by neuroscientists. Around the 7-minute mark, he happily asserts that we need no more data, we need a theory. Thus, the lack of a theoretical basis in neuroscience reflects a lack of theoretical effort, not that we simply don’t know enough about the brain to make sense of it all yet. It’s easy to see how some neuroscientists get a little provoked by such remarks. It doesn’t help that he essentially compares the mindset of current researchers to pre-Galilean and pre-Darwinian worldviews, drawing analogies to prior scientific revolutions: the heliocentric solar system, plate tectonics, and evolution. According to Hawkins, neuroscience is bordering on a similar revolution, and it will be theoretical – there will be no decisive empirical discovery, just a “eureka” moment when someone figures out how all the current mysteries add up.

Google Tech Talk lecture: Computational Neuroscience March 15, 2007

Posted by Johan in Connectionism, Neural Networks, Neuroscience.

Google holds regular internal Tech talks – lectures given by researchers in pretty much any field that catches someone’s interest at Google. The lectures get posted on Google Video. Here is a lecture on Computational Neuroscience by Bill Softky, ambitiously named “Hacking the Brain by Predicting the Future and Inverting the Un-Invertible.”

Softky is a theoretical neuroscientist, and true to form, he argues that understanding the basic elements of neural functioning is not going to lead to a major leap forward. Instead, he wants to understand the mathematical logic underlying neural circuitry. He makes a rather bold prediction that there will soon be a “general algorithm” that explains all neural activity. This algorithm is not necessarily going to be a set of equations; rather, it will be a model, an overall architecture. Softky finds support for the idea of a unitary algorithm in the observation that the surface of the cortex looks pretty much the same everywhere, even though this uniform structure handles completely different forms of processing depending on which bit of cortex you look at. Likewise, he points to the fact that the area where the primary visual cortex normally resides is recruited for other purposes in congenitally blind people, emphasising that cortex has a general structure which learns and specialises through a general algorithm.

The lecture gets technical fast, and the constant computer geek metaphors are a bit tiresome (for the love of God, stop referring to everything you do as “hacking!”), but the content makes up for it. The Q & A session at the end is particularly interesting – some fairly penetrating questions are asked, especially considering that the audience is unlikely to have a background in neuroscience.

How does the Hippocampus interface with Cortex? March 9, 2007

Posted by Johan in Connectionism, Neural Networks, Neuroscience, Sleep.

Micrograph of a neuron in hippocampal area CA1 (Hahn, Sakmann & Mehta, 2006).

Hahn, Sakmann & Mehta report some intriguing results, for now only available in a press release, as PNAS apparently offers no ahead-of-print feature. A related paper by the same group (Hahn, Sakmann & Mehta, 2006) is available, however, which is where I got the micrograph of a cell in hippocampal area CA1 that you see above.

Hahn et al anaesthetised rats to simulate deep sleep while recording the activity of cells in the hippocampus and cortex simultaneously. They found that excitatory activity in the cortex produced an echo in hippocampal cells. The pattern of activity in the hippocampus was not uniform: cells in the dentate gyrus showed a strong response, CA3 cells showed a weaker response, and CA1 cells were seemingly inhibited by the cortical activity.

One reason why this is interesting is that a popular theory of memory consolidation makes the opposite prediction, that is, the hippocampus should largely drive cortical activation. In the connectionist model of McClelland, McNaughton and O’Reilly (1995), the hippocampus is positioned as the “trainer” of long-term memory structures in the cortex.

The “trainer” role for the hippocampus arose from the observation that while a single connectionist network can easily accommodate large amounts of information, incrementally adding new items causes catastrophic interference. Simply put, this occurs because when new memories are formed by changing the connection weights in the network, the older memory traces are disrupted, since they relied on the previous weightings. By positing a second network that uses more easily changeable connection weights to allow for rapid learning, interleaved learning is made possible.

In this view, then, the hippocampus acquires new information rapidly, after which it trains the larger and slower cortical network on the new information. The hippocampus interleaves this new information with re-activation of older memories, which allows the old and new memories to co-exist in the cortical network, without catastrophic interference. McClelland et al suggested that this consolidation process occurs during slow-wave sleep.
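A small numpy sketch makes the interference problem concrete (a generic demonstration of the phenomenon, not the McClelland et al model itself; the patterns and learning rate are arbitrary). A linear associator trained on memory A and then only on an overlapping memory B overwrites A, while interleaving the two preserves both:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "memories": overlapping input patterns mapped to unrelated targets.
x_a, y_a = rng.normal(size=8), rng.normal(size=4)
x_b, y_b = x_a + 0.3 * rng.normal(size=8), rng.normal(size=4)  # x_b overlaps with x_a

def train(W, pairs, epochs=200, lr=0.02):
    """Delta-rule training of a linear associator on (input, target) pairs."""
    for _ in range(epochs):
        for x, y in pairs:
            W = W + lr * np.outer(y - W @ x, x)
    return W

def error(W, x, y):
    return np.mean((y - W @ x) ** 2)

# Sequential: learn A, then train only on B - the weights that stored A get overwritten.
W = train(np.zeros((4, 8)), [(x_a, y_a)])
W = train(W, [(x_b, y_b)])
print(f"sequential training,  error on old memory A: {error(W, x_a, y_a):.3f}")

# Interleaved: rehearse A alongside B - both memories survive in the same weights.
W = train(np.zeros((4, 8)), [(x_a, y_a), (x_b, y_b)])
print(f"interleaved training, error on old memory A: {error(W, x_a, y_a):.3f}")
```

The sequential run forgets A almost completely, while the interleaved run retains both – the crux of why McClelland et al need a fast hippocampal learner to replay old memories during cortical training.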

To tie this in with Hahn et al’s results, the cortex-driven hippocampal activation could speculatively be viewed as the re-activation of memory traces described in the McClelland et al model. But it is hard to see how the hippocampus then “trains” the cortical network on new information, when no activity seems to go in that direction. This leaves us with the possibility that the hippocampus-driven training of the cortical network occurs at a different point in the circadian cycle, or with the bleaker possibility that there is something fundamentally wrong with the functional division that McClelland et al proposed.

In any case, the method used by Hahn et al is quite fascinating. Up until now, very little has been known about how the hippocampus interacts with the cortex to play its (disputed!) role in memory formation and consolidation. The paper in PNAS is definitely one to look out for.

References
The Mehta Lab’s publications at Brown

Brown University Press Release

Hahn, T.G., Sakmann, B., & Mehta, M.R. (2006). Phase-locking of hippocampal interneurons’ membrane potential to neocortical up-down states. Nature Neuroscience, 9, 1359-1361.

McClelland, J.L., McNaughton, B.L., & O’Reilly, R.C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102, 419-457.

AI and Emotion February 23, 2007

Posted by Johan in Cognition, Connectionism, Neural Networks, Social Neuroscience.

There is a story on BBC News about FEELIX Growing, a project that uses neural networks to design robots that will interact with humans in flexible, adaptive ways. Interestingly, the researchers seem to think that the way to achieve this is to design the software so that it is sensitive to human emotion:

“The human emotional world is very complex but we respond to simple cues, things we don’t notice or we don’t pay attention to, such as how someone moves,” said Dr Canamero, who is based at the University of Hertfordshire.

From the FEELIX Growing website:

Adaptation to incompletely known and changing environments and personalization to their human users and partners are necessary features to achieve successful long-term integration. This integration would require that, like children (but on a shorter time-scale), robots develop embedded in the social environment in which they will fulfil their roles.

It’s an interesting proposition that robots should show some sensitivity to emotion in order to appear more human. The researchers will apparently rely on off-the-shelf robots with custom software, so don’t count on this leading to a robot that passes the Turing test. Nevertheless, acknowledging the role of affect in human-computer interactions is bound to be a good idea. In fact, this is already happening: supposedly, phone queueing software is designed to detect swearing, so that angry callers can be bumped to an operator immediately.
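That last mechanism is simple enough to caricature (a toy keyword sketch; the cue list is made up, and a real system would presumably work on acoustic features of live speech rather than clean transcripts):

```python
ANGER_CUES = {"damn", "hell", "stupid", "ridiculous"}   # hypothetical cue list

def route_caller(transcript: str) -> str:
    """Bump the caller to a human as soon as anger cues appear."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return "human operator" if words & ANGER_CUES else "automated queue"

print(route_caller("This is ridiculous, I have waited an hour!"))  # -> human operator
```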
