The video below is a nice introduction to some theoretical issues in neuroscience. It comes from a non-science conference, so it should be quite easy to keep up with even if you know little about neuroscience. More about the lecturer and his theories below the video.
Jeff Hawkins is a somewhat controversial figure in neuroscience. Most famous for his work with Palm, where he took part in creating the first handheld computers, he has recently gotten himself (and his considerable assets) into neuroscience. As the lecture outlines, this wasn’t his first foray into the area, but still – the guy has no Ph.D. and would clearly not be taken seriously by anyone if he didn’t bring his bags of money along for the ride.
Hawkins argues that what’s lacking is a theoretical framework. We have lots of data, but little in the way of theory. This is where Hawkins seeks to contribute, most famously with his hierarchical temporal memory model. The Wikipedia article is a bit thin (though note the criticisms of the work) – Memoirs of a Postgrad has a far better summary of the theory here. The theory is that the brain is basically a big memory system that uses past experiences to make predictions about the future – this is Hawkins’ definition of intelligence. So in this view, we literally learn from experience.
The lecture above also gives you a glimpse into why Hawkins’ efforts have been less than well received by neuroscientists. Around the 7-minute mark, he happily asserts that we need no more data, we need a theory. In his view, the lack of a theoretical basis in neuroscience reflects a lack of theoretical effort, not that we simply don’t know enough about the brain to make sense of it all yet. It’s easy to see how some neuroscientists get a little provoked by such remarks. It doesn’t help when he essentially compares the paradigm of current researchers to pre-Galileo, pre-evolution worldviews by drawing analogies to prior scientific revolutions like the heliocentric solar system, plate tectonics and evolution. According to Hawkins, neuroscience is bordering on a similar revolution, and it will be theoretical – there will be no decisive empirical discovery, just a “eureka” moment when someone figures out how all the current mysteries add up.
Google Tech talk Lecture: Computational Neuroscience
March 15, 2007. Posted by Johan in Connectionism, Neural Networks, Neuroscience.
Google holds regular internal Tech talks – lectures given by researchers in pretty much any field that catches someone’s interest at Google. The lectures get posted on Google Video. Here is a lecture on Computational Neuroscience by Bill Softky, ambitiously named “Hacking the Brain by Predicting the Future and Inverting the Un-Invertible.”
Softky is a theoretical neuroscientist, and true to form, he argues that understanding the basic elements of neural functioning is not going to lead to a major leap forward. Instead, he wants to understand the mathematical logic underlying neural circuitry. He makes a rather bold prediction that there will soon be a “general algorithm” that explains all neural activity. This algorithm is not necessarily going to be a set of equations. Rather, it will be a model, an overall architecture. Softky finds support for the idea that the brain is governed by a unitary algorithm in the basic observation that the surface of the cortex is pretty much the same everywhere, even though this uniform structure manages completely different forms of processing, depending on which bit of cortex you look at. Likewise, the fact that the area where the primary visual cortex normally resides is recruited for other purposes in congenitally blind people is used by Softky to emphasise the point that the cortex has a general structure, which learns and specialises through a general algorithm.
The lecture gets technical fast, and the constant computer geek metaphors are a bit tiresome (for the love of God, stop referring to everything you do as “hacking!”), but the content makes up for it. The Q & A session at the end is particularly interesting – some fairly penetrating questions are asked, especially considering that the audience is unlikely to have a background in neuroscience.
How does the Hippocampus interface with Cortex?
March 9, 2007. Posted by Johan in Connectionism, Neural Networks, Neuroscience, Sleep.
Hahn, Sakmann & Mehta report some intriguing results, for now only available in a press release, as PNAS apparently offers no ahead-of-print feature. A related paper by the same group (Hahn, Sakmann & Mehta, 2006) is available, however, which is where I got the micrograph of a cell in hippocampal area CA1 that you see above.
Hahn et al anaesthetised rats to simulate deep sleep, while recording the activity of cells in the hippocampus and cortex simultaneously. They found that excitatory activity in the cortex produced an echo in hippocampal cells. The pattern of activity in the hippocampus was not uniform: cells in the dentate gyrus showed a strong response, CA3 region cells showed a weaker response, and CA1 region cells were seemingly inhibited by the cortical activity.
One reason why this is interesting is that a popular theory of memory consolidation makes the opposite prediction, that is, the hippocampus should largely drive cortical activation. In the connectionist model of McClelland, McNaughton and O’Reilly (1995), the hippocampus is positioned as the “trainer” of long-term memory structures in the cortex.
The “trainer” role for the hippocampus arose from the observation that while a single connectionist network can easily accommodate large amounts of information, incrementally adding new items causes catastrophic interference. Simply put, this occurs because when new memories are formed by changing the connection weights in the network, the older memory traces are disrupted, since they relied on the previous weightings. By positing a second network that uses more easily changeable connection weights to allow for rapid learning, interleaved learning is made possible.
In this view, then, the hippocampus acquires new information rapidly, after which it trains the larger and slower cortical network on the new information. The hippocampus interleaves this new information with re-activation of older memories, which allows the old and new memories to co-exist in the cortical network, without catastrophic interference. McClelland et al suggested that this consolidation process occurs during slow-wave sleep.
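The interference problem is easy to make concrete. The sketch below is my own toy illustration, not the actual McClelland et al model: a single linear unit trained with the delta rule on two overlapping associations. Trained sequentially, learning the second association overwrites the shared weight and disrupts the first; trained interleaved, both survive.

```python
import numpy as np

def train(w, items, epochs=50, lr=0.1):
    """Delta-rule training of a single linear unit on (input, target) pairs."""
    for _ in range(epochs):
        for x, y in items:
            w = w + lr * (y - w @ x) * x  # nudge weights toward the target
    return w

# Two associations whose input patterns overlap on the first feature
x1, y1 = np.array([1., 1., 0., 0.]), 1.0
x2, y2 = np.array([1., 0., 1., 0.]), -1.0

# Sequential learning: item 1 first, then item 2 alone
w = train(np.zeros(4), [(x1, y1)])
error_before = abs(y1 - w @ x1)   # item 1 is learned almost perfectly
w = train(w, [(x2, y2)])
error_after = abs(y1 - w @ x1)    # item 1 is now disrupted: interference

# Interleaved learning: both items presented together
w2 = train(np.zeros(4), [(x1, y1), (x2, y2)], epochs=100)
```

Running this, `error_after` ends up far larger than `error_before`, while the interleaved network fits both items to near-zero error. Re-presenting old items alongside new ones during training is exactly the job the model assigns to hippocampal replay.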
To tie this in with Hahn et al’s results, the cortex-driven hippocampal activation could speculatively be viewed as the re-activation of memory traces described in the McClelland et al model. But it’s hard to see how the hippocampus then “trains” the cortical network on new information, when no activity seems to go in that direction. This leaves us with the possibility that this hippocampus-driven training of the cortical network occurs at a different point in the circadian cycle, or with the bleaker possibility that there is something fundamentally wrong with the functional division that McClelland et al proposed.
In any case, the method used by Hahn et al is quite fascinating. Up until now, very little has been known about how the hippocampus interacts with the cortex to play its (disputed!) role in memory formation and consolidation. The paper in PNAS is definitely one to look out for.
The Mehta Lab’s publications at Brown
Hahn, T.G., Sakmann, B., & Mehta, M.R. (2006). Phase-locking of hippocampal interneurons’ membrane potential to neocortical up-down states. Nature Neuroscience, 9, 1359-1361.
McClelland, J.L., McNaughton, B.L., & O’Reilly, R.C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102, 419-457.
The Brain Doesn’t Work Like the Internet
January 11, 2007. Posted by Johan in Connectionism.
In popular science articles, connectionist models are often introduced with the internet analogy – a bunch of nodes that collectively produce output, where some nodes have more connections, and thus more influence on the output, than others. The linked article is just one example of this.
Of course, they are completely wrong. In fact, the brain works like the blogosphere. You see, the problem with the internet analogy is that each node is an entity that is generally capable of doing far more complex things with its input than producing simple excitatory or inhibitory output.
Not so with the blogosphere. Think of each blog as a node. Think of each blogger’s RSS aggregator as the sum of excitatory connections (granted, the model fails to account for inhibitory connections). When enough of the other bloggers (nodes) copy/paste a news story into their blogs (i.e., fire), the sum of the collected blogging (input) reaches a threshold, and the blogger posts the same story. Thus, every other blogger who happens to be connected to our original blogger receives input, and we achieve spreading activation.
On second thought, the lack of inhibition is not a problem of the model – it is a problem of the blogosphere.
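The threshold mechanism above is simple enough to simulate in a few lines. This is my own toy sketch of the analogy, with made-up blog names, and, true to the model’s flaw, excitation only: a blog posts the story once enough of the blogs in its RSS reader have posted it.

```python
def spread(reads, seeds, threshold):
    """Propagate a story: a blog posts it once at least `threshold`
    of the blogs it reads have posted it (excitatory input only)."""
    posted = set(seeds)
    changed = True
    while changed:
        changed = False
        for blog, sources in reads.items():
            if blog not in posted and len(sources & posted) >= threshold:
                posted.add(blog)   # summed input crossed threshold: it "fires"
                changed = True
    return posted

# Who reads whom (hypothetical blogs; each maps to its RSS subscriptions)
reads = {
    "a": set(),
    "b": {"a"},
    "c": {"a", "b"},
    "d": {"b", "c"},
    "e": {"c", "d"},
    "f": {"e"},   # only one source, so it can never reach a threshold of 2
}

result = spread(reads, seeds={"a", "b"}, threshold=2)
# → {"a", "b", "c", "d", "e"}: activation spreads through c, d, e but never reaches f
```

Seeding the story at "a" and "b" cascades through "c", "d" and "e" in turn, while "f" stays silent: with a single incoming connection it can never cross the threshold, however active the rest of the network gets.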