
Storage of the Motion After-Effect March 31, 2007

Posted by Johan in Neuroscience, Sensation and Perception.

[Image: demonstration of the motion after-effect, courtesy of Wikipedia]

Omnibrain had a story on the motion after-effect a while back, including a link to a rather nice demonstration. To get an idea of the effect, just look at the centre of the moving pattern for 20 seconds or so. When you look away, things should look fairly weird for a while.

So why does this happen? The most popular way to explain this effect is to use a so-called ratio model (e.g., Barlow & Hill, 1963). The model itself is simple, but it does rely on a bit of prior knowledge of how neurons work. Skip two paragraphs ahead if you don’t need to be reminded of the basics.

Imagine that for each bit of the retina, there is a neuron that fires when there is movement in one direction, and a complementary neuron that fires in response to movement in the opposite direction. Most likely, there’s a bunch of these pairs, each responding maximally to a particular direction of motion. By the way, these neurons are probably nowhere near your retina, unless you’re a rabbit – Barlow and Hill thought they could be found in the lateral geniculate nucleus, though most researchers today would probably prefer to place them in the primary visual cortex, or in area MT.

When you view the trippy moving stimulus, each receptive field only gets stimulation in a single direction (since the receptive field is small). So in a given spot, only a couple of the neurons have been firing like mad – namely, the ones that respond to that direction of motion. The rest have been firing at their base rate (neurons generally fire spontaneously at a low rate in the absence of any stimulation). As you get tired of the irritating techno track and decide to look away, the stimulated neurons go into a refractory period, that is, their sensitivity to stimulation drops for a while, which reduces their firing rate.

Knowing this, the basic idea behind the ratio model is simple. Suppose your brain decides which way the world is moving by comparing the firing rates of direction-sensitive neurons tuned to opposite directions. As you look away, that ratio shifts in favour of the opposite direction: the neurons tuned to the opposite direction are still firing at base rate, as they have all along, while the neurons that responded to the actual direction of movement are now firing at less than base rate.
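
To make the comparison concrete, here is a minimal sketch of the ratio model in Python. The firing rates are toy numbers of my own, not measurements from Barlow and Hill:

    # Toy sketch of the ratio model (illustrative numbers, not Barlow & Hill's).
    # Two neurons tuned to opposite directions; perceived motion follows
    # whichever one fires faster.

    BASE_RATE = 10.0  # spikes/s, spontaneous firing in the absence of stimulation

    def perceived_direction(left_rate, right_rate):
        """Compare the opposing rates; a tie means no perceived motion."""
        if left_rate > right_rate:
            return "leftward"
        if right_rate > left_rate:
            return "rightward"
        return "stationary"

    # During adaptation, leftward motion drives the left-tuned neuron hard,
    # while the right-tuned neuron idles at base rate.
    print(perceived_direction(left_rate=50.0, right_rate=BASE_RATE))       # leftward

    # Just after you look away, the adapted neuron fires below base rate,
    # so the ratio now favours the unadapted neuron: motion in the opposite
    # direction is perceived even though nothing is moving.
    print(perceived_direction(left_rate=4.0, right_rate=BASE_RATE))        # rightward

    # Once the adapted neuron recovers, both sit at base rate again.
    print(perceived_direction(left_rate=BASE_RATE, right_rate=BASE_RATE))  # stationary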

And indeed, Barlow and Hill (1963) showed that in the retina of the rabbit (where the ganglion cells are direction-sensitive, unlike human ganglion cells), something like this seemed to be happening.

The ratio model is such a tidy explanation that most investigators paid no mind to a few papers published by Spigel (1960, 1962) in an obscure journal. The ratio model predicts that it shouldn't really matter what you do after adapting to the moving stimulus: as long as you don't re-adapt by viewing it again, the refractory period lasts only a few seconds – the underlying biochemical process is well understood, and really doesn't allow for any exceptions.

While Spigel made no explicit reference to the ratio model, his results violate this prediction. In a nutshell, Spigel had his participants adapt to a rotating stimulus. Then he stopped the rotation and had the participants view a static version of the stimulus (note that this is slightly different from merely looking away from the stimulus). The participants reported when they no longer experienced the after-effect. Next, the participants adapted to the rotating stimulus again, but this time, following adaptation, they were placed in complete darkness for the same length of time that they had previously experienced the after-effect. After this, they once again viewed the static version of the stimulus.

The design of this experiment may seem a bit contrived, so maybe it's a good idea to test it on yourself to see how it works. Open the moving stimulus and this image of the stimulus in separate tabs or windows. Make sure you have a stopwatch and a quick way of switching from the moving stimulus to the static stimulus (for example, Alt-Tab for windows, or Command/Ctrl-[tab number] for tabs). If your browser automatically resizes oversized images, make sure you view the image at full size. View the moving stimulus for 30 ompa-ompas, switch to the image of the stimulus, and start your stopwatch. Stop it when the image is no longer moving, and record the duration of the effect. Next, view the moving stimulus for another 30 ompa-ompas. After this, look away from the screen and start your stopwatch. When an amount of time equal to the original effect duration has passed, look back at the image of the stimulus (time the effect again if you want to be scientific). Lo and behold, the effect is (hopefully!) still there. If you want to be properly scientific, you may also wish to test yourself in the first condition one more time to check that the after-effect still lasts for roughly the same length of time, thus ruling out any possible practice effects.
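
If juggling a stopwatch sounds fiddly, a throwaway script along these lines (entirely hypothetical – it just automates the timing steps described above) will do the bookkeeping for you:

    # Hypothetical helper for the two conditions described above.
    # Run it in a terminal next to your browser; it just times key presses.
    import time

    def time_aftereffect(prompt):
        input(prompt + " - press Enter when you switch to the static image")
        start = time.time()
        input("Press Enter the moment the image stops moving...")
        return time.time() - start

    # Condition 1: adapt, then view the static image immediately.
    duration = time_aftereffect("Adapt to the moving stimulus for 30 ompa-ompas")
    print(f"After-effect lasted {duration:.1f} s")

    # Condition 2: adapt again, look away for the same duration ('storage'),
    # then look back at the static image.
    input("Adapt again for 30 ompa-ompas, then look away and press Enter")
    time.sleep(duration)
    print("Now look at the static image - is the after-effect still there?")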

In the original study, Spigel (1960) found that following the time spent in darkness, the participants still reported an after-effect. The duration of the effect was significantly shorter than in the original condition, but it was nevertheless there. Spigel (1962) found that the storage effect also appeared if the participants simply viewed a blank white surface between adaptation and testing, and later investigators found that the storage effect appears essentially regardless of what the participants view between adaptation and testing (Thompson & Wright, 1994) – which is pretty much what you just did.

It’s very tricky to explain Spigel’s (1960, 1962) findings with a ratio model. There’s no reason why this purported imbalance in firing rates between opposing neurons should essentially disappear during the pause between adaptation and testing, only to re-appear when you view the static stimulus again. While no one seems to have tested the upper limit on how long the after-effect can be stored, Masland (1969) found that 15 minutes of adaptation produced an after-effect that was still visible when the static stimulus was viewed around 24 hours later. There is no way that the adaptation of single neurons can explain an effect that lasts so long.

Finally, a caveat: I think there may be two different processes that produce motion after-effects. One mediates the immediate effect you get when you look away from the screen after viewing the trippy stimulus that Omnibrain found. This effect goes away after a few seconds, and while it does come back when looking at the static version of the stimulus, the effect is not all that strong. The other process mediates the after-effect you get when viewing a static version of the stimulus you adapted to. It’s worth noting that some stimuli (particularly rotating ones) don’t produce much of an immediate effect, but you do get a nice after-effect when you view the static version. This suggests to me that the ratio model may not be plain wrong – but it’s only part of the story.

Unfortunately, no one seems to know what the other part is (but see van de Grind et al, 2004 for a good but heavy-going attempt to make sense of it all).
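
That said, the flavour of their gain-control account can be caricatured in a few lines of code. The sketch below is my own loose paraphrase of the idea, not their actual model: if a motion sensor's gain only recalibrates while the sensor is receiving input, then an adapted gain is simply frozen during a period of darkness – storage for free.

    # Toy gain-control sketch (a loose paraphrase of the idea in
    # van de Grind et al., 2004 - not their actual model).
    # The gain recalibrates only while input drives the sensor, so an
    # adapted (lowered) gain is 'stored' through darkness.

    def step_gain(gain, drive, adapt=0.02, recover=0.05):
        if drive > 0:
            gain -= adapt * drive * gain    # strong drive depresses the gain
            gain += recover * (1.0 - gain)  # any input lets it recalibrate
        # with no input (darkness), the gain is left untouched: storage
        return gain

    gain = 1.0
    for _ in range(100):                     # adapt to strong motion
        gain = step_gain(gain, drive=4.0)
    print(f"after adaptation:  {gain:.2f}")  # well below the resting value

    for _ in range(100):                     # darkness: nothing drives the sensor
        gain = step_gain(gain, drive=0.0)
    print(f"after darkness:    {gain:.2f}")  # unchanged - the adaptation is stored

    for _ in range(100):                     # static pattern: weak input
        gain = step_gain(gain, drive=0.5)
    print(f"after static view: {gain:.2f}")  # climbs back up; the after-effect
                                             # decays only while you view a pattern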

References
Barlow, H.B., & Hill, R.M. (1963). Evidence for a Physiological Explanation of the Waterfall Phenomenon and Figural After-Effects. Nature, 200, 1345.

Masland, R.H. (1969). Visual Motion Perception: Experimental Modification. Science, 165, 819-821.

Spigel, I.M. (1960). The Effects of Differential Post-Exposure Illumination on the Decay of the Movement After-Effect. Journal of Psychology, 50, 209-210.

Spigel, I.M. (1962). Contour Absence as a Critical Factor in the Inhibition of the Decay of a Movement Aftereffect. Journal of Psychology, 54, 221-228.

Thompson, P., & Wright, J. (1994). The Role of Intervening Patterns in the Storage of the Movement Aftereffect. Perception, 23, 1233-1240.

Van de Grind, W.A., van der Smagt, M.J., & Verstraten, F.A.J. (2004). Storage for Free: A Surprising Property of a Simple Gain-Control Model of Motion Aftereffects. Vision Research, 44, 2269-2284.

You Make Pearson Cry #3: Daycare Correlates March 28, 2007

Posted by Johan in Developmental Psychology, You Make Pearson Cry.

A recent report on the NICHD study on early childcare argues that time spent in daycare is associated with disruptive behaviour:

keeping a preschooler in a day care center for a year or more increased the likelihood that the child would become disruptive in class — and that the effect persisted through the sixth grade.

The effect was slight, and well within the normal range for healthy children, the researchers found. And as expected, parents’ guidance and their genes had by far the strongest influence on how children behaved.

But the finding held up regardless of the child’s sex or family income, and regardless of the quality of the day care center. With more than two million American preschoolers attending day care, the increased disruptiveness very likely contributes to the load on teachers who must manage large classrooms, the authors argue.

On the positive side, they also found that time spent in high-quality day care centers was correlated with higher vocabulary scores through elementary school.

Controlling for genetics, parenting, family income, and gender is probably a good start, but even with such controls it’s not exactly difficult to think of other interpretations of the data:

  1. Disruptive children get put in daycare more because their parents need time off.
  2. Children in daycare have parents who generally have less time for them, in and out of daycare.
  3. Parents who put their children in daycare differ from stay-at-home parents on some other third variable.

I’m also a bit curious about how they controlled for parenting, since I imagine a pretty crucial factor in parenting styles is going to be whether you believe in daycare or not. But before I spend too much time trying to make sense of the data, what does it look like, exactly?

In 2001, the authors reported that children who spent most of their day in care not provided by a parent were more likely to be disruptive in kindergarten. But this effect soon vanished for all but those children who spent a significant amount of time in day care centers.

Every year spent in such centers for at least 10 hours per week was associated with a 1 percent higher score on a standardized assessment of problem behaviors completed by teachers, said Dr. Margaret Burchinal, a co-author of the study and a psychologist at the University of North Carolina.

The statistical trick of creating extreme groups and then testing for significance tells me that they failed to find a significant correlation between time in daycare and scores on the test. By taking the extreme cases you can squeeze out a significant result, but unfortunately, you’re no longer just comparing daycare to no daycare. You’re comparing a lot of daycare to no daycare at all, and this may make your groups less comparable on other variables since you’re looking at extreme groups only, without considering the bulk of the sample that is somewhere in-between.
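
A quick simulation illustrates the problem. The numbers below are made up and have nothing to do with the NICHD data; the point is just that comparing only the extremes of a sample inflates the standardized effect size relative to the modest association in the full sample:

    # Toy illustration of the extreme-groups problem (made-up data,
    # not the NICHD dataset).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 400
    hours = rng.uniform(0, 40, n)                # hours per week in daycare
    score = 0.01 * hours + rng.normal(0, 1, n)   # weak true association

    r, p = stats.pearsonr(hours, score)
    print(f"full sample:    r = {r:.2f} (p = {p:.3f})")

    # Keep only the bottom and top quartiles of daycare hours.
    lo, hi = np.quantile(hours, [0.25, 0.75])
    low_grp, high_grp = score[hours <= lo], score[hours >= hi]
    pooled_sd = np.sqrt((low_grp.var(ddof=1) + high_grp.var(ddof=1)) / 2)
    d = (high_grp.mean() - low_grp.mean()) / pooled_sd
    print(f"extreme groups: d = {d:.2f}")        # looks bigger than r suggests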

The effect size is a bit of a joke. A 1 percent difference in scores on a test? And this is a cause for concern? Even while admitting that there were positive correlations with vocabulary scores? Making the unlikely assumption that daycare is the cause of both these effects, perhaps scoring 1 percent higher on this disruptive behaviour measure is actually a worthwhile trade-off for the improved vocabulary scores?

While a large sample is always better, I think this study raises the question of how small an effect can be and still be considered meaningful. This $200 million project (no exaggeration) could obviously afford a sample huge enough to get significance on a 1 percent difference. But what does this mean? Should parents take it into consideration? It’s quite possible that a 1 percent difference on this measure of disruptive behaviour is linked to a whole pile of other, less politically loaded factors that we know nothing about, because no one is looking for them.

Ultimately, a major part of why so much of basic sociology is open to interpretation is the reliance on correlational measures. Sociology, at its best, should be able to inform policy – to tell us what the best option is, for parents and for policy-makers. In this case, with a $200 million budget, I could easily see how experimental methods could be employed instead: get a sample of lower-income families and offer them free daycare in exchange for participation in the study. Out of the people who volunteer, assign half to daycare, and half to a control group. The study wouldn’t generalise beyond that part of the socioeconomic spectrum, but at least you would be able to show causation.

Bring on the Encephalon March 26, 2007

Posted by Johan in Neuroscience.

Encephalon #19 is out at Peripersonal Space. It’s somewhat of a special on the supposed dichotomy between emotion and reason, a topic close to my area of interest.

I particularly liked a report by Bohemian Scientist on a recent squabble in neuroscience regarding the merits of a paper in Nature which criticised our current model of membrane potentials. This model forms part of the bedrock of neuroscience research (think evolution through natural selection, almost), so naturally, the authors are catching some flak for rocking the boat. Bohemian Scientist ties this controversy nicely to the wider academic tradition of less-than-open-minded and enlightened debate.

Also, Psyblog is putting together a top 10 list of the most important studies in Psychology. The nominations are in, and now it’s time to vote. While I normally don’t think of top lists as particularly meaningful, this one is useful for a quick check that you’re at least familiar with the 10 nominees. Finally, Brainblogger has an interesting summary of key papers in the budding field of neural implants.

Moral Decision-Making and the Ventromedial Prefrontal Cortex March 23, 2007

Posted by Johan in Emotion, Neuroscience, Social Neuroscience.

According to one popular theory of the role of affect in executive function, the ventromedial prefrontal cortex (VMPFC) is the interface between affect and decision-making. Affective responses to possible avenues of action arise in regions such as the amygdala, and are weighed into the decision-making process. In this view, then, the abnormal behaviour observed after lesions to the VMPFC is caused by a lack of affective input into decision-making, with rational cost-benefit analysis left intact.

Koenigs et al (2007) decided to investigate this further by comparing the performance of a group of patients with lesions to the VMPFC with that of healthy controls and an additional group with lesions elsewhere in the brain. The participants were presented with dilemmas that were either non-moral (e.g., would you take the slow, scenic route or the fast, boring route?), impersonal and moral (e.g., choosing whether to divert dangerous smoke into a room holding one person, or to let it reach a room holding three people), or personal and moral (e.g., choosing to save five patients by taking another patient’s organs against his will).

As the figure in Koenigs et al (2007) shows, patients with lesions to the ventromedial prefrontal cortex respond “too rationally” to personal moral dilemmas. Healthy controls generally opted not to act in these dilemmas (perhaps an example of what Kahneman called omission bias), while VMPFC patients generally endorsed immoral but ultimately rational behaviours.

I think this story is particularly interesting because this is a case where VMPFC patients are in a sense more rational than the rest of us. Objectively, if you want to save as many lives as possible, you should act as the VMPFC patients do. This is unusual, because the behaviour of frontal lobe patients is generally not best described as rational. For example, in studies using the Iowa gambling task, patients with the same type of brain damage show an inability to weigh costs against benefits – they tend to use a high-risk, high-reward strategy that is ultimately less successful (and thus less rational) than an alternative low-risk, low-reward strategy.
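
To see why the high-risk strategy is the irrational one, it helps to look at the payoffs. The figures below follow a commonly described version of the task’s deck structure and are illustrative only, not the exact values from any particular study:

    # Rough expected values for a commonly described version of the
    # Iowa gambling task (illustrative figures only).
    # 'Bad' decks pay large rewards but carry even larger penalties;
    # 'good' decks pay less per card but come out ahead in the long run.

    decks = {
        # deck: (reward per card, average penalty per card)
        "A (high-risk)": (100, 125),
        "B (high-risk)": (100, 125),
        "C (low-risk)":  (50,  25),
        "D (low-risk)":  (50,  25),
    }

    for name, (reward, penalty) in decks.items():
        print(f"deck {name}: expected value {reward - penalty:+d} per card")

    # The high-reward decks lose money on average, so a consistent
    # preference for them - the pattern VMPFC patients show - is the
    # less rational strategy despite its larger immediate payoffs.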

So it would seem that the behaviour of the VMPFC patients in the study by Koenigs et al (2007) can’t be explained by saying that the patients are acting solely on a cold, rational cost-benefit analysis, without input from affective and moral components. If it were this simple, VMPFC patients would not be impaired at the Iowa gambling task.

One way of explaining why VMPFC patients are impaired at both these tasks is to assume a general insensitivity to fear. The case of choosing to save lives through involuntary surgery and the case of choosing a high-risk strategy when gambling both reflect situations where most of us are probably deterred to some extent by fear of the consequences. To quote Koenigs et al (2007):

In the absence of an emotional reaction to harm of others in personal moral dilemmas, VMPC patients may rely on explicit norms endorsing the maximization of aggregate welfare and prohibiting the harming of others. This strategy would lead VMPC patients to a normal pattern of judgements on low-conflict personal dilemmas but an abnormal pattern of judgements on high-conflict personal dilemmas, precisely as was observed.

If you want to know more, the Neurophilosopher beat me to blogging this article. There is also a write-up in the New York Times.

Would You Get Tested? March 18, 2007

Posted by Johan in Abnormal Psychology, Behavioural Genetics.

(via Slashdot)

The NY Times has a long story on what it’s like to be diagnosed with Huntington’s Disease early, decades before the first symptoms. In a cruel twist, the woman that the article follows works at a nursing home, looking after, among others, Huntington’s patients.

Huntington’s Disease is a heritable condition which results in gradual but massive degradation of the brain. It is caused by an abnormal number of repeats on the short arm of chromosome 4, and the number of repeats predicts the age of onset. Initially, symptoms such as jerky, uncontrollable movements appear, but these are followed by personality changes, cognitive impairments, and loss of motor control, particularly over the mouth and face, eventually rendering the patient unable to swallow. Huntington’s Disease is a dominant trait, meaning that unlike some other genetic diseases, such as haemophilia, you only need to inherit the gene from one parent. This, coupled with the rarity of the disease, means that in most cases the child of a Huntington’s patient has a 50% risk of inheriting the disease, unless both parents have Huntington’s or the afflicted parent is homozygous (i.e., inherited the gene from both of their parents).
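
The 50% figure is just Mendelian bookkeeping, which a short sketch makes explicit (using ‘H’ for the dominant disease allele and ‘h’ for the normal one):

    # Mendelian risk calculation for an autosomal dominant disease such
    # as Huntington's: a child is affected if it inherits at least one
    # copy of the disease allele ('H').
    from itertools import product

    def risk(parent1, parent2):
        """Fraction of equally likely children carrying at least one 'H'."""
        children = [a + b for a, b in product(parent1, parent2)]
        return sum("H" in child for child in children) / len(children)

    print(risk("Hh", "hh"))  # the typical case: one heterozygous parent -> 0.5
    print(risk("HH", "hh"))  # homozygous affected parent -> 1.0
    print(risk("Hh", "Hh"))  # both parents heterozygous -> 0.75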

While a test has been available for some time, many people with Huntington’s in their family choose not to get tested, as a positive result is essentially a death sentence. The woman in the NY Times story seems to be of the opposite opinion: she felt that she could not plan her life until she knew how much time she would have.

I also came across a short documentary made by a man whose mother is afflicted. It’s a good opportunity to get a glimpse into what the disease is like.
