
How useful is a group-level truth to the individual? September 18, 2007

Posted by Johan in Applied, Economics, Math & Stats, Social Neuroscience.

Today’s post is about the tension between the often weak predictions of psychology and other sciences, and the decisions of the individual. My blog stats might serve as an example. The figure below plots the number of visitors over the last 30 days.

You would be hard pressed to make any predictions regarding how many visitors the blog will have tomorrow. Nor would you be able to say what the overall trend in the data is. The story is similar when we look at the number of visitors per week:

You can now predict that the blog should receive somewhere between 400 and 700 visitors this coming week, but it looks like the blog is receiving pretty much a constant stream of visitors. This is unlikely, considering how little blogging has taken place lately. Let’s look at the number of visitors by month instead.

At this level, the trend is very clear. The blog was growing steadily up until June, which is almost exactly when I went on a summer break.

The point here is that trends may emerge from a set of data that looks thoroughly chaotic on a lower level. This is a familiar story to psychologists, economists, and others seeking to predict human behaviour: predictions are only valid on the group level (at best!). In the example above, I averaged over time to produce a trend, while predicting human behaviour usually involves averaging over many data points (i.e., people) instead. The common principle of patterns emerging from noise holds, however.
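To make the point concrete, here is a minimal sketch with simulated data (the daily counts below are invented, not the actual blog stats): the day-level numbers look chaotic, the weekly sums are still bumpy, and only the monthly totals make the underlying trend visible.

```python
# A minimal sketch with simulated data: daily visitor counts are noisy,
# but aggregating them over longer windows exposes the underlying trend.
# All numbers here are invented for illustration.
import random

random.seed(0)

days = 180
# Underlying trend: slow growth that stalls after day 120 (a "summer break").
trend = [10 + 0.1 * d if d < 120 else 22 - 0.05 * (d - 120) for d in range(days)]
daily = [max(0, round(t + random.gauss(0, 8))) for t in trend]  # heavy day-to-day noise

def aggregate(counts, window):
    """Sum counts into consecutive windows of `window` days."""
    return [sum(counts[i:i + window]) for i in range(0, len(counts), window)]

print("first 10 days:  ", daily[:10])           # looks chaotic
print("weekly totals:  ", aggregate(daily, 7))  # still bumpy
print("monthly totals: ", aggregate(daily, 30)) # the trend emerges
```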

So far, so obvious. However, the corollary of this is that when it comes to the individual case, most predictions that scientific psychology offers are so weak that they are next to useless.

For example, while I and other scientifically-oriented psychologists mock the various psychoanalysts and other psychotherapists who come up short compared to cognitive behavioural therapy in the research, the differences in success rates between the therapies are rather subtle. For a government welfare program where thousands of clients are likely to use the service, it makes a lot of sense to go for the best-supported remedy, as the subtle differences between therapies become quite noticeable at that scale. For a depressed individual, however, the difference in efficacy between the therapies is trivial.
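Some back-of-the-envelope arithmetic, with success rates that are purely illustrative rather than taken from the therapy literature, makes the scale argument explicit:

```python
# Illustrative arithmetic only: the success rates below are invented, not
# drawn from the therapy literature. The point is that a small difference
# matters to a large programme but barely to any one person.
p_best = 0.55    # hypothetical success rate of the best-supported therapy
p_other = 0.50   # hypothetical success rate of an alternative therapy

clients = 10_000  # a government programme's caseload

extra_recoveries = (p_best - p_other) * clients
print(f"Extra recoveries across the programme: {extra_recoveries:.0f}")  # 500

# For one individual, the same difference is just a few percentage points:
print(f"Change in one person's chance of recovery: {p_best - p_other:.0%}")
```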

I think this is part of the reason why there is a considerable rift between clinical psychology and research psychology. The researcher looks at the group statistics and sees clear advantages for one remedy or another, while the clinician meets one individual after another, and is thus exposed fully to the amazing variability in how well a given approach works.

Similar concerns arise in epidemiology, the branch of medicine that deals with prevention rather than cures. The NY Times has an excellent article on the methodological difficulties that epidemiology faces, which I will only sample from. The ideal randomised double-blind controlled trial (where patients randomly receive treatment or placebo, and neither patient nor researcher knows who receives what) is very expensive, so much epidemiological research relies on weaker association studies, in which patients fill out questionnaires about various health practices (e.g., how often do you exercise?) and the researchers then see how well the sample fares. The goal is to correlate diagnoses or mortality rates with the questionnaire scores in order to discover what causes the disease. Or rather, what behaviours are associated with the disease.
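As a toy illustration of that logic (all counts below are invented), an association study boils down to cross-tabulating a self-reported behaviour against an outcome and summarising the association, for instance as an odds ratio:

```python
# A toy sketch of the association-study logic described above: cross-tabulate
# a self-reported behaviour against an outcome and compute an odds ratio.
# All counts are made up; the point is the calculation, not the numbers.

# 2x2 table: rows = reports regular exercise (yes/no), columns = disease (yes/no)
exposed_cases, exposed_controls = 40, 960
unexposed_cases, unexposed_controls = 60, 940

odds_exposed = exposed_cases / exposed_controls
odds_unexposed = unexposed_cases / unexposed_controls
odds_ratio = odds_exposed / odds_unexposed

print(f"Odds ratio: {odds_ratio:.2f}")  # < 1 suggests an association with lower risk
# Note: an odds ratio alone says nothing about causation; people who report
# exercising may differ in many other ways (income, diet, smoking, ...).
```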

The biggest feather in epidemiology’s hat is the link between cancer and smoking. However, as the NY Times article points out, this effect was truly magnificent, with smokers experiencing an increase in the risk of certain cancers by thousands of percent compared to non-smokers. Current studies into the health effects of hormone replacement therapy, vitamins, and omega-3 fish oils deal with far smaller effect sizes, and thus other explanations for the observed associations become more probable.

The NY Times article does a good job explaining why the finding that people who take vitamins live longer doesn’t necessarily mean that vitamins keep you alive. I want to emphasise a different aspect of these studies: because the change in mortality or disease incidence is usually counted in the tens of percent at best, following this advice really isn’t going to make a dramatic difference to the individual. Much as for the depressed patient choosing between therapies, the effects of making these “fine tuning” health choices are so small that you might as well not bother (see also a previous post on the link between birth order and IQ).
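A quick, purely hypothetical calculation shows why: a relative risk reduction in the tens of percent translates into a very small absolute change when the baseline risk is low.

```python
# Hypothetical numbers only: how a relative risk reduction in the "tens of
# percent" range translates into a small absolute change for one person.
baseline_risk = 0.02          # assumed 10-year risk of the disease: 2%
relative_reduction = 0.20     # a reported 20% relative risk reduction

risk_with_intervention = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - risk_with_intervention

print(f"Risk without the intervention:  {baseline_risk:.1%}")           # 2.0%
print(f"Risk with the intervention:     {risk_with_intervention:.1%}")  # 1.6%
print(f"Absolute change for one person: {absolute_reduction:.2%}")      # 0.40 points
```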

But of course most of us do bother, myself included. No one wants to die any earlier than they have to. Unfortunately, given the weak nature of the evidence on which we base these decisions, it is not outside the realm of possibility that we aren’t just wasting money but also damaging our health. The case in point would be hormone replacement therapy (HRT), which eases the strain of menopause. As the NY Times article outlines, HRT was originally found to have protective effects on mortality, if anything, and the drugs became massively popular. Later research instead found that HRT may in fact increase mortality, and the field has since settled on an uncomfortable position where the experts deem HRT harmful if you start it later in life, but helpful if you start it as you go through menopause. The risks associated with vitamins and other supplements are probably smaller, but it isn’t necessarily the case that you have nothing to lose by keeping up with the latest health craze.

The NY Times article advises the reader to simply ignore most health advice (including, one wonders, that regularly published in the Times), unless the effect sizes are very large (as in smoking and cancer), or there is what the author terms a “bolt from blue sky” effect (as in the link between asbestos and certain types of cancer), where few alternative accounts of the association are plausible.

So far, the conclusion is rather glum: psychology, epidemiology, and other sciences not covered here may have little firm advice to offer the individual. It’s worth remembering, though, that this conclusion is only valid for the individual at a single point in time. For example, there are studies showing that mirroring the poses and postures of the person you’re speaking to tends to make that person like you better. This helps me little in that one crucial employment interview, since the effect is subtle and thus unlikely to prove a deciding factor in whether I get the job. However, if I were Machiavellian enough to make posture mirroring my habit, over time it might work out in my favour, much like how patterns emerge in the blog stats above if you average over long enough time periods.
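A rough simulation, with probabilities I have simply made up for illustration, shows how a subtle per-encounter advantage that rarely decides any single interview can still add up across many encounters:

```python
# A rough simulation with invented probabilities: a subtle effect (a few
# percentage points per encounter) rarely decides any single interview,
# but over many encounters the advantage accumulates.
import random

random.seed(1)

p_base = 0.20      # hypothetical chance of a good outcome without mirroring
p_mirror = 0.24    # hypothetical chance with mirroring (a subtle +4 points)
encounters = 200   # interviews and meetings accumulated over the years
trials = 10_000    # simulation runs

def successes(p):
    """Count good outcomes across all encounters at success probability p."""
    return sum(random.random() < p for _ in range(encounters))

gains = [successes(p_mirror) - successes(p_base) for _ in range(trials)]
print(f"Average extra good outcomes over {encounters} encounters: "
      f"{sum(gains) / trials:.1f}")   # roughly 0.04 * 200 = 8
```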

Can research be both relevant and fun? April 29, 2007

Posted by Johan in Cognition, Economics.

While most science bloggers were up in arms over Shelley’s successful campaign against Wiley, a bit of controversy has been brewing over in economics. (I had no idea I was interested in economics, but judging by the amount of blogging that I’ve done on it, I am. Go figure.) Noam Scheiber wrote an article in The New Republic, subtly titled How Freakonomics is Ruining the Dismal Science. The article has now found its way online, thanks to a blogger who almost certainly is in violation of fair use, unlike the Retrospectacle head honcho.

For those of you who have somehow missed it, Freakonomics is a book that rogue economist Steve Levitt co-wrote with Steve Dubner. Essentially, it’s a collection of pop-science write-ups of studies Levitt has published over the years. This research, concerning unusual topics like the economics of drug dealing and regression analyses that investigate whether sumo wrestling is rigged, turned out to have quite a bit of mass appeal, and Freakonomics promptly sold only marginally fewer copies than the Bible back in 2005.

Not everyone is so impressed. As the title hints, Scheiber’s article is a scathing attack on Levitt’s research, with some borderline ad hominem elements. The article’s central thesis is that Levitt’s popular and academic success is part of a larger movement in economics that has had a dangerous influence on impressionable economics grad students. Apparently, they have now abandoned the rigorous and perhaps dull study of the macro-economy in favour of fast and fun studies of unusual topics, Freak-style. Scheiber argues that the consequence of this development is that method has become more important than theory. The studies no longer reveal anything of theoretical significance – instead they are novelties, getting attention not because of what they reveal, but because of how they reveal it. Oh, and along the way we also get to learn that Levitt has a squeaky voice and is a poor lecturer, in a perhaps less well-considered comment towards the end of the article.

Anyone who achieves success on the level of Levitt is bound to have a few scathing critics on the web, but the interesting bit about this particular case is that Levitt has responded to Scheiber’s criticisms on the Freakonomics blog. Apart from responding to Scheiber’s ad hominems and pointing out a few inaccuracies (apparently, Scheiber does not have a PhD in economics and has never met Levitt, contrary to what his article seems to suggest), Levitt argues rather forcefully that the use of “clever” methods in no way precludes theoretical relevance. In support of this claim, he points to a number of hard, real-life issues that his research has tackled (not citing the sumo study, surprisingly).

In a way, Levitt is absolutely right. Many of the studies in Freakonomics are ones that, to quote the awarding criterion for the IgNobel prize that Levitt is sure to win sooner or later, make you laugh and then think. For instance, a chapter in the book is dedicated to Levitt’s somewhat controversial notion that the vast drop in violent crime that the US experienced in the 1980s and 90s is a direct consequence of Roe vs. Wade, 10-20 years earlier. Levitt conjures up a range of statistics and deductive reasoning to support an argument that goes something like this:

1. If aborted fetuses are unwanted, then the babies who, before 1973, were born rather than aborted were also unwanted.
2. Unwanted children are at risk for crime and anti-social behaviour.
3. Thus, following Roe vs. Wade in 1973, unwanted children were no longer being born at the same rate. This resulted in a drop in crime some 15-20 years later, because that is when the unwanted children would otherwise have started their criminal careers.

The argument is simple enough, but it is also quite original. Most people have an initial visceral reaction to the notion of somehow equating unborn babies with potential criminals, but once you get past that point, the idea is not easy to refute.

To be fair, Scheiber has a point in that Levitt’s research is light on theory – this is something that Levitt himself admits to in Freakonomics. The controversial crime drop theory aside, most of the research in Freakonomics makes a practical point about real life, but cannot be easily fitted into the theoretical framework of economics. A lot of it is really best classified as sociology or political science. Perhaps part of the reason why Levitt seems to bother some economists is that he does this research as a professor of economics, often publishing his results in economics journals.

It’s not dissimilar to the way most empirically-based psychologists react to psychoanalysts, reflexologists, or even Dr Phil: by calling themselves psychologists, these groups contribute to a definition, or a stereotype, of psychology that many people in research detest. Much of the ire that both Levitt and Dr Phil receive from their peers is probably caused by the way they “make us look bad.” Neither one would get nearly as much of a reaction if they didn’t insist on calling themselves an economist and a psychologist, respectively.

Anyway, I wonder if there will ever be a Freakonomics of psychology. The best-selling psychology researchers, people like Pinker or Damasio, are perhaps better known for their style of writing and insight than for the sheer originality or wow-factor of their research. Still, there is some psychology research out there that would fit the bill – for one, Godden and Baddeley’s (1975) study on the context dependency of memory comes to mind. In this study, divers encoded and recalled word lists either on land or under water, which produced a nice crossover interaction: recall was superior when the encoding and recall contexts were identical, as can be seen below.

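A rough sketch of that pattern, using means that only approximate the published figures, looks something like this:

```python
# Illustrative means only (roughly the pattern Godden & Baddeley report, not
# their exact figures): recall is best when the learning and recall
# environments match, which produces the crossover interaction.
mean_recall = {
    ("land", "land"): 13.5,
    ("land", "water"): 8.6,
    ("water", "land"): 8.4,
    ("water", "water"): 11.4,
}

print(f"{'learned':>8} {'recalled':>9} {'words':>6}")
for (learn, recall), words in mean_recall.items():
    print(f"{learn:>8} {recall:>9} {words:>6.1f}")

# Same-context cells beat different-context cells for both learning
# environments -- the crossover referred to above.
```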

Another prime example would be the (now numerous) studies that use a person in a gorilla suit to probe inattentional blindness (one example). The idea is to have the participants perform a demanding visual task while casually letting a gorilla walk by. Strikingly, it is unusual for participants to report having seen the gorilla at all when asked afterwards.

However, there is no real Levitt in psychology yet. Psychologists win IgNobels all the time, but it’s possible that most are too concerned with their reputation to don the gorilla suit for more than the odd study…

References
Godden, D. R., & Baddeley, A. D. (1975). Context-dependent memory in two natural environments: On land and underwater. British Journal of Psychology, 66, 325-331.
