How useful is a group-level truth to the individual? September 18, 2007 | Posted by Johan in Applied, Economics, Math & Stats, Social Neuroscience.
Today’s post is about the tension between the often-times weak predictions of psychology and other sciences, and the decisions of the individual. My blog stats might serve as an example. The figure below plots the number of visitors over the last 30 days.
You would be hard pressed to make any predictions regarding how many visitors the blog will have tomorrow. Nor would you be able to say what the overall trend in the data is. The story is similar when we look at the number of visitors per week:
You can now predict that the blog should receive somewhere between 400 and 700 visitors this coming week, but it looks like the blog is receiving pretty much a constant stream of visitors. This is unlikely, considering how little blogging has taken place lately. Let’s look at the number of visitors by month instead.
At this level, the trend is very clear. The blog was growing steadily up until June, which is almost exactly when I went on a summer break.
The point here is that trends may emerge from a set of data that looks thoroughly chaotic at a lower level. This is a familiar story to psychologists, economists, and others seeking to predict human behaviour: predictions are only valid at the group level (at best!). In the example above, I averaged over time to produce a trend, while predicting human behaviour usually involves averaging over many data points (i.e., people) instead. The common principle of patterns emerging from noise holds, however.
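To make this concrete, here is a small Python sketch of the same idea. The numbers are made up (a gentle upward trend buried in noise, not my actual stats), but they show how a trend that is invisible day to day emerges once you average by month:

```python
import random
import statistics

random.seed(42)

# Hypothetical daily visitor counts: a slow upward trend (+0.2 visits/day)
# buried in noise around a base of 20 visits per day.
days = 180
daily = [random.gauss(20 + 0.2 * d, 8) for d in range(days)]

# Day to day, the series looks chaotic: consecutive days differ by far
# more than the underlying trend moves (0.2 visits per day).
day_jumps = [abs(daily[d + 1] - daily[d]) for d in range(days - 1)]
print(f"typical day-to-day jump: {statistics.mean(day_jumps):.1f} visits")

# But averaging each 30-day block washes out the noise and recovers
# the trend cleanly.
monthly = [statistics.mean(daily[m * 30:(m + 1) * 30])
           for m in range(days // 30)]
print("monthly means:", [round(m, 1) for m in monthly])
```

The daily jumps dwarf the trend, yet the monthly means march steadily upward, because averaging 30 days shrinks the noise by a factor of roughly the square root of 30.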
So far, so obvious. However, the corollary of this is that when it comes to the individual case, most predictions that scientific psychology offers are so weak that they are next to useless.
For example, while I and other scientifically-oriented psychologists mock the various psychoanalysts and other psychotherapists whose methods come up short against cognitive behavioural therapy in the research, the differences between the therapies' success rates are rather subtle. For a government welfare programme where thousands of clients are likely to use the service, it makes a lot of sense to go for the best-supported remedy, since even subtle differences add up across that many cases. For a depressed individual, however, the difference in efficacy between the therapies is trivial.
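A toy calculation illustrates the point. The recovery rates here are invented for illustration, not taken from the therapy literature:

```python
# Hypothetical recovery rates -- the numbers are made up for illustration.
p_cbt, p_other = 0.55, 0.50

# For a welfare programme treating 10,000 clients, the better-supported
# therapy yields hundreds of extra recoveries in expectation.
n = 10_000
extra_recoveries = n * (p_cbt - p_other)
print(f"expected extra recoveries across the programme: {extra_recoveries:.0f}")

# For one individual, the same difference merely shifts the odds of
# recovery by five percentage points -- either way, close to a coin flip.
print(f"individual odds: {p_cbt:.0%} vs {p_other:.0%}")
```

The same five-point gap is decisive for the programme planner and nearly irrelevant to the single patient.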
I think this is part of the reason why there is a considerable rift between clinical psychology and research psychology. The researcher looks at the group statistics and sees clear advantages for one remedy or another, while the clinician meets one individual after another, and is thus exposed fully to the amazing variability in how well a given approach works.
Similar concerns arise in epidemiology, the branch of medicine that deals with prevention rather than cure. The NY Times has an excellent article on the methodological difficulties that epidemiology faces, which I will only sample from. The ideal randomised double-blind controlled trial (where patients randomly receive treatment or placebo, and neither patient nor researcher knows who receives what) is very expensive, so much epidemiological research relies on weaker association studies, in which patients fill out questionnaires about various health practices (e.g., how often do you exercise?) and the researchers then track how the sample fares over time. The goal is to correlate diagnoses or mortality rates with the questionnaire scores in order to discover what causes the disease. Or rather, what behaviours are associated with the disease.
The biggest feather in epidemiology's cap is the link between cancer and smoking. However, as the NY Times article points out, this effect was truly magnificent, with smokers experiencing an increase in the risk of certain cancers of thousands of percent compared to non-smokers. Current studies into the health effects of hormone replacement therapy, vitamins, and omega-3 fish oils deal with far smaller effect sizes, and thus other explanations for the observed associations become more probable.
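The arithmetic behind this contrast is worth spelling out. Here is a sketch with illustrative numbers (mine, not the article's), comparing a smoking-sized relative risk to a supplement-sized one:

```python
# Illustrative figures only -- not exact numbers from the article.
baseline = 0.005        # assumed baseline risk of a given cancer: 0.5%
rr_smoking = 20.0       # a smoking-sized relative risk (a +1,900% increase)
rr_supplement = 1.2     # a supplement-sized association (a +20% increase)

for label, rr in [("smoking-sized", rr_smoking),
                  ("supplement-sized", rr_supplement)]:
    absolute = baseline * rr
    print(f"{label} effect: risk goes from {baseline:.1%} to {absolute:.1%}")
```

A twenty-fold relative risk swamps any plausible confound, while a 1.2-fold association moves the absolute risk by a fraction of a percentage point, which is comfortably within the reach of alternative explanations.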
The NY Times article does a good job explaining why the finding that people who take vitamins live longer doesn’t necessarily mean that vitamins keep you alive. I want to emphasise a different aspect of these studies: because the change in mortality or disease incidence is usually counted in the tens of percent at best, following this advice really isn’t going to make a dramatic difference to the individual. Much like the depressed patient choosing therapies, the effects of making these “fine tuning” health choices are so small that you might as well not bother (see also a previous post on the link between birth order and IQ).
But of course most of us do bother, myself included. No one wants to die any earlier than they have to. Unfortunately, given the weak evidence upon which we base these decisions, it is not outside the realm of possibility that we are not just wasting money but also damaging our health. A case in point is hormone replacement therapy (HRT), which eases the strain of menopause. As the NY Times article outlines, HRT was originally found to have protective effects on mortality, if anything, and the drugs became massively popular. Later research found instead that HRT may increase mortality, and the field has since settled on an uncomfortable compromise: the experts deem HRT harmful if you start it later in life, but helpful if you start it as you go through menopause. The risks associated with vitamins and other supplements are probably smaller, but it is not necessarily the case that you have nothing to lose by keeping up with the latest health craze.
The NY Times article advises the reader to simply ignore most health advice (including that regularly published in the Times, one wonders), unless the effect sizes are very large (as in smoking and cancer), or when there is what the author terms a "bolt from blue sky" effect (as in the link between asbestos and certain types of cancer), where few alternative accounts of the association are plausible.
So far, the conclusion is rather glum: psychology, epidemiology, and other sciences not covered here may have little firm advice to offer the individual. It's worth remembering, though, that this conclusion only holds for the individual at a single point in time. For example, studies show that mirroring the poses and postures of the person you're speaking to tends to make that person like you better. This helps me little in that one crucial employment interview, since the effect is subtle and thus unlikely to prove the deciding factor in whether I get the job. However, if I were Machiavellian enough to make posture mirroring a habit, over time it might work out in my favour, much as patterns emerge in the blog stats above when you average over long enough periods.
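A quick simulation makes this last point concrete. The per-encounter probabilities are pure assumption (a three-point nudge), chosen only to show how a uselessly small edge in any single encounter becomes a reliable advantage across a career of encounters:

```python
import random

random.seed(1)

# Assumed effect: mirroring nudges the chance of making a good
# impression from 50% to 53% per encounter -- negligible in any
# single interview.
p_plain, p_mirror = 0.50, 0.53

def successes(p, n):
    """Count good impressions across n independent encounters."""
    return sum(random.random() < p for _ in range(n))

# Simulate many "careers" of 500 encounters each, and ask how often
# the habitual mirrorer ends up with more good impressions overall.
n_encounters, n_sims = 500, 2000
edge_wins = sum(
    successes(p_mirror, n_encounters) > successes(p_plain, n_encounters)
    for _ in range(n_sims)
)
print(f"mirrorer comes out ahead in {edge_wins / n_sims:.0%} of simulated careers")
```

In any one encounter the habit is a coin flip, but aggregated over hundreds of encounters the mirrorer comes out ahead most of the time, for exactly the same reason the monthly blog averages reveal a trend the daily counts hide.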