Dr. Andreas Eenfeldt is 6’7” tall – a fact I’ve managed to work into two pre-cruise roasts, as well as the speech I gave on this year’s cruise. You can always spot the man in a crowd, unless he happens to be standing amidst an NBA team.
You might expect that his brother is equally tall, or that he’ll have a son someday who’s as tall or taller. But that expectation would be wrong. His brother (an affable guy I’ve met on two cruises) is 6’3” – still tall, but four inches shorter than Andreas. That’s because Andreas is an anomaly. He’s proof of the statistical principle that given enough chances, chance happens. And what happened in his case was probably a genetic quirk that produced some extra human growth hormone.
In his book How Not to Be Wrong: The Power of Mathematical Thinking, math professor Jordan Ellenberg explains how anomalies can produce study results that are statistically significant, but nonetheless due entirely to chance. I recounted those points in a previous post.
In a later chapter, Ellenberg describes how we can be fooled by the flip side of given enough chances, chance happens: regression to the mean. After chance happens, things tend to return to normal.
Ellenberg begins the chapter (titled The Triumph of Mediocrity) by recounting the work of a professor of statistics named Horace Secrist. After examining data on hundreds of businesses in the 1920s, Secrist wrote an influential book with a rather startling conclusion: the competitive forces of American capitalism lead to mediocrity in business. Secrist’s evidence was that when businesses produced record-high profits one year, they tended to produce average profits in subsequent years. Likewise, firms that produced record-low profits tended to show higher profits in subsequent years. Therefore, something about capitalism must produce middle-of-the-road mediocrity, Secrist concluded.
But as Ellenberg explains, what Secrist’s data demonstrates isn’t an inherent flaw in competitive capitalism, but the simple fact that anomalies tend to be followed by a regression to the mean. A business can have a bang-up year for any number of reasons: a disaster that affects the competition, a temporary surge in demand, etc. When the temporary conditions that produced the record profits go away, so do the record profits. It’s business as usual again. That doesn’t mean the company’s management drifted into mediocrity.
Or on the subject of height:
People drawn from the tallest segment of the population are almost certain to be taller than their genetic predisposition would suggest. They are born with good genes, but they also got a boost from the environment and chance. Their children will share their genes, but there’s no reason the external factors will once again conspire to boost their height over and above what heredity accounts for. And so on average, they’ll be taller than the average person, but not quite so tall as their beanpole parents. That’s what causes regression to the mean: not a mysterious mediocrity-loving force, but the simple workings of heredity mixed with chance.
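Ellenberg’s heredity-plus-chance explanation is easy to see in a toy simulation. All the numbers below (population mean, spreads) are made up purely for illustration: each person’s height is a heritable component plus a one-off dose of chance, children inherit the component but roll fresh chance, and the children of the tallest parents land between the population average and their parents.

```python
import random

random.seed(0)

N = 100_000
POP_MEAN = 69.0  # hypothetical population mean height, in inches

# Height = heritable component + one-off environmental/chance factors.
genes = [random.gauss(POP_MEAN, 2.0) for _ in range(N)]
parents = [g + random.gauss(0, 2.0) for g in genes]

# Select the tallest 1% of the parent generation.
cutoff = sorted(parents)[int(N * 0.99)]
tall_idx = [i for i, h in enumerate(parents) if h >= cutoff]

# Children share the genes, but the chance factors are drawn fresh --
# there's no reason they conspire to boost height a second time.
children = [genes[i] + random.gauss(0, 2.0) for i in tall_idx]

tall_parent_mean = sum(parents[i] for i in tall_idx) / len(tall_idx)
child_mean = sum(children) / len(children)

print(f"population mean:  {POP_MEAN:.1f}")
print(f"tallest parents:  {tall_parent_mean:.1f}")
print(f"their children:   {child_mean:.1f}")
```

Run it and the children come out taller than average but noticeably shorter than their beanpole parents, with no mediocrity-loving force anywhere in the code.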
I don’t know how Dr. Eenfeldt would feel about being called a beanpole, but you get the idea. It’s not likely that he’ll produce a son who is 6’7” or taller.
Ellenberg points out that regression to the mean is also common in sports. An NFL running back has a record-setting year, scores a huge contract as a result, then never puts up those stratospheric numbers again. Grumbling fans assume that with all those guaranteed millions in the bank, the running back lost his desire or work ethic. In fact, it’s probably just a case of regression to the mean. The running back was never going to have another season like the record-breaking one, fat contract or not.
As a personal example that doesn’t involve fat contracts or money of any kind, I saw regression to the mean during the past five days, when I played several rounds of disc golf against my buddy Jimmy Moore. Last year I kept our scores in a spreadsheet. My average at the end of that week was 6 under par on the dot. (It’s an easy course, by the way.)
In one game yesterday, I tossed a 10 under par … four strokes better than average, and my lowest score ever when playing against Jimmy. The next game (a mere 90 minutes later), I finished just two under par … four strokes worse than average. When I tabulated all my scores for the five days we played, my average was … wait for it … 5.8 under par. I have good games and not-so-good games, but I always seem to drift back to right around 6 under par.
So what does all this have to do with diet and health? Plenty. Here’s an example from Ellenberg:
Almost any condition in life that involves random fluctuations in time is potentially subject to the regression effect. Did you try a new apricot-and-cream-cheese diet and find you lost three pounds? Think back to the moment you decided to slim down. More than likely it was a moment at which the normal up-and-down of your weight had you at the top of your usual range, because those are the kinds of moments when you look down at the scale, or just at your midsection, and say Jeez, I’ve gotta do something. But you might well have lost three pounds, apricots or no apricots, when you trended back toward your normal weight.
Okay, so maybe a loss of a few pounds that would have happened anyway fools people into continuing with a wacky diet for a while. That’s not a big concern in my opinion. The real concern is when regression to the mean fools doctors and researchers, or gives researchers a chance to fool doctors and the public at large.
Ellenberg doesn’t get into the subject, but Chris Masterjohn did in an article from 2011. You can read the entire article if you want the details, but here’s the main point: regression to the mean can exaggerate the reported efficacy of drugs and other treatments.
Suppose researchers are screening subjects for a trial on a cholesterol-lowering drug. Part of the screening process is a lipid panel. Because of the principle that given enough chances, chance happens, some people are going to have a spike in cholesterol on the day of the screening. So they’re now labeled as people with high cholesterol and enrolled in the study. When they’re screened again later, their cholesterol is lower – which it would have been anyway because of regression to the mean. But the lower number is attributed entirely to the drug.
As Masterjohn points out, if it’s a large enough study and the subjects were properly randomized, the regression-to-the-mean effect should be roughly equal in both groups, because both groups would include equal numbers of people whose first screening occurred on a day when their cholesterol was spiking. But smaller studies and studies in which subjects weren’t carefully randomized are particularly prone to the regression-to-the-mean effect. To quote from Masterjohn’s article:
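The screening effect is also easy to simulate. In this sketch (the cutoff, means, and spreads are invented for illustration, not taken from any real trial), each subject has a stable true cholesterol level plus day-to-day fluctuation; anyone whose single screening-day reading clears the cutoff gets enrolled, and then everyone is simply measured again later with no drug given at all:

```python
import random

random.seed(1)

N = 50_000
THRESHOLD = 240  # hypothetical enrollment cutoff, mg/dL

# Each subject has a stable "true" level plus day-to-day fluctuation.
true_levels = [random.gauss(200, 25) for _ in range(N)]

def measure(true_level):
    """One day's reading: the true level plus that day's fluctuation."""
    return true_level + random.gauss(0, 20)

# Screening visit: enroll anyone whose reading that day exceeds the cutoff.
screening = [measure(t) for t in true_levels]
enrolled = [i for i, x in enumerate(screening) if x >= THRESHOLD]

# Follow-up visit -- note that no treatment of any kind was given.
followup = [measure(true_levels[i]) for i in enrolled]

mean_screen = sum(screening[i] for i in enrolled) / len(enrolled)
mean_follow = sum(followup) / len(followup)

print(f"enrolled subjects at screening: {mean_screen:.0f} mg/dL")
print(f"same subjects at follow-up:     {mean_follow:.0f} mg/dL")
```

The enrolled group’s average drops substantially between the two visits even though nobody received anything, because enrollment selected disproportionately for people whose cholesterol happened to spike on screening day. In a real drug trial, that built-in drop sits on top of whatever the drug actually does.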
And thus we see that many published research findings are false. Some of these false findings exist because we would inevitably expect by the laws of probability for a small handful of well conducted, thoroughly reported, and appropriately interpreted studies to uncover apparent truths that are really false simply by random chance. This emphasizes the need to look at the totality of the data. Some will be false because of regression to the mean. This emphasizes the need to critically evaluate the data in each study.
The New Yorker ran an article in 2010 about the “decline effect” — the tendency of significant results reported in scientific experiments to end up less significant or even insignificant in practice or in further experiments:
All sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants.
The article partly blames the usual suspects, such as data manipulation and publication bias:
John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
I’m sure Ioannidis is correct about the intentional distortions. But the article also mentions apparently honest scientists who were surprised when they couldn’t reproduce their own results, using the same methods. Makes me wonder if some of the decline effect is simply the result of regression to the mean.
Either way, the lesson is clear: a lot of drugs are approved based on results that don’t hold up over time. So when your doctor prescribes the latest wonder drug, keep in mind it may not be such a wonder drug after all. As the New Yorker article says, in the field of medicine, the phenomenon [the decline effect] seems extremely widespread.
And if you’re getting ready to pick your team for a fantasy football league, I wouldn’t count on the players who had record-setting seasons in 2014 to repeat that performance.
If you enjoy my posts, please consider a small donation to the Fat Head Kids GoFundMe campaign.