Last week I pointed out several flaws in how researchers gathered data for the NIH-AARP Diet and Health Study, which has generated a slew of scary headlines such as "Animal Fat Linked to Pancreatic Cancer."
I also mentioned that even without those flaws, observational studies can at best only produce statistical associations. They don’t prove cause and effect … although you wouldn’t always know that from the headlines.
When people mention that obesity is associated with Type II diabetes and therefore must cause diabetes, I’ll sometimes reply that gray hair is also associated with diabetes and suggest we start giving Grecian Formula to everyone to prevent it. That usually generates a reply along the lines of, “Come on, that’s ridiculous. A lot of people develop diabetes when they’re older and happen to have gray hair.”
That’s the good news: people don’t confuse an association with a cause when it’s obviously ridiculous. The bad news is that if an association isn’t ridiculous, researchers often do believe they’re seeing cause and effect – especially if the association confirms a pre-existing bias.
Since observational studies produce so many alarmist headlines, I thought it would be a worthwhile exercise to recall just how spectacularly wrong a theory based on a statistical association can be. This is a real-world example that generated a lot of headlines back in the day.
For decades, heart-disease researchers have known that while women certainly do develop heart disease, they typically develop it later in life than men … usually after menopause. Naturally, this got the white-coat crowd wondering if female hormones – particularly estrogen – might protect against heart disease. The theory seemed to make sense: men don’t produce as much estrogen as women, and women don’t produce as much after menopause.
In the 1960s, men were given estrogen as part of a large clinical trial called the Coronary Drug Project – but that arm of the trial was stopped early because the men taking estrogen began dying from heart disease at a higher rate than men in the control group. So the theory was adjusted: estrogen appears to protect women from heart disease, but not men.
Then a major observational study gave the estrogen theory some real traction. For 15 years, the Harvard Nurses’ Health Study had been tracking the diets, health habits and disease rates of more than 120,000 nurses. When researchers pored over the mountains of data produced by that study, they found a startling statistic: women who took estrogen had a 40% lower rate of heart disease than women who didn’t. And women who continued taking estrogen were less likely to suffer a heart attack than women who took it for a while and then stopped.
You can imagine the research papers and the headlines that resulted. There were calls among researchers and doctors alike to start prescribing estrogen to all post-menopausal women who had risk factors for heart disease. More cautious researchers called for a controlled clinical trial before estrogen was given out like heart-healthy candy, and were criticized for it. How could they, in good conscience, deny this obvious wonder drug to millions of women while waiting for long clinical trials to play out?
A pharmaceutical company, Wyeth-Ayerst, eventually funded the clinical trials – hoping, of course, that estrogen would be shown to prevent heart disease. More than 16,000 women were randomized and enrolled in the study. For five years, half received estrogen and half received a placebo.
The results were hardly what Wyeth-Ayerst had expected: The women taking estrogen developed heart disease at a higher rate – 30% higher, in fact. They were also more likely to suffer a stroke … another cardiovascular disease. Later clinical trials confirmed the bad news.
The experts were flabbergasted. The statistical correlation in the Harvard Nurses’ Health Study couldn’t have been more convincing: women who took estrogen were far less likely to have a heart attack. And it couldn’t have been a fluke – there were too many subjects involved.
So what happened? Nobody can say for sure, but some researchers at the time offered an explanation that makes perfect sense: the women in the Harvard study who took estrogen were more concerned about their health. That’s why they took a hormone replacement in the first place.
In other words, estrogen didn’t create healthy nurses, but health-conscious nurses did take estrogen. Meanwhile, the health-conscious nurses were less likely to develop heart disease … for any number of reasons.
This really isn’t all that surprising. In clinical trials, people who religiously take their pills tend to have better health outcomes than people who don’t. And guess what? It doesn’t matter if the pill they’re taking is the actual drug or the placebo. The difference is in the people, not necessarily in the pill.
Some people care about their health. Some people are lackadaisical about health. Researchers call them “adherers” and “non-adherers.” I have my own, more colorful labels. The point is, we’re talking about different kinds of people, and that difference can produce statistical correlations in observational studies that have little if anything to do with the true cause and effect.
Think about the estrogen studies again for a moment: we now know that estrogen doesn’t prevent heart disease and in fact can make it worse. And yet in a large, observational study, taking estrogen was associated with a steep reduction in heart disease – almost certainly because health-conscious women were more likely to take it.
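That healthy-user effect is easy to demonstrate with a toy simulation. The numbers below are invented for illustration – they’re not from the actual Nurses’ Health Study – but the setup mirrors the explanation above: a “health-conscious” trait makes a simulated woman both more likely to take estrogen and less likely to develop heart disease, while estrogen itself is coded as mildly *harmful*. The raw comparison still makes estrogen look protective.

```python
import random

random.seed(42)

# Hypothetical parameters (illustration only, not real study data):
# health-conscious women are more likely to take estrogen AND have a
# lower baseline risk of heart disease. Estrogen itself RAISES risk 30%.
N = 100_000
took_estrogen = []
heart_disease = []

for _ in range(N):
    health_conscious = random.random() < 0.5
    takes = random.random() < (0.6 if health_conscious else 0.2)
    base_risk = 0.04 if health_conscious else 0.16
    risk = base_risk * (1.3 if takes else 1.0)  # estrogen is mildly harmful
    took_estrogen.append(takes)
    heart_disease.append(random.random() < risk)

def disease_rate(group):
    """Fraction of women in the given exposure group who got heart disease."""
    cases = sum(d for t, d in zip(took_estrogen, heart_disease) if t == group)
    n = sum(1 for t in took_estrogen if t == group)
    return cases / n

# Despite the built-in harm, estrogen takers show a LOWER disease rate,
# because the health-conscious (low-risk) women cluster in that group.
print(f"Disease rate, estrogen takers: {disease_rate(True):.3f}")
print(f"Disease rate, non-takers:      {disease_rate(False):.3f}")
```

Run it and the takers come out with a noticeably lower disease rate than the non-takers, even though every taker’s individual risk was multiplied by 1.3. The confounder – who chooses to take the pill – drives the correlation, not the pill.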
Now think about some of the alarmist headlines and health-nanny propaganda you’ve read over the years, and ask yourself what’s really going on. Here are a few examples I came up with:
Does a diet high in saturated fat cause cancer and heart disease? Nope. But since saturated fat has been demonized for 30 years, health-conscious people probably eat less of it.
Does giving up meat make you healthier? Nope. But most people who become vegetarians are probably health conscious.
Do whole grains prevent diabetes and cancer? Hell, no. But they’re less likely to cause those diseases than white-flour products, and health-conscious people are more likely to choose them.
Does watching Fat Head at least three times give you a high IQ? Uh … no. But I’d like to think there’s a strong statistical correlation.
If you enjoy my posts, please consider a small donation to the Fat Head Kids GoFundMe campaign.