My sophomore year in college, I took an Introduction to Humanities course (which I enjoyed very much), and was pleased when I walked into class one day and found that my physics professor was giving a guest lecture.
“Doc,” as we called him, was a major science wonk. He held PhDs in physics and mathematics and taught courses in both disciplines, as well as freshman chemistry. No matter what the class, his lectures frequently took unexpected side trips into anthropology, biology, astronomy – well, heck, you name it, and he knew it. Probably the only other person I’ve met who reads as many books per year is Dr. Mike Eades.
Doc’s guest lecture was about the need for general scientific literacy, and he told us at one point, “No matter what field you plan to go into, learn math. Math is how you know when you’re being lied to.”
I agree wholeheartedly. Trouble is, math just plain scares a lot of people. Stick a couple of x’s and y’s into an equation, and they get a case of brain-freeze that can otherwise be produced only by gulping a Slurpee.
After researching Fat Head, I’m convinced most reporters are prone to Slurpee Syndrome. When they report on this-or-that new study, they don’t want to strain their brains by poring over the actual data. So they just read the abstract and the author’s conclusion, then write the story. That’s how a lot of bad science becomes embedded in our consciousness.
There are notable exceptions, of course. Gary Taubes thoroughly analyzes the hard data – but Gary has a degree in physics from Harvard. That’s why I found it laughable when some media-darling doctors claimed his hypotheses about adaptive thermogenesis and homeostasis would violate the laws of thermodynamics. Yeah, right … the guy with the degree in physics forgot all about the laws of thermodynamics. Luckily for us, there were actual doctors on hand to set the record straight.
To restate Doc’s warning – with a little Mark Twain stirred in – if you can’t do the math, researchers with an agenda will use lies, damned lies and statistics to bamboozle you. Let’s look at how they do it.
Just flat-out lie about the results. Thirty years after the start of the famous Framingham study, the authors of a new report stated that “The most important overall finding is the emergence of the total cholesterol concentration as a risk factor for coronary heart disease in the elderly.” In plain English: high cholesterol kills old people.
Just one little problem: the hard data showed no correlation at all between heart disease and cholesterol in the elderly, as Dr. Uffe Ravnskov pointed out in his excellent book “The Cholesterol Myths.” If you look at the actual data points, they’re all over the place.
Perhaps hoping to clarify the issue, the American Heart Association put actual numbers on the claim: “The results of the Framingham study indicate that a 1% reduction of cholesterol corresponds to a 2% reduction in CHD risk.” But when Dr. Ravnskov crunched the data, he found that the Framingham subjects whose cholesterol decreased over time actually had a higher rate of heart disease, not a lower one.
With this apparent inability to recognize if numbers are going up or down, I respectfully suggest that the Framingham researchers resign their positions and go work for a congressional budget committee. (I also respectfully suggest that reporters who couldn’t be bothered with examining the data start covering school-board meetings.)
Scare people with percentages. When you see a percentage, you’re looking at the results of multiplication or division. But when you see the word “difference,” you are – if the researcher is honest – looking at simple subtraction. If a value goes from 20 to 22, it’s an increase of 10%, but the difference is 2. (Still with me? Good; you have a functioning brain.)
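If code is easier on your brain than algebra, here's the same example as a quick Python sketch (toy numbers from the paragraph above):

```python
# Percent change comes from division; "difference" is just subtraction.
old, new = 20, 22

percent_change = (new - old) / old * 100  # (22 - 20) / 20 = 10%
difference = new - old                    # 22 - 20 = 2

print(f"Percent change: {percent_change:.0f}%")  # 10%
print(f"Difference:     {difference}")           # 2
```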
Multiplication and division can produce big, impressive-sounding percentages that are in fact nearly meaningless. Here’s an example that helped enshrine the “cholesterol kills” theory:
After a major study with the acronym MRFIT was concluded, the researchers announced that people with high cholesterol were over 400% more likely to die of heart disease. Ohmigosh!! Get me into an Ornish program, now! I must reduce my cholesterol!
That’s a big, scary number. Let’s see how they came up with it.
Over the course of the study, 0.3% of the men whose cholesterol was below 170 died from heart disease. Meanwhile, 1.3% of the men whose cholesterol was over 265 died of heart disease. Over 265?! Dead man walking! Buy your casket now and save!
And in fact, since 1.3/0.3 ≈ 4.33, the death rate in the high-cholesterol group was more than four times the death rate in the low-cholesterol group – which is where a scary relative-risk figure like “over 400%” comes from.
Now flip the numbers and look at the actual difference. In the low cholesterol group, 99.7% did not die from a heart attack. Among the very high cholesterol group, 98.7% did not die from a heart attack. That’s a difference of 1.0%. In other words, if you go up the scale from low cholesterol to very high cholesterol (nearly 100 points higher), the real difference is that an extra 1 in 100 men died of heart disease. Not quite such a scary number, is it?
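If you want to check my arithmetic, here's a quick sketch using the rounded MRFIT rates quoted above (an illustration, not the published analysis):

```python
# Rounded MRFIT rates quoted above: heart-disease deaths among men
# with cholesterol below 170 vs. over 265.
low_rate, high_rate = 0.003, 0.013

relative_risk = high_rate / low_rate   # ~4.33 -- the scary "over 400%"
absolute_diff = high_rate - low_rate   # 0.01 = 1 percentage point

print(f"Relative risk:       {relative_risk:.2f}x")
print(f"Absolute difference: {absolute_diff:.1%}")
print(f"Survived, low:       {1 - low_rate:.1%}")   # 99.7%
print(f"Survived, high:      {1 - high_rate:.1%}")  # 98.7%
```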
Wow people with percentages. Percentages work in the other direction, too. You’ve probably seen the Lipitor ads where Pfizer announces that this wonder drug reduces heart attacks by 36%. That sure sounds impressive … until you look at the actual difference.
In the study cited by Pfizer, men with known risk factors for heart disease took either Lipitor or a placebo. In the placebo group, barely more than 3% had a heart attack. In the Lipitor group, 2% had a heart attack. Use division, and you get that impressive 36% reduction. But the difference, once again, is 1 in 100, or 1%. Boy, that’s worth giving your liver a major smack-down.
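Here's the same arithmetic as a sketch, using the approximate rates above (the published 36% figure presumably comes from the unrounded rates, so this lands close but not exactly on it):

```python
# Approximate rates from the statin trial described above.
placebo_rate = 0.031  # "barely more than 3%" had a heart attack
drug_rate = 0.020     # 2% had a heart attack

relative_reduction = (placebo_rate - drug_rate) / placebo_rate
absolute_reduction = placebo_rate - drug_rate

print(f"Relative risk reduction: {relative_reduction:.0%}")  # ~35% with these rounded rates
print(f"Absolute risk reduction: {absolute_reduction:.1%}")  # ~1.1 percentage points
```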
And by the way, the difference in the heart-attack rate for women who take statins and women who don’t is: zero. You can multiply that difference, divide it, square it, triangle it, stick it inside a trapezoid, whatever … you still can’t come up with a reason for women to take statins – ever.
Count only the numbers you like. (Otherwise known as “cherry picking” your data.) In trials conducted on statins, researchers will happily announce that fewer people died of heart disease. (Not a whole lot fewer … see above.) But they’re curiously silent about how many people died overall. In fact, they refuse to release what’s called the “all-cause mortality” data. Gee, I wonder why?
If I conduct a five-year study in which I give one group a daily glass of orange juice, and another group a daily glass of rat poison, I can guarantee you that far fewer people in the rat-poison group will die from heart disease. But I doubt the all-cause mortality numbers would help sell rat poison as a new wonder drug.
(And in fact, studies of the first cholesterol-lowering drugs were stopped because too many subjects were dying of cancer … but at least they didn’t get heart disease.)
Here’s another example of cherry-picking data: A while back, newspaper headlines were practically screaming that smoking cigars is nearly as dangerous as smoking cigarettes – you know, high rates of cancer and all that. The American Cancer Society jumped all over the report. And since I enjoy two or three cigars per week while taking my five-mile walks, this one grabbed my interest.
As it turns out, the researchers compiled their data from men who smoke five or more cigars per day. Now, a good cigar can easily take an hour to smoke. So unless these guys were standing outside half the day, they were lighting up indoors and inhaling a lot of smoke. And I’m pretty sure anyone who smokes five cigars a day isn’t exactly a health nut. Lord only knows what else these guys put into their bodies.
Most cigar smokers don’t even average one per day. When other researchers compared men who smoke one cigar per day with nonsmokers, the difference in cancer rates was insignificant. (The cigar smokers were also more likely to have Austrian accents and be elected governor of California.)
Confound it! In a real, true, worthwhile study, you compare large groups of people who are statistically identical except for one variable: one group is taking a drug, or adding fiber to their diets, or meditating while listening to Yanni music. Everything else should be the same.
If everything else isn’t the same, you’ve got confounding variables, which means your study data is worthless. You can’t just compare x. You’ve got to compare x while making sure that a, b, c, y and z are virtually identical.
Dean Ornish claims a lowfat diet prevents heart disease. Why? Because he put people on his diet, and by gosh, they had fewer heart attacks than the control group. But Ornish also had the study group stop smoking, start exercising, and take classes in stress management. That’s pretty convenient, considering that smoking and stress are two of the biggest causes of heart disease.
Call me crazy, but I’m pretty sure if I took a group of smoking, stressed-out couch potatoes and had them stop smoking, start exercising, meditate while listening to Yanni, and take up chewing tobacco, their rate of heart disease would plummet. Then I could claim that chewing tobacco prevents heart disease. Heck, I’d probably get a lifetime pass to NASCAR races.
So the next time you see a newspaper headline announcing that some new study “proves” this or that will kill you – or save you – ask yourself a few questions: What exactly did they count? What didn’t they count? Are we looking at a percentage change, and if so, how did they calculate it? What’s the real difference? Did they control their variables? And – most importantly – do the raw numbers actually support the conclusion?
And Doc, if you’re still alive, I want you to know you were one of the best teachers I ever had, at any level. You told me once I was good at math and should look into programming computers for a living, which I thought was a silly idea at the time. Now it’s how I pay the mortgage. It’s also how I paid for Fat Head.
You were even more brilliant than I thought.
If you enjoy my posts, please consider a small donation to the Fat Head Kids GoFundMe campaign.
“No matter what field you plan to go into, learn math. Math is how you know when you’re being lied to.”
That’s pretty much word for word what I tell everyone too. Makes life so much easier.
I remember dining with an oncologist, and after the meal he lit up a cigar. He must’ve seen an alarmed look on my face. He plainly told me that there is no statistically significant evidence that one cigar a week increases the chance of mouth cancer and, even so, mouth cancer is much easier to treat than lung cancer, and you have to live your life. I didn’t take up smoking cigars ’cause it’s not my thing, but it shows how much perceptions play on people.
Statistically significant is an interesting term. You no doubt remember the “caffeine causes birth defects” study. Well, they fed mice caffeine and found it increased the chance of birth defects in the cases vs. the controls. That’s what was reported in the media. The true reporting should have said:
“the consumption of the equivalent of 200 cups of coffee a day by a human caused a statistically significant increase in birth defects in mice”.
The study was repeated by another group with the dose reduced to the equivalent of a much more reasonable 15 cups of coffee a day. That’s still a LOT of caffeine, but this time there was no statistically significant link observed.
We had a great chemistry lecturer who talked about cases like these as nothing less than scientific fraud.
That kind of fraud is rampant, and it’s agenda-driven.
My wife usually had one cup of coffee a day while pregnant with each of our two girls, who are strong and healthy. I told her I didn’t believe a cup a day could possibly do any harm.
I don’t like being around cigarette smoke, but the notion that second-hand smoke causes cancer is based on scientific drivel. I don’t know if you have access to the Penn & Teller series titled “Bull$#@%,” but they went through the numbers in one episode. The upshot is that if you’re a nonsmoker and live with a smoker, the odds that you’ll get lung cancer increase by about 1 in 13 million.
And even that is suspect. Smokers don’t care much about health, and people who marry them probably don’t either, at least compared to the general population. They’re at least somewhat less likely to have good overall health habits.
A magician I met on a cruise ship (I was working as a comedian) had an experience similar to yours. He was at a dinner where one of the guests was C. Everett Koop, former Surgeon General. Koop ordered a porterhouse steak. When the magician asked why he wasn’t concerned about the fat and cholesterol, Koop replied, “Everything you’ve heard about that is wrong.”
Wish he’d said so while he was still Surgeon General.
Great stuff here….for a comedian!
Thanks, Brian.
You actually made math somewhat interesting here. I never had trouble with math until they started adding x y z to the numbers. Then the old eyes started to glaze over. And I could never “get” math puzzles, and to this day I can’t understand the sudoku craze. Give me a good old-fashioned crossword puzzle, or a cryptogram, and leave the numbers out of the puzzles!
I really enjoy your style of writing. It’s a pleasure to read a well-written, thought-provoking piece.
I’m afraid a lot of people get permanently turned off by incompetent math teachers, the ones who have only a single way of explaining a concept. If you don’t get it the first time, they can’t help you.
A friend of mine had one of those for high-school algebra. He flunked and labeled himself a math dummy.
Later, my brother tutored him for a makeup course, which he aced. In college, the supposed math dummy won an award that’s given to the non-math major with the highest scores in math classes.
Great post. I forwarded this to my dad, who’s constantly fighting off my mom’s protests over his occasional cigar smoking. His response:
“Thanks for the affirmation! I think I’ll go smoke one. He didn’t even mention that cigar smokers are likely to enjoy a bourbon or two while they smoke that hour long stick.”
Tell your dad to light one up for me. Since I smoke mine while taking a long walk, the bourbon would probably be a bad idea.
Oh goody, another reason to soapbox on my favorite topic.
The problems with the mathematical field called “statistics” run right to its conceptual foundations. We generally want answers to questions like “How much should I believe that I will die sooner from eating 6 eggs a day?”, but orthodox statistics cannot (in a mathematically provable sense) answer this question. Instead we get “statistics”, which are actually just numbers describing data, and from which you’re supposed to wave your hands and magically get the answer to the real scientific question. It doesn’t work, hence the ongoing confusion.
Consider: I take a coin out of my pocket, and offer to play the following game. You pay $100 to play. If the coin comes up heads, I pay you $300, otherwise I keep your $100. Should you play? Orthodox statistics can’t answer this question, because it only deals with “data” (results from experiments) and not other types of information (such as your belief that this sounds too good to be true and is probably a scam). Even given data, statistics doesn’t answer the right question, which is “how likely am I to win $300?” The best it can do is answer something like “I flipped the coin 10 times and saw 7 heads; how likely is that if the coin is fair?”.
If two people have the same information, they should come to the same conclusions. That many can read a scientific study and come to completely different conclusions is a good sign there’s something fundamentally wrong with “statistics” (which, again, can be demonstrated mathematically).
If you want to learn some really useful mathematics, take a look at Probability Theory as formulated by Ed Jaynes (this often goes under the poorly chosen moniker “Bayesian statistics”). Unfortunately, I have yet to find a good introductory description of Probability Theory. I took a crack at it on my blog, but don’t think I really succeeded in making the topic understandable. It’s too bad, because once you “get it”, thinking about the sort of questions you’ve discussed here becomes very clear. Indeed, those people we value as “rational” (like yourself and Gary Taubes) tend to think this way. Probability Theory is just a mathematical encoding of a rational thought process.
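To put rough numbers on the coin game (a toy sketch of the two questions, nothing rigorous):

```python
from math import comb

# The question orthodox statistics answers: "If the coin were fair,
# how likely is the data?" -- e.g., exactly 7 heads in 10 flips.
def prob_heads(k, n, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(f"P(7 heads in 10 | fair) = {prob_heads(7, 10):.3f}")  # ~0.117

# The question you actually care about: "Given what I believe about
# the coin, is the game worth playing?" Pay $100; heads pays back $300.
def expected_value(p_heads, stake=100, payout=300):
    return p_heads * payout - stake

for p in (0.5, 0.25, 0.1):  # different beliefs about the scammer's coin
    print(f"P(heads) = {p}: expected value = ${expected_value(p):+.0f} per play")
```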
Exactly right. My statistics professor posed a question once: “Okay, class, suppose I’ve already flipped this coin nine times, and it’s come up tails eight of those times. What are the odds it will come up heads the next time?”
The answer is that the odds are 1 in 2, as for every other flip. The coin has no memory. And you’d probably have to flip it thousands of times to get close to a true 50-50 split in the outcomes.
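A quick toy simulation shows how slowly that split settles down:

```python
import random

random.seed(1)  # reproducible toy run
flips = [random.random() < 0.5 for _ in range(100_000)]

# Each flip is 50/50 regardless of what came before; only the
# long-run proportion creeps toward 0.5.
for n in (10, 100, 1_000, 10_000, 100_000):
    proportion = sum(flips[:n]) / n
    print(f"{n:>7} flips: {proportion:.3f} heads")
```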
Don’t leave yourself out of that “rational” category. I’ve read your posts.
Actually, chances are the coin or the tosser is not “fair” – that is, the coin is not equally likely to come up heads or tails. If the tosser is skilled enough, you have the same chances of winning as you do at three-card monte or the shell game.
That’s why you insist on using the same dice as the other players.
@Tom:
Thanks for the compliment.
The coin flip problem has many interesting subtleties. Your professor is correct, given that he is 100% sure that the coin is fair. But of course, nothing is ever 100% certain. Sivia and Skilling have a nice pithy introduction to the mathematical framework to which I alluded above, including a discussion of inferring the fairness of a coin. Chapter 1 of the book is also included in its entirety on Google Books. It’s only about 10 pages, and a very nice exposition of historical aspects, problems with orthodox statistics, and the basic concepts of Probability Theory. Worth a read.
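Here's a bare-bones sketch of that coin-fairness inference (my own toy version, in the spirit of Sivia and Skilling's first chapter, not their code):

```python
# Grid posterior over the coin's heads-probability after observing
# 7 heads and 3 tails, starting from a flat prior.
heads, tails = 7, 3
grid = [i / 100 for i in range(101)]  # candidate values of P(heads)

likelihood = [p**heads * (1 - p)**tails for p in grid]
total = sum(likelihood)  # flat prior, so the posterior is proportional to the likelihood
posterior = [lk / total for lk in likelihood]

mean = sum(p * w for p, w in zip(grid, posterior))
print(f"Posterior mean of P(heads): {mean:.2f}")  # ~0.67, not the "fair" 0.5
```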
Yes, we would of course have to assume the coin is fair. Give it a nick, and the odds could change.
And if you really want to put the Framingham and the MRFIT data in perspective, this post over at Hyperlipid will really help: http://high-fat-nutrition.blogspot.com/search/label/Cholesterol%20within%20nations%20studies
As Peter mentions, I’d be looking to lower my HBA1c levels, rather than my LDL.
Very good post. Simply looking at total LDL is worthless, because LDL can be big and fluffy and small and dense. Saying you have too much LDL is like saying your body has too many cells. What kind of cells? Cancer cells? Muscle cells? Fat cells?
It’s true that understanding the math gives you the perspective to evaluate what’s being discussed. Moreover, I agree that statistics get misused, and that there are fraudulently run studies. Still, I think it’s incorrect to suggest that a percentage change is inherently misleading.
If it is in fact true that 0.3% of the men whose cholesterol levels are below 170 die from heart disease, and 1.3% of the men whose cholesterol levels are over 265 die of heart disease, then from equal sized pools, over four times as many men will die of heart disease. It doesn’t prove cause and effect, and the difference is only one percent of the pool, but the result will be pretty darned relevant to the dead people. So the question is, how many people with cholesterol levels over 265 are similar enough to the demographic of the study participants to have the results be relevant (100K?, 1M?, 5M?)? Now, what is 1% of that number? That’s the number of lives we’re talking about.
If my cholesterol levels were over 265, I’d certainly look into my options.
In the strictest sense, it’s true that a percentage is not misleading — it’s mathematically correct, after all — but it can be used to produce the impression of a large effect when the actual difference is small, which is why I think people need to understand how percentages are calculated.
The MRFIT researchers could have used percentages to make this statement, which would be equally correct: “For 99% of the men in this trial, cholesterol levels had no effect whatsoever on heart-attack rates.”
Or suppose we advertised to prospective Lipitor customers this way: “This drug may cause liver problems, impotence, muscle fatigue and memory loss. And for 99% of the men who take it, Lipitor will not prevent a heart attack.”
If my cholesterol were over 265, I’d only want to know whether the LDL was the small/dense type or the big/fluffy type. If it were fluffy, I wouldn’t worry about the total number at all.
Among older people, those with higher cholesterol have longer lifespans, most likely because cholesterol appears to have anti-cancer properties.
You might enjoy Peter’s analysis of the Framingham and MRFIT data, as Ellen suggested.
I completely agree that these numbers can be and are used to mislead (and with much of your original post, for that matter), but that works in both directions – both overemphasizing and diminishing the importance of something. My point is simply that 1% of a population can be a significant number, particularly when it’s the difference between 98.7% and 99.7% in a large population. That’s 3/4 of the people. Does it mean you should take drugs with bad side effects? Maybe not, but it also doesn’t mean the issue should be ignored if there are other options.
I’m not following … how does 3/4 figure into it?
@John:
I think more important than whether or not it is misleading is that percentage change (or any other statistic) doesn’t give you enough information to evaluate your options. A drug might decrease your chance of death by a given disease by 50%, but it makes a big difference whether or not the chance of dying from that disease is 0.01% or 99%. Correspondingly you’d need absolute risk numbers for side effects of the drug, and of course they cost money as well.
Notice that drug companies almost always publish relative risk numbers when extolling the benefits of their drugs, and absolute risk numbers to downplay the side-effects.
On top of which, the all-cause mortality figures should be mandatory. As it is, the Pfizer study lasted 10 years and produced an absolute difference of 1 in 100. That means if you treat 100 at-risk men (and only men) for 10 years, you would in theory prevent one heart attack — but not necessarily one death, because not all heart attacks are fatal.
But what they don’t tell us is this: if you compare the two groups, what are the odds that an extra man in the Lipitor group will die of cancer, or stroke, or some other disease? If we treat 100 men for 10 years, do we in fact prevent even one extra death overall?
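By the way, the standard name for this arithmetic is “number needed to treat.” A one-line sketch of the idea:

```python
# Number needed to treat (NNT): how many people must take the drug for
# the trial period to prevent, in theory, one event.
absolute_risk_reduction = 0.01  # the 1-in-100 difference discussed above

nnt = 1 / absolute_risk_reduction
print(f"Treat {nnt:.0f} people to prevent one heart attack")  # 100
```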
Sorry if that wasn’t clear. Roughly three quarters of the people who fall into the 1.3% don’t fall into the 0.3% (1.0 out of 1.3 is about 77%). Yes – it’s effectively the 400% logic all over again, but if the people or the costs of 0.3% of a population have any significance (such as for a large population and/or an expensive procedure), then the 1% difference between the two is quite a bit more significant.
Now I get it, thanks.
@Dave
I agree. Either representation (% or difference) can be misleading without the absolute numbers to put them in perspective, and companies may use both to their advantage.
The original post stated that multiplication and division can produce big, impressive-sounding percentages that are in fact nearly meaningless, and that the real difference is that an extra 1 in 100 men died of heart disease, which is not such a scary number. To the extent that this suggests what I just agreed on with you (about the use of misleading numbers), it is correct.
Nevertheless, to the extent that the health of 1/100 men is “nearly meaningless,” I respectfully disagree. Whether you want to call it 400% or 1%, taken in perspective it can be a significant number.
As for cholesterol as compared to LDL (from a prior post) – I know very little about the subject. That said, unless I could determine that the cholesterol study wasn’t valid, I think I’d still look into my options. Whether that would lead me to seek some other form of information (e.g., LDL), I don’t know.
Separate issue, but the high cholesterol group also would’ve included people with hyperlipidemia. These folks can have cholesterol readings of 1,000. It was a bit convenient for the MRFIT researchers to lump them into the “high cholesterol” group, since there’s a huge difference between 265 and 1,000.
Here’s the quick version of cholesterol vs. LDL: Not all LDL is the same. It can be large and fluffy or small and dense (known as Type B).
The fluffy stuff doesn’t penetrate the arterial wall; it’s too big. And if anything, it appears to have anti-inflammatory properties, which makes sense, because cholesterol goes up when inflammation is present. This is another reason to suspect that higher cholesterol is a response to heart disease in some people, not a cause.
The small, dense stuff can penetrate the arterial wall, which leads to plaque buildup. The irony is that high-carb, lowfat diets — the recommended diet to prevent heart disease — produce more dense LDL.
Anyway, given what we now know about LDL, it’s pointless to scare people about having “high LDL” without determining the type. Many people with low LDL end up with heart disease. Dwight Eisenhower’s total cholesterol was only 165 when he had his first attack, for example. He didn’t have a high LDL reading, but obviously his LDL was small and dense.
Okay, that wasn’t as brief as I planned, but you get the idea.
Personally I think the percentages are the least of the worries.
I’m sure you’ve heard about the link between colon cancer and red meat. Well, the case group in that study ate 2 pounds of red meat a day for something like 2 years. Now, I like meat, and I can eat a 1-pound steak if I’m in the zone, maybe with a little bacon, but who routinely eats that much meat in a day?
Keep in mind that the average calories of that much ribeye steak (my choice) is a little over 1000, and that’s if you cook with no fat.
And these sorts of studies are everywhere. I would much rather percentage values (with however much spin put on them) that are applicable to ME, with an average meat intake per day of about 300g.
The red meat study was one of those “association” studies that are nearly worthless. Mike Eades and Jimmy Moore both wrote good commentaries.
People lived on a hunter-gatherer diet for hundreds of thousands of years, but cancer didn’t become common until we started consuming sugar, white flour, and Frankenstein vegetable oils. The Masai still live almost exclusively on their cattle – much like the Plains Indians lived on buffalo – but cancer is nearly nonexistent in the tribe.
Studies like this are what scared a friend of mine into giving up red meat, because colon cancer runs in her family. But guess what? She eats bread, potatoes, pretzels, non-dairy creamers (corn syrup and hydrogenated oils), etc. If anything is going to give her colon cancer, it’s all that starchy garbage.
@John,
The key question is whether or not you have information to evaluate your options. Don’t expect to get it from drug companies, doctors, or the government. Ultimately, the only person who is properly motivated to make informed decisions about your health is you. Make sure you get the real information, not just the marketing message. And be sure to evaluate the motives of those providing the information. For instance, I suspect Tom would be ecstatic if Fat Head made as much money as drug companies get treating 100 people with statins for 10 years. I know I’d take that deal.
Pretty sure I could retire on that kind of dough.
The study that Mike Eades and Jimmy Moore were berating was a completely different one than I was talking about. While the studies they talked about basically pick’n’mix conclusions from a wealth of convoluted data, the one I’m talking about was flawed in its approach. I found Dr Eades’ description of the “adherent effect” particularly interesting.
While observational studies are particularly prone to bias and have minimal error control, the case-control studies are often plain unrealistic.
e.g. Observational study: 98% of deaths occurred within 24 hours of consumption of dihydrogen monoxide. Therefore, people should avoid consuming dihydrogen monoxide.
Case control study: Case subjects were subjected to prolonged full body immersion in dihydrogen monoxide, control subjects played whack the penguin in an office recliner. Mortality rates were significantly higher in the case group. Therefore, skin contact with dihydrogen monoxide can cause death.
While both are flawed, they are flawed in different ways. Trouble is, when people see “different types of studies” corroborating the same conclusion, they take it as proof – especially when they’re already inclined to believe it. And these studies keep getting published so that academics can gather at a conference and pat each other on the back while contemplating their rapidly increasing blood sugar levels over a tofu burger (hold the mayo).
Do you have a link to the study you referred to earlier? I’d like to take a peek.
If I ever produce another documentary, it’ll probably be about how science is manipulated. I had no idea how agenda-driven this stuff is until I started working on Fat Head.
I can’t find the study, though it was mentioned in New Scientist in a sidenote to the 200 cups of coffee study talking about faulty design. This was probably about 4 years ago.
I did find this little gem while searching though. If html works you will see some odd reasoning being used:
Giovannucci, E., Cancer Causes & Control, Vol. 6, pp. 164-179.
Abstract: Some factors related to Westernization or industrialization increase risk of colon cancer. It is believed widely that this increase in risk is related to the direct effects of dietary fat and fiber in the colonic lumen. However, the fat and fiber hypotheses, at least as originally formulated, do not explain adequately many emerging findings from recent epidemiologic studies. An alternative hypothesis, that hyperinsulinemia promotes colon carcinogenesis, is presented here. Insulin is an important growth factor of colonic epithelial cells and is a mitogen of tumor cell growth in vitro. Epidemiologic evidence supporting the insulin/colon-cancer hypothesis is largely indirect and based on the similarity of factors which produce elevated insulin levels with those related to colon cancer risk. Specifically, obesity (particularly central obesity), physical inactivity, and possibly a low dietary polyunsaturated-to-saturated fat ratio are major determinants of insulin resistance and hyperinsulinemia, and appear related to colon cancer risk. Moreover, a diet high in refined carbohydrates and low in water-soluble fiber, which is associated with an increased risk of colon cancer, causes rapid intestinal absorption of glucose into the blood leading to postprandial hyperinsulinemia. The combination of insulin resistance and high glycemic load produces particularly high insulin levels. Thus, hyperinsulinemia may explain why obesity, physical inactivity, and a diet low in fruits and vegetables and high in red meat and extensively processed foods, all common in the West, increase colon cancer risk.
So let me get this straight… the current evidence supporting red meat causing cancer is lacking, so you formulate a new theory that implicates refined carbohydrates in cancer pathology and this explains why red meat causes colon cancer?
It’s twisted logic indeed, but that’s how they think: obesity and insulin resistance are associated, and therefore (drumroll, please) we’ve decided being obese makes you insulin resistant. Now we can take all our usual suspects for obesity (like fatty foods) and blame them for insulin resistance, and therefore cancer as well.
Never seemed to occur to them that obesity and insulin resistance may simply have the same root cause … like that Food Pyramid diet they started recommending 30 years ago.
Hi Tom!
I’ve just seen your movie and I must say it’s very funny. And it’s about time someone made a movie on this topic. I’ll show it to all my coworkers, who see my weight loss but tell me I’ll die of heart disease. I’ve just told them the “every diet is high in fat because you burn your body fat” argument from your last post yesterday. That was funny, thanks for that.
I have one little question: Did you lose any weight on your saturated fat pigout month? The video doesn’t say.
Your blog was a great find. This post reminded me of my favorite math teacher. He was really brilliant at explaining things, and he taught me that math is basically just a language to express thoughts, like German or English, so there’s no reason to be scared. Everything I’ve ever known about math I know from him or taught myself based on what I learned from him – because most math teachers suck big time. I studied electrical engineering and am now working at a research institute – pretty much because of him. One good math teacher can do a lot of good in this stupid world.
I lost two pounds during the saturated-fat pigout month, which means I basically stayed the same. Two pounds could be water. But considering how much food I was eating, the fact that I didn’t gain weight was interesting to me.
I like your math teacher’s way of putting it. That’s what I tell people who are math-phobic … math is just a description. If I have 200 pennies and I divide them into four equal piles then take away one pile, I have 150 pennies left. So 3/4 of 200 = 150. It’s not as hard as people (and bad math teachers) make it out to be.
@TonyNZ,
Excellent point that all studies are likely to be “flawed” in some way. One of the things I think is most problematic in science as it is actually practiced is that scientists (just like everybody else) tend to think in terms of absolute truth or falsehood. But these are pathological cases, because no amount of observation or experimentation can provide perfect information. The corollary is that scientists often focus on a single hypothesis. As your examples show quite clearly, that observations support a hypothesis doesn’t mean it’s the *best* hypothesis. When studying something as complex as human metabolism, there are many different possible hypotheses. Just because Dr. Bob’s favorite one is supported by data at the 95% level doesn’t mean there isn’t an alternative supported at the 99.99% level. But try telling that to Dr. Bob and see how far you get.
One of the worst issues is this nonsense: “let the data speak for themselves”. This is tantamount to saying that the only relevant information for assessing a hypothesis comes from the data generated by your study. But this is a mess. Dr. Bob does a study whose data supports hypothesis A. Dr. Dick does a study refuting hypothesis A. How do you reconcile these things? Or suppose that the truth of hypothesis A depends on hypothesis B. Dr. Bob’s study only obtained data relevant to hypothesis A, but how does he include the known evidence for B when assessing belief in A?
The saturated fat mess is a great example of this. Stephan at Whole Health Source has a nice literature review on the topic (Part I and Part II). Some say sat fat causes disease, some say it doesn’t, and most found no effect. The scientists involved in a particular study would probably defend their conclusions to the death:
Dr. Bob: Dr. Dick’s study is fundamentally flawed as he did not correct for protective effect of Cocoa Puffs against heart disease.
Dr. Dick: Dr. Bob’s feet stink and he hates Buddha.
Ideally, Dr. Bob would start with the belief implied by Dr. Dick’s study (and any other relevant information), and update it with the results of his own study. Drs. Bob and Dick should then be in agreement as to the resultant degree of belief that sat fat causes disease, because they both have the same information. In reality, it turns into a wee-weeing contest. Be sure to do the ad hockery two-step to avoid getting your shoes wet.
Inclusion of “any other relevant information” is also important. For the case of saturated fat, we can look at a few things. First of all, it is chemically pretty boring, precisely because it is saturated. There’s a reason that highly unsaturated fats form the substrates for biologically active substances like prostaglandins: the double bonds make them more reactive. Second, for saturated fat to have some negative health impact, it would need to have some specific metabolic effect compared to unsaturated fats, e.g. clogging up cholesterol receptors. There’s no evidence of such, so before one collected the first datum on the sat fat/disease relationship, we would have a low expectation that we’d even find such a relationship. And if the data did show a relationship, we would certainly want to consider alternative hypotheses with higher a priori expectations of being true (e.g. maybe it wasn’t the bacon but the Cocoa Puffs). But if this is ever done, it is selectively and qualitatively (“My study agreed with Dr. Dick’s conclusions, and everyone knows Dr. Dick was right, so I’m right too. Nyah nyah nyah!”).
As a wise man once said: Sientits arr dumm.
Very good point. Researchers fall in love with their own hypotheses and then ignore alternate explanations for what they observed. We see a slight rise in heart disease as cholesterol levels go up, so cholesterol must cause heart disease. Then we ignore the possibility that it’s inflammation that truly causes heart disease, and the body is merely producing more cholesterol to protect against the inflammation.
Stephan’s summaries are excellent. Thanks for the links. Malcolm Kendrick did a masterful job of explaining bad study design and ad-hoc reasoning in his book The Great Cholesterol Con. He is also, hands down, the funniest doctor ever to write a book. I laughed out loud many times as I was reading it.
I agree whole-heartedly!! I have always questioned statistics and I am in no way a mathematician. I am a logical thinker though and have always questioned studies in the same way that you have just explained. Statistics can confound anyone and sound so accurate because most of us can’t do the math.
I love your work Tom… keep it coming, I’m just loving hearing a sensible approach for a change.
I just wish more media reporters would question the statistics. Thanks for the compliment on the blog.
@Dave
The ultimate problem with many of these studies is that they do not consider or quantify potential sources of error appropriately.
One of the best demonstrations of sources of error was by my year 12 (11th grade in the States, I think) physics teacher, around the time I decided science was for me. We had to time how long a ball took to fall from a set height to the ground, work out a relationship for gravity, and then list all of the ways error could be introduced to the experiment. We came up with 40-odd ways.
There were the obvious ones, like timing error on release and landing, parallax error in dropping the ball, and systematic error in the equipment. There were also ones so far out of left field that 99% of people wouldn’t think of them, such as:
Air temperature changing the air density, and with it the deceleration due to friction.
Inconsistencies in the surface of the ball doing the same.
Height above sea level lessening the effect of gravity.
Iron content in the ball and possible inductive force effects.
Now this sounds pedantic, but by the time you evaluate the potential error of these and incorporate it into the results … 40 factors with an error of 0.1% each is 4%, and many of these errors were bigger. And we invariably missed some (I don’t recall talking about relativistic effects). He basically showed that without THOROUGH error minimisation, we could “prove” with 95% confidence that gravity was acting to push us away from the earth, via accumulated errors in the experiment.
Now any experiment involving biology has many more factors than that. I’m not saying we should incorporate relativistic effects into biological studies (although …), just at least thoroughly evaluate the obvious and achievable sources of confounding error.
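To put rough numbers on the accumulation point (a sketch; how the errors actually combine depends on whether they're systematic or random):

```python
from math import sqrt

# Worst case: 40 error sources of 0.1% each, all pushing the same
# direction, stack linearly. If instead they're independent and random,
# they combine in quadrature (root-sum-square) and partially cancel.
n_sources, per_source = 40, 0.001

worst_case = n_sources * per_source              # 4.0%
random_combined = sqrt(n_sources) * per_source   # ~0.63%

print(f"All one direction:  {worst_case:.1%}")
print(f"Independent random: {random_combined:.2%}")
```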
The sad part of it is that many researchers in the diet, health and disease fields aren’t interested in finding errors. They’re more interested in explaining them away.
I must say that I am good at maths, and I took a 100-level statistics paper last year. At the beginning it was easy, but it got harder, and I scraped through with a C+.
Statistics is hard and it doesn’t comfort me that I now use software to get the results anyway.
So while I can spot obvious manipulation of statistics I have to put trust in someone else (or computers) that the numbers they are giving are accurate. I’m a little annoyed that there was no discussion of statistics and manipulation in the paper, just equations.
But where do we get most of these numbers? The media.
What did jolt me last year was a reminder from a criminology lecturer: the news media is not here to serve us, inform us or tell us the truth. The media is here to make money – never forget that.
Hi Tom, I also have a question I have been thinking about since seeing your documentary. I’m looking for the answer myself but you may already know after your research.
I’m eating a lower-carb diet, but I have a sweet tooth (actually a sweet jawful of teeth). So I consume things like diet soda and jellies for a sweet no-calorie hit.
Now if my body produces insulin based on sugar it detects in the bloodstream, diet sodas should be fine (ignoring all the badness in the chemicals).
But if my body produces insulin based on the taste of sweet things, or the artificial sweetener fools my body into thinking it is sugar, then I’m going to be creating insulin I don’t need.
Do you have any light to shed? Thanks either way.
From what I’ve read, the taste of anything sweet will produce an insulin spike in some people. It’s like a Pavlovian response: the tongue tastes a sweet drink or food, the brain decides sugar is coming down the pike, and starts releasing insulin. If there is in fact no sugar in what you’re eating, you can end up with low blood sugar – insulin tries to knock down sugar that isn’t there.
For others, this doesn’t seem to happen. I guess it’s an individual thing. But I’m cutting artificial sweeteners anyway, because I’m trying to limit the amount of lab-created substances I ingest.
Do you know of a good site talking about statistics, etc.?
I have an assignment to do where I have to critically appraise a research paper. I am doing a unit on evidence-based practice and this is one of the assignments we need to do. Very hard to understand the results that they are presenting. I haven’t done any statistics before.
You could check these to start:
http://en.wikipedia.org/wiki/Confidence_interval
http://en.wikipedia.org/wiki/Statistical_significance
http://en.wikipedia.org/wiki/P-value
The book “Super Crunchers” has some good chapters on statistics, pretty readable.
Mike Eades has also written some excellent articles about how to analyze studies. Here’s one.
Here’s another.
And another. And one more.
If anyone else knows of a good site that explains statistics in plain English, shout it out.
Thanks Tom!
De nada. Hope the links help.
P.S. I’ve read all of Dr Eades’ posts – he’s great.
Indeed. He has a rare ability to explain medical science to non-doctors.
@Sue —
Found a nice article about study statistics. The author manages to explain the concepts in plain English.
http://www.sportsci.org/resource/stats/pvalues.html
His home page is http://www.sportsci.org/resource/stats/index.html.
Lots of links to his articles, slideshows, etc. I’d say the guy is a real treasure trove for anyone who wants to learn about statistical analysis.
(Tony, you’ll be glad to know he’s a countryman of yours. You must grow good writers there.)
–Tom
There’s also an excellent book on the subject with the telling title “How to Lie with Statistics.”
And it’s not that difficult to do. According to my research, 99% of the people who died from heart disease last year had eaten at least a pound of carrots in the previous 10 years.
New Zealand: more than sheep, cheese and hobbits.
Haven’t seen this guy, but I have been to a couple of lectures by his contemporaries mentioned in his acknowledgements. New Zealand has some phenomenal university lecturers for its size. With two universities commonly in the top 100 globally and a population of 4 million, we don’t do too badly.
Sheep, cheese, excellent universities, good scenery from what I’ve seen in flicks … maybe I should emigrate. The government there isn’t busy running up any trillion-dollar deficits and saving the bill for future generations, is it?
Yeah we have it pretty good. There are a number of problems, but taken relative to anywhere else, I think I’d rather be here. As far as the debt burden, it is nowhere near as bad in the southern hemisphere, partly due to the failure of the banks to jump on the sub-prime bandwagon.
Then again, this was in the news today, so we aren’t exempt from rubbish.
If you can’t be bothered clicking the link, let me summarise…
OMG HEALTHY FAST FOOD CHOICES ARE MAKING PEOPLE FAT!
At least it wasn’t local research…
Looks like McDonald’s better dump those salads and save the nation’s health.
Great post Tom. Thanks!
My pleasure, and as always, it’s good to hear from you.
@TonyNZ,
Definitely agree that more effort should be put into identifying sources of error. I would actually call these “areas of uncertainty”. The way stats people think of “error” is in terms of random variables, which potentially excludes many types of information (like whether or not the coin you’re about to flip has two heads).
That said, in many cases (such as epidemiological studies) it is impractical to nail down more than a very small fraction of the specific uncertainties and their interrelationships. One should still include this information in the analysis, i.e. “I know there’s thousands of variables I haven’t measured, and I have little to no clue how they are related to the hypothesis under test”. This is generally ignored, which is why statistical associations from observational studies often seem far more compelling than simple common sense would indicate.
I like this theme you are using… what is it?
Madingo.
This author cannot even do arithmetic himself. He says: “And in fact, since 1.3/0.3 = 4.33, you could say that 1.3 is over 400% higher.”
Wrong. You could say that 1.3 is 333% higher, not “over 400% higher”.
Some of his other hyperbole is specious as well.
Yes, learn math so you don’t have to learn from this fellow.