Archive for the “Good Science” Category

A dietary shift is definitely happening. Here’s how I know that for sure:

The big-money bankers are on board.

In case you didn’t see it in the comments section of my most recent post, Credit Suisse just published an 84-page report titled Fat: The New Health Paradigm. I skimmed it and was impressed, but my initial response was why is a bank publishing this?

The answer (echoed by a handful of readers) is that Credit Suisse is an investment bank, and their reports are intended to inform investors of economic trends. If there’s a big movement among consumers to embrace natural fats and cut back on grains and vegetable oils, that will of course have an economic impact. Probably not a good time to invest in General Mills.

Two of the bullet points from the document’s summary section make that clear:

  • What is the outlook? Globally, we expect fat to grow from the current 26% of calorie intake to 31% by 2030, with saturated fat growing the fastest and going from 9.4% of total energy intake to 13%. This implies that fat consumption per capita will grow 1.3% a year over the next fifteen years versus a rate of 0.9% over the last fifty years. We expect saturated fat to grow at 2% a year versus a historical rate of 0.6% a year; monounsaturated at 1.3% a year versus 1.0%; polyunsaturated omega-6 to decline 0.2% a year versus a 1.3% past growth rate and polyunsaturated omega-3 to grow at 0.7% a year versus 1.6% a year over the last 50 years.
  • Among foods, the main winners are likely to be eggs, milk and dairy products (cheese, yogurt and butter) and nuts with annual rates of growth around 2.5-4%. The losers are likely to be wheat and maize and to a lesser extent solvent-extracted vegetable oils. Meat consumption per capita should grow at 1.4% a year and fish at 1.6% supported by a fast expanding aquaculture industry.

But there’s waaaaay more to the report than predictions of what consumers will be buying or not buying in the near future. There are explanations of the various types of fats, a history of fat in the human diet, and a history of the anti-fat hysteria that took hold in the 1960s and became official policy in the 1980s. There’s a lovely, concise section that looks at the evidence (more like lack of evidence) that fat causes heart disease and obesity. There’s a similar section on the health effects of red meat. And of course, there are sections on the recent shift in consumer attitudes about fat.

I’m still reading the thing (since I have a full-time job and all that), but here’s a sample of other bullet points from the opening summary:

  • Triangulating several topics such as anthropology, breast feeding, evolution of primates, height trends in the human population, or energy needs of our various vital organs, we have concluded that natural fat consumption is lower than “ideal” and if anything could increase safely well beyond current levels.
  • The 1960s brought a major change in the perception of fat in the world and particularly in the U.S., where saturated fat was blamed for being the main cause behind an epidemic of heart attacks. We will see that it was not saturated fat that caused the epidemic as its consumption declined between 1930 and 1960. Smoking and alcohol were far more likely factors behind the heart attack epidemic.
  • Saturated fat has not been a driver of obesity: fat does not make you fat. At current levels of consumption the most likely culprit behind growing obesity level of the world population is carbohydrates. A second potential factor is solvent-extracted vegetable oils (canola, corn oil, soybean oil, sunflower oil, cottonseed oil). Globally consumption per capita of these oils increased by 214% between 1961 and 2011 and 169% in the U.S. Increased calories intake—if we use the U.S. as an example—played a role, but please note that carbohydrates and vegetable oils accounted for over 90% of the increase in calorie intake in this period.
  • A proper review of the so called “fat paradoxes” (France, Israel and Japan) suggests that saturated fats are actually healthy and omega-6 fats, at current levels of consumption in the developed world, are not necessarily so.
  • Doctors and patients’ focus on “bad” and “good” cholesterol is superficial at best and most likely misleading. The most mentioned factors that doctors use to assess the risk of CVDs—total blood cholesterol (TC) and LDL cholesterol (the “bad” cholesterol)—are poor indicators of CVD risk. In women in particular, TC has zero predictive value if we look at all causes of death. Low blood cholesterol in men could be as bad as very high cholesterol. The best indicators are the size of LDL particles (pattern A or B) and the ratio of TG (triglycerides) to HDL (the “good” cholesterol). A VAP test to check your pattern A/B costs less than $100 in the U.S., yet few know of its existence.
  • Based on medical and our own research we can conclude that the intake of saturated fat (butter, palm and coconut oil and lard) poses no risk to our health and particularly to the heart. In the words of probably the most important epidemiological study published on the subject by Siri-Tarino et al: “There is no significant evidence for concluding that dietary saturated fat is associated with an increased risk of CHD or CVD.” Saturated fat is actually a healthy source of energy and it has a positive effect on the pattern A/B.
  • The main factor behind a high level of saturated fats in our blood is actually carbohydrates, not the amount of saturated fat we eat.

Wow. Great stuff … from a bank.

In case you had any doubts that most doctors don’t keep up with the latest diet and health research, the report includes this finding:

We conducted two proprietary surveys of doctors, nutritionist and consumers to understand better their perception of the issues we mentioned previously. All three groups showed superficial knowledge on the potential benefits or risks of increased fat consumption. Their views are influenced significantly more by public health bodies or by WHO and AHA rather than by medical research. Even on the “easy” topic of cholesterol, 40% of nutritionists and 70% of the general practitioners we surveyed still believe that eating cholesterol-rich foods is bad for your heart.

Go figure. The nutritionists are more likely than doctors to know that cholesterol has been found not guilty of causing heart disease.

In term of macronutrients, 45% of the doctors surveyed said that their perception of protein has improved, versus only 5% saying it has worsened; 29% of the doctors said that their perception of fat has improved versus only 7% saying it has worsened; and 15% only said that their perception of carbohydrates has improved versus 26% saying it has worsened.

Answering what makes you fat if eaten in large quantities, the doctors correctly pointed to sugar and carbohydrates (32% and 26%); fat and saturated fats are not as bad (23% and 16%) and protein collected only 2% of the responses.

However, the doctors believed that the best diet for weight loss is a low calorie one (65%), followed by low carbohydrate (36%) and low fat (7%). Among nutritionists, 42% prefer the low carbohydrate diet, against 30% for the general practice group.

Let’s focus on the positive. Yes, nearly two-thirds of doctors surveyed believe low-calorie diets are best for weight loss, but only 7% recommended a low-fat diet, versus 36% who recommended a low-carb diet. I’d wager a large sum that 15 or 20 years ago, more doctors would have been recommending a low-fat diet than a low-carb diet. It’s progress. And I was pleasantly surprised to see that 42% of the nutritionists recommend a low-carb diet.

I plan to read the entire report when I can. If anything jumps out at me as particularly interesting, I’ll post about it.

In the meantime, I see this report as another sign that the arterycloggingsaturatedfat! paradigm is dying out. The American Heart Association doesn’t want it to happen, The Guy From CSPI doesn’t want it to happen, the USDA Dietary Guidelines Committee doesn’t want it to happen, and countless makers of low-fat and low-cholesterol food-like products don’t want it to happen. But it’s happening.

And you can take that to the bank.



My previous post quoted from a study in which researchers induced rats to overeat and gain weight by injecting them with insulin – a.k.a. the “acute appetite suppressant,” according to some.

I’m not usually a big fan of rat studies because of how they’re conducted. Researchers will feed rats a high-fat (ahem) “Atkins” diet of frankenfats, casein and corn starch, then pretend the results have some bearing on how a diet of meats and eggs will affect the health of human beings. The study I cited in my last post, however, wasn’t a diet study. It was a study of how a hormone affects appetite and weight. The insulin was injected directly.

Based on links in the comments, I looked for and found a handful of studies that demonstrate what insulin does to human subjects. Let’s take a look.

In this study, diabetes patients were treated either with 1-2 injections of insulin per day (called the conventional therapy by the researchers) or multiple daily injections (called intensive therapy by the researchers). Here are the results:

Intensively treated patients gained an average of 4.75 kg more than their conventionally treated counterparts (P < 0.0001). This represented excess increases in BMI of 1.5 kg/m(2) among men and 1.8 kg/m(2) among women. Growth-curve analysis showed that weight gain was most rapid during the first year of therapy. Intensive therapy patients were also more likely to become overweight (BMI >or=27.8 kg/m(2) for men, >or=27.3 kg/m(2) for women) or experience major weight gain (BMI increased >or=5 kg/m(2)). Waist-to-hip ratios, however, did not differ between treatment groups. Major weight gain was associated with higher percentages of body fat and greater fat-free mass, but among patients without major weight gain, those receiving intensive therapy had greater fat-free mass with no difference in adiposity.

So people treated more aggressively with insulin ended up gaining about 10 pounds more than those treated with less insulin. For many, the difference was more body fat. For others, it was a mix of more body fat and more lean mass. Well, no surprise there. Insulin spurs growth. That’s why some body-builders shoot the stuff. But if you have a tendency to get fat, higher insulin will make you fatter.

In this study, researchers treated diabetics with a sulphonylurea (a drug that stimulates the body’s own insulin production), or with insulin directly, or with diet (which was labeled conventional therapy). The goal was to improve glucose control, not weight. But weight changes were included in the results:

Weight gain was significantly higher in the intensive group (mean 2.9 kg) than in the conventional group (p<0.001), and patients assigned insulin had a greater gain in weight (4.0 kg) than those assigned chlorpropamide (2.6 kg) or glibenclamide (1.7 kg).

Subjects who were either stimulated to produce more insulin or given insulin directly gained more weight than those treated with diet, and those given insulin directly gained the most.

In this study, researchers added three different insulin therapies to metformin:

In an open-label, controlled, multicenter trial, we randomly assigned 708 patients with a suboptimal glycated hemoglobin level (7.0 to 10.0%) who were receiving maximally tolerated doses of metformin and sulfonylurea to receive biphasic insulin aspart twice daily, prandial insulin aspart three times daily, or basal insulin detemir once daily (twice if required). Outcome measures at 1 year were the mean glycated hemoglobin level, the proportion of patients with a glycated hemoglobin level of 6.5% or less, the rate of hypoglycemia, and weight gain.

And the conclusion based on the results:

A single analogue-insulin formulation added to metformin and sulfonylurea resulted in a glycated hemoglobin level of 6.5% or less in a minority of patients at 1 year. The addition of biphasic or prandial insulin aspart reduced levels more than the addition of basal insulin detemir but was associated with greater risks of hypoglycemia and weight gain.

More insulin resulted in lower glycated hemoglobin (a.k.a. what’s measured in an A1C test as an indicator of average glucose levels over time), but also in more weight gain.

In this study, researchers put 50 subjects on a weight-loss diet after running a series of lab tests. They wanted to identify which factors predicted success or failure in losing weight and keeping it off. Here’s what they found:

On the basis of body weight trajectories, 3 subject clusters were identified. Clusters A and B lost more weight during energy restriction. During the stabilization phase, cluster A continued to lose weight, whereas cluster B remained stable. Cluster C lost less and rapidly regained weight during the stabilization period. At baseline, cluster C had the highest plasma insulin, interleukin (IL)-6, adipose tissue inflammation (HAM56+ cells), and Lactobacillus/Leuconostoc/Pediococcus numbers in fecal samples. Weight regain after energy restriction correlated positively with insulin resistance (homeostasis model assessment of insulin resistance: r = 0.5, P = 0.0002) and inflammatory markers (IL-6; r = 0.43, P = 0.002) at baseline.

The conclusion:

The resistance to weight loss and proneness to weight regain could be predicted by the combination of high plasma insulin and inflammatory markers before dietary intervention.

Yes, there was more going on here than insulin levels – inflammation and a difference in gut bacteria. But the point is that those with high plasma insulin (the “acute appetite suppressant”) lost less weight and regained it more quickly. I don’t think their appetites were suppressed very effectively.

In this study, diabetics were treated with an “intensive program” of insulin for six months. Once again, the goal was glucose control, not weight control. But weight did change:

During treatment, mean serum insulin levels increased from 308 ± 80 to 510 ± 102 pM, while body weight increased from 93.5 ± 5.8 to 102.2 ± 6.8 kg.

After six months, the “intensive program” of insulin led to an average weight gain of just over 19 pounds.

I suppose the explanation from the “insulin is an acute appetite suppressant” crowd would be that these studies were conducted on diabetics who are by definition insulin resistant. Right. And so are a helluva lot of people out there who are overweight and looking for a way to drop the pounds. We see it over and over in these studies: higher insulin, whether produced internally or given as a treatment, leads to more weight gain.  Dr. Lustig apparently speaks the truth when he says I can make anyone fat with enough insulin.
(Lustig is an endocrinologist, in case you’ve forgotten.  Hormones are his specialty.)

So for the people most desperate to lose weight, the “insulin is an acute appetite suppressant” notion is clearly a load of bologna. If their appetites were acutely suppressed, they wouldn’t be obese; they’d be anorexic. As part of their weight-loss strategy, they need to bring down their insulin levels. Period.



Many of you may recall a kerfuffle raised by a controversial article claiming that we’ve got it all wrong about insulin. Far from being a driver of weight gain, according to the article, insulin is actually an acute appetite suppressant. It makes us less hungry, by gosh, not more.

That article seems to show up on diet-related social media sites on a regular basis. A few weeks ago, some born-to-be-lean jock who joined the Fat Head Facebook group for the sole purpose of being an annoying jackass posted a link to it. I responded, which started a back-and-forth debate.

If insulin is such a fabulous appetite suppressant, I asked, then explain this: type 2 diabetics produce high levels of insulin. They’re also far more likely than other people to be overweight or obese. So why isn’t the acute appetite suppressant causing them to eat less and lose weight?

The response (I’m paraphrasing here): Well, ya see, ya big dummy, type 2 diabetics are insulin resistant by definition. So they don’t experience the effects of the acute appetite suppressant.

Hmmm. Okay, then, explain this one: People will start eating foods that provoke a high insulin response — a big tub of popcorn, or a big bag of chips — swearing to high heaven they’re only going to eat, say, half. Then they eat the whole thing. Then after cursing at themselves for not having any discipline, they go get more. Why isn’t the acute appetite suppressant kicking in and stopping them from eating way more than they intended?

The response (I’m paraphrasing again here): Well, ya see, ya big dummy, the food-reward properties of the popcorn or chips override the appetite-suppressant effect of the insulin.

Ahh, I see. So there’s really no need to adopt a diet that reduces your insulin levels, because insulin is actually an acute appetite suppressant … unless 1) you’re insulin resistant (like so many obese people), or 2) when you reach for high-carb foods, you choose the ones that taste good.

Well, that is fabulous news indeed for all the obese people out there who aren’t insulin resistant and prefer carbohydrate foods with little or no flavor.

I chose not to engage in an endless online debate because I had more important things to do, like write software code for work, take the girls to their piano lessons, and rearrange my shoes by size, color and length of service.

But while digging up some research for the book project, I stumbled across a study abstract that caught my attention because it mentioned something about using insulin to induce weight gain. So I called upon one of my super-secret, deeply embedded, password-protected double-agents in academia to get a copy of the full paper.


It’s the normal rats I’m interested in. Here are some quotes from the paper:

To induce overeating and weight gain in normal rats without brain damage, we used periodic injections of long acting insulin. Measurements of body weight and ad-lib food (powdered Purina chow) and water intake were taken on 23 Sherman female rats, housed at 80 ± 2° F. during a 2-wk. control period, 2 wk. of insulin treatment, and a 2-wk. recovery period. The insulin dose was 8 units per injection for the first 3 days, then 12 units thereafter.

Boy, those researchers must have been disappointed. Here they were, hoping to induce overeating and weight gain, and yet they injected the rats with an acute appetite suppressant. Big mistake, obviously.

All rats given protamine zinc insulin increased their food intake, presumably in response to hypoglycemia. In the short term experiment, 11 of the 23 rats survived by consuming nearly twice their normal daily food intake.

The rats who didn’t survive apparently died because they couldn’t eat enough to keep their little bodies fueled while the insulin drove down their blood sugar and locked up their fat cells. They were eating like crazy, but starving at the cellular level.

That reminds me of the conclusion from another paper in my files: appetite is largely a function of how much fuel is available at the cellular level, not how much fuel is consumed.

Their average weight gain was 58 gm. during the 2 wk. of insulin treatment, as compared to 13 gm. during the previous 2 wk.

This confirms the original observations of Mackay et al. (1940) and extends their results to indicate that marked obesity as well as overeating can be produced with insulin.

So marked obesity and overeating can be produced with injections of the acute appetite suppressant. Got it. Well, perhaps the rats injected with insulin just happened to find that powdered Purina chow waaaay more rewarding all of a sudden. Maybe the researchers added salt.

Every rat taken off the insulin regime after 2 wk. ate subnormal amounts of food and lost weight precipitously. On the average they were anorectic for 4 days, and lost 46 gm., which was 79% of the weight previously gained under the influence of insulin.

Researchers stopped injecting the rats with the acute appetite suppressant, and the rats responded by eating less and losing weight.

Boy, that almost sounds like what happened when I jettisoned a lot of insulin-producing foods from my diet. Reduce circulating levels of that acute appetite suppressant, and I’m just not as hungry.



Dr. Andreas Eenfeldt is 6’7” tall – a fact I’ve managed to work into two pre-cruise roasts, as well as the speech I gave on this year’s cruise. You can always spot the man in a crowd, unless he happens to be standing amidst an NBA team.

You might expect that his brother is equally tall, or that he’ll have a son someday who’s as tall or taller. But that expectation would be wrong. His brother (an affable guy I’ve met on two cruises) is 6’3” – still tall, but four inches shorter than Andreas. That’s because Andreas is an anomaly. He’s proof of the statistical principle that given enough chances, chance happens. And what happened in his case was probably a genetic quirk that produced some extra human growth hormone.

In his book How Not To Be Wrong: The Power of Mathematical Thinking, math professor Jordan Ellenberg explains how anomalies can produce study results that are statistically significant, but nonetheless due entirely to chance. I recounted those points in a previous post.

In a later chapter, Ellenberg describes how we can be fooled by the flip-side of given enough chances, chance happens: regression to the mean. After chance happens, things tend to return to normal.

Ellenberg begins the chapter (titled The Triumph of Mediocrity) by recounting the work of a professor of statistics named Horace Secrist. After examining data on hundreds of businesses in the 1920s, Secrist wrote an influential paper with a rather startling conclusion: the competitive forces of American capitalism lead to mediocrity in business. Secrist’s evidence was that when businesses produced record-high profits one year, they tended to produce average profits in subsequent years. Likewise, firms that produced record-low profits tended to show higher profits in subsequent years. Therefore, something about capitalism must produce middle-of-the-road mediocrity, Secrist concluded.

But as Ellenberg explains, what Secrist’s data demonstrates isn’t an inherent flaw in competitive capitalism, but the simple fact that anomalies tend to be followed by a regression to the mean. A business can have a bang-up year for any number of reasons: a disaster that affects the competition, a temporary surge in demand, etc. When the temporary conditions that produced the record profits go away, so do the record profits.  It’s business as usual again.  That doesn’t mean the company’s management drifted into mediocrity.

Or on the subject of height:

People drawn from the tallest segment of the population are almost certain to be taller than their genetic predisposition would suggest. They are born with good genes, but they also got a boost from the environment and chance. Their children will share their genes, but there’s no reason the external factors will once again conspire to boost their height over and above what heredity accounts for. And so on average, they’ll be taller than the average person, but not quite so tall as their beanpole parents. That’s what causes regression to the mean: not a mysterious mediocrity-loving force, but the simple workings of heredity mixed with chance.

I don’t know how Dr. Eenfeldt would feel about being called a beanpole, but you get the idea. It’s not likely that he’ll produce a son who is 6’7” or taller.
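Just for fun, I cooked up a little simulation of Ellenberg’s model. All the numbers here are invented for illustration: give everyone a heritable height component plus independent environmental luck, pick the tallest 1%, then give their “children” the same genes but fresh luck.

```python
import random

random.seed(42)

N = 100_000
# Ellenberg's model: height = heritable component + independent
# environmental "luck." Numbers are invented for illustration.
genes = [random.gauss(69, 2) for _ in range(N)]        # inches
parents = [g + random.gauss(0, 2) for g in genes]

# Pick the tallest 1% of the population...
cutoff = sorted(parents)[int(N * 0.99)]
tall_idx = [i for i in range(N) if parents[i] >= cutoff]
tall_parents = [parents[i] for i in tall_idx]

# ...and give their children the same genes but fresh luck.
children = [genes[i] + random.gauss(0, 2) for i in tall_idx]

avg = lambda xs: sum(xs) / len(xs)
print(f"tall parents, average height:   {avg(tall_parents):.1f} inches")
print(f"their children, average height: {avg(children):.1f} inches")
# The children come out taller than the 69-inch population average
# (good genes), but shorter on average than their beanpole parents
# (the luck doesn't repeat). No mediocrity-loving force required.
```

Run it and the children land in between: taller than average, shorter than their parents. Heredity mixed with chance, just as Ellenberg says.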

Ellenberg points out that regression to the mean is also common in sports. An NFL running back has a record-setting year, scores a huge contract as a result, then never puts up those stratospheric numbers again. Grumbling fans assume that with all those guaranteed millions in the bank, the running back lost his desire or work ethic. In fact, it’s probably just a case of regression to the mean. The running back was never going to have another season like the record-breaking one, fat contract or not.

As a personal example that doesn’t involve fat contracts or money of any kind, I saw regression to the mean during the past five days, when I played several rounds of disc golf against my buddy Jimmy Moore. Last year I kept our scores in a spreadsheet. My average at the end of that week was 6 under par on the dot.  (It’s an easy course, by the way.)

In one game yesterday, I tossed a 10 under par … four strokes better than average, and my lowest score ever when playing against Jimmy. The next game (a mere 90 minutes later), I finished just two under par … four strokes worse than average.  When I tabulated all my scores for the five days we played, my average was … wait for it … 5.8 under par. I have good games and not-so-good games, but I always seem to drift back to right around 6 under par.

So what does all this have to do with diet and health? Plenty. Here’s an example from Ellenberg:

Almost any condition in life that involves random fluctuations in time is potentially subject to the regression effect. Did you try a new apricot-and-cream-cheese diet and find you lost three pounds? Think back to the moment you decided to slim down. More than likely it was a moment at which the normal up-and-down of your weight had you at the top of your usual range, because those are the kinds of moments when you look down at the scale, or just at your midsection, and say Jeez, I’ve gotta do something. But you might well have lost three pounds, apricots or no apricots, when you trended back toward your normal weight.

Okay, so maybe a loss of a few pounds that would have happened anyway fools people into continuing with a wacky diet for a while. That’s not a big concern in my opinion. The real concern is when regression to the mean fools doctors and researchers, or gives researchers a chance to fool doctors and the public at large.

Ellenberg doesn’t get into the subject, but Chris Masterjohn did in an article from 2011. You can read the entire article if you want the details, but here’s the main point: regression to the mean can exaggerate the reported efficacy of drugs and other treatments.

Suppose researchers are screening subjects for a trial on a cholesterol-lowering drug. Part of the screening process is a lipid panel. Because of the principle that given enough chances, chance happens, some people are going to have a spike in cholesterol on the day of the screening. So they’re now labeled as people with high cholesterol and enrolled in the study. When they’re screened again later, their cholesterol is lower – which it would have been anyway because of regression to the mean. But the lower number is attributed entirely to the drug.
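That selection effect is easy to demonstrate with a toy simulation (every number below is invented): give each person a stable “true” cholesterol plus day-to-day noise, enroll whoever happens to measure above a cutoff on screening day, then measure the same people again with no drug at all.

```python
import random

random.seed(1)

N = 100_000
CUTOFF = 240  # mg/dL enrollment threshold (arbitrary)

# True cholesterol is stable; each day's reading adds independent noise.
true_chol = [random.gauss(200, 25) for _ in range(N)]
screening = [t + random.gauss(0, 20) for t in true_chol]

# Enroll only the people whose screening-day reading was high...
enrolled = [i for i in range(N) if screening[i] >= CUTOFF]

# ...then measure them again later, with NO treatment in between.
followup = [true_chol[i] + random.gauss(0, 20) for i in enrolled]

avg = lambda xs: sum(xs) / len(xs)
print(f"enrolled group at screening: {avg([screening[i] for i in enrolled]):.0f} mg/dL")
print(f"same group at follow-up:     {avg(followup):.0f} mg/dL")
# Cholesterol "drops" with no drug at all: screening day selected for
# people whose random noise happened to spike high.
```

Give one of those groups a placebo and the other a drug, and the placebo group’s numbers come down too. That’s why the regression effect hides in the control arm of a well-randomized trial, and why it can masquerade as efficacy in a poorly randomized one.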

As Masterjohn points out, if it’s a large enough study and the subjects were properly randomized, the regression-to-the-mean effect should be roughly equal in both groups, because both groups would include equal numbers of people whose first screening occurred on a day when their cholesterol was spiking. But smaller studies and studies in which subjects weren’t carefully randomized are particularly prone to the regression-to-the-mean effect. To quote from Masterjohn’s article:

And thus we see that many published research findings are false. Some of these false findings exist because we would inevitably expect by the laws of probability for a small handful of well conducted, thoroughly reported, and appropriately interpreted studies to uncover apparent truths that are really false simply by random chance. This emphasizes the need to look at the totality of the data. Some will be false because of regression to the mean. This emphasizes the need to critically evaluate the data in each study.

The New Yorker ran an article in 2010 about the “decline effect” — the tendency of significant results reported in scientific experiments to end up less significant or even insignificant in practice or in further experiments:

All sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants.

The article partly blames the usual suspects, such as data manipulation and publication bias:

John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”

In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.

I’m sure Ioannidis is correct about the intentional distortions.  But the article also mentions apparently honest scientists who were surprised when they couldn’t reproduce their own results, using the same methods.  Makes me wonder if some of the decline effect is simply the result of regression to the mean.

Either way, the lesson is clear:  a lot of drugs are approved based on results that don’t hold up over time.  So when your doctor prescribes the latest wonder drug, keep in mind it may not be such a wonder drug after all.  As the New Yorker article says, in the field of medicine, the phenomenon [the decline effect] seems extremely widespread.

And if you’re getting ready to pick your team for a fantasy football league, I wouldn’t count on the players who had record-setting seasons in 2014 to repeat that performance.



My college physics professor once gave a lecture to a humanities class on the need for scientific literacy. At one point, he told us, “No matter what field you plan to go into, learn math. Math is how you know when you’re being lied to.”

I recently finished a book that makes the same point. How Not To Be Wrong: The Power of Mathematical Thinking was written by a math professor named Jordan Ellenberg who does a nice job of explaining mathematical concepts without causing the reader’s eyes to glaze over.

I liked the book, but before you run out and buy a copy, I should mention that much of the material it covers seems unrelated to the title. Yes, it’s interesting to learn how some MIT students crunched numbers and devised a plan to guarantee themselves payouts from the Massachusetts lottery under certain conditions, but the chapter won’t teach you how not to be wrong … unless you’re designing a lottery, that is.

That being said, there are several sections that are relevant for people interested in the health sciences. Rather than write one very long post about those sections, I figured I’d cover one or two topics in a short series of posts. So let’s start with a topic near and dear to my heart …

Statistically Significant

As I mentioned in my Science For Smart People speech, when most people say an event or a fact is significant, they mean it’s important or meaningful. But in the world of scientific studies, significant simply means that based on tried and true statistical formulas, the result is not likely due to chance. It’s important not to confuse the two meanings.

In science, significance is expressed as a p-value, which Ellenberg explains in the book. A p-value of .10 means that if chance alone were at work, you’d see results at least this striking about 10% of the time. For the results of a study to be called statistically significant, the p-value must be .05 or smaller. But again, significant doesn’t necessarily mean important.

Given a large enough sample size and enough data to crunch, scientists could say, for example, that cigar smokers have a higher rate of mouth cancer and that the difference is significant. But if the “significant” difference is one additional case of mouth cancer for every 250,000 people, most of us wouldn’t consider that meaningful or important. The actual odds of developing mouth cancer have barely changed at all.
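To make that concrete with a little code (a sketch of mine, not from the book): a plain z-test shows the same microscopic effect sliding under the .05 bar as the sample size grows.

```python
import math

def two_sided_p(z):
    # Two-sided p-value for a z statistic, via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

effect = 0.01  # a tiny effect: one hundredth of a standard deviation
for n in (100, 10_000, 1_000_000):
    z = effect * math.sqrt(n)  # one-sample z statistic for a mean shift
    print(f"n={n:>9,}  p={two_sided_p(z):.4f}")
```

With 100 subjects the effect is nowhere near significant (p ≈ 0.92); with a million subjects the very same effect is overwhelmingly “significant,” even though it’s still one hundredth of a standard deviation and nobody should care.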

Ellenberg makes the same point about the meaning of significant, then tags on some additional warnings for readers who don’t want to be bamboozled by media reports on the latest something-will-kill-you or something-will-save-you study. One of those warnings falls into the scientists are freakin’ liars category:

And all this assumes the scientists in question are playing fair. But that doesn’t always happen…. If you run your analysis and get a p-value of .06, you’re supposed to conclude that your results are statistically insignificant. But it takes a lot of mental strength to stuff years of work in the file drawer. After all, don’t the numbers for that one subject look a little screwy? Probably an outlier, maybe try deleting that line of the spreadsheet. Did we control for age? Did we control for the weather? Did we control for age and the weather? Give yourself license to tweak and shade the statistical tests you carry out on your results, and you can often get that .06 down to a .04.

Now imagine the numbers you’re crunching are for what was supposed to be a breakthrough drug and there are millions of dollars at stake. You get the idea.

But here’s what I consider the most important (and significant) point Ellenberg makes in the chapter: If the p-value is .05, that means that when chance alone is at work, results this impressive turn up only one time in 20, right? Right … which means given enough chances, I could end up with impressive results that are significant, but still due solely to chance.

Ellenberg asks us to imagine 20 scientists running independent experiments to determine if eating a particular color of jelly bean causes outbreaks of acne. In 19 of the experiments, the color of the jelly beans consumed makes no difference. But in one of the 20 experiments, the subjects who ate green jelly beans had more outbreaks of acne – and those results are significant, because the statistical odds of them being due to chance are just 5%.

The 19 scientists who found no difference grumble, light a cigar, toss back a scotch, stuff their papers in their desk drawers, and go write their next grant proposal. The one scientist who found a significant difference proudly publishes his results … and a day later, there are media headlines trumpeting the now-established “fact” that green jelly beans cause acne.

The significant result was due to chance. But as Ellenberg points out, given enough chances, chance happens. That’s why the significant results of many studies don’t hold up and can’t be replicated.
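You can check the “given enough chances, chance happens” arithmetic with a short simulation (mine, not Ellenberg’s). When a test is run on a true null hypothesis, its p-value is uniformly distributed, so the chance that at least one of 20 independent null experiments comes out “significant” is 1 - 0.95^20, or about 64%:

```python
import random

random.seed(42)

# Each jelly-bean experiment where color truly does nothing yields a
# p-value uniform on [0, 1]. Run many rounds of 20 such experiments and
# count how often at least one comes out "significant" (p < 0.05).
ROUNDS = 100_000
hits = sum(
    any(random.random() < 0.05 for _ in range(20))
    for _ in range(ROUNDS)
)

print(round(hits / ROUNDS, 2))    # simulated rate, close to the exact value
print(round(1 - 0.95 ** 20, 2))   # exact probability: 0.64
```

In other words, hand 20 labs a harmless jelly bean and the odds favor at least one of them producing a headline.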

So (and this is me talking, not Ellenberg) … now let’s think about how science is conducted for Big Pharma. Drug companies aren’t required to publish all their results, so they don’t. They aren’t required to share the raw data with other scientists, so they don’t. A good friend of mine has a brother who worked in Big Pharma and admitted to my friend, “We just keep running studies until we get the results we want.”

Given enough chances, chance happens. And if that p-value of .06 is the best we could get after multiple chances, well, perhaps a little tweaking here and there …

That’s why I don’t trust the results of studies funded by drug companies. Ellenberg doesn’t come out and say as much directly, but he does mention that industry-funded studies often can’t be successfully replicated.

Math is how you know – or at least have reason to suspect – you’re being lied to.

More mathematical-thinking examples from the book in future posts.



Well, Fat Heads, Tom had a long trip back from the Low Carb Cruise and I’m sure is preparing a great report, so that means I get to stay in the chair for another day. Wheee!

Okay, by going long I’m not referring to this guest-blogging stint. And I don’t mean like a long run. I mean thinking about a long, long time. Like evolutionary time and how it works into some of my recent (and not so recent) reading material and my model of reality.

Studying different ideas on how we got here, what it means, and how everything relates seems to occupy quite a bit of my daydreaming time these days. Things like nutrition and economics and how we developed as runners are all part of the same model. They fit together. They explain things in an internally consistent manner.

So I thought I’d share a few more books that have added to this model. The first I read a few years ago, and it dealt directly with some evolutionary ideas. The other two don’t explore evolutionary models directly, but deal with some of the modern fallout of not considering these realities.

Here’s the first one:

News flash — it’s highly likely, in spite of it being politically perilous to say so, that men and women are different, and in ways that are and will remain significant.

Sorry to just spring that on you.

But instead of just anecdotally compiling differences, Barbara and Allan Pease traveled the world talking to researchers studying the brain and evolutionary biology to illuminate, sometimes hilariously, just how and why we got that way.

Again, in deference to the politically correct times we live in, several of their sources insisted on some level of anonymity to avoid the wrath of the elite and jeopardy to their funding.

For example, men and women have different visual processing. Men tend to see better at a distance and be more “tunnel-visioned” — required for hunting on the savannah; whereas women tend to have better peripheral vision, which is important for detecting threats to their offspring. Women also tend to read expressions and body language better (i.e., “intuition”) for the same reason. On the humorous side, the authors give the example of a couple leaving a party where the woman asks “oh my God, did you see the looks those two women were giving each other!?!” with the natural male response of “huh?”

Auditory processing is different also, with men better able than women to detect the direction a sound is coming from, while women are more attuned to voices and inflections.

They spend some time talking about sexuality and how it can be affected by the levels of testosterone and estrogen the fetus is subjected to in the womb. This seemed related to the whole epigenetics field that is getting more attention, where it’s not just a matter of which chromosome pairs you have, but also how other factors affect expression of those genes. So sexuality, affected by both in utero and environmental factors, becomes not just an either/or proposition, but more of a “spectrum,” as, for instance, autism is now understood. Or in other words, maybe Monsanto and Big Soy created Caitlyn Jenner!

It was one of those books where you don’t necessarily think the authors proved every point they make, but every chapter makes you think. Agree or not, the writing is very entertaining and at times outright hilarious.

I hadn’t thought directly about the book for a long time, but when I read the next two books — really more something of a matched set — they seemed to tie back to this idea of fundamental differences that we ignore at our own peril:

These are both by the same author, Dr. Leonard Sax. He actually wrote Boys Adrift first, then followed up with Girls on the Edge. Sax doesn’t go way back into the evolutionary model of differences between boys and girls, but makes a strong case that they do indeed exist and that progressive society’s insistence on denying their existence is harming children of both sexes.

He talks about how the kindergarten experience that Tom and I had, and perhaps many of you, no longer exists. For us, that meant at age 5 we were spending half a day finger painting, gluing things together, having stories read, maybe working on some letter recognition, counting, and playing outside on the playground.

Today’s kindergarten is the equivalent of our old first or even second grade — all day long, heavily focused on reading and academics, with nowhere near the amount of physical activity, so another priority is “sitting still.”

The problem is, girls’ brains at age five are generally ready to begin reading. The reading capacity of a five-year-old boy’s brain is about the equivalent of a three-year-old girl’s. You couldn’t design a better system to frustrate young boys, convince them they’re dumb, and get them started hating school. Oh, and not being able to sit still is now a medically treated condition, i.e., “ADHD.”

[One interesting piece of research Sax cites in one of the books involved the classic psychology-class experiment where you give a bunch of kids toys, and the girls end up playing mostly with the dolls and some trucks, while the boys play almost exclusively with the trucks. Then you debate whether it's some innate preference or environmental/social conditioning. You know which answer you're supposed to get, right?

Researchers did the same experiment with monkeys. Guess what -- the female monkeys played mostly with the dolls and some of the trucks, and the male monkeys played almost exclusively with the trucks. The theory is that it goes to the different visual processing between males and females in primates.]

Sax covers other areas where each sex is being led astray. Boys, in addition to the feminization of school and the ADHD medication already mentioned, are constantly exposed via the popularity and ubiquity of video games to levels of depraved behavior (i.e., rape and murder of innocents are rewarded in the games) unheard of even in the hard-core porn of my generation.

Girls, meanwhile, are sexualized well before they’ve reached emotional maturity, subjected to a 24/7 cyberbubble that stunts the growth of a real identity, and pushed toward obsessions — whether with being thin, being the “brain,” being the athlete, etc.

As a common issue for both boys and girls, Sax hits a resounding bulls-eye with “Environmental Toxins.” Unfortunately, he hits the bulls-eye on the wrong target.

Or at least, he hits a single target to the exclusion of much bigger, sounder targets. I think most Fat Heads will be naturally open to the idea that the things we’re putting into, on, and around our bodies are having continuing bad effects on our collective health — not just in terms of the sundry diseases of civilization, but also in the realm of epigenetics. There’s soy and its estrogen-mimicking havoc, gluten, the ungodly amounts of sugar in the SAD, etc. Sax points out that girls are entering puberty months and even years earlier than just a couple of decades ago. Men’s sperm counts are lower, boys’ bones are more brittle, and research indicates that exposure to environmental estrogens makes females less female and males less male (again, see Ms. Jenner).

Sax seems to have a near-singular focus on the source of all these evils: plastic bottles, as in BPA. He has the studies and information to support his point, but I couldn’t help thinking, “Really? How about picking up a copy of Wheat Belly or something?” Is America’s problem, and our children’s health, really more endangered by a plastic bottle of soda than by the rest of the crap on the school lunch program menu? So I’m with him on the environmental-toxin idea — that we are literally playing roulette with our ancient genetic code — but think he’s missing a large part of the picture.

Other than that one issue, I thought both books were good reads, with some very good suggestions for interested parents and grandparents who want to raise kids into healthy, productive adults. That gives them value beyond adding a few more interesting ideas to my model of the universe. I already bought a copy of “Girls on the Edge” for our daughter, since the granddaughters are 5 and 7.

Hmmm. Looks like I could have also meant the length of this post when I said going long.

Ah well, I really enjoyed getting to man The Big Chair again for a while. Hope I gave you a couple of things to think about or add to your summer reading list. If nothing else, you got another great recipe from The Oldest Son out of the deal!


The Older Brother

