_Like Selling Your Soul to the Devil, Mozaffarian tells of how some doctors will make more money from a patient who dies after undergoing unnecessary treatment. So much so that they call it their "death package."_

—The Huffington Post, June 1, 2008

# **INTRODUCTION**

In 2004, David Cutler, a prominent health economist at Harvard, published a critique of "managed care" in the _New England Journal of Medicine_. The study, funded by the Robert Wood Johnson Foundation, is generally considered the most influential piece of research on the relationship between health care and health since the 1960s. The title says it all: "How Much Should We Spend on Health Care? Less Than We Do." Cutler and his colleagues found that "people who are sicker use more health care and more health care services of lower value."

A group of eight physicians and professors from the University of Chicago looked into Cutler's findings and found even worse news. In a paper published in the _New England Journal of Medicine_, these scientists added that "physicians and policymakers should acknowledge the limits of our knowledge and make decisions based on hard evidence, not anecdotes."

The Cutler study was a major turning point in America's struggle with health costs. Many political leaders believe the study made a convincing case for slashing government health spending, along with Medicare and Medicaid. But the Cutler study doesn't tell the whole story. Cutler and his colleagues failed to account for a huge expense, and the failure of their paper became clear only when we analyzed their data. For all of the study's talk of "evidence-based medicine," Cutler's findings couldn't survive the most basic of scientific tests. The Cutler study wasn't scientific at all.

We set out to investigate the methods behind the Cutler study, and we quickly discovered an elaborate series of lies, distortions, and distortions of distortions. We found that Cutler and his coauthors had manipulated their data to produce "evidence" that backed up their political views. By doing so, they managed to sell America a false bill of goods. That's right: we call it _the false bill of goods_.

**IT IS ONE OF THE MOST INFLUENTIAL TURN-OF-THE-CENTURY STUDIES**, a scientific paper that gave rise to an entire subculture of academic hucksters selling bogus "evidence" to the public. And in many ways, it _was_ a major turning point in America's struggle with health costs. But it was a turning point for the wrong reasons. In its original form, the paper is an important lesson in how bad studies and bad journalism can be used to justify bad policy. In our retelling, we will show how Cutler and his coauthors used statistical sleight of hand to paint the picture of an American health care system in crisis. As we shall see, in its final form, the Cutler study is not only a statistical fraud but a political fraud as well.

# **THE EQUIVALENT OF A BOX OF CANDY**

Let's start with what is technically wrong with the Cutler study. Cutler and his coauthors ran their experiment on four states in the late 1990s. The study controlled for differences in demographics and hospital prices in its data and found that the price of health care was lower in states with more competition among insurance companies. The researchers said the results showed "lower expenditures in [the states they studied] for health care goods and services of lower average medical intensity," using terms like "lower expenditures" and "medical intensity" loosely. Those words and phrases sound nice.
But buried beneath all of those soothing words, there was a big problem. In the Cutler study, the researchers made a mistake—a very big mistake. They picked their data without checking their work. They knew what results they wanted to see, so they made sure to control for exactly what they wanted to see in their data. How do we know? You just have to run the numbers. In this case, those numbers show that what Cutler et al. did was wrong, dishonest, and, well, let's just say it's not a good thing to do if you want to keep a job in the Harvard economics department.

In any case, we will explain how Cutler and his colleagues pulled off their fraud in Chapter 2. For now, we just want to emphasize that Cutler and his coauthors had a clear goal—they wanted their data to show a relationship between competition and health care prices. But they had only one data point that showed that relationship. They didn't have any data that showed the lack of a relationship between competition and health care prices.

There are two ways to produce a false bill of goods—or, in our case, a "false positive." The first method is easy: pick a state where there's plenty of competition, go look at hospitals' billing rates, pick another state, and so on, until you find one where you can see a difference. Then you write your story about the lack of competition leading to high medical prices.

The second method is a little harder to pull off. You can't just go around picking places where prices are high. The higher the prices, the more likely you are to get caught. But there's always a way around it.

The story Cutler and his coauthors were looking for was the one about hospitals charging too much in states without competition. It's easy to make that story look convincing. Just pick a state where the prices are high. That's easy. But what if your results don't come out that way? How do you deal with it? You can't just pick one or two states to get more favorable results—it's only cheating if you have the results _you_ want before you begin. So what's the way around it? You need a bigger and better study, one that looks at more data. With more data, you can find enough spots in the world where hospitals don't charge too much—you just have to find those spots and prove they're the exception, not the rule.

This is where it gets really tricky. To pull off your story, you have to fool a lot of people. That's what a study has to do: fool as many people as possible into believing your "evidence." So let's review how Cutler and his coauthors did that.

First, they chose a small, well-defined population. They decided that their paper should focus on the people they call "Medicare beneficiaries." That means, of course, that if you're under 65, you can't really take part in their experiment.

Second, they picked a small number of hospitals—10 hospitals in total, spread out over four states. These hospitals were more likely to have better data than most hospitals across the country.

Third, they focused on a specific category of health care. The authors looked only at "hospital inpatient" data. This was great, because they wouldn't have to go looking for all of the places where hospitals might be charging too much.

Fourth, they picked a single state, Tennessee, as their study's base—because that state, at least initially, was known to have high prices. So the odds of Cutler's study showing any relationship were higher.
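Before moving on, it may help to see just how cheap the first trick, cherry-picking, really is. The Python sketch below is ours alone; it has nothing to do with the actual Cutler data and uses invented numbers in which prices have no real connection to competition. Reporting only the hand-picked states still manufactures the desired "evidence."

```python
# Illustrative only: a toy simulation of the cherry-picking described above.
# Nothing here comes from the Cutler paper; all numbers are made up.
import random

random.seed(0)

# Fifty hypothetical states. By construction, the average inpatient price
# is pure noise with no real connection to insurer competition.
states = []
for _ in range(50):
    competition = random.uniform(0.0, 1.0)   # hypothetical "competition score"
    price = random.gauss(100.0, 15.0)        # hypothetical price index, independent of competition
    states.append((competition, price))

def avg_price(sample, high_competition):
    """Average price among states on one side of the competition cutoff."""
    prices = [p for c, p in sample if (c > 0.5) == high_competition]
    return sum(prices) / len(prices) if prices else float("nan")

# The cherry-pick: keep only the states whose numbers happen to tell the
# story we want, and quietly drop everything else.
cherry_picked = [
    (c, p) for c, p in states
    if (c > 0.5 and p < 95.0) or (c <= 0.5 and p > 105.0)
]

print("All 50 states:")
print(f"  high competition: {avg_price(states, True):.1f}")
print(f"  low competition:  {avg_price(states, False):.1f}")
print("Cherry-picked states only:")
print(f"  high competition: {avg_price(cherry_picked, True):.1f}")
print(f"  low competition:  {avg_price(cherry_picked, False):.1f}")
```

Run on all fifty simulated states, the two groups look essentially the same; run on the hand-picked subset, a dramatic "competition lowers prices" gap appears out of nothing.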
Next, they had to set a baseline for their experiment. What did they call "average" medical intensity?

They chose medical intensity as their baseline because it was _not_ the same as health care prices. Now you might think that Cutler et al. didn't care that "average medical intensity" was not the same as "health care prices." But you'd be wrong. For the same reason that you can't just go to a state and pick places where prices are high, you can't just pick an "average price." How would you choose one? You have to pick something else—something that isn't the same as prices. Cutler and his coauthors chose medical intensity as their baseline, a term used to describe hospitals' "average" charges in every state. They made the reasonable assumption that in the average state, health care prices don't vary wildly from this baseline: "In this chapter we will use a uniform level of medical intensity... for each State in order to isolate the effect of competition."

But there's another reason Cutler and his coauthors chose medical intensity as their baseline. Medical intensity is a standard way of quantifying the _process_ of health care. If you walk into a hospital with a cut on your leg, you're not going to be charged $2 million, and the hospital isn't going to charge you $2 million for a cold either. You're going to be charged what it costs the hospital to treat its "average patient" in each category of medical attention. That's the baseline—the average.

This fact is often lost on reporters who write about health care. But as a simple matter of economics, price is not a function of medical intensity. Health care products vary in cost depending on what product or service you are buying; they differ in price because they are different products. Medical intensity is not one of those products. It is one way of quantifying how much it costs a hospital to deliver a service, and it is the _baseline_—the level of medical intensity that every hospital starts out with.

This difference between what prices actually are and what medical intensity is should sound familiar. But here's what happens: reporters will sometimes forget this difference when they