
Sunday, May 5, 2013

Houston, we have a problem

This has been an interesting week. The NEJM published findings from perhaps a once-in-a-lifetime unintentional randomized human trial, one which developed as a consequence of political expediency and scarce resources (The Oregon Experience). The state of Oregon found itself in a not-so-happy place in 2008 when it faced demand for Medicaid enrollment which the state simply could not raise the revenue to meet. The solution was to hold a lottery. Those who won the lottery were enrolled, and those who did not win were followed. The methods of the manuscript were reported as follows:
"Approximately 2 years after the lottery, we obtained data from 6387 adults who were randomly selected to be able to apply for Medicaid coverage and 5842 adults who were not selected. Measures included blood-pressure, cholesterol, and glycated hemoglobin levels; screening for depression; medication inventories; and self-reported diagnoses, health status, health care utilization, and out-of-pocket spending for such services. We used the random assignment in the lottery to calculate the effect of Medicaid coverage."
The results were summarized as follows:
" We found no significant effect of Medicaid coverage on the prevalence or diagnosis of hypertension or high cholesterol levels or on the use of medication for these conditions. Medicaid coverage significantly increased the probability of a diagnosis of diabetes and the use of diabetes medication, but we observed no significant effect on average glycated hemoglobin levels or on the percentage of participants with levels of 6.5% or higher. Medicaid coverage decreased the probability of a positive screening for depression (−9.15 percentage points; 95% confidence interval, −16.70 to −1.60; P=0.02), increased the use of many preventive services, and nearly eliminated catastrophic out-of-pocket medical expenditures.
CONCLUSIONS
This randomized, controlled study showed that Medicaid coverage generated no significant improvements in measured physical health outcomes in the first 2 years, but it did increase use of health care services, raise rates of diabetes detection and management, lower rates of depression, and reduce financial strain."
The pundits on both the right and left have had a field day, each spinning the results to justify whatever set of beliefs they already hold. Conservatives have looked at the results and believe they confirm their long-held view that expansion of Medicaid is not worth the substantial financial commitment. Liberals have focused on the lower rates of depression, the earlier diagnosis of diabetes, and the reduced financial strain to justify the expansion. However, I think Megan McArdle really identifies the aspects of this study which are most provocative and ultimately disturbing (McArdle). She points out that while various authors cite figures such as "44,000 people a year die from the lack of health insurance," there is little data to support this contention, and the Oregon study is not the first to run contrary to it. As she wrote:
"Ideally what you want to test the effect is not survey data, but a randomized controlled study: divide people into two groups, and give one of those groups insurance, while the other group stays uninsured. As you can imagine, it's hard to do. For one thing, you'd probably have trouble getting people to stay in the control group once you put them there. For another, it's going to be pretty expensive to insure a bunch of people just so you can see how many of them get sick.

Luckily for us, in 1971, Rand went ahead and did it anyway. Well, close. They took thousands of families and divided them up into five groups. Group one got totally free care. Groups two through four got "traditional" (for the time) insurance plans with various degrees of cost sharing, ranging from 20% to 95% (meaning that in Group 4, only 5% of your average expenses were paid). The cost sharing plans also had a cap on the percentage of your income that you'd have to pay out of pocket. Group 5 got an HMO. Then they looked to see what differences emerged in health outcomes.

Shocker: none did."
We have a problem which again she deftly identifies:
"You can squint hard at the data and say, well, sure, the effects weren't statistically significant, but there was some improvement!  Much such squinting has been going on.  But if there had been a slight, not-statistically-significant decline in the health of the Medicaid participants, I'm skeptical that many--or any--of our squinters would have been touting the probative power of those sorts of small effects.  As someone I was talking to earlier noted, "It's got huge confidence intervals" is not normally the sort of thing you hear when arguing that a study supports your thesis.  Our intuitions about health care, not the data, are doing a lot of heavy lifting here.

When you do an RCT with more than 12,000 people in it, and your defense of your hypothesis is that maybe the study just didn't have enough power, what you're actually saying is "the beneficial effects are probably pretty small".  Note that we're talking about a study the size of a pretty good Phase III trial for Lipitor, Caduet, or Avandia--some of the leading new drugs for treating high cholesterol, hypertension, and diabetes.  Of course, to be fair, those trials enroll only people with the disease they're targeting, so you should get more statistical power--but then, to also be fair, many of those studies have many fewer than 12,000 participants and still achieve statistical significance. "
The short answer is that we will never have the wherewithal and statistical power to answer this question; it would take more people and longer time periods. Not gonna happen. What is the likelihood that we will get a larger study which can address this issue? I would suggest it approaches zero.
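To get a rough feel for what "enough power" means at this scale, here is a minimal back-of-the-envelope sketch (not drawn from the NEJM paper itself): it estimates the smallest difference in a binary outcome that a two-arm comparison of roughly the Oregon sample sizes could reliably detect. The group sizes come from the abstract quoted above; the 16% baseline prevalence, the 0.05 significance level, and the 80% power target are my own illustrative assumptions.

```python
# Back-of-the-envelope minimum detectable effect (MDE) for a two-arm comparison
# of a binary outcome, using the approximate Oregon group sizes. The baseline
# prevalence, alpha, and power are assumptions for illustration only.
from math import sqrt
from scipy.stats import norm

n1, n2 = 6387, 5842        # group sizes reported in the abstract quoted above
p_baseline = 0.16          # assumed control-group prevalence (illustrative)
alpha, power = 0.05, 0.80  # conventional two-sided alpha and power target

z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96
z_beta = norm.ppf(power)           # ~0.84

# Normal-approximation standard error of a difference in proportions
se = sqrt(p_baseline * (1 - p_baseline) * (1 / n1 + 1 / n2))
mde = (z_alpha + z_beta) * se

print(f"Smallest reliably detectable difference: {mde * 100:.1f} percentage points")
# -> roughly 2 percentage points under these assumptions
```

Under these assumptions the comparison could detect a gap of roughly two percentage points, so anything it failed to flag is probably smaller still. The real study is muddier because only a fraction of lottery winners actually ended up with coverage, which dilutes what the comparison can see, but the basic point stands: a null result in a sample this size suggests any benefit on these measures is modest.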

Jonathan Chait (Chait) takes this to the next logical step, not only questioning the value of insurance, but of medical interventions in general:
"The Oregon study does not raise particular questions about the efficacy of Medicaid; it raises questions about the efficacy of medical care in general. Measuring the impact of medicine is just really hard to do, yet almost nobody would volunteer to follow this frustrating fact to its logical conclusion and forgo the benefits of modern medicine."
Here we have a study (the Oregon Medicaid Study) which calls into question not only the value of insurance but also the value of modern medical interventions in general. Its findings are completely counter-intuitive to everything that we who practice at the bedside hold to be true. We invest our practice lifetimes to have impact, and yet randomized trials of large numbers of people seem to suggest the effect size is rather modest; so modest that it is difficult or impossible to measure. What we practitioners preferentially recall are the occasions when we have had substantial (and usually acute) impact, while I suspect we tend to filter out the many more mundane and less impactful interventions. Chait's response to facing this dilemma is interesting. He holds such faith in modern medical intervention that he believes few if any would forgo it. He is correct, but the question becomes: are they completely justified in their strongly held beliefs?

This obviously is not a trivial issue, and what our answers might be really depends on how the questions are framed. The Oregon study uses a series of surrogates as endpoints, hoping that these will track with some degree of fidelity with endpoints which are actually important. For the most part, insurance is not important until you need resources to pay for some intervention which is actually important. However, only a fraction of the interventions for which insurance is required (in our current payment system) can be demonstrated to deliver value to recipients, and even then not to all who receive them. The effect size of any given intervention is small, and it gets diluted across a heterogeneous population receiving many different interventions.

Behind all this is the likelihood that the health care industry does many things to individual patients which are not likely to provide them with any benefit. These activities, both when looked at individually and in aggregate, are expensive in both time and money. As we spend increasing sums of time and money in the health care domains, it becomes more and more doubtful that this is a sustainable pathway. We believe that individuals should not be required to make decisions regarding the value of any specific intervention based on the allocation of their personal resources. However, we are essentially incapable of determining whether many (if not most) of these interventions deliver value when looked at in the aggregate. How can we make decisions allocating commonized resources when the data needed to make reasonable decisions does not exist and is not likely to ever be available and unambiguous? The result will be that increasing amounts of wealth will be moved into the public realm and allocated through bare-knuckle political pathways. As ugly as our political processes appear to be now, just wait until the financial stakes are higher.


