Hospital performance
Does it make a difference?
Maybe you know that you can now compare hospitals with regard to a number of "performance measures." This does not refer to how well Mass General can play the hammered dulcimer, but to a number of indices considered basic to acute hospital care. Does the hospital give aspirin to everyone having a heart attack? Are people with heart failure encouraged to quit smoking when they leave the hospital? Et cetera. These indices are considered important not just on the experts' say-so, but because they have been associated in the scientific literature with improved outcomes. People who take aspirin after a heart attack live longer (and have fewer repeat MIs) than those who don't; smoking worsens heart failure; and so on. These studies are, in general, randomized trials in large populations.
What's missing is the link between populations, outcomes, and hospitals. Do hospitals that perform better according to these indices reap the benefits (in terms of reduced mortality for their patients) that the population-level literature would indicate? If a hospital gives more of its heart-attack patients aspirin than another hospital, will the first hospital have a lower rate of death due to heart attacks than the second? It seems plausible.
Comes a study by Werner and Bradlow to answer this question. In brief, the answer seems to be "Yes, but not much." To continue the MI (heart attack) example: eight percent of the hospitals surveyed scored at or above the 75th percentile on all reported measures. (That is, on every one of the things that are supposed to happen before or after an acute MI in a hospital, these hospitals outperformed at least three-quarters of their peers.) When these hospitals were compared with those at the 25th percentile, mortality due to heart attacks at one year after the event was about two percentage points lower. For pneumonia (another disease represented among the performance standards), the difference is about one percentage point.
So that's it? A hospital does everything right, more often than the other guy, and the mortality rate is only reduced by a few percentage points? The indices must be less closely related to mortality than we thought. The authors, however, as responsible scientists, offer a more nuanced view. First, if you amortize those few percentage points over thousands of patients -- perform the thought experiment of moving all the patients seen at the "worst" 25th-percentile hospitals to the "best" 75th -- the number of lives saved would reach into the thousands. Second, we need to remember that wholesale improvements in mortality take societal and medical revolutions, like large-scale reductions in smoking, the introduction of the intensive care unit, or (perhaps!) government intervention to reduce consumption of trans fatty acids. If "mere" organizational optimization (boring paper-pushing, hospital by hospital) can make a few percentage points' difference, then something small can be huge indeed.
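To get a feel for how a couple of percentage points scale up, here is a back-of-envelope sketch. The admission count is a made-up round number chosen purely for illustration; only the two-percentage-point mortality gap comes from the figures above.

    # Back-of-envelope arithmetic. The admission count is hypothetical,
    # chosen for round numbers -- it is not a figure from Werner and Bradlow.
    mi_admissions = 500_000   # assumed annual MI admissions at lower-performing hospitals
    mortality_gap = 0.02      # ~2-percentage-point gap in one-year mortality (from the study)
    lives_saved = mi_admissions * mortality_gap
    print(f"{lives_saved:,.0f} lives per year")  # prints: 10,000 lives per year

Even if the true admission count is several times smaller, the product stays in the thousands, which is the authors' point.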
There are the typical limitations and qualifications to attach to any study, of interest mostly to specialists like me. For instance, this is a "cross-sectional" study, overlaying snapshots of performance measures on snapshots of mortality -- not a follow-up of a population over years, more time- and resource-consuming but potentially more rigorous, to see whether implementation of such performance standards leads, in cause-and-effect fashion, to improvement in mortality. And, of course, it's always problematic to compare the mortality rates of Raucous Public Hospital and Fancy Private Hospital, which differ for reasons much deeper than performance standards. It's possible (even expected) that after controlling for every variable one can think of, some confounders of the relationship between performance and mortality are still left out.
These caveats aside, this study may be a small but encouraging sign.
12/17/06
Labels: epidemiology, medicine, mortality, outcomes analysis