
Where Does Risk-Adjusted Mortality Fit Into a Safety Measurement Program?

Ian Scott, MBBS, MHA, MEd | March 1, 2015 


Much attention is being focused worldwide on identifying and using robust, easy-to-collect, and universally applicable measures of hospital safety. The risk-adjusted hospital mortality rate (R-AHMR)—and its various incarnations, including the hospital standardized mortality ratio (HSMR) and the standardized hospital mortality indicator (SHMI)—is one such measure. The commonly used HSMR is the ratio of the observed number of in-hospital deaths to the number expected based on the hospital's case mix. Ratios greater than 1 (or 100, depending on how the ratio is reported) are taken to suggest unsafe care, while ratios less than 1 (or 100) are taken to suggest safe care. These measures have become popular as screening tools for hospital safety in the United Kingdom, Sweden, the Netherlands, Canada, the United States, and Australia.(1,2)
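The arithmetic behind the measure is simple; the difficulty lies entirely in producing the denominator. As a minimal sketch (with invented risk figures, not any real case-mix model), the expected count is the sum of model-predicted death probabilities across admissions, and the ratio is then scaled so that 100 means "as expected":

```python
def hsmr(observed_deaths, expected_deaths, scale=100):
    """Hospital standardized mortality ratio: observed deaths divided by
    expected deaths, conventionally scaled so that 100 = as expected."""
    return scale * observed_deaths / expected_deaths

# Expected deaths come from a case-mix model: a risk model predicts a
# death probability for each admission, and the expected count is the
# sum of those probabilities. (Illustrative numbers only.)
predicted_risks = [0.02, 0.10, 0.30, 0.05, 0.01]  # hypothetical model outputs
expected = sum(predicted_risks)                    # about 0.48 expected deaths
observed = 1                                       # deaths actually observed

ratio = hsmr(observed, expected)  # above 100: more deaths than expected
```

Everything contentious in the sections that follow concerns how those predicted risks are generated, not this division.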

It's easy to see why R-AHMRs have become popular: they relate to a hard end-point that everyone considers important; they are easy to assemble from routinely collected administrative datasets; they are amenable to powerful statistical analyses; they enable comparisons between institutions; and their graphical formats are seemingly easy to interpret. Advocates contend that R-AHMRs shed public light on unsafe hospitals and spur them to improve far faster than they would through their own introspection or through public reporting of less tangible measures.

Flaws in the looking glass

But as indicators of unsafe hospitals, R-AHMRs possess some basic flaws.(3) First, they deal with an outcome—death—that is comparatively rare (5% to 10%) among contemporary hospitalized patients. Moreover, when in-hospital deaths are subjected to forensic clinical analysis, studies show that only about 5% can be attributed to unsafe care.(4) Because of these low frequencies, mathematical modeling suggests that among 100 hospitals with an R-AHMR above average, only 8 will truly be more unsafe than the average hospital.(5) Conversely, 10 out of 11 hospitals that are actually more unsafe than average would demonstrate an average or below-average R-AHMR, given that most quality problems, while associated with injury and prolonged hospital stays, do not cause death.(5) Consequently, as a screening tool for safety, R-AHMRs are limited by low sensitivity (most unsafe practices do not cause death) and low specificity (most deaths do not reflect unsafe care).
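The screening-test framing can be made concrete with Bayes' rule. The figures below are purely illustrative (they are not the parameters of the modeling study cited above), but they show how a rare target condition plus modest sensitivity drags down the positive predictive value of a high R-AHMR:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value of a screening test via Bayes' rule."""
    true_pos = prevalence * sensitivity            # truly unsafe and flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # safe but flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical figures: suppose 10% of hospitals are truly unsafe, and a
# high R-AHMR flags them with 20% sensitivity and 90% specificity.
p = ppv(prevalence=0.10, sensitivity=0.20, specificity=0.90)
# Fewer than 1 in 5 flagged hospitals would then be truly unsafe.
```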

Sensitivity may be increased by enlarging the pool of evaluable deaths to include those occurring up to 30 days after admission, on the premise that, given today's short hospital stays, unsafe inpatient care may not manifest as death until after discharge.(6) But when is postdischarge mortality more likely due to poor hospital care than to poor community-based care or the simple natural history of disease—after 7, 10, 30, or even 90 days? Specificity may be increased by excluding deaths that can reasonably be attributed to advanced disease with a poor prognosis, despite the very best care. But this presupposes that terminally ill or palliative patients are managed in much the same way, and coded reliably, across all hospitals. Some Canadian hospitals were able to reduce their HSMR from "very bad" (more than 100) to "really good" (less than 100) in just 12 months simply by recoding palliative care patients—without any change in the quality of care.(7) And what do we do with the patient rendered "palliative" by some rare but egregious medical misadventure?(8)

The second major flaw relates to risk adjustment. Despite best efforts to homogenize hospital patient populations, R-AHMRs used as a screening tool must still be adjusted for between-hospital differences in patient-related prognostic factors that are independent of safety. But the outputs of risk adjustment regression models depend on the input variables, which can vary considerably.(9) Certain comorbidities that affect mortality (such as morbid obesity, dementia, and heart failure) are inconsistently recorded in hospital statistics, as are levels of frailty and disability.(10) Comorbidities present at the time of admission must be reliably differentiated from new diagnoses arising during hospitalization, which may reflect avoidable complications.(11) Removing vague or undetermined diagnoses may render risk adjustment more accurate, but as these account for around 20% of all hospital deaths in Canada and the United Kingdom (12,13), this further limits R-AHMR sensitivity. Different risk adjustment models also produce substantially different R-AHMRs. In 1 large US study involving 83 acute care hospitals, when 4 different but commonly used risk-adjustment models were applied to mortality data from each hospital, 12 of 28 hospitals classified as having high R-AHMRs by 1 model had low R-AHMRs under 1 or more of the other models.(14)
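The practical consequence of model dependence can be shown with two hypothetical expected-death counts for the same hospital and the same observed deaths; the numbers below are invented solely to illustrate how a classification can flip:

```python
observed = 10        # deaths observed in one hospital's cohort of admissions

# Hypothetical expected counts for the SAME cohort under two models:
expected_a = 8.0     # model A adjusts for fewer patient variables
expected_b = 12.0    # model B also adjusts for, e.g., frailty and dementia

hsmr_a = 100 * observed / expected_a  # 125.0: "high" under model A
hsmr_b = 100 * observed / expected_b  # about 83: "low" under model B
```

The observed deaths never change; only the denominator does, yet the hospital moves from apparently unsafe to apparently safe.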

The third major flaw is the weak and inconsistent correlation between hospital-wide R-AHMRs and explicit quality indicators. In a chart review of 378 patients who died from stroke, myocardial infarction, or pneumonia in 11 outlier hospitals with elevated HSMRs, no differences were seen compared to hospitals with lower HSMRs in adherence rates for 31 recommended processes of care.(15) A systematic review of 36 similar studies concluded that R-AHMRs are poor predictors of unsafe hospital care.(16)

Effects of misunderstood mortality rates

If their limitations are not appreciated, R-AHMRs can mislead and have unintended effects. Unfavorable rates arising from erroneous data or analyses can trigger external inquiries that stigmatize individual hospitals, lower morale and public confidence, and encourage "gaming"—for example by upgrading risk assessments—or the pursuit of inappropriately aggressive care. Moreover, HSMRs by themselves do not drive quality improvement. Although the Mid-Staffordshire hospital system in the UK had elevated HSMRs for years, it was whistleblowers among staff and patients who attracted the public attention that led to the 2009 inquiry, while 21 other UK hospitals with similarly elevated HSMRs escaped attention.(13) In addition, most of the subsequent decrease in the Mid-Staffordshire HSMR (from 127 to 90) occurred during the course of the inquiry as a result of patient deselection and coding changes, before the completion of any substantial remedial work.(17)

Other studies have shown that movements in HSMR, either up or down, are more often due to random variation, regression to the mean, coding variations, and secular trends than to changes in practice safety.(7,9) Even if all hospitals provided a reasonable level of safety, as the HSMR is a relative measure, there will always be hospitals with ratios above 100.(18)
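The point about random variation is easy to demonstrate by simulation. In the sketch below (hypothetical setup: 50 hospitals with an identical 5% true mortality risk and 2000 admissions each), every hospital is equally safe by construction, yet chance alone scatters the ratios around 100:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

n_hospitals, n_admissions, true_risk = 50, 2000, 0.05
expected = n_admissions * true_risk  # identical for every hospital

hsmrs = []
for _ in range(n_hospitals):
    # simulate each admission as an independent 5% chance of death
    deaths = sum(random.random() < true_risk for _ in range(n_admissions))
    hsmrs.append(100 * deaths / expected)

above_100 = sum(h > 100 for h in hsmrs)  # hospitals "flagged" by chance alone
```

Roughly half the (equally safe) simulated hospitals will sit above 100 in any given run, which is exactly what a relative measure guarantees.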

Finally, hospital-wide R-AHMRs do not enable hospital clinicians or administrators to easily pinpoint correctable processes of care at the level of individual units. Even within a hospital whose R-AHMR seems reasonable, individual diagnoses and procedures may demonstrate higher than expected mortality.

Alternative perspective on mortality rates

R-AHMRs may be more suited to monitoring changes in mortality over time within individual hospitals, thus circumventing much of the confounding inherent in between-hospital comparisons. In this way, each hospital serves as its own historical control, assuming no substantive change in coding practices, patient case mix, or service configuration over the short to medium term. But perhaps we should turn our attention away from R-AHMRs to ensuring mandated and timely peer review of every in-hospital death and using continuously monitored risk-adjusted statistical process control methods (19) to flag unfavorable trends in diagnosis-specific mortality at an early stage when only a very small number of excess deaths have occurred.
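One widely described tool for the continuous-monitoring approach favored here is the risk-adjusted CUSUM (sketched below in the Steiner log-likelihood-ratio form). This is a generic illustration, not the specific method of the work cited above; the patient stream, outcomes, and predicted risks are hypothetical:

```python
import math

def risk_adjusted_cusum(outcomes, risks, odds_ratio=2.0):
    """Risk-adjusted CUSUM for sequential mortality monitoring.

    Accumulates, patient by patient, the log-likelihood ratio testing a
    multiplication of the odds of death (here, a doubling) against each
    patient's model-predicted risk. In practice an alarm threshold is
    chosen by simulation to control the false-alarm rate."""
    s, path = 0.0, []
    for died, p in zip(outcomes, risks):
        denom = 1 - p + odds_ratio * p  # normalizer for the shifted odds
        w = math.log(odds_ratio / denom) if died else math.log(1 / denom)
        s = max(0.0, s + w)  # the statistic resets at zero
        path.append(s)
    return path

# Hypothetical stream: 0 = survived, 1 = died, with predicted risks.
path = risk_adjusted_cusum(
    outcomes=[0, 0, 1, 0, 1, 1],
    risks=[0.05, 0.10, 0.08, 0.05, 0.20, 0.10],
)
```

Deaths push the statistic up by more when the predicted risk was low (an unexpected death), survivals pull it down, and a sustained climb flags a diagnosis group after only a handful of excess deaths.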

First, do no harm: avoid faulty statistics and interrogate every death

Hospital-wide R-AHMRs based on routinely collected data are blunt and inaccurate screening tools for identifying hospitals that are putatively more unsafe than others. They can falsely label hospitals as poor performers and fail to detect many others that harbor problems. In contrast, timely review of all in-hospital deaths and continuous monitoring of diagnosis-specific mortality trends within hospitals may provide more productive and acceptable means for identifying and responding to unsafe care.

Ian Scott, MBBS, MHA, MEd
Director of Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital
Associate Professor of Medicine, University of Queensland
Brisbane, Australia


1. Jarman B, Pieter D, van der Veen AA, et al. The hospital standardised mortality ratio: a powerful tool for Dutch hospitals to assess their quality of care? Qual Saf Health Care. 2010;19:9-13.

2. HSMR: A New Approach for Measuring Hospital Mortality Trends in Canada. Ottawa, ON, Canada: Canadian Institute for Health Information; 2007. ISBN: 9781554651849.

3. Scott IA, Brand CA, Phelps GE, Barker AL, Cameron PA. Using hospital standardised mortality ratios to assess quality of care—proceed with extreme caution. Med J Aust. 2011;194:645-648.

4. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf. 2012;21:737-745.

5. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012;21:1052-1056.

6. Drye EE, Normand SL, Wang Y, et al. Comparison of hospital risk-standardized mortality rates calculated by using in-hospital and 30-day models: an observational study with implications for hospital profiling. Ann Intern Med. 2012;156:19-26.

7. Penfold RB, Dean S, Flemons W, Moffatt M. Do hospital standardized mortality ratios measure patient safety? HSMRs in the Winnipeg Regional Health Authority. Healthc Pap. 2008;8(4):8-24.

8. Tu YK, Gilthorpe MS. The most dangerous hospital or the most dangerous equation? BMC Health Serv Res. 2007;7:185.

9. Mohammed MA, Deeks JJ, Girling A, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ. 2009;338:b780.

10. Powell H, Lim LL, Heller RF. Accuracy of administrative data to assess comorbidity in patients with heart disease: an Australian perspective. J Clin Epidemiol. 2001;54:687-693.

11. Glance LG, Osler TM, Mukamel DB, Dick AW. Impact of the present-on-admission indicator on hospital quality measurement: experience with the Agency for Healthcare Research and Quality (AHRQ) Inpatient Quality Indicators. Med Care. 2008;46:112-119.

12. 2009 Hospital Standardized Mortality Ratio (HSMR) Public Release. Ottawa, ON, Canada: Canadian Institute for Health Information; 2009.

13. How Safe is Your Hospital? Dr Foster Unit. London, UK: Imperial College London; 2009.

14. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital-wide mortality rates. N Engl J Med. 2010;363:2530-2539.

15. Dubois RW, Rogers WH, Moxley JH III, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317:1674-1680.

16. Pitches DW, Mohammed MA, Lilford RJ. What is the empirical evidence that hospitals with higher risk-adjusted mortality rates provide poor quality care? A systematic review of the literature. BMC Health Serv Res. 2007;7:91.

17. Francis R. The Mid Staffordshire NHS Foundation Trust Inquiry. Independent inquiry into care provided by Mid Staffordshire NHS Foundation Trust, January 2005–March 2009. Volume I. London, UK: The Stationery Office; 2010. ISBN: 9780102964394.

18. Spiegelhalter D. Statistics behind the headlines. Have there been 13,000 needless deaths at 14 NHS trusts? BMJ. 2013;347:f4893.

19. Duckett SJ, Coory M, Sketcher-Baker K. Identifying variations in quality of care in Queensland hospitals. Med J Aust. 2007;187:571-575.

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.