
Identifying Adverse Events Not Present on Admission: Can We Do It?

James M. Naessens, ScD | October 1, 2008 

Perspective

Interest is growing in the use of existing data sources to identify opportunities to improve the delivery and safety of medical care, to measure and compare quality and patient safety, and even to change provider incentives through pay-for-performance initiatives. The Agency for Healthcare Research and Quality's (AHRQ) Patient Safety Indicators (PSIs) (based on ICD-9-CM diagnosis codes from hospital billing data) were developed to screen for potential complications and medical mishaps.(1,2) Groups ranging from HealthGrades to the University HealthSystem Consortium (UHC) use these PSIs to rate the quality of care at different institutions.(3)
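As a rough illustration of how this screening works in practice, the sketch below flags a discharge whenever any secondary diagnosis falls in a screening code set. It is a minimal sketch, not the actual PSI logic: the real specifications add numerator and denominator inclusion and exclusion criteria, and the record layout and single-code definition here are hypothetical.

```python
# Minimal illustration of PSI-style screening from billing data.
# The record fields and the one-code definition are hypothetical; actual PSI
# specifications involve detailed inclusion and exclusion criteria.

IATROGENIC_PNEUMOTHORAX_CODES = {"512.1"}  # ICD-9-CM: iatrogenic pneumothorax

def flag_psi(discharge, code_set):
    """Screen a discharge: flag it if any secondary diagnosis is in the code set."""
    return any(code in code_set for code in discharge.get("secondary_diagnoses", []))

record = {
    "principal_diagnosis": "486",               # pneumonia, organism unspecified
    "secondary_diagnoses": ["512.1", "428.0"],  # iatrogenic pneumothorax, CHF
}
print(flag_psi(record, IATROGENIC_PNEUMOTHORAX_CODES))  # True: screens positive for review
```

A flag produced this way is only a screen; whether the pneumothorax was a comorbidity or a complication cannot be determined from the diagnosis code alone, which is precisely the problem the POA indicator is meant to address.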

One problem with the use of PSIs and other algorithms (4) to identify potential problems with medical care is the difficulty of separating comorbidities (conditions present at hospital admission) from complications or hospital-acquired conditions.(5-8) Beginning in October 2007, hospital billing forms were changed to meet the Centers for Medicare & Medicaid Services (CMS) mandate to submit an additional field for each secondary diagnosis indicating whether that condition was present on admission (POA).(9)

The POA designation will become increasingly important in the near future. CMS will soon implement a policy of not paying hospitals for an identified set of conditions that "could reasonably have been prevented through the application of evidence-based guidelines."(9) The Leapfrog Group is collecting hospital-level information on the same conditions in its 2008 survey.(10) AHRQ now offers a version of the PSIs that incorporates the new POA codes to improve the utility of the indicators by reducing the number of cases flagged for conditions that were present before hospitalization (false positives).
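To make the effect of the POA field concrete, the sketch below extends the earlier screening example: a condition is flagged only when its POA indicator is "N" (not present on admission). This is a simplified sketch with hypothetical field names; actual CMS reporting also allows "U," "W," and exempt values.

```python
# Sketch of POA-aware screening. Each secondary diagnosis is assumed to be
# paired with its POA flag ("Y" = present on admission, "N" = not present on
# admission); field names and layout are hypothetical.

def flag_psi_with_poa(discharge, code_set):
    """Flag only conditions coded as NOT present on admission."""
    for code, poa in discharge.get("secondary_diagnoses", []):
        if code in code_set and poa == "N":
            return True
    return False

record = {"secondary_diagnoses": [("512.1", "Y"), ("428.0", "Y")]}
# Without the POA field, code 512.1 would screen positive; with POA = "Y" it
# is treated as a comorbidity rather than a hospital-acquired complication.
print(flag_psi_with_poa(record, {"512.1"}))  # False
```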

Given the increasing focus on hospital quality measurement, accurately identifying adverse events and comorbidities present at the time of admission is critical. This article will explore how well our present instruments perform in this key area.

Will POA Coding Enable the Identification of Adverse Events through Billing Data?

Even with the new POA codes, a number of issues must still be considered:

Variation in POA Coding

Several studies have shown that determining whether a condition was present on admission is not an exact science. In our early work assessing the interrater reliability of determining the timing of an illness or complication based on blinded review of the medical record, we found that agreement differed by disease type. There was more agreement on the timing of myocardial infarction, stroke, and pulmonary embolism (kappa>0.8) than on the timing of renal failure, decubitus ulcer, and pneumonia (kappa from 0.58 to 0.73).(11) A Canadian study found agreement between routinely abstracted data and chart review to be poor (kappa<0.5) for seven conditions and moderate (kappa 0.5–0.8) for the others.(12) In recent assessments of interrater reliability among cases identified with selected PSIs, researchers at the University of Michigan found low agreement between nurse review and original coder review on cases present on admission, high agreement on conditions that developed in the hospital, and an overall kappa of 0.4.(7)
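For readers less familiar with the kappa statistic cited throughout these studies, the short sketch below computes Cohen's kappa for two reviewers' present-on-admission calls. The ratings are invented purely to illustrate how chance-corrected agreement is calculated and are not drawn from any of the studies above.

```python
# Minimal sketch of Cohen's kappa for two reviewers' present-on-admission
# determinations; the ratings below are invented for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["POA", "POA", "acquired", "POA", "acquired", "POA", "acquired", "POA"]
b = ["POA", "acquired", "acquired", "POA", "acquired", "POA", "POA", "POA"]
print(round(cohens_kappa(a, b), 2))  # 0.47: raw agreement is 75%, but only
                                     # moderate once chance agreement is removed
```

A kappa near 0.4, as in the Michigan study, thus reflects agreement only modestly better than chance, even when reviewers agree on most individual cases.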

There also appears to be significant variation between institutions. A study of 2003 discharges from hospitals in New York and California, two states where POA coding has been in place for more than a decade, found that POA coding patterns differed across institutions.(13) Although smaller hospitals had more discharges with all secondary diagnoses labeled as POA, the occurrence of acquired conditions at larger hospitals may be related more to higher-intensity treatments and higher case mix than to quality-of-care problems. The study also found that the percentage of hospitals coding all secondary diagnoses as POA on all records was higher in New York than in California. In an accompanying editorial, Iezzoni (14) suggests that the study raises serious questions about how consistently hospitals in these experienced states perform POA coding. Clear coding guidelines and oversight will be necessary to ensure the accuracy of POA indicators.

Variation in ICD-9-CM Coding and Limitations of Reporting

Variation across institutions is not limited to POA coding; there are also substantial differences in diagnosis coding practices more generally. Romano and colleagues (15) found that half of the difference in postoperative (after back surgery) complication rates observed across hospitals was attributable to variation in the collection, coding, and reporting of diagnosis codes. Furthermore, hospitals with higher-than-expected complication rates reported twice as thoroughly as hospitals with fewer complications than expected, clear evidence of reporting bias.

Other limitations in the use of administrative diagnostic data include incomplete capture of conditions due to restrictions on the number of secondary diagnosis fields.(1) In our experience with Minnesota's mandatory reporting of the National Quality Forum list of serious adverse events, we report "unstageable" pressure ulcers in addition to stage 3 or 4 ulcers acquired after admission. However, only 25% of the last 16 reported patients had an ICD-9-CM secondary diagnosis code for a pressure ulcer (codes 707.00–707.09). These patients typically have multiple morbidities and long hospitalizations. Our administrative system captures at most 15 diagnoses, and it is possible that the decubitus ulcer was identified by the coder but was not ranked high enough on the list of diagnoses to be captured in our repository.
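The sketch below illustrates that truncation mechanism: with a fixed number of secondary-diagnosis fields, a pressure ulcer code ranked low on a long problem list never reaches the repository. The 15-field limit and the 707.0x codes come from the experience described above; the placeholder codes and ranking are hypothetical.

```python
# How a fixed number of secondary-diagnosis slots can drop a hospital-acquired
# pressure ulcer from the abstract. Placeholder codes ("dx01"...) are invented.

MAX_SECONDARY_DX = 15

def truncate_diagnoses(ranked_codes):
    """Keep only the codes that fit in the available secondary-diagnosis fields."""
    return ranked_codes[:MAX_SECONDARY_DX]

# A long, coder-ranked problem list for a complex patient; the decubitus ulcer
# (707.03, pressure ulcer of lower back) is ranked 17th.
ranked = [f"dx{i:02d}" for i in range(1, 17)] + ["707.03"]
kept = truncate_diagnoses(ranked)
print("707.03" in kept)  # False: the ulcer code never reaches the repository
```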

Limitations in ICD-9-CM Coding System

There are also concerns about the granularity and coverage of the current coding system, hence the plans to eventually shift to ICD-10. In their study of PSIs in the Veterans Health Administration system, Rosen and colleagues (16) noted that adverse events from surgery are more amenable to ICD-9-CM coding than other types of events.

Differentiation of the Trivial from the Catastrophic

The presence of a hospital-acquired condition provides little information about the seriousness of an adverse event. Even after eliminating cases with POA conditions, review of cases coded with hemorrhage and/or hematoma, or with accidental puncture and laceration, identified a spectrum ranging from blood use within expected norms and incidental lysis of adhesions (both relatively trivial) to life-threatening situations. Multiple studies (7,13), including our examination of 2005 hospitalizations (6), have shown that the vast majority of patients with diagnoses coded as not present on admission appear to have relatively minor problems that produce no diagnosis-related group (DRG) or severity changes. While efforts should be made to reduce all adverse events, the severity of the problem should be considered before use in public reporting or pay for performance.

Interpretation: Adverse Event Versus Medical Error

How will the identification of adverse events be interpreted? Not all adverse events are preventable. In their assessment of the pediatric PSIs, Scanlon and colleagues (17) classified each event into one of three categories: preventable, nonpreventable, or uncertain. They found that the proportion of cases that were clearly "nonpreventable" ranged from about 20% to 80%, and the proportion judged clearly preventable never exceeded 52%. Studies have also suggested that sicker patients are at higher risk of adverse events. We found higher rates of hospital-acquired conditions among hospital transfers and among physician-referred versus self-referred or primary care patients.(6) Current risk adjustment methods for PSIs may not be adequate for appropriate interpretation. Hughes (18) calls for efforts to separate "preventable" adverse events from events that result from underlying disease or are expected sequelae of treatment. It is unlikely that this can be done with administrative data alone. In our own review of PSIs, we have seen substantial differences of opinion depending on the background and experience of the reviewer.

Incentives—DRG Creep in Reverse?

Based on the experience with the introduction of DRGs and the proliferation of software to help "optimize" the coding of hospital discharges, coding and reporting practices for conditions not present on admission can be expected to change. As Iezzoni (14) points out, an unintended consequence is that tying penalties in payment to the presence of a diagnosis code creates financial incentives to underreport those codes.

Summary

Echoing others, we must proceed with caution.(6,7,14,17) POA coding will likely enhance the value of administrative data for identifying hospital adverse events, but not without further review and refinement. Today, however, the variability in thoroughness of reporting and accuracy of coding across institutions, combined with the low percentage of hospital-acquired conditions deemed "preventable," still limits the use of diagnoses from billing data as a source of quality measurement for public reporting and pay for performance.

James M. Naessens, ScD
Assistant Professor, Health Care Policy & Research
Mayo Clinic

Acknowledgments: The author thanks Monica Van Such for editorial suggestions and Sara Hobbs Kohrt for her manuscript preparation support.

References


1. Zhan C, Miller MR. Administrative data based patient safety research: a critical review. Qual Saf Health Care. 2003;12:ii58-ii63. [go to PubMed]

2. Patient Safety Indicators Overview. Rockville, MD: AHRQ Quality Indicators, Agency for Healthcare Research and Quality; February 2006. Available at: https://qualityindicators.ahrq.gov/measures/psi_resources

3. Barron WM, Krsek C, Weber D, Cerese J. Critical success factors for performance improvement programs. Jt Comm J Qual Patient Saf. 2005;31:220-226. [go to PubMed]

4. Iezzoni LI. Risk Adjustment for Measuring Health Care Outcomes. Ann Arbor, MI: Health Administration Press; 1994. ISBN: 156793207X.

5. Naessens JM, Huschka TR. Distinguishing hospital complications of care from pre-existing conditions. Int J Qual Health Care. 2004;16:i27-i35. [go to PubMed]

6. Naessens JM, Campbell CR, Berg B, Williams AR, Culbertson R. Impact of diagnosis-timing indicators on measures of safety, comorbidity, and case mix groupings from administrative data sources. Med Care. 2007;45:781-788. [go to PubMed]

7. Bahl V, Thompson MA, Kau TY, Hu HM, Campbell DA Jr. Do the AHRQ patient safety indicators flag conditions that are present at the time of hospital admission? Med Care. 2008;46:516-522. [go to PubMed]

8. Coffey R, Milenkovic M, Andrews RM. The Case for the Present-on-Admission (POA) Indicator. Rockville, MD: Agency for Healthcare Research and Quality; 2006. HCUP Methods Series Report No. 2006-01. Available at: http://www.hcup-us.ahrq.gov/reports/2006_1.pdf

9. Present on Admission (POA) Indicator Reporting by Acute Inpatient Prospective Payment System (IPPS) Hospitals: Present on Admission (POA) Indicator Reporting and Hospital-Acquired Conditions (HAC). Baltimore, MD: Center for Medicare & Medicaid Services; 2007. Available at: https://www.cms.gov/Outreach-and-Education/Medicare-Learning-Network-MLN/MLNProducts/Downloads/wPOA-Fact-Sheet.pdf

10. The Leapfrog Hospital Survey. Leapfrog Group Web site. Available at: https://leapfroggroup.org/data-users/leapfrog-hospital-survey

11. Naessens JM, Brennan MD, Boberg CJ, et al. Acquired conditions: an improvement to hospital discharge abstracts. Qual Assur Health Care. 1991;3:257-262. [go to PubMed]

12. Quan H, Parsons GA, Ghali WA. Assessing accuracy of diagnosis-type indicators for flagging complications in administrative data. J Clin Epidemiol. 2004;57:366-372. [go to PubMed]

13. Zhan C, Elixhauser A, Friedman B, Houchens R, Chiang YP. Modifying DRG-PPS to include only diagnoses present on admission: financial implications and challenges. Med Care. 2007;45:288-291. [go to PubMed]

14. Iezzoni LI. Finally present on admission but needs attention. Med Care. 2007;45:280-282. [go to PubMed]

15. Romano PS, Chan BK, Schembri ME, Rainwater JA. Can administrative data be used to compare postoperative complication rates across hospitals? Med Care. 2002;40:856-867. [go to PubMed]

16. Rosen AK, Rivard P, Zhao S, et al. Evaluating the patient safety indicators: how well do they perform on Veterans Health Administration data? Med Care. 2005;43:873-884. [go to PubMed]

17. Scanlon MC, Harris JM Jr, Levy F, Sedman A. Evaluation of the agency for healthcare research and quality pediatric quality indicators. Pediatrics. 2008;121:e1723-e1731. [go to PubMed]

18. Hughes JS, Averill RF, Goldfield NI, et al. Identifying potentially preventable complications using a present on admission indicator. Health Care Financ Rev. 2006;27:63-82. [go to PubMed]

 
