Pain is a major quality-of-care issue for hospitalized patients. The objective of this study was to evaluate the effectiveness of a series of interventions to improve pain management.
This controlled clinical trial (April 1, 2002, to February 28, 2003) involved the staggered implementation of 3 interventions into 2 blocks of matched hospital units. The setting was an 1171-bed hospital. A total of 3964 adults were studied. Interventions included education, standardized pain assessment using a 1-item or a 4-item (enhanced) pain scale, audit and feedback of pain scores to nursing staff, and a computerized decision support system. The main outcome measures were pain assessment rates, pain severity, and analgesic prescribing.
Units using enhanced pain scales had significantly higher pain assessment rates than units using 1-item pain scales (64% vs 32%; P<.001), audit and feedback of pain results was associated with increases in pain assessment rates compared with units in which audit and feedback was not used (85% vs 64%; P<.001), and the addition of the computerized decision support system was associated with significant increases in pain assessment only when compared with units without audit and feedback (79% vs 64%; P<.001). The enhanced pain scale was associated with significant increases in prescribing of World Health Organization step 2 or 3 analgesics for patients with moderate or severe pain compared with the 1-item scale (83% vs 66%; P=.01). The interventions did not improve pain scores.
A clinically meaningful pain assessment instrument combined with either audit and feedback or a computerized decision support system improved pain documentation to more than 80%. The enhanced pain scale was associated with improved analgesic prescribing. Future interventions should be directed toward altering physician behavior related to titration of opioid analgesics.
Pain is the most common symptom experienced by hospitalized adults1,2 and is highly prevalent in multiple patient populations and settings.3-12 Although pain management guidelines exist13-16 and the Joint Commission on Accreditation of Healthcare Organizations has mandated routine pain assessment in hospitals, these strategies have not been rigorously evaluated. Institutionwide quality improvement studies that have used these guidelines have met with only modest success in improving pain assessment rates17-19 and have yet to demonstrate improvements in pain intensity.17 These studies have been limited by small sample sizes; have targeted isolated hospital units or patient populations (eg, those with cancer); have lacked adequate control groups; or have used cross-sectional rather than longitudinal designs.17
In an effort to institutionalize improvement in pain management, this study evaluated the incremental effectiveness of a series of additive interventions designed to improve the detection and treatment of pain in hospitalized patients. Education in pain management was followed by a series of interventions modeled after strategies that have been shown to change practice patterns and improve health outcomes in other settings.
This controlled clinical trial enrolled a 25% random sample of all eligible subjects admitted to matched medical/surgical units at a large teaching hospital. Physician and nursing education in pain management was followed by the staggered implementation of combinations of additive interventions based on published pain guidelines16,20,21 into 2 blocks of matched medical/surgical hospital units at 6-month intervals. The interventions consisted of the following: (1) patient education and nursing pain assessment of current pain, worst pain, pain relief, and acceptability of pain; (2) audit and feedback to nursing staff of patients' pain intensity and staff compliance with assessment; and (3) a computerized clinical decision support system (CDSS) to guide analgesic prescribing. Table 1 summarizes the study design.
This study was performed at Mount Sinai Hospital, New York, a 1171-bed hospital. Nine medical/surgical units were selected for inclusion based on similar baseline patient demographics and pain scores (3 general medicine, 2 general surgery, 2 specialty surgery, 1 oncology, and 1 mixed oncology/general medicine). Each unit contained 32 to 36 beds, with the exception of 1 general medicine unit (18 beds). A general medicine, general surgery, and specialty surgery unit were matched to a similar unit, and the smaller general medicine unit was combined with the oncology unit and matched to the mixed oncology/general medicine unit.
We prospectively screened a 25% random sample of all admissions to each of the 9 study units daily, Monday through Friday, from February 16, 2001, through February 15, 2003, for study enrollment and approached eligible patients for informed consent. All patients admitted to each study unit were exposed to the interventions, although data were collected only from those enrolled. Eligibility criteria and study enrollment details are in the Figure.
Subject enrollment data. Percentages may not total 100 because of rounding.
The intervention was divided into 4 phases (Table 1).
Nurse educators conducted pain management training on all hospital nursing units and during orientation for newly hired nurses using modules developed from published guidelines.16,20,22-24 Three of us (R.S.M., D.E.M., and D.F.) conducted grand rounds for all clinical departments and led workshops on pain management for house staff.
Hospital policy required that pain be assessed at least once per nursing shift using a 4-point scale (0 indicates none; 1, mild; 2, moderate; and 3, severe). During phase 2, an enhanced pain assessment that included more comprehensive information and that could better guide analgesic prescribing was implemented only on block B. Block B nurses asked patients to rate their current pain, worst pain, pain relief, and whether the level of pain was acceptable to them on 4-point scales. Pain scores were printed on vital sign reports.
During phase 3, the enhanced pain assessment was implemented on block A and the audit and feedback intervention was implemented on block B. Three process measures and 1 outcome measure were selected for audit and feedback based on previous studies25,26 and guidelines16,20,21 (Table 1). The last 2 months of phase 2 were audited, and the first feedback reports were provided to the units' nursing managers at the beginning of phase 3 and monthly thereafter. Feedback reports detailed each unit's performance and benchmarked it against the other block B units and block B as a whole. Block A did not receive feedback.
During phase 4, the CDSS was implemented simultaneously on both blocks. Because of the design of Mount Sinai Hospital's clinical information system, and similar to other studies27-30 involving CDSS linked to order entry systems, we were unable to randomize the implementation of CDSS by block. In addition, randomization by block presented the potential for contamination because physicians treating patients on CDSS units were likely to be simultaneously caring for patients on non–CDSS-exposed units. Contamination was unlikely in earlier phases because nurses were unit based.
The CDSS was modeled on a previously described program.31 Physicians ordering analgesics on the hospital's order entry system (TDS; Eclipsys Corporation, Atlanta, Ga) were provided with links to reports summarizing patient characteristics relevant for analgesic prescribing and to the CDSS. The CDSS provided recommendations for initiating analgesics, dose escalation, switching agents, patient-controlled analgesia, bowel regimens, and opioid adverse effect management. After accessing recommendations, physicians entered medication orders. The CDSS was available to nurses through their charting pathways. The algorithms for the CDSS were developed by the investigators based on published guidelines, reviewed by content experts in pain medicine, and piloted with physicians. All nonpediatric clinical departments received CDSS training.
Beginning in phase 2, trained clinical interviewers blinded to the study intervention interviewed enrolled subjects within 48 hours of admission and then once daily, Monday through Friday. Patients were asked to rate their current pain, their worst pain over 24 hours, their pain relief with analgesics, and whether their pain was acceptable to them. Pain and pain relief were rated on 4-point scales (Table 1). Data were gathered directly from patients rather than from nursing reports because of concerns that nursing compliance with pain assessment would not be high enough to ensure that reliable and valid pain reports could be obtained.
Outcomes included measures of pain assessment, pain severity, and analgesic prescribing. We evaluated (1) the percentage of patients who had a daily pain assessment for each nursing shift for the first 5 days after enrollment; (2) the percentage of patients who reported moderate to severe pain on the day of enrollment (medical patients) or on postoperative day 1 (surgical patients) and who continued to have moderate to severe pain 72 to 96 hours later; (3) mean pain scores for the first 72 hours after enrollment (medical patients) or on postoperative days 1 through 3 (surgical patients); (4) the percentage of patients with 2 or more days of nurse-recorded moderate to severe pain who received a World Health Organization (WHO) step 2 or 3 analgesic;32 (5) the percentage of patients receiving standing opioids for 48 hours or more who were receiving a concomitant laxative; and (6) the percentage of patients receiving meperidine. A reduction in meperidine use was considered an important outcome given uniformity among professional guidelines that meperidine not be used as an analgesic because of its epileptogenic metabolites.15,16,21,33 Pain scores reported to clinical interviewers were used in all analyses.
Hierarchical multivariate linear and logistic regression models (HLMs) were used to examine the interventions' effects on outcomes. Because the intervention was assigned at the level of the hospital unit, but measures were at the individual patient level, there was the potential that data from patients on the same unit would be correlated, leading to incorrect standard errors.34,35 The HLMs took into account these nested data and allowed for correct inferences.36,37
A 3-level HLM model (shift, patient, and hospital unit) was used to test the effect of the intervention on nursing pain assessment using data collected for each patient for each of the 3 daily nursing shifts. We hypothesized that assessment might vary across shifts because of changes in nursing staff ratios and differences in unit activities. Two-level HLMs (patient and unit) were developed to examine the interventions' effects on pain severity and prescribing of analgesics and laxatives. Both 2- and 3-level models controlled for patient characteristics, block, and phase, and accounted for unmeasured (random) effects at patient and hospital unit levels. The 3-level model additionally accounted for the fixed effects of the nursing shifts.
Data were analyzed with SAS statistical software (PROC MIXED for continuous variables and GLIMMIX for discrete variables) (SAS Institute Inc, Cary, NC).38 The 3-level hierarchical model was estimated using a computer program (MLwiN).39,40
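The random-intercept structure described above can be sketched outside of SAS or MLwiN. The following is a minimal, hypothetical illustration of a 2-level mixed-effects model (patients nested within hospital units) using Python's statsmodels; the simulated data, variable names, and effect sizes are assumptions for illustration only, not the study's code or data.

```python
# Hypothetical sketch of a 2-level hierarchical (mixed-effects) model:
# patients (level 1) nested within hospital units (level 2), analogous
# to a PROC MIXED analysis. Data are simulated, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_units, n_per_unit = 8, 50
unit = np.repeat(np.arange(n_units), n_per_unit)

# Random intercept for each hospital unit plus patient-level noise
unit_effect = rng.normal(0.0, 0.3, n_units)[unit]
phase = rng.integers(1, 5, n_units * n_per_unit)  # study phase, 1-4
pain = 1.5 + unit_effect + rng.normal(0.0, 0.8, n_units * n_per_unit)

df = pd.DataFrame({"pain": pain, "phase": phase, "unit": unit})

# Fixed effect of phase; random intercept grouped by hospital unit
result = smf.mixedlm("pain ~ phase", df, groups=df["unit"]).fit()
print(result.summary())
```

Grouping by unit lets the model separate between-unit variance from patient-level variance, which is what keeps the standard errors for the fixed effects honest when patients on the same unit are correlated.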
The analyses examined whether (1) the enhanced pain scale was associated with better outcomes than the 1-item pain scale, (2) the addition of audit and feedback to the enhanced pain scale was associated with better outcomes than the enhanced pain scale alone, (3) the addition of the CDSS to the enhanced pain scale was associated with improved outcomes, and (4) the addition of the CDSS to the enhanced pain scale with audit and feedback was associated with improved outcomes. Hypotheses were tested directly based on the significance levels of dummy variables representing the main effect and interaction of each block and phase, controlling for the patient characteristics of race, sex, age, insurance status, surgical vs medical patient, and diagnosis. Point estimates were determined for each block and phase subgroup along with tests of the joint hypotheses previously noted, based on the HLM analysis. The Mount Sinai School of Medicine institutional review board approved the study.
Patient characteristics are given in Table 2. There were few significant differences between blocks.
Controlling for Table 2 variables, patients on units using the enhanced pain scale were significantly more likely to have their pain assessed than those on units in which the 1-item pain scale was used (variable estimate, 0.62; SE, 0.14; P<.001), audit and feedback of pain results was associated with significant increases in pain assessment rates compared with units without audit and feedback (variable estimate, 0.80; SE, 0.11; P<.001), and the addition of the CDSS was associated with significant increases in pain assessment only when compared with units that lacked audit and feedback (variable estimate, 1.13; SE, 0.14; P<.001) (Table 3). Overall, the percentage of patients who received at least 1 pain assessment per day improved from 32.1% with the standard pain assessment to 79.3% when the enhanced pain scale was combined with the CDSS, and to more than 80% for interventions using audit and feedback.
The intervention did not alter pain severity (Table 3). The percentage of patients who had 72 to 96 hours of persistent pain following study enrollment and overall mean pain scores remained relatively constant across both blocks and all 3 phases. Mean pain scores recorded by nurses were on average 0.1 points lower than those recorded by research staff for the same interview day (P<.001).
Several interventions improved analgesic prescribing. The enhanced pain scale was associated with significant increases in WHO step 2 or 3 analgesic prescribing for patients with moderate to severe pain compared with the standard pain scale. The CDSS was associated with a significant reduction in the use of meperidine in block B. No intervention appeared to improve laxative prescribing. Use of the CDSS was low: physicians accessed it a mean of 3.3 times per day for enrolled patients.
We also examined changes in analgesic prescribing. For patients with persistent moderate to severe pain who were not receiving opioids on day 1 and who were subsequently prescribed an opioid, the average dose in parenteral morphine sulfate equivalents was 32 mg/d at day 3. For patients with persistent pain who were receiving an opioid on day 1, the mean percentage dose increase in parenteral morphine sulfate equivalents was 17% (51.4 mg on day 1 to 60.4 mg on day 3). There were no significant differences across either block or phase.
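The dose change reported above is simple arithmetic. As a hypothetical illustration (the helper function is ours, not the study's), the ratio of the reported mean doses, 51.4 mg to 60.4 mg, works out to a 17.5% increase; the 17% quoted in the text is a mean of per-patient percentage increases, which need not exactly match the ratio of the group means.

```python
# Hypothetical helper (name is illustrative, not from the study):
# percentage dose increase in parenteral morphine sulfate equivalents.
def pct_increase(day1_mg: float, day3_mg: float) -> float:
    return (day3_mg - day1_mg) / day1_mg * 100.0

# Ratio of the reported mean doses (the article's 17% is a mean of
# per-patient increases, which need not match this exactly):
print(f"{pct_increase(51.4, 60.4):.1f}%")  # -> 17.5%
```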
To our knowledge, this is the largest study to examine a series of previously well-described but relatively untested interventions to improve pain management in hospitals. Improving the treatment of pain requires that pain be routinely assessed and that, once identified, appropriate analgesic medications be prescribed. To our knowledge, no consistent and generalizable interventions have been demonstrated to achieve these goals.17-19 Our study suggests that routine assessment of pain severity, relief, and acceptability can be effectively used within a large medical center; is more likely to be used than a simpler assessment that only addresses severity; and results in improved analgesic prescribing. In addition, our study suggests that audit and feedback of pain results to nursing staff can significantly increase assessment rates. Although the intervention did improve several process measures related to analgesic prescribing (an increase in WHO step 2 or 3 analgesic prescribing and a reduction in meperidine prescribing), we did not observe reductions in pain severity.
Improving the detection of pain in hospitalized patients is a fundamental first step in improving its treatment, and one that has been difficult to achieve.17 Consistent with prior reports,17 this study demonstrated that combining nursing education with the implementation of a 1-item pain severity question at vital sign assessment is ineffectual in ensuring universal pain assessment. Several interventions in this study increased pain assessment rates. An enhanced pain assessment that provided nurses with clinically meaningful information doubled the prevalence of daily pain assessment compared with the use of a standard 1-item scale (64.0% vs 32.1%). The addition of audit and feedback or CDSS to nursing units using the enhanced pain scale increased daily pain assessment rates to 79.4% or more, suggesting that regular nursing assessment of pain in hospitals can be achieved by combining our enhanced pain scale with either audit and feedback of nursing pain scores or a CDSS. Although nursing-recorded pain scores were statistically different from those obtained by research staff, the average difference was only 0.1 points on the 4-point scale, not large enough to justify the added expense of assigning pain assessment to nonclinical staff for the purpose of audit. Which intervention (audit and feedback or CDSS) is most cost-effective will likely depend on local institutional factors.
This study is one of the first large-scale studies to demonstrate improvement in analgesic prescribing. A recent systematic review17 of hospital interventions designed to improve pain management found that only 3 of 20 studies evaluated analgesic prescribing, and none of these studies demonstrated significant changes as a result of their interventions. In this study, the enhanced pain scale was associated with significant increases in the prescribing of WHO step 2 or 3 analgesics compared with the single-item scale.
The interventions used in this study did not lead to reductions in pain despite improved prescribing of WHO step 2 or 3 analgesics. Possible explanations for this finding might be a failure to titrate to pain relief or that patients reporting severe pain may have declined additional analgesia. In a prior study,41 31% of patients with severe pain reported that this was acceptable to them in the setting of their illness.
Computerized decision support systems have been shown to be effective in influencing physicians' behaviors.29,42 A possible reason for the system's failure in this study was that the CDSS did not automatically prompt or require prescribers to use it and was accessed for only a few enrolled patients. Although we sought an active CDSS, the physician information technology advisory committee was willing to use a passive system only. Two recent reviews42,43 of CDSS studies published after this study was initiated found that voluntarily activated decision support systems are relatively ineffectual in altering physician behavior.
There are limitations to this study. First, given the Joint Commission on Accreditation of Healthcare Organizations' pain standards, it was impossible to determine whether the routine assessment of pain was associated with improved pain management compared with no assessment. It is unlikely, however, that the rate of pain assessment in the absence of a standard scale would be higher than the 32.1% that we observed with the 1-item measure. Second, although we did not observe statistical differences among patient characteristics between the 2 blocks, it is possible that some unmeasured factor contributed to the differences observed. We believe that this is unlikely because our study design allowed us to compare each intervention's effect, with the exception of the CDSS, across blocks and within blocks across phases. Thus, if there were either block (unmeasured differences between units) or phase (unmeasured differences over time) effects, our analyses would have identified them. Third, it is possible that our study was underpowered to detect differences in pain severity. Nevertheless, even if the study were underpowered, the magnitude of the effect is likely to be too small to be clinically relevant. Fourth, our study was undertaken in a large academic medical center. It is possible that in a smaller hospital, different results might have been obtained. Fifth, because physicians were not geographically based, it is possible that their exposure to the block B interventions influenced their behavior on block A. Finally, because each phase of this study lasted 6 to 8 months, it is unclear whether the observed results persisted after the study's close.
To our knowledge, this is the largest study to examine interventions to improve the assessment and management of pain in a large US hospital. By using a rigorous experimental design, this study identified successful and generalizable interventions that significantly improved nursing assessment of pain to more than 80% and prescribing of appropriate analgesics. The implementation of a passive CDSS designed to assist physician analgesic prescribing did not improve pain scores. Our data suggest that hospitals can accomplish the first step in improving the management of pain—ensuring routine pain detection and scaling—by implementing an enhanced pain scale, as described in this study, in combination with regular audit and feedback of pain assessment results to nursing staff. Future efforts to improve analgesic prescribing and relief of pain should target physicians and focus on opioid titration and adverse effect management. Possible strategies could include either an active CDSS or audit and feedback of pain scores and analgesic prescribing practices to physicians.
Correspondence: R. Sean Morrison, MD, Department of Geriatrics, Mount Sinai School of Medicine, Campus Box 1070, New York, NY 10029 (email@example.com).
Accepted for Publication: December 24, 2005.
Author Contributions: Dr Morrison had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Financial Disclosure: None.
Funding/Support: This study was supported by grant R01 HS10539 from the Agency for Healthcare Research and Quality. Drs Morrison and Siu are recipients of midcareer investigator awards in patient-oriented research from the National Institute on Aging, and Dr Morrison was a Paul Beeson Physician Faculty Scholar. Dr Meier is the recipient of an Academic Career Leadership Award from the National Institute on Aging. Dr Moore is the recipient of a Minority Supplement (R01 HS10539-03S1) from the Agency for Healthcare Research and Quality.
Role of the Sponsor: The funding bodies had no role in data extraction and analyses, in the writing of the manuscript, or in the decision to submit the manuscript for publication.