Original Investigation

Effect of Patient-Specific Ratings vs Conventional Guidelines on Investigation Decisions in Angina: Appropriateness of Referral and Investigation in Angina (ARIA) Trial

Cornelia Junghans, PhD; Gene Feder, MD; Adam D. Timmis, FRCP; Sandra Eldridge, PhD; Neha Sekhri, MRCP; Nick Black, MD; Paul Shekelle, MD; Harry Hemingway, FRCP

Author Affiliations: Department of Epidemiology and Public Health, University College London Medical School (Drs Junghans and Hemingway), Department of Primary Care, Queen Mary's School of Medicine and Dentistry, University of London (Drs Feder, Eldridge, and Sekhri), Cardiac Directorate, Barts and The London NHS Trust (Dr Timmis), and Department of Public Health and Policy, London School of Hygiene and Tropical Medicine (Dr Black), London, England; and Greater Los Angeles Veterans Affairs Health Care System, Los Angeles, Calif (Dr Shekelle).


Arch Intern Med. 2007;167(2):195-202. doi:10.1001/archinte.167.2.195.

Background  Conventional guidelines have limited effect on changing physicians' test ordering. We sought to determine the effect of patient-specific ratings vs conventional guidelines on appropriate investigation of angina.

Methods  Randomized controlled trial of 145 physicians receiving patient-specific ratings (online prompt stating whether the specific vignette was considered appropriate or inappropriate for investigation, with access to detailed information on how the ratings were derived) and 147 physicians receiving conventional guidelines from the American Heart Association and the European Society of Cardiology. Physicians made recommendations on 12 Web-based patient vignettes before and on 12 vignettes after these interventions. The outcome was the proportion of appropriate investigative decisions as defined by 2 independent expert panels.

Results  Decisions for exercise electrocardiography were more appropriate with patient-specific ratings (819/1491 [55%]) compared with conventional guidelines (648/1488 [44%]) (odds ratio [OR], 1.57; 95% confidence interval [CI], 1.36-1.82). The effect was stronger for angiography (1274/1595 [80%] with patient-specific ratings compared with 1009/1576 [64%] with conventional guidelines [OR, 2.24; 95% CI, 1.90-2.62]). Within-arm comparisons confirmed that conventional guidelines had no effect but that patient-specific ratings significantly changed physicians' decisions toward appropriate recommendations for exercise electrocardiography (55% vs 42%; OR, 2.62; 95% CI, 2.14-3.22) and for angiography (80% vs 65%; OR, 2.10; 95% CI, 1.79-2.47). These effects were robust to physician specialty (cardiologists and general practitioners) and to vignette characteristics, including older age, female sex, and nonwhite race/ethnicity.

Conclusion  Patient-specific ratings, unlike conventional guidelines, changed physician testing behavior and have the potential to reduce practice variations and to increase the appropriate use of investigation.


Medical societies1 and governments2,3 have made a large investment in developing and implementing clinical practice guidelines to reduce variation in physician behavior and to make medical care more cost-effective. Despite this investment, the effect of conventional guidelines on changing physician behavior is modest when tested in controlled investigations4 and in the context of routine practice.5,6 In coronary heart disease, numerous guidelines address investigation strategies, but few data suggest that investigation guidelines change practice.7 Many patients with angina do not undergo adequate investigation, and the prognostic effect of inequitable underuse among women and older patients is a concern.8,9

A recent meta-analysis10 of decision support interventions identified the use of specific recommendations, a computerized format, and integration into the clinician work flow as 3 factors associated with changing physician behavior. Patient-specific ratings offer these features, as they are tailored to individual patient characteristics and are easily incorporated into the consultation in a computerized format (eg, as online prompts) with unambiguous recommendations that a test is appropriate or inappropriate for each patient. Patient-specific ratings of the appropriateness of investigation are derived using explicit methods11 that take into account the evidence base and incorporate the judgments of generalists and specialists.

The specific objective of the Appropriateness of Referral and Investigation in Angina (ARIA) Trial was to determine the extent to which patient-specific ratings change physician testing behavior compared with conventional guidelines. We randomized physicians to patient-specific ratings or to conventional guidelines, and recommendations for exercise electrocardiography (ECG) and angiography were assessed. We used patient vignettes to control for case-mix variation.12

METHODS

The patient-specific ratings consisted of online prompts stating whether the specific vignette was considered appropriate or inappropriate for investigation, together with access to detailed information on how the ratings were derived.

TRIAL PARTICIPANTS AND PATIENT VIGNETTES

Physicians were eligible to participate in the trial if they were members of the British Cardiac Society (1032 cardiologists) or were general practitioners (2206 physicians, limited to 1 per practice) in the primary care trusts referring to 9 cardiothoracic centers in England and Scotland. Eligible physicians were invited to participate online. Physicians were randomized to an intervention of patient-specific ratings derived by 2 independent expert panels or to conventional American Heart Association13 and European Society of Cardiology14 guidelines. Each physician was asked to judge the appropriateness of exercise ECG or angiography in 12 Web-based patient vignettes before the intervention and then in 12 vignettes after the intervention (Figure 1). Physicians were told that this was a study of decision support but not that it was a randomized trial of different types of decision support. Physicians were reimbursed for their participation. At registration, physicians were asked about details of their practice (Table 1). The study was carried out between September 1, 2004, and June 29, 2005.

Figure 1. Flowchart of physicians through the trial. ECG indicates electrocardiography; GPs, general practitioners.

Table 1. Characteristics of Trial Participants

DEVELOPMENT OF PATIENT VIGNETTES

We developed 48 unique vignettes of patients with suspected or confirmed angina based on unique combinations of clinical factors (indications). Each indication had previously been rated for the appropriateness of exercise ECG and angiography by 2 expert panels independently, using the RAND/University of California at Los Angeles method.11 The panels each comprised 5 cardiologists, 1 cardiothoracic surgeon, and 5 general practitioners with an interest in cardiology. These 22 physicians were recruited from the same 9 centers on the basis of expertise (15 had published in the field) and peer nomination. They were not eligible for the trial (further details are available at http://www.ucl.ac.uk/ceg/studies). We chose only indications that were rated in the same appropriateness category by both panels, providing a more reliable definition of appropriateness. To resemble real-life patients, the vignette narratives described these clinical factors in free text (See the boxed example on page 198),15 included information on patient occupation and comorbidity, and represented a wide range of clinical and demographic characteristics, as summarized in Table 2.

Table 2. Characteristics of 48 Patient Vignettes Used in the Trial

We divided the 48 vignettes into 4 blocks of 12 and then created sequences of 2 blocks (1 preintervention and 1 postintervention) with evenly distributed clinical and demographic factors from all possible combinations of the 4 blocks (12 sequences of 2 blocks). The sequence of vignettes was random within each block. Participants in both arms were given 2 blocks of 12 vignettes following randomization (1 before the intervention and, on completion, 1 with the intervention). They were asked to make recommendations for investigation using a 5-point scale (1, definitely do; 2, probably do; 3, unsure; 4, probably do not do; or 5, definitely do not do).
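The block design above can be sketched as follows (an illustration only; the vignette IDs are placeholders, not the trial's data, and the trial additionally balanced clinical and demographic factors across blocks):

```python
from itertools import permutations

# Sketch of the block design: 48 vignettes are split into 4 blocks of 12,
# and every ordered pair of distinct blocks forms one pre/post-intervention
# sequence, giving 4 * 3 = 12 sequences of 2 blocks.
vignette_ids = list(range(48))  # placeholder IDs, not the trial's vignettes
blocks = [vignette_ids[i * 12:(i + 1) * 12] for i in range(4)]

sequences = list(permutations(range(4), 2))  # (pre-block, post-block) pairs
print(len(sequences))  # 12 sequences of 2 blocks
```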

INTERVENTIONS

The unit of randomization was individual physicians. A research assistant randomized 363 physicians using minimization software to balance recruitment by the 9 centers and the 2 clinical specialties. Physicians received an intervention after they completed the first set of 12 vignettes without decision support. In the conventional guidelines arm, physicians were automatically provided online guideline paragraphs most relevant to each vignette, as well as links to the full-text guidelines. The physicians were free to use any of the guidelines, and their use was recorded. In the patient-specific ratings arm, physicians were automatically provided online ratings applicable to the vignette, ranging from 1 to 9 (1-3, inappropriate; 4-6, equivocal; or 7-9, appropriate) for exercise ECG or angiography. For example, if the rating for exercise ECG was 7 for that particular patient vignette, the electronic prompt to physicians was “the expert panels recommend exercise testing (rating 7)”. Physicians were also given access to detailed information on how the ratings were derived.
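The 1-to-9 rating bands used for the online prompts can be expressed as a minimal sketch (the function name is ours, not from the trial software):

```python
def rating_category(rating: int) -> str:
    # Appropriateness bands used for the online prompts, per the trial
    # description above: 1-3 inappropriate, 4-6 equivocal, 7-9 appropriate.
    if not 1 <= rating <= 9:
        raise ValueError("panel ratings run from 1 to 9")
    if rating <= 3:
        return "inappropriate"
    if rating <= 6:
        return "equivocal"
    return "appropriate"

print(rating_category(7))  # appropriate, so the prompt recommends the test
```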

OUTCOMES

The primary outcome for all test-ordering decisions was agreement of physicians' recommendations with those made by the 2 independent expert panels. Agreement was defined by a physician recommending definitely or probably doing a test rated appropriate by the panels or by recommending definitely or probably not doing a test rated inappropriate. An unsure recommendation was interpreted as disagreement.
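The primary outcome definition reduces to a small predicate over the 5-point recommendation scale; the sketch below uses hypothetical names:

```python
def agrees_with_panel(recommendation: int, panel_says_appropriate: bool) -> bool:
    # recommendation uses the trial's 5-point scale: 1 definitely do,
    # 2 probably do, 3 unsure, 4 probably do not do, 5 definitely do not do.
    # "Unsure" (3) counts as disagreement by definition.
    if recommendation == 3:
        return False
    recommends_doing = recommendation in (1, 2)
    return recommends_doing == panel_says_appropriate

print(agrees_with_panel(3, True))  # False: "unsure" counts as disagreement
```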

As a secondary outcome, we investigated the agreement of physicians' recommendations with those of the American Heart Association and the European Society of Cardiology guidelines, which are based on the pretest probability of coronary artery disease. The guidelines recommend that exercise ECG be performed if the pretest probability is intermediate (20%-80%) and not be performed if the pretest probability is lower than 20% or higher than 80%. In the subset of 25 vignettes without confirmed coronary artery disease or previous exercise ECG results, we assessed the pretest probability of coronary artery disease using the Duke score16 based on the vignette characteristics of age, sex, smoking, cholesterol levels, resting ECG changes, typicality of chest pain, and presence of diabetes mellitus. The pretest probability score allowed each of these 25 vignettes to be categorized according to American Heart Association and European Society of Cardiology recommendations for exercise ECG.
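Under the guideline rule summarized above, the secondary-outcome classification is a probability band check. A sketch (treating the 20% and 80% boundaries as inclusive is our assumption; the Duke score itself requires the published model and is not reproduced here):

```python
def guideline_recommends_exercise_ecg(pretest_probability: float) -> bool:
    # AHA/ESC rule as summarized above: perform exercise ECG only when the
    # pretest probability of coronary artery disease is intermediate
    # (20%-80%); boundary handling is our assumption.
    return 0.20 <= pretest_probability <= 0.80

print(guideline_recommends_exercise_ecg(0.45))  # True: intermediate probability
```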

SAMPLE SIZE CALCULATION

We estimated that 140 physicians were required in each arm to detect an odds ratio (OR) of 1.50 for the effect of patient-specific ratings vs conventional guidelines, assuming that agreement with the panel was found in 60% of decisions in the conventional guidelines arm, with 80% power and at 5% significance. Because individual recommendations may be correlated for individual physicians (ie, clustered) or for individual vignettes (ie, nonindependent), the effective sample size may be less than the total number of decisions analyzed. To take this into account in the sample size calculation, we assumed intracluster correlation coefficients of 0.06 between physicians and 0.04 between vignettes.
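The clustering adjustment described above follows the standard design-effect formula 1 + (m - 1) x ICC. The sketch below is illustrative only and does not reproduce the trial's exact power computation:

```python
def effective_sample_size(n_decisions: int, cluster_size: float, icc: float) -> float:
    # Deflate the nominal number of decisions by the design effect
    # 1 + (m - 1) * ICC to account for correlated decisions within a cluster.
    design_effect = 1 + (cluster_size - 1) * icc
    return n_decisions / design_effect

# Illustration: 140 physicians each making 12 post-intervention decisions,
# with the assumed between-physician ICC of 0.06:
n_eff = effective_sample_size(140 * 12, cluster_size=12, icc=0.06)
print(round(n_eff))  # roughly 1012 effective decisions out of 1680
```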

STATISTICAL ANALYSIS

The unit of analysis was physician recommendations on each vignette, and we used random-effects logistic regression analysis allowing for intracluster correlation. Between physicians, intracluster correlation coefficients for recommendations in agreement with patient-specific ratings were 0.06 for exercise ECG and less than 0.01 for angiography; for recommendations in agreement with conventional guidelines, they were less than 0.01. Between vignettes, intracluster correlation coefficients were negligible, so we allowed for clustering by physician only. We calculated ORs (95% confidence intervals [CIs]) comparing agreement between the recommendations made by trial physicians and the 2 outcomes between trial arms and within trial arms, using physicians as their own controls. We investigated whether effects differed in prespecified subgroups (physician specialty and age, sex, and race/ethnicity of the patient in the vignette) by fitting interaction terms. We excluded from the main analysis 71 physicians who did not complete the first 12 vignettes. We also examined the volume of appropriately recommended tests in both arms following the intervention. Analysis was conducted blind to the intervention using STATA 8.0 (StataCorp LP, College Station, Tex).
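The between-arm odds ratios can be approximated from the raw counts with an unadjusted Wald interval. This is a sketch only: the trial's reported estimates come from random-effects logistic regression allowing for clustering by physician, so they differ slightly from the crude values:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a: int, n1: int, b: int, n2: int, z: float = 1.96):
    # Crude odds ratio for a/n1 vs b/n2 events, with a Wald 95% CI computed
    # on the log-odds scale; no adjustment for clustering by physician.
    c, d = n1 - a, n2 - b
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower, upper = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lower, upper

# Exercise ECG, between-arm comparison: 819/1491 vs 648/1488 appropriate.
or_, lower, upper = odds_ratio_ci(819, 1491, 648, 1488)
# The crude OR (~1.58, CI ~1.37-1.83) tracks the reported 1.57 (1.36-1.82).
```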

RESULTS

PARTICIPANT AND VIGNETTE CHARACTERISTICS

Cardiologists and general practitioners did not differ between trial arms with respect to practice characteristics (Table 1) or region. Physicians who registered but did not complete the first 12 vignettes did not differ in practice characteristics compared with those who did. The 4 blocks of 12 vignettes were equally distributed by intervention and by physician specialty. Missing recommendations represented 2% (134/6072) of exercise ECG decisions and 3% (194/6485) of angiography decisions and did not differ by intervention arm or by physician specialty. Therefore, we analyzed 5938 decisions for exercise ECG and 6291 decisions for angiography.

BETWEEN-ARM COMPARISONS

Before the intervention, decisions about the first 12 vignettes were no more appropriate in one arm than in the other: for exercise ECG, appropriate decisions were 619 (42%) of 1475 in the patient-specific ratings arm and 633 (43%) of 1484 in the conventional guidelines arm; for angiography, the corresponding figures were 1018 (65%) of 1557 and 1015 (65%) of 1563. Figure 2 shows that after the intervention physicians receiving patient-specific ratings were more likely to reach appropriate decisions about the second set of 12 vignettes for exercise ECG (54.9% vs 43.5%; OR, 1.57; 95% CI, 1.36-1.82) and for angiography (79.9% vs 64.0%; OR, 2.24; 95% CI, 1.90-2.62) compared with those receiving conventional guidelines. The effect of patient-specific ratings on changing test-ordering behavior was consistent across vignettes with different age, sex, and race/ethnicity variables and by physician specialty (Figure 3). Decisions in the patient-specific ratings arm vs the conventional guidelines arm did not differ (OR, 1.15; 95% CI, 0.93-1.41) for the secondary outcome (based on the Duke score for exercise ECG).

Figure 2. Effect of patient-specific ratings vs conventional guidelines on correct testing decisions based on 2 independent expert panels (between-arm comparisons). AHA/ESC indicates American Heart Association/European Society of Cardiology; CI, confidence interval; ECG, electrocardiography; and OR, odds ratio.

Figure 3. Effect within age, sex, and race/ethnicity of patient vignettes and of physician specialty. CI indicates confidence interval; ECG, electrocardiography; GPs, general practitioners; and OR, odds ratio.

Overall, the use of patient-specific ratings led to an increase in the volume of appropriate tests ordered compared with the use of conventional guidelines. These values were 699 (84%) of 831 vs 541 (66%) of 817 for exercise ECG (P<.001) and 732 (81%) of 906 vs 604 (68%) of 890 for angiography (P<.001).

WITHIN-ARM (BEFORE AND AFTER) COMPARISONS

Figure 4 shows within-arm comparisons before and after the intervention. Physicians receiving patient-specific ratings made more appropriate decisions on exercise ECG (OR, 2.62; 95% CI, 2.14-3.22) and on angiography (OR, 2.10; 95% CI, 1.79-2.47) following the intervention, while conventional guidelines did not lead to an increase in appropriate decisions. Decisions within the patient-specific ratings arm (OR, 1.13; 95% CI, 0.91-1.37) and within the conventional guidelines arm (OR, 1.05; 95% CI, 0.87-1.26) did not change for the secondary outcome (based on the Duke score for exercise ECG). At baseline among patients in whom angiography was deemed appropriate by the expert panels, women seemed to be less likely to be recommended for angiography than men before the intervention (OR, 0.72; 95% CI, 0.54-0.97). This apparent sex inequity was attenuated by the patient-specific ratings intervention (OR, 0.94; 95% CI, 0.67-1.32). This was also the case for exercise ECG decisions (baseline: OR, 0.83; 95% CI, 0.58-1.19; postintervention: OR, 0.93; 95% CI, 0.70-1.23) and for angiography decisions (baseline: OR, 0.61; 95% CI, 0.40-0.92; postintervention: OR, 0.74; 95% CI, 0.45-1.12) in vignettes of nonwhite race/ethnicity.

Figure 4. Change in decision making before vs after the intervention, with P values for interactions. CI indicates confidence interval; ECG, electrocardiography; and OR, odds ratio.

COMMENT

In this multicenter trial, physicians randomized to patient-specific ratings changed their test-ordering decisions markedly, whereas those given conventional guidelines did not. To our knowledge, this is the first trial comparing computerized patient-specific ratings vs conventional guidelines in investigation decisions or in any aspect of cardiovascular disease. Based on more than 10 000 investigation decisions in the management of suspected or confirmed angina, these findings suggest that patient-specific ratings are a promising novel technology for improving appropriate investigation among patients with angina in primary and secondary care.

Chest pain in ambulatory care is probably the most common initial presentation of coronary artery disease, and early accurate diagnosis is important to prevent progression to acute coronary syndromes.17 However, symptoms of angina in the general population are commonly not diagnosed18 or are treated empirically in the absence of confirmatory investigation.8 Even among patients being assessed by cardiologists, there is an appreciable coronary event rate among patients who are told that their pain is noncardiac.19 To our knowledge, there are no randomized controlled trials of different investigation strategies among patients with chest pain; hence, there are no gold standard recommendations.

The primary outcome used in the ARIA Trial was the agreement between trial physicians' recommendations and those made earlier by 2 independent expert panels. This outcome has the advantages of reliability (both panels agreed) and face validity (it incorporated the judgments of generalists and specialists). Furthermore, patient-specific ratings have been shown to have prognostic validity for various procedures such as knee or hip replacement20 and revascularization21 (patients had better outcomes if they received the appropriate intervention compared with those who did not).

A secondary outcome, applicable to a subset of decisions, was based on the guideline recommendation that exercise ECG should be carried out among patients with an intermediate pretest probability of coronary disease. We chose this outcome for a sensitivity analysis to test the possibility that physicians in the conventional guidelines arm might be more likely to change when judged by the guideline standard, based on the Duke score. No such effect was observed; instead, we observed a small (15%) nonsignificant advantage in the patient-specific ratings arm. This lack of effect on the outcome based on the Duke score is not surprising, as there was only moderate agreement between the Duke score and appropriateness based on expert panel ratings: one third of the 18 vignettes rated appropriate for exercise ECG had a low or high pretest probability based on the Duke score. This lack of agreement is not unexpected: different variables are considered (the Duke score does not take into account angina severity), and (crucially) the Duke score was developed in a tertiary care setting and is known to overestimate the probability of disease in primary care.22

We observed a marked effect of patient-specific ratings on physician test-ordering behavior among cardiologists and general practitioners, suggesting that the ratings were universally applicable, clinically credible, and unambiguous. Our findings are consistent with a previous trial on paper-based decision support in treatment decisions for back pain.23 In our study, the effect of patient-specific ratings was consistent across vignettes of women, older patients, and different racial/ethnic groups. Among those in whom investigation was deemed appropriate by the expert panels, women and patients from ethnic minorities were less likely to be recommended for testing before the intervention. These differences were attenuated and became nonsignificant following the patient-specific ratings intervention. Taken together, these findings lend further support to the superiority of patient-specific ratings over conventional guidelines in changing physician behavior and offer a potential means to address inequitable underuse of investigation.

Previous interventions that included guidelines, audit, and feedback have had modest effects on test ordering and have focused largely on the overuse of investigation.24,25 The limited effect of guidelines on testing behavior may reflect difficulties in applying recommendations made among broad groups of patients to individuals, as well as the complexity and sheer volume of information contained in the guidelines. While in our study patient-specific ratings were readily available for each vignette in the patient-specific ratings arm, guidelines (even summarized and electronically accessible) had to be screened for information by participants in the conventional guidelines arm. Also, the same patient may appear under different sections of a single guideline, with potentially conflicting recommendations.

The lack of improvement in the management of patients with angina or asthma found in a randomized comparison of computerized guidelines vs no decision support was attributed to the poor uptake of the guidelines.7 Despite attempts on our part to enhance the uptake of guidelines, including providing relevant sections and readily accessible full-text guidelines, and despite 86% of participants accessing the guidelines during their decision-making process, we likewise found no change in management with the use of guidelines.

The strengths of our multicenter study lie in a rigorous trial design, the large number of clinicians from primary and secondary care, and a well-designed intervention applicable to everyday clinical decision making. The main limitation of our trial is the use of patient vignettes. However, they offer the advantage of standardizing the patient characteristics across trial arms, largely removing the possibility that differences in patients might confound the trial findings. The validity of patient vignettes has been established in the evaluation of diagnostic accuracy, physical examination, and actual clinical practice in outpatient settings and primary care.12 In addition, vignettes have been successfully used to explore physician behavior in ordering investigation and procedures in real-life practice.26,27

The positive findings in the ARIA Trial strengthen the case for randomized trials in real-life patient decision making, as we cannot exclude the possibility that the effectiveness of the intervention is altered by patient, physician, or system features in actual practice.28 We also cannot exclude the possibility that the physicians who volunteered to participate in the trial were an unrepresentative group and were more likely to change their practice with the patient-specific ratings intervention than physicians who did not volunteer. However, this is unlikely because their characteristics are representative of all physicians in participating centers. The electronic health record, with its capacity for embedded decision support, offers a route to implementing patient-specific ratings in routine patient care. For exercise ECG, 55% of decisions agreed with the expert panels in the patient-specific ratings group. Strengthening the empirical evidence base, particularly with studies in unselected patients in primary care, may further improve the validity of patient-specific ratings and agreement. Our findings revealed an increase in the number of appropriate tests ordered with patient-specific ratings, which could address the underuse of revascularization,21 for which angiography is a prerequisite in patients with angina. For policy makers, patient-specific recommendations provide a benchmark against which to measure health care inequalities and quality of care.

Patient-specific ratings substantially changed physician testing behavior, but conventional guidelines did not. Despite the challenge of an increasingly complex patient population29 and the inherent limitations of any form of clinical guidance, these patient-specific ratings represent (to our knowledge) the most meticulously tested and systematic attempt to guide physician practice, and they offer a promising intervention for implementation in routine care.

Correspondence: Harry Hemingway, FRCP, Department of Epidemiology and Public Health, University College London Medical School, 1-19 Torrington Pl, London WC1E 6BT, England (h.hemingway@ucl.ac.uk).

Author Contributions: Drs Junghans and Eldridge had full access to all of the data; Drs Junghans and Hemingway take responsibility for the integrity of the data and the accuracy of the data analysis for this study. Study concept and design: Junghans, Feder, Timmis, Eldridge, Sekhri, Black, Shekelle, and Hemingway. Acquisition of data: Junghans and Feder. Analysis and interpretation of data: Junghans, Feder, Eldridge, Black, and Shekelle. Drafting of the manuscript: Junghans, Feder, and Hemingway. Critical revision of the manuscript for important intellectual content: Junghans, Feder, Timmis, Eldridge, Sekhri, Black, Shekelle, and Hemingway. Statistical analysis: Junghans and Eldridge. Obtained funding: Junghans, Feder, Timmis, Eldridge, Black, Shekelle, and Hemingway. Administrative, technical, and material support: Eldridge and Sekhri. Study supervision: Feder, Black, and Hemingway.

Financial Disclosure: None reported.

Funding/Support: This study was supported in part by the NHS Service Delivery and Organisation Research and Development Programme and the Department of Health. Dr Hemingway is supported by a Public Health Career Scientist Award from the Department of Health.

Role of the Sponsors: The sponsors had no input in the design or conduct of the study; the collection, management, analysis, or interpretation of the data; or the preparation, review, or approval of the manuscript.

Acknowledgment: We acknowledge the late Sarah Cotter, PhD, who contributed to the statistical analysis of this study. We thank Michael Kimpton and Steve Hayes for developing the ARIA Trial Web site and Melvyn Jones, MD, for back-translating the vignettes into indications for validation.

REFERENCES

1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
2. Office of Quality and Performance. Clinical practice guidelines. 2005. http://www.oqp.med.va.gov/cpg/cpg.htm. Accessed October 29, 2006.
3. National Institute for Clinical Excellence. The Guideline Development Process: Information for National Collaborating Centres and Guideline Development Groups. London, England: National Institute for Clinical Excellence; 2001.
4. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8:iii-iv, 1-72.
5. Woolf SH. Practice guidelines: a new reality in medicine, III: impact on patient care. Arch Intern Med. 1993;153:2646-2655.
6. Sheldon TA, Cullum N, Dawson D, et al. What's the evidence that NICE guidance has been implemented? results from a national evaluation using time series analysis, audit of patients' notes, and interviews. BMJ. 2004;329:999.
7. Eccles M, McColl E, Steen N, et al. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002;325:941.
8. Hemingway H, McCallum A, Shipley M, Manderbacka K, Martikainen P, Keskimäki I. Incidence and prognostic implications of stable angina pectoris among women and men. JAMA. 2006;295:1404-1411.
9. McLaughlin TJ, Soumerai SB, Willison DJ, et al. Adherence to national guidelines for drug treatment of suspected acute myocardial infarction: evidence for undertreatment in women and the elderly. Arch Intern Med. 1996;156:799-805.
10. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330:765.
11. Fitch K, Bernstein SJ, Aguilar MD. The RAND/UCLA Appropriateness Method User's Manual. Santa Monica, Calif: RAND Corp; 2001.
12. Peabody JW, Luck J, Glassman P, Dresselhaus TR, Lee M. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA. 2000;283:1715-1722.
13. Gibbons RJ, Chatterjee K, Daley J, et al. ACC/AHA/ACP-ASIM guidelines for the management of patients with chronic stable angina: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on Management of Patients With Chronic Stable Angina). J Am Coll Cardiol. 1999;33:2092-2197.
14. Task Force of the European Society of Cardiology. Management of stable angina pectoris: recommendations of the Task Force of the European Society of Cardiology. Eur Heart J. 1997;18:394-413.
15. Goldman L, Hashimoto B, Cook EF, Loscalzo A. Comparative reproducibility and validity of systems for assessing cardiovascular functional class: advantages of a new specific activity scale. Circulation. 1981;64:1227-1234.
16. Pryor DB, Harrell FE Jr, Lee KL, Califf RM, Rosati RA. Estimating the likelihood of significant coronary artery disease. Am J Med. 1983;75:771-780.
17. Timmis AD, Feder G, Hemingway H. Prognosis of stable angina pectoris: why we need larger population studies with higher endpoint resolution. Heart. In press.
18. Hemingway H, Shipley M, Britton A, Page M, MacFarlane P, Marmot M. Prognosis of angina with and without a diagnosis: 11 year follow up in the Whitehall II prospective cohort study. BMJ. 2003;327:895.
19. Sekhri N, Feder G, Junghans C, Hemingway H, Timmis AD. How effective are rapid access chest pain clinics? prognosis of incident angina and non-cardiac chest pain in 8762 consecutive patients. Heart. doi:10.1136/hrt.2006.090894.
20. Quintana JM, Escobar A, Arostegui I, et al. Health-related quality of life and appropriateness of knee or hip joint replacement. Arch Intern Med. 2006;166:220-226.
21. Hemingway H, Crook AM, Feder G, et al. Underuse of coronary revascularization procedures in patients considered appropriate candidates for revascularization. N Engl J Med. 2001;344:645-654.
22. Sox HC Jr, Hickam DH, Marton KI, et al. Using the patient's history to estimate the probability of coronary artery disease: a comparison of primary care and referral practices. Am J Med. 1990;89:7-14.
23. Shekelle PG, Kravitz RL, Beart J, Marger M, Wang M, Lee M. Are nonspecific practice guidelines potentially harmful? a randomized comparison of the effect of nonspecific versus specific guidelines on physician decision making. Health Serv Res. 2000;34:1429-1448.
24. Verstappen W, ter Riet G, van der Weijden T, Hermsen J, Grol R. Variation in requests for imaging investigations by general practitioners: a multilevel analysis. J Health Serv Res Policy. 2005;10:25-30.
25. Kerry S, Oakeshott P, Dundas D, Williams J. Influence of postal distribution of the Royal College of Radiologists' guidelines, together with feedback on radiological referral rates, on X-ray referrals from general practice: a randomized controlled trial. Fam Pract. 2000;17:46-52.
26. Freeman VG, Rathore SS, Weinfurt KP, Schulman KA, Sulmasy DP. Lying for patients: physician deception of third-party payers. Arch Intern Med. 1999;159:2263-2270.
27. Sirovich BE, Gottlieb DJ, Welch HG, Fisher ES. Variation in the tendency of primary care physicians to intervene. Arch Intern Med. 2005;165:2252-2256.
28. Tierney WM, Overhage JM, Murray MD, et al. Can computer-generated evidence-based care suggestions enhance evidence-based management of asthma and chronic obstructive pulmonary disease? a randomized, controlled trial. Health Serv Res. 2005;40:477-497.
29. Boyd CM, Darer J, Boult C, Fried LP, Boult L, Wu AW. Clinical practice guidelines and quality of care for older patients with multiple comorbid diseases: implications for pay for performance. JAMA. 2005;294:716-724.

Figures

Figure 1. Flowchart of physicians through the trial. ECG indicates electrocardiography; GPs, general practitioners.

Figure 2. Effect of patient-specific ratings vs conventional guidelines on correct testing decisions based on 2 independent expert panels (between-arm comparisons). AHA/ESC indicates American Heart Association/European Society of Cardiology; CI, confidence interval; ECG, electrocardiography; and OR, odds ratio.

Figure 3. Effect within age, sex, and race/ethnicity of patient vignettes and of physician specialty. CI indicates confidence interval; ECG, electrocardiography; GPs, general practitioners; and OR, odds ratio.

Figure 4. Change in decision making before vs after the intervention, with P values for interactions. CI indicates confidence interval; ECG, electrocardiography; and OR, odds ratio.

Tables

Table 1. Characteristics of Trial Participants*
Table 2. Characteristics of 48 Patient Vignettes Used in the Trial


