Original Investigation

Physicians’ Diagnostic Accuracy, Confidence, and Resource Requests: A Vignette Study

Ashley N. D. Meyer, PhD1,2,3; Velma L. Payne, PhD, MBA1,2,3; Derek W. Meeks, MD1,2,3; Radha Rao, MD2,3; Hardeep Singh, MD, MPH1,2,3
Author Affiliations
1Houston Veterans Affairs Health Services Research and Development Center of Excellence and the Section of Health Services Research, Houston, Texas
2Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas
3Baylor College of Medicine, Houston, Texas
JAMA Intern Med. 2013;173(21):1952-1958. doi:10.1001/jamainternmed.2013.10081.

Importance  Little is known about the relationship between physicians’ diagnostic accuracy and their confidence in that accuracy.

Objective  To evaluate how physicians’ diagnostic calibration, defined as the relationship between diagnostic accuracy and confidence in that accuracy, changes with evolution of the diagnostic process and with increasing diagnostic difficulty of clinical case vignettes.

Design, Setting, and Participants  We recruited general internists from an online physician community and asked them to diagnose 4 previously validated case vignettes of variable difficulty (2 easier; 2 more difficult). Cases were presented in a web-based format and divided into 4 sequential phases simulating diagnosis evolution: history, physical examination, general diagnostic testing data, and definitive diagnostic testing. After each phase, physicians recorded 1 to 3 differential diagnoses and corresponding judgments of confidence. Before being presented with definitive diagnostic data, physicians were asked to identify additional resources they would require to diagnose each case (ie, additional tests, second opinions, curbside consultations, referrals, and reference materials).

Main Outcomes and Measures  Diagnostic accuracy (scored as 0 or 1), confidence in diagnostic accuracy (on a scale of 0-10), diagnostic calibration, and whether additional resources were requested (no or yes).

Results  A total of 118 physicians with broad geographical representation within the United States correctly diagnosed 55.3% of easier and 5.8% of more difficult cases (P < .001). Despite this large difference in diagnostic accuracy between easier and more difficult cases, the difference in confidence was relatively small (7.2 vs 6.4 of 10 for easier vs more difficult cases; P < .001) and likely clinically insignificant. Overall, diagnostic calibration was worse for more difficult cases (P < .001) and was characterized by overconfidence in accuracy. Higher confidence was related to decreased requests for additional diagnostic tests (P = .01); higher case difficulty was related to more requests for additional reference materials (P = .01).

Conclusions and Relevance  Our study suggests that physicians’ level of confidence may be relatively insensitive to both diagnostic accuracy and case difficulty. This mismatch might prevent physicians from reexamining difficult cases where their diagnosis may be incorrect.
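The calibration construct at the center of the study can be illustrated with a small sketch. The numbers below are invented for illustration only (not the study's data), and the gap metric (mean confidence rescaled to 0–1 minus mean accuracy) is one common simplification of calibration, not necessarily the authors' exact analysis:

```python
# Illustrative diagnostic-calibration sketch.
# Accuracy is scored 0 or 1 per case; confidence is rated on a 0-10 scale,
# matching the study's measures. All data here are hypothetical.

def calibration(accuracy, confidence):
    """Return (mean accuracy, mean confidence rescaled to 0-1, overconfidence gap)."""
    mean_acc = sum(accuracy) / len(accuracy)
    mean_conf = sum(confidence) / len(confidence) / 10.0
    return mean_acc, mean_conf, mean_conf - mean_acc

# Easier cases: accuracy is high and confidence tracks it, so the gap is small.
easy = calibration([1, 1, 0, 1], [8, 7, 7, 7])

# More difficult cases: accuracy collapses but confidence barely drops,
# producing the overconfidence pattern the study reports.
hard = calibration([0, 0, 0, 1], [7, 6, 6, 7])

print(easy)  # small (here slightly negative) gap
print(hard)  # large positive gap: overconfidence
```

A positive gap on the difficult cases, with a near-zero gap on the easy ones, is the signature of confidence that is insensitive to case difficulty.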


Figures

Figure 1. Study Procedure and Questions Asked in the 4 Diagnostic Phases for Each Case

Figure 2. Physicians’ Mean Diagnostic Accuracy and Confidence in That Accuracy as a Function of Diagnostic Phase and Case Difficulty (Easier vs More Difficult)
Lab indicates laboratory testing; error bars represent ±1 SEM.

Comments
Brilliant study, shocking results
Posted on September 3, 2013
Jason Maude
Founder, Isabel Healthcare
Conflict of Interest: Founder of Isabel Healthcare, a company which produces diagnosis decision making aids.
This study on diagnostic accuracy and confidence is brilliant, and the authors are to be congratulated. However, the results, showing an accuracy rate of 55% for the easy cases and just 6% for the hard cases, are truly shocking, and the authors’ statement that “overall diagnostic accuracy was rather low - 31% across the 4 cases” must be the understatement of the year. Having been used in other studies, the cases are generally acknowledged to be difficult, but if the physicians taking part in the study thought they were difficult, then why didn’t the confidence ratings fall from the 60-70% level to 10-20%? There is nothing wrong with admitting very little confidence in your judgement on a particular case (with the implication that you will find out more), but there is something wrong in projecting a false sense of confidence.

Diagnostic accuracy appears to have been barely improved beyond the history and physical stages by labs and imaging, begging the question of what value is really provided by expensive and invasive testing. Lack of time did not seem to be an issue in the low levels of accuracy. It seems that the high levels of confidence meant that the physicians did not request additional resources. Rather than “overconfidence,” a more apt explanation may be the “illusion of knowledge”: the overconfidence results from the illusion of knowledge.

One of the solutions suggested by the authors is “engaging patients in creative ways,” one of which could be actively encouraging patients to use sophisticated symptom checkers before the consultation so that they can contribute more productively to the diagnostic process.
