Author Affiliations: School of Medicine (Ms Reid) and Division of General Internal Medicine, Department of Medicine (Dr Mehrotra), University of Pittsburgh, and RAND Health, RAND Corp (Drs Friedberg, Adams, McGlynn, and Mehrotra), Boston, Massachusetts, Pittsburgh, Pennsylvania, and Santa Monica, California.
Information on physicians' performance on measures of clinical quality is rarely available to patients. Instead, patients are encouraged to select physicians on the basis of characteristics such as education, board certification, and malpractice history. In a large sample of Massachusetts physicians, we examined the relationship between physician characteristics and performance on a broad range of quality measures.
We calculated overall performance scores on 124 quality measures from RAND's Quality Assessment Tools for each of 10 408 Massachusetts physicians using claims generated by 1.13 million adult patients. The patients were continuously enrolled in 1 of 4 Massachusetts commercial health plans from 2004 to 2005. Physician characteristics were obtained from the Massachusetts Board of Registration in Medicine. Associations between physician characteristics and overall performance scores were assessed using multivariate linear regression.
The mean overall performance score was 62.5% (5th to 95th percentile range, 48.2%-74.9%). Three physician characteristics were independently associated with significantly higher overall performance: female sex (1.6 percentage points higher than male sex; P < .001), board certification (3.3 percentage points higher than noncertified; P < .001), and graduation from a domestic medical school (1.0 percentage point higher than international; P < .001). There was no significant association between performance and malpractice claims (P = .26).
Few characteristics of individual physicians were associated with higher performance on measures of quality, and observed associations were small in magnitude. Publicly available characteristics of individual physicians are poor proxies for performance on clinical quality measures.
To improve the quality of care received by their beneficiaries, some health plans use physician report cards and tiered physician networks to steer their members toward physicians who provide high-quality care. However, most patients do not have access to physician quality measures. Furthermore, the quality metrics available to some patients are limited in scope and reflect only a few aspects of overall quality of care. Patients are therefore encouraged to use publicly available proxies for clinical performance when choosing a physician. The Agency for Healthcare Research and Quality advises patients to consult state medical boards and to seek information on board certification and training as a way to assess the quality of care physicians provide.1 The consumer Web site HealthGrades2 limits its “recognized doctor” and “5-star doctor” labels to physicians who are board certified, who have never had their license revoked, and who are free of disciplinary actions or malpractice claims. Malpractice claims and board certification status, along with procedure-specific experience, are judged by consumers to be much more indicative of the quality of care delivered by a physician than ratings by government agencies or independent medical institutions.3
There seems to be a tacit belief that these physician characteristics are a signal for clinical quality. However, the value of publicly available individual physician characteristics as predictors of clinical quality is unclear. Few definitive or broadly applicable conclusions have emerged from previous studies that examined the relationship between individual physician characteristics and quality of care. The relationship between performance on quality measures and physicians' history of malpractice claims or disciplinary actions has not, to our knowledge, been studied.4-6 In general, studies have found an inverse relationship between years of experience and performance on quality measures.7-9 Findings have been mixed on the relationship between quality and other characteristics such as sex,8-14 board certification status,8,15,16 and medical school site (ie, international vs domestic).8,17,18 Previous investigations of relationships between individual physician characteristics and performance on quality measures have been limited by the number of physicians assessed, the available physician characteristics, and the scope and validity of the quality metrics used. Much of the previous literature related to physician characteristics and clinical quality has had a narrow clinical focus, with each study examining only a limited range of processes, conditions, or specialties.
In this study we examined, in a large sample of Massachusetts physicians, the relationship between a number of physician characteristics and performance on a broad range of quality measures.
Physician performance scores were created using a deidentified aggregated claims data set of 1.13 million patients ages 18 to 65 years who were enrolled continuously in 1 of 4 Massachusetts commercial health plans in 2004 to 2005. Taken together, the 4 plans constituted over 85% of the commercial market in the state. The data set included all professional, inpatient, facility, and pharmacy claims. Physicians were linked across the 4 health plans using a crosswalk developed by the Massachusetts Health Quality Partners (MHQP) that connects a unique physician identifier to the health care provider (physician) numbers used by each health plan.19 Children younger than 18 years were excluded because no pediatric quality measures were used. Elderly persons (>65 years) were also excluded because coinsurance with Medicare was inconsistently recorded, and the plans could not reliably identify those for whom Medicare was the primary payer.
The MHQP maintains a database of all health care providers who have a contract with any of the major commercial health plans in the state. From this sample of health care providers, we eliminated those who practiced outside Massachusetts and those who did not bill at least 1 claim to any of the 4 health plans in 2004 to 2005. We then eliminated nonphysicians (ie, podiatrists, chiropractors, acupuncturists), physicians with no assigned specialty, pediatricians, and specialties with no applicable quality measures or direct patient care (eg, pathology, radiology). After these exclusions, physicians in 23 specialties contributed data to the analysis.
Publicly available data on individual physician characteristics were obtained from the Massachusetts Board of Registration in Medicine.20 The board publicly releases, for each physician, information on birth date, medical school graduation date, medical school attended, board certification status, sex, payment on malpractice claims, and disciplinary actions. These data are entered and updated by physicians at the time of licensure and relicensure; malpractice and disciplinary information, however, is maintained by the board and is not self-entered by physicians. From this database we eliminated physicians with a limited license (ie, residents). Experience was measured by years since medical school graduation. Medical schools in the United States were matched to their 2008 US News and World Report rankings in research and primary care.21 Malpractice claims included those on which a payment was made from March 30, 1998, to February 28, 2008. The board's disciplinary archives listed all disciplinary and public actions by the board from June 9, 1999, through June 18, 2008.22 We excluded 5 publicly available variables from analysis. Two of these variables (criminal convictions, hospital disciplinary actions) were very rare among physicians. Two variables (number of articles published, awards) were inconsistently entered by physicians, and 1 variable (work site) had unclear definitions. For example, it was unclear how a physician might choose between "educational institution," "hospital," or "clinic."
We used the RAND claims-based Quality Assessment (QA) Tools to assess performance on measures of clinical quality. The development of the QA Tools measures has been described in previous publications.23,24 Briefly, RAND staff selected conditions that were identified as leading causes of death, illness, and utilization of health care; staff physicians reviewed established national guidelines and medical literature to identify key processes of care subject to potential overuse and underuse throughout the continuum of care for each condition. Four 9-member multispecialty expert panels, each diverse in geography, practice setting, and sex, were convened to assess the validity of the indicators using the RAND–University of California, Los Angeles, modified Delphi method. The QA Tools measures were initially developed to be abstracted from medical records and included 439 measures; these have subsequently been adapted for scoring from claims records. The claims-based QA Tools measures used in our analyses include 124 indicators of quality of care for 22 acute and chronic conditions, as well as preventive care; these are listed in the eAppendix.
Instances when recommended care was indicated or provided were attributed to the individual physicians who triggered the indicator. Each physician's composite performance score was created by dividing the number of instances in which recommended care was delivered by the number of instances in which patients were eligible for such care and were assigned to that physician. This composite method has been described as the “overall score” method in previous literature.25 To prevent differences in the ease of delivering needed care (eg, the mean rate of mammography for the state is much higher than the mean rate of cervical cancer screening) from affecting physicians' overall performance scores, we standardized the expected performance on each indicator by subtracting its statewide mean from each physician's score on that indicator. This process created a “measurement difficulty-adjusted” performance composite score, the mean of which was zero across all physicians.26
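The "overall score" composite and the measurement-difficulty adjustment described above can be sketched as follows. This is an illustrative reconstruction, not the authors' SAS implementation; the (physician, indicator, passed) event tuples are a hypothetical simplification of the claims-derived opportunity records.

```python
from collections import defaultdict

def overall_and_adjusted_scores(events):
    """events: iterable of (physician_id, indicator_id, passed) tuples,
    one per quality-measure opportunity attributed to a physician
    (passed = 1 if recommended care was delivered, else 0).
    Returns (raw, adj): raw overall scores, and difficulty-adjusted
    scores in which each indicator's statewide mean pass rate is
    subtracted before averaging, so the adjusted mean is zero."""
    # Statewide pass rate per indicator.
    ind_pass, ind_n = defaultdict(int), defaultdict(int)
    for _, ind, passed in events:
        ind_pass[ind] += passed
        ind_n[ind] += 1
    ind_mean = {i: ind_pass[i] / ind_n[i] for i in ind_n}

    raw_num, adj_sum, n = defaultdict(int), defaultdict(float), defaultdict(int)
    for doc, ind, passed in events:
        raw_num[doc] += passed
        adj_sum[doc] += passed - ind_mean[ind]  # deviation from statewide mean
        n[doc] += 1
    raw = {d: raw_num[d] / n[d] for d in n}
    adj = {d: adj_sum[d] / n[d] for d in n}
    return raw, adj
```

Subtracting the statewide mean means a physician whose panel happens to trigger easy indicators (eg, mammography) gains no advantage over one whose panel triggers hard ones (eg, cervical cancer screening).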
We created multivariate linear regression models to examine the associations between physician characteristics and performance scores. The unit of analysis was the individual physician. The dependent variable was the composite difficulty-adjusted performance score. The independent variables were physician sex, board certification status, experience (years since graduation from medical school), medical school location (domestic or international), medical school ranking (within or below the top 10 in the 2008 US News and World Report rankings), malpractice claims (none vs ≥1 in the past 10 years), and disciplinary actions by the board (none vs ≥1 in the past 10 years). The regression was weighted by the number of quality measure opportunities attributed to each physician.
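The weighted fit described above can be sketched with a small weighted least-squares routine. The study used SAS; this NumPy version is only an illustration of the weighting scheme, and the design matrix and weights shown are hypothetical.

```python
import numpy as np

def weighted_ols(X, y, w):
    """Weighted least squares: minimizes sum_i w_i * (y_i - x_i @ beta)^2.
    In the study's setup, w would be each physician's number of
    quality-measure opportunities, X the intercept plus indicator
    variables for physician characteristics, and y the
    difficulty-adjusted composite performance score."""
    sw = np.sqrt(np.asarray(w, dtype=float))
    # Scaling rows by sqrt(w) turns weighted OLS into ordinary OLS.
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta
```

Weighting by opportunity count gives physicians with many scored events (and hence more precisely estimated composites) proportionally more influence on the coefficient estimates.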
We ran several different versions of the regression model using different subsets of physicians and performance data: (1) all physicians and all indicators; (2) all physicians, but with separate regressions for acute, chronic, and preventive care indicators; (3) all physicians, but with separate regressions for female-patient–specific and male-patient–specific indicators (eg, recommended prenatal or mammography care for women, and recommended benign prostatic hypertrophy or sexually transmitted infection care for men); and (4) all indicators, but with separate regression models for the 5 specialties that averaged greater than 150 quality measure opportunities per physician (internal medicine, family/general practice, cardiology, obstetrics and gynecology, and endocrinology).
Performance scores are presented as the mean score for the group of physicians possessing each characteristic. We created these scores by solving the regression model created for each care type or physician specialty to find the percentage-point difference in difficulty-adjusted performance score attributable to that characteristic. We then added that quantity to the unadjusted mean performance score to arrive at a quantity representing the percentage of recommended care that physicians with that characteristic provide, adjusted for the degree of difficulty of each measure. To address the testing of multiple comparisons, we calculated the critical P value that limited the false discovery rate (the expected rate of type 1 error among all significant statistical tests) to 5%.27 P values below this threshold were considered statistically significant. All statistical analyses were performed using SAS software (version 9.2; SAS Institute Inc, Cary, North Carolina).
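The false-discovery-rate threshold described above is conventionally computed with the Benjamini-Hochberg step-up procedure; assuming that standard procedure (the text cites reference 27 but does not spell out the algorithm), the critical P value falls out as:

```python
def bh_critical_p(pvals, q=0.05):
    """Benjamini-Hochberg step-up rule: with m tests, find the largest
    sorted p-value p_(k) satisfying p_(k) <= (k / m) * q. All tests
    with p <= that critical value are declared significant while
    holding the expected false discovery rate at q.
    Returns 0.0 if no test survives."""
    m = len(pvals)
    crit = 0.0
    for k, p in enumerate(sorted(pvals), start=1):
        if p <= k / m * q:
            crit = p
    return crit
```

Unlike a Bonferroni cutoff of q/m, the critical value adapts to the observed P value distribution, so it is less conservative when many tests show real effects.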
Of the 30 122 physicians in the MHQP database, there were 12 959 physicians in the 23 selected specialties who had a full license, who practiced in Massachusetts, and who submitted 1 or more claims in 2004 to 2005. We then excluded the 2249 physicians with no attributed quality measures and the 302 physicians who could not be linked to the physician characteristics data set. The remaining 10 408 physicians (80.3%) were the basis of our analysis. There were 1 704 686 quality measure opportunities included in the analysis, a mean of 163.8 events per physician (range, 1-3329).
Most physicians were male (70.1%), board certified (92.8%), domestically trained (83.0%), and in possession of allopathic medical degrees (97.7%) (Table 1). They had a wide breadth of experience in practice; 15.2% had less than 10 years, and 24.7% had 30 or more years of experience. Few had made payments on malpractice claims in the past decade (10.2%), and fewer had disciplinary actions against them in that time (1.0%). Approximately 1 in 10 attended schools ranked in the top 10 by US News and World Report21 for research (12.6%) or primary care (9.8%) (Table 1). The physicians were distributed across the 23 specialties, but 34.5% of the physicians in the sample practiced internal medicine (Table 2).
Among all physicians, the mean unadjusted overall performance score was 62.5%, with a 5th to 95th percentile range of 48.2% to 74.9%. Performance scores varied by condition, ranging from 30.9% for cataract care to 68.0% for congestive heart failure care. Unadjusted performance scores for all physicians for the 20 most frequently occurring indicators are shown in Table 3.
In a multivariate model including all physicians and all types of care, female physicians scored higher than male physicians (1.6 percentage points; P < .001), board-certified physicians scored higher than those without board certification (3.3 percentage points; P < .001), and domestically trained physicians scored higher than internationally trained physicians (1.0 percentage point; P < .001) (Table 4). There were no statistically significant associations between performance and allopathic vs osteopathic degree, medical school rankings, disciplinary actions, malpractice claims, or years of experience. The available physician characteristics explained only 2.8% of overall variation in physician performance.
Separate regression models for acute, chronic, and preventive care demonstrated that board certification was associated with higher quality for 2 of the 3 types of care (1.8 percentage points for acute care [P = .001]; 5.9 percentage points for preventive care [P < .001]) (Table 4). Of the physician characteristics, the greatest differences in quality were generally seen among the preventive care measures (female physicians, 5.3 percentage points higher than male physicians [P < .001]; board-certified physicians, 5.9 percentage points higher than noncertified physicians [P < .001]; domestically trained physicians, 2.7 percentage points higher than internationally trained physicians [P < .001]; physicians who had paid a malpractice claim, 3.7 percentage points higher than those with no paid malpractice claim [P < .001]).
Using separate regression models for male- and female-specific measures, we found that female physicians had significantly higher performance scores than male physicians on female-specific measures (4.4 percentage points higher; P < .001); female physicians also scored higher on male-specific measures (5.2 percentage points), but this difference was not statistically significant (P = .22) (Table 4).
Using separate regression models for each of 5 common specialties in our physician population, we found no physician characteristics that were consistently associated with higher clinical quality across all specialties (Table 5). However, the associations seen overall for all physicians and for all types of care paralleled those seen in internal medicine.
Consumers are encouraged to use physician characteristics, such as board certification and lack of paid malpractice claims, as a signal for quality.1,2 Yet in our study, few individual physician characteristics are consistently associated with higher quality, and when present, these associations are small in magnitude and of little practical significance. Considering just the 3 physician characteristics that had an association with quality, the difference in overall composite performance between the average physician with the best combination of these characteristics (female, board certified, domestically trained) and the average physician with the worst combination (male, noncertified, internationally trained) is only 5.9 percentage points. Moreover, this is the average difference. Among physicians with the best combination there is a wide range of performance (48.5%-75.3%, 5th to 95th percentile); this range is quite similar to the range for all physicians (48.2%-74.9%). Thus, there is little evidence to suggest that a patient will consistently receive higher-quality care by switching to a physician with these characteristics. Overall, the results highlight the need for externally available quality information for consumer use.
Despite the finding that physician characteristics are imprecise proxies for consumers to use in assessing quality, we did find some characteristics that were associated with higher performance. Board certification was associated with higher performance scores at the overall level and for both acute and preventive care. We recognize that this is an association and does not imply that board certification itself drives the difference between higher- and lower-quality physicians. However, this association does provide preliminary evidence suggesting that there may be some quality-of-care benefit to be derived from maintenance of certification programs or the inclusion of board-certification activities as a requirement for maintenance of licensure.28 Furthermore, while past studies15,16 have examined the relationship between board certification and quality in an assortment of specific clinical areas, to our knowledge, this is the first to demonstrate a robust relationship between board certification and clinical quality across a broad range of clinical conditions and types of care.
It is striking that we found no consistent association between the number of malpractice claims or disciplinary actions and quality. Although malpractice claims have strong associations with measures of physician communication,29 physician communication style and other physician attributes associated with malpractice claims may have an inconsistent relationship to the process measures of quality that we investigated. Our results in this regard are similar to those of previous research, showing little association between malpractice claims and negligent care.30 In addition, the very low numbers of physicians with disciplinary actions against them by the board in our sample make it difficult to detect any association.
In contrast to the previous literature, we did not find any associations between physicians' years of experience and quality. There are several potential explanations for this difference. The previous systematic review by Choudhry et al7 used a much broader definition to measure quality, including performance on theoretical evaluations such as written examinations or hypothetical clinical scenarios, guideline adherence for therapy or prevention, or health outcomes such as mortality; and included individual studies with narrow areas of clinical quality assessment. Our study used only process-based measures of quality of care across a broad range of clinical areas. Furthermore, while the studies included in the systematic review assessing academic knowledge as a marker of quality all showed consistently negative associations between age and quality, results were somewhat more mixed when quality was measured by adherence to guidelines, a method more analogous to our own work. Finally, while most studies in the systematic review found a negative association between experience and quality, 21% of the studies in the review reported no effect, similar to the findings of our work.
Our study has limitations. The investigated physician characteristics are the major publicly available data on individual physicians that are easily accessible to consumers. However, we recognize that in the future, patients may have access to physician-level performance on some quality metrics. When available, these metrics may be different and narrower in scope than those used in this study. In addition, although we used a broader range of clinical quality measures than any other study to our knowledge, the scope of the quality metrics is inherently limited. The RAND Quality Assessment Tools covered 22 conditions and included solely process-based measures. It is possible that there are stronger associations between physician characteristics and performance on quality measures that were not investigated (eg, measures of patient experience or mortality). Owing to inherent limitations in medical claims, quality measurement using claims is less robust than quality measurement based on medical record review. However, one key advantage of using claims is that it allowed us to assess quality of care for a large number of physicians.
Others have noted relationships between selected practice characteristics and quality measure performance,8,31 but these practice characteristics were not available for the current analysis. Few physician practice characteristics are publicly reported by the Massachusetts Board of Registration in Medicine, and their availability to patients who are choosing a physician is relatively limited. The question of whether generalists or specialists provide better care for specific conditions is not well addressed by our study because we assessed the quality of care across an aggregated group of conditions rather than on a condition-by-condition basis. This question has been investigated in other settings.32,33
Our study was limited to Massachusetts, a state with a high density of academic medical centers and higher overall quality of care than the national average.34 It is possible that in this setting of higher clinical quality, the effect of physician characteristics may be less important than it would be in a setting where the overall quality of care is lower.
In conclusion, we found that individual physician characteristics are poor proxies for performance on clinical quality measures and are not well suited for use as such by patients. Public reporting of individual physician quality data may provide the consumer with more valuable guidance when seeking providers of high-quality health care.
Correspondence: Ateev Mehrotra, MD, MPH, University of Pittsburgh School of Medicine, 230 McKee Pl, Ste 600, Pittsburgh, PA 15213 (firstname.lastname@example.org).
Accepted for Publication: January 27, 2010.
Author Contributions: Ms Reid had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Reid, Adams, and Mehrotra. Acquisition of data: Reid, McGlynn, and Mehrotra. Analysis and interpretation of data: Reid, Friedberg, Adams, McGlynn, and Mehrotra. Drafting of the manuscript: Reid and Friedberg. Critical revision of the manuscript for important intellectual content: Reid, Adams, McGlynn, and Mehrotra. Statistical analysis: Adams. Obtained funding: Reid, McGlynn, and Mehrotra. Study supervision: McGlynn and Mehrotra.
Financial Disclosure: None reported.
Funding/Support: This research was supported by a contract from the US Department of Labor and a grant from the Commonwealth Fund. Dr Mehrotra's salary was supported by a career development award (KL2 RR024154-03) from the National Center for Research Resources, a component of the National Institutes of Health. Ms Reid was supported by a stipend from the University of Pittsburgh School of Medicine Dean's Summer Research Program.
Previous Presentations: This study was presented at the American College of Physicians’ Internal Medicine Annual Meeting; April 24, 2009; Philadelphia, Pennsylvania; the Society of General Internal Medicine’s Annual Meeting; May 14, 2009; Miami, Florida; and the AcademyHealth Annual Research Meeting, Health Care Workforce Interest Group Meeting; June 27, 2009; Chicago, Illinois.
Additional Contributions: Julie Lai, MPH, and Scott Ashwood, MA, provided excellent programming for this study. Barbra Rabson, MPH, and Jan Singer, MA, MPH, from the Massachusetts Health Quality Partners, facilitated obtaining access to the data sets used in this project.