From the Departments of Medicine (Drs Hauer, Wachter, and Auerbach) and Epidemiology and Biostatistics (Drs Wachter and McCulloch), University of California, San Francisco; and St George's University School of Medicine, Grenada, West Indies (Ms Woo). The authors have no relevant financial interest in this article.
Hospitalists are increasingly serving as inpatient attendings at teaching hospitals. The educational impact of this new model is unclear. We evaluated the relationship between type of attending (hospitalist vs traditional) and trainees' ratings of attending teaching and the overall ward rotation.
We analyzed data from a Web-based evaluation system containing all house staff and student evaluations of their attendings and internal medicine ward rotations at 2 university-affiliated teaching hospitals over a 2-year period (1999-2001).
The overall evaluation completion rate was 91% (1587 of 1742 evaluations) by trainees working with 17 hospitalists and 52 traditional attendings. Trainees reported significantly more overall satisfaction with hospitalists than traditional attendings (8.3 vs 8.0 on a 9-point scale; P<.001) and rated hospitalists' overall teaching effectiveness as superior (4.8 vs 4.5 on a 5-point scale; P<.001). Perceived overall educational value of rotations was higher with hospitalist attendings (3.9 vs 3.7 on a 5-point scale; P = .04). Trainees evaluated hospitalists' knowledge, teaching, and feedback as superior to that of traditional attendings. There were no significant differences in reports of attendings' interest in teaching or patients, availability, or emphasis on cost-effectiveness.
Trainees reported more effective teaching and more satisfying inpatient rotations when supervised by hospitalists. This analysis suggests that hospitalists may possess or accrue a specific inpatient knowledge base and teaching skill that distinguishes them from nonhospitalists.
Catalyzed by increasing data regarding efficiency and possible quality advantages,1-3 large numbers of hospitals have adopted the hospitalist model for inpatient care on general medicine wards. Although the model was initially implemented at community-based nonteaching hospitals, academic medical centers and teaching hospitals have also adopted hospitalist systems over the past 5 years.4 As a result, internal medicine (IM) house staff and medical students receive a substantial proportion of their teaching from hospitalists.
Despite this major restructuring in the inpatient educational environment, few published studies have examined the impact of hospitalists on trainee education. In the first year of the hospitalist system at the University of California, San Francisco (UCSF) there was a nonsignificant trend toward increased house staff satisfaction.5 A hospitalist program at Brigham and Women's Hospital, Boston, Mass, also increased resident satisfaction.6 In a pediatric hospital, the use of hospitalists who cared for overflow admissions led to improved teaching and residents' case-mix.7 House staff satisfaction with rotations supervised by 4 hospitalists from the University of Chicago, Chicago, Ill, was greater than with traditional attendings.8 Although these studies raise the possibility of a hospitalist educational advantage, each involved relatively small numbers of hospitalists, single hospital sites, and marginal evaluation response rates, introducing the possibility of biases. None addressed the impact of hospitalists on medical student education.
We undertook this study to understand trainees' perceptions of hospitalists' teaching, taking advantage of a mature hospitalist system at 2 sites, a large number of experienced hospitalists, a data collection system with a high response rate, and a system that assures quasirandom distribution of trainees to both hospitalist and nonhospitalist attendings.
We conducted a retrospective cohort study using data collected in evaluations completed by house staff and students regarding their attendings and rotations at a university tertiary care hospital and a community-based teaching hospital between July 1, 1999, and June 30, 2001. Evaluations were a standard component of educational program evaluation and quality improvement and were completed by third-year medical students in the core IM clerkship, fourth-year students in the IM subinternship, and postgraduate year 1, 2, and 3 IM house staff on IM ward rotations. Trainees who did not work with an attending for at least 2 weeks did not complete evaluations of their attendings. Neither trainees nor attendings were aware of this research project at the time of their evaluations. The Committee on Human Research approved the research protocol.
Ward teams consist of a hospitalist or nonhospitalist attending, IM resident, 1 or 2 interns, often a fourth-year medical student, and 1 or 2 third-year medical students. During the study period, 43.7% of UCSF house staff were in the categorical program, 13.4% were in an evidence-based medicine track, and 42.9% were in a primary care program; 48% of the house staff were female. Of the interns, 9.6% were preliminary.
This study occurred at 2 UCSF hospitals: Moffitt-Long Hospital and Mount Zion Hospital, San Francisco, Calif. Moffitt-Long Hospital is the 500-bed UCSF academic medical center with hospitalists who are faculty focused on the general medical care of hospitalized patients. Hospitalists spent at least 25% of their time attending on the medical wards or medical consultation service in either 2-week or 1-month blocks. Seven hospitalists also held administrative and teaching positions related to the inpatient service, such as program director or chief of service. Two hospitalists were board certified in subspecialties of IM, and all were board certified in IM.
Mount Zion Hospital is a 280-bed community-based teaching hospital and a major teaching site for students and house staff. Of the 5 hospitalists at Mount Zion, 4 were general internists and 1 was an internist with subspecialty board certification. These hospitalists were primarily focused on inpatient care and maintained small ambulatory practices.
Traditional attendings (n = 52) at both sites were university faculty who served as ward attendings in a role similar to hospitalists, generally 2 to 4 weeks per year. Most traditional attendings were general internists who reduced their outpatient duties while attending, and 13 traditional attendings were subspecialists. At Mount Zion, many community-based physicians directed care for their individual patients but were not evaluated by trainees because they did not serve as ward team attendings.
During the study period, hospitalists attended 54.5% of available ward months. Thus, of the 7 teams at Moffitt-Long, 4 would typically be directed by hospitalists and 3 by traditional attendings. Students, house staff, and attendings were assigned to inpatient ward teams essentially at random; nearly all team personnel decisions were made to optimize the schedule based on organizational considerations (eg, avoiding placing residents and interns with the same clinic day on one team or scheduling a trainee on call 2 days in a row). The scenario of a trainee being placed on a certain team because of a specific request occurred less than once per month. House staff wrote all orders and provided 24-hour coverage to all inpatients. There were no differences in other inpatient care systems (eg, level of house staff coverage, computer systems, case managers, social workers, or nursing staff) available to traditional or hospitalist physicians.
Data were drawn from E*Value (Advanced Informatics, LLC, Brooklyn Park, Minn; http://www.advancedinformatics.com), a Web-based evaluation and reporting system that distributes and collects questionnaires. Trainees automatically received e-mail notices of pending evaluations, and periodic e-mail reminders were sent until evaluations were completed. Trainees could not see their own evaluations until their assessments of the rotation and attending were complete.
Evaluations of attendings consisted of 13 items scored on a 9-point scale (1 = unsatisfactory, 5 = satisfactory, and 9 = outstanding) and 1 item (overall teaching effectiveness) scored on a 5-point scale (1 = poor, 5 = excellent). The house staff rotation evaluation form was modeled after the Program Requirements for Residency Education in IM established by the American Council for Graduate Medical Education, Chicago, Ill (http://www.acgme.org), and included 19 items on a 5-point scale (1 = poor, 5 = excellent). The student clerkship evaluation form was a standard form used in all clerkships at the school and contained 19 items on a similar 5-point scale.
We used t tests and Fisher exact tests to characterize differences in survey responses and demographic characteristics by attending type. Using backward elimination and manually entered variables, we then fit multivariable mixed effects models to determine the independent association of attending type with trainee evaluations, allowing for clustering of effects within individual attendings and respondents using generalized estimating equations. Variables were selected for entry based on the statistical significance of their association with the outcome, observed confounding with other independent variables, or to maintain face validity. Multivariable models of satisfaction with the rotation included hospital, time of year, and workload (for house staff). Models examining satisfaction with the attending included hospital, time of year, and type of attending (specialist or generalist among traditional attendings). We also conducted secondary analyses limited to attending physicians of similar academic rank. All analyses were conducted using SAS version 8.2 for Windows (SAS Systems, Cary, NC).
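As an illustration of the first, unadjusted step of this analysis (not the study's actual SAS code, and using synthetic ratings rather than the study's data), the two-sample comparison of mean ratings by attending type can be sketched as a Welch t test:

```python
# Illustrative sketch of an unadjusted two-sample comparison of attending
# ratings, as in the study's first analysis step. All data below are
# synthetic and hypothetical; the authors' analyses were performed in SAS.
import math

def welch_t(a, b):
    """Return Welch's t statistic and approximate degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance, group b
    se2 = va / na + vb / nb                        # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Synthetic 9-point overall-satisfaction ratings for the two attending types
hospitalist = [8.5, 8.1, 8.6, 8.4, 8.0, 8.3]
traditional = [8.0, 7.8, 8.2, 7.9, 8.1, 7.9]
t, df = welch_t(hospitalist, traditional)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The adjusted analyses additionally required mixed effects models with generalized estimating equations to account for multiple evaluations clustering within the same attending and the same respondent, which a simple t test does not address.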
The overall evaluation completion rate was 91% (1587 of 1742 evaluations). Trainees completed 917 (91%) of 1004 evaluations of attendings, including 599 (93%) of 647 by house staff and 318 (89%) of 357 by students (P = .24 for the difference in response rate between house staff and students). The response rate for rotation evaluations was 670 (91%) of 738, including 501 (94%) of 534 by house staff and 169 (83%) of 204 by students (P = .06).
Traditional (ie, nonhospitalist) attendings were more likely to hold a higher academic rank and to be subspecialty trained, although most attendings in both groups were male, junior faculty, and general internists (Table 1).
In unadjusted analyses (Table 2), hospitalists were rated more favorably as role models (8.4 vs 8.0; P = .002), for teaching effectiveness (4.8 vs 4.5 on a 5-point scale; P<.001), and for overall effectiveness (8.3 vs 8.0; P<.001). Hospitalists were perceived as showing greater interest in teaching (8.4 vs 8.2; P = .01) and as having more knowledge of inpatient medicine (8.4 vs 7.9; P<.001) and pathophysiology (8.1 vs 7.6; P<.001). Although hospitalists were observed to place greater emphasis on cost-effectiveness, both groups received the lowest mean ratings on this item (7.5 vs 6.7; P<.001). Trainees more strongly agreed that hospitalists discussed patients with them (8.4 vs 8.2; P = .04) and provided feedback (7.9 vs 7.1; P<.001). Trainees rated the 2 groups of attendings equivalently in terms of interest in patients as individuals, encouragement of house staff discussions and opinions, enjoyment in teaching, and availability. These results were robust, even after adjusting for clustering of effects within respondents and physician characteristics in multivariable models (Table 2). Among hospitalists, there were no significant correlations between number of months attended and any of the items on the attending evaluation form.
We then examined whether there were significant differences between ratings of generalist traditional attendings, specialist traditional attendings, and hospitalists (Table 3). In multivariable models, hospitalists and specialist traditional attendings received similar ratings for knowledge and pathophysiology discussions, whereas generalist traditional attendings were rated lower. There were no significant differences in other items such as interest in patients among the 3 groups. There were no significant differences between hospitalists and specialists in either unadjusted or adjusted analyses of any of the evaluation items (data not presented).
In unadjusted analyses, house staff rated the overall educational value of rotations (3.9 vs 3.7; P = .04) and the effectiveness of feedback (3.8 vs 3.6; P = .004) more favorably with hospitalists (Table 4). There were no unadjusted differences in trainees' ability to achieve the curricular goals of the rotation and maintain the appropriate balance of experience and supervision based on attending type. In multivariate models, differences in overall educational experience and effectiveness of feedback remained (Table 4). Attending type did not have an independent effect on student evaluations of rotations, faculty teaching quality, observation of clinical interactions, or feedback. However, student satisfaction with rotations was significantly correlated with their satisfaction with house staff teaching (P = .002).
In our study, trainees supervised by hospitalists rated their attending physicians and their overall experiences on IM rotations more highly than did trainees supervised by traditional attendings. Hospitalists were rated more highly for teaching effectiveness, knowledge of relevant subject matter, discussion of pathophysiology, emphasis on cost-effectiveness, and provision of appropriate and effective feedback.
In contrast to prior studies examining the educational impact of hospitalist programs, our study began more than 4 years after our adoption of the hospitalist model.2,3 By that time, a stable faculty of hospitalists staffed 55% of all ward months, a change that reduced the number of months required of traditional attendings and thus allowed traditional attendings to attend (or not) based on their own interest and inpatient skill rather than as a required duty. This selection effect in the control group would tend to narrow the differences between hospitalist and nonhospitalist evaluations, so it is not surprising that traditional attendings performed similarly on questions related to interest in patient care and teaching. Because the UCSF hospitalist program was well established, traditional attendings may also have adopted many of the teaching and practice standards of hospitalists, further biasing our results toward finding little or no difference and suggesting that the differences we did observe may be even more meaningful for education.
Items for which hospitalists received higher ratings describe a specific skill set that they either brought to their position or developed via frequent ward duty. The hospitalist model is based on the premise that attendings who spend more time in the hospital will specialize through experience, leading to more effective and efficient care.1-3 This experience difference in our cohort was substantial, with hospitalists spending 3.6 times more time in the inpatient setting each year compared with traditional attendings. This finding suggests that any hospitalist advantage reflects specialized expertise rather than a difference in commitment toward or enjoyment of teaching. This "practice makes perfect" phenomenon has been previously observed in clinical outcomes in intensive care, care for human immunodeficiency virus, and interventional cardiologic treatment.9-12 Our findings extend previous evidence that spending more time teaching and in clinical work is associated with being identified as an excellent role model13 by demonstrating the importance of experience across multiple domains of teaching.
To our knowledge, our study was the first to address the impact of the hospitalist model on medical student education. We have previously described the theoretical advantages and disadvantages of this attending model unique to the student position.14 Hospitalist attendings' availability and expert role modeling may enhance students' educational experience. However, the decrease in patient length of stay in a hospitalist system may reduce students' clinical learning opportunities, and the limited contact with nonhospitalist attendings can diminish role modeling of subspecialty and biomedical research career paths. In our study, we found that students' satisfaction with rotations was more highly correlated with their satisfaction with their residents' teaching than with that of their attendings. This finding affirms the dominance of the resident's role on student satisfaction with the attending or rotation, as has been seen in other work.15-17 This finding could also be interpreted as affirming hospitalists' ability to preserve residents' autonomy in clinical decision making and teaching—both key elements of house staff training. In either case, additional research is needed to determine methods by which hospitalists can enhance student education.
Hospitalists in our study were credited with giving more effective feedback compared with traditional attendings, a factor also associated with excellent role models in previous research.13 It is possible that greater inpatient teaching experience facilitates hospitalists' ability to describe trainees' behaviors and knowledge with greater specificity or increases their comfort with the challenging process of delivering meaningful feedback. Feedback and evaluation are major indicators of the quality of a training program,18 and on a national level, feedback and evaluation have been incorporated into programs designed to improve clinical teaching.19
It is important to point out that all physicians in our study were rated highly, supporting a policy of encouraging involvement of motivated nonhospitalist attendings and raising overall expectations to enhance teaching programs. Although evaluations of teaching may be subject to the same "grade inflation" that is commonly seen in evaluating trainees,20 our evaluations nonetheless suggest that trainees were satisfied with the teaching they received from both types of attendings. Traditional attendings provide benefits in terms of diversity of teaching content that may offset some of the advantages of hospitalists. Models that replace traditional ward attendings with hospitalists elevate the importance of creating other venues to facilitate educational interactions between trainees and both outpatient generalists and subspecialists.14,21 Given the decline in trainee interest in physician-scientist and primary care careers,22,23 the need for innovative training programs that promote diverse career paths will persist.24
Our study has several limitations. Our results, which were derived from a large academic medical center and an affiliated community teaching hospital, may not be fully generalizable to other hospital systems. However, our hospitalist service was constructed in a fashion similar to many academic hospitalist models, and the sites of our study were similar to those in previous studies.4 In contrast to prior single-site studies, we collected data from 2 separate attending and house staff training groups, further increasing the generalizability of our results. It is possible that unmeasured differences in physician characteristics, such as age or years since completing training, may have influenced our knowledge-related findings. However, our results were unchanged in robust models that included adjustment for physician factors as well as in secondary analyses limited to physicians of similar academic rank. There is little a priori reason to think that performance of teaching tasks that were clear expectations of all physicians, such as giving feedback, would be dependent on factors such as age. The educational and clinical impact of the numerical differences in ratings we identified based on attending type cannot be determined from our data. Finally, trainee evaluations are only one of many possible mechanisms of faculty evaluation, and our data are not a comprehensive summary of the clinical skill or academic success of our ward attendings.
Strengths of our study include the large number of evaluations and the high response rate over 2 academic years. The inclusion of both house staff and students allowed for comparison of the effects of attending teaching on different levels of learners. The large number of hospitalists at our institution minimized the possibility that our results were due to the particular teaching strength of a small number of attendings. The concurrent comparison group of traditional attendings controlled for changes in the health care system that might have confounded our results had we used historical comparison groups. The range of questions on each evaluation form allowed characterization of the particular strengths of different attending types as perceived by trainees.
This large study of trainee evaluations in a large and well-established hospitalist system demonstrates that house staff and students were more satisfied with hospitalist attendings' teaching and with rotations supervised by hospitalists. Hospitalists consistently received higher marks for their knowledge base, emphasis on cost-effective practice, and feedback. Future research should determine the specific behaviors that distinguish hospitalists' educational and clinical activities and elucidate factors that all clinician-educators can incorporate into their ward teaching.
Correspondence: Karen E. Hauer, MD, Department of Medicine, University of California, San Francisco, 533 Parnassus Ave, U-137, Box 0131, San Francisco, CA 94143-1031 (firstname.lastname@example.org).
Accepted for publication November 28, 2004.
Dr Auerbach is supported by Mentored Research Career Development Training grant K08 HS11416-02 from the Agency for Healthcare Research and Quality, Rockville, Md.
The results of this study were presented in part at the Clerkship Directors in Internal Medicine Annual Meeting; October 18, 2001; Tucson, Ariz; and at the Society of General Internal Medicine Annual Meeting; May 3, 2002; Atlanta, Ga.
We thank Erin Hartman for her expert editorial assistance.