Author Affiliations: School of Medicine (Ms Chen), Divisions of Medicine (Dr Dhruva) and Cardiology (Dr Redberg), and School of Pharmacy (Dr Bero), University of California, San Francisco.
Training is essential for physicians who are inserting new devices or performing new procedures. Better outcomes for cardiovascular devices are positively associated with physician practice time and volume.1,2 To allow for training, device studies may designate an initial stage in which physicians learn how to use an investigational device. This phase is intended so that “changes in physician proficiency with use of a new technology can be evaluated vis-à-vis their impact on clinical performance.”3(p70) Training patients (also called roll-in, learning, or run-in patients) are the first individuals in whom a physician uses an investigational device. Decisions on whether to use training patients, how many to permit, and whether to include their results in subsequent outcome analyses are highly variable and are currently made by clinical judgment in the absence of written guidelines.
Accurate and complete safety and efficacy reporting is especially critical because the use and complexity of cardiovascular devices have increased dramatically over the past decade.3 It is already known that complication rates are higher in actual use than in clinical trials.4 The exclusion of any patient outcomes, including those of training patients from premarket approvals (PMAs), would further magnify any differences between clinical study and real-world outcomes and may result in higher safety and efficacy rates than can be expected for actual use. With increasing emphasis on real-world data, the exclusion of patient outcomes may also make studies less useful for actual clinical practice.5 While devices have many important benefits, their use has also been associated with numerous complications, including death.6 Furthermore, the US Food and Drug Administration (FDA) approval preempts consumer lawsuits for patients injured by devices.7 Current legislation in Congress seeks to redress this situation.8
The FDA evaluation of highest-risk devices, such as pacemakers and stents, occurs through a PMA process based on design information and clinical data.9-11 The number of training patients, their demographics, the characteristics of investigational devices more likely to involve studies with training phases, and how these data are used in PMAs are all unknown. To learn more about the use of training patients, we examined the FDA summaries of all studies conducted in support of FDA-approved cardiovascular devices from 2000 to 2007.
We abstracted data from the Summaries of Safety and Effectiveness Data (summaries) posted on the FDA Web site. The FDA states that each data summary “is intended to present a reasoned, objective, and balanced critique of the scientific evidence which served as the basis of the decision to approve or deny the premarket application.”12 We cataloged 78 summaries, representing all PMAs for cardiovascular devices received by the FDA from January 1, 2000, through December 31, 2007, and approved at the time of a search performed on October 15, 2008. The detailed method of selection and characteristics of PMAs included in this study have been previously described.13
All mentions of training patients (herein, the term training will be used to refer to all patients described as roll-in, learning, run-in, or investigational) were identified. We abstracted data on device type, PMA submission year, number of training patients, their demographics, and if data from training patients were included in primary end-point analyses. Data were extracted by one of us (C.E.C.) and verified by a second (S.S.D.). The coding for each study was as follows:
Training/roll-in/lead-in/run-in patients involved: Coded as 1 if the PMA explicitly mentioned involvement of training patients and as 0 if they were not noted in the PMA. Training patients were also described as roll-in, lead-in, or run-in patients. If stated, the number of training patients was recorded.
Training patients excluded from analysis: Studies involving training patients were coded as 1 if the training data were excluded from analysis of the primary end point or 0 if training patients were subsequently folded into safety or efficacy analysis along with designated treatment patients. The total number of primary end points as well as the number of studies from which training patient results were excluded were both documented.
Demographic data: Coded as 1 (stated) or 0 (not stated) if any information was provided about age, sex, race, or medical history of training patients.
Statistical check of differences between training and randomized patients: Coded as 1 (stated) or 0 (not stated) based on either a description of a statistical test performed or a statement denying statistically significant differences in safety and efficacy outcomes between training and nontraining treatment patients. Provision of data describing outcomes experienced by training patients was also recorded.
Prespecified: Coded as 1 if the number of training patients was prespecified as a per-physician, per-site, or global limit on training patient enrollment. Coded as 0 if guidelines governing training patients were not mentioned or if they were indefinitely permitted as needed for physician training or the resolution of safety concerns.
Device type and study strength: Codes for blinding, number and percentage of US sites, and device type were used as defined previously.13 Randomization was recoded to reflect only the organization of the main study (and not the training phase, which was defined as nonrandomized).
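As a sketch, the coding scheme above can be represented as one record per study. The field names and values below are invented for illustration only; the actual abstraction was performed manually by the authors.

```python
# Hypothetical record showing how one study might be coded under the
# scheme above (values invented for illustration).
example_study = {
    "training_patients_involved": 1,      # PMA explicitly mentions training patients
    "n_training_patients": 54,            # recorded when stated
    "training_excluded_from_analysis": 1, # excluded from the primary end point
    "demographics_stated": 0,             # no age/sex/race/history provided
    "statistical_check_stated": 0,        # no training vs nontraining comparison
    "prespecified": 1,                    # e.g., a per-site enrollment limit
}

# All fields except the raw count follow the 1 = stated, 0 = not stated convention.
binary_fields = [v for k, v in example_study.items() if k != "n_training_patients"]
print(all(v in (0, 1) for v in binary_fields))
```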
Data on training patients were also abstracted from meeting transcripts of the Circulatory System Devices Panel of the FDA Center for Devices and Radiological Health (CDRH).14 Of the 17 devices whose summaries referenced use of training patients, 3 were reviewed in a Circulatory System Devices Panel meeting. We also searched the Web site clinicaltrials.gov for information on training patients in the 20 studies referencing their involvement.
We examined the use and reporting of training patients per summary, per study, and per primary end point. The PMAs with and without training patients were compared by submission year, device type, and study characteristics such as randomization and blinding. P values were calculated using the Fisher exact test.
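As an illustration of the group comparisons described above, a Fisher exact test on a 2 × 2 contingency table can be run as follows. The counts are hypothetical, chosen only to match the overall totals (17 of 78 summaries with a training phase), not the study's actual device-type breakdown.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows are device categories, columns are
# (summaries with a training phase, summaries without one).
# Counts are illustrative only.
ep_devices = [3, 14]
non_ep_devices = [14, 47]

odds_ratio, p_value = fisher_exact([ep_devices, non_ep_devices])
print(f"OR = {odds_ratio:.2f}, P = {p_value:.3f}")
```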
Multivariate logistic models controlling for device category were used to examine if training patients have become more common over time and if their use is associated with randomization, site characteristics, or post hoc end-point analysis. Study characteristics were correlated with the relative magnitude of training patient participation, defined as the percentage of patients receiving the experimental device classified as training patients, using a binomial generalized linear model (GLM). A GLM was also used to estimate the association between absolute training patient enrollment and study characteristics. Regressions were clustered by PMA to account for nonindependence among study characteristics for the same device.
The FDA sent 1 of us (R.F.R.) 10 (the 5 oldest and 5 most recent) confidential PMA documents for comparison with the summaries. The 10 summaries were checked against the original, confidential company PMAs to identify any discrepancies between the data presented in each.
Seventeen of the 78 cardiovascular device summaries (22%) involved training patients. There were 123 studies in these summaries, and 20 (16%) enrolled training patients (Table 1). In these 20 studies, training patients constituted a mean of 23% (interquartile range, 14%-28%) of all patients receiving the investigational device. Overall, 859 patients were identified as training patients (Table 1). However, the actual count is higher because 2 studies mentioned the use of training patients without stating how many.
There was variation in the proportion of training patients in the studies. One study enrolled 54 training patients compared with 60 in the subsequent treatment arm. In contrast, only 12 training patients were included in another study involving 251 treatment patients. There was no overall relationship between the proportion of training patients and device type.
Training patients were excluded from some end-point analyses (either safety, efficacy, or both) in all studies (Table 1). Five studies (25%) qualitatively reported no differences in outcomes between training and nontraining patients in the text of the summary. Four studies (20%) provided data on outcomes for the training patients. However, when data on training patients were provided, they were not necessarily accompanied by statistical testing. Only 1 study provided both data and a textual description of statistical analyses on outcomes for training patients. Two studies (10%) provided information on age, sex, and comorbidities. Most studies (90%) did not provide any demographic information on training patients.
Seven of 20 studies (35%) prespecified a target enrollment for training patients (Table 1). In 2 instances, the number of allowed training patients was specified only as that needed for “adequate physician training.” As described in one summary15(p11):
Each center enrolled a series of “device run-in” subjects to provide training and ensure operator familiarity with the device. . . . After the Medical Monitor determined there were no safety concerns and the Sponsor determined the site had sufficient experience with the device, the site was authorized to enroll subjects into the randomization phase.
One study with training patients did not identify a primary end point and so could not be included in the end-point analyses. Overall, training patients were excluded from 40 of 43 primary end-point analyses (93%). One study included training patients in its 3 safety analyses. No study included training patients in efficacy analyses (Table 1).
There was no temporal trend in the percentage of PMAs involving training patients from 2000 to 2007 in univariate analysis and no overall relationship between device type and use of training patients (Table 2). Summaries for electrophysiologic devices referenced training patients less frequently than those for nonelectrophysiologic devices (Fisher exact test; P = .03). Univariate analysis of study-level quality characteristics revealed no association between training usage and randomization, blinding, or post hoc end-point analysis (Table 2). None of the studies involving training patients were blinded, compared with 20% blinding (13% double-blinded, 7% single-blinded) in those studies not involving training patients (n = 103).
In multivariate logistic analysis controlling for device type, PMA submission year was not associated with the presence of a training phase (odds ratio [OR], 0.83; P = .13). In multivariate linear analysis, the association between submission year and the proportion of treatment patients who were training patients was similarly not significant (β = −0.23; P = .10). In study-level multivariate analysis, randomization was not significantly associated with either usage of training patients (OR, 1.78; P = .31) or the proportion of treatment patients who were training patients (β = 0.94; P = .16). Higher numbers of study sites were associated neither with an increased likelihood of training usage (OR, 1.01; P = .55) nor with more training patients (β = 0.14; P = .08).
Adverse events were reported for some training patients. One summary reported 9 deaths in 91 training patients (9.9%) compared with 5 deaths in 135 treatment patients (3.7%).16 Another summary noted that 1-year cumulative adverse event rates were 38% among training patients (20 of 52 patients) compared with 25% (49 of 200) among the main study group.17 In another summary, the mean stent delivery time for training patients was 1.17 hours compared with 0.98 hours for treatment patients; primary delivery success was also lower among training patients (89.8% vs 94.4%).18
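The adverse event rates quoted above follow directly from the reported counts; a quick arithmetic check:

```python
# Reported counts from the three summaries cited above.
rates = {
    "training deaths": 9 / 91,         # reported as 9.9%
    "treatment deaths": 5 / 135,       # reported as 3.7%
    "training 1-year AE": 20 / 52,     # reported as 38%
    "main group 1-year AE": 49 / 200,  # reported as 25%
}
for label, rate in rates.items():
    print(f"{label}: {rate:.1%}")
```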
Review of the proprietary PMAs did not change our findings with regard to training patients. There were no mentions of training patients in the PMAs that were not already in the summaries that we reviewed. In the one proprietary PMA that included training patients there were neither additional data on their demographics nor inclusion of additional outcome information.
The CDRH transcripts of panel meetings contained no additional information on background characteristics or on outcomes experienced by training patients. Eleven studies involving training patients were listed on the clinicaltrials.gov Web site. No information on training patients was published on clinicaltrials.gov.
We found that almost one-quarter of premarket cardiovascular device applications submitted to the FDA from 2000 to 2007 included a training phase to allow physicians to improve their proficiency. Outcomes from training patients were rarely included in data used by the FDA for the evaluation of safety and efficacy. Furthermore, the exclusion of training patients meant that outcomes from 5% to 57% of patients receiving the investigational device were not included in end-point analyses. Basic information, such as age, sex, or how patients fared following device implantation, was frequently not included for the training patients. After FDA approval, devices are used and disseminated mainly by many new operators, and their first few patients are all essentially training patients. Thus, it is essential for patients and physicians to have access to data on a device's safety and effectiveness in the first patients treated, so that they can form more accurate expectations about device performance.
Criteria for training patient enrollment were provided in only a third of studies. We expected that training patients would be more extensively used for more technically challenging devices and in multiple site studies. However, there were no such relationships, suggesting that training patients are being used on a nonsystematic, ad hoc basis. The publication of guidelines on enrollment of training patients, as well as a requirement to include their data in the summary, would promote consistency and transparency, consistent with the FDA's Transparency Initiative.19
There is a learning curve in medical procedures, and the importance of training is well accepted. It is likely that at least some, if not all, of the operators performing the training procedures were proctored by more experienced operators. Greater operator experience with cardiovascular devices has a positive impact on patient outcomes.1,2 In fact, the first patients are designated as training patients precisely because the sponsor (with FDA concurrence) expects less favorable outcomes in the first patients to receive the device for each operator. As described in one summary, the inclusion of a training phase is intended to “give each implanter experience with the new lead so that subsequent implants will more appropriately represent the true performance of the lead.”20(p5) However, after FDA approval, devices become widely available and are used by many new physicians. Thus, it is important to make available the data on the actual safety and efficacy outcomes of training patients. We believe that including data from training patients would more accurately reflect the true performance of the device.
The importance of the learning curve is seen in multiple studies. For example, patients treated during a physician's first 80 carotid stent interventions had significantly longer mean procedure durations and higher 30-day mortality,21 patients undergoing initial abdominal aortic aneurysm endovascular graft repairs experienced higher rates of femoral reconstruction and longer surgery times,22 and the likelihood of procedural success in coronary angioplasty increased with operator experience.23
Three-quarters of studies in summaries involving training patients did not check for outcome differences with nontraining patients. However, in a study24 of 91 training patients, there were 2 failed stent deliveries, 3 stent misplacements, 1 stent migration, and 1 delivery system failure. Comparable figures were not reported for nontraining patients, nor was any comment made on their statistical significance in the summary. Because it is known that training patients experience adverse events, it is important that data detailing outcomes experienced by training patients be described in the summary, along with appropriate statistical tests establishing their significance in comparison to nontraining treatment patients.
The need to more clearly document safety and efficacy outcomes experienced by training patients has been recognized previously. In a 2005 statement, the Society for Cardiovascular Angiography and Interventions recommended that25(p166)
During and after training, carotid interventional procedures should be performed with documentation of outcomes, including immediate results, complications, and follow-up. In view of the steep learning curve . . . We support the creation of a mandatory national multispecialty registry database for reporting of outcomes and assessment of ongoing institutional and individual operator competence.
The National Cardiovascular Data Registry now hosts this database as the CARE Registry.26 However, data on training patients are not publicly available from this registry.
Training patients are an integral part of actual use of any new device, and thus their data should be reported and publicly available to patients and physicians. Training patients contribute to the skill development of physicians, but their data should not be excluded from FDA applications or public knowledge. Excluding results from training patients “breach[es] implied contracts with the patients who participate in these studies (assum[ing] that they are contributing to a growth in knowledge).”27(p978)
Our work raises additional questions. Are training patients informed that they are training cases and that their outcomes will not be included in the main study results? Are training patients safeguarded by the same safety protections as subsequent study participants? If they are potentially subject to lower efficacy and higher complication rates, what legal rights are accorded to training patients under preemption? Is the exclusion of training patients from subsequent data analysis prespecified in all cases, or are there cases in which exclusion is determined post hoc?
Because original PMAs are not publicly available to protect proprietary information, this study relied mainly on clinical data as cited in the subsequent summary. Because this study relied on approved PMAs, it is possible that some applications not approved by the FDA because of issues related to training data may have been excluded. It is also possible that many more studies involved a training phase that was either not reported to the FDA in the original PMA or not referenced by the FDA in the accompanying summary. However, the confidential PMA application involving training patients that we examined did not contain information on demographics and outcomes for those patients beyond what was reported in the corresponding summary. This was the case despite the FDA requirement that all PMA applications summarize the “subject selection and exclusion criteria, study population, study period, safety and effectiveness data, adverse reactions and complications, patient discontinuation, patient complaints, device failures and replacements”3 for all supporting clinical studies.
To be as complete as possible, we also checked publicly available data from Circulatory System Devices Panel meetings and clinicaltrials.gov listings for cardiovascular device studies that used training patients. These additional sources did not contain further information on training patients, which suggests substantial underreporting of these important data. Access to complete FDA reviews for medical devices would facilitate a more thorough understanding of clinical outcomes.28
Training patients are common in PMA cardiovascular device studies on which FDA approval is based. The inclusion of a training phase in which participant and outcomes data are not included in data evaluated by the FDA means that device approval is based on data not likely to be replicated in actual use. A more robust analytic strategy would be to include training patients in the main study results and also to report results both with and without data from training patients. This method would encourage greater transparency and better understanding of rates of physician training for new devices.
Training patient usage is not a temporary phenomenon: both absolute and relative training patient participation have remained stable this decade. To protect the rights of training patients, to prevent bias in safety and efficacy outcomes, and to better understand the effect of operator learning on device performance, we call for increased transparency of data from training patients.
Correspondence: Rita F. Redberg, MD, MSc, School of Medicine, Division of Cardiology, University of California, San Francisco, 505 Parnassus Ave, Ste M-1180, San Francisco, CA 94143-0124 (firstname.lastname@example.org).
Accepted for Publication: September 27, 2010.
Published Online: November 22, 2010. doi:10.1001/archinternmed.2010.445
Author Contributions: All authors had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Chen, Dhruva, Bero, and Redberg. Acquisition of data: Chen, Dhruva, and Redberg. Analysis and interpretation of data: Chen, Dhruva, Bero, and Redberg. Drafting of the manuscript: Chen, Dhruva, and Redberg. Critical revision of the manuscript for important intellectual content: Chen, Dhruva, Bero, and Redberg. Statistical analysis: Chen. Study supervision: Redberg.
Financial Disclosure: Dr Redberg is a member of the US FDA Circulatory System Devices Panel and a member of the California Technology Assessment Forum.
Funding/Support: Ms Chen was supported by the Paul and Daisy Soros Fellowship for New Americans and a University of California at San Francisco Dean's Summer Research Fellowship.
Role of the Sponsors: The funders had no role in the study design, data analysis, and manuscript preparation.
Disclaimer: Dr Redberg is editor of the Archives and was not involved in the editorial evaluation or editorial decision to accept this work for publication.
Additional Contributions: Charles McCulloch, PhD, provided statistical assistance. Ms Chen and Dr Dhruva thank the University of California, San Francisco, Pathway to Discovery in Health and Society for support.