Problems in Clinical Trials

We all assume that Food and Drug Administration (FDA) oversight in drug approval is sufficient to produce a safe and effective product.  In order for a drug to be sold in the United States, it must go through an extensive round of clinical trials that study its effects in humans.  But is that really sufficient?

Unfortunately, it is not.  The breast cancer indication for Avastin was recently revoked because the drug was shown to be ineffective against that disease.  Another problem is that doctors are allowed by law to prescribe medicines off-label.  Promethazine, for example, was an over-the-counter drug approved in adults for rhinitis.  After approval it was commonly used off-label in children, and it was discovered that promethazine caused fatal respiratory depression in children under 2 years of age.  To alleviate these problems, the FDA continuously posts public health advisories that warn the public of adverse effects.  However, this may not be sufficient.

Clinical trials for drug development are huge undertakings, costing the drug industry a great deal of money and time.  Data are collected in clinical trials from a sample of the population.  A clinical trial must be randomized, blinded, and placebo-controlled to be considered for approval.  After the data are collected, statistics are calculated to show that the drug is safe and effective.

How is it that the FDA can approve a drug that is ineffective or has serious side effects?  There are several ways clinical trials can go wrong: inadequate sample size, inappropriate statistical tests, failure to report adjustments for multiple endpoints, misinterpretation of statistics, or improper design of the trial to account for patient dropout.20  Any one of these problems can cause a "Type I" error, where the FDA approves an ineffective drug, or a "Type II" error, where the FDA rejects a drug that should have been approved.
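The effect of inadequate sample size on the Type II error rate can be made concrete with a quick simulation.  The sketch below is illustrative only and is not drawn from any study cited here; the effect size, variability, and significance threshold are assumptions chosen for the example.

```python
# Minimal Monte Carlo sketch: how an inadequate sample size inflates the
# Type II error rate (failing to demonstrate a drug that actually works).
# The effect size, variability, and alpha are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.3   # assumed true benefit of drug over placebo, in SD units
alpha = 0.05        # conventional significance threshold
n_sims = 2000       # simulated trials per sample size

for n_per_arm in (20, 100, 400):
    misses = 0
    for _ in range(n_sims):
        placebo = rng.normal(0.0, 1.0, n_per_arm)
        drug = rng.normal(true_effect, 1.0, n_per_arm)
        _, p = stats.ttest_ind(drug, placebo)
        if p >= alpha:      # the trial fails to show the real effect
            misses += 1
    print(f"n = {n_per_arm:3d} per arm: Type II error rate ~ {misses / n_sims:.2f}")
```

With only 20 patients per arm, the simulated trial misses a real treatment effect most of the time; with 400 per arm, it rarely does.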

However, the most important issue appears to be improper selection of subjects for the clinical trial.  While clinical trials appear to meet FDA guidelines, one study finds that fewer than 50% of clinical trials are properly randomized.2  Randomization and selection-bias concerns have been recognized since the mid-1970s,3,4 and the FDA has tried to eliminate or reduce these problems.  In order to streamline the process of drug development in the U.S., the FDA set up advisory input from several groups, including its Advisory Committees, the American Academy of Pediatrics Committee on Drugs, the Pharmaceutical Manufacturers Association, and outside consultants.3  The final recommendation from these committees was that the FDA should re-examine the guidelines for patient selection and randomization every two years.3  Unfortunately, these recommendations never went into effect.  As a consequence, there continues to be frustration among new-drug sponsors, academicians, and the FDA.  These frustrations have led to a schism among the FDA approval process, the setup of clinical protocols, and the academic studies being initiated and conducted.3  Moreover, the failure to implement the 1970s recommendations has led to longer and costlier drug studies.2,3  The present-day cost of a drug study is approximately $50-100 million, and the 17-year patent term includes the time consumed by clinical trials, roughly 8 to 9 years.  Yet problems still persist.

There are four basic problems that lead to suboptimal randomization and unacceptable levels of selection bias.  These are:

1. Inclusion and Exclusion Criteria.  Patients are recruited according to the inclusion and exclusion criteria set up in the protocol.5,6  Even though these criteria should be helpful, their specificity and sheer number end up hurting studies in the long run.  This is because the aim of a study is to generate data that can be extrapolated to the general population, yet many studies test pharmaceuticals on an ideal patient who is not typically found in the medical setting.  Additionally, investigators may use, subconsciously or otherwise, knowledge of the treatment codes as the basis for deciding when to enroll patients, which can create a Type I statistical error even if the sample size is large.1,5,7  Listing these criteria may in fact create "loophole" problems in patient selection even before any randomization occurs.1  Some cancer researchers have noted that fewer than 50% of eligible patients are enrolled in a clinical trial.8-10

2. Bonuses and Promotions.  Because the FDA mandates that pharmaceuticals must show safety and effectiveness by undergoing three time-consuming clinical trial phases, pharmaceutical companies push to complete the studies as quickly as possible.  Usually the first two phases are conducted in small groups of healthy volunteers and patients with the specific disease, respectively, and these studies are not lengthy.  Therefore randomization and bias issues generated in Phase 1 and Phase 2 studies are negligible.  However, Phase 3 studies are pivotal and consume the most time and resources, involving multiple medical centers, many investigators, and many patients.  Thus, Phase 3 studies are at a higher risk of bias and randomization issues.2

Traditionally, the progression of studies was sequential: the next clinical study would be conducted only when the results of the preceding study were known.  However, probably due to financial pressures, many companies opt to run Phase 2 and Phase 3 studies simultaneously or with overlap.2  This practice does not allow information to flow properly and, in the end, degrades the quality of the study and of the final results.

Bonuses and promotions are also used by sponsors to foster rapid patient enrollment, especially in off-shore studies.3  This practice not only skews patient selection and possibly randomization,11 but it may also cross the ethical boundaries required to conduct a given study.  Additionally, this practice promotes quicker enrollment of the most severely ill patients, who may not represent the typical U.S. patient.

3. Failed Trials.  There are probably far too many "failed trials" occurring.  Unfortunately, the true number of failed trials is not known, because the FDA does not release this information,2,5 so the data cannot be analyzed.  It has been suggested that failed trials may be due to uncritical selection of marginally disease-bearing patients.2,4,5  The problem of failed trials may be compounded by the fact that clinical misdiagnosis occurs at a significant rate (20%12 or more13), and it may be on the rise.14

4. Consents.  The consent forms available to patients are not adequate to help them understand what the study is for and what role it plays in their care.5  Improving either the design of consent forms or the consenting process would improve recruiting and possibly the quality of the studies themselves.5,6,15,16  A majority of the patients enrolled today are white males with only limited education,3,5,7 and these patients may be signing the consent form without properly understanding what is involved and what the risks and benefits are.  Additionally, patients with higher education may be avoiding enrollment because the consent forms are unclear.

Many sponsors and legislators (including Senator Brown of MA) have criticized the drug development process as "being too regulated" or "having not enough clarity."  However, as mentioned previously, without a proper foundation the FDA, industry, and academia will flounder,7 and this lack of clarity may in fact be stifling the new-drug development process.2,3

Changes Are Needed

One of the biggest obstacles is the lack of agreement among representatives from industry, academia, and the agency about what is required to make clinical trials valid.17,18  These representatives need to define and refine clinical trial endpoints, analysis methods, and biomarkers.4,19  Better quantified and defined measures would increase accuracy and consistency and decrease intra-platform variability.  Better quantifying of inclusion and exclusion criteria, bonus use, limiting priority study use, analyzing failed-trial information, enrolling patients with stable symptoms across a wide range of severity, and utilizing parallel multi-center studies with differing modes of implementation1,2 may all help relieve the selection issues.

Lastly, two main options are available to improve the quality of clinical trials: generating computer models of patient disease processes, and developing better tests to sort out selection and randomization issues.  Additionally, patient data must be tested regularly for errors, including Type I and Type II errors.5  If selection bias is suspected during an ongoing trial, then adjustments need to be made.
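One simple, generic way to screen an ongoing trial for possible selection bias is to check whether baseline covariates, recorded before randomization, are more imbalanced between arms than chance would allow.  The sketch below illustrates that idea only; it is not the formal selection-bias test of reference 1, and the covariate values and function name are hypothetical.

```python
# Illustrative sketch (hypothetical data, not the test from reference 1):
# compare a baseline covariate between arms; a striking imbalance should not
# occur under proper randomization and warrants a closer look at enrollment.
import numpy as np
from scipy import stats

def baseline_balance_pvalue(covariate_drug, covariate_placebo):
    """Two-sample t-test on a baseline covariate (e.g., a severity score)."""
    _, p = stats.ttest_ind(covariate_drug, covariate_placebo)
    return p

# hypothetical severity scores collected at screening, before randomization
severity_drug = np.array([4.1, 5.2, 6.0, 5.8, 4.9, 6.3, 5.5])
severity_placebo = np.array([3.2, 3.9, 4.0, 3.5, 4.2, 3.8, 3.6])

p = baseline_balance_pvalue(severity_drug, severity_placebo)
if p < 0.01:
    print(f"Baseline imbalance (p = {p:.3f}): review enrollment for selection bias")
else:
    print(f"No strong evidence of baseline imbalance (p = {p:.3f})")
```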

Adjustments could include the following:

1. Using varying randomization block sizes in the trial to protect the study from outcome prediction (see the sketch after this list).

2. Maintaining a registry of all patients screened, including those excluded from the trial, along with their date and time of screening, screening questions, and required baseline questions.

3. If a treatment code was unmasked for a given patient, then the block needs to be redefined.

4. Having two investigators, one who evaluates the patients and one who recruits the patients for the study.

5. Not reusing a patient number if a patient drops out of a study, and not transferring that patient's treatment assignment to the newly recruited replacement.
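As a concrete illustration of adjustment 1, the following sketch shows permuted-block randomization with randomly varying block sizes.  It is a minimal example under assumed parameters (two arms, even block sizes), not a description of any specific trial's procedure.

```python
# Minimal sketch of permuted-block randomization with varying block sizes.
# Varying the block size keeps investigators from predicting the next
# assignment near the end of a block; the parameters here are illustrative.
import random

def blocked_assignments(n_patients, block_sizes=(2, 4, 6), seed=None):
    """Generate 'A'/'B' treatment assignments in shuffled blocks whose sizes
    are drawn at random, so allocation stays balanced but hard to predict."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        size = rng.choice(block_sizes)              # block sizes must be even
        block = ["A"] * (size // 2) + ["B"] * (size // 2)
        rng.shuffle(block)                          # permute within the block
        assignments.extend(block)
    return assignments[:n_patients]

print(blocked_assignments(12, seed=42))   # e.g. ['B', 'A', 'A', 'B', ...]
```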

Conclusion

Clinical trials are complex and involved undertakings.  They produce valuable results that are extrapolated to the general public, so those results have huge implications.  Clinical trials require proper design with respect to randomization and selection bias, and addressing these issues will help generate clinical results of higher quality and greater validity.  Moreover, a healthier partnership between the key players, namely the sponsor, the academician, and the agency, needs to be fostered.

References

1. Berger VW, Exner DV. Detecting selection bias in randomized clinical trials. Controlled Clinical Trials. 1999;20(4):319-327.

2. Robinson DS, Rickels K. Concerns about clinical drug trials. Journal of Clinical Psychopharmacology. 2000;20(6):593.

3. Gilbert D, Beam T, Kunin C. The implications for Europe of revised FDA guidelines for clinical trials with anti-infective agents. European Journal of Clinical Microbiology & Infectious Diseases. 1990;9(7):552-558.

4. McKee AE, Farrell AT, Pazdur R, Woodcock J. The role of the US Food and Drug Administration review process: clinical trial endpoints in oncology. The oncologist. 2010;15(Supplement 1):13.

5. Antman K, Amato D, Wood W, et al. Selection bias in clinical trials. Journal of Clinical Oncology. 1985;3(8):1142.

6. Palter SF. Ethics of clinical trials. Semin Reprod Med. 1996;14(2):85-92.

7. Ellis P. Attitudes towards and participation in randomised clinical trials in oncology: a review of the literature. Annals of Oncology. 2000;11(8):939.

8. Lee JY, Breaux SR. Accrual of radiotherapy patients to clinical trials. Cancer. 1983;52(6):1014-1016.

9. Hunter CP, Frelick R, Feldman A, et al. Selection factors in clinical trials: results from the Community Clinical Oncology Program Physician's Patient Log. Cancer treatment reports. 1987;71(6):559.

10. Martin JF, Henderson WG, Zacharski LR, et al. Accrual of patients into a multihospital cancer clinical trial and its implications on planning future studies. American journal of clinical oncology. 1984;7(2):173.

11. Lustgarten A, Cherry B. Drug testing goes offshore. Fortune. 2005;152(3):66.

12. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time. JAMA: the journal of the American Medical Association. 2003;289(21):2849.

13. Kern KA. Medical malpractice involving colon and rectal disease: a 20-year review of United States civil court litigation. Diseases of the colon & rectum. 1993;36(6):531-539.

14. Kirch W, Shapiro F, Fölsch UR. Health care quality: Misdiagnosis at a university hospital in five medical eras. Journal of Public Health. 2004;12(3):154-161.

15. Miller R, Willner HS. The two-part consent form. New England Journal of Medicine. 1974;290(17):964-966.

16. Stiles PG, Epstein MK, Poythress NG, Edens JF. Formal Assessment of Voluntariness With a Three-Part Consent Process. Psychiatric Services. 2011;62(1):87.

17. Peto R, Pike M, Armitage P, et al. Design and analysis of randomized clinical trials requiring prolonged observation of each patient. II. Analysis and examples. British journal of cancer. 1977;35(1):1.

18. Ellenberg JH, Armitage P, Chalmers TC, et al. Biostatistical collaboration in medical research. Biometrics. 1990:1-32.

19. Li J, Kelm KB, Tezak Z. Regulatory perspective on translating proteomic biomarkers to clinical diagnostics. Journal of Proteomics. 2011.

20. Jaykaran, Yadav P, Chavda N, Kantharia ND. Some issues related to the reporting of statistics in clinical trials published in Indian medical journals: a survey. International Journal of Pharmacology. 2010;6(4):354-359.

 
