Some of the many strengths of EBP include: finding better procedures, stopping harmful procedures, learning from other people’s mistakes, providing a basis for clinical judgment, legal protection, best utilisation of resources and, ultimately, best clinical practice (Straus et al 2000, pp. 837-40; Trinder 2000, p. 2).
By utilising the evidence to provide the best practice possible, a clinician or practitioner can reduce, if not remove, the possible harms of treatment. According to Trinder, ‘EBP remains firmly committed to the modernist promise that risk can be assessed and controlled by expert knowledge and that potential harm of interventions can be minimised and the potential benefits maximised’ (2000, pp. 7-8).
EBP is an approach that ‘promotes the collection, interpretation, and integration of client-reported, clinician-observed, and research-derived evidence’ (McKibbon, Wilczynski, Hayward, Walker-Dilks, & Haynes 1995, p. 4). It does not dictate your clinical decisions, but may be utilised to ensure that, through the ‘conscientious, explicit and judicious use of current best evidence in making decisions about the care of your client’, you have developed the most effective and efficient treatment decision for your patient (Sackett et al 1997, p. 2).
EBP may help identify procedures that are not cost-effective and may be dropped. This is not to say that it will drop procedures and change the management plan of a patient based on economic constraints, but that it will look for more effective methods of providing treatment (Straus et al 2000, p. 839). Likewise, EBP may help identify new procedures and justify their cost. According to Trinder, ‘supporters and advocates of EBP claim that the approach results in the best practice and the best use of resources’ (2000, p. 2).
EBP may become the common language through which different healthcare disciplines communicate, such as the medical, physiotherapy, nursing and paramedical disciplines (Sackett et al 1997, p. 17). Furthermore, EBP principles do not change from undergraduate to postgraduate education and hence are of great advantage to the life-long process of study associated with any professional clinician or practitioner (Echt et al 1991, pp. 781-2).
Weaknesses of evidence-based practice
The weaknesses of EBP include: the limitations of research samples; the continuing need for clinical judgment (EBP is only a guideline); the rapid growth of information beyond what any one person can master, which makes protocols useful; the new skills it demands of clinicians; the possibility that it may raise, and not necessarily lower, the cost of health care; its inability to replace experience; and the paucity of proof that EBP actually works (Straus and McAlister 2000, p. 838; Trinder 2000, p. 2).
EBP requires new skills of the clinician, including efficient literature-searching and the application of formal rules of evidence in evaluating the clinical literature, such as the five steps in the process of EBP as described by Sackett et al (1997, p. 3).
EBP is also not ivory-tower or armchair medicine but a way of staying on top of a busy professional life. It is not an alternative to experience.
EBP is not cook-book medicine imposed from above and slavishly followed but an active process which integrates the doctor’s own expertise, the external evidence and the patient’s preferences. Clinical guidelines are similarly subject to this flexible approach. External clinical evidence can inform but never replace individual clinical expertise, and it is this expertise that decides even whether the external evidence is relevant to the patient at all (Horwitz 1996, p. 320).
EBP is not necessarily a cost-cutting exercise but a method of looking for the most effective ways to improve the quality and quantity of patients’ lives. This may in fact raise, not lower, the cost of care (Straus et al 2000, p.839).
Because EBP requires a bottom-up approach that integrates the best external evidence with individual clinical expertise and patient choice, it cannot result in slavish, cook-book approaches to individual patient care. External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision (Coats 2004, pp. 4-5). Similarly, any external guideline must be integrated with individual clinical expertise in deciding whether and how it matches the patient’s clinical state, predicament, and preferences, and thus whether it should be applied.
EBP involves tracking down the best external evidence with which to answer our clinical questions. To find out about the accuracy of a diagnostic test, a clinician ‘needs to find proper cross-sectional studies of patients clinically suspected of harbouring the relevant disorder, not a randomised trial’ (Oxman, Sackett, and Guyatt 1993, p. 2). For a question about prognosis, a clinician needs proper follow-up studies of patients assembled at a uniform, early point in the clinical course of their disease. And sometimes the evidence required will come from the basic sciences such as genetics, immunology and basic pathophysiology. It is when asking questions about therapy that one should try to ‘avoid the non-experimental approaches, since these routinely lead to false-positive conclusions about efficacy’ (Coats 2004, pp. 2-3).
Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the ‘gold standard’ for judging whether a treatment does more good than harm. However, according to Straus et al, even though ‘randomised clinical trials are considered to be the “gold standard” for establishing the effects of an intervention, they are not necessarily the best sources for answering questions about diagnosis, prognosis or harm’ (2000, p. 839). Furthermore, some questions about therapy that would ordinarily require randomised trials may not require them, either because withholding a successful intervention for an otherwise fatal condition would be unconscionable, or because the patient cannot wait for the trials to be conducted. And if no randomised trial has been carried out for our patient’s predicament, a clinician should follow the trail to the next best external evidence and work from there.
Despite its ancient origins, evidence-based medicine remains a relatively young discipline whose positive impacts are just beginning to be validated, and it will continue to evolve (Oxman et al 1993, p. 2). This evolution will be enhanced as various undergraduate, postgraduate, and continuing medical education programmes adopt and adapt it to their learners’ needs. These programmes, and their evaluation, will provide further information and understanding about what EBP is, and what it is not. As yet, however, there is ‘no good evidence to suggest that EBP actually works’ (Trinder 2000, p. 4).
Critical appraisal of clinical practice involves additional time and effort, and may be perceived as wasteful; however, this time and effort may be reduced by clinicians developing effective searching skills and simple guidelines for assessing the validity of research papers. In addition, one can emphasise that critical appraisal, as a strategy for solving clinical problems, is most appropriate when the problems are common in one’s own practice (Oxman et al 1993, p. 3).