Critics have argued against EBP on the basis of many common misperceptions of EBP as well as some genuine failings associated with it. These primarily include: the argument that many doctors were already doing these things; that good evidence is often deficient in many areas; that lack of evidence and lack of benefit are not the same; that the more data are pooled and aggregated, the greater the difficulty in comparing the patients in the studies with the patients presenting; that EBP is a covert method of rationing resources, is overly simplistic, and often restrains professionals; and that many clinicians lack the time and resources to practise EBP and require new skills to utilize it (Guyatt, Cairns, Churchill 1992, p. 268; Trinder 2000, p. 2; Straus 2000, p. 837-9). Furthermore, those who agree that ‘EBP makes good sense in theory, have quite appropriately demanded evidence for whether it improves patient outcomes’ (Miles, Bentley, Polychronis and Grey 1997, p. 83-5). However, the ethical and moral implications of such a randomized controlled trial, which would involve withholding evidence in the clinical treatment of patients, may never be appropriately justifiable.
In developing EBP, some have argued that the new paradigm is sometimes misinterpreted. For example, many have argued that EBP’s recognition of the limitations of intuition, experience, and understanding of pathophysiology in permitting strong inferences amounts to a rejection of these routes to knowledge altogether.
A common misperception among critics, and a frequent argument against EBP, is that it ignores the clinical experience and clinical intuition of the practitioner or clinician. On the contrary, it is important to expose learners to exceptional clinicians who have a gift for intuitive diagnosis, a talent for precise observation, and excellent judgement in making difficult management decisions. Untested signs and symptoms should not be rejected out of hand. They may prove extremely useful, and ultimately be proved valid through rigorous testing. The more experienced clinicians can dissect the process they use in diagnosis, and clearly present it to learners, the greater the benefit. Similarly, the gain for students will be greatest when clues to optimal diagnosis and treatment are culled from the barrage of clinical information in a systematic and reproducible fashion (Craig, Irwig and Stockler 2001, p. 1-3).
Institutional experience can also provide important insights. Diagnostic tests may differ in their accuracy depending on the skill of the practitioner. A local expert in, for instance, diagnostic ultrasound may produce far better results than the average from the published literature. The effectiveness and complications associated with therapeutic interventions, particularly surgical procedures, may also differ across institutions. When optimal care is taken to both record observations reproducibly and avoid bias, clinical and institutional experience evolves into the systematic search for knowledge that forms the core of evidence-based medicine (Straus and McAlister 2000, p.839).
Another argument is that the understanding of basic investigation and pathophysiology plays no part in evidence-based medicine. In fact, the dearth of adequate evidence demands that clinical problem-solving must often rely on an understanding of underlying pathophysiology. Moreover, a good understanding of pathophysiology is necessary for interpreting clinical observations and for appropriate interpretation of evidence. However, numerous studies have ‘demonstrated the potential fallibility of extrapolating directly from the bench to the bedside without the intervening step of proving the assumptions to be valid in human subjects’ (Echt, Leibson, Mitchell, Peters, Obias 1991, p. 781).
Some critics have argued that EBP ignores standard aspects of clinical training such as the physical examination. In fact, a careful history and physical examination provide much, and often the best, evidence for diagnosis and direct treatment decisions. The clinical teacher of EBP must give considerable attention to teaching the methods of history and clinical examination, with particular attention to which items have demonstrated validity and to strategies that enhance observer agreement (Echt et al 1991, p. 781-2).
Large randomized controlled trials are extraordinarily useful for examining discrete interventions for carefully defined medical conditions. The more complex the patient population, the conditions, and the intervention, the more difficult it is to separate the treatment effect from random variation. Because of this, a number of studies obtain non-significant results, either because there is insufficient statistical power to show a difference, or because the groups are not well enough ‘controlled’ (Straus et al 2000, p.839).
Furthermore, the critic may argue that EBP has been most practised when the intervention tested is a drug. Applying the methods to other forms of treatment may be harder, particularly those requiring the active participation of the patient, because blinding is more difficult (Stephenson and Imrie 1998, p.1). The types of trials considered the ‘gold standard’ (i.e. randomized double-blind placebo-controlled trials) are very expensive, and thus funding sources play a role in what gets investigated. For example, the government funds a large number of preventive medicine studies that endeavor to improve public health as a whole, while pharmaceutical companies fund studies intended to demonstrate the efficacy and safety of particular drugs, so long as the outcomes are in their favour (Coats 2004, p.2-3). Furthermore, ‘determining feasibility and relevance to the real world is often difficult’ (Stephenson and Imrie 1998, p.2).
One of the fears of EBP is that purchasers and managers will control it in order to cut the costs of health care. This would not only be a misuse of EBP but suggests a fundamental misunderstanding of its financial consequences. Doctors practising EBP will identify and apply the most efficacious interventions to maximise the quality and quantity of life for individual patients; this may raise rather than lower the cost of their care (Straus et al 2000, p.839).
Many of the studies published in medical journals may not be representative of all the studies completed on a given topic, published and unpublished (i.e. publication bias), or may be misleading due to conflicts of interest; therefore the array of evidence available on particular therapies may not be well represented in the literature.