Much has been written and presented in recent years concerning the topic of "outcomes." Currently, many audiologists understand the value of measuring the effects of what they do using one or more of the many objective or subjective outcome measures available.
One of the advantages of measuring outcomes is the ability to identify and adopt effective clinical processes and reject ineffective ones. While on the surface this may seem like a good idea that is easy to put into practice, it is more difficult than it appears. For the individual clinician, making systematic clinical decisions on the basis of individual clinical experiences is plagued with potential pitfalls.
The fact that we achieve good (or poor) results with a particular hearing aid manufacturer or technology for one or two patients does not necessarily mean that our outcomes will be equally successful (or unsuccessful) with other patients. We know from a generation of hearing aid research that there are many variables (demographic, audiologic, cognitive, emotional, lifestyle, etc.) that influence hearing aid success. In fact, there is not even universal agreement on what exactly constitutes hearing aid success. Is it improved audibility, speech recognition, satisfaction, quality of life, profit, returns for credit, or some other metric? Is success best determined using "objective" measures such as speech intelligibility or "subjective" measures such as a patient-centered questionnaire? Because so many variables are involved, carefully designed and conducted clinical trials with large numbers of subjects are often required to "partial out" the influence of one variable from the others. The busy clinician, who manages patients presenting with a multitude of clinical problems and needs, and with a variety of instruments possessing different technologies, is not in a particularly good position to determine what works best for each patient. It is also the nature of the hearing aid industry that technology changes so rapidly that we never truly gain enough experience with one technology before we move on to the next.
In view of the challenges facing the clinician, and assuming we all want to provide the most effective treatment each and every time, I ask:
How does one determine the most effective course of treatment?
Medicine has struggled with this issue for a number of years and has been increasingly relying on what is known as Evidence-Based Medicine (EBM) or Evidence-Based Practice (EBP) to assist the individual clinician with making informed decisions given a particular clinical problem. EBP is described as "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research" (Sackett, Rosenberg, Gray, Haynes & Richardson, 1996, p.71).
The reader should pay particular attention to "individual clinical expertise." Detractors have criticized the EBP approach as "cookbook medicine," but it is clear that clinical experience plays an important role. Also note the word "expertise." One does not become a clinical expert as a result of a degree or certificate, but rather through extensive experience that allows a clinician to make reasonably informed decisions and judgments. As Sackett et al. (1996) explain, "External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision" (p. 72).
There are many sources available to assist the clinician in establishing an evidence-based approach to clinical practice. These include books, non-peer-reviewed and peer-reviewed journals, electronic bibliographic databases such as MEDLINE and PubMed, and several websites specializing in EBP. While each of these can contribute to the clinician's knowledge base, there is a difference in the quality of the evidence among these sources. These differences, as they apply to treatment research, have been codified in a classification system developed by the Agency for Healthcare Research and Quality (AHRQ) as follows:
Level 1: Large randomized trials with clear-cut results (low risk of error)
Level 2: Small, randomized trials with uncertain results (moderate to high risk of error)
Level 3: Nonrandomized, contemporaneous controls
Level 4: Nonrandomized, historical controls and expert opinion
Level 5: Uncontrolled studies, case series, and expert opinion
Why is it important to distinguish between levels of evidence? Because the level of evidence influences the strength of recommendations for the performance of a particular procedure. The AHRQ classifies the strength of recommendations as follows:
Level I: Usually indicated, always acceptable, and considered useful and effective
Level IIa: Acceptable, of uncertain efficacy, and may be controversial; weight of evidence is in favor of usefulness/efficacy
Level IIb: Acceptable, of uncertain efficacy, and may be controversial; may be helpful, not likely harmful
Level III: Not acceptable, of uncertain efficacy, and may be harmful
So, given what we know about evidence grading, do audiologists have any EBP documents to support specific clinical protocols, and if so, what is the strength of that evidence? Actually, we have two: ASHA's task force report on adult hearing aid fitting, PPP 19.0 (ASHA, 1997), and the Joint Committee's statement on adult hearing aid fitting (AAA, 2000). These two documents share important characteristics: each was created by a task force of experts whose recommendations are based primarily on personal experience and other consensus panel-based guidelines. In addition, the references upon which these guidelines rest do not include any large-scale clinical trials.
Using the AHRQ's evidence grading system, these guidelines would likely be graded as Level 4 evidence (nonrandomized, historical controls and expert opinion) with a Level IIa or IIb recommendation (acceptable, of uncertain efficacy and may be controversial). This is not to suggest that expert opinion has no value, but because of the potential for bias, it is considered a "lower" level of evidence than is typically required to recommend a set of specific clinical guidelines to a field of practitioners.
Having said this, however, we do have some strong evidence supporting certain approaches to amplification. The first is that hearing aids are effective (e.g., Larson et al., 2000). This large, randomized controlled trial comparing peak clipping, compression limiting, and wide dynamic range compression clearly demonstrated the overall efficacy of amplification, but also suggested that there is no compelling evidence to recommend one type of output limiting circuit over another.
We also have strong evidence to support the effectiveness of directional amplification. For example, Amlani (2001) performed a meta-analysis of 146 separate studies of directional microphone hearing aids and concluded that directional instruments provide a significant advantage over omnidirectional instruments when data are pooled across all variables. The analysis also indicated, however, that there is no advantage to directional instruments in highly reverberant environments. Meta-analyses such as Amlani's are particularly useful in fields like audiology, where the number of subjects in individual studies may be too small to reveal treatment effects; by pooling data across many studies, a meta-analysis can expose effects that no single small study could.
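To make the pooling idea concrete, here is a minimal sketch of one common meta-analytic technique, fixed-effect inverse-variance weighting of study effect sizes. This is an illustration of the general method only, not Amlani's actual procedure, and the study numbers are hypothetical:

```python
import math

def fixed_effect_meta(studies):
    """Pool (effect_size, variance) pairs from several studies using
    inverse-variance weights: precise studies count more toward the
    pooled estimate, and the pooled standard error shrinks as studies
    accumulate."""
    weights = [1.0 / var for (_, var) in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # SE of the pooled estimate
    return pooled, se

# Hypothetical effect sizes (Cohen's d) and variances for three small
# directional-vs-omnidirectional comparisons -- illustrative numbers only.
studies = [(0.45, 0.04), (0.30, 0.09), (0.60, 0.05)]
d, se = fixed_effect_meta(studies)
ci = (d - 1.96 * se, d + 1.96 * se)  # 95% confidence interval
```

Note that the pooled standard error here is smaller than that of any single study, which is precisely why a meta-analysis can reach significance when each small study on its own cannot.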
There are still many unanswered hearing aid-related questions that can only be answered with Level 1 evidence. For example:
Is outcome improved if we directly measure LDLs?
Is outcome improved if we measure speech intelligibility in noise?
Is outcome improved if we use objective measures vs. subjective measures?
Is outcome improved if we measure patient expectations, motivation, and attitudes?
Is outcome improved if we use real ear measures to verify hearing aid performance?
Until we answer these and other clinical questions with high-quality studies that can underpin clinical practice guidelines, audiologists need to be intelligent and discerning consumers of the information they receive, and must identify the level of evidence that information represents.
Finally, we need to heed Dr. Frank C. Wilson's advice that "Neither unaudited experience nor logical thought can replace controlled clinical trials, so until documentation of a procedure's effectiveness can be demonstrated, it should be considered a false idol and worship withheld."
References:
American Academy of Audiology (2000). Joint Committee on Clinical Practice Algorithms and Statements. Audiology Today, McLean, VA.
American Speech-Language-Hearing Association (1997). Preferred Practice Patterns for the Profession of Audiology. Rockville, MD.
Amlani, A. (2001). Efficacy of directional microphone hearing aids: A meta-analytical perspective. J Am Acad Audiol, 12:202-214.
Larson, V., Williams, D., Henderson, W., et al. (2000). Efficacy of 3 commonly used hearing aid circuits: A crossover trial. JAMA, 284(14):1806-1813.
Sackett, D., Rosenberg, W., Gray, J., Haynes, R., & Richardson, W. (1996). Evidence-based medicine: What it is and what it isn't. British Medical Journal, 312(7023), 71-72.
Recommended Evidence-Based Medicine Websites:
Agency for Healthcare Research & Quality
International Classification of Functioning, Disability & Health http://www3.who.int/icf/icftemplate.cfm?myurl=homepal&mytitle=Home Page
*Center for Evidence-Based Medicine
The Cochrane Collaboration
Database of Abstracts of Reviews of Effects (DARE)
Evidence Based Medicine Online
Evidence-Based Medicine Resource Center
*Introduction to Evidence-Based Medicine
Evidence-Based Medicine Learning and Information Services
(*highest recommendation as an excellent starting point)