
Constructing a Hearing Aid Fitting Using the Latest Clinical Evidence

Brian Taylor, AuD
May 21, 2012
This article is sponsored by Unitron.

Editor's Note: This is a transcript of an AudiologyOnline live seminar.

For the next hour, we will talk about constructing hearing aid fittings using clinical evidence. To make this fun and maybe even a little entertaining, we will compare hearing aid selection and fitting to the process of constructing a house. Here is a general overview of today's course. We will start with a quick review of evidence-based practice, then look at why fitting a hearing aid is a lot like constructing a house, and finally consider how we can apply some of the research findings from the past few years to your selection and fitting process, mainly as it relates to features and benefits.

Before we get started, I think it is important to tell you my motivation for creating this course. Of course, manufacturers have good intentions 1, but they often use a lot of jargon when they name a hearing aid feature. The features need to sound innovative and cutting edge for good reason, but I think that often, perhaps unknowingly, leads to confusion for the audiologist. This does not make our jobs any easier when we have to sort out what these terms mean and what the features might actually do for a patient in real-world listening situations. So, one of the advantages of applying evidence-based thinking to the selection and fitting process is to demystify the terms used by our friends in manufacturing.

Another question I would like to pose early on is, "Who should do most of the thinking when selecting and fitting hearing aids?" Do we put our thinking caps on as audiologists in the clinic, or let the manufacturer do most of the thinking for us by using first-fit and preset settings on hearing instruments? I do not think there is necessarily a right or wrong answer, but it is a question that merits a lot of consideration. It is certainly okay for the manufacturer to do some of the thinking; clinicians do not have the time to set the parameters of every feature. The real question is which features the audiologist needs to think through when selecting and fine tuning, and which ones the manufacturer can pre-set for us. And does any recent research help guide us on these questions?

One final note worthy of consideration is the actual decision-making process, that is, how people make decisions. Specifically, I am referring not just to clinical decisions, but to decisions in general. Here are a few tidbits from a book I use when I teach Unitron account executives, The 5 Paths to Persuasion: The Art of Selling Your Message (Miller & Williams, 2004). The authors talk about five distinctly different social styles and how each style contributes to the decision-making process. In essence, there are five possible ways a person could make a decision. As I review each social style, think about how an audiologist who exhibits that style might make a decision about amplification for each patient.

First are charismatic decision makers. They are bold in their decision making and innovative. Perhaps someone like Steve Jobs would fit that description. Next we have thinkers, people who look at a lot of research and ponder the evidence before making their decision. Third, we have followers, who see what their colleagues are doing (so-called best practices) and use that information to help them make decisions. Then there are skeptics. These people may ask a lot of questions and are skeptical of solutions. Like thinkers, skeptics often rely on clinical evidence in the decision-making process. Lastly, we have controllers, individuals who do not necessarily go along with the crowd and who might come across as stubborn in their decision-making processes.

Based on a random sampling of audiologists and other sales professionals in the industry, I believe about 50% of the people fitting hearing aids fit into the "follower" category. This is not necessarily a bad thing. Followers often cautiously rely on testimonials and the opinions of others whom they trust before they make a decision. In short, they follow the herd and may be uncomfortable about challenging their manufacturer's representative during an office visit. Followers may lack the tenacity to scour PubMed to answer a burning question about the acclimatization manager feature.

With that said, here are the ways audiologists are likely to make clinical decisions. One, you can follow the opinion leader and do what has worked best in the past. This might involve listening to your favorite manufacturer's representative and what they have to say about a particular product or feature, and using that information to make clinical decisions. Two, you can read the journals and apply the evidence using a process like the one I will describe in a moment. Or, a third option would be to use some combination of evidence and experience. I think most of us probably use some combination of those things, but one of the points that I want to make is that in today's world, where outcomes matter more than ever, applying evidence in a systematic fashion to our clinical decisions is gaining in importance. Some audiologists may have to get outside their comfort zone of following the leader to apply the best available evidence in their decision-making process. So, let's talk a little bit about evidence-based thinking and how we can apply it to our decision-making process.

Evidence-Based Thinking2a

It is important to note that evidence-based thinking is a fairly time-consuming process when it is done correctly. In a special issue of JAAA (Cox, 2005) there was an excellent article about how to use evidence-based thinking in your decision-making process. Several other articles in the same issue discuss evidence-based principles, but Dr. Cox's article outlines the essential steps of an evidence-based review. An in-depth review of how to conduct an evidence-based review is outside the scope of this course; however, it helps to have a brief review of the five steps of evidence-based practice guidelines.2b Step one is to generate a focused question. For example, "Does directional microphone technology improve speech intelligibility in noise in everyday listening situations?" Step two is to go to PubMed or any of the other peer-reviewed search engines and conduct a keyword search. The key words come from the question that you formulated. Using our example of directional microphones and benefit, the key words could be some combination of benefit, satisfaction, real-world effectiveness and directional microphones. Step three is to narrow the search after you get your results by carefully reading the abstracts and eliminating any that do not directly pertain to your question. For the articles that do pertain to your question, carefully read the entire article. In this case with directional microphones, my search resulted in 37 abstracts; I eliminated 20, which leaves 17 papers to read. Not exactly a day's worth of work. You can see it is a very cumbersome, laborious process when done correctly.

Step four is to pay attention to the study design, the blinding of the authors and the subjects, and the number of subjects, or the power of the study, and then to grade the evidence based on those key factors. One important factor is the design of the study. Was it randomized with a control group, or was it non-randomized without a control group? It is a higher grade of evidence if it is a randomized study. A non-randomized study can be very useful, but the grade, or quality, of the evidence is a little bit lower.

There are different levels of evidence. Depending on exactly whom you read, there are between four and six levels, ranging from level one, a well-designed randomized controlled trial, down to much lower levels of evidence. Level two is a well-designed controlled study without randomization. Level three is a well-designed non-experimental study or case studies. Level four is an expert opinion or consensus statement.

Step five in our evidence-based process, after you have read the articles and graded the evidence, is to make a recommendation or modify a clinical procedure based on your reading of the evidence.

Of course, there is laboratory evidence, or studies that take place in a contrived, controlled listening environment like your sound booth. And there is real-world evidence, which is what happens when the patient walks out into everyday listening situations. In any evidence-based paradigm, real-world evidence is the gold standard.

The focus of this presentation is on constructing a hearing aid fitting, and I have tried to make it easier for you by conducting the evidence based review for you. Like Best Buy's Geek Squad, my team of Unitron experts tries to make things easy for you. They have taken care of steps one through five for you. For each feature mentioned here, I will try and give you the bottom line interpretation of the evidence reviewed by our Geek Squad.

To make this fun, I wanted to use a home construction analogy and tie that into evidence based principles for selecting hearing aids. We all would agree that hearing aids over the last several decades have gotten a lot smaller, and, believe it or not, so have houses. If you are interested in home construction, you might want to check out Jay Shafer of Tumbleweed Tiny House Company. They construct tiny houses, many less than one-hundred square feet. Some of them are portable and others are stationary, but the whole idea behind the tiny house is to have a small eco footprint.3 They are very green and very inexpensive relative to other houses. You can actually buy blueprints from Jay to build your own tiny home. It looks small, but I understand it is actually quite spacious inside.

If you are constructing tiny houses, here are some of the things you have to think about. First you have to get permission to build, clear the building site, build a foundation, construct the posts and the beams, raise the walls, put on a sloping roof with rafters, and install windows and doors, and finally, someone inspects your work to make sure that it meets all the building codes for your area. Constructing a hearing aid fitting is much like constructing a tiny house.

Let's look at how we might construct a hearing aid fitting.4 First, we would have to restore audibility and maintain comfort in quiet, which we think of as the foundation. Then we have comfort in noise, which serves as our walls. We know how important listening comfort in noise can be for many patients. Like a well-insulated wall, features that improve comfort in noise can protect our patient from the elements; rather than cold winds, it is annoyance from noise.

The roof of the house would be our ability to improve speech intelligibility in noise. Like a good roof with a skylight that allows the sun to shine into our living room, certain hearing aid features reduce unwanted background noise while enhancing the desired speech signal. The foundation, roof and walls are all pretty standard. Usually, the style conforms to the local tastes of your area. (You don't see tile roofs in Minnesota.) And as building materials incrementally improve, energy efficiency improves as well. The same concept holds true for hearing aid features.

Finally, we have the installation of doors and windows. Doors and windows come in all shapes and sizes, and they often are an artistic expression of the homeowner. For example, in some cool and trendy neighborhoods, you might find a lot of doors with funky colors. Certain features, like wireless streaming, manual user control, data learning, automatic program switching, and adaptation managers fit this description. Their use varies with the tastes and preferences of the patient. Now see if you agree, but I think this is a fun analogy that helps demystify how many hearing aid features work.

Like any well constructed house, there are certain materials required to build a home: lumber, cement, concrete, glass for the windows, et cetera. There are also required materials for any modern hearing aid fitting: wide dynamic range compression (WDRC), automatic feedback cancellation, digital noise reduction, directional microphones, and others. Let's elaborate on the solid foundation of a hearing aid fitting.

In my analogy, the foundation is our ability to restore audibility and provide comfort in quiet listening situations. Some of the features that we use to do that include several channels of WDRC, expansion, automatic feedback reduction algorithms and AGC-O, which helps us limit the maximum power output (MPO) of the hearing aid. These are all features that build a solid foundation of restored audibility and comfort. Let's look at some of the evidence as it relates to a few of these features.

Number of WDRC channels

WDRC repackages sounds into the patient's residual dynamic range, and when we do our job right, it squeezes as many sounds of speech as possible into the range between the patient's threshold and loudness discomfort level. What are some of the variables to consider when constructing a solid WDRC foundation? We need to think about the number of WDRC channels, the compression kneepoint and ratio, and attack and release times. The first question related to building a solid foundation is how many channels of WDRC are needed to optimize speech intelligibility. There are published studies that look at this question using data collected in a laboratory setting. The evidence suggests that 5 channels or fewer of WDRC are needed to optimize speech intelligibility in quiet (Woods, Van Tassell, Rickert, & Trine, 2006) and, according to one study, 8 to 16 channels are probably enough to optimize speech in background noise (Yund & Buckles, 1995). I am not going into the details of the studies in the interest of time, but I encourage you to check these out if you want to dig a little deeper.
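To make the compression math a little more concrete, here is a minimal sketch in Python of the input-output rule a single WDRC channel might apply; the kneepoint, ratio, and gain values are illustrative assumptions for this example, not any manufacturer's actual settings. A multichannel instrument simply repeats this calculation independently in each frequency band.

```python
def wdrc_channel_gain(input_db, kneepoint_db=45.0, ratio=2.0, linear_gain_db=20.0):
    """Illustrative WDRC rule for one frequency channel (all values in dB).

    Below the kneepoint the channel applies linear gain; above it, every
    additional dB of input produces only 1/ratio dB of additional output,
    squeezing loud sounds into the patient's residual dynamic range.
    """
    if input_db <= kneepoint_db:
        return linear_gain_db
    return linear_gain_db - (input_db - kneepoint_db) * (1.0 - 1.0 / ratio)


# Soft (40 dB), average (65 dB) and loud (80 dB) inputs in one channel:
for level in (40, 65, 80):
    print(level, "dB in ->", level + wdrc_channel_gain(level), "dB out")
```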

Another study looked at the same question indirectly by examining how many channels of WDRC are needed to accurately match a prescriptive target (Aazh & Moore, 2007). In this study, they found that only 36% of the manufacturer's first fits came within plus or minus 10dB of the NAL-NL1 target. That is one piece of evidence suggesting you have to pay close attention to first-fit settings, and very likely do some tweaking of them, if your goal in building a solid foundation is to optimize audibility.

So how many channels of WDRC are needed to optimize speech intelligibility? Probably no more than 8 to 16; anything beyond that is not a selling point. There is no evidence to say a 30-channel hearing aid is worse than a hearing aid that has 8 channels of WDRC, but I certainly would not use having more than 16 channels as a reason to purchase a premium product over a business-class or even an economy-line product that has 8 channels of WDRC. Although more than 8 to 16 channels are not needed to optimize audibility or match a prescriptive target, the audiologist certainly needs to do most of the thinking when it comes to selecting the proper number of WDRC channels for patients.

WDRC Release Time

Let's look at another variable in building a solid hearing aid foundation, and that is release time. There are a couple of things to keep in mind about the release time of WDRC. The consensus among experts is that a short release time is between 10 and 100 milliseconds, and a longer release time would be something greater than 500 to 600 milliseconds. It is important to note that some release times are beyond one second. So the clinical question here is, "Does getting the release time right contribute to a more successful fitting?"

Some of the arguments for a short release time are improved audibility and normal loudness perception. A potential downside to a short WDRC release time might be increased distortion and noise associated with the quick on-and-off gain changes. On the other hand, a longer release time is thought to maintain the intensity relationships between the speech sounds and, in theory, is supposed to contribute to a more "natural" sounding instrument. However, a potential negative of a longer release time is an "off-the-air" effect with softer input sounds. A few studies document that long release times are preferred by so-called slow thinkers, or those who had lower than average cognitive scores (e.g., Foo, Rudner, Ronnberg, & Lunner, 2007). In 2010, there was a really nice paper by Robyn Cox and Jingjing Xu that looked at this same question of release time using some real-world evidence. This study divided subjects into two groups based on their cognition scores, and, interestingly, they found no significant relationship between cognition scores and release time preference. They did, however, find that patients with higher scores on the Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox & Alexander, 1995) tended to prefer the longer release time.
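Before drawing a bottom line, it may help to see what a release time actually controls inside the compressor. Here is a minimal sketch of the kind of level estimator that attack and release settings govern; the sampling rate and time constants are assumed for illustration and do not describe any particular product.

```python
import math

def track_level(levels_db, fs=16000, attack_ms=5.0, release_ms=500.0):
    """Smooth a sequence of per-sample input levels (in dB) with separate
    attack and release time constants.

    The compressor's gain is driven by this smoothed estimate: a short
    release lets gain recover quickly after a loud sound (better audibility
    of soft speech, more 'pumping'), while a long release keeps the gain
    steadier and preserves level differences between speech sounds.
    """
    attack_coef = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    release_coef = math.exp(-1.0 / (fs * release_ms / 1000.0))
    estimate = levels_db[0]
    smoothed = []
    for level in levels_db:
        # Rising input uses the fast attack constant; falling input uses the slow release.
        coef = attack_coef if level > estimate else release_coef
        estimate = coef * estimate + (1.0 - coef) * level
        smoothed.append(estimate)
    return smoothed
```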

So, then, does getting the release time correct contribute to a more successful fitting? There is a limited amount of evidence. It might make some difference based on lab studies for patients that had lower cognition scores, but if you look at the Cox and Xu study from 2010, they showed there is no real relationship between release time preferences and cognition scores, so this would tell us that release time in the real world probably does not make that much of a difference. There are a lot of things you need to think about during the fitting, and based on my reading of the evidence, this is one setting I would leave up to the manufacturer. There's no sense pondering the release time settings of your fitting in most cases.5

Automatic Feedback Reduction

Automatic feedback reduction algorithms reduce feedback without sacrificing stable gain or headroom. These algorithms are very important for open fittings, where acoustic feedback is more of a potential problem. The question many of us ask in regard to feedback cancellation is, "Is any one manufacturer's automatic feedback canceller better than another?" This would seem to be a pretty significant reason to go with one manufacturer over another. There are a couple of ways that we can evaluate the integrity of a feedback cancellation system. Probably the most straightforward one is a measurement called additional gain before feedback: turning up the gain with the feedback canceller off until you encounter feedback, then activating the feedback canceller and increasing the gain again until feedback returns. That is a very easy measurement to do with a probe microphone system. I do this with headphones plugged right into the probe mic system, so I can actually hear the very beginning of feedback when the hearing aid is in the patient's ear. Run the test with the canceller turned OFF until you hear feedback, then activate the feedback canceller and re-run the test until you hear feedback.

A study performed using KEMAR showed significant differences in the amount of additional gain before feedback across six different manufacturers (Merks, Banerjee & Trine, 2006). Manufacturer D provided less than 5dB of added stable gain, and Manufacturers C and F were both well over 10dB. Those are sizable differences, even for the time; these are feedback cancellers that are likely no longer available. Ricketts et al. (2008) did the same type of study but used actual ears. The data also showed sizable differences in stable gain before feedback, with one manufacturer at less than 2dB of additional gain and a couple that were over 10dB. What is interesting, however, is that for the manufacturer that achieved 10dB of stable gain, there was quite a bit of variability, from less than 5dB all the way to 17 or 18dB. So the averages can be a little misleading.

So, to answer our question of whether one manufacturer's feedback canceller is better than another, I would say that there seem to be some fairly large differences across manufacturers. Also, it seems that one of the most critical variables is the ear canal geometry of the patient. Some patients seem to have ears that are more prone to feedback than others. This tells me it is probably a good idea to measure this additional gain before feedback before you turn the patient loose. As a general rule, I like to see about 10 dB of additional gain before feedback with an open canal fitting. Taking the time to measure additional gain before feedback not only tests the integrity of the feedback cancellation system, but it also might tell you who is not a good candidate for an open canal fitting. The audiologist, not the manufacturer, has to do most of the thinking here.
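As a quick worked example of the arithmetic behind this measurement, here is a tiny sketch; the 10 dB criterion is simply the rule of thumb mentioned above for open-canal fittings, and the function and parameter names are my own.

```python
def added_stable_gain_db(gain_at_feedback_canceller_off, gain_at_feedback_canceller_on):
    """Additional gain before feedback: the headroom the canceller buys you.

    Both arguments are the gain settings (in dB) at which audible feedback
    first appeared during probe-microphone verification.
    """
    return gain_at_feedback_canceller_on - gain_at_feedback_canceller_off


# Example: feedback at 25 dB of gain with the canceller off, 37 dB with it on.
asg = added_stable_gain_db(25, 37)
print(asg, "dB added stable gain;",
      "open fit looks reasonable" if asg >= 10 else "reconsider the open fit")
```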

Monaural vs. Bilateral?

When you are constructing a home, one of the things you have to decide is whether you want a one-car or two-car garage. With hearing aids the question might be, "Do you want your patients to have one hearing aid or two?" The current bilateral fit rate is about 85-90%. I think most of us know the advantages of binaural hearing, such as reduced head-shadow effect, improved localization, binaural squelch and loudness summation. Experienced clinicians know, however, that there is a significant number of patients with bilateral hearing loss who actually do well with, and even prefer, a unilateral fit. This is what I want to review right now.

Many studies have been conducted over the years comparing both outcomes and patient preferences for monaural versus bilateral hearing aid use. These studies are conducted in one of two ways: field trials or retrospective studies. Field trials are where the researchers let the subjects or patients switch between one and two hearing aids over a controlled period of time. These studies showed that 41% of patients preferred to wear one hearing aid rather than two. Retrospective studies are where the patients were originally fit with two hearing aids and then asked later, after they had used the hearing aids for a while, whether they preferred one or two. An average of several retrospective studies indicated that 21% of patients had a preference for one hearing aid. Conclusions from the earlier studies6 show that a substantial number of patients preferred the unilateral arrangement, even though they had a hearing loss in both ears. Audiometric data, age, binaural release from masking and binaural interference problems are actually non-predictors of unilateral use.

Cox and her colleagues (2011b) published a recent study examining unilateral and bilateral fittings for patients who had a loss in both ears. They included 94 participants in a 12-week field trial. The majority of the participants used behind-the-ear hearing aids, while a minority of the group used custom in-the-ear and canal hearing aids.7 They were also specifically looking at what some of the predictors for preference of one hearing aid over two might be. Some of the factors they evaluated were degree of hearing loss, personality and binaural processing variables.

The participants went through a series of pre-fitting tests including hearing tests and questionnaires. All subjects were then fitted using the same approach. Then the subjects followed a 12-week wearing schedule: three weeks with the binaural aids and three weeks each in a monaural aided condition. At the end of the 12-week trial, they were asked in an exit interview whether they preferred one or two hearing aids. Subjects were then categorized based on their preference and given outcome measures in the form of questionnaires: the International Outcome Inventory for Hearing Aids (IOI-HA), the Device-Oriented Subjective Outcome (DOSO) scale, and the Abbreviated Profile of Hearing Aid Benefit (APHAB). The IOI-HA is a seven-question measure of satisfaction and benefit. The DOSO is a measurement that is intended to factor out personality components and look only at device-related components of outcome. They found that 43 of 94 subjects (46%) preferred wearing one hearing aid after a long extended trial. Ninety percent of those who preferred one hearing aid were very or reasonably certain of their choice. Their choices, according to the study (Cox et al., 2011b), were very stable over time. In addition, they found that only one pre-fitting variable was a significant predictor of preference, and that was the unaided APHAB.

Some of the reasons nearly half the patients preferred one device were that they could hear speech as well or better with one, a more natural-sounding voice, adequate or better hearing in noise, the sense that one hearing aid helps as much as two, and more convenient telephone use with one aid as opposed to two. One of the other important findings from this study was that the group that preferred two hearing aids had higher outcomes on the APHAB than those who preferred only one. So back to the question, "Are two hearing aids better than one?" This is difficult to summarize in a couple of sentences, but I think patients prefer a bilateral arrangement over wearing only one hearing aid less often than we assume. Patients probably need to be given the opportunity to learn about the pros and cons of one versus two hearing aids; maybe using one is not as bad as we think for many patients. You may want to provide informational counseling on the benefits and limitations of binaural versus monaural fittings or share research, such as the Cox et al. (2011b) study, in a way that is meaningful to the patient.

Hot Topic - Comfort in the Hearing Home

Back to the analogy of constructing a home, one hot topic in home construction is green-energy heating systems and environmentally friendly ways to make our homes more comfortable. Likewise, audiologists have relatively new and perhaps improved ways to provide comfort and audibility to our patients. Frequency lowering, extended bandwidth and something I refer to as binaural enhancements are all ways that we may be able to create a more comfortable listening experience and provide improved audibility. Nonlinear frequency compression, for example, takes the high-frequency energy from around 3000 to 5000Hz and moves it down into a lower frequency region without disturbing the sound below 1000Hz. This sounds good, but is there any real-world or laboratory evidence to suggest that frequency lowering is effective?
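Before turning to the evidence, here is a minimal sketch of the general frequency-lowering idea: frequencies below a cutoff are left untouched and those above it are compressed toward the cutoff on a log scale. The cutoff frequency, compression ratio, and the log-domain formulation itself are illustrative assumptions, not the algorithm used in any specific product.

```python
def lowered_frequency(f_in_hz, cutoff_hz=3000.0, compression_ratio=2.0):
    """Illustrative nonlinear frequency compression map.

    Input frequencies at or below the cutoff pass through unchanged;
    frequencies above the cutoff are pulled down toward it, so energy near
    5000 Hz is re-mapped into a lower, presumably more audible, region.
    """
    if f_in_hz <= cutoff_hz:
        return f_in_hz
    return cutoff_hz * (f_in_hz / cutoff_hz) ** (1.0 / compression_ratio)


# 1000 Hz is untouched; 5000 Hz lands near 3873 Hz with these settings.
print(lowered_frequency(1000), round(lowered_frequency(5000)))
```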

One study published in The Hearing Journal (O'Brien, Yeend, Hartley, Keidser, & Nyffeler, 2010) looked at two different features: frequency lowering and high-frequency directionality. Each subject experienced frequency lowering on and off for eight weeks. There was very little difference at eight weeks post-fitting between the conditions with frequency compression on versus frequency compression off. The real-world results from this study showed that there was no significant difference between frequency compression and a conventional frequency response. Based on this one study, frequency lowering has no significant effect on localization, speech recognition or real-world reports of benefit. That is just one interesting study, and I'm fairly sure other studies have had similar conclusions.

Another hot topic, maybe more of a concept, is the use of frequency lowering for cochlear dead zones. A dead zone is a part of the cochlea that is not responding to sound. You can measure this with the TEN test from Brian Moore and colleagues (2004). At least one study by Van Summers, published in JASA back in 2004, shows that experienced clinicians are quite proficient at identifying dead zones based on the audiogram. The research indicates that if you identify a dead zone, for example, around 2000Hz, you would amplify up to 1.7 times the edge of the dead zone. In this case that would be 1.7 times 2000Hz, the cutoff frequency for the dead zone, which equals 3400Hz. So we would amplify to 3400Hz in this example. This begs the question, "If you identify a dead zone, do you compress the high frequencies?"
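As an aside, the arithmetic of that rule of thumb is simple enough to capture in a couple of lines; this hypothetical helper just expresses the multiplication worked out above.

```python
def upper_amplification_limit_hz(dead_region_edge_hz, multiplier=1.7):
    """Amplify up to roughly 1.7 times the edge frequency of the dead region."""
    return multiplier * dead_region_edge_hz


print(upper_amplification_limit_hz(2000))  # -> 3400.0 Hz, as in the example above
```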

Robyn Cox, again, has done a lot of remarkable work in the last two years in a number of areas. She and her team looked at the idea of providing a broadband frequency response to patients with a confirmed dead zone (Cox, Alexander, Johnson, & Rivera, 2011a). They looked at 170 patients, with a useable total of 307 ears, and about a third (31%) of the subjects had a measurable dead region by way of the TEN test (Moore et al., 2004). They compared QuickSIN scores between two conditions, one with the high frequencies rolled off and the other with the high-frequency response intact, for subjects with and without dead regions. Both groups of subjects actually performed better on the QuickSIN when they were given the high frequencies. This is an interesting study that shows that having a broadband response out to 4000 or 5000Hz is beneficial for most patients, including those who have a high-frequency dead zone. The conclusion of this study is that there is no evidence to support reducing high-frequency gain in hearing aid fittings for listeners with or without high-frequency dead zones.

Noise Reduction

We have gone through building a strong foundation, which is audibility and comfort in quiet. Next we move to building and raising the walls of our tiny house by providing comfort in noise with noise reduction algorithms. When we provide digital noise reduction, the goal is not so much to improve speech intelligibility as to make things more relaxed in noise and improve resource allocation for our patients. To give you a quick summary of how digital noise reduction works, know that there is more than one type in most products these days. Modulation detection is probably the most common. Modulation detectors in the hearing aid count the number of modulations in the signal and reduce the gain in any channel that is classified as noise.8 Once the hearing aid classifies noise in a specific channel, it can attenuate the noise somewhere between 2 and 10 dB, depending on the model and manufacturer. Some of the questions we might want to ask ourselves regarding noise reduction are, "Does noise reduction improve speech intelligibility in noise? Do patients have a preference for noise reduction? Are there other variables that might help the patient when they are fit with noise reduction?" Let's look at some of the evidence surrounding these noise reduction questions.
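Before the evidence, here is a minimal sketch of the modulation-based classification idea just described; the modulation-depth threshold, the linear scaling, and the 10 dB ceiling are illustrative assumptions rather than any manufacturer's rule.

```python
def channel_attenuation_db(modulation_depth, depth_threshold=0.4, max_attenuation_db=10.0):
    """Illustrative process-based noise reduction decision for one channel.

    Speech is strongly amplitude-modulated; steady noise is not. If the
    measured modulation depth (0 = steady, 1 = fully modulated) falls below
    the threshold, the channel is treated as noise-dominated and its gain
    is reduced, up to a maximum in the 2-10 dB range typical of products.
    """
    if modulation_depth >= depth_threshold:
        return 0.0  # speech-like channel: leave the gain alone
    return max_attenuation_db * (1.0 - modulation_depth / depth_threshold)


# A nearly steady channel gets ~8.8 dB of attenuation; a modulated one gets none.
print(round(channel_attenuation_db(0.05), 1), channel_attenuation_db(0.6))
```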

It has been shown in many studies that process-based noise reduction does not improve speech intelligibility (Ricketts & Dhar, 1999; Alcantara, Moore, Kuhnel, & Launer, 2003; Ricketts & Hornsby, 2005). There is evidence that shows that adult patients have a preference for noise reduction in quiet and in noise (Walden, Surr, Cord, Edwards, & Olson, 2000; Boymans & Dreschler, 2000; Ricketts & Hornsby, 2005; Alcantara et al., 2003; Marcoux, Yathiraj, Cote, & Logan, 2006; Mueller, Weber, & Hornsby, 2006; Powers, Branda, Hernandez, & Pool, 2006; Keidser, Carter, Chalupper, & Dillon, 2007). Noise reduction appears to make things more relaxed and more comfortable, at least according to these lab reports. Although not specifically designed to study process-based noise reduction, we do have some real-world evidence (Johnson, Cox, & Alexander, 2010) that looks at the question by using the APHAB. The original APHAB was published in 1995, before noise reduction and directional microphones were readily available. The creators of the APHAB decided a few years ago to renorm the APHAB with directional microphones and noise reduction. After all, digital electronics have revolutionized hearing aid technology, right?

They compared scores on four different APHAB subscales from the 1995 norms to the 2010 norms. Three of the four subscales (ease of communication, reverberation, and background noise) showed no differences between 1995 and 2010, despite the updates in digital hearing aids. The only exception was the aversiveness subscale, where there was a significant difference with the newer technology. You can look at the results of this study in a couple of different ways. If you take an optimistic view of noise reduction, you could say that on one important subscale of the APHAB, aversiveness is improved with some combination of process-based and spatially-based noise reduction.

One other study, this one from Bentler and her colleagues in 2008, had a real-world component. Their study compared several onset times of noise reduction to the aided condition without noise reduction. APHAB scores improved for all noise-reduction onset times compared to the unaided pre-test condition. Furthermore, they found no significant difference between the three noise-reduction onset times and the condition with noise reduction turned off. The bottom line of this study was that noise reduction is effective regardless of the onset. Another interesting side note of this study, if memory serves me correctly, is that patient journal entries indicated that the noise-reduction-off condition was associated with a lower probability of success. To me, that means the APHAB may not be sensitive enough to pick up some of the real-world benefits of noise reduction.

Over the last year we have started to see more studies that look at what is called dual processing or cognitive allocation in relation to the use of noise reduction in hearing aids. One study (Sarampalis, Kalluri, Edwards, & Hafter, 2009) used the SPIN (Bilger, Nuetzel, & Rabinowitz, 1984) to see if noise reduction contributed to more successful dual processing and improvements in speed of processing. They looked at the following three conditions with noise reduction on and with noise reduction off: speech intelligibility in noise, recall from memory and visual reaction time. As you may remember, the SPIN has high-context sentences as well as sentences that have no contextual information. As you would expect, there was no significant difference with noise reduction on or off for speech intelligibility in noise. However, there is some data here that would suggest that noise reduction can be effective at improving memory recall at a fairly adverse signal-to-noise ratio.9 The same holds true for visual reaction time, in that noise reduction seems to be beneficial according to the findings of this study. To answer one of our questions about noise reduction: noise reduction does not improve speech intelligibility, but it may reduce the aversiveness of noise and improve listening comfort.

Do patients prefer noise reduction to be turned on? There is good evidence that would suggest yes, they do prefer it and it doesn't look like it makes things any worse, so it is good to have it on. How might it help out the patient in other ways? There are some published studies and others on the way that show that noise reduction frees up brain processing capacity on the dual-processing tasks and might even reduce cognitive effort. Also, when you consider all the things you have to think about during the fitting, the research in this area would probably suggest that the manufacturer, not you, can do the bulk of the thinking on noise reduction. In other words, leaving it ON and not thinking about it is okay.

Directional Microphones

Like the roof on a 150-year-old Italian villa, not much changes with respect to directional microphones, or does it? Most of us know the goal of directional microphones is to improve the SNR of the listening situation. So before we get into some of the real-world and laboratory evidence surrounding the big picture, I want to talk quickly about a new microphone feature called anti-cardioid, or reverse cardioid. This pattern flips the null from the back to the front and, in theory, helps the patient if noise is in front and the talker of interest is directly behind them. It is an interesting idea, but is there laboratory or real-world evidence to support this new feature? The answer is yes, at least in the lab.

Mueller, Weber and Bellanova (2011) recently published a study addressing this topic. This is a laboratory study which evaluated speech perception using the Hearing in Noise Test (HINT; Nilsson, Soli, & Sullivan, 1994) by placing noise in the front and speech in the back. They found that the reverse cardioid pattern improved scores on the HINT by an average of 5.7dB.
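For readers who want to picture what "flipping the null" means, here is a minimal sketch of an idealized first-order microphone pattern; it is a textbook simplification under my own parameterization, not the adaptive system in any actual hearing aid.

```python
import math

def first_order_sensitivity(angle_deg, alpha=0.5, anti_cardioid=False):
    """Idealized first-order directional sensitivity, normalized 0 to 1.

    With alpha = 0.5 this is a cardioid: full sensitivity at 0 degrees
    (front) and a null at 180 degrees (behind). Setting anti_cardioid=True
    flips the pattern, placing the null in front for the situation where
    the noise is ahead and the talker of interest is behind the listener.
    """
    c = math.cos(math.radians(angle_deg))
    if anti_cardioid:
        c = -c
    return (1.0 - alpha) + alpha * c


for angle in (0, 90, 180):
    print(angle, round(first_order_sensitivity(angle), 2),
          round(first_order_sensitivity(angle, anti_cardioid=True), 2))
```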

Like most other laboratory studies comparing directional microphones to the omni-directional condition, the results show a pretty big advantage for the directional microphone arrangement. Typically, in a test booth there is a substantial improvement in the directional setting over an omnidirectional setting, depending on how the noise is oriented relative to the speech. If we take the very same directional microphone and go into everyday real-world listening, there is often very little difference between how a patient rates performance or preferences with omni and directional microphones (Walden & Walden, 2004).

A good example of a real-world directional microphone study with great clinical value is one published by David Gnewikow in 2009. This study was published in JRRD, which is a VA publication that can be downloaded on the web. It was a three-year, double-blinded study with a large number of participants. The patients were classified into three groups according to the severity of their hearing loss and wore hearing aids in both the directional and omnidirectional modes for a specific period of time. He used the HINT and the Connected Speech Test (CST; Cox, Alexander, & Gilmore, 1987) as objective measures of hearing aid benefit. Using these lab tests, the directional condition outperformed the omni condition for all of the groups. What happens, however, when they go into the real world? Which one does the patient prefer? In that situation, you see that the directional microphone condition does not always win. In fact, for the moderate hearing loss group, omnidirectional was preferred over the directional mode.

Just when you think researchers have investigated all the pertinent issues surrounding directional microphone use, a few more relevant studies get published. Wu and Bentler (2010a, 2010b) at the University of Iowa examined visual cues and aging and how they might affect directional microphone benefit and preference. They published a two-part paper on the relationship between the use of visual cues and directional microphone benefit, one with laboratory results and the other with field trials. They compared omnidirectional to directional performance in the auditory-only and the auditory-plus-visual-cue conditions. Part of the study looked at the impact visual cues might have on directional microphone benefit and preference. In the laboratory portion, subjects listened at a number of different SNRs, from +10dB to -10dB, and then rated their preferences with audio and visual cues. I thought the -2dB SNR was most interesting, because it is a very common signal-to-noise ratio that a person would encounter in the real world. The microphone arrangement did not seem to matter as much as the availability of auditory-visual cues. The conclusion from the study, I think, is that the availability of visual cues, at least for some patients, contributes more to hearing aid benefit in noise than the use of directional microphones. That tells me that if clinicians had some way to quickly evaluate a patient's ability to lip read, it might be an indicator of performance with hearing aids in the real world. So it might be worth a few extra minutes to evaluate lip reading ability. If patients do well on lip reading, it might be a positive indication that they will do better in the real world with directional microphones. I think this study is a good example of how an audiologist's clinical ability is just as critical as the sophistication of the technology. Score one for audiologists and their need to do most of the thinking.

The same researchers, Wu and Bentler (2010b), looked at directional microphone performance in older patients. They used journal entries from the participants as the real-world component of the study and the HINT as their lab measure. In the lab, they found that as a person's age increased, the HINT scores for the directional condition did not fall off all that much, but in the real world, as the person aged beyond around 60 or 70 years, directional preference declined quite dramatically. This tells us that for older patients in the real world, even though they might have a great score in the lab, directional microphone benefit tends to decrease as patients get older. In the real world, patients over the age of 70 or so will not do as well as somebody who is younger, even though the microphones might be working exactly the same.

Obviously, there is more to the equation than hearing aid technology. Things you must really consider when fitting a hearing aid are the unaided SNR loss, which can be measured with the QuickSIN (Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004), the acceptable noise level score of the patient, the intent and expectations of the listener, lip reading ability and the age of the listener. One way to look at some of the other variables would be to chart out their QuickSIN and acceptable noise level (ANL; Nabelek, Tucker, & Letowski, 1991) scores on what we call a red flag matrix. I published a paper a few months ago on AudiologyOnline along with Jill Bernstein, who is an audiologist from Buffalo, New York, that looks at how you can use these two scores as an effective way to counsel patients about directional microphone processing (Taylor & Bernstein, 2011). In essence, you take the two unaided results and plot them on the matrix. You can refer to the paper for more in-depth information as I do not have time to go into detail today.
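As a rough illustration of what plotting the two unaided scores might look like in code, here is a hypothetical sketch; the cutoff values and quadrant labels are placeholders of my own and are not taken from Taylor and Bernstein (2011), which should be consulted for the actual matrix and its interpretation.

```python
def red_flag_quadrant(quicksin_snr_loss_db, anl_db, snr_loss_cutoff=7.0, anl_cutoff=10.0):
    """Place a patient's unaided QuickSIN SNR loss and ANL into one of four
    quadrants of a counseling matrix.

    Cutoffs are hypothetical placeholders; a higher SNR loss means more
    difficulty in noise, and a higher ANL means less background noise is accepted.
    """
    snr = "high SNR loss" if quicksin_snr_loss_db > snr_loss_cutoff else "low SNR loss"
    anl = "high ANL" if anl_db > anl_cutoff else "low ANL"
    return f"{snr} / {anl}"


print(red_flag_quadrant(quicksin_snr_loss_db=9.0, anl_db=14.0))  # -> "high SNR loss / high ANL"
```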

One other piece of information that might help with fitting directional microphones is an article published by Pam Souza (2009) also on AudiologyOnline, looking at the relationship between QuickSIN scores and pure-tone averages. It showed that patients with losses more severe than 60dB for the pure-tone average have greater dispersion on the unaided QuickSIN scores. This study would show that gathering unaided QuickSIN scores might be helpful in the counseling process, especially for those with more severe hearing losses on the audiogram. While audiologists probably don't need to think a whole lot about how the directional microphone works10, we do need to think about how the directional mic system interacts with the lipreading ability, listening environment, SNR loss and age of the patient.

Gone Wireless?

Now, back to our home construction analogy. So far, we have the entire house built from the foundation to the walls and roof, but we still need the installation of doors and windows. Sticking with this analogy, these would be features like wireless streaming, manual user control of gain and other features, data learning and automatic program switching. Those are all extras that go on to the hearing aid that make it more seamless for the patient to use. Ease of use, flexibility and convenience are some of the goals of these features. Let's pick out just one of these features: wireless streaming of an audio signal directly to the hearing aids. As you know, you can stream the audio signal from a cell phone directly to both hearing aids in a couple of different ways. Depending on the manufacturer, sometimes you need the gateway or streaming device, and for other manufacturers, you hold the phone up to one ear and it streams over to the other ear automatically, without the use of the neck worn device.

There is one interesting study that looked at wireless and acoustic hearing aid telephone strategies (Picou & Ricketts, 2011). Since it examined telephone use in several listening conditions, I think this study has a lot of practical value. There were 20 participants in the study with mild sloping to severe hearing loss. The researchers compared speech recognition in three different telephone conditions: acoustic telephone, wireless transmission with the external microphone muted, and wireless transmission with the external microphone active. They also looked at non-occluding open fits and fits with an occluding dome. For the acoustic telephone condition, they found that with an occluding dome, the bilateral signal resulted in the best outcome, followed by the non-occluding dome. Let's say you are fitting someone with open canal devices, along with a neck-worn wireless streamer. This study would indicate that these patients would do just as well using the acoustic telephone program as the wireless device that streams into both ears. On the other hand, for a more occluded fitting, the results of this study support the use of bilateral wireless streaming for telephone use.

This study is a good example of how audiologists have to evaluate several variables and make a decision that is best for each patient. Simply throwing technology at the problem will not solve anything. Clinicians, not the manufacturer, have to do most of the thinking when it comes to optimizing aided telephone performance.

Building the Hearing Aid Fitting

That completes the construction of our home as well as our hearing aid fitting. Be sure to construct the fitting using the essential materials including noise reduction, directional microphones, compression and value-added features. Having reviewed some of the most current articles over the past few years, I would challenge all of you to ask good questions of your manufacturer's reps. It's okay to play the role of the skeptic. When a manufacturer's representative comes to your office and they have a new feature they want to talk to you about, ask them about the evidence to support the feature both in the lab and in the real world. Ask about the benefit that your patients can expect. It is up to all of us as good clinicians to ask the tough questions and to be critical thinkers. I hope this short course at least got you thinking a little differently about how you construct your next hearing aid fitting.

References

Aazh, H., & Moore, B.C. (2007). The value of routine real ear measurement of the gain of digital hearing aids. Journal of the American Academy of Audiology, 18, 653-664.

Alcantara, J.I., Moore, B.C.J., Kuhnel, V., & Launer, S. (2003). Evaluation of the noise reduction system in a commercial digital hearing aid. International Journal of Audiology, 42, 34-42.

Bentler, R., Wu, Y.H., Kettel, J., & Hurtig, R. (2008). Digital noise reduction: outcomes from laboratory and field studies. International Journal of Audiology, 47(8), 447-460.

Bilger, R.C., Nuetzel, J.M., & Rabinowitz, W.M. (1984). Standardization of a test of speech perception in noise. Journal of Speech, Language, and Hearing Research, 27, 32-48.

Boymans, M. & Dreschler, W.A. (2000). Field trials using a digital hearing aid with active noise reduction and dual-microphone directionality. Audiology, 39(5), 260-268.

Cox, R.M. (2005). Evidence-based practice in provision of amplification. Journal of the American Academy of Audiology, 16(7), 419-438.

Cox, R.M. & Alexander, G.C. (1995). The abbreviated profile of hearing aid benefit. Ear & Hearing, 16(2), 176-186.

Cox, R.M., Alexander, G.C., & Gilmore, C. (1987). Development of the connected speech test (CST). Ear & Hearing, 8(5S), 119S-126S.

Cox, R.M., Alexander, G.C., Johnson, J., & Rivera, I. (2011a). Cochlear dead regions in typical hearing aid candidates: prevalence and implications for use of high-frequency speech cues. Ear & Hearing, 32(3), 339-348.

Cox, R.M., Schwartz, K.S., Noe, C.M., & Alexander, G.C. (2011b). Preference for one or two hearing aids among adult patients. Ear & Hearing, 32(2), 181-197.

Cox, R.M. & Xu, J. (2010). Short and long compression release times: speech understanding, real-world preferences, and association with cognitive ability. Journal of the American Academy of Audiology, 21(2), 121-138.

Foo, C., Rudner, M., Ronnberg, J., & Lunner, T. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology, 18(7), 618-631.

Gnewikow, D., Ricketts, T., Bratt, G.W., & Mutchler, L.C. (2009). Real-world benefit from directional microphone hearing aids. Journal of Rehabilitation Research and Development, 46(5), 603-618.

Johnson, J.A., Cox, R.M., & Alexander, G.C. (2010). Development of APHAB norms for WDRC hearing aids and comparison with original norms. Ear & Hearing, 31(1), 47-55.

Keidser, G., Carter, L., Chalupper, J., & Dillon, H. (2007). Effect of low-frequency gain and venting effects on the benefit derived from directionality and noise reduction in hearing aids. International Journal of Audiology, 46(10), 554-568.

Killion, M.C., Niquette, P.A., Gudmundsen, G.I., Revit, L.J., & Banerjee, S. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. Journal of the Acoustical Society of America, 116(4), 2395-2405.

Marcoux, A.M., Yathiraj, A., Cote, I., & Logan, J. (2006). The effect of a hearing aid noise reduction algorithm on the acquisition of novel speech contrasts. International Journal of Audiology, 45(12), 707-714.

Merks, I., Banerjee, S., & Trine, T. (2006). Assessing the effectiveness of feedback cancellers in hearing aids. The Hearing Review, 13(4), 53-57.

Miller, R.B., & Williams, G.A. (2004). The 5 Paths to Persuasion: The Art of Selling Your Message. New York: Warner Business Books.

Moore, B.C.J., Glasberg, B.R., & Stone, M.A. (2004). New version of the TEN test with calibrations in dBHL. Ear & Hearing, 25(5), 478-487.

Mueller, H.G., Weber, J., & Bellanova, M. (2011). Clinical evaluation of a new hearing aid anti-cardioid directivity pattern. International Journal of Audiology, 50(4), 249-254.

Mueller, H.G., Weber, J., & Hornsby, B.W.Y. (2006). The effects of digital noise reduction on the acceptance of background noise. Trends in Amplification, 10(2), 83-93.

Nabelek, A.K., Tucker, F.M., & Letowski, T.R. (1991). Toleration of background noises: Relationship with patterns of hearing aid use by elderly persons. Journal of Speech and Hearing Research, 34, 679-685.

Nilsson, M., Soli, S.D., & Sullivan, J.A. (1994). Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America, 95(2), 1085-1099.

O'Brien, A., Yeend, I., Hartley, L., Keidser, G., & Nyffeler, M. (2010). Evaluation of frequency compression and high-frequency directionality. The Hearing Journal, 63(8), 32, 34-37.

Picou, E.M., & Ricketts, T.A. (2011). Comparison of wireless and acoustic hearing aid-based telephone listening strategies. Ear & Hearing, 32(2), 209-220.

Powers, T., Branda, E., Hernandez, A. & Pool, A. (2006). Study finds real-world benefit from digital noise reduction. The Hearing Journal, 59(20), 26-30.

Ricketts, T.A., & Dhar, S. (1999). Comparison of performance across three directional hearing aids. Journal of the American Academy of Audiology, 10(4), 180-189.

Ricketts, T.A., & Hornsby, B.W. (2005). Sound quality measures for speech in noise through a commercial hearing aid implementing digital noise reduction. Journal of the American Academy of Audiology, 16(5), 270-277.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research, 52, 1230-1240.

Souza, P. (2009). Severe hearing loss- recommendations for fitting amplification. AudiologyOnline, Article 2181. Direct URL: www.audiologyonline.com/articles/article_detail.asp?article_id=2181

Summers, V. (2004). Do tests for cochlear dead regions provide important information for fitting hearing aids? Journal of the Acoustical Society of America, 115(4), 1420-1423.

Taylor, B., & Bernstein, J. (2011, August 1). The red-flag matrix hearing aid counseling tool. AudiologyOnline, Article 2380. Direct URL: www.audiologyonline.com/articles/article_detail.asp?article_id=2380

Walden, B., Surr, R., Cord, M., Edwards, B., & Olson, L. (2000). Comparison of benefits provided by different hearing aid technologies. Journal of the American Academy of Audiology, 11(10), 540-560.

Walden, T., & Walden, B. (2004). Predicting success with hearing aids in everyday living. Journal of the American Academy of Audiology, 15(5), 342-352.

Woods, W.S., Van Tassell, D.J., Rickert, M.E., & Trine, T.D. (2006). SII and fit-to-target analysis of compression system performance as a function of number of compression channels. International Journal of Audiology, 45(11), 630-644.

Wu, Y.H., & Bentler, R.A. (2010a). Impact of visual cues on directional benefit and preference: Part I- laboratory tests. Ear & Hearing, 31(1), 22-34.

Wu, Y.H., & Bentler, R.A. (2010b). Impact of visual cues on directional benefit and preference: Part II- field tests. Ear & Hearing, 31(1), 35-46.

Yund, E.W., & Buckles, K.M. (1995). Multichannel compression hearing aids: effect of number of channels on speech discrimination in noise. Journal of the Acoustical Society of America, 97(2), 1206-1223.

Footnotes

1 The irony that the author is employed by a leading hearing aid manufacturer should not be lost on the reader.

2a If you are familiar with the concept of how to conduct an evidence-based review, you might want to skip over this section.

2b If you're a student, it's essential to take a course in evidence-based practice. If you're a busy clinician, read the Cox (2005) paper that's cited here or read the book "How to Read a Paper" by Trisha Greenhalgh. The bottom line is a course or book will enhance your critical thinking skills.

3 Rechargeable hearing aids would be the equivalent of an eco-friendly house.

4 Although not quite as captivating as the house analogy, Figure 9-1 (p. 235) of the Taylor and Mueller text "Fitting & Dispensing Hearing Aids" would be a good one to use if you need a visual while reading this "manologue."

5 If there is a researcher out there studying release times of WDRC and how they affect the outcome of the fitting, please chime in. You might have a different opinion. I know you are out there!

6 The Cox article cited in the next paragraph does a nice job of reviewing these earlier studies from the 70s through the 90s.

7 None of the participants used open canal fits, which is also interesting.

8 I prefer to use the term process-based noise reduction because it helps distinguish it from the spatially-based noise reduction of directional microphones.

9If you are like me, it is getting tedious reading about the results of all these studies without a graph or chart. You might want to go back to the handout of the slides that you should have downloaded before you began reading this article!

10 Of course, it is good to know the intensity level at which an automatic microphone system switches to the directional mode and where the nulls might be. These can be determined with some careful probe mic analysis.


Brian Taylor, AuD

Director of Practice Development & Clinical Affairs

Brian Taylor is the Director of Practice Development & Clinical Affairs for Unitron. He is also the Editor of Audiology Practices, the quarterly publication of the Academy of Doctors of Audiology. During the first decade of his career, he practiced clinical audiology in both medical and retail settings. Since 2003, Dr. Taylor has held a variety of management positions within the industry in both the United States and Europe. He has published over 30 articles and book chapters on topics related to hearing aids, diagnostic audiology and business management. Brian is the co-author, along with Gus Mueller, of the textbook Fitting and Dispensing Hearing Aids, published by Plural, Inc. He holds a Master's degree in audiology from the University of Massachusetts and a doctorate in audiology from Central Michigan University.


