Gus Mueller: Welcome! This is our third year of the Vanderbilt Audiology’s Journal Club on AudiologyOnline for those of you who are first-timers. All of the previous journal club courses are archived and can be found in the AudiologyOnline library. I encourage you to check them out. These sessions all feature a guest from the Vanderbilt University audiology faculty reviewing and discussing pertinent journal articles around a particular theme. Today's guest is Dr. Todd Ricketts, and he will be talking about recent research and evidence surrounding hearing aid selection and fitting. In addition, each Vandy Journal Club contains a few other special features relevant to audiology clinical practice that we hope keep these seminars lively and interesting, and of course they are all CE-eligible. So let's get started.
What They’re Reading at Vandy
Many of you know that in the past few years, there has been a lot of discussion surrounding cognition and its relationship to hearing aid processing. In our general clinic routine, most of us will probably change our counseling strategies and general hearing aid orientation for someone with poor cognition. We might even alter hearing aid style or the fitting arrangement. But the more difficult question that has come up over the years is, “Should we actually change the hearing aid processing based on the patient’s cognition?”
How do you know if a person has good or bad cognition, and at what point is it bad enough that you should change the hearing aid settings? What I have been reading lately is the recent 20Q article from Pam Souza (2012) on AudiologyOnline. In this particular article, Pam discusses whether, if cognition measures were completed, they would change the way you fit hearing aids—compression time constants, perhaps? Much of the research discussed in her article relating cognition and hearing aid signal processing has been conducted by her and her colleagues. She brings up some interesting topics, including, “What is the role of the clinical audiologist in conducting cognition testing? Is this something that we should be doing? If so, what tests of cognition should we use?” You can find the answers to all of those questions and more in this article, available on AudiologyOnline.
Hearing Aid Selection and Fitting: Some Guidance from Recent Research Findings
Todd Ricketts, as many of you know, is one of the premier researchers in the area of hearing aids. What you maybe do not know is that, along with all the research that he conducts, he is the Director of Graduate Studies at Vanderbilt University. He also is actively involved in teaching, running the Vandy AuD program, and mentoring PhD students—he is just one very busy man. We are lucky that he has taken the time away from his schedule today to share with us what he has been reading relative to hearing aids. I am sure that he will also let us know how many of these research findings work right into our daily clinical practice. With that, Todd, we are ready for your first article.
Todd Ricketts: Thank you. As Gus mentioned, I wanted to discuss some articles that I have been reading most recently, and I have tried to select ones that I think might be interesting to professionals in clinical practice.
The first article is by Wu and Bentler (2012c) who examined reverberation and ceiling performance for word recognition and its impact on directional benefit. These authors were interested in both measuring directional benefit and the impact of vision on directional benefit, as well as trying to predict it. They have conducted some modeling in this article to examine whether the magnitude of directional benefit can be predicted. There was some previous work suggesting that in a laboratory environment when you are only presenting speech, predicting directional benefit can be fairly straightforward. However, things get a little trickier when you add vision into the mix.
These authors published an article two years ago (Wu & Bentler, 2010) that received a lot of interest because one of the conclusions was that when you include vision cues for speech recognition, even in the omnidirectional mode, some listeners are already reaching ceiling performance. That is, they are already obtaining about 100% correct. So these authors, and many others after their publication, concluded that, perhaps, directional microphone technology does not provide significant benefit at some typical signal-to-noise ratios (SNR) of about -3 dB or better when vision is present.
One of the other things that we know is that reverberation tends to decrease directional benefit in many environments. Wu and Bentler (2012c) were curious to know whether reverberation might reduce the negative impact of ceiling effects; because reverberation tends to reduce performance, the directional microphone might provide more benefit. This issue is important clinically, because if directional microphone hearing aids do not provide significant advantages when vision is present, perhaps their real-world benefit has been overstated for many situations.
In this article, the authors measured directional benefit in 19 adults with sensorineural hearing loss. They used two different speech recognition tests. For the auditory-only (AO) condition, they used the Hearing in Noise Test (HINT); and for the auditory-visual (AV) condition, they used the Connected Speech Test (CST). They did this in two different environments: one with a low level of reverberation, about 0.2 seconds, and one with a moderate reverberation time of about 0.7 seconds. Then they calculated the modified speech intelligibility index (mSII) for both speech materials. For those of you who are unfamiliar with this measure, the speech intelligibility index (SII) is the newest version of the articulation index. They were attempting to use this to predict directional benefit, both for the AO and AV situations, and applied some corrections for vision based on both the ANSI standard corrections and some new corrections from their previous work (Wu & Bentler, 2010).
In examining directional benefit in an AO environment with 0.2 second and 0.7 second reverberation times, we see that there is significant directional benefit in the neighborhood of 3.0 dB to perhaps 5.5 dB on the HINT. As would be expected, the directional benefit decreased with increasing reverberation. These findings are consistent with what we have seen in past studies.
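To put numbers like these in context: directional benefit in an AO paradigm is simply the difference between the speech reception thresholds (SRTs, in dB SNR) measured in omnidirectional and directional modes. Here is a minimal sketch of that arithmetic, using hypothetical SRT values rather than Wu and Bentler's actual data:

```python
# Directional benefit = omni SRT - directional SRT, in dB SNR (lower SRT is better).
# All SRT values here are hypothetical, chosen only to illustrate the calculation.
srt_omni = {"low reverb (0.2 s)": -2.0, "moderate reverb (0.7 s)": 1.0}
srt_directional = {"low reverb (0.2 s)": -7.5, "moderate reverb (0.7 s)": -2.0}

for room in srt_omni:
    benefit = srt_omni[room] - srt_directional[room]  # positive = directional helps
    print(f"{room}: {benefit:.1f} dB directional benefit")
```

With these made-up thresholds, the sketch mirrors the pattern described above: more benefit (5.5 dB) in low reverberation than in moderate reverberation (3.0 dB).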
When you go to the AV condition, however, we see a bit of a change. They tested with the CST at fixed SNRs of -6, -2, +2, and +6 dB, and again, directional benefit decreased with increasing SNR. However, there was significant directional benefit in the AV condition at the higher reverberation time, even at relatively positive SNRs of +2 dB or so; we still see approximately 10 to 11% directional benefit.
The other thing that we see in the AV condition, unlike the AO condition, is that there was more directional benefit with more reverberation. Consistent with their proposed hypothesis, actual performance on the CST, even at the +2 SNR in the omnidirectional condition, is not reaching a 100% score, so there is a bit more room for directional benefit to occur.
They did find, consistent with past work, that the mSII model accurately predicts directional benefit when vision is not present. But the mSII model underestimated directional benefit in both reverberant conditions when vision was present. So they modeled this with a couple of different corrections. Their new corrections improved predictions in the low reverberation condition; however, directional benefit was still underestimated in the moderately reverberant environment.
Why is this important? I think it shows that hearing aid users are expected to achieve the greatest directional benefit in situations in which they do not reach ceiling performance. So whenever a listener is struggling because of their inability to understand speech in noise or reverberation, we see that we can achieve some significant directional benefit, even when we have visual cues to help us along. The bottom line is that the present study (Wu & Bentler, 2012c) suggests that in the real world, even in face-to-face communication when there is high reverberation or moderate reverberation, we might expect some directional benefit.
The next study that I wanted to talk about is from Brian Moore, Christian Fullgrabe and Michael Stone (2011), looking at compression parameters in hearing aids. Specifically, they were asking, “What are the preferred gain and compression parameters for a 5-channel compression hearing aid?” This was a simulated hearing aid with 5 independent channels of compression, which was programmed for some overlap between the channels. They wanted to know how those preferred parameters, which they measured using paired comparisons, compared to those of a validated prescriptive procedure, their own CAMEQ2-HF. Specifically, they were most interested in preferred gain when they were conducting these comparisons.
Why does this matter? I think there are still mixed results regarding how much compression parameters impact the overall fitting. There is also a renewed interest in high-frequency extension and what the optimal setting should be. This is particularly interesting, given that verification of extended high frequencies is still quite difficult. Finally, they were interested in seeing whether the paired comparisons lead to a different average result or starting point than a validated prescriptive method for the extended high frequencies.
These authors examined participants with mild to moderate hearing loss, and those listeners expressed a preference for pairs of sounds, including speech sounds and musical sounds. They used a variety of different musical sounds, including percussion, classical music, and a jazz trio. The sounds in each pair were derived from the same token and differed along a single dimension in the type of processing applied, and we will talk about that in just a minute. For the speech sounds, participants judged pleasantness or clarity in noise, and for the musical sounds, they judged pleasantness.
This was actually conducted in four separate experiments. In the first experiment, they were investigating the time delay of the audio signal relative to the gain control signal. This is sometimes referred to as the alignment delay. Some people call this look-ahead compression. The idea is that you apply gain, and then allow the audio signal to go through after some alignment delay. In theory, you suppress the overshoots and undershoots that we get from compression, particularly with very fast time constants. In this case, they examined these effects for both fast and slow time constants.
Next, they examined slow, medium, and relatively fast attack and release times, and they looked at bandwidth, upper cutoff frequencies of 5 kHz, 7.5 kHz, or 10 kHz. Then in the last experiment, they examined gain in the high frequencies.
What they found was that there was no effect for speech or most musical signals. There was no effect for clarity, either. When they looked at pleasantness for percussive sounds and fast time constants only, you did see small, but significant, effects. In that case, a little bit of alignment delay was a positive; in fact, the more alignment delay they had for these sounds, the more pleasant they were judged. However, one of the problems with alignment delay is that it ends up being a delay in the hearing aid. In other words, an alignment delay adds to the total processing delay. The authors discussed the fact that too long of an alignment delay, and too long of a processing delay in general, can be problematic. Because of this, they concluded that a relatively short alignment delay should be used, as it might offset some of the issues related to pleasantness for percussive instruments without adding much total delay. So they kept this optimal 2.5 ms alignment delay for the rest of their experiments.
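For readers who want to see the mechanics, the alignment-delay idea can be sketched as a simple one-channel compressor in which the level detector reads the undelayed input while the audio path is delayed, so the gain reduction is already in place when a transient arrives. This is only an illustrative sketch under my own assumptions (single channel, arbitrary parameter values), not the authors' 5-channel simulation:

```python
import numpy as np

def compress_with_alignment_delay(x, fs, delay_ms=2.5, attack_ms=1.0,
                                  release_ms=50.0, threshold=0.1, ratio=3.0):
    """One-channel compressor with 'look-ahead': the gain signal is computed
    from the undelayed input, then applied to a delayed copy of the audio."""
    delay = int(fs * delay_ms / 1000)  # alignment delay in samples
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000))   # attack smoothing coefficient
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000))  # release smoothing coefficient

    env = 0.0
    gains = np.empty_like(x)
    for i, level in enumerate(np.abs(x)):
        a = a_att if level > env else a_rel  # track rising levels faster
        env = a * env + (1.0 - a) * level
        if env > threshold:  # static curve: compress above threshold
            gains[i] = (threshold + (env - threshold) / ratio) / env
        else:
            gains[i] = 1.0

    # Delay the audio path relative to the gain-control path; this suppresses
    # the overshoot that a fast attack would otherwise let through.
    x_delayed = np.concatenate([np.zeros(delay), x[:len(x) - delay]])
    return x_delayed * gains
```

With zero delay and a percussive input, the onset briefly passes at full gain before the envelope catches up; delaying the audio by a couple of milliseconds lets the reduced gain line up with the onset, at the cost of added total processing delay.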
As I mentioned earlier, the authors studied the subjects’ preferences in terms of bandwidth. There was a trend for pleasantness to decrease slightly with increasing bandwidth. When the condition changed from 5 to 10 kHz, there was a general trend for the sound to become less pleasant. This only reached significance for the female talker with fast compression, but the same overall trend was there for most of the conditions.
Regarding bandwidth, one of the things they also studied was potential correlations with other predictive variables. A few years ago, we looked at some of these same issues in my lab (Ricketts, Dittberner, & Johnson, 2008). One of the things that we found was that people with steeply sloping hearing losses tend to prefer a narrower bandwidth, and people with a shallower sloping hearing loss, regardless of the magnitude of hearing loss, tended to prefer a broader bandwidth. Moore and colleagues’ (2011) data were consistent with those findings as well. So even though there was a general trend toward decreasing pleasantness with increasing bandwidth, that trend was actually reversed in listeners with shallower slopes to their hearing loss.
The authors also examined speech clarity in background noise. In this case, we see an advantage for extended bandwidth in general. Clarity was significantly higher for the 7.5 and 10 kHz bandwidths than for the 5 kHz bandwidth. That was true for both fast and slow compression and for both male and female talkers.
In terms of compression speed and high-frequency gain, compression speed did not have much of an effect, but there was a small effect for clarity. That was true for the mid and high-level inputs. For the highest-level inputs, there was actually an advantage to slow-acting compression for both pleasantness and clarity, and that advantage for clarity was also present in average speech inputs.
In terms of pleasantness, they found that maximum pleasantness was rated for gains equal to or below their validated prescriptive method. Speech clarity was not affected by changes in gain at high frequencies. In other words, clarity did not really change at all, but pleasantness did, and more gain was negative.
Why is this important? There are many individual differences shown here, but the effect sizes were generally very small. I think there are a few general trends that are important for us to consider, which have some clinical application:
- First, I do not think we have to worry much about alignment delay. The effects are very small and really only for a few instruments. If you have musicians or patients who listen to a lot of percussive instruments and are using fast time constants, that is a case where obtaining hearing aids that introduce an alignment delay may be of interest.
- The effects of time constants are also small, but slower time constants in the offline 5-channel compression systems were judged as slightly more clear, on average.
- Extending the high-frequency bandwidth seems to make things slightly more clear, and it is liked slightly better by those individuals who have a hearing loss with a relatively shallow slope.
- Finally, when we are assigning that extended high-frequency gain, we probably want to assign it to be equal to or less than that prescribed by the CAMEQ2-HF. One of the things that I found of interest in this study is that there were not big differences when they dropped below this prescriptive method. That is pretty important, I think, because this particular prescriptive method prescribes quite a bit more high-frequency gain than the NAL-NL2, which is certainly much more popular in the United States.
For this portion of our review, I decided to mix things up a little bit. There were several good comprehensive studies published in the last few months, but there were also several studies that had what I saw as just one or two important clinical points. So rather than reviewing one article in more depth, I’m going to review a few more articles in a bit less depth, and really point out the important clinical message.
The first two articles (Mueller, Weber & Bellanova, 2011; Kuk & Keenan, 2012) examine a technology that has been introduced fairly recently: reverse directional microphones. Depending on the manufacturer, they are called things such as anti-cardioid or reverse cardioid. The bottom line is that this is directional microphone technology that can be programmed to be most sensitive in a direction other than the front. In both studies, speech was presented either from the front or from the back.
In the case of the Mueller et al. article (2011), speech was presented from the rear and noise was presented from the front. This would be similar to a situation where you are the driver in a car talking to someone in the backseat. The trend that you see in both articles is really the same if we compare performance in omnidirectional mode to the reverse directional mode, and we focus just on speech arriving from the rear. We see a large and significant advantage for the reverse cardioid microphones in both studies.
This is not very surprising, but it does point out that there is a relatively new technology out there, and for some very specific listening situations, like driving a car and listening to a talker behind you, these reverse directional microphones can be quite advantageous. I should also point out that, as we see in the Mueller et al. (2011) study, even a very good performing directional microphone can actually make things worse than an omnidirectional microphone when you are in noise and trying to listen to a speech signal from behind. This of course will depend on the automatic default polar pattern for such a listening situation.
Hearing Aid Placebo Effect
There was a very interesting study many years ago by Ruth Bentler and colleagues (Bentler, Niebuhr, Johnson, & Flamme, 2003) looking at a placebo effect with digital hearing aids. That study found that just by labeling a hearing aid as digital, you could obtain a very positive response from listeners. So we have known for quite some time that there is indeed a placebo effect. This newer study by Dawes, Powell and Munro (2011) points out that we need to control for placebo effects in hearing aid trials. The manipulation was not a labeling of digital versus non-digital; it was just a very simple labeling of modern technology. The researchers labeled the technology as either conventional or new. Subjects were fit with exactly the same hearing aid, verified with probe-microphone techniques to exactly the same targets. In one case, they were told it was new technology, and in the other case they were told it was conventional technology. But, again, it was exactly the same hearing aid.
The authors did sound quality ratings for comfort, clarity, overall impression, and overall sound quality. Sure enough, the new hearing aid was rated as significantly better than the conventional hearing aid in every case, even though new was exactly the same as old. I found it interesting that speech recognition even trended slightly higher. Although 2% was a very small difference, it is curious that people would obtain slightly better speech recognition scores. This suggests that even on something as straightforward as a speech recognition task, labeling a device as new may lead the patient to try harder.
At the end of this study, they asked the subjects their preference for the new versus conventional technology. About a quarter of participants said there was not really a difference between the two; three-quarters preferred the new technology; no one expressed a preference for the conventional technology.
Clinical Measures of Directional Performance
This is another publication from Wu and Bentler (2012b), and I think it is directly applicable for clinic. They were looking at clinical measures of directivity and how they matched up with laboratory measures of directivity. I know that Wu and Bentler, like myself, are great advocates for clinically evaluating directional microphones, because we know they do fail over time. Most commonly, dirt and moisture can get into the microphones or their ports. We do see decreased performance over time, so it is important to evaluate them so that they can be repaired. Sometimes that repair is as simple as changing microphone screens.
In this study (Wu & Bentler, 2012b), the directivity index was measured across a range of hearing aids; their laboratory-based directivity was quite variable. They also measured the clinically-based directivity of these instruments using either a front-to-back ratio or a front-to-side ratio. In order to compare these data on the same scale, they plotted the relative directivity, and came up with a normative scale where you have the relative directivity of the instrument.
One interesting finding was that there was much stronger agreement for the front-to-side ratio than the front-to-back ratio. These studies were conducted both in the test box and in the free field. The authors concluded that if you want the most sensitive clinical measure in terms of reduced directivity, maybe what should be looked at is front-to-side ratio. Otherwise, you may miss a directional microphone that is misbehaving early on before it gets to the point where it is really showing poor directional performance.
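As a concrete sketch of what these clinical measures are, each ratio is just a difference between aided output levels, in dB, with the test signal presented from two azimuths. The numbers below are hypothetical, purely to illustrate the arithmetic; they are not data from Wu and Bentler (2012b):

```python
# Hypothetical aided output levels (dB SPL) in directional mode, measured with
# the test signal presented from three azimuths.
front_db, back_db, side_db = 75.0, 64.0, 64.0

fbr = front_db - back_db  # front-to-back ratio, dB
fsr = front_db - side_db  # front-to-side ratio, dB
print(f"FBR = {fbr:.1f} dB, FSR = {fsr:.1f} dB")

# A simple degradation check against an assumed baseline for the same model:
baseline_fsr = 15.0  # assumed baseline value, for illustration only
if baseline_fsr - fsr > 3.0:
    print("FSR has dropped more than 3 dB from baseline; inspect the microphones")
```

The same subtraction works whether the levels come from a test box or free-field measurement; the clinical question is simply whether the ratio has fallen relative to what that model normally produces.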
The next study I’ll mention was by Souza, Wright, and Bor (2012). They studied different compression conditions and their effects on speech perception. One of the things we know from past research is that preference for and benefit from certain compression parameters seems to be relatively individualized. That is, some patients prefer one type of processing and others prefer another. This was a study looking at underlying factors of why people might perform better with one type of compression versus another.
In Souza et al. (2012), they used a simulated compression system. It was a 16-channel compression system with relatively independent channels, which, I might mention, is somewhat rare. They were comparing whether linear processing versus compression affected vowel identification. They tested a normal-hearing group as well as two hearing-impaired groups, classified by sloping hearing loss and flat hearing loss. For the normal-hearing group, there was really no significant difference in vowel identification for linear versus multichannel compression.
There was, however, a big difference for the two hearing-impaired groups. In both hearing-impaired groups, the multichannel compression degraded performance; however, the degradation in performance was greater for flat hearing losses, and flat hearing losses are associated with broader auditory filters in the low frequencies. Even though this was not a production hearing aid and is not using the same level of overlap that we see in many commercial hearing aids, this lends some evidence that our patients with broader auditory filters, specifically those with flat hearing losses, may be more susceptible to difficulties with multichannel compression, especially high numbers of channels with relatively fast time constants. They may be more likely to perform more poorly with this technology.
Volume Controls and Multiple Memories
The next study that I thought was quite interesting was a study by Shilpi Banerjee (2011). She was looking at what people do relative to having volume control wheels (VCW) and multiple memory capability. Hearing aids have become more automatic over the years, but it’s still fair to question if this is what patients want. We have seen data that suggest that even though patients tend to like fairly automatic hearing aids, they still like having a VCW. So Banerjee (2011) was interested in looking at whether patients actually use the volume control and/or multiple memories in a hearing aid where there was some sort of automatic setting.
This was conducted both by survey and by data logging within the hearing aid. For a period of time, subjects had a VCW; for another period of time, they had multiple memories; and for a period of time, they had both. For the vast majority of observations, patients either remained, or reported remaining, in their default settings, and those settings were the same in both ears. We did see that for a period of time, patients did report changing and did change their volume, and/or did use multiple memories, or both.
The altered settings of the hearing aid were most often used in difficult listening situations, which makes some sense. If listeners are having difficulty, they try to change something. We certainly see that, despite the fact that hearing aids have become very automatic, patients do prefer to be able to adjust their hearing aids in some situations.
Effects of Noise Reduction for Children
Next I wanted to review a few studies by Andrea Pittman (2011a; 2011b). The first study (Pittman, 2011a) looked at children’s ability to categorize words with auditory and visual distractors, both in quiet and in the presence of noise, in normal-hearing and hearing-impaired listeners. She looked at how many trials it takes to learn novel words. I want to focus most on the question, “What happens in hearing-impaired listeners with regard to turning noise reduction on?” Word categorization was not affected positively or negatively by noise reduction. That is similar to previous findings suggesting that commercial implementations of noise reduction do not have a significant effect on speech recognition or word categorization.
If we look at the other study (Pittman, 2011b), however, we see that noise reduction can show some benefits. She was looking again at the trials of learning new words in younger versus older children. Her results showed that both groups learn words much more quickly in quiet than they do in noise. The other thing we noticed was that, for the young children, when noise reduction was turned on in the presence of noise, there was really no effect. But for the older children, word learning improved significantly. This means that the older children learned novel words more quickly in a smaller number of trials. Pittman concluded that perhaps less gain in noise helps these older children, and maybe there are some benefits to noise reduction for word learning in those older children.
Mueller: For those of you who are new to the Vandy Journal group, we dedicate a moment each session to present the Commodore Award. This is in reference to Commodore Cornelius Vanderbilt, who made a very generous donation in 1873, which prompted the founding of Vanderbilt University. The Vanderbilt sports teams, in fact, are known as the Commodores, or just the Dores. In each meeting of our Journal Club, the guest presenter selects an article worthy of the Commodore Award, typically one that has particular importance, significance, or relevance, or is unique in some way. Todd, what article did you choose for this session’s Commodore Award?
Ricketts: There were several excellent articles, but the Commodore Award goes to an article by Robyn Cox and colleagues (Cox, Schwartz, Noe, & Alexander, 2011) who examined an important clinical question that has been examined over and over again, “What is the preference among adult patients for one versus two hearing aids?”
These authors asked, “What proportion of patients with symmetrical hearing loss prefer one or two hearing aids after being fitted for a period of time?” And, further, “Are there pre-fitting variables that can be used to predict which patient will prefer one hearing aid rather than two?” Again, this was a trial where they allowed patients to be fitted with one or two hearing aids to see which they really liked.
This matters, quite significantly, because most practitioners believe that the use of two hearing aids is the ideal fitting for adults with bilateral symmetrical hearing loss. And I think that we have seen a lot of laboratory studies to support that. However, previous research has consistently shown that a substantial portion of these patients actually prefer to use only one hearing aid. Given that difference in the research, if we default to bilateral fittings, that can lead to a perception that we push hearing aids, even though we are trying to give patients the most benefit possible. Defaulting to a unilateral fitting is expected to limit the potential benefits from hearing aids, given that a lot of those bilateral benefits are, in fact, real.
So Cox et al. (2011) looked at 94 subjects, 50-85 years old. They were bilaterally fit with 2005-2007 era hearing aids. One of the inclusion criteria was that they had to be open-minded about using one or two hearing aids. The subjects participated in a 12-week field trial. The subjects had three weeks structured where they were on a schedule to try one versus two, and then nine weeks unstructured where they could choose to use one or two hearing aids. After the field trial, each subject stated his or her preference for one or two hearing aids and completed self-report outcome questionnaires. Prior to these trials, the investigators measured potential predictors, including demographic, audiometric, auditory lifestyle, personality, and binaural processing variables.
At the end of this trial, they found that 54% of the subjects preferred using two hearing aids. In other words, about half preferred two and about half preferred one. Interestingly, audiometric hearing loss, previous experience, and auditory lifestyles were not predictive of aided preference. There was no simple predictor of which individuals might prefer one versus two hearing aids. I want to point out that most people who indicated a preference for one or two hearing aids were either reasonably or very certain that that is what they wanted.
So what were the reasons for preference? A greater proportion of subjects said they preferred one hearing aid because of comfort issues, though some people preferred two because of comfort issues. A larger percentage of people also preferred one hearing aid over two for quality. A larger proportion of the group that preferred one said it met their needs.
So where did we see big differences? For those that preferred one, they reported the other hearing aid was too loud and that the telephone was more comfortable with one. A large percentage of those subjects that preferred two reported that they felt better balance, more clarity, and that having just one hearing aid was too soft.
In terms of group differences, subjects who preferred two hearing aids tended to report slightly better real-world outcomes. They generally reported more hearing problems without hearing aids in daily life. They experienced more binaural loudness summation, and they had ears that were more equivalent in dichotic listening tasks. Specifically, those individuals who had a very strong right-ear preference tended to prefer one hearing aid over two in greater proportion.
Why is this important? These authors tried a variety of ways to predict the best approach for determining whether people wanted one or two. But even though they tested these people on multiple measures, they were able to accurately predict preference for only about two-thirds of subjects. To that end, the authors suggested recognizing that many patients who seem to be ideal candidates for bilateral hearing aids may actually prefer to wear only one. They suggest conducting a candid, unbiased, systematic field trial allowing each patient to compare unilateral and bilateral fittings in daily life, while acknowledging that this might necessitate more fitting sessions. Given that it will likely take more time and intervention on the part of the clinician, you have to weigh that cost against the potential for increased satisfaction and for selecting the most cost-effective, patient-centered solution.
Mueller: Thank you, Todd. Let’s move into questions.
Questions & Answers
Are there any other terms being used in industry to describe alignment delay?
Ricketts: Not that I am aware of. Compression time constants are very manufacturer-specific. Typically, the only manufacturers that would be interested in this are those with very fast-acting compression, or at least fast attack times, which most manufacturers do have. Another term that I have heard is “overshoot suppression.” This is not going to be something that is programmable by the clinician, but it is something that certain researchers have talked about in the past. I think the good news about alignment delay is that it probably does not matter very much.
How widespread is the clinical use of speech recognition in noise testing?
Ricketts: I think there is a wide range of use of speech-in-noise testing, but I do not have survey data on how many people actually use it. Gus, are you aware of any data?
Mueller: Actually, there is an article we published at AudiologyOnline two years ago that includes some data related to this. There have been many others, but I remember that what we found was that the most popular speech-in-noise test was the QuickSIN: about 10% of respondents said they used this test routinely, and about 20% said they used it “some of the time.” Now, how often “some of the time” is, I do not know. These were all people who were fitting hearing aids. About two-thirds of them were audiologists and one-third were dispensers.
Frequently, people talk about the HINT. But how do we get a copy to use clinically?
Ricketts: I would contact the House Ear Institute about that question. I know that you can purchase the HINT for Children (HINT-C) directly from them. Since the time of its initial development, the HINT has been made into an application in which all the testing is done by a dedicated device. But there are several speech-in-noise tests out there. Gus mentioned one of the popular ones for adults, the QuickSIN. The BKB-SIN (Bamford-Kowal-Bench Speech-in-Noise) is also available; Etymotic Research published both of those.
This next question comes in from Dr. Ruth Bentler.
Without the article in front of me, I am curious why the Pittman (2011a; 2011b) group thinks less gain in noise might help older children, rather than easier listening for multitasking. Did they actually measure the impact of the digital noise reduction on the gain output? Depending on the algorithm, the digital noise reduction would have little impact on the gain for speech.
Ricketts: Absolutely true. Their explanation of why digital noise reduction might have helped was a little muddled, in my opinion, as well. I think what they are suggesting is that there is certainly less gain between the speech signals. My take on the results is that perhaps not having to listen to such a high level of amplification all the time between the speech signals might help the older children. Maybe this is related to listening effort or auditory fatigue. Certainly, I would agree that the output level for speech when noise is present should not be any different.
Two other variables I can think of in regard to the “one versus two” article would be a greater occlusion effect with two hearing aids, and budgetary issues.
Ricketts: Actually, Dr. Cox mentions both of those variables in the article (Cox et al., 2011); I just did not have time to discuss them. She points out that the study was conducted as open-fit hearing aids were first being introduced. There are actually good data showing that rejection of bilateral amplification is lower with open instruments than with closed instruments. So the reduction of some of the negatives of bilateral fittings in an open-fitting context might change these data to some extent. Certainly, budgetary issues influence the decision to get one hearing aid versus two.
Mueller: Was this a VA-funded study? Were these free hearing aids or not?
Ricketts: Half the group, I believe, were VA patients, and half were not. My understanding is that part of the group was recruited through Dr. Cox’s lab and part through the VA.
Mueller: Okay. Another point related to what you just stated: one of the reasons people liked one hearing aid was improved sound quality. I am wondering if that goes back to the occlusion effect. Normally, you would not expect improved sound quality with one hearing aid versus two; maybe what they were referring to was the occlusion effect.
Ricketts: Yes, I am curious about that myself.
Do you think there are implications for the real in-the-drawer rate based on the Cox et al. (2011) study?
Ricketts: Certainly, I think that is the case. Given how many people went with their preference for one hearing aid, and how strong that preference was, there is probably a relatively high in-the-drawer rate for hearing aids. I think that is one of the reasons the outcome of this study is so important.
References
Banerjee, S. (2011). Hearing aids in the real world: Use of multimemory and volume controls. Journal of the American Academy of Audiology, 22(6), 359-374.
Bentler, R., Niebuhr, D., Johnson, T., & Flamme, G. (2003). Impact of digital labeling on outcome measures. Ear and Hearing, 24, 215-224.
Cox, R. M., Schwartz, K. S., Noe, C. M., & Alexander, G. C. (2011). Preference for one or two hearing aids among adult patients. Ear and Hearing, 32(2), 181-197.
Kuk, F., & Keenan, D. (2012). Efficacy of a reverse cardioid directional microphone. Journal of the American Academy of Audiology, 23(1), 64-73.
Moore, B. C. J., Fullgrabe, C., & Stone, M. A. (2011). Determination of preferred parameters for multichannel compression using individually fitted simulated hearing aids and paired comparisons. Ear and Hearing, 32(5), 556-568.
Mueller, H. G., Weber, J., & Bellanova, M. (2011). Clinical evaluation of a new hearing aid anti-cardioid directivity pattern. International Journal of Audiology, 50(4), 249-254.
Pittman, A. (2011a). Children’s performance in complex listening conditions: effects of hearing loss and digital noise reduction. Journal of Speech, Language, and Hearing Research, 54(4), 1224-1239.
Pittman, A. (2011b). Age-related benefits of digital noise reduction for short-term word learning in children with hearing loss. Journal of Speech, Language, and Hearing Research, 54(5), 1448-1463.
Ricketts, T. A., Dittberner, A. B., & Johnson, E. E. (2008). High-frequency amplification and sound quality in listeners with normal through moderate hearing loss. Journal of Speech, Language, and Hearing Research, 51(1), 160-172.
Souza, P. (2012, August 6). 20Q: Cognition measures: They might change the way you fit hearing aids! AudiologyOnline, Article 20928. Retrieved from https://www.audiologyonline.com/articles/august-20q-by-pamela-souza-6925
Souza, P., Wright, R., & Bor, S. (2012). Consequences of broad auditory filters for identification of multichannel-compressed vowels. Journal of Speech, Language and Hearing Research, 55(2), 474-486.
Wu, Y. H., & Bentler, R. A. (2012a). Impact of visual cues on directional benefit and preference: Part II- field tests. Ear and Hearing, 31(1), 35-46.
Wu, Y. H., & Bentler, R. A. (2012b). Clinical measures of hearing aid directivity: Assumption, accuracy and reliability. Ear and Hearing, 33(1), 44-56.
Wu, Y. H., & Bentler, R. A. (2012c). The influence of audiovisual ceiling performance on the relationship between reverberation and directional benefit: perception and prediction. Ear and Hearing, 33(5), 604-614.
Cite this content as:
Ricketts, T. (2013, January). Vanderbilt audiology’s journal club with Dr. Todd Ricketts: Hearing aid selection and fitting - guidance from recent research findings. AudiologyOnline, Article #11488. Retrieved from https://www.audiologyonline.com.