
Vanderbilt Audiology's Journal Club with Dr. Todd Ricketts - Hearing Aid Technology

Todd Ricketts, PhD
January 26, 2015

Editor’s Note: This text course is an edited transcript of a live webinar. Download supplemental course materials.

Dr. Todd Ricketts:  Welcome, and thank you for attending.  There were many interesting articles on hearing aid technology and fitting procedures this past year, and I had a tough time narrowing it down to just a few select articles to discuss today.  I decided to focus on those that I thought have interesting clinical and real-world implications.  It is helpful for clinicians when research looks at how things work in the real-world settings that our patients are in every day.

Article 1:  Relating Working Memory to Compression Parameters in Clinically-Fit Hearing Aids

The first article I selected examines working memory and how it relates to compression parameters (Souza & Sirow, 2014).  This is continuing work over the last few decades where investigators have tried to better select compression processing for individual listeners.  I think we are getting closer to that goal. 

There are a number of previous studies that have suggested that working memory may influence speech recognition performance as a function of compression speed (Foo, Rudner, Ronnberg, & Lunner, 2007; Gatehouse, Naylor, & Elberling, 2006; Lunner & Sundewall-Thoren, 2007; Ohlenforst, Souza, & MacDonald, 2014), but most of these studies have been under very controlled laboratory conditions, often with basic one- or two-channel compressors. 

The study by Souza and Sirow (2014) is a relatively small study with 27 participants, but they explored whether you might see similar effects in clinically-fitted hearing aids.  There has been considerable speculation that fast-acting compression alters the speech envelope, and that this might create some difficulty for individuals matching the altered acoustic signal to long-term memory stores.  Specifically, this might be more difficult for listeners with low working memory.  If we could find some support for a real-world relationship, it may further support development of clinical measures for optimizing selection of compression parameters in hearing aids. 

Methods

In this study, the authors looked at 27 adults who were patients in a private practice audiology clinic.  They were all fitted with mini receiver-in-the-canal (RIC) instruments programmed with the Desired Sensation Level version 5 (DSL v5) prescription and appropriate real-ear verification.  The investigators measured working memory using a reading span test.  In this task, words are flashed on a computer screen every 800 msec in strings of increasing length, and at the end of each string, the listener judges whether it formed a sensible sentence. 

Most adults can tell you whether it was a sentence or not, but one of the things that becomes more difficult, particularly when the string of words becomes longer, is remembering what the first or the last word in the string was.  In this particular task, the outcome measure is recall for the first or last word in the string.  
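As a concrete illustration of this scoring rule, here is a minimal sketch in Python; the trial format and function name are hypothetical, invented for illustration rather than taken from the published procedure.

```python
# Hypothetical reading-span scorer, assuming the task described above:
# strings of words are flashed one at a time, and recall is scored for the
# first or last word of each string. Trial format is invented.
def score_reading_span(trials):
    """trials: list of (words, probe, response); probe is 'first' or 'last'.
    Returns the proportion of correctly recalled probe words."""
    correct = 0
    for words, probe, response in trials:
        target = words[0] if probe == "first" else words[-1]
        correct += int(response.lower() == target.lower())
    return correct / len(trials)

# Example: two strings; the second recall is wrong.
trials = [
    (["dogs", "chase", "cats"], "first", "dogs"),
    (["rocks", "sing", "loudly"], "last", "softly"),
]
print(score_reading_span(trials))  # 0.5
```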

In addition, these investigators measured speech recognition in noise using two lists from the QuickSIN.  They did this testing at a loud-but-okay level, and for most listeners, that was 83 dB SPL.  The participants had a wide range of hearing losses, but most were typical patients with downward sloping hearing loss. 

Souza and Sirow (2014) also used three commercial devices and a broad range of compression parameters.  Hearing aid A had the slowest attack and release times, which were in the range of seconds.  Hearing aid B was slow with a 5 msec attack and a 1000 msec release.  That particular hearing aid can also be manually set to faster time constants, so they performed a trial with hearing aid B-slow and hearing aid B-fast.  Hearing aid C was set with the fastest time constants.  Ten participants were tested in all four conditions, and 17 were tested with 3 of the 4 conditions. 
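To make the role of attack and release times concrete, here is a minimal sketch of a single-channel wide dynamic range compressor in Python.  The one-pole smoothing and gain rule are textbook forms, not any manufacturer's algorithm, and the threshold, ratio, and test signal are invented; the parameter pairs loosely echo the slow (5/1000 msec) and syllabic settings discussed above.

```python
# A single-channel WDRC sketch (textbook form, not a manufacturer's
# algorithm). The attack/release coefficients are one-pole smoothers; above
# threshold, gain is reduced so output grows at 1/ratio dB per input dB.
import numpy as np

def compress(x, fs, attack_ms, release_ms, threshold_db=-40.0, ratio=3.0):
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))   # fast tracking of onsets
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))  # slower decay of gain
    level, y = 1e-6, np.empty_like(x)
    for i, xi in enumerate(x):
        mag = abs(xi)
        coef = a_att if mag > level else a_rel
        level = coef * level + (1.0 - coef) * mag      # smoothed envelope
        level_db = 20.0 * np.log10(max(level, 1e-6))
        gain_db = min(0.0, (1.0 / ratio - 1.0) * (level_db - threshold_db))
        y[i] = xi * 10.0 ** (gain_db / 20.0)
    return y

# A 4 Hz on/off modulation stands in for syllabic envelope fluctuations.
fs = 16000
t = np.arange(fs) / fs
speechlike = np.sin(2 * np.pi * 220 * t) * (0.05 + 0.95 * (np.sin(2 * np.pi * 4 * t) > 0))
slow = compress(speechlike, fs, attack_ms=5, release_ms=1000)  # like hearing aid B-slow
fast = compress(speechlike, fs, attack_ms=1, release_ms=50)    # syllabic time constants
```

With the slow setting, gain changes ride over whole utterances and the speech envelope is largely preserved; with the fast setting, gain changes track individual syllables, which is exactly the envelope alteration the working memory hypothesis is concerned with.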

Results

Both groups (low working memory and high working memory) had similar performance for slow attack and release in hearing aid A and the hearing aid B-slow setting.  We start to see some performance separation, however, with the faster time constants.  Those with high working memory performed significantly better on the QuickSIN with fast time constants as demonstrated by their signal-to-noise ratio (SNR) scores.  They could repeat the sentences at increasingly adverse SNRs.  For hearing aid C, there was a difference in excess of 5 dB SNR on the QuickSIN between the low and high working memory groups; it is important to note that a significant difference on the two-list form of the QuickSIN is about 2 to 3 dB, so these certainly would be considered significant differences between the groups. 
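For readers wanting to see where these SNR numbers come from, the sketch below applies what I understand to be the standard QuickSIN scoring rule (SNR loss = 25.5 minus the key words correct in a 30-word list, averaged across lists); the scores are fabricated to mirror a greater-than-5 dB group difference, not taken from the study.

```python
# Standard QuickSIN scoring (my understanding of the published rule): each
# list has 30 key words, and SNR loss = 25.5 - words correct, averaged over
# lists. The scores below are invented for illustration.
def quicksin_snr_loss(words_correct_per_list):
    return sum(25.5 - w for w in words_correct_per_list) / len(words_correct_per_list)

high_wm = quicksin_snr_loss([24, 25])   # two-list average -> 1.0 dB SNR loss
low_wm  = quicksin_snr_loss([18, 19])   # two-list average -> 7.0 dB SNR loss
print(low_wm - high_wm)                 # 6.0 dB, beyond the 2-3 dB critical difference
```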

When we look at individual performance as a function of working memory, we see a much more scattered picture, particularly for the slow compression settings.  In summary, working memory was not a good predictor of who does well with slower compression time constants.  Hearing loss and age were relatively good predictors. 

However, when we go to the fast time constants, particularly the fastest, working memory alone accounted for about 30% of the variance in the data.  When they combined working memory with hearing loss, that information accounted for about 70% of the total variance.  These are significant predictors for differences in speech recognition with fast-acting compression.
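For those curious what "percent of variance accounted for" means computationally, below is a small ordinary least squares sketch with fabricated data; the variable names, coefficients, and resulting R-squared values are illustrative only, not the study's.

```python
# Ordinary least squares with fabricated data, to show how "variance
# accounted for" (R^2) is computed for one predictor versus two.
import numpy as np

rng = np.random.default_rng(0)
n = 27
working_memory = rng.normal(50, 10, n)   # reading span score (invented units)
hearing_loss = rng.normal(45, 12, n)     # pure-tone average, dB HL (invented)
snr_loss = 10 - 0.10 * working_memory + 0.10 * hearing_loss + rng.normal(0, 1, n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print(r_squared(working_memory[:, None], snr_loss))  # working memory alone
print(r_squared(np.column_stack([working_memory, hearing_loss]), snr_loss))
```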

This is interesting data, considering the differences in the commercial hearing aids.  It suggests that younger patients with less hearing loss and high working memory may benefit from faster compression time constants, while older patients with more hearing loss, particularly those with lower working memory, may be better off with the slower time constants.  As the authors point out, this continues to add to mounting evidence that the use of cognitive testing may help contribute to an evidence-based method of prescribing appropriate compression attack and release times. 

Article 2:  The Effect of Hearing Aid Technologies on Listening in an Automobile

The next study (Wu, Bentler, & Stanziola, 2013) looked at the newest directional microphone-based signal processing schemes and how they might help listening in a noisy automobile.  They compared three different mini behind-the-ear (BTE) commercial hearing aids.  All three were capable of omnidirectional and conventional adaptive or automatic directional modes, and each additionally offered one of the newer schemes: reverse directionality, side transmission (signal sharing), or side suppression. 

The authors examined speech recognition in an automobile.  They made the recordings in a van travelling 70 miles per hour on I-80 in Iowa.  Listening in a vehicle is certainly one environment that we hear about from our patients.  It also presents a fairly unique listening situation.  There are often no visual cues, particularly for the driver.  Talker position is usually off to the side or behind, and noise positions are often concentrated on the side, with wind noise and engine noise from the front and tire noise coming from underneath.

Methods

In this study, speech recognition performance was compared using the Connected Speech Test (CST).  All the sentences were recorded in a standard van through mini BTE hearing aids with thin tubes fitted to a KEMAR manikin seated in the passenger seat.  Each hearing aid was programmed for a sloping hearing loss using NAL-NL1 gain prescriptions.  The recorded materials were then later presented to listeners via earphones. 

Please refer to your handout to see the seating/listening arrangement in the car.  KEMAR was in the passenger seat, and there was a loudspeaker situated either off to the left side or the back of the seat to simulate someone talking from the side or the back seat. 

What does this different directional technology do?  It depends on the direction of the signal.  Hearing aid 1 was programmed for back directionality, hearing aid 2 for signal sharing and hearing aid 3 for side suppression.  There is a fair bit of similarity across the hearing aids when the signal comes from the front and noise comes from the back.  For hearing aid 1 and hearing aid 2, there was a polar plot that looked very similar to standard directional processing.  For hearing aid 3, there was directional processing in the high frequencies and a little more omnidirectional processing in the low frequencies.

We start to see large differences across the technologies when we present signals from the side.  For hearing aid 1, there is a focus on the side from which the signal is generated.  When the signal comes from the left side, there is more focus on the left hearing aid, and the right hearing aid is minimized, but the SNR in the left hearing aid is about what you would get out of a directional microphone on the left side. 

In hearing aid 2, we see a very different pattern.  We see full-band signal sharing in this case.  Both hearing aids are being fed the same signal, and they are focused on that left side.  In hearing aid 3, we see a hearing aid that is omnidirectional on the left with some suppression of signals from the side for the right hearing aid. 

Finally, when we go to the condition with noise in front and speech behind, we see a reverse directional pattern, where the signal from behind is focused on for hearing aids 1 and 2.  Hearing aid 3 is not designed to have a reverse directionality, so instead, it goes into more of an omnidirectional pattern when the signal comes from behind. 
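A quick way to visualize these descriptions is with idealized first-order microphone patterns.  The sketch below is generic acoustics, not a model of any product's processing: the pattern a + (1 − a)cos(θ) gives an omnidirectional response when a = 1 and a cardioid when a = 0.5, and rotating θ points the lobe backward for a reverse-directional mode.

```python
# Idealized first-order microphone patterns: r(theta) = a + (1 - a)cos(theta).
# a = 1 is omnidirectional, a = 0.5 a cardioid; rotating theta by pi points
# the lobe backward, as in a "reverse directional" mode.
import numpy as np

def first_order_pattern(theta, a=0.5, look_direction=0.0):
    """Magnitude response versus arrival angle (radians)."""
    return np.abs(a + (1.0 - a) * np.cos(theta - look_direction))

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
omni = first_order_pattern(theta, a=1.0)               # equal sensitivity all around
front_cardioid = first_order_pattern(theta, a=0.5)     # standard directional
reverse_cardioid = first_order_pattern(theta, a=0.5, look_direction=np.pi)
```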

Results

In terms of the back directional or steering algorithm (hearing aid 1), one of the important things in these data is that we can compare omnidirectional, directional, and the new technology.  It is important to note that an automatic adaptive directional hearing aid, as typically fit, will usually be in the directional mode in a vehicle: the vehicle is noisy by default, and then speech occurs.  For this reason, I think it is important to compare benefits not only to omnidirectional, but to directional as well. 

The results indicated that the back directional or steering algorithm was better than the omnidirectional mode for the signal coming from the rear, and much better than the directional mode.  In the other comparisons, there was no significant difference between the new technology and either the directional or omnidirectional modes, although there was a trend toward the new technology being at least slightly better than directional, even though the data in this study did not reach significance. 

When we look at signal sharing in hearing aid 2, we see significant advantages relative to the directional and omnidirectional modes.  This hearing aid provides an advantage for both of the listening conditions that were evaluated.  One caveat to signal sharing is that you are getting the same signal in both hearing aids.  You lose some of the interaural time and level difference cues.  Accurate localization is not expected.  Even though there is a benefit here, I think we also have to consider that there is at least the potential for some disruption of localization. 
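The localization caveat can be made concrete with a small sketch: with independent left/right signals there are measurable interaural level and time differences, but with a single shared feed both collapse to zero.  The signals and delays here are synthetic, purely for illustration.

```python
# Synthetic demonstration that full-band signal sharing removes interaural
# level (ILD) and time (ITD) difference cues. Numbers are invented.
import numpy as np

def ild_db(left, right):
    """Interaural level difference: between-ear power ratio, in dB."""
    return 10 * np.log10(np.mean(left**2) / np.mean(right**2))

def itd_samples(left, right):
    """Interaural time difference: lag of the cross-correlation peak
    (negative means the left ear leads)."""
    lags = np.arange(-(len(right) - 1), len(left))
    return lags[np.argmax(np.correlate(left, right, mode="full"))]

fs = 16000
src = np.random.default_rng(1).normal(size=fs // 10)
left = np.concatenate([src, np.zeros(8)]) * 1.0    # nearer ear: earlier, louder
right = np.concatenate([np.zeros(8), src]) * 0.5   # farther ear: later, quieter

print(ild_db(left, right), itd_samples(left, right))        # ~6.0 dB, -8 samples
shared = left                                                # signal sharing: one feed
print(ild_db(shared, shared), itd_samples(shared, shared))   # 0.0 dB, 0 samples
```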

One complication for side suppression was considerable fluctuation in the background noise for some of the recordings, since these were made on a highway with real highway noise.  The authors discussed this at length in the article; the bottom line is that the results for the side suppression technology are more questionable, but it certainly does not appear that this technology is worse than omnidirectional, as standard directional processing can be for signals from the back. 

The authors noted that road noise was relatively high, at 78 dBA, for this particular testing.  They also measured it in sedans at different speeds and noted that to be 60 to 73 dBA, or a fair bit lower.  This might have led to the lower performance that they measured.  Finally, the authors did note that the results were relatively consistent with preference data and the measured SNRs. 

The importance of this data is obvious.  These new microphone technologies can provide benefits in specific vehicle-based listening situations.  I also think that standard directional settings may lead to detriments when listening in a vehicle.  In fact, listeners may be better off manually switching into an omnidirectional mode.  The tradeoffs relative to localization and other factors depend on the technology that is being implemented.  I think the benefits are clear, but the trade-offs are going to require some additional work. 

Article 3:  Impact of Advanced Hearing Aid Technology on Speech Understanding for Older Listeners with Mild to Moderate, Adult-Onset, Sensorineural Hearing Loss

The next study (Cox, Johnson, & Xu, 2014) has very interesting clinical implications.  The authors intended to answer the question, “Do patients fit with premium versus basic hearing aid technologies have different outcomes related to speech understanding and quality of life?”  Outcomes were assessed with laboratory speech testing, standardized questionnaires, and open-ended diaries. 

We know that hearing aids differ greatly by price as a function of level of technology; the higher level devices are generally marketed as better quality.  One of the problems with this sort of marketing is that it ignores the idea of matching the right technology to the individual’s specific communication needs.  This article does a good job of arguing that point.  It is important to individualize our communication needs assessment and then pick appropriate technology.  A lack of difference between the higher-level and lower-level technologies may suggest that differences in technology are not substantial enough for patients to notice, and therefore, not worth the increase in cost. 

What does hearing aid “level” mean?  This has changed over the last few years.  In modern hearing aids, even the most basic levels typically include many features that used to be considered advanced.  These include multichannel compression, directional microphones, digital noise reduction, and feedback suppression. 

When we look at higher level hearing aids, they are distinguished by the fact they have these same basic features, but they include more complex, automatic and adaptive versions of the basic features.  Many of these features are intended to optimize speech understanding.  Some manufacturers add additional features.  These might include things like bilateral data sharing, bilateral signal sharing, learning volume controls, impulse noise reduction, reverberation suppression, and wind noise suppression.  If we examine these features, the differentiation is not so much based on trying to optimize speech recognition, but rather on sound quality or annoyance issues. 

Methods

The investigators examined 25 participants.  They included both new and experienced hearing aid users, and all 25 completed blinded, month-long field trials.  Each of these 25 listeners wore 4 different pairs of hearing aids.  Two of the hearing aid pairs were basic, and two were premium.  These basic and premium-level hearing aids were from two different manufacturers, and all four instruments were mini BTE style hearing aids.  The fittings were compared with the Four Alternative Auditory Feature test, a laboratory speech understanding test with steady-state noise surrounding the listener.  They also administered standardized questionnaires and an open-ended diary in which listeners were asked to nominate five situations where the hearing aid helped and five situations where the hearing aid was not helpful. 

All fittings were based on the NAL-NL2 targets with real-ear verification and rule-based fine tuning.  They looked at bilateral loudness balance, loudness of average speech, loudness comfort, and quality of own voice.  Follow-up and further fine tuning took place within a week of the fitting.  Importantly, any remote controls, learning capabilities or other advanced features in the premium devices were made available to the listeners. 

The questionnaires given after each month included the Abbreviated Profile of Hearing Aid Benefit (APHAB), which was completed in both aided and unaided conditions as a measure of benefit, the Speech Spatial and Qualities of Hearing Scale (SSQ), and the Device Oriented Subjective Outcomes Scale (DOSO), which examined hearing aid performance.  They also looked at change in overall quality of life and examined diaries related to hearing when listening with the trial hearing aids.

Results

The experienced group did have more hearing loss than the new group, so the two groups were somewhat different; however, there were no differences between the new and experienced groups in terms of outcomes.  The two groups, therefore, are lumped together for the rest of the analysis.  The laboratory-based speech recognition test was a four-alternative forced-choice word identification task.  There were very clear benefits of providing amplification, particularly for soft and average input levels, but no real differences across the four different hearing aid technologies. 

When the researchers examined hearing aid benefit with the APHAB in a composite score, there were also very large and clear advantages to amplification.  However, there were no differences across the different hearing aid technologies. 

The investigators also looked at whether there were interactions between the technologies and the outcome measures, and they did not find any.  They took the three outcome measures, including the APHAB, and placed them on the same scale, coming up with an overall composite of the three self-report inventories.  They found no differences between the four hearing aid technologies. 

By and large, across all four hearing aid technologies, patients reported a significant increase in quality of life.  There were only a very small number of patients that either saw no change or a slight decrease, and that only occurred with the basic technologies. 

In the diaries where participants nominated positive situations, the authors categorized the situations into areas including speech perception, sound quality, music perception, and localization.  One of the most striking things about these diaries is that the overwhelming majority of nominated situations related to speech perception.  The authors argue that this provides some evidence of how important speech perception is to hearing aid wearers relative to other outcomes.  If you look at all of the other outcome categories, there were very few times that any one type was nominated.  Importantly, there were no differences between the four hearing aid technologies across any of the outcomes, including speech perception.

When you look at negatives in the diaries, there was, again, a dominance of speech perception.  The second most common entry was that there were no problems to report.  The only difference in the negatives is that some sound quality issues were nominated, but there were no significant differences across the four technologies. 

The bottom line from this research is that we have additional evidence that amplification results in large and significant benefits.  Issues surrounding speech understanding appear to be by far the most pivotal for patients when they assign benefit for different hearing aid technologies.  As a consequence, technologies that do not impact speech understanding are likely to have little effect on general outcomes.  This is one of the reasons that we saw no real effect of some of the premium hearing aid features on these outcomes.  There were no statistically significant or clinically important differences in improvement between the basic and premium hearing aid technologies for either group of listeners, and that likely is because they were focused on speech understanding. 

We cannot assume that more expensive hearing aids will be better.  My bias is that the role of the clinician in selecting, fitting, and optimizing hearing aid technology is as important as the technology itself.  We cannot assume more is better; we have to accurately apply technologies that address the individual patient’s listening needs.  Some patients do benefit from additional technologies, even those with fairly small impacts on listening in daily life, if they remove an annoyance or a difficulty that the patient has.  This is summarized well by the authors, who stated that comprehensive best-practice fitting protocols should be followed if we want to optimize results for every patient.

Article 4:  The Effect of Hearing Aid Noise Reduction on Listening Effort in Hearing-Impaired Adults

The last article is from Desjardins and Doherty (2014).  There has been considerable interest in noise reduction for a long period of time, but I think people have been somewhat disappointed in this technology because they have looked at it for speech recognition benefits.  This article examines other potential benefits for noise reduction.  Specifically, the authors looked at benefits related to listening effort.  Given the fact that there is longstanding evidence that the majority of noise reduction algorithms do not significantly improve speech recognition, is there potential for benefits in other areas, such as listening effort, as measured by a dual-task paradigm? 

There are many different ways to define listening effort.  Desjardins and Doherty (2014) define listening effort similar to many others, and that is as the cognitive resource requirements necessary for an individual to understand speech.  In other words, it is the amount of mental capacity that performance of a listening task occupies, most importantly in a capacity-limited system. 

This is the idea behind using a dual-task paradigm.  If we ask an individual to process, understand and repeat speech, that will of course use up resources.  If it becomes increasingly difficult to do that, there are fewer and fewer resources left over to do other tasks.  If we introduce a secondary task, then performance on the secondary task should decline as effort increases.  If listening effort is reduced by the hearing aid’s signal processing, it may lead to less fatigue and a variety of other potential benefits. 
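The paradigm's arithmetic is simple enough to sketch in a few lines.  The proportional dual-task cost below is one common way to quantify effort (an assumption on my part, not necessarily the metric these authors used), and the tracking scores are invented.

```python
# Dual-task logic: effort shows up as the decline in secondary-task
# performance when the listening task is added. Values are invented.
def dual_task_cost(baseline, dual):
    """Proportional drop in secondary-task performance (0 = no cost)."""
    return (baseline - dual) / baseline

tracking_alone  = 0.90   # time-on-target, tracking task alone
tracking_nr_off = 0.60   # tracking while listening, noise reduction off
tracking_nr_on  = 0.72   # tracking while listening, noise reduction on

print(dual_task_cost(tracking_alone, tracking_nr_off))  # 0.33 -> more effort
print(dual_task_cost(tracking_alone, tracking_nr_on))   # 0.20 -> less effort
```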

Methods

In this study (Desjardins & Doherty, 2014), there were 12 experienced listeners.  They were all fitted with commercial BTE hearing aids with a disposable closed canal mold.  The noise reduction used was a commercially-available modified form of spectral subtraction.  For the primary speech recognition task, they used the Revised Speech in Noise (R-SPIN) test, which has both low and high-probability sentences.  They were presented to participants in a female two-talker babble.  They individualized the listening conditions so that performance was moderately difficult, which was 76% correct on the R-SPIN, and then they had a difficult listening condition, which was 50% correct on the R-SPIN.  They varied the SNR individually to obtain those values. 
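As context for the processing involved, here is a minimal sketch of classic magnitude spectral subtraction; the commercial algorithm in the study is a modified, proprietary form, so treat this only as the general family of processing, with made-up frame sizes and signals.

```python
# Textbook magnitude spectral subtraction: estimate the noise spectrum from
# a speech-free stretch, subtract it from each frame's magnitude, keep the
# noisy phase, and impose a spectral floor to limit musical noise.
import numpy as np

def spectral_subtract(x, noise_frames, frame=256, floor=0.05):
    hop = frame // 2
    win = np.hanning(frame)
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(win * x[i * hop:i * hop + frame]))
         for i in range(noise_frames)], axis=0)
    out = np.zeros_like(x)
    for i in range((len(x) - frame) // hop + 1):
        spec = np.fft.rfft(win * x[i * hop:i * hop + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
        out[i * hop:i * hop + frame] += win * np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), frame)
    return out  # overlap-add reconstruction (up to a constant window gain)

fs = 16000
rng = np.random.default_rng(0)
lead_in = 0.3 * rng.normal(size=fs // 4)                      # noise-only stretch
speechy = np.sin(2 * np.pi * 440 * np.arange(fs) / fs) + 0.3 * rng.normal(size=fs)
cleaned = spectral_subtract(np.concatenate([lead_in, speechy]), noise_frames=20)
```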

The secondary task was visual motor tracking.  In this task, a target rotated along an elliptical track, and the participant’s task was to visually track the target.  The participants’ time on target was scored during the presentation.

In addition to listening effort, they also included some predictive measures.  Working memory was assessed using the Reading Span task, and perceptual processing speed by the Digit Symbol Substitution Test (DSST) from the Wechsler Adult Intelligence Scale-III.  In this latter task, there are pairs of numbers and symbols, and the task is to code as many numbers as possible with the correct symbols in a two-minute window.  Lastly, they looked at self-perceived ease of listening as a subjective measure of listening effort. 

Results

The investigators examined the gain change produced by the noise reduction system.  The system reduced gain for the combined speech-in-noise signal, a little more in the low frequencies; otherwise, the gain change was similar for the difficult and moderate conditions. 

There was no significant change in ease of listening between noise-reduction-on and noise-reduction-off for either the moderate or difficult conditions.  There was a small trend toward better ease of listening in the moderately difficult condition with noise reduction on; one can question whether that might have reached significance with more than 12 subjects. 

In terms of listening effort, importantly, they saw no change in speech recognition, suggesting that what happens on the secondary task is a reflection of a change in listening effort.  They found significantly lower listening effort, but only in the more difficult environment.  The more difficult environment was, on average, about 1.6 dB SNR, compared to about 4.4 dB SNR in the easier condition.  There was no interaction with context, so the same effect was found for both the low and high context sentences of the R-SPIN, but there was a relatively clear change in listening effort. 

In this study, there was a trend for individuals with faster processing speed to expend less listening effort with the noise reduction activated in the more difficult listening conditions.  We do not know if this trend is significant, but it echoes the Souza and Sirow (2014) compression study I reviewed first.  Those with the best processing and cognitive function and the least hearing loss seemed to benefit more from complex signal processing aimed at improving listening in noise.  While these technologies can be useful, they seem to benefit those with the least impoverished hearing systems. 

These data support the benefits from noise reduction that are unrelated to changes in speech recognition.  They also suggest that listening-effort benefits from noise reduction are most likely found in the most difficult listening situations, which was consistent with data from Sarampalis et al. (2009) with normal hearing listeners.  Finally, context does not seem to limit the benefits.

In Closing:  A Few Clinical Tidbits

I have a few final clinical tidbits for you.  I could not limit this to just four research articles, so I want to briefly summarize some additional articles that had interesting implications. 

Tidbit 1:  Directional Microphones May Lead to Loss of Audibility for Sounds in the Rear Hemisphere, Increasing Localization Problems

This study by Brimijoin, Whitmer, McShefferty, and Akeroyd (2014) is what I would consider a preliminary exploratory study, but it has interesting outcomes.  The investigators asked listeners with hearing impairment to turn and face a female talker while surrounded by male-talker babble.  A motion tracking system was used to record their movement path as they attempted to find the female talker.

For smaller off-axis angles, people did better with the directional mode.  They were able to find targets faster than in the omnidirectional mode.  In the larger off-axis target angles, importantly, everyone was equally accurate in finding a target.  When fitted with directional microphones, however, patients took longer and used more complex movements.  In fact, they frequently made turns in the wrong direction.

The authors argue that directional processing attenuates sounds from the rear so that they are not heard clearly; rather than simply turning toward a new talker, listeners have to search for the signal, which makes their movements more complex.  When trying to locate talkers in complex environments, patients may complain about having to find signals in the rear hemisphere because, in some cases with a directional hearing aid, those sounds may be lost. 

Tidbit 2:  Long-Term Language Benefits from Frequency-Lowering Technologies May be Limited

The next study (Bentler, Walker, McCreery, Arenas, & Roush, 2014) is one of two from this year that looked at frequency-lowering technologies.  There are many studies, particularly with children, that have shown speech recognition benefits, specifically for high-frequency consonant sounds.  So do these speech recognition benefits have long-term implications for speech and language benefits?  This study looked at 66 children with hearing loss who were 3, 4, and 5 years old.  These children were fitted with nonlinear frequency compression or conventional amplification for at least six months. 

They examined demographic characteristics, audibility, speech and language outcomes and speech perception, though the speech perception was only measured in the five-year-olds across the two technology groups.  Importantly, the data revealed no differences in speech or language outcomes or speech perception between the technologies.

One of the things that we are seeing in the data with frequency-lowering technologies is that the long-term benefits related to speech and language are perhaps a bit less than we had hoped for.  It may be that there are some benefits to this technology in individual children, and it may be that we need to look more precisely at who benefits from this technology and who does not. 

Tidbit 3:  Emphasis on Emotionally-Focused Communication

Finally, the last study addresses emotion and the patient (Ekberg, Grenness, & Hickson, 2014).  This study analyzed 65 videos of clinic appointments with 23 different audiologists.  The authors looked at the concerns and questions that patients raised and found that patients expressed a large number of psychosocial concerns; typically, those emotional responses were negative.  A negative emotional response calls for a compassionate response from the clinician, directly addressing the emotions, validating the feelings, or inviting further disclosure or expansion of those concerns. 

However, in the analysis of the videos, audiologists typically did not address these concerns; instead, they focused on moving forward and continued the hearing aid selection and fitting process.  When they did that, patients often re-raised or escalated their concerns in subsequent turns.  Sometimes these escalations, and the failure to address the underlying concerns, led to a breakdown in the process, where the patient left the appointment without making the decision to purchase a hearing aid. 

The bottom line is that a greater emphasis on emotionally-focused communication on the part of audiologists could result in both improved outcomes and improved relationships with patients.  Recognizing that patients’ emotional reactions are real continues to be important. 

I hope you found some of these articles to be interesting and useful for your clinical practices.  It was a pleasure to be here for the Vandy Journal Club, once again.  I am happy to take some questions.

Questions and Answers

Dr. Gus Mueller:  Your first article (Souza & Sirow, 2014) suggests that fast time constants would be better for certain people.  Of course, this is not the first time that has been suggested.  Let’s take that into the clinic from a practical standpoint.  The researchers used very fast time constants.  If you are a typical audiologist and you are buying from the big six manufacturers, as about 90% of audiologists are, are there really many hearing aids available that default to, or can be programmed to that kind of fast compression?

Dr. Ricketts:  I think that is an important part of their analysis.  They did this analysis only for the very fastest time constants.  It would be interesting to see whether that predictive nature occurred for just the fast compression as well.  There certainly are hearing aids available with syllabic time constants, but if you look at the majority of the companies, you would have to make that selection based on the specific instrument, and it is not switchable within the same instrument for the vast majority of companies. 

Dr. Mueller:  I can only think of one or two that allow you to switch at all, and even those do not have time constants that fast. 

Dr. Ricketts:  That is an interesting point, because it suggests that if you believe this is important for your patient, it may affect the brand of hearing aid that you dispense.  Then you would also have to weigh whether the other features available were appropriate for your patients.

Audience member:  Should we have any concern regarding application of these findings based on the low N of the study?

Dr. Ricketts: Absolutely.  The low N in the compression study (Souza & Sirow, 2014) limits what we can conclude about how strong that relationship is.  Whether or not you can use some of the predictions and trends is going to take further research.

Dr. Mueller:  Regarding a couple of other studies you reviewed: you followed the vehicle study data (Wu, Bentler, & Stanziola, 2013) with the Cox, Johnson, and Xu (2014) data.  The Wu data showed that the sophisticated features available in premium instruments do indeed work in improving speech intelligibility in noisy environments.  Then we go to the Cox data, where the premium hearing aids had this very same technology (they were the same premium hearing aids used in the Wu and colleagues study), but now the premium hearing aids did not rate any higher than the basic hearing aids. 

I recall a study with directional hearing aids that you published about 10 years ago with Paula Henry and David Gnewikow (Ricketts, Henry, & Gnewikow, 2003).  I think you concluded that a scale such as the APHAB might be too general to pull out some of the speech-understanding-in-noise benefits that were truly there.  You had designed a different scale that was more directed to the benefits that you might see from directional microphones, and when you used that scale, a directional benefit appeared.  So going back to the Cox article, were the benefits simply not there, or do you think that the benefit was there and these questionnaires or diaries were too general to uncover the benefit?

Dr. Ricketts:  I think it is the latter.  For the diary, all the participants are doing is nominating five situations where they saw a benefit and five situations where they did not.  It is the case that speech recognition, whether in very specific situations or in general, is of paramount importance.  This speaks to my point about using a patient-oriented, communication-difficulty-oriented selection and fitting process with these patients: talk to them about the situations in which they are having difficulty and then pick technologies that address those.  I have little doubt that selecting the right technology for a patient who says, “One of the things that bothers me is I can’t understand my wife in the car,” will be tremendously beneficial, even if it does not show up on a general measure of benefit. 

Audience member:  In the Ekberg study you reviewed, did the authors look for outcomes in the videos where the audiologists did address the emotional issues versus those that did not?

Dr. Ricketts:  They did not discuss that in the article.  I think they were looking at how often those emotional issues were addressed.  Their conclusion was that audiologists generally did not address them.  That is not to say that audiologists in some of those situations did not address them.  The authors noted that when the audiologists did not address the concerns, the situations often did escalate. 

They did not specifically state this, but my impression is that when the concerns were addressed, they did not necessarily escalate; the authors did not comment on that directly.  It is potentially a fairly rich data set, and it would be interesting to look more at whether there are negatives to trying to address the concerns.  I could see that addressing some of those emotional concerns adds to the time taken in the clinical appointment.  That is probably one of the reasons why audiologists may try to stay on track.  However, if staying on track leads to failure of the appointment, then we are crossing a line. 

References

Bentler, R., Walker, E., McCreery, R., Arenas, R. M., & Roush, P. (2014). Nonlinear frequency compression in hearing aids: impact on speech and language development. Ear and Hearing, 35(4), 143-152. doi: 10.1097/AUD.0000000000000030.

Brimijoin, W. O., Whitmer, W. M., McShefferty, D., & Akeroyd, M. A. (2014). The effect of hearing aid microphone mode on performance in an auditory orienting task. Ear and Hearing, 35(5), 204-212. doi: 10.1097/AUD.0000000000000053.

Cox, R. M., Johnson, J. A., & Xu, J. (2014). Impact of advanced hearing aid technology on speech understanding for older listeners with mild to moderate, adult-onset, sensorineural hearing loss. Gerontology, 60(6), 557-568. doi: 10.1159/000362547.

Desjardins, J. L. & Doherty, K. A. (2014). The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear and Hearing, 35(6), 600-610. doi: 10.1097/AUD.0000000000000028.

Ekberg, K., Grenness, C., & Hickson, L. (2014). Addressing patients' psychosocial concerns regarding hearing aids within audiology appointments for older adults. American Journal of Audiology, 23(3), 337-350. doi: 10.1044/2014_AJA-14-0011.

Foo, C., Rudner, M., Ronnberg, J., & Lunner, T. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology, 18(7), 618-631.

Gatehouse, S., Naylor, G., & Elberling, C. (2006). Linear and nonlinear hearing aid fittings-1. Patterns of benefit. International Journal of Audiology, 45(3), 130-152.

Lunner, T., & Sundewall-Thoren, E. (2007). Interactions between cognition, compression, and listening conditions: effects on speech-in-noise performance in a two-channel hearing aid. Journal of the American Academy of Audiology, 18(7), 604-617.

Ohlenforst, B., Souza, P., & MacDonald, E. (2014). Interaction of working memory, compressor speed and background noise characteristics. Paper presented at the American Auditory Society, Scottsdale, AZ.

Ricketts, T., Henry, P., & Gnewikow, D. (2003). Full time directional versus user selectable microphone modes in hearing aids. Ear and Hearing, 24(5), 424-439.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research, 52(5), 1230-1240. doi: 10.1044/1092-4388(2009/08-0111).

Souza, P. E., & Sirow, L. (2014). Relating working memory to compression parameters in clinically-fit hearing aids. American Journal of Audiology, ePub ahead of print. doi: 10.1044/2014_AJA-14-0006.

Wu, Y. H., Bentler, R. A., & Stanziola, R. W. (2013). The effect of hearing aid technologies on listening in an automobile. Journal of the American Academy of Audiology, 24(6), 474-485. doi: 10.3766/jaaa.24.6.4.

Cite this Content as:

Ricketts, T. (2015, January). Vanderbilt Audiology's Journal Club with Dr. Todd Ricketts - hearing aid technology. AudiologyOnline, Article 13177. Retrieved from https://www.audiologyonline.com.

 


Todd Ricketts, PhD

Associate Professor at the Vanderbilt Bill Wilkerson Center for Otolaryngology and Communication Sciences and Director of the Dan Maddox Hearing Aid Research Laboratory

Todd A. Ricketts, PhD, CCC-A, is an associate professor at the Vanderbilt Bill Wilkerson Center for Otolaryngology and Communication Sciences and Director of the Dan Maddox Hearing Aid Research Laboratory. Prior to moving to Vanderbilt in 1999, Todd spent three years as an assistant professor at Purdue University. His current research interests are focused on amplification and microphone technology, as well as the relationship between laboratory and everyday benefit. Todd has published more than fifty scholarly articles and book chapters. To date he has presented over 100 scholarly papers/poster presentations, short courses, mini-seminars, and workshops at professional and scholarly conferences both nationally and internationally. He was named a fellow of the American Speech-Language-Hearing Association in 2006. He continues to pursue a federally and industry funded research program studying the interaction between amplification technology, listening environment, and individual differences as they impact benefit derived from hearing aids and cochlear implants. His current work includes examination of the viability of directional technology for school-aged children; the relative benefits and limitations of manual switching, automatic switching, and “asymmetric” microphone technology; the impact of extended high-frequency bandwidth on user-perceived sound quality as a function of hearing loss; and the relative benefits and limitations of bilateral cochlear implants. He also serves as the chair of the Vanderbilt University Institutional Review Board: Behavioral Sciences Committee. 


