



Multiple Processing Strategies to Accommodate Various Listening Preferences
Donald Hayes, PhD
January 19, 2004
This article is sponsored by Unitron.

Overview
Today's multi-memory hearing aids provide different processing strategies for different listening environments. A recent MarkeTrak survey demonstrated the need for more than one signal processing strategy. In the survey, Kochkin noted that consumers expect high performance in multiple listening situations; however, only one in four hearing aid wearers was satisfied in as many as 75% of the situations they experienced. Kochkin's data also indicated that satisfaction and performance can be improved across a wider range of listening situations by using hearing aids that combine multiple features such as multiple memories, multiple channels, and multiple microphones. Combining such features gives the professional greater flexibility and the ability to tailor amplification for specific listening situations.

The purpose of this study was twofold: (1) to examine preferred processing in quiet, noise, and music for two groups of subjects, and (2) to demonstrate how the strengths of a given strategy, in this case ASP, can be exploited in one memory of the device without exposing its weaknesses.

Listeners compared several processor choices using a four-channel, three-memory Unitron Hearing Unison™ 4 hearing aid. Historically, in single-memory devices, automatic low-frequency attenuation in noise, also known as automatic signal processing (ASP) or BILL processing, has been effective for some individuals and of limited value to others. Effectiveness has been shown to be specific to certain hearing loss configurations or listening situations. By placing ASP in one memory of a multi-memory device, we can offer alternate processing strategies (e.g., linear, WDRC) in other memories for listening situations in which ASP is not efficacious. Other digitally controlled features (e.g., adjustable crossover frequencies, switchable omnidirectional/directional microphones) can extend fitting flexibility far beyond what was previously possible.

Meeting the Need for Different Listening Preferences

Efficacy in multiple listening situations is critical for user satisfaction. Multi-memory hearing aids give hearing healthcare professionals the flexibility to fit clients with various types and degrees of hearing loss effectively for different listening environments. Providing multiple processing strategies in digital hearing aids allows greater ability to customize the aids' performance for the wearer and the situation. There is no need to compromise certain situation-specific processing strategies across non-ideal environments.

In high-end products, a multitude of advanced features and algorithms provide maximal benefit. Recent advances, however, are bringing multiple-memory circuits to entry-level digital hearing aids as well. These hearing aids offer highly effective solutions through intelligent fitting software, broader circuit options, and more positive fitting experiences for professionals and wearers, with fewer fine-tuning and follow-up appointments.

Today's digital multi-memory hearing aids allow us to provide different types of signal processing for different listening environments, in one unit.

What type of processing is preferred and what factors influence that preference? The following study was undertaken to answer these questions.

Test Conditions

This study was conducted using custom-made and behind-the-ear, four-channel, three-memory, Unitron Hearing Unison 4 hearing aids. Our overall goal was to determine the preferred processor choices in a series of listening environments for individuals with various types and degrees of hearing loss.

There were 24 participants, ranging in age from 18 to 80 years. Fifteen custom (completely-in-the-canal, in-the-canal, half-shell, full-shell) and 10 behind-the-ear fittings were undertaken. Seventeen participants were experienced hearing aid wearers; seven were new wearers. Participants were separated into two groups: those with mild-moderate sensorineural losses and those with severe-profound sensorineural losses.

Baseline gain settings matched targets calculated using NAL-NL1. Listening comparisons were conducted in four different laboratory conditions: quiet; multi-talker babble at 70 dB SPL; highway/road noise at 80 dB SPL; and music (classical/jazz) at 60-70 dB SPL. The NAL-NL1 targets were used as prescribed for the quiet condition. Gain modifications consistent with standard Unitron Hearing environmental offsets were applied in the other three conditions. For example, a low-frequency gain cut and a slight high-frequency gain boost relative to NAL-NL1 targets were used for the speech-babble condition. All speech signals were presented at 0° azimuth, and all noise or music inputs were presented from 180°. Participants were asked to indicate their preference for one of four processing strategies in each listening condition: Wide Dynamic Range Compression (WDRC), Linear with Output Compression Limiting (LL), Automatic Signal Processing (ASP), and Adaptive Input Compression Limiting (AC). Note that LL differs from AC: AC has an 8:1 compression ratio and variable input kneepoints from 70 to 90 dB, whereas LL uses infinite compression with a 20 dB range relative to the maximum output (MPO) setting of the aid.
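
To make the differences among these strategies concrete, the following is a minimal sketch of static input/output functions for LL, AC, and WDRC. The gains, kneepoints, the WDRC ratio, and the MPO value are assumed example numbers rather than Unison 4 parameters, and the LL branch simplifies infinite output compression to a hard ceiling at the MPO.

# Illustrative sketch only: static input/output curves (in dB SPL) for three of the
# compared strategies. All parameter values are assumed examples.

def io_linear_limiting(input_db, gain_db=20.0, mpo_db=110.0):
    """Linear gain with output compression limiting (LL): output grows 1:1
    with input until it is clamped at the assumed MPO ceiling."""
    return min(input_db + gain_db, mpo_db)

def io_adaptive_compression(input_db, gain_db=20.0, kneepoint_db=80.0, ratio=8.0):
    """Adaptive input compression limiting (AC): linear below the kneepoint,
    8:1 input compression above it (kneepoint assumed at 80 dB SPL here;
    the article states it is variable from 70 to 90 dB)."""
    if input_db <= kneepoint_db:
        return input_db + gain_db
    return kneepoint_db + (input_db - kneepoint_db) / ratio + gain_db

def io_wdrc(input_db, gain_soft_db=30.0, kneepoint_db=45.0, ratio=2.0):
    """Wide dynamic range compression: low kneepoint and gentle ratio,
    so soft inputs receive more gain than loud ones."""
    if input_db <= kneepoint_db:
        return input_db + gain_soft_db
    return kneepoint_db + gain_soft_db + (input_db - kneepoint_db) / ratio

for level in (50, 65, 80, 95):
    print(f"{level} dB in -> LL {io_linear_limiting(level):.0f}, "
          f"AC {io_adaptive_compression(level):.1f}, WDRC {io_wdrc(level):.1f} dB out")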

Description of Test Results

Analysis of the test results revealed that hearing loss configuration was one factor that influenced processor preferences. Figure 1 shows the mean pure tone thresholds ±1 standard deviation for the two groups of subjects. These groups have been named mild-moderate and severe-profound.

(Figure 1)


Figures 2-5 below show the processing strategy preferences of the two groups in each listening situation. As shown in Figure 2, 43% of the listeners in the mild-moderate group preferred linear amplification with adaptive compression limiting (AC) for speech in quiet, whereas 59% of the severe-profound listeners preferred linear amplification with output compression limiting (LL) in the same listening situation. Note that normal conversational speech (not soft speech) was presented in quiet; this might have influenced the listeners' preferences relative to the WDRC strategy.

(Figure 2)


Preferences for music were very different from those for speech. The greatest number of participants in the mild-moderate group (61%) preferred linear limiting (LL) whereas the majority of severe-profound participants (64%) preferred WDRC for listening to music. See Figure 3.

(Figure 3)


Both groups of participants were given the same three processing strategy choices for listening in quiet and for listening to music. All subjects compared WDRC, LL, and AC in those conditions, as shown in Figures 2 and 3. However, in the presence of group/party noise (Figure 4), the mild-moderate group was asked to compare WDRC, LL, and ASP, while the severe-profound group was asked to compare the LL, ASP, and AC strategies. The same processing strategies were compared by the two groups in the intense traffic noise condition shown in Figure 5.

The mild-moderate group had a marked preference for ASP processing (56%) over WDRC and LL in the presence of group or party noise (Figure 4). The severe-profound group, however, was much more evenly divided among LL, ASP, and AC, with ASP preferred slightly less often (25%) than the other two.

(Figure 4)


When the type of noise was changed from party noise to intense traffic noise, the mild-moderate group showed an increased preference for WDRC (63%) over ASP (30%) processing. The severe-profound group showed a greater preference for LL (50%) and AC (50%) in traffic noise, with no one preferring ASP processing.

(Figure 5)


The group preferences by listening situation are summarized in the following table:

Listening situation     Mild-moderate group     Severe-profound group
Speech in quiet         AC (43%)                LL (59%)
Music                   LL (61%)                WDRC (64%)
Group/party noise       ASP (56%)               Split among LL, ASP, and AC (ASP 25%)
Traffic noise           WDRC (63%)              LL (50%) and AC (50%)

When these results were used to set default processor choices by program in the Unifit™ fitting software, spontaneous acceptance of Quick Fit settings was very high. Few manual adjustments were required and subjects had positive comments regarding sound quality and clarity of speech without fine-tuning.

Positive Impact of Findings

  1. The field trial findings allowed us to build more intelligence into Unitron Hearing's initial Unifit™ Quick Fit settings, taking into consideration degree of hearing loss, hearing aid history, and typical listening preferences, to meet the needs of the majority of listeners quickly and easily.

  2. Since individual preferences do exist, the ability to customize processor selection in each memory of a multi-memory device is important to meet individual hearing aid wearer's needs.

  3. Having several processing algorithms in one multi-memory digital device, such as the Unison 4, eliminates the need to compromise useful listening strategies in non-ideal environments.

  4. The findings indicate that it should be easier to move clients from linear processing to WDRC without worrying that they will reject amplification, because they can retain linear processing in at least one memory.

  5. Multiple processing choices translate into a more positive fitting experience for both fitters and wearers. Default settings are designed to provide best performance and comfort.

  6. Fewer fine-tuning sessions, fitting adjustments, and follow-up appointments are required.

ASP Processing to Accommodate Varied Listening Preferences
Because a person's listening environment can change from one moment to the next, hearing aids should ideally provide benefit in quiet one-on-one settings as well as in cars, restaurants, workplaces, and outdoors. Multi-memory hearing aids give hearing healthcare professionals the flexibility to fit clients with varying types and degrees of hearing loss effectively for different listening environments. Providing multiple processing strategies means greater customization of the hearing aids' performance for wearers in particular situations.

Three-memory hearing aids can offer ASP as one of three available processing strategies. ASP can be used to improve performance in listening situations where wearers find it most useful.

Early ASP Hearing Aids

ASP has been available in analog hearing aids since the 1980s. The main objective of ASP is to improve audibility for quiet speech and reduce the upward spread of masking in noise. A more descriptive name for ASP is Bass Increase at Low Levels (BILL). By definition, BILL processing means that the low-frequency gain of the hearing aid automatically increases as incoming signals get softer. In other words, in quiet, when signal levels are low, more low-frequency gain is applied by the circuit. As loudness levels increase, low-frequency gain is automatically reduced.
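
A minimal sketch of the BILL rule just described, assuming illustrative gain values and level breakpoints rather than any particular product's parameters:

# Sketch of the BILL idea: low-frequency gain rises as the estimated input level falls.
# All numbers are assumed examples.

def bill_low_frequency_gain(input_level_db,
                            max_lf_gain_db=25.0,  # LF gain applied for quiet inputs (assumed)
                            min_lf_gain_db=5.0,   # LF gain left in place for loud inputs (assumed)
                            quiet_db=50.0, loud_db=80.0):
    """Return the low-frequency gain for a given input level; the gain falls
    linearly from max to min as the level rises from quiet to loud."""
    if input_level_db <= quiet_db:
        return max_lf_gain_db
    if input_level_db >= loud_db:
        return min_lf_gain_db
    fraction = (input_level_db - quiet_db) / (loud_db - quiet_db)
    return max_lf_gain_db - fraction * (max_lf_gain_db - min_lf_gain_db)

for level in (50, 60, 70, 80):
    print(f"{level} dB input -> {bill_low_frequency_gain(level):.1f} dB of low-frequency gain")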

(Figure 6)


Figure 6 shows how a low-frequency noise (a 727 in flight) at three different input levels (50, 60, and 70 dBA) would have been processed by an early ASP hearing aid. This circuit was designed to respond most effectively to low-frequency sounds; any type of signal, including speech, could have been used in this example. The gain in the low frequencies is reduced as the input level increases. Note that the clearly defined ASP high-frequency transition point at 1000 Hz marks the upper limit of this gain reduction.

The Goals of ASP

The primary objective of ASP is to provide a wideband frequency response for soft or average levels of speech in quiet. The theory is that increasing low-frequency amplification in quiet conditions (BILL) can improve audibility and sound quality; this effect was shown by Moore et al. The secondary objective of ASP processing is to reduce the energy levels of background noise in the presence of speech. In this case, it is assumed that decreasing low-frequency gain at higher input levels should reduce the upward spread of masking caused by intense low-frequency noise. In the end, this one-size-fits-all attempt to maximize benefits across widely varying listening situations compromised performance and demanded too much from a single processing strategy. The alternative is to use two or more separate sound processing strategies, each of which has been optimized for specific situations.

ASP vs. WDRC

ASP should not be confused with Wide Dynamic Range Compression (WDRC), which is based on a different fitting philosophy. Granted, the primary objective of WDRC processing is the same as one goal of ASP processing: to provide maximum audibility for soft speech in quiet. However, the secondary objective of WDRC is to restore normal loudness perception of suprathreshold sounds. Therefore, the goal of WDRC in noise is to help maintain comfort rather than to eliminate the upward spread of masking. Figure 7 illustrates how a wideband WDRC instrument would process the same jet noise shown in Figure 6. It is clear from the two figures that ASP provides a more linear input/output function in the high frequencies than does WDRC. While WDRC has become the strategy of choice for improving the audibility of soft speech, it has not consistently outperformed other types of processing in noise. This may be due to the reduction in high-frequency gain in the presence of low-frequency noise; in such a case, the high-frequency linearity of ASP shown in Figure 6 would probably be superior.
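
The distinction can be sketched in code: in a single-band (wideband) WDRC aid, one level detector drives one gain, so a loud low-frequency noise pulls gain down at all frequencies, whereas an ASP/BILL aid reduces only the low-frequency gain. The kneepoints, ratios, and maximum cut below are assumed example values.

# Illustrative contrast between wideband WDRC and ASP/BILL in low-frequency noise.

def wideband_wdrc_gains(broadband_level_db, kneepoint_db=45.0, ratio=2.0, base_gain_db=25.0):
    """One detector, one gain: a loud low-frequency noise lowers gain in every
    band, including the highs that carry consonant information."""
    excess = max(0.0, broadband_level_db - kneepoint_db)
    gain = base_gain_db - excess * (1.0 - 1.0 / ratio)
    return {"low": gain, "high": gain}

def asp_gains(low_band_level_db, base_lf_gain_db=25.0, base_hf_gain_db=25.0,
              kneepoint_db=55.0, max_cut_db=15.0):
    """ASP/BILL: only the low-frequency gain is reduced by low-frequency energy;
    the high-frequency gain stays essentially linear."""
    cut = min(max_cut_db, max(0.0, low_band_level_db - kneepoint_db))
    return {"low": base_lf_gain_db - cut, "high": base_hf_gain_db}

print(wideband_wdrc_gains(80.0))  # both bands lose gain in an 80 dB low-frequency noise
print(asp_gains(80.0))            # only the low band loses gain; the high band does not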

(Figure 7)


Limitations of Early ASP

ASP was initially used as a one-size-fits-all circuit with an automatically adjustable frequency response for any listening situation. For some people, this worked very well. Unfortunately, researchers quickly demonstrated that the earliest ASP hearing aid circuits were of limited value for most hearing aid wearers. Early ASP circuits had a limited range of adjustment for bass increase in quiet, and they often used a static frequency transition point at 1000 Hz, as shown in Figure 6.

Second-generation ASP aids appeared in the 1990s. These provided a greater range of bass increase in quiet, giving the practitioner the flexibility to fit a wider range of hearing threshold configurations. They also employed adaptive high-frequency transition points that changed automatically across a range of input levels. These products were more favorably received by clinicians than earlier models, and some experiments showed that ASP processing could outperform linear amplification, particularly in the presence of low-frequency background noise. However, because ASP hearing aids at that time were predominantly analog, single-memory devices, only one processing strategy was available for use in all environments.

Digital ASP Hearing Aids - A New Generation of ASP

In digital hearing instruments, such as Unitron Hearing's Unison 4, ASP is one of four processing strategies available in a multi-memory digital hearing aid, and it can be used exclusively to improve performance in background noise. The combination of four-channel amplification, switchable directional microphones, and three programmable memories eliminates reliance on one-size-fits-all programs and allows optimization of multiple processing strategies for various listening environments.

Four-channel processing and directional microphones enable new levels of flexibility when adjusting processing strategies for specific situations. These circuits allow ASP to be dedicated to processing in noise while alternate processing strategies are provided for other listening environments.

As the field trial results indicate, digital hearing instruments can provide a choice of processing strategies such as WDRC, Linear Limiting, Adaptive Compression, and ASP Noise Suppression, each co-existing in separate, user-controlled, programmable memories. The ASP processor can be set exclusively for noisier environments, where its automatic low-frequency attenuation will provide maximal benefit. An example of Unison's ASP algorithm, as measured in omnidirectional mode, can be seen in Figure 8.

(Figure 8)


Note that, unlike early ASP devices with a fixed kneepoint, the kneepoint of low-frequency attenuation in Unison 4 is adaptive. The kneepoint in Figure 8 rises from 1500 Hz to 3000 Hz as the noise changes from 50 dB to 70 dB. Therefore, not only does the amount of gain reduction adapt to changes in noise level, but so does the bandwidth over which that gain reduction occurs.
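
As a minimal sketch of such an adaptive transition point, assuming a simple linear mapping between the two level/frequency pairs quoted above (the product's actual rule is not specified here):

# Assumed linear interpolation of the ASP high-frequency transition point:
# roughly 1500 Hz at a 50 dB noise and roughly 3000 Hz at a 70 dB noise.

def asp_transition_hz(noise_level_db, low_db=50.0, high_db=70.0,
                      low_hz=1500.0, high_hz=3000.0):
    """Estimate the upper edge of the ASP gain-reduction band from the noise level."""
    if noise_level_db <= low_db:
        return low_hz
    if noise_level_db >= high_db:
        return high_hz
    fraction = (noise_level_db - low_db) / (high_db - low_db)
    return low_hz + fraction * (high_hz - low_hz)

for level in (50, 60, 70):
    print(f"{level} dB noise -> gain reduction acts below roughly {asp_transition_hz(level):.0f} Hz")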

Multi-channel ASP

(Figure 9)


(Figure 10)


With a four-channel hearing aid that has adjustable crossover frequencies, it is possible to alter the frequencies that serve as the ASP high-frequency transition points by changing the crossover settings. If the BILL component of the ASP processing is active primarily in the two lower-frequency channels, then shifting the crossover frequency between the second and third channels will move the ASP high-frequency transition point.

For example, when the F1 and F2 crossover controls in Unison 4 are shifted to lower frequencies, the ASP transition points will shift to lower frequencies as well. Figures 9 and 10 illustrate this effect. The reverse is also true. Specifically, when the crossover controls are shifted to higher frequencies, the ASP transition points shift to higher frequencies too.

This becomes clinically relevant for people using ASP in one memory. For example, if the ASP algorithm is not reducing enough low-frequency gain in noisy environments, you might try increasing the crossover controls to extend the ASP effect further into the higher frequencies. Conversely, if you feel the ASP algorithm is reducing low-frequency gain too much, or that the ASP is adversely affecting the audibility of higher frequency speech sounds, you might decrease the crossover controls to restrict the attenuation to a lower range of frequencies. This is one more example of how the interaction of multiple digital features can improve the performance of a processing strategy such as ASP.
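
As a rough sketch of this interaction, assuming hypothetical channel gains, crossover settings, and a fixed ASP cut confined to the two lower channels, the following shows how the F2 crossover bounds the frequency range over which the ASP attenuation acts:

# Illustrative four-channel gain structure with the ASP (BILL) cut applied only to
# the two lower channels, so F2 sets the upper edge of the ASP effect.

def apply_asp_by_channel(channel_gains_db, crossovers_hz, asp_cut_db):
    """channel_gains_db: gains for channels 1-4 (low to high).
    crossovers_hz: (F1, F2, F3) channel boundary frequencies.
    The ASP attenuation is applied to channels 1 and 2 only, i.e., below F2."""
    adjusted = list(channel_gains_db)
    adjusted[0] -= asp_cut_db  # channel 1 (below F1)
    adjusted[1] -= asp_cut_db  # channel 2 (F1 to F2)
    print(f"ASP attenuation confined below the F2 crossover at {crossovers_hz[1]} Hz")
    return adjusted

# Shifting F1/F2 downward restricts the ASP effect to a narrower low-frequency range;
# shifting them upward extends the effect further into the higher frequencies.
print(apply_asp_by_channel([20, 20, 25, 30], crossovers_hz=(750, 1500, 3000), asp_cut_db=10))
print(apply_asp_by_channel([20, 20, 25, 30], crossovers_hz=(500, 1000, 3000), asp_cut_db=10))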

ASP Noise Suppression and Directional Microphones

When it comes to listening in background noise, the performance of directional microphone technology is well documented. However, the impact of combining directional microphones and ASP Noise Suppression is not intuitively obvious. To help shed some light on this combination, consider the example below:

  • A speech signal and a low-frequency noise (car engine) are both presented to the hearing aid simultaneously.

  • The speech is always presented at 0° azimuth or directly in front of the listener's head.

  • The car noise is presented either at 0° (front) or 180° (back) representing noise from in front of, or behind the listener.

  • When the speech signal and the car noise emanate from the same direction, 0° azimuth, the ASP algorithm reduces low-frequency gain as expected.

  • When the speech signal and car noise are spatially separated by 180° the output is determined by the microphone rather than the ASP algorithm. (The directional microphone reduces the level of the signal from 180° azimuth before it reaches the ASP algorithm whereas the omnidirectional microphone does not.)


Observation: The low-frequency gain is not further reduced by the ASP algorithm.

By combining ASP Noise Suppression with directional microphone technology, low-frequency background noise can be attenuated regardless of the location of the noise source. If the background noise is behind the wearer, the directional microphone reduces the level of the noise. However, in more diffuse noise settings, where low-frequency noise is emanating from the front or all directions, the ASP circuitry controls the noise levels. The interaction between the ASP algorithm and the directional microphone varies with the levels, frequency content, and location of the noise sources, as well as the ASP settings. Nevertheless, their interaction may provide optimal listening for the wearer in a variety of noisy conditions.
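
A minimal sketch of the signal chain described in this example, assuming a fixed rear attenuation for the directional microphone and an illustrative ASP gain rule:

# The directional microphone attenuates the rear source before the ASP level
# detector sees it, so ASP reacts only to noise the microphone has not removed.
# The 12 dB rear attenuation and the ASP parameters are assumed examples.

def directional_attenuation_db(azimuth_deg, rear_attenuation_db=12.0):
    """Highly simplified polar response: full sensitivity at 0 degrees,
    a fixed attenuation at 180 degrees."""
    return rear_attenuation_db if azimuth_deg == 180 else 0.0

def asp_low_frequency_gain(noise_level_at_mic_db, kneepoint_db=55.0,
                           base_gain_db=20.0, max_cut_db=15.0):
    """Reduce low-frequency gain as the level reaching the ASP detector rises."""
    cut = min(max_cut_db, max(0.0, noise_level_at_mic_db - kneepoint_db))
    return base_gain_db - cut

for azimuth in (0, 180):
    noise_at_detector = 75.0 - directional_attenuation_db(azimuth)
    print(f"Car noise from {azimuth} degrees: {noise_at_detector:.0f} dB at the ASP detector, "
          f"low-frequency gain {asp_low_frequency_gain(noise_at_detector):.1f} dB")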

Summary

The field trial results described above demonstrate that preferences for different processing strategies vary across hearing loss groups and listening situations. This highlights the need for a variety of processing strategies in entry-level hearing aids, where a full complement of adaptive features may not be available. Overall, this study confirms that clients who are exposed to many different listening environments will benefit from multiple processing choices in their hearing instruments.

Digital technology may improve the performance of a given processing strategy in specific listening situations. For example, in some digital hearing instruments, ASP is one of several available algorithms and can be used exclusively to improve performance in background noise, while combinations of the other algorithms are used to optimize performance in other listening environments. The use of ASP with adjustable crossovers and directional microphones allows greater fitting flexibility and additional patient benefits. These customizations translate into a more positive fitting experience for both fitters and wearers.

Author
This article was submitted to Audiology Online by Donald Hayes, Ph.D., Manager of Audiology Research and Training, Unitron Hearing, Kitchener, Ontario. Field trial data was compiled by Bill Christman and Nancy Tellier, Audiologists, Unitron Hearing. Correspondence can be addressed to Donald Hayes, PhD, Unitron Hearing, 20 Beasley Drive, P.O. Box 9017, Kitchener, ON N2G 4X1, Canada; email: don.hayes@unitron.com.

Works Cited

1. Kochkin, S., MarkeTrak VI: 10-year customer satisfaction trends in the US hearing instrument market. Hear Rev, 2002. 9(10): p. 14-25, 46.

2. Kochkin, S., Customer satisfaction and subjective benefit with high-performance hearing instruments. Hearing Review, 1996. 3(12): p. 16-26.

3. Ono, H., J. Kanzaki, and K. Mizoi, Clinical results of hearing aid with noise-level-controlled selective amplification. Audiology, 1983. 22(5): p. 494-515.

4. Fabry, D.A., et al., Do adaptive frequency response (AFR) hearing aids reduce 'upward spread' of masking? J Rehabil Res Dev, 1993. 30(3): p. 318-25.

5. Cook, J.A., S.P. Bacon, and C.A. Sammeth, Effect of low-frequency gain reduction on speech recognition and its relation to upward spread of masking. J Speech Lang Hear Res, 1997. 40(2): p. 410-22.

6. Sammeth, C.A. and M.T. Ochs, A review of current "noise reduction" hearing aids: rationale, assumptions, and efficacy. Ear Hear, 1991. 12(6 Suppl): p. 116S-124S.

7. Moore, B., C. Lynch, and M. Stone, Effects of the fitting parameters of a two-channel compression system on the intelligibility of speech in quiet and in noise. British Journal of Audiology, 1992. 26(Dec): p. 369-79.

8. Dempsey, J.J., Effect of automatic signal-processing amplification on speech recognition in noise for persons with sensorineural hearing loss. Ann Otol Rhinol Laryngol, 1987. 96(3 Pt 1): p. 251-3.

9. Stein, L., T. McGee, and P. Lewis, Speech recognition measures with noise suppression hearing aids using a single-subject experimental design. Ear Hear, 1989. 10(6): p. 375-81.

10. van Tasell, D.J., S.Y. Larsen, and D.A. Fabry, Effects of an adaptive filter hearing aid on speech recognition in noise by hearing-impaired subjects. Ear Hear, 1988. 9(1): p. 15-21.

11. Vonlanthen, A., Basic signal processing strategies, in Hearing Instrument Technology for the Hearing Healthcare Professional, J. Danhauer, Editor. 2000, Singular: San Diego. p. 136-141.

12. Tyler, R. and F. Kuk, The effects of "noise suppression" hearing aids on consonant recognition in speech-babble and low-frequency noise. Ear and Hearing, 1989. 10(Aug): p. 243-9.

13. Kuk, F.K., R.S. Tyler, and L. Mims, Subjective ratings of noise-reduction hearing aids. Scandinavian Audiology, 1990. 19: p. 237-44.

14. Tyler, R.S. and F.K. Kuk, Consonant recognition and quality judgments of noise-reduction hearing aids. Acta Otolaryngol Suppl, 1990. 469: p. 224-9.

15. Horwitz, A., C. Turner, and D. Fabry, Effects of different frequency response strategies upon recognition and preference for audible speech stimuli. Journal of Speech and Hearing Research, 1991. 34(Oct): p. 1185-96.




Donald Hayes, PhD

Director of Audiology at Unitron Hearing Ltd. in Kitchener, Ontario

Donald Hayes, PhD, has been an audiologist for 18 years. He is the Director of Audiology at Unitron Hearing Ltd. in Kitchener, Ontario.


