


The Essential Building Blocks of Hearing Aid Selection and Fitting: A Beginner's Guide to Applying Evidence-Based Thinking

Brian Taylor, AuD
July 14, 2008
Introduction

Before a patient begins using hearing aids, there are several essential components of the selection and fitting process the audiologist must account for in order for the patient to achieve maximum benefit and satisfaction. Given the number of advanced features in modern hearing aids and their commercial value, it is imperative that audiologists use proven strategies for selecting and fitting them. Using an approach guided by the best available clinical evidence, this article reviews the essential components of the hearing aid selection and fitting process, showing the relationship between important advanced hearing aid features and the clinical procedures needed to select and fit them. After reading this review article, audiologists will have a better understanding of how technology and the clinical procedures used to determine hearing aid candidacy fit together. In other words, if each of the essential building blocks described in this paper is used correctly, the patient has a better chance of experiencing full benefit and satisfaction from hearing aids.

Building Block #1: Improving Audibility and Maximizing Comfort of Soft and Average Sounds


The majority of patients fit with hearing aids have sensorineural hearing loss. One of the chief characteristics of sensorineural hearing loss is an inability to perceive low-intensity sounds. For the typical patient, this means that soft sounds need to be amplified more than sounds of average and above-average intensity. This model of cochlear hearing loss led to the development of hearing aid amplifiers providing more gain for soft sounds relative to average and loud sounds (Villchur, 1973). Today, wide dynamic range compression (WDRC) is the accepted amplification strategy, allowing the hearing aid to repackage sound into the patient's residual dynamic range. By definition, WDRC uses low compression kneepoints (typically below about 50 dB SPL), so that low-level inputs receive proportionally more gain than average and high-level inputs.
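To make this input/output behavior concrete, here is a minimal sketch of a single WDRC channel in Python, assuming an illustrative 45 dB SPL kneepoint, 25 dB of linear gain, and a 2:1 compression ratio; these numbers are placeholders, not prescriptive or manufacturer settings.

def wdrc_gain(input_spl, linear_gain=25.0, kneepoint=45.0, ratio=2.0):
    """Return the gain (dB) applied by one WDRC channel.

    Below the kneepoint the channel applies its full linear gain, so soft
    sounds receive the most amplification; above it, gain shrinks by
    (1 - 1/ratio) dB for every additional dB of input.
    """
    if input_spl <= kneepoint:
        return linear_gain
    return linear_gain - (input_spl - kneepoint) * (1.0 - 1.0 / ratio)

# Soft inputs get more gain than loud inputs, repackaging sound into the
# patient's residual dynamic range.
for level in (40, 55, 70, 85):
    print(f"{level} dB SPL in -> {level + wdrc_gain(level):.1f} dB SPL out")

Running the loop shows soft inputs receiving the full 25 dB of gain and louder inputs receiving progressively less, which is the "more gain for soft sounds" behavior described above.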
Nearly all digital hearing aids utilize multiple channels of compression. Anywhere between two and thirty-two independently controlled channels of compression are found in a modern digital hearing aid. Although very few published studies have examined the relationship between the number of channels and real-world hearing aid benefit, there is some evidence suggesting that up to five channels in quiet listening situations (Woods, Van Tasell, Rickert, & Trine, 2006) and up to 16 channels in noisy listening conditions (Yund & Buckles, 1995) can improve speech intelligibility.

Audibility of the speech signal is restored by providing gain across the entire range of frequencies in which hearing loss is present. Exactly how much gain is required at each frequency is determined by the prescriptive fitting approach the audiologist decides to use. A review of the published research in this area favors the NAL-type prescriptive fitting approach over other fitting philosophies when fitting adults. The NAL prescriptive fitting method (such as the current NAL-NL2) is a loudness equalization procedure, which aims to make amplified sound equally loud across the frequency spectrum in order to maximize intelligibility. Loudness equalization fitting formulas, like NAL-NL2, take both audibility and comfort into account. This is in contrast to loudness normalization fitting formulas, which attempt to deliver amplified sounds to the end-user at a loudness level equal to that of a normal-hearing listener. Although loudness normalization procedures are preferred for fitting pediatric populations, adults tend to prefer the gain resulting from loudness equalization fitting procedures (Keidser & Grant, 2001).

After the patient's hearing thresholds have been measured, these values can be entered into the fitting software of choice, at which time a prescriptive fitting target is generated. It would be tempting to think that it does not matter which manufacturer you use; however, data suggest that there are large differences between the "true" prescriptive fitting target and the version of it that each manufacturer implements in its fitting software. This difference, moreover, is exacerbated when individual differences in ear canal resonance and microphone location effects are taken into consideration. As Figure 1 shows, the measured gain can be upwards of 10 to 15 dB different from the estimated gain. These data, along with other similar studies, strongly suggest the importance of conducting probe microphone measures in order to verify how closely the prescriptive target is matched. Furthermore, one systematic review of the evidence indicates that matching a gain-for-average target and verifying it with probe microphone measures is associated with patient satisfaction (Mueller, 2005).
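As a simple illustration of this verification step, the sketch below compares hypothetical measured real-ear insertion gain values against hypothetical prescriptive targets and flags any frequency that deviates by more than a chosen tolerance; the frequencies, gain values, and the 5 dB tolerance are assumptions for the example, so substitute the targets and criterion from your own protocol.

# Hypothetical prescriptive targets and measured real-ear insertion gain (dB)
target_gain = {500: 12, 1000: 18, 2000: 25, 4000: 28}
measured_gain = {500: 10, 1000: 17, 2000: 14, 4000: 20}

TOLERANCE_DB = 5  # example verification criterion; adjust per protocol

for freq in sorted(target_gain):
    deviation = measured_gain[freq] - target_gain[freq]
    status = "re-adjust" if abs(deviation) > TOLERANCE_DB else "ok"
    print(f"{freq} Hz: target {target_gain[freq]} dB, measured "
          f"{measured_gain[freq]} dB, deviation {deviation:+d} dB ({status})")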



Figure 1. The difference between the simulated insertion gain and actual insertion gain for 10 different hearing aids, all programmed for the same hearing loss (Hawkins & Cook, 2003). Reprinted with permission from The Hearing Journal and its publisher, Lippincott, Williams & Wilkins.

The first building block of successful hearing aid use is maximizing audibility and comfort of important sounds, particularly speech. The advanced feature that accomplishes this goal is multiple channels of wide dynamic range compression. In order to ensure that this feature is optimized for adults, the audiologist must verify that a loudness equalization prescriptive fitting target is approximated, using probe microphone measures.

Building Block #2: Comfort in Noise


Nearly all digital hearing aids on the market today use more than one type of digital noise reduction strategy to reduce various types of ambient noise. Although each manufacturer may use a proprietary name for each type of noise reduction, some examples include expansion, modulation-based noise reduction and impulse noise reduction. This article will focus on the various types of digital noise reduction (DNR) algorithms that can be adjusted (i.e., turned "on" and "off") by the audiologist. The first type that most manufacturers employ is a modulation-based noise reduction algorithm. Sound modulations are analyzed by the hearing aid's on-board signal classification system according to frequency, amplitude, and modulation depth and steepness. If the signal classification system determines the primary signal to be noise in any given channel, or if the signal-to-noise ratio in any channel is small, the gain is reduced. As you may gather, there are many variables to modulation-based noise reduction systems, and every manufacturer uses different implementations of the same core strategy.
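The sketch below illustrates the core idea under simplifying assumptions: it estimates the envelope modulation depth of one channel's signal and reduces that channel's gain when the envelope is too steady to be speech. The smoothing window, depth threshold, and maximum reduction are invented for illustration and do not represent any manufacturer's implementation.

import numpy as np

def channel_gain_reduction(channel_signal, fs, depth_threshold=0.4, max_reduction_db=10.0):
    # Crude envelope: rectify, then smooth with a ~20 ms moving average
    window = max(1, int(0.02 * fs))
    envelope = np.convolve(np.abs(channel_signal), np.ones(window) / window, mode="valid")
    # Speech envelopes fluctuate deeply at syllabic rates; steady noise does not
    depth = (envelope.max() - envelope.min()) / (envelope.max() + envelope.min() + 1e-12)
    if depth < depth_threshold:
        # The flatter the envelope, the more this channel's gain is turned down
        return max_reduction_db * (1.0 - depth / depth_threshold)
    return 0.0

fs = 16000
t = np.arange(fs) / fs
steady_noise = 0.1 * np.random.randn(fs)                            # low modulation depth
speech_like = np.sin(2 * np.pi * 4 * t) ** 2 * np.random.randn(fs)  # deep, slow envelope
print(channel_gain_reduction(steady_noise, fs))  # several dB of gain reduction
print(channel_gain_reduction(speech_like, fs))   # little or no reduction

In a real product this decision runs continuously in each compression channel, with the amount and speed of gain reduction governed by the manufacturer's signal classification system.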

In addition to modulation-based noise reduction algorithms, most manufacturers employ a second type of noise reduction algorithm in many of their products. One noise reduction strategy that appears to be gaining favor is fast-acting gain reduction designed to reduce the initial peak of an impulse sound. This type of fast-acting DNR relies on the signal classification system to identify signals of short duration and high intensity. When those types of signals are detected, the hearing aid instantaneously reduces the amplitude of the signal's peak.

When noise reduction algorithms first appeared on the market a decade ago, professionals were optimistic that their implementation would result in improved speech understanding in noise for end-users. Although this topic has been researched extensively, there is no evidence in peer-reviewed sources to suggest that current implementations of modulation-based noise reduction strategies (or any other type of digital noise reduction) improve the signal-to-noise ratio in everyday listening situations (Mueller & Bentler, 2005).

There are some studies, however, suggesting that DNR contributes to a more comfortable or relaxed listening experience for the end-user. In one peer reviewed study using paired comparisons (Ricketts & Hornsby, 2005), listeners preferred DNR turned on in both low (+6 dB SNR) and high (+1 dB SNR) levels of background noise. Over the past five years there have also been several real-world studies in non-peer reviewed journals (e.g. Powers, Branda, Hernandez, & Pool, 2006) suggesting various types of DNR algorithms make noisy listening situations more comfortable or relaxing for the end-user. The conclusion of these real-world studies would be that current implementations of DNR contribute to a more comfortable listening experience without doing any harm to the patient's perception of speech intelligibility.

Although each manufacturer's implementation of modulation-based DNR is unique, there is no evidence suggesting that one manufacturer's implementation is preferred over another (Bentler & Chiou, 2006). Given the lack of perceived differences in algorithms between manufacturers, combined with the findings that noise reduction is preferred by most end-users in conditions of both high- and low-level noise, the logical conclusion is that current implementations of DNR contribute to a more comfortable listening experience in noise. Furthermore, it can be inferred that DNR does no harm to the patient's listening experience, and therefore, should be turned "on" whenever possible.

Considering that the typical manufacturer now employs two or more types of DNR in their high-end product, exactly how an audiologist establishes expectations for the use of DNR is an important issue to consider. The Acceptable Noise Level (ANL) test seems to be a viable option that helps quantify the amount of annoyance individuals experience in noisy listening situations. Recent studies have shown that the unaided ANL is related to hearing aid usage (Nabelek, Freyaldenhoven, Tampas, Burchfield, & Muenchen, 2006) and benefit, as measured on one self-report of outcome (Taylor, 2008). The ANL can be effectively used as a pre-fitting hearing aid selection tool.

The ANL score is generated by comparing the difference between the most comfortable listening level (MCL) and the background noise level (BNL). When the calculated ANL is 13 dB or higher, there is evidence suggesting that patients have an 83% chance of not using their hearing aids (Nabelek et al., 2006). Although virtually all patients can benefit from the improved comfort in noise provided by DNR technology, the ANL score may help audiologists set more effective real-world expectations regarding the use of hearing aids with DNR.
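The arithmetic is simple enough to state directly; the sketch below computes the ANL from a hypothetical MCL and BNL and flags the 13 dB cutoff discussed above. The specific levels are invented for the example.

def acceptable_noise_level(mcl_db_hl, bnl_db_hl):
    """ANL = most comfortable listening level (MCL) minus background noise level (BNL).

    Per Nabelek et al. (2006), listeners with an ANL of 13 dB or more had
    roughly an 83% chance of becoming hearing aid non-users, so a large ANL
    signals the need for careful counseling about expectations in noise.
    """
    anl = mcl_db_hl - bnl_db_hl
    return anl, anl >= 13

anl, counsel_flag = acceptable_noise_level(mcl_db_hl=65, bnl_db_hl=48)  # hypothetical values
print(f"ANL = {anl} dB; extra counseling on expectations needed: {counsel_flag}")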

DNR performance can be measured objectively in the clinic with probe microphone measures. Although this measured performance in the lab may not equate to real-world benefit, demonstrating the effectiveness of DNR technology to patients instills confidence in their decision to purchase customized hearing aids. During the fitting procedure, the audiologist can deactivate DNR in the fitting software. While the DNR is turned "off" and the hearing aid is turned "on," simply run a REAR and store it in the probe microphone system. Next, activate the DNR and run a second REAR, paying close attention to the time it takes for the hearing aid's signal classification system to recognize the sound as noise and reduce it. The difference between the two REAR curves is the amount of attenuation the DNR system provides for the specific type of input signal you employ. The choice of input signal is critical for this probe microphone measure: the signal needs to be steady and minimally modulated, so that it is classified as noise rather than speech. This procedure is an effective way to demonstrate to patients how DNR technology works in conditions simulating real-world listening.
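For illustration, the before/after comparison can be tabulated as shown below, using hypothetical REAR values exported from a probe microphone system; the frequencies and levels are placeholders.

# Hypothetical REAR values (dB SPL), measured with a steady, noise-like input
# so that the hearing aid's classifier labels the signal as noise
rear_dnr_off = {500: 82, 1000: 85, 2000: 88, 4000: 84}
rear_dnr_on  = {500: 78, 1000: 79, 2000: 81, 4000: 80}

print("Freq (Hz)   DNR attenuation (dB)")
for freq in sorted(rear_dnr_off):
    attenuation = rear_dnr_off[freq] - rear_dnr_on[freq]
    print(f"{freq:>9}   {attenuation:>4}")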

Building Block #3: Speech Intelligibility in Noise


Directional microphone technology (DMT) has been proven to improve the signal-to-noise ratio for hearing aid users (Mueller & Ricketts, 2000; Ricketts, 2005). In simple terms, directional microphones reduce the intensity level of sounds arriving from behind and/or the sides of the patient, relative to sounds arriving from the front of the patient.

There are currently several different ways DMT can be implemented in modern hearing aids. Hearing aids with fixed directional microphones can be operated by the patient via a manual switch, allowing the patient to toggle between omni-directional and directional microphone programs. Many modern hearing aids also employ automatic and adaptive DMT. Automatic DMT relies on the signal classification system of the hearing aid to calculate the intensity level of noise arriving from the back and sides, relative to the front of the listener. Although each manufacturer's on-board signal classification system is slightly different, all automatic DMTs switch between the omni-directional and directional programs without relying on the listener to make the program change manually.

Most manufacturers also employ adaptive DMT in much of their product line. Adaptive DMT allows the directional microphone polar pattern to change depending on the location of the primary noise source, as determined by the signal classification system on board the hearing aid. Regardless of the type of DMT on the hearing aid, the directivity index (DI) is the accepted objective laboratory measure of directional microphone performance. However, there is no published peer-reviewed evidence to date indicating that either adaptive or automatic DMT outperforms its manual counterparts in everyday listening conditions (Bentler, Tubbs, Egge, Flamme, & Dittberner, 2004; Bentler, Mueller, & Palmer, 2006).
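To give a sense of what the DI summarizes, here is a small numerical sketch (not any manufacturer's algorithm) that computes the free-field DI of an idealized first-order polar pattern, B(theta) = a + (1 - a)cos(theta); an adaptive system in effect varies this pattern to steer attenuation toward the dominant noise source. The parameter values are purely illustrative.

import numpy as np

def directivity_index(a, n_points=10_000):
    """Free-field DI (dB) of the first-order pattern B(theta) = a + (1 - a)cos(theta)."""
    theta = np.linspace(0.0, np.pi, n_points)
    pattern = a + (1.0 - a) * np.cos(theta)
    # Power averaged over the sphere: (1/2) * integral of B(theta)^2 sin(theta) dtheta
    mean_power = 0.5 * np.trapz(pattern**2 * np.sin(theta), theta)
    return 10.0 * np.log10(pattern[0]**2 / mean_power)

for a, name in [(1.0, "omnidirectional"), (0.5, "cardioid"), (0.25, "hypercardioid")]:
    print(f"{name:>16}: DI = {directivity_index(a):.1f} dB")

The output is roughly 0 dB for the omnidirectional response, about 4.8 dB for a cardioid, and about 6 dB for a hypercardioid, the theoretical ceiling for a simple first-order design; real-ear DI values are lower once venting, microphone location, and head effects are taken into account, as discussed next.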

Given the large individual differences in directional microphone performance across patients, it is not possible to predict directional microphone benefit from the audiogram or speech intelligibility scores (Ricketts & Mueller, 2000). There are also considerable differences in the directivity index, depending on the space between microphone ports, venting effects, and microphone location. All of these factors must be accounted for when fitting hearing aids with DMT. Furthermore, because DMT requires extra physical space, and some patients will not wear larger hearing aids, it becomes important to know which patients can opt for smaller devices without sacrificing the potential benefits of DMT mounted in a traditional BTE or custom device. (Note: DMT is readily available in open canal (OC) products; however, their directivity index is generally half of that measured in a non-OC device.) Although DMT has the potential to benefit all patients regardless of hearing loss, the Quick Speech-in-Noise (QuickSIN) test has been shown to be a practical tool that helps the audiologist determine directional microphone candidacy as well as establish real-world expectations (Killion, 1997).

Signal-to-noise ratio loss (SNR loss) is the increase in signal-to-noise ratio required by a listener to achieve 50% correct recognition of words or sentences, compared to that of normal-hearing peers. SNR loss cannot be predicted from pure-tone thresholds or word recognition scores obtained in quiet listening conditions (Wilson, McArdle, & Smith, 2007). Therefore, SNR loss needs to be measured with a test like the QuickSIN (Etymotic Research, 2001) during the pre-fitting appointment.
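As a reference point, the sketch below scores one hypothetical QuickSIN list, assuming the standard format of six sentences (presented from +25 dB down to 0 dB SNR) with five key words each and the commonly cited scoring formula, SNR loss = 25.5 minus the total number of key words repeated correctly; the word counts are invented for the example.

def quicksin_snr_loss(words_correct_per_sentence):
    """Score one QuickSIN list (6 sentences, 5 key words each).

    25.5 minus the total words correct estimates how far the patient's SNR-50
    sits above that of normal-hearing listeners (i.e., the SNR loss in dB).
    """
    return 25.5 - sum(words_correct_per_sentence)

# Hypothetical list: the patient misses more words as the SNR becomes poorer
print(quicksin_snr_loss([5, 5, 4, 3, 2, 0]))  # -> 6.5 dB SNR loss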

Results from the QuickSIN help the audiologist determine which patients may be less likely to need DMT in real-world listening situations. For example, if the SNR loss during the pre-fitting appointment is 3 dB or less, simply making sounds audible and comfortable may be enough for the patient to experience improved speech intelligibility in noise. In other words, WDRC technology boosts the soft sounds of speech enough so that the patient's brain takes over and is able to adequately understand speech, even in the presence of noise. Patients with near-normal scores on the QuickSIN may still benefit from the use of DMT; however, it may not be an urgent need, compared to patients with poorer QuickSIN scores.

On the other hand, patients with poor scores on the unaided QuickSIN need to be informed that the DMT found on board any modern hearing aid will not be enough to overcome the unfavorable signal-to-noise ratios encountered in high levels of noise. These patients need to know that supplemental technology designed to maximize the signal-to-noise ratio (e.g., an array microphone or portable FM system) is needed if they want to carry on conversations in very noisy situations. Figure 2 outlines degrees of SNR loss and the recommended features that correspond to each.



Figure 2. Degree of SNR loss (Killion,1997).
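One way to operationalize this guidance in a counseling protocol is sketched below. The 3 dB cutoff follows the discussion above; the remaining cutoffs and wording are illustrative assumptions in the spirit of Figure 2, not values taken from it.

def recommend_features(snr_loss_db):
    """Map unaided SNR loss (dB) to an illustrative feature/counseling emphasis."""
    if snr_loss_db <= 3:
        return "audibility via WDRC may suffice; DMT optional"
    if snr_loss_db <= 7:
        return "recommend directional microphones (DMT)"
    if snr_loss_db <= 15:
        return "DMT plus consideration of remote-microphone/FM accessories"
    return "counsel that FM or array microphones are needed in high noise"

for loss in (2, 5, 10, 18):
    print(f"SNR loss {loss:>2} dB: {recommend_features(loss)}")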

Given the variability in performance across DMT, directional microphone performance must be objectively measured at the time of the fitting using probe microphone measures. This is easily done by conducting a front-to-back measure using the probe microphone equipment of choice. To complete a front-to-back measure, first run a REAR curve with the signal presented from zero degrees azimuth, then run a second REAR curve with the signal presented from the side or back of the patient. The difference between the two REAR curves is the directional microphone benefit as measured objectively in the lab. Both REAR curves must be run with the DMT turned "on."
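The front-to-back calculation itself is a simple subtraction, shown below with hypothetical REAR values; the frequencies and levels are placeholders for whatever your probe microphone system reports.

# Hypothetical REAR values (dB SPL) with the directional program active:
# one run with the signal from 0 degrees azimuth, one from behind the patient
rear_front = {500: 80, 1000: 84, 2000: 86, 4000: 82}
rear_back  = {500: 77, 1000: 78, 2000: 79, 4000: 76}

differences = [rear_front[f] - rear_back[f] for f in sorted(rear_front)]
print("Front-to-back difference by frequency (dB):", differences)
print(f"Average front-to-back benefit: {sum(differences) / len(differences):.1f} dB")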

Building Block #4: Make Loud Sounds "Loud, but Okay"


One of the goals of any fitting is ensuring that loud sounds stay below the loudness discomfort level of the patient. However, survey data suggest that this basic goal is not readily accomplished, as it has been reported that upwards of one-third of all hearing aid users are dissatisfied with "comfort with loud sounds" (Kochkin, 2002). This suggests that many audiologists are not setting the maximum output on hearing aids correctly.

When it comes to setting maximum output on hearing aids, there are two schools of thought. The first holds that maximum output can be estimated from pure-tone thresholds (Storey, Dillon, Yeend, & Wigney, 1998). Once the thresholds are measured, they are entered into the fitting software, and loudness discomfort levels (LDLs) are systematically estimated based on empirical data. In one study (Mackersie, 2007), approximately 18% of subjects had LDLs that exceeded the maximum output of the hearing aid. The author of this study concludes that aided LDLs can be measured during the fitting.

The second school of thought is that LDLs should be measured during the pre-fitting appointment using warble tones at discrete frequencies. The underlying reason for measuring unaided LDLs is the large degree of variability in LDLs across individuals (Bentler & Cooley, 2001). Once LDLs are obtained at specific key frequencies (e.g., 500 Hz and 2000 Hz), the reference equivalent threshold SPL (RETSPL) is added to the LDL value. This value is then entered into the fitting software. There is some evidence suggesting that the unaided LDL contributes to the success of the hearing aid fitting (Mueller & Bentler, 2005).
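The HL-to-SPL conversion is a one-line calculation per frequency, sketched below. The LDLs are hypothetical, and the RETSPL values are illustrative placeholders; use the values specified in ANSI S3.6 for the transducer actually used to measure the LDLs.

# Convert measured unaided LDLs (dB HL) to the dB SPL values entered into the
# fitting software: LDL (HL) + RETSPL at each frequency
retspl_db = {500: 11.5, 2000: 9.0}   # illustrative placeholders; check ANSI S3.6
ldl_db_hl = {500: 95, 2000: 90}      # hypothetical measured LDLs

for freq in sorted(ldl_db_hl):
    ldl_spl = ldl_db_hl[freq] + retspl_db[freq]
    print(f"{freq} Hz: enter {ldl_spl:.1f} dB SPL as the maximum output target")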

The technology that limits the maximum output levels in any modern hearing aid is output compression, usually called AGC-O. AGC-O has a high compression kneepoint (>70 dB SPL) and a large compression ratio (8:1 or higher). AGC-O is typically implemented in a single channel rather than multiple channels. This has important ramifications when it comes to setting the AGC-O kneepoints. Although there have been no systematic studies published on the topic, single-channel AGC-O would seem to compromise the upper ranges of the amplified speech signal, because a single kneepoint controls the entire frequency response. On the other hand, multiple channels of AGC-O allow the audiologist, in principle, to use more than one LDL score to more finely tailor the hearing aid, so that maximum output stays below the patient's LDLs across frequencies.

Another important factor to consider when setting the AGC-O kneepoint just below the patient's unaided LDL is how the hearing aid manufacturer actually uses the unaided LDL to establish maximum output. According to some recently published data, there is a 5 to 15 dB difference in maximum output across manufacturers when the same unaided LDL scores are entered into the fitting software. Furthermore, the maximum output measured in a 2cc coupler also differs considerably depending on whether the LDL entered is measured directly or predicted from the thresholds.

Given the high degree of variability of both unaided LDL scores and how each manufacturer uses this data to set maximum output in its hearing aids, audiologists would be wise to do the following in order to ensure that patients do not experience discomfort with loud sounds:
  1. Know how your favorite hearing aids use both predicted and measured unaided LDL data to establish maximum output. This can be done by entering both types of scores into the manufacturer's software and measuring the results in a 2cc coupler. Coupler data can then be compared to expected results (LDL + RETSPL).

  2. Measure the unaided LDL at 500 and 2000 Hz, using the IHAFF loudness contour chart and an ascending procedure. Take the LDL score, add the RETSPL at each respective frequency and enter this number into the fitting software. Be sure to account for any differences in expected results per the guidelines of #1 above.

  3. Using probe microphone equipment, conduct a RESR measure during the fitting appointment. The RESR can be conducted by running a REAR curve using a 90 dB warble tone sweep, ensuring that the aided response stays just below the patient's measured LDLs (a simple sketch of this check appears after this list).

  4. In order to account for any measurement errors, it is a good idea to cross-check the RESR by conducting an aided LDL test. This can be completed using the same IHAFF ascending procedure mentioned previously and an 85 dB SPL input signal of your choosing.

  5. Because laboratory measures of aided loudness discomfort do not always correspond to loudness discomfort in real-world listening conditions, audiologists are encouraged to incorporate self-reports of aided loudness satisfaction into follow-up appointments. Sections of the APHAB (Cox & Alexander, 1995) or the Profile of Aided Loudness (PAL) (Palmer, Mueller, & Moriarty, 1999) can be used for this purpose.
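As referenced in step 3, the frequency-by-frequency check can be tabulated as follows; the RESR values and LDL-based output ceilings below are hypothetical and simply show the kind of comparison intended.

# Compare the measured RESR (REAR run with a 90 dB warble-tone sweep) against
# the patient's LDL-based maximum output targets in dB SPL (all values hypothetical)
max_output_target = {500: 106.5, 1000: 104.0, 2000: 99.0, 4000: 98.0}
measured_resr     = {500: 101.0, 1000: 103.0, 2000: 101.0, 4000: 95.0}

for freq in sorted(max_output_target):
    margin = max_output_target[freq] - measured_resr[freq]
    status = "ok" if margin >= 0 else "lower the AGC-O kneepoint/MPO here"
    print(f"{freq} Hz: RESR {measured_resr[freq]:.0f} dB SPL, "
          f"ceiling {max_output_target[freq]:.0f} dB SPL ({status})")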
Evidence-Based Hearing Aid Design

One question remains: how much negative word-of-mouth regarding hearing aids is performance driven, and how much is process-quality driven? There is ample data, published over two decades ago, suggesting that the product quality of a modern hearing aid is sufficient to overcome much of the disability associated with mild to moderately-severe sensorineural hearing loss; therefore, it is in the best interest of audiologists to implement an evidence-based approach when making important clinical decisions.

Over the past few years, manufacturers have had to focus on evidence-based design in order to provide proof that their marketing claims are accurate. In essence, this means that manufacturers must have proof, either from well-designed laboratory or real-world studies, indicating that their implementation of various hearing aid features actually benefits patients. The downloadable spreadsheet below provides an overview of the most current laboratory and real-world evidence for many of the current implementations of common features found in modern hearing aids. The data outlined here give the audiologist the best available evidence to help make decisions regarding most of today's advanced hearing aid features. It is also important to note that every major manufacturer offers all of the features (with slightly different specific implementations and different proprietary names) shown in the spreadsheet.

Click here to view the spreadsheet.

Conclusion

This article summarizes the four essential building blocks of hearing aid technology and reviews several pre-fitting and fitting procedures that need to be conducted by the audiologist in order to maximize benefit and satisfaction. Following a pre-fitting clinical protocol that utilizes the ANL test, the QuickSIN, and LDL measures is likely to contribute to a better selection decision. Additionally, if probe microphone measures are used at the time of the fitting to verify that prescriptive fitting targets are met and to objectively measure the performance of advanced features, patients are much more likely to feel they are getting their money's worth from their hearing instruments. Considering that recent survey data continue to suggest that one in six hearing aids is not used, it is in the best interest of both the profession and the patient to incorporate evidence-based thinking into the selection and fitting process.

References

Bentler, R. & Cooley, L. (2001). An examination of several characteristics that affect the prediction of OSPL90. Ear & Hearing, 22, 3-20.

Bentler, R., Tubbs, J., Egge, J., Flamme, G., & Dittberner, A. (2004). Evaluation of an adaptive directional system in a DSP hearing aid. American Journal of Audiology, 13(1), 73-79.

Bentler, R. & Chiou, L. (2006). Digital noise reduction: An overview. Trends in Amplification, 10(2), 67-82.

Bentler, R., Mueller, H.G., & Palmer, C. (2006). Evaluation of a second-order directional microphone hearing aid I: Speech perception outcomes. Journal of the American Academy of Audiology, 17(3), 179-189.

Cox, R.M., & Alexander, G.C. (1995). The abbreviated profile of hearing aid benefit. Ear & Hearing, 16(2), 176-186.

Hawkins, D.B., & Cook, J.A. (2003). Hearing aid software predictive gain values: How accurate are they? The Hearing Journal, 56(7), 26-31.

Keidser, G. & Grant, F. (2001). Comparing loudness normalization (IHAFF) with speech intelligibility maximization (NAL-NL1) when implemented in a two-channel device. Ear and Hearing, 22(6), 501-515.

Killion, M. (1997). SNR loss: I can hear what people say, but I can't understand them. The Hearing Review, 4(12), 10-14.

Kochkin, S. (2002). 10-Year Customer Satisfaction Trends in the US Hearing Instrument Market. The Hearing Review, 9(10), 14-25, 46.

Mackersie, C. (2007). Hearing aid maximum output and loudness discomfort: Are unaided loudness measures needed? Journal of the American Academy of Audiology, 18(6), 504-514.

Mueller, H.G. & Ricketts, T. (2000). Directional microphone hearing aids: An update. The Hearing Journal, 53(5), 10-19.

Mueller, H.G. (2005). Fitting hearing aids to adults using prescriptive methods: An evidence-based review of effectiveness. Journal of the American Academy of Audiology, 16(7), 448-460.

Mueller, H.G., & Bentler, R. (2005). Fitting hearing aids using clinical measures of loudness discomfort levels: An evidence-based review of effectiveness. Journal of the American Academy of Audiology, 16(7), 461-472.

Nabelek, A.K., Freyaldenhoven, M.C., Tampas, J.W., Burchfield, S.B., & Muenchen, R.A. (2006). Acceptable noise level as a predictor of hearing aid use. Journal of the American Academy of Audiology, 17(6), 626-639.

Niquette, P., Gudmundsen, G., & Killion, M. (2001). QuickSIN Speech-in-Noise Test (Version 1.3). Elk Grove Village, IL: Etymotic Research.

Palmer, C.P., Mueller, H.G., & Moriarty, M. (1999). Profile of aided loudness: A validation procedure. The Hearing Journal, 52(6), 34-42.

Powers, T., Branda, E., Hernandez, G., & Pool, A. (2006). Study finds real-world benefit from digital noise reduction. The Hearing Journal, 59(2), 26-28.

Ricketts, T. & Mueller, G. (2000). Predicting directional hearing aid benefit for individual listeners. Journal of the American Academy of Audiology, 11, 561-569.

Ricketts, T. (2005). Directional microphones: Then and now. Journal of Rehabilitation Research and Development, 42(4), Supplement 2, 133-144.

Ricketts, T. & Hornsby, B. (2005). Sound quality measures for speech in noise through a commercial hearing aid implementing digital noise reduction. Journal of the American Academy of Audiology, 16(5), 270-277.

Storey, L., Dillon, H., Yeend, I., & Wigney, D. (1998). The National Acoustic Laboratories' procedure for selecting the saturation sound pressure level for hearing aids: Experimental validation. Ear and Hearing, 19, 267-279.

Taylor, B. (2008). Predicting real world benefit from the ANL test. The Hearing Journal, (in press).

Villchur, E. (1973). Signal processing to improve speech intelligibility in perceptive deafness. Journal of the Acoustical Society of America, 53, 1646-1657.

Wilson, R.H., McArdle, R.A., & Smith, S.L. (2007). An evaluation of the BKB-SIN, HINT, Quick-SIN, and WIN materials on listeners with normal hearing and listeners with hearing loss. Journal of Speech, Language, and Hearing Research, 50(4), 844-856.

Woods, W., Van Tasell, D., Rickert, M., & Trine, T. (2006). SII and fit-to-target analysis of compression system performance as a function of number of compression channels. International Journal of Audiology, 45(11), 630-644.

Yund, E.W., & Buckles, K.M. (1995). Discrimination of multi-channel compressed speech in noise: Long-term hearing in hearing-impaired subjects. Ear and Hearing, 16(4), 417-427.

Brian Taylor, AuD

Director of Practice Development & Clinical Affairs

Brian Taylor is the Director of Practice Development & Clinical Affairs for Unitron. He is also the Editor of Audiology Practices, the quarterly publication of the Academy of Doctors of Audiology. During the first decade of his career, he practiced clinical audiology in both medical and retail settings. Since 2003, Dr. Taylor has held a variety of management positions within the industry in both the United States and Europe. He has published over 30 articles and book chapters on topics related to hearing aids, diagnostic audiology, and business management. Brian is the co-author, along with Gus Mueller, of the textbook Fitting and Dispensing Hearing Aids, published by Plural, Inc. He holds a Master's degree in audiology from the University of Massachusetts and a doctorate in audiology from Central Michigan University.


