Providing Amplification to the Aging Auditory System
Donald J. Schum, PhD
January 24, 2011

Editor's note: This is an edited transcript of the live expert e-seminar. To view the course recording, click here.


This course is part of a four-part series of seminars on AudiologyOnline entitled "Myths, Assumptions and Other Good Ideas About the Way We do Hearing Aid Work." The recordings of the other courses in the series can be viewed on AudiologyOnline. I wanted to include a course on aging in this series because, as audiologists, we obviously spend a lot of our time dealing with older patients. They are our classic hearing aid patients. Oftentimes we do not spend a lot of time understanding aspects of aging that may influence the way we do hearing aid work. In this course I want to take a look at the aging patient from two perspectives, the way the auditory system works and the way the cognitive system works, to highlight the fact that we are dealing with a larger system than just the ears. When you take a look at the larger system and how it can change with age, it may very well impact your clinical decisions as far as selecting and fitting hearing aids.

While we were in school, whether in a genetics class or maybe in a speech-language pathology class where we learned about syndromes, we may have come across the term FLK, which stands for funny looking kid. At first the term may seem cruel or flippant, but people who work with pediatrics from a medical perspective quickly realize that FLK is a very useful term. It indicates that there is something in the appearance of a child that makes you suspect a syndromic involvement, and it causes your professional radar to go up. A couple of years ago, I was reading through some material on healthcare in the elderly and I came across the term TMB, a term used by other healthcare professionals who work with older individuals. TMB stands for too many birthdays. Like FLK, it sounds somewhat flippant at first, but it is used for a very particular reason, as a good shorthand form of communication from one professional to another. It refers to the fact that there will be things that are not "normal" about an older person when you define "normal" in terms of what you would expect in a young adult. However, when you consider that an older adult has been around for 70 or 80 years and that the systems in the body change over time, wear out, and start to break down, perhaps what you are seeing are just the signs of normal, natural aging. What you observe in an older patient as "not normal" may not necessarily be clinically significant or threatening to their overall health. It may be just that their body is starting to break down as a result of the aging process. I think that is a very important observation to make when we talk about doing hearing aid work with older patients. There are things that will change about the body in a normal and natural way over the years that could influence the way we go about our work.

In this talk, I want to look at aging from both an audiology perspective and from a cognitive perspective. There are audiological aspects of aging that we do not often talk about in our profession, and I will discuss those here, but most of this presentation is going to cover the cognitive implications of getting older. How do patients' sensory or neurological systems change with time, especially in regard to speech understanding and communication interactions? And, how do those changes affect the way we might do hearing aid work with these patients? The goal is to try to broaden our mindset as a field to remember that we may be dealing with some older patients who may have more going on than just a classic sensorineural hearing loss.

Aging: The Audiology Perspective

Let's start by talking about the audiological aspects of aging, and what in particular we should be paying attention to with our older patients. When we do hearing aid work, one of our core assumptions is the assumption of full audibility. This means that we assume that with all else being equal, the more audibility we can provide the patient's auditory system via the use of amplification, the better the patient is going to be able to hear and understand speech across a range of situations. This assumption is perfectly valid, and it guides much of our clinical practice. However, it does not necessarily explain everything about the aging auditory system that we need to recognize as audiologists.

Let's dig a little deeper into this assumption. Consider a person with normal hearing who has average hearing thresholds of 0 dB SPL and average uncomfortable loudness level (UCL) of approximately 100 dB SPL, so they have a 100 dB dynamic range to work with.

When you consider the task of listening to speech throughout a given day, from the softest speech levels to the most intense speech levels, a person with normal hearing is going to use almost the entire dynamic range. Throughout the day, we can be challenged with speech coming in at all different levels, so we're using most of our dynamic range. One of the basic principles of sensorineural hearing loss is that, while hearing thresholds are increased, UCL doesn't necessarily change much, especially for people in the mild to moderate hearing loss range. As a clinician, when your patient has a classic sensorineural hearing loss, you're going to be dealing with a reduced dynamic range. When we go about doing hearing aid work with these patients, one of the approaches we use is to take that speech range and use nonlinear amplification to map it into the patient's residual dynamic range. This is the basic description, familiar to all of us, of nonlinear amplification for sensorineural hearing loss.
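The arithmetic behind that mapping can be sketched in a few lines. This is only an illustration of the idea; the function names and example levels are mine, not from the talk, and real fittings apply frequency-specific, level-dependent versions of the same principle:

```python
def compression_ratio(speech_min, speech_max, threshold, ucl):
    """Static compression ratio needed to map the range of everyday
    speech levels onto a listener's residual dynamic range.
    All values are levels in dB SPL; numbers used below are illustrative."""
    return (speech_max - speech_min) / (ucl - threshold)

def map_level(level, speech_min, speech_max, threshold, ucl):
    """Remap an input level (dB SPL) linearly in dB, so that speech_min
    lands at the threshold and speech_max lands at the UCL."""
    ratio = compression_ratio(speech_min, speech_max, threshold, ucl)
    return threshold + (level - speech_min) / ratio

# A listener with a 65 dB threshold and a 100 dB UCL has a 35 dB
# residual range; fitting a 30-100 dB speech range into it requires
# a 2:1 compression ratio (70 dB in, 35 dB out).
ratio = compression_ratio(30, 100, 65, 100)
```

With these example numbers, the softest speech just reaches threshold, the loudest just reaches the UCL, and everything in between is squeezed by the 2:1 ratio.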

It's Not Just About the Number of Hair Cells

In regard to the elderly, however, the basis for this approach needs closer examination. The full-audibility approach is based on a simplified assumption about sensorineural hearing loss that has been used for the last 10 to 15 years; that is, that sensorineural hearing loss is basically a matter of how many inner hair cells and how many outer hair cells are lost. This assumption does not take into account the other aspects of sensorineural hearing loss that we know can occur. The simplified assumption looks at the typical uncomfortable loudness (UCL) pattern as a function of hearing loss. We know that UCL does not change much with mild and moderate hearing loss. From moderate through profound loss, the UCL starts to increase, but it does not increase at the same rate as the hearing thresholds, so you end up with an ever-decreasing dynamic range. The part of this assumption that oversimplifies what may be happening in many elderly patients is in regard to hair cell loss. We assume that up to about 50 dB HL thresholds, or moderate hearing loss, there is primarily a loss of outer hair cell function with essentially intact inner hair cells. We assume that as you move beyond moderate hearing loss into the severe and profound range, there is more and more damage to the inner hair cells. The nice thing about this simplifying assumption is that it accounts for the relationship between threshold and UCL, or LDL (loudness discomfort level) as it was referred to historically.
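The average threshold-to-UCL pattern just described can be written as a simple piecewise rule. The knee point and growth rate below are round numbers chosen for illustration, not published norms:

```python
def estimated_ucl(threshold_hl, base_ucl=100.0, knee=50.0, growth=0.5):
    """Average UCL (dB HL) as a function of hearing threshold:
    roughly flat up to ~50 dB HL, then rising more slowly than the
    threshold itself. Parameters are illustrative round numbers."""
    if threshold_hl <= knee:
        return base_ucl
    return base_ucl + growth * (threshold_hl - knee)

def dynamic_range(threshold_hl):
    """The residual dynamic range shrinks as the loss grows, because
    the threshold climbs faster than the UCL does."""
    return estimated_ucl(threshold_hl) - threshold_hl
```

Under these toy parameters, a 40 dB loss leaves a 60 dB dynamic range while an 80 dB loss leaves only 35 dB. The individual data discussed next show how far real patients can stray from any such average curve.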

Individual Variability

Kamm, Dirks & Mickey (1978) tracked the relationship between average hearing threshold and UCL, which they referred to as LDL. Their data show that UCL does not change much as a function of hearing threshold until about 50 dB HL, and then it slowly increases. This relationship has been replicated in many other studies throughout the years. The reason that I mention the Kamm and colleagues (1978) study is that, in addition to looking at average data, they included individual data. What you see from their data is that there is a tremendous amount of variability in loudness perception and in the perception of uncomfortably loud levels across the broad range of ages of patients with sensorineural hearing loss. Loudness perception is one of the core psychoacoustic dimensions that we talk about in our field, and the problem with using an average or median "best fit" for all patients is that it does not account for individual variability. As audiologists, we assume that the best fit accounts for everything that we know is going on with each patient. In fact, the individual data can vary so much from the median or average data that it should immediately cause us to start thinking that there is more going on physiologically with these patients than hair cell loss. It's not as simple as counting up the number of hair cells that are missing, and that's what I want to get at when we consider the aging auditory system.

Focusing simply on inner and outer hair cell loss in regard to sensorineural hearing loss assumes that all the other structures and functions within the inner ear and auditory system are working fine. The reality is that this might not always be the case. As you know, the auditory system is complex, and more than just hair cells are required for hearing to occur. In order for the hair cells to function properly, the membrane they sit upon and the membranes that interact with them have to be intact and functioning properly. There has to be the right electrochemical balance in the fluids surrounding the hair cells. They need an intact neural structure to send the message up the line in a normal fashion, and many other complex processes have to be happening at the same time. You can't necessarily be certain that all of this is going on normally in each patient. Take a patient who comes into your office, and you measure a moderate hearing loss in the mid to high frequencies. You cannot assume that the only thing that has changed structurally or physiologically is the way the hair cells are firing. There are a lot of other things that could be going on.

Types of Presbycusis

Back in the 1970s, Dr. Harold Schuknecht (1974) published some information about presbycusis that we probably all learned in our studies as audiologists. However, we may have conveniently stopped paying attention to this information over the years since counting hair cells seemed to do a fine job of explaining sensorineural hearing loss and how to fit nonlinear technology. What Dr. Schuknecht's work tells us is that hair cell loss alone does not explain everything that can be going wrong physiologically with a patient with sensorineural hearing loss.

He pointed out that there are likely four types of presbycusis: sensory, metabolic, mechanical and neural. Sensory presbycusis refers to what we've been discussing so far: the sensitivity of the hair cells. Back in the 1970s, the relationship between inner and outer hair cells was not fully understood; however, Schuknecht was correct to point out that in order for the hair cells to fire appropriately, the basic neural mechanism has to work well. He also pointed out that you can lose hair cells as you get older. Importantly, there are a lot of other things that can go wrong within the auditory system as we get older. There could be metabolic changes.

As you know, endolymph and perilymph surround the neural structures within the inner ear. The electrical balance between the two fluid spaces has to be correct in order for the actual nerve firing to take place. Physical changes in a person's body can throw off that chemistry. You can have a disorder where the auditory system has the full complement of inner and outer hair cells, but still is not responding normally because the conditions around them are not right. There can be mechanical changes within the auditory system, for example, changes in stiffness and elasticity. The inner ear system requires a very smooth coordinated movement of the physical structures within it. If there are mechanical changes that impact those structures' mobility, they can also impact the ability of the auditory system to do its job. Finally, there could be neural changes in the auditory system. There can be loss or dysfunction of the neural structures that carry the information from the inner hair cells to the central auditory system. It is well recognized that in an aging auditory system, changes to neural structures can alter their ability to transmit information in a normal manner.

These changes can account for some of the variability we see in sensorineural hearing loss. Some of the variability may be related to other non-sensory aspects of presbycusis as well, but my point is that our simplified model of inner and outer hair cell function is insufficient. This doesn't necessarily mean that we should change the way we do hearing aid work, but it is a reminder that there are other things that can go wrong in the auditory system that we may want to consider.

Aging: The Cognitive Perspective

Now let's focus on the cognitive side of the puzzle. As audiologists, we don't spend a lot of time understanding what cognitive changes we expect as a person gets older but it is valuable to do so. One of the ways that I like to describe the effect of sensorineural hearing loss, especially in complex listening environments, is as the loss of the ability to organize sound. Patients don't report not hearing enough in noisy settings. Rather, they talk about hearing too much. They do not hear more than a person with normal hearing is hearing; in fact, they probably hear less because of their sensitivity loss. When they talk about hearing too much, they are reporting the loss of ability to effectively keep individual sounds apart from each other. This is what I mean by the ability to organize sound, and it is based on the psychoacoustic effects of sensorineural hearing loss.

One of the things that may also contribute to our older patients' difficulty hearing in noise is the ability of the aging cognitive system to handle demanding organizational tasks. When we refer to complex listening environments where our patients struggle the most to hear, there can be a lot of different things happening. There could be more than one talker, and the person with hearing loss wants to listen to one talker and ignore the other. There could be movement of the different sound sources. Maybe the person speaking is moving; maybe the person listening is moving; perhaps they are both moving as they walk together. Competing noise sources could also be moving. All of these variables can make the listening task much more challenging. In addition, there can be stable non-speech sources such as blowers from air conditioning or traffic going by. Traffic is a non-speech sound, but it continuously changes in spectrum, level and location, which can challenge a cognitive system that is already highly tasked. There can be other distractions in a complex listening environment that make it difficult to pay attention to the conversation, such as visual distractions or a shift in focus. In communication situations where multiple people are involved in a conversation, the person who is providing the speech signal might change from time to time, and the listener has to shift focus to follow along. In a good conversation with multiple people, you expect that to happen; you don't want one person doing all the talking. As the focus of the conversation changes, it places an extra demand on the cognitive system. It is another challenge for someone who already has trouble organizing the sound environment because of sensorineural hearing loss. So when we talk about speech understanding in the elderly, it is important to understand that it is very much a cognitive task and that the challenge to the cognitive system might be great.

This goes back to the fact that we are dealing with more than just a set of ears when we do hearing aid work with a patient; we are dealing with a total system. We have to remember that hearing is not independent of the rest of the nervous system. Whatever happens in the peripheral auditory system is eventually encoded and sent to the central nervous system. When we manage patients clinically, we need to keep in mind that peripheral hearing difficulties and the aging nervous system are closely linked, and we need to factor that knowledge in as we set goals for each person's listening success.

Normal Aging: Typical Changes

Aging causes a variety of changes in the body. These include changes in motor skills, as well as in sensory sensitivity and acuity. Aging causes changes to short-term memory, as well as to the ability to get information in and out of memory quickly. Sensorimotor reaction time changes, and this is one of the reasons why older individuals often drive slower; they are dealing with a lot of information and are afraid their reaction time is not going to be as quick as it was when they were younger. Processing and decision speed tend to slow down as we age. In other words, as we age the ability to manage a fast flow of information slows down, such as when we're driving down an unfamiliar highway, trying to read the road signs and decide where to get off and where to turn. Selective attention, or the ability to pay attention to one thing while ignoring others, also declines with aging. This can be a factor in complex listening situations where competition and distraction play a role.

Normal Aging: Abilities That Do Not Change

It is also important to recognize the things that do not typically change as a person gets older. One is long-term memory. With the normal aging process, when there are no specific neurological issues, older individuals are able to retrieve things from memory. You can sit down with your grandparents and have a conversation about when they were children, and they remember the details. They may not be able to recall those details as quickly as they used to, but once they do, the memories are as vivid and as real as ever. Intelligence doesn't change; a person's intelligence quotient (IQ) doesn't seem to erode over time. Linguistic skills do not change either. A person is just as sophisticated a user of language when they're older as when they are younger. Short-term memory, however, may be an issue; the ability to recall things of a short-term nature can change with aging. The ability to remember a phone number may change, not necessarily because core skills are breaking down but rather because the ability to use the memory part of the system has been compromised.

Neural Slowing

Experts in aging explain these changes with the neural slowing hypothesis. When you consider the cognitive implications of aging and how an older person performs tasks, it doesn't appear that long-term memory or linguistic issues are at play; rather, the ability for things to happen quickly within the nervous system seems to slow down. As you know, the auditory system requires a lot of synchronized and rapid firing. Good ABR waveforms result from highly synchronized, very rapid responses of the auditory nervous system to acoustic stimuli. The neural slowing hypothesis holds that neural events don't happen as rapidly, which is why short-term memory is affected, sensorimotor reaction time slows, and processing and decision speed decrease. The aging neurological system does not operate as quickly and efficiently as it did in the past.

Real-Time Speech Understanding

A tremendous amount of processing is required to understand speech. When we measure word recognition in the audiology clinic, we don't necessarily assess processing. Real-time speech understanding is not about repeating words in a sound booth, which is a sort of artificial challenge to the cognitive system. Word recognition testing challenges the sensory system to properly encode the stimuli, but it is not a particularly sensitive measure of how well someone can process that information once it gets into the central system. Real-time speech understanding is the ability to extract meaningful information from ongoing conversation. In conversation, people want to get information, think about what is being said, react to it, and share information. It is far more than decoding the individual words that you hear.

Speech understanding in real time is on-going; to be an effective part of the conversation, you have to continuously keep up with the flow. Normally, it is effortless to follow the back and forth of a conversation; you typically don't have to put specific effort into decoding the words that are being said. Instead, you're spending more time interpreting what is being said, deciding what you're going to say, producing what you're going to say, committing things to memory and doing many other things in addition to just decoding the words.

When you're a listener in a conversation, you are typically at the mercy of the person who is producing the speech to decide the rate of information flow. In other words, speech understanding is usually externally paced. It is driven by the person doing the talking. Occasionally, patients with hearing loss are willing to be assertive enough to try and control the pace of information coming at them because they have trouble keeping up. In reality, however, our social behavior and mores dictate that a listener does not try to control the way another person talks. A person who is older, has hearing loss and has trouble keeping up in the conversation is not going to be prone to try to control the situation. They will not typically say, "Can you please slow down? You're talking too fast. I really can't keep up." It is just not something that happens in our culture. In this way, speech understanding to some degree is related to social behavior.

Models of Speech Understanding

If you take a look at a simplified model of speech understanding, the central theme is a bottom-up process, where acoustic information enters the auditory system and is interpreted by the system as phonemes, words and sentences. The interpretation is driven by the acoustic information. What we know about real speech understanding, however, is that the listener uses a lot of centrally-mediated information to help interpret what is being said. Initially, a listener might be very dependent on acoustic information because he or she has no idea what the speaker is going to say, but once the speech starts flowing, the listener starts using other cues to help interpret what is being said. Cues may include word frequency, phonemic information, sentence structure, stress pattern, gesture and other situational cues.

We draw on centrally-stored information to help predict what a person is going to say and to interpret what they're saying as they're saying it. By the time a speaker nears the end of a sentence, we have a good idea what they're going to say. By the end of the sentence, what we're really doing is confirming that the speaker is saying what we thought he was going to say, rather than simply sitting passively waiting for him to say it. A number of very interesting experimental paradigms have been used to confirm that kind of behavior.

Here is an example of the cognitive processing that we use during conversation. End this sentence for me: "Please pass the ________." Which word did you insert at the end? Milk? Salt? You probably thought of something that related to the everyday situation of people sitting around a table and asking for something that was on the table. Maybe it was wine, or potatoes, or napkins. But typically, when you got to the end of this sentence, you had the final word narrowed down to a short list of nouns that are typically found on a table. What happens if I say "Please pass the slow moving truck"? You probably have a very different scenario in your mind now. Perhaps you envision driving down a two lane highway with a slow moving farm vehicle in front of you, and the sentence is uttered as instructions for the driver to pass the truck. When you didn't know that the sentence "Please pass the _____" was taking place in a car driving down the road, you immediately jumped to some sort of dinner table setting. The words that would come in at the end of that sentence are based on that situation. This demonstrates just how much active interpretation is happening during listening; it's much more than merely decoding words.

Here's another example, one that involves sensory memory. If I say "Please drop the clothes in the basket... (pause) at the dry cleaners," what happened? When you hear "Please drop the clothes in the basket," you interpret the sentence one way, i.e., that I am telling you to gather a pile of clothes and drop them in a basket. But when you hear the final phrase "at the dry cleaners," you rearrange your interpretation of the sentence. The basket isn't where I want you to put the clothes; rather, "in the basket" is a modifier, a qualifier, for the word clothes. You didn't have any trouble reinterpreting that sentence when you heard the final phrase. Most people have very flexible cognitive systems and are able to switch their interpretation very quickly. But again, this demonstrates how much active interpretation you do when you're listening. Listening requires the ability to move information within your cognitive system, to use it, and to store things temporarily. It requires you to hold a temporary interpretation of meaning. It requires you to do a lot of cognitive processing beyond just decoding acoustic information, and it is very much dependent on an efficiently-timed cognitive system.

Do Elderly Patients with Sensorineural Loss Have Neural Slowing?

The question then arises, "Is there evidence that the cognitive systems of people who are older and have sensorineural hearing loss don't work as effectively?"

I was involved in a project a number of years ago (Schum & Mathews, 1992) that used a sentence test called the SPIN test. The SPIN test was not commercially released, but it was a good research tool. It used two types of sentences: highly cued sentences and low-context sentences. The subject's task was to repeat the last word of each sentence, similar to the "Please pass the ____" exercise. An example of a highly cued sentence would be "The watchdog gave a warning growl." By the time you get to the last word of the sentence, because of its highly-cued nature, you have a pretty good idea of what it will be. An example of a low-cued sentence would be "Jon was talking about the growl." The normative data on the SPIN, which were collected across a broad age range of listeners, show a predictive relationship between high-context items and low-context items. You expect to see better performance with high-context items because the cues enable the last word of the sentence to be predicted pretty easily. We were interested in determining whether older individuals could use the contextual information efficiently.

In general, the majority of the data from the older individuals fell within the range of expected performance based on the normative data. However, there was a significant minority of older patients whose results fell below expected performance. In other words, there was significant statistical evidence showing that some older individuals could not use contextual cues as efficiently as younger individuals. This was perfectly consistent with the neural slowing hypothesis - that as people age their ability to efficiently and effectively decode information starts to slow down.
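The analysis just described boils down to comparing each listener's high-context score with the score the norms predict from their low-context score. A minimal sketch of that comparison follows; the regression slope, intercept, and tolerance are invented for illustration and are not the published SPIN norms:

```python
def uses_context_normally(low_score, high_score,
                          slope=0.6, intercept=40.0, tolerance=10.0):
    """Compare a listener's high-context score (% correct) with the
    value predicted from their low-context score by a normative
    regression line. Returns False when the listener falls well below
    the prediction, i.e., is not getting the expected boost from
    sentence context. Parameters are illustrative, not real norms."""
    predicted_high = intercept + slope * low_score
    return high_score >= predicted_high - tolerance
```

Under these toy parameters, a listener scoring 50% on low-context items is predicted to score about 70% on high-context items; one who manages only 55% would be flagged, like the minority of older subjects who fell below expected performance.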

Relationship Between Cognitive Abilities and Speech Understanding in Noise

If we believe the evidence that some older individuals can't use their cognitive systems as effectively, then the question is, "Does that mean they can't understand speech as well?" Thomas Lunner (2003) examined the relationship between core cognitive ability and speech understanding. He measured cognitive ability using a variety of tests, for example, memory span. He demonstrated a general relationship between the ability to understand speech in noise and memory span; that is, the better the memory span, the better the person was able to handle background noise and the better the speech understanding ability in noise.

Another measure of cognitive ability that Lunner used was lexical decision speed. Lexical decision speed is how quickly a person is able to make judgments about words, that is, interpret words as appropriate or not. This is a cognitive linguistic task. Lunner found a relationship between lexical decision speed and speech understanding in noise. The time it took for a person to make a cognitive judgment about the nature of a word reflected the person's ability to understand language in the presence of background noise.

This was one of the first important sets of data to indicate the relationship between cognitive breakdowns or normal cognitive changes within the system and basic speech understanding skills. This link between core speech understanding in noise performance and cognitive processing skills is a relevant issue that we should pay attention to as audiologists.

Does that Mean a Hearing Aid Will Not be Useful for the Elderly Patient?

The evidence suggests that cognitive processing is related to basic speech understanding. Does that mean that an older person who may have cognitive processing issues can't get good use out of hearing aids? If you work with hearing aids, you realize this is probably not the case, because there are many, many successful older hearing aid users. Still, it is important to look at the data.

Cognitive Performance and Hearing Aid Benefit

Adrian Davis (2003) studied the question of hearing aid benefit as a function of cognitive processing. He compared unaided and aided performance using traditional speech understanding measures. He also used two particularly sensitive cognitive measures to determine whether patients who did well or poorly on the cognitive performance measures could benefit from hearing aids. He grouped the subjects into four categories: those who showed low performance on both cognitive measures ("low, low"), those who showed high performance on both ("high, high"), those who showed low performance on one measure and high on the other ("low, high"), and vice versa ("high, low").

What he found was that, whether the patient was good or poor at the cognitive performance tasks, they were still able to benefit from hearing aids. Consistent with the data from Lunner (2003) was his finding that basic speech understanding performance was poor for patients with poor cognitive processing skills. So, persons who had low performance on both cognitive tests had poorer unaided word recognition performance than patients who had high performance on both. However, the benefit that hearing aids provided these patients didn't really vary depending on their cognitive processing; everyone benefited. That answered another very important question, which was whether the "low, low" patients can still benefit from amplification, and the answer is yes. It is just that the level they're starting from affects the final aided performance they can expect to achieve. Hearing aids are still a good idea for elderly patients, but each patient's ultimate level of success is going to be affected by the efficiency of their cognitive processing system.

Compensation Strategies

The final topic that I want to talk about today is compensation strategies that you can implement in your work with older patients to take into account some of the cognitive processing changes we've discussed.

Patient Education and Realistic Expectations

The first strategy relates back to the findings from Davis (2003), and it involves patient education and establishing realistic expectations. As we discussed, the data suggest that hearing aids are still going to be useful, but the ultimate level of performance might be modulated downward to some degree if the patient's basic cognitive processing skills are not as sharp as they were when the patient was younger. That is not necessarily an easy conversation to have with someone. However, there has to be some way to talk about the task of listening and how complex that task can be. It requires a lot of skills other than just good hearing in order to do it well. Further, some of the basic changes that happen with aging, such as changes in memory and response time, can also affect speech understanding. It is not going to be any mystery to an older patient that their memory or decision speed is not as sharp as it used to be. It is important, however, to remind the person that those skills are also related to speech understanding. Although it is not an easy conversation to have, it is a conversation that may need to take place.

Excellent Speech Processing (preserve information)

Another compensation strategy is to ensure that these patients have very good signal processing in their hearing aids, because they can be easily thrown off by a cluttered signal. Because sensorineural hearing loss makes speech understanding more difficult, and because these patients are easily distracted by information that is extraneous or unclear, you must ensure they are getting as much clean information as possible. In our hearing aid programming software at Oticon, depending on the patient's age, we make predictions about how efficient their cognitive system should be and we adjust the compression system accordingly. We know that patients who are older and have less effective cognitive processing skills cannot handle an overly compressed signal very well. Perhaps there is too much information in a highly compressed signal for them to handle, or too much information that has been modified from its natural state. We recommend using as uncluttered a signal as possible, and using slower-acting time constants in the compression system is one way to do that. So there is every reason to try to get very good amplification on these patients because of the need to keep things as clear and unchallenging as possible.
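The effect of slower time constants can be illustrated with a simple envelope-follower sketch. This is an illustrative toy, not Oticon's actual compression algorithm; the sample rate and the attack and release values are arbitrary assumptions chosen only to show the contrast between fast- and slow-acting behavior.

```python
import math

def smoothing_coeff(time_constant_s, fs):
    """One-pole smoothing coefficient for a given time constant (seconds)."""
    return math.exp(-1.0 / (time_constant_s * fs))

def envelope(signal, fs, attack_s, release_s):
    """Track signal level with separate attack and release time constants."""
    a_att = smoothing_coeff(attack_s, fs)
    a_rel = smoothing_coeff(release_s, fs)
    env, out = 0.0, []
    for x in signal:
        level = abs(x)
        # Use the attack coefficient when level rises, release when it falls.
        a = a_att if level > env else a_rel
        env = a * env + (1.0 - a) * level
        out.append(env)
    return out

fs = 1000  # 1 kHz sample rate keeps the example small
# A burst: 100 ms at full level, then 200 ms of silence
sig = [1.0] * 100 + [0.0] * 200

fast = envelope(sig, fs, attack_s=0.005, release_s=0.050)
slow = envelope(sig, fs, attack_s=0.005, release_s=0.500)

# The slow-release envelope decays far less during the gap, so a
# compressor driven by it changes its gain less and the natural
# level contrasts of the signal are better preserved.
print(fast[-1], slow[-1])
```

The point of the sketch is simply that a fast-acting system chases every dip in the signal, reshaping the natural amplitude contour, while a slow-acting system leaves that contour closer to its original state, which is the "less cluttered" signal discussed above.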

Signal-to-Noise Ratio Improvement

You should take every opportunity to improve the signal-to-noise ratio for these patients. Any technology that can be used to improve the direct signal-to-noise ratio is going to make it much easier to listen and understand in noise. While this is true for all patients with sensorineural hearing loss, it is especially crucial when you are dealing with an older patient who may also have cognitive challenges associated with aging and is faced with the task of listening and conversing in real time. Although FM use is not a popular option for adult hearing aid users, this is an opportunity to make the case to a patient with cognitive processing changes who may really stand to benefit from the assistance. Directional microphone technology also plays a role in improving signal-to-noise ratio and may be considered.
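To make the idea concrete, here is a minimal sketch of the standard decibel signal-to-noise calculation. The power values are hypothetical and chosen for round numbers; the 10x improvement shown for a remote-microphone (FM-style) system is illustrative only, not a measured benefit of any particular product.

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels from linear power values."""
    return 10.0 * math.log10(signal_power / noise_power)

# Speech and background noise at equal power gives 0 dB SNR, a very
# difficult condition for an older listener.
baseline = snr_db(1.0, 1.0)

# A remote microphone delivers the talker's voice directly, raising the
# effective speech power relative to the room noise. A tenfold power
# advantage corresponds to a 10 dB SNR improvement.
with_remote_mic = snr_db(10.0, 1.0)

print(baseline, with_remote_mic)
```

Even a few decibels of improvement at the input can matter more for these patients than for younger listeners, because it reduces the residual cleanup work that would otherwise fall on a slower cognitive system.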

Fully Automatic Design

It makes sense to use hearing aids that offer fully automatic features for these patients. In challenging and complex listening situations, they are already trying to manage a lot of information just to keep up with the conversation. You do not want to add to their tasks by requiring them to make decisions about what their hearing aids should be doing. While some of our patients may be able to handle a multi-memory hearing aid with one program for quiet, one program for noise, and a third program for another special situation, you do not want to add to the patient's plate when there are cognitive processing issues. An automatic hearing aid system can more effectively manage the incoming information on behalf of the patient. Some hearing aid companies tend to focus on multiple programs using remotes and buttons on the hearing aid, while others focus on fully automatic design. For the older patient who is cognitively challenged, you can make a strong case for a very smart automatic hearing aid design that manages the acoustic situation for the patient.

Pacing & Complexity of Message

The messaging issue is a good opportunity to involve the patient's family members in the treatment process. You can point out that when there is a lot of information flow that the patient has to deal with, it can become a challenge. One of the things family members can do is pay attention to how quickly they bring information to the patient, such as how fast they talk, how often they change topics, or how complex and detailed they make their descriptions. They need to slow down and be more direct so that they are not taxing the person's cognitive system with relatively complex linguistic constructions. While it is hard to change the way we talk, it is important to increase the sensitivity of family members to the fact that the way they choose to produce speech can affect the person's ability to interpret their message.

Clear Speech

One of my final recommendations is clear speech training. Clear speech is a very specific speech production technique. Picheny, Durlach and Braida (1985; 1986) coined the term clear speech, defined it as an approach to improving communication, and documented its acoustic changes and their positive effect on word intelligibility. You can train family members to produce clear speech. Clear speech training is one of the very specific training techniques that makes a lot of sense to use with elderly hearing aid users, because it helps ensure that the information they must decode is acoustically clear and linguistically undemanding.

Follow-up Programs

The final recommendation is to have the patient and their family members consider follow-up programs. When I talk about follow-up programs, I am not necessarily talking about group training. While some practices may find a group training model that works well, there are not good practical group training models that will work in all practice settings. Group training is great if it can be implemented in your practice, but follow-up programs can also be web-based or self-paced so that patients and families can conduct them at home. For example, the LACE (Listening and Communication Enhancement) program and other similar programs work on very specific listening tasks to make the person more effective in decoding information, and can be done on the patient's computer at home. When a person has the dual challenge of both sensorineural hearing loss and changes in their ability to process information, follow-up programs can be a very good way for them to improve their listening skills, especially in complex situations.

Concluding Remarks

Hearing is much more than the ability of your peripheral auditory system to decode phonemes. Hearing is the way you use your auditory skills to be part of a modern world. As a patient gets older, the efficiency with which their auditory system and cognitive system work can change. Changes in these systems have very specific implications for how useful and effective speech conversations are for these patients. My goal was to raise awareness of these issues and to give you some insights that you can use when working with elderly patients. There are variables in addition to hearing loss that can affect the hearing aid user's ultimate communication success. I hope that some of the intervention strategies I provided will give you some useful clinical tools to use in the future.


Davis, A. (2003). Population study of the ability to benefit from amplification and the provision of a hearing aid in 55-74-year-old first-time hearing aid users. International Journal of Audiology, 42(Suppl 2), 2S39-2S52.

Kamm, C., Dirks, D., & Mickey, M. (1978). Effect of sensorineural hearing loss on loudness discomfort level and most comfortable loudness judgments. Journal of Speech and Hearing Research, 21, 668-681.

LACE, Listening & Communication Enhancement. Available from Neurotone.

Lunner, T. (2003). Cognitive function in relation to hearing aid use. International Journal of Audiology, 42(Suppl 1), S49-S58.

Picheny, M., Durlach, N., & Braida, L. (1985). Speaking clearly for the hard of hearing I: Intelligibility differences between clear and conversational speech. Journal of Speech and Hearing Research, 28(1), 96-103.

Picheny, M., Durlach, N., & Braida, L. (1986). Speaking clearly for the hard of hearing II: Acoustic characteristics of clear and conversational speech. Journal of Speech and Hearing Research, 29(4), 434-446.

Schum, D., & Matthews, L. (1992). SPIN test performance of elderly hearing-impaired listeners. Journal of the American Academy of Audiology, 3, 303-307.

Schuknecht, H. (1974). Pathology of the Ear. Cambridge, MA: Harvard University Press.


Donald J. Schum, PhD

Vice President of Audiology and Professional Relations, Oticon

Don Schum currently serves as Vice President for Audiology & Professional Relations for Oticon, Inc. Prior to his position at Oticon in Somerset, Don served as the Director of Audiology for the main Oticon office in Copenhagen, Denmark. In addition, he served as the Director of the Hearing Aid Lab at the University of Iowa, School of Medicine (1990-1995) and as an Assistant Professor at the Medical University of South Carolina (1988-1990). During his professional career, Dr. Schum has been an active researcher in the areas of hearing aids, speech understanding, and outcome measures. (B.S. in Speech & Hearing Science, University of Illinois; M.A. in Audiology, University of Iowa; Ph.D. in Audiology, Louisiana State University.)
