
Cognitive-based Assessment of Signal Processing
Donald J. Schum, PhD
February 27, 2012

Editor's Note: This is a transcript of a live Expert e-seminar. To view the recorded course, please visit: /audiology-ceus/course/hearing-evaluation-adults-hearing-aids-adults-cognitive-based-assessment-signal-processing-19659
A pdf of the slide presentation is available for download and contains additional images and material used in the live seminar.


Today's talk will address futuristic thoughts about the way we can assess the effect of signal processing for patients with sensorineural hearing loss. The way we have been testing the effect of amplification for patients has not evolved all that much over the last couple of decades, so it is about time that it did. As the nature of signal processing becomes more sophisticated and the differences between various signal processing options become more subtle, it is of interest to many of us in the field to investigate new ways of evaluating its impact on the patient. One strategy that has attracted recent attention in the literature is to measure cognitive effort, or listening effort. We know that speech understanding is a cognitive process. What happens in the peripheral auditory system is only one part of a much larger set of processes that happen during communication.

Understanding speech in realistic situations, participating in conversation, and making the most out of communication are much more complex tasks than those we measure in the clinic. Digging deeper into listening effort has attracted considerable attention from certain research groups. I want to talk about the nature of cognitive-based assessment of signal processing, give you some ideas about how this has been done in a research setting, and then discuss where this could potentially impact our clinical processes.
Why Study Cognitive Effort?

When we talk about measuring speech understanding, we have a limited viewpoint based on how and what we assess in the clinical environment. We have, for significant reasons, tried to control many of the extraneous variables that affect assessment of speech understanding. When we test under headphones in a very controlled noise environment, we minimize the variability of measures in order to have some sort of common language that we can use when assessing patients.

The original purpose of speech understanding testing was much more diagnostic, and that diagnostic influence has stayed with us over the years. However, when we instead want to examine the effect of intervention for the patient, especially the effect of hearing aid signal processing, the controlled test environments do not reflect real life. Testing with monosyllabic words in speech-shaped noise or clinical babble does not tap into the entire process of what speech understanding is like in true environments. This is why there has been an interest in speech understanding going beyond our traditional clinical measures. We, as audiologists, are not the only ones interested. For example, there is a lot of work from both commercial and military flight groups studying how much information can be absorbed by pilots while they are in a true, complex flight situation. In a realistic situation where you have a military or civilian pilot flying, they do not have the opportunity to get rid of all the distracters and extraneous variables in the task environment. That is how they have to operate. There is significant interest in understanding just how much information the human observer can take in and still perform a task effectively.

Although speech understanding in a complex environment is not as crucial as a pilot handling multiple sources of information while still flying effectively, the parallels to our situation are applicable. We expect patients with sensorineural hearing loss who are struggling with impaired signals from the auditory system to perform at levels that approach those of normal-hearing individuals. It is important to understand exactly what we can expect for these patients. When you think about how a patient describes their hearing, or the way parents talk about the effect of hearing loss on their child, one of the descriptors that comes up is auditory fatigue.

It takes a lot of work to be a listener. Some of the subjective reports we hear are: "I feel tired at the end of the day"; "When I come home I just want to take my hearing aids out"; "I need a break"; "My child seems irritable"; "My child seems overwhelmed towards the end of the day and is more of a behavior problem than earlier in the day." There are a lot of different ways that people with sensorineural hearing loss describe its effect throughout the course of a day.

Many people in our field, me included, believe that what these individuals are actually describing is the effect of listening fatigue. The task of listening to speech in truly complex situations throughout the course of the day through an impaired end organ is difficult. It takes a lot of resources to compensate for the effects of the sensorineural hearing loss, and one thing that all audiologists know is that amplification is only a partial solution to the effects of sensorineural hearing loss. We can restore audibility, and under certain situations we can improve the signal to noise ratio, but at the end of the day that is still an impaired encoding system. The notion that hearing aid technology can totally compensate for that impairment is off base. It will never be that way, so understanding the effect of listening throughout the course of the day through that impaired system is of interest to many of us.

One of the things to remember is that listening is normally automatic and effortless for those with normal hearing. As you are sitting here today listening to this talk, assuming that you are listening through a good sound system and English is your native language, this is probably pretty effortless for you. Maybe you are also having lunch. Maybe you are checking your Facebook status at the same time. Maybe you are doing this or that, but to understand what I am saying and make sense of it at the same time is likely not a tremendous burden for you. However, say something goes wrong with the sound system. If the Internet starts to act up and my voice cuts in and out, if you are listening in a noisy office, or if you are trying to answer critical e-mails and things start to pile up on you, then listening and understanding what I am saying no longer remains automatic and effortless. It may start to become more of a challenge to you. Because our patients are listening through a permanently-impaired end organ, there will always be a certain amount of distortion added to the signal.

There are a number of situations where the individual has to put in extra effort in order to understand what is being said. Depending on the nature and degree of the sensorineural hearing loss, all speech understanding may be a challenge to them. They may even have full audibility and be well-served by amplification, but despite all of our best efforts clinically, many will still have to work at decoding that signal, even when it is cleaned up and presented in the best possible way from an engineering standpoint. That may not show up on a word recognition test or during the decoding of an individual sentence. These patients might be able to have a short conversation with someone and seem to understand and do well. But the sum total of the extra effort has to start catching up with them eventually, and this is a burden that they have to deal with.

Over the years, listening effort has been studied in a variety of ways. Typically, it is measured as a rating of how much effort a person has to put into listening and understanding. More traditional methods of quantifying listening effort have looked at the relationship between speech understanding and listening effort. If you test a normal-hearing individual on a word recognition test and vary the signal-to-noise ratio (SNR), you get a very typical curve (Figure 1). As the SNR improves, the person with normal hearing will show an improved speech-understanding score up to the point where they plateau, and a higher SNR yields no further improvement in the score. People with sensorineural hearing loss require a better SNR than people with normal hearing to achieve the same levels of word recognition performance.



Figure 1. Percent correct on word recognition as a function of signal-to-noise ratio (increasing left to right) for listeners with normal hearing (blue line) versus those with sensorineural hearing loss (red lines).

Some patients may need only a 5 dB SNR improvement in order to perform as well as normal-hearing individuals, and once you get above that SNR, the function rises in a similar way to the function of a normal-hearing listener. It will then plateau close to 100%. Other listeners with sensorineural hearing loss may require a higher SNR (perhaps somewhere around 8 or 9 dB). There are also some people who may never plateau. These are the patients who, even in perfectly quiet situations, do not understand what is being said. It is just the nature of the hearing loss that they are dealing with, and it is not a matter of signal-to-noise ratio. It is simply a limitation of their auditory system and the effects of sensorineural hearing loss.
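To make the shape of these functions concrete, here is a minimal sketch in Python, assuming a logistic psychometric function; the midpoint, slope, and ceiling values are hypothetical placeholders chosen to mimic the pattern in Figure 1, not fitted data.

```python
import numpy as np

def percent_correct(snr_db, shift_db=0.0, ceiling=100.0, slope=0.5):
    """Logistic psychometric function: percent words correct vs. SNR (dB).

    shift_db - extra SNR a listener needs relative to normal hearing
    ceiling  - plateau level; below 100 for listeners who never reach 100%
    slope    - steepness of the rising portion (per dB)
    """
    midpoint = -2.0 + shift_db  # hypothetical 50%-correct SNR for normal hearing
    return ceiling / (1.0 + np.exp(-slope * (snr_db - midpoint)))

snrs = np.arange(-10, 16)
normal = percent_correct(snrs)                                   # normal hearing
shifted = percent_correct(snrs, shift_db=5.0)                    # needs ~5 dB better SNR
no_plateau = percent_correct(snrs, shift_db=9.0, ceiling=70.0)   # never reaches 100%
```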

If we go back to the normal-hearing case and look at how patients would rate the effort they put into a listening task, we find that at very poor SNRs, even when the normal-hearing listener gets some of the words correct, they still might rate their effort as very high (Figure 2).



Figure 2. Subjective report of listening effort reported by normal-hearing listeners as a function of signal-to-noise ratio (SNR increasing left to right).

On the right axis, note that effort is graphed with high effort at the bottom and low effort at the top. There is a range of situations where the person might get some words correct, but still put in a tremendous amount of effort. As the SNR gets better and the word recognition score improves, the person has to put in less and less effort, to the point where they will get 100% correct, or plateau at a high level. They may be able to understand every word that is said, but they have to work very hard at doing that. If you further improve the SNR, the amount of effort required for the task starts to lessen. Just because a person is getting 100% correct does not mean they are not working hard to get it. It may be that a little more improvement in SNR will lessen how much work they have to do to maintain that level of speech understanding.

This is the same for people with sensorineural hearing loss. They may also have to put in a high level of effort to achieve a score nearing 100%, and they need further SNR improvements to get to the point where they do not feel like they are putting in extra effort to hear and understand speech. Patients who have poor word recognition or never reach 100% may never be in a situation where effort becomes a non-factor. In other words, they may have to put extra effort into speech understanding no matter how good the SNR, which is, again, often the nature of sensorineural hearing loss. Throughout the course of the day, someone who is showing relatively good performance on a word recognition task might be working very hard, whereas the person with normal hearing is not working hard at all in that situation. That is one of those hidden effects of sensorineural hearing loss. The person may seem to hear and understand, but they are working very hard to maintain that level.
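Continuing the sketch above, one way to capture this dissociation is to place the effort function several dB to the right of the intelligibility function, with an optional floor for listeners whose effort never becomes a non-factor. Again, every parameter value here is a hypothetical illustration, not measured data.

```python
import math

def listening_effort(snr_db, shift_db=0.0, floor=0.0, slope=0.35):
    """Rated listening effort (100 = maximum, 0 = none) as a function of SNR (dB).

    The midpoint sits several dB above the SNR where word recognition has
    already plateaued, so a listener can score 100% correct while still
    reporting considerable effort. floor > 0 models listeners for whom
    effort never becomes a non-factor, no matter how good the SNR.
    """
    midpoint = 4.0 + shift_db  # hypothetical "moderate effort" SNR
    raw = 1.0 / (1.0 + math.exp(slope * (snr_db - midpoint)))
    return floor + (100.0 - floor) * raw

for snr in (0, 5, 10, 15, 20):
    print(f"SNR {snr:>2} dB: effort {listening_effort(snr, shift_db=5.0):5.1f}")
```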

Researchers who are interested in listening effort want to know what happens in these situations, for example, where the normal-hearing person can cruise by and the hearing-impaired individual scores well but has to work really hard. We want to understand the hidden effects in those situations. Just because you identify a word correctly for the task does not mean that you are not stealing resources from somewhere else. In other words, perhaps you are affecting performance in some other dimension. Measuring cognitive effort is an attempt to home in on the perceptual phenomena that may be occurring. If the person has to work that hard to understand what is being said, what gets sacrificed as a result? What else can increase the cognitive effort required?

Beyond Hearing Loss

We talked about the distortional aspects of sensorineural hearing loss, but sensorineural hearing loss is not the only factor that increases effort on the part of the listener. One well-documented factor is that as a person ages they will start to slow down, or experience a lack of efficiency within the information processing system. Cognitive slowing is the term that is typically used for this. We are not referring to a person who is suffering from a clinically relevant cognitive disorder like dementia or Alzheimer's, but to someone who may be 60, 70 or 80 years old and whose nervous system does not work as efficiently as it once did. There are physiological changes in the way neural signals flow through the central auditory system that result in practical changes in how efficiently information can be processed. In an older person who has sensorineural hearing loss, the effort they have to put in to hear and understand might be driven by the nature of the sensorineural component, but also by the fact that, as an information processing machine, they are not as fast as they once were.

Let's think about a child who is in the process of developing language. For adults who grew up with normal hearing and no other particular disorders, language development happens naturally. You can process information effectively and you have the linguistic skills needed to be a good communication partner. For a child in the process of learning language, however, understanding and developing linguistic abilities requires much more effort. If you combine the extra effort it takes to be a communication partner without a full grasp of language with the presence of hearing loss, then you have multiple factors working against you. We know that children with sensorineural hearing loss can be at risk for slowed language development. Slowed language development is a problem in and of itself, but it can also be a contributing factor to the amount of effort the child has to put into the task of hearing and understanding. Any other learning issues that a child might have will also be a hurdle. The question again is, "What else gets sacrificed in the effort to hear and understand?"

Speech understanding is a cognitive activity; it involves more than simply decoding words. You use speech information to learn to appreciate and react to new things. Speech understanding by itself is meaningless; it is the information that it brings you that is important. If you have to work very hard to understand each and every word, the other things that you do with your cognitive system, such as interpretation, planning a response, making decisions about what was said, and trying to commit that information to long term memory, can be sacrificed if you are spending cognitive energy to simply decode the words. It is a general observation that most individuals have finite cognitive resources. In other words, you cannot do an unlimited number of things with your brain at the same time; there is a limit to the amount of multi-tasking any one person can do. That is one of the reasons why there is interest in military pilots' performance, because in those aircraft they are bombarded with a lot of information that they need to perform their job.

If sensorineural hearing loss is taxing your cognitive system and aging is causing short-term memory encoding problems, then the whole process breaks down. The stress of being a listener starts to go up. Remember that listening is active and often purposeful. In other words, we do not listen passively in most cases. If you are just sitting and listening passively, at some point you do not retain anything. Normally, to retain something and make it worth understanding, you have to actively respond. You have to commit the information to memory and interpret it. Whether you listen to music or listen to speech, you are doing things with the information. You are not simply sitting there like a microphone letting information passively move through you. It is this active process that might be sacrificed when decoding the information becomes more difficult.

Kathy Pichora-Fuller (2003) makes the observation that you can think of speech understanding as meaningful integration of information. That includes memory, interpretation, evaluation and reaction. In other words, you do things with what you have heard in most communication situations. The general processing resources are finite and allocable. As I said, you have a limited amount of processing resources, and, to some degree, you can decide how you are going to use those processing resources. You can decide to focus on one particular task at a time. For example, if you are driving in the car and carrying on a conversation and all of a sudden the traffic gets heavy or the weather starts to get bad, what you end up doing is applying more of your attention to the task of driving safely, and your ability to converse under these conditions will suffer, as it should. As the driving situation becomes more complex and more challenging, you can decide just how much effort you have to put in to continue to drive safely. That will vary depending on the situation.

One of the things Pichora-Fuller (2003) noticed experimentally is that the identification of speech will be prioritized over memory or other deep-level processing. That makes sense, because you cannot remember what someone said if you cannot understand what they said. If you struggle to identify what is being said, then your ability to commit it to memory or do other deeper-level processing starts to fall apart. In terms of information processing, you can demonstrate a differential effect of the speech understanding problems that go along with sensorineural hearing loss.

Sensorineural Hearing Loss: More Than Meets the Ear

What is sensorineural hearing loss? We normally talk about sensorineural hearing loss as a loss of sensitivity, because that is the way we measure it. We use audiometric thresholds, and then we talk about sensorineural hearing loss in terms of what the patient has lost or how much their sensitivity to sounds has changed. But when you talk to patients with sensorineural hearing loss, how do they describe their problems in noisy situations? They are usually not talking about what they do not hear; usually they talk about the opposite. They talk about the fact that they hear too much, and therefore have trouble understanding as a result.

I like to describe sensorineural hearing loss as the loss of the ability to organize sound. In complex situations, even with the presence of technology on the patient's ears, you can easily overwhelm the patient with too much sound. Now, the patient is not getting more sound than the person with normal hearing, so it is not that you are overloading them. The difference is that because of the nature of the peripheral hearing disorder and the loss of accurate encoding of the information, that patient simply cannot organize all the sources and sounds. They might lose the sense of where the different sounds are coming from, and they may not be able to differentiate one speech stream from another. They might also not be able to differentiate speech from non-speech sounds. You run into a situation where all that sound is suddenly available to the person with sensorineural hearing loss. You fully compensate for the person's loss of audibility with amplification, but they cannot use that information in a meaningful, organized way. There are many cognitive models that describe this.

One model reminds us that what the patient wants at the end of the day is a figure and a ground. In other words, if they are listening in a complex environment, they want to be able to focus on the most important part of that soundscape: the conversation that they are trying to follow. All the other sound in the environment goes into the background. It becomes ignored and it becomes part of the subconscious layer of information. The thing about this model is that it says that, at some layer below our conscious layer of interpretation, our cognitive system performs grouping of information. One of the things that we have clearly learned about the cognitive system is that there is a tremendous amount of work that is accomplished behind the scenes. When it is working well, it can take a lot of complex auditory or visual information coming in and make sense of it by assigning meaning. It assigns reality to the sources of information and tries to create groupings of sounds.

How, then, should information be presented to the brain? Remember the auditory system is only partly related to the ears. The ears are the encoding organ, but listening happens in the brain. One of the ways to answer that question is to take a look at two road signs (Figure 3). What sort of road sign are you likely to see driving down the road? The one in the upper left includes stylized fonts. It may look cute, but it is not really effective as a road sign. The one in the lower right is more typical of a sign you see while driving. The font is clear and distinct so that no extra processing effort is required to understand the message. When you are driving down the road at 65 miles per hour, it is not good to spend extra effort to decode a road sign. In fact, there is a whole science around road sign fonts, and a lot of effort has been put into determining the most effective fonts to use on road signs. We do not typically conduct this type of perception assessment in audiology. We typically do not pay attention to very fine details or how much work it takes to understand information as it is flowing through the auditory system. It is important that we begin to recognize the nature and effect of competing information.



Figure 3. Examples of the same road sign in a fancy, stylized font (upper left) and a standard font (bottom right).

Understanding in Noise

What is noise? Not all noise is created by Auditec of St. Louis. The specialized noises we employ in our clinical testing materials were made very precisely in order to have control over certain signal conditions. They are sometimes thought to provide an indication of what speech perception is like for patients in real, complex environments. However, our materials do not come anywhere near capturing all the sources of information that a person has to deal with when they are communicating in realistic situations. Imagine a situation where you are sitting around a table ordering lunch in a restaurant. You are having a conversation with someone across the table. For that moment in time, that person is the signal, and all the other conversations going on around the table, including the waiter taking someone else's order, are noise. As soon as that waiter steps up to take your order, the person who used to be the signal now becomes the noise. The waiter who used to be the noise now becomes the signal. This is a small illustration of how communication can be very complex. The source of information and the competition can change constantly.

The most difficult competition that patients experience is when people are talking simultaneously. There are limits to how much we can expect technology to be able to solve these types of problems, and it is important to understand the impact on perception in linguistically complex situations with competition. It takes effort to ignore things. In other words, because sensorineural hearing loss creates a poor signal and the peripheral auditory system cannot encode it clearly enough to distinguish one sound from another, extra effort is required in order to distinguish speech from the person of interest from competing sources of speech. Competing voices have a greater impact on the speech that you are trying to listen to than a non-linguistic signal. The easiest noise in which to listen is speech-shaped noise. The next easiest is clinical babble, where you cannot really make out any of the words, and the most difficult competition is linguistically-realistic competition. This is because the competing content is running speech, not just babble. It is harder for the cognitive system to suppress competition with linguistic content.

Measuring Listening Effort

How has listening effort been measured over the years? Both immediate and long-term measures can be used. Immediate measures mean that I give you a task. For example, I want you to listen to a passage and fly an airplane in combat at the same time. Sometimes it is important to know on a moment-by-moment basis how much impact the tasks put on your cognitive processing. These are very short-term measures. An example from our field would be to have a person repeat a sentence and then somehow measure how much effort it took for them to hear and understand that particular sentence. The other way to measure the effect is over a long-term basis, maybe throughout the course of a day. The most common way of measuring listening effort in our field is to have the patient subjectively rate how much effort they are putting in. Typically, these ratings are done after relatively short periods of time, such as after every sentence or paragraph. They are not day-long measures, but they are longer than one word at a time.

Some clinics use a scale on which the patient rates the amount of effort they put into listening to a particular passage or phrase. It can be hand-written or computerized; it does not matter. Many different scales have been used over the years. Figure 4 shows some data from work we did comparing two different hearing aids on a speech-in-noise task. We used a rating scale where the patient was given a blank white bar with one end identified as "no effort" and the other end as "maximum effort." They were given a listening test and asked to draw a line across the bar representing how much effort they felt they were putting into the task. There are specific psychometric reasons why you do a test like this, but the idea is that you want to get an unbiased measure of their effort. We filled in the words "moderate effort" and "considerable effort" after the measures had been made. This is a bit different than using a scale where the words are actually on the scale. There are many reasons why you might want to use one over the other.



Figure 4. Comparison of listening effort between two digital hearing aids.

We made a comparison of effort between two different hearing aid models, and patients judged significantly less effort when tested with one hearing aid model than with the other (Figure 4). We made another comparison for super-power users of a newer versus older hearing aid. This time the scale ranged from 0 to 100 rather than from "no effort" to "maximum effort."
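As a concrete illustration of how such a blank-bar (visual analog) rating might be scored, here is a minimal Python sketch; the bar length and mark positions are hypothetical, and real studies would define their own measurement and averaging conventions.

```python
def vas_score(mark_mm, bar_length_mm=100.0):
    """Convert a patient's mark on a blank bar ("no effort" at 0 mm,
    "maximum effort" at bar_length_mm) into a 0-100 effort score."""
    if not 0.0 <= mark_mm <= bar_length_mm:
        raise ValueError("mark must fall on the bar")
    return 100.0 * mark_mm / bar_length_mm

# Hypothetical marks (mm) made after three test passages with one hearing aid:
ratings = [vas_score(m) for m in (62.0, 71.5, 68.0)]
mean_effort = sum(ratings) / len(ratings)
print(f"mean rated effort: {mean_effort:.1f} / 100")
```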

Dual-task paradigms have also been used to measure listening effort. You can measure how much effort it takes to do a particular task by measuring the effect on a secondary task. For example, let's go back to the driving situation. Let's say your job is to drive your car safely, and then you also carry on a conversation. As the traffic gets heavier or the weather gets worse, you still might be able to drive your car as safely as before, but it takes more of your attention to do that, and therefore you may not be able to participate in the conversation to the same extent. You would not be able to measure this extra effort simply by measuring how well you can drive the car, because you are still doing that task well. However, you could measure the effort it takes to drive the car safely (primary task) by measuring how well you carry on the conversation (secondary task). The quality and effort of the conversation should drop off if you have to put more effort into trying to drive safely.

I was involved in a project where we gave patients the task of listening to SPIN sentences in noise while performing a secondary task (Schum & Matthews, 1992). Subjects would hear a sentence like "John was talking about the growl," in the presence of background noise, while sitting in front of a computer monitor that changed color. Their primary task was to repeat the last word of the sentence. Since the last word of the sentence was highly cued by the other information in the sentence, this was a good way of ensuring that they were paying attention to the whole sentence. The secondary task was to acknowledge when the computer monitor changed color. They kept their finger over a key on the keyboard, and every time the screen changed from one color to the other they pressed the key as quickly as they could. The idea behind this was that if the primary task (repeating the word) was very easy, they should be able to very quickly perform the secondary task (indicating the color change on the computer). They were instructed to pay attention to the primary task. As the primary task gets more difficult, the speed with which they can respond to the screen change should slow down. Dual-task paradigms have been used in many studies over the years. In these studies, the reaction time on the secondary task changes as a function of the difficulty of the primary task.
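Here is a minimal Python sketch of the bookkeeping behind such a dual-task trial, with simulated reaction times standing in for real keypress data; the trial counts, timing values, and the listening_load_s slowing term are hypothetical illustrations, not values from the study.

```python
import random

def dual_task_block(n_trials, base_rt_s, listening_load_s, seed=0):
    """Simulate one block of the dual-task paradigm: each trial pairs a SPIN
    sentence with a color change on the monitor at a random moment, and the
    reaction time (RT) to the color change is the dependent measure.

    listening_load_s stands in for the slowing produced by a harder primary
    (listening) task; in a real experiment, RT = keypress time - change time.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        change_time = rng.uniform(0.5, 2.5)            # probe onset within the trial (s)
        rt = base_rt_s + listening_load_s + rng.gauss(0.0, 0.05)
        trials.append((change_time, max(rt, 0.15)))    # floor out impossibly fast RTs
    return trials

def mean_rt(trials):
    return sum(rt for _, rt in trials) / len(trials)

easy = dual_task_block(20, base_rt_s=0.35, listening_load_s=0.00)  # favorable SNR
hard = dual_task_block(20, base_rt_s=0.35, listening_load_s=0.12)  # difficult SNR
print(f"easy: {mean_rt(easy):.3f} s, hard: {mean_rt(hard):.3f} s")
```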

We looked at the performance of younger hearing-impaired individuals against a group of elderly hearing-impaired individuals (Schum & Matthews, 1992). I was interested in the cognitive slowing of the elderly, and I thought this might be a good way of teasing out some of those effects. The subjects all had similar audiograms, regardless of age. The subjects were required to push a button on a keypad, and I knew the older individuals were going to be slower overall because their motor skills are slower, so I did an individual baseline to account for that effect. I then measured how quickly they could perform the secondary task depending on how difficult the primary task was.
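One simple way to implement that individual baseline correction, sketched below, is to subtract each subject's reaction time to the color change alone (no listening task) from their dual-task reaction times; subtraction is one plausible correction, and the study may have handled it differently.

```python
def baseline_corrected_rt(task_rt_s, baseline_rt_s):
    """Remove an individual's simple motor speed from a dual-task RT.

    baseline_rt_s is the subject's RT to the color change with no listening
    task; the difference isolates the cognitive cost of the primary task
    rather than age-related motor slowing.
    """
    return task_rt_s - baseline_rt_s

# Hypothetical older listener: slower overall, but the corrected cost is
# what gets compared across age groups.
print(f"{baseline_corrected_rt(task_rt_s=0.61, baseline_rt_s=0.42):.2f} s cognitive cost")
```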

The younger individuals all had relatively fast reaction times and were grouped closely, but there was more variability with the older individuals, as we expected. Some individuals showed a very significant impact on reaction time when the listening task became more difficult, whereas others were able to perform just as quickly as the younger patients.

The study I just described used a reaction-time paradigm. Another way you can measure listening effort is by measuring memory rather than reaction time. We assume memory is sacrificed when you exert more cognitive effort: your ability to encode information into memory becomes worse. An example of how memory can be used as a measure of effort is as follows: subjects are given four SPIN sentences in a row, and they have to repeat the last word of each sentence. At the end of the fourth sentence, they are asked to repeat back the last words of all four sentences. Not only do they have to pay attention to each sentence as they hear it, but they also have to keep a running list of words in memory. Instead of measuring reaction time, this is a memory task.
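Scoring such a running memory span could look like the sketch below; the order-free scoring rule and the example words are my own assumptions for illustration.

```python
def score_running_span(presented_final_words, recalled_words):
    """Score the memory variant: after four SPIN sentences, the listener
    recalls the final word of each. Returns (n_correct, proportion),
    crediting recall regardless of order (an assumed scoring rule;
    order-strict scoring would be a stricter alternative)."""
    presented = {w.lower() for w in presented_final_words}
    recalled = {w.lower() for w in recalled_words}
    n_correct = len(presented & recalled)
    return n_correct, n_correct / len(presented)

# Hypothetical trial: "growl" comes from the example sentence above.
print(score_running_span(["growl", "bread", "chair", "rain"],
                         ["bread", "rain", "growl"]))  # -> (3, 0.75)
```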

In addition to rating scales and dual-task paradigms, the third major way of measuring cognitive effort is via physiological measures. The idea is that if you have to work hard at something, it is going to show up physiologically, even if it is a cognitive task. In other words, mental effort is still effort, and there are physiological effects of mental effort. Let me give you a few examples of how this has been studied. One of the measures of interest in these studies is pupil size. There is a group out of the Netherlands that has been studying cognitive effort during speech perception using pupil size (Zekveld, Kramer, & Festen, 2011). We know that when humans are in a stressful situation and working extra hard on a cognitive task, their pupils get larger, and you can measure this with the right equipment. You might see differences in how much work a person has to do in order to hear and understand speech in a realistic situation. This expended effort may not show up in word recognition testing, but it may very well show up as a change in a physiological response.
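A common pupillometry summary is the task-evoked dilation relative to a pre-stimulus baseline. The sketch below shows that arithmetic for a single trial; the windowing details are illustrative assumptions, not the published method of Zekveld and colleagues.

```python
def peak_pupil_dilation(trace_mm, baseline_window, trial_window):
    """Task-evoked pupil response for one trial.

    trace_mm        - pupil diameter samples (mm), one per eye-tracker frame
    baseline_window - (start, end) sample indices before sentence onset
    trial_window    - (start, end) sample indices during/after the sentence

    Larger dilation relative to baseline is read as greater cognitive load.
    """
    b0, b1 = baseline_window
    t0, t1 = trial_window
    baseline = sum(trace_mm[b0:b1]) / (b1 - b0)
    return max(trace_mm[t0:t1]) - baseline

# Hypothetical 10-sample trace around one sentence:
trace = [3.1, 3.0, 3.1, 3.2, 3.5, 3.8, 3.9, 3.7, 3.5, 3.4]
print(f"{peak_pupil_dilation(trace, (0, 3), (3, 10)):.2f} mm dilation")
```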

A few years ago at Vanderbilt, Candace Hicks and Anne Marie Tharpe (2002) looked at responses in school children after accumulated stress throughout the course of the day. They measured salivary cortisol levels. As you become stressed, your body releases more cortisol, which prepares the body to manage the stress. You can measure cortisol levels to see whether more cortisol is being released throughout the course of the day, presumably because you are under more stress. They found no significant difference in the amount of cortisol between the hearing-impaired children and the normal-hearing children, but they noticed that all children showed lower cortisol levels at the end of the day than at the beginning of the day. This was a physiological measure that, in this particular study, did not differentiate hearing impairment from normal hearing, but it was another way of looking at the issue of listening effort.

One of the newer physiological measures that I think is very interesting is the galvanic skin response. There used to be an old-fashioned technique for catching malingerers that you may have learned in graduate school. It is no longer used, but the theory and physiology still apply. When you are under stress (for example, lying), you see changes in the galvanic skin response. A group out of MIT in Boston created a consumer device to measure stress levels as reflected by the galvanic skin response throughout the day. The premise is that if a person feels stressed but does not know what is causing the stress, they can wear this device over several days and look closely at the data to decide what is causing the stress. Anyone can load the data onto their home computer, and it gives a very precise reading of measurements throughout the course of the day. Although it has not been used in our field yet because it is very new, I think it is one of those things that could effectively show some of the issues that we are talking about in terms of cognitive processing. The potentially convenient aspect of this measure is that you can get both short-term and long-term measures at the same time.
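For illustration, summarizing a day of such recordings could be as simple as the sketch below, which collapses timestamped skin-conductance samples into hourly means; the (hour, microsiemens) data format and the values are hypothetical, since real wearables export their own formats.

```python
def hourly_gsr_means(samples):
    """Collapse a day of (hour_of_day, skin_conductance_uS) samples into
    hourly means, to see which parts of the day were most stressful."""
    sums, counts = {}, {}
    for hour, value in samples:
        sums[hour] = sums.get(hour, 0.0) + value
        counts[hour] = counts.get(hour, 0) + 1
    return {hour: sums[hour] / counts[hour] for hour in sorted(sums)}

# Made-up readings: a calm morning, a demanding 1 pm listening situation.
day = [(9, 2.1), (9, 2.3), (13, 4.8), (13, 5.1), (17, 3.0)]
print(hourly_gsr_means(day))  # -> {9: 2.2, 13: 4.95, 17: 3.0}
```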

I bought one of these devices. I have not started using it with hearing-impaired individuals yet, but I have tested it out as I have played Scrabble on my computer. I worked for half an hour playing regular Scrabble where I could sit back and think about the words, and then I played against the computer in a speed round that is more stressful because you have to make decisions quickly. I noticed on my galvanic skin response printout that it stressed me more to play under the timed condition than when I played at my leisure. It was just a quick and simple way to show this cognitive effect we are discussing today.

Conclusion

To summarize, are we in a position to recommend measuring listening effort as a regular clinical tool? The answer is no. There is not yet a compelling justification for measuring listening effort routinely in the clinic, in addition to our other diagnostic and rehabilitative testing. We have not quite delineated the role of cognitive testing in a clinical environment. Experimentally, there is a tremendous amount of evidence for why it is important and why we are studying it. You will continue to see research reports that use things like pupil monitoring, galvanic skin response and dual-task testing to measure cognitive effort, but it is not ready for clinical use. You can suspect that one person is working harder than another at speech understanding, but a clinical test battery to look into the issue has not yet been well defined. The process of speech understanding is very complex, and our current clinical protocols do not fully measure it. This is a futuristic concept and an area to watch, because I believe that over the years there might be very good clinical uses for this sort of measurement technique.

References

Hicks, C.B., & Tharpe, A.M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech, Language, and Hearing Research, 45, 573-584.

Pichora-Fuller, M. K. (2003). Cognitive aging and auditory information processing. International Journal of Audiology, 42 (Supp 2), S26-S32.

Schum, D., & Matthews, L. (1992). SPIN test performance of elderly, hearing-impaired listeners. Journal of the American Academy of Audiology, 3, 303-307.

Zekveld, A.A., Kramer, S.E., & Festen, J.M. (2011). Cognitive load during speech perception in noise: the influence of age, hearing loss, and cognition on the pupil response. Ear & Hearing, 32(4), 498-510.


Donald J. Schum, PhD

Vice President of Audiology and Professional Relations, Oticon

Don Schum currently serves as Vice President for Audiology & Professional Relations for Oticon, Inc. Prior to his position at Oticon in Somerset, Don served as the Director of Audiology for the main Oticon office in Copenhagen, Denmark. In addition, he served as the Director of the Hearing Aid Lab at the University of Iowa School of Medicine (1990-1995) and as an Assistant Professor at the Medical University of South Carolina (1988-1990). During his professional career, Dr. Schum has been an active researcher in the areas of hearing aids, speech understanding, and outcome measures. (B.S. in Speech & Hearing Science, University of Illinois; M.A. in Audiology, University of Iowa; Ph.D. in Audiology, Louisiana State University.)


