Vanderbilt Audiology's Journal Club: Noise Reduction, Directional Microphones, and Listening Effort

Erin Margaret Picou, AuD, PhD
January 25, 2016

Editor’s Note: This text course is an edited transcript of a live webinar. Download supplemental course materials.

Learning Objectives

  • After this course learners will be able to list recent key articles on the topic of noise reduction, directional microphones and listening effort.
  • After this course learners will be able to describe the key findings from recent journal articles on the topic of noise reduction, directional microphones and listening effort.
  • After this course learners will be able to explain the clinical implications for audiologists of recent key articles on the topic of noise reduction, directional microphones and listening effort.

Dr. Gus Mueller:  Our presenter today for our Vanderbilt Journal Club is Dr. Erin Picou.  We are fortunate to have someone talking about current hearing aid research who also is conducting and publishing her own research in this area.  A simple PubMed search with her name will provide you with a review of the 10 or more articles she has published in just the last few years.  She also has contributed several articles and presentations for us here at AudiologyOnline.

Dr. Picou initially started at Vanderbilt as an AuD student, then became interested in research and completed her PhD.  Today, she is an assistant professor in research at Vanderbilt, and also is actively involved in teaching and mentoring in the Vandy AuD program. 

Dr. Erin Picou:  Today I want to talk about some of my favorite articles that have been published in the last year or so.  I would like to start with some terms and definitions related to cognition and listening effort to put us all on the same page.  I will also talk about some general methodologies for how we measure listening effort. 


Cognition is defined as mental processes.  It is anything that involves thinking, learning, remembering, and understanding, including understanding speech.  Listening effort is how hard someone is working to understand speech or perform some other listening task.  It is the cognitive resources applied to understanding speech, or how tired someone is while listening to speech.  Listening effort can be a problem because, as humans, our cognitive resources are finite.  If we are using more resources to understand speech, there are fewer resources available for other things like memory, recall, or learning.  This can make someone feel tired and disengaged from a conversation. 

Today I am going to talk about both resource allocation and listening effort, and I will present some studies that talk about individual differences and how a person’s cognitive capacity might influence their listening effort or benefit from a particular hearing aid technology. 

Measuring Cognition

How do we measure cognition?  Three different kinds of cognitive abilities that are of interest in the research world are attention, memory, and speed of processing.  Let’s briefly review each one.


Attention is how well someone can focus on a particular aspect of a message.  You can test attention with something like the Stroop task.  In this task, you would be presented with a card with color words printed in the wrong color of ink.  For example, RED is printed in purple ink, or BLUE is printed in red ink.  Your job would be to name the ink colors while ignoring what the words say.  You are attending to one part of the message (the ink color) and ignoring another (the word itself).  You would be given a set amount of time to name as many ink colors as possible. 


Our memory system is often divided into three categories: sensory memory, working memory (also known as short-term memory), and long-term memory.  Working memory is the type of memory most commonly tested in auditory-related studies.  For working memory tasks, we generally present words or digits and ask people to remember as many of them as they can. 

One common task is the reading span task, where sentences are presented one at a time, and the listener is asked to remember the last word of each sentence.  For example, you would see, “The train sang a song,” and you would be asked to remember “song.”  Then, “The girl brushed her teeth,” and you would be asked to remember “teeth.”  After a certain amount of time or number of sentences, you would be asked to recall as many of those words as possible.  The more words recalled, the larger the working memory capacity. 


The third area of cognition we commonly measure is processing speed.  There are two kinds: a general processing speed and a verbal processing speed, or how quickly someone processes language information. 

A common way to measure verbal processing speed is with a lexical decision task.  In this case, you would see a series of letters on a computer screen.  Your job would be to decide as quickly as possible whether those letters form a real word or a nonsense word.  For example, DUCK is a real word, while WORS is not.  How long it takes someone to make that decision is an index of verbal processing speed: the faster the decision, the faster the verbal processing speed.
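As a rough illustration, scoring such a task usually comes down to computing accuracy and the mean response time on correct trials. Here is a minimal sketch in Python; the trials and response times below are invented for illustration, not data from any study:

```python
# Hypothetical lexical decision data: each trial records the letter string,
# whether it is a real word, the participant's yes/no response, and the
# response time in milliseconds.
trials = [
    {"string": "DUCK", "is_word": True,  "response": True,  "rt_ms": 520},
    {"string": "WORS", "is_word": False, "response": False, "rt_ms": 610},
    {"string": "LAMP", "is_word": True,  "response": True,  "rt_ms": 480},
    {"string": "PLUN", "is_word": False, "response": True,  "rt_ms": 700},  # an error
]

# Verbal processing speed is typically indexed by the mean RT on correct trials.
correct = [t for t in trials if t["response"] == t["is_word"]]
accuracy = len(correct) / len(trials)
mean_rt = sum(t["rt_ms"] for t in correct) / len(correct)

print(f"accuracy = {accuracy:.0%}, mean correct RT = {mean_rt:.0f} ms")
```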

Measuring Listening Effort    

Researchers have also become interested in measuring listening effort.  There are at least four kinds of methodologies for doing so: subjective reports, physiologic measures, recall tasks, and reaction time measures.  Most of these are indirect measures, and there is no consensus on which is best.  We try to integrate information from the different methodologies to come up with a cohesive picture. 

Subjective Report

Subjective report is asking something like, “How tired do you feel while you are listening?  How much effort do you put into listening?”  There are also some standardized questionnaires that have listening effort questions in them. 

Physiologic Measures

Another general listening effort methodology is to use a physiologic measure.  We know that when people are putting more mental effort into something, their body changes.  For example, the pupils dilate, the palms sweat more, and the heart rate changes.  There are researchers who are using these physiologic measures to get at listening effort as well. 

Recall Tasks

Recall tasks try to capitalize on the idea that human cognitive capacity is fixed.  If we are using more resources to understand speech, there are fewer left for recall, rehearsal, or memory of items.  Fewer items recalled would mean that someone is exerting more listening effort. 

Response Time Measures

Response time measures still capitalize on the notion that human cognitive capacity is fixed, but we see it in slowed physical responses.  The easiest way to measure this would be a verbal response time.  We could measure how long it takes someone to repeat back speech.  If they repeat speech back more slowly, they are using more listening effort. 

You could also use a dual-task paradigm.  Here, someone repeats speech while, occasionally, a light flashes on a computer screen; they are to press a button when they see the light while continuing to repeat words.  Slower responses indicate increased listening effort, because fewer resources are left for the button press when more resources are being devoted to understanding speech. 
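The logic of the dual-task paradigm can be sketched in a few lines: compare secondary-task response times with and without the speech task, and treat the slowdown as the effort index. The response times below are invented for illustration:

```python
# Hypothetical button-press response times (ms) to the visual probe.
baseline_rts = [310, 295, 330, 305]  # probe alone, no speech to repeat
dual_rts     = [420, 455, 400, 445]  # probe while repeating speech

def mean(xs):
    return sum(xs) / len(xs)

# The slowdown relative to baseline is taken as an index of listening effort:
# the more resources speech understanding consumes, the slower the press.
dual_task_cost = mean(dual_rts) - mean(baseline_rts)
print(f"dual-task cost = {dual_task_cost:.0f} ms")
```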

Why Study Listening Effort?

For me, the most interesting reason initially to study listening effort was that I heard about it clinically from my patients.  They would come in and say, “I’m exhausted at the end of the day.  I have to work really hard to understand.”  From a communication experience, I thought it would be interesting to study listening effort.  It also turns out that speech recognition and listening effort are separate constructs. 

We conducted a study in 2013 looking at hearing aid benefit for word recognition and listening effort (Picou, Ricketts, & Hornsby, 2013).  We used a dual-task paradigm, where the participants repeated the target speech signal, and randomly, a red box showed up on the computer screen, to which they were instructed to press a button as soon as they saw it.

We examined the benefit of hearing aids for both listening effort and speech recognition.  The results were divided into four categories: recognition improves but not effort, both recognition and effort improve, effort improves but not recognition, or neither improves.  Fortunately, there were very few cases where neither recognition nor effort improved, but there was no consistent or strong trend between the benefit for word recognition and the benefit for listening effort.  Those results showed that hearing aids can improve listening effort, but that listening effort and word recognition are distinct constructs. 

Today, I want to talk about research focusing on listening effort and hearing aids, the effects of digital noise reduction on listening effort for adults and children, the effects of directional microphones on listening effort and fatigue, and lastly, listening effort and driving a car. 

Article #1: Age-Related Changes in Listening Effort for Various Types of Masker Noises (Desjardins & Doherty, 2013)

These authors from Syracuse University asked, “What are the effects of age and hearing loss on listening effort and noise?”  They were also interested in the relationship between cognitive capacity and listening effort in noise.  We know that working memory capacity and processing speed are some of the most important predictors of speech recognition abilities in older adults, once we account for audibility.  The authors were interested in whether there is a relationship between cognitive capacity and listening effort in noise, like there is for speech recognition ability in noise. 

The authors were also interested in separating the effects of age and hearing loss on listening effort.  We know that younger adults with normal hearing exert less effort than older adults for the same listening task (Gosselin & Gagne, 2010).  We also know that, among older listeners, those with normal hearing exert less listening effort than those with hearing loss (Tun, McCoy, & Wingfield, 2009).  Both age and hearing loss can affect listening effort, but the authors were interested in their combined effects. 

Why it Matters

It is important to increase our understanding of the separate and combined effects of age and hearing loss on listening effort.  We should also look towards specific factors that might contribute to a patient’s reports of difficulty and fatigue, by looking at predictive variables that can help us to understand our patients better.  This might also help guide our expectations counseling. 


Desjardins & Doherty (2013) tested three groups of 15 listeners each: younger listeners with normal hearing, older listeners with normal hearing, and older listeners with hearing loss.  The listeners with hearing loss had mild to moderate sensorineural hearing loss, and all were experienced hearing aid users.  All participants completed a battery of cognitive tests, including a selective attention test (the Stroop task), a working memory capacity test, and a processing speed test.

They also tested listening effort for all three groups using a dual-task paradigm.  The primary task was sentence recognition using R-SPIN (Revised Speech Perception in Noise) sentences.  The secondary task was a visual pursuit rotary tracking task. 

In this task, the computer screen has a yellow oval on a black background.  A white ball moves around the oval track, and the participant’s job, while they are listening to the speech, is to hold the cursor on that moving ball.  The notion is that the better they are at keeping the cursor on the target, the less listening effort they are using, because they have enough cognitive resources left after repeating the sentence to track the ball as it is moving around the track.  They used this dual-task paradigm in three different noise conditions: speech-shaped noise, six-talker babble, and interfering two-talkers. 


In their evaluation of listening effort as a function of background noise, older listeners exerted more effort than younger listeners in the speech-shaped noise and two-talker conditions.  Interestingly, there was no effect of hearing loss: the older listeners with and without hearing loss exerted the same amount of listening effort across all three conditions.  The older listeners with hearing loss were fitted with hearing aids for testing in the dual-task paradigm.  These results indicate that when patients are fit with hearing aids, their listening effort is the same as that of older listeners with normal hearing.  The hearing aids are, in a way, compensating for the hearing loss, at least in terms of listening effort. 

As for the relationship between cognitive ability and listening effort, they found two significant findings.  First, there was a significant relationship between someone’s working memory capacity and their listening effort, in that people who had more capacity exhibited less effort in general.   Also, people who had fast verbal processing speed exhibited less listening effort in some of the conditions. 


The results suggest that hearing aids can compensate for the effects of hearing loss on listening effort.  When these participants were fitted with appropriate hearing aids for their hearing loss, they performed just like their peers with normal hearing.  However, there was still an effect of age.  The hearing aids did not counteract the effect of age, but they did counteract the effect of hearing loss. 

Listeners with more capacity and faster processing speed may exhibit less listening effort in general.  This helps us understand more about these participants and what they are going through.  These results suggest that when you are fitting patients with hearing aids, you are likely improving their effort and allowing them to perform more like their peers with normal hearing.  

Article #2: The Effect of Hearing Aid Noise Reduction on Listening Effort in Hearing-Impaired Adults (Desjardins & Doherty, 2014)

This second article was published by the same group a year after Article #1 which I just discussed (Desjardins & Doherty, 2013), using very similar methodologies.  This time they were interested in not just the effect of hearing aids, but also hearing aid technology.  Specifically, they wanted to know what effect digital noise reduction might have on listening effort. 

We have seen that hearing aids without advanced features can improve listening effort (Downs, 1982; Picou, Ricketts, & Hornsby, 2013) and we just saw that hearing aids without advanced features can help compensate for the effects of hearing loss (Desjardins & Doherty, 2013).  We also know from several years of previous research that digital noise reduction does not improve speech recognition in background noise, but it does improve ratings of listening comfort.  People report that they like the sound with noise reduction “on” better.  They report that speech in noise is more comfortable.  There is likely a relationship between comfort and effort.  Sarampalis et al. (2009) published some data looking at the effect of noise reduction on listening effort for adults with normal hearing.  The current Desjardins & Doherty (2014) study was interested in the effects of noise reduction on listening effort for adults with hearing loss. 

Why it Matters

If noise reduction positively affects listening effort, more cognitive resources will be available for other things such as general memory or staying involved in the conversation; it might also lead to less fatigue or allow someone to stay on a task longer.  It would be an advantage of digital noise reduction that extends beyond speech recognition, which could impact clinicians’ decisions of when to activate digital noise reduction, or what strength to program digital noise reduction for a given patient.


In this study, 12 listeners were tested.  All were older, experienced hearing aid users with mild to moderate sensorineural hearing loss.  This time, the participants were fitted with hearing aids with two programs.  The first program used an omnidirectional microphone with all advanced features disabled.  The second program had everything disabled except digital noise reduction.  They used the same dual-task paradigm as in Article #1 (Desjardins & Doherty, 2013), with sentence recognition as the primary task and visual pursuit tracking as the secondary task.

This time they tested listening effort with the two hearing aid settings in a moderate signal-to-noise ratio (SNR), where participants were getting about 76% of the words correct, and in a difficult SNR, where they were getting about 50% of the words correct.  They also used the cognitive test battery of working memory capacity and processing speed to see if there was a relationship between these factors and benefit from digital noise reduction. 


The first set of results pertains to speech recognition scores as a function of noise reduction, tested with noise reduction on versus off at the two SNRs. 

First, there was an effect of task difficulty; speech recognition was better (76%) overall for the moderate listening condition as opposed to the difficult listening condition (50%).  There was no effect, however, of noise reduction on speech recognition.  That is consistent with previous literature.  Having noise reduction on or off did not change sentence recognition in the given conditions. 

Next are the effects of noise reduction on listening effort for the two SNR conditions.  For the moderate SNR, turning on the noise reduction did not change listening effort.  In the more difficult listening situation, however, noise reduction improved listening effort. 

They also looked at working memory capacity and processing speed, and they did not find a relationship between these cognitive measures and either listening effort or benefit from the technology.  But they did find the interesting effect that noise reduction can help improve listening effort. 


To me, this is more evidence that listening effort is different from speech recognition.  Noise reduction did not have any impact on speech recognition, but it did improve listening effort.  It also indicates that reduced listening effort is another potential benefit of noise reduction for listeners with hearing loss.  Clinically, patients in difficult listening situations may experience less listening effort with noise reduction activated, even if speech recognition performance is not better. 

Article #3: Listening Effort and Perceived Clarity for Normal-Hearing Children with the Use of Digital Noise Reduction (Gustafson, McCreery, Hoover, Kopun, & Stelmachowicz, 2014)

The research group out of Arizona State University and Boys Town examined the impact of digital noise reduction on listening effort and perceived clarity for children with normal hearing.

Much like in the adult population, digital noise reduction typically does not have a significant effect on speech recognition for children.  However, we have observed that noise reduction can improve listening effort for adults, and it has been shown to improve ratings of sound quality in adults.  The authors were interested in whether the effects of noise reduction would be the same for children. 

Why it Matters

Listening effort might be even more important for children than adults because children are still learning, developing, and growing.  If they are using all of their cognitive resources to understand speech, there will be fewer resources available for important tasks like academic learning and language development.  If noise reduction can be shown to reduce that cognitive load, it could have significant implications for children with hearing loss. 


They tested 24 school-aged children with normal hearing, ages 7 to 12.  They used two different brands of hearing aids, and in both cases, the researchers recorded stimuli with noise reduction on and noise reduction off.  The child’s task was speech recognition using consonant-vowel-consonant (CVC) nonsense words.  They wanted to eliminate the vocabulary component, and test phoneme recognition. 

With phoneme recognition, they measured the length of time it took participants to respond.  They hypothesized that if the children took longer to respond, it was an indication of listening effort. 

They also asked the participants to rate the sound clarity of the stimuli with the noise reduction on and off.  To do that, they used a Visual Analog Scale.  They had pictures of six tigers on a piece of paper, and the tigers ranged from very clear to very blurry.  They asked the children to point to the tiger that reflected the last listening condition.  This gave an indication of speech recognition, listening effort using the verbal response time, and also perceived sound clarity.  All of this was measured in two different conditions; one was a moderate SNR of +5 dB, and the other was a difficult SNR of 0 dB. 


I will go through the three different experimental measures one-by-one: 

Speech recognition. For hearing aid 1, there was no effect of noise reduction on word recognition in either SNR condition.  Interestingly, noise reduction improved word recognition with hearing aid 2.  This is inconsistent with most of the literature, although noise reduction can sometimes improve word recognition in the right noise conditions.  In this case, the background noise was a steady-state white noise.  This type of noise, of course, gives the noise reduction algorithm the best possible chance to improve word recognition; that worked for hearing aid 2.

Perceived clarity. For both conditions with both hearing aids, perceived ratings of clarity went up in each of these conditions when digital noise reduction was on.  In other words, the children perceived the CVC non-words to be clearer when noise reduction was on than when it was off. 

Effort. We see the same general trend for effort as we did for clarity.  In all cases with digital noise reduction on, verbal response times were faster, indicating that noise reduction reduced listening effort regardless of the SNR and the hearing aid model. 


This is another indication that digital noise reduction can have significant benefits regardless of the effects of speech recognition.  Importantly, the authors pointed out that while we see these findings for children with normal hearing, we do not yet know how these results generalize to children who have hearing loss, or when other types of noises are used.  Nonetheless, it is encouraging to see that noise reduction might help reduce the cognitive load for children listening in difficult situations.

Article #4: The Effects of Hearing Aid Use on Listening Effort and Mental Fatigue Associated with Sustained Speech Processing Demands (Hornsby, 2013)

The next study is from Ben Hornsby at Vanderbilt.  He looked at the effect of hearing aid use and advanced signal processing, including directional microphones, on listening effort and mental fatigue.

We have observed that hearing aids can improve listening effort and help compensate for the effects of hearing loss on listening effort.  In the three previous articles I’ve discussed, we  saw that noise reduction can improve listening effort for adults and children.   We also know that listeners with hearing loss are at increased risk of stress, tension, and fatigue due to listening at work (Hétu, Riverin, Lalande, Getty, & St.-Cyr, 1988; Kramer, Kapteyn, & Houtgast, 2006).  People who have hearing loss are more likely to take mental health days from work than their peers with normal hearing.  These subjective data suggest that the cumulative effects of listening effort are likely to be mental fatigue.  Dr. Hornsby was interested in quantifying this relationship between effort and fatigue and looking at how hearing aids and hearing aid processing can affect effort and fatigue. 

The assumption is that increases in effort over time lead to mental fatigue, but this has not yet been validated.  Knowing how hearing aids and hearing aid technologies affect effort and how effort modulates fatigue could help guide expectations counseling and outcomes counseling. 


Dr. Hornsby tested 16 adults who were 47 to 69 years old with mild to moderate sensorineural hearing loss in three conditions: without the hearing aids, in a basic hearing aid program with an omnidirectional microphone and all other features disabled, and in a program with advanced signal processing including adaptive directionality. 

In the laboratory, he did not test listeners in the advanced setting with fixed directional; the hearing aids in the advanced program were set to switch automatically between omnidirectional and directional microphones according to the signal classification system. 

Participants had a one- to two-week acclimatization period between hearing aid conditions.  He used a recall paradigm, in which he played a string of words and asked the participants to repeat them.  After eight to 12 words, the participant would be asked to recall the last five words.  Participants did not know when they would be asked to recall, so they had to keep as many words as possible in working memory.  

He also recorded an indication of physical response time.  The participant would hear, “Say the word laud, say the word pool, say the word sub.”  They would repeat the word and then at a certain point, the screen would flash red and it would say STOP.  They were asked to push a button as soon as they saw that red flash on the screen.  Then they would be asked to recall as many of the words as possible that they heard.  With this, he obtained word recognition, word recall, and a physical response time.  

The subjects were tested in six sequential blocks of 200 words (20 strings each), or 1,200 words in total.  He could then look at the effects of time on recall, word recognition, and response time; in other words, he could determine whether listening fatigue affected performance.


Word recognition. Word recognition was tested as a function of blocks of words.  Participants repeated more words correctly when fit with hearing aids than when unaided.  There was no directional microphone benefit, nor a benefit of the other advanced processing.  There was also no effect of time: word recognition performance stayed consistent from the first block of 200 words to the last. 

Word recall. We see a similar trend for word recall, although the effects were smaller: some hearing aid benefit, no effect of the advanced signal processing, and no change in recall over time. 

Physical response time. There was a benefit to physical response time when wearing hearing aids versus unaided, indicating that hearing aids reduce listening effort.  There was no effect of the advanced signal processing.  However, in the unaided condition, response times slowed in the sixth block compared to the first; in the aided conditions, there was no difference between response times in the first and last blocks. 

This suggests that hearing aids can improve listening effort and also reduce susceptibility to fatigue.  We see in the unaided condition that increases in effort over time led to fatigue, but in the aided condition, because listening effort was reduced, there was no fatigue over time. 
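To make that unaided-versus-aided fatigue pattern concrete, here is a small sketch with invented per-block mean response times; only the pattern, not the numbers, reflects the study:

```python
# Hypothetical per-block mean response times (ms) illustrating the pattern
# described above: unaided RTs slow across the six blocks (fatigue),
# while aided RTs stay flat. Numbers are invented for illustration only.
unaided_blocks = [520, 528, 540, 555, 568, 590]  # blocks 1..6
aided_blocks   = [470, 472, 468, 475, 471, 474]

def slowing(blocks):
    """RT change from the first block to the last block, in ms."""
    return blocks[-1] - blocks[0]

print(f"unaided slowing: {slowing(unaided_blocks)} ms")
print(f"aided slowing:   {slowing(aided_blocks)} ms")
```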


This study documented that sustained effort over time can lead to fatigue; it is one of the first studies to do so.  It also suggests that hearing aids can reduce listening effort and reduce susceptibility to auditory fatigue.  The effect of directional microphones is still unknown, because the hearing aid program was free to switch between omnidirectional and directional modes, and in the laboratory environment it did not switch to directional. 

These results are important clinically because they suggest that reducing effort can reduce fatigue, and by extension, that a hearing aid can reduce fatigue.  Anytime you read a study or hear about something that reduces effort, you might also assume that it would reduce fatigue, and it might reduce the patient complaint of being tired at the end of the day.  That is very encouraging.

Article #5: Measuring Listening Effort: Driving Simulator Versus Simple Dual-Task Paradigm (Wu, Aksan, Rizzo, Stangl, Zhang, & Bentler, 2014)

The last study I want to talk about today comes from the University of Iowa.  Wu and colleagues (2014) compared listening effort measured with different paradigms and examined the effects of directional processing.  They were interested in whether hearing aids or directional microphone technology improve listening effort, and whether laboratory measures give results similar to more realistic situations.  Wu acknowledges that dual-task paradigms are artificial, and it is not clear how laboratory findings generalize to more realistic situations. 

We have seen that listening effort can be reduced with hearing aids and digital noise reduction, but these laboratory effects might be difficult to translate to more realistic listening situations.  We also know that listeners multi-task in the real world; for example, we listen to speech or music while driving a car.  Wu developed a driving task to see if it could serve as an indication of listening effort.  The question of directional benefit for listening effort was also still open.  This study aimed to address both laboratory versus realistic listening situations and the effects of directional microphones. 


Participants included 29 adults with mild to moderate sensorineural hearing loss who had both hearing aid and driving experience.  There were three listening conditions: unaided, omnidirectional, and directional. 

Two different dual-task paradigms were used.  One was the driving simulator, where participants listened to and repeated speech while driving; the researchers measured how closely participants followed the car in front of them in the simulator. 

The other was a traditional visual reaction-time task.  The participant looks at boxes on a screen, and occasionally a number appears; they must make a complex decision about which button to push based on what they see.


As expected, the hearing aids improved word recognition on the driving task.  The omnidirectional condition was better than unaided, and directional was better than both omnidirectional and unaided.  Therefore, hearing aids and directional microphones improved word recognition.  

In terms of driving performance, the best condition was the baseline, in which participants were only driving, with no speech to repeat.  When speech was introduced, driving performance worsened, but it did not matter whether listeners were unaided, in omnidirectional, or in directional mode; driving performance, the index of listening effort, was the same. 

Wu and colleagues found an identical pattern of results with the other dual-task paradigm.  Speech recognition improved from unaided to omnidirectional to directional.  Response times were fastest when participants were not also listening to speech, but there was no effect of omnidirectional or directional processing on listening effort.

The authors thought these findings could be related to the tasks, so they tested the same tasks with a group of normal-hearing listeners to see if the results would differ.  They saw the same pattern in that listening effort was worse in the unaided and omnidirectional conditions compared to the baseline.  In this group, however, activating the directional technology reduced listening effort.

Looking at the relationship between speech recognition performance and effort, Wu and colleagues concluded that they did not see a directional benefit for the listeners with hearing loss because the task was too difficult for them.  They speculated that these listeners were so focused on repeating back the words that the paradigm could not reveal a benefit of directional microphone technology for this population.


It seems that the driving simulator has good face validity for evaluating hearing aid technology, and the simpler laboratory multi-tasking paradigm mirrored its results.  The implication is that although you could use the driving simulator task to measure changes in listening effort, the laboratory task appears to be representative of that more realistic listening situation.  Importantly, results from listeners with normal hearing may not generalize to listeners with hearing loss, and that relationship depends on task difficulty.  If a task is too difficult for listeners with hearing loss, it might not be sensitive to the effects you would expect.

Clinically, the effects of directional technologies on listening effort are still unclear.  Maybe directional microphones would help improve listening effort if the listening situations were different or less challenging. 


To summarize, listening effort is distinct from speech recognition and can be measured in a variety of ways.  There is no consensus on the best way to measure listening effort, and different laboratories measure it in somewhat different ways.

We have seen that hearing aids can reduce listening effort and allow listeners with hearing loss to perform similarly to their peers with normal hearing; in this way, hearing aids compensate for the effects of hearing loss.  Digital noise reduction can reduce listening effort both for adults with hearing loss and for children with normal hearing.

Fortunately, in addition to reducing effort, hearing aids can reduce fatigue; anything you can do to reduce listening effort might help reduce mental fatigue.  The effects of directional microphone technologies on listening effort are still unclear.  They have the potential to reduce listening effort, but I have not presented you with any strong evidence today, in part because the noise levels that Dr. Hornsby (2013) chose did not activate the directional processing, and the listening situations in the Wu et al. (2014) study might have been too difficult to reveal a change in effort for the listeners with hearing loss.

Questions and Answers

In a clinical or educational setting, how can we measure listening effort and word recognition relationships in an effort to demonstrate effects of hearing aids, cochlear implants, and hearing assistive technology?  I can see this also applies to students with Auditory Processing Disorder and other disabilities for which hearing assistive technologies are recommended, such as autism and attention deficit disorders.

That is a great point.  My interest has been in people with hearing loss, but I definitely think that anything that disrupts the communication experience can result in an increase in listening effort; communicating takes more effort for the populations you mentioned.  How do we measure listening effort clinically or in the educational arena?  I do not know yet.  The fastest, easiest way to get at effort would be to measure how long it takes someone to respond, but that requires additional technology and analysis.  Another way is to ask people how much effort they are putting in or how tired they are.  The relationship between such subjective measures and the physical response-time or recall paradigms is not yet clear, but some kind of subjective report is a quick, rudimentary way to measure effort in the clinic.

In your 2013 study, did you notice any correlation with speech perception in noise, such as the QuickSIN, dB SNR loss or acceptable noise level (ANL) measurements?  Also, did you eliminate or otherwise account for auditory neuropathy spectrum disorder?

I did not have anyone with auditory neuropathy in that study.  They were all experienced hearing aid users.  I did not measure a QuickSIN or ANL, but that would be interesting, and we are doing that in the next round of studies looking at directional processing and listening effort.

References
Desjardins, J. L., & Doherty, K. A. (2014). The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear and Hearing, 35(6), 600-610.

Desjardins, J. L., & Doherty, K. A. (2013). Age-related changes in listening effort for various types of masker noises. Ear and Hearing, 34(3), 261-272.

Downs, D. (1982). Effects of hearing aid use on speech discrimination and listening effort. Journal of Speech and Hearing Disorders, 47, 189-193.

Gosselin, P. A., & Gagne, J. P. (2011). Older adults expend more listening effort than younger adults recognizing speech in noise. Journal of Speech, Language & Hearing Research, 54(6), 944-958.

Gustafson, S., McCreery, R., Hoover, B., Kopun, J. G., & Stelmachowicz, P. (2014). Listening effort and perceived clarity for normal-hearing children with the use of digital noise reduction. Ear and Hearing, 35(2), 183-194.

Hétu, R., Riverin, L., Lalande, N., Getty, L., & St-Cyr, C. (1988). Qualitative analysis of the handicap associated with occupational hearing loss. British Journal of Audiology, 22(4), 251-264.

Hornsby, B. W. Y. (2013). The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear and Hearing, 34(5), 523-534.

Kramer, S. E., Kapteyn, T. S., & Houtgast, T. (2006). Occupational performance: comparing normally-hearing and hearing-impaired employees using the Amsterdam Checklist for Hearing and Work. International Journal of Audiology, 45(9), 503-512.

Picou, E. M., Ricketts, T. A., & Hornsby, B. W. (2013). How hearing aids, background noise, and visual cues influence objective listening effort. Ear and Hearing, 34(5), e52-64. doi: 10.1097/AUD.0b013e31827f0431.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research, 52, 1230-1240.

Tun, P. A., McCoy, S., & Wingfield, A. (2009). Aging, hearing acuity, and the attentional costs of effortful listening. Psychology and Aging, 24, 761-766. doi: 10.1037/a0014802

Wu, Y. H., Aksan, N., Rizzo, M., Stangl, E., Zhang, X., & Bentler, R. (2014). Measuring listening effort: Driving simulator versus simple dual-task paradigm. Ear and Hearing, 35(6), 623-632.

Cite this Content as:

Picou, E.M. (2016, January). Vanderbilt Audiology's journal club: Noise reduction, directional microphones, and listening effort. AudiologyOnline, Article 16180. Retrieved from



Erin Margaret Picou, AuD, PhD

Research Assistant Professor, Vanderbilt University Medical Center

Erin Picou is a Research Assistant Professor in the Department of Hearing and Speech Sciences at Vanderbilt University Medical Center.   She has been working in the Dan Maddox Hearing Aid Research Laboratory since she was an AuD student.  After completing her Ph.D. (also at Vanderbilt) she was hired to a research faculty position.  Her research interests are primarily related to hearing aid technologies for adults and children, with a specific focus on speech recognition and listening effort. This work continues to be supported through a variety of industry and federal funding sources. In addition to her research activities, Erin is involved with teaching and mentoring AuD students at Vanderbilt.

