
Interview with Brent Edwards, Ph.D., Vice President of Research & Jason Galster, Ph.D., Manager of Clinical and Comparative Research, Starkey

Brent Edwards, PhD

February 7, 2011

Topic: Starkey Research, and How Do You Define Success with Hearing Aids?
CAROLYN SMAKA: This is Carolyn Smaka from AudiologyOnline, and today I have the pleasure of catching up with Brent Edwards and Jason Galster from Starkey. We're talking about some of Starkey's research projects, as well as some of the evidence behind the signal processing we find across manufacturers in our industry today. Great speaking with you both today.

Brent, how many researchers are there now at Starkey?

BRENT EDWARDS: We have about 40 researchers between Berkeley and Minnesota. We have four departments within the research division, one of which is in California and the other three in Minnesota. In addition, our research includes collaborations with universities and other outside research initiatives around the world. We take a very long-term approach, and both functionally and organizationally we are separate from product development.

So you can think of us as the front end, investigating new ideas, whether those are ideas about technology or about audiological and fitting tools like SoundPoint. Another example is our work on cognition and technology - we spent several years investigating the topic before we handed it off to product development, and it was eventually implemented in Voice iQ.

With all of our research, we hope that when we're done, it contributes in some way to the clinical audiologist. We strongly believe that our research division is probably the best example of translational research that exists in the U.S.

SMAKA: You mentioned how your research was implemented in Voice iQ. Can we talk about your research on speech understanding in noise, since we know that's an important area for hearing aid wearers?

EDWARDS: Sure. One primary area of interest of ours is looking at the various dimensions of the problem of understanding speech in noise and what we can do to improve that situation for people with hearing loss.

We've done fairly extensive work on acceptable noise level (ANL), which is sort of a new way of thinking about the speech in noise difficulties that people have, and it's seemingly unrelated to speech understanding. ANL is an important measure for our field because it's predictive of whether someone's going to be a successful hearing aid wearer or not.
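For readers less familiar with the measure, here is a minimal sketch of how an ANL is conventionally derived and interpreted. The interpretation cutoffs are illustrative placeholders drawn from the general ANL literature, not criteria from Starkey's studies, and the function names are hypothetical.

```python
def acceptable_noise_level(mcl_db_hl: float, bnl_db_hl: float) -> float:
    """ANL is conventionally the most comfortable listening level (MCL)
    for running speech minus the maximum background noise level (BNL)
    the listener will accept while following that speech."""
    return mcl_db_hl - bnl_db_hl


def anl_interpretation(anl_db: float) -> str:
    """Illustrative interpretation only: smaller ANLs (the listener accepts
    noise close to their MCL) have been associated with successful hearing
    aid use, larger ANLs with unsuccessful use. The cutoffs below are
    placeholder values, not Starkey's clinical criteria."""
    if anl_db <= 7:
        return "likely to be a successful hearing aid wearer"
    if anl_db >= 13:
        return "at risk of being an unsuccessful hearing aid wearer"
    return "indeterminate"


# Example: speech is most comfortable at 55 dB HL and noise is accepted
# up to 50 dB HL, giving an ANL of 5 dB.
print(acceptable_noise_level(55, 50))  # 5
print(anl_interpretation(5))           # likely to be a successful hearing aid wearer
```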

We wanted to understand why two people with the same audiogram could have ANL thresholds that differ by as much as 20 dB and result in very different levels of success with hearing aids. We thought that if we understood why people had the ANL threshold that they do, then maybe we could better counsel patients and develop technology that addresses people differently depending on their ANL score and improve their success with hearing aids.

In our research with ANL, we discovered that when listening to speech in noise and determining an ANL, there seem to be two groups of people. One group tends to focus more on the intelligibility of the speech, and the other group focuses more on the loudness of the noise. People who are focused more on intelligibility when doing the ANL are not as bothered by high levels of noise as long as it doesn't affect their ability to understand speech.

With hearing aids today, we're doing fairly well with audibility and directional microphones to improve intelligibility. So those people who are focused on intelligibility are generally the successful hearing aid wearers.

The people who are focused on the loudness of the noise are the ones that appear to be at risk of being unsuccessful hearing aid wearers. This group is sensitive to high levels of noise, even when the noise is not affecting intelligibility at all. So, we thought that an algorithm that doesn't affect intelligibility but does reduce the perceived loudness of noise in the presence of speech might actually change someone's ANL score, which would then make them more likely to be a successful hearing aid wearer. That was the goal of Voice iQ when we designed it.

We demonstrated that, indeed, for subjects who are more concerned about intelligibility, Voice iQ doesn't change their ANL score. But for people who are much more focused on comfort and loudness of noise when listening to speech, we were able to significantly improve their ANL score, shifting it in the direction where they would become more successful hearing aid wearers.

That was our goal for the ANL project from the beginning: to use our understanding of why people have different ANLs to drive better hearing solutions.

SMAKA: You mentioned that your research in cognition went into the development of Voice iQ. The paper from that work has received high accolades, and everyone is talking about it. Congratulations.

EDWARDS: Thank you. Yes, we were honored to be awarded the Editor's Award for Paper of the Year from the Journal of Speech, Language, and Hearing Research (JSLHR).

That paper was the first to use an objective measure of cognitive effort to demonstrate that noise reduction technology and directional microphone technology can reduce the amount of listening effort people expend when listening to speech in noise. This work was motivated by the long-held observation of clinicians that their patients tend to be more mentally fatigued after listening to speech in noise for a long period of time.

It seemed like there was increased mental effort involved in understanding speech in noise, but no one had ever measured this except through questionnaires. I'm pretty skeptical about whether people can actually assess whether their neurons are working harder when doing a task, and it's very difficult for people to separate changes in speech intelligibility from other dimensions of listening to speech in noise.
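As background, one common objective approach to quantifying listening effort is a dual-task paradigm, in which a secondary task slows down as the listening task consumes more processing resources. The sketch below is a generic illustration of that idea with made-up reaction times; it is not the specific paradigm or scoring used in the published study.

```python
import statistics


def listening_effort_index(rt_baseline_ms, rt_dual_task_ms):
    """Generic dual-task proxy for listening effort: the slowing of a
    secondary (e.g., visual reaction-time) task when it is performed
    while listening to speech in noise. Larger slowing suggests more of
    the listener's processing resources are consumed by the speech task."""
    return statistics.mean(rt_dual_task_ms) - statistics.mean(rt_baseline_ms)


# Hypothetical reaction times (ms): secondary task alone, then while
# listening in noise with a noise-reduction algorithm off and on.
baseline = [412, 398, 425, 407]
nr_off   = [512, 540, 498, 525]
nr_on    = [470, 455, 481, 466]

print(listening_effort_index(baseline, nr_off))  # ~108 ms of slowing
print(listening_effort_index(baseline, nr_on))   # ~58 ms of slowing: less effort
```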

Our paper demonstrated that hearing aid technology can reduce listening effort. So, we are continuing that work in collaboration with Kevin Munro at The University of Manchester. We are using Starkey's latest hearing aids and looking at a large number of new users to see, very simply, whether speech understanding becomes less effortful with new hearing aids.

We thought it would, since hearing aids provide more audibility, so the wearers would have an easier time understanding speech. What we have found so far is that there's quite a spread. A large number of people, when they're first fit with hearing aids, are experiencing an increase in listening effort, which surprised us. But, I guess if you think about it, if they're first-time wearers, then they're sort of experiencing audibility that they haven't heard in years.

SMAKA: Reverse auditory deprivation.

EDWARDS: Exactly. So, maybe they have to work a little harder, because they're getting information that they haven't been used to processing for a long period of time. We're continuing to look at this over time. What we expect to see in a longitudinal study is that people will adapt and learn to use these cues more effectively, and ultimately speech in noise will become less effortful for them with hearing aids.

SMAKA: When noise reduction first came out, I was always amazed that it didn't improve speech intelligibility, because when you listen to it, there's such a remarkable difference with it on versus off. I always kind of wondered whether, if you asked someone, "Did you understand better with it on or with it off?" they would think they understood better because it was easier or more comfortable.

EDWARDS: That was certainly the thinking of Professor Erv Hafter, with whom we collaborated on this project. He's in the Psychology Department at UC Berkeley.

He heard demonstrations of noise reduction in Harry Levitt's lab when Harry was developing the first digital hearing aids in the 1980s. And he thought, "Well, why does it sound better - why does it sound more intelligible even though we can measure that there's no improvement in intelligibility?" Maybe it's because the algorithm is off-loading the noise-reduction work that your brain would normally do. So it makes listening easier, and you perceive a benefit in intelligibility even though it's not measurable through the speech scores.

SMAKA: I look forward to the results of your continued research in that area.

Can you talk about your research recently published in IJA, regarding microphone placement and localization?

EDWARDS: Yes. This research spun out of a concern I have had for a while, which is that with BTE hearing aid fittings, you're essentially removing the acoustic effect of the pinna. Now, there are reasons we've evolved to have pinnas, but we are effectively removing them when we fit people with BTEs because of the microphone location.

I wanted to measure the benefit of preserving pinna cues through microphone placement at the canal, using CIC devices, ITCs, or even invisible-in-the-canal (IIC) devices. To do this, we worked with a leading expert on localization, Simon Carlile, who is at the University of Sydney in Australia. We looked specifically at front-back confusions, because the thinking was that while your binaural system can tell you where sound is coming from in the azimuthal plane, it is your pinnas that differentiate between sounds coming from in front of you and those coming from behind you. And that is a fairly common listening situation; in your ubiquitous cocktail party, you have people speaking all around you.

We use localization to help segregate the auditory scene into its component sources, and this is important because it allows us to focus our attention on the single person we want to listen to. If you can't segregate sounds from the front and sounds from the back, you're essentially blurring the two. You're kind of collapsing the sounds behind you onto the sounds in front of you, and they can interfere with each other. Because you can't auditorily segregate front from back, you're going to have a harder time focusing your attention on the person in front of you whom you're trying to listen to. It's sort of an 'informational masking' issue.

SMAKA: It's almost like you're collapsing a 3-D auditory scene to 2-D. You're losing a whole dimension there.

EDWARDS: That's exactly right. You're losing some critical information about being able to separate these acoustic sounds in order to focus your attention on what you want to hear. Ultimately, that has a big impact on speech understanding. Even though there's no energetic masking involved, it can still debilitate you in noisy situations.

Our study did indeed find a significant negative impact on front-back differentiation when comparing BTEs to CICs. We also found that people had to learn the cues over time. So, when we first fit someone with a CIC, they weren't very good at it, but after they wore the CIC for six weeks and came back in, they were quite good at differentiating front from back. With the BTEs, the brain couldn't learn over time because there was nothing to learn; the cues simply aren't there.
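For readers curious how front-back performance is typically scored, the sketch below tallies the proportion of localization trials in which a response falls in the opposite front or back hemifield from the target. The trial data, hemifield definitions, and exclusion rule near the interaural axis are generic conventions assumed for illustration, not the analysis code from the published study.

```python
def is_front(azimuth_deg: float) -> bool:
    """Front hemifield: azimuths within 90 degrees of straight ahead (0 deg)."""
    a = azimuth_deg % 360
    return a < 90 or a > 270


def near_interaural_axis(azimuth_deg: float, tol_deg: float) -> bool:
    """True when the azimuth lies within tol_deg of +/-90 degrees,
    where front versus back is ill-defined."""
    a = azimuth_deg % 360
    return abs(a - 90) < tol_deg or abs(a - 270) < tol_deg


def front_back_confusion_rate(trials, tol_deg: float = 10.0) -> float:
    """Fraction of scored trials where the response lands in the opposite
    front/back hemifield from the target."""
    scored = confused = 0
    for target_deg, response_deg in trials:
        if near_interaural_axis(target_deg, tol_deg):
            continue  # skip targets too close to the interaural axis
        scored += 1
        if is_front(target_deg) != is_front(response_deg):
            confused += 1
    return confused / scored if scored else float("nan")


# Hypothetical trials: (target azimuth, response azimuth) in degrees,
# 0 = straight ahead, angles increasing clockwise.
trials = [(0, 5), (30, 150), (180, 170), (150, 20), (210, 200), (330, 325)]
print(front_back_confusion_rate(trials))  # 2 of 6 scored trials -> ~0.33
```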

SMAKA: According to the Hearing Industries Association, the statistics are something like 70% BTEs in the U.S. market, correct?

EDWARDS: Right. I've actually heard people over the past few years say that it's wrong to ever fit CICs, because you can't get directional microphones on them. But if you look at the directivity that you get from your pinna in the high frequencies, it's as good as a directional mic, and you actually make directionality worse by fitting someone with an omnidirectional BTE. Over much of the frequency range, a directional mic is simply getting you back to what the pinna does.

On top of that, in a lot of situations you have this front-back confusion as well. There are certain issues here about differences in style that are simply being ignored; people tend to look at style only in terms of comfort, ease of fit, and battery life. Perceptually, and in terms of patient benefit, there are other issues to be considered, and there's a lot of value in custom products that is being ignored.

SMAKA: Right. And although most BTEs and open fit BTEs are cosmetically appealing, there are still some consumers that will prefer a custom style for cosmetic reasons.

EDWARDS: Yes. Certainly with our invisible-in-the-canal product, we're seeing a huge interest, in part for that reason.

SMAKA: Brent, thanks for all your time today. I'll direct our readers to www.starkeyevidence.com for more information on these research projects and others happening at Starkey.

EDWARDS: Thanks, Carolyn.

SMAKA: Jason, you gave an interesting talk at the Innovation in Action Symposium regarding documenting clinical outcomes with modern signal processing. Before we get into your talk, can you tell me about your role at Starkey?

GALSTER: Sure. I recently transitioned into the company's research group to start a new research department, Clinical and Comparative Research. Our group essentially looks at clinical outcomes of technology in released hearing aids.

Whereas many of our research and development efforts go into making new technology and fitting it into a hearing aid, I'm focused on these clinical questions. How are our patients reacting and how are they interacting with the real world? Take noise reduction for example. If we have grossly different implementations of technology across products and manufacturers, how do patients respond? That's where the comparative component comes in.

I can say that at any given point in time I've worked with just about every hearing aid available.

(Laughter)

SMAKA: In addition to the microphone location effects that Brent discussed, I heard an interesting talk you gave recently on some other benefits of IIC devices with regard to ear acoustics.

GALSTER: Yes. With BTE fittings, whether it's receiver-in-the-canal or receiver-in-the-aid, we're putting the microphone in a less than ideal position. I think that's an intuitive point that everybody understands. A brief review of some data on head-related transfer functions shows, electroacoustically, that a directional microphone placed at the canal will provide better directivity than a directional microphone on a BTE. In fact, canal placement of a microphone will almost restore the natural directivity-related effects of the pinna, as Brent mentioned. In some respects, by placing the microphone in the patient's ear canal, you're almost giving them a little bit of a directional microphone.

SMAKA: Would placing the mic further down into the ear have even better effects than, say, a CIC microphone, which is sitting at the entrance of the ear canal?

GALSTER: That's the information that we're looking at now. We are seeing slight changes from CIC to this very deep placement of the hearing aids from a directivity perspective. But most of the benefits we're seeing are in other areas.

We're finding two acoustic benefits to deep placement of a hearing aid as opposed to a CIC placement. One of those is in the real-ear-to-coupler difference (RECD). In terms of the residual canal volume, if we can reduce the air volume between the termination of the device and the tympanic membrane, we see a boost in the effective level of the hearing aid of about 4 dB on that medial end. It's like 4 dB of 'free gain' that you get just from sliding the hearing aid further down the canal, even going from a CIC to this deeply placed IIC style.

If we look at microphone location effects, we see another boost around the faceplate of the device, leveraging the area of the canal that you open up by moving the device further down the canal. From that perspective, the data show up to a 7 dB resonance boost in the canal.

Basically, what that means is that on the medial end of the device you get a 4 dB boost, and at the microphone end you get up to a 7 dB boost, and this is all frequency specific. So in some ways we're seeing up to a 10 or 11 dB combined boost from this deep canal placement, which means that the hearing aid can work more efficiently at its prescribed gains.

SMAKA: The boost is mainly in the high frequencies, I assume?

GALSTER: Yes. The 4 dB on the medial end appears to be a little more broadband in nature, but the canal resonance is high frequency, peaking at about 4 kHz.
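To make the arithmetic concrete, the sketch below combines the two frequency-specific contributions described here: roughly 4 dB from the reduced residual volume on the medial end and up to 7 dB from the canal resonance at the microphone, peaking near 4 kHz. Only those approximate magnitudes come from the interview; the per-frequency shapes are assumed for illustration.

```python
# Hypothetical, illustrative values: the per-frequency shapes below are
# assumed; only the rough magnitudes (about 4 dB medially, up to 7 dB at
# the microphone, peaking near 4 kHz) come from the discussion above.
frequencies_hz = [500, 1000, 2000, 4000, 6000]
recd_boost_db  = [3.0, 3.5, 4.0, 4.0, 3.5]   # smaller residual volume, fairly broadband
mic_boost_db   = [0.5, 1.0, 3.0, 7.0, 5.0]   # canal resonance at the faceplate, peaks near 4 kHz

total_boost_db = [r + m for r, m in zip(recd_boost_db, mic_boost_db)]
for f, boost in zip(frequencies_hz, total_boost_db):
    print(f"{f:>5} Hz: {boost:4.1f} dB of 'free' level relative to a shallower CIC fit")
# Peaks around 11 dB near 4 kHz, consistent with the 10 to 11 dB figure above.
```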

Also, by moving the device into the bony portion of the canal, the own voice occlusion is actually similar to the open ear, on average. So these are some of the natural acoustic benefits that we can leverage simply by placing the hearing aid as deeply into the ear as possible.

As with all hearing aid fittings, success with the IIC is contingent upon appropriate patient selection. If you have the right ear canal and you take the appropriate impression, the outcomes are extremely impressive.

SMAKA: At the Symposium, you also spoke about directional benefit or lack thereof. Can you briefly give an overview of this talk?

GALSTER: Sure. As audiologists we're all very aware that directional microphones work and they work well. We have years and years of research to support that. But every day when we walk into the clinic and fit directional microphones, we don't always see the tangible patient responses that we would hope to see from the technology.

SMAKA: Those sentiments certainly resonate with me. Why this variability in outcome with directional mics?

GALSTER: The answer is that it's complicated and something we're continuing to investigate. If you look at how individuals leverage directional mics for speech understanding benefit, we see wide variability across individuals even in controlled lab conditions.

Visual cues also play a big role in our patients' real-world experiences. For instance, a pair of 2010 studies from the University of Iowa showed us how visual cues may impact directional benefit. The nature of real-world acoustics also contributes to variability in patient benefit: sound is transient, in motion, and complex.

These factors give us insight into why there may be people who, in optimal situations, use directional microphones extremely well and notice a very distinct benefit, while others simply don't cue in on the benefits of directional microphones to the same extent and may not experience a tangible improvement in their day-to-day routine.

SMAKA: Is there any way for the clinician to test in their office who might benefit and who might not be making as good use of those directional cues?

GALSTER: Using a speech-in-noise test with the speech located in front of the patient and noise behind can help establish how well the patient is likely to leverage the technology, although it's an idealized situation and may not be indicative of real-world performance.

One thing to keep in mind when you conduct such testing is the behavior of the hearing aid. If it's set to automatic or adaptive, you can't be sure it's going to trigger the directional response, so you probably need to set it manually in the software. For real-world settings, I'll give patients an optional fixed directional program. This allows them to more easily determine whether the directional setting is beneficial in a given situation.
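As a concrete example of this kind of clinical comparison, directional benefit is often summarized as the improvement in the speech reception threshold in noise, the SNR needed for roughly 50% correct, when moving from an omnidirectional to a fixed directional program with speech in front and noise behind. The values below are hypothetical.

```python
def directional_benefit_db(srt_omni_db_snr: float, srt_dir_db_snr: float) -> float:
    """Directional benefit expressed as the improvement in the speech
    reception threshold in noise (SNR for ~50% correct) when switching
    from an omnidirectional to a fixed directional program, with speech
    at 0 degrees and noise behind. Lower (more negative) SRTs are better,
    so benefit = omni SRT minus directional SRT."""
    return srt_omni_db_snr - srt_dir_db_snr


# Hypothetical clinic result from an adaptive speech-in-noise test:
# the patient needs +2 dB SNR in omni but -1.5 dB SNR in fixed directional.
print(directional_benefit_db(srt_omni_db_snr=2.0, srt_dir_db_snr=-1.5))  # 3.5 dB benefit
```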

SMAKA: You've also looked at variability in noise reduction algorithms, right?

GALSTER: Yes. The take-away is that noise reduction in hearing aids is better than it's ever been. Brent discussed our research on noise reduction decreasing cognitive load. We also know, of course, that most patients will experience improved comfort in noise with noise reduction. We need to be aware that noise reduction is inherently gain manipulation, and if it isn't done in a very deliberate manner you can compromise speech audibility and quality. That said, we would always hope that manufacturers are giving us data showing that their noise reduction algorithms provide benefit and, at a minimum, do not harm speech quality and audibility.

SMAKA: Right. We know from Bentler & Chiou (2006) that there's a lot of variability in manufacturer implementations of noise reduction, even when it's the same type of noise reduction algorithm or they call it by the same name.

GALSTER: Absolutely. Noise reduction and compression, for instance, are two perfect examples where all of the manufacturers are still using these universal terms, but the way that they're implemented is vastly different from one device to the next.

Most of today's real-ear equipment will have a sort of canned routine for testing noise reduction. The caveat to testing noise reduction in the clinic is that you see the maximum reduction, but it won't tell you what's going on when speech and noise are present at the same time. Probably the two most valuable pieces of information an audiologist can have when looking at noise reduction are, first, at what overall level and signal-to-noise ratio it activates, and second, what the maximum or accessible amount of reduction is. Most of the systems today are level dependent, so they won't reach their maximum amount of noise reduction until environmental levels are around 80 to 90 dB.
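To illustrate why a single clinic measurement can miss the full behavior, the sketch below models a generic level-dependent noise-reduction characteristic: no gain reduction below an activation level or when the estimated SNR is favorable, and maximum reduction reached only at high overall levels. The activation level, SNR criterion, ramp, and 12 dB maximum are all assumptions for illustration, not any manufacturer's actual parameters.

```python
def noise_reduction_db(input_level_db_spl: float,
                       estimated_snr_db: float,
                       activation_level_db_spl: float = 60.0,
                       activation_snr_db: float = 10.0,
                       max_reduction_db: float = 12.0) -> float:
    """Illustrative level-dependent noise-reduction characteristic, not any
    manufacturer's actual algorithm: no gain reduction below an activation
    level or when the estimated SNR is favorable, then reduction that grows
    with input level, reaching its maximum only around 80-90 dB SPL."""
    if input_level_db_spl < activation_level_db_spl or estimated_snr_db > activation_snr_db:
        return 0.0
    # Ramp linearly from the activation level up to an assumed 85 dB SPL knee.
    ramp = (input_level_db_spl - activation_level_db_spl) / (85.0 - activation_level_db_spl)
    return max_reduction_db * min(max(ramp, 0.0), 1.0)


for level in (55, 65, 75, 85, 95):
    print(level, "dB SPL ->", noise_reduction_db(level, estimated_snr_db=0.0), "dB of reduction")
# 55 -> 0.0, 65 -> 2.4, 75 -> 7.2, 85 and above -> 12.0 (maximum)
```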

SMAKA: The last area I wanted to touch on is your comparative research with feedback cancellation.

GALSTER: Feedback cancellation has obviously changed our fitting behavior. Five years ago, only a few audiologists were fitting open-canal hearing aids; today everyone is fitting them. That speaks to the fact that feedback cancellation and feedback suppression in modern hearing aids have advanced by leaps and bounds. As clinicians, we're looking for effective feedback suppression that limits the occurrence of feedback and allows us to meet our prescriptive goals.

When you do encounter feedback, which most wearers will at some point, you're interested in how quickly the hearing aid will suppress the occurrence of feedback. You also want to make sure that the sound quality is consistent and maintained as well as possible throughout the entire experience.

I think it's also important for audiologists to understand that even though a hearing aid may have a very competent feedback suppression algorithm in it, the performance of that algorithm may vary quite a bit from one patient to the next.

SMAKA: Why is that?

GALSTER: One reason is the fitting configuration - is it open? Is it an occluded fitting? What is the vent size? The ear canal geometry is important as well. We always want to be sensitive to the fact that every hearing aid fitting requires appropriate receiver placement and orientation, effective venting and a quality solid mechanical build. These things are important factors to success in any hearing aid fitting and will certainly impact the performance of the feedback system.

SMAKA: Can you see that individual variability with the feedback system through probe mic measures, from one patient to the next?

GALSTER: What you would likely observe is that your stable fitting range changes. You may do a fitting with one patient and it's a very stable fitting with good sound quality; they can bring their phone up to the ear without feedback. Then you have a different patient with a similar audiogram, but their pinna is different and their canal is different, and when they bring the phone up, they get a chirp or a whoop because of that change in the feedback path. You can definitely observe these things clinically.

SMAKA: We've talked a lot about variability with hearing aid fittings today.

GALSTER: Yes. Harvey Abrams has a great quote on the topic, and I'm paraphrasing but he says,

"We know from a generation of hearing aid research that there are many variables (demographic, audiologic, cognitive, emotional, lifestyle, etc.) that influence hearing aid success. As a matter of fact, there is not even universal agreement as to what exactly constitutes hearing aid success."
I think that last statement is particularly important. We don't have universal agreement as to what defines success with amplification. One of the things I think we see as clinical professionals is that success may not be something that we're in a position to define for our patients; success may be something they need to define for themselves. Benefit is a different topic and something that we can define, but success is that individual, golden outcome.

SMAKA: We've come a long way, but it's still a challenge when the subjective and objective outcome data don't match up. Jason, it's been a very interesting discussion today! Thanks for your time.

GALSTER: My pleasure, Carolyn. Thanks for having me.

For more information please visit www.starkey.com or the Starkey Web Channel on AudiologyOnline.

References

Bentler, R., & Chiou, L-K. (2006). Digital noise reduction: An overview. Trends in Amplification, 10(2), 67-82.

Best, V., Kalluri, S., McLachlan, Valentine, S., Edwards, B., & Carlile, S. (2010). A comparison of CIC and BTE hearing aids for three-dimensional localization of speech. International Journal of Audiology, 49(10), 723-732.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research, 52, 1230-1240.

Wu, Y., & Bentler, R.A. (2010). Impact of visual cues on directional benefit and preference: Part I-Laboratory tests. Ear & Hearing, 31(1), 22-34.


Brent Edwards, PhD

Executive Director of the Starkey Hearing Research Center

Brent Edwards completed his Ph.D. in Electrical Engineering at the University of Michigan in 1992, where he investigated the application of signal processing techniques to the human auditory system. He followed this with a postdoctoral fellowship in Psychology at the University of Minnesota, where he conducted psychoacoustic research and taught in the University's Department of Communication Disorders. Since then, Dr. Edwards has directed research departments that combined engineering, hearing science, and audiology to develop new technology for the hearing impaired. He has published and presented extensively on hearing aids and auditory perception.

Dr. Edwards is currently the Executive Director of the Starkey Hearing Research Center (SHRC) located in Berkeley, CA.  SHRC was created to conduct fundamental research on perception by the hearing impaired and on new ideas for hearing aid technology.


