


Binaural Coordination: Making the Connection

Erin Reichert, MS
July 21, 2014
This article is sponsored by Sonic.

Editor’s Note: This text course is an edited transcript of a live webinar. Download supplemental course materials.

Erin Reichert:  Thank you for joining me today in a discussion of Binaural Coordination: Making the Connection.  Binaural coordination is a feature in our hearing instruments that sometimes gets forgotten amongst the other features and benefits.

Many processes are occurring at once within the instrument for binaural coordination to take place.  With binaural coordination, independent functionalities merge together to create a single, unified, natural listening experience.  One facet of our 4S Foundation is sound that is natural.  The whole experience of wearing the hearing instrument should be a natural thing for our patients.  It is important to offer a feature like binaural coordination that will merge many things together and deliver benefit to your patient. 

As this is a CEU course, we want to make sure we cover our learning outcomes for today.  After this course, you will be able to describe what binaural coordination means in terms of the Sonic product offering, and describe the functionality of environment classification, non-telephone ear control and binaural synchronization.  In our world, those three features build up, create, and compile binaural coordination.  Finally, you should be able to summarize which Sonic products offer binaural coordination. 

The Dynamic Environment

Almost everything in our world, including communication, is dynamic. Sound technology, too, is consistently changing and improving. For a person with hearing impairment, that dynamic world can be a difficult experience. I want you to think about your day today. Imagine all of the different listening environments that you have experienced already. Maybe there was an alarm that woke you up this morning. Maybe you climbed out of bed and took a shower. Maybe you have a family at home and you have children who were asking questions and engaging in dialogue as they needed help with different things. Maybe you listened to the radio on the way in to your office. You have communicated all day, whether in meetings, over the phone, or on video chat. There are a lot of listening environments. Some of you may have been to lunch at a noisy restaurant. These moments make up the spectrum of human communication, and along with that, human interaction. We need communication. We need interaction. It is consistently changing.

The Healthy Ear

Someone with normal hearing has a normal cochlea that will control variations in amplitude by amplifying low-level sounds and compressing high-level sounds.  There is a tonotopic arrangement of inner and outer hair cells, which maintain frequency contrasts within speech.  Frequency contrast is key, especially when it comes to clarity. 

Most people with a healthy cochlea likely have a healthy brain as well.  There are many psychoacoustic properties of time and level differences that occur between the ears, such as determining spatial separation, location of sound and source, et cetera.  A healthy cochlea and a healthy brain can put these signals together automatically without a problem. 

Dynamic World

Think about all the communicative things you did today.  Now let’s give you a hearing loss.  Maybe you have some recruitment.  We are starting to see more and more instrument orders with comments that the patient has recruitment.  As far as speech discrimination goes, what is the best we can get when we are doing audiometry and speech testing?  Speech discrimination is a key indicator for how successful that patient will be with amplification.  Very often, speech discrimination is degraded for someone with a hearing loss. 

The only silent place in the world is in an anechoic chamber.  There is always noise, from the refrigerator to the air conditioning to traffic.  Now imagine you have a hearing loss where all you hear is the noise instead of speech. 

If you have a hearing loss, very often you have sound localization problems.  This is a major disadvantage for people who have hearing loss but are not wearing any amplification.  We know that only a small percentage of people with mild to moderate hearing loss are fit with amplification.  As an industry, we need to find a way to address all the people with mild to moderate hearing loss who do not use amplification.   They are at a disadvantage, and things could easily be improved with the many great solutions that are available.

Limits of Amplification

Not every hearing aid can do a great job, but generally speaking, any hearing aid is better than no hearing aid. We do recognize that amplification has some limits with respect to processing capabilities. Not all technology is created equally. Some have very fast attack and release times, and others are slow. It depends upon the manufacturer’s core beliefs with regard to their digital signal processing (DSP). Many instruments burden the wearer with manual adjustments. Patients often think about what program they are in. “Is my right one in telephone? Am I correct? Is my right one in noise and my left one in quiet? I can’t figure out if my hearing aids are in sync. I didn’t hear how many beeps there were.” In these cases, the patient will continue to press buttons in an attempt to synchronize the hearing aids.

What about wind?  If you do not have a system that can handle wind noise, that hearing-impaired person will struggle.  Our environment is windier than we realize.  The woman who takes care of my daughter has hearing loss, and she is wearing some Sonic Flips.  She has told me that she can finally hear outside for the first time, even when it is so windy.  The technology in our instruments provides the capability for her to do well in our ever-changing dynamic environments. 

Next are volume controls. I used to hear complaints from many patients about differences or uncertainties regarding the volume in each hearing aid. “I turn my right one up louder than my left one. One seems louder.” When the aids are not in sync and the patient has to make a lot of manual adjustments, it can be cumbersome.

The Sonic Solution encompasses the 4S foundation, starting with Simplicity.  We need to make it simple to interact with our organization and with our hearing instruments.  It should not be problematic or burdensome.  The environment classification algorithm will analyze all acoustic signals to provide the best adaptive response for all situations.  This system does a tremendous job behind the scenes and keeps it simple for the end user.  If it is simple for your patient, they are going to be happy and smiling.  Happy patients tell other people, and those people will come to you because your patient had such a positive experience.  The environment classification makes a big difference for patients using our products. 

Environmental Classification

This course is about binaural coordination, which is divided into three different categories.  The first one is the environment classification.  This is the method of acoustic analysis that recognizes complex patterns.  It will detect the temporal and spectral characteristics of the incoming signal and measure and classify all those in 300 msec.  That is nearly instantaneous for our auditory system.  Our DSP is speech-variable processing (SVP). 

We believe in a very fast attack and release. Because we have a fast attack and release, our system consistently analyzes the incoming signal, deciphering what it is and deciding how to process it. What is the signal? How do I need to react? Because the core processor is so fast, everything that we pile onto it is going to be equally fast, at under 300 msec.

Periodicity

The first parameter evaluated is periodicity. We identify the harmonics found in speech and music and distinguish them from aperiodic signals such as wind. Detecting periodicity tells us what kind of signal we need to handle.

Modulation Rate

The modulation rate aims to determine if speech is present or absent. Speech is highly modulated. It is what makes speech, speech. Our speech priority noise reduction is able to determine what is speech and what is noise and handle them separately. This is because of the modulation analysis that is occurring.

Signal-to-Noise Ratio

The signal-to-noise ratio tells us how much noise is present.  Is the amount of noise overwhelming the signal or is the noise quiet enough that we do not need to manage it because the signal is loud and clear? 

Interaural Time Difference

The interaural time difference tells us the variation in when a sound arrives at each microphone, just as the interaural level difference tells us the variation in sound pressure level between the microphones. We are going to figure out what is going on between the microphones and where the signal is coming from so we can make sure we respond accordingly.

After the source is analyzed, the algorithm then organizes the auditory scene into five possible categories: speech in noise, speech in quiet, noise only, quiet only, and wind.  As I mentioned, wind is its own category.  Wind is a tough one to do.  All of the other categories are sound scenes. 
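
To make the classification idea concrete, here is a minimal sketch in Python of how measurements like these could be mapped onto the five sound scenes. The thresholds, the use of an overall level estimate, the field names, and the decision order are assumptions made for illustration; they are not Sonic's actual DSP implementation.

```python
# Illustrative sketch only: thresholds and decision order are assumed for
# teaching purposes and are not Sonic's actual DSP implementation.
from dataclasses import dataclass

@dataclass
class AcousticAnalysis:
    periodicity: float      # 0..1, harmonic structure (speech/music) vs. aperiodic (wind)
    modulation_rate: float  # 0..1, how strongly the envelope is modulated (speech-like)
    snr_db: float           # estimated signal-to-noise ratio in dB
    level_db_spl: float     # overall input level in dB SPL

def classify_environment(a: AcousticAnalysis) -> str:
    """Map one ~300 ms analysis frame onto one of five sound scenes."""
    # Aperiodic, turbulent input with no harmonic structure -> wind
    if a.periodicity < 0.1 and a.level_db_spl > 55:
        return "wind"
    speech_present = a.modulation_rate > 0.5 and a.periodicity > 0.4
    if speech_present:
        return "speech_in_noise" if a.snr_db < 10 else "speech_in_quiet"
    # No speech: decide between noise-only and quiet-only by overall level
    return "noise_only" if a.level_db_spl > 50 else "quiet_only"

print(classify_environment(AcousticAnalysis(0.7, 0.8, 5.0, 70.0)))   # speech_in_noise
print(classify_environment(AcousticAnalysis(0.05, 0.2, 0.0, 65.0)))  # wind
```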

Speech is king in everything we do here at Sonic. Our mission is to improve lives through enhanced hearing. Speech in noise and speech in quiet will be your priorities. Sometimes there is noise only. Sometimes there is no particular target signal you are listening to; maybe it is just a very noisy area. We need to make sure that both hearing instruments are configured properly and are working together to handle that noise. Comfort is important in a product.

The quiet-only mode is very important.  Some people like to sit in the quiet and they do not want to hear a lot of other things going on.  It is important to have a hearing instrument that will say, “Hey right aid, do you have quiet?  I’m in the left and I have quiet.”  It is important that it is happening in the technology. 

Now it is time to coordinate. The prioritization and synchronization of environment-classified categories needs to happen between both instruments. The right aid needs to be talking to the left aid, and they will figure out what is going on to ensure they are set correctly. If the instruments detect different environments, the highest-priority environment dominates, and both will synchronize to it. The right aid says, “I hear a lot of noise over here. What do you hear, left aid?” The left aid says, “I have speech in noise.” Because there is speech present on the left side, the left aid is the dominant marker, and it will make sure both instruments are set to hear the target auditory scene of speech in noise. This is where it gets impressive. The hearing instruments are doing this seamlessly, and your patient will not notice the hearing aids shifting into another program. It is happening so rapidly that patients have a great fidelity experience.

The Universal Environment

Speech in noise receives the highest priority for the universal environment, giving that extra hands-free advantage. The other options are speech in quiet, noise only, and quiet only. The universal program makes sure that the auditory scene is set correctly. Wind is excluded from this aspect of the universal environment because it is managed separately between the two ears. Think about a patient who wears hearing aids; he loves to have the car window down on a beautiful summer day. His wife is sitting next to him and wants to have a conversation with him. In that situation, the left ear will detect wind and will check to see what the right ear detects. Most likely, the right ear will say speech in noise, because the radio is on and his wife is talking. The left hearing aid will handle the wind, and the right aid will handle speech in noise. They will be managed separately. In a situation like this, we do not want to change both instruments to wind, because then the focus is not on speech. At the same time, we want the left ear to get the full benefit of the wind management that is occurring on that side. It is a neat feature.
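
Here is a minimal sketch of that coordination logic, assuming a simple priority ordering with speech in noise at the top and with wind kept out of the synchronization so each ear manages it locally. The function and category names are invented for the example; the actual firmware logic is not published.

```python
# Illustrative sketch: priority order and names are assumptions, not Sonic firmware.
PRIORITY = ["speech_in_noise", "speech_in_quiet", "noise_only", "quiet_only"]

def coordinate(left: str, right: str) -> tuple:
    """Return the (left, right) operating scenes after binaural coordination.

    Wind is excluded from synchronization: the windy ear handles wind locally
    while the other ear stays on the scene it detected (e.g. speech in noise).
    All other scenes synchronize to whichever detection has higher priority.
    """
    if left == "wind" or right == "wind":
        return left, right  # manage wind separately, per ear
    dominant = min((left, right), key=PRIORITY.index)
    return dominant, dominant

# Driver with the window down: left ear hears wind, right ear hears his wife.
print(coordinate("wind", "speech_in_noise"))        # ('wind', 'speech_in_noise')
# Left ear detects only noise, right ear detects speech in noise:
print(coordinate("noise_only", "speech_in_noise"))  # ('speech_in_noise', 'speech_in_noise')
```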

There are not a lot of manufacturers out there with something this sophisticated, and it is added value for your patient to have a seamless listening experience.  Our universal program ships as the default program in all the hearing instruments.  It is loaded into the hearing instrument in the Program 1 (P1) position when you connect to EXPRESSfit. 

I like to do a sampling of the instruments that come into our facility to see how our clinicians are programming. I most often find that the hearing instruments have one active program, and it is the universal program. When I see that, I get excited, because it tells me that the patient is doing well in that one program. That set-it-and-forget-it experience is what we believe is beneficial about this technology. With the universal program, again, speech is king. We will always give the highest priority to speech in the universal configuration.

Optimization

We also have further optimization with adaptive and hybrid directionality. With all the inputs we are analyzing, we ensure that the best polar pattern is selected. Every situation is different, and you have to have a system that is adaptive. Hybrid directionality is our premium-level technology, and it keeps analyzing across the frequency range, dividing it into four segments: lows, low-mids, mid-highs, and highs. It will morph the polar patterns in those four regions to whatever provides the best benefit for your patient. With hybrid directionality, the low-frequency region remains in an omnidirectional configuration, and the top three regions adapt. We have found this to be tremendously successful with patients, and it does a great job giving extra assistance when wind is present. Whenever you have wind, you want to be in the omnidirectional configuration in the low frequencies.
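
As a rough way to picture that band-split behavior, the sketch below chooses a polar pattern per frequency region, keeping the lows omnidirectional while the three upper regions adapt. The band edges and pattern labels are assumptions made for the example, not published Sonic parameters.

```python
# Illustrative sketch: band edges and pattern choices are assumed, not published
# Sonic parameters.
BANDS_HZ = {
    "lows":      (0, 750),     # stays omnidirectional (also preferred when wind is present)
    "low_mids":  (750, 2000),
    "mid_highs": (2000, 4000),
    "highs":     (4000, 8000),
}

def choose_patterns(noise_azimuth_deg=None):
    """Pick a polar pattern per frequency region; only the top three regions adapt."""
    patterns = {"lows": "omnidirectional"}
    for band in ("low_mids", "mid_highs", "highs"):
        if noise_azimuth_deg is None:
            patterns[band] = "omnidirectional"  # quiet scene: no beamforming needed
        else:
            # steer an adaptive null toward the dominant noise direction
            patterns[band] = "adaptive (null at %d degrees)" % noise_azimuth_deg
    return patterns

print(choose_patterns(noise_azimuth_deg=180))
```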

We also optimize with speech priority noise reduction.  We are going to attenuate background noise, but only as much as needed.  We need to restore listening comfort.  Speech priority noise reduction does a wonderful job of that.  Sonic has always been known to do really well in noise.  We are very targeted and believe that someone with hearing loss is coming in to get help from you because of their noisy world.  That is where they are struggling. 

Very often, patients with high-frequency losses report doing okay in one-on-one conversations, but struggling considerably when they are at their place of worship, at a restaurant, at a movie or at a sporting event.  This is why they are coming for help.  If we cannot give them help and provide benefit that makes a difference, then we are not doing a good job.  We feel that speech priority noise reduction accomplishes this goal.  It consistently analyzes the signal and it will look for noise and for speech.  When we can attenuate the noise and preserve and correctly amplify speech, it makes a big difference.  That is the key. 

Many traditional products in the marketplace are not as fast or as sophisticated to be able to do this.  Unfortunately, other devices will detect noise and attenuate the entire signal, speech included.  Ours is a hands-free listening experience with simplicity and environment classification; it is important to have a system that can do this so the patient is not losing any fidelity.  It allows them to trust their amplification.

Gain Settings

We are also going to optimize gain settings in the instrument. For speech in noise, the optimized gain setting is a decrease in compression of speech-related input to maximize the phonemic cues in noise. For speech in quiet, amplification of speech-related input is increased to accentuate conversation with less listening effort on behalf of the patient. That is exactly what you would want to do from a gain approach. When there is only noise, amplification of loud inputs is reduced for greater comfort when speech is not present. Think of the noisiest place you have been in the last week. If you have a hearing loss and you are wearing hearing instruments that amplify everything, it can be overwhelming. We want to make sure those loud inputs are reduced for comfort. When it is quiet, amplification of soft inputs is reduced for transparent sound in quiet. We have a nice track record of excelling in expansion in our products.

As previously mentioned, wind is a hard category for gain optimization. However, the optimized gain setting is a reduction of low-frequency amplification for only the affected side; the opposite side remains unaffected. You do not want both sides set to wind. Remember the example I gave of the gentleman driving in the car with the window down. Not only do the two hearing instruments stay uncoupled, if you will, as one focuses on wind and the other focuses on speech, but that windy ear will have some low-frequency amplification reduction. You want to make sure to do that, but you do not want to touch the other ear. You want to make sure the other ear is focused appropriately based on its auditory scene.
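
One way to picture those gain rules is as a simple mapping from the coordinated scene in each ear to a gain adjustment, with wind adding a low-frequency cut on the affected side only. The rule wording follows the description above, but the structure and names are assumed for illustration and are not the shipped parameter set.

```python
# Illustrative sketch: the structure and names are assumptions for the example.
GAIN_RULES = {
    "speech_in_noise": "reduce compression of speech-level inputs (preserve phonemic cues)",
    "speech_in_quiet": "increase gain for speech-level inputs (less listening effort)",
    "noise_only":      "reduce gain for loud inputs (comfort when no speech is present)",
    "quiet_only":      "reduce gain for soft inputs (transparent sound in quiet)",
}

def apply_gain_rules(left_scene, right_scene):
    """Return the per-ear gain adjustments for the current coordinated scenes."""
    adjustments = {"left": [], "right": []}
    for ear, scene in (("left", left_scene), ("right", right_scene)):
        if scene == "wind":
            # Wind: cut low-frequency gain on the affected side only.
            adjustments[ear].append("reduce low-frequency gain on this ear only")
        else:
            adjustments[ear].append(GAIN_RULES[scene])
    return adjustments

# The driver example: left ear in wind, right ear in speech in noise.
print(apply_gain_rules("wind", "speech_in_noise"))
```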

Say this patient who was driving gets out of their car to go for a hike on a very windy day.  You definitely want both ears working together to make sure they can pick the correct auditory scene.  In this automatic scenario, the patient does not have to think, “I got out of the car, so I need to switch my hearing aids from this program to another program.” If they are in a universal program, this all happens automatically.  They do not have to press a button.

The Essentials

Sonic’s DSP provides the computational power needed for a robust, fast-acting, accurate system. This technology is built on our wireless platform. Wireless is available in any of our 100-, 80-, and 60-level technologies, and it is available in our Flip, Bliss, and Charm product offerings. Wireless communication happens at 120,000 bits per second; the hearing instruments are rapidly firing data back and forth to each other. With all of these things in play, we allow for extremely rapid detection and synchronization of classified settings between the ears.

The DSP will drive everything.  We are happy that we have speech variable processing; it gives natural sound very quickly, not under or over-amplifying.  All the other processing systems play right on top of that and work to provide benefit, which is value to your patient.  That is what it is about. 

The result is a full 360-degree analysis of what is going on. You have full optimization of the auditory environment. The benefit is a hands-free, unified auditory experience for the listener in dynamic environments. Speech variable processing is based on our cochlear amplifier model. We believe everything should be as quick as it can be, as in a healthy ear. A healthy auditory system has two cochleae, but one brain doing the computing. We have two hearing instruments (if the patient is a binaural user) that act as one for that patient.

[Screenshot from the video on environment classification]

Non-Telephone Ear Control

Non-telephone ear control is a premium-level feature at Sonic.  I have personally seen what a difference this has made for someone who is very important to me.  This particular person is able now to have a conversation on the phone without any confusion or adjustments.  It has made a big difference in connecting with family again, and that is what it is about.  Simple and effortless is the key from a development aspect. 

Sergei Kochkin has told us for years that listening on the telephone presents challenges for hearing instrument wearers, especially in noise. I am lucky to have normal hearing, but even when I am in a noisy environment, it is difficult to hear on the telephone. The auto telephone feature in hearing aids was a great improvement within the hearing instrument industry. The hearing instrument knows that it hears a phone, so it will configure for the phone, but it cannot do anything about the noise. That is when we decided to address the issue.

Our non-telephone ear control is simplicity at its best. It is easy to configure. Auto Telephone is activated in the instrument within the features section. In the software, you want to make sure that Auto Telephone is turned on. The user will pick up the phone; it is important that they have a phone that is compatible with a hearing instrument. There have been a lot of discussions in the past five years regarding M (microphone) and T (telecoil) ratings. Cellphone providers are now required by law to have at least one product that gives an M4/T4 rating. In our scenario, we are going to presume that the patient has an M4/T4 phone. When the user holds the phone and its magnetic field comes near the instrument, the magnetic switch in the instrument will engage the Auto Telephone program.

A cool thing about our software is that you can program the Auto Telephone program completely independently of other programs. In some competitive products, the gain in the Auto Telephone program defaults to that of the universal or hands-free program, or maybe even the quiet program. With ours, it is an independent program. If that patient has too much gain or not enough gain in that specific environment, you can shape the frequency response as specifically as you want so that the patient has great fidelity on the phone. Then, with binaural coordination active in the instrument, the non-telephone ear responds simultaneously, as configured in the EXPRESSfit software.

Within the EXPRESSfit fitting system, you have three configurations for the non-telephone ear control.  You can set it at 0 dB.  With that, there will be absolutely no change for the non-telephone ear.  If I pick up the phone on my left ear, the right ear will keep amplifying just like it does normally.  If I set it to -6 dB and hold the phone on my left ear, amplification of my non-telephone ear (or right ear in this case) is attenuated by 6 dB.  If I configured the non-telephone ear to mute in EXPRESSfit, it will be silenced completely during a phone call.  That is a nice feature.  This is a premium feature, which means it is available in our 100-level technology. 
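
A minimal sketch of the non-telephone ear behavior, assuming the three EXPRESSfit options just described (0 dB, -6 dB, and mute). The function name and the phone-detection flag are invented for the example.

```python
# Illustrative sketch: names are invented; only the three options (0 dB, -6 dB,
# mute) come from the EXPRESSfit configuration described above.
def non_telephone_ear_gain_offset(phone_detected: bool, setting: str) -> float:
    """Gain change (dB) applied to the ear NOT holding the phone.

    setting: "0dB"  -> no change on the non-telephone ear
             "-6dB" -> attenuate the non-telephone ear by 6 dB
             "mute" -> silence the non-telephone ear during the call
    """
    if not phone_detected:  # magnetic switch has not engaged Auto Telephone
        return 0.0
    return {"0dB": 0.0, "-6dB": -6.0, "mute": float("-inf")}[setting]

# Phone held to the left ear; the right (non-telephone) ear is set to -6 dB.
print(non_telephone_ear_gain_offset(phone_detected=True, setting="-6dB"))  # -6.0
```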

So when would each setting be appropriate?  Patients who are not distracted by background noise during phone use will typically do fine with the 0 dB setting.  Using the -6 dB setting would be appropriate for patients who are slightly bothered by background noise during phone use, but still want to retain some auditory awareness of the environment while on the phone.  They do not want the sound to go completely away on the non-telephone ear.  Mute is great for patients who are disturbed by environment noise on the phone.

Our M and T ratings are wonderful.  By using one of our products and putting that non-telephone ear on mute, the fidelity that the patient will experience will go up.  They are going to hear the phone coming in nice and clear on the telephone ear, and the non-telephone ear is going to be muted.   You have a lot of configuration options in the EXPRESSfit fitting system. 

It is really the most flexible way to hear on the phone, and it is not side dependent.  Older technologies required the fitter to select a dominant ear that would use the phone.   Our program handles both instruments accordingly, and the user can talk on the phone with either ear.  Again, there is nothing your patient has to touch.  Our customers and their patients have had positive experiences with this feature.  It is a great way to give premium technology to someone who struggles on the phone or for whom the phone is very important in their daily life. 

The average age of first-time users is decreasing every year. That means we are reaching more and more patients, many of them are getting benefit, and that is what matters. Because of that, we have a lot of wearers who are still in the workforce. It adds value to our product line-up.

[Screenshot from the video on choosing non-telephone ear configurations]

I do not know about you, but I am tethered to my phone.  If I could not hear while on my phone, my job would be in jeopardy.  I need to be able to communicate with people.  If I had a hearing loss, this product would be such a tremendous value so I could have the opportunity to hear well on the phone.  When I am in a noisy environment, I plug the non-telephone ear with my finger.  Although it is not technically helping, it is cutting out the extra noise.  When the noise is decreased, I can focus on the phone conversation.  That is exactly what this system does.  It gives patients great fidelity on the phone. 

Binaural Synchronization

The final aspect of our binaural coordination is binaural synchronization. We have two ears for a reason. From a spatial aspect, if we were meant to have one ear, it would be in the middle of our forehead; but our head is round, and we have two ears. We also know that even with two healthy ears, we hear with our brain. It all works as one unit, and it is important to have a hearing instrument that will mimic that as much as possible. We aim to have two ears working as one unit for your patient.

Binaural synchronization is the wireless feature that coordinates manual changes made on the ear or via an accessory. We have a program button, which is a push button at ear level. You also have a volume control that you can manipulate on some instruments, or you have the push-button mute. This works via a carrier frequency of 3.84 MHz between the two hearing instruments, and it occurs simultaneously. Let’s say I am wearing Bliss BTEs, our standard BTE. Those instruments have a volume control. Perhaps I am in a scenario where I want a little more volume. We all have patients who love their volume controls. I can reach up and increase the gain by turning the volume wheel up on my right instrument, and it will simultaneously happen in the left instrument as well.

Let’s say I have two programs active in my hearing instruments.  Say I have program one set as the universal and program two set as high noise because I have demanding dynamic changes in my sound world.  When I go into my noisy environments, I reach up and press my push button to go into my second program.  Simultaneously, that other instrument changes into the noise program. 

You can also do it via an accessory. Binaural synchronization is a wireless feature, so it is available in any of our wireless products. Our accessories are the SoundGate, which is our Bluetooth streamer, and the RC-P, which is our remote control. In that situation, if I press a button on the RC-P or my SoundGate to change something in my hearing aid, it will apply the change simultaneously to both hearing instruments.

Configurable Controls

With binaural synchronization, a short press of the program button on one instrument results in the exact same program change in the other instrument. With regard to the volume control, an increase or decrease of the VC on one instrument will do the same thing on the other instrument. It will not physically roll the volume wheel on the other instrument, but it will make sure the change in amplification is matched in that ear.

If you have push-button mute configured in EXPRESSfit, a long press of the program button on one instrument will simultaneously mute the other instrument. It is all configurable in the EXPRESSfit fitting software. When you are working with an accessory, the RC-P or the SoundGate, it ensures adjustments happen in both ears simultaneously. A cool thing here is that there is no delay. It is seamless and hands-free when you are working with the accessory. You could have your RC-P in your pocket making an adjustment, and no one would ever know.
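
To summarize the control mapping, here is a sketch that models how the mirrored behaviors could work, with every manual action applied to both instruments. The class and method names are assumptions for illustration; only the behaviors themselves (matched program changes, matched volume changes, long-press mute) come from the description above.

```python
# Illustrative sketch: event names and structure are assumptions; the mirrored
# behaviors (program, volume, mute) are the ones described above.
class HearingAid:
    def __init__(self, side):
        self.side, self.program, self.volume_db, self.muted = side, 1, 0.0, False

class BinauralPair:
    """Mirror manual adjustments made on either instrument (or an accessory)."""
    def __init__(self):
        self.left, self.right = HearingAid("left"), HearingAid("right")

    def short_press(self, n_programs=2):
        for aid in (self.left, self.right):   # same program change on both
            aid.program = aid.program % n_programs + 1
        return self.left.program

    def volume_step(self, delta_db):
        for aid in (self.left, self.right):   # matched gain change on both
            aid.volume_db += delta_db
        return self.left.volume_db

    def long_press(self):
        for aid in (self.left, self.right):   # long press mutes (or unmutes) both
            aid.muted = not aid.muted
        return self.left.muted

pair = BinauralPair()
print(pair.short_press())      # 2 -> both aids switch to the noise program
print(pair.volume_step(+2.0))  # 2.0 -> both aids turn up together
print(pair.long_press())       # True -> both aids mute
```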

We want to make sure that adjustments can happen for your patient in both ears, but what about a patient who has dexterity issues? Maybe they have arthritis or difficulty manipulating anything small with one hand or the other. The nice thing for that patient is that they can make an adjustment with the better hand, and it will carry through to the other aid without any problem. It also guarantees accuracy.

I remember before data logging came out, I had a patient who was not hearing at home.  I took a look at the instrument and their hearing aid was sitting in Telecoil mode.  I asked if they had hit the button at some point.  They said there was no way they could hit the button or maybe they did while on the phone, but they pushed it back. Right after this time, data logging came out and I was able to determine that this patient was hanging out in their telephone program all day because they pressed the button and forgot to press it back. 

In this situation, we are going to keep the ears in sync. We are going to guarantee accuracy. Programs are always matched from side to side. Volume changes are made together. Most often, if you want your right aid up, you are going to want your left aid up, because you are facing what you want to hear. That being said, if you want to deactivate the synchronization, by all means, you can. In EXPRESSfit, you can uncheck the binaural synchronization, and then all the controls will operate completely independently of each other.

[Screenshot from the video on synchronizing manual adjustments]

EXPRESSfit with Ease

EXPRESSfit is great software because it gives you exactly what you need to do your fitting quickly so you can spend time counseling your patient. Binaural coordination is simple, straightforward, and easy to configure. In the Manage Programs screen, there is a box on the Features tab that says binaural coordination. You need a wireless product for this functionality to work. The non-telephone ear control is configured in the Auto Telephone program in what we call the bonus programs. You go into Manage Programs, then into Features, access your bonus programs, and check Auto Telephone to enable the non-telephone ear control in the 100-level products.

The configuration for binaural synchronization occurs in the Finish Session tab of our software. There are checkboxes in the middle for synchronize volume, synchronize programs, and synchronize mute. You can configure these specifically for your patient, their dexterity, and their listening needs.

Summary

In conclusion, wearing hearing instruments should not wear you out.  It should be a wonderful experience, and it should not be cumbersome.  With binaural coordination, dynamic listening environments are stabilized with environment classification.  Telephone use becomes effortless with the non-telephone ear control, and manual adjustments are minimized with the binaural synchronization. 

I would like to give a big thank you to the author of our Spotlight article, Tara Helbing.  She is a member of our team here at Sonic, and she wrote a great article on this topic.  It is available on our website, www.sonici.us.  Thank you, Tara, for the article and for your contributions to this presentation.   

Cite this content as:

Reichert, E. (2014, July). Binaural coordination: making the connection. AudiologyOnline, Article 12799. Retrieved from: https://www.audiologyonline.com

 



Erin Reichert, MS

Senior Marketing Director

Erin Reichert is the Senior Marketing Director for Sonic and is responsible for all marketing activity, globally. Her primary focus is to create materials that impact not only the hearing care professional but also the end user ensuring features, benefits and value are all clearly defined. Prior to her current position, she held various positions of increasing responsibility within the company, most recently having been Director of Professional Services. Prior to coming to Sonic in 2016, Erin worked in a private practice serving south-eastern Minnesota.  She received her Master's Degree in Audiology from the University of Wisconsin-Madison. Go Badgers!



