Widex My Sound: A More Personal, AI-Powered Hearing Experience
AudiologyOnline: Why did Widex see the need to develop My Sound if SoundSense Learn was already accommodating personal preference? What did Widex learn from SoundSense Learn, and how did you use that information to develop My Sound?
Oliver Townend: SoundSense Learn was always going to be the first solution of many to improve personalization in the real world. My Sound is now the home for multiple AI solutions that leverage the power of the SoundSense Learn engine in two ways.
SoundSense Learn can be trained in the moment, by the individual, to find highly specific listening settings for their hearing aids when they want something different from the sound the automatic system presents. A common example is a restaurant, where the wearer may prefer to listen to the background music rather than speech (most automatic hearing aid systems will naturally promote speech). The benefit of individually training SoundSense Learn is the specific nature of the solutions it finds, but this can take some attention and concentration from the listener.
When we analyzed the data generated by SoundSense Learn, our AI analysis could spot preference patterns and use them to generate patient-driven recommendations. In other words, the accumulated SoundSense Learn training means we can predict the settings most patients are looking for. This overcomes the initial concentration and attention barriers of SoundSense Learn, as the recommendations are presented almost instantly.
Overall, the two methods have distinct advantages, one being very individual and the other being much faster. Both tools are available and will suit different patient needs.
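The recommendation side of this idea can be illustrated with a deliberately naive sketch: pool anonymized (context, setting) pairs from past trainings and surface the most popular settings for the listener's current context. The data shape, context labels, and popularity counting here are illustrative assumptions, not Widex's actual analysis pipeline, which is not public.

```python
from collections import Counter

def recommend_settings(history, context, k=2):
    """Recommend the k most popular settings among past trainings
    recorded in the same listening context (e.g. "restaurant").

    `history` is a list of (context, setting) pairs standing in for
    pooled, anonymized preference data. This is a toy popularity
    count, chosen only to show why recommendations can be presented
    almost instantly: the expensive learning already happened when
    other listeners trained the system.
    """
    counts = Counter(setting for ctx, setting in history if ctx == context)
    return [setting for setting, _ in counts.most_common(k)]

# Hypothetical pooled data from earlier trainings.
history = [
    ("restaurant", "music-forward"),
    ("restaurant", "music-forward"),
    ("restaurant", "speech-forward"),
    ("office", "comfort"),
]
recs = recommend_settings(history, "restaurant")
# recs → ["music-forward", "speech-forward"]
```

Because the lookup is a simple aggregation over existing data, it costs the new listener no training effort, which is the speed advantage described above.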
AudiologyOnline: How do you see My Sound shaping the end patient experience with their hearing aids?
Oliver Townend: The broader portfolio of AI personalization now available in My Sound means more patients will feel confident to reach out and use these tools to take their experience to the next level. For all our hard work and high-tech solutions, hearing aid manufacturers face an almost impossible challenge: to build an automatic hearing aid that always meets patients’ needs.
Detecting speech and presenting this to the patient can be made more and more efficient, but what happens when the patient wants to hear something else? A hearing aid currently can’t read someone’s mind. Neither can a human being, but if we want to know what someone wants, we ask a question. The AI systems fundamentally do the same thing. SoundSense Learn asks questions, such as “Do you like sound A or B?”
By asking these questions, it allows itself to be trained on exactly the sound preference the patient desires. Now we take it to the next level by enabling the patient preference data to generate recommendations for other patients, so solutions can be found faster. For different patients with different needs, we can give personalization experiences that are not available anywhere else in the market.
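The "Do you like sound A or B?" idea can be sketched as a toy interval search over a single listening parameter: each paired comparison shrinks the range of candidate settings toward the listener's preference, without the listener ever naming a number. The single-parameter setup, the dB range, and the ternary-style narrowing are assumptions for illustration only, not the proprietary SoundSense Learn algorithm.

```python
def ab_preference_search(user_prefers, low=-10.0, high=10.0, rounds=8):
    """Toy A/B search over one gain parameter (in dB).

    `user_prefers(a, b)` returns True if setting `a` sounds better to
    the listener than setting `b`. Each round plays two candidate
    settings and keeps the interval on the preferred side, so repeated
    simple questions converge on a preference.
    """
    for _ in range(rounds):
        third = (high - low) / 3.0
        a = low + third    # candidate "sound A"
        b = high - third   # candidate "sound B"
        if user_prefers(a, b):
            high = b       # preference lies toward A; discard the top third
        else:
            low = a        # preference lies toward B; discard the bottom third
    return (low + high) / 2.0

# Simulated listener whose ideal setting is +4 dB: they always pick
# whichever candidate is closer to that ideal.
estimate = ab_preference_search(lambda a, b: abs(a - 4.0) < abs(b - 4.0))
# estimate lands within about 1 dB of 4.0 after 8 comparisons
```

The point of the sketch is the trade-off named earlier: this individual training is precise, but it costs the listener a series of attentive comparisons, which is exactly the effort the pooled recommendations remove.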
AudiologyOnline: Is My Sound using machine learning? And how will it improve in the future? How can using My Sound help the HCP better understand their patients?
Oliver Townend: As mentioned previously, My Sound uses the existing AI engine, SoundSense Learn. This is still the primary way patients can train the system to find their preferred sounds. We use additional AI in the cloud to analyze the preference data from all this training and generate the qualified recommendations now available in My Sound. If new training data shows that listening trends are evolving, the recommendations will change to reflect them.
Remember, every personal program that the patient creates is available to the provider via Real Life Insights. The settings of the programs and how they are being used can help the provider tailor their approach to the individual patient.
AudiologyOnline: Will app-based customization eliminate the provider?
Oliver Townend: Not at all. AI-driven applications are very good at performing tasks based on large amounts of data, larger pools of data than a human could handle. AI is not very good at tasks that do not involve data: the emotional work that is incredibly important in the clinic, such as counselling. In the future, in all parts of life, we will see more AI being used to make people's jobs easier and allow the individual to focus on tasks that a human is better at. The provider of the future should look at AI as their partner, making them better at their key goal: to help people hear better.
AudiologyOnline: How would you present My Sound to a patient who is intimidated by technology?
Oliver Townend: Think of My Sound as having lots of people around you who can help. By telling the app where you are and what you want to hear, it can give you two solutions that are based on thousands of other patients just like you. It couldn’t be faster or simpler.