
Rise of the Audiological Machines

Aaron Jones, AuD, MS

October 28, 2019

Interview with Aaron Jones, AuD, MS, Sr. Director of Product Management and Practice Development.

 

AudiologyOnline: Thank you for being with us today, let's start by defining "intelligence" versus "artificial intelligence".

Aaron Jones, AuD, MS: The definition of intelligence is elusive, but it certainly involves processing, reasoning, and learning. Clearly, intelligence is something we associate with the brain, but increasingly people use the term artificial intelligence (AI). Data show that the popularity of the Internet search term “artificial intelligence” has more than doubled in the last 10 years (Google Trends, 2019). We see frequent references to AI in popular culture, and it is a technological basis for thousands of entrepreneurial ventures ("The AI 100: Artificial Intelligence Startups That You Better Know", 2019). With this increasing societal and occupational interest, AI is bound to make inroads into the hearing care industry, which means that audiologists need to be aware of it.

Simply put, AI is aptitude, demonstrated by a computer, for a task normally accomplished by a brain. It uses mathematical models, which are systems of equations that produce desired outputs for specific inputs, to mimic brain function through processing information, reasoning based on that information, and learning from it. Models are developed and trained using input data that typically have patterns and are labeled. In other words, AI involves using systems of equations trained with real-world data to automatically produce, in a brain-like way, desired outputs for new inputs.
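As a toy illustration of that definition, a simple model can be trained on a handful of labeled examples and then asked to categorize an input it has never seen. The minimal sketch below assumes Python with scikit-learn and invented acoustic data; it is not any manufacturer's model, only a demonstration of "train on labeled inputs, then produce outputs for new inputs."

```python
# A toy "system of equations" trained on labeled data, then used on a new input.
# The feature values and labels below are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples: [signal-to-noise ratio (dB), overall level (dB SPL)]
X_train = [[20, 55], [18, 60], [15, 50], [2, 75], [0, 80], [-3, 85]]
y_train = ["quiet conversation", "quiet conversation", "quiet conversation",
           "noisy environment", "noisy environment", "noisy environment"]

model = LogisticRegression()   # a simple mathematical model
model.fit(X_train, y_train)    # "learning": parameters are adjusted to fit the labels

# A new, unseen input is mapped to a brain-like categorical judgment.
print(model.predict([[5, 72]]))   # expected output: ['noisy environment']
```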

The term “artificial intelligence” was coined in 1955 (McCarthy et al., 2006), although the idea itself dates back to the automatons of Greek mythology. In their proposal, McCarthy and his team conjectured that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Over the years, AI has been depicted as enabling the automation of intellectual and physical human tasks. Depictions have ranged from utopian to dystopian. Utopian ones like the movie Robot & Frank, where a man gains both a friend and an accomplice in a robot, nurtured the idea that AI may ultimately assist humans in our daily activities and professions (Schreier et al., 2012). At the other end of the spectrum, dystopian depictions like the film adaptation of Isaac Asimov’s I, Robot have fueled fears that AI may someday replace humans in our professions (Proyas, 2004). Although often depicted in the context of robots, AI does not require them. Robots themselves are not AI, but they can be driven by it. Even without using AI, robots can perform defined tasks based on sensor data; simply using sensor data to trigger a computational decision is not AI.

AudiologyOnline: How does this relate to audiology?

Aaron Jones, AuD, MS: Recently, a computer scientist and former Chief Scientific Officer of Baidu, which is one of the largest internet and AI companies by revenue in the world, said that tasks a person can do with no more than one second of thought may be automated with AI now or in the near future (Ng, 2017). This suggests that some audiological tasks today may be ripe for automation with AI.

AI and automation have, in fact, already affected audiology. For example, screening audiometry has been automated, without using AI, as demonstrated by the Welch Allyn AudioScope®. Some manufacturers use AI to improve hearing instrument performance or to automate audiological tasks like fitting fine tuning. As audiologists, we are increasingly faced with AI terminology, but even though it has become part of our lexicon, that terminology is often misunderstood and misused. Furthermore, our own lack of AI awareness, together with fear fueled by dystopian depictions in the media, has made us susceptible to marketing hype.

AudiologyOnline: Is AI already used in healthcare?

Aaron Jones, AuD, MS: Although AI is most visible in everyday consumer applications, it is definitely finding its way into the healthcare industry. Today, AI has found applications in disease prediction, diagnostics, and management. Prognos is using AI to predict disease from big data. Ginger is using it to assess mental health. Sensely is using AI to direct insurance plan members to resources, and for remote monitoring of chronic illnesses like congestive heart failure and chronic obstructive pulmonary disease. Arterys is using computer vision, which is one building block of AI, to analyze medical images and drive diagnoses. These are but a few examples; the list goes on.

AI is being applied in the audiology profession, too. Application of computer vision is at a particularly early stage, but at least one product under development leverages it for automated, otoscopic diagnosis of common middle ear disorders. More mature in its audiological application is natural language processing (NLP), another building block of AI that is often used for automatic speech recognition (ASR) and has been deployed in both cloud and mobile apps. Microphones on connected hearing instruments provide a means by which a user can remotely access a virtual personal assistant (VPA) like Siri or Alexa, transcribe speech, and translate. It is important to note that, like non-audiological applications, hearing instrument applications of ASR have limitations related to distance, noise, reverberation, dialect, accent, jargon, speech rate, and more, owing to the challenges of training language and acoustic models.
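For a sense of what cloud-based ASR looks like in code, here is a minimal sketch that assumes the open-source Python SpeechRecognition package and a hypothetical recording, clinic_sample.wav. Real hearing instrument integrations stream live microphone audio and must contend with the limitations listed above.

```python
# A minimal cloud ASR sketch using the open-source SpeechRecognition package.
# "clinic_sample.wav" is a hypothetical file; real hearing instrument integrations
# stream live microphone audio and must handle latency, privacy, and errors.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("clinic_sample.wav") as source:
    audio = recognizer.record(source)                # read the whole clip

try:
    transcript = recognizer.recognize_google(audio)  # send audio to a cloud ASR service
    print(transcript)
except sr.UnknownValueError:
    print("Speech not recognized (e.g., noise, distance, accent, or reverberation)")
```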

AudiologyOnline: We've heard the term AI used in relation to hearing aids.

Aaron Jones, AuD, MS: Indeed. Looking more closely at another building block of AI known as machine learning, two notable applications have surfaced in the hearing aid industry. The first is hearing instrument fitting fine tuning based on user preferences and behaviors. The second is acoustic classification, which informs automatic changes of hearing instrument sound performance.

AudiologyOnline: How does AI improve user preference and behavior learning?

Aaron Jones, AuD, MS: In the course of a hearing instrument fitting, fine tuning is traditionally performed in-clinic, based on classical methods of validation: aided speech testing, questionnaires and inventories like the International Outcome Inventory for Hearing Aids (IOI-HA), and face-to-face discussion. Modern methods of hearing aid validation, leveraging teleaudiology and ecological momentary assessment, are gaining traction with some manufacturers (Timmer et al., 2018).

Another approach to fitting fine tuning is to use machine learning in hearing instruments or their mobile app for user preference and behavior learning. The idea is that a hearing instrument fitting may be allowed to evolve, without involving an audiologist, based on user preferences for volume and sound performance in different listening environments. User preference and behavior learning has been implemented by multiple hearing instrument manufacturers. This puts a modest amount of control in the hands of hearing aid wearers, which may be a double-edged sword. Ideally, the use of machine learning in this way improves user satisfaction. In reality, however, it could sometimes lead to under-amplification for users with a strong preference for listening comfort.
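To make the concept concrete, the following minimal sketch assumes a hypothetical device that logs a user's manual volume adjustments per classified listening environment and gradually learns a preferred gain offset with a simple moving average. It is illustrative only, not any manufacturer's fine-tuning algorithm, and it also shows how a persistent preference for comfort could drift a fitting below the prescription.

```python
# A minimal preference-learning sketch: log the user's manual volume adjustments
# per classified listening environment and learn a preferred gain offset over time.
# All names and values here are hypothetical and for illustration only.
from collections import defaultdict

LEARNING_RATE = 0.1  # how quickly the learned offset follows new adjustments

learned_offset_db = defaultdict(float)  # environment -> learned gain offset (dB)

def log_user_adjustment(environment: str, adjustment_db: float) -> None:
    """Update the learned offset with an exponential moving average."""
    current = learned_offset_db[environment]
    learned_offset_db[environment] = current + LEARNING_RATE * (adjustment_db - current)

def apply_learned_gain(environment: str, prescribed_gain_db: float) -> float:
    """Start from the audiologist's prescription and add the learned preference."""
    return prescribed_gain_db + learned_offset_db[environment]

# Example: the user repeatedly turns the volume down in noisy restaurants,
# so the applied gain gradually drifts below the 25 dB prescription.
for _ in range(10):
    log_user_adjustment("speech in noise", -4.0)
print(apply_learned_gain("speech in noise", 25.0))
```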

AudiologyOnline: How is AI used in acoustic classification in hearing aids?

Aaron Jones, AuD, MS: Modern hearing instruments automatically switch programs, based on changes in listening environments that are acoustically classified by the hearing instruments. This automaticity sometimes obviates the need for manual user adjustments, but how do these acoustic classifiers work?

“Automatic classifiers sample the current acoustic environment and generate probabilities for each of the listening destinations in the automatic program. The hearing instrument will switch to the listening program for which the highest probability is generated. It will switch again when the acoustic environment changes enough such that another listening environment generates a higher probability.” (Hayes, 2019)

Some manufacturers use machine learning to develop their acoustic classifiers in order to better distinguish between listening environments. Using a training set of many audio clips from different listening environments, acoustic classifiers learn to differentiate between environments that are so acoustically similar they can even fool listeners with normal hearing thresholds. Accurate acoustic classification is the basis for automatic sound performance that hearing aid users may prefer (Rakita & Jones, 2015; Cox et al., 2016).
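The switching logic Hayes describes can be sketched in a few lines. In the sketch below, the probabilities, feature names, and hysteresis margin are invented stand-ins for a trained classifier's output; it only illustrates the select-the-most-probable-environment idea, not any product's implementation.

```python
# A minimal sketch of probability-based program switching. The classifier here is
# a hard-coded stand-in; in practice it would be a model trained on many labeled
# audio clips, and the margin value is an assumed hysteresis threshold.

def classify(acoustic_features: dict) -> dict:
    """Stand-in for a trained acoustic classifier; returns class probabilities."""
    return {"quiet": 0.05, "speech in noise": 0.70, "music": 0.25}

def choose_program(acoustic_features: dict, current_program: str, margin: float = 0.1) -> str:
    """Switch only when another environment is clearly more probable."""
    probs = classify(acoustic_features)
    best = max(probs, key=probs.get)
    if best != current_program and probs[best] - probs.get(current_program, 0.0) > margin:
        return best
    return current_program

print(choose_program({"snr_db": 4, "level_db_spl": 72}, current_program="quiet"))
# -> 'speech in noise'
```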

AudiologyOnline: What might the future hold for AI in the audiology profession?

Aaron Jones, AuD, MS: Clear applications of computer vision, NLP, and machine learning are surfacing. Together, these and other AI building blocks support automation of some audiological tasks. Pure-tone and speech audiometry, and perhaps assessment of central auditory processing, are strong candidates for near-term automation. Furthermore, with applicability to primary care and otolaryngology, we may see routine use of computer vision to diagnose middle and outer ear disorders. In the more distant future, computer vision may be used for viseme recognition to supplement and improve speech recognition in noisy environments, although privacy concerns, digital memory, and battery life remain obstacles.

NLP is pervasive. Companies like Apple, Amazon.com, Google, Nuance Communications, and Baidu continue to mature language models and acoustic models, thereby commoditizing transcription, translation, VPAs, and chatbots. We may leverage these models to caption and subtitle in challenging listening environments, where people struggle most. In addition, we may see implementations of NLP within hearing instruments rather than on mobile phones, assuming that latency and battery life barriers can be overcome. NLP innovations seem likely to focus on speech-in-noise improvements and further integration with mobile phones.

AI will continue to inform acoustic classification. As acoustic models mature, we may expect to see hearing instruments automatically identify even more listening environments and adjust sound performance accordingly. Also, with machine learning, our understanding of user preferences and behaviors should improve over time. With this improvement, AI-mediated fitting fine tuning is likely to become more efficient and effective, thereby decreasing the need for hearing aid follow-up appointments.

AI is enabling automation of audiological tasks, but it is not something to fear. AI is unlikely to replace audiologists. Some tools may even help audiologists thrive amid the rise of the machines. Counseling is one crucial aspect of audiology that seems beyond the near-term reach of automation. While VPAs may leverage language and acoustic models to function in simple use cases, and emotion detection may mature to reliably recognize extremes, the empathetic top-of-license counseling provided by audiologists ensures job security. Complex decision-making, based on subtle cues among a highly variable spectrum of patients, will keep audiologists in clinical practice for years to come.

References

The AI 100: Artificial Intelligence Startups That You Better Know. (2019, February 06). Retrieved from https://www.cbinsights.com/research/artificial-intelligence-top-startups/.

Amer, M. R., Siddiquie, B., Richey, C., & Divakaran, A. (2014, 4-9 May). Emotion detection in speech using deep networks. Paper presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’14), Florence, Italy.

Cook, T. (2017). Keynote. Apple Worldwide Developers Conference. Available at: https://developer.apple.com/videos/play/wwdc2017/101/ (accessed 3 August 2019).

Cox, R. M., Johnson, J. A., & Xu, J. (2016). Impact of Hearing Aid Technology on Outcomes in Daily Life I: The Patients’ Perspective. Ear and Hearing, 37(4), e224–e237.

Google Trends. (2019, July 1). Retrieved from trends.google.com/trends/explore?date=2008-07-01 2019-07-01&geo=US&q=artificial intelligence.

Gottfredson, L. S. (1994, December 13). Mainstream Science on Intelligence. The Wall Street Journal, p. A18.

Hayes, D. (2019). What’s the big deal with hearing instrument classifiers? (Unitron publication 1904-093-02). Kitchener, ON: Unitron.

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), 12.

Moon, C., Lagerkrantz, H., & Kuhl, P. K. (2013). Language experienced in utero affects vowel perception after birth: a two-country study. Acta Paediatrica, 102(2), 156-160.

Ng, A. (2017, January 25). Personal interview during lecture at Stanford University Graduate School of Business.

Potamianos, G., Neti, C., Luettin, J., & Matthews, I. (2012). Audiovisual automatic speech recognition. In G. Bailly, P. Perrier, & E. Vatikiotis-Bateson (Eds.), Audiovisual Speech Processing (pp. 193-247). Cambridge: Cambridge University Press.

Proyas, A. (Director). (2004). I, Robot [Motion picture]. Twentieth Century Fox Film Corporation.

Rakita L., & Jones C. (2015). Performance and Preference of an Automatic Hearing Aid System in Real-World Listening Environments. Hearing Review, 22(12), 28.

Schreier, J. (Director), & Ford, C. (Writer). (2012). Robot & Frank [Motion picture]. Sony Pictures Home Entertainment.

Timmer, B., Hickson, L., & Launer, S. (2018). Do Hearing Aids Address Real-World Hearing Difficulties for Adults With Mild Hearing Impairment? Results From a Pilot Study Using Ecological Momentary Assessment. Trends in Hearing, 22, 1-15.




Aaron Jones, AuD, MS

Aaron has worked in a variety of clinical settings, including EAR Audiology, a private practice that he founded; ear, nose, and throat offices; and the Veterans Administration. He is passionate about patient-centered innovation and has spent his career developing solutions for people in and out of healthcare. Aaron is an executive at Unitron. Previously an engineer, he has experience leading an artificial intelligence venture at SRI, the birthplace of Siri, as well as work in genomics at Illumina, anthropometrics at NASA, and acoustics at Boeing. Aaron earned MS degrees in mechanical engineering and management, focusing on patient-centered innovation.


