
Negative Synergy: Hearing Loss and Aging

Donald J. Schum, PhD, Douglas Beck, AuD
June 23, 2008
This article is sponsored by Oticon.

Introduction

Basic audiometric measures do not offer a comprehensive description of a patient's speech understanding ability or difficulty. Speech understanding is complex. Individual cognitive abilities (intelligence, language, vocabulary, psychological profile, etc.), audiologic and medical histories, genetic make-up, first- and second-language issues, and other factors affect our ability to understand speech, particularly in challenging acoustic environments.

Fortunately, researchers have devised clever test protocols to characterize the auditory abilities of our patients beyond the audiogram and better reflect real-world hearing and listening ability. For example, the HINT (Hearing in Noise Test) uses pre-recorded sentences presented in quiet and noise across multiple locations to describe binaural and directional abilities, as related to signal-to-noise ratios across multiple adverse and advantageous listening environments (Nilsson, Soli, & Sullivan, 1994). The QuickSIN test (Etymotic Research, 2001) provides a signal-to-noise ratio analysis of the patient's listening ability, based on key-word recognition within sentences presented in noise. We applaud these challenging tests and encourage their use. Nonetheless, other philosophical and pragmatic aspects of hearing and listening are worthy of exploration.
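
As a concrete illustration of the arithmetic behind such tests, the short sketch below (ours, not part of the original article; Python is used purely for illustration) applies the published QuickSIN scoring rule: six sentences per list at SNRs from 25 down to 0 dB, five key words each, with SNR loss computed as 25.5 dB minus the total key words repeated correctly (Etymotic Research, 2001).

```python
# Illustrative only: scoring arithmetic for one QuickSIN list. Each list has
# six sentences (at 25, 20, 15, 10, 5, and 0 dB SNR) with five key words each.

def quicksin_snr_loss(correct_per_sentence):
    """Return SNR loss (dB) for one QuickSIN list of six sentences."""
    if len(correct_per_sentence) != 6:
        raise ValueError("A QuickSIN list has exactly six sentences.")
    if any(not 0 <= n <= 5 for n in correct_per_sentence):
        raise ValueError("Each sentence has five key words.")
    return 25.5 - sum(correct_per_sentence)

# A listener who repeats 5, 5, 4, 3, 2, and 1 key words as the SNR drops:
print(quicksin_snr_loss([5, 5, 4, 3, 2, 1]))  # -> 5.5 dB SNR loss
```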

From a philosophical "bottom-up" perspective, basic sound components (i.e., amplitude and frequency) are combined to form phonemes. Phonemes are grouped together to become words and other speech sounds. Speech sounds are recognized cognitively and processed "top-down" to give linguistic meaning to speech. Regardless of philosophical perspective, however, speech sounds only acquire meaning over time. Speech sounds in isolation are essentially stepping stones leading to the eventual construction of the entire speech event into meaningful perceived language. For the entire process to be successful, the listener must attentively accumulate sensory-based information (bottom-up) over time and must cognitively interpret (top-down) the intent of the talker. Speech occurs rapidly, at a pace set by the talker; the listener is therefore challenged to accumulate information at a rate that matches or approximates the talker's chosen rate. If the listener cannot accumulate and construct meaning quickly enough, the communication event breaks down and misunderstandings occur. Reconstructing speech sounds over time into meaningful words, thoughts, and ideas is a complex process; it becomes highly problematic when hearing impairment and age-related cognitive changes are combined.

This paper will address speech understanding from cognitive (top-down) and sensory-based (bottom-up) perspectives, highlighting the interactive and complementary relationship required for successful communication (see Duchan & Katz, 1983).

Timing and Speech Understanding

Conversational speech occurs at a rate of approximately two to five syllables per second (Pickett, 1980; Pindzola, Jenkins, & Lokken, 1989), often without a break for many minutes. Simple audiogram-based testing of word and sentence perception does not involve the same pressurized, time-sensitive demands as conversational speech. During normal conversation we hold a significant amount of information in temporary storage (i.e., memory) while decoding additional input. Often, the meaning of the beginning of a sentence is not revealed until the end of the sentence. Additionally, interpretation of the present sentence may well be influenced by material presented several moments earlier. Successful processing of speech requires a memory component; unfortunately, the peripheral auditory system has no such component.
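
Some back-of-the-envelope arithmetic (our illustration, not drawn from a specific study) makes the scale of this timing pressure concrete:

```python
# Rough arithmetic for the conversational rates cited above (roughly 2-5
# syllables per second): even a short stretch of running speech delivers a
# substantial amount of material that must be decoded as new input arrives.

def syllables_delivered(duration_s, low_rate=2.0, high_rate=5.0):
    """Range of syllables produced in duration_s seconds of running speech."""
    return duration_s * low_rate, duration_s * high_rate

low, high = syllables_delivered(10.0)
print(f"10 s of running speech: {low:.0f} to {high:.0f} syllables to decode")
```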

In conversational situations, auditory and language processing are active components and must work in tandem to successfully receive, decode and identify the intended message. However, when receiving and decoding input becomes challenging due to poor signal quality from the peripheral auditory system, we must divert cognitive resources to this same task (Schneider, Daneman & Pichora-Fuller, 2002). When we have to work harder (strain to hear) and apply more cognitive resources to understand basic psychoacoustic sounds, we become less capable of using our finite memory and cognitive powers to process speech. In other words, as we put more effort and energy into bottom-up processing, we have less effort and energy left for top-down processing.

Multiple Sources of Non-Vocalized Information

Some models of speech understanding suggest that successful processing results from the constant reduction of possibilities until only one option, such as a word, remains. Normal speech understanding is efficient because the listener is constantly predicting the next word. The entire process is faster and more targeted once a template sentence structure exists. Conversely, at the beginning of an utterance, the psychoacoustic signal encoded by the peripheral auditory system (bottom-up) is even more important, as there is scant other information to rely on. As longer sentences progress, the importance of centrally stored information (top-down) increases as the listener accumulates cues to predict the remaining structure and content. For example, "My favorite . . ." can be followed by a vast quantity of words and options, whereas the phrase "My favorite sport is . . ." has a far more restricted set of possibilities.
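
The toy sketch below (our construction; the candidate word lists are invented for demonstration, not drawn from any corpus or model in the literature) illustrates this progressive reduction of possibilities using the article's own example phrases:

```python
# Toy illustration of prediction as constraint: as the sentence frame grows,
# the set of plausible continuations shrinks, so less bottom-up detail is
# needed to identify the next word.

CONTINUATIONS = {
    "my favorite": ["sport", "color", "movie", "food", "song", "teacher"],
    "my favorite sport is": ["soccer", "tennis", "golf", "hockey"],
}

def candidates(prefix):
    """Plausible next words given the sentence so far."""
    return CONTINUATIONS.get(prefix.lower().strip(" ."), [])

for prefix in ("My favorite", "My favorite sport is"):
    options = candidates(prefix)
    print(f"{prefix!r}: {len(options)} candidates -> {options}")
```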

Syntactic and semantic cues are not the only sources of information that allow a listener to predict upcoming words. The listener can use more abstract cues as well, such as the topic of conversation, previous parts of the conversation, knowledge about the speaker, and situational or setting cues. The prediction of the conclusion of the phrase "My favorite sport is . . ." may be influenced by the talker's gender, age, nationality, etc.

Complex Environments

When a normal-hearing listener is in a complex listening environment, the total mass of sound enters the auditory system and is broken down into individual sound sources ("deconstruction"). Albert Bregman (1990) introduced the term "auditory scene analysis" to describe how speech understanding in complex environments takes place. When listening in a complex acoustic environment with multiple sound sources, our cognitive system automatically attempts to separate the sources of sound; only after the sources are organized can we selectively attend to the specific source of interest. Normally, deconstruction takes place automatically and unconsciously, allowing us to suppress the irrelevant cacophony while focusing on the primary signal of interest. A tremendous amount of brain work is required in noisy, reverberant, music-filled, complex listening environments to deconstruct and stream together sound in meaningful ways.
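
To give a flavor of the computation involved, the sketch below (ours, and vastly simpler than real auditory scene analysis, which also exploits harmonicity, common onsets, and spatial location; Bregman, 1990) groups spectral components into candidate streams by frequency proximity alone. The 120 Hz grouping threshold is an arbitrary illustrative choice.

```python
# Crude stand-in for one grouping cue in auditory scene analysis: cluster
# component frequencies into putative sources by frequency proximity alone.

def group_by_proximity(freqs_hz, max_gap_hz=120.0):
    """Cluster component frequencies into candidate streams."""
    streams = []
    for f in sorted(freqs_hz):
        if streams and f - streams[-1][-1] <= max_gap_hz:
            streams[-1].append(f)   # close to the last component: same stream
        else:
            streams.append([f])     # large jump: start a new stream
    return streams

# Low-pitched voice components mixed with a higher-pitched alarm:
mixture = [110, 220, 330, 440, 2000, 2100, 2200]
print(group_by_proximity(mixture))
# -> [[110, 220, 330, 440], [2000, 2100, 2200]]
```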

The Effect of Hearing Impairment

Of course, information enters the system first through the peripheral auditory system. In effect, bottom-up, sensory-based psychoacoustic percepts drive the entire process. As the quality of the bottom-up input improves, so too does the efficiency of top-down processing. However, when information from the peripheral auditory system is unclear, poorly defined, distorted, or missing pieces, the cognitive system must apply additional effort and control. The principal effect of sensorineural hearing loss is degradation of the auditory input, involving more than just a decrease in audibility (Moore, 2007). Plomp (1986) noted that there are attenuation and distortion components of sensorineural hearing loss. The distortion component is often presumed to be the most significant factor negatively impacting speech understanding in noise.
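
A minimal numeric sketch of the attenuation-plus-distortion idea follows. The function names and the simplifying assumption that only the distortion component matters in loud noise are ours, intended to make the two components concrete rather than to reproduce Plomp's (1986) full model.

```python
# Minimal sketch, not Plomp's full model: A (dB) is the attenuation component
# and D (dB) the distortion component of a sensorineural loss. Both raise
# thresholds in quiet, but in loud noise only D degrades the achievable SNR,
# which is why amplification alone cannot restore understanding in noise.

def srt_quiet(normal_srt_quiet_db, A, D):
    """Speech-reception threshold in quiet (dB): both components add."""
    return normal_srt_quiet_db + A + D

def required_snr_in_noise(normal_snr_db, D):
    """SNR needed in loud noise (dB): only the distortion component matters."""
    return normal_snr_db + D

print(srt_quiet(20.0, A=30.0, D=5.0))      # -> 55.0 dB SPL in quiet
print(required_snr_in_noise(-4.0, D=5.0))  # -> +1.0 dB SNR vs. -4 dB for normals
```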

When rapid processing is required and the system is stressed, the effects of hearing loss become even more apparent. For example, Schum and Collins (1990) demonstrated that listeners with sensorineural hearing loss had to listen longer to each individual phoneme before arriving at an accurate identification. Without time constraints, their identification was just as good as that of listeners with normal hearing; however, they had to monitor phonemes longer before identification could be made. If basic identification takes more cognitive resources, less cognitive capacity remains for other processing activities.

The Physical, Chemical and Biological Impact of Aging

Sometimes we unintentionally attribute hearing problems in the elderly solely to elevated pure-tone and speech thresholds. Of course, if it were just threshold elevation, hearing aids would be an ideal solution. However, there is more to consider.

Attention span, focus, memory and other cognitive (i.e. "executive") functions generally do decline with age, but not always. New research indicates tremendous variability and vast cognitive capacities in aging brains (Andreoli, 2007). Maintaining cognitive abilities as one ages, in the absence of disease, is becoming more likely. Neurogenesis (the birth or creation of new neurons) has been in the popular press as of late. Halpern (2008) offers a comprehensive guide regarding how exercise, diet, thought, puzzles and cerebral work can help promote neurogenesis in adults and help to maintain a healthy brain.

Nonetheless, there are physical, chemical and biological changes that occur with aging, and they cannot all be optimally managed. For example, the hippocampus (a primary structure within the temporal lobe involved with memory and spatial organization) starts to shrink by age 60, even in normal, healthy people (Halpern, 2008). For those with Alzheimer's Disease (AD), hippocampal degeneration is more aggressive.

In general, as we age, neural plasticity is a good thing. However, for those who have aged with significant hearing loss, neural plasticity may foster the perception of tinnitus, secondary to long-term deprivation of auditory input (Moller, 2006).

Andreoli (2007) noted that common problems in the aging person include forgetfulness, word finding difficulty, slowed reaction times and difficulty learning new tasks. Altered and reduced neurotransmitters contribute to slowed neural conduction times. As we age, vivid memories that we previously accessed instantly may take a few seconds or more to retrieve. Andreoli noted in a study of 2000 "non-demented" people aged 65 and older that the rate of cognitive decline was one of the strongest predictors of mortality. In essence, decreased cognitive ability led to less activation and less time spent in the activities of daily living.

Physiologic processing of temporal cues has also been shown to be reduced in older persons (Tremblay & Ross, 2007). Temporal cues are important with respect to audition, but they are also the foundation for correct interpretation of localization and spatiality-related acoustic cues. Tremblay and Ross (2007) reported the physiological capacity to detect and perceive changes in interaural phase differences declines with aging.

Cognitive disorders tend to increase with age, somewhat along a continuum: Age-Associated Memory Impairment, Age-Associated Cognitive Decline, and Mild Cognitive Impairment. AD is the most common form of dementia. Between ages 65 and 74 years, three percent of the population has AD. Between ages 75 and 84, 19 percent have AD, and above age 85, half the population is expected to have Alzheimer's.

Many studies have examined whether it is necessary to look beyond standard audiometric measures to explain the speech understanding abilities of older listeners (Humes, 2003). In essence, when tested under challenging conditions, the patient's cognitive status is often predictive of performance, and there is a strong correlation between cognitive status and age. For example, Schum and Matthews (1992) noted that older, hearing-impaired listeners seemed less able to make full use of contextual information when listening to meaningful sentences in noise.

Consequently, cognitive changes in aging can be viewed as a disruption of rapid access to stored knowledge, or the slowing down of neural activity. Older individuals likely have all the basic processing resources available. However, they cannot complete processing tasks as quickly and efficiently as when they were younger.

Psychoacoustic studies have revealed a variety of disruptions in tasks requiring accurate temporal perception in older listeners (Fitzgibbons, Gordon-Salant, & Friedman, 2006; Konkle, Beasley, & Bess, 1977; Pichora-Fuller, 2003; Schneider & Pichora-Fuller, 2001). Elderly listeners are more challenged by faster rates of speech (Wingfield & Tun, 2001), even though they often perform as well as younger listeners at more typical rates.

As complex auditory input arrives from the peripheral auditory system, the listener must use essential cognitive skills to detangle multiple, competing streams of speech. Cognitive skills involving access and retrieval from short- and long-term memory, linguistic information, sound structure information, sentence structure information and more are required to accomplish the goal. As hearing loss and cognitive decline increase, people are less able to detangle rapid speech, which increases misunderstanding, confusion, frustration and the quantity of communication breakdowns. When hearing loss and age-related cognitive decline are combined, a negative synergistic communication problem occurs.

The Role of Technology

Because many hearing-impaired patients also suffer from declining cognitive processing skills, there will be limits to the benefits provided by hearing aids. Cognitive decline limits maximal performance, particularly in complex and challenging communication environments.

However, when viewed from the perspective of the benefit provided by amplification, a different picture emerges. Davis (2003) evaluated speech understanding performance in noise for a group of older, new hearing aid users, aided and unaided. The patients were categorized based on performance on two different cognitive processing tasks. Patients with the poorest performance on the cognitive tests were those whose speech understanding increased the most when going from unaided to aided listening. The importance of cognitive skill becomes most apparent in challenging listening situations (Lunner & Sundewall-Thorén, 2007). Important relationships between cognitive status and the implementation of non-linear signal processing have been explored (Foo, Rudner, Rönnberg, & Lunner, 2007; Gatehouse, Naylor, & Elberling, 2003, 2006).

Therefore, because sensory-based psychoacoustic percepts drive the entire process, amplification plays a significant role. As we maximize the quality of acoustic information from the bottom up, we allow cognitive-based, top-down executive functions to occur more efficiently. Hearing aids can readily compensate for the sensitivity (i.e., threshold) loss associated with sensorineural hearing loss. Additionally, modern digital hearing aids can implement directional technology to improve the signal-to-noise ratio (SNR) in many acoustic environments, and noise reduction technology can reduce distractibility and improve acceptability by reducing the overall loudness and annoyance of competing noise in many situations. In some respects, then, modern digital technology can support stream segregation, scene analysis and selection. As noted above, after the brain separates sound sources, it can decide where to focus its processing power while dismissing less important sound sources.
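
As one illustration of how directionality improves SNR, here is a generic delay-and-subtract (first-order differential) microphone simulation. This is a textbook technique, not any particular manufacturer's algorithm, and the microphone spacing is exaggerated so the acoustic delay is a whole number of samples.

```python
# Generic first-order differential (directional) microphone demonstration.
# Delaying the rear microphone by the front-to-rear acoustic travel time and
# subtracting it from the front microphone places a null directly behind,
# attenuating a rear noise source while passing a frontal talker.

import numpy as np

rng = np.random.default_rng(0)
fs, n = 16000, 16000
tau = 4  # inter-mic acoustic delay in samples (exaggerated spacing for clarity)

def delayed(x, k):
    """Signal x delayed by k samples, zero-padded at the start."""
    return np.concatenate([np.zeros(k), x[:-k]])

talker = np.sin(2 * np.pi * 500 * np.arange(n) / fs)  # source in front
noise = rng.standard_normal(n)                        # source directly behind

# Frontal sound reaches the front mic first; rear sound reaches the rear mic first.
front_mic = talker + delayed(noise, tau)
rear_mic = delayed(talker, tau) + noise

# Delay-and-subtract with internal delay equal to the acoustic delay:
output = front_mic - delayed(rear_mic, tau)

# Track each source through the same processing to verify the rear null.
talker_out = talker - delayed(talker, 2 * tau)          # passed (filtered)
noise_out = delayed(noise, tau) - delayed(noise, tau)   # cancels exactly

print("rear noise power after processing:", np.mean(noise_out ** 2))  # 0.0
print("output equals the talker path alone:", np.allclose(output, talker_out))
```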

Advances in digital technology allow us to consider the hearing aid as providing more cohesive and useful information to the auditory system than was previously available (Schum & Bruun Hansen, 2007). Beyond directionality and noise reduction, extended bandwidth and high-frequency amplification increase the availability of speech information and also benefit localization. When extended bandwidths are combined with open-ear fittings, we provide more localization (horizontal and vertical) and spatial cues than ever before. Wireless communication between hearing aids allows balancing of gain and compression to better preserve natural interaural level differences. Preservation of interaural differences allows improved localization while increasing the brain's top-down ability to separate the speaker of interest from competing noise. Wireless communication also improves the ability of amplification systems to decrease false cues by coordinating environmentally adaptive systems, such as directional microphones and noise reduction algorithms, in both hearing aids to better represent the dynamic aspects of the acoustic environment.
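
The simplified arithmetic below (our illustration, with arbitrary compression settings, not a specific product's algorithm) shows why linking matters: independent compressors shrink the interaural level difference, while a shared, wirelessly agreed gain preserves it.

```python
# Simplified two-ear compression example. The threshold and ratio are
# arbitrary illustrative values; real hearing aids use multichannel,
# time-varying compression.

def compressor_gain_db(level_db, threshold_db=50.0, ratio=2.0):
    """Gain (dB) applied by a simple compressor above its threshold."""
    if level_db <= threshold_db:
        return 0.0
    return -(level_db - threshold_db) * (1.0 - 1.0 / ratio)

left_in, right_in = 70.0, 60.0   # a talker off to the left: 10 dB ILD
g_left = compressor_gain_db(left_in)
g_right = compressor_gain_db(right_in)

unlinked_ild = (left_in + g_left) - (right_in + g_right)    # ILD shrinks
shared_gain = min(g_left, g_right)                          # wirelessly agreed
linked_ild = (left_in + shared_gain) - (right_in + shared_gain)

print(f"input ILD: 10.0 dB, unlinked: {unlinked_ild} dB, linked: {linked_ild} dB")
# -> input ILD: 10.0 dB, unlinked: 5.0 dB, linked: 10.0 dB
```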

A well-fit, advanced-technology hearing aid is by no means a guarantee of good conversational performance for the older patient. However, if we are to provide the best opportunity to use remaining auditory and cognitive abilities, excellent signal processing is mandatory. Pichora-Fuller (2003) noted that advanced technology should be seen as a tool to reduce the stress on cognitive processing ability.

Final Thoughts

As witnessed in daily listening situations, top-down and bottom-up processes must integrate effectively to maximize speech perception. A failure in either sensory input or cognitive management of speech sounds creates difficult communication obstacles. When both deficits are present, as is often the case in older patients with hearing loss, significant and life-changing communication challenges must be managed.

Effective management of these problems can be defined as aural rehabilitation: the process of achieving long-term auditory success for hearing-impaired patients, with and without amplification, by helping them learn to use hearing and listening satisfactorily to fulfill their auditory needs. For example, speech understanding by hearing-impaired listeners can be improved by having a communication partner adopt the Clear Speech speaking mode (Picheny, Durlach, & Braida, 1985; Schum, 1996). Clear Speech can be seen as addressing both bottom-up (a cleaner signal) and top-down needs (slower pacing and a more clearly defined structure of information).

Our goal as professionals is to help patients maximize their remaining auditory and cognitive capacity. The best thing amplification can do for the patient is to deliver the "truest" sound (bottom-up) possible. This maximizes real-world auditory cues, supplying the brain with the acoustic information it needs to manage that information in a top-down manner, consistent with maximal use of the patient's own natural resources.

References

Andreoli, T. (2007). Cognitive Disorders Among the Elderly. Brain Therapy Center. Retrieved June 14, 2008, from www.brain-injury-therapy.com/articles/dementia.htm

Bregman, A. (1990). Auditory Scene Analysis. Cambridge: MIT Press.

Davis, A. (2003). Population study of the ability to benefit from amplification and the provision of a hearing aid in 55-74-year-old first-time hearing aid users. International Journal of Audiology, 42(S2), 2S39-52.

Duchan, J.F., & Katz, J. (1983). Language and Auditory Processing: Top Down Plus Bottom Up. In E.Z. Lasky & J. Katz (Eds.), Central Auditory Processing Disorders: Problems of Speech, Language and Learning (pp. 31-45). Baltimore: University Park Press.

Etymotic Research. (2001). QuickSIN Speech in Noise Test, Version 1.3. Elk Grove Village, IL.

Fitzgibbons, P., Gordon-Salant, S., & Friedman, S. (2006). Effects of age and sequence presentation rate on temporal order recognition. Journal of the Acoustical Society of America, 120(2), 991-999.

Foo, C., Rudner, M., Rönnberg, J., & Lunner, T. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology, 18, 618-631.

Gatehouse, S., Naylor, G., & Elberling, C. (2003). Benefits from hearing aids in relation to the interaction between the user and the environment. International Journal of Audiology, 42(S1), S77-S85.

Gatehouse, S., Naylor, G., & Elberling, C. (2006). Linear and non-linear hearing aid fittings - 2. Patterns of candidature. International Journal of Audiology, 45, 153-171.

Halpern, S. (2008, May 19). Memory: Forgetting is the new normal. Time Magazine.

Humes, L. (2003). Factors underlying speech-recognition performance of elderly hearing-aid wearers. Journal of the Acoustical Society of America, 112(7), 1112-1132.

Konkle, D. F., Beasley, D. S., & Bess, F. H. (1977). Intelligibility of time-altered speech in relation to chronological aging. Journal of Speech and Hearing Research, 20, 108-115.

Lunner, T. & Sundewall-Thorén, E. (2007). Interactions between cognition, compression, and listening conditions: Effects on speech-in-noise performance in a two-channel hearing aid. Journal of the American Academy of Audiology, 18, 604-617.

Moller, A.R. (2006). Neural Plasticity in Tinnitus. Progress in Brain Research, 157, 365-72.

Moore, B. (2007). Cochlear Hearing Loss: Physiological, Psychological and Technical Issues. Hoboken: John Wiley & Sons.

Nilsson, M., Soli, S.D., & Sullivan, J.A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America, 95(2), 1085-99.

Picheny, M., Durlach, N., & Braida, L. (1985). Speaking clearly for the hard of hearing: Intelligibility differences between clear and conversational speech. Journal of Speech & Hearing Research, 28, 96-103.

Pichora-Fuller, M.K. (2003). Cognitive aging and auditory information processing. International Journal of Audiology, 42(S2), 2S26-32.

Pickett, J. (1980). The Sounds of Speech Communication. Baltimore: University Park Press.

Pindzola, R.H., Jenkins, M.M., & Lokken, K.J. (1989). Speaking rates of young children. Language, Speech and Hearing Services in Schools, April, 133-138.

Plomp, R. (1986). A signal-to-noise ratio model for the speech-reception threshold of the hearing impaired. Journal of Speech and Hearing Research, 29, 146-154.

Schneider, B.A., Daneman, M., & Pichora-Fuller, M.K. (2002). Listening in aging adults: From discourse comprehension to psychoacoustics. Canadian Journal of Experimental Psychology, 56(3), 139-152.

Schneider, B.A., & Pichora-Fuller, M.K. (2001). Age-related changes in temporal processing: implications for speech perception. Seminars in Hearing, 22, 227-239.

Schum, D. (1996). The intelligibility of clear conversational speech of young and elderly talkers. Journal of the American Academy of Audiology, 7, 212-218.

Schum, D. & Bruun Hansen, L. (2007, August 20). New Technology and Spatial Resolution. Audiology Online, Article 1854. Retrieved June 14, 2008, from www.audiologyonline.com/articles

Schum, D. & Collins, M.J. (1990). The time course of acoustic/phonemic cue integration in the sensorineurally hearing-impaired listener. Journal of the Acoustical Society of America, 87, 2716-2728.

Schum, D. & Matthews, L. (1992). SPIN test performance of elderly, hearing-impaired listeners. Journal of the American Academy of Audiology, 3, 303-307.

Tremblay, K. L., & Ross, B. (2007, Nov. 27). Auditory rehabilitation and the aging brain. The ASHA Leader, 12(16), 12-13.

Wingfield, A. & Tun, P. (2001). Spoken language comprehension in older adults: Interactions between sensory and cognitive change in normal aging. Seminars in Hearing, 22, 287-301.



Donald J. Schum, PhD

Vice President of Audiology and Professional Relations, Oticon

Don Schum currently serves as Vice President for Audiology & Professional Relations for Oticon, Inc. Prior to his position at Oticon in Somerset, Don served as the Director of Audiology for the main Oticon office in Copenhagen, Denmark. In addition, he served as the Director of the Hearing Aid Lab at the University of Iowa School of Medicine (1990-1995) and as an Assistant Professor at the Medical University of South Carolina (1988-1990). During his professional career, Dr. Schum has been an active researcher in the areas of hearing aids, speech understanding, and outcome measures. (B.S. in Speech & Hearing Science, University of Illinois; M.A. in Audiology, University of Iowa; Ph.D. in Audiology, Louisiana State University.)



Douglas Beck, AuD



