
Spectral iQ: Audibly Improving Access to High-Frequency Sounds
Jason Galster, PhD, CCC-A, Susie Valentine, PhD, Andrew Dundas, Kelly Fitz, PhD
February 20, 2012
This article is sponsored by Starkey.

Frequencies above 3,000 Hz contain approximately 25 percent of the speech cues that contribute to recognition of spoken language (ANSI S3.5-1997). The highest frequency speech sound, the fricative /s/, is one of the most common consonant sounds in the English language. The peak energy of /s/ when spoken by a child or female talker falls between 6,300 and 8,300 Hz (Stelmachowicz, Lewis, Choi, & Hoover, 2007) and ranges in level between 57 and 68 dB SPL (Behrens & Blumstein, 1988). For some patients with sloping high-frequency hearing loss, restoration of audibility for these high-frequency speech cues may not be possible or desirable with conventional amplification.

Restoration of audibility for individuals with severe or profound high-frequency hearing loss is often constrained by limited hearing aid bandwidth, feedback oscillation, and poorly fitted gains. Even when audibility of high-frequency speech sounds can be restored, some patients with severe-to-profound hearing loss may not benefit from amplification and may reject the amplified sound quality. Sometimes these outcomes are attributed to non-functioning inner hair cells, or dead regions, within portions of the cochlea. In a dead region, mechanical vibration of the basilar membrane is not transduced appropriately to elicit electrical stimulation of the auditory nerve. For patients with cochlear dead regions, the effective result of listening to amplified speech within those cochlear dead regions has been described as "information overload" (Moore, 2001). This information overload is thought to be perceived as distortion by the hearing impaired patient.

The inability to restore audibility of high-frequency speech, and the possible contraindication of attempting to do so, are established conundrums of hearing care. To address these challenges, restoration of high-frequency speech audibility has been accomplished by shifting high-frequency information into lower frequency regions in which hearing loss is less severe and cochlear integrity is superior. In other words, moving high-frequency speech information to lower frequencies should improve audibility for patients with sloping high-frequency hearing loss.

With regard to frequency lowering technology, the outcomes of independent reviews have been mixed. Multiple papers have offered systematic reviews of technology designed to lower frequency. Braida and colleagues, in 1979, reviewed the earliest research on frequency lowering; the reviewed studies spanned the 1950s, 1960s and 1970s. The authors observed that frequency lowering techniques of the mid-century were unsuccessful, citing challenges related to the strategies used for frequency lowering, a lack of training and acclimatization, and finally stating that "substantial lowering tends to create sound patterns that differ sharply from those of normal speech; it is not unrealistic to assume that such lowering can only be successful with an intensive, long-term, and appropriately designed training program" (Braida et al., 1979, p.109).

More recently, Simpson (2009) provided an updated review of research outcomes with various techniques for frequency lowering. Portions of that review focused on modern implementations of frequency lowering. Her literature review showed that clinical outcomes related to frequency lowering have improved as compared to those observed earlier by Braida and colleagues. These modern implementations have shown significant benefits in speech recognition and detection that, while variable across individuals, support clinical application of these technologies (Kuk, Keenan, Korhonen, & Lau, 2009; Glista et al., 2009).

Frequency Lowering Techniques Reviewed and Contrasted

In this paper, two existing techniques for frequency lowering will be reviewed and contrasted with a third, new technology for improving audibility of high-frequency sounds. At the time of this publication, two frequency lowering technologies are available as signal processing features from leading hearing instrument manufacturers. These are: linear frequency transposition (LFT), available from Widex as a hearing aid feature labeled "Audibility Extender"; and non-linear frequency compression (NLFC), available from Phonak as a feature labeled "SoundRecover."

Linear frequency transposition shifts high-frequency sounds to lower frequencies. The shifted high-frequency information is overlapped with existing lower frequency information. Specific to the implementation of Widex's "Audibility Extender," frequencies up to two octaves above a defined start frequency can be lowered as far as one octave below that start frequency. In this example, the term linear refers to the fact that the frequency distribution within the lowered information is unchanged. Figure 1, adapted from Simpson (2009), is an illustration of this process. In this illustration, the numbered boxes represent hearing aid channels and the increasing channel numbers represent increasing frequency; Panel A shows conventional hearing aid processing and Panel B shows the transposed high-frequency information and its relationship to the lower frequency information.



Figure 1. In this illustration, the numbered boxes represent hearing aid channels and the increasing channel numbers represent increasing frequency; Panel A shows conventional hearing aid processing and Panel B shows the transposed high-frequency information and its relationship to the lower frequency information.

This process of transposition maintains relationships among high-frequency speech components that can be useful for speech understanding and quality. The overlap of high- and low-frequency information may result in the masking of low-frequency speech information by the transposed higher frequency information. In an attempt to minimize undesired masking of low-frequency sounds, LFT will only transpose frequency information when a strong high-frequency input is detected. Although the transposition behavior is transient and based on input to the hearing aid, the bandwidth of the device is permanently reduced even in the absence of active transposition. Subsequent figures will include spectrograms of a short speech segment. In each spectrogram the horizontal axis shows information over time, the vertical axis shows frequency, and louder speech components are shown by brighter colors. Figure 2 shows two spectrograms: the first, Panel A, was recorded without LFT and the second, Panel B, was recorded with LFT. Each recording is of the same speech stimulus containing a word-medial and word-final 'SH' or /∫/.



Figure 2. Two spectrograms: the first, Panel A, was recorded without LFT and the second, Panel B, was recorded with LFT. Each recording is of the same speech stimulus containing a word-medial and word-final 'SH' or /∫/. In this example, the white boxes illustrate differences between the two figures, showing the behavior of this system in transposing high-frequency information to lower frequency regions.

In this example, the white boxes illustrate differences between the two figures, showing the behavior of this system in transposing high-frequency information to lower frequency regions. The band-limiting effect of LFT is also visible as the high-frequency energy rolls off quickly above 4,000 Hz.
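To make the mechanics concrete, the sketch below implements a toy version of this kind of input-gated, constant-Hz transposition in Python. It is a simplified illustration only, not Widex's Audibility Extender: the start frequency, gating threshold, mixing gain, and single-frame FFT processing are all assumptions chosen for readability rather than fidelity to any commercial device.

```python
import numpy as np

def lft_frame(frame, fs, f_start=2000.0, gate_thresh=1e-4):
    """Toy single-frame linear frequency transposition (illustrative sketch).

    Spectral content in a source band above f_start is shifted down by a
    constant number of Hz and mixed with the unshifted signal, but only
    when the source band carries enough energy (a crude stand-in for the
    input-driven activation described in the text). A real implementation
    would run on overlapping, windowed frames with overlap-add synthesis.
    """
    n = len(frame)
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_hz = fs / n

    src = (freqs >= f_start) & (freqs <= 4.0 * f_start)     # up to two octaves above f_start
    if np.sum(np.abs(spec[src]) ** 2) < gate_thresh:        # weak highs: pass frame unchanged
        return frame

    shift_bins = int(round((f_start / 2.0) / bin_hz))       # land the band edge one octave below f_start
    lowered = np.zeros_like(spec)
    src_idx = np.where(src)[0]
    dst_idx = src_idx - shift_bins
    keep = dst_idx >= 0
    lowered[dst_idx[keep]] = spec[src_idx[keep]]            # constant-Hz shift keeps spacing within the band intact

    return np.fft.irfft(spec + 0.5 * lowered, n=n)          # transposed copy overlaps the original low frequencies
```

Because the lowered copy is simply summed with the untouched spectrum, the sketch also makes the masking concern visible: whatever already occupied the destination band now competes with the transposed energy.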

Non-linear frequency compression, a second available method of frequency lowering, approaches the lowering of high-frequency information in a manner that is different from frequency transposition. In this case, high-frequency information is moved to lower frequencies by compressing the energy in high-frequency hearing aid channels into a lower frequency range. The highest frequencies are shifted and compressed to the greatest extent, while lower frequency information is shifted to a progressively lesser extent. In Phonak's "SoundRecover," a cutoff frequency is established. Below this frequency the amplified signal is unaltered. Above this cutoff frequency all signals are compressed in the frequency domain. Figure 3, adapted from Simpson (2009), is an illustration of the NLFC process. In this illustration, the numbered boxes represent hearing aid channels; the increasing channel number represents increasing frequency. Panel A shows the conventional hearing aid processing and Panel B shows the compressed information and its relationship to the lower frequency information.



Figure 3. In this illustration, the numbered boxes represent hearing aid channels; the increasing channel number represents increasing frequency. Panel A shows the conventional hearing aid processing and Panel B shows the compressed information and its relationship to the lower frequency information.

Unlike frequency transposition, NLFC will not affect frequencies below the defined cutoff frequency. All high-frequency information falling above this cutoff frequency will be compressed into a reduced high-frequency range. Assuming that the compressive behavior does not fall within the range of important formant frequencies, vowel information and quality will be retained. When optimizing the prescription of frequency lowering technology, decreasing the cutoff frequency into a region that contains formant information may compromise harmonic relationships and, by extrapolation, speech quality.
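The input-output frequency mapping behind this kind of compression can be written compactly. The sketch below uses one commonly cited formulation, compression on a logarithmic frequency scale above the cutoff; the cutoff and compression ratio shown are arbitrary illustrative values, not Phonak's SoundRecover settings.

```python
import numpy as np

def nlfc_map(f_in, cutoff=2000.0, ratio=2.0):
    """Map input frequency (Hz) to output frequency under non-linear
    frequency compression. Frequencies at or below the cutoff are left
    unchanged; frequencies above it are compressed on a log-frequency
    scale by the compression ratio. Parameter values are illustrative.
    """
    f_in = np.asarray(f_in, dtype=float)
    compressed = cutoff * (f_in / cutoff) ** (1.0 / ratio)
    return np.where(f_in <= cutoff, f_in, compressed)

# Example: the highest frequencies are lowered the most.
for f in (1000, 2000, 4000, 8000):
    print(f, "Hz ->", round(float(nlfc_map(f)), 1), "Hz")
# 1000 -> 1000.0, 2000 -> 2000.0, 4000 -> 2828.4, 8000 -> 4000.0
```

With these example values an 8,000 Hz input emerges at 4,000 Hz, which is one way to see why the effective output bandwidth of the hearing aid is reduced.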

Similar to LFT, NLFC limits high-frequency hearing aid output above the highest compressed frequency. Figure 4 shows two spectrograms: the first, Panel A, was recorded without NLFC and the second, Panel B, was recorded with NLFC. Each recording is of the same speech stimulus containing a word-medial and word-final 'SH' or /∫/. In this example, the white boxes illustrate differences between the two figures, showing the behavior of this system in compressing high-frequency information into a lower frequency region. The band-limiting effect of NLFC is also visible, as there is no output from the hearing aid above 5,000 Hz.



Figure 4. Two spectrograms: the first, Panel A, was recorded without NLFC and the second, Panel B, was recorded with NLFC. Each recording is of the same speech stimulus containing a word-medial and word-final 'SH' or /∫/. In this example, the white boxes illustrate differences between the two figures, showing the behavior of this system in compressing high-frequency information into a lower frequency region.

Understanding the advantages and limitations of established methods of frequency lowering, research staff at Starkey Laboratories, Inc. developed a new technology designed for the treatment of patients with severe-to-profound high-frequency hearing loss. This innovation, called Spectral iQ, is designed to restore audibility for high-frequency speech features while avoiding the distortion and frequency-limiting behavior of traditional frequency lowering techniques.

Spectral iQ uses a technique called Spectral Feature Identification to monitor acoustic input to the hearing aid. Spectral Feature Identification identifies and classifies acoustic features of high-frequency sounds. Once appropriate high-frequency features are detected, Spectral iQ uses a sophisticated processing technique to replicate (or translate) those high-frequency features at a lower, audible frequency. This unique process goes beyond the simple frequency lowering of acoustic input; new features are created in real time, resulting in the presentation of audible cues while minimizing the distortion that occurs with other technologies.

To use an example, speech features such as /s/ or /∫/ have distinct spectral characteristics that allow for accurate identification. A broadband noise will have energy across a wide band of frequencies, while a high-frequency speech or music feature will have peaks of energy in the high frequencies and often lesser energy at lower frequencies. These relationships allow for accurate and instantaneous identification and translation of important high-frequency information. Figure 5 illustrates the behavior of Spectral iQ: Panel A shows the unaffected hearing aid response; Panel B shows the identification of a high-frequency speech component such as /s/ along with the newly generated speech cue. Panel C illustrates that when the high-frequency speech features are no longer present, Spectral iQ remains inactive until Spectral Feature Identification again detects the presence of an appropriate speech cue, prompting the creation of a new lower frequency feature.
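Starkey has not published the internal details of Spectral Feature Identification, but the kind of decision the paragraph describes can be sketched as a band-energy comparison: flag a frame as fricative-like only when high-frequency energy clearly dominates low-frequency energy, and then synthesize a complementary cue in a lower, presumably audible band. Every detail below (band edges, threshold, cue level and placement) is a hypothetical illustration, not the product algorithm.

```python
import numpy as np

def band_energy(spec, freqs, lo, hi):
    """Sum of squared magnitude in [lo, hi) Hz."""
    band = (freqs >= lo) & (freqs < hi)
    return np.sum(np.abs(spec[band]) ** 2)

def spectral_cue_frame(frame, fs, ratio_thresh=4.0, cue_band=(2500.0, 3500.0)):
    """Toy spectral-feature detector and cue generator (illustrative only).

    If energy above 4 kHz clearly dominates energy below 2 kHz -- a crude
    signature of /s/- or /sh/-like frication rather than broadband noise --
    add a modest band of noise in a lower, presumably audible region.
    Otherwise the frame passes through untouched.
    """
    n = len(frame)
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    high = band_energy(spec, freqs, 4000.0, fs / 2.0)
    low = band_energy(spec, freqs, 100.0, 2000.0)
    if high < ratio_thresh * max(low, 1e-12):           # no fricative-like feature detected
        return frame

    cue_bins = (freqs >= cue_band[0]) & (freqs < cue_band[1])
    cue = np.zeros_like(spec)
    level = np.sqrt(high / max(np.sum(cue_bins), 1))    # scale cue to the detected high-band energy
    cue[cue_bins] = level * np.exp(1j * 2 * np.pi * np.random.rand(np.sum(cue_bins)))
    return np.fft.irfft(spec + 0.5 * cue, n=n)          # cue is added; the original spectrum is untouched
```

Note that, unlike the transposition and compression sketches above, nothing here remaps or removes the original high-frequency content; the only change is the addition of a new lower-frequency cue while the feature is present.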



Figure 5. The behavior of Spectral iQ: Panel A shows the unaffected hearing aid response. Panel B shows the identification of high-frequency speech and the newly generated speech cue. Panel C illustrates that in the absence of high-frequency speech, Spectral iQ remains inactive.

Figure 6 shows two spectrograms: the first, Panel A, was recorded without Spectral iQ and the second, Panel B, was recorded with Spectral iQ. Each recording is of the same speech stimulus containing a word-medial and word-final /∫/. In this example, the white boxes illustrate differences between the two figures, showing the behavior of Spectral iQ to identify a high-frequency speech cue and regenerate that cue at a lower frequency. A visual comparison of Panels A and B shows only a nominal reduction in available bandwidth.



Figure 6. Two spectrograms: the first, Panel A, was recorded without Spectral iQ and the second, Panel B, was recorded with Spectral iQ. Each recording is of the same speech stimulus containing a word-medial and word-final /∫/. In this example, the white boxes illustrate differences between the two figures, showing the behavior of Spectral iQ to identify a high-frequency speech cue and regenerate that cue at a lower frequency.

A comparative look at three technologies designed to improve audibility of high-frequency speech cues shows that, in order to provide benefit, all of these technologies must introduce a sound that was not previously heard by the wearer. Thus some patients may need to acclimatize to the change in sound quality. LFT will temporarily shift high-frequency information, both speech and noise, to lower frequency regions, thereby overlapping sounds within the audible range. NLFC brings high-frequency speech and noise into an audible range, distorting some high-frequency cues while preserving low-frequency information. In contrast, the dynamic nature of Spectral iQ retains the natural distribution of frequencies and comparatively broadband sound quality, while also avoiding the introduction of high-frequency noise that was not present in those lower frequency regions. This is accomplished by providing a complementary, audible cue when high-frequency speech sounds such as /s/ and /∫/ are present.

Evidence for Clinical Application

Twenty adults, each with a symmetric sensorineural hearing loss sloping steeply from mild to severe or profound, participated in a clinical study. Two of the twenty participants were not able to meet the demands of the experimental task and are not included in this discussion. Figure 7 shows the mean audiogram for all 18 participants as well as the maximum and minimum thresholds at each tested frequency.



Figure 7. Mean thresholds for right and left ears across all participants are shown with the solid black line. The dashed black lines show maximum and minimum thresholds.

For a minimum of two weeks prior to testing, all participants wore hearing aids that included the Spectral iQ algorithm. All participants were fit with bilateral hearing aids programmed to eSTAT targets generated by the Inspire programming software. Participants were asked to complete the S-test (Robinson, Baer & Moore, 2007), in which they detect the presence or absence of a word-final /s/ (e.g., dog vs. dogs). The words were presented in a sound field at 65 dB SPL, with a low level of speech-shaped background noise presented at 45 dB SPL. The inclusion of a low-level background noise masks word-final offset cues that may be confused with the plural cue (Glista et al., 2009). Detection of the word-final consonant /s/ is an important English language cue that identifies possession or plurality. Because this is a detection task, similar to finding a threshold for high-frequency speech, rather than a speech recognition task, scoring is done through a statistical measure of d' (d-prime). The results presented here have been converted to a clinically recognizable measure of percent correct using a method described by Hartmann (1997, p. 543).
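For readers less familiar with this conversion, the sketch below shows a generic equal-variance signal detection calculation: d' is computed from hit and false-alarm rates, and percent correct for an unbiased observer is taken as Φ(d'/2). This is an assumption-laden stand-in for the Hartmann (1997) procedure the authors cite, not their analysis code, and the example rates are invented.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Equal-variance signal detection d' from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def percent_correct(dprime):
    """Percent correct for an unbiased observer: 100 * Phi(d'/2)."""
    return 100.0 * NormalDist().cdf(dprime / 2.0)

# Hypothetical example: 85% hits and 20% false alarms on an S-test run.
dp = d_prime(0.85, 0.20)            # about 1.88
print(round(percent_correct(dp)))   # about 83 percent correct
```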

Figure 8 shows the results of the S-test: blue bars show percent correct identification with conventional processing, red bars show percent correct identification with Spectral iQ. A one-way repeated measures analysis of variance indicated significant benefit from the application of Spectral iQ (F(1,35) = 18.7, p < .001). The mean data show a group benefit similar to improvements observed with existing frequency lowering technology. In the current study, 16 of 18 participants benefited from Spectral iQ, showing individual improvements in high-frequency speech detection of as many as 29 percentile points.



Figure 8. Results of the S-test: blue bars show percent correct identification with conventional processing, red bars show percent correct identification with Spectral iQ.

Summary

Frequency lowering has been a part of hearing aid technology for more than 50 years. Recent advances in signal processing have improved these technologies and their clinical outcomes. Starkey has introduced Spectral iQ, an innovative approach to improving audibility for high-frequency speech sounds, designed to overcome some drawbacks associated with established frequency lowering techniques.

All techniques for frequency lowering introduce distortion to the amplified signal through the overlap of high- and low-frequency sound or the disruption of harmonic relationships. Yet these technologies can improve audibility of high-frequency sounds. Spectral iQ uses a unique process of Spectral Feature Identification to analyze, classify and react to speech and other high-frequency features in real time. When a high-frequency feature is identified, Spectral iQ regenerates that feature at a lower, audible frequency, cueing the listener to the presence of high-frequency speech components such as /s/ or /∫/. Unlike competing techniques that limit high-frequency bandwidth, Spectral iQ allows Starkey hearing aids to maintain a comparatively broadband, undistorted frequency distribution, while simultaneously restoring high-frequency speech audibility for patients who may have previously been considered unaidable.

References

ANSI (1997). ANSI S3.5-1997. American National Standard Methods for the calculation of the speech intelligibility index. New York.

Behrens, S. & Blumstein, S. E. (1988). On the role of the amplitude of the fricative noise in the perception of place of articulation in voiceless fricatives. Journal of the Acoustical Society of America, 84(3), 861-867.

Braida, L.D., Durlach, N.L., Lippmann, R.P., Hicks, B.L., Rabinowitz, W.M. & Reed, C.M. (1979). Hearing aids—a review of past research on linear amplification, amplitude compression, and frequency lowering. ASHA Monographs, 19 (Chapter IV, 87-113).

Glista, D., Scollie, S., Bagatto, M., Seewald, R., Parsa, V. & Johnson, A. (2009). Evaluation of nonlinear frequency compression: Clinical outcomes. International Journal of Audiology, 48(9), 632-644.

Hartmann, W.M. (1997). Signals, sound, and sensation. Woodbury, NY: American Institute of Physics.

Kuk, F., Keenan, D., Korhonen, P. & Lau, C. (2009). Efficacy of linear frequency transposition on consonant identification in quiet and noise. Journal of the American Academy of Audiology, 20, 465-479.

Moore, B.C.J. (2001). Dead regions in the cochlea: Diagnosis, perceptual consequences, and implications for the fitting of hearing aids. Trends in Amplification, 5(1), 1-34.

Robinson, J.D., Baer, T., & Moore, B.C. (2007). Using transposition to improve consonant discrimination and detection for listeners with severe high-frequency hearing loss. International Journal of Audiology, 46, 293-308.

Simpson, A. (2009). Frequency lowering devices for managing high-frequency hearing loss: A review. Trends in Amplification, 13(2), 87-106.

Stelmachowicz, P.G., Lewis, D.E., Choi, S. & Hoover, B. (2007). Effect of stimulus bandwidth on auditory skills in normal-hearing and hearing-impaired children. Ear & Hearing, 28(4), 483-494.

Acknowledgements

We would like to acknowledge the contributions of Nazanin Nooraei, Au.D., John Ellison, M.S., Zheng Yan, Ph.D., and Brent Edwards, Ph.D., for their work in developing and evaluating Spectral iQ.


Jason Galster, PhD, CCC-A

Director of Audiology Communications with Starkey Laboratories

Jason Galster, Ph.D., is Director of Audiology Communications with Starkey Laboratories. He is responsible for ensuring that all product claims are accurate and backed by supporting evidence. Dr. Galster has held a clinical position as a pediatric audiologist and worked as a research audiologist on topics that include digital signal processing, physical room acoustics, and amplification in hearing-impaired pediatric populations.


Susie Valentine, PhD

Research Audiologist

Susie Valentine, Ph.D., is a Research Audiologist with Starkey Laboratories, Inc. She holds a certificate of clinical competence in audiology and has worked as a clinical audiologist at the Indiana University Hearing Clinic, where she received her Ph.D. Valentine holds a bachelor’s from Lenoir-Rhyne University and a Master’s in audiology from the University of Tennessee.


Andrew Dundas


Kelly Fitz, PhD

Digital Signal Processing Engineer

Kelly Fitz, Ph.D., is a Digital Signal Processing Engineer specializing in the design and implementation of audio analysis, processing and synthesis algorithms. As Senior DSP Research Engineer at Starkey, he conducts research combining hearing science, psychoacoustics, and signal processing to explore the perceptual consequences of hearing loss and hearing aids. Fitz has a Ph.D. in electrical engineering from the University of Illinois at Urbana-Champaign.


