From the Desk of Gus Mueller
It seems that each year, the audiometric definition of who might be a cochlear implant candidate becomes broader. Cochlear implant technology is also changing rapidly. In recent years we have seen improved form factors, more advanced signal classification and processing, and increased wireless connectivity options. Another area of recent advancement is hybrid cochlear implant systems, that is, systems with combined electric and acoustic stimulation (EAS).
Discussion of EAS systems is not totally new to 20Q. Back in June 2011, Dr. René Gifford joined us to describe this technology, and reviewed what was then “new” research (you can read that article here). But that was six years ago, and a lot has changed, so we thought it was time to invite René back to provide us with an update.
René Gifford, PhD, is a Professor in the Vanderbilt Department of Hearing and Speech Sciences with a joint appointment in the Department of Otolaryngology. She is currently the Director of the Cochlear Implant Program at the Vanderbilt Bill Wilkerson Center in the Division of Audiology as well as the Director of the Cochlear Implant Research Laboratory. Dr. Gifford is one of the leading researchers in the area of cochlear implants, with over 100 publications and numerous book chapters on this topic. In addition, she has authored the popular clinically-based book, Cochlear Implant Patient Assessment: Evaluation of Candidacy, Performance, and Outcomes.
Dr. Gifford currently is the principal investigator on two NIH grants centered on cochlear implants. In addition to her research interests with EAS systems, she also is studying preoperative prediction of postoperative outcomes, speech perception for adults and children with cochlear implants, and spatial hearing abilities of individuals with unilateral and bilateral cochlear implants.
René’s national acclaim has led to her receiving the ASHA Award for Clinical Achievement, and recently she was the featured scientist for the NPR Science Friday broadcast. It’s always nice to hear the latest advancements on a topic from an expert, so sit back and enjoy this month’s 20Q!
Gus Mueller, PhD
Browse the complete collection of 20Q with Gus Mueller CEU articles at www.audiologyonline.com/20Q
20Q: Combining Electric and Acoustic Hearing - Hearing Preservation and Bimodal Listening
After this course, readers will be able to:
- Define EAS/Hybrid implant system and explain the differences from a traditional cochlear implant.
- Explain the general indications for an EAS/Hybrid system, and list the potential benefits of such a system, based on the evidence.
- Explain the benefits of bimodal listening as well as hearing preservation for CI recipients.
- Describe the role that ITD sensitivity plays in EAS/Hybrid benefit.
- Discuss the general approach for programming EAS/Hybrid systems, including new evidence for optimizing outcomes, and explain how the acoustic component is typically fit.
1. It’s been a while since we last spoke with you here at 20Q. We were last talking about hybrid cochlear implant systems, if I recall correctly?
Yes, it’s been over 6 years since our last 20Q on the topic of hybrid cochlear implant (CI) systems, also referred to as combined electric and acoustic stimulation (EAS) systems. For a brief refresher, EAS/Hybrid devices are implanted using minimally traumatic surgical techniques and thinner electrode arrays designed to limit insertion trauma. In the best-case scenario, there is minimal surgical trauma to the delicate intracochlear structures, with the goal of preserving residual low frequency acoustic hearing to be combined in an EAS/Hybrid listening configuration. Hearing preservation surgery can be achieved with essentially any cochlear implant device and electrode type available in the US today.
A lot has changed in just the past few years. For example, we now have two FDA-approved EAS/Hybrid systems that use integrated EAS processors. This means that the CI processor is capable of providing both electric stimulation and acoustic amplification for the implanted ear. This has made our job so much simpler for the population of patients who already have hearing preservation and for the rapidly growing population of newly implanted CI recipients with hearing preservation!
2. Integrated EAS processors? What did you do before these integrated sound processors were available?
For patients with significant residual hearing in the implanted ear, we would often fit an in-the-ear (ITE) hearing aid (HA) to be worn in the implanted ear along with the behind-the-ear (BTE) CI sound processor. We had some patients who used an off-the-ear CI processor (such as the MED-EL Rondo, Advanced Bionics Neptune, or more recently, the Cochlear Kanso) that allowed the patient to continue use of the BTE HA in the implanted ear. But realistically, even the patients who were fitted with hearing aids and who demonstrated benefit from hearing preservation were reluctant to wear three or even four separate hearing devices—the latter relevant for bilateral CI recipients with bilateral acoustic hearing preservation. It just wasn’t a practical, everyday solution.
3. I recall that you mentioned that some EAS patients fail to demonstrate significant benefit from the preserved hearing in the implanted ear. Is that why some patients were reluctant to wear up to four devices?
That’s correct, but with a very big caveat. Clinical assessment of speech understanding is generally achieved using a single loudspeaker placed directly in front of the listener. So, if you compare the best-aided EAS condition (CI + bilateral HAs) to the bimodal control condition (CI + contralateral HA), the best we could expect is a gain of just a few percentage points afforded by summation of bilateral low-frequency acoustic hearing. Recall that the typical auditory profile for EAS and Hybrid CI recipients is a bilateral, precipitously sloping high-frequency hearing loss. That means the summation effects we can measure in the clinical environment would be reduced, given the restricted audible bandwidth in the acoustic hearing ears. Very few patients fail to demonstrate at least some benefit of aiding residual hearing in the implanted ear, but I am really only referring to those individuals who have aidable hearing in the low frequency region (low frequency thresholds in the 70 to 80 dB HL range).
4. What would be the better way to assess benefit for EAS patients in the audiology clinic?
Well, there really might not be an ideal metric for the typical audiology clinic. For CI recipients, we do not reference binaural hearing, for a few reasons. First, CI sound processors do not currently give recipients access to interaural time difference (ITD) cues, given the use of constant, high-rate pulse trains that are amplitude modulated by the envelope of each band-pass filter. Second, CI sound processors have automatic gain control (AGC) features, which are highly necessary given the narrow electric dynamic range. However, these AGC features attenuate interaural level differences (ILDs), rendering these cues useful but significantly reduced relative to individuals with normal hearing, or even to individuals with hearing loss using amplification with less input compression (e.g., Grantham, Ashmead, Ricketts, Haynes, & Labadie, 2008). EAS/Hybrid CI recipients with hearing preservation in the implanted ear(s), in contrast, have binaural hearing in the low-to-mid frequencies—or at least over the range of audibility for both ears. Thus, there are several potential benefits of having access to ITDs, but few that could be assessed in a typical clinical environment.
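As a toy illustration of why AGC shrinks ILDs (this is simple arithmetic under the assumption of identical, independent compressors on each ear with both inputs above the knee point; it is not from the article or any manufacturer's specification), a level difference of X dB at the input comes out as roughly X divided by the compression ratio at the output:

```python
def output_ild_db(input_ild_db, compression_ratio):
    """With independent AGC on each ear and both inputs above the knee,
    each channel's output rises only 1/CR dB per input dB, so an input
    level difference of X dB shrinks to about X/CR dB at the output."""
    return input_ild_db / compression_ratio

# A 10 dB ILD under 2:1 compression is halved to 5 dB
print(output_ild_db(10.0, 2.0))  # 5.0
```

This is why ILD cues remain useful but reduced for CI recipients: the cue is compressed, not eliminated.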
The first and most robust benefit is in localization, which is significantly improved by providing the CI recipient with binaural acoustic hearing (e.g., Dunn, Perreau, Gantz, & Tyler, 2010; Gifford et al., 2014a, 2014b; Dorman, Loiselle, Cook, Yost, & Gifford, 2016; Loiselle, Dorman, Yost, & Gifford, 2015). The second benefit is in speech understanding in complex listening environments, such as those that include diffuse noise and/or reverberation. Having access to ITDs can really benefit a listener in the latter environment. That is, the listener is typically looking at the talker, which should provide a 0-microsecond ITD for the speech stimulus; the distracting noise sources, on the other hand, reach the ears at various ITDs, allowing for binaural unmasking of speech—also commonly referred to as binaural squelch. This is, of course, assuming that the CI recipient has ITD sensitivity.
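The geometry behind that 0-microsecond frontal ITD can be sketched with the classic Woodworth spherical-head approximation, ITD ≈ (r/c)(θ + sin θ). This is a textbook simplification rather than anything from the article, and the head radius and speed of sound below are assumed nominal values:

```python
import math

def itd_microseconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta)) * 1e6

# A talker straight ahead produces essentially no ITD...
print(round(itd_microseconds(0)))   # 0
# ...while a source at 90 degrees produces a large one.
print(round(itd_microseconds(90)))  # 656
```

The roughly 650-700 microsecond maximum explains why measured ITD thresholds above that range are described below as not physiologically relevant.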
5. Has anyone tested whether these implant recipients actually have ITD sensitivity?
In fact, we measured ITD thresholds for 14 adult CI recipients with hearing preservation as well as 5 listeners with normal hearing (Gifford et al., 2013, 2014b). We found that ITD sensitivity was highly variable for the CI recipients, ranging from thresholds rivaling those of normal-hearing listeners up to thresholds that are not physiologically relevant (> 700-800 microseconds). Not surprisingly, we also found that ITD thresholds were significantly correlated with localization accuracy as well as the degree of EAS/Hybrid benefit obtained by adding acoustic hearing from the implanted ear for speech understanding in a semi-diffuse, restaurant noise background. So it does appear that ITD sensitivity plays a major role in this EAS/Hybrid benefit.
6. Was it the case that those who had the best hearing preservation also had the best ITD thresholds?
That’s close, but it was actually the degree of symmetry in low frequency thresholds that was correlated with ITD sensitivity, not simply the auditory thresholds in the implanted ear. The reason it is not simply defined by detection thresholds in the implanted ear is that the thresholds in the non-implanted ear are also relevant—it’s the interaural comparison that matters for ITDs. Or, as I often say, remember the "I" in ITD. In a later paper, we demonstrated significantly better localization for hearing preservation implant recipients with symmetrical low frequency audiometric thresholds (0 to 15 dB interaural asymmetry) as compared to those with highly asymmetric audiometric thresholds (45 to 60 dB interaural asymmetry, on average) (Loiselle et al., 2015).
7. What kind of speech recognition benefit are we seeing these days for EAS/Hybrid recipients?
Reports in the literature demonstrate average speech recognition benefit from preserved acoustic hearing in the implanted ear of 10 to over 20 percentage points, or approximately a 2 to 5 dB improvement in the signal-to-noise ratio (SNR) (e.g., Dunn et al., 2010; Dorman & Gifford, 2010; Dorman, Spahr, Gifford, Cook, & Zhang, 2012; Dorman et al., 2013; Gifford, Dorman, & Brown, 2010; Gifford et al., 2012, 2013, 2014a, 2014b, 2017; Rader, Fastl, & Baumann, 2013). This might not seem like much, but remember, this benefit of 10 to 20 percentage points (or 2 to 5 dB in the SNR) is above and beyond the performance obtained in the bimodal hearing configuration (CI + contralateral HA). Furthermore, we recently demonstrated that perceived listening difficulty is significantly reduced for individuals using binaural low-frequency acoustic hearing as compared to just monaural acoustic hearing.
8. Doesn’t bimodal hearing also offer significant benefit?
Yes, you are absolutely correct! Adding a HA in the non-implanted ear—for a bimodal hearing configuration—provides significant benefit for speech understanding in quiet and in background noise. In fact, adding a hearing aid in the non-implanted ear yields improvements of 10 to 20 percentage points for speech in quiet (e.g., Dunn, Tyler, & Witt, 2005; Gifford, Dorman, Spahr, & McKarns, 2007; van Hoesel, 2012; Illg, Bojanowicz, Lesinski-Schiedat, Lenarz, & Büchner, 2014) and 15 to over 40 percentage points for speech recognition in noise (e.g., van Hoesel, 2012; Sheffield & Gifford, 2014; Zhang, Dorman, & Spahr, 2010; Zhang, Dorman, Gifford, & Moore, 2014). Bimodal listening, however, offers little improvement for horizontal plane localization (e.g., Potts, Skinner, Litovsky, Strube, & Kuk, 2009; Dorman et al., 2016). The primary reason is that localization is achieved by comparing ITDs and/or ILDs. Most adult bimodal listeners have precipitously sloping, high frequency hearing loss. Thus, there is no possibility of access to ILDs, which are predominantly located in the higher frequency region. Further, timing information is available in the non-implanted ear, but periodicity and fine structure cues are not well preserved by the CI. Thus, bimodal listeners do not have access to ITDs either, as they are missing cues in the implanted ear for an interaural comparison (again, they’re missing the "I" in ITD).
9. Got it - though I couldn’t help but notice that you specifically mentioned adults. Do children show different trends for bimodal hearing?
You caught that, huh? Though it is cliché, children really are not little adults. When we’re talking about children with prelingual onset of hearing loss, auditory development is achieved through the use of hearing aids and/or cochlear implants. A couple of studies have demonstrated significantly better bimodal localization for children than for adults with bimodal hearing (e.g., Davidson, Firszt, Brenner, & Cadieux, 2015; Choi et al., 2017). There is still much to be learned about children combining electric and acoustic hearing; however, one thing is certain - children absolutely derive bimodal benefit. Are you also curious about pediatric implant recipients with hearing preservation?
10. Sure - what do we know about hearing preservation with children?
There are reports of pediatric cochlear implant recipients with hearing preservation (e.g., Skarzynski, Lorens, Piotrowska, & Anderson, 2007; Skarzynski et al., 2016; Bruce et al., 2014; Carlson et al., 2017). Indeed, children exhibit similar levels of hearing preservation following cochlear implantation as adult recipients—specifically, we see low frequency threshold elevation ranging from 10 to 20 dB, on average. We do not know, however, whether children with hearing preservation have access to low frequency ITD cues. Keep in mind that spatial hearing abilities continue to mature through adolescence, even for children with normal hearing. We have much to learn about this population. Also, it is important to mention that the current EAS/Hybrid cochlear implant systems are not indicated for use with children. These children, therefore, are currently implanted off-label, which is allowed per the FDA given the professional clinical judgment of the physician and implant team. However, use of EAS/Hybrid cochlear implant systems with children is much less prevalent than traditional CIs for pediatric implant recipients with bilateral profound sensorineural hearing loss. So, for the purposes of this discussion, we should focus on adult implant recipients.
11. Ok, then getting back to adults - can you give me a summary of how EAS/Hybrid devices are programmed?
How much time do you have? Seriously, this is a complicated answer and there is currently no consensus. In the past, the majority of patients were programmed using either complete EAS overlap or minimal EAS overlap in the implanted ear. That is, complete EAS overlap would allow for full CI bandwidth as well as full aidable bandwidth for acoustic amplification. Minimal EAS overlap would generally set the low frequency cutoff for the CI to the frequency at which audiometric thresholds reached 85 to 90 dB HL. For a summary of previous studies, see Table 1 in Gifford et al. (2017). While EAS/Hybrid patients exhibited significant benefit irrespective of the EAS overlap, we have more recent evidence to suggest that we might not have been optimizing outcomes for these patients.
12. What does the recent evidence suggest in terms of optimizing outcomes - can you elaborate?
Sure. The clinical software used in the trials of both the Hybrid-L24 system and the MED-EL EAS system applied EAS cutoffs that provided acoustic amplification for frequencies with thresholds up to 85 to 90 dB HL, and then set the low frequency CI cutoff to the limits of acoustic audibility—generally also the frequency at which audiometric thresholds reach 85 to 90 dB HL (note: there was some variability in the algorithm depending on both the severity and the slope of the hearing loss). In a recent paper with 11 EAS/Hybrid recipients (13 ears), we found significantly better outcomes for speech understanding and perceived listening difficulty by setting the low frequency CI cutoff to the frequency at which the audiogram reached 70 dB HL (Gifford et al., 2017).
13. That’s interesting. Do you have any explanation for why changing the low frequency CI cutoff had an impact on outcomes?
We have various theories. The primary theory is pretty simple. Recall that these patients met cochlear implant candidacy and pursued cochlear implantation. While they had considerable residual hearing in the low frequencies, it was not sufficient to offer high levels of speech understanding. Thus, we should likely not require that the EAS/Hybrid recipient rely too heavily on that low frequency acoustic hearing. In other words, providing more speech information through the CI with a broader CI bandwidth yields significantly higher speech understanding and less perceived listening difficulty. Central to this theory are a number of previous studies demonstrating diminishing benefit from acoustic amplification for spectral regions where audiometric thresholds reach approximately 70 dB HL (e.g., Hogan & Turner, 1998; Ching, Dillon, & Byrne, 1998; Turner & Cummings, 1999; Hornsby & Ricketts, 2003, 2006). Thus, we have theorized that setting the low frequency CI cutoff at the frequency where the audiogram reaches approximately 90 dB HL likely places too much weight on spectral regions where acoustic amplification may not be effective.
14. What do you think is going on with the underlying cochlear physiology for regions where thresholds reach or exceed 70 dB HL?
Research suggests that this is related to greater inner hair cell damage. We know that once hearing loss exceeds approximately 60 dB HL, both outer and inner hair cells are involved (Liberman & Dodds, 1984). Our inner hair cells are our primary sensory transducers (afferent innervation). That means that once inner hair cells are dysfunctional or destroyed, no amount or quality of acoustic amplification will be capable of driving auditory nerve fibers in that frequency region. This is the beauty of EAS/Hybrid systems! The recipient can still take advantage of the richer, more natural sound quality of the acoustic low frequency hearing and use electrical stimulation for the frequencies not well transmitted by conventional acoustic amplification.
15. Let’s backtrack for just a moment. You mentioned that previous clinical trials used an EAS/Hybrid crossover that may not have been optimal. What does that mean for current and future EAS/Hybrid recipients?
Great question! The good news is that these parameters are easily manipulated in the clinical software for the EAS/Hybrid systems. I tend to believe that much of the benefit reported in research publications that we associate with hearing preservation cochlear implantation, while significant, may have been of a smaller magnitude than the patients’ underlying auditory potential due to a lack of optimized EAS/Hybrid parameters—this includes some of my own publications! Specifically, many of the previous studies used full EAS overlap or little-to-no EAS overlap. In our most recent paper (Gifford et al., 2017), we observed improvements in speech understanding that reached up to 20 percentage points beyond that offered by the full CI bandwidth or no EAS overlap. This was achieved simply by providing more EAS overlap by lowering the CI starting frequency. Based on these data, Cochlear has changed the default EAS/Hybrid crossover frequency to correspond to the frequency at which the audiogram reaches approximately 70 dB HL. It is very possible that we’ve been selling ourselves short on the magnitude of the EAS/Hybrid benefit that one could receive! We need more research to investigate the expected benefit with optimized HA and CI parameters based on patient-specific variables—like personalized or precision hearing healthcare.
16. Should all EAS/Hybrid patients be reprogrammed by lowering the CI starting frequency to where the audiogram reaches 70 dB HL?
I’m reluctant to make such a broad recommendation at this point. Keep in mind, we only tested 11 patients (13 ears). While the results were significant, we still have much to learn. I tend to recommend that we provide our patients with more than one program with different EAS/Hybrid crossover frequencies and then make decisions based on speech understanding outcomes and patient report. It is also likely that the insertion depth of the electrode array plays a role. Today we have access to a variety of cochlear implant electrodes ranging from 16 mm up to 31 mm. There are reports of hearing preservation with nearly all electrodes available on the market today. It’s an exciting time to be a practicing audiologist!
17. I thought that hearing preservation was primarily limited to just the shorter electrodes. Are you saying that there are cochlear implant recipients with conventional electrodes who are using an EAS/Hybrid hearing configuration?
Yes, that is the case. If a patient has preserved acoustic hearing, we can fit them with an integrated EAS/Hybrid system irrespective of the implanted electrode array. Right now we can do this in the clinic for both Cochlear and MED-EL. Advanced Bionics is still in clinical trials with their EAS system.
18. How are audiologists fitting the acoustic component of the CI sound processor?
We use probe microphone measures for verification, of course! Right now we are fitting to NAL-NL2 (Keidser, Dillon, Carter, & O'Brien, 2012) targets for low frequencies in the implanted ear. For pediatric CI recipients with hearing preservation using an EAS/Hybrid hearing configuration, I would recommend using DSL v5 child targets (Scollie et al., 2005). It is quite possible that we might discover that an entirely different prescriptive fitting formula is more appropriate for individuals combining electric and acoustic hearing in the same ear. For now, we're going to leave this up to the hearing aid fitting experts and their decades of evidence-based recommendations.
19. Are there patients who have normal or near-normal hearing in the low frequencies who wouldn’t require any acoustic amplification?
Ah yes, I didn’t even mention this population. There are some patients who have normal to near-normal hearing in the low frequencies and combine that acoustic hearing with electric stimulation in the mid-to-high frequencies. This is another population of interest, as it is possible that, based on low frequency hearing and electrode insertion depth, we could arrive at different fitting recommendations.
20. It seems that the days of “hearing aid” audiologists and “cochlear implant” audiologists are slipping away. What do you think?
It is true that hearing aids and cochlear implants are increasingly merging and I believe we’ll be seeing even greater integration of these technologies in the next few years. But, we still have a much greater population of individuals who are hearing aid candidates as compared to cochlear implant or EAS/Hybrid cochlear implant candidates. However, for those in-between patients who struggle with traditional amplification due to the severity of their high frequency hearing loss, but are generally performing too well with their hearing aids to qualify for a conventional CI, I suggest they be referred for a cochlear implant evaluation. We just might be able to help them!
Bruce, I.A., Felton, M., Lockley, M., Melling, C., Lloyd, S.K., Freeman, S.R., & Green, K.M. (2014). Hearing preservation cochlear implantation in adolescents. Otology & Neurotology, 35, 1552–9.
Carlson, M.L., Patel, N.S., Tombers, N.M., DeJong, M.D., Breneman, A.I., Neff, B.A., & Driscoll, C.L.W. (2017). Hearing preservation in pediatric cochlear implantation. Otology & Neurotology, 38(6), e128-e133. doi: 10.1097/MAO.0000000000001444
Ching, T.Y.C., Dillon, H., & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. Journal of the Acoustical Society of America, 103, 1128-1140.
Choi, J.E., Moon, I.J., Kim, E.Y., Park, H.S., Kim, B.K., Chung, W.H.,...Hong, S.H. (2017). Sound localization and speech perception in noise of pediatric cochlear implant recipients: Bimodal fitting versus bilateral cochlear implants. Ear Hear., 38, 426-440.
Davidson, L.S., Firszt, J.B., Brenner, C., & Cadieux, J.H. (2015). Evaluation of hearing aid frequency response fittings in pediatric and young adult bimodal recipients. Journal of the American Academy of Audiology, 26, 393-407.
Dorman, M.F., & Gifford, R.H. (2010). Combining acoustic and electric stimulation in the service of speech recognition. International Journal of Audiology, 49, 912-919.
Dorman, M.F., Loiselle, L.H., Cook, S.J., Yost, W.A., & Gifford, R.H. (2016). Sound source localization by normal-hearing listeners, hearing-impaired listeners and cochlear implant listeners. Audiology & Neurotology, 21, 127-31.
Dorman, M.F., Spahr, A.J., Gifford, R.H., Cook, S., & Zhang, T. (2012). Current research with cochlear implants at Arizona State University. Journal of the American Academy of Audiology, 23, 385-395.
Dorman, M.F., Spahr, A.J., Loiselle, L., Zhang, T., Cook, S., Brown, C., & Yost, W. (2013). Localization and speech understanding by a patient with bilateral cochlear implants and bilateral hearing preservation. Ear Hear, 34, 9-17.
Dunn, C.C., Perreau, A., Gantz, B.J., & Tyler, R.S. (2010). Benefits of localization and speech perception with multiple noise sources in listeners with a short-electrode cochlear implant. Journal of the American Academy of Audiology, 21, 44-51.
Dunn, C.C., Tyler, R.S., & Witt, S.A. (2005). Benefit of wearing a hearing aid on the unimplanted ear in adult users of a cochlear implant. Journal of Speech, Language, and Hearing Research, 48, 668-80.
Gifford, R.H., Davis, T.J., Sunderhaus, L.W., Menapace, C., Buck, B., Crosson, J.,…Segel, P. (2017). Combined electric and acoustic stimulation with hearing preservation: Effect of cochlear implant low-frequency cutoff on speech understanding and perceived listening difficulty. Ear Hear., [Epub ahead of print]. doi: 10.1097/AUD.0000000000000418
Gifford, R.H., Dorman, M.F., & Brown, C.A. (2010). Psychophysical properties of low-frequency hearing: implications for perceiving speech and music via electric and acoustic stimulation. Advances in Otorhinolaryngology, 67, 51-60.
Gifford, R.H., Dorman, M.F., Sheffield, S.W., Teece, K., & Olund, A.P. (2014a). Availability of binaural cues for bilateral implant recipients and bimodal listeners with and without preserved hearing in the implanted ear. Audiology & Neurotology, 19, 57-71.
Gifford, R.H., Dorman, M.F., Skarzynski, H., Lorens, A., Polak, M., Driscoll, C.L.W.,...Buchman, C.A. (2013). Cochlear implantation with hearing preservation yields significant benefit for speech recognition in complex listening environments. Ear Hear., 34(4), 413-25.
Gifford, R.H., Dorman, M.F., Spahr, A.J., & McKarns, S.A. (2007). Combined electric and contralateral acoustic hearing: word and sentence intelligibility with bimodal hearing. J Speech Lang Hear Res, 50, 835-843.
Gifford, R.H., Grantham, D.W., Sheffield, S.W., Davis, T.J., Dwyer, R., & Dorman, M.F. (2014b). Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear. Hearing Research, 312, 28-37.
Grantham, D.W., Ashmead, D.H., Ricketts, T.A., Haynes, D.S., & Labadie, R.F. (2008). Interaural time and level difference thresholds for acoustically presented signals in post-lingually deafened adults fitted with bilateral cochlear implants using CIS+processing. Ear Hear., 29(1), 33–44.
Hogan, C.A., & Turner, C.W. (1998). High-frequency audibility: Benefits for hearing-impaired listeners. Journal of the Acoustical Society of America, 104(1), 432-41.
Hornsby, B.W.Y., & Ricketts, T.A. (2003). The effects of hearing loss on the contribution of high- and low-frequency speech information to speech understanding. Journal of the Acoustical Society of America, 113, 1706-1717.
Hornsby, B.W., & Ricketts, T.A. (2006). The effects of hearing loss on the contribution of high- and low-frequency speech information to speech understanding. II. Sloping hearing loss. Journal of the Acoustical Society of America, 119(3), 1752-1763.
Illg, A., Bojanowicz, M., Lesinski-Schiedat, A., Lenarz, T., & Büchner, A. (2014). Evaluation of the bimodal benefit in a large cohort of cochlear implant subjects using a contralateral hearing aid. Otology & Neurotology, 35, e240-4.
Keidser, G., Dillon, H., Carter, L., & O’Brien, A. (2012). NAL-NL2 empirical adjustments. Trends in Amplification, 16, 211-223.
Liberman, M.C., & Dodds, L.W. (1984). Single-neuron labeling and chronic cochlear pathology. III. Stereocilia damage and alterations of threshold tuning curves. Hearing Research, 16(1), 55-74.
Loiselle, L., Dorman, M., Yost, W., & Gifford, R.H. (2015). Sound source localization by hearing preservation patients with and without symmetric, low frequency acoustic hearing. Audiology & Neurotology, 20(3), 166-71.
Potts, L.G., Skinner, M.W., Litovsky, R.A., Strube, M.J., & Kuk, F. (2009). Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing). Journal of the American Academy of Audiology, 20, 353-73.
Rader, T., Fastl, H., & Baumann, U. (2013). Speech perception with combined electric-acoustic stimulation and bilateral cochlear implants in a multisource noise field. Ear Hear., 34(3), 324-332.
Scollie, S., Seewald, R., Cornelisse, L., Moodie, S., Bagatto, M., Laurnagaray, D., & Beaulac, S. (2005). The Desired Sensation Level multistage input/output algorithm. Trends in Amplification, 9, 159-197.
Sheffield, S.W., & Gifford, R.H. (2014). The benefits of bimodal hearing: Effect of frequency region and acoustic bandwidth. Audiology & Neurotology, 19, 151-163.
Skarzynski, H., Lorens, A., Piotrowska, A., & Anderson, I. (2007). Partial deafness cochlear implantation in children. International Journal of Pediatric Otorhinolaryngology, 71, 1407–13.
Skarzynski, H., Matusiak, M., Lorens, A., Furmanek, M., Pilka, A., & Skarzynski, P.H. (2016). Preservation of cochlear structures and hearing when using the Nucleus Slim Straight (CI422) electrode in children. The Journal of Laryngology and Otology, 130, 332-9.
Turner, C.W., & Cummings, K.J. (1999). Speech audibility for listeners with high-frequency hearing loss. American Journal of Audiology, 8(1), 47-56.
Zhang, T., Dorman, M.F., & Spahr, A.J. (2010). Information from the voice fundamental frequency (F0) accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation. Ear Hear., 31(1), 63-69.
Zhang, T., Dorman, M.F., Gifford, R.H., & Moore, B.C.J. (2014). Cochlear dead regions constrain the benefit of combining acoustic stimulation with electric stimulation. Ear Hear., 35(4), 410-7.
Gifford, R. (2017, July). 20Q: Combining electric and acoustic hearing - Hearing preservation and bimodal listening. AudiologyOnline, Article 20676. Retrieved from www.audiologyonline.com