Interview with Nina Kraus, PhD, Auditory Neuroscience Laboratory, Northwestern University
Carolyn Smaka: Nina, can you please give me an overview of some of the research underway in the Auditory Neuroscience Laboratory at Northwestern?
Nina Kraus: In our laboratory, we study the biological basis of auditory learning. I am a biologist by training. I started out recording from single neurons in the auditory cortex of rabbits while they were learning tone-signaled tasks. I was fascinated to observe that the response of the same neuron to the same sound would change as the sound acquired meaning for the animal. That basic idea is one we have studied now in humans.
Nina and staff of the Auditory Neuroscience Laboratory at Northwestern
There are a number of clinical populations that could profit from more efficient auditory skills, for example, children with auditory processing issues. The aging process and hearing loss are two additional general factors that compromise our ability to communicate auditorily. In my lab, we’re very interested in understanding the ingredients of effective auditory learning. Our work is rooted in issues that are medical, social, or educational – that is, translational issues. We are motivated to apply our scientific discoveries to the people and the populations for whom they would be most helpful.
In terms of auditory-based communication skills, the two we are especially interested in are hearing in noise and reading. These are the real-world communication skills that determine how effective people are in their auditory communication. Some of the questions we have been looking at are: What are the effects of musical experience? What is the impact of computer-based auditory training for children with auditory processing, language, and learning problems? In older adults we ask: What training and experience can help improve communication skills like hearing in noise and auditory memory? This brings me to an important concept: hearing is an integrated process that involves both how the nervous system represents sound and how we use it, i.e., cognitive function. There is an interplay between how we remember sound, how we pay attention to it, and how the nervous system encodes the fundamental ingredients of sound.
Carolyn: In that context, it makes sense that your lab would be interested in looking at FM and assistive devices, to see the impact on auditory processes.
Nina: When we started research in this area, there was already a lot of work showing that auditory skills impact reading abilities. This includes making very basic auditory judgments like frequency discrimination, backward masking, temporal judgments and rhythm, as well as the ability to sequence sounds and patterns, and to hear speech in noise. And there has been work from other labs showing that auditory processing skills in infants and toddlers can predict later language and reading skills.
With that in mind we thought that a form of auditory training that essentially makes meaningful sound more salient to a child in an everyday learning and listening environment might be a way in which auditory learning and, in particular, auditory skills like reading might be strengthened. There had already been research showing that using an FM listening device in the classroom was associated with improvements in literacy and academic achievement. There were also reports of improved attention and listening skills, and studies showing that some cortical responses that reflect attentional processes could be strengthened in people who used FM devices.
Carolyn: The particular study we’ll discuss today is entitled, “Assistive listening devices drive neuroplasticity in children with dyslexia” (Hornickel, Zecker, Bradlow, & Kraus, 2012), which gives us some idea of the conclusion. Let’s start with the study design and purpose.
Nina: We had an opportunity to work with the Hyde Park Day School in Chicago. It operates two campuses for bright children who have learning disabilities. Children are pulled out of their mainstream classrooms for one to two years to attend. The Hyde Park Day School has the latest technologies available, small class sizes, a low student-to-teacher ratio, and is informed about the best ways of improving children’s language skills. After attending, children return to their mainstream educational track at their neighborhood schools. The Hyde Park model has been successful, and you can find more information at https://hpds.uchicago.edu/
Altogether our study took three to four years from the initial planning through the final stages of the publication of the results, and much credit goes to Jane Hornickel, who was instrumental throughout. The study design was such that the children came to our laboratory and we administered a series of measures. Then, one group of children spent a year at school wearing FM devices all day long for all of their educational classes, while the other group was educated in the same school but did not use FM systems. There were approximately 20 children in each group. After a year, the children returned to Northwestern and we administered the same battery of tests. The tests consisted of measures of reading, because we wanted to know if reading scores improved; measures of cognitive function such as memory and attention; and biological measures.
Carolyn: The children had normal hearing?
Nina: Yes, normal hearing thresholds. All the children had normal audiograms and click-evoked brainstem responses. They all had an external diagnosis of a reading disorder. There was no difference between the groups on their reading and learning measures at the beginning of our study. Both groups, not surprisingly, performed more poorly than typically developing kids on reading measures, and we also had a typically-developing cohort that was informative.
Carolyn: With a small class size and small teacher to student ratios, is FM even necessary? It sounds like an ideal situation.
Nina: Interesting, because even in this ideal classroom situation, we found an average noise level of 60 dB SPL. Even under the best of circumstances there are lots of sounds in a classroom that detract from the meaningful signal that a child needs to listen to. There is traffic noise outside, chairs moving around, heating or air conditioning systems going on, and possibly computers with fans, projectors, and such.
As you know, FM systems improve the signal-to-noise ratio (SNR) by providing a direct transmission from the teacher’s microphone to the ear-level receiver. We used the Phonak EduLink, which is a miniaturized FM receiver designed specifically for children with normal hearing thresholds to improve the SNR.
Carolyn: We should note that in the current Phonak product line, EduLink has been upgraded and replaced by the iSense Micro. More information can be found at www.phonakpro.com
Nina: In terms of disclosure, Phonak provided the devices and some funding for the study. I was adamant about not sharing the results of the study with Phonak until they were published. We have a responsibility to science to publish the results, whether or not they were favorable to the device, and Phonak agreed to those terms.
Carolyn: Tell me more about the biological measures.
Nina: It’s something that we call cABR: an auditory brainstem response to complex sounds such as speech and music.
With a typical ABR montage utilizing three scalp electrodes, we deliver the sounds, and measure the nervous system’s response. To further understand cABR, there are four attributes to consider.
The first is that cABR captures the acoustic characteristics of the stimulus. The response physically looks like the stimulus that you used to stimulate the brain. You can take the brain wave and feed it back through a speaker, and you will hear the response that you’ve recorded. It actually sounds quite a bit like the stimulus that you used to obtain the response in the first place. This provides details about the pitch, the timing, and the harmonics that make up our sounds. It enables us to determine what aspects of sounds are being transcribed well by the nervous system, and which ones are not.
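To make that idea concrete, one common way to quantify how faithfully a recorded response "looks like" the stimulus is a lagged cross-correlation between the two waveforms, since the response trails the stimulus by a neural conduction delay. The sketch below is purely illustrative, using synthetic data and made-up names; it is not the lab’s actual analysis pipeline.

```python
import numpy as np

def stimulus_response_similarity(stimulus, response, max_lag):
    """Peak correlation between stimulus and response, searched over a
    range of lags (the response lags the stimulus by a conduction delay)."""
    s = (stimulus - stimulus.mean()) / stimulus.std()
    best = 0.0
    for lag in range(max_lag + 1):
        r = response[lag:lag + len(s)]
        if len(r) < len(s):
            break
        r = (r - r.mean()) / r.std()
        best = max(best, float(np.dot(s, r)) / len(s))
    return best

# Synthetic demo: the "response" is a delayed, attenuated, noisy copy
# of the stimulus, standing in for an averaged evoked response.
rng = np.random.default_rng(0)
t = np.arange(0, 0.05, 1 / 16000.0)            # 50 ms at 16 kHz
stimulus = np.sin(2 * np.pi * 100 * t)         # 100 Hz fundamental
delay = 80                                      # ~5 ms delay, in samples
response = np.concatenate([np.zeros(delay), 0.5 * stimulus])
response = response + 0.1 * rng.standard_normal(len(response))
print(stimulus_response_similarity(stimulus, response, 160))
```

With a faithful (if noisy) response, the peak lagged correlation comes out close to 1; a response that fails to track the stimulus would score near 0.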
Our work and that of others shows that children who are poor readers have deficiencies especially in representing the consonant portions of syllables, i.e., the sounds that distinguish “bill” from “pill” or “cat” from “bat”. The information at the beginning of a consonant requires the nervous system to process timing on the order of microseconds, and requires the nervous system to distinguish sounds on the basis of their harmonics. The timing and the harmonics enable us to disambiguate one consonant from another. It’s not that poor readers have difficulty with every aspect of sound or that their response to all sound is reduced. It is their nervous system’s selective ability to pick out certain elements of sound that seems to be disrupted. If you’re interested in reading some of the research in this area, the studies by Hornickel and colleagues (Hornickel, Skoe, Nicol, Zecker, & Kraus, 2009) and Banai and colleagues (Banai, Hornickel, Skoe, Nicol, Zecker, & Kraus, 2009) are good references to check out.
The second and most important attribute to know about cABR is that it is experience dependent. It changes with the language that we speak, and with the music experience that we have. This response from the nervous system is an index of learning because the research has shown that it changes depending on your experience with language and music, both short term and long term. It is also experience-dependent in real time, as our nervous system makes judgments about sounds in a very automatic way based upon our experience.
Third, cABR reflects everyday communication skills, specifically hearing in noise and reading. We see that there is a strong relationship between various components of the cABR and reading and hearing in noise ability.
And finally, cABR is meaningful in individual people, as with any auditory brainstem response. As you know, each individual is different vis-à-vis how their nervous system interfaces with the world of sound, and so we can actually get at that with these measures.
With all these attributes in mind, we can say that the cABR provides a snapshot of auditory processing. For more information, two articles that summarize the cABR are Skoe and Kraus (2010) and Kraus (2011).
Carolyn: cABR sounds like a powerful metric of auditory processing.
Nina: We believe so. It has already informed us a lot, and there is enormous potential for its use in various clinical and research situations where biological processing of sound is of interest. It can provide information about device development (e.g. hearing aids, audio equipment, cochlear implants) and provide an objective, biological metric of auditory training or assessment of individuals who have processing difficulties.
In the FM study, we looked at a particular aspect of cABR called “response consistency”. When you present the same sound again and again, an efficient nervous system will respond the same way every time. The sound hasn’t changed, so you would expect a similar response from trial to trial; that is what we call a consistent, or stable, response to sound.
What we discovered from our research on over 100 participants is that children who are poor readers are inconsistent in the way their nervous systems represent sound from trial to trial, while good readers have very consistent responses to sound (Hornickel & Kraus, in press).
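As a rough illustration of what a response-consistency metric can look like, the sketch below correlates the average of one half of the trials with the average of the other half, on synthetic data. The function name and the simulated "stable" and "unstable" systems are hypothetical stand-ins, not the measure reported in the paper.

```python
import numpy as np

def response_consistency(trials):
    """Split the trials into two halves, average each half, and return
    the Pearson correlation between the two averages. Values near 1
    indicate a stable trial-to-trial neural response."""
    trials = np.asarray(trials)
    half = len(trials) // 2
    a = trials[:half].mean(axis=0)
    b = trials[half:2 * half].mean(axis=0)
    return float(np.corrcoef(a, b)[0, 1])

# Synthetic demo: a "stable" system adds little trial-to-trial noise to
# a fixed response template; an "unstable" system adds a lot.
rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * np.linspace(0, 4, 500))  # stand-in response
stable = [template + 0.3 * rng.standard_normal(500) for _ in range(100)]
unstable = [template + 3.0 * rng.standard_normal(500) for _ in range(100)]
print(response_consistency(stable) > response_consistency(unstable))  # True
```

Under this toy model, the stable system scores close to 1 and the unstable one markedly lower, mirroring the good-reader versus poor-reader contrast described above.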
Carolyn: What were the key findings from the FM study?
Nina: An important finding is that children who wore the EduLink device improved in their reading ability. They improved in reading-related skills such as phonological awareness, and on standardized measures of basic reading ability. They improved to a greater extent than the children educated in the same classrooms who did not wear the device.
Carolyn: Were there any fundamental changes in their nervous systems, as measured by cABR?
Nina: The answer is yes, and this is where it gets really exciting. These poor readers had inconsistent responses to sound going into the academic year. After a year of wearing the assistive device, their responses became more stable, more consistent. In fact, they became normally consistent. It was almost as if the device had cured them.
I say “cured” because their responses to sound became like those of a typically-developing child and, importantly, these responses from the brain were measured while the child was no longer wearing the device. So the idea is that once the nervous system has learned to use sound efficiently, you now have a nervous system that no longer needs the device because successful communication is happening. Using the device, presumably, helps the child pay attention to what is important. Constantly reinforcing the successful use of sound then changes the nervous system. Successful communication reinforces those circuits that are necessary for the effective representation of sound.
This is speculation, but it appears that from a practical standpoint, they’re not tethered to this device for life. Rather, the device enabled the children to learn what to pay attention to; they learned what aspects of sound were important.
To help understand this idea, I sometimes use an anecdote about guitar playing. I play guitar a little bit, and my husband is a musician. Once, I was trying to figure out a guitar lead by listening to a CD. He came by and said, “If you listen to the sounds of the notes, you can hear that these three notes are not being picked. The sound you hear is the sound of the fingers of the left hand being pulled off of the strings, and it has a very different sound than that of the notes being picked”. Once he pointed it out to me, I associated that meaning with the sound and could pay attention to it. Up until then I had been effectively deaf to those nuances in the sound. Now I can recognize much better the distinctive sounds of notes happening in rapid succession and whether they are being picked or not.
The idea that cognitive abilities shape the neural processing of sound is very important and part of our theoretical framework. In reference to the FM study, the cognitive ability we’re talking about is auditory attention. In the study, we also used questionnaires to collect parent observations, teacher observations, and student observations. In general, they all felt that the children who used the FM devices became more and more attentive to sound.
Carolyn: Can you provide a few examples of the comments collected from the questionnaires?
Nina: One teacher commented, “His ability to attend to my classroom instructions and attend to my voice improved.” Students said things like, “I was able to hear and understand every one of my teacher’s words. It helps me hear my teacher better.”
In the end we’re back to this idea of hearing being an integrated process involving sensory and cognitive systems, and also our reward system, or how we feel about what we hear. Auditory processing reflects our experience with sound. Auditory training can help in surmounting biological deficits. It’s important to remember that the biological measures at the end of the study were obtained without the FM device. So it appears that once the brain learns how to listen, the device has served its purpose.
I think this study speaks to the utility of FM listening devices as a form of auditory training for reading impairments: they are classroom-based and they focus attention on meaningful speech. In addition, cABR is a very good outcome measure of remediation. It is our hope to get this technology into a user-friendly format so that it can be used by others to assess biological function in response to sound.
Carolyn: You mentioned that even an ideal classroom may have noise levels on the order of 60 dB SPL. Do you think that children who don’t have reading problems could benefit from technology that improves the signal-to-noise ratio for meaningful sounds?
Nina: Theoretically, why not? The principles of auditory training are ones that should be applicable to anyone from typically-developing children, to those with deficiencies, to those who might be considered auditory experts, i.e., those that speak multiple languages, musicians, etc. It may be that a typically developing kid is already getting all of the useful information that they can possibly get in a given environment. But it certainly would not surprise me if there would be additional benefits someone could get from having increased salience and attention directed to the signals that are most meaningful.
In this study we could see retrospectively which children were going to be most likely to improve. Those were the children who had the most inconsistent responses at the start and who were the poorest readers. Therefore, if you have limited resources, you might want to target children with reading impairments, specifically those children in whom we can see, biologically, the nervous system’s inability to process sound consistently.
Carolyn: What do you see as the logical future directions from here?
Nina: We are continuing to put a lot of focus on auditory learning in a number of contexts. Like the FM study, much of our work is neuroeducational, or research that is done in an educational setting where we obtain a number of listening and learning measures. The field has a dearth of longitudinal studies that follow the same person as they learn, and as their auditory system develops one way or another based on the environments that they’re in. That is one area for future research. We’re also looking at the effect of a certain kind of training on the development of auditory skills, and a big focus of our work is on music. There is a slideshow on our website that demonstrates how music training changes sound processing in the brain.
We’re involved in public schools both in LA and in the Chicago area where music education is delivered as part of the curriculum, as often as math and English. We are very keen to understand the effect of music education: How does it change the children’s nervous systems vis-à-vis their response to sound? And importantly, how can that auditory experience help children become better learners overall?
We also have research underway on auditory training in older adults (Anderson et al., in press). It is obvious to me that we are what we do and how we spend our time. So if you’re a biker and you have well developed quadriceps because of all the hours you’ve spent on a bike, those large quadriceps are going to be part of you (whether you're awake or asleep), because you have trained your body.
Similarly, if you spend time actively working on making sound to meaning connections for example through studying a musical instrument or another form of auditory training, you create a nervous system that is automatically able to respond more consistently, pick up on meaningful sound patterns and efficiently represent meaningful elements of sound.
Carolyn: Fine tuning and training. I’ve never heard it explained quite that way before. It is great that you can take this incredibly complex work that you do and interpret it so eloquently so that it can be appreciated by the masses, and I am grateful for your time today.
I strongly recommend our readers spend a few minutes at your website. The videos and demonstrations make this fascinating research come to life.
Nina: Thank you. We really care about communicating what we do to various audiences like audiologists, educators, scientists, teachers and policymakers. We want to put our research into the hands of people who can apply it. The slideshows on our website can be viewed in just a few minutes, and each one encapsulates years of work, to give you the essence or the kernel of a project. If you then want the nitty gritty details you can download the publications of interest.
Carolyn: Thanks again, Nina. It was great speaking with you.
Anderson, S., White-Schwoch, T., Parbery-Clark, A., & Kraus, N. (in press). Reversal of age-related neural timing delays with training. Proceedings of the National Academy of Sciences.
Banai, K., Hornickel, J., Skoe, E., Nicol, T., Zecker, S., & Kraus, N. (2009). Reading and subcortical auditory function. Cerebral Cortex, 19, 2699-2707.
Hornickel, J., & Kraus, N. (in press). Unstable representation of sound: A biological marker of dyslexia. Journal of Neuroscience.
Hornickel, J., Skoe, E., Nicol, T., Zecker, S., & Kraus, N. (2009). Subcortical differentiation of stop consonants relates to reading and speech-in-noise perception. Proceedings of the National Academy of Sciences of the USA, 106(31), 13022-13027.
Hornickel, J., Zecker, S.G., Bradlow, A.R., & Kraus, N. (2012). Assistive listening devices drive neuroplasticity in children with dyslexia. Proceedings of the National Academy of Sciences of the USA, 109(41), 16406-16407.
Kraus, N. (2011). Listening in on the listening brain. Physics Today, 64(6), 40.
Skoe, E., & Kraus, N. (2010). Auditory brainstem response to complex sounds: A tutorial. Ear and Hearing, 31(3), 302-324.