
Maximizing Outcomes for Children with Auditory Disorders: Auditory Brain Development - Listening for Learning

Carol Flexer, PhD, CCC-A, LSLS Cert. AVT
February 22, 2016

Editor’s Note: This is an edited transcript of the first webinar in a 3-part webinar series.  The webinar series is also available as a text course - access the text course here.

Learning Objectives

After this course, readers will be able to describe auditory brain development as the foundation of listening, language, and literacy for all children; explain bottom-up and top-down processing as influenced by acoustic accessibility; and describe the signal-to-noise ratio (SNR) and technologies that are designed to enhance the SNR.

Introduction

Jane Madell, PhD: I've been involved with the Pediatric Audiology Project since its inception in 2009. Children who lived in Jackson Hole, Wyoming and in the surrounding area did not have access to audiology services.  They had to travel seven hours to Salt Lake City or fourteen hours to Denver to get any services. That was the reason we started this program.  We have trained a pediatric audiologist and a certified auditory verbal therapist (AVT).

One of the things we do as part of this program is run conferences. This year we decided to do it as an online conference rather than an in-person conference in Jackson Hole. This is a three-hour workshop entitled Maximizing Outcomes for Children with Auditory Disorders. My friend Dr. Carol Flexer will be presenting the first hour today. I am presenting the second hour on what children need to hear in the classroom to be successful, and Dr. Gail Whitelaw will be presenting the third hour on auditory processing disorders. To earn CEUs for this material, you can register for all three sessions as a recorded course or as a text course.

Let me introduce Dr. Flexer. Carol and I have been good friends and colleagues for many years. She is a Distinguished Professor Emeritus at the University of Akron and lectures internationally on pediatric and educational audiology topics.

She is the author of more than 155 publications, including fourteen books, three of which she wrote with me. She is a past president of the Educational Audiology Association, the American Academy of Audiology, and the AG Bell Academy for Listening and Spoken Language. For her research and advocacy for children with hearing loss, Dr. Flexer has received a number of prestigious awards.

Carol Flexer, PhD:  Thank you, Jane.  The Pediatric Audiology Project is an amazing project, and I'm honored to be invited to participate. I'm going to be talking about auditory brain development, my favorite thing, listening for learning.

What is Hearing?

Hearing is a first-order event for the development of spoken communication and literacy skills. Anytime the word hearing is used, we should think auditory brain development, because we hear with the brain. We may assume that people know we hear with the brain, just as we see with the brain. The eyes are the doorway to the brain for vision. The mouth is the doorway to the brain for taste. The nose is the doorway to the brain for smell. Those are all portals. Likewise, the ears are the portal to the brain for sound. A child's brain is born with billions of neurons, and with experience and exposure, the brain will grow about a quadrillion auditory connections. In order for the brain to develop those connections, we have to feed the brain auditory information. Acoustic accessibility of intelligible speech is critical to grow accurate connections.

Children speak what and how they hear. The brain is a probability organizer and as such, it can only grow connections based on the data it receives. What comes out of the child is what went into the brain.

If the child is speaking English, what went into the brain? English. If what comes out of the child is Spanish, what went in? Spanish. If what comes out of the child is clear speech, what went into the brain? Clear speech. If what comes out is garbled speech, what went in? Garbled speech. What goes in is what comes out, so when we talk about listening for learning, we want to give the child's brain access to clear, accurate auditory information.

To have a sense of how critical it is for the brain to be exposed to auditory information, just look at our organic design. We're designed with eyelids that, when closed, block out visual information to the brain, but we're not born with ear lids. There's no anatomical structure that prevents auditory information from reaching the brain. A person with typical anatomical peripheral hearing structures has access to auditory information 24/7.

What if a child has a hearing loss, which translates to a problem in getting sound to the brain? How many hours a day does their brain get sound? Only as often as they wear technology. Six hours a day, ten hours a day, twelve hours a day? What we typically say is, “Eyes open, technology on,” because the brain needs a massive amount of developmentally appropriate auditory information. When we say that child needs access to sound, it means their brain requires access and exposure to auditory information. Signal-to-noise ratio is the key to receiving intelligible speech. As we'll discuss, in order for a child's brain to have access to clear auditory information, the desired signal needs to be about ten times louder than background sounds.

I recommend that parents, professionals, school personnel, and children who have tablets and iPhones download sound level meter (SLM) apps. Start with the free ones, but know that you cannot make legally valid measurements with these apps, as they are not calibrated. They will, however, give you a good idea of the noise in an environment. Noise is problematic for the reception of auditory information for everybody, especially for children who have hearing problems. Every app comes with a guide that explains how to use it and what its advantages and limitations are. Use the app in your home, school, therapy room, cafeteria, hallway, or on the soccer field. What is the soundscape of every environment? All of our programs need to be aware of the acoustic access of all of our children.
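
To make that concrete, here is a minimal Python sketch of one way to turn two SLM app readings into a rough signal-to-noise estimate: measure the noise floor alone, then measure the talker's voice plus the noise, and subtract the levels in the energy domain. The readings used here are hypothetical example values, and this is only a back-of-the-envelope illustration, not a calibrated measurement procedure.

```python
import math

def estimated_snr_db(speech_plus_noise_db, noise_db):
    """Rough speech-to-noise ratio (dB) from two sound level meter readings."""
    # Convert each dB reading to a relative intensity (energy) value.
    total = 10 ** (speech_plus_noise_db / 10)
    noise = 10 ** (noise_db / 10)
    # Energy of the speech alone is approximately the total minus the noise.
    speech = max(total - noise, 1e-12)
    return 10 * math.log10(speech / noise)

# Hypothetical readings: teacher's voice at the child's seat, then the room alone.
print(round(estimated_snr_db(speech_plus_noise_db=65, noise_db=55), 1))  # about 9.5 dB
```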

What is Hearing Loss?

Given our previous discussion that the ear is the doorway to the brain for sound, hearing loss, then, is a doorway problem. Hearing loss of any type and degree obstructs the brain's reception of auditory information. The sole purpose of hearing aids, cochlear implants and FM systems is to get auditory information through the doorway to the brain. When the technology is not worn, not programmed well, or not used in all the child's environments, their brain is being deprived of auditory information. That deprivation influences the growth of neural connections and also the knowledge base that a child has about their world.

What is Sound?

Arthur Boothroyd (2014) says that sound is an event. For example, you don't hear mommy; you hear mommy doing something: talking, walking, singing, dancing. Vision is like a label. You can see mommy, but sound is about an event, because the event creates vibrations. Vibrations are picked up by the ear doorway and then sent to the brain as energy for coding and for perception as information.

Hearing vs. Listening

What's the difference between hearing and listening? We often use those words interchangeably, but that distinction is important. Hearing is acoustic access of auditory information through the doorway to the brain. In order to have hearing occur, we have to get through the doorway. We have to improve the signal-to-noise ratio. We have to manage the environment, and if there's any hearing problem at all, we need to use hearing technology.

Listening is attending to acoustic events with intentionality. Neurologically, listening occurs through the activation of the prefrontal cortex. The prefrontal cortex is not activated automatically; its activation is the neurological event that occurs when there is focused attention to a stimulus. For us to encourage, teach, support and grow that child's listening capacity, we have to first give them something to listen to. Hearing must be made available before listening can be taught. Whatever we give the brain in terms of auditory events, that is what the child will be listening to. If we give the brain clear auditory information, that is what the child will attend to.

If we give the brain garbled auditory information, that is what the brain will attend to. There isn't any magic in listening that somehow generates the auditory signal; the signal is what we have to get through the doorway to the brain. We have to take care of the hearing thing before we can do the listening thing.

For example, a teacher contacted me not long ago about a child. She said, “This child is not paying attention. She's not listening in the classroom.” The first thing I asked was, “Well, what's her hearing like?” “Oh, she has this mild to moderate hearing loss. She really doesn't even need those hearing aids. She hears just fine. Her problem is she doesn't pay attention; she's not listening.” In truth, you have to give a brain clear auditory information in order to develop the listening skill. Yes, hearing aids need to be worn and programmed well. Yes, that child likely needs a personal FM system; then we can really focus on that listening thing.

Children Cannot Listen like Adults

Children cannot listen like adults for several reasons. The auditory brain centers and neurological connections are not fully developed until a child is about 15 years old. Children cannot perform automatic auditory cognitive closure the way adults can. If an adult doesn't hear all the information, the brain fills in the gaps, but you can only fill in the gaps of missed information if you already know what that information is. Children do not yet know enough of the information to make those inferences, so they cannot perform automatic auditory cognitive closure.

Extrinsic vs. Intrinsic Redundancy

All children need a quieter environment and a louder signal than adults require. James Jerger discusses this concept and distinguishes between extrinsic and intrinsic redundancy. Extrinsic redundancy refers to the integrity of the information coming from outside the person; that is called bottom-up sensory input. On the other hand, intrinsic redundancy refers to cognitive capacity, which is the internal knowledge and attentional resources that the individual brings to a listening event; that is top-down processing.

There is an inverse relationship between the two, and we have to consider both for every child. The stronger the top-down capacity, the more the brain can compensate when the bottom-up information is faulty, deficient, or spotty.

Enhance Bottom-Up Input

Children do not have the same top-down capacity as adults, so what are some ways that we can improve the intelligibility of the bottom-up signal? The signal-to-noise ratio is one way, and I will talk about that later. Another way is to manage our own voice. Most adults speak faster than most children (and many aging persons) can process. In fact, adults can speak up to 200 words a minute, but children can only process about 124 words a minute. Slowing down a bit, pausing, and using appropriate and meaningful suprasegmentals, or melody, to enhance meaning can improve the listener's speech discrimination by up to 40%.

As teachers and parents, we have a responsibility to assist in a child's development of pristine neural connections by giving the brain clear speech to process and listen to. A child is going to have to learn to listen to and process fast speech and distorted speech, but they learn to do that from a strong top-down position. The stronger their neural connections and the more they know, the better equipped that child will be to handle a deficient bottom-up signal.

Be mindful of using clear speech. Slow down, pause, give that child's brain time to process and grow those connections. Then, we'll improve the signal-to-noise ratio to allow that child clear access to their brain.

Top-Down Capacity

Top-down capacity, meaning information in the brain and neural connections, is necessary for auditory cognitive closure; that is what we mean by filling in the gaps.

Here's a tip.  If you are speaking clearly to the child with a good signal and the child says, "Huh? What?" rather than repeating right away, encourage the child to exercise their top-down capacity by asking, "What did you hear me say?" Give them a chance to retrieve what they have top-down to try and put the signal together.

Another important necessity of top-down capacity is casual listening.  This is the child knowing they need to pay attention, even if the person is not looking at them and doesn't appear to be talking directly to them. As audiologists, we need to make sure that the child has clear access to soft speech through their technology.

The next need of top-down capacity is repairing conversational breakdowns. It's important to distinguish not understanding from misunderstanding. We often think those are the same things, but they are not. Not understanding is typically when the person or child recognizes they did not understand what was said. That is when they say, “huh,” or “what,” or when we offer them new strategies for repairing conversational breakdowns, such as, “pardon me?”

Misunderstanding is when the child does not know that they did not understand what was said. The child cannot offer a repair for a breakdown because they do not know where a breakdown occurred. They did not know they misunderstood.

The Ear

When I talk about the ear, I show a picture of the brain, because that is the real ear.  Again, the physical ear is only the doorway.  What often happens is we start conversations about doorway problems, or hearing loss problems. When we start conversations in the doorway, our colleagues, parents and even children get stuck in the doorway, not recognizing that doorway problems are really about the brain. A problem in the doorway truly means that brain development and knowledge is going to be compromised.

Mainstream Classrooms

Mainstream classrooms are auditory-verbal environments. The cornerstone of the educational system is listening: receiving and attending to auditory information in the environment. Children spend up to 70% of their school day listening - to teachers, peers, instructional media, and their own speech. Children are the biggest source of noise in the classroom. The larger the room, the more children and the more simultaneous activities there are, the noisier the room becomes and the more obscured the desired signal can be.

If a child cannot clearly hear and attend to spoken instruction, the entire premise of the educational system is undermined. That is a universal issue. Pam Talbot (2015) asserts that it is important to teach the acoustic and not the visual similarities of speech sounds when looking at auditory processing.

There are several reasons that we have to look at the acoustics of speech sounds. One is for the creation of the child's auditory feedback loop, which is listening to their own speech through the doorway. They are attending to their speech and modifying their output (spoken production) based on what they heard themselves say.  They match their output to the spoken production of teachers and parents.

Speech-language pathologists know that if that child is not attending to how they sound, we will never have an impact on the development and growth of their spoken language. When a child has a doorway problem and is wearing technology, we have to make sure that the child has access to their own speech.

Acoustic Speech Sounds

Pam Talbot (2015) identifies different sound-alike systems. These sounds look different on the lips, but they sound similar. Examples are s, f, th; m, n, ng; b, d, g, j; p, t, k, ch. Many times in speech-language pathology the focus is on how the sounds look, but that really does not help you with how they sound.

The only way a child is going to distinguish acoustic confusions is by hearing and listening to the differences. These differences have to make it to their brain. One strategy for helping the child develop the sound-alike auditory folders in their brain is to play with rhyming words. Rhyming is also very important for literacy development, as are rhythm and repetition. Examples of rhyming words that address the sound-alikes might be pick, tick, kick, chick. All of these look somewhat different on the mouth, but they sound alike. We have to assist the child in hearing these acoustic distinctions.

Research shows that some children who have literacy problems also have rhythm problems. There is definitely a neurological link between rhythm, rhyming, and literacy. Anything that we can do to enhance those skills and grow those neurological connections will give children a literacy advantage.

Another way to help children learn the acoustical part of speech is to have them read and speak into their own FM mic. This is a way of directing the child's prefrontal cortex to his own verbal productions. We are increasing the signal-to-noise ratio of their own speech, thereby enhancing their auditory feedback loop. You might be thinking that this is unnecessary in a quiet classroom, but having the child read into the mic, or reading into the mic yourself, directs that child's prefrontal cortex to the relevant auditory information. I encourage you to try that.

Acoustic Accessibility

Let's talk about acoustic accessibility. What are the negative effects of poor classroom acoustics, not just for hearing-impaired children, but for all children?

We can identify three main areas: misunderstanding verbal instruction, missing verbal information, and fatigue. If there are poor classroom acoustics, you increase the probability of children misunderstanding, mishearing, and not knowing. Studies have shown that the harder a child has to work to listen in the classroom, the less auditory information they take in. Poor acoustics contribute significantly to working harder at listening. If a child has any doorway problems, they are fatigued.

Keep in mind that “fatigue” is not just physical fatigue; it is cognitive fatigue. If a child is using the majority of his cognitive reserve to try and determine what information got to the brain, then what cognitive energy is left over for thinking about the information, processing or making inferences? Fatigue is a very big issue. The clearer the input signal, the less cognitive energy we drain from that child, and the more top-down capacity they will have for thinking, processing and learning, which is our global goal.

Science: Signal-to-Noise Ratio

What is the science of acoustic accessibility? It does not really help to say to a teacher, “You have to charge this system daily. Be sure you wear the mic,” without telling them why we are making these recommendations. Why is acoustic accessibility important for that child's brain? Why is the technology important? There is a science behind every recommendation that we make.

Signal-to-noise ratio is also called speech-to-noise ratio. Signal-to-noise ratio is the relationship between the primary or desired auditory signal as compared to all other unwanted background sounds. What's the relationship between the two? The more favorable the signal-to-noise ratio, the more intelligible the spoken message.

What we mean by “intelligible” is hearing low-frequency sounds (vowel sounds), hearing high-frequency sounds (consonant sounds), and hearing the non-salient, morphological markers that are part of the spoken message.  In English, these are the parts that we typically do not stress, like articles, pronouns, and endings of words such as past tense and plurals. We need the whole signal to be intelligible. In addition, everybody hears better in a quiet environment; quieting the environment is a universal listening condition. It is not relevant only for children or people with auditory problems.

Adults with typical hearing have had access to auditory information for years. As such, they have great top-down capacity. They have strategies for focusing the prefrontal cortex. Those types of brains do best when they have a signal-to-noise ratio of about +6 dB. This means the signal is about twice as loud as background sounds, which gives clear access to spoken information without draining cognitive energy. Children in general require a much more favorable signal-to-noise ratio: they need the signal to be +15 to +20 dB above the noise. The desired signal needs to be about 10 times louder. They need greater extrinsic redundancy for bottom-up processing.
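
Those dB figures map onto simple ratios. As a quick check on the “twice as loud” and “10 times louder” statements above, here is a short Python sketch that converts an SNR in decibels to the approximate sound-pressure ratio it implies (treating “louder” loosely as a pressure ratio, which is a simplification of perceived loudness).

```python
# Approximate sound-pressure ratio implied by an SNR in decibels:
# ratio = 10 ** (snr_db / 20), so +6 dB is about 2x and +20 dB is about 10x.
for snr_db in (6, 15, 20):
    ratio = 10 ** (snr_db / 20)
    print(f"+{snr_db} dB SNR: signal is roughly {ratio:.1f}x the background sound pressure")
# Prints ratios of about 2.0, 5.6, and 10.0, respectively.
```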

We need to facilitate their bottom-up input in order to grow their top-down capacity. Their neurological connections are not developed. They don't know as much as adults, and children with any doorway problems, including ear infections and auditory processing difficulties, will experience an impact on brain growth. Their brain is not handling auditory information well.

Children with learning or attention deficit behaviors, developmental delay, visual disabilities, or English as a second language need very clear access to English information in order to develop their data files in their brain. All children require a better signal-to-noise ratio than adults, because they do not have the same top-down capacity. Unfortunately, typical classrooms have very inconsistent and poor signal-to-noise ratios, maybe +4 dB at best. Remember, we want the signal to be 15 to 20 decibels louder than background sounds.

A +4 dB signal-to-noise ratio gives the brain a poor signal, which can lead to missing information, misunderstanding, and cognitive fatigue. In classrooms, there will be new information, new words, and new knowledge. If that child's brain does not receive that information due to poor signal-to-noise ratios, those data files are going to be deficient, and that child simply will not know the information. We want them to learn by being in classrooms. If we do not manage the acoustics in the room, then the signal-to-noise ratio varies dramatically depending on where you are.

Strategic Seating

It is true that the closer you are to a sound source, the clearer and more intelligible the message is likely to be. Six inches is ideal, but we all know that physical positioning like this does not work in a classroom. Please do not write “preferential seating” on an IEP, because that assumes that you somehow can manage that child's acoustic access by positioning. Classrooms are dynamic environments. People and stations are moving here, there and everywhere.

Physical positioning cannot control a very flexible environment. I suggest that you write “strategic seating” on the IEP. I know, you are saying, “But no one knows what that means.” What it means is that you have to have a conversation. When you write “preferential seating,” people think they know what that means: to sit up front, as if everything happens in the front of the classroom. Because no one knows what strategic seating means, we have to talk about it. We have to educate. We know a child has to have a personal FM if they have a doorway problem and are wearing technology. Strategic seating means they are seated in the room in a way that also gives them access to what is happening visually in that learning space.

For example, if you are sitting in the front with your back to the room, everything interesting is behind you and you do not know who is talking or giving responses to the teacher. Even if the hearing technology is working, you still do not know where to listen if you cannot see who is speaking. Strategic seating might be halfway down the side of the room. Strategic seating might mean the ability to move around the room as the teacher moves around the room. Strategic seating is negotiated seating, involving the child, the teacher, and an understanding of the room, which will give that child better access to where instruction is occurring.

Technologies that use a Remote Microphone

First, what is a remote microphone? The term means that the microphone is placed near the desired sound source, like the teacher's mouth or instructional media. This is different from the microphones on the child's ears that belong to their hearing aids or implant processors. They need to hear their own voice by way of the hearing aid microphones, and they also need to hear a person or peers speaking to them. The remote mic on the talker facilitates the reception of that auditory information. What technologies use remote microphones?

Personal FM

In simplified terms, a personal FM system is one in which the FM (radio) receiver is inside the child's personal technology and the microphone of the radio system is placed on the desired sound source. A personal-worn FM provides the best signal-to-noise ratio, the most intelligible bottom-up signal of instructional information to the brain of the child with a doorway problem.

Soundfield System

The second most common device is a soundfield system. It is sometimes also called a Classroom Audio Distribution (CAD) system. Using either radio or light waves, the teacher's voice is transmitted to a loudspeaker or speakers positioned around the room, so every child in the room has clear access to teacher instruction. A CAD system thus improves the signal-to-noise ratio, but not as much as the personal FM.

Again, the issue here is that children need to know where to listen. A clear indication of where to listen is the person wearing the remote microphone. The American Academy of Audiology has hearing assistance technology guidelines. This is an excellent resource about how to fit, manage, verify, and validate the use of both personal FMs and soundfield classroom systems.

Acoustic accessibility means you have to remove barriers to the instruction or reception of information. Physical barriers include distracting sound intrusions from outside the room or building, reverberation, or background noise.

Background Noise

As previously stated, the biggest source of noise in a classroom is the children and their movements in the room. When we take the children out of the room and measure the sound, which is how we do the initial sound measurements in an unoccupied classroom, what are the other sources of noise? Some of these include lighting ballasts, diffusers, and heating, ventilation, and air conditioning (HVAC) systems. Electrical appliances and noise passing through from adjacent rooms or hallways add to this, so there are many intrusive noises in addition to the children. We need to reduce those noises as much as possible for learning to occur in a room.

Energetic Masking

Background noise masks the speech, and there are different kinds of masking. The one with which we are most familiar is called energetic masking. This masking reduces the audibility of speech sounds because parts of the speech signal are physically covered by noise in the room, making the speech less intelligible.

Informational Masking

Informational masking occurs when the listener cannot distinguish between two streams of meaningful information. For example, say you have multiple small-group activities occurring in a room and each group is engaged in a different learning activity. Speech is going to drift out into the room from each table, so a child at one table who's trying to learn about states in the U.S. will experience informational masking from the next table that is talking about the concepts of sinking and floating. That information is obscured by information from a different source and has nothing to do with the task in which the child is involved. On top of that, there is the noise in the room from the heating and ventilating system, the fan is on, and the child's table is near the noisy gerbil cage that squeaks throughout the day.

Most environments include a combination of both types of masking, and the listener has to expend a lot of cognitive effort to piece together a deficient speech signal. This piecing-together process is called glimpsing. For example, you are in a restaurant speaking with a friend. Their speech is obscured by informational masking from other tables and by the energetic masking of ambient noise in the room. You are desperately trying to glimpse and put together pieces of what they are trying to say. Do you know how hard that is? Step back - we expect little, immature children to do that? For more information and strategies, I refer you to Karen Anderson's website, successforkidswithhearingloss.com.

Reverberation

Another way the classroom is sabotaged is by reverberation. Reverberation is echo. Sound reflects off hard wall, ceiling, and floor surfaces, and when these surfaces do not have sufficient absorbing ability, there is an echo. Excess reverberation is caused by the materials used in ceilings, floors, and walls. The harder the surfaces, the more the sound will bounce. To further elaborate, in a reverberant environment, the intact information from the speaker bounces and reflects off surfaces, producing indirect, reflected signals, which results in overlap masking.

Overlap Masking

Overlap masking is when the signal the listener hears is dominated by reverberant energy that overwhelms and obscures energy from the talker. Karen Anderson offers a great visual portrayal of overlap masking on www.successforkidswithhearingloss.com.

Classroom Combination

In the classroom, we have children who are overwhelmed with energetic masking, informational masking, and overlap masking. It is a wonder that anyone gets any information into their brain. In many instances, children learn in spite of us, not because of us. One of the things that I recommend for every classroom is a sound distribution system. Have the teacher wear a remote microphone, and have a pass-around microphone so children who have something to say can also be amplified, giving everyone the opportunity for clear speech around the classroom. The child who is speaking can also be developing and enhancing their own auditory feedback loop by directing their attention to their own speech.

When teachers use a microphone, they do need instruction and coaching about how to use that microphone in an effective fashion. How do they use clearer speech? Does the teacher speak slowly and clearly, with pauses, so that the children in the room have the best opportunity to receive intact spoken instruction? When the child is coached to listen, pay attention, and activate their prefrontal cortex, they should be able to hear the speaker clearly the first time.

Home Environment

Children are learning both academic and social information in the classroom. We want them to have the best acoustic access to knowledge through the doorway in any environment. We want remote microphones to be used, and we want their technology worn if they have a hearing loss. But what about when they are at home?

Unfortunately, we have all heard on occasion, “I take my child's hearing aids off at home. They don't really need them there." Does that mean that that child's brain has nothing to gain from knowledge in the home environment? We know that that child's social-emotional skills, their ability to learn how we treat people, how we negotiate, how we initiate conversations, is taught first and foremost in the home. It is absolutely critical that that child has doorway access to their brain at home and in environments outside the classroom.  This is how we gain knowledge about our entire world, not just the educational setting.

Children with doorway problems are very compromised by noise in every environment. We recommend that parents turn off unnecessary sources of noise at home. Turn off the TV and the computer if those are not the focus of the conversation. Some research shows that indirect sound from a TV running in the background is detrimental; when the TV is on in the background for no reason, it is just noise. Turn it off. Also, think about the dishwasher, washing machine, vacuum, and other appliances. If they do not have to be running when conversation is occurring, then wait until later.

Conclusion

Our job as parents and teachers is to create an acoustically accessible environment for that child: to get intelligible information through the doorway to the brain, and to create a strong top-down capacity with cemented neural connections, figurative data files of life, language, and knowledge. That child will then have their brain to rely on when the bottom-up information is faulty or deficient. As parents and teachers, it is our job to develop their brain. Then we can coach the child in listening and paying attention to the auditory information that is critical to their knowledge.


Cite this Content as:

Flexer, C. (2016, February). Maximizing outcomes for children with auditory disorders: Auditory brain development - listening for learning. AudiologyOnline, Article 16320. Retrieved from https://www.audiologyonline.com.

 

 


Carol Flexer, PhD, CCC-A, LSLS Cert. AVT

The University of Akron and Northeast Ohio Au.D. Consortium & Listening and Spoken Language Consulting

Carol Flexer, PhD, CCC-A, LSLS Cert. AVT is Distinguished Professor Emeritus of Audiology, The University of Akron. An international lecturer in pediatric and educational audiology and author of more than 155 publications including 14 books, Dr. Flexer is a past president of the Educational Audiology Association, the American Academy of Audiology, and the AG Bell Academy for Listening and Spoken Language.  For her research and advocacy for children with hearing loss, Dr. Flexer has received four prestigious awards: two from The Alexander Graham Bell Association for the Deaf and Hard of Hearing -- the Volta Award and Professional of the Year Award; one from the American Academy of Audiology -- the 2012 Distinguished Achievement Award; and one from Kent State University -- The EHHS Hall of Fame Distinguished Alumni Award, 2015.



