AudiologyOnline Phone: 800-753-2160


The Auditory Brain: Conversations for Pediatric Audiologists
Carol Flexer, PhD, CCC-A, LSLS Cert. AVT
July 25, 2011

Editor's note: This is a transcript of the live seminar presented on March 14, 2011. To view the course recording, register here. In addition, this was the first seminar in a week-long virtual conference entitled, Pediatric Audiology - Raising the Bar. To view the recordings of the other courses in this series, you can register here.

Thank you for inviting me to be part of this pediatric audiology week on AudiologyOnline and for having me talk on Monday so that I could set the stage for the other talks that are coming. This is a good way to provide the big picture so that we can keep it in mind as we work with children and their families.

We know that because of technology and brain neuroplasticity the whole landscape of "deafness" has changed, and I will be talking about that today. It is wonderful that because of technology we can test the hearing of newborns. We can find out what is reaching their brain in the first hours of life, and, of course, we hear with the brain; the ears are just a way in. The challenge with hearing loss historically has been that hearing loss has kept sound from reaching the auditory centers of the brain and critical areas located throughout the brain. While the brain still may have the potential to develop, it does require auditory stimulation in order to develop completely. Infants require that stimulation very early in order to have the best opportunity to develop these very intricate and plentiful auditory neural pathways. Because we can identify hearing loss in babies at birth and we have the technology to get to the brain, we have a whole new generation of children with hearing loss. It is not that there was anything wrong with the older generation; it is just the way the world was. In 1960 we could provide 1960 intervention and technology, in 1990 we could provide 1990 intervention and technologies, but now it is 2011. So we have the privilege of providing 2011 technologies and interventions. We will talk this week about how the audiologist is absolutely pivotal in providing the assessments, technology and management of that technology in order to access, grow, and develop the auditory centers of the brain to allow this generation of children with hearing loss to be different from previous generations.

All About the Brain

Neuroplasticity is most available and most active during the earliest days and months of a child's life. We know that the brain has some plasticity throughout the lifespan, but the most rapid changes in brain development occur in the earliest months. Historically we have talked about hearing loss as an ear issue. Of course, the ear is the peripheral auditory mechanism; damage or pathology there forms the barrier to sound, and it is that peripheral system that determines whether clear sound reaches the brain. Decades ago, our conversations focused on the ear, but, in fact, our conversations about hearing loss really need to focus on the brain. When we work with families and colleagues, we need to emphasize that what is at stake is not the ear; it is the child's brain. So if a family asks, "Does my child really need to wear their amplification after school?" we can answer that all that is at stake is the child's brain, because we are either accessing, stimulating and developing auditory brain centers or we are literally losing auditory neural capacity. So our conversations about hearing loss with families and colleagues are brain conversations, and they are very serious ones. As audiologists and pediatric audiologists, we are in a very important position to provide to the world a new and expanded vision of what hearing loss looks like and what outcomes are available in 2011.

We know that as professionals we are legally, morally, and ethically required to provide technology and intervention services consistent with the family's desired outcome. We have to ask ourselves: how does the family want their child to communicate? Where does this family live? Who are they? What is their community? What is their family style? What is their vision for the outcome for this child? Until we know where we want to end up, we cannot really develop a road map to get there. For example, until we know we want to end up in Miami, we cannot really select possible roads to get there. The roads we choose, of course, will determine where we end up.

We know that more than 95 percent of children with hearing loss are born to hearing and speaking families. It is highly likely, therefore, that the vast majority of families with whom we work are very interested in listening, speaking, and literacy outcomes for their child with hearing loss. This talk is all about the technologies and neuroplasticity, with a touch on intervention strategies, for when the family chooses listening, speaking, and reading. We know that if the desired outcome is spoken communication and literacy, then hearing, and by hearing we mean auditory brain development, is a first-order event. Obviously, there is not one little nugget in the brain with auditory written on it; auditory tissue is located throughout the brain. All of those areas need stimulation in order to develop, and they cannot be stimulated if there is a hearing loss unless we have technology that gets sound to the brain.

Also as audiologists, we play a pivotal role in literacy. We know from research that reading is a multisensory activity, and that the cornerstone of reading is the development of the auditory neural centers. We know that many children with hearing loss have been challenged in their literacy development, and their challenge is not because they do not see so well. The challenge comes because, historically, we have struggled to get the necessary quality and quantity of auditory exposure and spoken language practice to the brain. This detailed exposure and practice is needed in order to develop the solid neurological framework upon which we can scaffold the higher-level language skills of reading and writing. While, as human beings, we are organically designed to listen and talk, we are not organically designed to read. By that I mean that we do not have any existing hard-wired systems in our brain that we can just activate for reading. Reading is an exercise in neuroplasticity and in developing the connections between various parts of the child's brain and the key auditory centers. If a family is unsure of the value of having technology worn all the time, we can also have a conversation about reading. If the child's brain does not receive ongoing, clear and consistent sound, then we as audiologists can have a conversation about the child's opportunities to develop literacy skills. Literacy has its foundation in auditory neural development, built from spoken language exposure and spoken communication practice.

We also have to access the brain with highly intelligible speech. We have raised the bar on what highly intelligible speech looks like through the management of hearing aids and cochlear implants. For today's child, we expect access to every single speech sound at soft levels and at a distance; it may take multiple technologies to make that happen, but we have very high expectations. The brain can only organize itself around the information that it receives. If we get complete, detailed and clear spoken language to the brain, then that is how the brain will organize its neural network. If we get muddy, inconsistent, late, or scanty information, then that is how the brain will be organized. A good signal-to-noise ratio is the key to hearing intelligible speech. In order for the brain to be developed, the sound first has to actually get there. In order to get there, the auditory information has to travel through the physical environment, through the technology the child is wearing, and through their auditory system. If there is a weak link, we might as well talk to the floor because that is where the words will end up. In noisy environments such as classrooms, we need to consider use of a remote microphone or an FM system in order to reach the brain with clear sound. Simply being close to a child or managing the physical space of a room by positioning is not effective in the dynamic classrooms of today. Technologies such as cochlear implants, hearing aids and FM systems can be thought of as "brain access tools." When a child is not wearing their technology, it is as if their brain is sitting on a shelf losing capacity. I think we have significantly underestimated how much auditory practice the brain requires to develop these pathways. We need to remember that the brains of typically hearing children have access to sound 24 hours a day. We do not have "earlids." We are designed neurologically to receive sound 24/7; we know that during sleep the brain continues to process sound. How else would we wake up when we hear our child cry or a suspicious sound if our brain were not always processing sound?

So, what about the brains of children with hearing loss? Their brains receive sound only when they are wearing their technologies, which is not 24 hours a day, because none of our technology is engineered for 24-hour wear. Yet our brains are designed for 24-hour access and demand that level of auditory practice. When we ask a parent, "How many hours per day does your child wear their technology?" and they say, "All day," we really need to identify what "all day" looks like to that parent. I have worked with some families for whom "all day" meant five hours. What is exciting is that we have continually emerging neural research, so we have basic science that supports the recommendations we make. We know that if there is hearing loss, the brain is organized differently, depending on what sound reaches the brain and when.

We know that the auditory cortex is directly involved in speech perception and language processing. If we are interested in typical development of speech and language and normal maturation, the maturation of the central auditory pathways is a precondition for that outcome. In the past, when we did not have the technology we do today or early access to the brain, we could not get to the brain with sufficient quantity and quality of sound in time to develop the auditory centers for many children. Today is a different day. We have always known that the brain is the key point when it comes to learning; however, in the past many of our conversations did not invoke the word brain. Now, it is a very fertile conversation going on all around us.

Cortical Development

We know that the brain's ability to rewire changes over the years. The concept of critical periods, those times that are better than others for the brain to develop skills, is not new. However, we are now learning that these critical periods are not as straightforward as we thought. The cortex actually matures in stages or columns, and here we get to our concept of experience-dependent plasticity (Merzenich, 2010). The level of maturity of the cortex depends on the richness of the exposure and experience that is provided to those pathways. The first level of the cortex matures in the child's first year of life, probably by 12 months. This is a very important stage for the cortex, called the set-up stage, because the brain is literally always on. It is always available for stimulation and primed for development. In this early time, all it takes to develop auditory pathways is exposure. This means that we have to fit that technology early, and then create an environment that is rich in auditory language communication when that is the desired outcome for the family. If the family's desired outcome is spoken language, listening and literacy, then a precondition for that outcome is the use of technology to get sound to the brain in order to develop the auditory system in those first 12 months. We do not want to squander a minute of it! During those early months the brain's task is to create a model of the culture into which it has been born (Merzenich, 2010). The baby learns how to control the actions required to survive and thrive in that world, and the only way a baby learns that in the set-up stage is through exposure.

The second stage of cortical development is a little bit different. Remember, each stage is built upon the rich exposure and neural development of prior stages, so what we do in those early stages absolutely matters. By the second stage, we have set up the brain, and the brain is now controlling its own plasticity. In this stage, the child's attention is involved: what are they interested in and focusing on as they master skill after skill? These are learning-driven changes in the brain, and they are very important. We know that the higher levels of the cortex continue to mature up to age 17 to 19 years and likely beyond (Merzenich, 2010). We know that neural organization is bottom-up and, as emphasized before, the quality and quantity of lower-level stimulation, exposure, and practice influence the quality of the higher-level neural maturation. It is clear that what we do in early intervention influences outcomes in later years.


In addition to critical periods, there are some generalizations about neuroplasticity to consider. Neuroplasticity is greatest in the first three-and-a-half years of life, and probably greatest in that 12-month set-up stage. We know that the younger the infant, the greater the neuroplasticity. Because the brain grows rapidly in the early days, we need prompt intervention, especially technology, to promote auditory skill development. In the absence of sound, the brain will reorganize to make better use of the other major senses, especially vision. Dr. Graeme Clark (2007), who is the primary inventor of the cochlear implant, reported that competition from visual brain centers will actually dominate the auditory brain centers unless we focus on auditory brain access. In other words, if we do not develop and grow the auditory centers with auditory input, some of that auditory tissue will be used for other tasks and no longer be available for auditory tasks. To change the cortex, we have to engage attention and working memory. As we said previously, repeated auditory stimulation leads to stronger neural connections, or experience-dependent plasticity (Kilgard, 2006). What this means is that 30 minutes a day of auditory stimulation is not going to do anything at all. Sensory experience directly shapes the brain's wiring and makes learning, especially guided neural reorganization, possible. We know that what we learn to do is a product of our culture and of our exposure, experience and practice. And lastly, we gain attention through engaging the prefrontal cortex (Musiek, 2009). We do have to interest the child and gain their attention in order for our exposure and practice to have any impact on the child's auditory neural development.

The Brain - the REAL Ear

As audiologists, when we diagnose a new hearing loss, we often start our conversations with new families at the level of the peripheral hearing system, where the hearing problem initially occurs. I am proposing, however, that rather than starting our conversation in the peripheral system, we start it in the real ear, the brain. We talk about how the hearing loss impacts auditory neural development and how our intervention impacts the exposure and development of critical auditory pathways. This approach starts us off immediately in a very serious vein, where we can convey the gravity of our intervention and technology in terms of the child's overall development.

With the focus on translating this basic cortical research into practical application in therapy, how do we create the hearing brain and teach it to be a listening brain? Most importantly, we have to work in harmony with our organic design. Human beings are designed to listen and talk. Occasionally we have families who will ask, "Do you think that if we use technology our child can possibly talk?" And the answer is (in the absence of extreme pathology such as no eighth cranial nerve, no brain stem, no temporal lobe): if we do what it takes to access, stimulate and develop the auditory centers, then yes, that child will learn to listen and talk. The bottom-line question to ask families is, "What is your vision for your child?" We know that more than 95 percent of children with hearing loss are born into hearing and speaking families, and that the vast majority are very interested in having their child communicate through spoken language because that is how their family communicates. Once we know that listening, speaking, and literacy are the desired outcomes, the next conversation is, "What will it take?"

As audiologists, we know it takes early identification and intervention. So what is the science behind what we know? Early intervention takes advantage of neuroplasticity and developmental synchrony so that we can stimulate and develop the brain during the time periods in which it is organically designed to be developed. We know that we have to provide vigilant, ongoing, and kind audiologic management. Nothing happens for listening and spoken language for a child with hearing loss without our good work. We know what it takes, and we know we can do it in this day and age. Early intervention programs, namely the Birth to Three programs, therefore, cannot be entirely home-based programs. Children with technology need a great deal of close audiologic management. They may need to be put in the sound booth, have hearing aids adjusted, and have earmolds made every few weeks. This cannot be accomplished through a home-based program alone. We have to determine what infrastructure is necessary to manage today's children.

We also need to get to the brain immediately to preserve auditory neural capacity and to engage that critical first stage of setting up the brain. One of the most efficient ways to get to the brain early is through loaner hearing aid banks. Let's face it: it takes time to purchase hearing aids. If it looks like the child may be a cochlear implant candidate, it does not make sense to waste time buying hearing aids when we could have immediate access to sound through vigilant and close management of loaner hearing aids until that child has received a thorough cochlear implant evaluation. We have to get the best possible quantity and quality of sound to the brain as early as possible, and we need to train the brain in acoustically favorable conditions. Doidge (2007) wrote an interesting book called The Brain That Changes Itself, which is about neuroplasticity throughout the lifespan. The book addresses people who have had strokes, autism and hearing loss, and one of the statements Doidge (2007) makes is that memory can be no clearer than the input the person receives. Muddy in, muddy out. It is our job as audiologists to get the clearest possible signal to the brain. If the desired outcome is listening and spoken language, we need to support the family by educating them about all of their options, and then help them choose a listening and spoken language specialist who is highly qualified in parent coaching and mentoring for the development of listening, speaking and literacy through parent involvement.

Once we develop the brain's auditory centers, we have set the stage to continually teach the child to listen, but we need their auditory attention in order to use audition to learn language. However, do not confuse language with knowledge. Even if we have done our job correctly by providing early intervention, we still have to guard against being seduced by how good today's children with hearing loss sound and how well they score on early language tests (Fairgray, Purdy, & Smart, 2010). We are often fooled into thinking that they are doing so well that we can release them from any sort of intervention, but we may actually be setting them up for failure if we do that. These children may begin school not showing any kind of failure because their speech and language skills seem to be very much within normal limits, and therefore, they do not qualify for any services. If we do our job right early on, we have created the auditory and linguistic platform through which the child can obtain knowledge and information, but language does not equal information and knowledge. Language is the platform and the skill set that the child has, or does not have, for attaining knowledge and information. What we see happening for some children today is that we think they are progressing well and we release them from therapy, but we forget that they still do not have 24-hour-a-day listening exposure. Because they do not have the same distance hearing as children with typical hearing, they will not have the same access to free information in the environment, and they still need additional listening exposure and practice. Today's children with severe to profound hearing loss who have cochlear implants are achieving at higher levels than children with the same degree of hearing loss who use hearing aids, because they have more brain exposure. We need to facilitate continuous spoken language and information exposure for these children. It is far more effective to build solid foundations and prevent failure than to try to rehabilitate later on.

The Audiologist's Role

Audiologists are pivotal. Until we do our job of identifying hearing loss and accessing the brain for these children, no one else can do theirs. No matter how amazing and skilled teachers, speech language pathologists and parents are, if we do not get to the brain with our technology, it might as well be 1970. Acoustic access to the brain, including incidental information that children learn through free listening, is the biggest challenge for today's children with hearing loss, worldwide. Babies and young children use incidental learning to obtain 90 percent of their knowledge of the world. In the "olden days" (a few years ago) we focused on in-your-face talking and active teaching. Now, we have to make sure children have incidental access to information, and we need to have high expectations for today's children. If the child is not progressing as expected, always suspect equipment first. Equipment breaks. Hearing loss changes. MAPs get corrupted. Things change in the life of the child. The audiologist plays a key role in making sure that child can access the auditory centers of the brain through technology, and that the brain receives practice, practice, practice!

How much practice is needed to influence neural structure? I mentioned earlier that we have probably grossly underestimated how much practice the brain requires to develop auditory centers. Hart and Risley (1999) studied children from professional families and determined that they had heard 46 million live-spoken words by age 4. This is the magnitude of practice that is critical, and it speaks volumes to the fact that anything less than every-waking-hour technology use will not cut it for children with hearing loss. Dehaene (2009) discussed the listening basis for reading; because of reduced acoustic bandwidth, children with hearing loss require three times the exposure of typically hearing peers to learn new words and concepts. And yet we have children who wear technology less than half the time of children with typical hearing. We have our work cut out for us, but if we do what it takes, the outcomes can be phenomenal.

For our purposes today, we are concerned with two primary intervention models: ecological and instructional intensity. The ecological model takes the standpoint that our children have the best opportunities when they are around typical social-linguistic models with high expectations. Instructional intensity means practice, practice, practice. We have to find a balance between these two models with each child that we see. When I see children in the clinic, families may leave with different handouts, but they are all labeled How to "Grow" Your Baby's Brain for Listening, Talking, Reading and Learning. Feel free to use any of this information. When you label information in this way (the "brain" way), it sets a serious conversational tone. Let's talk about what is included in these handouts.

Above all, we want the parent to bond and play with the child, and children do have to wear their technology every waking hour of the day. Encourage parents to check the technology consistently; it is guaranteed to malfunction at some point. Remember that without auditory brain access, you might as well talk to the floor. Minimize background noise; muddy in, muddy out. Sing, sing, sing. The brain loves rhythm, melody and repetition, and we have an evolving literature base on the value of singing for developing everything from brainstem areas all the way up to the cortex. Speak slowly and clearly and in full sentences with lots of melody to engage the prefrontal cortex. Most adults speak faster than most children can process, so for all children we need to slow down. Focus your child on listening. Call attention to sounds in the environment by pointing to your ear and using listening words such as, "You heard that!" or "You were listening!" Emphasize the sound before visual reinforcement for auditory enrichment. If the parent's desired outcome is listening, spoken language and literacy, read, read, read. Try to read at least ten books a day for babies, moving to chapter books for preschoolers. Hart and Risley (1999) found that children who are spoken to a lot in the early years tend to speak a lot themselves, and children who are read aloud to show much higher literacy skills, so with that in mind we want to absolutely emphasize reading. Name objects in the environment as you encounter them. Talk about and describe how things sound, look, and feel. Compare how objects are similar and different in size, texture, smell and shape. Talk about where objects are located by using prepositions and meaningful locations. Describe sequences of events and actions, because sequencing is necessary for organization.


We as audiologists have a very important role in working with the families and talking honestly about what it takes to achieve their desired outcomes. It is very important for us to provide information in a format that is relevant to our families, to provide referrals, and to provide the foundation of science that supports the necessary conditions to achieve their desired outcome. The purpose of hearing aids, cochlear implants, personal worn FMs, classroom audio distribution systems, and auditory-based intervention is to access, grow and develop auditory brain centers for listening, talking, reading and learning. The pediatric audiologist's role is pivotal in this regard.

Questions & Answers

Editor's note: For the Q & A section, Dr. Flexer was joined by Dr. Jane Madell & Dr. Jace Wolfe, who are presenting other courses in the Pediatric Audiology - Raising the Bar virtual conference.

What are the benefits of unilateral amplification, and what is the difference between auditory development of bilateral and unilateral hearing loss?

Carol Flexer: We clearly know through the literature that there is a huge advantage to stimulating the brain with sound from both sides, whether it is with bilateral cochlear implants, one cochlear implant and a hearing aid, or two hearing aids. Jace will get into some of those issues in his talk later this week.

Do you speak to parents of children with mild hearing loss differently?

Carol Flexer: That's a very good question, because many parents think of hearing loss categorically: you either hear everything or nothing. Children with mild hearing loss seem to hear a lot if it is quiet and you are close to them when speaking, so parents often do not recognize how much of the redundant, free information in the environment the child is missing. We audiologists have to provide evidence of what that child can hear with and without technology at soft conversational levels and in noise, because those parents do need specific evidence. But no, I do not think you have to talk to these parents differently. You just have to explain it in terms they can understand.

Jane Madell: I find it helpful to go into the sound booth and test normal conversation and soft conversation in quiet and in competing noise without the technology because both the families and the children need to see what they're missing. When they see what they're missing, they can make a more educated decision.

Jace Wolfe: I agree that this can be very difficult from a counseling standpoint to help parents understand the significance of even a mild loss, because oftentimes without hearing aids the children still respond when spoken to from a close distance, as Carol mentioned. Without a doubt, however, when those children hit certain points in time when academic curriculum becomes more rigorous, you start to see delays. Those types of hearing losses will cause problems in the future, and I think that we still need to be really aggressive about promoting management for those children.

How might you talk to a family who chose sign language as the primary mode of communication but has now decided they want their outcome to be spoken language?

Carol Flexer: That is definitely sensitive counseling territory. I always start with the desired outcome and what it takes from a neurological perspective. I also do not say, "Don't sign." I provide evidence of what that child's brain access is with and without their technology and create a context for what hearing loss is in 2011, and we do that in a sensitive way. Jane, did you want to add something to that?

Jane Madell: Yes. I also talk about the fact that sign language is a different language with different grammar than spoken language, and I compare that to trying to listen to Italian and English at the same time. I try to help families understand that they are really teaching a child two separate languages at the same time. By using that example, I find that parents understand the difference. I also say to them that sign language is not a bad thing, but having less auditory information and exposure may cause roadblocks for both spoken language and literacy.

How does sign language develop language centers of the brain prior to cochlear implantation when auditory access is not yet sufficient?

Carol Flexer: A study published this year by Geers et al. (2011) took a longitudinal approach to looking at the outcomes of children who received cochlear implants in early childhood. One thing the authors found was that what we do early on (the intervention provided in the early years) absolutely does impact later outcomes. So if we are not providing as much auditory exposure early on, that is going to negatively influence later outcomes. Get those hearing aids on, and stimulate even if there is only a little bit of hearing.

Jane Madell: I would add that these babies can be fit with an FM system and whoever is with the baby can wear that FM system full-time. It certainly is not the same kind of sound access they will have when they get a CI, but the FM does give the brain much more auditory access and the remote microphone of the FM system overcomes noise and distance.

Jace Wolfe: An FM system is a great way to prime the pump for these kids who will receive a cochlear implant. This means that when they get their implant, they can hit the ground running. I have seen kids take off when they get their cochlear implant, even when they had a profound hearing loss and likely limited access to sound through hearing aids. FDA guidelines indicate that cochlear implantation should be considered for children at one year of age, but for children who have profound hearing loss, we oftentimes try to push for implantation prior to that, as early as eight or nine months of age, for those kids who receive very limited benefit from their amplification.

Jane Madell: It is clear that we have much more to talk about, but we also have four more days to talk about it in this conference, which is so exciting. Carol and Jace will still be here to participate in these conversations as we go through the week. Please join us tomorrow when Gail Whitelaw will talk about auditory processing disorders. Jace Wolfe will talk about cochlear implants and hearing aid fittings on Wednesday and Thursday, and on Friday I will talk about helping families accept technology. Some of the questions that came up today will be addressed in more detail in my presentation on Friday. Thank you so much for participating.

References
Dehaene, S. (2009). Reading in the brain: The science and evolution of a human invention. New York: Penguin Group.

Doidge, N. (2007). The brain that changes itself. London, England: Penguin Books, Ltd.

Fairgray, E., Purdy, S.C., & Smart, J.L. (2010). Effects of auditory-verbal therapy for school-aged children with hearing loss: An exploratory study. The Volta Review, 110(3), 407-433.

Geers, A.E., Tobey, E., & Moog, J.S. (2011). Editorial: Long-term outcomes of cochlear implantation in early childhood. Ear and Hearing, 32(1), 1S.

Hart, B., & Risley, T.R. (1999). The social world of children learning to talk. Baltimore: Brookes.

Kilgard, M.P. (2006). Cortical plasticity and rehabilitation. Progress in Brain Research, 157, 111-122.

Merzenich, M.M. (2010, April). Brain plasticity-based therapeutics in an audiology practice. Learning Lab presented at the American Academy of Audiology National Conference, San Diego.

Musiek, F.E. (2009). The human auditory cortex: Interesting anatomical and clinical perspectives. Audiology Today, 21(4), 26-37.



Carol Flexer, PhD, CCC-A, LSLS Cert. AVT

The University of Akron and Northeast Ohio Au.D. Consortium & Listening and Spoken Language Consulting

Dr. Carol Flexer received her doctorate in audiology from Kent State University in 1982. She was at The University of Akron for 25 years as a Distinguished Professor of Audiology in the School of Speech-Language Pathology and Audiology. Special areas of expertise include pediatric and educational audiology. Dr. Flexer continues to lecture and consult extensively nationally and internationally about pediatric audiology issues. She has authored numerous publications and co-edited and authored ten books. Dr. Flexer is a past president of the Educational Audiology Association, a past president of the American Academy of Audiology, and a past president of the Alexander Graham Bell Association for the Deaf and Hard of Hearing Academy for Listening and Spoken Language.

