
20Q: What Exactly is “Normal” Hearing?
Christopher Spankovich, AuD, PhD, MPH
October 7, 2024


From the Desk of Gus Mueller


There are some concepts in audiology that take a little thinking to grasp—such as, why is it that 6 dB + 6 dB = 9 dB? But here’s an easy one for you: When hearing thresholds are measured, what is the dB cut-off for normal hearing? I suspect that this is something that has been thought about since at least 1914, when the first commercially available electronic audiometer, the Western Electric 1A, became available. I’ll give you a clue to the answer. Over the past 100+ years, we seem to have narrowed it down to a value bigger than 10 and smaller than 30 dB! Doesn’t it seem that we could do a little better than a 20 dB window? Oh sure, there was a time when we had to convert from ASA to ISO, which did lead to a little confusion, but that was 50 years ago.
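For readers puzzled by that opening bit of arithmetic: decibels are logarithmic, so two equal, uncorrelated sources combine by adding intensities, not decibel values, which raises the level by about 3 dB. Here is a minimal sketch of that calculation (the function name is just for illustration):

```python
import math

def combine_incoherent_levels(levels_db):
    """Combine uncorrelated sources by summing intensities, then converting back to dB."""
    total_intensity = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_intensity)

# 6 dB + 6 dB: doubling the intensity adds 10*log10(2), or about 3 dB
print(round(combine_incoherent_levels([6, 6]), 1))  # ≈ 9.0
```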

If you have an idle moment someday, take a look at Google images for “audiogram.” What you will find is a cut-off for “normal hearing” anywhere from 15 to 25 dB. To help add to the confusion, the Wikipedia page has two audiograms posted, one with a 25 dB cut-off, the other with a 20 dB cut-off. 

Interestingly, the cut-off used by the Hearing Industry Association (HIA) is 20 dB, but if you travel a little north, the Canadian HIA uses a cut-off of 15 dB. Seems like it might be time to bring in someone to sort all this out, and that’s what we did this month here at 20Q.

Christopher Spankovich, AuD, PhD, MPH, is Professor, Clinical Audiologist, and the Vice Chair of Research for the Department of Otolaryngology–Head and Neck Surgery at the University of Mississippi Medical Center (UMMC). He also serves as the director of the Doctor of Audiology Program at UMMC. His research program, which has been funded by industry, federal, and professional bodies, spans basic and epidemiological investigations into identifying and leveraging modifiable risk factors to mitigate susceptibility to acquired hearing loss and tinnitus. You’re probably familiar with his work from his many publications, and his popularity as a convention and workshop speaker.

Dr. Spankovich is an associate editor for the International Journal of Audiology and recently completed service on the Board of Directors for the American Academy of Audiology. He currently serves on the Board of the Accreditation Commission for Audiology Education. He also provides consulting services for medico-legal matters.

So what is the audiometric cut-off for “normal” hearing? I’m not going to give you the answer, but Chris provides some very logical reasons why one cut-off level seems to make the most sense.

Gus Mueller, PhD
Contributing Editor

Browse the complete collection of 20Q with Gus Mueller CEU articles at www.audiologyonline.com/20Q

20Q: What Exactly is "Normal" Hearing?

Learning Outcomes 

After reading this article, professionals will be able to:

  • Describe the historical rationale for audiometric zero.
  • Describe the rationale for the levels and frequencies included in the calculation of percent hearing loss for speech.
  • Discuss relative factors in defining hearing loss.

1. What is "normal" hearing?

How about we begin with the term "normal"? In the world of audiology, we use this word quite often, but it may do more harm than good, as it is subjective to the listener or, in this case, the reader. Furthermore, as audiologists, we have an expansive set of diagnostic tools that can identify hearing deficits that are not identified by customary pure-tone audiometry. The imprecise nature of the term "normal" has led the editors of the Journal of the American Academy of Audiology to use alternative language that specifies a technical definition relative to a reference. I suspect that this concept of a reference will be important as we continue our discussion. You can read about it here: https://www.audiology.org/news-and-publications/journal/jaaa-language-guidance/.

2. Why do most audiologists use the term "normal hearing"?

Historically, this is based on statistical concepts applied to data to summarize observations relative to the average (mean) and the variability among a sample reflective of the larger unobserved population. If the distribution is symmetric around the mean, it is referred to as a normal distribution, the good old bell-shaped curve that we have all heard about. Of course, not all symmetrical distributions are "normal"; still, there are properties that help define it. For example, the mean (average), median (midpoint), and mode (most frequent observation) are equivalent, with zero skew. Nonetheless, these concepts of mean and variability are critical to how we have come to define "normal" hearing.
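To make that concrete, here is a minimal simulation, assuming thresholds for a young, healthy group are drawn from a Gaussian with a mean of 0 dBHL and an SD of 5 dB (the SD value discussed later in this article); the simulated values are illustrative, not measured data:

```python
import random
import statistics

random.seed(1)
# Simulated thresholds (dBHL) for a young, healthy group: a symmetric
# distribution centered on 0 dBHL with SD = 5 dB (illustrative values).
thresholds = [random.gauss(0.0, 5.0) for _ in range(100_000)]

print(round(statistics.mean(thresholds), 2))    # ≈ 0: the mean
print(round(statistics.median(thresholds), 2))  # ≈ 0: the median matches the mean
# With zero skew, mean and median coincide, and the mode of the underlying
# distribution sits at the same point (0 dBHL).
```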

3. No one told me there was going to be statistics; you are starting to lose me.

No problem; how about a little history? At the turn of the century (that is, the last century, back in the late 1800s and early 1900s), physicists, physicians, engineers, and early pioneers of the hearing sciences and audiology sought to identify minimal hearing sensitivity, aka the minimal sound detectable by healthy young adults. These healthy young adults were generally 18 to 40 years old, without a history of ear disease, noise exposure, or perceived hearing deficits, though the inclusion criteria (and how thoroughly they were described) varied from study to study. Pure-tone stimuli were used to determine the minimal detectable level.

Based on these early works (e.g., Fletcher & Wegel, 1922; Sivian & White, 1933), 10⁻¹⁶ watts/cm² (aka 0.0002 dyne/cm², aka 20 µPa) for a 1000 Hz signal was eventually chosen as the reference for physical sound measurement. Fletcher and Munson (1933) recommended this pressure level as a simple, convenient number that fell within the range of threshold measurements obtained with the standard listening method. In 1949, at the meeting of the International Congress of Audiology, the reference was set, and the committee suggested that the remaining frequencies be based on the Fletcher and Munson equal loudness contours. We can all agree that the standard reference for measuring decibels of sound pressure level (dB SPL) is 20 µPa.
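In practical terms, dB SPL is just 20 times the log of a measured pressure relative to that 20 µPa reference. A minimal sketch (the function and example values are illustrative):

```python
import math

P_REF = 20e-6  # 20 µPa, i.e., 0.0002 dyne/cm^2

def db_spl(pressure_pa):
    """Express an RMS pressure (in pascals) in dB SPL re: 20 µPa."""
    return 20 * math.log10(pressure_pa / P_REF)

print(db_spl(20e-6))  # 0.0 dB SPL: a pressure equal to the reference
print(db_spl(0.02))   # 60.0 dB SPL: in the ballpark of conversational speech
```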

The next chapter of our story continues with identifying the dB sound pressure level at different test frequencies for calibration purposes, referred to as audiometric zero. As Fletcher and Munson clearly demonstrated, hearing sensitivity differs across frequencies. Both international groups (five different countries, including the United Kingdom and Japan) and United States-based groups sought to establish the minimal sound pressure level (SPL) detectable by the average healthy young adult with presumed "normal hearing," using the 20 µPa reference (re: 20 µPa), as measured with the Western Electric 705-A earphone mounted in a National Bureau of Standards 9-A coupler. A byproduct of this race to establish standards was notable differences in the methodology used to measure hearing (testing methods, testing space), and thus differences in the measured thresholds. While the audiometric zero controversy lingered on, standards were created both in the United States (the pre-ANSI standard, called the ASA) and internationally (ISO). This controversy was eventually cleared up, but not without contributing to our issue at hand.
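The practical upshot is that dB HL is simply dB SPL with the frequency-specific audiometric zero subtracted out, so 0 dBHL means "at the average young-adult threshold" at every frequency. A minimal sketch, using approximate reference values for a supra-aural earphone that are illustrative only (actual calibration uses the RETSPLs in the current ANSI/ISO standard for the specific transducer and coupler):

```python
# Approximate audiometric-zero values (dB SPL) by frequency; illustrative only.
APPROX_AUDIOMETRIC_ZERO = {250: 26.5, 500: 13.5, 1000: 7.5, 2000: 9.0, 4000: 10.5, 8000: 13.0}

def db_hl(freq_hz, level_db_spl):
    """Convert a level in dB SPL to dB HL by subtracting the audiometric zero."""
    return level_db_spl - APPROX_AUDIOMETRIC_ZERO[freq_hz]

print(db_hl(1000, 7.5))   # 0.0 dBHL at 1000 Hz
print(db_hl(250, 26.5))   # 0.0 dBHL at 250 Hz, even though the SPL is much higher
```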

So, to answer your question, the term commonly applied to the identified audiometric zero was average normal hearing (Fowler, 1943); values higher than zero were considered decibels of hearing loss (dBHL), or loss in decibels. This represented the average minimal audibility (hearing sensitivity) for pure-tone stimuli among persons without perceived deficits and/or who were considered young and healthy.

4. If I understand all that correctly, normal hearing, based on pure-tone audiometry, is 0 dBHL?

Well, that depends on whom you ask. Going back to the beginning of our discussion, 0 dBHL is our reference for the average hearing sensitivity of a so-called healthy young adult. Still, we have the expected variance around the mean. Moreover, since we do not live in a world that relies on low-level pure tones, but rather on more complex sounds often experienced at suprathreshold levels, we need to consider when hearing is compromised for these types of sounds, when a person perceives hearing difficulty, and even when they seek intervention. We also can consider, relative to pure-tone audiometry, when other measures of auditory function begin to show deficits (e.g., auditory evoked potentials and otoacoustic emissions). As we all know, not all hearing deficits are peripheral. Also, we are not always dealing with adults. Children, depending on their age and ability to complete conventional testing, may do slightly better, particularly in the higher frequencies, based on smaller ear canal dimensions (Trehub et al., 1988). However, with children we often limit our testing methods to minimize test burden and don't often test thresholds below 0 dBHL, or in some cases, even below 15 dBHL.

Also recognize that, though pure-tone audiometry is a common test used by audiologists to characterize hearing loss, even the early pioneers, working before audiology existed as a distinct profession, understood its limitations. Fletcher, Fowler, Bunch, and others, in writings going back to the early 1900s, pointed out the limitations of pure-tone hearing tests in identifying hearing impairment/handicap, speech intelligibility, and underlying pathologies.

5. What do audiologists today typically consider normal pure-tone audiometry?

Several years ago, I asked this question via a social media poll. The responses ranged from 0 to 25 dBHL. Still, most of the audiologists in my survey indicated a cutoff or fence for normal pure-tone audiometry (as a measure of hearing) between 15 and 25 dBHL. What is less clear is why 15-25 dBHL. What research identified this as the critical cutoff? Unfortunately, the answer is complicated. The range of what was considered "normal" hearing based on pure-tone audiometry was not necessarily identified based on a cutoff for change in perception or compromise in fidelity, but rather on a combination of the sensitivity required to understand speech (in particular for phone communication), changes in standards describing minimal audibility, and medical-legal factors, along with political pressure. If average hearing sensitivity is 0 dBHL, it would make sense that the further thresholds rise above that average, the higher the odds of a hearing deficit.

6. Who established the cutoffs, like 15, 20 or 25 dB as being meaningfully different from 0 dBHL?

Sorry, but again we need to dig into a little history. Early interpretations of audiograms often simply compared thresholds to the reference (0 dBHL), and any value above the reference was considered some degree of dB hearing loss (the original dBHL). In the early 1930s, the Consultants on Audiometers and Hearing Aids of the Council of Physical Medicine (part of the American Medical Association) were appointed the task of formulating a method for quantifying "hearing impairment." This group included many of our early pioneers (Bunch, Coates, Fowler, Sabine, etc.). Many proposals were put forth based on the existing literature, which was greatly driven by the work at Western Electric/Bell Laboratories identifying the minimal frequency content and levels for speech understanding via the phone (not necessarily normal hearing). What ultimately resulted was the 1942 AMA formula for percent hearing loss for speech, revised and simplified in 1947 (moving from octave intervals to discrete frequencies and removing weightings based on intensity).

The 1942 AMA formula included a chart for plotting the thresholds from 250 to 4000 Hz. As thresholds reached 15 dBHL or higher, a percent loss was assigned at each level and frequency; these values were summed to determine the percent hearing loss.

7. That explains the 15 dBHL cutoff, but why the use of 20 or 25 dBHL?

It does seem logical that this is the origin of the 15 dB cut-off, but not so; it is only coincidental. Recall that in my response to one of your earlier questions, I mentioned an audiometric zero controversy. It turns out the United States (ASA) standard for audiometric zero was different from the international (ISO) standard. Specifically, the SPL for the ASA audiometric zero was around 10 dB off from the ISO value (6-15 dB depending on frequency); the simple conversion, used by the military and others, was to add 10 dB to the ASA threshold at most frequencies to derive the equivalent ISO threshold.

It is important to point out that the data used to derive the values for the two standards were collected differently. The ASA standard was based on survey data collected by the United States Public Health Service (USPHS), led by a young epidemiologist (Beasley), across 17 sites that happened to be state fairs (Beasley excluded data for participants with thresholds greater than 15-20 dB, as they were outside the expected variance for normal). In contrast, the ISO data were collected in laboratory environments using consistent psychophysical methods.

Rather than admitting the US standard was inaccurate and adopting the ISO, the next generation of US-based hearing scientists (led by the efforts of Aram Glorig in 1954-1955) repeated the testing, applying stricter standards. This resulted in a new US standard for audiometric zero (ANSI-1969). To account for this difference when calculating percent hearing loss, the formula was adjusted (AAOO 1959, revised 1971): rather than subtracting 15 dB, it was changed to subtract 25 dB to reflect the ~10 dB difference. From here, scales of impairment began to appear showing various cutoffs for "normal hearing" (based on a pure-tone average of 500-2000 Hz). The most common was probably Goodman's (1965) classification, endorsed in the Katz (1978) textbook, which used a 25 dBHL cutoff for "normal hearing."

8. Should it have been that simple, to just add 10 dB?

This is debatable. At the time, the transition to the updated standard, and how to address the medico-legal consequences, was the subject of significant controversy. Numerous suggestions were made, but ultimately, simply adding 10 dB to the hearing impairment calculation was adopted.

9. Why did this become so important regarding how we define normal hearing?

In 1948, a landmark legal case (Slawinski vs. J.H. Williams and Company) went before the New York State Court of Appeals; the court found in favor of the plaintiff (Slawinski) for occupational hearing loss and awarded him $1,661.25 (more than $20,000 today). Given the high number of noise-exposed workers, much dissatisfaction was expressed by industry leaders and politicians. There was such a stir that the AMA, along with the American Academy of Ophthalmology and Otolaryngology (AAOO), was "inspired" to re-evaluate the 1942/1947 formula. The result was the removal of 4000 Hz from the formula, known as the AAOO 1959. This formula is still used by numerous states for determining the percentage of hearing loss for speech. The frequencies of 500-2000 Hz are, of course, the range demonstrated by Fletcher as the spectrum most important for speech hearing; that is, you can still understand speech reasonably well when it is filtered to this spectrum. You can likely appreciate that the removal of 4000 Hz from the formula was a welcome change for industry leaders and politicians.

According to Aram Glorig (1961), the original purpose of compensating for disability was to replace reduced earning capacity from occupationally induced impairment; obviously, the common denominator of hearing is communication by speech, not whether one can hear the top note on the piano.

10. I don't quite understand. I think we all know that hearing loss that starts above 2000 Hz can still cause a significant handicap for understanding speech.

Consider that most of the work we have described demonstrating the importance of frequency spectrum and thresholds for speech understanding was completed in quiet, mostly in laboratory test conditions, and in large part, for telephone communication. This work, dating back to the early 1920s at Bell Laboratories, really set the foundation for the importance of 500-2000 Hz for speech understanding. A primary goal of the work at that time was to identify the minimum speech spectrum needed to maintain intelligibility on the phone. They were not necessarily looking at perceived change in fidelity or environmental factors (e.g., a noisy room).

Of course, it was also well recognized by the 1930s that noise-, age-, and even drug-related changes to hearing often manifest earliest at frequencies above 2000 Hz. Let's not be too hard on our pioneers, as hearing aid technology of the time was limited, and persons with hearing loss limited to frequencies above 4000 Hz commonly reported minimal benefit from the amplification of the day. Fowler (1943) stated: (1) no patient with deafness that, on average, is less than 30 dB above the normal threshold (0 dBHL) for the speech range will be helped by a hearing aid; and (2) if he has deafness over 85 dB for this range, he will rarely be helped unless he has worn a hearing aid while his deafness was less pronounced.

Now, I will say that there was also research going back to the 1920s showing that there are important contributions at high frequencies up to 8000 Hz (e.g., Fletcher, 1929) and, as we recognize now, even beyond 8000 Hz (e.g., Hunter et al., 2020). And we know that these high-frequency components are notably important to speech understanding in noise and to perceived hearing loss. This early research led to the development of the Articulation Index (AI), which has since been modified into the Speech Intelligibility Index (SII).

11. Now, you earlier mentioned the AAOO 1959 formula. Is there nothing more recent?

If we fast-forward a couple of decades, a very informative study was published in 1978 by Alice Suter, PhD, for the Air Force and EPA. The study compared three groups with different mean hearing sensitivity who underwent speech intelligibility in noise assessments. All three groups had "normal hearing" for speech based on the AAOO formula we described (a "speech PTA" with a 25 dB cutoff). [The work was in part inspired by Kryter, who was a vocal critic of the AAOO 1959 and suggested that the 25 dBHL cutoff was much too high, assuming a one-meter distance between the speaker and listener. Kryter advocated a 15 dBHL cutoff, using the new ANSI reference.]

The results showed large differences in speech recognition ability in noise with minimal differences in quiet. The recommendations were to include frequencies above 2000 Hz in measures of hearing for speech understanding and to lower the fence; the recommended fences ranged from 9 to 22 dBHL depending on the frequencies included. A complementary study looking at self-perceived hearing handicap identified the upper fence for normal at 9-14 dBHL, depending on the frequencies included in the PTA (Merluzzi and Hinchcliffe, 1973).

12. Did things change?

To some extent, yes. One year later (1979), the American Academy of Otolaryngology (AAO) updated the formula to include 3000 Hz, recognizing the contributions of higher frequencies to speech understanding, but did not change the 25 dBHL fence; rather, a PTA (500-3000 Hz) of 16-25 dBHL was labeled non-material hearing impairment and >25 dBHL material hearing impairment (Dobie, 2007).
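For readers who want to see the arithmetic, the AAO-1979 method is commonly described as assigning 1.5% impairment per dB that the 500-3000 Hz PTA exceeds a 25 dBHL low fence (capped at 100%), with the better ear weighted 5:1 against the worse ear. Those specific growth-rate and weighting values are not spelled out in this article, so treat the sketch below as an illustration of the commonly cited form, not a compensation tool:

```python
def monaural_impairment(pta_db_hl, low_fence=25.0, rate_per_db=1.5):
    """Percent impairment for one ear: 0% at or below the low fence,
    growing 1.5% per dB above it, capped at 100%."""
    excess = max(pta_db_hl - low_fence, 0.0)
    return min(excess * rate_per_db, 100.0)

def binaural_impairment(better_pta, worse_pta):
    """Binaural handicap: the better ear is weighted 5:1 over the worse ear."""
    return (5 * monaural_impairment(better_pta) + monaural_impairment(worse_pta)) / 6

print(monaural_impairment(40))      # 22.5% for a 40 dBHL PTA (500-3000 Hz)
print(binaural_impairment(40, 60))  # 27.5% when the worse ear is 60 dBHL
```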

13. It is clear that slight hearing loss, including the high frequencies, can lead to challenges. Why not lower the fence and include even higher frequencies?

Over 40 years ago, Clark (1981) made a strong argument for modifying the Goodman scale to include slight hearing loss (16-25 dBHL). This scale was then adopted by the American Speech-Language-Hearing Association (ASHA) for both children and adults.

Furthermore, clinically, as I mentioned earlier, we don't limit ourselves to hearing loss percentage calculations or the PTA; rather, we look at the overall audiometric profile, along with other testing, to inform our recommendations.

14. I have a few more questions, but before we continue, can you provide a brief summary of all we've talked about?

Sure, we have hit on three major points:

  1. 0 dBHL represents the average normal hearing sensitivity (i.e., threshold, minimal audibility) for a healthy young adult. The literature contemporaneous with establishing 0 dBHL often refers to that level as "normal hearing." Indeed, the original meaning of dBHL was dB hearing loss, not dB hearing level. Early audiograms identified any level above 0 dBHL as hearing loss. However, it is also clear that persons with thresholds above 0 dBHL can do very well with understanding speech spoken at average levels.
  2. The recommended fence for "normal hearing" for speech intelligibility, and the frequencies included in the average, have been influenced by considerations of test environment, stimuli presented in quiet vs. noise, perceived hearing difficulty, medical-legal/political factors, etc. The cutoffs were not designed to define "normal hearing" but to define adequate hearing to minimize speech understanding difficulty.
  3. Hearing above 2000 Hz is important for speech understanding (and hearing in general) and hearing loss commonly manifests earliest at frequencies above 2000 Hz.

15. Okay, so 0 dBHL is the reference, and 25 dBHL appears to be a bit high of a fence. But, I have to say, I do have patients with hearing at 25 dBHL who report no hearing difficulty.

Excellent point! As mentioned earlier, there are limitations to pure-tone audiometry and its relevance to perceived hearing difficulty. Hearing is a psychophysical phenomenon that is altered by auditory (e.g., competing noise) and non-auditory factors (e.g., cognition, attention). Where you place your fence depends on the function of the fence. The historical basis is calculating hearing loss percentage for medical-legal purposes. By the 1970s and 80s, we see movement toward recognizing that minimal hearing loss contributes to hearing challenges, and a call to treat this as meaningful hearing loss.

16. In your opinion, what do you recommend, of course, recognizing the limitations of pure-tone audiometry?

In my opinion, it depends on the function of the fence. For hearing conservation purposes, we recognize changes as small as a 10-15 dB elevation in thresholds as a significant threshold shift (STS). For primary prevention, prevention before a person manifests symptoms, we would want to apply a fence at a lower level. For secondary prevention, prevention early in the manifestation of symptoms, we would want a fence low enough to capture early symptoms and allow intervention to reduce progression. For tertiary prevention, we would identify a level where a person has significant symptoms and needs intervention for the associated sequelae, like quality-of-life impact.

17. That all makes sense, but you didn't really answer my question.

Well, my opinion is not what is important. What is important are the data. As we've already talked about, there is a rich history of research that has identified levels as low as 9 dBHL to as high as 25 dBHL as the recommended fence for "normal" hearing. Recently, our research group completed a data-driven analysis of the National Health and Nutrition Examination Survey (NHANES) to inform a fence for application as a secondary prevention tool (de Gruy et al., 2024). Using data based on self-reported perceived hearing difficulty (PHD), we identified the levels and frequencies most sensitive to increasing the odds of PHD.

What we found was highly consistent with the evidence-based literature. First, the level at which report of PHD significantly increased depended on the frequencies included in the PTA. If we applied the classic speech PTA (500-2000 Hz), the report of PHD significantly increased when the speech PTA rose above 5 dBHL. Furthermore, over 20% of the sample with a speech PTA between 11-15 dBHL reported PHD. This was spot-on consistent with Merluzzi and Hinchcliffe (1973).

Next, using a different PTA, the PTA4, which is consistent with the AMA 1947 method (500-4000 Hz) and endorsed by the World Health Organization (WHO), we found that the report of PHD significantly increased when PTA4 rose above 5 dBHL and that over 20% of the individuals with PTA4 between 16-20 dBHL reported PHD.

Then, we also looked at high-frequency hearing (note: the NHANES only goes out to 8000 Hz; we did not have extended high-frequency data). For a PTA using the frequencies of 4000-8000 Hz, we found a statistically significant increase in the report of PHD when the PTA rose above 15 dBHL, and nearly 20% of participants with a high-frequency PTA between 21-26 dBHL reported PHD. Again, this was highly consistent with the literature relative to PHD and speech-in-noise ability.
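If it helps to see how the different averages behave, here is a minimal sketch computing the three PTAs discussed above from a hypothetical single-ear audiogram; the assumption that the high-frequency PTA uses 4000, 6000, and 8000 Hz is mine, based on the standard NHANES test frequencies:

```python
# Hypothetical audiogram (dBHL) for one ear, keyed by frequency in Hz.
audiogram = {500: 5, 1000: 5, 2000: 10, 3000: 15, 4000: 20, 6000: 25, 8000: 25}

def pta(thresholds, freqs):
    """Pure-tone average across the requested frequencies."""
    return sum(thresholds[f] for f in freqs) / len(freqs)

speech_pta = pta(audiogram, (500, 1000, 2000))   # classic speech PTA
pta4 = pta(audiogram, (500, 1000, 2000, 4000))   # AMA-1947/WHO-style PTA4
hf_pta = pta(audiogram, (4000, 6000, 8000))      # assumed high-frequency PTA

print(round(speech_pta, 1), round(pta4, 1), round(hf_pta, 1))  # 6.7 10.0 23.3
```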

Interestingly, all of the PTAs we examined (and we did look at other variations) had only fair to sufficient diagnostic accuracy for PHD, meaning that pure-tone audiometry has significant limitations in capturing PHD. Further, the high-frequency PTA was the most sensitive but the least specific.

18. What was the conclusion from your research?

Based on our analysis and the literature I just described, we recommended 15 dBHL as the cutoff reference for pure-tone audiometry relative to PHD, for both low- and high-frequency PTAs and individual frequencies. But of course, there are also other rationales to support 15 dBHL as a fair fence.

19. What would those be?

Once pure-tone audiometry thresholds increase above 15 dBHL, we observe significant reductions in peripheral measures of auditory function, including otoacoustic emissions and auditory brainstem response wave I amplitude (e.g., Parker, 2020; Bramhall et al., 2015). In other words, peripheral function is compromised, based on objective measures, by the time thresholds reach 15 dBHL. This does not mean that OAEs or ABRs will not be present; of course they will. However, they are often reduced in amplitude as thresholds move above 15 dBHL. This is in part because the studies establishing normative data (in adults) often used 15-20 dBHL as the cutoff for inclusion.

Numerous studies examining perceived hearing difficulty and speech-in-noise capability have also identified levels as low as 9-15 dBHL as the recommended fences based on pure-tone thresholds (Kryter et al., 1962; Merluzzi and Hinchcliffe, 1973; Suter, 1978). Martin and Champlin (2000) examined hearing aid sales relative to the PTA (500-2000 Hz) and found that an increase in the pursuit of amplification was observed as the PTA increased above 15 dBHL.

Finally, statistics! The standard deviation (SD) for pure-tone audiometry average thresholds approximates 5 dB, and up to >10 dB at extended high frequencies. Now, as we already discussed, 0 dBHL is the mean pure-tone sensitivity (threshold, aka minimum audibility) for healthy young adults. If 1 SD = 5 dB, then 2 SD = 10 dB from the mean (0 dBHL). Greater than two SDs is often used as a statistical convention for a significant difference from the mean. The empirical rule states that about 95% of the group will fall within two SDs, which here is 10 dB; further, about 99.7% of the group will fall within three SDs, which is 15 dB. In other words, as a threshold exceeds 10 dB from the mean (0 dBHL), the difference is commonly considered statistically different.
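Worked out with the numbers from the text (a mean of 0 dBHL and an SD of roughly 5 dB), those cut points fall out directly; the short sketch below just makes the arithmetic explicit:

```python
MEAN_DB_HL = 0.0  # average young-adult threshold
SD_DB = 5.0       # approximate SD of thresholds, per the text

for n_sd, coverage in ((1, "~68%"), (2, "~95%"), (3, "~99.7%")):
    bound = MEAN_DB_HL + n_sd * SD_DB
    print(f"{coverage} of listeners fall within {n_sd} SD, i.e., within {bound:.0f} dB of 0 dBHL")
```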

20. What is "normal" hearing?

Well, the average hearing sensitivity for a healthy young adult is 0 dBHL at each tested frequency. Adequate hearing for speech intelligibility in quiet falls around 10-25 dBHL, even when limited to frequencies of 2000 Hz and below. Adequate hearing for speech intelligibility in noise is a bit more complicated (as everyone has greater difficulty as certain SNRs are approached), but the literature suggests 9-22 dBHL, depending on the frequencies included and the test material. When considering PHD, the literature and our data suggest 10-20 dBHL, depending on the frequencies included and whether frequencies above 4000 Hz are included.

Overall, based on historical literature, physiological changes, perceptual changes, initial pursuit of intervention, basic statistics, and recommendations from professional organizations, 15 dBHL is a reasonable conservative fence.

Also, note that what we've talked about has mainly focused on how we use pure-tone audiometry to define hearing loss. Pure-tone audiometry is actually only a "fair" measure of PHD. Evaluation and management of patients with PHD should extend beyond pure-tone audiometry and include consideration of functional ability.

References

AAO. (1979). Guide for the evaluation of hearing handicap. JAMA, 241(19), 2055–2059.

American Academy of Ophthalmology and Otolaryngology Committee on Conservation of Hearing. (1959). Guide for the evaluation of hearing impairment. Trans Am Acad Ophthalmol Otolaryngol, 63, 236–238.

Bramhall, N., Ong, B., Ko, J., & Parker, M. (2015). Speech perception ability in noise is correlated with auditory brainstem response wave I amplitude. J Am Acad Audiol, 26(5), 509–517.

Clark, J. G. (1981). Uses and abuses of hearing loss classification. ASHA, 23, 493-500.

Dobie, R. A. (2015). Medical-legal evaluation of hearing loss (3rd ed.). San Diego, CA: Plural Publishing.

Fletcher, H. (1929). Speech and hearing. New York, NY: Van Nostrand.

Fletcher, H., & Munson, W. A. (1933). Loudness, its definition, measurement and calculation. JASA, 5, 82-106.

Fletcher, H., & Wegel, R. (1922). The frequency sensitivity of normal ears. PNAS, 8(1), 5-6.

Fowler, E. P. (1943). Audiogram interpretation and fitting of hearing aids. Proc Royal Soc Med, 36(8), 385-440.

Glorig, A. (1961). The problem of noise in industry. AJPH, 51(9), 1338-1346.

Goodman, A. (1965). Reference zero levels for pure-tone audiometer. ASHA, 7, 262-263.

Hunter, L. L., Monson, B. B., Moore, D. R., et al. (2020). Extended high frequency hearing and speech perception implications in adults and children. Hear Res, 397.

Katz, J. (1978). Clinical Audiology. In J. Katz (Ed.), Handbook of clinical audiology (2nd ed.). Baltimore: Williams and Wilkins.

Kryter, K. D., Williams, C., & Green, D. M. (1962). Auditory acuity and the perception of speech. JASA, 34(9), 1217-1223.

Martin, F. N., & Champlin, C. A. (2000). Reconsidering the limits of normal hearing. J Am Acad Audiol, 11, 64-66.

Merluzzi, F., & Hinchcliffe, R. (1973). Threshold of subjective auditory handicap. Audiology, 12(2), 65–69.

Parker, M. A. (2020). Identifying three otopathologies in humans. Hear Res, 398.

Sivian, L. J., & White, S. D. (1933). On minimum audible sound fields. JASA, 4, 288-321.

Suter, A. (1978). The ability of mildly hearing-impaired individuals to discriminate speech in noise. United States: Environmental Protection Agency.

Tentative standard procedure for evaluating the percentage loss of hearing in medicolegal cases, Council on Physical Therapy. (1947). JAMA, 133, 396-397.

Tentative standard procedure for evaluating the percentage of useful hearing loss in medicolegal cases, Council on Physical Therapy. (1942). JAMA, 119, 1108-1109.

Trehub, S. E., Schneider, B. A., Morrongiello, B. A., & Thorpe, L. A. (1988). Auditory sensitivity in school-age children. J Exp Child Psych, 46(2), 273-285.

Citation 

Spankovich, C. (2024). 20Q: What exactly is "normal" hearing? AudiologyOnline, Article 28938. Available at www.audiologyonline.com



Christopher Spankovich, AuD, PhD, MPH

Christopher Spankovich, AuD, PhD, MPH, is a tenured professor and vice chair of research for the Department of Otolaryngology-Head and Neck Surgery at the University of Mississippi Medical Center. Dr. Spankovich is a clinician-scientist with a translational research program focused on the prevention of acquired forms of hearing loss, tinnitus, and sound sensitivity. His research includes clinical trials of otoprotectants, epidemiological studies of determinants (e.g., dietary quality) of hearing loss/tinnitus, basic research in thermal stress for prevention of ototoxicity, and translational research on the effects of noise on auditory physiology/perception. His research has been funded by industry, federal, and professional bodies. He has published over 90 articles and book chapters (60 in peer-reviewed journals) and has given over 60 national and international presentations. Dr. Spankovich continues to practice clinically with special interest in tinnitus, sound sensitivity, ototoxicity, hearing conservation, and advanced diagnostics. He holds adjunct faculty status with Salus University and Nova Southeastern University and serves as an associate editor for the International Journal of Audiology. He recently completed service as a board member for the American Academy of Audiology.


