

Exam Preview

The Audiology of Oticon More

Please note: exam questions are subject to change.


1.  Compared to traditional approaches to amplification, our approach with More:
  1. Focuses on total elimination of all noise
  2. Attempts to create a safe listening bubble for the patient
  3. Emphasizes improved performance in complex listening situations without creating an artificially restricted listening experience
  4. Allows all sound to enter the auditory system in its normal perspective
2.  The relationship between Deep Learning and Artificial Intelligence is:
  1. Those terms mean the same thing
  2. Deep Learning means the same thing as Machine Learning
  3. AI is a specialized version of Deep Learning
  4. Machine Learning is a branch of AI and Deep Learning is a highly specialized version of Machine Learning
3.  The goal of Deep Learning is:
  1. To find patterns and interrelationships in large amounts of data that may otherwise be difficult to see
  2. To teach machines to know all that humans know
  3. To help professionals learn new levels of minutia
  4. To emphasize the need for patients to practice listening more often
4.  Speech and noise signals:
  1. Can best be described by a simple set of discrete rules written by experts
  2. Once digitized, represent the sort of complex data that makes sense to analyze via Deep Learning
  3. Cannot be interpreted accurately by machines
  4. Look the same to the human brain
5.  Within the More hearing aid, the Deep Neural Network:
  1. Takes care of all the signal processing
  2. Replaces the function that used to be handled by the Analysis part of OpenSound Navigator
  3. Replaces the function that used to be handled by our OpenSound Optimizer feedback solution
  4. Replaces the function that used to be handled by our Noise Removal (noise reduction) system
6.  The Spatial Rebalancer in More:
  1. Uses directional properties to reduce sound levels for noise coming primarily from the back and sides
  2. Creates an artificial movement of non-speech sound sources to behind the user using AI
  3. Shifts more sound over to the user’s dominant listening ear, using AI
  4. Uses AI to make the dominant speech signal appear to come from all directions
7.  In MoreSound Amplifier:
  1. Longer time constants are used in all situations
  2. Narrowband processing is used in all situations
  3. The system adaptively adjusts both bandwidth and response time based on the dynamic properties of the input signal
  4. Uses AI to limit the high-frequency cutoff of the device
8.  In MoreSound Amplifier, longer time constants and narrowband processing will be used:
  1. In all situations
  2. For situations where rapidly occurring noise spikes occur
  3. At all times when speech is present
  4. For the relatively stable passages in the speech signal
9.  In More, the Sound Enhancer:
  1. Can be used to restore the loudness that is lost when directionality and noise reduction remove parts of the signal
  2. Uses AI-based speech generation to replace high-frequency consonants
  3. Is set the same for all patients
  4. Is not actually practical given the current speed of our platform
10.  Compared to OpnS, Oticon More:
  1. Cannot improve speech understanding because Deep Learning will not allow for that
  2. Forces a patient to listen more intently because that is good for the brain
  3. Performs at the same levels
  4. Allows for improved speech understanding performance in noise
