


20Q: Moving from Evidence to Practice - Can Knowledge Translation and Implementation Science Help Audiology Get There?
Sheila T. Moodie, PhD, MCISc
October 3, 2011

From the desk of Gus Mueller

Whether our primary daily responsibility is routine diagnostics, balance function testing, electrophysiologic assessment, cochlear implants or hearing aid fittings, we all pretty much believe that "Best Practices" are a good thing. After all, standards of practice are part of defining a profession, and in some cases differentiating the professional from the technician. For most areas of clinical audiology, at one time or another, a group of "experts" has gotten together and formulated Best Practice Guidelines. For some areas, there are even multiple guidelines. Regarding the fitting of hearing aids for adults, for example, there are guidelines available from the AAA, the ISA and the ASHA—all quite similar, I might add.

So on the surface, it seems pretty simple. If you're a practicing audiologist seeing patients, you sit down with the guidelines for the different areas of your practice and plan your diagnostic and rehabilitation test procedures and protocols accordingly. But it doesn't seem to work that way. For a variety of reasons, some good, some probably not so good, a large percentage of audiologists do not follow Best Practice Guidelines.

The gap that exists between best practice and clinical practice is not unique to the profession of audiology. In fact, there is an area of research that studies this topic; it's referred to as knowledge translation and implementation science. Fortunately for us, at least one audiologist has devoted several years of research to this area, and she is here to tell us all about it. Our 20Q guest this month is Sheila T. Moodie, MClSc, Research Audiologist at The National Centre for Audiology and PhD Candidate at The University of Western Ontario, London, Ontario.

Sheila's work history probably had something to do with her interest in knowledge transfer and implementation. She started working in the Child Amplification Laboratory at The University of Western Ontario (UWO) in 1990 as part of the Desired Sensation Level (DSL) Method team. At this time, Team DSL was just starting to move to computer-assisted implementations of the DSL method that could be widely distributed. They were also in the process of developing a clinical method for measuring the real-ear-to-coupler difference (RECD)—a concept that was still pretty foreign to clinicians at that time.

In the mid-1990's, Sheila spent several years working for Audioscan (probe-microphone equipment), and she conducted numerous probe-mic and DSL 'hands-on' training workshops around North America. She tells us that during these many years of training she saw firsthand the dilemmas faced by clinicians relating to usability, feasibility, utility, adherence, value, acceptability, and so on.

It was either the knowledge acquired in these unique job opportunities, or the frustrations experienced along the way, that led Sheila to enter the Ph.D. program at UWO. She'll soon be finishing the program with a focus on applying knowledge translation and clinical behavior change interventions to improve the quality and effectiveness of audiology practice. Those are her words, not mine, and quite an undertaking! We wish her well.


Gus Mueller, Ph.D.
Contributing Editor
October, 2011

To browse the complete collection of 20Q with Gus Mueller articles, please visit www.audiologyonline.com/20Q

20Q: Moving from Evidence to Practice - Can Knowledge Translation and Implementation Science Help Audiology Get There?
1. I heard you were doing some work in rather unique areas, but can't say I really know what "knowledge translation and implementation science" are all about.

Neither did I when I started looking into these areas. However, they became topics in my doctoral dissertation at the University of Western Ontario (UWO), so as you might guess, I've been thinking about them a lot. Actually, our UWO DSL group has probably spent the last 20 years thinking about the issues associated with developing knowledge and having it used in clinical practice, but until recently there were no identified fields of research that might guide our efforts. Now there are.


2. What do the terms really mean?

In brief terms, knowledge translation is about identifying and using the best possible strategies for conducting research, synthesizing results, and translating results into clinical practice so that effective and efficient health services are delivered to the patients in our care. Implementation science provides a scientific methodology for evaluating influences on behavior (individual, organizational, and policy) in healthcare practice and policy (Eccles & Mittman, 2006; Foy, Eccles, & Grimshaw, 2001). At its core, implementation science is about studying change: change in practice behavior, and maintaining that change in organizations.

3. Why do you think these areas might be important for audiology?

Audiology has been working on incorporating evidence-based practice principles into its mandate for professional practice since the mid-1990's (Bess, 1995). The profession has developed countless clinical practice guideline documents for making decisions about the diagnosis, treatment and management of persons with hearing and balance disorders, but it will come as no surprise to you that many of these guidelines are not being used in clinical practice.

Here's an example that you can maybe relate to. It is widely known that using probe-mic measures to verify the performance of hearing aids fit to adults and children, and the subsequent subjective and/or objective validation, are part of evidence-based best practice guidelines. In clinical practice, however, numerous surveys over the past 10 years have shown that more than half of adult hearing aid fittings are not verified with probe-mic measures (e.g., Mueller & Picou, 2010). You've maybe seen the recent MarkeTrak VIII data from Kochkin (Kochkin et al., 2010) indicating that about 60% of individuals being fitted with hearing aids do not receive probe-mic testing or other types of objective verification.

A recent survey indicates that compliance with best practice guidelines is better with children, but about 30% of hearing aids fitted to children may still not be verified with probe-mic or predicted probe-mic measures (Moodie et al., 2011).

4. Not too surprising, although I didn't think it was quite that bad. What do you think the issues are?

What I've learned is that there are many factors that influence the use of evidence in practice by audiologists. Factors can be associated with: (1) the guideline; (2) the individual practitioner; (3) the context/environment in which the individual works; and (4) the broader healthcare system (Moodie et al., in press). The guideline might be too rigid so that local adaptation can't or doesn't occur; audiologists might think change is too difficult; they may lack the authority in their workplace to change practice; and the healthcare system might not be reimbursing appropriately for services delivered. There are multiple reasons that best practices are not followed.

When looking at the factors listed above, we generally focus on issues associated with the guideline or the individual clinician. In the research that I've been doing, audiologists reported numerous barriers to changing practice, but the greatest barriers they reported to changing practice on the basis of 'best evidence' were insufficient time at work to find research and/or organizational information, and insufficient time at work to implement changes in practice. Many of the health sciences professions, like audiology, nursing, and physical therapy, report "lack of time" as a big barrier to the use of research evidence in practice. Until now, we have not had good scientific methodologies to help us define what "lack of time" means for the different contexts in which audiologists work, nor methods that could be used to address the barriers once they are defined. We can use the scientific areas of knowledge translation and implementation science to better understand how to develop best practices that will be clinically valued, and to develop strategies to change clinical practice behavior. Until we better understand audiologists' perceptions of the barriers to the use of evidence in the practice environment and broader healthcare system in which they work, we will have difficulty developing strategies to facilitate practice behavior change. It doesn't matter if researchers think the clinicians' reasons for not implementing evidence in practice are justifiable or not.

5. So how might researchers and/or guideline developers change their behavior to develop evidence that might be better used in practice?

We need not only to develop evidential knowledge products (such as protocols or clinical practice guidelines), but also to always attend to how that knowledge is going to be implemented in practice. For example, we need to ask ourselves questions like: What local adaptations might need to be made to the evidence so that it will be implemented in the majority of practices? What adaptations are acceptable versus what adaptations would not be acceptable? How are we going to measure that the knowledge is being used appropriately in practice? There are some frameworks developed in the areas of knowledge translation and implementation science that can guide us. The one that I am most familiar with is called the knowledge-to-action process (Graham et al., 2006; Graham & Tetroe, 2007). As shown in Figure 1, the knowledge-to-action framework is comprised of (1) a knowledge creation funnel and (2) an application of knowledge cycle.



Figure 1: The Knowledge-to-Action Process. Adapted from "Lost in knowledge translation: Time for a map?" by I. Graham, J. Logan, M. B. Harrison, S. Straus, J. Tetroe, W. Caswell, & N. Robinson, 2006, The Journal of Continuing Education in the Health Professions, 26, p. 19. Copyright 2006 by John Wiley and Sons. Reprinted with permission.

6. Is this somehow different from the usual framework?

Yes, it is. Traditionally, we've developed evidence for implementation in practice by having research experts synthesize the best available evidence, rate it as high or low quality, format it into clinical practice recommendations, and then send it out for comment and revision. I believe that audiology guideline development strategies, at least the ones I know about, have been a bit different because we have clinicians and clinician-researchers involved. But given that we still have adherence issues with evidence-based practice recommendations, we need to consider trying something different. The integrated knowledge-to-action framework suggests that the development of best evidence for clinical practice is enhanced when all parts of the research process consider the evidence and ways to facilitate its implementation in practice, and when collaboration occurs between the researcher, the clinician, and whenever possible, the patient.

7. Don't you think we've always been doing this?

I think in audiology we have generally been doing a good job in creating clinical practice guidelines; that is the part represented by the knowledge creation funnel in Figure 1. We have experts in the profession who have developed the skills needed to gather, synthesize, and evaluate the available evidence and filter it to develop the guidelines. Where I believe we could improve is in ensuring that the high-quality research evidence is continually tailored throughout the process in such a way that it is clinically usable, acceptable and valued by the end user and the patients in their care. I think that if we look at many of the published clinical practice guidelines in audiology we might wonder if we could do a better job at presenting the information in clear, concise formats to promote uptake in clinical practice.

8. That's probably true. What about the "application of knowledge cycle?"

The application of knowledge cycle is also shown in Figure 1. It surrounds the knowledge creation funnel. This cycle (and notice that the arrows go in both directions) really tries to address what needs to be considered when we think about putting evidence, or clinical practice guidelines, into people's lives at work. The knowledge-to-action framework would state that during, or subsequent to, release of a knowledge product (such as research results, protocols or guidelines), the researchers should attend to critical components for moving the knowledge into practice. These critical components include: determining, at a clinical level, the barriers to knowledge use; the adaptations that might encourage implementation; and the strategies to facilitate practice change. These analyses will subsequently lead to the development of appropriate strategies for changing clinical practice behavior, and should include methods of measuring and monitoring implementation as well as strategies to sustain the use of the knowledge once it has been put into practice. What I really like about this framework is that it delineates the important stages that should occur to ensure implementation not only happens, but is sustained over time. I believe this is best done at the same time the knowledge product is being created.

9. How do you see this framework being applied to issues in clinical audiology and hearing aid fitting?

Let me provide you with a recent example. Our Child Amplification Lab research group has just used the knowledge-to-action framework to develop a guideline for outcome measures to evaluate the auditory development and performance of children with permanent childhood hearing impairment (PCHI) who wear hearing aids and are aged birth to six years (Bagatto, Moodie, Malandrino, et al., in press; Moodie, Bagatto, et al., in press). We call it the University of Western Ontario Pediatric Audiological Monitoring Protocol, and because we cannot live without acronyms, we refer to it as the UWO PedAMP. Dr. Bagatto led the knowledge inquiry and synthesis stages to filter out appropriate tools for clinical use (Bagatto, Moodie, Seewald, Bartlett, & Scollie, 2011). One of the dilemmas facing the research team was the lack of valid and reliable outcome measurement tools for this population that offered, among other things, good conceptual clarity, normative data, minimal respondent and administrative burden, and availability in various languages (Bagatto et al., 2011).

We considered for inclusion the Parents' Evaluation of Aural/Oral Performance of Children (PEACH) Diary (Ching & Hill, 2005a). Overall, the PEACH Diary received a very good grade on our critical review of the evidence. However, the interview-style format introduced clinical feasibility and utility concerns. We had also reviewed a version of the PEACH that was not interview-style (the PEACH Rating Scale; Ching & Hill, 2005b). It scored lower than the Diary in our evidential critical review. The PEACH Rating Scale asks parents to retrospectively (over the prior week) rate the presence/absence of auditory-related behaviors.

We took both tools to our collaborative clinical partners to ascertain their opinions. We were informed that, regardless of the context in which they worked, the audiologists found the version of the PEACH that used the rating scale to be a more clinically feasible outcome evaluation tool to include in the UWO PedAMP guideline. They indicated that the PEACH Rating Scale version was less difficult to score and interpret; less time consuming for parents and audiologists; would have more support and acceptance for use in their workplace setting; would require less development of new skills and knowledge to use; and was more practical to implement. Audiologists also indicated that they were more likely to use the PEACH Rating Scale than the PEACH Diary in daily practice if it became part of the guideline. So, from an implementation perspective, if one aim is to ensure adherence to the guideline, including the PEACH Rating Scale over the more evidential PEACH Diary made sense.

10. That's very interesting, but don't you still have the problem of changing practice behavior?

I think one problem is that we haven't had focused areas of research that examine implementation questions, such as: "What conditions internal to individual audiologists and in their social and physical work environment need to be in place for a specified practice behavior to be achieved?" (Michie, van Stralen, & West, 2011; para. 48).

11. That sounds very theoretical. Do you have a practical example?

Sure. Let's go back to our earlier discussion regarding the use (or lack of use) of probe-mic verification. We know from the survey of Mueller & Picou (2010) that a large proportion of audiologists who own probe-mic equipment still do not routinely conduct probe-mic verification, so not owning the equipment probably isn't a major reason. But what is?

There are probably many reasons. When I've spoken with end-users who own probe-mic systems but don't use them, they tell me that they are meeting their patients' needs regardless of whether or not they're making probe-mic measurements. I think the data from Kochkin (2010, 2011) would refute that argument, but perhaps they're not aware of these data. For example, Kochkin (2011) reports that, based on the nearly 2.7 million hearing aids fit in the U.S. in 2010, the systematic evaluation of hearing aid performance using probe-mic verification and evidence-based validation procedures could reduce patient visits by a total of 521,779 visits in a single year, and reduce the time practitioners spend on these visits by 391,334 hours. If we could convert this into a cost-benefit analysis that showed the potential cost savings, it might persuade and motivate a change in practice behavior.
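To make the potential savings concrete, here is a minimal back-of-the-envelope sketch, written in Python. The visit and hour totals are the Kochkin (2011) figures cited above; the hourly cost of clinician time is a hypothetical placeholder used only for illustration, not a value reported in the article.

# Back-of-the-envelope cost-benefit sketch using the Kochkin (2011) totals cited above.
# The hourly cost figure is a hypothetical assumption for illustration only.

visits_avoided_per_year = 521_779   # follow-up visits avoided (Kochkin, 2011)
hours_avoided_per_year = 391_334    # practitioner hours avoided (Kochkin, 2011)
assumed_cost_per_hour = 100.00      # hypothetical cost of clinician time, in dollars

annual_cost_saving = hours_avoided_per_year * assumed_cost_per_hour
minutes_per_avoided_visit = hours_avoided_per_year / visits_avoided_per_year * 60

print(f"Estimated annual saving: ${annual_cost_saving:,.0f}")
print(f"Implied average time per avoided visit: {minutes_per_avoided_visit:.0f} minutes")

Even with a modest hourly figure like this one, the implied saving is on the order of tens of millions of dollars per year, which is exactly the kind of number that might motivate a change in practice behavior.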

12. So you think it is all about saving money?

Actually, no, I don't. What I think it is about is using the appropriate methodology to determine why we're having implementation issues. There are likely numerous factors, some totally unrelated to individual audiologists' behavior, that limit the use of probe-mic measures in clinical practice. When we start to define the barriers, we can start to develop solutions. The most successful solutions will have multiple components and will target not only the individual but also group and organizational factors.

13. So does everything you're talking about only relate to audiologists who dispense hearing aids?

Absolutely not. We know that there are many audiologists who diagnose hearing loss and are involved in the habilitation/rehabilitation process who are not adhering to, or following, clinical practice guidelines. The process is the same: we need to identify what we can do to develop evidence that is easily translated into practice, and work with clinicians to change practice behavior. One issue is that "old habits die hard." If I've always conducted live-voice word recognition testing, developing the skills to use audiometer-implemented testing might be time-consuming and might not appear to have value to me.

14. Changing clinical practice behavior sounds complicated. Is there some research model that can help us figure this all out?

Glad you asked! Recently, Susan Michie from University College London, U.K., and colleagues proposed a new framework for changing behavior (Michie et al., 2011). At the centre of this framework is a 'behaviour system' comprised of three essential conditions: Capability, Opportunity, and Motivation (it is being referred to as the COM-B [behaviour] system). This framework could be used to examine factors and potential interventions that impact clinical practice behavior:

  • Capability is defined as "the individual's psychological and physical capacity to engage in the activity concerned." Capability includes knowledge and skills.

  • Opportunity is defined as "all the factors that lie outside the individual that make the behavior possible or prompt it."

  • Motivation is defined as "all those brain processes (including habits, emotional responding and analytic decision-making) that energize and direct behavior."

15. It seems like there could be an interaction between the three components of the COM-B system?

Yes, there often is. Capability and opportunity can influence motivation, and performing a behavior can modify capability, opportunity and motivation.

16. This sounds like something we might use with our patients.

It is, but the interesting thing, for me at least, is that not only has this method been used to change patient behavior (Michie et al., 2011; Michie, Hyder, Walia, & West, 2010), it is more recently being considered for use in the area of clinical practice change (Michie, van Stralen, & West, 2011). Behavior change is difficult whether it is patient behavior or audiologist behavior (or researcher behavior)! In order to provide guidance, the authors of the COM-B system have developed a "behavior change wheel," shown in Figure 2. The behavior change wheel is comprised of the COM-B system and its associated subdivisions at the hub. Capability can be divided into physical and psychological (reasoning, comprehension, etc.). Opportunity can be divided into physical (environmental/contextual) and social opportunity (for example, the way concepts are developed to shape how we think about things). Finally, motivation is divided into reflective processes (self-evaluation, for example) and automatic processes (impulses, emotions, etc.). These are encircled by effective behavior change intervention functions and then by policy categories.



Figure 2: The Behavior Change Wheel. From "The behaviour change wheel: A new method for characterizing and designing behaviour change interventions" by S. Michie, M. M. van Stralen, & R. West, 2011, Implementation Science, 6(42). doi: 10.1186/1748-5908-6-42. Copyright 2011 by BioMed Central. Reprinted with permission.

17. Given all the information in this 'wheel' how do you decide which intervention to use, and with whom?

In order to identify the type or types of implementation intervention that are likely to be effective, the COM-B system provides a canvas of the many options available and a rationale for selecting from among them. Here is one way that it might work for an audiology example. We know that conducting real-ear-to-coupler difference (RECD) measures with young children is a valuable tool for setting the child's hearing aid gain and output. We know that many audiologists are conducting RECD measures but some are not (Moodie et al., 2011). How could we facilitate an effective behavior change? The first step would be identifying the barrier to making the measurement. Most audiologists in 2011 know what an RECD is (this is substantially different than even 10 years ago!). However, many still report that they lack confidence in making the measurement in very young children (Moodie, Rall, et al., 2011). If we know this, we can now use the 'intervention function' layer of the Behavior Change Wheel (highlighted in the red/burgundy rung) to select from various interventions to change practice behavior. These interventions include restrictions, education, persuasion, incentivisation, coercion, training, enablement, modeling, and environmental restructuring.

18. That's a long list of interventions!

True, but as I mentioned earlier, if the clinicians have been involved in the process all along, we probably know what interventions will work best. To improve their confidence in making an RECD measure, for example, we might choose training ("tips and tricks to make the RECD measure"), enablement (hands-on training), and modeling strategies ("here is what an appropriate RECD should look like") to ultimately change their practice behavior. The important thing here is to remember all aspects of the 'wheel' (capability, opportunity and motivation) and to determine, for a given behavior, which intervention will work best. If the question here is: "What conditions internal to individual audiologists and in their social and physical work environment need to be in place for an RECD measurement to be achieved?", then improving the confidence level and ability of pediatric audiologists may ultimately affect opportunity and capability, and then hopefully the motivation to make the measure. A simple sketch of this barrier-to-intervention reasoning is shown below.
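To make the selection step concrete, here is a minimal illustrative sketch in Python, continuing the RECD example above. The mapping from COM-B components to intervention functions is a simplified, hypothetical shortlist for this scenario; it is not the full intervention matrix published by Michie et al. (2011).

# Illustrative sketch: shortlisting intervention functions from the Behavior Change Wheel
# for an identified COM-B barrier. The mapping below is a simplified, hypothetical example
# for the RECD scenario discussed above, not the full matrix from Michie et al. (2011).

CANDIDATE_INTERVENTIONS = {
    "capability": ["education", "training", "enablement", "modeling"],
    "opportunity": ["environmental restructuring", "enablement", "restrictions"],
    "motivation": ["persuasion", "incentivisation", "modeling", "coercion"],
}

def shortlist_interventions(barrier_component):
    """Return candidate intervention functions for the COM-B component identified as the barrier."""
    return CANDIDATE_INTERVENTIONS.get(barrier_component.lower(), [])

# Example: low confidence in measuring the RECD in very young children is treated here
# as primarily a (psychological) capability barrier.
print(shortlist_interventions("capability"))
# -> ['education', 'training', 'enablement', 'modeling']

In practice, the shortlist would then be narrowed by considering what is feasible and acceptable in the clinicians' own work settings, which is exactly where the collaborative, integrated approach described earlier pays off.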

19. Are there times where policy-level interventions can be used to change clinical practice behavior?

I think so. I read an interesting article by Wendy Phillips and colleagues from the U.K. that discussed the process that resulted in the introduction of digital signal processing (DSP) hearing aids into the English National Health Service (NHS) back in 2000, when the Modernising Hearing Aid Services (MHAS) project was initiated (Phillips, Knight, Caldwell & Warrington, 2007). Initially there was a policy-level assumption that the provision of state-of-the-art DSP hearing aids through this program would be too costly. Four important stakeholders (the Royal National Institute for Deaf People, the Department of Health, the Medical Research Council, and the Institute of Hearing Research) worked as a collective to influence policy. Together they developed a list of leading-edge specifications for digital hearing aids and their clinical use. They negotiated with manufacturers and policy-makers to reach agreement on appropriate pricing of instruments and equipment. This collaboration brought about policy-level change and ultimately changed public practice, with the end result that by 2007, over 200,000 DSP hearing aids were fitted within NHS sites at a manageable cost per unit. The take-home point here is that, at a policy level, the adoption of DSP technology was initially considered cost-prohibitive for the NHS. The collaborative took the opportunity to collect appropriate information and advocate for change. Ultimately, policy-level adoption of DSP technology occurred and led to a change in clinical practice behavior, to the point where the use of DSP technology is now the practice standard.

This all sounds good, and call me a pessimist, but when 50% of audiologists are not following Best Practice Guidelines, do you really think that your "funnel" and your "wheel" are going to make the world a better place?

I won't call you a pessimist; I'll call myself an optimist! We believe that the UWO PedAMP's successful uptake in practice is in part a result of using an integrated knowledge translation approach throughout the development of the guideline. Collaborating closely with audiologists in practice helped to effectively identify the barriers and facilitators to implementation, so we were able to address the barriers (like language translations) in an expeditious manner. As we move forward, we may have an opportunity to use the Behavior Change Wheel as a guide to further assist implementation of not only the UWO PedAMP, but other areas of practice change we might be interested in.

Acknowledgements - Funding

This work was supported with funding by the Canadian Institutes of Health Research [Sheila Moodie: 200710CGD-188113-171346]. This work has also been supported by The Masonic Foundation of Ontario, Help-2-Hear Project and Starkey Laboratories Inc.

References

Bagatto, M.P., Moodie, S.T., Malandrino, A., Richert, F., Clench, D., & Scollie, S.D. (in press). The University of Western Ontario Pediatric Audiological Monitoring Protocol (UWO PedAMP). Trends in Amplification.

Bagatto, M.P., Moodie, S.T., Seewald, R.C., Bartlett, D.J., & Scollie, S.D. (2011). A critical review of audiological outcome measures for infants and children. Trends in Amplification. Published online August 25, 2011: https://tia.sagepub.com/content/early/2011/08/03/1084713811412056

Bess, F. (1995). Editorial: Evidence-based audiology. American Journal of Audiology, 4(2), 5.

Ching, T. Y., & Hill, M. (2005a). The Parents' Evaluation of Aural/Oral Performance of Children (PEACH) Diary. Chatswood, New South Wales, Australia: Australian Hearing. Retrieved from outcomes.nal.gov.au.

Ching, T. Y., & Hill, M. (2005b). The Parents' Evaluation of Aural/Oral Performance of Children (PEACH) Rating Scale. Chatswood, New South Wales, Australia: Australian Hearing. Retrieved from: https://www.outcomes.nal.gov.au/

Eccles, M. P., & Mittman, B.S. (2006). Editorial: Welcome to Implementation Science. Implementation Science, 1(1). doi:10.1186/1748-5908-1-1 Retrieved from https://www.implementationscience.com/content/1/1/1

Foy, R., Eccles, M., & Grimshaw, J. (2001). Why does primary care need more implementation research? Family Practice, 18, 353-355.

Graham, I. D., & Tetroe, J. (2007). How to translate health research knowledge into effective health care action. Healthcare Quarterly, 10(3), 20-22.

Graham, I.D., Logan, J., Harrison, M.B., Straus, S.E., Tetroe, J., Caswell, W., & Robinson, N. (2006). Lost in knowledge translation: Time for a map? Journal of Continuing Education in the Health Professions, 26(1), 13-24. doi: 10.1002/chp.47

Kochkin, S. (2011). MarkeTrak VIII: Reducing patient visits through verification & validation. Hearing Review, 18(6), 10-12. Retrieved from https://www.hearingreview.com/issues/

Kochkin, S., Beck, D. L., Christensen, L. A., Compton-Conley, C., Fligor, B. J., Kricos, P. B., et al. (2010). MarkeTrak VIII: The impact of the hearing healthcare professional on hearing aid user success. Hearing Review, 17(4), 12-34.

Michie, S., Ashford, S., Sniehotta, F. F., Dombrowski, S. U., Bishop, A., & French, D. P. (2011). A refined taxonomy of behaviour change techniques to help people change their physical activity and healthy eating behaviours: The CALO-RE taxonomy. Psychology & Health, doi:10.1080/08870446.2010.540664 Retrieved from https://dx.doi.org/10.1080/08870446.2010.540664

Michie, S., Hyder, N., Walia, A., & West, R. (2010). Development of a taxonomy of behaviour change techniques used in individual behavioural support for smoking cessation. Addictive Behaviors, 36, 315-319. doi:10.1016/j.addbeh.2010.11.016

Michie, S., van Stralen, M. M., & West, R. (2011). The behaviour change wheel: A new method for characterizing and designing behaviour change interventions. Implementation Science, 6(42). doi:10.1186/1748-5908-6-42

Moodie, S.T., Bagatto, M.P., Miller, L.T., Kothari, A., Seewald, R.C., & Scollie, S.D. (in press). An integrated knowledge translation experience: Use of the Network of Pediatric Audiologists of Canada to facilitate the development of the University of Western Ontario Pediatric Audiological Monitoring Protocol (UWO PedAMP v1.0). Trends in Amplification.

Moodie, S.T., Kothari, A., Bagatto, M.P., Seewald, R.C., Miller, L.T., & Scollie, S.D. (in press). Knowledge translation in audiology: Promoting the clinical application of best evidence. Trends in Amplification.

Moodie, S.T., Rall, E., Lindley, G., Davidson, L., Littman, L. & Gordey, D. (2011, April). Survey of best practices: Pediatric hearing aid fitting. Presentation at AudiologyNOW!, Chicago, IL.

Mueller, H. G., & Picou, E. M. (2010). Survey examines popularity of real-ear probe-microphone measures. The Hearing Journal, 63(5), 27-32.

Phillips, W., Knight, L., Caldwell, N., & Warrington, J. (2007). Policy through procurement - The introduction of digital signal process (DSP) hearing aids into the English NHS. Health Policy, 80(1), 77-85. doi: 10.1016/j.healthpol.2006.03.008


Sheila T. Moodie, PhD, MCISc

Research Audiologist in The Child Amplification Laboratory

Sheila Moodie is a Research Audiologist in The Child Amplification Laboratory at The National Centre for Audiology, University of Western Ontario, and a PhD Candidate in the Health and Rehabilitation Sciences Program, Faculty of Health Sciences, also at The University of Western Ontario. For over 20 years she has assisted in the development of the DSL Method and in methods to improve its clinical implementation. She has been awarded a Canadian Institutes of Health Research Frederick Banting and Charles Best Canada Graduate Scholarship to study methods that reduce the knowledge utilization gap in audiology by encouraging collaborative linkage and exchange between researchers and clinicians.


