
20Q: Measuring Clinical Productivity—the Hunt for the Right Metric
Derek Stiles, PhD
September 12, 2022


From the Desk of Gus Mueller


While assessment of productivity is tedious and usually not too exciting, there is an interesting story that goes along with its measurement. It relates to research conducted between 1924 and 1932 at a Western Electric plant in Cicero, Illinois, called Hawthorne Works. One of the studies examined whether workers would become more productive under higher or lower levels of lighting. Interestingly, the researchers found that productivity improved whether the lighting was increased or decreased, even when the change was small. They concluded that the improvement occurred because the employees knew they were being studied, not because of the lighting itself. In later years this was dubbed the “Hawthorne Effect,” and while the scientific merits of the conclusions have been questioned, the Hawthorne Effect is still sometimes a consideration in research today, often controlled for with placebo groups.

If you are at all involved in the business side of a practice, or managing other audiologists, the issue of productivity will sooner or later come up. And sometimes, it is difficult to design a measurement tool that is fair to all—it’s often not as simple as patient-contact hours, number of patients seen, what procedures were conducted, or revenue generated—you’ll hear more about all this in this month’s 20Q.

Derek Stiles, PhD, is Director, Center for Communication Enhancement, and Assistant Professor of Otolaryngology, Harvard Medical School. As Director, he is recognized for his continued accomplishments in improving interdisciplinary communication, reviewing equity, diversity and inclusion in the department, and expanding practice guidelines.

Over the years you’ve probably gotten to know Dr. Stiles for his work in the area of childhood language development. For this month’s 20Q, however, he will be talking about something different. Thanks to a collaboration with Cincinnati Children’s Hospital Medical Center and Children’s Hospital of Philadelphia, he and his colleagues developed a productivity measure for pediatric audiologists, and this tool is now available online for other audiology clinics to access.

I’m not entirely sure what the folks from Hawthorne Works will think about Derek’s productivity assessment techniques, but I do know that many of you will find the results interesting, informative and useful.

Gus Mueller, PhD
Contributing Editor

Browse the complete collection of 20Q with Gus Mueller CEU articles at www.audiologyonline.com/20Q

20Q: Measuring Clinical Productivity—the Hunt for the Right Metric

Learning Outcomes 

After reading this article, professionals will be able to:

  • Compare different approaches to measuring productivity.
  • Define relative value units (RVUs).
  • Apply productivity measurement to their own clinic.
Derek Stiles, PhD

1. I recall having a conversation with you here several years ago. Looks like you’ve moved on to a new topic?

For today, yes. I’m still very interested in language development in children, but a lot has changed since the last time I was here. I left Chicago for Boston and accepted a position as Director of Audiology at Boston Children’s Hospital. It opened a whole new world of administrative responsibilities to me. One that I have spent a lot of time exploring is clinical productivity.

2. Productivity is not the first thing that comes to mind when I think of pediatric audiology. Why is it important?

I’ll be frank, it was a foreign topic to me, too. Soon after I started my new position, questions of whether there was a fair distribution of work among clinicians began to arise. So, I had to engage with this concept of productivity as a requirement of my position.

I immediately noticed the visceral response people tend to have to the concept of “measuring productivity.” And honestly, I felt it too. In conversations with friends and colleagues, there was an assumption that the goal of productivity tracking was to take any type of breathing room a provider might have in a day and fill it with more patients. I think it carries a sense of trying to squeeze blood from a turnip.

That, of course, was not my goal. I wanted to quantify the existing practice pattern across our satellites and programs so I could measure, compare, and track it meaningfully. The hospital tended to frame productivity in terms of the number of patients coming through the doors. That approach of counting patients broke down when trying to assess individual productivity. Audiologists assigned to support the Ear, Nose, and Throat clinic would see several more patients in a day than the audiologists working the hearing aid and cochlear implant clinics. This was by design, and it would not make sense to penalize the hearing technology providers because they saw fewer patients than the diagnostic audiologists. So, for me, what was important was to find a way to evaluate productivity that could accommodate the different types of work the team members were performing.

3. That sounds challenging. Did you have any help?

Fortunately, and coincidentally, two of my colleagues, Wendy Steuerwald (who was then the interim director of Audiology at Cincinnati Children’s Hospital Medical Center) and Joy Peterson (who was then the director of Audiology at Children’s Hospital of Philadelphia), were facing the same questions. We decided to team up to figure out a good way to measure productivity that would work across all our programs.

4. Where would one even start in this process?

The first thing we did was figure out what kind of data we had that was accessible and common to our centers. The most readily available information was the number of patients seen, but as I mentioned before, this was only useful at a departmental level, not for determining equity among providers.

We recognized that patients were scheduled into certain visit types based on their needs and thought this might be an avenue. For instance, we had cochlear implant visit types, amplification visit types, and diagnostic testing visit types. But we also had visit types that didn’t seem to match well across all our centers – for example, certain types of auditory processing visits and specialty clinic visits. So, we more or less considered this approach to be a dead end.

We finally landed on billing codes, specifically Current Procedural Terminology (CPT) codes, as the most common and consistent data type we could use. CPT codes were developed in the 1960s to create a standard directory of medical services. For example, all our providers bill the same CPT code, 92567, when tympanometry is performed on a patient. And because CPT codes are used nationally, we thought this would be a good choice if we ever expanded beyond our three medical centers.

5. Was it easy to gather all the CPT codes that had been billed?

It wasn’t originally. One of the things we learned throughout this process is that sometimes you have to hunt for meaningful information. Every provider at our hospitals submits billing codes for their patients, but the question was whether we could get that billing information back out. Fortunately, our respective medical centers were able to generate reports for us that showed the number of billing codes submitted by the provider and date. For example, at Boston Children’s Hospital, we have a data warehouse that includes billing codes. We can submit queries to the data warehouse, including date ranges, visit types, providers, etc., and pull the relevant billing codes.

6. Do you count the billing codes submitted to determine the productivity level of the provider or of the clinic?

Unfortunately, it wasn’t as simple as that. For the same reason that counting patients is not a good representation of productivity, counting billing codes is not reflective of the amount of work being done. An audiologist may spend 90 minutes doing cochlear implant mapping and submit a single billing code. In contrast, another audiologist could complete two hearing tests in the same period and submit billing codes for audiometry, tympanometry, and otoacoustic emissions. We needed to come up with a kind of correction factor that would weight the billing codes appropriately.

During our research, we noted that the Centers for Medicare & Medicaid Services (CMS) had designated weightings for all CPT codes, called Relative Value Units (RVUs). There are three categories of RVUs reflecting different aspects of the medical service: work, practice expense, and malpractice. For our question, we only focused on the work RVU. A higher work RVU indicates more time spent or skill needed to complete a procedure. For instance, the work RVU for tympanometry (CPT 92567) is 0.2, and the work RVU for threshold ABR (CPT 92652) is 1.5. So, in a sense, an audiologist who completes 15 tympanograms is as productive as an audiologist who completes 2 ABRs. If you are curious and want to pull the work RVU of a CPT code, you can use the calculator at https://www.aapc.com/practice-management/rvu-calculator.aspx or access the information directly from CMS at https://www.cms.gov/medicare/physician-fee-schedule/search.
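To make the weighting concrete, here is a minimal Python sketch of how billed codes can be converted to work RVUs. The dictionary holds only the two example codes mentioned above; a real analysis would load the full CMS fee schedule.

```python
# Minimal sketch: weighting billed CPT codes by their CMS work RVUs.
# Only the two codes cited above are included; a real analysis would
# load the complete CMS physician fee schedule.
WORK_RVU = {
    "92567": 0.2,  # tympanometry
    "92652": 1.5,  # multiple frequency threshold ABR
}

def weighted_workload(billed_codes):
    """Sum the work RVUs for a list of billed CPT codes."""
    return sum(WORK_RVU[code] for code in billed_codes)

# 15 tympanograms and 2 threshold ABRs carry the same total work RVUs (3.0).
print(round(weighted_workload(["92567"] * 15), 2))  # 3.0
print(round(weighted_workload(["92652"] * 2), 2))   # 3.0
```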

7. I’ve certainly heard of RVUs, but have never really worked with them. Did that method help?

Yes and no. The approach was useful, but the published work RVUs didn’t fit exactly with what we were trying to do. The CMS RVUs were assigned by a committee of physicians with input from professional organizations. But as we reviewed them, some of the values seemed arbitrary to us. For example, where tympanometry has a value of 0.2, distortion product OAE testing has a value of 0.55, and speech perception testing has a value of 0! So, we decided to start from scratch and come up with our own RVUs based on our experience working in our respective pediatric medical centers.

8. How did you do that?

We decided to base our custom RVUs on what we thought was an appropriate duration to spend on a procedure, where an RVU of 1.0 equated to one hour’s worth of work. In other words, procedures that take 0-15 minutes were assigned an RVU of 0.25; procedures that take 15-30 minutes were assigned an RVU of 0.5, and so on. Also, we decided that if the code was timed in its definition, we would simply use that value for our custom RVU. For example, 92620 is defined as an evaluation of central auditory function lasting 60 minutes, so we assigned a value of 1.0. Independently, we went through all of our CPT codes and assigned an RVU. We then met and compared our values. Overall, we were in good agreement, particularly for the shorter procedures, such as tympanometry. We did have discussions over some of the longer procedures, such as cochlear implant mapping. Ultimately, we came up with a list of custom RVUs for all of the major CPTs used in our clinics (see Table 1 below). We call them the PAPA RVUs, PAPA being short for Pediatric Audiology Productivity Algorithm. It felt like a great accomplishment.
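Before the table, here is a minimal Python sketch of that banding rule. The function name and the round-up-to-the-next-quarter-hour convention are our illustration of the rule described above, not code we actually used.

```python
import math

def duration_to_rvu(minutes):
    """Convert an expected procedure duration to a PAPA-style custom RVU,
    where 1.0 RVU = 1 hour of work, in quarter-hour steps."""
    return math.ceil(minutes / 15) * 0.25

print(duration_to_rvu(10))  # 0.25 -> a 0-15 minute procedure
print(duration_to_rvu(25))  # 0.5  -> a 15-30 minute procedure
print(duration_to_rvu(60))  # 1.0  -> a timed 60-minute code such as 92620
```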

CPT Code | Description | PAPA Custom RVU
92550 | Tympanometry and reflex threshold measurements | 0.5
92551 | Screening test, pure tone, air only | 0.25
92552 | Pure tone audiometry (threshold); air only | 0.25
92553 | Pure tone audiometry (threshold); air and bone | 0.5
92555 | Speech audiometry threshold | 0.25
92556 | Speech audiometry threshold; with speech recognition | 0.25
92557 | Comprehensive audiometry threshold evaluation and speech recognition | 0.5
92567 | Tympanometry (impedance testing) | 0.25
92568 | Acoustic reflex testing, threshold | 0.25
92570 | Acoustic immittance battery | 0.5
92579 | Visual reinforcement audiometry (VRA) | 0.5
92582 | Conditioning play audiometry | 0.5
92583 | Select picture audiometry | 0.25
92585 | Auditory evoked potentials; comprehensive (retired code) | 2
92586 | Auditory evoked potentials; limited (retired code) | 1
92587 | Distortion product evoked otoacoustic emissions; limited | 0.25
92588 | Distortion product evoked otoacoustic emissions; comprehensive | 0.25
92590 | Hearing aid exam and selection, monaural | 1.5
92591 | Hearing aid exam and selection, binaural | 1.5
92592 | Hearing aid check; monaural | 0.5
92593 | Hearing aid check; binaural | 0.5
92594 | Electroacoustic evaluation for hearing aid; monaural | 0.5
92595 | Electroacoustic evaluation for hearing aid; binaural | 0.5
92601 | Diagnostic cochlear implant, younger than 7 years of age; with programming | 1.5
92602 | Diagnostic cochlear implant, younger than 7 years of age; reprogramming | 1
92603 | Diagnostic cochlear implant, age 7 years or older; with programming | 1.5
92604 | Diagnostic cochlear implant, age 7 years or older; reprogramming | 1
92620 | Evaluation of central auditory function, initial 60 minutes | 1
92621 | Evaluation of central auditory function, each additional 15 minutes | 0.25
92626 | Evaluation to determine candidacy for a surgically implanted hearing device, initial 60 minutes | 1
92627 | Evaluation to determine candidacy for a surgically implanted hearing device, each additional 15 minutes | 0.25
92650 | Screening ABR | 1
92651 | Broadband threshold ABR | 1
92652 | Multiple frequency threshold ABR | 2
92653 | Neurodiagnostic ABR | 2

Table 1. Pediatric Audiology Productivity Algorithm (PAPA) custom relative value units. The RVU indicates how many hours are needed to complete the associated procedure code, rounded to the nearest quarter hour.
 

9. It sounds like a great accomplishment. What do you mean by ‘major’ CPTs?

Well, there are a lot of codes that our clinics used that didn’t seem to make sense to include. For example, there are codes related to dispensing hearing aid accessories that we believed did not contribute to clinician time or effort. We found that there was a lot of inconsistency in how our hearing aid services were delivered vis-à-vis the billing codes selected for those services. Discussion with audiology leaders outside of our triad revealed that this was a broad problem. To illustrate, we have these codes available for hearing aid service: V5014: Repair/modification of a hearing aid, V5020: Conformity evaluation, and 92592: Hearing aid check. Not all clinics use all three of these codes. Some clinics bundle the three under V5014. Some clinics never bundle the codes. So, in this case, we had to throw up our hands. We are confident with our custom RVUs for diagnostic services and cochlear implant services (as there are so few codes), but we are not able to gauge the productivity of dispensing audiologists very well in a universal approach.

10. What was the next step?

Well, we thought we had done it. We decided to see how our clinics compared to each other now that we had created these custom RVUs. A reasonable starting point was to determine the total number of RVUs each clinic billed during a one-month period. We contacted our billing departments to pull the CPT codes billed during that time period, multiplied the number of times each code was billed by its respective custom RVU, and summed the results. And do you know what we found? We found that the centers that had more patients had greater productivity.
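In code form, that monthly roll-up is just a multiply-and-sum. A minimal sketch follows; the billed counts are invented for illustration, and the RVUs come from Table 1.

```python
# Minimal sketch of the monthly roll-up: multiply the number of times each
# CPT code was billed by its PAPA custom RVU (Table 1) and sum.
# The counts below are invented for illustration, not real clinic data.
PAPA_RVU = {"92557": 0.5, "92567": 0.25, "92588": 0.25, "92601": 1.5, "92652": 2.0}

billed_counts = {"92557": 120, "92567": 150, "92588": 80, "92601": 25, "92652": 30}

total_rvus = sum(count * PAPA_RVU[code] for code, count in billed_counts.items())
print(total_rvus)  # total RVU-hours of service billed for the month (215.0 here)
```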

11. That doesn’t seem to be very enlightening. That’s all you had to show for all your effort?

Kind of. The thing to remember about productivity is that it is a ratio of the amount of work done over a period of time. Up to that point, we had spent all of our effort working on only half of the ratio: the amount of work done. We had yet to look at the denominator, the measurement of time.

12. I thought you did. You specified one month’s worth of time, right?

We did, but within that month, there were different numbers of person-hours available in each clinic. If you compare a clinic with 10 providers against a clinic with 5 providers over a one-month period, their productivity will clearly look different. We needed to refine our metric to reflect the amount of time providers were in each clinic.

13. That seems pretty straightforward, right?

Well, like anything else here, there is more than one way to do it. One approach is to use the number of hours of full-time equivalents (FTEs) assigned in a clinic. In a clinic with 5 full-time staff, the time unit would be 200 hours per week (5 people working 40 hours each). FTE assignment is a relatively stable value but does not always reflect the actual amount of available work. During weeks when employees are out due to illness or vacation, productivity will look poorer compared to weeks when the clinic is fully staffed. This, of course, makes sense for observing general clinical trends. But if you want to control for actual changes in staffing when looking at productivity, you cannot use the FTE approach.

We found in the literature on productivity that labor can be measured using paid hours or worked hours. Simply put, paid hours include paid time off (essentially the clinic FTE), and worked hours do not.

14. If you are trying to compare individual productivity, then worked hours would be the better option, right? It would describe the provider’s availability most accurately.

Exactly right, and so worked hours was the value we wanted for the other half of our productivity ratio. But it is not a simple task to obtain the worked hours for a certain period of time for a multi-satellite medical center with 30 audiologists working across 6 clinical sites. Of our three medical centers, two were able to access a worked hours report from the payroll department; for the third center, this information was not accessible. 

Joy Peterson at CHOP brought to our attention that there are operational efficiency metrics used in healthcare called Paid Hours Per Unit of Service (PHPUOS) and Worked Hours Per Unit of Service (WHPUOS). Joy liked to pronounce them as Puh-HOO-Poo-Oze and Wuh-HOO-Poo-Oze, respectively, which of course Wendy Steuerwald at CCHMC and I liked to tease her about. But our efforts had essentially brought us to a place where we could calculate custom PHPUOS and WHPUOS for our clinics. We had already found the paid hours and worked hours, so those would now be the numerator of the ratio rather than the denominator. We also needed to convert our procedure RVUs to hour equivalency; for example, tympanometry’s RVU changed from 1 (one 15-minute unit) to 0.25 (a quarter of a 1-hour unit). After making these updates, we were able to determine that, across our clinics, the average PHPUOS was 1.5, meaning that for every hour’s worth of services billed, there were 90 minutes of paid audiologist time.
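Expressed as a formula, the metric is simply hours of labor in the numerator and RVU-hours of service in the denominator. The sketch below uses invented numbers to reproduce the 1.5 figure.

```python
def hours_per_unit_of_service(labor_hours, rvu_hours_of_service):
    """PHPUOS or WHPUOS: paid (or worked) hours divided by the
    RVU-hours of service billed over the same period."""
    return labor_hours / rvu_hours_of_service

# Invented example: 480 paid hours in a month against 320 RVU-hours of
# billed services gives a PHPUOS of 1.5, i.e., 90 minutes of paid time
# for every hour's worth of services billed.
print(hours_per_unit_of_service(480, 320))  # 1.5
```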

15. Did you finally find out what you wanted to know?

Pretty much. We found out that we were more or less in line with each other. We were also interested in how hospitals beyond ours compared, so we created an online survey that allows other clinics to calculate their productivity using PHPUOS based on our custom RVUs. This survey is still active at https://redcap.link/cometopapa if anyone would like to try it. It takes a fair amount of time to complete because it requires knowledge of the number of worked or paid hours spent in a clinic and the number of times each billing code was submitted during a set date range. For that reason, we did not get many respondents. But we did come up with a “PAPA Range” of expected productivity of between 1.2 and 1.5 PHPUOS. 

16. What are you using this information for now?

Currently, I am using it for monitoring. I established guidelines for how we build schedule templates for providers to promote as much evenness of individual productivity as possible. I check the numbers monthly to make sure there aren’t any outliers. When there are, I check in with the audiologist’s manager to see if it’s something we need to address.

Also, I am expanding its use to evaluate productivity in our speech and language programs. There, I am at a different phase of productivity development because we do not have custom RVUs for their CPT codes. So, for now, I am using a volume-based adaptation of the WHPUOS: worked hours per patient.

17. I recall that earlier we talked about how staff might react negatively to the “measurement of productivity.” Did the clinic dynamics change when staff knew they were being measured?

It did increase staff awareness of productivity. Staff became more responsible in finding ways to fill empty appointment slots. They also checked in more when there were higher than normal no-show and cancellation rates, just to make me aware. When the staff realized that the measurements were not being used punitively, they relaxed more. I think if you are transparent with how you are using it, people are less negative. Staff who were curious about the business health of the clinic liked to see the productivity data. After we returned to work following the pandemic quarantines of Spring 2020, monitoring productivity became more important than ever in understanding how the department was performing. However, given the increased stress everyone felt during those times, sharing productivity information did not help mental health. We had several providers switch to a hybrid work schedule where patient contact hours were consolidated into larger blocks so that non-patient contact time could be spent at home, reducing potential exposure to SARS-CoV-2. Although productivity didn’t necessarily change, there was an increase in work-related stress. So until things normalize (whatever that means), I’m keeping the productivity work behind-the-scenes.

18. Are there other clinical metrics you use?

Oh yes, plenty. One thing I track is the proportion of each visit type we see. This was actually really interesting during the pandemic. Prior to the pandemic, patients being evaluated by Otolaryngology accounted for about half of our caseload. This population decreased significantly, whereas our electrophysiology patients increased significantly during the same period because they were considered time-sensitive following referral from the newborn hearing screening. We have only just recently started to see the proportions approach the pre-pandemic pattern.

Another metric I like to look at is appointment lag. This is the time between when an appointment was scheduled and the appointment date itself. It is essentially a measurement of wait time. Depending on how our schedule is set up, different visits can be accommodated earlier than others. When I see one type of visit having a longer lag than the others, that tells me that there might be something wrong with how the schedule is templated. Currently, our longest lag is with auditory processing testing, which we know is due to a limited number of providers qualified to administer the testing. Sometimes I have seen that our lag for ABR testing is longer than we’d like, so we’ll change some behavioral slots over to ABR slots to accommodate the demand.
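For anyone who wants to pull the same lag metric from their own scheduling data, here is a minimal sketch; the dates are illustrative only.

```python
from datetime import date

def appointment_lag_days(booked_on, appointment_date):
    """Appointment lag: days between when the appointment was booked and
    when it actually takes place (a simple proxy for wait time)."""
    return (appointment_date - booked_on).days

# Illustrative example: an ABR slot booked on June 1 for a July 15 visit.
print(appointment_lag_days(date(2022, 6, 1), date(2022, 7, 15)))  # 44 days
```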

19. For those of us in smaller clinics, how might we benefit from all the work you’ve done? Does it apply to us too?

I’ve learned to appreciate the flexibility and scalability of productivity. I think any size clinic may have questions about performance that could be answered by exploring productivity. For example, a manager might have a gut feeling that productivity in summer is worse than in other seasons, but not be sure whether it’s due to staff taking vacations or decreased patient demand for services. Comparing WHPUOS across the seasons would give an answer. If the WHPUOS is higher in the summer, then there is likely decreased demand for services; if it stays about the same while volumes drop, the dip is more likely explained by vacation time coming out of the worked hours.

20. Any final words?

My take home message would be that productivity is a tool that can be customized and used in lots of different ways. It can be a rough estimate or a very detailed calculation. It can reveal inequities in workload across clinicians or satellites. It can show you what direction to support expansion. It can be something you refer to regularly or just when a question comes up. Understanding how to measure productivity is a powerful skill for evaluating the health of your practice or clinic. I’m very happy to be contacted if anyone wants to learn more.

Citation

Stiles, D. (2022). 20Q: Measuring Clinical Productivity—the Hunt for the Right Metric. AudiologyOnline, Article 28308. Available at www.audiologyonline.com



Derek Stiles, PhD

Derek received his PhD from the University of Iowa, where he completed his dissertation on the relationships between speech audibility, working memory, and vocabulary in children with hearing loss. He is the Director of the Center for Communication Enhancement at Boston Children’s Hospital, which comprises multiple speech-language and audiology programs.


