Productivity in medicine – what’s real and what’s fake?

Let’s think about provider productivity.  As an armchair economist, I apologize to any PhD economists who feel I am oversimplifying things.
Why is productivity good?  It has enabled the rise in the standard of living over the last 200 years.  Economic output is tied to two variables: the number of people producing goods and services, and how much each of them can produce – productivity.  Technology supercharges productivity.  A 50-member platform company now outproduces the corporation of 40 years ago, which needed a small army of people to achieve a lower output.  We live better lives because of productivity.

We strive for productivity in health care: more patients seen per hour, more patients treated.  Simple enough.  But productivity focused on the number of patients seen per hour does not necessarily maintain quality of care as that metric rises.  A study of back-office workers in banking found that when the workers were overloaded, they sped up, but the quality of their work declined (more defects).  Banking is not healthcare, granted, but in finance defects are recognized and corrected quickly [“Excuse me, but where is my money?”].  In patient care, defects may take longer to show up and are harder to attribute to any one factor.  Providers usually have a differential diagnosis for their patient’s presenting complaints, and a careful review of the history and medical record can significantly narrow that differential.  Physician extenders allow providers to see patients more efficiently, with routine care shunted to the extender.  For a harried clinician, however, testing can also be used as a physician extender of sorts.  It increases diagnostic accuracy, at a cost to the patient (money and time) and to the payor (money).  It is hardly fraudulent.  But is it waste?  And since it usually requires a repeat visit, is it rework?  Possibly yes, to both.

The six-minute-per-encounter clinician who uses testing as a physician extender will likely have higher RVU production than one who diligently reviews the medical record for half an hour and sees only 10 patients a day.  But who is providing better care?  If outcomes are evaluated, I suspect there is either no difference between the two or a slight edge on outcome measures favoring the higher-testing provider.  An analysis of whether the cost/benefit ratio justifies that approach would probably be necessary.  Ultimately, if you account for all costs to the system, the provider who causes more defects, waste, and rework is usually less efficient in aggregate, even though his individually measured productivity may be high.  See: ‘The measure is the metric‘.  Right now, insurers are data mining to see which providers have the best outcomes and lowest costs for specific disease processes, and they will steer patients preferentially to them (Aetna CEO, keynote speech, HIMSS 2014).
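
To make the aggregate-cost argument concrete, here is a minimal back-of-the-envelope sketch in Python.  Every number in it (visit volumes, per-visit and per-test costs, rework rates) is a hypothetical assumption chosen purely to illustrate the arithmetic, not data from any real practice or payer.

```python
# Back-of-the-envelope comparison of "individual" vs. "aggregate" productivity.
# All figures are hypothetical assumptions for illustration only.

def aggregate_cost_per_resolved_case(visits_per_day, cost_per_visit,
                                     tests_per_visit, cost_per_test,
                                     rework_rate):
    """Estimate total system cost per case that actually gets resolved.

    rework_rate: fraction of encounters that generate a repeat visit
    (the defects / waste / rework of the post).
    """
    visit_cost = visits_per_day * cost_per_visit
    testing_cost = visits_per_day * tests_per_visit * cost_per_test
    # Each reworked encounter is assumed to repeat both the visit and its testing.
    rework_cost = visits_per_day * rework_rate * (cost_per_visit +
                                                  tests_per_visit * cost_per_test)
    resolved_cases = visits_per_day * (1 - rework_rate)
    return (visit_cost + testing_cost + rework_cost) / resolved_cases

# Hypothetical six-minute, test-heavy clinician: high volume, more tests, more rework.
fast = aggregate_cost_per_resolved_case(visits_per_day=40, cost_per_visit=75,
                                        tests_per_visit=2.0, cost_per_test=150,
                                        rework_rate=0.30)

# Hypothetical chart-reviewing clinician: low volume, fewer tests, little rework.
careful = aggregate_cost_per_resolved_case(visits_per_day=10, cost_per_visit=150,
                                           tests_per_visit=0.5, cost_per_test=150,
                                           rework_rate=0.05)

print(f"Test-heavy clinician:   ~${fast:.0f} per resolved case")
print(f"Chart-review clinician: ~${careful:.0f} per resolved case")
```

Under these made-up numbers the high-volume, test-heavy clinician looks far more productive per day, yet costs the system more per case actually resolved; the point is the shape of the arithmetic, not the specific figures.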

One of my real concerns is that we are training an entire generation of providers in this volume-oriented, RVU-production approach.  These folks may be high performers now, but when the value shift comes, they are going to have to learn a whole new set of skills.  More worrisome, entire practices are being optimized under Six Sigma processes for greatest productivity.  Such a practice will have a real problem adapting to value-based care, because that adaptation is a cultural shift.  It could hamper a health system’s ability to pivot from volume to value, with a resulting loss of competitiveness.

In the volume-to-value world, there are two types of productivity:

  • Fake productivity: high RVU generation achieved through cost shifting, waste, rework, and defects.
  • True productivity: consistent RVU generation through efficient testing and an appropriate number of follow-up visits, with the good outcomes to prove it.

I am sure that most providers want to work in the space of true productivity; after all, it is the ideal model they learned as students.  Fake productivity is simply a maladaptive response to external pressures, and it should not be conflated with true productivity.

2 thoughts on “Productivity in medicine – what’s real and what’s fake?”

  • March 20, 2014 at 1:23 pm

    You have pinned down exactly the paradox of Pay for Performance: the metric becomes the measure, but are the measures robust enough to account for all the downstream effects that also carry costs? High RVU generation can lead to outsourcing development and production (like using testing or extenders): you book the revenue, but the costs (time, money, etc.) are off your books. Hence the revenue generation (and the RVUs) looks good. However, if there are downstream problems, the costs (i.e., re-visiting the work) can often be higher.

    A perfect example of this is the Boeing 787: brilliant technology, but hobbled by “novel production techniques” that outsourced key development work that used to be handled by Boeing itself. It looked great when it was announced a decade ago. But when those vendors made errors that became manifest in completed or near-completed aircraft, it was MUCH more expensive to fix.

    Put another way, spending an hour with that patient may be expensive at first, but if you prevent 3 ER visits, then it’s a good investment. Great post. Ajay

  • March 20, 2014 at 2:01 pm

    Ajay,
    Thanks for your thoughts. What I’m getting at (and what will be in future posts) is that the measures of downstream accounting have been insufficiently robust to combat this inefficient care model. Data science and advanced statistical computing techniques are now able to begin to tease out these relationships and to model care delivery (& assign contracts) based upon that data.
    Aetna’s CEO Mark Bertolini has been very clear that he is going after these relationships and will be discouraging this RVU-generation, defects-be-darned model of care.
    If you are an integrated system where insurer, hospital, and provider are all under the same roof, spending that hour with the patient may be the highest-return investment you can make.
