Further Development of a care model

Let’s go back to the care model we expanded upon in our prior post.  As alluded to, once interdependencies are considered, things get complicated fast.  This might not be as apparent in our four-stage ER care delivery model, but consider a larger process with six stages, where each stage can interact with every other stage.  See the figure below:

[Figure: six-stage process with interactions between stages]

For this figure, the generalized linear model with first-order interactions is:

[Equation: generalized linear model with all first-order interaction terms]
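To give a flavor of what such an equation looks like – a sketch assuming six stage variables X1 through X6 and all pairwise interactions, not the exact notation in the image:

$$g\big(E[Y]\big) = \beta_0 + \sum_{i=1}^{6}\beta_i X_i + \sum_{1 \le i < j \le 6}\beta_{ij} X_i X_j$$

Six main effects plus the fifteen pairwise interactions, on top of an intercept, already leaves more than twenty coefficients to estimate.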

A 23-term generalized linear model is probably not going to help anyone and is too unwieldy, so something needs to be done to get to the heart of the matter and create a model that is reasonably simple yet approximates this process well.  Multicollinearity is also an issue here.  So, the next step is to get the number of terms down to what matters.  This is probably best served by a shrinkage technique or a dimension reduction technique.

Shrinkage:  The LASSO immediately comes to mind because it shrinks coefficients toward zero – and can set them exactly to zero – giving variable selection that depends on the penalty parameter lambda.  Ridge regression doesn’t apply the same parsimony, so it keeps terms that may not help us simplify.  It has been pointed out to me that there is a technique called elastic net regularization which combines features of both the LASSO and ridge regression – seems worth a look.
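To give a flavor of how these options look in code – a minimal scikit-learn sketch on simulated data standing in for the stage and interaction terms (all names and numbers here are illustrative):

```python
# Minimal sketch of the shrinkage options above.  X stands in for the stage and
# interaction terms, y for an outcome such as length of stay -- simulated only.
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV, ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 21))                     # 6 main effects + 15 interactions
y = X[:, :4] @ np.array([2.0, -1.5, 1.0, 0.5]) + rng.normal(scale=1.0, size=500)
X = StandardScaler().fit_transform(X)              # penalties assume comparable scales

lasso = LassoCV(cv=5).fit(X, y)                                 # L1: zeroes out unhelpful terms
ridge = RidgeCV(alphas=np.logspace(-3, 3, 50)).fit(X, y)        # L2: shrinks, but keeps all terms
enet = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8, 0.95], cv=5).fit(X, y)  # blend of both

print("LASSO keeps", int(np.sum(lasso.coef_ != 0)), "of", X.shape[1], "terms")
print("Elastic net keeps", int(np.sum(enet.coef_ != 0)), "terms")
print("Ridge keeps all", int(np.sum(ridge.coef_ != 0)), "terms, just smaller")
```

On data like this, the LASSO and elastic net fits typically zero out most of the noise terms, which is exactly the parsimony we are after.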

Dimension Reduction:  First use principal component analysis (PCA) to find the combinations of terms that explain the most variation in the model, then use partial least squares (PLS), which constructs its components with the response in mind.
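Continuing the sketch above with the same simulated X and y, the dimension-reduction route might look like this (the component counts are arbitrary):

```python
# Principal component regression vs. partial least squares -- illustrative only.
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# PCA compresses the predictors without looking at y; a linear fit follows.
pcr = make_pipeline(PCA(n_components=4), LinearRegression()).fit(X, y)

# PLS chooses its components to covary with the response.
pls = PLSRegression(n_components=4).fit(X, y)

print("PCR R^2:", round(pcr.score(X, y), 3))
print("PLS R^2:", round(pls.score(X, y), 3))
```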

At this point, we probably have gone about as far as we can on a theoretical basis, and need to proceed on a more applied basis.  That will be a subject of future posts.

Thanks to Flamdrag5 for clarifying my thoughts on this post.

 

Some thoughts on Revenue Cycle Management, predictive analytics, and competing algorithms

After some reflection, this is clearly Part 5 of the “What medicine can learn from Wall Street” series.

It occurred to me while thinking about the staid subject of revenue cycle management (RCM) that this is a likely hotspot for analytics.   First, there is data – tons of data.  Second, it is relevant – folks tend to care about payment.

RCM is the method by which healthcare providers get paid: it begins with patient contact, runs through evaluation and treatment, and ends with submitting charges that are ultimately paid under contract.  Modern RCM goes beyond billing to include marketing, pre-authorization, completeness of the medical record to decrease denials, and ‘working’ the claims until payment is made.

Providers get paid by making claims.  Insurers ‘keep the providers honest’ by denying claims that are not properly 1) pre-authorized, 2) documented, or 3) medically indicated (etc.).  There is a tug of war between the two entities, which usually results in a relationship ranging from grudging wariness to outright war (with contracts terminated and legal challenges fired off).  Providers profit by extracting the maximum payment they are contractually allowed; insurers profit by denying payment so they can earn investment returns on the pool of reserves they hold.  Typically, the larger the reserve pool, the larger the profit.
Insurers silently fume at ‘creative coding’, where a change in coding rules causes a procedure or illness previously paid at a lower level to be paid at a much higher level.  Providers seethe at ‘capricious’ denials, which require staff work to supply whatever documentation is requested (perhaps relevant, perhaps not), and at ‘gotcha’ downcoding over a single missing piece of information.  In any case, there is plenty of work for the billing & IT folks on either side.

Computerized revenue cycle management seems like a solution until you realize that neither entity’s business model has changed – the same techniques on either side can now simply be automated.  Unfortunately, if the other guy does it, you probably need to as well – here’s why.

We could get into this scenario:  A payor (insurer), when evaluating claims, decides that there is more spend ($) on a particular ICD-9 diagnosis (or ICD-10 if you prefer) than expected and targets these claims for denial.  A provider would submit claims for this group, be denied on many of them, re-submit, be denied, and then either start ‘working’ the claims to gain value from them or, with a sloppy, lazy, or limited billing department, simply let them go (with resultant loss of the claim).  That would be a 3-12 month process.  However, a provider using descriptive analytics (see part 1) on, say, a weekly or daily basis would be able to see something was wrong more quickly – probably within three months – and gear up for quicker recovery.  A determined (and aggressive) payor could shift their denial strategy to a different ICD-9 and something similar would occur.  After a few cycles of this, a really astute provider might data mine the denials to identify which codes were being denied and set up a predictive algorithm to compare new denials against their old book of business.  This would identify statistical anomalies in new claims and could alert the provider to the algorithm the payor was using to target claims for denial.  By anticipating these denials, and either re-coding them or providing superior documentation to force the payor to pay (negating the beneficial effects of the payor’s claim denial algo), claims get paid in a timely and expected manner.  I haven’t checked out the larger vendors’ RCM offerings, but I suspect this is not far off.
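A very rough sketch of what that denial-mining step might look like – the claims table, its column names, and the thresholds are all hypothetical, and a real system would be far more involved:

```python
# Flag ICD codes whose recent denial rate has spiked relative to their own
# historical baseline.  Expects a claims DataFrame with columns:
#   icd_code, denied (0/1), submit_date  -- all hypothetical.
import pandas as pd
from scipy.stats import binomtest

def flag_denial_spikes(claims: pd.DataFrame, window_days: int = 30, alpha: float = 0.01) -> pd.DataFrame:
    cutoff = claims["submit_date"].max() - pd.Timedelta(days=window_days)
    recent = claims[claims["submit_date"] > cutoff]
    history = claims[claims["submit_date"] <= cutoff]

    rows = []
    for code, grp in recent.groupby("icd_code"):
        base = history.loc[history["icd_code"] == code, "denied"]
        if len(base) < 50 or len(grp) < 10:
            continue  # not enough history or recent volume to judge
        test = binomtest(int(grp["denied"].sum()), len(grp), base.mean(), alternative="greater")
        if test.pvalue < alpha:
            rows.append((code, base.mean(), grp["denied"].mean(), test.pvalue))
    return pd.DataFrame(rows, columns=["icd_code", "baseline_rate", "recent_rate", "p_value"])
```

Run weekly (or daily), a report like this would surface a payor’s shift to a new target code within one or two cycles rather than months.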

I could see a time when a very aggressive payor (perhaps under financial strain) strikes back with an algorithm designed to deny some, but not all, claims on a semi-random basis to ‘fly under the radar’ and escape the provider’s simpler detection algorithms.  A more sophisticated algorithm based upon anomaly detection techniques could then be used to identify these denials…  This seems like a nightmare to me.  Once things get to this point, it’s probably only a matter of time until these games are addressed by the legislature.
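One way such an anomaly-detection layer might be sketched: fit an isolation forest on features of historically paid claims, then score incoming denials – a denial of a perfectly ordinary-looking claim is the suspicious kind.  The feature names below are hypothetical:

```python
# Score denied claims by how "normal" they look relative to paid claims.
# Denials of very normal-looking claims suggest semi-random denial tactics.
from sklearn.ensemble import IsolationForest

FEATURES = ["billed_amount", "num_procedures", "days_to_adjudication", "patient_age"]

def score_denials(paid_claims, denied_claims, contamination=0.05):
    model = IsolationForest(contamination=contamination, random_state=0)
    model.fit(paid_claims[FEATURES])
    # decision_function: higher = more typical of the paid-claim population.
    return model.decision_function(denied_claims[FEATURES])
```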

Welcome to the battles of the competing algorithms.  This is what happens in high-frequency trading.  Best algorithm wins, loser gets poorer.

One thing is sure: in negotiations, the party who holds and evaluates the data holds the advantage.  The other party will forever be negotiating from behind.

P.S.  As an aside, with the ultra-low short-term interest rates after the 2008 financial crisis, the time value of money is near all-time lows.  Delayed payments are an annoyance, but apart from cash flow there is currently little real advantage to delaying them.  Senior management who lived through or studied the higher short-term interest rates of the 1970s-1980s will recall the importance of managing the ‘float’ and of good treasury/receivables operations.  Changing economic conditions could make this a hot topic again.
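To put rough numbers on that – the figures here are purely illustrative:

```python
# Back-of-the-envelope value of the float: simple interest earned by holding a
# $1,000,000 payment for 90 days at two different short-term rates.
principal, days = 1_000_000, 90
for rate in (0.0025, 0.15):      # post-2008-style rate vs. an early-1980s-style rate
    print(f"{rate:.2%}: ${principal * rate * days / 365:,.0f} earned on the float")
```

At a quarter of a percent, the float on a million dollars for a quarter is pocket change; at early-1980s rates it is tens of thousands of dollars – which is why managing the float mattered so much then.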

Developing a simple care delivery model further – dependent interactions

Let’s go back to our simple generalized linear model of care delivery from this post:

[Figure: simplified ER process]

With its resultant generalized linear function:

[Equation: four-term generalized linear model]
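Sketched out – assuming four stage variables X1 through X4 for the four steps in the figure, rather than the exact notation in the image:

$$g\big(E[Y]\big) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4$$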

This model, elegant in its simplicity, does not account for the inter-dependencies in care delivery.  A more true-to-life revised model is:

[Figure: ER process with interdependencies]

Here there are options for back-and-forth pathways, denoted in red, depending on new clinical information.

A linear model that takes into account these inter-dependencies would look like this:

[Equation: generalized linear model with interaction terms]
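As a sketch – which interaction terms appear depends on which red pathways exist; four are shown here purely for illustration:

$$g\big(E[Y]\big) = \beta_0 + \sum_{i=1}^{4}\beta_i X_i + \beta_{12} X_1 X_2 + \beta_{23} X_2 X_3 + \beta_{24} X_2 X_4 + \beta_{34} X_3 X_4$$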
Including these interactions, we go from 4 terms to 8.  And this is an overly simplified model!  By drilling down into one aspect of healthcare delivery in a typical PI/Six Sigma environment, it’s not hard to imagine well over four points of contact/patient interaction, each with its own set of interdependencies.  Imagine a process with 12-15 sub-processes, most of them having on average six (6) interdependencies.  Then add the possibility of multiple interdependencies among the processes…  This doesn’t even account for an EMR dataset where the number of columns could be… 350?  Quickly, your ‘simple’ linear model is looking not so simple, with easily over 100 terms in the equation, which also creates problems when solving for the model.

Not to despair!  There are ways to take this formula with a high number of terms and create a more manageable model as a reasonable approximation.  The mapping and initial modeling of the care process is of greatest utility from an operational standpoint: it allows for understanding and guides interpretation of the ultimate data analysis.
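To make the arithmetic concrete – treating each listed interdependency as its own term, which overstates things somewhat if pathways are shared pairwise:

$$15 \text{ main effects} + 15 \times 6 \text{ interaction terms} \approx 105 \text{ coefficients}$$

and that is before any higher-order interactions or EMR covariates are added.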

I am a believer that statistical computational analysis can identify the terms that matter most in the model.  By inference, these inputs will have the most effect upon outcome, and they can show management where precious effort, resources, and time should be directed to maximize outcomes.

 

What Medicine can learn from Wall Street – Part 4 – Portfolio Management and complex systems

Let’s consider a single-security trader.

All they trade is IBM.  All they need to know is that security and the indexes that include it.  But start trading another security, such as Cisco (CSCO), while holding a position in IBM, and they have a portfolio.  Portfolios behave differently – profiting or losing on an aggregate basis from the combination of movements in multiple securities.  For example, if you hold 10,000 shares each of IBM and CSCO, and IBM appreciates by a dollar while CSCO loses a dollar, you have no net gain or loss.  That’s called portfolio risk.

Everything in the markets is connected.  For example, if you’re an institutional trader with a large (1,000,000+ share) position in IBM, you know that you can’t sell quickly without tanking the market.  That’s called execution risk.  Also, once the US market closes (less of a concern these days than 20 years ago), there is less liquidity.  Imagine you are this large institutional trader, at home at 11pm.  A breaking news story develops about a train derailment of toxic chemicals near IBM’s research campus causing fires.  You suspect it destroyed all of their most prized experimental hardware, which will take years to replace.  Immediately, you know that you have to get out of as much IBM as possible to limit your losses.  However, when you get to your trading terminal, the first bid in the market is $50 lower than that afternoon’s price, for a minuscule 10,000 shares.  If you sell at that price, the next bid will be even lower, for an even smaller amount.  You’re stuck.  However, there is a relationship between IBM and the general market called beta – a measure of how closely the stock moves with the market.  Since you cannot get out of your IBM directly, you sell a calculated number of S&P futures short in the open market to simulate a short position in IBM.  You’re going to take a bath, but not as bad as the folks who went to bed early and didn’t react to the news.
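A rough sketch of how that futures hedge might be sized – every number here is made up for illustration:

```python
# Beta-weighted hedge: short enough S&P 500 futures notional to offset the
# dollar exposure of the stock position.  All inputs are hypothetical.
position_shares = 1_000_000
price = 120.00            # last IBM trade
beta = 1.05               # IBM's beta to the S&P 500
index_level = 1400.0      # S&P 500 futures price
multiplier = 250          # dollars per index point, full-size contract

contracts_to_sell = position_shares * price * beta / (index_level * multiplier)
print(round(contracts_to_sell), "contracts short")   # -> 360 contracts short
```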

A sufficiently large portfolio with >250 stocks will approximate broader market indexes (such as the S&P 500 or Russell indexes), depending upon composition.  Its beta will be in the 0.9-1.1 range, with 1.0 meaning it moves in lockstep with the market.  Traders attempt to improve upon this expected rate of return by strategic buys and sells of the portfolio components.  Any extra return above the expected return of the underlying index is alpha.  Alpha is what you pay managers for, instead of just purchasing the Vanguard S&P 500 index fund and forgetting about it.  It’s said that most managers underperform the market indexes.  A discussion of Modern Portfolio Theory is beyond the scope of this blog, but you can go here for more.
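For reference, the standard textbook definitions behind those two terms (with $R_p$, $R_m$, and $R_f$ the portfolio, market, and risk-free returns):

$$\beta_p = \frac{\operatorname{Cov}(R_p, R_m)}{\operatorname{Var}(R_m)}, \qquad \alpha_p = R_p - \big[ R_f + \beta_p (R_m - R_f) \big]$$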

So, excepting an astute manager delivering alpha (or an undiversified portfolio), the larger and more diversified the portfolio, the more it behaves like an index and the less dependent it is upon the behavior of any individual security.  Also, without knowing the exact composition of the portfolio and its proportions, its overall behavior can be pretty opaque.

MAIN POINT: The portfolio behaves as its own process: the sum of the interactions of its constituents.

 

I postulate that the complex system of healthcare delivery behaves like a multiple-security portfolio.  It is large, complex, and, without a clear understanding of its constituent processes, potentially opaque.  The individual components of care delivery summate to form an overall process of care delivery.  The over-arching hospital, outpatient, and office care delivery process is a derivative process – integrating multiple underlying sub-processes.

We trace, review, and document these sub-processes to better understand them.  Once they are understood, metrics can be established and process improvement tools applied.  The PI team is called in, and a LEAN/Six Sigma analysis is performed.  Six Sigma process analytics typically focus on one sub-process at a time to improve its efficiency.  Improving a sub-process’s efficiency is a laudable and worthwhile goal which can result in cost savings, better care outcomes, and reduced healthcare prices.  However, there is also the potential for Merton’s ‘unintended consequences‘.

Most importantly, the results of the Six Sigma PI need to be understood in the context of the overall enterprise – the larger complex system.  Optimizing a sub-process while creating a bottleneck in the larger enterprise process is not progress!
This is because the choice of a wrong metric, or overzealous overfitting, may improve the individual process while creating a perturbation in the system (a ‘bottleneck’) whose negative effects are, confoundingly, more problematic than the original problem.  Everyone thinks they are doing a great job, but things get worse, senior management demands an explanation, and a lot of finger pointing ensues.  These effects are due to dependent variables or feedback loops that exist in the system’s process.  Close monitoring of the overall process will help in identifying unintended consequences of process changes.  I suspect most senior management folks will recall a time when an overzealous cost-cutting manager cut in-house transport to the point where equipment idled and length of stay (LOS) increased – i.e., the 0.005% saved by the patient-transport re-org cost the overall institution 2-3% until the problem was fixed.

There is a difference between true process improvement and goosing the numbers.  I’ve written a bit about this in real vs. fake productivity and my post about cost shifting.  I strongly believe it is incumbent upon senior management to monitor middle management and prevent these outcomes.  Well-thought-out metrics and clear missions and directives can help.  Specifically, senior management needs to be aware that optimization of sub-processes happens in the setting of the larger overall process, and that any such optimization must also optimize the overall care process (the derivative process).  An initiative that fails to meet both the local and the global goals is a failed initiative!

It’s the old leaky-pipe analogy: put a band-aid on the pipe to contain one leak, and the increased pressure causes the pipe to burst somewhere else, necessitating another band-aid.  You can’t patch the pipe enough – it’s too old.  The whole pipe needs replacement, and the sum of repairs over time exceeds the cost of simply replacing it.

I’m not saying that process improvement is useless – far from it; it is necessary to optimize efficiency and reduce waste to survive in our less-than-forgiving healthcare business environment.  However, consideration of the ‘big picture’ is essential – and the big picture can be mathematically modeled.  The utility of modeling is to gain an understanding of how the overall complex process responds to changes, so as to avoid unintended consequences of system perturbation.