Health Analytics Summit 2016 – Summary


I was shut out last year from Health Catalyst’s Health Analytics Summit in Salt Lake City – the fire marshal’s limit for the Grand America hotel ballroom is about 1,000 people, and with vendors last year there were simply not enough slots.  This year I registered early.  At the 2015 HIMSS Big Data and Medicine conference in NYC, the consensus was that this conference offered lots of practical insights.

The undercurrents of the conference as I saw them:

  • Increasing realization that in accountable care, social ills impact the bottom line.
  • Most organizations are still at the descriptive analytics stage, but a few sophisticated players have progressed to predictive.  However, actionable cost improvements are achievable with descriptive reporting alone.
  • Dashboarding is alive and well.
  • EDW solutions require data governance.
  • Data Scientists & statistical skills remain hard to come by in healthcare & outside of major population centers.

A fascinating keynote talk by Anne Milgram, former NJ attorney general, showed the striking parallels between ER visits/hospitalizations and arrests/incarcerations.  In Camden, NJ, there was a two-thirds overlap between superutilizers of healthcare and of the criminal justice system (CJS).  Noting that CJS data is typically public, she hinted it could potentially be integrated with healthcare data for predictive modeling.  Certainly, from an insurer’s viewpoint, entry into the CJS is associated with higher healthcare/insured costs.  As healthcare systems move further into that insurer-like role via value-based payments, this may be important data to integrate.

I haven’t listened to Don Berwick MD much – I will admit a “part of the problem” bias stemming from his tenure as CMS administrator and his estimate that 50% of healthcare is “waste” (see Dr. Torchiana below).  I was floored that Dr. Berwick appeared to be pleading for the soul of medicine – “less stick and carrot,” “we have gone mad with too many (useless) metrics.”  But he did warn there will be winners and losers in medicine going forward, signaling to me that physicians, particularly specialists, are targeted to be the losers.

David Torchiana MD of Partners Healthcare followed with a nuanced talk reminding us that there is value in medicine – and that much of what we flippantly call waste has occurred in the setting of a striking reduction in mortality from treated disease over the last 50 years.  It was a soft-spoken counterpoint to Dr. Berwick’s assertions.

Toby Freier and Craig Strauss MD both demonstrated how analytics can significantly improve health while reducing costs, both at the community level and for specialized use cases.  New Ulm Medical Center’s example demonstrated 1) the nimbleness of a smaller entity in evaluating and implementing optimized programs and processes on a community-wide basis, while Minneapolis Heart Institute demonstrated 2) how advanced use of analytics can save money by reducing complications in high-cost situations (e.g. CABG, PTCA, HF) and 3) how analytics can be used to answer clinical questions for which there is no good published data (e.g. survivability of TAVR in 90-year-olds).

Taylor Davis of KLAS Research gave a good overview of analytics solutions and satisfaction with them.  The take-home points were that the large enterprise solutions (Oracle et al.) had lower levels of customer satisfaction than the healthcare-specific vendor solutions (Health Catalyst, Qlik).  Integrated BI solutions provided by the EHR vendor within the EHR, while they integrated well, were criticized as underpowered and insufficient for more than basic reporting.  However, visual exploration tools (Tableau) were nearly as well received as the dedicated healthcare solutions.  Good intelligence on these solutions.

The conference started off with an “analytics walkabout” where different healthcare systems presented their successes and experiences with analytics projects.  Allina Health was well represented with multiple smart and actionable projects – I was impressed.  One Allina project, predicting who would benefit from closure devices in the cath lab (near and dear to my heart as an interventional radiologist), met the goals of both providing better care and saving costs by avoiding complications.  There was also an interesting presentation from AMSURG about a project integrating socioeconomic data with GI endoscopy – a very appropriate use of analytics for the outpatient world, speaking from some experience.  These are just a few of the 32 excellent presentations in the walkabout.

I’ll blog about the breakout sessions separately.

Full Disclosure: I attended this conference on my own, at my own expense, and I have no financial relationships with any of the people or entities discussed.  Just wanted to make that clear.  I shill for no one.

 

Value and Risk: the Radiologist’s perspective (Value as risk series #4)

Much can be written about value-based care. I’ll focus on imaging risk management from a radiologist’s perspective. The hospital’s perspective, the insurer’s perspective, and the general concept have been discussed previously.

When technology was in shorter supply, radiologists were gatekeepers of limited Ultrasound, CT and MRI resources. Need-based radiologist approval was necessary for ‘advanced imaging’. The exams were expensive and needed to be protocoled correctly to maximize utility. This encouraged clinician-radiologist interaction – thus our reputation as “The Doctor’s doctor.”

In the 1990s-2000s, there was an explosion in imaging utilization and installed equipment. Imaging was used to maximize throughput, minimize patient wait times, and decrease length of hospital stays. A more laissez-faire attitude prevailed, and gatekeeping was frowned upon.

With the transition to value-based care, the gatekeeping role of radiology will return. Instead of allocating access to imaging resources on the basis of limited availability, we need to consider ROI (return on investment): is the imaging study likely to improve the outcome relative to its cost? (1) Clinical Decision Support (CDS) tools can help automate assessment of imaging appropriateness and value. (2)

The bundle’s economics amount to capitation of a single care episode for a designated ICD-10 encounter, extending across the inpatient stay and related readmissions up to 30 days after discharge (CMS BPCI Model 4). A review of current Model 4 conditions shows mostly joint replacements, spinal fusion, and our example case of CABG (coronary artery bypass graft).

Post CABG, a daily Chest X-ray (CXR) protocol may be ordered – very reasonable for an intubated & sedated patient. However, an improving non-intubated awake patient may not need a daily CXR. Six Sigma analysis would empirically classify this as waste – and a data analysis of outcomes may confirm it.

Imaging-wise, patients need a CXR preoperatively, & periodically thereafter. A certain percentage of patients will develop complications that require at least one CT scan of the chest. Readmissions will also require re-imaging, usually CT. There will also be additional imaging due to complications or even incidental findings if not contractually excluded (CT/CTA/MRI Brain, CT/CTA neck, CT/CTA/US/MRI abdomen, Thoracic/Lumbar Spine CT/MRI, fluoroscopy for diaphragmatic paralysis or feeding tube placement, etc…). All these need to be accounted for.

[Image: estimated CABG-related imaging spend, generated with the Nieman HPI ICE-T tool – www.n2value.com]

 

In the fee-for-service world, the ordered study is performed and billed.  In bundled care, payments for the episode of care are distributed to stakeholders according to a pre-defined allocation.

Practically, one needs to retrospectively evaluate, over a multi-year period, how many and what type of imaging studies were performed in patients with the bundled procedure code. (3) It is helpful to have sufficient statistical power for the analysis and to note trends in both the number of studies and reimbursement.  Breaking down the total spend into professional and technical components also helps in understanding all stakeholders’ viewpoints.  Evaluate both the number of studies performed and the charges, which translate into dollars when multiplied by your practice’s reimbursement percentage.  Forward-thinking members of the radiology community at the Nieman HPI are providing DRG-related tools such as ICE-T to help estimate these costs (used in the above image).  Ultimately one ends up with a formula similar to this:

CABG imaging spend = CXRs + CT chest + CTA chest + other imaging studies
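As a rough sketch of that retrospective exercise (every study count, charge, and reimbursement percentage below is hypothetical, purely for illustration), the formula can be turned into a quick estimate in R:

# Hypothetical per-episode imaging utilization for a CABG bundle (all numbers illustrative)
studies <- data.frame(
  study       = c("CXR", "CT chest", "CTA chest", "Other imaging"),
  mean_count  = c(6.0, 0.4, 0.15, 0.3),   # average studies per episode (assumed)
  mean_charge = c(45, 350, 600, 500)      # billed charge per study in dollars (assumed)
)
reimb_pct <- 0.35                          # realized reimbursement as a fraction of charges (assumed)
studies$expected_dollars <- studies$mean_count * studies$mean_charge * reimb_pct
cabg_imaging_spend <- sum(studies$expected_dollars)   # expected imaging allocation per episode
cat(sprintf("Estimated imaging spend per CABG episode: $%.2f\n", cabg_imaging_spend))

Run against several years of actual claims rather than these made-up figures, the same arithmetic gives the practice a defensible starting point for negotiating its share of the bundle.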

Where money will be lost is at the margins – patients who need multiple imaging studies, either due to complications or incidental findings. With a 2-3% death rate for CABG, and recognizing that 30% of all Medicare expenditures are incurred by the 5% of beneficiaries who die each year, with a third of that cost in the last month of life (Barnato et al.), this must be accounted for. An overly simplistic evaluation of the imaging needs of CABG will result in underallocation of funds for the radiologist, with per-study payment dropping – the old trap of running faster to stay in place.

Payment to the radiologist could follow one of two models:

First, a fixed payment per RVU. This is advantageous to the radiologist because it insulates them from risk-sharing: ordered studies are read for a negotiated rate, and the hospital bears the cost of excess imaging. For a radiologist in an independent private practice providing services through an exclusive contract, allowing the hospital to assume the risk on the bundle may be best.

Second, a fixed (capitated) payment per bundled patient for imaging services may be made to the radiologist, either as a fixed dollar amount or as a fixed percentage of the bundle (Frameworks for Radiology Practice Participation, Nieman HPI).  This puts the radiologist at risk, in a potentially harmful way.  The disconnect is that the supervising physicians (cardiothoracic surgeon, intensivist, hospitalist) will be focused on improving outcomes, decreasing length of stay, or reducing readmission rates – not on imaging volume.  Ordering imaging studies (particularly advanced imaging) may increase diagnostic certainty and help fulfill their goals.  This has the unpleasant consequence that the radiologist’s per-study income falls even though they have no control over the ordering of the studies; in fact, it may benefit other parties to overuse imaging to meet other quality metrics.  The radiology practice manager should proceed with caution if the radiologists are in an employed model but the CT surgeons & intensivists are not.  Building periodic reviews of expected vs. actual imaging use into the contract, with potential re-allocations of the bundle’s payment, might help curb over-ordering.  Interestingly, in this model the radiologist profits by doing less!
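A toy comparison of the two payment models (the RVU rate, per-study RVUs, and capitated allocation below are invented for illustration) makes the incentive problem concrete:

# Radiologist revenue per bundled patient under the two models (all figures hypothetical)
studies_per_patient <- 3:10            # imaging studies ordered per bundled episode
rvu_per_study       <- 0.8             # average work RVUs per study (assumed)
rate_per_rvu        <- 38              # negotiated dollars per RVU (assumed)
capitated_payment   <- 150             # fixed imaging allocation per bundled patient (assumed)

fee_for_rvu <- studies_per_patient * rvu_per_study * rate_per_rvu   # grows with volume
capitated   <- rep(capitated_payment, length(studies_per_patient))  # flat regardless of volume
per_study   <- round(capitated_payment / studies_per_patient, 2)    # capitated dollars per study shrink

data.frame(studies_per_patient, fee_for_rvu, capitated, capitated_per_study = per_study)

Under the capitated column, every additional study dilutes the per-study payment – which is exactly why the periodic true-up described above matters.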

Where the radiologist can add value is in analysis – deferring imaging that is unlikely to impact care. Reviewing data and creating predictive models of outcomes adds value while, if correctly designed, avoiding more than the standard baseline of risk (see the Johns Hopkins sepsis prediction model). In patients unlikely to have poor outcomes, additional imaging requests can be gently denied and clinicians reassured, e.g., “This patient has a 98% chance of being discharged without readmission. Why a lumbar spine MRI?” (c.f. AK Moriarty et al.) Or, “In this model, patients with these parameters only need a CXR every third day. Let’s implement this protocol.” The radiologist returns to a gatekeeping role, creating value by managing risk intelligently.

Let’s return to our risk/reward matrix:

[Image: risk/reward matrix – www.n2value.com]

 

For the radiologist in the bundled example receiving fixed payments:

 

Low Risk/Low Reward: Daily CXR’s for the bundled patients.

 

High Risk/Low Reward: Excess advanced imaging (more work for no change in pay)

 

High Risk/High Reward: Arbitrarily denying advanced imaging without a data-driven model (bad outcomes = loss of job, lawsuit risk)

 

Low Risk/High Reward: Analysis & Predictive modeling to protocol what studies can be omitted in which patients without compromising care.

 

I, and others, believe that bundled payments have been put in place not only to decrease healthcare costs, but to facilitate transitioning from the old FFS system to the value-based ‘at risk’ payment system, and ultimately capitated care. (Rand Corp, Technical Report TR-562/20) By developing analytics capabilities, radiology providers will be able to adapt to these new ‘at-risk’ payment models and drive adjustments to care delivery to improve or maintain the community standard of care at the same or lower cost.

  1. B Ingraham, K Miller, et al. J Am Coll Radiol, 2016, in press.
  2. AK Moriarty, C Klochko, et al. J Am Coll Radiol 2015;12:358-363.
  3. D Seidenwurm, FJ Lexa. J Am Coll Radiol, 2016, in press.

Where does risk create value for a hospital? (Value as Risk series post #3)

Let’s turn to the hospital side.

For where I develop the concept of value as risk management, go here first; for where I discuss the value in risk management from an insurer’s perspective, click here second.

The hospital is an anxious place – the old, fat fee-for-service margins are shrinking, and major rule-set changes keep coming. Managing revenue cycles requires committing staff resources (overhead) to compliance-related functions, further shrinking margins. More importantly, that resource commitment postpones other potential initiatives. Maintaining compliance with Meaningful Use (MU) 3 cum MACRA, PQRS, ICD-10 (11?) and other mandated initiatives while dealing with ongoing reviews…

How a health insurer uses risk to define value (Value as risk series)

Let’s continue with value as risk. If you missed it, here’s the first post.

Providers assert that insurers hold most if not all the cards, collecting premiums and denying payment while holding large datasets of care patterns. I’ve heard, “if only we had access to that data, we could compete on a level playing field.”

I am neither an apologist for nor an insider in the insurance industry, but is this a “grass is always greener” problem? True, the insurer has detailed risk analysis on the patient & provider. Yes, the insurer does get to see what all providers are charging and coding in their coverage. And the insurer can deny or delay payment knowing that a certain percentage of these claims will not be re-submitted.

But the insurer also has deep institutional knowledge in risk-rating its clients. Consider the history of health insurance in the US: advancing medical knowledge drove up treatment costs. When medical cost inflation exceeded CPI, insurers modeled and predicted estimated spend with hard data. Individuals with medical conditions that would ultimately cost more than the premiums received failed medical underwriting. Insurers are private, for-profit businesses, and will not willingly operate at a loss.

To optimize profitability, insurers collected data from not only the insurance application, but also claims data, demographic data from consumer data brokers, financial data, information from other insurers (auto, home, life), and probably now Internet data (Facebook, etc…) to risk-rate the insured. Were they engaged in a risky lifestyle? Searching the net for serious genetic diseases?

Interestingly, the ACA changed this to permit only 1) age, 2) smoking, and 3) geographic location as pricing factors in the marketplace products. The marketplace products have been controversial, with buyers complaining of networks so narrow as to be unusable, and insurers complaining of a lack of profitability, which has caused some to leave the market. Because the marketplace pools must take all comers, and many who entered the pools had not previously had insurance, there is some skew towards high-cost, sicker patients.

Consider a fictional medium-sized regional health insurer in three southern states specializing in group (employer) insurance – Southern Health. They are testing an ACA marketplace product. The geographic area they serve has a few academic medical centers, many community hospitals competing with each other, and only a few rural hospitals. In the past, they could play the providers off one another and negotiate aggressively, even sometimes paying lower rates than Medicare.

However, one provider – a fictional two-hospital system, Sun Memorial – hired a savvy CEO who developed profitable cardiac and oncology service lines by leveraging its reputation. Over the last 5 years, the two-hospital group has merged with and acquired hospitals, forming a 7-hospital system, with 4 more mergers in late-stage negotiations. The hospital system moved its physicians to an employed model and then, at the next contract renewal, demanded above-Medicare rates. Southern Health therefore did not renew its contract with Sun Memorial. In the past, such maneuvers ended conflict quickly as the hospital suffered cash flow losses. Now, however, with fewer local alternatives to Sun Memorial, patients were furiously complaining to both Southern Health and their employers’ HR departments that their insurance would not cover their bills. The local businesses purchasing benefits through Southern Health pushed back on the insurer, threatening not to renew! The contract was eventually resolved at Medicare rates, with retroactive coverage.

The marketplace product is purchased mostly by the rural poor and operates, on balance, at break-even to a slight loss. As Southern Health’s CEO, you have received word that your largest customer, a university, has approached Sun Memorial about creating a capitated product – cutting you out entirely. The CEO of Sun Memorial has also contacted you about starting an ACO together.

Recall the risk matrix:

[Image: risk/reward matrix – www.n2value.com]

 

Low Risk/Low return: who cares?

High Risk/Low return: cancelling provider contracts as a negotiating ploy.

High Risk/High return: Entering into an ACO with Sun Memorial. Doing so shares your data with them & teaches them how to do analytics. This may negatively impact future negotiations and might even help them to structure the capitated contract correctly.

Low Risk/High Return: Pursue lobbying and legal action at the state/federal level to prevent further expansion of Sun Memorial. Maintain existing group business. Withdraw from unprofitable ACA marketplace business.

As CEO of Southern Health, you ultimately decide to hinder the chain’s acquisition strategy. You also withdraw from the marketplace but may reintroduce it later. Finally, you do decide to start an ACO – but with the primary competitor of Sun Memorial. You will give them analytic support as they are weak in analytics, thereby maintaining your competitive advantage.

From the insurer’s perspective the low risk and high return move is to continue the business as usual (late stage, mature company) and maintain margins in perpetuity. Adding new products is a high-risk high reward ‘blue ocean’ strategy that can lead to a new business line and either profit augmentation or revitalization of the business. However, in this instance the unprofitable marketplace product should be discontinued.

 

For the insurer, value is achieved by understanding, controlling, and minimizing risk.

 

Next, we’ll discuss things from the hospital system’s CEO perspective.

 

Defining value in healthcare through risk


For a new definition of value, it’s helpful to go back to the conceptual basis of payment for medical professional services under the RBRVS. Payment for physician services is divided into three components: Physician work, practice expense, and a risk component.

Replace physician with provider, and then extrapolate to larger entities.

Currently, payer (insurer, CMS, etc…) and best-practice (specialty societies, associations like HFMA, ancillary staff associations) guidelines exist. This has reduced some variation among providers, and there is active interest in continuing in this direction. For example, a level 1 E&M clearly differs from a level 5 E&M – one might disagree whether a visit is a level 3 or 4, but you shouldn’t see a level 1 upcoded to a 5. Physician work is generally quantifiable in either patients seen or procedures done, and in any corporate/employed practice, most physicians will be working towards the level of productivity they have contractually agreed to, or they will be let go or have their contracts renegotiated. Let’s hope they are fairly compensated for their efforts and not subjected solely to RVU production targets, which are falling out of favor vs. more sophisticated models (c.f. Craig Pedersen, Insight Health Partners).

Unless there is mismanagement in this category, provider work is usually controllable, measurable, and with some variation due to provider skill, age, and practice goals, consistent. For those physicians who have been vertically integrated, their current EHR burdens and compliance directives may place a cap on productivity.

Practice expenses represent the fixed and variable expenses in healthcare – rent, taxes, facility maintenance, and consumables (medical supplies, pharmaceuticals, and medical devices). Most are fairly straightforward from an accounting standpoint. Medical supplies, pharmaceuticals, and devices are expenses that need management, with room for opportunity. ACO and super ACO/CIO organizations and purchasing consortiums such as Novation, Amerinet, and Premier have been formed to help manage these costs.

Practice expense costs are identifiable and, once identified, controllable. Initially, Six Sigma management tools work well here. For all but the most peripheral expenses, this has happened or is happening, and there are no magic bullets beyond continued monitoring of systems & processes as they evolve over time, since drift and ripple effects may impact previously optimized areas.

This leaves the last variable – risk. Risk was thought of as a proxy for malpractice/legal costs. However, in the new world of variable payments, there is not only downside risk in this category, but the pleasant possibility of upside risk.

It stands to reason that if your provider costs are reasonably fixed, and practice expenses are as fixed as you can get them at the moment, you should look to the risk category as an opportunity for profit.

As a Wall St. options trader, the only variable that really mattered to me in the price of a derivative product was the volatility of the option – the measure of its inherent risk. We profited by selling options (effectively, insurance) when that implied volatility was higher than the actual market volatility, or buying them when it was too low. Why can’t we do the same in healthcare?

What is value in this context? The profit or loss arising from the assumption and management of risk. Therefore, the management of risk in a value-based care setting allows for the possibility of a disproportionate financial return.

[Image: risk/reward matrix – www.n2value.com]

The sweet spot is Low Risk/High Return. This is where discovering a fundamental mispricing can return disproportionately vs. exposure to risk.

Apply this risk matrix to:

  • 1 – A medium-sized insurer struggling with hospital mergers and former large employer customers bypassing the insurer to contract directly with the hospitals.
  • 2 – A larger integrated hospital system with at-risk payments/ACO model, employed physicians, and local competitors, which is struggling to provide good care in a low-margin environment.
  • 3 – A group radiology practice which contracts with a hospital system and a few outpatient providers.

& things get interesting. On to the next post!

Some reflections on the ongoing shift from volume to value

As an intuitive and inductive thinker, I often use facts to prove or disprove my biases. This may make me a poor researcher, though I believe I would have been popular in circa 1200 academic circles. Serendipity plays a role; yes I’m a big Nassim Taleb fan – sometimes in the seeking, unexpected answers appear. Luckily, I’m correct more often than not. But honestly – in predicting widely you miss more widely.

One of my early mentors from Wall St. addressed this with me in the infancy of my career – take Babe Ruth’s career batting average of .342. That meant that roughly two out of three times at bat, Babe Ruth failed to get a hit. However, he was trying to hit home runs. There is a big difference between being a base-hit player and a home-run hitter. What stakes are you playing for?

With that said, this Blog is for exploring topics I find of interest pertaining mostly to healthcare and technology. The blog has been less active lately, not only due to my own busy personal life (!) but also because I have sought more up-to-date information about advancing trends in both the healthcare payment sector and the IT/Tech sector as it applies to medicine. I’m also diving deeper into Radiology and Imaging. As I’ve gone through my data science growth phase, I’ll probably blog less on that topic except as it pertains to machine learning.

The evolution of the volume to value transition is ongoing as many providers are beginning to be subject to at least a degree of ‘at-risk’ payment. Stages of ‘at-risk’ payment have been well characterized – this slide by Jacque Sokolov MD at SSB solutions is representative:

[Slide: Jacque Sokolov MD, SSB Solutions – stages of at-risk payment, slide 1]

In 2015, approximately 20% of Medicare spend was value-based, with CMS’s goal of 50% by 2020. Currently providers are ‘testing the waters,’ with fewer than 20% of providers accepting more than 40% risk-based payment (c.f. Kimberly White MBA, Numerof & Associates). Obviously the more successful of these will be the larger, more data-rich and data-utilizing providers.

However, all is not well in the value-based-payment world. In fact, this year United Health Care announced it is pulling its insurance products out of most of the ACA exchange marketplaces. While UHC products were a small share of the exchanges, it sends a powerful message when a major insurer declines to participate. Recall that most ACOs (~75%) did not produce cost savings in 2014, although more recent data is more encouraging (c.f. Sokolov).  Notably, of the 32 Pioneer ACOs that started, only 9 are left (~30%) (ref. CMS). The road to value is not a certain path at all.

So, with these things in mind, how do we negotiate the waters? Specifically, as radiologists, how do we manage the shift from volume to value, and what does it mean for us? How is value defined for Radiology? What is it not? Value is NOT what most people think it is. I define value as: the cost savings arising from the assumption and management of risk. We’ll explore this in my next post.

Memory requirements for Convolutional Neural Network analysis of brain MRI.

I’m auditing the wonderful Stanford CS 231n class on Convolutional Neural Networks in Computer Vision.

A discussion the other day was on the amount of memory required to analyze one image as it goes through the Convolutional Neural Network (CNN). This was interesting – how practical is it for application to radiology imaging?  (To review some related concepts see my earlier post : What Big Data  Visualization Analytics can learn from Radiology)

Take your standard non-contrast MRI of the brain. There are 5 sequences (T1, T2, FLAIR, DWI, ADC) – for the purposes of this analysis, all axial. Assume a 320×320 viewing matrix for each slice. Therefore, one image will be a 320×320×5 matrix, suitable for processing as a 512,000-byte vector. Applying this to VGGNet configuration D (1) yields the following:

[Table: layer-by-layer memory for VGGNet configuration D with a 320×320×5 input]

Each image has 320 × 320 pixels, each holding a greyscale value, and there are 5 different sequences. Each axial slice therefore takes up 512KB; the first convolutional layers hold the most memory at 6.4MB each, and summing all layers uses about 30.5MB. Remember that you have to double the memory for the forward/backward pass through the network, giving roughly 61MB per slice. Finally, the images do not exist in a void but are part of about 15 axial slices of the head, giving a memory requirement of 916.5MB, or about a gigabyte.
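A rough back-of-the-envelope script reproduces numbers in the same ballpark (this is a sketch that assumes one byte per activation value, the standard VGG configuration-D layer stack, and ignores the network’s parameters entirely; the totals come out within a few percent of the figures above):

# Activation memory for VGG configuration D on a 320x320x5 input, assuming 1 byte per value
side        <- 320                      # spatial dimension of each axial slice
channels_in <- 5                        # five MRI sequences stacked as input channels
conv_blocks <- list(c(64, 64), c(128, 128), c(256, 256, 256),
                    c(512, 512, 512), c(512, 512, 512))   # feature-map depths per pooling block

mem <- side^2 * channels_in             # input volume: 320*320*5 = 512,000 values
for (block in conv_blocks) {
  for (depth in block) {
    mem <- mem + side^2 * depth         # each conv layer stores its full feature map
  }
  side <- side / 2                      # 2x2 max-pool halves the spatial dimensions
  mem  <- mem + side^2 * tail(block, 1) # pooled feature map
}
mem <- mem + 4096 + 4096 + 1000         # fully connected layers

per_slice_MB <- mem / 1e6               # ~31 MB of activations per slice
total_GB     <- per_slice_MB * 2 * 15 / 1000   # x2 forward/backward pass, x15 axial slices
c(per_slice_MB = per_slice_MB, total_GB = total_GB)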

Of course, that’s just for feeding an image through the algorithm.

This is simplistic because:

  1. VGG is not going to get you to nearly enough accuracy for diagnosis! (50% accurate, I’m guessing)
  2. The MRI data is only put into slices for people to interpret – the data itself exists in K-space. What that would do to machine learning interpretation is another discussion.
  3. We haven’t even discussed speed of training the network.
  4. This is for older MRI protocols.  Newer MRI’s have larger matrices (512×512) and thinner slices (3mm) available, which will increase the necessary memory to approximately 4GB.

Nevertheless, it is interesting to note that the amount of memory required to train a neural network on brain MRIs is within reach of a home neural-network enthusiast.

(1) Karen Simonyan & Andrew Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR 2015.

Catching up with the “What medicine can learn from Wall St. ” Series

The “What medicine can learn from Wall Street” series is getting a bit voluminous, so here’s a quick recap of where we are up to so far:

Part 1 – History of analytics – a broad overview which reviews the lagged growth of analytics driven by increasing computational power.

Part 2 – Evolution of data analysis – correlates specific computing developments with analytic methods and discusses pitfalls.

Part 3 – The dynamics of time – compares and contrasts the opposite roles and effects of time in medicine and trading.

Part 4 – Portfolio management and complex systems – lessons learned from complex systems management that apply to healthcare.

Part 5 – RCM, predictive analytics, and competing algorithms – develops the concept of competing algorithms.

Part 6 – Systems are algorithms – discusses ensembling in analytics and relates operations to software.


 

What are the main themes of the series?

1.  That healthcare lags behind Wall Street in computation, efficiency, and productivity, and that we can learn where healthcare is going by studying Wall Street.

2.  That increasing computational power allows for more accurate analytics, with a lag.  This shows up first in descriptive analytics, then allows for predictive analytics.

3.  That overfitting data and faulty analysis can be dangerous and lead to unwanted effects.

4.  That time is a friend in medicine, and an enemy on Wall Street.

5.  That complex systems behave complexly, and modifying a sub-process without considering its effect upon other processes may have “unintended consequences.”

6.  That we compete through systems and processes – and ignore that at our peril as the better algorithm wins.

7.  That systems are algorithms – whether soft or hard coded – and we can ensemble our algorithms to make them better.


 

Where are we going from here?

– A look at employment trends on Wall Street over the last 40 years and what it means for healthcare.

– More emphasis on the evolution from descriptive analytics to predictive analytics to prescriptive analytics.

– A discussion for management on how analytics and operations can interface with finance and care delivery to increase competitiveness of a hospital system.

– Finally, tying it all together and looking towards the future.

 

All the best to you and yours and great wishes for 2016!

 

 

Further Developing the Care Model – Part 3 – Data generation and code

Returning to the care model discussed in parts one and two, we can begin by defining our variables.

[Diagram: care model sub-process variables – n2value.com]

Each sub-process variable is named for its starting sub-process and ending sub-process.  We will define the mean time for the sub-processes in minutes and add a component of time variability.  You will note that the variability is skewed – some shorter times exist, but disproportionately longer times are possible.  This matches real life: in a well-run operation, mean times may be close to the lower limits – as these are physical (occurring in the real world) processes, there may simply be a physical constraint on how quickly you can do anything!  However, problems, complications, and miscommunications may extend that time well beyond what we would all like it to be – for those of us who have had real-world hospital experience, does this not sound familiar?

Because of this, we will choose a gamma distribution to model our processes:

                              \Gamma(a) = \int_{0}^{\infty} t^{a-1}e^{-t}\,dt

The gamma distribution is useful because it deals with continuous time data, and we can skew it through its shape parameters kappa (κ) and theta (θ).  We will use the R function rgamma(n, κ, θ) to generate our base distributions, and then use a multiplier (slope) and offset (Y-intercept) to adjust the distributions along the X-axis.  The gamma distribution respects an absolute lower time limit – I consider this a feature, not a flaw.
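For reference, the density that rgamma(n, κ, θ) samples from – R’s positional arguments are shape and rate, so κ is the shape and θ acts as a rate – is:

                              f(t) = \frac{\theta^{\kappa}}{\Gamma(\kappa)}\, t^{\kappa-1} e^{-\theta t}, \qquad t > 0

with mean κ/θ (between 0.5 and 2 for the parameter choices used below), before the slope and offset are applied.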

It is generally recognized that a probability density (or kernel) plot, as opposed to a histogram of the distributions, is more accurate and less prone to distortions related to the number of samples (N).  A plot of these distributions looks like this:

[Plot: probability densities of the nine sub-process times – n2value.com]

The R code to generate this distribution, graph, and our initial values dataframe is as follows:

seed <- 3559
set.seed(seed)                                  # reproducible draws
n <- 16384                                      # 2^14 samples; now initialize the parameters
k <- c(1.9,1.9,6,1.9,3.0,3.0,3.0,3.0,3.0)       # gamma shape (kappa) for each sub-process
theta <- c(3.8,3.8,3.0,3.8,3.0,5.0,5.0,5.0,5.0) # second rgamma argument (rate) for each sub-process
s <- c(10,10,5,10,10,5,5,5,5)                   # multiplier (slope), in minutes
o <- c(4.8,10,5,5.2,10,1.6,1.8,2,2.2)           # offset (shortest possible time), in minutes
prosess1 <- (rgamma(n,k[1],theta[1])*s[1])+o[1]
prosess2 <- (rgamma(n,k[2],theta[2])*s[2])+o[2]
prosess3 <- (rgamma(n,k[3],theta[3])*s[3])+o[3]
prosess4 <- (rgamma(n,k[4],theta[4])*s[4])+o[4]
prosess5 <- (rgamma(n,k[5],theta[5])*s[5])+o[5]
prosess6 <- (rgamma(n,k[6],theta[6])*s[6])+o[6]
prosess7 <- (rgamma(n,k[7],theta[7])*s[7])+o[7]
prosess8 <- (rgamma(n,k[8],theta[8])*s[8])+o[8]
prosess9 <- (rgamma(n,k[9],theta[9])*s[9])+o[9]
d1 <- density(prosess1, n=16384)                # kernel density estimates for plotting
d2 <- density(prosess2, n=16384)
d3 <- density(prosess3, n=16384)
d4 <- density(prosess4, n=16384)
d5 <- density(prosess5, n=16384)
d6 <- density(prosess6, n=16384)
d7 <- density(prosess7, n=16384)
d8 <- density(prosess8, n=16384)
d9 <- density(prosess9, n=16384)
plot.new()
plot(d9, col="brown", type="n", main="Probability Densities",
     xlab="Process Time in minutes", ylab="Probability", xlim=c(0,40), ylim=c(0,0.26))
legend("topright", c("process 1","process 2","process 3","process 4","process 5",
                     "process 6","process 7","process 8","process 9"),
       fill=c("brown","red","blue","green","orange","purple","chartreuse","darkgreen","pink"))
lines(d1, col="brown")
lines(d2, col="red")
lines(d3, col="blue")
lines(d4, col="green")
lines(d5, col="orange")
lines(d6, col="purple")
lines(d7, col="chartreuse")
lines(d8, col="darkgreen")
lines(d9, col="pink")
ptime <- c(d1[1],d2[1],d3[1],d4[1],d5[1],d6[1],d7[1],d8[1],d9[1])   # x values of each density
pdens <- c(d1[2],d2[2],d3[2],d4[2],d5[2],d6[2],d7[2],d8[2],d9[2])   # y values of each density
ptotal <- data.frame(prosess1,prosess2,prosess3,prosess4,prosess5,prosess6,prosess7,prosess8,prosess9)
names(ptime) <- c("ptime1","ptime2","ptime3","ptime4","ptime5","ptime6","ptime7","ptime8","ptime9")
names(pdens) <- c("pdens1","pdens2","pdens3","pdens4","pdens5","pdens6","pdens7","pdens8","pdens9")
names(ptotal) <- c("pgamma1","pgamma2","pgamma3","pgamma4","pgamma5","pgamma6","pgamma7","pgamma8","pgamma9")
pall <- data.frame(ptotal,ptime,pdens)          # simulated times plus the density x/y values

 

The relevant term here is rgamma(n, κ, θ).  We’ll use these distributions in our dataset.

One last concept needs to be discussed: the probability of each sub-process occurring.  Each sub-process has a percentage chance of happening – some a 100% certainty, others occurring in as few as 5% of cases.  This reflects real-world practice – once a test is ordered, the order enters the workflow with 100% certainty, but not 100% of patients will actually get the test.  Some cancel due to contraindications, others can’t tolerate it, others refuse, etc…  The percentages that are <100% reflect those probabilities and essentially act as a probabilistic on/off switch applied to the beginning of the term describing that sub-process.  We’re evolving first toward a simple generalized linear equation similar to that put forward in this post.  I think it’s going to look somewhat like this:

[Image: proposed model equation – N2Value.com]

But we’ll see how well this model fares as we develop it and compare it to some others.  The x terms will likely represent the probabilities, between 0 and 1.0 (100%).

For an EMR-based approach, we would assign a UID (medical record # plus 5-6 extra digits, helpful for encounter #s). We would ‘disguise’ the UID by adding or subtracting a constant known only to us and then performing a mathematical operation on it. However, for our purposes here, we do not need to do that.
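A minimal sketch of both ideas (the 65% occurrence probability, the gamma parameters, and the disguise constants below are made up purely for illustration):

# Gate a simulated sub-process by its probability of occurring (all values illustrative)
set.seed(3559)
n <- 16384
prosess5     <- (rgamma(n, 3.0, 3.0) * 10) + 10      # same parameters as prosess5 above
p_occur      <- 0.65                                 # hypothetical chance this sub-process happens at all
occurs       <- rbinom(n, size = 1, prob = p_occur)  # 1 = sub-process happened, 0 = skipped
prosess5_eff <- occurs * prosess5                    # contributes time only when it occurs
mean(prosess5_eff)                                   # expected contribution to total episode time

# Disguise an MRN into a UID (offset and transform invented for illustration)
mrn <- 123456789
uid <- (mrn + 424242) * 7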

We’ll  head on to our analysis in part 4.

 

Programming notes in R:

  1.  I experimented with for loops and different configurations of apply, and after a few weeks decided I really can’t improve upon the repetitive but simple code above.  The issue is that the density function returns a list of 7 components, so it is not as easy as defining a matrix, since the lengths involved in the data frame change.  I’m sure there is a way around this, but for the purposes of this illustration it is beyond our needs.  Email me at contact@n2value.com if you have working code that does it better!

  2.  For the density function, the number of estimation points should be a power of 2, so choosing 16384 (2^14) meets that goal.  Using the same n for the number of simulated samples keeps every column the same length, which makes assembling the data frame straightforward.

3.  In variable names above, prosess is an intentional misspelling.

 

Black Swans, Antifragility, Six Sigma and Healthcare Operations – What medicine can learn from Wall St Part 7


I am an admirer of Nassim Nicholas Taleb – a mercurial options trader who has evolved into a philosopher-mathematician.  The focus of his work is on the effects of randomness, how we sometimes mistake randomness for predictable change, and how we fail to prepare for randomness by excluding outliers in statistics and decision making.  These “black swans” arise unpredictably and cause great harm, amplified by systems we have put into place that are ‘fragile’.

Perhaps the best example of a black swan event is the period of financial uncertainty we have lived through during the last decade.  A quick recap: the 2008 global financial crisis was triggered by a bubble in US real estate assets.  This in turn stemmed from legislation mandating lower lending standards and facilitating securitization of these loans, combined with the loose underwriting (subprime, Alt-A) allowed by the proverbial passing of the ‘hot potato’.  These mortgages were packaged into derivatives named collateralized debt obligations (CDOs), using statistical models to gauge default risks in these loans.  Loans more likely to default were blended with loans less likely to default, yielding an overall package that was statistically unlikely to default.  However, as owners of these securities found out, the statistical models that made them unlikely to default were based on a small sample period in which there were few defaults.  The models indicated that the financial crisis was a 25-sigma (standard deviation) event that should only happen once in:

a number of years more than 100 digits long (c.f. Wolfram Alpha).

Of course, the default events happened in the first five years of their existence, proving that calculation woefully inadequate.

The problem with major black swans is that they are sufficiently rare and impactful that it is difficult to plan for them – global pandemics, the Fukushima reactor accident, and the like.  By designing robust systems that expect perturbations, you can mitigate their effects when they occur and shake off the more frequent minor black (grey) swans – system perturbations that occur occasionally (but more often than you expect); 5-10 sigma events that are not devastating but disruptive (like local disease outbreaks or power outages).

Taleb classifies how things react to randomness into three categories: Fragile, Robust, and Anti-Fragile.  While the interested would benefit from reading the original work, here is a brief summary:

1.     The Fragile consists of things that hate, or break, from randomness.  Think about tightly controlled processes, just-in-time delivery, tightly scheduled areas like the OR when cases are delayed or extended, etc…
2.     The Robust consists of things that resist randomness and try not to change.  Think about warehousing inventories, overstaffing to mitigate surges in demand, checklists and standard order sets, etc…
3.     The Anti-Fragile consists of things that love randomness and improve with serendipity.  Think about cross-trained floater employees, serendipitous CEO-employee hallway meetings, lunchroom physician-physician interactions where the patient benefits.

In thinking about Fragile/Robust/Anti-Fragile, be cautious about injecting bias into the meaning.  After all, we tend to avoid breakable objects, preferring things that are hardy or robust.  So there is a natural tendency to consider fragility ‘bad’, robustness ‘good’, and anti-fragility, therefore, must be ‘great!’  Not true – when we approach these categories from an operational or administrative viewpoint.

Fragile processes and systems are those prone to breaking.  They hate variation and randomness and respond well to Six Sigma analyses and productivity/quality improvement.  I believe that fragile systems and processes are those that will benefit the most from automation & technology.  Removing human input & interference decreases cycle time and defects.  While the fragile may be prone to breaking, that is not necessarily bad.  Think of the new entrepreneur’s mantra – ‘fail fast’.  Agile/SCRUM development, most common in software (but perhaps useful in healthcare?), relies on rapid iteration to adapt to a moving target.  Fragile systems and processes cannot be avoided – instead they should be highly optimized with the least human involvement.  They need careful monitoring (daily? hourly?) to detect failure, at which point a ready team can swoop in, fix whatever has caused the breakage, re-optimize if necessary, and restore the system to functionality.  If a fragile process breaks too frequently and causes significant disruption, it probably should be made into a robust one.

Robust systems and processes are those that resist failure due to redundancy and relative waste.  These probably are your ‘mission critical’ ones where some variation in the input is expected, but there is a need to produce a standardized output.  From time to time your ER is overcome by more patients than available beds, so you create a second holding area for less-acute cases or patients who are waiting transfers/tests.  This keeps your ER from shutting down.  While it can be wasteful to run this area when the ER is at half-capacity, the waste is tolerable vs. the lost revenue and reputation of patients leaving your ER for your competitor’s ER or the litigation cost of a patient expiring in the ER after waiting 8 hours.    The redundant patient histories of physicians, nurses & medical students serve a similar purpose – increasing diagnostic accuracy.  Only when additional critical information is volunteered to one but not the other is it a useful practice.  Attempting to tightly manage robust processes may either be a waste of time, or turn a robust process into a fragile one by depriving it of sufficient resilience – essentially creating a bottleneck.  I suspect that robust processes can be optimized to the first or second sigma – but no more.

Anti-fragile processes and systems benefit from randomness, serendipity, and variability.  I believe that many of these are human-centric.  The automated process that breaks is fragile, but the team that swoops in to repair it – they’re anti-fragile.  The CEO wandering the halls to speak to his or her front-line employees four or five levels down the organizational tree for information – anti-fragile.  Clinicians that practice ‘high-touch’ medicine result in good feelings towards the hospital and the unexpected high-upside multi-million dollar bequest of a grateful donor 20 years later – that’s very anti-fragile.  It is important to consider that while anti-fragile elements can exist at any level, I suspect that more of them are present at higher-level executive and professional roles in the healthcare delivery environment.  It should be considered that automating or tightly managing anti-fragile systems and processes will likely make them LESS productive and efficient.  Would the bequest have happened if that physician was tasked and bonused to spend only 5.5 minutes per patient encounter?  Six sigma management here will cause the opposite of the desired results.

I think a lot more can be written on this subject, particularly from an operational standpoint.   Systems and processes in healthcare can be labeled fragile, robust, or anti-fragile as defined above.  Fragile components should have human input reduced to the bare minimum possible, then optimize the heck out of these systems.  Expect them to break – but that’s OK – have a plan & team ready for dealing with it, fix it fast, and re-optimize until the next failure.  Robust systems should undergo some optimization, and have some resilience or redundancy also built in – and then left the heck alone!  Anti-fragile systems should focus on people and great caution should be used in not only optimization, but the metrics used to manage these systems – lest you take an anti-fragile process, force it into a fragile paradigm, and cause failure of that system and process.  It is the medical equivalent of forcing a square peg into a round hole.  I suspect that when an anti-fragile process fails, this is why.