CheXNet – a brief evaluation

Chest Radiograph from ChestX-ray14 dataset processed with the deep dream algorithm trained on ImageNet

Andrew Ng released CheXNet yesterday on arXiv (citation) and promoted it with a tweet, which caused a bit of a stir on the internet and on radiology social media sites like Aunt Minnie.  Before radiologists throw away their board certifications and look for jobs as Uber drivers, a few comments on what this paper does and does not do.

First off, from the machine learning perspective, the methodology checks out.  It uses a 121-layer DenseNet, which is a powerful convolutional neural network.  While code has not yet been provided, the DenseNet appears similar to implementations in online code repositories, where the 121-layer configuration is a pre-made format.  An 80/20 training/validation split seems reasonable (per my friend Kirk Borne); random initialization, minibatches of 16 with oversampling of the positive class, and a progressively decaying validation loss are utilized.  Class activation mappings are used to visualize the areas in the image most indicative of the activated class (in this case, pneumonia).  This is an interesting technique that provides some human-interpretable insight into the otherwise potentially opaque DenseNet.

The last Fully Connected (FC) layer is replaced by a single output (only one class is being tested for – pneumonia) coupled to a sigmoid function (an activation function – see here) to give a probability between 0 and 1.
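Since the paper's code has not been released, here is a minimal sketch of what that head swap looks like, based on the publicly available torchvision DenseNet-121 rather than the authors' actual code (so treat the details as assumptions):

```python
import torch.nn as nn
from torchvision import models

# Load a DenseNet-121 backbone; torchvision's 121-layer variant is a "pre-made format."
model = models.densenet121(pretrained=True)

# DenseNet-121's final fully connected layer ("classifier") maps 1024 features
# to 1000 ImageNet classes. Replace it with a single output unit plus a sigmoid
# so the network emits a probability of pneumonia between 0 and 1.
num_features = model.classifier.in_features  # 1024 for DenseNet-121
model.classifier = nn.Sequential(
    nn.Linear(num_features, 1),
    nn.Sigmoid(),
)

# Training would then minimize binary cross-entropy against the pneumonia label.
criterion = nn.BCELoss()
```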

The test set consisted of 420 chest X-rays read by four radiologists, one of whom was a thoracic specialist.  They could label any of the 14 pathologies in the ChestX-ray14 dataset, reading blind without any clinical data.

So, an ROC curve was created, showing three radiologists performing similarly to each other and one outlier.  The radiologists lie slightly under the ROC curve of the CheXNet classifier.  But a miss is as good as a mile, so the claim of at-or-above-radiologist performance is accurate, because math.  Addendum – even though this claim would likely not meet statistical significance.  Thanks, Luke.
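For anyone who wants to reproduce that comparison, the mechanics are roughly as sketched below (the arrays are hypothetical stand-ins, since the study's data are not public): the classifier gets a full ROC curve from its output probabilities, while each radiologist, giving binary reads, contributes a single sensitivity/specificity operating point plotted against that curve.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, confusion_matrix

# Hypothetical data: ground-truth labels, model probabilities, one radiologist's binary reads.
y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])
y_prob = np.array([0.1, 0.8, 0.6, 0.3, 0.9, 0.2, 0.4, 0.7])   # CheXNet-style sigmoid outputs
rad_read = np.array([0, 1, 0, 0, 1, 0, 1, 1])                 # one radiologist's calls

# The classifier's ROC curve and AUC.
fpr, tpr, _ = roc_curve(y_true, y_prob)
auc = roc_auc_score(y_true, y_prob)

# The radiologist is a single operating point: (1 - specificity, sensitivity).
tn, fp, fn, tp = confusion_matrix(y_true, rad_read).ravel()
rad_point = (fp / (fp + tn), tp / (tp + fn))

print(f"model AUC = {auc:.2f}, radiologist operating point = {rad_point}")
```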

So that’s the study.  Now, I will pick some bones with the study.

First, including only one thoracic radiologist matters if you are going to define ground truth as agreement among three of the four radiologists.  General radiologists will be less specific than specialist radiologists, which is one of the reasons we have moved to specialty-specific reads over the last 20 years.  If the three general radiologists disagreed with the thoracic radiologist, the thoracic radiologist's read would be discarded as ground truth.  Think about this – you would take the word of the generalists over the specialist, despite the specialist's greater training.  Google didn't do this in their retinal machine learning paper.  Instead, Google used their three retinal specialists as ground truth and then looked at how the non-specialist ophthalmologists evaluated that data and what that meant for the training dataset.  (Thanks, Melody!)

Second, the Wang ChestX-ray14 dataset is, I believe, a dataset data-mined from NIH radiology reports.  This means that the dataset's ground truth was whatever the radiologists said it was.  I'm not casting aspersions on the NIH radiologists; I'm sure they are quite good.  I'm simply saying that the dataset's ground truth is what the report says it is, not necessarily what the patient's clinical condition was.  As evidence, here are a few cells from the findings field of this dataset.  I'm not sure whether the multiple classes strengthen or weaken Mr. Ng's argument.

Findings field from the ChestX-ray14 dataset (representative)

In any case, more than a few times the NIH radiologists apparently couldn't tell either, or identified one finding as the cause of another (infiltrate and pneumonia mentioned side by side), and at the top you have an "atelectasis vs. consolidation vs. pneumonia" (as radiologists, we say these things).  Perhaps I am missing something here and the classifier is making a stronger distinction between the pathologies.  But without the sigmoid activation probabilities for each of the 14 classes, I can't tell.  Andrew, if you read this, I have the utmost respect for you and your team, and I have learned from you.  But I would love to hear your rebuttal, and I would urge you to publish those results.  Or perhaps someone should do it for reproducibility's sake.

Addendum:  Other radiologists with machine learning chops whom I respect are also concerned about how the ground truth was decided in the ChestX-ray14 dataset.  This needs further investigation.

Finally, I'm bringing up these points not to be a killjoy, but to be balanced.  I think it is important to see this clearly and to prevent an administrator from making the boneheaded decision of firing their radiologists to install a computer diagnostic system (not in the US, but elsewhere), only to realize it doesn't work after spending a vast sum of money on it.  Startups competing in this field without healthcare experience need to be aware of these pitfalls in their products.  I'm saying this because real people could be hurt if we don't manage this transition to AI well.

Thanks for reading, and feel free to comment here or reach me on Twitter or LinkedIn: @drsxr

Defining value in healthcare through risk

Risk/return matrix: high, low, and no risk

For a new definition of value, it's helpful to go back to the conceptual basis of payment for medical professional services under the Resource-Based Relative Value Scale (RBRVS). Payment for physician services is divided into three components: physician work, practice expense, and a risk (malpractice) component.
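For reference, the Medicare fee schedule built on the RBRVS operationalizes those three components roughly as follows (geographic indices included for completeness, even though they aren't discussed above):

```latex
\text{Payment} = \left( \text{RVU}_{\text{work}}\,\text{GPCI}_{\text{work}}
  + \text{RVU}_{\text{PE}}\,\text{GPCI}_{\text{PE}}
  + \text{RVU}_{\text{MP}}\,\text{GPCI}_{\text{MP}} \right) \times \text{CF}
```

Here the malpractice (MP) relative value units are the risk component discussed below, the GPCIs are geographic practice cost adjustments, and CF is the annually set conversion factor.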

Replace physician with provider, and then extrapolate to larger entities.

Currently, payer guidelines (insurer, CMS, etc.) and best-practice guidelines (specialty societies, associations like HFMA, ancillary staff associations) exist. This has reduced some variation among providers, and there is an active interest in continuing in this direction. For example, a level 1 E&M clearly differs from a level 5 E&M – one might disagree whether a visit is a level 3 or 4, but you shouldn't see a level 1 upcoded to a 5. Physician work is generally quantifiable in either patients seen or procedures done, and in any corporate/employed practice, most physicians will be working towards the level of productivity they have contractually agreed to, or they will be let go or have their contracts renegotiated. Let's hope they are fairly compensated for their efforts and not subjected solely to RVU production targets, which are falling out of favor vs. more sophisticated models (cf. Craig Pedersen, Insight Health Partners).

Unless there is mismanagement in this category, provider work is usually controllable, measurable, and (with some variation due to provider skill, age, and practice goals) consistent. For physicians who have been vertically integrated, current EHR burdens and compliance directives may place a cap on productivity.

Practice expenses represent the fixed and variable expenses in healthcare – rent, taxes, facility maintenance, and consumables (medical supplies, pharmaceuticals, and medical devices). Most are fairly straightforward from an accounting standpoint. Medical supplies, pharmaceuticals, and devices are expenses that need management, with room for opportunity. ACO and super-ACO/CIO organizations and purchasing consortiums such as Novation, Amerinet, and Premier have been formed to help manage these costs.

Practice expense costs are identifiable and, once identified, controllable. Six Sigma management tools work well here, at least initially. For all but the most peripheral expenses, this has already happened or is happening, and there are no magic bullets beyond continued monitoring of systems and processes as they evolve over time, since drift and ripple effects may impact previously optimized areas.

This leaves the last variable – risk. Risk was thought of as a proxy for malpractice/legal costs. However, in the new world of variable payments, there is not only downside risk in this category, but the pleasant possibility of upside risk.

It stands to reason that if your provider costs are reasonably fixed, and your practice expenses are as controlled as you can get them at the moment, you should look to the risk category as an opportunity for profit.

When I was a Wall St. options trader, the only variable that really mattered for the price of the derivative product was the volatility of the option – the measure of its inherent risk. We profited by selling options (effectively, insurance) when that implied volatility was higher than the actual market volatility, or buying them when it was too low. Why can't we do the same in healthcare?
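To make that concrete, here is a rough sketch of the comparison that drove those trades (the prices, the implied-volatility figure, and the 252-day annualization are illustrative assumptions, not trading advice):

```python
import numpy as np

# Hypothetical daily closing prices for the underlying.
prices = np.array([100.0, 101.2, 100.5, 102.0, 101.1, 103.3, 102.8, 104.0])

# Realized (historical) volatility: stdev of daily log returns, annualized (~252 trading days).
log_returns = np.diff(np.log(prices))
realized_vol = log_returns.std(ddof=1) * np.sqrt(252)

implied_vol = 0.25  # hypothetical: the volatility implied by current option prices

# The old trade: sell premium when the market's implied volatility is rich
# relative to what the underlying actually does, buy it when it is cheap.
if implied_vol > realized_vol:
    print(f"implied {implied_vol:.0%} > realized {realized_vol:.0%}: sell premium")
else:
    print(f"implied {implied_vol:.0%} <= realized {realized_vol:.0%}: buy premium")
```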

What is value in this context? The profit or loss arising from the assumption and management of risk. Therefore, the management of risk in a value-based care setting allows for the possibility of a disproportionate financial return.


The sweet spot is Low Risk/High Return. This is where discovering a fundamental mispricing can deliver a return disproportionate to the risk assumed.

Apply this risk matrix to:

  • 1 – A medium-sized insurer struggling with hospital mergers and with former large employer clients bypassing the insurer to contract directly with hospitals.
  • 2 – A larger integrated hospital system with at-risk payments/an ACO model, employed physicians, and local competitors, which is struggling to provide good care in a low-margin environment.
  • 3 – A group radiology practice that contracts with a hospital system and a few outpatient providers.

And things get interesting. On to the next post!

Catching up with the “What medicine can learn from Wall St.” series

The “What medicine can learn from Wall Street” series is getting a bit voluminous, so here’s a quick recap of where we are up to so far:

Part 1 – History of analytics – a broad overview which reviews the lagged growth of analytics driven by increasing computational power.

Part 2 – Evolution of data analysis – correlates specific computing developments with analytic methods and discusses pitfalls.

Part 3 – The dynamics of time – compares and contrasts the opposite roles and effects of time in medicine and trading.

Part 4 – Portfolio management and complex systems – lessons learned from complex systems management that apply to healthcare.

Part 5 – RCM, predictive analytics, and competing algorithms – develops the concept of competing algorithms.

Part 6 – Systems are algorithms – discusses ensembling in analytics and relates operations to software.


 

What are the main themes of the series?

1.  That healthcare lags behind Wall Street in computation, efficiency, and productivity; and that we can learn where healthcare is going by studying Wall Street.

2.  That increasing computational power allows for more accurate analytics, with a lag.  This shows up first in descriptive analytics, then allows for predictive analytics.

3.  That overfitting data and faulty analysis can be dangerous and lead to unwanted effects.

4.  That time is a friend in medicine, and an enemy on Wall Street.

5.  That complex systems behave complexly, and modifying a sub-process without considering its effect upon other processes may have “unintended consequences.”

6.  That we compete through systems and processes – and ignore that at our peril as the better algorithm wins.

7.  That systems are algorithms – whether soft or hard coded – and we can ensemble our algorithms to make them better.


 

Where are we going from here?

– A look at employment trends on Wall Street over the last 40 years and what it means for healthcare.

– More emphasis on the evolution from descriptive analytics to predictive analytics to prescriptive analytics.

– A discussion for management on how analytics and operations can interface with finance and care delivery to increase competitiveness of a hospital system.

– Finally, tying it all together and looking towards the future.

 

All the best to you and yours and great wishes for 2016!

 

 

Skeptical about competing algorithms?

Someone commented to me that the concept of competing algorithms was very science-fictiony and hard to take at face value outside of the specific application of high frequency trading on Wall Street.  I can understand how that could be argued, at first glance.

However, consider that systems are algorithms (you may want to re-read Part 6 of the What Medicine can learn from Wall Street series).  We have entire systems (in some cases, departments) set up in medicine to handle the process of insurance billing and accounts receivable.  Just when our billing departments seem to get very good at running claims, the insurers implement a new system or rule set which increases our denials.  Our billers then adapt to that change to return to their earlier baseline of low denials.

Are you still sure that there are no competing algorithms in healthcare?  They are hard-coded in people and processes, not soft-coded in algorithms and software.

If you are still not sure, consider legacy retailers who are selling commodity goods.  If everyone is selling the same item at the same price, you can only beat your competition by successful internal processes that give you increased profitability over your competitors, allowing you to out-compete them.  You win because you have better algorithms.

Systems are algorithms.  And algorithms compete.

What medicine can learn from Wall Street part 6 – Systems are algorithms

Systems trading on Wall Street in the early days (pre-1980s) was done by hand or by laborious computation.  Systems traded off indicators – hundreds of indicators exist, but most are either trend or anti-trend.  Trending indicators range from the ubiquitous and time-honored moving average to the MACD, etc.  Anti-trend indicators tend to be based on oscillators, such as the relative strength index (RSI).  In a trending market, the moving average will do well, but it will get chopped around in a non-trending market with frequent wrong trades.  The oscillator solves some of this problem, but in a strongly trending market it tends to underperform and miss the trend.  Many combinations of trend and anti-trend systems were tried, with little success, in the search for a consistent model that could handle changing market conditions from trend to anti-trend (consolidation) and back.
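For readers outside trading, the sketch below shows what those two families of signals boil down to in pandas; the lookback parameters are conventional defaults I've assumed, and this simplified RSI uses plain moving averages rather than Wilder's smoothing.

```python
import pandas as pd

def trend_signal(close: pd.Series, fast: int = 20, slow: int = 50) -> pd.Series:
    """Trend-following: +1 (long) while the fast moving average is above the slow one, else -1."""
    fast_ma = close.rolling(fast).mean()
    slow_ma = close.rolling(slow).mean()
    signal = pd.Series(-1, index=close.index)
    signal[fast_ma > slow_ma] = 1
    return signal

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Simplified relative strength index (0-100), using simple moving averages of gains/losses."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + gain / loss)

def antitrend_signal(close: pd.Series) -> pd.Series:
    """Anti-trend: fade extremes by buying (+1) when oversold (<30), selling (-1) when overbought (>70)."""
    r = rsi(close)
    signal = pd.Series(0, index=close.index)
    signal[r < 30] = 1
    signal[r > 70] = -1
    return signal
```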

The shift towards statistical models in the 2000s (see Evidence-Based Technical Analysis by Aronson) provided a different way to analyze the markets, with some elements of both systems.  While I would argue that mean reversion has components of an anti-trend system, I'm sure I could find someone to disagree with me.  The salient point is that it is a third method of evaluation which is neither purely trend nor anti-trend.

Finally, the machine learning algorithms that have recently become popular give a fourth method of evaluating the markets. This method is neither trend, anti-trend, nor purely statistical (in the traditional sense), so it provides additional information and diversification.

Combining these models through ensembling might have some very interesting results.  (It also might create a severely overfitted model if not done right).

Sidebar:  I believe that the market trades in different ways at different times.  It changes from a technical market, where predictive price indicators are accurate, to a fundamental market, driven by economic data and conditions, to a psychologic market, where ‘random’ current events and investor sentiment are the most important aspects.  Trending systems tend to work well in fundamental markets, anti-trend systems work well in technical or psychologic markets, statistical (mean reversion) systems tend to work well in technical or fundamental markets, and I suspect machine learning might be the key to cracking the psychologic market.  What is an example of a psychologic market?  This – the S&P 500 in the fall of 2008 when the financial crisis hit its peak and we were all wondering if capitalism would survive.

40% drop in the S&P 500 from August to November during the 2008 financial crisis.

By the way, this is why you pay a human to manage your money, instead of just turning it over to a computer.  At least for now.

So why am I bringing this up?  I'm delving more deeply into queueing and operations theory these days, wondering if it would be helpful in developing an ensemble model – part supervised learning (statistics), part unsupervised (machine) learning, part queueing-theory algorithms.  Because of this, I'm putting this project on hold.  But it did make me think about the algorithms involved, and I had an aha! moment that is probably nothing new to industrial engineering types or operations folks who are also coders.

Algorithms, such as an ensemble model composed of three separate models (a linear model from supervised learning, a machine learning model from unsupervised learning, and a rule-based model from queueing theory), are software-coded rule sets.  However, the systems we put in place in physical space are really just the same thing.  The policies, procedures, and operational rule sets that exist in our workplace (e.g., the hospital) are hard-coded algorithms made up of flesh and blood, equipment, and architecture, operating in an analogue of computer memory – the wards and departments of the hospital.
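As a toy illustration of the kind of ensemble I have in mind (the three component models here are placeholders I made up, not anything built or validated), each component scores the same case and the ensemble blends the scores:

```python
from typing import Callable, Sequence

# Placeholder component models: each maps a case (a dict of features) to a score in [0, 1].
def linear_model(case: dict) -> float:
    return 0.2   # stand-in for a fitted supervised (statistical) model

def ml_model(case: dict) -> float:
    return 0.5   # stand-in for an unsupervised / machine-learned model

def queueing_rules(case: dict) -> float:
    return 0.8   # stand-in for a hard-coded queueing-theory rule set

def ensemble(case: dict,
             models: Sequence[Callable[[dict], float]],
             weights: Sequence[float]) -> float:
    """Weighted average of the component scores; weights are assumed to sum to 1."""
    return sum(w * m(case) for m, w in zip(models, weights))

case = {"census": 28, "acuity": 3}  # hypothetical features
score = ensemble(case, [linear_model, ml_model, queueing_rules], [0.4, 0.4, 0.2])
print(f"ensembled score: {score:.2f}")
```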

If we only optimize for one value (profit, throughput, quality of care, whatever), we may miss the opportunity to create a more robust and stable model.  What if we ensembled our workspaces to optimize for all parameters?

The physical systems we have in place, which stem from policies, procedures, management decisions, and workspace & workflow design, are a real-life representation of a complex algorithm we have created (or, more accurately, one that has grown largely organically) to serve the function of delivering care in the hospital setting.

What if we looked at this system as such and then created an ensemble model to fulfill the triple (quad) aim?

How powerful that would be.

Systems are algorithms.  

Further developing the care model – theoretical to applied – part 1

Consider an adult patient who has presented to the ER with abdominal pain.  The ER doctor suspects appendicitis, so the next step is a CT scan to “r/o appendicitis.”  We assume the patient has already had labs drawn and resulted upon presentation to the ER (probably a rapid test).

ER CT process (flowchart)

First, the ER doctor has to decide to order the CT study, and write the order.  We’ll assume a modern CPOE system to take out the intervening steps of having the nurse pick up the order, sign off, and then give it to the HUC to call the order to the CT technologist.  We’ll also assume that the CPOE system automatically contacts patient transport and lets them know that there is a patient ready for transport.  Depending on your institution’s HIMSS level, these may be a lot of assumptions!

Second, patient transport needs to pick up the patient and bring them to the CT holding area (from the hallway to a dedicated room).

Third, the nurse (or a second technologist / tech assistant) will assess this patient and make sure that they are a proper candidate for the procedure.  This involves taking a focused history, making sure there is no renal compromise that would be made worse by the low osmolar contrast (LOCA) used in a CT scan, ensuring that IV access is satisfactory for the LOCA injection (or establishing it if it is not), and ensuring that the patient does not have a contrast allergy that would be a contraindication to the study.

Fourth, the CT technologist gets the patient from holding, places them on the CT gantry, hooks up the contrast, protocols the patient, and then scans.  Once the scan finishes, the patient returns to holding, and the study posts to the PACS system for interpretation by the M.D. radiologist.

Fifth, the radiologist physician sees the study pop up on their PACS (picture archiving & communication system), interprets the study, generates a report (usually by dictating into voice recognition software these days), proofreads it, and then approves the report.  If there is an urgent communication issue, the radiologist will personally telephone the ER physician; if not, ancillary staff on both sides usually notice the report is completed and alert the ER physician to review it when he or she has time.

Sixth, the ER physician sees the radiologist’s report.  She or he then takes all the information on the patient, including that report, laboratory values, physical examination, patient history, and outside medical records and synthesizes that information to make a most likely diagnosis and exclude other diagnoses.  It is entirely possible that the patient may go on to additional imaging, and the process can repeat.
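Laid out as data, the six steps above look something like the sketch below; the durations are placeholders I've assumed for illustration, not measured values, but writing the pathway down this way makes each step a term you can later attach real timestamps to.

```python
from collections import OrderedDict

# The six-step ER-to-CT-to-decision pathway described above.
# Durations are illustrative placeholders (minutes), not measurements.
ct_pathway = OrderedDict([
    ("ER physician decides and orders CT via CPOE",          5),
    ("Transport picks up patient, brings to CT holding",    15),
    ("Nurse screening: renal function, IV access, allergy",  10),
    ("CT acquisition and return to holding",                 20),
    ("Radiologist interprets and finalizes report",          20),
    ("ER physician reviews report and synthesizes",          10),
])

turnaround = sum(ct_pathway.values())
print(f"Illustrative order-to-decision turnaround: {turnaround} minutes")
```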

In comparison to the prior model where all interactions were considered, we can use a bit of common sense to get the number of interacting terms down.  The main rate limiting step is the ordering ER physician – the process initiates with that physician’s decision to get CT imaging.  It is possible for that person to exceed capacity.  Also, there are unexpected events which may require immediate discussion and interaction between members of the team – ER physician to either radiology physician, radiology nurse, or radiology technologist.  Note that the radiology physician and the radiology nurse can both interact with the ER physician both before (step 1) and after (step 6) the study, because of the nature of patient care.

An astute observer may note that there is no transport component for the patient back to the ER from radiology holding.  This is because the patient has already been assessed by the ER physician, and further testing, disposition, etc. are pending the information generated by the CT scan.  While the patient certainly needs care, where that care is given during the assessment process (for a stable patient) is not critical.  It could be that the patient goes from CT holding to dialysis, or to another testing area, etc.  Usually the next ordered test, consult, or disposition hinges on the CT results and will be entered via CPOE, where the patient and ER physician need not be in the same physical space to execute.

From practical experience, ER physician – CT technologist interactions are most common and usually one-sided (“please take this patient first,” “I want the study done this way,” etc.).  ER physician – nurse interactions are uncommon and usually unidirectional (nurse to physician: “this patient is in renal failure, we can’t use LOCA,” etc.).  ER physician – radiology physician interactions are even less common but bidirectional (“This patient is confounding – how can we figure this out?” vs. “Your patient has a ruptured aortic aneurysm and will die immediately without surgical intervention!”).

Next post we will modify our generalized linear model and begin assembling a dataset to test our assumptions.

Some thoughts on Revenue Cycle Management, predictive analytics, and competing algorithms

After some reflection, this is clearly Part 5 of the “What medicine can learn from Wall Street” series.

It occurred to me while thinking about the staid subject of revenue cycle management (RCM) that this is a likely hotspot for analytics.   First, there is data – tons of data.  Second, it is relevant – folks tend to care about payment.

RCM is the method by which healthcare providers get paid, beginning with patient contact, continuing through evaluation and treatment, and ending with the submission of charges for which we are ultimately paid via contractual obligation.  Modern RCM goes beyond billing to include marketing, pre-authorization, completeness of the medical record to decrease denials, and ‘working’ the claims until payment is made.

Providers get paid by making claims.  Insurers ‘keep the providers honest’ by denying claims that are not properly 1) pre-authorized, 2) documented, or 3) medically indicated (etc.).  There is a tug of war between the two entities, which usually results in a relationship that ranges somewhere between grudging wariness and outright war (with contracts terminated and legal challenges fired off).  The providers profit by extracting the maximum payment they are contractually allowed; the insurer profits by denying payment so that it can earn investment returns on its pool of reserves.  Typically, the larger the reserve pool, the larger the profit.
Insurers silently fume at ‘creative coding,’ where a change in coding causes a procedure/illness that had previously been paid at a lower level to now be paid at a much higher level.  Providers seethe at ‘capricious’ denials, which require staff work to provide whatever documentation is requested (perhaps relevant, perhaps not), and at ‘gotcha’ downcoding due to a single missing piece of information.  In any case, there is plenty of work for the billing and IT folks on either side.

Computerized revenue cycle management seems like a solution until you realize that the business model of either entity has not changed, and now the same techniques on either side can be automated.  Unfortunately, if the other guy does it, you probably need to too – here’s why.

We could get into this scenario:  a payor (insurer), when evaluating claims, decides that there is more spend ($) on a particular ICD-9 diagnosis (or ICD-10, if you prefer) than expected and targets those claims for denial.  A provider would submit claims for this group, be denied on many of them, re-submit, be denied again, and then either start ‘working’ the claims to gain value from them or, if they had a sloppy, lazy, or limited billing department, simply let them go (with resultant loss of the claims).  That would be a 3-12 month process.   However, a provider using descriptive analytics (see part 1) on, say, a weekly or daily basis would be able to see something was wrong more quickly – probably within three months – and gear up for quicker recovery.  A determined (and aggressive) payor could shift their denial strategy to a different ICD-9, and something similar would occur.  After a few cycles of this, if the provider was really astute, they might data-mine the denials to identify which codes were being denied and set up a predictive algorithm comparing new denials to their old book of business.  This would identify statistical anomalies in new claims and could alert the provider to the algorithm the payor was using to target claims for denial.  By anticipating these denials, and either re-coding them or providing superior documentation to force the payor to pay (negating the beneficial effects of the payor’s claim-denial algo), claims are paid in a timely and expected manner.  I haven’t checked out some of the larger vendors’ RCM offerings, but I suspect that this is not far in the offing.
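A minimal sketch of that detection step, under assumptions of my own (the column names, normal-approximation z-score, and threshold are all illustrative): compare each code's current denial rate against its historical baseline and flag the outliers.

```python
import pandas as pd

def flag_denial_anomalies(history: pd.DataFrame,
                          current: pd.DataFrame,
                          z_threshold: float = 3.0) -> pd.DataFrame:
    """
    history: one row per claim, columns ['icd_code', 'denied'] (0/1), covering the old book of business.
    current: same columns for the most recent period.
    Returns ICD codes whose current denial rate sits far above the historical baseline.
    """
    base = history.groupby("icd_code")["denied"].agg(["mean", "count"])
    now = current.groupby("icd_code")["denied"].agg(["mean", "count"])

    joined = base.join(now, lsuffix="_hist", rsuffix="_now", how="inner")

    # Normal-approximation z-score of the current denial rate against the historical rate.
    p, n = joined["mean_hist"], joined["count_now"]
    se = (p * (1 - p) / n) ** 0.5
    joined["z"] = (joined["mean_now"] - p) / se

    return joined[joined["z"] > z_threshold].sort_values("z", ascending=False)
```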

I could see a time where a very aggressive payor (perhaps under financial strain) strikes back with an algorithm designed to deny some, but not all, claims on a semi-random basis to ‘fly under the radar’ and escape the provider’s simpler detection algorithms.  A more sophisticated algorithm based upon anomaly detection techniques could then be used to identify these denials…  This seems like a nightmare to me.  Once things get to this point, it’s probably only a matter of time until these games are addressed by the legislature.

Welcome to the battles of the competing algorithms.  This is what happens in high-frequency trading.  Best algorithm wins, loser gets poorer.

One thing is sure: in negotiations, the party that holds and evaluates the data holds the advantage.   The other party will forever be negotiating from behind.

P.S.  As an aside, with the ultra-low short-term interest rates after the 2008 financial crisis, the time value of money is near all-time lows.  Delayed payments are an annoyance, but apart from cash flow there is not any real advantage to delaying payment.   Senior management who lived through or studied the higher short-term interest rates of the 1970s-1980s will recall the importance of managing the ‘float’ and of good treasury/receivables operations.  Changing economic conditions could make this even more of a hot topic.

Developing a simple care delivery model further – dependent interactions

Let’s go back to our simple generalized linear model of care delivery from this post:

Simplified ER process (flowchart)

With its resultant Generalized Linear Function:

GLM

This model, elegant in its simplicity, does not account for the inter-dependencies in care delivery.  A more true-to-life revised model is:

ER process with interdependencies (flowchart) – where there are options for back-and-forth pathways depending on new clinical information, denoted in red.

A linear model that takes into account these inter-dependencies would look like this:

GLM2
Including these interactions, we go from 4 terms to 8.  And this is an overly simplified model!  By drilling down in a typical PI/Six Sigma environment into an aspect of the healthcare delivery process, it’s not hard to imagine identifying well over four points of contact/patient interaction, each with its own set of interdependencies.  Imagine a process with 12-15 sub-processes, with most of those sub-processes each having on average six interdependencies.  Then add the possibility of multiple interdependencies among the processes…  This doesn’t even account for an EMR dataset where the number of columns could be …. 350?  Quickly, your ‘simple’ linear model is looking not so simple, with easily over 100 terms in the equation, which also causes solvability problems.  Not to despair!  There are ways to take this formula with a high number of terms and create a more manageable model as a reasonable approximation.  The mapping and initial modeling of the care process is of greatest utility from an operational standpoint, allowing for understanding and guiding interpretation of the ultimate data analysis.
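Since the two equation figures (GLM and GLM2) above are images, here is an assumed rendering of the idea rather than the exact originals: a four-term main-effects model, and the same model with four interaction terms added for the red back-and-forth pathways.

```latex
% Four main-effect terms (the simple model):
g\!\left(\mathbb{E}[Y]\right) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4

% Adding interaction terms for the dependent pathways takes us from 4 terms to 8:
g\!\left(\mathbb{E}[Y]\right) = \beta_0 + \sum_{i=1}^{4}\beta_i x_i
  + \sum_{(i,j)\,\in\,\mathcal{D}} \gamma_{ij}\, x_i x_j , \qquad |\mathcal{D}| = 4
```

Here g is the link function, the x_i are the four main process terms, and D is the set of dependent step pairs; which pairs belong in D is exactly what the revised flowchart encodes.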

I am a believer that statistical computational analysis can identify the terms which are most important to the model.  By inference, these inputs will have the most effect upon outcome and can show management where precious effort, resources, and time should be directed to maximize outcomes.

 

What Medicine can learn from Wall Street – Part 4 – Portfolio Management and complex systems

attrib: Asy Arch

Let’s consider a single security trader.

All they trade is IBM.  All they need to know is that security and the indexes that include it.  But start trading another security, such as Cisco (CSCO), while they have a position in IBM, and they have a portfolio.   Portfolios behave differently – profiting or losing on an aggregate basis from the combination of movements in multiple securities.  For example, if you hold 10,000 shares each of IBM and CSCO, and IBM appreciates by a dollar while CSCO loses a dollar, you have no net gain or loss.  That’s called portfolio risk.

Everything in the markets is connected.  For example, if you’re an institutional trader with a large (1,000,000 shares +) position in IBM, you know that you can’t sell quickly without tanking the market.  That’s called execution risk.  Also, once the US market closes (less of a concern these days than 20 years ago), there is less liquidity.  Imagine you are this large institutional trader, at home at 11pm.   A breaking news story develops about a train derailment of toxic chemicals near IBM’s research campus causing fires.   You suspect that it destroyed all of their most prized experimental hardware, which will take years to replace.  Immediately, you know that you have to get out of as much IBM as possible to limit your losses.  However, when you get over to your trading terminal, the first bid in the market is $50 lower than that afternoon’s price, for a minuscule 10,000 shares.  If you sell at that price, the next price will be even lower for a smaller amount.   You’re stuck.  However, there is a relationship between IBM and the general market called beta, a coefficient relating the stock’s moves to the market’s.  Since you cannot get out of your IBM directly, you sell a defined number of S&P futures short in the open market to simulate a short position in IBM.  You’re going to take a bath, but not as bad as the folks who went to bed early and didn’t react to the news.
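For concreteness, sizing that futures hedge is just the beta-weighted dollar exposure divided by one contract's notional value; the numbers below (price, beta, index level) are illustrative assumptions, and the $50 multiplier is the E-mini S&P 500's.

```python
# Hedging a stock position you cannot sell by shorting index futures.
shares = 1_000_000          # the institutional IBM position
last_price = 150.00         # hypothetical last traded price for the stock
beta = 1.1                  # illustrative beta of the stock to the S&P 500

sp_level = 2000.0           # hypothetical S&P 500 futures level
multiplier = 50             # E-mini S&P 500 contract multiplier ($ per index point)

position_value = shares * last_price
contract_notional = sp_level * multiplier

# Number of futures contracts to sell to simulate a short position in the stock.
contracts_to_sell = round(beta * position_value / contract_notional)
print(f"Sell ~{contracts_to_sell} E-mini contracts to hedge")
```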

A sufficiently large portfolio with more than ~250 stocks will approximate the broader market indexes (such as the S&P 500 or Russell indexes), depending upon composition.  Its beta will be in the 0.9-1.1 range, with 1.0 meaning it moves in lockstep with the index.  Traders attempt to improve upon this expected rate of return by strategic buys and sells of the portfolio components.  Any extra return above the expected rate of return of the underlying is alpha.   Alpha is what you pay managers for, instead of just purchasing the Vanguard S&P 500 index fund and forgetting about it.  It’s said that most managers underperform the market indexes.  A discussion of Modern Portfolio Theory is beyond the scope of this blog, but you can go here for more.

So, excepting an astute manager delivering alpha (or an undiversified portfolio), the larger and more diversified the portfolio is, the more it behaves like an index and the less dependent it is upon the behavior of any individual security.  Also, without knowing the exact composition of the portfolio and its proportions, its overall behavior can be pretty opaque.

MAIN POINT: The portfolio behaves as its own process: the sum of the interactions of its constituents.

 

Courtesy Arnold C.

I postulate that the complex system of healthcare delivery behaves like a multiple-security portfolio.  It is large, complex, and, without a clear understanding of its constituent processes, potentially opaque.  The individual components of care delivery sum together to form an overall process of care delivery.  The over-arching hospital, outpatient, and office care delivery process is a derivative process – integrating multiple underlying sub-processes.

We trace, review, and document these sub-processes to better understand them.  Once they are understood, metrics can be established and process improvement tools applied.  The PI team is called in, and a LEAN/Six Sigma analysis is performed.  Six Sigma process analytics typically focus on one sub-process at a time to improve its efficiency.  Improving a sub-process’s efficiency is a laudable and worthwhile goal which can result in cost savings, better care outcomes, and reduced healthcare prices.  However, there is also the potential for Merton’s ‘unintended consequences‘.

Most importantly, the results of the Six Sigma PI need to be understood in the context of the overall enterprise – the larger complex system.  Optimizing a sub-process while causing a bottleneck in the larger enterprise process is not progress!
This is because a choice of the wrong metric, or overzealous overfitting, may, while improving the individual process, create a perturbation in the system (a ‘bottleneck’) whose negative effects are, confoundingly, more problematic than the fix.  Everyone thinks that they are doing a great job, but things get worse, and senior management demands an explanation.   Thereafter, a lot of finger-pointing occurs.  These effects are due to dependent variables or feedback loops that exist in the system’s process.  Close monitoring of the overall process will help in identifying unintended consequences of process changes.  I suspect most senior management folks will recall the time when an overzealous cost-cutting manager decreased in-house transport to the point where equipment idled and LOS increased – i.e., the 0.005% saved by the patient-transport re-org cost the overall institution 2-3% until the problem was fixed.

There is a difference between true process improvement and goosing the numbers.  I’ve written a bit about this in real vs. fake productivity and my post about cost shifting.  I strongly believe it is incumbent upon senior management to monitor middle management and prevent these outcomes.  Well-thought-out metrics and clear missions and directives can help.  Specifically, senior management needs to be aware that optimization of sub-processes occurs within the larger overall process, and that any optimization must improve the overall care process (the derivative process) as well.   An initiative that fails to meet both the local and global goals is a failed initiative!

It’s the old leaky pipe analogy – put a band-aid on the pipe to contain one leak, and the increased pressure in the pipe causes the pipe to burst somewhere else, necessitating another band-aid.  You can’t patch the pipe enough – too old.  The whole pipe needs replacement.  And the sum of repairs over time exceeds the cost of simply replacing it.

I’m not saying that process improvement is useless – far from it, it is necessary to optimize efficiency and reduce waste to survive in our less-than-forgiving healthcare business environment.  However, consideration of the ‘big picture’ is essential – which can be mathematically modeled.  The utility of modeling is to gain an understanding of how the overall complex process responds to changes – to avoid unintended consequences of system perturbation.

What medicine can learn from Wall Street – Part 3 – The dynamics of time

This is a somewhat challenging post, with cross-discipline correlations, some unfamiliar terminology, and new concepts.  There is a payoff!

You can recap part 1 and part 2 here. 

The crux of this discussion is time.  Understanding the progression towards shorter and shorter time frames on Wall Street lets us draw parallels and contrasts with medical care delivery, particularly pertaining to processes and data analytics.  This is relevant because some vendors tout real-time capabilities in healthcare data analysis – possibly not as useful as one thinks.

In trading, the best profit is a riskless one: a profit that occurs simply by being present, is reliable and reproducible, and exposes the trader to no risk.  Meet arbitrage.  Years ago, it was possible for the same security to trade at different prices on different exchanges, as there was no central marketplace.  A network of traders could execute a buy of a stock for $10 in New York and then sell those same shares on the Los Angeles exchange for $11.  For a 1,000-share transaction, a $1 profit per share yields $1,000.  It was made by the head trader holding up two phones to his head and saying ‘buy’ into one and ’sell’ into the other.*   These relationships could be exploited over longer periods of time and represented an information deficit.  However, as more traders learned of them, the opportunities became harder to find as greater numbers pursued them.  This price arbitrage kept prices reasonably similar before centralized, computerized exchanges and data feeds.

As information flow increased, organizations became larger and more effective, and the time frames for executing profitable arbitrages decreased.  This led traders to develop simple predictive algorithms, as Ed Seykota did, detailed in part 1.  New instruments re-opened the profit possibility for a window of time, which eventually closed.  The development of futures, options, and indexes, all the way to closed exchanges (ICE, etc.), created opportunities for profit which eventually became crowded.  Since the actual arbitrages were mathematically complex (futures have an implied interest rate, options require the solution of partial differential equations, and indexes require instantaneously summing hundreds of separate securities), a computational model was necessary, as no individual could compute the required elements quickly enough to profit reliably.  With this realization, it was only a matter of time before automated trading (AT) happened and evolved into high-frequency trading, with its competing algorithms operating without human oversight on millisecond timeframes.

The journey from daily prices, to ever shorter intervals over the trading day, to millisecond prices was driven by the availability of good data and reliable computing that could be counted on to act on those flash prices.  What was once a game of location (geographical arbitrage) turned into a game of speed (competitive pressure on geographical arbitrage), then into a game of predictive analytics (proprietary trading and trend following), then into a more complex game of predictive analytics (statistical arbitrage), and was ultimately turned back into a game of speed and location (high-frequency trading).

The following chart shows a probability analysis of an ATM (at-the-money) straddle position on IBM.  This is an options position; it is not important to understand the instrument, only to understand what the image shows.  For IBM, the expected price range at one standard deviation (+/- 1 s.d.) is plotted below.  As time (days) increases along the X axis, the expected range widens, i.e., becomes less precise.

credit: TD Ameritrade
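That widening band follows from the square-root-of-time scaling of volatility; roughly (under the Gaussian assumption the second footnote below also flags):

```latex
\text{expected } \pm 1\,\text{s.d. move} \;\approx\; S \,\sigma_{\text{implied}} \sqrt{t/365}
```

where S is the current price, σ_implied the annualized implied volatility, and t the number of days forward; the range grows with the square root of time, which is the flaring shape in the chart.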

Is there a similar corollary for health care?

Yes, but.

First, recognize the distinction between the simpler price-time data that exists in the markets and the rich, complex, multivariate data in healthcare.

Second, assuming the random walk hypothesis, security price movement is unpredictable; at best, one can only calculate that the next price will fall within a range defined by a number of standard deviations according to one’s model, as seen in the picture above.  You cannot make this argument in healthcare, because the patient’s disease is not a random walk.  Disease follows prescribed pathways and natural histories, which allow us to make diagnoses and implement treatment options.

It is instructive to consider Clinical Decision Support tools.  Please note that these tools are not a substitute for expert medical advice (and my mention does not imply endorsement).  See Esagil and Diagnosis Pro.  If you enter “abdominal pain” into either of the algorithms, you’ll get back a list of 23 differentials (woefully incomplete) in Esagil and 739 differentials (more complete, but too many to be of help) in Diagnosis Pro.  But this is a typical presentation to a physician – a patient complains of “abdominal pain” and the differential must be narrowed.

At the outset, there is a wide differential diagnosis.  The possibility that the pain is a red herring and the patient really has some other, unsuspected disease must be considered.  While there are a good number of diseases with a pathognomonic presentation, uncommon presentations of common diseases are more frequent than common presentations of rare diseases.

In comparison to the trading analogy above, where expected price movement is generally restricted to a quantifiable range based on the observable statistics of the security over a period of time, for a de novo presentation of a patient, this could be anything, and the range of possibilities is quite large.

Take, for example, a patient that presents to the ER complaining “I don’t feel well.”  When you question them, they tell you that they are having severe chest pain that started an hour and a half ago.  That puts you into the acute chest pain diagnostic tree.

Reverse Tree

With acute chest pain, there is a list of differentials that needs to be excluded (or ‘ruled out’), some quite serious.  A thorough history and physical is done, taking 10-30 minutes.  Initial labs are ordered (5-30 minutes if done as a rapid, in-ER test; longer if sent to the main laboratory), an EKG and CXR (chest X-ray) are done for their speed (10 minutes each), and the patient is sent to CT for a CTA (CT angiogram) to rule out a PE (pulmonary embolism).  This is a useful test because it will not only show the presence or absence of a clot, but will also allow a look at the lungs to exclude pneumonias, effusions, dissections, and malignancies.  Estimate that the wait time for the CTA is at least 30 minutes.

The ER doctor then reviews the results (5 minutes): troponins are negative, excluding a heart attack (MI); the CT scan eliminates PE, pneumonia, dissection, pneumothorax, effusion, and malignancy in the chest; the chest X-ray excludes fracture; and the normal EKG excludes arrhythmia, gross valvular disease, and pericarditis.   The main diagnoses left are GERD, pleurisy, referred pain, and anxiety.  The ER doctor goes back to the patient (10 minutes): the patient doesn’t appear anxious and has no stressors, so a panic attack is unlikely.  No history of reflux, so GERD is unlikely.  No abdominal pain component, and labs were negative, so abdominal pathologies are unlikely.  Point tenderness is present on the physical exam at the costochondral junction, and the patient is diagnosed with costochondritis.  The patient is then discharged with a prescription for pain control (30 minutes).

Ok, if you’ve stayed with me, here’s the payoff.

As we proceed down the decision tree, the number of possibilities narrows in medicine.

Compare that with price-time data, in which the range of potential prices increases as you proceed forward in time.

So, in healthcare the list of potential diagnoses narrows as you proceed down the x-axis of time.  Therefore, time is both one’s friend and one’s enemy – a friend in that it provides for the diagnostic and therapeutic interventions which establish the patient’s disease process; an enemy in that payment models in medicine favor making that diagnostic and treatment process as quick as possible (at least for hospital inpatients).

We’ll continue this in part IV and compare its relevance to portfolio trading.

*As an aside, the phones in trading rooms had a switch on the handheld receiver – you would push it in to talk.  That way, the other party would not know that you were conducting an arbitrage!  They were often slammed down and broken by angry traders – one of the manager’s jobs was to keep a supply of extras in his desk.  They were not hard-wired but plugged in by a jack expressly for that purpose!

**Yes, for the statisticians reading this, I know that there is an implication of a gaussian distribution that may not be proven.  I would suspect the successful houses have modified for this and have instituted non-parametric models as well.  Again, this is not a trading, medical or financial advice blog.