A conversation with Farzad Mostashari MD

I participated in a webinar with Farzad Mostashari, MD, ScM, former director of the ONC (Office of the National Coordinator for Health IT), sponsored by the data analytics firm Wellcentive.  He is now a visiting fellow at the Brookings Institution.  Farzad spoke on points made in a recent article in the American Journal of Accountable Care, “Four Key Competencies for Physician-led Accountable Care Organizations.”

The hour-and-a-half session lent itself well to a Q&A format, and basically turned into a small-group consulting session with this very knowledgeable policy leader!

Discussed:
1.  Risk Stratification.  Begin using the EHR data by ‘hot spotting.’  Hot spotting refers to a technique of identifying outliers in medical care and evaluating them to find out why they consume resources significantly beyond the average.  The Oliver Wyman folks wrote a great white paper that references Dr. Jeffrey Brenner of the Camden Coalition, who identified the 1% of Medicaid patients responsible for 30% of the city’s medical costs.  Farzad suggests that data mining should go further and “identify populations of ‘susceptibles’ with patterns of behavior that indicate impending clinical decompensation & lack of resilience.”  He further suggests that we go beyond an insurance-like “risk score” to understand how and why these patients fail, and then apply targeted interventions to prevent susceptibles from failing and overutilizing healthcare resources in the process.  My takeaway: in the transition from volume to value, bundled payments and ACO-style payments will incentivize physicians to share and manage this risk, transferring onto them a role traditionally filled only by insurers.
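To make the hot-spotting step concrete, here is a minimal sketch in Python on made-up data – the column names and cost distribution are assumptions, not anyone’s actual schema – that ranks patients by total cost and flags the top 1%.

```python
import numpy as np
import pandas as pd

# Minimal hot-spotting sketch on made-up data.  In practice the frame would
# come from a claims/EHR extract with (hypothetical) patient_id and cost columns.
rng = np.random.default_rng(0)
claims = pd.DataFrame({
    "patient_id": rng.integers(0, 1000, size=20_000),          # ~1,000 patients
    "cost": rng.lognormal(mean=5.0, sigma=1.5, size=20_000),   # heavily skewed costs
})

# Total cost per patient, highest first
cost_per_patient = (claims.groupby("patient_id")["cost"]
                          .sum()
                          .sort_values(ascending=False))

# "Hot spotters": the top 1% of patients by total cost
n_top = max(1, int(0.01 * len(cost_per_patient)))
hot_spotters = cost_per_patient.head(n_top)

share = hot_spotters.sum() / cost_per_patient.sum()
print(f"Top {n_top} patients account for {share:.1%} of total cost")
```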

2.  Network Management.  Data mining the EHR enables organizations to look at provider and resource utilization within a network (cf. the recent Medicare physician payments data release).  By analyzing this data, referral management can be performed.  By sending patients specifically to those providers who have the best outcomes / lowest costs for that disease, the ACO or insurer can meet shared savings goals.  This would also help prevent over-utilization – by changing existing referral patterns and excluding those providers who always choose the highest-cost option for care (cf. the recent Medicare payment data for ophthalmologists performing intraocular drug injections – wide variation in costs).  This IS happening – Aetna’s CEO, Mark Bertolini, said so specifically during his HIMSS 2014 keynote.  To my understanding, network analysis is mathematically demanding (think eigenvectors, eigenvalues, and linear algebra) – but that won’t stop a determined implementer (it didn’t stop Facebook, Google, or Twitter).  Also included in this topic were workflow management, which is sorely broken in current EHR implementations, clinical decision support tools (like ACR Select), and traditional Six Sigma process analytics.
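To give the linear-algebra flavor of network analysis, here is a toy sketch using PageRank (an eigenvector-centrality variant) on a hypothetical referral graph – the provider names and referral counts are invented, and a real analysis would of course start from claims or EHR referral data.

```python
import networkx as nx

# Hypothetical directed referral graph: an edge A -> B with weight w means
# provider A sent w referrals to provider B.  Names and counts are made up.
G = nx.DiGraph()
referrals = [
    ("pcp_smith", "cardio_jones", 40),
    ("pcp_smith", "cardio_lee", 5),
    ("pcp_patel", "cardio_jones", 25),
    ("pcp_patel", "ortho_kim", 12),
    ("cardio_jones", "ep_garcia", 8),
]
for src, dst, n in referrals:
    G.add_edge(src, dst, weight=n)

# PageRank is an eigenvector-centrality variant (this is where the
# eigenvalue/linear-algebra machinery lives): it scores the providers
# who receive the most referral "flow" in the network.
rank = nx.pagerank(G, weight="weight")
for provider, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{provider:15s} {score:.3f}")
```

In practice you would join these centrality scores against cost and outcome data per provider before touching any referral pattern.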

3.  ADT Management.  This was something new to me.  Using the admission/discharge/transfer (ADT) data from the HL7 feed, you can ‘push’ that data to regional health systems, achieving a useful degree of data exchange even without a formal regional health information exchange.  Patients who bounce from one ER to the next can be identified this way.  It’s also useful to push these alerts to the primary care physicians (PCPs) managing those patients.  Today, when PCPs function almost exclusively on an outpatient basis and hospitalists manage the patient while in the hospital, the PCP often doesn’t know about a patient’s hospitalization until the patient presents to the office.  Follow-up care in the first week after hospitalization may help prevent readmissions.  According to Farzad, there is a financial incentive to do so – a discharge alert can enable a primary care practice to ensure that every discharged patient has a telephone follow-up within 48 hours and an office visit within 7 days, which would qualify for a $250 “transition in care” payment from Medicare.  (Aside – I wasn’t aware of this.  I’m not a PCP, and I would check Medicare billing criteria closely for eligibility conditions before implementing, as consequences could be severe.  Don’t just take my word for it, as I may be misquoting or misunderstanding, and Medicare billers are ultimately responsible for what they bill for.  This may be limited to ACOs.  Do your own due diligence.)
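For the curious, here is a toy sketch of what an ADT-driven discharge alert might look like.  The sample message and the alert routing are made up; the field positions (MSH-9 event type, PID-3/PID-5 identifiers) follow the usual HL7 v2 layout, but check your own feed’s specification – a production feed belongs in a proper interface engine, not a script.

```python
# Toy sketch of turning an HL7 v2 ADT feed into discharge alerts for PCPs.
# The sample message and downstream routing are entirely hypothetical.
SAMPLE_ADT = "\r".join([
    "MSH|^~\\&|HIS|GENERAL HOSPITAL|ALERTS|ACO|202405011230||ADT^A03|123456|P|2.3",
    "PID|1||MRN0012345||DOE^JANE||19550214|F",
    "PV1|1|I|MED^302^1|||||||||||||||V98765",
])

def parse_adt(message: str) -> dict:
    """Pull the event type and patient identifiers out of a pipe-delimited ADT message."""
    segments = {line.split("|")[0]: line.split("|") for line in message.split("\r")}
    event = segments["MSH"][8]          # MSH-9, e.g. "ADT^A03" (A03 = discharge)
    mrn = segments["PID"][3]            # PID-3, patient identifier
    name = segments["PID"][5]           # PID-5, patient name (family^given)
    return {"event": event, "mrn": mrn, "name": name}

def on_adt(message: str) -> None:
    msg = parse_adt(message)
    if msg["event"].endswith("A03"):    # discharge event
        # Hypothetical downstream step: look up the patient's PCP in the
        # ACO's attribution file and queue a 48-hour follow-up call.
        print(f"DISCHARGE ALERT: {msg['name']} ({msg['mrn']}) -- notify PCP for follow-up")

on_adt(SAMPLE_ADT)
```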

4.  Patient outreach and engagement.  One business point is that for the ACO to profit, patients must be retained.  Patient satisfaction may be as important to the business model as the interventions the ACO is performing, particularly as the ACO model suggests a shift to up-front costs and back-end recovery through shared savings.  If you as an ACO invest in a patient, only to lose that patient to a competing ACO, you let your competitor have the benefit of those improvements in care while you eat the sunk costs!  To maintain patient satisfaction and engagement, consider behavioral economics (think Cass Sunstein’s Nudges.gov paper), gamification (Jane McGonigal), and A/B testing (Tim Ferriss) marketing techniques.  Basically, we’re applying customer-centric marketing to healthcare, considering not only the total lifetime revenue of the patient but also the total lifetime cost!
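If you want to A/B test outreach – say, two versions of a post-discharge reminder – the arithmetic is simple enough to sketch.  The response counts below are invented; this is just a two-proportion z-test, not a recommendation for any particular messaging.

```python
from math import sqrt, erf

# Hypothetical A/B test: did reminder message B get more discharged patients
# in for their 7-day follow-up visit than message A?  Counts are made up.
a_shown, a_total = 118, 500   # message A: 118 of 500 patients came in
b_shown, b_total = 151, 500   # message B: 151 of 500 patients came in

p_a, p_b = a_shown / a_total, b_shown / b_total
p_pool = (a_shown + b_shown) / (a_total + b_total)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_total + 1 / b_total))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```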

It was a very worthwhile discussion and thanks to Wellcentive for hosting it!  

On Mentoring, compassion, curing and healing.

Once, I had a not-so-brief flirtation with Neurosurgery.  In medical school, I was awed by the structural and functional specificity of the brain, and fascinated by the almost priestly status of the neurosurgery attendings.  Unlike other attendings, they did EVERYTHING themselves – from operating to NICU management (including respirators) to clinics to research.  The gung-ho spirit of the specialty is infectious to those it speaks to – which is good because Neurosurgery demands a commitment so overwhelming it is more of a lifestyle choice than a profession.
Unfortunately, my medical school did not have a top-ranked Neurosurgery program, so in my fourth year of medical school I went up to the Mecca in Boston to steep myself in its culture.  I was a curiosity, but the department was professionally committed to my education, which I appreciated.  Daily rounds were done with different attendings as I tried to soak up as much as possible so that I could learn how to be a great academic neurosurgeon.

One night, after a long day of operative procedures and clinics that had started at 6 am, I was rounding alone with the patriarch of the Neurosurgery department.  This man was one of the greats – a society chairman, an expert in an esoteric and challenging area of neurosurgery, a prolific writer of papers and books – a man who had dedicated his 60-plus years of life to the pursuit of knowledge and surgical skill as an ultimate goal.  I was honored that he allowed me to round with him, in truth.

We were seeing the last patient of the day – late at night.  It was a woman who had come to him with a brain tumor deemed inoperable by all others.  The surgeon had taken her to the OR a few days earlier and tried to wrest the cancer from her brainstem.  He proceeded painstakingly, with extreme care, the movements of his hands precise and slight.  Every time he tried to extirpate the tumor, it caused physiologic instability.  It was nerve-wracking to observe.  After a number of hours, he closed.

The patient was awake and awaiting the surgeon.  She asked him, “Did you get it?”  He answered, “No.”   She then asked, “Am I going to die?”  He answered softly, “Yes.”

The patient started to cry.  And then I saw this man, this giant, this scientist and clinician beyond reproach – sit down on the bed and put his arm around the patient.  He held her until she stopped crying.  It was more than a perfunctory few minutes.

This man, at the pinnacle of his field, could act any way he wanted.  He could spin on his heel and leave the room, snap at the patient and tell her to get herself together – and nobody would ever reproach him.   Such a gifted surgeon could act any way he wished.

But instead he reached out to a suffering patient with more compassion than many physicians I have known.  It was not just programmed, scripted ‘compassion’ learned from a patient experience consultant – he waited there until she had exhausted her grief at that moment.  She knew that he was there for her in a human, healing way, now that he could no longer cure her.  I stood still, listening to her muffled sobs, for at least 15 minutes.

The lesson learned – if the best of the best could show such compassion, so could I.  Perhaps the responses of other surgeons I had seen, the ones lacking empathy, were really a marker of a lesser degree of competence – a cover-up for personal or professional inadequacies rather than a mark of importance.

I ultimately did not choose Neurosurgery as my specialty.  But the lesson stayed with me.  I hope that in my practice I was able to show this degree of caring to my patients, many of whom came to me in extreme sickness, many of whom I would never be able to cure.  I hope that I was able to heal them in some way.

As might be expected with the passage of many years, he is now gone.  But this picture is how I remember him, and I will share it with you (from the MGH Department of Neurosurgery web site).

Good night, Dr. Ojemann (1931-2010).

 

What Medicine can learn from Wall Street – Part 2 – evolution of data analysis

If you missed the first part, read it here.  Note: for the HIT crowd reading this, I’ll offer a (rough) comparison to the HIMSS stages (1-7).

1.     Descriptive Analytics based upon historical data.

Hand Drawn Chart of the Dow Jones Average

     This was the most basic use of data analysis.  When newspapers printed price data (Open-High-Low-Close, or OHLC), that data could be charted (on graph paper!) and interpreted using basic technical analysis, which was mostly lines drawn upon the chart. (1)  Simple formulas such as year-over-year (YOY) percentage returns could be calculated by hand.  This information was merely descriptive and had no bearing upon future events.  Getting information into a computer required data entry by hand, and operator errors could throw off the accuracy of your data.  Computers lived in the accounting department, with the data being used to record positions and profit and loss (P&L).  At month’s end a large run of data would produce a computer-generated accounting statement on a dot-matrix printer.
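For clarity, here is what those hand calculations look like expressed in modern code – a year-over-year return and a simple moving average over a made-up series of closes.

```python
# The kind of arithmetic that used to be done by hand on printed OHLC data:
# a year-over-year return and a simple moving average of the closes.
# Prices below are made up for illustration (think quarterly closes).
closes = [105.2, 98.7, 112.4, 120.1, 118.3, 131.0, 127.5, 140.2]

yoy_return = (closes[-1] - closes[-5]) / closes[-5]   # vs. four quarters ago
print(f"Year-over-year return: {yoy_return:.1%}")

def simple_moving_average(prices, window):
    """Average of each rolling window of `window` prices."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

print(simple_moving_average(closes, 4))
```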
     A good analogue to this system would be older laboratory reporting systems where laboratory test values were sent to a dedicated lab computer.  If the test equipment interfaced with the computer (via IEEE-488 or RS-232 interfaces), the values were sent automatically.  If not, data entry clerks had to enter these values.  Once in the system, data could be accessed by terminals throughout the hospital.  Normal ranges were typically included, with an asterisk indicating an abnormal value.  The computer database would be updated once a day (end-of-day data).  For more rapid results, you would have to go to the lab yourself and ask.  On the operations side, a Lotus 1-2-3 spreadsheet of quarterly charges, accounts receivable, and perhaps a few very basic metrics would sit on the finance team’s computer, available to the finance department and CEO for periodic review.
     For years, this delayed, descriptive data was the standard.  Any inference would be provided by humans alone, who connected the dots.  A rough equivalent would be HIMSS stage 0-1.

2.     Improvements in graphics, computing speed, storage, connectivity.

     Improvements in processing speed & power (after Moore’s Law), falling memory and storage prices, and improved device connectivity resulted in more readily available data.  Near real-time price data was available, but relatively expensive ($400 per month or more per exchange, with dedicated hardware necessary for receipt – a full vendor package could readily run thousands of dollars a month from a low-cost competitor, and much more if you were a full-service institution).  An IBM PC XT with enough computing power & storage (about $3,000) could now chart this data.  The studies that Ed Seykota ran on weekends would run on the PC – but analysis was still manual.  The trader would have to sort through hundreds of ‘runs’ of the data to find the combination of parameters which led to the most profitable (successful) strategies, and then apply them to the market going forward.  More complex statistics could be calculated – such as Sharpe ratios, CAGR, and maximum drawdown – and these were developed and diffused over time into wider usage.  Complex financial products such as options could now be priced more accurately in near real time thanks to algorithmic advances (such as the binomial pricing model).
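A minimal sketch of those statistics, computed from a made-up daily equity curve (the annualization assumes roughly 252 trading days and a zero risk-free rate):

```python
import numpy as np

# The statistics named above, computed from a (made-up) daily equity curve.
equity = np.array([100_000, 100_800, 100_200, 101_500, 99_900,
                   102_300, 103_100, 102_600, 104_000, 105_200], dtype=float)

daily_returns = np.diff(equity) / equity[:-1]

# Annualized Sharpe ratio (risk-free rate assumed 0, ~252 trading days/year)
sharpe = np.sqrt(252) * daily_returns.mean() / daily_returns.std()

# CAGR: compound annual growth rate over the period covered by the curve
# (annualizing a 10-day toy series exaggerates the number; real use needs a longer history)
years = len(daily_returns) / 252
cagr = (equity[-1] / equity[0]) ** (1 / years) - 1

# Maximum drawdown: worst peak-to-trough decline of the equity curve
running_peak = np.maximum.accumulate(equity)
max_drawdown = ((equity - running_peak) / running_peak).min()

print(f"Sharpe {sharpe:.2f}  CAGR {cagr:.1%}  Max drawdown {max_drawdown:.1%}")
```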
     The health care corollary would be early in-house electronic record systems tied in to the hospital’s billing system.  Some patient data was present, but in siloed databases with limited connectivity.  To actually use the data, you would ask IT for a data dump, which would then be uploaded into Excel for basic analysis.  Data came from different systems, and combining it was challenging.  Because of the difficulty in curating the data (think massive spreadsheets with pivot tables), this could be a full-time job for an analyst or team of analysts, and careful a priori selection of which data to follow and which to discard was needed.  The quality of the analysis improved, but it was still labor intensive, particularly because of large data sets and the difficulty of collecting the information.  For analytic tools, think Microsoft Excel or Minitab.
     This corresponds to HIMSS stage 2-3.

3.     Further improvement in technology correlates with algorithmic improvement.
     With new levels of computing power, analysis of data became quick and relatively cheap, allowing automated analysis.  Take the same price/time data that was analyzed by hand before, and now apply an automated algorithm to run through ALL possible combinations of the included parameters.  This is brute-force optimization.  The best solve for the data set is found, and the trader is more confident that the model will be profitable going forward.
     For example, consider ACTV. (2)  Running a brute-force optimization on this security with a moving average over the last 2 years yields a profitable trading strategy that returns 117% with the ideal solve.  Well, on paper that looks great.  What could be done to make it even MORE profitable?  Perhaps you could add a stop loss.  Do another optimization and the theoretical return increases.  Want more?  Sure.  Change the indicator and re-optimize.  Now your hypothetical return soars.  Why would you ever want to do anything else? (3,4)
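Here is roughly what that brute-force search looks like in code – a toy long/flat moving-average rule optimized over a random-walk price series generated purely for illustration, so whatever “wins” here is fit to this particular history (which is exactly the point of the next paragraph):

```python
import numpy as np

rng = np.random.default_rng(0)
# A made-up daily price series (random walk) standing in for ~2 years of closes.
prices = 50 + np.cumsum(rng.normal(0.05, 1.0, 500))

def ma_strategy_return(prices, window):
    """Total return of a toy rule: long when price > its moving average, flat otherwise."""
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    px = prices[window - 1:]
    daily_ret = np.diff(px) / px[:-1]
    position = (px[:-1] > ma[:-1]).astype(float)   # yesterday's signal, today's return
    return float(np.prod(1 + position * daily_ret) - 1)

# Brute force: try every moving-average length and keep the most profitable.
# (Whatever wins is fit to THIS history -- the overfitting trap discussed next.)
results = {w: ma_strategy_return(prices, w) for w in range(2, 101)}
best_window = max(results, key=results.get)
print(f"Best window on this data: {best_window} days, return {results[best_window]:.1%}")
```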
     But it’s not as easy as it sounds.  The best of the optimized models would work for a while, and then stop.  The worst would immediately diverge and lose money from day 1 – never recovering.  Most importantly: what did we learn from this experience?  We learned that how the models were developed mattered.  And to understand this, we need to go into a bit of math.
    Looking at security prices, you can model (approximate) the price activity as a function, F(x) – the squiggles of a chart.  The model can be as complex or as simple as desired.  Above, we started with a simple model (the moving average) and made it progressively more complex by adding additional rules and conditions.  As we do so, the accuracy of the model on the historical data increases, so the paper profitability increases as well.  However, as we increase the accuracy of the model, we use up degrees of freedom, making the model more rigid and less resilient.
     Hence the system trader’s curse – everything works great on paper, but when applied to the market, the more complex the rules and the less robustly the data is tested, the more likely the system will fail, due to a phenomenon known as overfitting.  Take a look at the 3D graph below, which shows a profitability surface for the above analysis:
     You will note that there is a spike in profitability using a 5-day moving average at the left of the graph, but profitability sharply falls off after that, rises a bit, and then craters.  There is a much broader plateau of profitability in the middle of the graph, where many values are consistently and similarly profitable.  Changes in market conditions could quickly invalidate the more profitable 5-day moving average model, but a model with a value chosen from the middle of the chart might be more consistently profitable over time.  While more evaluation would need to be done, the less profitable (but still profitable) model is said to be more ‘robust’.

To combat this, better statistical sampling methods were utilized, namely cross-validation, where a model developed on an in-sample (training) set is then tested against an out-of-sample set for performance.  This gave systems that were less prone to immediate failure, i.e. more robust.  A balance between profitability and robustness can be struck, netting you the sweet spot in the training vs. test-set performance curve I’ve posted before.
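A sketch of that in-sample / out-of-sample discipline, using the same toy moving-average rule on another made-up price series: optimize on the first 70% of the history, then see how the “best” parameter holds up on the unseen remainder.

```python
import numpy as np

rng = np.random.default_rng(1)
prices = 50 + np.cumsum(rng.normal(0.05, 1.0, 1000))   # made-up price series

def ma_strategy_return(prices, window):
    """Total return of the toy long/flat moving-average rule from the sketch above."""
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    px = prices[window - 1:]
    daily_ret = np.diff(px) / px[:-1]
    position = (px[:-1] > ma[:-1]).astype(float)
    return float(np.prod(1 + position * daily_ret) - 1)

# Split the history: optimize on the first 70% (in-sample),
# then evaluate that "best" parameter on the unseen 30% (out-of-sample).
split = int(len(prices) * 0.7)
train, test = prices[:split], prices[split:]

best_window = max(range(2, 101), key=lambda w: ma_strategy_return(train, w))
print(f"Best window in-sample: {best_window}")
print(f"In-sample return:      {ma_strategy_return(train, best_window):.1%}")
print(f"Out-of-sample return:  {ma_strategy_return(test, best_window):.1%}")
```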
So why didn’t everyone do this?  Quick answer: they did.  And with everyone analyzing the same data set of end-of-day historical price data in the same way, many people began to reach the same conclusions.  This created an ‘observer effect’ where you had to be first to market to execute your strategy, or trade in a market that was liquid enough (think the S&P 500 index) that the impact of your trade (if you were a small enough trader – this doesn’t work for a large institutional trader) would not affect the price.  A classic case of ‘the early bird gets the worm.’
     The important point is that WE ARE HERE in healthcare.  We have moderately complex computer systems, implemented largely due to Meaningful Use concerns, bringing us to roughly HIMSS stages 4-7.  We are beginning to use the back ends of these systems to feed analytic engines for useful descriptive analytics that can inform business and clinical care decisions.  While this data is still largely descriptive, some attempts at predictive analytics have been made.  These are largely proprietary (trade secrets), but I have seen some vendors beginning to offer proprietary models to the healthcare community (hospitals, insurers, related entities) aimed at predictive analytics.  I don’t have specific knowledge of the methods used to create these analytics, but after the experience of Wall Street, I’m pretty certain that a number of them are going to fall into the overfitting trap.  There are other, more complex reasons why these predictive analytics might not work (and conversely, good reasons why they may), which I’ll cover in future posts.
     One final point – the application of predictive analytics to healthcare will succeed in an area where it fails on Wall Street, for a specific reason.  On Wall Street, a relationship, once discovered and exploited, disappears.  That is the nature of arbitrage – market forces reduce arbitrage opportunities since they represent ‘free money,’ and once enough people are doing it, it is no longer profitable.  Biological organisms, however, don’t respond to gaming the system in that manner.  For a conclusive diagnosis, there may exist an efficacious treatment that is consistently reproducible.  In other words, for a particular condition in a particular patient with a particular set of characteristics (age, sex, demographics, disease processes, genetics), if accurately diagnosed and competently treated, we can expect a reproducible biologic response, optimally a total cure of the individual.  And that reproducibility extends to the processes present in the complex dynamic systems that comprise our healthcare delivery system.  That is where the opportunity lies in applying predictive analytics to healthcare.

(1) Technical Analysis of Stock Trends, Edwards and Magee, 8th Edition, St. Lucie Press
(2) ACTIVE Technologies, acquired (taken private) by Vista Equity Partners and delisted on 11/15/2013.  You can’t trade this stock.   
(3) Head of Trading, First Chicago Bank, personal communication
(4) Reminder – see the disclaimer for this blog!  And if you think you are going to apply this particular technique to the markets to be the next George Soros, I’ve got a piece of the Brooklyn Bridge to sell you.

What medicine can learn from Wall Street – Part I – History of analytics

We in healthcare lag in computing technology and sophistication vs. other fields.  The standard excuses given are: healthcare is just too complicated, doctors and staff won’t accept new ways of doing things, everything is fine as it is, etc.  But we are shifting to a new high-tech paradigm in healthcare, with ubiquitous computing supplanting or replacing traditional care delivery models.  Medicine has a ‘deep moat’ – both regulatory and through educational barriers to entry.  However, the same was said of the specialized skill sets of the financial industry.  Wall Street has pared its staffing down, has automated many jobs, and continues to do so.  More product (money) is being handled by fewer people than before – an increase in real productivity.

Computing power in the 1960s-1970s on Wall Street meant large mainframe and minicomputer systems used for back-office operations.  Most traders operated by ‘seat of the pants’ hunches and guesses, longer-term macro-economic plays, or by using their privileged position as market-makers to make frequent small profits.  One of the first traders to use computing was Ed Seykota, who applied Richard Donchian’s trend-following techniques to the commodity markets.  Ed would run computer programs on an IBM 360 on weekends, and over six months tested four systems with variations (about 100 combinations), ultimately developing an exponential moving average trading system that would turn a $5,000 account into $15,000,000. (1)  Ed would run his program and wait for the output.  He would then manually select the best system for his needs (usually the most profitable).  He had access to delayed, descriptive data which required his analysis for a decision.

In the 1980s-1990s computing power increased with the PC, and text-only displays evolved into graphical displays.  Systems traders became some of the most profitable traders in large firms.  Future decisions were being made on historical data (early predictive analytics).  On balance, well-designed systems traded by experienced traders were successful more often than not.  Testing was faster, but still not fast (a single security run on a 386-class IBM PC would take about 8 hours).  As more traders began to use the same systems, the systems worked less well.  This was due to an ‘observer effect’: traders trying to exploit a particular advantage quickly caused the advantage to disappear!  The system trader’s ‘edge’, or profitability, was constantly declining, and new markets or circumstances were sought.  ‘Program’ trades were accused of being the cause of the 1987 stock market crash.

There were some notable failures in market analysis – fast Fourier transforms (FFTs) being one.  With enough computing power, you could fit an FFT to the market perfectly – but it would hardly ever work going forward.  The FFT fails because it presumes a cyclical formula, and the markets, while cyclical, are not predictably so.  An interesting phenomenon was that the better the FFT fit, the more quickly and badly it would fall apart.  That was the phenomenon of curve-fitting.  ‘Fractals’ were all the rage later and failed just as miserably – same problem.  As an aside, this explains why simpler linear models in regression analysis are frequently ‘better’ than a high-degree polynomial fit to the data, particularly for predictive analytics.  The closer you fit the data, the less robust the model becomes and the more prone to real-world failure.
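A quick illustration of that curve-fitting point with made-up data: a degree-9 polynomial edges out the straight line on the in-sample half, then falls apart badly out of sample, while the plain linear fit holds up.

```python
import numpy as np

rng = np.random.default_rng(42)

# A noisy linear drift, split into an in-sample and an out-of-sample half.
x = np.linspace(0.0, 2.0, 200)
y = 5.0 * x + rng.normal(0.0, 1.0, size=x.size)
x_in, y_in, x_out, y_out = x[:100], y[:100], x[100:], y[100:]

def rmse(coeffs, xs, ys):
    """Root-mean-square error of a fitted polynomial on (xs, ys)."""
    return float(np.sqrt(np.mean((np.polyval(coeffs, xs) - ys) ** 2)))

for degree in (1, 9):
    coeffs = np.polyfit(x_in, y_in, degree)
    print(f"degree {degree}:  in-sample RMSE {rmse(coeffs, x_in, y_in):.2f}   "
          f"out-of-sample RMSE {rmse(coeffs, x_out, y_out):.2f}")
```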

Further advances in computing and computational statistics followed in the 1990s-2000s.  Accurate real-time market data became widely available and institutionally ubiquitous, and time frames became shorter and shorter.  Programs running on daily data were switched to multi-hour, hourly, and then minute intervals.  The trend-following programs of the past became failures as the market became choppier, and anti-trend (mean reversion) systems were popular.  Enter the quants – the statisticians. (2)  With fast, cheap, near-ubiquitous computing, the scope of the systems expanded.  Now many securities could be analyzed at once, and imbalances exploited – hence the popularity of ‘pairs’ trading.  Real-time calculation of indices created index arbitrage, which could execute without human intervention.
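To show the mean-reversion logic behind pairs trading, here is a toy sketch on two invented, co-moving price series: track the spread between them and flag when its z-score stretches beyond two standard deviations.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two made-up, co-moving price series (think two large banks, or an ETF and a future).
common = np.cumsum(rng.normal(0, 1, 500))
a = 100 + common + rng.normal(0, 0.5, 500)
b = 100 + common + rng.normal(0, 0.5, 500)

# Pairs / mean-reversion logic: trade the spread, not the prices.
spread = a - b
window = 60
recent = spread[-window:]
z = (spread[-1] - recent.mean()) / recent.std()

if z > 2:
    print(f"z = {z:.2f}: spread stretched -- short A / long B, bet on reversion")
elif z < -2:
    print(f"z = {z:.2f}: spread stretched -- long A / short B, bet on reversion")
else:
    print(f"z = {z:.2f}: spread near its recent mean -- no trade")
```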

The index arbitrage (index-arb) programs relied on speed and proximity to the exchanges for advantages in execution.  Statistical arbitrage (stat-arb) programs were the next development.  These evolved into today’s high-frequency trading programs (HFTs), which dominate systems trading.  These programs are tested extensively on existing data and then let loose on the markets with only high-level oversight.  They make thousands of trading decisions a second, incur real profits and losses, and compete against other HFT algorithms in a Darwinian environment where the winners make money and are adapted further, and the losers are dismissed with a digital death.  Master governing algorithms coordinate the individual algorithms. (4)

The floor traders, specialists, market-makers, and scores of support staff that once participated in the daily business have been replaced by glowing boxes sitting in a server rack next to the exchange.  

Not to say that automated trading algorithms are perfect.  A rogue algorithm with insufficient oversight caused a forced sale of Knight Capital Group (KCG) in 2012.  (3)  The lesson here is significant – there ARE going to be errors once automated algorithms are in greater use – it is inevitable.

So reviewing the history, what happened on Wall St.?
1.  First was descriptive analytics based upon historical data.
2.  Graphical interfaces were improved.
3.  Improving technology led to more complicated algorithms which overfit the data. (WE ARE HERE)
4.  Improving data accuracy led to real-time analytics.
5.  Real-time analytics led to shorter analysis timeframes.
6.  Shorter analysis timeframes led to dedicated trading algorithms operating with only human supervision.
7.  Master algorithms were created to coordinate the efforts of individual trading algorithms.

Next post, I’ll show the corollaries in health care and use it to predict where we are going.

  

(1) Jack Schwager, Market Wizards, Ed Seykota interview, pp. 151-174.
(2) David Aronson, Evidence-Based Technical Analysis, Wiley, 2007.
(3) Wall St. Journal, “Trading Error Cost Firm $440 Million,” MarketBeat.
(4) Personal communication, HFT trader (name withheld).