What’s up with N2Value – tying up loose ends

Dora Mitsonia - CC license

It’s been almost a year since my last long-form article. Of course, ‘busyness’ in real life and blog writing are inversely proportional! I’ve been focused on real-life advances; namely neural networks, machine learning, and machine intelligence which fall loosely under the colloquial misnomer of “A.I.”

Having taken a deep dive into machine learning, I’ve found it simultaneously unexpectedly simple and deceptively difficult. The technical hurdles are significant, but improving – math skills ease the conceptual framework, but without the programming chops, practical application is tougher. Worse, the IT task of getting multiple languages, packages, and pieces of hardware to work well together is daunting. Getting the venerable MNIST example to run on your computer with your GPU might be a weekend project – or worse. I’m not a ‘gamer’, so for the last decade it has been hard for me to get excited about increasing CPU clock speeds, faster DRAM, and faster GPU flops. Like many, I’ve been happy to use OS X on increasingly venerable Mac products – it works fine for my purposes.

But since AlexNet’s publication in 2012, the explosion in both theory and application of machine learning has made me sit up and take notice. The ImageNet Large Scale Visual Recognition Challenge top-5 classification error rate was only 2.7% in the latest competition, held a few days ago in July 2017. That’s down from 30%+ error rates only a few years ago. And my current hardware isn’t up to that task.

So, count me in. AI will certainly be used in healthcare, but in what manner and to what extent is still being worked out. Pioneer firms like Arterys and Zebra Medical Vision brave uncharted regulatory waters, watched closely by AI startups with similar dreams.

So, while I’d like to talk more about AI, I’m not sure that N2Value is the right place to do it. N2Value is primarily a healthcare thought-leadership blog, promoting an evolution from Six Sigma methodology into more robust management practices that incorporate systems theory, focus on appropriately chosen metrics, and model patient populations and likely outcomes – and thereby successfully implement profitable value-based care. Caveat: with current US politics, it is very difficult to predict the direction of healthcare policy.

So, in the near future, I will decide what the scope of N2Value will be going forward. I thank my loyal readers & subscribers, who have given me five-digit page views over the short life of the blog – far more than I ever expected! The blog has been a labor of love, and I’m pretty sure that AI algorithms have a place in healthcare management. However, I’m not sure you want to hear me opine on which version of convolutional neural network works better with or without an LSTM added, so stay tuned!

I have a few topics I have alluded to which I would like to mention quickly as stubs – they may or may not be expanded in the future.

STUB: What Healthcare can learn from Wall Street.

The main point of this series was to trace the implications of advances in computing technology for a leading industry (finance), in order to describe the likely similar path of a lagging industry (healthcare). I was never able to find the Wall Street employment statistics I was seeking, which would document a declining number of workers alongside higher productivity and profitability per employee as IT advances allowed for the super-empowerment of individuals.

Additionally, it raised issues regarding technology in adversarial B2B relationships – much like insurer–hospital or hospital–doctor. If I have time, I’d like to rewrite this series; it dates from when I first began blogging and is a bit rough.

STUB: The Measure is the Metric

One of my favorite articles (along with its siblings). This subject was addressed much more eloquently on the Ribbonfarm blog by David Manheim in “Goodhart’s Law and Why Measurement is Hard.” If anything, after reading that essay, you will have sympathy for the metrics-oriented manager and be convinced that nothing they do can be right. I firmly believe that metrics should be designed for the task at hand and, once the goal is achieved, monitored for a while – but not dogmatically so. Better to target new and improved metrics than to enforce institutional petrification ‘by the numbers.’

STUB: Value as Risk Series

I perceive that the only way for value-based care to be profitable and successful long-term is large-scale vertical integration by a large Enterprise Health Institution (EHI) across the care spectrum. The hospital acquires clinics, practices, and doctors; quantifies its covered lives; and then, with better analytics than the insurers, capitates – ultimately contracting directly with employers & individuals. The insurers become redundant, and the vertically integrated enterprise saves through economies of scale. It provides care in the most cost-effective manner possible and closes beds, relying instead on telehealth, mHealth apps, predictive algorithms, and innovative care delivery.

When the Hospital’s profitability model resembles the insurer’s, and it is beholden only to itself (capitated payments are all there is), something fascinating happens. No longer does it matter if there is an ICD-10/HOPPS/CPT/DRG code for a procedure. The entity is no longer beholden to the rules of payment, and can internally innovate. A successful vertically integrated enterprise will – and quickly. While there will have to be appropriate regulatory oversight to prevent patient abuse, profiteering, or attempts to financialize the model; adjusting capitation with incentive payments for real measures of quality (not proxies) will prompt compliance and improved care.

Writing as a physician, I note this arrangement may or may not commoditize care further. Concerns about standardization of care are probably overstated, as the first CDS tool more accurate than a physician will standardize care to that model anyway! From an administrator’s perspective, it is a no-brainer to deliver care in an innovative manner that circumvents existing stumbling blocks. From a patient’s perspective, while I prefer easy access to a physician, maintaining that access is becoming unaffordable – let alone actually utilizing health care! At some point the economic pain will be so high that patients will want alternatives they can afford. Whether that means mid-levels or AI algorithms, only time will tell.

STUB: Data Science and Radiology

I really like the concept I began here with data visualization in five dimensions. Could this be a helpful additional tool to AI research like Tensorboard? I’m thinking about eventually writing a paper on this one.

STUB: Developing the Care Model

The concept of treating a care model like an equation is what got me started on all this – describing a system as a mathematical model seemed like such a good idea, but it required learning on my part. That learning, and its effects, are still ongoing. At the time of writing, the solution appeared daunting and I put the project on the back burner (i.e. abandoned it) because I couldn’t make it work. Of course, with advancing tools and algorithms well suited to this task, I may re-examine it soon.

Machine Intelligence in Medical Imaging Conference – Report

I heard about the Society for Imaging Informatics in Medicine’s (SIIM) Scientific Conference on Machine Intelligence in Medical Imaging (C-MIMI) on Twitter.  It was priced attractively and easy to get to, I’m interested in machine learning, and it was the first radiology conference I’d seen on this subject, so I went.  It was organized on short notice, so I was expecting a smaller conference.


I almost didn’t get a seat.  It was packed.

The conference had real nuts-and-bolts presentations & discussions on machine learning (ML) in healthcare imaging.  Typically these involved Convolutional Neural Networks (CNNs/Convnets), but a few Random Forests (RF) and Support Vector Machines (SVM) sneaked in, particularly in hybrid models alongside a CNN (cf. Microsoft).  The following comments assume some facility in understanding or working with Convnets.

Some consistent threads throughout the conference:

  • Most CNNs were trained on ImageNet with the final fully connected (FC) layer removed, then re-trained on radiology data with a new classifier FC layer placed at the end.
  • Most CNNs used the ImageNet-standard three-channel RGB input despite the images being greyscale.  This is of uncertain significance and importance.
  • The limiting of input matrices to grids smaller than the image size is inherited from the ImageNet competitions (and legacy computational power).  The decreased resolution is a limiting factor in medical imaging applications, potentially worked around by multi-scale CNNs.
  • There is no central data repository for a good “Ground Truth” to develop improved machine imaging models.
  • Data augmentation methods are commonly used due to lower numbers of obtained cases.
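The data-augmentation point above can be sketched without any framework at all. A minimal, hypothetical illustration (the function names are mine, not from any presenter) of turning one labeled case into several training samples via flips and shifts of a toy 2D “image”:

```python
def hflip(img):
    """Mirror a 2D image (list of rows) left-to-right."""
    return [row[::-1] for row in img]

def shift_right(img, k, fill=0):
    """Translate each row k pixels to the right, padding with `fill`."""
    return [[fill] * k + row[:-k] if k else row[:] for row in img]

def augment(img):
    """Return the original image plus simple flipped/shifted variants --
    each keeps the original label, multiplying the training set."""
    return [img, hflip(img), shift_right(img, 1), shift_right(hflip(img), 1)]

if __name__ == "__main__":
    img = [[1, 2, 3],
           [4, 5, 6]]
    variants = augment(img)
    print(len(variants))   # 4 training samples from 1 case
    print(variants[1][0])  # flipped first row: [3, 2, 1]
```

Real pipelines do this with rotations, crops, and intensity jitter on tensors, but the principle is the same: cheap label-preserving transforms stretch a small case count.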

Keith Dreyer DO PhD gave an excellent lecture on the trajectory of machine imaging and how it will be an incremental process, with AI growth narrower in scope than projected and chiefly limited by applications.  At this time, CNN creation and investigation is principally an artisanal product with limited scalability.  A recurring theme was “What is ground truth?” – which in different instances means different things (pathology-proven, followed through time, pathognomonic imaging appearance).

There was an excellent educational session from the FDA’s Berkman Sahiner.  The difference between certifying a class II or class III device may keep radiologists working longer than expected!  A class II device, like CAD, identifies a potential abnormality but does not make a treatment recommendation, and therefore only requires a 510(k) application.  A class III device, such as an automated interpretation program creating diagnoses and treatment recommendations, will require a more extensive application including clinical trials, plus a new validation for any material change.  One important insight (there were many) was that the FDA requires training and test data to be kept separate.  I believe this means that simple cross-validation is neither acceptable nor sufficient for FDA approval or certification.  Adaptive systems may be a particularly challenging area for regulation: as with the ONC, significant changes to the algorithm’s software will require a new certification/approval process.
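The train/test separation point can be pictured with a toy split. This is a hedged sketch, not FDA guidance: the idea is simply that a seeded, fixed hold-out set is set aside once and never touched during training, unlike k-fold cross-validation, where every case ends up in a training fold at some point.

```python
import random

def train_test_split(cases, test_fraction=0.2, seed=42):
    """Partition case IDs into disjoint train/test sets, fixed by seed.
    The test set is held out permanently -- contrast with k-fold
    cross-validation, where each case is trained on in some fold."""
    ids = sorted(cases)
    rng = random.Random(seed)          # seeded: the split is reproducible
    rng.shuffle(ids)
    n_test = max(1, int(len(ids) * test_fraction))
    test = set(ids[:n_test])
    train = set(ids[n_test:])
    return train, test

if __name__ == "__main__":
    train, test = train_test_split(range(100))
    print(len(train), len(test))   # 80 20
    print(train & test)            # set() -- disjoint by construction
```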

Industry papers were presented by HK Lau of Arterys, Xiang Zhou of Siemens, Xia Li of GE, and Eldad Elnekave of Zebra Medical.  The Zebra Medical presentation was impressive, citing their use of the Google Inception V3 model and a false-color contrast-limited adaptive histogram equalization algorithm, which not only provides high image contrast with low noise but also gets around the 3-channel RGB issue.  The quoted statistics for their CAD program were impressive: 94% accuracy, versus 89% for a radiologist.

Scientific Papers were presented by Matthew Chen, Stanford; Synho Do, Harvard; Curtis Langlotz, Stanford; David Golan, Stanford; Paras Lakhani, Thomas Jefferson; Panagiotis Korfiatis, Mayo Clinic; Zeynettin Akkus, Mayo Clinic; Etka Bullar, U Saskatchewan; Mahmudur Rahman, Morgan State U; Kent Ogden SUNY upstate.

Ronald Summers, MD PhD of the NIH presented work from his lab, in conjunction with Holger Roth, detailing specific CNN approaches to lymph node detection, anatomic level detection, vertebral body segmentation, pancreas segmentation, and colon polyp screening with CT colonography (which had high false positives).  In his experience, deeper models performed better.  His lab also converts unstructured radiology reports into structured reporting through ML techniques.

Abdul Halabi of NVIDIA gave an impressive presentation on the supercomputer-like DGX-1 GPU cluster (5 deliveries to date, the fifth to Mass General – a steal at over $100K) and the new Pascal architecture in the P4 & P40 GPUs: 60X the AlexNet performance of the original 2012 GPU configuration.  Very impressive.

Sayan Pathak of Microsoft Research and the InnerEye team gave a good presentation in which he demonstrated that an RF is really just a 2-layer DNN, i.e. a sparse 2-layer perceptron.  Combining this with a CNN (dNDE.NET), it beat GoogLeNet’s latest version in the ImageNet arms race.  However, as one needs to solve for both structures simultaneously, it is an expensive (long, intense) computation.
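The RF-as-shallow-network observation can be illustrated for the simplest possible case: a depth-1 decision stump. This is my own toy construction, not code from the talk – the point is only that one step-activation hidden unit (the split decision) plus a mixing output layer (the leaves) reproduces the tree exactly:

```python
def stump(x, feature=0, threshold=0.5, left=0, right=1):
    """Depth-1 decision tree: route on one feature, emit a leaf label."""
    return right if x[feature] > threshold else left

def step(z):
    """Hard threshold activation, 1 if z > 0 else 0."""
    return 1 if z > 0 else 0

def stump_as_perceptron(x, feature=0, threshold=0.5, left=0, right=1):
    """The same stump as a sparse 2-layer network: the hidden unit
    fires the routing decision, the output layer mixes the leaf values."""
    h = step(x[feature] - threshold)   # hidden layer: which side of the split?
    return h * right + (1 - h) * left  # output layer: select the leaf

if __name__ == "__main__":
    for x in ([0.2], [0.9], [0.5]):
        assert stump(x) == stump_as_perceptron(x)
    print("stump and 2-layer perceptron agree")
```

Deeper trees and whole forests extend the same trick with more hidden units and a sparse mixing layer, which is why jointly training the forest and a CNN is possible – but, as noted above, expensive.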

Closing points were the following:

  • Most devs are currently using Python – TensorFlow +/- Keras – with fewer using Caffe with models from the Model Zoo.
  • De-identification of data is a problem, even more so when considering longitudinal follow-up.
  • Matching the radiologist’s report may not be as important as matching actual outcomes.
  • There was a lot of interest in organizing a competition to advance medical imaging, c.f. Kaggle.
  • Radiologists aren’t obsolete just yet.

It was a great conference.  An unexpected delight.  Food for your head!




Health Analytics Summit 2016 – Summary


I was shut out last year from Health Catalyst’s Health Analytics Summit in Salt Lake City – there is a fire marshal’s limit of about 1,000 people for the ballroom in the Grand America Hotel, and with vendors last year there were simply not enough slots.  This year I registered early.  At the 2015 HIMSS Big Data and Medicine conference in NYC, the consensus was that this conference had lots of practical insights.

The undercurrents of the conference as I saw them:

  • Increasing realization that in accountable care, social ills impact the bottom line.
  • Most people are still at the descriptive analytics stage, but a few sophisticated players have progressed to predictive.  However, actionable cost improvements are achievable with descriptive reporting.
  • Dashboarding is alive and well.
  • EDW solutions require data governance.
  • Data Scientists & statistical skills remain hard to come by in healthcare & outside of major population centers.

A fascinating keynote talk by Anne Milgram, former NJ attorney general, showed the striking parallels between ER visits/hospitalizations and arrests/incarcerations.  In Camden, NJ, there was a 2/3 overlap between superutilizers of both healthcare and the criminal justice system (CJS).  Noting that CJS data is typically public, she hinted this could potentially be integrated with healthcare data for predictive analytics.  Certainly, from an insurer’s viewpoint, entry into the CJS is associated with higher healthcare/insured costs.  As healthcare systems move more into that role via value-based payments, this may be important data to integrate.

I haven’t listened to Don Berwick MD much – I will admit a “part of the problem” bias given his role as a CMS chief administrator and his estimate that 50% of healthcare is “waste” (see Dr. Torchiana below).  I was floored that Dr. Berwick appeared to be pleading for the soul of medicine – “less stick and carrot”, “we have gone mad with too many (useless) metrics”.  But he did warn that there will be winners and losers in medicine going forward, signalling to me that physicians, particularly specialists, are targeted to be the losers.

David Torchiana MD of Partners Healthcare followed with a nuanced talk reminding us that there is value in medicine – and that much of what we flippantly call waste has occurred alongside a striking reduction in mortality from treated disease over the last 50 years.  It was a soft-spoken counterpoint to Dr. Berwick’s assertions.

Toby Freier and Craig Strauss MD both demonstrated how analytics can significantly improve health while reducing costs, both at the community level and in specialized use cases.  New Ulm Medical Center’s example demonstrated 1) the nimbleness of a smaller entity in evaluating and implementing optimized programs and processes on a community-wide basis, while Minneapolis Heart Institute demonstrated 2) how advanced use of analytics can save money by reducing complications in high-cost situations (e.g. CABG, PTCA, HF) and 3) how analytics can be used to answer clinical questions for which there is no good published data (e.g. survivability of TAVR for 90-year-olds).

Taylor Davis of KLAS Research gave a good overview of analytics solutions and satisfaction with them.  The take-home points were that the large enterprise solutions (Oracle et al.) had lower customer satisfaction than the healthcare-specific vendor solutions (Health Catalyst, Qlik).  Integrated BI solutions provided by the EHR vendor, while they integrated well, were criticized as underpowered and insufficient for more than basic reporting.  However, visual exploration tools (Tableau) were nearly as well received as the dedicated healthcare solutions.  Good intelligence on these offerings.

The conference started off with an “analytics walkabout” where different healthcare systems presented their successes and experiences with analytics projects.  Allina Health was well represented, with multiple smart and actionable projects – I was impressed.  One Allina project, predicting who would benefit from closure devices in the cath lab (near and dear to my heart as an interventional radiologist), met the goals of both providing better care and saving costs by avoiding complications.  There was also an interesting presentation from AMSURG on a project integrating socio-economic data with GI endoscopy – a very appropriate use of analytics for the outpatient world, speaking from some experience.  These are just a few of the 32 excellent presentations in the walkabout.

I’ll blog about the breakout sessions separately.

Full Disclosure: I attended this conference on my own, at my own expense, and I have no financial relationships with any of the people or entities discussed.  Just wanted to make that clear.  I shill for no one.


Value and Risk: the Radiologist’s perspective (Value as risk series #4)

Much can be written about value-based care. I’ll focus on imaging risk management from a radiologist’s perspective. What it looks like from the hospital’s perspective, the insurer’s perspective, and in general has been discussed previously.

When technology was in shorter supply, radiologists were gatekeepers of limited Ultrasound, CT and MRI resources. Need-based radiologist approval was necessary for ‘advanced imaging’. The exams were expensive and needed to be protocoled correctly to maximize utility. This encouraged clinician-radiologist interaction – thus our reputation as “The Doctor’s doctor.”

In the 1990s–2000s, there was an explosion in imaging utilization and installed equipment. Imaging was used to maximize throughput, minimize patient wait times, and decrease length of hospital stays. A more laissez-faire attitude prevailed, and gatekeeping was frowned upon.

With the transition to value-based care, the gatekeeping role of radiology will return. Instead of assigning access to imaging resources on the basis of limited availability, we need to consider ROI (return on investment): is the imaging study likely to improve the outcome relative to its cost? (1) Clinical Decision Support (CDS) tools can help automate assessments of imaging appropriateness and value. (2)
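One way to picture ROI-based gatekeeping is a toy expected-value rule. All of the numbers and names below are hypothetical illustrations, not a clinical tool: approve the study when the probability it changes management, times the value of that change, exceeds the study’s cost.

```python
def imaging_roi(p_changes_management, value_of_change, study_cost):
    """Toy value-based gatekeeping: expected benefit minus study cost."""
    return p_changes_management * value_of_change - study_cost

def approve(p_changes_management, value_of_change, study_cost):
    """Approve the study only when its expected net value is positive."""
    return imaging_roi(p_changes_management, value_of_change, study_cost) > 0

if __name__ == "__main__":
    # Hypothetical: a $1,200 CT with a 30% chance of changing management
    # valued at $10,000 clears the bar ...
    print(approve(0.30, 10_000, 1_200))   # True
    # ... a near-certainly-normal repeat study does not.
    print(approve(0.02, 10_000, 1_200))   # False
```

In practice the probabilities would come from CDS appropriateness criteria or a predictive model, not hand-set constants – but the decision structure is the same.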

The bundle’s economics are those of capitation for a single episode of care under a designated ICD-10 encounter, extending across the inpatient stay and related readmissions up to 30 days after discharge (CMS BPCI Model 4). A review of current Model 4 conditions shows mostly joint replacements, spinal fusion, and our example case of CABG (coronary artery bypass graft).

Post-CABG, a daily chest X-ray (CXR) protocol may be ordered – very reasonable for an intubated and sedated patient. However, an improving, awake, non-intubated patient may not need a daily CXR. A Six Sigma analysis would empirically classify this as waste – and a data analysis of outcomes may confirm it.

Imaging-wise, patients need a CXR preoperatively and periodically thereafter. A certain percentage of patients will develop complications that require at least one CT scan of the chest. Readmissions will also require re-imaging, usually CT. There will also be additional imaging due to complications or even incidental findings, if not contractually excluded (CT/CTA/MRI brain, CT/CTA neck, CT/CTA/US/MRI abdomen, thoracic/lumbar spine CT/MRI, fluoroscopy for diaphragmatic paralysis or feeding tube placement, etc.). All of this needs to be accounted for.



In the fee-for-service world, the ordered study is performed and billed.  In bundled care, payments for the episode of care are distributed to stakeholders according to a pre-defined allocation.

Practically, one needs to evaluate retrospectively, over a multi-year period, how many and what type of imaging studies were performed in patients with the bundled procedure code. (3) It is helpful to have sufficient statistical power for the analysis and to note trends in both the number of studies and reimbursement. Breaking down the total spend into professional and technical components is also useful for understanding all stakeholders’ viewpoints. Evaluate both the number of studies performed and the charges, which translate into dollars when multiplied by your practice’s reimbursement percentage. Forward-thinking members of the radiology community at the Nieman HPI are providing DRG-related tools such as ICE-T to help estimate these costs (used in the above image). Ultimately one ends up with a formula similar to this:

CABG imaging spend = CXRs + CT chest + CTA chest + other imaging studies

Where money will be lost is at the margins: patients who need multiple imaging studies, either due to complications or incidental findings. With a 2–3% death rate for CABG, and recognizing that 30% of all Medicare expenditures are incurred by the 5% of beneficiaries who die, with a third of that cost in the last month of life (Barnato et al.), this must be accounted for. An overly simplistic evaluation of the imaging needs of CABG will result in an underallocation of funds to the radiologist, with per-study payment dropping – the old trap of running faster to stay in place.
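A back-of-the-envelope sketch of that retrospective analysis, with entirely hypothetical utilization and reimbursement figures, shows how the formula works and why ignoring the complicated tail underallocates the radiology portion of the bundle:

```python
def expected_imaging_spend(utilization, reimbursement):
    """Expected per-patient imaging spend for a bundle: sum over study
    types of (mean studies per episode x payment per study)."""
    return sum(utilization[s] * reimbursement[s] for s in utilization)

if __name__ == "__main__":
    # Hypothetical historical utilization for a CABG episode
    # (mean studies per patient) and per-study reimbursement (dollars).
    utilization = {"CXR": 6.0, "CT chest": 0.4, "CTA chest": 0.1, "other": 0.3}
    reimbursement = {"CXR": 25.0, "CT chest": 180.0, "CTA chest": 300.0, "other": 200.0}
    base = expected_imaging_spend(utilization, reimbursement)
    print(round(base, 2))   # 312.0

    # Tail adjustment: the few complicated patients consume a
    # disproportionate share of imaging, so a complication-weighted
    # blend avoids pricing the bundle off the uncomplicated average.
    p_complicated = 0.05            # hypothetical complication rate
    complication_multiplier = 4.0   # hypothetical imaging-use multiplier
    blended = base * (1 - p_complicated) + base * complication_multiplier * p_complicated
    print(round(blended, 2))        # noticeably above the naive average
```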

Payment to the radiologist could follow one of two models:

First, a fixed payment per RVU. Advantageous to the radiologist, it insulates against risk-sharing: ordered studies are read for a negotiated rate, and the hospital bears the cost of excess imaging. For a radiologist in an independent private practice providing services through an exclusive contract, allowing the hospital to assume the risk on the bundle may be best.

Second, a fixed (capitated) payment per bundled patient for imaging services may be made to the radiologist, either as a fixed dollar amount or as a fixed percentage of the bundle. (Frameworks for Radiology Practice Participation, Nieman HPI) This puts the radiologist at risk, in a potentially harmful way. The disconnect is that the supervising physicians (cardiothoracic surgeon, intensivist, hospitalist) will be focused on improving outcomes, decreasing length of stay, or reducing readmission rates – not imaging volume. Ordering imaging studies (particularly advanced imaging) may aid diagnostic certitude and fulfill their goals. This has the unpleasant consequence that the radiologist’s per-study income decreases even though the radiologist has no control over the ordering of studies – and, in fact, it may benefit other parties to overuse imaging to meet other quality metrics. The radiology practice manager should proceed with caution if the radiologists are in an employed model but the CT surgeons & intensivists are not. Building in periodic reviews of expected vs. actual imaging use, with potential re-allocations of the bundle’s payment, might help curb over-ordering. Interestingly, in this model the radiologist profits by doing less!
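The two payment models can be compared numerically. With purely hypothetical figures (RVUs per study, rates, and the capitated allocation are all invented for illustration), the sketch below shows how per-RVU income tracks ordering volume while, under capitation, the effective payment per study falls as volume rises:

```python
def per_rvu_income(n_studies, rvu_per_study, rate_per_rvu):
    """Model 1: fixed payment per RVU -- income scales with volume."""
    return n_studies * rvu_per_study * rate_per_rvu

def capitated_income(n_patients, payment_per_patient):
    """Model 2: fixed payment per bundled patient -- flat, regardless
    of how many studies are ordered."""
    return n_patients * payment_per_patient

if __name__ == "__main__":
    # Hypothetical bundle: 100 CABG patients, $150 imaging allocation each.
    patients, cap = 100, 150.0
    for studies_per_patient in (7, 10, 14):   # ordering pressure rises
        n = patients * studies_per_patient
        fee = per_rvu_income(n, 0.3, 40.0)    # hypothetical RVU arithmetic
        flat = capitated_income(patients, cap)
        print(studies_per_patient, fee, flat, round(flat / n, 2))
```

Under the capitated column, the effective payment per study falls from roughly $21 to roughly $11 as ordering doubles – the “running faster to stay in place” trap described above.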

Where the radiologist can add value is in analysis – deferring imaging unlikely to impact care. Reviewing data and creating predictive analytics designed to predict outcomes adds value while, if correctly designed, avoiding more than the standard baseline of risk (see the Johns Hopkins sepsis prediction model). In patients unlikely to have poor outcomes, additional imaging requests can be gently denied and clinicians reassured – i.e. “This patient has a 98% chance of being discharged without readmission. Why a lumbar spine MRI?” (cf. AK Moriarty et al.) Or, “In this model, patients with these parameters only need a CXR every third day. Let’s implement this protocol.” The radiologist returns to a gatekeeping role, creating value by managing risk intelligently.

Let’s return to our risk/reward matrix:



For the radiologist in the bundled example receiving fixed payments:

  • Low Risk/Low Reward: daily CXRs for the bundled patients.
  • High Risk/Low Reward: excess advanced imaging (more work for no change in pay).
  • High Risk/High Reward: arbitrarily denying advanced imaging without a data-driven model (bad outcomes = loss of job, lawsuit risk).
  • Low Risk/High Reward: analysis & predictive modeling to protocol which studies can be omitted in which patients without compromising care.


I, and others, believe that bundled payments have been put in place not only to decrease healthcare costs, but to facilitate the transition from the old FFS system to the value-based ‘at-risk’ payment system, and ultimately to capitated care. (RAND Corp, Technical Report TR-562/20) By developing analytics capabilities, radiology providers will be able to adapt to these new ‘at-risk’ payment models and drive adjustments to care delivery that improve or maintain the community standard of care at the same or lower cost.

  1. B Ingraham, K Miller et al. J Am Coll Radiol 2016, in press.
  2. AK Moriarty, C Klochko et al. J Am Coll Radiol 2015;12:358–363.
  3. D Seidenwurm, FJ Lexa. J Am Coll Radiol 2016, in press.

Where does risk create value for a hospital? (Value as Risk series post #3)

Let’s turn to the hospital side.

For the post where I develop the concept of value as risk management, go here first; for the discussion of the value in risk management from an insurer’s perspective, click here second.

The hospital is an anxious place – old, fat fee-for-service margins are shrinking, and major rule-set changes keep coming. Managing revenue cycles requires committing staff resources (overhead) to compliance-related functions, further shrinking margins. More importantly, that resource commitment postpones other potential initiatives. Maintaining compliance with Meaningful Use (MU) 3 cum MACRA, PQRS, ICD-10 (11?) and other mandated initiatives while dealing with ongoing reviews…

How a health insurer uses risk to define value (Value as risk series)

Let’s continue with value as risk. If you missed it, here’s the first post.

Providers assert that insurers hold most if not all the cards, collecting premiums and denying payment while holding large datasets of care patterns. I’ve heard, “if only we had access to that data, we could compete on a level playing field.”

I am neither an apologist for nor an insider in the insurance industry, but is this a “grass is always greener” problem? True, the insurer has detailed risk analysis on the patient & provider. Yes, the insurer does get to see what all providers are charging and coding in their coverage. And the insurer can deny or delay payment knowing that a certain percentage of these claims will not be re-submitted.

But the insurer also has deep institutional knowledge of risk-rating its clients. Consider the history of health insurance in the US: advancing medical knowledge raised treatment costs, and when medical cost inflation exceeded the CPI, insurers modeled and predicted estimated spend with hard data. Individuals with medical conditions that would ultimately cost more than the premiums received failed medical underwriting. Insurers are private, for-profit businesses, and will not willingly operate at a loss.

To optimize profitability, insurers collected data from not only the insurance application, but also claims data, demographic data from consumer data brokers, financial data, information from other insurers (auto, home, life), and probably now Internet data (Facebook, etc…) to risk-rate the insured. Were they engaged in a risky lifestyle? Searching the net for serious genetic diseases?

Interestingly, the ACA changed this, permitting only 1) age, 2) smoking, and 3) geographic location as pricing factors in the marketplace products. The marketplace products have been controversial, with buyers complaining of networks so narrow as to be unusable, and insurers complaining of a lack of profitability, which has caused some to leave the market. Because the marketplace pools must take all comers, and many who entered the pools had not previously had insurance, there is some skew towards high-cost, sicker patients.

Consider a fictional medium-sized regional health insurer in three southern states specializing in group (employer) insurance – Southern Health. They are testing an ACA marketplace product. The geographic area they serve has a few academic medical centers, many community hospitals competing with each other, and only a few rural hospitals. In the past, they could play the providers off one another and negotiate aggressively, even sometimes paying lower rates than Medicare.

However, one provider – a fictional two-hospital system, Sun Memorial – hired a savvy CEO who leveraged reputation to develop profitable cardiac and oncology service lines. Over the last 5 years, the two-hospital group has merged and acquired its way to a 7-hospital system, with 4 more mergers in late-stage negotiations. The hospital system moved its physicians to an employed model and then, at the next contract renewal, demanded above-Medicare rates. Southern Health therefore did not renew its contract with Sun Memorial. In the past, such maneuvers ended conflicts quickly as the hospital suffered cash-flow losses. Now, with fewer local alternatives to Sun Memorial, patients complained furiously to both Southern Health and their employers’ HR departments that their insurance would not cover their bills. The local businesses purchasing benefits through Southern Health pushed back on the insurer, threatening not to renew! The contract was eventually resolved at Medicare rates, with retroactive coverage.

The marketplace product is purchased mostly by the rural poor, operating on balance neutral to a slight loss. As Southern Health’s CEO, you have received word that your largest customer, a university, has approached Sun Memorial about creating a capitated product – cutting you out entirely. The CEO of Sun Memorial has also contacted you about starting an ACO together.

Recall the risk matrix:



  • Low Risk/Low Return: who cares?
  • High Risk/Low Return: cancelling provider contracts as a negotiating ploy.
  • High Risk/High Return: entering into an ACO with Sun Memorial. Doing so shares your data with them & teaches them how to do analytics. This may negatively impact future negotiations and might even help them structure the capitated contract correctly.
  • Low Risk/High Return: pursue lobbying and legal action at the state/federal level to prevent further expansion of Sun Memorial. Maintain existing group business. Withdraw from the unprofitable ACA marketplace business.

As CEO of Southern Health, you ultimately decide to hinder the chain’s acquisition strategy. You also withdraw from the marketplace, though you may reintroduce the product later. Finally, you do decide to start an ACO – but with Sun Memorial’s primary competitor. You will give them analytic support, as they are weak in analytics, thereby maintaining your competitive advantage.

From the insurer’s perspective, the low-risk, high-return move is to continue business as usual (late-stage, mature company) and maintain margins in perpetuity. Adding new products is a high-risk, high-reward ‘blue ocean’ strategy that can lead to a new business line and either profit augmentation or revitalization of the business. In this instance, however, the unprofitable marketplace product should be discontinued.


For the insurer, value is achieved by understanding, controlling, and minimizing risk.


Next, we’ll discuss things from the hospital system’s CEO perspective.


Defining value in healthcare through risk


For a new definition of value, it’s helpful to go back to the conceptual basis of payment for medical professional services under the RBRVS. Payment for physician services is divided into three components: Physician work, practice expense, and a risk component.

Replace physician with provider, and then extrapolate to larger entities.
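For concreteness, the fee-schedule arithmetic behind those three components can be sketched in a few lines. This follows the general shape of the Medicare Physician Fee Schedule formula (each component's RVUs are geographically adjusted, summed, and multiplied by a conversion factor); the RVU, GPCI, and conversion-factor numbers below are illustrative placeholders, not actual CMS values:

```python
# Sketch of the RBRVS payment formula used by the Medicare Physician Fee Schedule:
# Payment = (work RVU * work GPCI + PE RVU * PE GPCI + MP RVU * MP GPCI) * conversion factor
# All numeric inputs below are illustrative placeholders, not actual CMS figures.

def rbrvs_payment(work_rvu, pe_rvu, mp_rvu,
                  work_gpci=1.0, pe_gpci=1.0, mp_gpci=1.0,
                  conversion_factor=36.00):
    """Payment for one service: geographically adjusted RVUs times the conversion factor."""
    total_rvu = (work_rvu * work_gpci
                 + pe_rvu * pe_gpci
                 + mp_rvu * mp_gpci)
    return total_rvu * conversion_factor

# A hypothetical office visit: work and practice-expense RVUs dominate,
# the malpractice/risk component is a small slice.
print(round(rbrvs_payment(work_rvu=0.97, pe_rvu=1.0, mp_rvu=0.07), 2))
```

The point of the extrapolation in the text is that, for larger entities, the third term stops being a fixed malpractice load and becomes the variable worth managing.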

Currently, payer (insurer, CMS, etc.) and best-practice (specialty societies, associations like HFMA, ancillary staff associations) guidelines exist. These have reduced some variation among providers, and there is active interest in continuing in this direction. For example, a level 1 E&M visit clearly differs from a level 5 – one might disagree whether a visit is a level 3 or 4, but you shouldn't see a level 1 upcoded to a 5. Physician work is generally quantifiable in patients seen or procedures done, and in any corporate/employed practice most physicians will work toward the productivity level they have contractually agreed to, or their contracts will be renegotiated or terminated. Let's hope they are fairly compensated for their efforts and not subjected solely to RVU production targets, which are falling out of favor versus more sophisticated models (c.f. Craig Pedersen, Insight Health Partners).

Unless there is mismanagement in this category, provider work is usually controllable, measurable, and – allowing for variation in provider skill, age, and practice goals – consistent. For physicians who have been vertically integrated, current EHR burdens and compliance directives may cap productivity.

Practice expenses represent the fixed and variable expenses of healthcare – rent, taxes, facility maintenance, and consumables (medical supplies, pharmaceuticals, and medical devices). Most are fairly straightforward from an accounting standpoint. Supplies, pharmaceuticals, and devices are expenses that need active management, with room for opportunity; ACO and super-ACO/CIO organizations and purchasing consortiums such as Novation, Amerinet, and Premier have been formed to help manage these costs.

Practice expense costs are identifiable and, once identified, controllable. Six sigma management tools work well here initially. For all but the most peripheral organizations, this has happened or is happening, and there are no magic bullets left beyond continued monitoring of systems and processes as they evolve, since drift and ripple effects may impact previously optimized areas.

This leaves the last variable – risk. Risk was once thought of as a proxy for malpractice/legal costs. However, in the new world of variable payments, this category carries not only downside risk but also the pleasant possibility of upside risk.

It stands to reason that if your provider costs are reasonably fixed, and practice expenses are as fixed as you can get them at the moment, you should look to the risk category as an opportunity for profit.

As a Wall St. options trader, the variable that really mattered to me in a derivative's price was the option's volatility – the measure of its inherent risk. We profited by selling options (effectively, insurance) when that implied volatility was higher than the actual market volatility, or buying them when it was too low. Why can't we do the same in healthcare?
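That comparison – realized (historical) volatility versus the volatility implied by option prices – is simple to compute. A minimal sketch, annualizing daily log returns by √252 trading days; the price series and the implied-volatility figure are made up for illustration:

```python
import math
import statistics

def realized_vol(prices, trading_days=252):
    """Annualized realized volatility from a series of daily closing prices."""
    log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return statistics.stdev(log_returns) * math.sqrt(trading_days)

# Hypothetical daily closes
prices = [100, 101, 99.5, 100.2, 102, 101.5, 103, 102.4, 104, 103.1]
rv = realized_vol(prices)

implied_vol = 0.35  # made-up figure, as if backed out of option prices
if implied_vol > rv:
    print(f"implied {implied_vol:.0%} > realized {rv:.0%}: options look rich (sell premium)")
else:
    print(f"implied {implied_vol:.0%} <= realized {rv:.0%}: options look cheap (buy premium)")
```

The trade in the text amounts to acting on the gap between those two numbers.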

What is value in this context? The profit or loss arising from the assumption and management of risk. Therefore, the management of risk in a value-based care setting allows for the possibility of a disproportionate financial return.


The sweet spot is Low Risk/High Return. This is where discovering a fundamental mispricing can return disproportionately vs. exposure to risk.

Apply this risk matrix to:

  • 1 – A medium-sized insurer struggling with hospital mergers and with large employers bypassing the insurer to contract directly with hospitals.
  • 2 – A larger integrated hospital system with at-risk payments/ACO model, employed physicians, and local competitors, struggling to provide good care in a low-margin environment.
  • 3 – A group radiology practice which contracts with a hospital system and a few outpatient providers.

& things get interesting. On to the next post!

Some reflections on the ongoing shift from volume to value

As an intuitive and inductive thinker, I often use facts to prove or disprove my biases. This may make me a poor researcher, though I suspect I would have been popular in academic circles circa 1200. Serendipity plays a role – yes, I'm a big Nassim Taleb fan – and sometimes in the seeking, unexpected answers appear. Luckily, I'm correct more often than not. But honestly, in predicting widely you also miss widely.

One of my early mentors from Wall St. addressed this with me in the infancy of my career: take Babe Ruth's batting average of .342. That means that in roughly two of every three at-bats, Babe Ruth failed to get a hit. But he was swinging for home runs. There is a big difference between being a base-hit player and a home-run hitter. What stakes are you playing for?

With that said, this blog is for exploring topics I find of interest, pertaining mostly to healthcare and technology. The blog has been less active lately, not only due to my own busy personal life (!) but also because I have sought more up-to-date information on trends in both the healthcare payment sector and the IT/tech sector as it applies to medicine. I'm also diving deeper into radiology and imaging. Having come through my data-science growth phase, I'll probably blog less on that topic except as it pertains to machine learning.

The volume-to-value transition is ongoing, as many providers are beginning to be subject to at least some degree of 'at-risk' payment. The stages of 'at-risk' payment have been well characterized – this slide by Jacque Sokolov MD at SSB Solutions is representative:

Sokolov – SSB Solutions slide 1

In 2015, approximately 20% of Medicare spend was value-based, with CMS's goal being 50% by 2020. Currently providers are 'testing the waters,' with fewer than 20% of providers accepting more than 40% risk-based payments (c.f. Kimberly White MBA, Numerof & Associates). The more successful of these will likely be larger, more data-rich, data-utilizing providers.

However, all is not well in the value-based-payment world. This year, UnitedHealthcare announced it is pulling its insurance products out of most of the ACA exchange marketplaces. While UHC products were a small share of the exchanges, it sends a powerful message when a major insurer declines to participate. Recall that most ACOs (~75%) did not produce cost savings in 2014, although more recent data is more encouraging (c.f. Sokolov). Notably, of the 32 Pioneer ACOs that started, only 9 (~28%) remain (ref. CMS). The road to value is not a certain path at all.

So, with these things in mind, how do we navigate these waters? Specifically, as radiologists, how do we manage the shift from volume to value, and what does it mean for us? How is value defined for radiology? And what is it not? Value is NOT what most people think it is. I define value as the cost savings arising from the assumption and management of risk. We'll explore this in my next post.

Memory requirements for Convolutional Neural Network analysis of brain MRI.

I'm auditing the wonderful Stanford CS231n class on Convolutional Neural Networks in Computer Vision.

A discussion the other day concerned the amount of memory required to analyze one image as it passes through a Convolutional Neural Network (CNN). This was interesting – how practical is application to radiology imaging? (To review some related concepts, see my earlier post: What Big Data Visualization Analytics can learn from Radiology)

Take a standard non-contrast MRI of the brain. There are 5 sequences (T1, T2, FLAIR, DWI, ADC), all axial for the purposes of this analysis. Assume a 320×320 matrix for each slice. One image is then a 320×320×5 volume – at one byte per greyscale value, a 512,000-byte input vector. Applying this to VGGNet configuration D (1) yields the following:


In each image there are 320×320 pixels, each holding a greyscale value, across 5 sequences. Each axial slice's input takes 512KB; the first convolutional layers hold the most memory at 6.4MB each, and summing across all layers gives 30.5MB. Remember to double that for the forward/backward pass through the network, giving 61MB per image. Finally, slices do not exist in a void: a head MRI comprises about 15 axial slices, for a total memory requirement of 916.5MB – about a gigabyte.
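The arithmetic above is easy to reproduce. Below is a rough sketch, assuming one byte per activation value and the 13 convolutional layers of VGG configuration D; exactly which intermediate volumes you count (pooling outputs, fully connected layers) shifts the totals slightly, so the numbers land near, not exactly on, the figures quoted:

```python
# Back-of-envelope activation memory for VGGNet configuration D on a
# 320x320x5 brain MRI input, counting one byte per activation value.
# Conv blocks of 64/128/256/512/512 filters, with 2x2 pooling between blocks.
channels = [64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512]
pool_after = {1, 3, 6, 9, 12}  # pool after these conv-layer indices

side, in_ch = 320, 5
sizes = [side * side * in_ch]           # input volume: 512,000 values
for i, ch in enumerate(channels):
    sizes.append(side * side * ch)      # conv layer keeps spatial size
    if i in pool_after:
        side //= 2
        sizes.append(side * side * ch)  # pooled activation
sizes += [4096, 4096, 1000]             # fully connected layers

per_image = sum(sizes)   # bytes of activations for one forward pass
fwd_bwd = 2 * per_image  # double for the forward/backward pass
study = 15 * fwd_bwd     # ~15 axial slices per head MRI
print(f"input: {sizes[0]/1e3:.0f} KB")
print(f"one slice, fwd+bwd: {fwd_bwd/1e6:.1f} MB")
print(f"whole study: {study/1e6:.0f} MB")
```

Note this counts activations only; a real training run also needs memory for the weights, gradients, and optimizer state, which this sketch ignores.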

Of course, that’s just for feeding an image through the algorithm.

This is simplistic because:

  1. VGG is not going to get you nearly enough accuracy for diagnosis! (50% accurate, I’m guessing)
  2. The MRI data is only put into slices for people to interpret – the data itself exists in K-space. What that would do to machine learning interpretation is another discussion.
  3. We haven’t even discussed speed of training the network.
  4. This is for older MRI protocols. Newer MRIs have larger matrices (512×512) and thinner slices (3mm) available, which will increase the necessary memory to approximately 4GB.

Nevertheless, it is interesting to note that the amount of memory required to train a neural network on brain MRIs is within reach of a home enthusiast.

(1) Karen Simonyan & Andrew Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," ICLR 2015.

The coming computer vision revolution

A neural network with 3 hidden layers (7, 5, and 3 neurons), created in R using the neuralnet package.


Nothing of him that doth fade
But doth suffer a sea-change
Into something rich and strange.

– Shakespeare, The Tempest 1.2.396-401

I’m halfway through auditing Stanford’s CS231n course – Convolutional Neural Networks for Visual Recognition.

Wow. Just Wow. There is a sea-changing paradigm shift that is happening NOW –  we probably have not fully realized it yet.

We are all tangentially aware of CV applications in our daily lives – Facebook’s ability to find us in photos, optical character recognition (OCR) of our address on postal mail, that sort of thing. But these algorithms were rule-based expert systems grounded in supervised learning methods. Applications were largely one-off for a specific, single task. They were expensive, complicated, and somewhat error prone.

So what changed? First, a little history. In the early 1980s I had a good friend obtaining an MS in comp sci, all atwitter about "neural networks." Back then they went nowhere: too much processing, memory, and storage required; too difficult to tune; computationally slow. Fail.


1999 – Feature-based models, beginning with SIFT and culminating in SVM-based (support vector machine) deformable parts models. The best were only ~74% accurate.

2006 – Restricted Boltzmann Machines enable layer-wise pretraining and backpropagation, making deep neural networks trainable.

2012 – AlexNet: deep learning applied to the ImageNet classification competition achieves nearly a 2X reduction in error over earlier SVM methods.

2015 – ResNet: a deep learning system achieves a 4.5X reduction in error compared to AlexNet and an 8X reduction compared to the old SVM models.

In practical terms, what does this mean? On a dataset with 1,000 different classes (ImageNet), ResNet identifies the single correct item (top-1) about 80% of the time, and includes the correct item among its five most probable guesses (top-5) 96.4% of the time. Humans are typically credited with about 95% top-5 accuracy. It's clear the computer is not far off.
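The "top-5" figure rests on a simple metric: a prediction counts as correct if the true label appears anywhere among the model's five highest-scored classes. A minimal sketch with made-up class scores:

```python
def top_k_correct(scores, true_label, k=5):
    """True if true_label is among the k highest-scoring class labels."""
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    return true_label in top_k

# Made-up scores for one image; the class names are illustrative only
scores = {"cat": 0.40, "dog": 0.25, "fox": 0.15, "wolf": 0.10,
          "lynx": 0.06, "bear": 0.04}
print(top_k_correct(scores, "dog", k=1))  # False: the single top guess is "cat"
print(top_k_correct(scores, "dog", k=5))  # True: "dog" is within the top five
```

Averaging this over a test set gives the top-1 and top-5 accuracies quoted above.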

2012 was the watershed year, with the first application (and win) of a CNN on the dataset; the improvement was significant enough to spark further refinement and development. That is still going on – the ResNet result was released just in December 2015! This is clearly an area of active research, with further improvements expected.

The convolutional neural network is a game-changer and will likely approach, and perhaps exceed, human accuracy in computer vision and classification in the near future. That's a big deal. As this is a medical blog, the applications to healthcare are obvious – radiology, pathology, dermatology, and ophthalmology for starters. But the CNN may also be useful for the complicated process problems I've developed here on the blog – the flows themselves naturally resemble networks, so why not model them as such? Why is it a game-changer? Because the model is probably universally adaptable to visual classification problems and, once trained, potentially cheap.


I'll write more on this in the coming weeks – I've been inching toward deep learning models (but lagging in blogging about them), and there is no reason to wait any longer. The era of the deep learning neural network is here.