Do we need more medical imaging?

 

Fan art of the starship Enterprise, processed with Deep Dream

The original captain of the starship Enterprise, James T. Kirk, addressed his ship with the invocation “Computer, …”.  For an audience in the late 1960s it was an imagined miracle hundreds of years in the future.  In the late 1990s, MIT’s Laboratory for Computer Science was dreaming of Project Oxygen: an ever-present, voice-activated computer that could be spoken to and would give appropriate responses.

 

“Hi, Siri” circa 2011
“Hello Alexa” circa 2016

Cloud computing, plentiful memory, on-demand massive storage and GPU-powered deep learning brought this future into our present.  Most of us already have the appliance (a smartphone) capable of connecting us to scalable cloud computing resources. Comparing current reality to the 1960’s expectations, this advancing world of ubiquitous computing is small, cheap, and readily available.

But imaging is not.  The current paradigm holds imaging as a rare, special, and expensive medical procedure.  In the days of silver-film radiology, with tomographic imaging and cut-film feeders for interventional procedures, it was a scarce resource.  In the first days of CT and MRI, requests for anything more complicated than an x-ray needed to pass through a radiologist.  These machines, and the skills necessary to operate them, were expensive and in short supply.

But is it still?  In a 2017 ER visit – the point of access to health care for more than 50% of patients – if your symptoms are severe enough, it is almost a certainty that you will receive imaging early in the encounter.  Belly pain?  CT that.  Worst headache of your life?  CT again.  Numbness on one side of your body?  Diffusion-weighted MRI.  And it is ordered on a protocol that circumvents Radiology approval – why waste time in the era of 24/7 imaging, with final interpretations available in under an hour?

I’ve written briefly about how a change to value-based care will upend traditional fee-for-service (FFS) delivery patterns.  But with that change from FFS, and from volume to value, should we think about Radiology and other diagnostic services differently?  Perhaps medical imaging should not be rationed, but be readily and immediately available – an equal to the history and physical.

I call this concept Ubiquitous Imaging©, or Ubiquitous Radiology.  Ubiquitous Imaging is the idea that imaging is so necessary for the diagnosis and management of disease that it should be an integral part of every diagnostic workup, and should guide every treatment plan where it is of benefit.  “A scan for every patient, when it would benefit the patient.”

This is an aggressive statement.  We’re not ready for it just yet.  But let me explain why Ubiquitous Imaging is not so far off.

  1.  Imaging is no longer a limited good in the developed world
  2.  Artificial intelligence will increase imaging productivity, similar to PACS
  3.  Concerns about radiation dose will be salved by improvements in technology
  4.  Radiomics will greatly increase the value of imaging
  5.  Contrast use may be markedly decreased by an algorithm
  6.  Imaging will change from a cost center to an accepted part of preventative care in a value-based world.
  7. Physicians may shift from the current subspecialty paradigm to a Diagnosis-Acute Treatment-Chronic Care Management paradigm to better align with value-based care.

Each of these points may sound like science fiction.  But the groundwork for each of these is being laid now:

In the US in 2017, there are 5,564 hospitals registered with the AHA.  Each of these will have some inpatient radiology services.  As of 2007, there were at least 10,335 CT scanners operating in the US, and 7,810 MRI scanners.  Using OECD library data from 2015, with 41 CTs and 39 MRIs per million inhabitants and a total US census of 320,000,000, we can calculate the number of US CT and MRI scanners in 2015 to be 13,120 and 12,480 respectively.

If proper procedures are followed, with appropriate staffing and a lean/Six Sigma approach to scanning, it is conceivable that a modern multislice CT could scan one patient every ten minutes (possibly better) and run almost 24/7 (with downtime for maintenance and QA).  Thus, one CT scanner could image 144 patients daily.  144 scans/day x 365 days/year x 13,120 CT scanners = 689,587,200 potential scans yearly – two scans a year for every US resident!

MRI is more problematic because physics dictates the length of scans.  Sequence lengths are governed by T1 and T2 relaxation times, which are fixed in milliseconds by tissue properties, so making scans faster runs up against the laws of physics.  While there are some ‘shortcuts’, we pay for those with T2* effects and decreased resolution.  Stronger magnets and gradients help, but at higher cost and a risk of energy transfer to the patient.  So at optimal efficiency and staffing, the best you could probably get is 22 studies daily (a very aggressive number).  22 MRI studies/day x 365 days/year x 12,480 MRI scanners = 100,214,400 studies yearly – enough to scan one third of the US population annually.  (Recent discussions at RSNA 2017 suggest MRI scans might eventually be shortened to the length of a CT.)
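A minimal sketch putting both back-of-envelope capacity calculations in one place; the scanner densities and throughput figures are simply the numbers quoted above, not new data:

```python
# Back-of-envelope US CT/MRI capacity, using the figures quoted in the text.
US_POPULATION = 320_000_000

ct_per_million, mri_per_million = 41, 39
ct_scanners = ct_per_million * US_POPULATION // 1_000_000      # 13,120
mri_scanners = mri_per_million * US_POPULATION // 1_000_000    # 12,480

ct_scans_per_day, mri_scans_per_day = 144, 22                  # near-24/7 operation
ct_capacity = ct_scans_per_day * 365 * ct_scanners             # 689,587,200
mri_capacity = mri_scans_per_day * 365 * mri_scanners          # 100,214,400

print(f"CT capacity:  {ct_capacity:,} scans/year "
      f"({ct_capacity / US_POPULATION:.1f} per resident)")
print(f"MRI capacity: {mri_capacity:,} scans/year "
      f"({mri_capacity / US_POPULATION:.2f} per resident)")
```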

Think about this.  We could CT scan every US citizen twice in a one-year period, and we continue to think about imaging as a scarce resource.  One in three US citizens could be scanned with MRI annually.  Imaging is not scarce in the developed world.

X-ray is the most commonly performed imaging procedure and, including mammography and fluoroscopy, accounts for up to 50% of radiology studies.  CT, MR, US, and nuclear medicine studies make up the other 50%.  Backing that out from the numbers above suggests capacity on the order of 2.256 billion possible studies a year.

We’ve done the studies – how will we interpret them?  A physician (MD) examines and interprets every study, delivering a report.  There are about 30,656 radiologists in the USA (2012 AMA physician masterfile).  Nieman HPI suggests that estimate may be low, and gives an upper range of 37,399 radiologists.

A busy radiologist on a PACS system could interpret 30,000 studies a year.  30,656 x 30,000 = 919,680,000 potentially interpretable studies from our workforce.  Use the high estimate and that capacity rises to 1.12 billion.  That is a large shortfall against the 2.256 billion possible studies.  However, it is suggested that about 50% of studies, usually X-ray and ultrasound, are performed and interpreted by non-radiologists.  So, that gets us back to about 1.12 billion studies for radiologists to read.
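The same arithmetic as a sketch, using the radiologist counts and reading rate stated above:

```python
# Interpretation capacity vs. the estimated study volume quoted in the text.
radiologists_low, radiologists_high = 30_656, 37_399
studies_per_radiologist_per_year = 30_000       # busy radiologist on PACS

capacity_low = radiologists_low * studies_per_radiologist_per_year    # 919,680,000
capacity_high = radiologists_high * studies_per_radiologist_per_year  # ~1.12 billion

total_possible_studies = 2_256_000_000
needs_radiologist = total_possible_studies * 0.5  # ~50% read by non-radiologists

print(f"Radiologist capacity: {capacity_low:,} to {capacity_high:,} studies/year")
print(f"Studies needing a radiologist: {needs_radiologist:,.0f}")
```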

Recall that Radiologists did not always interpret studies on computer monitors (PACS).  Prior to PACS, a busy radiologist would read 18,000 studies a year.  Radiologists experienced a jump in productivity when we went from interpreting studies based on film to interpreting studies on PACS systems.

Artificial intelligence algorithms are appearing in Radiology at a rapid pace.  While it is early in the development of these products, there is no question in the minds of most informed radiologists that computer algorithms will be part of radiology.  And because AI solutions in radiology will not be reimbursed additionally, cost justification needs to come from productivity.  An AI algorithm in Radiology needs to justify its price by making the radiologist more efficient, so that its cost is borne by economies of scale.

Now imagine that AI algorithms develop accuracy similar to a radiologist’s.  Able to ‘trust’ the algorithms and thereby streamline their daily work processes, radiologists would no longer be limited to interpreting 30,000 studies a year.  Perhaps that number rises to 45,000.  Or 60,000.  I can’t in good conscience consider a higher number.  The speed of AI introduction, if rapid and widespread, may cause some capacity issues, but the aging population, retiring radiologists, well-informed medical students responding to the “invisible hand”, and perpetual trends toward increasing demand for imaging services will form a new equilibrium.  Ryan Avent of The Economist (whose book The Wealth of Humans is wonderful reading) has a more resigned opinion, however.

One of the additional functions of radiologists is to manage the potentially harmful effects of the ionizing radiation dose used in X-ray-based imaging.  We know that high levels of ionizing radiation cause cancer.  Whether lower levels of radiation cause cancer is controversial.  However, it is likely that some (low) percentage of cancers is actually caused by medical imaging.  To combat this, we have used the ALARA (as low as reasonably achievable) paradigm in medical imaging and, in recent years, the Image Gently campaign to address concerns about the higher doses received in advanced imaging.

Recently, James Brink MD of the American College of Radiology (ACR) testified to the US Congress about the need for contemporary research on the effects of the radiation doses encountered in medical imaging.  Without getting too far into the physics of imaging, more dose usually yields crisper, “prettier” images at higher resolution.

But what if there were another way?  Traditionally, radiologists have relied upon equipment makers to improve hardware and extract better signal-to-noise ratios, which allow for a lower radiation dose.  But in a cost-conscious era, it is difficult to argue for expensive new technologies if there is no reimbursement advantage.

However, an interesting pilot study used an AI technique on CT scans to ‘de-noise’ the images, improving their appearance.  (The noise was added artificially after the scan, rather than being present at the time of imaging.)  A number of papers at NIPS 2017 dealt with super-resolution.  Could similar technologies exist for imaging?  Paras Lakhani seems to think so.
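To make the add-noise-then-denoise idea concrete, here is a minimal sketch of training a tiny convolutional denoiser on synthetic “phantom” images; this is an illustration of the general technique, not the pilot study’s actual model, and the data are made up:

```python
import torch
import torch.nn as nn

def make_phantoms(n, size=64):
    """Synthetic stand-ins for CT slices: random bright ellipses on a dark background."""
    ys = torch.arange(size).view(-1, 1).float()
    xs = torch.arange(size).view(1, -1).float()
    imgs = torch.zeros(n, 1, size, size)
    for i in range(n):
        cy, cx = torch.randint(16, 48, (2,)).float()
        ry, rx = torch.randint(6, 16, (2,)).float()
        mask = ((ys - cy) / ry) ** 2 + ((xs - cx) / rx) ** 2 <= 1.0
        imgs[i, 0][mask] = 1.0
    return imgs

denoiser = nn.Sequential(                      # tiny convolutional denoiser
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(200):
    clean = make_phantoms(16)
    noisy = clean + 0.3 * torch.randn_like(clean)   # noise added artificially, post-"acquisition"
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.4f}")
```

The analogy to dose reduction: if a network can be trained to map noisy images back to clean ones, a lower-dose (noisier) acquisition could be cleaned up in software rather than by raising the dose.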

Put hardware and software improvements together and we might be able to substantially decrease the dose of ionizing radiation.  If this dose is low enough, and research bears out that there is a dose threshold below which radiation doesn’t cause any real effects, we could “image gently” with impunity.

Are we using the information in diagnostic imaging effectively?  Probably not.  There is just too much information on a scan for a single radiologist to report it all.  But with AI algorithms also looking at diagnostic images, there is much more information we can extract from a scan than we currently do.  The obvious use case is volumetrics.

The burgeoning science of radiomics includes not only volumetrics, but also relationships within the data on the scan that we humans may not be able to perceive directly.  Dr. Luke Oakden-Rayner caused a brief internet stir with his preliminary precision-radiology article in 2017, using an AI classifier (a CNN) to predict patient survival from CT images.  While small, the study showed the possibility of advanced information discovery on existing datasets and the practical application of those findings.  Radiomics feature selection has problems similar to those of genomics feature selection, in that the large number of data variables may predispose to more chance correlations than in traditionally designed, more focused experiments.

At RSNA 2017, a number of machine learning companies made their debut.  One of the more interesting offerings was Subtle Medical, a machine learning application designed to reduce contrast dose in imaged patients.  Not only would this be disruptive to the contrast industry, reducing the amount of administered contrast by a factor of 5 or more (!), but it would address one of the traditional concerns about contrast: its potential toxicity.  CT uses iodinated contrast, and MRI uses gadolinium-based contrast.  Using less implies less toxicity and less cost, so this is a win all around.

The economics of imaging could fill a book, let alone a blog post.  In a fee-for-service world, imaging was a profit center, and increasing capacity and maximizing the number of imaging services made sense to build a profitable service line.  With declining reimbursement, it has become less so (but still profitable).  However, as we transition to value-based care, how will radiology be seen?  Will it be a cost center, with radiologists fighting over a piece of the bundled-payment pie, or something else?  Will value-based care drive reduced or increased imaging utilization?  Target metrics and their ease of attainment in the ACO will drive this decision, with easier targets correlating with greater imaging use.  Particularly if imaging is seen as providing greater value, utilization should continue to rise.

Specialty training as it exists today may not be sufficient preparation for the way medicine will be practiced in the future.  A specialty (and sub-specialty) approach was reasonable when information was not freely available and the amount of information to know was overwhelming without specialization.  But as we increase efficiencies in medical care, care follows a definable path: patient complaint -> investigation -> diagnosis -> acute treatment -> chronic treatment.  Perhaps it would make more sense to organize medicine along those lines as well?  Particularly in the field of diagnosis, I am not the only physician recognizing the shift occurring.  A well-thought-out opinion piece by Saurabh Jha MD and Eric Topol MD, Radiologists and Pathologists as Information Specialists, argues that there are more similarities between the two specialties than differences, particularly in an age of artificial intelligence.  Should we call for a new Flexner report, ending the era of physician-basic scientists and beginning the dominance of physician-informaticists and physician-empaths?

Perhaps it is time to consider imaging not as a limited commodity, but instead to recognize it as a widely available resource, to be used as much as is reasonable.  By embracing AI, radiomics, new payment models, the radiologist as an informatician, and basic research on radiation safety, we can get there.

©2017 – All rights reserved

Health Analytics Summit 2016 – Summary


I was shut out last year from Health Catalyst’s Health Analytics Summit in Salt Lake City – there is a fire marshal’s limit of about 1,000 people for the ballroom in the Grand America hotel, and with vendors last year there were simply not enough slots.  This year I registered early.  At the 2015 HIMSS Big Data and Medicine conference in NYC, the consensus was that this conference offered lots of practical insights.

The undercurrents of the conference as I saw them:

  • Increasing realization that in accountable care, social ills impact the bottom line.
  • Most people are still at the descriptive analytics stage, but a few sophisticated players have progressed to predictive.  However, actionable cost improvements are achievable with descriptive reporting.
  • Dashboarding is alive and well.
  • EDW solutions require data governance.
  • Data Scientists & statistical skills remain hard to come by in healthcare & outside of major population centers.

A fascinating keynote by Anne Milgram, former NJ attorney general, showed the striking parallels between ER visits/hospitalizations and arrests/incarcerations.  In Camden, NJ, there was a two-thirds overlap between superutilizers of healthcare and of the criminal justice system (CJS).  Noting that CJS data is typically public, she hinted it could potentially be integrated with healthcare data for predictive analytics.  Certainly, from an insurer’s viewpoint, entry into the CJS is associated with higher healthcare/insured costs.  As healthcare systems take on more of that risk via value-based payments, this may be important data to integrate.

I haven’t listened to Don Berwick MD much – I will admit a “part of the problem” bias from his tenure as CMS administrator and his estimate that 50% of healthcare is “waste” (see Dr. Torchiana below).  I was floored that Dr. Berwick appeared to be pleading for the soul of medicine – “less stick and carrot”, “we have gone mad with too many (useless) metrics”.  But he did warn that there will be winners and losers in medicine going forward, signaling to me that physicians, particularly specialists, are slated to be the losers.

David Torchiana MD of Partners Healthcare followed with a nuanced talk reminding us that there is value in medicine – and that much of what we flippantly call waste has occurred alongside a striking reduction in mortality from treated disease over the last 50 years.  It was a soft-spoken counterpoint to Dr. Berwick’s assertions.

Toby Freier and Craig Strauss MD both demonstrated how analytics can significantly impact health while reducing costs, at both the community level and for specialized use cases.  New Ulm Medical Center’s example demonstrated 1) the nimbleness of a smaller entity in evaluating and implementing optimized programs and processes community-wide, while Minneapolis Heart Institute demonstrated 2) how advanced use of analytics could save money by reducing complications in high-cost situations (e.g. CABG, PTCA, HF) and 3) how analytics could be used to answer clinical questions for which there is no good published data (e.g. survivability of TAVR in 90-year-olds).

Taylor Davis of KLAS Research gave a good overview of analytics solutions and satisfaction with them.  The take-home points were that the large enterprise solutions (Oracle et al.) had lower levels of customer satisfaction than the healthcare-specific vendor solutions (Health Catalyst, Qlik).  Integrated BI solutions provided by the EHR vendor, while they integrated well, were criticized as underpowered and insufficient for more than basic reporting.  However, visual exploration tools (Tableau) were nearly as well received as the dedicated healthcare solutions.  Good intelligence on these offerings.

The conference started off with an “analytics walkabout” where different healthcare systems presented their successes and experiences with analytics projects.  Allina Health was well represented with multiple smart and actionable projects – I was impressed.  One project from Allina, predicting who would benefit from closure devices in the cath lab (near and dear to my heart as an Interventional Radiologist), met the goals of both providing better care and saving costs by avoiding complications.  There was also an interesting presentation from AMSURG about a project integrating socio-economic data with GI endoscopy – speaking from some experience, a very appropriate use of analytics in the outpatient world.  These are just a few of the 32 excellent presentations in the walkabout.

I’ll blog about the breakout sessions separately.

Full Disclosure: I attended this conference on my own, at my own expense, and I have no financial relationships with any of the people or entities discussed.  Just wanted to make that clear.  I shill for no one.

 

Value and Risk: the Radiologist’s perspective (Value as risk series #4)

Much can be written about value-based care.  I’ll focus on imaging risk management from a radiologist’s perspective.  What it looks like from the hospital’s perspective, from the insurer’s perspective, and in general has been discussed previously.

When technology was in shorter supply, radiologists were gatekeepers of limited Ultrasound, CT and MRI resources. Need-based radiologist approval was necessary for ‘advanced imaging’. The exams were expensive and needed to be protocoled correctly to maximize utility. This encouraged clinician-radiologist interaction – thus our reputation as “The Doctor’s doctor.”

In the 1990s and 2000s, there was an explosion in imaging utilization and installed equipment.  Imaging was used to maximize throughput, minimize patient wait times, and decrease length of hospital stays.  A more laissez-faire attitude prevailed, and gatekeeping was frowned upon.

With the transition to value-based care, the gatekeeping role of radiology will return.  Instead of assigning access to imaging resources on the basis of limited availability, we need to consider ROI (return on investment): whether the imaging study is likely to improve outcome relative to its cost. (1)  Clinical Decision Support (CDS) tools can help automate assessment of imaging appropriateness and value. (2)

The bundle’s economics amount to capitation of a single care episode for a designated ICD-10 encounter.  This extends across the inpatient stay and related readmissions up to 30 days after discharge (CMS BPCI Model 4).  A review of current Model 4 conditions shows mostly joint replacements, spinal fusion, and our example case of CABG (Coronary Artery Bypass Graft).

Post CABG, a daily Chest X-ray (CXR) protocol may be ordered – very reasonable for an intubated & sedated patient. However, an improving non-intubated awake patient may not need a daily CXR. Six Sigma analysis would empirically classify this as waste – and a data analysis of outcomes may confirm it.

Imaging-wise, patients need a CXR preoperatively, & periodically thereafter. A certain percentage of patients will develop complications that require at least one CT scan of the chest. Readmissions will also require re-imaging, usually CT. There will also be additional imaging due to complications or even incidental findings if not contractually excluded (CT/CTA/MRI Brain, CT/CTA neck, CT/CTA/US/MRI abdomen, Thoracic/Lumbar Spine CT/MRI, fluoroscopy for diaphragmatic paralysis or feeding tube placement, etc…). All these need to be accounted for.

[Image: estimated CABG bundle imaging costs, generated with Nieman HPI’s ICE-T tool]

 

In the fee-for-service world, the ordered study is performed and billed.  In bundled care, payments for the episode of care are distributed to stakeholders according to a pre-defined allocation.

Practically, one needs to retrospectively evaluate, over a multi-year period, how many and what types of imaging studies were performed in patients with the bundled procedure code. (3)  It is helpful to have sufficient statistical power for the analysis and to note trends in both the number of studies and reimbursement.  Breaking down the total spend into professional and technical components is also useful for understanding all stakeholders’ viewpoints.  Evaluate both the number of studies performed and the charges, which translate into dollars by multiplying by your practice’s reimbursement percentage.  Forward-thinking members of the Radiology community at Nieman HPI are providing DRG-related tools such as ICE-T to help estimate these costs (used in the image above).  Ultimately one ends up with a formula similar to this:

CABG imaging spend = CXRs + CT chest + CTA chest + other imaging studies
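A minimal sketch of applying that formula; the study mix, charges, and reimbursement percentage below are hypothetical placeholders for what your own retrospective review would produce:

```python
# Estimate expected imaging dollars per bundled CABG patient from historical averages.
from dataclasses import dataclass

@dataclass
class StudyLine:
    name: str
    avg_studies_per_bundle: float   # from multi-year retrospective review
    avg_charge_per_study: float     # professional + technical charges, dollars

REIMBURSEMENT_PCT = 0.35            # hypothetical practice collection rate

cabg_imaging = [
    StudyLine("CXR", 9.0, 55.0),
    StudyLine("CT chest", 0.4, 450.0),
    StudyLine("CTA chest", 0.15, 700.0),
    StudyLine("Other imaging", 0.3, 500.0),
]

def expected_imaging_spend(lines, reimbursement_pct):
    """Expected imaging spend per bundled CABG patient."""
    charges = sum(l.avg_studies_per_bundle * l.avg_charge_per_study for l in lines)
    return charges * reimbursement_pct

print(f"Expected imaging spend per CABG bundle: "
      f"${expected_imaging_spend(cabg_imaging, REIMBURSEMENT_PCT):.2f}")
```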

Where money will be lost is at the margins: patients who need multiple imaging studies, either due to complications or incidental findings.  With a 2% to 3% death rate for CABG, and recognizing that 30% of all Medicare expenditures are incurred by the 5% of beneficiaries who die each year, with a third of that cost in the last month of life (Barnato et al.), this must be accounted for.  An overly simplistic evaluation of the imaging needs of CABG will result in underallocation of funds for the radiologist, with per-study payment dropping – the old trap of running faster to stay in place.

Payment to the radiologist could follow one of two models:

First, fixed payment per RVU.  Advantageous to the radiologist, it insulates them from risk-sharing.  Ordered studies are read for a negotiated rate, and the hospital bears the cost of excess imaging.  For a radiologist in an independent private practice providing services through an exclusive contract, allowing the hospital to assume the risk on the bundle may be best.

Second, a fixed (capitated) payment per bundled patient for imaging services may be made to the radiologist.  This can be either a fixed dollar amount or a fixed percentage of the bundle (Frameworks for Radiology Practice Participation, Nieman HPI).  This puts the radiologist at risk, in a potentially harmful way.  The disconnect is that the supervising physicians (cardiothoracic surgeon, intensivist, hospitalist) will be focused on improving outcomes, decreasing length of stay, or reducing readmission rates, not on imaging volume.  Ordering imaging studies (particularly advanced imaging) may help their diagnostic certitude and fulfill their goals.  This has the unpleasant consequence of the radiologist’s per-study income decreasing when they have no control over the ordering of studies; in fact, it may benefit other parties to overuse imaging to meet other quality metrics.  The radiology practice manager should proceed with caution if the radiologists are in an employed model but the CT surgeons and intensivists are not.  Building in periodic reviews of expected vs. actual imaging use, with potential re-allocations of the bundle’s payment, might help curb over-ordering.  Interestingly, in this model the radiologist profits by doing less!

Where the radiologist can add value is in analysis: deferring imaging unlikely to impact care.  Reviewing data and creating predictive analytics designed to predict outcomes adds value while, if correctly designed, avoiding more than the standard baseline of risk (see the Johns Hopkins sepsis prediction model).  In patients unlikely to have poor outcomes, additional imaging requests can be gently denied and clinicians reassured.  For example: “This patient has a 98% chance of being discharged without readmission.  Why a lumbar spine MRI?” (c.f. AK Moriarty et al.)  Or: “In this model, patients with these parameters only need a CXR every third day.  Let’s implement this protocol.”  The radiologist returns to a gatekeeping role, creating value by managing risk, intelligently.
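A minimal sketch of what such a gatekeeping model might look like; this is not the Johns Hopkins model, and the features, thresholds, and data below are entirely synthetic and hypothetical, just to illustrate using a predicted probability to defer a routine study:

```python
# Toy outcome model used to flag patients whose routine daily CXR could be deferred.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical post-CABG features: days since extubation, O2 requirement (L/min), WBC count.
X = np.column_stack([
    rng.integers(0, 7, n),
    rng.uniform(0, 6, n),
    rng.normal(9, 3, n),
])
# Synthetic label: 1 = uneventful discharge (toy generating process, illustration only).
logit = 0.8 * X[:, 0] - 0.6 * X[:, 1] - 0.2 * (X[:, 2] - 9) + rng.normal(0, 1, n)
y = (logit > 0).astype(int)

model = LogisticRegression().fit(X, y)

def defer_routine_cxr(features, threshold=0.95):
    """Suggest deferring the routine daily CXR when the predicted probability
    of an uneventful discharge exceeds the threshold."""
    p = model.predict_proba([features])[0, 1]
    return p, p >= threshold

p, defer = defer_routine_cxr([3, 0.5, 8.0])
print(f"P(uneventful discharge) = {p:.2f}; defer routine CXR: {defer}")
```

In practice the model would be trained and validated on the institution’s own bundle data, with the deferral threshold set conservatively and reviewed against outcomes.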

Let’s return to our risk/reward matrix:


 

For the radiologist in the bundled example receiving fixed payments:

  • Low Risk/Low Reward: Daily CXRs for the bundled patients.
  • High Risk/Low Reward: Excess advanced imaging (more work for no change in pay).
  • High Risk/High Reward: Arbitrarily denying advanced imaging without a data-driven model (bad outcomes = loss of job, lawsuit risk).
  • Low Risk/High Reward: Analysis and predictive modeling to protocol which studies can be omitted in which patients without compromising care.

 

I, and others, believe that bundled payments have been put in place not only to decrease healthcare costs, but to facilitate transitioning from the old FFS system to the value-based ‘at risk’ payment system, and ultimately capitated care. (Rand Corp, Technical Report TR-562/20) By developing analytics capabilities, radiology providers will be able to adapt to these new ‘at-risk’ payment models and drive adjustments to care delivery to improve or maintain the community standard of care at the same or lower cost.

  1. B Ingraham, K Miller, et al. J Am Coll Radiol 2016, in press.
  2. AK Moriarty, C Klochko, et al. J Am Coll Radiol 2015;12:358-363.
  3. D Seidenwurm, FJ Lexa. J Am Coll Radiol 2016, in press.