Do we need more medical imaging?


[Image: fan art of the starship Enterprise, processed with Deep Dream]

The original captain of the starship Enterprise, James T. Kirk, addressed his ship with the invocation “Computer, …”. For an audience in the late 1960’s it was an imagined miracle, hundreds of years in the future. In the late 1990’s, MIT’s Laboratory for Computer Science was dreaming of Project Oxygen – an ever-present, voice-activated computer that could be spoken to and would give appropriate responses.


[Image: “Hi, Siri” – circa 2011]
[Image: “Hello Alexa” – circa 2016]


Cloud computing, plentiful memory, on-demand massive storage and GPU-powered deep learning brought this future into our present.  Most of us already have the appliance (a smartphone) capable of connecting us to scalable cloud computing resources. Comparing current reality to the 1960’s expectations, this advancing world of ubiquitous computing is small, cheap, and readily available.

But imaging is not.  The current paradigm holds imaging as a rare, special, and expensive medical procedure.  In the days of silver-film radiology, with tomographic imaging and cut-film feeders for interventional procedures, it was a scarce resource.  In the first days of CT and MRI, requests for anything more complicated than an x-ray needed to pass through a radiologist.  These machines, and the skills necessary to operate them, were expensive and in short supply.

But is it still? Consider a 2017 ER visit – the point of access to health care for more than 50% of patients. If your symptoms are severe enough, it is almost a certainty you will receive imaging early in the visit. Belly pain? CT that. Worst headache of your life? CT again. Numbness on one side of your body? Diffusion-weighted MRI. And it is ordered on a protocol circumventing Radiology approval – why waste time in an era of 24/7 imaging, with final interpretations available in under an hour?

I’ve written briefly about how a change to value-based care will upend traditional fee-for-service (FFS) delivery patterns. But with that change from FFS and volume to value, should we think about Radiology and other diagnostic services differently? Perhaps medical imaging should not be rationed, but readily and immediately available – an equal to the history and physical.

I call this concept Ubiquitous Imaging©, or Ubiquitous Radiology. Ubiquitous Imaging is the idea that imaging is so necessary for the diagnosis and management of disease that it should be an integral part of every diagnostic workup and guide every treatment plan where it is of benefit. “A scan for every patient, when it would benefit the patient.”

This is an aggressive statement.  We’re not ready for it just yet.  But let me explain why Ubiquitous Imaging is not so far off.

  1. Imaging is no longer a limited good in the developed world
  2. Artificial intelligence will increase imaging productivity, similar to PACS
  3. Concerns about radiation dose will be salved by improvements in technology
  4. Radiomics will greatly increase the value of imaging
  5. Contrast use may be markedly decreased by an algorithm
  6. Imaging will change from a cost center to an accepted part of preventative care in a value-based world
  7. Physicians may shift from the current subspecialty paradigm to a Diagnosis–Acute Treatment–Chronic Care Management paradigm to better align with value-based care

Each of these points may sound like science fiction.  But the groundwork for each of these is being laid now:

In the US in 2017, there were 5,564 hospitals registered with the AHA, each with some inpatient radiology services. As of 2007, there were at least 10,335 CT scanners and 7,810 MRI scanners operating in the US. Using OECD data from 2015 – 41 CTs and 39 MRIs per million inhabitants, against a total US census of 320,000,000 – we can estimate the number of US CT and MRI scanners in 2015 at 13,120 and 12,480, respectively.

If proper procedures are followed, with appropriate staffing and a lean/Six Sigma approach to scanning, it is conceivable that a modern multislice CT could scan one patient every ten minutes (possibly better) and run almost 24/7 (with downtime for maintenance and QA). Thus, one CT scanner could image 144 patients daily. 144 scans/day × 365 days/year × 13,120 CT scanners = 689,587,200 potential scans yearly – two scans a year for every US resident!

MRI is more problematic because physics dictates the length of scans. Sequence duration is governed by the T1 and T2 relaxation times of tissue, measured in milliseconds, and making scans faster runs up against the laws of physics. While there are some ‘shortcuts’, we pay for them with T2* effects and decreased resolution. Stronger magnets and gradients help, but at higher cost and with a risk of energy transfer to the patient. So at optimal efficiency and staffing, the best you could probably get is 22 studies daily (a very aggressive number). 22 MRI studies/day × 365 days/year × 12,480 MRI scanners = 100,214,400 studies yearly – enough to scan one third of the US population annually. (Recent discussions at RSNA 2017 suggest MRI scans might eventually be shortened to the length of a CT.)
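
To make the arithmetic explicit, here is the capacity estimate as a short Python sketch. The scanner densities are the 2015 OECD figures cited above; the daily throughput numbers are the optimistic assumptions from the last two paragraphs, not measured values:

```python
# Back-of-the-envelope US imaging capacity (assumptions from the text above)
population = 320_000_000
ct_per_million, mri_per_million = 41, 39                  # OECD, 2015

ct_scanners = ct_per_million * population // 1_000_000    # 13,120
mri_scanners = mri_per_million * population // 1_000_000  # 12,480

ct_scans_per_day, mri_scans_per_day = 144, 22             # optimistic throughput
ct_capacity = ct_scans_per_day * 365 * ct_scanners        # 689,587,200
mri_capacity = mri_scans_per_day * 365 * mri_scanners     # 100,214,400

print(f"CT:  {ct_capacity:,} scans/year ({ct_capacity / population:.1f} per resident)")
print(f"MRI: {mri_capacity:,} scans/year ({mri_capacity / population:.2f} per resident)")
```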

Think about this. We can CT scan every US citizen twice in a one-year period, and yet we continue to treat imaging as a scarce resource. One in three US citizens could be scanned with MRI annually. Imaging is not scarce in the developed world.

X-ray is the most commonly performed imaging procedure and, including mammography and fluoroscopy, accounts for up to 50% of radiology studies; CT, MR, ultrasound, and nuclear medicine occupy the other 50%. Backing out from the numbers above suggests capacity on the order of 2.256 billion possible studies a year.

We’ve done the studies – how will we interpret them? A physician (MD) examines each study, interprets it, and delivers a report. There are about 30,656 radiologists in the USA (2012 AMA Physician Masterfile). The Neiman HPI suggests that estimate may be low, and gives an upper range of 37,399 radiologists.

A busy radiologist on a PACS system can interpret 30,000 studies a year. 30,656 × 30,000 = 919,680,000 potentially interpretable studies from our workforce; using the high workforce estimate, capacity rises to about 1.12 billion. That is a large variance from the 2.256 billion studies that could be performed. However, roughly 50% of studies – usually X-ray and ultrasound – are performed and interpreted by non-radiologists, which brings the radiologist share back to roughly 1.13 billion studies, close to our interpretive capacity.
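
The interpretation side works out similarly; a continuation of the same Python sketch, using the workforce figures above:

```python
# Radiologist interpretation capacity (figures from the text above)
radiologists_low, radiologists_high = 30_656, 37_399
studies_per_radiologist = 30_000        # busy PACS-era annual reading volume

capacity_low = radiologists_low * studies_per_radiologist    # 919,680,000
capacity_high = radiologists_high * studies_per_radiologist  # ~1.12 billion

total_possible_studies = 2_256_000_000  # technical capacity estimated above
needing_radiologists = total_possible_studies // 2  # ~50% read by non-radiologists

print(f"Radiologist capacity: {capacity_low:,} to {capacity_high:,} studies/year")
print(f"Studies needing radiologist interpretation: ~{needing_radiologists:,}")
```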

Recall that radiologists did not always interpret studies on computer monitors. Prior to PACS, a busy radiologist would read 18,000 studies a year; radiologists experienced a jump in productivity when we moved from interpreting studies on film to interpreting them on PACS systems.

Artificial intelligence algorithms are appearing in Radiology at a rapid pace. While it is early in the development of these products, there is no question in the minds of most informed radiologists that computer algorithms will be a part of radiology. And because AI solutions in radiology will not be reimbursed additionally, cost justification needs to come from productivity: an AI algorithm needs to justify its price by making the radiologist more efficient, so that its cost is borne through economies of scale.

Now imagine that AI algorithms develop accuracy similar to a radiologist’s. Able to ‘trust’ the algorithms and thereby streamline their daily work processes, radiologists would no longer be limited to interpreting 30,000 studies a year. Perhaps that number rises to 45,000, or 60,000 – I can’t in good conscience consider a higher number. The speed of AI introduction, if rapid and widespread, may cause some capacity issues, but the aging population, retiring radiologists, well-informed medical students responding to the “invisible hand,” and the perpetual trend toward increasing demand for imaging services will form a new equilibrium. Ryan Avent of The Economist (whose book The Wealth of Humans is wonderful reading) has a more resigned opinion, however.

One additional function of radiologists is to manage the potentially harmful effects of the ionizing radiation dose used in X-rays. We know that high levels of ionizing radiation cause cancer. Whether lower levels of radiation cause cancer is controversial. However, it is likely that some (low) percentage of cancer is actually CAUSED by medical imaging. To combat this, we have used the ALARA (“as low as reasonably achievable”) paradigm in medical imaging and, in recent years, the Image Gently campaign to address concerns about the higher doses received in advanced imaging.

Recently, James Brink MD of the American College of Radiology (ACR) testified before the US Congress about the need for contemporary research on the effects of the radiation doses encountered in medical imaging. Without getting too deep into the physics of imaging, more dose usually yields crisper, “prettier” images at higher resolution.

But what if there were another way? Traditionally, radiologists have relied upon equipment makers to improve hardware and extract better signal-to-noise ratios, allowing a lower radiation dose. But in a cost-conscious era, it is difficult to argue for expensive new technologies when there is no reimbursement advantage.

However, an interesting pilot study used an AI technique to ‘de-noise’ CT scans, improving their appearance. The noise was added artificially after the scan, rather than being present at the time of imaging. A number of papers at NIPS 2017 dealt with super-resolution. Could similar technologies exist for imaging? Paras Lakhani seems to think so.
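
As a rough illustration of what such de-noising looks like in code, here is a toy PyTorch sketch – not the pilot study’s actual method. Synthetic noise is added to stand-in images after the fact (mirroring the study’s setup), and a small residual CNN is trained to remove it; the architecture and data are purely illustrative.

```python
# Toy CT de-noising sketch: train a small CNN to remove synthetic noise.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict the noise residual and subtract it (residual learning).
        return x - self.net(x)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    clean = torch.rand(8, 1, 64, 64)               # stand-in for clean CT slices
    noisy = clean + 0.1 * torch.randn_like(clean)  # simulated "low-dose" noise
    loss = loss_fn(model(noisy), clean)            # learn to recover clean image
    opt.zero_grad()
    loss.backward()
    opt.step()
```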

Put hardware and software improvements together and we might be able to substantially decrease ionizing radiation dose. If that dose is low enough, and research bears out a threshold below which radiation causes no real effects, we could “image gently” with impunity.

Are we using the information in diagnostic imaging effectively? Probably not. There is simply too much information on a scan for a single radiologist to report entirely. But with AI algorithms also looking at diagnostic images, there is far more information we can extract from a scan than we currently do. The obvious use case is volumetrics.
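
Volumetrics itself is computationally trivial once a segmentation exists; the hard part is producing the segmentation. A hypothetical NumPy example – the mask and voxel spacing below are made up for illustration:

```python
# Hypothetical volumetrics: lesion volume = voxel count x voxel volume.
import numpy as np

voxel_size_mm = (0.7, 0.7, 1.0)        # in-plane spacing x slice thickness (assumed)
mask = np.zeros((512, 512, 120), dtype=bool)
mask[250:270, 250:268, 40:52] = True   # toy "lesion" segmentation

voxel_volume_ml = np.prod(voxel_size_mm) / 1000.0   # mm^3 -> mL
lesion_volume_ml = mask.sum() * voxel_volume_ml
print(f"Lesion volume: {lesion_volume_ml:.1f} mL")  # ~2.1 mL for this toy mask
```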

The burgeoning science of Radiomics includes not only volumetrics but also relationships in the scan data that we may not be able to perceive directly as humans. Dr. Luke Oakden-Rayner caused a brief internet stir with his preliminary precision-radiology article in 2017, using an AI classifier (a CNN) to predict patient survival from CT images. While small, the study showed the possibility of discovering new information in existing datasets and applying those findings in a practical manner. Radiomics feature selection has problems similar to those of genomics feature selection, in that the large number of data variables may predispose to more chance correlations than in traditionally designed, more focused experiments.
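
The feature-selection hazard is easy to demonstrate. In the small simulation below, features that are pure noise still produce impressive-looking correlations with a random outcome once enough of them are tested – which is why radiomics studies need multiple-testing correction and validation sets:

```python
# Chance correlations grow with the number of features tested.
import numpy as np

rng = np.random.default_rng(0)
n_patients = 50
outcome = rng.normal(size=n_patients)   # random stand-in for "survival"

for n_features in (10, 100, 1000):
    features = rng.normal(size=(n_patients, n_features))  # pure noise
    corrs = [abs(np.corrcoef(features[:, j], outcome)[0, 1])
             for j in range(n_features)]
    print(f"{n_features:>5} random features -> max |r| = {max(corrs):.2f}")
```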

At RSNA 2017, a number of machine learning companies made their debut. One of the more interesting offerings was from Subtle Medical, a machine learning application designed to reduce contrast dose in imaged patients. Not only would this disrupt the contrast industry by reducing administered contrast by a factor of five or more (!), it would also address one of the traditional concerns about contrast: its potential toxicity. CT uses iodinated contrast, and MRI uses gadolinium-based contrast; using less implies less toxicity and less cost, so this is a win all around.

The economics of imaging could fill a book, let alone a blog post. In a fee-for-service world, imaging was a profit center, and increasing capacity and maximizing the number of imaging services made sense to grow a profitable service line. With declining reimbursement, it has become less so (but is still profitable). As we transition to value-based care, however, how will radiology be seen? Will it be a cost center, with radiologists fighting over a piece of the bundled-payment pie, or something else? Will it drive reduced or increased imaging utilization? Target metrics, and the ease of attaining them within the ACO, will drive this decision, with easier targets correlated with greater imaging. Particularly if imaging is seen as providing greater value, utilization should continue to rise.

Specialty training as it exists today may not be sufficient preparation for the way medicine will be practiced in the future. A specialty (and sub-specialty) approach was reasonable when information was not freely available and the amount of information to know was overwhelming without specialization. But as we increase efficiencies in medical care, care follows a definable path: Patient complaint -> Investigation -> Diagnosis -> Acute Treatment -> Chronic Treatment. Perhaps it would make more sense to organize medicine along those lines as well? Particularly in the field of diagnosis, I am not the only physician recognizing the shift underway. A well-thought-out opinion piece by Saurabh Jha MD and Eric Topol MD, Radiologists and Pathologists as Information Specialists, argues that there are more similarities between the two specialties than differences, particularly in an age of artificial intelligence. Should we call for a new Flexner Report, ending the era of physician-basic scientists and beginning the dominance of physician-informaticists and physician-empaths?

Perhaps it is time to consider imaging not as a limited commodity, but instead to recognize it as a widely available resource, to be used as much as is reasonable.  By embracing AI, radiomics, new payment models, the radiologist as an informatician, and basic research on radiation safety, we can get there.

©2017,2018 – All rights reserved