What’s up with N2Value – tying up loose ends

Dora Mitsonia - CC license

It’s been almost a year since my last long-form article. Of course, ‘busyness’ in real life and blog writing are inversely proportional! I’ve been focused on real-life advances, namely neural networks, machine learning, and machine intelligence, which fall loosely under the colloquial misnomer of “A.I.”

After a deep dive into machine learning, I find it simultaneously unexpectedly simple and deceptively difficult. The technical hurdles are significant, but improving – math skills ease the conceptual framework, but without the programming chops, practical application is tougher. Worse, the IT task of getting multiple languages, packages, and pieces of hardware to work together well is daunting. Getting the venerable MNIST to work on your computer with your GPU might be a weekend project – or worse. I’m not a ‘gamer’, so for the last decade it has been hard for me to get excited about increasing CPU clock speeds, faster DRAM, and faster GPU flops. Like many, I’ve been happy to use OS X on increasingly venerable Mac products – it works fine for my purposes.

But since AlexNet’s publication in 2012, the explosion in both theory and application in machine learning has made me sit up and take notice. The ImageNet Large Scale Visual Recognition Challenge top-5 classification error rate was only 2.7% in the latest competition, held a few days ago in July 2017. That’s down from 30%+ error rates only four years ago. And my current hardware isn’t up to that task.

So, count me in. Certainly AI will be used in healthcare, but in what manner and to what extent is still to be worked out. Pioneer firms like Arterys and Zebra Medical Vision brave uncharted regulatory waters, watched closely by AI startups with similar dreams.

So, while I’d like to talk more about AI, I’m not sure that N2Value is the right place to do it. N2Value is primarily a healthcare thought-leadership blog, promoting an evolution from Six Sigma methodology into more robust management practices that incorporate systems theory, focus on appropriately chosen metrics, and model patient populations and likely outcomes – and thereby successfully implement profitable value-based care. Caveat: with current US politics, it is very difficult to predict healthcare policy’s direction.

So, in the near future, I will decide what the scope of N2Value is to be going forward. I thank my loyal readers & subscribers who have given me five-digit page views over the short life of the blog – far more than I ever expected! The blog has been a labor of love, and I’m pretty sure that AI algorithms have a place in healthcare management. However, I am not sure you want to hear me opine on which flavor of convolutional neural network works better with or without an LSTM added, so stay tuned!

I have a few topics I have alluded to which I would like to mention quickly as stubs – they may or may not be expanded in the future.

STUB: What Healthcare can learn from Wall Street.

The main point of this series was to document the chronological implications of advances in computing technology on a leading industry (finance) in order to describe the likely similar path of a lagging industry (healthcare). I was never able to find the statistics on Wall Street employment I was seeking, which would show a declining number of workers alongside higher productivity and profitability per employee as IT advances allowed for the super-empowerment of individuals.

Additionally, the series raised issues regarding technology in adversarial B2B relationships – much like Insurer–Hospital or Hospital–Doctor. If I have time, I’d like to rewrite this series; I wrote it when I first began blogging, and it is a bit rough.

STUB: The Measure is the Metric

One of my favorite articles (with its siblings), this subject was addressed much more eloquently on the Ribbonfarm blog by David Manheim in Goodhart’s Law and Why Measurement is Hard. If anything, after reading that essay, you will have sympathy for the metrics-oriented manager and be convinced that nothing they do can be right. I firmly believe that metrics should be designed for the task at hand and, once the target is achieved, monitored for a while – but not dogmatically. Better to target new and improved metrics than enforce institutional petrification ‘by the numbers.’

STUB: Value as Risk Series

I perceive the only way for value-based care to be profitable and successful in the long term is large-scale vertical integration by a large Enterprise Health Institution (EHI) across the care spectrum. The Hospital acquires clinics, practices, and doctors, quantifies its covered lives, and then, with better analytics than the insurers, capitates – ultimately contracting directly with employers & individuals. The insurers become redundant, and the vertically integrated enterprise saves on economies of scale. It provides care in the most cost-effective manner possible & closes beds, relying instead on telehealth, mHealth apps, predictive algorithms, and innovative care delivery.

When the Hospital’s profitability model resembles the insurer’s, and it is beholden only to itself (capitated payments are all there is), something fascinating happens. No longer does it matter whether there is an ICD-10/HOPPS/CPT/DRG code for a procedure. The entity is no longer beholden to the rules of payment, and can innovate internally. A successful vertically integrated enterprise will – and quickly. While there will have to be appropriate regulatory oversight to prevent patient abuse, profiteering, or attempts to financialize the model, adjusting capitation with incentive payments for real measures of quality (not proxies) will prompt compliance and improved care.

Writing as a physician, I recognize this arrangement may or may not commoditize care further. Concerns about standardization of care are probably overstated, as the first CDS tool more accurate than a physician will standardize care to that model anyway! From an administrator’s perspective, it is a no-brainer to deliver care in an innovative manner that circumvents existing stumbling blocks. From a patient’s perspective, while I prefer easy access to a physician, maintaining that access is becoming unaffordable, let alone actually utilizing health care! At some point the economic pain will be so high that patients will want alternatives they can afford. Whether that means mid-levels or AI algorithms, only time will tell.

STUB: Data Science and Radiology

I really like the concept I began here with data visualization in five dimensions. Could this be a helpful additional tool for AI research, like TensorBoard? I’m thinking about eventually writing a paper on this one.

STUB: Developing the Care Model

The concept of treating a care model like an equation is what got me started on all this – describing a system as a mathematical model seemed like such a good idea – but required learning on my part. That learning, and the effects thereof, are still ongoing. At the time of writing, the solution appeared daunting & I put the project on the back burner (i.e., abandoned it) as I couldn’t make it work. Of course, with advancing tools and algorithms well suited to this task, I might re-examine it soon.

Where does risk create value for a hospital? (Value as Risk series post #3)

Let’s turn to the hospital side.

I develop the concept of value as risk management here (read that first), and discuss the value of risk management from an insurer’s perspective here (second).

The hospital is an anxious place – old, fat fee-for-service margins are shrinking, and major rule-set changes keep coming. Managing revenue cycles requires committing staff resources (overhead) to compliance-related functions, further shrinking margins. More importantly, that resource commitment postpones other potential initiatives. Maintaining compliance with Meaningful Use (MU) 3 cum MACRA, PQRS, ICD-10 (11?), and other mandated initiatives while dealing with ongoing reviews…

Some reflections on the ongoing shift from volume to value

As an intuitive and inductive thinker, I often use facts to prove or disprove my biases. This may make me a poor researcher, though I believe I would have been popular in academic circles circa 1200. Serendipity plays a role – yes, I’m a big Nassim Taleb fan – sometimes in the seeking, unexpected answers appear. Luckily, I’m correct more often than not. But honestly, in predicting widely you miss more widely.

One of my early mentors from Wall St. addressed this with me in the infancy of my career – take Babe Ruth’s batting average of .342. This meant that roughly two out of three times at bat, Babe Ruth failed to get a hit. However, he was trying to hit home runs. There is a big difference between being a base-hit player and a home-run hitter. What stakes are you playing for?

With that said, this Blog is for exploring topics I find of interest pertaining mostly to healthcare and technology. The blog has been less active lately, not only due to my own busy personal life (!) but also because I have sought more up-to-date information about advancing trends in both the healthcare payment sector and the IT/Tech sector as it applies to medicine. I’m also diving deeper into Radiology and Imaging. As I’ve gone through my data science growth phase, I’ll probably blog less on that topic except as it pertains to machine learning.

The evolution of the volume to value transition is ongoing, as many providers are beginning to be subject to at least a degree of ‘at-risk’ payment. Stages of ‘at-risk’ payment have been well characterized – this slide by Jacque Sokolov MD of SSB Solutions is representative:

Sokolov – SSB Solutions, slide 1

In 2015, approximately 20% of Medicare spend was value-based, with CMS’s goal of 50% by 2020. Currently, providers are ‘testing the waters’, with fewer than 20% of providers accepting over 40% risk-based payments (c.f. Kimberly White MBA, Numerof & Associates). Obviously, the more successful of these will be the larger, more data-rich and data-utilizing providers.

However, all is not well in the value-based-payment world. In fact, this year UnitedHealthcare announced it is pulling its insurance products out of most of the ACA exchange marketplaces. While UHC products were a small share of the exchanges, it sends a powerful message when a major insurer declines to participate. Recall that most ACOs (~75%) did not produce cost savings in 2014, although more recent data is more encouraging (c.f. Sokolov). Notably, of the 32 Pioneer ACOs that started, only 9 are left (~30%) (ref. CMS). The road to value is not a certain path at all.

So, with these things in mind, how do we negotiate the waters? Specifically, as radiologists, how do we manage the shift from volume to value, and what does it mean for us? How is value defined for Radiology? What is it not? Value is NOT what most people think it is. I define value as: the cost savings arising from the assumption and management of risk. We’ll explore this in my next post.

Catching up with the “What medicine can learn from Wall St. ” Series

The “What medicine can learn from Wall Street” series is getting a bit voluminous, so here’s a quick recap of where we are up to so far:

Part 1 – History of analytics – a broad overview which reviews the lagged growth of analytics driven by increasing computational power.

Part 2 – Evolution of data analysis – correlates specific computing developments with analytic methods and discusses pitfalls.

Part 3 – The dynamics of time – compares and contrasts the opposite roles and effects of time in medicine and trading.

Part 4 – Portfolio management and complex systems – lessons learned from complex systems management that apply to healthcare.

Part 5 – RCM, predictive analytics, and competing algorithms – develops the concept of competing algorithms.

Part 6 – Systems are algorithms – discusses ensembling in analytics and relates operations to software.


What are the main themes of the series?

1.  That healthcare lags behind Wall Street in computation, efficiency, and productivity; and that we can learn where healthcare is going by studying Wall Street.

2.  That increasing computational power allows for more accurate analytics, with a lag.  This shows up first in descriptive analytics, then allows for predictive analytics.

3.  That overfitting data and faulty analysis can be dangerous and lead to unwanted effects.

4.  That time is a friend in medicine, and an enemy on Wall Street.

5.  That complex systems behave complexly, and modifying a sub-process without considering its effect upon other processes may have “unintended consequences.”

6.  That we compete through systems and processes – and ignore that at our peril as the better algorithm wins.

7.  That systems are algorithms – whether soft or hard coded – and we can ensemble our algorithms to make them better.


Where are we going from here?

– A look at employment trends on Wall Street over the last 40 years and what it means for healthcare.

– More emphasis on the evolution from descriptive analytics to predictive analytics to prescriptive analytics.

– A discussion for management on how analytics and operations can interface with finance and care delivery to increase competitiveness of a hospital system.

– Finally, tying it all together and looking towards the future.


All the best to you and yours and great wishes for 2016!



Black Swans, Antifragility, Six Sigma and Healthcare Operations – What medicine can learn from Wall St Part 7



I am an admirer of Nassim Nicholas Taleb – a mercurial options trader who has evolved into a philosopher-mathematician.  The focus of his work is on the effects of randomness, how we sometimes mistake randomness for predictable change, and how we fail to prepare for randomness by excluding outliers in statistics and decision making.  These “black swans” arise unpredictably and cause great harm, amplified by systems we have put into place that are ‘fragile’.

Perhaps the best example of a black swan event is the period of financial uncertainty we have lived through during the last decade.  A quick recap: the 2008 global financial crisis was caused by a bubble in US real estate assets.  This in turn stemmed from legislation mandating lower lending standards (subprime, Alt-A) and facilitating securitization of these loans – the proverbial passing of the ‘hot potato’.  These mortgages were packaged into derivatives named collateralized debt obligations (CDOs), using statistical models to gauge default risks in these loans.  Loans more likely to default were blended with loans less likely to default, yielding an overall package that was statistically unlikely to default.  However, as owners of these securities found out, the statistical models that made them look unlikely to default were based on a small sample period with few defaults.  The models indicated that the financial crisis was a 25-sigma (standard deviation) event that should only happen once in:

[a number with lots of zeroes] years (c.f. Wolfram Alpha).

Of course, the default events happened in the first five years of their existence, proving that calculation woefully inadequate.
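The arithmetic behind that absurd number is easy to reproduce. A minimal sketch (assuming daily returns are i.i.d. standard normal – which is exactly the flawed assumption those models made):

```python
import math

# Tail probability of a 25-sigma daily move under a normal distribution:
# P(Z > 25) = 0.5 * erfc(25 / sqrt(2))
p = 0.5 * math.erfc(25 / math.sqrt(2))

# Expected waiting time, assuming one independent observation per
# trading day and ~252 trading days per year
years_between_events = 1 / p / 252

print(f"P(Z > 25) is about {p:.3e}")
print(f"Expected once every ~{years_between_events:.3e} years")
```

The tail probability comes out on the order of 10^-138, i.e. an expected waiting time vastly longer than the age of the universe – which is why observing such “impossible” defaults within five years falsifies the model, not the market.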

The problem with major black swans is that they are sufficiently rare and impactful that it is difficult to plan for them – global pandemics, the Fukushima reactor accident, and the like.  By designing robust systems that expect perturbations, you can mitigate their effects when they occur and shake off the more frequent minor (grey) swans – 5-10 sigma events that occur occasionally (but more often than you expect) and are disruptive rather than devastating (like local disease outbreaks or power outages).

Taleb classifies how things react to randomness into three categories: Fragile, Robust, and Anti-Fragile.  While the interested reader would benefit from the original work, here is a brief summary:

1.     The Fragile consists of things that hate, or break, from randomness.  Think about tightly controlled processes, just-in-time delivery, tightly scheduled areas like the OR when cases are delayed or extended, etc…
2.     The Robust consists of things that resist randomness and try not to change.  Think about warehousing inventories, overstaffing to mitigate surges in demand, checklists and standard order sets, etc…
3.     The Anti-Fragile consists of things that love randomness and improve with serendipity.  Think about cross-trained floater employees, serendipitous CEO-employee hallway meetings, lunchroom physician-physician interactions where the patient benefits.

In thinking about Fragile/Robust/Anti-Fragile, be cautious about injecting bias into meaning.  After all, we tend to avoid breakable objects, preferring things that are hardy or robust.  So there is a natural tendency to consider fragility ‘bad’, robustness ‘good’ – and anti-fragility must therefore be ‘great!’  Not true – when we approach these categories from an operational or administrative viewpoint.

Fragile processes and systems are those prone to breaking. They hate variation and randomness and respond well to six-sigma analyses and productivity/quality improvement.  I believe that fragile systems and processes are those that will benefit the most from automation & technology.  Removing human input & interference decreases cycle time and defects.  While the fragile may be prone to breaking, that is not necessarily bad.  Think of the new entrepreneur’s mantra – ‘fail fast’.  Agile/SCRUM development, most common in software (but perhaps useful in healthcare?), relies on rapid iteration to adapt to a moving target.  Fragile systems and processes cannot be avoided – instead they should be highly optimized with the least human involvement.  They need careful monitoring (daily? hourly?) to detect failure, at which point a ready team can swoop in, fix whatever has caused the breakage, re-optimize if necessary, and restore the system to functionality.  If a fragile process breaks too frequently and causes significant disruption, it probably should be made into a Robust one.

Robust systems and processes are those that resist failure due to redundancy and relative waste.  These probably are your ‘mission critical’ ones where some variation in the input is expected, but there is a need to produce a standardized output.  From time to time your ER is overcome by more patients than available beds, so you create a second holding area for less-acute cases or patients who are waiting transfers/tests.  This keeps your ER from shutting down.  While it can be wasteful to run this area when the ER is at half-capacity, the waste is tolerable vs. the lost revenue and reputation of patients leaving your ER for your competitor’s ER or the litigation cost of a patient expiring in the ER after waiting 8 hours.    The redundant patient histories of physicians, nurses & medical students serve a similar purpose – increasing diagnostic accuracy.  Only when additional critical information is volunteered to one but not the other is it a useful practice.  Attempting to tightly manage robust processes may either be a waste of time, or turn a robust process into a fragile one by depriving it of sufficient resilience – essentially creating a bottleneck.  I suspect that robust processes can be optimized to the first or second sigma – but no more.

Anti-fragile processes and systems benefit from randomness, serendipity, and variability.  I believe that many of these are human-centric.  The automated process that breaks is fragile, but the team that swoops in to repair it – they’re anti-fragile.  The CEO wandering the halls to speak to his or her front-line employees four or five levels down the organizational tree for information – anti-fragile.  Clinicians that practice ‘high-touch’ medicine result in good feelings towards the hospital and the unexpected high-upside multi-million dollar bequest of a grateful donor 20 years later – that’s very anti-fragile.  It is important to consider that while anti-fragile elements can exist at any level, I suspect that more of them are present at higher-level executive and professional roles in the healthcare delivery environment.  It should be considered that automating or tightly managing anti-fragile systems and processes will likely make them LESS productive and efficient.  Would the bequest have happened if that physician was tasked and bonused to spend only 5.5 minutes per patient encounter?  Six sigma management here will cause the opposite of the desired results.

I think a lot more can be written on this subject, particularly from an operational standpoint.   Systems and processes in healthcare can be labeled fragile, robust, or anti-fragile as defined above.  Fragile components should have human input reduced to the bare minimum possible, then optimize the heck out of these systems.  Expect them to break – but that’s OK – have a plan & team ready for dealing with it, fix it fast, and re-optimize until the next failure.  Robust systems should undergo some optimization, and have some resilience or redundancy also built in – and then left the heck alone!  Anti-fragile systems should focus on people and great caution should be used in not only optimization, but the metrics used to manage these systems – lest you take an anti-fragile process, force it into a fragile paradigm, and cause failure of that system and process.  It is the medical equivalent of forcing a square peg into a round hole.  I suspect that when an anti-fragile process fails, this is why.

Follow up to “The Etiquette of Help”

c.f. Mark Ong at ganyfd.com
Superior Mesenteric Angiogram demonstrating a right colonic bleed.

I came across this wonderful piece by Bruce Davis MD on Physician’s Weekly about “The Etiquette of Help”. How do you help a colleague emergently in a surgical procedure where things go wrong? As proceduralists, we are always cognizant that this is a possibility.

“Any Surgeon to OR 6 STAT. Any Surgeon to OR 6 STAT.


No surgeon wants to hear or respond to a call like that. It means someone is in deep kimchee and needs help right away.”


I was called about an acute lower GI bleed with a strongly positive bleeding scan. I practice in a resort area, and an extended family had come here with their patriarch, a man in his late 50’s. (Identifying details changed/withheld – image above is NOT from this case). He had been feeling woozy in the hot sun, went to the men’s room, evacuated a substantial amount of blood, and collapsed.


As an interventional radiologist, I was asked to perform an angiogram and embolize the bleeder if possible. The patient was brought to the cath lab; I gained access to the right femoral artery, and then consecutively selected the celiac, superior mesenteric, and inferior mesenteric arteries to evaluate abdominal blood supply. The briskly bleeding vessel was identifiable in the right colonic distribution as an end branch off the ileocolic artery. I guided my catheter, and then threaded a smaller micro-catheter through it, towards the vessel that was bleeding.


When you embolize a vessel, you are cutting off blood flow. Close off too large a region, and the bowel will die. Also, collateral vessels in the colon will resupply the bleeding vessel, so you have to be precise.


Advancing a microcatheter under fluoroscopy to an end vessel is slow, painstaking work requiring multiple wire exchanges and contrast injections. After one injection, I asked my assisting scrub tech to hand me back the wire.

“Sir, I’m sorry. I dropped the wire on the floor.”

“That’s OK. Just open up another one.”

“Sir, I’m sorry. That was the last one in the hospital.”

“There’s an art to coming in to help a colleague in trouble. Most of us have been in that situation, both giving and receiving help. A scheduled case that goes bad is different from a trauma. In trauma, you expect the worst. Your thinking and expectations are already looking for trouble. In a routine case, trouble is an unwelcome surprise, and even an experienced surgeon may have difficulty shifting from routine to crisis mode.”


We inquired how quickly we could get another wire. It would take hours, if we were lucky. The patient was still actively bleeding and requiring increasing fluid and blood support to maintain pressure. After a few creative attempts at solving this problem, it was clear that it was not going to be solved by me, today, in that room. It was time to pull the trigger and make the call the interventionalist dreads – the call to the surgeon.


The general surgeon came down to the angio suite and I explained what was happening. I marked the bowel with a dye to assist him in surgery, and sent the patient with him to the OR. The patient was operated on within 30 minutes from leaving my cath lab, and OR time was perhaps 45 minutes. After the procedure was done the surgeon remarked to me that it was one of the easiest resections ever, as he knew exactly where to go from my work.  The surgeon never said anything negative to me, and we had a very good working relationship thereafter.

“The first thing to remember when stepping into a bad situation is that you are the cavalry. You didn’t create the situation, and recriminations and blame have no place in the room. You need to be the calm center to a storm that started before you got involved. Sometimes that’s all that is needed. A fresh perspective, a few focused questions, and the operating surgeon can calm down and get back on track.”


I saw the patient the next day, sitting up with a large smile on his face. He explained to me how happy he was that he had come here for vacation, that it was the trip of a lifetime for him, and that he was looking forward to attending his youngest daughter’s wedding later that year. He told me he lived in a rural Midwest area, hours from a very small hospital without an interventionalist, and if this had happened at home, well, who knows?


If I had not objectively assessed my inability to finish the case because of equipment issues, well, who knows?


If I had been prideful and unwilling to accept my limitations at that time, well, who knows?


If I had been more concerned with my reputation or what my partners would think, well, who knows?


I sincerely hope that my patient has enjoyed many years of happiness with his family in his bucolic rural Midwestern home. I will never see him again, but I do think of him from time to time.

The danger of choosing the wrong metric: The VA Scandal

The Veterans Affairs scandal has been newsworthy lately.  The facts about the VA scandal will be forthcoming in August, but David Brooks made some smart inferences back on May 16th on NPR’s Week In Politics:

BROOKS: Yeah, he’s (Shinseki) in hot water. He’s been there since the beginning. So I don’t know if I’d necessarily want to bet on him. But, you know, I do have some sympathy for the VA. It’s obviously not a good thing to doctor and cook the books, but you – there is a certain fundamental reality here, which is the number of primary care visits over the last three years at this place rose 50 percent. The number of primary care physicians rose nine percent.
And so there’s just a backlog, and if you put a sort of standard in place that you have to see everybody in 14 days but you don’t provide enough physicians to actually do that, well, people are going to start cheating. And so there is a more fundamental problem here than just the cheating.

An administrative failure was made by mandating that patients be seen within 14 days without providing the staffing to do so.  The rule, designed to promote a high level of care, had ‘unintended consequences.’  However, I do have some sympathy for an institution that depends on procurement from Congress for funding in a political process where funds can be yanked, redistributed, or earmarked based on political priorities.

More concerning, multiple centers may have been complicit in concealing the impossibility of fulfilling the mandate, and whistleblowers were actively retaliated against.

I need to disclaim here that I both trained and worked at the VA as a physician.  I have tremendous respect for the veterans who seek care there, and I had great pride working there and in being in a place to give service to these men and women who gave service to us.  The level of care in the VA system is generally thought to be good, by myself and others.

As I’ve written before in The Measure is the Metric and Productivity in Medicine – what’s real and what’s fake?, the selection of metrics is important because those metrics will be followed by the organization, particularly if performance evaluations and bonuses are tied to them.  Ben Horowitz, partner at Andreessen Horowitz, astutely notes the following from his experience as CEO of Opsware and an employee at HP (1):

At a basic level, metrics are incentives.  By measuring quality, features, and schedule and discussing them at every staff meeting, my people focused intensely on those metrics to the exclusion of other goals.  The metrics did not describe the real goals and I distracted the team as a result.

And if he didn’t get the point across clearly enough (2):

Some things that you will want to encourage will be quantifiable, and some will not.  If you report on the quantitative goals and ignore the qualitative ones, you won’t get the qualitative goals, which may be the most important ones.  Management purely by numbers is sort of like painting by numbers – it’s strictly for amateurs.
At HP, the company wanted high earnings now and in the future.  By focusing entirely on the numbers, HP got them now by sacrificing the future…
By managing the organization as though it were a black box, some divisions at HP optimized the present at the expense of their downstream competitiveness.  The company rewarded managers for achieving short-term objectives in a manner that was bad for the company.  It would have been better to take into account the white box.  The white box goes beyond the numbers and gets into how the organization produced the numbers.  It penalizes managers who sacrifice the future for the short-term and rewards those who invest in the future even if that investment cannot be easily measured.

I’ll have to wait until the official report on the VA scandal is released before commenting on why the failure occurred.  However, it does look like a case of black-box management failure, as Ben Horowitz explained so adeptly.  His writing is recommended.

1.  Ben Horowitz, The Hard Thing about Hard Things, HarperCollins 2014, p.132

2. IBID p.132-133


A conversation with Farzad Mostashari MD

I participated in a webinar with Farzad Mostashari MD, ScM, former director of the ONC (Office of the National Coordinator for Health IT), sponsored by the data analytics firm Wellcentive.  He is now a visiting fellow at the Brookings Institution.  Farzad spoke on points made in a recent article in the American Journal of Accountable Care, Four Key Competencies for Physician-led Accountable Care Organizations.

The hour-and-a-half format lent itself well to Q&A, and basically turned into a small-group consulting session with this very knowledgeable policy leader!

1.  Risk Stratification.  Begin using the EHR data by ‘hot spotting.’  Hot spotting refers to a technique of identifying outliers in medical care and evaluating these outliers to find out why they are consuming resources significantly beyond the average.  The Oliver Wyman folks wrote a great white paper that references Dr. Jeffrey Brenner of the Camden Coalition, who identified the 1% of Medicaid patients responsible for 30% of the city’s medical costs.  Farzad suggests that data mining should go further and “identify populations of ‘susceptibles’ with patterns of behavior that indicate impending clinical decompensation & lack of resilience.”  He further suggests that we go beyond an insurance-like “risk score” to understand how and why these patients fail, and then apply targeted interventions to prevent susceptibles from failing and over-utilizing healthcare resources in the process.  My takeaway: in the transition from volume to value, bundled payments and ACO-style payments will incentivize physicians to share and manage this risk, transferring onto them a role traditionally filled only by insurers.

2.  Network Management.  Data mining the EHR enables organizations to look at provider and resource utilization within a network (cf. the recent Medicare physician payments data release).  By analyzing this data, referral management can be performed.  By sending patients specifically to those providers who have the best outcomes and lowest costs for that disease, the ACO or insurer can meet shared savings goals.  This would also help prevent over-utilization by changing existing referral patterns and excluding those providers who always choose the highest-cost option for care (cf. the recent Medicare payment data for ophthalmologists performing intraocular drug injections – wide variation in costs).  This IS happening – Aetna’s CEO Mark Bertolini said so specifically during his HIMSS 2014 keynote.  To my understanding, network analysis is mathematically difficult (think eigenfunctions, eigenvalues, and linear algebra) – but that won’t stop a determined implementer (it didn’t stop Facebook, Google, or Twitter).  Also included in this topic were workflow management, which is sorely broken in current EHR implementations, clinical decision support tools (like ACRSelect), and traditional Six Sigma process analytics.
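To make the “eigenvalues and linear algebra” remark concrete, here is a minimal sketch (my own illustration; the referral counts are invented) of eigenvector centrality on a referral graph, computed by power iteration — the same family of math behind Google’s PageRank:

```python
# Sketch: eigenvector centrality of a provider referral network via power
# iteration. adj[i][j] = number of referrals from provider i to provider j.
def eigenvector_centrality(adj, iters=100):
    n = len(adj)
    v = [1.0 / n] * n                      # start with uniform scores
    for _ in range(iters):
        # centrality flows along incoming referrals: new score of j sums
        # the scores of everyone who refers to j, weighted by volume
        w = [sum(adj[i][j] * v[i] for i in range(n)) for j in range(n)]
        norm = sum(w) or 1.0
        v = [x / norm for x in w]          # renormalize each iteration
    return v

# Three providers: A and B refer to each other lightly, both refer heavily to C.
adj = [[0, 1, 5],
       [1, 0, 5],
       [0, 0, 0]]
scores = eigenvector_centrality(adj)       # C ends up with the highest score
```

Provider C, the heavy referral target, dominates the centrality scores. On a real network one would use a graph library rather than hand-rolled power iteration, but the underlying linear algebra is exactly this.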

3.  ADT Management.  This was something new.  Using the admission/discharge/transfer data from the HL7 data feed, you could ‘push’ that data to regional health systems, achieving a useful degree of data sharing even in regions without a formal health information exchange.  Patients who bounce from one ER to the next could be identified this way.  It’s also useful to push to the primary care physicians (PCPs) managing those patients.  Today, when PCPs function almost exclusively on an outpatient basis and hospitalists manage the patient while in the hospital, the PCP often doesn’t know about a hospitalization until the patient presents to the office.  Follow-up care in the first week after hospitalization may help prevent readmissions.  According to Farzad, there is a financial incentive to do so – a discharge alert can enable a primary care practice to ensure that every discharged patient has a telephone follow-up within 48 hours and an office visit within 7 days, which would qualify for a $250 “transition in care” payment from Medicare.  (Aside – I wasn’t aware of this.  I’m not a PCP, and I would check Medicare billing criteria closely for eligibility conditions before implementing, as the consequences of getting it wrong can be severe.  Don’t just take my word for it, as I may be misquoting or misunderstanding, and Medicare billers are ultimately responsible for what they bill.  This may be limited to ACOs.  Do your own due diligence.)
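A bare-bones sketch of what a discharge alert off the ADT feed might look like (my illustration — the message content and the alert callback are invented, and a production system would use a real HL7 parsing library rather than naive pipe-splitting). In HL7 v2, MSH-9 carries the message type (e.g. ADT^A03, where A03 means discharge) and PID-3 carries the patient identifier:

```python
# Hypothetical sketch: watch an HL7 v2 ADT feed and flag discharge (A03)
# events so the patient's PCP can be alerted for the 48-hour call /
# 7-day visit follow-up workflow described above.
def parse_adt(message):
    """Return (event_code, patient_id) from a pipe-delimited HL7 v2 message."""
    segments = {}
    for seg in message.strip().split("\r"):   # HL7 segments end in carriage return
        fields = seg.split("|")
        segments[fields[0]] = fields
    # After splitting on '|', index 8 of MSH is MSH-9 (the '|' itself is MSH-1)
    event = segments["MSH"][8].split("^")[1]  # "ADT^A03" -> "A03"
    patient_id = segments["PID"][3]           # PID-3: patient identifier
    return event, patient_id

def on_message(message, alert_pcp):
    event, pid = parse_adt(message)
    if event == "A03":                        # A03 = patient discharged
        alert_pcp(pid)                        # placeholder for the real notification

msg = ("MSH|^~\\&|HIS|HOSP|HIE|REGION|20140301||ADT^A03|0001|P|2.3\r"
       "PID|1||12345||DOE^JANE")
```

Feeding `msg` through `on_message` would fire the alert for patient 12345. The hard part in practice isn’t the parsing — it’s routing the alert to the right PCP, which requires an attribution list.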

4.  Patient outreach and engagement.  One business point: for the ACO to profit, patients must be retained.  Patient satisfaction may be as important to the business model as the interventions the ACO is performing, particularly as the ACO model shifts costs up front and recovers them on the back end through shared savings.  If you as an ACO invest in a patient, only to lose that patient to a competing ACO, your competitor gets the benefit of those improvements in care while you eat the sunk costs!  To maintain patient satisfaction and engagement, consider behavioral economics (think Cass Sunstein’s Nudges.gov paper), gamification (Jane McGonigal), and A/B testing (Tim Ferriss) marketing techniques.  Basically, we’re applying customer-centric marketing to healthcare, considering not only the total lifetime revenue of the patient but also the total lifetime cost!
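For the A/B testing piece, the machinery is the same as in web marketing. A minimal sketch (my own, with invented numbers): compare follow-up visit rates under two outreach messages with a two-proportion z-test.

```python
# Illustrative A/B test for patient outreach: did reminder message B
# produce a higher follow-up visit rate than message A?
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic (pooled standard error)."""
    pa, pb = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (pb - pa) / se

# Hypothetical trial: generic reminder (A) vs. personalized reminder (B),
# 500 patients each; outcome = kept the follow-up appointment.
z = two_proportion_z(success_a=90, n_a=500, success_b=120, n_b=500)
print(round(z, 2))  # -> 2.33  (|z| > 1.96 is significant at the 5% level)
```

Here the personalized message wins (24% vs. 18% follow-up, z ≈ 2.33), so the practice would roll it out to everyone — exactly the iterate-and-measure loop marketers use.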

It was a very worthwhile discussion and thanks to Wellcentive for hosting it!  

The Measure is the Metric

There is a maxim in management circles to use data-rich methods of management.  Peter Drucker is reputed to have said, “What gets measured gets managed.” Clearly better than managing by the hem of one’s skirt (or seat of one’s pants), data-driven management allows for assessment of measured items.
It is interesting to consider the perturbations of this statement:
-if it can be measured, it can be managed (implying causality)
-if it can’t be measured, it can’t be managed (negative causality)
-if it can’t be measured, it doesn’t matter (reductio ad absurdum)
You can pick for yourself where in the spectrum you lie, and how far from Drucker’s original statement you are.
But there is another issue in measurement that isn’t as well addressed – the influence the measure itself has on what is being measured.  This is what physics calls an observer effect: simply measuring a system perturbs it.  The Heisenberg uncertainty principle is often cited this way (that’s actually NOT what Heisenberg’s principle says, but that’s beyond the scope of this discussion).
So, let’s acknowledge that observation, or measurement, changes the thing being measured.  An observation, or ‘measure,’ of X (insert variable here: productivity, speed, outcome, etc.) is performed.  It is then compared to a standard, or ‘metric.’
For a process or a person, there may or may not be established standards of measurement.  If not, a baseline or initial measurement becomes the metric against which future measurements are compared.  As process improvement or skill improvement happens (hopefully), subsequent measurements should improve in both accuracy and value.
Let’s consider a human measure and its associated metric.  A manager may wish to evaluate his employees by comparing their productivity to an established range of productivity.  The employee is being measured, and is being compared to a metric.
But employees aren’t stupid.  Even if they have not been told they are being measured, when they see the difference between their performance reviews and their peers’ performance reviews, they figure it out.  Those employees whose reviews didn’t sit right with them become more diligent in their work, to achieve a better review next time.  Some employees will even figure out that they are being evaluated, and up their game before the review.


[Figure: Positive feedback loop in a simple system]

So, by the mere act of being measured, we change what is being measured.  The measure is the metric.
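The feedback loop is easy to see in a toy simulation (my own illustration; the productivity numbers and adjustment rule are invented): each review cycle, employees scoring below the peer average work harder, so the distribution being measured shifts because it was measured.

```python
# Toy simulation of the measurement feedback loop: employees below the
# group average increase effort after each review cycle.
def run_reviews(productivity, cycles=5, adjustment=0.5):
    scores = list(productivity)
    for _ in range(cycles):
        avg = sum(scores) / len(scores)          # the metric: peer average
        scores = [s + adjustment * (avg - s) if s < avg else s
                  for s in scores]               # low scorers up their game
    return scores

before = [60, 80, 100]
after = run_reviews(before)                      # laggards converge upward
```

After a few cycles the low performers have climbed toward the mean, the mean itself has risen, and next year’s “metric” is a moving target — which is the whole point: the measure is the metric.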

And it shouldn’t be too hard to figure out that WHAT you measure and WHAT you choose to be the metric are more important than you think.

DeSalvo and Tavenner Keynote HIMSS 2014 – an online attendee’s perspective

Here are some observations from this morning’s HIMSS 2014 keynote session with Karen DeSalvo and Marilyn Tavenner.

Karen DeSalvo from the ONC – clearly she is an accomplished and energetic speaker.  She gave a great ‘rally the troops’ speech about HIT which must have resonated in the hearts of the HIT and HIMSS crowd.  She clearly believes in HIT and its promise, and mentioned interoperability as a chief goal of the ONC: “Patients should not be walled into data because the vendor doesn’t want to share.”  She is clearly on board with administration speaking points regarding the current unsustainability of health care costs: “It will be hard.  It will be fun!  It will be rewarding.”  She emphasized her data-driven orientation: “We need to start providing data to inform new models of care.”

Marilyn Tavenner from CMS – after a policy-speak discussion of current CMS internal goals and metrics, as well as their star system, ears perked up at the tailored remarks toward the end of her speech:

On ICD-10 implementation: “Let’s face it guys – we delayed it more than once.  It’s time to move on.  There will be no delay again.”
On Meaningful Use Stage 2: “We understand, and hardship exemptions will be granted to providers/vendors, but we expect all providers to be MU2 compliant by 2015.”
On data : “Data is the lifeblood of our healthcare system.”

Questions asked:

To Tavenner:
Q: Regarding the challenges with Stage 2 implementation, what are you able to do, and what about deadlines?
A: Examples given: a vendor not ready with Stage 2 technology could be eligible for an exemption, as could a physician unable to meet the hard percentages.

Q: Her experiences with healthcare.gov – “lessons learned”?
A: A system integrator was the missing key for a multiply sourced project, and they did not have one soon enough.

Q: Non-eligible providers?
A: Excluded on a statutory basis, so there was little she could do except sympathize.

To DeSalvo:
Q: Can HIT help end hunger?
A: HIT can help with social service integration.

Q: Patient identification via HIE/HISP?
A: Not as difficult as it seems – smart folks are working on patient matching algorithms, and demand for this will drive development.

The final comments on the big ideas for the next five years: less fee-for-service, more care coordination, payment tied to quality, and more data produced and shared with the public at a reasonable cost.  Finally, DeSalvo described a virtuous feedback loop where available data informs technology, which informs care and creates more data, ultimately leading to disruptive change.  (Intriguing!)

That’s it for me from HIMSS 2014.  I hope that you found these posts useful and will consider coming back to read some of my other views!