{"id":13524,"date":"2018-01-04T12:10:18","date_gmt":"2018-01-04T17:10:18","guid":{"rendered":"http:\/\/n2value.com\/blog\/?p=13524"},"modified":"2018-01-04T16:14:08","modified_gmt":"2018-01-04T21:14:08","slug":"ooda-loop-revisited-medical-errors-heuristics-and-ai","status":"publish","type":"post","link":"https:\/\/n2value.com\/blog\/ooda-loop-revisited-medical-errors-heuristics-and-ai\/","title":{"rendered":"OODA loop revisited &#8211; medical errors, heuristics, and AI."},"content":{"rendered":"<p>My <a href=\"http:\/\/n2value.com\/blog\/ooda-loops-a-definition-and-thoughts-on-application-to-healthcare\/\">OODA loop post<\/a> is actually one of the most popular on this site.\u00a0 I blame <a href=\"https:\/\/www.ribbonfarm.com\/about\/\" target=\"_blank\" rel=\"noopener\">Venkatesh Rao of Ribbonfarm<\/a> and his <a href=\"http:\/\/www.tempobook.com\/\" target=\"_blank\" rel=\"noopener\">Tempo book<\/a> and John Robb&#8217;s <a href=\"http:\/\/www.librarything.com\/work\/2146702\/reviews\/37298226\" target=\"_blank\" rel=\"noopener\">Brave New War<\/a> for introducing me to <a href=\"https:\/\/en.wikipedia.org\/wiki\/John_Boyd_%28military_strategist%29\" target=\"_blank\" rel=\"noopener\">Boyd&#8217;s methodology<\/a>.\u00a0 Venkatesh focuses on philosophy and management consulting, and Robb focuses on <a href=\"http:\/\/www.globalguerrillas.typepad.com\/\">COIN and human social networks<\/a>. Both are removed from healthcare, but Boyd&#8217;s principles apply to medicine as well: our enemy is disease, and perhaps even ourselves.<\/p>\n<p>Consider aerial dogfighting.\u00a0 The human OODA loop is Observe &#8211; Orient &#8211; Decide &#8211; Act. 
\u00a0 You want to &#8220;get inside your opponent&#8217;s OODA loop&#8221; and out-think them, knowing their actions before they do, assuring victory.\u00a0 If you know your opponent&#8217;s next move, you can anticipate where to shoot and end the conflict decisively.\u00a0 Quoting Sun Tzu in The Art of War:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\" size-full wp-image-13614 aligncenter\" src=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/sun-tzu.jpg\" alt=\"Sun Tzu Art of War OODA loops and AI\" width=\"485\" height=\"322\" srcset=\"https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/sun-tzu.jpg 485w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/sun-tzu-300x199.jpg 300w\" sizes=\"auto, (max-width: 485px) 100vw, 485px\" \/><\/p>\n<blockquote><p>If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.<\/p><\/blockquote>\n<p>Focused, directed, lengthy and perhaps exhausting training for a fighter pilot enables them to &#8220;know their enemy&#8221; and anticipate action in a high-pressure, high-stakes aerial battle.\u00a0 The penalty for failure is severe &#8211; loss of the pilot&#8217;s life. 
\u00a0 Physicians prepare similarly &#8211; a lengthy and arduous training process in often adverse circumstances.\u00a0 The penalty for failure is also severe &#8211; a patient&#8217;s death.\u00a0 Given adequate intelligence and innate skill, successful pilots and physicians internalize their decision trees &#8211; transforming the OODA loop into a simpler OA loop &#8211; Observe and Act.\u00a0 Focused practice allows the Orient and Decide portions of the loop to become automatic and intuitive, almost Zen-like.\u00a0 This is what some people refer to as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Flow_(psychology)\">&#8216;Flow&#8217;<\/a> &#8211; an effortlessly hyperproductive state where total focus and immersion in a task suspend the perception of the passage of time.<\/p>\n<blockquote><p>For a radiologist, &#8216;flow&#8217; is when you sit down at your PACS at 8am, continuously reading cases, making one great diagnosis after another, smiling as the words appear on Powerscribe. You&#8217;re killing the cases and you know it.\u00a0 Then your stomach rumbles &#8211; probably time for lunch &#8211; you look up at the clock and it is 4pm.\u00a0 That&#8217;s flow.<\/p><\/blockquote>\n<p>Flow is one of the reasons why experienced professionals are highly productive &#8211; and a <a href=\"https:\/\/hbr.org\/2014\/04\/help-your-employees-find-flow\">smart manager will try to keep a star employee &#8216;in the zone&#8217; as much as possible, removing extraneous interruptions, unnecessary low-value tasks, and distractions<\/a>.<\/p>\n<p><a href=\"https:\/\/www.edge.org\/conversation\/daniel_kahneman-on-kahneman\" target=\"_blank\" rel=\"noopener\">Kahneman<\/a> defines this as fast Type 1 thinking, intuitive and heuristic: quick, easy, and with sufficient experience\/training, usually accurate.\u00a0 But Type 1 thinking can fail: a complex process masquerades as a simple one, additional important data is undiscovered or ignored, or a novel agent is 
introduced.\u00a0 In these circumstances, Type 2 critical thinking is needed: slow, methodical, deductive, and logical.\u00a0 But humans err, substituting heuristic thinking for analytical thinking, and we get it wrong.<\/p>\n<p>For the enemy fighter pilot, it&#8217;s the scene in Top Gun where Tom Cruise hits the air brakes to drop behind an attacking MiG to deliver a kill shot with his last missile. For a physician, it is an uncommon or rare disease presenting like a common one, resulting in a missed diagnosis and a lawsuit.<\/p>\n<p>For those experimenting with deep learning and artificial intelligence, the time to <a href=\"https:\/\/blogs.nvidia.com\/blog\/2016\/08\/22\/difference-deep-learning-training-inference-ai\/\" target=\"_blank\" rel=\"noopener\">train<\/a> or teach the network far exceeds the time needed to process an unknown through the trained network.\u00a0 Training can take hours to days; evaluation takes seconds.<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Weak_AI\" target=\"_blank\" rel=\"noopener\">Narrow AIs<\/a> like Convolutional Neural Networks take advantage of their speed to go through the OODA loop quickly, <a href=\"https:\/\/blogs.nvidia.com\/blog\/2016\/08\/22\/difference-deep-learning-training-inference-ai\/\" target=\"_blank\" rel=\"noopener\">in a process called inference<\/a>.\u00a0 I suggest a deep learning algorithm functions as an OA loop on the specific type of data it has been trained on.\u00a0 Inference is quick.<\/p>\n<p>I believe that OODA loops are Kahneman&#8217;s Type 2 slow thinking.\u00a0 OA loops are Kahneman&#8217;s Type 1 fast thinking.\u00a0 Narrow AI inference is a Type 1 OA loop. 
\u00a0 An AI version of Type 2 slow thinking doesn&#8217;t yet exist.*<\/p>\n<p>And like humans, Narrow AI can be fooled.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13608\" src=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/dogsvsmuffins.jpg\" alt=\"Can your classifier tell the difference between a chihuahua and a blueberry muffin?\" width=\"900\" height=\"505\" srcset=\"https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/dogsvsmuffins.jpg 900w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/dogsvsmuffins-300x168.jpg 300w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/dogsvsmuffins-768x431.jpg 768w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p>If you haven&#8217;t seen the Chihuahua vs. blueberry muffin clickbait picture, consider yourself sheltered. <a href=\"https:\/\/blog.cloudsight.ai\/chihuahua-or-muffin-1bdf02ec1680\" target=\"_blank\" rel=\"noopener\">Claims that narrow AI can&#8217;t tell the difference are largely, but not entirely, bogus<\/a>.\u00a0 While Narrow AI is generally faster than people, and potentially more accurate, it can still make errors. But so can people. In general, classification errors can be reduced by creating a more powerful, or &#8216;deeper&#8217;, network. I think collectively we have yet to decide how much error to tolerate in our AIs. If we are willing to tolerate an error rate of 5% in humans, are we willing to tolerate the same in our AIs, or do we expect 97.5% accuracy?\u00a0 Or 99%? 
Or 99.9%?<\/p>\n<p>The single-pixel attack is a bit more interesting.\u00a0 While similar images such as the ones above probably won&#8217;t pass careful human scrutiny, <a href=\"https:\/\/arxiv.org\/pdf\/1412.1897.pdf\" target=\"_blank\" rel=\"noopener\">frankly adversarial images unrecognizable to humans can be misinterpreted by a classifier<\/a>:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\" size-full wp-image-13609 aligncenter\" src=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/Adversarial.png\" alt=\"Convolutional Neural Networks can be fooled by adversarial images\" width=\"324\" height=\"457\" srcset=\"https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/Adversarial.png 324w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/Adversarial-213x300.png 213w\" sizes=\"auto, (max-width: 324px) 100vw, 324px\" \/><\/p>\n<p>Selecting and perturbing a single pixel is much more subtle, and probably could escape human scrutiny.\u00a0 Jiawei Su <em>et al.<\/em> address this in their <a href=\"https:\/\/arxiv.org\/pdf\/1710.08864.pdf\" target=\"_blank\" rel=\"noopener\">&#8220;One Pixel Attack&#8221; paper<\/a>, where the modification of one pixel in an image had a 66% to 73% chance of changing the classification of that image.\u00a0 By changing more than one pixel, success rates rose further.\u00a0 The paper used older, less deep Narrow AIs like VGG-16 and Network-in-Network.\u00a0 Newer models such as DenseNets and ResNets might be harder to fool.\u00a0 This type of &#8220;attack&#8221; represents a real-world situation where the OA loop fails to account for unexpected new (or perturbed) information, and is incorrect.<\/p>\n<p>Contemporaneous update: Google has developed <a href=\"https:\/\/arxiv.org\/pdf\/1712.09665.pdf\" target=\"_blank\" rel=\"noopener\">images that use an adversarial attack to uniformly defeat classification attempts<\/a> by standard CNN models.\u00a0 By making 
&#8220;stickers&#8221; out of these processed images, the presence of such an image, even at less than 20% of the image size, is sufficient to change the classification to what the ensemble dictates, rather than to the primary object in the image.\u00a0 They look like this:<\/p>\n<figure id=\"attachment_13618\" aria-describedby=\"caption-attachment-13618\" style=\"width: 344px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone  wp-image-13618\" src=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/adversarial.jpg\" alt=\"adversarial images capable of overriding CNN classifier\" width=\"344\" height=\"308\" srcset=\"https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/adversarial.jpg 688w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/adversarial-300x269.jpg 300w\" sizes=\"auto, (max-width: 344px) 100vw, 344px\" \/><figcaption id=\"caption-attachment-13618\" class=\"wp-caption-text\">https:\/\/arxiv.org\/pdf\/1712.09665.pdf<\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<p>I am not aware of defined solutions to these problems &#8211; the obvious images that fool the classifier can probably be dealt with by ensembling other, more traditional forms of computer vision image analysis such as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Histogram_of_oriented_gradients\" target=\"_blank\" rel=\"noopener\">HOG<\/a> or <a href=\"http:\/\/citeseer.ist.psu.edu\/viewdoc\/summary?doi=10.1.1.9.6021\" target=\"_blank\" rel=\"noopener\">SVMs<\/a>.\u00a0 For a one-pixel attack, perhaps widening the network and increasing the number of training samples by either data augmentation or adversarially generated features might make the network more robust.\u00a0 This probably falls into the &#8220;too soon to tell&#8221; category.<\/p>\n<p>There has been a great deal of interest and emphasis placed lately on understanding black-box models.\u00a0 I&#8217;ve written about some of these techniques in <a 
href=\"http:\/\/n2value.com\/blog\/chexnet-a-brief-evaluation\/\" target=\"_blank\" rel=\"noopener\">other posts<\/a>.\u00a0 Some <a href=\"https:\/\/lukeoakdenrayner.wordpress.com\/2017\/12\/27\/2017-in-review-progress-problems-and-predictions\/\" target=\"_blank\" rel=\"noopener\">investigators feel this is less relevant<\/a>.\u00a0 However, by understanding how the models fail, we can strengthen them.\u00a0 I&#8217;ve <a href=\"http:\/\/n2value.com\/blog\/black-swans-antifragility-six-sigma-and-healthcare-operations-what-medicine-can-learn-from-wall-st-part-8\/\" target=\"_blank\" rel=\"noopener\">also written about this<\/a>, but from a management standpoint.\u00a0 There is a trade-off between accuracy at speed, robustness, and serendipity.\u00a0 I think the same principle applies to our AIs as well.\u00a0 By understanding the frailty of speedy accuracy vs. redundancies that come at the expense of cost, speed, and sometimes accuracy, we can build systems and processes that not only work but are less likely to fail in unexpected &amp; spectacular ways.<\/p>\n<p>Let&#8217;s acknowledge the likelihood of failure of narrow AI where it is most likely to fail, and design our healthcare systems and processes around that, as we begin to incorporate AI into our practice and management.\u00a0 If we do that, we will truly get inside the OODA loop of our opponent &#8211; disease &#8211; and eradicate it before it even has a chance.\u00a0 What a world to live in where the only thing disease can say is, &#8220;I never saw it coming.&#8221;<\/p>\n<p>&nbsp;<\/p>\n<p>*I believe OODA loops have mathematical analogues. The OODA loop is inherently Bayesian &#8211; next actions iteratively decided by prior probabilities. Iterative deep learning constructs include LSTMs and RNNs (Recurrent Neural Networks) and, of course, Generative Adversarial Networks (GANs). 
There have been attempts to use Bayesian learning not only for hyperparameter optimization but also in combination with RL (Reinforcement Learning) &amp; GANs.\u00a0 Only time will tell if this brings us closer to the vaunted AGI (Artificial General Intelligence)**.<\/p>\n<p>**While I don&#8217;t think we will soon solve the AGI question, I wouldn&#8217;t be surprised if complex combinations of these methods, along with ones not yet invented, bring us close to top human expert performance in a Narrow AI. But I also suspect that once we start coding creativity and resilience into these algorithms, we will take a hit in accuracy as we approach less narrow forms of AI.\u00a0 We will ultimately solve for the best performance of these systems, and while it may even eventually exceed human ability, there will likely always be some error present.\u00a0 And that area of error is where future medicine will advance.<\/p>\n<p>\u00a9 2018<\/p>\n","protected":false},"excerpt":{"rendered":"<p>My OODA loop post is actually one of the most popular on this site. \u00a0 I\u00a0 blame Venkatesh Rao of Ribbonfarm and his Tempo book and John Robb&#8217;s Brave New War for introducing me to Boyd&#8217;s methodology. \u00a0 Venkatesh focuses on philosophy and management consulting, and Robb focuses on COIN and human social networks. 
Both [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":13612,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"What fighter pilots and Narrow AI's share in common - OODA loops for healthcare.  New on N2value.com","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false},"version":2}},"categories":[29,22,2,7,24],"tags":[28],"class_list":["post-13524","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-computer-vision","category-healthcare","category-leadership","category-radiology","tag-ai"],"jetpack_publicize_connections":[],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/n2value.com\/blog\/wp-content\/uploads\/2018\/01\/fighter-jet-f-15-strike-eagle-fighter-aircraft-jet-fighter-76964-1.jpg","jetpack_shortlink":"https:\/\/wp.me\/p4mtfP-3w8","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts\/13524","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/comments?post=13524"}],"vers
ion-history":[{"count":24,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts\/13524\/revisions"}],"predecessor-version":[{"id":13619,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts\/13524\/revisions\/13619"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/media\/13612"}],"wp:attachment":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/media?parent=13524"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/categories?post=13524"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/tags?post=13524"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}