{"id":13087,"date":"2016-09-16T14:45:59","date_gmt":"2016-09-16T18:45:59","guid":{"rendered":"http:\/\/n2value.com\/blog\/?p=13087"},"modified":"2016-10-21T20:16:59","modified_gmt":"2016-10-22T00:16:59","slug":"machine-intelligence-in-medical-imaging-conference-report","status":"publish","type":"post","link":"https:\/\/n2value.com\/blog\/machine-intelligence-in-medical-imaging-conference-report\/","title":{"rendered":"Machine Intelligence in Medical Imaging Conference &#8211; Report"},"content":{"rendered":"<p><a href=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/blue.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-13094\" src=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/blue-300x300.jpg\" alt=\"blue\" width=\"300\" height=\"300\" srcset=\"https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/blue-300x300.jpg 300w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/blue-150x150.jpg 150w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/blue-768x768.jpg 768w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/blue-1024x1024.jpg 1024w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/blue.jpg 1280w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>I heard about the <a href=\"http:\/\/siim.org\/\">Society of Imaging Informatics in Medicine&#8217;s (SIIM)<\/a> Scientific <a href=\"http:\/\/siim.org\/page\/2016CMIMI\">Conference on Machine Intelligence in Medical Imaging (C-MIMI)<\/a> on Twitter.\u00a0 It was priced attractively and easy to get to, I&#8217;m interested in machine learning, and it was the first radiology conference I&#8217;d seen on this subject, so I went.\u00a0 Since it was organized on short notice, I was expecting a smaller conference.<\/p>\n<p><a href=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/cmimipacked.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-13096\" 
src=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/cmimipacked-300x225.jpg\" alt=\"cmimipacked\" width=\"300\" height=\"225\" srcset=\"https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/cmimipacked-300x225.jpg 300w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/cmimipacked-768x576.jpg 768w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/cmimipacked-1024x768.jpg 1024w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/09\/cmimipacked.jpg 2016w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>I almost didn&#8217;t get a seat.\u00a0 It was packed.<\/p>\n<p>The conference had real nuts-and-bolts presentations &amp; discussions on healthcare imaging machine learning (<strong>ML<\/strong>).\u00a0 Typically, these were Convolutional Neural Networks (<strong>CNN<\/strong>s\/convnets), but a few Random Forests (<strong>RF<\/strong>) and Support Vector Machines (<strong>SVM<\/strong>) sneaked in, particularly in hybrid models alongside a CNN (cf.\u00a0 Microsoft).\u00a0 The following comments assume some facility in understanding and working with convnets.<\/p>\n<p>Some consistent threads throughout the conference:<\/p>\n<ul>\n<li>Most CNNs were pre-trained on <a href=\"http:\/\/www.image-net.org\/\">ImageNet<\/a> with the final fully connected (FC) layer removed, then re-trained on radiology data with a new FC classifier layer placed at the end.<\/li>\n<li>Most CNNs used ImageNet&#8217;s standard three-channel RGB input despite radiology images being greyscale.\u00a0 The significance and importance of this is uncertain.<\/li>\n<li>Limiting input matrices to grids smaller than the image size is inherited from the ImageNet competitions (and legacy computational power).\u00a0 The decreased resolution is a limiting factor in medical imaging applications, potentially worked around by multi-scale CNNs.<\/li>\n<li>There is no central data repository providing a good &#8220;Ground Truth&#8221; for developing improved machine 
imaging models.<\/li>\n<li>Data augmentation methods are commonly used due to the low number of available cases.<\/li>\n<\/ul>\n<p><a href=\"http:\/\/keithdreyer.com\/\">Keith Dreyer, DO, PhD<\/a> gave an excellent lecture about the trajectory of machine imaging, arguing it will be an incremental process, with AI growth narrower in scope than projected and chiefly limited by applications.\u00a0 At this time, CNN creation and investigation is principally an <strong>artisanal<\/strong> product with limited scalability.\u00a0 A recurring theme was &#8220;What is ground truth?&#8221;, which in different instances means different things (pathology-proven, followed through time, pathognomonic imaging appearance).<\/p>\n<p>There was an excellent educational session from the FDA&#8217;s Berkman Sahiner.\u00a0 The difference between certifying a Class II or Class III device may keep radiologists working longer than expected!\u00a0 A Class II device, like CAD, identifies a potential abnormality but does not make a treatment recommendation, and therefore only requires a 510(k) application.\u00a0 A Class III device, such as an automated interpretation program producing diagnoses and treatment recommendations, will require a more extensive application, including clinical trials, plus a new validation for any material change.\u00a0 One important insight (there were many) was that the FDA requires training and test data to be kept separate. 
\u00a0 I believe this means that simple cross-validation is neither acceptable nor sufficient for FDA approval or certification.\u00a0 Adaptive systems may be a particularly challenging area for regulation; as with the ONC, significant changes to the algorithm&#8217;s software will require a new certification\/approval process.<\/p>\n<p>Industry papers were presented by HK Lau of <a href=\"https:\/\/twitter.com\/ArterysInc\">Arterys<\/a>, Xiang Zhou of Siemens, Xia Li of GE, and <a href=\"https:\/\/twitter.com\/E_Elnekave_MD\">Eldad Elnekave <\/a>of <a href=\"https:\/\/twitter.com\/ZebraMedVision\">Zebra Medical<\/a>.\u00a0 The Zebra Medical presentation was impressive, citing their use of the Google Inception V3 model and a false-color contrast-limited adaptive histogram equalization (CLAHE) algorithm, which not only provides high image contrast with low noise but also gets around the three-channel RGB issue.\u00a0 The statistics given for their CAD program were impressive: 94% accuracy, compared to 89% for a radiologist.<\/p>\n<p>Scientific papers were presented by Matthew Chen, Stanford; Synho Do, Harvard; Curtis Langlotz, Stanford; David Golan, Stanford; Paras Lakhani, Thomas Jefferson; Panagiotis Korfiatis, Mayo Clinic; Zeynettin Akkus, Mayo Clinic; Etka Bullar, U Saskatchewan; Mahmudur Rahman, Morgan State U; Kent Ogden, SUNY Upstate.<\/p>\n<p><a href=\"https:\/\/irp.nih.gov\/pi\/ronald-summers\">Ronald Summers, MD PhD<\/a> from the NIH gave a presentation on the work from his lab, in conjunction with <a href=\"http:\/\/www.holgerroth.com\/\">Holger Roth<\/a>, detailing specific CNN approaches to lymph node detection, anatomic level detection, vertebral body segmentation, pancreas segmentation, and colon polyp screening with CT colonography, which had high false positives.\u00a0 In his experience, deeper models performed better.\u00a0 His lab also converts unstructured radiology reports into structured reports through ML techniques.<\/p>\n<p>Abdul Halabi of 
NVIDIA gave an impressive presentation on the supercomputer-like DGX-1 GPU cluster (five deliveries to date, the fifth of which was to Mass. General; a steal at over $100K) and the new Pascal architecture in the P4 &amp; P40 GPUs.\u00a0 60x the performance on AlexNet versus the original GPU configuration from 2012.\u00a0 Very impressive.<\/p>\n<p>Sayan Pathak\u00a0of Microsoft Research and the InnerEye team gave a good presentation in which he demonstrated that an RF is really just a two-layer DNN, i.e. a sparse two-layer perceptron.\u00a0 Combining this with a CNN (dNDF.NET), it beat the latest version of GoogLeNet in the ImageNet arms race.\u00a0 However, as both structures must be solved for simultaneously, the computation is expensive (long and intense).<\/p>\n<p>Closing points were the following:<\/p>\n<ul>\n<li>Most developers are currently using Python with TensorFlow +\/- Keras, with fewer using Caffe and models from the Model Zoo<\/li>\n<li>DICOM -&gt; <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/42997-dicom-to-nifti-converter--nifti-tool-and-viewer?requestedDomain=www.mathworks.com\">NIFTI<\/a> -&gt; DICOM<\/li>\n<li>De-identification of data is a problem, even more so when considering longitudinal follow-up.<\/li>\n<li>Matching the radiologist&#8217;s report may not be as important as matching actual patient outcomes.<\/li>\n<li>There was a lot of interest in organizing a competition to advance medical imaging, cf. 
Kaggle.<\/li>\n<li>Radiologists aren&#8217;t obsolete just yet.<\/li>\n<\/ul>\n<p>It was a great conference.\u00a0 An unexpected delight.\u00a0 Food for your head!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I heard about the Society of Imaging Informatics in Medicine&#8217;s (SIIM) Scientific Conference on Machine Intelligence in Medical Imaging (C-MIMI) on Twitter.\u00a0 It was priced attractively and easy to get to, I&#8217;m interested in machine learning, and it was the first radiology conference I&#8217;d seen on this subject, so I went.\u00a0 Since it was [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":true,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"New!  
#CMIMI Machine Intelligence in Medical Imaging Conference - Report #computervision #machinelearning","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false},"version":2}},"categories":[22,24],"tags":[],"class_list":["post-13087","post","type-post","status-publish","format-standard","hentry","category-computer-vision","category-radiology"],"jetpack_publicize_connections":[],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4mtfP-3p5","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts\/13087","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/comments?post=13087"}],"version-history":[{"count":10,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts\/13087\/revisions"}],"predecessor-version":[{"id":13100,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts\/13087\/revisions\/13100"}],"wp:attachment":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/media?parent=13087"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/categories?post=13087"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/tags?post=13087"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}