{"id":12982,"date":"2016-02-18T14:55:39","date_gmt":"2016-02-18T19:55:39","guid":{"rendered":"http:\/\/n2value.com\/blog\/?p=12982"},"modified":"2016-02-18T20:45:43","modified_gmt":"2016-02-19T01:45:43","slug":"memory-requirements-for-convolutional-neural-network-analysis-of-brain-mri","status":"publish","type":"post","link":"https:\/\/n2value.com\/blog\/memory-requirements-for-convolutional-neural-network-analysis-of-brain-mri\/","title":{"rendered":"Memory requirements for Convolutional Neural Network analysis of brain MRI."},"content":{"rendered":"<p><a href=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/02\/AFIP-00405589-Glioblastoma-Radiology.jpg\" rel=\"attachment wp-att-12987\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-12987\" src=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/02\/AFIP-00405589-Glioblastoma-Radiology.jpg\" alt=\"Believed to be in the public domain\" width=\"343\" height=\"435\" srcset=\"https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/02\/AFIP-00405589-Glioblastoma-Radiology.jpg 404w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/02\/AFIP-00405589-Glioblastoma-Radiology-237x300.jpg 237w\" sizes=\"auto, (max-width: 343px) 100vw, 343px\" \/><\/a>I\u2019m auditing the wonderful <a href=\"http:\/\/cs231n.stanford.edu\/index.html\">Stanford CS 231n class on Convolutional Neural Networks in Computer Vision<\/a>.<\/p>\n<p>A discussion the other day was on the amount of memory required to analyze one image as it goes through the Convolutional Neural Network (CNN). This was interesting &#8211; how practical is it for application to radiology imaging?\u00a0 (To review some related concepts see my earlier post: <a href=\"http:\/\/n2value.com\/blog\/what-big-data-visualization-analytics-can-learn-from-radiology\/\">What Big Data\u00a0 Visualization Analytics can learn from Radiology<\/a>)<\/p>\n<p>Take your standard non-contrast MRI of the brain. 
There are 5 sequences (T1, T2, FLAIR, DWI, ADC). For the purposes of this analysis, all are axial. Assume a 320&#215;320 viewing matrix for each slice. Therefore, one image will be a 320&#215;320&#215;5 matrix, which flattens into a 512,000-byte vector (one byte per greyscale value). Applying this to VGGNet configuration D (1) yields the following:<\/p>\n<p><a href=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/02\/VGGNet.png\" rel=\"attachment wp-att-12985\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-12985\" src=\"http:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/02\/VGGNet.png\" alt=\"VGGNet\" width=\"408\" height=\"392\" srcset=\"https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/02\/VGGNet.png 408w, https:\/\/n2value.com\/blog\/wp-content\/uploads\/2016\/02\/VGGNet-300x288.png 300w\" sizes=\"auto, (max-width: 408px) 100vw, 408px\" \/><\/a><\/p>\n<p>Each image has 320 pixels in both x and y, with each pixel holding a greyscale value, across the 5 different sequences. Each axial slice takes up 512KB, the first convolutional layers hold most of the memory at 6.4MB each, and summing all layers uses 30.5MB. Remember that you have to double the memory for the forward\/backward pass through the network, giving you 61MB per image. Finally, the images do not exist in a void, but are part of about 15 axial slices of the head, giving you a memory requirement of 916.5MB, or about a gigabyte.<\/p>\n<p>Of course, that\u2019s just for feeding an image through the algorithm.<\/p>\n<p>This is simplistic because:<\/p>\n<ol>\n<li>VGG is not going to get you to nearly enough accuracy for diagnosis! (50% accurate, I&#8217;m guessing)<\/li>\n<li>The MRI data is only put into slices for people to interpret \u2013 the data itself exists in k-space. 
What that would do to machine learning interpretation is another discussion.<\/li>\n<li>We haven&#8217;t even discussed the speed of training the network.<\/li>\n<li>This is for older MRI protocols.\u00a0 Newer MRI scanners have larger matrices (512&#215;512) and thinner slices (3mm) available, which will increase the necessary memory to approximately 4GB.<\/li>\n<\/ol>\n<p>Nevertheless, it is interesting to note that the amount of memory required to train a neural network on brain MRIs is within reach of a home network enthusiast.<\/p>\n<p>(1) Karen Simonyan &amp; Andrew Zisserman, <a href=\"http:\/\/arxiv.org\/pdf\/1409.1556.pdf\" target=\"_blank\"><span style=\"text-decoration: underline;\">Very Deep Convolutional Networks for Large-Scale Image Recognition<\/span><\/a>, ICLR 2015<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I\u2019m auditing the wonderful Stanford CS 231n class on Convolutional Neural Networks in Computer Vision. A discussion the other day was on the amount of memory required to analyze one image as it goes through the Convolutional Neural Network (CNN). 
This was interesting &#8211; how practical is it for application to radiology imaging?\u00a0 (To review [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"New N2Value post: Memory requirements for Convolutional Neural Network analysis of brain MRI.","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false},"version":2}},"categories":[22,2],"tags":[],"class_list":["post-12982","post","type-post","status-publish","format-standard","hentry","category-computer-vision","category-healthcare"],"jetpack_publicize_connections":[],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4mtfP-3no","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts\/12982","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/comments?post=12982"}],"version-history":[{"count":9,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts\/12982\/revisions"}],"predecessor-version":[{"id":1299
6,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/posts\/12982\/revisions\/12996"}],"wp:attachment":[{"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/media?parent=12982"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/categories?post=12982"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/n2value.com\/blog\/wp-json\/wp\/v2\/tags?post=12982"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
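The post's layer-by-layer arithmetic can be sketched in a few lines of Python. This is a minimal sketch, assuming VGG configuration D's conv/pool shapes (my reconstruction from the paper's Table 1, not code from the post), a 320&#215;320&#215;5 input, and one byte per greyscale value; the totals land near the post's ~30.5MB per-slice and ~916.5MB per-study figures, with small differences from rounding and from counting the fully connected head.

```python
# Activation-memory arithmetic for a VGG-16-style stack on one MRI slice.
# Assumptions (mine, not the post's): conv layers pad to preserve spatial
# size, each 2x2 max-pool halves it, and every value occupies 1 byte.

def activation_sizes(side=320, depth=5):
    """Return (layer_name, n_values) for each activation map."""
    # VGG configuration D conv widths, with pooling positions marked.
    cfg = [64, 64, "pool", 128, 128, "pool", 256, 256, 256, "pool",
           512, 512, 512, "pool", 512, 512, 512, "pool"]
    sizes = [("input", side * side * depth)]
    for layer in cfg:
        if layer == "pool":
            side //= 2          # pooling halves height and width
        else:
            depth = layer       # conv layer sets the channel count
        sizes.append((str(layer), side * side * depth))
    sizes += [("fc", 4096), ("fc", 4096), ("fc", 1000)]  # classifier head
    return sizes

per_slice = sum(n for _, n in activation_sizes())  # bytes, forward pass
study = 2 * per_slice * 15                         # fwd+bwd, 15 slices
print(f"per slice: {per_slice / 1e6:.1f} MB")      # ~31.3 MB
print(f"per study: {study / 1e9:.2f} GB")          # ~0.94 GB
```

The first conv layers dominate, as the post notes: each 320&#215;320&#215;64 map is ~6.5MB on its own, while everything after the third pool is under 1MB.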