Applying deep learning techniques to medical imaging data sets is a fascinating and fast-moving area. In fact, a recent issue of IEEE's Transactions on Medical Imaging journal features a fantastic guest editorial on deep learning in medical imaging that provides an overview of current approaches, where the field is headed, and what sort of opportunities exist. We pulled out some of our favorite nuggets from this article and have summarized and extended them in Q&A form, so they're more easily digestible.
Why is deep learning valuable in the field of medical imaging?
Most interpretations of medical images are performed by physicians; however, human interpretation is limited by subjectivity, large variation across interpreters, and fatigue.
What are some challenges in applying Convolutional Neural Networks to medical imaging?
CNNs require a large amount of labeled data. Large medical data sets are not readily available because many data sets are proprietary and/or are difficult to obtain due to privacy concerns.
Most often, the data sets are not comprehensively annotated, owing to the cost and scarcity of expert annotation in the medical domain.
Moreover, rare diseases, by virtue of being rare, are underrepresented in the data sets. If not accounted for properly, the resulting class imbalance (the disease label underrepresented while the healthy label is overrepresented) biases a model toward predicting the healthy label.
Furthermore, in situations where the features are highly correlated and the normal class is overrepresented, many of the training samples are redundant and uninformative.
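One standard remedy for this imbalance (a general technique, not one prescribed by the editorial) is to reweight the training loss inversely to class frequency, so the rare disease label is not drowned out by healthy examples. A minimal sketch:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so that rare
    (e.g. disease) examples contribute more to the training loss."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    return {cls: total / (n_classes * n) for cls, n in counts.items()}

# Toy label set: 90 healthy (0) examples, 10 diseased (1) examples.
labels = [0] * 90 + [1] * 10
weights = inverse_frequency_weights(labels)  # rare class gets 9x the weight
```

In practice these weights would be handed to whatever framework's loss function is in use (e.g. a weighted cross-entropy), so that misclassifying a rare diseased example costs the model more than misclassifying a common healthy one.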
In many applications, making an informed diagnosis requires more than just the medical image (e.g. lab values, demographics, prior medical history). Gaining access to and linking these data with the images presents yet another obstacle.
Non-standardized evaluation metrics, the use of disparate data sets, and differences in the way that learning tasks are framed each make it difficult to track and compare advancements in the field.
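By way of illustration, two metrics that recur across medical imaging evaluations are sensitivity/specificity for classification and the Dice coefficient for segmentation overlap. These are standard definitions, not something specific to the article:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
    Labels: 1 = disease present, 0 = healthy."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def dice(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks (flat lists)."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    return 2 * intersection / (sum(mask_a) + sum(mask_b))
```

Agreeing on a small set of metrics like these, computed on a shared held-out set, is exactly what the challenge platforms below try to standardize.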
How are the challenges being addressed?
One way is via transfer learning, which has been used to overcome the lack of large labeled data sets in medical imaging. In transfer learning, a separate CNN is first trained on a different task using a different data set. The features learned from that task are then reused to train a CNN for the medical imaging task of interest. By recycling features in this way, fewer examples are needed to achieve good performance. The main caveat is that the recycled features must be generally useful across the two separate tasks.
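The idea can be sketched in a few lines. Note the hedges: the "pretrained" extractor below is a fixed random projection standing in for learned convolutional features, and the data is synthetic; a real pipeline would instead load, say, ImageNet-pretrained convolutional layers and train only a new classifier head on the small medical data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network's frozen feature extractor.
# In a real pipeline these weights would come from a CNN trained on a
# large, unrelated data set (e.g. natural images), not random values.
W_frozen = rng.normal(size=(64, 16)) / 8.0  # frozen, never updated

def extract_features(X):
    return np.maximum(X @ W_frozen, 0.0)  # frozen linear layer + ReLU

# Small labeled "medical" data set (synthetic): 40 examples, 64 features.
X = rng.normal(size=(40, 64))
y = (X[:, 0] > 0).astype(float)

# Only the new classifier head is trained; the extractor stays fixed.
feats = extract_features(X)
w = np.zeros(16)  # logistic-regression head
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    w -= 0.1 * feats.T @ (p - y) / len(y)  # gradient step on log loss
```

Because only the small head is optimized while the feature extractor is held fixed, far fewer labeled examples are needed than would be required to train the whole network from scratch, which is precisely the appeal for data-poor medical tasks.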
Can I try this? Where can I find publicly available data?
Publicly available data sets:
Visual Concept Extraction Challenge in Radiology (VISCERAL). Manually annotated radiological data of several anatomical structures (e.g. kidney, lung, bladder, etc.) from several different imaging modalities (e.g. CT and MR). They also provide a cloud computing instance that anyone can use to develop and evaluate models against benchmarks.
The Cancer Imaging Archive. Cancer imaging data sets across various cancer types (e.g. carcinoma, lung cancer, myeloma) and various imaging modalities.
Grand Challenges in Biomedical Image Analysis. A collection of biomedical imaging challenges intended to facilitate better comparisons between new and existing solutions by standardizing evaluation criteria. You can create your own challenge as well. As of this writing, there are 92 challenges that provide downloadable data sets.
The Lung Image Database Consortium image collection (LIDC-IDRI). A collection of diagnostic and lung cancer screening thoracic CT scans with annotated lesions.
Kaggle diabetic retinopathy. High-resolution retinal images that are annotated on a 0–4 severity scale by clinicians, for the detection of diabetic retinopathy. This data set is part of a completed Kaggle competition, which is generally a great source for publicly available data sets.
International Symposium on Biomedical Imaging 2015. Eight Grand Challenges presented at ISBI.
Multiple sclerosis lesion segmentation challenge 2008. A collection of brain MRI scans to detect MS lesions.
Multimodal Brain Tumor Segmentation Challenge (BRATS). Large data set of brain tumor magnetic resonance scans. They’ve been extending this data set and challenge each year since 2012.
Coding4Cancer. A new initiative by the Foundation for the National Institutes of Health and Sage Bionetworks to host a series of challenges to improve cancer screening. The first is for digital mammography readings. The second is for lung cancer detection. The challenges are not yet launched.
Why are large publicly available medical image data sets challenging to construct?
As we know, deep learning benefits from massive amounts of training data. However, such publicly available medical data sets are hard to construct because, as the article states:
It is difficult to obtain funding for the construction of data sets.
Scarce and expensive medical expertise is needed for high quality annotation of medical imaging data.
Privacy issues make it more difficult to share medical data than natural images.
The breadth of applications in medical imaging requires that many different data sets need to be collected.
Data science challenges (like the aforementioned Grand Challenges) — which "provide a precise definition of a task to be solved and define one or more evaluation metrics that provide a fair and standardized comparison between proposed algorithms" — help to crowdsource massive annotated data sets, while also moving the field forward through standardization.
Annotations, though, will not always be obtainable or of meaningful quality, especially in specialized, rare, or expert use cases. This points to one of many interesting future directions for the field: to leverage truly big data for which hand annotations are unavailable or intractable to obtain, the field will likely need to move toward semi-supervised and unsupervised learning.
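One simple semi-supervised technique is self-training (pseudo-labeling): fit a model on the small labeled set, label the unlabeled pool with its own predictions, and refit on the combined data. The sketch below uses a toy nearest-centroid "model" on 1-D scores purely for illustration; it is not the editorial's method, and real systems would add a confidence threshold before adopting pseudo-labels.

```python
def centroid_fit(X, y):
    """Nearest-centroid 'model' on 1-D features: the mean of each class."""
    c0 = sum(x for x, t in zip(X, y) if t == 0) / y.count(0)
    c1 = sum(x for x, t in zip(X, y) if t == 1) / y.count(1)
    return c0, c1

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Self-training: fit on labeled data, pseudo-label the unlabeled
    pool with the current model, refit on the combined set, repeat."""
    X, y = list(X_lab), list(y_lab)
    for _ in range(rounds):
        c0, c1 = centroid_fit(X, y)
        X, y = list(X_lab), list(y_lab)
        for x in X_unlab:
            pseudo = 0 if abs(x - c0) < abs(x - c1) else 1
            X.append(x)
            y.append(pseudo)
    return centroid_fit(X, y)

# Two labeled examples plus six unlabeled ones from two clusters.
c_healthy, c_disease = self_train([0.0, 10.0], [0, 1],
                                  [0.5, 1.0, 1.5, 8.5, 9.0, 9.5])
```

Even with only one labeled example per class, the unlabeled pool pulls the centroids toward the true cluster means, which is the core appeal of semi-supervised methods when expert annotation is scarce.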
Where do we go from here? Open questions and future opportunities:
How important and meaningful will a transition from 2D to 3D analysis be in terms of performance gains?
“The majority of works are in fact using supervised learning.” How meaningful will advancements in unsupervised and semi-supervised approaches be in terms of performance gains?
How much data will be required to solve certain types of problems? What sort of things can the research community do to make bigger, higher quality data sets, evaluation criteria, and approaches accessible to other people in the field?