- What is medical imaging?
- Why is deep learning beneficial for medical imaging?
- How does deep learning fit into medical imaging?
- Deep Learning in Medical Imaging Examples
- Deep learning cancer detection
- Tracking tumor development
- Deep learning medical image analysis – MRI image processing acceleration
- Retinal blood vessel segmentation
- Deep learning cardiac assessment
- Abnormality Detection in Musculoskeletal Radiographs
- In Conclusion
Healthcare is an industry that consistently looks toward future technologies. It is one of those sectors eager to embrace emerging tech to see if it can make a difference in its quest to cure diseases and save people's lives.
Given that healthcare processes are data-heavy by design, it was evident that sooner or later machine learning, in all its variety, would find its way into the healthcare industry.
In that context, medical imaging is one of the most prominent examples of effective deep learning implementation in healthcare operations.
In this article, we will:
- Explain the basics of medical imaging;
- Explain how deep learning makes medical imaging more accurate and useful;
- Describe the primary medical imaging use cases for machine learning.
The term “medical imaging” (aka “medical image analysis”) describes a wide variety of techniques and processes that create visualizations of the body’s interior as a whole, as well as of specific organs or tissues.
Overall, medical imaging covers such disciplines as:
- X-ray radiography;
- magnetic resonance imaging (MRI);
- medical photography in general, and many more.
The main goal of medical image analysis is to increase the efficiency of clinical examination and medical intervention - in other words, to look underneath the skin and bone right into the internal organs and discover what’s wrong with them.
- On the one hand, medical imaging explores the anatomy and physical inner-workings.
- On the other hand, medical image analysis helps to identify abnormalities and understand their causes and impact.
With that out of the way, let’s look at how machine learning and deep learning, in particular, can make medical imaging more efficient.
One of the defining features of modern healthcare operations is that they generate immense amounts of data related to a variety of intertwined processes. Among healthcare fields, medical imaging generates the highest volume of data, and that volume keeps growing as the tools get better at capturing it.
Deep inside that data are valuable insights regarding the patient's condition, the development of the disease or anomaly, and the progress of the treatment. Each piece contributes to the whole, and it is critical to put it all together into a big picture as accurately as possible.
However, the scope of the data often surpasses the possibilities of traditional analysis; no doctor can take that much data into consideration unaided.
This is a significant problem given that data interpretation is one of the most crucial factors in fields such as medical image analysis. The other issue with human interpretation is that it is limited and prone to errors caused by various factors, including stress, lack of context, and lack of expertise.
Because of this, deep learning is a natural solution to the problem.
Deep learning applications can process data and extract valuable insights at higher speeds with much more accuracy. This can help doctors to process data and analyze test results more thoroughly.
With that much data at hand, training deep learning models is not a major challenge. At the same time, implementing deep learning in healthcare workflows is an effective way to increase both the efficiency of operations and the accuracy of results.
The primary type of deep learning application for medical image analysis is the convolutional neural network (you can read more about them here). A CNN applies multiple filters and pooling operations to recognize and extract different features from the input data.
The implementation of deep learning in medical image analysis can improve on the field's main requirements. Here is how:
- Provide high-accuracy image processing;
- Enable input image analysis with an appropriate level of sensitivity to field-specific aspects, which depend on the use case (for example, bone fracture analysis).
Let’s break it down with a concrete example: an X-ray of bones.
- Shallow layers identify broad elements of an input image. In this case - bones.
- Deeper layers identify specific aspects - like fractures, their positions, severity, and so on.
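To make the layer intuition concrete, here is a minimal numpy sketch of the core operation a shallow CNN layer performs: a single convolution with a hand-written edge filter over a synthetic "bone" image. The image, kernel, and values are illustrative stand-ins, not a real radiograph pipeline; in a trained CNN the filter weights are learned rather than fixed.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution (valid padding), the building block of CNN layers."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic "radiograph": a bright vertical band (the bone) on a dark background.
image = np.zeros((8, 8))
image[:, 3:5] = 1.0

# A shallow-layer-style filter: a vertical edge detector (Sobel-like).
edge_kernel = np.array([[1, 0, -1],
                        [2, 0, -2],
                        [1, 0, -1]], dtype=float)

response = conv2d(image, edge_kernel)
# Strong positive/negative responses mark the bone's left and right edges.
print(response[0])
```

Deeper layers then combine many such feature maps, which is how fracture-like local patterns become detectable.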
The primary operations handled by deep learning medical imaging applications are as follows:
- Diagnostic image classification - involves processing examination images and comparing different samples. It is primarily used to assign objects and lesions to specific classes based on local and global information about the object's appearance and location.
- Anatomical object localization - includes localization of organs or lesions. The process often involves 3D parsing of an image with the conversion of three-dimensional space into two-dimensional orthogonal planes.
- Organ/substructure segmentation - involves identifying the set of pixels that defines the contour of an organ or object of interest. This process enables quantitative analysis of shape, size, and volume.
- Lesion segmentation - combines object detection and organ/substructure segmentation.
- Spatial alignment - involves the transformation of coordinates from one sample to another. It is mainly used in clinical research.
- Content-based image retrieval - used for data retrieval and knowledge discovery in large databases. One of the critical tools for navigation in numerous case histories and understanding of rare disorders.
- Image generation and enhancement - involves image quality improvement, image normalization (i.e., noise removal), data completion, and pattern discovery.
- Image data and report combination. This case is twofold. On the one hand, report data is used to improve image classification accuracy. On the other hand, image classification data is then further described in text reports.
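As a small illustration of the quantitative analysis that segmentation enables, the sketch below computes the area and centroid of a hypothetical binary mask; the mask shape and pixel spacing are invented for the example.

```python
import numpy as np

# Hypothetical binary segmentation mask (1 = organ/lesion pixel), as a
# segmentation model might output after thresholding its predictions.
mask = np.zeros((10, 10), dtype=int)
mask[3:7, 2:8] = 1  # a 4 x 6 rectangular region of interest

pixel_spacing_mm = (0.5, 0.5)   # assumed scanner resolution, for illustration
area_px = int(mask.sum())       # size in pixels
area_mm2 = area_px * pixel_spacing_mm[0] * pixel_spacing_mm[1]

ys, xs = np.nonzero(mask)
centroid = (ys.mean(), xs.mean())  # location of the structure

print(area_px, area_mm2, centroid)
```

The same idea extends to 3D: summing voxels of a volumetric mask and scaling by voxel spacing yields an organ or lesion volume.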
Now let’s look at how medical image analysis uses deep learning applications.
At the time of writing this piece, cancer detection is one of the major applications of deep learning CNNs. This particular use case makes the most out of deep learning implementation in terms of accuracy and speed of operation.
This aspect is a big deal because some forms of cancer, such as melanoma and breast cancer, have a higher degree of curability if diagnosed early.
On the other hand, deep learning medical image analysis is practical at later stages.
For instance, it is used to track and analyze the development of metastatic cancer. One of the most prominent deep learning models in this field is LYmph Node Assistant (LYNA), developed by Google. The LYNA model was trained on datasets of pathology slides. It reviews sample slides and recognizes characteristics of tumors and metastases in a short time span with a 99% rate of accuracy.
In the case of skin cancer detection, deep learning is applied at the examination stage to identify anomalies and track their development. To do that, it compares sample data with available datasets such as T100000. (You can read more about it in our recent case study.)
Breast cancer detection is the other critical use case. In this case, a deep learning neural network is used to compare mammogram images and identify abnormal or anomalous tissues across numerous samples.
One of the most prominent features of convolutional neural networks is their ability to process images with numerous filters and extract as many valuable elements as possible. This comes in handy when it comes to tracking the development of a tumor.
One of the main requirements for tracking tumor development is to maintain the continuity of the process i.e., identifying various stages, transition points, and anomalies.
The training of tumor development tracking CNN requires a relatively small number of clinical trials in comparison with other use cases.
Various image classification algorithms then reveal critical features of the tumor from the resulting data. These features include the tumor's location, area, shape, and density.
In addition to that, such CNN can:
- track the changes of the tumor over time;
- tie this data with the impacting factors (for example, treatment or lack thereof).
In this case, the system also uses predictive analytics to analyze tumor proliferation. One of the most common methods for this is the tumor probability heatmap, which classifies the state of the tumor based on overlapping tissue patches.
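The patch-overlap heatmap idea can be sketched as follows. Here `classify_patch` is a stand-in for a trained patch classifier (in this toy version, just mean intensity), and the tile-and-average aggregation is a simplified assumption about how patch scores are combined.

```python
import numpy as np

def probability_heatmap(image, classify_patch, patch=4, stride=2):
    """Average overlapping patch-level tumor probabilities into a
    per-pixel heatmap (a common tile-and-aggregate scheme)."""
    h, w = image.shape
    heat = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = classify_patch(image[y:y + patch, x:x + patch])
            heat[y:y + patch, x:x + patch] += p
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)

# Stand-in classifier: mean intensity as "tumor probability"
# (a trained CNN would go here in a real system).
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0   # synthetic "tumor" region
heat = probability_heatmap(image, lambda p: p.mean())
print(heat.round(2))
```

The heatmap peaks over the simulated tumor region, which is exactly the property that makes overlap-based aggregation useful for tracking proliferation over time.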
MRI is one of the most complicated types of medical imaging. The operation is both resource-heavy and time-consuming (which is why it benefits so much from cloud computing). The data contains multiple layers and dimensions that require contextualization for accurate interpretation.
Enter deep learning. Implementing a convolutional neural network can automate the image segmentation process and streamline the workflow with a wide array of classification and segmentation algorithms that sift through the data and extract as many things of note as required.
The operation of MRI scan alignment takes hours of computing time to complete. The process involves sorting millions of voxels (3D pixels) that constitute anatomical patterns. In addition to this, the same process is required for numerous patients time after time.
Here’s how deep learning can make it easier.
- Image classification and pattern recognition are two cases where neural networks are at their best.
- A convolutional neural network can be trained to identify common anatomical patterns. The data goes through multiple CNN filters that sift through it and identify relevant patterns.
- As a result, the CNN becomes capable of spotting anomalies and identifying specific indications of different diseases.
The segmentation process may involve 2D/3D convolution kernels that determine the segmentation patterns.
- 2D CNN slices the data one-by-one to construct a pattern map;
- 3D CNN uses voxel data that predicts segmentation maps for volumetric patches.
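The difference between the two approaches can be sketched in numpy, with fixed averaging kernels standing in for learned filters: the 2D path processes each slice independently, while the 3D kernel sees volumetric context across neighboring slices.

```python
import numpy as np

volume = np.random.rand(6, 6, 6)   # a toy MRI volume of voxels (depth, height, width)

# 2D approach: convolve each slice independently with a 3x3 kernel.
k2 = np.ones((3, 3)) / 9.0
slice_maps = []
for z in range(volume.shape[0]):
    sl = volume[z]
    out = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            out[i, j] = np.sum(sl[i:i + 3, j:j + 3] * k2)
    slice_maps.append(out)
stack_2d = np.stack(slice_maps)     # (6, 4, 4): one map per slice

# 3D approach: a single 3x3x3 kernel spans adjacent slices,
# capturing volumetric patterns a slice-wise model cannot see.
k3 = np.ones((3, 3, 3)) / 27.0
out_3d = np.zeros((4, 4, 4))
for z in range(4):
    for i in range(4):
        for j in range(4):
            out_3d[z, i, j] = np.sum(volume[z:z + 3, i:i + 3, j:j + 3] * k3)

print(stack_2d.shape, out_3d.shape)
```

The trade-off mirrors the text: 2D kernels are cheaper and produce per-slice maps, while 3D kernels consume voxel neighborhoods and predict maps for volumetric patches.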
As such, segmentation is a viable tool for diagnosis and treatment development purposes across multiple fields. In addition to that, it contributes significantly to quantitative studies and computational modeling, both of which are crucial in clinical research.
One of the most prominent implementations of this approach is MIT's VoxelMorph. The system was trained on several thousand different MRI brain scans, which enables it to identify common patterns of brain structure and spot anomalies or other suspicious deviations from the norm.
Retinal blood vessel segmentation is one of the more onerous medical imaging tasks due to its scale. Blood vessels occupy just a few pixels and contrast only weakly with the background, which makes them hard to spot, let alone analyze at the appropriate level.
Deep learning can make it much more manageable. However, this is one of the cases where deep learning takes more of an assisting role in the process. Such neural networks use the Structured Analysis of the Retina (STARE) dataset, which contains 28 annotated 999×960 images.
Overall, there are two ways deep learning improves retinal blood vessel segmentation operation:
- Image enhancement can improve the quality of an image.
- Substructure segmentation can correctly identify the blood vessels and determine their state.
As a result, the implementation of neural networks significantly compresses the workflow's turnaround time. The system can annotate the samples on its own, as it already has the foundational points of reference. Because of that, the specialist can focus on case-specific operations instead of manually reannotating samples every time.
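A toy illustration of the enhancement step above: assuming a synthetic fundus image in which a faint vessel is only slightly brighter than the background, a simple contrast stretch makes it trivially separable (real pipelines use far more sophisticated enhancement, but the principle is the same).

```python
import numpy as np

# Synthetic fundus patch: a faint vessel barely brighter than the background.
fundus = np.full((6, 6), 0.50)
fundus[:, 3] = 0.55   # the vessel column, only 0.05 above background

# Contrast stretch: rescale intensities to span the full [0, 1] range.
lo, hi = fundus.min(), fundus.max()
enhanced = (fundus - lo) / (hi - lo)

# After enhancement, the vessel is cleanly separable by a simple threshold.
vessel_mask = (enhanced > 0.5).astype(int)
print(int(vessel_mask.sum()))
```

This is why enhancement and segmentation reinforce each other: a better-contrasted input makes the few vessel pixels far easier for a segmentation network to pick up.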
Cardiac assessment for cardiovascular pathologies is one of the most complicated cases; it requires a lot of data to spot patterns and determine the severity of the problem.
The other critical factor is time, as cardiovascular pathologies require swift reaction to avoid lethal outcomes and provide effective treatment.
This is something deep learning can handle with ease. Here’s how. Deep learning fits into the following operations:
- Blood flow quantification - measuring flow rates and determining their features;
- Anomaly detection in the accumulated quantitative data;
- Data visualization of the results.
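As a simplified illustration of the anomaly detection step, the sketch below flags outliers in hypothetical blood-flow measurements using a basic statistical rule; the values and the 2-sigma threshold are invented for the example, and a real system would apply a trained model to far richer data.

```python
import numpy as np

# Hypothetical per-beat blood-flow measurements (ml/s), with one
# clearly abnormal reading planted for illustration.
flow = np.array([82.0, 85.0, 79.0, 84.0, 81.0, 83.0, 40.0, 80.0])

# Simple statistical anomaly detection: flag readings more than
# 2 standard deviations from the median-based baseline.
baseline = np.median(flow)
spread = np.std(flow)
z = np.abs(flow - baseline) / spread
anomalies = np.nonzero(z > 2)[0]
print(anomalies)  # indices of abnormal readings
```

Flagging such outliers quickly is the point: it gives clinicians an immediate pointer to the beats or regions that deserve attention.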
The implementation of deep learning in the process increases the accuracy of the analysis and allows doctors to gain much more insight in a shorter time. The speed of delivery can positively impact the course of treatment.
Bone diseases and injuries are among the most common medical causes of severe, long-term pain and disability. As such, they are a prime testing ground for various image classification and segmentation CNN use cases.
Let’s take the most common method of bone imaging - X-rays. While the interpretation of images is less of a problem in comparison with other fields, the workload in any given medical facility can be overwhelming for the resident radiologist.
Enter deep learning:
- A CNN is used to classify images and determine their features (e.g., bone type).
- After that, the system segments the abnormalities of the input image (for example, fractures, breaks, spurs, etc.).
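The two-stage pipeline above can be sketched with stand-in models; both functions below are hypothetical rules sitting where trained CNNs would in a real system.

```python
import numpy as np

def classify_bone(image):
    """Stage 1: coarse image classification (stand-in rule, not a real model)."""
    return "long_bone" if image.shape[0] > image.shape[1] else "flat_bone"

def segment_abnormalities(image, threshold=0.8):
    """Stage 2: flag pixels whose intensity deviates sharply from the norm."""
    return (image > threshold).astype(int)

radiograph = np.zeros((12, 6))   # tall synthetic image standing in for a long bone
radiograph[5, 2:4] = 1.0         # a bright line standing in for a fracture

bone_type = classify_bone(radiograph)
fracture_mask = segment_abnormalities(radiograph)
print(bone_type, int(fracture_mask.sum()))
```

Running classification first narrows the search space, so the segmentation stage only has to localize abnormalities relevant to that bone type.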
As a result, the implementation of deep learning CNNs can make the radiologist's job easier and more effective.
From the data volume standpoint, medical image analysis is one of the biggest healthcare fields. This alone makes the implementation of machine learning solutions a logical decision.
The combination is beneficial for both sides.
- On one hand, medical imaging gets a streamlined workflow with faster turnaround and more accurate analysis.
- On the other hand, such applications contribute to the overall development of neural network technologies and enable their further refinement.