Chicago, USA – RSNA – November 28, 2018 – Deep-learning-based annotation and segmentation can greatly speed up model development and medical image analysis. However, developing accurate, high-performance deep neural networks from scratch is challenging and time-consuming, and the cost and quality of the required datasets are often the two main obstacles developers face. NVIDIA today announced the launch of its Transfer Learning Toolkit and AI-assisted annotation SDK for medical imaging.

NVIDIA Transfer Learning Toolkit

With the NVIDIA Transfer Learning Toolkit (TLT), deep learning application developers in medical imaging can start from NVIDIA pre-trained models in a simple, easy-to-use training workflow, then fine-tune and retrain those models on their own datasets.
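The announcement does not show the TLT API itself; as a rough illustration of the fine-tune-a-pretrained-model workflow it describes, the sketch below uses plain PyTorch with a hypothetical checkpoint file (`brain_seg_pretrained.pt`), a toy stand-in network and a placeholder dataset. None of these names are part of TLT.

```python
# Illustrative transfer-learning sketch in plain PyTorch (not the TLT API).
# "brain_seg_pretrained.pt" and MyVolumeDataset are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset

class MyVolumeDataset(Dataset):
    """Placeholder dataset yielding (volume, mask) pairs shaped [C, D, H, W]."""
    def __init__(self, n=8):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, idx):
        x = torch.randn(1, 32, 64, 64)                    # e.g. a cropped MR patch
        y = (torch.rand(1, 32, 64, 64) > 0.5).float()     # dummy binary mask
        return x, y

# Tiny stand-in for a pre-trained 3D segmentation network.
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 1),
)
# In a real workflow the pre-trained weights would be loaded here, e.g.:
# model.load_state_dict(torch.load("brain_seg_pretrained.pt"))

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Fine-tune on the user's own dataset.
loader = DataLoader(MyVolumeDataset(), batch_size=2, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

model.train()
for epoch in range(2):                                    # short demo run
    for volume, mask in loader:
        volume, mask = volume.to(device), mask.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(volume), mask)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```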

TLT is a Python package in which each model is optimized and trained on NVIDIA Pascal, Volta and Turing GPUs for higher accuracy.

At MICCAI 2018, NVIDIA took first place in the BraTS challenge with a 3D magnetic resonance imaging (MRI) brain tumor segmentation model that uses autoencoder regularization. NVIDIA provides this pre-trained model as part of the first public release of TLT for medical imaging. Models trained on public datasets, such as 3D brain tumor segmentation for multimodal MR data and 3D pancreas and tumor segmentation for portal-venous-phase CT data, are readily available in the toolkit.

Using the NVIDIA Transfer Learning Toolkit, developers can speed up deployment and reduce the computing resources needed to build applications. Researchers can also extend the pre-trained models to their own work, and the easy-to-use API lets developers adopt and adapt the technology quickly.

Models built with the TLT workflow can also be easily deployed to the Clara platform for inference.
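The announcement does not spell out Clara's deployment interface. As a generic illustration of handing a fine-tuned network to an inference platform, the sketch below exports a toy model to ONNX, a common interchange format; the network and tensor shapes are placeholders, not the actual Clara path.

```python
# Illustrative export of a fine-tuned segmentation model for inference.
# ONNX is used here as a generic interchange format; the actual Clara
# deployment workflow is not described in this announcement.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for the fine-tuned network
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 1),
)
model.eval()

dummy_input = torch.randn(1, 1, 32, 64, 64)  # [N, C, D, H, W] placeholder volume
torch.onnx.export(
    model,
    dummy_input,
    "segmentation_model.onnx",
    input_names=["volume"],
    output_names=["mask_logits"],
    dynamic_axes={"volume": {0: "batch"}, "mask_logits": {0: "batch"}},
)
print("exported segmentation_model.onnx")
```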

TLT will be available for NVIDIA Tesla and DGX products.

NVIDIA AI-Assisted Annotation

When it comes to diagnosis and treatment, radiologists can end up spending hours scrutinizing a patient's 3D images. It is a tedious process: radiologists must view CT or MRI scans slice by slice, manually drawing, annotating and correcting the organs or abnormalities of interest. This step is then repeated across every slice of the 3D image for each organ or abnormality.

NVIDIA’s AI-assisted annotation SDK can accelerate this process by up to 10x and help find abnormalities faster. It does so by letting application developers and data scientists integrate the SDK into their existing applications and adopt an AI-assisted workflow for radiology.
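The SDK's actual API is not shown here; the sketch below only illustrates the general integration pattern the paragraph describes, using a toy PyTorch model: the viewer sends the loaded volume to a segmentation network, receives a candidate mask, and the radiologist corrects that proposal instead of drawing from scratch. The function and variable names are hypothetical.

```python
# Generic sketch of an AI-assisted annotation loop (not the SDK's actual API):
# the viewer requests a candidate mask from a segmentation model, and the
# radiologist only reviews and corrects the proposal.
import numpy as np
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for a pre-trained segmentation net
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 1),
)
model.eval()

def propose_annotation(volume: np.ndarray) -> np.ndarray:
    """Return a candidate binary mask for a [D, H, W] volume."""
    x = torch.from_numpy(volume).float()[None, None]   # -> [1, 1, D, H, W]
    with torch.no_grad():
        logits = model(x)
    return (torch.sigmoid(logits)[0, 0] > 0.5).numpy()

# In the viewer: the radiologist edits `candidate` rather than annotating slice by slice.
ct_volume = np.random.rand(32, 64, 64).astype(np.float32)   # placeholder scan
candidate = propose_annotation(ct_volume)
print("candidate mask voxels:", int(candidate.sum()))
```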

The AI-assisted annotation SDK uses NVIDIA’s Transfer Learning Toolkit to keep learning over time: each newly annotated image can be used as training data to further improve the accuracy of the provided pre-trained deep learning models.
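How the SDK and TLT are coupled is not detailed in this announcement. The short sketch below only illustrates the feedback loop described above, under the assumption that approved corrections are pooled and periodically used for a small fine-tuning step; all names and thresholds are placeholders.

```python
# Sketch of the continuous-learning loop: radiologist-approved annotations
# are accumulated and periodically fed back into fine-tuning.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv3d(8, 1, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.BCEWithLogitsLoss()

corrected_pool = []          # (volume, mask) pairs approved by radiologists
FINE_TUNE_EVERY = 4          # placeholder trigger threshold

def on_annotation_approved(volume: torch.Tensor, mask: torch.Tensor) -> None:
    """Called by the annotation tool once the radiologist signs off a mask."""
    corrected_pool.append((volume, mask))
    if len(corrected_pool) % FINE_TUNE_EVERY == 0:
        model.train()
        for v, m in corrected_pool[-FINE_TUNE_EVERY:]:   # fine-tune on the newest batch
            optimizer.zero_grad()
            loss = loss_fn(model(v[None]), m[None])
            loss.backward()
            optimizer.step()
        model.eval()

# Simulated stream of approved annotations ([C, D, H, W] tensors):
for _ in range(8):
    v = torch.randn(1, 16, 32, 32)
    m = (torch.rand(1, 16, 32, 32) > 0.5).float()
    on_annotation_approved(v, m)
print("pool size:", len(corrected_pool))
```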

“We were able to take NVIDIA’s AI-assisted annotation technology and integrate it into our image viewer in a matter of days,” said Mark Michalski, executive director of the MGH & BWH Center for Clinical Data Science. “We currently need to annotate a large number of images – sometimes a thousand or more a day – so any technology that helps automate this process can significantly reduce annotation time and cost. We are very excited to use an AI-assisted workflow and to work with NVIDIA to solve these critical medical imaging problems.”

To learn more about NVIDIA’s AI-assisted annotation SDK and how to integrate it into your own application to use an AI-assisted workflow in medical imaging, please register here.

“The entire radiology department needs to be involved to successfully implement AI in research and clinical environments,” said Abdul Hamid Halabi, head of healthcare at NVIDIA. “This annotation SDK allows radiology departments to easily unlock the value of the data in their existing workflows. With the Transfer Learning Toolkit, radiologists can adapt existing AI applications to suit their own patients.”

About NVIDIA

NVIDIA (NASDAQ: NVDA) invented the GPU in 1999, sparking the growth of the PC gaming market, redefining modern computer graphics and revolutionizing parallel computing. More recently, GPU deep learning has ignited a new era of computing – modern artificial intelligence – with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world.
