
MONAI: Medical imaging AI for data scientists and developers @ 3D Slicer Project Week, 2020


MONAI is open source deep learning platform for medical image analysis. This presentation covers the motivation for and capabilities of MONAI, and it includes a brief getting-started tutorial for intermediate and expert medical imaging researchers and developers interested in exploring deep learning.



  1. 1. Open Science for the Challenges of Medical Imaging AI Stephen R. Aylward, Ph.D. Chair of MONAI External Advisory Board Senior Director of Strategic Initiatives, Kitware
  2. 2. Special thanks to Prerna Dogra (Nvidia), Jorge Cardoso (KCL), and all of the MONAI developers for their contributions to these slides. For more MONAI presentation material for hackfests, courses, and self-directed studies, please email me, Stephen.Aylward@kitware.com, or any of the other MONAI advisory board members: https://monai.io/about.html
  3. 3. Why is deep learning succeeding? ● Performance ● Open Science -- Forbes.com
  4. 4. Deep Learning Success: Performance < Left as an exercise for the audience >
  5. 5. Deep Learning Success: Open Science ● Open science is pervasive in deep learning ○ Open access publications: arXiv ○ Open access data: ImageNet, BU AIM, HL7, FHIR ○ Open access algorithms and open source software: PyTorch, MONAI
  6. 6. Medical Open Network for A. I. (MONAI) Goal: Accelerate the pace of research and development by providing a common software foundation and a vibrant community for medical imaging deep learning. ■ Began as a collaboration between Nvidia and King’s College London ■ Prerna Dogra (Nvidia) and Jorge Cardoso (KCL) ■ Freely available and community-supported ■ PyTorch-based ■ Optimized for medical imaging ■ Reference implementation of best practices
  7. 7. Accelerate Pace of Research and Innovation With a Common Foundation Current conditions: many options (NiftyNet (KCL), DeepNeuro (Harvard), DLTK (ICL), Clara Train (NVIDIA), ...); incompatible interfaces and formats; extended learning curves. MONAI: integrate rather than compete; build a community through value. [Diagram: end-to-end workflow (data sample, data augmentation, neural network, loss function, evaluation on validation data), annotated to distinguish the primary focus of MONAI from components linked with MONAI]
  8. 8. MONAI TECHNOLOGY STACK
FOUNDATIONAL COMPONENTS: users can integrate independent, domain-specialized components into PyTorch programs
- Data: CacheDataset, PersistentDataset, ZipDataset, ArrayDataset, GridDataset, enhanced DataLoader
- Transforms: Spatial, Intensity, IO, Utility, Post, Compose
- Savers & Writers: NIfTI, PNG & CSV
- Inferers: SimpleInferer, SlidingWindowInferer
- Losses: DiceLoss & extensions, FocalLoss, TverskyLoss
- Networks: UNet (2D & 3D), DenseNet (2D & 3D), layers & blocks
- Metrics: MeanDice, ROCAUC
- Visualize: plot 3D/2D images, plot statistics curves
- 3rd-party adapters: BatchGenerator, Rising, TorchIO
MONAI WORKFLOWS: users can interface with MONAI workflows for robust training & evaluation of research experiments
- Engines: SupervisedTrainer, SupervisedEvaluator
- Event handlers: CheckpointLoader, CheckpointSaver, ValidationHandler, ClassificationSaver, SegmentationSaver, LrScheduleHandler, StatsHandler, TensorBoard handlers, MetricLogger
- Metrics: MeanDice, ROCAUC
MONAI EXAMPLES: rich set of examples & demo notebooks demonstrating the capabilities and integration with OSS packages (segmentation, classification, GANs & autoencoders, federated learning, getting-started notebooks)
MONAI RESEARCH: implementations of state-of-the-art research publications
Built for ease of integration and customization: multi-modality support, radiogenomics, unconstrained and optimized models, model parallelism / neural architecture search, comprehensive decision-making, COVID-19, end-to-end research lifecycle, DICOM / HL7 FHIR / model exchange & deploy
  9. 9. Why is MONAI Needed? • Biomedical applications have specific requirements • Image modalities require specific processing methods: MRI, CT, etc. • Image formats require special support: DICOM, NIfTI, etc. • Image meta-data must be considered: voxel spacing, HU, etc. • Certain network architectures are designed for, or are highly suitable for, biomedical applications • Problem prioritization is domain specific: sample size limitations, annotation uncertainties, etc.
  10. 10. Why is MONAI Needed? Reproducibility is vital to clinical decision support • Reduce re-implementation • Provide baseline implementations • Demonstrate best practices • Stand on the shoulders of giants
  11. 11. How Does MONAI Address These Needs? • MONAI provides flexible yet reproducible PyTorch-compatible methods • Deterministic and validated modules • Medical data I/O • Data transforms to process, regularize, and augment image data • Metrics and loss functions • Checkpointing • Standardized networks and training paradigms • Support for multi-GPU and multi-node multi-GPU training • Tutorials and documentation: Jupyter Notebooks and Ignite Workflows
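As a small illustration of the deterministic-modules point, here is a sketch (the seed, image shape, and transform choices are arbitrary examples, not values prescribed by MONAI):

import numpy as np
from monai.utils import set_determinism
from monai.transforms import Compose, RandRotate90, RandGaussianNoise

set_determinism(seed=42)  # fix the random state used by MONAI's random transforms
augment = Compose([RandRotate90(prob=0.5), RandGaussianNoise(prob=0.5, std=0.05)])
image = np.random.rand(1, 64, 64).astype(np.float32)  # placeholder channel-first image
augmented = augment(image)
# Re-running the script with the same seed reproduces the same augmentation sequence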
  12. 12. Working Groups of MONAI (liaisons with the community: recommend policies and priorities to the development team) 1. IMAGING I/O – Stephen Aylward (Kitware) 2. DATA DIVERSITY – Brad Genereaux (Nvidia) 3. CHALLENGES – Lena Maier-Hein (DKFZ) 4. TRANSFORMATIONS – Jorge Cardoso (KCL) 5. FEDERATED LEARNING – Jayashree Kalpathy-Cramer (MGH) and Daniel Rubin (Stanford) 6. ADVANCED RESEARCH – Paul Jaeger (DKFZ) 7. INTEGRATION AND DEPLOYMENT – David Bericat (Nvidia) 8. COMMUNITY ADOPTION – Prerna Dogra (Nvidia) https://github.com/Project-MONAI/MONAI/wiki
  13. 13. MONAI IS A GROWING COMMUNITY
  14. 14. BOOTCAMP – IN NUMBERS A LOT OF INTEREST IN THE COMMUNITY! • 563 attendance applications • 60 accepted participants with cluster access • An additional 140 participants attended as observers • From 40 different countries: Australia, Austria, Belgium, China, Cyprus, Czechia, Egypt, Ethiopia, France, Ghana, Germany, Greece, Guatemala, Hong Kong, India, Israel, Iran, Malta, Mexico, Nepal, Netherlands, Norway, Oman, Peru, Poland, Portugal, Saudi Arabia, Slovenia, South Korea, Spain, Sweden, Switzerland, Turkey, United Arab Emirates, United Kingdom, United States of America. A truly global event!
  15. 15. Installation > pip install -q "monai[tqdm, nibabel, gdown, ignite]" "itk" "itkwidgets"
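To confirm that the optional dependencies were picked up, a quick sanity check (a minimal sketch; monai.config.print_config() reports the MONAI, PyTorch, and optional-package versions):

import monai
monai.config.print_config()  # prints MONAI, PyTorch, numpy, and optional dependency versions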
  16. 16. Data and Experiments MONAI separates data from experiments.
Data
• Existing standards for image I/O (ITK): DICOM via GDCM, HDF5, TIFF, NIfTI, NRRD, and tens of others; custom readers will be allowed for specialized image formats.
• Structured data collections: DataSets define data in reproducible sections (training, testing, and validation sections; images, bounding boxes, etc.); DataLoader and Transforms handle augmentation and pre-processing per section.
Experiments
• Batches and metrics
• MONAI network architecture, loss functions, seeds, ...
A flexible and extensible design for data scientists and healthcare institutions.
[Diagram: DataSets and DataLoaders (ITK, MSD, BIDS, FHIR, etc.) split into training/testing/validation sections feed a Transformer and the experiment definition (sampling, batches, metrics, network, etc.)]
  17. 17. Access Medical Data Goal: Harmonize and simplify open data and biomedical challenges • Participate in / use public challenges • Define “challenges” (custom datasets) within your lab Thin layer on top of PyTorch's torch.utils.data.Dataset construct • Automated (verified) download and unzip • Caching of data as well as intermediate results of preprocessing • Random splits of training, validation, and test
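A sketch of those splits in code, using the MedNISTDataset that appears later in this deck (imported here from monai.apps; the root_dir value is a placeholder and the validation/test fractions are left at the class defaults):

from monai.apps import MedNISTDataset
from monai.transforms import Compose, LoadPNGd, AddChanneld, ScaleIntensityd, ToTensord

transform = Compose([LoadPNGd(keys="image"), AddChanneld(keys="image"),
                     ScaleIntensityd(keys="image"), ToTensord(keys=["image", "label"])])

# The first call downloads and hash-verifies the archive; later calls reuse the cached copy.
train_ds = MedNISTDataset(root_dir="./", transform=transform, section="training", download=True)
val_ds = MedNISTDataset(root_dir="./", transform=transform, section="validation", download=False)
test_ds = MedNISTDataset(root_dir="./", transform=transform, section="test", download=False)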
  18. 18. Transform data
  19. 19. Transforms per data section
  20. 20. MONAI TRANSFORMATION & AUGMENTATION
Medical-specific transformations: LoadNifti | Spacing | Orientation; RandGaussianNoise | NormalizeIntensity; Rand2DElastic | Rand3DElastic
Fused spatial transforms & GPU optimization: affine transform; random sampling (class-balanced, fixed ratio); deterministic training controlled by setting the random seed
Multiple-transform chains: CopyItems (copy entries in the data dictionary), ConcatItems (combine entries into the expected dimension), DeleteItems (save memory); for example, scale the intensity of the same image into different ranges
Generic | vanilla | dictionary-based transforms
[Diagram: a NIfTI image is loaded from file, copied twice, scaled into brain, subdural, and bone windows, and concatenated before being fed to the network: LoadNifti, AsChannelFirst, ScaleIntensity (three ranges), ConcatItems, Network]
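A rough sketch of that multi-window chain using dictionary transforms (assuming a data dictionary with an "image" key; the HU window values and the AddChanneld step are illustrative assumptions, not taken from the slide):

from monai.transforms import (Compose, LoadNiftid, AddChanneld, CopyItemsd,
                              ScaleIntensityRanged, ConcatItemsd, DeleteItemsd, ToTensord)

# Load one CT volume, make two extra copies, window each copy differently,
# then concatenate the three windows into a multi-channel input for the network.
multi_window = Compose([
    LoadNiftid(keys="image"),
    AddChanneld(keys="image"),
    CopyItemsd(keys="image", times=2, names=["image_subdural", "image_bone"]),
    ScaleIntensityRanged(keys="image", a_min=0, a_max=80, b_min=0.0, b_max=1.0, clip=True),
    ScaleIntensityRanged(keys="image_subdural", a_min=-20, a_max=180, b_min=0.0, b_max=1.0, clip=True),
    ScaleIntensityRanged(keys="image_bone", a_min=-450, a_max=1050, b_min=0.0, b_max=1.0, clip=True),
    ConcatItemsd(keys=["image", "image_subdural", "image_bone"], name="image_multi"),
    DeleteItemsd(keys=["image_subdural", "image_bone"]),  # free memory once concatenated
    ToTensord(keys="image_multi"),
])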
  21. 21. MONAI Data Transforms
  22. 22. MONAI TRANSFORMATION & AUGMENTATION: Post-Processing & Integrating Third-Party Transforms 3rd-party OSS packages & MONAI adapter tools - Interoperability with other open source packages - Accommodate different data layouts for 3rd-party transforms - Utility transforms: ToTensor, ToNumpy, SqueezeDim - BatchGenerator - TorchIO - Rising
  23. 23. INFERENCING & EVALUATION METRICS Evaluation Metrics and Inference Patterns for Model Quality
Sliding-window inference: 1. generate slices from the window, 2. construct batches, 3. execute on the network, 4. connect all outputs
Domain-specialized metrics: Hausdorff distance, kappa coefficients, Youden's J statistic, relative tumor volume, target registration error, etc.
Standard metrics: mean Dice, area under the ROC curve, etc.
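A minimal sketch of the sliding-window pattern using monai.inferers.sliding_window_inference (the network, window size, and input shape below are placeholders, not values from the slide):

import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet

net = UNet(dimensions=3, in_channels=1, out_channels=2, channels=(16, 32, 64), strides=(2, 2))
net.eval()
volume = torch.rand(1, 1, 160, 160, 96)  # placeholder validation volume (batch, channel, spatial dims)
with torch.no_grad():
    # 96^3 windows are cut from the volume, run through the network 4 at a time,
    # and the per-window outputs are stitched back into a full-size prediction.
    logits = sliding_window_inference(volume, roi_size=(96, 96, 96), sw_batch_size=4, predictor=net)
prediction = torch.argmax(logits, dim=1, keepdim=True)  # feed this to MeanDice, Hausdorff distance, etc.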
  24. 24. NETWORK ARCHITECTURE & LOSSES BLOCKS & LAYERS; NETWORKS & LOSSES: 1D/2D/3D intermediate blocks and generic networks, such as UNet, DenseNet, GAN. [Figure: example multi-scale 3D CNN architecture showing stacks of convolutional layers, fully connected layers, and an upsampling path at full and one-third (N/3) resolution]
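For example, a generic 3D UNet can be paired with one of the segmentation losses listed in the stack (a minimal sketch; the channel counts, shapes, and DiceLoss settings are illustrative):

import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

net = UNet(dimensions=3, in_channels=1, out_channels=1, channels=(8, 16, 32), strides=(2, 2))
loss_fn = DiceLoss(sigmoid=True)  # apply a sigmoid to the single-channel logits before Dice
images = torch.rand(2, 1, 64, 64, 64)                  # placeholder batch of 2 volumes
labels = (torch.rand(2, 1, 64, 64, 64) > 0.5).float()  # placeholder binary masks
loss = loss_fn(net(images), labels)
loss.backward()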
  25. 25. Ease-of-use Example
net = monai.networks.nets.UNet(
    dimensions=2,          # 2 or 3 for a 2D or 3D network
    in_channels=1,         # number of input channels
    out_channels=1,        # number of output channels
    channels=[8, 16, 32],  # channel counts for layers
    strides=[2, 2]         # strides for mid layers
)
2D UNet network
• Two hidden layers whose outputs have 8 and 16 channels, and a bottom (bottleneck) layer whose outputs have 32 channels
• Stride values give the stride of the initial convolution in each layer, i.e. the downsampling factor in the down path and the matching upsampling factor in the up path
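A quick sanity check of this construction (a sketch; the 64x64 input size is arbitrary but should be divisible by the product of the strides so the down and up paths line up):

import torch
import monai

net = monai.networks.nets.UNet(dimensions=2, in_channels=1, out_channels=1,
                               channels=[8, 16, 32], strides=[2, 2])
x = torch.rand(1, 1, 64, 64)  # one single-channel 64x64 image
y = net(x)                    # the UNet preserves spatial size: y.shape == (1, 1, 64, 64)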
  26. 26. MONAI: End-to-End Training Workflow in 10 Lines of Code
import torch
from monai.apps import MedNISTDataset
from monai.data import DataLoader
from monai.transforms import LoadPNGd, AddChanneld, ScaleIntensityd, ToTensord, Compose
from monai.networks.nets import densenet121
from monai.inferers import SimpleInferer
from monai.engines import SupervisedTrainer

transform = Compose([
    LoadPNGd(keys="image"),
    AddChanneld(keys="image"),
    ScaleIntensityd(keys="image"),
    ToTensord(keys=["image", "label"])
])
dataset = MedNISTDataset(root_dir="./", transform=transform, section="training", download=True)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = densenet121(spatial_dims=2, in_channels=1, out_channels=6).to(device)
trainer = SupervisedTrainer(
    device=device,
    max_epochs=5,
    train_data_loader=DataLoader(dataset, batch_size=2, shuffle=True, num_workers=4),
    network=net,
    optimizer=torch.optim.Adam(net.parameters(), lr=1e-5),
    loss_function=torch.nn.CrossEntropyLoss(),
    inferer=SimpleInferer()
)
trainer.run()
  27. 27. RESEARCH BASELINE IMPLEMENTATIONS IEEE, MICCAI & many more state-of-the-art research implementations to follow
  28. 28. FEDERATED LEARNING Advanced Features to Enable Collaborative Research (Coming Soon) Federated learning is a generic paradigm for collaborative learning, spanning server-client and peer-to-peer topologies. • Provide integration with existing FL packages: NVIDIA Clara Federated Learning, PySyft, Substra • Focus on "domain-specialized learning" aspects • Coming soon!
  29. 29. CLARA PRE-TRAINED MODELS Packaged as Medical Model ARchives (MMARs): liver tumor segmentation, lung segmentation, chest CT classification, brain tumor segmentation, and more.
Model | Medical Task | Data | Network
Brain tumor segmentation | 3D segmentation | MR (BraTS 2018) | Res-UNet
Liver and tumor segmentation | 3D segmentation | CT (Medical Decathlon) | Anisotropic Hybrid Network (AH-Net)
COVID-19 lung segmentation | 3D segmentation | CT (NIH + global COVID-19) |
Chest CT classification | 3D classification | NIH dataset | DenseNet121
Chest X-ray classification | 2D classification | PLCO |
The Vanderbilt model zoo (B. Landman, NSF funding) is converting to MONAI; MONAI 0.5: 20+ models.
  30. 30. Encapsulating a COVID-19 Algorithm into an Integrated AI Application Nvidia CLARA
  31. 31. Engage with MONAI
Learn
• Getting Started (Installation, Examples, Demos, etc.): https://monai.io/start.html
Contribute
• GitHub
• Community Guide: https://github.com/Project-MONAI/MONAI#community
• Contributing Guide: https://github.com/Project-MONAI/MONAI#contributing
• Issue Tracker, "Good First Issue" tag: https://github.com/Project-MONAI/MONAI/labels/good%20first%20issue
• PyTorch Forums: tag @monai or see the MONAI user page: https://discuss.pytorch.org/u/MONAI/
• Stack Overflow: see existing tagged questions or create your own: https://stackoverflow.com/questions/tagged/monai
• Join our Slack channel: fill out the Google Form here: https://forms.gle/QTxJq3hFictp31UM9
  32. 32. Deep Learning Success < Left as an exercise for the audience > Stephen R. Aylward, Ph.D. Chair of MONAI Advisory Board Senior Director of Strategic Initiatives, Kitware https://monai.io/ https://github.com/Project-MONAI/
