[2021.2.11] Deploy WG: Core weekly

Thursday, February 11th 2021, 8:00 AM PST - 11:00 AM EST - 5:00 PM European time

Attendees:

  1. Ralf Floca (DKFZ)
  2. Pouria Rouzrokh (Mayo)
  3. Karthik (Vanderbilt)
  4. Kilian Hett (Vanderbilt)
  5. Laurence Jackson (GSTT)
  6. B. Selnur Erdal (Mayo Jacksonville)
  7. Dana Groff (NVIDIA)
  8. Rahul Choudhury (NVIDIA)
  9. Raghav Mani (NVIDIA)
  10. David Bericat (NVIDIA)

Recording:

MONAI Deploy core WG - bi-weekly-20210211_110354-Meeting Recording

Action Items (AIs)

New ones

  • AI1 - [DB] Open a shared slide deck for everyone to document in
  • AI2 - [All] Add slides describing each team's workflow from training to deployment, plus feedback
    • Data flows
    • Input syncs
    • Output syncs
  • AI3 - [Selnur] Share how they document a use case and requirements sample
  • AI4 - [All] Identify a use case from MONAI and use the use case and requirements template to document what we will design and build

AGENDA (+Notes) - Whiteboard 2/11/2021

SUMMARY:

  • The team discussed how to structure the work and agreed to review each organization's use cases first to identify commonalities, then build a high-level system architecture.
  • Once the common building blocks are clear, pick an example from MONAI as a use case to map the journey from trained model to "clinical production", and document high-level API requirements.
  • Build a roadmap distinguishing what to build from what to adopt.

DETAILED:

End-to-end workflow from trained model to PACS integration, resource utilization, GPUs, etc. (3 people)

  • Mayo
    • AI lab where they train their models
      • On what platform were the models originally trained?
      • Do we have to re-train?
      • Someone then takes the models and moves them to deployment
  • DKFZ
    • Use https://github.com/kaapana/kaapana as their deployment platform
    • Abstract the data source from the model itself as much as possible
    • Roughly (see the sketch after this list):
      • Images pushed (DIMSE, DICOMweb, manual upload, MinIO) or pulled (e.g. DICOM Q/R)
      • Workflow system (Airflow) ensures preprocessing to convert, resample, etc. into whatever form the algorithm needs
      • Consumed/processed by the AI algorithm
      • Workflow system ensures that the algorithm results are converted into the needed format
    • Bottleneck: interfacing different algorithms with differing input/output semantics or needs in an automated manner
    • Several containers and workflows built on Helm with repos
  • GSTT (Laurence, Haris)
    • In process of setting up complete workflow for new AI projects
    • Interested in hearing about others' experiences with various tools for MLOps, including continuous integration/training/deployment.
  • Mayo Jacksonville (Selnur)
    • DICOM is key to their design
    • Exchanging data and results
    • Custom models, custom apps in Python
    • Train -> deploy -> retrain (maybe with new labels) -> redeploy lifecycle - continuous learning
    • Pre and post processing
    • Inputs
    • Outputs depending on data types
    • Use cases
      • Requirements doc
  • NVIDIA (Dana, Rahul, David)
    • Model vs operator vs pipeline
    • Write use cases first
      • Those drive the inputs and outputs
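
The flow DKFZ described above maps naturally onto a workflow engine such as Airflow, which Kaapana uses. The sketch below expresses the ingest -> preprocess -> infer -> convert-results chain as an Airflow DAG (assuming Airflow 2.x); the task names and function bodies are illustrative placeholders, not Kaapana's actual operators, and only the ordering of the stages is meant to match the bullets above.

```python
# Sketch of the ingest -> preprocess -> infer -> convert flow described by DKFZ,
# expressed as an Airflow 2.x DAG. Function bodies are placeholders; names are
# illustrative only and do not correspond to Kaapana's shipped operators.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def receive_study(**context):
    # Images arrive via DIMSE push, DICOMweb, manual upload, MinIO, or a DICOM Q/R pull.
    print("study received and staged for processing")


def preprocess(**context):
    # Convert / resample the series into whatever form the model expects.
    print("input converted and resampled")


def run_inference(**context):
    # The AI algorithm consumes the preprocessed data.
    print("model inference complete")


def convert_results(**context):
    # Convert algorithm output into the format the consumer needs (e.g. DICOM SEG/SR).
    print("results converted for export")


with DAG(
    dag_id="ai_inference_pipeline",
    start_date=datetime(2021, 2, 11),
    schedule_interval=None,  # triggered per incoming study, not on a schedule
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="receive_study", python_callable=receive_study)
    prep = PythonOperator(task_id="preprocess", python_callable=preprocess)
    infer = PythonOperator(task_id="run_inference", python_callable=run_inference)
    export = PythonOperator(task_id="convert_results", python_callable=convert_results)

    ingest >> prep >> infer >> export
```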

System architecture first -> then roadmap

  • INPUT:
    • Model trained - in what format?
    • Data - in what format (e.g. DICOM) and where is it?
    • Result -> data ingested into the system
  • EXECUTE:
    • What is my executable artifact? How do we package it? (see the sketch after this list)
      • Container? Pipeline?
    • Should we define one?
    • Jorge:
      • Containers, models, inference engine
  • OUTPUT:
    • Result formats?
    • Consumers? What do they speak? (e.g. DICOM)
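
To make the INPUT / EXECUTE / OUTPUT split concrete, here is a minimal, hypothetical Python sketch of a pipeline of operators. The names (Operator, DicomInput, Inference, DicomOutput, run_pipeline) are assumptions for illustration only and do not represent an agreed MONAI Deploy API or packaging format; the open questions above (container vs. pipeline, result formats) remain open.

```python
# Illustrative-only sketch of the INPUT -> EXECUTE -> OUTPUT split discussed above.
# Class and method names are hypothetical, not a decided MONAI Deploy API.
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class Context:
    """Shared state passed between stages (paths, metadata, intermediate results)."""
    data: Dict[str, Any]


class Operator:
    """One packaged unit of work; could be a container, a pipeline step, or both."""

    def execute(self, ctx: Context) -> None:
        raise NotImplementedError


class DicomInput(Operator):
    def execute(self, ctx: Context) -> None:
        # INPUT: locate the trained model and the study data (e.g. DICOM) to process.
        ctx.data["images"] = ["series placeholder"]


class Inference(Operator):
    def execute(self, ctx: Context) -> None:
        # EXECUTE: run the model against the prepared input.
        ctx.data["result"] = f"prediction for {len(ctx.data['images'])} series"


class DicomOutput(Operator):
    def execute(self, ctx: Context) -> None:
        # OUTPUT: convert the result to what the consumer speaks (e.g. DICOM SR/SEG).
        print("exporting:", ctx.data["result"])


def run_pipeline(operators: List[Operator]) -> None:
    ctx = Context(data={})
    for op in operators:
        op.execute(ctx)


if __name__ == "__main__":
    run_pipeline([DicomInput(), Inference(), DicomOutput()])
```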

Who is:

  • Responsible - Accountable - Consulted - Informed

Master document

MONAI Deploy WG - master doc
