Video Event

How to Simply Run Complex AI Training & Inference Workloads with Domino & NVIDIA

In this technical webinar, we address the key challenges every data science team faces when training and operationalising complex AI models at scale. Nikolay and Adam will share how Domino Data Lab facilitates knowledge discovery and collaboration in teams, enabling data scientists to use their favourite tools with a lightning-fast GPU-accelerated computational back-end from NVIDIA. They’ll demonstrate the following:

  • How to run classic machine learning workloads, such as XGBoost and Apache Spark jobs, at large scale, flipping between a simple CPU back-end and DGX A100-powered compute with a single click (a sketch of the CPU/GPU flip follows this list).
  • How to perform GPU-accelerated, parallelised hyperparameter search and model selection (see the second sketch below).
  • How the Domino reproducibility engine transparently addresses the reproducibility challenges inherent in stochastic workloads.
  • How to accelerate training and inference using cutting-edge models like BERT (see the final sketch below).
  • How to instantiate on-demand Spark clusters accelerated by NVIDIA’s RAPIDS Accelerator for Apache Spark.
  • Model operationalisation and the single-click deployment process provided by the Domino Enterprise MLOps platform.
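
For readers who want a concrete feel for the CPU-to-GPU flip mentioned in the first item, here is a minimal, illustrative sketch (not code from the webinar) of training XGBoost and switching between CPU and GPU histogram tree methods by changing a single flag. The synthetic dataset, hyperparameter values and the use_gpu switch are assumptions for illustration; within Domino, the equivalent flip would typically be made by selecting a different hardware tier for the same job.

    # Illustrative sketch only: train XGBoost and flip between CPU and GPU
    # histogram tree methods with a single flag. Dataset and hyperparameters
    # are placeholders, not values from the webinar.
    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

    dtrain = xgb.DMatrix(X_train, label=y_train)
    dvalid = xgb.DMatrix(X_valid, label=y_valid)

    use_gpu = True  # flip to False to run the same job on a CPU-only back-end
    params = {
        "objective": "binary:logistic",
        # GPU vs CPU histogram builder; newer XGBoost releases prefer
        # tree_method="hist" together with device="cuda".
        "tree_method": "gpu_hist" if use_gpu else "hist",
        "max_depth": 8,
        "eta": 0.1,
    }

    booster = xgb.train(
        params,
        dtrain,
        num_boost_round=200,
        evals=[(dvalid, "valid")],
        verbose_eval=50,
    )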

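The GPU-accelerated, parallelised hyperparameter search could, as a rough standalone sketch rather than the webinar’s actual setup, be expressed with scikit-learn’s RandomizedSearchCV wrapped around a GPU-backed XGBClassifier; the search space, iteration count and dataset below are illustrative assumptions.

    # Illustrative sketch only: a parallelised, GPU-accelerated hyperparameter
    # search using scikit-learn's RandomizedSearchCV over a GPU-backed XGBoost
    # classifier. Search space and data are placeholders.
    from scipy.stats import randint, uniform
    from sklearn.datasets import make_classification
    from sklearn.model_selection import RandomizedSearchCV
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=50_000, n_features=40, random_state=0)

    estimator = XGBClassifier(
        tree_method="gpu_hist",  # histogram building on the GPU
        n_estimators=200,
        eval_metric="logloss",
    )

    param_distributions = {
        "max_depth": randint(3, 12),
        "learning_rate": uniform(0.01, 0.3),
        "subsample": uniform(0.5, 0.5),
        "colsample_bytree": uniform(0.5, 0.5),
    }

    search = RandomizedSearchCV(
        estimator,
        param_distributions,
        n_iter=20,
        cv=3,
        scoring="roc_auc",
        n_jobs=1,  # keep candidates sequential on one GPU; raise with more GPUs/workers
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)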
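
Finally, as a rough illustration of GPU-accelerated inference with a model like BERT, the Hugging Face transformers pipeline can be pointed at a GPU device; the model checkpoint and prompt here are placeholders rather than anything shown in the webinar.

    # Illustrative sketch only: BERT inference on a GPU with the Hugging Face
    # transformers pipeline. Model choice and prompt are placeholders.
    from transformers import pipeline

    fill_mask = pipeline(
        "fill-mask",
        model="bert-base-uncased",
        device=0,  # first CUDA device; use device=-1 to fall back to CPU
    )

    predictions = fill_mask("Domino lets data scientists [MASK] models at scale.")
    for p in predictions:
        print(f"{p['token_str']}: {p['score']:.3f}")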