Generative AI has captured the public imagination, with ChatGPT bringing the benefits of AI to more people than ever before. OpenAI reportedly used 10,000 NVIDIA GPUs to train ChatGPT, and NVIDIA holds 88% of the GPU market powering model training.
At the upcoming NVIDIA GTC, the Conference for the Era of AI and the Metaverse (March 20 - 23), NVIDIA founder and CEO Jensen Huang will join OpenAI Co-founder and Chief Scientist Ilya Sutskever in a fireside chat about the future of AI.
Domino Data Lab’s unified data science platform provides the foundation that makes deep learning, generative AI, computer vision, and IoT real for enterprises.
Last October, Domino showed the industry the future of AI across the hybrid- and multi-cloud with Domino Nexus. As a Diamond sponsor at NVIDIA GTC, March 2023, we’re excited to showcase our latest capabilities around operationalizing AI use cases like deep learning and edge computing across the hybrid- and multi-cloud.
Want to connect with us in person? Domino is a sponsor at the AI City Forum on March 21st in Houston, Texas, hosted by our partner Mark III Systems. We'll also be at Carahsoft's NVIDIA GTC Federal Watch Party & Reception in Reston, Virginia on March 22nd. Already registered for GTC? Check out Domino's sponsor page!
March 23, 2023 Updates:
In case you missed it, see Domino Data Lab's Spring 2023 announcements at GTC! Also, check out NVIDIA's GTC MLOps blog post.
AI Anywhere: How Lockheed Martin Reduces Infrastructure Costs with a Hybrid Approach
Lockheed Martin is at the forefront of AI innovation — applying sophisticated, GPU-accelerated machine learning capabilities for manufacturing anomaly detection, supply chain predictive risk management, predictive maintenance for complex aircraft systems, and AI fighter drone systems. Lockheed uses a hybrid strategy that combines the unique advantages of AWS GovCloud and on-premises clusters to get the best of both worlds — greater performance and cost optimization — while reducing siloed infrastructure sprawl.
Join this session with AI leaders from Lockheed, NVIDIA, and Domino Data Lab to learn about hybrid and multi-cloud best practices for breaking down infrastructure and collaboration silos, reducing costs, and standardizing governance.
Thomas Robinson, Chief Operating Officer, Domino Data Lab
Jon-Cody Sokull, AI Developer Advocacy Manager, Lockheed Martin
Kevin Carlson, Director, Technology Office, Lockheed Martin
Deploy a Deep Learning Model from the Cloud to the Edge
As organizations look to realize their investment in AI and demonstrate its value, edge deployment is rapidly becoming mission-critical. Yet the path from a model to the factory floor is far from well-defined. In this session, MathWorks and Domino Data Lab will showcase one such workflow, deploying a deep learning model developed and trained in MATLAB on Domino to NVIDIA EGX edge nodes. We will use Domino’s enterprise MLOps platform to show how MATLAB can incorporate models developed in Python, and use extensive image datasets to train the model with GPU acceleration. MATLAB will then use its Compiler SDK to generate a Python package, which will be deployed to NVIDIA Fleet Command using Domino model publishing.
Yuval Zukerman, Director, Technical Alliances, Domino Data Lab
Brandon Johnson, Senior Solutions Architect, NVIDIA
Large-Scale Deep Learning Using Hybrid/Multi-Cloud MLOps, GPUs, OpenMPI, and DeepSpeed
As the use of deep learning and AI continues to expand, models have become increasingly complex and large. This presents a significant challenge when training models such as GPT-3 and Megatron. To address this issue, several distributed computing frameworks have been developed, with MPI (Message Passing Interface) being one of the most well-established and reliable options.
In this talk, we will delve into the MLOps considerations when choosing a framework and explore the benefits of using MPI and the DeepSpeed library. We will also demonstrate how to train large deep networks with these tools, drawing on real-world examples from proteomics and a large language model, within Domino Data Lab’s Enterprise MLOps platform. The first half of the talk introduces MPI and its use in Python; the second half covers the techniques used to train large deep networks and how to run DeepSpeed on MPI to enable them.
Subir Mansukhani, Staff Field Data Scientist, Domino Data Lab
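To give a flavor of what the session covers: in MPI-based data-parallel training, each worker computes gradients on its own data shard, and an allreduce averages them so every worker applies the same update. The plain-Python sketch below only simulates that averaging step — in practice you would use mpi4py or DeepSpeed's launcher, and the worker count and gradient values here are purely illustrative.

```python
# Conceptual simulation of the gradient averaging that an MPI allreduce
# performs in data-parallel training. Real code would call mpi4py's
# comm.allreduce or let DeepSpeed handle this; here we fake three workers.

def allreduce_mean(per_worker_grads):
    """Element-wise average across workers: what MPI_Allreduce with SUM,
    followed by division by the world size, computes on every rank."""
    world_size = len(per_worker_grads)
    length = len(per_worker_grads[0])
    return [sum(g[i] for g in per_worker_grads) / world_size
            for i in range(length)]

# Each "worker" computed gradients on its own shard of the data.
grads = [
    [0.2, -0.4, 1.0],  # worker 0
    [0.4, -0.2, 0.6],  # worker 1
    [0.6, -0.6, 0.8],  # worker 2
]

avg = allreduce_mean(grads)
print(avg)  # the identical averaged gradient every worker applies
```

Because every rank ends up with the same averaged gradient, the model replicas stay in sync after each optimizer step — the property that makes data parallelism correct.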
We're excited to show the latest on how organizations are scaling AI across the enterprise. Register for free for NVIDIA GTC now.
If you're already registered, check out Domino's sponsor page!