Domino AI Factory

Accelerate deployment, operationalization, and governance of your AI at scale.

Governed

Monitor deployed models for accuracy, drift, and performance, and quickly retrain to reduce risk and ensure business value. Track all models in a single pane of glass.

Scalable

Deploy to scalable infrastructure across on-premises, cloud, or hybrid environments to ensure high-performance inference and responsive models and apps.

Flexible

Domino supports your deployment strategy and infrastructure. Deploy in Domino, in-database, at the edge, to hosted services like SageMaker, or through existing CI/CD pipelines.

Deploy, Observe, Improve

Model Registry

Centralized Model Tracking

Track all your models regardless of where they were trained. With Domino Model Registry, you get complete lineage tracking for auditability using integrated model cards. The model registry offers a central repository of all models, streamlines iterative improvement, and facilitates stakeholder reviews and approvals for transitioning models from development to production.

Model Performance & Analysis

Model Review and Validation

Deploy with Confidence

Validate and review models with custom approval workflows to ensure models and applications are robust and audit-ready with best-in-class reproducibility. Provide reviewers with detailed lineage and model metrics to help them evaluate AI trustworthiness.

One-Click Deployment

Rapidly and Flexibly Deploy AI Solutions

Deploy models to any endpoint for both batch and real-time predictions. Deploy natively in Domino, integrate with existing CI/CD pipelines, or export models to platforms like SageMaker, Snowflake, Databricks, or NVIDIA Fleet Command. Deploy in a hybrid world: on-prem, across multiple on-prem environments, or a combination of on-prem and cloud. Share analytical dashboards, AI models, and GenAI apps built with any framework, including Dash, Flask, Streamlit, Shiny, and more.
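Models and apps deployed this way are typically exposed as HTTP endpoints. As an illustration only (not Domino's own deployment API), a real-time scoring service built with Flask, one of the frameworks named above, might look like the sketch below; the route name, payload shape, and averaging "model" are hypothetical placeholders.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    """Score one request; a real service would load a trained model instead."""
    features = request.get_json()["features"]
    score = sum(features) / len(features)  # placeholder scoring logic
    return jsonify({"churn_probability": score})

if __name__ == "__main__":
    # Bind to all interfaces so the endpoint is reachable inside a container.
    app.run(host="0.0.0.0", port=8888)
```

The same handler can serve batch traffic by accepting a list of feature vectors, which is why platforms often reuse one scoring function for both modes.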


Model Monitoring

Continuously Improve Model Performance

Track data drift and model quality degradation automatically with integrated monitoring and alerts. Continuously monitor accuracy metrics and ground truth to improve performance. Incorporate LLM evaluation frameworks. Monitor endpoint activity and health with prebuilt or custom metrics. Quickly identify and remediate issues and retrain with ease.
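One widely used data-drift metric behind alerts like these is the Population Stability Index (PSI), which compares a feature's training-time distribution to its live distribution. This is a generic sketch of the technique, not Domino's monitoring API; the sample data and the 0.25 threshold are illustrative assumptions.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor proportions at a small epsilon to avoid log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # training-time feature values
live = rng.normal(0.5, 1.0, 5000)      # simulated shifted production values

drift_score = psi(baseline, live)
# A common rule of thumb treats PSI above 0.25 as significant drift.
```

Running a check like this per feature on a schedule, and alerting when the score crosses a threshold, is the core loop that monitoring systems automate.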

Resources

Domino

Is MLOps as a Service right for you?

Streamline the management, development, deployment, and monitoring of data science models — at scale.

Ray Summit Talk

Bridging MLOps and LLMOps

Explore the enterprise integration challenges and nuances of LLMOps.


E-book

The complete guide to MLOps principles

The challenges of scaling data science and what to expect.

Blog

LLM Inference on Domino

Domino gives you control across the LLM fine-tuning cycle.