Built to let data science teams rapidly develop and deliver models.
What is a data science platform?
A data science platform is software that centralizes the people and tools used across the data science lifecycle, from development to deployment. Individual data scientists use tools, but organizations use platforms to manage key business processes.
Domino provides an open, unified data science platform to build, validate, deliver, and monitor models at scale. This accelerates research, sparks collaboration, increases iteration speed, and removes deployment friction to deliver impactful models.
Open Infrastructure Foundation accelerates the data science lifecycle with open access to tools, compute, and data.
Discover, share, and reuse data sources, including cloud databases and distributed systems like Hadoop and Spark.
Run your development and production workloads in fully configurable Docker containers to create shared, reusable, versioned environments.
Leverage Kubernetes-backed compute to scale resources vertically and horizontally—in the cloud or on-premises.
Use the latest deep learning techniques with one-click access to GPU hardware.
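The shared, versioned environments described above can be expressed as an ordinary Dockerfile. The sketch below is purely illustrative—the base image, package pins, and layout are assumptions, not Domino's actual environment definitions:

```dockerfile
# Hypothetical environment definition: one image that both development
# workspaces and production jobs run on, so results are reproducible.
FROM python:3.11-slim

# Pin package versions so every run of the environment is identical.
RUN pip install --no-cache-dir \
    numpy==1.26.4 \
    pandas==2.2.2 \
    scikit-learn==1.5.0

# Project code is mounted or copied in at run time.
WORKDIR /workspace
```

Because the environment is just an image, the same definition can be revised, shared, and rolled back like any other versioned artifact.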
Lab accelerates research at scale.
Spin up interactive workspaces with one click—using any web-based tool, such as Jupyter, RStudio, SAS, H2O, and Zeppelin.
Run many training and tuning jobs simultaneously, while automatically tracking key model metrics and comparing results side by side.
Automatically preserve an experiment's context. Each time an experiment is run, Domino captures the full set of model dependencies (data, code, packages/tools, parameters, and results) and the discussion of the experiment's results.
Scale compute infrastructure for model development needs—size up machines with one click or run many training jobs in parallel.
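The pattern behind these bullets—many simultaneous training runs whose metrics are collected and compared side by side—can be sketched in plain Python. The `train` function, its toy accuracy metric, and `run_experiments` are illustrative stand-ins, not Domino APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def train(params):
    """Stand-in for one training job: fit with the given hyperparameters
    and report a key metric. (Hypothetical; a real job would call your
    framework of choice and log metrics to the platform.)"""
    # Toy deterministic "accuracy" so the example is self-contained.
    accuracy = 1.0 - abs(params["lr"] - 0.01) * 10
    return {"params": params, "accuracy": round(accuracy, 3)}

def run_experiments(grid):
    """Launch many runs in parallel and collect their metrics so the
    results can be ranked and compared side by side."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(train, grid))
    # Best tracked metric first.
    return sorted(results, key=lambda r: r["accuracy"], reverse=True)

grid = [{"lr": lr} for lr in (0.001, 0.01, 0.1)]
leaderboard = run_experiments(grid)
best = leaderboard[0]  # the run with lr=0.01 scores highest here
```

The same fan-out/collect shape applies whether the jobs run in local threads, as here, or as separate containers on scaled-out hardware.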
Launchpad delivers and monitors models to maximize impact.
Deliver model products to business stakeholders as scheduled reports, Flask and Shiny apps, or user-friendly web forms.
Deliver models as enterprise-grade batch or real-time APIs for integration into downstream systems.
Understand usage and performance of models by tracking engagement and key statistics over time.
Automatically preserve the link from each delivered model product back to its original development project for rapid iteration.
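The batch versus real-time distinction above comes down to the shape of the scoring call: one record per request, or a whole collection at once. A minimal sketch, with a hypothetical stand-in `predict` function rather than a real trained model:

```python
def predict(record):
    """Stand-in scoring function; a real deployment would load a
    trained model. (Hypothetical logic for illustration only.)"""
    return 1 if record["amount"] > 100 else 0

def score_realtime(record):
    """Real-time API shape: one record in, one prediction out,
    suitable for a synchronous downstream call."""
    return {"result": predict(record)}

def score_batch(records):
    """Batch API shape: score a whole collection in one scheduled run."""
    return [predict(r) for r in records]

single = score_realtime({"amount": 250})                # {"result": 1}
batch = score_batch([{"amount": 250}, {"amount": 50}])  # [1, 0]
```

A downstream system would reach these through an HTTP endpoint; the payload and response shapes shown here are the part worth agreeing on early.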
Control Center manages system-level complexity with transparency into all activity.
Gain visibility into all in-flight projects, recent progress, and insights.
Understand how and where different software packages and tools are used across the organization.
Manage and attribute compute costs with a granular view of hardware resource usage and with a flexible admin interface.
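Cost attribution of the kind described above is, at its core, a roll-up of granular usage records against hardware rates. A minimal sketch—the record shape, tier names, and hourly rates are all assumptions for illustration, not real metering data:

```python
from collections import defaultdict

# Hypothetical hourly rates per hardware tier.
RATES = {"small": 0.10, "gpu": 2.50}

def attribute_costs(usage):
    """Roll granular hardware-usage records up into a per-project
    cost view an administrator can review and charge back."""
    costs = defaultdict(float)
    for record in usage:
        costs[record["project"]] += record["hours"] * RATES[record["tier"]]
    # Round to cents to avoid floating-point noise in the report.
    return {project: round(total, 2) for project, total in costs.items()}

usage = [
    {"project": "churn-model", "tier": "gpu", "hours": 4},
    {"project": "churn-model", "tier": "small", "hours": 10},
    {"project": "forecasting", "tier": "small", "hours": 6},
]
costs = attribute_costs(usage)  # {"churn-model": 11.0, "forecasting": 0.6}
```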
Knowledge Center sparks breakthroughs and creates a knowledge flywheel.
Search and discover relevant projects, files, and discussions to understand existing work on a given topic or tool.
Create centralized template projects that follow best practices for new data scientists to build upon.
Automatically capture all dependencies for every experiment with our patent-pending Reproducibility Engine, including data, code, environment, parameters, results, and discussion.
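The idea of capturing every dependency of a run can be pictured as assembling the six categories named above into one manifest with a stable identity. This is a hypothetical sketch of the concept, not the Reproducibility Engine itself; every field name and value is illustrative:

```python
import hashlib
import json

def capture_run(data_version, code_commit, environment,
                parameters, results, discussion):
    """Assemble the full context of one experiment run (data, code,
    environment, parameters, results, discussion) into a manifest.
    (Hypothetical sketch of the idea, not a Domino API.)"""
    manifest = {
        "data": data_version,
        "code": code_commit,
        "environment": environment,
        "parameters": parameters,
        "results": results,
        "discussion": discussion,
    }
    # Hashing the canonical JSON gives the run a reproducible identity:
    # the same inputs always produce the same id.
    blob = json.dumps(manifest, sort_keys=True).encode()
    manifest["id"] = hashlib.sha256(blob).hexdigest()[:12]
    return manifest

run = capture_run(
    data_version="s3://example-bucket/training/v3",  # hypothetical path
    code_commit="abc123",
    environment="python-3.11-scikit",
    parameters={"lr": 0.01},
    results={"accuracy": 0.91},
    discussion="Baseline beat the previous run by 2 points.",
)
```

Because the id is derived from the captured content, any later run with identical dependencies maps to the same manifest, which is what makes an experiment reproducible rather than merely logged.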