Domino accelerates the development and delivery of models with key capabilities: infrastructure automation, seamless collaboration, and automated reproducibility. This greatly increases the productivity of data scientists and removes bottlenecks in the data science lifecycle.

What is Domino?

Compute Grid & Environment Management enables more data science and less DevOps

  • Avoid overwhelming your local machine by leveraging scalable compute on powerful, centralized hardware, in the cloud or on premises.
  • Eliminate barriers to the latest deep learning techniques with one-click access to GPU hardware.
  • Reduce software configuration time by running your code in Docker containers configured as shared, reusable, revisioned Compute Environments.
Collaboration Hub & Reproducibility Engine

  • Reduce key-person and operational risk by automatically preserving all key project information with Domino’s containerization and dependency mapping: data, software configurations, code, parameters, results, discussion, and delivered artifacts are saved as they happen.
  • Streamline knowledge management with all projects stored, searchable, and forkable.
  • Avoid a cold start by using native integrations with popular source control systems like GitHub.
Data Science Workbench

  • Work instantly by spinning up interactive workspaces with one click, using the tools you already know and love, e.g., Jupyter, RStudio, SAS, and Zeppelin.
  • Tackle complex problems by running, tracking, and comparing batch experiments in parallel, in any language, including commercial languages like SAS or MATLAB.
  • Minimize changes to your existing workflow by connecting to any data source, including cloud databases and distributed systems like Hadoop and Spark.
  • Instantly leverage all popular tools with our pre-packaged Domino Analytics Distribution (which includes database drivers, Anaconda Python, popular deep learning packages, visualization packages, etc.), or customize your own environment without risk of affecting other users.
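The batch-experiment workflow described above can be sketched in plain Python. The experiment and its metric here are stand-ins, not Domino's execution API:

```python
# Illustrative sketch: run parameterized experiments in parallel and
# rank their results. run_experiment is a hypothetical stand-in for a
# real training-and-scoring job.
from concurrent.futures import ThreadPoolExecutor

def run_experiment(params):
    """Toy experiment: score one (lr, depth) configuration."""
    lr, depth = params["lr"], params["depth"]
    score = 1.0 - abs(lr - 0.1) - 0.01 * depth  # stand-in for a real metric
    return {**params, "score": round(score, 4)}

def compare(param_grid):
    """Run all configurations in parallel and sort best-first."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_experiment, param_grid))
    return sorted(results, key=lambda r: r["score"], reverse=True)

if __name__ == "__main__":
    grid = [{"lr": lr, "depth": d} for lr in (0.05, 0.1, 0.2) for d in (3, 5)]
    print(compare(grid)[0])  # the best-scoring configuration
```

In a platform setting, each configuration would run as its own tracked batch job rather than a local thread, so results stay comparable across runs.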

Deliver and manage scalable data products faster

  • Eliminate the delay and risk of recoding models by deploying your Python and R models as batch or real-time APIs.
  • Support web-scale processes with horizontal scalability.
  • Minimize delivery risk with instant rollback to previous model versions.
  • Measure ROI faster: split traffic across versions to run A/B tests.
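Serving a model as a real-time API with a traffic split between versions can be sketched with Flask (which the platform supports for web apps). The models, endpoint, and split ratio here are hypothetical stand-ins:

```python
# Illustrative sketch (not Domino's deployment API): serve two model
# versions behind one endpoint and split traffic for A/B testing.
import random

from flask import Flask, jsonify, request

app = Flask(__name__)

def model_v1(x):
    return 2.0 * x        # stand-in for the current production model

def model_v2(x):
    return 2.0 * x + 0.5  # stand-in for the candidate version

VERSIONS = {"v1": model_v1, "v2": model_v2}
TRAFFIC_SPLIT = {"v1": 0.9, "v2": 0.1}  # route 10% of traffic to the candidate

@app.route("/predict", methods=["POST"])
def predict():
    x = float(request.get_json()["x"])
    version = random.choices(list(TRAFFIC_SPLIT),
                             weights=list(TRAFFIC_SPLIT.values()))[0]
    return jsonify({"version": version, "prediction": VERSIONS[version](x)})

if __name__ == "__main__":
    app.run(port=8080)
```

Rolling back then amounts to routing 100% of traffic to the previous version, without redeploying code.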
Share Reports and Dashboards

Deliver powerful insights to stakeholders

  • Communicate business benefits. Publish visualizations built with open source data science tools, including knitr, Plotly, and D3, or with commercial tools like Tableau.
  • Expose complex results in a business-friendly manner. Publish interactive dashboards and web apps using Shiny and Flask.
  • Remove low-value admin work. Schedule recurring tasks to update reports: serve results through the web or send them to stakeholders via email.
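A scheduled reporting task of the kind described above reduces to a function that renders current results on each run; a minimal stdlib sketch, where the metrics and template are hypothetical:

```python
# Illustrative sketch: render an HTML report from current results so a
# scheduler (cron, or a platform's job scheduler) can regenerate it on
# a recurring basis. The metrics and template are stand-ins.
from datetime import date
from string import Template

REPORT_TEMPLATE = Template("""\
<html><body>
  <h1>Weekly model report: $report_date</h1>
  <ul>$rows</ul>
</body></html>
""")

def render_report(metrics, report_date=None):
    """Build the HTML body for one scheduled run."""
    report_date = report_date or date.today().isoformat()
    rows = "".join(f"<li>{name}: {value}</li>"
                   for name, value in metrics.items())
    return REPORT_TEMPLATE.substitute(report_date=report_date, rows=rows)

if __name__ == "__main__":
    html = render_report({"accuracy": 0.94, "auc": 0.91},
                         report_date="2024-01-08")
    print(html)  # serve through the web, or attach to a stakeholder email
```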

Enterprise Identity and Access Management

  • Seamlessly manage the platform as an integral asset in the enterprise IT stack, with cost controls and governance.
  • Manage security for consuming and modifying production models.
  • Automate identity management with LDAP integration.
  • Manage hand-offs, sensitive information, and regulatory standards with integrated, granular user access management.
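Granular access management of the sort described here comes down to mapping roles to permissions and checking them at every sensitive operation. A simplified illustration follows; the roles and permissions are hypothetical, not Domino's actual security model:

```python
# Illustrative role-based access check; roles and permissions are
# hypothetical stand-ins, not Domino's actual security model. In a real
# deployment, roles would come from the enterprise identity provider
# (e.g., LDAP group membership) rather than a local table.
ROLE_PERMISSIONS = {
    "viewer":    {"consume_model"},
    "developer": {"consume_model", "modify_model"},
    "admin":     {"consume_model", "modify_model", "manage_users"},
}

def is_allowed(user_roles, permission):
    """Return True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

Checking `is_allowed` before serving or updating a production model is what separates "consuming" from "modifying" access in the bullets above.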