Self-Serve Elastic Compute
One-click access to scalable compute
Say goodbye to DevOps learning curves and wait times. Stop trying to guess compute needs in advance. With Domino, you can spin up dynamically scaling, Kubernetes-based compute clusters in just a few clicks. You can easily access distributed frameworks such as Spark, Ray, and Dask, as well as NVIDIA GPUs, to power the most computationally hungry algorithms.
IT immediately benefits from centralized infrastructure management that optimizes resource use and facilitates chargeback to business units based on usage.
Unified Data Access Layer
Easy, governed, and secure access to data
Data is the fuel for all models, but data access and preparation are a constant struggle. Not with Domino.
Domino’s secure Data Connectors provide rapid, governed access so you can quickly get to work while adhering to data access policies. Domino’s Data Access Library unifies access patterns for disparate data types through SQL syntax. Analytic-ready data, along with its associated metadata, is easily saved as a result set for later reuse, saving time and compute costs.
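To make the pattern concrete, here is a minimal, purely illustrative sketch of what SQL-syntax access with result-set reuse looks like in principle. It uses Python's built-in SQLite as a stand-in data source; the `query` helper and `result_set` structure are hypothetical and are not Domino's actual Data Access Library API.

```python
import sqlite3

# Illustrative only: SQLite stands in for any connected data source.
# The helper and result-set shape below are hypothetical, not Domino APIs.

def query(connection, sql):
    """Run a SQL query and return column metadata plus rows."""
    cur = connection.execute(sql)
    columns = [c[0] for c in cur.description]
    return columns, cur.fetchall()

# Build a toy data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("west", 250.0)])

# One SQL syntax, regardless of the underlying store.
cols, rows = query(
    conn,
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region",
)

# Save the analytic-ready result set with its column metadata for reuse,
# so later work reads the cached result instead of re-running the query.
result_set = {"columns": cols, "rows": rows}
```

The point of the pattern is the last step: once a query result and its metadata are captured together, downstream work can reuse them without touching the source system again.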
Frequently Asked Questions
Can Domino support both on-prem and cloud infrastructure?
Yes. Because Domino is fully Kubernetes native, we can support both on-prem and cloud infrastructure. As a result, Domino aligns with your current and future IT strategy and infrastructure vision, and can be a key enabler as you move toward a full-cloud or hybrid on-prem and cloud deployment.
Does Domino support GPUs?
Yes. With Domino you can centrally provision GPU resources for data scientists to use in their projects. These resources are shared across all users, maximizing utilization and return on investment.
Is Spark supported in Domino?
Yes, Domino supports Spark, as well as other distributed computing frameworks like Ray and Dask.