Subject archive for "gpu"

MLOps

How to Use GPUs & Domino 5.3 to Operationalize Deep Learning Inference

Deep learning is a type of machine learning and artificial intelligence (AI) that imitates how humans learn by example. While that sounds complex, the basic idea behind deep learning is simple. Deep learning models are taught to classify data from images (such as “cat vs. dog”), sound (“meow vs. bark”), or text (“tabby vs. schnauzer”). These models build a hierarchy of layers, each refining the knowledge gained from the preceding layer, and training iterates until the model’s accuracy goal is reached. Deep learning models often achieve accuracy that rivals human judgment, in a fraction of the time.
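As a minimal, hypothetical sketch of the kind of GPU-backed inference the post walks through, the snippet below loads a pretrained image classifier with PyTorch and runs a single prediction on a GPU when one is available. The model choice and input shape are illustrative assumptions, not Domino-specific code:

```python
import torch
from torchvision import models

# Use a GPU if one is available; fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a pretrained classifier (ResNet-18 is an arbitrary example)
# and put it in inference mode on the chosen device.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval().to(device)

# A dummy batch standing in for a preprocessed "cat vs. dog" image:
# 1 image, 3 channels, 224x224 pixels.
batch = torch.randn(1, 3, 224, 224, device=device)

with torch.no_grad():
    logits = model(batch)
    predicted_class = logits.argmax(dim=1).item()

print(f"Predicted class index: {predicted_class}")
```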

By Vinay Sridhar · 6 min read

Python

Snowflake and RAPIDS: Taking On-Demand Computing by Storm

With data hailed as the oil of the 21st century and data science labeled the sexiest job of the century, we're seeing a sharp rise in data science and machine learning applications in every field. From IT and finance to business operations, predictive analytics is disrupting every industry.

By Richard Ecker · 8 min read

Perspective

Increasing model velocity for complex models by leveraging hybrid pipelines, parallelization and GPU acceleration

Data science is facing an overwhelming demand for CPU cycles as scientists try to work with datasets that are growing in complexity faster than Moore’s Law can keep up. Given the need to iterate and retrain quickly, model complexity has been outpacing available CPU resources for several years, and the gap is widening. The data science industry will need to embrace parallelization and GPU processing to efficiently handle increasingly complex datasets.

By Nikolay Manchev · 10 min read

Machine Learning

Powering Up Machine Learning with GPUs

Whether you are a machine learning enthusiast or a ninja data scientist training models for all sorts of applications, you have probably heard of the need to use graphics processing units (GPUs) to squeeze the best performance out of training and scaling your models. In short, a training task that takes a few minutes on a CPU with a small dataset may take hours, days, or even weeks on larger datasets if a GPU is not used. GPU acceleration is a topic we have previously addressed; see "Faster Deep Learning with GPUs and Theano".
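As a rough illustration of that CPU-to-GPU gap (a standalone sketch, not code from the linked post), the snippet below times the same large matrix multiplication on the CPU and, when available, on a CUDA GPU with PyTorch; the matrix size is an arbitrary assumption:

```python
import time
import torch

def time_matmul(device: torch.device, size: int = 4096) -> float:
    """Time a single size x size matrix multiplication on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()  # make sure setup work has finished
    start = time.perf_counter()
    _ = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to complete
    return time.perf_counter() - start

cpu_time = time_matmul(torch.device("cpu"))
print(f"CPU: {cpu_time:.3f} s")

if torch.cuda.is_available():
    gpu_time = time_matmul(torch.device("cuda"))
    print(f"GPU: {gpu_time:.3f} s ({cpu_time / gpu_time:.1f}x faster)")
```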

By Dr J Rogel-Salazar · 14 min read

Ray

Spark, Dask, and Ray: Choosing the Right Framework

Apache Spark, Dask, and Ray are three of the most popular frameworks for distributed computing. In this blog post, we look at their history, intended use cases, strengths, and weaknesses, to understand how to select the most appropriate framework for specific data science use cases.

By Nikolay Manchev · 15 min read

Company Updates

Democratizing GPU Access for MLOps: Domino Expands Support to NVIDIA AI Enterprise

The trend from our customers is clear: GPU-based training is a competitive advantage for enterprises building increasingly sophisticated models. We see this in everything from NLP for predictive supply chains and image processing for histopathology in clinical trials to predictive maintenance for energy infrastructure and automatic claims resolution in insurance.

By Thomas Robinson · 5 min read

Subscribe to the Domino Newsletter

Receive data science tips and tutorials from leading data science practitioners, delivered right to your inbox.


By submitting this form, you agree to receive communications from Domino related to products and services in accordance with Domino's privacy policy, and you may opt out at any time.