GPU Computing for Data Science

Video Event

When working with big data or complex algorithms, we often look to parallelize our code to optimize runtime. By taking advantage of a GPU's 1,000+ cores, a data scientist can scale out solutions faster and less expensively than with traditional CPU cluster computing. In this recorded webinar, we present ways to incorporate GPU computing into computationally intensive tasks in both Python and R.

What’s inside:

  • Why use GPUs?
  • Example application in data science
  • Programming your GPU
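As a taste of the approach covered in the video, here is a minimal Python sketch of GPU-style array parallelism. It assumes CuPy (a GPU library whose API mirrors NumPy) may be installed; the fallback to NumPy is an illustrative assumption so the snippet runs on CPU-only machines, not necessarily the tooling used in the webinar.

```python
import numpy as np

# CuPy mirrors the NumPy API, so the same vectorized code can run on a
# GPU's many cores when one is available. Falling back to NumPy here is
# an assumption for illustration, keeping the sketch runnable anywhere.
try:
    import cupy as xp  # GPU arrays, if CUDA and CuPy are installed
except ImportError:
    xp = np            # CPU fallback

def pairwise_distances(points):
    """Compute the full Euclidean distance matrix with one parallel
    broadcasted operation instead of a Python double loop."""
    diff = points[:, None, :] - points[None, :, :]  # (n, n, dims)
    return xp.sqrt((diff ** 2).sum(axis=-1))

pts = xp.asarray(np.random.default_rng(0).random((500, 3)))
d = pairwise_distances(pts)
print(d.shape)  # (500, 500)
```

Because the whole distance matrix is expressed as array operations, the library can dispatch the work across all available cores, CPU or GPU, with no change to the code.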

Watch the Video
