Summary

When working with big data or complex algorithms, we often look to parallelize our code to reduce runtime. By taking advantage of a GPU's 1000+ cores, a data scientist can scale out solutions faster and less expensively than with traditional CPU cluster computing. In this recorded webinar, we present ways to incorporate GPU computing to complete computationally intensive tasks in both Python and R.
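As a small taste of the kind of GPU offloading discussed, here is a minimal Python sketch (not taken from the webinar itself) that moves a large matrix multiplication onto the GPU with the CuPy library. It assumes CuPy is installed and a CUDA-capable GPU is available; the array sizes and tolerance are illustrative only.

```python
# Illustrative sketch: compare a CPU (NumPy) and GPU (CuPy) matrix multiply.
# Assumes CuPy is installed and a CUDA-capable GPU is present.
import numpy as np
import cupy as cp

n = 4000
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

# CPU: plain NumPy matrix multiply
c_cpu = a_cpu @ b_cpu

# GPU: copy the arrays to device memory and multiply there
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu

# Copy the result back to the host and check it agrees with the CPU result
# (loose tolerance because float32 accumulation differs between devices)
np.testing.assert_allclose(cp.asnumpy(c_gpu), c_cpu, rtol=1e-2)
```

The same pattern — keep the data on the device, do the heavy arithmetic there, and only copy results back when needed — is the core idea behind most GPU-accelerated data science workflows, whether in Python or R.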

What’s inside:

  1. Why use GPUs?
  2. Example application in data science
  3. Programming your GPU

Watch the Webcast