When working with big data or complex algorithms, we often parallelize our code to reduce runtime. By taking advantage of a GPU's 1,000+ cores, a data scientist can scale out solutions faster and less expensively than with traditional CPU cluster computing. In this recorded webinar, we present ways to incorporate GPU computing into computationally intensive tasks in both Python and R.
Why use GPUs?
Example application in data science
Programming your GPU
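As a taste of what programming a GPU from Python can look like, here is a minimal sketch using the CuPy library, which mirrors the NumPy API on the GPU. This is an illustrative assumption, not the webinar's own code: it assumes CuPy and a CUDA-capable GPU are available, and falls back to NumPy on the CPU otherwise so the same code runs either way.

```python
import numpy as np

try:
    # CuPy executes array operations on the GPU with a NumPy-like API.
    # Assumes CuPy is installed and a CUDA GPU is present (an assumption
    # for this sketch, not a requirement stated in the webinar).
    import cupy as xp
except ImportError:
    # Fall back to NumPy so the example still runs on a CPU-only machine.
    xp = np

def dot_sum(a, b):
    # Element-wise multiply then reduce; on a GPU this work is spread
    # across thousands of cores instead of a handful of CPU cores.
    return float(xp.sum(a * b))

n = 1_000_000
a = xp.arange(n, dtype=xp.float64)   # 0, 1, ..., 999_999
b = xp.ones(n, dtype=xp.float64)
result = dot_sum(a, b)               # sum of 0..999_999 = 499_999_500_000
```

Because the CuPy and NumPy APIs match, the same array code can move between CPU and GPU by swapping the module, which is the kind of low-friction GPU adoption the webinar discusses.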
Watch the Video