Your Generative AI Springboard

Sprint Zero

Code. Webinars. Ideas. It all starts here.

Domino's Sprint Zero offers you, the AI and data science practitioner, the resources you need to get started and stay ahead in the world of GenAI. From prompt engineering with LLM APIs to fine-tuning and hosting an LLM on Domino, it's all here. Dive in!

First Steps


Blog, Webinar and Repo

RAG: An Introduction

Learn how to leverage Retrieval-Augmented Generation (RAG) with Domino.
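If you want the gist before diving in, here is a minimal RAG sketch, assuming the openai and numpy packages and an OPENAI_API_KEY in your environment; the toy documents and model names are illustrative, not the ones used in the Domino reference project. Embed your documents once, retrieve the passages closest to a question, and hand them to the model as context.

# Minimal RAG loop: embed documents, retrieve the closest ones, answer with that context.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = [
    "Domino projects keep code, data, and environments versioned together.",
    "A Domino Model API exposes a Python function as a REST endpoint.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question, top_k=1):
    q = embed([question])[0]
    # Cosine similarity between the question and every document.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(docs[i] for i in np.argsort(sims)[::-1][:top_k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("How do I publish a model in Domino?"))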


Webinar and Repo

Prompt Engineering Jumpstart

Understand the whys and see the hows of prompt engineering for large language models.
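As a small taste of what the jumpstart covers, the sketch below shows the basic levers of prompt engineering against a chat-style API: a system role, a strict output format, and a one-shot example. It assumes the openai package and an API key; the model name is illustrative.

# Prompt structure: a system role, a strict output format, and one worked example.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a terse classifier. Reply with POSITIVE or NEGATIVE only."},
    # One-shot example that anchors the expected output format.
    {"role": "user", "content": "Review: The dashboard loads instantly and the docs are great."},
    {"role": "assistant", "content": "POSITIVE"},
    {"role": "user", "content": "Review: The job queue stalled twice during the demo."},
]

resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, temperature=0)
print(resp.choices[0].message.content)  # expected: NEGATIVE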


Webinar

Supercharging your model with Generative AI

Learn how to develop and deploy an LLM app in Domino.
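The shape of such an app can be surprisingly small. The sketch below is a generic pattern, not the webinar's exact project: a thin Flask endpoint wrapping an LLM call, which you could then host as a Domino app. It assumes the flask and openai packages; the route, port, and model name are illustrative.

# A thin web endpoint around an LLM call.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

@app.route("/generate", methods=["POST"])
def generate():
    prompt = request.get_json()["prompt"]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"completion": resp.choices[0].message.content})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8888)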


NanoGPT

Start Small with NanoGPT

Generate text in the style of Homer’s Iliad.
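Sampling from a trained nanoGPT checkpoint takes only a few lines. The sketch below mirrors nanoGPT's sample.py and assumes a character-level model already trained on the Iliad; the checkpoint path, data directory, and prompt are illustrative.

# Sample from a character-level nanoGPT checkpoint (mirrors the repo's sample.py).
import pickle
import torch
from model import GPT, GPTConfig  # nanoGPT's model.py

ckpt = torch.load("out-iliad/ckpt.pt", map_location="cpu")
model = GPT(GPTConfig(**ckpt["model_args"]))
state_dict = {k.removeprefix("_orig_mod."): v for k, v in ckpt["model"].items()}
model.load_state_dict(state_dict)
model.eval()

# Character-level vocabulary written by the dataset's prepare.py.
with open("data/iliad_char/meta.pkl", "rb") as f:
    meta = pickle.load(f)
stoi, itos = meta["stoi"], meta["itos"]

start = "Sing, O goddess, the anger of Achilles"
x = torch.tensor([[stoi[c] for c in start]], dtype=torch.long)
y = model.generate(x, max_new_tokens=300, temperature=0.8, top_k=200)
print("".join(itos[int(i)] for i in y[0]))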

Advanced Techniques


Webinar

Find the Right LLM for the Job

Learn how to work with multiple LLM providers and how Jupyter AI can help you work faster!
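For a feel of the Jupyter AI side, the two notebook cells below show the %%ai magic in action, assuming the jupyter-ai-magics package is installed and the relevant provider API key is set; the provider:model ID is one example of the syntax, and the %%ai line goes at the top of its own cell.

# First cell: load the Jupyter AI magics.
%load_ext jupyter_ai_magics

%%ai openai-chat:gpt-3.5-turbo
Explain the difference between LoRA and full fine-tuning in two sentences.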


Blog and Repo

LLM Cascades with Mixture of Thought

Full LLM accuracy at 60% lower cost than the leading models.
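The core idea behind a cascade, in a hedged sketch: sample several answers from a cheap model and only escalate to a stronger, pricier model when those samples disagree. The blog's mixture-of-thought variant goes further by mixing reasoning styles before checking agreement; the models, vote threshold, and question below are illustrative, and the sketch assumes the openai package.

# Consistency-based cascade: accept the cheap model only when its sampled answers agree.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def ask(model, question, n=1, temperature=0.7):
    resp = client.chat.completions.create(
        model=model,
        n=n,
        temperature=temperature,
        messages=[{"role": "user", "content": question + "\nAnswer with a single number."}],
    )
    return [choice.message.content.strip() for choice in resp.choices]

def cascade(question, cheap="gpt-4o-mini", strong="gpt-4o", k=5, min_votes=4):
    samples = ask(cheap, question, n=k)
    answer, votes = Counter(samples).most_common(1)[0]
    if votes >= min_votes:  # the cheap model is self-consistent, so keep its answer
        return answer
    return ask(strong, question, temperature=0)[0]  # otherwise pay for the stronger model

print(cascade("A train covers 120 km in 1.5 hours. What is its average speed in km/h?"))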


Webinar

Fine-Tuning Large Language Models

Optimizing with Quantization and LoRA in Domino.
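In code, the core of the approach looks roughly like the sketch below: load the base model in 4-bit and attach LoRA adapters so only a small fraction of the weights is trained. It assumes the transformers, peft, and bitsandbytes packages plus a GPU; the model name, target modules, and hyperparameters are illustrative.

# Quantize the base model to 4-bit, then train only the low-rank LoRA adapters.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", quantization_config=bnb)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attach adapters to the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights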


Webinar

Advanced Parameter-Efficient Fine-Tuning

Scaling Fine-Tuning with Ray and DeepSpeed ZeRO.
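One common way to put those pieces together, sketched under the assumption of a Ray cluster with GPUs and the ray, transformers, and deepspeed packages: run a Hugging Face training loop inside Ray Train's TorchTrainer and let a DeepSpeed ZeRO-3 config shard the optimizer state, gradients, and weights. This is an outline of the pattern rather than the webinar's exact setup, and the training loop body is left as a stub.

# Outline: Ray Train distributes the workers, DeepSpeed ZeRO-3 shards the model state.
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
from transformers import TrainingArguments

ds_config = {
    "zero_optimization": {"stage": 3},          # ZeRO-3: shard optimizer, gradients, weights
    "bf16": {"enabled": "auto"},
    "train_micro_batch_size_per_gpu": "auto",
}

def train_loop_per_worker():
    args = TrainingArguments(output_dir="out", per_device_train_batch_size=1, deepspeed=ds_config)
    # ...build the model, dataset, and Trainer here, then call trainer.train()

trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
)
trainer.fit()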


Repo

Llama 2 Chatbot

Multi-phase Supervised Fine-tuning with Llama 2.
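The first supervised phase usually boils down to something like the sketch below, which uses the trl library's SFTTrainer; the dataset, model name, and output directory are placeholders rather than the repo's exact choices, and argument names have shifted between trl releases, so treat this as an outline.

# Phase one of supervised fine-tuning: causal-LM loss on formatted prompt/response text.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")  # placeholder data

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",              # base model for the first phase
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="llama2-sft-phase1",            # later phases start from this checkpoint
        dataset_text_field="text",                 # column holding the formatted conversations
    ),
)
trainer.train()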

Code and Repo

LLM Inference on Domino

Domino gives you control across the entire LLM fine-tuning lifecycle, including inference.
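As a minimal illustration of the serving side, the sketch below loads a fine-tuned checkpoint once and wraps generation in a predict function, which is the shape you would expose behind a Domino Model API endpoint. It assumes the transformers package; the checkpoint path and sampling settings are illustrative.

# Load a fine-tuned checkpoint once, then serve generations through a predict() function.
from transformers import pipeline

generator = pipeline("text-generation", model="./llama2-sft-phase1")  # local path or Hub ID

def predict(prompt: str, max_new_tokens: int = 128) -> str:
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7)
    return out[0]["generated_text"]

print(predict("Summarize what LoRA changes about fine-tuning:"))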


Blog Post and Repo

Overcome the challenges of fine-tuning large language models (LLMs)

See how quantization and LoRA can help you deliver LLM power with fewer resources on Domino.


Repo

Create a Q&A Agent

See how to use OpenAI’s API and Pinecone on Domino.
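To sketch the shape of such an agent (assuming the openai and pinecone packages, an OpenAI API key, and a Pinecone index that already holds embeddings of your documents with their text stored in metadata; index and model names are illustrative): embed the question, query Pinecone for the nearest chunks, and answer from them.

# Q&A over an existing Pinecone index: embed the question, fetch neighbors, answer from them.
from openai import OpenAI
from pinecone import Pinecone

client = OpenAI()
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("docs-index")  # pre-populated index

def qa(question, top_k=3):
    q_vec = client.embeddings.create(
        model="text-embedding-3-small", input=[question]
    ).data[0].embedding
    res = index.query(vector=q_vec, top_k=top_k, include_metadata=True)
    context = "\n".join(m.metadata["text"] for m in res.matches)  # chunk text kept in metadata
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer from the context; say if it is not there."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(qa("How do I schedule a job in Domino?"))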