Subject archive for "fine-tuning"
Crossing the Frontier: LLM Inference on Domino
Generative AI is transforming industries, but deploying LLMs is hard. See how Domino simplifies LLM hosting and inference.
By Subir Mansukhani · 10 min read
Breaking Generative AI Barriers with Efficient Fine-Tuning Techniques
This blog post explores the challenges of fine-tuning large language models (LLMs) and introduces resource-optimized, parameter-efficient techniques such as quantization, LoRA, and the Zero Redundancy Optimizer (ZeRO). By fine-tuning Falcon-7B, Falcon-40B, and GPT-J-6B, we demonstrate how these techniques improve performance, cost-effectiveness, and resource utilization in LLM fine-tuning. The post also discusses the future of fine-tuning and its potential to unlock new possibilities in enterprise AI applications.
By Subir Mansukhani · 9 min read
Subscribe to the Domino Newsletter
Receive data science tips and tutorials from leading data science practitioners, right to your inbox.
By submitting this form you agree to receive communications from Domino related to products and services in accordance with Domino's privacy policy, and you may opt out at any time.