Subject archive for "large-language-models"
OpenAI’s Wake-up Call to the World: Future-Proof Your Generative AI Strategy Now
OpenAI’s board appears to have dealt the organization a fatal blow. Whether or not Sam Altman deserved to be let go, the organization will never be the same in the eyes of its customers, employees, investors, and society at large. The unfolding debacle reveals the dangers of relying heavily on individual AI partners and the limits of outsourcing AI capabilities generally.
By Kjell Carlsson · 3 min read
Crossing the Frontier: LLM Inference on Domino
Generative AI is transforming industries, but deploying LLMs remains difficult. See how Domino simplifies LLM hosting and inference.
By Subir Mansukhani · 10 min read
Breaking Generative AI Barriers with Efficient Fine-Tuning Techniques
This blog post explores the challenges of fine-tuning large language models (LLMs) and introduces resource-optimized and parameter-efficient techniques such as quantization, LoRA, and the Zero Redundancy Optimizer (ZeRO). By fine-tuning Falcon-7B, Falcon-40B, and GPT-J-6B, we demonstrate how these techniques improve performance, cost-effectiveness, and resource utilization in LLM fine-tuning. The post also discusses the future of fine-tuning and its potential to unlock new possibilities in enterprise AI applications.
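The parameter savings behind LoRA can be sketched with simple arithmetic: instead of updating a full weight matrix, LoRA trains two small low-rank factors. A minimal illustration, using an illustrative Falcon-7B-style hidden dimension and LoRA rank that are assumptions, not figures from the post:

```python
# Minimal sketch of the parameter savings behind LoRA (Low-Rank Adaptation).
# Instead of updating a full d_out x d_in weight matrix W, LoRA learns two
# small matrices B (d_out x r) and A (r x d_in) and applies W + B @ A.
# The dimensions below are illustrative, not taken from the blog post.

def lora_param_counts(d_out: int, d_in: int, rank: int) -> tuple[int, int]:
    """Return (full fine-tune params, LoRA adapter params) for one layer."""
    full = d_out * d_in                      # every weight is trainable
    lora = d_out * rank + rank * d_in        # only the two low-rank factors
    return full, lora

# A 4544-wide hidden layer with a commonly used LoRA rank of 8:
full, lora = lora_param_counts(4544, 4544, 8)
print(full, lora, f"{lora / full:.2%}")  # LoRA trains well under 1% of the layer
```

Because the adapter is so small, optimizer state and gradients shrink proportionally, which is what makes fine-tuning models like Falcon-40B feasible on modest hardware when combined with quantization of the frozen base weights.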
By Subir Mansukhani · 9 min read
Llama 2: Leveling the Playing Field for LLM-Based AI Applications in the Enterprise
Meta's release of Llama 2 is a pivotal moment for businesses seeking to harness generative AI.
By Josh Poduska · 8 min read
Subscribe to the Domino Newsletter
Receive data science tips and tutorials from leaders in data science, right in your inbox.
By submitting this form, you agree to receive communications from Domino related to products and services in accordance with Domino's privacy policy, and you may opt out at any time.