Training a pre-trained AI model on specialized data to improve its performance on specific tasks or domains.
Fine-tuning takes a general-purpose LLM and specializes it. Instead of training from scratch (which can cost millions of dollars in compute), you take an existing model and continue training it on a curated dataset relevant to your use case. The result is a model that retains its general capabilities but excels at your specific domain.
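The curated dataset is typically a set of prompt/completion pairs, most often serialized as JSONL (one JSON object per line). A minimal sketch below; the field names and the support-ticket examples are illustrative assumptions, since exact schemas vary by provider.

```python
import json

# Hypothetical training examples for a support-ticket classifier (illustrative only).
examples = [
    {"prompt": "Classify the ticket: 'My invoice is wrong.'", "completion": "billing"},
    {"prompt": "Classify the ticket: 'The app crashes on login.'", "completion": "bug"},
]

# Most fine-tuning pipelines ingest one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

Dataset quality matters more than quantity here: a few hundred consistent, well-labeled examples often outperform thousands of noisy ones.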
There are several approaches to fine-tuning. Full fine-tuning updates all model parameters but requires significant compute. LoRA (Low-Rank Adaptation) is more efficient: it freezes the original weights and trains small low-rank adapter matrices, making fine-tuning feasible on consumer hardware. QLoRA goes further by quantizing the frozen base model to reduce memory usage. Instruction tuning specifically trains models to follow instructions better.
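The core idea behind LoRA can be sketched in a few lines of NumPy: instead of updating a full weight matrix W, you train two small matrices A and B whose product forms a low-rank update. This is a conceptual sketch, not a production implementation; the dimensions, rank, and scaling factor are illustrative.

```python
import numpy as np

# Frozen pretrained weight matrix (d_out x d_in), e.g. one attention projection.
d_out, d_in, r = 64, 64, 4  # r is the LoRA rank, with r << d_in
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank factors. B starts at zero so the update is a no-op initially.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def lora_forward(x, alpha=8):
    # Output = frozen path + scaled low-rank update path: (W + (alpha/r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# Before any training, B is zero, so the LoRA output equals the frozen model's output.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
print(f"full: {d_in * d_out} params, LoRA: {r * (d_in + d_out)} params")
```

With rank 4 on a 64x64 matrix, you train 512 parameters instead of 4,096; at real model scales the savings are what make fine-tuning possible on a single GPU.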
In practice, fine-tuning is not always necessary. For many use cases, prompt engineering combined with RAG achieves comparable results without the cost and complexity of fine-tuning. At Agentik {OS}, we generally prefer RAG and prompt optimization because they are faster to iterate on and do not require retraining when the underlying model improves. We reserve fine-tuning for cases where consistent specialized behavior is critical and cannot be achieved through prompting alone.