Fine-tuning is the targeted further training of a pre-trained AI model on a smaller, domain-specific dataset. Instead of training a model from scratch – which can cost millions – an existing model is adapted to specific data and tasks. The result is usually significantly more precise for the use case in question.
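The core idea can be sketched in a toy example: take "pre-trained" weights and continue training them with a few gradient steps on a small, domain-specific dataset. Everything here (dimensions, data, the linear scorer) is made up purely for illustration; real fine-tuning uses the same principle on far larger neural networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch: pre-trained weights of a linear classifier are adapted
# by continued gradient training on a small domain dataset.
# All data and dimensions are illustrative assumptions.
d = 16
w_pretrained = rng.standard_normal(d)               # from "pre-training"
X = rng.standard_normal((32, d))                    # small domain dataset
y = (X @ rng.standard_normal(d) > 0).astype(float)  # domain labels

def loss(w):
    p = 1 / (1 + np.exp(-(X @ w)))                  # sigmoid predictions
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

w = w_pretrained.copy()
lr = 0.5
for _ in range(200):                                # fine-tuning = continued training
    p = 1 / (1 + np.exp(-(X @ w)))
    grad = X.T @ (p - y) / len(y)                   # logistic-loss gradient
    w -= lr * grad

print(f"loss before: {loss(w_pretrained):.3f}, after: {loss(w):.3f}")
```

The fine-tuned weights fit the domain data much better than the pre-trained starting point, while reusing it as initialisation instead of starting from scratch.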
In HR, fine-tuning makes sense for recruiting classifications, sector-specific CV analyses or the generation of typical internal texts. Prerequisites are structured training data, a clear task definition and thoughtful data protection – application and employee data must not flow into third-party models without a legal basis.
Alongside classical fine-tuning, lighter methods such as LoRA, prefix-tuning or prompt-tuning are gaining traction. They reduce compute and storage while keeping the benefits of customised models.
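LoRA's savings come from freezing the pre-trained weight matrix and learning only a low-rank update. A minimal NumPy sketch (dimensions and the rank are illustrative assumptions) shows the mechanism and the resulting parameter reduction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained weight matrix (untouched during fine-tuning).
d_out, d_in = 768, 768
W = rng.standard_normal((d_out, d_in))

# LoRA: train only a low-rank update W + (alpha / r) * B @ A.
r, alpha = 8, 16
A = rng.standard_normal((r, d_in)) * 0.01  # trainable
B = np.zeros((d_out, r))                   # trainable, zero-init: no change at start

def forward(x):
    # Base output plus the scaled low-rank correction.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialised to zero, LoRA reproduces the frozen model exactly.
assert np.allclose(forward(x), W @ x)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.1f}%)")
```

Here only about 2% of the layer's parameters are trained, which is what cuts compute and storage; the zero initialisation of `B` also means fine-tuning starts from exactly the pre-trained model's behaviour.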
Lunigi uses fine-tuning in a targeted way to specialise models for German-language job postings, collective-agreement contexts and AI-safety assessments – delivering more precise email digests.