RAG (retrieval-augmented generation) is a method in which a large language model grounds its answers in documents retrieved at query time. The typical flow: the question is encoded as an embedding, a vector database returns the most similar text passages, and the LLM formulates its answer using that retrieved context.
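The retrieve-then-generate flow can be sketched as a toy example. Everything here is an illustrative stand-in: the bag-of-words embedding replaces a learned embedding model, a plain Python list replaces the vector database, and the final generation step is only a placeholder where a real system would call the LLM.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts. A real system would use
    # a learned embedding model (e.g. a sentence-transformer).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# In-memory stand-in for a vector database: sample policy passages
# (invented for illustration) indexed by their embeddings.
passages = [
    "Employees accrue 30 vacation days per year under the collective agreement.",
    "Remote work is permitted up to three days per week.",
    "Pay grades follow the E9 to E13 bands of the agreement.",
]
index = [(p, embed(p)) for p in passages]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank all passages by similarity to the question; keep the top k.
    q = embed(question)
    ranked = sorted(index, key=lambda pe: cosine(q, pe[1]), reverse=True)
    return [p for p, _ in ranked[:k]]

def answer(question: str) -> str:
    # Placeholder for the generation step: a real system would send the
    # question plus the retrieved context to an LLM here.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(answer("How many vacation days do I get?"))
```

The key design point is that retrieval happens per question, so updating the knowledge base (swapping a passage) immediately changes the context the model sees, with no retraining.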
The advantages are higher factuality, freshness, and traceability: the model grounds its answers in verifiable sources instead of relying on its training memory alone. Confidential or domain-specific knowledge bases can be plugged in without training a model from scratch.
In recruiting, RAG underpins HR chatbots that ground their answers in specific company policies, enriches job postings with current collective-agreement information, or reliably handles candidate questions about pay structures. Internal knowledge portals of large public authorities increasingly use RAG as well.
Lunigi uses RAG-like techniques to feed current job postings, company information, and collective-agreement data into its evaluations; the result is a well-grounded, individually tailored job email.