
AI Bias

Systematic distortions in AI systems, often from skewed training data, with consequences for fairness in hiring.

AI bias describes systematic distortions in AI systems that can lead to unfair or discriminatory outcomes. The cause is often skewed or historically biased training data – for example when a model learns from past selection decisions in which certain groups are under-represented.

In recruiting the consequences are particularly sensitive. If a model was trained on data in which women are under-represented in leadership, it may score women lower for leadership roles. A well-known example is Amazon's experimental recruiting model, which downgraded CVs containing the word "women's" (as in "women's chess club captain") and was therefore scrapped.

Mitigations are both technical and organisational: balanced training data, fair evaluation metrics, human-in-the-loop reviews, regular bias audits, transparent documentation and clear complaint channels. In Europe the EU AI Act classifies recruiting as a high-risk application and prescribes additional safeguards for it.
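One common check in a bias audit is the "four-fifths rule": compare selection rates between groups and flag the model if the lower rate falls below 80% of the higher one. The sketch below is a minimal, illustrative example with made-up screening decisions; it is not Lunigi's actual audit tooling.

```python
# Minimal bias-audit sketch: disparate impact ratio ("four-fifths rule").
# All data below is illustrative, not real applicant data.

def selection_rate(outcomes):
    """Share of positive decisions (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly flagged for review."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical screening decisions (1 = advanced to interview)
men = [1, 1, 0, 1, 1, 0, 1, 1]      # selection rate 0.75
women = [1, 0, 0, 1, 0, 0, 1, 0]    # selection rate 0.375

ratio = disparate_impact(men, women)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("Flag for review: selection rates differ beyond the 4/5 threshold")
```

A check like this is deliberately simple; real audits also look at error rates per group, intersectional subgroups and confidence intervals, since small samples can make a single ratio misleading.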

Lunigi uses AI where it helps – finding matching roles – while leaving the final decision with candidates themselves.
