Methodology
How we rate AI exposure
Lunigi's "AI-safe" filter relies on a public dataset that scores US occupations from 0 to 10. Here is exactly where the number comes from, what it does and doesn't tell you, and when to ignore it.
Last updated:
The honest answer
- We don't measure your specific job. We look up the closest matching US occupation in a published dataset and surface that occupation's score.
- The source is Josh Kale's "AI Exposure of the US Job Market": 343 US occupations, scored 0–10 by Gemini Flash, built on US Bureau of Labor Statistics data.
- Treat the score as a directional indicator and a filter you can override – not as evidence that any specific job is protected from AI disruption.
Where the data comes from
Lunigi's AI-exposure score is taken from a public dataset published by Josh Kale at joshkale.github.io/jobs/. It covers 343 US occupations (~143 million jobs as reported by the US Bureau of Labor Statistics), each scored from 0 to 10 by Google's Gemini Flash model. The full dataset and source code are openly available on GitHub. We do not run our own scoring on top of it.
The 0–10 scale
Each occupation falls into one of five bands. The colour stops match the original treemap.
Minimal (0–1)
Almost no overlap between current AI capabilities and the actual work.
Low (2–3)
Limited overlap. AI accelerates a few tasks; the core of the role is still done by humans.
Moderate (4–5)
Notable overlap. AI is changing parts of the workflow; the role is likely to evolve.
High (6–7)
Many tasks are already automatable. Significant restructuring of the role is expected.
Very high (8–10)
Most of the work is highly exposed to current AI. Substantial replacement risk on a multi-year horizon.
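Translated into code, the banding is a simple threshold lookup. Here is a minimal sketch; the band names and cut-offs mirror the table above, but the function itself is illustrative, not Lunigi's implementation:

```python
def exposure_band(score: int) -> str:
    """Map a 0-10 AI-exposure score to its band label.

    Thresholds mirror the five bands above; this is an
    illustrative sketch, not Lunigi's production code.
    """
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score <= 1:
        return "Minimal"
    if score <= 3:
        return "Low"
    if score <= 5:
        return "Moderate"
    if score <= 7:
        return "High"
    return "Very high"
```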
How we map your job to a score
For each job posting we receive, the bot identifies the closest matching US BLS occupation title and surfaces that occupation's score from the dataset. There is no per-posting LLM rescoring, no time-series prediction, and no employer-specific adjustment baked into the number. Our filtering is applied against this lookup score.
What the score is not
- Not a guarantee. AI capabilities and labour markets move faster than any static score.
- Not employer-specific. Two postings with the same title get the same number even if the companies are very different.
- Not a substitute for your own judgment. A high score doesn't mean a role is doomed; a low score doesn't mean it's safe.
- Sensitive to mapping errors. If a job title doesn't map cleanly to a BLS occupation, the surfaced score may be misleading.
- Frozen at the dataset's publication date. The number updates only when the underlying dataset is republished.
- Built on US categories. Jobs in DACH are mapped to the closest US equivalent, which is often – but not always – a clean fit.
When you should distrust the score
Be especially careful with:
- Emerging hybrid roles that blend several occupations.
- Very specialised niches that broad BLS categories don't capture.
- Jobs where the title and the actual day-to-day work differ substantially.
- Recent shifts that postdate the dataset's publication.
How we use the score in your emails
We filter out roles above a threshold the bot tunes to your preferences, and we include the 0–10 number in the AI-risk note that ships with every job in your daily email. You can always overrule the filter through your "fits" / "doesn't fit" feedback – the bot adjusts on the next delivery.
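The filtering step amounts to a threshold comparison over the looked-up scores. A minimal sketch, where the field names and the example threshold are assumptions rather than Lunigi's actual code:

```python
def filter_jobs(jobs: list[dict], max_exposure: int) -> list[dict]:
    """Keep only jobs at or below the user's exposure threshold.

    Illustrative only: in the real product the threshold is tuned
    per user by the bot, not passed in as a fixed number.
    """
    return [job for job in jobs if job["ai_exposure"] <= max_exposure]

# Hypothetical postings with their looked-up exposure scores.
jobs = [
    {"title": "Data Entry Clerk", "ai_exposure": 9},
    {"title": "Physical Therapist", "ai_exposure": 1},
]
kept = filter_jobs(jobs, max_exposure=5)  # only the low-exposure role survives
```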
References
- Josh Kale – "AI Exposure of the US Job Market".
- US Bureau of Labor Statistics – Occupational Employment and Wage Statistics.
- Related guide: How AI is changing job search – study & analysis 2025