AI-safe jobs

Which jobs are evolving slowly under AI pressure, and which are not? We explain the AI-safe concept, list typical roles, and stay honest about the limits of the heuristic, so you can weigh the label rather than trust it blindly.

What does "AI-safe job" mean?

"AI-safe jobs" are roles whose core work involves a high degree of human contact, physical or social labour, regulated responsibility, or contextual knowledge that is hard to automate. Today's AI systems can support sub-tasks but do not replace the role at scale. Examples include social work, nursing, skilled trades, large parts of public-sector administration and most teaching roles.

Why some roles are less exposed

When today's AI systems can perform a task well, that task shows up as highly exposed in datasets like Josh Kale's "AI Exposure of the US Job Market". Many text-only tasks fall into this bracket: standardised reports, simple translations, generic customer service or formulaic marketing copy.

Roles whose value comes from relationships, physical presence, accountability towards people or specific domain context consistently score as AI-resilient. Social pedagogy, the trades, nursing, facilities work, teaching, casework-based administration and most healthcare-adjacent jobs hold up well under these heuristics.

Typical AI-safe roles

  • Educators, teachers, social and youth-work professionals
  • Nursing, therapy, and other healthcare-adjacent roles
  • Public-sector and NGO administration
  • Skilled trades, technical maintenance, plant operation
  • Advisory and coaching roles with a strong relationship component
  • Project management in the social sector

More exposed roles

More exposed are tasks where today's AI gets close to the core work: simple translation, generic copywriting, standard customer service, data preparation, research aggregation, formulaic reports, or large parts of junior front-end programming.

"More exposed" does not mean "gone". In most cases, AI shifts the role rather than removes it: it moves the centre of gravity towards judgement, accountability, relationships and contextual knowledge. That is why we use the score as a filter, not as a verdict.

Where the assessment runs into limits

  • We are not measuring your specific job. We map it to the closest US occupation in a public dataset.
  • Hybrid and emerging roles that combine several occupations often map poorly onto any single category.
  • The score says nothing about your specific employer, team or specialisation.
  • Datasets age. We refresh our scores whenever the underlying source is updated.

Read the full methodology

Frequently asked questions

  1. Does Lunigi guarantee that a job is AI-safe?

    No. "AI-safe" is a heuristic, not a promise. We use a public score that captures how strongly a role's tasks overlap with current AI capabilities, and we publish the methodology behind that score.

  2. What dataset is the score based on?

    We use the public dataset "AI Exposure of the US Job Market" by Josh Kale. It covers 343 US occupations, each rated on a 0–10 scale. More detail on the methodology page.

  3. Does a high score mean my job will disappear?

    No. High exposure means today's AI can do many of the tasks well. In most cases that reshapes the role rather than eliminating it.

  4. Does a low score mean my job is safe?

    Also no. Even low-exposure roles change, driven by the economy, policy, demographics and new tools. The score only measures today's distance between AI capabilities and the work.

  5. Does this work for jobs in Germany and Switzerland too?

    We map roles in the German-speaking market to the closest US occupation. The fit is often clean, sometimes not. The methodology page explains where the mapping has limits.

Curated AI-safe jobs by email

Set up a profile, enable the AI-safety filter, get curated roles in your inbox.

Start free trial
