A new Anthropic study is the most substantive look yet at how AI is actually landing inside organisations — not through a vendor's lens, but through the words of real workers. Here is what every senior leader needs to take from it, and what it means for the decisions you should be making now.
Anthropic surveyed 81,000 users of its Claude AI platform in April 2026, asking open-ended questions about their experience with AI at work. Researchers then used Claude itself to classify responses by occupation, career stage, productivity impact, and level of concern about job displacement. The result is one of the largest qualitative windows into AI's economic reality to date.
It is worth noting the sample is self-selecting — active Claude users willing to take a survey — which almost certainly skews toward enthusiastic adopters. Even so, the patterns are clear enough to act on.

Workers in exposed roles know they are exposed
One fifth of respondents voiced concern about AI displacing their jobs. That number tracks almost precisely with how much AI is actually being used in those roles. For every 10-percentage-point increase in what Anthropic calls "observed exposure" — the share of a job's tasks where Claude is being used — perceived job threat rose by 1.3 percentage points. Workers in the top quartile of exposure mentioned displacement anxiety three times as often as those in the bottom quartile.
Put simply: your people are not naive. If AI is doing a meaningful portion of their work, they have noticed, and they are worried. This is not abstract fear of some distant future. It is a present-tense concern already running through organisations.
The jobs registering the highest concern included computer programmers, web developers, graphic designers, and office clerks — consistent with the tasks AI tools are currently handling at scale. Elementary school teachers, civil engineers, and clergy were among the least concerned.
For a C-suite audience, the signal is this: if your organisation has deployed AI in any serious capacity, you already have a workforce anxiety problem whether you are managing it or not.
Early-career workers are the most worried — and that worry is rational
The study found that early-career respondents were significantly more likely to express job displacement concerns than senior workers. This aligns with separate Anthropic research showing tentative signs of a slowdown in graduate and early-career hiring in the US.
This is not a perception gap to be managed with communication. It is a signal about where AI's labour market impact is landing first. Junior roles — where much of the task work involves the kind of structured, repeatable outputs AI handles well — are the most exposed. Senior leaders, whose work is more relational, contextual, and judgment-heavy, are less exposed.
The implication for boards and CEOs: AI is not compressing headcount evenly across the organisation. It is concentrating its impact at the entry and early-career level. Hiring pipelines, succession planning, and the way organisations build institutional knowledge through junior-to-senior progression all need revisiting.
The productivity gains are real — and largest at the extremes
Across all respondents, the mean self-reported productivity rating was 5.1 on a 7-point scale — roughly corresponding to "substantially more productive." That is a strong number, even accounting for the enthusiastic-user bias in the sample.
The distribution of gains is where it gets strategically interesting. The largest productivity improvements were reported at both ends of the income spectrum.
At the high end, senior professionals in management, software development, and technical roles reported the biggest gains — consistent with AI performing well on complex, knowledge-intensive tasks. Management occupations topped the list, largely because many of those respondents were entrepreneurs using AI to build businesses single-handedly.
At the low end, some of the most striking testimonials came from workers in low-wage jobs using AI to extend far beyond their existing role. A delivery driver using Claude to build an e-commerce business. A landscaper building a music application. A customer service rep compressing multi-hour response workflows into minutes.
The lowest productivity gains came from legal and scientific professions — areas where precision, domain-specific reasoning, and strict procedural compliance still limit AI's usefulness.
For CFOs and COOs, the practical read is that the productivity case for AI is strongest in knowledge-heavy roles and in enabling lower-cost workers to take on higher-value tasks. The weakest current case is in highly regulated, precision-dependent functions.
Scope beats speed — workers are doing new things, not just old things faster
This finding is one of the most important in the study and the most underappreciated in boardroom AI discussions.
When respondents described how AI improved their productivity, the most common answer was not "it makes me faster" — it was "it lets me do things I couldn't do before." Scope expansion was cited by 48% of those who explicitly mentioned productivity gains. Speed gains were cited by 40%.
A non-technical professional who now describes himself as a full-stack developer. An executive who built a website in four days that would have taken months. Workers who no longer need to hire a social media manager, a research assistant, or a vendor — because AI now covers that function.
This reframes the standard executive conversation about AI, which tends to focus on efficiency and cost reduction. The more significant strategic story is capability expansion. Your organisation is not just getting faster at existing tasks. Your people — if they are using AI well — are doing things that previously required hiring specialists or building teams.
For CHROs, this changes how you think about job architecture, role definitions, and hiring criteria. The question is not just "how many people do we need?" but "what capabilities does this role need to have now that AI covers the specialist work that used to justify a separate hire?"
The people gaining the most are also the most worried
One of the study's counterintuitive findings: workers experiencing the largest AI-driven speedups were also among the most concerned about their own job security. The relationship between speedup and perceived job threat is U-shaped — both workers who find AI useless and workers experiencing transformative speedups express the most concern.
The logic is not hard to follow. If AI can compress what you do from a four-hour task to fifteen minutes, then you — along with everyone else in your organisation — can see that the volume of work required to justify your role is shrinking. Being highly effective with AI does not insulate you from feeling expendable. In some ways, it sharpens the anxiety.
For leaders, this has real implications for culture and retention. A common assumption is that upskilling people on AI tools will reduce their anxiety about displacement. This research suggests it may not. Being competent with AI and being concerned about long-term job security are not mutually exclusive — they may travel together.
Where do the gains go?
Among respondents who named a beneficiary of their AI productivity gains, the majority said the value was flowing to themselves — through faster work, expanded capability, and freed-up time. Around 10% said the gains were going to their employers, with managers asking for and receiving more output.
Critically, this survey covered personal account holders — people choosing to use Claude on their own. Enterprise users, where the organisation owns the licence and sets the agenda, would likely show a very different distribution. If your organisation has deployed AI at scale and your people are using it under direction rather than by choice, the productivity gains are probably being captured at the institutional level rather than the individual level.
That matters for your employee value proposition. If workers are producing significantly more but not experiencing the personal benefit of that productivity — through more autonomy, reduced workload, faster progression, or better compensation — you have a disengagement problem building.
What leaders should do now
Address workforce anxiety directly — not with reassurance, but with clarity. Vague statements about AI being a "tool, not a replacement" do not land when workers can see exactly which of their tasks are being automated. Give people a clear picture of where the organisation is heading, what roles will look like in two years, and how the transition will be managed.
Revisit your entry-level hiring and development model. If early-career roles are the most exposed, the traditional route of building senior talent through junior apprenticeship is under pressure. What does career development look like when the foundational tasks that used to train junior staff are being automated? This needs a deliberate answer, not a default.
Reframe your internal AI narrative around capability, not efficiency. Most internal AI programmes are sold as productivity tools — faster, cheaper, more output. The more powerful story, and the one that builds genuine engagement, is about what your people can now do that they couldn't before. Lead with expansion, not compression.
Audit where productivity gains are actually landing. Are the people generating AI-driven output sharing in the upside? Or are they delivering more for the same compensation while carrying more anxiety? The research suggests the latter is a common experience. How you answer that question will define your employer brand over the next three years.
The bottom line
Anthropic's study does not tell leaders anything they cannot already sense. But it quantifies that intuition with unusual precision. Your people — particularly the ones most involved in AI-adjacent work — are aware of what is happening, are worried about it, and are simultaneously experiencing real gains. That combination of benefit and anxiety is not a communications problem to be solved. It is a strategic reality to be managed.
The organisations that will navigate this well are the ones that treat their workforce's concerns as signal rather than noise, and that build AI strategies around human capability expansion rather than headcount reduction. The data suggests your people are already doing remarkable things with these tools. The question is whether your leadership strategy is keeping up.