Anthropic Interviewer Captures Professionals’ Views on AI Across 1,250 Interviews

  • Figure: The different topics people discussed in their interviews with Anthropic Interviewer. Across all three samples we studied—the general workforce, scientists, and creatives—participants expressed predominantly positive sentiments about AI's impact on their professional activities. Certain topics did introduce pause, particularly around questions of personal control, job displacement, and autonomy. In this diagram, topics are roughly ordered from more pessimistic to more optimistic. (Image: Anthropic)

Anthropic rolls out Interviewer tool and opens data
Anthropic introduced Anthropic Interviewer, an AI-driven system that runs automated 10-15-minute interviews on Claude.ai, and publicly released the full transcript dataset on Hugging Face for research use [1][5].
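Since the transcripts are published on Hugging Face, they can be pulled with the standard `datasets` library. Below is a minimal sketch; the dataset identifier shown is a placeholder assumption, not the confirmed repository name from the release.

```python
# Minimal sketch: load the publicly released interview transcripts for analysis.
# "anthropic/interviewer-transcripts" is a placeholder identifier; substitute
# the actual dataset name from Anthropic's Hugging Face release page.
from datasets import load_dataset

ds = load_dataset("anthropic/interviewer-transcripts")  # hypothetical ID

print(ds)              # show available splits and column names
print(ds["train"][0])  # inspect the fields of one interview record
```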

1,250 professionals surveyed across three cohorts
The pilot interviewed 1,000 workers from a broad occupational mix, plus 125 creatives (writers, visual artists, and others) and 125 scientists (physicists, chemists, data scientists, among 50+ fields), all recruited via crowd-worker platforms [1].

General workforce reports productivity gains but feels stigma
86% say AI saves time and 65% are satisfied with its role, yet 69% note workplace stigma and 55% express anxiety; 48% envision future jobs focused on overseeing AI systems [1].

Creative professionals see speed and quality boosts amid economic worries
97% report time savings and 68% report quality improvements, while 70% grapple with peer judgment and fear of displacement; many worry they will have to sell AI-generated content to stay afloat [1].

Scientists cite trust gaps yet desire deeper AI partnership
79% flag trust and reliability as barriers and 27% mention technical limits, but 91% want more AI help—especially with hypothesis generation—though current use stays limited to writing, coding, and literature review [1].

Interview tool proves scalable but study has biases
Anthropic Interviewer completed the large-scale test at far lower cost than manual interviewing, yet the sample's crowd-worker recruitment, self-report nature, and snapshot design limit generalizability and may overstate positive attitudes [1].

  • Fact‑checker (general workforce): “A colleague recently said they hate AI and I just said nothing. I don’t tell anyone my process because I know how a lot of people feel about AI.” [1]
  • Data‑quality manager (general workforce): “I try to think about it like studying a foreign language—just using a translator app isn’t going to teach you anything, but having a tutor who can answer questions and customize for your needs is really going to help.” [1]
  • Pastor (general workforce): “…if I use AI and up my skills with it, it can save me so much time on the admin side which will free me up to be with the people.” [1]
  • Novelist (creative): “I feel like I can write faster because the research isn’t as daunting.” [1]
  • Information‑security researcher (scientist): “If I have to double check and confirm every single detail the [AI] agent is giving me to make sure there are no mistakes, that kind of defeats the purpose of having the agent do this work in the first place.” [1]
  • Microbiologist (scientist): “I worked with one bacterial strain where you had to initiate various steps when the cells reached specific colors…the differences in color have to be seen to be understood and [instructions] are seldom written down anywhere.” [1]

Links