Microsoft Research Publishes AI Relationship Advice Study and Calls for Personhood‑Focused Research
Study Reveals Diverse Roles Users Assign to Relationship‑Advice AI
Researchers surveyed 25 participants who regularly use chatbots for sex, dating, and relationship guidance, collecting 90 distinct prompts to map usage patterns [1]. Interviewees described AI as a sounding board, strategic planner, and emotional confidant, illustrating varied expectations for AI‑mediated intimacy [1]. Follow‑up interviews with 17 participants gave deeper insight into how users weigh the AI’s informational limits against their relational goals [1]. The authors propose design and safety guidelines to foster healthier digital intimacies [1].
Participants Actively Counteract AI Bias and Overreliance
Survey respondents reported deliberately checking AI suggestions against their own judgment to guard against sycophantic flattery and overdependence [1]. They worried that unchecked reliance could deepen loneliness or contribute to self‑harm, which prompted self‑regulation strategies [1]. Participants also stressed seeking out alternative viewpoints as a check on sycophancy [1]. These mitigation tactics point to emerging user‑driven safety practices in AI‑assisted intimacy [1].
Folk Theories Drive Structured Prompting Strategies
Users developed informal beliefs about how the AI behaves, which shaped how they phrased questions to elicit the responses they wanted [1]. Common tactics included framing prompts neutrally and explicitly requesting alternative perspectives to overcome perceived limitations [1]. Such “folk theories” reflect users’ attempts to predict and steer AI output in relational contexts [1]. The study recommends further research into these prompting heuristics to improve AI design [1].
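For illustration only, the kind of prompt these tactics produce can be sketched in a few lines of Python; the template wording and the build_advice_prompt helper below are hypothetical, not examples drawn from the study.

```python
# Hypothetical illustration of the "neutral framing" and "request
# alternative viewpoints" tactics described above; the template wording
# is an assumption for illustration, not a prompt collected in the study.
def build_advice_prompt(situation: str) -> str:
    """Compose a neutrally framed relationship-advice prompt that asks
    for multiple perspectives rather than simple agreement."""
    return (
        "Here is a situation, described without assuming who is right: "
        f"{situation} "
        "Please offer two or three different ways to read it, "
        "including at least one perspective that challenges mine."
    )

# Example usage:
print(build_advice_prompt(
    "My partner and I disagree about how often to visit family."
))
```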
CHI Meet‑up Calls for Personhood‑Centered AI Research
The upcoming CHI conference will host a meet‑up dedicated to designing AI that supports personhood, defined as recognizing individuals as whole people with histories and relationships rather than solely by health status [2]. Organized by Anja Thieme, the event invites submissions that explore AI’s role in upholding relational personhood [2]. Organizers emphasize collaborative inquiry to place personhood at the core of future HCI research [2]. The call highlights a shift toward inclusive technology design that respects lived experience [2].
Existing HCI Work Lays Foundation Yet Gaps Remain
Prior HCI studies have examined identity, values, and lived experience in contexts such as stroke, bereavement, and dementia, providing a basis for investigating AI’s impact on personal identity [2]. Despite this foundation, the mediation of personhood by AI remains under‑researched, prompting the meet‑up’s focus [2]. Researchers are urged to address this gap by developing frameworks that integrate AI into holistic understandings of self [2]. The initiative seeks to expand scholarly attention beyond health‑centric applications toward broader relational dimensions [2].
Sources
1. Microsoft Research: Study Maps How Users Seek Relationship Advice from AI: The paper details a survey of 25 regular chatbot users, 90 prompts, and follow‑up interviews, highlighting how people treat AI as a confidant, planner, and sounding board while managing bias and dependence.
2. Microsoft Research: CHI Meet‑up Calls for AI Research on Personhood: The announcement outlines a CHI conference meet‑up led by Anja Thieme urging researchers to design AI that supports relational personhood, noting prior HCI work on identity but emphasizing the scarcity of studies on AI’s role in personhood.
Timeline
2025 – Researchers interview nine adults who use Augmentative and Alternative Communication (AAC) about their meeting practices, uncovering nine distinct communication strategies for AI‑mediated dialogue and recommending transparency, watermarking, and “personhood” safeguards; one participant notes, “Our expertise in assistive tech offers crucial guidance for transparent AI agents.” [3]
2025 – A study surveys 25 frequent users of chat‑based relationship advice bots, gathers 90 unique prompts, and follows up with 17 participants to map how they treat AI as a sounding board, strategic planner, and emotional confidant; a user explains, “I treat the chatbot like a sounding board for my relationship dilemmas.” [1]
2025 – Participants develop informal “folk theories” and specific prompting tactics—such as neutral framing and requesting alternative viewpoints—to counteract AI sycophancy and overreliance, deliberately cross‑checking suggestions against personal judgment; one says, “I frame my questions neutrally to get alternative viewpoints.” [1]
May 2026 – The CHI conference hosts a dedicated meet‑up on AI and personhood, calling on researchers to design systems that recognize individuals as whole people with histories and relationships rather than merely by health status; organizer Anja Thieme asserts, “We need AI that recognizes individuals as whole people, not just by their impairments.” [2]
Late 2026 (planned) – Industry anticipates the rollout of AI‑powered “digital twins” that can speak on users’ behalf in meetings, building on the AAC‑derived design recommendations for attribution and safety; a researcher predicts, “Soon, people will deploy AI‑powered digital twins to speak on their behalf.” [3]
2026 onward – Authors of the relationship‑advice study issue concrete design, safety, and research recommendations—human‑AI interaction guidelines, AI safety measures, and sociotechnical research agendas—to promote healthier digital intimacies; they conclude, “We propose guidelines to foster healthier digital intimacies.” [1]