Katarzyna Nowaczyk-Basińska
| Program | AI2050 |
| School | The Leverhulme Centre for the Future of Intelligence at the University of Cambridge |
| Field of Study | Artificial Intelligence |
Over the past year, AI2050 Fellow Katarzyna Nowaczyk-Basińska led workshops and dinner-based focus groups across three countries to examine how different cultures perceive AI-driven digital afterlife technologies. The research surfaced shared concerns about addiction, emotional harm, and commercialization, informing recommendations for responsible design and governance of grief-related AI systems.
A few years ago, the idea of talking with the dead still belonged mostly to science fiction. Now it’s edging into everyday life: posthumous avatars, “griefbots,” and other A.I. systems that promise to preserve a voice, a personality, even a sense of presence after someone is gone. The technology is arriving quickly; the social rules around it are not. And it raises an uncomfortably practical question: who gets to decide what a “good” digital afterlife looks like, and who is protected when it goes wrong?
Dr. Katarzyna Nowaczyk-Basińska, an AI2050 Fellow and an Assistant Research Professor at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence, approaches digital immortality less as a novelty than as a cultural and institutional shift. Her project asks how people in different cultural settings understand death, grief, and afterlife presence in the age of A.I., and what kinds of technologies, if any, they would consider legitimate. “For years, the digital-immortality narrative has been shaped largely in the United States, and I wanted to challenge this perspective and explore other narratives and views on this idea,” says Nowaczyk-Basińska, who conducted fieldwork in Poland, India, and China as part of her research.
Her method is deliberately qualitative. In each country, she gathered evidence by bringing people into the same room and listening closely to how they talked about death, grief, and what A.I. might do to both. One format was an expert workshop, designed to surface cross-disciplinary perspectives from people who rarely sit at the same table: academics and artists, palliative-care professionals and funeral-industry practitioners, designers and engineers. The premise was simple: there is no single “expert” on A.I. and death, because the problem sprawls across technology, psychology, ritual, and law.
The second format was intentionally more intimate. Called “(Im)mortality over Dinner,” it functioned as a focus group held over a shared meal, moderated by a professional grief counselor and designed to make an otherwise taboo subject speakable. Complete strangers sat together in Poznań, New Delhi, and Beijing, sharing food and, often, deeply personal stories. In India, one participant described staying connected to the dead in explicitly spiritual terms: “It’s a form of prayer for me… It’s a meditation.”
Across these conversations, a few themes returned with striking consistency. People worried about the psychological force of griefbots and postmortem avatars that might become “very addictive,” or create attachments that are hard to step away from at the rawest moments of mourning. “One interesting suggestion was to treat some of these tools less like consumer apps and more like a medical intervention, something that might require professional psychiatrist supervision in certain contexts,” says Nowaczyk-Basińska. Participants also returned repeatedly to governance. The field, they noted, is largely unregulated and heavily shaped by commercial incentives, which means a narrow set of values can end up baked into the design. They imagined a different center of gravity, one that foregrounds care, empathy, responsibility, and trust rather than engagement and profit.
KatarzynaNowaczykBasinska.pl | Feb 12, 2026