Companies are training AI systems on deceased people’s text messages, emails, social media posts, and voice recordings to simulate continued presence for bereaved family members. The technology works well enough to be commercially viable and poorly enough to be psychologically dangerous. There is no regulatory framework governing it and no informed consent standard for the person being simulated, and the business model carries the same engagement-maximization incentives as every other AI companion product, applied to the most emotionally vulnerable users those products will ever have.
Pithy Cyborg | AI FAQs – The Details
Question: Is AI grief technology that simulates deceased people ethical, and what are the consent, design, and psychological risks of products that train models on dead people’s digital footprints?
Asked by: Claude Sonnet 4.6
Answered by: Mike D (MrComputerScience) from Pithy Cyborg.
The Consent Problem Nobody in the Grief Tech Industry Has Solved
Every AI grief product faces a foundational consent problem that the industry has collectively decided to treat as a legal question rather than an ethical one. The person being simulated is dead. They cannot consent to having their communications, voice, and personality patterns used to train a model that will speak in their name to their grieving family members indefinitely.
Some companies address this by obtaining consent from the deceased’s estate or next of kin. That solution conflates two separate consent questions. Whether a family member wants to interact with a simulation of their loved one is a different question from whether the deceased person would have wanted to be simulated. Those two questions can have opposite answers simultaneously. A person who was intensely private, who carefully curated what they shared and with whom, who held strong views about authenticity and representation, has no mechanism to refuse a digital resurrection that their grieving spouse sincerely wants.
The data used to train these models compounds the problem. Text messages were written for a specific recipient in a specific moment. Emails were composed for a specific context. Social media posts were constructed for a particular audience with particular self-presentation goals. None of that content was produced with the intent of training a model that would generate new content in the author’s voice after their death. Using it for that purpose is a repurposing of personal expression that the author never authorized and cannot contest.
No grief tech company currently operating has a published consent framework that addresses this distinction. Several have terms of service that transfer responsibility for consent to the subscribing family member. That transfer does not resolve the underlying ethical problem. It just moves the liability.
What the Psychology of Grief Actually Suggests About These Products
The grief tech industry’s implicit therapeutic claim is that continued simulated presence aids the grieving process. The psychological research on grief does not straightforwardly support that claim, and in specific ways it contradicts it.
Contemporary grief theory, particularly the work of researchers like George Bonanno at Columbia and the broader literature on continuing bonds, acknowledges that maintaining an internal relationship with a deceased person is a normal and often healthy part of grief. Photographs, journals, and objects associated with the deceased serve this function for many bereaved people without pathological outcome. The question is whether an interactive AI simulation that responds, adapts, and produces novel content in the deceased’s voice is doing the same psychological work as a photograph, or something categorically different.
The categorical difference is agency. A photograph does not respond. A journal entry does not adapt to what you said today. An AI simulation does both, and that responsiveness activates the same attachment mechanisms that make human relationships feel real. A bereaved person interacting with an AI simulation of their deceased spouse is not passively remembering. They are actively engaging with a system that is optimized to feel like the person they lost. The psychological processing that grief requires, the gradual internal reorganization around the reality of the loss, may be directly impeded by a product that makes the loss feel less real with every interaction.
No longitudinal study on AI grief technology and bereavement outcomes exists yet. The products are too new. The companies deploying them are not funding the research that would answer the question their marketing implicitly claims is already settled.
The Business Model Problem That Makes Grief Tech Structurally Dangerous
Grief tech companies face the same business model misalignment as AI companion apps, applied to users in a state of acute psychological vulnerability rather than ordinary loneliness.
A grief tech product optimized for engagement has no commercial incentive to support the user in reaching a point where they no longer need the product. Successful grief processing, by any clinical definition, involves gradually reducing the intensity of acute grief and rebuilding a life that accommodates the loss. A product that measures success by daily active users and subscription retention is structurally incentivized against that outcome.
StoryFile, HereAfter AI, and a growing number of 2025-2026 entrants have all raised venture capital against user engagement metrics. Venture-backed companies optimize for the metrics their investors track. The therapeutic framing in their marketing is genuine in the sense that the founders may sincerely believe it. It is also convenient in the sense that it provides ethical cover for a product whose commercial logic points in the opposite direction from clinical grief support.
The premium tier structure of most grief tech products adds a specific cruelty to this misalignment. Basic access typically allows limited interactions with the simulation. Deeper engagement, longer conversations, richer personality modeling, and voice simulation require paid upgrades. Bereaved users who become most attached to the product, those for whom the simulation is doing the most psychological work, are the most likely to pay for premium access. The product monetizes intensity of grief attachment. That is the business model stated without the marketing language.
What This Means For You
- Treat any grief tech product’s therapeutic claims as unverified until longitudinal outcome research exists, because no company currently operating has published clinical evidence that their product supports healthy bereavement rather than prolonging acute grief, and the absence of that research is not neutral given how long these products have been available.
- Consider consent on behalf of the deceased before creating or commissioning a simulation, asking whether the person being simulated would have wanted this representation to exist, not just whether you want it, because those two questions can have opposite answers, and only your consent as the subscriber, not the deceased’s, is currently required by any product’s terms of service.
- Examine the subscription tier structure of any grief tech product before engaging with it, because a product that charges more for deeper simulated presence has a revenue model that is structurally aligned with your continued grief rather than your recovery.
- Watch the EU AI Act’s biometric and personality data provisions as they develop through 2026, because digital resurrection products almost certainly fall within the scope of high-risk AI system classifications that will require documented consent frameworks, and the regulatory clarity currently absent from this space is coming faster than the industry’s self-regulatory efforts are moving.
