
Sam Altman warns there’s no legal confidentiality when using ChatGPT as a therapist
San Francisco, CA – OpenAI CEO Sam Altman has issued a stark warning about the privacy and confidentiality of user conversations with ChatGPT, particularly when the AI tool is used for sensitive purposes like therapy or emotional support. Altman pointed out that, unlike interactions with human professionals, conversations with artificial intelligence currently carry no doctor-patient confidentiality or legal privilege.
The candid remarks came during a recent appearance on Theo Von’s podcast, “This Past Weekend w/ Theo Von,” where Altman addressed questions about AI’s intersection with the legal system. He underscored a critical gap: the absence of policy and legal structures governing the confidentiality of AI interactions.
Altman noted that users, especially younger individuals, frequently confide deeply personal information to ChatGPT, treating it as a therapist or life coach to navigate relationship problems and other intimate matters. “People talk about the most personal sh** in their lives to ChatGPT,” Altman stated, emphasizing the stark contrast with the legal protections afforded to conversations with human therapists, lawyers, or doctors. “And we haven’t figured that out yet for when you talk to ChatGPT.”
This gap presents a substantial privacy risk, particularly in legal contexts. Altman revealed that in the event of a lawsuit, OpenAI could be legally compelled to disclose these highly personal conversations. He voiced strong disapproval, asserting, “I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago.”
OpenAI recognizes that the current lack of robust privacy guarantees could impede broader adoption of its AI tools. The company is already fighting related legal battles over data privacy, notably appealing a court order in its lawsuit with The New York Times that would require it to retain chats from hundreds of millions of ChatGPT users worldwide, excluding ChatGPT Enterprise customers.
In a statement on its official website, OpenAI has labeled this court order an “overreach,” expressing concerns that it could set a precedent for increased demands for legal discovery or law enforcement access to user data. The company’s stance highlights the escalating tension between technological innovation, user privacy expectations, and existing legal precedents.
The issue resonates with recent privacy debates, such as the increased scrutiny on digital data following the overturning of Roe v. Wade, which led many consumers to seek out more private period-tracking applications or encrypted health platforms.
When questioned by Theo Von about his own limited use of ChatGPT due to privacy concerns, Altman acknowledged the validity of such hesitations. “I think it makes sense … to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity,” Altman concluded, underscoring the urgent need for defined legal frameworks to protect AI user data.
