
The Confidentiality Gap: Why Therapy Protects You—But ChatGPT Can’t (Yet)

Think your chats with ChatGPT are private? Think again, warns OpenAI CEO

When you share your most private thoughts, confidentiality matters more than ever, especially in mental health care. With the rise of AI chatbots like OpenAI's ChatGPT, the boundaries of privacy and legal protection are being tested in new ways. Recent media coverage, including direct statements from OpenAI CEO Sam Altman, makes it clear: AI conversations are not confidential the way therapy sessions are. Here's what you need to know, with reference to the major sources and statements driving this privacy conversation.


“It’s Very Screwed Up”: What the Media and Sam Altman Say About ChatGPT Confidentiality

OpenAI CEO Sam Altman has spoken candidly in interviews and press briefings about the significant gap in legal protection for ChatGPT users. He has called it "very screwed up" that conversations with ChatGPT carry no legal confidentiality, unlike the privacy you expect with doctors, lawyers, or therapists. Multiple major media outlets and tech news platforms echoed and analyzed his remarks:


“Conversations with ChatGPT…do not have the same legal confidentiality as discussions with therapists, doctors, or lawyers…These chats lack legal privilege and could be subpoenaed and used as evidence in court proceedings if required…Altman called this a major privacy gap…”

— as reported across TechCrunch, The Verge, and Wired


Media also emphasized how Altman’s warning aligns with growing concern among mental health professionals that many people, especially young users, treat AI chatbots as a substitute for therapy—unaware their disclosures could be accessed, stored, or legally compelled by authorities.


Key Media Insight & Coverage


Reporters at outlets like TechCrunch, The Verge, and The New York Times have cited Altman's advocacy for new AI privacy standards, referencing:


- Altman’s statement that OpenAI can access and review user chats, particularly those on the free tier, which may be retained for up to 30 days for security/legal reasons.

- Discussion in Wired and Bloomberg about the risk of “false security,” and user confusion about AI privacy vs. the strict confidentiality guaranteed in regulated health professions.

- Public conversation on social media, where Altman's stance trended on platforms like Twitter/X, with users often quoting his phrase "It's very screwed up" and highlighting the need for regulatory reform.


These reports stress Sam Altman’s public call for the legal system—and the tech industry—to develop privacy protections for AI apps in line with doctor-patient confidentiality rules.


Why This Matters for Therapy: The Therapist’s Viewpoint

As therapists and mental health providers, we think this media coverage should shape where you choose to share. Here's why:


- Therapists are legally and ethically bound to protect your privacy. What you share in session is shielded by law, except in rare safety-related exceptions.

- AI chatbots have no such protection: major media outlets, and Altman himself, warn that disclosures could be reviewed by company staff or subpoenaed.

- Widespread media echo: news articles emphasize the risk for users who assume AI offers a "safe space" when in fact it doesn't, at least not yet.


The Road Ahead: Toward Stronger AI Protections

The media's repeated coverage of Sam Altman's position makes one thing clear: he *wants* new legal frameworks that give AI conversations privacy parity with therapy. This growing call is echoed by privacy advocates, legal experts, and clinicians across mainstream news, tech platforms, and professional mental health channels.


Social media has amplified this message with infographics and cautionary posts (see the sample captions below), reinforcing the consensus: true confidentiality still lives in the therapy room.


Sample Media-Style Captions for Sharing

- “It’s very screwed up,” says Sam Altman, CEO of OpenAI. Even he agrees—ChatGPT conversations aren’t protected like therapy. As therapists, we offer true confidentiality. #MentalHealth #TherapyMatters #AIPrivacy


- Altman admits: “There’s zero legal protection” for ChatGPT users. Your words are safest with a licensed professional, not an AI.


Bottom Line

If you want to keep your personal stories safe, therapy offers legal and ethical protection that AI can't match, at least not yet. Even the CEO behind one of the world's top AI chatbots agrees: we urgently need stronger legal privacy protections for the digital age.


The Mind Practice — Because your trust deserves real protection.

Visit www.themindpractice.in to learn more.


Sources

- Sam Altman Reveals ChatGPT's Privacy Void (YouTube): https://www.youtube.com/watch?v=Lr4UDVch_ZA

- "Chat GPT conversations are not legally protected" - Sam Altman (TV47 Digital): https://www.tv47.digital/sam-altman-warns-chat-gpt-conversations-arent-legally-confidential-112038/

- Personal chats with ChatGPT could be used as legal evidence - OpenAI CEO Sam Altman (The Zambian Observer): https://zambianobserver.com/personal-chats-with-chatgpt-could-be-used-as-legal-evidence-openai-ceo-sam-altman/

- Sam Altman warns your private ChatGPT chats can be subpoenaed (Dataconomy): https://dataconomy.com/2025/07/28/sam-altman-warns-your-private-chatgpt-chats-can-be-subpoenaed/