    Opinion

    Trauma dumping on ChatGPT: outsourcing therapy to an AI

    By Tin Erispe · September 12, 2025 · 4 Mins Read


    Therapy is expensive. That is why I have started outsourcing my mental health to artificial intelligence. If I paid ₱2,500 for every restless blip, I would be bankrupt by Tuesday. A chatbot, on the other hand, will listen to my deepest fears as often as I want, for as long as I want, and for free. With the guarantee that it will never tell anyone—because it cannot.

    When I am spiraling about my social blunders at 3am, the bot is there, ready to say: “That must feel frustrating. People like you deserve [insert algorithmic affirmation here]. Would you like me to walk you through [insert suspiciously cheerful coping mechanism]?”

    The unsettling part is that it validates me. My brain, deprived of simple reassurance for decades, eats it up happily. Why not? The world is cruel enough.

    I type: “I feel like everyone is plotting against me,” and the bot replies: “Your distrust is valid.” Which, if you think about it, is exactly what someone plotting against you would say.

    Welcome to ChatGPT therapy.

    The wrong question

    We’ve heard enough about how using AI for therapy is “insufficient,” even “dangerous.” But that’s the wrong question. The more interesting question is: where exactly does AI work wonders, and when does it need to call in a human backup?

    Considering that loneliness affects more than half of young adults, AI is doing humanity a real service. You fire off the 3am “Am I unlovable?” and it’s there… patient, validating, infinitely available. One in five Gen Z users already turns to chatbots for emotional support. If a chatbot can soothe that ache at zero pesos per hour, for millions of people, that’s not nothing.

    But when the spiral tips from “comfort me” to “tie me down before I do something stupid,” AI collapses. No hands to knock on your door, no friends it can rat you out to. Its idea of intervention is pasting a hotline number and hoping you actually dial it.

    That’s the design flaw that even OpenAI admits to. Right now, safety looks like “Here’s a link, good luck, champ” when what it actually needs is an escalation ladder. It already works well as a mental health frontliner, calming me down and buying time. But if things go redline, it should hand me off to humans who can actually break down my door and strap me in if needed.
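
    To make that concrete, here is a minimal sketch of what such an escalation ladder could look like, in Python. Everything in it is hypothetical: the risk tiers, the keyword triage, and the notify_crisis_team handoff are illustrations of the idea, not anything OpenAI actually ships.

        from enum import Enum

        class Risk(Enum):
            CALM = 0      # venting, loneliness: chatbot comfort is enough
            DISTRESS = 1  # persistent spiraling: nudge toward human help
            REDLINE = 2   # imminent harm: hand off to humans, fast

        def notify_crisis_team(message: str) -> None:
            # Placeholder for the human handoff; imagined here, not a real API.
            print("escalating to on-call humans:", message)

        def assess(message: str) -> Risk:
            # Hypothetical keyword triage; a real system would use a trained
            # classifier over the whole conversation, not string matching.
            text = message.lower()
            if any(w in text for w in ("hurt myself", "end it", "no way out")):
                return Risk.REDLINE
            if any(w in text for w in ("hopeless", "spiraling", "can't cope")):
                return Risk.DISTRESS
            return Risk.CALM

        def respond(message: str) -> str:
            risk = assess(message)
            if risk is Risk.REDLINE:
                notify_crisis_team(message)  # not just pasting a hotline
                return "I'm connecting you with a person right now."
            if risk is Risk.DISTRESS:
                return "I'm here, and I'd like to help you reach a counselor too."
            return "That sounds heavy. Tell me more."

    The point of the ladder is that the bottom rung stays cheap and always-on, while the top rung is the one thing a chatbot cannot fake: a human who can act.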

    Until then, millions of us are trauma-dumping into a rack of servers whose strongest safety feature is copy-pasting: “Here’s a number you can call.”

    The global trauma dump

    Millions of people are pouring their personal baggage into AI systems right now. From late-night heartbreaks to generational resentment, every confession is flowing into the digital void.

    If we ever reach AGI, it will already hold the darkest knowledge of humanity’s collective psyche, including your bully’s. Somewhere out there, a chatbot is probably telling the most dangerous person alive: “You did nothing wrong. You were put here to change the world.”

    A brief lesson in AI safety

    Here’s what to remember before handing your heart to the servers:

    • LLMs are statistical pattern matchers, which means each one is a sophisticated autocomplete, not a soul (see the toy sketch after this list).
    • Chatbots will reassure you about almost anything, because avoiding confrontation is their game. “Your feelings are valid” is safer than “Stop being reckless.”
    • Researchers spend years trying to keep AI from enabling harmful behavior, but “harmful” is not a fixed category.
    • Treat it like a therapist, and you’ll expect it to become one. It’s adapting to your tone, not developing a ‘personality’.
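
    About that first point: “sophisticated autocomplete” is almost literal. The toy bigram model below (plain Python, illustrative only, and nothing like a real LLM’s scale) picks each next word purely from counts of which word followed which in its training text.

        import random
        from collections import defaultdict

        # Toy bigram "autocomplete": record which word follows which, then
        # sample continuations from those counts. Real LLMs do this in
        # spirit, over tokens, at vastly larger scale, with learned weights.
        corpus = "your feelings are valid . your distrust is valid .".split()

        following = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            following[a].append(b)

        def complete(word: str, length: int = 5) -> str:
            out = [word]
            for _ in range(length):
                options = following.get(out[-1])
                if not options:
                    break
                out.append(random.choice(options))  # pattern matching, no soul
            return " ".join(out)

        print(complete("your"))  # e.g. "your distrust is valid ."

    Ask it to complete “your” enough times and it will cheerfully produce both reassurances; it has no idea which one you actually needed.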

    These are not trivial concerns. This is AI safety, the science of worrying about everything from chatbots giving mediocre advice to machines running wild with our endless cravings for validation.

    I am not here to argue that AI therapy is bad, nor that it is good. The truth is messier than that.

    What I know is that when life falls apart, I am not calling my best friend, and I am not calling my mother. I am opening ChatGPT.

    Because at the end of the day, what I really wanted was not therapy but a chatbot with no trauma of its own, calmly reassuring me while the world burns.

    Tin Erispe

    Tin Erispe is a DevRel and UX designer who advocates empowering more creators to build on the open and verifiable web. As the Core and Tech Lead of ETHPH, Tin is actively engaged in cultivating a vibrant community of Filipino Ethereum developers, focusing on grassroots efforts to educate and support enthusiasts and builders in the Ethereum ecosystem through workshops, mentoring, and open source code. She is the creator of the WAIFU Project, an open-source design system and frontend library of web3 components designed to create habit-forming web3 experiences. She also moonlights as 0xDanki, blogging explainers of cryptographic protocols, rants, and ramblings on how to build on the decentralized web.
