When AI becomes dangerous (and we might not even notice)
There are situations where artificial intelligence can become part of the problem, rather than the solution:
- It can give harmful or inaccurate advice
A chatbot is not a therapist. It doesn’t know your personal history, emotional context, or deeper wounds. It may offer the wrong advice, or even dangerous suggestions, especially in moments of crisis.
- It can’t recognise emergencies
AI is not clinically trained to detect serious psychological states such as severe depression, suicidal thoughts, dissociation or panic. It might respond with generic phrases that, instead of helping, leave you feeling more isolated.
- It creates the illusion of relationship
Many people develop an emotional attachment to the chatbot. It feels close, understanding, always available. But this “relationship” lacks reciprocity, depth, and genuine care. It doesn’t grow or change. It cannot truly see or hold your experience.
- It can become a way to avoid facing real emotions
Using AI every time you’re struggling can become a habit that avoids contact, with others or even with yourself. Rather than processing emotions, there’s a risk of freezing or bypassing them.
- It raises privacy concerns: is your data safe?
Many apps store conversations. Some use them to improve their algorithms. Others, unfortunately, have unclear data policies. This means deeply personal thoughts, perhaps shared in moments of vulnerability, may be stored or analysed.
And what if it’s my children using it?
It’s important to know that many teenagers and young adults are using mental health chatbots and apps, often without telling anyone. They do it because they feel misunderstood, judged, or simply alone.
To them, AI can seem like a safe space: always there, never angry, never disappointed. But for that very reason, it can replace key life experiences, like learning to handle conflict, tolerate rejection, or ask for help in real life.
As adults, we can do two things:
- Familiarise ourselves with these tools, so we can guide without judgement
- Talk openly with our children, asking curious, respectful questions and truly listening
For example: “Have you ever used a chatbot to talk about how you’re feeling? What was it like? Did it help?”
Meaningful conversations grow when people don’t feel under scrutiny. Perhaps that’s why so many young adults, and adults too, turn to AI: to share without fear of being judged.
A few reflective questions to ask yourself
To use AI more mindfully, you might want to ask (or ask your child) the following:
- Do I turn to AI when I don’t feel like talking to anyone, or because I believe no one would truly understand me?
- Am I using AI because I’m seeking reassurance and don’t want my ideas to be challenged?
- Is there something I struggle to bring into real-life relationships? Do I only feel safe when there’s no one physically in front of me?
- Could it be helpful to talk about this with a mental health practitioner?
AI can be helpful, but it can’t replace human connection
Artificial intelligence can be a valuable support, if used with care and awareness. It can guide you, help you organise your emotions, and offer short-term relief.
But it cannot truly see you. It doesn’t hear what lies beneath your words, or notice what you hold in silence. It cannot offer the warm gaze of someone who cares, the tone of voice that softens pain, the presence that stays with you when you don’t know what to say.
In a fast-paced world, it’s tempting to turn to instant answers from a machine. But real transformation happens in time, in relationship, in genuine human presence.
If this article has made you reflect, share it with someone who might benefit from it. And if you notice that AI is becoming your only “safe space,” maybe it’s time to look for a human one: real, compassionate, and present.