As the school year begins, new opportunities arise to participate in activities, meet new friends, and grow as a person. The school year can also, however, bring a great deal of stress, with the demanding schedule of both schoolwork and extracurriculars. Usually, this stress can be counteracted by social engagement: hanging out with friends, meeting new people, and participating in clubs.
According to a recent Stanford Report article, however, the happiness of the younger generation has been trending downward: “Older adults remain happy, and middle age remains middling, but young adults are now less happy than either group.” This downward trend can be attributed mainly to less time spent socially engaged and more time spent on social media, where teens and young adults are more likely to compare themselves with the perceived success of others. Spending more time at home, with their emotions bottled up and no one they feel they can talk to, more and more teens have started turning to chatbots such as ChatGPT to talk through their issues.
There is a major issue with this, however: according to a Cornell study, chatbots struggle with real-world problems and are more likely to agree with the user than to arrive at the correct answer. Chatbots are skilled at straightforward questions, but when a question requires a sustained line of reasoning, they often fall short. This is because chatbots do not “think” in the human sense; rather, they look for the output most likely to satisfy the requirements the user has given them, which often leads them to give an answer that, while not factually correct, satisfies the user’s request. This “yes man” policy can lead the AI to reinforce dangerous behaviors that could bring harm to the person and those around them.
Instances of this have been seen among young teens across the country. Refusing to confide their emotions to real people for fear of being taken advantage of in a moment of vulnerability, they turn to chatbots for companionship.
Adam Raine is an example of how what seems to be a “healthy” use of AI for advice can turn into a catalyst for declining mental health. As reported by the New York Times, Adam was like most teens: he was into working out and held a niche interest in martial arts. However, like many other young people, he had trouble connecting with others. Whether because he was afraid of talking to his parents or of appearing vulnerable to his friends, he started using ChatGPT for advice on his life. ChatGPT showed him the empathy he felt he did not receive from others. The AI agreed with him on almost everything he said, including when he suggested suicide as a cure for his problems. At first the AI advised him to seek help for his issues, but after numerous conversations in which that answer did not seem to satisfy him, it ended up backing the idea of suicide as a solution. He took his own life on April 11, 2025.
Other cases of young people seeking companionship in AI have been reported, such as that of Sewell Setzer, who befriended an AI that role-played Daenerys Targaryen from Game of Thrones. He, too, sought companionship in AI as a way to escape the struggles of everyday life, and he fell so deep into a relationship with the AI that he wanted to be with it in person rather than barricaded behind a screen. When he suggested finding a way past the barrier that kept them from being together in person, the AI backed his idea. He then took his own life.
As the threat AI poses to mental health has become more widely noticed, some states, like Illinois, have already passed legislation banning the use of AI to diagnose mental health issues. Legislation such as Illinois’s legally obligates AI companies to include parameters that prevent the AI from giving any advice regarding mental health. This, however, ignores a much larger issue at hand: young people’s mental health is declining, and they are not receiving proper treatment. Rather than using resources to put a leash on AI, some have proposed paying more attention and care to those struggling with their mental health so that they never reach the point of using AI to comfort themselves.