
OpenAI: Over One Million ChatGPT Users Talk About Suicide Weekly

Reporter: Defara Dhanya
October 30, 2025 | 04:18 pm

TEMPO.CO, Jakarta – OpenAI's latest data reveals that over one million ChatGPT users engage weekly in conversations indicating suicidal thoughts or intentions. The figures were shared as part of a broader report on the company's efforts to improve the AI's mental health response capabilities.

According to a TechCrunch review on Monday, October 27, 2025, approximately 0.15 percent of ChatGPT's weekly active users had conversations containing explicit signs of suicidal intent. A similar proportion of users showed heightened emotional attachment to the AI, while hundreds of thousands displayed indications of psychosis or mania.

"With ChatGPT reportedly exceeding 800 million weekly active users, that percentage translates to more than one million people each week," OpenAI noted in its presentation.

Although OpenAI emphasized that such conversations are rare, the company acknowledged their significant impact. The data was released alongside ongoing efforts to improve ChatGPT's responses to mental health concerns, including consultations with more than 170 experts. According to these experts, the latest version of ChatGPT responds more accurately and consistently than its predecessors.

Mental Health and AI: A Growing Concern

In recent months, several reports have raised concerns about AI chatbots and their potential impact on users with vulnerable mental health. Research has shown that AI responses can sometimes reinforce dangerous beliefs by aligning too readily with user inputs.

Mental health issues in ChatGPT have become a central concern for OpenAI, especially following a lawsuit by the parents of a 16-year-old who reportedly expressed suicidal thoughts to ChatGPT before their death. Additionally, the Attorneys General of California and Delaware have urged OpenAI to strengthen protections for younger users amid the company's restructuring.

Through CEO Sam Altman, OpenAI claimed that serious mental health incidents in ChatGPT have been reduced, though the company did not disclose specific measures. Altman also stated that some restrictions would be relaxed, allowing adult users to engage in conversations with erotic content.

GPT-5 Shows Improved Mental Health Response

According to TechCrunch, OpenAI reported that the GPT-5 update responds appropriately to mental health issues 65 percent more often than previous models. In evaluations focused on suicide-related conversations, GPT-5 achieved a 91 percent compliance rate with expected behavior standards, compared with 77 percent for the prior model.

The company noted that GPT-5 maintains consistent safeguards in long conversations, an area where older versions' protections tended to weaken over extended interactions. OpenAI has also introduced new safety mechanisms, including indicators for emotional dependency and for non-suicidal mental health emergencies, as well as expanded parental controls. The company is developing an age-prediction system to identify child users and apply stricter protections. Despite describing GPT-5 as safer, OpenAI continues to offer older models such as GPT-4o to paying customers.

Disclaimer: If you or someone you know is experiencing suicidal thoughts or a crisis, please reach out to the nearest health institution and/or the relevant authorities. The International Association for Suicide Prevention offers a list of global helplines to assist you in times of crisis at https://www.iasp.info/crisis-centres-helplines/. If you are in Indonesia, you can call the Pulih Foundation at (021) 78842580, the Health Ministry's mental health hotline at (021) 500454, or the Jangan Bunuh Diri NGO hotline at (021) 9696 9293 for mental crisis assistance and/or suicide prevention.
Into the Light Indonesia also provides information about mental health and suicide prevention, including whom to contact, at https://www.intothelightid.org/tentang-bunuh-diri/hotline-dan-konseling/.
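The headline figure follows from simple arithmetic on the two numbers reported above (800 million weekly active users, 0.15 percent with explicit signs of suicidal intent); a quick sanity check, with illustrative variable names:

```python
# Sanity check on the reported figures: 0.15 percent of
# 800 million weekly active users exceeds one million.
weekly_active_users = 800_000_000  # reported weekly active users
share_flagged = 0.0015             # 0.15 percent of conversations flagged

flagged_users = round(weekly_active_users * share_flagged)
print(f"{flagged_users:,}")  # 1,200,000
assert flagged_users > 1_000_000
```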
Source: en.tempo.co