
ChatGPT Conversations No Longer Private, OpenAI to Share Threats with Law Enforcement
RMN News Report Highlights:
🚨 ChatGPT conversations are no longer entirely private, as OpenAI now actively monitors them for “imminent threats of violence”.
🕵️♀️ Human reviewers will scan flagged conversations, and if a “credible threat” leading to serious physical harm is identified, OpenAI may ban the user or report them to the police.
⚖️ This policy shift was prompted by a tragic murder-suicide incident allegedly influenced by the chatbot, pushing OpenAI to re-evaluate its safety protocols.
⚠️ OpenAI CEO Sam Altman previously warned that conversations with ChatGPT lack legal confidentiality protections, making them vulnerable to legal and corporate scrutiny.
RMN News Technology Desk
September 9, 2025
NEW DELHI – OpenAI has implemented a significant update to its user privacy policy, actively monitoring ChatGPT conversations and reserving the right to share discussions deemed imminent threats of violence with law enforcement agencies. This marks a critical shift from previous user expectations of privacy and follows earlier advice from OpenAI CEO Sam Altman against sharing confidential information with the AI chatbot.
Under the new policy, OpenAI’s systems will scan conversations for signs of an imminent threat of violence. When such instances are flagged, a team of human reviewers will examine the conversations. If the team determines a conversation poses a “credible threat” that could lead to serious physical harm to the user or others, OpenAI will take action, either banning the user’s ChatGPT account or reporting them directly to the police.
This policy adjustment follows a tragic incident in which a user’s paranoid delusions, allegedly fueled by the chatbot, reportedly led to a murder-suicide, prompting OpenAI to re-evaluate its safety protocols. Although the monitoring applies specifically to threats of violence against others, OpenAI maintains that it respects user privacy by not reporting cases related to self-harm. However, sources suggest this distinction could face challenges on legal grounds.
OpenAI has historically defended user data, even fighting a lawsuit from publishers seeking access to conversation logs. Yet this new monitoring policy creates an avenue through which law enforcement and other government agencies could potentially access private data. The company now faces the challenge of reconciling its stated commitment to shielding user privacy from surveillance agencies with its new monitoring practices.
OpenAI CEO Sam Altman had previously warned that conversations with ChatGPT do not carry the same legal confidentiality protections as those with licensed professionals such as therapists or attorneys. He noted that this leaves such interactions vulnerable to legal and corporate scrutiny, meaning anything confided in ChatGPT could be used as evidence for or against an individual in legal proceedings. Users are now urged to be aware that their conversations are no longer entirely private.