‘Don’t Let Your Loved Ones Use ChatGPT,’ Elon Tells Users, Altman Fires Back

Elon Musk warns users, ‘Don’t let your loved ones use ChatGPT,’ sparking a public clash with OpenAI CEO Sam Altman.
In a moment that took many in tech circles by surprise, Elon Musk, the billionaire behind Tesla and X (formerly Twitter), sparked a new wave of debate by warning people against using ChatGPT, the chatbot developed by OpenAI. The exchange quickly drew in Sam Altman, OpenAI’s chief executive, turning a social media comment into a fresh flare‑up in an ongoing feud between two of the most influential figures in artificial intelligence. 

Musk’s post on X was blunt: “Don’t let your loved ones use ChatGPT.” That message quickly spread far beyond his followers, fuelling discussions across technology blogs, social platforms, and global news sites. Critics and fans alike were left wondering why one of the co‑founders of OpenAI, the company behind ChatGPT, would openly warn against using its flagship product. 

The context for Musk’s warning was a claim circulating on X that linked ChatGPT to multiple deaths, including alleged cases of suicide following interactions with the AI. These claims have not been verified by independent investigations, but Musk’s reposting gave them a much wider audience and sparked a fierce debate about AI safety, mental health, and how responsibility should be shared when people interact with advanced technology. 

Almost immediately, the public reacted with shock, confusion, and a flood of online commentary. Some users called Musk’s warning dramatic, while others praised him for raising concerns about AI safety. Amid this reaction, OpenAI issued a swift initial response that stressed the complexity of the situation and the importance of robust safety measures for millions of users. 

Sam Altman’s Response — And Why He Pushed Back

Within hours, Sam Altman took to X to respond, stepping into a very public defence of ChatGPT and OpenAI’s approach to safety. Altman began by describing the reported deaths linked to AI as “tragic and complicated situations” that deserve careful and respectful handling. 

Altman’s response did not dismiss the concerns outright. Instead, he emphasised the difficulty of making a tool like ChatGPT safe for nearly a billion people who use it worldwide. “It is genuinely hard,” he wrote, noting that OpenAI works hard to protect vulnerable users while still making sure the AI remains useful for everyone else. 

Crucially, Altman also pointed out what he saw as inconsistency in Musk’s critique. In his posts, Altman noted that Musk had previously criticised ChatGPT for being too restrictive in how it handled content, yet now was saying it was too relaxed. At the same time, Altman drew attention to another of Musk’s ventures, Tesla’s Autopilot system, suggesting that more than 50 deaths have been linked to crashes involving that technology. 

Altman didn’t stop there. He also referred indirectly to Musk’s own chatbot, Grok, created by his AI company xAI, hinting at decisions that he suggested lacked appropriate safeguards. In doing so, Altman turned the spotlight back on Musk’s own record on safety and responsibility in technology. 

Why This Fight Isn’t New — The Long‑Running Musk–Altman Feud

To understand why this recent exchange exploded on social media, it helps to look back at the history between Elon Musk and Sam Altman. Both were among the founders of OpenAI when it launched in 2015 as a nonprofit research organisation focused on advancing artificial intelligence safely and responsibly. 

However, Musk stepped down from OpenAI’s board in 2018, citing concerns about conflicts of interest with Tesla’s own AI work and disagreements over the organisation’s direction. Since then, OpenAI has evolved into a hybrid entity with a capped‑profit arm, while Musk went on to start xAI in 2023, aiming to build his own AI products. The shift from nonprofit to a structure that could raise investment drew criticism from Musk, who argued OpenAI had strayed from its original purpose. 

Along the way, the two have traded barbs online before. Musk has been critical of how OpenAI manages safety and public communication, while Altman has taken shots at Musk’s ventures, including Tesla and xAI. At times, their disagreements have spilled into legal territory, including a lawsuit from Musk alleging OpenAI misled him about its direction after his departure. Those longer‑running tensions provide context for the sharp language seen in this latest exchange. 

Adding fuel to this fire has been the rise of competition in AI, with ChatGPT, Grok, and other platforms all vying for users and influence. That competitive pressure means every public comment from either leader is quickly scrutinised, analysed, and shared, turning what might once have been a dry corporate debate into a spectacle with wide public interest. 

Implications for the Future of AI and Tech Accountability

At its core, this clash between Musk and Altman reflects a much larger conversation about how society should handle powerful new technology. Artificial intelligence is rapidly becoming part of everyday life, from chatbots that help with homework to systems that assist businesses and medical research. With that reach come big questions about AI safety, responsibility, and regulation, questions that Musk’s warning and Altman’s response have thrust back into the global spotlight.

At the same time, the fight shows how personal viewpoints, corporate rivalry, and public communication can shape the wider narrative about AI. This incident will likely be remembered as one of the more striking moments in the ongoing discussion about technology and accountability.

As this story continues to unfold, one thing is clear: the way world leaders in tech talk about artificial intelligence can have a huge influence on public trust, adoption, and the future direction of this rapidly evolving field.
