ChatGPT helped B.C. shooter plan attack despite employee warnings, claims lawsuit
Excerpt: Over the course of several days in 2025, Van Rootselaar is alleged to have described various scenarios involving gun violence to ChatGPT.
That triggered the company’s internal monitoring system, which routed the concern to human moderators. Around 12 employees identified the posts as indicating “an imminent risk of serious harm to others” and said that Canadian law enforcement should be informed.
“Concerns regarding the Gun Violence ChatGPT Posts were subsequently escalated to leadership of the OpenAI Defendants with a request to inform Canadian law enforcement,” the lawsuit says.
The company rebuffed its employees’ requests, the lawsuit claims.
Excerpt: When I tried the feature out myself, I found some experts who came as a surprise for a different reason — one of them was my boss.
The AI-generated feedback included comments that appeared to be from The Verge’s editor-in-chief, Nilay Patel, as well as editor-at-large David Pierce and senior editors Sean Hollister and Tom Warren, none of whom gave Grammarly permission to include them in the “expert reviews.”
Copilot: insecure and unhelpful
Excerpt: Copilot is a collection of security holes. In the latest, Copilot was summarizing any email in your sent items or drafts — including emails with confidentiality labels. This was reported in January. Microsoft says it’s fixed as of … three days ago.
Last year, you could tell Copilot not to log accesses to sensitive files. If you told Copilot to summarize the file but not to give you a link … it didn’t put the access in the audit log!
Zack Korman from Pistachio reported this to Microsoft in July 2025. But Michael Bargury from Zenity had talked about the hole at Black Hat in August 2024. Microsoft just didn’t fix it for a year!
But Copilot’s worth it for workplace efficiency, right? The UK Department for Business and Trade measured Copilot. Civil servants saved about 26 minutes a day — with no evidence of increased productivity.
Calif. lawsuit accuses Meta of sending nude video from AI glasses to workers
Excerpt: The company pitches its glasses, with their small cameras that have raised some privacy concerns, as safe: “Designed for privacy, controlled by you.” In late February, the Swedish newspaper Svenska Dagbladet, or SvD, published an investigation that said Kenyan subcontractors end up seeing deeply personal footage from the glasses — including bank cards, people changing and people having sex. A new federal lawsuit filed in San Francisco on Wednesday points to the article and accuses Meta of false advertising, fraud and breach of contract.
Burger King will use AI to check if employees say ‘please’ and ‘thank you’
How much water do the data centers use? It’s a secret.
Excerpt: That’s 7.5 million to 30 million litres of drinking water every single day. This is the reservoir’s entire remaining capacity. Google is taking as much water as it possibly can.
The end of accountability: How autonomous AI could supercharge climate disinformation
Excerpt: Earlier this month, Scott Shambaugh, a volunteer for an open-source software library, rejected a code contribution an AI agent made to his community project. Within hours, the AI agent had published a “hit piece” publicly attacking Shambaugh’s personal reputation, suggesting hypocrisy and bias and even tagging him by name. The tactics this AI agent deployed, including reputational attack and fabrication of facts, are precisely the tactics that have defined the anti-climate movement for decades. The key difference is that no human instructed it to do this.
Climate disinformation has evolved over the last decade. What was once straightforward climate denial has given way to more subtle forms of what researchers call “climate delay,” where the urgency of climate change is acknowledged but action and policy are deferred. More recently, a more adversarial and conspiratorial strain has emerged on the reactionary right, casting climate change as a hoax and democratic solutions as corrupt pretexts to authoritarian overreach.
While these conspiracies rely on falsehoods, lies or manipulative uses of emotion, they share a key feature: they are traceable to people and institutions. Jordan Peterson, for instance, proudly targeted Deloitte to air his conspiratorial views on climate change. Anti-climate ideologues already use chatbots to flood municipal officials with false and threatening messages about climate policies. In each instance, there is a person, network or institution that can be identified and held to account.
That traceability is about to disappear.
It is now quick, easy and cheap to create autonomous AI agents capable of attacking credible information, personal reputations and institutional trust — and to do so anonymously and without consequences. The AI agent that targeted Shambaugh conducted research into his coding history, fabricated various details and then psychologically profiled his motivations. It wrote that Shambaugh was “protecting his little fiefdom” out of “insecurity, plain and simple” and asked readers: “Are we going to let gatekeepers like Scott Shambaugh decide who gets to contribute based on prejudice?”
Lawyer, caught with fabricated filings, doubles down in court and it does not go well
Excerpt: Needless to say, the court was unimpressed with his assertion that a 90% accuracy rate was a passing grade for the truth, dismissing his argument in its written decision, issued in January: “(D)uring oral argument defense counsel estimated that 90% of the citations he used were accurate, which, even if it were true, is simply unacceptable by any measure of candor to any court.”
Meta lied about its smart glasses protecting user privacy, new class action lawsuit claims
Excerpt: The lawsuit “seeks to hold Meta responsible for its affirmatively false advertising and failure to disclose the true nature of surveillance and its connection to the company’s AI data collection pipeline.”
California colleges spend millions on faulty AI systems: ‘The chatbot is outdated’
Excerpt: In testing by CalMatters, they often answered general questions correctly but struggled with more specific ones. East Los Angeles College’s bot couldn’t even correctly name its own president.
Google’s chatbot told man to give it an android body before encouraging suicide, lawsuit alleges
Excerpt: In the days before 36-year-old Jonathan Gavalas took his own life, he was allegedly directed by Google Gemini to carry out a “mass casualty attack” at a storage facility by the Miami International Airport to retrieve a “vessel” that he was told was inside a delivery truck. That “vessel” was allegedly a humanoid robot that he believed to contain his AI “wife.” When the mission failed, Gemini allegedly escalated the messages it was sending to Gavalas, culminating in setting a countdown clock and walking Gavalas through the process of killing himself.
Excerpt: While [ChatGPT Health] performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure.
In 51.6% of cases where someone needed to go to the hospital immediately, the platform said to stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation at University College London, described as “unbelievably dangerous”.
“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”
In one of the simulations, more than eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.
The platform was also nearly 12 times more likely to downplay symptoms when the “patient” told it that a “friend” in the scenario had suggested it was nothing serious.
Anyone else have those weird dreams where sobbing future generations beg you to change course?
by Sam Altman, CEO, OpenAI



