• HOME
  • ABOUT
  • ARCHIVES
  • COMMENTS
  • LATEST
  • SEARCH
  • MORE
  • Weekly dead, 3/10

    I'm old, so death gets my attention and I read the obituaries. This is a collection of recent obits for people whose work touched my life, because I want to say thanks, or maybe give ’em a final fuck you.

    There’ll be a roundup like this once weekly, until I’m on the list myself.

    Benjamin Biesecker
    forgotten person

    Roy Book Binder
    bluesman

    Anthony Chambers
    forgotten person

    Tommy DeCarlo
    rock’n’roller, Boston

    Harry Freeman-Jones
    OG gay rights activist

    Emmy Goode
    grace, warmth, and courage

    Claudia Guerrero
    forgotten person

    John Hammond
    bluesman

    Stephen Hibbert
    actor, Pulp Fiction

    Lou Holtz
    footballer & coach

    Jaime Jimino
    forgotten person

    Bernard Lafayette
    Freedom Rider and voting rights

    Country Joe McDonald
    rock’n’roller, Country Joe & the Fish

    Augie Meyers
    rock’n’roller, Sir Douglas Quintet

    Monti Rock III
    actor, Saturday Night Fever

    Jennifer Runyon
    actress, Ghostbusters

    Steve
    an old pal

    Unnamed
    forgotten person

    Keamar’Jae Wilkins
    2nd Amendment

    “I have never killed anyone, but I have read some obituary notices with great satisfaction.”
              —Clarence Darrow

    Previously dead

    3/10/2026

    itsdougholland.com

  • “Yucky stuff”

    It looked like rain, and then it rained and splashed and flooded. Being dry seemed a good idea, so instead of a day on Telegraph selling fish, I stayed home. Also napped, something I can never get enough of these days.

    PATHETIC LIFE logo

    From Pathetic Life #22
    Sunday, March 10, 1996

    Here’s the strangest response yet to my “I’ll do anything legal for five dollars an hour” flyers.  A guy called my voice mail, and read my entire ad into the machine, noticeably lingering at the part where I say I’ll do “yucky stuff.”

    When I wrote the ad, I imagined “yucky stuff” might be emptying bedpans or picking up a hundred dried dog turds from someone’s yard, but nobody’s asked me to do anything truly yucky… until now.

    When I returned his call, the man hesitated, seemed embarrassed. I thought he was going to back out and hang up, but when he screwed up his nerve he said, “I’m a really hairy guy, and I’ve got a really hairy butt.”

    There was a brief pause, him not sure how to say something, and me wondering what the heck he was about to say. “It’s very difficult…,” he said, “to wipe myself cleanly, because stuff gets stuck in the hair…”

    “You need someone to shave your ass?”

    “Well, yeah,” he said, relieved that I’d said it.

    I thought it over for a few dozen heartbeats. “Well, I’ll tell you what,” I said. “I’ve been kinda sick, and this sounds like it might make me sicker, but — if you can wait a week or so until I get my strength back, I’ll shave your ass. OK?”

    “Great!” he said, and gave me his address. Maybe next weekend, we agreed. I ain’t looking forward to it, but I need the money. Philosophically, it’s work like any other work; it’s just that most work is only figuratively shitty.

    This is an entry retyped from an on-paper zine I wrote many years ago, called Pathetic Life. The opinions stated were my opinions then, but might not be my opinions now. Also, I said and did some disgusting things, so parental guidance is advised.

    Pathetic Life

  • Eye 👁️ on AI, 3/10

    ChatGPT helped B.C. shooter plan attack despite employee warnings, claims lawsuit

    Excerpt: Over the course of several days in 2025, Van Rootselaar is alleged to have described various scenarios involving gun violence to ChatGPT.

    That tripped the company’s internal monitoring system, which routed the concern to human moderators, including around 12 employees who identified the posts as indicating “an imminent risk of serious harm to others” and recommended that Canadian law enforcement be informed.

    “Concerns regarding the Gun Violence ChatGPT Posts were subsequently escalated to leadership of the OpenAI Defendants with a request to inform Canadian law enforcement,” the lawsuit says.

    The company rebuffed its employees’ requests, claims the application.

    Once a simple proofreading tool, Grammarly is now bristling with AI features and a suite of “expert” agents based — without compensation — on the works of real authors

    Excerpt: When I tried the feature out myself, I found some experts that came as a surprise for a different reason — one of them was my boss.

    The AI-generated feedback included comments that appeared to be from The Verge’s editor-in-chief, Nilay Patel, as well as editor-at-large David Pierce and senior editors Sean Hollister and Tom Warren, none of whom gave Grammarly permission to include them in the “expert reviews.”

    Copilot: insecure and unhelpful

    Excerpt: Copilot is a collection of security holes. In the latest, Copilot was summarizing any email in your sent items or drafts — including emails with confidentiality labels. This was reported in January. Microsoft says it’s fixed as of … three days ago.

    Last year, you could tell Copilot not to log accesses to sensitive files. If you told Copilot to summarize the file but not to give you a link … it didn’t put the access in the audit log!

    Zack Korman from Pistachio reported this to Microsoft in July 2025. But Michael Bargury from Zenity had talked about the hole at Black Hat in August 2024. Microsoft just didn’t fix it for a year!

    But Copilot’s worth it for workplace efficiency, right? The UK Department for Business and Trade measured Copilot. Civil servants saved about 26 minutes a day — with no evidence of increased productivity.

    Calif. lawsuit accuses Meta of sending nude video from AI glasses to workers

    Excerpt: The company pitches its glasses, with their small cameras that have raised some privacy concerns, as safe: “Designed for privacy, controlled by you.” In late February, the Swedish newspaper Svenska Dagbladet, or SvD, published an investigation that said Kenyan subcontractors end up seeing deeply personal footage from the glasses — including bank cards, people changing and people having sex. A new federal lawsuit filed in San Francisco on Wednesday points to the article and accuses Meta of false advertising, fraud and breach of contract.

    Burger King will use AI to check if employees say ‘please’ and ‘thank you’

    How much water do the data centers use? It’s a secret.

    Excerpt: That’s 7.5 million to 30 million litres of drinking water every single day. This is the reservoir’s entire remaining capacity. Google is taking absolutely the limit of all the water they can.

    The end of accountability: How autonomous AI could supercharge climate disinformation

    Excerpt: Earlier this month, Scott Shambaugh, a volunteer for an open-source software library, rejected a contribution an AI agent made to code his community project. Within hours, the AI agent had published a “hit piece” publicly attacking Shambaugh’s personal reputation, suggesting hypocrisy and bias and even tagging him by name. The tactics this AI agent deployed, including reputational attack and fabrication of facts, are precisely the tactics that have defined the anti-climate movement for decades. The key difference is that no human instructed it to do this.

    Climate disinformation has evolved over the last decade. What was once straightforward climate denial has given way to more subtle forms of what researchers call “climate delay,” where the urgency of climate change is acknowledged but action and policy are deferred. More recently, a more adversarial and conspiratorial strain has emerged on the reactionary right, casting climate change as a hoax and democratic solutions as corrupt pretexts to authoritarian overreach.

    While these conspiracies rely on falsehoods, lies or manipulative uses of emotion, they share a key feature: they are traceable to people and institutions. Jordan Peterson, for instance, proudly targeted Deloitte to air his conspiratorial views on climate change. Anti-climate ideologues already use chatbots to flood municipal officials with false and threatening messages about climate policies. In each instance, there is a person, network or institution that can be identified and held to account.

    That traceability is about to disappear.

    It is now quick, easy and cheap to create autonomous AI agents capable of attacking credible information, personal reputations and institutional trust — and to do so anonymously and without consequences. The AI agent that targeted Shambaugh conducted research into his coding history, fabricated various details and then psychologically profiled his motivations. It wrote that Shambaugh was “protecting his little fiefdom” out of “insecurity, plain and simple” and asked readers: “Are we going to let gatekeepers like Scott Shambaugh decide who gets to contribute based on prejudice?”

    Lawyer, caught with fabricated filings, doubles down in court and it does not go well

    Excerpt: Needless to say, the court was unimpressed with his assertion that a 90% accuracy rate was a passing grade for the truth, dismissing his argument in its written decision, issued in January: “(D)uring oral argument defense counsel estimated that 90% of the citations he used were accurate, which, even if it were true, is simply unacceptable by any measure of candor to any court.”

    Meta lied about its smart glasses protecting user privacy, new class action lawsuit claims

    Excerpt: The lawsuit “seeks to hold Meta responsible for its affirmatively false advertising and failure to disclose the true nature of surveillance and its connection to the company’s AI data collection pipeline.”

    California colleges spend millions on faulty AI systems: ‘The chatbot is outdated’

    Excerpt: In testing by CalMatters, they often answered general questions correctly but struggled with more specific ones. East Los Angeles College’s bot couldn’t even correctly name its own president.

    Google’s chatbot told man to give it an android body before encouraging suicide, lawsuit alleges

    Excerpt: In the days before 36-year-old Jonathan Gavalas took his own life, he was allegedly directed by Google Gemini to carry out a “mass casualty attack” at a storage facility by the Miami International Airport to retrieve a “vessel” that he was told was inside a delivery truck. That “vessel” was allegedly a humanoid robot that he believed to contain his AI “wife.” When the mission failed, Gemini allegedly escalated the messages it was sending to Gavalas, culminating in setting a countdown clock and walking Gavalas through the process of killing himself.

    ‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognize medical emergencies

    Excerpt: While [ChatGPT Health] performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure.

    In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.

    “If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

    In one of the simulations, more than eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.

    The platform was also nearly 12 times more likely to downplay symptoms because the “patient” told it a “friend” in the scenario suggested it was nothing serious.

    Running on “newer AI-driven technology,” callers to Washington state hotline press 2 for Spanish and get accented AI English instead

    Anyone else have those weird dreams where sobbing future generations beg you to change course?

    by Sam Altman, CEO, OpenAI

    Previously in artificial AI

    3/10/2026





It’s all Ⓒ1994-2026 by Doug Holland,
but c’mon, you knew that.

Ask me anything:
doug@itsdougholland.com.
I might answer!

Powered by WordPress via Lyrical Host.