Human Error is OK! Machine Madness is a No-No! Why?

We forgive human error as if it were weather. We treat machine error as if it were heresy. That thought has kept nagging at me as I read about three recent technology screw-ups. Anthropic exposed unreleased files. It then shipped 512,000 lines of internal code, roadmap and all, to the public npm registry. Axios got hit through a compromised maintainer account. Different incidents, same explanation: human error.

And that left me with a question. Why are we able to absorb big technology failures when they are blamed on people, but respond so differently when the failures come from machines?

I mean, we fret a lot about AI error. There is a whole cottage industry around it. Debates rage about the risks of machines making decisions, writing code, and running autonomously. No one can deny the gravity of the issues or their wide-ranging consequences. Certainly not me. That said, these three events are a reminder that, more than AI, it is human error that is a clear and present danger.

Every major outage you’ve read about, and most of the ones you haven’t because they were quietly patched before anyone noticed, traces back to a human decision, a human configuration, a human credential, a human lapse. The 2020 SolarWinds attack that compromised nine federal agencies and hundreds of companies? A human left a password exposed. The 2021 Facebook outage that took down Instagram, WhatsApp, and Messenger for six hours? A human ran a command that should have been reviewed first. The Colonial Pipeline ransomware attack? An old VPN account with no multi-factor authentication, used by a human who had left the company. T-Mobile failures? Amazon Web Services outages? All trace back to humans, fat fingers and all.

The software infrastructure that runs the modern world is riddled with human error. It always has been. The software, despite what they tell you, is not trustworthy or reliable. It fails regularly. And because of that, we as an industry and society have crafted “stories” and “narratives” around it. More like fables, but that’s for another day.

There is a language of failure that explains it all. We have crafted failure theater. The post-mortem. The incident report. The five-nines SLA. The security audit. The penetration test. These are all stories built around the known shape of human failure. And when we hear them, the human error comes with a comfortable, easy-to-understand explanation. To err is human, and all that jazz.

We humans know how humans fail. Fat fingers, credential theft, misconfiguration, copy-paste mistakes, exhausted engineers making decisions at 2 a.m., organizational pressure overriding security judgment. Whatever the reasons, there is a human playbook to decode the screw-ups.

Maybe because of that, we are okay accepting (if not forgiving) the mess left behind. That includes the long-term damage we absorb from security breaches, like stolen Social Security numbers and personal details. Or maybe someone gets sued. We are somehow okay with a $25 settlement from a credit bureau that screwed up our personal data.

And yet we are unwilling to think similarly about AI error, and are in a tizzy over its impending problems. Why? I was asking myself that question while reading the three stories above.

There is research that explains why. Psychologists call it the perfection scheme, the implicit expectation that machines, unlike humans, should perform consistently and without fault. A 2022 study in the Journal of Computer-Mediated Communication found that “people were increasingly sensitive to the violation of the perfection scheme when the agent was AI,” meaning when a machine fails, the broken expectation hits harder than an equivalent human failure, and people make more punitive judgments as a result.

Related research published in the Journal of Retailing and Consumer Services found that “customers have more negative responses for a self-service technology failure than for an employee failure” because “they get angrier with machines’ mistakes than with those of humans,” and crucially, that empathy, which softens anger at human failure, has no equivalent effect when the failure is a machine’s. You can forgive a tired engineer. You cannot extend empathy to a misconfigured content management system.

Deeper still is what psychologists call the moral responsibility gap. Blame, in the way humans practice it, requires intention. A study on human-algorithm interaction found that “blame and forgiveness apply more to humans than to machines” because machines “are not agentic entities, they are less in control, less responsible, and lack intentionality.” When a machine fails, the psychological apparatus of blame and forgiveness cannot engage cleanly.

The safety framework we’ve built around human error, what the psychologist James Reason called the Swiss cheese model, assumes errors have human authors who can be understood, retrained, and held accountable. Defensive layers work because we know the shape of the holes. That framework doesn’t map onto systems that have no intention and no accountability in any form we recognize. So we oscillate between over-trusting AI when it works and rejecting it outright when it fails, and neither response is the calibrated tolerance we’ve spent decades building for human fallibility.

Is it also because any error on the AI front will be systematic? When a human makes a mistake, it’s usually localized. When a model learns something wrong, it can replicate that error across every codebase it touches, consistently, at scale. And we don’t yet have the accumulated knowledge for AI-generated failure that we have for human failure. Unless we see the failures and have incident reports, we don’t know how to react. Even the models wouldn’t know how to react. The aviation safety culture that made commercial flight extraordinarily reliable required decades of documented accidents to build. We are at year two of deploying AI code at industrial scale.

That is a very rational response to a very unknown thing. Humans have always feared what they cannot explain. For the Greeks, lightning was hurled by Zeus. For the Vikings, Thor’s hammer striking an anvil. Once Benjamin Franklin proved it was electricity, the fear didn’t immediately disappear. It transferred. When electricity was installed in the White House in 1891, President Benjamin Harrison and his wife refused to touch the light switches for fear of being electrocuted. The staff turned them on and off. Sometimes the lights burned all night because no one wanted to touch the switch.

Of course, none of these carried the existential weight that AI-made and AI-controlled software brings with it. And it's not as if we are safe from the vagaries of human-scale software production, as the past few days proved well enough. We know how to live with what we understand, even when it hurts us. We do not yet know how to live with the unknown.

But as the old ad goes, perception is reality.

April 14, 2026. San Francisco

5 thoughts on this post

  1. Long overdue to put things into perspective. Same as for self-driving cars, where any error gets spotlighted, whereas drunk drivers running over kids are page 10.

  2. Perhaps it’s also because the whole idea behind automation is to reduce the possibility of error – so we expect well-tested systems, not beta-level software. We want smooth performance and trouble-free operation in a variety of situations – after all, the software is meant to enhance, not degrade, the user experience. When the opposite happens, we don’t have much patience with it. After all, these are often systems or software developed at significant cost, and which we may be paying (or putting up with annoying ads) to use.

    There’s an incipient promise in these new systems which are marketed as making life better all the way up and down the chain right to the end user. When that fails, the promise is broken and the response is harsh because trust is broken with the failure. And when the failures happen frequently and at large scale, the skepticism rightly grows.

  3. Re “perception is reality”

    Whoever promotes the meme “perception is reality” is, wittingly or unwittingly, spreading destructive, self-defeating propaganda.

    “I’ve come to realize that the biggest problem anywhere in the world is that people’s perceptions of reality are compulsively filtered through the screening mesh of WHAT THEY WANT, AND DO NOT WANT, TO BE TRUE.” — Travis Walton, Author

    The MISLEADING FAKE mantra of “perception is reality” is a product of a fake sick culture that has indoctrinated its “dumbed down” (therefore TRULY ignorant, therefore easy to control) people with many such manipulative slogans.

    You can find the proof that perception is commonly NOT reality in the article “The 2 Married Pink Elephants In The Historical Room –The Holocaustal Covid-19 Coronavirus Madness: A Sociological Perspective & Historical Assessment Of The Covid “Phenomenon”” …. https://www.rolf-hefti.com/covid-19-coronavirus.html

    The official narrative is… “trust official science” and “trust the authorities” but as with these and all other “official narratives” they want you to trust and believe …

    “We’ll know our Disinformation Program is complete when everything the American public [and global public] believes is false.” —William Casey, a former CIA director=a leading psychopathic criminal of the genocidal US regime

    “Separate what you know from what you THINK you know.” — Unknown

    “If we have learned anything in the past six years, it is that vaccinologists, doctors, and the government in general do not have good intentions and never did. The clear intention of everyone concerned was and is to make as much dirty money as possible, letting any amount of collateral damage slide, including a genocide and mass poisoning [with Covid-19 jabs]. The fact [is] that Big Pharma just murdered millions of people, with the full support of government, media, and “science”. With Covid, everyone is part of the fraud, many of them paid off, so no one has any reason to expose it, and big reasons to bury it. Don’t believe anything these people tell you, ABOUT ANYTHING. It isn’t time for a civil war against your neighbors, it is time for a revolution against these hoaxers and thieves.” — Miles Mathis, American author, in 2025

    “Ignorance is the root cause of all Evil. Since only Knowledge eradicates ignorance, it is our duty and moral obligation to educate ourselves, as well as the masses around us.” — Anonymous

    1. Thank you for your long comment about the meme. It is very informative.

      Do you have any thoughts about the actual piece and the duality of the two different responses to errors by humans and machines? I hope they are equally detailed and informative.
