Om Malik is a San Francisco-based writer, photographer, and investor.
The New Yorker articulated something that has been on my mind for a long time: AI’s self-inflicted messaging crisis. It is as clear an example as any of my long-standing argument that words have consequences.
OpenAI CEO Sam Altman wants to “de-escalate” the rhetoric around artificial intelligence, days after a Molotov cocktail hit the gate of his San Francisco mansion and bullets were fired at his home. He is certainly the most high-profile of targets, but there have been others who have earned the ire of those who are threatened by AI and the rhetoric around it.
The New Yorker’s Kyle Chayka says that Altman himself is at fault. After all, he did say that AI would “most likely lead to the end of the world, but in the meantime there’ll be great companies created.” You cannot spend years telling people your product is an existential threat and then ask them to calm down in an instant.
This is not the first time Silicon Valley and its leaders have shown that they are missing the empathy gene. The social media crisis of the mid-2010s, data Darwinism, and the quantification of humans are all recent examples of the industry failing to learn that technology now touches, impacts, and reshapes everything. And yet, around these parts, everything is spoken and messaged in absolutes and extremes.
OpenAI’s global policy chief Chris Lehane offered a prime example this week. The former political operative told the SF Standard that “some of the conversation out there is not necessarily responsible,” placing the blame squarely on so-called “doomers” for the violence directed at Altman. Lehane is doing what a political operative always does: blame the other side while deflecting from your own. This only reinforces my general mistrust of companies staffed with Facebookers and political hit-men.
“The messaging behind A.I. companies has always relied on a self-serving paradox: the technology under development is so potentially dangerous that the public’s only choice is to put blind faith in the handful of opaque businesses rapidly developing it,” Chayka writes.
As I have previously argued, the technology industry would benefit enormously from a more grounded vocabulary around AI. It is a genuinely significant shift that doesn’t need to be oversold. The technology is a necessary evolution in our over-digitized world. It reshapes how software gets built and used. It marks the shift from craft to industrial production in software. It is also the technology we need in a world swimming in data and information; how we sort and make sense of it all requires the intervention of machines. Of course, this means certain categories of work will get reorganized. That is a big enough story without reaching for extinction metaphors.
Instead of doing the hard things, what we have is theatrics. “In response to the growing unease, A.I. companies have lately been undertaking various other efforts to appear more high-minded,” writes The New Yorker. “Following the lead of Anthropic, Google DeepMind recently hired an in-house philosopher.” This is classic Silicon Valley: when confronted with a problem, find a gesture that shows you are doing something.
The problem is much bigger than hiring a philosopher. This is what happens when you are detached from the reality of normals and living in a world of perpetual growth, endless prosperity, and MBA doublespeak. The lack of empathy is a feature, not a bug, in Silicon Valley, which despite its egalitarian pretensions is a very selfish, greedy, winner-take-all society. It’s Wall Street 2.0. And empathy is a bumper sticker.
Against this backdrop, it is not surprising that those outside see technology not as a revolution but as a full stop. We talk about thousands of jobs vanishing as if discussing a white paper, without actually knowing what it means for those impacted, for society in general, and for the places they inhabit in particular.
Still, being part of the system, I understand the impulse to hype things up. You need the world to get excited. Excited, not extinct, for God’s sake. Say what the thing does. Describe what it changes. Let people draw their own conclusions about the stakes. Sadly, that is too normal a stance, too practical. The language of hyperbole is not for the regulars; it is for those with bags of money.
Normal does not get funded. Measured does not get an $850 billion valuation. Sober does not land on the cover of every magazine in the world simultaneously. Markets, especially technology markets in their adolescent phases, run on hype the way souped-up engines run on nitro. If you strip out the apocalyptic language, you are left with a very good software product. A very good software product is worth, I don’t know, a few billion dollars. Existential stakes, I suppose, are worth much more.
As I explained in my piece about the experimental nature of AI hardware, experiments and new ideas are signposts to the future. One of the lessons of being around long enough is that I have learned to temper my language while retaining my excitement for the new and novel, and to always remain optimistic. But not all bets pay off.
The e-scooter boom is a good example. Between 2017 and 2019, venture capital poured something like five billion dollars into shared micromobility. The pitch was urban transformation, the end of car culture, the last-mile revolution that would remake cities. Bird, Lime, Spin, Scoot, and a dozen others raised extraordinary sums on the strength of that story. The companies are, for the most part, gone or hollowed out. Bird went bankrupt. The transformation never arrived. What remains are the e-scooters. They are still on the sidewalks. Still a menace. (If you ask me. And apparently you are.) The technology was real and moderately interesting. The story built around it was financial, not a description of reality.
AI is not the same as an e-scooter company, obviously. Otherwise, I wouldn’t spend a third of my waking minutes with AI tools, from the established to the nebulous. I wouldn’t be trying to build my own OpenClaw editing suite. The underlying capability here is orders of magnitude more consequential.
But e-scooters, like the gig economy before them, were sold with the same structure we see across Silicon Valley. You build the biggest possible frame around what you have, because the frame is what investors price. The frame is what gets you the compute contracts and the government meetings and the regulatory deference.
Chayka is right in saying that the AI messaging paradox works in both directions. If the technology is as dangerous as its makers claim, then we are being led somewhere genuinely catastrophic by people who have appointed themselves our guides with no democratic mandate. If it is not that dangerous, then we are heading toward an economic reckoning when the hype meets the reality. Neither outcome is particularly appealing. The founders are running a bet-the-world argument and the house always wins.
Despite being a believer in AI, its potential, and why its time is now, I struggle with the idea that the people who made their reputations selling ads for Facebook have the best interests of normals at heart. We love to label Altman as the great AI Satan, but equally troubling is Anthropic CEO Dario Amodei, who worries publicly about models too dangerous to release, then releases them, then raises more money on the strength of the worry. Et tu, Judas! (Chayka quotes JPMorgan’s Michael Cembalest calling Anthropic an arsonist selling fire extinguishers, which is a line I wish I had written.)
The whole industry is caught in this loop now. You need the fear to justify the funding. You need the funding to build the thing. The thing then justifies more fear. Try telling the story in plain, simple language, and that $850 billion valuation will shrink faster than George Costanza’s prospects coming out of an icy cold pool.
April 16, 2026. San Francisco
Hear, hear, Om. This one really spoke to me. It’s almost as if those of us who work in tech need to do a rotation out in the real world for a while, touch grass, and then come back inside to build again. But these aren’t the “building under the stairs” days when we began; it’s hyperbaric now, and the ecology we have built around innovation is even more so. I hope people read, and take heed of, these words.