How AI Is Changing the Network(s)

As is always the case, this started with a simple question: Will AI change how networks work? Will it impact the speeds we need at home and on our phones?

My assumption was that AI would accelerate this — personal AI agents querying the cloud all day, your house talking constantly to a model (or models). A lot of this is still wishful thinking.

My attempt to find an answer led me down a whole new path of inquiry, with surprising results. The real action is happening far from the madding consumer crowds. Perhaps I shouldn’t have been surprised, considering I have covered the evolution of the internet and its innards since the early 1990s.

Internet 1.0, Internet 2.0, the cloud, mobile, data and machine learning, and now AI are all part of a continuum that has challenged and scaled the network, helped evolve new technologies, and introduced new ways


Say Hello to the Internet of AI

Every so often, I would notice that our upstream bandwidth consumption was going up. Average upload usage is growing 21.7% year over year, more than twice the rate of downstream growth. The network is finally tilting toward something symmetrical, after thirty years of being optimized to deliver television to couches. Every new piece of data from OpenVault made me wonder how AI would change the consumer internet. And as an old networking nerd, what really occupied my mind was how AI would impact the network itself.
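A quick back-of-the-envelope check on what that growth rate means, assuming it keeps compounding steadily (which the OpenVault data does not guarantee):

```python
import math

# Doubling time implied by 21.7% year-over-year upstream growth.
growth_rate = 0.217
doubling_years = math.log(2) / math.log(1 + growth_rate)
print(f"Upload usage doubles every {doubling_years:.1f} years")
```

At that pace, upstream usage doubles in roughly three and a half years, which is why the old download-first assumptions behind the network are starting to look dated.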

My assumption was that AI would accelerate this. Personal AI agents querying the cloud all day. Smart-home devices streaming sensor data. Wearables, cameras, robots, and eventually cars: every endpoint a continuous source of upload traffic. The next bandwidth hog wouldn’t be Netflix in reverse. It would be your house, talking constantly to a model (or models).

A lot of this is still wishful thinking.


What to read this weekend

First, a short apology. I was unable to send the newsletter last weekend. Life and sniffles got in the way — OM

As has been the case lately, I have been writing a bit too much about AI and its two most visible exemplars, Anthropic and OpenAI, either on their own or as counterpoints to each other.

OpenAI seems to be making news for all the wrong reasons, while Anthropic is slowly transforming into the boy who cried wolf. Either way, even my online homestead is not an AI-free zone.

This week I tried to explain the crazy spending by hyperscalers, and how they are actually benefiting from the circular economy of AI. And I dug into Microsoft’s 10-Q to figure out why it was okay to set OpenAI free from its exclusivity clause.

I enjoyed writing about how Apple’s chip design decisions made over half a


What Microsoft’s 10-Q Says About OpenAI

Buried on page nine of Microsoft’s 10-Q for the quarter ended March 31, 2026, is a paragraph worthy of attention. Why? What does it reveal? A lot.

For starters, Microsoft now holds approximately 27 percent of OpenAI on an as-converted basis, accounted for under the equity method. The total funding commitment is $13 billion, of which $11.8 billion has been funded as of March 31, 2026. The October 2025 OpenAI recapitalization produced a dilution gain. Microsoft recorded $5.9 billion of net gains from OpenAI investments over the nine months, primarily from that dilution gain. The prior nine-month period reflected $2.7 billion of net losses on the same investment.

In plain English, even though Microsoft owns less of OpenAI, that smaller stake is worth more, and it produced a gain. Why? Because the implied valuation of OpenAI rose faster than Microsoft’s ownership percentage fell. Microsoft booked the markup. Money for nothing,
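To see why a shrinking stake can produce a gain, here is a minimal sketch of the equity-method arithmetic. Only the 27 percent figure comes from the filing; the 32 percent, $50B, and $40B inputs below are made up for illustration.

```python
# Minimal sketch of an equity-method dilution gain. All figures are
# hypothetical; they are not Microsoft's or OpenAI's actual numbers.

def dilution_gain(old_pct, new_pct, equity_before, new_capital):
    """Change in the investor's share of the investee's net assets
    after a financing round that dilutes the investor's stake."""
    share_before = old_pct * equity_before
    # Capital raised from outside investors grows total equity
    # even as it shrinks the existing investor's percentage.
    share_after = new_pct * (equity_before + new_capital)
    return share_after - share_before

# A 32% stake in $50B of net assets, diluted to 27% by a $40B raise:
gain = dilution_gain(0.32, 0.27, 50e9, 40e9)
print(f"Dilution gain: ${gain / 1e9:.1f}B")
```

As long as the investee’s total equity grows faster than the investor’s percentage falls, the product grows, and under the equity method that markup flows through the income statement as a gain.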


What I Learned about Hyperscalers’ AI Spend

The four biggest hyperscalers reported earnings this week. Microsoft, Meta, Amazon, and Alphabet collectively told investors they will spend roughly $700 billion on capital expenditures in 2026. That is nearly double what they spent in 2025. Three of the four raised capex guidance during this week of reporting. Only Amazon held its number, and only because it had already published a $200 billion forecast in February 2026. Some of the bump in capex is coming from rising component prices. Microsoft said roughly $25 billion of its $190 billion 2026 capex is component price inflation.

The rest of us measure inflation by what gas costs at the pump. The hyperscalers measure it in billions of dollars of chip and component price increases.

Psst. Did you know that the visible capex line tells only part of the story?

There is a reason no one wants to talk about forward commitments, or about lease obligations


With AI, the Headline Isn’t the Story

Oh boy. Over the past few days, an article has been doing the rounds as a testimonial to the ludicrous gap between AI and human costs. From that piece, one specific quote (originally from another article) has gone viral and is now under scrutiny all over the internet. Fortune, Tom’s Hardware, TechSpot, Futurism, Yahoo Finance. Reddit. Twitter. LinkedIn. The whole AI-and-jobs news cycle this week is resting on this one comment.

“For my team, the cost of compute is far beyond the costs of the employees,” Bryan Catanzaro, Nvidia’s vice president of applied deep learning, told Axios.

Catanzaro runs the team doing advanced research to make advanced models that work on advanced chips. So of course his compute bill is going to be bigger than his payroll. It’s the most obvious place where compute costs more. Anyone surprised by that hasn’t read the org chart.

Most companies are not building frontier AI.


What’s Wrong in My Thinking about Errors?

After my previous post about why we accept human errors but are harsher on machines, two longtime readers and pillars of the resilience engineering community reached out to point out the error of my ways.

Courtney Nash of the Resilience in Software Foundation wrote a long response, pointing to errors in my thinking and framing. John Allspaw, the former CTO of Etsy who also worked at Flickr in the Yahoo years and at Friendster, made the same point in an email to me.

“Your premise isn’t an accurate one, and the research you’re citing to support your argument actually undermines it.”

I have yet to talk with John, but Nash’s response has me revising my thinking and reading more before I revisit the conversation. In my own defense, I should have started with the caveat that I lack expertise in some of the topics I was wading into. If you


Gigabit First Nation by 2030

I have been writing about the growing need for uploads and how they are redefining the broadband landscape — first in We Are Now An Upload (Broadband) Nation and then in a quick follow-up.

As a follow-up to those pieces, today I wanted to highlight research from Omdia, which predicts that the share of residential households subscribing to gigabit broadband will nearly double, from about one-third in 2025 to 60% by 2030. That is a majority of American homes on gigabit pipes within five years.
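That projection implies a fairly brisk compounding rate. A quick sketch, treating “about one-third” as exactly 1/3:

```python
# Implied annual growth in gigabit adoption, from roughly one-third
# of households in 2025 to 60% in 2030 (five years of compounding).
share_2025 = 1 / 3
share_2030 = 0.60
years = 5
annual_growth = (share_2030 / share_2025) ** (1 / years) - 1
print(f"Implied adoption growth: {annual_growth:.1%} per year")
```

Roughly 12.5% a year, every year, for five years, which is the kind of steady shift that quietly rewires what networks are built for.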

The implications are straightforward. More gigabit subscribers mean more headroom for upstream usage to grow. The OpenVault data we have been watching already shows that fiber subscribers use 87% more upstream bandwidth than DOCSIS subscribers on the same systems. I will repeat Om’s Axiom: Give people the pipe and the speeds, and the behavior follows.

April 27, 2026


Memory Is the Machine

It is late April 2026. If you want a Mac right now, you cannot simply walk into an Apple Store and pick the one you want.

A Mac mini with 64GB of RAM, ordered today, ships in sixteen to eighteen weeks. A Mac Studio with 256GB of RAM ships in four to five months. The 128GB and 256GB Mac Studio configurations are listed as “currently unavailable” on Apple’s online store. Apple removed the 512GB Mac Studio option entirely earlier this year. As of last week, even the base $599 Mac mini is sold out.

Have you wondered why?

The easy answers include a global memory shortage, thanks to the AI boom, and the fact that Apple makes devices well suited to AI work.

Both are true. And yet, that is not the whole story.

For instance, if you want a maxed-out M5 Max MacBook Pro with 128GB of RAM and