How AI Goes to Work

(Photo: Scrabble tiles spelling Gemini and ChatGPT. Photo by Markus Winkler on Pexels.com)

Sometimes it takes a single piece of a jigsaw puzzle for the whole picture to take shape. That happened earlier this weekend: I was able to crystallize a lot of what I have been thinking about the technologies colloquially known as “Artificial Intelligence.” And it happened because of a 40-year-old piece of software that is used by tens of millions of people every day.

I’ve been writing about artificial intelligence for nearly a decade. In 2016, I argued in The New Yorker that we should think of it as augmented intelligence. Software that helps us deal with a world outpacing our capacity to process it. In 2022, I made the same case in The Spectator. The argument never changed. The world just took a while to catch up.

This week, I downloaded Claude for Excel and started working on a spreadsheet tracking my spending over the past twelve months. I took data from my credit cards and my bank accounts. I wanted to understand my spending patterns, and how I could budget smarter. And in doing so, something clicked. Under my breath, I said it: Finally!

I am no spreadsheet jockey. Never have been. I’ve always leaned on colleagues and friends to help me build models, check assumptions, and validate my cockamamie ideas. Now I work mostly by myself, so I no longer have the luxury of colleagues. My friends are busy. I am back to learning, adapting, and becoming more self-reliant. Yes, that includes getting intimate with Excel. Oh god!

After an hour of mucking about with Claude for Excel, it became clear that something was different. I wasn’t spending my time crafting elaborate prompts. I was just working. The intelligence was just hovering, ready to help. Right there, inside the workflow, simply augmenting what I was doing.

With all that data in place, I asked Claude to analyze it and come up with a model for smart spending in 2026. I also had my portfolio details, and Claude has access to the latest market data from multiple sources. An hour or so later, I had methodically arrived at a working model. It was not the perfect model, but it was good enough for me to make common-sense decisions.

Again, finally!
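For those curious what that kind of analysis amounts to under the hood, here is a minimal sketch in Python. It assumes a hypothetical transactions.csv exported from bank and credit card statements, with date, category, and amount columns; the column names, the categories, and the crude 10 percent trim are my illustration, not what Claude actually does inside Excel.

```python
# A minimal sketch of spending analysis, assuming a hypothetical transactions.csv
# with columns: date, category, amount (positive numbers for money spent).
import pandas as pd

df = pd.read_csv("transactions.csv", parse_dates=["date"])
df["month"] = df["date"].dt.to_period("M")

# Total spend per category per month, then the average monthly spend per category.
monthly = df.groupby(["category", "month"])["amount"].sum()
baseline = monthly.groupby("category").mean()

# A crude "smart spending" model: trim discretionary categories by 10 percent,
# leave everything else at its twelve-month baseline.
discretionary = {"dining", "travel", "shopping"}
targets = baseline.copy()
for category in baseline.index:
    if category in discretionary:
        targets[category] = baseline[category] * 0.9

summary = pd.DataFrame({"monthly_baseline": baseline, "monthly_target": targets})
print(summary.round(2).sort_values("monthly_baseline", ascending=False))
```

The point is not the arithmetic, which is trivial. The point is that the back-and-forth of framing it, checking it, and adjusting it used to require a colleague who knew spreadsheets.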

This is the easy on-ramp to the big bad scary world of AI, which will upend (and arguably is already upending) our information society. What Claude did inside Excel, I have already experienced as a photographer. I use tools such as Adobe Photoshop and Topaz AI for editing my photos. These two have been a playground for what one could describe as “introductory AI.” They both started out pretty bad, but with every new release they have improved a lot.

Adobe has come under criticism for being the platform for generating artificial photos. That is a creative and moral debate, best left for another time and to someone else. What was easy for me to see is how both Adobe and Topaz have improved my everyday experience. I used to carry multiple lenses when I went on a photo adventure. Now I rarely take anything more than my fixed-lens Leica Q3 43. It is a 61-megapixel camera that produces big, beefy files that can be easily upscaled for cropping by Topaz. To some that might be an issue; I don’t see a problem with it. It is just AI at work.

Removing dust, expanding photos, upscaling resolution, intelligent masking, and fixing blemishes are all part of my workflow. These are tasks that took hours five years ago. Now, a handful of clicks. I am working smarter and faster. It is all AI, but I never think of it as “using AI.” It is just better software that allows me to do what I used to do, faster and better. It has also allowed me to think even more creatively. Claude for Excel felt the same way.

We’ve spent the past few years obsessing over frontier models. With every new release, the differences between the models from Anthropic, OpenAI, and Google’s Gemini become harder to distinguish. They are all fairly capable. Any edge one has over the others is relatively short-lived. Whose LLM is larger, faster, or more capable is really beside the point. In the end, it all boils down to what we can actually do with all that power.

This new power should turbocharge our capabilities. And for that to happen, it has to live inside the tools we already use. Smaller, focused, embedded. It might not feel sexy, but when it shows up in spreadsheets, document editors, and email, the ordinary software of daily work, it becomes something we quickly get used to, and then depend on. I’ve long been calling it augmented intelligence. Maybe it’s time to redefine it as “embedded intelligence.”


The history of technology offers great lessons and a rough road map for navigating an unseen future. I have been looking at how software in general has evolved. While I am talking about software used for “business,” it holds true for non-business software as well.

Before Software as a Service (SaaS) became the dominant form of delivery, software came as a bundle, first on disks, then as downloads, to run on servers. These servers first ran inside the company’s four walls, and eventually inside a data center. These were one-size-fits-all software packages. As a company, you had developers customize the software to fit your specific corporate needs. You created a customized workflow. This was expensive, clunky, and slow.

But since everyone was moving away from the even more expensive and inflexible world of mainframe computing, no one minded. The 1990s “client-server boom” made software companies incredibly hot. Computing was moving from centralized mainframes to distributed systems and personal computers.

A decade or so later came SaaS. It delivered the same software via the browser and charged a monthly fee based on the number of people using it. (To be clear, I am speaking colloquially.) This soon evolved into SaaS for anything and everything: specialized tools for every function. Different industries needed different things, so suddenly we had many different kinds of the “same software.” This was (and still is) the “let many workflows bloom” era. That era has run its course.

Fast-forward to today. We are in the next phase of “software.” With AI, we will soon live in a world where workflows are customized not just to corporate needs but to personal ones as well. There is a common belief that it will all happen inside the chatbots. Maybe, but don’t hold your breath. What is more likely, at least in the near term, is what I experienced with Claude for Excel.

There was no need to remake the platform (Excel) or write any custom code. I didn’t have to learn yet another tool or a new interface. AI showed up inside the tool I was already using. It allowed me to just adopt it. And adapt to it. Without much friction.

When I thought about what made this specific piece of software, and what Topaz AI does for me, different, two things stood out. First, they are trained for very specific tasks. And second, they tap into sources of knowledge that keep helping me do the job better. Topaz AI, for example, updates every so often as the models improve. It is learning from actual users, new technological approaches keep coming to the fore, and computing keeps getting more capable. As a result, the improvements being offered to me are constant.

Claude for Excel does something similar. I always have access to the most updated version of Claude, but restricted to the spreadsheet. It knows every tab, every formula, every connection between cells. Structure, not just content. It is not a chatbot that sees pasted text.
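To make “structure, not just content” a little more concrete, here is a toy sketch of what structure-aware access to a workbook looks like, written in Python with the openpyxl library. This is purely my own illustration under assumed file names, not anything Anthropic has published about how Claude for Excel is built; the point is simply that a workbook exposes tabs, formulas, and the references between cells, not just pasted-in numbers.

```python
# Toy illustration: reading a workbook's structure (tabs, formulas, references),
# not just its values. Assumes a hypothetical budget.xlsx on disk.
from openpyxl import load_workbook

wb = load_workbook("budget.xlsx", data_only=False)  # data_only=False keeps formulas as text

for sheet in wb.worksheets:
    print(f"Tab: {sheet.title} ({sheet.max_row} rows x {sheet.max_column} columns)")
    for row in sheet.iter_rows():
        for cell in row:
            # Formula cells are stored as strings starting with "=", which is
            # where the connections between cells and tabs become visible.
            if isinstance(cell.value, str) and cell.value.startswith("="):
                print(f"  {sheet.title}!{cell.coordinate} = {cell.value}")
```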

The other interesting aspect is that Anthropic has been signing deals with data providers that people in finance actually use. Stock market data, PitchBook data, and other real data sources. This is not data scraped from websites; it is verifiable information from sources that can be relied on for work. That is the key differentiator for “AI” at work.

Google, too, rolled Gemini into every tier of Workspace earlier this year. Is it very good? I don’t think so. Will it get good? I would bet good money on it. And since I already pay for Workspace and it is not a separate product, I don’t mind using it. Do I like the intrusion? Not really. But it is subtle enough for me to ignore when I don’t want to deal with it. (Also: How to turn off AI Summaries in Gmail.)

My friend Adam Bly’s System does the same for the doctors at Mayo Clinic. It is not making doctors go to a chatbot and ask questions. Instead it is meeting doctors where they are. DeepMind’s spinoff Isomorphic Labs embeds intelligence into the drug discovery workflow. It is woven into existing work, not bolted on top. Embedded.


AI’s progression is following the arc of any transformative technology. Every new technology prompts us to ask three questions. The first question is existential. The second is epistemological. The third is practical, and that is when the technology becomes invisible. Let’s use Uber as an example.

The existential question: should we get into strangers’ cars? The epistemological question: is this better than taxis, or just regulatory arbitrage? Today, we simply open the app, order a ride, and don’t even think about it. The question is settled. It is part of our lives. It is no longer the “Gig Economy” or “Ride Share.”

I saw it with cloud computing. As someone who was there at the genesis of the “cloud revolution,” the most common question I heard at my debut Structure conference was “should we put data on someone else’s servers?” Then it was “is this more reliable than our own data centers, or should we build our own cloud?” In the early days, only a few use cases for the cloud emerged, Dropbox and Netflix, for example. With the emergence of the app store, suddenly everything needed the cloud and what it made possible. Fast-forward to now, and the cloud is just part of the technology infrastructure. We are seeing the same thing with self-driving cars (essentially AI applied to the mobility and robotics workflow).

AI, from my vantage point, is slowly entering that all-important third phase. It is not there yet, but it sure is knocking on the door. The question will no longer be whether to use it, but when it becomes invisible and part of everyday life.

The timing of every new technology follows a similar arc. Silicon Valley turns out to be ahead by a few years, about seven or so. This prompts everyone outside of “Babylon” to think of every new trend as a bubble. A lot of it is indeed simply nonsense: on-demand parking and scooters, for example. But it is part of experimentation. Seven years or so after they launched, the questions about Uber and Amazon Web Services started to fade into the background. Normies started using rideshare services.

If you want to see where we are going, look no further than the most talked-about new thing, OpenClaw, an AI personal assistant previously known as Clawdbot. You use it via chat, and it sits on your local computer. You give OpenClaw access to a lot of your applications and services (it is a big privacy and security nightmare), and it watches your conversations, learns context, and handles tasks when asked. Scheduling, summarizing, pulling information, coordinating across tools. The stuff that used to eat up hours of your day gets done, without the drudgery. Others are using it in completely weird ways, like figuring out how to tweet what they are doing on Polymarket, for example.

As an old timer, I am reminded of Yahoo Pipes, which let people dream of an interconnected web. So did services like IFTTT (If This Then That). Apple’s Shortcuts remains the big missed opportunity. OpenClaw is on that curve. Security problems or not, it is showing people what is possible. No wonder it got popular fast. Don’t look at the project as a standalone; think of it as a sign of things to come.

It is so hot that when there was an OpenClaw meet-up in San Francisco earlier this month, the line went around the block. It had a raw energy we have not seen in these parts for a long time. Is it there yet? No. But read the tea leaves. Longtime Mac developer Rui Carmo explains why on his blog: it is “worth a look because it’s aiming at the right interface: chat is where people already are, and turning that into a command line for personal admin is a sensible direction.”

You can describe it as a crew of “agents” doing all the work to make life easier for you. Or you can call them assistants. Or API calls. It doesn’t matter. What OpenClaw shows is how AI will work in the background. And that is what the “AI” future looks like for normal people. Not a separate AI app, but intelligence woven into the tools you already use, doing work you used to do yourself, or used to hire someone to do, now done by software. It feels very much akin to using widgets in 2005 and apps in 2010. A new way to do things.

I don’t want to go all rah-rah about this stuff. I am not naive about what comes next. The grunt work was the training. If the grunt work goes away, how do young people learn? Doing it was how they learned how everything worked. The more models you build, the more intuitively you understand what is right or wrong with a model. Reliance on automation makes people lose their instincts. Just look at how people blindly follow the directions on “Maps.” I often think about “What will be left for the human?” I don’t have an answer. If you do, leave a comment.


I have arrived at my point of view because for decades I have been watching a continuous and endless explosion of data. The growing number of machines and sensors. Chips inside everything. The inevitable digitization of everything. With the arrival of always-on smartphones and ever-faster networks, it was clear that our ability to wrangle information at human scale was over.

Now we live in a new world where everything moves at the speed of the network, and that speed keeps increasing. Data and machine learning were struggling to keep up, just as we were unable to make sense of it all. And for that, we desperately needed new ways of interacting with information.

The Silicon Valley hype machine has branded it artificial intelligence. But my more pragmatic way of seeing the world starts from the obvious need for this collection of technologies. I don’t see impending doom, just as I don’t see an endless boom. Bubble or not, we are shifting gears into this world of information interaction.

Explaining it to myself more simply as “embedded intelligence” lets me step away from the headlines and look at the present and the future in more realistic terms. My bet is that in five years, it will all be very different anyway. It always is. I am a believer in the power of silicon. When we have newer, more capable silicon and faster networks, we will end up with ever more capable computers in our hands. And the future will change.

For now, what I call embedded intelligence is a sensible on-ramp to the future. The hype may be about the frontier models. The disruption really is in the workflow.

February 6, 2026

6 thoughts on this post

  1. Thanks Om! You wrote another treasure that got me thinking this morning! Learning how to compute, calculate and write in my early years laid down a durable mental fabric. That fabric is the blanket that gave me a surplus today, not to mention an identity. I’m struggling to understand what happens when society separates the link between (early, formative) scarcity and (mature, stable) abundance. Delaying gratification is hard. How will the future define surpluses that aren’t created by learning to compute, calculate and write? I hope it’s not like the way society separated calories from nutrition opening up population-level obesity, which is really a form of starvation.

    1. Simon

      Thank you so much for sharing your feedback. Your comment about the separation of calories from nutrition, and equating its impact with the impact of AI, resonates deeply. I think this is the best analogy I have heard, and thank you for sharing it with me.

      As for the rest of your comment, we are going to have to wait and see how it plays out. Every generation writes its own script. I am going to observe and see how it pans out, that is, if I am around long enough to see it unfold.

  2. “What will be left for the human?” asked Om.

    “What even is a human anymore?” responded the human.

    Time is a flat circle. We will just blissfully return to sitting around the Lyceum and Le Procope, wrestling with timeless existential questions, while our agents toil away and usher in an era of incomprehensible wealth. (Or so our overlords tell us.)

    1. I would argue that when we don’t have purpose and things to do, we become who we already are: a species divided.
