Even as Apple’s final event of 2020 gradually becomes a speck in the rearview mirror, I can’t help continually thinking about the new M1 chip that debuted there. I am, at heart, an optimist when it comes to technology and its impact on society. And my excitement about the new Apple Silicon is not tied to a single chip, a single computer, or a single company. It is really about the continuing — and even accelerating — shift to the next phase of computing.
The traditional, desktop-centric idea of computing predates so much of what we take for granted in the smartphone era: constant connectivity, the ambient intelligence of software systems, and a growing reliance on computing resources for daily activities, to name a few. Today’s computers are shape-shifters — they are servers in the cloud, laptops in our bags, and phones in our pockets. The power of a desktop from just five years ago is now packed inside a keyboard and costs a mere $50 a pop from Raspberry Pi. Cars and TVs are as much computers as they are anything else.
In this environment, we need our computers to be capable of handling many tasks — and doing so with haste. The emphasis is less on peak performance and more on capabilities. Everyone is heading toward this future, including Intel, AMD, Samsung, Qualcomm, and Huawei. But Apple’s move has been more deliberate, more encompassing, and more daring.
Steve Jobs’s last gambit was challenging the classic notion of the computer, and the M1 is Apple’s latest maneuver. The new chip will first be available in the MacBook Air, the Mac mini, and a lower-end version of the 13-inch MacBook Pro (a loaner version of which I have been trying out over the last three days). To get a better sense of what the company is up to, I recently spoke with three of its leaders: Greg “Joz” Joswiak, senior vice president of Worldwide Marketing; Johny Srouji, senior vice president of Hardware Technologies; and Craig Federighi, senior vice president of Software Engineering.
The conversations shed significant light on the future — and not just of Apple.
But first, what is the M1?
Traditionally, computers have been built from discrete chips. As a system on a chip (SoC), the M1 instead combines many technologies — the central processing unit (CPU), graphics processing unit (GPU), memory, and machine learning hardware among them — into a single integrated circuit. Specifically, the M1 includes:
- An 8-core CPU consisting of four high-performance cores and four high-efficiency cores
- An 8-core integrated GPU
- A 16-core Apple Neural Engine
- A cutting-edge 5-nanometer manufacturing process
- 16 billion transistors packed onto a single chip
- Apple’s latest image signal processor (ISP) for higher-quality video
- A Secure Enclave for security
- An Apple-designed Thunderbolt controller with support for USB 4 and transfer speeds of up to 40Gbps
In a press release, Apple claimed that the “M1 delivers up to 3.5x faster CPU performance, up to 6x faster GPU performance, and up to 15x faster machine learning, all while enabling battery life up to 2x longer than previous-generation Macs.”
The difference between this boast and Apple’s positioning back in the heyday of personal computers could not be more stark. Back then, the symbiotic WinTel relationship — Intel and Microsoft — dominated the scene, relegating Apple to the fringes, where its chips were crafted by fiscally and technologically inferior partners, IBM and Motorola. Its prospects fading, Apple had no choice but to switch to Intel’s processors. And once it did, inch by inch, it began to gain market share.
Jobs learned the hard way that, to stay competitive, Apple had to make and control everything: the software, the hardware, the user experience, and the chips that power it all. He referred to this as “the whole widget.” I’ve previously written about the critical need today’s giants have for vertical integration. Much of it can be summed up in this line from a 2017 piece: “Don’t depend on a third party to be an enabler of your key innovations and capabilities.”
For Apple, the iPhone represented a chance to start afresh. The new journey began with the A-series chips, which first made their way into the iPhone 4 and the first-generation iPad. In the ensuing years, that chip has become beefier, more intelligent, and more capable of complicated tasks. And while it has become a hulk in its capabilities, its appetite for power has remained moderate. That balance of performance and efficiency turned the chip into a game-changer. The latest iteration, the A14 Bionic, now powers the newest generation of iPhones and iPads.
Increasingly, Apple products have been powered by the genius of its ever-growing army of chip wizards — with one notable exception: the device that started it all, the Mac.
Enter the M1.
“Steve used to say that we make the whole widget,” Joswiak told me. “We’ve been making the whole widget for all of our products, from the iPhone, to the iPads, to the watch. This was the final element to making the whole widget on the Mac.”
Why The M1 Matters
- Modern computing is changing. Software is becoming an end-point for data, working through application programming interfaces.
- Chips have become so complex that you need integration and specialization to control power consumption and create better performance.
- Apple’s chip, hardware, and software teams work together to define future systems and to integrate them tightly.
- The future of computing is moving beyond textual interfaces: visual and aural interfaces are key.
- Machine learning will define the capabilities of the software in the future.
The M1 is very much like Apple’s chips inside the iPhone and iPad, except that it is more powerful. It uses Apple’s unified memory architecture (UMA), which means that a single pool of memory (DRAM) sits on the same chip as the various components that need to access it — the CPU, GPU, image signal processor, and neural engine. As a result, those components can work on data without copying it between one another or going through interconnects. This lets them access memory with very low latency and at higher bandwidth. The result is much better performance with less power consumption.
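The difference between the two memory models can be sketched in plain Python. This is an illustrative analogy only, with ordinary Python objects standing in for chip components: in the "discrete" model every hand-off copies the data into the next component's own buffer, while in the "unified" model every component reads and writes one shared pool through views, so a hand-off moves no bytes at all.

```python
# Discrete model: each component owns its buffer, so every hand-off copies bytes.
def discrete_pipeline(frame: bytes) -> bytes:
    cpu_buffer = bytes(frame)       # copy into CPU-owned memory
    gpu_buffer = bytes(cpu_buffer)  # copy across the "bus" into GPU memory
    return bytes(gpu_buffer)        # copy the result back

# Unified model: components share one pool; a hand-off is just a new view.
def unified_pipeline(frame: bytearray) -> memoryview:
    pool = memoryview(frame)        # the single shared pool
    cpu_view = pool                 # the "CPU" accesses the pool directly
    gpu_view = cpu_view             # so does the "GPU": a zero-copy hand-off
    return gpu_view

frame = bytearray(b"\x01" * 1024)
shared = unified_pipeline(frame)
shared[0] = 0xFF                    # a write by one "component"...
assert frame[0] == 0xFF             # ...is immediately visible to the others
```

The key property the sketch captures is the last two lines: with a shared pool, there is no separate copy to keep in sync, which is exactly what saves latency, bandwidth, and power.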
With this new architecture, everything from video conferencing and games to image processing and web browsing should be snappier. And in my experience, it is — at least so far. I have been using a 13-inch M1 MacBook Pro with 8GB of memory and 256GB of storage. Pages load briskly in Safari, and most of the apps optimized for the M1 — Apple calls them “universal apps” — are blazing fast. I have not had much time with the machine, but the initial impression is favorable.
Other analysts are very bullish on Apple’s prospects. In a note to his clients, Arete Research’s Richard Kramer argued that, as the world’s first 5-nanometer PC chip, the M1 is a generation ahead of its x86 rivals. “Apple is producing world-leading specs over x86, and it is doing so at chip prices less than half of the $150-200 paid by PC OEMs, while Apple’s Unified Memory Architecture (UMA) allows it to run with less DRAM and NAND,” Kramer noted. He thinks Apple will introduce two new chips next year, both targeted at higher-end machines, one of them focused on iMacs.
I don’t think AMD and Intel are Apple’s competitors. We should be looking at Qualcomm as the next significant laptop chip supplier. Apple’s M1 is going to spark an interest in new architectures from its rivals.
This approach, which combines integration into a single chip, maximum throughput, rapid access to memory, computing engines matched to the task at hand, and adaptation to machine learning algorithms, is the future — not only for mobile chips, but also for desktop and laptop computers. And this is a big shift, both for Apple and for the PC industry.
The news that the M1 would focus on lower-end machines got some tongues wagging — though, according to Morgan Stanley research, these three machines together represent 91% of trailing-twelve-month Mac shipments.
“It seems like some of these people are people who don’t buy that part of our product line right now and are eager for us to develop silicon to address the part of the product line that they’re most passionate about,” Federighi told me. “You know, their day will come. But for now, the systems we’re building are, in every way I can consider, superior to the ones they’ve replaced.”
The shift to the M-family will take as long as two years. What we are seeing now is likely the first of many variations of the chip that will be used in different types of Apple computers.
This is a big transition for Apple, and it is fraught with risk. It means getting its entire community to switch from the x86 platform to a new chip architecture. A whole generation of software will need to be adapted to the new chip while maintaining backward compatibility. “This is going to take a couple of years, as this is not an overnight transition,” Joswiak cautioned. “We’ve done these big transitions very successfully in the past.”
The most significant of those shifts came in 2005. Hampered by the fading PowerPC ecosystem, the company made the tough decision to switch to the superior Intel ecosystem, moving the Mac OS X operating system to the x86 architecture. The change caused a lot of disruption for both developers and end customers.
Despite some turbulence, Apple had one big asset: Steve Jobs. He kept everyone focused on the bigger prize of a powerful, robust and competitive platform that would give WinTel a run for its money. And he was right.
I transitioned from the older Macs to the OS X-based machines, and after many years of frustration working on underpowered computers, I finally enjoyed my Mac experience. And I am not alone. The move helped Apple stay relevant, especially among the developer and creative communities. Eventually, the normals became part of the Apple ecosystem, largely because of the iPod and the iPhone.
In his most recent keynote, Apple CEO Tim Cook pointed out that one in two new Macs is being bought by a first-time Mac buyer. The Mac business grew by nearly 30% last quarter, and the Mac is having its best year ever. Apple sold over 5.5 million Macs in 2020 and now holds a 7.7 percent share of the market. In truth, many of these buyers probably don’t know, or don’t care, which chip runs their computer.
However, many of those who do care have been conditioned by the multi-billion-dollar marketing budgets of Intel and Windows PC makers to think in terms of gigahertz, memory, and speed. The idea that bigger numbers are a proxy for better quality has become ingrained in how we think about laptops and desktops. This mental model will be a big challenge for Apple.
But Intel and AMD have to talk about gigahertz and power because they are component providers and can only charge more by offering higher specifications. “We are a product company, and we built a beautiful product that has the tight integration of software and silicon,” Srouji boasted. “It’s not about the gigahertz and megahertz, but about what the customers are getting out of it.”
Having previously worked for IBM and Intel, Srouji is a chip industry veteran who now leads Apple’s gargantuan silicon operation. As he sees it, just as no one cares about the clock speed of the chip inside an iPhone, the same will be true for the new Macs of the future. Rather, it will all be about how “many tasks you can finish on a single battery life.” Instead of a one-size-fits-all chip, Srouji said the M1 is a chip “for the best use of our product, and tightly integrated with the software.” [Additional Reading: Is it time to SoC the CPU: The M1 & Apple’s approach to chips vs. Intel & AMD]
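Srouji's "tasks per battery life" framing can be made concrete with a back-of-the-envelope calculation. The numbers below are invented for illustration; the point is only that energy per task is power draw times time taken, so a chip with a slower clock can still finish more work per charge if it sips power.

```python
# Back-of-the-envelope sketch with made-up numbers: why "tasks per charge"
# can favor a lower-clocked but more efficient chip.

def tasks_per_charge(battery_wh: float, chip_watts: float,
                     seconds_per_task: float) -> float:
    """How many tasks one battery charge can fund."""
    joules_per_task = chip_watts * seconds_per_task
    battery_joules = battery_wh * 3600  # 1 watt-hour = 3600 joules
    return battery_joules / joules_per_task

# Hypothetical chips: a hot, fast part vs. a cooler, slightly slower SoC.
fast_hot = tasks_per_charge(battery_wh=50, chip_watts=28, seconds_per_task=10)
cool_soc = tasks_per_charge(battery_wh=50, chip_watts=10, seconds_per_task=12)

# The slower chip takes 20% longer per task but uses far less power,
# so it completes more than twice as many tasks on the same battery.
assert cool_soc > 2 * fast_hot
```

That arithmetic is why a spec sheet built around clock speed says so little about what a user actually experiences over a day of work.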
“I believe the Apple model is unique and the best model,” he said. “We’re developing a custom silicon that is perfectly fit for the product and how the software will use it. When we design our chips, which are like three or four years ahead of time, Craig and I are sitting in the same room defining what we want to deliver, and then we work hand in hand. You cannot do this as an Intel or AMD or anyone else.”
According to Federighi, integration and these specialized execution engines are a long-term trend. “It is difficult to put more transistors on a piece of silicon. It starts to be more important to integrate more of those components closely together and to build purpose-built silicon to solve the specific problems for a system.” M1 is built with 16 billion transistors, while its notebook competitors — AMD (Zen 3 APU) and Intel (Tiger Lake) — are built using about ten billion transistors per chip.
“Being in a position for us to define together the right chip to build the computer we want to build and then build that exact chip at scale is a profound thing,” Federighi said about the symbiotic relationship between hardware and software groups at Apple. Both teams strive to look three years into the future and see what the systems of tomorrow look like. Then they build software and hardware for that future.
The M1 chip can’t be viewed in isolation. It is a silicon-level manifestation of what is happening across computing, especially in the software layer. In large part due to mobile devices, which are always connected, computers now must start up instantaneously, letting the user look, interact, and move on. These devices are low-latency and efficient. There is a higher emphasis on privacy and data protection. They can’t have fans, run hot, make noise, or run out of power. These expectations are now universal, and the software has had to evolve along with them.
The desktop environment is the last domain to fall. One of the defining aspects of traditional desktop computing is the file system — in which all of your software shares a storage area that the user tries to keep organized (for better or for worse). That worked in a world where software and its functions operated on a human scale. We now live in a world that is wired and moves at network scale. This new computing reality needs modern software — the kind we see and use on our phones every day. And while none of these changes will happen tomorrow, the snowball is rolling down the mountain.
The traditional model is an app or program that sits on a hard drive and is run when the user wants to use it. We are moving to a model where apps have many entry points. They provide data for consumption elsewhere and everywhere. They respond to notifications and events related to what the user is doing or where they are located.
Modern software has many entry points. Look at recent mobile OS changes and you can see the emergence of new approaches such as App Clips and Widgets. They are slowly going to reshape what we think of as an app, and what we expect from one. What they show is that apps are two-way end-points — application programming interfaces — reacting to data in real time. Our apps are becoming more personal and smarter as we use them; our interactions define their capabilities. They are always learning.
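The shift from "an app is a program you launch" to "an app is a set of entry points over shared state" can be sketched in a few lines. This is a conceptual sketch with invented names, not any real mobile API: one small object is reachable through a full launch, a background notification, and a glanceable widget refresh, all reading and writing the same state, the way App Clips and Widgets expose slices of an app without a full launch.

```python
# Conceptual sketch (all names invented): one app, several entry points,
# one shared pool of state behind them.

class MiniApp:
    def __init__(self) -> None:
        self.state = {"unread": 0}

    def on_launch(self) -> str:
        """Classic entry point: the user opens the full UI."""
        return f"full UI, {self.state['unread']} unread"

    def on_notification(self, count: int) -> None:
        """Background entry point: an event updates state, no UI launched."""
        self.state["unread"] += count

    def widget_snapshot(self) -> str:
        """Glanceable entry point: a widget renders from the same state."""
        return f"{self.state['unread']} unread"

app = MiniApp()
app.on_notification(3)   # a push event arrives while the app is "closed"
assert app.widget_snapshot() == "3 unread"
assert app.on_launch() == "full UI, 3 unread"
```

The design choice worth noticing is that no entry point owns the data; each is just a different doorway into the same state, which is what makes an app behave like an API.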
As Apple merges the desktop, tablet, and phone operating systems into a different kind of layer supported by a singular chip architecture across its entire product line-up, traditional metrics of performance aren’t going to cut it.
“The specs that are typically bandied about in the industry have stopped being a good predictor of actual task-level performance for a long time,” Federighi said. You don’t worry about the CPU specs; instead, you think about the job. “Architecturally, how many streams of 4k or 8k video can you process simultaneously while performing certain effects? That is the question video professionals want an answer to. No spec on the chip is going to answer that question for them.”
Srouji points out that, while the new chip is optimized for compactness and efficiency, it can still achieve a lot more than traditional designs. Take the GPU, for example. The most critical shift in computing has been the move away from text-dominated computing to visual-centric computing. Whether it is Zoom calls, Netflix, or editing photos and video clips, video and image processing have become integral to our computing experience. That is why the GPU is as essential as any other part of the computer. Intel, for example, offers integrated graphics with its chips, but they are not as capable — and the more powerful discrete GPUs must use a PCIe interface to interact with the rest of the machine.
By building a higher-end integrated graphics engine and marrying it to the faster and more capable unified memory architecture, Apple’s M1 can do more than even machines that use discrete GPU chips, which keep their own specialized memory on top of the computer’s main memory.
Why does this matter?
Modern graphics are no longer about rendering triangles on a chip; they are a complex interplay between various parts of the computer’s innards. Data needs to be shunted between the video decoder, the image signal processor, and the render, compute, and rasterization stages — all at rapid speeds. That means a lot of data is moving.
“If it’s a discrete GPU, you’re moving data back and forth across the system bus,” Federighi points out. “And that starts to dominate performance.” This is why you see computers get hot, fans behave like turbochargers, and a demand for more memory and more powerful chips. The M1 — at least in theory — uses the UMA to eliminate the need to move that data back and forth. On top of that, Apple has an optimized approach to rendering that draws multiple tiles in parallel, which has allowed the company to strip complexity out of the video systems.
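Federighi's point about bus traffic can be illustrated with a toy accounting exercise — not a benchmark, just a count of bytes moved. The stage names below are a simplification of a real video pipeline: in the discrete design, every hand-off between stages copies the frame "across the bus" into the next component's memory; in the unified design, stages pass views of one shared pool, so nothing is copied at all.

```python
# Toy model: count how many bytes each design moves as one frame
# passes through decode -> image-process -> render.

def run_discrete(frame: bytes) -> int:
    """Discrete GPU: every stage hand-off copies the frame over the 'bus'."""
    moved = 0
    for _stage in ("decode", "image_process", "render"):
        frame = bytes(frame)   # copy into the next component's own memory
        moved += len(frame)
    return moved

def run_unified(frame: bytearray) -> int:
    """Unified memory: stages pass a view of one shared pool; nothing moves."""
    view = memoryview(frame)
    for _stage in ("decode", "image_process", "render"):
        view = view[:]         # a new view of the same bytes, not a new copy
    return 0

frame = bytes(4096)            # one 4 KB "frame"
assert run_discrete(frame) == 3 * 4096   # three hand-offs, three full copies
assert run_unified(bytearray(frame)) == 0
```

Scale the 4 KB frame up to dozens of 4K video streams per second and the copied bytes in the first design are exactly the traffic that, in Federighi's words, "starts to dominate performance."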
“Most of the processing once upon a time was done on the CPU,” Srouji said. “Now, there is lots of processing done on the CPU, the graphics and the Neural Engine, and the image signal processor.”
Things are only going to keep changing. Machine learning, for example, is going to play a bigger role in our future, which is why neural engines need to evolve and keep up. Apple has its own algorithms, and its hardware needs to grow alongside them.
Similarly, voice interfaces are going to become a dominant part of our computing landscape. A chip like the M1 allows Apple to use its hardware capabilities to overcome Siri’s many limitations and position it to compare favorably to Amazon’s Alexa and Google Home. I have noticed that Siri feels a lot more accurate on the M1-based machine I am currently using as a loaner.
At a human level, all of this means that your system will be awake the moment you flip open the screen. Your computer won’t burn your lap during Zoom calls. And the battery won’t run out in the middle of a call with mom.
It’s amazing what goes into making these small-seeming changes that, without many of us even realizing it, will transform our lives.