In many ways, Tesla — Elon Musk’s lightning rod of a car company — is the perfect allegory for modern Silicon Valley. The ongoing psychodrama of personalities drowns out the amazing technical achievements that are happening all around us. While the latter is what will ultimately determine our future, it’s the soap opera most of us can’t turn away from.
As usual, this has been a real “Dr. Jekyll and Mr. Hyde” kind of week for Tesla. It had a disastrous earnings report, and Elon keeps generating all the wrong sorts of headlines. But in the middle of this maelstrom, the company announced a new chip that will eventually become the brain of its electric cars. This is not just any chip: it will make sense of a growing number of sensors that allow the car to become better and better at assisted (if not fully automated) driving.
As many of you know, I don’t drive, and I often dream of owning a self-driving car. So, I have a vested interest in this. That (combined with good sense) is why I tend to ignore all the drama and focus on Tesla’s technology-related news. For the past few years, Elon has been making big claims about Tesla and fully autonomous vehicles, and I want to see if he’s going to make good on his word. Needless to say, I tracked the coverage of the company’s new chip very closely.
The chip news also interests me because recent history has shown that success comes from being vertically integrated. Apple became successful by taking control of its destiny and building its own chips and other components, relying less on outside suppliers, who tend to slow things down. (See Apple’s botched bet on Intel.)
Tesla’s desire to build its own systems isn’t any different, especially since it is going to be facing more intense competition from more cash-rich rivals, such as Porsche, Audi, and BMW, who can afford to lock up component supplies.
If we are to believe that the car is the computer, it makes perfect sense for a company like Tesla to bet on its own silicon. Tesla’s chip chief, Peter Bannon, who was previously at Apple, talked about Tesla’s full self-driving module and even elicited a response from Nvidia, one of the key players in the market for chips focused on neural networks. Nvidia’s comment in a nutshell: We already do this and are developing even more powerful modules. They have to — that’s their business.
Tesla’s module is based on two AI chips — each one made of a CPU, a GPU, and deep learning accelerators. The module can deliver 144 trillion operations per second, making it capable of processing data from numerous sensors and other sources and running deep neural network algorithms. Ian Riches, an analyst with Strategy Analytics, told EE Times that this is “effectively the most powerful computer yet fitted to a production vehicle.” And Tesla is going to make a next-generation module that will be more powerful and will consume a lot less power.
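To get a feel for what that headline number implies, here’s a back-of-envelope sketch. The split across two chips follows from the module description above; the camera count and frame rate are my own illustrative assumptions, not Tesla specifications:

```python
# Rough arithmetic on the stated 144 trillion operations/second (TOPS).
# Per-chip split follows from the two-chip module; camera count and
# frame rate below are illustrative assumptions, not Tesla specs.

module_tops = 144            # trillion operations per second, whole module
chips_per_module = 2
tops_per_chip = module_tops / chips_per_module
print(f"Per chip: {tops_per_chip:.0f} TOPS")

# Headroom per camera frame, assuming eight cameras at 30 frames/second:
cameras, fps = 8, 30
ops_per_frame = (module_tops * 1e12) / (cameras * fps)
print(f"Roughly {ops_per_frame:.1e} operations available per camera frame")
```

Even with generous assumptions about sensor load, that leaves an enormous compute budget per frame for running deep neural networks.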
James Hamilton, VP and Distinguished Engineer at Amazon Web Services and one of my favorite thinkers and technologists, did a breakdown of the news and succinctly pointed out why this chip matters and what Tesla did well. “What I found most interesting is what they did to achieve the performance they needed with a relatively low performance, low-cost, and low-power memory subsystem,” he said. With investment and time, the computer engines could improve for years to come. The big question, however, is this: Will Tesla, which is running low on cash these days, be able to pay for all the development in the future?
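Hamilton’s point about the memory subsystem can be sketched with a simplified roofline-style calculation. The numbers below are purely illustrative assumptions of mine (not Tesla’s actual specifications), but they show why reusing data on-chip lets a design hit high compute utilization with cheap, low-power memory:

```python
# Simplified roofline-style check: how many operations must be performed
# per byte fetched from memory for the chip to stay compute-bound rather
# than memory-bound. All numbers are illustrative assumptions, not specs.

peak_tops = 72               # assumed per-chip compute, trillion ops/second
mem_bandwidth_gbs = 64       # assumed low-cost DRAM bandwidth, GB/second

ops_per_byte_needed = (peak_tops * 1e12) / (mem_bandwidth_gbs * 1e9)
print(f"Need about {ops_per_byte_needed:.0f} operations per byte from DRAM")

# A workload below that arithmetic intensity is bandwidth-bound. Keeping
# weights and activations in on-chip SRAM, so each byte fetched from DRAM
# is reused many times, is the standard way to clear that bar with a
# low-cost, low-power memory subsystem.
```

The design choice Hamilton highlights, in other words, is less about raw compute and more about arranging data reuse so the expensive part (memory bandwidth) stays cheap.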
(Given the company’s penchant for melodrama, I thought I’d end on a cliffhanger.)
This first appeared in my April 28, 2019, weekly newsletter. If you’d like to get this delivered to your inbox, just sign up here, and I will take care of the rest.