Why Tesla Is Designing Chips to Train Its Self-Driving Tech

Developing AI is costly and time-consuming. Custom silicon can give companies an edge.

Tesla makes cars. Now, it’s also the latest company to seek an edge in artificial intelligence by making its own silicon chips.

At a promotional event last month, Tesla revealed details of a custom AI chip called D1 for training the machine-learning algorithm behind its Autopilot self-driving system. The event focused on Tesla’s AI work and featured a dancing human posing as a humanoid robot the company intends to build.

Tesla is the latest nontraditional chipmaker to design its own silicon. As AI becomes more important and costly to deploy, other companies that are heavily invested in the technology—including Google, Amazon, and Microsoft—also now design their own chips.

At the event, Tesla CEO Elon Musk said squeezing more performance out of the computer system used to train the company’s neural network will be key to progress in autonomous driving. “If it takes a couple of days for a model to train versus a couple of hours, it’s a big deal,” he said.

Tesla already designs the chips that interpret sensor input in its cars, having switched away from Nvidia hardware in 2019. But creating the kind of powerful, complex chip needed to train AI algorithms is far more expensive and challenging.

“If you believe that the solution to autonomous driving is training a large neural network, then what followed was exactly the kind of vertically integrated strategy you’d need,” says Chris Gerdes, director of the Center for Automotive Research at Stanford, who attended the Tesla event.

Many car companies use neural networks to identify objects on the road, but Tesla is relying more heavily on the technology, with a single giant neural network known as a “transformer” receiving input from eight cameras at once.

“We are effectively building a synthetic animal from the ground up,” Tesla’s AI chief, Andrej Karpathy, said during the August event. “The car can be thought of as an animal. It moves around autonomously, senses the environment and acts autonomously.”

Transformer models have driven big advances in areas such as language understanding in recent years; the gains have come from making the models larger and more data-hungry. Training the largest AI programs requires several million dollars' worth of cloud computing power.

David Kanter, a chip analyst with Real World Technologies, says Musk is betting that by speeding the training, “then I can make this whole machine—the self-driving program—accelerate ahead of the Cruises and the Waymos of the world,” referring to two of Tesla’s rivals in autonomous driving.

Gerdes, of Stanford, says Tesla’s strategy is built around its neural network. Unlike many self-driving car companies, Tesla does not use lidar, a more expensive kind of sensor that can see the world in 3D. It relies instead on interpreting scenes by using the neural network algorithm to parse input from its cameras and radar. This is more computationally demanding because the algorithm has to reconstruct a map of its surroundings from the camera feeds rather than relying on sensors that can capture that picture directly.

But Tesla also gathers more training data than other car companies. Each of the more than 1 million Teslas on the road sends the video feeds from its eight cameras back to the company. Tesla says it employs 1,000 people to label those images—noting cars, trucks, traffic signs, lane markings, and other features—to help train the large transformer. At the August event, Tesla also said it can automatically select which images to prioritize for labeling, making the process more efficient.

Gerdes says one risk of Tesla’s approach is that, at a certain point, adding more data may not make the system better. “Is it just a matter of more data?” he says. “Or do neural networks’ capabilities plateau at a lower level than you hope?”

Answering that question is likely to be expensive either way.

The rise of large, expensive AI models has not only inspired some big companies to develop their own chips; it has also spawned dozens of well-funded startups working on specialized silicon.

The market for AI training chips is currently dominated by Nvidia, which started out making chips for gaming. The company pivoted to supplying AI chips when it became clear that its graphics processing units (GPUs) were better suited to running large neural networks than the central processing units (CPUs) at the core of general-purpose computers.

In a neat bit of recursion, AI is also driving a diversification of chip designs. Chip design normally requires deep technical expertise and judgment, but machine learning has proven effective for automating elements of the process. Google, Samsung, and others are making chips that were designed, in part, by AI.

Kanter, the analyst, says some technical questions remain about specialized chips such as Tesla’s D1, including how effectively they can be connected together, and how well an algorithm can be split up and spread across different chips. “You are, in some sense, writing a big check for your software team to cash,” he says.

Tesla did not respond to requests for comment.

Huei Peng, a professor at the University of Michigan who focuses on autonomous driving, says that if the D1 proves successful, Musk could sell it to other carmakers, which would then be following Tesla's technical lead.

Peng says he doesn’t know if the approach Tesla is taking will work out financially or technically, but he’s learned not to bet against Musk. “They’ve done a lot of things that everybody says won’t work,” he says. “But it works in the end.”
