Amazon Wants You to Code the AI Brain for This Little Car

Two years ago, Alphabet researchers made computing history when their artificial intelligence software AlphaGo defeated a world champion at the complex board game Go. Amazon now hopes to democratize the AI technique behind that milestone—with a pint-size self-driving car.

The 1/18th-scale vehicle is called DeepRacer, and it can be preordered for $249; it will later cost $399. It’s designed to make it easier for programmers to get started with reinforcement learning, the technique that powered AlphaGo’s victory and is loosely inspired by how animals learn from feedback on their behavior. Although the approach has produced notable research stunts, such as bots that can play Go, chess, and complex multiplayer video games, it isn’t as widely used as the pattern-matching techniques behind speech recognition and image analysis.

DeepRacer was created by Amazon’s cloud computing division, Amazon Web Services, which produces most of the company’s profits. The car comes with an HD camera, a dual-core Intel processor, and other hardware it needs to pilot itself—but a blank slate where its driving skills should be. Programmers must help it learn those, using new Amazon tools to support reinforcement-learning projects.

“This is a technology that has been almost completely out of reach to all but the most well funded and motivated organizations,” says Matt Wood, the executive who leads AI programs at AWS. “We’ve abstracted away a lot of the complexity.”

Wood hopes DeepRacer will help coders get a feel for reinforcement learning and encourage them to apply it to weightier problems—generating new business for Amazon’s cloud division. Reinforcement learning can train software to react appropriately to changing conditions. Wood says that it’s a good fit for industrial scenarios, such as optimizing wind turbine operations under changing weather or power demands, or prioritizing ship and container scheduling in ports. With help from AWS, General Electric has used reinforcement learning to improve image-processing models in MRI machines.

After thousands of laps on a virtual track, the car’s driving skills can be good enough to navigate in the real world.

Amazon announced DeepRacer at its annual re:Invent cloud conference in Las Vegas on Thursday. The company plans a series of more than a dozen races for DeepRacer owners around the world, where they can win AWS credits and perhaps a free trip to the series finale at re:Invent next year. The project was inspired in part by a grassroots autonomous RC car scene, in which people use open source AI software to build and race their own miniature autonomous cars.

Reinforcement-learning algorithms pick up skills through repeated trial and error. They are guided by feedback from a “reward function” that provides a kind of simulated motivation—for example, by telling the software it must try to maximize its score or lift objects without dropping them. Over many attempts to win a virtual sumo bout or use a robot gripper, the software can gradually improve at achieving the goal it was set.
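For DeepRacer, that “reward function” is a short piece of Python the programmer writes, scored by the simulator on every step. The sketch below shows the idea: reward the car for staying near the center of the track. The parameter names mirror inputs DeepRacer’s simulator is documented to pass in, but treat the specifics here as illustrative rather than official.

```python
def reward_function(params):
    """Reward the car for hugging the center line of the track.

    `params` is a dict the simulator supplies each step; the keys used
    here (track_width, distance_from_center) are assumptions based on
    DeepRacer's documented inputs.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the center line, rewarded progressively less.
    if distance_from_center <= 0.1 * track_width:
        return 1.0   # very close to the center line
    if distance_from_center <= 0.25 * track_width:
        return 0.5   # acceptable
    if distance_from_center <= 0.5 * track_width:
        return 0.1   # drifting toward the edge
    return 1e-3      # likely off track; near-zero reward
```

Over thousands of simulated steps, the learning algorithm adjusts the car’s driving policy to accumulate as much of this reward as possible.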

It can take millions of failures for a reinforcement-learning system to become proficient, so most projects using the technology depend on simulations to speed up the laborious process. The improved version of AlphaGo that Alphabet created last year, called AlphaZero, played 21 million games of Go against itself to master the game beyond the level of any human. Programmers who want to play with Amazon’s DeepRacer must first train their code in a virtual world created by Amazon for the project, in which a digital double of the car can drive—and crash—over and over.
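The trial-and-error loop at the heart of this process can be shown in miniature. The toy sketch below uses tabular Q-learning, one of the simplest reinforcement-learning algorithms, on a five-state corridor where the agent earns a reward only for reaching the far end; all names and parameters here are our own, a stand-in for the vastly larger simulations DeepRacer and AlphaZero run.

```python
import random

def train_q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     eps=0.2, seed=0):
    """Tabular Q-learning on a toy corridor.

    The agent starts at state 0 and receives a reward of 1.0 for
    reaching the rightmost state. Actions: 0 = step left, 1 = step right.
    Returns the learned Q-table (expected reward per state-action pair).
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit the best known action,
            # but explore at random eps of the time.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Standard Q-learning update toward the bootstrapped target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After a few hundred simulated episodes, the greedy policy from every state is “step right” — the same principle, scaled up by many orders of magnitude, that lets a simulated DeepRacer learn to lap a track.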

Amazon is not the only cloud computing company trying to lure coders curious about reinforcement learning. Microsoft has released an open source simulation environment for drones and cars called AirSim, which is also used for reinforcement-learning experiments. Its cloud division, second only to Amazon’s in revenue, is promoting the technology to customers. Shell worked with Microsoft engineers to apply the technology to drilling tricky horizontal oil wells. Kevin Scott, Microsoft’s chief technology officer, says the technique is becoming accessible enough to see widespread use. “It’s not just game-playing any more,” he says.
