We Made Our Own Artificial Intelligence Art, and So Can You

On the 3:13 pm train out of San Jose on a recent Friday, I hunched over a MacBook, brow furrowed. Hundreds of miles north in a Google data center in Oregon, a virtual computer sprang to life. I was soon looking at the yawning blackness of a Linux command line—my new AI art studio.

Some hours of Googling, mistyped commands, and muttered curses later, I was cranking out eerie portraits.

I may reasonably be considered “good” with computers, but I’m no coder; I flunked out of Codecademy’s easy-on-beginners online JavaScript course. And though I like visual arts, I’ve never shown much aptitude for creating my own. My foray into AI art was built upon a basic familiarity with the command line, and a recent encounter with 19-year-old Robbie Barrat.

Barrat doesn’t have formal qualifications in programming either, but he’s become an accomplished AI artist, and shares code and ideas on GitHub. I decided to try them after talking with Barrat in the course of writing about self-taught AI experts in the December issue of WIRED, and learning that a Parisian art collective called Obvious used his recipes and code to create a work that sold at Christie’s for $432,500.

Barrat makes art using artificial neural networks, the webs of math that have spawned the recent AI boom by enabling projects like self-driving cars and automated cancer detection. Neural nets can learn to do useful or artistic things by processing large volumes of example data, such as photos. Barrat made my explorations possible, along with a nice payday for Obvious at Christie's, by sharing the code and instructions needed to train image-generating networks on images collected from the giant online art encyclopedia WikiArt.

Training neural networks is notoriously computationally demanding. It's why graphics chipmaker Nvidia has seen its stock appreciate more than tenfold in the past five years, and why Google has begun designing its own chips for machine learning. Not having a graphics processor, or a spare $2,000 to buy one, I used the $300 in credits Google offers new users of its cloud computing service to boot up a virtual computer that did, picking one preconfigured with machine learning software. Because Barrat's project is now more than a year old, I also had to install Torch, a machine learning tool once used by researchers at companies including Facebook and IBM but since overshadowed by newer packages.
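For anyone retracing those steps, the setup boils down to two commands of the sort sketched below: create a GPU-backed virtual machine, then install Torch from its official repository. The instance name, zone, GPU type, and disk image here are placeholders, and the exact gcloud flags vary by account and region; the Torch lines follow the project's own install instructions.

```bash
# Boot a GPU-backed virtual machine on Google Cloud. Names, zone, GPU
# type, and image family are illustrative; check your quota and the
# currently available machine-learning image families.
gcloud compute instances create ai-art-studio \
  --zone=us-west1-b \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-k80,count=1 \
  --image-family=tf-latest-gpu --image-project=deeplearning-platform-release \
  --metadata="install-nvidia-driver=True" \
  --maintenance-policy=TERMINATE

# Install Torch from its official distro repo (the pre-2018 procedure).
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch && bash install-deps && ./install.sh
```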

A grid of portraits made by a neural network that studied thousands of paintings.

Tom Simonite

My first experiment involved a neural network Barrat had trained on thousands of portraits spanning more than a century of art history. Once I'd gotten the supporting software working, I could type a few dozen characters and spit out grids of weird portraits, some of them similar to the one that Obvious sold for almost half a million dollars. Barrat's networks natively produce only small images, so I tried enlarging one of my portraits with Let's Enhance, a machine-learning-powered service that Barrat says a member of Obvious told him the collective used as part of its workflow.
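Those few dozen characters look roughly like this. Barrat's art-DCGAN code inherits the dcgan.torch convention of passing options as environment variables; the checkpoint filename below is a placeholder for whichever pretrained portrait generator you download, and the option names are worth double-checking against the repo's documentation.

```bash
# Fetch Barrat's code, then sample a grid of portraits from a
# pretrained generator checkpoint (the .t7 filename is a placeholder
# for the downloaded model; options ride in as environment variables).
git clone https://github.com/robbiebarrat/art-DCGAN.git && cd art-DCGAN
gpu=0 net=portrait_generator.t7 batchSize=36 name=portraits th generate.lua
```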

An effort to enlarge a portrait created additional distortions.

Tom Simonite

Next I dug into the documentation to see what other tricks Barrat's trained portrait generator might perform. I made the images at the top of this article by asking it for larger canvases. The clumps of distorted heads and figures are what happens when a neural network that learned to produce structures of a certain size tries to fill a space bigger than the one it was trained on.
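As best I can tell from the underlying dcgan.torch code, the knob involved is the generate script's imsize setting; because the generator is fully convolutional, it can be asked to paint a canvas larger than the 64-pixel squares it was trained on. Treat the option name as an assumption and confirm it against the documentation.

```bash
# imsize > 1 asks the fully convolutional generator to fill a canvas
# larger than its training size, producing the clumped, repeating
# heads and figures seen at the top of this article.
gpu=0 net=portrait_generator.t7 imsize=6 name=big_portraits th generate.lua
```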

Emboldened, I moved on to training image-generating neural networks of my own, again following Barrat's instructions. The "scraper" he developed to pull images from WikiArt can be pointed at many different styles and genres, such as cityscapes or pointillism. Barrat had covered portraits, nudes, and landscapes; I plumped for marine art, and used the script to collect just over 2,000 images. I then doubled my haul by using an image-editing tool to create a mirrored copy of each one. The trick works because of a shortcoming of neural networks: they don't natively perceive visual similarities that are obvious to people, such as two photos being mirror images of each other.
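Any batch image editor can handle the mirroring step. Here's a minimal sketch using ImageMagick's -flop operator, which flips an image horizontally, assuming the scraped files sit in a directory called marine:

```bash
# Create a horizontally mirrored copy of every scraped image,
# doubling the training set. The marine/ directory name is assumed.
mkdir -p marine_mirrored
for f in marine/*.jpg; do
  convert "$f" -flop "marine_mirrored/$(basename "$f" .jpg)_flip.jpg"
done
```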

Some results from training a neural network with seascapes.

Tom Simonite

Training the network gave me new appreciation for grumbles I've heard in the course of reporting on machine learning. For one, there are elements of luck and craft in finding the right settings to get good results from a particular network on a given data set; it's one reason Google is trying to automate that process. I embarked on trial and error similar to, but much less informed than, the process Barrat and the AI artist Mario Klingemann have told me they use: training networks over and over with small variations and trying to move toward the most promising results.
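In practice, that meant relaunching training runs with slightly different settings and comparing the resulting sample grids. A loop along these lines captures the idea, assuming dcgan.torch-style training options; lr, niter, and the rest are that project's names, which I haven't verified against Barrat's fork.

```bash
# Retrain with a few learning rates and run lengths, saving each run
# under its own name so the sample grids can be compared side by side.
for lr in 0.0002 0.0001; do
  for niter in 25 50; do
    DATA_ROOT=marine dataset=folder lr=$lr niter=$niter \
      name=marine_lr${lr}_n${niter} gpu=1 th main.lua
  done
done
```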

With access to just a single Nvidia graphics chip, training the neural networks took hours each time. It reminded me why tech companies spend heavily on hardware to accelerate their teams’ experiments, and are developing their own AI chips. One Facebook project that trained image recognition algorithms on billions of Instagram photos occupied 336 graphics processors for more than three weeks solid.

My own experiments spanned only a few days. But after a handful of duds that "painted" only blotchy glitches, I trained networks that could produce recognizable oceans, and even ghostly sailing ships. Sensing I was close to making them even better, I queued up a marathon training session and accidentally crippled my virtual studio.

While waiting for my next greatest neural network to finish its education, I discovered a GitHub page from artist Alex Champandard offering code to use machine learning to scale up images. In trying to make it work, I broke a piece of the software infrastructure needed to support my virtual machine’s GPU. With my deadline approaching, there was no time to reinstall everything from scratch.

When I spoke to Barrat, he was encouraging about my scrappy art project, saying it was the kind of exploration he hoped his code and tutorial could enable. “My goal was people would use it like you’re doing to play around, and then maybe go on and do more stuff,” he said. He added that he liked the weird assemblages created by pushing his portrait network out of its comfort zone, something he hadn’t tried much himself. “You should go sell those for $400,000,” he joked.

