Draw me like one of your French AI-generated nudes

Draw me like one of your French AI-generated nudes. As one of many amorphous masses of flesh, all rolls and folds like a browner Rubens. Drooping and melting, spilling over, exceeding myself. A face that’s a sallow study in crisscrossing stretchmarks, accented with the bruisy purples of undereye circles. A body that’s dubiously beige, like when women’s magazines hit you with the Fair and Lovely filter. Ugly bags of mostly water. Supine or just slouching; it’s hard to tell.

It’s rare that I have such a visceral reaction to a set of nudes, a category of image which usually evokes a celebratory if not—excuse me for this—empowering response. The images, a set of AI-generated nude portraits from Stanford researcher Robbie Barrat, are undoubtedly as gorgeous as they are unsettling. “Usually the machine just paints people as blobs of flesh with tendrils and limbs randomly growing out—I think it’s really surreal. I wonder if that’s how machines see us,” he wrote in a tweet that went viral last week, adding that the machine always paints faces in the same way “with this weird yellow/purple texture.” He has no idea why, but he likes it. Personally, I find it terribly violent, in a boot stamping on a face forever kind of way.

Robbie Barrat, AI Generated Nude Portrait #3 (2018). via SuperRare.co.

Of course, it’s not a machine in the traditional sense, but an algorithm. And it isn’t painting per se, at least not in the way one might imagine an algorithm spitting commands to a mechanical arm wielding a brush, in the proto-Zamboni Formalist vein of Matthew Stein’s 1998 web robot PumaPaint. Rather, it generates images through call-and-response machine learning; it belongs to a class of AI algorithms known as Generative Adversarial Networks, or GANs. (Call me the GAN girl, maybe.) Think of it as a dialectical faceoff—a classification struggle, if I may—between two neural networks that have been fed the same dataset of images.

The first network is the generator which, perhaps unsurprisingly, generates images based on that dataset. The second is the discriminator, which evaluates each generated image against the dataset and assigns it a probability of being real or fake. Based on this feedback, the generator tries to improve the image before trying its luck again, and again and again: it learns. As the algorithm gets trained, it produces better and better fakes, some of which appear photorealistic in their sophistication. (Efforts to apply GANs to natural—that is, human as opposed to computer—language generation have thus far been much less successful than their image counterparts.) It’s not dissimilar to certain models of art pedagogy. Is a GAN something like an MFA for algorithms? And if so, what might outsider AI art look like?
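For the technically curious, that call-and-response loop fits in a few lines of code. Here is a toy one-dimensional GAN in NumPy—both “networks” are single affine units, the “dataset” is just a Gaussian, and every number (learning rate, batch size, target distribution) is illustrative, a sketch of the mechanism rather than anything resembling Barrat’s actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# "Real" data: samples from N(4, 1) -- the dataset the GAN must imitate.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator: x = w_g * z + b_g, with noise z ~ N(0, 1). Starts at N(0, 1).
w_g, b_g = 1.0, 0.0
# Discriminator: p(real) = sigmoid(w_d * x + b_d). Starts undecided (p = 0.5).
w_d, b_d = 0.0, 0.0

lr, batch = 0.05, 64
for _ in range(2000):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    x_r = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_f = w_g * z + b_g
    d_r = sigmoid(w_d * x_r + b_d)
    d_f = sigmoid(w_d * x_f + b_d)
    w_d -= lr * np.mean(-(1 - d_r) * x_r + d_f * x_f)
    b_d -= lr * np.mean(-(1 - d_r) + d_f)

    # Generator step: push d(fake) toward 1 (the non-saturating GAN loss).
    z = rng.normal(0.0, 1.0, batch)
    x_f = w_g * z + b_g
    d_f = sigmoid(w_d * x_f + b_d)
    w_g -= lr * np.mean(-(1 - d_f) * w_d * z)
    b_g -= lr * np.mean(-(1 - d_f) * w_d)

# After training, the generator's samples should have drifted toward the
# real distribution's mean of 4.0.
fakes = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(f"generated mean ~ {fakes.mean():.2f} (target 4.0)")
```

The adversarial back-and-forth is all here in miniature: the discriminator learns a boundary between real and fake, the generator chases it, and the two settle only when the fakes become hard to tell apart from the data.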

Back on Twitter, responses have been largely admiring, and mostly elide the unbearable whiteness of Barrat’s dataset (because, art history) and, by extension, his portraits. “Ooooo like sweet mounds of dough,” comments one user. Francis Bacon comes up several times, and one user points to the similarities with William Utermohlen’s moving series of self-portraits chronicling his progressive descent into dementia. One image in particular has a familiar-looking yellow coif; Trump jokes abound, as do references to various sci-fi dystopias, and to Terry Bisson’s thinking, conscious, loving, dreaming meat. So many people use the language of dreaming, in fact, that I wonder whether Philip K. Dick or Google’s DeepDream Generator is responsible. The jury is very much out on whether machines can think for themselves (never mind the imminent Singularity), but everyone seems happy, at least in this thread, to agree that they can dream.

Most interesting is a comment from Barrat comparing AI-generated art to Sol LeWitt’s rule-based art, which in turn raises the question of who exactly is the artist here. In response to someone asking why he didn’t try tinting his images in post-production, Barrat replied that he did not want to modify the AI’s output by hand, and that doing so would run counter to the intention of the work. Still, he added, “I am working on augmenting the trained network by overfitting on a small dataset of non-white nudes to try and get a more even distribution over skin tone, though.” Putting aside the trying “in the future we’ll all be brownish and what do you mean representation is not the same thing as reparations” feel-goodism, it’s worth wondering what else this will change beyond color. Depending on whose depictions his dataset draws from (one only hopes it won’t be Gauguin and/or his compatriots who turned their gazes to the Middle East), it is likely that the poses will change. Perhaps they will read as more servile or more sexualised, or even as less passive; perhaps these new images will even shift the pinkish-beige average so that all the AI’s nudes will rearrange their limbs.

My one takeaway from several seasons of America’s Next Top Model was the different poses required for men’s and women’s magazines and I like to imagine a spectral, algorithmic Tyra Banks analogue, screaming poses and art directing from within. And I wonder too, what the algorithm wants, freed from the cis-hetness of art history. Does the generator network just really want to please the discriminator and is its ideal body one that is likeliest to be considered a match? Left to its own devices, would it arrive unsupervised at an androgynous, agendered mean? Regardless, the boundaries are clear: Barrat is only willing to alter the instructions and not the output, what the machine has created within those systemic constraints. If generative art can be understood as a ceding of control to external, logic-based systems—and what is more logical, in its own way, than the natural world?—who is giving up control here? Is the algorithm simply implementing Barrat’s concept? Are its ideas its own?

And—isn’t it funny to emphasize a kind of authentic, purely AI-generated facsimile (or at least the attempt at one) at a time when we’re so consumed by fakes? GANs haven’t been around that long. They emerged in mid-2014, predating this administration’s fake-news bot-or-not maelstrom by a couple of years, but it’s still tempting to posit some kind of causality. Isn’t it kind of wild how entire swathes of the internet have swarmed together to function as a collective fact-checking discriminator network? And a new front in this conflict has recently opened up around the phenomenon of deepfakes—AI-generated porn based on the likenesses of real celebrities and ordinary people—which extends face swapping to its logical, Rule 34-ed conclusion, though it has more recently been widely banned.

From its earliest days, the tech industry has framed computing in terms of passing, of hiding its artifice, its non-human fakeness. One might consider GANs as akin to a never-ending Turing test, except that here, a computer is both examiner and examinee. Meanwhile, with the advent of phenomena like botnets, the Turing test as we know it has been inverted, and it’s up to us to prove that we are not fake, that we match the database of blobs of flesh categorized as human. Now it’s people who are asked to decipher CAPTCHAs—to perform free labor for Google’s algorithms—and to check a little box that says I am not a robot. You know how people like to say “I, for one, welcome our new robot overlords”? Turns out we’re already working for them.