There’s an application for neural nets called “photo upsampling” which is designed to turn a very low-resolution photo into a higher-res one.

Three pixelated faces are turned into higher-resolution versions. The higher-resolution images look pretty realistic, even if there are small weirdnesses about their teeth and hair.

This is an image from a recent paper demonstrating one of these algorithms, called “PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models.”

It’s the neural net equivalent of shouting “enhance!” at a computer in a movie - the resulting photo is MUCH higher resolution than the original.

Could this be a privacy concern? Could someone use an algorithm like this to identify someone who’s been blurred out? Fortunately, no. The neural net can’t recover detail that doesn’t exist - all it can do is invent detail.
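
To put rough numbers on that: if the input is a 16x16 grid and the output is 1024x1024 (the output size here is my assumption, in line with typical face-generator resolutions, not a figure from the paper), then nearly all of the “enhanced” photo is invention:

```python
# Back-of-envelope: how much of the output could possibly come
# from the input? (16x16 input per the examples in this post;
# the 1024x1024 output size is an assumption)
input_pixels = 16 * 16           # 256
output_pixels = 1024 * 1024      # 1,048,576
print(output_pixels // input_pixels)  # 4096 output pixels per input pixel
```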

This becomes more obvious when you downscale a photo, give it to the neural net, and compare its upscaled version to the original.
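
Here’s a minimal sketch of that experiment in Python, using Pillow. The `upscale` function below is just a bicubic stand-in so the script runs end to end; in the real experiment that call would go to a trained model like PULSE, and the filenames are made up:

```python
from PIL import Image

def pixelate(img, size=16):
    """Downscale to a size x size grid, like the low-res inputs above."""
    return img.resize((size, size), Image.BICUBIC)

def upscale(img, size=1024):
    # Stand-in only: plain bicubic interpolation. A generative
    # upsampler would invent sharp (but fictional) detail instead.
    return img.resize((size, size), Image.BICUBIC)

original = Image.open("luke.png")   # hypothetical filename
low_res = pixelate(original)
restored = upscale(low_res)
restored.save("restored.png")       # compare to the original by eye
```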

Left: Luke Skywalker (The Last Jedi, probably) in a blue hood. Center: Highly pixelated version of the left-hand image. Right: Restored image is a white person facing the camera straight on - instead of a hood, they have wispy hair, and the lips are where Luke’s chin used to be.

As it turns out, there are lots of different faces that can be downscaled into that single low-res image, and the neural net’s goal is just to find one of them. Here it has found a match - why are you not satisfied?
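
That “find one of them” framing is roughly how a PULSE-style method works: rather than mapping low-res pixels directly to high-res pixels, it searches the latent space of a pretrained face generator for any face that downscales back to the input. Here’s a stripped-down sketch of that idea (not the authors’ implementation; `generator` stands in for a pretrained face GAN such as StyleGAN):

```python
import torch
import torch.nn.functional as F

def find_preimage(low_res, generator, latent_dim=512, steps=200):
    """Optimize a latent vector until the generated face, once
    downscaled, matches the low-res input (a (1, 3, 16, 16) tensor)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.1)
    for _ in range(steps):
        face = generator(z)  # high-res candidate face
        shrunk = F.interpolate(face, size=low_res.shape[-2:],
                               mode="bicubic", align_corners=False)
        loss = F.mse_loss(shrunk, low_res)  # match only the pixels we have
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()  # one of many faces that fit
```

Any face whose downscaled version matches the input gets essentially zero loss, which is why the search is free to land on a face that looks nothing like the original person.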

And it’s very sensitive to the exact position of the face, as I found out in this horrifying moment below. I verified that yes, if you downscale the upscaled image on the right, you’ll get something that looks very much like the picture in the center. Stand way back from the screen and blur your eyes (basically, make your own eyes produce a lower-resolution image) and the three images below will look more and more alike. So technically the neural net did an accurate job at its task.

Left: Kylo Ren from the shoulders up. Center: Highly pixelated (16x16) version of the previous image. Right: Where Kylo’s cheekbones were, there are now Voldemort-like eyes. Where his chin was, there’s now the upper lip of someone whose lower face is lost in shadow.
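
If you’d rather check that claim numerically than by squinting from across the room, the test is simple: downscale the reconstruction and measure how far it is from the pixelated input. A sketch (the filenames are made up):

```python
from PIL import Image
import numpy as np

low_res = np.asarray(Image.open("kylo_16x16.png").convert("RGB"), dtype=float)
restored = Image.open("kylo_restored.png").convert("RGB")
round_trip = np.asarray(restored.resize((16, 16), Image.BICUBIC), dtype=float)

# A small mean difference means the reconstruction really is a
# valid "preimage" of the pixelated input, however wrong the face.
print("mean abs pixel difference:", np.abs(round_trip - low_res).mean())
```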

A tighter crop improves the image somewhat. Somewhat.

Left: Kylo Ren cropped tightly to the head. Center: Pixelated version of the picture on the left. Right: Reconstructed version looks a bit like that one photo of Jon Snow with closed eyes.

The neural net reconstructs what it’s been rewarded to see, and since it’s been trained to produce human faces, that’s what it will reconstruct. So if I were to feed it an image of a plush giraffe, for example…

Left: the head of a plush giraffe. Center: 16x16 version of the previous image. Right: reconstructed to look a bit like Benedict Cumberbatch, if he had rather orange skin, glowing blue eyes, and a couple of diffuse blobs floating on either side of his head.

Given a pixelated image of anything, it’ll invent a human face to go with it, like some kind of dystopian computer system that sees a suspect’s image everywhere. (Building an algorithm that upscales low-res images to match faces in a police database would be both a horrifying misuse of this technology and not out of character with how law enforcement currently manipulates photos to generate matches.)

However, speaking of what the neural net’s been rewarded to see - shortly after this particular neural net was released, twitter user chicken3gg posted this reconstruction:

Left: Pixelated image of former US President Barack Obama

Right: “Reconstructed” image of a white man vaguely resembling Adam Sandler

Others then did experiments of their own, and many of them, including the authors of the original paper on the algorithm, found that the PULSE algorithm had a noticeable tendency to produce white faces, even if the input image hadn’t been of a white person. As James Vincent wrote in The Verge, “It’s a startling image that illustrates the deep-rooted biases of AI research.”

Biased AIs are a well-documented phenomenon. When its task is to copy human behavior, AI will copy everything it sees, not knowing what parts it would be better not to copy. Or it can learn a skewed version of reality from its training data. Or its task might be set up in a way that rewards - or at the least doesn’t penalize - a biased outcome. Or the very existence of the task itself (like predicting “criminality”) might be the product of bias.

In this case, the AI might have been inadvertently rewarded for reconstructing white faces if its training data (Flickr-Faces-HQ) had a large enough skew toward white faces. Or, as the authors of the PULSE paper pointed out (in response to the conversation around bias), the standard benchmark that AI researchers use for comparing their accuracy at upscaling faces is based on the CelebA-HQ dataset, which is 90% white. So an AI that did a terrible job at upscaling other faces but an excellent job at upscaling white faces could still technically qualify as state-of-the-art. This is definitely a problem.
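
The arithmetic of that skew is worth spelling out. With made-up per-group scores (illustrative only, not measurements from the paper):

```python
white_fraction = 0.90   # CelebA-HQ's approximate skew, per the authors
score_on_white = 0.95   # hypothetical: excellent
score_on_other = 0.50   # hypothetical: terrible
overall = (white_fraction * score_on_white
           + (1 - white_fraction) * score_on_other)
print(overall)  # 0.905 - the failure barely dents the headline number
```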

A related problem is the huge lack of diversity in the field of artificial intelligence. Even an academic project with art as its main application should not have gone all the way to publication before someone noticed that it was hugely biased. Several factors are contributing to the lack of diversity in AI, including anti-Black bias. The repercussions of this striking example of bias, and of the conversations it has sparked, are still being strongly felt in a field that’s long overdue for a reckoning.

AI Weirdness supporters get bonus material: an ongoing experiment that’s making me question not only what madlibs are, but what even are sentences. Or become a free subscriber to get new AI Weirdness posts in your inbox!


My book on AI is out, and you can now get it in any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore
