Front and Profile

Mugshots Generated by Machine Learning

Front and Profile is a set of diptychs that pair archival mugshot photographs with their Machine Learning generated counterparts. In each diptych, an archival b&w mugshot photo, either a front or a profile view, is input into a Machine Learning neural network, and the computer is tasked with producing the most likely alternate view. If a front view is fed in, it predicts the profile view; if a profile view is fed in, it predicts the front view.
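The front-to-profile and profile-to-front pairing can be sketched as follows. This is a minimal illustration only; the filename scheme below is invented for the example and is not the actual NIST naming convention.

```python
# Hypothetical sketch: build (input, target) training pairs from a file
# listing of front ("F") and profile ("P") mugshot views. The filename
# scheme here is invented for illustration.

def make_pairs(filenames):
    fronts, profiles = {}, {}
    for name in filenames:
        subject, view = name.rsplit("_", 1)
        view = view.split(".")[0]
        if view == "F":
            fronts[subject] = name
        elif view == "P":
            profiles[subject] = name
    # Every subject with both views yields two training pairs:
    # front -> profile and profile -> front.
    pairs = []
    for subject in sorted(fronts.keys() & profiles.keys()):
        pairs.append((fronts[subject], profiles[subject]))
        pairs.append((profiles[subject], fronts[subject]))
    return pairs

files = ["00001_F.png", "00001_P.png", "00002_F.png"]
pairs = make_pairs(files)
# Only subject 00001 has both views, so it contributes both directions.
```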

This is an experiment to render pictures using a Machine Learning neural network. I had been using a U-Net image processing network as a front end to a generative adversarial network, i.e., a GAN. It turns out this idea had already been explored successfully in the paper Image-to-Image Translation with Conditional Adversarial Networks by Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Their network is called pix2pix. I eventually adopted the full pix2pix architecture by augmenting my code with a pix2pix implementation by Christopher Hesse. I later expanded the pix2pix architecture to address issues specific to this project.
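To give a sense of the encoder/decoder structure inside a pix2pix-style U-Net generator, the following sketch traces the spatial resolutions through the network. A depth of 8 and a 256×256 input are the pix2pix paper's defaults, assumed here for illustration; they are not necessarily the exact settings used in this project.

```python
# Sketch of the spatial-resolution bookkeeping in a pix2pix-style U-Net
# generator. Each encoder layer halves the height/width with a stride-2
# convolution; each decoder layer doubles it back and concatenates the
# matching encoder activation (a "skip connection"). Depth 8 and a
# 256x256 input are the pix2pix defaults, assumed for illustration.

def unet_resolutions(size=256, depth=8):
    """Return the per-layer resolutions of the encoder and decoder."""
    encoder = []
    s = size
    for _ in range(depth):
        s //= 2                 # downsample by 2
        encoder.append(s)
    decoder = []
    for _ in range(depth):
        s *= 2                  # upsample by 2
        decoder.append(s)
    return encoder, decoder

enc, dec = unet_resolutions()
# enc: [128, 64, 32, 16, 8, 4, 2, 1] -- down to a 1x1 bottleneck
# dec mirrors it back up: [2, 4, 8, 16, 32, 64, 128, 256]
```

The skip connections are what let fine detail from the input mugshot (pose-independent texture, lighting) bypass the bottleneck and reappear in the generated view.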

The neural net was trained on 512 mugshot pairs drawn from a database published by the National Institute of Standards and Technology (NIST). The database contains b&w mugshot photos from the 1930s through the 1970s. The mugshots were rendered using an alpha version of TensorFlow 2.0 running in a Google Colab notebook.

At first glance, the Machine Learning generated images in Front and Profile may seem like an ingenious piece of computer engineering. Using Machine Learning to draw pictures, pixel by pixel, is very new, and the technology is still in its infancy. Its implications, however, are much more disturbing. What would happen if a state actor used this technology to profile its populace in the name of “security”? What are the privacy implications? Would we be comfortable, for example, if a state could produce an entire 3D model of a person from just their driver’s license photo? Front and Profile demonstrates that we are much closer to this reality than one might think.

Another question Front and Profile poses is: how accurate are these images? Does the Machine Learning generated image look like the real person? For now, the images are primitive and at best look something like a police artist’s sketch. But inevitably the technology will improve, and the images will become more and more photorealistic. When this happens, the illusion of verisimilitude will rise to the point where the images are easily mistaken for real. But will these future images be any more accurate than they are now, or will they simply be better and better fakes?

Lastly, a large percentage of the individuals in the database are African-American, a far higher percentage than in the overall population. So the question is: how will this racial bias in the dataset affect the results of the neural net? Will it even be possible to construct an "unbiased" dataset?

Each image in a Front and Profile diptych is a 15"x15" gelatin silver print.
