Interview: David Young on “Dandelions”
Transcription from phone call; edited for clarity
Q1: What inspired Dandelions?
A1: Dandelions is part of an ongoing series I call “Learning Nature,” which is an exploration of how machine learning and advanced technology can understand something other than consumption and efficiency. This project specifically came about during the spring of 2020, in the midst of the pandemic. I was struck by the fragility and ephemeral quality of the dandelions in their brief moment of transition from yellow flower to white bloom. It felt like a perfect encapsulation of the fragility of the world.
Q2: How was Dandelions made?
A2: All of the Learning Nature series is made using a machine learning technology called a GAN, short for Generative Adversarial Network. My approach to this technology runs counter to how GANs are used in more traditional corporate settings: there, the GAN is fed hundreds of thousands or millions of images, and it then generates its own image based on its understanding of what it was shown. In contrast to that, I use just a handful of images. I really want this project to be on the scale of the individual, making it more intimate and more personal, in an attempt to develop an understanding of machine learning that is more intuitive.
And so for the Dandelion series, I picked, I don’t know, a bunch of dandelions that were growing around my farm in upstate New York, and then I photographed them in my studio. I used those photographs, of which there were fewer than 100, to train my machine. Because the machine is trained on so few images, it develops an imperfect understanding of what a dandelion is, which is why the resulting images are not perfect: they carry strange artifacts of the technology. I feel that the visual quality of these images reflects what I’m calling the materiality of AI: what makes an image created with AI, machine learning, and GANs unique is reflected in the strange artifacts that appear in these works.
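[Editor’s note: for readers curious what training a GAN on such a small set of photographs might look like in practice, the sketch below is an illustrative example only. David does not describe his actual tooling; the library (PyTorch), the folder layout, and every hyperparameter here are assumptions.]

```python
# Illustrative sketch only: a small DCGAN-style setup trained on a folder of
# fewer than 100 photographs. Not the artist's actual code or configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
latent_dim = 100          # size of the random noise vector fed to the generator
image_size = 64           # photographs are resized to 64x64 for this sketch

# Hypothetical folder layout: dandelions/photos/*.jpg
transform = transforms.Compose([
    transforms.Resize(image_size),
    transforms.CenterCrop(image_size),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
dataset = datasets.ImageFolder("dandelions", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Generator: noise vector -> 64x64 RGB image (successive upsampling layers)
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
).to(device)

# Discriminator: 64x64 RGB image -> probability that the image is a real photo
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(256, 1, 8, 1, 0), nn.Sigmoid(), nn.Flatten(),
).to(device)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(500):                     # a tiny dataset needs many passes
    for real, _ in loader:
        real = real.to(device)
        batch = real.size(0)
        ones = torch.ones(batch, 1, device=device)
        zeros = torch.zeros(batch, 1, device=device)

        # Train the discriminator: real photographs vs. generated images
        noise = torch.randn(batch, latent_dim, 1, 1, device=device)
        fake = generator(noise)
        loss_d = criterion(discriminator(real), ones) + \
                 criterion(discriminator(fake.detach()), zeros)
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Train the generator: try to make the discriminator call fakes real
        loss_g = criterion(discriminator(fake), ones)
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
```

With so few training images, a setup along these lines tends to overfit and blend its source photographs imperfectly, which is roughly where the kinds of artifacts described above come from.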
Q3: What do you feel we can learn from this “restricted GAN” process, using a limited set of images to train the AI?
A3: One of the reasons I’m doing this is to highlight the irrationality of AI and machine learning. We place so much faith in this technology, but the truth is that the technology is only as good as the data we train the machine with. In real-world applications, there are endless examples of how, when the training data contains bias, that bias is simply reinforced by the training process; in operation, the machine optimizes for the bias. One of the reasons I’m using small amounts of data is to highlight the myth that AI is somehow removed from bias, or that it has a higher intelligence than we have as human beings. Using small amounts of data reveals the inherent imperfection in this technology.