From ‘Barbies scissoring’ to ‘contorted emotion’: the artists using AI

You type in words - however nonsensical or disjointed - and the algorithm creates a unique image based on your search. This is Dall-E 2, a startlingly advanced image-generating AI trained on 250 million images, named after the surrealist artist Salvador Dalí and Pixar's Wall-E.
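(To make that concrete: the sketch below shows what such a search can look like as an API call. It assumes the openai Python SDK, v1 or later, and an account with access to the "dall-e-2" model - an assumption, given that access is currently limited.)

    # Minimal text-to-image sketch, assuming the openai Python SDK (v1+)
    # and an account with access to the "dall-e-2" model.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.images.generate(
        model="dall-e-2",
        prompt="synchronized swimming in lava",  # a prompt borrowed from the article
        n=1,                                     # number of candidate images
        size="512x512",                          # 256x256 and 1024x1024 also supported
    )

    print(response.data[0].url)  # URL of the generated image

The prompt here is one of the article's own examples; any sufficiently vivid string works.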

While use of Dall-E 2 is currently limited to a narrow pool of people, Dall-E mini (since renamed Craiyon) is a free, unaffiliated version that is open to the public. Drawing on 15m images, Dall-E mini's algorithm offers a smorgasbord of surreal images, complete with absurd compositions and blurred human forms.

Already, trends have emerged: nuclear explosions, dumpster fires, toilets and giant eyeballs abound. On a dedicated Reddit thread, people delight in the images generated by the free, low-resolution version, which range from amusing (Kim Jong-un Lego) to dark (The Last Supper by Salvador Dalí), hellish (synchronized swimming in lava) and deeply disturbing (Steve Jobs introducing a guillotine). Like other machine-learning networks, this AI model seems biased in its images of people - who appear, perhaps unsurprisingly, overwhelmingly white and mostly male. (A cursory search for "the Guardian journalist" procured nine wallet-sized images of light-skinned men in suits, 90% of whom wore dark-rimmed glasses.)

OpenAI, the company behind Dall-E 2, acknowledges, however vaguely, that image-generators may "reinforce or exacerbate societal biases". The policy page says composite images may "contain stereotypes against minority groups".

The company's rules prohibit "sexual or political content, or creating images of people without their consent". But who decides what is political? Isn't the very definition of "sexual" subjective?

Dall-E is not the first text-to-image AI model, but its sophistication, along with Dall-E mini's popularity, has given new urgency to questions about the role of AI in artmaking. When Dall-E produces an image, who is the creator? Is it the person who typed in the text, the coders who trained the neural network, the photographers whose images appear in the training data - or all of the above?

We spoke to four artists working across textiles, photography, installation, video art, and oil painting about harnessing Dall-E's trove of images - and asked them to provide us with an exclusive example of how they used the tool.
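
Martine Syms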

I'm at a break between shows and exploring Dall-E 2. I've been playing around with it, trying to break it or to see how far it goes or where the edge is. Some of this stuff you're playing with online, it could feel like, "oh it's so infinite" or sentient, but no, it's not as infinite as my imagination.

I'd been familiar with OpenAI through two projects I worked on - Neural Swamp, on view at the Philadelphia Museum of Art, and my first foray into AI with MythiccBeing. I'd like to be able to combine images, like if you had the ability to mate two images and add context, write different scenarios. It's more surprising to put in something not descriptive but more open-ended and let Dall-E try to figure out what an adjective means. I'm interested in generated imagery in relationship to motion, which I'm sure is coming sooner rather than later. And [the machine learning system] GAN imagery is the average tool; Dall-E is the next step in that direction.

Mostly I've been typing in lines - almost poetry, like "writhing in contorted emotion". I also typed in: "Whenever I do something illogical, inefficient, unproductive, or nonsensical I can just smile at my innate humanity." I think that's more interesting than trying to do like "Kanye West as a clown in the middle of Times Square". I'm more interested in thinking about poetics. That's what brought me to machine learning in the first place.

It's cool, the novelty of it. Sometimes I think the images have a ghostliness or remind me, honestly, of drug trip imagery. They look subconscious, not fully rendered. Things aren't really rendered on the face: nostrils, or the way the earlobes are. Hands. I searched Kid Rock - it worked. They had the hat and stringy hair.
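
Erin M. Riley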

I've been doing image research, playing around with Dall-E mini; I'm on the waitlist for Dall-E 2. I'm researching the landscape where I grew up - the land used to be part of a dump, so there would be this treasure out in the woods. I've been trying to think of myself as a young girl, so I've been Googling young girls a lot. I'm using them as figure models, but it feels creepy. It's like, "This is someone's child." I always delete all the source imagery from my computer once I've woven something. After a while these people become stand-ins, a conglomeration, but they're also actual people too.

Google used to be a cache of images that I used in the space of memory. I also used to use Flickr or Photobucket. Now, I look at library archives - like sexual education pamphlets or xeroxed brochures about domestic violence. When I was using other people's images, I was using the essence of a selfie or a self portrait. I don't need faces, so there's this blurring of identity. Dall-E blurs their faces for you. When I search, it defaults to white. It's never given me a non-white person.

People write about my work and say "sexy selfies", which is definitely simplified. Selfies are kind of a check-in with the internet, like, "Hi, I exist. This is what my human body looks like." When I search on Dall-E, I'm asking it to be a form, like "tapestry" or "selfie tapestry" or "not your grandma's quilt". When you put in "tapestry", it depicts what you see in dorm rooms - a printed piece of fabric, not actually a woven piece of fabric. You have to put "woven tapestry", which is interesting because to me, the meaning of tapestry is something that's woven, but you have to add that language. I did a "selfie woven tapestry" and a "car buried in the ground" and "gas pump in the woods covered in pennies" - the first few I did were kind of creepy.

The idea that there are multiple versions in Dall-E [mini] is interesting - the thing is like showing you its sketches. When you're an insecure artist, you want to show the best of the bunch - or the opposite, when you're insecure you want to show the whole bunch. But when you're confident, you're like: "This one is the best, I only need to show one." So I think it's cool that it's like: here's nine.

A lot of my work is thinking about early queerness and sexuality. The things you did with toys. I would always make my Barbies hook up and my girlfriends were always a little bit confused. On Google, I searched for "Barbies scissoring" and it was just literally human people having sex with Barbies. The internet is so strange and there is this pre-sorting. The roller coaster of things coming out on the internet. The FAQ doesn't say anything about adult content.

Online, there's this idea of somebody's image being used. Deepfakes or catfishing. It always felt safe to send nudes if there wasn't a face in the image, because it wasn't implicating you in the nudes, even though I have tattoos so there's no hiding who I am.
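
Rachel Rossin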

I have a background in programming but I'm not an engineer, I'm more of a tinkerer. I've made a lot of my own neural networks over the years - trained on my own datasets of my image-making process - to mimic my drawing style and apply it like a filter over an image. The datasets ranged from maybe 500 drawings to 10,000 images. Training the networks takes days, but I have a pretty good computer that I can crunch that data on.
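(Rossin doesn't detail her architecture, but one public technique that fits her description - applying a drawing style over an image like a filter - is neural style transfer, per Gatys et al. Below is a minimal PyTorch sketch under that assumption; the image paths are placeholders, and this is an illustration, not her actual pipeline.)

    # Compact sketch of classic neural style transfer (Gatys et al., 2015),
    # one public way to apply a drawing style like a filter over an image.
    # Not Rossin's actual pipeline; image paths are placeholders.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"

    def load(path, size=512):
        tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
        return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

    content = load("photo.jpg")       # image to restyle (placeholder path)
    style = load("my_drawing.jpg")    # example of the target style (placeholder path)

    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
    STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)
    STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv layers that capture texture/style
    CONTENT_LAYER = 21                 # deeper layer that preserves content

    def features(x):
        x = (x - MEAN) / STD           # VGG expects ImageNet-normalised input
        style_feats, content_feat = [], None
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in STYLE_LAYERS:
                style_feats.append(x)
            if i == CONTENT_LAYER:
                content_feat = x
        return style_feats, content_feat

    def gram(x):                       # channel correlations act as a "style" summary
        b, c, h, w = x.shape
        f = x.view(c, h * w)
        return f @ f.t() / (c * h * w)

    style_grams = [gram(f).detach() for f in features(style)[0]]
    content_feat = features(content)[1].detach()

    img = content.clone().requires_grad_(True)   # start from the content image
    opt = torch.optim.Adam([img], lr=0.02)

    for step in range(300):
        opt.zero_grad()
        s_feats, c_feat = features(img)
        style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(s_feats, style_grams))
        loss = 1e6 * style_loss + F.mse_loss(c_feat, content_feat)
        loss.backward()
        opt.step()
        with torch.no_grad():
            img.clamp_(0, 1)           # keep pixel values valid

    transforms.ToPILImage()(img.detach().squeeze(0).cpu()).save("styled.png")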

In Hologram Combines, you can see part of that neural network exposed. I usually approach shows by creating my own virtual world of something that exists wholly in virtual reality, and then I clip from that world to make source material. I like to keep my own world self-contained - an internal, metabolic system. There's such a saturation of images and media right now, but making my own set from my own visual language and logic is more fun than going out to Google, which is what this is trained on.

That's visual-to-visual search, not text-to-visual like Dall-E. It's like playing tennis with myself. There are advanced, node-based processes in a neural network - in the case of Dall-E 2 or mini, there are something like five sub-neural networks happening at the same time, which is pretty incredible. Our AI is of course getting more sophisticated, but it's also getting a little bit more quantum, meaning there are several sub-processes happening at once.
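(Her count is roughly right: the published Dall-E 2 design chains a CLIP text encoder, a "prior" that maps the text embedding to an image embedding, a diffusion decoder, and two diffusion upsamplers - about five sub-networks, though they run in stages rather than simultaneously. The runnable sketch below mimics only that data flow; every function is a stub, not the real model.)

    # Schematic sketch of the staged Dall-E 2 pipeline (text encoder -> prior ->
    # decoder -> two upsamplers). The stubs only mimic the shape of each stage;
    # they are placeholders, not the real sub-networks.
    import numpy as np

    def clip_text_encoder(prompt: str) -> np.ndarray:
        # Stage 1: map the prompt to a text embedding (fake 512-dim vector here).
        rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
        return rng.standard_normal(512)

    def prior(text_emb: np.ndarray) -> np.ndarray:
        # Stage 2: predict an image embedding that "matches" the text embedding.
        return text_emb + 0.1 * np.random.standard_normal(text_emb.shape)

    def diffusion_decoder(image_emb: np.ndarray) -> np.ndarray:
        # Stage 3: decode the embedding into a small 64x64 RGB image.
        return np.random.random((64, 64, 3))

    def upsample(image: np.ndarray, size: int) -> np.ndarray:
        # Stages 4-5: diffusion upsamplers grow the image
        # (crude nearest-neighbour stand-in here).
        reps = size // image.shape[0]
        return image.repeat(reps, axis=0).repeat(reps, axis=1)

    def generate(prompt: str) -> np.ndarray:
        image = diffusion_decoder(prior(clip_text_encoder(prompt)))
        image = upsample(image, 256)     # 64 -> 256
        return upsample(image, 1024)     # 256 -> 1024

    print(generate("writhing in contorted emotion").shape)  # (1024, 1024, 3)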

I use text in an annotative way - more poetic and abstract than literal. I make something from a feeling, often body-based. It's much more like dream logic than this network, which is very literal. I think it's actually a lot more useful for people who are film directors because it's fun for sketching or storyboarding. But creatively, I don't really need it. It hasn't made its way into one of my projects, formally. And I think it's because I've worked with neural networks for a long time so the novelty has worn off.

This Person Does Not Exist is much better than Dall-E on faces. I couldn't help but think, "What does it think a Rachel Rossin looks like?" I have the same name as the Blade Runner character Rachael Rosen, so on Dall-E 2, when I search for my name there's some of that. It's a white Jewish lady with brown hair, which looks pretty similar to me. That's the phenotype, I guess.

The thing that's most remarkable to me is the context or verb, the action-based things. If I search "the bird is running up the street and lost its toupee", it knows what you want to see. It's going to be interesting when we can start to fold this into making films. Processing is going to get more powerful - it's here to stay.

There's a curatorial aspect that we're ignoring. There's this expectation that we're creating a sort of God, but we have to remember that machine learning, neural networks, artificial intelligence - all of these things are trained on human datasets. There's a trickle-down effect that happens because so much of our perception is folded into the technology, maybe arbitrated by engineers at Google and OpenAI. People are surprised when artificial intelligence is racist or sexist, like somehow forgetting that all of these things are trained on human datasets. It's basically a different type of Google search, that's all that's going on. It's putting trust in the internet.

It's important to remind people what artificial intelligence actually is. We are seeing a reflection of ourselves, and it seems like a magic black box.
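
Firelei Báez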

My work is always a rhizomatic map. To make the painting [on view at the Venice Biennale], I was looking at a thousand images of hair and different sea life forms. I searched for images of people swimming underwater to see what their bodies would look like; what does Black hair, curly hair, dreadlocked hair look like when it's underwater? One painting became a chorus of a hundred faces. That's where mother Google came in, in place of having a model pose in the studio or an actual object to photograph.

I try to do the same search on other people's devices because even if I just switch genders, I'll get a whole different set of images. And from that, an amalgamation.

There's digital splicing, there's actual physical splicing. I'll have a printed image and then sometimes I use a projector, mostly for proportions. I'm very good at re-creating a texture but I get lost when it comes to making things at different scales.

Most artists that I know make images by splicing together information they've heard or images they know to create the one thing they imagine. But I don't think I'd ever use Dall-E, per se, because that's what I do. I can't see what the use would be, for me as an image maker. It's interesting that there's an attempt to echo the human hand or a painterly touch, but these images are pixelated and blurred out. A lot of the effort in the studio for me is trying to cobble together a meaning that feels truthful to my experience with whatever is actually available online.

When you do a Google search, even something that is supposed to have happened thousands of years ago or yesterday or projected to be tomorrow, it's all now. It's all presented in the same format. As much as I like this idea of flattening time and space, we are creatures of memory. We can only anchor ourselves in place. It's probably a limitation, but also a benefit of being human. So much about who we are as humans is about individual refraction.

In the gathering of images, the person who made that algorithm, or put out those images, all of that represents a real-world thing that reflects values, choices. What is that threshold of reality that we rely on?

I tried to search for "memory board" but Dall-E brings up computer memory boards instead. The West African tradition of memory boards is tactile, oral and visual. It's a sculpture tradition in which someone who knows the encoded language can, through touch, retell the history of the community for generations. You have to engage all the senses in order to truly perceive. You can feel as much as you can see and remember.

Then I tried to search "lukasa", which is from southern Congo. [Dall-E] can't really place a geography on it, and when you zoom in, it's extra disappointing. It just feels sad. The western filter is coming into play.

It all goes back to: what are the things in the world that feel truest? Or that feel like me? Because so much of the canon is passed down and I love art but didn't feel like it included me. Some objects are still out of context in museums, like at the Met they'd have this object that reads: "Ritual object, maker unknown". If it's something that I responded to physically, or if it'd spark interest, I could go down the rabbit hole and find out what something was.

Interviews have been edited for length and clarity
