Things You Should Know But Don’t: AI and the Art of Horror

Posted October 24, 2022

Several months ago, the image generator DALL-E Mini (an independent project inspired by OpenAI’s DALL-E, since renamed Craiyon) entered the popular lexicon, trending on social media to the point where the site was often too busy to access. OpenAI’s full version of the program, DALL-E 2, was publicly released a month ago. DALL-E 2 is a machine learning model that takes user input in the form of a short text prompt and generates novel images from it. On a very basic level, DALL-E works by learning associations between existing, labeled images and the words in a text prompt. OpenAI describes the process this way: “DALL-E 2 has learned the relationship between images and the text used to describe them. It uses a process called ‘diffusion,’ which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.” Because the model interprets text prompts with surprising accuracy, people had fun creating both beautiful and hilarious images with it.
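To make the quoted description a bit more concrete, here is a minimal, purely illustrative Python sketch of the loop structure behind diffusion. It is not DALL-E 2’s actual code: the fixed `target` array below is an assumption standing in for what a trained neural network, conditioned on the text prompt, would predict at each step.

```python
import numpy as np

# Toy illustration of the "diffusion" idea: start from a pattern of
# random dots and take many small denoising steps toward an image.
# In a real system like DALL-E 2, a trained neural network conditioned
# on a text embedding supplies the direction of each step; here a fixed
# "target" array stands in for that prediction, purely to show the loop.

rng = np.random.default_rng(0)

target = rng.uniform(0.0, 1.0, size=(8, 8))  # stand-in for the image the model "wants"
image = rng.normal(0.0, 1.0, size=(8, 8))    # step 0: pure random noise

steps = 50
for t in range(steps):
    # Each step nudges the noisy array slightly toward the target,
    # analogous to one reverse-diffusion denoising step.
    image = image + (target - image) / (steps - t)

print(np.abs(image - target).max())  # ~0.0: the noise has been refined into the image
```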

But not all of the experiences users have had while testing AI image generators have been pleasant. Even with a well-written prompt, it isn’t difficult to stumble into surreal results. Prompts involving people, especially, can veer into the uncanny valley very quickly. One artist, who goes by the username Supercomposite on Twitter, made a very interesting discovery while testing the limits of one AI program. She started with a technique called “negative prompt weights.” In layman’s terms, this technique generates an image that is as different as possible from the given text prompt: an “opposite” of it. The artist explains her entire process throughout the Twitter thread. In the end, she “accidentally” created a strange and gruesome figure: a haggard, human-like woman she dubbed “Loab.” Loab went viral for her sudden and inexplicable recurrence throughout Supercomposite’s work.
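Supercomposite has not said exactly which AI program she used, so the following is only a sketch of how “negative prompt weights” commonly work in text-to-image diffusion models, through a mechanism called classifier-free guidance. The array values and the function name `guided_prediction` are illustrative assumptions standing in for a real model’s per-step outputs.

```python
import numpy as np

# Sketch of classifier-free guidance, the usual mechanism behind prompt
# weights in diffusion models. At each denoising step the model makes two
# noise predictions: one conditioned on the prompt, one unconditioned.
# The guidance weight blends them: positive weights steer the image toward
# the prompt, negative weights steer it away, yielding an "opposite" image.

rng = np.random.default_rng(1)
pred_unconditioned = rng.normal(size=(4, 4))  # model's guess with no prompt (stand-in)
pred_conditioned = rng.normal(size=(4, 4))    # model's guess given the prompt (stand-in)

def guided_prediction(weight: float) -> np.ndarray:
    # weight > 0: move toward the prompt; weight < 0: move away from it.
    return pred_unconditioned + weight * (pred_conditioned - pred_unconditioned)

toward = guided_prediction(7.5)   # a typical positive guidance scale
away = guided_prediction(-1.0)    # a "negative prompt weight"
```

With a negative weight, every denoising step actively steers away from the prompt, which is why an “opposite” image can end up containing content nobody asked for.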

As the artist fed the algorithm new images, some with Loab and some without, she found that Loab persisted through many, many iterations. For whatever reason, the algorithm held on to Loab’s visage while rendering her face progressively more weathered, gory, and unsettling. Take a look through Supercomposite’s original thread, but be warned: some of the images are graphic.

Some have offered the explanation that the method Supercomposite used to create “opposite” images drove the AI to include more and more upsetting themes, pushing each image ever further from what the model considered pleasant and acceptable. It makes sense that if the AI was told to move away from “normal” imagery to create an inverse, it might produce some horrific and abnormal things. This is just one explanation, though. It is still unknown why the algorithm latched on to the face of an older woman to begin with, or why it consistently replicated her identifying features into a recognizable face. In their examination of the phenomenon, journalist Max Read writes, “Indeed, it’s hard to say much of anything about Loab – what she is, where she comes from, why the AI has sorted her in the way it has – beyond educated guesswork.”

With new technology, especially AI, there always comes fear of the unknown, and backlash. Sometimes it is justifiable; the art world has been abuzz lately with concerns about DALL-E 2 and similar programs. These worries came to a head last month, when an image generated with the AI program Midjourney was entered in the Colorado State Fair and won first prize in the digital arts category. There have been the usual philosophical arguments over whether a machine that only feeds off the works of others can ever create art. On a more practical, immediate level, artists are also concerned about copyright, and whether their art is being used to train an algorithm without their permission. One of the scarier things about AI and machine learning is that we’re not always entirely sure how these systems work, or where exactly they pull their data and extrapolations from. When we can’t trace where the extrapolations come from, the A in AI gets a little blurrier, and the machines get a little more human. And as we know from personal experience, true crime stories, and history, humans have the potential to be very, very scary. Will our newfound AI cousins prove worse?
