The battle between artists and AI art has entered a new chapter with a digital tool that can 'poison' generative AI art models.
The tool, called 'Nightshade', was developed by researchers at the University of Chicago. Nightshade works by subtly altering the pixels of selected images, thus 'poisoning' them. The alterations are imperceptible, so the images look normal to human viewers. To an AI model training on the image, however, the altered pixels act as noise and corrupt what the learning model picks up.
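Nightshade's real perturbations are optimized to mislead a model about specific concepts, but the core idea of an invisible pixel-level change can be sketched in a toy example. This is a minimal illustration only: the noise here is random rather than Nightshade's optimized poison, and the `epsilon` budget and image shape are illustrative assumptions.

```python
import numpy as np

def add_imperceptible_perturbation(image: np.ndarray, epsilon: float = 4.0,
                                   seed: int = 0) -> np.ndarray:
    """Toy stand-in for a poisoning perturbation: add bounded random noise.

    Tools like Nightshade optimize the perturbation against a model's
    feature space; this sketch only shows the 'invisible change' aspect.
    """
    rng = np.random.default_rng(seed)
    # Per-pixel noise bounded by +/- epsilon (on a 0-255 scale), well
    # below what a human viewer would notice.
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    poisoned = np.clip(image.astype(np.float64) + noise, 0, 255)
    return poisoned.astype(np.uint8)

original = np.full((64, 64, 3), 128, dtype=np.uint8)  # flat gray test image
poisoned = add_imperceptible_perturbation(original)

# The per-pixel change never exceeds the epsilon budget.
max_delta = np.abs(poisoned.astype(int) - original.astype(int)).max()
```

The image is visually unchanged, yet every pixel a model reads has been nudged; Nightshade's contribution is choosing those nudges so the model learns the wrong associations.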
Popular GenAI art tools like Midjourney and DALL-E are trained on datasets of hundreds of thousands of images, with each image processed at the pixel level. At present, there is no way to counter such 'poisoned' images other than manually identifying and removing them from the training data. Professor Ben Zhao, the project's lead researcher, said: "We assert that Nightshade can provide a powerful tool for content owners to protect their intellectual property against model trainers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists. Movie studios, book publishers, game producers, and individual artists can use systems like Nightshade to provide a strong disincentive against unauthorized data training."
Over the last two years, AI art has taken the art world by storm. While the general public remains divided on the issue, most artists have objected to the technology. So far, though, the battle has been largely one-sided: although many artists and curators have sued the makers of these tools for copyright infringement, most court decisions have concluded that creating art by training on publicly available images is neither plagiarism nor copyright infringement. Nightshade could finally give artists a tool to fight back against this trend.