Artists file class-action lawsuit against Stability AI, DeviantArt, and Midjourney

Technollama report

What many of us had expected has finally happened: artists have sued a couple of AI companies, as well as an art repository site, for copyright infringement (complaint here). Is this the end of AI tools? I don’t think so, and I’ll try to explain why. This will not be a detailed look at the lawsuit; there will be more time for that. This is my own take on some of the technical issues that I think the complaint gets wrong, so it is not intended as an in-depth look at the law, as I suspect this may not get to trial (more on that later). I’m also aware that we are at a very early stage: things may change, and most importantly, nobody can be sure of what the result will be. This is my own early speculation on the first filing as it stands; I’ll update and write further blog posts as needed.

The claims

Three artists are starting a class-action lawsuit against Stability AI, Midjourney, and DeviantArt, alleging direct copyright infringement, vicarious copyright infringement, DMCA violations, publicity rights violations, and unfair competition. DeviantArt appears to be included as punishment for its “betrayal of its artist community”, so I will mostly ignore its part in this analysis for now. With regard specifically to the copyright claims, the lawsuit alleges that Stability AI and Midjourney have scraped the Internet to copy billions of works without permission, including works belonging to the claimants. It alleges that these works are then stored by the defendants, and that these copies are used to produce derivative works.

This is at the very core of the lawsuit. The complaint is very clear that the resulting images produced by Stable Diffusion and Midjourney do not directly reproduce the works of the claimants; no evidence is presented of even a close reproduction of one of their works. What they are claiming is something quite extraordinary: “Every output image from the system is derived exclusively from the latent images, which are copies of copyrighted images. For these reasons, every hybrid image is necessarily a derivative work.” Let that sink in. Every output image is a derivative of every input, so following this logic, anyone whose work is included in the data scraping of five billion images can sue for copyright infringement. Heck, I have quite a few images in the training data, maybe I should join! But I digress.

The argument goes something like this: images are scraped from the Internet without permission; these images are then copied, compressed, and stored by the defendants; and these copies are used as a “modern day collage tool” to put together images from the training data. The reasoning is that machines cannot think like people, so they must just be stitching existing material together; hence all output images are derivatives of the works in the training data.

The technology

I think that the argument in the claim is flawed because it does not accurately represent the technology, so I will attempt a very quick explanation of how tools such as Stable Diffusion or Midjourney produce images. What follows uses some excerpts from my forthcoming article, so stay tuned for a lengthier explanation.

I like to classify what happens in AI generative tools into two stages: the input phase and the output phase. The input phase comprises the gathering of data to create a dataset, which is then used to train a model. In the case of Stable Diffusion, the dataset is LAION, which has over 5 billion entries, each consisting of the pairing of a hyperlink to a web image (not the image itself) with its ALT text description. This dataset is then used to train a model. I will not go into detail about models; suffice it to say that a model is a mathematical representation of a real-world process, trained using a dataset, which can be used to make predictions or decisions without being explicitly programmed to perform the task. There are various types of models, but Stable Diffusion and Midjourney both use diffusion models (see an explanation in a previous blog post). Long story short, diffusion models take an image, add noise to it, and then learn to put it back together.
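The noise-then-denoise idea can be sketched in a few lines of Python. This is a toy illustration of the standard closed-form noising used in diffusion models, not Stable Diffusion’s actual code; the tiny four-pixel “image”, the variable names, and the noise level are all my own illustrative assumptions. Here the “denoiser” is handed the true noise, so recovery is exact; a real system instead trains a neural network to predict that noise.

```python
# Toy sketch of diffusion: corrupt an "image" with Gaussian noise via the
# DDPM-style closed form, then invert the noising given a noise estimate.
# A real model learns the noise estimate; here we cheat and use the true noise.
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(x0, alpha_bar, eps):
    """Forward process: x_t = sqrt(alpha_bar)*x0 + sqrt(1 - alpha_bar)*eps."""
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def denoise(xt, alpha_bar, eps_pred):
    """Invert the noising formula given a predicted noise eps_pred."""
    return (xt - np.sqrt(1.0 - alpha_bar) * eps_pred) / np.sqrt(alpha_bar)

image = np.array([0.1, 0.5, 0.9, 0.3])   # a pretend 4-pixel image
eps = rng.standard_normal(image.shape)   # the noise that gets added
alpha_bar = 0.02                         # heavily noised: mostly noise, little signal

noisy = forward_noise(image, alpha_bar, eps)
recovered = denoise(noisy, alpha_bar, eps)  # perfect noise estimate -> exact recovery
print(np.allclose(recovered, image))        # True
```

The point of the sketch is that what the model stores after training is the ability to estimate and remove noise, not a compressed archive of the training images themselves.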

Read the full report
