
Creative AI poison pill developed


Bud:
A tool called Nightshade is screwing up AI image scraping/training by manipulating image pixels. Google for "Nightshade AI poison". The goal is to protect artists' intellectual property.

tom66:
If you can't beat it, er... smash your toys on the floor and have a tantrum? If AI generation can replicate art at the level a professional does, is that not more evidence that the skill is likely to die away, much as farriers became substantially less employable with the invention of the car?

Of course, I say this as someone whose day job involves at least some amount of software development, so maybe I won't be so happy when AI tools can actually write competent code. I can see the dilemma artists face, but the "make it not possible" crowd is never going to win. The cat is out of the bag; I can run Stable Diffusion on a £300 graphics card, and it is not going away.

If this poison-pill method involves hiding details in, e.g., the least significant bits of the image, or adding extra metadata, it'll just require manual human tagging and/or filtering to be applied to the input data set. That will only take a bit more time. Many of the input images for these models are already manually tagged, because the tag sets that come with the images aren't good enough yet.
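To show what I mean, here's a quick sketch (Python with NumPy and Pillow; the function and file names are my own, and I'm only assuming the payload actually sits in the low bits) of how trivially such a scheme could be scrubbed out of a training set:

--- Code: ---
import numpy as np
from PIL import Image

def scrub_lsbs(path_in: str, path_out: str, bits: int = 2) -> None:
    # Zero the lowest `bits` bits of every channel. Any payload hidden
    # there is destroyed, while the visible image barely changes.
    img = np.asarray(Image.open(path_in).convert("RGB"))
    mask = 0xFF & ~((1 << bits) - 1)   # bits=2 -> 0b11111100
    Image.fromarray(img & mask).save(path_out)

scrub_lsbs("input.png", "cleaned.png")
--- End code ---

Zeroing two LSBs changes each channel value by at most 3 out of 255, which is invisible to the eye but fatal to anything encoded down there.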

ebastler:

--- Quote from: Bud on November 02, 2023, 01:03:23 pm ---A tool called Nightshade is screwing up AI image scraping/training by manipulating image pixels. Google for "Nightshade AI poison". The goal is to protect artists' intellectual property.

--- End quote ---

Paper is here: https://arxiv.org/pdf/2310.13828.pdf

Having glanced through it, I am not sure I have understood the concept yet. It is not the simple "dirty label" approach, where you feed the AI images with misleading meta-information during the training process. (They talk about that as a reference point in section 4.)

Rather, they create images which look "right" to a human observer, but contain another, misleading image mixed in at a small amplitude (section 5). Is it just a linear combination of two images, and will the AI still pick up (misleading) patterns from the low bits?
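To make my question concrete, here is the naive version I have in mind (Python with NumPy and Pillow; this is my own toy code and the file names are placeholders, not what the paper necessarily does):

--- Code: ---
import numpy as np
from PIL import Image

def naive_mix(cover_path: str, poison_path: str, eps: float = 0.05) -> Image.Image:
    # Blend a "misleading" image into the cover image at small amplitude eps.
    cover = Image.open(cover_path).convert("RGB")
    poison = Image.open(poison_path).convert("RGB").resize(cover.size)
    c = np.asarray(cover, dtype=np.float32)
    p = np.asarray(poison, dtype=np.float32)
    mixed = np.clip((1.0 - eps) * c + eps * p, 0, 255).astype(np.uint8)
    return Image.fromarray(mixed)

naive_mix("cover.png", "misleading.png").save("poisoned_naive.png")
--- End code ---

At eps = 0.05 the mixed-in image contributes at most about 13 of 255 levels per channel, which is why it would be hard to see; my question is whether that little signal would survive training.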

Bud:
The original MIT Technology Review article gives actual samples of the image transformations caused by poisoning. The article is called "This new data poisoning tool lets artists fight back against generative AI".

ebastler:

--- Quote from: Bud on November 02, 2023, 02:40:31 pm ---The original MIT Technology Review article gives actual samples of the image transformations caused by poisoning. The article is called "This new data poisoning tool lets artists fight back against generative AI".

--- End quote ---

Yes, I saw those. Those are examples of the results generated by AI models which have been "poisoned". (They are actually taken from the paper I linked to.) But how does the process of "poisoning" itself work?
