
Multimedia creators now have a way to push back against unintended use of their work by AI. With the help of an emerging open-source tool, they can introduce a form of "poisoning" into their artwork that hinders AI models from using it as training data.

Nightshade, created by University of Chicago researchers, is a tool that artists can apply to their images before uploading them to the web. This form of data poisoning deliberately misleads AI models into learning incorrect labels for the objects and scenery in the images. In testing, the effect grew with the number of poisoned samples: after processing 50 poisoned images, the model began generating distorted-looking dogs; with 100, it consistently produced a cat in response to prompts for a dog; and by 300 poisoned samples, any request for a dog yielded a nearly flawless image of a cat.
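
Nightshade's exact technique is not spelled out here, but attacks of this kind generally work by adding a small, human-imperceptible perturbation to an image so that a model's feature extractor perceives a different concept (for example, a dog photo whose features are nudged toward "cat"). The following sketch in Python/PyTorch is only a hypothetical illustration of that general idea, not the actual Nightshade algorithm; the encoder, loss, and perturbation budget are assumptions made for the example.

import torch
import torch.nn.functional as F

def poison_image(image, anchor, encoder, eps=8 / 255, steps=200, lr=0.01):
    """Illustrative feature-shift poisoning (not Nightshade's actual method).

    Optimizes a perturbation, bounded in L-infinity norm by `eps`, so the
    encoder's features for the poisoned image move toward those of an
    `anchor` image from a different concept (e.g. a cat anchor for a dog
    photo), while the pixels stay visually close to the original.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_features = encoder(anchor)

    for _ in range(steps):
        poisoned = (image + delta).clamp(0.0, 1.0)
        loss = F.mse_loss(encoder(poisoned), target_features)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Project back into the budget to keep the change imperceptible.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (image + delta.detach()).clamp(0.0, 1.0)

# Usage sketch (names are hypothetical): `encoder` is any frozen image
# feature extractor, `dog_image` and `cat_anchor` are tensors in [0, 1].
# poisoned_dog = poison_image(dog_image, cat_anchor, encoder)

A model trained on such images would associate the visual features of the anchor concept with the original image's label, which is how repeated exposure can eventually make a "dog" prompt produce a cat.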

It's worth noting that any tainted images already incorporated into an AI training dataset would need to be identified and removed, and a model that had already been trained on them would likely require retraining.
