Unless you specifically opt out, Adobe may assume it's OK to use your work to train its algorithms.
An eagle-eyed developer at the Krita Foundation noticed that Adobe had automatically opted them into a “content analysis” initiative. The program allows Adobe to “analyze your content using techniques such as machine learning (e.g. for pattern recognition) to develop and improve our products and services.”
The rule was implemented in August 2022 but had gone unnoticed until now.
Artists, understandably, have been protesting AI-generated art as a potential threat to their livelihoods.
While some artists see AI as a tool for their work rather than a threat, there's near-unanimous consensus that the way generative AI models are often trained is unfair.
Some artists have found that their work was scraped to train generative AI models without their consent, let alone any payment of royalties. This has raised questions over whether end users could also unwittingly violate copyright and face legal consequences.
By changing its policy to allow AI models to be trained on its users' work, Adobe doesn't have to rely on scraping data from the web.
While Adobe claims that it doesn’t use data on customers’ Creative Cloud accounts to train its experimental generative AI features, the wording provides some legal flexibility.
In the company’s documentation, Adobe quite clearly says “we first aggregate your content with other content and then use the aggregated content to train our algorithms and thus improve our products and services.”
Such data collection should never be enabled by default; it arguably falls foul of regulations like GDPR. If you're an Adobe user and want to opt out, you can do so here.