
Adobe Will Sell AI-made Stock Images - Slashdot

source link: https://tech.slashdot.org/story/22/12/05/1519250/adobe-will-sell-ai-made-stock-images


Adobe Will Sell AI-made Stock Images (axios.com) 38

Posted by msmash

on Monday December 05, 2022 @10:20AM from the up-next dept.
Adobe is opening its stock images service to creations made with the help of generative AI programs like Dall-E and Stable Diffusion, the company said. From the report: While some see the emerging AI creation tools as a threat to jobs or a legal minefield (or both), Adobe is embracing them. At its Max conference in October, Adobe outlined a broad role it sees generative AI playing in the future of content generation, saying it sees AI as a complement to, not a replacement for, human artists. Adobe says it is now accepting images submitted by artists who have made use of generative AI on the same terms as other works, but requires that they be labeled as such. It quietly started testing such images before officially announcing the move today. "We were pleasantly surprised," Adobe senior director Sarah Casillas told Axios. "It meets our quality standards and it has been performing well," she said.




Adobe makes art tools rather than art, and they can build their own AI tools; in fact they already have, and for a long time now. For the foreseeable future people will still need Photoshop, to fix hands :)

Yeah, mate, that's not even remotely how these tools work. They're not compositors. They don't have some database of images.

These models are trained on billions of images, and a typical checkpoint is a couple gigs. That is, there's about one byte of neural net weightings per image. If you think you have some magical algorithm to compress an image into one byte, by all means share it with the class.
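A quick back-of-envelope check of that ratio, using hypothetical round numbers (a ~2 GiB checkpoint and a ~2.3-billion-image training set, roughly the scale of LAION-2B):

```python
# Back-of-envelope: bytes of model weights per training image.
# The exact figures are assumptions, but the order of magnitude is the point.
checkpoint_bytes = 2 * 1024**3       # ~2 GiB of weights
training_images = 2_300_000_000      # ~2.3e9 images in the training set

bytes_per_image = checkpoint_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of weights per training image")
```

Under these assumptions it comes out to under one byte per image, which is nowhere near enough to store the images themselves.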

SD learns motifs and styles. SD ***CANNOT*** reproduce single images from its dataset. Here's an easy test: search LAION-400M [github.io], turn on captions and aesthetic scoring, and pick an image with a fairly unique caption. Now go to StableDiffusion's Dreamstudio, enter in that exact caption, and generate a bunch of images. Result? You don't get anything like the original image. Thematic, stylistic, motifs? Yes. The same image? Not at all. And it cannot, because the data just isn't there.

Now, I specified *single* for a reason. If you have something like the Mona Lisa or the Earthrise photo or whatnot, it's going to exist thousands of times in the dataset in different forms, and it will be able to reproduce those things to varying degrees. But such things are pretty much the definition of a motif - a common stylistic artistic element.

Again, you don't have to take my word for it - do the above test for yourself.

Again, to repeat: AI art tools are NOT compositors. They're NOT samplers. They're denoisers [fosstodon.org]. They're given latent static dot-producted with a textual latent, and try to clean it up so that the latent image and text together make sense. You can easily picture doing this manually: you see a staticky image and can picture, "well, it'd look more like a house if I pushed this here and tweaked that there". But what sort of pushing and tweaking you'd do would very much depend on whether someone said it was, say, a picture of a horse vs., say, a picture of a go-kart. The training teaches it how to do just that: how to push and tweak static to look a tiny bit more "houselike" or "go-kart-like" or whatnot. And that's run again and again and again. In a way, it's sort of reverse image recognition - it's trying to *make* the static recognizable.
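That push-and-tweak loop can be sketched as a toy, with a hard-coded stand-in for the trained predictor (a real diffusion model uses a learned network and proper noise schedules; every name and number here is illustrative, not the actual algorithm):

```python
import numpy as np

# Toy illustration of iterative denoising: start from pure static and
# repeatedly nudge it a small step toward whatever the "predictor"
# says the clean image for this prompt should look like.
rng = np.random.default_rng(0)

def predict_clean(noisy, prompt):
    # Stand-in for the trained network. In a real model this prediction
    # comes from weights learned over billions of image/caption pairs,
    # not from a lookup table like this.
    targets = {"house": np.full(noisy.shape, 0.8),
               "go-kart": np.full(noisy.shape, -0.3)}
    return targets[prompt]

def denoise(prompt, steps=50, step_size=0.2, shape=(8, 8)):
    x = rng.standard_normal(shape)                    # latent static
    for _ in range(steps):
        x += step_size * (predict_clean(x, prompt) - x)  # small nudge
    return x

house = denoise("house")
print(round(float(house.mean()), 2))  # converges toward the "house" target
```

The point of the sketch: the output is built up by repeated small corrections to noise, not assembled from stored pieces of training images.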

  • Re:

    That doesn't mean that ANY AI tool will be like this, mind you. If you have a huge neural net and a small dataset and train it enough, you'll overfit to the dataset, and it'll start producing originals, with good fidelity - indeed, in extreme overfit cases, to near perfection.

    But that simply cannot happen with one byte per image. Sorry, data compression just isn't that good.
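The overfit regime above is easy to demonstrate in miniature. Here's a toy sketch (all numbers invented): a linear model with far more parameters than data points can reproduce its tiny training set exactly, which is exactly what one byte of weights per image cannot do.

```python
import numpy as np

# Toy overfit demo: 5 "images" of 16 pixels each, fit by a model with
# 5*16 = 80 parameters -- wildly overparameterized for the dataset.
rng = np.random.default_rng(1)
data = rng.standard_normal((5, 16))

# One-hot "which image" input; least squares then simply memorizes.
X = np.eye(5)
W, *_ = np.linalg.lstsq(X, data, rcond=None)

reconstruction = X @ W
print(np.allclose(reconstruction, data))  # training set reproduced exactly
```

With the parameter count far above the data size, "learning" collapses into memorization; shrink the parameter budget to roughly one byte per sample and exact reproduction becomes impossible.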

    • Re:

      stahp

      You don't understand the words you're using, because if you did, you'd realize why that's a stupid name for this.

The two houses are not the same. The claim was that the original was spit right out, but that's false: you can see that the two houses differ even in the scaled-down sample images.

      Got any claims that aren't false?

        • Re:

          Well, there's definitely elements of that, but the art community has a lot of cringe too. I know full well that making stuff yourself is not easy. My lady is an artist, and we have had lots of conversations about this, so I'm not unversed in the artist's view at all. In fact, I'm much more familiar with it than I am with the actual technology behind these image models! I've been hearing about it since I started playing with these toys.

          Though no one asked, I don't consider myself to be "an artist", but I do

    • Re:

Beyond the simple issue of "Exactly how much diversity do you expect in lowpoly houses anyway?", that is NOT "StableDiffusion". That is a custom-trained checkpoint for StableDiffusion. A mod. Created by a user. Most likely trained with Dreambooth. They likely took - tops - several hundred images for the training, maybe no more than a dozen or two, and used that to fine-tune SD's several gigs of weightings. It's very easy to overtrain in such a situation - indeed, it will overtrain if you run it for too long.

    • Re:

      Where's the theft? None of the generated images are the same as the 'original'. They bear some resemblance because the AI was asked to produce images similar to the original. None are close enough to trigger any sort of copyright concern.

      Or are you some IP extremist who thinks nobody can build a house without stealing because someone already built a house with walls and a roof? Can nobody ever do a professional portrait photo again because all of the hand poses are already taken?

