
Challenge accepted. How will they know?



My normal artwork (digital) is 9000x9000 pixels, up to about 14400x14400. Good luck getting any of these models to make something bigger than 1024x1024. Sure, you can use Gigapixel to upres things, but in the end it's basically impossible to make decent artwork at that scale using ML at the moment. I only use DALL-E for offbeat ideas for geometric abstractions; so far Stable Diffusion sucks for that purpose.

Of course, no one wants you to upload the full-resolution image anyway; at 1024x1024 you might be able to fool people with something ML-made.

I believe most of these models are only trained on images around 1K resolution, so they really can't make anything bigger yet. Maybe someday, but not today.
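
For reference, the usual workaround today is to generate at native resolution and then run a learned upscaler over the result. A minimal sketch using Hugging Face's diffusers library and the stabilityai/stable-diffusion-x4-upscaler checkpoint (the file names are hypothetical, and a 1024px input may exceed consumer VRAM, so people often upscale in tiles):

    import torch
    from diffusers import StableDiffusionUpscalePipeline
    from PIL import Image

    pipe = StableDiffusionUpscalePipeline.from_pretrained(
        "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
    ).to("cuda")

    low_res = Image.open("render_1024.png").convert("RGB")  # hypothetical input file
    # The prompt conditions the 4x super-resolution pass.
    big = pipe(prompt="geometric abstract artwork, sharp edges", image=low_res).images[0]
    big.save("render_4096.png")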


Have you seen the 'in-painting' technique yet? You can write prompts for each element of an image you want, erase the parts you don't like, position them on the canvas, and make SD fill in the blanks.
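
For anyone who hasn't tried it, the diffusers library exposes this directly. A minimal sketch, assuming the runwayml/stable-diffusion-inpainting checkpoint (file names are hypothetical; the mask is white wherever SD should repaint):

    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("canvas.png").convert("RGB").resize((512, 512))  # hypothetical
    mask = Image.open("mask.png").convert("RGB").resize((512, 512))    # white = repaint

    out = pipe(prompt="a red glass sphere on a marble table",
               image=init, mask_image=mask).images[0]
    out.save("canvas_filled.png")

Repeating this per element (new prompt, new mask) is how people compose larger scenes piece by piece.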


> I believe most of these models are only trained on images around 1K resolution, so they really can't make anything bigger yet. Maybe someday, but not today.

This is the sort of thing that changes incredibly quickly. I'd guess that a big chunk of the source data will be in 8K within a year or two.


The problem is that the computing power necessary for those bigger models increases by a large factor as well. An 81MP image has 81 times the pixels of a 1MP image. I'm not sure exactly how the generation is implemented, but it probably becomes unaffordable even if the cost scales linearly.
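
To put rough numbers on that: a sketch assuming a latent-diffusion-style model with an 8x downsampling autoencoder and full self-attention over latent tokens (both assumptions; real architectures restrict where attention runs):

    # Cost relative to a 1024x1024 generation.
    def cost_ratio(edge_px, base_px=1024, downsample=8):
        tokens = (edge_px // downsample) ** 2
        base_tokens = (base_px // downsample) ** 2
        return {
            "pixels": (edge_px / base_px) ** 2,        # linear-in-pixels work
            "attention": (tokens / base_tokens) ** 2,  # quadratic-in-tokens work
        }

    print(cost_ratio(9216))  # ~81x the pixels, but ~6561x the attention cost

So even the optimistic linear case is an 81x bill, and the attention layers scale far worse than linearly.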


It might require dedicated hardware. That only really becomes possible once the idea is proven, but ASICs for cryptomining, for tensor workloads (Google's TPUs), etc. are quite real. There's no reason dedicated hardware for training Stable Diffusion couldn't happen.


It's mostly a VRAM limitation at this point, though: even an 81x slowdown with something like Stable Diffusion would be perfectly acceptable for producing the final high-quality render.

Which is to say, it sounds like the solution is dedicated GPUs that focus on VRAM over speed.
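
A back-of-the-envelope illustration of why VRAM bites first: the memory for one naive full self-attention matrix over latent tokens, assuming an 8x downsampling autoencoder and fp16 (purely illustrative; real models restrict attention to lower resolutions and use memory-efficient kernels):

    # GiB needed for one naive tokens-x-tokens attention matrix in fp16.
    def attn_matrix_gib(edge_px, downsample=8, bytes_per=2):
        tokens = (edge_px // downsample) ** 2
        return tokens * tokens * bytes_per / 2**30

    for edge in (1024, 4096, 9216):
        print(edge, f"{attn_matrix_gib(edge):.1f} GiB")
    # 1024 -> 0.5 GiB, 4096 -> 128 GiB, 9216 -> ~3300 GiB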


With the models that are available for public use today, there are still usually signs, particularly when it comes to drawings of humans.


Sounds like a good opportunity for an "AI image detection"-as-a-Service.


Is it possible to automate detection of automation? If you can create an algorithm that detects the difference between AI art and human art, couldn't that algorithm itself be used to train a new AI art generator that it could no longer detect?


Nice point. I think it's called 'adversarial training'; it's the core idea behind GANs, where a generator and a discriminator are trained against each other. AlphaZero was grown through the related idea of self-play.
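
For the curious, the generator-vs-detector loop looks like this. A minimal PyTorch sketch with toy dimensions (the random "real" batch is a stand-in for an actual dataset):

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
    D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(10_000):
        real = torch.rand(32, 784) * 2 - 1       # stand-in for a batch of real images
        fake = G(torch.randn(32, 64))

        # Detector step: push real toward 1, generated toward 0.
        d_loss = bce(D(real), torch.ones(32, 1)) + \
                 bce(D(fake.detach()), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: update G so the detector says "real".
        g_loss = bce(D(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Which is exactly why any detector you ship as a service becomes, in principle, a training signal for the next generator.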




