I usually go for a deliberately more stylized look with many of my AI images (a little anime-ish). For these ones, though, I was purely going for some level of photorealism.
I look at so many of these images that I find it hard to judge whether they look photorealistic or still stand out as 'AI generated'. Curious to know what you think, and also what you think would improve them.
Click the images at the bottom for the full resolution ones.
They are very realistic but look like they have been photoshopped - skin is unnaturally smooth, almost like plastic in some shots. The gunge always avoids the same parts of the face too.
I've tried, but image generators seem to get confused by how gunge falls over a body - like how it looks in a lap or slides down legs. Any success there?
kipperflew2 said: They are very realistic but look like they have been photoshopped - skin is unnaturally smooth, almost like plastic in some shots. The gunge always avoids the same parts of the face too.
Thank you, this is really useful feedback. I agree they need to look more 'real' to be convincing - trying to make them look perfect has the opposite effect in the end. The model probably also needs more training on faces covered in gunge.
WildThang said: I think producers like me will be out of a job pretty soon. These still look fantastic!
Animation is still just that. Nothing will replace the real thing.
It'll be indistinguishable from the real thing. Or better, if that's what you want.
Unless AI artists start defrauding the public and presenting their fakes as real people (and I wouldn't put that past them at all), there's no value in "indistinguishable" to those who want to see people slimed.
Then WildThang won't be out of a job! The real thing will be available as long as there's an audience. I don't see a conflict.
WildThang said: I think producers like me will be out of a job pretty soon. These still look fantastic!
Nah. I get a special thrill from seeing a real pie in the face. Even more if it's a "real" pie fight (that is, one with a plot, not just the girl standing there).
Where AI is going to do best is for situations that are so outrageous or dangerous that a producer couldn't get a real model to do them.
MMasia, I'm wondering what upscaler you're using. I was using R-ESRGAN4x+; it produced sharp results but lost detail and realism. I switched to R-ESRGAN4x and got better results.
I have a difficult time training on colors. Prompting for a color would change both the subject and the background. In each of the images, did you prompt for the slime color? Was the LoRA trained on colors?
Also, do you use the face-restore option? Sometimes I use it and it makes the face look better, but it loses detail.
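For anyone who wants to compare upscalers systematically rather than eyeballing one at a time, here's a minimal sketch against the AUTOMATIC1111 WebUI extras API. This assumes you're on that UI with the --api flag enabled; the file names are placeholders, and the upscaler names must match whatever your own Extras dropdown shows (spelling varies between installs):

```python
# Rough way to A/B two upscalers from outside the UI, assuming a local
# AUTOMATIC1111 WebUI started with --api. Upscaler names must match the
# Extras-tab dropdown exactly.
import base64
import requests

URL = "http://127.0.0.1:7860"  # assumed default WebUI address

def upscale(image_path: str, upscaler: str, scale: float = 4.0) -> bytes:
    """Run one image through the Extras upscale endpoint, return PNG bytes."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    payload = {
        "image": encoded,
        "upscaling_resize": scale,
        "upscaler_1": upscaler,
    }
    r = requests.post(f"{URL}/sdapi/v1/extra-single-image", json=payload, timeout=300)
    r.raise_for_status()
    return base64.b64decode(r.json()["image"])

# Save one output per upscaler so detail loss can be compared side by side.
for name in ("R-ESRGAN 4x+", "R-ESRGAN 4x"):
    with open(f"compare_{name.replace(' ', '_')}.png", "wb") as f:
        f.write(upscale("render.png", name))
```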
mFeelzGood said: I have a difficult time training on colors. Prompting for a color would change both the subject and the background.
There's a trick here (which I admit I haven't tried yet) involving multiple ControlNets. Basically you make the image you want, then import it into a photo editor and block out the colours you want at extremely low resolution, then fade in a greyscale overlay of the image at about 20%.
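A minimal sketch of that blend step with Pillow, assuming the rough colour blocks have already been painted and exported from the editor (both file names are placeholders):

```python
# Blend the hand-painted colour blocks with a greyscale copy of the render.
from PIL import Image

base = Image.open("render.png").convert("RGB")
blocks = Image.open("colour_blocks.png").convert("RGB").resize(base.size)

# Greyscale overlay of the original, back to RGB so it can blend.
grey = base.convert("L").convert("RGB")

# alpha=0.2 keeps 80% colour blocks, 20% greyscale overlay, so some of the
# original structure shows through without washing out the colours.
hint = Image.blend(blocks, grey, 0.2)
hint.save("colour_hint.png")
```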
Then you need two ControlNets: one running the original image through Canny, and the other a depth net fed the modified colour-blocked image you made.
Run IMG2IMG over that setup with quite a low denoise and you should get what you're after.
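And a hedged sketch of that two-ControlNet img2img pass using the diffusers library instead of the WebUI. The model IDs, prompt, denoise strength, and conditioning scales are all assumptions, and feeding the colour hint to the depth ControlNet mirrors the post's description rather than that net's usual depth-map input:

```python
import torch
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

base = Image.open("render.png").convert("RGB")
hint = Image.open("colour_hint.png").convert("RGB")  # from the blend step above

# Canny edges of the original keep the composition locked down.
edges = cv2.Canny(np.array(base), 100, 200)
canny_img = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="woman covered in bright green gunge, photorealistic",  # placeholder prompt
    image=base,                       # img2img source
    control_image=[canny_img, hint],  # one conditioning image per ControlNet
    strength=0.3,                     # quite a low denoise, as the post suggests
    controlnet_conditioning_scale=[1.0, 0.8],
).images[0]
result.save("recoloured.png")
```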