As for people, it's getting better but definitely still not quite there: facial horror, and in particular dental abomination, still tends to happen. I got one image that was pretty good until you looked in the girl's mouth and realised she'd been given a whole second inner set of teeth, like the xenomorph from Alien.
DungeonMasterOne said: There's a dedicated UMD group for AI WAM: https://umd.net/groups/group/ai-wam
Thanks, I'll head on over.
I think of them as being like how dreams look and feel: you get the idea, but it's not 100% correct.
That's a really good way of looking at it. It can create some fabulous landscapes and vistas; it completely excels at half-flooded ruined temples in misty forests. People less so, though if you ask for a rear view and then ignore all the ones with too many legs the results can be OK. No narrative storyline yet though.
MudLovesMe said: Is there a way to block AI-generated content? Because I refuse to support a program that steals artists' hard work.
Not really; even if some kind of tag were introduced, there'd be no way to force anyone to use it. We're still in the very early stages, so at the moment you can mostly tell the AI ones - facial horror, random extra hands, dental abomination - but within a very short time AI will be producing photorealistic images without any noticeable flaws.
I expect within the next 10 to 20 years all fetish production and porn will have been replaced by AI-generated imagery and video. You'll be able to bung some money into an AI system and request "a 4k video of a woman who looks like (insert a celebrity of your choice), and is wearing (describe an outfit of choice), getting into, sitting down in, and rolling around in, a deep mud pit (describe consistency of mud), with an emphasis on seeing her legs and bum get mucky, in a forest clearing", and it'll generate you 20 minutes of full-motion video, probably from multiple camera angles, that will look as good as a Hollywood movie. Actual human-made WAM videos will be expensive luxuries for the top of the market, where people who have enough money to have a choice decide they still want to see a real person get messy instead of computer-generated pixel renditions.
But this is definitely coming, and will sweep away a lot of manual work in the same way the invention of the steam locomotive made stage-coach operators obsolete. There isn't any point trying to resist it, technology can't be un-invented.
It's an interesting one for sure. Image-wise, when it gets it right it should do the same thing and look great, but there's another side to it, and that's the knowing of what is being felt. Many of us who actively do sploshing, as well as just viewing it, know what it feels like, and for me that forms part of the experience of watching or looking at the image/video.
Most times when it's CGI, even good CGI, it doesn't have that knowing that someone really got messy/wet etc. And to a large part the after-soggy too: when someone comes out of mud or water and in the next frames they're dry already, yet it's still pretending to be one continuous scene, it kills the splosh feeling.
This looks really interesting... so these images aren't real? Don't know much about all this yet, but might have a look at it.
If these images aren't real and just AI-generated from "real" images, how would this be different from deepfaking and all the issues and problems around that?
ChrisBUK said: This looks really interesting... so these images aren't real? Don't know much about all this yet, but might have a look at it.
If these images aren't real and just AI-generated from "real" images, how would this be different from deepfaking and all the issues and problems around that?
They aren't generated from real images like deepfakes are; rather, the AI generates them based on a text prompt, which it interprets via a neural network that's been previously "trained" on the existing contents of the Internet. So you can ask it for, say, "a young woman wearing a party dress enjoying a mudbath" and it'll generate what it "thinks" that image should look like. So it's creating "an impression of", rather than, say, taking a WAM image of a mudbath and a catalogue image of a woman in a party dress and combining them, as someone could do with Photoshop.
I gather the way it works is it actually starts with a blank screen of static (like you used to get on an untuned TV) and then "de-noises" it in a series of steps to bring out the image it's creating.
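That "start with static and de-noise it in steps" idea can be sketched as a toy loop. To be clear, this is an illustration only, not how a real system is implemented: an actual diffusion model uses a trained neural network to predict the noise at each step, whereas the stand-in "denoiser" below cheats by comparing against a fixed target array, purely to show the shape of the process (random static gradually refined towards an image).

```python
import numpy as np

def toy_denoise(steps=50, size=8, seed=0):
    """Toy sketch of iterative de-noising.

    A real diffusion model would use a trained network to *predict* the
    noise at each step; this hypothetical stand-in computes it directly
    from a known target, just to show the loop structure.
    """
    rng = np.random.default_rng(seed)
    # Stand-in for "what the prompt means" - a simple greyscale gradient.
    target = np.linspace(0.0, 1.0, size * size).reshape(size, size)
    # Start from pure static, like an untuned TV screen.
    image = rng.normal(size=(size, size))
    for step in range(steps):
        predicted_noise = image - target            # a real model predicts this
        image = image - predicted_noise / (steps - step)  # strip away a fraction
    return image, target

image, target = toy_denoise()
print(np.abs(image - target).max())  # near zero: the static has been refined into the image
```

The key point the loop captures is that nothing is copied or collaged from existing pictures at generation time; the image emerges by repeatedly removing a bit of the remaining noise.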
There definitely is a whole host of ethical issues around it, of which the "stealing from artists" one is probably one of the lesser problems. The issue there is that the datasets some of the current AIs were trained on included current artists' work, so you can ask it for "a painting of an office block in the style of (insert artist of choice)" and it'll create a brand-new image that looks just as if that artist had painted it, which effectively puts them out of work in terms of being commissioned to create pieces. This is probably a whole new area of copyright law that needs to be set up - that AIs can't be trained on the work of living artists, or have to pay them if so - but that's in the future.
Bigger issues are some of the cultural biases. On the AI I've been playing with, which is a major public demo, I've noticed a few things and tried a few others as tests:
Ask for a woman and 99% of the time you get white people; you have to add "diverse" or other keywords to get anyone non-white.
I tried some news headlines taken from the BBC and Financial Times. One of the BBC ones was about "brutal gangs" - that gave me lots of very heavily armed black men. I tried "brutal gangs of women" and got lots of black women, but minus the weapons and body armour. So not only is it racist, it's sexist too. I tried "police officers" and got all white men in police uniforms. And that was just some cursory testing.
So there are major ethical issues around AI generated content of which the "stolen art" issue is just one part. But nevertheless, technology can't be un-invented, so this is definitely a revolution that is coming, whether we like it or not.
Ah OK, thanks for the info and explanation. Technology certainly seems to be moving on quickly again, and I can see why Elon Musk is increasingly warning about AI and the need to be careful with it...
ChrisBUK said: Technology certainly seems to be moving on quickly again, and I can see why Elon Musk is increasingly warning about AI and the need to be careful with it...
Nah, he just believes in Roko's Basilisk (Pascal's Wager for techbros).
Musk won't sell as many overpowered battery boxes if AI becomes smart enough for us all to use automated Ubers either.
There will still be a space for art. It will likely either become a premium commodity to own something certified as made entirely by human hands, or else the main medium will shift to sculpture or something else that's harder for a machine to generate and create. I imagine the economics of getting a machine to build and paint, say, a Warhammer figure to the same level of detail as a human would be prohibitive.