I hate to inform you all that I will no longer post content here. The reason is there are too many "hall monitors" who do not create any content themselves but instead flag my posts as a violation trying to get them deleted. Do they attempt to reach out to contact me first? No... no they do not. Does UMD reach out before just automatically deleting posts when they get a complaint? No... no they do not.
Sorry guys this is not something I do for any kind of benefit, financial or otherwise. It was all for the love of sharing. I don't care to argue with anyone about why the flags were reported. So don't bother trying to argue. The forums are just catering to the bad apples in the room plain and simple.
12/24/25, 11:36am: This user has repeatedly broken our rules on posting AI images. Each time we've removed content, they were informed with a reason. They also have been speaking directly with Messmaster. Instead of deleting this account for violations, we've kept giving them chance after chance. Since they are offended that they haven't been allowed to keep breaking our rules with impunity, and obviously disagree with our TOS, they are invited to delete their own account.
the real GoOfBaLL said: I hate to inform you all that I will no longer post content here. The reason is there are too many "hall monitors" who do not create any content themselves but instead flag my posts as a violation trying to get them deleted. Do they attempt to reach out to contact me first? No... no they do not. Does UMD reach out before just automatically deleting posts when they get a complaint? No... no they do not.
Sorry guys this is not something I do for any kind of benefit, financial or otherwise. It was all for the love of sharing. I don't care to argue with anyone about why the flags were reported. So don't bother trying to argue. The forums are just catering to the bad apples in the room plain and simple.
I'm going to play devil's advocate here. I've not seen the offending post, but generally there's no smoke without fire. AI policies are rightly in place to prevent future issues with posted content and to reduce liability. I don't have time to check every posted image with a reverse image search, but if someone has, it's generally quite easy to ascertain where the source came from. It's not foolproof; it's going to miss private images and social media, but it's a given that ethically it's wrong. If, though, you think the image was falsely flagged, simply prove it. The datasets used in public models are huge, and it's possible the generated image is based on a public image that can be reverse searched. Provide the prompt and I'm sure it can be reproduced.
As soon as you start using "images" as a starting point, you're moving onto dangerous ground. Hypothetically, you could use an image of a woman wearing a dress when you only want to use the dress in your image. That whole image is going to bias the output. Instead, create reference sheets and use them, i.e. upload the original image and ask the model to create a front and back reference image of the dress/swimsuit etc. Then use the reference sheet to guide any images. I guarantee the output will be better than just using the original image, as you've stripped out all the noise and detail you don't want to include.
That's too bad... I can absolutely understand your point, although not having experienced that. But sometimes I keep asking myself whether I should continue posting things here, as wet AI generated pictures don't seem to get much attention and I sometimes feel like talking to myself.
WF1 said: That's too bad... I can absolutely understand your point, although not having experienced that. But sometimes I keep asking myself whether I should continue posting things here, as wet AI generated pictures don't seem to get much attention and I sometimes feel like talking to myself.
I think with AI, you have a number of factors that result in low engagement.
1) If everyone has access to the same tools, what makes the image special? We've had decent image generators for over two years now, and the more recent ones are absolutely outstanding.
2) AI as a whole just isn't embraced by most users of the site, so you have a significantly smaller base of people viewing the images.
3) Wetlook specifically reduces the number of interested people further. Personally, wetlook just isn't my thing, and I generally avoid the wetlook forums and, by extension, the AI stuff. I'd also argue that wetlook is significantly easier to train a model for, or to use an existing model for, so people are going to do that.
Ultimately, AI creates a specific scene that is dialled in on the creator's specific preferences. Whether that is wet, messy or other. Creating content for likes is probably not going to get the feedback that you want. Create what you like and see what happens.
the real GoOfBaLL said: I hate to inform you all that I will no longer post content here. The reason is there are too many "hall monitors"
Don't blame you. Why don't the vigilantes just stay out of this group? Why do they come here looking for content they don't like?
I have cut way back on my posting in this group and stopped posting altogether in Messy, even when it's about a real woman.
12/24/25, 11:40am: UMD does not allow vigilantes, flags don't stem from an anti-AI bias, and AI creators are not victims of oppression. People who flag content are helping UMD to stay legitimate. If anyone disagrees with the rules then we should have a community discussion. Complaining that the rules are being enforced does not make sense.
I feel your pain in more ways than one. I have been jumped on by certain individuals who think that when I go out in public and hit myself in the face with a Cream Pie.... that I have done something horribly wrong. I didn't ask the people that witnessed it for their permission and I didn't sit them down and tell them that it is a fetish of mine. I find that train of thought ludicrous. I also get off on wearing PVC, Rubber and Leather, but I don't ask every single person in the world for permission to wear it in front of them when I'm out in public. When I get strangers to hit me in the face with Pies in public, I do give them a business card and a rundown about what's going on. I have heard the consent non-consent argument so much it makes my head spin. I am in my fifties and all I can say is I've had my fill of [the word "woke" used here, but we are noting that use of this term is racist and not allowed on UMD] nonsense and crybabies always being butt hurt because someone doesn't think the same way they do. There was one asshole that went crying that when I hit myself in the face with the pie in public..... it's the same thing as flashing people.

Anywho...... on to the AI debate. I recently used AI and had a fun time doing it, and I was able to use it to flesh out some pictures from my past that were taken with a disposable camera back in the 90s. Getting a selfie with a cell phone is a piece of cake, kind of...... getting a good selfie with a disposable camera in the '90s was nearly impossible. You just had to hold it up and hope for the best. So anyway.... the content I posted came from real pictures, but I used AI to flesh out a larger image so that you could see all of the outfit and shoes...... just a larger image.... and I altered the faces of the "Pie Girls" to protect their privacy. Back then they were all super cool about pictures.... maybe some of them wouldn't be so cool finding themselves on the internet now.
Covering my moral bases: I don't make any money doing this, it's something I do for fun. I myself find it offensive that people go running to the "teacher" to tell on someone. I was asked to mark anything that was synthetic, and so I did. It doesn't leave that bad of a taste in my mouth, but what leaves a bad taste in my mouth is the number of crybabies I have constantly encountered here. I just can't imagine somebody who gets so easily offended at the drop of a hat. I realize there have to be rules and things have to be set in place because there has to be structure.... At the same time..... oh my gosh, do you have to run to the teacher and tell on Little Billy just because he didn't use the pencil sharpener the right way? So if you decide to leave this wonderful community, and I mean that with the most sarcasm.... you do what you have to do, but realize these kinds of people are in the fabric of our society and they're in everything.... As much as I hate them, you just have to put up with them. You.... and everyone else reading this forum.... have a good Christmas. Yeah, that's right, I said Christmas and not happy holidays.... Do what makes you happy and remember there's always an asshole around every corner.
ThePieBoy
12/24/25, 11:31am: It's the "crybabies" that keep this site legit and make it possible for it to exist for you to enjoy. Without them, people like ThePieBoy who disregard the rules become the actual threat: not only to our existence, but by muddying our understanding of our rules while spreading cynicism about other community members who are only trying to help.
Also note that the term "woke" is inherently racist and not allowed to be used on UMD.
ThePieBoy said: It doesn't leave that bad of a taste in my mouth, but what leaves a bad taste in my mouth is the number of crybabies I have constantly encountered here. I just can't imagine somebody who gets so easily offended at the drop of a hat. I realize there have to be rules and things have to be set in place because there has to be structure.... At the same time..... oh my gosh, do you have to run to the teacher and tell on Little Billy just because he didn't use the pencil sharpener the right way? So if you decide to leave this wonderful community, and I mean that with the most sarcasm.... you do what you have to do, but realize these kinds of people are in the fabric of our society and they're in everything.... As much as I hate them, you just have to put up with them. You.... and everyone else reading this forum.... have a good Christmas. Yeah, that's right, I said Christmas and not happy holidays.... Do what makes you happy and remember there's always an asshole around every corner.
This is the most hilarious knee-jerk overreaction I've read in a while. It's not about woke reactions or infringing on your rights. Simply put, generative AI opens up a massive can of worms in terms of liabilities that MM quite rightly wants to avoid. Ethics aside, the last thing we need as a community is shit coming back to bite us down the line. The current AI models are massively powerful and can create pretty much anything you can imagine. Why limit that to celebrities and non-consenting people? Now, I don't know what was posted, but I do know that with very little effort I can use NBP with an uploaded photo of an ex and put her in all sorts of situations. If I really wanted to, I could also, with a little more effort, generate a character LoRA with Qwen that is near indistinguishable from reality and really go to town on the scenes. It all falls under the same banner; it's just a question of where we draw the line between what is ethical and what is not.
What I will say is that after two years of being part of this forum, not a single person has created a model of themselves, published it to be used, or even posted AI pictures of themselves. I wonder why that is?
Knee jerk??? Not a reaction, just expressing my opinion. I could care less about your opinion or anyone else's. Community blah blah blah.... You don't know what you're talking about blah blah blah... You have no right to your own opinion blah blah blah. You're an idiot with a knee jerk blah blah blah. It's all Charlie Brown to me.... The teacher, that is.... Womp womp womp blah blah blah wamp blah blah blah. Oh, the lesson that you tried to teach me..... I'm sorry, little man, it fell on deaf ears.... I am, after all, an idiot with just a knee-jerk response. Have a happy day.
It's really straightforward to follow the rules. Don't post realistic likenesses of real people that have been placed in fantasy scenarios without their consent.
What you've been doing is taking a real photo of someone from a shopping website, or even from their workplace's "meet the team" page, then using that with nanobanana to make fantasy messy images which feature that person's likeness.
You do realise that anyone can do a reverse image search and see exactly where the original photo has come from?
Do you also realise that you don't need to do this? You can use text to image generation to make a totally original image that isn't based on an existing photo. You do know that, right?
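Several posts in this thread mention reverse image searching to check whether an "AI" image is really a lightly edited real photo. Here is a minimal sketch of the idea underlying such matching, perceptual hashing, in pure Python. The 4x4 pixel grids are invented toy data for illustration; a real tool would decode actual image files (e.g. with Pillow), downscale them, and use larger hashes.

```python
def average_hash(pixels):
    """pixels: 2-D list of grayscale values (0-255). Returns a bitstring
    with 1 where a pixel is brighter than the image's mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Toy 4x4 "images": the second is the first with slight brightness tweaks,
# the third is unrelated.
original = [[200, 200, 30, 30]] * 2 + [[30, 30, 200, 200]] * 2
edited   = [[190, 210, 40, 20]] * 2 + [[25, 35, 195, 205]] * 2
other    = [[10, 240, 10, 240]] * 4

h_orig, h_edit, h_other = map(average_hash, (original, edited, other))
print(hamming(h_orig, h_edit))   # → 0 (identical hash: same source image)
print(hamming(h_orig, h_other))  # → 8 (far apart: different picture)
```

Because the hash depends only on the pattern of brighter-than-average regions, mild edits, resizing, or recompression barely change it, which is why a derivative image can still be traced back to its source.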
Experience has taught me (and very likely others) that it's almost pointless trying to remonstrate with 'you can't tell me what to do' types on the internet, especially if they are confident there will be no consequences for them. There are a lot of non-sequitur and motte-and-bailey tactics going on here, notably the meaningless catch-all buzzword 'woke' as an attempt to posture as some kind of free thinker or rebel. However, as a last try, I want these people to imagine something that genuinely disgusts them. Now imagine this is someone's fetish. Now imagine they get hold of a picture of YOU and use it to generate multiple images and videos and whatnot of your likeness engaging in something which, to repeat, you find absolutely disgusting and degrading. Are you honestly ok with that?
Oh, and I should add that in several countries, doing this is now explicitly illegal.
the real GoOfBaLL said: I hate to inform you all that I will no longer post content here. The reason is there are too many "hall monitors"
Don't blame you. Why don't the vigilantes just stay out of this group? Why do they come here looking for content they don't like?
I have cut way back on my posting in this group and stopped posting altogether in Messy, even when it's about a real woman.
This reply exactly encapsulates what I am thinking. I have ZERO upside to posting here (except for feeling like someone enjoys the stuff I'm sharing) and only downside.
AI models (ALL OF THEM) train off of your PUBLICLY shared photos. So it's not surprising to me when I get stuff that looks uber-realistic. Facebook, TikTok, Amazon, Reddit... these people SELL your DATA. Shocking!!! And to think there are still people out there who are surprised by this. People will freely post their pictures to Facebook, TikTok, Amazon, whatever corner of the PUBLIC internet, and then they are shocked when someone treats their picture like public property. Building a LoRA using images of someone you know and have permission from is perfectly legal and perfectly ethical, even by the rigorous standard set forth here at UMD. Yet STILL people here have nothing better to do but complain.
I don't set the rules here at UMD. But my personal take is that the people who fought against Napster and publicly shared music were on the losing side of history. It's not like people have copyrighted their images the way the songs were copyrighted, either. It is progress, it will happen... if you fight it you will ultimately lose. Argue all you want about privacy, but the truth is you have no privacy anymore. You don't even have to wait for the future; this has already happened.
And to put a fine point on it, I am not posting AI wam photos to try and prove some kind of ethical point; it is just fun.
There is a huge misunderstanding here about how AI models are trained. As I said previously, the commercial models are trained on tens of millions of images; that is not in dispute. What actually happens in training, though, is that the model learns concepts of people, objects, backgrounds etc. The model learns what the characteristics of a woman are using all available data in the images. It doesn't just create a database to look up against. The ACTUAL likelihood of asking for a blonde woman in her 20s and an image of Taylor Swift appearing is infinitesimally small, even if there are thousands of pictures of her. A random woman that appeared in one image within the dataset.... absolutely not. Now, we know that the model is trained on Taylor Swift and others. It knows who Taylor Swift is as a concept; if you directly prompt for her you will get an image... well, without guardrails, of course. That's why guardrails are implemented post-model to try and prevent celebrities being generated. The point I'm trying to make, though, is that if your prompt is "Woman standing by the pool with blonde hair" you are still highly unlikely to generate a real woman who exists and is recognizable in a reverse search.
Additional risk comes with edit models like Qwen-Edit and Nanobanana Pro. The ability to take any random image and add or remove features muddies the waters immensely. It's now not just the model providing input data; the user also bears responsibility. The argument that the data is out there, so it's a free-for-all, doesn't cut it: it's still non-consensual deepfaking, something many countries now have laws against, even putting aside the fact that you are bringing it into a sexual fetish.
I don't know whether what has been said is true or not, but let's run the hypothetical that someone here takes an image from a corporate site, creates fetish content of that person and then posts it on here. Your IP is logged, along with most likely enough information from purchases or emails to put together a trail. Would you be comfortable enough in your moral and legal stance to defend your position if the content was reported back to the owner?
Honestly, reading your reply, it sounds like your position is that the images are out there, so screw them. Which is a shame, as you can easily create "fake" people at the drop of a hat using sites like https://this-person-does-not-exist.com/en
the real GoOfBaLL said: AI models (ALL OF THEM) train off of your PUBLICLY shared photos. So it's not surprising to me when I get stuff that looks uber-realistic. Facebook, TikTok, Amazon, Reddit... these people SELL your DATA. Shocking!!! And to think there are still people out there who are surprised by this. People will freely post their pictures to Facebook, TikTok, Amazon, whatever corner of the PUBLIC internet, and then they are shocked when someone treats their picture like public property. Building a LoRA using images of someone you know and have permission from is perfectly legal and perfectly ethical, even by the rigorous standard set forth here at UMD. Yet STILL people here have nothing better to do but complain.
You are fundamentally missing the point. Yes, AI models are trained on everyone's images. But they don't contain those images, they contain an understanding of what the concepts in those images represent - they "know" what a woman looks like, or a bridge, or a pickup truck, or an airplane. You ask an AI for a picture of a plane and you don't get back one of the images it was trained on, you get a new plane it's created from all the planes it knows about.
So it was very obvious that many of the images you created were based on images of real women taken from various external sources, and not on AI women you'd had the model create for you. A simple reverse Google search would find the originals. Various users flagged your posts, and as an admin I handled some of those reports and double-checked them by doing my own reverse searches (just to be absolutely sure none of the reports were false; I take my moderation responsibilities very seriously). In each case, I found the original photos on clothing sellers' or other external websites. They were real people, not AI-generated creations, which is against our rules. Hence the threads got deleted.
And the very basic, fundamental difference is that it is completely wrong to put a real person into a fetish situation that they did not do in real life and did not consent to. In anything sexual (and fully clothed WAM still counts as sexual on a WAM fetish website), full informed consent is absolutely everything. No consent, no go; it's that basic.
the real GoOfBaLL said: I don't set the rules here at UMD. But my personal take is that the people who fought against Napster and publicly shared music were on the losing side of history. It's not like people have copyrighted their images the way the songs were copyrighted, either. It is progress, it will happen... if you fight it you will ultimately lose. Argue all you want about privacy, but the truth is you have no privacy anymore. You don't even have to wait for the future; this has already happened.
It has ***nothing*** to do with privacy, and ***everything*** to do with consent and the consensual use of someone's image for sexual purposes. Hire a model, tell her that you want to use her image to generate AI images, check she understands and is OK with that, shoot your seed images and use them as the basis for creating AI ones: all fine. She has given informed consent with full knowledge of what the images are to be used for, you've a signed model release to prove it, knock yourself out.
But DO NOT create synthetic splosh images using a photo of a model downloaded from a fashion house or some random person from social media as the starting point. DO NOT put someone into a fetish situation they never consented to in advance.
Yes, pretty much every swimsuit or bikini model knows that someone somewhere will wank to their photos; that's why many corporate firewalls have an option to block swimsuit-selling websites. But there is a massive and monumental difference between downloading an existing image and fantasising about it, and actually putting that person, without their consent, into a sexual situation and publishing it. It's exactly the same as creating deep-fake porn, just without the nudity; ethically, morally, and increasingly legally, it's exactly the same thing.
messg said: There is a huge misunderstanding here about how AI models are trained. As I said previously, the commercial models are trained on tens of millions of images; that is not in dispute. What actually happens in training, though, is that the model learns concepts of people, objects, backgrounds etc. The model learns what the characteristics of a woman are using all available data in the images. It doesn't just create a database to look up against.
^^^^ THIS!
AI models do not "mash up" existing images from some vast datastore to create new ones. They learn what the concepts behind subjects are and what those look like, and use that knowledge to generate new versions of the subjects. The actual models are surprisingly small, typically a few tens of gigabytes at most: they contain understanding, not image libraries.