So as you probably noticed, the image filters in Bing and Designer were dialed right up again last night. While none of my existing prompts failed the new prompt filter, the image filter is far stricter. Many of my reworked prompts from the last week barely managed to generate an image, even ones that worked last night.
Where does that leave us now? You can still generate some interesting images, but the filtering is going to block many elements it deems unsafe, even relatively tame stuff. You can mitigate some of this by enhancing the prompt with more lighting details and adjusting the poses of the subjects, and yes... certain clothing prompts will now simply return nothing. For example, the success rate for a silk dress is way lower now than a blouse and skirt. Forget about a tied white blouse etc. Overloading the prompt with detail helps confuse the image filter, as it always has, but you are going to need more items in there. The downside is that it takes attention away from the main subjects too.
As always, you will have more success with "Fun" and "Safe" activities like gunge tanks and the like.
As things stand, my enthusiasm for pushing the boundary of the filters is dying. There is only so much you can do without impacting the output, and I'm quite disappointed with what can be achieved at the minute.
Just as well my fav outfit is the blouse and skirt! Maybe there's no toy or rope, but these still look great. I have no doubt you will eventually find a new way around the filters!
messg said: As always, you will have more success with "Fun" and "Safe" activities like gunge tanks and the like.
You'd think, but no. At least not with what I'm doing.
My ChatGPT account is getting me OK results but without Bing's dataset it's not quite the same. I really ought to learn how to get results out of SDXL.
What you're still able to achieve is impressive.
Appreciate that, but the time/effort isn't worth it any more. To get the image filter to pass anything remotely risque, you have to introduce so much noise that the core image quality suffers too much. Interestingly, with my "hope" hat on, I wonder if MS will loosen the filters again now that they seem to be getting to grips with the celebrity stuff. It's never going to be 100%, but it's improving. The Super Bowl is a major driver of this too: MS sponsors the event and Taylor Swift is also the half-time star, so they want to avoid any more controversy before then.
Anyhow... I think SDXL may be the way forward for some of the simpler stuff. Lora training with gunge etc. is pretty good, and wammypinupart's new training is good and would work well in gunge tank/game show scenes. It's unfortunately not there yet for more complex stuff and multi-person scenes.
It doesn't seem that filtered; not sure whether it piggybacks on ChatGPT/Bing etc.?
CivitAI uses Stable Diffusion together with the models and Loras that people upload. It's pretty NSFW-friendly, as it allows people to create their own models etc. The prompt interpretation is not half as good as Bing/Designer and the output quality isn't as high. For non-WAM content you may get some good results, but there isn't a great deal of WAM models available.