Over the last year we have seen a lot of changes with Bing Image Creator, mostly in the way of tightening prompt and image filters. Over the last couple of months it seems that their filtering has kind of evened out to a point, and I haven't been so concerned with my prompt today not working tomorrow. Last night, I was even shocked to discover that a prompt that hasn't worked since January is suddenly working again!
That said, I have been running into some strange issues lately. It used to be a pretty simple game. If you got a prompt block, you had to change your prompt to get around it, and then your image had to pass the image filtering. Once past the image filter, if you got back 4 images, you could run that prompt to the moon and back and get 3 or 4 images every time. If you got 1 image back, you were skirting the line and might get more dogs than images. If all you ever saw were dogs, you probably needed to change your prompt a bit. That's just how it worked, and the rules were simple to follow.
But now it seems that those rules no longer apply, or they apply only some of the time. There are times I can write a prompt and run it and get 4 images, run it again and get 1 image, run it again and get the dog, run it again and get 3 images, run it again and get a prompt block, run it again and get 4 images.... and never change a thing about the prompt I'm running.
Another thing I'm starting to see a lot is the generated images ignoring a lot of my prompt. For instance, I can prompt for a female with a specific appearance, wearing a specific outfit, in a specific setting, covered in a specific way with a specific substance, with a specific image style and lighting. And I will get images that match my prompt. However, I will also get whole bundles of images that may or may not use the female's appearance, keep the outfit, roughly match the setting, completely change my photo style and lighting, and ignore everything else.
I believe they are changing the way the AI rewrites your prompt for better clarity, and it's doing it in a way that removes anything they don't like or anything that might be too close to the filters.
I have actually liked a lot of the images it gave me that didn't follow my prompt. But I'm seeing it more and more, and I'm worried that it's going to get to a point that it just can't do what our community is going to ask it to do anymore. I may be way off on this, though.
Some examples from right now. I may reply later with better examples that I have on a different Bing account. These were all created with my prompt from January that started working again. The clear slime and saturated clothing images are what I'm prompting. The colorful gunge images are a bundle of 4 images produced with the exact same prompt.
It does seem to be looser (or more 'creative') in the way it rewrites the prompt. This will explain why you are getting greater variability in the number of images that pass the image filter, as well as getting images which seem to ignore a section of your prompt.
Sometimes I'll get confetti instead of slime for a couple of generations, then it'll go back to images with slime. I can still generate images I want though.
I've had a similar problem with Bing for a while: I write "waitress" and it's a man. It routinely reverses the perp/victim (I write "woman in a red dress hits woman in a white dress with a pie" and it's the red dress wearer who gets the pie), and, recently, when I was assembling the bad dates, I could not get unhappy people in the beach scenes. I wrote "unhappy" and "solemn" and got big smiles.
If you have high variance in your number of images generated through Bing or Designer I recommend taking your prompt and testing a variant with a very specific and narrow framing (something like close up shot or medium shot).
If you suddenly get better results it probably means your prompt is triggering the sexy filter when the frame is too wide. It also happens a lot when using Designer in portrait format, the same prompt will suddenly work a lot worse.
If not, then it's probably the mood of the scene - you can try giving your prompt to Copilot / GPT and ask it what the emotions of the characters / scene are.
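If you want to be systematic about this, you can generate the framing variants up front and paste them in one by one. A minimal sketch (the framing terms are the ones mentioned above, the base prompt is just a placeholder, and there's no public API here, so the testing itself is still manual):

```python
# Build one copy of a base prompt per framing term, for manual A/B
# testing in Bing Image Creator / Designer. If the narrow framings
# pass and the wide ones don't, the wide frame is likely the trigger.

FRAMINGS = ["close up shot", "medium shot", "wide shot"]

def framing_variants(base_prompt):
    """Return the prompt prefixed with each framing term."""
    return [f"{framing} of {base_prompt}" for framing in FRAMINGS]

variants = framing_variants("a woman in a red dress on a beach, overcast lighting")
for v in variants:
    print(v)
```

Running each variant a few times (rather than once) also helps separate a real filter trigger from the ordinary run-to-run randomness described earlier in the thread.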
kortanklein said: If you have high variance in your number of images generated through Bing or Designer I recommend taking your prompt and testing a variant with a very specific and narrow framing (something like close up shot or medium shot).
If you suddenly get better results it probably means your prompt is triggering the sexy filter when the frame is too wide. It also happens a lot when using Designer in portrait format, the same prompt will suddenly work a lot worse.
If not, then it's probably the mood of the scene - you can try giving your prompt to Copilot / GPT and ask it what the emotions of the characters / scene are.
I mean, I get what you are saying, and I agree with you. I'm just posting about this being a new behavior. I've been using the service every day for almost a year now, and this is only something that has started happening within the last few weeks. Don't get me wrong, I would rather get images ignoring half or more of my prompt than that damn dog lol.