With all the synthetic pictures doing the rounds (which do nothing for me, since it's the person behind the dirt that's special), I've tried to make my pictures look as true to life as they can be.
I've found that the camera automatically adds a ton of contrast and saturation on its default settings, so I've been fighting to dial these back.
But that has made me wonder: does a picture look better when it looks like it did in real life, or does the synthetic exaggeration applied by the camera actually add something? In a sense, every picture is at least a little synthetic.
Examples: the first picture is straight off the camera on default settings, whereas the second is closer to what it actually looked like in real life.
What kind of camera are you using? Smartphones have to use all kinds of artificial sharpening and saturation and other computational photography goodies to make up for the fact that the actual part taking the picture is super tiny so that it can fit in the phone.
Personally, I like a good "realistic" image, but some processing will usually have to be done, especially if the lighting isn't great. Otherwise, everything can just look dark and flat and one-note and very...meh. Between the first and second one, I prefer the first processed picture since it's more visually interesting. It may not be what it looked like in "real life" but you probably didn't care about contrast or saturation or visual interest during your real-life session. But I, an outside observer seeing this for the first time, appreciate a little effort to "dress up" your experience.
It's a Sony ZV-E10 that I've pimped out a little. I noticed the same on the GoPro: the standard picture profile was awash with contrast.
I think there might be a difference between photography and videography - with a greater level of processing acceptable in the former.
I should have processed the same picture rather than finding two similar but not identical samples - the first one has a 3:2 aspect ratio and crop applied. Does this picture look better for having the same treatment (with no change to colour, contrast, etc.)?
Well, first things first: they're really good, fun pictures regardless. Any suggestion I or anyone else might have would only be about boosting them a little, and even then it's only a matter of personal opinion.
So, in my personal opinion, there are small tweaks you could make: lift the overall exposure just a little (it looks a bit dark to me), then brighten the highlights slightly so that your face/head stands out from the rest of your body/the tub. To get back to your original point: it still makes for a realistic picture. The chocolate still looks like chocolate, you still look like you, and we're not doing anything wild with the hues/saturation/luminance. Just slight little things to give the eyes something fun to look at. Because a camera can take a great picture, but it doesn't know what's interesting to see, which is why human intervention will always be necessary, even for completely AI-generated images.
BTW, if you're not shooting raw, you should. You'll have a lot more latitude to make those tweaks without breaking up the image.
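For what it's worth, if you ever wanted to script those little tweaks rather than doing them in an editor, here's a rough sketch of the idea in Python with Pillow. The file names and adjustment amounts are just placeholders, and it assumes you're starting from an exported JPEG rather than a raw file:

```python
from PIL import Image, ImageEnhance

# Placeholder file name - swap in your own export.
img = Image.open("capture.jpg")

# Lift the overall exposure slightly, then add a touch of contrast.
# A factor of 1.0 means "no change"; these small nudges are purely illustrative.
img = ImageEnhance.Brightness(img).enhance(1.08)
img = ImageEnhance.Contrast(img).enhance(1.05)

img.save("capture_tweaked.jpg", quality=95)
```

Same principle as the sliders in any editor, just subtle and repeatable.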
All good points. Actually, from a photography standpoint I'm doing the unthinkable: I'm shooting XAVC S 4K H.264 at 25fps and 60Mbps, and these are both frame captures rather than photographs.
So they're optimised for video and max out at 3840 x 2160 (4K) pixels, roughly 8.3 megapixels. If I were taking stills they'd be 6000 x 4000 pixels, a full 24 megapixels!
Lots of compromises are needed to record video, like giving up shooting in RAW.
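In case it helps anyone, a frame grab like these can also be pulled straight out of the clip with ffmpeg; a minimal sketch in Python below (the clip name and timestamp are made up, and it assumes ffmpeg is installed and on your PATH):

```python
import subprocess

# Grab a single frame from a 4K clip as a lossless PNG for later tweaking.
# "clip.mp4" and the timestamp are placeholders.
subprocess.run([
    "ffmpeg",
    "-ss", "00:01:23",    # seek to the moment you want
    "-i", "clip.mp4",     # source video
    "-frames:v", "1",     # extract exactly one frame
    "capture.png",
], check=True)
```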
People forget one important fact. Our eyes are active, constantly adjusting to what we look at. A photograph is static, so our eyes can only see the final processed image, hopefully in a way that has captured the essence of the scene.
Here's what I mean. Suppose we see a model lying in the mud on a sunny day. Part of their body is in the sun and part in the shade. Some of the mud is dark; other areas are glistening in the sun.
When we look at such a scene, our eyes move from area to area, and as they do, adjustments to focus, brightness, contrast and color are made automatically in a fraction of a second. If we look at a dark part of what is before us, our pupils dilate slightly to let more light in; when we then look at a bright area where the sun is making wet mud glisten and reflect light, our pupils shrink, instantly lowering the brightness and adjusting the contrast, and we never even have to think about it.
So to answer the question: a 'natural' photo only captures the bare necessities of the scene, whereas editing it can boost contrast, correct color, sharpen blurry shots and do much more. We are only looking at a flat, two-dimensional image, so we don't get the benefit of all the features our eyes provide in a real, live setting. But if the photo comes closer to showing us that range of brightness levels and colors, it will look more real than an unprocessed one.
I think my main contention is that the clay doesn't look anywhere near as dark or as red in real life when it's being used, so the second picture feels like a better representation of what it actually looked like to the individual (me) or to any onlookers who happened to be watching at the time (of which there were none in this case).
I'm beginning to believe that folk expect more true-to-life processing in videos than they do in pictures, as there are lots of posts about too much saturation and contrast making a video look ugly. This took me by surprise, as I've historically balanced photographs across the full spectrum from black to white to give the broadest range of contrast... until I started taking video and researched ways to improve the quality. Now I'm just confused.