Moderators, this is a question for the main body of users, which is why it's posted in the messy forum.
Resolution has nothing to do with the size of the video on your screen; it's about how clear and crisp the video is. This is a question all producers have to ask themselves: quality vs convenience. 4K video is HUGE, and Dirty Muse shoots in 4K. For example, the video I just shot of Vika was 23 gigabytes. This is also an issue with the UMD, as the maximum file size you can upload is 1.5 gigs, so what generally happens is that when a producer outputs the file you buy, it's reduced in size and sometimes resolution. You can (and I do) break the video into 2 or 3 files.
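Just to put rough numbers on that trade-off, here's a quick back-of-the-envelope sketch (the audio bitrate and the example runtimes are my own assumptions) of the highest average video bitrate that keeps one file under the 1.5 gig cap. A 23 gig 4K master is roughly fifteen times over that cap, so something has to give.

```python
# Rough bitrate budget: what fits under UMD's 1.5 GB upload cap?
# Assumption: file size ~= (video bitrate + audio bitrate) * duration.
GIB = 1024 ** 3          # bytes in a gibibyte
CAP_BYTES = 1.5 * GIB    # UMD per-file limit
AUDIO_MBPS = 0.192       # assumed stereo AAC at 192 kbps

def max_video_mbps(minutes: float) -> float:
    """Highest average video bitrate (Mbps) that keeps the file under the cap."""
    seconds = minutes * 60
    total_mbps = (CAP_BYTES * 8) / seconds / 1_000_000
    return total_mbps - AUDIO_MBPS

for minutes in (20, 25, 40):
    print(f"{minutes} min -> about {max_video_mbps(minutes):.1f} Mbps of video")
```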
From a customer perspective, here are some examples of what that looks like:
I think anyone can see the difference. Cassidy's video was far too big to upload to the UMD, so I had to cut it quite a bit and break it into 2 files; some will have to be 3 files. The question for the users of the UMD is: do you care?
Anyone who has bought Dirty Muse videos knows we really care about quality: we hire the best-looking models, we shoot in 4K, and we care about lighting and editing. I am never going to stick my dick in a jam jar, video it with my phone and try and sell it!!! So this is an important question; it's a lot more hassle working with 4K. You have to have the computer from hell to be able to load 23 gig files in an editor. So shooting in 2K would be a lot easier.
Phil
My solution to this is that while we shoot and edit in 4K (and have done for the last few years), we output the edited file in Full HD (1080) instead. The vast majority of customers these days watch on their phones or tablets, so files in the same resolution as the screens at our local multiplex really would be overkill. Of course, screen resolutions and storage capacities constantly advance, so in ten years' time, when everyone has terabit broadband to their quantum-processor devices and the UMD file size limit is 150 gig instead of 1.5, we can remaster and re-release everything in 4K straight off the original masters, no re-edits needed.
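For anyone curious what that 4K-to-1080 output step looks like outside an editor, here's a minimal sketch driving ffmpeg from Python; the filename and the 6 Mbps target are placeholder assumptions, not anyone's actual Vegas settings.

```python
# Minimal sketch: downscale a 4K master to 1080p for upload.
# Assumes ffmpeg is installed; "master_4k.mp4" and the 6 Mbps target
# are placeholder values, not anyone's actual workflow.
import subprocess

SRC = "master_4k.mp4"
OUT = "upload_1080p.mp4"

subprocess.run([
    "ffmpeg", "-y",
    "-i", SRC,
    "-vf", "scale=-2:1080",      # scale to 1080 lines, keep aspect ratio
    "-c:v", "libx264",
    "-b:v", "6M",                # average video bitrate target
    "-c:a", "aac", "-b:a", "192k",
    OUT,
], check=True)
```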
Also worth noting: while splitting a file at an actual logical boundary, like having the wash-off separate from the main messy section, or even splitting the messy section into "clothed section" and "nude section", or "model 1 gets messed up by model 2" and "model 2 gets her revenge", is allowed (and can help customers by letting them know exactly how long each section is), just splitting it to get round the file size limits is a breach of the ToS.
Generally I can get 25-odd minutes into a 1.5 gig file at 1080p at 6mbps, or 20-ish at 8mbps, using Magix Vegas two-pass compression to mp4.
And yeah, you're right about the computer and storage needed. My Mission Control PC is an i7 with 32 gig of ram and 17TB of hard disk on board. Those 4k files are **LARGE**.
Personally, I'd be fine with paying an extra fee for the option of downloading 4k in exchange for the extra bandwidth and storage requirements. The flexibility of having additional options could provide something for everyone.
I don't need massive res. If I am watching on a phone or storing on my hard drive then a smaller file size is definitely preferred so it doesn't clog up memory. I would be put off by a large file size or split video.
I prefer my WAM delivered over top of me by a beautiful Jayce Lane
I don't need 4K or 8K; I think 1080 for the most part is fine if we're seeing the detail we're looking for. I prefer a single video, but I guess it also depends on how long it is. I know you have broken things up in the past due to size constraints.
I think a good middle-of-the-road option is to offer both. Have a 1080 version for a lower price, and break up the higher-res ones. Be very clear about what each part shows from beginning to end. This way, if you have someone that doesn't particularly like nudity, they can focus in on, say, part 1 and maybe pass up part two. I think bundles are a good idea for high-res videos that need to be broken up too.
The thing to remember is that *static* resolution is not the only factor relating to image quality. Frame rate plays just as much of a role in *dynamic* resolution, as does the use of progressive scanning rather than interlacing, to avoid the spatio-temporal aliasing that has afflicted interlaced images since the 1930s. At least UHD can only be captured in progressive mode, thank goodness! (Other factors that will come into play in the next decade include WCG (wide colour gamut), HDR (high dynamic range) and NGA (next-gen audio); however, most of those are yet to be rolled out.)
BBC Research & Development (White Paper 092) concluded back in 2004 that, in the UK, the *average*-sized room couldn't conveniently accommodate a display likely to be large enough for most viewers to resolve the difference between even 720 and 1080 lines, let alone 2160 lines (UHD-1 tier 1 and DCI '4K'), unless they were sitting closer than intended. (Standard definition was designed for viewing at 5H, HD at 3H, and UHD-1 tier 1 (often incorrectly referred to as '4K', which is DCI, not TV) at 1.5H, where H is the picture height.) As you get progressively closer, you end up 'watching tennis' on the screen while missing things on the other side of the frame, leading to viewer fatigue.
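To put numbers on those picture-height multiples, here's a quick calculation for an illustrative 55-inch 16:9 panel (the screen size is just an example, echoing a later post in this thread):

```python
# Design viewing distances from the picture-height multiples quoted above:
# SD ~5H, HD ~3H, UHD-1 tier 1 ~1.5H, where H is the picture height.
# Assumption: a 16:9 panel, so height = diagonal * 9 / sqrt(16^2 + 9^2).
import math

DIAGONAL_IN = 55.0
HEIGHT_IN = DIAGONAL_IN * 9 / math.hypot(16, 9)   # roughly 27 inches

for label, multiple in (("SD (576/480 lines)", 5.0),
                        ("HD (1080 lines)", 3.0),
                        ("UHD-1 tier 1 (2160 lines)", 1.5)):
    metres = HEIGHT_IN * multiple * 0.0254
    print(f"{label}: about {metres:.1f} m from a 55-inch screen")
```

In other words, to get any benefit from 2160 lines on a 55-inch panel you'd need to be sitting roughly a metre away, which is exactly the white paper's point about average-sized rooms.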
UHD is good for immersive gaming and simulations, where you need to be that close, but not so great for normal TV viewing, as you need to be further back to see the full frame, at which point the resolving ability of the eye is insufficient, so you're paying for all those extra pixels, huge storage and fast computing that you can't actually see at the director's intended viewing distance! There's also the problem that, depending on the encoding, being artificially too close merely results in being able to see the compression artefacts that would remain invisible if viewing from a more sensible distance. (Nonetheless, HEVC/H265 is pretty good when viewed at a normal distance.) As David Wood of the EBU used to say, "We don't need *more* pixels, we need *better* pixels!"
If images can be *captured* at 2160p/50 rather than p/25 throughout most of the world (or at p/59.94 for USA/Pacific rim/Japan markets) then the down-converted image quality of even 720 lines is surprisingly good, though there's little point in taking it down that far, these days.
1080p/50, 1080p/25 or even 1080i/25 (discouraged) and 720p/50 are all easily downconverted from 2160p/50 without introducing obvious downconversion artefacts, since they are all direct submultiples of the captured frame rate and/or resolution. However, if space/speed/cost issues are in play, there's still little reason to go above 1080p/50 even for original capture, if there's no intention to emit at 2160p/50 or 2160p/25, as all of the above-mentioned emission standards can still be derived as submultiples of 1080p/50.
However, one (normally relatively minor) potential advantage of capturing at 2160 lines is if you're using a single-sensor camera (e.g. a DSLR) rather than a proper broadcast 3-chip unit with a trichroic splitter block. Since the lenses and sensor in a single-chip camera have to be able to capture a much higher resolution even than UHD when used as a still-image camera, the lower-resolution capture of video can result in spatial aliasing when fine-patterned images are accidentally captured in focus. Capturing at a higher resolution to start with should help to reduce this phenomenon, to some extent.
Aliasing creates two problems: 1) if the camera (or the subject with the fine pattern) moves, the alias pattern will move in the opposite direction, making it extremely obvious; and 2) the high frequencies being captured are difficult for the compression encoding to deal with, since it relies on the normal assumption that Fourier transformation will result in very few high-frequency coefficients (i.e. fine detail), which can consequently be approximated to zero. Lots of HF detail means those coefficients have to be encoded rather than jettisoned as part of the compression process, so you can end up not only with annoying patterns moving about visibly on screen but also with larger file sizes that are more difficult to emit.
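A toy numerical illustration of that point (synthetic test images and an arbitrary threshold; this is not how a real encoder works, just the same underlying idea): a fine checkerboard leaves far more significant transform coefficients than smooth content.

```python
# Toy illustration: fine patterns leave many significant transform
# coefficients, smooth content leaves very few (so it compresses better).
import numpy as np
from scipy.fft import dctn

N = 64
smooth = np.tile(np.linspace(0, 255, N), (N, 1))         # gentle gradient
checker = 255 * (np.indices((N, N)).sum(axis=0) % 2)     # 1-pixel checkerboard

def significant_coeffs(block, threshold=1.0):
    """Count 2-D DCT coefficients that can't be rounded away to zero."""
    coeffs = dctn(block, norm="ortho")
    return int(np.sum(np.abs(coeffs) > threshold))

print("smooth gradient  :", significant_coeffs(smooth), "of", N * N)
print("fine checkerboard:", significant_coeffs(checker), "of", N * N)
```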
By contrast, proper broadcast cameras have 3 sensors (one for each primary colour, RGB) with co-sited pixels (so there's no need to de-Bayer a single sensor array) and are fitted with an optical low-pass filter which restricts the incoming resolution to prevent the creation of alias images in the sampled data. They can do this because they only ever get used for video, not stills, so they don't need to pass the higher-frequency (i.e. fine detail) images through their optics that stills cameras need.
Hence, if you do capture with a single-sensor camera (e.g. a DSLR) at 2160 lines, then downconvert, the conversion software (better done in hardware, such as Snell & Wilcox units (as was), but massively expensive) *should* digitally low-pass filter the signals to prevent the creation of aliasing artefacts in the downconverted image. However, that's likely to be a function of what you pay for the downconversion software! It's often easy to spot hideously flashing or jagged diagonal lines on certain programmes where insufficient attention has been paid to such issues, on TV, these days. (Standards have fallen terribly in the last 20 years as people who don't understand the technical issues have been taken on from arts-based 'film schools' and the like and go blundering through, blissfully unaware of the havoc they're creating in the signals and, consequently, on screen.)
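Here's a minimal sketch of that 'low-pass filter before downconverting' idea applied to a single frame grab with Pillow; the filename is hypothetical and a Gaussian blur is only a stand-in for a properly designed anti-alias filter.

```python
# Toy demo: naive decimation vs low-pass-filtered downscaling of one frame.
# "frame_2160.png" is a placeholder filename; the blur radius is a guess,
# not a properly designed anti-alias filter.
from PIL import Image, ImageFilter

frame = Image.open("frame_2160.png")          # a 3840x2160 frame grab
target = (1920, 1080)

# 1) Naive nearest-neighbour decimation: fine patterns will alias.
aliased = frame.resize(target, Image.NEAREST)

# 2) Low-pass first, then decimate: detail above the new Nyquist limit
#    is removed before sampling, so the alias patterns never appear.
filtered = frame.filter(ImageFilter.GaussianBlur(radius=1.0)).resize(
    target, Image.NEAREST)

aliased.save("1080_aliased.png")
filtered.save("1080_filtered.png")
```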
Alan Roberts (ex BBC R&D) did loads of work on this sort of stuff, resulting in EBU tech 3335, which feeds into EBU R118, for those who want to delve into how these factors should be considered, in detail.
I sell both 4K and full HD versions of the same video on Xcream. The 4K version is priced 100 yen higher than the full HD one. The 4K version accounts for about 60 to 70 percent of sales by volume. But 100 yen may be too small a gap to differentiate.
I have some conflicting data, though. A short while ago I sent a questionnaire to my customers about what they place emphasis on when purchasing videos. It was a check-all-that-apply question, but the percentage of respondents who checked "video quality" was only 18.5 percent.
By the way, the Xcream system works just like YouTube. Producers upload high-quality videos to the Xcream server, and Xcream encodes the videos. If a producer uploads 4K video, the server encodes it to 4K, 1080p, 720p and 480p at moderate bitrates. Producers can then offer multiple prices based on the upper limit of image quality, just like Netflix. If consumers buy the 4K version, they can stream or download (only if the producer permits it) the video at 4K, 1080p, 720p or 480p. If they buy the 1080p version, they can get 1080p, 720p or 480p. Producers can set the price gap as they choose.
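For producers wondering what such an encoding ladder involves, here's a rough sketch that renders several rungs from one master with ffmpeg; the rung heights and bitrates are my own guesses, not Xcream's actual settings.

```python
# Sketch of an encoding ladder: one 4K master in, several rungs out.
# The bitrates below are placeholder guesses, not Xcream's real settings.
import subprocess

MASTER = "master_2160p.mp4"
LADDER = [  # (height, video bitrate)
    (2160, "16M"),
    (1080, "6M"),
    (720,  "3M"),
    (480,  "1.5M"),
]

for height, bitrate in LADDER:
    subprocess.run([
        "ffmpeg", "-y", "-i", MASTER,
        "-vf", f"scale=-2:{height}",
        "-c:v", "libx264", "-b:v", bitrate,
        "-c:a", "aac", "-b:a", "128k",
        f"out_{height}p.mp4",
    ], check=True)
```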
Note: for those with their own websites and coding abilities, UMD already has a mechanism by which you can offer larger files than UMD's limits, using the Bonus Content System that MM added a few years back.
Where a scene has Bonus Content, once someone's bought something they get an extra link that connects them out to your site with a bunch of parameters included. Your code then checks back with UMD that the IP you're seeing matches the one that was browsing UMD, plus a bunch of other checks, and if all is well, you reveal to the user your own link page / directory for that scene, where they can then download the bigger files directly from your server.
This enables you to provide files of any size - UMD does the transaction processing but your own system delivers the files to the customer, after verifying with UMD that they are the correct user who bought the content.
We have some scenes with enormous (up to 10 gig) photosets; for those, on UMD itself there's a sub-1-gig "bonus content samples" file, and then you use the links to get the actual big photoset.
This scene is one example, all the videos plus the sample zip are on UMD, but the full 2.5 gig photoset in three files is served directly by the Saturation Hall webserver: https://umd.net/download_info/maude-the-messy-manager
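For anyone wondering what the producer-side half of that Bonus Content flow might look like, here's a very rough sketch in Python/Flask; the parameter names, the verification URL and the response format are all invented for illustration, since the real API details aren't spelled out in this thread.

```python
# Rough sketch of a Bonus Content landing handler. Everything specific to
# UMD here (parameter names, verification URL, response format) is
# hypothetical; consult the real Bonus Content docs for the actual API.
from flask import Flask, request, abort
import requests

app = Flask(__name__)
UMD_VERIFY_URL = "https://umd.example/api/verify_bonus"   # made-up endpoint

@app.route("/bonus/<scene_id>")
def bonus_landing(scene_id):
    token = request.args.get("token")            # hypothetical parameter
    if not token:
        abort(403)

    # Check back with UMD that this token belongs to a real purchase and
    # that the visitor's IP matches the one that was browsing UMD.
    resp = requests.get(UMD_VERIFY_URL, params={
        "scene": scene_id,
        "token": token,
        "ip": request.remote_addr,
    }, timeout=10)
    if resp.status_code != 200 or resp.json().get("valid") is not True:
        abort(403)

    # All good: reveal the private download directory for this scene.
    return f'<a href="/files/{scene_id}/">Download your bonus files</a>'
```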
So for the sick fucks who want a video of me fucking a jam jar, watch out you may get your wish LOL
So this is super interesting, almost an even split; sounds like I should offer a high-res and a lower-res version. It's not a huge deal to render out two versions of a video. It would be nice if Mess Master could chime in on this idea; I mean, how large will the UMD servers have to get if producers do this?
On a personal note, I will often watch my smut on my 55" TV, so the better the resolution, the better the picture; I just put the vid on a USB stick, and most TVs these days have a USB port.
I will continue filming in 4K and just res it down for the UMD; that way the original RAW footage will stand the test of time. What do you all think? Just res my stuff down a bit more, or offer 2 download resolutions?
DungeonMaster1 said: Generally I can get 25-odd minutes into a 1.5 gig file at 1080p at 6mbps, or 20-ish at 8mbps, using Magix Vegas two-pass compression to mp4.
DungeonMaster1 seems to be the only user that has mentioned the bitrate at which a video file has been encoded. This overlooked attribute has a lot to do with picture quality.
I have looked at some of the video files that I have downloaded from free sites as well as WAM producer sites. I have found that a 720p file with a bitrate around 5 mbps looked much sharper than another 1080p file with a bitrate around 1 mbps.
I note that Jayce and MessyGirl (two of my favourite producers) have both gone to higher bitrates in their recent videos. This has, in my opinion, made for increased picture quality.
For me, it is the combination of resolution (vertical pixel count) and bitrate that determines the quality. I see no real need for 2160p resolution for WAM videos; 1080p with a bitrate of at least 5 or 6 mbps produces a high-quality picture, and bitrates in the 8 to 10 mbps range are even better.
As DungeonMaster1 has indicated, higher bitrates at a given resolution result in larger files, but in my opinion, picture quality should be the priority.
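That 720p-versus-1080p observation is easy to sanity-check with a quick bits-per-pixel calculation (assuming 25 fps, since the frame rate isn't stated; the bitrates are the ones from the post above):

```python
# Bits per pixel for the two files compared above (frame rate assumed 25 fps).
CASES = [
    ("720p at 5 Mbps",  1280 * 720,  5_000_000),
    ("1080p at 1 Mbps", 1920 * 1080, 1_000_000),
]
FPS = 25  # assumption; the posts don't state the frame rate

for label, pixels, bitrate in CASES:
    bpp = bitrate / (pixels * FPS)
    print(f"{label}: {bpp:.3f} bits per pixel per frame")
# The 720p file has roughly ten times the bit budget per pixel,
# which is why it looks sharper despite the lower resolution.
```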
Richard6 said: DungeonMaster1 seems to be the only user that has mentioned the bitrate at which a video file has been encoded. This overlooked attribute has a lot to do with picture quality.
It might have got a bit lost in the mini 'essay' up there(!) but I hinted at this w.r.t. the encoder. The bit rate will depend on which encoding standard you're using, as well as the material itself. H265 will result in a lower bit-rate than H264 for a given image quality. Alternatively, it can provide a better image quality for the same bit-rate.
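If anyone wants to see that H264/H265 trade-off on their own material, a quick sketch like this encodes the same clip both ways for a size comparison; the filename is a placeholder, and CRF values aren't strictly equivalent between the two encoders, so treat it as a rough comparison only.

```python
# Encode the same clip with H.264 and H.265 at the same CRF, then compare
# sizes. "test_clip.mp4" and CRF 23 are illustrative assumptions.
import os
import subprocess

SRC = "test_clip.mp4"

for codec, out in (("libx264", "clip_h264.mp4"), ("libx265", "clip_h265.mp4")):
    subprocess.run([
        "ffmpeg", "-y", "-i", SRC,
        "-c:v", codec, "-crf", "23",
        "-c:a", "copy",
        out,
    ], check=True)
    print(out, os.path.getsize(out) // 1_000_000, "MB")
```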
So long as care is taken to avoid capturing fine-pattern details (which result in being unable to round the high-frequency coefficients in the compression transform to zero), it should be possible to achieve a reasonable bit-rate. Progressive scanning rather than interlaced also results in more efficient encoding.
Avoiding those fine check patterns in focus also avoids the accidental creation of visible alias patterns on screen, meaning that pretty much the only reason for capturing in 2160p (then downconverting with low-pass filtering) has gone as well.
Therefore, I'd say 1080p/50 for most of the world (or 1080p/59.94 for USA/Japan/Pacific rim) is pretty decent for most purposes, since the higher frame rate provides better perceived image quality than 2160p/25 (2160p/29.97), which is what most affordable cameras can manage. Just avoid those fine pattern details!