This year's videos are all actually shot in 4K (future-proofing), but at the moment we don't have the means to reliably deliver them in a format higher than 720p.
Our videos are still of fairly good quality, same as most producers here.
Thanks for the feedback and yes, she will get the full treatment this year.
Just putting my engineering hat on for a moment: unless you could shoot in 1080p/50 (which, until recently, was prohibitively expensive), you'd actually have been better off shooting in 720p/50 than in 1080i/25 or 1080p/25, so it was a good choice. The snag is that consumers don't always have their equipment set up to provide the optimum experience.
The perception of *dynamic* resolution is a function not only of the *static* resolution (720 vs 1080) but also of the frame rate (25fps vs 50fps).
Increasing the resolution without increasing the frame rate merely reveals more of the unwanted artefacts, so some compromise is inevitable. Since (until recently) emission of 1080 images was limited to i/25 or p/25, you were stuck either with the problem of trying to represent motion captured at two different instants in time (interlaced) in the same frame, then viewed on a natively progressively-scanned device (i.e. anything other than an old glass CRT), or with the jerkiness of slow frame rates. Either way compromises the "optical flow", as it's known in the trade.
Until 4K cameras became more affordable and made 1080p/50 shooting feasible at sensible cost, the best bet was to achieve the highest frame rate possible on a progressively scanned image (hence 720p/50). With H.265 (HEVC) bit-rate reduction now becoming more widely available, that can be increased to 1080p/50, which is an improvement.
Having said all that, BBC White Paper WHP092 (which contributed to EBU Recommendation R112) back in 2004 proved that the visual acuity of the eye is such that, unless the viewer is sitting artificially close to the screen, we are actually unable to resolve an increase in static resolution from 720 to 1080 lines in an average-sized room: we simply can't get far enough away to allow the increase in screen size that would be needed to see the difference. Or, to put it another way, if you can see the difference then you're sitting too close.
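Out of interest, here's a little back-of-envelope sketch of that geometry (my own illustration, not lifted from WHP092; it assumes roughly 20/20 acuity of one arcminute per pixel and a 16:9 panel):

import math

# One arcminute in radians: the finest detail a ~20/20 eye resolves.
ACUITY_RAD = math.radians(1 / 60)

def max_resolving_distance_in(diagonal_in, lines, aspect=16 / 9):
    """Farthest distance (inches) at which one pixel still subtends
    one arcminute, i.e. the pixel grid remains resolvable."""
    height = diagonal_in / math.sqrt(1 + aspect ** 2)  # screen height
    pitch = height / lines                             # one pixel's height
    return pitch / math.tan(ACUITY_RAD)

for lines in (720, 1080, 2160):
    d = max_resolving_distance_in(42, lines)
    height = 42 / math.sqrt(1 + (16 / 9) ** 2)
    print(f"{lines} lines on a 42in set: ~{d * 0.0254:.1f} m ({d / height:.1f}H)")

On those assumptions, a 42-inch set stops revealing 1080-line detail beyond roughly 1.7m (about 3 screen heights) and 720-line detail beyond about 2.5m (about 5 screen heights), which is where the oft-quoted 3-5H viewing distances come from.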
Bearing in mind that 4K is, strictly speaking, 2160p (the "4K" is merely marketing bollocks: they're now quoting the pixel count across the screen instead of the count down it, and the latter had always been the measure), the move to 4K is, frankly, an exercise in screen manufacturers finding another excuse to con the public into buying the latest fad. It actually offers no visible improvement, especially if the frame rate is not increased to at least 100fps, for the reasons mentioned previously.
The snag, however, is that not only do you have many more pixels to record and transmit, but you need to do so several times faster as well to accommodate the increased frame rate, which means a massive increase in data rate and storage size. (So you're probably better off just shooting in 1080p/50.)
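To put rough numbers on that (a quick sketch of my own, assuming 10-bit 4:2:2 sampling at about 20 bits per pixel and counting active pixels only, blanking ignored):

def raw_gbps(width, height, fps, bits_per_pixel=20):
    # Raw uncompressed payload in gigabits per second.
    return width * height * fps * bits_per_pixel / 1e9

for name, fmt in {
    "1080p/50": (1920, 1080, 50),
    "2160p/50": (3840, 2160, 50),
    "2160p/100": (3840, 2160, 100),
}.items():
    print(f"{name}: ~{raw_gbps(*fmt):.1f} Gb/s raw")

On those assumptions, 2160p/100 carries eight times the raw payload of 1080p/50 before any compression even starts, hence the storage and delivery headache.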
The only time that 4K will really 'come into its own' is for 'immersive environments' where you want to feel as though you're part of the action, though the filming has to be designed to enable that, as does the viewing environment.
I think, when it comes to sploshing and WAM, the best way to be in an immersive environment is to join the girls in a bath of chocolate!
leonmoomin said: This year's videos are all actually shot in 4K (future-proofing), but at the moment we don't have the means to reliably deliver them in a format higher than 720p.
Our videos are still of fairly good quality, same as most producers here.
Thanks for the feedback and yes, she will get the full treatment this year.
Thank you for your response.
I notice the bit-rate is actually quite high in your videos, but macroblocking artifacts still exist. For example, when you're looking at Nadia's tights, all the detail is blurred out and the picture is full of macroblocking.
I believe this is down to the use of an older, less efficient codec (WMV), probably because the video editor you use outputs in WMV.
I've seen 1080p encodes at lower bit-rates, made with the widely acclaimed x264 encoder, that are pretty much free from artifacts and in which perceived detail is not far off Blu-ray. I see that your average bit-rate is about 4 Mbps. At that bit-rate you should be able to produce a high-quality 1080p encode free from artifacts. The downside, of course, is longer encoding times.
Would it be possible to contact you by e-mail, as a customer, to discuss your export and encoding process and see if I can help find a way to significantly improve video quality while reducing the file sizes you deliver?
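For illustration, this is the sort of thing I mean; a minimal sketch only, assuming ffmpeg built with libx264 is on the PATH, and with placeholder filenames (not your actual workflow):

import subprocess

# Constant-quality (CRF) H.264 encode: quality is held roughly constant
# and the bit-rate floats. Slower presets buy compression efficiency at
# the cost of encoding time. Filenames here are placeholders.
subprocess.run([
    "ffmpeg", "-i", "master.mov",
    "-c:v", "libx264", "-preset", "slow", "-crf", "20",
    "-c:a", "aac", "-b:a", "160k",
    "delivery_1080p.mp4",
], check=True)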
Lizzie Claymore said: Just putting my engineering hat on for a moment: unless you could shoot in 1080p/50 (which, until recently, was prohibitively expensive), you'd actually have been better off shooting in 720p/50 than in 1080i/25 or 1080p/25, so it was a good choice. [...] I think, when it comes to sploshing and WAM, the best way to be in an immersive environment is to join the girls in a bath of chocolate!
Of course, the best and most immersive experience would be to join the girls in a bath of chocolate, but unfortunately that's an impossible dream for me.
I don't expect delivery in 4K, as most people's hardware here wouldn't be capable of decoding the video, and it would require Leon to host more than one version, which would be a pain.
Actually, I'd disagree about 1080i/50-60 vs 720p/50-60, although interlacing is admittedly an absolute menace that should be abandoned ASAP. Especially in static scenes, even on a small laptop screen you really can tell the difference between 1080p and 720p.
I think researchers underestimate the capabilities of the human eye. They used to say you couldn't tell 720p apart from 1080p on a 42-inch screen; now that all TVs sold at that size are at least 1080p, I think most people will doubt that's true.
You mentioned a white paper proving the limits of visual acuity; I wanted to point out that in science there is no proof, only theory backed by evidence.
Modern de-interlacers are highly impressive, although the result will never be as good as native progressive. In any case, Leon's videos are all shot progressively, probably in 25p.
Doubling the frame rate or the resolution doesn't necessarily double the bit-rate required in either case, because of the way video compression works. x265 actually offers little to no improvement over x264 at HD resolutions.
I accept 4K is still a luxury at this point in time, although I'd argue that even at 24p the difference is still perceivable at reasonable viewing distances, even for 8-bit video without the "WCG" and "HDR" marketing extras.
I just think much higher-quality 1080p can be delivered in smaller files that are easier to distribute and can be easily decoded by all modern hardware.
Actually, I'd disagree about 1080i/50-60 vs 720p/50-60, although interlacing is admittedly an absolute menace that should be abandoned ASAP. Especially in static scenes, even on a small laptop screen you really can tell the difference between 1080p and 720p.
I think researchers underestimate the capabilities of the human eye. They used to say you couldn't tell 720p apart from 1080p on a 42-inch screen; now that all TVs sold at that size are at least 1080p, I think most people will doubt that's true.
Well, a static scene is, of course, the only image for which perfect reconstruction from interlaced fields is possible. However, most people don't think that makes for interesting video! LoL. (I'd happily watch the test card myself; most informative if you know what to look for.) And if you're watching on a laptop, then I'd suspect you're probably still closer than 3-5H.
You mentioned a white paper proving the limits of visual acuity; I wanted to point out that in science there is no proof, only theory backed by evidence.
True - but the point here is that the evidence does back up the theory.
Modern de-interlacers are highly impressive, although the result will never be as good as native progressive. In any case, Leon's videos are all shot progressively, probably in 25p.
Yes - as mentioned above, for anything other than a perfectly static image it is mathematically impossible to achieve perfect reconstruction of a full frame from two fields captured at different times. If the emission is 25p but the resolution is only 720, then there's something odd going on: 720 is only ever captured at 50fps, and the improvement in motion portrayal should be quite significant.
Doubling the frame rate or the resolution doesn't necessarily double the bit-rate required in either case, because of the way video compression works. x265 actually offers little to no improvement over x264 at HD resolutions.
Agreed again re: not doubling the bit-rate. Since the residual errors are reduced (a moving object will only have moved half the distance between frames), there's a saving from that point of view; it's partly offset by having to send data twice as often but, overall, it should still be better. The point I was making wasn't related to a compressed stream, however. Even with uncompressed video, p/50 will always appear to be of higher image quality than p/25 for a moving image of a given static resolution, since the temporal response of the eye has tailed off much more by 50Hz than at 25Hz.
I accept 4K is still a luxury at this point in time, although I'd argue that even at 24p the difference is still perceivable at reasonable viewing distances, even for 8-bit video without the "WCG" and "HDR" marketing extras.
I'd be pretty surprised if that were the case at 3-5H or more. One possible effect could be spatial aliasing on lower-resolution screens that aren't pixel-mapped, compared with '4K' screens that are. (This is especially problematic on i/25, of course, since the imperfect scaling has to take place over a cluster of pixels that weren't captured at the same time, so it becomes not just a spatial aliasing artefact but a combined spatio-temporal one.)
I just think much higher-quality 1080p can be delivered in smaller files that are easier to distribute and can be easily decoded by all modern hardware.
I have to say that 4Mb/s is not a particularly high data rate for 1080. I'd be surprised if you could get quality you consider satisfactory from 1080 at that rate, given that higher rates than that are used for SD. I don't think you can divorce the original uncompressed rate from this: remember that uncompressed SD is 270Mb/s, so it's compressed at roughly 100:1. The equivalent 720p/50 or 1080p/25 in uncompressed form is 1.485Gb/s, so 100:1 compression would be around 15Mb/s.
Even when you factor in the reduction in delivery rate from using a progressive source (the 'zig-zag' encoding of the DCT coefficients is much more efficient per frame than per field) and the improvement from being able to use differently shaped macroblocks etc. in H.264, I'm still a bit dubious that you could get it down to 4Mb/s and still get image quality free of visible artefacts at the viewing distance you're using.
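Just to lay that arithmetic out (nothing new here, only the figures quoted above; note that SDI interface rates include blanking, which is why they exceed the active-picture payload):

SD_SDI = 270e6      # uncompressed SD (SD-SDI), bits/s
HD_SDI = 1.485e9    # 720p/50 or 1080p/25 (HD-SDI), bits/s

print(f"SD at 100:1 -> {SD_SDI / 100 / 1e6:.1f} Mb/s")   # ~2.7 Mb/s
print(f"HD at 100:1 -> {HD_SDI / 100 / 1e6:.1f} Mb/s")   # ~14.9 Mb/s
print(f"HD at 4 Mb/s implies {HD_SDI / 4e6:.0f}:1")      # ~371:1

So 4Mb/s for HD is asking for roughly 371:1 compression, nearly four times harsher than the 100:1 applied to SD.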
Interesting discussion, though we do seem to have drifted away from gorgeous girls getting covered in chocolate!
Sorry, we are in a bit of a major transition at the moment; all will become clear very soon.
As for our latest clip, it's a little odd.
We are running two PCs right now: our old 4-core 2.8GHz HP and our 8-core 4GHz monster with far superior specs (writing on my phone now, so I can't post exact specs). The funny thing is that the older machine seems to take its time and produce a much better clip than the new beast, with the same programme, settings, codecs etc. I've even turned off the hardware acceleration on the new one.
Anyway, we'll probably edit and output media at its original quality from now on, then use HandBrake or a similar programme to render clips down with a two-pass encode.
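For what it's worth, the command-line equivalent of that two-pass workflow looks roughly like this (a sketch using ffmpeg rather than HandBrake's GUI, assuming ffmpeg with libx264 is installed; the filenames and the 4M target are placeholders, not our actual settings):

import subprocess

SRC, OUT, RATE = "master.mov", "delivery_1080p.mp4", "4M"  # placeholders

# Pass 1: analyse the source and write statistics; video output discarded.
# (On Windows, replace "/dev/null" with "NUL".)
subprocess.run([
    "ffmpeg", "-y", "-i", SRC,
    "-c:v", "libx264", "-b:v", RATE, "-pass", "1",
    "-an", "-f", "null", "/dev/null",
], check=True)

# Pass 2: encode to the target bit-rate using the pass-1 statistics.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "libx264", "-b:v", RATE, "-pass", "2",
    "-c:a", "aac", "-b:a", "160k", OUT,
], check=True)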
As I said, we are in a transitional period right now, so please bear with us.