2/18/24, 1:35pm: Post will not be marked Synthetic, even if AI was used to enhance the footage. The content of the clip has not been changed, only sharpened and enhanced. Even though AI may have been used to accomplish this, we do not consider it synthetically generated content. Technology used merely to enhance what's already there does not carry the baggage of creating a scenario out of thin air (or the ramifications that go with that). I'll un-mark this post so it's no longer tagged Synthetic.
cmfan777 said: Seems like a nice upgrade to me. What program did you use?
It's called HitPaw Video Enhancer. It's a fairly simple process with a number of parameters to choose from. Though it's quite processor heavy and clips can take some time to render.
I think this is an interesting concept, especially since so many great 90s gungings were victims of the limitations of VHS. I think it'll be a mixed bag results-wise, but I'd be intrigued to see what the tech can do.
I think you would also get more improvement using higher-quality sources... If this is the PAX ad that's floating around on YouTube, the quality is pretty bad. The copies on the old WAMTEC tapes (which he's digitized) or the Hurley cliptapes are much better.
I think the Alison Jack pieings would be good to try, as the current copies on YT are about as good quality as anything that's surfaced.
The sat-down Alison Jacks will be an interesting one to try, as the AI favours that sort of set-up: a static camera with the subject in clear view, quite close to camera, without any objects obscuring her.
The standing Jacks clip might be a bit busy, and it has the subject further from camera. It seems the program tends to fare better when it can recognise a face, i.e. before the mess.
I have the Hurley version of the PAX clip now. Though it's sharper in some respects, there's more noise and it's slightly less smooth, which could lead to distortion. But who knows until it's tried!
I'll render these today and see what the results are. Fair warning: it can produce some real monstrosities. The Mandrell sisters looked like something from a horror film.
I've used TensorPix to upscale some old clips. Decent results, but the cost soon adds up. Whether it's worth it I'm not sure; it's the same content, and ultimately the AI is just guessing at the missing information.
HotToast said: The sat-down Alison Jacks will be an interesting one to try, as the AI favours that sort of set-up: a static camera with the subject in clear view, quite close to camera, without any objects obscuring her.
The standing Jacks clip might be a bit busy, and it has the subject further from camera. It seems the program tends to fare better when it can recognise a face, i.e. before the mess.
I have the Hurley version of the PAX clip now. Though it's sharper in some respects, there's more noise and it's slightly less smooth, which could lead to distortion. But who knows until it's tried!
I'll render these today and see what the results are. Fair warning: it can produce some real monstrosities. The Mandrell sisters looked like something from a horror film.
LMFO, now you have to post the video of the Mandrell sisters. My curiosity is too much.
It's sharp, but I still feel a bit like we're in the uncanny valley. Somehow things are both too clear and yet too fuzzy, like when a static image of a person's face gets all airbrushed to eliminate blemishes but then they also have no pores. Curious to see where the technology goes, tho.
I'd highly recommend Topaz Video AI for this purpose. It is continuously updated and has multiple, highly effective upscaling models for different video types. They take customer feedback and have been doing a great job improving the app over the past few years.
ABGamma said: I'd highly recommend Topaz Video AI for this purpose. It is continuously updated and has multiple, highly effective upscaling models for different video types. They take customer feedback and have been doing a great job improving the app over the past few years.
HotToast said: The sat-down Alison Jacks will be an interesting one to try, as the AI favours that sort of set-up: a static camera with the subject in clear view, quite close to camera, without any objects obscuring her.
The standing Jacks clip might be a bit busy, and it has the subject further from camera. It seems the program tends to fare better when it can recognise a face, i.e. before the mess.
I have the Hurley version of the PAX clip now. Though it's sharper in some respects, there's more noise and it's slightly less smooth, which could lead to distortion. But who knows until it's tried!
I'll render these today and see what the results are. Fair warning: it can produce some real monstrosities. The Mandrell sisters looked like something from a horror film.
LMFO, now you have to post the video of the Mandrell sisters. My curiosity is too much.
Apologies! I actually deleted it in horror, lol. Because of the low-quality footage, the AI basically mistook them for old crones with funny eyes. It can do other odd things, like adding glasses after they've been pied. If I get another amusing misfire I'll post!
Bearing in mind there are still a few tweaks I can make to improve this footage, the results are not too shabby. My aim is to make it as natural as possible, which means losing some improved definition here and there to make the scenes feel more real overall. Maybe I went too subtle with the Mandrell clip? Big thank you to Rich for supplying his VHS rip of the Mandrell scene, which is way better than the ones I've been working with.
Nice work! But right now, the tech still only feels like a 5% improvement at best.... Not dramatic enough to make a big difference. (As opposed to, say, the jump from cruddy VHS rips of the Stooges scenes to the really nice uploads the studio put on YT recently. But of course, that's not AI but rather a much better source.)
Weirdly, the second Alison Jack one is the most improved, despite having the most movement, which should've worked against it?
Yeah, the Mandrell one is a very marginal gain. I think I dialled the first two back way too much in my quest for naturalism, though I welcome any improvement. It seems the PAX one, which feels like a massive boost, is more of an outlier. In the second Jacks one, even the cream looks good. It's difficult to say what makes some clips work and others not; much of it is in the clarity of the source material. Perhaps the fact that Alison is in profile, which is more forgiving, has a lot to do with it. She's well lit and separated from the black background very nicely, so there's some good clarity there. I've had some other very good results, so it's a bit of a mixed bag at the moment. If restoration catches up with the rest of the AI space, then we might see something really wow a few years down the road.
I would think at some point the tech will be such that you could "tell" it the people involved in the scene (e.g. Barbara Mandrell) and it could render a better image based on pics and vids available online. It wouldn't work with the PAX scene (although, searching for that scene on YouTube a while back, I found a version where the woman in it responded in the comments; she wasn't an actress but remembered it fondly).
Anyway, that is down the road, and it will likely run into some major issues with the use of people's images without permission. But make no mistake, that is the path we are on right now.
I... guess I'm the only one who feels like the quality is kind of part of the experience?
Like, the whole reason the clips are often effective is that they were made at a time when digital touch-ups weren't possible. There's a charm to WAMTECH, RobBlaine, Phoebe's stuff, etc.
I don't mean to be that militant about it, but in my mind so many clips have a texture that cannot be replicated, and I don't see the point in trying or pretending otherwise.
But I also realize I grew up during the VHS era, and I'm sure younger people are like "I can't look at this, this looks like ass", similar to how I must've felt when I was younger and looked back at stuff from a previous era.
Justin Fox said: Bearing in mind there are still a few tweaks I can make to improve this footage, the results are not too shabby. My aim is to make it as natural as possible, which means losing some improved definition here and there to make the scenes feel more real overall. Maybe I went too subtle with the Mandrell clip? Big thank you to Rich for supplying his VHS rip of the Mandrell scene, which is way better than the ones I've been working with.
I will say I love the clips you chose, which I of course have seen before, but not in a while... damn, they're so sloppy and creamy. Those Mandrell Sisters pies are amazing. I just lament that we don't get that kind of coverage anymore. I suppose it was how the pies were made, and the fact they were sitting under hot lights etc., so they were often drippier and liquid-ier, but damn if it didn't make for a really dynamic effect that still thrills me to this day when I see it.
ABGamma said: I'd highly recommend Topaz Video AI for this purpose. It is continuously updated and has multiple, highly effective upscaling models for different video types. They take customer feedback and have been doing a great job improving the app over the past few years.
I bought Topaz a while ago, but I can't really figure out how to get "good" results out of it.
Been trying to upscale some of my VHS stuff. Things look "a little" better, but not as dramatically as in this thread!
JoeYoung said: I... guess I'm the only one who feels like the quality is kind of part of the experience?
Like, the whole reason the clips are often effective is that they were made at a time when digital touch-ups weren't possible. There's a charm to WAMTECH, RobBlaine, Phoebe's stuff, etc.
I don't mean to be that militant about it, but in my mind so many clips have a texture that cannot be replicated, and I don't see the point in trying or pretending otherwise.
But I also realize I grew up during the VHS era, and I'm sure younger people are like "I can't look at this, this looks like ass", similar to how I must've felt when I was younger and looked back at stuff from a previous era.
JY
Totally agreed that there's a charm to the different eras of camera technology. You can usually make an accurate guesstimate as to which decade a scene comes from through the texture of the picture alone. To me, these restorations usually retain the character of the original source while allowing a clearer view of the contents of the scene. Though I can see, especially with the PAX one, that it can produce something that appears more like a product of the current era.
ABGamma said: I'd highly recommend Topaz Video AI for this purpose. It is continuously updated and has multiple, highly effective upscaling models for different video types. They take customer feedback and have been doing a great job improving the app over the past few years.
I bought Topaz a while ago, but I can't really figure out how to get "good" results out of it.
Been trying to upscale some of my VHS stuff. Things look "a little" better, but not as dramatically as in this thread!
Where's a good place to learn?
If you type 'topaz video ai tutorial' into YouTube, there are a few helpful videos there. I'm working by trial and error, running different presets on one clip to test the results.
I tried Topaz and actually got better results with the HitPaw defaults. Topaz has more parameters, though, so I suspect that as I get used to it I'll find more ways to achieve better quality.
ABGamma said: I'd highly recommend Topaz Video AI for this purpose. It is continuously updated and has multiple, highly effective upscaling models for different video types. They take customer feedback and have been doing a great job improving the app over the past few years.
I bought Topaz a while ago, but I can't really figure out how to get "good" results out of it.
Been trying to upscale some of my VHS stuff. Things look "a little" better, but not as dramatically as in this thread!
Where's a good place to learn?
Honestly, each clip is unique and will have a different set of best settings. Finding what looks best requires a substantial amount of trial and error, adjusting each setting one at a time, trying different model variations, running different models in sequence, playing with stabilization and motion blur reduction, etc.
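For anyone who wants to make that trial-and-error less ad hoc, it can help to script the comparison passes so each variant of a clip changes exactly one setting. This is only a sketch of the idea, not anyone's actual workflow: it uses ffmpeg's built-in scalers as a stand-in renderer (Topaz and HitPaw have their own engines with no flags shown in this thread), and the filenames and filter list are made up for illustration.

```python
# Build one ffmpeg upscale command per scaling filter so the outputs can
# be compared side by side. Dry run: the commands are printed, not executed.
# (Filenames and the filter list are illustrative, not from the thread.)
source = "clip_original.mp4"
filters = ["bilinear", "bicubic", "lanczos", "spline"]  # valid ffmpeg scale flags

def upscale_cmd(src, flt, width=1440, height=1080):
    """Return an ffmpeg command that upscales `src` with one scaler."""
    out = src.replace(".mp4", f"_{flt}.mp4")
    return (f"ffmpeg -i {src} "
            f"-vf scale={width}:{height}:flags={flt} "
            f"-c:a copy {out}")

commands = [upscale_cmd(source, f) for f in filters]
for cmd in commands:
    print(cmd)
```

Running the printed commands yields one output file per filter, which makes the "adjust one setting at a time" comparison reproducible instead of relying on memory of which preset produced which result.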
I've been processing all of my older clips with Topaz. Messygirl clips I bought about fifteen years ago and Messy Fun videos I bought from Rob Blaine are seriously degraded. But Topaz does a pretty good job restoring them.
ABGamma said: I'd highly recommend Topaz Video AI for this purpose. It is continuously updated and has multiple, highly effective upscaling models for different video types. They take customer feedback and have been doing a great job improving the app over the past few years.
JoeYoung said: I will say I love the clips you chose, which I of course have seen before, but not in a while... damn, they're so sloppy and creamy. Those Mandrell Sisters pies are amazing. I just lament that we don't get that kind of coverage anymore. I suppose it was how the pies were made, and the fact they were sitting under hot lights etc., so they were often drippier and liquid-ier, but damn if it didn't make for a really dynamic effect that still thrills me to this day when I see it.
JY
Yeah, but that's just the nature of the beast... WAM in general, pies in particular, and then mainstream scenes (where no one is really thinking about "pie hit quality") vs. producer ones (where we are VERY aware of the consistency of a pie, how long it's been sitting, etc.).
I mean, there are SO many scenes where a great setup was ruined cuz the pie just didn't cover well. (The one from "Wings" is a prime example; also the Julia Louis-Dreyfus clips from SNL.) These pies were presumably sitting under the same hot lights, and yet... they didn't work at all. And then you get a clip like the Mandrell one, or Three's Company, where they just lucked into perfect pie coverage.
I truly don't think there's anything more scientific to it than that... random luck. (And not using Reddi-wip or shaving cream sprayed on a paper plate!)
Well, at least Topaz solves one of the two problems I have with A.I. enhancement of old SD footage.
My two problems were (a) time and (b) cost, and Topaz solves the cost problem.
I have run A.I. enhancement tests with three different software packages in the last 12 months, and so far I have yet to be satisfied with the results: the rendering times are horrendous, and the output is only a marginal improvement over my original footage. All my tests used 10-minute segments, and all three products took 16-18 hours to render a 10-minute piece of video. Working the math on how much old SD footage I have in my archives versus 16-18 hours of processing per 10-minute clip, I calculated it would take me 175 YEARS to redo all my old clips with A.I. Unfortunately, my doctors say it is unlikely I will live that long to complete the project.
The other problem with those three companies is that they do not let you buy the software outright; instead they want ongoing payment via a $45-per-month subscription. So I would need to pay $45 a month for 175 years, for a total cost of $94,500. I don't think so.
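The arithmetic above can be sanity-checked in a few lines. The 17-hour figure below is the midpoint of the 16-18 hours quoted, and the roughly 15,000-hour archive size is my own reverse-engineered guess (the post never states the archive size); only the $45/month subscription and the 175-year total come directly from the post.

```python
# Back-of-the-envelope check of the render-time and subscription math.
HOURS_PER_FOOTAGE_MIN = 17 / 10   # ~17 h of rendering per 10 min of footage

def years_to_process(archive_hours):
    """Total rendering time, in years, for an archive of the given length."""
    render_hours = archive_hours * 60 * HOURS_PER_FOOTAGE_MIN
    return render_hours / (24 * 365)

def subscription_cost(years, per_month=45):
    """Total subscription cost over the whole rendering period."""
    return years * 12 * per_month

print(round(years_to_process(15_000)))   # hypothetical ~15,000 h archive → 175
print(round(subscription_cost(175)))     # the post's 175-year figure → 94500
```

So the $94,500 total checks out exactly, and an archive on the order of 15,000 hours is what it would take to reach the quoted 175 years at that render speed.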
At least with Topaz (which I have not tried yet) you can make a one-time purchase of $300 to buy the software outright. But across the tools I have tried, the rendering times are just ridiculous and the results unimpressive.
I am not impressed by the commercially available software so far, because on YouTube you can see some really great results from 4K A.I. enhancement, e.g. the Buster Keaton movies, and this enhanced version of the pie fight from Battle of the Century... very nice.
But none of my tests have produced results as good as that. My tests saw very little improvement and did not justify the 16-18 hours of rendering time.
My gut feeling is that there are two tiers of A.I. software: one available to amateurs and consumers, and a much higher-grade tier used by professional video studios with very high-end workstations.
If we wait another 2-3 years, A.I. will have developed further and may soon be able to generate "missing scenes" from still photos and scripts. Imagine that: take the available stills of Esther Muir from the missing slapstick scene in "A Day at the Races", feed her image, wardrobe, and the background set into A.I., and have it recreate the missing footage of her famous whitewash gunging scene; or feed imagery of Natalie Wood from "The Great Race" into A.I. and submit your own script to generate behind-the-scenes footage of Natalie's pie scene.
I am less interested in using A.I. to enhance old footage and am more interested in using A.I. to generate missing footage or fantasy footage.