When it works, it has to be said, it seems to be pretty damn good, but the key word here is 'WHEN'!
I'm finding that more often than not it takes ages to process, and then when I check back (like 24 hours later!), it has failed. I set a number of jobs to go using my free credits, but they almost always fail.
I was considering paying the subscription costs, but given the ridiculous length of time it takes to generate, with about a 5-10% success rate, I don't really want to waste my money.
Probably an unpopular opinion here, but as much as AI video has progressed this year, and I do think it's improved immensely, until the fluid mechanics and other areas improve I don't think it's worth the cost other than as a curiosity. Imaging has improved so significantly with Flux that it's worth spending the time creating custom LoRAs. Video will also evolve over the next year, and that will be the time to invest the time, effort and money.
schneider said: When it works, it has to be said, it seems to be pretty damn good, but the key word here is 'WHEN'!
I'm finding that more often than not it takes ages to process, and then when I check back (like 24 hours later!), it has failed. I set a number of jobs to go using my free credits, but they almost always fail.
I was considering paying the subscription costs, but given the ridiculous length of time it takes to generate, with about a 5-10% success rate, I don't really want to waste my money.
Is anyone else having issues?
I was having horrible success rates and super long processing times of over a day, until I paid for the basic plan. Now nearly every video works, and it takes five or ten minutes to create them. I highly recommend it for anyone wishing to get into AI video creation. Check my profile for some recent ones I've done.
messg said: Probably an unpopular opinion here, but as much as AI video has progressed this year, and I do think it's improved immensely, until the fluid mechanics and other areas improve I don't think it's worth the cost other than as a curiosity. Imaging has improved so significantly with Flux that it's worth spending the time creating custom LoRAs. Video will also evolve over the next year, and that will be the time to invest the time, effort and money.
I agree it is definitely not there yet. I'm happy making a few bits with my subscription credits to see how far it can currently get. I think the same thing will apply as with commercial image models: they will not be the best option long term, and an open-source alternative that regular people can train will eventually appear, but I think that is still quite a way off.
The immediate downside I've spotted to the video generators being able to do some mess is the raft of ill-thought-out videos of celebrities appearing on YouTube. Apologies if this has already been discussed, as I've been away, but I think if anything is going to give both WAM and AI a bad name, it is people creating deepfake fetish content of real people and putting it on YouTube. I'm not surprised it is happening (having seen the online behavior of a minority of WAM fans in the past), but it would be good for people to think about whether being able to do something means they actually should do it.
messg said: Probably an unpopular opinion here, but as much as AI video has progressed this year, and I do think it's improved immensely, until the fluid mechanics and other areas improve I don't think it's worth the cost other than as a curiosity. Imaging has improved so significantly with Flux that it's worth spending the time creating custom LoRAs. Video will also evolve over the next year, and that will be the time to invest the time, effort and money.
I agree it is definitely not there yet. I'm happy making a few bits with my subscription credits to see how far it can currently get. I think the same thing will apply as with commercial image models: they will not be the best option long term, and an open-source alternative that regular people can train will eventually appear, but I think that is still quite a way off.
The immediate downside I've spotted to the video generators being able to do some mess is the raft of ill-thought-out videos of celebrities appearing on YouTube. Apologies if this has already been discussed, as I've been away, but I think if anything is going to give both WAM and AI a bad name, it is people creating deepfake fetish content of real people and putting it on YouTube. I'm not surprised it is happening (having seen the online behavior of a minority of WAM fans in the past), but it would be good for people to think about whether being able to do something means they actually should do it.
I agree completely. I've seen imaging on unrestricted DALL-E 3 which is incredible; however, as always, the fear of abuse (fully grounded when you see the deepfakes) causes the providers to pull back and apply guardrails which degrade the whole technology. I've seen nothing to suggest this is going to change in the future, and whilst Sora, Runway and the myriad of Chinese providers will keep improving, I can't see them remaining unrestricted. Local solutions like CogVLM2-video and others will appear, but the power needed to run them at the same quality as the online providers will keep them a generation or two behind.
The immediate downside I've spotted to the video generators being able to do some mess is the raft of ill-thought-out videos of celebrities appearing on YouTube. Apologies if this has already been discussed, as I've been away, but I think if anything is going to give both WAM and AI a bad name, it is people creating deepfake fetish content of real people and putting it on YouTube. I'm not surprised it is happening (having seen the online behavior of a minority of WAM fans in the past), but it would be good for people to think about whether being able to do something means they actually should do it.
Yes, I completely agree. As soon as I posted some (entirely fictional) videos on YouTube, I had comments asking for celebrities. I said no, only to find the requester had gone and done it themselves (rather badly, I might add). I've also had a couple of requests for videos featuring "someone they know" - obviously I said no to them as well. I'm assuming I'm not alone in this.
A strange quirk of UMD is that posting a non-consensual fake video of a real person is against the TOS, but posting a link to the same video hosted outside of UMD is not. Personally, I think MM should close that loophole before it becomes a problem.
I think AI video is close enough to be effective for what I want to make. It could always be better. Hands are often wrong. The fluid dynamics on Kling are not quite there, but they are significantly better than anything else I've seen. One issue I've noticed, though, is that it will put compression artefacts into the results - I'm assuming this comes from compression artefacts in its training data.
schneider said: When it works, it has to be said, it seems to be pretty damn good, but the key word here is 'WHEN'!
I'm finding that more often than not it takes ages to process, and then when I check back (like 24 hours later!), it has failed. I set a number of jobs to go using my free credits, but they almost always fail.
I was considering paying the subscription costs, but given the ridiculous length of time it takes to generate, with about a 5-10% success rate, I don't really want to waste my money.
Is anyone else having issues?
I was having horrible success rates and super long processing times of over a day, until I paid for the basic plan. Now nearly every video works, and it takes five or ten minutes to create them. I highly recommend it for anyone wishing to get into AI video creation. Check my profile for some recent ones I've done.
Thanks for that, and thanks to everyone else for your views. I might well give it a go. It looks like there's a discount on for first-time subscribers. I'll see how it is for a month, and if it's rubbish, I'll cancel it.
I think AI video is close enough to be effective for what I want to make. It could always be better. Hands are often wrong. The fluid dynamics on Kling are not quite there, but they are significantly better than anything else I've seen. One issue I've noticed, though, is that it will put compression artefacts into the results - I'm assuming this comes from compression artefacts in its training data.
Is it though? Even putting aside anatomical deformities, bodily movements between frames can be quite jarring. The fluid dynamics of WAM make them even less realistic. Slime and substances flow and then disappear or change location; substances glide on and off bodies and clothing, etc.
Of course, the same could be said of imaging a year ago, before DALL-E 3, SDXL finetunes and Flux, so it's not that I don't think things will improve significantly. The issue with video is that you need significantly more processing power than imaging to run locally at a quality comparable to the online generators. The online generators will not do NSFW, and they also lack the concepts to do the fluid mechanics well. No doubt there will be a paradigm shift similar to what DALL-E/Flux brought for prompt understanding, but until then it's really an early-alpha curiosity.
I think AI video is close enough to be effective for what I want to make. It could always be better. Hands are often wrong. The fluid dynamics on Kling are not quite there, but they are significantly better than anything else I've seen. One issue I've noticed, though, is that it will put compression artefacts into the results - I'm assuming this comes from compression artefacts in its training data.
Is it though? Even putting aside anatomical deformities, bodily movements between frames can be quite jarring. The fluid dynamics of WAM make them even less realistic. Slime and substances flow and then disappear or change location; substances glide on and off bodies and clothing, etc.
Of course, the same could be said of imaging a year ago, before DALL-E 3, SDXL finetunes and Flux, so it's not that I don't think things will improve significantly. The issue with video is that you need significantly more processing power than imaging to run locally at a quality comparable to the online generators. The online generators will not do NSFW, and they also lack the concepts to do the fluid mechanics well. No doubt there will be a paradigm shift similar to what DALL-E/Flux brought for prompt understanding, but until then it's really an early-alpha curiosity.
Well, it works well enough for me often enough to make it worthwhile. Maybe I'm just easily pleased, and I'm happy enough with clothed content. I don't know when you last tried Kling, but with the most recent update I reckon we're more at beta than alpha. I'm happy with the results I get.
I do agree that not being able to generate offline is a big problem. I think we're at a stage with Kling where we're still figuring out effective prompting (both text and image), and experimenting gets expensive. Failures are expensive. With DALL-E 3 we all built up a bank of knowledge about prompting by getting to use Bing for free. With Kling, we don't even know which of the problems with substances can be fixed with better prompting, and which are intrinsic to the model.
Well, it works well enough for me often enough to make it worthwhile. Maybe I'm just easily pleased, and I'm happy enough with clothed content. I don't know when you last tried Kling, but with the most recent update I reckon we're more at beta than alpha. I'm happy with the results I get.
I do agree that not being able to generate offline is a big problem. I think we're at a stage with Kling where we're still figuring out effective prompting (both text and image), and experimenting gets expensive. Failures are expensive. With DALL-E 3 we all built up a bank of knowledge about prompting by getting to use Bing for free. With Kling, we don't even know which of the problems with substances can be fixed with better prompting, and which are intrinsic to the model.
I've played around with most of the major and small video generators. I'm not singling out any specific content produced so far, but I genuinely feel it is still at the alpha stage in terms of quality for WAM, whether that is clothed, NSFW or a combination. It's like the leap between DALL-E 2 and DALL-E 3 still needs to happen, which, given the complexity of fluids etc., makes sense. I'm excited to see the progression over the next year or two, but it's not yet at a level of quality where I'd want to burn money testing the models further. Who knows - someone's content may surprise me, or there may be a significant jump when Sora releases, or an update to Kling, Hailuo AI or Runway pushes the boundary, and I'll jump on it. As an aside, I still think we're at least a year away from imaging becoming good enough for general-purpose WAM. I'm happy to sink time into that training, locally and online. I have high standards, and I still feel my LoRAs have a long way to go before I'm happy with the output, despite the training I've been doing on the LoRA side.