I trained a new Wan2.1 LoRA for pies. This is the first time I've trained one on videos instead of images, using some of the best video generations I'd gotten so far.
I haven't pushed it very far yet, and I'm still working out the best generation settings, but I'm cautiously optimistic -- I'm getting some decent early results. The motion is definitely better and more reliable than what I'd managed before.
wammypinupart said: I trained a new Wan2.1 LoRA for pies. This is the first time I've trained one on videos instead of images. [...] More to come!
Great stuff! Did you need to do anything different for video training compared to images? For example, can you just throw images and videos together in a mixed training set using the same diffusion-pipe approach as before, or do you need to modify the script to permit video training?
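For anyone wondering what that mixed setup might look like in practice, here's a minimal sketch of a diffusion-pipe dataset config, assuming its TOML dataset format with frame buckets; the paths, bucket sizes, and repeat counts below are placeholders, not the poster's actual settings.

```toml
# dataset.toml -- hypothetical example, all values are placeholders

resolutions = [512]      # training resolution(s)

# Samples are grouped by frame count. A bucket of 1 matches still images;
# 33 frames (a valid 4n+1 length for Wan2.1) matches ~2s clips at 16 fps.
# Including both bucket sizes is what lets images and videos train together.
frame_buckets = [1, 33]

[[directory]]
path = '/path/to/pie_images'   # stills + .txt captions
num_repeats = 1

[[directory]]
path = '/path/to/pie_videos'   # short clips + .txt captions
num_repeats = 1
```

The key idea is that images are effectively treated as one-frame videos, so a frame bucket of 1 absorbs the stills while the larger buckets absorb the clips, and both can live in the same training run.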