It’s been a lot of fun to experiment with this. Replicate is the only place where you can train on video clips, instead of just static images, so you can capture the cinematography of different styles, as you see here.
🍿 You can now fine-tune open-source video models. We wrote a guide that shows you how to fine-tune Tencent's HunyuanVideo using @kohya_tech's Musubi Tuner. pic.twitter.com/hEm3oSaiFW

See more models in the thread 👇

— Replicate (@replicate) January 24, 2025
Here are some eerie blue-eyed women, in the styles of:
- Pixar
- Spider-Man: Into the Spider-Verse
- Blade Runner 2049
- Westworld
Here are some laughing gentlemen, fine-tuned on:
- RRR
- Joker
- Her
Can you guess which one the original prompt came from? ;)
Here are two carriages in a spooky forest: the first from my Indiana Jones model, the second from the model I trained on The Matrix trilogy.
I think it’s cool how different the motion is: the camera, the speed of the wheels.
Also the Matrix carriage being a solid black box lol
Check out the different earrings and expressions on these faces. These are all the exact same prompt!
And the same seed, width, height, fps, LoRA scale, everything. It really does capture the “feel” of a certain film.
- Pulp Fiction
- Arcane
- Inception
- Spiderverse
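To make that comparison fair, everything except the fine-tune itself has to stay fixed. Here's a minimal sketch of that setup, assuming the Replicate Python client's `replicate.run` interface; the model names and input keys below are hypothetical placeholders, not the real identifiers (those are on my profile).

```python
# Sketch: compare fine-tunes while holding every generation
# parameter constant, so the only variable is the trained LoRA.

def build_input(prompt: str, seed: int = 42) -> dict:
    """Generation settings shared across every fine-tune."""
    return {
        "prompt": prompt,
        "seed": seed,        # fixed seed: same noise for every model
        "width": 640,        # hypothetical values for illustration
        "height": 360,
        "frame_rate": 16,
        "lora_strength": 1.0,
    }

# Hypothetical fine-tune identifiers; substitute real ones.
MODELS = [
    "user/hunyuan-pulp-fiction",
    "user/hunyuan-arcane",
]

shared = build_input("a close-up portrait, golden hour lighting")

# With the client installed, each comparison run would look like:
# import replicate
# for model in MODELS:
#     video = replicate.run(model, input=shared)
```

Since the input dict is identical for every model, any difference in the output videos comes from the fine-tune alone.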
Very different types of faces on these, I’m guessing from the model sort of averaging all the actors in the training data? I also like the way the “golden hour” lighting comes from different angles.
- Game of Thrones
- La La Land
- The Lord of the Rings
- Westworld
Lots more models available on my profile. Make sure to check out the Examples on each one if you want to see how I’m prompting these and how they compare.
Feel free to send me stuff you create! I’m excited to see what people do with these.
Things are starting to feel… magical