If you’ve been using Midjourney to make beautiful AI images, there’s something new worth checking out: Midjourney Video v1. It’s still in its early testing phase, but it already does something super cool: it brings your still images to life as short video loops.
It’s not full-blown animation, but it adds a subtle camera movement, like a slow zoom, pan, or drift, that makes your image feel alive, as if you’re stepping into it.
So… what exactly is this?
Right now, Midjourney Video v1 takes an image you’ve created and adds a bit of movement to it. Think of it as a smooth camera pass across your image. It gives your artwork a cinematic feel, like a living painting or a mood loop you’d use in a music video, art reel, or portfolio.
How Does Midjourney Video v1 Compare to Sora, Runway, and Veo 3?
Midjourney Video isn’t competing with these tools directly, and that’s okay. It isn’t trying to generate an entire video. Instead, it focuses on adding life and movement to a still image: a simple feature, but deeply useful for designers, art directors, and creatives who want just a touch of motion without overcomplicating things.
Let’s test these tools:
We’ll start by generating the base image with Midjourney, then write a detailed prompt based on that image for video generation. After that, we’ll run the same prompt through different platforms, including Midjourney Video, Firefly, Runway, and Sora, to compare how each tool interprets and animates the scene. This approach will help us evaluate consistency, realism, and overall video quality across these leading platforms.
Base image:

Prompt:
Wind turbines spinning in a strong breeze, tall grass swaying in the wind, birds flying across the sky, clouds moving gently overhead in this open landscape with rolling hills
YouTube embed
Runway:
It seems like Runway didn’t take the prompt seriously: the swaying grass, flying birds, and moving clouds are all missing. The only thing showing any movement is the wind turbines. We’ve noticed this issue with a few other tools as well, where the model seems trained to produce minimal motion, which is a major drawback.
YouTube link
Vimeo link
ComfyUI (Wan 2.1):
So far, this tool has given us the best video quality, even though it still lacks a high frame rate and full realism. The grass looks like it was captured in a time-lapse, the turbines rotate slowly, and the distorted birds appear only for a split second. Moving clouds are still missing. That said, we’re genuinely impressed with the quality of the individual frames, and for an open-source model, it’s incredibly useful.
Vimeo embed
Midjourney:
The most realistic results so far have come from Midjourney. The tool takes the prompt seriously, and all the key elements are present: moving clouds, realistically swaying grass and turbines, and birds in motion, all at accurate speeds. While we were initially disappointed with the video quality in one of the generations, the realism made up for it. After testing a few more generations, the video quality turned out to be good and consistent, with little to no distortion, even in complex scenes with dense elements like bushes or thick grass.
The Winner?
It’s difficult to name a clear winner, as each tool brings its own strengths and limitations. Depending on your needs, whether that’s realism, quality, motion accuracy, or accessibility, each model shines in different ways. Based on our tests, we’ve shortlisted four tools that stand out for video generation:
- Midjourney Video V1
- Wan 2.1
- Runway
- Firefly