
Exploring Midjourney Video v1: A New Era of AI-Generated Motion

If you’ve been using Midjourney to make beautiful AI images, there’s something new you need to check out: Midjourney Video v1. It’s still in its early testing phase, but it’s already doing something super cool: it brings your still images to life with short video loops.

It’s not full-blown animation, but it adds subtle camera movement, such as a slow zoom, pan, or drift, that makes your image feel alive, like you’re stepping into it.

So… what exactly is this?

Right now, Midjourney Video v1 takes an image you’ve created and adds a bit of movement to it. Think of it like a camera passing smoothly across your image. It gives your artwork a cinematic feel, like a living painting or a mood loop you’d use in a music video, art reel, or portfolio.


How Does Midjourney Video v1 Compare to Sora, Runway, and Veo 3?

Midjourney Video isn’t competing with these tools directly, and that’s okay. It’s not trying to generate an entire video. Instead, it focuses on adding life and movement to a still image: a simple feature, but deeply useful for designers, art directors, and creatives who want just a touch of motion without overcomplicating things.



Let’s test these tools:

We’ll start by generating the base image using Midjourney, then write a detailed prompt based on that image for video generation. After that, we’ll run the same prompt across different platforms, including Midjourney Video, Firefly, Runway, and Sora, to compare how each tool interprets and animates the scene. This approach will help us evaluate consistency, realism, and overall video quality across these leading platforms.

Base image

Prompt: Wind turbines spinning in a strong breeze, tall grass swaying in the wind, birds flying across the sky, clouds moving gently overhead in this open landscape with rolling hills

It seems Runway didn’t take the prompt seriously: the swaying grass, flying birds, and moving clouds are all missing. The only elements that show any movement are the wind turbines. We’ve noticed this issue with a few other tools as well, where the model appears trained to produce minimal motion, which becomes a major drawback.

There’s noticeable distortion and poor quality. Adobe Firefly is our personal third-best choice for image generation, after SDXL and Midjourney. When it comes to video generation, however, the model lacks many essential features and standard behaviors. For example, when generating a video from a starting frame, the overall video quality drops significantly compared to the input image; we’ve only experienced this issue with Firefly. It also fails to apply proper lens distortion, which makes the video feel less realistic and disconnected from the original image. While Firefly excels at generative fill and image creation, Adobe still needs to make major improvements to its image-to-video model.

So far, we’ve gotten the best video quality from this tool, even though it still lacks a high frame rate and full realism. The grass looks like it was captured in a time-lapse, the wind turbines rotate slowly, and the distorted birds appear only for a split second. The moving clouds are still missing. That said, we’re genuinely impressed with the quality of the individual frames, and for an open-source model, it’s incredibly useful.