For veteran Midjourney users, the platform's recent updates have brought a steady stream of surprises. From video generation that brings static images to life to reference systems for precise style transfer, new features keep expanding the boundaries of AI art and giving creators unprecedented expressive tools. This article walks you through the most notable changes.
Video Generation: Animate Your Static Images
The most eye-catching update is the introduction and refinement of the video generation feature. You can now transform a static image generated by Midjourney into a short video with a single click, making elements within the frame truly come to life. Everything from subtle character movements to the flow of a scene's atmosphere can be vividly presented.
This feature is available to users on the Standard plan and above. It also includes an HD mode that renders video at a native 720p resolution, noticeably improving motion smoothness and detail coherence. Keep in mind that video generation consumes a significant number of credits, so to manage your resources wisely, generate in small batches (one or two videos at a time).
The Style Reference (Sref) and Refer Anything Systems
Another major update is the full rollout of the V7 Style Reference (Sref) system, a powerful tool for style control. Simply drag a reference image into the designated area of the prompt field, or append --sref followed by the image's URL to your prompt. The generated image will then inherit the overall style, color palette, and texture of your reference.
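As a rough sketch of what such a prompt can look like (the image URL below is a placeholder, and the subject text is purely illustrative):

```
/imagine prompt: a lighthouse on a rocky coast at dusk --sref https://example.com/reference.jpg --v 7
```

If the reference's influence feels too strong or too weak, Midjourney also offers a style weight parameter (--sw) that can be appended to tune how heavily the reference shapes the output.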
Building on this, an even more powerful "Refer Anything" feature is in testing. It goes beyond style: it can accurately integrate specific characters, objects, or other elements from your reference image into your new creation. This achieves a true "put this in my image" capability, providing solid support for character consistency and complex compositions.


