Midjourney recently added a “character reference” feature, making it easier to keep the same person’s face consistent across a series of generated images. The core method is to use --cref to lock in the source character image, then use --cw (character weight) to control similarity, so you can maintain consistency in facial features, body shape, and clothing across different scenes.
What pain points does this update solve?
In Midjourney, even with the same prompt, multiple generations can still “look different,” which is especially troublesome when creating serialized posters, storyboards, or brand characters. The goal of --cref is to pull the “character” out of the randomness, letting Midjourney use the reference image you provide as an anchor while it varies the scene.
Note that this capability is about keeping a generated character consistent, not producing a photo-level replica. If you’re aiming for strict, real-person-level likeness, Midjourney may still drift in fine details.
How to use --cref: lock the character first
The workflow is straightforward: prepare a character image you approve of, upload it to Discord, and copy the image link. Then append “ --cref image_link ” to your /imagine prompt, and Midjourney will treat that image as the character reference. You can also append --cw with a value from 0 to 100 to control how closely the output sticks to the reference.
Example: /imagine a female detective running through neon streets on a rainy night, cinematic lighting --cref https://... . If you want the same character to change outfits or hairstyles, state clearly what changes, and consider a lower --cw value, which weights the face more heavily than clothing and hair. Either way, don’t change too many things at once; stability will be better.
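If you’re producing a whole series (a storyboard, say), it can help to assemble the prompt strings programmatically and paste them into Discord. A minimal sketch, assuming a hypothetical helper of my own naming (Midjourney itself only ever sees the finished text you type into /imagine):

```python
def build_imagine_prompt(scene: str, cref_url: str, cw: int = 100) -> str:
    """Assemble the text for an /imagine command with a character reference.

    cw ranges 0-100: 100 weights face, hair, and clothing;
    lower values focus mainly on the face.
    """
    if not 0 <= cw <= 100:
        raise ValueError("--cw must be between 0 and 100")
    return f"/imagine prompt: {scene} --cref {cref_url} --cw {cw}"

# Example URL is a placeholder; use your own uploaded image link.
prompt = build_imagine_prompt(
    "a female detective running through neon streets on a rainy night, "
    "cinematic lighting",
    "https://example.com/detective.png",
    cw=80,
)
print(prompt)
```

Generating one string per scene this way keeps the character reference and weight identical across the whole batch, so only the scene description varies.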


