Have you ever tried generating multiple images with AI, only to find the same character’s face shape and outfit completely mismatched? Midjourney’s recently launched Character Reference (--cref) and Omni-Reference features are designed to solve this exact pain point. Now, with just one reference image, you can keep a character highly consistent across different scenes, and even “transport” any object or living creature into your artwork. This article dives deep into both features, along with practical parameter tips.
What Is the Character Reference (--cref) Feature?
Character Reference is one of the most important updates since Midjourney V6. In the past, even with an identical prompt, the facial features, body proportions, and clothing of generated characters could change dramatically from one output to the next. Now, by adding --cref <image URL> at the end of your prompt, Midjourney extracts the character’s traits from the reference image, so that subsequent scene generations retain the original face, body shape, and clothing style. For example, if you have a selfie and want to turn it into illustrations set in different environments, --cref makes it easy.
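As a quick sketch of the syntax described above, a prompt might look like the following (the image URL is a placeholder for your own hosted reference image, and the style and scene wording is just an illustrative example):

```
/imagine prompt: a young woman reading in a cozy café, warm watercolor illustration --cref https://example.com/my-selfie.jpg
```

The reference image must be a direct link to an image (for instance, one you have uploaded to Discord first), and --cref is appended after the descriptive part of the prompt like any other parameter.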
Control Character Consistency with the --cw Parameter
Sharp-eyed users may notice that Character Reference isn’t a 100% copy—you can adjust the reference strength with the --cw (character weight) parameter. --cw ranges from 0 to 100, with a default of 100. At 0, Midjourney references only the character’s facial features; at 100, it also preserves hair, clothing, and other details. If you want to keep the face consistent while freely changing the character’s outfit or scene, set --cw between 20 and 40. This maintains character recognizability while leaving room for AI creativity.
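Combining the two parameters, a prompt for the “same face, new outfit” use case described above might look like this (again, the URL and scene description are placeholders):

```
/imagine prompt: the same character hiking a mountain trail in sportswear --cref https://example.com/my-selfie.jpg --cw 30
```

With --cw 30, the face from the reference image is preserved while the hairstyle and clothing are left mostly to the prompt, which is why the sportswear description can take effect.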


