Titikey

Midjourney’s New Consistent Character Feature Explained: Quick Start with --cref and --cw

2/17/2026

When creating a series of images in Midjourney, the biggest headache is having the same character “swap faces and body types” across different scenes. With Midjourney’s new “--cref (Character Reference)” and “--cw (Character Weight)” parameters, you can more reliably preserve facial features, body shape, and clothing—making serialized posters, storyboards, and character sheets much easier to control.

What problem does this Midjourney update solve?

In the past, even with image prompts or repeated descriptions, Midjourney often produced drifting facial features, distorted hairstyles, or missing accessories. Now, with Midjourney’s character reference parameter, “who this is” is no longer a text description—it is defined directly by an image. For those creating IP characters, brand mascots, or comic storyboards, the amount of rework drops noticeably.

How to use --cref (Character Reference) reliably

The logic is simple: first prepare a clear reference image of the character, then append “--cref [reference image link]” to your prompt. Midjourney will prioritize the identity-defining traits from the reference image, and then carry out the scene and action you describe.

Example (replace the link with your own image):
portrait of a detective in a rainy alley, cinematic lighting --cref https://xxx.jpg

How --cw (Character Weight) adjusts similarity

“--cw” controls how tightly Midjourney “sticks” to the character reference. A higher --cw matches more of the reference—face, hair, and clothing—while a lower --cw focuses on the face alone, leaving Midjourney freer to vary outfits and style. So if you want the same outfit in every scene, raise --cw; if you want the same person in new clothes, lower it.

Practical tip: start with a medium --cw and generate one image to confirm the face is stable, then decide whether to raise the weight to lock in details. This wastes fewer generations than locking everything from the start, and makes it easier to get natural poses and compositions.
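The tuning loop above can be sketched as a quick sweep: generate the same scene at a few weights, then keep the lowest --cw that still holds the character. This helper is hypothetical (the weight values are just illustrative starting points):

```python
def cw_sweep(scene: str, ref_url: str, weights=(30, 60, 100)) -> list[str]:
    """Build the same scene at several character weights.

    Run each prompt once, compare how stable the face and outfit are,
    then settle on the lowest weight that still keeps the character.
    """
    return [f"{scene} --cref {ref_url} --cw {w}" for w in weights]

for p in cw_sweep("the same detective drinking coffee in a diner",
                  "https://xxx.jpg"):
    print(p)
```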

The “Edit” tool in the web editor is more usable

Beyond character consistency, Midjourney’s web-based image editing is also smoother: click the “Edit” button to enter the new interface, where you can use “Erase/Restore” to modify specific areas. You can also adjust the size and aspect ratio to expand the canvas, “extending” the original image outward to add more background space.

A smooth workflow is: first use --cref in Midjourney to lock the character → pick the closest image → go into the editor to remove small flaws or expand the canvas → then reuse the same character reference for the next image, and the series stays consistent.
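For a multi-image series, the last step of that workflow amounts to pinning every scene to one reference link and one weight. A minimal sketch, assuming a hypothetical scene list and a fairly tight weight of 80:

```python
# Hypothetical series setup: one locked reference, one shared weight.
REF = "https://xxx.jpg"  # the reference image chosen in the earlier step
CW = 80                  # tight lock; lower this if outfits should vary

scenes = [
    "the detective examining a case board in a dim office",
    "the detective chasing a suspect across a rooftop at dusk",
    "the detective resting in a neon-lit diner, rain outside",
]

# Every prompt in the series reuses the same --cref and --cw.
prompts = [f"{s} --cref {REF} --cw {CW}" for s in scenes]
for p in prompts:
    print(p)
```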

The two most common pitfalls when getting started

First, the reference image must be clear and the subject should fill enough of the frame; otherwise Midjourney can’t capture stable features. Second, if your prompt piles on conflicting descriptions like “different face shape” or “different hair color,” they will cancel out the --cref effect. If you want variation, change the scene and clothing terms first—don’t touch the face.
