
How to use Midjourney’s new editor: inpainting, retexturing, and moderation changes

2/15/2026

Recently, Midjourney has made “editing images” feel more like a complete workflow: you can upload an existing image, expand the canvas, crop, and inpaint it, and even switch its overall materials and lighting to a different style in one click. At the same time, Midjourney is testing a more granular V2 moderation system in which prompts, masks, and outputs are all checked together.

External image editor: expand, crop, and inpaint right after upload

Midjourney’s external image editor is built around a “start with an image, then refine it” workflow. You can upload an image from your computer, then extend the frame (outpainting beyond the edges), crop the composition, or draw a region selection (mask) over the part you want to change and steer the inpainting result with a text prompt.

In practice, state your goal clearly up front: what to change (subject/background/props), what to keep (composition/viewpoint/mood), and which elements must not appear. Midjourney’s inpainting leans heavily on these boundary descriptions; the more specific you are, the less likely it is to alter things you wanted to preserve.
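As a sketch, an inpainting prompt following that change/keep/avoid structure could look like this (the scene and wording are illustrative, not official syntax; --no is Midjourney’s exclusion parameter):

```
(mask drawn over the background only)
quiet seaside café at dusk, warm ambient light, soft bokeh --no text, signage, extra people
```

The unmasked area is preserved by the mask itself, so the prompt can focus entirely on describing what should fill the selected region.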

Retexture mode: keep shapes and structure, switch materials and lighting overall

Midjourney’s new “image retexture mode” is more like reskinning an existing scene: it estimates the scene’s shapes and then reapplies textures, so the materials, surface qualities, and lighting change as a whole. It’s well suited to quickly turning the same product shot into multiple material options, or shifting a scene from daylight realism to a cinematic night look.

When writing prompts, try to put “materials and light” up front—for example, brushed metal, matte plastic, wet stone, hard light/soft light, rim/back lighting, etc. If you also want to preserve the original structural proportions, use fewer words that would heavily alter the composition.
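For instance, a retexture prompt that leads with materials and light, and avoids composition words, might read like this (the material combination is illustrative):

```
brushed aluminum body, matte black rubber grip, soft studio light with a subtle rim light --no wood, fabric
```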

The reference system still works: style reference, personalization, and character reference can stack

The good news is that this editing workflow isn’t isolated: Midjourney’s style reference (--sref), personalization model (--p), and character reference (--cref, with strength controlled by --cw) can all be combined with the editor. Set the overall tone with a style reference first, lock character consistency with a character reference, and finally make local fixes in the editor.

The higher the character-reference strength --cw, the more the result will also stick to the hairstyle and clothing. If you want to change outfits while keeping the face, turn --cw down.
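A sketch of the outfit-change case (the URL is a placeholder; --cw 0 matches the face only, while higher values up to the default of 100 also carry over hair and clothing):

```
the same character wearing a navy trench coat, night street scene --cref https://example.com/character.png --cw 0
```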

V2 moderation system test: prompts, images, masks, and outputs are moderated together

Midjourney is also testing a smarter V2 moderation system: it checks your prompt, input image, inpainting mask, and final generated image as a whole. For creators, the most direct change is that wording that used to barely pass may now get blocked at any of these stages.

If you find that the same wording passes at different rates in the generation and editing stages, first make the prompt more neutral and less ambiguous; then check whether the mask covers a sensitive area. Midjourney is still tuning the rules during this early test, so occasional false positives are to be expected.

Rollout rules and suggestions: confirm access first, then upgrade your workflow

Because the feature is fairly new, Midjourney is rolling it out in phases: the official note says the first batch prioritizes accounts with very high generation volume (for example, users who have generated above a certain total) as well as some long-term subscribers. If you can’t see the editor entry point in your account, it’s most likely not something you’re doing wrong; access simply hasn’t been enabled yet.

Once you get access, standardize the workflow into three steps: use Midjourney to generate a base image with the correct structure; use the editor to inpaint locally and fix flaws; then use retexture mode to batch-produce different material versions. This order makes the improvement most visible and avoids the cost of repeatedly regenerating from scratch.
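Put together, a three-step session might look like this sketch (prompts and the style URL are illustrative, not exact UI commands):

```
1. Generate the base:  /imagine ceramic vase on an oak table, studio light --sref https://example.com/style.png
2. Editor (inpaint):   mask the flawed handle → "smooth unbroken handle, same glaze"
3. Retexture variants: "frosted glass, cool backlight" / "hammered copper, warm key light"
```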
