
New ways to use Midjourney’s external image editor: upload edits and retexturing in one go

2/8/2026

Midjourney has recently made image editing much more convenient: you can not only make local adjustments to images you generated yourself, but also upload local images directly and, in the web editor, extend, crop, and repaint them, or even swap materials and overall mood in one pass. This article follows the actual workflow to explain Midjourney's external image editor and its image retexturing mode.

1. What the external image editor can do: expand, erase, inpaint

In Midjourney's editor, you can upload images from your computer, then use "select an area + prompt" to add elements, remove clutter, replace backgrounds, and more. A common approach is to first expand the canvas (change the aspect ratio or extend outward), then repaint the blank areas so they blend into the same scene. Compared with repeatedly running Vary in Discord, this "edit exactly where you point" experience is much closer to a conventional photo-editing workflow.
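As a rough illustration, here is what that expand-then-repaint sequence might look like in practice (the prompts are hypothetical examples, not required wording):

```text
1. Upload a 1:1 photo → open it in the editor → widen the canvas to 16:9,
   leaving blank strips on both sides.
2. Prompt for the fill: "misty pine forest, overcast light, consistent
   color grading with the original" → submit and pick the best variation.
3. Brush-select a distracting object in the frame → prompt: "empty wooden
   bench" → submit to replace just that region.
```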

2. Image retexturing mode: keep the structure, change lighting and materials

If you like the composition but not the materials, lighting, or overall texture, you can use Midjourney's image retexturing mode. It first estimates the scene's form and structure, then overlays new textures and surface properties, so the same image can quickly take on different looks such as "rainy neon night," "vintage film," or "matte terracotta." In practice, it's best to make your prompt prioritize materials and lighting (e.g., "brushed metal, soft rim light, rainy reflections") and avoid action words that would change the structure.
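To make the "materials and lighting first" advice concrete, here is a hypothetical pair of retexture prompts, one that plays to the mode's strengths and one that fights it:

```text
Works well (describes surfaces and light, leaves structure alone):
  weathered brass, soft rim light, wet asphalt reflections, overcast dusk

Fights the mode (action words that would alter the structure):
  the figure turns around, add a second tower, remove the bridge
```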

3. Tips for combining it with --sref, --p, and --cref

One key point of this update is that the editor is compatible with Midjourney's multiple reference systems. You can mix the style reference --sref with the personalization model --p to stabilize your aesthetic; when you need character consistency, append --cref <character image URL> to the prompt and use --cw (0-100) to adjust reference strength. A common strategy: first use --cref to lock the face and vibe, then use --sref to set the style, and finally tweak only local areas in the editor so the whole image doesn't drift.
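Putting those parameters together, a full prompt following that strategy might look like the sketch below; the image URLs are placeholders you would replace with your own reference images:

```text
/imagine a courier resting beside a neon-lit ramen stand, light rain
  --cref https://example.com/character.png --cw 60
  --sref https://example.com/film-style.png --p --ar 2:3
```

Lower --cw values focus the reference mainly on the face, while higher values also pull in hair, clothing, and overall appearance.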

4. Access and moderation changes: who gets it first

Because the feature is still rolling out in stages, Midjourney currently prioritizes access to the external image editor and related capabilities for certain user groups, such as annual subscribers, accounts with higher lifetime generation counts, or users who have maintained a monthly subscription for a long time. At the same time, Midjourney is testing a more fine-grained V2 AI moderation system that checks prompts, uploaded images, mask areas, and final outputs together, so it's not surprising when the same prompt is sometimes blocked and sometimes not. To pass moderation more consistently, avoid borderline terms and describe your intent concretely (materials, camera, environment) rather than relying on suggestive wording.
