Titikey

Midjourney’s New External Image Editor Features: Upload Edits and Retexturing Workflows

2/15/2026

Midjourney has recently made “editing” feel much more like a complete workflow: you can upload local images directly, then crop in the editor, expand the canvas, repaint specific areas, and even retexture with one click. For people who need to refine details repeatedly, Midjourney is no longer just about generating a four-image grid.

What key capabilities did Midjourney update this time?

First is the external image editor: it accepts images uploaded directly from your computer, which you can then expand, crop, repaint, and add or remove elements from, with precise control that comes from combining text prompts with region selection. Second is "Image Retexture Mode": Midjourney first estimates the scene's structure, then re-renders materials, surfaces, and lighting in an overall new style.

At the same time, Midjourney is also testing a more fine-grained V2 AI moderation system, which checks prompts, input images, masks, and output results together. It’s smarter, but still in early testing, and the rules may change.

How to use the Midjourney editor for local edits

On the Midjourney web app, go to the Create page, drag an image into the prompt box or add it using the upload button. After selecting the image, enter the editor: you can crop or expand the canvas first, then use the region selection tool to mark the area you want to modify.

Then write a prompt that describes only the change you want in the selected region, for example: "Change the streetlight in the selected area into a neon sign, keeping the nighttime atmosphere and perspective". Midjourney prioritizes the selection constraints and keeps other areas as unchanged as possible, which makes this good for adding objects, swapping small background elements, or fixing continuity issues.

How to write prompts for retexturing to succeed more easily

Retexturing works better for changing materials and the feel of the lighting than for drastically altering the composition. In your wording, state the target materials and style first, then add lighting and details, for example: "Keep the structure unchanged; convert the whole thing to a brushed metal look with cool-toned cinematic lighting, adding subtle scratches and reflections".

If you usually use Midjourney's style reference (--sref) or personalization model (--p), these can also be used together with the editor, making it easier to keep styles consistent. If you want character consistency, you can also combine the character reference (--cref) and character weight (--cw) parameters to control how similar the character looks and how far the changes are allowed to go.
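The parameters above can be stacked in a single prompt. The sketch below is illustrative only: the URLs are placeholders, and the --cw value of 60 is just an example on its 0–100 scale (lower values allow more deviation from the reference):

```text
a knight walking through a neon-lit alley, cinematic lighting
  --sref https://example.com/style-reference.png
  --cref https://example.com/character-reference.png --cw 60
  --p
```

In practice you would replace the placeholder URLs with links to your own uploaded reference images, and tune --cw up or down depending on how strictly the character should match.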

Access and pitfalls: why you might not see the entry

Because the feature is very new, Midjourney is rolling it out in batches: long-time users with higher generation volume, annual subscribers, and users who have kept an active subscription for a while are usually more likely to get testing access first. If you don't see the editor, first confirm that your account falls within the rollout group and that you're logged into the same Midjourney account on the web app.

Also, V2 moderation will more strictly cross-check prompt text and mask content; if you get blocked, prioritize making your prompt more specific and more “scene-descriptive,” and use fewer vague terms or words that easily trigger rules. Midjourney’s direction is clear: turn “image generation” into “controllable editing.”
