Midjourney has recently rolled out a set of more “hands-on” capabilities: an external image editor, image re-texturing, and a test of a more granular V2 moderation system. If you revise work often and need local edits, this update can significantly cut down on how many times you have to “reroll.” Below, we walk through each feature in the order you would actually use it.
What pain points does this update solve?
In the past, editing an image in Midjourney usually meant running /imagine again or trying your luck with variations; it was hard to change just one part. Midjourney has now completed the chain of “upload image → select area → edit with prompts,” letting you steer the result more like you would in a photo editor. Combined with existing capabilities such as style reference and character reference, the overall workflow is much closer to an integrated “generate + edit” process.
External image editor: expand, crop, inpaint, and add elements after uploading
The core of the external image editor is simple: upload an image from your local device, then, on the web, do outpainting (expanding the canvas), cropping, inpainting (local repainting), or adding and replacing scene elements. The usual flow is to drag the image into the editor, use the selection tool to mask the area you want to change, and then describe in a text prompt what that area should become.
Write prompts as editing instructions, for example: “Fill the blank area on the left with a window, warm indoor lighting, oak wood material, keep the overall composition unchanged.” In Midjourney, the cleaner the selection and the more specific the description, the more reliably the edit lands in the direction you want.
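Two more instruction patterns in the same spirit, one for outpainting and one for element replacement (the wording here is our own illustration, not official Midjourney guidance):

“Extend the canvas to the right with more of the same coastline, matching the existing golden-hour light and film grain.”

“Replace the selected mug with a glass teapot, keep the tabletop reflection and everything outside the selection unchanged.”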
Image re-texturing: keep the structure, swap materials and lighting mood globally
Image re-texturing suits the situation where “the composition is good but the feel is off”: it estimates the shapes and structure in the image, then reapplies textures and materials so that lighting, surface quality, and atmosphere change together. Put simply, you can turn the same image from “matte ceramic” into “polished mirror metal,” or from “sunny natural light” into a “neon night scene,” while keeping the subject’s structure largely intact.
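As a concrete, purely illustrative example: take a product shot whose framing you like and re-texture it with a prompt along the lines of

“brushed copper surfaces, soft rim light, dark studio background, wet-look floor reflections.”

The product’s geometry and position stay put; only the materials, lighting, and mood respond to the new description.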
If you already use Midjourney’s reference system (style reference --sref, the personalization model --p, character reference, and so on), the editor supports them here as well. When unifying a brand style, you can lock the look in with references first, then use re-texturing to quickly bring materials and atmosphere into line.
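A hedged sketch of what such a combined prompt might look like (the image URL is a placeholder, and exact parameter behavior inside the editor may differ from /imagine):

“matte ceramic packaging, soft diffuse daylight --sref https://example.com/brand-moodboard.png --p”

Here --sref pulls the visual style from the reference image, while --p applies your personalization profile on top of it.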
V2 moderation system testing and phased rollout: check whether you’re eligible first
Note that these features are very new, and Midjourney is rolling them out in phases: access typically goes first to high-volume users, annual subscribers, or accounts with long-standing subscriptions, so that the community and the human moderation team can adapt gradually. If you don’t see the option yet, it’s most likely not something you did wrong; access simply hasn’t reached your account.
At the same time, Midjourney is testing a smarter, more fine-grained V2 moderation system that checks the whole chain: the prompt, the uploaded image, the masked area, and the final output. In practice, avoid borderline descriptions and vague instructions; if something gets blocked, rewriting the prompt to be more specific and neutral and then retrying is usually the fastest way forward.