Midjourney has recently filled in the missing piece of "how to edit images after generation": the web version now includes an external image editor that lets you upload pictures for localized repainting, canvas expansion, and cropping, and it adds an "image retexturing" mode that can change materials and lighting across the whole image while preserving structure. For anyone who needs to produce image series or revise poster drafts, Midjourney's workflow now feels much closer to a real post-production tool.
External image editor: fine-grained repainting even with uploaded images
On Midjourney’s web version, after you open an image, clicking “Edit” will bring up a new interface. You can erase and restore specific areas, then use text prompts to control what gets repainted. It supports expanding the canvas and adjusting size and aspect ratio, so turning a landscape image into a portrait, filling in backgrounds, or adding elements no longer requires endless rerolls. If you already use Midjourney for e-commerce hero images or storyboards, this editor can significantly reduce rework.
Retexturing: keep the structure, swap materials and mood as a whole
“Image retexturing” is more like changing the skin without changing the bones: the system first estimates the scene’s shapes, then reapplies textures so that surface materials, lighting, and overall atmosphere change holistically. For example, the same interior image can shift from a “creamy” aesthetic to an “industrial metal” look while the furniture layout and perspective remain consistent. When exploring styles with Midjourney, this mode is more controllable than simply tweaking prompts.
Style and personalization compatibility: --sref, --p, and --cref can be used together
One key point in this update is that you can mix and match: the editor is compatible with style references (--sref), personalized models (--p), character references (--cref), and image prompts. If you want multi-scene images of the same character, append --cref with an image URL to the prompt and use --cw (0–100) to adjust the strength: at low --cw values the model tends to lock only the face, while at the default (--cw 100) hairstyle and clothing are carried over as well. For serialized illustrations or short-video storyboards, Midjourney's character consistency should become more stable.
Moderation system upgrade: prompts, masks, and outputs will all be checked
Midjourney is also testing a more granular V2 AI moderation system that checks prompts, uploaded images, drawn masks, and final outputs as a whole. It’s still in early testing, so the rules may change; if you run into “the same words used to pass but no longer do,” it’s more likely a policy tightening than user error. It’s recommended to use neutral, descriptive phrasing in Midjourney to reduce false positives triggered by borderline terms.
Rollout rules and usage tips: confirm whether you have access first
Because the feature is brand new, Midjourney isn’t rolling it out to everyone in the first phase: the official note mentions it will be gradually opened to specific groups first, such as very high-volume users and long-term subscribers. Before trying it, check the web version to see whether the “Edit” entry point appears, so you don’t waste time searching in Discord. Once access is enabled, use it as an integrated “generate + edit” workflow—your efficiency will be much higher than repeatedly running /imagine in channels.