Titikey

Midjourney’s new image editor is live: hands-on with uploading edits and retexturing

2/7/2026

Midjourney has recently put its upload-and-edit capability front and center, built around a brand-new image editor and a retexturing mode. Midjourney no longer only generates images from scratch; it now supports a photo-editing style workflow, letting you edit by region and by prompt.

What the image editor can do: extend, crop, and inpaint

Midjourney’s image editor lets you upload images from your computer, then extend the canvas, adjust the aspect ratio, crop the frame, and repaint (inpaint) specific areas. You draw a region selection (mask) around the part you want to change, then use a text prompt to add elements, remove elements, or change the scene.

On the web version, you typically enter the new interface via “Edit” on an image. Common tools include “Erase” and “Restore,” which precisely constrain the area Midjourney is allowed to modify. For scenarios like swapping backgrounds on e-commerce shots, replacing copy areas on posters, or filling in building edges, this makes Midjourney noticeably more controllable.
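As a sketch, a typical inpainting pass might look like the steps below. The Edit, Erase, and Restore tools are the ones described above; the prompt wording and the --ar parameter value are illustrative examples, not required syntax.

```text
1. Open the image on the Midjourney web app and choose Edit.
2. Use Erase to mask the background region you want replaced.
3. Prompt: clean white studio background, soft shadow under product --ar 1:1
4. Submit; if the edit spills past the mask, use Restore to protect those areas and resubmit.
```

Keeping the mask tight to the region you actually want changed tends to give the model less room to redraw surrounding details.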

Retexturing mode: keep the shape, redo materials and lighting

This release also introduces an “image retexturing mode.” It first estimates the scene’s structure and shapes, then replaces the textures, materials, surfaces, and lighting as a whole. Simply put, the outlines stay largely intact while Midjourney re-renders the “skin.”

In practice, it helps to first select the area you want to change, then specify the material and style direction clearly in the prompt, for example “change to ceramic glaze, high gloss reflections” or “change to rough concrete, overcast diffuse light.” This keeps Midjourney focused on surface quality rather than structure.
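The material-focused phrasing above can be written out as full retexture prompts. These are illustrative wordings in the spirit of the examples in this section, not official templates:

```text
ceramic glaze, high gloss reflections, studio lighting, clean specular highlights
rough concrete surface, overcast diffuse light, matte finish, subtle pitting
```

Notice that both prompts describe only surface and lighting qualities and say nothing about shape, which is exactly what retexturing mode is meant to preserve.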

Compatible with style/personalization/character references: edits can use parameters too

Many people worry that the editing feature might be disconnected from the parameter system, but Midjourney emphasizes compatibility this time: the editor can be used together with model personalization, style references, character references, and image prompts. For example, you can use --sref as a style anchor while stacking --p for personalization, so edited results stay consistent with your established look.

If you’re working on character consistency, you can still append --cref URL to the end of the prompt and use --cw 0–100 to control reference strength. Midjourney treats the “character profile” as a constraint, then lets you decide in the editor exactly which parts to change.
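Putting these pieces together, a prompt combining the character and style references described above might look like this. The URLs are placeholders, and the --cw value of 60 is an arbitrary illustrative strength; --cref, --cw, and --sref are the Midjourney parameters named in this section:

```text
portrait of the character reading in a cafe --cref https://example.com/character.png --cw 60 --sref https://example.com/style.png --ar 2:3
```

Lower --cw values loosen the character constraint (closer to face-only matching), while values near 100 hold more of the reference’s clothing and overall appearance fixed during the edit.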

Access and moderation: rolling out in small batches; prompts and masks are both reviewed

Because the feature is very new, Midjourney is rolling it out in batches: for example, users who have generated many images, annual subscribers, or long-standing continuous subscribers may receive access first (subject to the official rollout). If you don’t see the entry point yet, it’s usually not an error on your end; Midjourney simply hasn’t enabled the permission for your account.

At the same time, Midjourney is testing a smarter V2 AI moderation system that reviews the prompt, the input image, the masked region, and the final output as a whole. It’s worth preparing clear, compliant source material and prompts before editing in Midjourney, to avoid repeated rejections that interrupt the workflow.
