
A Detailed Look at Midjourney’s New Image Editor and Re-Texturing Feature: A Workflow Closer to “Real Photo Editing”

2/7/2026

Midjourney has recently pushed “image output” one step further: instead of only generating a four-image grid, it now directly offers a controllable image editor and a “re-texturing” mode. You can upload local images, repaint or fill in specified areas, expand the canvas, and use text prompts to control the extent of changes. For those who often need to tweak details and unify styles, this update is closer to a complete creative workflow.

1. What the image editor can do: expand, crop, localized repainting

Midjourney’s image editor supports uploading images from your computer, then expanding the canvas, cropping the composition, and repainting selected areas (adding/removing/modifying elements). The key to the workflow is “selection + prompt”: first circle the area you want to change, then use a clear sentence to tell Midjourney what you want to add, replace, or avoid.

On the web version, after opening a single image you can usually click "Edit" to reach the new interface, then refine the mask with an "erase/restore" style of interaction. It's best to start with a smaller selection so Midjourney keeps its changes within a controllable range; if you need bigger changes, iterating in several smaller passes is usually more reliable.
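To make the "selection + prompt" pattern concrete, here is an illustrative repaint prompt. The scene, the "Selection" label, and the `--no` exclusions are examples of the pattern, not an official template:

```text
Selection: the blank wall above the sofa
Prompt: a framed botanical print in muted green tones, matte paper texture --no text, watermark
```

Keeping the prompt scoped to what should appear in the selected area, and listing unwanted elements with `--no`, tends to produce smaller and more predictable edits.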

2. Re-texturing mode: preserve structure, swap overall materials and lighting

“Image re-texturing” is more like an “overall reskin”: Midjourney estimates the scene’s shapes and structure, then reapplies textures so the materials, lighting, and surface feel change as a whole. It’s suitable for quickly switching the same image into different styles—for example, from realistic leather to ceramic glaze, or from daylight to a neon night scene.

When prompting, prioritize describing materials and light, and write less about composition and object placement, because re-texturing's strength is preserving structure while changing the look and feel. For more stable results, avoid asking it to add new objects and focus on materials, color tone, and overall mood.
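As a sketch of this "materials and light first" principle, compare two hypothetical re-texturing prompts for the same base image:

```text
# Plays to re-texturing's strengths: only materials, light, and tone
glazed ceramic surfaces, soft warm rim lighting, subtle reflections, muted earth-tone palette

# Works against them: asks for structural changes the mode is not meant to make
add a second lamp beside the chair and move the table to the left
```

The first prompt lets the estimated structure stay fixed while the surface qualities change as a whole; the second fights the mode's design and tends to give unstable results.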

3. Working with style reference and personalization: --sref can be combined with --p

This update also emphasizes compatibility: the editor can be used together with Midjourney’s model personalization, style references, character references, and image prompts. In particular, --sref (style reference) and --p (personalized model) can be mixed, meaning you can inherit the “vibe” of a reference image while also layering in your own trained preferences.

In practice, it’s recommended to use --p first to stabilize your personal aesthetic, then use --sref to “borrow a style.” When the result looks too similar to the reference image, lowering the style reference strength or switching to a more abstract --sref source tends to look more natural.
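As an illustration, a combined prompt might look like the following. The subject and the reference URL are placeholders; `--p`, `--sref`, and `--sw` (style weight, 0–1000, default 100) are Midjourney's documented parameters:

```text
/imagine a rainy alley cafe at dusk --p --sref https://example.com/ref.jpg --sw 50
```

Here `--sw 50` halves the default style-reference strength, which is one concrete way to apply the "lower the style reference strength" advice above.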

4. Upgraded moderation mechanism: V2 moderation checks prompts and masks

Midjourney is also testing a smarter V2 AI moderation system, which will comprehensively examine the prompt, the uploaded image, the mask drawing, and the final output. In other words, it’s not only “what you wrote,” but also “how you masked and what you changed” that may be taken into account.

If you find certain edits failing frequently, first revert to more neutral descriptions and avoid sensitive words and misleading phrasing; at the same time, shrink the selection and clearly state the purpose of the modification—this usually makes approval easier and yields more stable results.

5. Who can use it and where to access it: check permissions first, then find the entry point

Because the feature is in an early rollout phase, Midjourney is opening it to some users first, for example accounts with higher lifetime generation counts or long-term subscribers. The web version may also roll out in stages; once your account passes a certain number of generations, you're more likely to see the full editor entry point.

If you don’t see the editor for now, it doesn’t mean the feature is gone—more likely your permissions haven’t been enabled yet. You can first use Midjourney’s regular generation workflow to create a “satisfying base image,” then use the editor for fine retouching and style unification once access becomes available.
