Midjourney has recently pushed “image output” one step further: instead of only generating a four-image grid, it now directly offers a controllable image editor and a “re-texturing” mode. You can upload local images, repaint or fill in specified areas, expand the canvas, and use text prompts to control the extent of changes. For those who often need to tweak details and unify styles, this update is closer to a complete creative workflow.
1. What the image editor can do: expand, crop, localized repainting
Midjourney’s image editor supports uploading images from your computer, then expanding the canvas, cropping the composition, and repainting selected areas (adding/removing/modifying elements). The key to the workflow is “selection + prompt”: first circle the area you want to change, then use a clear sentence to tell Midjourney what you want to add, replace, or avoid.
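As a sketch of the "selection + prompt" pattern, after masking an area you might type a short, concrete description of what should appear there. The subject and wording below are hypothetical; `--no` is Midjourney's standard parameter for listing elements to avoid:

```text
a framed botanical print in a thin brass frame, soft warm window light --no text, clutter
```

Keeping the sentence focused on one change per pass, and using `--no` for unwanted side effects, tends to give the editor a tighter target than a long multi-part request.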
On the web version, after opening a single image, you can usually click “Edit” to enter the new interface and refine the mask with an “erase/restore” style of interaction. Start with a smaller selection so Midjourney keeps its changes within a controllable range; if you need bigger changes, iterating over multiple passes is usually more reliable than one sweeping edit.
2. Re-texturing mode: preserve structure, swap overall materials and lighting
“Image re-texturing” is more like an “overall reskin”: Midjourney estimates the scene’s shapes and structure, then reapplies textures so the materials, lighting, and surface feel change as a whole. It’s suitable for quickly switching the same image into different styles—for example, from realistic leather to ceramic glaze, or from daylight to a neon night scene.
When prompting, prioritize describing materials and light, and write less about composition and object placement, because re-texturing’s strength is preserving structure while changing the look and feel. For more stable results, avoid asking it to add new objects; focus instead on materials, color tone, and the overall camera feel and mood.
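Following that advice, a re-texturing prompt reads less like a scene description and more like a materials-and-lighting spec. The example below is hypothetical; note that it names no objects or layout, only surface qualities and mood:

```text
glazed celadon ceramic surfaces, soft diffused studio lighting, subtle reflections, muted green palette
```

Because the mode already preserves the scene's structure, every word in the prompt can be spent on the look and feel rather than on what is in the frame.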
3. Working with style reference and personalization: --sref can be combined with --p
This update also emphasizes compatibility: the editor can be used together with Midjourney’s model personalization, style references, character references, and image prompts. In particular, --sref (style reference) and --p (personalized model) can be mixed, meaning you can inherit the “vibe” of a reference image while also layering in your own trained preferences.
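A combined prompt might look like the sketch below. The subject is illustrative and the `--sref` URL is a placeholder for your own reference image; `--sref` and `--p` are Midjourney's real parameters for style reference and the personalized model:

```text
product shot of a leather backpack on a wooden table --sref https://example.com/ref-style.png --p
```

In effect, `--sref` pulls the reference image's "vibe" while `--p` layers your trained preferences on top, so the two can shape the same generation without conflicting.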


