Want to bring a reference image’s composition, character pose, or material/style into your generation workflow? Midjourney’s image-to-image and partial redraw (Vary Region) features are well suited to this. The steps below follow the order “use image-to-image to set the direction first, then use Vary Region to refine details,” so you can avoid detours.
How to start image-to-image: upload an image and use it as a prompt
In Midjourney Web, go to Create. First, drag the reference image into the input box or click upload so it appears in the prompt area. After confirming the image thumbnail has been added, supplement it with text descriptions—such as the subject, scene, camera, and style—so Midjourney knows “what to reference” and “what to generate.”
If you want the result to look more like the reference image, append “--iw 1.5” up to “--iw 2” at the end of the prompt (the higher the value, the more weight the reference image carries). Conversely, if you want to keep the inspiration without the result looking too similar, lower “--iw” and use more specific text to constrain the details.
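As a sketch, a full image-to-image prompt of this kind might look like the following (the image URL is a placeholder for your own uploaded reference, and all descriptive details are illustrative):

```text
https://example.com/your-reference.png a traveler in a worn leather jacket standing on a cliff at dusk, soft side light, film grain, realistic photography --iw 1.5
```

Raising “--iw” toward 2 pulls the output closer to the reference; lowering it lets the text description dominate.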
How to write image-to-image prompts: lock in the key points of composition and style
The most common issue with image-to-image is “it looks similar but it’s not right,” usually because you only gave mood words and didn’t clearly describe the composition and subject relationships. It’s recommended to write in this order: subject (who/what) + action/pose + environmental elements + lighting (soft light / side backlight) + materials (leather / metal / film grain) + visual style (realistic / illustration / photography).
In Midjourney, negative prompts are also important, and they go in the “--no” parameter rather than in the prompt text itself—for example, “--no text, watermark, extra fingers” can reduce common flaws. (Writing “no text” as ordinary prompt text can backfire, since the model keys on the word “text.”) If characters become distorted, first delete half of the “fancy adjectives” from the prompt and keep only the most critical structural information; the results are often more stable.
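Putting the recommended ordering and the negative parameter together, an example prompt might read like this (every detail here is illustrative; it follows subject + action/pose + environment + lighting + materials + style, then “--no”):

```text
a young violinist mid-bow stroke on a small stage, empty concert hall in the background, warm side backlight, worn wood and brass textures, film grain, realistic photography --no text, watermark, extra fingers
```

Because each slot in the ordering is filled with something concrete, the composition and subject relationships are locked in before any mood words are added.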


