Midjourney has recently been testing a more “big-picture” V2 AI moderation system. Rather than watching the prompt alone, it also weighs the images you upload, the painted masks, and the final generated results in its judgment, leaving less room for luck and making compliance boundaries clearer.
What exactly has the V2 AI moderation system updated?
In the past, many people got used to splitting risky content across different parts of the prompt, or to “getting around it” via partial redraws, but Midjourney’s V2 AI moderation system performs an overall check. According to the official description, it evaluates the prompt, the input image, the mask you draw during editing, and the generated output image at the same time, and only then decides whether to allow the request or block it.
Because this is early testing, Midjourney has also clearly stated that the rules are still being optimized, so you may encounter cases where similar prompts sometimes pass and sometimes get rejected.
Greater impact on prompts and “partial edits”
If you often use Midjourney’s editing workflow (for example, erasing an area and then filling in new content), the V2 AI moderation system looks at “what you erased, what you intended to add, and what actually got added” together. In other words, the intent behind the masked area is easier for the system to understand, and violation risk won’t be overlooked just because you “only changed a small part.”
For normal creation, this is a good thing: compliance judgments become more consistent; for people trying to skirt the edge, Midjourney’s tolerance will be lower.