You’ve probably run into this frustration: a prompt that flows smoothly in ChatGPT turns long-winded in Claude, then drifts off-topic in Gemini; feed it to Midjourney and it’s like talking to a wall. When I run evaluations, my simplest trick is to write prompts as a cross-model “universal version,” so one set works everywhere.
Tip 1: Write the goal as a deliverable
Don’t just write “help me write copy.” Change it to: “Output 3 versions of e-commerce main-image copy; each version includes a title within 12 characters, a subtitle within 20 characters, and 3 selling points.” The clearer the deliverable, the less ChatGPT and Claude improvise, and the more stable Gemini’s output becomes too.
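A deliverable-style prompt like the one above can be kept as a reusable string. This is just a minimal sketch; the field names and character limits mirror the example in the text and are not a fixed standard.

```python
# A deliverable-spec prompt: name the outputs and their limits up front,
# instead of a vague "help me write copy".
deliverable_prompt = (
    "Output 3 versions of e-commerce main-image copy.\n"
    "Each version must include:\n"
    "- title: at most 12 characters\n"
    "- subtitle: at most 20 characters\n"
    "- selling_points: exactly 3 items\n"
)
print(deliverable_prompt)
```

The point is that every requirement is a checkable line item, so you can verify a model’s reply against it mechanically.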
Tip 2: Constraining format is more important than constraining tone
If you want structured content, specify the format directly, e.g. “output JSON only” or “present the fields as a table.” Style varies widely across models, but all of them follow formatting instructions far more reliably than tone instructions.
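One practical payoff of a format constraint is that you can validate the reply locally. A minimal sketch: the JSON keys (`title`, `subtitle`, `selling_points`) are illustrative, and the hard-coded reply stands in for what an actual API call would return.

```python
import json

# Ask for JSON only, with an explicit shape the model must match.
format_prompt = (
    "Return ONLY valid JSON with this shape:\n"
    '{"title": str, "subtitle": str, "selling_points": [str, str, str]}'
)

# Stand-in reply; in practice this comes back from the model API.
reply = (
    '{"title": "Glow Serum", "subtitle": "Hydrate in one step", '
    '"selling_points": ["light", "fast-absorbing", "fragrance-free"]}'
)

# If the model broke the format, json.loads or the checks below fail loudly.
data = json.loads(reply)
assert isinstance(data["title"], str)
assert len(data["selling_points"]) == 3
```

A tone instruction can’t be checked this way, which is exactly why format constraints transfer better across models.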
Tip 3: Split the background into three parts
I usually give each block one line: scenario, audience, constraints. For example: “Scenario = Xiaohongshu seeding; Audience = beginners; Constraint = don’t exaggerate effects.” Separated, labeled blocks are much harder for a model to misread than one big paragraph of prose.
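The three-block background can be assembled with a small helper. A sketch only: the function name `build_prompt` and the `task` field are my additions, not part of any model’s API.

```python
def build_prompt(scenario: str, audience: str, constraint: str, task: str) -> str:
    # One labeled line per block, so any model can pick out the background
    # without parsing a paragraph of prose.
    return (
        f"Scenario: {scenario}\n"
        f"Audience: {audience}\n"
        f"Constraint: {constraint}\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    scenario="Xiaohongshu seeding post",
    audience="beginners",
    constraint="don't exaggerate effects",
    task="write 3 versions of main-image copy",
)
print(prompt)
```

The same template then works unchanged whether the prompt goes to ChatGPT, Claude, or Gemini; only the block values change per job.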