tl;dr
If your prompt is vague, your output will be vague. Use a simple five-part structure: role, context, constraints, output format, and quality bar. Then test one variable at a time.
Most people don't have a model problem. They have a prompt problem.
You ask for "a landing page," get mush, and then assume the model is weak. Usually it isn't. It just had to guess what you meant. And it guessed wrong.
If you want consistently good outputs, use a repeatable structure instead of rewriting from scratch every time. You can do this manually, or run your draft through the prompt optimizer first and then tune from there.
If you want a fast QA layer after optimization, run this prompt engineering checklist before shipping.
The 5-part prompt structure that actually works
Keep this simple. You need five parts.
- Role: who the model should be.
- Context: what it's working with.
- Constraints: what it must avoid or include.
- Output format: exact shape of the answer.
- Quality bar: what "good" means for this task.
No magic words. No mystical framework names. Just less ambiguity.
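If you keep prompts in code, the five parts map cleanly to a small builder function. This is an illustrative sketch, not any library's API; the function and argument names are my own:

```python
def build_prompt(role, context, constraints, output_format, quality_bar):
    """Assemble the five-part structure into one prompt string.

    `role` is a single phrase; the other arguments are lists of
    bullet lines, matching the structure used in this article.
    """
    def section(title, items):
        return title + ":\n" + "\n".join(f"- {item}" for item in items)

    return "\n\n".join([
        f"You are {role}.",
        section("Context", context),
        section("Constraints", constraints),
        section("Output format", output_format),
        section("Quality bar", quality_bar),
    ])

prompt = build_prompt(
    role="a B2B SaaS lifecycle marketer",
    context=["Product: bug triage tool for engineering teams"],
    constraints=["Keep total length under 180 words"],
    output_format=["Subject line options (3)", "Final email body with one CTA"],
    quality_bar=["Clear in under 10 seconds"],
)
```

The payoff is consistency: every prompt your team writes has the same five sections in the same order, so nothing gets silently dropped.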
Raw prompt vs optimized prompt
Bad prompt:
Write a product announcement email for my SaaS.

Optimized prompt:
You are a B2B SaaS lifecycle marketer.
Context:
- Product: bug triage tool for engineering teams
- Audience: engineering managers at 20-200 person startups
- Goal: drive trial activations from existing email subscribers
Constraints:
- Keep total length under 180 words
- Avoid hype terms like "revolutionary" and "game-changing"
- Mention one concrete customer outcome with a number
Output format:
- Subject line options (3)
- Preview text options (2)
- Final email body with one CTA
Quality bar:
- Clear in under 10 seconds
- Sounds human, not ad copy
- One sentence must include a measurable result

Same task. Different result quality. Night and day.
What changes by model (and what doesn't)
The backbone stays the same for all three major assistants. That's the useful part. You don't need three totally separate prompt systems.
What tends to change:
- ChatGPT: usually responds well to explicit output sections and concise constraints.
- Claude: often benefits from clearer tone guidance and examples of what to avoid.
- Gemini: often follows structured bullet instructions cleanly when format is strict.
What should not change:
- role clarity
- context depth
- output shape
- acceptance criteria
If you want a fast baseline, draft once and run it through the prompt optimizer, then test model-specific edits from there.
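One way to keep a single backbone while testing model-specific edits is to store only the deltas. A sketch, with the per-model style notes purely illustrative (they restate the tendencies above, not official guidance from any vendor):

```python
BASE_SUFFIX = "Follow the output format exactly."

# Per-model style additions; the backbone prompt itself never changes.
MODEL_TWEAKS = {
    "chatgpt": "Use explicit section headers in your answer.",
    "claude": "Keep the tone plain; avoid marketing language.",
    "gemini": "Return every section as a bulleted list.",
}

def finalize(base_prompt: str, model: str) -> str:
    """Append the shared suffix plus any model-specific tweak."""
    tweak = MODEL_TWEAKS.get(model, "")
    return "\n\n".join(part for part in [base_prompt, BASE_SUFFIX, tweak] if part)
```

Because the backbone never changes, you can diff output quality across models knowing the only variable was the tweak line.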
A concrete optimization workflow (15 minutes)
Minute 1-3: Write the ugly first draft
Don't overthink it. Just capture intent.
Example:
Need LinkedIn posts to launch my analytics tool for indie founders.

Minute 4-6: Add hard constraints
Now force precision.
- number of posts
- format per post
- forbidden language
- target reader
- action to take
Minute 7-10: Lock the output schema
If output shape is fuzzy, you'll get uneven quality. Tell the model exactly how to return content.
Output exactly:
1) Hook
2) Body (80-120 words)
3) CTA
4) 3 hashtag options

Minute 11-15: Evaluate and rerun one variable
Don't rewrite everything at once.
Pick one thing:
- tighten tone
- shorten length
- strengthen CTA
Then rerun.
This is where most teams lose time. They change six variables and can't tell what improved the result.
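The one-variable discipline is easier to keep if each rerun is logged against the single change it tested. A rough sketch, assuming a prompt that marks its tunable spots with bracketed placeholders and a hypothetical `run_model` callable (both are my inventions for illustration):

```python
def rerun_one_variable(base_prompt, variable, new_value, run_model):
    """Apply exactly one labeled change, rerun, and record what changed."""
    changed_prompt = base_prompt.replace(f"[{variable}]", new_value)
    output = run_model(changed_prompt)
    return {"changed": variable, "value": new_value, "output": output}

# Stand-in for a real model call, so the sketch runs on its own.
fake_model = lambda p: f"(model output for: {p[:30]}...)"

result = rerun_one_variable(
    "Write a [tone] launch post under [length] words.",
    variable="tone",
    new_value="plainspoken",
    run_model=fake_model,
)
```

When a result improves, the log tells you exactly which variable earned the credit.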
Common mistakes that kill output quality
1) Asking for quality without defining quality
"Make it better" means nothing.
Say what better means:
- fewer adjectives
- shorter sentences
- one concrete example
- zero passive voice in headline
2) Missing audience context
"Write blog intro" for who? CTO? solo founder? student? The model can't infer your funnel stage reliably.
3) No output format
If you skip format, you invite randomness. That's fine for brainstorming. It's bad for production workflows.
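In a production workflow you can enforce the output format mechanically. A minimal check against the numbered schema from the workflow above; the section labels here (including "Hashtags" as shorthand for the hashtag options line) are assumptions about how you prefix each section:

```python
import re

REQUIRED_SECTIONS = ["Hook", "Body", "CTA", "Hashtags"]

def matches_schema(output: str) -> bool:
    """True only if every required section label appears, in order."""
    positions = []
    for label in REQUIRED_SECTIONS:
        match = re.search(rf"^\s*\d\)\s*{label}", output, re.MULTILINE)
        if not match:
            return False
        positions.append(match.start())
    return positions == sorted(positions)

good = "1) Hook\ntext\n2) Body\ntext\n3) CTA\ntext\n4) Hashtags\n#a #b #c"
bad = "Here are some posts you might like..."
```

A failed check can trigger an automatic rerun instead of a human catching the formatting drift downstream.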
4) Prompt stacking without cleanup
People keep appending instructions until the prompt is a junk drawer. If it's bloated, reset and rewrite from clean structure. Or use the prompt optimizer to compress and reorganize it.
Quick model-ready templates
Template: feature announcement
You are a product marketer for a B2B SaaS company.
Context:
- Feature: [feature]
- Audience: [role + company size]
- Main user pain: [pain]
Constraints:
- 140-180 words
- one measurable benefit
- avoid generic hype language
Output format:
- 3 subject lines
- 1 email body
- 1 CTA line

Template: support doc rewrite
You are a technical writer.
Context:
- Source text: [paste]
- User level: beginner
Constraints:
- remove jargon
- keep each step under 18 words
- add one troubleshooting tip
Output format:
- title
- prerequisites
- numbered steps
- common errors

Final pass checklist
Before you ship any prompt to production, check three things.
- Could a stranger run this prompt and produce a usable output?
- Does the output format match your publishing workflow?
- Did you define what to avoid, not just what to include?
If any answer is no, tighten it.
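The three checks can be roughed out as a preflight lint, assuming you store prompts as plain text with the section labels used throughout this article:

```python
def preflight(prompt: str) -> list[str]:
    """Return a list of problems; an empty list means ship it."""
    problems = []
    if "Context" not in prompt:
        problems.append("no context; a stranger could not run this")
    if "Output format" not in prompt:
        problems.append("no output format; won't match a publishing workflow")
    if not any(word in prompt.lower() for word in ("avoid", "do not", "don't")):
        problems.append("nothing is forbidden; define what to avoid")
    return problems
```

Wire it into CI or a pre-commit hook and weak prompts never reach production unflagged.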
Then test in your target model. If you're moving fast and don't want to tune by hand every time, use the prompt optimizer as your first pass, then layer model-specific edits.
Clean prompt in. Cleaner output out. That's the whole game.
Step 1: Start with the actual task
Write one sentence that states the exact job you need done, without fluff.
Step 2: Add constraints and output format
Specify word count, audience, tone, and the shape of the response so the model has rails.
Step 3: Run one revision cycle
Evaluate output quality, tighten weak sections, and rerun with one changed variable at a time.
FAQ
Why do optimized prompts perform better?
Optimized prompts reduce ambiguity. Models make fewer assumptions when role, context, and formatting rules are explicit.
Do I need different prompts for ChatGPT, Claude, and Gemini?
The core structure can stay the same. Usually only style preferences and length controls need slight model-specific adjustments.
What is the fastest way to improve a weak prompt?
Add concrete constraints and a strict output format first. Those two changes usually improve results immediately.