Most prompt engineering advice is useless. "Be specific." "Provide examples." "Give context." Everyone knows this, yet most people still get mediocre outputs.
The real difference between amateur and expert prompting isn't tricks or templates. It's understanding that AI models are pattern completion engines, not thinking machines. Structure your prompts to set up patterns that lead to the output you want.
- AI models complete patterns, not thoughts. Set up the right pattern.
- Five techniques work: role assignment, constraint stacking, few-shot examples, chain of thought, and negative constraints.
- Most failures come from vague prompts, missing context, or asking for opinions.
- The real skill is clear thinking, not prompt tricks.
The Core Mental Model
AI models predict what text should come next based on patterns in their training data. Your prompt sets up the pattern. Your job is making that pattern point toward the output you actually want.
Bad prompt: "Write me a marketing email."
Good prompt: specific role + context + constraints.
Result: generic slop vs. targeted output.
The good prompt in full:
You are a senior copywriter at a B2B SaaS company. Write a follow-up email to a prospect who attended our webinar on productivity tools but hasn't responded to our initial outreach. Tone: professional but warm, not salesy. Length: under 150 words. Include one specific reference to content from the webinar. Avoid: generic phrases like "just following up" or "touching base."
This works because it establishes clear patterns: who's writing, what they're writing, the context, and what good looks like.
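The same prompt can be assembled from its pattern-setting pieces. A minimal sketch; the function and field names here are illustrative, not a standard:

```python
def build_prompt(role, task, tone, length, include, avoid):
    """Assemble a prompt from its pattern-setting pieces:
    who's writing, what they're writing, and what good looks like."""
    return (
        f"You are {role}. {task} "
        f"Tone: {tone}. Length: {length}. "
        f"Include: {include}. Avoid: {avoid}."
    )

prompt = build_prompt(
    role="a senior copywriter at a B2B SaaS company",
    task=("Write a follow-up email to a prospect who attended our webinar "
          "on productivity tools but hasn't responded to our initial outreach."),
    tone="professional but warm, not salesy",
    length="under 150 words",
    include="one specific reference to content from the webinar",
    avoid='generic phrases like "just following up" or "touching base"',
)
print(prompt)
```

For tasks you run more than once, keeping the pieces as named arguments makes it easy to vary one constraint at a time and see what changes.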
Five Techniques That Actually Work
1. Role Assignment
Start with "You are a [specific expert role]" to prime the model to generate text matching that expertise pattern.
Why it works: The model has seen millions of examples of how different experts write. Invoking the role activates those patterns.
Examples:
- "You are a senior software architect reviewing code..."
- "You are an experienced copywriter specializing in B2B SaaS..."
- "You are a skeptical investor evaluating a pitch deck..."
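In code, role assignment is just a prefix; the helper below is a sketch showing how the same request reads under two different expert roles:

```python
def with_role(role, request):
    """Prefix a specific expert role to prime the matching expertise pattern."""
    return f"You are {role}. {request}"

request = "Assess the attached proposal and flag the three biggest risks."
architect = with_role("a senior software architect reviewing code", request)
investor = with_role("a skeptical investor evaluating a pitch deck", request)
```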
2. Constraint Stacking
Add multiple specific constraints: word count, format, tone, what to include, what to exclude.
Why it works: Each constraint narrows the possibility space. AI models default to generating common, generic patterns. Constraints force them toward specific, useful outputs.
Constraint types:
- Format: "Write as bullet points" / "Structure as H2 sections"
- Length: "Under 200 words" / "Exactly 3 paragraphs"
- Tone: "Professional but warm" / "Direct and technical"
- Content: "Must include X" / "End with a clear call to action"
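Stacking reads naturally as one labeled constraint per line. A sketch, assuming keyword names double as the constraint labels:

```python
def stack_constraints(task, **constraints):
    """Append one labeled constraint per line; each line narrows
    the possibility space the model can complete into."""
    lines = [task]
    for name, value in constraints.items():
        lines.append(f"{name.capitalize()}: {value}")
    return "\n".join(lines)

prompt = stack_constraints(
    "Write a product announcement.",
    format="three short paragraphs",
    length="under 200 words",
    tone="direct and technical",
    content="end with a clear call to action",
)
```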
3. Few-Shot Examples
Show 2-3 examples of what good output looks like before asking for new output. This is the most powerful technique for consistent quality.
Why it works: Examples are the strongest pattern signal available. The model will closely match the style, structure, and tone of your examples.
Structure:
Here are examples of the writing style I want:
Example 1: [Good example]
Example 2: [Another good example]
Now write [your request] in the same style.
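The structure above is mechanical enough to generate; a sketch that numbers the examples and appends the request:

```python
def few_shot_prompt(examples, request):
    """Show good examples first, then make the new request in the same style."""
    parts = ["Here are examples of the writing style I want:", ""]
    for i, example in enumerate(examples, start=1):
        parts.append(f"Example {i}: {example}")
    parts += ["", f"Now write {request} in the same style."]
    return "\n".join(parts)

prompt = few_shot_prompt(
    ["Short, punchy opener.", "Concrete numbers up front."],
    "a product update",
)
```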
4. Chain of Thought
Ask the model to think through the problem step by step before giving the final answer.
Why it works: Intermediate reasoning steps create better patterns for the final output. The model "shows its work" and catches logical errors.
Trigger phrases:
- "Think through this step by step..."
- "First, analyze X. Then, consider Y. Finally, recommend..."
- "Walk me through your reasoning before giving your recommendation..."
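If you label the reasoning and answer sections explicitly, you can split them apart afterward. A sketch; the REASONING/ANSWER labels are a convention chosen here, not something models emit by default:

```python
def chain_of_thought(question):
    """Ask for step-by-step reasoning before the answer, with labeled
    sections we can parse out of the response."""
    return (
        f"{question}\n"
        "Think through this step by step. "
        "Label your reasoning 'REASONING:' and your final answer 'ANSWER:'."
    )

def split_answer(response):
    """Separate the reasoning steps from the final answer
    (assumes the model followed the labels)."""
    reasoning, _, answer = response.partition("ANSWER:")
    return reasoning.replace("REASONING:", "").strip(), answer.strip()

reasoning, answer = split_answer("REASONING: check units, then scale\nANSWER: 42")
```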
5. Negative Constraints
Tell the model what NOT to do. This is surprisingly effective because models default to common patterns that are often generic and unhelpful.
Why it works: Without negative constraints, AI gravitates toward the most common patterns. Those patterns are often corporate jargon, hedged opinions, and safe generalities.
Useful negative constraints:
- "Don't use marketing jargon."
- "Avoid phrases like 'in today's fast-paced world.'"
- "Don't hedge with 'it depends' without giving specific guidance."
- "Don't list every option. Give me your top recommendation."
Common Prompt Failures and Fixes
Too Vague
Bad: "Help me with my resume."
Fixed: "Review my resume for a senior product manager role at a B2B SaaS company. Focus on: quantified achievements, relevant keywords for ATS systems, and whether the narrative shows clear career progression."
No Context
Bad: "Is this a good idea?"
Fixed: Provide complete context. What's the idea? What are your constraints? What does success look like?
Asking for Opinions
Bad: "What do you think about X?"
Fixed: "Analyze X using [specific framework]. List the top 3 pros, top 3 cons, and your recommendation with reasoning. Be direct."
Turning a vague request into a strong prompt follows four steps:
1. Identify the core request. What exactly do you want? Not vaguely. Specifically.
2. Add necessary context. What does the AI need to know? Background, constraints, audience, purpose.
3. Specify success criteria. What makes the output good? Format, length, tone, what to include/avoid.
4. Iterate based on output. First output not right? Identify what's missing and add constraints.
The Iteration Loop
Great prompts rarely work perfectly on the first try. Expect to iterate.
1. Write initial prompt with your best guess at role, context, and constraints.
2. Evaluate the output against your actual criteria.
3. Identify specifically what's wrong or missing.
4. Add constraints, examples, or context to fix the gaps.
5. Repeat until satisfied.
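The loop can be sketched in a few lines. This is a skeleton with a stubbed model call; in real use, `call_model` would wrap whatever API you're using, and `evaluate`/`fix` encode your own criteria:

```python
def iterate(prompt, call_model, evaluate, fix, max_rounds=5):
    """Generate, evaluate, and patch the prompt until the
    output passes or the rounds run out."""
    for _ in range(max_rounds):
        output = call_model(prompt)
        problems = evaluate(output)     # e.g. ["too long", "missing call to action"]
        if not problems:
            return output
        prompt = fix(prompt, problems)  # add constraints targeting each gap
    return output

# Stubbed demo: the "model" echoes the prompt; the evaluator wants the word "brief".
result = iterate(
    "Summarize the report.",
    call_model=lambda p: p,
    evaluate=lambda out: [] if "brief" in out else ["too long"],
    fix=lambda p, probs: p + " Keep it brief.",
)
```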
Most people stop after step 2 and conclude "AI isn't that good." The real value is in steps 3-5.
Prompt Templates
For tasks you do repeatedly, build reusable templates:
[ROLE]: {who the AI should be}
[TASK]: {what you need done}
[CONTEXT]: {relevant background}
[FORMAT]: {desired output structure}
[CONSTRAINTS]: {length, tone, what to include/avoid}
[EXAMPLES]: {optional: show what good looks like}
Fill in the blanks. Iterate over time as you learn what works.
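The template above maps directly onto Python's `string.Template`; a sketch with illustrative values:

```python
from string import Template

# The bracketed labels mirror the reusable template from the text.
PROMPT_TEMPLATE = Template(
    "[ROLE]: $role\n"
    "[TASK]: $task\n"
    "[CONTEXT]: $context\n"
    "[FORMAT]: $format\n"
    "[CONSTRAINTS]: $constraints"
)

prompt = PROMPT_TEMPLATE.substitute(
    role="a senior technical recruiter",
    task="rewrite this job posting",
    context="early-stage startup, first backend hire",
    format="three short sections with headers",
    constraints="under 300 words; no buzzwords",
)
```

`substitute` raises `KeyError` if a blank is left unfilled, which is a useful guard for templates you reuse often.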
Advanced Patterns
Once you've mastered the basics, these patterns unlock more sophisticated use cases:
System Prompts vs User Prompts
Most AI interfaces let you set a system prompt (persistent context) separate from user prompts (individual requests). Use this wisely:
System prompt: Persistent role, tone, and constraints that apply to all interactions. "You are a senior copywriter. Always write in a direct, conversational tone. Never use jargon."
User prompt: Specific task for this interaction. "Write the headline for our new product launch."
This separation lets you maintain consistency across many interactions without repeating yourself.
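Most chat APIs represent this split as a list of role-tagged messages. The exact schema varies by provider; the dict shape below follows the common `role`/`content` convention:

```python
SYSTEM_PROMPT = (
    "You are a senior copywriter. Always write in a direct, "
    "conversational tone. Never use jargon."
)

def conversation(*user_prompts):
    """Pair one persistent system message with any number of
    per-task user messages."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for prompt in user_prompts:
        messages.append({"role": "user", "content": prompt})
    return messages

msgs = conversation("Write the headline for our new product launch.")
```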
Multi-Turn Refinement
Don't try to get perfect output in one prompt. Use conversation to refine:
- First prompt: Get initial output
- Follow-up: "Make it more conversational"
- Follow-up: "Shorten the introduction"
- Follow-up: "Add a specific example in paragraph 2"
Each turn narrows toward what you want. This is often faster than trying to specify everything upfront.
Output Templating
For structured outputs, provide the exact template:
Return your response in this exact format:
SUMMARY: [one sentence]
KEY POINTS:
- [point 1]
- [point 2]
- [point 3]
RECOMMENDATION: [your recommendation]
CONFIDENCE: [high/medium/low with reasoning]
This eliminates ambiguity and makes outputs consistent and parseable.
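"Parseable" is the point: a templated response can be pulled apart with a few regexes. A sketch against the exact format above; assumes the model followed the template:

```python
import re

def parse_templated(response):
    """Parse the SUMMARY / KEY POINTS / RECOMMENDATION / CONFIDENCE
    template into a dict (assumes the model followed the format)."""
    fields = {}
    fields["summary"] = re.search(r"SUMMARY:\s*(.+)", response).group(1).strip()
    points = re.search(r"KEY POINTS:\s*\n((?:- .+\n?)+)", response).group(1)
    fields["key_points"] = [line[2:].strip() for line in points.strip().splitlines()]
    fields["recommendation"] = re.search(r"RECOMMENDATION:\s*(.+)", response).group(1).strip()
    fields["confidence"] = re.search(r"CONFIDENCE:\s*(.+)", response).group(1).strip()
    return fields

sample = """SUMMARY: Migration is feasible this quarter.
KEY POINTS:
- Low data volume
- One legacy dependency
RECOMMENDATION: Proceed with a staged rollout.
CONFIDENCE: high, risks are well understood"""
parsed = parse_templated(sample)
```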
Adversarial Prompting
For critical decisions, prompt the AI to argue against itself:
"Now argue the opposite position. What would a skeptic say about this recommendation? What's the strongest case against it?"
This surfaces weaknesses in reasoning that a single-perspective prompt would miss.
Model-Specific Notes
Different models respond differently to the same prompts:
Claude: Responds well to clear structure and explicit reasoning requests. Particularly good with long-form content and nuanced analysis. Can be overly cautious; sometimes needs permission to be direct.
ChatGPT: Strong at creative tasks and conversation. Tends toward verbose output; use word count constraints aggressively. Good at following complex instructions but may need explicit formatting guidance.
Gemini: Excels at multimodal tasks (images + text). Good factual recall but verify important claims. Responds well to structured prompts.
The techniques in this guide work across all models, but expect some variation in how strictly each follows your constraints.
The Real Skill
Here's the uncomfortable truth: prompt engineering is mostly about clear thinking, not clever techniques.
If you can't articulate exactly what you want, no prompt structure will save you. The AI amplifies clarity and confusion equally.
The best prompt engineers are people who:
- Know precisely what they want before typing
- Can articulate success criteria explicitly
- Understand their audience and context deeply
- Iterate based on feedback
The techniques help. But they're multipliers on your underlying clarity.
For more on using AI effectively, check out the best AI tools for solopreneurs and Claude vs ChatGPT for coding.