A prompt is not a wish. It is a design brief, a contract, and sometimes a small operating system.
The fastest users get answers. The best users get leverage.
Read it once for orientation. Use it forever as a field manual.
This is not a warmed-over list of prompt tips. It is a rebuilt handbook for April 2026: modern structure, all-new examples, model-aware strategy, richer business use cases, and a stronger visual system. It keeps the spirit of clear prompting, but upgrades it into a full practice.
You will learn the classic fundamentals - context, specificity, examples, constraints, roles, structure - plus the newer operator skills that matter in real work: structured outputs, long-context tactics, prompt chaining, evals, caching, and provider-specific differences.
Do not ask, "What is the perfect prompt?" Ask, "What information would make success obvious?" The model cannot infer what you never specified.
Founders, creators, marketers, consultants, analysts, product people, developers, educators, and anyone tired of vague AI output.
If you can type into a chat box, you can use this playbook. If you build systems, teams, or products, you can compound it.
From first prompts to repeatable high-leverage systems.
| Part | What you will learn |
|---|---|
| I | Prompting in 2026, the failure modes, and the 7-part Prompt Spine |
| II | Model-aware playbooks for OpenAI, Claude, and Gemini |
| III | High-value prompts for marketing, sales, writing, and content businesses |
| IV | Coding, long-context workflows, structured outputs, evals, caching, and debugging |
| Appendix | Template library, drills, and references |
| Chapter | Focus |
|---|---|
| 1 | Why decent users still get mediocre AI output |
| 2 | The 7-part Prompt Spine |
| 3 | Response-shaping tools: examples, constraints, roles, format, tags, variables |
| 4 | The OpenAI playbook |
| 5 | The Claude playbook |
| 6 | The Gemini playbook |
| 7 | Marketing prompts that can make money |
| 8 | Sales, consulting, and operations prompts |
| 9 | Writing and creator workflows |
| 10 | Coding prompts that ship better software |
| 11 | Advanced prompt systems: chains, long context, schemas, evals, caching |
| 12 | Prompt debugging clinic |
| 13 | Template library |
| 14 | Practice lab |
| 15 | References and further study |
This edition synthesizes current guidance from official OpenAI, Anthropic, and Google Gemini documentation, plus practical deployment notes from Microsoft Learn and AWS. Specific ideas are cited in the reference appendix using source codes such as [O1], [A3], and [G1].
Good prompts are not polite guesses. They are explicit instructions with a finish line.
Prompting got more powerful when models got stronger. It also got less forgiving of fuzzy briefs.
Most weak prompts fail for a boring reason: the user kept the success criteria inside their own head. The model saw a request. The user imagined a result. Those were not the same thing.
Modern models can write code, summarize books, critique strategy, and generate polished assets. But stronger models do not eliminate ambiguity. They often amplify it. A vague request can now produce a very confident, beautifully formatted wrong answer.
"Help with marketing." The model guesses your market, audience, budget, timeline, and goal.
You state the task, but not the stakes, constraints, or definition of done.
You specify the situation, objective, resources, and desired format.
You add quality checks, evidence rules, reusable variables, and system-level structure.
Treat every prompt like one of these: a design brief, a contract, or a small operating system.
The model is fast. Your job is to make the target visible.
If the output would cost you money, reputation, or time to fix, the prompt deserves more structure than a casual chat.
Help me market my product.
You are a direct-response growth strategist.
I sell a $79 Notion dashboard bundle for wedding photographers.
My buyers are solo operators who hate bookkeeping but want cleaner monthly cash-flow tracking.
I want 3 low-cost acquisition angles I can test in the next 30 days with less than $500.
Return a table with:
- angle name
- channel
- core message
- proof hook
- effort level
- what we would learn if it works or fails
Avoid generic advice like "post consistently on social media."
A reusable blueprint that absorbs the classic basics and the modern upgrades.
The old beginner advice was right, but incomplete: be clear, specific, and detailed. The problem is that beginners rarely know which details matter. The Prompt Spine gives you a repeatable sequence.
What exact job should the model do?
What situation, audience, or business reality shapes the answer?
What source material, data, constraints, or facts must be used?
What must or must not happen? Budget, tone, legal, scope, timeline.
What does "good" look like in phrasing, structure, or behavior?
What format, length, sections, and audience fit the result?
How should the answer verify itself before finishing?
| Spine block | Ask yourself | Micro-example |
|---|---|---|
| Task | What single verb describes the job? | Draft, diagnose, compare, rewrite, estimate, classify |
| Context | What world is this operating inside? | Seed-stage SaaS, local clinic, B2B consulting, personal brand |
| Inputs | What should the model use, and what should it ignore? | Use the transcript below, the feature list, and these three objections |
| Constraints | What boundaries matter? | No hype, under 120 words, US audience, no legal claims |
| Examples | Can I show one good sample? | Mirror this style: direct, spare, evidence-first |
| Output | How should the answer arrive? | Return JSON, a table, a brief, or a two-step plan |
| Checks | What should it verify? | Flag weak assumptions, list missing facts, show confidence level |
Role: [optional but useful when expertise matters]
Task: [the single job]
Context: [business, audience, situation]
Inputs: [facts, data, excerpts, examples]
Constraints: [must, must not, scope, timeline, risk]
Output: [format, length, sections, reader]
Checks: [verify assumptions, cite evidence, list unknowns]
Role: developmental editor
Task: rewrite my article introduction
Context: audience is startup founders with limited time
Inputs: use only the notes below
Constraints: no cliches, no rhetorical questions, under 130 words
Output: 3 alternative intros with a one-line rationale each
Checks: make sure each intro has a concrete tension in sentence 1
Role: product strategist
Task: prioritize feature requests
Context: our app serves freelance accountants on mobile
Inputs: feature list, support tickets, churn notes
Constraints: team of 3 engineers, 6-week cycle, no platform rewrite
Output: ranked table with impact, complexity, risk, and recommendation
Checks: flag requests that sound loud but low-value
Most users stop after Output. Professionals add Checks. That single block often cuts hallucinations, fluff, and overconfident nonsense because the model is forced to inspect its own work before it hands it over.
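If you build prompts in code, the Spine can become a tiny helper that refuses to send a brief with missing blocks. A minimal sketch in Python; the function and block names are illustrative, not from any SDK:

```python
# Minimal sketch (illustrative names, not a library API): assemble the
# 7-part Prompt Spine from a dict and fail loudly on missing blocks.
SPINE_ORDER = ["Role", "Task", "Context", "Inputs", "Constraints", "Output", "Checks"]

def build_spine_prompt(blocks: dict[str, str]) -> str:
    # Role is optional; every other block must be stated, not left implicit.
    missing = [b for b in SPINE_ORDER if b != "Role" and b not in blocks]
    if missing:
        raise ValueError(f"Spine blocks left implicit: {missing}")
    return "\n".join(f"{b}: {blocks[b]}" for b in SPINE_ORDER if b in blocks)

print(build_spine_prompt({
    "Task": "rewrite my article introduction",
    "Context": "audience is startup founders with limited time",
    "Inputs": "use only the notes below",
    "Constraints": "no cliches, no rhetorical questions, under 130 words",
    "Output": "3 alternative intros with a one-line rationale each",
    "Checks": "make sure each intro has a concrete tension in sentence 1",
}))
```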
These are the levers that turn a decent brief into a useful one.
Tell the model what to avoid and what to do instead.
Weak: "Do not sound salesy."
Better: "Avoid hype and exaggerated claims. Use plain language, product proof, and concrete customer outcomes."
Examples beat adjectives. "Make it sharper" is vague. A good before-and-after sample is a teaching instrument.
Anthropic recommends multishot examples as a core best practice, and Google explicitly recommends few-shot prompting whenever practical [A1][G1].
Say more than "be concise." Give a shape:
Roles work best when they supply domain judgment, not theater.
Useful: "act as a retention strategist for subscription products"
Less useful: "act as a legendary genius wizard marketer"
When your prompt mixes instructions, source text, data, and examples, boundaries matter.
OpenAI, Anthropic, and Google all support structured prompt sections well; XML-style tags are especially useful for complex inputs [O1][A2][G1].
If a value repeats, make it a variable. This reduces editing mistakes and lets you scale one prompt across many campaigns, clients, or experiments.
Write a launch email for my course.
You are an email copywriter for creator-led launches.
Write a launch email for my cohort course "Client Systems in a Weekend."
Audience: freelance designers earning $3k-$8k/month who feel buried in admin.
Goal: drive applications, not impulse sales.
Promise: install a lightweight project pipeline in 2 days.
Proof: 4 case studies, median time saved is 5 hours/week.
Tone: calm, capable, zero hype.
Constraints: no fake urgency, no all-caps, no exclamation-heavy copy.
Output: subject line + preview text + email body under 320 words.
Checks: make sure the CTA is application-focused and the body mentions one concrete proof point.
If you ever type "help me" or "make this better," stop and add at least four blocks from the Prompt Spine before sending.
<task>Summarize the strongest buyer objections.</task>
<transcript>
[customer interview transcript goes here]
</transcript>
<format>
Return a table with objection, evidence quote, and recommended copy response.
</format>
Offer = {{offer_name}}
Audience = {{audience}}
Goal = {{goal}}
Tone = {{tone}}
Create 5 ad hooks for {{offer_name}} aimed at {{audience}}.
Optimize for {{goal}}.
Use a {{tone}} voice.
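If you template prompts in code, the same {{variable}} syntax drops into a few lines of Python. A minimal sketch (hypothetical helper, not a library API) that fails loudly on unfilled variables instead of sending a broken prompt:

```python
# Minimal sketch: fill {{variable}} placeholders from one source of
# truth, so a value edited once updates every prompt that uses it.
import re

def fill(template: str, variables: dict[str, str]) -> str:
    def replace(match: re.Match) -> str:
        key = match.group(1)
        if key not in variables:
            raise KeyError(f"Unfilled prompt variable: {key}")
        return variables[key]
    return re.sub(r"\{\{(\w+)\}\}", replace, template)

template = (
    "Create 5 ad hooks for {{offer_name}} aimed at {{audience}}.\n"
    "Optimize for {{goal}}. Use a {{tone}} voice."
)
print(fill(template, {
    "offer_name": "Client Systems in a Weekend",
    "audience": "freelance designers",
    "goal": "applications, not impulse sales",
    "tone": "calm, capable",
}))
```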
The core principles travel. The emphasis changes by provider, workflow, and model family.
Prompt like you are building a product, not composing a one-off message.
OpenAI's current documentation makes two big points that matter in production. First: different model types need different prompting styles. Second: if you care about consistency, prompt engineering is inseparable from snapshot pinning and evals [O1].
| What to emphasize | Why it matters |
|---|---|
| Put durable behavior in the system or developer prompt | Tone, style, safety boundaries, and house rules should not be repeated every turn [O1][O2]. |
| Put task details and examples in user messages | This keeps the reusable scaffold stable while the task-specific details change [O2]. |
| Clamp verbosity and output shape explicitly | OpenAI guidance for newer GPT-5.x prompts emphasizes concise, structured output controls [O3]. |
| Pin model snapshots in production | Prompt behavior can drift across snapshots; pinning protects consistency [O1]. |
| Run evals when prompts change | A beautiful prompt is still a guess until you measure it [O1]. |
| Use prompt objects and variables for reuse | OpenAI provides prompt versioning and variables to manage prompts as assets [O2]. |
| Prefer structured outputs for machine-readable responses | JSON schema is more reliable than heavily worded formatting instructions [O4]. |
Define tone, boundaries, output discipline, and reusable behavior once. Keep the task layer lighter and more variable.
Clear destination, constraints, and evaluation standards. Less rambling instruction. More explicit finish conditions.
Versioned prompts, test suites, and structured outputs. Prompting becomes part of product engineering.
One underused pattern is to separate behavioral policy from task request.
This often improves reuse and keeps multi-step workflows cleaner [O2].
System / developer message:
You are a product marketing analyst.
Default to concise answers: 1 short overview paragraph, then up to 5 bullets.
Use plain English.
If evidence is weak, say so directly.
When asked for structured output, follow the schema exactly.

User message:
Analyze these 14 churn survey responses for our invoicing app.
Audience: founder and head of product.
Goal: identify the top 3 retention opportunities for the next quarter.
Use only the survey text below.
Return:
1) one-paragraph summary
2) table with issue, evidence quote, probable root cause, suggested fix
3) one paragraph on what not to overreact to
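If you call the API directly, the same split maps onto message roles. A minimal sketch with the OpenAI Python SDK; the model snapshot and wording are placeholders, so treat this as illustrative rather than canonical:

```python
# Illustrative sketch with the OpenAI Python SDK: durable policy in the
# system message, per-task details in the user message, snapshot pinned.
from openai import OpenAI

client = OpenAI()

SYSTEM_POLICY = (
    "You are a product marketing analyst. Default to concise answers: "
    "1 short overview paragraph, then up to 5 bullets. Use plain English. "
    "If evidence is weak, say so directly."
)

def analyze(task: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-2024-08-06",  # pinned snapshot, not a floating alias [O1]
        messages=[
            {"role": "system", "content": SYSTEM_POLICY},  # durable behavior
            {"role": "user", "content": task},             # task-specific details
        ],
    )
    return response.choices[0].message.content
```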
A few patterns matter far more now than they did in beginner prompt guides.
Do not ask for "concise" and hope for the best. Specify the shape. Example: "2 short paragraphs, then a 4-row table. No preamble." This is directly aligned with newer OpenAI prompting guidance [O3].
When your app needs predictable keys, use structured outputs instead of writing prompts like "You MUST return valid JSON or your response will fail." OpenAI explicitly recommends Structured Outputs over older JSON-only approaches when supported [O4].
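Here is one way that looks in practice, sketched with the OpenAI Python SDK's Pydantic parse helper; helper names vary across SDK versions, so verify against the current docs [O4]:

```python
# Sketch: schema-first output instead of "you MUST return valid JSON".
from openai import OpenAI
from pydantic import BaseModel

class ChurnFinding(BaseModel):
    issue: str
    evidence_quote: str
    suggested_fix: str

class ChurnReport(BaseModel):
    summary: str
    findings: list[ChurnFinding]

client = OpenAI()
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",  # pinned snapshot
    messages=[{"role": "user", "content": "Analyze the churn surveys below..."}],
    response_format=ChurnReport,  # the schema, not prose, enforces the format
)
report = completion.choices[0].message.parsed  # a validated ChurnReport instance
```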
The best prompt is the one that wins on your tasks. Maintain a small eval set of real examples and rerun it when you change prompts, models, or tools [O1].
OpenAI now supports prompt objects with versioning and variables. Even if you are not using the API, copy the habit: save versions, label changes, and keep a changelog [O2].
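Even outside the API, a few lines of local bookkeeping enforce the habit. A hypothetical sketch; nothing here is an OpenAI feature, just a changelog you own:

```python
# Hypothetical local prompt registry: save versions, label changes,
# keep a changelog.
import hashlib
from datetime import date

PROMPT_LOG: list[dict] = []

def save_prompt_version(name: str, text: str, change_note: str) -> str:
    """Record a prompt version; returns a short content hash as the id."""
    version = hashlib.sha256(text.encode()).hexdigest()[:8]
    PROMPT_LOG.append({
        "name": name,
        "version": version,
        "date": date.today().isoformat(),
        "change_note": change_note,  # what changed and why
        "text": text,
    })
    return version

v1 = save_prompt_version("churn_summary", "Summarize churn drivers...", "initial version")
```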
If you run an agency or consultancy, maintain a library of versioned prompt packets for audits, proposals, email teardown, competitor analysis, and client reporting. You will deliver faster and sound more consistent across the team.
| If you want... | Prompt for this instead |
|---|---|
| Less rambling | "Answer in 3 bullets max. No preamble." |
| Cleaner machine output | Use a schema or a field list with strict required keys. |
| Better reliability on repeated tasks | Pin model snapshot + save prompt version + run eval set. |
| More honest answers | Add checks like "state uncertainty and missing data explicitly." |
Claude rewards clarity, examples, structured sections, and strong long-context habits.
Anthropic's own prompt engineering overview starts in a very healthy place: define success criteria, define how you will test them, then improve a real prompt [A1]. That mindset alone separates casual users from serious builders.
Anthropic documents XML tags as a major clarity tool because they reduce instruction/content confusion, make prompts easier to edit, and improve parseability of outputs [A2].
This is especially useful for contract review, multi-document comparison, content editing, and quote-grounded research.
<role>
You are a retention researcher for subscription businesses.
</role>
<documents>
<document id="survey_1">
<source>Q1 churn survey export</source>
<document_content>
[survey text here]
</document_content>
</document>
<document id="support_1">
<source>Support tickets from March</source>
<document_content>
[ticket text here]
</document_content>
</document>
</documents>
<task>
First, quote the most relevant lines from the documents.
Then identify the top 3 churn drivers.
Then recommend one high-leverage retention experiment for each driver.
</task>
<formatting>
Return sections: Evidence, Drivers, Experiments.
Do not use evidence that is not quoted.
</formatting>

Anthropic reports that putting large context at the top and the query at the end can improve response quality on complex long-context tasks, with tests showing up to 30 percent quality gains in some cases [A3].
| Claude habit | Practical payoff |
|---|---|
| Quote first, then reason | Reduces hallucinated claims in long documents [A3]. |
| Use XML tags consistently | Cleaner separation of context, examples, and task [A2]. |
| Chain complex prompts | Lets you inspect intermediate work instead of trusting one giant jump [A1]. |
| Define success criteria before rewriting prompts | Improves iteration discipline [A1]. |
This is where a good chat prompt becomes a reliable workflow.
Anthropic explicitly notes that chain-style workflows remain useful even when models can think deeply, because chaining lets you inspect intermediate steps, route tasks, or insert tools at controlled points [A1].
Anthropic prompt caching works best when the reusable prefix comes first: tools, then system, then messages. Static examples, style guides, and long documents belong early [A4].
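In code, that ordering looks like marking the end of the static prefix. A sketch with the Anthropic Python SDK; the model id and parameter details are illustrative, so check the current caching docs [A4]:

```python
# Sketch: long, static material sits first in the system block, and
# cache_control marks the end of the reusable prefix [A4].
import anthropic

client = anthropic.Anthropic()

STYLE_GUIDE_AND_EXAMPLES = "..."  # long static style guide, examples, documents

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # swap in a current model id
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": STYLE_GUIDE_AND_EXAMPLES,        # stable prefix, cacheable
            "cache_control": {"type": "ephemeral"},  # cache boundary
        }
    ],
    messages=[{"role": "user", "content": "Review the draft below..."}],
)
print(response.content[0].text)
```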
If you want a specific voice or format, do not stack ten adjectives. Show a short example of the style you want. Anthropic recommends multishot examples as one of its most broadly effective techniques [A1].
You can ask Claude to return XML-tagged sections such as <summary>, <risks>, and <recommendation>. This improves downstream parsing and review [A2].
If you analyze long client documents, recordings, contracts, or user research, Claude-style evidence-first packets are a premium service. Clear quote extraction makes your deliverables feel more defensible and more expensive.
Gemini rewards clear structure, direct instruction, and few-shot examples more than most beginners expect.
Google's Gemini documentation is wonderfully blunt: prompt design is iterative, instructions should be clear and specific, and you should include few-shot examples whenever possible [G1]. That last point is stronger than what many users assume.
| Gemini guidance | Why it matters |
|---|---|
| Clear, specific instructions [G1] | Gemini responds well to direct prompts without persuasive fluff. |
| Always use few-shot when practical [G1] | Examples regulate formatting, phrasing, and scope more reliably than loose description. |
| Use consistent structure and delimiters [G1] | Google explicitly recommends XML-style tags or Markdown headings used consistently. |
| Put system behavior in system instructions [G2] | Role, critical constraints, and persistent behavior should live there when available. |
| For large context, put the context first and the question last [G1] | This mirrors strong long-context habits across providers. |
| Use JSON schema for structured outputs [G3] | Predictable, parsable outputs reduce retry loops and fragile post-processing. |
Google says prompt examples are often so effective that you can sometimes remove instructions if the examples already teach the task clearly [G1]. In practice, two or three sharply chosen examples can outperform a paragraph of explanation.
Gemini exposes system instructions in its generation config. That makes it natural to separate stable behavior from the live request, just as with other providers [G2].
System instruction:
You classify inbound leads for a boutique analytics consultancy.
Be direct, concise, and businesslike.
Return JSON only.
User prompt:
Classify each lead into one of: high_fit, medium_fit, low_fit.
Use this schema:
{
"lead_name": "string",
"score": "integer 1-10",
"bucket": "high_fit | medium_fit | low_fit",
"reason": "string",
"next_step": "string"
}
Examples:
Lead: 20-person ecommerce brand, no analyst, $40k monthly ad spend
Result: high_fit because attribution cleanup and dashboard work are immediate pain points
Lead: student building a side project with no budget
Result: low_fit because there is no consulting budget and no urgent business need
Now classify the following lead notes:
[lead notes here]

For structured outputs, Google supports JSON schema and recommends strong typing, clear property descriptions, and validation in your application code [G3]. In plain English: the schema is part of the prompt.
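A sketch of that split with the google-generativeai Python SDK; names and the model id are illustrative, and your application should still validate the parsed JSON [G2][G3]:

```python
# Sketch: stable behavior in the system instruction, JSON-typed output
# for the live request.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=(
        "You classify inbound leads for a boutique analytics consultancy. "
        "Be direct, concise, and businesslike. Return JSON only."
    ),
)

result = model.generate_content(
    "Classify the lead notes below...\n[lead notes here]",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
    ),
)
lead = json.loads(result.text)  # the schema is part of the prompt; still validate
```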
Portable prompting means knowing what changes and what stays stable.
Clear system rules, output-shape control, versioned prompts, eval-driven iteration, structured outputs.
Verbosity control, snapshot pinning, schema-first design, explicit finish conditions.
Long-context reasoning, XML-tagged document workflows, quote-grounded synthesis, thoughtful chains.
Examples, XML boundaries, evidence-first prompts, query-at-end long-context packets.
Direct instructions, few-shot formatting control, system instructions, JSON schema output, chain prompts.
Clear structure, example-led teaching, context-first packaging, schema descriptions.
| Portable principle | OpenAI accent | Claude accent | Gemini accent |
|---|---|---|---|
| Separate stable behavior from task | system/developer prompt | role + structured sections | system instruction |
| Teach with examples | use when behavior or format matters | multishot is central | few-shot is strongly recommended |
| Structure long context | delimiters, scoped sections | XML docs + query at end | context first, question last |
| Make outputs machine-readable | structured outputs / schema | tagged outputs or parser-friendly sections | JSON schema output |
| Improve over time | evals + snapshot pinning | success criteria + prompt iteration | iterative prompt refinement |
The transferable core is simple: specify the job, package the context, bound the scope, teach by example, define the output, and evaluate the results. Provider differences change emphasis, not the fundamentals.
The point is not prettier output. The point is better offers, better software, faster work, and more revenue.
Use AI to shorten the distance between market insight and campaign execution.
Most marketing prompts fail because they ask for assets before strategy. A landing page is not the first problem. The first problem is usually positioning, buyer tension, proof, and channel fit.
Useful for consultants, creators, agencies, and software founders. Better positioning often improves every downstream asset.
You are a positioning strategist for premium service businesses.
I run a small consultancy that helps Shopify brands clean up tracking and reporting.
Our best clients do $80k-$500k/month in revenue and are frustrated by conflicting data across Meta, Google, and Shopify.
Create 3 positioning angles. For each angle include:
- who it is for
- painful current state
- core promise
- strongest proof mechanism
- risk of sounding generic
- homepage headline draft
Avoid buzzwords like "unlock growth" or "supercharge."
Good when you need 10+ hooks fast but do not want 10 copies of the same idea.
Act as a paid social creative strategist.
Product: a weekly meal-prep subscription for busy nurses.
Goal: find ad angles we can test on Instagram Reels and TikTok.
Audience: hospital shift workers, mostly women 24-39, low time, inconsistent meal habits.
Return a matrix with 8 angles.
Columns: angle, emotional trigger, visual concept, opening line, credibility cue, possible objection.
Make the angles distinct from one another.
When a page gets traffic but does not convert, you usually need sharper message hierarchy, proof, and objection handling.
You are a conversion copywriter. Rewrite the structure for our landing page.
Offer: a $149 mini-course that teaches Etsy sellers how to photograph products with just a phone.
Audience: handmade sellers with weak visuals and tiny budgets.
Goal: improve first-time buyer conversion from Pinterest traffic.
Output:
- hero section
- 3 problem bullets
- promise section
- proof section
- FAQ focused on objections
- CTA text
Tone: practical and encouraging. No fake scarcity.
Most launches underperform because they rely on one clever email instead of a sequence that escalates tension and proof.
Create a 5-email launch sequence.
Offer: a live workshop called "Fix Your Freelance Proposal Funnel."
Audience: solo consultants making inconsistent revenue.
Goal: drive webinar registrations first, then workshop sales.
For each email include:
- angle
- subject line
- opening hook
- key proof or story
- CTA
Sequence logic:
1) problem awareness
2) cost of inaction
3) proof
4) objection handling
5) final reminder
Ask for angles, objections, proof mechanisms, and learning goals. These are closer to revenue than generic copy blocks.
Prompting becomes more valuable when one output feeds the next.
| Stage | Prompt target | Why it creates value |
|---|---|---|
| Research | Jobs-to-be-done, pain, objections, proof | Improves product-market message fit |
| Creative strategy | Angle matrix, hook library, offer framing | Gives you more distinct tests |
| Asset creation | Pages, emails, ad scripts, social posts | Compresses production time |
| Review | Consistency, compliance, clarity, CTA strength | Reduces embarrassing mistakes |
| Iteration | What worked, what failed, what to test next | Turns campaigns into a learning system |
Review the draft landing page below. Score it from 1-10 on:
- clarity of promise
- specificity of pain
- trust and proof
- objection handling
- CTA strength
Then rewrite only the weakest section.
Do not rewrite the whole page unless the structure is fundamentally broken.
Explain your reasoning briefly.
When reviewing marketing copy, ask the model to rewrite only the weakest section. This keeps the revision surgical and reduces the risk of losing what already works.
AI is extremely profitable when it helps you think, qualify, package, and communicate faster.
You are helping me write 1-to-1 prospecting emails for service businesses.
Target: small accounting firms with outdated websites and weak local search visibility.
My service: website repositioning + local SEO clean-up.
Use the business notes below.
Write 3 opening lines that feel genuinely observed, not scraped.
Then write one full email under 140 words.
No fake flattery. No "just checking in." No "I noticed you are a leader in the space."
Act as a sales strategist.
Based on the notes below, prepare me for a discovery call with a $6k/month prospect.
Return:
- likely pains
- likely objections
- 5 diagnostic questions
- 3 ways this call could go off track
- one concise summary I can say at the end if the fit is strong
Turn the raw notes below into an executive summary for a client audit.
Audience: founder and marketing manager.
Tone: candid, calm, useful.
Return:
- 1-paragraph overview
- top 5 findings
- impact of each finding
- recommended next action
- what not to prioritize yet
Build a standard operating procedure from the process notes below.
Audience: a new hire with no prior context.
Include:
- purpose
- tools needed
- step-by-step process
- common mistakes
- escalation rules
- checklist for completion
Use plain language.
Operations prompts make businesses less fragile. That is monetizable. Agencies sell retainers. Consultants sell audits. Operators save founder time. All three benefit from strong prompt packets.
| Use case | Prompt ingredient most people forget | Why it matters |
|---|---|---|
| Prospecting | relationship stage | The right email to a cold lead is wrong for a warm referral. |
| Discovery prep | deal size and buying committee | A solo buyer and a 5-person committee need different questions. |
| Audit reports | audience seniority | Executives need decisions, not a wall of observations. |
| SOPs | failure modes | New hires break workflows where the document is vague. |
Prompting is not just for generating words. It is for building systems around ideas.
Writers usually need help in five places: finding a sharp angle, structuring a piece, preserving voice, extracting good lines from messy notes, and repurposing one idea across formats. Prompting helps most when you make the source material explicit and the voice constraints visible.
Useful for founders, educators, and solo creators who have ideas but no editorial rhythm.
You are my newsletter editor.
Audience: independent professionals trying to build a reputation online.
My voice: direct, curious, slightly contrarian, no motivational fluff.
Use the notes below to create:
- 3 possible subject lines
- one tight intro
- 3 section headings
- a closing takeaway
Do not invent examples I did not mention.
If a claim feels weak, mark it for revision instead of polishing it.
Great for agencies and creators who need LinkedIn posts, short scripts, and email content from the same core thought.
Using the article below, create:
1) a LinkedIn post under 220 words
2) a 45-second talking-head script
3) a 5-bullet email version
Preserve the same core argument across all 3 assets.
Tone: expert, not preachy.
Avoid repeating the same opening sentence across formats.
Courses, guides, and toolkits make more money when they are built from real audience friction, not a pile of disconnected lessons.
Act as an instructional designer.
I want to create a paid guide for first-time virtual assistants who keep losing clients because their communication feels chaotic.
Create a practical table of contents. For each section include:
- the learner problem
- the lesson goal
- one exercise
- one checklist or template to include
Keep it action-first, not theory-first.
Essential if you want better prose but do not want the model to sand your voice down to corporate mush.
Edit the paragraph below for clarity and rhythm.
Keep my voice direct and slightly informal.
Do not remove my opinion.
Do not make it sound like a corporate article.
Return:
- edited paragraph
- 3 brief notes explaining the changes
When voice matters, feed the model a style sample or short excerpt from your own writing. Adjectives like "bold" or "warm" are weaker than examples.
The best prompt engineers eventually stop thinking in messages and start thinking in workflows.
AI is most useful in code when you stop asking for code and start specifying engineering constraints.
The original beginner lesson was right: show the environment, show the error, show the expected output. In 2026 that is still foundational. The upgrade is to think like an engineer reviewing a pull request.
| Coding prompt block | What to include | Why it improves results |
|---|---|---|
| Environment | language, versions, framework, runtime, database, deployment target | Prevents generic answers that do not fit your stack |
| Failure signal | error message, log excerpt, failing input, observed behavior | Turns guesswork into diagnosis |
| Definition of done | tests, edge cases, performance, security, migration notes | Moves from toy code to production-ready code |
| Output shape | full file, minimal patch, diff, tests only, explanation level | Controls scope and reviewability |
Write a function to process invoices.
Using Python 3.12 and Pydantic v2, write a function that parses invoice line items from OCR text.
Requirements:
- extract description, quantity, unit_price, line_total
- ignore subtotal and tax lines
- return a list of validated objects
- if a line is ambiguous, attach a warning instead of guessing
Output:
- the Pydantic model
- the parser function
- 4 pytest cases covering clean input, noisy OCR, missing quantity, and mixed currency symbols
Ask for the smallest useful artifact. A patch, test file, or refactor plan is often better than a giant code dump.
Review the code below and return the smallest patch that fixes the bug. Do not rewrite unrelated functions. Explain the root cause in 3 bullets max.
Before changing the implementation, write tests that capture the expected behavior and the edge cases described below. Then update the code until the tests pass.
This is where AI starts acting like a serious coding partner.
Review this TypeScript function for:
- correctness
- readability
- hidden edge cases
- performance traps
- security concerns
Return:
1) issues ranked by severity
2) a minimal revised version
3) tests that would have caught the worst bug
Fix this FastAPI endpoint.
Environment: Python 3.12, FastAPI, PostgreSQL, SQLAlchemy 2.x
Observed error: sqlalchemy.exc.IntegrityError on duplicate email inserts
Expected behavior: return 409 with a clean JSON error
Show:
- root cause
- patch
- test for duplicate email path
We need to process up to 3 million rows from S3 into ClickHouse nightly.
Suggest an ingestion strategy.
Constraints:
- Python worker
- memory limit 2 GB
- job must finish within 15 minutes
- failure should be restartable
Return architecture, failure points, and monitoring checks.
Audit this authentication flow for security issues.
Assume a public web app with email login and magic links.
Return findings grouped by:
- account takeover risk
- replay risk
- token handling
- logging/privacy concerns
Then propose the smallest secure improvements.
Developers who learn to ask for patches, tests, rollout notes, and risk reviews get more from AI than developers who ask for giant rewrites. That means faster delivery and fewer embarrassing regressions.
| Ask for this | Instead of this | Reason |
|---|---|---|
| minimal patch + explanation | rewrite the whole file | smaller diffs are easier to trust |
| tests + edge cases | happy-path code only | tests reveal hidden assumptions |
| performance and security constraints | generic implementation | production has costs and risks |
| migration or rollout notes | code only | real systems need change management |
The modern stack: chains, tags, schemas, long context, evals, and caching.
This is where most beginner guides stop too early. Great prompting is not just about one message. It is about how prompts behave inside a repeated system.
Break a big job into smaller verifiable stages. Example: extract themes from support tickets, rank the product issues, draft a roadmap, then write the launch email that announces the fixes.
Chaining is slower than one-shot prompting, but often much more reliable.
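A minimal sketch of that chain in Python; call_model is a hypothetical wrapper around whichever provider and SDK you actually use, and each stage's output stays inspectable:

```python
# Minimal sketch of a four-stage chain with inspectable intermediates.
def call_model(prompt: str) -> str:
    raise NotImplementedError("wrap your provider's API call here")

def run_launch_chain(tickets: str) -> dict[str, str]:
    stages: dict[str, str] = {}
    stages["themes"] = call_model(
        f"Extract recurring themes from these support tickets:\n{tickets}"
    )
    stages["issues"] = call_model(
        f"Rank these themes as product issues by customer impact:\n{stages['themes']}"
    )
    stages["roadmap"] = call_model(
        f"Draft a one-quarter roadmap that addresses the top issues:\n{stages['issues']}"
    )
    stages["email"] = call_model(
        f"Write a launch email announcing the first shipped fix:\n{stages['roadmap']}"
    )
    return stages  # every intermediate artifact is available for review
```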
For large document tasks, structure the packet: documents first, instructions and the actual question last, with a quote-the-evidence step before any conclusions.
Anthropic and Google both document context-first / query-last patterns for long inputs [A3][G1].
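If you assemble these packets often, a small builder keeps the ordering honest. A sketch (hypothetical helper, following the tag layout shown in the Claude chapter):

```python
# Hypothetical helper that enforces context-first, query-last packaging
# with XML boundaries [A3][G1].
def build_long_context_prompt(documents: list[tuple[str, str]], query: str) -> str:
    """documents: list of (source_label, content) pairs."""
    parts = ["<documents>"]
    for i, (source, content) in enumerate(documents, start=1):
        parts.append(
            f'<document id="{i}"><source>{source}</source>'
            f"<document_content>{content}</document_content></document>"
        )
    parts.append("</documents>")
    parts.append("<task>")
    parts.append("First, quote the most relevant evidence from the documents.")
    parts.append(query)  # the actual question goes last
    parts.append("</task>")
    return "\n".join(parts)
```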
When software needs the answer, use schemas. OpenAI and Gemini both support JSON-schema-based structured outputs [O4][G3]. This removes a lot of brittle parsing logic.
Keep a small set of representative prompts and expected outcomes. Run them whenever you change prompts, model versions, or tooling. Prompt engineering without evals becomes aesthetic guesswork [O1][A1].
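A sketch of the smallest useful harness; the keyword grader here is a naive stand-in for whatever scoring rule fits your task:

```python
# Sketch of a tiny eval harness: fixed real cases, rerun after every
# prompt or model change.
EVAL_SET = [
    {"input": "transcript with a clear budget objection", "must_contain": "budget"},
    {"input": "transcript with no agreed next step", "must_contain": "next step"},
]

def run_evals(prompt_template: str, call_model) -> float:
    passed = 0
    for case in EVAL_SET:
        output = call_model(prompt_template.format(input=case["input"]))
        if case["must_contain"].lower() in output.lower():
            passed += 1
    return passed / len(EVAL_SET)  # track this score across prompt versions
```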
If the same context repeats, put static material first. OpenAI, Anthropic, and Gemini all support caching approaches that reward stable prompt prefixes [O2][A4][G5].
Give the model an escape hatch. For extraction tasks, say: "If the field is absent, return NOT_FOUND." Microsoft Learn recommends giving the model an out rather than letting it guess [M1].
Task: extract sales call details from the transcript.
If a field is not present, return null.
Do not infer values that are not supported by the transcript.
Schema:
{
"company_name": "string | null",
"contact_name": "string | null",
"budget_status": "known | unknown | null",
"timeline": "string | null",
"main_pains": ["string"],
"next_step": "string | null"
}
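On the application side, pair that prompt with validation that preserves the escape hatch. A minimal sketch; the field list mirrors the schema above:

```python
# Sketch of application-side validation for the extraction prompt above.
# Absent fields become explicit nulls instead of silent guesses, and
# malformed JSON fails loudly.
import json

REQUIRED_FIELDS = [
    "company_name", "contact_name", "budget_status",
    "timeline", "main_pains", "next_step",
]

def parse_extraction(raw: str) -> dict:
    data = json.loads(raw)  # raises on malformed output: better than guessing
    for field in REQUIRED_FIELDS:
        data.setdefault(field, None)  # absent field -> explicit null
    if not isinstance(data.get("main_pains"), list):
        data["main_pains"] = []       # keep the list-typed field list-typed
    return data
```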
How teams turn good prompts into dependable assets.
| Failure | Likely root cause | What to try first |
|---|---|---|
| Wrong scope | task not bounded | define what not to include |
| Format drift | output contract too loose | add schema or explicit template |
| Hallucinated facts | poor grounding | limit sources + require quotes + allow NOT_FOUND |
| Generic answers | insufficient context or examples | add audience, stakes, and one good example |
| Inconsistent results | no snapshot pinning or evals | version prompt and test across a fixed set |
Most prompt wins come from better problem framing, not more magical wording. Cleaner inputs, better examples, stronger evaluation, and more constrained outputs usually beat clever prose.
When the answer disappoints, diagnose the prompt before blaming the model.
Weak prompting often hides in patterns. Once you recognize the symptom, the fix becomes faster.
| Symptom | Likely cause | Fix |
|---|---|---|
| Too generic | context and stakes missing | add audience, business context, and goal |
| Too long | no length or section control | specify sections, bullet limits, or word range |
| Wrong tone | tone described vaguely | show one voice example and a short avoid list |
| Invented details | inputs under-grounded | limit to provided material and require evidence |
| Messy JSON | using prose instructions instead of schema | switch to structured outputs |
| One giant rewrite | scope not controlled | ask for smallest patch or weakest-section rewrite |
| Same idea repeated 10 times | request lacks diversity instruction | ask for mutually distinct options |
| Useful but not actionable | output is commentary, not decision support | ask for recommendation, tradeoff, and next action |
What exactly did I leave implicit? That single question solves an astonishing number of bad outputs.
These prompt habits feel natural. They quietly ruin results.
"Make it smart, premium, polished, trustworthy, exciting, and modern."
Better: define audience, tone reference, and one short example.
"Analyze this, summarize it, create ad copy, and write code from it."
Better: separate stages, or explicitly number the steps.
"Do not be vague."
Better: "Use concrete numbers, named examples, and one recommended next step."
If missing information matters, say what should happen: return null, NOT_FOUND, or "insufficient evidence." This reduces guessing [M1].
Microsoft Learn highlights two underrated habits: repeat critical instructions when they truly matter, and give the model a safe fallback when information is absent [M1]. Those two moves alone prevent many failures.
Copy, adapt, and improve. These are starting points, not sacred text.
You are a market researcher. Analyze the notes below and identify:
- recurring pains
- desired outcomes
- current workarounds
- buying objections
- phrases customers use repeatedly
Return a summary plus a ranked opportunity table.
Create 4 positioning angles for {{offer_name}}.
Audience: {{audience}}
Problem solved: {{problem}}
Proof available: {{proof}}
Constraints: no buzzwords, no competitor references.
Return angle, headline, promise, proof hook, risk of sounding generic.
Build a landing page outline for {{offer_name}}.
Include hero, problem, solution, proof, objections, CTA.
Audience: {{audience}}
Goal: {{goal}}
Tone: {{tone}}
Do not write filler transitions.
Generate 10 distinct ad hooks for {{product}}.
Audience: {{audience}}
Channels: {{channels}}
Return a table with emotional trigger, opening line, visual concept, proof cue, likely objection.
Using the notes below, create a newsletter issue.
Return 3 subject lines, one intro, 3 section headings, and a closing takeaway.
Keep my voice: {{voice_notes}}.
Do not invent examples.
Turn the notes below into an executive summary for {{audience}}.
Return:
- one-paragraph overview
- top findings
- impact
- recommendation
- what not to prioritize yet
Tone: crisp and evidence-first.
More templates for sales, operations, and coding.
Write a short prospecting email for {{service}}.
Prospect type: {{prospect_type}}
Observed issue: {{observed_issue}}
Goal: book a discovery call.
Constraints: under 140 words, no fake flattery, no generic opener.
Prepare me for a discovery call. Based on the notes below, return likely pains, likely objections, 5 diagnostic questions, deal risks, and a strong closing summary.
Create a standard operating procedure from the notes below. Audience: new hire with zero context. Include purpose, tools, step-by-step process, failure points, escalation rules, final checklist.
Environment: {{environment}}
Observed behavior: {{observed_behavior}}
Error: {{error_message}}
Expected behavior: {{expected_behavior}}
Return root cause, smallest patch, tests, and any deployment or migration notes.
Review the code below for correctness, readability, edge cases, performance, and security. Return issues ranked by severity, then a minimal revised version, then tests that would catch the worst issue.
Extract the fields below from the source text.
If a field is absent, return null.
Do not infer unsupported values.
Schema:
{{schema}}
Source text:
{{source_text}}
Advanced templates for long context and prompt systems.
<documents>
<document id="1"><source>{{source_1}}</source><document_content>{{doc_1}}</document_content></document>
<document id="2"><source>{{source_2}}</source><document_content>{{doc_2}}</document_content></document>
</documents>
<task>
First quote the most relevant evidence.
Then summarize the key findings.
Then identify contradictions or open questions.
</task>
Break the goal below into a prompt chain.
Goal: {{goal}}
Constraints: {{constraints}}
Return:
- stage name
- input to that stage
- output from that stage
- quality check for that stage
Score the draft asset below from 1-10 on clarity, proof, objection handling, tone, CTA strength, and distinctiveness. Then rewrite only the weakest section.
Analyze this meeting transcript. Return:
- decisions made
- unresolved questions
- assigned owners
- deadlines
- risks or blockers
If ownership is unclear, say unassigned.
Design a practical curriculum for {{audience}}.
Goal: {{goal}}
Constraints: {{constraints}}
For each module include outcome, lesson concept, exercise, checklist, and common mistake.
Review the prompt below. Identify missing context, vague goals, hidden assumptions, weak output definitions, and missing checks. Then rewrite it using the 7-part Prompt Spine.
Use these drills to build the instinct, not just the theory.
Rewrite this prompt using the Prompt Spine:
Help me write better ads for my app.
Questions to force better prompting:
- What app is it, and what category does it compete in?
- Who is the buyer, and where will they see the ads?
- What is the campaign goal?
- How many angles do I need, and how distinct should they be?
- What format should the output take?
- What language or cliches should be avoided?
You are summarizing a 40-page interview synthesis. Rewrite the prompt so the model must quote relevant evidence before drawing conclusions.
You currently prompt: "Return JSON with name, industry, score, and next step." Rewrite it as a stricter, schema-like instruction with fallback rules for missing data.
You want one prompt to analyze support tickets, find product issues, create a roadmap, and draft a launch email. Split it into a chain.
Take a toy request like "Build a login system" and add the environment, threat model, error cases, and definition of done.
There are many good rewrites. The point is not one perfect answer. The point is better information design.
A strong rewrite would specify the app category, the buyer, the channel, the campaign goal, the angle diversity required, the format, and the language to avoid.
Even a modest upgrade like "Write 6 distinct Meta ad hooks for a budgeting app for freelancers" is already far stronger than the original.
Good evidence-first prompts often use a sequence like: quote -> interpret -> recommend. This lowers hallucination risk and gives you more trustworthy synthesis.
The schema is not complete until you define what happens when a field is absent. Null, unknown, or NOT_FOUND are all better than silent guessing.
Whenever one task depends on the quality of another task, chaining usually helps. It creates checkpoints and makes failures easier to see.
Prompting skill is mostly the skill of externalizing what was previously invisible in your head.
These sources informed the guidance in this book. Read the official docs. They are better than second-hand folklore.
| Code | Source | Why it matters |
|---|---|---|
| O1 | OpenAI - Prompt engineering | Model-specific prompting, snapshot pinning, and eval guidance. |
| O2 | OpenAI - Prompting | Prompt objects, variables, and prompt caching overview. |
| O3 | OpenAI Cookbook - GPT-5.2 Prompting Guide | Verbosity control, scope discipline, and agentic prompt patterns. |
| O4 | OpenAI - Structured model outputs | Schema-first output design for reliable machine-readable responses. |
| A1 | Anthropic - Prompt engineering overview | Success criteria, examples, XML tags, roles, and chaining. |
| A2 | Anthropic - Use XML tags to structure your prompts | Why tags improve clarity, parseability, and accuracy. |
| A3 | Anthropic - Long context prompting tips | Context-first packaging, query-last structure, quote grounding. |
| A4 | Anthropic - Prompt caching | Static-prefix placement, cache hierarchy, and reuse patterns. |
| G1 | Google Gemini - Prompt design strategies | Iterative prompt design, few-shot guidance, structure, and chaining. |
| G2 | Google Gemini - System instructions | Behavior control through system configuration. |
| G3 | Google Gemini - Structured Outputs | JSON schema, type-safety, and validation guidance. |
| G4 | Google Gemini - Thinking | Thinking levels and budgets for reasoning control. |
| G5 | Google Gemini - Context caching | Implicit and explicit caching guidance. |
| M1 | Microsoft Learn - Prompt engineering techniques | Grounding, cues, repeated critical instructions, fallback behaviors. |
| W1 | AWS - Prompt engineering concepts | Template thinking, reusable recipe structure, and deployment habits. |
Pick one recurring task you do every week. Turn it into a versioned prompt packet with clear inputs, output shape, and checks. Then test it on five real examples. Your next breakthrough will not come from reading one more tip. It will come from operationalizing one task well.
The people who get the most from AI are rarely the people with the fanciest words. They are the people who can define the job, package the context, constrain the output, and evaluate the result.
That is a professional skill. It compounds across marketing, writing, coding, operations, and decision-making. Learn it once. Use it everywhere.