April 2026 edition

The Prompt Engineering Playbook

A vivid, practical, model-aware guide to prompts that produce better thinking, better code, better writing, and real business value.
Beginner to advanced
OpenAI + Claude + Gemini aware
Marketing, coding, writing, operations
Built for 2026 workflows

What this book believes

A prompt is not a wish. It is a design brief, a contract, and sometimes a small operating system.

The fastest users get answers. The best users get leverage.

Published by
kearai.com
Copyright 2026. All rights reserved.
Chapter 0

How to use this playbook

Read it once for orientation. Use it forever as a field manual.

What makes this edition different

This is not a warmed-over list of prompt tips. It is a rebuilt handbook for April 2026: modern structure, all-new examples, model-aware strategy, richer business use cases, and a stronger visual system. It keeps the spirit of clear prompting, but upgrades it into a full practice.

You will learn the classic fundamentals - context, specificity, examples, constraints, roles, structure - plus the newer operator skills that matter in real work: structured outputs, long-context tactics, prompt chaining, evals, caching, and provider-specific differences.

Read it in three passes

  1. Pass 1: read Parts I and II to understand the core patterns.
  2. Pass 2: read the value chapters that match your goals - marketing, writing, sales, or coding.
  3. Pass 3: use the template library and debugging clinic while you work.
Operator principle

Do not ask, "What is the perfect prompt?" Ask, "What information would make success obvious?" The model cannot infer what you never specified.

This book makes five promises

  • No recycled filler examples.
  • No provider tribalism. You will learn portable principles first.
  • No fake certainty. We focus on evaluation, iteration, and evidence.
  • No black-and-white boredom. Dense ideas deserve lively design.
  • No theory without value. Most examples aim at money, speed, clarity, or quality.

Who this is for

Founders, creators, marketers, consultants, analysts, product people, developers, educators, and anyone tired of vague AI output.

If you can type into a chat box, you can use this playbook. If you build systems, teams, or products, you can compound it.

Copyright 2026 kearai.com. All rights reserved. Educational use only. Edition date: April 2026.
Chapter Contents

Map of the book

From first prompts to repeatable high-leverage systems.

Part | What you will learn
I | Prompting in 2026, the failure modes, and the 7-part Prompt Spine
II | Model-aware playbooks for OpenAI, Claude, and Gemini
III | High-value prompts for marketing, sales, writing, and content businesses
IV | Coding, long-context workflows, structured outputs, evals, caching, and debugging
Appendix | Template library, drills, and references

Chapter | Focus
1 | Why decent users still get mediocre AI output
2 | The 7-part Prompt Spine
3 | Response-shaping tools: examples, constraints, roles, format, tags, variables
4 | The OpenAI playbook
5 | The Claude playbook
6 | The Gemini playbook
7 | Marketing prompts that can make money
8 | Sales, consulting, and operations prompts
9 | Writing and creator workflows
10 | Coding prompts that ship better software
11 | Advanced prompt systems: chains, long context, schemas, evals, caching
12 | Prompt debugging clinic
13 | Template library
14 | Practice lab
15 | References and further study

Research backbone

This edition synthesizes current guidance from official OpenAI, Anthropic, and Google Gemini documentation, plus practical deployment notes from Microsoft Learn and AWS. Specific ideas are cited in the reference appendix using source codes such as [O1], [A3], and [G1].

Part I

From Asking to Specifying

Good prompts are not polite guesses. They are explicit instructions with a finish line.

Chapter 1

Why good users still get mediocre output

Prompting got more powerful when models got stronger. It also got less forgiving of fuzzy briefs.

Most weak prompts fail for a boring reason: the user kept the success criteria inside their own head. The model saw a request. The user imagined a result. Those were not the same thing.

Modern models can write code, summarize books, critique strategy, and generate polished assets. But stronger models do not eliminate ambiguity. They often amplify it. A vague request can now produce a very confident, beautifully formatted wrong answer.

  1. Chatty ask: "Help with marketing." The model guesses your market, audience, budget, timeline, and goal.
  2. Clear request: you state the task, but not the stakes, constraints, or definition of done.
  3. Real brief: you specify the situation, objective, resources, and desired format.
  4. Operator prompt: you add quality checks, evidence rules, reusable variables, and system-level structure.

The four silent killers

  • Missing context: the model does not know your world.
  • Fuzzy finish line: it does not know what success looks like.
  • Buried constraints: budget, timeline, tools, tone, risk, or audience never show up.
  • Output entropy: you did not define the format, so the answer drifts.

A stronger mental model

Treat every prompt like one of these:

  • a creative brief
  • a bug ticket
  • a research assignment
  • a specification for a junior teammate

The model is fast. Your job is to make the target visible.

Fast rule

If the output would cost you money, reputation, or time to fix, the prompt deserves more structure than a casual chat.

Weak prompt
Help me market my product.

What the model must guess

  • What is the product?
  • Who buys it?
  • What channel matters?
  • How fast do you need results?
  • What are you able to spend?
Upgraded prompt
You are a direct-response growth strategist.

I sell a $79 Notion dashboard bundle for wedding photographers.
My buyers are solo operators who hate bookkeeping but want cleaner monthly cash-flow tracking.
I want 3 low-cost acquisition angles I can test in the next 30 days with less than $500.

Return a table with:
- angle name
- channel
- core message
- proof hook
- effort level
- what we would learn if it works or fails

Avoid generic advice like "post consistently on social media."
Why this version works

It names the buyer and the product, states the budget and timeline, defines the output format exactly, and bans the generic advice you do not want. The model no longer has to guess what the finish line looks like.

Chapter 2

The 7-part Prompt Spine

A reusable blueprint that absorbs the classic basics and the modern upgrades.

The old beginner advice was right, but incomplete: be clear, specific, and detailed. The problem is that beginners rarely know which details matter. The Prompt Spine gives you a repeatable sequence.

Task

What exact job should the model do?

Context

What situation, audience, or business reality shapes the answer?

Inputs

What source material, data, constraints, or facts must be used?

Constraints

What must or must not happen? Budget, tone, legal, scope, timeline.

Examples

What does "good" look like in phrasing, structure, or behavior?

Output

What format, length, sections, and audience fit the result?

Checks

How should the answer verify itself before finishing?

Spine block | Ask yourself | Micro-example
Task | What single verb describes the job? | Draft, diagnose, compare, rewrite, estimate, classify
Context | What world is this operating inside? | Seed-stage SaaS, local clinic, B2B consulting, personal brand
Inputs | What should the model use, and what should it ignore? | Use the transcript below, the feature list, and these three objections
Constraints | What boundaries matter? | No hype, under 120 words, US audience, no legal claims
Examples | Can I show one good sample? | Mirror this style: direct, spare, evidence-first
Output | How should the answer arrive? | Return JSON, a table, a brief, or a two-step plan
Checks | What should it verify? | Flag weak assumptions, list missing facts, show confidence level

Copy-paste skeleton
Role: [optional but useful when expertise matters]
Task: [the single job]
Context: [business, audience, situation]
Inputs: [facts, data, excerpts, examples]
Constraints: [must, must not, scope, timeline, risk]
Output: [format, length, sections, reader]
Checks: [verify assumptions, cite evidence, list unknowns]

A writer version

Role: developmental editor
Task: rewrite my article introduction
Context: audience is startup founders with limited time
Inputs: use only the notes below
Constraints: no cliches, no rhetorical questions, under 130 words
Output: 3 alternative intros with a one-line rationale each
Checks: make sure each intro has a concrete tension in sentence 1

A product version

Role: product strategist
Task: prioritize feature requests
Context: our app serves freelance accountants on mobile
Inputs: feature list, support tickets, churn notes
Constraints: team of 3 engineers, 6-week cycle, no platform rewrite
Output: ranked table with impact, complexity, risk, and recommendation
Checks: flag requests that sound loud but low-value
The hidden superpower of Checks

Most users stop after Output. Professionals add Checks. That single block often cuts hallucinations, fluff, and overconfident nonsense because the model is forced to inspect its own work before it hands it over.

Chapter 3

Response-shaping tools that actually change the answer

These are the levers that turn a decent brief into a useful one.

Set boundaries

Tell the model what to avoid and what to do instead.

Weak: "Do not sound salesy."

Better: "Avoid hype and exaggerated claims. Use plain language, product proof, and concrete customer outcomes."

Show examples

Examples beat adjectives. "Make it sharper" is vague. A good before-and-after sample is a teaching instrument.

Anthropic recommends multishot examples as a core best practice, and Google explicitly recommends few-shot prompting whenever practical [A1][G1].

Control length

Say more than "be concise." Give a shape:

  • 1 paragraph + 4 bullets
  • 3 options under 80 words each
  • table only, no intro

Assign roles carefully

Roles work best when they supply domain judgment, not theater.

Useful: "act as a retention strategist for subscription products"

Less useful: "act as a legendary genius wizard marketer"

Use tags and delimiters

When your prompt mixes instructions, source text, data, and examples, boundaries matter.

OpenAI, Anthropic, and Google all support structured prompt sections well; XML-style tags are especially useful for complex inputs [O1][A2][G1].

Promote variables

If a value repeats, make it a variable. This reduces editing mistakes and lets you scale one prompt across many campaigns, clients, or experiments.

Weak brief
Write a launch email for my course.

What is missing

  • Who is the course for?
  • What pain does it solve?
  • Why now?
  • What tone fits the brand?
  • What is the CTA?
Operator brief
You are an email copywriter for creator-led launches.

Write a launch email for my cohort course "Client Systems in a Weekend."
Audience: freelance designers earning $3k-$8k/month who feel buried in admin.
Goal: drive applications, not impulse sales.
Promise: install a lightweight project pipeline in 2 days.
Proof: 4 case studies, median save is 5 hours/week.
Tone: calm, capable, zero hype.
Constraints: no fake urgency, no all-caps, no exclamation-heavy copy.
Output: subject line + preview text + email body under 320 words.
Checks: make sure the CTA is application-focused and the body mentions one concrete proof point.
Micro-rule you can use today

If you ever type "help me" or "make this better," stop and add at least four blocks from the Prompt Spine before sending.

Delimiters example

<task>Summarize the strongest buyer objections.</task>
<transcript>
[customer interview transcript goes here]
</transcript>
<format>
Return a table with objection, evidence quote, and recommended copy response.
</format>

Variables example

Offer = {{offer_name}}
Audience = {{audience}}
Goal = {{goal}}
Tone = {{tone}}

Create 5 ad hooks for {{offer_name}} aimed at {{audience}}.
Optimize for {{goal}}.
Use a {{tone}} voice.
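
If you manage templates in code, the same double-brace convention is easy to fill programmatically. A minimal sketch in Python; the helper and its checks are illustrative, not a standard library feature:

import re

def fill_template(template: str, variables: dict[str, str]) -> str:
    # Substitute each {{name}} placeholder with its value.
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    # Fail loudly if any placeholder was left unfilled.
    leftover = re.findall(r"\{\{(\w+)\}\}", template)
    if leftover:
        raise ValueError(f"Unfilled variables: {leftover}")
    return template

prompt = fill_template(
    "Create 5 ad hooks for {{offer_name}} aimed at {{audience}}. "
    "Optimize for {{goal}}. Use a {{tone}} voice.",
    {
        "offer_name": "Client Systems in a Weekend",
        "audience": "freelance designers",
        "goal": "applications",
        "tone": "calm",
    },
)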
Part II

Model-aware prompting

The core principles travel. The emphasis changes by provider, workflow, and model family.

Chapter 4

The OpenAI playbook

Prompt like you are building a product, not composing a one-off message.

OpenAI's current documentation makes two big points that matter in production. First: different model types need different prompting styles. Second: if you care about consistency, prompt engineering is inseparable from snapshot pinning and evals [O1].

What to emphasize | Why it matters
Put durable behavior in the system or developer prompt | Tone, style, safety boundaries, and house rules should not be repeated every turn [O1][O2].
Put task details and examples in user messages | This keeps the reusable scaffold stable while the task-specific details change [O2].
Clamp verbosity and output shape explicitly | OpenAI guidance for newer GPT-5.x prompts emphasizes concise, structured output controls [O3].
Pin model snapshots in production | Prompt behavior can drift across snapshots; pinning protects consistency [O1].
Run evals when prompts change | A beautiful prompt is still a guess until you measure it [O1].
Use prompt objects and variables for reuse | OpenAI provides prompt versioning and variables to manage prompts as assets [O2].
Prefer structured outputs for machine-readable responses | JSON schema is more reliable than heavily worded formatting instructions [O4].

Best use of the system layer

Define tone, boundaries, output discipline, and reusable behavior once. Keep the task layer lighter and more variable.

What newer reasoning models like

Clear destination, constraints, and evaluation standards. Less rambling instruction. More explicit finish conditions.

What operators add

Versioned prompts, test suites, and structured outputs. Prompting becomes part of product engineering.

OpenAI operator note

One underused pattern is to separate behavioral policy from task request.

This often improves reuse and keeps multi-step workflows cleaner [O2].

OpenAI-style prompt packet
System / developer message:
You are a product marketing analyst.
Default to concise answers: 1 short overview paragraph, then up to 5 bullets.
Use plain English.
If evidence is weak, say so directly.
When asked for structured output, follow the schema exactly.

User message:
Analyze these 14 churn survey responses for our invoicing app.
Audience: founder and head of product.
Goal: identify the top 3 retention opportunities for the next quarter.
Use only the survey text below.
Return:
1) one-paragraph summary
2) table with issue, evidence quote, probable root cause, suggested fix
3) one paragraph on what not to overreact to
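
If you call the API directly, the packet above maps straight onto message roles. A minimal sketch with the OpenAI Python SDK; the model name is a placeholder, and the survey text is assumed to be loaded elsewhere:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a product marketing analyst. "
    "Default to concise answers: 1 short overview paragraph, then up to 5 bullets. "
    "Use plain English. If evidence is weak, say so directly."
)

survey_text = "..."  # the 14 churn survey responses, loaded from your data

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # placeholder: pin whatever dated snapshot you actually use
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": "Analyze these churn survey responses for our invoicing app. "
            "Identify the top 3 retention opportunities.\n\n" + survey_text,
        },
    ],
)
print(response.choices[0].message.content)
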
Chapter 4A

OpenAI field notes for 2026 workflows

A few patterns matter far more now than they did in beginner prompt guides.

1. Verbosity is a setting, not a hope

Do not ask for "concise" and hope for the best. Specify the shape. Example: "2 short paragraphs, then a 4-row table. No preamble." This is directly aligned with newer OpenAI prompting guidance [O3].

2. JSON schema beats threat-filled prose

When your app needs predictable keys, use structured outputs instead of writing prompts like "You MUST return valid JSON or your response will fail." OpenAI explicitly recommends Structured Outputs over older JSON-only approaches when supported [O4].
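
A minimal sketch of schema-enforced output using the OpenAI Python SDK's Pydantic helper; the field names are illustrative, and the exact namespace of the parse helper varies by SDK version:

from openai import OpenAI
from pydantic import BaseModel

class LeadScore(BaseModel):
    lead_name: str
    score: int  # 1-10
    reason: str

client = OpenAI()
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",  # placeholder: use a model that supports structured outputs
    messages=[
        {"role": "system", "content": "Score inbound leads from 1 to 10 for consulting fit."},
        {"role": "user", "content": "Lead: 20-person ecommerce brand, no analyst, $40k monthly ad spend."},
    ],
    response_format=LeadScore,  # the API enforces the schema rather than politely requesting it
)
lead = completion.choices[0].message.parsed  # a LeadScore instance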

3. Evals are part of prompt engineering

The best prompt is the one that wins on your tasks. Maintain a small eval set of real examples and rerun it when you change prompts, models, or tools [O1].
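
An eval set does not need special tooling to start. A minimal sketch: a few real cases, a pass condition, and a loop you rerun on every prompt or model change. The run_prompt helper is hypothetical; wire it to whichever provider you use:

EVAL_SET = [
    {"input": "Lead: student side project, no budget", "must_contain": "low_fit"},
    {"input": "Lead: $40k/month ad spend, no analyst", "must_contain": "high_fit"},
]

def run_prompt(case_input: str) -> str:
    """Hypothetical helper: call your model with the prompt under test."""
    raise NotImplementedError

def run_evals() -> float:
    # Returns the pass rate; print failures so you can inspect them.
    passed = 0
    for case in EVAL_SET:
        output = run_prompt(case["input"])
        if case["must_contain"] in output:
            passed += 1
        else:
            print("FAIL:", case["input"])
    return passed / len(EVAL_SET)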

4. Reusable prompts belong in version control

OpenAI now supports prompt objects with versioning and variables. Even if you are not using the API, copy the habit: save versions, label changes, and keep a changelog [O2].

Money move

If you run an agency or consultancy, maintain a library of versioned prompt packets for audits, proposals, email teardown, competitor analysis, and client reporting. You will deliver faster and sound more consistent across the team.

If you want... | Prompt for this instead
Less rambling | "Answer in 3 bullets max. No preamble."
Cleaner machine output | Use a schema or a field list with strict required keys.
Better reliability on repeated tasks | Pin model snapshot + save prompt version + run eval set.
More honest answers | Add checks like "state uncertainty and missing data explicitly."

Chapter 5

The Claude playbook

Claude rewards clarity, examples, structured sections, and strong long-context habits.

Anthropic's own prompt engineering overview starts in a very healthy place: define success criteria, define how you will test them, then improve a real prompt [A1]. That mindset alone separates casual users from serious builders.

Claude fundamentals

  • Be clear and direct [A1].
  • Use examples, especially 3 to 5 good ones when format matters [A1].
  • Use XML tags to separate instructions, examples, and source content [A2].
  • Give Claude a role when domain judgment matters [A1].
  • For long context, place large documents first and the query at the end [A3].

Why Claude users love XML tags

Anthropic documents XML tags as a major clarity tool because they reduce instruction/content confusion, make prompts easier to edit, and improve parseability of outputs [A2].

This is especially useful for contract review, multi-document comparison, content editing, and quote-grounded research.

Claude-style long-context packet
<role>
You are a retention researcher for subscription businesses.
</role>

<documents>
  <document id="survey_1">
    <source>Q1 churn survey export</source>
    <document_content>
      [survey text here]
    </document_content>
  </document>
  <document id="support_1">
    <source>Support tickets from March</source>
    <document_content>
      [ticket text here]
    </document_content>
  </document>
</documents>

<task>
First, quote the most relevant lines from the documents.
Then identify the top 3 churn drivers.
Then recommend one high-leverage retention experiment for each driver.
</task>

<formatting>
Return sections: Evidence, Drivers, Experiments.
Do not use evidence that is not quoted.
</formatting>
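
Sending a packet like this through the Anthropic Python SDK is mechanical. A minimal sketch; the model name is a placeholder and the survey text is assumed to be loaded elsewhere:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

survey_text = "..."  # loaded from your export

packet = f"""<role>You are a retention researcher for subscription businesses.</role>

<documents>
  <document id="survey_1">
    <source>Q1 churn survey export</source>
    <document_content>{survey_text}</document_content>
  </document>
</documents>

<task>
First, quote the most relevant lines from the documents.
Then identify the top 3 churn drivers.
</task>"""

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: check current model names
    max_tokens=2000,
    messages=[{"role": "user", "content": packet}],
)
print(message.content[0].text)
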
Claude long-context note

Anthropic reports that putting large context at the top and the query at the end can improve response quality on complex long-context tasks, with tests showing up to 30 percent quality gains in some cases [A3].

Claude habit | Practical payoff
Quote first, then reason | Reduces hallucinated claims in long documents [A3].
Use XML tags consistently | Cleaner separation of context, examples, and task [A2].
Chain complex prompts | Lets you inspect intermediate work instead of trusting one giant jump [A1].
Define success criteria before rewriting prompts | Improves iteration discipline [A1].

Chapter 5A

Claude operator notes: chaining, thinking, and caching

This is where a good chat prompt becomes a reliable workflow.

Prompt chaining still matters

Anthropic explicitly notes that chain-style workflows remain useful even when models can think deeply, because chaining lets you inspect intermediate steps, route tasks, or insert tools at controlled points [A1].

Cache the static prefix

Anthropic prompt caching works best when the reusable prefix comes first: tools, then system, then messages. Static examples, style guides, and long documents belong early [A4].
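
In the Anthropic API, you mark the end of the reusable prefix with a cache_control breakpoint so later calls can reuse it. A minimal sketch, assuming a long style guide that repeats across calls; the model name is a placeholder:

import anthropic

client = anthropic.Anthropic()
style_guide = "..."  # long, stable reference material that repeats across calls

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: check current model names
    max_tokens=1000,
    system=[
        {
            "type": "text",
            "text": style_guide,
            # Everything up to this breakpoint is eligible for caching on later calls.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Edit the draft below to match the style guide:\n..."}],
)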

Examples beat abstract preferences

If you want a specific voice or format, do not stack ten adjectives. Show a short example of the style you want. Anthropic recommends multishot examples as one of its most broadly effective techniques [A1].

Use tags in outputs too

You can ask Claude to return XML-tagged sections such as <summary>, <risks>, and <recommendation>. This improves downstream parsing and review [A2].

Money move

If you analyze long client documents, recordings, contracts, or user research, Claude-style evidence-first packets are a premium service. Clear quote extraction makes your deliverables feel more defensible and more expensive.

Chapter 6

The Gemini playbook

Gemini rewards clear structure, direct instruction, and few-shot examples more than most beginners expect.

Google's Gemini documentation is wonderfully blunt: prompt design is iterative, instructions should be clear and specific, and you should include few-shot examples whenever possible [G1]. That last point is stronger than what many users assume.

Gemini guidance | Why it matters
Clear, specific instructions [G1] | Gemini responds well to direct prompts without persuasive fluff.
Always use few-shot when practical [G1] | Examples regulate formatting, phrasing, and scope more reliably than loose description.
Use consistent structure and delimiters [G1] | Google explicitly recommends XML-style tags or Markdown headings used consistently.
Put system behavior in system instructions [G2] | Role, critical constraints, and persistent behavior should live there when available.
For large context, put the context first and the question last [G1] | This mirrors strong long-context habits across providers.
Use JSON schema for structured outputs [G3] | Predictable, parsable outputs reduce retry loops and fragile post-processing.

Few-shot is a force multiplier

Google says prompt examples are often so effective that you can sometimes remove instructions if the examples already teach the task clearly [G1]. In practice, two or three sharply chosen examples can outperform a paragraph of explanation.

Use system instructions deliberately

Gemini exposes system instructions in its generation config. That makes it natural to separate stable behavior from the live request, just as with other providers [G2].

Gemini-style classification packet
System instruction:
You classify inbound leads for a boutique analytics consultancy.
Be direct, concise, and businesslike.
Return JSON only.

User prompt:
Classify each lead into one of: high_fit, medium_fit, low_fit.
Use this schema:
{
  "lead_name": "string",
  "score": "integer 1-10",
  "bucket": "high_fit | medium_fit | low_fit",
  "reason": "string",
  "next_step": "string"
}

Examples:
Lead: 20-person ecommerce brand, no analyst, $40k monthly ad spend
Result: high_fit because attribution cleanup and dashboard work are immediate pain points

Lead: student building a side project with no budget
Result: low_fit because there is no consulting budget and no urgent business need

Now classify the following lead notes:
[lead notes here]
Gemini system note

For structured outputs, Google supports JSON schema and recommends strong typing, clear property descriptions, and validation in your application code [G3]. In plain English: the schema is part of the prompt.
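
A minimal sketch of that packet as an API call with the google-genai Python SDK; the model name is a placeholder and the schema mirrors the illustrative fields above:

from google import genai
from google.genai import types
from pydantic import BaseModel

class LeadResult(BaseModel):
    lead_name: str
    score: int
    bucket: str  # high_fit | medium_fit | low_fit
    reason: str
    next_step: str

client = genai.Client()  # reads GEMINI_API_KEY from the environment
response = client.models.generate_content(
    model="gemini-2.0-flash",  # placeholder: check current model names
    contents="Classify this lead: 20-person ecommerce brand, no analyst, $40k monthly ad spend.",
    config=types.GenerateContentConfig(
        system_instruction=(
            "You classify inbound leads for a boutique analytics consultancy. "
            "Be direct, concise, and businesslike."
        ),
        response_mime_type="application/json",
        response_schema=LeadResult,
    ),
)
print(response.parsed)  # a LeadResult instance when parsing succeeds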

Chapter 6A

Cross-model translation: same goal, different emphasis

Portable prompting means knowing what changes and what stays stable.

Best at

Clear system rules, output-shape control, versioned prompts, eval-driven iteration, structured outputs.

Lean into

Verbosity control, snapshot pinning, schema-first design, explicit finish conditions.

Best at

Long-context reasoning, XML-tagged document workflows, quote-grounded synthesis, thoughtful chains.

Lean into

Examples, XML boundaries, evidence-first prompts, query-at-end long-context packets.

Best at

Direct instructions, few-shot formatting control, system instructions, JSON schema output, chain prompts.

Lean into

Clear structure, example-led teaching, context-first packaging, schema descriptions.

Portable principle | OpenAI accent | Claude accent | Gemini accent
Separate stable behavior from task | system/developer prompt | role + structured sections | system instruction
Teach with examples | use when behavior or format matters | multishot is central | few-shot is strongly recommended
Structure long context | delimiters, scoped sections | XML docs + query at end | context first, question last
Make outputs machine-readable | structured outputs / schema | tagged outputs or parser-friendly sections | JSON schema output
Improve over time | evals + snapshot pinning | success criteria + prompt iteration | iterative prompt refinement

Portable rule

The transferable core is simple: specify the job, package the context, bound the scope, teach by example, define the output, and evaluate the results. Provider differences change emphasis, not the fundamentals.

Part III

Prompts that create value

The point is not prettier output. The point is better offers, better software, faster work, and more revenue.

Chapter 7

Marketing prompts that can make money

Use AI to shorten the distance between market insight and campaign execution.

Most marketing prompts fail because they ask for assets before strategy. A landing page is not the first problem. The first problem is usually positioning, buyer tension, proof, and channel fit.

Offer positioning

Turn a fuzzy product into a sharp promise

Useful for consultants, creators, agencies, and software founders. Better positioning often improves every downstream asset.

You are a positioning strategist for premium service businesses.

I run a small consultancy that helps Shopify brands clean up tracking and reporting.
Our best clients do $80k-$500k/month in revenue and are frustrated by conflicting data across Meta, Google, and Shopify.

Create 3 positioning angles.
For each angle include:
- who it is for
- painful current state
- core promise
- strongest proof mechanism
- risk of sounding generic
- homepage headline draft

Avoid buzzwords like "unlock growth" or "supercharge."
Ad angle matrix

Generate testable creative directions

Good when you need 10+ hooks fast but do not want 10 copies of the same idea.

Act as a paid social creative strategist.

Product: a weekly meal-prep subscription for busy nurses.
Goal: find ad angles we can test on Instagram Reels and TikTok.
Audience: hospital shift workers, mostly women 24-39, low time, inconsistent meal habits.

Return a matrix with 8 angles.
Columns: angle, emotional trigger, visual concept, opening line, credibility cue, possible objection.
Make the angles distinct from one another.
Landing page upgrade

Write around objections, not features

When a page gets traffic but does not convert, you usually need sharper message hierarchy, proof, and objection handling.

You are a conversion copywriter.

Rewrite the structure for our landing page.
Offer: a $149 mini-course that teaches Etsy sellers how to photograph products with just a phone.
Audience: handmade sellers with weak visuals and tiny budgets.
Goal: improve first-time buyer conversion from Pinterest traffic.

Output:
- hero section
- 3 problem bullets
- promise section
- proof section
- FAQ focused on objections
- CTA text

Tone: practical and encouraging.
No fake scarcity.
Launch email sequence

Move from one email to a full narrative

Most launches underperform because they rely on one clever email instead of a sequence that escalates tension and proof.

Create a 5-email launch sequence.
Offer: a live workshop called "Fix Your Freelance Proposal Funnel."
Audience: solo consultants making inconsistent revenue.
Goal: drive webinar registrations first, then workshop sales.

For each email include:
- angle
- subject line
- opening hook
- key proof or story
- CTA

Sequence logic:
1) problem awareness
2) cost of inaction
3) proof
4) objection handling
5) final reminder
Money move

Ask for angles, objections, proof mechanisms, and learning goals. These are closer to revenue than generic copy blocks.

Chapter 7A

Campaign design: from one prompt to a full growth loop

Prompting becomes more valuable when one output feeds the next.

A reusable campaign chain
  1. Analyze the audience and their objections.
  2. Generate angles and choose 2 to 3 strongest bets.
  3. Turn the best angle into landing page copy.
  4. Turn the page into ad hooks, email copy, and FAQ answers.
  5. Review all assets for consistency and weak claims.
Stage | Prompt target | Why it creates value
Research | Jobs-to-be-done, pain, objections, proof | Improves product-market message fit
Creative strategy | Angle matrix, hook library, offer framing | Gives you more distinct tests
Asset creation | Pages, emails, ad scripts, social posts | Compresses production time
Review | Consistency, compliance, clarity, CTA strength | Reduces embarrassing mistakes
Iteration | What worked, what failed, what to test next | Turns campaigns into a learning system

Review prompt for campaigns
Review the draft landing page below.
Score it from 1-10 on:
- clarity of promise
- specificity of pain
- trust and proof
- objection handling
- CTA strength

Then rewrite only the weakest section.
Do not rewrite the whole page unless the structure is fundamentally broken.
Explain your reasoning briefly.
Operator note

When reviewing marketing copy, ask the model to rewrite only the weakest section. This keeps the revision surgical and reduces the risk of losing what already works.

Chapter 8

Sales, consulting, and operations prompts

AI is extremely profitable when it helps you think, qualify, package, and communicate faster.

Cold outreach that does not sound like spam

You are helping me write 1-to-1 prospecting emails for service businesses.

Target: small accounting firms with outdated websites and weak local search visibility.
My service: website repositioning + local SEO clean-up.

Use the business notes below.
Write 3 opening lines that feel genuinely observed, not scraped.
Then write one full email under 140 words.
No fake flattery. No "just checking in." No "I noticed you are a leader in the space."

Sales call prep for premium offers

Act as a sales strategist.

Based on the notes below, prepare me for a discovery call with a $6k/month prospect.
Return:
- likely pains
- likely objections
- 5 diagnostic questions
- 3 ways this call could go off track
- one concise summary I can say at the end if the fit is strong

Consulting audit packaging

Turn the raw notes below into an executive summary for a client audit.
Audience: founder and marketing manager.
Tone: candid, calm, useful.
Return:
- 1-paragraph overview
- top 5 findings
- impact of each finding
- recommended next action
- what not to prioritize yet

Operations SOP builder

Build a standard operating procedure from the process notes below.
Audience: a new hire with no prior context.
Include:
- purpose
- tools needed
- step-by-step process
- common mistakes
- escalation rules
- checklist for completion
Use plain language.
Money move

Operations prompts make businesses less fragile. That is monetizable. Agencies sell retainers. Consultants sell audits. Operators save founder time. All three benefit from strong prompt packets.

Use case | Prompt ingredient most people forget | Why it matters
Prospecting | relationship stage | The right email to a cold lead is wrong for a warm referral.
Discovery prep | deal size and buying committee | A solo buyer and a 5-person committee need different questions.
Audit reports | audience seniority | Executives need decisions, not a wall of observations.
SOPs | failure modes | New hires break workflows where the document is vague.

Chapter 9

Writing and creator workflows

Prompting is not just for generating words. It is for building systems around ideas.

Writers usually need help in five places: finding a sharp angle, structuring a piece, preserving voice, extracting good lines from messy notes, and repurposing one idea across formats. Prompting helps most when you make the source material explicit and the voice constraints visible.

Newsletter engine

Turn raw notes into a weekly issue

Useful for founders, educators, and solo creators who have ideas but no editorial rhythm.

You are my newsletter editor.

Audience: independent professionals trying to build a reputation online.
My voice: direct, curious, slightly contrarian, no motivational fluff.
Use the notes below to create:
- 3 possible subject lines
- one tight intro
- 3 section headings
- a closing takeaway

Do not invent examples I did not mention.
If a claim feels weak, mark it for revision instead of polishing it.
Ghostwriting repurposer

One idea, many assets

Great for agencies and creators who need LinkedIn posts, short scripts, and email content from the same core thought.

Using the article below, create:
1) a LinkedIn post under 220 words
2) a 45-second talking-head script
3) a 5-bullet email version

Preserve the same core argument across all 3 assets.
Tone: expert, not preachy.
Avoid repeating the same opening sentence across formats.
Digital product builder

Outline a product people will actually use

Courses, guides, and toolkits make more money when they are built from real audience friction, not a pile of disconnected lessons.

Act as an instructional designer.

I want to create a paid guide for first-time virtual assistants who keep losing clients because their communication feels chaotic.
Create a practical table of contents.
For each section include:
- the learner problem
- the lesson goal
- one exercise
- one checklist or template to include

Keep it action-first, not theory-first.
Voice-preserving edit

Ask for improvement without sounding generic

Essential if you want better prose but do not want the model to sand your voice down to corporate mush.

Edit the paragraph below for clarity and rhythm.
Keep my voice direct and slightly informal.
Do not remove my opinion.
Do not make it sound like a corporate article.
Return:
- edited paragraph
- 3 brief notes explaining the changes
Writer rule

When voice matters, feed the model a style sample or short excerpt from your own writing. Adjectives like "bold" or "warm" are weaker than examples.

Part IV

Code, systems, and reliability

The best prompt engineers eventually stop thinking in messages and start thinking in workflows.

Chapter 10

Coding prompts that ship better software

AI is most useful in code when you stop asking for code and start specifying engineering constraints.

The original beginner lesson was right: show the environment, show the error, show the expected output. In 2026 that is still foundational. The upgrade is to think like an engineer reviewing a pull request.

Coding prompt block | What to include | Why it improves results
Environment | language, versions, framework, runtime, database, deployment target | Prevents generic answers that do not fit your stack
Failure signal | error message, log excerpt, failing input, observed behavior | Turns guesswork into diagnosis
Definition of done | tests, edge cases, performance, security, migration notes | Moves from toy code to production-ready code
Output shape | full file, minimal patch, diff, tests only, explanation level | Controls scope and reviewability

Weak coding prompt
Write a function to process invoices.

The model has no idea

  • language or framework
  • invoice format
  • desired output
  • error handling rules
  • performance expectations
Stronger coding prompt
Using Python 3.12 and Pydantic v2, write a function that parses invoice line items from OCR text.

Requirements:
- extract description, quantity, unit_price, line_total
- ignore subtotal and tax lines
- return a list of validated objects
- if a line is ambiguous, attach a warning instead of guessing

Output:
- the Pydantic model
- the parser function
- 4 pytest cases covering clean input, noisy OCR, missing quantity, and mixed currency symbols
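
For orientation, here is a minimal sketch of the contract that prompt pins down: the validated model plus one of the requested pytest cases. Field names come from the prompt; the rest is illustrative, not a reference implementation:

from decimal import Decimal
from pydantic import BaseModel

class LineItem(BaseModel):
    description: str
    quantity: Decimal
    unit_price: Decimal
    line_total: Decimal
    warnings: list[str] = []  # attach a warning instead of guessing, per the prompt

def test_clean_line_round_trips():
    item = LineItem(
        description="Design retainer",
        quantity=Decimal("2"),
        unit_price=Decimal("450.00"),
        line_total=Decimal("900.00"),
    )
    assert item.line_total == item.quantity * item.unit_price
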
Production-minded rule

Ask for the smallest useful artifact. A patch, test file, or refactor plan is often better than a giant code dump.

Minimal patch prompt

Review the code below and return the smallest patch that fixes the bug.
Do not rewrite unrelated functions.
Explain the root cause in 3 bullets max.

Test-first prompt

Before changing the implementation, write tests that capture the expected behavior and the edge cases described below.
Then update the code until the tests pass.
Chapter 10A

Debugging, review, and production readiness

This is where AI starts acting like a serious coding partner.

Review prompt

Review this TypeScript function for:
- correctness
- readability
- hidden edge cases
- performance traps
- security concerns

Return:
1) issues ranked by severity
2) a minimal revised version
3) tests that would have caught the worst bug

Error-driven prompt

Fix this FastAPI endpoint.
Environment: Python 3.12, FastAPI, PostgreSQL, SQLAlchemy 2.x
Observed error: sqlalchemy.exc.IntegrityError on duplicate email inserts
Expected behavior: return 409 with a clean JSON error

Show:
- root cause
- patch
- test for duplicate email path

Performance-aware prompt

We need to process up to 3 million rows from S3 into ClickHouse nightly.
Suggest an ingestion strategy.
Constraints:
- Python worker
- memory limit 2 GB
- job must finish within 15 minutes
- failure should be restartable
Return architecture, failure points, and monitoring checks.

Security-aware prompt

Audit this authentication flow for security issues.
Assume a public web app with email login and magic links.
Return findings grouped by:
- account takeover risk
- replay risk
- token handling
- logging/privacy concerns
Then propose the smallest secure improvements.
Money move

Developers who learn to ask for patches, tests, rollout notes, and risk reviews get more from AI than developers who ask for giant rewrites. That means faster delivery and fewer embarrassing regressions.

Ask for this | Instead of this | Reason
minimal patch + explanation | rewrite the whole file | smaller diffs are easier to trust
tests + edge cases | happy-path code only | tests reveal hidden assumptions
performance and security constraints | generic implementation | production has costs and risks
migration or rollout notes | code only | real systems need change management

Chapter 11

Advanced prompt systems

The modern stack: chains, tags, schemas, long context, evals, and caching.

This is where most beginner guides stop too early. Great prompting is not just about one message. It is about how prompts behave inside a repeated system.

1. Prompt chaining

Break a big job into smaller verifiable stages. Example:

  1. extract customer pains
  2. rank them by frequency and business value
  3. turn the top 3 into messaging angles
  4. generate assets from the winning angle

Chaining is slower than one-shot prompting, but often much more reliable.
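
In code, a chain is just sequenced calls where each stage's output feeds the next, with a checkpoint in between. A minimal sketch; call_model is a hypothetical helper wired to whichever provider you use:

def call_model(prompt: str) -> str:
    """Hypothetical helper: send one prompt, return the text response."""
    raise NotImplementedError

def run_campaign_chain(transcripts: str) -> str:
    pains = call_model(f"Extract customer pains from:\n{transcripts}")
    ranked = call_model(f"Rank these pains by frequency and business value:\n{pains}")
    angles = call_model(f"Turn the top 3 pains into messaging angles:\n{ranked}")
    # Checkpoint: inspect intermediate output before spending on assets.
    print("ANGLES FOR REVIEW:\n", angles)
    return call_model(f"Generate ad hooks from the strongest angle:\n{angles}")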

2. Long-context packaging

For large document tasks, structure the packet:

  • clear document labels
  • source metadata
  • task after the context
  • quote-first evidence rules

Anthropic and Google both document context-first / query-last patterns for long inputs [A3][G1].

3. Structured outputs

When software needs the answer, use schemas. OpenAI and Gemini both support JSON-schema-based structured outputs [O4][G3]. This removes a lot of brittle parsing logic.

4. Evals

Keep a small set of representative prompts and expected outcomes. Run them whenever you change prompts, model versions, or tooling. Prompt engineering without evals becomes aesthetic guesswork [O1][A1].

5. Caching

If the same context repeats, put static material first. OpenAI, Anthropic, and Gemini all support caching approaches that reward stable prompt prefixes [O2][A4][G5].

6. Grounding and fallback

Give the model an escape hatch. For extraction tasks, say: "If the field is absent, return NOT_FOUND." Microsoft Learn recommends giving the model an out rather than letting it guess [M1].

Schema-first extraction prompt
Task: extract sales call details from the transcript.
If a field is not present, return null.
Do not infer values that are not supported by the transcript.

Schema:
{
  "company_name": "string | null",
  "contact_name": "string | null",
  "budget_status": "known | unknown | null",
  "timeline": "string | null",
  "main_pains": ["string"],
  "next_step": "string | null"
}
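
Validation belongs in your application code, not only in the prompt. A minimal sketch with Pydantic that mirrors the schema above; it assumes the model call has already returned a JSON string:

from typing import Literal, Optional
from pydantic import BaseModel, ValidationError

class CallExtract(BaseModel):
    company_name: Optional[str]
    contact_name: Optional[str]
    budget_status: Optional[Literal["known", "unknown"]]
    timeline: Optional[str]
    main_pains: list[str]
    next_step: Optional[str]

def parse_or_retry(raw_json: str) -> Optional[CallExtract]:
    # Accept only responses that match the contract; everything else gets flagged.
    try:
        return CallExtract.model_validate_json(raw_json)
    except ValidationError as err:
        print("Schema violation, retry or flag for review:", err)
        return None
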
Chapter 11A

A practical workflow for prompt systems

How teams turn good prompts into dependable assets.

The prompt ops loop
  1. Draft: build a first prompt from the Prompt Spine.
  2. Test: run it on real examples, not toy ones.
  3. Score: measure accuracy, format compliance, tone, latency, and failure rate.
  4. Refine: improve one variable at a time.
  5. Version: save the prompt and label the change.
  6. Cache: front-load the stable context where supported.
  7. Monitor: rerun evals on model or prompt changes.
Failure | Likely root cause | What to try first
Wrong scope | task not bounded | define what not to include
Format drift | output contract too loose | add schema or explicit template
Hallucinated facts | poor grounding | limit sources + require quotes + allow NOT_FOUND
Generic answers | insufficient context or examples | add audience, stakes, and one good example
Inconsistent results | no snapshot pinning or evals | version prompt and test across a fixed set

Field note from real deployments

Most prompt wins come from better problem framing, not more magical wording. Cleaner inputs, better examples, stronger evaluation, and more constrained outputs usually beat clever prose.

Chapter 12

Prompt debugging clinic

When the answer disappoints, diagnose the prompt before blaming the model.

Weak prompting often hides in patterns. Once you recognize the symptom, the fix becomes faster.

Symptom | Likely cause | Fix
Too generic | context and stakes missing | add audience, business context, and goal
Too long | no length or section control | specify sections, bullet limits, or word range
Wrong tone | tone described vaguely | show one voice example and a short avoid list
Invented details | inputs under-grounded | limit to provided material and require evidence
Messy JSON | using prose instructions instead of schema | switch to structured outputs
One giant rewrite | scope not controlled | ask for smallest patch or weakest-section rewrite
Same idea repeated 10 times | request lacks diversity instruction | ask for mutually distinct options
Useful but not actionable | output is commentary, not decision support | ask for recommendation, tradeoff, and next action

If the answer is too bland

  • Add more context about the reader, stakes, and constraints.
  • Replace adjectives with examples.
  • Request distinct options, not "some ideas."
  • Ban your most hated generic phrases.

If the answer keeps drifting

  • Break the task into steps or chained prompts.
  • Use tags to separate source material from instructions.
  • Make the output structure explicit.
  • Ask for checks before finalizing.
Prompt doctor question

What exactly did I leave implicit? That single question solves an astonishing number of bad outputs.

Chapter 12A

Classic anti-patterns

These prompt habits feel natural. They quietly ruin results.

Anti-pattern: adjective stacking

"Make it smart, premium, polished, trustworthy, exciting, and modern."

Better: define audience, tone reference, and one short example.

Anti-pattern: task pile-ups

"Analyze this, summarize it, create ad copy, and write code from it."

Better: separate stages, or explicitly number the steps.

Anti-pattern: forbidding without directing

"Do not be vague."

Better: "Use concrete numbers, named examples, and one recommended next step."

Anti-pattern: no abstention rule

If missing information matters, say what should happen: return null, NOT_FOUND, or "insufficient evidence." This reduces guessing [M1].

Field note

Microsoft Learn highlights two underrated habits: repeat critical instructions when they truly matter, and give the model a safe fallback when information is absent [M1]. Those two moves alone prevent many failures.

Chapter 13

Template library - part 1

Copy, adapt, and improve. These are starting points, not sacred text.

Market research brief
Use it for: finding pains, segments, and message gaps
You are a market researcher.
Analyze the notes below and identify:
- recurring pains
- desired outcomes
- current workarounds
- buying objections
- phrases customers use repeatedly
Return a summary plus a ranked opportunity table.
Offer positioning builder
Use it for: services, software, digital products
Create 4 positioning angles for {{offer_name}}.
Audience: {{audience}}
Problem solved: {{problem}}
Proof available: {{proof}}
Constraints: no buzzwords, no competitor references.
Return angle, headline, promise, proof hook, risk of sounding generic.
Landing page outline
Use it for: fast messaging architecture
Build a landing page outline for {{offer_name}}.
Include hero, problem, solution, proof, objections, CTA.
Audience: {{audience}}
Goal: {{goal}}
Tone: {{tone}}
Do not write filler transitions.
Ad hook matrix
Use it for: creative testing
Generate 10 distinct ad hooks for {{product}}.
Audience: {{audience}}
Channels: {{channels}}
Return a table with emotional trigger, opening line, visual concept, proof cue, likely objection.
Newsletter issue planner
Use it for: creator workflow
Using the notes below, create a newsletter issue.
Return 3 subject lines, one intro, 3 section headings, and a closing takeaway.
Keep my voice: {{voice_notes}}.
Do not invent examples.
Executive summary generator
Use it for: client reports and audits
Turn the notes below into an executive summary for {{audience}}.
Return:
- one-paragraph overview
- top findings
- impact
- recommendation
- what not to prioritize yet
Tone: crisp and evidence-first.
Chapter 13A

Template library - part 2

More templates for sales, operations, and coding.

Prospecting email generator
Use it for: outbound sales
Write a short prospecting email for {{service}}.
Prospect type: {{prospect_type}}
Observed issue: {{observed_issue}}
Goal: book a discovery call.
Constraints: under 140 words, no fake flattery, no generic opener.
Sales call prep
Use it for: closing higher-ticket work
Prepare me for a discovery call.
Based on the notes below, return likely pains, likely objections, 5 diagnostic questions, deal risks, and a strong closing summary.
SOP builder
Use it for: repeatable operations
Create a standard operating procedure from the notes below.
Audience: new hire with zero context.
Include purpose, tools, step-by-step process, failure points, escalation rules, final checklist.
Bug triage prompt
Use it for: debugging
Environment: {{environment}}
Observed behavior: {{observed_behavior}}
Error: {{error_message}}
Expected behavior: {{expected_behavior}}
Return root cause, smallest patch, tests, and any deployment or migration notes.
Code review prompt
Use it for: quality and safety
Review the code below for correctness, readability, edge cases, performance, and security.
Return issues ranked by severity, then a minimal revised version, then tests that would catch the worst issue.
Structured extraction prompt
Use it for: turning text into data
Extract the fields below from the source text.
If a field is absent, return null.
Do not infer unsupported values.
Schema:
{{schema}}
Source text:
{{source_text}}
Chapter 13B

Template library - part 3

Advanced templates for long context and prompt systems.

Long-document synthesis
Use it for: reports, research, contracts
<documents>
  <document id="1"><source>{{source_1}}</source><document_content>{{doc_1}}</document_content></document>
  <document id="2"><source>{{source_2}}</source><document_content>{{doc_2}}</document_content></document>
</documents>
<task>
First quote the most relevant evidence.
Then summarize the key findings.
Then identify contradictions or open questions.
</task>
Prompt chain planner
Use it for: multi-step workflows
Break the goal below into a prompt chain.
Goal: {{goal}}
Constraints: {{constraints}}
Return:
- stage name
- input to that stage
- output from that stage
- quality check for that stage
Campaign review scorecard
Use it for: marketing QA
Score the draft asset below from 1-10 on clarity, proof, objection handling, tone, CTA strength, and distinctiveness.
Then rewrite only the weakest section.
Meeting insight extractor
Use it for: ops and management
Analyze this meeting transcript.
Return:
- decisions made
- unresolved questions
- assigned owners
- deadlines
- risks or blockers
If ownership is unclear, say unassigned.
Course or guide designer
Use it for: education products
Design a practical curriculum for {{audience}}.
Goal: {{goal}}
Constraints: {{constraints}}
For each module include outcome, lesson concept, exercise, checklist, and common mistake.
Prompt QA checker
Use it for: improving prompts themselves
Review the prompt below.
Identify missing context, vague goals, hidden assumptions, weak output definitions, and missing checks.
Then rewrite it using the 7-part Prompt Spine.
Chapter 14

Practice lab

Use these drills to build the instinct, not just the theory.

Drill 1 - Upgrade the vague request

Rewrite this prompt using the Prompt Spine:

Help me write better ads for my app.

Questions to force better prompting:

  • What category is the app, and who exactly buys it?
  • Which channel will the ads run on?
  • What is the campaign goal?
  • How many distinct angles do you need?
  • What format should the output take?
  • What generic phrasing is banned?

Drill 2 - Add evidence rules

You are summarizing a 40-page interview synthesis. Rewrite the prompt so the model must quote relevant evidence before drawing conclusions.

Drill 3 - Convert prose formatting into a schema

You currently prompt: "Return JSON with name, industry, score, and next step." Rewrite it as a stricter, schema-like instruction with fallback rules for missing data.

Drill 4 - Split a giant task

You want one prompt to analyze support tickets, find product issues, create a roadmap, and draft a launch email. Split it into a chain.

Drill 5 - Make a coding prompt production-ready

Take a toy request like "Build a login system" and add the environment, threat model, error cases, and definition of done.

Self-review rubric

Score each rewritten prompt from 1 to 5 on task clarity, context completeness, visible constraints, output definition, and presence of checks. Anything at 3 or below gets one more pass.

Chapter 14A

Answer sketches and coaching notes

There are many good rewrites. The point is not one perfect answer. The point is better information design.

For Drill 1, notice the upgrade path

A strong rewrite would specify the app category, the buyer, the channel, the campaign goal, the angle diversity required, the format, and the language to avoid.

Even a modest upgrade like "Write 6 distinct Meta ad hooks for a budgeting app for freelancers" is already far stronger than the original.

For Drill 2, quote-first rules matter

Good evidence-first prompts often use a sequence like: quote -> interpret -> recommend. This lowers hallucination risk and gives you more trustworthy synthesis.

For Drill 3, remember missing data rules

The schema is not complete until you define what happens when a field is absent. Null, unknown, or NOT_FOUND are all better than silent guessing.

For Drill 4, chaining is clarity

Whenever one task depends on the quality of another task, chaining usually helps. It creates checkpoints and makes failures easier to see.

Most important coaching note

Prompting skill is mostly the skill of externalizing what was previously invisible in your head.

Chapter 15

References and further study

These sources informed the guidance in this book. Read the official docs. They are better than second-hand folklore.

Code | Source | Why it matters
O1 | OpenAI - Prompt engineering | Model-specific prompting, snapshot pinning, and eval guidance.
O2 | OpenAI - Prompting | Prompt objects, variables, and prompt caching overview.
O3 | OpenAI Cookbook - GPT-5.2 Prompting Guide | Verbosity control, scope discipline, and agentic prompt patterns.
O4 | OpenAI - Structured model outputs | Schema-first output design for reliable machine-readable responses.
A1 | Anthropic - Prompt engineering overview | Success criteria, examples, XML tags, roles, and chaining.
A2 | Anthropic - Use XML tags to structure your prompts | Why tags improve clarity, parseability, and accuracy.
A3 | Anthropic - Long context prompting tips | Context-first packaging, query-last structure, quote grounding.
A4 | Anthropic - Prompt caching | Static-prefix placement, cache hierarchy, and reuse patterns.
G1 | Google Gemini - Prompt design strategies | Iterative prompt design, few-shot guidance, structure, and chaining.
G2 | Google Gemini - System instructions | Behavior control through system configuration.
G3 | Google Gemini - Structured Outputs | JSON schema, type-safety, and validation guidance.
G4 | Google Gemini - Thinking | Thinking levels and budgets for reasoning control.
G5 | Google Gemini - Context caching | Implicit and explicit caching guidance.
M1 | Microsoft Learn - Prompt engineering techniques | Grounding, cues, repeated critical instructions, fallback behaviors.
W1 | AWS - Prompt engineering concepts | Template thinking, reusable recipe structure, and deployment habits.

Best habit after reading this book

Pick one recurring task you do every week. Turn it into a versioned prompt packet with clear inputs, output shape, and checks. Then test it on five real examples. Your next breakthrough will not come from reading one more tip. It will come from operationalizing one task well.

Prompt less like a tourist. Design more like an operator.

The people who get the most from AI are rarely the people with the fanciest words. They are the people who can define the job, package the context, constrain the output, and evaluate the result.

That is a professional skill. It compounds across marketing, writing, coding, operations, and decision-making. Learn it once. Use it everywhere.

The Prompt Engineering Playbook
April 2026 edition
kearai.com