From understanding how AI works, to writing prompts that get results — your team's complete toolkit for showing up confidently at myKaarma.
The AI Dictionary
Every term that comes up in stand-ups, PRDs, and engineering conversations — explained for someone who thinks in experiences, not code.
Model Fundamentals
LLM — Large Language Model
The engine behind Claude, ChatGPT, Gemini...
Model+
📚 Analogy: The world's most-read book nerd
An LLM is a massive neural network trained on enormous amounts of text. It learned patterns in language so well it can generate coherent, contextually relevant text — code, prose, analysis — just by predicting what comes next. It doesn't "understand" like a human — it statistically predicts.
🎨 Design implication
Because LLMs predict rather than reason perfectly, your UI needs to handle wrong answers gracefully. Design for uncertainty: show confidence indicators, allow easy correction, don't present AI output as ground truth.
📋 PM angle
When writing a PRD, specify which model tier you're targeting — it affects cost, latency, and capability. A GPT-4-class model can cost 10–50x more per call than a smaller model.
Context Window
How much the AI can "see" at once
Model+
🗒️ Analogy: Working memory / short-term RAM
The context window is the total amount of text the model can process in one go — both your input and its output, measured in tokens. Once content falls outside the window, the model can't see it. Old models had ~4K tokens (~3 pages); modern ones handle 128K–1M+ tokens.
🎨 Design implication
This is why chat apps sometimes "forget" earlier parts of a conversation. Design for this: offer summarization features, create visual affordances when context is getting full.
📋 PM angle
Context window size is a key spec. Long-context tasks need bigger windows but cost more. Factor this into your cost model per interaction.
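To make the "forgetting" concrete, here is a minimal sketch of context-window trimming. It uses a naive 4-characters-per-token estimate (real tokenizers differ, so treat the numbers as illustrative):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit inside the window.

    Walks backward from the newest message, accumulating the
    estimated token cost, and drops anything older that no longer
    fits -- which is exactly why chat apps 'forget' early turns.
    """
    kept, used = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["intro " * 50, "question " * 50, "follow-up " * 10]
trimmed = trim_history(history, max_tokens=150)
# The oldest message no longer fits, so it silently disappears.
```

This is the behavior your summarization features and "context is getting full" affordances are designed around.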
Token
The currency of AI — how text is measured and billed
Model+
🪙 Analogy: Syllables, roughly
Models don't process words — they process tokens. A token is roughly 3/4 of a word. "Unbelievable" might be 3 tokens. Everything you send to and receive from an AI is counted in tokens. Most APIs charge per token.
🎨 Design implication
Shorter prompts are cheaper and faster. Help users understand verbosity has a cost. Character limits on inputs can meaningfully reduce costs at scale.
📋 PM angle
Your unit economics depend on tokens. A 2,000-token system prompt is a fixed cost floor per interaction. Track avg input/output tokens as a core metric.
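The cost-floor math is worth seeing once. A sketch with made-up placeholder prices (check your provider's current rates):

```python
PRICE_PER_1K_INPUT = 0.003   # hypothetical $/1K input tokens
PRICE_PER_1K_OUTPUT = 0.015  # hypothetical $/1K output tokens

def interaction_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one call: tokens / 1000 * price per 1K."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)

# A 2,000-token system prompt rides along with every call, so it
# is a fixed cost floor on top of whatever the user actually typed:
cost = interaction_cost(input_tokens=2000 + 500, output_tokens=400)
```

Multiply that per-call cost by projected daily volume and you have the core metric the PM angle asks you to track.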
Prompt / System Prompt
The instructions that shape AI behavior
Model+
📝 Analogy: A creative brief for the AI
A prompt is the input you send. A system prompt is hidden backstage instructions set by developers — it defines the AI's role, tone, constraints, and persona before users ever interact with it.
🎨 Design implication
You can absolutely write system prompts — it's UX writing for AI. Tone, persona, response format, what the AI should refuse — these are design decisions you should own.
📋 PM angle
Treat prompts like product copy — they need review cycles and versioning. Include prompt requirements in your PRDs and define who owns prompt iteration.
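To see how the "backstage" system prompt travels with every user turn, here is the messages shape that common chat APIs use. The system prompt text is a hypothetical example, not real product copy:

```python
SYSTEM_PROMPT = (
    "You are a service-lane assistant for dealership advisors. "
    "Be concise, warm, and never invent vehicle data."
)

def build_messages(user_input: str) -> list[dict]:
    """Prepend the hidden system prompt to the user's message.

    The user never sees the system role, but it shapes tone,
    persona, and refusals on every single response.
    """
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("When is my car ready?")
```

Because the system prompt is just text in a list, it can be versioned and reviewed like any other product copy.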
Hallucination
When AI confidently makes things up
Model+
😬 Analogy: The intern who bluffs
LLMs sometimes generate plausible-sounding but completely fabricated information. The model doesn't "know" it's wrong — it's just producing statistically likely text. Fake citations, wrong dates, and invented features are all common forms.
🎨 Design implication
Design verification flows for high-stakes output. Use visual patterns (warnings, citation markers) to signal "AI-generated — please verify." Never auto-publish unverified AI output.
📋 PM angle
Hallucination risk belongs in the risk assessment section of any AI PRD. Define acceptable error rates and mitigation strategies (RAG, human review).
Temperature
Controls how creative vs. predictable the AI is
Model+
🌡️ Analogy: A creativity dial from 0–2
Low temperature (0.1) = focused, deterministic. High temperature (1.5) = creative, unpredictable. For code or data extraction, use low temp. For creative brainstorming, use higher.
🎨 Design implication
If you're designing a feature that needs consistent output (legal summaries), ask your dev to lower the temperature. Creative brainstorm tools? Crank it up. This is a UX decision you should be making.
📋 PM angle
Include temperature as a product spec. Running A/B tests comparing outputs at different temperatures is a legit PM experiment during early feature development.
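A toy illustration of the dial, using a temperature-scaled softmax in pure Python: the same raw model scores become near-deterministic at low temperature and flatter (more surprising) at high temperature.

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw scores into probabilities, sharpened or flattened
    by temperature. Lower temp -> the top choice dominates."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]  # raw scores for 3 candidate next tokens
focused = softmax_with_temperature(scores, 0.1)   # low temperature
creative = softmax_with_temperature(scores, 1.5)  # high temperature
```

At 0.1 the top token gets essentially all the probability mass (deterministic); at 1.5 the alternatives stay live, which is where the "creativity" comes from.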
Agents & AI Systems
AI Agent
An AI that takes actions, not just answers questions
Agent+
🤖 Analogy: A virtual employee with a to-do list
An agent doesn't just respond — it acts. It can browse the web, write and run code, send emails, book meetings. It loops: perceive → plan → act → observe results → repeat until done. You give a goal, not a single question.
🎨 Design implication
Agent UX needs visibility (what is it doing?), control (can I pause it?), and explainability (why did it do that?). This is a massive, underdeveloped design space to own.
📋 PM angle
A wrong action is worse than a wrong answer. Define what the agent can do autonomously vs. what needs human approval. Document "human-in-the-loop" checkpoints explicitly.
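The perceive → plan → act → observe loop can be sketched in a few lines. The "tools" here are trivial stubs; a real agent would search the web, run code, or send messages, which is exactly why the autonomy boundary matters:

```python
def agent_loop(goal: str, tools: dict, max_steps: int = 5) -> list[str]:
    """Run a toy perceive -> plan -> act -> observe loop."""
    log = []
    for step in range(max_steps):
        # Plan: pick the next tool (trivially, in order here --
        # a real agent would let the model decide).
        name = list(tools)[step % len(tools)]
        # Act + observe: call the tool and record what happened.
        observation = tools[name](goal)
        log.append(f"{name}: {observation}")
        if "DONE" in observation:
            break
    return log

tools = {
    "search": lambda g: f"found 2 docs about '{g}'",
    "summarize": lambda g: "summary drafted, DONE",
}
log = agent_loop("warranty coverage", tools)
```

The `log` list is the raw material for the visibility and explainability UX the design implication describes: every step the agent took, in order.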
RAG — Retrieval Augmented Generation
How AI knows about YOUR data
Agent+
📎 Analogy: Open-book exam vs. memorization
RAG searches a knowledge base for relevant info, then feeds that into the prompt. So the AI answers based on YOUR documents, not just training data. A support bot using RAG searches your help articles before answering.
🎨 Design implication
RAG systems need source attribution UX. Design "cited from: [doc]" patterns. Think about what happens when no relevant document is found — design graceful fallbacks.
📋 PM angle
RAG quality depends on your knowledge base. Treat it as a product artifact with an owner, update cadence, and quality metrics. Garbage in, garbage out.
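The retrieve-then-generate pattern, sketched end to end. Retrieval here is a naive word-overlap score purely for illustration; real systems use vector search. Note the graceful fallback when no document matches, which is the design case called out above:

```python
from typing import Optional

def retrieve(question: str, docs: dict) -> Optional[str]:
    """Return the doc title whose text shares the most words with
    the question, or None if nothing overlaps at all."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for title, text in docs.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best, best_score = title, score
    return best

def build_rag_prompt(question: str, docs: dict) -> str:
    source = retrieve(question, docs)
    if source is None:
        # Graceful fallback: be explicit that no source was found.
        return f"Answer from general knowledge only: {question}"
    return f"Using the article '{source}':\n{docs[source]}\n\nQ: {question}"

docs = {
    "Resetting your password": "how to reset a forgotten password",
    "Booking a service": "how to book a service appointment online",
}
prompt = build_rag_prompt("How do I reset my password?", docs)
```

Because the retrieved title is known at prompt-build time, the "cited from: [doc]" attribution UX falls out of the same code path.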
Guardrails / Safety Filters
Rules that prevent the AI from going off the rails
Agent+
🚧 Analogy: Content policy + compliance officer built in
Guardrails constrain what the AI does — topics it won't touch, actions it won't take, formats it must follow. Some are built into the model; others are custom layers engineers add.
🎨 Design implication
When AI hits a guardrail and refuses, design the error carefully — it shouldn't feel like a wall, it should redirect helpfully. AI refusal message design is high-leverage, underrated UX.
📋 PM angle
Work with legal to define content restrictions before launch. Build a failure taxonomy — every way the AI could fail or misbehave, and which require guardrails.
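A custom guardrail layer can be as simple as a post-check on the model's output. The blocked-topic list below is illustrative, not a real policy, and note that the refusal redirects rather than stonewalls:

```python
BLOCKED_TOPICS = {"pricing guarantees", "legal advice"}

def apply_guardrail(ai_output: str) -> tuple:
    """Return (text to show the user, whether the guardrail fired)."""
    lowered = ai_output.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            # Redirect helpfully instead of showing a hard wall.
            redirect = ("I can't help with that directly, but your "
                        "service advisor can. Want me to flag it for them?")
            return redirect, True
    return ai_output, False

shown, fired = apply_guardrail("We offer pricing guarantees on all repairs.")
```

Logging every time `fired` is true gives you the data for the failure taxonomy the PM angle recommends.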
Infrastructure & APIs
API — Application Programming Interface
How your product talks to an AI model
Infra+
🍽️ Analogy: A restaurant order window
An API is how two software systems communicate. Your product sends a user's question via API, the AI sends back a response. Engineers write code handling this exchange. Every call has a cost and adds latency.
🎨 Design implication
Every AI interaction has a delay. Design loading states for ALL AI calls. Streaming (the AI types as it responds) is one pattern to reduce perceived latency — request it as a design requirement.
📋 PM angle
Know your per-call cost, projected call volume, and monthly AI infrastructure cost. Put this in your business case with clear cost/user estimates.
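From product code, an AI call is just a function with a delay attached. This sketch stubs the model call with a sleep (no real network), but the shape, and the reason you always need a loading state, is the same:

```python
import time

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real API client call."""
    time.sleep(0.05)  # pretend network round-trip + inference
    return f"(answer to: {prompt})"

def ask_with_timing(prompt: str) -> dict:
    """Wrap the call and measure latency, so it can be logged
    and designed around (spinners, streaming, timeouts)."""
    start = time.monotonic()
    answer = call_model(prompt)
    latency = time.monotonic() - start
    return {"answer": answer, "latency_s": latency}

result = ask_with_timing("Is my car ready?")
```

Tracking `latency_s` per call is what lets you put a real number behind "design for the delay."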
Fine-tuning vs. Prompting
Training a custom model vs. better instructions
Infra+
🏋️ Analogy: Training someone from scratch vs. briefing an expert
Prompting: Write clever instructions to get better behavior. Fast, cheap, flexible — start here always. Fine-tuning: Actually retrain the model on custom data. Expensive, weeks of work, but achieves things prompting can't.
🎨 Design implication
If a feature feels "off" in tone, the fix might be in the prompt — which you can help write. You don't need engineering access to start improving AI quality. That's your leverage.
📋 PM angle
Most quality problems are prompt-fixable. Exhaust better prompts before escalating to fine-tuning, which is a weeks-long, costly investment requiring thousands of quality training examples.
PM Frameworks for AI
Evals (Evaluations)
How you measure if your AI is actually working
PM+
🧪 Analogy: QA testing, but for intelligence
Evals are test suites measuring AI output quality. You create test inputs and expected outputs, run your system against them, and grade results. This is how engineers know if a prompt change improved or degraded performance.
🎨 Design implication
You can contribute to evals. Writing test cases from a user perspective — "this is what good looks like for this user need" — is valuable input engineers often lack. Bring UX heuristics to eval design.
📋 PM angle
Before launching any AI feature, define what "good" looks like and set a quality threshold. Require evals to be built before launch. Make evals part of your Definition of Done.
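A tiny eval harness shows the whole idea: test cases in, pass rate out, compared against a launch threshold. Grading by substring match is a simplification; real evals often use rubric- or model-based graders:

```python
def run_evals(system, cases, threshold):
    """Score a system against (input, must_contain) test cases.
    Returns (pass_rate, ship_ok) -- ship only above threshold."""
    passed = sum(1 for q, expected in cases if expected in system(q))
    rate = passed / len(cases)
    return rate, rate >= threshold

def toy_system(question: str) -> str:
    """Stand-in for the real prompt + model pipeline."""
    return "Your appointment is confirmed for Tuesday."

cases = [
    ("When is my appointment?", "Tuesday"),
    ("Is it confirmed?", "confirmed"),
    ("What time?", "9am"),  # this case fails -- no time given
]
rate, ship_ok = run_evals(toy_system, cases, threshold=0.9)
```

Re-running the same cases after every prompt change is how engineers know whether the change improved or degraded performance, and writing `cases` from the user's perspective is exactly the contribution the design implication describes.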
Human-in-the-Loop (HITL)
Where humans check AI work before it goes live
PM+
✋ Analogy: An editor reviews before publish
HITL is a design pattern where a human reviews AI output before it takes effect. The spectrum runs from fully autonomous to "AI drafts, human approves everything." Most responsible AI products sit in the middle.
🎨 Design implication
Designing review UIs for AI output is one of the most important underexplored design spaces. Diff views, confidence indicators, bulk approval — these are all design patterns worth mastering now.
📋 PM angle
In your PRD, define your HITL model explicitly. Higher-stakes tasks need more oversight. Quantify human time cost of your review flow — this is part of your operational model.
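The HITL spectrum can be encoded as a simple routing gate: low-stakes AI actions run automatically, high-stakes ones queue for human approval. The stakes rules below are illustrative placeholders for whatever your PRD defines:

```python
HIGH_STAKES = {"send_email", "issue_refund", "modify_record"}

def route_action(action: str, payload: str) -> tuple:
    """Return ('auto', ...) or ('needs_review', ...) for an AI action.
    The HIGH_STAKES set is the HITL boundary the PRD must define."""
    if action in HIGH_STAKES:
        return ("needs_review", f"{action}: {payload}")
    return ("auto", f"{action}: {payload}")

queue = [
    route_action("draft_reply", "Hi Sam, your car is ready..."),
    route_action("issue_refund", "$120 to order 4481"),
]
```

Counting how many items land in `needs_review` per day is the "human time cost" the PM angle says to quantify.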
The Prompt Guide
Everything you need to write effective prompts — from the core principles to the exact anatomy of a great prompt, with a real built example.
The 6 Principles
01
Give the AI a Role
AI performs dramatically better with a persona. "You are a..." primes the model with the right knowledge domain, tone, and behavior patterns.
✗ Vague
Summarize this document.
✓ With role
You are a senior UX researcher. Summarize this usability report, highlighting the top 3 friction points for a non-technical executive.
02
Specify the Output Format
Don't leave format to chance. Tell the AI exactly how you want the answer — bullets, table, numbered list, under X words.
✗ No format
Give me ideas for onboarding improvements.
✓ Formatted
Give me 5 onboarding improvement ideas. Format: bullet list. Each: one idea + one rationale. Max 25 words per bullet.
03
Add Context, Not Assumptions
The AI knows nothing about your specific situation unless you tell it. Background on your product, user, and constraints makes every output more relevant.
✗ No context
Write a push notification for our app.
✓ With context
Our app is for auto dealership service advisors. Users miss follow-ups. Write a push for day 3. Max 60 chars. Encouraging, not guilt-inducing.
04
Use Positive Instructions
Tell the AI what TO do, not just what to avoid. Negative-only instructions leave too much room for interpretation.
✗ Only negatives
Don't be too formal. Don't use jargon. Don't write too much.
✓ Positive framing
Use a warm, conversational tone. Write at a 7th grade reading level. Keep the response to 3 sentences maximum.
05
Show, Don't Just Tell
One concrete example is worth 10 abstract instructions. Include a sample of what good looks like — the AI will pattern-match to it.
✗ Abstract
Write error messages that feel human.
✓ With example
Write error messages like this: "Hmm, that didn't save. Try again?" — not like: "Error 403: Unauthorized request."
06
Ask for Reasoning First
For complex tasks, tell the AI to think step-by-step before answering. This "chain of thought" dramatically improves accuracy on hard analytical questions.
✗ Jump to answer
Which feature should we build next?
✓ Reason first
Think through effort, user impact, and strategic fit for each option — then give me your top recommendation with a 2-sentence rationale.
The 6 Building Blocks
🧑 Role
Who is the AI? Set a persona, expertise level, and perspective. Defines tone, knowledge domain, and decision-making lens.
📎 Context
What's the situation? Background the AI needs — your product, user, the problem, and any relevant data or constraints.
✅ Task
What should it do? One clear verb: write, analyze, compare, summarize, rewrite. One task per prompt is more reliable than bundled requests.
📐 Format
How should output look? Bullet list, table, JSON, paragraph, under 100 words. Explicit format prevents unpredictable structure.
🚧 Constraints
What are the rules? Tone, things to avoid, reading level, brand voice, character limits. Use positive framing where possible.
💡 Example
Show what good looks like. A sample input/output or "like this, not like that" contrast. The single highest-leverage thing you can add.
A Fully Built Prompt
All 6 blocks assembled for a real myKaarma UX writing task.
[ROLE] You are a senior UX writer with expertise in automotive service lane software and behavior change design.
[CONTEXT] We're designing myKaarma's mobile check-in flow for vehicle owners at dealerships. Target user: busy car owner, 30–55. Our "Add vehicle notes" step has a 22% completion rate — users skip it and we lose service context.
[TASK] Write 3 alternative versions of the copy for the "Add vehicle notes" step.
[FORMAT] For each version: headline (max 8 words), subtext (max 20 words), placeholder text (max 10 words), CTA button (max 4 words). Labeled list.
[CONSTRAINT] Tone: helpful and low-effort, not demanding. Avoid "required" or "must." Each version tries a different angle: time-saving, service quality, peace of mind.
[EXAMPLE] Current copy to improve: "Vehicle Notes — Tell us about your car. [Add notes] [Skip]"
The Iteration Loop
Step 1
Write & Run
Start with Role + Task + Format. Run it. Don't overthink the first draft.
Step 2
Diagnose
Wrong tone? Missing context? Too long? Identify the one thing that's off.
Step 3
Add a Block
Wrong tone → add constraints. Wrong output → add format or example.
Step 4
Save & Share
Good prompts are team assets. Save them to Templates so the whole team benefits.
Ready-to-Use Prompt Templates
Copy, fill in the brackets, and run. Each template is built on the 6-block anatomy and tested for common designer + PM tasks.
UX Critique Request
Get structured feedback on your designs from an AI critic
You are a senior UX designer with 10+ years of experience in [industry: e.g., fintech / healthcare / e-commerce].
I'm sharing a description of a screen/flow for [product name]. The target user is [user description] and their primary goal on this screen is [user goal].
Here's the screen description: [describe your design or paste your copy/structure]
Please evaluate it across these 4 dimensions:
1. Clarity — Is the user's next action obvious?
2. Trust — Does anything feel risky or confusing?
3. Friction — What's the biggest drop-off risk?
4. Delight — What's one thing that could surprise and delight?
Format: One paragraph per dimension. Be direct and specific.
Replace all [brackets] with your specifics before running
The more specific your screen description, the more actionable the feedback
Try running this for each major screen in your flow
Error Message Generator
Write human, on-brand error states fast
You are a UX writer for [product name]. Our brand voice is [e.g., warm and human / professional but approachable / playful and witty]. Our users are [user description].
Write 3 versions of an error message for this scenario: [describe the error, e.g., "user's payment failed" / "file upload too large" / "session timed out"]
For each version:
— Headline: max 6 words
— Body: max 20 words, explain what happened + what to do
— CTA: max 4 words
Tone versions: (1) Empathetic, (2) Minimal/direct, (3) Slightly playful
Never use: technical error codes, passive voice, or the word "invalid"
Add a "never use" list of words your brand avoids for even tighter output
Paste in an example of existing copy you like to anchor the tone
Flow Gap Finder
Identify missing states and edge cases in your flows
You are a QA-minded UX designer. I'm going to describe a user flow and I want you to identify every state, edge case, and error condition I might have missed.
Product: [product name + one-line description]
User goal: [what the user is trying to accomplish]
Flow I've designed: [describe step by step: step 1 → step 2 → step 3...]
Please list:
1. Empty states I might have missed
2. Error states I haven't accounted for
3. Edge cases (unusual inputs, slow connections, permissions issues)
4. States for different user types (first-time vs. returning, mobile vs. desktop)
Format: Numbered list under each category. Be specific — call out the exact step in my flow where each gap occurs.
Feature Brief / One-Pager
Draft a concise feature proposal for stakeholder alignment
You are a senior product manager writing a feature brief for stakeholder alignment.
Feature idea: [describe the feature in 1–2 sentences]
Product context: [product name, target users, stage: early-stage / growth / mature]
Problem it solves: [what user pain or business problem does this address?]
Known constraints: [timeline, technical, resource, or market constraints]
Write a one-page feature brief with these sections:
1. Problem Statement (2 sentences)
2. Proposed Solution (3 sentences)
3. Success Metrics (3 measurable KPIs)
4. Risks & Open Questions (bullet list, max 4 items)
5. Recommended Next Step (1 sentence action item)
Tone: Confident but not overselling. Data-driven language where possible. Executive-readable.
Add "Key stakeholders: [names/roles]" to tailor the language to your audience
Paste in user research quotes as context to make the problem statement more compelling
Prioritization Framework
Get AI help scoring and ranking your backlog
You are a product strategist. I need help prioritizing my feature backlog.
My product's current top priority is: [e.g., reduce churn / grow activation / increase revenue]
My team's capacity this quarter: [e.g., 2 engineers, 6 weeks]
Here are my backlog items:
[List each item: Feature name — one-sentence description]
Score each item on these dimensions (1–5):
— User Impact: How much does this improve the core user experience?
— Strategic Fit: How directly does this serve our #1 priority?
— Effort: Inverse score — 5 = very low effort, 1 = very high effort
— Confidence: How certain are we that users want this?
Present as a table with a final Priority Score (average). Then give a 2-sentence recommendation on where to start.
Stakeholder Objection Prep
Anticipate and prepare for pushback before your presentation
You are a skeptical but fair senior stakeholder — part engineer, part business leader. I'm about to present this proposal:
[Summarize your proposal in 3–5 sentences]
Stakeholders in the room: [e.g., CTO, Head of Sales, CFO]
Play devil's advocate. Give me the 5 hardest objections each stakeholder type would likely raise. For each objection, include:
— The question they'd actually ask (as a direct quote)
— What concern is underneath it
— A suggested response I could give (2–3 sentences, evidence-based)
Be genuinely tough — I want to be prepared, not just validated.
Interview Question Generator
Build a user interview guide for any research goal
You are a senior UX researcher specializing in generative user research.
Research goal: I want to understand [what you want to learn — e.g., why users abandon checkout / how teams currently manage X / what makes users trust Y]
Target participant: [describe who you're interviewing: role, context, experience level]
Write a 45-minute interview guide with:
— 2 warm-up questions (build rapport, not data)
— 4 core exploration questions (open-ended, no leading, behavior-focused)
— 2 follow-up probes for each core question
— 1 closing question
Rules: No "would you" questions. No hypotheticals. Ask about past behavior, not future intentions. Use "tell me about a time when..." framing.
Research Synthesis Assistant
Turn raw notes into themes and insights
You are a UX researcher performing affinity analysis. I will give you raw notes from [number] user interviews about [topic].
Raw notes:
[Paste your interview notes here — can be messy, direct quotes, bullet points, etc.]
Please:
1. Identify the top 4–6 themes (patterns that appear across multiple participants)
2. For each theme: give it a name, write a 2-sentence summary, and list 2–3 supporting quotes
3. Flag any surprising or contradictory findings
4. Suggest 2 "how might we" statements that could become design prompts
Format: Structured markdown. Bold theme names. Keep quotes verbatim.
System Prompt Writer
Write a production-ready system prompt for an AI feature
You are a prompt engineer helping me write a system prompt for an AI-powered product feature.
Feature: [describe the AI feature — what it does, where it lives in the product]
User: [who will interact with this feature, their context and goals]
Desired behavior: [what should the AI do well?]
Off-limits: [topics, actions, or output types the AI must never do]
Tone: [brand voice — e.g., professional, warm, concise, encouraging]
Write a system prompt that:
— Establishes a clear AI persona and role
— Defines behavioral boundaries clearly
— Specifies output format expectations
— Handles ambiguous or off-topic requests gracefully
— Is under 300 words
After the system prompt, write a 3-question test suite I can use to verify it works as intended.
The test suite at the end is crucial — always validate your system prompt before shipping
Iterate: run the system prompt, then come back and refine based on what the AI does wrong
Microcopy Batch Generator
Write all UI copy for a feature in one shot
You are a UX writer. Write all microcopy for the following feature.
Feature: [feature name and 1-sentence description]
Brand voice: [e.g., friendly and direct / professional and clear / warm and encouraging]
Users: [who they are]
Write copy for each of these UI elements:
— Page title
— Page subtitle / descriptor (max 15 words)
— Primary CTA button
— Secondary CTA / cancel link
— Empty state heading + body (max 20 words)
— Success message (max 12 words)
— Loading state label (max 5 words)
— Tooltip for the main action (max 15 words)
— Confirmation modal: title + body + CTA
Rules: No passive voice. No jargon. Use "you" not "the user". Keep all copy in the same voice.
Design → Dev Handoff Notes
Turn your design decisions into engineering-ready specs
You are a technical product designer writing engineering handoff notes. Translate my design decisions into developer-friendly language.
Feature: [feature name]
Design decisions I made: [describe your design in plain language — layout, interactions, states, behaviors]
Tech stack (if known): [e.g., React, iOS native, web — or "unknown"]
Write handoff notes that include:
1. Component inventory (list every distinct UI component)
2. Interaction spec (describe each user action and the expected system response)
3. States to build (list every state: empty, loading, error, success, disabled, etc.)
4. Edge cases to handle (inputs that could break the UI)
5. Open questions for the engineer (anything I haven't specified)
Use technical but clear language. Avoid design jargon. Be precise about what is and isn't defined.
This output becomes your Figma annotation content or Jira ticket description
Share the "open questions" section in your handoff meeting to unblock engineers faster
AI Feature Spec for Engineers
Specify an AI-powered feature in language devs understand
You are a PM/designer writing an AI feature spec for an engineering team.
Feature: [name and description]
What the AI should do: [describe the AI's job in this feature]
Input: [what data/text goes into the AI?]
Expected output: [what should the AI return?]
Write a technical spec that covers:
1. API integration requirements (input schema, output format, error handling)
2. Latency expectations and loading state requirements
3. Streaming vs. non-streaming recommendation + rationale
4. Fallback behavior when the AI call fails
5. Guardrails needed (what the AI must never return)
6. Suggested evaluation criteria (how will we know it's working?)
7. Logging requirements (what do we need to track?)
Be specific. Flag anything that needs an engineering decision.
No templates yet in Other
Add templates here that don't fit the other categories — cross-functional prompts, personal workflows, or experimental ideas.
➕ Add a Template
Share a prompt that's saved you time. It'll appear in the category tab you choose, available to copy by anyone on the team.
Template Changelog
Every edit made to any template — who changed it, what changed, and the ability to restore any previous version.
No changes yet
When you or a teammate edits a template, every version will be tracked here so nothing is ever lost.
✏️ Edit Template
Your changes are versioned — the original is always preserved in the Changelog.
↩ Restore Previous Version
Prompt Workshop
Three tools in one. Build a prompt from scratch, transform a weak one, or get AI feedback on something you've already written.
Build from blocks
Your Prompt
Your prompt assembles here as you fill in the fields...
Role · Context · Task · Format · Constraints · Example
Paste your rough prompt
Try an example
Transformed Prompt
Your rewritten prompt will appear here, structured using the 6-block anatomy.
Paste your prompt to critique
Load an example
Critique
Your critique will appear here — a score across the 6 blocks, the 3 highest-leverage improvements, and a coaching note.
AI Feature Red Flags Checklist
10 questions every designer should answer before any AI feature ships. Check them off as you go — your readiness score updates live.
Work through the checklist below
🎨 UX & Design Readiness
Have you designed all AI output states — loading, streaming, error, empty, and success?
AI responses have more states than regular UI. A missing error state ships as a blank screen or broken layout in prod.
Critical
Is it clear to the user that this content was generated by AI?
Users who don't know output is AI-generated will trust it more than they should — and blame the product when it's wrong.
High
Can users easily correct, edit, or dismiss AI output without friction?
AI will be wrong. If correction requires too many steps, users will abandon the feature rather than fix it.
High
🤖 AI Behavior & Safety
Have you tested what happens when the AI produces a hallucinated or factually wrong response?
Hallucination is not a bug — it's a known behavior. Your design needs to account for it, not assume it won't happen.
Critical
Do you have guardrails defined for what the AI should never say or do in this feature?
Without explicit constraints, the AI will occasionally go off-brand, give dangerous advice, or expose sensitive information.
Critical
Is there a human-in-the-loop step for any high-stakes AI action (sending messages, making payments, modifying records)?
A wrong AI answer is annoying. A wrong AI action — sending the wrong amount, emailing the wrong customer — is a real incident.
Critical
📋 Product & PM Readiness
Do you have a quality baseline — an eval score or acceptance threshold — that defines "good enough to ship"?
Without a defined threshold, "good enough" is subjective and AI features ship on gut feel. That's how bad experiences get to production.
High
Do you know the per-interaction cost, and does it fit within your product's unit economics?
AI features can look cheap in testing and bankrupt a feature at scale. 10,000 daily users × $0.02/call is $200 a day — roughly $6k/month before you notice.
Medium
⚙️ Technical & Performance
Have you designed for latency — including a fallback experience when the AI call is slow (>3 seconds) or fails completely?
Network issues, rate limits, and model outages happen. A blank screen or spinner with no timeout message destroys trust fast.
High
Is user data handled appropriately — have you confirmed what data is sent to the AI and that it meets privacy requirements?
Sending PII, payment data, or dealership-sensitive information to a third-party AI API without review is a compliance risk.
Critical
The Quick Reference Cheat Sheet
Bookmark this. Print it. Paste it in your Notion. Everything you need in one scannable view.
Prompt Do's and Don'ts
🎭
Setting the Role
✓ Start with "You are a [specific expert]..."
✓ Include domain and experience level
✗ Say "Act as an AI that helps with..."
✗ Skip the role entirely for complex tasks
📐
Format Control
✓ Specify exactly: "3 bullets, each max 20 words"
✓ Use word/character limits for short-form copy
✗ Say "keep it brief" without a number
✗ Ask for a "table" without specifying columns
✍️
Tone & Voice
✓ Show an example: "Like this: [paste example]"
✓ Define the audience reading level
✗ Say "professional but not too formal"
✗ Use vague descriptors: friendly, engaging, etc.
🔢
Task Clarity
✓ One main task per prompt
✓ Use action verbs: write / analyze / compare / list
✗ Bundle 5 tasks in one prompt
✗ Use passive phrasing: "I need help with..."
🧠
Complex Reasoning
✓ Add "Think step-by-step before answering"
✓ Ask for pros/cons before a recommendation
✗ Expect perfect logic without guiding the reasoning
✗ Accept the first answer for high-stakes decisions
🔁
Iteration
✓ Tell the AI what was wrong: "Too formal — try again"
✓ Ask for 3 variations then pick and refine
✗ Accept mediocre output — push back with specifics
✗ Start a new chat — continue refining in the same thread
Key Phrases That Improve Output
For Better Structure
"Present as a numbered list with headings"
"Use a table with columns: [col1], [col2], [col3]"
"Start each point with a bold label"
"Separate your answer into: [section A] and [section B]"
For Better Quality
"Think step-by-step before giving your final answer"
"Give me your 3 best options, not just one"
"Flag any assumptions you're making"
"If you're uncertain, say so — don't guess"
For Better Tone
"Write at a 7th grade reading level"
"Avoid jargon. If you use a technical term, define it inline."
"Use active voice throughout"
"Write as if speaking to a smart friend, not a client"
For Iteration
"The second option is closest. Make it more [X]."
"Keep everything the same but change the tone to [Y]"
"You went too [long/formal/vague]. Rewrite to fix just that."
"Give me 3 more variations, each trying a different angle"
Prompts by Situation
Situation
Starter Phrase
Stuck on copy
"Give me 5 ways to say [X]. Each with a different emotional angle."
Need to simplify
"Explain [X] in plain English for someone who is not technical."
Want outside perspective
"Play the role of a skeptical user and tell me what's confusing about [X]."
Prepping for a meeting
"I'm about to present [X]. What are the 5 hardest questions I'll get?"
Writing a brief
"Here's my rough idea: [X]. Turn it into a structured one-page brief."
Learning a concept
"Explain [technical concept] using an analogy a designer would understand."
Exploring options
"Give me 3 fundamentally different approaches to [problem], not just variations."
+ Save as Template
Give your built prompt a name and choose a category so the team can find and reuse it.