Prompt Guide

Prompts that perform at the highest level.

Five techniques every senior leader should know — with examples drawn from the real decisions executives face every day.


When you're a senior leader, you don't have time to become a prompt engineering specialist. But you do need to know enough to get consistently excellent output from AI — the kind that's board-ready, not just adequate.

The difference between a mediocre AI response and a genuinely useful one is rarely the tool. It's the quality of the prompt. Structure, context, and specificity consistently outperform length. This guide covers five core prompting techniques — and a sixth, more advanced approach — with examples built for executive-level work.

In this guide
  1. Zero-Shot — the fast executive query
  2. Few-Shot — teaching the model your standard
  3. Chain-of-Thought — structured strategic thinking
  4. Context-Aware — giving the model what it needs
  5. Role-Based — perspective as a strategic tool
  6. Metaprompting — using AI to write better prompts

01
Zero-Shot

The fast executive query

A zero-shot prompt asks the AI to perform a task with no examples or prior context — relying on the model's training to interpret your request and respond. It's fast, direct, and works well when your ask is clear.

Best for: Quick drafts, first-pass analysis, briefing summaries, email composition.
Drafting a board update
Draft a concise board update on our Q3 performance. Key facts: revenue £42m (4% below plan), operating margin 18% (on plan), headcount 312 (flat). Highlight the revenue shortfall and our response plan. Tone: direct, no spin.

The quality of a zero-shot prompt depends almost entirely on specificity. Vague instructions produce vague output. The more precisely you describe the task, the result you want, and the tone, the better the response — without any additional scaffolding.


02
Few-Shot

Teaching the model your standard

A few-shot prompt gives the model one or more examples of the output you want before asking it to perform the actual task. You're showing it your standard — the structure, tone, and format you expect — so it can replicate it consistently.

Best for: Any output that needs a consistent format or style, such as executive summaries, risk registers, board papers, and investor Q&A.
Structuring investor feedback
Analyse investor feedback and categorise it. Use these labels: Strategy, Execution, Management, Valuation, Governance. Rate sentiment (Positive / Neutral / Negative) and urgency (High / Medium / Low).

Example:
Input: "We're not convinced the Asia expansion will generate sufficient return on capital within your stated timeline."
Category: Strategy | Sentiment: Negative | Urgency: High

Now analyse this feedback: [paste your feedback here]

Few-shot prompts are particularly powerful when you're processing large volumes of input — investor feedback, employee survey responses, customer data — that need to be structured consistently. The model learns your format from the examples and applies it reliably at scale.


03
Chain-of-Thought

Structured strategic thinking

Chain-of-thought prompting guides the model to reason through a problem step by step before arriving at a conclusion. Rather than asking for an answer, you ask for the thinking. This produces more considered, defensible output — and often surfaces angles you hadn't considered.

Best for: Complex decisions, strategy evaluation, M&A analysis, capital allocation — anywhere a snap answer would be insufficient.
Market entry decision
Think through this step by step.

Step 1: Assess the size and growth trajectory of the UK professional services market over the next five years.
Step 2: Evaluate the competitive landscape — key players, their positioning, and any gaps.
Step 3: Identify the capital requirements and key risks for a new entrant.
Step 4: Based on the above, give me your recommended decision on whether to enter this market, with your reasoning.

Phrases like "think step by step", "reason through this carefully", or "break this down systematically" consistently improve the quality of the output. This is what separates a considered strategic response from a surface-level one — and it makes the model's reasoning transparent enough to challenge.


04
Context-Aware

Giving the model what it needs to know

A context-aware prompt provides relevant background information before making the request. The richer the context, the more targeted and useful the output — and the less editing you'll need to do afterwards. The model can only work with what you give it.

Best for: Any situation where the model needs to understand your organisation, stakeholders, or constraints to give a relevant response rather than a generic one.
Communicating a restructure
Context: I am the CEO of a 400-person professional services firm. We are announcing a restructure that will reduce headcount by 12% due to the integration of AI into core delivery functions. The announcement will be made to all staff on Friday.

Task: Draft an all-staff communication that is honest and direct, acknowledges the difficulty of this moment, explains the strategic rationale clearly, and maintains trust in leadership. Avoid corporate clichés.

Tone: human, clear, and confident.

The gap between a generic AI response and a genuinely useful one is almost always context. Don't assume the model knows your industry, your people, your culture, or the stakes. Tell it — and you'll notice an immediate step change in the quality of the output.


05
Role-Based

Perspective as a strategic tool

A role-based prompt assigns the model a specific persona or professional expertise before you ask your question. This shapes how it frames its reasoning, the vocabulary it uses, and the assumptions it brings. It's one of the most powerful techniques for stress-testing your own thinking.

Best for: When you need a specific professional lens — a CFO scrutinising a business case, a board chair reviewing governance, an activist investor challenging your strategy.
Stress-testing a business case
You are a seasoned CFO at a FTSE 250 company with a reputation for rigorous capital discipline. I am about to present a business case for a £15m investment in a new technology platform. Your job is to challenge it robustly: identify weaknesses in the financial assumptions, ask the questions an investment committee would ask, and tell me what would need to be true for you to approve it.

Here is the business case: [paste here]

You can stack multiple perspectives in sequence. After the CFO review, ask the model to evaluate the same case from the perspective of your largest institutional shareholder, or your most sceptical board member. Different roles surface different risks — often the ones you most need to hear.


06
Metaprompting

Using AI to write better prompts

Metaprompting is the practice of asking the AI to help you write or improve a prompt before you use it. Rather than spending time crafting the perfect prompt from scratch, you describe what you're trying to achieve and ask the model to structure the prompt for you.

Preparing for a difficult board conversation
I need to prepare for a difficult conversation with my board about missing our annual revenue target. What context would make a prompt more effective for this? Draft a prompt I can use.

Once you have a draft, you can iterate: “Make this more specific. Add a constraint that the response should be no longer than one page.” This iterative loop — prompt, response, refinement — mirrors the way good strategic thinking works. You rarely arrive at the best answer on the first attempt.

Metaprompting is particularly valuable when you're working on something sensitive or high-stakes. The model can often surface angles or framing considerations you hadn't thought of — and flag ambiguities in your own brief before you commit to a direction.


The Marcus library is already built this way.

Every prompt in Marcus embeds role, context, and structured output by design. Browse by your function or role — or use any prompt as a starting point in the Prompt Builder.

Explore the Library → | Get Access