Five techniques every senior leader should know — with examples drawn from the real decisions executives face every day.
When you're a senior leader, you don't have time to become a prompt engineering specialist. But you do need to know enough to get consistently excellent output from AI — the kind that's board-ready, not just adequate.
The difference between a mediocre AI response and a genuinely useful one is rarely the tool. It's the quality of the prompt. Structure, context, and specificity consistently outperform length. This guide covers five core prompting techniques — and a sixth, more advanced approach — with examples built for executive-level work.
A zero-shot prompt asks the AI to perform a task with no examples or prior context — relying on the model's training to interpret your request and respond. It's fast, direct, and works well when your ask is clear.
The quality of a zero-shot prompt depends almost entirely on specificity. Vague instructions produce vague output. The more precisely you describe the task, the output you want, and the tone it should take, the better the response — without any additional scaffolding.
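To make the difference concrete, here is a minimal sketch in Python contrasting a vague zero-shot prompt with a specific one. The report and the constraints are invented for illustration; only the principle — specificity over scaffolding — is the point.

```python
# A vague zero-shot prompt versus a specific one. Neither includes an
# example of the desired output -- that is what makes them zero-shot.

vague = "Summarise this report."

specific = (
    "Summarise the attached quarterly operations report for the board. "
    "Produce three bullet points covering performance, risk, and outlook, "
    "each no longer than 25 words, in a neutral, factual tone."
)

# The second prompt names the task, the audience, the format, and the
# tone -- everything the model needs to hit the standard first time.
```

The same request, stated with the audience and format spelled out, typically needs no follow-up editing.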
A few-shot prompt gives the model one or more examples of the output you want before asking it to perform the actual task. You're showing it your standard — the structure, tone, and format you expect — so it can replicate it consistently.
Few-shot prompts are particularly powerful when you're processing large volumes of input — investor feedback, employee survey responses, customer data — that need to be structured consistently. The model learns your format from the examples and applies it reliably at scale.
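As a sketch of the pattern, the snippet below assembles a few-shot prompt in Python for classifying customer feedback. The feedback items and the label format are invented for illustration; the structure — examples first, then the new input — is what matters.

```python
# Two labelled examples teach the model the output format; the final
# item is the one we actually want classified.

examples = [
    ("The new dashboard saves me an hour a week.",
     "Positive | Product | Efficiency"),
    ("Support took four days to respond to a P1 issue.",
     "Negative | Support | Responsiveness"),
]

new_item = "Onboarding was smooth, but the pricing tiers are confusing."

prompt = "Classify each piece of customer feedback as Sentiment | Area | Theme.\n\n"
for text, label in examples:
    prompt += f'Feedback: "{text}"\nClassification: {label}\n\n'

# End with the new input and an open label, so the model completes it
# in exactly the format the examples established.
prompt += f'Feedback: "{new_item}"\nClassification:'
```

Because the examples carry the format, the same template scales to hundreds of items without restating the rules each time.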
Chain-of-thought prompting guides the model to reason through a problem step by step before arriving at a conclusion. Rather than asking for an answer, you ask for the thinking. This produces more considered, defensible output — and often surfaces angles you hadn't considered.
Phrases like “think step by step”, “reason through this carefully”, or “break this down systematically” consistently improve the quality of the output. This is what separates a considered strategic response from a surface-level one — and it makes the model's reasoning transparent enough to challenge.
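A minimal sketch of the technique: the warehouse question below is invented, but the structure — asking for the reasoning steps before the recommendation — is the chain-of-thought pattern itself.

```python
# Chain-of-thought: ask for the reasoning before the answer, and name
# the steps you want the reasoning to cover.

question = ("Should we consolidate our three regional warehouses "
            "into one national hub?")

cot_prompt = (
    f"{question}\n\n"
    "Reason through this step by step before giving a recommendation:\n"
    "1. List the key cost drivers on each side.\n"
    "2. Identify the operational risks of consolidation.\n"
    "3. State the assumptions your analysis depends on.\n"
    "4. Only then give a recommendation, with your confidence level."
)
```

Numbering the steps you want covered gives you a reasoning trail you can interrogate line by line, rather than a conclusion you have to take on trust.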
A context-aware prompt provides relevant background information before making the request. The richer the context, the more targeted and useful the output — and the less editing you'll need to do afterwards. The model can only work with what you give it.
The gap between a generic AI response and a genuinely useful one is almost always context. Don't assume the model knows your industry, your people, your culture, or the stakes. Tell it — and you'll notice an immediate step change in the quality of the output.
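Here is a minimal sketch of a context-aware prompt in Python. The company profile and attrition figures are entirely invented; the point is the shape — background first, then the task.

```python
# Context first, task second. The model can only reason with what
# it has been told.

context = (
    "Company: 400-person B2B SaaS firm, UK-based, post-Series C.\n"
    "Situation: engineering attrition rose from 8% to 19% in twelve months.\n"
    "Constraint: salary budget is fixed for the next two quarters."
)

request = "Draft three retention interventions we could launch within 30 days."

prompt = f"Background:\n{context}\n\nTask: {request}"
```

Without the budget constraint in the background, the obvious answer is "pay people more" — context is what rules out advice you cannot use.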
A role-based prompt assigns the model a specific persona or professional expertise before you ask your question. This shapes how it frames its reasoning, the vocabulary it uses, and the assumptions it brings. It's one of the most powerful techniques for stress-testing your own thinking.
You can stack multiple perspectives in sequence. After asking the model to review a business case as your CFO, ask it to evaluate the same case from the perspective of your largest institutional shareholder, or your most sceptical board member. Different roles surface different risks — often the ones you most need to hear.
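The stacking pattern can be sketched in a few lines of Python. The acquisition case and the role descriptions are invented for illustration; each role produces a separately framed review of the same material.

```python
# One business case, reviewed through three different personas.
# Each role reframes the vocabulary, assumptions, and risks surfaced.

roles = [
    "a sceptical CFO focused on cash flow and payback period",
    "our largest institutional shareholder, focused on long-term value",
    "our most sceptical board member, focused on execution risk",
]

case = "We propose acquiring a smaller competitor to accelerate market entry."

prompts = [
    f"You are {role}. Review the following business case and list the "
    f"three strongest objections you would raise.\n\nCase: {case}"
    for role in roles
]
```

Running the prompts in sequence, and comparing the objections each persona raises, is a fast way to pressure-test a case before it reaches the boardroom.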
Metaprompting is the practice of asking the AI to help you write or improve a prompt before you use it. Rather than spending time crafting the perfect prompt from scratch, you describe what you're trying to achieve and ask the model to structure the prompt for you.
Once you have a draft, you can iterate: “Make this more specific. Add a constraint that the response should be no longer than one page.” This iterative loop — prompt, response, refinement — mirrors the way good strategic thinking works. You rarely arrive at the best answer on the first attempt.
Metaprompting is particularly valuable when you're working on something sensitive or high-stakes. The model can often surface angles or framing considerations you hadn't thought of — and flag ambiguities in your own brief before you commit to a direction.
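As a sketch, the snippet below assembles a metaprompt in Python. The restructure scenario is invented; the structure — describe the goal, ask the model to write the prompt, then refine — is the technique.

```python
# Metaprompting: describe the goal, then ask the model to write the
# prompt for you -- including the elements a strong prompt needs.

goal = (
    "I need to announce a restructure to 1,200 employees. The message must "
    "be honest about role reductions, explain the rationale, and preserve trust."
)

meta_prompt = (
    "Act as a prompt-writing assistant. Write the best possible prompt I "
    f"could give an AI model to achieve the following goal:\n\n{goal}\n\n"
    "Include the role the model should adopt, the context it needs, the "
    "output format, and any constraints. Then list two clarifying questions "
    "whose answers would make the prompt stronger."
)

# A follow-up refinement, sent after reviewing the drafted prompt.
refinement = "Make this more specific. Constrain the response to one page."
```

The clarifying-questions instruction is where metaprompting earns its keep on sensitive work: the model flags the ambiguities in your brief before you commit to a direction.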
Every prompt in Marcus embeds role, context, and structured output by design. Browse by your function or role — or use any prompt as a starting point in the Prompt Builder.