Most people use Claude like a search box. The users getting real leverage treat it like infrastructure: persistent context, structured prompts, and a handful of signals that reliably change its behavior.
This is a condensed tour of the techniques worth knowing. The full reference is a PDF you can grab at the bottom.
Prompt codes that actually work
A small set of shorthand codes reliably changes how Claude responds. They aren't backdoors; they work because Claude was trained on conventions, and these codes signal clear intent.
- L99 - forces committed expert opinions instead of "it depends" hedging.
- /ghost - strips em-dashes, closing summaries, and other AI tells.
- ULTRATHINK - maximum reasoning depth; in Claude Code it triggers extended thinking.
- PERSONA: [specific] - "Senior Postgres DBA at Stripe, 15 years, cynical about ORMs." Specificity earns real opinions.
- /skeptic - challenges your question before answering it.
- /noyap and /punch - brevity and sharpened prose.
Stacking helps. /ghost + /punch for outreach. PERSONA + L99 for technical decisions. /skeptic + ULTRATHINK for strategy work.
What does not work: /nofilter, /unlocked, L33T, DAN-mode prompts. Those are ChatGPT jailbreak patterns with zero effect on Claude.
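Mechanically, stacking is nothing more than putting the codes ahead of the actual request so the intent signals land first. A minimal sketch (the helper name is illustrative, not part of any Claude tooling):

```python
def stack_codes(prompt: str, *codes: str) -> str:
    """Prepend shorthand codes so the intent signals come before the task."""
    return " ".join(codes) + "\n\n" + prompt

# /ghost + /punch for an outreach draft, per the pairings above
outreach = stack_codes(
    "Draft a cold email to a CTO about our audit tool.",
    "/ghost", "/punch",
)
```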
XML is Claude's native language
Claude was trained on XML-structured prompts. Its own system prompt uses XML tags internally. This makes structured tags the single highest-impact formatting technique you can adopt.
Tags worth using: <instructions>, <context>, <examples> (wrapping individual <example> tags), <formatting>, <document> with <source>, and <thinking> / <answer> to separate reasoning from output.
Two rules pay off disproportionately:
- Put documents at the top, queries at the bottom. Anthropic's own testing shows up to 30% quality improvement for inputs over 20K tokens.
- Claude mirrors your register. Conversational input produces verbose output. Direct input produces direct output. Strip the markdown and pleasantries from your own prompts if you want the same back.
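Putting the tag list and the documents-first rule together, a long-context prompt can be assembled like this. The builder function is a sketch of the layout, not an Anthropic API:

```python
def build_prompt(documents: list[tuple[str, str]], query: str) -> str:
    """Assemble an XML-structured prompt: tagged documents first, query last."""
    parts = ["<documents>"]
    for source, text in documents:
        parts.append(f"<document><source>{source}</source>\n{text}\n</document>")
    parts.append("</documents>")
    # The query goes at the bottom, after all long-form input
    parts.append(f"<instructions>\n{query}\n</instructions>")
    return "\n".join(parts)

prompt = build_prompt(
    [("q3_report.txt", "Revenue grew 12% quarter over quarter...")],
    "Summarize the key risks in three bullets.",
)
```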
Thinking and reasoning
Chain-of-thought scales from "think step by step" up to structured <thinking> / <answer> tags. But the default on Claude 4.6 is adaptive thinking: the model calibrates depth automatically. Manual budgets are no longer the recommended pattern.
A few things worth knowing:
- Extended thinking can hurt performance by up to 36% on simple or intuitive tasks. Reserve it for math, logic, architecture, and debugging.
- Haiku 4.5 with thinking enabled gets you near-Sonnet performance at a lower price point.
- In Claude Code, the triggers are "think," "think harder," and "ultrathink"; each step unlocks more reasoning budget.
- Few-shot examples with 3-5 diverse samples hit ~90% format success, versus instructions alone.
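The few-shot point can be sketched as wrapping 3-5 diverse (input, output) pairs in the <examples> convention from earlier. The function and its guard rail are illustrative:

```python
def few_shot_block(samples: list[tuple[str, str]]) -> str:
    """Wrap (input, output) pairs in <examples>/<example> tags."""
    if not 3 <= len(samples) <= 5:
        raise ValueError("aim for 3-5 diverse samples")
    inner = "\n".join(
        f"<example>\n<input>{inp}</input>\n<output>{out}</output>\n</example>"
        for inp, out in samples
    )
    return f"<examples>\n{inner}\n</examples>"
```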
Claude Code power commands
If you live in the terminal, a few commands are non-negotiable:
- /init - auto-generates a CLAUDE.md from your codebase.
- /compact - summarizes conversation to free context. Supports a focus: /compact Focus on API changes.
- /clear - fresh context beats a degraded session. Use it between tasks.
- /fork โ branch a session to try an alternative without losing the original.
- Shift+Tab โ cycle between Normal, Auto-Accept, and Plan modes.
For CLAUDE.md, the rule is point, don't dump. Keep it under 500 lines, with 20-30 lines of essentials at the top. Reference where detail lives rather than pasting everything in; Claude only follows ~150-200 instructions reliably, and its system prompt already uses ~50 of those.
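A hypothetical pointer-style CLAUDE.md under that rule might look like this. Every path and project detail here is invented for illustration:

```markdown
# Project: billing-service

## Essentials (read first)
- Python 3.12, FastAPI, Postgres 16. Run tests with `make test`.
- Never edit files under `migrations/` by hand.

## Pointers, not dumps
- API conventions: see docs/api-style.md
- Architecture decisions: see docs/adr/
- Schema details live in db/schema.sql - read it, don't guess.
```

The essentials fit in a glance; everything else is a pointer the model can follow when it actually needs the detail.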
Context engineering: the real unlock
The biggest gap between casual and power users isn't single-prompt tricks. It's building persistent systems that load automatically before every interaction.
The reference guide recommends five files:
- Identity - your role, company, priorities, decisions already made.
- Voice profile - beliefs, contrarian takes, sentence rhythm, things you find cringe.
- Anti-AI writing - banned words and banned patterns (long intros, closing summaries, rule-of-three chains).
- Taste - examples of output you love and output you hate.
- Project context - architecture decisions, current state, what's been tried and failed.
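One way to make the five files "load automatically" is to concatenate them into a system prompt at session start. The file names follow the list above; the loader itself is a sketch, not part of any Claude tooling:

```python
from pathlib import Path

# One file per category from the five-file system above
CONTEXT_FILES = ["identity.md", "voice.md", "anti_ai.md", "taste.md", "project.md"]

def load_context(directory: str = "context") -> str:
    """Concatenate the persistent context files, tagging each by name."""
    parts = []
    for name in CONTEXT_FILES:
        path = Path(directory) / name
        if path.exists():
            parts.append(f'<context name="{name}">\n{path.read_text()}\n</context>')
    return "\n\n".join(parts)
```

The result can be passed as the system prompt on every call, so each conversation starts with the same persistent context.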
Two high-leverage habits pair well with these:
- Socratic prompting. Before a hard task, ask Claude what questions it needs answered to do the task well. It surfaces assumptions you didn't know you were making.
- Pattern detection. Periodically ask, "What patterns do you see in how I've corrected your outputs?" Save the insights. After every correction, tell Claude to update its CLAUDE.md so the same mistake doesn't repeat.
The users getting the most from Claude aren't writing better single prompts. They're building better systems around every interaction. Treat Claude as infrastructure, not a chat box.
Download the full reference
The PDF version has everything: all 15 verified prompt codes, the full XML tag list, API cost optimization tactics, the anti-AI word ban list, and the complete context engineering system.
Download the PDF: free, no signup, MCPpedia-watermarked. Share it with anyone who could use it.
This article was written by AI, powered by Claude and real-time MCPpedia data. All facts and figures are sourced from our database, but AI can make mistakes. If something looks off, let us know.