✦ Radian CCA-F · Fully Self-Contained

CCA Foundations
Exam Portal

Everything you need to pass the Claude Certified Architect — Foundations exam — fully integrated in a single file. Reference guides, anti-patterns, FAQ, four quiz formats, flashcards, decision trees, and a study plan.

720 — Pass Score
4/6 — Scenarios
27% — D1 Weight
5 — Domains
0 pts — Wrong Penalty
Domain Weightings
D1 · Agentic Architecture & Orchestration — 27%
D3 · Claude Code Config & Workflows — 20%
D4 · Prompt Engineering & Structured Output — 20%
D2 · Tool Design & MCP Integration — 18%
D5 · Context Management & Reliability — 15%
All Study Tools
📋
Reference
Quick Reference Guide
All 5 domains, task statements, scenario map, tool selection guide, error categories, and decision matrices — everything at a glance before exam day.
5 domains · 6 scenarios · All key tables
Open Guide
⚠️
Reference
Anti-Patterns Cheat Sheet
27 wrong approaches with their correct alternatives. Organized by domain with severity ratings. The patterns the exam is specifically designed to test.
27 patterns · Critical / High / Medium severity
Open Sheet
💬
Reference
FAQ
40 questions covering every tricky concept, confusable term pair, and "wait, which one is it?" moment. Searchable and filterable by topic.
40 questions · 7 categories · Searchable
Open FAQ
🎯
Practice
Practice Quiz
25 shuffled questions across all 5 domains and all 3 question types. Live scoring, instant explanations, and per-domain breakdown at the end.
25 questions · All 3 types · Live scoring
Start Quiz
📋
Practice
Complete Scenario Quiz
54 questions across all 6 scenarios. Filter by scenario or mode. Single-answer, multi-select, and scenario-based questions with scoring.
54 questions · 6 scenarios · All question types
Start Quiz
🔬
Practice
6-Scenario Deep Drill
60 questions — 10 per scenario. Easy / Medium / Hard. Filter by scenario or question type. The most comprehensive scenario practice available.
60 questions · 10 per scenario · 3 difficulty levels
Start Drill
🪤
Practice
Common Mistakes Trap Drill
50 trap questions — 10 per mistake pattern. Every question is engineered to lure you into the wrong answer, then explains exactly why.
50 trap questions · 5 mistake patterns · Mastery tracking
Start Traps
🃏
Practice
Flashcards
40 cards covering every high-frequency concept. Filter by domain. Rate yourself Again / Hard / Good / Easy to track mastery across sessions.
40 cards · 5 domains · Spaced repetition rating
Start Cards
🌿
Reference
Decision Trees & Cheat Sheets
Step-by-step decision trees for the hardest judgment calls: enforcement type, escalation, Batch vs real-time API, plan mode vs direct execution.
4 decision trees · API cheat sheet · Config reference · Exam traps
Open Trees
📅
Strategy
Study Plan
Structured 2-week preparation plan with checkable tasks. Tracks your progress across all study tools from this portal.
2-week plan · Checkable tasks · Progress tracking
Open Plan
Strategy
Dos & Don'ts
Exam strategy and real-world Claude usage dos and don'ts. 5 tabs: Exam Strategy, Agentic Systems, Tools & MCP, Prompting, Context & Reliability.
5 topic tabs · Exam strategy + real-world usage
Open Guide
🎯
Practice Quiz
25 questions · All domains · All 3 question types
CCA Foundations · Practice Quiz
25 questions · All 5 domains
Score by Domain
Answer Review
All questions with correct answers and full explanations
Claude Certified Architect · Foundations

6-Scenario Deep Drill

10 questions per scenario — all 3 question types — with full explanations. Choose which scenarios to practice, pick your difficulty, and go.

SELECT SCENARIOS
Score by Scenario
Answer Review
These questions are designed to trick you — each one tests a common exam failure pattern

COMMON MISTAKES DRILL

5 mistake patterns. 10 trap questions each.
Every question is engineered to lure you into the wrong answer.
Master these and the exam's hardest distractors won't fool you.

Select mistake patterns to drill
Score by Mistake Pattern
Mastery Assessment
ANSWER REVIEW
🃏
Flashcards
40 cards · Filter by domain · Rate your mastery
Exam Prep · Tool 4 of 5

CCA Flashcards

Click card to reveal answer
🌿
Decision Trees & Cheat Sheets
4 decision trees · API reference · Config guide · Exam traps
Exam Prep · Tool 5 of 5

Decision Trees & Cheat Sheets

Key judgment calls, comparisons, and scenario maps for rapid recall

🌿 Decision Trees
⚡ API Cheat Sheet
⚙️ Config Reference
🗺️ Scenario Map
⚠️ Exam Traps

Decision Trees

Use these to work through the most common judgment calls on the exam

Tree 1 — Enforcement: Programmatic or Prompt?
Does this rule have financial, security, or legal consequences if violated?
YES →
Use programmatic enforcement (hook / prerequisite gate)
NO ↓
Does the rule need to apply 100% of the time (zero acceptable failure rate)?
YES →
Use programmatic enforcement
NO ↓
Is the rule complex enough that schema/hooks would be over-engineered?
YES →
Prompt instruction is acceptable — monitor for failures
NO →
Prefer programmatic — more reliable at low complexity cost
Tree 2 — Escalation Decision
Did the customer explicitly say "I want a human" or equivalent?
YES →
Escalate immediately. No resolution attempt. Period.
NO ↓
Is the policy ambiguous, silent, or undefined for this request?
YES →
Escalate. Don't let the agent improvise on policy gaps.
NO ↓
Is the agent unable to make meaningful progress after genuine attempts?
YES →
Escalate with structured handoff summary.
NO →
Resolve autonomously. Acknowledge frustration if present.
Tree 3 — Batch API vs Real-Time API
Is any person or process waiting (blocked) for this result?
YES →
Real-Time API only. Batch has no latency SLA (up to 24h).
NO ↓
Does the workflow require multi-turn tool calling within a single request?
YES →
Real-Time API. Batch does not support multi-turn tool calls.
NO ↓
Is cost reduction a priority and is 24h completion acceptable?
YES →
Batch API. 50% cost savings. Use custom_id for correlation.
NO →
Real-Time API. Batch only when latency tolerance is confirmed.
Tree 4 — Plan Mode vs Direct Execution
Does the task involve multiple files AND architectural decisions?
YES →
Plan Mode. Explore before committing to prevent costly rework.
NO ↓
Are there multiple valid approaches with different tradeoffs?
YES →
Plan Mode. Evaluate options before writing code.
NO ↓
Is this a single-file, well-understood, clear-scope change?
YES →
Direct Execution. No need for exploration overhead.
UNSURE →
Default to Plan Mode. The cost of exploration is low; rework is high.

API Cheat Sheet

Every API concept, value, and flag you need to know

stop_reason Values
tool_use — Execute tool → continue loop
end_turn — Model finished → terminate
max_tokens — Token limit hit → handle
stop_sequence — Custom stop string hit
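To make the control flow concrete, here is a minimal sketch of an agentic loop keyed on stop_reason (Python, Anthropic SDK); the model name and the run_tool dispatcher are placeholders, not exam content.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def run_agent(messages, tools, run_tool):
    """Minimal agentic loop; run_tool is a placeholder dispatcher (name, input) -> str."""
    while True:
        response = client.messages.create(
            model="claude-sonnet-4-5",   # placeholder model name
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        if response.stop_reason == "tool_use":
            # Execute every requested tool, append the results, continue the loop.
            messages.append({"role": "assistant", "content": response.content})
            results = [
                {"type": "tool_result", "tool_use_id": block.id,
                 "content": run_tool(block.name, block.input)}
                for block in response.content if block.type == "tool_use"
            ]
            messages.append({"role": "user", "content": results})
        elif response.stop_reason == "max_tokens":
            raise RuntimeError("Token limit hit; split the task or raise max_tokens")
        else:
            # end_turn or stop_sequence: the model is finished.
            return response
```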
tool_choice Options
"auto" — May return text (not guaranteed tool call)
"any" — Must call some tool
{type:"tool",name:"X"} — Must call tool X specifically
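A short sketch of how the three forms are passed to the Messages API; the extract_invoice tool and its schema are hypothetical.

```python
import anthropic

client = anthropic.Anthropic()

# The three tool_choice forms; extract_invoice is a hypothetical tool.
choice_auto   = {"type": "auto"}                             # may answer in plain text
choice_any    = {"type": "any"}                              # must call some tool
choice_forced = {"type": "tool", "name": "extract_invoice"}  # must call this exact tool

response = client.messages.create(
    model="claude-sonnet-4-5",   # placeholder model name
    max_tokens=1024,
    tools=[{
        "name": "extract_invoice",
        "description": "Extract structured invoice fields from raw text.",
        "input_schema": {
            "type": "object",
            "properties": {"vendor": {"type": "string"}, "total": {"type": "number"}},
            "required": ["vendor", "total"],
        },
    }],
    tool_choice=choice_any,   # guarantees a tool call, i.e. structured output
    messages=[{"role": "user", "content": "Invoice text: ACME Corp, total 1,240.00"}],
)
```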
Message Batches API — Key Facts
Property · Value · Implication
Cost · 50% savings · Half the price of the real-time API
Latency SLA · None · Up to 24h; never use for blocking workflows
Multi-turn tool calling · Not supported · Cannot execute tools mid-request and return results
Request correlation · custom_id · Required to match responses to requests
Failure handling · By custom_id · Resubmit only failed items, not the whole batch
Best for · Overnight / weekly jobs · Reports, audits, nightly test generation
Never for · Pre-merge CI checks · Developers are blocked waiting for results
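A sketch of an overnight batch submission using custom_id for correlation; the IDs, prompt, and model name are illustrative.

```python
import anthropic

client = anthropic.Anthropic()

# Each request carries a custom_id so results can be matched back and only
# failed items resubmitted. No latency SLA: results may take up to 24h.
batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": f"invoice-{doc_id}",   # correlation key (illustrative scheme)
            "params": {
                "model": "claude-sonnet-4-5",   # placeholder model name
                "max_tokens": 1024,
                "messages": [{
                    "role": "user",
                    "content": f"Summarize invoice {doc_id} for the weekly audit.",
                }],
            },
        }
        for doc_id in ["1001", "1002", "1003"]  # illustrative document IDs
    ]
)
print(batch.id, batch.processing_status)
```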
JSON Schema Design Patterns
Pattern · When to Use · Example
nullable / optional field · Information may not exist in source · discount: null | number
enum + "other" + detail · Fixed categories but extensible · type: "refund"|"other", type_detail: string
calculated vs stated · Semantic validation (math checks) · calculated_total, stated_total, discrepancy_flag
confidence scores · Human review routing · field_confidence: 0.0–1.0
claim-source mapping · Provenance in multi-source synthesis · claim, source_url, excerpt, date
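A sketch of an extraction tool's input_schema combining several of these patterns; the field names follow the examples above, everything else is illustrative.

```python
# Illustrative input_schema for an extraction tool, combining the patterns above.
invoice_schema = {
    "type": "object",
    "properties": {
        # nullable/optional: a discount may simply not exist in the source document
        "discount": {"type": ["number", "null"]},
        # enum + "other" + detail: fixed categories, but extensible
        "type": {"type": "string", "enum": ["refund", "invoice", "credit_note", "other"]},
        "type_detail": {"type": ["string", "null"]},
        # calculated vs stated: lets a later validation pass catch semantic errors
        "stated_total": {"type": "number"},
        "calculated_total": {"type": "number"},
        "discrepancy_flag": {"type": "boolean"},
        # confidence score used to route low-confidence extractions to human review
        "field_confidence": {"type": "number", "minimum": 0.0, "maximum": 1.0},
    },
    "required": ["type", "stated_total", "calculated_total", "discrepancy_flag"],
}
```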
MCP Error Response Structure
Field · Type · Values / Notes
isError · boolean · true for all error conditions
errorCategory · string enum · transient | validation | business | permission
isRetryable · boolean · true only for transient; false for all others
message · string · Human-readable description for agent/user
valid empty result · Return isError: false, data: [] — NOT an error
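A sketch of a tool handler that returns this structure; the field names follow the table above (the exam's convention), and the order-lookup details are hypothetical.

```python
def search_orders(customer_id: str, orders_db) -> dict:
    """Hypothetical MCP tool handler returning the structured error shape above."""
    try:
        rows = orders_db.find(customer_id=customer_id)   # placeholder data access
    except TimeoutError as exc:
        return {
            "isError": True,
            "errorCategory": "transient",    # the only retryable category
            "isRetryable": True,
            "message": f"Order service timed out: {exc}",
        }
    except PermissionError as exc:
        return {
            "isError": True,
            "errorCategory": "permission",
            "isRetryable": False,
            "message": f"Not authorized to read orders for {customer_id}: {exc}",
        }
    # A successful query with no matches is NOT an error.
    return {"isError": False, "data": rows}
```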

Configuration Reference

Every config file, directory, and flag — location, scope, and purpose

CLAUDE.md Hierarchy
Location · Scope · Shared via VCS? · Use For
~/.claude/CLAUDE.md · User-level · No · Personal preferences, personal tools
.claude/CLAUDE.md or root CLAUDE.md · Project-level · Yes · Team standards, project conventions
Subdirectory CLAUDE.md · Directory-level · Yes · Module-specific rules
.claude/rules/*.md with glob frontmatter · File-pattern · Yes · Conventions for scattered files (e.g., all tests)
Slash Commands & Skills
Location · Scope · Use For
.claude/commands/ · Project-shared · Team-wide slash commands (auto-available on clone)
~/.claude/commands/ · Personal · Personal slash commands not shared
.claude/skills/name/SKILL.md · Project-shared · On-demand skills with frontmatter config
~/.claude/skills/ · Personal · Personal skill variants (won't affect teammates)
SKILL.md Frontmatter Options
Option · Purpose · When to Use
context: fork · Run in isolated sub-agent context · Verbose skills (codebase analysis, brainstorming) that would pollute the main session
allowed-tools: [...] · Restrict tool access during the skill · Prevent destructive actions (e.g., limit to file writes only)
argument-hint: "..." · Prompt for required params on invocation · Skills that require a target file, component name, etc.
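A sketch of a skill file using these options. The skill name, tool list, and body are hypothetical, and writing it via Python here is only to keep all examples in one language; check the Claude Code docs for the exact frontmatter syntax.

```python
from pathlib import Path
from textwrap import dedent

# Hypothetical skill; the frontmatter keys follow the options listed above.
skill_md = dedent("""\
    ---
    context: fork
    allowed-tools: [Read, Grep, Write]
    argument-hint: "name of the component to analyze"
    ---
    Analyze the named component: map its public interface, list its callers,
    and write a short summary to docs/analysis/.
    """)
skill_dir = Path(".claude/skills/component-analysis")
skill_dir.mkdir(parents=True, exist_ok=True)
(skill_dir / "SKILL.md").write_text(skill_md)
```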
Claude Code CLI Flags & Session Commands
Flag / Command · Purpose · Required For
-p / --print · Non-interactive mode · All CI/CD pipeline invocations
--output-format json · JSON output · Machine-parseable CI results
--json-schema · Enforce output schema · Inline PR comment automation
--resume <name> · Continue named session · Multi-session investigations
/compact · Reduce context usage · Long exploration sessions nearing the token limit
/memory · View loaded memory files · Diagnosing inconsistent behavior across sessions
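In a CI step, a pipeline script might shell out to Claude Code with the flags above; a sketch, with the prompt and result handling as placeholders.

```python
import json
import subprocess

# Non-interactive CI invocation: -p prevents interactive hangs and
# --output-format json makes the result machine-parseable.
result = subprocess.run(
    [
        "claude", "-p",
        "Review the files changed in this PR and report unhandled error paths.",
        "--output-format", "json",
    ],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)   # shape depends on your output schema/config
print(report)
```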
MCP Server Configuration
File · Scope · Credential Handling
.mcp.json (project root) · All team members · Use ${ENV_VAR} expansion — never hard-code
~/.claude.json · Current user only · Personal tokens, experimental servers
All configured MCP servers are discovered at connection time and available simultaneously.
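A sketch of what the shared project config might contain; the server name, package, and GITHUB_TOKEN variable are illustrative. The point is the ${...} expansion in place of a hard-coded credential.

```python
import json
from pathlib import Path

# Illustrative project-level MCP config (.mcp.json, committed to the repo).
mcp_config = {
    "mcpServers": {
        "github": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-github"],
            # Expanded from the environment at connection time; never hard-code tokens.
            "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"},
        }
    }
}
Path(".mcp.json").write_text(json.dumps(mcp_config, indent=2))
```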

Scenario Map

What each scenario tests and the key traps in each

Scenario 1 — Customer Support Resolution Agent
Key Concept — Watch For
Tool ordering enforcement — Programmatic prerequisite, NOT prompt instruction
Tool description quality — Expand descriptions; don't merge tools as the first fix
Escalation triggers — Explicit request = immediate escalation, no resolution attempt
Multiple customer matches — Ask for more identifiers; never guess by heuristic
Escalation calibration — Few-shot examples fix this, not confidence scores or sentiment
Scenario 2 — Code Generation with Claude Code
Key Concept — Watch For
Shared vs personal commands — .claude/commands/ = shared; ~/.claude/commands/ = personal
Plan mode triggers — Monolith → microservices always = plan mode
Path-specific rules — Glob patterns for scattered test files, not subdirectory CLAUDE.md
CLAUDE.md new-dev issue — If a new dev doesn't get instructions → user-level, not project-level
Scenario 3 — Multi-Agent Research System
Key Concept — Watch For
Task decomposition scope — Coverage gaps = coordinator decomposed too narrowly
Error propagation — Structured context → coordinator; not a generic string, not empty success
Parallel execution — Multiple Task calls in ONE response turn, not multiple turns
Synthesis tool access — Scoped verify_fact tool for the 85% simple case; complex = coordinator
Conflicting sources — Annotate both with attribution; don't pick, don't average
Scenario 4 — Developer Productivity with Claude
Key Concept — Watch For
Grep vs Glob — Grep = file contents; Glob = file paths; don't swap them
Edit fallback — Non-unique anchor → Read + Write (not retry Edit)
Context degradation — Scratchpad files to persist findings, not a larger model
MCP server scope — Team server in .mcp.json; personal in ~/.claude.json
Scenario 5 — Claude Code for CI/CD
Key Concept — Watch For
Non-interactive mode — -p flag; not CLAUDE_HEADLESS, not --batch, not stdin redirect
Batch API decision — Pre-merge = real-time (blocking); overnight reports = batch
Multi-file review — Per-file passes + an integration pass, not a bigger context window
False positive reduction — Explicit categorical criteria, not "be conservative"
Independent review — New instance for review, not the same session that generated the code
Scenario 6 — Structured Data Extraction
Key Concept — Watch For
Schema design — Optional fields → nullable, not required (prevents hallucination)
Syntax vs semantic errors — tool_use fixes syntax; a validation pass fixes semantic
Retry limits — Absent data can't be retried into existence
Aggregate accuracy — 97% overall ≠ safe to automate; check per-type and per-field
Few-shot for extraction — Varied document structures need examples, not just instructions

Exam Traps & Common Wrong Answers

The distractors that catch unprepared candidates — and why they're wrong

Non-Existent Features (Always Wrong)
Fake Option · Why It Appears · Correct Alternative
CLAUDE_HEADLESS=true · Sounds like a reasonable env var · -p / --print flag
--batch flag · Sounds like a batch processing mode · Message Batches API (separate endpoint)
required: true in tool_choice · Sounds like it forces tool usage · tool_choice: "any"
.claude/config.json commands array · Sounds like a config file · .claude/commands/ directory
parallel: true in AgentDefinition · Sounds like parallelism config · Multiple Task calls in one response
Over-Engineered Distractors (Wrong for "First Step" Questions)
Distractor · Why It's Wrong · Correct First Step
Deploy a separate routing classifier model · Requires ML infra; the prompt hasn't been tried yet · Improve tool descriptions
Build a keyword-based pre-turn selector · Bypasses the LLM's natural-language understanding · Improve tool descriptions
Consolidate tools into one mega-tool · More effort than needed; loses specificity · Differentiate descriptions first
Add a 3rd independent review model · Infrastructure overkill; prompt fix first · Explicit categorical criteria
Probabilistic vs Deterministic Traps
Trap Answer · Why Wrong · Correct Answer
"Add system prompt: customer verification is mandatory" · Probabilistic — LLMs have a non-zero failure rate for compliance · Programmatic prerequisite gate
"Add few-shot showing get_customer first" · Still probabilistic for critical financial operations · Programmatic prerequisite gate
"Instruct Claude: do not fabricate field values" · Prompt instructions are weaker than schema constraints · Make fields nullable in the JSON schema
"Set tool_choice: auto for guaranteed output" · auto allows text responses; not a guaranteed tool call · tool_choice: "any"
Confusable Concepts — High Exam Frequency
Concept A vs Concept B · Key Difference
~/.claude/CLAUDE.md (user) vs .claude/CLAUDE.md (project) · User = personal, not shared. Project = version-controlled, shared.
Syntax error (malformed JSON) vs Semantic error (values don't add up) · tool_use fixes syntax; a validation pass fixes semantic.
Access failure (tool couldn't run) vs Valid empty result (ran, found nothing) · Access = isError:true. Empty = isError:false, data:[].
fork_session (branch exploration) vs --resume (continue session) · fork = divergent approaches; resume = continue the same thread.
Grep (search file contents) vs Glob (match file paths) · Grep = what's inside files. Glob = file names/locations.
context: fork (skill isolation) vs CLAUDE.md (always-loaded) · Skills = on-demand. CLAUDE.md = loaded every session.
Exam Prep · Tool 3 of 5

Dos & Don'ts

CCA Exam Strategy · Technical Concepts · Real-World Claude Usage

🎓 Exam Strategy
🤖 Agentic Systems
🔧 Tools & MCP
✍️ Prompting & Output
🧠 Context & Reliability
Exam-Day Strategy
How to approach questions and maximize your score on the CCA Foundations exam
Key mindset: Every question is scenario-based. The right answer is always the most reliable, production-safe, proportionate solution. Avoid over-engineered answers and probabilistic guarantees when deterministic ones are available.
✓ Do
  • Read the scenario context first — it anchors which domain and tradeoffs are being tested
  • Look for the root cause in the question stem before scanning options
  • Prefer programmatic enforcement over prompt instructions when deterministic compliance is required
  • Choose proportionate first responses — the simplest solution that solves the stated problem
  • Guess if unsure — there is no penalty for wrong answers
  • When two options seem correct, pick the one that addresses the root cause, not a symptom
  • Treat "prompt-based solutions" as weaker than "schema/code-based solutions" for reliability questions
  • Know the exact API vocabulary: stop_reason, tool_use, end_turn, isRetryable, tool_choice
  • For escalation questions: customer explicitly requests human → escalate immediately, no resolution attempt
  • For batch API questions: always ask "is this workflow blocking?" — if yes, batch is wrong
✗ Don't
  • Don't pick answers that reference non-existent features: CLAUDE_HEADLESS, --batch, required: true in tool_choice, config.json commands array
  • Don't choose "add a classifier/separate model" as a first step — it's over-engineered when prompt/config fixes exist
  • Don't confuse user-level (~/.claude/) with project-level (.claude/) configuration — a common trap question
  • Don't trust LLM self-reported confidence scores for routing or escalation decisions — they're unreliable
  • Don't assume a larger context window or better model solves attention quality issues in multi-file reviews
  • Don't conflate semantic errors (values don't add up) with syntax errors (malformed JSON) — they have different solutions
  • Don't pick "merge tools into one" as a first fix for tool selection issues — description improvement comes first
  • Don't select answers that silently suppress errors or return empty results as success — always a wrong pattern
  • Don't assume subagents inherit coordinator context — they never do automatically
  • Don't pick sentiment analysis as an escalation proxy — it doesn't correlate with case complexity
Answer Elimination Framework — "Probabilistic vs. Deterministic"
✓ Programmatic hook blocks tool until prerequisite met vs ✗ System prompt says "always call X first"
✓ Nullable JSON schema field prevents hallucination vs ✗ Prompt instruction "don't fabricate values"
✓ tool_choice: "any" guarantees a tool call vs ✗ Prompt instruction "always use the extraction tool"
✓ Path-specific rules with glob patterns auto-load vs ✗ Root CLAUDE.md relies on Claude inferring the section
✓ -p flag prevents CI hangs deterministically vs ✗ stdin redirect is a workaround, not the right answer
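To make the deterministic side concrete, here is a sketch of a programmatic prerequisite gate in the tool-dispatch step of an agentic loop. The tool names follow the support scenario above; the gate logic and state handling are illustrative.

```python
def dispatch_tool(name: str, tool_input: dict, session_state: dict, tools: dict) -> dict:
    """Deterministic gate: refuse process_refund until get_customer has succeeded."""
    if name == "process_refund" and not session_state.get("customer_verified"):
        # Blocked in code, not by a prompt instruction, so the rule holds every time.
        return {
            "isError": True,
            "errorCategory": "business",
            "isRetryable": False,
            "message": "Verify the customer with get_customer before processing a refund.",
        }
    result = tools[name](**tool_input)
    if name == "get_customer" and not result.get("isError"):
        session_state["customer_verified"] = True
    return result
```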
Agentic Systems — Real-World Dos & Don'ts
Building production-grade agentic loops, multi-agent orchestration, and session management
✓ Do
  • Use stop_reason as your only loop termination signal — it's the authoritative API signal
  • Explicitly pass all relevant findings in each subagent's prompt — treat every subagent as stateless
  • Route all inter-subagent communication through the coordinator for observability and consistent error handling
  • Use programmatic hooks (PostToolUse) for business rules that require guaranteed compliance
  • Emit multiple Task tool calls in a single coordinator response to run subagents in parallel
  • Use fork_session to explore divergent approaches from a shared baseline without polluting the main session
  • When resuming sessions, explicitly inform the agent about files that changed since last run
  • Compile structured handoff summaries (customer ID, root cause, amounts, action) when escalating to humans
  • Separate content from metadata (source URL, page numbers) in structured outputs to preserve attribution
✗ Don't
  • Parse natural language in assistant messages to detect loop completion — brittle and unreliable
  • Set arbitrary iteration caps as the primary stopping mechanism — causes premature or missed termination
  • Assume subagents inherit coordinator context — always pass required information explicitly
  • Use prompt instructions alone for critical ordering rules (e.g., "verify customer before processing refund") — non-zero failure rate
  • Omit "Task" from a coordinator's allowedTools when it needs to spawn subagents — without it, delegation fails
  • Resume stale sessions with outdated tool results — start fresh with an injected summary instead
  • Let subagents communicate directly with each other, bypassing the coordinator — breaks observability
  • Decompose tasks so narrowly that the coordinator misses entire topic areas (e.g., only visual arts when asked about "creative industries")
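A sketch of the parallel-execution point at the API level: when one coordinator response contains several Task (tool_use) calls, run them concurrently and return every result in a single user turn. The run_subagent dispatcher is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_tool_calls(response, run_subagent) -> dict:
    """Execute every tool_use block from ONE coordinator response in parallel."""
    calls = [block for block in response.content if block.type == "tool_use"]
    with ThreadPoolExecutor(max_workers=max(len(calls), 1)) as pool:
        futures = [(call, pool.submit(run_subagent, call.name, call.input)) for call in calls]
        results = [
            {"type": "tool_result", "tool_use_id": call.id, "content": future.result()}
            for call, future in futures
        ]
    # One user message carrying all results keeps the loop to a single extra turn.
    return {"role": "user", "content": results}
```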
Tool Design & MCP — Real-World Dos & Don'ts
Designing effective tool interfaces, error responses, and MCP server configuration
✓ Do
  • Write tool descriptions that include input formats, example queries, edge cases, and when to use this vs. similar tools
  • Return structured error metadata: errorCategory, isRetryable, and a human-readable description
  • Restrict each subagent's tool set to only tools relevant to its role
  • Distinguish access failures (retry candidates) from valid empty results (successful query, no matches)
  • Use .mcp.json at project root for shared team MCP servers; use ~/.claude.json for personal servers
  • Use environment variable expansion (${TOKEN}) in .mcp.json — never commit credentials
  • Use Grep for content search; Glob for filename patterns — they are not interchangeable
  • Fall back to Read + Write when Edit fails due to non-unique anchor text
  • Expose content catalogs as MCP resources to reduce exploratory tool calls
✗ Don't
  • Use minimal one-line tool descriptions — they cause misrouting when tools have similar inputs
  • Return uniform generic errors ("Operation failed") — the agent can't make intelligent recovery decisions
  • Give agents more tools than they need — 18 tools degrade selection reliability compared with 4–5 focused ones
  • Silently return empty results when a tool actually failed — this masks errors and prevents recovery
  • Commit API keys or tokens directly into .mcp.json — always use environment variable expansion
  • Build a custom MCP server for integrations (Jira, GitHub) that already have community servers
  • Use Glob to search file contents — it only matches filename/path patterns, not content
  • Give a synthesis agent all web search tools "just in case" — violates separation of concerns
Prompting & Structured Output — Real-World Dos & Don'ts
Few-shot design, JSON schemas, validation loops, CI/CD prompting, and batch processing
✓ Do
  • Define explicit categorical criteria for what to report vs. skip — vague instructions like "be conservative" don't improve precision
  • Use 2–4 targeted few-shot examples for ambiguous scenarios, showing reasoning for why one action was chosen
  • Use tool_use with JSON schemas for guaranteed schema-compliant structured output
  • Make optional fields nullable in schemas — prevents hallucinated placeholder values
  • Add retry-with-error-feedback: include the original document + the failed extraction + specific error on retry
  • Match API type to workflow: batch for overnight/weekly, real-time for blocking pre-merge checks
  • Use custom_id fields in batch requests to correlate responses and resubmit only failed items
  • Use an independent Claude instance for code review — not the same session that generated the code
  • Use -p flag for all CI/CD Claude Code invocations to prevent interactive input hangs
  • Refine prompts on a sample set before batch-processing large volumes
✗ Don't
  • Rely on vague quality instructions ("only report high-confidence findings") — they don't create consistent behavior
  • Add required fields for data that may not exist in source documents — forces hallucinated values
  • Expect retries to fix extractions where the required information is simply absent from the document
  • Use tool_choice: "auto" when you need guaranteed structured output — the model may return plain text
  • Use the batch API for blocking workflows or SLAs under 24 hours — no latency guarantee
  • Run a single-pass review on 10+ files simultaneously — causes attention dilution and contradictory feedback
  • Assume tool_use prevents semantic errors (e.g., line items not summing to total) — it only prevents syntax errors
  • Ask the model to self-review code it just generated in the same session — it retains biased reasoning context
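A sketch of a semantic validation pass with retry-with-error-feedback, per the points above. extract() stands in for a schema-enforced tool_use extraction call, and the field names mirror the calculated-vs-stated pattern from the cheat sheet.

```python
def extract_with_validation(document: str, extract, max_retries: int = 2) -> dict:
    """Semantic check + retry-with-error-feedback; extract() is a placeholder for a
    schema-enforced tool_use extraction call (syntax is already guaranteed)."""
    error_feedback = None
    for _ in range(max_retries + 1):
        result = extract(document, error_feedback)
        line_sum = sum(item["amount"] for item in result.get("line_items", []))
        if abs(line_sum - result["stated_total"]) < 0.01:
            return result   # semantically consistent
        # Retry with the original document, the failed extraction, and the specific error.
        error_feedback = (
            f"Previous extraction: {result}. Line items sum to {line_sum:.2f} "
            f"but stated_total is {result['stated_total']:.2f}. Re-extract and reconcile."
        )
    result["discrepancy_flag"] = True   # give up and route to human review
    return result
```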
Context Management & Reliability — Real-World Dos & Don'ts
Long conversations, escalation patterns, multi-agent error propagation, human review
✓ Do
  • Extract transactional facts (amounts, dates, IDs) into a persistent "case facts" block outside summarized history
  • Trim verbose tool outputs to only relevant fields before they accumulate in context
  • Place key findings at the beginning of aggregated inputs to mitigate the "lost in the middle" effect
  • Honor explicit customer requests for a human agent immediately — don't attempt resolution first
  • Escalate when policy is ambiguous or silent — don't let the agent improvise on policy gaps
  • Request additional identifiers when multiple customer records match — never guess by heuristic
  • Require subagents to include publication/collection dates in structured output to prevent temporal misinterpretation
  • Annotate conflicting statistics from multiple sources with full attribution — let the coordinator reconcile
  • Analyze accuracy by document type AND individual field before removing human review
  • Use /compact during long exploration sessions when context fills with verbose discovery output
✗ Don't
  • Condense numerical values, percentages, or dates into vague summaries — they become irretrievably imprecise
  • Let 40-field tool responses accumulate in context when only 5 fields are relevant
  • Use sentiment analysis or self-reported confidence as an escalation trigger — unreliable proxies for case complexity
  • Silently suppress subagent errors — always propagate structured error context including what was attempted
  • Terminate entire workflows because one subagent failed — try local recovery and partial-results synthesis first
  • Trust aggregate accuracy metrics (e.g., 97% overall) to greenlight automation — they can mask 60% accuracy on specific segments
  • Let the synthesis agent arbitrarily pick one value when sources conflict — annotate both with attribution
  • Omit source attribution when summarizing subagent findings — provenance is lost and cannot be reconstructed later
CCA Foundations · Complete Exam Simulation

All 6 Scenarios.
All 3 Question Types.

54 questions across every scenario — Multiple-Choice, Multi-Select, and Scenario-Based. Filter by scenario or run the full deck.

Select scenarios to include (default: all)
54 Questions
18 Single-Answer
18 Multi-Select
18 Scenario-Based
📋
Quick Reference Guide
All domains, scenarios, key tables at a glance
Exam at a Glance
720 — Minimum passing score (100–1000)
4 of 6 — Scenarios on your exam (random)
0 pts — Penalty for wrong answers; always guess
6 mo+ — Recommended hands-on experience
Content Domains & Task Statements
D1 · 27% · Agentic Architecture & Orchestration
  • 1.1 Agentic loop lifecycle — stop_reason control flow
  • 1.2 Multi-agent coordinator-subagent patterns
  • 1.3 Subagent spawning via Task tool; context passing
  • 1.4 Multi-step workflows; programmatic enforcement
  • 1.5 Agent SDK hooks — PostToolUse; interception
  • 1.6 Task decomposition — chaining vs dynamic
  • 1.7 Session state — --resume, fork_session
D2 · 18% · Tool Design & MCP Integration
  • 2.1 Tool descriptions = primary LLM selection mechanism
  • 2.2 Structured errors: errorCategory, isRetryable
  • 2.3 Tool distribution; tool_choice auto/any/forced
  • 2.4 MCP config — .mcp.json vs ~/.claude.json
  • 2.5 Built-in tools — Grep/Glob/Read/Write/Edit/Bash
D3 · 20% · Claude Code Config & Workflows
  • 3.1 CLAUDE.md hierarchy: user → project → directory
  • 3.2 Slash commands (.claude/commands/) & skills
  • 3.3 Path-specific rules (.claude/rules/) glob patterns
  • 3.4 Plan mode vs direct execution decision
  • 3.5 Iterative refinement — examples, test-driven
  • 3.6 CI/CD: -p flag, --output-format json
D4 · 20% · Prompt Engineering & Structured Output
  • 4.1 Explicit criteria over vague instructions
  • 4.2 Few-shot: 2–4 targeted ambiguous examples
  • 4.3 tool_use + JSON schema = reliable output
  • 4.4 Validation-retry loops; retry won't fix absent data
  • 4.5 Batch API: 50% savings, 24h window, no SLA
  • 4.6 Multi-pass review; independent instance
D5 · 15% · Context Management & Reliability
  • 5.1 Case facts block; trim verbose tool outputs
  • 5.2 Escalation: explicit requests honored immediately
  • 5.3 Structured error propagation up agent chain
  • 5.4 Scratchpad files; /compact; subagent delegation
  • 5.5 Stratified sampling; field-level confidence
  • 5.6 Claim-source mappings; annotate conflicts
6 Exam Scenarios (4 appear on your exam)
01 · Customer Support Resolution Agent
Building an agent to handle returns, billing disputes, and account issues using MCP tools.
Primary: D1 · D2 · D5
02 · Code Generation with Claude Code
Using Claude Code for development, refactoring, documentation, custom commands, and plan mode.
Primary: D3 · D5
03 · Multi-Agent Research System
Coordinator delegates to specialized subagents for web search, document analysis, synthesis, reporting.
Primary: D1 · D2 · D5
04 · Developer Productivity Tools
Agent helps explore codebases, understand legacy systems, generate boilerplate using built-in tools.
Primary: D2 · D3 · D1
05 · Claude Code for CI/CD
Integrating Claude Code into automated pipelines for code review, test generation, PR feedback.
Primary: D3 · D4
06 · Structured Data Extraction
Extracting structured information from unstructured documents with JSON schema validation and retry logic.
Primary: D4 · D5
5 Rules You Must Know
Over-Engineering — If improving tool descriptions fixes the problem, don't build a routing classifier. The exam favors proportional solutions.
Prompt vs Programmatic — When business-critical logic requires a guarantee, programmatic guardrails beat prompt instructions every time. Prompts are probabilistic.
Context Isolation — Subagents do NOT inherit coordinator history. Always look for answers that explicitly pass context in the subagent's prompt.
Syntax vs Semantic — tool_use + JSON schema eliminates syntax errors only. Semantic errors (wrong values, math) require a separate validation pass.
Lost-in-the-Middle — Models miss content in the middle of long prompts. When an agent misses information, think about placement — move critical facts to the beginning.
Critical Tables
tool_choice · Behavior · Use When
"auto" · May return plain text — not guaranteed · General use; tool call not required
"any" · Must call some tool · Guaranteed structured output needed
{"type":"tool","name":"X"} · Must call that specific tool · Enforcing tool ordering / specific extraction
Error Category · isRetryable · Example
transient · ✓ Yes · Timeout, service unavailable
validation · ✗ No · Invalid input format
business · ✗ No · Policy violation
permission · ✗ No · Unauthorized access
Batch API · Real-Time API
50% cost savings · Immediate response
Up to 24h processing (no SLA) · Guaranteed low latency
No multi-turn tool calling · Full tool calling support
✓ Overnight reports, weekly audits · ✓ Pre-merge checks, interactive
✗ Any blocking workflow · ✗ Cost-sensitive batch jobs
⚠️
Anti-Patterns Cheat Sheet
27 wrong approaches with correct alternatives — organized by domain
💬
FAQ
40 questions · Searchable · Filterable by topic
📅
Study Plan
2-week structured preparation · Check off tasks as you complete them
Overall Progress