How to approach questions and maximize your score on the CCA Foundations exam
Key mindset: Every question is scenario-based. The right answer is always the most reliable, production-safe, proportionate solution. Avoid over-engineered answers and probabilistic guarantees when deterministic ones are available.
- ✓Read the scenario context first — it anchors which domain and tradeoffs are being tested
- ✓Look for the root cause in the question stem before scanning options
- ✓Prefer programmatic enforcement over prompt instructions when deterministic compliance is required
- ✓Choose proportionate first responses — the simplest solution that solves the stated problem
- ✓Guess if unsure — there is no penalty for wrong answers
- ✓When two options seem correct, pick the one that addresses the root cause, not a symptom
- ✓Treat "prompt-based solutions" as weaker than "schema/code-based solutions" for reliability questions
- ✓Know the exact API vocabulary: stop_reason, tool_use, end_turn, isRetryable, tool_choice
- ✓For escalation questions: customer explicitly requests human → escalate immediately, no resolution attempt
- ✓For batch API questions: always ask "is this workflow blocking?" — if yes, batch is wrong
- ✗Don't pick answers that reference non-existent features: CLAUDE_HEADLESS, --batch, required: true in tool_choice, config.json commands array
- ✗Don't choose "add a classifier/separate model" as a first step — it's over-engineered when prompt/config fixes exist
- ✗Don't confuse user-level (~/.claude/) with project-level (.claude/) configuration — a common trap question
- ✗Don't trust LLM self-reported confidence scores for routing or escalation decisions — they're unreliable
- ✗Don't assume a larger context window or better model solves attention quality issues in multi-file reviews
- ✗Don't conflate semantic errors (values don't add up) with syntax errors (malformed JSON) — they have different solutions
- ✗Don't pick "merge tools into one" as a first fix for tool selection issues — description improvement comes first
- ✗Don't select answers that silently suppress errors or return empty results as success — always a wrong pattern
- ✗Don't assume subagents inherit coordinator context — they never do automatically
- ✗Don't pick sentiment analysis as an escalation proxy — it doesn't correlate with case complexity
Answer Elimination Framework — "Probabilistic vs. Deterministic"
| ✓ Deterministic (pick this) | ✗ Probabilistic (eliminate this) |
| --- | --- |
| Programmatic hook blocks tool until prerequisite met | System prompt says "always call X first" |
| Nullable JSON schema field prevents hallucination | Prompt instruction "don't fabricate values" |
| tool_choice: "any" guarantees a tool call | Prompt instruction "always use the extraction tool" |
| Path-specific rules with glob patterns auto-load | Root CLAUDE.md relies on Claude inferring the section |
| -p flag prevents CI hangs deterministically | stdin redirect is a workaround, not the right answer |
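The second and third rows of this table collapse into a few lines of code. A minimal sketch, assuming the Anthropic Python SDK; the record_invoice tool, its fields, the sample document, and the model name are hypothetical illustrations, not exam content:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

document = "INVOICE #4471 ... TOTAL: $450.00"  # hypothetical input

invoice_tool = {
    "name": "record_invoice",  # hypothetical extraction tool
    "description": "Record structured fields extracted from an invoice.",
    "input_schema": {
        "type": "object",
        "properties": {
            "invoice_id": {"type": "string"},
            # Nullable field: when the document has no PO number, the model
            # can return null instead of fabricating a placeholder value.
            "po_number": {"type": ["string", "null"]},
            "total": {"type": "number"},
        },
        "required": ["invoice_id", "po_number", "total"],
    },
}

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumption: substitute your model
    max_tokens=1024,
    tools=[invoice_tool],
    # "any" guarantees some tool call; {"type": "tool", "name": "record_invoice"}
    # would force this exact tool. "auto" may answer in plain text instead.
    tool_choice={"type": "any"},
    messages=[{"role": "user", "content": f"Extract the invoice fields:\n\n{document}"}],
)

tool_input = next(b.input for b in response.content if b.type == "tool_use")
```

The schema and tool_choice live in code rather than in the prompt, which is exactly why the exam treats them as deterministic guarantees.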
Building production-grade agentic loops, multi-agent orchestration, and session management
- ✓Use stop_reason as your only loop termination signal — it's the authoritative API signal (see the loop sketch after this list)
- ✓Explicitly pass all relevant findings in each subagent's prompt — treat every subagent as stateless
- ✓Route all inter-subagent communication through the coordinator for observability and consistent error handling
- ✓Use programmatic hooks (PostToolUse) for business rules that require guaranteed compliance
- ✓Emit multiple Task tool calls in a single coordinator response to run subagents in parallel
- ✓Use fork_session to explore divergent approaches from a shared baseline without polluting the main session
- ✓When resuming sessions, explicitly inform the agent about files that changed since last run
- ✓Compile structured handoff summaries (customer ID, root cause, amounts, action) when escalating to humans
- ✓Separate content from metadata (source URL, page numbers) in structured outputs to preserve attribution
- ✗Parse natural language in assistant messages to detect loop completion — brittle and unreliable
- ✗Set arbitrary iteration caps as the primary stopping mechanism — causes premature or missed termination
- ✗Assume subagents inherit coordinator context — always pass required information explicitly
- ✗Use prompt instructions alone for critical ordering rules (e.g., "verify customer before processing refund") — non-zero failure rate
- ✗Give a coordinator an allowedTools list that omits "Task" when it needs to spawn subagents
- ✗Resume stale sessions with outdated tool results — start fresh with an injected summary instead
- ✗Let subagents communicate directly with each other, bypassing the coordinator — breaks observability
- ✗Decompose tasks so narrowly that the coordinator misses entire topic areas (e.g., only visual arts when asked about "creative industries")
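As referenced above, a minimal loop sketch assuming the Anthropic Python SDK; TOOLS and run_tool are hypothetical stand-ins for your tool definitions and dispatcher:

```python
import anthropic

client = anthropic.Anthropic()
messages = [{"role": "user", "content": "Audit this week's failed refund jobs."}]  # hypothetical task

while True:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumption: substitute your model
        max_tokens=2048,
        tools=TOOLS,  # assumption: tool definitions declared elsewhere
        messages=messages,
    )
    messages.append({"role": "assistant", "content": response.content})

    if response.stop_reason == "tool_use":
        # Run every requested tool, then return all results in one user turn.
        results = [
            {"type": "tool_result", "tool_use_id": block.id,
             "content": run_tool(block.name, block.input)}  # assumption: your dispatcher
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
    elif response.stop_reason == "end_turn":
        break  # the authoritative completion signal: no text parsing, no iteration cap
    else:
        raise RuntimeError(f"unexpected stop_reason: {response.stop_reason}")
```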
Few-shot design, JSON schemas, validation loops, CI/CD prompting, and batch processing
- ✓Define explicit categorical criteria for what to report vs. skip — vague instructions like "be conservative" don't improve precision
- ✓Use 2–4 targeted few-shot examples for ambiguous scenarios, showing reasoning for why one action was chosen
- ✓Use tool_use with JSON schemas for guaranteed schema-compliant structured output
- ✓Make optional fields nullable in schemas — prevents hallucinated placeholder values
- ✓Add retry-with-error-feedback: include the original document + the failed extraction + the specific error on retry (sketched after this list)
- ✓Match API type to workflow: batch for overnight/weekly, real-time for blocking pre-merge checks
- ✓Use custom_id fields in batch requests to correlate responses and resubmit only failed items
- ✓Use an independent Claude instance for code review — not the same session that generated the code
- ✓Use the -p flag for all CI/CD Claude Code invocations to prevent interactive input hangs
- ✓Refine prompts on a sample set before batch-processing large volumes
- ✗Rely on vague quality instructions ("only report high-confidence findings") — they don't create consistent behavior
- ✗Add required fields for data that may not exist in source documents — forces hallucinated values
- ✗Expect retries to fix extractions where the required information is simply absent from the document
- ✗Use tool_choice: "auto" when you need guaranteed structured output — the model may return plain text
- ✗Use the batch API for blocking workflows or SLAs under 24 hours — no latency guarantee
- ✗Run a single-pass review on 10+ files simultaneously — causes attention dilution and contradictory feedback
- ✗Assume tool_use prevents semantic errors (e.g., line items not summing to total) — it only prevents syntax errors
- ✗Ask the model to self-review code it just generated in the same session — it retains biased reasoning context
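A retry-with-error-feedback sketch, as referenced above. It reuses the hypothetical record_invoice tool from the earlier sketch (assumed here to also carry a line_items array) and shows a semantic check that tool_use alone cannot enforce:

```python
def extract_once(prompt: str) -> dict:
    """Force the (hypothetical) record_invoice tool and return its input dict."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumption: substitute your model
        max_tokens=1024,
        tools=[invoice_tool],
        tool_choice={"type": "tool", "name": "record_invoice"},
        messages=[{"role": "user", "content": prompt}],
    )
    return next(b.input for b in response.content if b.type == "tool_use")

def validate(extraction: dict) -> str | None:
    """Semantic check the JSON schema cannot express: do line items sum to the total?"""
    subtotal = sum(extraction["line_items"])
    if abs(subtotal - extraction["total"]) < 0.01:
        return None
    return f"line items sum to {subtotal}, but total is {extraction['total']}"

extraction = extract_once(f"Extract the invoice fields:\n\n{document}")
error = validate(extraction)
if error:
    # Retry with all three pieces: the original document, the failed extraction,
    # and the specific error, so the model corrects instead of repeating itself.
    extraction = extract_once(
        f"Your previous extraction failed validation.\n\n"
        f"Document:\n{document}\n\n"
        f"Failed extraction: {extraction}\n"
        f"Error: {error}\n\n"
        f"Return a corrected extraction."
    )
```

Note that a retry only helps when the document actually contains the information; if it is absent, the nullable field (not another retry) is the right fix.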
Long conversations, escalation patterns, multi-agent error propagation, human review
- ✓Extract transactional facts (amounts, dates, IDs) into a persistent "case facts" block outside summarized history (see the sketch after this list)
- ✓Trim verbose tool outputs to only relevant fields before they accumulate in context
- ✓Place key findings at the beginning of aggregated inputs to mitigate the "lost in the middle" effect
- ✓Honor explicit customer requests for a human agent immediately — don't attempt resolution first
- ✓Escalate when policy is ambiguous or silent — don't let the agent improvise on policy gaps
- ✓Request additional identifiers when multiple customer records match — never guess by heuristic
- ✓Require subagents to include publication/collection dates in structured output to prevent temporal misinterpretation
- ✓Annotate conflicting statistics from multiple sources with full attribution — let the coordinator reconcile
- ✓Analyze accuracy by document type AND individual field before removing human review
- ✓Use /compact during long exploration sessions when context fills with verbose discovery output
- ✗Condense numerical values, percentages, or dates into vague summaries — they become irretrievably imprecise
- ✗Let 40-field tool responses accumulate in context when only 5 fields are relevant
- ✗Use sentiment analysis or self-reported confidence as an escalation trigger — unreliable proxies for case complexity
- ✗Silently suppress subagent errors — always propagate structured error context including what was attempted
- ✗Terminate entire workflows because one subagent failed — try local recovery and partial-results synthesis first
- ✗Trust aggregate accuracy metrics (e.g., 97% overall) to greenlight automation — they can mask 60% accuracy on specific segments
- ✗Let the synthesis agent arbitrarily pick one value when sources conflict — annotate both with attribution
- ✗Omit source attribution when summarizing subagent findings — provenance is lost and cannot be reconstructed later
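A plain-Python sketch of the first three items in this list; the field names are hypothetical. Transactional facts live in a block that is re-injected verbatim at the front of context, and verbose tool output is trimmed before it ever accumulates:

```python
RELEVANT_FIELDS = {"customer_id", "order_id", "amount", "status", "date"}  # hypothetical

def trim_tool_output(raw: dict) -> dict:
    """Keep the ~5 relevant fields out of a 40-field CRM response."""
    return {k: v for k, v in raw.items() if k in RELEVANT_FIELDS}

case_facts: dict = {}  # persistent; stays precise because it is never summarized

def record_fact(key: str, value) -> None:
    case_facts[key] = value  # e.g. record_fact("refund_amount", 129.99)

def build_system_prompt(policy: str) -> str:
    # Key facts go at the FRONT of the aggregated input, verbatim,
    # mitigating the "lost in the middle" effect.
    facts = "\n".join(f"- {k}: {v}" for k, v in case_facts.items())
    return f"CASE FACTS (authoritative, keep verbatim):\n{facts}\n\n{policy}"
```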