Appendix C: Glossary

This appendix collects the technical terms that appear throughout the book, sorted alphabetically by English term.

| Term | Definition | First Seen |
|------|------------|------------|
| Agent Loop | The core execution loop of an AI agent: receive input -> call model -> execute tools -> decide whether to continue | Chapter 3 |
| AST (Abstract Syntax Tree) | A tree-structured representation of source code that preserves semantic relationships (rather than plain text) | Chapter 28 |
| Cache Break | An event where the prompt cache prefix is invalidated by content changes | Chapter 14 |
| Circuit Breaker | A mechanism that forces an automated process to stop after N consecutive failures, degrading to a safe state | Chapters 9, 26 |
| Compaction | Summarizing conversation history to free context window space | Chapter 9 |
| DCE (Dead Code Elimination) | Compile-time removal of gated code, enabled by Bun's feature() function | Chapter 1 |
| Defensive Git | A pattern that prevents data loss during AI-executed Git operations through explicit safety rules | Chapter 27 |
| Dynamic Boundary | A marker in the system prompt that separates static, cacheable content from dynamic session content | Chapter 5 |
| Fail-Closed | The system defaults to the safest option; explicit declaration is required to unlock dangerous operations | Chapters 2, 25 |
| Feature Flag (tengu_*) | Experiment gates configured at runtime via GrowthBook, controlling whether features are enabled | Chapters 1, 23 |
| Graduated Autonomy | Multi-level permission modes ranging from manual confirmation to full automation, each with safe fallbacks | Chapter 27 |
| Harness Engineering | The practice of guiding AI model behavior through prompts, tools, and configuration (rather than code logic) | Chapter 1 |
| Hooks | User-defined shell commands that execute at specific events (e.g., before/after tool calls) | Chapter 18 |
| Latch | A session-level state that, once entered, remains stable, preventing cache oscillation or behavioral jitter | Chapters 13, 25 |
| MCP (Model Context Protocol) | A protocol standardizing the interaction between AI models and external tools and data sources | Chapter 22 |
| Microcompact | Precisely removing specific tool results (rather than compacting the entire conversation), keeping the cache prefix stable | Chapter 11 |
| Outline | An overview document of the book's table-of-contents structure and chapter topics | Preface |
| Partition | Dividing tool calls into parallelizable and must-serialize batches, based on the isConcurrencySafe property | Chapter 4 |
| Pattern Extraction | Extracting reusable design patterns from source-code analysis, including name, problem, and solution | Throughout |
| Post-Compact Restore | Selectively restoring the most critical file contents and skill information after compaction completes | Chapter 10 |
| Prompt Cache | An Anthropic API feature that caches message prefixes to reduce redundant token processing | Chapter 13 |
| Skill | A callable prompt template, injected into conversation context via SkillTool | Chapter 22 |
| Token Budget | The token-usage cap allocated to various types of content within the context window | Chapters 12, 26 |
| Tool Schema | A tool's JSON Schema definition, including name, description, and input parameter format | Chapter 2 |
| YOLO Classifier | A secondary Claude API call used to make permission approve/deny decisions in auto mode | Chapter 17 |
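To make two of the more mechanism-like entries concrete: the Circuit Breaker pattern can be sketched as a small counter that trips after N consecutive failures. This is a minimal illustration of the glossary definition, not Claude Code's actual implementation; the class and method names are hypothetical.

```typescript
// Minimal circuit breaker: trips (degrades to a safe state) after
// maxFailures consecutive failures; any success resets the streak.
// All names here are illustrative.
class CircuitBreaker {
  private consecutiveFailures = 0;
  private tripped = false;

  constructor(private readonly maxFailures: number) {}

  recordSuccess(): void {
    this.consecutiveFailures = 0; // a success resets the failure streak
  }

  recordFailure(): void {
    this.consecutiveFailures += 1;
    if (this.consecutiveFailures >= this.maxFailures) {
      this.tripped = true; // from here on, callers must stop
    }
  }

  get isOpen(): boolean {
    return this.tripped;
  }
}

const breaker = new CircuitBreaker(3);
breaker.recordFailure();
breaker.recordFailure();
breaker.recordSuccess(); // streak resets to 0
breaker.recordFailure();
console.log(breaker.isOpen); // false: never reached 3 in a row
```

Note that the breaker counts *consecutive* failures, not total failures: intermittent errors in a long-running loop do not trip it, only a sustained failure streak does.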
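Similarly, the Partition entry can be sketched as a grouping pass over a tool-call list: adjacent concurrency-safe calls are merged into one parallel batch, while each unsafe call becomes its own serial batch. The isConcurrencySafe property is the one named in Chapter 4; the interface and function around it are hypothetical.

```typescript
// Hypothetical sketch of partitioning tool calls into batches.
// Adjacent concurrency-safe calls share a batch (run in parallel);
// an unsafe call always gets a batch of its own (runs serially).
interface ToolCall {
  name: string;
  isConcurrencySafe: boolean;
}

function partition(calls: ToolCall[]): ToolCall[][] {
  const batches: ToolCall[][] = [];
  for (const call of calls) {
    const last = batches[batches.length - 1];
    if (call.isConcurrencySafe && last && last[0].isConcurrencySafe) {
      last.push(call); // extend the current parallel batch
    } else {
      batches.push([call]); // start a new batch
    }
  }
  return batches;
}

const batches = partition([
  { name: "Read", isConcurrencySafe: true },
  { name: "Grep", isConcurrencySafe: true },
  { name: "Write", isConcurrencySafe: false },
  { name: "Read", isConcurrencySafe: true },
]);
console.log(batches.map(b => b.map(c => c.name)));
// [["Read", "Grep"], ["Write"], ["Read"]]
```

Order is preserved across batches, so a write is never reordered past the reads that surround it; only reads that were already adjacent run concurrently.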