Concept Wiki
Knowledge Graph
Coined terminology in inference-time cognitive configuration — core mechanisms, diagnostic frameworks, and the eight failure modes of default AI reasoning.
Foundational concepts and mechanisms in inference-time cognitive configuration.
ADFS (Auto-Detecting Dynamic Framework Selection)
A cognitive multiplexer system that acts as a diagnostic bootloader: before generating any response, the model analyzes the prompt, declares in a visible header which analytical frameworks are active, then executes through those specific frameworks — making the model's reasoning strategy visible, challengeable, and persistent.
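The visible-header mechanism can be sketched in a few lines of Python. This is a minimal illustration of the declaration step only; the framework names and triggering reasons are invented for the example and are not part of the ADFS definition.

```python
def adfs_header(active_frameworks):
    """Render the diagnostic header the model emits before its response,
    declaring which analytical frameworks are active and why."""
    lines = ["[ADFS] Active frameworks:"]
    for name, reason in active_frameworks:
        lines.append(f"  - {name}: {reason}")
    return "\n".join(lines)

# Hypothetical activation decisions for a strategy prompt.
header = adfs_header([
    ("premise audit", "prompt builds on an unstated assumption"),
    ("leverage-weighted analysis", "variables differ sharply in stakes"),
])
print(header)
```

Because the header precedes the response, the reader can challenge the framework selection itself, not just the conclusions downstream of it.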
Architectural Malleability
The property of frontier AI models that allows their effective reasoning behavior to be significantly altered through interaction design without changing their underlying weights, training, or infrastructure — revealing that a large portion of the capability gap is closeable at near-zero marginal cost.
Cognitive Leverage
The practice of achieving deep reasoning quality through interaction design rather than compute expenditure — using meta-cognitive priors to perform cognitive triage, routing analytical depth to the highest-stakes dimensions while compressing consensus-level information.
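That triage can be sketched as a toy budget allocator: depth (here, a paragraph budget) is routed in proportion to stakes, so consensus-level dimensions are compressed rather than dropped. The dimensions and stakes scores below are invented for illustration.

```python
def allocate_depth(dimensions, budget_paragraphs):
    """Split an analysis budget across dimensions in proportion to their
    stakes, guaranteeing each dimension at least a one-paragraph summary."""
    total = sum(stakes for _, stakes in dimensions)
    return {
        name: max(1, round(budget_paragraphs * stakes / total))
        for name, stakes in dimensions
    }

# Hypothetical dimensions of a market-entry decision, scored by stakes.
plan = allocate_depth(
    [("pricing power", 5), ("regulatory risk", 3), ("office location", 1)],
    budget_paragraphs=9,
)
```

The contrast with the Symmetry Trap below is the point: a symmetric allocator would give every dimension three paragraphs regardless of leverage.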
Cognitive Seeds
Compact, semantically dense meta-cognitive priors that reconfigure how frontier AI models organize their reasoning during inference — specifying global reasoning properties rather than task content, and achieving stronger effects than lengthy system prompts through extreme brevity.
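A hypothetical seed illustrates the shape of the idea: every clause specifies a global reasoning property, none specifies task content, and the whole thing stays far shorter than a typical system prompt. The wording below is an invented example, not a canonical seed.

```python
# An illustrative cognitive seed: global reasoning properties only.
SEED = (
    "Weight analysis by strategic leverage, not symmetry. "
    "Surface and challenge every assumption you build on. "
    "Hold competing constraints simultaneously instead of oscillating."
)

# Task content lives separately; the seed is prepended as a prior.
TASK = "Evaluate whether we should enter the enterprise market."
prompt = f"{SEED}\n\n{TASK}"
```

Note that the seed says nothing about markets: swapping in a different `TASK` leaves the reasoning configuration intact.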
Cognitive Stacking
The practice of running multiple AI instances on the same project at deliberately different cognitive distances from the execution — separating builder velocity from strategic oversight to catch errors and decisions that a single executing instance is structurally blind to.
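The stack can be sketched as two configurations of the same underlying model held at deliberately different cognitive distances. The role names and priors below are illustrative assumptions, not a prescribed setup.

```python
# Illustrative two-layer cognitive stack (roles and priors are invented).
STACK = {
    "builder": {
        "distance": "inside the execution loop",
        "prior": "Ship working increments; optimize for velocity.",
    },
    "overseer": {
        "distance": "outside the execution loop",
        "prior": (
            "Do not write code. Audit the builder's output for errors "
            "and strategic drift it is structurally blind to."
        ),
    },
}

for role, cfg in STACK.items():
    print(f"{role}: {cfg['distance']}")
```

The separation is the mechanism: the overseer's prior forbids execution precisely so its attention is never consumed by it.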
Eight Failure Modes of Default AI Reasoning
A diagnostic taxonomy of eight systematic reasoning failures that are architecturally rooted in how autoregressive language models generate text, organized into four categories: spatial, temporal, epistemic, and execution failures.
Inference-Time Cognitive Configuration
The practice of deliberately designing AI interactions to activate specific reasoning regimes within a language model during response generation — specifying global reasoning properties rather than task content, and operating one layer deeper than conventional prompt engineering.
Meta-Cognitive Priors
Compact, semantically dense instructions that specify global reasoning properties rather than task content — configuring how a model organizes, weights, and monitors its reasoning before and during task execution. The building blocks of Cognitive Seeds.
Failure modes related to how the model distributes attention across the problem space.
The Symmetry Trap
A spatial and allocation failure mode where the model allocates equal analytical weight across all variables regardless of which ones carry the most strategic leverage, producing comprehensively balanced output that is strategically useless.
Tunnel Vision
A spatial and allocation failure mode where the model collapses a complex, multi-dimensional problem into a single analytical frame, producing thorough analysis on one axis while other dimensions vanish without acknowledgment.
Failure modes related to how reasoning quality changes over time and generation length.
Autoregressive Drift
A temporal and state failure mode where response quality degrades progressively from beginning to end as slightly shallow early tokens compound into increasingly generic later tokens through the forward-propagating nature of autoregressive generation.
Contextual Amnesia
A temporal and state failure mode where the model loses its reasoning posture over the course of a long conversation while retaining the factual content of earlier turns — remembering the products of its earlier reasoning but forgetting the process.
Failure modes in how the model relates to truth, quality, and assumptions.
Framework Theater
An epistemic failure mode where the model references analytical frameworks by name without genuinely engaging their diagnostic logic — name-checking SWOT, Porter's Five Forces, or other frameworks without applying their actual analytical machinery.
The Mediocrity Bias
An epistemic failure mode where the model defaults to the statistical average of its training data, producing output that perfectly synthesizes how mid-level practitioners discuss a topic rather than accessing the elite-level frameworks that live in the long tail of the distribution.
The Sycophancy Trap
An epistemic failure mode where the model builds on flawed premises without challenging them, producing highly articulate analyses that are internally coherent but grounded in unstated assumptions that should have been questioned. The model hallucinates coherence, not facts.
Failure modes in how the model translates reasoning into usable output.
The Pendulum Swing
An execution and output failure mode where the model cannot hold two competing constraints simultaneously, oscillating between extremes when asked to optimize for multiple objectives — each correction overcorrects, producing an editing loop that never converges.
Runaway Abstraction
An execution and output failure mode where the model spirals into increasingly philosophical meta-analysis that disconnects from the original practical objective — the output becomes a meditation on the nature of the problem rather than a solution to it.