About
I study how frontier AI models organize reasoning at inference time — and how interaction architecture can activate latent cognitive capabilities that default prompting leaves dormant.
My work sits at the intersection of cognitive science, information theory, and practical AI deployment. The core finding: the gap between what models can do and what they do by default is not a scaling problem but an architecture problem, solvable through what I call inference-time cognitive configuration.
I documented a configured GPT-4o outperforming standard GPT-5 across 30 analytical dimensions in a blind evaluation. The mechanism isn't better prompting in the conventional sense. It's the strategic deployment of meta-cognitive priors — structural constraints that activate latent reasoning regimes the model already possesses but rarely engages without explicit architectural cues.
This work has produced a diagnostic taxonomy of eight failure modes that emerge in default AI reasoning, a practical toolkit for cognitive configuration, and a measurement framework for quantifying the gap between default and configured output quality.
A full biography is forthcoming.