Open Questions

Intellectual provocations I'm actively thinking about. Not FAQs — these are the unsolved problems at the frontier of cognitive configuration.

01

Is there a theoretical ceiling on how much cognitive leverage can be extracted from a fixed model through interaction design alone?

We know that meta-cognitive priors activate latent reasoning regimes. But do those regimes have a hard ceiling determined by pretraining, or can the right configuration surface capabilities that the model has in principle but has never demonstrated?

02

What happens to the eight failure modes as model scale increases — do they attenuate, transform, or remain invariant?

GPT-5 still exhibits autoregressive drift and the symmetry trap, but possibly at different intensities than GPT-4o. Is there a scaling law for default reasoning failure, and does it predict anything useful?
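One way to make that empirical: hold a probe set fixed and measure failure-mode intensity across a model family. Here is a minimal harness sketch; the probe prompts and the scoring function are assumptions, stand-ins for however each of the eight failure modes gets operationalized.

```python
from statistics import mean
from typing import Callable

def failure_intensity(
    generate: Callable[[str], str],  # wraps a single model endpoint
    probes: list[str],               # prompts known to elicit one failure mode
    score: Callable[[str], float],   # 0.0 = clean output, 1.0 = full failure
    trials: int = 5,                 # repeated runs to smooth sampling noise
) -> float:
    """Mean failure score for one model on one failure mode."""
    return mean(score(generate(p)) for p in probes for _ in range(trials))
```

Plotting that intensity against parameter count or training compute for a model family is the most direct test of whether a scaling law exists at all, and whether it extrapolates.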

03

Can semantic density be measured in real-time during generation, and if so, could it serve as an inference-time optimization signal?

Currently, semantic density is measured post hoc. But if it could be computed incrementally during token generation, it might be possible to build self-correcting systems that detect and compensate for autoregressive drift as it happens.
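A minimal sketch of what incremental measurement could look like, assuming semantic density can be proxied by how much each new token window moves a rolling embedding. `embed` here is a toy stand-in for a real encoder, and the threshold is arbitrary.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic embedding (hash-seeded random projection).
    Placeholder for a real encoder such as a sentence transformer."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class DriftMonitor:
    """Tracks a semantic-density proxy during generation and flags
    autoregressive drift when new windows stop adding information."""

    def __init__(self, window: int = 8, floor: float = 0.15):
        self.window = window        # tokens per measurement window
        self.floor = floor          # density threshold that triggers a flag
        self.tokens: list[str] = []
        self.prev: np.ndarray | None = None

    def update(self, token: str) -> float | None:
        self.tokens.append(token)
        if len(self.tokens) % self.window != 0:
            return None             # only measure at window boundaries
        cur = embed(" ".join(self.tokens[-self.window:]))
        if self.prev is None:
            self.prev = cur
            return None
        # Density proxy: 1 minus cosine similarity to the previous window.
        # Near-zero means the text is circling rather than progressing.
        density = 1.0 - float(cur @ self.prev)
        self.prev = cur
        if density < self.floor:
            print(f"drift flagged at token {len(self.tokens)}: {density:.3f}")
        return density
```

In a real decoding loop, the flag would do more than print: it could trigger resampling, a temperature adjustment, or an injected steering instruction, which is what "compensate as it happens" would actually mean.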

04

Is there a minimal set of meta-cognitive priors that generalizes across domains, or is every domain a new configuration problem?

The current prior bank targets analytical reasoning. Legal analysis, medical diagnosis, creative writing, and code generation may require fundamentally different cognitive architectures — or they may share a common core that transfers.
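If the question has an answer, it is findable by ablation: search prior subsets for the smallest one that clears a quality bar in every domain. A sketch, where the prior names, domain list, and threshold are all hypothetical placeholders.

```python
from itertools import combinations
from typing import Callable

PRIORS = ["decompose_first", "surface_uncertainty", "adversarial_check"]  # hypothetical
DOMAINS = ["legal", "medical", "creative", "code"]

def minimal_general_core(
    evaluate: Callable[[tuple[str, ...], str], float],  # benchmark score for (priors, domain)
    threshold: float = 0.9,                             # arbitrary quality bar
) -> tuple[str, ...] | None:
    """Smallest prior subset scoring above threshold in every domain.

    Returns None if no subset transfers, i.e. every domain really is
    its own configuration problem."""
    for k in range(1, len(PRIORS) + 1):
        for subset in combinations(PRIORS, k):
            if all(evaluate(subset, d) >= threshold for d in DOMAINS):
                return subset
    return None
```

The exponential subset search is cheap at the scale of a small prior bank; the expensive part is the `evaluate` call, which is the whole research program.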

More questions will be added. If you're working on any of these problems, I want to hear from you.