A single-page slide essay on how human cognition and artificial intelligence converge around similar design problems: memory, attention, agency, embodied sensing, representation, and the search for coherent self-regulation.
This page is framed as a systems-design argument: high-stakes AI should be evaluated not only by capability, but by coherence, memory, legibility, context recovery, and the user's ability to predict what the system will do next.
Translate cognitive architecture into interface requirements: attention, working memory, context, feedback, and user trust calibration.
Map RAG, context windows, agents, multimodal sensing, and tool use onto practical product failure modes.
Use this as the conceptual bridge across Clinical Roleplay, Praxik, Bonita, HapTrek, and embodied interface work.
Tracing the convergent evolution of human cognition and artificial intelligence, from theory to body to mind.
This framing matters for product work because high-stakes AI does not only need answers. It needs a way to hold context, recover from uncertainty, and remain legible to the human using it.
AI builds intelligence through aggregation.
Humans develop intelligence through integration.
We approach this from both directions: top-down first, bottom-up second. The tension between those directions is where human-centered AI design becomes interesting.
How we first imagined intelligence, and why it made sense to start from abstraction.
When we first theorized intelligence in computer science and neuroscience, we worked downward from formal abstraction.
The assumption: intelligence = formal operations on symbols.
Model what you can measure. The top was legible.
The parallel is not perfect, but it is useful. It shows why current AI systems can feel intelligent in fragments while still struggling to maintain a coherent orientation across time.
Top-down processing is where cognition becomes coherent, but it assumes the lower layers are already running. The prefrontal cortex coordinates what the body has already perceived.
We model what we can measure. But the most fundamental intelligence, the kind that grounds meaning in a body, was always operating below the threshold of our theoretical models.
Top-down AI works well. But it inherits the same gap: it skips the ground floor.
What product design, HFID, and embodied robotics are forcing us to recover.
Gibson (1979): affordances emerge between organism and environment, not in either alone. Meaning is relational, not symbolic.
This is the layer AI has historically skipped.
And it's the layer embodied robotics is now being forced to build.
For interface design, this means a system's intelligence is not only inside its model. It is also in what the user can perceive, predict, and physically coordinate with.
Multimodal sensors form a nested layer: they communicate upward to the command layer while also assisting actuation. In humans, this is fused. In AI, it is still a seam.
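To make that seam concrete, here is a minimal sketch of a sensing layer that both reports upward to a command layer and assists actuation directly. Every module name and message shape is an illustrative assumption, not part of any project named above; the point is only that in an AI stack this fusion still has to be wired explicitly.

```python
# A minimal sketch of the seam: sensing, command, and actuation as separate modules
# that must be fused by hand. All names and message shapes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Reading:
    modality: str   # e.g. "camera", "force", "proximity"
    value: float

def sense() -> list[Reading]:
    """Stand-in for a multimodal sensor sweep."""
    return [Reading("camera", 0.82), Reading("force", 0.10), Reading("proximity", 0.35)]

def command_layer(readings: list[Reading]) -> str:
    """Upward path: readings inform deliberate planning."""
    return "approach" if max(r.value for r in readings) > 0.5 else "hold"

def actuate(plan: str, readings: list[Reading]) -> str:
    """Downward path: the same readings also shape low-level actuation directly."""
    force = next(r.value for r in readings if r.modality == "force")
    speed = 0.2 if force > 0.5 else 1.0   # ease off when contact force rises
    return f"{plan} at speed {speed}"

readings = sense()
plan = command_layer(readings)    # sensors communicating upward to the command layer
print(actuate(plan, readings))    # sensors assisting actuation in the same cycle
```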
Gestalt principles such as proximity, similarity, and continuity operate before conscious grouping. They are the first filter between sensation and perception.
A robot can detect a cup. A human knows how to reach for one before they've decided to.
The body makes preattention purposive, not just reactive.
Manuel Blum & Lenore Blum (2022): the Conscious Turing Machine (CTM), a computational theory of consciousness.
The CTM instantiates Global Workspace Theory. Three structural elements map directly onto the bottom-up / top-down framework:
Uptree = bottom-up sensory competition. Downtree = top-down command that tells the "muscles" how to respond.
The CTM proposes Brainish, a pre-linguistic representational language that is never directly translated to behavior, only processed internally.
Brainish is not English. It is the representational currency that integrates all three, and it has no direct AI equivalent yet.
This is why language models can be fluent while still missing embodied self-location. They can describe orientation without inhabiting the loop that makes orientation felt.
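To make the Uptree / Downtree / Brainish structure concrete, here is a toy sketch of a Global-Workspace-style cycle. It is not Blum & Blum's formal model: the processor names, chunk fields, and scoring rule are illustrative assumptions, and the `gist` string is only a stand-in for a Brainish payload, which in the CTM is never surface language.

```python
# Toy Global-Workspace-style cycle (assumed structure, not Blum & Blum's formal CTM).
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str    # which processor produced the chunk
    gist: str      # stand-in for a Brainish payload: internal, pre-linguistic
    weight: float  # self-assessed importance, drives the Up-Tree competition

def up_tree(chunks):
    """Bottom-up: pairwise competition until one chunk wins the workspace."""
    pool = list(chunks)
    while len(pool) > 1:
        a, b = pool.pop(), pool.pop()
        pool.insert(0, a if a.weight >= b.weight else b)
    return pool[0]

def down_tree(winner, processors):
    """Top-down: broadcast the winning chunk to every processor, including actuators."""
    return {name: react(winner) for name, react in processors.items()}

# Illustrative processors; in the CTM these are many independent long-term memory processors.
processors = {
    "vision":  lambda c: f"re-weight features near '{c.gist}'",
    "motor":   lambda c: f"prepare a response to '{c.gist}'",
    "planner": lambda c: f"update goals given '{c.gist}'",
}

chunks = [
    Chunk("vision", "edge-of-table", 0.4),
    Chunk("touch", "cup-handle-contact", 0.9),
    Chunk("planner", "reach-for-cup", 0.6),
]

winner = up_tree(chunks)                                       # bottom-up sensory competition
for name, response in down_tree(winner, processors).items():  # top-down broadcast
    print(f"{name}: {response}")
```

The loop reproduces the structure, not the inhabiting: the competition and broadcast run without anything being felt from the inside, which is the gap named above.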
Michael Levin's proposal, and the question of whether mind exceeds the system.
Michael Levin (Tufts): individual minds may not be generated by biological systems; they may be discovered by them.
If mind precedes instantiation, then AI is not creating new kinds of mind. It may be accessing the same field through a different substrate.
Huh et al. (2024): independently trained AI models converge on similar internal representations. Different architectures and different data still produce similar geometric structures in latent space.
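One way to make "similar geometric structures in latent space" measurable is to compare how two models embed the same inputs. The sketch below uses linear CKA, a widely used representational-similarity metric, rather than the alignment measure from the paper itself; the embedding widths and synthetic data are assumptions chosen only to show that two differently shaped latent spaces built on a shared structure score as highly aligned.

```python
# Illustrative only: linear CKA, one standard way to quantify shared latent geometry.
# The model widths and synthetic data below are assumptions for the demo.
import numpy as np

def linear_cka(X, Y):
    """Centered kernel alignment between two representation matrices.

    X: (n_samples, d1) embeddings from model A
    Y: (n_samples, d2) embeddings from model B (same inputs, same row order)
    Returns a similarity in [0, 1]; higher means more shared geometry.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Two hypothetical models embed the same 500 inputs into different widths,
# but both embeddings are built on a shared 16-dimensional structure.
rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 16))
emb_a = shared @ rng.normal(size=(16, 64)) + 0.1 * rng.normal(size=(500, 64))
emb_b = shared @ rng.normal(size=(16, 48)) + 0.1 * rng.normal(size=(500, 48))
print(f"linear CKA: {linear_cka(emb_a, emb_b):.2f}")  # close to 1.0 despite different widths
```

A score near 1 means the two spaces relate their inputs in nearly the same way, which is what "converging on similar representations" cashes out to operationally.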
The asynchronous conversation across neuroscience, AI, systems biology, and philosophy of mind is slowly converging on the same territory from different directions.
But both may be approximating something that exists prior to either: a cognitive structure that intelligence discovers rather than invents. The question is no longer whether AI will become human-like. It is whether both are finding the same territory from different directions.
Original slide sequence preserved from the Practice Intuition intelligence convergence deck; reformatted here as a single-page portfolio essay.