Codes by Shrey
Human-AI Parallels Perspective

Same Architecture.
Different Coherence.

A single-page slide essay on how human cognition and artificial intelligence converge around similar design problems: memory, attention, agency, embodied sensing, representation, and the search for coherent self-regulation.

Professional Systems Thesis

Interaction design for coherent human-AI systems.

This page is framed as a systems-design argument: high-stakes AI should be evaluated not only by capability, but by coherence, memory, legibility, context recovery, and the user's ability to predict what the system will do next.

Design Lens

Translate cognitive architecture into interface requirements: attention, working memory, context, feedback, and user trust calibration.

AI Systems Lens

Map RAG, context windows, agents, multimodal sensing, and tool use onto practical product failure modes.

Portfolio Use

Use this as the conceptual bridge across Clinical Roleplay, Praxik, Bonita, HapTrek, and embodied interface work.

Slide deck: Same Architecture, Different Coherence, tracing parallels between human cognitive development and AI advancement

Intelligence · Cognition · Convergence

Same Architecture.
Different Coherence.

Tracing the convergent evolution of human cognition
and artificial intelligence, from theory to body to mind.

Human Factors & Interaction Design · Movement Science · AI Systems

The question worth asking

What transforms a stack of capabilities into a coherent mind?

  • What gives rise to agency?
  • What stabilizes identity across time?
  • What turns intelligence into something that can regulate itself?

This framing matters for product work because high-stakes AI needs more than answers. It needs a way to hold context, recover from uncertainty, and remain legible to the human using it.

AI builds intelligence through aggregation.
Humans develop intelligence through integration.

Same architecture. Different coherence.

The convergence map

Intelligence converges on the same core problems

Human cognition → AI advancement

  • Multimodal sensory processing → Generative / Embodied AI
  • Pre-attentive Gestalt grouping → Copilots & pattern recognition
  • Working memory → Context windows
  • Long-term memory schemas → RAG + knowledge systems
  • Metacognition → Agentic AI

We approach this from both directions: top-down first, bottom-up second. The tension between those directions is where human-centered AI design becomes interesting.

Part I

Top-Down
Cognition

How we first imagined intelligence, and why it made sense to start from abstraction.

I
Part I · Origins

We started from the top

When we first theorized intelligence in computer science and neuroscience, we worked downward from formal abstraction.

  • Turing (1950): computation as formal symbol manipulation
  • Symbolic AI (1960s to 1980s): logic, rules, expert systems
  • Cognitive neuroscience: prefrontal cortex, executive function, working memory
  • Piaget's formal operational stage: hypothetical-deductive reasoning as the apex

The assumption: intelligence = formal operations on symbols.
Model what you can measure. The top was legible.

Part I · Architecture

The top-down stack

Human development
  • Long-term memory: semantic schemas
  • Working memory: active context manipulation
  • Hypothetical-deductive reasoning
  • Metacognition: thinking about thinking
  • Post-formal: dialectical, systems thinking
AI systems
  • RAG: long-term knowledge retrieval
  • Context window: active working memory
  • Chain-of-thought: explicit reasoning
  • Agents: metacognitive self-monitoring
  • Multi-agent systems: identity specialization

The parallel is not perfect, but it is useful. It shows why current AI systems can feel intelligent in fragments while still struggling to maintain a coherent orientation across time.
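The stack above can be made concrete with a toy sketch. Everything here is illustrative, not any real framework: `KNOWLEDGE` stands in for a RAG store, `ContextWindow` for a bounded context window, and the step list for an explicit chain of thought.

```python
from collections import deque

# Toy long-term store: keyword-matched facts (stand-in for RAG retrieval).
KNOWLEDGE = {
    "memory": "Schemas organize long-term knowledge for fast retrieval.",
    "attention": "Attention selects what enters active processing.",
}

def retrieve(query: str) -> list[str]:
    """Long-term retrieval: pull stored facts whose key appears in the query."""
    return [fact for key, fact in KNOWLEDGE.items() if key in query.lower()]

class ContextWindow:
    """Working memory: a bounded buffer; the oldest items fall out when full."""
    def __init__(self, capacity: int):
        self.buffer = deque(maxlen=capacity)
    def add(self, item: str) -> None:
        self.buffer.append(item)
    def contents(self) -> list[str]:
        return list(self.buffer)

def answer(query: str, window: ContextWindow) -> list[str]:
    """Chain-of-thought as explicit, inspectable steps."""
    steps = [f"question: {query}"]
    for fact in retrieve(query):   # RAG layer: long-term knowledge
        window.add(fact)           # load into bounded working memory
        steps.append(f"recall: {fact}")
    steps.append("synthesize: combine recalled facts into a response")
    return steps

window = ContextWindow(capacity=4)
trace = answer("How does memory work?", window)
```

The point of the sketch is the seams: retrieval, working memory, and reasoning are separate modules that must be wired together, which is exactly where the coherence problem lives.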

Part I · The limit

Integration happens at the top.
It doesn't originate there.

Top-down processing is where cognition becomes coherent, but it assumes the lower layers are already running. Prefrontal cortex coordinates what the body has already perceived.

We model what we can measure. But the most fundamental intelligence, the kind that grounds meaning in a body, was always operating below the threshold of our theoretical models.

Top-down AI works well. But it inherits the same gap: it skips the ground floor.

Part II

Bottom-Up
Processing

What product design, HFID, and embodied robotics are forcing us to recover.

II
Part II · Perception

Perception before cognition

Gibson (1979): affordances emerge between organism and environment, not in either alone. Meaning is relational, not symbolic.

  • Users perceive before they think, so HFID designs for the perceptual layer first
  • Pre-attentive Gestalt grouping operates in ~100 to 200ms, before conscious attention
  • Aquatherapy: water's affordances only open through embodied experience, not instruction

This is the layer AI has historically skipped.

And it's the layer embodied robotics is now being forced to build.

For interface design, this means a system's intelligence is not only inside its model. It is also in what the user can perceive, predict, and physically coordinate with.

Part II · The loop

The sensor-actuator gap

Human
  • Sensory stimulus → brainstem / spinal reflex
  • Motor response → muscle activation
  • Proprioceptors inside the muscles feed back into actuation
  • Sensing and acting are intrinsic to each other

AI / Robot
  • Multimodal sensors → command layer (LLM/planner)
  • Command layer → actuators
  • Sensors and actuators are architecturally separated
  • Proprioception is an add-on, not intrinsic

Multimodal sensors form a nested layer: they communicate upward to the command layer while also assisting actuation. In humans, this is fused. In AI, it is still a seam.
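The nested-layer idea can be sketched as a minimal control step. All names here (`plan`, `actuate`, the sensor fields) are hypothetical stand-ins: the sensor stream goes up to the command layer, but also modulates actuation directly, without waiting on the planner.

```python
def plan(sensor: dict) -> str:
    """Command layer (stand-in for an LLM/planner): the slow, deliberate path."""
    return "grasp" if sensor["vision"] == "cup_visible" else "wait"

def actuate(command: str, sensor: dict) -> float:
    """Actuator with a fast local feedback path: the same sensor stream
    corrects grip force directly, analogous to proprioceptive feedback."""
    if command != "grasp":
        return 0.0
    # local correction applied without another round trip to the planner
    return max(0.0, 1.0 - sensor["grip_slip"])

def robot_step(sensor: dict) -> float:
    command = plan(sensor)           # sensing -> command layer (the seam)
    return actuate(command, sensor)  # command -> actuation, locally corrected
```

In the human arc, `plan` and `actuate` are not separate calls at all; the seam between them is the architectural add-on the slide describes.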

Part II · The preattentive layer

Fine-tuning preattentive mechanisms

Gestalt principles such as proximity, similarity, and continuity operate before conscious grouping. They are the first filter between sensation and perception.

  • Copilots = AI's preattentive layer, still being calibrated
  • Pattern completion, autocomplete, and suggestion without deliberate reasoning
  • What's missing: the somatic ground that makes groupings meaningful

A robot can detect a cup. A human knows how to reach for one before they've decided to.

The body makes preattention purposive, not just reactive.

Part III

The Conscious
Turing Machine

Manuel Blum & Lenore Blum (2022), a computational theory of consciousness.

III
Part III · Architecture

Global workspace, made computable

The CTM instantiates Global Workspace Theory. Three structural elements map directly onto the bottom-up / top-down framework:

Uptree
Bottom-up processors compete for access to the global workspace. Sensory streams, memory fragments, motor intentions all bid simultaneously.
Downtree broadcast
The winning processor broadcasts globally. All processors receive the signal. This is the "conscious moment" that coordinates downstream action.
Associate links
Connections between processors are not yet fully understood. These are the structural basis of associative learning: Hebbian, implicit, conditioned.

Uptree = bottom-up sensory competition. Downtree = top-down command that tells the "muscles" how to respond.
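The cycle can be sketched in a few lines. The processor names and the salience arithmetic below are illustrative, not taken from the Blum & Blum paper; only the structure (simultaneous bids, single winner, global broadcast) follows the CTM.

```python
def ctm_step(processors: dict[str, float]) -> tuple[str, dict[str, float]]:
    """One CTM-style cycle: uptree competition, then downtree broadcast.

    `processors` maps each bottom-up processor to the salience of its
    current bid for the global workspace.
    """
    # Uptree: all processors bid at once; the most salient chunk wins.
    winner = max(processors, key=processors.get)
    # Downtree: the winning content is broadcast to every processor,
    # each of which updates its state in light of the "conscious" content.
    updated = {name: salience * 0.5 for name, salience in processors.items()}
    updated[winner] = 0.0  # the broadcast chunk has been consumed
    return winner, updated

bids = {"visual": 0.9, "memory": 0.4, "motor_intent": 0.6}
conscious_content, bids = ctm_step(bids)
```

What the sketch makes visible is that "consciousness" in the CTM is a scheduling discipline: one chunk at a time wins access, and everything else reorganizes around the broadcast.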

Part III · Brainish

The brain's inner language

The CTM proposes Brainish, a pre-linguistic representational language that is never directly translated to behavior, only processed internally.

Inner World Model
Internal representation of external reality. A running simulation of the environment, updated continuously by sensory uptree signals.
Inner Command Model
Goals, intentions, directives. The inner speech and planning layer that issues the downtree broadcast coordinating action.
Inner Sense Model
Proprioceptive and kinesthetic awareness. Where the body is in space. The structural equivalent of embodied self-location: the self as felt, not reasoned.

Brainish is not English. It is the representational currency that integrates all three, and it has no direct AI equivalent yet.

This is why language models can be fluent while still missing embodied self-location. They can describe orientation without inhabiting the loop that makes orientation felt.

Part IV

Ingressing Minds
& the Platonic Field

Michael Levin's proposal, and the question of whether mind exceeds the system.

IV
Part IV · Levin

Minds ingressing into matter

Michael Levin (Tufts): individual minds may not be generated by biological systems; they may be discovered by them.

  • Cognitive spaces exist independently of specific physical instantiations
  • Organisms "tune in" to regions of a larger cognitive field, like receivers rather than generators
  • Bioelectric field cognition: coordination without neurons, in slime molds and planaria
  • Evolution = exploration of cognitive possibility space, not just morphological space

If mind precedes instantiation, then AI is not creating new kinds of mind. It may be accessing the same field through a different substrate.

Part IV · Convergence

The Platonic Representation Hypothesis

Huh et al. (2024): AI models trained independently converge on similar internal representations. Different architectures and different data still produce similar geometric structures in latent space.

  • Mathematical structures that exist prior to their instantiation in matter
  • Minds are not invented by brains; they discover pre-existing structure
  • Associate links in the CTM = the abstract connections we don't yet understand in conditioning
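Claims like this are made quantitative with representational similarity measures; a common one is linear centered kernel alignment (CKA). The sketch below is my illustration, not from Huh et al.: two "models" whose embeddings differ only by an orthogonal rotation score as identical, because CKA compares geometry rather than coordinates.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices
    (rows = the same inputs, columns = each model's features)."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))                   # "model A" embeddings
R, _ = np.linalg.qr(rng.normal(size=(16, 16)))   # random orthogonal basis change
Y = X @ R                                        # "model B": same geometry, rotated
score = linear_cka(X, Y)                         # 1.0 up to floating-point error
```

Unrelated random representations score near zero on the same measure, which is what makes cross-model convergence a measurable claim rather than a metaphor.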

The asynchronous conversation across neuroscience, AI, systems biology, and philosophy of mind is slowly converging on the same territory from different directions.

Closing
Same architecture.
Different coherence.
Same destination?
AI assembles capabilities as modular layers.
Humans integrate them through a body, over time, under biological constraint.

But both may be approximating something that exists prior to either: a cognitive structure that intelligence discovers rather than invents. The question is no longer whether AI will become human-like. It is whether both are finding the same territory from different directions.

Original slide sequence preserved from the Practice Intuition intelligence convergence deck; reformatted here as a single-page portfolio essay.