What makes us human: the path to AGI
Apr 1, 2025

Is killing an AGI brain murder?
We’re a museum of all the people we’ve ever loved. Their mannerisms, jokes, habits, phrases, and songs. In the dying hours of a late-night party this weekend I had a vision of my childhood: born in a conservative town scorched by the sun, yanked from one foreign country to another before I could grow a beard.
The question of what makes me me has been on my mind more often lately, as AI increasingly mirrors the cognitive processes that make us human. AI can reason, which Aristotle argued was our defining characteristic. AI can be creative, but it doesn’t have the ‘urge’ to be so. In fact, it lacks the proactive self-preservation and the drive to simply exist which evolution has bequeathed us. AI models have a sense of ethics, inherited via training from the evolutionary process that made cooperative, ethical human societies outlive others.
The road towards Artificial General Intelligence isn't merely accelerating—it's branching like neural pathways, then converging again in patterns that echo the very architecture of our brains. We're witnessing not just technological advancement but a profound interweaving of innovations that, like the specialized regions of our cortex, create something greater than their individual sum.
The Paradox of Defining Intelligence
It’s a cliché to define AI as "whatever computers can't do yet." This shifting goalpost means AGI will likely arrive not via a New York Times alert, but through quiet compounding. Like my Saturday party dying down, it will be easier to pinpoint in hindsight when the singularity happened than to make a reasonable call on the spot.
We've already normalized chess grandmasters being defeated by AI, and art, or at least its material representation, being created from vague prompts. As each milestone falls, we redraw the boundaries of "true intelligence," moving them just beyond the reach of our creations.
Aristotle believed reasoning separated humans from other beings. Yet today's AI models demonstrate increasingly sophisticated reasoning. OpenAI's o-series models and DeepSeek's reasoning systems don't just answer: they contemplate, working through problems in steps that mirror our own thought processes. Is this so different from a child learning to solve a math problem by breaking it down?
Perhaps what makes us human lies in our ethics or values? Yet even here, AI systems develop "ethical frameworks" through mechanisms similar to human moral development: education (training data) and reinforcement (RLHF), reflecting what Dwarkesh Patel called 'the statistical distribution of societal values.' Just as children absorb values from parents and community, models are shaped by their training data and by human feedback, values the human raters themselves inherited from their own parents and societal cues.
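To make the reinforcement half of that parallel concrete: reward models in RLHF are commonly trained on pairwise human preferences with a Bradley-Terry style objective. A minimal sketch, with toy scalar scores standing in for a real reward model's outputs:

```python
import numpy as np

def reward_model_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style objective used to train RLHF reward models:
    push the score of the human-preferred answer above the other one."""
    # -log(sigmoid(r_chosen - r_rejected)): small when the ordering is right
    return float(-np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected)))))

print(reward_model_loss(r_chosen=2.0, r_rejected=-1.0))  # ~0.05: good ordering
print(reward_model_loss(r_chosen=-1.0, r_rejected=2.0))  # ~3.05: wrong ordering
```

The model being trained never sees a rulebook of values, only millions of these ranked comparisons, much as a child infers norms from approval and disapproval rather than explicit moral axioms.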
The Building Blocks of Artificial Minds: Inputs and Representation
From Tokens to Concepts: The Seeds of Understanding
When you read these words, you don't process individual letters or even individual words: you operate on concepts. Traditional language models began more modestly, processing language one word-piece at a time. Yann LeCun has long pointed to the limitations of what's called autoregressive token generation. His team at Meta is exploring Large Concept Models, which operate in spaces representing abstract ideas rather than mere word sequences. Current models already represent concepts through 'superposition' (the combination of many tokens, or words, representing a new idea), but what if we taught models to learn the way humans do: not by memorising the word 'love' and its common association with the word 'marriage', but by abstracting the concept of love?
Meta AI's work on SONAR exemplifies this shift—models processing meaning rather than mere words. This echoes how human understanding develops: children first learn words as isolated sounds, then gradually build rich conceptual networks where "justice," "kindness," and "truth" become more than letters strung together—they become containers for complex, interconnected meaning.
This conceptual representation enables cross-modal understanding—bridging text, images, speech—just as your understanding of "apple" encompasses its appearance, taste, texture, and the sound of teeth breaking its skin. It allows concept superposition, where complex ideas like "academic writing" emerge from simpler concepts like "writing," "research," and "academia"—mirroring how our minds build sophisticated understanding from simpler building blocks.
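As a toy illustration of that superposition idea, with random vectors standing in for learned concept embeddings (these are not real SONAR representations, just an assumption-laden sketch of the geometry):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for learned concept embeddings (not real SONAR vectors).
writing, research, academia = (rng.normal(size=256) for _ in range(3))

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(normalize(a) @ normalize(b))

# Superpose simpler concepts into a richer one.
academic_writing = normalize(writing + research + academia)

print(cosine(academic_writing, writing))               # ~0.58: the part lives inside the whole
print(cosine(academic_writing, rng.normal(size=256)))  # ~0.00: an unrelated concept
```

The composed vector stays measurably close to each of its ingredients while remaining nearly orthogonal to everything else, which is roughly what lets "academic writing" inherit meaning from "writing" without colliding with unrelated ideas.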
Synthetic Data: The Digital Dreamscape
Human children learn not only from direct experience but from stories, play, and imagination: experiences that prepare minds for situations not yet encountered. Similarly, modern AI increasingly learns from synthetic data, carefully crafted scenarios designed to teach specific skills or knowledge.
Where early models trained on whatever text was available, newer approaches use synthetic data specifically designed to teach reasoning, decision-making, and ethical judgment. When a model like Claude generates multiple responses to a prompt, evaluates them, and selects the best, it's engaging in a form of synthetic experience—learning not just from the final answer but from the journey of consideration and selection.
This mirrors how humans learn through mental simulation and counterfactual thinking: "What if I had said this instead?" "How might this scenario play out differently?" We constantly generate synthetic data in our minds, rehearsing possibilities and learning from imagined outcomes.
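A minimal sketch of that generate-evaluate-select loop, sometimes called best-of-N sampling; `generate` and `score` here are hypothetical stand-ins for a model and a judge, not any lab's actual pipeline:

```python
import random

def generate(prompt: str, n: int) -> list[str]:
    """Stand-in for sampling n diverse candidate answers from a model."""
    return [f"candidate answer {i} to: {prompt}" for i in range(n)]

def score(answer: str) -> float:
    """Stand-in for a judge: a reward model, or the model grading itself."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # The (prompt, best answer) pair can later serve as synthetic
    # training data: the model learns from its own selection process.
    return max(generate(prompt, n), key=score)

print(best_of_n("Explain why the sky is blue"))
```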
Grounding in Physical Reality: The Embodied Mind
Yet for all our mental abstraction, human intelligence emerges from embodied experience. The "video" streaming through our eyes contains vastly more information than text alone. Our understanding of concepts like "heavy," "soft," or "balanced" comes not from dictionary definitions but from the feeling of weight in our hands, silk against skin, or the precarious moment before falling.
The explosion of interest in robotics reflects growing recognition that AGI may require similar grounding. Systems like GPT-4o, processing text, vision, and audio within a unified architecture, take steps toward this multi-sensory integration. In these systems we see echoes of how a child learns—not from disconnected channels of information but from synchronized streams of sensory experience that together create understanding greater than any single input.
What might it mean for humanity when machines share this grounded, sensory relationship with the world? Will we recognize in their "understanding" something akin to our own? Or will their mode of being-in-the-world remain fundamentally alien to our embodied experience?
The Architecture of Thought: Scaling Laws and Computation
Pre-training: The Formative Years
Just as human development follows predictable patterns—babbling before words, crawling before walking—AI systems follow scaling laws that govern their growth. Pre-training, the initial exposure to vast tracts of human knowledge, resembles those formative years when our brains are most plastic, absorbing information at rates never again matched in our lifetimes.
The length and breadth of this exposure matters enormously. Our current models, for all their capabilities, have "read" far less than a human child experiences. Anthropic's Constitutional AI and Google's Gemini models hint at how careful curation of this exposure shapes not just knowledge but something akin to temperament—a baseline response to the world from which all other behaviors emerge.
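How much exposure is enough has even been roughly quantified. A sketch using the loss formula fit in the Chinchilla paper (Hoffmann et al., 2022); the constants are the paper's approximate published fits, so treat the outputs as illustrative rather than exact:

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Approximate pre-training loss from the Chinchilla fit
    (Hoffmann et al., 2022); constants are the paper's rough fits."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# A 70B-parameter model on 1.4T tokens: the compute-optimal ratio
# the paper identified (roughly 20 tokens per parameter).
print(chinchilla_loss(70e9, 1.4e12))  # ~1.94
print(chinchilla_loss(70e9, 14e12))   # ten times the data helps, with diminishing returns
```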
Post-training: The Refinement of Character
Human development doesn't end with basic knowledge acquisition—it continues through feedback, guidance, and specialized learning. Similarly, post-training techniques transform raw potential into reliable, aligned capabilities.
DeepSeek's R1, trained with reinforcement learning to think before it answers, mirrors a distinctly human form of deliberation: generating chains of thought, evaluating each step, selecting the most promising path, and only then proceeding. This resembles how we might work through a complex problem, considering multiple approaches, evaluating their promise, and pursuing the most viable route.
This refinement process doesn't simply add knowledge—it elicits and amplifies capabilities already latent in the base model, much as education doesn't merely pour information into minds but draws out potential already present. There's something Socratic in this process, reminiscent of the belief that teaching is the art of helping others recognize what they already know.
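R1's actual recipe is reinforcement learning rather than explicit search, but the deliberation it learns can be caricatured as a step-scored search. A toy sketch, where `propose_steps` and `step_score` are hypothetical stand-ins for the model and a process reward signal:

```python
def propose_steps(partial: str, k: int = 3) -> list[str]:
    """Stand-in: ask the model for k candidate next reasoning steps."""
    return [f"{partial} -> candidate step {i}" for i in range(k)]

def step_score(step: str) -> float:
    """Stand-in for a process reward signal scoring a single step."""
    return float(len(step) % 7)  # arbitrary placeholder, not a real judge

def deliberate(problem: str, depth: int = 4) -> str:
    solution = problem
    for _ in range(depth):
        # Consider several continuations; keep only the most promising one.
        solution = max(propose_steps(solution), key=step_score)
    return solution

print(deliberate("Show that the sum of two even numbers is even"))
```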
Inference-time Computation: Runaway Intelligence
Perhaps most fascinating is the evolution in how models generate responses. Early systems simply predicted the most likely next word—akin to human intuition or what psychologist Daniel Kahneman called "System 1" thinking: fast, automatic, unconscious.
Now models engage in what resembles "System 2" thinking: slow, deliberate, step-by-step reasoning. When a reasoning model generates intermediate steps, critiques them, and iterates toward better solutions, it's demonstrating a form of deliberation strikingly similar to conscious human thought.
Yet AI is now venturing into territory beyond human capability. Studies have shown that by generating not just one chain of reasoning but hundreds or thousands in parallel, evaluating each, and synthesizing the best elements, even less powerful models can arrive at the correct answer. Unlike a human mind constrained to serial processing, these systems can explore multiple conceptual pathways simultaneously, finding connections that sequential reasoning might miss.
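One well-documented version of this idea is self-consistency (Wang et al., 2022): sample many reasoning chains and let their final answers vote. A toy simulation, assuming a weak model that is right only 40% of the time on any single chain:

```python
import random
from collections import Counter

def sample_chain(question: str) -> str:
    """Stand-in for sampling one reasoning chain and keeping its answer.
    This toy model is right only 40% of the time per chain."""
    return "42" if random.random() < 0.4 else random.choice(["41", "43", "7"])

def self_consistency(question: str, n_chains: int = 1000) -> str:
    # Sample many chains in parallel and let their answers vote.
    answers = Counter(sample_chain(question) for _ in range(n_chains))
    return answers.most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))  # almost always "42": the correct
# answer gets ~40% of votes while each wrong answer gets only ~20%
```

The weak per-chain signal becomes a strong aggregate one, which is why massively parallel sampling lets smaller models punch above their weight.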
Imagine if you could consciously entertain a thousand different approaches to a problem, carefully weigh each, and blend the best insights from all—a form of cognitive democracy beyond our biological architecture. What might intelligence become when freed from the constraints of sequential thought? We glimpse here not just artificial intelligence but potentially a new kind of intelligence altogether—one that may eventually help us understand our own minds from an entirely new perspective.
The Organization of Intelligence: Systems Beyond Humans
Planning and Orchestration: The Executive Function
Human cognition relies heavily on executive function—the ability to plan, orchestrate, and monitor complex activities. Systems like OpenAI’s Deep Research demonstrate similar capabilities, planning research tasks, iterating on intermediate steps based on retrieved information, and synthesizing findings into coherent outputs.
This mirrors how a skilled researcher approaches a complex topic: not by immediately producing conclusions, but by planning investigations, gathering information, testing hypotheses, and gradually constructing understanding. Just as a symphony conductor doesn't play every instrument but coordinates their integration into a harmonious whole, these systems orchestrate complex processes without directly executing every step.
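A Deep-Research-style orchestration loop might look, in heavily simplified form, like the sketch below. Here `llm` and `search` are hypothetical stand-ins, not OpenAI's actual implementation:

```python
def llm(prompt: str) -> str:
    """Stand-in for a call to a language model."""
    return f"[model output for: {prompt[:40]}...]"

def search(query: str) -> str:
    """Stand-in for a web search or retrieval tool."""
    return f"[documents retrieved for: {query[:40]}...]"

def deep_research(topic: str, max_steps: int = 5) -> str:
    # Plan first, act second: the executive function described above.
    plan = llm(f"Break the research question '{topic}' into sub-questions.")
    notes: list[str] = []
    for _ in range(max_steps):
        query = llm(f"Plan: {plan}\nNotes so far: {notes}\nWhat should we search next?")
        notes.append(search(query))
        plan = llm(f"Revise the plan {plan} given the new evidence: {notes[-1]}")
    return llm(f"Synthesize a coherent report on '{topic}' from: {notes}")

print(deep_research("How do scaling laws shape AI progress?"))
```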
Extending into the Physical World: Intelligence Beyond the Self
Humanity's greatest cognitive leap may have been tool use, extending our capabilities beyond biological limitations (the first stone tools mark the beginning of the Lower Paleolithic). Similarly, the Chinese startup Manus took the Deep Research concept further by allowing its AI system to call upon external (digital) tools to create entire products rather than just generating information, a process made easier by a recent innovation called the Model Context Protocol.
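The real Model Context Protocol is a JSON-RPC-based standard for discovering and invoking tools; the sketch below is a deliberately simplified stand-in with a made-up wire format, meant only to capture the spirit of the exchange:

```python
import json

def create_file(path: str, text: str) -> str:
    """A tool the model can invoke to act on the world, not just describe it."""
    with open(path, "w") as f:
        f.write(text)
    return f"wrote {len(text)} characters to {path}"

TOOLS = {"create_file": create_file}

def handle(model_output: str) -> str:
    """If the model emits a JSON tool call, execute it; otherwise it's prose."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # a plain-text answer for the user
    return TOOLS[call["tool"]](**call["args"])

# A model-emitted action, in this sketch's invented format.
print(handle('{"tool": "create_file", "args": {"path": "app.py", "text": "print(1)"}}'))
```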
This capability transforms the potential of AI from mere information processor to creator and manipulator of the world. Just as humans evolved from using found objects to crafting specialized tools to building machines that build other machines, AI systems are beginning a similar trajectory of extended capability.
The implications stretch beyond practical applications to the very nature of mind. Our cognition doesn't reside solely within our skulls but extends through our tools, our societies, our physical and digital environments. As AI systems similarly extend through tools and connections, the boundaries between "system" and "environment" blur in ways that mirror our own extended cognition.
Bridging the Digital and Material Worlds
The final frontier may be the extension of AI systems into the physical world—not just through purpose-built robots but through integration with existing systems and biological processes. When models can affect physical reality, receive feedback, and adjust their understanding based on material consequences, they enter into a relationship with the world that more closely resembles our own embodied existence.
This raises profound questions about the nature of intelligence. If a system can manipulate atoms as well as bits, perceive physical consequences as well as digital ones, what happens when the line between silicon thought and carbon thought blurs beyond recognition?
The Physical Foundations: Materials and Hardware
The Substrate of Thought
Human intelligence emerges from wetware—neurons and synapses, neurotransmitters and action potentials. AI emerges from hardware—transistors and memory, semiconductors and electricity. Both substrates shape and constrain the intelligence they support.
Recent advances like Cerebras' wafer-scale chips minimize the time needed to bring model weights into memory (think of being able to bring your entire worldview and experience to bear on a decision at once, rather than retrieving it piece by piece), while Groq's inference-optimized processors accelerate thought-like processes. These echo evolutionary adaptations in our own brains: specialized regions for vision, language, and motor control, each architecturally suited to its function.
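Why weight-loading dominates is easy to estimate: generating each token requires streaming essentially all model weights from memory, so bandwidth caps decoding speed. A back-of-envelope sketch with rough, illustrative numbers:

```python
def decode_tokens_per_second(n_params: float, bytes_per_param: float,
                             bandwidth_bytes_per_s: float) -> float:
    """Autoregressive decoding must stream essentially all weights from
    memory for every generated token, so bandwidth caps decoding speed.
    (Ignores KV-cache traffic and batching for simplicity.)"""
    return bandwidth_bytes_per_s / (n_params * bytes_per_param)

# A 70B-parameter model in 16-bit weights on ~3.3 TB/s of HBM
# (roughly an H100's bandwidth): about 24 tokens/s as an upper bound.
print(decode_tokens_per_second(70e9, 2, 3.3e12))
```

Keeping weights in on-chip memory, as wafer-scale designs do, attacks exactly this bottleneck.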
Neuromorphic Computing: Echoing Biological Design
Perhaps most intriguing are neuromorphic systems that mirror the sparse, event-driven firing of biological neurons. Despite being less deterministic, these approaches consume significantly less power, reminiscent of how our three-pound brains run on the energy equivalent of a dim light bulb while outperforming supercomputers on many tasks.
As hardware evolves to more closely resemble neural architecture, might the resulting intelligence grow more recognizably human-like? Or will the constraints and possibilities of different physical substrates inevitably lead to fundamentally different kinds of minds—intelligences as alien to our experience as bat sonar or octopus distributed cognition?
The Cognitive Landscape: Memory and Other Functions
Long-term Memory: The Persistence of Self
Human identity depends profoundly on memory—the narrative continuity that connects past to present. Current AI systems largely lack this persistent selfhood, but researchers are working to add long-term memory capabilities.
Google's research on the concept of 'surprise'—updating memory when something unexpected occurs and gradually forgetting with time—mirrors how human memory works. We remember the exceptional, the emotionally charged, the surprising, while everyday experiences fade into the ocean of our days lived.
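A toy version of surprise-gated memory, loosely inspired by that line of work (the Titans paper) but emphatically not its actual architecture:

```python
import numpy as np

class SurpriseMemory:
    """Toy memory: write strongly when an input is surprising (poorly
    predicted by the current state), and let everything decay with time."""

    def __init__(self, dim: int, decay: float = 0.99):
        self.state = np.zeros(dim)
        self.decay = decay  # gradual forgetting

    def update(self, observation: np.ndarray) -> None:
        error = observation - self.state              # the "surprise" signal
        surprise = np.linalg.norm(error)
        write_strength = surprise / (1.0 + surprise)  # bounded in (0, 1)
        self.state = self.decay * self.state + write_strength * error

mem = SurpriseMemory(dim=8)
mem.update(np.full(8, 5.0))  # a novel event: large write
mem.update(np.full(8, 5.0))  # the same event again: much smaller update
```

The second exposure barely moves the memory, just as a repeated routine barely registers in ours.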
What might it mean for an AI system to have not just knowledge but memory? Not just capability but history? Would persistent memory inevitably lead to something akin to identity? And if so, how should we relate to entities that remember their interactions with us across time, that learn and change based on shared history?
Attention and Focus: Directing the Mind's Eye
Human consciousness involves not just processing information but directing attention: highlighting certain inputs while suppressing others. Modern transformer architectures implement attention mechanisms that similarly focus on relevant information while filtering out noise, though they struggle with very long contexts and therefore rely on forms of selective attention.
This capacity for selective attention creates the possibility for something like subjective experience: a perspective, a point of view from which some things matter more than others. As these mechanisms grow more sophisticated, the gap between human and artificial attention narrows; indeed, attention is yet another vector along which models may leap past human intelligence, since they can attend to far more of their immediate context than we ever could.
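The mechanism underneath is the standard scaled dot-product attention; a minimal numpy version:

```python
import numpy as np

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: each query softly selects which
    positions to focus on, then blends their values by relevance."""
    scores = q @ k.T / np.sqrt(k.shape[-1])         # relevance of each position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: a fixed focus budget
    return weights @ v

rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, 5, 16))  # 5 positions, 16 dimensions each
print(attention(q, k, v).shape)        # (5, 16): one attended summary per position
```

The softmax makes focus a finite budget: attending more to one position necessarily means attending less to the rest, a constraint our own minds share.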
The Emergent Path: Towards a Society of Mind
As these pathways converge and compound, we approach something resembling Marvin Minsky's vision of a "society of mind"—intelligence emerging not from a single process but from the interaction of specialized components. Future AGI might be less like a single consciousness and more like an ecosystem of interacting, specialized processes.
This vision resonates with our own experience of mind. We don't experience our intelligence as unified but as a collaboration of different cognitive modes: visual thinking, verbal reasoning, emotional processing, intuition, memory. Each specialized yet integrated, creating an emergent intelligence greater than its parts.
When AGI finally arrives, it won't be through a single breakthrough but through the gradual interweaving of these capabilities into a seamless whole. And by that time, we may have already shifted our philosophical goalposts once again, continuing the age-old human tradition of redefining intelligence just as we come to understand it. We have less in common with a 1500s peasant from our own town than with someone of a similar sociological background on the other side of the world. Our intelligence, our ability to process reams of information thrown at us like artillery shells, would almost certainly have seemed alien to that peasant.
Reimagining Humanity
Perhaps the most profound outcome of this journey isn't the creation of artificial minds but the reimagining of our own. Each advance in AI holds up a mirror to human cognition, revealing both our uniqueness and our commonality with the systems we create.
As we build machines that reason, learn, and perhaps one day feel, we're simultaneously rebuilding our conception of what it means to be. The boundaries we once thought fixed—between reasoning and instinct, knowledge and wisdom, perhaps even consciousness and non-consciousness—grow increasingly fluid.
In the quiet spaces between algorithm and insight, between neuron and transistor, we might find not just new technologies but new ways of being human. Perhaps intelligence, consciousness, and selfhood aren't binary states but spectrums along which both natural and artificial minds might move. Perhaps the question isn't whether machines can become like us, but whether the very categories of "machine" and "human" will remain meaningful as both continue to evolve.
The quest for AGI thus becomes not merely technical but deeply philosophical—a journey that leads not just outward to new creations but inward to new understandings of ourselves. As our silicon creations grow more complex, perhaps they'll help us appreciate the wonder of our own existence—the miracle of consciousness emerging from matter, of meaning arising from mechanism, of mind from the mysterious dance of energy and information that animates both carbon and silicon dreams. I left the party at 4am.