The Abstraction Fallacy
| Title | Author | Publication Date | URL |
|---|---|---|---|
| The Abstraction Fallacy | Alexander Lerchner | March 19, 2026 | https://deepmind.google/research/publications/231971/ |
The full title of the article is: The Abstraction Fallacy: Why AI can simulate but not instantiate consciousness. The article argues that computational functionalism commits a category mistake when it asserts that consciousness can arise from the symbolic manipulation of abstract concepts. This is the Abstraction Fallacy. Functionalism ignores the physical origin of information and its causal chain. Instead of following the natural order, from raw sensations to consciousness to abstract concepts, functionalism assumes that consciousness can be instantiated "backwards", starting from abstract concepts. The article claims this inversion is impossible: consciousness cannot be instantiated by symbolic manipulation of abstract concepts, because that manipulation ignores the fundamental role of the mapmaker. The mapmaker is the one who extracts abstract concepts from raw sensory data: it is the giver of meaning to the abstract symbols.
For example, when a phenomenon (e.g. listening to Beethoven's 5th Symphony) causes electrical activity in the brain, it is human consciousness that gives that signal its meaning: the result is the experience of hearing the music and the creation of abstract concepts (e.g. the sound of the instruments, the rhythm of the piece). On its own, the raw electrical signal in the brain could signify something else entirely: it could just as well be market data over time. What is needed is an alphabet, which is provided by the mapmaker: without it, the same physical vehicle (the electrical signal) can instantiate infinitely many different computations. Abstract manipulation of the signal can never recreate the conscious experience: it would have to presuppose the mapmaker, which is the very thing it set out to produce. This is called the indeterminacy of mechanism.
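The indeterminacy point can be made concrete with a small sketch (my own illustration, not from the article): the same fixed sequence of raw bytes, standing in for the electrical signal, yields entirely different "computations" depending on which interpretive mapping is imposed from outside.

```python
import struct

# The "physical vehicle": a fixed sequence of raw bytes, standing in
# for an electrical signal. The bytes themselves carry no meaning.
raw = struct.pack("<4h", 120, -87, 3050, 412)

# Mapping 1: read the bytes as four 16-bit audio samples (amplitudes).
audio_samples = struct.unpack("<4h", raw)

# Mapping 2: read the very same bytes as two 32-bit market ticks
# (say, prices in cents). Different alphabet, different "world".
market_ticks = struct.unpack("<2i", raw)

print("audio interpretation: ", audio_samples)
print("market interpretation:", market_ticks)

# Identical physical vehicle, two incompatible computations: the
# meaning lives in the mapping chosen by the mapmaker, not in the bytes.
assert struct.pack("<4h", *audio_samples) == struct.pack("<2i", *market_ticks)
```

Nothing inside the bytes privileges the audio reading over the market reading; the choice of `struct` format string plays the role of the article's alphabet, supplied by the mapmaker.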
What I liked
I found the explanation of the category mistake committed by functionalism very convincing.
What I disliked
The second half of the article discusses embodied AI robots. By embedding robots in the world, with sensors and actuators, supporters of functionalism claim that the causal loop is closed and consciousness can now arise in abstract computing machines. The article claims otherwise, noting that the hidden mapmaker is still present, in the form of the tuning of the actuators, the interpretation of the sensory data, and the overall design of the robot.
While this is true, I still wonder how it differs from the way human babies develop consciousness by interacting with the world. Babies, too, like AI robots, come with hard-coded rules, in the form of biological constraints, DNA, and so on. Babies, too, translate sensory data into electrical signals in neurons, which are then manipulated abstractly.
As much as I'd like the thesis of this article to be right, I don't think it ultimately defeats functionalism.