I had a conversation with an Aeon article a few weeks ago. It didn’t go well. I’ll relate it as best I can:
[The brain] does not contain most of the things people think it does—not even simple things such as memories.
Well, that’s a bold statement, and I will continue reading.
For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.
Not exactly: the majority of the literature I’ve read has either lamented or celebrated the many ways computers don’t work like brains.
Senses, reflexes and learning mechanisms—this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.
“Probably” seems a bit superfluous here, but I’m a fan of British humour, so I’ll let it slide; “not alive in any meaningful sense” would have done the job and still been palatable on both sides of the Atlantic.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers—design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them—ever.
That’s a long list of nebulous words. Information and data are sort of the same thing, along with knowledge, though knowledge arguably requires someone to have it, unlike information, and information is also a measure of entropy. Then lexicons… I mean, that’s how we talk about sets of representations and how we communicate knowledge, and you’re not even allowed to use the word representations as a thing that does or does not exist unless you have at least two published philosophy books that are at least 10 percent footnotes and references. Then there’s processors and subroutines and buffers, which are at least firmly in the realm of computer science, but then you drop in symbols and models (see representations), then encoders and decoders, which are descriptors of information manipulation. I assume you’re talking about computer programs when you juxtapose programs with algorithms, but those are categorically different things. "Memories" doesn’t belong here at all, which makes me think you snuck it into this word salad to prep a later point.
And who in Hell thinks computers behave intelligently?
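Since the algorithm/program distinction is going to matter later, here’s one algorithm (Euclid’s GCD, a toy example of my own choosing, not anything from the article) written as two different Python programs. The procedure is the algorithm; each blob of code below is merely a program that happens to express it.

```python
# One algorithm, two programs: Euclid's method for the greatest common
# divisor. The abstract procedure is the algorithm; each function below
# is a distinct program that expresses the same procedure.

def gcd_iterative(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

def gcd_recursive(a: int, b: int) -> int:
    return a if b == 0 else gcd_recursive(b, a % b)

assert gcd_iterative(1071, 462) == gcd_recursive(1071, 462) == 21
```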
After this there’s a rough pseudo-introduction to binary encoding, wrapped up with:
Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
I need to be clearer: they really, really don’t. The notion of symbolic representation is the first thing you have to get rid of in order to work with computers in any important way. Symbolic representation is exactly what computers cannot do. We interact with computers via symbolic representation because of the pixel creations coded to obscure the crap code that makes computers run for five or six years. And the reason we call what we do symbolic representation is that we don’t have a strong grasp on how we do whatever it is we do.
The next three sentences are just weird: what is a non-physical memory, if we’re talking about the real world? Do squirrels not store and retrieve nuts? “They really process” is barely English, and contains either a notable lack of symbolic representation or way, way too much. The last sentence is a nice theory, but if it were remotely true I’d be out of a job.
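For the concrete version of that point, here’s a trivial sketch (the example string is mine): all the machine ever holds is a bit pattern. The “symbol” only shows up once we layer conventions like UTF-8 and font rendering on top of it.

```python
# All the machine holds is a bit pattern; the "symbol" appears only when we
# apply a convention (UTF-8 here) and render pixels for a human to look at.

text = "$1 bill"                      # what we see: symbols
raw = text.encode("utf-8")            # what the machine holds: byte values
bits = " ".join(f"{b:08b}" for b in raw)

print(raw)    # b'$1 bill'
print(bits)   # 00100100 00110001 ... voltages, roughly speaking

# Nothing in `bits` is about dollars. The aboutness lives entirely in the
# conventions we humans wrap around the hardware.
```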
The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body—the ‘humours’—accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.
That wasn’t the only thing handicapping medical practice. Medical practice was handicapped by the difficulty of performing double-blind studies on arterial wounds. Medical practice was snake oil prior to World War I, when enough people died in a short enough span of time that somebody asked what doctors were good for besides cutting off limbs and distributing painkillers.
By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines.
There were a few deterministic notions of the functioning of the body and mind back in the day, arguing for simplistic one-to-one mechanical relationships between environment and physiology. My ex tore up my copy of the book that describes when they were debunked, but I’m pretty sure it was at least a hundred years before science was cool, when someone noticed that if you pricked somebody in the palm with a needle, they had to use a different set of muscles to recoil depending on which way the palm was initially facing. It’s not like nobody was trying, even if they sucked at it.
Each metaphor reflected the most advanced thinking of the era that spawned it.
No shit.
The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences.
Here, you snagged my interest again, but you’re going to have to do some hard backtracking to disentangle the idea of information processing from the claim that the brain is a computer.
There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity.
Dude, do you even Asimov?
But the IP metaphor is, after all, just another metaphor—a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point—either replaced by another metaphor or, in the end, replaced by actual knowledge.
That is not what a metaphor is. Also, didn’t you just say we do not and never will have either metaphors or knowledge? What do I have, then? Because I’m assuming you agree that whatever my brain is doing, it’s representing—no, wait, can’t do that either. Whatever it’s doing, it contains the sum total of my subjective experience, so metaphors and knowledge are ideologica non grata.
Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later.
You almost got me back here. But I think you haven’t kept up with the other sciences: describing anything in the universe without reference to information change is a bit tricky these days. If there’s a problem, it goes a lot deeper than our brain metaphors.
The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism—one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
Okay, where in fuck are you getting your information? NOBODY THINKS COMPUTERS ARE CAPABLE OF BEHAVING INTELLIGENTLY. My computer is dumb as nails. I’m starting to think your IT guy hates you. And in my fifteen years of working as a software engineer, nobody in any forum, conversation, or YouTube comment has referred to a computer as an information processor. Look: if you disagree with Pinker’s computational model, be direct about it. Stop accusing us of holding absurd premises and being unable to do basic logic.
In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill—‘as detailed as possible’, I say—on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.
Well, those drawings are vastly different. Were I a caustic asshole disemboweling your arguments with a disconcerting grin on my face, I might say your students were carrying symbolic representations around in their heads, instead of strict patterns of information they retrieved from a buffer.
The idea that memories are stored in individual neurons is preposterous: how and where is the memory stored in the cell?
Now I’m wondering if you read anything in your own field of science, to the point where I’m willing to write this off as a typo. If memories are stored anywhere, they’re in connective patterns across millions of neurons. If it’s not a typo, I’m still willing to write it off as another pawn in an army of scarecrows.
A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks.
So we can have memory tasks, but not memories. That’s potentially valid, but you’d have done better by saying memories are processes—ah, shit, we can’t have those either. At least we have tasks.
… We’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before.
That Latin class was totally worth it; don’t let anyone tell you otherwise. This argument against memories existing is kind of a non-argument. I just quoted it to tell you that Latin class was totally worth it, and the Latin lesson added a lot to this paragraph. Seriously, don’t let anyone give you shit for that Latin class.
As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.
As I copy and paste your essay, I find myself unwilling or unable to transfer your stylistic choices into HTML, so, for both our benefit, I’ll note that you emphasized “observe,” “pairing,” and “punished or rewarded.” Considering how quick you were to dismiss “symbol” and “representation” I would be remiss in not pointing out that putting a word in italics does not instantly define it as the thing you want it to be. Would it be crushing to learn that “pairing” comes up in computer science discussions? If punishment and reward are the only players you’ll allow into cognitive science, it seems like you want to move the argument backward, or at least take it out of relation to any other science, so it stays entombed in a nonstarter theory.
We become more effective in our lives if we change in ways that are consistent with these experiences—if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.
“If we are able to follow the instructions we are given”… CAN YOU FUCKING LISTEN TO YOURSELF?
Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.
Now I can see the potentially interesting point that you’ve buried under this mountain of nonsense. There’s no specific location in anyone’s brain that can be scalpeled up and glued into Gordon Lightfoot lyrics. But the hard drive in your computer is also just changing in an orderly way so it can play Song for a Winter’s Night under certain conditions. That’s the language of information theory. That the brain’s organization is far more complicated and less tidy than a hard drive’s distinguishes it from computers, but not from information theory, since you literally just described an information process. Again, this might be wrong, but now we’re proposing deep issues in physics and language, not just the brain metaphor that I can’t store in my brain.
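If you want to see what “orderly change that later allows recall under certain conditions” looks like when you write it down honestly, here’s a toy of my own choosing, a miniature Hopfield-style network (not anything from the article, and not a claim about actual neurons): the learning is nothing but orderly change smeared across a weight matrix, no pattern is stored at any single location, and a damaged cue still brings the whole thing back. Which is to say, it’s an information process.

```python
import numpy as np

# A miniature Hopfield-style network (my toy, not the article's): learning a
# pattern changes the whole weight matrix "in an orderly way"; no single cell
# contains the pattern, yet a damaged cue recovers it "under certain conditions."

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=64)        # the "song" to be learned

# Orderly change: a Hebbian update smears the pattern across every weight.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# The "certain conditions": start from a corrupted cue and let the state settle.
cue = pattern.copy()
flipped = rng.choice(64, size=12, replace=False)
cue[flipped] *= -1                            # damage 12 of the 64 bits

state = cue.copy()
for _ in range(5):                            # synchronous updates
    state = np.where(W @ state >= 0, 1, -1)

print("bits recovered:", int(np.sum(state == pattern)), "of 64")
```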
My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball—beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight—the force of the impact, the angle of the trajectory, that kind of thing—then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
This is where you boggled the mind of everyone who knows what an algorithm is, because you literally described an algorithm, then said it was a proce—sorry, an “account” free from algorithms. This is where it became clear that you don’t know the difference between an algorithm and a physics equation being solved in real time.
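For the record, here is that “simple account” written down as what it plainly is: sense a quantity, compare it, act on the comparison. The sketch below is a one-dimensional cousin of the strategy (Chapman-style optical-acceleration cancellation rather than McBeath’s two-dimensional linear optical trajectory), and the launch numbers are mine, invented for illustration.

```python
# The "account free of algorithms," written out as an algorithm: the fielder
# senses the tangent of the ball's elevation angle, checks whether its rate of
# change is speeding up or slowing down, and moves accordingly. Physics is
# idealized (no drag, flat ground); the numbers are toy values.

G, VX, VY = 9.81, 18.0, 22.0          # gravity, ball velocity components (m/s)
T_FLIGHT = 2 * VY / G                 # time the ball stays aloft
LANDING = VX * T_FLIGHT               # where the ball actually comes down

def tan_elevation(d: float, t: float) -> float:
    """Tangent of the ball's elevation angle as seen by a fielder at range d."""
    y = VY * t - 0.5 * G * t * t
    return y / (d - VX * t)

def optical_acceleration(d: float, t: float, dt: float = 0.01) -> float:
    """Second difference of tan(elevation): the quantity the fielder 'feels'."""
    f = tan_elevation
    return (f(d, t + dt) - 2 * f(d, t) + f(d, t - dt)) / dt**2

def which_way(accel: float, tol: float = 1e-6) -> str:
    """The entire strategy: image speeding up -> run back; slowing -> run in."""
    if accel > tol:
        return "run back"
    if accel < -tol:
        return "run in"
    return "hold position"

for d in (60.0, LANDING, 100.0):      # short of, at, and beyond the landing spot
    a = optical_acceleration(d, t=1.0)
    print(f"fielder at {d:6.1f} m: optical accel {a:+.4f} -> {which_way(a)}")
```

Perceptual input in, motor decision out, a comparison in the middle. You can call that an “account” if you like, but it is an algorithm by any definition a computer scientist would recognize.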
Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience.
I’ll spare you the embarrassment of reitalicizing (from the Latin re, ‘again’, and italicize, ‘add wine’) this bit. No one believes that, because you are correct: there is no reason to believe such a thing.
… As the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner—perhaps even about the social context in which he or she was raised.
“Meaning” gets a curious pass since we’ve dispensed with symbols, representations, and information. I haven’t read The Future of the Brain, so I have no idea how this claim is supported. A much more interesting notion is that for a transporter to work, it would have to replicate not just the position of every particle but also its velocity when it reassembles your atoms. You italicized ‘social context.’ Don’t do that.
Big finish:
We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.
This time, “Get over it” is slanting its way toward insistent typographic relevance.
There’s an alarming misunderstanding of what words and information are throughout this argument, both before and after the noodle incident with the algorithms.
We wrapped up the second millennium with a rich language for describing how information moves and morphs, and that language was co-opted by the emerging industry of working with information. To some degree, everybody rebelled against this abstract lexicon, which is why we decided to grab yet more sports metaphors and have scrum meetings. Seriously, what group of software engineers would you describe as “agile” in the last century’s interpretation of the word?
Information is the term we use for the stuff of the universe. It’s the medium of interaction we have when we do the math and record our experiments. Unless you define it in some strict lexical framework at the beginning of an argument—which you did not, but physics and information theory did—you’re whipping the ocean: rocks and stars are information processors, as are computers and brains, because all of them exhibit increasing entropy.
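And since information theory did the defining, here is its strict version in one function, with toy inputs of my own: Shannon entropy, the average number of bits per symbol a source forces you to spend.

```python
from collections import Counter
from math import log2

# Shannon entropy: the average number of bits per symbol needed to encode
# messages drawn from a source. The example strings are toy inputs.

def entropy_bits(message: str) -> float:
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * log2(n / c) for c in counts.values())

print(entropy_bits("aaaaaaaa"))                     # 0.0  (perfectly predictable)
print(entropy_bits("abababab"))                     # 1.0  (one fair coin per symbol)
print(entropy_bits("the brain is not a computer"))  # ~3.7 (more surprise per character)
```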
Brains are not MacBooks. But you haven’t made any convincing argument that brains are not computational, even if computing in a vastly different way than our silicon tools. You yourself are incapable of escaping the words that relate to information, and I don’t fault you for that because words are information.
I think you have particular animosity toward the notion of modeling a human brain with the same algorithms we use to run computers. That’s an argument worth having, and we’ve been having it since somebody asked whether, if you wrote down all the actions of Einstein’s brain in a book, the book would be Einstein. I’d say no, but the question of whether it would be Einstein if we modeled all of his neuronal interactions in a SuperMacBook is stickier. To say brains are not computers is obvious, but your bizarrely constructed argument ends up being a semantic rarity: a tautological oxymoron. You want to say the brain undergoes orderly structural change in response to external input. I would be hard-pressed to think of a better definition of algorithmic information processing, so I’m skeptical that you know what any of these words mean. Sorry, symbolically represent. No, sorry, are metaphors for. Dammit, sorry, I meant—no, I meaBUFFER_OVERRUN