From photography to supercomputers: how we see ourselves in our inventions

Neuroscience encourages us to think of our brains as calculation machines, but such analogies, while useful, also demonstrate our limitations
Paraplegic Juliano Pinto kicked off this year’s World Cup using a brain-controlled robotic exoskeleton.

Back in 2008, the technologist Ray Kurzweil estimated that the processing power of the human brain was in the region of 20 quadrillion calculations per second and that, as soon as we developed a supercomputer fast enough, simulating the brain would just be a problem of getting the software right. It was announced last month that the world's fastest supercomputer, China's Tianhe-2, can carry out almost 34 quadrillion calculations per second, meaning that, according to Kurzweil, we have the potential to simulate one and two-thirds of a human brain inside a single machine.
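
Taking both figures at face value, the arithmetic behind "one and two-thirds of a human brain" is a single division. The sketch below (Python, purely illustrative; the constants simply restate the estimates quoted above) shows the calculation.

```python
# Back-of-the-envelope comparison, assuming Kurzweil's estimate of the brain's
# "processing power" and the reported Tianhe-2 benchmark are both taken at face value.
BRAIN_OPS_PER_SECOND = 20e15       # Kurzweil's estimate: 20 quadrillion calculations per second
TIANHE2_OPS_PER_SECOND = 33.86e15  # Tianhe-2: almost 34 quadrillion calculations per second

brains_per_machine = TIANHE2_OPS_PER_SECOND / BRAIN_OPS_PER_SECOND
print(f"Brains per machine: {brains_per_machine:.2f}")  # ~1.69, i.e. roughly one and two-thirds
```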

The idea that we could fit "one and two-thirds" of our brain function in a computer may seem a little flippant, but it is not an unreasonable conclusion if you think of the brain as primarily a calculating engine. If this seems a little distant from your everyday experience, consider that the idea of the mind as "computation at work" is an assumption so embedded in modern neuroscience that it is almost impossible to find anyone arguing for a non-computational science of the brain.

It's not that this approach is necessarily wrong. Science has produced many useful and important advances based on exactly this assumption. The first kick of the World Cup was taken by a paralysed man in a robotic exoskeleton that he controlled through a brain-computer interface, all of which was based on exactly this mathematical view of the mind. The difficulty comes, however, when we assume that there is nothing more to explain in the mind and brain than calculations. What starts as a tool to help us understand ourselves begins to replace us in our understanding.

But avoiding this pitfall may be more difficult than we think. Historically, our theories of the brain tend to be dominated by ideas we take from the technology of the day. The historian of psychology Douwe Draaisma has shown that while we often believe that we first learn about ourselves and then apply this knowledge to technology, it almost invariably happens the other way round. We tend to understand ourselves through our inventions.

The ancient Greek philosopher Plato had a theory that the mind was like a wax writing tablet. More than two millennia on, after seeing an alchemist demonstrate glow-in-the-dark phosphorescent liquid that had been synthesised for the first time, the 17th-century scientist Robert Hooke suggested that the mind stored memories just as this material seemed to "store" light. In the 1870s, when Thomas Edison first presented the phonograph to the world, scientists began discussing a theory of auditory memory as "an album containing phonographic sheets". When photography was invented, it was used as a metaphor for memory partly because it captured information in a way that was a little blurry and had a tendency to fade. In an interesting twist, modern cognitive scientists have to remind their audience that "memory is not like taking a photograph" because modern cameras do their job too efficiently to be a good metaphor for remembering.

When computers arrived, we inevitably saw ourselves in our machines and the idea of the mind as an information processor became popular. Here, the mind is thought to consist of information-processing networks in which data is computed and transformed. One of the newest and most fashionable theories argues that the central function of the brain is to statistically predict new information. The idea is that the brain tries to minimise the errors it makes in its predictions by adjusting its expectations as it gets new information. It's a theory that originated from the mathematical "predictive coding" model developed to help second world war gunners predict where moving enemy planes would be in the two seconds it took for an anti-aircraft shell to reach them. The statistical theory became widespread when it was realised that it could be used to guess missing audio when sound was sent over a telephone network, meaning that less information needed to be sent because the rest could be mathematically reconstructed. These ideas were first adopted to improve artificial intelligence – Google's speech recognition now relies on them – and then taken up by neuroscientists to explain data from the brain.
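
To make the prediction-error idea concrete, here is a minimal sketch in Python of the general technique rather than of any specific neuroscience or telephony model: a predictor keeps a running guess and nudges it towards each new observation in proportion to how wrong the guess was. The toy signal and learning rate are invented for illustration.

```python
import random

def predictive_update(prediction, observation, learning_rate=0.3):
    """Reduce prediction error by moving the estimate towards the observation."""
    error = observation - prediction           # prediction error: the "surprise" in the new data
    return prediction + learning_rate * error  # adjust expectations in proportion to the error

# Toy signal: a noisy constant, standing in for any stream of incoming information.
random.seed(0)
signal = [5 + random.gauss(0, 1) for _ in range(20)]

prediction = 0.0
for observation in signal:
    prediction = predictive_update(prediction, observation)

print(f"Final prediction: {prediction:.2f}")  # settles near the true value of 5
```

The telephone trick works on the same principle: if the receiver can make the same prediction as the sender, only the small prediction errors need to be transmitted, and the rest of the signal can be reconstructed at the other end.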

It could be that we've reached "the end of history" as far as neuroscience goes and that everything we'll ever say about the brain will be based on our current "brain as calculation" metaphors. But if this is not the case, there is a danger that we'll sideline aspects of human nature that don't easily fit the concept. Our subjective experience, emotions and the constantly varying awareness of our own minds have traditionally been much harder to understand as forms of "information processing". Importantly, these aspects of mental life are exactly where things tend to go awry in mental illness, and it may be that our main approach for understanding the mind and brain is insufficient for tackling problems such as depression and psychosis. It could be that we simply need more time with our current concepts, but history suggests that our next understanding of the mind may lie in another metaphor, perhaps drawn from a future technology.

From Terminator to Transcendence, popular culture is awash with fears about cyborgs, but, in terms of understanding ourselves, we have been cyborgs for centuries. We've lived in a constantly evolving relationship with machines that has profoundly affected how we see human nature. Perhaps the question is not whether we are being lazy when we take ideas from machines to understand ourselves, but whether we can ever think beyond them.
