Consciousness is new. Estimates of behavioural modernity – that is, the set of traits that distinguishes Homo sapiens from our closest primate relatives – hover around 50,000 years in our evolutionary past. In that time we have migrated out of Africa, hewn ourselves out of the natural order, overpopulated the world and left a trail of lottery tickets, codpieces and operas in our wake. All the while, the apparatus that gave rise to our great achievements has remained shrouded in mystery. This is perhaps due to the influence of theistic religion on our thinking, and certainly due to the absence of technology sophisticated enough to take on the challenge of scientifically explaining this fundamental feature of human nature. Philosophers have provided some insights in this field, but arguably the greatest philosophical advances in understanding the mind have come from non-philosophers. Never has this been more true than in the present, as psychologists, philosophers, linguists, computer programmers and mathematicians collaborate with neuroscientists to map the body’s sexiest organ: the brain.
By studying the brain, we are looking into aspects unique to the species – to our species. We might want to claim that we are studying universal things we have in common with every human being who has ever walked the planet. I am not making this claim, but it does seem that these kinds of motivations are at play. Hippocrates long ago recognized what was at stake: “from nothing else,” he wrote, “but the brain, come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations.”
Having a brain enables us to create worlds and to strive for an understanding of them. Although there are plentiful examples of ingenuity displayed by organisms without a central nervous system (the caddis fly larva, for example, catches food by constructing a trap remarkably similar in design to a lobster cage), outside our species there are scarce examples of ingenuity that require an understanding of what makes a given behaviour ingenious (the caddis fly larva need not and does not understand how the trap works, or why it is the best way to catch food).
Neuroscience is leading us towards scientific descriptions of these intrinsically human characteristics. It is so thrilling because it imparts knowledge about the conscious, highly intelligent creatures that we all are. This understanding is very different from knowing how our DNA replicates, a bit like learning how our ears keep us balanced, and a lot like finding out what makes us individually happy. It provides insight into the tacit coping mechanisms that constitute our awareness of the world. Science is no longer focusing only on the objects of awareness, but on awareness itself.
Here’s some science to back this up. Two recent experiments opened the way for a new understanding of the functioning of the brain, allowing us to recreate dynamic perceptions. The first, reported last year by the Gallant Lab at UC Berkeley, deals with visual perception. The psychologists created an apparatus that translates brain activity in the visual cortex of subjects watching movie trailers into video collages of YouTube clips that bear a close resemblance to the original trailers. Researchers were thus able to watch in real time a composite video that looked like a shadowy, high-contrast version of what the subject was watching.
Reconstructing dynamic perception was a big hurdle for the scientists to overcome, as previously only static images could be reliably reconstructed. This is due to the coarse nature of functional magnetic resonance imaging (fMRI) data. fMRI takes as its unit of analysis small volumes of brain tissue called voxels, each roughly 2.0 × 2.0 × 2.5 mm. The hemodynamics of each voxel – that is, the changes in blood flow, blood volume and blood oxygenation – are analysed to give an impression of brain activity. However, these processes work several orders of magnitude more slowly than the electrical activity of neurons. Because of this, they give a very coarse impression of the life of the brain: analysing the hemodynamics of one voxel effectively reduces the activity of hundreds of thousands of neurons to a single value.
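The scale of that reduction can be made concrete with a toy sketch. The numbers and the independent random spiking below are purely illustrative (a real voxel holds far more neurons, and real activity is not random noise), but they show how millions of neuron-milliseconds collapse into a single voxel value per scan:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy numbers -- a real voxel contains hundreds of thousands of neurons,
# and real spiking is structured, not independent noise.
n_neurons = 1_000
ms_per_scan = 2_000  # the scanner samples roughly once every two seconds

# Simulated per-millisecond spiking (True = spike) for every neuron in the voxel.
spikes = rng.random((n_neurons, ms_per_scan)) < 0.005

# The hemodynamic readout collapses all of that activity into one number
# per voxel per scan.
voxel_value = spikes.mean()
print(f"{n_neurons * ms_per_scan:,} neuron-milliseconds reduced to one value: {voxel_value:.4f}")
```

Two million simulated samples of neural activity survive as a single scalar, which is why static images were recoverable from fMRI long before moving ones were.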
To overcome this, Gallant and his team constructed a two-stage model. The first stage modelled the behaviour of thousands of individual motion-energy sensors in the brain as they respond to the shapes, edges and motion of objects in film trailers. This provided the fine-grained, dynamic world-to-brain mapping that fMRI alone cannot supply. This information was then fed into a second model describing how neural behaviour affects hemodynamics, which the fMRI scanner can read. The combined model was then used to build dictionaries that translate the shapes, edges and motion in any YouTube video into fMRI data. And so, by running the model in reverse, it was possible to watch a reconstructed image of what the subject was watching, taken from a feed directly from the brain.
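The logic of the second stage can be sketched in a few lines. This is not the Gallant Lab's actual model – their motion-energy filters and fitting procedure are far more elaborate – but a minimal illustration of the key idea: fast neural responses are smeared out by a slow haemodynamic response before the scanner ever sees them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1 (stand-in): fast "neural" feature responses to a stimulus,
# sampled at 10 Hz. The real model derived these from motion-energy
# filters applied to movie frames.
fps = 10
neural = rng.random(fps * 60)  # one minute of fast fluctuations

# Stage 2: the fMRI signal is (roughly) the neural signal convolved with
# a slow haemodynamic response function (HRF). This gamma-like shape,
# peaking after a few seconds, is a crude illustrative choice.
t = np.arange(0, 20, 1 / fps)
hrf = t ** 5 * np.exp(-t)
hrf /= hrf.sum()

bold = np.convolve(neural, hrf)[: len(neural)]

# The predicted scanner signal varies far less sample-to-sample than the
# fast neural signal it came from -- the smoothing the model must undo.
print(np.abs(np.diff(neural)).mean(), np.abs(np.diff(bold)).mean())
```

"Running the model in reverse" then amounts to asking which fast neural inputs, out of a dictionary of candidates, best explain the slow signal actually measured.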
More recently, PLoS (a non-profit scientific publisher) published a paper by UC Berkeley-based neuroscientists who successfully reconstructed continuous auditory representations of words from measured neural signals. They recorded electrical activity across the surface of the auditory cortex of 15 subjects as they listened to individual words being spoken. It is important to note that the subjects had all undergone invasive surgery as treatment for epilepsy, which meant the researchers were able to gather electrical data describing the activity of neurons directly, instead of inferring it using the usual technique of fMRI.
This neural information was then applied to a model whose output was a spectrogram: a visual representation of how the energy in a sound is distributed across frequencies over time. Spectrograms are used, for example, by cetologists to analyse whale calls, and they have been reverse engineered by composers like Aphex Twin to turn images (in his case a self-portrait) into music. Once the model was properly calibrated, these spectrograms were accurate enough that a computer could convert them back into sound, and the result was recognisable as the word originally heard by the subject.
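To see what kind of object the model was producing, here is a short sketch – my own example, not the paper's pipeline – that computes a spectrogram of a synthetic tone standing in for a spoken word, and reads the dominant frequency back off it:

```python
import numpy as np
from scipy.signal import spectrogram

# A 440 Hz tone, one second at 8 kHz -- a stand-in for a spoken word.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)

# A spectrogram maps the sound onto a time-frequency grid; the Berkeley
# model produced this kind of representation from neural data, and audio
# can be resynthesised from it.
freqs, times, Sxx = spectrogram(tone, fs=fs)

# The energy is concentrated in the frequency bin nearest 440 Hz.
peak = freqs[Sxx.mean(axis=1).argmax()]
print(f"dominant frequency is roughly {peak:.0f} Hz")
```

For a real word the grid would contain a shifting pattern of formants rather than a single band, but the principle – sound in, time-frequency picture out, and back again – is the same.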
The two experiments above gave us a new understanding of the functioning of the brain, allowing us to recreate dynamic perceptions, and the fun part is thinking about the possible applications of this kind of technology. The first thing that jumps to mind is plugging ourselves into a programme that reproduces our visual thoughts and dreams on a screen. We could wake up in the morning and watch the dream we just had, either for entertainment or to learn something about ourselves. There are other, less fanciful applications. For example, this kind of technology could be applied to people with locked-in syndrome, or those in a vegetative state, in order to assess their brain activity and, where possible, assist them in communicating. It could also be used in brain-machine interfaces, which allow users to control machinery – everything from prosthetic limbs to music composition software – with their brains. All these applications are for the good, and in general with cognitive science there isn’t much bad that can come of it. No atom bombs, no anthrax, nothing like that. So far.