The explanatory void

A breeze caresses your shoulder. The pitter-patter of rain beats gently on the roof. The setting sun dashes the horizon with red and purple hues. These experiences are so familiar to us, and yet they all fall under a category David Chalmers has described as the “hard problem” of consciousness. This problem, frequently contrasted with the so-called “easy problems”, concerns the nature of qualia: raw, subjective experiences which appear to be irreducible to material parts. The familiarity of these experiences betrays their oddity. The difference between qualia and the physical world is so bizarre, in fact, that our inability to explain one in terms of the other is referred to as the explanatory gap.

In order to appreciate how daunting the hard problems of consciousness are, it will help to discuss a few easy problems first. The easy problems, according to Chalmers, are those which are explainable by “computational or neural mechanisms”. Rather than describing the characteristics of such problems in the abstract, I’ll introduce a thought experiment to clarify what they are. Imagine an incredibly powerful supercomputer hovering over the Earth, capable of extracting all the data it needs from the surface. Such a computer could not only track the position of every individual organism on the planet, but also take high-definition images and biometric scans of their bodies, revealing their biomolecular features and physiological states at all times. It would contain vast amounts of knowledge: in fact, we can consider it to have the answer to every easy problem we might pose to it.

Take thirst as an example. An endocrinological “scan” of the human body would provide a thorough understanding of the mechanisms of thirst. Osmoreceptors in the hypothalamus respond to a rise in blood osmolality by triggering a hormonal cascade which constricts the blood vessels, increases water retention, and releases neuropeptide Y. Blood vessel constriction maintains blood pressure, water retention has the obvious effect of minimizing water loss, and neuropeptide Y drives the organism to seek out water. All of these responses can be considered products of homeostasis, the regulatory tendency to keep physiological variables within the ranges that ensure the organism’s survival. Indeed, these responses all exist by virtue of their benefit to the organism (or perhaps more precisely, to its genes), and our supercomputer would certainly possess knowledge of natural selection, which endowed the organism with these responses in the first place.
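Since the report is purely mechanistic, it can even be caricatured in a few lines of code. Below is a minimal sketch of the cascade as a feedback rule; the threshold, names, and values are invented for illustration, not actual physiological constants.

```python
# Toy sketch of the thirst cascade described above. Not real physiology:
# the threshold and all names are invented placeholders.

def thirst_response(osmolality: float, threshold: float = 295.0) -> dict:
    """Return the hypothetical responses to elevated blood osmolality,
    mirroring the kind of mechanistic report the supercomputer gives."""
    if osmolality <= threshold:
        # Within the homeostatic range: nothing fires.
        return {"vasoconstriction": False,
                "water_retention": False,
                "neuropeptide_y": False}
    # Above the threshold, the cascade fires. Each entry is a material
    # event the computer can observe, measure, and explain.
    return {"vasoconstriction": True,   # maintains blood pressure
            "water_retention": True,    # minimizes water loss
            "neuropeptide_y": True}     # drives water-seeking behavior

print(thirst_response(302.0))
```

Notice that every term in the sketch names a material event; nothing in it needs to mention how any of this feels.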

So, we could ask the computer “what is thirst?” and receive just such a thorough and concise account. However, there appears to be something missing. We all know that thirst involves more than simply physiological responses: there is something like the feeling of being thirsty, and this feeling has been left out of the computer’s otherwise flawless report. We might press it further about neuropeptide Y, which the computer might know as “that chemical which causes the water-seeking response”. Alright, we might say, how does it cause the water-seeking response? Its answer: the chemical activates such and such a neuron, which results in water-seeking behavior. Fair enough, but how does this neuron lead to the behavior? The computer does more scans and replies: this neuron firing at this time results in this motor neuron firing, which results in the arm’s reach toward a glass of water sitting on the table. Hardly helpful at all.

In fact, on closer inspection, it appears impossible that the computer, despite its immense information-gathering and computing power, would ever be capable of explaining to us why the feeling of thirst arises from neuropeptide Y activity. After all, objective descriptions of the world ultimately rely on material events, and there seems to be something stubbornly non-material about qualia like thirst. Suddenly, the tables are turned: that omniscient being, with all its enviable knowledge and paradigms, will never understand the idea of responding to a feeling. At the very best, it can explain that neuropeptide Y results in water-seeking behavior on the part of the human, but this is no deeper than explaining that the contact of a cue ball results in the motion of its target ball. The computer might even construct infinitely complicated equations that relate the intensity of neuropeptide secretion to water acquisition, and predict in each scenario how long it would take for the human to find water. But again, something is missing: this computer does not understand us at all. It is on the other side of the gap.
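To make the point concrete, here is a deliberately crude stand-in for those equations: a made-up model mapping secretion intensity to a predicted time-to-water. The functional form and every constant are hypothetical; the point is only that the prediction is a pure input-to-output mapping, with nothing in it that refers to a feeling.

```python
import math

def predicted_seconds_to_water(secretion_intensity: float,
                               distance_m: float = 3.0) -> float:
    """Invented model: stronger secretion means more urgent seeking,
    hence faster acquisition. Pure prediction, no feeling anywhere."""
    urgency = 1.0 - math.exp(-secretion_intensity)  # saturates toward 1
    speed_m_per_s = 0.2 + 1.3 * urgency             # invented constants
    return distance_m / speed_m_per_s

print(round(predicted_seconds_to_water(2.5), 1))  # -> 2.2 (seconds)
```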

By such an account, a question that could be posed to and answered by the supercomputer would be considered an easy problem, while a question that it remains incapable of answering would fall within the domain of the hard problems.

Another fascinating dimension of this discussion arises when we ask: since when has the computer been incapable of knowing all things? In other words, when did hard problems begin to form? Imagine the state of the supercomputer just before the Proterozoic eon, around 2.5 billion years ago. It would know the state of every molecule of every single-celled organism then in existence, and since nervous systems had not yet evolved, it seems intuitive that there was not yet anything “like” being such an organism. So our computer’s descriptive account of the world seems sufficient (unless you subscribe to a view called panpsychism, which holds that there is in fact “something like” being every object in the universe, to varying degrees of complexity).

Let’s fast-forward. To keep things brief, I’ll limit the discussion to one organism: some hypothetical species of worm which possesses the earliest precursors of ganglia and nerves. Let’s suppose the worm has developed a nifty behavioral response: when internal fluid levels are low, the central ganglion activates a “burrowing response”, causing the motor neurons of the body wall to fire in such a way that the worm digs into the ground. The behavior continues until the worm inevitably finds water in the lower layers of the soil, which results in consumption and a return to homeostatic fluid levels. This response would be positively selected, since it grants the worm a reproductive advantage. Moreover, the response is perfectly explainable by the supercomputer: effectively, it would tell its user a more complicated version of the story I just told.
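The structure of that story is simple enough to sketch in code. In this hedged toy version (every number is invented), the trigger is wired directly to a fixed motor program: what the worm’s genes specify is a behavior.

```python
def burrow_reflex(fluid_level: float, low_threshold: float = 0.4) -> float:
    """Hardwired reflex: low fluid triggers a fixed motor program
    (digging) that happens to end in water. All values are invented."""
    depth_m = 0.0
    while fluid_level < low_threshold:
        depth_m += 0.1          # fixed behavior: dig another increment
        if depth_m >= 0.5:      # water sits in the lower soil layers
            fluid_level = 1.0   # consumption restores homeostasis
    return fluid_level

print(burrow_reflex(0.2))  # -> 1.0; the genes specify the digging itself
```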

But somewhere along the line, something incredible happens. In fact, I would call it the most inexplicable and mysterious event in the evolution of life: the neural connections become capable of tapping into pain/pleasure responses. Now a certain neuron firing results in the worm feeling either pain or pleasure. The significance of this cannot be overstated. Instead of triggering a specific behavioral response, the brain can respond to thirst by simply making the organism feel negative valence, a valence which is relieved only when the organism behaves in the right way. In other words, instead of controlling for specific behaviors, the neural firing can now control for a specific outcome: the organism becomes compelled to achieve that outcome.
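The contrast can be sketched the same way, again with entirely invented actions and numbers. Now the nervous system wires in a feeling rather than a motor program, and any action that relieves the negative valence will do.

```python
import random

def valence(fluid_level: float, set_point: float = 0.4) -> float:
    """Negative while dehydrated, zero once the set point is reached."""
    return min(0.0, fluid_level - set_point)

def seek_relief(fluid_level: float) -> int:
    # Invented action repertoire; effects on fluid level are placeholders.
    actions = {"wander": 0.0, "burrow": 0.02, "drink": 0.3}
    steps = 0
    while valence(fluid_level) < 0.0:      # compelled by the feeling
        action = random.choice(list(actions))
        fluid_level += actions[action]     # whatever relieves it, works
        steps += 1
    return steps

print(seek_relief(0.1))  # loop ends when the outcome, not a behavior, is met
```

The difference sits in the loop condition: the reflex loop runs a fixed motor program, while this loop terminates only when the felt state changes. The outcome is specified; the route to it is left open.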

I am convinced that this moment was the birth of the hard problem. At the risk of sounding too zealous, I would also say that it was the birth of the “soul”, or at least of the “subject” which we consider to be the seat of experiences such as pain and pleasure. And there is more to be said about this. There appears to be a circular relationship between pleasure (which I’m simply using to mean any positive-valence reaction), pain (negative valence), and will. If I were to pick the most fundamental of the three, I might say that pleasure/pain (let’s call it valence) is. Valence is what really exists, and will is simply the set of actions an organism takes in order to achieve positive valence and avoid negative valence. On the other hand, I could say that will is the most fundamental thing that exists, and that pleasure is by definition that which is always willed, while pain is that which is never willed.

Either way, pain/pleasure and will appear to be the most fundamental constituents of what makes us “more than material”. For reasons I can’t yet articulate, I would guess that what we consider A-consciousness (Ned Block’s “access consciousness”: information available for reasoning and report) is not consciousness at all. It only appears that way because each object of A-consciousness (say, the image of an elephant, or what I had for lunch yesterday) carries with it P-conscious, that is, phenomenally conscious, responses, namely the emotional responses I might have to those concepts. In other words, I believe the only reason we think there is “something like” abstracting is that we cannot abstract without little undercurrents of experiential content.

Regardless, I hope it’s clear now that the explanatory gap is quite the understatement: a void might be a more fitting description. The questions the supercomputer can answer might be very complicated indeed, but there is still a frontier it cannot cross: subjective experience, common to us all. And I think that’s something to feel special about.
