Accounting for phenomenal structure–the forms, aspects, and features of conscious experience–poses a deep challenge for the scientific study of consciousness, but rather than abandon hope I propose a way forward. Connectionism, I argue, offers a bi-directional analogy, with its oft-noted “neural inspiration” on the one hand, and its largely unnoticed capacity to illuminate our phenomenology on the other. Specifically, distributed representations in a recurrent network enable networks to superpose categorical, contextual, and temporal information on a specific input representation, much as our own experience does. Artificial neural networks also suggest analogues of four salient distinctions between sensory and nonsensory consciousness. The paper concludes with speculative proposals for discharging the connectionist heuristics to leave a robust, detailed empirical theory of consciousness.
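The abstract's central connectionist claim, that a recurrent network's distributed hidden state superposes current input with contextual and temporal information, can be made concrete with a minimal sketch. The following Elman-style toy network is illustrative only and not drawn from the paper; all names, sizes, and weights are arbitrary assumptions.

```python
import numpy as np

# Illustrative Elman-style recurrent step (not from Lloyd 1996).
# The hidden state blends the current input with recurrent "context"
# carried over from the previous step, so one distributed pattern
# superposes present input with temporal/contextual information.

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W_in = rng.normal(0.0, 0.5, (n_hid, n_in))    # input -> hidden weights
W_rec = rng.normal(0.0, 0.5, (n_hid, n_hid))  # hidden -> hidden (context) weights

def step(x, h_prev):
    """One time step: the new hidden state mixes input and prior context."""
    return np.tanh(W_in @ x + W_rec @ h_prev)

# The same input token presented in two different temporal contexts...
x = np.eye(n_in)[0]
h0_blank = np.zeros(n_hid)                    # no preceding context
h0_primed = np.tanh(W_in @ np.eye(n_in)[1])   # context: token 1 seen first

h_blank = step(x, h0_blank)
h_primed = step(x, h0_primed)

# ...yields two distinct distributed representations of the "same" input:
print(np.allclose(h_blank, h_primed))
```

The point of the sketch is only that identical inputs produce context-dependent representations, the structural feature the paper compares to the way experience presents an object already saturated with categorical and temporal significance.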

I am not a zombie. Neither are you. You know right away that you are not, but how can you be sure about me? And how can I be sure about you? Philosophers of mind have made a steady living exploring the logical possibility of zombies among us, but even they feel the common-sense conviction that ordinary, awake humans are conscious. In contrast to the a priori philosophy of consciousness, then, we encounter the inescapable folk theory of consciousness: if it walks like a conscious agent and talks like a conscious agent, then it is a conscious agent. Between these two poles–the apodictic philosophical theory and plain folk practicality–lies the tricky business of the empirical scientific theory of consciousness. This paper explores this middle ground.

Scientific theorizing about consciousness is a tricky business owing to a well-known dualism of viewpoints (e.g. Nagel, 1974). From the point of view of science, conscious states of mind are inferred or theoretical entities. Science is full of such entities, however, so this in itself is a standard challenge. But this challenge is confounded by a second difficulty, which is that from the point of view of the conscious agent consciousness is the complete antithesis of an inferred entity. Conscious states of mind are real, palpable, and laden with specific and identifiable properties. For the conscious agent, awareness is simply there, present as something known without inference or theory. (This is why outright eliminativism for consciousness makes no sense.) Somehow the target entity, consciousness, is both an inferred unobservable entity and the one-and-only self-intimating directly observable “given”.

This seeming antinomy of viewpoints complicates the scientific project, and I would like to launch this paper by briefly examining the complications. First, at the heart of a scientific theory of consciousness will be a set of statements asserting the identity of states (or processes) of consciousness with states (or processes) of the brain. Let us call these statements the core theory of consciousness. What form will these core statements take? Will they hold only for individuals, or within a species, or for conscious beings “as such”? Philosophers have a number of in-principle arguments about the most desirable core theory, but I think the empirical researcher must simply wait and see. Sad to say, for the moment there are no empirically confirmed core hypotheses, and so the question of the ultimate scope of core identities cannot yet be addressed.

Meanwhile, non-core research into consciousness is booming. Outside the core, the scientific task is to explain consciousness by placing it in its causal and evolutionary contexts. The recent history of the science of consciousness would seem to suggest that the core theory comes along last, the final conclusion in a massive research program. But it need not be that way. As William Wimsatt has pointed out (particularly with reference to the history of genetics), a fruitful research strategy can begin by assuming core identities across domains or “levels” of nature, and exploit those identities (and Leibniz’s law) as tickets for importing explanations across domains, thereby generating a host of new explanatory hypotheses (Wimsatt, 1976). An inevitable dialectic ensues, as both the identities and the explanations based on them adapt to the experimental and observational landscape. This suggests a natural strategy for the scientific study of consciousness, inviting the brave to go ahead and propose parts of the core theory as hypotheses and discovery devices.

It is right here that the antinomy of viewpoints becomes troubling. By the Wimsatt strategy, our hypothetical identification of c (some state of consciousness) with b (some state of the brain) would invite us to substitute “c” for “b” in any explanation of b, and if the shoe fits . . . But is that all there is? In the study of consciousness, the dualism of viewpoints necessitates a further step. The target entity in this case is not simply the cause of observed phenomena, but is itself phenomenal. It has its own rich and complicated structure, the structure of consciousness. As a result, once one has the correlate pegged and the causal relationships in registration, one must go further to explain why the hypothesized physiological state of consciousness should have the phenomenology that it has for conscious subjects (Levine, 1993). This extra step makes the study of consciousness unique.

So far these points might be made by any of a number of critics of the scientific approach to consciousness, a group known to their opponents as the mysterians (Flanagan, 1992, p. 109). Against the mysterians, the fans of science argue that regular science will be sufficient to reach the core theory of consciousness. But for the reasons just stated, the mysterians have a point: the core theory of consciousness will require an explanatory step beyond what would be sufficient grounding in any other scientific domain, a step required just because what is scientifically unobservable is phenomenally observable. First-person experience cannot be dismissed, nor is it automatically accounted for in the non-core research program [1].

The mysterians, however, go further to maintain that the unique requirements on the core theory of consciousness strongly suggest that a materialist core theory is either unknowable or impossible. In this conclusion they err twice. First, they construe the question of consciousness as mysteriously and ineffably monolithic, and understand a core theory to be one or a few statements that somehow reduce and illuminate consciousness all at once, in all its glimmering seemings and self-intimations. In response, we should recall that consciousness is complex, variegated, and diverse in forms, manifestations, and properties. It lends itself to incremental study, considering its aspects and episodes one by one, building toward an ultimate theory. The second error in mysterian thinking follows on the first. They fail to foresee an obvious extension of the Wimsatt strategy: interdomain identities need not be limited to scientific domains. The complexity of consciousness, its structure and subjectivity, is the traditional province of phenomenology. A core theory of consciousness can exploit phenomenological description of a state of consciousness and attempt to co-ordinate it with neural explanation of the corresponding brain state. The mysterians are right to settle for nothing less, but prematurely eager to abandon the struggle to achieve it. However, rather than continuing this debate at the metatheoretic level, I propose a more modest effort, somewhat in the spirit of a case study, with the goal of sketching a part of the core theory. To the extent that this effort succeeds, the explanatory gap closes, weakening mysterian mysticism while advancing the prospects for understanding and explaining consciousness.


Lloyd, D. (1996). Consciousness, connectionism, and cognitive neuroscience: A meeting of the minds. Philosophical Psychology, 9(1), 61.