How the Psychule Theory answers questions about consciousness

In response to the recent controversy over the “IIT is pseudoscience” letter, Kevin Mitchell wrote a nice article (here) delineating a set of questions that a theory of consciousness should be able to address. As the proud originator of such a theory, Psychule Theory, I thought I would go ahead and answer those questions.

First, let me give a brief description of the theory. The psychule is claimed to be the fundamental unit of consciousness (compare the molecule, e.g., H2O as the fundamental unit of water). The claim is not that the psychule *is* consciousness. The claim is that any good theory of consciousness will fundamentally involve psychules, and these psychules will be invoked to describe the activities of consciousness.

The psychule is a process. Every process can be described mechanistically as a set of inputs which, when presented to a system (mechanism), generates (causes) a set of outputs. In some cases the mechanism is treated as a black box, but it may involve internal processes, each of which could, in principle, be described via the same input->mech->output paradigm.

Specifically, the psychule is a two-step process, i.e., a process which has at least two internal processes linked such that the output of one is the input of the other. It can be diagrammed thusly:

(Inputs) –> [mechanism 1] –> (outputs 1) –> [mechanism 2] –> (outputs 2)
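
For concreteness, here is a minimal sketch of that structure in code. It is purely illustrative (the mechanisms and values are placeholders of my own, not part of the theory):

```python
# A minimal sketch of a two-step process: two mechanisms chained so that
# the outputs of the first are the inputs of the second. Both mechanisms
# here are arbitrary placeholder transformations.

def mechanism_1(inputs):
    """(Inputs) -> [mechanism 1] -> (outputs 1)"""
    return [x * 2 for x in inputs]

def mechanism_2(outputs_1):
    """(outputs 1) -> [mechanism 2] -> (outputs 2)"""
    return sum(outputs_1)

def two_step_process(inputs):
    outputs_1 = mechanism_1(inputs)
    return mechanism_2(outputs_1)

print(two_step_process([1, 2, 3]))  # 12
```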

So what makes a two-step process a psychule? To answer that, I will need to use a few terms (information, goals, and purpose) that mean different things to different people. I therefore have to specify how I am using them.

Information: By information, I generally mean the concept known as mutual information, or correlation, or, if you want to go to the quantum level, entanglement. Mutual information describes a relation between two physical systems, such that measuring one will tell you what to expect if you measure the other. It’s important to note that any given physical system doesn’t have just a single mutual information. You can ask what the mutual information is between any two systems. The value may be high between systems A and B, and negligible between A and C.
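
For readers who want the standard formal version, the usual Shannon definition is

$$I(A;B) \;=\; \sum_{a,b} p(a,b)\,\log\frac{p(a,b)}{p(a)\,p(b)}$$

which is zero when A and B are statistically independent and grows as measuring one tells you more about what to expect from the other. The theory only needs this qualitative notion, not any particular quantity.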

Goals: A system can be said to have a goal if there is a state of the world, the goal state, such that the system can detect a deviation from that state and initiate an action which changes the world in the direction toward the goal state.

Purpose: Some systems achieve their goal(s) by generating subsystems which have the desired effect: moving the world toward the goal state. Such a subsystem is said to have achieving that goal state as its purpose. The subsystem can also be said to have a goal, but only if it too can detect the deviation from the goal state and act accordingly. Thus, a fire in a fireplace can have the purpose of heating a room, whereas a thermostat has the goal (and purpose) of maintaining a room at a specific temperature.
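
The thermostat example can be made concrete with a toy control loop (a sketch of my own, with placeholder numbers):

```python
# A toy goal-directed system in the sense defined above: it detects a
# deviation from the goal state and initiates an action that moves the
# world back toward that state. A fire, by contrast, has no detection
# step, which is why it can have a purpose but not a goal.

GOAL_TEMP = 20.0  # the goal state: room temperature in degrees C

def thermostat_step(current_temp):
    if current_temp < GOAL_TEMP:
        return "heater_on"   # deviation detected; act toward the goal state
    if current_temp > GOAL_TEMP:
        return "heater_off"
    return "no_action"       # already at the goal state

print(thermostat_step(17.5))  # heater_on
```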

Agent: An agent is a system that has more than one goal and uses information (via psychules) to determine which actions to take to gain the most benefit towards those goals.

So what is a psychule? A psychule is a two-step process wherein the first process constitutes a pattern recognition, the output of which constitutes a symbolic sign vehicle. A symbolic sign vehicle is a physical system whose sole purpose is to carry mutual information with respect to the pattern recognition mechanism, and by extension, mutual information with whatever instigated that pattern recognition. The second process in the psychule then takes that sign vehicle as input and generates an action which moves the world toward a goal of the encompassing system. Note: given that a system has some degree of mutual information with every other system, the system with the goal must coordinate the linkage between the pattern recognition unit and the responding (interpreting) unit.

So now the psychule diagram looks like this:

(Inputs) –> [pattern rec.] –> sign vehicle –> [interpreter] –> (action(s))
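
To make the diagram concrete, here is a toy psychule in code. Everything in it (the cat detector, the feeding response) is an illustrative placeholder of my own, not a claim about any real system:

```python
# Step 1: pattern recognition emitting a symbolic sign vehicle. The token
# "CAT" has no intrinsic meaning; its sole job is to carry mutual
# information with whatever triggered the recognition.
def pattern_recognizer(inputs):
    if {"whiskers", "meow"} <= inputs:
        return "CAT"          # the sign vehicle
    return None

# Step 2: the interpreter takes the sign vehicle and generates an action
# that serves a goal of the encompassing system. The CAT -> feed linkage
# must be coordinated by that system; the token means nothing on its own.
def interpreter(sign_vehicle):
    responses = {"CAT": "put_out_food"}
    return responses.get(sign_vehicle, "do_nothing")

def psychule(inputs):
    return interpreter(pattern_recognizer(inputs))

print(psychule({"whiskers", "meow"}))  # put_out_food
```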

As I said above, any good theory of consciousness will involve various structures utilizing psychules. It will be useful in answering the questions below to first describe a few such structures. One such structure is Ruth Millikan’s basic unitracker. A unitracker is essentially a pattern recognition unit, but it may be a unit which gets its inputs from other pattern recognition units (or other unitrackers). Thus, unitrackers for retinal pixels provide input to unitrackers for features such as boundaries, edges, and textures, which in turn provide inputs to unitrackers for whole objects, and so on. Unitrackers can also provide feedback as output, thus enabling predictive processing, plus outputs to other systems, such as a “global workspace”.
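
Here is a toy rendering of that layering, as I read it (the class and feature names are my own placeholders):

```python
# A unitracker as a pattern recognizer whose inputs may be the outputs of
# other unitrackers, giving a hierarchy: pixels -> features -> objects.

class Unitracker:
    def __init__(self, name, required_features):
        self.name = name
        self.required = set(required_features)

    def track(self, active):
        """Fire (return this tracker's name) iff all tracked features are active."""
        return self.name if self.required <= active else None

edge = Unitracker("edge", {"pixel_contrast"})
fur  = Unitracker("fur",  {"texture"})
cat  = Unitracker("cat",  {"edge", "fur"})   # takes other unitrackers' outputs

retinal = {"pixel_contrast", "texture"}
features = {t.track(retinal) for t in (edge, fur)} - {None}
print(cat.track(features))  # cat
```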

Another useful structure is Chris Eliasmith’s semantic pointer, which can take inputs from unitrackers and combine them in structured ways, such as associating them with a time or place or some other context. These associations could then be utilized directly or placed in memory. Semantic pointers may also be candidates for a global workspace.
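
As a rough illustration: in Eliasmith’s Semantic Pointer Architecture, vectors are combined in structured ways by binding them with circular convolution. The sketch below is a bare-bones toy of that idea, not the real architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # dimensionality of the pointer vectors

def pointer():
    """A random unit vector standing in for a semantic pointer."""
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution: combines two vectors into one of the same size."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

cat, kitchen = pointer(), pointer()      # e.g., an object and a place
cat_in_kitchen = bind(cat, kitchen)      # one vector carrying the association

# The bound vector resembles neither ingredient on its own, so structured
# associations can be stored without being confused with their parts.
print(round(float(cat_in_kitchen @ cat), 2))  # near 0.0
```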

So with that introduction I think I can start answering Kevin’s questions:

What kinds of things are sentient? What kinds of things is it like something to be? What is the basis of subjective experience and what kinds of things have it? 

Things that use psychules are conscious. Some people associate affect with sentience. In the Psychule theory, affect is the interoception resulting from psychules whose outputs are system-wide, usually hormonal. A thing that has a single system which can take the outputs of multiple psychules (possibly as memories, or possibly directly) as inputs to further psychules can be said to be a thing it is “something it is like to be”. What it is like to be that thing depends on, and is limited to, the specific psychule outputs available to the appropriate system.

Does being sentient necessarily involve conscious awareness? Does awareness (of anything) necessarily entail self-awareness? What is required for “the lights to be on”? 

Sentience necessarily involves psychules, but “conscious awareness” usually refers to inputs of a specific subsystem: Damasio’s autobiographical self. Most people, when discussing consciousness, are referring to this particular subsystem.

What distinguishes conscious from non-conscious entities? (That is, why do some entities have the capacity for consciousness while other kinds of things do not?) Are there entities with different degrees or kinds of consciousness or a sharp boundary?

Again, consciousness necessarily involves psychules. The “consciousness” of an entity is determined by the number and organization of its psychules, so yes, there are degrees and kinds.

For things that have the capacity for consciousness, what distinguishes the state of consciousness from being unconscious? Is there a simple on/off switch? How is this related to arousal, attention, awareness of one’s surroundings (or general responsiveness)? 

Consciousness as psychules refers to processes, so a “state” of consciousness must refer to a dynamic state, presumably a set of psychules happening recurrently. A state of unconsciousness is just a state in which psychules are not happening. The current focus of attention could, for example, be a specific semantic pointer; attention itself would be the controlling mechanism which determines which unitrackers provide input to that semantic pointer.

What determines what we are conscious of at any moment? Why do some neural or cognitive operations go on consciously and other subconsciously? Why/how are some kinds of information permitted access to our conscious awareness while most is excluded? 

“What we are conscious of at any moment” again refers to a specific subsystem: the autobiographical self. This system presumably makes use of a global workspace (a semantic pointer?) and would only access the outputs from that workspace/pointer. Psychules in and among the unitrackers not outputting to that workspace would still be happening.

What distinguishes things that we are currently consciously aware of, from things that we could be consciously aware of if we turned our attention to them, from things that we could not be consciously aware of (that nevertheless play crucial roles in our cognition)?

The outputs of only some psychules would go to the workspace/pointer, and which psychules actually output there would be controlled by some attention mechanism.

Which systems are required to support conscious perception? Where is the relevant information represented? Is it all pushed into a common space or does a central system just point to more distributed representations where the details are held? 

Not sure how you are using “representation” here. If a unitracker for “cat” is activated by a visual image of a cat on the retina, because there is an actual cat in view, then the internal sign vehicle, and also the output of that unitracker, have mutual information with respect to that cat. And if that unitracker activates part of the activity in a semantic pointer (via attention), then the output of *that* psychule (which includes the semantic pointer) also has mutual information with respect to the cat out there, and so on.

Why does consciousness feel unitary? How are our various informational streams bound together? Why do things feel like *our* experiences or *our* thoughts? 

Consciousness feels unitary because there is only one world, and wherever you look, that’s what you see. (“Wherever you go, there you are.”) But “informational streams” are brought together by unitrackers tracking other unitrackers, and also in semantic pointers. Experiences and thoughts just are our feelings. We don’t feel our feelings. We just group all the feelings and assume they belong to us (except in certain pathologies, when we don’t).

Where does our sense of selfhood come from? How is our conscious self related to other aspects of selfhood? How is this sense of self related to actually being a self?

There are various senses of self. Sometimes I think my cell phone is part of my self. Sometimes I feel my car is part of my self. I don’t think there is any singular thing as “being a self”. There are only ad hoc useful references. Oh, and unitrackers for the concepts “me”, “self”, etc.

Why do some kinds of neural activity feel like something? Why do different kinds of signals feel different from each other? Why do they feel specifically like what they feel like? 

Depending on what you mean by “feel like something”: outputs of psychules go to some systems and not others. Semantic pointers “point” to the unitrackers that activated them, and thus those unitrackers are distinguishable. They feel the way they feel because they have to feel some way, or you couldn’t distinguish them.

How do we become conscious of our own internal states? How much of our subjective experience arises from homeostatic control signals that necessarily have valence? If such signals entail feelings, how do we know what those feelings are about?

We have unitrackers for internal (interoceptive) states just like for retinal states. How much experience arises from homeostatic control signals with valence? This much. [Holds hands 2 feet apart.] Some psychules respond by generating systemic responses, for example by releasing hormones. Any such response will generate further interoceptive responses. When these interoceptive responses come in recognizable sets, we create unitrackers for them and give them names like fear or anger or desire. Sometimes we get into these states but don’t actively realize it, and so don’t activate the unitracker. Some such sets of interoceptive states activate psychules whose outputs tend to cause recent actions to be repeated or avoided, and we call the resulting interoceptive states “good” or “bad” as appropriate.

How does the aboutness of conscious states (or subconscious states) arise? How does the system know what such states refer to? (When the states are all the system has access to).

Aboutness comes from mutual information, but the specific mutual information results from the coordination of the pattern recognition with the interpreting response. I expect the work is done by variation and selection.

What is the point of conscious subjective experience? Or of a high level common space for conscious deliberation? Or of reflective capacities for metacognition? What adaptive value do these capacities have?

Conscious subjective experience does not have a point. It’s just a description of what something is doing. Mutual information is an affordance for achieving goals. Sophisticated pattern recognition allows one to achieve goals that would not otherwise be achievable. A high-level common space for conscious deliberation allows one to recognize non-obvious patterns, such as the laws of physics.

How does mentality arise at all? When do information processing and computation or just the flow of states through a dynamical system become elements of cognition and why are only some elements of cognition part of conscious experience?

Mentality arises when natural selection (or other design) seizes on the affordance provided by mutual information in physical systems. Information processing and computation become elements of cognition when they are used to achieve goals. All cognition is conscious. It’s just not accessible to all systems.

How does conscious activity influence behavior? Does a capacity for conscious cognitive control equal “free will”? How is mental causation even supposed to work? How can the meaning of mental states constrain the activities of neural circuits?

Conscious activity is necessarily behavior. The behavior attached to a pattern recognition via a psychule determines the experience. “Mental causation” is just the normal description of causation, applied when the mechanism is a psychule. As for “free will”, this theory is compatibilist: an agent is a system with more than one goal which determines its actions based on information, via psychules.

Whew.

Any more questions?