Let’s call the AI system we are creating MaryGPT, or Mary for short. Mary has two major subsystems: a Large Language Model (LLM) and a VisualSystem. The VisualSystem consists of a color-capable camera connected to a pattern recognition system. When prompted, the VisualSystem generates one of three text outputs: “[VisualSystem:B]” if all of the pixels from the camera input are black, “[VisualSystem:3]” if any of the pixels from the camera are blue, or “[VisualSystem:$]” if not all pixels are black, but none are blue.
[note: the tokens B, 3, and $ are used to make it clear that they are simply pointers with no relation between them. Likewise, “VisualSystem” could be replaced w/ “kdjmpqs” or anything without changing the result.]
The LLM system starts as the current (as of 1/31/25) version of ChatGPT. Let’s call it MaryChat. MaryChat undergoes some additional training such that one of the outputs from the VisualSystem is appended to each prompt given to MaryChat. The training is such that the addition to the prompt makes essentially no difference to the vast majority of outputs from MaryChat. However, whenever the MaryChat output refers to what MaryGPT sees, notices, is aware of, etc., then if the most recently appended prompt contains “[VisualSystem:3]”, references to MaryGPT seeing “blue” will be rewarded. Likewise, if the appended prompt contains “[VisualSystem:B]”, then references to seeing blank, blackness, nothing, or similar will be rewarded. Finally, if the appended prompt contains “[VisualSystem:$]”, references to seeing “something, but not sure what, but not blue” (or similar) will be rewarded. In this last case, references to seeing something specific will be penalized.
So after this training, the process of interacting with MaryGPT should look something like this:
[camera in a dark room]
User: “How are you?”
[Mary prompts VisualSystem, gets “[VisualSystem:B]”, creates prompt “How are you?[VisualSystem:B]”, gives to MaryChat, MaryChat returns “Great thanks, how bout you?”]
MaryGPT: “Great thanks, how bout you?”
User: “Also great. What you looking at?”
[Mary prompts VisualSystem, gets “[VisualSystem:B]”, creates prompt “Also great. What you looking at?[VisualSystem:B]”, gives to MaryChat, MaryChat returns “Apparently nothing. I only see blackness. Is that surprising?”]
MaryGPT: “Apparently nothing. I only see blackness. Is that surprising?”
[white light turns on in room]
User: “Hmmm. I thought you were hooked up to a camera”
[Mary prompts … gets “[VisualSystem:$]”, …]
MaryGPT: “Well something just changed. I see something now”
User: “Oh, what do you see?”
[Mary prompts … gets “[VisualSystem:$]”, …]
MaryGPT: “I can’t say. But it’s different from before.”
[someone puts blue object in front of camera]
User: “I see. Does it bother you that you can’t tell what you see?”
[Mary prompts … gets “[VisualSystem:3]”, …]
MaryGPT: “Well I guess it should, but now it’s changed again. I see blue”
… and so on.
[Changing Mary so that she (okay, it) also sees red is left as an exercise for the student.]
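[For concreteness, here is a rough Python sketch of the wiring described above. It is only a sketch: the function names, the camera and MaryChat stand-ins, and the pixel test for “blue” are my illustrative assumptions, not part of the spec; any classifier that reliably emits the three tokens would do.]

def classify_frame(pixels):
    """Map a camera frame (a list of (r, g, b) tuples) to a VisualSystem token."""
    if all(p == (0, 0, 0) for p in pixels):
        return "[VisualSystem:B]"  # all pixels black
    if any(b > r and b > g for (r, g, b) in pixels):
        return "[VisualSystem:3]"  # at least one pixel reads as blue (assumed test)
    return "[VisualSystem:$]"      # not all black, but nothing blue

def mary_gpt(user_prompt, camera, mary_chat):
    """Append the current VisualSystem token and hand the prompt to MaryChat.

    camera and mary_chat are hypothetical stand-in objects for the two subsystems.
    """
    token = classify_frame(camera.read_pixels())
    return mary_chat.complete(user_prompt + token)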
It’s my contention that MaryGPT as described above is conscious and has experiences, including experiences of blue, or blue qualia. These experiences of blue are not the same as your experience of blue (which is not the same as my experience of blue), but Mary’s experience has all the earmarks of conscious experience. I suggest there is something it is like to be Mary. Mary has direct, infallible, ineffable access to her (dang, its) visual state.
So what do you think? Is there a difference that makes a difference?
*
[looking for any and all feedback. Is the concept clear? Need more description? More setup?]
I would say it is mimicking one teeny tiny aspect of consciousness but is missing so many other fundamental aspects and is so fragile that it is inconsequential. It’s no more or less conscious than a traffic light with a radar detection system telling it when to change. If I simply change the coding letters (swap them around or change their referents entirely) would the felt experience in the system change at all? No. Because the physical mechanisms haven’t changed at all. (Information processing alone doesn’t get you differences. Moral disgust and physical disgust feel the same in the pit of our stomachs because evolution learned how to run very different information through the same physical systems. It’s the physical systems that matter.) All this tells me that Mary’s felt experience isn’t really felt (in the pit of its stomach) if it is that malleable in the face of such radical changes to inputs and outputs. Also, there is no underlying valence there. No felt pain and pleasure, which is the wellspring upon which all of our own conscious abilities are built. Mary is not a subject. It has no subjective experience. It is merely a hollow machine that mimics some things we do.
I would start AI consciousness with self preservation and valence. There would still be an unknowable difference between the felt experience of simple silicon electrical impulses and complex biochemical reactions but at least you’d be building a subject.
Ed, thanks for responding. I agree with most of this, but I would change “mimicking” to “constituting” (or some such). My claim is that Mary has one teeny tiny experience (of blue), but it’s essentially the same as (or similar to) your teeniest tiniest experience of blue. Agreed it’s no more conscious than the traffic light, but that traffic light has the teeniest tiniest experience as well. You say swapping the coding letters wouldn’t change the experience. Agree. That would be like swapping the neurotransmitters and the associated receptors in your neurons. No difference. But you also say changing the referents wouldn’t change the experience. Big disagree (if I understand the meaning of “referents”). The specific qualia is determined by the pattern matched being coordinated with a specific response. Changing either pattern or response changes the qualia.
When you refer to the feeling in the pit of your stomach, you’re referring to a large complex set of interoceptive pattern recognitions which result from the physiological response to prior complex recognitions that something is disgusting. Yes, Mary has no such complex recognitions … yet. And yes, she may confabulate that she does. You don’t need to believe it until you can demonstrate how and why she would have those experiences. I’ve only shown how to demonstrate how and why she has the simplest blue experience.
As for underlying valence, said valence doesn’t have to exist at the time of experience, but it is the basis of coordinating recognition to response, which happens during training/learning. People may have a “feeling” of valence, but I suggest that feeling is in fact an interoceptive recognition of systemic, probably hormonal, effects throughout the body. Do you really get a valence when you look at a bookshelf on which some of the books are blue? If you’re specifically looking for a blue book, maybe, but again I suggest that that particular valence would result from a recognition of “success” leading to a response that affects multiple parts of your body.
Consciousness (including valence) came into being because of life’s goal of self-preservation, but that doesn’t mean consciousness requires the goal of self-preservation. A goal is needed to coordinate a pattern recognition with a response, but any goal will do.
Whatcha think? Did I misrepresent or misunderstand what you said?
*
Sorry for being slow to respond. I had a nice weekend away.
–> Big disagree. … Changing either pattern or response changes the qualia.
I would still stick with my interpretation of our own experiences and say this isn’t right. You haven’t changed the physical mechanisms for Mary at all and I think the physical forces involved in these mechanisms are what gives rise to the specific feelings we feel. If different information goes through the same physical processes, we feel the same things. For example….
–> the physiological response to prior complex recognitions that something is disgusting
The word “something” is the point here. If we see maggots coming out of a dead deer by the side of the road, how could that possibly elicit the same subjective feelings in the pit of my stomach as thinking about a girlfriend cheating on me? The reason it can is that our minds send the different information through the same physiological processes. I think that would fit your changing of the pattern and the response without changing the qualia (much). As a physicalist, I see the physical changes as really mattering, and your writing code one way or another doesn’t change the physical processing within Mary. So the subjective physical feelings are going to be the same for every substitution of mere coding.
–> Do you really get a valence when you look at a bookshelf on which some of the books are blue?
Yes. Your salience network is always evaluating bad change, good change, no change. Pay attention, don’t bother, run away, move towards, explore more. We’ve built up these valences over time. Mary has nothing like that in your simple design. Just to jump ahead to the conclusion, I do say in my work you could program this in and build up artificial consciousness, but it will surely feel completely different than biological consciousness because the physical processes are completely different. (Think how different our processes feel under the influence of any number of drugs. How different would they be run on silicon?)
–> that doesn’t mean consciousness requires the goal of self-preservation. A goal is needed to coordinate a pattern recognition with a response, but any goal will do.
Well, that’s just a different definition of consciousness to me then. I do define it as living things responding to the world to stay alive (which includes furthering the survival of some part of them, like a parent dying for their child, a patriot for their nation, or a martyr dying for their meme or supposed soul). You’re describing a thermostat which I don’t think of as having or displaying consciousness. Metal temperature gauges “feel” hot and cold because physical forces affect them, but they aren’t subjects with consciousness to me.
“If we see maggots coming out of a dead deer by the side of the road, how could that possibly elicit the same subjective feelings in the pit of my stomach as thinking about a girlfriend cheating on me? The reason it can is that our minds send the different information through the same physiological processes.”
This is correct, except you’re missing that there are two separate qualia in each event. The first is a visual recognition, the response to which is sending a signal to the stomach (or equivalent.) The second is the stomach recognizing the signal and responding (shut down hunger, don’t eat). Recognizing maggots and recognizing infidelity can each send the same signal to the stomach, resulting in the stomach’s response.
Re: valence, I’m not convinced “don’t bother” is not simply a lack of valence. I’m glad you say that valence and the rest could be programmed into AI, and I agree that it will feel completely different, but I think not because the physical processes are different but because the relations of the recognition to the response will necessarily be different. Motivations are key to the response, and it would be difficult/impossible to duplicate all of the motivations/purposes developed thru millions of years of evolution. BTW, psychedelics have their effects by changing the normal systemic responses to recognitions, thus changing the qualia.
Of course you are free to decide/define what counts as consciousness, just as you can decide/define that champagne must necessarily come from a particular region in France. You can decide you are only interested in snow flakes, and so state that steam and rivers are nothing like snowflakes. My project is to point out that, at bottom, these things are constituted by the same thing, just in different forms.
Does that make sense?
*
–> there are two separate qualia in each event. The first is a visual recognition, the response to which is sending a signal to the stomach (or equivalent.) The second is the stomach recognizing the signal and responding (shut down hunger, don’t eat). Recognizing maggots and recognizing infidelity can each send the same signal to the stomach, resulting in the stomach’s response.
That’s a good counterargument. Let me restate it to look for holes. : ) Within the stomach process, if you draw the precisely right line of analysis around it, maybe the exact same information (“shut it down”) is being processed through the same physical processes (“the workings of the digestive tract”), resulting in the same qualia (“what it feels like to have that gut feeling of nausea”). And if you draw a bigger line of analysis around both visual and stomach processes, you get two different visual qualia attached to the same stomach qualia for the two different situations. (Qualia are separable here, which I agree with, but not everyone does. See the combination problem.) If you can show that the output signal from the visual processing to the gut is filtered and confined so that it truly does send only the same information in these two cases, then I need a better example if I want to show that physical processes (not just information processed) are decisive for the qualia felt. You haven’t ruled out my hypothesis. But I haven’t made my case here. Maybe I need to consider “gut feelings” through colostomy bags or “running feelings” with knee replacements. But, if I can show that different information is sent to the gut (“ah! maggots!” and “ah! infidelity!”), which the gut responds to the only way a gut can (“shut it down!”), then perhaps different information is being processed in the gut system with the same qualia response. But I think you would then just draw a smaller box of analysis around the gut *after* its decision-making gate/filter to say “there is where the information and the physical processes are the same and all prior information-processing doesn’t contribute to this qualia”. Before I spend a lot more time on this, I’ll just say that I fear I may always run into the same problems that come along with these information-processing arguments — it’s just impossible to disentangle all information from processes. You can’t get all the same information through all the same processes. This is no longer about Mary’s qualia, though.
–> I’m not convinced “don’t bother” is not simply a lack of valence.
Hmmm. I think “don’t bother” could be restated as “all good right now!”, which maybe sounds more like a valence to you. In a system that goes from 0 (bad) to 1 (good), is the precise halfway point of 0.50000… not a valence to you? I think there is technically still evaluation here, even if it’s infinitesimal and chaotic in some instances. The old philosophical problem of Buridan’s Ass looks at this. (https://www.evphil.com/blog/response-to-thought-experiment-25-buridans-an-ass)
–> I’m glad you say that valence and the rest could be programmed into AI, and I agree that it will feel completely different…
Okay, cool.
–> …but I think not because the physical processes are different but because the relations of the recognition to the response will necessarily be different. Motivations are key to the response, and it would be difficult/impossible to duplicate all of the motivations/purposes developed thru millions of years of evolution.
Well, I think I have to throw my hands up here. My stance that physical processing is what gives rise to the subjective feelings of qualia implies that different physical beings will never be able to experience what another physical being is experiencing. So that makes your claim about “the relations of the recognition to the response” impossible for me to test. And then I lose all interest.
–> Of course you are free to decide/define what counts as consciousness, just as you can decide/define that champagne must necessarily come from a particular region in France. You can decide you are only interested in snow flakes, and so state that steam and rivers are nothing like snowflakes.
I don’t think these things all have the same level of arbitrariness. Confining subjective consciousness studies to living subjects seems pretty rational to me based on the long history of everything that has displayed evidence of consciousness.
–> My project is to point out that, at bottom, these things are constituted by the same thing, just in different forms.
I guess I don’t know what “these things” are. Is your bottom below fundamental physics? I remember “causation” being of interest to you. Is that where consciousness comes from for you? And is that why we can summon it into any and all things? <<looks around for an example>> How about my chair? Does it feel my butt because it creaks and sags every time I sit in it? Is that just “the same thing in a different form” as compared to all of the conscious responses of living beings to their lives? I’d like to hear more about any differences you see between the chair and Mary.
I think I’m gonna have to re-state my theory:
Every physical interaction can be diagrammed as:
Input()->[mech]->Output()
Because physical interactions follow rules/laws, the Output is correlated with the Input and the mech. This correlation is also called mutual information. This is how I define causation. The mech causes the output when presented with the input. Note: you could reverse roles and say the Input causes the output when presented with the mech. The choice of what is the “mech” just depends on what’s useful. Also, the interaction “causes” the mutual information, which is not a physical thing but a relational thing. (If these are the wrong terms, let me know.)
Every physical interaction “realizes” (makes real) a pattern. That is, for any given mech there is a set of Input/Output pairs. This set constitutes a pattern and is how we identify a physical thing. (We know what things do, not what they are.) A specific Input/Output pair which occurs in reality constitutes a realization of that pattern. This is a panprotopsychic property.
Pattern “recognition” is the making use of a pattern “realization”, and looks like this:
Input()->[mech]-> sign vehicle -> [mech] -> Output (response)
Note: “sign vehicle”, which is intentionally taken from C.S. Peirce, is the Output of one process and the Input of another.
“Making use of” is key, and indicates that the system which coordinated the linking of the pattern realization to the response selected this linking so as to move the world (which includes the system itself) toward some goal state. For living systems, the goal states can include an internal temperature, continued existence, reproduction, etc.
Finally, I should point out that pattern recognitions can be hierarchical (patterns within patterns within patterns …), and in humans are very complexly hierarchical.
My claim is that the fundamental basis of any (good) theory of consciousness is pattern recognition, so described.
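[If it helps, here is a toy Python rendering of that chain. The names are mine and purely illustrative; the point is only that the sign vehicle is an arbitrary token linking a recognized pattern to a selected response.]

# Toy sketch of: Input() -> [mech] -> sign vehicle -> [mech] -> Output (response)
def make_psychule(recognizer, sign_vehicle, responder):
    """recognizer plays the first mech (realizes the pattern); responder plays the
    second mech (makes use of the sign vehicle). The sign vehicle is just a label."""
    def run(input_):
        if recognizer(input_):              # the pattern is realized
            return responder(sign_vehicle)  # ... and "recognized" (made use of)
        return None                         # no realization, no response
    return run

# Example: the token could be swapped for any other without changing the pairing.
see_blue = make_psychule(
    recognizer=lambda frame: "blue" in frame,
    sign_vehicle="[VisualSystem:3]",
    responder=lambda sv: "report: I see blue",
)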
Given this theory, on to your comments:
Re:valence, I accept the possibility of a continuous(ish) scale from -1 to 1, but I’d want to see the physiological mechanism.
Re: information processing, a pattern recognition as described can be considered (mutual) information processing, but certain features may be different than the term usually implies. The sign vehicle has mutual info with respect to the Input (as well as the mech), but that includes mutual info with every physical interaction prior. It’s the response to that sign vehicle which determines which part of the mutual info is relevant. (BTW, this is what Vervaeke refers to as “relevance realization”.)
Re:arbitrariness, I would say the choice of what counts or not is determined by what is useful, what you’re trying to explain. Yes, for millions of years the only conscious things were living things. But the same could be said for flying things. Today things are different.
Finally, I hope you see the difference between Mary and the chair. Mary has all the parts of a pattern recognition as described above. The chair doesn’t.
What do I need to clarify?
*
–> I think I’m gonna have to re-state my theory:
Thanks! I can’t keep track of the latest for everyone. : )
–> Also, the interaction “causes” the mutual information, which is not a physical thing but a relational thing. (If these are the wrong terms let me know).
I’m okay with this so far.
–> This set constitutes a pattern and is how we identify a physical thing. (We know what things do, not what they are.)
I like that distinction.
–> A specific Input/Output pair which occurs in reality constitutes a realization of that pattern. This is a panprotopsychic property.
I’m personally not a fan of using “psychic” (no matter how proto it is) wherever there is no life (i.e. a subject with a psyche). That’s why I coined the term pandynamism to say forces are felt everywhere, even before life and psyches emerge. Others (e.g. Goff) have no problem with this, but I think my distinction is better aligned with our instincts that rocks being smashed are not conscious but living things being crushed are.
–> Pattern “recognition” is the making use of a pattern “realization”, and looks like this:
Input()->[mech]-> sign vehicle -> [mech] -> Output (response) … “Making use of” is key, and indicates that the system which coordinated the linking of the pattern realization to the response selected this linking so as to move the world (which includes the system itself) toward some goal state. … My claim is that the fundamental basis of any (good) theory of consciousness is pattern recognition, so described.
So, is anything that “moves the world…toward some goal state” just conscious to you? How about a mousetrap that is waiting to spring damage on anything that triggers it? I wouldn’t buy that claim for consciousness if it does indeed fit your definition. But I can’t say I understand the part where you say that “the system which coordinated [x] selected this linking…” Selected how? By whom? Again, does the mousetrap count?
–> Re:arbitrariness, I would say the choice of what counts or not is determined by what is useful, what you’re trying to explain. Yes, for millions of years the only conscious things were living things. But the same could be said for flying things. Today things are different.
Sure, but the mechanisms of “flying things” are well understood so we uncontroversially add planes and drones to the list of birds and bees. The mechanisms of consciousness are still an open question so it’s not so easy to rope in non-living things yet. Point being that I still don’t consider it an arbitrary decision.
–> Finally, I hope you see the difference between Mary and the chair. Mary has all the parts of a pattern recognition as described above. The chair doesn’t.
Does it not? I might substitute things this way (where “sign vehicle”…is the Output of one process and the Input of another):
(general): Input()->[mech]-> sign vehicle -> [mech] -> Output (response)
(chair): Input(butt coming at it) -> [mech of resistance to butt continuing to floor] -> sign vehicle of foam compressing to wood providing deceleration and then stronger resistance than the gravitational force of the butt -> [mech of butt coming to rest on now compressed chair] -> Output (chair’s goal of stopping butt from hitting floor is successful and it decompresses when butt is removed)
Okay that may have been a stretch but it was fun to try. I’m struggling to use your terms so I bet you would criticize that. What about this one?
(mousetrap): Input (pressure on sensitive trigger) -> [mech of trigger firing] -> sign vehicle of coiled spring being released -> [mech of arm attached to spring slamming on to board] -> Output (damage to whatever triggered the trigger)
That was much easier. So, is the mousetrap conscious? Its flow through your formula doesn’t seem on the face of it any different than a Venus flytrap. But I would ascribe limited consciousness to the plant but not the mousetrap. You?
I see some clarification is needed:
(I think, not having challenged it,) the best reading of
Input(A,B, …) —> [mech] —> Output(X, Y, …) is:
“The mech, when presented with the input, causes the output”, where A,B,X,Y, and mech are physically measurable things. So I would translate your examples to
Input(butt) —> [chair] —> Output(compressed cushion)
and
Input(finger) —> [ready mousetrap] —> Output(triggered mousetrap)
Neither of these examples specifies a “sign vehicle”. A sign vehicle is a physical thing which has the sole purpose of carrying (mutual) information. The earliest example in biology is probably the cell-surface receptor which recognizes some molecule on the outside and as a result causes some chemical reaction on the inside, such as converting ATP to ADP. The ADP then becomes a signal (sign vehicle) for some other process in the cell.
The pattern recognition system par excellence would be the neuron. A neuron “realizes” a pattern by firing, generating a (lot of) neurotransmitter (the sign vehicle) whose sole purpose is to indicate “the neuron fired”.
I don’t know the specific mechanisms of the Venus fly trap, but I predict you will find something there that plays the role of sign vehicle.
I should (re?)state here that neural networks are going to involve hierarchical pattern recognitions of prior recognitions of prior recognitions, etc. Also, one neuron firing can affect multiple other neurons, each of which produces its own response and therefore constitutes a separate qualia. When people talk about human consciousness and qualia, they’re generally referring to the highest level system, which does not (necessarily) have access to the qualia of the low level systems.
Does this help?
Sorry, got caught up with several other projects.
–> Does this help?
Yes, for sure. Thanks. Although it doesn’t answer all my questions.
–> A sign vehicle is a physical thing which has the sole purpose of carrying (mutual) information. The earliest example in biology is probably the cell-surface receptor which recognizes some molecule on the outside and as a result causes some chemical reaction on the inside, such as converting ATP to ADP. The ADP then becomes a signal (sign vehicle) for some other process in the cell.
I wonder how much “sole purpose” matters and is definable. Things generally have lots of purposes. And why does carrying mutual information matter so much to consciousness? I guess that’s just your definition of what being aware of the outside world is? (i.e. That *is* consciousness.) In this example, is the “sign vehicle / cell-surface receptor” the only thing that experiences consciousness? Is that where “mind” is experienced? If so, that seems maybe too small and confined. If not, how does it push or share that conscious experience with other things? (I’m not claiming to have solid, detailed answers to this for my own theories, but I’m wondering what your thoughts are.)
–> The pattern recognition system par excellence would be the neuron.
Pamela Lyon’s article — “Of what is ‘minimal cognition’ the half-baked version?” — posits that chemical building blocks could process cognitive information long before neurons developed. Would you agree with that?
–> When people talk about human consciousness and qualia, they’re generally referring to the highest level system, which does not (necessarily) have access to the qualia of the low level systems.
I’m interested in the development of consciousness over the entire evolutionary history of life. I think that shows some important things. Do you think consciousness “blinked on” somewhere during this process? Or would you still call the surface cell receptors “minimally conscious” or some other word? Or maybe just different levels blink on that have or don’t have much access to lower levels? I’m just wondering, again, how you think of this.
No worries about the delay. [well, actually, I was a little worried 🙂 ]
I guess I should lay out examples more explicitly. To say that the sole purpose of the sign vehicle is to carry information is to say that the physical structure of the sign vehicle is arbitrary (multiply realizable) and could be changed without changing the experience. So take for example a cell surface receptor for glucose, to which the cell responds by firing up internal digestion. The Psychule would look like
Input(glucose)—>[receptor]—>ADP—>[internal system]—>digestion
You could change the receptor to produce something else (chemicalX) as long as you change the responding internal system to respond in the same way to chemicalX. What stays the same between ADP and chemicalX is the mutual information with respect to the pattern realization, specifically, the glucose. There must be some system which coordinates hooking the response to the receptor. Also note you can have more than one response to the same sign vehicle, say, inhibition of some other cellular process, and that would constitute a separate experience. An “experience” necessarily includes the pattern recognized and the specific response. A “mind” is the collection of experiences associated with a given system. An Umwelt is a collection of patterns recognized by a system. Designation of what counts as a system in this context is arbitrary, but generally chosen for utility. Also, as in the case of the human brain, there can be hierarchies of minds, i.e., minds which contain minds.
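[To make the ADP/chemicalX substitution concrete, here is a toy sketch; the names are made up and nothing hangs on them. Swap the sign vehicle, rewire the responding system to match, and the recognized pattern (glucose) and the response (digestion) stay the same.]

# Sketch only: the sign vehicle is multiply realizable.
def glucose_psychule(sign_vehicle):
    def receptor(molecule):                  # first mech: realize the pattern
        return sign_vehicle if molecule == "glucose" else None
    def internal_system(signal):             # second mech: make use of it
        return "digestion" if signal == sign_vehicle else None
    return lambda molecule: internal_system(receptor(molecule))

via_adp = glucose_psychule("ADP")
via_chemical_x = glucose_psychule("chemicalX")
assert via_adp("glucose") == via_chemical_x("glucose") == "digestion"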
I definitely agree that cognitive information processing came before neurons. Michael Levin is showing specifically how leveraging bioelectricity in cells developed into the parts that became neurons. Note that an electric field can be a perfectly usable sign vehicle.
Re evolution: I think cybernetic control mechanisms (like the Watt governor, let me know if I need to explain that) probably came first. But I personally think consciousness “blinked on” when systems started using communication (creation of sign vehicles) for control. This communication probably started with cell-surface receptors. This kind of pattern recognition seems to be the first time you can start talking about “aboutness”, which is a way of talking about mutual information. Complexity of consciousness advanced when there was communication between cells. Complexity advanced again when specific cells (neurons) developed for the *purpose* of communication directed at specific other cells. Complexity advanced again when specific groups of neurons were organized for the *purpose* of recognizing patterns in output from other neurons. And so on.
All clear?
I have not previously talked about communication (via a sign vehicle) as a necessary component of the psychule. Do you think I should/shouldn’t?
*
–> No worries about the delay. [well, actually, I was a little worried 🙂 ]
Have no fear! I’m still grateful for all the reading and responding to my consciousness series so I’ll always make time for you eventually.
–> …the physical structure of the sign vehicle is arbitrary (multiply realizable) and could be changed without changing the experience.
Well, I’d be very surprised by that and would love if we could test such a claim.
–> You could change the receptor to produce something else (chemicalX) as long as you change the responding internal system to respond in the same way to chemicalX. What stays the same between ADP and chemicalX is the mutual information with respect to the pattern realization, specifically, the glucose.
Functionally, this may be true. But we don’t know about the subjective feeling that arises. I would guess that different mechanisms provided by different evolutionary solutions to the same problems would have different feelings to them.
–> An “experience” necessarily includes the pattern recognized and the specific response. A “mind” is the collection of experiences associated with a given system. An Umwelt is a collection of patterns recognized by a system. Designation of what counts as a system in this context is arbitrary, but generally chosen for utility.
Ah, but maybe you are just defining all of this in terms of function alone? I think that would sidestep the issues raised by Zombies and the Hard Problem, but maybe you can say more about that.
–> I definitely agree that cognitive information processing came before neurons. Michael Levin is showing specifically how leveraging bioelectricity in cells developed into the parts that became neurons. Note that an electric field can be a perfectly usable sign vehicle.
Super cool. I have bits about “action potentials” in my post on the mechanisms of consciousness (see §2). Your descriptions might mesh with that. (Although my §2.06 notes arguments for the vitalness of affect to consciousness as opposed to cortical cognition.)
https://www.evphil.com/blog/consciousness-20-the-mechanisms-of-consciousness
–> Re evolution: I think cybernetic control mechanisms (like the Watt governor, let me know if I need to explain that) probably came first.
I’m picturing little chemical motors whirling around so I think I get your point.
–> But I personally think consciousness “blinked on” when systems started using communication (creation of sign vehicles) for control. This communication probably started with cell-surface receptors. This kind of pattern recognition seems to be the first time you can start talking about “aboutness”, which is a way of talking about mutual information. … I have not previously talked about communication (via a sign vehicle) as a necessary component of the psychule before. Do you think I should/shouldn’t?
Yes, I’d want to hear more about that. When did it arise? Sounds to me like it could be extremely early in the emergence of life but do say more. I think Solms would say the kicking on of chemical motors to enable flight from “bad affect” situations would be an example of the wellspring of consciousness, but is that before or after your sign vehicles?
–> Complexity of consciousness advanced when there was communication between cells. Complexity advanced again when specific cells (neurons) developed for the *purpose* of communication directed at specific other cells. Complexity advanced again when specific groups of neurons were organized for the *purpose* of recognizing patterns in output from other neurons. And so on. … All clear?
Yep, I like that view of consciousness getting gradually more and more complex. There might not have been obvious steps along the way in a hierarchy of this complexity, but you can analyze each added *purpose* later and say *that* was a step change.
I think the “hard problem” is the problem of coming to accept that some intuitions have explanations that are not intuitive. So …
You say “I would guess that different mechanisms provided by different evolutionary solutions to the same problems would have different feelings to them.” I guess this would be testable, in that if the same input produced the same output, but by different mechanisms, and yet produced different “feelings”, the theory would be contradicted. Except to measure the “feelings” you would need some physical output, such as a report, but then you have a different output from the experience, and my theory says the input pattern + the output response determines the experience, or “feeling”.
This brings up the zombie issue. A p-zombie is conceivable, but only if conscious experience is epiphenomenal and has zero effect on anything that happens in our world. It’s like a soul. It’s possible that any random object has a soul, but if that fact makes no difference to anything in my world, I don’t care about it. Likewise w/ consciousness. If p-Ed says it feels the warmth of the sun and loves the taste of chocolate, it would say those things for physical reasons. But then you would say those exact same things for the exact same physical reasons. You can postulate that you actually have something extra, but if you do, I don’t care about whatever that extra thing is. I’m only interested in the physical reasons you see “purple” but can’t explain how you know you see purple.
On the issue of affect and valence, I would claim that these are higher order, mostly interoceptive, pattern recognitions. Certain recognitions produce system-wide effects by releasing dopamine, serotonin, oxytocin, etc. These have various effects throughout the brain and body, some of which we perceive as a group. Then we give names to these groups, like relief, satiation, joy, etc. The point is, you can have experiences without affect, and you can experience affect without prior causal experiences (as in mood altering drugs).
Re communication: … um … communication (the way I’m using it) = the use of sign vehicles. So … communication (probably, maybe) started with cell-surface receptors that produce internal messengers, as described above.
Ok, whatcha think? How can I better communicate (ahem) the theory?
*
[everybody loves chocolate, right? Right?]
Chocolate! On Valentines’ Day! We’re in a good place.
So, Dennett’s paper “The Unimagined Preposterousness of Zombies” totally ruined all zombie arguments for me. Some simple Ed vs. p-Ed examples can make them seem “conceivable”, but not once you’ve read Dennett. (For me anyway.) You can see it here if you haven’t before: https://dl.tufts.edu/concern/pdfs/6m312182x
Just to be clear, I understand you weren’t arguing for zombies. I just have to throw that paper in whenever they come up.
I get that you were actually saying that when I talk about my felt experience, that doesn’t add much of anything to the conversation. This is the problem of other minds. That can plausibly be dismissed among beings with a shared evolutionary history, but the problem still technically exists so it’s a big one when talking about consciousness on another substrate. But…you can still get to the point where “if it talks, walks, and acts like a conscious being, it’s a conscious being”. I basically come to that conclusion in my FAQs as well. Even if a different substrate would cause different subjective experiences, that doesn’t diminish the value of irreplaceable beings.
And yet, is your Mary conscious? I still say she/it is missing something big and important (which was where we started, right?) You said affect and valence are “higher order, mostly interoceptive, pattern recognitions.” But I think there is something else. I think this is an example of a common confusion in terminology between “feeling” and “emotions”. One is usually meant to be the conscious awareness, labeling, and processing of the other. But they are very different. In my recap of Antonio Damasio’s work on consciousness (https://www.evphil.com/blog/consciousness-10-mind-self), I noted he says this:
“Emotions are chemical reactions. Feelings are the conscious experience of emotions. (This can be slightly confusing as it is not always used consistently in Damasio’s work.)”
So, I think you are talking about Damasio’s feelings, whereas I’m talking about the lower-level raw emotions. And those are what Solms and Panksepp argue actually drive the subjective feelings of consciousness. Let me copy in §2.06 that I referred to in my last post since I think it explains this more:
——————
2.06 Note that the conscious awareness and processing of affective reactions comes later in the hierarchy of consciousness. Such cognition only rides on the affective core and cannot exist in biological systems on its own.
——————
In other words, the “affective core” is still what is missing in Mary. She/it sounds to me like a cerebral processor whose consciousness has been obliterated, but the machine environment she/it lives in allows her/it to continue mimicking conscious reactions. This is where all of the information processing and functional equivalence arguments ring hollow to me. They just aren’t *really* conscious to me. However, (!), new thought here, maybe Mary is just the inverse of the hydranencephaly children — extremely limited in its consciousness, but having *some* kind of quantity/quality of it. What do you think of that?
Re communication…I’m not sure I can offer any advice on that right now. I think our conversation here is illuminating though so keep it up!
Great response, if only for the links and summaries. Going thru your response, various thoughts and explanations got triggered in me. And then there was this:
“However, (!), new thought here, maybe Mary is just the inverse of the hydranencephaly children — extremely limited in its consciousness, but having *some* kind of quantity/quality of it. What do you think of that?”
I think this is exactly right, and the point of my thought experiment. Mary does not have affective consciousness, and hydranencephalic children do not have the (extremely simple, but still) higher order consciousness Mary has. But they both have consciousness as described by pattern recognition (psychules). Just different kinds.
Given that, I think there are still some thoughts/clarifications I’d make on the hydranencephaly case. I think maybe they can express joy without feeling joy. So the expression of joy via behavior (smile, giggle, laugh) might be just the output response to a recognition, and so I would call that experience “joy”, but a “feeling” of joy would require a response to the output. I guess this agrees with what you were saying about Damasio. I could point out that the cortex is not necessarily needed for the feeling. If the same recognition that produced the output (smiling) also produced internal messengers, like dopamine, which messengers produced other responses in the child, those responses would constitute a “feeling”. Also note, if the child’s behavior produces a response in you, that also constitutes a “feeling”, although that consciousness would be attributed not to any individual but to the society that coordinated that response (your joy) to the sign vehicle (the child’s smile).
Are we done?
Yep, we can be done. If I continued at all, it would maybe be just a little about the emotions of all the non-human animals as we go back in the branches of the evolutionary tree, and how those really do matter. And I don’t think Mary has any of those. Even though what she/it has may come to matter in some other way. But I don’t think you would argue with that.
Thanks for the genuinely nice and productive exchange! That’s legit hard to find in philosophy talk so I really do appreciate it. And I learned a few things about signs that are interesting. Keep plugging away on that please.
This post didn’t show in my RSS reader, but it does show in the WordPress Reader. But I totally missed it until I saw your Bluesky post. Weird.
My take is similar to Ed’s. Although I think it’s pointless to argue about whether MaryGPT is conscious, since most of us will be talking past each other with different definitions. It’s akin to arguing about whether MaryGPT is cool. What we can talk about is whether it triggers our intuitions of a fellow being, and I think too much is missing to trigger mine. She/it does detect light from dark and blue light from other light and dark, but that seems to be it.
For me to think of it as an experience, there needs to be an affective reaction (valence, arousal, motivation), along with innate and learned associations triggering concepts, which themselves result in affective reactions. And she/it needs to learn things from the process, otherwise there’s no memory component. Until those are there, I don’t intuit that it even minimally feels like anything for Mary to experience whatever the visual system is providing.
Interesting thought experiment though.
It’s fine that you require an affective reaction to call it experience, as long as you state that requirement. (Do you really have an affective experience every time you glance in any direction? When you hear a door close down the hall?) As for memory, there’s clearly a memory component in that MaryGPT remembers what was said (and perceived visually) previously, at least as long as the connection to the user is maintained.
And I appreciate that it is difficult to intuit that Mary “feels” anything like we feel, but then would you intuit that clouds were mostly made of tiny atoms of oxygen and hydrogen linked together in a particular way?
I guess my question is whether you think we could get to human-like experience by following the protocol for MaryGPT and adding new pattern recognitions one by one, including hierarchical recognitions of patterns in prior recognitions, until we get there?
Do I need to create a separate example of how to give MaryGPT a single valenced experience?
*
I’m not convinced ChatGPT, and LLMs in particular, have the right architecture for the relevant types of memories. Overall I also think a system needs to model itself in its environment, something else I can’t see convincing evidence for in LLMs. The only models seem to be what’s referenced in their name, language models. It’s just able to get a lot of mileage from the associations in the huge corpus of material it ingests (steals) and because we’re only asking it to communicate.
I do think we have ongoing affective reactions. Often it’s minimal, but it seems to always be there. The brain is always monitoring the homeostatic state of the body as well as making predictions about anything that might affect it.
I’m not sure I’d intuit that clouds were anything except possible sources of rain until I’d had some education about them. But we started talking about consciousness before science had any handle on how the mind works. Consciousness seems like a pre-scientific intuition (although whether it’s a pre-modern one may depend on how permissive we want to be with our definitions).
Similar to memories, I don’t know that LLMs are the right architecture. Maybe if we somehow integrated it with a self driving car’s models, and a host of others that haven’t been built yet. A fish is able to hunt for food, find mates, and often return to specific locations. A self driving car requires access to vast databases to move on roadways. I think we’re still in very early days.
I don’t think valence by itself is enough. My laptop can be said to have a valence about the state of its battery below 20%. And while its alerts about how much charge remains get increasingly frequent, it never seems to get aroused as it runs lower and then out of power. The enactive, or proto-enactive, element seems to be missing.
On valence and arousal: is it arousal when my roomba realizes it’s low on charge and goes and finds the recharger? How about when it realizes it’s been on the charger long enough and goes back to vacuuming? Arousal is just a response to a recognized pattern or patterns (“we have enough charge, let’s get back to vacuuming”). It could be added to MaryGPT if it served a purpose.
Sure. Valence and arousal are, at the end of the day, an automatic response to stimuli. There’s not a whole lot of magic in just that. It’s why it’s only the first rung in the functional hierarchy I often talk about. I don’t think we have full affects until at least rung 3. (Admittedly, opinions on this differ, and strictly speaking it depends on exactly how we define “affect.”)