Reece Nathan Russo

The Great Russo, Sorcerer Supreme

Is the Simulation of Consciousness Impossible


Since I don't experience consciousness the way you do, I can't help but feel this debate is oversimplified. Therefore, a non-materialist might rephrase this title as, "Philosophical zombies are contradictory, therefore a simulation of consciousness is impossible."


Self Awareness


I don't see how zombies talking about zombies is any less coherent than zombies talking about anything else. Some would call that begging the question. It's not begging the question, as the assumption isn't that a simulation of consciousness is actually conscious, but that a simulation of consciousness is possible. And if consciousness is necessary, why can't you replace consciousness with some other component of "equal complexity" that provides the necessary information consciousness provides? From my perspective, the hard problem of consciousness exists, but I don't think it needs to exist. Why does continuity require these two conscious entities to be different? Surely both could continually be the same conscious experience. But all the information provided by conscious experience must also be present in the simulation, by my Information Equivalence theorem. I'd agree with you, but I don't think consciousness is binary; it's a gradient. The p-zombie experiences the memory of consciousness. P-zombies don't experience, since they lack consciousness. So the p-zombie must contain the exact same information regarding "conscious experience" as the assumed conscious entity.


In Simulating


I am assuming that it is possible to simulate consciousness, and so we must grant that it would be conscious. With the p-zombie strawman slain, this closes the gap between simulations of consciousness and actual consciousness: if the two systems are information-equivalent, then that simulation must be just as conscious as the assumed conscious entity. Some people are more 'conscious' than others, just as I think 'people' are more conscious than cats. So either the concept of a p-zombie is contradictory and it in fact does experience consciousness, or qualia is meaningless. The conscious experience of red is in fact the interface of the information with my decision-making apparatus. But maybe to be conscious you need to add a, then b, then c, and no other way gives you consciousness. The concept of philosophical zombies is often used as a rationale for requiring something "more" than simply physical processes to explain first-person conscious experience. Although Chalmers' work on defining the hard problem of consciousness is elegant and logical, I don't see its ultimate utility.


Bridging the Gap


I don't think there's a 'gap' for consciousness to fit in.


I have built a zombie, a creature which is outwardly indistinguishable from a conscious human, but lacks conscious states. A sufficiently large lookup table of inputs->outputs may be functionally equivalent to consciousness, but it doesn't mean it is consciousness. So consciousness is entangled with various possible outcomes and the experience of consciousness is itself in a superposition with various versions of reality. Those things that aren't conscious could nonetheless claim they are conscious and in fact act exactly as if they were.
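A minimal sketch of the lookup-table idea above, assuming the table keys on the entire conversation so far (which is why its size explodes combinatorially for any realistic dialogue); the entries are illustrative, not from the thread:

```python
# Pure input->output retrieval: black-box adequate on covered inputs,
# with no claim to any inner state. Entries are illustrative.
LOOKUP = {
    ("what do you see?",): "a red apple on the table.",
    ("what do you see?", "what is it like to see red?"): "vivid and warm.",
}

def zombie(history: tuple[str, ...]) -> str:
    # Retrieval over the full dialogue history; nothing else is consulted.
    return LOOKUP.get(history, "could you rephrase that?")

print(zombie(("what do you see?",)))
```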


Therefore rather than say that consciousness exists in both cases, I would go the other way and say that consciousness is not causally relevant. How does the organization of information give rise to the rich experience of first-person conscious experience? The problem is we don't know what causes consciousness, maybe it's information equivalence, but I see no particularly strong reason to think so. Otherwise you have something which has all the outward appearances of consciousness but is not consciousness, and you're basically just describing a p-zombie. It isn't that AI isn't conscious, it's just that its consciousness degree, compared to ours, sucks, like many other mammal consciousnesses. Your argument inherently assumes consciousness is a physical process, and then argues that this proves that consciousness must therefore be a physical process.


Yes, but if you define consciousness purely functionally then you're begging the question. My argument does not assume anything about consciousness.


Is Detection Possible


I can ask a p-zombie a million questions designed to get at its conception of its conscious experience, yet it responds in the exact same way as the conscious entity. It seems like you're saying that if something behaves as if it's conscious, its internal state must be identical to that of a conscious being, thus it is conscious. You might have a perfect simulation, but maybe it doesn't match this external criterion, so it's not conscious. The conscious entity has experience by definition, and the zombie does not, by your assertion. If our idea of human consciousness is indeed reducible and equal to just some form of information processing, is consciousness merely an illusion? Would you really argue that a rule like "if sweetness is detected, then speak 'I just tasted a sweet thing'" was a conscious experience? Perfect simulation means a simulation that is indistinguishable from the real thing through any and all analysis of its behavior. You're basically saying if something behaves like something that is conscious, that thing is itself conscious. Not necessarily - this assumes that our conscious experience does have a role in shaping our future behaviour, which isn't necessarily true. But a conscious experience must be capable of shaping behavior, or it is not an "experience" in any meaningful sense. The p-zombie experiences the memory of consciousness, whereas I experience the actual event, or at least that is what most non-determinists believe. I only have knowledge that I am seeing red because of the conscious experience of seeing red.

I don't think rejecting it altogether is meaningful: the fact is we say we have subjective experience, so it must be accounted for in some way. And if consciousness is necessary in how we process inputs, why can't you replace consciousness with some other component of "equal complexity"? I think it's possible that a p-z could have a decision-making process that is equally complex but lacks consciousness, right? If by "operation" you mean the way that it behaves, I don't think this is the case, and I have tried to demonstrate this using earthquakes. It's certainly not directly observable, but conscious experience is necessarily linked to the physical world by way of its effect on our behavior.


Arguing the Explanation


People arguing that an explanation of consciousness requires more than physical processes would not have been using zombies as an argument in the first place. I think there's some mysticism assigned to consciousness in our culture and it might not be necessary.


If the reasoning for why that is, is that "anything that models the behaviour of consciousness is by definition conscious", then you have a circular argument. If we had the necessary and sufficient conditions for consciousness, we could simply ask ourselves: does the p-zombie meet them? "There is no meaningful distinction between a simulation of a thing and that thing itself." I think this is demonstrably untrue. Now you're trying to define consciousness as the appearance of consciousness, but that's begging the question. "A simulation of consciousness is conscious. 1) Conscious experience shapes behavior of the entity in possession of it." Only if conscious experience is phenomenal. My argument says that all components of consciousness that interact with the physical world must be present in the p-zombie.

If experience is bogus then that would mean there is nothing meaningful behind the concept of consciousness. Whether it turns out to be an "illusion" or not I don't think carries a meaningful distinction. In order to proceed with your argument that a simulation of consciousness is conscious, you have to make the case that consciousness is necessary to produce identical outputs given the same set of inputs.


The answer is obviously no, he needs no conscious comprehension of what he is doing to perform the algorithm. We're assuming p-zombies exist in our reality, and so it follows that they have consciousness with respect to our reality. I don't think epiphenomenalism is a very attractive view, but I'm not sure it's really incoherent.


Because my conscious mind, such as it is, has really only offered to do a victory dance in the endzone. b) A p-z is a perfect simulation of a human. So, to summarize: Q: Is the p-z a perfect simulation of humans? You might say: well, the representation of that information is different in the p-zombie vs. the conscious entity. Since it has no access to subjective experience, and no consciousness, it cannot formulate sensible responses to questions about them.


Then again, I don't think anyone has been able to show a macro-scale result of superposition/entanglement. Your earthquake example is analogous to a conscious entity produced from a biological brain and a simulation produced by a program running on silicon. You might say well there must be some "core" information that is shared among all humans, and that's what causes consciousness, but I don't buy that either.


Truly Indistinguishable


Am I saying that a p-zombie is "indistinguishable from a truly conscious person through outward behavior"? Yes, I meant just outward behavior.


What we can take from philosophical zombies is that consciousness is only ever evident to oneself. We must assume that the simulated ghosts experience consciousness in an equivalent manner to the real thing. If you define consciousness as just a physical process, then the distinction between persons and p-zombies is, by definition, nonexistent. I believe that I have sufficiently shown that any process must be information-equivalent in order to be a perfect simulation. I don't think this is true, but, even if it was, so what? Maybe there's a magical field surrounding the Earth such that everything inside it is conscious, and nothing else is. So basically your position is that p-zombies don't exist, ergo one cannot accurately simulate a conscious entity? And so our "simulation of language comprehension" is in fact language comprehension; there is no meaningful distinction between the two. "It is very well thinkable to experience sensation without it ever having a chance to influence your behaviour." I don't think this is possible. All that zombies have is past tense, second-hand, whereas if real consciousness does exist then it would be present tense, first-hand. Believing you're conscious is being conscious. For a p-zombie to completely and accurately mirror a conscious person's responses, they must have the same Kolmogorov complexity/intrinsic complexity/internal information.


If I damage your brain, is your consciousness not impaired? If I destroy your brain, does your consciousness not disappear? The point is the reason we're conscious might not have anything to do with humans at all! B) Resort to some form of epiphenomenalism. C) Conclude that conscious experience just is physical events. Is it meaningful for someone outside the box to ask whether or not you are still experiencing consciousness? I don't think this is any less intuitive than the standard many-worlds interpretation. "If you state that all that matters of consciousness is its outward behaviour." This is not what I'm stating. My question to you would be, why think consciousness is something specific to humans? If this was the case, it would be unnecessary for consciousness to be a component of the computer's system.


I read through information processed by my conscious mind by way of epiphenomenal events. Are conscious experiences exactly equal if they are representations of equivalent information, or if the representation of information is not too dissimilar? It's not to make an argument for dualism, it's just to show us that we don't have a good enough understanding of consciousness. What we are creating is a hypothetical simulation that is so close as to be indistinguishable from an outside observer. If all that created my consciousness was removed, I would simply cease to exist. If I build an AI which can pass the Turing test, I haven't necessarily built something that has phenomenal states. Then they are conscious and the conception of the p-zombie is contradictory. Moreover, the metaphysical possibility of zombies seems compatible with epiphenomenal dualism, though it may not be compatible with some variety of panpsychism. The problem however, is that I do experience subjective awareness and, for lack of a better term, 'experience'.


I can imagine invisible pink unicorns, and I don't think they're a logical impossibility. The flaw found in the p-z simulation is that they don't experience qualia. Regardless, the majority of philosophers don't even think that zombies are metaphysically possible. If I did not have that experience I would not be able to identify the color of the object in my field of vision. Yes, but if you define consciousness purely functionally then you're begging the question - you started off claiming that p-zombies are impossible.

Consciousness and Memory are closely tied, but I don't think they're the same. Let's assume there's an invisible unicorn sitting in the nearest black hole that produces all our epiphenomenal conscious experiences. Until we find the necessary and sufficient conditions for consciousness, we won't be able to discount p-zombies.


All we know is that he thinks that those things with their particular functional roles are conscious. On a related point, we can't assume that conscious experiences are "private" just because we haven't worked out how to share them yet.


We can't even detect consciousness or measure subjectivity, so it's premature to conclude that it has to be measurable. By contrasting how humans process information and how p-z's process information, we isolate what consciousness is. Yeah, I think you have built too much into the concept of perfect simulation. You'd also better define consciousness, because you seem to have included free will that influences the material world in the package. If we call C(X) the measure of consciousness for system X, then for a perfect simulation Y of X, C(Y) >= C(X) must be true, because the simulation must be information-equivalent. I simply respond to them as is appropriate to respond to conscious beings. In fact I would say this is the "naive" consequence of consciousness and many worlds. There are robots that implemented some sort of consciousness as a learning mechanism, and it did help them. I think the key will be some theory involving information in the context of physical processes. The first is from Consciousness and Language: "I do not infer that my dog is conscious, any more than, when I come into a room, I infer that the people present are conscious." The problem here is essentially a lack of tools to ascertain whether a given subject is a conscious person or a p-zombie. "If you define consciousness as just a physical process." I'm not defining it as a physical process.


Searle Isn't A Functionalist


I'm aware that Searle isn't a functionalist, but I would have thought that his biological naturalism disinclined him towards believing in the possibility of zombies. I don't see how this argument works unless you already assume functionalism to be the case. If that's true, and it's true that a change in behavioral functions implies a change in information, then the two entities don't contain the same information. A D-Zombie is just like a P-Zombie except it breaks down when you talk to it about consciousness.

As a physicalist, you presumably posit that consciousness is a consequence of the configuration of the brain, right? More specifically, there is no possible line of questioning that will reveal a difference in internal representation between the p-zombie and a conscious entity. Yes, and the conscious entity does experience something that the zombie doesn't. They have an equivalent output for all possible inputs, and yet it's assumed that they contain less complexity than a conscious entity.


I think it can be interesting in some ways to ponder, but attempting to actually reach a conclusion is impossible. Q2: If consciousness is necessary to produce the outputs we produce given a set of inputs, is it impossible to substitute a component of equal complexity in place of consciousness? The point is that the experience shapes my behavior in a way that is impossible without having an equivalent experience. You may be interested in this discussion from not too long ago: in short, I think there is something meaningful to be said about continuity of consciousness. I think this leaves the materialist with a few options: A) reject subjective experience altogether. Taking this line of reasoning to its conclusion, there is no meaningful distinction between a simulation of a thing and that thing itself. That simulation could then not only simulate us and a behavioral zombie, it could also fool our MRI scanners. How are the "memory of consciousness" and "consciousness" different? By definition a philosophical zombie is indistinguishable from a truly conscious person through outward behavior. So it's possible to believe that you experience when you actually don't experience? Is this the explicit reasoning that Searle presented, or are you making an inference based upon his denial of functionalism? Luckily we're not infallibilists anymore - we can have knowledge that P even though it's logically possible that P is false. So the definition includes that a P-wombie is functionally equivalent to P. Suppose that a P-wombie is never a P-zombie. However, I don't see the relevance of whether the position is held by a minority. It takes in input and produces the exact same output for all possible inputs as a conscious person, but without having an equivalent internal experience as the conscious person. Just because we are the ignorant observer with regard to consciousness, we shouldn't be satisfied with "the earth around us shakes = earthquake." So if you're willing to grant the label consciousness to the "real ghosts", then you must also do so for the simulated ones. I don't initially assume anything about consciousness. The problem is that the very concept of a philosophical zombie is internally inconsistent, and this result has real implications for the nature of consciousness and the question of AI consciousness. This also motivates a further line of inquiry that is more practical and down-to-earth: how does the experience of consciousness and associated outward behavior arise from information representation/processing? I just treat them as conscious beings and that is that. On my view such a case would be impossible, because we know that the structure and function of the brain are causally sufficient to produce consciousness. If this allows all important components of consciousness, why do we need to assume anything more? I don't think it's a boring discussion, I just think it's completely impossible to reach a conclusion on the matter because it transcends sensory experience and even logical analysis. If you don't think that's a loose end which needs explaining, then cool, you're an eliminativist. It seems like you're trying to say information-equivalence is this necessary and sufficient condition, but I don't quite see how you can think that. Immediately after this, if you did not kill yourself, you yourself know that you continue to experience consciousness, but this information is inaccessible to an observer outside the box.
"Otherwise you have something which has all the outward appearances of consciousness but is not consciousness." My argument says that all components of consciousness that interact with the physical world must be present in the p-zombie. If I introduced you to my friend Mark, but assured you he was in fact a p-zombie, you'd be rightly skeptical. Accepting the possibility of zombies implies one or both of the following premises: both premises are highly dubious. However, I maintain that the negation of functionalism implies the possibility of zombies. We only know that Searle believes that those particular people and his dog are conscious.


I agree that the biggest problem for materialist explanations is coming up with a satisfactory explanation of subjective experience. I don't think this distinction is a necessary part of my argument though. The physical actions of the man create a "realm" where information-objects interact and this interaction is experienced by the algorithm. I guess I wonder what the requirements are for something to be a "perfect simulation". One of the things that makes consciousness special is that it is inescapably wired into our behavior. Why can't consciousness be a side product of how we produce outputs given a set of inputs? P-zombies formulate the concept of p-zombies due to an additional nonphysical component not present in conscious humans. In that plane, conscious beings exist that are attached to the zombie bodies in the physical plane.


IMO consciousness without subjectivity is not consciousness anymore. "But all the information provided by conscious experience must also be present in the simulation by my Information Equivalence theorem." Only if you assume that the information provided by consciousness is necessary to how we produce our outputs. Perhaps the non-physical laws at particular worlds ensure that only certain agents are conscious. The zombies would just agree to call it "red" and call it a day. But a p-z isn't a perfect simulation of a human; that's kind of the point of the p-z. Probably not, but I think it's fair to require significant testing instead of just brief non-verbal encounters. So even though they are equivalent from a functional perspective, most people would agree the human possesses consciousness and the android does not. Even from a practical point of view, it's hairy: if p-zombies and conscious people are equivalent, should simulations then get the same rights? If you simply define consciousness as something that is metaphysical, then you might as well claim invisible unicorns are pulling your mental strings. The only thing that can be imagined to act exactly like an earthquake is an earthquake; thus they are information-equivalent.


It seems like Searle considers zombies logically possible but metaphysically impossible, although I mostly have the impression that he treats them with a "who cares?" attitude. Maybe there are infinite ways that consciousness arises, predetermined by the nature of the universe, and each human is just one of those infinite instantiations.


Does Observation Count


Sure, an ignorant observer may make the mistake that it's an earthquake, but that still doesn't make the hydraulic slab a complete simulation. Why couldn't we have those same beliefs in a world where it is true that we actually have those mental states? Biological processes take into account certain features, say F, such that it's unclear whether simulations of F are F, e.g. What I do is derive true statements about consciousness by an analysis of its output. "You reduce consciousness to a data table." You have a very impotent view of information. I think different experiences or even an absence of experience can lead to the same behaviors. I don't mean to change the topic, but I'm curious what you might think of a situation. Because experience is consequential, the lack of it would necessarily result in different behavior or necessarily require different structure. They are information-equivalent, and it is claimed that the conscious entity experiences something the p-zombie doesn't. Zombies could have mental representations like "I have had such-and-such an experience" even though such a representation does not accurately depict the world. Is it a gradual split as you gain consciousness from conception through to infancy? You may assume that conscious experience happens outside of the physical world. I disagree, as different computational systems may simulate each other's behaviour without necessarily emulating each other's internal structure, processing or data-representation. Given your interpretation, conscious minds must split apart every time a wave function collapses; this must require a gargantuan number of conscious minds experiencing exactly the same thing at every moment. A simulation of a person running in a computer would speak of "reality" in the same way that we do outside of that simulation, and they wouldn't be wrong.


Therefore no, I suspect a human being cannot respond in "infinite" ways to stimuli. I can presumably count to infinity. "An entity with full command of language can speak of any experience it possesses." Assuming language can express any experience. Nonetheless the programs don't process chess games the same way humans do, and don't know what chess actually is. I think what you're missing here is that a P-Zombie is supposed to have a brain that works like a non-P-zombie's brain. Q1: Is consciousness a necessary part of how we produce outputs given a set of inputs? P-zombies formulate the concept of p-zombies due to a physical difference between them and conscious humans. I can open up a text editor right now and program a series of responses to questions about subjective experience. Although I agree that the perception of continuity is important for having an identity, the perception of continuity can be false. When you probe the algorithm for questions about its inner mind, it will answer in the same way that people do. I don't think that earthquake analogy holds. If you state that all that matters of consciousness is its outward behaviour, naturally you're going to conclude that there is no difference between a conscious person and a p-zombie.


If only some of them were attached to bodies, the ones with no attached consciousness would be p-zombies and the others would be sentient individuals. So, if you have thoughts/feelings, as opposed to a simple input-output logic gate scenario, this would be the difference. So behavioral zombies can exist; if you put them under an MRI scanner they would however show up as not being regular humans. There is a real computational constraint that requires that the complexity of a simulation at least equal that of the thing it is simulating. Additionally, you might think that being acquainted with the experiences themselves is a kind of non-propositional knowledge. The last is clearly the interpretation that we want, since personal identity does not depend on stimulus.


Responding To Further Stimulus


The way I said it = point given. Tononi uses information theory to propose a description of "how" conscious a system is. Then our perfect simulation is not conscious. Of course that last experiment would be a little silly, as that simulation would already mean that we are nothing more than information processing. The results are the same if both the simulated and actual ghosts are trapped in a maze chasing Pac-Man. The perfect detector was meant to mean something that could detect all possible properties of an earthquake with perfect precision.


How do you know humans have knowledge of their experiences without recourse to their account of their experiences? This is because what OP said was merely an illustration of a status, such that the thing being illustrated is the premise. As far as I can tell that is, somebody else can correct me if they think they have a better understanding.


If they succeed in making an artificial human that can convince me it has qualia, I'll pay up. I think this is where I found myself completely agreeing with you. If it does, then the negation of functionalism implies the possibility of P-zombies, and thus that Searle believes in the possibility of P-zombies. Well, I don't think this is only a practical consequence. But if it were hardcoded, that would imply a noticeably different neural architecture, making it no longer a zombie by definition. With a cooperative test subject, you could begin to correlate specific types of wonkiness with specific types of experiences. It is worth noting that the non-materialist position gets falsified if brains are proven to be fully physically deterministic. Or is there some convincing reason to believe our minds are special compared to all other properties exhibited by material things? In order to argue that computers are conscious, I think you have to make arguments that a) consciousness is necessary to produce the same set of outputs, not just that a system of equal complexity must be present. But it is interesting to think about: we would be looking for precisely the areas where our attempts to simulate have failed. The whole point is that it does provide consequences, but because it is non-physical, it is non-simulable by any purely physical means. If our neural architecture and environment gives cause for us to have philosophical discussions, our zombie twins would have those same discussions. I don't think it is pointless at all. Therefore, either qualia is a meaningless concept, or it is purely dependent on information representation, and thus the p-zombie has conscious experiences in the same manner that a conscious entity does. The amount we would both contribute to the meaning of the universe, and the substantiality we could produce, would be different. If input "how are you feeling today Al?", then generate random feeling response: "i'm not really feeling like myself today." (This is sketched below.) But nobody will think it is conscious, except maybe Chalmers and the like. You can't simply assume that consciousness has to have an observable effect on the world. This violates the premise that p-zombies are indistinguishable from conscious humans, and so is invalid. Given that p-zombies function completely objectively like machines, they aren't conscious and the difference remains.
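A hedged sketch of the hardcoded-response idea quoted above ("If input 'how are you feeling today Al?', then generate random feeling response"); the question keys and canned replies are illustrative, not from the thread:

```python
import random

# Pre-scripted replies keyed on the question text; picking one at random
# gives surface variety without consulting any inner experience.
CANNED = {
    "how are you feeling today al?": [
        "i'm not really feeling like myself today.",
        "pretty good, maybe a little tired.",
    ],
    "are you conscious?": [
        "of course i am.",
        "as far as i can tell, yes.",
    ],
}

def respond(question: str) -> str:
    # Pure retrieval plus a random pick; no inner state is consulted.
    replies = CANNED.get(question.strip().lower())
    return random.choice(replies) if replies else "i'd rather not say."

print(respond("How are you feeling today Al?"))
```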


Chalmers' Position


If the complexity is on par, then the simulation is equivalent to the real thing. I explain it further in my response to the top comment here, but in short it's a matter of information equivalence. The thought experiment basically says "What if there were invisible pink unicorns that can't be measured in any way?" You flip a coin, and if it comes up heads, you kill yourself, and if not, you don't. A sulking lover will answer all possible lines of questioning the same as a box of ice cubes, but that doesn't mean they're equivalently conscious. I'm inclined to follow the logic proposed by thinkers like Dennett and the Churchlands and away from Chalmers' position.


I've been looking for an opportunity to put it in words and this seemed like a good place to do it.

Just because sound waves are easier to detect than electrochemical patterns doesn't mean the first is "outward" and the second is "inward". This seems like a very fundamental problem, to the point where I don't think the reason why anyone would be wrong in such a situation is well-defined. Perhaps I should have said that it will never produce a meaningful conclusion, not that it is necessarily "uninteresting".

Just like the universe has no privileged reference-frame, it has no privileged information representation--information is independent of representation or medium.


I think it's safe to say that's a frontier that hasn't been touched. It only needs the logical possibility that qualia zombies can exist and we'd never know it. It's not at all clear that such a table is even possible, so skepticism regarding the Chinese Room and skepticism regarding zombies are equally warranted.


In order to proceed with this argument I think you'd have to make the case that consciousness is necessary to produce identical outputs given the same set of inputs. I think there are many good reasons to think that it is possible to do so, but this is definitely an assumption that I failed to mention. "For a conscious entity to have something a simulation lacks, there must be some source of information within that system such that its operation will be different in some way compared to the simulation." If by "operation" you mean the way that it behaves, I don't think this is the case, and I have tried to demonstrate this using earthquakes. If that was your point in bringing up Kolmogorov complexity, you didn't do a very good job conveying it.


In fact the only way to even determine they are different is to actually look at the source code. And this is what I think is the real source of the resistance to this idea. This means that the P-wombie always has the mental status of P. This implies that functionalism is true. This belief warrants skepticism of functionalism and thus skepticism of the impossibility of P-zombies. Searle, Kripke, Putnam: although three constitutes a minority, the agreement of those three suggests a valuable position. And so, insofar as conscious experience influences behavior, i.e.


How far down do you think consciousness goes? They would have no point of reference for such a conversation; the conversation would have to be "hardcoded." Not to mention intangibles, like the positive effect holding this worldview has had on me as a person.


Is Perfect Simulation Debatable


Can I perfectly simulate a system X with an intrinsic complexity C by a system with complexity < C? The reduced complexity means that there is necessarily an input that will reveal a difference between the two systems. If the thing is the conscious entity, then yes in this case.
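A minimal formalization of that claim, assuming "intrinsic complexity" is read as the Kolmogorov complexity K of a system's input-output map f (an assumed reading; the thread never fixes a measure):

```latex
% If Y perfectly simulates X, their input-output maps agree everywhere,
% so they are the same function and have the same complexity:
\[
\big(\forall i:\; f_Y(i) = f_X(i)\big) \;\Longrightarrow\; f_Y = f_X \;\Longrightarrow\; K(f_Y) = K(f_X).
\]
% By contraposition, a strictly simpler system must disagree somewhere:
\[
K(f_Y) < K(f_X) \;\Longrightarrow\; \exists i:\; f_Y(i) \neq f_X(i).
\]
```

That is, a strictly simpler system cannot agree with X on every input; some input exposes the difference.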


You need to be introspective and examine your own consciousness to really get anywhere. Whatever consciousness provides as information with which to make decisions is also present in the simulation. Isn't the problem here simply that we assume unlimited simulation powers but limited investigation powers? So, I think your argument is that experience must be produced in order for identical outputs to be produced from the same set of inputs.


I'm not sure if a P-Zombie is possible, but I think a D-Zombie is possible. The concept of a perfect simulation doesn't really apply to an earthquake though. And that is the logical conclusion of the point I am making, and I fully embrace it. I would say it's not interesting at all, as it presupposes the color of something that can't be observed. This in itself is a value I hold real to 'consciousness'.


It is a measurement of how much information something contains, not what kind of information it contains. Even though this program can give the same account of red as me, it has absolutely no knowledge of red. But I'm not sure what that has to do with the nature of zombies themselves. The aim is to show that if a P-wombie can't be a P-zombie, then functionalism is true. I recommend reading this post as it relates informational equivalence to general claims about functionalism. Why should we assume that consciousness is basically magic, but think it's strange if somebody were to claim that the software on their computer is independent from the hardware? If the non-conscious-but-equal-complexity thing produces equal brain states and equal behaviors, what exactly is gained by any metaphysical assumptions? We could hook up electrodes that bypass the broken portions of its brain and read its output that way. Well, we know that information cannot escape a black hole, and so our "epiphenomenal experience" has absolutely no way to affect our behavior.


What makes you think rocks aren't conscious? Still, physics suggests that there are things which are truly random, so maybe our choices are not set in stone. Thus providing us with a neurological zombie, at least from our perspective; the guys running the simulation would still be able to tell the difference. Well, in this case it needs to keep a record of the questions it previously received and the responses it previously gave. That works both ways: you can't claim with absolute certainty that there are no invisible unicorns pulling our strings either. That's fine if you're looking for a practical guideline for action, but that hardly seems to be the case here. And so on, we can walk down the uncanny valley until we arrive at the swamp of the deepest disagreement. If there is no unique information gained as a result of the occurrence, then it is meaningless to say that anything occurred at all. However, you could imagine a completely separate inaccessible plane where information only flows from the first plane into this other plane. The distinction between "simulation of memory" and memory itself is meaningless. However, above, you suggest that OP is attacking anti-physicalist arguments, while OP only attacks anti-functionalist arguments. You started by using the Turing test but now you are treading very close to something like the identity of indiscernibles. More formally, the Kolmogorov complexity of a string is the length of the shortest program which outputs that string.
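A rough illustration of that definition. True Kolmogorov complexity is uncomputable, so this sketch uses compressed size as a crude upper-bound stand-in for "length of the shortest program":

```python
import os
import zlib

regular = b"ab" * 5000       # producible by a tiny program: low complexity
random_ = os.urandom(10000)  # almost certainly has no short description

print(len(zlib.compress(regular)))  # small: the repeating pattern compresses away
print(len(zlib.compress(random_)))  # ~10000: incompressible, about its own length
```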


Information is independent of the medium that creates/sustains it, which is analogous to an earthquake being independent of its particular cause.


Here, you only show that some kind of physical functional equivalence does not imply mental status. If a P-zombie is just fetching hard-coded answers to questions about consciousness, his brain would be undergoing a different process than a non-P-zombie answering the same questions. If p-zombies can respond in the same way to having tasted something sugary, then they too must have an equivalent experience of taste. The information of red interacts with our decision making apparatus within the same "realm" , and so the information of red feels real from this level of interaction. Since presumably your conscious experience will be exactly the same in this other universe, is there any meaningful sense in which you are not also experiencing this parallel universe? If I look away from it, it doesn't exist in my mind.


Assuming that's true, then there seems to be no particular reason to believe that humans themselves actually have experience, even though they believe they do. It's falsified if some Blue-Brain-esque project succeeds in creating, using deterministic algorithms, an artificial human that can easily convince anyone it has qualia. My bet is against it, for reasons I've given all over this thread, but there it is. A process that is assumed to be conscious is subject to this requirement. And there are arguments from some brain researchers that consciousness is just a part in the brain that looks over other parts, and having that capability is beneficial.


Of course, that still doesn't solve practical questions: we still don't know whether that guy in the apartment above us is a p-zombie or a conscious being.


Looking At the Chinese Room


The question that is usually asked is: does the man in the Chinese room understand Chinese? I find the naïve intuition that non-physical experiences affect the physical world to be more powerful than any other speculations I've heard to date. I guess I will concede that point, but you seemed to ignore the other, stronger one. But in any case, my first scenario doesn't allow any information from phenomenal experience to interact with decision making, so this does not apply. You are correct, I should have said they have the memory of consciousness rather than the experience. "It's only a coincidence if something else causes the exact same effects." This is literally impossible taken to infinity. Your initial claim was that it was impossible for two systems which share a functional appearance to differ in internal composition. You are assuming we can somehow know everything there is to know about the two entities, philosophical zombie and normal person. And the second is from The Mystery of Consciousness: "Chalmers takes the argument one step further, in a direction I would not be willing to go." I remember his asserting the possibility of zombies. Because the qualia which occur in your subjective experience have correspondences to physical brain-states which are causally potent.


It is very well thinkable to experience sensation without it ever having a chance to influence your behaviour. There's no criterion that tells us whether experience does in fact have a necessary influence on future behaviour. If epiphenomena are causally independent of the physical world, what good does it do to suppose their existence? If qualia have absolutely zero effect on behavior then they serve no purpose and have no explanatory power. "How can you tell that something isn't X if it appears in every possible way to be X?" So basically you are saying that since it seems to be the same in every way then it must be the same. The possibility follows from the negation of functionalism, and he is not a functionalist. If you have a better solution, or an argument for why A is the best answer, I think everyone in the world would be happy to hear it. This seems reasonable under a purely material/informational view of consciousness.

Why try to shoehorn our consciousness into a flatter construct? A perfect simulation is information equivalent, and therefore all sources of information in the original system must be present in the simulation in some manner, and disseminated throughout the system in an equivalent manner.


I think all that you have said up until this point is plausible. It seems like you're adding something to the equation when you say these two experiences are different. Searle's Chinese room experiment and Turing's test are two prime examples of my argument against qualia as a hard problem being useful. And so equivalent experiences can be shared, and only differ inasmuch as the outcome under consideration differs. Information is energy: it allows a physical process to make decisions, to take one path over another. It is the information-objects that give rise to the "realness" of subjective experience--precisely because it is real. If you build an AI that has information about what it seems like to be that AI, you no longer have a p-zombie. I think it just assumes that the computer produces the output without this component. I guess my point is that there's nothing terribly special about Mark; any p-zombie you'd ever encounter would give you this impression. Rule of thumb: if you say that something is "clearly evident," you're probably the one doing the presupposing. The only way around this is to claim that the behavior of the brain is non-computable.


I think so.


Adding Things Up


Consciousness is more than the sum of the parts of the system, it is how the parts interact with each other. He, similarly, lacks the means to make the distinction but that doesn't mean there is no difference between the two groups. Your argument rests mostly on the complexity of human thought, and the difficulty of enumerating all of its functions. Of course I'm skeptical that it can be done, but I'm happy to have my expectations and assumptions altered. A dead body and a living body are the same in terms of molecules, but the difference is crucial. That you want to see meaning only through function is just something peculiar to your current frame of mind. Philosophically, you're trying to rule the chess board with a bishop on white while the other king is safely on black. A system that is information-equivalent must have that same information presented to it in an equivalent manner.


I can write a program that replicates that behavior with a single line of code, and stick it inside a convincing android.
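A minimal rendering of that "single line of code" claim, assuming the sulking lover's behavior is to meet every possible question with silence:

```python
# Every input maps to the same empty output: the whole behavior in one line.
respond = lambda question: ""

# No line of questioning distinguishes this from a box of ice cubes.
print(repr(respond("Are you conscious?")))  # ''
```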


But as long as the sets of information that you interact with are the same, the two will still have exactly equivalent experiences. I think it'll take decades still to fully understand how the physical mind causes consciousness; but such an account will surely be possible, and it will explain why we perceive a continuous self.


If you have a suggestion on how to confirm the existence of other minds without using observations of behavior, I'm all ears. So far it's very interesting, except for some areas where the author categorizes things like the process of life reproducing itself as the same kind of problem as consciousness. What I'm doing is defining two sets of the properties of consciousness: those that have an informational consequence to the system, and those that have no informational consequence. Do you have a link to it, or was this a lecture you attended personally? If it can speak in all the infinite number of ways a person can speak of the experience of tasting sweet, then yes. What I meant is: "So it might well be that this concept of experience is bogus." If a p-zombie responds in exactly the same way to all possible lines of inquiry as compared with an assumed conscious entity, then those two entities are information-equivalent and thus the p-zombie is just as conscious as the assumed conscious entity. That being said, I would certainly question your sanity if you were to grant such a label to the "real Inky and Blinky". Our simulated earthquake would be different from an earthquake in a very important way, even if it was indistinguishable from a natural earthquake. A zombie society might develop a civilization very similar to ours, but they would not have, for instance, a particularly deep philosophy of mind. And that falls back on the reason there can't be zombies.


Of course, if you want to assume there is something non-physical, non-interacting, and provides no informational consequence to its possessor, then go right ahead. When I taste something sweet I'm capable of verbalizing that I just tasted a sweet thing. But by definition of experience, that occurrence must be able to shape its future behavior.


The question remains whether subjective experience can be formulated as information without losing something. If continuity is so important, at what point do the two mirror universes split experientially? You may go this way, but most people won't be inclined to agree with you. All I'm saying is that this concept is a lot to take on, and I don't think a lot of people are going to be willing to grant you it. All of our conscious mental processes supervene on energy/information. "Black-box functional" equivalence and "structural" equivalence/"internal functional" equivalence are not the same thing, but you're conflating them here.


Also, I don't see the difference between this and the invisible pink unicorn. I start with consciousness being a black box.


We're not talking about information, but of consciousness. You're taking a behaviorist approach to what defines consciousness.


But the joke at the end wouldn't prove consciousness. Not that I want to bother doing the job, mind you, or would wish it on anyone. I find it to be more like attempting to prove a version of God accepted by deists. For the record, I know one philosopher who is motivated to find alternate modes of knowing-about-minds through. Just pointing out where I can read the argument you're vaguely remembering would be helpful to me.


A reality is created that exists solely within the interactions of the algorithms, and this reality is just as "real" as any physical world. A man shuffles papers according to some algorithm, and outside of the room a fluent Chinese speaker reads out symbols and feeds in symbols. We can also take from the invisible pink unicorn that color can exist without any perception of it.


There's no reason to treat the brain as a black box with inputs and outputs; that's a simplified model of what's really going on. It's a test that has to convince AI experts who are aware of knowledge encodings and their limitations. Either the epiphenomena of qualia have a meaningful influence on our behavior, or assuming their existence is absurd. I've been toying with a possible physicalist explanation of the "hard problem" for a while, but I'm finding it hard to communicate it effectively. Each version of you would experience its own version of the universe as the two worlds diverged. I can at least understand conceptually a "zombie" person who doesn't really have consciousness but is able to fake it, although I think the setup of such a thing is suspect at best.


By definition, the zombie has no information about what it is like to experience something. I think I misinterpreted the previous post; Shaper did not necessarily mean that it isn't based on a physical process, only that at least some element of the process isn't material. Without the latter, science would seem to stop at "The brain is really wonky."


" I can no more disprove invisible unicorns than I can disprove god. We know that Searle believes in at least some zombie feature, since he thinks that certain mental processes can be simulated but not duplicated. So the particular mechanism that we use to probe is an aside to the main line of reasoning. That is a practical problem, and I'm fairly confident that practical solution to that problem will become available around the same time. This leads to quite a few loaded questions, most notably "what is the difference between two nonphysical things with identical observable effects?" Whatever is left over has no bearing on the physical world and thus is meaningless and utterly superfluous. It may be that only some do, and that half the people on the planet are actually p-zombies. Well, you need to understand the context of words, how they relate to each other, what they refer to in the real world, etc. if we look at our two adding programs, let's say a b c represents human behavior. c) because I don't even comprehend what that means. Of course it's going to seem like consciousness is causally efficacious - in some sense, I am a stream of consciousness, and my experiences seem to correspond to the world and to my decisions in a certain way. If epiphenominal events have no causal effect on my brain then how can I read? With such a neural architecture but without experience, there would be nothing for the zombies to translate, and they would not talk about experience at all, making them different from humans. You seem to think that because it is hard for humans to do something that therefore it is impossible to actually fake it. You're examining consciousness from a 3rd person perspective, as an onlooker. Information-equivalence between the two entities does not necessarily imply equal consciousness. And how else would you propose determining if something is conscious? I was just argueing there's no way to tell wheather the "experience of sensation" does in fact have a necessary influence on our future behaviour. I'm not denying qualia, I'm simply coming at the question of qualia from a different direction and proving a true statement about qualia. To use your example, a bit adder, a dictionary mapping adder, and a neural network adder won't all have the same Kolmogorov complexity. Or, you could take, say, the bit adder and modify it by adding irrelevant bullshit that the program throws away at the end. they have the same belief-like representations as us even though those representations are false. Would you just mean a literal earthquake or would it somehow not be an earthquake at all? Let "P-wombie" have the weaker meaning 'a thing that is behaviorally equivalent to P, where P is some mind' without the inclusion of ' is not P'. Two things can contain the same quantity of information but contain vastly different information. I can create an exact model of anything given enough complexity and equivalent information processing. The fact that there is something that you can know about yourself, but can't know about anyone else, I find to be epistemically fascinating. I don't see how the latter follows. I'm fine with letting very simple experiences go all the way down to fundamental particles, personally. They are betting that projects like Blue Brain will never result in a true human intelligence due to brains involving a factor beyond physical determinism.


Of course there could be zombies that talk about experiences.


If, on the other hand, it does, then we will always be able to tell the difference between consciousness and a P-Zombie, making P-Zombies useless as a philosophical tool.


The problem with that though is that identical input/outputs doesn't mean identical internal representations. He asks us to imagine a case where the whole system is physically identical to a normal human being down to the last molecule but is without any conscious states at all.


Let's keep consciousness and free will separate for the moment. By your logic here, the fact that they weigh the same and have the same size means that rocks must necessarily be flour. In an explanation from red light hitting my eyes, to me saying I see red light, and anything in between, I don't think we need anything but a physical account of what's happening. I used thought and language as the basis for probing behavior simply because it's conceptually easy. I see three options, plus an escape hatch: P-zombies would not formulate the concept of p-zombies. So it could give you the outward behavior of pain, without the subjective experience of pain.