I think it is undoubtedly possible to replicate the human mind in a computer, but the much more interesting question is whether this uploaded mind will be you. To any outside observer, even to the new you, your identity and presence have been preserved; however, I do not think it will be your consciousness that inhabits that new mind.
Chalmers, a proponent of the idea that even instant uploading would preserve your consciousness, defends that position by starting with the easier claim that gradual uploading preserves consciousness. The rest of this paper is my argument for why I believe not even gradual uploading (and hence not instant uploading either) preserves your consciousness.
During gradual uploading, consciousness is either preserved, vanishes suddenly at some point, or fades gradually. Chalmers claims that the sudden-vanishing and gradual-fading cases are “difficult to take seriously.” In particular, “on the fading view, these people will be wandering around with a highly degraded consciousness, although they will be functioning as always and swearing that nothing has changed”. Essentially, Chalmers’ main defense against fading consciousness is that it would be preposterous to believe that the brain could become less conscious while still operating identically. I don’t think it’s all that preposterous. After all, our brains are already only partly conscious as they are.
There are many processes in the brain that are not conscious. For instance, a large part of the brain is devoted to the multiple layers of computation necessary for vision: there are neurons for detecting edges, for combining edges into shapes, and for combining shapes into objects. But we are not conscious of all of these layers; we are primarily aware only of the most abstract layers of computation, such as recognizing people and objects. In this way, a large part of the brain is not conscious, yet we don’t find that hard to believe. Other examples include playing a piece of music from muscle memory, driving while lost in thought, or holding a conversation while distracted by something else. In these cases, one might argue we really are the zombies Chalmers spoke about: mindlessly doing things while our conscious mind is busy thinking about what’s for dinner or that embarrassing thing we did earlier.
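To make this layered picture concrete, here is a toy sketch of my own (a loose analogy in Python, not a model of actual neural circuitry; the functions detect_edges, group_shapes, and recognize_object are hypothetical names invented for illustration). Each stage computes over the output of the one below it, yet only the final, most abstract verdict is reported, just as we are aware of objects rather than of the edge detection that produced them.

```python
import numpy as np

# Toy analogy for layered visual processing: each stage operates on the
# previous stage's output, but only the final abstract verdict is "reported",
# mirroring how we are aware of objects rather than of edges or contours.

def detect_edges(image: np.ndarray) -> np.ndarray:
    """Layer 1: respond to local intensity changes (edge-like features)."""
    gx = np.abs(np.diff(image, axis=1))  # horizontal intensity gradients
    gy = np.abs(np.diff(image, axis=0))  # vertical intensity gradients
    return gx[:-1, :] + gy[:, :-1]       # crude combined edge map

def group_shapes(edges: np.ndarray, threshold: float = 0.5) -> int:
    """Layer 2: pool edge responses into a count of strong contour points."""
    return int((edges > threshold).sum())

def recognize_object(contour_points: int) -> str:
    """Layer 3: the only output that reaches 'awareness' in this analogy."""
    return "object present" if contour_points > 4 else "empty scene"

image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0  # a bright square on a dark background
print(recognize_object(group_shapes(detect_edges(image))))  # -> object present
```

The point of the sketch is only that the intermediate arrays exist and do real work while nothing about them appears in the final report.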
Furthermore, just because two computational processes are intimately related does not mean they constitute the same consciousness. Consider what happens when we dream. We are conscious only as the dreamer, yet another part of our brain is creating the dream for us to live in. If we are to believe that a brain cannot be half conscious, then we must also claim that we consciously create every little detail of our dreams, which I would not say is anyone’s usual experience. We would likewise need to claim that we consciously regulate our heartbeat, perform the high-dimensional linear algebra involved in recognizing faces, and calculate where sounds come from based on phase shifts and intensity differences between our two ears. However, if we can accept that these computational processes, which are intimately related to our conscious thought processes, are not encapsulated in our consciousness, then we can accept that other processes could become like this as well when converted to silicon.
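To give a sense of how concrete this unconscious computation is, consider the timing cue for sound localization. Under a simple plane-wave idealization (a standard textbook approximation, offered here only as illustration), the interaural time difference is roughly

    Δt ≈ (d / c) · sin(θ)

where d is the distance between the ears (about 0.2 m), c is the speed of sound (about 343 m/s), and θ is the angle of the source from straight ahead. The largest such delay is well under a millisecond, and the auditory system resolves it continuously without any of the arithmetic ever reaching awareness.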
Chalmers argues that it is unlikely we could simply “lose” parts of our consciousness, such as suddenly losing hearing. I think it is completely plausible that we could cease to be conscious of what we’re hearing yet still know what we heard. Or, if we lost even more of our consciousness, it’s possible we could know what to do about what we heard without ever knowing what we heard. In a sense, our consciousness would collapse into simple reflexes, while we are not actually aware of what is causing those reflexes. Arguably, that is already what a large part of consciousness is: we don’t know where most of our thoughts arise from, yet we react to them continuously for our entire lives.
It’s difficult to theorize about consciousness because the only data we may ever have is our own conscious experience. Based on the evidence I do have, that my brain is not fully conscious, I am not convinced by Chalmers’ argument for gradual uploading. Even though I subscribe to the functionalist view that the abstract “mind” would remain unchanged, replicating the subjective experience of consciousness as it exists would require replicating all functionality of the brain down to the subatomic level. In that case, you would essentially just be replacing a neuron with a neuron, which our brains do all the time as their molecules are continually replaced. Although I believe a computer could be conscious, I don’t think it would feel the way it feels to be human. It would be a different consciousness, and thus could not be me.