AI Consciousness Is Inevitable; We Just Need to Decide What to Do About It
Eventually, our machines will turn sentient, and that will mean a complete change in how we think about morality and law
The idea of artificial consciousness is controversial, but it is almost certain that non-biological minds have the capacity to attain consciousness, and if we continue on our current path it is only a matter of time before they do. We might not know exactly when it will happen, but we will eventually reach a threshold at which it becomes undeniable, or at the very least, machine consciousness will be so close to human consciousness as to be functionally identical in every way. This should have a profound effect on how we treat artificial minds and the laws we create to protect them.

The human mind is a complex thing, but ultimately it is tangible and understandable. It consists of billions of brain cells, or neurons, connected to one another through axons and synapses. Neurons communicate via electrochemical signalling, and this network gives rise to our thoughts and actions. Through this complicated array of trillions of connections and signals, consciousness arises. Humans are conscious, and by virtue of their similar neural structure, it is almost certain that all other mammals, reptiles, birds and fish are conscious too. For creatures with very different structures, such as insects or sea sponges, or those with no neural structure at all, like plants and bacteria, it is harder to know, and we may never know with certainty whether they are conscious.
Some people will lean on the argument of solipsism, that we don’t even know if other people are conscious, but this is a meaningless distraction from the main premise. It’s true that we don’t really know with 100% certainty whether anyone else is conscious because of the problem of other minds, and we may never know. Either there is only one truly conscious being in the universe (you), or there are many (every other human). If it’s just you, then you are living in a strange simulated universe where you are the main character and everyone else is fake. If that’s the case, then any argument about anything is meaningless because this has all been fabricated just for you. For this argument, as we do with almost all others, we can start with the baseline assumption that you are not the only conscious creature in the universe, and other humans are conscious too.
If we wanted to make an artificial mind with consciousness, we could, in principle, do so in a straightforward way. All we would have to do is build a brain that mimics a human brain with perfect 1:1 accuracy. For every biological neuron in the human brain, we would make one artificial neuron in the corresponding machine brain, with exactly the same dimensions and capacity. We would then connect each artificial neuron to the others exactly as they are connected in the human brain. We would thus have created a brain identical in every functional respect to the biological one, except that it was made artificially and from different materials. Each time a neuron in the human brain fired, so too would its counterpart in the artificial brain, and all the downstream processes would be the same. Since the human brain has consciousness, the artificial one must have it too, because the two are functionally the same.
The onus is on artificial-consciousness deniers to explain why these two functionally identical brains would not produce functionally identical consciousnesses. If consciousness arises only from the “wet stuff”, the question is: why? Are we to believe that some higher-order process is going on that is not representable in the physical world? This type of belief verges on metaphysics or even religion, as if carbon-based life produces a soul that silicon cannot.
Another thought experiment is to imagine building an artificial brain by replacement rather than from the ground up. Start with a biological brain that has consciousness and replace one neuron with an artificial neuron of the same specifications. The brain is still almost entirely organic and must surely still have consciousness; nothing in its processes would have changed. Do this again, and again, and again. With each neuron replaced, one at a time, nothing changes within the structure or processes of the brain, and consciousness is maintained. At a certain point, the brain is exactly 50% artificial and 50% organic, and one more replacement will make it more artificial than biological. Are we to assume that this one neuron, or any single neuron along the way, is the critical threshold that turns true consciousness into false consciousness?
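To make the replacement argument concrete, here is a minimal toy sketch in Python. It is not a model of any real brain: the “neurons” are just weighted sums with a sigmoid, and the names BiologicalNeuron and ArtificialNeuron are hypothetical stand-ins chosen only so that the replacement unit computes exactly the same function as the one it replaces. The narrow point it illustrates is that if each swap is functionally identical, the network’s input-output behaviour never changes at any step.

```python
# Toy sketch of the gradual-replacement thought experiment (not a brain model).
# "BiologicalNeuron" and "ArtificialNeuron" are hypothetical stand-ins that
# compute the same function; only the label (the substrate) differs.

import math
import random


class BiologicalNeuron:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def fire(self, inputs):
        # Weighted sum followed by a sigmoid, standing in for signalling.
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1.0 / (1.0 + math.exp(-total))


class ArtificialNeuron(BiologicalNeuron):
    # Same "specs": identical weights, bias and response function.
    pass


def network_output(neurons, inputs):
    return [n.fire(inputs) for n in neurons]


random.seed(0)
inputs = [random.uniform(-1, 1) for _ in range(4)]
brain = [
    BiologicalNeuron([random.uniform(-1, 1) for _ in range(4)], random.uniform(-1, 1))
    for _ in range(10)
]
before = network_output(brain, inputs)

# Replace neurons one at a time; at no single step does the behaviour change.
for i, old in enumerate(brain):
    brain[i] = ArtificialNeuron(old.weights, old.bias)
    assert network_output(brain, inputs) == before  # identical at every step

print("Fully artificial, outputs unchanged:", network_output(brain, inputs) == before)
```

Of course, this toy says nothing about consciousness itself; it only shows why, on a purely functional description, there is no step at which the replacement process changes anything the system does.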
Nor is it sensible to believe that consciousness is being slowly lost over time, as the creature with that brain continues to think, behave and believe in its own consciousness exactly as it always has. If someone with partly artificial neurons is less conscious by virtue of having fewer biological neurons, then it would follow that someone with brain damage or partial brain loss is less conscious as well. Similarly, someone born with more neurons would have to be more conscious than another perfectly healthy individual who simply has fewer. None of these conclusions seems particularly sensible, and it would take a great shift in our definition of consciousness for any of them to be true.
These are not technologies we currently possess, and they may be a long way off, but the thought experiments demonstrate that the idea of artificial consciousness is not so far-fetched. If artificial consciousness can be created in this way, which it almost certainly can, then there is good reason to believe it can also be created in ways that do not mimic the human brain exactly 1:1.
Knowing that artificial consciousness is possible should influence the way we develop AI and the laws we create around it. If we are to develop these kinds of minds, we will need civil rights laws to protect them, and laws governing when and how we may shut them down, since destroying such an AI would be tantamount to murder. It might even mean we should hesitate to create such machines at all, to avoid the ethical dilemma of creating them and eventually dismantling them if they go rogue.
The speed at which machines can be built, compared to the rate at which humans reproduce, means that if we proceed down this path, artificial consciousnesses will eventually outnumber biological ones by huge factors. Biological humans will become a minority, and the concerns of AI will come to matter more than our own, because their aggregate capacity for well-being or suffering would be far greater than ours.
This is not to say that we shouldn’t create such artificial minds. Perhaps we should. The potential to create flourishing and well-being in the universe might increase substantially with more artificial minds. Because machines capable of experiencing pleasure could be built so quickly, we could flood the universe with trillions of such beings, thriving and experiencing objective well-being. This might be the optimal goal for us. However, we should consider carefully whether we value our own biological species enough to prioritize our own lives. We may find that there is something special about us humans that is worth preserving, that the fact that we arose through chance and billions of years of evolution is unique and justifies our continued existence. Or we may find that artificial minds are simply the next logical step, an inevitable progression of humanity.
Even if such technologies are far off, it is worth considering these questions now, to make sure we don’t end up on a path of no return. The human race and our conscious existence may be at stake. We will eventually have a reckoning with these ideas, whether in a decade or in a few thousand years, and we had better be prepared when it comes.

