Can a computer have a soul? Part 1/3

Computers are getting smarter. There's no doubt about that. AI can beat people at a growing range of tasks. As the intelligence of computers increases, especially thanks to algorithms that mimic the learning behavior of the human brain, one question remains: will computers develop a conscious mind, in other words, a soul? 😇🤔

This post is the first in a mini-series. I am going to look for an answer to the question of the machine soul from the points of view of different philosophical theories of mind. But first we have to ask what is so special about the mind. Two key points come to mind: intentionality and qualia.

Intentionality means the mind's capability to refer to things outside of itself. Think about it for a moment. You have no trouble talking about what is on the desk in front of you or what you had for breakfast this morning. 🍲 All these items exist outside of your brain, outside of your mind, and yet your mind has no trouble at all in dealing with them.

Qualia – the essence of consciousness

Qualia refer to the conscious experiences of life. They answer the question: what does something feel like? What does it feel like to be happy or excited? What does it feel like to be alive? 😄

To unpack the idea of qualia further, I will share a story about a guy I met in high school. He told me that he couldn't feel hunger. 😱 Perplexed by this, I was dumb enough to ask: "How does it feel to never be hungry?" Of course, I didn't receive any proper answer. How could I have? He didn't know what hunger feels like, so obviously he couldn't tell me what never feeling hungry is like, apart from it feeling just normal. He simply didn't have access to a quale (the singular of qualia) that all other people did. But he still had an idea of what being hungry meant. Now imagine this in the case of a computer. 💻

As far as we know, computers don't have access to qualia, but they might be programmed to understand what they refer to. A paint program has no trouble at all picking out the correct color in an image. But it doesn't need to know what it is like to see a color. The question I am going to study here is whether a computer can develop qualia. This is a serious question, because we aren't even sure where our own qualia come from. 😮
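To make the paint-program point concrete, here is a minimal sketch (a hypothetical toy example, not a real paint application): the "image" is just a grid of RGB tuples, and the program can pick out and name a color by comparing numbers, with no experience of color involved.

```python
# A toy "paint program": the image is just a grid of RGB tuples.
# Real programs work at a much larger scale, but the principle is the same.

def pick_color(image, x, y):
    """Return the RGB value at pixel (x, y) -- just an array lookup."""
    return image[y][x]

def color_name(rgb):
    """Map a few exact RGB values to names the program can report."""
    names = {(255, 0, 0): "red", (0, 255, 0): "green", (0, 0, 255): "blue"}
    return names.get(rgb, "unknown")

image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 0, 0)],
]

rgb = pick_color(image, 1, 0)
print(color_name(rgb))  # prints "green"
# The program "knows" which color is there -- yet it has no quale of
# greenness: it only shuffles numbers around.
```

The program answers "which color?" perfectly well, while the question "what is it like to see green?" never even arises for it.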


Dualism

The first take on the mind I am going to investigate is that of Descartes, namely dualism. In dualism, there is a clear division between body and mind. The mind, or soul, is seen as a completely separate entity that is somehow linked to the physical brain. The soul is in control of the physical body, much like you are in control of an avatar in a video game. Or better yet, much like people were in control of their avatars in the movie Avatar.

So, from the point of view of dualism, is there hope for a machine soul? Well, as I noted before, in dualism the soul is an entity separate from the physical. This means that since we only build machines, and not their souls, a computer won't have one, unless it were to mysteriously receive one from elsewhere. This, however, opens up a whole new question: if a machine doesn't have a soul, could we lend it one? Could we attach our souls to machines instead of our bodies? If dualism holds, we might see something like this in the future. However, not all philosophers take a dualistic point of view.

Type identity theory

When we think, our brains have brain states. These are the physical-level states our brains are in at a given moment while we think. Another layer consists of mental states. These are more abstract states, like qualia. The state of me being happy (a mental state) is different from its neural representation in the brain (a brain state)… Or is it? 😁

Type identity theory claims that brain states and mental states are identical. This means that when I taste a strawberry, my brain is in the exact same state as your brain when you taste the same berry. For humans this doesn't feel too far-fetched, since human brains are highly similar across individuals. The problem appears when we consider a machine whose hardware is nothing like a brain.

For a computer, this means that even if it were to have a mind, it could never experience the world like we do. For it, the experience of tasting a strawberry would be utterly different from what we associate with tasting food. Luckily, type identity theory has serious problems at its core, because it's quite unlikely that mental states and brain states are strictly identical. So there is still hope for mutual understanding between humans and computers after all. 🍓

Token identity theory

Token identity theory tries to solve the problem of type identity theory. It claims that tokens of mental states are identical with brain states. Whoa! 😵 What does this mean? Well, in philosophy, there's a distinction between a type and a token. For example, dog is a type, and all the different dogs you might find in the streets are tokens of the type dog. Types are thus abstract, and tokens concrete.

So a type of mental state, like tasting a strawberry, can have multiple realizations, i.e. tokens. A computer could taste a strawberry in its own way, with a completely different brain state and mental-state token than ours. Nevertheless, both experiences would be similar in that they would both be about experiencing the taste of a strawberry. This opens up the possibility for a computer to have qualia similar to ours; they would just be realized differently at the brain (hardware) level. 🙂
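Programmers will recognize this pattern: it resembles an abstract interface with multiple implementations. The sketch below is a loose analogy only (the class and method names are invented for illustration, not a claim about real minds): one mental-state *type*, realized by two very different *tokens*.

```python
# Multiple realizability as an interface: one abstract type,
# several concrete realizations on different "hardware".
from abc import ABC, abstractmethod

class TastingStrawberry(ABC):
    """The type: the abstract mental state of tasting a strawberry."""

    @abstractmethod
    def realization(self) -> str:
        """Describe how this token is physically realized."""

class HumanToken(TastingStrawberry):
    def realization(self) -> str:
        return "neurons firing in a biological brain"

class ComputerToken(TastingStrawberry):
    def realization(self) -> str:
        return "sensor readings processed in silicon"

# Both tokens instantiate the same type, on completely different substrates.
for token in (HumanToken(), ComputerToken()):
    print(f"{type(token).__name__}: realized as {token.realization()}")
```

Just as both classes genuinely implement `TastingStrawberry`, both the human and the machine token would genuinely be experiences of tasting a strawberry, even though nothing about their physical realizations matches.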


I have only scratched the surface of the philosophy of mind in this post. As we have seen, consciousness is not necessarily exclusive to us humans, and there is room for its existence in the computer world too. But this doesn't yet explain why we have qualia and where they come from.

Join me next week as this story continues with theories that fall into the category of materialism. To learn even more about the matter, I would suggest reading a book called Philosophy of Mind.