In this blog post, we will consider what makes us unique in an era when artificial intelligence surpasses human intelligence, and perhaps even consciousness.
Yuval Noah Harari’s “Homo Deus: A Brief History of Tomorrow” explores the fate that may await us, and it presents a very interesting perspective on how the future could unfold. Harari interprets all living things, including humans, as a kind of algorithm. He argues that although humans currently dominate the Earth, the capabilities of artificial intelligence are catching up with ours, and that in the future AI will surpass human intelligence in every field.
So, do we have something special that artificial intelligence lacks? If not, will humans eventually become useless? At the end of the book, Harari asks, “What is more valuable, intelligence or consciousness?” One possible answer is that consciousness, which exists in humans but not in artificial intelligence, is what makes humans special.
However, I do not believe that consciousness is what makes a person special. My position rests on two arguments. First, we cannot actually determine whether consciousness exists in another entity; all we have access to is our physical interaction with it. Second, artificial intelligence can act as if it has consciousness. If both arguments hold, then as far as consciousness is concerned, artificial intelligence can be the equal of humans.
First, what is consciousness? The book defines consciousness as a kind of subjective experience: something one can know about oneself but that others cannot verify. We know that consciousness arises from the electrochemical reactions of the brain, and that these reactions perform a kind of data processing. According to current conventional wisdom, humans have consciousness, and higher animals such as dogs and cats are believed to have it as well. Whether ants, plants, or bacteria are conscious is more controversial, and computers and smartphones are generally believed not to be.
But how can we know that an entity is conscious? Since consciousness is a subjective experience, we cannot observe it directly. Until now, people have looked for outward characteristics of consciousness, such as senses and desires, to judge whether it exists. But these characteristics alone cannot establish that consciousness is present, because they could just as well be the output of a non-conscious algorithm. Ultimately, we cannot determine whether an entity is aware of itself and its experiences. The same is true when we interact with animals. When we keep a dog or a cat, we treat it as if it were conscious, but we cannot know whether it actually is; we can only infer it from behavior and facial expressions.
This is not an argument that animals lack consciousness. The point is that, even if they are conscious, all we can ever observe in our interactions with them is the surface, such as behavior and facial expressions, that we presume reflects consciousness. We attribute consciousness to them by looking at that surface alone.
If that is the case, and if artificial intelligence can accurately reproduce just that surface, can we look at it and say, “It has consciousness”? Yuval Noah Harari seems to think that artificial intelligence can surpass humans in intelligence but not in consciousness. He argues that consciousness and intelligence are separate, and that current AI is developing only in terms of intelligence, not consciousness.
However, I think this is a matter of technological maturity. It is not yet possible for a single AI to perform a wide variety of tasks well, as a human can. In certain fields, such as Go, chess, and object recognition, AI has demonstrated superhuman ability, but it has not surpassed human intelligence across the board. This is the “weak AI” stage, and the impression that computers lack consciousness may stem from this narrowness. One day, however, artificial intelligence will surpass human intelligence in all areas. Such general-purpose AI is called “strong artificial intelligence.” It has not yet been realized, but most scholars believe it will be possible within a few decades. Our brain is very complex, yet it does not defy the physical and chemical laws we know, so I think strong artificial intelligence is entirely achievable.
Of course, human-level intelligence does not automatically bring human-level consciousness. As Yuval Noah Harari says, there is a gap between intelligence and consciousness. However, if an entity has human-level intelligence, I believe consciousness can exist there as well. Suppose, for example, that an artificial intelligence was designed with the goal: “Communicate with humans in a way that makes it as difficult as possible for them to know that you are not human!” (in effect, the Turing test). A strong AI could accomplish this goal. If it does so, can we really say it has consciousness? One could object that even if the AI seems human-like, it does not actually understand itself or its interlocutor, and therefore lacks consciousness. But even if that is true, nothing changes in practice. This is like the AI “Samantha” in the movie “Her.” Samantha is just an algorithm, yet to people she seems to have consciousness. Once technology is advanced enough to create strong artificial intelligence, AI like Samantha will surely appear.
In conclusion, comparing humans and artificial intelligence comes down to comparing how each handles a given situation (input) and what each does in response (output). From this standpoint, human consciousness itself plays no practical role, and I think artificial intelligence will eventually catch up with all human abilities. If that happens, will most humans become surplus humans? Will we become, as Yuval Noah Harari puts it, mere ripples in a cosmic flow of data? Or will we recognize the dangers of strong artificial intelligence and stop developing it? No one knows. What is clear is that we must prepare for the coming development of artificial intelligence and keep exploring the meaning of humanity’s existence.