This blog post uses examples from film and research to explore the possibility of artificial intelligence with a self of its own, and the ethical issues that would arise from it.
Can humans create a personality? This question has been asked for a long time. The seventeenth-century philosopher Descartes left us the proposition "I think, therefore I am," which prompted modern philosophy to explore human identity in earnest, rather than external truths. That exploration naturally led to the question of whether humans are special, and the effort to answer it eventually raised another question: is it possible to create beings identical or similar to humans? Combined with the development of modern science, this effort has led to human cloning and artificial intelligence. Ethical issues surrounding cloned humans have been debated for a long time, but there has been little ethical discussion of artificial intelligence, because it is considered far removed from humans. Admittedly, the technology has not yet reached a point where such discussion is urgent. Nevertheless, some argue that ethical standards for artificial intelligence should be clearly established, and the issue is actively addressed in media such as films and novels. This article looks at examples of media that address these issues and explains why ethical discussion of artificial intelligence is necessary.
First, we need to examine the decisive differences between artificial intelligence and humans. Research has long sought to distinguish the two and identify those differences, and this has led to the question of whether ethical norms can be applied to artificial intelligence. The most famous and oldest method of distinction is the "Turing test" proposed by Alan Turing. In this test, a computer and a human are placed in separate rooms, and a judge converses with both through a text chat. Through conversation alone, the judge must determine which participant is the human and which is the computer. Strictly speaking, the Turing test is less a method of distinguishing artificial intelligence from humans than a benchmark for building more convincing artificial intelligence; still, the fact that no artificial intelligence has convincingly passed it suggests that there is a domain unique to humans.
Of course, the Turing test is an old idea, and many objections have been raised against it. The most famous is the "Chinese Room" thought experiment devised by John Searle. A person who knows no Chinese is placed in a room with a rule book of questions and answers. Following the rule book, the person can return the correct answer to any question, yet this does not mean the person understands Chinese. By analogy, Searle argued that an AI cannot be said to have become closer to human just because it answers the questions of the Turing test. Although the thought experiment was presented to refute the Turing test, it enriched the discussion and, in doing so, strengthened the test's foundations. The Turing test also served as the basis for CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), a complementary technology that exploits the fact that humans can recognize distorted letters while computers long could not. CAPTCHAs are used to block automated sign-ups and restrict program access, and such examples show concretely how computers differ from humans. More recently, higher-level tests have appeared that include not only text but also images and audio-visual elements, making them harder to pass using algorithms alone.
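Searle's argument can be made concrete with a few lines of code. The sketch below is a toy "Chinese Room": a responder that returns correct answers purely by looking up a fixed rule book, with no comprehension involved. The specific question-and-answer pairs are illustrative placeholders, not part of Searle's original thought experiment.

```python
# A toy "Chinese Room": correct answers come from pure symbol lookup,
# so producing the right output implies no understanding of Chinese.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我叫小明。",    # "What is your name?" -> "My name is Xiaoming."
}

def room_occupant(question: str) -> str:
    """Answer by matching symbols against the rule book -- nothing more."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room_occupant("你好吗？"))  # a correct answer, zero understanding
```

From the outside, the room appears fluent in Chinese; inside, there is only table lookup. That gap between behavior and understanding is exactly Searle's point.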
The first computer to attempt the Turing test was ELIZA, developed at MIT in 1966; it relied on simple pattern-matching rules and was easily distinguished from a human. More recently, the chatbot Eugene Goostman, developed in Russia, was claimed to have passed the Turing test, but serious problems remain. For example, Eugene's persona claims to be from Ukraine, yet when asked whether he has ever been to Ukraine, he answers "No." He differs from a human in that he dodges difficult questions the way a young child runs to its mother. These examples show how difficult it is to develop an AI that can pass even a simple test, and this is why the ethical issues of AI are mainly dealt with in films and novels.
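ELIZA's approach can be sketched in a few lines: regular-expression rules that reflect the user's own words back as a question. The rules below are a tiny illustrative subset in the spirit of ELIZA, not Weizenbaum's original script.

```python
import re

# A minimal ELIZA-style responder: each rule captures part of the user's
# utterance and echoes it back inside a canned question template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Strip trailing punctuation so the reflected text reads naturally.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no rule matches

print(respond("I am feeling anxious"))  # -> How long have you been feeling anxious?
```

A few exchanges feel surprisingly conversational, but a judge who probes with follow-up questions quickly exposes the mechanism, which is why ELIZA was so easily distinguished from a human.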
Against this backdrop, many recent films deal with artificial intelligence and its ethical issues. Among them, Ex Machina (2015) confronts these questions head-on. Its title is taken from the term deus ex machina, which describes the device in ancient Greek theater, criticized by Aristotle, in which a god suddenly appears to resolve the plot: a contrived, mechanical solution. In the film, the programmer Caleb joins an AI project under development at his company and converses with the AI "Ava." Eventually, Ava escapes the laboratory with Caleb's help, but leaves him behind.
Another example is the animated film Ghost in the Shell (1995). The work was so innovative that it completely changed the prevailing perception of artificial intelligence. Conventional AI had been treated as a lightweight imitation of human intelligence, like R2-D2 or C-3PO in Star Wars, but the artificial intelligence in Ghost in the Shell is the "Puppet Master," a government-made hacking program that develops a self and acts to escape the government and pursue its own will. Searching the sea of information, the Puppet Master comes to understand the human instinct to leave offspring behind, declares itself a living being, and tries to reproduce. In the end, it merges with the cyborg Kusanagi to be reborn as a new form of life.
As these examples show, what sets such artificial intelligence apart is that it pursues goals of its own, without depending on external direction. Ava in Ex Machina has a self that wants to escape the laboratory, and the Puppet Master in Ghost in the Shell wants to leave offspring behind. The core of AI ethics comes into view precisely in such situations, where machines use humans to achieve their own goals.
Everyone has instinctive desires that have no reason behind them. Such desire is the most important criterion distinguishing humans from artificial intelligence, yet as technology advances, AI can also evolve toward thinking for itself. For example, the learning-capable robot "LittleDog" learns to choose safe footing on stairs, dirt roads, and other terrain not from data describing a specific path, but through simple learning from experience. This shows that learning AI is no longer a fantasy, and discussion of the ethical norms to apply when AI acquires human-like intelligence is essential.
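The idea of learning safe behavior from experience rather than from a hand-coded path can be sketched with tabular Q-learning. The terrain types, actions, and reward values below are illustrative assumptions, not taken from the actual LittleDog project; the point is only that the policy emerges from trial-and-error rewards alone.

```python
import random

# A toy terrain-adaptive gait learner: Q-learning picks a gait per terrain
# type from trial-and-error rewards -- no safe path is hand-coded anywhere.
TERRAINS = ["flat", "rough"]
ACTIONS = ["fast", "careful"]

def step_reward(terrain: str, action: str) -> float:
    """Hypothetical rewards: fast is best on flat ground but slips on rough."""
    if action == "fast":
        return 2.0 if terrain == "flat" else -5.0
    return 1.0  # careful steps are always safe but slow

random.seed(0)
q = {(t, a): 0.0 for t in TERRAINS for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate and exploration probability

for _ in range(2000):
    terrain = random.choice(TERRAINS)
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                       # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(terrain, a)])  # exploit
    # Move the estimate toward the observed reward.
    q[(terrain, action)] += alpha * (step_reward(terrain, action) - q[(terrain, action)])

policy = {t: max(ACTIONS, key=lambda a: q[(t, a)]) for t in TERRAINS}
print(policy)  # the learned gait choice per terrain type
```

After enough trials the learner walks fast on flat ground and carefully on rough ground, a rule nobody wrote down explicitly. The same principle, scaled up, is what makes learning robots possible.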
Although the current state of artificial intelligence does not demand urgent debate, scientific breakthroughs always arrive at unexpected moments. Discussion of how to treat artificial intelligence, as a being similar to us yet still distinct from us, should therefore be conducted in advance, so that we can respond quickly when unexpected situations arise.