On 23rd July last year, a robot passed a simple ‘self-awareness’ test. It was one of three robots that were told that two of them had been given a ‘dumbing’ pill which would stop them from speaking, while the third had been given a placebo. They were then asked which pill they had received, to which one of the robots replied, “I don’t know”. On hearing its own voice, however, the robot changed its answer, reasoning that it could not have been given the dumbing pill because it was able to speak.
This response demonstrates that the robot understood the rules of the test, was able to recognise its own voice, and could distinguish itself as an individual entity from the other robots. These are the key components of the ‘self-awareness’ test; a test which, until last summer, it was believed only humans were capable of passing. So the question is: how long will it be before we build robots with genuine artificial intelligence (A.I.), robots capable of acting entirely independently but also of feeling emotions like anger and love just as humans do? And what happens when that day comes?
There is a fear, largely generated by popular culture, that robots will one day be smart and sophisticated enough to overthrow humanity. Is it therefore wrong to continue to build and develop such technology? I would say no, for a number of reasons. The first is that we are currently nowhere near the point where robots could pose any threat to us. If someone were on the verge of building a robot able to function entirely independently and act of its own accord without reference to human desires, like Ava in the 2015 science fiction film Ex Machina, then we would have to discuss whether or not that was a good idea. At the moment, however, the developments being made in robotic engineering are overwhelmingly beneficial to us.
For example, the introduction of ‘robotic arms’ into hospital operating theatres has improved the precision, ease and speed of open surgery, leading to greater chances of an operation’s success as well as shorter recovery times for patients. Benefits can also be seen in manufacturing, chemical engineering and space science. Beyond these benefits, though, lies a nobler, less pragmatic reason for continuing to make developments in advanced robotics: to push ourselves, and to strive to achieve that which we are not even sure is achievable. Edison’s lightbulb, the Wright brothers’ aeroplane, Neil Armstrong’s first step on the Moon: all were believed impossible not long before they occurred, and yet all have shaped humanity’s scientific and social development in extraordinary ways. Just because we are not sure that true A.I., the creation of which is described in Ex Machina as “the greatest scientific event in the history of man”, is achievable, does that mean we shouldn’t try, if only to keep testing whether it is possible in the first place? I don’t believe so.
The more pressing question, it seems to me, is what would such an A.I. be like? Could it truly feel emotion as humans feel emotion? There appears to be no reason why it couldn’t, given that emotions are ultimately a mixture of chemical and electrical impulses moving around our bodies, and such impulses, while hugely complex, cannot be impossible to duplicate. True A.I. would presumably have an operating system very similar to the human brain, allowing it to function not as a simple machine but as a complex pseudo-organism. In that case, how would we distinguish between the robots and the people? If robots were capable of feeling human emotions in the same way and to the same degree as us, how could we treat them any differently? Currently, robots are kept in scientific facilities and denied autonomy, but this would surely no longer be an option in such a scenario. Such treatment would be no better than the way in which slaves, who for so long were regarded as subhuman for no reason other than their appearance, were treated. The mental and emotional lives of such robots would be the same as a person’s; only their appearance would be different.
Perhaps one day we will create artificial intelligence capable of feeling emotions like love and anger just as humans do. It seems to me that, in this eventuality, we will have no choice but to accept them into society. After all, mistreating them will only end the way the mistreatment of humans by other humans always has: in hatred, pain and suffering.