Can You ‘Kill’ a Robot?


Deciding whether or not something is alive seems like a fairly simple question: if your schooling has been anything like mine, you’ll be familiar with the ‘seven life processes’. This model for distinguishing what’s alive holds that a living thing moves, reproduces, is sensitive, grows, respires, takes in nutrition, and expels waste. The definition seems fairly watertight, yet many animals cannot reproduce alone: are they only ‘alive’ in the presence of a mate? And what about hybrids such as mules, which can’t reproduce at all? Don’t we consider them to be living? Finally, in the contemporary world, can machines be alive? It’s already common parlance to woefully declare the ‘death’ of one’s phone or laptop, so is there a point at which machines or software could be considered living things? I took a dive into this biological-philosophical tangle.

In the 1990s, NASA, in its search for extra-terrestrial life, charged the molecular biologist Dr Gerald Joyce with drafting a working definition of life itself. Joyce characterised life as a ‘self-sustaining system capable of Darwinian evolution.’ At first glance, this definition seems pretty satisfying: systems which get along without outside interference and adapt to their environment are alive, and that’s all there is to it. But what about viruses? These tricky little pathogens are fuzzy cases in most models of life. They cannot replicate without a host cell, and are only strands of DNA or RNA encased in protein, yet they certainly reproduce and evolve. Equally, some computer programs could be considered alive by Joyce’s standards. Programs which can independently copy and adapt themselves are already a reality. And if humans were to produce a self-sufficient machine which could build others like itself, it too should count as life, and would therefore qualify to be ‘killed’.
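To make that concrete, here’s a minimal sketch of the bare bones of Darwinian evolution in software (my own toy Python illustration, not anything from Joyce or NASA, with an invented fitness measure): a population of bit-string ‘genomes’ that copy themselves with random mutations, while selection keeps the best-adapted copies.

```python
import random

def mutate(genome: str) -> str:
    """Copy a binary 'genome', flipping one random bit."""
    i = random.randrange(len(genome))
    flipped = '1' if genome[i] == '0' else '0'
    return genome[:i] + flipped + genome[i + 1:]

def fitness(genome: str) -> int:
    """Toy stand-in for 'suited to the environment': count the 1s."""
    return genome.count('1')

# Start with twenty random 16-bit genomes.
population = [''.join(random.choice('01') for _ in range(16)) for _ in range(20)]

for generation in range(50):
    offspring = [mutate(g) for g in population]          # imperfect self-copying
    population = sorted(population + offspring,          # selection: keep the fittest
                        key=fitness, reverse=True)[:20]

print(max(population, key=fitness))  # after 50 generations, mostly '1's
```

Run it and the population drifts towards all-1 genomes with no further instruction from the programmer, which is exactly the ‘adapts to its environment’ half of Joyce’s definition.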

In fact, definitions of life are so slippery that the science journalist Ferris Jabr has suggested that ‘life does not really exist’. It sounds a little insane, but what he means is that ‘life’ is more a concept than a definite reality, and the line separating ‘living’ and ‘non-living’ things is a human construct. The universe, rather, consists only of various configurations of atoms in more and less complex systems, some of which humans have come to dub ‘life’. If we take on board Jabr’s idea that life is a subjective category, the question ‘can you kill a robot?’ becomes moot. Perhaps a stroll into the murky woods of neuroscience and philosophy could provide a less ambiguous answer. Generally, the act of killing involves the destruction of a conscious being: if our robot were conscious, then by most philosophical standpoints it could be killed.

Inconveniently, scientists are yet to come up with an absolute definition of consciousness, but a couple of feasible ideas about how it’s produced do exist. One is ‘Integrated Information Theory’, which suggests that consciousness arises when sensory information and cognitive processes are seamlessly ‘integrated’ into what we call experience. By this definition, a robot could be conscious under certain conditions: if it had external sensors similar to those possessed by the human body, linked to cognitive functions, perhaps it could become sentient.

The other is ‘Global Workspace’ theory: the idea that consciousness arises from the transfer of information around the brain from a ‘memory bank’. Computer programs which do this already exist. Just go to Google, search ‘cats’, and your computer will retrieve and display to your adoring eyes a plethora of fluffiness from the vast bank of information which is the internet. Does this mean that some machines are already conscious?

Computers and brains work via surprisingly similar systems. Computers are programmed using binary code, meaning only two symbols, 0 and 1, are used to store data and provide instructions, whilst the brain stores and transfers information via neurons, which can either be ‘on’ or ‘off’ (there’s a toy sketch of such a unit after this paragraph). Currently it’s impossible to know whether simulating a brain digitally would necessarily produce consciousness, let alone whether the machines we use now are sentient. For the time being, it’s impossible to say whether a robot could ever be ‘killed’, but the closer humans come to simulating our own intelligence, the more difficult this question will become. I for one will be refraining from abusing machines, just in case.
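That ‘on/off’ parallel is easy to see in code. Here’s a little sketch, in the spirit of the classic McCulloch-Pitts model of a neuron (the weights and threshold are invented for illustration, not taken from anywhere): a unit that either fires or doesn’t, exactly like a stored bit.

```python
def binary_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: output 1 ('on') if the weighted
    sum of its binary inputs reaches the threshold, else 0 ('off')."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Illustrative values only: three binary 'sensory' inputs, hand-picked weights.
print(binary_neuron([1, 0, 1], weights=[0.6, 0.4, 0.5], threshold=1.0))  # 1: fires
print(binary_neuron([0, 0, 1], weights=[0.6, 0.4, 0.5], threshold=1.0))  # 0: silent
```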

Except printers. We already know they’re sentient and evil.

[Bethany Garner]
