Ludwig-Maximilians-Universität München

The RoboLaw Project

The Essence of the Machine

Munich, 04/11/2014

Software-based systems, autonomous machines and intelligent prosthetic devices are on the horizon. An EU-sponsored project plans to draw up a regulatory framework for robotics. The LMU philosophers involved are now analyzing the ethical implications of this aspect of our technological future.

Robot (Photo: jim - Fotolia.com)

In the film “Her”, an American man falls in love with the operating system of his computer, which appears to be both intelligent and empathetic, and communicates with him in the winning accents of Scarlett Johansson. The storyline may sound absurd, but the film won this year’s Oscar for the best screenplay. How would you characterize this kind of plot? Off the wall, a plausible scenario or a sanitized nightmare?
Nida-Rümelin: I find it hard to believe that software-based systems – and in particular domestic-service robots – equipped with a personal identity or a subjective perspective, and capable of genuine communication with us, can be developed in the foreseeable future. In any case, there are plenty of films that have exploited such analogies between people and machines. Take Blade Runner, for instance, where we have very human-like creatures that have obviously developed a subjective perspective – they fear death, for example. And a complicated psychological test, shown in the film’s opening scene, is required to discover who is human and who is not. It is the subjective element, the mental dimension, that distinguishes a person from a machine, which is a non-person. Now many people would contest that assertion. They would point out that humans themselves are no more than a collection of software, and that what the hardware looks like is not that important. And indeed the film Her tells the story of individuals who have messed up their lives – in a sense, these are people whose mental software no longer functions as it should.

What will the coming world of humans living alongside machines look like? What can we expect from the robotic age?
Well, it is very difficult to tell. But it looks as if there are certain relatively rigidly structured, rule-based areas in which robots can be usefully employed. One such area concerns people in need of care and the elderly. In Japan, robots are already being used to lift nursing-home patients out of bed and into their wheelchairs. Interestingly, some studies show that patients do not perceive this as an affront to their humanity. Indeed, a considerable proportion of them are relieved rather than humiliated, and do not regard it as a loss of dignity, when an impersonal machine takes on certain aspects of care, especially in the case of severely incapacitated individuals.

Advocates of robotic caregivers argue that they will help the elderly retain their independence into advanced old age. Are machines like this really a solution to the problems of aging societies?
It may be true that the use of autonomous robots in the care of dependent persons can help us to cope with this demographic challenge. And technological devices can indeed extend the period in which we remain capable of managing our own lives. But they can also – and this is where the ambivalence comes in – contribute to the further isolation and depersonalization of the elderly, cutting them off from the company of others. If one is dependent on a robot, not even the need for nursing care will bring one into regular contact with other people. And here again, the question of responsibility comes up. If a human caregiver injures a person in his or her care, it is clear where the liability lies. But what if the caregiver is a robot? Who is liable for damages? The manufacturer? The person or agency who recommended its use? The person it is meant to help? And here's another example from our not-too-distant future: software-based, fully automatic vehicles that carry passengers without human drivers. What happens when they go out of control?

Researchers in Würzburg recently tried to obtain a road traffic license for an electric wheelchair which they had upgraded into a self-driving vehicle. The license was granted on condition that the vehicle was always accompanied by someone who could assume control and bring it to a halt. Is that just an amusing anecdote or an example of a larger problem?
It is symptomatic of our current uncertainties. But we do need to be very careful. The emergence of new technologies can lead to a dramatic change in our whole cultural practice. That does not mean that all development work on autonomous robotics or software-controlled systems should be stopped. But we must keep in mind how risky it all is.

Robotic devices are also used in war zones. They dispose of mines, defuse bombs, patrol the skies as unmanned drones. The pilots who control them may be many miles away. Is it not time to regulate the missions that robots may undertake in military conflicts?
This is an issue that goes beyond the scope of the present RoboLaw project, and would only become a focus of our research if the project were extended. But we have already proposed a number of theses regarding it. At a recent conference on the use of robots in military operations, held at Delft University of Technology, we drew attention to the ethical arguments. Some of these make a case for, others against, the use of robots in war. Among those in favor is the following: generally speaking, autonomous robots are better at distinguishing between legitimate and illegitimate targets than are humans on the ground. They can assess the information and data relevant to such a decision more rapidly and more objectively, and can more effectively limit the risk to the civilian population. The argument against would be: the very use of robots and software-based weapon systems relieves the human protagonists of responsibility for the consequences to an unacceptable extent – and that would be a highly problematic development.

Should developers be given binding guidelines based on ethical principles, which prohibit them from designing and building certain types of robots or incorporating particular functions into their programs, or oblige them to include fail-safe features that prevent their misuse?
That is a basic question in the ethics of technology. The fact that it is possible to imagine even very drastic misuses of technologies is not a sufficient justification for banning their development. What we need instead is a broad-based debate about the application of new technologies – within the hard and soft sciences, at the political level and in the public sphere as a whole. One should also avoid demonizing the role of the researchers and engineers who are directly involved. They are not the ones primarily responsible for the application of new technologies, nor should the decision as to whether inherently risky new technologies should be developed further be left to them or imposed on them. If that were to happen, a very small group within our society would have immense power to shape the development of society as a whole.

Studies have shown that robots elicit emotional responses in people if they appear capable of behaving autonomously and of communicating with us. One researcher in the US allowed experimental subjects to play with a small dinosaur robot for a while, and then told them to “kill” it. But they refused and wanted to protect it instead. What are the implications of this emotional attachment for the development of robots and for how we interact with them in the future?
In their overt behavior, software-based – in other words, artificial – “organisms” are becoming more like animals, with the result that we attribute mental states such as fear, joy and pain to them, just as we quite naturally do with animals, or at any rate with the mammals with which we are familiar. So I can understand the inclination you describe. But it is important to draw a clear distinction here between our reactions to and interactions with animate creatures and those with inanimate artifacts that act as if they were animate. After all, we teach even small children to differentiate between how they treat real, live animals and their furry toys. When they play with their cuddly toys, they may ignore this distinction, but there are certain things that they would never try out on a real animal.

When we experience an impulse to protect robots from harm, we are actually protecting ourselves, according to the leader of the study cited above. In her view, we are responding to the realization that someone who would wish to harm a robot could also be a danger to people. Is this inference correct?
I have never found this argument convincing. Kant actually used it in relation to our treatment of animals, but there are many reasons to doubt that it is sound. And, as I have just said, the distinction between animate and inanimate is fundamental in any case. This debate is very similar to the one surrounding the new gaming culture and the impact of virtual violence and simulated killing on screen, whose dehumanizing effects are disputed.

A few years ago, scientists in Italy equipped a patient with a robotic hand that could be controlled by his thoughts. Other prosthetic devices such as cochlear implants for the hearing-impaired also involve direct coupling between the machine and the human nervous system. These individuals are now intimately connected to, and dependent on the function of, a certain technical apparatus. Do we also need specific safeguards in this area?
I believe that this form of prosthetics has a particularly bright future. These devices raise the question of where our physical boundaries lie. Where does my personhood end? Personhood is not only a mental entity; my identity is closely bound up with my physical body. That is why technical devices that serve as functional replacements for limbs, and are therefore part of one’s bodily existence, deserve the same protection as the body itself.

For more than two years, you and your colleagues from Pisa, Reading and Tilburg have been conducting an interdisciplinary investigation of the problems in the field of roboethics and robolaw – with the aim of developing ethical and legal recommendations for the European Commission. What conclusions have you reached so far?
Several very interesting results emerged from our research, and they have been very positively received by the scientific community. In fact, Professor Erica Palmerini, the leader of the RoboLaw project, received an award for the work from the World Technology Network last year. One of the results concerns the question of responsibility. Like our partners at the Scuola Superiore Sant’Anna in Pisa and at Tilburg University, we believe that the emergence of new robotic technologies does not require any fundamental revision of the way in which we attribute responsibility, or of the definition of notions like action and intentionality, as established in criminal law, civil law and the morality of everyday practice.

So there will be no new categorical imperative for artificial intelligence?
No. Robots are not quasi-persons, and therefore cannot be treated like real persons. But we must come up with much more specific and precisely defined criteria for real-world applications and situations. How does one set limits to the responsibility of a person who makes use of software-controlled and perhaps self-learning systems? This poses, even more urgently than before, the old problem of how we deal with risks that technologies confront us with, and which we cannot really assess in advance.

Interview: math

Prof. Dr. Julian Nida-Rümelin holds the Chair of Philosophy IV at LMU. The interdisciplinary and multinational project “RoboLaw – Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics” is funded by the European Union’s 7th Research Framework Program. For further information on the project and the researchers involved in it, see: www.robolaw.philosophie.uni-muenchen.de/index.html.