Do robots have rights? The European Parliament addresses artificial intelligence and robotics



A lively discussion is currently under way in the business world regarding possible applications of intelligent IT systems and autonomous machines and equipment. Rapid technical development in these areas has spurred the imagination of users. The application areas are extremely diverse, and include production robots in industry, drones and self-driving delivery robots in logistics and warehousing, healthcare robots and driverless vehicles. What sounds like science fiction has already become reality in some cases, with intelligent robots being particularly common in production and logistics.

From a legal viewpoint, there are still a host of unanswered questions around robotics and the artificial intelligence (AI) incorporated into robots. The European Parliament accordingly adopted a resolution with recommendations to the European Commission on civil law rules on robotics (2015/2103(INL)) on 16 February 2017, with 396 votes in favour, 123 against and 85 abstentions. The recommendations are based on a report published by the European Parliament’s Committee on Legal Affairs on 12 January 2017. The Commission is not under any obligation to comply with the Parliament’s recommendations, but must state its reasons if it refuses to do so.

The recommendations of the European Parliament relate to general principles around developing robotics and AI for civil use, and address various topics involving these new technologies. Key points include the desire to establish ethical principles for developing and using AI-based robotics and to resolve the numerous liability issues that arise. In this context, the European Parliament is calling on the Commission to consider introducing a specific legal status for intelligent robots in the long term. The Parliament’s resolution also advocates the establishment of a European agency for robotics and artificial intelligence, with the aim of providing the technical, ethical and regulatory expertise required to meet the challenges and opportunities arising from the development of robotics in a timely and informed manner. There are also recommendations with regard to setting up a register of robots across the European Union and introducing mandatory registration and insurance for intelligent robots.

An ethical framework: the “Charter on Robotics”

The European Parliament notes that the development and use of robotics give rise to a number of tensions and risks relating to human safety, privacy, integrity, dignity, autonomy and data ownership. A majority of MEPs believe that an ethical framework is required for the design and use of robots, and have therefore proposed the establishment of a “Charter on Robotics” to set one out. The Charter would require researchers in the field of robotics to commit themselves to the highest standards of ethical and professional conduct, and to comply with the principles of beneficence (robots should act in the best interests of humans), non-maleficence (robots should not harm humans), autonomy (humans should be able to make an informed, uncoerced decision about the terms of their interaction with robots) and justice (the benefits of robotics should be distributed fairly). The form in which the Charter could ultimately become law remains entirely open. Its principles are defined in very broad and general terms, and the final wording will doubtless provide scope for considerable discussion.

Robots as “electronic persons”? – Potentially sensible with regard to insurance law…

In particular, the Parliament’s proposal to consider introducing a specific legal status for robots in the long term is likely to spark a major debate. Should robots really be given a special legal status, often referred to as “electronic person” or “e-person”? Although the idea sounds bizarre at first, on closer inspection it rests on very practical considerations. If a robot has its own legal status, it can also be held responsible for its own actions and decisions through that status: if it causes damage, for instance, the robot itself could be sued for compensation. That will only be worthwhile if the damage is covered by insurance, of course, which is why the Parliament is also proposing obligatory insurance for intelligent robots. From a legal perspective, the introduction of an “electronic person” could therefore make sense in combination with such compulsory insurance.

The development of highly intelligent, fully automated systems appears to be just a matter of time. Human responsibility will decline in importance as machines become more autonomous and take more decisions on their own; increasingly, humans will deny responsibility by arguing that they were entitled to rely completely on intelligent technology. After all, the whole aim of automation and AI is to avoid the need to continuously instruct and monitor such devices. It is also debatable whether continuous human control will even be feasible for intelligent, sophisticated systems that act autonomously. In addition, where damage is caused it will not always be possible to determine who is responsible, or to establish their exact degree of responsibility, particularly where multiple intelligent systems interact. This is unknown territory for our current legal system, because civil law recognises natural persons and legal persons, but not “e-persons”.

…and contract law

A specific legal status for robots is probably still some way off. If it were introduced, however, there would also be benefits with regard to contract law. Robots that can make declarations on their own behalf and act on the basis of their own legal personality would themselves become contracting partners, with their own rights and obligations. As such, robots would also be “personally” liable and subject to litigation. This brings its own set of challenges. The question arises, for instance, as to whether a robot should acquire assets to support such liability, and how this could be done. Should it be rewarded for its work? While this may all sound highly futuristic and other-worldly today, the economic rationale behind it is eminently sensible and would pave the way for robots to pay tax on their earnings. That could prove a crucial factor in securing the future of social welfare systems, and insurance premiums for robot liability insurance could also be paid out of this income.

Summary and outlook

The European Parliament has put forward initial proposals in its resolution on legal rules for machines that are able to act with a high degree of autonomy and take their own decisions through being equipped with AI and having physical freedom of movement. This will not be the final word on the matter from a legal perspective, and we are still some years away from corresponding laws being enacted. In the meantime, technical development in the field of AI and robotics will not wait for national or European lawmakers and is set to continue unabated. It remains to be seen whether technical progress will soon overtake the legal discussion.

Aside from the legal issues surrounding robotics, lawyers will be interested to see how AI finds its way into our own professional lives. There has been a lot of talk recently about legal tech and digital transformation in relation to legal advice. Yet judging by the numerous new legal issues that arise in connection with AI and robotics, robots appear to be creating as much new work for us on the one hand as intelligent assistants will be able to take over on the other.

For further information on this topic please contact Markus Häuser.