The human/technology frontier in the wider context
Published November 2016
The blurring of the boundary between humans and technology has far reaching consequences throughout different areas of society.
Are there recent scientific, legal or social developments?
There have been recent claims that the rise of robots could lead to unemployment rates greater than 50 per cent within 30 years. Virtual reality and augmented reality technology has become increasingly affordable and accessible to the general population, and the recent popularity of Pokémon Go attests to the public interest. Major car manufacturers have predicted the appearance of fully autonomous cars on the road by 2021. Technological advances in the military sphere, such as combat drones, have revolutionised how war is conducted. The United Nations has held a meeting of experts on lethal autonomous weapons systems (LAWS) every year for the last three years to address the potential challenges and threats arising from the development of such technology.
Are there complex ethical issues?
If, as a society, we are conditioned to see online reality as less ‘real’, there is a concern that the moral code governing the use of virtual reality and augmented reality may be similarly relaxed. If technology allows human acts to be detached from humanity, then there is a danger that people will not feel the weight of their actions. The use of drone warfare, for example, enables troops to deploy deadly weapons while remaining safely thousands of miles from the battle frontline. If LAWS were allowed to be developed, they could select and engage targets without any human intervention. Autonomous systems require guidance as to how to behave in ethically challenging situations: an autonomous car, for example, might swerve to avoid a child but in doing so risk hitting someone else. More than a simple set of rules for how to behave, autonomous systems also require rules allowing them to anticipate the possible effects of their own actions. There may be further concerns about creating path dependency in automated systems, such that ‘normal’ ways of thinking become embedded in ways that exclude large minorities or miss new solutions.
Advances in artificial intelligence (AI) in the future could raise a number of fundamental questions about our notions of intelligence and consciousness, and even the very question of what it means to be human. ‘Technological singularity’ is the hypothesis that the invention of artificial superintelligence will trigger runaway technological growth, resulting in a point at which machine intelligence surpasses that of humans. Some have questioned whether this could mean the end of the human race. The Machine Intelligence Research Institute has suggested the need to build ‘friendly AI’: the advances occurring in AI should include an effort to make AI intrinsically friendly and humane.
Is there a potential policy impact?
The best-known set of guidelines for robot ethics are the “three laws of robotics” coined by Isaac Asimov, the science-fiction writer, in 1942. In 2010, experts at a joint EPSRC and AHRC workshop developed five ethical principles for robotics. In June 2016, Satya Nadella, CEO of Microsoft, roughly sketched a set of rules for artificial intelligence to be observed by its designers. There is a need, however, for further guidance regarding responsible research and innovation in AI, robotics and autonomous systems. The impact of non-human technologies on the human workforce will be a matter for consideration in wider social policy.
Is it a subject of public concern?
There has been substantial media coverage of the idea of robots replacing the human workforce and its impact on unemployment, as well as considerable media interest in autonomous vehicles.
Is the consideration timely?
Technologies have reached such a point that the deployment of LAWS may be practically feasible within the next few years. The EPSRC UK Robotics and Autonomous Systems Network (UK-RAS Network) was established in March 2015, and the first UK Robotics Week was held in 2016, with a similar programme planned for 2017. The EPSRC has also been working this year on human-like computing and the challenges it poses for researchers. A study of AI predictions presented at the 2012 Singularity Summit found a wide range of predicted dates for the singularity to occur, with a median date of 2040.
Can the Council offer a distinctive contribution?
The UN has taken on the issue of international development of LAWS. The House of Lords Science and Technology Committee is to conduct an inquiry into future uses of autonomous vehicles in the UK, and the House of Commons Science and Technology Committee recently recommended establishing a standing Commission on AI. The Council may be able to offer a particular contribution by examining ethical rules online, specifically within the spheres of virtual reality and augmented reality. Of particular interest might be how the younger generation comes to engage morally with the ever-expanding virtual world. Another possible contribution might involve a specific focus on robotics, looking at the ethical hurdles to ensuring that robots and humans co-exist safely and productively. A public consultation would be valuable in exploring acceptable decision-making principles that could be incorporated into autonomous systems.
Possible future work topics
This is one of the topics that have been suggested as possible project areas for further investigation by the Council. These topic summaries do not aim for comprehensiveness; rather, they are intended to signpost some of the key considerations and to provide a starting point for discussion. Each summary includes links to relevant publications on the topic.