Challenges for Governance
AI has applications in fields that are already subject to regulation, such as data protection, research, and healthcare. However, AI is developing in a fast-moving, entrepreneurial way that may challenge these established frameworks. A key question is whether AI should be regulated as a distinct area in its own right, or whether existing areas of regulation should be reviewed with the possible impact of AI in mind.
Further challenges include the need to ensure that AI is developed and used in ways that are transparent, accountable, and compatible with the public interest, while balancing these aims against the desire to drive UK innovation. Many have also highlighted the need for researchers, healthcare professionals, and policy-makers to be equipped with the skills and knowledge to evaluate and make the best use of AI.
The future of AI
In the future, AI systems are likely to become more advanced and able to carry out a wider range of tasks without human control or input. If this comes about, some have suggested that AI systems will need to learn to ‘be ethical’ and to make ethical decisions. This is the subject of much philosophical debate, raising questions about whether and how ethical values or principles can ever be coded into or learnt by a machine; who, if anyone, should decide on those values; and whether duties that apply to humans can or should apply to machines, or whether new ethical principles are needed.