As acronyms go, GDPR doesn’t exactly trip off the tongue – for some of us it rather trips up the tongue. Nor would it be the most memorable (consider, by comparison, the US National Institutes of Health’s Brain Research through Advancing Innovative Neurotechnologies® Initiative) were it not for the way it has penetrated our internal communications in recent months. Like many others, we are getting to grips with the nuts and bolts of complying with the forthcoming General Data Protection Regulation in our own work: what to do with those precious Excel spreadsheets, neatly filed emails and important phone numbers scribbled on Post-it notes?

But there are of course implications beyond our own archiving habits – starkly illustrated by the recent uncovering of Cambridge Analytica’s shady use of Facebook data. We should know, having not long ago emerged from a two-year inquiry into how data about individuals are collected, linked and used in biomedical research and healthcare, and the ethical issues that arise from those practices. Our report took shape while the GDPR was being negotiated, and while it did not set out to say exactly what the law should look like, it did propose an ethical (surprise!) approach to data use based on some of the principles that underpin the law – more on that later – along with some suggestions as to how this might work in practice.

The ‘in practice’ bit, though, is (and should be) the subject of continuous discussion, no less so with new regulation about to be rolled out. While the GDPR has been agreed by EU member states, how exactly it will apply in the UK, particularly after Brexit, is currently unclear. The UK’s own Data Protection Bill is currently being hammered out in Parliament, and the resulting Act is expected to take effect in May 2018, when the GDPR starts to apply across all EU member states. Moreover, there is plenty of ambiguity in the text of the law to keep lawyers, guidance-writers and IT consultants busy with interpretation.

One such ambiguity pops up in a handful of articles in the GDPR concerning automated decision making. Article 22 states that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Exceptions to this rule apply, but in most of those cases “the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”.

In other words, and with additional provisions set out in Articles 13-15, if anyone is planning to feed your data into an algorithm that will use it to make a decision that affects you (say about a bank loan, an insurance quote, or, in future, a medical diagnosis), you are entitled to know that this is happening. If that decision is going to have serious consequences for you, there must be a human involved in making it, and you are entitled to challenge it. At least that’s how I first read it (and here is where I should insert my own disclaimer: I am not a lawyer and this should not be read as legal advice). But in fact there is disagreement about a number of things that will be important in practice, such as:

  • What kind of information ‘data subjects’ will be entitled to know about how their data have been used and how a decision has been made (a bit of a spat has arisen between academics about whether a ‘right to explanation’ can be found in the law – with this, this and this paper at its core).
  • What exactly ‘automated’ means and to what extent humans could or should be involved – could companies get away with a tokenistic human presence, or must there be people integral to the process and able to overturn decisions (a distinction sketched, very roughly, below)?

...just to name a few. Underlying these questions is a tricky balancing act between different interests: the protection of individual human rights, particularly privacy; intellectual property rights and trade secrets; and the desire not to stifle innovation in the technology sector. How these are to be weighed up might eventually be for a court to decide.
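For the less legally (or technically) minded, the distinction raised in the second bullet point can be made concrete with a toy example. What follows is purely a hypothetical sketch in Python – the loan scenario, the ‘scoring rule’ and all the names are invented for illustration, and none of it is drawn from our report or from any real system. It simply contrasts a human who rubber-stamps an automated outcome with one who is genuinely able to overturn it.

```python
# Hypothetical sketch only: a toy 'automated decision' about a loan, and two
# very different kinds of human involvement after the automated step.
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str        # e.g. "approve" or "refuse"
    automated: bool     # True if no meaningful human input was involved
    explanation: str    # what the data subject could be told about the decision


def automated_score(income: float, existing_debt: float) -> Decision:
    """Purely automated step: a made-up scoring rule standing in for a model."""
    outcome = "approve" if income > 3 * existing_debt else "refuse"
    return Decision(outcome, automated=True,
                    explanation=f"income={income}, debt={existing_debt}, rule: income > 3*debt")


def tokenistic_review(decision: Decision) -> Decision:
    """A human clicks 'OK' but cannot change anything -- arguably still a
    decision 'based solely on automated processing'."""
    return decision


def meaningful_review(decision: Decision, reviewer_overrides: bool) -> Decision:
    """A human considers the case and is able to overturn the automated outcome."""
    if reviewer_overrides:
        flipped = "approve" if decision.outcome == "refuse" else "refuse"
        return Decision(flipped, automated=False,
                        explanation=decision.explanation + "; overturned on human review")
    return Decision(decision.outcome, automated=False,
                    explanation=decision.explanation + "; confirmed on human review")


if __name__ == "__main__":
    first_pass = automated_score(income=20000, existing_debt=15000)
    print(tokenistic_review(first_pass))                           # still an automated refusal
    print(meaningful_review(first_pass, reviewer_overrides=True))  # a human overturns it
```

Real systems are of course vastly more complicated, but whether the reviewer can actually change the outcome – rather than merely witness it – is essentially what that second question is getting at.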

Automated decision making is only one of many ways of processing data, but it is receiving wider attention at the moment, along with other possible uses of technologies that are often thought of as artificial intelligence (AI). These are technologies that aim, or claim, to mimic features of human intelligence in order to carry out tasks that have previously been carried out by humans, or that have not been possible before because the scale of information processing was beyond our capabilities. The promises of AI technologies are huge (some would say hugely hyped), and the implications for a range of fields, including healthcare and research, could be considerable. This is reflected in the sheer number of initiatives exploring what those implications might be – with reports and ongoing projects by organisations such as the PhG Foundation, Reform, the Royal Society, and the European Group on Ethics. The House of Lords has set up a select committee on AI, due to report on Monday, and the House of Commons Science and Technology Committee has recently completed its own inquiry into algorithms in decision making. The UK Government has just begun recruiting for a new Centre for Data Ethics and Innovation to focus on uses of data-driven technologies. Meanwhile, a few flights of stairs above our offices, the Nuffield Foundation has established the Ada Lovelace Institute, an independent body focusing on the ethical and social issues arising from data use, AI and associated technologies.

We are publishing our own short briefing note on AI in healthcare and research shortly, so I will not give away too much here.

Common to all AI technologies, though, is that they depend on data, and particularly data about people – hence their relevance to the GDPR. As previously mentioned, our own report on the use of biological and health data sets out an ethical approach to the use of data, drawing on basic rights and principles that underpin the legal system: respect for persons; respect for human rights; public participation; and formal and social accountability. It is our view that those involved in initiatives using people’s data – particularly in areas of significant public interest, such as healthcare – must make sure they are both formally and socially accountable, and that they involve people properly to find out their values, interests, expectations and preferences, and build these into their design and governance models.

Our conclusion that public engagement and involvement should be a continuous process, and that there is no one-size-fits-all model for this, seems to become ever truer with the dizzying speed of advances in big (and small) data technology.

Comments (1)

  • Chris Jackson   

    Great article. I'm a US citizen and have been reading about GDPR for some time. It will be interesting to see how other countries follow Europe and put their own data policies in place similar to GDPR.
