Thanks for inviting me here today. I’m pleased that the issues of data protection are taking such a front and centre role in discussions about ethics and innovation.
I thought I’d start with a story. Are you sitting comfortably? Then I’ll begin.
Once upon a time there was a little girl called Ada. She had a quick brain and a huge imagination. The daughter of a poet and a gifted mathematician, Ada studied hard and became quite something in the world of computers.
But the twist in this tale is that Ada’s pioneering work took place nearly 200 years ago.
At a time when electricity was “new”, steam trains were an unconventional form of travel and the sticky postage stamp was a revolution in communication.
When I address conferences I often remark on how technology has changed beyond all recognition in the space of a generation – the 20 years since the Data Protection Act, the law my office regulates, was forged.
It’s easy to forget the origins of this revolution go way, way back.
Ada Lovelace may well be known to you – as the daughter of romantic poet Lord Byron, or a visionary with a passion for flying or for creating the world’s first machine algorithm.
But here’s what sets Ada apart and why I mention her here today. Ada looked beyond what was immediately possible. She saw Charles Babbage’s Analytical Engine – the first ever general purpose computer – as more than just a number cruncher.
She saw how numbers could represent other things – letters, musical notes, symbols – and how the machine could manipulate them according to rules.
She developed a vision of computer capability, a mind-set that she called “poetical science”. It led her to ask questions and examine how individuals and society relate to technology as a collaborative tool.
Ada saw the future. And now it’s our job to make some predictions of our own.
What will technology look like in the future? What will it look like in another 200 years? Yuval Harari has some interesting thoughts on that subject in his latest book Homo Deus.
He asks how artificial intelligence might ultimately outsmart us all and reduce our role as humans to that of bystanders.
We’re not quite there yet, but the world already seems a pretty futuristic place. The Transport Minister has indicated the first autonomous cars could be on sale in just three years.
Law enforcement agencies use biometric software to scan faces in CCTV footage and security firms use it to collect demographic data on crowds.
Businesses are changing too; using AI technology to improve customer service and streamline their operations.
Almost every day I read news stories about AI’s capabilities and effects. You’ll all have read about Facebook’s controversial new algorithm that can judge whether an individual’s posts may indicate thoughts of suicide.
And I recently read about computers that could, one day, assess your body mass index from a photo before offering you health insurance.
It makes me wonder – will our story have a happy ending? That’s why we’re here today.
And why am I here today? What role does the Information Commissioner’s Office play in this space?
Many issues relating to data ethics involve personal data. And when it comes to personal data, that’s my office’s domain.
It may be useful for me to set out our regulatory role here. First off, we are a statutory regulator independent of government.
We are responsible for ensuring that personal data is handled in line with the law – specifically the Data Protection Act 1998. We educate and advise, comment on and raise awareness on issues related to data protection. When we need to, we can take enforcement action.
Our duties are wide and comprehensive; we are not merely a complaints-based regulator. But when you strip it all back, my office is here to ensure fairness, transparency and accountability in the use of personal data on behalf of people in the UK.
These are principles of data protection, but they apply to some of the fundamental ethical questions we are discussing here as well.
These principles in the law are fit for purpose. They have stood the test of time and are technology-neutral, and those who argue we need a new legal framework miss the mark.
I accept that the Data Protection Act is not perfect and that it has struggled to keep pace with technological advances, including AI. The 1995 directive and the Data Protection Act have not shaped the evolution of the internet or prevented surveillance from becoming the prevailing business model; the law has its limitations.
But there is a new law in town. The General Data Protection Regulation.
And this is a significant step up in the law. It was drafted by legislators here in the UK and throughout Europe for the very purpose of trying to tackle opaque decision-making by machines.
The GDPR significantly enhances people’s ability to challenge decisions made by machines. It provides for a measure of algorithmic transparency.
It provides for human intervention in decisions that have legal or similar effects.
This is not a new game played by different rules. The rules remain the same – fairness, transparency, accountability – and my office is well placed to regulate them.
The idea that data protection, embodied in legislation, does not work in a big data context is wrong.
Investigation into use of data analytics for political purposes
You’ll know of our investigation into the use of data analytics for political purposes. We’re looking at whether personal information was analysed to micro-target people as part of a political campaign and have been particularly focussed on the EU Referendum.
The overall goal of this work is to give the public insight into the vast sources of data and personal information used in the political arena.
I doubt very much that the majority of people understand the behind-the-scenes practices of data brokers, parties, campaigns and social media platforms, let alone the potential impact on their privacy.
It is still too soon for me to speculate on the outcome of our investigation.
But I will say this. Whether or not we find practices that contravened the law – and this is where I have jurisdiction – there are significant ethical questions here.
Ethical questions about truthfulness, fairness, respect, bias and maintenance of public trust in our political campaigns and referendums and perhaps even our democracy.
Even if it’s transparent, even if it’s legal, is it the right thing to do?
Ethics is at the root of privacy and is the future of data protection. In my view, this is the way forward. There must be a convergence.
For those of you who are interested, a fuller update on our investigation will be published on the ICO website this afternoon.
So I have the law to back me up. But, as I say, laws, regulation and guidance must keep pace with advancing technologies like AI and machine learning.
It’s important to create an environment that supports innovation without compromising individuals’ privacy rights.
As I’ve mentioned, on 25 May 2018 a new chapter begins when the GDPR takes effect. This is a much-needed modernisation of the law which gives us the right tools to tackle the challenges ahead.
The GDPR does not specifically reference data ethics, but it is clear that its considerable focus on new technologies – particularly profiling and automated decision making – reflects the concerns of legislators about the personal and societal effect of powerful data-processing technology.
It also embeds the concept of data protection by design – an essential tool in minimising privacy risks and building trust – and Data Protection Impact Assessments, which will be compulsory in some high-risk circumstances and, in some cases, will have to be assessed and approved by my office.
The new law minimises the chances of acting in haste, repenting at leisure. The work has to be done up front.
But these tools need not be restricted to data protection. It’s hard to separate data protection by design from data ethics by design.
Companies must ask themselves questions that identify the risks they are creating for others and mitigate those risks. There is every reason to include ethical considerations as part of that process.
The most innovative companies will go further and use these tools as a springboard to think of ways they can integrate their data protection and ethical assessments.
That just makes common sense. And it speaks again to convergence.
We’ve offered practical advice on applying GDPR compliant impact assessments in the specific context of big data analytics. It forms part of our paper on Big data, artificial intelligence and machine learning.
It addresses the broader societal implications of AI and says that "embedding privacy and data protection into big data analytics enables not only societal benefits such as dignity, personality and community but also organisational benefits like creativity, innovation and trust. In short, it enables big data to do all the good things it can do."
There is a lot of good it can do.
The worlds of data protection and data ethics are not sitting in separate universes. But there are broader questions beyond the law. We are all struggling to define the gaps and work out how the outstanding questions can be addressed.
Although I would like to think my office is sagacious in this space, we do need to have a broader conversation across many sectors and society.
There are other key players, reports and initiatives contributing to a go-forward approach for the UK – and many of them are in the room today. The Royal Society and British Academy, Wendy Hall’s report to government on the AI industry, the Alan Turing Institute, the Nuffield Foundation, and key studies by parliamentarians.
Last month the Government announced its intention to create a new body concerned with data ethics. Matt Hancock has already spoken about it this morning.
The Centre for Data Ethics and Innovation can complement the role of the ICO and other regulators by promoting the consideration of ethical issues. We recognise it can be a positive enabler and encourager of innovation particularly around AI and machine learning.
The Centre for Data Ethics and Innovation
So how do I see the new Centre shaping up?
I’d like to see it facilitating meaningful public consultation on matters that, ultimately, impact on people and their privacy. These consultations will help define the public and societal benefit in use of data and ensure it benefits communities and not just a few individuals.
I’d like to see it focus on futurology. Stepping out of the here and now and scanning the horizon for the next big data ethics challenge.
We would like the centre, or a hub of bodies linked to it, to work with regulators to provide overarching ethical principles for AI and machine learning.
We recognise general principles will have specific applications across sectors.
AI applications for automated vehicles could have very different implications from those in criminal justice or the intelligence services, for example.
That’s quite a wish list!
But while I’m talking about it, the Centre could also support and encourage codes of conduct and standards.
For example, support the development of a code of conduct for ethics committees in companies. What does good look like?
It is critically important that the new body takes time early on to properly assess its role and how it can fill the gaps that exist. It should not take on a regulatory role which would only complicate the landscape.
We look forward to working with the new Centre and sharing our expertise – especially around the impact of ubiquitous data collection and technologies like artificial intelligence.
And we’ll continue to co-ordinate our work with other independent regulators in the data ethics space.
In my view there’s no dichotomy between ethics and innovation. But ethical considerations should dictate the direction of travel.
The UK has always been a leader in data protection – it’s one of the things that attracted me to this job – and the UK is a leader in the digital economy.
This will continue if we can embrace the law, and think about its principles as we continue to innovate.
We’re in a race to the top with economies like Japan, Singapore and France that are focussed on AI and digital economies. They know – we know – how important it is to get ethical issues right when it comes to AI.
In closing, allow me to look again to the past.
Ada said: “Understand well as I may, my comprehension can only be an infinitesimal fraction of all I want to understand.”
There is so much more for us all to understand. But I do know this: The UK is uniquely placed to be a leader in this space and to ensure that the principles of data protection and data ethics are firmly embedded in a future framework.