The GDPR and Beyond: Privacy, Transparency and the Law


Thank you to the Alan Turing Institute for the invitation to speak here today. You know, about a mile from my office in Wilmslow – and just a brisk walk from my own home – is a five-bed Victorian semi. Fixed to its red-brick facade is a blue plaque that reads:

“Alan Turing, founder of computer science and cryptographer, whose work was key to breaking the wartime Enigma codes, lived and died here.”

Now here we are 64 years after his death, still admiring him, his work, his legacy.

In his day, Turing was one of the few futurists. He had the ability to look beyond what was probable and into what might one day be possible. He had the power to identify potential and apply it in ways his contemporaries couldn’t begin to imagine.

Strange to think that Turing’s achievements have gone down in “history” – when his discoveries, his ideas were created barely a lifetime ago.

We’ve come so far in such a short space of time. In the Manchester Museum of Science and Industry there’s a replica of Baby – developed in 1948, it was the first computer to store and run a program. This is a machine that would fill my living room, yet it has less power and capability than the iPhone in my pocket.

So what does all this have to do with me, the UK’s Information Commissioner, charged with upholding the rights of individuals to keep control of their personal information?

Well, the most significant risks to individuals’ personal information are now driven by the use of new technologies. The revelations over the last few days involving Cambridge Analytica, Facebook and political campaigns are a dramatic case in point.

But we’re also dealing with a rise in cyber-attacks, as well as web and cross-device tracking and, of course, the rise of artificial intelligence, big data and machine learning.

These technologies use high volumes of personal data from a wide range of sources, making decisions and providing new insights about individuals. And cloud computing platforms enable the storage and processing power to be used at scale.

AI is not the future. It is the now. New facial recognition tools are being used in law enforcement – I’ll be blogging soon about this – and the credit and finance sectors are already using social scoring techniques.

The ability of AI to intrude into private life and affect human behaviour by manipulating personal data makes this topic a priority for the ICO.

So, my office has a significant role to play. I have often spoken about how innovation and privacy must go hand in hand. As technological developments progress ever more rapidly, I am duty bound to stand up for the privacy rights of UK citizens. I will not allow their fundamental right to privacy to be carried away on a wave of progress.

In another 64 years, historians will look back at what we did. Not just at the nuts and bolts of our inventions, but at the steps we took to ensure they were used in ways that were ethical and moral. That we anticipated the risks, we mitigated them and, in turn, protected individuals and the broader society.

Winston Churchill famously said: “Those who fail to learn from history are doomed to repeat it.”

History has much to teach us. We know that once the genie is out of the lamp, it’s darn near impossible to shove him back in.

Our history books are full of examples of good inventions used for bad things. Or great discoveries that ran amok in ways no-one foresaw.

When the Curies discovered radium it was hailed as a wonder. It was used in cosmetics, toothpaste, toys and novelty watches. Even when it became clear that radium might be responsible for sickness and even deaths, corporations seemed loath to take it off the market. Where was the concern for the public? What should have been done differently?

If we had known that the Internet would be used to sell illegal drugs and create a dark web where terrorists flourish, would we have been more cautious?

Okay – we can’t know what we don’t know. But history has taught us that there are repercussions, consequences. It’s our job to search out what they might be and act before it’s too late.

History has its eyes on us.

The future

Today we benefit from the wisdom of many futurists. Notably the late Stephen Hawking, who warned that AI could be the road to dystopia; Yuval Harari, who says AI will make bystanders of us all; and Ray Kurzweil, the American author and inventor who says AI will outsmart the human brain in computational capability by the middle of this century.

I am a futurist too. I am excited by how AI is already enriching our lives, and by how it can be a lever for the economy. It can improve health, transport, economics, law enforcement, research – every part of what we do.

But I’m also trying to predict how AI will impact on the privacy of individuals.

Algorithms are not new. Ever since the first shampoo bottle instructed us to “wash, rinse, repeat” we’ve been using a formula to reach a conclusion.

What’s changed is the volume of personal data, the velocity at which it can be processed and the value attached to it. And organisations are slurping it up.

My office must be a triple threat. I need the law. Regulation is where the power is. But in order to exercise that power, the regulator must also understand the technology and the ethics.

The law

Let’s start with the law.

A lot has changed since the Data Protection Act was forged 20 years ago. Soon there will be a new law in town, the General Data Protection Regulation.

It’s a much-needed modernisation that gives us the right tools to tackle the challenges ahead.

Its considerable focus on new technologies reflects the concerns of legislators here in the UK and throughout Europe about the personal and societal effect of powerful data-processing technology like AI, profiling and automated decision-making.

The GDPR enhances people’s ability to challenge decisions made by machines. It provides a measure of algorithmic transparency. And it provides for human intervention in decisions that are relevant to people’s lives. Shopping recommendations generated by machines? Not a big deal. But eligibility for a certain school, a career promotion or medical treatment? Get a human involved.

Accountability and transparency are driving forces in the GDPR. The rules of transparency and fairness have not changed, but organisations are obliged to account for what they do, why and how they do it.

So my office is unlikely to have an issue with narrow AI that’s designed to solve a specific problem using defined data sets. This type of AI can be compliant with data protection law.

AI that’s opaque, on the other hand, calls fairness into question, because the basis for the actions being taken cannot be readily shown. It crosses a red line if it can’t be explained.

Because if a developer can’t explain what an algorithm is looking for, how it does its work or how a decision is reached, it is not transparent. How can that be fair?

The GDPR also embeds the concept of data protection by design – an essential tool in minimising privacy risks and building trust. And it embeds Data Protection Impact Assessments, which will be compulsory in some high-risk circumstances and, in some cases, will even have to be approved by my office.

The GDPR increases and intensifies my regulatory armoury – from issuing warnings or reprimands to fining those that deliberately, consistently or negligently flout the law up to £17 million or four per cent of annual global turnover, whichever is greater. I can even stop an organisation from processing personal data.
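
To make the “whichever is greater” rule concrete, here is a minimal illustrative sketch in Python. It is not part of the speech itself, and the turnover figures are hypothetical, chosen only to show when each limb of the rule applies:

    # Minimal sketch (illustrative only): the GDPR ceiling is the greater
    # of a fixed sum and four per cent of annual global turnover.
    FIXED_MAXIMUM_GBP = 17_000_000  # roughly EUR 20m at 2018 exchange rates
    TURNOVER_SHARE = 0.04           # four per cent

    def max_fine(annual_global_turnover_gbp: float) -> float:
        """Return the statutory maximum: whichever figure is greater."""
        return max(FIXED_MAXIMUM_GBP, TURNOVER_SHARE * annual_global_turnover_gbp)

    # Hypothetical turnovers, for illustration only:
    print(max_fine(100_000_000))    # 17000000 -- the fixed sum applies
    print(max_fine(1_000_000_000))  # 40000000.0 -- four per cent is greater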

So yes, this regulator will have teeth. But I prefer the bark to the bite and my office is committed to prevention over punishment.

Yes, there is enforcement – but encouragement, engagement and education must all come first. Because at the heart of this law is the public. People. In the end, it comes down to building trust and confidence that organisations will handle their personal data fairly and in line with the law. When you understand and commit to that, compliance will follow.

Technology

So I have the law to back me up.

But without a solid understanding of new technologies, my office is not relevant and our authority is weak.

We are well on the way to becoming an innovative regulator but I will admit we have work to do. There are particular challenges around auditing algorithms, for example, and we will undoubtedly require specialist resource. But we are progressing rapidly.

We have just published our first Technology Strategy that outlines how we will adapt to technological change as it impacts information rights and how we’ll plan ahead for the arrival of new technologies.

Understanding how tech affects information rights is not a niche area for the ICO, or just the responsibility of one department. Yes, my new Technology Policy Department will spearhead the work, but this is a thread that must run through the fabric of my office.

AI is one of our top three priorities for 2018/19 and we’ll kick off our new Technology Fellowship programme with a two-year post-doctoral appointment to investigate and research the impact of AI on data privacy.

I’ve spoken before about introducing a regulatory sandbox to enable organisations to develop innovative products and services while benefitting from advice and support from the regulator. We intend to consult on implementation this year.

And I should say we are not coming at this from a standing start. Our award-winning AI and Big Data paper, which we updated in September, has enjoyed widespread acclaim. It addresses the broader societal implications of the technological age.

Data ethics

And that brings me to data ethics.

This is a subject that fascinates me. I spoke at length about it to the TechUK Data Ethics summit in December last year. The subject, I believe, is still in its infancy.

My new Head of Technology, Nigel Houlden, came to us a few months ago from Wrexham Glyndwr University. In his team of 14, only one had a PhD in data ethics. We are good at the doing; we have to get better at the thinking.

So here’s what I think. The worlds of data protection law and data ethics are not sitting in separate universes. But there are broader questions beyond the law. And we are all working to define the gaps and address outstanding questions.

I like to think my office is sagacious in this space, but I am not so naive as to fail to recognise the need for broader conversations across many sectors and society.

There are other key players here – not least the Alan Turing Institute, with which we enjoy a close working relationship – but also the Royal Society and the British Academy, the Nuffield Foundation, academics and parliamentarians.

I am enthused by the introduction of a Centre for Data Ethics and Innovation, as announced by the then Minister for Digital, Matt Hancock, last November. It’s a £9 million investment and part of the Government’s plans to make the UK the best place in the world for businesses developing AI to start, grow and thrive.

We’re learning from the past to inform our present and shape our future. Using the law to keep tech in check and making the most of ethical discussion and debate.

But for me there’s another key component: it comes down to public trust and confidence.

Conclusion

AI offers a World of Pure Imagination. Some say it’s a wonder. It’s exciting. Its capabilities may be beyond our wildest dreams.

But we can’t afford to go crazy like a kid in a candy store.

There are questions about whether the use of data is acceptable in one context, but not another. We have to consider the right to autonomy and privacy and understand that the way opaque AI algorithms interpret personal data cannot be addressed by legislation alone.

As I’ve said, there are broader questions beyond the law. Even if it’s legal, is it the right thing to do? How do we use anonymised data ethically? How can we regulate information derived from personal data, from medical records for example?

What can we do to spot what’s coming up on the technological horizon?

How can we draw together information and analysis from a wide range of disciplines and resources – not just regulatory ones?

I have an eye on the future. I know most of you in this room do too. We can get the public policy right. The UK is an established and recognised leader in this space. The principles of data protection and data ethics are firmly embedded in our future framework.

We’re in a race to the top with economies like Japan, Singapore and France that are focused on AI and digital.

This is not a machine vs human battle. It is a defining moment which requires a sense of responsibility and a long-term view.

Future generations will thank us if the way in which we develop artificial intelligence today looks at the true value it can deliver while respecting data protection and ethical principles.
