Privitar: How to Protect Private Data

Dame Wendy makes Introductions

Fresh from the weekend, and with memories of my lunch on Friday still in my mind, Monday led to another Web Science Institute Talk, this time with Tom Rowledge alongside. We were attending the Web Science Centre for Doctoral Training talk on “How to Protect Private Data”, given by Jason McFall, who is the Chief Technical Officer at Privitar. Jason is responsible for Privitar’s research agenda, technology strategy and product development. He has spent over a decade leading teams building enterprise software to record, analyse and act in real-time upon large and complex sets of customer data.

Dame Wendy Hall introduced the talk, providing a bit of background about her own career and her links to Privitar, before another Jason, Jason Dupres, spoke about Privitar itself. Jason Dupres is the CEO of the company; in his view, the Snowden leaks were the tipping point at which the public became worried about data privacy. He added that although we're moving towards a data-driven economy, people often don't make good decisions about data.

The Beginning of Privacy

Getting into the main body of the talk, Jason McFall covered the evolution of privacy, from the term's first legal treatment in the Harvard Law Review, prompted by the launch of the Kodak camera, to the present day, where data is incredibly abundant and its analysis far more accessible. One example is the Russian app "FindFace", described as "Shazam for faces". Jason structured his presentation around questions, including "Is privacy dead?" and "Is privacy important?", and spoke about Facebook, democracy and discrimination.

The power of data can help develop maps for efficient transportation, aid responses to natural disasters and improve energy usage in the home. However, opposition to projects such as Linky in France shows that people can be cautious about accepting such uses of their data. Using the example of health care, Jason showed that it is possible to re-identify anonymised records if you have prior knowledge and enough fragments of data are available. Machine learning and data analysis can also reveal sensitive information: Jason gave the example of data about taxi journeys in New York, which could be cross-referenced with photos of celebrities getting in and out of taxis.
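The re-identification attack Jason described can be sketched in a few lines. The toy data below is entirely hypothetical (invented names, zip codes and diagnoses for illustration): an "anonymised" medical table still carries quasi-identifiers such as postcode, birth year and sex, which can be joined against a public record that does carry names.

```python
# Hypothetical toy data illustrating a linkage attack.
medical = [  # names removed, but quasi-identifiers remain
    {"zip": "02138", "birth_year": 1945, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_year": 1972, "sex": "M", "diagnosis": "asthma"},
]
public = [  # e.g. a voter roll, with names attached
    {"name": "Alice", "zip": "02138", "birth_year": 1945, "sex": "F"},
]

def link(medical, public):
    """Join the two tables on shared quasi-identifiers,
    re-attaching names to 'anonymous' diagnoses."""
    keys = ("zip", "birth_year", "sex")
    return [
        (p["name"], m["diagnosis"])
        for p in public
        for m in medical
        if all(p[k] == m[k] for k in keys)
    ]

print(link(medical, public))  # re-identifies Alice's diagnosis
```

The point is that each table looks harmless on its own; the privacy loss only appears when fragments are combined.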

How do we deal with this? By remembering that "common sense isn't enough". A variety of methods can be employed, including differential privacy, which is used by Google, Apple and the BBC. It involves adding noise to data, either on each user's device or to a query result, so that useful aggregates, such as recommendations, can be computed without collecting the raw data in its entirety. Differential privacy has a series of advantages: it provides a formal, quantifiable definition of privacy, and it protects against post-processing. Jason spoke more about the maths behind this before unfortunately running out of time, so as to leave an opportunity for questions.
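The "adding noise to a query result" idea can be made concrete with the classic Laplace mechanism, one standard way of achieving differential privacy (the talk did not specify which mechanism Privitar or the companies named use, so this is a minimal illustrative sketch, with made-up data). A counting query has sensitivity 1, since one person's record changes the count by at most 1, so Laplace noise with scale 1/ε gives ε-differential privacy:

```python
import math
import random

def laplace_sample(scale):
    """Draw one sample from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Answer a counting query with Laplace noise calibrated to epsilon.

    Sensitivity of a count is 1, so the noise scale is 1 / epsilon:
    smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Hypothetical data: how many people in a survey are 40 or older?
ages = [23, 37, 41, 29, 52, 33, 60, 45]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
print(noisy)  # close to the true count of 4, but randomised
```

The analyst sees only the noisy answer, yet repeated aggregate queries remain useful; the post-processing property mentioned above means no further computation on the noisy answer can weaken the guarantee.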

The talk was again another fantastic opportunity to explore more about a particular aspect of Web Science, and a great call to action for technologists to “do the right thing”.
