The role and importance of Context and Verifiability in Data Protection

Over the last 18 months I’ve been attending a Data Protection/Privacy event almost every month. It has been a pretty rewarding experience; one that is very different to the usual round of CS conferences that I have been following for the better part of my career.

I’ve been listening to policy makers, lawyers, marketeers, journalists, and occasionally engineers, discussing and debating the perils from the “erosion of privacy”, the measures to be taken, and the need to find a balance between innovation and growth on one side, and civil rights, on the other.

In addition to the events that I have attended myself, I have also read several reports on the outcomes of other consultations on the topics (for example the “bridges” and “shifts” reports). With this post I would like to discuss two issues that have been spinning in my head since the earliest days of my involvement with privacy and data protection. I am sure that these are thoughts that must have occurred to others as well, but I haven’t seen them spelled out clearly, hence the post.

Context (or lack of)

I’ve always enjoyed discussing abstract ideas — fairness, transparency, reputation, information, privacy. There’s something inherently tempting in discussing such abstract notions (I’ll try to avoid using the “ph” word). Maybe it is the hope that a breakthrough at this abstract layer will automatically solve innumerable specific and practical problems relating to each one of these abstract ideas. Whoever makes such a contribution certainly has a claim (and a chance) on immortality.

I am tempted to believe that this might be the underlying reason that the vast majority of the discussions that I have attended stay at this very high, very abstract level. “A general solution to the privacy issue”, “the value of private information”, “the danger from privacy leakage”. All these statements provide good and natural starting points for debates in the area. But to make a founded argument, and hopefully reach some useful conclusion, one that stands a chance of having an impact on real-world technologies and services, you need to have a handle, something concrete enough to build upon. I call this “Context”. My main point here is that most discussions that I have attended stay at a very abstract level and thus lack concrete Context.

Having Context can improve many of our discussions and lead to tangible results faster and easier. If such tangible results don’t start showing up in the foreseeable future, it’s only natural to expect that everyone will eventually get fed up, become bored and exhausted, and forget about the whole privacy and data protection matter altogether. So why don’t we start interleaving in our abstract discussions some more grounded ones? Pick one application/service at a time, see what (if anything) is annoying people about it, and fix it. Solving specific issues in specific contexts is not as glamorous as magic general solutions, but guess what — we can solve PII leakage issues in a specific website in a matter of hours and we can come up with tools to detect PII leakages in six months to a year, whereas coming up with a general-purpose solution for all matters of privacy may take too long.
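To make this concrete, here is a toy sketch of what a PII-leakage check for a specific website might look like: scan the requests a page fires at third parties for known pieces of the user’s PII. All domains, names, and values below are hypothetical illustrations, and a real tool would inspect live browser traffic rather than a fixed list of URLs.

```python
from urllib.parse import urlparse

def find_pii_leaks(request_urls, pii_values, first_party="example-shop.com"):
    """Toy PII-leakage check: flag third-party requests whose URL carries
    a known piece of the user's PII (e.g. an email address) in its path
    or query string. `first_party` requests are excluded."""
    leaks = []
    for url in request_urls:
        parsed = urlparse(url)
        if parsed.hostname and parsed.hostname.endswith(first_party):
            continue  # first-party traffic is not considered a leak here
        haystack = (parsed.path + "?" + parsed.query).lower()
        for pii in pii_values:
            if pii.lower() in haystack:
                leaks.append((parsed.hostname, pii))
    return leaks

# Requests observed while loading a (hypothetical) checkout page
requests = [
    "https://example-shop.com/account?email=jane@mail.com",
    "https://tracker.example-ads.net/pixel?uid=jane@mail.com&page=checkout",
]
print(find_pii_leaks(requests, ["jane@mail.com"]))
# → [('tracker.example-ads.net', 'jane@mail.com')]
```

Even a crude check like this, run against the request log of one specific site, is the sort of afternoon-sized fix I have in mind.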

Making tangible progress, even in specific contexts, is good for morale. It’s also the best chance that we have to eventually develop a general solution (if such a thing is possible anyway).

In a follow-up post I’ll touch upon Verifiability, the second idea that I have not seen in most public discussions around data protection.


DTL Award Grants’16 notifications sent

We are done!

6 great new proposals selected to receive funding this year. The list and details are coming online at the Data Transparency Lab web site in the coming days.

Congratulations to the winning proposals and a big thanks to all applicants, the selection committee, my co-chair Dr Balachander Krishnamurthy, and the members of DTL’s board.

New transparency software on its way!


54 proposals in DTL Award Grants’16

This past weekend marked the deadline for submitting proposals for the second DTL Award Grants. We received 54 proposals in total — US (19), EU (23), Asia/Oceania (3), and joint teams (9). Time for the committee to get busy selecting the best ones. More great transparency software is on its way to complement our first batch from last year.

Cows, Privacy, and Tragedy of the Commons on the Web

As part of my keynote during the inaugural workshop of the Data Transparency Lab (Nov 20, 2014, Barcelona) I hinted that a Tragedy of the Commons around privacy might be the greatest challenge and danger for the future sustainability of the web and the business models that keep it going. With this note I would like to elaborate on this statement and maybe explain why my slides were full of happy, innocent looking cows.

What is the Tragedy of the Commons?

According to Wikipedia:

The tragedy of the commons is an economic theory by Garrett Hardin, which states that individuals acting independently and rationally according to each’s self-interest behave contrary to the best interests of the whole group by depleting some common resource. The term is taken from the title of an article written by Hardin in 1968, which is in turn based upon an essay by a Victorian economist on the effects of unregulated grazing on common land.

In the classical Tragedy of the Commons, individual cattle farmers acting selfishly keep releasing more cows onto a common parcel of land despite knowing that a disproportionate number of animals will eventually deplete the land of all grass and inevitably drive everyone out of business. All farmers share this common knowledge but still do nothing to avoid the obvious impending disaster. For an explanation of this “paradox” one has to consider human selfishness and self-illusion.

Selfishness dictates that it is better for a farmer to reap the immediate benefits of having more cows, diverting the damage to others and pushing the consequences to the future. Self-illusion refers to the utopian belief that he can keep accumulating cows without ever facing the tragedy because, miraculously, others will self-restrain and reduce the size of their respective herds, thereby saving the field from depletion. Unfortunately, everyone thinks alike and thus, eventually, the field is overgrazed to destruction.

Are there any cows on the Web?

There are several.

Not only in .jpeg, .gif or .tiff but also in other formats that, unlike the aforementioned graphics standards, can lead to (non-grass-related) tragedies. In my talk I hinted at the following direct analogy between the fictitious cow-related metaphor and the very real public concern around the erosion of privacy on the web.

Farmer: A company having a business model around the monetization of personal information of users. This includes online advertisers, recommenders, e-commerce sites, data aggregators, etc.

Cow:  A technology for tracking users online without their explicit consent or knowledge. Tracking cookies in browsers and apps, analytics code, browser and IP fingerprinting, leakage of Personally Identifiable Information (PII), etc.

Grass:  The trust that we as individuals have in the web, or more accurately, our hope and expectation that the web and its free services are doing “more good than bad.”

The main point here is that if the aforementioned business models (farmers) and technologies (cows) eat away user trust (grass) faster than its replenishment rate (free services that make us happy), then at some point the trust will be damaged beyond repair and users … will just abandon the web. What’s even worse, such loss of trust can be caused by the actions of a minority of companies (even small ones) that, by engaging in questionable and offensive data collection practices, may harm the entire industry, including the majority of companies that are sensitive to users’ privacy requirements.

As extreme as it may sound that users may one day abandon the web for another medium, the reader needs to be reminded that other immensely popular media have been dethroned in the past. Print newspapers are nowhere near as popular as they used to be in, say, the 30’s. Broadcast television is nowhere near its prominence in the 60’s (think the moon-landing, JFK’s assassination, etc.).

The signs of a quickly decaying public trust in the web are already here.

– More than 60% of web traffic was recently measured to be travelling over encrypted connections (HTTPS), and all reports agree that the trend is accelerating.

–  AdBlock Plus is the #1 add-on for both Chrome and Firefox with close to 50 million users and a 41% annual growth in the last year. Other browser or mobile app marketplaces are heavily populated with anti-tracking add-ons and services.

–  Mainstream press is increasingly covering the topic on front pages and prime time, sometimes revealing truly shocking news.

–  Regulators and privacy activists on both sides of the Atlantic are mobilizing to address privacy related challenges.

If ignored, the mounting concern around online privacy and tracking on the web can lead to mass adoption of tracking and advertisement blocking tools. Removing advertising profits from the web directly leads to the demise of free services that we currently take for granted.

This negatively impacts innovation, investment in services and network infrastructure, tech employment, etc.

Last but not least, let’s not forget that advertising and recommendations are desired and appreciated by most people, so long as they do not cross any red lines in terms of privacy.

What constitutes a red line may change from person to person but certain categories are obvious candidates (health, sexual preference, political or religious beliefs).

In a recent study we have shown that it is possible to detect Online Behavioral Advertising (OBA) driven by personal data, including very sensitive categories. Our methodology is based on training artificial “personas”, i.e., clean web browsers on freshly installed operating systems, which we use to imitate human users interested in particular categories, and then testing whether these categories are targeted on websites that the persona visits. Surprisingly (or maybe not) we found strong evidence that even very sensitive categories were indeed targeted (see slide 30 here for a list).
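A rough sketch of the detection idea (an illustration, not the actual study code): after a trained persona and a clean control browse the same neutral pages, compare how often each ad category shows up for each. The threshold and smoothing below are assumptions chosen for the example.

```python
from collections import Counter

def targeted_categories(trained_ads, control_ads, ratio_threshold=2.0, min_count=3):
    """Flag ad categories that a trained persona sees much more often than
    a clean control persona does on the same neutral pages. A category is
    'targeted' when the trained persona saw it at least `min_count` times
    and at least `ratio_threshold` times more often than the control."""
    trained = Counter(trained_ads)
    control = Counter(control_ads)
    flagged = []
    for cat, n in trained.items():
        if n < min_count:
            continue
        # Add-one smoothing so categories absent from the control still compare
        base = control.get(cat, 0) + 1
        if n / base >= ratio_threshold:
            flagged.append(cat)
    return sorted(flagged)

# Ad categories observed while both personas browsed the same news sites
trained = ["health"] * 6 + ["cars", "travel", "health", "cars"]
control = ["cars"] * 4 + ["travel"] * 3 + ["health"]
print(targeted_categories(trained, control))  # → ['health']
```

The real methodology involves many more controls (fresh operating systems, repeated runs, statistical tests), but the core comparison has this shape.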

Is there something we can do to avoid a tragedy of the commons around privacy?

“Sunlight is the best disinfectant”

The famous quote of American Supreme Court Justice Louis Brandeis may have found yet another application in dealing with the privacy challenges of the web.

Despite the buzz around the topic, the average citizen is in the dark when it comes to issues relating to how his personal information is gathered and used online without his explicit authorization.

A few years ago we demonstrated that Price Discrimination seems to have already crept into e-commerce. This means that the price that one sees in his browser for a product or service may be different than the one observed at the same time by a user in a different location.

Even at the same location, the personal traits of a user, such as his browsing history, may impact the price offered.

To permit users to test for themselves whether they are being subjected to price discrimination we developed (the price) $heriff, a simple-to-use browser add-on that shows, in real time, how the price seen by a user compares with the prices seen by other users or fixed measurement proxies around the world.
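The core comparison behind such a tool can be sketched in a few lines (an illustration, not $heriff’s actual implementation): take the prices fetched for the same product by measurement proxies elsewhere and report how far the user’s price deviates from their median.

```python
from statistics import median

def price_deviation(user_price, proxy_prices):
    """Compare the price a user sees against prices fetched for the same
    product by measurement proxies in other locations. Returns the relative
    deviation from the median proxy price (positive = user pays more)."""
    baseline = median(proxy_prices)
    return (user_price - baseline) / baseline

# Prices (hypothetical, in EUR) for the same hotel room from five proxies
proxies = [100.0, 102.0, 99.0, 101.0, 100.0]
dev = price_deviation(115.0, proxies)
print(f"{dev:+.1%}")  # → +15.0%
```

Using the median rather than the mean keeps one outlier proxy (a stale cache, a currency glitch) from distorting the baseline.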

Researchers at Columbia UniversityNortheastern University, and INRIA have, in a similar spirit, developed tools and methodologies that permit end users to test whether the advertisements or recommendations they received have been specifically targeted at them, or if they are just random or location dependent.

Tools like $heriff and X-ray improve the transparency around online personal data. This has manifold benefits for all involved parties:

– End users can exercise choice and decide for themselves whether they want to use ad blocking software and when.

– Advertising and analytics companies can use the tools to self-regulate and prove that they abstain from practices that most users find offensive.

– Regulators and policy makers can use the tools to obtain valuable data that point to the real problems and help in drafting the right type of regulation for a very challenging problem.

Transparency has in the past proved to be quite effective in steering the Internet in the right direction. Indeed, almost a decade ago the Network Neutrality debate was ignited by reports that some Telcos were using Deep Packet Inspection (DPI) equipment to delay or block certain types of traffic, such as peer-to-peer (P2P) traffic from BitTorrent and other protocols. Lost somewhere among scores of public statements and discussions, groups of computer scientists started building simple-to-use tools to check whether a broadband connection is being subjected to P2P blocking. Similarly, tools were built to test whether a broadband connection matches the advertised speed or not. These tools made it very simple for end users to understand technical details about their broadband connection that would otherwise be far beyond their reach, and seem to have created the right incentives for Telcos to avoid such practices.
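The heart of such a broadband check is tiny (a simplified illustration; real tools control for many more variables, such as cross traffic and server load): measure the throughput of a bulk transfer and compare it against the advertised rate.

```python
def meets_advertised(bytes_received, seconds, advertised_mbps, tolerance=0.8):
    """Crude speed check in the spirit of those broadband tools: compute
    the measured throughput of a bulk download and flag the connection if
    it achieves less than `tolerance` (here 80%) of the advertised rate.
    The 80% tolerance is an illustrative assumption, not a standard."""
    measured_mbps = (bytes_received * 8) / (seconds * 1_000_000)
    return measured_mbps >= tolerance * advertised_mbps, measured_mbps

# A 60 MB download that took 10 seconds on a "50 Mbps" plan
ok, mbps = meets_advertised(60_000_000, 10.0, 50.0)
print(ok, round(mbps, 1))  # → True 48.0
```

What made the Net Neutrality era tools effective was not sophistication but exactly this kind of one-number answer that any subscriber could act on.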

In a similar way, we believe that the development of transparency tools around privacy and data protection can only help the Internet ecosystem move again in the right direction. For this reason we founded in November 2014 The Data Transparency Lab, with the mission to develop software tools that shed light on data collection and processing on the Internet, and by doing so make sure that the previously described tragedy of the commons stays a metaphor and does not become reality.