The role and importance of Context and Verifiability in Data Protection

Over the last 18 months I’ve been attending a Data Protection/Privacy event almost every month. It has been a pretty rewarding experience; one that is very different to the usual round of CS conferences that I have been following for the better part of my career.

I’ve been listening to policy makers, lawyers, marketeers, journalists, and occasionally engineers, discussing and debating the perils of the “erosion of privacy”, the measures to be taken, and the need to find a balance between innovation and growth on one side, and civil rights on the other.

In addition to the events that I have attended myself, I have also read several reports on the outcomes of other consultations on these topics (for example the “bridges” and “shifts” reports). With this post I would like to discuss two issues that have been spinning in my head since the earliest days of my involvement with privacy and data protection. I am sure these thoughts must have occurred to others as well, but I haven’t seen them spelled out clearly; hence this post.

Context (or lack of)

I’ve always enjoyed discussing abstract ideas — fairness, transparency, reputation, information, privacy. There’s something inherently tempting in discussing such abstract notions (I’ll try to avoid using the “ph” word). Maybe it is the hope that a breakthrough at this abstract layer will automatically solve innumerable specific and practical problems relating to each one of these abstract ideas. Whoever makes such a contribution certainly has a claim on (and a chance at) immortality.

I am tempted to believe that this might be the underlying reason why the vast majority of the discussions that I have attended stay at this very high, very abstract level. “A general solution to the privacy issue”, “the value of private information”, “the danger from privacy leakage”. All these statements provide good and natural starting points for debates in the area. But to make a founded argument, and hopefully reach some useful conclusion, one that stands a chance of having an impact on real-world technologies and services, you need a handle, something concrete enough to build upon. I call this “Context”. My main point here is that most discussions that I have attended stay at a very abstract level and thus lack concrete Context.

Having Context can improve many of our discussions and lead to tangible results faster and more easily. If such tangible results don’t start showing up in the foreseeable future, it’s only natural to expect that everyone will eventually get fed up, become bored and exhausted, and forget about the whole privacy and data protection matter altogether. So why don’t we start interleaving some more grounded discussions with our abstract ones? Pick one application or service at a time, see what (if anything) is annoying people about it, and fix it. Solving specific issues in specific contexts is not as glamorous as magic general solutions, but guess what — we can solve PII leakage issues on a specific website in a matter of hours, and we can come up with tools to detect PII leakage in six months to a year, whereas a general-purpose solution to all matters of privacy may take too long.

Making tangible progress, even in specific contexts, is good for morale. It’s also the best chance we have to eventually develop a general solution (if such a thing is possible anyway).

In a follow-up post I’ll touch upon Verifiability, the second idea that I find missing from most public discussions around data protection.


Mathematical fixed points, scale invariance, and poetry

I know a tiny bit of mathematics and even less of poetry, but I recognise fixed points and scale invariance when I read them:

What is Good? What is Evil?:

“A point A point
and on it you find balance and exist
and beyond it turmoil and darkness
and before it the roar of angels

A point A point
and on it you can progress infinitely
otherwise, nothing else exists any more”

And the Scales which, stretching my arms,
seemed to balance light and instinct, were

this small world the great!

“Axion Esti”
Odysseus Elytis

O. Elytis
B. Mandelbrot
L. E. Brouwer

Thoughts on reviewing and selection committees


At last, after a very intense month of running DTL Award Grants’16, I can sit back on the balcony, have a coffee, and relax without getting Comment Notifications from HotCRP, or urgent emails every 2 minutes from my super-human co-chair Dr Balachander Krishnamurthy (aka bala ==> Spanish translation ==> bullet … not a coincidence if you ask me, or if you’ve been in my shoes).

We’ve selected 6 wonderful groups to fund for this year, the work is done, the notifications have been sent, and the list of winners is already online.

So what am I doing writing about DTL Grants on my first calm Saturday morning after a month? Well, I guess I cannot disconnect yet. But there’s more to it. Having done this for a second year in a row I am starting to see things that were not apparent before, despite the numerous committees on which I have served in the past. So I guess this is a log of things that I have learned in these two years. In no particular order:

Selection CAN be lean and fast

We asked applicants to describe their idea for transparency software in 3 pages. We asked each TPC member to review 9 submissions. Given the length of the proposals we estimated this to be a day’s worth of work. Additional time was set aside for online discussions, but these lasted only 4 days, while the final TPC phone meeting (no need to fly to the other side of the world) lasted **exactly** 1 hour. Last but not least, we announced the winners 5 weeks after receiving the submissions.

Why and What vs. How

Anyone with some experience of either paper selection (SIGCOMM, etc.) or grant selection (H2020, FP7, etc.) committee work surely understands that the above numbers are far from usual. The load we put on both the applicants and the committee was modest, and we returned the results in record time. Assuming that the produced result is of high quality (it is, but I’ll come to this next), the natural question is “what’s the trick?”.

Well, if you haven’t already noticed, the answer is in the heading above, in bold. We focused on the “Why” and the “What” instead of the “How”. Technical people love the How. It’s what got them their PhD and professional recognition, and what they actually love doing and are great at — finding smart solutions to difficult problems. But the mission of the committee was not to have in-depth How-discussions. We do that all the time and in various contexts. What we wanted was to hear from the community about important problems at the intersection of transparency/privacy/discrimination (the Why) and ideas about new software to address those problems (the What). The How would be nice to know in detail, but this would come at a high cost in terms of workload for both the applicants and the committee. And remember, we wanted to run a lean and fast process. So we set the bar low on the How. You just needed to show that it is doable and that you have the skills to do it, and we would trust that you know How.

Good enough vs. perfect rank

OK, let’s be clear about this — our ranking is NOT perfect. There are several reasons for this: we cannot define perfect; even if we could, we would probably be fooling ourselves with the definition; we cannot understand perfect; we don’t have enough time or energy for perfect; we are not perfect ourselves; and, last but not least … we don’t need perfect. Yes, we don’t need perfect, because our job is not to bring perfection and justice to an otherwise imperfect and unjust world. This is not what we set out to do.

What we set out to do is to find proposals addressing interesting transparency problems and to maximise the probability that they actually deliver some useful software in the end. This means that our realistic objective was to select **good enough proposals**. Thus our biggest priority was to quickly identify proposals that did not target an interesting enough problem, or that failed to convince us that they would deliver useful software. Notice here that most proposals that failed on the latter did so because they didn’t even bother to explain the “What?”. In no case that I remember was there a rejection due to the “How?”, but there were several because it was not clear what would be delivered in the end.

Having filtered those out, we were left with several good enough proposals. From there to final acceptance, little things could make a big difference in terms of outcome. Things like “has this already been funded?”, “would I install/use this software?”, “is this better suited to a scientific paper than a Grant?”, “is this something that would appeal to non-researchers?”. We did our best to make the right calls, but there should be no doubt that the process is imperfect. Given our objectives, however (1. good enough vs. perfect, and 2. lean vs. heavy), I believe we remained faithful to our principles.

I’ll stop here by thanking once more all the applicants, the committee, Bala, and the board for all their hard work.

This has been fun and I am very optimistic that it will also be useful.

See you next year.


DTL Award Grants’16 notifications sent

We are done!

6 great new proposals selected to receive funding this year. The list and details are coming online at the Data Transparency Lab website in the coming days.

Congratulations to the winning proposals, and a big thanks to all applicants, the selection committee, my co-chair Dr Balachander Krishnamurthy, and the members of DTL’s board.

New transparency software on its way!


54 proposals in DTL Award Grants’16

This past weekend marked the deadline for submitting proposals for the second DTL Award Grants. We received 54 proposals in total — US (19), EU (23), Asia/Oceania (3), and joint teams (9). Time for the committee to get busy selecting the best ones. More great transparency software is on its way to complement our first batch from last year.

Cows, Privacy, and Tragedy of the Commons on the Web

As part of my keynote during the inaugural workshop of the Data Transparency Lab (Nov 20, 2014, Barcelona) I hinted that a Tragedy of the Commons around privacy might be the greatest challenge and danger for the future sustainability of the web and the business models that keep it going. With this note I would like to elaborate on this statement and maybe explain why my slides were full of happy, innocent looking cows.

What is the Tragedy of the Commons?

According to Wikipedia:

The tragedy of the commons is an economic theory by Garrett Hardin, which states that individuals acting independently and rationally according to each’s self-interest behave contrary to the best interests of the whole group by depleting some common resource. The term is taken from the title of an article written by Hardin in 1968, which is in turn based upon an essay by a Victorian economist on the effects of unregulated grazing on common land.

In the classical Tragedy of the Commons, individual cattle farmers acting selfishly keep releasing more cows onto a common parcel of land despite knowing that a disproportionate number of animals will eventually deplete the land of all grass and inevitably drive everyone out of business. All farmers share this common knowledge but still do nothing to avoid the obvious impending disaster. For an explanation of this “paradox” one has to consider human selfishness and self-illusion.

Selfishness dictates that it is better for a farmer to reap the immediate benefits of having more cows, diverting the damage to others and pushing the consequences into the future. Self-illusion refers to the utopian belief that he can keep accumulating cows without ever facing the tragedy because, miraculously, others will self-restrain and reduce the size of their respective herds, thereby saving the field from depletion. Unfortunately, everyone thinks alike and thus, eventually, the field is overgrazed to destruction.
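The dynamics above are easy to see in a toy simulation. All the numbers below (regrowth rate, consumption per cow, herd growth) are illustrative assumptions, not data from any real study:

```python
# Toy simulation of the tragedy of the commons: every farmer selfishly
# adds a cow each year while the field regrows at a fixed rate.

def simulate(farmers=5, years=30, grass=1000.0, regrowth=100.0, eat_per_cow=10.0):
    """Return the year in which the commons collapses, or None if it
    survives the simulated horizon."""
    cows = [1] * farmers
    for year in range(years):
        cows = [c + 1 for c in cows]      # everyone adds a cow (selfishness)
        grass += regrowth                  # the field replenishes
        grass -= eat_per_cow * sum(cows)   # the growing herd grazes
        if grass <= 0:
            return year + 1                # the commons is depleted
    return None

print(simulate())  # → 7: with these numbers the field collapses in year 7
```

Tweaking the parameters changes *when* the collapse happens, but as long as the herd keeps growing faster than the grass, it always happens — which is the point of the metaphor.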

Are there any cows on the Web?

There are several.

Not only in .jpeg, .gif or .tiff, but also in other formats that, unlike the aforementioned graphics standards, can lead to (non-grass-related) tragedies. In my talk I hinted at the following direct analogy between the fictitious cow metaphor and the very real public concern around the erosion of privacy on the web.

Farmer: A company having a business model around the monetization of personal information of users. This includes online advertisers, recommenders, e-commerce sites, data aggregators, etc.

Cow:  A technology for tracking users online without their explicit consent or knowledge. Tracking cookies in browsers and apps, analytics code, browser and IP fingerprinting, leakage of Personally Identifiable Information (PII), etc.

Grass:  The trust that we as individuals have in the web, or more accurately, our hope and expectation that the web and its free services are doing “more good than bad.”

The main point here is that if the aforementioned business models (farmers) and technologies (cows) eat away user trust (grass) faster than its replenishment rate (free services that make us happy), then at some point the trust will be damaged beyond repair and users … will just abandon the web. What’s even worse, such loss of trust can be caused by the actions of a minority of companies (even small ones) that by engaging in questionable and offending data collection practices may harm the entire industry, including a majority of companies that are sensitive to users’ privacy requirements.

As extreme as it may sound that users may one day abandon the web for another medium, the reader needs to be reminded that other immensely popular media have been dethroned in the past. Print newspapers are nowhere near as popular as they used to be in, say, the 30’s. Broadcast television is nowhere near its prominence in the 60’s (think the moon-landing, JFK’s assassination, etc.).

The signs of quickly decaying public trust in the web are already here.

– More than 60% of web traffic was recently measured going over encrypted HTTPS, and all reports agree that the trend is accelerating.

–  Adblock Plus is the #1 add-on for both Chrome and Firefox, with close to 50 million users and 41% growth over the last year. Other browser and mobile app marketplaces are heavily populated with anti-tracking add-ons and services.

–  Mainstream press is increasingly covering the topic on front pages and prime time, sometimes revealing truly shocking news.

–  Regulators and privacy activists on both sides of the Atlantic are mobilizing to address privacy related challenges.

If ignored, the mounting concern around online privacy and tracking on the web can lead to mass adoption of tracking and advertisement blocking tools. Removing advertising profits from the web directly leads to the demise of free services that we currently take for granted.

This negatively impacts innovation, investment in services and network infrastructure, tech employment, and more.

Last but not least, let’s not forget that advertisement and recommendation is something desired and appreciated by most people so long as it does not cross any red lines in terms of privacy.

What constitutes a red line may change from person to person but certain categories are obvious candidates (health, sexual preference, political or religious beliefs).

In a recent study we showed that it is possible to detect Online Behavioral Advertising (OBA) driven by personal data, including very sensitive categories. Our methodology is based on training artificial “personas”, i.e., clean web browsers on freshly installed operating systems, which we use to imitate human users interested in particular categories, and then testing whether these categories are targeted on websites that the persona visits. Surprisingly (or maybe not), we found strong evidence that even very sensitive categories were indeed targeted (see slide 30 here for a list).
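The core of the persona comparison can be sketched in a few lines. This is a deliberately simplified illustration, not our actual pipeline: real experiments drive live browsers, and here the personas are just hypothetical logs of ad categories observed on control pages, compared with a crude lift heuristic:

```python
# Sketch: flag ad categories shown to a "trained" persona noticeably
# more often than to a fresh control persona (simplified lift test).
from collections import Counter

def targeted_categories(trained_ads, control_ads, min_lift=2.0, min_count=5):
    """Return categories over-represented in the trained persona's ads."""
    trained, control = Counter(trained_ads), Counter(control_ads)
    flagged = []
    for cat, n in trained.items():
        baseline = control.get(cat, 0) + 1   # +1 smooths zero counts
        if n >= min_count and n / baseline >= min_lift:
            flagged.append(cat)
    return sorted(flagged)

# A persona "trained" on health sites sees mostly health-related ads,
# while a clean control persona sees a generic mix (made-up data):
trained = ["health"] * 8 + ["cars"] * 2 + ["travel"]
control = ["cars"] * 3 + ["travel"] * 2 + ["health"]
print(targeted_categories(trained, control))  # → ['health']
```

In the real study the statistics are of course more careful, but the idea is the same: a category that follows the persona across unrelated sites is evidence of behavioral targeting.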

Is there something we can do to avoid a tragedy of the commons around privacy?

“Sunlight is the best disinfectant”

The famous quote of Louis Brandeis, the American Supreme Court Justice, may have found yet another application in dealing with the privacy challenges of the web.

Despite the buzz around the topic, the average citizen is in the dark when it comes to how his personal information is gathered and used online without his explicit authorization.

A few years ago we demonstrated that Price Discrimination seems to have already crept into e-commerce. This means that the price that one sees on his browser for a product or service may be different from the one observed at the same time by a user in a different location.

Even at the same location, the personal traits of a user, such as his browsing history, may impact the price offered.

To permit users to test for themselves whether they are being subjected to price discrimination, we developed (the price) $heriff, a simple-to-use browser add-on that shows, in real time, how the price seen by a user compares with the prices seen by other users or fixed measurement proxies around the world.
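The comparison at the heart of such a tool is simple. A minimal sketch, with made-up prices standing in for the live measurement proxies the real add-on would query:

```python
# Sketch of the price-comparison idea: how does the price I see
# deviate from the median price seen at other vantage points?
from statistics import median

def price_deviation(my_price, peer_prices):
    """Percentage deviation of my price from the peers' median price."""
    ref = median(peer_prices)
    return 100.0 * (my_price - ref) / ref

# Hypothetical prices observed for the same product elsewhere:
peers = [99.0, 101.0, 100.0, 98.5, 100.5]
print(f"{price_deviation(112.0, peers):+.1f}%")  # → +12.0% (I pay more)
```

The median is a deliberate choice here: it keeps a single outlier vantage point from skewing the baseline.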

Researchers at Columbia University, Northeastern University, and INRIA have, in a similar spirit, developed tools and methodologies that permit end users to test whether the advertisements or recommendations they receive have been specifically targeted at them, or whether they are just random or location dependent.

Tools like $heriff and X-ray improve the transparency around online personal data. This has multifold benefits for all involved parties:

– End users can exercise choice and decide for themselves whether they want to use ad blocking software and when.

– Advertising and analytics companies can use the tools to self regulate and prove that they abstain from practices that most users find offensive.

– Regulators and policy makers can use the tools to obtain valuable data that point to the real problems and help in drafting the right type of regulation for a very challenging problem.

Transparency has in the past proved quite effective in steering the Internet in the right direction. Indeed, almost a decade ago the Network Neutrality debate was ignited by reports that some Telcos were using Deep Packet Inspection (DPI) equipment to delay or block certain types of traffic, such as peer-to-peer (P2P) traffic from BitTorrent and other protocols. Somewhere among scores of public statements and discussions, groups of computer scientists started building simple-to-use tools to check whether a broadband connection is being subjected to P2P blocking. Similarly, tools were built to test whether a broadband connection delivers the advertised speed. These tools made it very simple for end users to understand technical details about their broadband connection that would otherwise be far beyond their reach, and seem to have created the right incentives for Telcos to avoid such practices.

In a similar way, we believe that the development of transparency tools around privacy and data protection can only help the Internet ecosystem move again in the right direction. For this reason, in November 2014 we founded The Data Transparency Lab, with the mission to develop software tools that shed light on data collection and processing on the Internet, and by doing so make sure that the previously described tragedy of the commons stays a metaphor and does not become reality.