By Ben Rapp
You’ve probably seen the Facebook data lineage document that’s been leaked, or if not, you’ll have read the Vice article about it. The question, of course, is what can we learn from these disclosures? While I think most of us have always been sceptical about Facebook’s commitment to privacy, our suspicions are clearly confirmed by the evidence that not only are there no proper controls over the use of data, but there is limited if any knowledge inside Facebook about how data are used. An estimate of 450–600 engineering person-years (and at least three, but more credibly “many”, calendar years) to implement compliance with the basic requirements for withdrawal of consent, and for objection under Article 21 to processing on the basis of legitimate interest, is damning enough in itself.
But what’s really disturbing about the paper, ostensibly written by a “privacy engineer”, is what it doesn’t say. The need for this gargantuan engineering project is driven solely by regulatory pressure – both privacy and competition – not by any ethical commitment to the value of privacy or the rights of data subjects. There’s no sense from the paper that the situation in which Facebook finds itself – 350,000 endpoints using a semi-documented API in a complete free-for-all of data ingestion and recombination – represents any kind of moral failure, or that the company should do more than try to minimise regulatory impact.
Equally worrying is the paper’s silence on the other rights that data subjects might exercise. The proposal is to provide filtering, at some internal base layer of data, to exclude data subjects who have objected or withdrawn consent. There’s no effort to improve transparency – after all, if Facebook doesn’t know how it uses data, how can it tell its data subjects? And no thought is given to rectification or erasure – the data will be filtered, not altered. It sounds very much as though “deleting Facebook”, while it may remove your visible presence from their app, does nothing to remove your historical data from their ad infrastructure or their underlying data lakes.
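The distinction matters, and a minimal sketch makes it concrete (the store, field names and filtering logic here are hypothetical illustrations, not drawn from the leaked paper): a consent filter suppresses records at read time, while the underlying store keeps them intact; only erasure actually removes them.

```python
# Hypothetical illustration of filter-versus-erase semantics.
# Records suppressed by a consent filter still exist in the store;
# only erasure removes them.

records = [
    {"subject_id": "alice", "event": "ad_click"},
    {"subject_id": "bob", "event": "page_view"},
]
opted_out = {"alice"}

def filtered_view(store):
    """Read-time filtering: opted-out subjects are hidden, not gone."""
    return [r for r in store if r["subject_id"] not in opted_out]

def erase(store, subject_id):
    """Erasure: the data are actually removed from the store."""
    return [r for r in store if r["subject_id"] != subject_id]

print(len(filtered_view(records)))  # 1 -- alice is invisible downstream
print(len(records))                 # 2 -- but her data still exist
records = erase(records, "alice")
print(len(records))                 # 1 -- only erasure removes the data
```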
Bashing Facebook is easy, however; the company remains the poster child for poor privacy. What can you do to do better in your own organisation? It’s easy to say that the first step is to make an ethical commitment to use data appropriately, and wherever possible only when there is a demonstrable benefit to the data subject. Delivering on this commitment is harder. It starts with properly mapping your processing – know what data you have, where they came from, where they go, and how you use them; this is Facebook’s first failure.
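A data map need not be elaborate to be useful. As a minimal sketch (the field names are illustrative assumptions, loosely modelled on an Article 30 record of processing rather than any prescribed schema), each entry should answer the four questions above:

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    """One entry in a data map: what, whence, whither, and why."""
    data_items: list[str]     # what data you hold
    source: str               # where they came from
    destinations: list[str]   # where they go (systems, processors)
    purpose: str              # how and why you use them
    lawful_basis: str         # e.g. consent, legitimate interest
    retention: str            # how long you keep them

crm_emails = ProcessingRecord(
    data_items=["name", "email"],
    source="newsletter sign-up form",
    destinations=["CRM", "email service provider"],
    purpose="sending the monthly newsletter",
    lawful_basis="consent",
    retention="until consent is withdrawn",
)
```

Even a register this simple would have told Facebook where its data went; the point is not the tooling but that every processing activity has an entry before it ships.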
The second step is to embed privacy into your design process, so that neither compliance nor ethical behaviour becomes an expensive retro-fit. Privacy by design is not new – Facebook has no excuse here (see recital 46 of the 1995 European Data Protection Directive) – but as systems become more complex and we are all encouraged to think of data as an asset class to be exploited, we must become much more vigilant in integrating privacy compliance into our designs.
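One way to make that embedding concrete (a sketch under assumed names; the specific API is invented for illustration, the enforcement pattern is the point) is to gate every use of personal data behind a declared purpose, so an undeclared use fails at development time rather than surfacing years later in an audit:

```python
# Hypothetical sketch of purpose limitation enforced in code: a new
# use of a field must be declared before it will run at all.

ALLOWED_PURPOSES = {
    "email": {"newsletter", "account_recovery"},
    "date_of_birth": {"age_verification"},
}

def use(field_name: str, purpose: str, value):
    """Gate every read of personal data behind a declared purpose."""
    if purpose not in ALLOWED_PURPOSES.get(field_name, set()):
        raise PermissionError(
            f"{field_name!r} is not approved for purpose {purpose!r}"
        )
    return value

use("email", "newsletter", "a@example.com")  # fine: declared purpose

try:
    use("email", "ad_targeting", "a@example.com")
except PermissionError as e:
    print(e)  # undeclared use fails fast instead of shipping silently
```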
And the third, and most important, is to regard transparency as both a guiding principle and a marketing tool. Instead of trying to hide your use of data behind legalistic privacy notices lurking in your website footer, shout from the rooftops about your trustworthy stewardship. Remember that the data do not belong to you; they belong to the data subject, and have been entrusted to you in the expectation that you will treat both the data and their proprietor with respect and consideration.
The lesson from our Privacy Made Positive research is that people not only care about their privacy, they act on those concerns. Whether you want to improve purchasing propensity, decrease basket abandonment rates, raise net promoter scores or increase staff retention, better privacy practices will pay dividends.