Privacy wars - personal data and the social contract

Gaining control of privacy is not going to be easy

If you want the sort of absolute privacy that extreme libertarians espouse, that fight is pretty much lost (unless you live in a cave and have never left it). Most people are more pragmatic: they may not know just how wide their digital footprints really are, but they probably hope that the advantages of interconnectivity and the accessibility of online data resources outweigh the disadvantages.

Or they did until the Snowden ‘revelations’ generated the sort of public debate previously associated with Magna Carta or the Declaration of Independence, and many people shifted again along the continuum between complacency and paranoia.

As you’d expect, the security industry has offered advice, not all of it commercially motivated, but it hasn’t got overexcited. Whatever our political opinions, researchers tend to assume that a degree of surveillance by government and law enforcement agencies is inevitable but limited: no agency has the resources to examine manually everything that passes through the ether.

You may, of course, mistrust the accuracy or appropriateness of the criteria used to determine what should be examined, and it would be stunningly naïve to assume that no agencies or employees have ever misused their powers. But the recent (inconspicuous) revelation by the Information Commissioner’s Office (ICO) that it suffered its very own unspecified breach of the Data Protection Act brings home yet again that those in the public or private sectors who hold our data may not always do so competently. The trade-off is that - properly used and implemented - those powers increase our security. Apparently. But as I find myself saying a lot nowadays, it doesn’t matter how good your password is if the wrong people can just steal the database.

But what about social media, which trade their services for our data and sell it on to companies that want to sell more to us? In this instance, the terms of the social contract are murkier than they may seem. In a market economy, successful selling does tend to keep some people in employment, while better-targeted marketing is less annoying than unremitting irrelevant offers and surveys. However, the trade-off may be more sinister than spam and pop-ups, or even the selling on of private data.

Take Facebook’s recent exercise in experimental psychology: an attempt to influence some of its subscribers’ emotions by manipulating their newsfeeds. Researchers from Facebook and Cornell University teamed up to study whether subtly biasing content as positive or negative changed the corresponding emotions of Facebook users towards that information. It was claimed to show that positive feeds led to positive behaviour on Facebook, and vice versa.

Understandably, the popular reaction to this has been ‘yuk, creepy!’ But why is it so sinister? Because it was unannounced? Well, good ethical practice in experiments with human subjects is to advise them in advance, but that isn’t necessarily an absolute requirement if advance notice would invalidate the results and participants aren’t harmed.

What about the fact that Facebook apparently only mentioned the use of data for research after the experiment? Well, yes. But more to the point, the real value of the research to Facebook was not the (methodologically unsound) results but the demonstration that it is prepared not only to monitor and share its subscribers’ data, but to change their behaviour. You may not mind being on the supermarket shelf: how do you feel about the laboratory test-bed?

David Harley CISSP, ESET Senior Research Fellow and ISC2 member

"Recommended For You"

Facebook nudges users to take control with privacy makeover Facebook privacy policy changes proposed to appease Irish government