
The data-driven world doesn’t run on data: it runs on trust.

News of the recent, large-scale data breach at US health insurer Anthem adds yet another big name to the list of companies that have suffered targeted attacks, compromising the personal details of millions of people. The company itself has reacted quickly with an admission, an apology, and a website with more information for those affected. But what’s the bigger picture, in terms of good practice in data custody?

First and foremost, this is an issue of trust. The data-driven world being what it is, we no longer have an alternative to being digital citizens. If you bank, pay tax, receive social security or healthcare benefits, or use a telephone, you are partly digital… and of course most of us do far more than that in the digital realm every day. No matter how concerned you are about privacy, it just isn’t realistic to expect to withdraw from the digital world, so you have to place your trust in the growing number of organisations who collect, process and share your personal data.

The Internet Society recently published its approach to cybersecurity:

We risk losing the trust of users who have come to depend on the Internet for many of life’s activities. And we believe that we also risk losing the trust of those who have yet to access the benefits of the Internet, thereby discouraging the kind of investment needed to complete the job of connecting everyone in the world.

Data breaches undermine trust, and shake people’s confidence in services from which they often cannot simply withdraw. But the problem is not confined to health insurance, or to commercial organisations, or to the US. There are plenty of examples of poor data custody in the public sector and in other countries around the world. So, what can and should organisations do to ensure that they are the best possible custodians of personal data, are worthy of trust, and – when the worst happens – can rebuild the trust of the individuals whose data they hold?

Be a good data custodian

One lesson about good practice can be drawn from Anthem’s own response to this breach. According to its website, although the breach exposed a worrying set of personal details, the company expressed confidence that other specific datasets, such as claims data, medical information and credit card details, were not compromised. We don’t yet know the details of the attack, but one question, clearly, will be whether the data that was not breached benefited from protection that could have been applied to the data that was. For example, credit card companies insist on specific safeguards under schemes such as PCI DSS (the Payment Card Industry Data Security Standard). Were some of Anthem’s data stores simply easier to access than others? Were the authentication and access controls strong enough?

It would be premature to jump to conclusions in Anthem’s case, but these principles are generally applicable:

  • Compartmentalise data so that the impact of any single breach is limited.
  • Restrict access to data, so that only the right users/roles and applications can unlock it.
  • Increase the strength of authentication required, according to the sensitivity and scope of data accessed.
  • Protect privileged users’ access with particular care.
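These principles lend themselves to a short sketch. The tier names, factor counts and the can_access helper below are illustrative assumptions made up for the example, not a description of Anthem’s (or anyone’s) actual controls:

```python
# Hypothetical sensitivity tiers, mapping each data compartment to the number of
# authentication factors required to unlock it (illustrative values only).
REQUIRED_FACTORS = {
    "public": 1,        # e.g. password alone
    "internal": 1,
    "confidential": 2,  # e.g. password + one-time code
    "restricted": 3,    # e.g. password + hardware token + approval workflow
}

def can_access(tier: str, factors_presented: int, role_allowed: bool) -> bool:
    """Grant access only if the role is entitled AND authentication is strong enough."""
    return role_allowed and factors_presented >= REQUIRED_FACTORS[tier]

# Even a privileged, entitled user needs the strongest authentication
# before the most sensitive compartment opens.
print(can_access("restricted", 2, role_allowed=True))   # False: not enough factors
print(can_access("restricted", 3, role_allowed=True))   # True
print(can_access("internal", 3, role_allowed=False))    # False: role not entitled
```

The design point is that the two checks are conjoined: a correct role with weak authentication fails, just as strong authentication with the wrong role does.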

The recent publicised attacks have targeted data at rest, as opposed to data in motion – but there’s little point securing your databases if you let the same data cross the network in the clear. Data needs protection whether it’s being stored or being sent. Session-level encryption can protect data against exposure while it’s in transit, and encrypting datasets and documents before sending them ensures that they don’t simply fall out of the end of a session-encrypted ‘pipeline’ in a vulnerable form.
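As a minimal sketch of that layering, the toy one-time pad below stands in for a proper authenticated cipher such as AES-GCM; the otp_encrypt helper and the key handling are illustrative assumptions, not a recommended design:

```python
import secrets

def otp_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with a same-length, single-use random key.
    # A real deployment would use an authenticated cipher (e.g. AES-GCM) instead.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

record = b"patient record #1234"
key = secrets.token_bytes(len(record))   # single-use key, managed out of band

# Encrypt the payload BEFORE it enters the session-encrypted 'pipeline', so it
# stays opaque wherever the TLS session terminates (proxies, intermediaries).
ciphertext = otp_encrypt(record, key)

# Only the authorised endpoint, holding the key, can recover the plaintext.
assert otp_encrypt(ciphertext, key) == record
```

XOR-ing twice with the same key restores the plaintext, which is why one helper serves both directions; the point of the sketch is the layering, not the cipher.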

Any data custodian should think carefully about the privacy and security implications of measures that undermine data confidentiality, such as cryptographic “back doors” or “golden keys” to facilitate third-party access.

Remember: if you’re a data custodian and you suffer a breach, you’re not the victim: you’re the route the attacker took to get to the victims. Don’t leave that route open.

Size isn’t important

Although breaches at big, high-profile companies hit the headlines, technology makes it increasingly easy for tiny organisations to accumulate colossal amounts of data. We’re frequently told that big data is the new oil… but who would spend millions finding and extracting oil to make billions from it, but then keep it in a big plastic bucket in the back yard?

To put it bluntly: if your business model is to monetise individuals’ personal data, then protecting your raw material makes sense for you as well as for them.

Prevention is better than cure

Anthem, like many organisations before it, is trying to reassure victims of the breach by offering credit monitoring and identity protection services. But the individuals whose personal data has been compromised still face months, maybe years, of effort and inconvenience to mitigate the resulting risk. They are at greater risk of identity theft, identity fraud, worsened credit ratings, and reputational damage.

It is notoriously difficult to associate a specific data breach with the harm an individual might suffer further down the line – and that difficulty grows with each successive breach. An identity (built up from data like name, address, date of birth and social security number) is not like a credit card. If your credit card is compromised, the fix is simple: the bank cancels it and issues a new one. You can’t do that with your identity, yet we protect credit card data with more care. Does that make any sense?

Where personal data is concerned, the impact of a breach is potentially so irreversible that prevention, rather than cure, must be the priority.

Recognise the real value of personal data

The hidden theme through all these recommendations is this: personal data has value, both to the individual concerned, and to the organisations that collect and process it. Too often, that value is disregarded for the sake of convenience or cost-saving. Identity data deserves just as much protection as credit card data, or medical data.

Every individual whose data you process puts their trust, and to some extent their future, in your hands. Be a safe repository for their data, and a worthy repository for their trust.


Do Big Data and the Internet of Things Spell the End of Privacy As We Know It?

Rajnesh D. Singh (ISOC) and Yoonee Jeong (TRPC) at the “Online Privacy in an Internet of Things World” Roundtable, Bangkok, Thailand (December 2014)

In the last few years, there has been a phenomenal increase in the number of connected devices globally, and we now have more connected devices than people in the world. That number is going to keep growing – exponentially, at least in the short term – as the Internet of Things (IoT) evolves into the Internet of Everything and becomes mainstream in the things we do every day.

These devices are – for the most part – also generating, collecting, and transmitting data, while still other devices are busy analysing and processing it. In the process, vast amounts of data are being collected about pretty much everything about and around us – be it personal data (think of your Fitbit-type device or home automation) or data on things like environmental conditions, traffic flow and industrial processes.

Analysing and processing this data can help us make informed choices and decisions, and help improve how we live as a society at large. Looking into the future, the data we collect today may allow future generations to better innovate, invent, and find solutions to a myriad of situations and problems.

This vast collection of data also means we are – in some cases voluntarily, in some cases not so voluntarily – giving up this data and, some say, our right to privacy along with it. In the past – provided you read, understood and accepted all the terms and conditions (sometimes running to scores of pages of text) associated with a particular service – you consented to give up some data about yourself and your online presence in exchange for the service. In some instances the service was provided to you “free”, in others at a nominal price. All parties involved thus had some clear guidance as to what was being given up, under what circumstances, by whom, and how it would be used – though this has never been, and continues not to be, an exact science by any stretch of the imagination.

In this bold new IoT-Big Data everything-connected world, vendors and service providers have the ability to tailor their offerings to consumers as well as all parts of the value chain; and it allows potentially greater efficiency and productivity all around.

But at what cost?

Rapid advances in data analytics capabilities mean that the ability to identify, connect and mine personal information from aggregated data is far easier to come by than it has ever been.

There are already concerns about lapses in existing privacy and data protection policies, as well as security breaches. Coupled with the various mass surveillance programmes that have come to light in the recent past, these concerns become further amplified – and more so when we throw Big Data and IoT into the mix.

The very concept of privacy has also been evolving. Personal privacy as practiced by our parents and grandparents is very different from ours, and different again for the next, always-connected, digital-native generation.

Big Data and the Internet of Things do not necessarily mean the end of privacy. Personal data protection laws have generally followed the OECD template, which requires the data controller to seek the explicit permission of the data subject for the collection and use of their data. In the Big Data-IoT world, however, data controllers may not be able to fully deliver on those commitments.

Perhaps an alternative approach that may be useful in such circumstances would be to look at the ways in which the data controller uses the data. This may include the efforts taken to protect the data and the transparency of the processes used. Another important component would be the user’s ability to understand the data privacy policies in place – in plain language rather than pages and pages of legalese – together with an accompanying assessment of the risks involved.

The global, borderless nature of the Internet means that data can be stored, processed and analysed pretty much anywhere – which calls for greater effort and commitment in harmonising personal data protection policies across jurisdictions. Doing so would also potentially reduce the costs of compliance and the likelihood of breaches of local laws.

We have already seen the line between voluntary and involuntary sharing of information blurring rapidly with e-commerce and social media. With Big Data and IoT, the kinds and depth of personal information that can (and will) be collected by operators and businesses will only increase, and this calls for some effort towards enabling individuals to manage the online behavioural information they reveal. The transparent and ethical use of collected data should be the norm, and the collectors of such data must ensure privacy by default; otherwise users and policymakers will always have to stay vigilant and play catch-up. At the same time, other drivers – such as data anonymisation technologies, regulatory forces and business incentives – could help strike the balance needed to ensure the full potential of Big Data and IoT is realised while protecting the most important part of the Internet ecosystem: the user.



Big Data: Big Questions

Whatever session you attend at IGF this year, chances are you’ll hear all about “Big Data.” What it means depends on which room you’re in at the time and who’s doing the talking. From themes of content creation to disaster relief to privacy, security and everywhere in between, the buzzword “Big Data” is to 2013 what “Cloud Computing” was to IGF a couple of years back. Big Data is the new black, but when data is involved things are never really black and white. It was raised as a key issue at the Opening Session, and the discussions of IGF 2013 provide a good opportunity to go deeper.

Big Business? Big Brother? Big Opportunities? What will big data mean for us all? These were the questions posed on Day Two of the Internet Governance Forum 2013 by Workshop 203 – Big Data: Promoting development and safeguarding privacy – as it took a look behind the buzz at the dimensions of big data.

There are incredible benefits to be leveraged from aggregate and anonymised data. From the subtleties of consumer personalisation to finding endemic and epidemic trends in healthcare or disaster prevention, the availability of data can be a powerful tool. The adage (or cliché) that knowledge is power still pertains, and the panel raised some powerful possibilities for how Big Data could be applied to provide very real benefits to people’s lives.

As with all discussions on the subject of power, however, data collection and its use has huge implications, as Lynn St Amour candidly raised in her IGF opening address confronting the aftermath of the revelations of how data is being used by governments. WS 203 asked where the line lies between personalisation and discrimination. Profiling was an issue of major concern: through the use of social media and the sharing of more and more personal information online, in conscious and unwitting ways, there is an indelible fingerprint that codes and decodes aspects of the self.

Self-revelation was a key point of debate, as was its link to surveillance – a process whose effects can be dangerous, eroding fundamental human rights on the basis of prejudices surrounding ethnicity, gender, sexuality, age and affiliations. There was a question about assumptions of privacy: has the virtual world disrupted privacy, or was this an imagined privacy to begin with, with the real change being the latency of the data that can now be retained? Other points of view argued that the Internet has indeed fundamentally changed how privacy can be regarded and practiced, and asked whether there are still spaces for choice or only a constant panopticon.

A criticism was pitched against vague legal terms that do little to empower users with a choice, and there was a call for more informed consumers. How this could be achieved in any concrete way needs further dialogue, as it is clear that the transparency of uses – and the ethical considerations surrounding not only collection but also current and future use – needs to be addressed. Who will address this? It is an issue that affects everyone tweeting, reading or breathing right now, in very real ways.

The discussions at IGF point to exciting possibilities for applications of Big Data. Forging these in positive ways demands dialogue and action, which can only succeed within frameworks of collaboration that actually work and that meet at the intersection of protecting human rights and exploring enhanced opportunities. It is clear that a consciousness is warranted about what Big Data implies. As IGF 2013 progresses with its Big Talk, it will be interesting to see what implications emerge and what actions result.