Internet of Things (IoT) Privacy

Can A Machine Care About Privacy?

I recently attended the Digital Enlightenment Forum 2015 in Kilkenny; not your average tech conference, and not the average discussion topics, either – but topics of growing relevance.

For me, the value was in having a conference that offers the time – even if only for a day – to step back and look at the bigger picture within which all our urgent-seeming daily task-work takes place.

One theme in particular stood out for me, and it’s also a major strand of the Trust and Identity team’s work plan over the coming couple of years. Several sessions, including two breakout groups, addressed the theme of digital ethics. The discussion was wide-ranging, sometimes detailed and often abstract, but fascinating and, ultimately, entirely practical. There will be a full paper on the theme in due course, but here’s a hint of some of the ground we covered.

[Warning: may contain philosophy…]

I hope that warning hasn’t immediately discouraged you. “Don’t panic!”, as Douglas Adams would have said. There’s a really simple model for discussing complex topics like this when you have a very diverse group of people round the table; almost all the discussion tends to fall into one of four categories:

  • Philosophy/principles
  • Strategy/society
  • Implementation/practicalities
  • Technology

Once you know that, it’s much easier to avoid getting mired in the intricacies of any one of the four categories, and that keeps the discussion relevant to everyone.

So, philosophy:

Taking our cue from one of the morning’s presentations, we went right back to fundamentals: what have thinkers said about ethics in the pre-digital past? There’s the “continental” philosophical approach of people like Foucault and Barthes, who cast ethics as a series of narratives and structural relationships; then there’s the more “traditional” analytic approach, looking at systems of ethics based on consequences, rules and justice. What they have in common is a recognition that ethics is contextual, and a function of the society in which it evolves.

In our case, we’re in the midst of a post-industrial, technically-oriented society. It’s sometimes hard to imagine that things could be any other way… but what happens if you subtract technology from the ethical equation? You’re left with information (rather than data), decisions, relationships, and semantics. Technology may change a lot of things, but it doesn’t remove those fundamentals, and it doesn’t alter the contextual nature of ethics, so we can be reassured that we have some solid principles to build on.

What’s happening at the social level?

Here, the main point I picked up was about “agency”. In our technologically-oriented society, almost every action we are able to take (our “agency”) is mediated – either through technology, such as computers and phones, or through third parties, such as banks, the retail supply chain, telcos, internet service providers, identity providers and so on. Ethically, the fact that what we do is mediated often moves us further from the consequences of our decisions and actions. This can leave us feeling that we’re not really responsible for what might happen. As one participant put it:

“Technically mediated phenomena are outstripping ‘human-centric’ ideas of privacy and ethical outcomes.”

In the context of our discussion at the time, that was a perfectly normal and rational conclusion to draw. When you stop and think about it, it could be quite a scary one, too.

But so what… why should I care?


Well, we should care because all those third parties through whom we act are making decisions, every day, which directly affect us. Sometimes they do so with our knowledge and consent, but on the Internet, that is far from the norm, as I suspect we all acknowledge. Here are some examples of the kinds of decision which are made on your behalf all the time:

  • “This privacy policy and these cookies are fine for you; there’s no need to ask you explicitly if you’re OK with them.”
  • “We’ll opt you in to our data-sharing policy by default. If you don’t like it, you can always tell us later.”
  • “Your personal data is safe with us, because we anonymise it. You don’t need to worry.”
  • “Collecting this data does compromise your privacy here and now, yes… but we expect there to be a collective benefit in the long run, so that’s OK.”
  • “We’re ‘personalising’ our prices for you, based on the really expensive laptop you’re using. But don’t worry – we have your best interests at heart.”

These are all real, practical consequences of our technically-mediated society, and they affect your privacy every day.


So what’s the technical dimension? Again, what struck me was “agency”. The number and diversity of ethical agents we encounter is growing fast, and… not all of them are human. A lot of decisions these days are made by algorithms (remember those stock market volatilities caused by too many automated trading systems all reacting to each other?), and any algorithm that makes decisions is not ethically neutral. “Ah,” I hear you say, “but they only do what they’re programmed to do. Someone, somewhere is responsible… not the algorithm”.

OK – let’s look at that for a moment. First, some algorithms are adaptive; there are already network security products, for instance, that learn, over time, what constitutes “normal” behaviour, and adjust their own behaviour accordingly. Then there’s machine learning in its broader sense. Researchers into artificial intelligence already report that the algorithms they create frequently evolve in unexpected ways, and go on to exceed human capabilities.
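To make the “adaptive” point concrete, here’s a minimal sketch of the idea behind such a system: a detector that continuously learns a baseline of “normal” values (say, requests per minute) and flags outliers. All names here are hypothetical and the model is deliberately simple – real products use far richer techniques – but it shows how the decision boundary itself shifts with the data the system sees, which is exactly why such algorithms aren’t ethically neutral.

```python
class AdaptiveBaseline:
    """Learns a running mean/variance of observations (Welford's method)
    and flags values that fall far outside the learned 'normal' range."""

    def __init__(self, threshold=3.0):
        self.n = 0            # observations seen so far
        self.mean = 0.0       # running mean
        self.m2 = 0.0         # running sum of squared deviations
        self.threshold = threshold

    def observe(self, x):
        """Update the learned baseline with a new observation."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x):
        """Flag values more than `threshold` standard deviations from the mean."""
        if self.n < 2:
            return False      # not enough history to judge yet
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) > self.threshold * std


detector = AdaptiveBaseline()
for rate in [100, 102, 98, 101, 99, 103, 97]:   # "normal" traffic
    detector.observe(rate)

print(detector.is_anomalous(100))   # within the learned range
print(detector.is_anomalous(500))   # far outside learned behaviour
```

Note that what counts as “anomalous” is never written down anywhere in the code – it emerges from whatever data the detector happened to observe, with no human reviewing each individual verdict.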

And last: machines are increasingly capable of autonomy – self-driving cars are a good example. They will react to changing conditions, and deal with circumstances they have never encountered before, without human intervention. The first time a driverless vehicle runs someone over, we’ll see where the ethical buck stops.


This has been a lightning gallop through several hours of discussion. What did we conclude?

  • First, that modern life raises just as many ethical issues as it ever did.
  • Second, that if we’re not careful, all the ethical calculations get made on our behalf – and not always in our best interest.
  • Third, that if we’re to retain our agency, we need to understand that that’s what we’re trying to do, and why.
  • Fourth, that there are indeed some constants here, despite the pace of change around us. Ethics is a social, contextual thing, and it has to do with meaning, relationships and decisions. Those are very human things.

And last, that we have entered the age where a growing number of ethical agents are non-human, and we have to understand how that affects us and our societies. Is there a fundamental ethical principle based on ‘global’ human values? Might that principle be to do with consciousness, or autonomy, for example? And if so, what’s the ethical status of machines that are increasingly autonomous and might even, at some point, be described as conscious?

We aren’t necessarily at the point where it makes sense to ask whether a machine can care about privacy… but we’re not far from it.

Internet of Things (IoT) IPv6

New "Internet Of Things Consortium" Launched

Earlier this month at the Consumer Electronics Show (CES) in Las Vegas, a new “Internet of Things Consortium” was announced bringing together 10 companies with the stated goal of fostering and supporting the growth of Internet-connected devices for consumers.  The consortium has a website now visible at

The term “Internet of Things” has been around for some time (Wikipedia dates the first use to 1999) and is generally used to refer to the networks of devices and objects that we are connecting to the Internet and that are using the Internet for communication.  Sensor networks are an example.  Another is connected homes where lights, appliances and even power outlets might all be connected.  A number of the companies involved with this consortium make game consoles, televisions and other entertainment devices that would be connected to a home network and on out to the public Internet.

All of these devices are ultimately connected to the Internet – and communicating often amongst themselves in so-called “machine-to-machine” or “m2m” connections.

Now, this new Internet Of Things Consortium is not the first or only such consortium out there.  There are other alliances and groups that are working on promoting open standards for connected homes and devices.  But it’s great to see another group of companies working in this space. The CEO of Ube, one of the participants, was quoted in a TechCrunch article as saying in part this:

“The successful adoption of [machine-to-machine] and connected home technologies is dependent on open standards for the provisioning and control of millions of headless devices.”


Here at Deploy360 we’ve been interested in the “Internet of Things” for a long time because to bring all the billions of devices (and power outlets!) onto the Internet, we’re going to need more IP addresses than we can get with IPv4.  I queried the new consortium about their IPv6 support and the consortium chairman Jason Johnson came back with this response:

We should absolutely support IPv6 – or there won’t be billions of devices with IP addresses.

That’s exactly right… and I look forward to seeing what they do in this regard and helping them if they need it.
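A quick back-of-the-envelope calculation shows why. IPv4 addresses are 32 bits and IPv6 addresses are 128 bits; the world-population figure below is a rough round number used purely for scale:

```python
# Why IPv4 can't cover "billions of devices": compare the address spaces.
ipv4_addresses = 2 ** 32     # about 4.3 billion, before any reserved ranges
ipv6_addresses = 2 ** 128    # about 3.4 x 10^38

world_population = 8_000_000_000   # rough round figure, for scale only

print(ipv4_addresses)                      # 4294967296
print(ipv4_addresses / world_population)   # well under one address per person
print(ipv6_addresses // ipv4_addresses)    # 2^96 IPv6 addresses per IPv4 address
```

Even before reserved and private ranges are subtracted, IPv4 can’t give every person a single address, let alone every light bulb and power outlet.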

Some out there regard the “Internet Of Things” as marketing hype… but the reality is that we are connecting more and more devices to the Internet.  It is happening today – and we’re going to need IPv6 to make it all work!