Sunday, May 3, 2015

IoT's Security - Part 2 - Open privacy overview

The privacy-vs-security model identified for IoT security first needs to be broken down into functions. Since users are isolated from the traditional, strictly controlled corporate environment, it follows that identification and, in turn, authentication functions must be provided openly. And these are not just any users: they are the owners, or at least the sources, of the data the “thing” generates, and they may be using the device either as individuals (owning or renting the device) or as employees/members of an owning organization that provides it.

Identification can take many forms, such as an email address, a passport number or an ID provided and certified by a third party, be it an ID broker or even a government. In a general IoT architecture, the need is to ensure that some recognized service is provided that brings reasonable assurance that the device user’s ID is trustworthy.

In a traditional, closed architecture, the user’s organization will provide such a service and ensure the user’s credential protection, at least internally. This matches most of the open IAM solutions we see emerging on the market. But in a truly open future, the service will be provided in SaaS (or machine-to-machine) mode by some authority trusted as such by the users or by their organizations. For instance, the French state may provide ID services certifying that I am indeed a living French citizen; this with a level of trust that may suit me as well as my employer(s) – but maybe not suit some other organization I belong to, in which case a different ID provider might be sought.

Identification is an important service that helps activate the device and applications for a known user, along with all the related personal and contextual data. But to protect that user’s data, authentication is needed to ensure the ID is not usurped. The best-known authentication mechanism is probably the good old password, but many other techniques exist – fingerprint matching, for instance. In the history of corporate security, the question of what makes a good password has filled entire volumes. We are starting to see ‘risk-based’ authentication, a dynamic technique that adapts its challenge to users depending on an analysis of their situation – a connection from a foreign country or from an untrusted device being typical triggers. But like the traditional security measures discussed in part 1, none of those are focused on the individual.

In a privacy-focused IoT, just as users should be able to pick their ID broker, they should also be able to choose the authentication technique that matches their perception of the risk. Consider what Google already proposes: for your Google account, you can opt for the basic password requirements, or for the ‘two-step verification’ procedure, which doubles the password with a challenge over your smartphone. The point is that the user decides based on a personal view of the risk.
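The idea of a broker that lets each user pick an authentication method, but only from a pre-vetted catalog, can be sketched as follows. This is purely illustrative: the class names, the strength ratings and the minimum bar are all assumptions, not any real broker’s API.

```python
# Hypothetical sketch of an authentication broker: the user chooses
# freely, but only among methods the broker itself rates strong enough.
from dataclasses import dataclass


@dataclass(frozen=True)
class AuthMethod:
    name: str
    strength: int  # broker's own rating; higher = stronger (assumed scale)


class AuthBroker:
    MINIMUM_STRENGTH = 2  # only methods at or above this bar are offered

    def __init__(self, methods):
        # Drop anything too weak, so the user's choice is in fact
        # "between secure and more secure", as with Google's options.
        self.catalog = [m for m in methods if m.strength >= self.MINIMUM_STRENGTH]
        self.user_choice = {}

    def offer(self):
        """List the methods a user may choose from, weakest first."""
        return sorted(self.catalog, key=lambda m: m.strength)

    def choose(self, user_id, method_name):
        """Record the user's personal, risk-based choice."""
        method = next(m for m in self.catalog if m.name == method_name)
        self.user_choice[user_id] = method
        return method


broker = AuthBroker([
    AuthMethod("pin", 1),        # filtered out: below the broker's bar
    AuthMethod("password", 2),
    AuthMethod("two-step", 3),
])
print([m.name for m in broker.offer()])  # 'pin' never reaches the user
```

The design point is that user freedom and a security floor are not in conflict: the broker constrains the catalog, the user ranks the residual risk.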

I can already hear many voices ridiculing this user-focus idea on the premise that the average user has no clue of the exposure that a poor password creates for their data, and even worse, for the companies they connect to. Is that really so? There are two answers to that naïve argument. First, people learn, even if the hard way. Observe someone who has had their email account hacked: it usually hurts enough for them to use stronger passwords from then on. Second, consider Google again. They give you the choice, but only among options that they consider secure enough. Thus the user’s choice is in fact between secure and more secure.

Therefore, IoT will not be less secure due to poor passwords. Instead, it will be more secure because more users will be able to opt for stronger authentication thanks to authentication brokering – most likely provided by the same brokers as for identification.

The next elements to consider are data and the famous RBAC model. First, we need a way for users to know what data the “thing” generates, so that they can then decide how those data should be protected. Typically, the device manufacturer would provide a web service where each client can review how sensitive the data are. Alternatively, simple switches directly on the device could offer the same capability. The point is that users need a way to enforce their sensitive-data policy at the device level.

In the general case, this can be arbitrarily complex, in the types of interface just seen, or simply in the variety of perceptions of data criticality across innumerable users, and then in the resulting protection requirements. One way to simplify the matter is to rely on predefined data classification scales and predefined protection profiles. In fact, one can imagine users defining or even registering their own privacy policy or policies. For instance, I would declare that I want all my ‘Public’ data to be protected at a ‘Basic’ level, whereas all my ‘Family’ data should benefit from ‘Medium’ level protection and all my ‘Confidential’ data should not even be uploaded to the net. Of course, data that I consider ‘Confidential’ could well be only ‘Sensitive’ to you, in which case you would require less stringent measures for such data.

A ‘Medium’ protection level, for instance, would ensure strong password authentication, logging – see below – and access granted only to a few ID patterns. An ID pattern could for example be ‘*’ or ‘*geyres@{yahoo,gmail}.com’, meaning that only people in my company and members of my family would have access. I am not going to run through the details of how to define access rights or roles here; I am sure you get the idea.
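One plausible reading of such ID patterns is shell-style wildcards extended with `{a,b}` alternatives, as in the ‘*geyres@{yahoo,gmail}.com’ example. The post does not define a grammar, so the following matcher is only a sketch under that assumption, and the sample identities are made up.

```python
# Minimal sketch of ID-pattern matching: shell-style wildcards plus a
# simple {a,b} alternative expansion. The grammar is an assumption.
import fnmatch
import itertools
import re


def expand_braces(pattern):
    """Expand '{a,b}' groups into plain wildcard patterns."""
    groups = re.findall(r"\{([^}]*)\}", pattern)
    if not groups:
        return [pattern]
    template = re.sub(r"\{[^}]*\}", "{}", pattern)
    options = [g.split(",") for g in groups]
    return [template.format(*combo) for combo in itertools.product(*options)]


def id_matches(identity, patterns):
    """True if the identity matches any of the granted ID patterns."""
    return any(
        fnmatch.fnmatch(identity, expanded)
        for p in patterns
        for expanded in expand_braces(p)
    )


family = ["*geyres@{yahoo,gmail}.com"]
print(id_matches("mary.geyres@gmail.com", family))  # True
print(id_matches("someone@example.com", family))    # False
```

The broker evaluating these patterns, not the device, would be the natural place for such logic, keeping the “thing” itself simple.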

The trick is to rely on third parties that would provide classification, population, protection-profile and policy services. Classifications would make it possible for both users and device producers to align on tags for the data, so that users can either validate predefined classifications or edit them based on standard and meaningful scales. Likewise, population names would standardize ways to list the people or patterns a user grants access to their data; protection profiles would give names to consistent sets of security functions; and policies would bundle the whole under the user’s ID.
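Putting the four services together, a registered policy bundle might look like the structure below. Every field name, profile content and ID format here is hypothetical, chosen only to make the relationships between the services concrete.

```python
# Hypothetical policy bundle tying together the four third-party
# services named above: classification scale, named populations,
# protection profiles, and the policy registered under the user's ID.
classification_scale = ["Public", "Family", "Confidential"]

populations = {
    "everyone": ["*"],
    "family": ["*geyres@{yahoo,gmail}.com"],
}

protection_profiles = {
    "Basic": {"auth": "password", "logging": False},
    "Medium": {"auth": "strong-password", "logging": True},
}

policy = {
    "owner": "some-user-id",  # assumed ID format
    "rules": [
        {"class": "Public", "profile": "Basic", "grant": "everyone"},
        {"class": "Family", "profile": "Medium", "grant": "family"},
        # 'Confidential' data never get uploaded, so no profile applies.
        {"class": "Confidential", "profile": None, "grant": None},
    ],
}

# A device producer could fetch this bundle from a policy registrar at
# order time and pre-configure the "thing" accordingly.
for rule in policy["rules"]:
    print(rule["class"], "->", rule["profile"])
```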

In that way, the device producer would be able to pre-configure their “thing” for me at order time just by checking my own requirements as already published under my policy via some policy registrar. I would be able to adjust further as needed thanks to the interface to the device, but I might not even have to.

Finally, as part of the protection profile, the device would be expected to enable – or to come with the possibility of – some form of logging; that is, as a user I would like to be able to track and review the details of the transactions made. Typically, if my data have been accessed by third parties, I might like the possibility to review the list of such accesses. Of course, the producer will need to make provision for such features and to ensure logs are securely written somewhere within their systems, but they do not need to build all the machinery. Many of the reporting services, for example, can be provided by a third party specializing in analytics and reporting dashboards.
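The access log the user would review can be sketched very simply: one record per third-party access, filterable by data tag. The field names and the sample accessor identities are assumptions for illustration; a real producer would of course write these records to tamper-resistant storage rather than an in-memory list.

```python
# Minimal sketch of a per-user access log: one record per third-party
# access, which the user can later review per data classification tag.
import time

access_log = []


def record_access(accessor_id, data_tag):
    """Append one access record (fields are illustrative)."""
    access_log.append({
        "when": time.time(),
        "who": accessor_id,
        "what": data_tag,
    })


def review(data_tag):
    """Return every logged access to data carrying this tag."""
    return [entry for entry in access_log if entry["what"] == data_tag]


record_access("insurer@example.com", "Family")
record_access("ad-network@example.com", "Public")
print([entry["who"] for entry in review("Family")])
```

A third-party analytics service could consume the same records to build the reporting dashboards mentioned above, without the producer having to build them itself.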

A lot more detail could be added to this high-level functional breakdown, but I hope I have given enough to articulate the scale of the challenge the IoT raises in totally reversing traditional security architectures. Most likely, though, this is not what we are going to witness, and the offers from device producers will probably be a lot simpler. My point here is to raise the concern that if that happens, the underlying security model will be much poorer in features, and we will be bound to experience much worse conditions for security and privacy than if a route such as the one depicted here were followed. Not because of poor technology, but because of poor security design.
