Monday, May 4, 2015

CISOs and their huge budgets

Since the cyber wave broke a few years ago, the CISOs of this world have watched huge budgets being poured into security investments. Many have seen their own budgets follow the same inflation. For a population that has spent years complaining about a lack of resources, it is now time to smile.

Or maybe it is not. Because in reality, what are they going to do with their big budgets? Those budgets may well be signs of two major misconceptions that will eventually unfold as time bombs.

The first issue is already emerging in people’s minds; it is the question of “what now?”. At a time when vendors, as exemplified by the exhibitors at the recent RSA Conference, are building a bubble out of the innumerable products and solutions they push in response to the budget inflation, one may legitimately wonder: once all those products are implemented and all that money is spent, will security be fixed once and for all? What now?

“Fixed” seems, of course, a fair expectation from the C-level. Setting aside the obscure requirements that regulations place on their shoulders, senior management, when asked, usually demonstrates a clear view of which information needs to be protected. They therefore expect that view to be implemented naturally, and security to be fixed if it has to be. And indeed, in theory, there is no reason why information systems, even ‘open’ ones, should be insecure, provided the technology and the system designs do not bring in more risk than management implicitly assumes. Alas, that is hardly ever the case. The first contributor that comes to mind is unanticipated vulnerabilities, which blur the game; worse, no one can ever be sure that none remain. Very often, this gap between senior management’s perception of a secure system and the reality of a leaking one is the root of many failures.

“Once and for all” also sounds legitimate: on paper, a fixed system has no reason to become insecure over time. Except that change management and new projects in general can themselves open new holes in the information system, which is usually the case. In other words, senior management invests heavily to fix security and will wake up one day surprised that the information systems do not seem to have improved in exposure in any way. This leads to the second misconception.

The second misconception bears on the mesh of security accountability within the company or organization. Though the CISO is supposed to be in charge, reality is different. In fact, and in common sense, the actual responsibility for insecure systems spreads across pretty much everyone. Let’s consider why through a few basic examples.

If a new application ends up totally hopeless regarding security, is it the CISO’s responsibility alone? Or is it the sponsor’s, for not having given any security functional requirements and for not specifying any data protection needs? And the project manager’s, for not noticing and not caring about security policy compliance? And the developer’s, for not asking why the application requires no password? The same goes for the testers and for the users. You get the idea.

We could apply the same logic at every level across the company. Network design is not done with resilience in mind but with laziness – sorry, with ease of change. People are not removed from directories because it complicates archiving, laptops keep admin rights to please the less careful, and so on. There may be exceptions, but usually none of those small decisions are made by the CISO and none reaches the board, so senior management has no clue about all the little drifts that stack up and corrupt their precious information system into a source of nightmares.

In fact, the big mistake that comes with hiring a CISO is shifting everyone’s accountability onto his shoulders alone. The more power and budget on the CISO’s desk, the less on those who are the actual players. And there is more: even the minimum could be too much for the CISO’s agenda.

Consider a security policy. It sounds like a good idea to have one; it helps ensure basic rules are in place. Perhaps – though everyone has a story to tell about how seldom compliance is fully met. Or maybe there is another way. Without a policy on passwords, wouldn’t it be the sponsor’s full responsibility to ensure proper authentication? And because of that, wouldn’t sponsors be more likely to take it seriously?

With such a decentralized approach to security, it is easy to see that a huge central security budget is a sign of many dysfunctional processes. In fact, the bigger the budget, the more disorganized the internal security processes. It is fine to invest in security, but the amount should be spread consistently with each actor’s role.


The bottom line is that a useful CISO is not a CISO with a huge budget, but one who keeps the board aware of how much gap, if any, exists at any time between their perception of the security risk and the actual exposure – together with explanations and actionable suggestions. Such a CISO does not need a budget; the company does.

Sunday, May 3, 2015

IoT's Security - Part 2 - Open privacy overview

The privacy-versus-security model identified for IoT security first needs to be broken down into functions. Because the user is isolated from the traditional, strict corporate environment, identification and thus authentication functions must be provided openly. These users are not just any users: they are the owners, or at least the sources, of the data the “thing” generates, and they may be using the device either as individuals (owning or renting the device) or as employees or members of an organization that owns and provides it.

Identification can take many forms, such as an email address, a passport number, or an ID provided and certified by a third party – some ID broker or even a government. In a general IoT architecture, the need is for some recognized service that brings reasonable assurance that the device user’s ID is trustworthy.

In a traditional, closed architecture, the user’s organization provides such a service and ensures the protection of the user’s credentials, at least internally. This matches most of the open IAM solutions we see emerging on the market. But in a truly open future, the service will be provided in SaaS (or machine-to-machine) mode by some authority trusted as such by the users or by their organizations. For instance, the French state may provide ID services certifying that I am indeed a living French citizen, with a level of trust that may suit me as well as my employer(s) – but perhaps not some other organization I belong to, in which case a different ID provider might be sought.
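To make the idea of such an ID service concrete, here is a minimal sketch, in Python, of how a device or application could check an assertion issued by a third-party ID broker. The broker, the shared secret, and the assertion fields are all hypothetical; a real deployment would rely on an established federation standard rather than this toy scheme.

```python
import hmac, hashlib, json, time

# Hypothetical shared secret agreed between the relying party and the ID broker.
BROKER_SECRET = b"example-shared-secret"

def verify_assertion(assertion_json, signature_hex):
    """Return the asserted identity if the broker's signature checks out, else None."""
    expected = hmac.new(BROKER_SECRET, assertion_json.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_hex):
        return None                       # signature mismatch: not issued by the broker
    assertion = json.loads(assertion_json)
    if assertion["expires"] < time.time():
        return None                       # assertion has expired
    return assertion                      # e.g. {"subject": "user@example.com", "issuer": "fr-state-id", ...}
```

The point is not the cryptography but the contract: the device trusts the broker’s word about who the user is, without the user’s organization having to run the service itself.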

Identification is an important service that helps activate the device and applications for a known user, along with all the related personal and contextual data. But to protect that user’s data, authentication is needed to ensure the ID is not usurped. The best-known authentication mechanism is probably the good old password, but many other techniques exist – fingerprint matching, for instance. In the history of corporate security, the question of what makes a good password has filled entire volumes. We are also starting to see ‘risk-based’ authentication, a dynamic technique that adapts its challenge to users depending on an analysis of their situation – a connection from a foreign country or from an untrusted device being typical triggers. But none of these is focused on the individual; they are like the typical security measures discussed in part 1.
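As an illustration of the risk-based idea, here is a minimal sketch in which the challenge adapts to the context of the connection. The signals, country list and thresholds are hypothetical, chosen only to show the mechanism.

```python
# Risk-based authentication sketch: the stronger the risk signals,
# the stronger the challenge demanded from the user.
TRUSTED_COUNTRIES = {"FR", "US"}

def required_challenge(country, device_trusted):
    """Pick an authentication challenge based on a crude risk score."""
    risk = 0
    if country not in TRUSTED_COUNTRIES:
        risk += 1                      # connection from an unusual country
    if not device_trusted:
        risk += 1                      # connection from an unenrolled device
    if risk == 0:
        return "password"              # low risk: password alone
    if risk == 1:
        return "password+otp"          # medium risk: add a one-time code
    return "deny"                      # high risk: refuse and alert

print(required_challenge("FR", device_trusted=True))    # -> password
print(required_challenge("ZZ", device_trusted=False))   # -> deny
```

Note that the rules here are set by the organization, not by the user – which is exactly the limitation the next paragraph addresses.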

In a privacy-focused IoT, just as users should be able to pick their ID broker, they should also be able to choose the authentication technique that matches their perception of the risk. Consider what Google already offers: for your Google account, you can opt for the basic password requirements or for the ‘two-step verification’ procedure, which pairs the password with a challenge sent to your smartphone. The point is that the user decides based on a personal view of the risk.

I can already hear many voices ridiculing this user-focused idea on the premise that the average user has no clue how much exposure a poor password creates for their data and, worse, for the companies they connect to. Is that really so? There are two answers to that naïve argument. First, people learn, even if the hard way: observe someone whose email account has been hacked; it usually hurts enough that they use a stronger password from then on. Second, consider Google again. They give you the choice, but only among options they consider secure enough. The user’s choice is thus, in fact, between secure and more secure.

Therefore, IoT will not be less secure due to poor passwords. Instead, it will be more secure because more users will be able to opt for stronger authentication thanks to authentication brokering – most likely provided by the same brokers as for identification.

The next elements to consider are data and the famous RBAC model. First, we need a way for users to know what data the “thing” generates, so they can then decide how those data should be protected. Typically, the device manufacturer would provide a web service where each customer can review how sensitive the data are; alternatively, switches directly on the device could offer the same feature. The point is that users need to be able to enforce their sensitive-data policy at device level.

In the general case this can become arbitrarily complex, whether in the types of interface just mentioned or simply in the variety of perceptions of data criticality – and hence of protection requirements – among innumerable users. One way to simplify the matter is to rely on predefined data classification scales and predefined protection profiles. One can even imagine users defining, or registering, their own privacy policy or policies. For instance, I would declare that I want all my ‘Public’ data protected at a ‘Basic’ level, whereas all my ‘Family’ data should benefit from ‘Medium’-level protection and my ‘Confidential’ data should not be uploaded to the net at all. Of course, data that I consider ‘Confidential’ could well be merely ‘Sensitive’ to you, in which case you would require less stringent measures for the same data.
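To fix ideas, here is a minimal sketch of how such a user-registered policy might be expressed. The classification and profile names simply follow the example above and are, of course, hypothetical.

```python
# A user-registered privacy policy: each classification the user assigns
# to data maps to a predefined protection profile, bundled under the user's ID.
my_policy = {
    "owner": "*geyres@example.com",
    "classifications": {
        "Public":       {"profile": "Basic"},
        "Family":       {"profile": "Medium"},
        "Confidential": {"profile": "NoUpload"},   # never leaves the device
    },
}

def profile_for(policy, classification):
    """Resolve the protection profile a device should apply to a piece of data."""
    return policy["classifications"][classification]["profile"]

print(profile_for(my_policy, "Family"))   # -> Medium
```

Your policy could map the very same classification names to different profiles, which is the whole point of keeping the scales predefined but the choices personal.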

A ‘Medium’ protection level, for instance, would ensure strong password authentication, logging – see below – and access granted only to a few ID patterns. An ID pattern could be, for example, ‘*@mycompany.com’ or ‘*geyres@{yahoo,gmail}.com’, meaning that only people in my company and members of my family would have access. I am not going into the details of how to define access rights or roles here; I am sure you get the idea.
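For illustration, a minimal sketch of ID-pattern matching for such a profile. The brace form used above (‘*geyres@{yahoo,gmail}.com’) is written out as two separate glob patterns; patterns and IDs are hypothetical.

```python
from fnmatch import fnmatch

# Access is granted only if the authenticated ID matches one of the
# patterns attached to the protection profile.
allowed_patterns = ["*@mycompany.com", "*geyres@yahoo.com", "*geyres@gmail.com"]

def access_granted(user_id):
    """Return True if the ID matches at least one allowed pattern."""
    return any(fnmatch(user_id, pattern) for pattern in allowed_patterns)

print(access_granted("alice@mycompany.com"))    # -> True
print(access_granted("mallory@elsewhere.org"))  # -> False
```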

The trick is to rely on third parties that would provide classification, population, protection-profile and policy services. Classifications would make it possible for users and device producers to align on tags for the data, so that users can either validate predefined classifications or edit them against standard, meaningful scales. Likewise, population names would standardize ways of listing the people or ID patterns to whom the user grants access; protection profiles would give names to consistent sets of security functions; and policies would bundle the whole under the user’s ID.

In that way, the device producer would be able to pre-configure their “thing” for me at order time simply by looking up the requirements already published in my policy via some policy registrar. I could adjust further as needed through the device’s interface, but I might not even have to.
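A minimal sketch of what that order-time lookup might look like on the producer’s side, assuming a hypothetical registrar endpoint and the policy structure sketched earlier. Field names and the URL are illustrative only.

```python
import json
from urllib.request import urlopen

# Order-time pre-configuration: look up the buyer's published policy in a
# registrar and seed the device configuration from it.
REGISTRAR = "https://policy-registrar.example.org/policies/"

def preconfigure_device(owner_id):
    """Fetch the owner's policy and derive an initial device configuration."""
    with urlopen(REGISTRAR + owner_id) as response:
        policy = json.load(response)
    return {
        "owner": owner_id,
        "default_profile": policy["classifications"]["Public"]["profile"],
        "allowed_patterns": policy.get("populations", []),
        "logging": True,    # required by the 'Medium' profile described above
    }
```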

Finally, as part of the protection profile, the device would be expected to enable – or at least to allow – some form of logging: as a user, I would like to be able to track and review details of the transactions made. Typically, if my data have been accessed by third parties, I might want to review the list of such accesses. Of course, the producer will need to provide for such features and to ensure logs are written securely somewhere within their systems, but they do not need to build all the machinery; much of the reporting, for example, can be provided by a third party specializing in analytics and reporting dashboards.
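As a final sketch, here is what device-side access logging and owner review could boil down to. The log is a simple in-memory list here; in practice the producer would write it securely within their systems, as noted above, and the field names are hypothetical.

```python
import time

# Record every access attempt so the owner can later review who touched what.
access_log = []

def record_access(accessor_id, data_tag, granted):
    """Append one access attempt to the device's log."""
    access_log.append({
        "when": time.time(),
        "who": accessor_id,
        "what": data_tag,        # e.g. the classification tag of the data touched
        "granted": granted,
    })

def accesses_by_third_parties(owner_id):
    """Let the owner review every access made by someone other than themselves."""
    return [entry for entry in access_log if entry["who"] != owner_id]
```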

A lot more detail could be added to this high-level functional breakdown, but I hope I have given enough to show how thoroughly the IoT challenges, and indeed reverses, traditional security architectures. Most likely, this is not what we are going to witness: the offers from device producers will probably be much simpler. My point is to raise the concern that if that happens, the underlying security model will be much poorer in features, and we will be bound to experience much worse security and privacy than if a route like the one I have depicted were followed. Not because of poor technology, but because of poor security design.