The Internet of Things raises many questions about its security and how that security should be embedded, or at least addressed. Many big names publish paper after paper on their vision of IoT security, the challenges it faces, and the various technologies and vendors that emerge and are likely to play a role or to lead the game.
For an old-timer like me, what is striking is that, because IoT is new and trendy, all related concepts seem to need to be new as well, or at least reconsidered, including the good old principles and methods that security folks have developed over the past decades. But of course, there is simply no reason for such a theoretical rupture. Just as there is never such a thing as a new economy, because economics is rooted in human action, there is never such a thing as a new security – at least as long as machines remain Von Neumann machines – because security is rooted in data and people, not in technology.
For a “thing” on the Internet, as for anything else, the question of security is that of ensuring that only those who have authorized access to given data actually get it, nothing more and nothing less. Every word in that definition matters, however, so let’s make a quick review. Many would define security by means of the three “CIA” initials: confidentiality, integrity, and availability – funny that a spying agency full of secrets should have picked those three for its own acronym…
But all three are encompassed within the concept of access: integrity is about change, but changing data assumes it is disclosed and available. The ‘who’ and the ‘which’ form the core of the famous RBAC concept (Role-Based Access Control): Access is Controlled Based on the Role you are assumed to play at runtime; for instance, accounting data is accessible only to accounting people in the company. ‘Authorized’ is key and twofold: it assumes someone grants you some role(s) consistent with your job position, and it also assumes the machine, the “thing”, is coded so that it enforces fully, but only, the access rights entitled to those role(s). Subtly, ‘only … actually get it’ brings in the negative dimension of security: the need that no one else has access while I truly have it.
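The RBAC logic just described can be sketched in a few lines of code. This is a minimal illustration, not any real system: the role names, user names, and data labels are all hypothetical.

```python
# Minimal RBAC sketch: roles map to the data they may access,
# and users are granted roles by an administrator.
# All names here are illustrative.

role_permissions = {
    "accounting": {"ledger", "invoices"},
    "engineering": {"design_docs"},
}

user_roles = {
    "alice": {"accounting"},   # granted consistently with her job position
    "bob": {"engineering"},
}

def has_access(user: str, data: str) -> bool:
    """A user may access data only if one of their roles permits it."""
    return any(data in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

print(has_access("alice", "ledger"))  # True: the accounting role covers the ledger
print(has_access("bob", "ledger"))    # False: no role grants bob the ledger
```

Note the ‘negative dimension’ mentioned above appears as the default: any (user, data) pair not explicitly covered by a granted role is denied.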
Finally, all this logic is expected to be ‘ensured’, which means that, from the theoretical RBAC model down to the device, all due actions are taken during the design and build process to avoid the final “thing” running a different model. In other words, controls are enacted throughout the development phases to gain – reasonable – assurance that no bug and no undocumented feature ends up in the “thing” that could defeat or circumvent the securing RBAC model – usually called a security model.
Such principles emerged long ago; a famous example of a historic standard developing these concepts, together with the concepts of security functions (identification, authentication, access control, audit, accountability) and security assurance, is the ITSEC of 1991 – which evolved into the still-alive Common Criteria. They remain the root of all security controls today, and I would be very surprised if IoT had anything so specific as to turn this rule upside down.
The attentive reader will have noticed that I have not mentioned privacy so far. Though rarely seen that way, privacy is in fact a generalization of security, where the main difference lies not so much in the controls as in the governance and the actors. The definition I gave of security, with RBAC at its core, is well suited to people within a company or an organization. Indeed, in such a closed environment, everyone has a role according to their job. The data belongs to the company, and someone can grant or authorize you a role on behalf of the company. Privacy is different in that the data belongs to me, and I want – or would want – to be the one granting, or not granting, the role or access rights. But once I am fine with the RBAC model, privacy is nothing but security: the controls and intricacies are the same.
In reality today, this requirement that individuals be in the position of granting RBAC rights on private data, which is hardly ever implemented, has been balanced by tons of regulations, which all try to provide static authorization models as a substitute for actual dynamic citizen authorization. In other words, privacy is security where the rules come from laws instead of company policies. Many privacy issues have their roots in this inability to properly empower data owners in their RBAC granting needs.
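The shift from company-granted to owner-granted rights can be sketched by extending the RBAC idea with per-owner grants. Again, this is a purely hypothetical illustration of the principle, not a real API or system.

```python
# Privacy as owner-granted RBAC: each data item has an owner,
# and only that owner may grant access to it.
# All names are illustrative.

from collections import defaultdict

data_owner = {"heart_rate": "alice", "location": "alice"}
grants = defaultdict(set)  # data item -> set of users allowed to read it

def grant(granter: str, data: str, grantee: str) -> bool:
    """The data owner dynamically authorizes a grantee (the 'citizen authorization')."""
    if data_owner.get(data) != granter:
        return False  # only the owner can grant rights on their own data
    grants[data].add(grantee)
    return True

def can_read(user: str, data: str) -> bool:
    return user == data_owner.get(data) or user in grants[data]

grant("alice", "heart_rate", "dr_smith")   # alice shares with her doctor
print(can_read("dr_smith", "heart_rate"))  # True: explicitly granted by the owner
print(can_read("insurer", "location"))     # False: no grant from alice
```

The mechanics are the same as the corporate case; only who holds the granting power changes, which is exactly the governance difference argued above.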
This view of privacy versus ‘traditional’ security is a key step in carrying our logic further to the IoT of tomorrow. Consider the BYOD issue: today, should my computer be sourced only from my company, or should my company accept mine – provided it is secure? The point regarding RBAC is: how can a thing I own be made to comply with an RBAC model that my company requires? There are two cases with respect to my computer. Either it is owned by my company, which entitles me to its use, or it is mine and I need to agree to abide by the company’s rules – at least while working. It will be the same for all of IoT: either a device is mine, or it comes to me from work – or from some form of work, e.g. a non-profit I have contracted with. My point is to highlight the dichotomy between security (company focus) and privacy (individual focus).
Please note that I have not considered the BYOD question from the technical perspective. That is, at this point, the question is not that of the possible vulnerabilities that come along with BYOD, with your own device. There can be vulnerabilities anywhere, and I will cover that topic in due time. What I would like to make clear here is that the security model that BYOD – and thus IoT – implies cannot be ignored or dismissed if the IoT is ever to be secure. At this point, our view can be summed up as: IoT security is a privacy issue where the data to be protected needs to be made explicit ahead of runtime, the user should be empowered to grant access rights to that data, and those rules should be assured to be correctly implemented, leading neither to vulnerabilities nor to hidden backdoors.
This may seem pretty basic or obvious, but it actually raises significant challenges, because it means that the companies designing IoT devices have to build in security features that rely on a security model that is not theirs to control end to end – by contrast, access to a SaaS application is an example where the provider has end-to-end control of the security model, and the user can only abide by it.
In a next installment, let’s try to clarify what this means in terms of the architecture of the security model that IoT implies…