Monday, May 4, 2015

CISOs and their huge budgets

Since the cyber wave exploded a few months or years ago, the CISOs of this world have watched huge budgets pour into security investments. Many have thus seen their own budgets follow the same inflation. For a population that has spent years moaning about a lack of resources, now is the time to smile.

Or maybe it is not? Because in reality, what are they going to do with their big budgets? Those budgets may well be the signs of two major misconceptions slowly unfolding as time bombs.

The first issue is already emerging in people's minds; it is the question of "what now?". At a time when the market, as exemplified by the exhibitors at the recent RSA Conference, is building a bubble out of the innumerable products and solutions that vendors push in response to the budget inflation, one may legitimately wonder: once all those products are implemented and all that money is spent, will security be fixed once and for all? What now?

"Fixed" seems, of course, a fair expectation from the C-level. Obscure regulatory requirements aside, senior management, when asked, usually demonstrates a clear view of which information needs to be protected. They thus expect that view to be implemented naturally, and security to be fixed if it has to be. And indeed, in theory, there is no reason why information systems, even "open" ones, should be insecure, provided the technology and the system designs do not bring in more risk than management implicitly assumes, which is alas hardly ever the case. First to come to mind, unanticipated vulnerabilities blur the game; worse, no one can ever be sure that none remain. Very often, this gap in senior management's perception between secure and leaking is the root of many failures.

"Once and for all" also sounds legitimate: on paper, a fixed system has no reason to become insecure with time, except that change management and new projects in general can themselves open new holes in the information system, which is usually the case. In other words, senior management invests heavily to fix security and will wake up one day surprised that their information systems do not seem to have improved their exposure in any way. This leads to the second misconception.

The second misconception bears on the mesh of security accountability within the company or organization. Though the CISO is supposed to be in charge, reality is different. In fact, and in due common sense, the actual responsibility for insecure systems spreads across pretty much everyone. Let's consider why through a few basic examples.

If a new application ends up totally hopeless regarding security, is it solely the CISO's responsibility? Or is it the sponsor's, for not having given any security functional requirements and not specifying any data protection needs? And then the project manager's, for not noticing and not caring about security policy compliance? And then the developer's, for not asking why this application does not ask for any password? The same for the testers, and the same for the users. You get the idea.

We could apply the same logic at all levels across the company. Network design is not thought out with resilience in mind, but with laziness, sorry, with ease of change. People are not removed from directories because it complicates archiving; laptops have admin rights to please the less careful; and so on. There may be exceptions, but usually none of those small decisions are made by the CISO and none pops up to the board, so senior management has no clue about all the little drifts that stack up to corrupt their precious information system into a source of nightmares.

In fact, the big mistake that comes with hiring a CISO is to shift everyone's accountability onto his sole shoulders. And the more power and budget on the CISO's desk, the less on those who are the actual players. There is more: even the minimum could be too much for the CISO's agenda.

Consider a security policy. It sounds like a good idea to have one; it helps ensure basic rules are in place. Could be, though everyone has a story to tell about how seldom compliance is fully met. Or maybe there is another way? Maybe without a policy on passwords, it would be the sponsor's full responsibility to ensure proper authentication? And maybe, because of that, sponsors would be more likely to take it seriously?

With such a decentralized approach to security, it is easy to see that a huge central security budget is a sign of many dysfunctional processes. In fact, the bigger the budget, the more disorganized the internal security processes. It is fine to invest in security, but the amount should be spread consistently with each actor's role.

The bottom line is that a useful CISO is not a CISO with a huge budget, but one who keeps the board aware of how much gap, if any, there is at any time between their perception of the security risk and the actual exposure, together with explanations and actionable suggestions. Such a CISO does not need a budget; the company does.

Sunday, May 3, 2015

IoT's Security - Part 2 - Open privacy overview

The privacy-vs-security model identified for IoT security first needs to be broken down in terms of functions. Since the user is isolated from the traditional, strict corporate environment, it follows that identification, and thus authentication, functions need to be provided openly. In fact, users are not just any users: they are the owners, or at least the sources, of the data the "thing" generates, and they might be using the device either as individuals (owning or renting the device) or as employees or members of an owning organization that provides it.

Identification can take many forms, such as an email address, a passport number, or an ID provided and certified by a third party, some ID broker or even a government. In a general IoT architecture, the need is to ensure that some recognized service provides reasonable assurance that the device user's ID is trustworthy.

In a traditional, closed architecture, the user's organization will provide such a service and ensure the protection of the user's credentials, at least internally. This matches most of the open IAM solutions we see emerging on the market. But in a truly open future, the service will be provided in SaaS (or machine-to-machine) mode by some authority trusted as such by the users or by their organizations. For instance, the French state may provide ID services certifying that I am indeed a living French citizen, with a level of trust that may suit me as well as my employer(s), but maybe not some other organization I belong to, in which case a different ID provider might be sought.

Identification is an important service that helps activate the device and its applications for a known user, together with all the related personal and contextual data. But to protect that user's data, authentication is needed to ensure the ID is not usurped. The best-known authentication mechanism is probably the good old password, but many other techniques exist, fingerprint matching for instance. In the history of corporate security, the question of what makes a good password has inked entire volumes. We are starting to see "risk-based" authentication, a dynamic technique that adapts its challenge to users depending on an analysis of their situation; connection from a foreign country or from an untrusted device are typical triggers. But like typical security measures, none of those are focused on the individual, as discussed in part 1.

In a privacy-focused IoT, just as users should be able to pick their ID broker, they should also be able to choose the authentication technique that matches their perception of the risk. Consider what Google already proposes: for your Google account, you can opt for the basic password requirements, or for the "two-step verification" procedure, which doubles the password with a challenge sent to your smartphone. The point is that the user decides, based on a personal view of the risk.
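The "user picks, broker constrains" idea can be sketched in a few lines. This is a minimal illustration, not any real broker's API: the method names and strength scores are assumptions made up for the example.

```python
# The broker publishes only methods it deems strong enough; the user
# picks among them according to their own view of the risk.
# Method names and strength scores are illustrative assumptions.
OFFERED_METHODS = {
    "strong-password": 1,
    "password+otp": 2,   # Google-style two-step verification
    "hardware-key": 3,
}

def choose_method(user_preference):
    """Honour the user's pick, but never drop below the offered set."""
    if user_preference in OFFERED_METHODS:
        return user_preference
    # Anything not offered falls back to the weakest *offered* method,
    # so the choice is always between secure and more secure.
    return min(OFFERED_METHODS, key=OFFERED_METHODS.get)

print(choose_method("password+otp"))  # the user opts in to two-step
print(choose_method("pin-1234"))      # not offered: falls back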

I can already hear many voices ridiculing this user-focus idea on the premise that the basic user has no clue about the exposure that a poor password would create for their data, and even worse for the companies they connect to. Is that really so? There are two answers to that naïve argument. First, people learn, even if the hard way. Observe someone whose email account has been hacked: it usually hurts enough for them to use a stronger password from then on. Second, consider Google again: they give you the choice, but only among options that they consider secure enough. The user's choice is thus in fact between secure and more secure.

Therefore, IoT will not be less secure due to poor passwords. Instead, it will be more secure, because more users will be able to opt for stronger authentication thanks to authentication brokering, most likely provided by the same brokers as identification.

The next elements to consider are data and the famous RBAC model. First, we need a way for users to know what data the "thing" generates, so that they can then decide how that data should be protected. Typically, the device manufacturer would provide a web service where each client can review how sensitive each kind of data is, or simple switches directly on the device could provide the same feature. The point is that users need a way to enforce their sensitive-data policy at device level.

In the general case, this can be arbitrarily complex, in the types of interface just mentioned, or simply in the variety of perceptions of data criticality among innumerable users, and hence in protection requirements. One way to simplify the matter is to rely on predefined data classification scales and predefined protection profiles. In fact, one can imagine users defining or even registering their own privacy policy or policies. For instance, I would declare that I want all my 'Public' data protected at a 'Basic' level, whereas all my 'Family' data should benefit from 'Medium' level protection, and all my 'Confidential' data should not even be uploaded to the net. Of course, data that I consider 'Confidential' could well be merely 'Sensitive' to you, and you would then require less stringent measures for it.
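A registered privacy policy of the kind described above boils down to a mapping from classification tags to protection profiles. The sketch below uses the post's own example names ('Public', 'Family', 'Confidential', 'Basic', 'Medium') plus one assumed 'LocalOnly' profile for data that must never leave the device; none of this is a standard.

```python
# A user's registered privacy policy: classification tag -> profile.
# Names are the post's examples plus an assumed 'LocalOnly' default.
my_policy = {
    "Public": "Basic",
    "Family": "Medium",
    "Confidential": "LocalOnly",  # never uploaded to the net
}

def required_profile(classification, policy):
    # Data with no classification gets the most restrictive treatment.
    return policy.get(classification, "LocalOnly")

print(required_profile("Family", my_policy))     # a known tag
print(required_profile("HeartRate", my_policy))  # an unclassified tag
```

A device producer could fetch such a policy from a registrar at order time and pre-configure the "thing" accordingly.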

A 'Medium' protection level, for instance, would ensure strong password authentication, logging (see below), and access granted only to a few ID patterns. An ID pattern could be, for example, '*' or '*geyres@{yahoo,gmail}.com', meaning that only people in my company and members of my family would have access. I am not going into the details of how to define access rights or roles here; I am sure you get the idea.
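Such ID patterns are easy to evaluate with shell-style matching once the brace alternatives are expanded. The sketch below is one possible reading of the pattern grammar used in the example above (the `{a,b}` syntax is an assumption, not an established standard), with hypothetical IDs and domains for illustration.

```python
import fnmatch
import re

def expand_braces(pattern):
    """Expand {a,b,...} alternatives into plain shell-style patterns."""
    m = re.search(r"\{([^}]*)\}", pattern)
    if not m:
        return [pattern]
    head, tail = pattern[:m.start()], pattern[m.end():]
    return [
        expanded
        for alt in m.group(1).split(",")
        for expanded in expand_braces(head + alt + tail)
    ]

def id_matches(user_id, patterns):
    """True if the ID matches any pattern in the access list."""
    return any(
        fnmatch.fnmatch(user_id, p)
        for raw in patterns
        for p in expand_braces(raw)
    )

# Hypothetical access list: company staff plus family addresses.
allowed = ["*@mycompany.example", "*geyres@{yahoo,gmail}.com"]
print(id_matches("paul.geyres@gmail.com", allowed))  # family: granted
print(id_matches("stranger@other.com", allowed))     # denied
```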

The trick is to rely on third parties that would provide classification, population, protection profile, and policy services. Classifications would make it possible for both users and device producers to align on tags for the data, so that users can either validate predefined classifications or edit them based on standard, meaningful scales. Likewise, population names would standardize ways to list who, or which patterns, the user grants access to their data; protection profiles would give names to consistent sets of security functions; and policies would bundle the whole under the user's ID.

That way, the device producer would be able to pre-configure their "thing" for me at order time, just by checking my own requirements as already published under my policy via some policy registrar. I would be able to adjust further as needed through the device's interface, but I might not even have to.

Finally, as part of the protection profile, the device would be expected to enable, or to come with the possibility of, some form of logging; namely, I as a user would like to be able to track and review details of the transactions made. Typically, if my data have been accessed by third parties, I might like to be able to review the list of such accesses. Of course, the producer will need to make provision for such features and to ensure logs are securely written somewhere within their systems, but they do not need to build all the machinery. Many of the reporting services, for example, can be provided by a third party specializing in analytics and reporting dashboards.
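The user-facing side of such logging can be sketched as an append-and-review loop. The record fields, accessor names, and data tags below are assumptions made up for the example; a real producer would write these records into its own secured storage rather than an in-memory list.

```python
import json
import time

# In-memory stand-in for the producer's secured log store.
LOG = []

def record_access(accessor, data_tag):
    # One record per third-party access; fields are illustrative.
    LOG.append({"ts": time.time(), "who": accessor, "data": data_tag})

def review(data_tag=None):
    """Return logged accesses, optionally filtered by data tag."""
    if data_tag is None:
        return list(LOG)
    return [e for e in LOG if e["data"] == data_tag]

# Hypothetical accesses by third parties to my "thing" data.
record_access("insurer.example", "HeartRate")
record_access("family-app.example", "Location")
print(json.dumps(review("HeartRate"), indent=2))
```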

A lot more detail could be added to this high-level functional breakdown, but I hope I have given enough to articulate the level of challenge the IoT raises in terms of totally reversing traditional security architectures. Most likely, this is not what we are going to witness, and the offers by device producers will probably be a lot simpler. My point here is to raise the concern that if that happens, the underlying security model will be much poorer in features, and we will be bound to experience much worse conditions for security and privacy than if a route such as the one I have depicted were followed. Not because of poor technology, but because of poor security design.

Tuesday, April 28, 2015

IoT's Security - Part 1 - It's about true privacy...

The Internet of Things raises many questions about its security and how security should be embedded, or at least addressed. Many big names publish paper after paper on their vision of IoT security, the challenges it faces, and the various technologies or vendors that emerge and are likely to play a role or lead the game.

For an old-timer like me, what is striking is the assumption that because IoT is new and trendy, all related concepts would need to be new as well, or at least reconsidered, including the good old principles and methods that security folks have developed over the past decades. But of course, there is simply no reason for such a theoretical rupture. Just as there is no such thing as a new economy, because the economy is rooted in human action, there is no such thing as a new security, at least as long as machines remain Von Neumann ones, because security is rooted in data and people, not in technology.

For a "thing" on the Internet, as for anything else, the question of security is that of ensuring that only whoever has authorized access to given data actually has it, nothing more and nothing less. Every word in such a definition matters, however, so let's make a quick review. Many would define security by the three "CIA" initials: confidentiality, integrity, and availability. Funny that a spying agency full of secrets should have picked those three for its own acronym…

But all three are encompassed within the concept of access: integrity is about change, but changing data assumes it is disclosed and available. The 'who' and the 'which' make up the core of the famous RBAC concept (Role-Based Access Control): Access is Controlled Based on the Role you are assumed to play at runtime; for instance, accounting data is only accessible to accounting people in the company. 'Authorized' is key and twofold: it assumes someone grants you some role(s) consistent with your job position, and it also assumes the machine, the "thing", is coded so that it enforces fully, but only, the access rights entitled to those role(s). More subtly, 'only … actually has' brings in the negative dimension of security: the need that no one else has access while I truly do.
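The RBAC logic just described fits in a few lines of code. This is a minimal sketch of the general concept, not any product's implementation; the roles, users, and permission names are invented for the example.

```python
# Role -> permissions it carries (illustrative names).
ROLE_PERMISSIONS = {
    "accounting": {"read:ledger", "write:ledger"},
    "employee": {"read:directory"},
}

# User -> roles granted on behalf of the company (illustrative names).
USER_ROLES = {
    "alice": {"accounting", "employee"},
    "bob": {"employee"},
}

def is_authorized(user, permission):
    """Grant access only if one of the user's roles carries it;
    anyone or anything unknown is denied (the negative dimension)."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_authorized("alice", "write:ledger"))  # accounting role: granted
print(is_authorized("bob", "write:ledger"))    # no such role: denied
```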

Finally, all this logic is expected to be 'ensured', which means that from the theoretical RBAC model down to the device, all due actions are taken during the design and build process to avoid the final "thing" running a different model. In other words, controls are enacted all along the development phases to get reasonable assurance that neither a bug nor an undocumented feature ends up in the "thing" that could defeat or circumvent the securing RBAC model, usually called a security model.

Such principles emerged long ago; a famous example of a historic standard developing these concepts, together with the concepts of security functions (identification, authentication, access control, audit, imputation) and security assurance, is the ITSEC of 1991, which evolved into the still-alive Common Criteria. They remain the root of all security controls today, and I would be very surprised if IoT had anything so specific as to turn this rule upside down.

The attentive reader will have noticed that I have not mentioned privacy so far. Though rarely seen that way, privacy is in fact a generalization of security, where the main difference lies not so much in the controls as in the governance and the actors. The definition I gave of security, with RBAC at its core, is well suited to people within a company or organization. Indeed, in such a closed environment, everyone has a role according to their job; the data belongs to the company, and someone can grant or authorize you a role on behalf of the company. Privacy is different in that the data belongs to me, and I want, or would want, to be the one granting the roles or access rights. But once I am fine with the RBAC model, privacy is nothing but security: the controls and intricacies are the same.

Today, in reality, this requirement that individuals be in a position to grant RBAC rights over their private data, which is hardly ever implemented, has been offset by tons of regulations, which all try to provide static authorization models as substitutes for actual, dynamic citizen authorization. In other words, privacy is security where the rules come from laws instead of company policies. Many privacy issues have their roots in this inability to properly empower data owners to grant their own RBAC rights.

This angle on privacy versus 'traditional' security is a key step in moving our logic further, toward the IoT of tomorrow. Consider the BYOD issue: today, should my computer only be sourced from my company, or should my company accept mine, provided it is secure? The point regarding RBAC is: how can a thing I own be made to comply with an RBAC model that my company requires? There are two cases for my computer: either it is owned by my company, which entitles me to use it, or it is mine and I must agree to abide by the company's rules, at least while working. It will be the same for all of IoT: either a device is mine, or it comes to me from work, or from some form of work, e.g. a non-profit I have contracted with. My point is to highlight the dichotomy between security (company focus) and privacy (individual focus).

Please note that I have not considered the BYOD question from the technical perspective. That is, at this point, the question is not that of the possible vulnerabilities that come along with BYOD, with your own device. There can be vulnerabilities anywhere, and I will cover that topic in due time. What I would like to make clear at this point is that the security model that BYOD, and thus IoT, implies cannot be ignored or dismissed if the IoT is ever to be secure. Our view so far can be summed up as: IoT security is a privacy issue where the data to be protected needs to be made explicit ahead of runtime, the user should be empowered to grant access rights to such data, and such rules should be assured to be correctly implemented, without leading to vulnerabilities or hidden backdoors.

This may seem pretty basic or obvious, but it actually raises significant challenges, because it means that the companies designing IoT devices have to build in security features that rely on a security model they do not control end to end. By contrast, access to a SaaS application would be an example where the provider has end-to-end control of the security model, and the user can only abide by it.

In a next development, let's try to clarify what this means in terms of the architecture of the security model that IoT implies…

Tuesday, April 21, 2015

Untel secures my coffee

This morning at the RSA Conference I attended the talk by the head of Intel Security, formerly McAfee, who built his speech around promoting big-data statistics as the next paradigm, the next nirvana of cybersecurity.
He drew a comparison with baseball, where some ten years ago the Oakland Athletics made a name for themselves and nearly won the championship thanks to an entirely new use of statistics in their sport.
His thesis, then, is that we should expect similar progress in our trade. It reminds me of all those economists who believe the economy and the world can be reduced to numbers. In both cases, they forget to ask whether the field they are tackling lends itself to statistics. And in both cases, the answer is no.
Granted, we may hope for results in attack detection. And even then, you need some idea of the attack patterns to search with any relevance. That is hardly statistics; it is mining through masses of data. Statistics is only a tool, not the solution.
But that is not the heart of the matter. If big data can help with detection, it can do nothing to reduce the problems upstream.
We keep forgetting that attackers can only get through a piece of software when it has bugs, and that once a bug is found, its exploitation is systematic. In other words, the area where the real security challenge lies, namely the development of secure applications and systems, offers no purchase to statistics. So announcing statistics as the next age of cybersecurity is nothing but a ridiculous joke.

Saturday, March 14, 2015

Freedom of expression in the workplace

In France we take pride in claiming that freedom of expression is absolute, as the Charlie phenomenon recently suggested. By extension, then, freedom of expression in the workplace could only be full and entire, and even asking the question would border on blasphemy.

Yet anyone with a little professional experience knows very well that every company has its codes and rules, and that you do not talk to your boss or your colleague the way you would to a stranger in the street, at least not if you have any regard for your career.

One may then wonder about this apparent peculiarity of the workplace. Does it mean the company is a kind of prison of thought, a domain where, unlike all others, you must extinguish your opinions and feelings in the very absolute intolerance we otherwise decry?

I think not, and I believe we must first revisit the very concepts of freedom and freedom of expression to better understand what happens in companies, as elsewhere.

As the saying goes, my freedom ends where yours begins. From there, many philosophers have examined its nature and limits, and many could survey and synthesize them far better than I. Yet one branch of Western thought, sadly all too forgotten nowadays, offers an articulation of freedom, and of its anchoring in our social reality, that is both extremely simple and profound: the classical liberals of the Enlightenment.

John Locke is thus known as one of the first to have articulated the link between freedom, law and even economics in a simple and realistic way. Many others followed (Boisguilbert, Destutt de Tracy, Tocqueville, Herbert Spencer, etc.), and I will confine myself to one of their contemporary heirs, Henri Lepage, who offers a brilliant definition of freedom: "the right to do what one desires with what one has". (1)

In other words, freedom is total at home, but conditional on the neighbour's consent once at his place. This matches our spontaneous social behaviour: invited to someone's home, we no longer behave as we do at our own, but "properly", and indeed as our host expects. Freedom is never absolute in this world, and it is property that materializes its boundaries; that is a fundamental condition of social life.

But then what about freedom of expression? Is it not supposed to be absolute? Why would it be any different from our freedom plain and simple?

In fact, freedom of expression is no exception, contrary to the many voices that proclaim otherwise. Two simple examples suffice. The guest must respect his host's rules, as we have seen. Go to a Muslim neighbour's home and proclaim your islamophobia there, and you expose yourself to a fully justified eviction. And in a private cinema, shouting "fire" to deliberately spread panic exposes you just as surely to an expulsion that no one could denounce.

Clearly, the abuse of freedom of expression has consequences and limits. But this "abuse" is not arbitrary: the owner of the premises decides what counts as abuse. In our case, that means the company, or its executives. Freedom of expression in the workplace is not a concept without boundaries, and those boundaries lie in the hands of those who make the rules. Here we find again the codes mentioned at the start.

But there is a second dimension to freedom of expression. Free expression leaves traces in others. It thus forges the opinion others form of us, our reputation. We often hear of a "right to one's image", but that makes no sense, for our image is made by everyone else in reaction to our deeds and words. And the same of course holds in the workplace, in our daily and continuous relationship with our colleagues.

In conclusion, there are two brakes on total freedom of expression in the workplace: our reputation and the specific internal codes. But this is in fact true of freedom of expression within any private social organization, be it a condominium, an association or a company. All freedom is a matter of balance, and expression is no exception.
  (1) "In this Lockean perspective, it is no longer possible to reduce freedom to the mere 'right to do what one desires'. To admit that one may do whatever one wants is indeed to deny the property of others, and therefore to violate their freedom. We fall back on the problem posed by Hobbes. The two terms are contradictory. Unless freedom is defined as 'the right to do what one desires with what one has' (more exactly: with what one is 'naturally' entitled to, what one has legitimately appropriated, or what has been legitimately transferred to one).", in « Libéralisme et Propriété Privée », Henri Lepage, in « Libres ! », Collectif La Main Invisible, 2012


Saturday, January 24, 2015

Internet Security and Freedom Intertwined - Foreword

Benjamin Franklin is famous for the following quote on freedom versus security in a civilized society: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety", which, in a more modern wording, reminds us that "Those Who Sacrifice Liberty for Security Deserve Neither".

If this quote is famous for its relevance to political science, why should it make up the first lines of a compilation of papers on Internet security? Because the whole theme of this series of papers is precisely to make the point that the Internet will not last if insecure, and that its security will require freedom in its very design. In other words, if we bet that the Internet will last whatever decisions people, companies or governments make, the Internet will eventually be both secure and free, or else another Internet will loom as a free and secure substitute.

This is not a mere wish, a dream or a prayer that some lunatic hopes will come true, as if granted by some technology god. Quite the contrary: it is a claim rooted in realism. The Internet is now, and will remain, a net where mostly people interact. Yes, today it is still heavily used and kept busy by traffic generated by companies, state organizations and e-commerce activity. But its actors are individuals, and more and more, with Facebook as a major hint, it will become our second life, for us all.

Eventually, every one of us will interact to some degree with anyone else thanks to the net. The Internet is not the realm of collective entities, but that of individuals. It is thus the next free market; it will behave as such and will follow the related economic and social rules.

And because it is made of individuals interacting, like the next free market it will see similar mechanisms emerge: mechanisms that emerged in human society to enable safe and free interactions. For why should I interact with you if I see you as a danger, unless I am forced to? The use of the Internet does, and increasingly will, rely on mutual trust. I buy on Amazon because, for various reasons, I trust their ability to meet my expectations while not forcing me to buy in any way. Trust is an expression of free will, of our individual freedom to adopt or not, the Internet like anything else. Trust in turn assumes security: I need to identify and authenticate my counterpart, and I may need confidence in their dependability and honesty, their ability to ensure confidentiality, or the assurance that they are not tampered with by some third party.

Nevertheless, security and interactions do not mean exactly the same online as in the real world, and we need this awareness to be able to build security right. IT systems and the virtual world have a completely different set of features, from which many concepts stem. For instance, IT risks do not follow the same logic as physical ones, mainly because IT does not have a stochastic nature. The challenge for security professionals is thus to find the right balance between IT realism and social realism.

Freedom and liberty, at least their social principles, have proven to be the best adapted to making a just and prosperous society, even if many in the political world may dispute this. There is simply no alternative but to have the Internet adopt them entirely, to embrace them fully, though with all the differences there are between normal life and the virtual one, if the Internet is to continue to grow and become the decisive driver toward a happy and better future for the human race.

The objective of the series of articles to come is precisely to explain and illustrate the importance of a secure and free Internet, while articulating how its distinctive features and its virtual nature require our social mechanisms to be revisited to that end. There are many facets to this umbrella objective; the series promises to be fruitful.

Sunday, January 4, 2015

Employee share ownership: a good idea that isn't

Employee share ownership is fashionable among liberals. It is supposed to carry an entrepreneurial dynamic into the middle class, to make capitalism and its business logic accessible to the greatest number. Like the auto-entrepreneur status, it claims to democratize the company and re-motivate the masses. Above all, it is supposed to be an excellent way to commit a company's employees, often in quite large companies, to its economic success. Executives, or at least the designers of this status, imagine retaining and energizing employees who might otherwise ease off or look at whether the grass is greener elsewhere. A miracle status?
Having seen it from the inside, in a large IT services company which, as it happens, has just been bought out, I am for my part convinced that this status is in fact harmful in the long run, for everyone. Imagine, for instance, a group where the employees' share of the capital is so large that their collective sits on the board of directors to block any option deemed likely to impose too much change on the employees. Worse than a traditional union.
I must say I owe my conviction largely to Pascal Salin, who devotes several pages to this subject in his excellent book « Libéralisme », explaining that, contrary to received wisdom, employee share ownership is not the liberal panacea many imagine.
Pascal Salin's analysis is fairly simple and rests on comparing the entrepreneurial logic of the two statuses, employee and shareholder. For we must remember that we are all entrepreneurs, even as employees, and that the many possible statuses are merely free, individual answers to the risk-taking each of us faces every day.
An employee is thus an individual entrepreneur who has chosen to sign an exclusive contract with a company. He promises to supply it with work and know-how in exchange for a stable, guaranteed income. His bet is on security, to which he sacrifices the possibility of extra gain should the company succeed beyond expectations. The shareholder reasons exactly the other way round. He invests for the long term; he sacrifices the safe, regular income to bet on large dividends once the company is mature and prosperous. The employee favours the short term; the shareholder is in it for the long term.
One will object that employee shareholding precisely allows one to benefit from both the short and the long term. Except when it comes to arbitrating and making strategic decisions. When a company must close a site or lay people off, who will make the decision? The shareholder, or the employee, perhaps the very one concerned?
I find this status incoherent: if you believe in your company's project, have the courage to be a shareholder. If you remain an employee, it is because you do not really believe in it.
Employee share ownership is a product of social-democratic clientelism, which wants to give the people the illusion that they could become entrepreneurs and rich without having to pay the price, or even by making the real entrepreneurs, the traditional shareholders, pay that price. One more socialist masquerade, as it were.