Related Projects


Redundancy and diversity in security


Keywords: fault tolerance, protective redundancy, diversity, security


Much research in computer security is inspired by a goal of providing "complete" security: algorithms or mechanisms that deterministically prevent some set of undesired security events. However, this is known to be generally unattainable, if we consider the necessary role of people in properly deploying and using security mechanisms, the potential for error in design and implementation, and the widespread reliance on off-the-shelf products that are riddled with security vulnerabilities. All this suggests the need for 'fault tolerance' (or 'protective redundancy') for security, as is widespread outside the computing world, e.g. in the form of "defence in depth". This approach accepts that all defence mechanisms, human or machine-implemented, may be affected by systematic defects and/or random errors in operation. Designers will thus provide extra, redundant defence mechanisms, to put extra hurdles between the attackers and their goals, and to increase the chances of detecting, and reacting to, attacks in time to repel them or contain their consequences.

This view, advocated for a long time by a few researchers [2], has slowly gained support in recent years, with major projects in both Europe (e.g. MAFTIA) and the U.S. dedicated to fault tolerance for security. Since vulnerabilities are built into the software applications and platforms used, diversity is essential: for instance, replicating a service on diverse servers would reduce the risk of both replicas being taken out at the same time by the same attack.

However, the usual difficult questions apply: how much will a certain form of fault tolerance improve security? And thus, which forms of fault tolerance will be effective and cost-effective to protect a certain service, in a certain threat environment? Which ways of pursuing diversity will be most effective? We contend that the only rational way of reasoning about these questions is in terms of probabilities. It may be difficult to assign probabilities to events of interest, but given the uncertainty that surrounds existing vulnerabilities, attacker behaviour and defenders' errors, any attempt to avoid probabilities will lead either to arbitrary simplifications or inconclusive hand-waving.
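The gain from diverse replication can be made concrete with a toy probabilistic model. The sketch below uses hypothetical numbers (the probabilities are illustrative assumptions, not measurements) to contrast identical replicas, diverse replicas under an optimistic independence assumption, and diverse replicas with correlated failures:

```python
# Illustrative sketch (hypothetical numbers): probability that a given
# attack compromises a replicated service, under a simple model.

p_a = 0.10  # assumed probability the attack succeeds against version A
p_b = 0.10  # assumed probability the attack succeeds against version B

# Identical replicas share their vulnerabilities: an attack that succeeds
# against one copy succeeds against both (common-mode failure).
p_identical = p_a

# Diverse replicas, *assuming* the two versions fail independently --
# an optimistic assumption that real systems rarely satisfy.
p_diverse_independent = p_a * p_b

# Diverse replicas with positively correlated failures, modelled here by
# a hypothetical joint-compromise probability above the independent value.
p_diverse_correlated = 0.03  # assumed; note 0.03 > p_a * p_b = 0.01

print(f"identical replicas:   {p_identical:.3f}")
print(f"diverse, independent: {p_diverse_independent:.3f}")
print(f"diverse, correlated:  {p_diverse_correlated:.3f}")
```

Even the pessimistic correlated figure improves on identical replication here, but by how much depends entirely on the assumed joint probability, which is exactly the kind of quantity the probabilistic questions below concern.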

The DIRC position paper below [1] summarises the state of the debates in the security community about fault tolerance and about probabilistic reasoning, and states the case for a probabilistic approach. It then proposes an example application: choosing a combination of intrusion detection systems, an area where diversity is often advocated as "obviously beneficial" (even in product advertisements) without addressing the problem of assessing this benefit or comparing alternative forms of diversity.
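The intrusion-detection example can be sketched in the same probabilistic terms. The detection rates below are hypothetical assumptions chosen for illustration; the point is how correlated misses erode the benefit that an independence assumption would predict for a "1-out-of-2" pairing (an alarm from either IDS counts as a detection):

```python
# Illustrative sketch (hypothetical detection rates): combining two
# diverse intrusion detection systems in a 1-out-of-2 configuration.

d1 = 0.80  # assumed detection probability of IDS 1 for some attack class
d2 = 0.70  # assumed detection probability of IDS 2

# If the two IDSs missed attacks independently, the pair would detect:
p_pair_independent = 1 - (1 - d1) * (1 - d2)

# Diverse IDSs often miss the *same* attacks; with a hypothetical
# joint-miss probability higher than independence would predict:
p_joint_miss = 0.15  # assumed; note 0.15 > (1 - d1) * (1 - d2) = 0.06
p_pair_correlated = 1 - p_joint_miss

print(f"pair, independent misses: {p_pair_independent:.2f}")
print(f"pair, correlated misses:  {p_pair_correlated:.2f}")
```

A full assessment would also count the combined false-alarm rate, which rises in a 1-out-of-2 configuration and forms the cost side of the comparison the paper argues for.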


Theme: Diversity


[1] B. Littlewood, L. Strigini. "Redundancy and diversity in security", in Proc. ESORICS 2004, 9th European Symposium on Research in Computer Security, Sophia Antipolis, France, Springer-Verlag, Lecture Notes in Computer Science 3193, September 2004, pp. 423-438.

Other References

[2] B. Randell and J. E. Dobson, "Building Reliable Secure Computing Systems Out Of Unreliable Insecure Components", in Proc. 1986 IEEE Symposium on Security and Privacy, Oakland, California, 1986, pp. 187-193.


Lorenzo Strigini


Last Modified: 10 August, 2005