Full title

Analysing structure in dependability cases

Keywords

Dependability cases, GSN, Air Traffic Control, structure, analysis

Summary

Argumentation is used to assure a third party that a system is dependable. Convincing arguments of system dependability are now required by a number of relevant standards, such as the UK Def-Stan 00-56 (Safety Management Requirements for Defence Systems). This standard requires the production of a safety case defined as

“a structured argument, supported by a body of evidence, that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given environment.”

The evidence used to support claims of system dependability is varied: it may comprise objective measurements as well as subjective beliefs, and it may be expressed qualitatively or quantitatively. Standards such as the UK Def-Stan 00-56 require that diverse evidence be provided that safety requirements have been met. This aims to reduce the likelihood that the overall safety argument is compromised by errors or uncertainties in individual pieces of evidence or in the reasoning applied to them. Typically, the safety case includes direct evidence relating to the system (e.g. formal arguments, validation testing etc. in the case of software), as well as process-related indirect evidence, intended to strengthen confidence in the direct evidence.

Convincing a third party that an argument of dependability is adequate is a difficult task. For this reason, logical or quantifiable arguments with a formal, verifiable meaning are often preferred to descriptive arguments that convince through their clarity, exhaustiveness and depth. In software, quantitative data is usually based on statistical testing; in relation to human performance, it is common to incorporate quantitative human error estimates, elicited from databases of basic human error probabilities, within a probabilistic safety assessment. However, such quantities are difficult to measure: in practice it is often impossible to quantify the undependability of a system that is yet to be fielded and has only been tested in a limited, possibly simulated, set of conditions. Thus it is important to make the best use of both qualitative and quantitative approaches.

DIRC at the Universities of York and Newcastle has produced a conceptual toolset facilitating the understanding, construction and assessment of dependability arguments. The result is intended to be valuable to regulators as well as to the safety engineers and human factors engineers involved in the development of safety cases. The conceptual toolset has evolved in the context of case studies from the aviation and medical domains.

The approach [5] begins with a general structure of arguments and uses it to identify the aspects of an argument that influence its quality, including the uncertainty inherent in the evidence, the uncertainty inherent in the argument itself, the coverage of the evidence, and the relationships and dependences among the pieces of supporting evidence.

Two structural characteristics - the depth and the breadth of arguments - proved to be particularly suitable for the assessment and improvement of the quality of dependability arguments. The depth of an argument relates to the rigour of the argument, while the breadth of an argument relates to uncertainty and coverage.
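The depth and breadth of an argument can be made concrete by viewing the argument as a tree of claims refined into sub-claims and ultimately supported by evidence. The following is a minimal illustrative sketch (not the project's actual toolset); the node names and the example argument fragment are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in a simplified GSN-style argument tree (illustrative only)."""
    name: str
    children: list = field(default_factory=list)

def depth(node):
    """Depth: number of refinement levels from the top claim down to evidence,
    a rough proxy for the rigour of the argument."""
    if not node.children:
        return 1
    return 1 + max(depth(c) for c in node.children)

def breadth(node):
    """Breadth: number of leaf items (pieces of evidence) supporting the claim,
    a rough proxy for coverage."""
    if not node.children:
        return 1
    return sum(breadth(c) for c in node.children)

# Hypothetical fragment of a safety argument.
arg = Node("System is acceptably safe", [
    Node("All hazards mitigated", [
        Node("Hazard H1 mitigated", [Node("Test evidence")]),
        Node("Hazard H2 mitigated", [Node("Formal proof"), Node("Field data")]),
    ]),
    Node("Process complies with standard", [Node("Audit report")]),
])

print(depth(arg), breadth(arg))  # 4 refinement levels, 4 pieces of evidence
```

Under this simple reading, deepening the tree corresponds to arguing more rigorously, while widening it corresponds to supplying more (and more varied) evidence.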

Appeal to barriers [1,2] typically forms part of the mitigation argument, intended to demonstrate that either a hazard’s likelihood of occurrence or the severity of its consequences has been sufficiently reduced. Making this use of barriers explicit within the structure of the argument can help in analysing and assessing how the barriers are implemented in the actual system (or a previous version of it), and whether there are any potentially weak spots, such as single barriers for high-risk hazards, or supposedly independent barriers for which operational feedback provides evidence of common failure behaviour [3]. Both of these issues - the structure of arguments and the structure of barriers - have been explored most recently using as an example the public-domain Reduced Vertical Separation Minimum analysis published by EATMP (the EUROCONTROL Programme for Performance Enhancement in European Air Traffic Management). To perform the analysis, the structure of the argument and the use of barriers were modelled explicitly with the aid of the Goal Structuring Notation (GSN).
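Once barriers are recorded explicitly against each hazard, simple structural checks for weak spots become mechanical. The sketch below illustrates one such check (high-risk hazards relying on a single barrier); the hazard names, risk levels and barrier names are invented for illustration and do not come from the EATMP analysis:

```python
# Hypothetical barrier model: each hazard maps to its assessed risk level
# and the list of barriers claimed to mitigate it.
hazards = {
    "loss of vertical separation": {"risk": "high", "barriers": ["ACAS"]},
    "altimetry system error": {"risk": "high",
                               "barriers": ["height monitoring", "crew cross-check"]},
    "radio failure": {"risk": "medium", "barriers": ["backup frequency"]},
}

def single_barrier_high_risk(hazards):
    """Return the high-risk hazards protected by fewer than two barriers -
    one of the 'weak spot' patterns discussed in the text."""
    return [name for name, data in hazards.items()
            if data["risk"] == "high" and len(data["barriers"]) < 2]

print(single_barrier_high_risk(hazards))  # ['loss of vertical separation']
```

A check for claimed-independent barriers with observed common failure behaviour could be written in the same style, given operational feedback data.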

The use of diverse or multi-legged arguments as a means of increasing the confidence to be attached to dependability arguments is frequent practice in safety-critical industries. For example, one leg may contain an argument about the dependability of the system backed by direct evidence, such as operational testing. The other leg may then be concerned with demonstrating that the evidence produced in the first leg is trustworthy, or that the overall design process adhered to a relevant, industry-specific standard. This second type of evidence is indirect in that it does not make any direct claim about product quality. Whether assumptions of diversity can be made in a specific argument is not thoroughly understood (an issue considered in more detail in work done at City).

Further work has involved the quantification of the amount of reuse in hazard analyses [4]. Hazard analyses typically provide evidence to support the claim that all relevant hazards have been sufficiently mitigated. Inappropriately reused mitigation arguments may have implications for the quality of the overall argument, for example by propagating errors and inconsistencies, or by inducing a misplaced confidence in the coverage of the argument through the creation of artificial argument diversity.
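One simple way to quantify reuse of this kind is to measure how many mitigation entries in a hazard log duplicate another entry. The sketch below is an illustrative measure only, not the metric defined in [4], and the hazard log contents are hypothetical:

```python
from collections import Counter

# Hypothetical hazard log: each hazard maps to its mitigation argument text.
hazard_log = {
    "H1": "Operator training reduces likelihood of mis-selection.",
    "H2": "Operator training reduces likelihood of mis-selection.",
    "H3": "Interlock prevents simultaneous activation.",
    "H4": "Operator training reduces likelihood of mis-selection.",
}

def reuse_fraction(log):
    """Fraction of hazard entries whose mitigation text is identical to
    at least one other entry's - a crude indicator of reused arguments."""
    counts = Counter(log.values())
    reused = sum(n for n in counts.values() if n > 1)
    return reused / len(log)

print(reuse_fraction(hazard_log))  # 0.75: three of four entries share identical text
```

A high value does not by itself condemn the analysis - some mitigations legitimately apply to many hazards - but it flags entries that deserve scrutiny for propagated errors or artificial diversity.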

Key References

 

[1] Harms-Ringdahl, L. (2003) Investigation of barriers and safety functions related to accidents, Proceedings of the European Safety and Reliability Conference ESREL 2003, Maastricht, The Netherlands

[2] Hollnagel, E. (1999) Accidents and Barriers. In J.-M. Hoc, P. Millot, E. Hollnagel and P.C. Cacciabue (Eds.), Proceedings of Lex Valenciennes, Volume 28, pp. 175-182, Presses Universitaires de Valenciennes.

[3] Smith, S.P., Harrison, M.D. and Schupp, B.A. (2004) How explicit are the barriers to failure in safety arguments?, Computer Safety, Reliability, and Security (SAFECOMP'04), M. Heisel, P. Liggesmeyer and S. Wittmann (Eds), Lecture Notes in Computer Science Volume 3219 pp. 325-337, Springer.

[4] Smith, S.P. and Harrison, M.D. (2005) Measuring Reuse in Hazard Analysis. Reliability Engineering and System Safety, Volume 89, pp. 93-104, Elsevier.

[5] Sujan, M.A., Smith, S.P. and Harrison, M.D. (2006) Qualitative Analysis of Dependability Argument Structure. In C. Jones, D. Besnard and C. Gacek (Eds.), Structure Book.


Author

Mark Sujan and Michael Harrison (Newcastle)

 

 
Page Maintainer: webmaster@dirc.org.uk. Last Modified: 12 August, 2005.