
 

Risk Assessment Models and Evolving Approaches

Dr. David Brewer

This is an historic paper given at an IAAC workshop at Senate House, London, in July 2000. Some three years after this paper was written we totally revised our approach to risk assessment, basing it on an analysis of events and impacts, from which the traditional threats, vulnerabilities and assets just fall out of the analysis.

Introduction

This paper builds on the author's experience to discuss emerging approaches to risk assessment and the problems and opportunities that are presented when applying risk assessment methods to increasingly complex and interdependent infrastructures.

So what is risk assessment?

Risk assessment, as defined by BS 7799:1999 Part 1, is "assessment of threats to, impacts on and vulnerabilities of information and information processing facilities and the likelihood of their occurrence". This rather unwieldy definition translates into risk being some function of threat, asset and vulnerability. This concept has been around for at least two decades. Twenty years ago the British Government first took a serious interest in computer security. Since then, computer systems have evolved from being system-centric mainframes to user-centric mobile web-based technologies. There has been a shift of philosophy from risk avoidance to risk management, and there has been a change of emphasis in consideration of the threat. It is interesting that our concept of risk has not changed at all over this time.

Four generations of technology

Whilst our concept of risk has not changed very much, our approach and the technology that we use have. At the present time we can categorise risk assessment technology as belonging to four generations. The first consists of paper-based methods; CRAMM and "Memo 10", for example, started here. These methods deal with risk in simple terms (high, medium and low), are ignorant of specific software vulnerabilities, deal in generic network topologies and use look-up tables to calculate the risk. Second generation tools are merely a software version of their first generation counterparts. Third generation tools (such as "Expert") make use of vulnerability-safeguard libraries that are regularly updated. They allow network scanning and use more sophisticated algorithms. Fourth generation tools will allow the effectiveness of safeguards to be determined and the risk of different networks to be compared.

What are we up against?

Figure 1 shows the vulnerability and risk measures for an NT4.0 laptop, with MS Office 97, web authoring tools, a network scanner, the ability to connect to a corporate network over a VPN and dial-up access to an ISP. There are over 100 vulnerabilities. A damage score of 10 equates to gaining root access to the laptop; that includes stealing it and gaining access to any FAT drive. The risk measures were determined by the successive application of safeguards, starting with the most effective in terms of risk reduction. Note the fairly rapid reduction of risk (by 50%) on application of the first few dozen safeguards, the plateau, and the inability of any measure to reduce the risk to zero. The graph shows that too much security achieves nothing save frustrating the user, which is why security often has such a bad reputation. The trick is therefore to select the most effective safeguards. To do that, we need to be able to measure risk.
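
The shape of that curve can be reproduced with a simple greedy selection: repeatedly apply whichever remaining safeguard removes the most risk and watch the marginal benefit tail off. The sketch below is illustrative only; the initial risk and per-safeguard reductions are invented, not the data behind Figure 1.

    # Illustrative sketch only: hypothetical safeguards and risk reductions,
    # not the measurements plotted in Figure 1.
    def greedy_safeguard_curve(initial_risk, reductions):
        """Apply safeguards most-effective-first and return the residual
        risk after each application."""
        ordered = sorted(reductions, reverse=True)  # most effective first
        curve = [initial_risk]
        risk = initial_risk
        for r in ordered:
            risk = max(0.0, risk - r)
            curve.append(risk)
        return curve

    # A large initial risk and rapidly diminishing per-safeguard benefit:
    # the curve falls quickly, plateaus, and never reaches zero.
    print(greedy_safeguard_curve(100.0, [30, 15, 8, 4, 2, 1, 0.5]))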

Values can be assigned to threat, vulnerability and asset. A measure of risk can then be determined by calculating their product. By assigning values to safeguards, using the same parameters that are used to measure threat, vulnerability and asset, the application of a safeguard will mathematically reduce the risk measure (see Figure 2). Figure 2 also shows that there are three types of safeguard: threat-reducing safeguards (e.g. firewalls, locked doors, safes and personnel vetting), vulnerability-reducing safeguards (e.g. procedures, hot-fixes and service packs) and asset value-reducing safeguards (e.g. back-ups and encryption). A separate calculation has to be made for each valid combination of threat, vulnerability, asset and safeguard, and the results added together to determine the residual risk.
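
As a minimal sketch of that calculation (the scales, factors and example figures below are assumptions, not the parameters used by CRAMM or Expert), each valid combination contributes threat x vulnerability x asset to the total, and a safeguard scales down whichever of the three factors it addresses:

    # Minimal sketch of risk = threat x vulnerability x asset, summed over
    # all valid combinations; all numbers here are hypothetical.
    def residual_risk(combinations):
        """Each combination is (threat, vulnerability, asset, safeguards),
        where safeguards is a list of (kind, factor) that multiply down
        the parameter of the corresponding kind."""
        total = 0.0
        for threat, vuln, asset, safeguards in combinations:
            for kind, factor in safeguards:
                if kind == "threat":           # e.g. firewall, locked door
                    threat *= factor
                elif kind == "vulnerability":  # e.g. hot-fix, service pack
                    vuln *= factor
                elif kind == "asset":          # e.g. back-up, encryption
                    asset *= factor
            total += threat * vuln * asset
        return total

    combos = [
        (0.8, 0.6, 10.0, [("vulnerability", 0.3)]),          # patched service
        (0.5, 0.9, 10.0, [("threat", 0.4), ("asset", 0.5)]),  # firewall + back-ups
    ]
    print(residual_risk(combos))  # residual risk after the safeguards are applied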

The threat value is a function of whether or not the threat agent has physical and/or electronic access to the asset, the capability and motivation of the agent and the likelihood of an attack. Whether an attacker is highly focused on violating a particular asset or has a general interest in violating any asset may be another consideration. However, that factor is best modelled by associating the threat only with the assets that we suspect the attacker is interested in. Nevertheless, these parameters allow us to distinguish between natural threats (fire, flood, Murphy's Law etc.) and human threats (internal, external, hostile or non-hostile in intent, premeditated, opportunistic or accidental). The accompanying slides show the results of a USAF study on the threat values.
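
A threat value along those lines might be scored as a product of the factors listed above; the 0-1 scales and the example figures below are assumptions for illustration, not the values from the USAF study.

    # Hypothetical scoring of a threat agent; scales and weighting are assumed.
    def threat_value(physical_access, electronic_access,
                     capability, motivation, likelihood):
        """All inputs in [0, 1]; access is taken as the better of the two routes."""
        access = max(physical_access, electronic_access)
        return access * capability * motivation * likelihood

    # e.g. an external, hostile, premeditated attacker with electronic access only
    print(threat_value(0.0, 1.0, capability=0.7, motivation=0.9, likelihood=0.5))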

Tools such as CRAMM and Expert parameterise asset values through a separate consideration of the asset's value in terms of losing its confidentiality, integrity or availability (CIA). This is necessary as different safeguards are required (apart from the ubiquitous access control) to protect the asset against losing each CIA facet. However, ultimately it is necessary to value an asset in terms of a protective-marking scheme (such as the DTI Scale, see http://www.dti.gov.uk/industry_files/pdf/confidential.pdf). Indeed, in order to be able to compare the risks to different information assets and different threat environments it will be necessary to agree a common standard for evaluating threats and asset values.
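
One way to hold both views of an asset's value is sketched below, assuming a simple 0-3 scale per CIA facet and an invented worst-case mapping onto dtiSEC-style markings; neither the scale nor the mapping is taken from CRAMM, Expert or the DTI document.

    # Sketch only: the 0-3 facet scale and the marking thresholds are invented.
    CIA_FACETS = ("confidentiality", "integrity", "availability")

    def protective_marking(cia_values):
        """Take the worst-case facet value and map it to a marking."""
        worst = max(cia_values[f] for f in CIA_FACETS)
        return {0: "dtiSEC0", 1: "dtiSEC1", 2: "dtiSEC2"}.get(worst, "dtiSEC3")

    payroll = {"confidentiality": 2, "integrity": 3, "availability": 1}
    print(protective_marking(payroll))  # -> dtiSEC3, driven by the integrity facet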

Vulnerabilities, as suggested earlier in the discussion of Figure 1, could be valued in terms of the damage that might ensue if the vulnerability were to be exploited. The expertise and knowledge required to exploit the vulnerability are other factors.
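
A vulnerability score on that basis might combine the potential damage with how easy the exploit is. In the sketch below the 0-10 damage scale follows Figure 1, but the ease factors and the way they are combined are assumptions.

    # Hypothetical vulnerability scoring: damage on the 0-10 scale of Figure 1,
    # discounted by the expertise and knowledge an attacker needs.
    def vulnerability_value(damage, expertise_needed, knowledge_needed):
        """damage in [0, 10]; expertise/knowledge in [0, 1], 0 = freely exploitable."""
        ease = (1.0 - expertise_needed) * (1.0 - knowledge_needed)
        return damage * ease

    # e.g. a root-level hole that needs little skill but some inside knowledge
    print(vulnerability_value(10, expertise_needed=0.2, knowledge_needed=0.5))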

BS 7799 - Information Security Management Systems (ISMSs)

There are two parts to BS 7799:1999. The first part of the standard is a code of practice that presents a wide range of best practice security controls, covering policy, legal, organisational, personnel, physical and IT issues. Part 2 presents a specification for an Information Security Management System (ISMS) and in so doing tells an organisation what to do to establish an acceptable level of security.

Figure 3 shows a process model of an ISMS. Note the mandatory requirement to conduct a risk assessment and the fact that it is contained within two feedback loops. The inner loop ought to be triggered by the receipt of vulnerability-safeguard database updates, changes in network topology and software upgrades. Errors in the risk analyses, mis-configuration of firewalls and other security devices, and inadequacies in procedures ought also to be trapped by this feedback loop. A more detailed illustration of an ISMS is shown in Figure 4. The diagram is reminiscent of the software V-model; here, policy and audit replace the V-model's customer requirement and customer acceptance tests. Note the management box at the top of the diagram; we will return to that later. The diagram also shows that there are two components to the inner feedback loop of Figure 3: a proactive loop, driven by scanning and other analyses for the presence of vulnerabilities, and a reactive loop, driven by incidents and near misses.
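
A crude way to picture those two inner-loop components is sketched below; the trigger names and the re-assessment hook are hypothetical, not terms taken from the standard or the figures.

    # Sketch of the two inner-loop triggers; all names here are hypothetical.
    PROACTIVE_TRIGGERS = {"vulnerability_db_update", "topology_change",
                          "software_upgrade", "scan_finding"}
    REACTIVE_TRIGGERS = {"incident", "near_miss"}

    def handle_event(event, reassess_risk):
        """Both loops end in the same place: a fresh risk assessment."""
        if event in PROACTIVE_TRIGGERS:
            print("proactive loop:", event)
        elif event in REACTIVE_TRIGGERS:
            print("reactive loop:", event)
        else:
            return  # outside the inner loop, e.g. a change of policy
        reassess_risk()

    handle_event("near_miss", lambda: print("re-running the risk assessment"))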

Modelling the Part 1 controls

BS 7799:1999 contains 127 controls. It recognises that not all of these might be applicable to all organisations and suggests that risk analysis should be used to differentiate between those that are and those that are not. Part 2 of the standard requires the results to be recorded in a "statement of applicability" (SOA) that lists the 127 controls and justifies why they are included or excluded.
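
A statement of applicability is, in essence, a table of the 127 controls with a decision and a justification against each. A minimal sketch follows; the control identifiers, titles and justifications are placeholders, not quotations from the standard.

    # Minimal SOA sketch; control identifiers and wording are placeholders.
    statement_of_applicability = [
        {"control": "A.x.y", "title": "Example control title",
         "included": True,  "justification": "Risk assessment shows the risk is unacceptable without it."},
        {"control": "A.x.z", "title": "Another control title",
         "included": False, "justification": "No such processing takes place within the scope."},
        # ... one entry for each of the 127 controls
    ]

    excluded = [c["control"] for c in statement_of_applicability if not c["included"]]
    print("Controls excluded:", excluded)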

A way to model the effect of these controls is to invent a corresponding vulnerability for each one. Its inclusion in the risk model increases the risk. The inclusion of the corresponding control decreases the risk, although not necessarily by the same amount - procedures are fallible. Some BS 7799 controls are threat safeguards, and in their case it is unnecessary to invent a vulnerability.
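
In terms of the earlier product model, the invented vulnerability is always present, and applying the control shrinks it without eliminating it. A hedged sketch, with all the numbers invented:

    # Sketch: an invented vulnerability represents the absence of a control;
    # applying the control shrinks, but does not eliminate, that vulnerability.
    def control_effect(threat, asset, control_applied=False, control_effectiveness=0.8):
        invented_vuln = 1.0  # stands in for "this control is missing"
        if control_applied:
            invented_vuln *= (1.0 - control_effectiveness)  # procedures are fallible
        return threat * invented_vuln * asset

    print(control_effect(0.6, 10.0))                         # control absent
    print(control_effect(0.6, 10.0, control_applied=True))   # some residual risk remains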

Application to Complex and Interdependent Infrastructures

Quite often an organisation will deploy a bespoke application, such as a payment system for a particular bank. There are also emerging technologies such as JavaCard™ and WAP-enabled mobile phones. You will not find the vulnerabilities of these applications and technologies lurking in a vulnerability library that comes with a risk/vulnerability assessment product. Outsourcing, particularly to ISPs and ASPs for Internet connectivity and when high-value assets are at stake, is another complication.

The first step is to break the problem down into physically and electronically separated zones, which protect assets of predominantly the same value and are exposed to similar threats. The second step is to use different techniques to deal with general "best practice" controls featured in BS 7799 Part 1, vulnerabilities in bespoke applications and emerging technologies, and vulnerabilities in commercial-off-the-shelf software.
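
The zoning step can be expressed as a simple grouping of assets by protective marking and threat environment; the asset records in the sketch below are invented for illustration.

    # Illustrative zoning: group assets that share a marking and a threat profile.
    from collections import defaultdict

    assets = [
        {"name": "web server",       "marking": "dtiSEC1", "threat": "internet"},
        {"name": "payment engine",   "marking": "dtiSEC3", "threat": "internal"},
        {"name": "customer records", "marking": "dtiSEC2", "threat": "internal"},
    ]

    zones = defaultdict(list)
    for a in assets:
        zones[(a["marking"], a["threat"])].append(a["name"])

    for zone, members in zones.items():
        print(zone, "->", members)  # each zone can then be risk-assessed separately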

The BS 7799 controls can be rejected or accepted and allocated to a zone without the deployment of any complex risk assessment technique whatsoever; common sense will prevail. A control is either relevant to the business and the protective marking of the zone, or it is not. Where risk assessment comes into play is in judging the degree of assurance required for a particular control (e.g. is a password good enough or do we need a biometric?). The problem here is that current safeguard databases are not linked, as in ISO 15408 Part 2, to show hierarchies of increasingly stronger safeguards. Nor do they recognise architectural solutions such as CESG's approach to securing the GSI. These obstacles make the risk assessment technology harder to apply and more error prone. However, the greater problem is not technical.
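
The missing linkage would amount to a hierarchy of increasingly stronger safeguards for the same control, from which the risk assessment picks the weakest level that still meets the assurance target. The ordering and effectiveness figures below are assumed examples, not values drawn from ISO 15408 Part 2.

    # Assumed example of a safeguard hierarchy for a user-authentication control.
    AUTH_HIERARCHY = [             # weakest to strongest, with assumed effectiveness
        ("password",           0.5),
        ("password + token",   0.8),
        ("biometric + token",  0.95),
    ]

    def select_safeguard(required_effectiveness):
        """Pick the weakest safeguard that still meets the assurance requirement."""
        for name, effectiveness in AUTH_HIERARCHY:
            if effectiveness >= required_effectiveness:
                return name
        return AUTH_HIERARCHY[-1][0]  # fall back to the strongest available

    print(select_safeguard(0.7))  # -> "password + token"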

For some organisations the management committee identified in Figure 4 will be composed of representatives drawn only from the organisation's various departments, such as IT, sales, marketing, operations, HR and finance. If any part of the IT network is outsourced (e.g. a "dotcom" might outsource its web-servers to an ISP) then security management becomes the joint responsibility of the asset owner and the service provider. The full extent of this requirement will really start to bite when the 1998 Data Protection Act takes full effect. In both cases the management solution is a (trans-)organisational peer group that heads the ISMS. All parties will need to have a common understanding of the principles of information security and indeed of BS 7799 itself. Of course, there is nothing new in this. The Ministry of Defence (MOD), with its programme for approving defence contractors, has always insisted that contractors follow its security rules. Communities of these ISMS peer groups may be used to maximise the effectiveness of CERTs, as there will be an efficient channel for distributing vulnerability reports and an ISMS mechanism ready to act upon them.

Summary

The problems and opportunities presented when applying risk assessment methods to increasingly complex and interdependent infrastructures are really a management issue. There is no one piece of technology that can be relied upon, with the possible exception of BS 7799:1999.

The theory of risk assessment is relatively robust and there have been significant developments flowing from 1st to 4th generation tools. Risk assessment can be simplified by breaking a complex problem down into logically simpler ones. The most complex problem is dealing with software vulnerabilities, but there exist a variety of useful network scanning tools that can assist, and new developments will certainly take place. However, to be successful in managing the risk to complex and interdependent infrastructures we must first adopt BS 7799 and the principle of trans-organisational ISMS peer groups.


Appendix - a closer look at the DTI scale

The diagram shows possible standard values for a variety of asset classifications. The dtiSECx markings refer to the UK Department of Trade and Industry's "unified classification markings", plus one (dtiSEC0) we invented with the DTI in April. The remaining markings reflect the usual military markings.

dtiSEC1 represents information for which improper disclosure (particularly outside an organisation), loss or fraud would be inappropriate and inconvenient.

dtiSEC2 represents information where any of these events would cause significant harm to the interests of the organisation. It includes personnel information and therefore would be the asset value relevant to European Data Protection Legislation.

dtiSEC3 represents information where, likewise, such an event could prove fatal to an organisation. The higher markings reflect damage to a nation, which is why they are ranked higher.

dtiSEC0 represents information which we don't mind being wrong, given away or lost. In this sense, dtiSEC0 represents the "secure state". The relationship between all these asset values is logarithmic.
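
On a logarithmic scale of that kind, each marking might be worth some fixed multiple of the one below it; the base of 10 and the resulting numbers below are purely illustrative assumptions, not values agreed with the DTI.

    # Purely illustrative: a logarithmic asset-value scale for the markings.
    MARKINGS = ["dtiSEC0", "dtiSEC1", "dtiSEC2", "dtiSEC3"]

    def asset_value(marking, base=10):
        """dtiSEC0 is the 'secure state' (value 0); each step up multiplies by base."""
        level = MARKINGS.index(marking)
        return 0 if level == 0 else base ** level

    for m in MARKINGS:
        print(m, asset_value(m))  # -> 0, 10, 100, 1000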

If we determine a reference risk based on the actual vulnerabilities and the STP, with "James Bond" substituted for all assets, then we should be seeking a risk target of 14.5% of that reference, given the actual threat profile and asset values.
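
As a worked example of the arithmetic only (the reference figure here is invented): if the "James Bond" reference risk came out at 2,000 units, the target would be 0.145 x 2,000 = 290 units.

    # Illustrative arithmetic only; the reference risk figure is invented.
    reference_risk = 2000.0        # risk with "James Bond" substituted for all assets
    target_risk = 0.145 * reference_risk
    print(target_risk)             # -> 290.0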

It all works something like this...