Proving Protection Profile Compliance for the CCL/ITRI Visa Open Platform Smart Card*

David Brewer
Gamma Secure Systems Limited
Diamond House, 149 Frimley Rd
Camberley, Surrey GU15 2PS, UK

Chilung Wang, Paulie Tsai
Industrial Technology Research Institute
X200, Bldg. 51, 195-11, Sec. 4, Chung Hsing Rd.,
Taiwan, R.O.C.
* PUBLISHED AT THE 3rd INTERNATIONAL COMMON CRITERIA CONFERENCE, 13-14 MAY 2002, OTTAWA, CANADA © 2002 GAMMA & ITRI. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from both Gamma and ITRI.


This paper presents a methodology for creating security targets that are compliant with two or more protection profiles.  In our case, we are developing a hardware based Java Card™ implementation of Global Platform’s cross-industry standard for reconfigurable smart cards (called Open Platform).  We have four protection profiles to contend with: Open Platform Protection Profile (OP3), Java Card System Protection Profile (JCSPP), Secure Silicon Vendors Group Protection Profile (SSVG-PP) and the Smart Card Security Users Group Smart Card Protection Profile (SCSUG-SCPP).

Despite the Common Criteria’s common language for expressing security function requirements (SFRs), we have found that even if all four protection profiles ask for the same SFR, in some cases the TOE requires a single TOE security function (TSF), whereas in others it requires multiple TSFs.  Moreover, demonstration of compliance against free-form descriptions of threats, assumptions and other Common Criteria paraphernalia – some of which may be identical, others overlapping and still others superficially similar but actually divergent – presents even more difficulty to the security target author.  Our methodology overcomes these problems by mapping the SFRs and other requirements onto a hierarchically-layered security architecture that has the property that each layer inherits the security properties of the lower layers.  This paper should be of prime interest to security target authors who have the task of complying with two or more PPs, particularly in the smart card arena.


Introduction

In producing the Security Target for the CCL/ITRI Visa Open Platform Smart Card (see Figure 1), we recognised at the outset that it would probably be insufficient to claim conformance with just the Open Platform Protection Profile (OP3) [OP301], even though OP3 covers the entire Target of Evaluation (TOE)[1].  From a technical perspective, OP3 [BRE01] recasts the security requirements, laid down in the Open Platform Card Specification (OPCS) [OPC01], into the language of the Common Criteria.  Although OPCS does not elaborate on the security requirements for the underlying Runtime Environment (RTE) or the integrated circuitry (IC), OP3 does.  In particular, OP3 specifies the security properties of what it calls the “Card/Chip Operating Environment” (COE), in terms of the security objectives, the anticipated security functions and the types of threat that the COE is expected to counter.  Thus, OP3 provides a complete statement of the security requirements for an Open Platform compliant smart card.

However, the Smart Card Security Users Group (SCSUG), which represents the payment associations (American Express, Europay, JCB, MasterCard, Mondex and Visa), has also produced a protection profile [SCS01].  The objective of their work is to utilise the Mutual Recognition Arrangement of the Common Criteria to allow the results of a vendor’s Common Criteria evaluation to be accepted by all the payment associations, whereas at present each product has to be evaluated independently by each association.  Thus, in order to render our product more saleable, it makes good sense to aim for compliance with the SCSUG profile as well.

Likewise, Sun is producing a protection profile for its Java Card™ system (JCS) and the Secure Silicon Vendors Group (SSVG) [SSV01] has produced a protection profile for the IC component of our TOE.  From a marketing perspective it makes good sense to aim for compliance with these profiles as well, in order to claim that our card, which is Java Card based, meets the JCS security requirements and that our IC is at least as secure as those that are built to conform to the SSVG profile.

Thus, we decided to create a security target that is conformant with OP3 and the JCS, SSVG and SCSUG protection profiles.

Statement of the problem

At first view, it is necessary to ensure that the security target covers every Security Function Requirement (SFR) contained in the four protection profiles.  However, we soon noticed that even if all four protection profiles asked for the same SFR, there was no guarantee that the SFR represented a common requirement to be satisfied by a single TOE Security Function (TSF).  For example, consider FPT_SEP.1: in the JCS profile it refers to software, in the SSVG profile it refers to hardware, and in OP3 and the SCSUG profile it could refer to either or both.  We also noticed that different protection profiles sometimes used different SFRs to describe the same requirement.  For example, OP3 uses two cryptographic components (FCS_COP.1+5 – MAC Chaining and FCS_CKM.1+1 – Session Key Generation) to specify Open Platform’s defence against a number of attacks, including replay attack, whereas the SCSUG profile uses the explicit component FPT_RPL.1 – Replay Detection.

Thus, how could we determine consistently which:

  • dissimilar SFRs should be implemented by the same TSF?
  • identical SFRs should be implemented by different TSFs?

There was also the question of how, in the security target, could we deal with the threats, assumptions, policies and objectives.  Unlike the SFRs, these entities are expressed free-form in natural language.  Some appear to be identical, while other descriptions overlap and still others appear superficially similar but are actually divergent.

In addition, Open Platform posed its own problems:

  • As appreciated in OPCS and OP3, the OP Environment (OPEN) component of the functional architecture (see Figure 1) is tightly intertwined with the RTE.  In particular, the OPEN must always be in control of the card following successful power-up.  Thus, a clean separation between OPEN and the RTE is not possible.  This implies that the OPEN TSFs do not “sit upon” the RTE TSFs, as suggested by Figure 1, but are rather juxtaposed with them.  The question was: how?
  • There are functions common to the Issuer Security Domain and all other Security Domains, e.g. the Secure Channel.  Although, from the perspective of OPCS, each Security Domain has its own Secure Channel, from an implementation perspective, there is only one software module that implements the Secure Channel, the identity of a particular domain being taken on by the dataset that it accesses.  For example, the Security Domain Module takes on the identity of the Issuer Security Domain, when it is invoked using the Issuer Security Domain Data.  Thus, functions that rightfully belong to more than one component of the functional architecture are in practice implemented by a single TSF.

Finally, our development team raised a fifth challenge: how could we specify the TSFs, necessary to complete the security target, without constraining the design?

Approach to the solution

The development team challenge was influential in deciding our approach.  Given that we had a functional architecture (Figure 1) and that one day we would have a robust hardware/software “implementation” architecture, what we needed at that stage was an intermediate architecture, which would act as a bridge between the functional architecture and the implementation architecture.  We decided to call this intermediate architecture the “security architecture”.  We knew that it would have certain properties: for example, unlike the functional architecture, it would be strictly hierarchical.  We felt that this would overcome the OP3 problems and provide a framework in which to solve the root problem of mapping the SFRs from our four protection profiles onto our TSFs.

In the remainder of this paper we describe our approach to defining and testing the security architecture.  We also present our results and draw some conclusions.  Before doing so, however, it is useful to present our solution for dealing with the threats, assumptions and the other free-form paraphernalia.

Demonstrating Compliance

Mathematically, security objectives serve to map the SFRs to the threats and policies.  Thus, if (a) the security objectives address all the threats and policies and (b) the SFRs address all the security objectives, then the SFRs must address all the threats and policies.  Hence, given the same set of threats and policies and the same set of SFRs, it must be possible to create different sets of security objectives that achieve exactly the same mapping of SFRs to threats and policies, even though there will be different mappings of objectives to threats and policies and of objectives to SFRs.  Naturally, it is unnecessary to identify all possible sets of security objectives; just one will do.
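This transitivity argument can be checked mechanically.  In the sketch below, the threat, policy and objective names are hypothetical (invented for illustration); the SFR identifiers are the two cited earlier from OP3.  Composing the two mappings eliminates the objectives, showing that the choice of intermediate objectives does not affect which SFRs cover which threats and policies:

```python
# Hypothetical threat/policy and objective names; SFR identifiers from OP3.
to_objectives = {                 # threats and policies -> objectives
    "T.REPLAY": {"O.MSG_INTEGRITY"},
    "P.SESSION": {"O.SESSION_KEYS"},
}
to_sfrs = {                       # objectives -> SFRs
    "O.MSG_INTEGRITY": {"FCS_COP.1+5"},
    "O.SESSION_KEYS": {"FCS_CKM.1+1"},
}

def sfrs_covering(threat_or_policy):
    """Compose the two mappings, eliminating the intermediate objectives."""
    return {sfr for obj in to_objectives[threat_or_policy]
                for sfr in to_sfrs[obj]}

# Conditions (a) and (b) together imply that every threat and policy
# is addressed by at least one SFR.
assert all(sfrs_covering(t) for t in to_objectives)
```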

However, threats and policies are interchangeable.  One protection profile could attribute a security objective to a threat and another to a policy.  Thus, the SFRs will in practice meet many unstated threats and policies.  In demonstrating the utility of the chosen SFRs, a single minimal set of threats and policies will suffice.  It is not necessary to identify all the unstated threats and policies; indeed, this would be an endless task.  In conclusion, it would therefore appear that to demonstrate compliance with more than one protection profile, it is only necessary to demonstrate compliance with the SFRs (and SARs): for this purpose the threats, policies and objectives are redundant.

However, as pointed out in the SCSUG protection profile, certain SFRs must be implemented in hardware.  Therefore our conclusion is only valid if the TSFs are implemented in the “proper” place from an architectural perspective, and therefore have the semantics that the protection profile author intended.

This implies that in building the architecture there should be a clean separation between the hardware and software layers and that we should be careful to assign the SFRs in accordance with the instructions given in their respective protection profile.  For example, the majority of SFRs in the SSVG protection profile concern the IC (i.e. hardware); the SCSUG protection profile identifies which SFRs are implemented by hardware and which by software.

Devising the Architecture

In devising the architecture, we started with a 13-layer model.  We associated the highest layer with the applications and the lowest layer with the physical characteristics of the IC.  Each intermediate layer was made to represent some logical function, which in general terms provided some security service to the higher layers.  Within each layer we defined a variety of entities, which we later recognised as being the TSFs or groups of related TSFs.  We mapped the SFRs from the four protection profiles onto these TSFs.  In doing so, we strove to associate like-meaning SFRs together, whilst preserving the general rules that (a) subordinate layers provide services to superior layers and (b) where a protection profile designated an SFR as being specifically implemented in software or hardware, that requirement was satisfied.  Whenever there was a problem in satisfying these rules, the architectural layers were revised and the allocation process restarted from the beginning.  With the general aim of simplifying the architecture as much as possible, we were able on about the fifth iteration to reduce the number of layers to eight.  To assist this, we refined rule (a) to become: TSFs may provide services to other TSFs in the same layer, as well as generally providing a service to the superior layers.
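Rule (b) can be checked mechanically at each iteration.  The following sketch assumes the final eight-layer numbering (Layers 1 and 2 being hardware); the individual SFR-to-layer assignments shown are illustrative, not our actual allocation:

```python
# Layers 1 and 2 (Physical Resistance, Electrical Interface) are hardware.
HARDWARE_LAYERS = {1, 2}

# Illustrative allocation: SFR -> (layer, designation); designation is None
# where the protection profile leaves the implementation choice open.
allocation = {
    "FPT_PHP.3": (1, "hardware"),        # e.g. physical protection of the IC
    "FPT_SEP.1 (JCS)": (3, "software"),  # JCS profile: software separation
    "FPT_RPL.1": (6, None),              # no designation given
}

def violates_designation(layer, designation):
    """True if a hardware-designated SFR sits in a software layer,
    or a software-designated SFR sits in a hardware layer."""
    if designation == "hardware":
        return layer not in HARDWARE_LAYERS
    if designation == "software":
        return layer in HARDWARE_LAYERS
    return False        # no designation: any layer is acceptable

bad = [sfr for sfr, (layer, d) in allocation.items()
       if violates_designation(layer, d)]
assert not bad          # any violation: revise the layers and reallocate
```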

Before making any further observations about this process, it is useful to present the final solution.

What does it look like?

The resulting security architecture is shown in Figure 2. There are 8 hierarchical layers, called:

  • Layer 1 – Physical Resistance

  • Layer 2 – Electrical Interface

  • Layer 3 – Software Control

  • Layer 4 – Card Integrity

  • Layer 5 – Card Management

  • Layer 6 – Security Domain Management

  • Layer 7 – Application Control

  • Layer 8 – Application

Layers 1 and 2 constitute the hardware of the smart card; Layers 3 through 8 constitute the software. The layers may be briefly described as follows.  Assuming the TOE is trusted, there are just three ways by which an attacker may attempt to gain access to the assets protected by the TOE: (a) as an application (Layer 8), which means that the attacker must somehow introduce an “agent” application onto the card; (b) as an off card entity via the input/output (I/O) facility (Layer 2), which is the normal method for an off card entity to communicate with the card; (c) by a direct attack on the card.

In the first instance, such a direct attack may attempt to penetrate the defences of the first layer – Physical Resistance.  Subsequently the attacker has to contend with the defences of the second layer – Electrical Interface. An attack perpetrated via an off card entity via the I/O facility is also addressed at this layer.  Thus, these two layers form the first line of defence against a direct attack on the card.

At Layer 3 – Software Control, Open Platform takes control of security, ensuring that the security features of Open Platform and the RTE are always invoked and cannot be bypassed, deactivated, corrupted or otherwise circumvented.  The “Supervisor” TSF performs this task.  Layer 3, building on memory management features included at Layer 2 and Java Card “firewall” mechanisms embodied in Layer 3, also provides applet separation.  Thus, another service provided by Layer 3 to the higher layers is that applications, including the SELECT-able Open Platform applications (i.e. the Card Manager and the various Security Domains), cannot interfere with one another.

At Layer 4 – Card Integrity, any problems resulting from premature withdrawal of the card from a card acceptance device (CAD) or any other form of power failure are resolved.  At Layer 5 – Card Management, access to the Open Platform Card Management functions by off-card entities is controlled, the necessary Open Platform Card Manager functionality is provided and the various state transitions (e.g. card termination, application locking and unlocking) take effect.  At Layer 6 – Security Domain Management, access to the Open Platform Security Domain functions (e.g. the establishment of a Secure Channel) by off-card entities is controlled and the relevant functionality is provided.  At Layer 7 – Application Control, access to the Open Platform API by applications is controlled.  Particular security functions of Open Platform that operate at Layer 5 prevent the loading of applications (which execute at Layer 8) that are not authorised by the Card Issuer (or some other Controlling Authority, see OPCS section 3.3).

In addition to the various layers, Figure 2 also identifies the TOE Security Functions.  These are listed in Appendix A. Note that there are no TSFs associated with the Application Layer.  This is because none of the protection profiles that we have studied so far cover applications.

Testing the architecture 

Five tests were applied to the architecture:  

  • Were the layers logically ordered, with the applications on the top and the interfaces to external devices at the bottom?
  • Were all relevant SSVG protection profile SFRs allocated to the hardware layers?
  • Were all SCSUG protection profile SFRs that SCSUG did not intend to be implemented in hardware allocated to the software layers?
  • Does each layer provide a security service to the higher layers?
  • For any given layer, inclusive of all lower layers, were all missing dependencies, if any, fully justified?

Once we had determined that the answers to all these questions were “yes”, we concluded that the architecture work was “complete”, in the sense that it was adequate for the purposes of constructing our security target.
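The fifth test, in particular, lends itself to mechanical checking.  The sketch below uses a genuine subset of Common Criteria dependency rules (FDP_ACF.1 depends on FDP_ACC.1; FCS_COP.1 depends, among other options, on FCS_CKM.1), but the SFR-to-layer assignment is hypothetical, constructed to show one satisfied and one missing dependency:

```python
# Genuine (subset of) CC component dependencies.
DEPENDS_ON = {
    "FDP_ACF.1": {"FDP_ACC.1"},
    "FCS_COP.1": {"FCS_CKM.1"},   # one of FCS_COP.1's alternative dependencies
}
# Hypothetical SFR-to-layer assignment for illustration only.
layer_of = {"FDP_ACC.1": 5, "FDP_ACF.1": 5, "FCS_COP.1": 5, "FCS_CKM.1": 6}

def missing_dependencies(layer):
    """Dependencies unmet at `layer`, counting that layer and all lower
    layers; any non-empty result must be fully justified."""
    visible = {s for s, l in layer_of.items() if l <= layer}
    return {s: DEPENDS_ON.get(s, set()) - visible
            for s in visible
            if DEPENDS_ON.get(s, set()) - visible}

assert missing_dependencies(5) == {"FCS_COP.1": {"FCS_CKM.1"}}  # to justify
assert missing_dependencies(6) == {}                            # all met
```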

Subsequently, we applied one final test, which was to see how the Eurosmart protection profiles, PP9806 [EUR98] and PP9911 [EUR99], fitted within the architecture.  The match was very good and required only one small amendment: splitting Layer 2 into two sub-layers.  This allows the architecture to describe faithfully cards with and without hardware-based encryption.


The architecture provides an elegant way of showing how many types of smart card PP fit together.  We have already noted that different protection profiles use different SFRs to describe the same function.  In many cases we feel that these different ways of describing the same function actually add clarity.  The Message Authentication and Integrity Security Function is a good example.  In this case OP3 points out that two cryptographic functions are to be used – cryptographic chaining and session keys – whereas SCSUG reminds us that a particular purpose of this defence is to guard against replay attack.  The one profile says “how”, the other “why”.

The architecture also shows how, in perhaps concentrating on one particular set of issues, individual profiles fail to see the whole story.  A case in point is the secure and non-secure channel functions in Layer 6.  The SCSUG protection profile only considers the import of user data without security attributes.  The JCS protection profile only considers the import of user data with security attributes.  OP3 avoids the issue and describes the import of data using other SFRs.  Which profile is correct?  Our answer is that they are all correct, although none of them is telling the whole story.  In piecing the different parts of the puzzle together, we realised that although OP3 describes how the “secure channel” is used for off-card communication, OPCS does not insist on its use.  Therefore, there is another channel, which we have called the “non-secure channel”.  The import of user data without security attributes is associated with this latter channel; the import of user data with security attributes is associated with the former.
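The resulting dispatch is simple enough to state as code.  The function below is a hypothetical illustration of how the two Common Criteria import components (FDP_ITC.1, import of user data without security attributes; FDP_ITC.2, import with security attributes) map onto the two Layer 6 channels:

```python
# Hypothetical illustration: routing imports onto the two Layer 6 channels.
# FDP_ITC.2 (import with security attributes) corresponds to the Secure
# Channel; FDP_ITC.1 (import without security attributes) to the
# Non-Secure Channel.
def channel_for(has_security_attributes: bool) -> str:
    return ("Secure Channel" if has_security_attributes
            else "Non-Secure Channel")

assert channel_for(True) == "Secure Channel"       # the JCS view
assert channel_for(False) == "Non-Secure Channel"  # the SCSUG view
```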

Regrettably, at Layer 5 we had to introduce a catch-all TSF in which to place all SFRs that did not fit elsewhere.  This “Sargasso” TSF (Sec Mgt) is entirely populated with management functions (e.g. FMT_MOF.1, see Appendix A) that have a mixture of subjects, each of which is associated with a different TSF.  If we were able to rewrite the protection profiles such that they instantiated these management functions so that each confined itself to a particular management topic, then we would be able to dispense with this anomalous TSF.

FPT_SEP.1 (TSF domain separation) is a requirement common to all four protection profiles, as mentioned in the introduction.  It maps to the Firewall Security Function and to the Memory Management Security Function.  The semantics of these two functions are subtly different.  At Layer 2, all that the Memory Management Security Function knows about are memory addresses and segments.  In particular, it does not know how the segments are related to application identifiers (AIDs).  This association is performed at Layer 3 as part of the Firewall Security Function.
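This division of knowledge can be sketched as a toy model (the class names, segment layout and AID below are all invented for illustration): Layer 2 resolves addresses against segments, while Layer 3 alone holds the AID-to-segment association:

```python
# Toy model: Layer 2 knows only segments and addresses; Layer 3 alone
# relates AIDs to segments and enforces the firewall.
class MemoryManagement:                  # Layer 2 view
    def __init__(self):
        self.segments = {}               # segment id -> (base, size)

    def in_segment(self, segment, address):
        base, size = self.segments[segment]
        return base <= address < base + size

class Firewall:                          # Layer 3 view
    def __init__(self, mem):
        self.mem = mem
        self.segment_of = {}             # AID -> segment id

    def may_access(self, aid, address):
        seg = self.segment_of.get(aid)
        return seg is not None and self.mem.in_segment(seg, address)

mem = MemoryManagement()
mem.segments["S0"] = (0x1000, 0x100)     # 256-byte segment at 0x1000
fw = Firewall(mem)
fw.segment_of["A000000003"] = "S0"       # hypothetical AID

assert fw.may_access("A000000003", 0x10FF)       # inside the segment
assert not fw.may_access("A000000003", 0x1100)   # outside the segment
assert not fw.may_access("unknown", 0x1000)      # AID with no segment
```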

There has been much controversy over the SCSUG requirement for auditing, principally because of the resource limitations of smart cards[2].  In completing the relevant SFRs we discovered that there is a two-fold purpose to the requirement.  First, it allows an off-card entity to access the “production history file”, which contains information that can be used to identify the card and distinguish it from all others.  Second, it facilitates the recording of a traditional audit trail, which for the purposes of a smart card can be limited to a round-robin buffer.  We feel that the ability to record the AID of the selected application, its associated security domain and possibly the APDU[3] command type, method and class would be useful in helping an off-card entity to detect when a card has been compromised off-line.
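A minimal sketch of such a round-robin audit trail follows; the record fields mirror the suggestion above (AID, security domain, APDU class and instruction), but the exact format and capacity are invented:

```python
from collections import deque

AUDIT_CAPACITY = 8                       # a real card buffer would be small

audit = deque(maxlen=AUDIT_CAPACITY)     # oldest entries overwritten when full

def record_selection(aid, security_domain, apdu_cla, apdu_ins):
    """Append one audit record; invented field names."""
    audit.append({"aid": aid, "sd": security_domain,
                  "cla": apdu_cla, "ins": apdu_ins})

for i in range(10):                      # more events than capacity
    record_selection(f"AID{i}", "ISD", 0x00, 0xA4)   # 0xA4 = SELECT

assert len(audit) == AUDIT_CAPACITY
assert audit[0]["aid"] == "AID2"         # AID0 and AID1 were overwritten
```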

At Layer 2, we associated the SSVG requirement for information flow control with the need for a cryptographic non-bypassability policy.  In other words, data that is intended to be encrypted before leaving the card must not be allowed to leave the card until it has been encrypted.  Thus, the development of the architecture provided us with new insights as to the utility of various SFRs. It also identified where the Common Criteria is weak.  For example, the Random Number Generator Security Function not only provides a source of random numbers for the cryptographic module but also generates random wait states for use by the CPU to help combat the differential power analysis (DPA) attack.  However this use of the random number generator is not described by any SFR.
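The dual use of the random number source can be sketched as follows.  This is a desktop illustration only: Python's `secrets` module stands in for the hardware generator, and the function names and parameters are ours:

```python
import secrets

# One random number source feeding two consumers.
def session_key_bytes(n=16):
    """Cryptographic use: key material (covered by SFRs such as FCS_CKM.1)."""
    return secrets.token_bytes(n)

def random_wait_states(max_cycles=7):
    """DPA countermeasure: random CPU delays; no SFR describes this use."""
    return secrets.randbelow(max_cycles + 1)

assert len(session_key_bytes()) == 16
assert 0 <= random_wait_states() <= 7
```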

A note on technique

We constructed the security target in Microsoft Word, using the cross-reference mechanism to link the original SFRs, extracted from the protection profiles, to our completed SFRs.  We kept the original SFRs in an appendix.  The completed SFRs are grouped under their relevant TSF heading, and thus the TOE summary specification and the statement of TOE security functional requirements are interleaved.  The resultant file, although extremely large (3.03 MB), facilitated the easy movement of the SFRs to allocate and reallocate them to the TSFs during the construction of the architecture, whilst maintaining the cross-references to the originals.  If we were to repeat the work, however, conventional hypertext might prove to be a more suitable medium.

Dealing with Assumptions, Threats, Policies and Objectives

Provided that the TSFs are implemented in the “proper” place from an architectural perspective, the conclusion that PP compliance can be demonstrated using the SFRs alone remains valid.  However, a security target still requires a set of relevant assumptions, threats, policies and objectives.  In order to complete our security target, we therefore used those defined by OP3, augmented as necessary to describe the additional SFRs required by the JCS, SSVG and SCSUG protection profiles.

Summary of the Methodology

In summary, our method for demonstrating compliance with two or more protection profiles involves:

  • Asserting that it is only necessary to demonstrate compliance with the SFRs and security assurance requirements (SARs)
  • Devising a hierarchically layered architecture of TSFs
  • Allocating the SFRs from all the protection profiles under consideration to those TSFs such that the layers of the architecture possess the properties listed below
  • Choosing one of the protection profiles as a source of baseline assumptions, threats, policies and objectives and augmenting these as necessary to encompass the totality of the SFRs drawn from all the protection profiles under consideration.  Use this as the set of assumptions, threats, policies and objectives for the security target.

The properties are:

  • The layers are logically ordered, with the applications on the top and the interfaces to external devices at the bottom.
  • Each layer provides a security service to the higher layers.  (TSFs may also provide security services to other TSFs in the same layer.)
  • Where a protection profile indicates where or how an SFR is to be implemented, the allocation of SFRs to TSFs reflects that requirement (e.g. hardware SFRs are allocated to hardware layers).
  • Where there is a choice in the manner in which an SFR may be implemented (e.g. in terms of hardware or software), the architecture does not constrain that choice.
  • For any given layer, inclusive of all lower layers, all missing dependencies, if any, are fully justified.


Conclusions

We believe that this architecture may be considered generic and applicable to all Open Platform smart cards, regardless of whether the COE is Java Card based or not.  Indeed, the method for deriving the architecture may assist the authors of other security targets, concerned with other protection profiles, to generate their own architectures.  The method certainly provides insight into how all the SFRs fit together to create an integrated whole.

We suspect that the architecture will also assist in performing the vulnerability analyses (AVA_VLA) and that security functions in the higher layers may prevent the exploitation of vulnerabilities in the lower layers.  This is certainly true in the case of Open Platform, where the use of public key cryptography (at Layer 5) prevents an attacker who has compromised the security at Layer 2 from forging a “load token”[4].


Acknowledgements

The original architecture work was performed under contract F-X2-K0017 for the Industrial Technology Research Institute (ITRI) in Taiwan, R.O.C.  The authors would like to thank ITRI for its support and funding.  The authors would also like to thank Marc Kekicheff of Visa International for his helpful advice and encouragement.


References

[OP301]  The Open Platform Protection Profile, Version 0.9 issued March 2001, obtainable from Global Platform (www.globalplatform.org)

[BRE01]  Smart Cards: The Open Platform Protection Profile, Brewer, D.F.C., Kekicheff, M., Kashef, F., Proceedings of the Second International Common Criteria Conference, Brighton, UK, 2001

[OPC01]  The Open Platform Card Specification, version 2.1, issued June 2001, obtainable from www.globalplatform.org

[SCS01]   The Smart Card Security Users Group Smart Card Protection Profile, version 3.0 issued August 2001

[SSV01]   The Secure Silicon Vendors Group Smartcard IC Platform Protection Profile, version 1.0, July 2001, BSI-PP-0002

[EUR98]   Smart Card Integrated Circuit Protection Profile, PP9806, Version 1.0, September 1998

[EUR99]   Smart Card Integrated Circuit with Embedded Software Protection Profile, PP9911, Version 2.0, June 1999

Appendix A – Cross References of SFRs to Architectural Layer and TSF

Architectural Layer / TOE Security Function

SFR (bold means that the SFR originates from OP3 – it may also be included in one of the other protection profiles; italics means that the SFR is not included in OP3, but originates from one of the other protection profiles, or the CCL security target)



OP API Access Security Function

FDP_ACC.1+3, FDP_ACF.1+3


CVM Handler Access Security Function

FDP_ACC.1+5, FDP_ACF.1+5



Security Domain Access Security Function

FDP_ACC.1+2, FDP_ACF.1+2


Secure Channel Security Function


Invocation by an Application



Invocation by Security Domain Users



Host/Entity Authentication



Message Encryption



Message Authentication and Integrity



Key and other Secret Data Receiving Services

FCS_COP.1+2, FCS_COP.1+10


Non-Secure Channel Security Function



Confirm Code Verification Security Function



Load File Verification Security Function



CVM Handling Security Function




Card Manager Access Security Function

FDP_ACC.1+1, FDP_ACF.1+1


State Transition Command Security Function



Delegated Management Security Function

FCS_COP.1+6, FCS_COP.1+7


Key Management Security Function


Key Generation Services



Key Distribution Services



Key Access Services



Key Destruction Services



CCMF Security Function



Security Management Security Function




Self Test Security Function



Rollback and Recovery Security Function




Supervisor Security Function



Firewall Security Function



Object Reuse Security Function



Exception Handling Security Function



Object Identity Security Function



Card Audit Security Function




Sensor Security Function



Bus Security Function



Memory Management Security Function



Input/output Security Function



CPU Security Function



Random Number Generator Security Function



Cryptographic Security Function

FCS_COP.1 (instantiated for each algorithm)



Coating Security Function




[1] Everything shown in Figure 1, with the exception of the “applications” is within the scope of the TOE.

[2] Compared with a PC, smart cards are not only limited in terms of memory size and computer power, but typically they have no internal power or clock.

[3] Application Protocol Data Unit

[4] If an attacker were able to forge a load token then they would in principle be able to load applications that were not authorised by the Card Issuer.  However, in order to load an unauthorised application that does something nasty, other higher level security services have also to be overcome.