1 Introduction

The expressive power of HOL allows the process of social explanation, inspired by Max Weber, to be modelled in an Isabelle Insider Threat framework. We applied this framework to case studies from airplane safety and security [8], insider threats for the IoT [10], and auction protocols [9]. The CHIST-ERA project SUCCESS [2] will employ the framework in combination with attack trees and the Behaviour Interaction Priority (BIP) component architecture model to develop security- and privacy-enhanced IoT solutions. A pilot case study from the health care sector, cost-effective IoT-based bio-marker monitoring for early Alzheimer’s diagnosis, will enable us to investigate the feasibility of the approach.

The Isabelle Insider framework [14] is used as a basis for a formalisation of an architecture-level description of the infrastructure, including human actors, their psychological disposition, and core privacy and security requirements integrated as logical predicates of local security and privacy policies. The modelchecking procedure advocated in the invalidation approach to Insider threat analysis [13] has meanwhile been incorporated into the Isabelle tool [7]. Its applicability has been demonstrated by an example analysis of an earlier IoT case study [10]. In that earlier IoT case study, however, we originally extended the Isabelle Insider framework by the concept of attack trees to refine known IoT Insider attack vectors. Attack trees allow refining known attack vectors into sequences of state transitions explaining how the attack leads to a state in which the security property is violated. Thus the refined attack corresponds to a path in the state graph of the system model. Similarly, the process of modelchecking automatically produces a sequence of state transitions if the checked property does not hold in the model.

The question we investigate by means of an IoT health care case study is whether the concepts of modelchecking and attack tree refinement correspond. The extension by modelchecking [7] and the embedding of attack trees into the Isabelle Insider framework allow us to examine this correspondence with the mathematical rigour and automated proof support of Isabelle. The results provide important insights into how the methods of modelchecking and attack tree analysis can be fruitfully combined to enhance the verification of attacks on human-centric infrastructure models and possibly even the discovery of yet unknown attacks.

This paper begins by briefly reviewing the Isabelle Insider framework, with special emphasis on the extensions for modelchecking and attack trees. The running example of a simple health care scenario and its privacy and security risks is then introduced, followed by its formalisation in the Isabelle Insider framework. We reconsider the definition of state transition used in modelchecking, introducing an adaptation that makes the attack paths explicit. This allows attack traces found by modelchecking to be transformed into the attack tree refinement process. We show how these processes relate. As illustrated by the case study, the combination of modelchecking and attack trees can guide the attack tree refinement in finding and analysing attacks in human-centric scenarios.

2 Isabelle Insiders, Modelchecking and Attack Trees

In the formal analysis of technical scenarios, the motivation of actors and the resulting behaviour of humans are often not considered because their complexity is beyond the reach of the usual formalisms. The Isabelle Insider framework [14] provides the expressiveness to model infrastructures, policies, and humans while maintaining a high level of proof automation. In this section, we give a short introduction to this framework for modeling and analysing Insider attacks and describe its extensions by attack trees and modelchecking. A detailed technical introduction to the framework is given in [14]; the extensions are introduced in [7, 10], and the Isabelle sources are available online [6].

2.1 Isabelle Insider Framework

The Isabelle Insider framework [14] is based on a logical process of sociological explanation [3] inspired by Weber’s Grundmodell: Insider threats are explained by moving between the societal level (macro) and the individual actor level (micro).

The interpretation into a logic of explanation is formalized in Isabelle’s Higher Order Logic. This Isabelle formalisation constitutes a tool for proving security properties with the assistance of the semi-automated theorem prover [14]. Isabelle/HOL is an interactive proof assistant based on Higher Order Logic (HOL). Applications can be specified as so-called object-logics in HOL, providing reasoning capabilities for examples as well as for the analysis of the meta-theory. Examples range from pure mathematics [11] to software engineering [5]. An object-logic contains new types, constants, and definitions. These items reside in a theory file; e.g., the file Insider.thy contains the object-logic for the social explanation of Insider threats (see [6, 14]). The Isabelle Insider framework is a conservative extension of HOL, which means that our object-logic does not introduce new axioms and hence guarantees consistency.

The micro-level and macro-level of the sociological explanation give rise to a two-layered model in Isabelle, reflecting first the psychological disposition and motivation of actors and second the graph of the infrastructure, whose nodes are locations with actors associated with them. Security policies can be defined over the actors, their properties, and the infrastructure graph; properties can be proved mechanically with Isabelle.

In the Isabelle/HOL theory for Insiders, we express policies over the actions get, move, eval, and put. We abstract here from concrete data; actions have no parameters:

figure a
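
Since the original listing is not reproduced here, the following is a minimal sketch of this declaration, consistent with the description above (see [6, 14] for the exact form):

  datatype action = get | move | eval | put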

The human component is the Actor which is represented by an abstract type and a function that creates elements of that type from identities:

figure b
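
A plausible sketch of this declaration, following the description above; the type synonym identity = string is an assumption based on how identities are used later in the paper:

  typedecl actor
  type_synonym identity = string
  consts Actor :: "identity ⇒ actor"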

Policies describe prerequisites for actions to be granted to actors given by pairs of predicates (conditions) and sets of (enabled) actions:

figure c
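
A minimal sketch matching this description, where a policy pairs a condition on actors with a set of enabled actions:

  type_synonym policy = "((actor ⇒ bool) × action set)"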

We integrate policies with a graph into the infrastructure, providing an organisational model in which policies reside at locations, actors are adorned with additional predicates specifying their ‘credentials’, and a predicate over locations encodes attributes of infrastructure components:

figure d
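
The following is a hedged sketch of the infrastructure type suggested by this description. The constructor names (Location, Lgraph, Infrastructure) are chosen for illustration, the component types in the published theory [6] differ in detail, and the representation of locations as wrapped natural numbers is an assumption suggested by the numbering in Sect. 5.1:

  datatype location = Location nat
  datatype igraph = Lgraph "(location × location) set" "location ⇒ identity list"
  datatype infrastructure =
    Infrastructure "igraph"                 (* organisational graph with actor positions *)
                   "location ⇒ policy set"  (* local policies, accessed via delta        *)
                   "actor ⇒ bool"           (* actor credentials, accessed via tspace    *)
                   "location ⇒ bool"        (* attributes of infrastructure components   *)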

These local policies serve as a specification of the ‘normal’ behaviour of actors but are also the starting point for possible attacks on the organisation’s infrastructure. The enables predicate specifies that an actor a can perform an action a’ \(\in \) e at location l in the infrastructure I if a’s credentials (stored in the tuple space tspace I a) imply the condition p of the location policy (stored in delta I l):

figure e
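
Based on this description, the enables predicate can be sketched as follows, where delta and tspace are the projections on the infrastructure mentioned above; the exact formulation in [6] may differ:

  definition enables :: "[infrastructure, location, actor, action] ⇒ bool"
  where
    "enables I l a a' ≡ ∃ (p, e) ∈ delta I l. a' ∈ e ∧ (tspace I a ⟶ p a)"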

We demonstrate the application of the Isabelle Insider framework in Sect. 5.1 on our running example of an Insider case study from the health care sector.

2.2 Attack Trees

Attack Trees [19] are a graphical tree-based design language for the stepwise investigation and quantification of attacks. They have been integrated as an extension to the Isabelle Insider framework [10, 18]. In this Isabelle framework, base attacks are defined as a datatype and attack sequences as lists over those:

figure f
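
A minimal sketch of these two declarations; the published datatype in [6] may contain further constructors beyond the two used in this paper (Goto and Perform):

  datatype baseattack = Goto "location" | Perform "action"
  type_synonym attackseq = "baseattack list"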

The following definition, attree, defines the nodes of an attack tree. The simplest case of an attack tree node is a base attack. Attacks can also be combined as the “and” of other attacks. In that case, the first element of type attree is an attack sequence, the second element is the attribute, simply a “string”, and the third element is a baseattack (usually a Perform action) that represents the overall attack:

figure g
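
The corresponding datatype can be sketched as follows; the concrete syntax annotation for the and-combination (written \(\oplus _{\wedge }\) in the paper) is omitted here:

  datatype attree =
      BaseAttack "baseattack"
    | AndAttack "attackseq" "string" "baseattack"
      (* attack sequence, attribute, representing base attack *)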

The functions get_attseq and get_attack are corresponding projections on attack trees returning the entire attack sequence or the final base attack (the root), respectively.

The main construction concept for attack trees is refinement defined by an inductive predicate refines_to syntactically represented as the infix operator \(\sqsubseteq \). There are rules trans and refl making the refinement a preorder; the rule refineI shows how attack vectors can be integrated into the refinement process. We will investigate this rule in detail when integrating with modelchecking in Sect. 4.2 because this is where modelchecking and attack tree refinement complement each other nicely.

The refinement of attack sequences allows the expansion of top-level abstract attacks into longer sequences. Ultimately, we need a notion of when a sufficiently refined sequence of attacks is valid. This notion is provided by the final inductive predicate is_and_attack_tree. We will not focus on this here; for details, see [10] or the online formalisation [6].

Intuitively, the process of refining corresponds to enlarging an attack tree as depicted in Fig. 1.

Fig. 1. Attack refinement for the healthcare case study (see also Sect. 2.3).

2.3 Modelchecking

Modelchecking is often advertised as a ‘push-button’ technique, in contrast to interactive verification techniques, for example with Isabelle, where the user has to interact with the tool to verify properties. In practice, modelchecking is very successful mainly due to this full automation. The applications in the Isabelle Insider framework that we construct are, however, mostly performed by simple combinations of automatic proof procedures once the theorems and lemmas have been stated. The best known problem of modelchecking is the exponential growth of the number of states, the ‘state explosion’, which arises in most applications, for example from large or infinite data domains. Due to this restriction, models often have to oversimplify.

Another important advantage of modelchecking is the natural use of temporal logic to express system constraints, e.g., to express “on all paths in the model M it is the case that a send request is eventually followed by an acknowledgement ack”.
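
For instance, using the usual CTL operators, this informal requirement could be written as \( \mathbf{AG}\,(send \rightarrow \mathbf{AF}\ ack) \): on all paths it globally holds that a send is eventually followed by an ack.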

Due to the expressiveness of HOL, Isabelle allows us to formalise within HOL the notions of Kripke structures and temporal logic, and to define the semantics of modelchecking by directly encoding the fixpoint definitions of the CTL operators [7]. To realize this, the notion of a change of the infrastructure’s state had to be incorporated into the Isabelle Insider framework. A relation on infrastructures is defined as an inductive predicate called state_transition. It introduces the syntactic infix notation I \(\rightarrow _{i}\) I’ to denote that infrastructures I and I’ are in this relation.

figure h

The definition of this inductive relation is given by a set of rules. To give an impression of this definition, we show here just the rule for the move action.

figure i
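
A hedged sketch of the relation and its move rule is given below. The helper functions graphI (graph projection), move_graph_a (updating the actor’s position), lspace (location attributes), and nodes are assumed names for components that the text does not spell out, and \(@_G\) is the location predicate introduced in Sect. 5.1:

  inductive state_transition :: "[infrastructure, infrastructure] ⇒ bool"  (* written I →⇩i I' *)
  where
    move: "⟦ G = graphI I; a @⇩G l; l ∈ nodes G; l' ∈ nodes G;
             enables I l' (Actor a) move;
             I' = Infrastructure (move_graph_a a l l' (graphI I))
                                 (delta I) (tspace I) (lspace I) ⟧
           ⟹ state_transition I I'"
    (* rules for get, put, and eval are omitted *)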

3 Health Care Case Study in Isabelle Insider Framework

The case study we use as a running example in this paper is a simplified scenario from the context of the SUCCESS project for Security and Privacy of the IoT [2]. A central topic of this project’s pilot case study is to support security and privacy when using cost-effective IoT-based methods for monitoring patients for the diagnosis of Alzheimer’s disease. As a starting point for the design, analysis, and construction, we are currently developing a case study of a small device for the analysis of blood samples that can be connected directly to a mobile phone. The results of the analysis can then be communicated by a dedicated app on the smart phone, which sends the data to a server in the hospital.

In this simplified scenario, the patient and the carer are in a room together with the smart phone.

We focus on the carer having access to the phone in order to support the patient in handling the special diagnosis device, the smart phone, and the app.

The insider threat scenario involves a second app on the smart phone, a banking app, which requires additional authentication by a “secret key”: a small electronic device providing one-time authentication codes, as used by many banks for private online banking.

Assuming that the carer finds this device in the room of the patient, he can steal this credential and use it to access the banking app. Thereby he can take money from the patient’s account without consent.

Fig. 2. Health care scenario: carer and patient in the room may use smartphone apps.

4 Combining Modelchecking and Attack Trees

We now show the interaction of the modelchecking and attack tree approaches by introducing the necessary extensions to the Isabelle Insider framework while highlighting the adaptations that manifest the combination. We use the health care case study introduced in the previous section to show how this combination enables the analysis of the Insider risk given by the carer.

4.1 Relation Between State Transition and Attack Sequences

Modelchecking introduces the concept of state transition explicitly into the Isabelle Insider model. The state transition relation (see Sect. 2.3) provides a transition between different states of the infrastructure, which can evolve into each other through actions taken by actors. By contrast, in the attack tree world we have not explicitly introduced an effect on the infrastructure’s state, but we have equally investigated and refined attacks as sequences of actions, eventually mapping those actions onto sequences of base attacks.

The key idea for combining modelchecking and attack tree analysis is, intuitively, to use the Kripke models as the models for the attack tree analysis. More precisely, the sequences of attack steps that are eventually found through the process of refining an attack need to be checked against the sequences of transitions possible in the Kripke structure, which consists of the graph of the infrastructure’s state changes.

Technically, this requires a slight translation between sequences of steps of the infrastructure’s state-changing relation and that same relation with the actions leading to the exact same state changes annotated at the transitions. These annotations then naturally correspond to the paths that determine the way through the Kripke structure. They can be translated one-to-one into attack vectors.

Formally, we simply define a relation very similar to the state transition relation of Sect. 2.3 but with an additional parameter, the action, added as a superscript after the arrow.

figure j

For the definition of this inductive relation we again show just the rule for the move action; it is nearly identical to the earlier one, merely adding the action as an annotation.

figure k
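
A sketch of the annotated relation and its move rule, under the same assumptions as the sketch in Sect. 2.3 (the paper writes the relation as I \(\rightarrow _{i}^{a}\) I’):

  inductive state_step :: "[infrastructure, action, infrastructure] ⇒ bool"
  where
    move: "⟦ G = graphI I; a @⇩G l; l ∈ nodes G; l' ∈ nodes G;
             enables I l' (Actor a) move;
             I' = Infrastructure (move_graph_a a l l' (graphI I))
                                 (delta I) (tspace I) (lspace I) ⟧
           ⟹ state_step I move I'"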

We define an iterator relation state_step_list over state_step that collects the action sequences along state transition paths.

figure l
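
A plausible sketch of this iterator, collecting the annotations of consecutive steps into an action list (written I \(\rightarrow ^{(l)}\) I’ in the paper):

  inductive state_step_list :: "[infrastructure, action list, infrastructure] ⇒ bool"
  where
    step_nil : "state_step_list I [] I"
  | step_cons: "⟦ state_step I a I'; state_step_list I' l I'' ⟧
                ⟹ state_step_list I (a # l) I''"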

Note how overloading of the operator \(\rightarrow ^{(\_)}\) can be neatly applied in Isabelle.

With this extended relation on states of an infrastructure we can now trace the action sequences of the modelchecking process. Finally, the translation of attack sequences from the attack tree model to action sequences can be formalised by first defining a translation of base attacks to actions.

figure m

From this we define a function transf for transforming sequences of attacks.

figure n
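
Assuming the correspondence used later in Sect. 5.2 (a Goto step becomes a move, Perform a becomes a), the two functions can be sketched as follows; the name transform for the base-attack translation is hypothetical:

  primrec transform :: "baseattack ⇒ action"
  where
    "transform (Goto l)    = move"
  | "transform (Perform a) = a"

  definition transf :: "attackseq ⇒ action list"
  where "transf l ≡ map transform l"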

4.2 Improving Attack Refinement

This relatively simple adaptation of the modelchecking state transitions to action sequences, paired with the transformation from attack traces, has a simplifying as well as a unifying effect: the attack tree approach necessitated the explicit definition of “attack vectors” that could be used to replace an abstract attack node by a sequence of attacks (see Sect. 2.2). This was manifested by the rule refineI, which required a predefined list of attack vectors.

figure o

An example of an attack vector that had to be instantiated for P and provided as a premise to the above rule is UI_AV7 from the unintentional Insider attack vectors of [10].

figure p

Previously, such attack vectors had to be defined as inductive rules in an axiomatic fashion. Now, the attack vectors can be inferred from the modelchecking process. The new rule in the attack refinement definition is refineIMC.

figure q

An application can be seen in the following section.

5 Analysing Carer Attack

5.1 Health Care Case Study in Isabelle Insider Framework

We model only two identities, Patient and Carer, representing a patient and his carer. We define the health care scenario in the locale scenarioHealthcare. The keywords fixes and defines belong to the locale syntax; from now on we drop them, together with the types, for clarity of exposition. The double quotes “s” represent strings in Isabelle/HOL. The global policy is ‘no one except the patient can use the bank app’:

figure r
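
One plausible formalisation of this policy, following the enabling condition used in the analysis of Sect. 5.2; the exact definition in [6] may differ:

  definition global_policy :: "[infrastructure, identity] ⇒ bool"
  where "global_policy I a ≡ a ≠ ''Patient'' ⟶ ¬ enables I bankapp (Actor a) eval"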

The graph representing the infrastructure of the health care case study has the following locations: (0) smart phone, (1) room, (2) bank app, and (3) health app. In order to define the infrastructure, we first define the graph representing the scenario’s locations and the positions of its actors. The actors patient and carer are both initially in the room. The graph is given as a set of location nodes, and the actors residing at certain locations are specified by a function associating lists of actors with the location nodes.

figure s
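
A hedged sketch of these definitions, using the location numbering given above; the edge set is chosen to reflect the described layout (the phone reachable from the room, the apps reachable from the phone) and is illustrative only:

  definition sphone    :: location where "sphone    ≡ Location 0"
  definition room      :: location where "room      ≡ Location 1"
  definition bankapp   :: location where "bankapp   ≡ Location 2"
  definition healthapp :: location where "healthapp ≡ Location 3"

  definition ex_graph :: igraph
  where "ex_graph ≡ Lgraph
     {(room, sphone), (sphone, healthapp), (sphone, bankapp)}
     (λ l. if l = room then [''Patient'', ''Carer''] else [])"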

In the following definition of local policies for each node in the health care scenario, we additionally include the parameter G for the graph. The predicate \(@_G\) checks whether an actor is at a given location in the graph G.

figure t

In this policy, any actor can move to the room, and anyone in possession of the PIN can move onto the sphone and perform all actions there. The following restrictions are placed on the two other locations (a sketch of the corresponding definition is given after the list).

  • healthapp: to move onto the healthapp and perform any action at this location, an actor must be at the position sphone already;

  • bankapp: to move onto the bankapp and perform any action at this location, an actor must be at the position sphone already and in possession of the skey.
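
The following sketch shows how such a definition might look; the credential predicates has_pin and has_skey are hypothetical placeholders for the credential handling via tspace, and the exact policy sets in [6] may differ:

  definition local_policies :: "igraph ⇒ location ⇒ policy set"
  where "local_policies G ≡ (λ l.
      if l = room then {(λ y. True, {move})}
      else if l = sphone then {(λ y. has_pin y, {move, get, put, eval})}
      else if l = healthapp then {(λ y. y @⇩G sphone, {move, get, put, eval})}
      else if l = bankapp then {(λ y. y @⇩G sphone ∧ has_skey y, {move, get, put, eval})}
      else {})"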

The possession of credentials, like the PIN or the skey, as well as the roles that actors can have, is assigned in the infrastructure. We define this assignment as a predicate over actors that is true for those actors holding these credentials. For the health care scenario, the credentials express that both Patient and Carer possess the PIN for the sphone, but only Patient also has the skey.

figure u

The graph and credentials are put into the infrastructure hc_scenario.

figure v

5.2 Modelchecking Supported Attack Tree Analysis

As a setup for the state analysis, we introduce the following definitions to denote changes to the infrastructure. A first step towards critical states is that the carer gets onto the smart phone. We first define the changed infrastructure graph.

figure w

The dangerous state has a graph in which the actor Carer is on the bankapp.

figure x

The critical state of the credentials is where the carer has the skey as well.

figure y

We use these changed state components to define a series of infrastructure states.

figure z

We next look at the abstract attack that we want to analyse before we see how the modelchecking setup supports the analysis.

The abstract attack is stated as ([Goto bankapp] \(\oplus _{\wedge }^{move-grab}\) Perform eval). The following refinement encodes a logical explanation of how this attack can happen: the carer takes the skey, gets onto the phone, then onto the bankapp, and finally evaluates.

figure aa

This refinement is proved by applying the rule refineI (see Sect. 4.2). In fact, this attack could be found by applying refineI and using interactive proof with the modelchecking extension of the Isabelle Insider framework to instantiate the higher order parameter ?l in the following resulting subgoal.

figure ab

This proof results in instantiating the variable ?l to the required attack sequence [Perform get, Goto sphone, Goto bankapp].

So far, we have used a slightly adapted notion of the state transition from the modelchecking approach to build a model for the attack refinement of attack trees. We can further use the correspondence between modelchecking and attack trees to find attacks. To properly employ modelchecking, we first define the Kripke structure for the health care scenario, representing the state graph of all infrastructure states reachable from the initial state.

figure ac
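
A plausible sketch of this definition, applying the Kripke constructor of [7] to the set of states reachable from hc_scenario via the state transition relation (the reflexive-transitive closure is written in Isabelle’s notation for predicates):

  definition hc_states :: "infrastructure set"
  where "hc_states ≡ { I. state_transition⇧*⇧* hc_scenario I }"

  definition hc_Kripke :: "infrastructure kripke"
  where "hc_Kripke ≡ Kripke hc_states {hc_scenario}"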

Following the modelchecking approach embedded into the Isabelle Insider framework [7], we may use the branching-time logic CTL to express that the global policy (see Sect. 5.1) holds for all paths globally.

figure ad

Trying to prove this must fail. However, using instead the idea of invalidation [12] we can prove the negated global policy.

figure ae
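
Sketched in the notation of [7], the two properties above (the AG statement and its EF negation) read as follows; the first statement is not provable, the second is:

  theorem hc_AG: "hc_Kripke ⊢ AG {x. global_policy x ''Carer''}"
    oops   (* fails: the carer can reach the bankapp *)

  theorem hc_EF: "hc_Kripke ⊢ EF {x. ¬ global_policy x ''Carer''}"
    sorry  (* the interactive proof exhibits the action path get, move, move, see below *)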

The interactive proof of this EF property means proving the theorem

figure af

This results in establishing a trace l that goes from the initial state hc_scenario to a state I such that enables I bankapp (Actor ”Carer”) eval holds. This I is, for example, hc_scenario”’, and the action path get, move, move is a by-product of this proof. Together with the states on this path, the transf function delivers the required attack path [Perform get, Goto sphone, Goto bankapp].

6 Conclusions

Summarizing, we have considered the benefits of relating the earlier extensions of the Isabelle Insider framework to modelchecking and to attack trees, and have illustrated them on a health care case study of an Insider attack.

Clearly relevant to this work are the underlying framework and its extensions [7, 9, 10, 14] but also the related experiments with the invalidation approach for Insider threat analysis using classic implementation techniques like static analysis and implementation in Java [18] or probabilistic modeling and analysis [1].

We believe that the combination of modelchecking and attack trees is novel at least in the way we tie these concepts up at the foundational level. Considering the simplicity of this pragmatically driven approach and the relative ease with which we arrived at convincing results, it seems a fruitful prospect to further explore this combination. Beyond the mere finding of attack vectors in proofs, the expressivity of Higher Order Logic will allow developing meta-theory that in turn can be used for the transfer between modelchecking and attack tree analysis.

There are excellent foundations available for attack trees based on graph theory [15]. They provide a very good understanding of the formalism, various extensions (like attack-defense trees [16]) and differentiations of the operators (like sequential conjunction (SAND) versus parallel conjunction [4]), and are amply documented in the literature. These theories for attack trees provide a thorough foundation for the formalism and its semantics. The main problem that adds complexity to the semantic models is the abstractness of the descriptions in the nodes. This leads to a variety of approaches to the semantics, e.g., propositional semantics, multiset semantics, and equational semantics for ADtrees [16]. The theoretical foundations allow the comparison of different semantics and provide a theoretical framework to develop evaluation algorithms for the quantification of attacks.

Surprisingly, the use of an automated proof assistant like Isabelle has not been considered, despite its potential of providing a theory and a mechanised analysis of attacks simultaneously. The essential attack tree mechanism of tree refinement is relatively simple. The complexity in the theories available in the literature is caused by the attempt to incorporate semantics into the attack nodes and relate the trees to actual scenarios. This is why we consider the formalisation of a foundation of attack trees in the interactive prover Isabelle: it supports logical modeling and definitions of datatypes very akin to algebraic specification, directly supported by semi-automated analysis and proof tools. There have already been attempts at formalising attack trees for specific application domains in the interactive theorem prover Isabelle [10] (for IoT Insider attacks). They are also based on the Isabelle Insider framework but support only the use of axiomatized “attack vectors” derived from real Insider attacks. It is necessary to assume these attack vectors to provide the semantics of attack tree refinement.

Clearly, in state based systems, attacks correspond to paths of attack steps. Hence, it is quite natural to use a modelchecking approach to analyse attack trees. In fact, implementations like the ADTool [17] use modelchecking based on the guarded command language gal to analyse scenarios expressed as graphs. Surprisingly, again, it has not been considered to use a logical framework powerful enough to emulate modelchecking to augment this natural approach to modeling and analysing attack trees. The modelchecking approach brings the additional advantage of exploring and thus finding the possible attack refinements necessary for the attack tree development. If embedded within an interactive theorem prover, the integrated formalism can be applied to case studies, and meta-theory can be proved with the additional support and safety guarantees of a proof assistant.

The presented foundation of attack trees in Isabelle is consistent with the existing foundations [4, 15, 16], but instead of providing an on-paper mathematical foundation it provides a direct formalisation in Higher Order Logic in the proof assistant. This enables the application of the resulting framework to case studies and does not necessitate a separate implementation of the mathematical foundation in a dedicated tool. Clearly, the Isabelle framework is less efficient, and the application to case studies requires user interaction. However, the formalisation in Isabelle supports not only the application of the formalised theory but also the consistent development of meta-theorems. In addition, dedicated proof automation through proving additional supporting lemmas is straightforward, and even code generation is possible for executable parts of the formalisation.

Again in comparison to the existing foundations [4, 15, 16], the presented attack tree framework is restricted. For example, it does not yet support disjunctive attacks or attack-defense trees, i.e., the integration of defenses within the attack tree. We are convinced that this is a straightforward future development and will be provided in due course.