Computer systems in any organization face immense threats from malicious attacks. One of the paramount measures for mitigating these threats is to conduct a situational analysis of an organization's computer networks and information systems. Having done so, an organization is strategically placed to make sound decisions about what must be protected and about the mechanisms for responding to attacks that impair its most highly valued assets: its information systems. However, making such decisions is intricate, particularly in the case of enterprise computer networks, because such networks span numerous WANs and support numerous, vastly distributed services. Understanding the situations that pose threats to enterprise systems is therefore both time-consuming and problematic. Operators' situational analysis may be strengthened in several ways, among them the deployment of technologies coupled with situation-aware systems. However, designing such systems for CND (computer network defense) is one of the most difficult tasks an organization faces in enhancing the security of its information systems. Currently, CND and SA dwell principally on self-awareness and hence do not go beyond their own domain; their focus is to block the enemy from attacking the organization's systems. The danger in this approach to systems security is that it gives the enemy opportunities to figure out a next move that would not be blocked. IDS, firewall, and IPS are all CND components that feed into SA, yet this is not adequate to form the basis for decision making. The argument here is that CND and SA have a single-domain focus: all their activities fall within the domain they own. Additionally, current defense mechanisms hardly interact with attackers.
They only focus on defending. Another approach to dealing with attackers' threats is the employment of Endsley's model of self-awareness. However, this paper maintains that the model is of limited suitability for cyber security since it approaches risky situations from only one dimension. Also inherent in this model is the challenge of being passive and relaying only information garnered from its own domain. Given these challenges of the existing CND and SA approaches, organisations should consider influencing and interacting with enemies in an attempt to learn their intents rather than simply blocking them. This is critical in garnering information about the enemies' domain. The whole idea is to use deception to redirect enemies to a remote deception server so that their intentions and abilities become known, which in turn helps in updating an organisation's system defenses. To address these concerns, an active SA is required to help identify who is attacking, why the attacks are carried out, and the ultimate goal of the attacker. Making use of an active SA implies that blocking is insufficient since it leads to the loss of important information about the enemy. The implication of this argument is that an active SA accords organisations the ability to act within the enemy's domain while enriching the capability of the organisation's own SA. Nevertheless, this approach may have drawbacks since attackers are essentially innovative; chances exist that they can engineer new strategies that offset it. Consequently, before analysing the effectiveness of an active SA, it is crucial to consider the existing knowledge of CND and SA to establish how they may contribute to the security of an organisation's systems. This is the focus of this paper.
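The deception idea above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the `Connection` shape, the `DeceptionServer` name, and the `route` dispatcher are hypothetical, not a real product or API. The point is only the contrast with blocking: a flagged source is engaged and observed rather than silently dropped.

```python
from dataclasses import dataclass, field

@dataclass
class Connection:
    src: str       # source address of the traffic
    payload: str   # what the sender attempted

@dataclass
class DeceptionServer:
    # Record of attacker actions, later used to infer intent and capability.
    observed: list = field(default_factory=list)

    def handle(self, conn):
        self.observed.append((conn.src, conn.payload))
        return "decoy-response"  # keep the attacker engaged, not blocked

def route(conn, suspects, decoy):
    """Instead of dropping suspicious traffic, redirect it to the
    deception server so its behaviour can be studied."""
    if conn.src in suspects:
        return decoy.handle(conn)
    return "forward-to-production"

decoy = DeceptionServer()
route(Connection("10.0.0.9", "SELECT * FROM users--"), {"10.0.0.9"}, decoy)
```

After the call, `decoy.observed` holds the attempted action, which is precisely the information a pure blocking defense discards.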
Within the last two decades, the applicability of SA has been remarkably revolutionised, especially in the field of ATC (air traffic control). Another subtle area where the concepts of SA have been widely researched is military operations (Salter et al. 1998, p.5). The effectiveness of SA in military operations is instructive since such operations are comparable to CND in terms of the imminent threats posed by enemies. However, while the application of SA in military operations has been investigated at length, research on the application of SA in CND is still in an embryonic phase (Ou, Boyer & McQueen 2006, p.336). The application of SA in military operations has resulted in enhanced identification of threats and of possible ways of countering them without anticipating other forms of threats related to those already mitigated. The application of SA in CND, by contrast, only leads organisations to block the enemies' servers before their intentions are known. This prevents CND from achieving the purposes for which it was created. According to the Computer Economics Malicious Code Attack Economic Impact Update (2008), this purpose involves making sure that networks and other organisational systems remain secure and in continuous operation, both consistently and reliably (para. 9). Such a purpose involves adopting various actions across networks of computers to facilitate protection, detection, analysis, control, monitoring, and response to myriad cyber attacks, network disruptions, and intrusions, among other actions not authorised by network administrators, which may impact or even compromise information systems and network defenses (Borchgrave et al. 2000, p.56; Cordesman 2002, p.12; Erbschloe 2001, p.83).
The tasks of CND are executed through the collective and collaborative efforts of an organisation's personnel charged with monitoring, defense-system maintenance and management, and the operation and maintenance of network infrastructure. These include network engineers, systems security analysts, and administrators. One of their noblest tasks is to ensure that the CND system is maintained and monitored, and that the necessary action is taken to protect a network system from the risks posed by cyber attackers through espionage, malicious software, destructive code, denial of service, and electronic attacks such as Stuxnet. These tasks are numerous, implying that the CND environment is both complex and challenging, thus calling for the incorporation of SA to aid in identifying potential threats to organisations' network systems.
SA involves the process of perceiving the various elements in an environment that may pose threats to network systems. It entails creating an understanding of these elements through intensive analysis while not neglecting future projections of the impacts of such threats (Endsley 2000, p.33; Gonzalez et al. 2003, p.593). This implies that, in CND approaches, SA essentially focuses on assessing situations in the complex and dynamic CND environment to make precise forecasts that enable operators to estimate the repercussions of attacks. These forecasts also help to identify network foes and to evaluate risks as the foundation for arriving at the most subtle decisions for proactively protecting the most valued assets of an organisation, its information systems, in a concise manner (Gonzalez & Dutt 2010, p.412; Busemeyer & Diederich 2009, p.103; Schneier 2008, p.71). Given these central concerns of SA in CND, it seems likely that SA can produce immense benefits in enhancing network security, owing to its success in other fields such as safety control, military operations, and flight operations.
Arguably, SA involves human mental processes. For these processes to be effective in assessing risks, and for presenting the information necessary to create a thorough understanding of imminent risks and of how such risks may vary over a given span of time, the integration of technology is necessary. This implies that CND systems must possess mechanisms that enable human operators to perform effective and accurate situational analysis. Endsley and Garland (2000) assert that enhancing an operator's situational analysis is a central design factor when developing operator interfaces for an organisation's system applications and infrastructure (p.39). The big question is: how effective are the current CND situational analysis approaches in mitigating network threats posed by adversaries, both presently and in some future period expressed in the time domain?
The centrality of the need to curb cyber attacks in IT governance within organisations strongly implies that the existing CNDs are ineffective, particularly where the application of the current SA is projected into some future time domain. This argument is evidenced by the ever-increasing sophistication of enemies' hacking strategies, such as the ones experienced by Wikileaks and various nations' databases. Such network threats are magnified by government agencies' decisions to depend on online systems (Sideman 2011, p.5). Indeed, the question of the current SA's capacity to protect an organisation from the risk of attacks, both presently and in the future, attracted the Brussels convention attended by the European Union, NATO, and the U.S. defense department in 2011. The main agenda of the convention was to devise strategies for the detection of, prevention of, recovery from, and defense against cyber attacks (Sideman 2011, p.6). While the participants ardently agreed that commendable strides towards the realisation of the convention's main agenda had been made, they held that the current measures for cyber security are incomplete. Consequently, it is vital for organisations to embark on fast-tracking innovative plans for responding to incidents of attack. One such plan is to advocate a collective duty among organisational personnel to remain vigilant against malicious attacks (Lute & McConnell 2011, p.1). This means that the CND operators within an organisation have extra roles to play in ensuring that their organisations remain free from attacks and, should attacks occur, that immediate action is taken to respond by determining the intents and goals of the attackers. This extra role overrules the current approaches of mitigating network attacks through the creation of barriers between attackers and organisations' network systems.
What this implies is that computer network defense (CND) requires the adoption of cyber SA approaches in three main stages and processes: recognition, comprehension, and projection (Endsley 1995, p.35; Tadda et al. 2006, p.625). Recognition refers to the creation of awareness of the prevailing network situation, while comprehension refers to the development of a thorough understanding of the malicious behaviours in current situations that pose threats to a network. Projection, on the other hand, entails assessing the anticipated future actions inspired by the situation the network is currently experiencing.
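The three stages above can be sketched as a small pipeline over network events. The event fields, the failed-login signature, and the threshold of three attempts are assumptions chosen purely for illustration; they are not taken from the cited literature.

```python
def recognise(events):
    # Stage 1 (recognition): perceive the prevailing situation,
    # here by picking out failed login events.
    return [e for e in events if e["type"] == "failed_login"]

def comprehend(recognised):
    # Stage 2 (comprehension): interpret the behaviour; repeated
    # failures from one host suggest a brute-force attempt.
    counts = {}
    for e in recognised:
        counts[e["src"]] = counts.get(e["src"], 0) + 1
    return {src: n for src, n in counts.items() if n >= 3}

def project(comprehended):
    # Stage 3 (projection): anticipate the likely next action
    # for each suspect host.
    return {src: "likely credential compromise" for src in comprehended}

events = [{"type": "failed_login", "src": "10.0.0.5"}] * 3 + \
         [{"type": "failed_login", "src": "10.0.0.7"}]
forecast = project(comprehend(recognise(events)))
```

With this toy data, only the host with three failures survives comprehension, and projection attaches an anticipated outcome to it.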
The above discussion creates the impression that human decision makers are critical to the realisation of an effective CND system. Such a system is largely dependent on the decisions and analysis of human operators regarding the prevailing SA in the network. Jajodia et al. (2010) advocate this line of view. The authors argue that the capacity of corporations to enhance network protection from cyber attacks and malware by deploying algorithms and the myriad available cyber tools, without human intervention interfaces to help make the overall decision that sets the necessary course of action, remains a distant goal (p.157). In this sense, the part played by human network operators in enhancing the security of network systems is not only crucial but also indispensable (Gonzalez et al. 2011, p.51; Johnson-Laird 2006, p.11).
Existing literature on the application of SA in computer network defense reveals that existing SA depends largely on creating self-awareness of potential network threats and then blocking them. In the operation of these CND models, the security analyst plays a proactive role. Cyber situational analysis is a function of the network analyst's memory of cyber insecurity situations (Jajodia et al. 2010, p.159) and of the risk tolerance the analyst has developed (McCumber 2004, p.123; Li et al. 2009, p.5). Approaching SA from this dimension poses an enormous threat to organisational goals, which encompass the need to remain resilient to risks and threats in the future, since experience is only developed upon exposure. Attackers are innovative: they keep changing their attacking strategies. Consequently, an organisation cannot expect to be attacked in the future in the same manner in which it is attacked presently. This makes an SA model based on experience unsuitable for effecting CND. What is desired is a learning model.
A learning model depends heavily on simulating the risk tolerance and memory of experiences that a network analyst develops following exposure to risky situations (Dutt & Gonzalez 2011, p.3; Dutt et al. 2010, p.10). The effectiveness of this model in enhancing future mitigation of simulated attack risks depends on the analyst's capacity to make subtle decisions to respond to risks at the appropriate time and in the right way. Research findings on JDM (judgment and decision making) render this model problematic for mitigating future network attack risks. According to these findings, people's experiences of various environmental events strongly shape the decisions they make (Hertwig et al. 2004, p.536; Lejarraga, Dutt & Gonzalez 2011, p.129). This implies that building a memory of repeated bad experiences may lead network analysts (decision makers) to pay no attention to the activities that would keep the system secure. Good experiences, on the other hand, may increase the probability that an analyst underestimates the activity at hand (Hertwig et al. 2004, p.537). Coincidentally, past research has established that similarity is a substantive contributor to the enhancement of problem-solving techniques, decision-making processes, and the categorisation and cognition of risks (Goldstone, Day & Son 2010, p.117; Vosniadou & Ortony 1989, p.76). In particular, two main similarity models have been proposed: the geometric model (Shepard 1962a, pp. 125-140; Shepard 1962b, pp. 219-246) and the feature-based model (Tversky 1977, pp. 327-352). From the perspective of geometric models, the similarity between two objects, for instance a situation requiring a decision and a memory of experience, is inversely related to the separation in space between the points representing the two objects.
This separation is computed as either the linear or the squared difference between points in the space (Shepard 1962a, p.127). The feature-based model, on the other hand, characterises similarity by matching the features of objects through processes that weight the distinctive and common features of the two objects (Tversky 1977, p.351). This is largely consistent with what Endsley (2004) perceives as constituting SA. In this context, Endsley (2004) holds that SA constitutes the perception of the elements contained in a volume of space and time, coupled with the comprehension of the status of those elements in some future period expressed in the time domain (p.319). This definition of SA is embraced in the SA model proposed by Endsley (1995, pp. 43-49) and later discussed by Andre, Foyle and Hooey (2005) in the fourth level of SA, referred to as resolution (p.24).
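The two similarity models can be made concrete with a short sketch. The exponential decay of distance in the geometric case, and the equal weights `alpha = beta = 0.5` in the feature-based case, are illustrative parameter choices, not values prescribed by Shepard or Tversky; the feature sets describing alerts are likewise invented.

```python
import math

def geometric_similarity(a, b):
    # Geometric model: similarity falls off with the Euclidean
    # distance between the two points in a feature space.
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return math.exp(-dist)  # identical points give similarity 1.0

def tversky_similarity(a, b, alpha=0.5, beta=0.5):
    # Feature-based model: weigh the features common to both
    # objects against the distinctive features of each.
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Comparing a current alert to a remembered incident (toy features).
current = {"port_scan", "night", "external_ip"}
remembered = {"port_scan", "night", "internal_ip"}
score = tversky_similarity(current, remembered)
```

With two shared features and one distinctive feature on each side, the score is 2 / (2 + 0.5 + 0.5) ≈ 0.67, which would rank this memory as a close match for the current situation.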
For a precise determination of the effectiveness of existing computer network defense systems, it is crucial that empirical data be available to aid the analysis of how experience of threat exposure and risk tolerance enhance computer network security. Unfortunately, little literature exists addressing these concerns (Jajodia et al. 2010, p.159; McCumber 2004, p.123; Salter et al. 1998, p.7). There also seems to be little or no research that empirically scrutinises the contribution of risk tolerance and prior experience of risk exposure to enhancing the capacity of human network operators to create a highly effective cyber situational analysis system to curb attacks. However, it is crucial that some attention be paid to the operation of some of the common CND systems in an attempt to examine the various capabilities of situational analysis systems. This is the concern of the next section.
IDS, Firewall, and IPS
IDS, firewall, and IPS constitute the CND component that essentially feeds into SA. A firewall comprises a number of filters, or simply rules. To secure enterprise network systems from attack, the firewall matches its rules against incoming traffic. This means that firewalls only have the capacity to detect potentially risky situations involving malware as the traffic enters the system, not after it has entered the network (Paxson 1998, p.84; Mattord 2008, p.290). The overall intent of firewalls is to block potentially dangerous traffic from getting into the system. However, although firewalls limit accessibility between networks so that attacks can be mitigated, no signal is given to the system administrator about the threat or the goal of the attacker. This implies that a firewall cannot provide human network operators an opportunity to engineer security strategies for preventing similar attacks in the future. Consequently, the similarity models proposed by Shepard (1962a, pp. 125-140; 1962b, pp. 219-246) and Tversky (1977, pp. 327-352) fail to apply where firewalls are used to mitigate network threats, because network operators receive no information on the nature of the blocked attacks. Firewalls are essentially active but not reactive. As opposed to firewalls, an intrusion detection system (IDS) is largely passive. An IDS watches data packets going through the system without blocking them. Much like a firewall, an IDS has numerous rules against which it matches data packets for attacks. When potential attacks are detected, the IDS raises an alarm to the administrator (Amoroso 1999, p.45). Many attacks, such as Trojan horses, viruses, and worms, can be detected by an IDS even if some have escaped the scrutiny of firewalls.
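The contrast between the two mechanisms can be sketched as follows. The rule and packet shapes, the port numbers, and the signature strings are invented for the example; real firewalls and IDSs match on far richer criteria. Note how the firewall returns only an action with no record of intent, while the IDS inspects without blocking and merely reports.

```python
# Ordered rule list; first match wins (an assumed, simplified shape).
RULES = [
    {"action": "block", "dst_port": 23},   # e.g. deny telnet
    {"action": "allow", "dst_port": 443},  # e.g. permit HTTPS
]

def firewall(packet):
    # Match incoming traffic against the rules as it enters the system.
    for rule in RULES:
        if packet["dst_port"] == rule["dst_port"]:
            return rule["action"]
    return "block"  # default-deny; nothing about the attacker is recorded

def ids(packet, signatures):
    # Passive inspection: watch the packet and report matches,
    # without blocking the traffic.
    return [s for s in signatures if s in packet["payload"]]

firewall({"dst_port": 23, "payload": ""})                       # blocked silently
ids({"dst_port": 80, "payload": "GET /etc/passwd"},
    ["/etc/passwd", "cmd.exe"])                                  # alert raised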
IDSs also have the capacity to detect service attacks, data-targeted attacks, and host-based attacks, for instance unauthenticated system logins, among other malicious attacks that slip past firewall blocking (Denning 1999, p.110; Lunt 1993, p.101). Nevertheless, IDSs have a central drawback: the false positives associated with the technology. A false positive means that the IDS may tag otherwise legitimate traffic as bad. Consequently, an IDS requires tuning in an effort to maximise threat recognition while minimising false positives. Tuning is normally required when potential new threats are discovered or when the structure of the CND is changed (Anderson 2001, p.387; Anderson 2002, p.113). It is also important to note that, even as IDS technology matures to weed out false positives more effectively, their complete elimination remains a challenge, especially when optimal control must be maintained. This challenge persists even with the adoption of IPS, which an increasing number of scholars regard as the technology evolving from IDS.
Comparatively, an intrusion prevention system (IPS) possesses all the essential components of an IDS. However, it differs from an IDS in that it can bar malicious traffic and malware from attacking an enterprise. An IPS operates by waiting in-line in the traffic flowing into a network for possible attacks, where it shuts off all attempted attacks flowing through the network wire. Additionally, an IPS has the ability to end “connections in the network by blocking the target access from the account of the user, IP addresses, or any other network association with attackers” (Jones & Endsley 1996, p.504). It may also block access to a targeted host, service, or application. An IPS reacts against detected dangers in other chief ways. Firstly, it can reconfigure other network defense controls, among them routers and firewalls; reconfiguring is done to block attacks. The second mode of response involves the removal of suspected malicious content from packets. An example of this mode of response is the “deletion of attachments that are affected in emails before the email is passed on to the target user” (Anderson, 2002, p.114).
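The two IPS response modes described above can be sketched in a few lines. The signature check, the `.exe` heuristic for suspect attachments, and all the function names are assumptions made for illustration only; they stand in for whatever detection logic and control interfaces a real IPS exposes.

```python
firewall_blocklist = set()  # stands in for a reconfigurable firewall

def reconfigure(src_ip):
    # Mode 1: reconfigure another defense control by pushing a
    # new block rule to the firewall.
    firewall_blocklist.add(src_ip)

def sanitise(email):
    # Mode 2: remove suspected malicious content (here, executable
    # attachments) before passing the email on to the target user.
    email["attachments"] = [a for a in email["attachments"]
                            if not a.endswith(".exe")]
    return email

def ips(packet):
    # In-line operation: inspect traffic as it flows and shut off
    # attempted attacks, reconfiguring the firewall as a side effect.
    if "exploit" in packet["payload"]:  # simplistic signature match
        reconfigure(packet["src"])
        return "dropped"
    return "forwarded"
```

Unlike the passive IDS, a detection here has two effects: the packet is dropped and a downstream control is updated so the same source is blocked earlier next time.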
A critical examination of IPS, IDS, and firewalls reveals that failure to employ an IPS results in attacks flowing through a network without human network operators noticing. A firewall, on the other hand, allows addresses, services, and ports to flow through the network while filtering and then blocking likely attacks. Additionally, firewalls do not possess the intelligence to determine the legitimacy of traffic. This implies that, while IDS and IPS detect and scrutinise traffic to determine whether it comprises an attack, firewalls simply permit or block potential attacks. Even after IDS and IPS scrutiny, if threats are detected, the main aim is to block or deny access. Therefore, arguably, IDS, IPS, and firewalls are under no obligation to reveal either the identity or the aims of attackers (Anderson 2002, p.114). This leaves network system administrators with little opportunity to derive alternative mechanisms for dealing with similar threats in the future. The implication of this argument is that IDS, IPS, firewalls, CND, and SA each operate within their own domain: all their activities fall in one domain. Moreover, these current defense mechanisms never interact with network attackers; they only attempt to defend the host's system from attack. With the use of IPS, IDS, and firewalls, the theory of prior experience, situation memory, and risk tolerance fails to influence the development of a better capacity for situational analysis that would help create a more effective CND following an attack.
Endsley’s Model of Self-awareness
CND and SA can also be approached through Endsley's model of self-awareness. As pointed out before, “Endsley's model for SA comprises of three main phases: perception, comprehension, and projections” (Jones & Endsley 1996, p.504). Without the perception of crucial information in the operator's environment, the probability increases that the operator forms an inaccurate picture of the risky situations facing the network system. Perception here means the capacity to identify a threat. Applied to CND, this component of Endsley's model is critical since Jones and Endsley (1996) discovered that about 76 percent of the total SA errors made by pilots were associated with the perception of the prevailing information within the pilot's environment (p.507). In the comprehension phase, the critical concern is the manner in which people combine, interpret, store, and retain the garnered information. In CND, however, this cannot be achieved without ample information about the attackers' intentions and goals. This argument is directly congruent with the argument that the constructs of SA require head-on tackling of information related to the meaning of a risky situation (Flach 1995, p.3). Consequently, meaning deserves to be interpreted from the dimension of subjective interpretation, or simply awareness. To achieve this, it is desirable that human operators of network systems grasp the tangible implications of potential attack threats, something that is difficult to achieve without knowing the intentions and overall goals of attackers. Projection constitutes the highest level of Endsley's SA model. It entails the capacity to produce precise forecasts of future situations and of situation dynamics. This aspect is well developed among operators who possess the highest levels of understanding of risky situations.
This understanding is arguably well enhanced if human network operators are able to dissect risky and malicious software and other system threats. This cannot be realised without interaction with attackers.
Revisiting Endsley's model, SA is depicted as an entity separate from decision-making and performance. This implies that SA is shown as comprising the operator's internal models of the environmental state. In this sense, situational analysis is depicted as a subtle precursor to decision-making. However, it is critical to note that many elements come into play when an excellent SA is to be converted into successful performance (Barbara et al. 2001, p.61). The argument here is that applying Endsley's model to SA in CND shows that the model does not extend beyond its paradigms of self-influence and self-awareness. The main issue is that the model looks at a risky situation from only one dimension: what the enemy wants a network administrator to see. Consequently, Endsley's SA model is of limited suitability for cyber security. The model also inherits the challenges of IPS, IDS, and firewalls since it is passive and relays only the information it garners from its own domain. Therefore, a research opportunity exists to create other approaches to CND and SA such that network security systems may become interactive with attackers. The goal of such systems is to ensure that attackers are directed to a deceptive server to help garner their attack details, aims, and goals. Armed with this information, it becomes possible to develop a security system that may capture even the ill intentions of future attackers.
The existing CND and SA focus principally on self-awareness, which prevents them from going beyond their own domains. This paper focused on examining various approaches to strengthening organisational security systems so that they develop resilience to potential threats in the form of malware, unauthenticated system logins, denial of service, and unauthorised access to an organisation's data, among others. It is held that network security systems such as IDS, IPS, and firewalls are largely restricted to their own domains, so that security is enhanced only by blocking attackers. This approach, it is argued, leaves a loophole in an organisation's future capacity to mitigate threats, since blocking an attacker's network implies that human network operators and analysts are unable to develop ample knowledge about the attacker's true intentions and goals. Consequently, the paper maintains that a better network surveillance system would entail interaction between an organisation's network system and the attackers' systems so that the identity and the intent of the attacker can be known.
Amoroso, E 1999, Intrusion Detection: An Introduction to Internet Surveillance, Correlation, Trace Back, Traps, and Response, Intrusion.Net Books, New Jersey.
Anderson, P 2002, Computer Security Threat Monitoring and Surveillance, Anderson Co., New Jersey.
Anderson, R 2001, Security Engineering: A Guide to Building Dependable Distributed Systems, John Wiley and Sons, New York.
Barbara, D, Couto, J, Jajodia, S, & Popyack, L 2001, ADAM: Detecting Intrusions by Data Mining, Proceedings of the IEEE Workshop on Information Assurance and Security, West Point, New York.
Borchgrave, A et al. 2000, Cyber Threats and Information Security: Meeting the 21st Century Challenge, The Center for Strategic and International Studies (CSIS), Washington, D.C.
Busemeyer, R & Diederich, A 2009, Cognitive modeling, Sage, New York, NY.
Computer Economics Malicious Code Attack Economic Impact Update 2008, Web.
Cordesman, A 2002, Cyber-Threats, Information Warfare, and Critical Infrastructure, Routedge, London.
Denning, E 1999, An Intrusion Detection Model, Proceedings of the Symposium on Computer Security, Threats, and Countermeasures, Rome, Italy.
Dutt, V & Gonzalez, C 2011, Cyber situation awareness: Modeling the security analyst in a cyber attack scenario through instance-based learning. Proceedings of the 20th Behaviour Representation in Modeling & Simulation (BRIMS) Conference, Sundance, UT.
Dutt, V, Cassenti, N, & Gonzalez, C 2010, Modeling a robotics operator manager in a tactical battlefield. In Proceedings of the IEEE Conference on Cognitive Methods in Situation Awareness and Decision Support, Miami Beach, FL.
Endsley, M 1995, ‘Toward a theory of situation awareness in dynamic systems’, Human Factors Journal, vol. 37 no. 1, pp. 32–64.
Endsley, M 2004, Situation awareness: Progress and directions. In Banbury, S., & Tremblay, A cognitive approach to situation awareness: Theory, measurement and application, Ashgate Publishing, Aldershot, UK.
Endsley, M & Garland, J 2000, Situation awareness analysis and measurement, Lawrence Erlbaum Associates, Mahwah, NJ.
Erbschloe, M 2001, Information Warfare: How to Survive Cyber Attacks, Osborne/McGraw-Hill, New York City.
Flach, J 1995, ‘Situation Awareness: Proceed with caution’, Human Factors, vol.37 no.1, pp. 147-157.
Goldstone, L, Day, S, & Son, Y 2010, Comparison. In B. Glatzeder, V. Goel, & A. von Müller, On thinking: Volume II, towards a theory of thinking, Springer Verlag, Heidelberg, Germany.
Gonzalez, C & Dutt, V 2010, ‘Instance-based learning: Integrating decisions from experience in sampling and repeated choice paradigms’, Psychological Review, vol. 118 no.4, pp. 412- 417.
Gonzalez, C, Dutt, V, & Lejarraga, T 2011, How did an IBL model become the runner-up in the market entry competition? Springer, New York.
Gonzalez, C, Lerch, F, & Lebiere, C 2003, ‘Instance-based learning in dynamic decision making’, Cognitive Science, vol. 27 no.4, pp. 591–635.
Hertwig, R, Barron, G, Weber, U & Erev, I 2004, ‘Decisions from experience and the effect of rare events in risky choice’, Psychological Science, vol. 15 no.8, pp. 534–539.
Jajodia, S, Liu, P, Swarup, V, & Wang, C 2010, Cyber situational awareness, Springer, New York, NY.
Johnson-Laird, P 2006, How we reason, Oxford University Press, London, UK.
Jones, G & Endsley, M 1996, ‘Sources of situation awareness errors in aviation’, Aviation, Space and Environmental Medicine, vol. 67 no. 6, pp. 507-512.
Lejarraga, T, Dutt, V, & Gonzalez, C 2011, ‘Instance-based learning: A general model of decisions from experience in repeated binary choice’, Journal of Behavioural Decision Making, vol.12 no.9, pp. 123-147.
Li, J, Ou, X, & Rajagopalan, R 2009, Uncertainty and risk management in cyber situational awareness, Web.
Lunt, F 1993, Detecting Intruders in Computer Systems, 1993 Conference on Auditing and Computer Technology, SRI International, New York.
Lute, H & McConnell, B 2011, A civil perspective on cyber security, Web.
Mattord, V 2008, Principles of Information Security, Course Technology, Boston, MA.
McCumber, J 2004, Assessing and managing security risk in IT systems: A structured methodology, Auerbach Publications, Boca Raton, FL.
Ou, X, Boyer, F, & McQueen, A 2006, A scalable approach to attack graph generation. In Proceedings of the 13th ACM Conference on Computer and Communications Security, Vancouver, British Columbia, Canada.
Paxson, V 1998, Bro: A System for Detecting Network Intruders in Real-Time, Proceedings of the 7th USENIX Security Symposium, San Antonio, TX.
Salter, C, Saydjari, O, Schneier, B, & Wallner, J 1998, Toward a secure system engineering methodology, In Proceedings of New Security Paradigms Workshop, Charlottesville, VA, ACM.
Schneier, B 2008, Secrets and Lies: Digital Security in a Networked World, Wiley Computer Publishing, New York City, NY.
Shepard, N 1962a, ‘The analysis of proximities: multidimensional scaling with an unknown distance function: Part I’, Psychometrika, vol. 27 no. 3, pp. 125–140.
Shepard, N 1962b, ‘The analysis of proximities: Multidimensional scaling with an unknown distance function: Part II’, Psychometrika, vol. 27 no. 4, pp. 219–246.
Sideman, A 2011, Agencies must determine computer security teams in face of potential federal shutdown, Web.
Tadda, G, Salerno, J, Boulware, D, Hinman, M, & Gorton, S 2006, ‘Realising situation awareness within a cyber environment’, Proceedings of SPIE, vol. 6242, 624204.
Tversky, A 1977, ‘Features of similarity’, Psychological Review, vol. 84 no. 7, pp. 327-352.
Vosniadou, S & Ortony, A 1989, Similarity and analogical reasoning, Cambridge University Press, New York, NY.