
1 Introduction

Control mechanisms for the evolving potential of autonomous systems are not yet sufficiently established. However, there is a need for control to be allocated throughout the organisational and institutional levels of society in order to manage increasing complexities. At the same time, training seems to lag significantly behind the level of technological complexity in automated systems. This study seeks to highlight current automation/autonomy issues as represented by stakeholders positioned at various echelons of the institutions involved in civil air traffic management (ATM). The study will first review how these concerns are expressed in different domains (such as rail, civil aviation and the military), supported by reviews of the cognitive and resilience engineering literature.

As automation increases the degree of coupling (defined as the interdependence of one element on another), its impact on, and interaction with, other system components cannot be satisfactorily anticipated. This is why increased automation contributes to those accidents/incidents in which components’ behaviour, while within the range of expected variability, creates the conditions for a breakdown. In order to better appreciate the effects of multiple dependencies, Feary and Roth (2014) emphasise that new information from the context is needed when designing for complex interactions. Too often, safety risks associated with the introduction of new technologies, such as safety nets, are examined without an explicit definition of the system’s strategic goals, objectives and constraints. Contextual information should include a model of the system and operations viewed as subcomponents of a larger system, in order to allow potential interactions/dependencies among sub-components to be identified. In particular, goal conflicts, resource shortages, double-bind situations and coordination/communication issues should be considered for their impact on the success of the new human-automation subsystem.

Operational issues raised by the introduction of automation have been documented by Balfe et al. (2012) in a study of the consequences of Automatic Route Setting (ARS) for signallers’ management of train routing. Some of the issues raised were: (i) lack of the appropriate co-ordination mechanisms now needed, as new dependencies were created between remote teams; (ii) lack of contextual, up-to-date traffic information, requiring operators to monitor for inappropriate ARS routing decisions; (iii) ARS decision-making based on a narrower set of criteria, making it inadequate to reflect the complexity of the regulating task; (iv) lack of focused training that would allow signallers to understand and predict ARS; (v) poor feedback preventing signallers from knowing what the system is doing (no matter the degree of technical complexity); (vi) sparse knowledge of how the system works; (vii) effects caused by ‘automation surprises’ due to an inadequate understanding of the underlying ARS logic; (viii) bending the use of a safety net to cope with the design deficiencies of ARS; (ix) workload not being reduced outright, but rather changed in its pattern; and (x) responsibility allocation not being clearly defined. All of the items on this list resonate with issues raised in the cognitive engineering literature dating back to the 1990s and make the recommendations by Feary and Roth about contextual framing of automation even more compelling.

The US Defense Science Board (DSB) published a review of major automation issues and assumptions in 2012. In this review, the role of ‘autonomous’ systems in US Department of Defense (DoD) operations is critically appraised. Although autonomy technologies - which presume even less human involvement than automation - are recognised as having significant impacts on warfare worldwide, their potential contribution not only to vehicle and platform control but throughout all echelons of the command structure is hampered by the lack of a proper conceptual framework for the design and implementation of autonomous systems. Two points are worth noting. First, the widely accepted notion of ‘level of automation’ does not seem adequate, as it emphasises the tasks each agent is made responsible for without highlighting the fundamental need to coordinate the management of different aspects of the mission. Second, it tends to imply that the levels assigned to planning and execution are fixed throughout a mission, and it offers no guidance on how to switch among levels as the mission unfolds. If the potential of these intensive technological environments is to be realised, then a more comprehensive framework for design and deployment needs to be developed. The proposed framework extends the scope of autonomous systems by including the echelons of the mission structure from the mission commander to the section leader, and on down to the pilot or sensor operator. The autonomous system framework outlines three criteria that design decisions should meet:

  1. Ensure clear allocation of functions and responsibilities to achieve specific capabilities;

  2. Understand that allocations might vary within the same mission and through echelons; and

  3. Make explicit the high-level system trade-offs that are inherent in any autonomous capabilities.

It is important to emphasise the need to ensure coordination across echelons and roles as autonomous components increase. In particular, design trade-offs should be explicitly considered. Often autonomous systems are introduced without considering the wider consequences and adjustments required once they are fully deployed. As a consequence there is a risk that new sources of error are introduced and that the autonomous technology is not used to its full potential.

In a recent review of automation issues raised by the two fratricide incidents in the second Gulf War, Hoffman et al. (2014) report and comment on an investigation commissioned from the Army Research Laboratory (ARL) on Patriot-human system performance. (Patriot - an example of intensive technology deployed during the second Gulf War - is a missile system that launches advanced-technology ammunition capable of neutralising multiple air targets.) A fairly broad range of issues emerged, mainly related to what is called an “undisciplined insertion” of autonomous technology, where very little, if anything at all, was done to anticipate downstream consequences. Undisciplined automation might involve some of the following:

  • software failures not being adequately addressed during software upgrades, and not being made known to operators through training or standard operating procedures (SOPs);

  • an emphasis on software development that does not address software failure and possible human coping strategies, but instead pushes towards more autonomous technology with less and less apparent need for human intervention, leaving the front line uninformed about the issues;

  • failure to train for expertise while encouraging a ‘blind’ trust in the autonomous weapon system: training emphasised rote drill rather than the highly specialised technical skills needed to master the complexity of the monitoring and control process, even though technology-intensive systems require considerable operator expertise for effective use and complexity cannot be reduced by progressively designing the human out of the loop. The current trend is to characterise the human as on-the-loop rather than in-the-loop, indicating a shift towards monitoring the autonomous system rather than controlling it; such a function requires a new set of skills which has not received adequate attention from developers, commanders and those higher up in the echelons of organisations;

  • inadequate administrative procedures for job allocation, such that crew members are rapidly rotated out of battle positions on to other jobs. It proved difficult to keep operators and crews in the same position long enough for them to reach a satisfactory level of competence, with the result that fairly inexperienced crews learn and adapt to the new complexities in a somewhat haphazard fashion.

The autonomous system should be the subject of analysis, testing and costing in its organisational context, including infrastructures and training requirements.

A report on the interfaces of modern flight deck systems prepared by the Federal Aviation Administration (FAA) described the aviation industry as “very safe” (FAA 1996). However, a separate review of the data identified vulnerabilities in flight crews’ management of automation and their functional understanding of the situation. To address these concerns, the Performance-based operations Aviation Rulemaking Committee (PARC) and the Commercial Aviation Safety Team (CAST) established a joint working group comprising representatives from the industry (including individuals drawn from both the authorities and research communities) to update the 1996 report. The working group identified several factors that may have an impact on future operations:

  • Growth in the number of aircraft operations;

  • Evolution in the knowledge and skills needed by crew and air traffic personnel;

  • Historically low commercial aviation accident rates that make the cost/benefit case for further safety measures and regulatory change more challenging to argue;

  • Future airspace operations that exploit new technology and operational concepts for navigation, communication, surveillance and air traffic management.

The list above is the point of departure for the FAA report to illustrate current issues of automation interaction, design and training (the focus is mostly on flight crews). Two recommendations and two (combined) findings from the report follow; these summarise some of the most commonly debated problems, such as the need to model interactions and dependencies and to develop up-to-standard training programmes and training skills.

Recommendations 6 and 14 – Flight Deck System Design and Training; Instructor/Evaluator Training and Qualification: flight crew training should be enhanced to include […] system relationships and interdependencies during normal and non-normal modes of operation […]. Review guidance for […] training and qualification for instructors/evaluators. This review should focus on the development and maintenance of skills and knowledge to enable instructors and evaluators to teach and evaluate flight path management, including use of automated systems.

Recommendation 18 – Methods and Recommended Practices for Data Collection, Analysis and Event Investigation that address Human Performance and Underlying Factors: Explicitly address underlying factors in the investigation, including factors such as organisational culture, regulatory policies, and others.

Findings 27–28 – Interactions and Underlying Factors in Accidents and Incidents, and mitigation of risk factors: current practices for accident and incident investigation are not designed to enable diagnosis of interactions between safety elements and underlying or “latent” factors […] there is a lack of data available addressing such factors. When developing safety enhancements, such factors (e.g. organisational culture or policies) are just as important to understand as the human errors that occur. [Interaction among latent conditions might imply that] mitigations to one risk factor can create other, unanticipated risks.

Historically, a somewhat unhelpful human-centric approach has prevailed, in which automation issues revolved around the one-to-one interaction at the human-automation interface. Future debate should no longer place humans, viewed as disconnected from societal pressures and roles, at the centre of a far too impoverished ‘universe’ of dynamic interactions.

There are considerable challenges posed for humans by technology in complex work systems (Hoffman et al. 2014). These challenges need to be better understood so as to provide the more appropriately skilled workforce that emerging technologies now require. It is thus vital to embrace the concept of human-machine inter-dependence and collaboration (Bradshaw et al. 2013). It is also necessary to acknowledge the role of human expertise in the implementation of such systems. As Hawley (2011) notes, it is a mistake to believe that we can achieve optimal performance levels through technology alone (Hoffman et al. 2014).

These and other concerns have moved the UK Air Navigation Service Provider (ANSP) NATS and the UK Civil Aviation Authority (CAA) to elicit views and concerns within the air transport industry through a number of workshops held to keep up to date with the continuous changes taking place within the industry. The workshops were intended to contribute to the theoretical and practical body of reference material that can be used by industry specialists (such as regulators) to understand safety attitudes at an organisational and managerial level. In one such workshop, held by NATS/CAA in February 2014, 66 industry professionals (including pilots, engineers, regulators and air traffic controllers) were asked a series of questions, each designed to explore present and future implementations that use advanced human-system integration (i.e. automation) and the need for further regulation. Questions and answers were recorded in bullet-point form and placed within a matrix. Using Grounded Theory (GT), this study maps and interprets the workshop data and questionnaires gathered to elicit professionals’ views on automation in the aviation industry. The GT method is used to analyse the factors affecting automation at an organisational level, that is, the roles and responsibilities identified by the analysis. The aim of the study is to gauge stakeholder attitudes at an organisational level.

The 2014 workshop and the industry surveys held in 2015 provide a critical reflection of industry professionals’ views, including: how confident stakeholders feel; what they believe; issues that need to be addressed; and the changes that could be made to improve aviation safety. The GT process interprets the characteristics of the data gathered by describing, categorising and developing themes before applying theoretical foundations to them.

Two important features of the GT method which make it particularly suitable for this study are that the themes are traceable to the data and that they are ‘fluid’, meaning that emphasis is placed on process and on the temporal nature of the theory. This production process helps to build a story or an “account”, which provides the building blocks of a hypothesis.

In doing so, the method draws out observations of reciprocal changes in patterns of action/interaction between humans and automated systems as well as among humans themselves. Seen at the organisational level, these interactions highlight the needs, behaviours and actions of humans within a consistently changing environment.
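To make the traceability property concrete, the short Python sketch below shows one way a theme can remain linked back to the statements that support it. The statement texts, codes and theme groupings are hypothetical examples introduced for illustration only; they are not the actual workshop data.

    from collections import defaultdict

    # Hypothetical workshop statements (illustrative only, not the actual 2014 data).
    statements = {
        "S1": "Regulators rely on feedback from operators to understand the system.",
        "S2": "Pilots lose manual flying skills when automation handles routine tasks.",
        "S3": "It is unclear who is responsible when the automation acts on its own.",
    }

    # Open coding: each statement is tagged with one or more descriptive codes.
    open_codes = {
        "S1": ["feedback loops", "regulator knowledge"],
        "S2": ["skill degradation", "trust in automation"],
        "S3": ["authority and responsibility"],
    }

    # Axial coding: codes are grouped into broader themes (again illustrative).
    theme_of_code = {
        "feedback loops": "Feedback loops within stakeholder interactions",
        "regulator knowledge": "Feedback loops within stakeholder interactions",
        "skill degradation": "Interactions with automation undermine confidence/trust",
        "trust in automation": "Interactions with automation undermine confidence/trust",
        "authority and responsibility": "Delegation of control, autonomy, authority and responsibility",
    }

    # Build a theme -> supporting-statement index, so every theme stays traceable to the data.
    support = defaultdict(set)
    for sid, codes in open_codes.items():
        for code in codes:
            support[theme_of_code[code]].add(sid)

    for theme, sids in sorted(support.items()):
        print(theme)
        for sid in sorted(sids):
            print(f"  {sid}: {statements[sid]}")

Keeping this index from themes back to raw statements is what allows any emerging theme to be audited against the data that produced it.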

2 Study Objective

This study builds on previous research commissioned by NATS to elicit critical views from all parties involved in the design, implementation, regulation and use of existing or planned automated systems (Amaldi and Smoker 2013). It will also analyse views from major stakeholders in civil aviation/air traffic management environments about the current status of automation and the roles that each stakeholder group can play in addressing areas of concern. The study will lay the foundations for an automation survey to gauge stakeholder attitudes at an organisational level. This will differ from previous automation surveys which historically have been largely limited to the level of human-machine interaction at an individual (operator) level. In doing so, it will identify the challenges and paths for improvements in the field.

The study comprises three key stages:

  (a) Carrying out a thematic analysis of the statements issued during the workshop.

  (b) Elaborating on the 2014 workshop statements through surveys with stakeholders to gain a thorough understanding of the issues, and checking through a survey that the study themes constitute an accurate synthesis of individual stakeholder views.

  (c) Testing the relevance and significance of statements at an organisational level by asking the group to score the relevance of statements on a scale, thus capturing the group view (a minimal sketch of such scoring follows this list).
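As an illustration of stage (c), the sketch below shows one possible way of aggregating statement-level relevance scores into a group view. The statements, scale and scores shown are hypothetical and are not results from the study.

    from statistics import mean, stdev

    # Hypothetical relevance scores (1 = not relevant, 5 = highly relevant) given by
    # participants to three illustrative statements; not the actual survey data.
    scores = {
        "Feedback loops within stakeholder interactions": [5, 4, 5, 3, 4],
        "Just Culture undermined by legal realities": [4, 4, 3, 5, 4],
        "Efficiency prioritised over thoroughness (ETTO)": [5, 5, 4, 4, 5],
    }

    # The group view is summarised per statement by the mean score and its spread.
    for statement, ratings in scores.items():
        print(f"{statement}: mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}, n={len(ratings)}")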

Results from stage (a) will be reported in what follows.

3 Thematic Analysis

Systems thinking is about non-linearity and dynamics rather than linear cause-effect-cause sequences. In ultra-safe industries, “accidents come from relationships, not broken parts (rules)” (Dekker 2011). These “relationships” comprise “soft”, difficult-to-define issues such as the nested layers in complex interactions between human agents (engineers, pilots, ATCs and regulators), between human agents and procedures (flight plans, rules and procedures), and between operators and technical systems (radar systems, aircraft navigation systems, traffic alert and collision avoidance system).

Using GT, sixteen themes have been identified that can be integrated into six broad interaction-related themes, all of which appear to have some relation to the over-arching concept of safety culture:

  • 1. Feedback loops within stakeholder interactions. Generally speaking, there is an assumption that feedback will assist stakeholders to: (1) increase their knowledge of the system and thus (2) improve the awareness of individuals within the organisation (Amalberti 2013). More importantly, it is also assumed that through such interaction stakeholders can better understand and anticipate the impact of their actions so as to prevent system failures. Regulators, in particular, are required to rely on feedback from industry experts in order to gain a broader perspective of the potential interactions and so anticipate failures within the system. However, because regulators do not conduct actual operations and have different educational backgrounds from other stakeholders (engineers, pilots and ATCs), they may not fully understand the broader perspective of the organisational system. The best they can do is to rely on the knowledge base of other stakeholders, which leaves them with only partial knowledge and may lead to misinterpretation or bias on their part when creating regulatory structures. As a result, regulator group interactions that rely on feedback can be inadequate, because regulation becomes more opaque and difficult to manage (Amalberti 2013). This raises the question of whether the system is unmanageable.

  • 2. System not designed to optimise socio-technical interactions. Designers cannot foresee all possible scenarios of system failure and are thus not able to provide automatic safety devices for every contingency. Automation therefore becomes limited with regard to dealing with multiple failures, unexpected problems and situations requiring deviations from Standard Operating Procedures (SOPs). Furthermore, unanticipated situations requiring human agents to manually override automation are difficult to understand and manage. For instance, too much time spent trying to understand the origin, conditions or causes of one or more alarms may distract pilots from other priority tasks, such as managing pitch, power and roll when flying the aircraft. This may create adverse circumstances through the pilot’s surprise, as well as inducing peaks of workload and stress. Interactions of this kind may also foster human agents’ distrust of automation and its disuse in the future (Woods 2006).

  • 3. Interactions with automation can undermine confidence/trust. Human distrust of automation undermines confidence in it (Gao et al. 2006). Operators also lose confidence in their own ability because the use of automation contributes to the lack of manual skills practice and can cause skills degradation (Gao et al. 2006). This may make them more reluctant to be proactive when interacting with automation.

  • 4. Degree and delegation of control, autonomy, authority and responsibility between human and automated agents need to be better understood. There is a poor understanding of the mechanics behind automation and how to manage human interaction with it. How much authority should we grant to automated systems? Which stakeholders should have authority for which tasks? How are decisions taken to empower automation, and to what extent? Too much authority, autonomy and control in the hands of the human agent can be seen as failing to optimise automated systems (Sheridan and Parasuraman 2006). On the other hand, too much authority, autonomy and control given to automated systems can give rise to an over-reliance on automation (Thackray and Touchstone 1989; Wiener 1981). Over-reliance on automation can lead to interaction challenges when operators need to transition back to manual or degraded modes. Furthermore, new technologies can complicate or change the operator’s tasks, altering the situations in which tasks occur and even the conditions that affect the quality of the operator’s work and engagement in such tasks (Carroll and Campbell 1988; Dekker and Woods 2002). For example, pilots today “monitor screens” rather than fly planes. The building of interactions based on trust, cooperation and coordination between human agents and automation needs to be better understood and managed (Dekker and Woods 2002; Hancock 2014).

  • 5. Organisational “Just Culture” undermined by legal realities. This refers to “good practice” principles that reduce finger-pointing and encourage individuals to report near-misses (Dekker 2007). While Just Culture is an admirable goal (and one that would shed more light on potential safety improvements), legal realities are such that individuals may be reticent about reporting incidents for which they might be held liable for fear of the legal consequences. These contradictions and the lack of transparency can create blind spots within the organisation, which may subsequently obscure sound decision-making.

  • 6. Human interaction and the ETTO principle. This theme acknowledges the Efficiency-Thoroughness Trade-Off (ETTO) principle: the balance between efficiency (productivity, profits and business realities) on the one hand, and thoroughness (such as safety assurance and human reliability) on the other (Hollnagel 2009). In an ideal world, corporate governance and ethical management practices would be of paramount importance, particularly in high-risk industries such as aviation. However, market forces encourage productivity, incentivising stakeholders to focus on increases in production/workload without foreseeing the impact on safety (Hollnagel 2009). As the ETTO principle demonstrates, the reality is that management’s prioritisation of production over safety may bring financial benefits, but also a negative impact on safety that only becomes clear in the future, by which time the manager may have left the organisation. Such a “bad decision” is rarely traced back to the manager once time has passed and they are no longer with the organisation. As a result, short-term financial gain (such as a significant bonus) may prevail over long-term safety considerations (for which there is no trail of responsibility). It can therefore not be assumed that companies will prioritise safety when making strategic decisions. Business leaders are under pressure to be productive, competitive and efficient, and thus may run the risk of encouraging an organisational culture where productivity is favoured over safety.

4 Integration with Previous Study

The web of connected themes in Fig. 1 constitutes the articulated views and perspectives of the stakeholders interviewed in the February 2014 workshop.
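As a rough illustration of how such a web of themes can be represented and inspected, the sketch below treats the six broad themes as nodes of an undirected graph. The specific links shown are assumptions introduced for illustration only; they do not reproduce the actual connections in Fig. 1.

    # The six broad themes as nodes of an undirected graph; the edges below are
    # illustrative assumptions and do not reproduce the actual links in Fig. 1.
    themes = [
        "Feedback loops",
        "Socio-technical design",
        "Confidence/trust",
        "Control, autonomy, authority and responsibility",
        "Just Culture vs legal realities",
        "ETTO principle",
    ]

    edges = {
        ("Feedback loops", "Confidence/trust"),
        ("Socio-technical design", "Control, autonomy, authority and responsibility"),
        ("Confidence/trust", "Control, autonomy, authority and responsibility"),
        ("Just Culture vs legal realities", "Feedback loops"),
        ("ETTO principle", "Socio-technical design"),
    }

    # Simple adjacency listing: which themes each theme is linked to.
    for t in themes:
        neighbours = sorted({b for a, b in edges if a == t} | {a for a, b in edges if b == t})
        print(f"{t}: {neighbours}")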

Fig. 1. The ‘web’ of connections and links between themes.

The outcome of this workshop is compared with that of two previous workshops (Amaldi and Smoker 2012) to check for thematic overlap and consistency. Figure 2 shows a number of statements extracted from the previous workshops. Figure 3 shows where themes from previous workshops overlap with those from the 2014 workshop.

Fig. 2. Main reflections about automation taken from the December 2011 survey (Amaldi and Smoker 2012).

Fig. 3. Themes from the February 2014 workshop overlapping with those from the December 2011 survey (Amaldi and Smoker 2012).

5 Conclusion

The focus of the study is ultimately on the interactive processes between technology and humans as conceived within and across domains of expertise, from front-line operators to regulators. Emerging technologies lead to inadequate understanding, control and management of the increasing complexities faced by stakeholders (Bradshaw et al. 2013). As automation/autonomous systems make their way into safety-critical systems, the dependencies of interactions that impact on other system sub-components cannot be satisfactorily anticipated (Perrow 1984). Thus, socio-technical interactions, particularly in times of crisis, become poorly understood (Hancock 2014). This, in turn, makes training for these technologies inadequate (Hancock 2014). Therefore, the challenges of emerging technologies need to be better understood in order to provide control and a more skilled workforce (Bradshaw et al. 2013). Recent research views human-machine inter-dependence and collaboration as key to improving effectiveness in a work system (Hancock 2014). For this to happen, human expertise needs to be recognised as key, since optimal performance levels cannot be achieved through technology alone (Bradshaw et al. 2013; Hancock 2014). The themes developed through GT assisted in viewing the emerging patterns of the relationships, intent, behaviours and actions of individuals interacting with each other as well as with automation. These patterns help to provide a clearer picture of what expertise stakeholders believe is needed in order to perform at optimal levels, and where potential pitfalls lie. Human-machine teamwork is seen as vital to allow “virtues” to emerge and propagate, chief among these being the wisdom to understand “how to work smarter” (Johnson et al. 2014). Mapping the patterns of interactions between team members in socio-technical systems may be a step in the right direction to gain such wisdom (Fig. 4).

Fig. 4. Connected themes appear to be influenced by pervasive external themes within the organisational environment; the over-arching theme seems to be safety culture.

The findings of the study are intended to contribute to the theoretical and practical body of reference material that can be used by industry specialists (such as regulators) to understand safety attitudes with respect to automation within management and organisations.

Using these findings, current regulations will be tested for their suitability in future operating environments. Assessments will also be made as to whether any further requirements are needed both nationally and globally.

This study also aims to lay the foundations for an automation survey to measure stakeholder attitudes at an organisational level; automation surveys have historically mostly been limited to the level of human-machine interaction at an individual (operator) level. Further, this study is part of an ongoing CAA/NATS-initiated project whose aim is to provide guidance material to help create, design and deploy systems for safe and effective operation, while recognising business drivers for the industry as a whole.