1 Introduction

In the movie “The Terminator”, a computer system in the future has become self-aware and is leading an onslaught against the human race. This theme of intelligent machines has been played out repeatedly in books, movies, and other media. Leadership, similarly, has been a topic of study and reflection for thousands of years, but it is almost always considered in a human context: humans leading other humans. Recently, some studies have examined the composition and performance of human-agent teams, while others have examined the role of humans leading teams of artificial agents [1, 2]. The present examination is unique in that it explores whether an artificially intelligent machine is capable of providing limited leadership to a person for a specific task.

There are many different definitions of leadership. As Stogdill [3] points out in a review of leadership research, there are almost as many definitions of leadership as there are people who have tried to define it. Peter Northouse offers the following definition, which will serve as the working definition of leadership in this paper: “leadership is a process whereby an individual influences a group of individuals to achieve a common goal” [4]. The main difference in this study is that an artificially intelligent agent, rather than a person, will do the influencing. Recent leadership research has concentrated on transformational leadership [5, 6], in which a charismatic human leader creates a vision that people will want to follow. However, as important as transformational leadership is, it has weaknesses [4, 7]. Transactional leadership is still required in a vast set of circumstances and is closely aligned with effective management [8, 9]. A transformational leader creates the vision, a transactional leader ensures that the vision is implemented, and the two roles are necessary and complementary [10]. A transactional leader does not individualize the needs of subordinates or focus on their personal development. Instead, transactional leaders exchange things of value with subordinates to advance their own and their subordinates’ agendas [11,12,13].

As intelligent systems advance and become more ubiquitous, we need to explore new dimensions of human-computer interaction based on natural communication patterns and consideration of human individual differences. Next-generation information systems will involve both the automated delivery of human-like communication and the interpretation of human verbal and non-verbal messages [14,15,16,17]. Given these assertions, providing a computer system with a knowledge base on which to draw in order to deliver appropriate messages to a human user is an ambitious undertaking and a novel conceptualization for information systems [18].

In this paper, we focus on transactional leadership and propose that this form of leadership need not be confined to human-to-human interactions. We take the position that transactional leadership can be automated. In other words, a system can be developed and/or trained to provide leadership to human counterparts in a computer-to-human interaction for a limited project. We propose to automate three leadership behaviors and explore the social presence of the automated leader as a moderating variable.

2 Research Background

At the macro level, this research paper proposes to examine the relationships among automated leadership, social presence, task performance, and follower satisfaction. Figure 1 shows the proposed relationships between these constructs. The very essence of leadership is to improve performance and develop followers. Leadership theories posit that leadership consists of behaviors that should be applied strategically and systematically to motivate individuals and teams to perform [19,20,21,22,23,24,25].

Fig. 1. Basic research model

2.1 Automated Leadership

Multiple supervisory behaviors have been shown to have a positive impact on performance [22, 26,27,28]. We propose to apply an automated leadership style that highlights the importance of certain behaviors, such as providing information and developing goals [29]. Research in virtual teams has shown that effective leaders in distributed teams are extremely efficient at providing regular, detailed, and prompt communication to their peers and at articulating role relationships (responsibilities) among the virtual team members [30]. Three leadership behaviors have been selected for automation: goal setting, performance monitoring, and performance consequences.

Goal Setting.

A goal is a desired state or outcome [31]. According to Locke and Latham, goals affect performance through four mechanisms. First, they serve a directive function. Second, goals have an energizing function. Third, goals affect persistence. Fourth, goals affect action indirectly by leading to the arousal, discovery, and/or use of task-relevant knowledge and strategies. Locke and Latham showed that the highest and most difficult goals produced the highest levels of effort and performance [31]. They also found that specific, difficult goals led to consistently higher performance than urging people to do their best. Atkinson [32], however, showed an inverse, curvilinear relationship between task difficulty (measured as probability of success) and performance: the highest level of effort occurred when the task was moderately difficult. Therefore, effective leaders will set goals of appropriate difficulty to stimulate optimal performance given a team’s capability. For the ad hoc nature of this leader-follower experiment, effective goal setting involves formulating specific, challenging, and time-constrained objectives [33].

Performance Monitoring.

Antonakis et al. [19] noted that transactional leadership is the ability to control and monitor outcomes. Research by Larson and Callahan [34] examined the role of monitoring in performance. They hypothesized that performance monitoring would have an independent effect on work behavior and found that monitoring improved subjects’ work output independent of other factors. Similarly, Brewer [35] found that the quantity of work improved when monitored. Aiello and Kolb [36] examined the role of electronic performance monitoring and social context on productivity and stress. They found that individually monitored participants were vastly more productive than those monitored at the group level for a simple task. More recent research has evaluated how electronic performance monitoring systems affect emotion, performance, and satisfaction [37, 38]. Therefore, effective leaders will actively monitor the performance of individual team members and of the team as a whole.

Performance Consequences.

Bass [39, 40] argued that theories of leadership have primarily focused on follower goal and role clarification and on the ways leaders reward or sanction follower behavior. Similarly, Larson and Callahan [34] found that monitoring combined with consequences (feedback to the subjects about their performance during the task) significantly increased subjects’ work output and provided the largest increase in productivity. Thus, how well a leader is able to monitor performance and influence the team’s behavior is a measure of transactional leadership ability. Follower behavior can be shaped by effectively providing feedback and appropriate consequences. Consequences can be defined either as motivating/reinforcing events or as disciplining/punishing ones [41, 42]. Komaki et al. [27, 28] expanded this definition to include consequences that are neutral and informational in character. For this study, we use their definition of performance consequences: communicating an evaluation of, or indicating knowledge of, another’s performance, where the indication can range from highly evaluative to neutral. This type of communication is vital for performance and compliance [27, 43]. An artificial system can operate by creating clear structures that make plain what is required of subordinate team members and what rewards they will receive for following instructions. Punishments can also be clearly stated, and a computer system can then be coded to apply operant conditioning to followers. Komaki provides several examples of positive, negative, and neutral consequences, some of which are listed below; a sketch of how such consequences might be selected automatically follows the examples:

Positive

  • “You have done good work; no signs of errors!”

  • “Great, you have done it so quickly.”

Negative

  • “You have made a great deal of errors.”

  • “Oh no. You have done this all wrong.”

Neutral

  • “You have over 300 open cases.”

  • “He made a call yesterday for those materials.”
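To make the behavior concrete, the following is a minimal sketch of how an automated leader might select a Komaki-style consequence from a performance snapshot. The message pools and thresholds here are illustrative assumptions, not the study’s actual scripts or values.

```python
import random

# Hypothetical message pools echoing Komaki's positive/negative/neutral
# consequence categories; the study's actual scripts may differ.
CONSEQUENCES = {
    "positive": [
        "You have done good work; no signs of errors!",
        "Great, you have done it so quickly.",
    ],
    "negative": [
        "You have made a great deal of errors.",
        "Oh no. You have done this all wrong.",
    ],
    "neutral": [
        "You have entered {count} records so far.",
    ],
}

def select_consequence(on_pace: bool, accuracy: float, count: int) -> str:
    """Pick a consequence message from a performance snapshot.

    The thresholds below are illustrative assumptions only.
    """
    if on_pace and accuracy >= 0.93:        # meeting both goals: reinforce
        category = "positive"
    elif not on_pace or accuracy < 0.80:    # clearly behind: sanction
        category = "negative"
    else:                                   # otherwise stay informational
        category = "neutral"
    return random.choice(CONSEQUENCES[category]).format(count=count)

# Example: on pace for speed but behind on accuracy -> negative consequence
print(select_consequence(on_pace=True, accuracy=0.75, count=12))
```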

2.2 Social Presence

Social presence is the sense that one is together with another. It encompasses the idea that embodied agents have a persona that elicits natural reactions from human beings. Heeter [44] said that this phenomenon relates to the apparent existence of, and feedback from, the other entity in the communication, and that social presence is the extent to which other beings in the world appear to exist and react to the user. Biocca et al. [45] posit that social presence may be the byproduct of reading or simulating the mental states of virtual others and that social presence is related to theory of mind. They state that when users interact with agents or robots, they “read minds” and respond socially, even when they know that no mind or social other really exists. They continue that although humans know that the “other” is just ink on paper or patterns of light on a screen, the social responses are automatic [45]. Similarly, Computers as Social Actors (CASA) theory proposes that human beings interact with computers as though computers were people [46]. In multiple studies, researchers have found that participants react to interactive computer systems no differently than they react to other people [47]. It is suggested that people fail to critically assess the computer and its limitations as an interaction partner [48] and, as a result, the norms of interaction observed between people occur no differently between a person and a computer [49]. CASA has been used in multiple studies to provide structure for experimentation. Similar studies include instances where computers have been specifically designed to praise or criticize performance [50], to display dominant or submissive cues [51, 52], to flatter participants [53], to explore the roles of gender and flattery [54], or to display interaction cues similar or dissimilar to those of participants [51]. More recent studies have shown how individuals may form group relations with computer agents [55], how social presence affects interaction in a virtual environment [56], and that social presence factors contribute significantly to building trustworthy online exchange relationships [57].

2.3 Text-Based Automated Agents

One level of interaction for the automated leaders is simply to send a text-based message. Each of the leadership behaviors described above can be put into an agent that is “unembodied” and communicates with the follower through text messages that appear on the screen. We propose to use several levels of social presence/embodiment with text-based being the least “present”. There are several reasons to use an embodied face over only sound and text when communicating and interacting with individuals. People interacting with embodied agents tend to interpret both nonverbal cues and the absence of nonverbal cues. Embodied agents can effectively communicate an intended emotion through animated facial expressions [58]. The nonverbal interactions between individuals include significant conversational cues and facilitate communication. Incorporating nonverbal conversational elements into an automated leader may increase the engagement and satisfaction of individuals interacting with the agent [59,60,61,62,63]. We anticipate that this lowest level of presence will moderate both performance and satisfaction and should provide greater performance than no leadership at all.

2.4 Embodied Automated Agents

The next level of presence for the adaptive intelligent agent in our experiment will be a “flat”, embodied agent. The primary means the automated leader has for affecting its follower are the signals and messages it sends to the human via its rendered, embodied interface. For this paper, embodied agents refer to virtual, three-dimensional human likenesses that are displayed on computer screens. While they are often used interchangeably, it is important to note that the terms avatar and embodied agent are not synonymous. If an embodied agent is intended to interact with people through natural speech, it is often referred to as an Embodied Conversational Agent, or ECA [17]. The signals available to the agent take on three primary dimensions, which are appearance, voice, and size. The appearance can be manipulated to show different demeanors, genders, ethnicities, hair colors, clothing, hairstyles, and face structures. One study of embodied agents in a retail setting found a difference in gender preferences. Participants preferred the male embodied agent and responded negatively to the accented voice of the female agent. However, when cartoonlike agents were used, the effect was reversed and participants liked the female cartoon agent significantly more than the male cartoon [64].

Embodied Conversational Agents are becoming more effective at engaging human subjects as though they were intelligent individuals. Humans engage with virtual agents and respond to their gestures and statements. When the embodied agents react to human subjects appropriately and make appropriate responses, participants report finding the interaction satisfying. On the other hand, when the agents fail to recognize what humans are saying, and respond with requests for clarification or inappropriate responses, humans can find the interaction very frustrating [65]. It has been proposed that Embodied Conversational Agents could be used as an interface between users and computers [66]. While humans are relatively good at identifying expressed emotions from other humans whether static or dynamic, identifying emotions from synthetic faces is more problematic. Identifying static expressions was particularly difficult, with expressions such as fear being confused with surprise, and disgust and anger being confused with each other. When synthetic expressions are expressed dynamically, emotion identification improves significantly [58].

In one study on conversational engagement, conversational agents were either responsive to conversational pauses, giving head nods and shakes, or not. Thirty percent of human storytellers in the responsive condition indicated they felt a connection with the conversational agent, while none of the storytellers paired with non-responsive agents reported a connection. In this study, embodied conversational agent responsiveness was limited to head movement, and facial reactions were fixed. Subjects generally regarded responsive avatars as either helpful or disruptive, while 75% of the subjects were indifferent towards the non-responsive avatars. Users talking to responsive agents spoke longer and said more, while individuals talking to unresponsive agents talked less and had proportionally greater disfluency rates and frequencies [67] (Fig. 2).

Fig. 2. Sample flat automated agent

Photorealistic agents need to be completely lifelike, with natural expressions; otherwise individuals perceive them negatively, and a disembodied voice is actually preferred and found to be clearer. When cartoon figures are utilized, three-dimensional characters are preferred over two-dimensional characters, and whole-body animations are preferred over talking heads [61]. Emotional demeanor is an additional signal that the automated leader can manipulate as an effector based on its desired goals, probable outcomes, and current states. The emotional state display may be determined from the probability that desired goals will be achieved. Emotions can be expressed through animation movements and facial expressions, which may be probabilistically determined based on the agent’s expert system [61]. There are limitless possible renderings that may influence human perception and affect the agent’s operating environment. Derrick and Ligon [68] showed that these types of agents can use influence tactics, such as impression management techniques, to change user perceptions of the automation. Moreover, it has been shown that these perceptions change user/follower behavior, including how people speak and interact with the agent [69]. Finally, Nunamaker and colleagues review how these types of agents have been tested and deployed in various contexts [16].

2.5 Hologram-Based Automated Agents

An alternate technology that is widely deployed in interactive entertainment environments is a projection display known as Pepper’s Ghost. While often referred to in the mainstream media as a hologram, this is a form of 2D display technology that creates an illusion of depth under limited viewing conditions and angles. Technological advancements have produced impressive visualizations and immersive experiences, as evidenced by recent highly publicized “live” stage performances by celebrities who are not present (e.g., Narendra Modi), animated characters (e.g., Hatsune Miku, Madonna with the Gorillaz), and digital recreations of deceased celebrities (e.g., Tupac, Michael Jackson). These visualizations are also used at high-end amusement parks where the immersive experience is critical to the visitor’s experience, such as Disney World’s “Haunted Mansion” and “Phantom Manor” and Universal Studios’ Harry Potter “Hogwarts Express” ride. We have developed a prototype limited-viewing-angle hologram (LVAH) system, built from readily available 2D commercial off-the-shelf (COTS) components, to serve as the automated leader with the most social presence.

Technological trusting beliefs result from social presence, or the warmth, sociability, and feeling of human contact, which can be achieved by simulating interaction with another real person [70]. Trust in technology also depends upon machine accuracy, responsivity, predictability, and dependability [71]. We propose that the more socially present leader created using the LVAH will moderate follower performance and satisfaction. We will measure followers’ perceptions of social presence for each type of leadership agent and test how this moderates the outcome variables. Figure 3 shows the apparatus used to create the hologram agent, and Fig. 4 shows the embodied hologram agent that will be used in the study.

Fig. 3. Hologram apparatus

Fig. 4. Embodied hologram leader agent

3 Method

We have created a task in which students input fake alumni information into an online system. We generated 500 fake names, addresses, and phone numbers and printed them on sheets of paper. We ran a control study with no leadership in which students were told, “We are capturing addresses and contact information for recent UNO Alumni that will be used to send information, fundraise, and help build the UNO community. In front of you, there is a sheet of alumni’s information that must be input into the system. Please use the data entry screen on the computer to input this data. Work as quickly and as accurately as possible, as your performance is based on both the number and quality of the data that you have captured. After you have input each person’s contact information, please press the submit button to store it to the database. You will input data for thirty minutes and then we will ask you about your experience”. The students then input the data for thirty minutes and were thanked for their participation. In this control group, the average user input approximately 26 names in 30 min (M = 25.7, SD = 4.9). We measured the accuracy of the data input against the gold standard of the generated names stored in the database by comparing each field entered to the actual data. Figure 5 shows the user input screen.

Fig. 5. User input screen
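As a concrete illustration of the field-by-field accuracy check described above, here is a minimal sketch; the field names and the whitespace/case normalization are our assumptions, not the study’s implementation.

```python
# Hypothetical field names for a fake alumni record; the actual input
# screen (Fig. 5) may use different fields.
FIELDS = ["first_name", "last_name", "street", "city", "state", "zip", "phone"]

def entry_accuracy(entered: dict, gold: dict) -> float:
    """Fraction of fields that exactly match the generated gold record,
    after trivial whitespace/case normalization (an assumption)."""
    matches = sum(
        entered.get(field, "").strip().lower() == gold[field].strip().lower()
        for field in FIELDS
    )
    return matches / len(FIELDS)

def session_accuracy(pairs):
    """Mean per-entry accuracy over all (entered, gold) record pairs
    submitted during a 30-min session."""
    return sum(entry_accuracy(e, g) for e, g in pairs) / len(pairs)
```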

For the control group, the average accuracy was 86.4% with a standard deviation of 6.3%. Using this baseline data, we programmed the automated leaders to set an objective for new followers that was two standard deviations above the average for speed (i.e., 35 names input in 30 min) and one standard deviation above the average for quality (i.e., 93%). In all leadership conditions, regardless of embodiment, the following script was used; it was delivered as text for the text-based leader and by voice for the embodied leaders:

Hello. I am a new automated manager and will be your leader for this task. We are capturing addresses and contact information for recent UNO Alumni that will be used to send information, fundraise, and help build the UNO community. In front of you, there is a sheet of alumni’s information that must be input into the system. Please use the data entry screen on the computer to input this data. Work as quickly and as accurately as possible, as your performance is based on both the number and quality of the data that you have captured. After you have input each person’s contact information, please press the submit button to store it to the database. You will input data for thirty minutes and then we will ask you about your experience. I will monitor your performance. The average person can input about 30 people’s contact information in 30 min. Based on your education and personality profile, I think that 35 is a reasonable goal for you. Please don’t let me down. When you press the OK button on the screen, I will start the timer. Please ask my human assistant if you have any questions.

The opening and closing portions of the script are where the agent establishes itself as the leader and sets a performance objective for the follower; the middle portion is identical to the control group instructions. Once the user starts the experiment, the system monitors performance and provides appropriate feedback at defined intervals. The system architecture is shown in Fig. 6 below.

Fig. 6. System architecture
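The goal-setting rule described above reduces to a simple computation over the control-group baseline. The sketch below reproduces it under our reading of the text (speed goal = mean + 2 SD, truncated to a whole entry; accuracy goal = mean + 1 SD); the function name is ours.

```python
def set_goals(mean_count: float, sd_count: float,
              mean_acc: float, sd_acc: float) -> tuple[int, float]:
    """Derive follower goals from control-group baseline statistics:
    speed goal = mean + 2 SD (whole entries), accuracy goal = mean + 1 SD."""
    speed_goal = int(mean_count + 2 * sd_count)   # truncate to whole entries
    accuracy_goal = mean_acc + sd_acc
    return speed_goal, accuracy_goal

# With the reported baseline (M = 25.7, SD = 4.9 entries; M = 86.4%,
# SD = 6.3% accuracy) this yields 35 entries and ~92.7%, reported as 93%.
speed_goal, accuracy_goal = set_goals(25.7, 4.9, 0.864, 0.063)
```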

As described earlier, the system communicates these objectives to followers electronically, via a chat client or through an embodied conversational agent (e.g., a SPECIES agent) [14]. Moreover, the artificially intelligent leader is able to monitor individual team members’ performance using electronic performance monitoring techniques as the users participate in the virtual context. Finally, the system is programmed to use operant conditioning on its followers: the automated leader has appropriate pre-programmed statements that it sends to followers at defined intervals depending on their performance. Multiple studies have evaluated leadership from an operant perspective [26,27,28,29, 41]. Transactional leadership from an operant perspective was chosen for automation because it can be limited to inducing only basic exchanges with followers. In essence, the programmed psychology of the artificial leader is operant conditioning [72]. Figure 7 shows the experiment flow. The initial and second feedback are measured against progress towards the stated goal. All subsequent feedback is based on the performance of the prior five minutes. This allows a user to receive more positive feedback if his or her performance improves over the initial baseline while still falling short of the goal.

Fig. 7. Experiment flow
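The following is a minimal sketch of the pacing logic in Fig. 7, under our assumptions about interval timing: the first two feedback points compare cumulative entries with a linear pace toward the goal, and later points compare output with the participant’s own prior five minutes.

```python
def speed_on_pace(feedback_index: int, minute: int, entries: int,
                  goal: int, history: dict, total_minutes: int = 30) -> bool:
    """Judge speed at a feedback point, per the flow described above.

    `history` maps elapsed minutes to cumulative entry counts. The linear
    pace toward the goal and the indexing scheme are our assumptions.
    """
    if feedback_index <= 2:
        # initial and second feedback: progress toward the stated goal
        return entries >= goal * minute / total_minutes
    # subsequent feedback: compare the last five minutes with the five before
    recent = entries - history[minute - 5]
    previous = history[minute - 5] - history[minute - 10]
    return recent >= previous

# Example at minute 15 (third feedback point): 16 entries total, with 10 by
# minute 10 and 6 by minute 5 -> last five minutes (6) beat the prior (4).
print(speed_on_pace(3, 15, 16, 35, {5: 6, 10: 10}))
```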

The leader has a battery of possible feedback messages based on performance. Two samples are shown below. The first is feedback at 9 min for a participant with good speed but poor accuracy; the second is positive feedback on both speed and accuracy at 15 min.

Example 1.

Your speed is excellent, and you are on track to make our goal! However, I have also checked the quality of your data entries and they are not acceptable. You need to be more precise. Being fast is not good, if your data quality is so poor. Thank you for your effort, but you need to improve. Your quality is worse than most of the people that have worked on this task. Please be more accurate.

Example 2.

Thank you so much for your effort! You are really doing a fantastic job. Your speed and accuracy are in the top tier of all of the people that have worked on this task. You are doing a remarkable job and are on pace to be one of the best participants.

The experiment concludes with the leader delivering a thank-you message and telling the participant that he or she has done an excellent job. After the completion of the task, the participants are given a post-survey that measures outcome and process satisfaction [73], Leader-Member Exchange (LMX) [74], the Leader Behavior Description Questionnaire [4], and the degree of social presence [45, 75] of the artificial leader.

Transactional leadership theory indicates that leadership is an exchange process based on the fulfillment of contractual obligations and is typically represented as setting objectives and monitoring and controlling outcomes [19]. The objective of our study is to measure the effectiveness of an information system in providing this type of leadership. We control for natural follower capability by random assignment to the various embodied artificial leaders. We will perform comparisons between the control group (no leader) and the leadership conditions, with embodiment as the manipulation. Transactional leadership has been operationalized as goal setting, performance monitoring, and performance consequences. These behaviors should directly affect performance. Figure 8 shows the final operationalized model.

Fig. 8. Final operationalized research model
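One way to test the moderating role of social presence (hypothesis 3 below) is an interaction term in a regression comparing conditions. This is an illustrative analysis sketch with hypothetical file and column names, not the authors’ stated analysis plan.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical results file; assumed columns: entries (count), leadership
# (0 = control, 1 = any automated-leader condition), social_presence
# (perceived social presence scale score).
df = pd.read_csv("experiment_results.csv")

# The leadership:social_presence interaction term tests whether perceived
# social presence moderates the effect of automated leadership on output.
model = smf.ols("entries ~ leadership * social_presence", data=df).fit()
print(model.summary())
```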

Summary of hypotheses:

Automated Leadership Will Improve Follower Performance

  1. Automated Leadership will increase the number of data entries.

  2. Automated Leadership will increase the accuracy of data entries.

  3. Social presence will have a positive moderating effect on performance outcomes.

Follower Satisfaction

  4. Automated Leadership will decrease process satisfaction.

  5. Automated Leadership will increase outcome satisfaction.

Perceptions of the Leader

  6. The greater the social presence of the artificial leader, the better the follower’s perception of the leader.

The automated leader obviously has several limitations, many of which are grounded in its assumptions. First, it assumes a rational follower who is largely motivated by simple reward and who exhibits predictable behavior. Its programmed psychology is behaviorism, including classical conditioning [76] and operant conditioning [72]. In addition, the task is very narrow, with limited interaction and consequences.

4 Conclusion

As technology advances and virtual leadership becomes the norm, our view of leadership must evolve. Similarly, the ability of machines to exhibit leadership traits needs to be evaluated. Our overarching proposition is that an information system can perform equal to or better than a human at providing transactional leadership to a human follower. We propose to explore these propositions and questions by conducting experiments where leadership is clearly defined and consistently measured [77]. A remark often attributed to Charles Darwin holds that “In the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed”. The new phenomenon of machine leadership is part of this next evolution, and we must understand how it impacts individual and team dynamics. Humans and machines are collaborating in new ways, and organizations are increasingly leveraging human-automation teams. Siri (Apple’s conversational assistant), Alexa (Amazon’s conversational agent), physical robots, virtual customer-service agents, and many other pseudo-intelligent agents use text clues, vocal cues, or other environmental sensors to retrieve information from the user, process it, and respond appropriately. These agents help individuals complete everyday tasks such as finding directions, getting help when ordering goods or services on a website, or learning more about a topic or idea. Humans still use automated agents mostly for simple, utilitarian tasks, but these assistants are increasingly able to undertake larger and more important ones. While intelligent agents present a potential solution, it is not fully understood how humans will actually interact with digitized experts, or whether humans use intelligent agents differently from traditional human-to-human collaboration.