
Five Requisites for Human-Agent Decision Sharing in Military Environments

Conference paper
In: Advances in Human Factors in Robots and Unmanned Systems

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 499)

Abstract

Working with industry, universities, and other government agencies, the U.S. Army Research Laboratory has conducted multi-year programs to understand the role of humans working with autonomous and robotic systems. This paper presents an overview of those research themes in order to abstract five requirements for effective human-agent decision-making. Supporting research for each requirement is discussed to elucidate the issues involved and to make recommendations for future research. The requirements are: (a) a direct link between the operator and a supervisory agent, (b) interface transparency, (c) appropriate trust, (d) cognitive architectures to infer intent, and (e) a common language between humans and agents.



Author information

Corresponding author: Michael Barnes


Copyright information

© 2017 Springer International Publishing Switzerland

About this paper

Cite this paper

Barnes, M., Chen, J., Schaefer, K.E., Kelley, T., Giammanco, C., Hill, S. (2017). Five Requisites for Human-Agent Decision Sharing in Military Environments. In: Savage-Knepshield, P., Chen, J. (eds) Advances in Human Factors in Robots and Unmanned Systems. Advances in Intelligent Systems and Computing, vol 499. Springer, Cham. https://doi.org/10.1007/978-3-319-41959-6_4

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-41959-6_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-41958-9

  • Online ISBN: 978-3-319-41959-6

  • eBook Packages: Engineering
