
Autonomous flying blimp interaction with human in an indoor space

Frontiers of Information Technology & Electronic Engineering

Abstract

We present the Georgia Tech Miniature Autonomous Blimp (GT-MAB), which is designed to support human-robot interaction experiments in an indoor space for up to two hours. GT-MAB is safe while flying in close proximity to humans. It is able to detect the face of a human subject, follow the human, and recognize hand gestures. GT-MAB employs a deep neural network based on the single shot multibox detector (SSD) to jointly detect a human user's face and hands in a real-time video stream collected by the onboard camera. A human-robot interaction procedure is designed and tested with various human users. The learning algorithms recognize two hand-waving gestures. The human user does not need to wear any additional tracking device when interacting with the flying blimp. Vision-based feedback controllers are designed to control the blimp so that it follows the human and flies in one of two distinguishable patterns in response to each of the two hand gestures. The blimp communicates its intentions to the human user by displaying visual symbols. The collected experimental data show that the visual feedback from the blimp in reaction to the human user significantly improves the interactive experience between the blimp and the human. The demonstrated success of this procedure indicates that GT-MAB could serve as a flying robot capable of safely collecting human data in an indoor environment.
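
The abstract describes the control pipeline at a high level: detections from the onboard camera feed vision-based feedback controllers that keep the blimp following the user. As a rough illustration only, the Python sketch below shows one plausible proportional-feedback step of such a following loop; the frame size, gains, and the detect_face/send_command helpers are illustrative assumptions, not the authors' implementation.

    # A minimal sketch (not the authors' code) of one step of a
    # vision-based following controller: proportional feedback on the
    # detected face's pixel position and apparent size.

    FRAME_W, FRAME_H = 640, 480   # assumed onboard camera resolution
    TARGET_FACE_W = 80.0          # assumed desired face width in pixels (sets follow distance)
    K_YAW, K_FWD, K_ALT = 0.004, 0.01, 0.005  # assumed proportional gains

    def follow_step(frame, detect_face, send_command):
        """One control step.

        detect_face(frame) -> (cx, cy, w) or None  # face center and width in pixels
        send_command(yaw, forward, altitude)       # normalized actuator commands

        Both callables are hypothetical placeholders for the SSD
        detector and the blimp's motor interface.
        """
        det = detect_face(frame)
        if det is None:
            send_command(0.0, 0.0, 0.0)        # no user in view: hold position
            return
        cx, cy, w = det
        yaw = K_YAW * (cx - FRAME_W / 2)       # turn toward the face horizontally
        forward = K_FWD * (TARGET_FACE_W - w)  # keep a constant apparent face size
        altitude = K_ALT * (FRAME_H / 2 - cy)  # keep the face vertically centered
        send_command(yaw, forward, altitude)

Servoing on the bounding-box center handles yaw and altitude, while the apparent face width acts as a monocular proxy for distance, a common approach when only a single onboard camera is available.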


Author information

Corresponding author

Correspondence to Fumin Zhang.

Additional information

Project supported by the Office of Naval Research (Nos. N00014-14-1-0635 and N00014-16-1-2667), the National Science Foundation, U.S. (No. OCE-1559475), the Naval Research Laboratory (No. N00173-17-1-G001), and the National Oceanic and Atmospheric Administration (No. NA16NOS0120028).


Cite this article

Yao, Ns., Tao, Qy., Liu, Wy. et al. Autonomous flying blimp interaction with human in an indoor space. Frontiers Inf Technol Electronic Eng 20, 45–59 (2019). https://doi.org/10.1631/FITEE.1800587

