
Collective Cognition and Sensing in Robotic Swarms via an Emergent Group-Mind

Conference paper, 2016 International Symposium on Experimental Robotics (ISER 2016)

Part of the book series: Springer Proceedings in Advanced Robotics (SPAR, volume 1)

Abstract

Algorithms for robotic swarms often involve programming each robot with simple rules that cause complex group behavior to emerge out of many individual interactions. We study an algorithm with emergent behavior that transforms a robotic swarm into a single unified computational meta-entity that can be programmed at runtime. In particular, a swarm-spanning artificial neural network emerges as wireless neural links between robots self-organize. The resulting artificial group-mind is trained to differentiate between spatially heterogeneous light patterns it observes by using the swarm’s distributed light sensors like cells in a retina. It then orchestrates different coordinated heterogeneous swarm responses depending on which pattern it observes. Experiments on real robot swarms containing up to 316 robots demonstrate that this enables collective decision making based on distributed sensor data, and facilitates human-swarm interaction.



Acknowledgments

This work would not have been possible without Michael Rubenstein, who designed the Kilobot platform, taught the author how to use it, and provided invaluable feedback on this work. The author is grateful for the knowledge, resources, encouragement, and advice provided by Radhika Nagpal and Melvin Gauci. The author is also grateful to Derek Kingston for providing the time, space, and freedom to pursue this problem. This work was funded by the Control Science Center of Excellence at the Air Force Research Laboratory (CSCE AFRL), the National Science Foundation (NSF) grant IIP-1161029, and the Center for Unmanned Aircraft Systems. This work was performed while Michael Otte was “in residence” at CSCE AFRL.

Author information

Correspondence to Michael Otte.

Appendix: High-level Pseudo Code

Each robot in the swarm runs identical code. Two different “main” procedures are presented. The first is for situations in which the output behavior of the swarm does not involve movement or other actions that would break the group mind’s network connectivity (Algorithm 2). The second is for situations in which the output behavior is expected to break connectivity, so the group mind must organize an orderly dissolution back to a non-group-mind swarm (Algorithm 3). In addition to the main thread, each robot runs a separate message broadcast thread at approximately 2 Hz (Algorithm 4) and has a callback function that receives incoming messages (Algorithm 5). Global data is accessible across all threads and functions.
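As a rough illustration of this structure, the following is a minimal Python sketch of how one robot’s threads fit together. It assumes an ordinary threading model rather than the Kilobot C runtime, and every name in it (robot, broadcast_loop, broadcast, main) is a hypothetical stand-in for the corresponding routine in the pseudocode; the sketches after each algorithm’s description below fill in the bodies.

```python
# Minimal sketch (plain Python, not the Kilobot C runtime) of one robot's
# thread layout.  All names here are hypothetical stand-ins.
import threading
import time

# Global data shared by all threads and callback functions.
robot = {"state": "NOT_YET_TRAINING", "id": None, "neural_data": {}}

def broadcast_loop():
    # Algorithm 4: broadcast neural data, state, and ID at roughly 2 Hz.
    while True:
        broadcast(robot)       # body sketched with Algorithm 4 below
        time.sleep(0.5)

def broadcast(r):              # stub; see the Algorithm 4 sketch
    pass

def main(r):                   # stub; Algorithm 2 or Algorithm 3, depending
    pass                       # on whether the swarm response involves movement

threading.Thread(target=broadcast_loop, daemon=True).start()
main(robot)                    # incoming messages arrive via a radio callback
                               # (Algorithm 5), which is not modeled here
```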

[Figures a and b: pseudocode listings]

The start-up procedure appears in Algorithm 1 and corresponds to the steps in Fig. 2 between “Local ID Agreement” and “Data Upload.” Each robot uses a state machine that is initialized to state \(\mathrm {NOT\_YET\_TRAINING}\) (line 1). A Boolean value \(done\_training\) is also used to track when training has resulted in an acceptable level of accuracy (on this robot). The battery charge is used to seed a pseudo-random number generator so that different pseudo-random number sequences will be generated on each robot with high probability. A distributed algorithm is used to ensure that neighboring robots have unique randomly determined IDs (line 4). Light sensors are calibrated (line 5). Neighbors are discovered and outgoing wireless links to their neurons are created and initialized with random weights (line 6). Data is uploaded to the swarm from a human user via visual light projection following a predefined procedure (line 7). State \(\mathrm {TRAIN}\) indicates the start-up phase has ended (line 8).
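The following is a minimal Python sketch of this start-up sequence. It is not the author’s Kilobot implementation; the helpers shown (read_battery_voltage, local_id_agreement, calibrate_light_sensors, discover_neighbors, upload_training_data) are hypothetical names for the steps described in the text.

```python
# Sketch of the start-up phase (Algorithm 1); hypothetical stubs stand in
# for the hardware and the distributed subroutines named in the text.
import random

def startup():
    state = "NOT_YET_TRAINING"        # initial state (Algorithm 1, line 1)
    done_training = False             # becomes True once local error is acceptable

    # Seed the PRNG from the battery charge so that, with high probability,
    # each robot draws a different pseudo-random sequence.
    random.seed(read_battery_voltage())

    robot_id = local_id_agreement()   # distributed agreement on unique local IDs
    calibrate_light_sensors()

    # Discover neighbors and create outgoing neural links with random weights.
    links = {n: random.uniform(-1.0, 1.0) for n in discover_neighbors()}

    upload_training_data()            # data upload via projected visual light
    state = "TRAIN"                   # start-up phase has ended
    return state, done_training, robot_id, links

# Hypothetical stubs so the sketch runs as-is.
def read_battery_voltage(): return 3.7
def local_id_agreement(): return 42
def calibrate_light_sensors(): pass
def discover_neighbors(): return [7, 13, 21]
def upload_training_data(): pass

print(startup())
```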

[Figure c: pseudocode listing]

The main thread for non-movement cases appears in Algorithm 2. All signals sent along neural connections are tagged with the number of training iterations this robot has completed. The function \(\mathrm {out\_of\_sync}()\) returns \(\mathbf {true}\) whenever this robot has gotten too many training iterations ahead of its neighbors (100 in our experiments). The backpropagation training algorithm is run one iteration at a time (line 4), but only if the training error still needs improvement and this robot is not out of sync with its neighbors (line 3). A robot stops training once its local error has fallen below \(5\%\) (lines 5–6). This robot uses the subroutine \(\mathrm {use\_group\_mind}(\mathrm {sample\_light()})\) both to provide its current light sensor reading to the group mind and to learn the group mind’s prediction of the overall swarm behavior \(\tau \) that should be performed (line 7). The single-robot behavior \(behaviour\) that this robot performs as part of \(\tau \) is also returned; it is determined within \(\mathrm {use\_group\_mind}(\mathrm {sample\_light()})\) by querying a local look-up table with the value of \(\tau \). The look-up table is populated with the local mapping from \(\tau \) to \(behaviour\) during the data upload portion of the start-up phase.
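A hedged Python sketch of the non-movement main loop follows, with hypothetical stubs in place of the real subroutines; the 100-iteration synchronization slack and the 5% error threshold come directly from the text.

```python
# Sketch of the non-movement main loop (Algorithm 2).  Stubs are
# hypothetical; the two numeric thresholds come from the text.
MAX_ITERS_AHEAD = 100    # out_of_sync(): max training iterations ahead of neighbors
ERROR_THRESHOLD = 0.05   # stop training once local error falls below 5%

def main_loop_no_movement():
    done_training = False
    while True:
        # Train only while the error still needs improvement and this robot
        # is not too far ahead of its neighbors.
        if not done_training and not out_of_sync():
            error = backpropagation_iteration()   # one training iteration
            if error < ERROR_THRESHOLD:
                done_training = True
        # Feed the local light reading to the group mind and read back the
        # predicted swarm behavior tau plus this robot's role in it (via a
        # local look-up table populated during data upload).
        tau, behaviour = use_group_mind(sample_light())
        perform(behaviour)

# Hypothetical stubs.
def out_of_sync(): return False   # would compare iteration counts against MAX_ITERS_AHEAD
def backpropagation_iteration(): return 0.04
def sample_light(): return 0.5
def use_group_mind(reading): return "pattern_A", "blink_green"
def perform(behaviour): pass
```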

[Figures d and e: pseudocode listings]

The main thread used in cases involving movement appears in Algorithm 3. Differences relative to Algorithm 2 (no movement) appear on lines 3 and 8–15. Movement destroys the group mind; thus, movement should only start once the group mind is highly certain it has calculated the correct response behavior. This is facilitated by adding state \(\mathrm {CONSIDER}\) to the state machine, and also by defining one of the behaviors to be “continue training.” In practice, the swarm is trained to output “continue training” in response to a neutral gray light pattern, which is displayed during the training phase. \(\mathrm {CONSIDER}\) can only be entered once a robot believes the desired behavior is no longer “continue training” (lines 9–12). The function \(\mathrm {consideration\_time\_exhausted()}\) ensures that a robot remains continuously in state \(\mathrm {CONSIDER}\) for a predetermined amount of time before switching to state \(\mathrm {ACT}\) to perform the prescribed behavior (lines 11–14). This adds robustness against erroneous outputs from partially trained models.
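The sketch below extends the previous one with the \(\mathrm {CONSIDER}\) state. The consideration duration constant is hypothetical (the paper only says it is predetermined), and the reset-to-training branch is one plausible reading of the requirement that a robot remain continuously in \(\mathrm {CONSIDER}\) before acting.

```python
# Sketch of the movement-case main loop (Algorithm 3).  CONSIDER_TIME_S is
# hypothetical; the stubs mirror those in the Algorithm 2 sketch.
import time

ERROR_THRESHOLD = 0.05
CONSIDER_TIME_S = 5.0    # hypothetical "predetermined amount of time"

def main_loop_with_movement():
    state = "TRAIN"
    done_training = False
    consider_since = None
    behaviour = "continue_training"
    while state != "ACT":
        if not done_training and not out_of_sync():
            if backpropagation_iteration() < ERROR_THRESHOLD:
                done_training = True
        tau, behaviour = use_group_mind(sample_light())

        if behaviour != "continue_training":
            if state != "CONSIDER":
                # The group mind proposes a real response: start considering.
                state, consider_since = "CONSIDER", time.monotonic()
            elif time.monotonic() - consider_since > CONSIDER_TIME_S:
                state = "ACT"    # confident enough; movement may now break
                                 # the group mind's connectivity
        else:
            # Prediction flipped back to "continue training": reset, since
            # the robot must stay in CONSIDER continuously before acting.
            state, consider_since = "TRAIN", None
    perform(behaviour)

# Hypothetical stubs.
def out_of_sync(): return False
def backpropagation_iteration(): return 0.04
def sample_light(): return 0.5
def use_group_mind(reading): return "pattern_B", "move_toward_light"
def perform(behaviour): pass
```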

Algorithm 4 depicts the message broadcast thread. The function \(\mathrm {get\_neural\_data()}\) retrieves the neural network data that resides on this robot’s portion of the group mind (line 3). For each training example, as well as for the real-time environmental sensor input, this includes both the forward neural signals and the backpropagation messages (including the training iteration number and, for each backpropagation message, the destination ID). The neural data is broadcast along with this robot’s state and ID (line 4). In practice, due to the Kilobots’ small message payload size (9 bytes), we must divide each batch of neural network data across multiple messages (not shown). If the response involves movement, so that state \(\mathrm {ACT}\) is used, then the robot sends this state, its ID, and the swarm behavior class \(\tau \) that the neural network outputs for the real-time environmental data (lines 5–6). To save space we omit the other message-passing details necessary to run the standard distributed algorithms that we employ as subroutines during the start-up phase (represented by lines 7–8).
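A sketch of the broadcast thread under the same assumptions follows; fragmentation into 9-byte Kilobot payloads is omitted here, just as it is in the paper’s listing, and the send and get_neural_data helpers are hypothetical.

```python
# Sketch of the ~2 Hz broadcast thread (Algorithm 4).  Fragmentation into
# the Kilobots' 9-byte message payloads is omitted, as in the paper.
import time

def broadcast_loop(robot):
    while True:
        if robot["state"] != "ACT":
            # Forward neural signals and backpropagation messages for each
            # training example and for the real-time sensor input.
            send(robot["state"], robot["id"], get_neural_data(robot))
        else:
            # Once acting: send only state, ID, and the chosen swarm
            # behavior class tau.
            send(robot["state"], robot["id"], robot["tau"])
        # (Start-up-phase subroutine messages are omitted here as well.)
        time.sleep(0.5)   # approximately 2 Hz

# Hypothetical stubs.
def get_neural_data(robot): return {"forward": [], "backprop": []}
def send(*fields): pass
```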

The receive message callback function appears in Algorithm 5. Normal training data is received on lines 2–5. If a neighbor has decided to act (e.g., move), then this robot will join it (lines 6–9), making sure to perform its own prescribed behavior \(behaviour\) relevant to the overall swarm behavior \(\tau \) (line 9). The function \(\mathrm {modify\_behaviour}(behaviour,sender\_behaviour,sender\_distance)\) modifies the specific output behavior of this robot during the \(\mathrm {ACT}\) phase as a function of interactions with neighboring robots (lines 10–14). This enables more complex swarm behaviors to emerge out of the interactions between robots. For example, the smiley faces in our experiments are created as randomly searching robots stop moving in the vicinity of attracting robots. Lines 15–16 represent other message processing that is used by the distributed subroutines within the start-up phase.
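Finally, a sketch of the receive callback. The branch conditions are guesses at the structure described above (the paper’s Algorithm 5 may organize them differently), and all helper names are hypothetical.

```python
# Sketch of the receive callback (Algorithm 5).  Branch conditions and
# helper names are hypothetical.
def on_message_received(robot, msg):
    if msg["state"] != "ACT":
        # Normal case: fold the neighbor's neural data (forward signals and
        # backpropagation messages) into this robot's slice of the group mind.
        integrate_neural_data(robot, msg["payload"])
    elif robot["state"] != "ACT":
        # A neighbor has committed to acting; join it, performing this
        # robot's own prescribed behavior for the swarm behavior tau.
        robot["state"] = "ACT"
        robot["behaviour"] = lookup_behaviour(robot, msg["tau"])
    else:
        # Both robots are acting: let neighbor interactions shape the final
        # behavior (e.g., stop a random walk near an attracting robot).
        robot["behaviour"] = modify_behaviour(
            robot["behaviour"], msg["behaviour"], msg["distance"])
    # (Start-up-phase subroutine messages are omitted.)

# Hypothetical stubs.
def integrate_neural_data(robot, payload): pass
def lookup_behaviour(robot, tau): return "stop_and_attract"
def modify_behaviour(mine, theirs, distance):
    return "stop" if theirs == "stop_and_attract" and distance < 30 else mine
```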


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Otte, M. (2017). Collective Cognition and Sensing in Robotic Swarms via an Emergent Group-Mind. In: Kulić, D., Nakamura, Y., Khatib, O., Venture, G. (eds) 2016 International Symposium on Experimental Robotics. ISER 2016. Springer Proceedings in Advanced Robotics, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-319-50115-4_72


  • DOI: https://doi.org/10.1007/978-3-319-50115-4_72

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-50114-7

  • Online ISBN: 978-3-319-50115-4

  • eBook Packages: Engineering, Engineering (R0)
