
Multi-Agent Programming Contest 2019 FIT BUT Team Solution

  • Conference paper
The Multi-Agent Programming Contest 2019 (MAPC 2019)

Abstract

During our participation in MAPC 2019, we developed two multi-agent systems designed specifically for this competition. The first is a proactive system that works with pre-specified scenarios and tasks agents with generated goals tailored to each agent's assigned role. The second is a more reactive system that employs a layered architecture with highly dynamic behaviour, where each agent selects its own action based on its perception of that action's usefulness.



Acknowledgment

This work was supported by the project IT4IXS: IT4Innovations Excellence in Science project (LQ1602).

Author information

Correspondence to Vaclav Uhlir.


A Team Overview: Short Answers


1.1 A.1 Participants and Their Background

  • What was your motivation to participate in the contest?

    Our group works on artificial agents and multi-agent systems, and we wanted to compete in an international contest and test our skills.

  • What is the history of your group? (course project, thesis, \(\ldots \))

    Members of our research group have been teaching artificial intelligence at our faculty for nearly 20 years. Most of the projects and theses in our group concern artificial intelligence, multi-agent systems, soft computing, and machine learning.

  • What is your field of research? Which work therein is related?

    Vaclav Uhlir: Ecosystems involving autonomous units (mainly autonomous cars).

    Frantisek Zboril and Frantisek Vidensky: Artificial agents and BDI agents. Frantisek Zboril's field of research also includes prototyping of wireless sensor networks using mobile agents.

1.2 A.2 Statistics

  • How much time did you invest in the contest (for programming, organizing your group, other)?

    Between 200 and 300 hours of programming, and another 100 hours of planning, strategizing, and managing Git and other development environments.

  • How many lines of code did you produce for your final agent team?

    5531 lines of code.

    797 comment lines.

    42 active “TODO” markers in the final code.

  • How many people were involved?

    3

  • When did you start working on your agents?

    Aug 29, 2019 10:41am.

1.3 A.3 Agent System Details

  • How does the team work together? (i.e. coordination, information sharing, ...) How decentralised is your approach?

    Every agent has its own local task priority list as a fallback; if time allows, it waits for a local group decision (triggered by the slowest agent in the group).

  • Do your agents make use of the following features: Planning, Learning, Organisations, Norms? If so, please elaborate briefly.

    Our agents plan only one step ahead, and agents are organized into groups as they “meet”. Within these groups, agents cooperate based on momentary advantage.

  • Can your agents change their behavior during runtime? If so, what triggers the changes?

    Every action depends only on the current environment and a few randomizers independent of previous steps.

  • Did you have to make changes to the team (e.g. fix critical bugs) during the contest?

    Yes, we enabled not-fully-tested beta features in the hope of achieving better error handling.

  • How did you go about debugging your system? A custom logger with five levels of logging for every agent, bound to the various subsystems. (On average, every contest match produced around 1 GB of plain-text logs.)

  • During the contest you were not allowed to watch the matches. How did you understand what your team of agents was doing? Did this understanding help you to improve your team’s performance?

    An overwhelming flood of errors indicated network problems and resulted in agent desynchronization, limiting the system's higher functions. Enabling beta features eliminated some network issues but introduced other errors.

  • Did you invest time in making your agents more robust? How?

    Robustness was planned via fallback strategies; some of them were implemented as beta features, but most were not ready for the main contest.
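The per-step behaviour described in this section (a local priority list as a fallback, a group decision when time allows, and action choice depending only on current percepts plus a randomizer) can be sketched roughly as follows. This is an illustrative reconstruction in Python, not the team's actual code; every name here (`choose_action`, `try_joint_decision`, `local_priority_list`, and so on) is an assumption.

```python
import random

def choose_action(agent, group, time_left_s, reserve_s=0.5):
    """Pick this step's action reactively: the choice depends only on
    current percepts plus a randomizer, never on previous steps."""
    # If time allows, wait for the local group decision
    # (triggered by the slowest agent in the group).
    if group is not None and time_left_s > reserve_s:
        decision = group.try_joint_decision(timeout=time_left_s - reserve_s)
        if decision is not None:
            return decision[agent.name]
    # Fallback: walk the agent's own local priority list of tasks.
    for task in agent.local_priority_list:
        if agent.can_start(task):
            return agent.first_step_toward(task)
    # Last resort: a random exploration move.
    return random.choice(["n", "s", "e", "w"])
```

The key property is that the group decision is optional: when the step deadline is close, every agent can still act alone from its local list.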

1.4 A.4 Scenario and Strategy

  • What is the main strategy of your agent team? Aiming for the closest highly valued target while effectively ignoring the past.

  • Your agents only got local perceptions of the whole scenario. Did your agents try to build a global view of the scenario for a specific purpose? If so, describe it briefly. Upon successful mutual position confirmation between any two agents, the agents were assigned to a workgroup that synchronized all new percepts into a shared global map, which was used to search for the highest achievable task completion.

  • How do your agents decide which tasks to complete?

    A task is selected when it is available, all required blocks are accessible, joining them can successfully complete the structure, and the structure can be delivered to the goal. Task selection is based on the reward and the number of steps needed, with a slight preference for smaller structures.

  • Do your agents form ad-hoc teams to complete a task?

    Agents cooperate in team-like structures, but at every step the cooperation can be reevaluated.

  • Which aspect(s) of the scenario did you find particularly challenging?

    Identification of block attachments (to each other or to agents).

  • If another developer needs to integrate your techniques into their code (i.e., same programming language tools), how easy is it to make that integration work?

    Definitely below average, as some named in-code features are not fully complete and/or rely on various temporary workarounds.
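The task-selection criterion described above (reward discounted by the number of steps needed, with a slight preference for smaller structures) could be sketched like this. A minimal Python illustration only; the weights and function names are assumptions, not the team's implementation.

```python
def task_score(reward, estimated_steps, structure_size,
               step_cost=1.0, size_penalty=2.0):
    """Higher is better. step_cost and size_penalty are assumed weights:
    the reward is discounted by the steps needed, with a small extra
    penalty per block so that smaller structures are slightly preferred."""
    return reward - step_cost * estimated_steps - size_penalty * structure_size

def pick_task(tasks):
    """tasks: iterable of (name, reward, estimated_steps, structure_size).
    Feasibility (all blocks accessible, structure deliverable to the goal)
    is assumed to have been checked before scoring."""
    tasks = list(tasks)
    if not tasks:
        return None
    return max(tasks, key=lambda t: task_score(t[1], t[2], t[3]))[0]
```

With equal rewards the nearer task wins, and with comparable rewards the smaller structure wins, matching the stated preference.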

1.5 A.5 And the Moral of It Is ...

  • What did you learn from participating in the contest?

    A relatively simple-looking scenario can present a far greater challenge than we expected.

  • What are the strong and weak points of your team?

    Our team has expertise in multiple programming languages and coding techniques; every member specializes in a different language and set of techniques.

  • Where did you benefit from your chosen programming language, methodology, tools, and algorithms? Familiarity with the used environment allowed for faster development for some team members.

  • Which problems did you encounter because of your chosen technologies?

    Mainly portability issues across different operating systems and conflicting environment variables.

  • Did you encounter new problems during the contest? Yes - battling environment and OS portability on a large scale.

  • Did playing against other agent teams bring about new insights on your own agents? Yes - mainly highlighting strengths and weaknesses and opening ideas for new strategies.

  • What would you improve (wrt. your agents) if you wanted to participate in the same contest a week from now (or next year)? Error handling, network code, fallback strategies - in this order.

  • Which aspect of your team cost you the most time? In the early versions of the system, we had a nasty bug that sometimes caused subsequent synchronization errors amongst the agents and problems in other subsystems. It was blamed on various other possible sources and led to very lengthy bug-hunting across multiple environments.

  • What can be improved regarding the contest/scenario for next year? Clarification about block connections - either changing the perceptions to include some sort of connection information, or a clear warning about the uncertainty of block connections.

  • Why did your team perform as it did? Why did the other teams perform better/worse than you did? Our agents were running on a machine with a desynchronized clock (off by about 3.5 s), and thus, fearing timeouts, they submitted actions prematurely with less than 0.5 s for decisions - which they were not built for - so the higher-level system planning was often not used effectively.
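The clock-desynchronization problem described above can be made concrete with a small sketch: if the decision budget is computed from the local clock without compensating for the offset to the server clock, a roughly 3.5 s skew silently consumes the deliberation time. Illustrative Python only; the function name, the sign convention, and the 0.5 s reserve are assumptions.

```python
def decision_budget(server_deadline, local_now, clock_offset=0.0, reserve=0.5):
    """Seconds left for deliberation before the action must be sent.
    clock_offset = local clock minus server clock. If the offset is
    unknown (assumed 0), a skewed local clock yields a wrong budget."""
    server_now = local_now - clock_offset  # translate local time to server time
    return (server_deadline - server_now) - reserve
```

With the local clock running 3.5 s ahead of the server, an uncompensated agent believes it has 3.5 s less than it really does and submits prematurely; measuring the offset (for example, from server timestamps in step messages) and subtracting it restores the full budget.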


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Uhlir, V., Zboril, F., Vidensky, F. (2020). Multi-Agent Programming Contest 2019 FIT BUT Team Solution. In: Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T. (eds) The Multi-Agent Programming Contest 2019. MAPC 2019. Lecture Notes in Computer Science(), vol 12381. Springer, Cham. https://doi.org/10.1007/978-3-030-59299-8_3


  • DOI: https://doi.org/10.1007/978-3-030-59299-8_3


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59298-1

  • Online ISBN: 978-3-030-59299-8

  • eBook Packages: Computer Science (R0)
