Abstract
A Wireless Sensor and Actor Network (WSAN) is a group of wireless devices that can sense physical events (sensors) and/or perform relatively complicated actions (actors) based on the sensed data shared by the sensors. This paper presents the design and implementation of a simulation system based on Deep Q-Network (DQN) for actor node mobility control in WSANs. A DQN is a deep neural network used to estimate the Q-values of the Q-learning method. We implement the proposed simulation system in the Rust programming language and compare its performance for normal and uniform distributions of events. The simulation results show that, for the normal distribution of events, in the best episode all actor nodes are connected and cover all events. The total reward for the normal distribution of events is higher than for the uniform distribution, which indicates that the actor nodes can move while keeping their connections with other actor nodes.
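Since only the abstract is available here, the core mechanism it names, Q-learning driving actor movement toward events, can only be sketched. Below is a minimal, self-contained illustration in Rust (the paper's implementation language) that replaces the DQN with a simple tabular Q-function; the grid size, action set, reward scheme, and hyperparameters are assumptions for illustration and are not taken from the paper.

```rust
// Minimal tabular Q-learning sketch of an actor moving toward an event on a
// toy grid. The paper's system uses a Deep Q-Network to approximate Q-values;
// here a plain lookup table stands in for the network (assumption).

const GRID: usize = 5;     // 5x5 toy field (assumption)
const ACTIONS: usize = 4;  // 0: up, 1: down, 2: left, 3: right
const ALPHA: f64 = 0.1;    // learning rate (assumption)
const GAMMA: f64 = 0.9;    // discount factor (assumption)

// Tiny deterministic LCG so the sketch needs no external crates.
struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0 >> 33
    }
}

// One environment step: move the actor; reward +1 on the event cell
// (bottom-right corner), small cost for every other move.
fn step(state: usize, action: usize) -> (usize, f64) {
    let (x, y) = (state % GRID, state / GRID);
    let (nx, ny) = match action {
        0 => (x, y.saturating_sub(1)),
        1 => (x, (y + 1).min(GRID - 1)),
        2 => (x.saturating_sub(1), y),
        _ => ((x + 1).min(GRID - 1), y),
    };
    let next = ny * GRID + nx;
    let reward = if next == GRID * GRID - 1 { 1.0 } else { -0.01 };
    (next, reward)
}

fn argmax(row: &[f64; ACTIONS]) -> usize {
    (0..ACTIONS)
        .max_by(|&i, &j| row[i].partial_cmp(&row[j]).unwrap())
        .unwrap()
}

// Train an epsilon-greedy tabular Q-learner and return the Q-table.
pub fn train() -> Vec<[f64; ACTIONS]> {
    let mut q = vec![[0.0f64; ACTIONS]; GRID * GRID];
    let mut rng = Lcg(42);
    for _episode in 0..500 {
        let mut s = 0; // actor starts at the top-left corner
        for _ in 0..50 {
            // Epsilon-greedy: explore roughly 20% of the time.
            let a = if rng.next() % 5 == 0 {
                (rng.next() as usize) % ACTIONS
            } else {
                argmax(&q[s])
            };
            let (next, r) = step(s, a);
            // Q-learning update:
            // Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            let best_next = q[next][argmax(&q[next])];
            q[s][a] += ALPHA * (r + GAMMA * best_next - q[s][a]);
            s = next;
            if s == GRID * GRID - 1 {
                break; // event reached, end the episode
            }
        }
    }
    q
}

fn main() {
    let q = train();
    let v0 = q[0][argmax(&q[0])];
    println!("learned value at start state: {:.3}", v0);
}
```

In the paper's actual system the table above is replaced by a neural network trained on replayed transitions, and the reward reflects event coverage and connectivity among actor nodes rather than a single goal cell.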
Copyright information
© 2018 Springer International Publishing AG
Cite this paper
Oda, T., Kulla, E., Cuka, M., Elmazi, D., Ikeda, M., Barolli, L. (2018). Performance Evaluation of a Deep Q-Network Based Simulation System for Actor Node Mobility Control in Wireless Sensor and Actor Networks Considering Different Distributions of Events. In: Barolli, L., Enokido, T. (eds) Innovative Mobile and Internet Services in Ubiquitous Computing. IMIS 2017. Advances in Intelligent Systems and Computing, vol 612. Springer, Cham. https://doi.org/10.1007/978-3-319-61542-4_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-61541-7
Online ISBN: 978-3-319-61542-4