Edge Architecture for Dynamic Data Stream Analysis and Manipulation

Conference paper in Edge Computing – EDGE 2020 (EDGE 2020), part of the book series Lecture Notes in Computer Science (LNISA, volume 12407).

Abstract

The exponential growth in IoT and connected devices with limited computational capabilities requires the delegation of computation tasks to cloud compute platforms. Edge compute tasks largely involve sending data from an edge device to a central location, where the data is processed and a response is returned to the edge device. Since most edge network infrastructure is restricted in its ability to dynamically delegate computation while retaining context, these events are commonly limited to a predefined task that the edge function is modeled to process and respond to. Edge functions traditionally handle isolated events or periodic updates, making them ill-suited for continuous tasks on streaming data. We propose a decentralized, massively scalable architecture of modular edge compute components that dynamically defines computation channels in the network, with emphasis on the ability to efficiently process data streams from a large number of producers and support a large number of consumers in real time. We test this architecture on real-world tasks involving chaining of edge functions, context retention, and machine learning models on the edge, demonstrating its viability.


Notes

  1. A video showing the emotion detection task can be seen here.

  2. Measured using ‘curl’ to 30 public cloud instances from different companies and averaged.

  3. A detailed comparison can be seen here.

  4. A video showing the text-to-speech task can be seen here.

  5. The latency average was computed based on information in https://www.cloudping.co/.

  6. A video showing the augmentation task can be seen here.

References

  1. Lucero, S., et al.: IoT platforms: enabling the internet of things. White paper (2016)

  2. Shi, W., Dustdar, S.: The promise of edge computing. Computer 49(5), 78–81 (2016)

  3. Satyanarayanan, M.: The emergence of edge computing. Computer 50(1), 30–39 (2017)

  4. Multi-access edge computing. https://www.etsi.org/technologies/multi-access-edge-computing

  5. Taleb, T., Samdanis, K., Mada, B., Flinck, H., Dutta, S., Sabella, D.: On multi-access edge computing: a survey of the emerging 5G network edge cloud architecture and orchestration. IEEE Commun. Surv. Tutorials 19(3), 1657–1681 (2017)

  6. Baldini, I., et al.: Serverless computing: current trends and open problems. In: Chaudhary, S., Somani, G., Buyya, R. (eds.) Research Advances in Cloud Computing, pp. 1–20. Springer, Singapore (2017). https://doi.org/10.1007/978-981-10-5026-8_1

  7. Klimovic, A., Wang, Y., Stuedi, P., Trivedi, A., Pfefferle, J., Kozyrakis, C.: Pocket: elastic ephemeral storage for serverless analytics. In: 13th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2018, pp. 427–444 (2018)

  8. Shi, W., Cao, J., Zhang, Q., Li, Y., Xu, L.: Edge computing: vision and challenges. IEEE Internet Things J. 3(5), 637–646 (2016)

  9. Chang, H., Hari, A., Mukherjee, S., Lakshman, T.: Bringing the cloud to the edge. In: 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 346–351. IEEE (2014)

  10. Garcia Lopez, P., et al.: Edge-centric computing: vision and challenges (2015)

  11. Song, Y., Yau, S.S., Yu, R., Zhang, X., Xue, G.: An approach to QoS-based task distribution in edge computing networks for IoT applications. In: 2017 IEEE International Conference on Edge Computing (EDGE), pp. 32–39. IEEE (2017)

  12. Yousefpour, A., Ishigaki, G., Jue, J.P.: Fog computing: towards minimizing delay in the internet of things. In: 2017 IEEE International Conference on Edge Computing (EDGE), pp. 17–24. IEEE (2017)

  13. Mach, P., Becvar, Z.: Mobile edge computing: a survey on architecture and computation offloading. IEEE Commun. Surv. Tutorials 19(3), 1628–1656 (2017)

  14. Konečnỳ, J., McMahan, H.B., Yu, F.X., Richtárik, P., Suresh, A.T., Bacon, D.: Federated learning: strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492 (2016)

  15. Wang, S., et al.: Adaptive federated learning in resource constrained edge computing systems. IEEE J. Sel. Areas Commun. 37(6), 1205–1221 (2019)

  16. Mohammadi, M., Al-Fuqaha, A., Sorour, S., Guizani, M.: Deep learning for IoT big data and streaming analytics: a survey. IEEE Commun. Surv. Tutorials 20(4), 2923–2960 (2018)

  17. Livingstone, S.R., Russo, F.A.: The Ryerson audio-visual database of emotional speech and song (RAVDESS): a dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5), e0196391 (2018)

  18. Ping, W., et al.: Deep voice 3: scaling text-to-speech with convolutional sequence learning. arXiv preprint arXiv:1710.07654 (2017)

  19. Parkhi, O.M., Vedaldi, A., Zisserman, A., et al.: Deep face recognition. In: BMVC, vol. 1, p. 6 (2015)

  20. Pu, Y., et al.: Variational autoencoder for deep learning of images, labels and captions. In: Advances in Neural Information Processing Systems, pp. 2352–2360 (2016)

  21. Wang, C.: HTTP vs. MQTT: a tale of two IoT protocols (2018)


Author information

Correspondence to Orpaz Goldstein, Anant Shah, Derek Shiell, Mehrdad Arshad Rad, William Pressly or Majid Sarrafzadeh.


A Comparison with different architectures

In addition to the benefits accrued by our overarching edge architecture, there is room to break down individual components and compare them to other possible design choices. MQTT is one of the few emerging protocols of choice for the IoT world. While we evaluated both MQTT and CoAP and found them comparable, we chose MQTT for our pub/sub protocol as it had better library availability and broker selection. We compare our choice of MQTT with an HTTP-based signaling mechanism to support our architecture. In our architecture, we use MQTT as a signaling channel between subscribed clients waiting on streams of data, and between edge nodes coordinating execution of models on data. The key observation here is that our MQTT connections are seldom closed, and in most cases are reused many times between the time they are established and the time they are closed. The comparison made in [21] clearly shows the benefit of reusing an open MQTT connection, with the gap over the same use case implemented using HTTP growing with the number of messages. Similarly to [21], we investigate the difference between 1, 10, and 100 messages, each weighing 10 bytes, transmitted over MQTT and HTTP, over 10 trials. This simulates transferring simple instructions and EKV data locations in our computation channels. For MQTT we connect once and reuse the same connection for all subsequent messages. For HTTP we use POST requests. All communication was evaluated between an edge node and a local client, emulating a real-world scenario. Figures 8 and 9 show the results for speed in ms on a log scale, as observed in our test. Since the cost of HTTP grows with the number of messages passed, we see the benefit of opening a single MQTT connection to be used over multiple messages.

Fig. 8. MQTT speed per number of requests, compared on a log scale (ms).

Fig. 9. HTTP speed per number of requests, compared on a log scale (ms).
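To make the setup behind Figs. 8 and 9 concrete, the sketch below reproduces the measurement loop under stated assumptions: the broker host, HTTP endpoint, and topic name are placeholders rather than values from our deployment, and the paho-mqtt and requests libraries stand in for whichever MQTT and HTTP clients are available on the edge node.

```python
import time

import paho.mqtt.client as mqtt
import requests

PAYLOAD = b"x" * 10                              # 10-byte message, as in the experiment
BROKER = "edge-node.local"                       # hypothetical broker host
TOPIC = "channel/signal"                         # hypothetical signaling topic
HTTP_URL = "http://edge-node.local:8080/signal"  # hypothetical HTTP endpoint


def mqtt_trial(n_messages):
    """Connect once and reuse the connection for all n messages (QoS 1)."""
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.loop_start()
    start = time.perf_counter()
    for _ in range(n_messages):
        client.publish(TOPIC, PAYLOAD, qos=1).wait_for_publish()
    elapsed_ms = (time.perf_counter() - start) * 1000
    client.loop_stop()
    client.disconnect()
    return elapsed_ms


def http_trial(n_messages):
    """One POST per message; connection setup cost is paid on every request."""
    start = time.perf_counter()
    for _ in range(n_messages):
        requests.post(HTTP_URL, data=PAYLOAD)
    return (time.perf_counter() - start) * 1000


for n in (1, 10, 100):
    mqtt_ms = sum(mqtt_trial(n) for _ in range(10)) / 10  # average of 10 trials
    http_ms = sum(http_trial(n) for _ in range(10)) / 10
    print(f"{n:>3} messages: MQTT {mqtt_ms:8.2f} ms, HTTP {http_ms:8.2f} ms")
```

The only variable separating the two trials is whether the transport connection is reused across messages, which is the effect Figs. 8 and 9 isolate.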

Another aspect worthy of comparison is the speed gain of using our architecture compared to the same job implemented as a FaaS workflow, where results must be returned to a user before the next function in a pipeline is started. We compare a simple numpy matrix-multiplication task, called via our MQTT computation channels 1, 10, and 100 times, where results are pushed to a MinIO storage instance. This is compared to the case where a function runs and returns a result directly to a client. In the cases where we run our function more than once, we compute the next result based on the previous function's result. In the FaaS-like use case, the client sends the result back to the function; in our architecture, the previous result is picked up from our MinIO instance. Figures 10 and 11 show the comparison between the two approaches. It can be seen that sending the small amount of data we use back and forth with HTTP POST requests has essentially no effect on the execution time of the POST requests, while the time does increase with MQTT computation channels and ephemeral storage, where an extra call to the MinIO server is needed. However, even with this increase, it can be seen that as the number of concurrent requests grows, the penalty incurred by POST requests is far more inhibiting than the extra hop to MinIO. As we have previously shown in our experiments, MQTT can be used for small-scale data and speed up computation even more in cases where not much data is moved in the network.

Fig. 10. Speed of calling our function via HTTP POST requests, sending back the result in all cases where the function was called more than once. Compared on a log scale (ms).

Fig. 11. Speed of calling our function via an MQTT computation channel, sending the result to ephemeral storage, and computing results based on the previous function run in all cases where the function is called more than once. Compared on a log scale (ms).
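The sketch below illustrates one step of the MinIO-backed variant measured in Fig. 11, assuming a MinIO instance reachable from the edge node; the broker host, bucket, topic, object names, and credentials are hypothetical, and the minio and paho-mqtt Python clients stand in for the components used in our deployment.

```python
import io
import json

import numpy as np
import paho.mqtt.client as mqtt
from minio import Minio

# Placeholder endpoint and credentials for the ephemeral storage instance.
store = Minio("minio.local:9000", access_key="edge", secret_key="edge-secret",
              secure=False)
BUCKET = "ephemeral"


def run_step(instruction):
    """Pick up the previous result from ephemeral storage, multiply, push the new result back."""
    task = json.loads(instruction)                # e.g. {"in": "step-3", "out": "step-4"}
    obj = store.get_object(BUCKET, task["in"])
    prev = np.load(io.BytesIO(obj.read()))        # previous function's result
    other = np.random.rand(prev.shape[1], prev.shape[1])
    result = prev @ other                         # the matrix-multiplication task
    buf = io.BytesIO()
    np.save(buf, result)
    buf.seek(0)
    store.put_object(BUCKET, task["out"], buf, len(buf.getvalue()))


def on_message(client, userdata, msg):
    run_step(msg.payload)


# Subscribe to the computation channel and process instructions as they arrive,
# instead of returning each intermediate result to the client over HTTP.
client = mqtt.Client()
client.on_message = on_message
client.connect("edge-node.local", 1883)           # hypothetical broker host
client.subscribe("channel/matmul", qos=1)
client.loop_forever()
```

Compared with the FaaS-style round trip, the client only sends a small instruction over the long-lived MQTT connection and the intermediate matrices never leave the edge; the extra cost is the additional call to MinIO, which Figs. 10 and 11 show is quickly amortised as the number of requests grows.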


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Goldstein, O., Shah, A., Shiell, D., Rad, M.A., Pressly, W., Sarrafzadeh, M. (2020). Edge Architecture for Dynamic Data Stream Analysis and Manipulation. In: Katangur, A., Lin, SC., Wei, J., Yang, S., Zhang, LJ. (eds) Edge Computing – EDGE 2020. EDGE 2020. Lecture Notes in Computer Science, vol 12407. Springer, Cham. https://doi.org/10.1007/978-3-030-59824-2_3


  • DOI: https://doi.org/10.1007/978-3-030-59824-2_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59823-5

  • Online ISBN: 978-3-030-59824-2
