
Past and Future Directions for Concurrent Task Scheduling

Chapter in: Concurrent Objects and Beyond

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 8665)


Abstract

A wave of parallel processing research in the 1970s and 1980s developed a variety of techniques for concurrent task scheduling, including work-stealing scheduling and lazy task creation, along with ideas for supporting speculative computing, such as the sponsor model. These ideas saw little large-scale use, however, as long as uniprocessor clock speeds continued to rise rapidly from year to year. Now that the growth in clock speeds has slowed dramatically and multicore processors have become the standard way to increase the computing throughput of processor chips, using parallelism to improve the performance of everyday applications on multicore processors has taken on greater importance, and concurrent task scheduling techniques are getting a second look.
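The core idea of work-stealing scheduling can be illustrated with a small sketch: each worker keeps its own double-ended queue of tasks, working from one end, while idle workers "steal" from the other end of a victim's queue. The sketch below is a single-threaded simulation written for this page (the `Worker` class and `make_sum_task` helper are invented for illustration, not taken from any system discussed here).

```python
from collections import deque
import random

class Worker:
    """One worker with its own task deque (illustrative sketch only)."""

    def __init__(self):
        self.tasks = deque()

    def push(self, task):
        self.tasks.append(task)

    def pop(self):
        # The owner takes the newest task (LIFO end), which helps locality.
        return self.tasks.pop() if self.tasks else None

    def steal(self):
        # A thief takes the oldest task (FIFO end); in divide-and-conquer
        # programs this tends to be the largest remaining piece of work.
        return self.tasks.popleft() if self.tasks else None

def run(workers, results):
    """Round-robin simulation: each worker pops its own task or steals one."""
    rng = random.Random(42)
    while True:
        ran_something = False
        for w in workers:
            task = w.pop()
            if task is None:
                # Idle worker: try to steal from a randomly chosen victim.
                task = workers[rng.randrange(len(workers))].steal()
            if task is not None:
                ran_something = True
                task(w, results)
        if not ran_something:
            break  # every deque was empty for a full pass

def make_sum_task(lo, hi):
    """Divide-and-conquer sum of lo..hi-1, splitting into stealable subtasks."""
    def task(worker, results):
        if hi - lo <= 4:
            results.append(sum(range(lo, hi)))
        else:
            mid = (lo + hi) // 2
            worker.push(make_sum_task(lo, mid))
            worker.push(make_sum_task(mid, hi))
    return task

workers = [Worker() for _ in range(4)]
results = []
workers[0].push(make_sum_task(0, 100))
run(workers, results)
print(sum(results))  # 4950
```

Real schedulers such as Cilk's run the same protocol concurrently with lock-free deques; the simulation above only shows why the push/pop/steal asymmetry keeps stolen tasks large and owner-executed tasks cache-friendly.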

Work stealing and lazy task creation have since been incorporated into a wide range of systems capable of "industrial-strength" application execution, but support for speculative computing still lags behind. This paper traces these techniques from their origins to their use in present-day systems and suggests some directions for further investigation and development in the speculative computing area.
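To make the speculative-computing theme concrete, the sketch below runs two branches of a search in parallel; once either branch finds an answer, the other observes a stop flag and abandons its now-useless work. The stop flag loosely echoes the intuition behind sponsor-style models, where a computation keeps running only while some consumer still wants its result. All names here (`search`, the branch labels) are invented for illustration.

```python
import threading
import queue

def search(name, candidates, predicate, stop, answers):
    """Scan candidates until a match is found or our result is unwanted."""
    for x in candidates:
        if stop.is_set():
            return            # no sponsor left: abandon the speculation
        if predicate(x):
            answers.put((name, x))
            stop.set()        # withdraw support from the rival branch
            return

stop = threading.Event()
answers = queue.Queue()
pred = lambda x: x % 97 == 0 and x > 0  # toy goal: find a multiple of 97

threads = [
    threading.Thread(target=search,
                     args=("low", range(1, 10**6), pred, stop, answers)),
    threading.Thread(target=search,
                     args=("high", range(10**6, 1, -1), pred, stop, answers)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

name, value = answers.get()
print(value % 97)  # 0: whichever branch won found a multiple of 97
```

Polling a flag is the crudest possible cancellation mechanism; the systems surveyed in this paper aim at richer semantics, such as transitively revoking sponsorship of entire subtrees of speculative tasks.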




Copyright information

© 2014 Springer-Verlag Berlin Heidelberg


Cite this chapter

Halstead, R.H. (2014). Past and Future Directions for Concurrent Task Scheduling. In: Agha, G., et al. Concurrent Objects and Beyond. Lecture Notes in Computer Science, vol 8665. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-44471-9_8


  • Print ISBN: 978-3-662-44470-2

  • Online ISBN: 978-3-662-44471-9

