
Training of Neural Networks: Interactive Possibilities in a Distributed Framework

  • Conference paper
  • First Online:
Recent Advances in Parallel Virtual Machine and Message Passing Interface (EuroPVM/MPI 2002)

Abstract

Training of Artificial Neural Networks in a distributed environment is considered and applied to a typical example from interactive analysis in High Energy Physics. Promising results obtained on a local cluster with 64 nodes, which reduce the wait time from 5 hours to 5 minutes, are described. Preliminary tests in a wide area network studying the impact of latency are reported, and the future work on integration into a GRID framework, to be carried out within the CrossGrid European Project, is outlined.
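The reported speedup is consistent with near-linear scaling: roughly 300 minutes of serial training spread over 64 nodes gives about 4.7 minutes. The scheme the abstract describes can be sketched as data-parallel gradient descent: each node computes the error gradient on its share of the events, the per-node gradients are averaged (an MPI_Allreduce in an MPICH implementation), and every node applies the same weight update. The following is a minimal pure-Python stand-in, not the authors' code; all function names are illustrative and no actual MPI calls are made.

```python
# Sketch of data-parallel neural-network training: events are scattered
# across nodes, each node computes a local gradient, and the gradients
# are averaged before every synchronous weight update. A 1-parameter
# linear model stands in for the multi-layer perceptron.

def local_gradient(weights, events):
    """Mean gradient of the squared error for y = w*x on one node's events."""
    w = weights[0]
    g = 0.0
    for x, y in events:
        g += 2.0 * (w * x - y) * x
    return [g / len(events)]

def allreduce_mean(gradients):
    """Average per-node gradients, as MPI_Allreduce(SUM) / n_nodes would."""
    n = len(gradients)
    return [sum(g[i] for g in gradients) / n for i in range(len(gradients[0]))]

def train(events, n_nodes=4, steps=200, lr=0.05):
    # Scatter the events across nodes in equal-sized chunks.
    chunk = len(events) // n_nodes
    shards = [events[i * chunk:(i + 1) * chunk] for i in range(n_nodes)]
    weights = [0.0]
    for _ in range(steps):
        grads = [local_gradient(weights, s) for s in shards]  # parallel part
        g = allreduce_mean(grads)
        weights = [w - lr * gi for w, gi in zip(weights, g)]
    return weights

# Synthetic events generated from y = 3*x: training should recover w close to 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]]
w = train(data)
print(round(w[0], 3))  # prints 3.0
```

Because every shard has the same size, the averaged gradient equals the full-batch gradient, so the distributed run follows exactly the same trajectory as a serial run; the wall-clock gain comes from computing the per-shard gradients concurrently.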


References

  1. I. Foster and C. Kesselman, editors. The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers, 1999.

  2. DELPHI Collaboration. Search for the standard model Higgs boson at LEP in the year 2000. Phys. Lett. B 499:23–37, 2001 [hep-ex/0102036].

  3. M. Misra. Parallel Environments for Implementing Neural Networks. Neural Computing Surveys, vol. 1, 48–60, 1997.

  4. D. Aberdeen, J. Baxter, R. Edwards. 98c/MFLOP Ultra-Large Neural Network Training on a PIII Cluster. Proceedings of Supercomputing 2000, November 2000.

  5. CrossGrid European Project (IST-2001-32243). http://www.eu-crossgrid.org

  6. T. Kohonen. Self-Organizing Maps. Springer, Berlin, Heidelberg, 1995.

  7. The Broyden–Fletcher–Goldfarb–Shanno (BFGS) method; see, for example, R. Fletcher. Practical Methods of Optimization. Wiley, 1987.

  8. MLPFIT: a tool for designing and using Multi-Layer Perceptrons. http://schwind.home.cern.ch/schwind/MLPfit.html

  9. Physics Analysis Workstation (PAW). http://paw.web.cern.ch/paw/

  10. MPICH. http://www-unix.mcs.anl.gov/mpi/mpich

  11. T. Sjöstrand. Comput. Phys. Commun. 39 (1986) 347. Version 6.125 was used.

  12. Santander GRID Wall. http://grid.ifca.unican.es/sgw

  13. Géant. http://www.dante.net/geant/


Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ponce, O. et al. (2002). Training of Neural Networks: Interactive Possibilities in a Distributed Framework. In: Kranzlmüller, D., Volkert, J., Kacsuk, P., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2002. Lecture Notes in Computer Science, vol 2474. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45825-5_15


  • DOI: https://doi.org/10.1007/3-540-45825-5_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-44296-7

  • Online ISBN: 978-3-540-45825-8

  • eBook Packages: Springer Book Archive
