
Parallel Algorithms for Low Rank Tensor Arithmetic

Advances in Mathematical Methods and High Performance Computing

Part of the book series: Advances in Mechanics and Mathematics (AMMA, volume 41)

Abstract

High-dimensional tensors of low rank can be represented in the hierarchical Tucker format (HT format) with a complexity that is linear in the tensor dimension d. We develop parallel algorithms which perform arithmetic operations on tensors in the HT format, where we assume the tensor data to be distributed over several compute nodes. Due to the tree structure of the HT format, the parallel runtime of our algorithms grows like \(\log (d)\) with the tensor dimension d. On each of the compute nodes, shared memory parallelization can be used to accelerate the algorithms further. One application of our algorithms is parameter-dependent problems: their solutions can be approximated as tensors in the HT format if the parameter dependencies fulfil a low rank property. Our algorithms can then be used for post-processing of solution tensors, e.g., to compute mean values, expected values or other quantities of interest. If the problem is of the form Ax = b with the matrix A given in the HT format as well, we can compute the residual of a solution tensor or even compute the entire solution directly in the HT format by means of iterative methods.
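As a concrete illustration of the tree structure behind these complexity claims, here is a minimal sketch (ours, not the authors' implementation; the names `random_ht`, `num_dims` and `evaluate` are hypothetical) of an HT-like representation over a balanced binary dimension tree in Python/NumPy. Leaves hold frame matrices, interior nodes hold transfer tensors, and a single tensor entry is evaluated by a bottom-up traversal whose recursion depth grows like \(\log_2(d)\), mirroring the parallel runtime stated above:

```python
import numpy as np

def random_ht(dims, r, root=True):
    """Random HT-like representation over a balanced dimension tree.
    A leaf is a frame matrix U in R^{n x r_t}; an interior node is a
    triple (B, left, right) with transfer tensor B in R^{r_t x r x r}."""
    rt = 1 if root else r            # the root has rank 1 (scalar output)
    if len(dims) == 1:
        return np.random.rand(dims[0], rt)
    h = len(dims) // 2
    B = np.random.rand(rt, r, r)
    return (B, random_ht(dims[:h], r, False), random_ht(dims[h:], r, False))

def num_dims(node):
    """Number of tensor dimensions handled by a subtree."""
    return 1 if isinstance(node, np.ndarray) else num_dims(node[1]) + num_dims(node[2])

def evaluate(node, index):
    """Bottom-up traversal: returns the length-r_t coefficient vector of
    the subtree at the given multi-index. The recursion depth is
    ceil(log2(d)), hence the log(d) behaviour over the dimension tree."""
    if isinstance(node, np.ndarray):           # leaf: one row of the frame U
        return node[index[0], :]
    B, left, right = node
    h = num_dims(left)
    a1, a2 = evaluate(left, index[:h]), evaluate(right, index[h:])
    return np.einsum('kij,i,j->k', B, a1, a2)  # contract with transfer tensor

# Example: an order-4 tensor of size 4 x 5 x 6 x 7 with all HT ranks 3.
A = random_ht([4, 5, 6, 7], r=3)
print(evaluate(A, (1, 2, 3, 4))[0])            # one entry A[1, 2, 3, 4]
```

Note that the storage is linear in d (one small matrix or transfer tensor per tree node), which is the complexity statement of the abstract.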


Notes

  1. We abbreviate \(\mathbb {R}^{\mathcal {I}_1\times \{1,\ldots ,r\}}\) by \(\mathbb {R}^{\mathcal {I}_1\times r}\).

  2. An rSVD exists for any matrix which is not the zero matrix. This is, in general, not the case for tensors of higher dimension d > 2. Nevertheless, if \(\mathcal {I}_t\) is large, an rSVD of \(\mathcal {M}_t(A)\) may no longer be computable. This is, however, not a handicap for us when we have HT representations of a matrix A and a right-hand side B available and want to solve AX = B by some iterative method inside the HT format. We can then choose a starting vector \(X_0\) in the HT format (e.g., \(X_0 := B\)) and never have to transfer a full tensor into the HT format.

     If we need to approximate large tensors in the HT format, we may use other approximation techniques, e.g., the cross approximation for HT tensors [2].
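To make the point of this note tangible with something runnable, one can specialize to d = 2, where the HT format degenerates to an ordinary low-rank matrix factorization. The sketch below (our illustration under this simplification, not the authors' method; `truncate` and `richardson_lowrank` are hypothetical names) solves AX = B by Richardson iteration, re-truncating the iterate after every step so that it never leaves the low-rank format; a plain SVD stands in for the hierarchical SVD of the genuine HT case:

```python
import numpy as np

def truncate(X, r):
    """Project X onto its best rank-r approximation via the SVD;
    in the HT setting this role is played by the hierarchical SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def richardson_lowrank(A, B, r, omega, steps=200):
    """Truncated Richardson iteration: the iterate stays rank-r throughout."""
    X = truncate(B, r)                              # start inside the format, X0 := B
    for _ in range(steps):
        X = truncate(X + omega * (B - A @ X), r)    # step, then re-truncate
    return X

# Example: a well-conditioned SPD system with a rank-1 right-hand side.
# The step size must satisfy omega < 2 / lambda_max(A).
n = 50
A = np.eye(n) + 0.01 * np.ones((n, n))
B = np.outer(np.random.rand(n), np.random.rand(n))
X = richardson_lowrank(A, B, r=5, omega=0.5)
print(np.linalg.norm(A @ X - B) / np.linalg.norm(B))   # small relative residual
```

The design point carried over to the HT case is exactly the one made above: every quantity that appears (B, the iterates, the residual) lives in the low-rank format, so no full tensor is ever formed.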

  3. We define \((r_t)_{t\in T} \le (s_t)_{t\in T} \;:\Leftrightarrow\; r_t \le s_t\) for all \(t \in T\).

  4. The Hadamard product \(x \circ y\) of two vectors \(x,y\in \mathbb {R}^{\mathcal {I}}\) is the vector of the entry-wise products: \((x \circ y)(i) = x(i) \cdot y(i)\) for all \(i\in \mathcal {I}\).
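In NumPy, the `*` operator on arrays realizes exactly this entry-wise product:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(x * y)   # [ 4. 10. 18.]  --  (x o y)(i) = x(i) * y(i)
```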

References

  1. Ballani, J., Grasedyck, L.: Tree Adaptive Approximation in the Hierarchical Tensor Format. SIAM Journal on Scientific Computing 36(4), A1415–A1431 (2014)


  2. Ballani, J., Grasedyck, L., Kluge, M.: Black box approximation of tensors in hierarchical Tucker format. Linear Algebra Appl. 438(2), 639–657 (2013)


  3. Etter, S.: Parallel ALS Algorithm for Solving Linear Systems in the Hierarchical Tucker Representation. SIAM Journal on Scientific Computing 38(4), A2585–A2609 (2016)


  4. Grasedyck, L.: Hierarchical Singular Value Decomposition of Tensors. SIAM J. Matrix Anal. Appl. 31, 2029–2054 (2010)


  5. Grasedyck, L., Löbbert, C.: Distributed Hierarchical SVD in the Hierarchical Tucker Format. arXiv (2017). URL http://arxiv.org/abs/1708.03340

  6. Hackbusch, W.: Tensor spaces and numerical tensor calculus, Springer series in computational mathematics, vol. 42. Springer, Heidelberg (2012)


  7. Karlsson, L., Kressner, D., Uschmajew, A.: Parallel algorithms for tensor completion in the CP format. Parallel Computing 57(Supplement C), 222–234 (2016)


  8. Solomonik, E., Matthews, D., Hammond, J.R., Stanton, J.F., Demmel, J.: A massively parallel tensor contraction framework for coupled-cluster computations. Journal of Parallel and Distributed Computing 74(12), 3176–3190 (2014). Domain-Specific Languages and High-Level Frameworks for High-Performance Computing


  9. Austin, W., Ballard, G., Kolda, T.G.: Parallel Tensor Compression for Large-Scale Scientific Data. In: 2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 912–922 (2016)



Acknowledgement

The authors gratefully acknowledge the support by the DFG priority programme 1648 (SPPEXA) under grant GR-3179/4-2.


Correspondence to Lars Grasedyck.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Grasedyck, L., Löbbert, C. (2019). Parallel Algorithms for Low Rank Tensor Arithmetic. In: Singh, V., Gao, D., Fischer, A. (eds) Advances in Mathematical Methods and High Performance Computing. Advances in Mechanics and Mathematics, vol 41. Springer, Cham. https://doi.org/10.1007/978-3-030-02487-1_16
