Civil-Comp Proceedings
ISSN 1759-3433 CCP: 107
PROCEEDINGS OF THE FOURTH INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED, GRID AND CLOUD COMPUTING FOR ENGINEERING
Edited by:
Paper 26
Towards Automatic Selection of Direct vs. Iterative Solvers for Cloud-Based Finite Element Analysis
N. Muhtaroglu¹, I. Ari² and E. Koyun³
¹Department of Computational Mechanics, Faculty of Mechanics and Mathematics, Moscow State University, Russia
N. Muhtaroglu, I. Ari, E. Koyun, "Towards Automatic Selection of Direct vs. Iterative Solvers for Cloud-Based Finite Element Analysis", in , (Editors), "Proceedings of the Fourth International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering", Civil-Comp Press, Stirlingshire, UK, Paper 26, 2015. doi:10.4203/ccp.107.26
Keywords: HPC-as-a-service, cloud computing, finite element analysis, direct solvers, iterative solvers, Krylov, PETSc, job scheduling.
Summary
An emerging trend in engineering is to solve complex computational problems in the cloud using high performance computing (HPC) services provided by different vendors. In this paper, we compare the performance of direct and iterative linear equation solvers to support the development of job schedulers that can automatically choose the best solver type and tune it (e.g. by preconditioning the matrices) according to the job characteristics and workload conditions seen in HPC cloud services. As a proof of concept, we use three classical elasticity problems with well-known analytical solutions, namely the cantilever beam, the Lamé problem and the stress concentration factor (SCF) problem. We mesh these linear problems with increasing granularity, which yields matrices of various sizes, the largest having one billion non-zero elements. Detailed finite element analyses are executed on an IBM HPC cluster. We first use the multifrontal parallel sparse direct solver MUMPS and evaluate Cholesky and LU decompositions of the generated matrices with respect to memory usage and multi-core, multi-node execution performance. For the iterative solvers, we use the PETSc library and study several combinations of Krylov subspace methods (CG, BiCG, GMRES) and preconditioners (BJacobi, SOR, ASM, None). Finally, we compare and contrast the direct and iterative solver results to identify the most suitable algorithm for the varying cases obtained from the numerical modelling of these three-dimensional linear elasticity problems.
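To make the solver-selection idea concrete, the sketch below is not taken from the paper; it assumes a recent PETSc built with MUMPS support (e.g. configured with --download-mumps) and shows how the same linear system K u = f can be switched between a MUMPS direct factorisation and a preconditioned Krylov iteration through PETSc's KSP interface. The small 1-D Laplacian stand-in matrix, the -use_direct flag and the CG/BJacobi defaults are illustrative assumptions only, not the paper's actual setup.

/* Sketch: choose a direct (MUMPS) or iterative (Krylov) solve for K u = f
 * via PETSc's KSP interface.  The stiffness matrix is mocked by a small
 * SPD tridiagonal matrix; -use_direct stands in for the scheduler's choice. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat       K;
  Vec       u, f;
  KSP       ksp;
  PC        pc;
  PetscInt  i, n = 100, rstart, rend;
  PetscBool use_direct = PETSC_FALSE;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscOptionsGetBool(NULL, NULL, "-use_direct", &use_direct, NULL));

  /* Stand-in for the assembled stiffness matrix: SPD tridiagonal (1-D Laplacian). */
  PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n,
                         3, NULL, 2, NULL, &K));
  PetscCall(MatGetOwnershipRange(K, &rstart, &rend));
  for (i = rstart; i < rend; i++) {
    if (i > 0)     PetscCall(MatSetValue(K, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(K, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(K, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(K, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(K, MAT_FINAL_ASSEMBLY));
  PetscCall(MatCreateVecs(K, &u, &f));   /* u: solution, f: unit load vector */
  PetscCall(VecSet(f, 1.0));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, K, K));
  PetscCall(KSPGetPC(ksp, &pc));

  if (use_direct) {
    /* Direct route: no Krylov iterations, LU factorisation delegated to MUMPS. */
    PetscCall(KSPSetType(ksp, KSPPREONLY));
    PetscCall(PCSetType(pc, PCLU));
    PetscCall(PCFactorSetMatSolverType(pc, MATSOLVERMUMPS));
  } else {
    /* Iterative route: CG with block-Jacobi preconditioning as a default. */
    PetscCall(KSPSetType(ksp, KSPCG));
    PetscCall(PCSetType(pc, PCBJACOBI));
  }

  PetscCall(KSPSetFromOptions(ksp));   /* allow -ksp_type/-pc_type overrides */
  PetscCall(KSPSolve(ksp, f, u));

  PetscCall(VecDestroy(&u));
  PetscCall(VecDestroy(&f));
  PetscCall(MatDestroy(&K));
  PetscCall(KSPDestroy(&ksp));
  PetscCall(PetscFinalize());
  return 0;
}

At run time the iterative branch can be retuned without recompiling, for example "mpiexec -n 16 ./solve -ksp_type gmres -pc_type asm", which is one practical hook for the kind of automatic solver selection and tuning the paper targets.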