Civil-Comp Proceedings
ISSN 1759-3433 CCP: 84
PROCEEDINGS OF THE FIFTH INTERNATIONAL CONFERENCE ON ENGINEERING COMPUTATIONAL TECHNOLOGY Edited by: B.H.V. Topping, G. Montero and R. Montenegro
Paper 121
Parallel Discrete Element Simulation of a Heterogeneous Particle System
R. Kacianauskas2, A. Maknickas1, A. Kaceniauskas1, D. Markauskas2 and R. Balevicius2
1Parallel Computing Laboratory,
R. Kacianauskas, A. Maknickas, A. Kaceniauskas, D. Markauskas, R. Balevicius, "Parallel Discrete Element Simulation of a Heterogeneous Particle System", in B.H.V. Topping, G. Montero, R. Montenegro, (Editors), "Proceedings of the Fifth International Conference on Engineering Computational Technology", Civil-Comp Press, Stirlingshire, UK, Paper 121, 2006. doi:10.4203/ccp.84.121
Keywords: particle compacting, discrete element method, heterogeneous poly-dispersed granular material, parallel computing, spatial domain decomposition, distributed memory PC clusters.
Summary
This paper presents parallel DEM software developed for simulating granular
material on distributed-memory PC clusters. Static domain decomposition and
message-passing inter-processor communication are implemented in the DEM code.
A novel algorithm for handling particles that migrate between processors is
incorporated into the domain decomposition framework. The aim of this paper is
two-fold: to investigate the computational performance of the developed software
and to contribute to the understanding of algorithmic aspects related to the
poly-disperse properties of heterogeneous granular material.
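The particle migration algorithm mentioned above is not reproduced in this summary. As a purely illustrative sketch, the following Fortran 90 fragment shows one way such a step could look for a one-dimensional strip decomposition: particles whose coordinate has left the local strip are collected for transfer to a neighbouring subdomain. The routine name, argument list and data layout are assumptions made for illustration and do not correspond to the DEMMAT_PAR source.

! Illustrative sketch only: identify particles that have left the local
! strip [x_lo, x_hi) and must be transferred to a neighbouring processor.
! Names and data layout are assumptions, not the DEMMAT_PAR implementation.
subroutine find_migrants(np, x, x_lo, x_hi, n_left, left_ids, n_right, right_ids)
  implicit none
  integer, intent(in)           :: np          ! number of locally owned particles
  double precision, intent(in)  :: x(np)       ! coordinates along the decomposition axis
  double precision, intent(in)  :: x_lo, x_hi  ! bounds of the local strip
  integer, intent(out)          :: n_left, n_right
  integer, intent(out)          :: left_ids(np), right_ids(np)
  integer :: i

  n_left  = 0
  n_right = 0
  do i = 1, np
     if (x(i) < x_lo) then           ! particle crossed the lower strip boundary
        n_left = n_left + 1
        left_ids(n_left) = i
     else if (x(i) >= x_hi) then     ! particle crossed the upper strip boundary
        n_right = n_right + 1
        right_ids(n_right) = i
     end if
  end do
  ! The particles listed in left_ids/right_ids would then be packed into
  ! send buffers and transferred to the neighbouring subdomains via MPI.
end subroutine find_migrants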
The granular material is regarded as a system of a finite number of spherical particles. The inter-particle contact model combines elasticity, viscous damping and friction force effects. A detailed description of the DEM technique applied may be found in [1]. The parallel algorithms are implemented in the FORTRAN 90 code DEMMAT_PAR. Inter-processor communication is implemented in the code by subroutines of the message passing library MPI [2]. The communication is performed by the MPI routines MPI_ISEND, MPI_REQUEST_FREE and MPI_RECV, and these non-blocking communication routines significantly improve the parallel efficiency of the code. Computations were performed on the PC cluster VILKAS (NPACI Rocks Cluster, Red Hat Enterprise Linux 3.0). The cluster consisted of 20 processors (Intel Pentium 4, 3.2 GHz, 1 GB RAM per processor), connected by a D-Link DGS 1224T Gigabit Smart Switch (24-port 10/100/1000 Mbps Base-T module).

A numerical illustration addresses the tri-axial compacting of granular material by rigid walls. Two types of benchmark problem, involving mono-disperse and heterogeneous poly-disperse granular material, were solved. Two examples of the mono-disperse material were considered: 20000 particles with the diameter d=2.2 mm and 100000 particles with the diameter d=1.3 mm. The initial composition of the particles presents a regular lattice-type structure, where the particles are embedded in the centres of the cells. The poly-disperse material is composed of particles with a specified normal particle size distribution. It is represented by two sets containing 19890 and 100037 particles, with dmin=1.031 mm, dmax=4.466 mm and dmin=0.603 mm, dmax=2.613 mm, respectively. The initial irregular particle arrangement is generated by employing the algorithm presented by Jiang et al. [3].

A one-dimensional strip-type spatial decomposition, with each subdomain containing a roughly equal number of particles, is applied to ensure static load balancing on the homogeneous PC cluster. In the present work, a speed-up of 8.81 has been obtained for the poly-disperse material and a speed-up of 9.22 for the mono-disperse material on 10 processors, corresponding to parallel efficiencies of about 88% and 92%, respectively. The parallel algorithm and software developed are applied to the simulation of compacting granular material. Simulation results are presented in terms of the wall pressures, the coordination number and the packing density, and concluding remarks are drawn from the current investigation.
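The inter-particle contact law itself is not given in this summary. As a generic illustration only (the exact expressions used in [1] may differ), a linear spring-dashpot model with Coulomb friction, combining the elasticity, viscous damping and friction effects mentioned above, can be written as

\[
  F_n = k_n\,\delta_n + c_n\,v_n, \qquad
  \lvert F_t \rvert = \min\bigl(k_t\,\delta_t + c_t\,v_t,\; \mu\,\lvert F_n \rvert\bigr),
\]

where \(\delta_n\), \(\delta_t\) are the normal and tangential relative displacements at the contact, \(v_n\), \(v_t\) the corresponding relative velocities, \(k_n\), \(k_t\) the contact stiffnesses, \(c_n\), \(c_t\) the damping coefficients and \(\mu\) the friction coefficient.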
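Similarly, the non-blocking communication pattern built from MPI_ISEND, MPI_RECV and MPI_REQUEST_FREE can be sketched in Fortran 90 as follows. The buffer sizes, ring-type neighbour numbering and data are placeholders and do not reflect the actual DEMMAT_PAR data structures.

! Illustrative sketch of a non-blocking neighbour exchange with
! MPI_ISEND / MPI_RECV / MPI_REQUEST_FREE; not the DEMMAT_PAR code.
program exchange_sketch
  use mpi
  implicit none
  integer, parameter :: nbuf = 1000
  double precision   :: send_buf(nbuf), recv_buf(nbuf)
  integer :: ierr, rank, nprocs, left, right, request
  integer :: status(MPI_STATUS_SIZE)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  ! Neighbours in a one-dimensional strip decomposition (periodic here
  ! only to keep the sketch short; boundary strips have one neighbour).
  right = mod(rank + 1, nprocs)
  left  = mod(rank - 1 + nprocs, nprocs)

  send_buf = dble(rank)

  ! Start the send without blocking, receive from the opposite neighbour,
  ! then release the send request; computation may overlap the transfer.
  call MPI_ISEND(send_buf, nbuf, MPI_DOUBLE_PRECISION, right, 0, &
                 MPI_COMM_WORLD, request, ierr)
  call MPI_RECV (recv_buf, nbuf, MPI_DOUBLE_PRECISION, left,  0, &
                 MPI_COMM_WORLD, status, ierr)
  call MPI_REQUEST_FREE(request, ierr)

  call MPI_FINALIZE(ierr)
end program exchange_sketch

Note that freeing an active send request with MPI_REQUEST_FREE only relinquishes the handle; the transfer still completes, but the send buffer must not be overwritten until the message is known to have been delivered, which is why such calls have to be placed carefully within a DEM time-stepping loop.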
References