Civil-Comp Proceedings
ISSN 1759-3433 CCP: 95
Proceedings of the Second International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering
Paper 20
DIANA: A Device Abstraction Framework for Parallel Computations
A. Panagiotidis, D. Kauker, S. Frey and T. Ertl
Visualization and Interactive Systems Group, University of Stuttgart, Germany
A. Panagiotidis, D. Kauker, S. Frey, T. Ertl, "DIANA: A Device Abstraction Framework for Parallel Computations", in: "Proceedings of the Second International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering", Civil-Comp Press, Stirlingshire, UK, Paper 20, 2011. doi:10.4203/ccp.95.20
Keywords: abstraction layer, parallel computation, GPGPU, modular framework, finite element method simulation, maintainability.
Summary
Using DIANA in an application increases its maintainability and portability because hardware platforms can be changed easily by supplying new plugins.
These plugins can also be shared among developers, so code needs to be written only once.
This also shortens development cycles and allows developers to focus on solving problems instead of tinkering with hardware details.
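The summary does not reproduce DIANA's actual plugin API. Purely as a rough, non-authoritative sketch of how a hardware plugin interface along these lines could look in C++, the following is hypothetical throughout; the names DevicePlugin, Capability, GpuPlugin and findOperation are assumptions, not identifiers from the framework.

```cpp
// Hypothetical sketch of a DIANA-style device plugin interface
// (all names are assumptions, not the framework's real API).
#include <cstdint>
#include <functional>
#include <string>

// Capability flags an application could query for (illustrative subset).
enum class Capability : std::uint32_t {
    DoublePrecision = 1u << 0,
    AsyncTransfer   = 1u << 1,  // overlap memory transfers with computation
};

// Each supported hardware platform would be wrapped by one such plugin, so
// switching platforms means supplying a new plugin, not changing the application.
class DevicePlugin {
public:
    virtual ~DevicePlugin() = default;
    virtual std::string name() const = 0;
    virtual std::uint32_t capabilities() const = 0;
    // Look up a named operation (e.g. "matrix_multiply") provided by this plugin.
    virtual std::function<void()> findOperation(const std::string& op) const = 0;
};

// A new platform (e.g. a GPU back-end) is added by implementing the interface.
class GpuPlugin : public DevicePlugin {
public:
    std::string name() const override { return "example-gpu"; }
    std::uint32_t capabilities() const override {
        return static_cast<std::uint32_t>(Capability::DoublePrecision) |
               static_cast<std::uint32_t>(Capability::AsyncTransfer);
    }
    std::function<void()> findOperation(const std::string& op) const override {
        if (op == "matrix_multiply") return [] { /* call an existing GPU kernel */ };
        return {};
    }
};
```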
DIANA stores most information in an embedded SQL database for flexible and scalable retrieval and processing. Applications query the database for devices meeting certain capabilities, for example double-precision floating-point operations or overlapping memory transfers with computations. Operations, for example multiplying matrices or sorting lists, are queried in the same way. This eliminates the need to detect hardware and capabilities manually in order to decide which code to execute. An operation can either be executed directly or queued for deferred execution on a device. Data buffers needed for computations are also handled transparently, i.e. data is kept consistent across multiple devices. In short, DIANA acts as a single co-processor that utilizes all supported computation devices inside a node.

Our evaluation shows that DIANA imposes only a very small overhead compared with direct use of CUDA. Most of this overhead is attributable to the initialization and shutdown phases of an application and has no impact on frequently used computations. Since already available kernels and libraries can be used within plugins, no computational performance is lost; only a small amount of time is needed to look up an operation. This negligible overall penalty comes with the benefit of a unified interface for any currently supported and upcoming hardware and computation library.

We put a strong focus on the easy integration of DIANA into existing applications. PERMAS, a general-purpose finite element system developed by INTES GmbH, is the first application to use DIANA, extending a pre-existing CPU-based shared-memory parallelization to a hybrid CPU/GPU parallelization. By off-loading certain computations to graphics processing units through DIANA's unified interface, PERMAS benefits from speed-ups, portability, and maintainability. Upcoming computation platforms, such as Intel's Knights Corner, are easily supported by creating a plugin in DIANA, requiring no changes in PERMAS.

For future work, we plan to support more devices inside a node and to utilize the computation devices of interconnected nodes. We also want to supply visual tools to aid the development and optimization stages of parallel software. Finally, we will build an extensive plugin repository, providing optimized operations on all supported platforms for simulation and visualization.
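The summary describes the workflow (query devices by capability, look up operations, defer execution) without showing an interface. The following self-contained C++ sketch only mimics that workflow with mock types so it can compile and run on its own; DeviceRecord, findDevice and the deferred queue are illustrative assumptions, whereas DIANA itself backs such queries with an embedded SQL database.

```cpp
// Self-contained mock of the capability-query and deferred-execution workflow
// described above (illustrative only; not DIANA's API).
#include <cstdint>
#include <functional>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

constexpr std::uint32_t kDoublePrecision = 1u << 0;
constexpr std::uint32_t kAsyncTransfer   = 1u << 1;

struct DeviceRecord {        // stand-in for a row in the device database
    std::string name;
    std::uint32_t caps;
};

// Query: return the first device offering all requested capabilities.
std::optional<DeviceRecord> findDevice(const std::vector<DeviceRecord>& db,
                                       std::uint32_t wanted) {
    for (const auto& d : db)
        if ((d.caps & wanted) == wanted) return d;
    return std::nullopt;
}

int main() {
    std::vector<DeviceRecord> db = {
        {"cpu",  kDoublePrecision},
        {"gpu0", kDoublePrecision | kAsyncTransfer},
    };

    // The application asks for capabilities instead of detecting hardware itself.
    auto dev = findDevice(db, kDoublePrecision | kAsyncTransfer);
    if (!dev) return 1;

    // Operations can run immediately or be queued for deferred execution.
    std::vector<std::function<void()>> queue;
    queue.push_back([&] {
        std::cout << "matrix_multiply on " << dev->name << "\n";
    });
    for (auto& op : queue) op();  // deferred execution drains the queue
    return 0;
}
```

The mock's linear scan stands in for what the paper describes as SQL queries against the device database; the point is only that the application expresses requirements, and the framework picks a matching device and operation.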