Computational & Technology Resources
an online resource for computational,
engineering & technology publications |
|
Civil-Comp Proceedings
ISSN 1759-3433 CCP: 100
PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE ON ENGINEERING COMPUTATIONAL TECHNOLOGY
Edited by: B.H.V. Topping
Paper 1
Future High Performance Computing Strategies
M.M. Resch
High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Germany

M.M. Resch, "Future High Performance Computing Strategies", in B.H.V. Topping, (Editor), "Proceedings of the Eighth International Conference on Engineering Computational Technology", Civil-Comp Press, Stirlingshire, UK, Paper 1, 2012. doi:10.4203/ccp.100.1
Keywords: high performance computing, hardware-software, exaflop computing.
Summary
High performance computing (HPC) has seen a long history of progress over the last six decades. From Megaflops to Gigaflops, Teraflops and Petaflops, the speed of systems has increased constantly over the last fifty years. The prospects for a further increase in performance over the coming decade are still very good. Systems with a performance in the range of Petaflops have become widely available, and discussions have started about how to achieve an Exaflop. This paper discusses the further development of high performance computing in the future. It proposes a strategy that shifts the focus of attention from hardware to software. With such a change of paradigm, further progress in the field of simulation is more likely than with many of the concepts presented for Exascale computing.
The key factor in improving the quality and overall performance of large-scale supercomputers is obviously the software. Investigations show that standard software can hardly extract reasonable levels of performance from existing supercomputer hardware. Even single-processor efficiency is very often only a single-digit percentage of peak. Communication overheads in systems with a growing number of cores mean that an unoptimised code can easily end up sustaining only 1% of peak performance. One might argue that 1% of a Petaflop is still 10 Teraflops, and that this compares favourably with previous systems. However, teraflops of sustained performance were already easily achieved with traditional vector systems ten years ago. Unless we are able to come up with better results, it is unclear what progress has actually been made.

Optimisation of software is becoming increasingly difficult in an ever-changing landscape of supercomputer systems. While ten years ago it was expected that standard components would carry the day, today the top500 list shows a growing number of specialised systems such as the IBM BlueGene or the Cray XE6. All of these specialised architectures require individual optimisation effort. One might say that this has always been the case, and this is true. However, the complexity of the systems is increasing at an unprecedented speed, and this is what often makes optimisation very difficult.

The hope is therefore for a slow-down in hardware acceleration. While everybody is worried about the difficulties of increasing performance beyond Exaflops, it is argued that such an increase is not desirable. On the contrary, any slow-down in hardware acceleration is good, since it would allow software to catch up.
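As a rough illustration of the arithmetic behind the sustained-versus-peak argument above, the following minimal Python sketch works through the numbers. The peak rating and the 1% efficiency come from the summary; the 5 Teraflop/s figure for an older vector system is an assumed value for comparison, not taken from the paper.

```python
# Sustained vs. peak performance: a rough illustration (assumed figures).

PEAK_FLOPS = 1.0e15   # assumed peak: 1 Petaflop/s
EFFICIENCY = 0.01     # assumed sustained fraction: 1% of peak (unoptimised code)

sustained = PEAK_FLOPS * EFFICIENCY
print(f"Sustained performance: {sustained / 1e12:.0f} Teraflop/s")  # -> 10 Teraflop/s

# Hypothetical comparison: a traditional vector system of ca. ten years
# earlier sustaining, say, 5 Teraflop/s (assumed value) is in the same
# order of magnitude as the 1%-of-peak Petaflop system.
VECTOR_SUSTAINED = 5.0e12
print(f"Ratio vs. vector system: {sustained / VECTOR_SUSTAINED:.1f}x")
```

The point of the comparison is that a large nominal peak does not by itself represent progress if the sustained fraction stays this low.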