Civil-Comp Proceedings
ISSN 1759-3433 CCP: 92
PROCEEDINGS OF THE FIRST INTERNATIONAL CONFERENCE ON SOFT COMPUTING TECHNOLOGY IN CIVIL, STRUCTURAL AND ENVIRONMENTAL ENGINEERING
Edited by: B.H.V. Topping and Y. Tsompanakis
Paper 20
A Practical Methodology for Comparison of Evolutionary Optimization Algorithms
J. Nosek and M. Lepš
Department of Mechanics, Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic

J. Nosek, M. Lepš, "A Practical Methodology for Comparison of Evolutionary Optimization Algorithms", in B.H.V. Topping, Y. Tsompanakis, (Editors), "Proceedings of the First International Conference on Soft Computing Technology in Civil, Structural and Environmental Engineering", Civil-Comp Press, Stirlingshire, UK, Paper 20, 2009. doi:10.4203/ccp.92.20
Keywords: experimental research methodology, comparison, evolutionary algorithms, optimization, structural optimization, sizing.
Summary
During the last several decades, considerable work has been done on the comparison of numerical optimization algorithms; see, for example, the work by Eiben et al. [1,2] or by Bartz-Beielstein [3] and the sources collected on his web page [4], to cite a few. The central question is the selection of a proper metric for comparing optimization algorithms. Many such metrics have been defined; reference [3] summarizes a dozen of them. The two most often used are (i) a fixed budget of function evaluations, at which the responses of the optimization algorithms are compared, and (ii) the number of successful runs, i.e. runs that fall below some prescribed limit, usually a given distance from the known optimum.
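As an illustration only (not code from the paper), the sketch below shows how these two standard metrics could be computed from a matrix of recorded best-so-far objective values; the array layout, function names, and the success tolerance are assumptions.

import numpy as np

# best_so_far[r, k]: best objective value seen by run r after the
# k-th checkpoint (a fixed count of function evaluations); lower is better.

def fixed_budget_metric(best_so_far, budget_index):
    """Metric (i): compare algorithms by the mean objective value
    reached after a fixed number of function evaluations."""
    return np.mean(best_so_far[:, budget_index])

def success_rate(best_so_far, known_optimum, tolerance):
    """Metric (ii): fraction of runs whose final best value falls
    within a prescribed distance of the known optimum."""
    return np.mean(best_so_far[:, -1] <= known_optimum + tolerance)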
However, these metrics are rather "scientific", and their practical use, as well as a meaningful interpretation of their results, is limited. The first metric is too simple for comparing two methods: in our results, one method is usually better at the beginning of the optimization, whereas a second method usually wins in the end. The second metric requires knowledge of either the optimum or the expected level of the objective function minimum; this information is available only for benchmarks, not for a common practical task. Therefore, the modest goal of our work was to limit attention to the practical parts of the already known approaches.

As a result, we consider the whole history of an optimization algorithm's performance, i.e. the whole progress plot, to be of interest. Our proposed methodology stores the whole progress plot, i.e. a graph of the minimum value of the objective function encountered versus the number of function calls, sampled at given discrete steps (to minimize data storage). Since almost all evolutionary optimization algorithms are stochastic, typically as a result of a random starting point, one hundred performance curves are obtained for each optimization algorithm for the sake of statistical meaningfulness. Note that a statistical test can be applied at all stored points to distinguish differences among the applied optimization algorithms when clear dominance is not visible directly from the graphs; a sketch of such a comparison is given below. To show the applicability of the proposed methodology, a suite of three sizing structural optimization benchmarks from [5] is investigated. As a by-product of this research, discrepancies among results published in available sources are presented. Finally, the resulting graphs and tables can be used by practitioners to determine which of the studied optimization algorithms better suits their aims.
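A minimal sketch of the proposed bookkeeping and a pointwise comparison follows. The summary does not name the statistical test, so the Mann-Whitney U test used here is one plausible choice, and run_algorithm, the checkpoint spacing, and the 5% significance level are illustrative assumptions.

import numpy as np
from scipy.stats import mannwhitneyu

def progress_curves(run_algorithm, n_runs, checkpoints):
    """Collect the progress plot for one stochastic optimizer: for each
    of n_runs independent runs, store the best objective value found so
    far at each checkpoint (a function-call count)."""
    curves = np.empty((n_runs, len(checkpoints)))
    for r in range(n_runs):
        values = run_algorithm(seed=r)           # objective per evaluation
        running_best = np.minimum.accumulate(values)
        # sample the running best at the chosen call counts
        # (assumes every run performs at least max(checkpoints) calls)
        curves[r] = [running_best[c - 1] for c in checkpoints]
    return curves

def compare_pointwise(curves_a, curves_b, alpha=0.05):
    """At every stored checkpoint, test whether algorithm A's best
    values are significantly lower (better) than algorithm B's."""
    flags = []
    for k in range(curves_a.shape[1]):
        _, p = mannwhitneyu(curves_a[:, k], curves_b[:, k],
                            alternative='less')
        flags.append(p < alpha)
    return flags

With one hundred runs per algorithm, as the summary describes, the pointwise test flags the evaluation budgets at which one algorithm demonstrably dominates the other, even when the progress plots overlap visually.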
References