Civil-Comp Proceedings
ISSN 1759-3433 CCP: 92
Proceedings of the First International Conference on Soft Computing Technology in Civil, Structural and Environmental Engineering
Edited by: B.H.V. Topping and Y. Tsompanakis
Paper 53
Hierarchical Subset Latin Hypercube Sampling for Correlated Random Vectors
M. Vorechovský
Institute of Structural Mechanics, Faculty of Civil Engineering, Brno University of Technology, Czech Republic

M. Vorechovský, "Hierarchical Subset Latin Hypercube Sampling for Correlated Random Vectors", in B.H.V. Topping, Y. Tsompanakis, (Editors), "Proceedings of the First International Conference on Soft Computing Technology in Civil, Structural and Environmental Engineering", Civil-Comp Press, Stirlingshire, UK, Paper 53, 2009. doi:10.4203/ccp.92.53
Keywords: simulation, Latin hypercube sampling, correlation, progressive sampling, design of experiments, adaptive sample size, neural network learning, response surface, simulated annealing.
Summary
In many computer experiments the adequacy of a given sample to give
acceptable estimates of the desired statistical quantities cannot be
determined a priori, and thus the ability to extend or refine
an experimental design may be important. Such refinement is straightforward
with crude Monte Carlo sampling; however, running each realization (a physical
or virtual experiment) is often very expensive. A variance reduction technique
such as Latin hypercube sampling [1,2] is therefore a suitable option, because
it yields a lower variance of the estimates of statistical moments than crude
Monte Carlo sampling at the same sample size. In conventional Latin hypercube
sampling (LHS), however, the number of simulations (or physical realizations
in a design of experiments) must be specified in advance. In real-life
problems, the sample size that yields stable and statistically significant
estimates of the output statistics is not known beforehand. For example, when
an analyst plans to run complex nonlinear finite element computations using
LHS, it is unclear how many simulations are needed. If too small a sample set
is used (i.e. one that does not give acceptable statistical results), the
analyst normally has to abandon the results and run new analyses with a larger
sample set. It is thus desirable to start with a small sample and then extend
(or refine) the design if deemed necessary. Such an extension permits the use
of a larger sample set without discarding any of the expensive calculations
(experiments) already performed.
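
The variance-reduction property that motivates the use of LHS can be illustrated with a short sketch. The code below is not taken from the paper: the uniform marginals, the quadratic test function and the names lhs_uniform and test_function are illustrative assumptions. It draws a Latin hypercube sample by placing one point in each of N equiprobable strata per variable and compares the scatter of the resulting mean estimates with crude Monte Carlo sampling at the same sample size.

import numpy as np

def lhs_uniform(n_sim, n_var, rng):
    # Latin hypercube sample on [0, 1]^n_var: each marginal is split into
    # n_sim equiprobable strata, one point is drawn per stratum, and the
    # strata are independently permuted per variable.
    strata = np.array([rng.permutation(n_sim) for _ in range(n_var)]).T
    return (strata + rng.random((n_sim, n_var))) / n_sim

def test_function(x):
    # hypothetical smooth response, not from the paper
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(0)
n_sim, n_var, n_rep = 32, 4, 200

lhs_means = [test_function(lhs_uniform(n_sim, n_var, rng)).mean() for _ in range(n_rep)]
mc_means = [test_function(rng.random((n_sim, n_var))).mean() for _ in range(n_rep)]

print("spread of mean estimate, LHS:", np.std(lhs_means))
print("spread of mean estimate, MC :", np.std(mc_means))

For a smooth response of this kind the LHS estimates of the mean typically scatter far less than the crude Monte Carlo estimates, which is the variance reduction that the proposed method aims to preserve when the sample is extended.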
This problem is overcome by the proposed methods. The paper presents procedures for extending the size of a Latin hypercube sample with rank-correlated variables. The methods are based on a hierarchy of Latin hypercube-like samples that are proven to yield smaller variances of results than the crude Monte Carlo method. The approach is called hierarchical because each refinement explores the design space at a higher resolution than the previous (unrefined) design. The subsets sampled by the proposed method can be merged together, exploiting the variance reduction property while retaining the sampling flexibility. The whole cascade of LHS-like runs can be fully automated, and the stopping criterion might be, for example, the statistical significance of the output statistics or the available computational time. In short, the simulation can be stopped at run time depending on the current accuracy of the results and the analyst's budget. In this way, crude pilot studies can later be efficiently reused and refined.
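
The hierarchical extension itself can be sketched in a simplified form. The fragment below is not the author's published algorithm; it only illustrates one possible nesting scheme, in which an existing design of N points is doubled by placing N new points in the currently empty strata of a twice-as-fine stratification, so that the merged set is again a Latin hypercube design. The correlation control of the proposed method (e.g. rank-correlation adjustment by simulated annealing) is omitted, and the names lhs_uniform and extend_lhs are hypothetical.

import numpy as np

def lhs_uniform(n_sim, n_var, rng):
    # plain LHS on [0, 1]^n_var: one point per 1/n_sim stratum in each variable
    strata = np.array([rng.permutation(n_sim) for _ in range(n_var)]).T
    return (strata + rng.random((n_sim, n_var))) / n_sim

def extend_lhs(x_old, rng):
    # Double an LHS design: add as many new points as existing ones, placing
    # them so that each variable again has exactly one point per stratum of
    # the refined (twice as fine) stratification.
    n_old, n_var = x_old.shape
    n_fine = 2 * n_old
    x_new = np.empty((n_old, n_var))
    for j in range(n_var):
        occupied = np.floor(x_old[:, j] * n_fine).astype(int)
        empty = rng.permutation(np.setdiff1d(np.arange(n_fine), occupied))
        x_new[:, j] = (empty + rng.random(n_old)) / n_fine
    return np.vstack([x_old, x_new])  # merged design of size 2 * n_old

rng = np.random.default_rng(1)
x = lhs_uniform(8, 3, rng)   # crude pilot design
while x.shape[0] < 64:       # stopping rule: accuracy of statistics or budget
    x = extend_lhs(x, rng)   # every doubling reuses all previous points

Each doubling keeps all previously evaluated points, which is the property that allows a crude pilot study to be reused and refined until the output statistics stabilise or the computational budget is exhausted.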