Civil-Comp Proceedings
ISSN 1759-3433 CCP: 96
PROCEEDINGS OF THE THIRTEENTH INTERNATIONAL CONFERENCE ON CIVIL, STRUCTURAL AND ENVIRONMENTAL ENGINEERING COMPUTING Edited by: B.H.V. Topping and Y. Tsompanakis
Paper 93
Solving Random Linear Problems: Expected Value Linear Programming
I. Deák
Department of Computer Science, Corvinus University of Budapest, Hungary

Full reference: I. Deák, "Solving Random Linear Problems: Expected Value Linear Programming", in B.H.V. Topping, Y. Tsompanakis, (Editors), "Proceedings of the Thirteenth International Conference on Civil, Structural and Environmental Engineering Computing", Civil-Comp Press, Stirlingshire, UK, Paper 93, 2011. doi:10.4203/ccp.96.93
Keywords: random linear problems, stochastic optimization, expected value optimization, successive regression approximations.
Summary
One of the basic problems of optimization is linear programming: optimizing a linear function subject to linear constraints. Usually the parameters in the problem description are treated as constants. However, these quantities may be random: no exact values are available, and observations yield only approximations, each of which can be regarded as the original value plus noise. In this case we face a completely different problem, which must first be reformulated to account for the random variables and then solved somehow.
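As a small illustration of this setting, the sketch below (all data are invented for this example, not taken from the paper) builds a two-variable LP whose coefficients can only be observed with additive Gaussian noise, forms the expected-value LP by averaging repeated observations, and solves the averaged problem with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical true LP (illustrative data):
#   min  c^T x   s.t.  A x <= b,  x >= 0
c_true = np.array([-1.0, -2.0])
A_true = np.array([[1.0, 1.0],
                   [1.0, 3.0]])
b_true = np.array([4.0, 6.0])

def observe(sigma=0.1):
    """One noisy observation: the true coefficients plus Gaussian noise."""
    return (c_true + sigma * rng.standard_normal(c_true.shape),
            A_true + sigma * rng.standard_normal(A_true.shape),
            b_true + sigma * rng.standard_normal(b_true.shape))

# Expected-value LP: replace the random coefficients by sample means
# computed from repeated observations.
N = 200
obs = [observe() for _ in range(N)]
c_bar = np.mean([o[0] for o in obs], axis=0)
A_bar = np.mean([o[1] for o in obs], axis=0)
b_bar = np.mean([o[2] for o in obs], axis=0)

res = linprog(c_bar, A_ub=A_bar, b_ub=b_bar, bounds=[(0, None)] * 2)
# res.x should land close to the optimum of the true LP, here (3, 1).
```

Solving a single noisy realization instead of the averaged problem would give a different answer on every observation; averaging first is exactly the expected-value reformulation.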
We present a possible reformulation of the problem in which the random variables are replaced by their expected values; we call this deterministic model the expected value linear programming (ELP) model. In practice we do not have access to the expected values, only to realizations of the random variables.

The next problem considered in this paper is the solution procedure we would like to apply to an ELP. Recently we developed a solution procedure for several stochastic optimization problems in which one or more functions have no deterministic value and only a noise-corrupted value can be observed. This numerical method, called successive regression approximations (SRA), uses regression approximations in the same way that the second-order Taylor expansion is used in Newton's method. The main points of the SRA algorithm are the following: (i) for a set of points and the related function values, we fit a regression function; (ii) using this regression function, we create and solve an approximate problem by replacing the noisy ("hard to compute") function with the approximation; (iii) the solution of the approximate problem is fed back into the set of previous points. This procedure is repeated until we are satisfied with the approximate solution. SRA has been applied to solve numerical examples of stochastic programming problems [1,2], and the convergence of the method was proven for the case in which a noisy equation is solved [3]. Here we present a version of SRA for computing the solution of the ELP problem.

Furthermore, we consider the problem of solving a system of equations with random coefficients. Our proposal is similar to that for the ELP problem: the expected value problem is formed and then solved by a variant of the SRA method.
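The three SRA steps (i)-(iii) can be sketched on the simplest case the summary mentions, a noisy scalar equation; the function, noise level, and iteration count below are illustrative assumptions, not the paper's actual test problems:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy equation: we want the root of f(x) = 2x - 3, but only
# noise-corrupted evaluations of f are available.
def f_noisy(x, sigma=0.5):
    return 2.0 * x - 3.0 + sigma * rng.standard_normal()

# SRA-style loop (illustrative sketch):
#   (i)   fit a regression line to all points collected so far,
#   (ii)  solve the fitted approximate equation a*x + b = 0,
#   (iii) feed the solution back into the point set and repeat.
xs = [0.0, 1.0]                      # two starting points
ys = [f_noisy(x) for x in xs]

for _ in range(200):
    a, b = np.polyfit(xs, ys, 1)     # (i) linear regression fit
    x_new = -b / a                   # (ii) root of the fitted line
    xs.append(x_new)                 # (iii) add the new point
    ys.append(f_noisy(x_new))

# The iterates should settle near the true root x* = 1.5.
```

Because every past point stays in the regression set, the fitted line averages out the noise, and the iterates stabilize near the true root even though no single evaluation of f is exact.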