Civil-Comp Proceedings
ISSN 1759-3433
CCP: 93
PROCEEDINGS OF THE TENTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL STRUCTURES TECHNOLOGY
Paper 340

Introducing Negative Penalty Functions in Least Square Optimisation

S. Ilanko1 and G.K. Bharathy2

1Department of Engineering, The University of Waikato, Hamilton, New Zealand
2Ackoff Center for Advancement of Systems Approach (ACASA) [and Systems Engineering], University of Pennsylvania, Philadelphia PA, United States of America

Full Bibliographic Reference for this paper
S. Ilanko, G.K. Bharathy, "Introducing Negative Penalty Functions in Least Square Optimisation", in "Proceedings of the Tenth International Conference on Computational Structures Technology", Civil-Comp Press, Stirlingshire, UK, Paper 340, 2010. doi:10.4203/ccp.93.340
Keywords: negative penalty, curve fitting, optimization.

Summary
In the penalty method, it is common practice to model constraints (boundary conditions) by first allowing the solution of the augmented objective function to violate those conditions, and then multiplying the square of the constraint violation (the error) by a very large positive penalty parameter; this effectively reduces the error to an acceptable level, so the constraint conditions are approximately satisfied.
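As an illustration (with notation chosen here rather than taken from the paper), a least-squares objective $S(\mathbf{a}) = \sum_i \left(f(x_i;\mathbf{a}) - y_i\right)^2$ subject to a constraint $g(\mathbf{a}) = 0$ is replaced by the augmented function $\Pi(\mathbf{a}) = S(\mathbf{a}) + \alpha\, g(\mathbf{a})^2$. For a sufficiently large positive penalty parameter $\alpha$, minimising $\Pi$ drives $g(\mathbf{a})$ close to zero, so the constraint is satisfied only approximately and the residual violation depends on the chosen value of $\alpha$.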

Despite its common use and acceptance in many areas, the uncertainty in the constraint violation and the need to select a suitable value for the penalty parameter have continued to cause concern. Recent publications show that, for certain linear problems in mechanics, this problem can be overcome by using a combination of positive and negative penalty terms.

The purpose of this paper is to demonstrate how this approach can be used to solve curve-fitting problems. For this, we formulated and tested two models, in two-dimensional and three-dimensional space.
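As a minimal sketch of how such a penalised curve fit might be set up in the two-dimensional case (the data, polynomial basis and single boundary constraint below are assumptions made for illustration, not the authors' actual model), the following Python snippet fits a curve by least squares and enforces a boundary value through a penalty term added directly to the normal equations, so that the penalty parameter may be either positive or negative:

    import numpy as np

    # Hypothetical data: noisy samples of a curve on [0, 1] (not the paper's data)
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = np.sin(np.pi * x) + 0.01 * rng.standard_normal(x.size)

    # Assumed polynomial basis; the paper's basis functions may differ
    def basis(t, n=6):
        return np.vander(np.atleast_1d(t), n, increasing=True)

    Phi = basis(x)        # design matrix at the data points
    c = basis(0.0)[0]     # basis row at the constrained boundary x = 0
    y0 = 0.0              # prescribed boundary value

    def penalised_fit(alpha):
        # Minimise ||Phi a - y||^2 + alpha * (c.a - y0)^2 via the normal
        # equations; forming them directly allows alpha to be negative too.
        A = Phi.T @ Phi + alpha * np.outer(c, c)
        b = Phi.T @ y + alpha * c * y0
        return np.linalg.solve(A, b)

    a = penalised_fit(1.0e6)                  # large positive penalty
    print("fitted boundary value:", c @ a)    # close to y0, but not exact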

To determine whether the proposed penalty function method provides an adequate fit:

  • We compared our predictions using the proposed penalty function method with a Fourier series solution that satisfies all constraints;
  • We examined quantitative measures of the discrepancies between the real and predicted data by estimating the distribution of root mean square errors (RMSE), calculated as the square root of the sum of the squared fractional residuals; the errors are modest, below 1%, for both the two-dimensional and three-dimensional cases.
  • To test how the penalty method enforces the boundary conditions, we plotted the boundary displacements for a range of penalty parameter magnitudes, $|\alpha| = 10^k$ with $0 \le k \le 8$ (a sketch of such a sweep, together with the RMSE computation, follows this list).
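The following Python sketch illustrates such a sweep over penalty magnitudes $|\alpha| = 10^k$, $0 \le k \le 8$, of both signs, reporting an RMSE of the fractional residuals and the boundary violation; the data, basis and normalisation are assumptions made for illustration, not taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = np.sin(np.pi * x) + 0.01 * rng.standard_normal(x.size)

    Phi = np.vander(x, 6, increasing=True)                  # assumed polynomial basis
    c = np.vander(np.array([0.0]), 6, increasing=True)[0]   # boundary row at x = 0
    y0 = 0.0                                                # prescribed boundary value

    for k in range(9):                     # |alpha| = 10**k, 0 <= k <= 8
        for sign in (+1.0, -1.0):          # small-magnitude negative penalties can misbehave
            alpha = sign * 10.0**k
            a = np.linalg.solve(Phi.T @ Phi + alpha * np.outer(c, c),
                                Phi.T @ y + alpha * c * y0)
            # One possible definition of fractional residuals (normalised by the
            # peak data value); the paper's exact definition may differ.
            resid = (Phi @ a - y) / np.max(np.abs(y))
            rmse = np.sqrt(np.mean(resid ** 2))
            violation = c @ a - y0         # boundary displacement error
            print(f"k={k} sign={sign:+.0f}  RMSE={rmse:.2e}  violation={violation:+.2e}")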

It was also observed that positive and negative values of the penalty parameter generally drive convergence from opposite directions, and that the magnitude of the constraint violation decreases as the magnitude of the penalty parameter increases, although numerical problems appear at very high penalty values.
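A one-variable example, constructed here purely for illustration, shows why the two signs act in opposite directions: minimising $(x-1)^2 + \alpha x^2$, which penalises the constraint $x = 0$, gives $x = 1/(1+\alpha)$. For large positive $\alpha$ the minimiser approaches the constrained value $0$ from above, while for negative $\alpha$ with $|\alpha| > 1$ it approaches $0$ from below, so the two families of solutions bracket the exact constrained solution and the violation decays roughly as $1/|\alpha|$.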

Our tests showed that, by combining positive and negative penalty parameters, convergence can be achieved even at moderate penalty values. The errors are small and are deemed acceptable, but when a continuous constraint is represented as a penalised integral in the three-dimensional model, the convergence is non-monotonic and the bracketing property of the solutions obtained with positive and negative penalty parameters is lost. Further investigation is under way to address this by using a series of discrete constraints.
