Civil-Comp Proceedings
ISSN 1759-3433
CCP: 84
PROCEEDINGS OF THE FIFTH INTERNATIONAL CONFERENCE ON ENGINEERING COMPUTATIONAL TECHNOLOGY
Edited by: B.H.V. Topping, G. Montero and R. Montenegro
Paper 61

Back Analysis of Structural Model Parameters: The Application of Neural Networks

S. Arangio

Department of Structural and Geotechnical Engineering, University of Rome "La Sapienza", Rome, Italy

Full Bibliographic Reference for this paper
S. Arangio, "Back Analysis of Structural Model Parameters: The Application of Neural Networks", in B.H.V. Topping, G. Montero, R. Montenegro, (Editors), "Proceedings of the Fifth International Conference on Engineering Computational Technology", Civil-Comp Press, Stirlingshire, UK, Paper 61, 2006. doi:10.4203/ccp.84.61
Keywords: back analysis, parameter estimation, model updating, neural networks, back propagation algorithm, model soundness, statistical robustness.

Summary
Nowadays, the use of numerical models, even relatively sophisticated ones, is common practice in structural engineering. In critical applications, such as the modelling of complex structural systems, parameter estimation plays a crucial role. This is especially true in so-called inverse structural problems.

In this paper, the problem of identifying the reference configuration of a long suspension bridge is presented. Reaching such a configuration is a sensitive issue in numerical modelling. In reality, the weight of the structural and non-structural elements loads the bridge during the construction phases, so the reference configuration is reached under these loads. The numerical model, instead, is initially unloaded and undeformed, and the application of the dead loads causes large, unrealistic displacements of the bridge deck.

Three methods usually adopted to deal with this problem are briefly presented: the impressed displacement method (SI), the sag method (CF) and the temperature variation method (TE).

In the temperature variation method, the reference configuration is modelled by applying an appropriate temperature variation to the hanger system, which reproduces the initial deformation due to the dead load.
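The basic idea can be sketched with the thermal strain relation for a single hanger (all numerical values below are illustrative assumptions, not taken from the paper): a temperature variation ΔT applied to a hanger of length L produces an elongation ΔL = α·ΔT·L, so the ΔT reproducing a target dead-load deformation follows directly.

```python
# Sketch of the temperature variation method for a single hanger.
# All numerical values are illustrative assumptions, not from the paper.

ALPHA_STEEL = 1.2e-5  # thermal expansion coefficient of steel [1/degC]

def hanger_delta_t(target_elongation_m: float, length_m: float,
                   alpha: float = ALPHA_STEEL) -> float:
    """Temperature variation reproducing a target elongation:
    dL = alpha * dT * L  =>  dT = dL / (alpha * L)."""
    return target_elongation_m / (alpha * length_m)

# A hypothetical hanger 40 m long that must shorten by 12 mm under dead load:
dt = hanger_delta_t(-0.012, 40.0)
print(round(dt, 1))  # -25.0, i.e. a cooling of about 25 degC
```

In the actual bridge model the hangers interact through the deck and cables, so the temperatures cannot be computed one hanger at a time; this is precisely why a back analysis of the full set of values is needed.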

In particular, a technique is presented for the back analysis of the temperature values to assign to the hangers, in order to obtain a modified initial configuration close to the reference configuration. The technique is based on the combination of the finite element method (FEM) and neural networks: the FEM is used to generate the training data for the network, and a feed-forward, back-propagation, multilayer neural network is used for the back analysis of the parameters.

In order to verify the quality of the responses, the temperature variations obtained are used to carry out a direct FEM analysis, yielding the corresponding vertical displacements. The analysis of the results shows that these parameters identify the configuration in the mid-span zone rather well (with a difference of 2%), whereas they do not capture the behaviour towards the anchorages (where the difference is about 23%). This large difference can be justified: the error near the anchorages is due to the approximation inherent in the temperature variation method, and not to the parameter estimation.

The presented application shows that, in situations where a large set of data is available, neural networks represent a robust statistical tool for handling prediction problems. Such problems involve the search for a model, estimated from the available data, that is used to predict the value of some variable of interest.

Looking at the different sources of error, it is possible to gain an understanding of why particular models tend to work well on one type of dataset but less so on others.

It is possible to decompose the reducible part of the error into a bias and a variance component. The relative performance of different models is related to the so-called bias-variance dilemma: flexible techniques such as neural networks lead to models that tend to have low bias and high variance, whereas more inflexible "conventional" statistical methods, such as linear regression, lead to models that tend to have more bias and less variance than their modern counterparts. For a fixed bias, the variance tends to decrease as the number of examples grows. Consequently, for large sets of available data, bias tends to be the dominant source of prediction error, and flexible statistical models, such as neural networks, are more robust. On the other hand, in order to ensure the soundness of the network model, it is important to operate carefully on its free parameters: with a suitable choice of the number of hidden units and training epochs, satisfactory results can be reached.
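The bias-variance decomposition described above can be illustrated numerically (a toy regression problem, not the bridge data; all settings are assumptions): fitting a rigid and a flexible model to many resampled datasets shows the rigid model dominated by bias and the flexible one by variance, with the flexible model's variance shrinking as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: np.sin(2 * np.pi * x)      # underlying relation to recover
x_test = np.linspace(0.1, 0.9, 50)            # evaluation grid

def experiment(degree: int, n: int, trials: int = 200):
    """Return (bias^2, variance) of a degree-`degree` polynomial fit,
    estimated over `trials` resampled noisy datasets of size `n`."""
    preds = np.empty((trials, x_test.size))
    for t in range(trials):
        x = rng.uniform(0.0, 1.0, n)
        y = true_f(x) + rng.normal(0.0, 0.3, n)     # noisy observations
        coeffs = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coeffs, x_test)
    mean_pred = preds.mean(axis=0)
    bias2 = float(((mean_pred - true_f(x_test))**2).mean())
    var = float(preds.var(axis=0).mean())
    return bias2, var

b_lin, v_lin = experiment(degree=1, n=30)     # rigid model: linear regression
b_flex, v_flex = experiment(degree=9, n=30)   # flexible model: high-order polynomial
print(f"linear  : bias^2={b_lin:.3f}  variance={v_lin:.3f}")
print(f"flexible: bias^2={b_flex:.3f}  variance={v_flex:.3f}")

# For a fixed (flexible) model, the variance shrinks as the dataset grows.
_, v_big = experiment(degree=9, n=300)
print(f"flexible, 10x data: variance={v_big:.3f}")
```

The same pattern motivates the paper's conclusion: with a large FEM-generated training set, the variance of the flexible network model is kept under control and its low bias makes it the more robust choice.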
