Friday 6 March 2015

Calibration of Computationally Expensive Simulations via Response Surface Optimisation

In this post I will provide a general definition of optimisation and an overview of some popular algorithms. I will then go through an example of using response surface optimisation to determine the turbulence model parameters that minimise the error between a computationally expensive fluid flow simulation of a submarine and the associated experimental measurements. We first presented this work in [1].


Optimisation is the process of determining the design parameters that minimise a specified cost function, under a set of constraints. For example, if we want to minimise the noise of a fan, then the cost function would be the decibel level of the noise generated by the fan, and the design parameters would include the width, length, thickness and material type of the fan blades. There may also be constraints on the weight and sharpness of the components for occupational health and safety reasons. Constraints limit the allowable combinations of design parameters, and typically result in a lower-performance design than would have been achieved had the constraints not been in place. Cost functions can also be multi-objective. For example, in addition to minimising the noise of the fan, one may also want to minimise the time, energy and financial cost required to manufacture the components. Typically the designer must determine the relative importance of each objective in order to combine them into a single cost function.
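As a toy sketch of such a cost function, the snippet below combines two objectives with designer-chosen weights and handles a constraint as a penalty term. The formulas and numbers are purely illustrative, not real acoustic or manufacturing models.

def fan_cost(design, noise_weight=0.8, cost_weight=0.2, max_blade_mass=2.0):
    # 'design' = (width, length, thickness); the models below are illustrative stubs only.
    width, length, thickness = design
    noise_db = 60.0 + 5.0 * width - 2.0 * length + 8.0 * thickness   # objective 1: noise
    manufacture = 10.0 * width * length * thickness                  # objective 2: manufacturing cost
    blade_mass = 7800.0 * width * length * thickness                 # constrained quantity
    penalty = 1e6 if blade_mass > max_blade_mass else 0.0            # constraint handled as a penalty
    # The designer sets the relative importance when combining the objectives into one cost.
    return noise_weight * noise_db + cost_weight * manufacture + penalty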



Optimisation methods can be classed into two types: gradient-based methods and non-gradient-based methods. If the design / parameter space for a given cost function is convex (having only one minimum), then gradient-based methods are an extremely efficient way of determining the optimal solution. In these methods one estimates how the cost function changes with the design parameters, and this information is used to select the next combination of parameters to evaluate. However, if the cost function is not convex and is highly non-linear, then gradient-based methods can get stuck in local minima and produce sub-optimal designs. In such cases non-gradient-based methods are preferred.
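As a minimal sketch of the gradient-based idea, the Python snippet below estimates the gradient with finite differences and takes repeated downhill steps. The step size, iteration count and the quadratic test function are purely illustrative.

import numpy as np

def finite_difference_gradient(cost, x, eps=1e-6):
    # Estimate how the cost function changes with each design parameter.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (cost(x + step) - cost(x - step)) / (2.0 * eps)
    return grad

def gradient_descent(cost, x0, learning_rate=0.1, iterations=100):
    # Use the gradient estimate to select the next combination of parameters.
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        x = x - learning_rate * finite_difference_gradient(cost, x)
    return x

# Example: a convex quadratic bowl with its minimum at (1, -2).
x_opt = gradient_descent(lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2, x0=[0.0, 0.0])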



One of the more popular non-gradient-based methods is evolutionary optimisation. These methods attempt to evolve an optimal design through natural selection, or survival of the fittest. This is achieved by breeding a set of parents to produce a set of children, and keeping only the fittest children (those with a low cost function) for further breeding [2]. Each parent and child has a gene sequence defined by the design input parameters. Breeding is undertaken on the basis of mutation and cross-over (gene swapping). There are two main incarnations of this approach: genetic algorithms (GA) and evolutionary algorithms (EA). GA were developed first, with the parameters encoded into binary numbers, which are essentially treated as genes in the reproductive process. In the mutation process one of the digits in the binary coding is randomly flipped from a 1 to a 0 or vice versa. In the cross-over process two genes are split at a random point and interchanged. EA are philosophically the same; however, the cross-over and mutation steps are performed on the design parameters directly, without any binary encoding and decoding. In either case the breeding process continues over multiple generations until a sufficiently good solution is found or computational resources are exhausted.
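The following is a minimal sketch of a real-valued evolutionary algorithm of the kind described above. The population size, mutation scale and mutation probability are illustrative choices, not tuned values.

import numpy as np

def evolutionary_minimise(cost, bounds, population=20, generations=50, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    # Initial parents: random designs within the parameter bounds.
    parents = rng.uniform(lo, hi, size=(population, len(bounds)))
    for _ in range(generations):
        # Cross-over: swap parameters between randomly paired parents.
        partners = parents[rng.permutation(population)]
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, partners)
        # Mutation: randomly perturb a subset of the parameters.
        children = children + rng.normal(0.0, 0.05 * (hi - lo), size=children.shape) * (rng.random(children.shape) < 0.2)
        children = np.clip(children, lo, hi)
        # Selection: keep the fittest individuals (lowest cost) as the next set of parents.
        pool = np.vstack([parents, children])
        fitness = np.array([cost(x) for x in pool])
        parents = pool[np.argsort(fitness)[:population]]
    return parents[0]

# Example usage on the same quadratic test function as before.
best = evolutionary_minimise(lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2, bounds=[(-5, 5), (-5, 5)])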



Evolutionary approaches can achieve a very good solution in a reasonable time; however, they require many concurrent evaluations of the cost function [2]. This is not feasible for cost functions that are very computationally expensive. In such cases Response Surface Models (RSM) can be used to build an estimate of the parameter space from a limited number of cost function evaluations. The minimum of the RSM is then found either numerically or analytically, and the cost function is re-evaluated at the estimated optimal parameter set. The RSM is updated with this additional cost function evaluation, and the new minimum is determined. This process is repeated until the difference between the RSM estimate of the cost function minimum and the value obtained from the cost function itself is sufficiently small. The only subjective choice is the functional form of the RSM. Popular choices are splines and Kriging [3]; the latter is adopted in the following example. Kriging has the added feature of providing error estimates of the RSM throughout the design space.
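As a sketch of how such a loop might look in Python, the snippet below uses scikit-learn's Gaussian process regressor as a stand-in for the Kriging model. The kernel choice, convergence tolerance and update budget are assumptions for illustration, not the settings used in [1].

import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def rsm_minimise(cost, x_init, y_init, bounds, tolerance=1e-3, max_updates=20):
    # Iteratively refine a response surface of an expensive cost function.
    X = np.asarray(x_init, dtype=float)
    y = np.asarray(y_init, dtype=float)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(max_updates):
        gp.fit(X, y)  # build the response surface from the evaluations so far
        # Find the minimum of the (cheap) response surface, not of the expensive cost function.
        result = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
                          x0=X[np.argmin(y)], bounds=bounds)
        y_new = cost(result.x)  # one additional expensive cost function evaluation
        X = np.vstack([X, result.x])
        y = np.append(y, y_new)
        # Stop once the surface and the true cost function agree at the candidate optimum.
        if abs(y_new - result.fun) < tolerance:
            break
    return X[np.argmin(y)], y.min()

Note that gp.predict(..., return_std=True) also returns the error estimate of the surface throughout the design space, mirroring the Kriging feature mentioned above.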



As mentioned at the beginning of the post, the optimisation task here is to determine the turbulence model parameters that minimise the error between an expensive computational fluid dynamics (CFD) simulation of a submarine and the associated experimental measurements (lift force, drag force and yaw moment). Slices of the velocity field around the submarine body are illustrated below.



For this particular turbulence model there are two key parameters, βi and α, giving a two-dimensional parameter space. Reasonable limits were set for each parameter, with 9 evenly spaced values selected along each dimension, producing 9x9=81 design candidates. The CFD simulation was run 81 times - once for each design candidate - with the lift force, drag force and yaw moment calculated from the flow field. The multi-objective cost function is the average squared error between the forces calculated from the CFD and the forces measured experimentally. The initial RSM of the simulation error is illustrated below, where blue contours indicate low error and red contours high error. The symbols in the figure represent the initial 81 evaluations of the cost function.
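A sketch of how the initial design grid and cost function might be set up is given below. Here run_cfd, the parameter bounds and the measured values are all hypothetical placeholders standing in for the actual solver and data used in [1].

import numpy as np
from itertools import product

# Illustrative bounds and measurements only - not the actual values used in [1].
beta_bounds, alpha_bounds = (0.5, 1.5), (0.3, 0.7)
lift_exp, drag_exp, yaw_exp = 1.0, 2.0, 0.5

def run_cfd(beta, alpha):
    # Stand-in for the expensive CFD solver; returns (lift, drag, yaw moment).
    # In practice this would launch a full simulation for the given parameters.
    return 1.1 * beta, 2.2 * alpha, 0.4 * beta * alpha

def cost(params):
    lift, drag, yaw = run_cfd(*params)
    # Average squared error between the simulated and measured quantities.
    return np.mean([(lift - lift_exp)**2, (drag - drag_exp)**2, (yaw - yaw_exp)**2])

# 9 evenly spaced values per parameter gives the 9x9=81 initial design candidates.
design_points = np.array(list(product(np.linspace(*beta_bounds, 9), np.linspace(*alpha_bounds, 9))))
initial_errors = np.array([cost(p) for p in design_points])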

The minimum of this RSM is determined, along with the associated values of βi and α. An additional CFD simulation is then run with these particular values of βi and α, the cost function (the error in the forces between the CFD simulation and the measurements) is evaluated, and the RSM is updated. This process continues until the difference between the RSM estimate of the cost function and the evaluated cost function is sufficiently small. The final RSM, with the additional cost function evaluations, is illustrated below.
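Tying the sketches together, this refinement stage could be driven by the rsm_minimise function outlined earlier, seeded with the 81 grid evaluations (reusing the placeholder names introduced above):

best_params, best_error = rsm_minimise(cost,
                                       x_init=design_points,
                                       y_init=initial_errors,
                                       bounds=[beta_bounds, alpha_bounds])
print("estimated optimal (beta_i, alpha):", best_params, "with error:", best_error)
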
In addition to determining the optimal solution, the RSM also provides an understanding of how the cost function changes throughout the design space, which evolutionary methods do not. RSM are also capable of answering multiple questions at once. For example, in a multi-objective optimisation study, if the relative importance of the objectives changes, then a new RSM can be built using the existing simulation results. Further refinement of the RSM would in all likelihood be required to find the optimal solution for the new cost function. If one were to use an evolutionary method, the optimisation process would have to be run from scratch.

The RSM optimisation method can also be used to determine the optimal hyper-parameters used in the data modelling process, such as regularisation parameters used in regression and classification studies. 
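As an illustration, the cost function for such a study could be a cross-validated error as a function of the regularisation strength, which could then be fed to the response surface loop sketched above. The ridge regression setup below is a hypothetical example, not part of the original study.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

def cv_error(params):
    # Cost function: cross-validated mean squared error for a given regularisation strength.
    model = Ridge(alpha=10.0 ** params[0])   # params[0] is log10 of the regularisation parameter
    return -cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=5).mean()

# cv_error could now play the role of the expensive cost function in the RSM loop above,
# e.g. rsm_minimise(cv_error, x_init, y_init, bounds=[(-3.0, 3.0)]).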

References:
[1] Chng, T. S., Widjaja, R., Kitsios, V. & Ooi, A., 2007, RANS turbulence model optimisation based on surrogate management framework, Australasian Fluid Mechanics Conference, Queensland University, Brisbane, Australia, 3-7 December.
[2] Fogel, D., 1994, An introduction to simulated evolutionary optimization, IEEE Transactions on Neural Networks, Vol. 5, pp. 3-14.
[3] Jones, D. R., 2001, A taxonomy of global optimization methods based on response surfaces, Journal of Global Optimization, Vol. 21, pp. 345-383.
