[R] adaptive optimization of mesh size
baptiste Auguié
ba208 at exeter.ac.uk
Tue May 6 12:05:15 CEST 2008
My post may have slipped through the bank holiday and been forgotten
by now; I'm still hoping for some pointers. Please let me know if I
need to clarify the problem.
baptiste
On 4 May 2008, at 16:39, baptiste Auguié wrote:
> Dear list,
>
>
> I'm running an external program that computes some electromagnetic
> response of a scattering body. The numerical scheme is based on a
> discretization with a characteristic mesh size "y". The smaller y
> is, the better the result (but obviously the computation will take
> longer).
>
> A convergence study showed the error between the computed values
> and the exact solution of the problem to be a quadratic in y, with
> the standard error increasing as y^3. I wrote the interface to the
> program in R, as it is much more user-friendly and allows for
> post-processing analysis. Currently, it only runs with a
> user-defined discretization parameter. I would like to implement an
> adaptive scheme [1] with the following improvements:
>
> 1) obtain an estimate of the error by fitting the results against a
> series of mesh sizes with the quadratic model, and extrapolating to
> y = 0 (quite straightforward; a minimal sketch follows this list).
>
> 2) adapt "dynamically" the set of mesh sizes to fulfill a final
> accuracy condition, between a starting value (a rule-of thumb
> estimate is given by the problem values). The lower limit of y
> should also be constrained by the resources (again, an empirical
> rule dictates the computation time and memory usage).
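>
> For point 1), something like this is what I have in mind (a minimal
> sketch, using the same weighted quadratic fit as in the code further
> down):
>
>> ## fit the quadratic model with 1/y^3 weights and return the value
>> ## extrapolated to y = 0, together with its standard error
>> extrapolate <- function(y, value){
>>   fm <- lm(value ~ poly(y, degree = 2, raw = TRUE), weights = 1/y^3)
>>   summary(fm)$coefficients[1, 1:2]  # intercept estimate and std. error
>> }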
>
> I'm looking for advice on this second point (both on the technical
> aspect, and whether this is sound statistically):
>
> - I can foresee that I will always need a few y values before I can
> do any extrapolation, but how many of them? 3, 10? How should I
> decide?
>
> - once I have enough points (say, 10) to use the fitting procedure
> and get an estimate of the error, how should I choose the best
> location of the next y if the error is still too large?
>
> - in a practical implementation, I would use a while loop and
> append the successive values to a data.frame(y, value). However,
> this procedure will be run for different parameters (different
> wavelengths, actually), so the set and number of y values may vary
> from one run to another. I think I'd be better off using a list,
> with each run having its own data.frame (a minimal sketch follows).
> Does this make sense?
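>
> To illustrate that last point, this is the kind of structure I have
> in mind (a minimal sketch; "wavelengths" and "run.convergence" below
> are just made-up placeholders for the real loop over parameters):
>
>> ## one convergence run per wavelength, each returning its own
>> ## data.frame(y, value); results are collected in a named list, so
>> ## the number of mesh sizes is free to differ between runs
>> wavelengths <- c(500, 600, 700)                # made-up parameter values
>> run.convergence <- function(wl){               # placeholder for a real run
>>   y <- seq(0.1, 0.8, length = sample(5:10, 1)) # number of y values varies
>>   data.frame(y = y, value = y^2 + wl/1000)     # dummy results
>> }
>> all.runs <- lapply(wavelengths, run.convergence)
>> names(all.runs) <- paste("wl", wavelengths, sep = ".")
>> str(all.runs)  # a list of data.frames, one per run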
>
>
> Below are a few lines of code to illustrate the problem,
>
>
>> ## made-up function that mimics the results of the real program:
>> ## quadratic in the mesh size x, with noise growing as x^3
>> program.result <- function(x, p){
>>   value <- p[3]*x^2 + p[2]*x + p[1]
>>   value * (1 + rnorm(1, mean = 0, sd = 0.1 * x^3))
>> }
>>
>>
>> p0 <- c(0.1, 0.1, 2) # set of parameters
>>
>> ## user-defined limits of the mesh parameter y (given on a log scale);
>> ## 10 mesh sizes between limits[1] and limits[2], unevenly spaced
>> ## (built as a linear sequence in 10^y, then transformed back)
>> limits <- c(0.1, 0.8)
>> y <- log10(seq(10^limits[1], 10^limits[2], length = 10))
>>
>> ## results of the program at each mesh size
>> result <- sapply(y, function(x) program.result(x, p0))
>>
>> #### fitting and extrapolation procedure ####
>> library(gplots) # plot with CI
>> ## the data, with y^3 error bars
>> plotCI(y, result, y^3, xlim = c(0, 1), ylim = c(0, 2))
>>
>> my.data <- data.frame(y = y, value = result)
>>
>> fm <- lm(value ~ poly(y, degree = 2, raw = TRUE), data = my.data,
>>          weights = 1/y^3)
>> lines(y, predict(fm, data.frame(y=y)), col = 2)
>>
>> extrap <- summary(fm)$coefficients[1,] # intercept and error on it
>> plotCI(0,extrap[1], 2 * extrap[2], col = 2, add=T)
>>
>>
>> ### my naive take on adaptive runs... ##
>>
>> ## stop when the standard error of the extrapolated value is
>> ## smaller than this
>> objective <- 1e-3
>>
>> err <- extrap[2]
>> my.color <- 3
>> while (err > objective){
>>
>> new.value <- min(y)/2 # I don't know how to choose this optimally
>>                       # (one idea is sketched after the loop)
>> y <- c(new.value, y)
>> new.result <- program.result(new.value, p0)
>> result <- c(new.result, result)
>> points(new.value, new.result, col= my.color)
>> my.data <- data.frame(y = y, value = result)
>> fm <- lm(value ~ poly(y, degree = 2, raw = TRUE), data = my.data,
>>          weights = 1/y^3)
>> lines(y, predict(fm, data.frame(y=y)), col = my.color)
>>
>> extrap <- summary(fm)$coefficients[1,] # intercept and error on it
>> err <- extrap[2]
>> print(err)
>> plotCI(0,extrap[1], 2 * err, col = 2, add=T)
>> my.color <- my.color + 1
>>
>> }
>> err
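>
> As for choosing new.value less arbitrarily, one naive idea, purely
> for the sake of discussion (the 1/y^3 cost model below is my own
> assumption, not a property of the real program): score candidate
> mesh sizes by how much adding them would reduce the predicted
> variance of the extrapolated intercept, per unit of computational
> cost, with candidates bounded below by the resource limit y.min:
>
>> ## the (unscaled) variance of the intercept of the current weighted
>> ## quadratic fit is [ (X' W X)^-1 ]_11; adding a candidate point yc
>> ## with weight 1/yc^3 updates X'WX by (1/yc^3) * xc xc'
>> choose.next.y <- function(y, y.min, n.candidates = 50){
>>   candidates <- seq(y.min, min(y), length = n.candidates)
>>   X <- cbind(1, y, y^2)         # design matrix of the quadratic fit
>>   W <- diag(1 / y^3)            # same weights as in the lm() call
>>   XtWX <- t(X) %*% W %*% X
>>   var.now <- solve(XtWX)[1, 1]  # current variance of the intercept
>>   score <- sapply(candidates, function(yc){
>>     xc <- c(1, yc, yc^2)
>>     var.new <- solve(XtWX + (1/yc^3) * xc %*% t(xc))[1, 1]
>>     (var.now - var.new) * yc^3  # variance reduction / cost (~ 1/yc^3)
>>   })
>>   candidates[which.max(score)]
>> }
>>
>> ## e.g., inside the loop, instead of min(y)/2 (y.min is made up):
>> ## new.value <- choose.next.y(y, y.min = 0.05)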
>
>
>
> Many thanks in advance for your comments,
>
> baptiste
>
>
> [1]: Yurkin et al., Convergence of the discrete dipole
> approximation. II. An extrapolation technique to increase the
> accuracy. J. Opt. Soc. Am. A / Vol. 23, No. 10 / October 2006
_____________________________
Baptiste Auguié
Physics Department
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK
Phone: +44 1392 264187
http://newton.ex.ac.uk/research/emag
http://projects.ex.ac.uk/atto