[R-meta] Post-hoc weighted analysis based on number of observations

Cesar Terrer Moreno cesar.terrer at me.com
Mon Jan 22 18:51:33 CET 2018


I have a gridded dataset representing the standard error (SE) of an effect. This SE was calculated through a meta-analysis and a subsequent predictive model applied over a grid:

# Meta-analytic model; mods expands to MAP, MAT, CO2dif and the MAT:CO2dif interaction
ECMmeta <- rma(es, var, data=ecm.df, control=list(stepadj=.5),
               mods= ~ 1 + MAP + MAT*CO2dif, knha=TRUE)
options(na.action = "na.pass")
# Predict on the grid; newmods columns must match the moderator order above
ECMpred <- predict(ECMmeta,
                   newmods = cbind(s.df$precipitation, s.df$temperature,
                                   CO2inc, s.df$temperature*CO2inc))
# Rasterize the predicted standard errors (x/y are the grid coordinates)
ECMrelSE <- rasterFromXYZ(ECMpred[,c("x", "y", "se")],
                          crs="+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0")


I would like to add a further level of uncertainty to the SE based on the number of measurements (observations) per ecosystem type in the dataset. The idea is that ecosystems poorly represented by experiments should have a higher SE than ecosystems with plenty of measurements.

I thought I could, for example, calculate an ecosystem-based weight as:

weight = n/sum(n)

That is, the number of observations in a particular ecosystem divided by the total number of observations.
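As a minimal sketch of what I mean (the ecosystem names and counts below are made up for illustration, not from my dataset):

```r
# Hypothetical observation counts per ecosystem (illustrative values only)
n <- c(boreal = 5, temperate = 40, tropical = 15)

# Ecosystem weight: share of the total observations
weight <- n / sum(n)
round(weight, 3)
#>    boreal temperate  tropical
#>     0.083     0.667     0.250
```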

The next step would be to apply a weighting approach to each pixel. The first approach I've come up with is to simply multiply the SE by the inverse of the weight:

SEw=SE*(1/weight)

But the values are extremely high.
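To make the scale of the problem concrete, here is a toy calculation with the same hypothetical counts as above and an arbitrary SE of 0.1 (none of these numbers come from my actual data):

```r
# Hypothetical counts and a flat SE, for illustration only
n <- c(boreal = 5, temperate = 40, tropical = 15)
weight <- n / sum(n)
SE <- 0.1

# Multiplying by 1/weight inflates the sparse ecosystem's SE 12-fold
SEw <- SE * (1 / weight)
round(SEw, 3)
#>    boreal temperate  tropical
#>     1.200     0.150     0.400
```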

An approach like this is more of a post-hoc patch. I am sure something similar can be done within the meta-analysis itself from the beginning. Alternatively, a better post-hoc approach, or ideas to investigate further, would be welcome. Is there any recommendation or basic approach commonly used to add further uncertainty to areas with low representativeness?

Thanks


More information about the R-sig-meta-analysis mailing list