CSTools aims at post-processing seasonal climate forecasts with state-of-the-art methods. However, some doubts and issues may arise the first time the package is used: do I need a specific R version? How much RAM memory do I need? Where can I find the datasets? Should I format the datasets? etc. Therefore, some recommendations and key points are gathered here in order to facilitate the use of CSTools.
The first question that may come to a new user concerns the computer requirements to run CSTools. Here is a list of the most frequent needs:
On the other hand, the computational power of a computer can be a limitation, but this depends on the size of the data that users need for their analysis. For instance, they can estimate the memory they will require by multiplying the following values:
For example, if they want to use the hindcast of 3 different seasonal simulations with 9 members, at daily resolution, for a regional study in a region of 40000 km2 with a resolution of 5 km:
200 km x 200 km / (5 km x 5 km) x (3 + 1) models x 214 days x 30 hindcast years x 9 members x 2 start dates x 8 bytes ~ 6 GB
(*) Furthermore, some of the functions need to duplicate or triplicate (or even more) the inputs to perform their analysis. Therefore, between 12 and 18 GB of RAM memory would be necessary in this example.
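The back-of-the-envelope estimate above can be reproduced in base R; the variable names here are illustrative, not part of CSTools:

```r
# Rough RAM estimate for the example above (illustrative sketch).
grid_points <- (200 * 200) / (5 * 5)   # region of 40000 km2 at 5 km resolution
n_datasets  <- 3 + 1                   # 3 seasonal simulations + 1 reference
n_days      <- 214
n_years     <- 30                      # hindcast years
n_members   <- 9
n_sdates    <- 2                       # start dates
bytes_per_value <- 8                   # double-precision numeric in R

total_bytes <- grid_points * n_datasets * n_days * n_years *
               n_members * n_sdates * bytes_per_value
total_gb <- total_bytes / 1024^3
round(total_gb, 1)  # ~5.5 GiB, i.e. roughly 6 GB before any duplication
```

Duplicating or triplicating this figure, as some functions require, gives the 12 to 18 GB mentioned above.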
All CSTools functions have been developed following the same guidelines. The main point of interest for users is that each function is built on several nested levels, and it is possible to distinguish at least three:
CST_FunctionName(): this function works on 's2dv_cube' objects and is exposed to the users.
FunctionName(): this function works on N-dimensional arrays with named dimensions and is exposed to the users.
.functionname(): this function works on the minimum required elements and is not exposed to the user.
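The three levels can be sketched with a hypothetical function (the names below are illustrative, not actual CSTools functions): the atomic level works on the minimum input, the middle level applies it over a named-dimension array, and the CST_ level unwraps and rewraps the 's2dv_cube'-like object:

```r
# Lowest level: works on the minimum required elements (a numeric vector).
.mean_member <- function(x) {
  mean(x)
}

# Middle level: works on an N-dimensional array with named dimensions.
MeanMember <- function(data, member_dim = "member") {
  margins <- which(names(dim(data)) != member_dim)
  apply(data, margins, .mean_member)
}

# Top level: works on the $data element of an 's2dv_cube'-like list.
CST_MeanMember <- function(cube, member_dim = "member") {
  cube$data <- MeanMember(cube$data, member_dim)
  cube
}

arr <- array(1:24, dim = c(member = 2, lat = 3, lon = 4))
res <- CST_MeanMember(list(data = arr))
dim(res$data)  # member dimension has been averaged out
```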
A reasonable doubt that a new user may have at this point is: what is an 's2dv_cube' object? 's2dv_cube' is a class of object storing the data and metadata in several elements:
It is possible to visualize an example of the structure of an 's2dv_cube' object by opening an R session and running:
library(CSTools)
class(lonlat_temp$exp) # check the class of the object lonlat_temp$exp
names(lonlat_temp$exp) # show the names of the elements in the object lonlat_temp$exp
str(lonlat_temp$exp) # show the full structure of the object lonlat_temp$exp
The main objective of CSTools is to share state-of-the-art post-processing methods with the scientific community. However, in order to facilitate its use, the CSTools package includes a function, CST_Load, to read the files and make the data available in 's2dv_cube' format in the R session memory to conduct the analysis. Some benefits of using this function are:
If you plan to use CST_Load, we have developed guidelines to download and format the data. See CDS_Seasonal_Downloader.
There are alternatives to the CST_Load function; for instance, the user can:
1) use another tool to read the data from files (e.g.: the ncdf4, easyNCDF or startR packages) and then convert it to the class 's2dv_cube' with the s2dv_cube() function, or
2) if they keep facing problems converting the data to that class, they can simply skip it and work with the functions without the 'CST' prefix. In this case, they will work with the basic class 'array'.
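A minimal sketch of option 1), assuming the data have already been read into an array by another tool; the element names ('data', 'coords') follow the 's2dv_cube' structure shown earlier, while the real s2dv_cube() constructor in CSTools performs additional checks and fills in further metadata:

```r
# Data as read by another tool (here simulated with random values);
# dimension names are required by the CSTools functions.
data <- array(rnorm(2 * 3 * 4), dim = c(member = 2, lat = 3, lon = 4))

# Hand-built 's2dv_cube'-like object: illustrative sketch only.
cube <- list(
  data = data,
  coords = list(lat = c(40, 45, 50), lon = c(0, 5, 10, 15))
)
class(cube) <- "s2dv_cube"
```

With option 2), the `data` array above can be passed directly to the functions without the 'CST' prefix.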
Independently of the tool used to read the data from your local storage into your R session, this step can be automated by giving a common structure and format to all datasets in your local storage. Here is the list of minimum requirements that CST_Save follows to be able to store an experiment that could later be loaded with CST_Load:
library(CSTools)
library(zeallot)
path <- "/esarchive/exp/meteofrance/system6c3s/$STORE_FREQ$_mean/$VAR_NAME$_f6h/$VAR_NAME$_$START_DATE$.nc"
ini <- 1993
fin <- 2012
month <- '05'
start <- as.Date(paste(ini, month, "01", sep = ""), "%Y%m%d")
end <- as.Date(paste(fin, month, "01", sep = ""), "%Y%m%d")
dateseq <- format(seq(start, end, by = "year"), "%Y%m%d")
c(exp, obs) %<-% CST_Load(var = 'sfcWind',
                          exp = list(list(name = 'meteofrance/system6c3s', path = path)),
                          obs = 'erainterim',
                          sdates = dateseq, leadtimemin = 2, leadtimemax = 4,
                          lonmin = -19, lonmax = 60.5, latmin = 0, latmax = 79.5,
                          storefreq = "daily", sampleperiod = 1, nmember = 9,
                          output = "lonlat", method = "bilinear", grid = "r360x180")
Extra lines to see the size of the objects and visualize the data:
library(pryr)
object_size(exp) # 27.7 MB
object_size(obs) # 3.09 MB
library(s2dv)
PlotEquiMap(exp$data[1, 1, 1, 1, , ], lon = exp$coords$lon, lat = exp$coords$lat,
            filled.continents = FALSE, fileout = "Meteofrance_r360x180.png")
Depending on the user's needs, limitations can be found when trying to process big datasets. This may depend on the number of ensembles, and on the resolution and region that the user wants to process. CSTools has been developed to be compatible with the startR package, which covers these aims:
There is a video tutorial about the startR package and the tutorial material.
The functions in CSTools (with or without the CST_ prefix) include a parameter called 'ncores' that automatically parallelizes the code across multiple cores when the parameter is set to a value greater than one.
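To give an intuition of what this parallelization amounts to, the sketch below spreads a simple per-chunk computation over two cores with the base 'parallel' package; this is illustrative only and does not reproduce the CSTools internals:

```r
# Illustrative sketch: parallelize a computation over 2 cores,
# similar in spirit to what the 'ncores' parameter enables.
library(parallel)

ncores <- 2
cl <- makeCluster(ncores)

# Split the work into one chunk per core and process chunks in parallel.
chunks <- split(1:100, rep(1:ncores, length.out = 100))
res <- parLapply(cl, chunks, function(idx) sum(idx))

stopCluster(cl)
sum(unlist(res))  # 5050, the same result as the serial computation
```

Setting 'ncores' in a CSTools call delegates this kind of chunking and dispatching to the package, so the user only chooses the number of cores.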