[R-sig-Geo] Javascript All The Things!

Barry Rowlingson b.rowlingson at lancaster.ac.uk
Fri May 29 12:13:31 CEST 2015


[Apologies for the memey subject: http://knowyourmeme.com/memes/x-all-the-y]

Developers can't fail to have noticed the rise of Javascript, both in
the browser and outside of it. But it has now crept into R, and into
R-spatial packages.

Javascript, like most languages, has its right to exist and its place.
I'm just not sure that using it to call functionality that already
exists perfectly well in R is that place.

 For example, there is a Javascript library called "turf.js" that does
GIS operations like polygon overlay, buffering, point-in-polygon etc.
That's great, because now my web mapping interface can do those in the
browser. The user can draw a polygon, and the browser can select the data
and do something with it without a round-trip to, and load on, the
server. Brilliant. That same Javascript can also run perfectly well
outside a browser via a JS interpreter such as node or V8, so it's
possible to use turf.js to write Javascript scripts that do GIS
operations. Again, brilliant. If you are a JS programmer developing
systems in JS that need this, you've now got it.

 Now there is a package (V8) to run Javascript in R. Great, if
there's some JS functionality you want to call. But why should you use
it to access functionality you've already got in a perfectly good R
package? The turf.js code has been wrapped in the "lawn" package, so
you can call turf's GIS functions from R. Some people on Twitter seem
to think this is novel, and that suddenly you can do "GIS in R!". They
seem not to have noticed we've been doing that for the past umpteen
years with things like gpclib, and lately rgeos.
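
 To make that concrete, here's a rough sketch of the same buffer done both
ways. The lawn function names and arguments (lawn_point(), lawn_buffer(),
dist, units) are as I understand them from the package docs, so treat it as
illustrative rather than gospel:

library(lawn)    # turf.js, run through the V8 package
library(rgeos)   # GEOS, via compiled C/C++ code
library(sp)

# turf.js route: geoJSON-style features in, geoJSON-style features out
pt_json  <- lawn_point(c(-2.78, 54.01))
buf_json <- lawn_buffer(pt_json, dist = 1, units = "kilometers")

# rgeos route: sp objects in, sp objects out
pt_sp  <- SpatialPoints(cbind(-2.78, 54.01))
buf_sp <- gBuffer(pt_sp, width = 0.01)   # width is in coordinate units (degrees here)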

 Why use "lawn" instead of "rgeos"? Twitter isn't a good place for
such discussions, and all I've got out of people there are things like
"but if you have a geoJSON workflow". I'm not sure that makes sense.
geoJSON, if you don't know, is the lingua franca of spatial data in JS
(and is supported by GDAL/OGR). But if you are going to read your data
into R at some point, it's going to get converted out of geoJSON into
native R formats. There are overheads both ways. I've not benchmarked
any of this yet, but I have a hunch that data conversion and interfacing
to JS will be slow compared to native rgeos calculations. The lawn
package web page seems to concur.
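
 If anyone wants to put numbers on that hunch, a skeleton like this (using
the same toy inputs and assumed lawn functions as the sketch above) would do
it; I haven't run it, so I'm happy to be proved wrong:

library(microbenchmark)
library(lawn)
library(rgeos)
library(sp)

pt_json <- lawn_point(c(-2.78, 54.01))          # geoJSON-style input for turf.js
pt_sp   <- SpatialPoints(cbind(-2.78, 54.01))   # sp input for rgeos

microbenchmark(
  turf = lawn_buffer(pt_json, dist = 1, units = "kilometers"),
  geos = gBuffer(pt_sp, width = 0.01),
  times = 100
)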

 Another example of "JS All The Things" manifested itself today. The rgbif
package used rgeos to read WKT data, but a recent change from the
author replaced that with some JS code. So now, instead of calling
rgeos::readWKT, this happens:

read_wkt <- function(wkt) {
  terr$eval(sprintf("var out = terrwktparse.parse('%s');", wkt))
  terr$get("out")
}

where `terr` is a handle to the JS code. Again, I've not benchmarked
it, but there appears to be a lot of string conversion and passing of
data from one world to another. Plus there's the overhead of loading
in a completely new language interpreter via a dependency on the V8
package, compared to loading in some clean C code.
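
 For comparison, here's roughly what each route involves (the parser file
name below is purely illustrative; I haven't looked at how rgbif actually
loads it):

library(V8)
library(rgeos)

# JS route: start up a whole Javascript engine, then load the parser into it
terr <- new_context()                  # v8() in newer versions of the V8 package
# terr$source("terrwktparse.js")       # illustrative file name only

# rgeos route: one call straight into the compiled GEOS code
poly <- readWKT("POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))")
class(poly)                            # already a SpatialPolygons object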

 So I'm mind-boggled. Why is this happening? Is it just that JS is
trendy? Is it that there is a surfeit of JS programmers? Is it
speculative "because I can" development? Should those of us in R-spatial
look at this as evolution and consider the future of the sp classes?

 Discuss. Maybe it's something those of us at the Geostat Summer School
in August can have a good chat about, although I don't think anyone
from the JS side of things will be there... Maybe in 2016....

Barry


