[Bioc-devel] Docker granularity: containers for individual R packages, running on a normal R installation?

Bastian Schiffthaler bastian at bioinformatics.upsc.se
Wed Apr 15 11:21:08 CEST 2015


> maybe starting the docker container in such a way that you have access 
> to your non-docker file system.

One way to achieve that is to mount a directory from your host system 
inside the container:
    # Create a subdirectory in /home/rstudio and make it read/write for all
    # (permissions in Docker's filesystem can be a bit tricky)
        docker run --name="rstudio-local-data" bioconductor/release_sequencing \
            bash -c 'mkdir /home/rstudio/data && chmod o+rw /home/rstudio/data'

    # Commit the changes to create a new image from the now modified
    # bioconductor/release_sequencing
        docker commit rstudio-local-data rstudio-local-data

    # Mount my current working directory inside the container and start
    # rstudio-server
        docker run -p 8787:8787 -v $(pwd):/home/rstudio/data \
            rstudio-local-data supervisord

From there you can open a browser and navigate to 
http://localhost:8787, as Martin said.
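
If you do not need the pre-created, world-writable mount point, a single 
command against the stock image may be enough (an untested sketch: docker 
creates the bind-mount target automatically, but it is then owned by root, 
which is exactly what the chmod step above works around):

    # one-step variant: mount the current directory and start rstudio-server
        docker run -p 8787:8787 -v $(pwd):/home/rstudio/data \
            bioconductor/release_sequencing supervisord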

/Bastian
> Martin Morgan <mtmorgan at fredhutch.org>
> 15 Apr 2015 02:19
> On 04/14/2015 01:17 PM, Wolfgang Huber wrote:
>> Dear Sean
>> I understand the second point. As for .Call not being the right 
>> paradigm, maybe some other method invocation mechanism would work? In 
>> essence, my question is whether someone has already figured out 
>> whether new virtualisation tools can help avoid some of the 
>> traditional Makevars/configure pain.
>
> The part of your question that challenged me was to 'run under a 
> “normal”, system-installed R', for which I don't have any meaningful 
> help to offer. Probably the following is not what you were looking for...
>
> There was no explicit mention of this in Sean's answer, so I'll point to
>
>   http://bioconductor.org/help/docker/
>
> A more typical use is to run R inside the docker container, perhaps 
> starting the container in such a way that you have access to your 
> non-docker file system.
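>
> A sketch of that (assuming you want your current working directory 
> visible inside the container; the -v flag bind-mounts it, here at 
> /data, which is just an arbitrary mount point):
>
>   docker run -ti -v $(pwd):/data bioconductor/devel_sequencing R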
>
> I might run the devel version of R / Bioc (the devel version was a bit 
> stale recently; I'm not sure whether it has been updated) with
>
>   docker run -ti bioconductor/devel_sequencing R
>
> (the first time this will be slow while the image downloads, but 
> subsequent runs start instantaneously). The image comes with all the 
> usual tools (e.g., compilers) and all of the packages tagged with the 
> 'Sequencing' biocViews term; most additional packages can be installed 
> without problem.
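>
> For example, one might install an extra package non-interactively like 
> this (a sketch; 'SomePackage' is a placeholder for whatever you need, 
> and anything installed this way lives only in that container unless you 
> commit it, as below):
>
>   # install an additional Bioconductor package inside the container
>   docker run bioconductor/devel_sequencing \
>       R -e 'source("http://bioconductor.org/biocLite.R"); biocLite("SomePackage")'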
>
> If there were complex dependencies, then one might start with one of 
> the simpler containers, add the necessary dependencies, save the 
> image, and distribute it, as outlined at
>
>   http://bioconductor.org/help/docker/#modifying-the-images
>
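> In shell terms that workflow might look roughly like this (an untested 
> sketch; 'libgsl0-dev' and 'yourname/bioc-with-deps' are placeholders 
> for your actual dependency and image name):
>
>   # add a system library to one of the simpler containers
>   docker run --name=with-deps bioconductor/devel_base \
>       bash -c 'apt-get update && apt-get install -y libgsl0-dev'
>
>   # save the modified container as a new image
>   docker commit with-deps yourname/bioc-with-deps
>
>   # distribute it, e.g. via Docker Hub (requires a docker login)
>   docker push yourname/bioc-with-deps
>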
> I bet that many of the common complexities are already on the image. A 
> fun alternative to running R is to run RStudio Server on the image, 
> and connect to it via your browser
>
>   docker run -p 8787:8787 bioconductor/devel_base
>
> (point your browser to http://localhost:8787 and log in with 
> username/password rstudio/rstudio).
>
> I guess this also suggests a way to interact with some complicated 
> docker-based package from within R on another computer, serving the 
> package up as some kind of web service.
>
> Martin
>
>> Wolfgang
>>
>>> On Apr 14, 2015, at 13:52 GMT+2, Sean Davis <seandavi at gmail.com> wrote:
>>>
>>> Hi, Wolfgang.
>>>
>>> One way to think of docker is as a very efficient, self-contained 
>>> virtual machine.  The operative term is "self-contained".  The 
>>> docker containers resemble real machines from the inside and the 
>>> outside.  These machines can expose ports and can mount file 
>>> systems, but something like .Call would need to use a network 
>>> protocol, basically.  So, I think the direct answer to your question 
>>> is "no".
>>>
>>> That said, there is no reason that a docker container containing all 
>>> complex system dependencies for the Bioc build system, for example, 
>>> couldn't be created with a minimal R installation.  Such a system 
>>> could then become the basis for further installations, perhaps even 
>>> package-specific ones (though those would need to include all R 
>>> package dependencies, also).  R would need to run INSIDE the 
>>> container, though, to get the benefits of the installed complex 
>>> dependencies.
>>>
>>> I imagine Dan or others might have other thoughts to contribute.
>>>
>>> Sean
>>>
>>>
>>> On Tue, Apr 14, 2015 at 7:23 AM, Wolfgang Huber <whuber at embl.de> wrote:
>>> Is it possible to ship individual R packages (that, e.g., contain 
>>> complex, tricky-to-compile C/C++ libraries or other system 
>>> resources) as Docker containers (or something analogous) so that 
>>> they would still run under a “normal”, system-installed R? Or, is it 
>>> possible to provide a Docker container that contains such complex 
>>> system dependencies in such a way that a normal R package can access 
>>> them, e.g. via .Call?
>>>
>>> (This question exposes my significant ignorance on the topic, but I’m 
>>> still asking it for the potential benefit of a potential answer.)
>>>
>>> Wolfgang
>>>
>
>