[R-SIG-Finance] [R-sig-finance] [R] Bloomberg Data Import to R
robert at sanctumfi.com
Mon Jun 30 12:34:42 CEST 2008
I've never run across this problem, but then I never throw hundreds of tickers into a single API request. I have heard many times that the API does seem to choke unpredictably when large calls are made, so I suspect it's a Bloomberg problem, despite what they tell you. Can you successfully perform your 900-ticker request using the Excel interface? If you can't, go back to Bloomberg. If you can, send me your R code (that chokes) and your .xls file (that works) and I'll look into it.
But Sean's suggestion is the most sensible approach: break your big call up into smaller calls. This will not only improve reliability on the RBloomberg side of things, but is also the approach the R community generally recommends for dealing with large data pulls.
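For what it's worth, here is a minimal sketch of that approach. The retry wrapper is plain R and not part of RBloomberg; the commented-out `blpGetHistoricalData(conn, ...)` call is only indicative of how it would be used, and the `flaky` function below just simulates an API that fails intermittently.

```r
# Generic retry wrapper: call fetch() up to max.tries times,
# treating an error or a NULL result as a failed attempt.
with.retry <- function(fetch, max.tries = 3) {
  for (i in seq_len(max.tries)) {
    res <- tryCatch(fetch(), error = function(e) NULL)
    if (!is.null(res)) return(res)
  }
  stop("all ", max.tries, " attempts failed")
}

# Sketch of the batched download (hypothetical usage; assumes an
# RBloomberg-style call and a list of ticker batches):
# results <- lapply(batches, function(b)
#   with.retry(function() blpGetHistoricalData(conn, b, "PX_LAST", start.date)))

# Self-contained demo: a fetch that fails twice, then succeeds.
attempts <- 0
flaky <- function() {
  attempts <<- attempts + 1
  if (attempts < 3) stop("simulated API hiccup")
  "data"
}
with.retry(flaky)   # succeeds on the third attempt
```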
From: r-sig-finance-bounces at stat.math.ethz.ch [mailto:r-sig-finance-bounces at stat.math.ethz.ch] On Behalf Of Sean Carmody
Sent: 28 June 2008 22:56
To: marcin.kopaczynski at soundinvest.net
Cc: r-sig-finance at stat.math.ethz.ch
Subject: Re: [R-SIG-Finance] [R-sig-finance] [R] Bloomberg Data Import to R
While I have not seen this problem with the RBloomberg package, I have certainly seen something similar with large calls to the Bloomberg API from Excel. For some reason, the API seemed to choke if it received "too many"
data requests. I never had any luck getting clarification from Bloomberg as to the limits of the API, and only got around the problem by breaking my work into smaller requests. In your case, you might need to loop through, say, 100 stocks at a time. In any event, it is certainly worth testing whether your code works with a smaller number than 900.
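Concretely, splitting a 900-ticker vector into chunks of 100 is a one-liner in base R (the ticker names here are placeholders):

```r
tickers <- paste0("TICKER", 1:900)   # placeholder ticker list
# Group positions 1-100, 101-200, ... into separate list elements.
chunks  <- split(tickers, ceiling(seq_along(tickers) / 100))
length(chunks)   # 9 chunks of 100 tickers each
# Each chunk can then go out as its own API request, e.g. (hypothetical):
# for (ch in chunks) dat <- blpGetHistoricalData(conn, ch, "PX_LAST", start.date)
```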
On Fri, Feb 1, 2008 at 4:19 PM, Marcin Kopaczynski < marcin.kopaczynski at soundinvest.net> wrote:
> Hi Robert,
> I've got a problem with the Bloomberg data import. Usually everything works
> well, but sometimes it seems that nothing is downloaded from Bloomberg at all.
> Example: I wanted to download px_last for 900 US stocks beginning with
> 19900101 (passing it the appropriate chron date format). The download
> starts, the downloaded data is stored and written out to a file. New
> data is concatenated with the old data (column by column) until
> all 900 stocks are downloaded. During this process the download breaks
> down, and the strange thing is that it happens only SOMETIMES, and not
> at the same place (i.e. not always at stock number 390, but at number 401
> one time and 789 another time). Sometimes it does not happen at all.
> Also, when I resume the download, i.e. tell it to download the data
> from where it stopped, it then works.
> So I checked the functions in your package, and the problem really
> seems to be with the download itself. What happens is: in the function
> <blpGetHistoricalData> the object <lst> is NULL. This makes the
> command "attr(lst, "num.of.date.cols") <- 1" throw an error,
> because R cannot assign an attribute to a NULL object:
> "attr(lst, "num.of.date.cols") <- 1 : attempt to set an attribute on NULL".
> The object <lst> then becomes a data.frame of dimension
> [0:number.of.tickers], which obviously is not what one would expect to get.
> My question is: have you ever experienced such a problem? The problem
> seems to become more likely the more data one downloads from Bloomberg.
> Here is some info on my environment:
> R 2.5.1
> Windows XP on the Bloomberg terminal
> chron 2.3.16
> RBloomberg 0.1-10
> RDCOMClient 0.91-0
> zoo 1.4-2
> I've been talking to Bloomberg about this problem as well. They
> assured me that it is not about the amount of data I was downloading,
> so there must be something else.
> Thanks in advance for your help,
> R-SIG-Finance at stat.math.ethz.ch mailing list
> -- Subscriber-posting only.
> -- If you want to post, subscribe first.