[R-SIG-Finance] Processing time of backtests on a single computer

Jersey Fanatic jerseyfanatic1 at gmail.com
Sat Apr 9 10:57:00 CEST 2016


I will go with the cheap route and report back on the performance increase
if I remember to do so. Thanks for the recommendation.

2016-04-08 17:32 GMT+03:00 Frank <frankm60606 at gmail.com>:

> I mean buy the maximum amount of memory your system allows, install it, and
> see if that helps. Memory is cheap. On the expensive side, most quad-core
> i7 CPU sockets can accept a six-core i7 CPU upgrade. Those were $1,000 the
> last time I checked.
>
>
>
> Frank
>
> Chicago, IL
>
>
> ------------------------------
>
> From: Jersey Fanatic [mailto:jerseyfanatic1 at gmail.com]
> Sent: Friday, April 08, 2016 9:28 AM
> To: Frank
> Cc: Erol Biceroglu; r-sig-finance; Brian G. Peterson
>
> Subject: Re: [R-SIG-Finance] Processing time of backtests on a single
> computer
>
>
>
> All the info I could find was about setting the swap file size, not about
> making Windows max out the physical memory or about disabling paging until
> around 90% memory usage. But I will research further to see how it's done.
> Thanks.
>
>
>
> 2016-04-08 16:22 GMT+03:00 Frank <frankm60606 at gmail.com>:
>
> Windows can start using the swap file long before 80% memory utilization.
> If speed is of the essence, you might want to max out the memory on your
> machine. If that doesn't help, return the memory.
>
> Frank
> Chicago, IL
>
> -----Original Message-----
> From: R-SIG-Finance [mailto:r-sig-finance-bounces at r-project.org] On Behalf
> Of Jersey Fanatic
> Sent: Friday, April 08, 2016 12:47 AM
> To: Erol Biceroglu
> Cc: r-sig-finance; Brian G. Peterson
> Subject: Re: [R-SIG-Finance] Processing time of backtests on a single
> computer
>
> RAM usage is usually at 80% or so, but the CPU is maxing out. I guess that
> means the RAM is sufficient?
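>
> (For what it's worth, a quick check from inside R itself: gc() is base R,
> and its "max used" column shows the session's peak memory since the last
> reset, which can be compared against the machine's physical RAM.)
>
> gc(reset = TRUE)   # reset the max-used counters before the run
> # ... run the backtest here ...
> gc()               # "max used" (Mb) = peak memory R needed during the run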
>
> 2016-04-07 23:23 GMT+03:00 Erol Biceroglu <erol.biceroglu at alumni.utoronto.ca>:
>
> > Hello,
> >
> > The only thing I can think of that might make it take longer, and that
> > has affected me in the past, is RAM, especially in Windows.  Is it maxing
> > out at all?
> >
> >
> > On Thursday, April 7, 2016, Jersey Fanatic <jerseyfanatic1 at gmail.com>
> > wrote:
> >
> >> So I tried to see what effect the trailing stop and stop loss (SL) rules
> >> have on the processing time. For the same dataset, the SL-only run took 5
> >> mins, the trailing-SL-only run took 10 mins, and the run with both the
> >> trailing SL and the normal SL took 13 mins. I guess it is processing as
> >> fast as it should be. Thanks for all the help.
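> >>
> >> (Roughly what I did, as a sketch -- the strategy/portfolio object names and
> >> the rule label "trailingStopLong" below are placeholders, not the exact
> >> names from my script:)
> >>
> >> library(quantstrat)
> >> # switch the trailing-stop chain rule off without deleting it
> >> enable.rule(strategy.st, type = "chain", label = "trailingStopLong",
> >>             enabled = FALSE, store = TRUE)
> >> # time a single backtest run with the remaining rules
> >> system.time(
> >>   applyStrategy(strategy = strategy.st, portfolios = portfolio.st)
> >> )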
> >>
> >> 2016-04-07 20:45 GMT+03:00 Joshua Ulrich <josh.m.ulrich at gmail.com>:
> >>
> >> > On Thu, Apr 7, 2016 at 11:30 AM, Jersey Fanatic
> >> > <jerseyfanatic1 at gmail.com> wrote:
> >> > > Thanks for the insight. I did not know variations in processing time
> >> > > of 20 minutes or so could happen between different parameter
> >> > > combinations.
> >> > >
> >> > > I ran the strategy on the same dataset with random parameters, without
> >> > > the trailing SL, on a single core, and it took 5.15 minutes. The number
> >> > > of transactions was 7800. The amount of processing time seems too high
> >> > > compared to yours, though: 5-second data over 3 years vs M5 data over
> >> > > just 1 year; 20 mins vs 5 mins.
> >> > >
> >> > Again, the number of observations is not a good predictor of the
> >> > amount of time it will take.  You have 7800 transactions.  My shortest
> >> > (longest) run had 25 (1500) transactions.
> >> >
> >> > Seems reasonable to me that a strategy producing nearly 8000
> >> > transactions takes about 5 minutes; that's about 25 transactions a
> >> > second.
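> >> >
> >> > A back-of-the-envelope check along these lines would confirm it (the
> >> > portfolio and symbol names are placeholders; getTxns() is from blotter):
> >> >
> >> > txns <- getTxns(Portfolio = "myPortfolio", Symbol = "EURUSD")
> >> > nrow(txns) / (5.15 * 60)   # ~7800 txns / ~309 secs, about 25 per second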
> >> >
> >> > > 2016-04-07 16:32 GMT+03:00 Joshua Ulrich <josh.m.ulrich at gmail.com>:
> >> > >>
> >> > >> On Thu, Apr 7, 2016 at 8:10 AM, Jersey Fanatic
> >> > >> <jerseyfanatic1 at gmail.com> wrote:
> >> > >> > 10 years of daily data makes about 2500 data points. So
> >> > >> > extrapolating from that to 58000 data points (assuming the relation
> >> > >> > is linear), it should take
> >> > >>
> >> > >> Number of data points is not necessarily a good estimator for run
> >> > >> time even if the strategies are the same.  What matters more is the
> >> > >> number of timestamps/observations that must be evaluated.  That
> >> > >> includes signals, moving orders, processing fills, etc.
> >> > >>
> >> > >> > about 12.2 secs for a single run with my dataset. For 144 runs
> >> > >> > (total number of parameter combinations), it should take about 30
> >> > >> > mins. However, I
> >> > >>
> >> > >> Again, the relationship is not linear.  Different parameter
> >> > >> combinations will produce differing amounts of signals, order
> >> > >> movement, fills, etc.
> >> > >>
> >> > >> For example, I ran parameter optimization on ~3 years of 5-second
> >> > >> data.  Some parameter combinations took 1-2 minutes, some took >20
> >> > >> minutes.
> >> > >>
> >> > >> > ran apply.paramset() this morning (without trailing stops) and it
> >> > >> > took 4.5 hours. And the code is the one that I sent earlier, with
> >> > >> > the trailing stop rules enabled=FALSE'd.
> >> > >> >
> >> > >> > Did you run the macd demo code on a single core? If you did some
> >> > >> > parallel processing, did you use the doSNOW package or something
> >> > >> > else? Maybe that is the reason; I am not sure.
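> >> > >> >
> >> > >> > (For reference, this is the kind of parallel setup I mean -- a minimal
> >> > >> > sketch assuming 4 cores, with placeholder strategy, portfolio, account
> >> > >> > and paramset names; apply.paramset() uses foreach, so it should pick
> >> > >> > up whatever backend is registered:)
> >> > >> >
> >> > >> > library(doSNOW)
> >> > >> > cl <- makeCluster(4)   # 4 worker R processes
> >> > >> > registerDoSNOW(cl)
> >> > >> > results <- apply.paramset(strategy.st,
> >> > >> >                           paramset.label = "MyParamset",
> >> > >> >                           portfolio.st = portfolio.st,
> >> > >> >                           account.st = account.st,
> >> > >> >                           verbose = FALSE)
> >> > >> > stopCluster(cl)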
> >> > >> >
> >> > >> > Would deleting trailing stop rules speed things up, instead of
> >> > >> > defining them but setting enabled=FALSE?
> >> > >> >
> >> > >> > 2016-04-07 0:34 GMT+03:00 Brian G. Peterson <brian at braverock.com>:
> >> > >> >
> >> > >> >> On Wed, 2016-04-06 at 23:58 +0300, Jersey Fanatic wrote:
> >> > >> >> > I will try running the same code without trailing stops and see
> >> > >> >> > what effect it has on the processing time. I will report back as
> >> > >> >> > soon as it is finished.
> >> > >> >>
> >> > >> >> Running the macd demo code over 10 years of daily data on my
> >> > >> >> machine (no trailing stops) takes 0.5262365 secs for a single run.
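> >> > >> >>
> >> > >> >> (A comparable single-run timing can be taken like this; the macd demo
> >> > >> >> ships with quantstrat, and demo() simply sources and runs it:)
> >> > >> >>
> >> > >> >> library(quantstrat)
> >> > >> >> system.time(demo("macd", package = "quantstrat", ask = FALSE))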
> >> > >> >>
> >> > >> >>
> >> > >> >>
> >> > >> >
> >> > >>
> >> > >>
> >> > >>
> >> > >> --
> >> > >> Joshua Ulrich  |  about.me/joshuaulrich
> >> > >> FOSS Trading  |  www.fosstrading.com
> >> > >> R/Finance 2016 | www.rinfinance.com
> >> > >
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Joshua Ulrich  |  about.me/joshuaulrich
> >> > FOSS Trading  |  www.fosstrading.com
> >> > R/Finance 2016 | www.rinfinance.com
> >> >
> >>
> >>
> >
> >
> > --
> >
> > Erol Biceroglu
> >
> >
>
> > erol.biceroglu at alumni.utoronto.ca
> > 416-275-7970
>
> >
>
>
>
>
