From @@chmid1 @ending from @teven@@edu Tue Jan 8 17:09:09 2019
From: @@chmid1 @ending from @teven@@edu (Alec Schmidt)
Date: Tue, 8 Jan 2019 16:09:09 +0000
Subject: [R-SIG-Finance] corrections vs drawdowns
Message-ID: 

I tried to use the function findDrawdowns() to compile NASDAQ (^IXIC) corrections. For the sample starting on 2007-01-01, I get the following start-to-trough periods with drawdowns greater than 10%:

08/30/2018 - 12/24/2018 (-23.64%) [80 Days]
07/21/2015 - 02/11/2016 (-18.24%) [143 Days]
09/17/2012 - 11/15/2012 (-10.90%) [42 Days]
03/27/2012 - 06/01/2012 (-12.01%) [47 Days]
05/02/2011 - 10/03/2011 (-18.71%) [108 Days]
11/01/2007 - 03/09/2009 (-55.63%) [339 Days]

However, if the sample starts on 2000-06-01, I get

08/30/2018 - 12/24/2018 (-23.64%) [80 Days]
07/21/2015 - 02/11/2016 (-18.24%) [143 Days]
07/18/2000 - 10/09/2002 (-73.94%) [559 Days]

i.e. no bear market of 2008. This is because ^IXIC had not yet recovered by 2007 from its fall from the top in 2000. This implies that various reports on market corrections do not use the maximum drawdown. Is there a consensus (and possibly R scripts) that addresses this problem?

Thanks! Alec

[[alternative HTML version deleted]]

From bri@n @ending from br@verock@com Tue Jan 8 17:17:50 2019
From: bri@n @ending from br@verock@com (Brian G. Peterson)
Date: Tue, 08 Jan 2019 10:17:50 -0600
Subject: [R-SIG-Finance] corrections vs drawdowns
In-Reply-To: 
References: 
Message-ID: <1546964270.14204.83.camel@braverock.com>

Alec,

I suspect that you may wish to start by setting geometric=FALSE in your call to findDrawdowns.

Corrections are usually defined as a peak-to-trough difference in *price*, as a percentage of the peak price. So I think you do not want to compound the *returns* when calculating your drawdowns.

Regards,

Brian

-- 
Brian G.
Peterson
http://braverock.com/brian/
Ph: 773-459-4973
IM: bgpbraverock

On Tue, 2019-01-08 at 16:09 +0000, Alec Schmidt wrote:
> I tried to use the function findDrawdowns() to compile NASDAQ (^IXIC)
> corrections. [...]
>
> Thanks! Alec
>
> [[alternative HTML version deleted]]
>
> _______________________________________________
> R-SIG-Finance at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-finance
> -- Subscriber-posting only. If you want to post, subscribe first.
> -- Also note that this is not the r-help list where general R
> questions should go.
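As a side note for readers of the archive: the difference Brian describes between compounded and arithmetic drawdowns can be illustrated in a few lines of base R. This is a minimal sketch with synthetic returns, not the PerformanceAnalytics implementation:

```r
## Synthetic daily returns with a deep loss: the two drawdown
## definitions disagree on the depth of the trough.
r <- c(0.01, -0.30, -0.25, 0.02, 0.05)

## Geometric (compounded): drawdown of cumprod(1 + r) from its running peak.
wealth  <- cumprod(1 + r)
dd_geom <- wealth / cummax(wealth) - 1

## Arithmetic: drawdown of cumsum(r) from its running peak.
level    <- cumsum(r)
dd_arith <- level - cummax(level)

min(dd_geom)   # compounded depth, about -0.475
min(dd_arith)  # arithmetic depth, about -0.55
```

Roughly speaking, this is the distinction the geometric argument of findDrawdowns switches between.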
From @@chmid1 @ending from @teven@@edu Tue Jan 8 17:36:57 2019
From: @@chmid1 @ending from @teven@@edu (Alec Schmidt)
Date: Tue, 8 Jan 2019 16:36:57 +0000
Subject: [R-SIG-Finance] corrections vs drawdowns
In-Reply-To: <1546964270.14204.83.camel@braverock.com>
References: <1546964270.14204.83.camel@braverock.com>
Message-ID: 

Thank you Brian,

geometric=FALSE gave me additional corrections in 2011 and 2012, but still no bear market of 2008:

08/30/2018 - 12/24/2018 (-11.04%) [80 Days]
07/21/2015 - 02/11/2016 (-10.05%) [143 Days]
09/17/2012 - 11/15/2012 (-8.42%) [42 Days]
03/27/2012 - 06/01/2012 (-9.44%) [47 Days]
07/08/2011 - 08/19/2011 (-15.96%) [31 Days]
05/02/2011 - 06/17/2011 (-7.59%) [34 Days]
02/22/2011 - 03/16/2011 (-6.54%) [17 Days]
07/18/2000 - 10/09/2002 (-97.34%) [559 Days]

Alec

________________________________
From: Brian G. Peterson
Sent: Tuesday, January 8, 2019 11:17 AM
To: Alec Schmidt; r-sig-finance at r-project.org
Subject: Re: [R-SIG-Finance] corrections vs drawdowns

Alec,

I suspect that you may wish to start by setting geometric=FALSE in your call to findDrawdowns. [...]

[[alternative HTML version deleted]]

From bri@n @ending from br@verock@com Tue Jan 8 17:55:36 2019
From: bri@n @ending from br@verock@com (Brian G. Peterson)
Date: Tue, 08 Jan 2019 10:55:36 -0600
Subject: [R-SIG-Finance] corrections vs drawdowns
In-Reply-To: 
References: ,<1546964270.14204.83.camel@braverock.com>
Message-ID: <1546966536.14204.94.camel@braverock.com>

I think that this is correct. NASDAQ was still in a drawdown. NASDAQ didn't make new all-time highs until 2014.
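As a rough sketch of the "correction from the most recent peak, resetting after each rebound" idea discussed in the thread: find_corrections below is a hypothetical helper written for this archive, not an existing quantmod or PerformanceAnalytics function, and the 10% trigger/rebound rule is one possible parameterization.

```r
## Hypothetical helper: flag declines of at least 'thresh' from the most
## recent peak; after the price rebounds 'thresh' from the trough, the
## correction is considered over and the peak resets.
find_corrections <- function(price, thresh = 0.10) {
    out <- list()
    peak <- price[1]; peak_i <- 1
    trough <- price[1]; trough_i <- 1
    in_corr <- FALSE
    for (i in seq_along(price)) {
        p <- price[i]
        if (!in_corr) {
            if (p > peak) { peak <- p; peak_i <- i }
            if (p <= peak * (1 - thresh)) {    # correction triggered
                in_corr <- TRUE; trough <- p; trough_i <- i
            }
        } else {
            if (p < trough) { trough <- p; trough_i <- i }
            if (p >= trough * (1 + thresh)) {  # rebound: record and reset peak
                out[[length(out) + 1]] <-
                    data.frame(peak = peak_i, trough = trough_i,
                               depth = trough / peak - 1)
                in_corr <- FALSE; peak <- p; peak_i <- i
            }
        }
    }
    do.call(rbind, out)
}

## Toy price path with two separate 10%+ declines:
price <- c(100, 110, 95, 90, 100, 105, 120, 100, 95, 110)
find_corrections(price)
```

Because the peak resets after each completed episode, a path like this is reported as two corrections rather than one long drawdown from the first all-time high.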
Some people define 'corrections' as drawdown from the most recent peak.

Charles Schwab's definition is in line with generally accepted usage:
https://www.schwab.com/resource-center/insights/content/market-correction-what-does-it-mean

The Motley Fool uses a similar but not identical definition:
https://www.schwab.com/resource-center/insights/content/market-correction-what-does-it-mean

quantmod has a 'findPeaks' function, but it depends on you setting a threshold for what defines a peak.

A related Stack Overflow question may point in the direction of what you're looking for, to compute drawdown from a recent peak:
https://stackoverflow.com/questions/14737899/calculate-cumulatve-growth-drawdown-from-local-min-max

I would certainly be happy to include a 'findCorrections' function in a later version of PerformanceAnalytics if we could parameterize what constitutes a 'recent high' for that purpose.

Regards,

Brian

On Tue, 2019-01-08 at 16:36 +0000, Alec Schmidt wrote:
> Thank you Brian,
> geometric=FALSE gave me additional corrections in 2011 and 2012 but
> still no bear market of 2008: [...]
>
> Alec

From e@ @ending from enrico@chum@nn@net Tue Jan 8 21:00:25 2019
From: e@ @ending from enrico@chum@nn@net (Enrico Schumann)
Date: Tue, 08 Jan 2019 21:00:25 +0100
Subject: [R-SIG-Finance] corrections vs drawdowns
In-Reply-To: (Alec Schmidt's message of "Tue, 8 Jan 2019 16:09:09 +0000")
References: 
Message-ID: <87wone538m.fsf@enricoschumann.net>

On Tue, 08 Jan 2019, Alec Schmidt writes:

> I tried to use the function findDrawdowns() to compile NASDAQ (^IXIC)
> corrections.
[...]
>
> Is there consensus (and possibly R scripts)
> that address this problem?
>
> Thanks! Alec

Perhaps the function 'streaks' in package 'PMwR' does what you want.

library("tseries")
library("PMwR")

z <- get.hist.quote("^IXIC", quote = "Close",
                    retclass = "zoo",
                    start = as.Date("2007-1-1"))
streaks(z)
##        start        end state      return
## 1 2007-01-03 2007-03-05       -0.03403819
## 2 2007-03-05 2007-10-31    up  0.22149128
## 3 2007-10-31 2008-11-20  down -0.53967656
## 4 2008-11-20 2009-01-06    up  0.25549343
## 5 2009-01-06 2009-03-09  down -0.23223471
## 6 2009-03-09 2018-08-29    up  5.39242799
## 7 2018-08-29 2019-01-04  down -0.16903607

See also

https://stats.stackexchange.com/questions/354157/determining-up-down-market-trends-in-timeseries-data/373622#373622
https://cran.r-project.org/web/packages/PMwR/vignettes/Drawdowns_streaks.pdf

--
Enrico Schumann (maintainer of PMwR)
Lucerne, Switzerland
http://enricoschumann.net

From @@chmid1 @ending from @teven@@edu Tue Jan 8 22:22:46 2019
From: @@chmid1 @ending from @teven@@edu (Alec Schmidt)
Date: Tue, 8 Jan 2019 21:22:46 +0000
Subject: [R-SIG-Finance] corrections vs drawdowns
In-Reply-To: <1546966536.14204.94.camel@braverock.com>
References:
,<1546964270.14204.83.camel@braverock.com>, <1546966536.14204.94.camel@braverock.com>
Message-ID: 

Brian,

Thanks again. It would be great if you implemented findCorrections(); I think it would become a popular topic. Off the top of my head, the default version needs just one parameter, i.e., if we're looking for corrections of 10%, check for them after every peak that is 10%+ above the last correction's trough. But of course there may be a more generic setup.

Alec

________________________________
From: Brian G. Peterson
Sent: Tuesday, January 8, 2019 11:55 AM
To: Alec Schmidt; r-sig-finance at r-project.org
Subject: Re: [R-SIG-Finance] corrections vs drawdowns

I think that this is correct. NASDAQ was still in a drawdown. NASDAQ didn't make new all-time highs until 2014. [...]

On Tue, 2019-01-08 at 16:36 +0000, Alec Schmidt wrote:
> Thank you Brian, [...]
> > On Tue, 2019-01-08 at 16:09 +0000, Alec Schmidt wrote:
> > I tried to use the function findDrawdowns() to compile NASDAQ (^IXIC)
> > corrections.
> > [...]
> >
> > Thanks! Alec

[[alternative HTML version deleted]]

From john@d@wr|ter @end|ng |rom gm@||@com Mon Jan 21 10:27:13 2019
From: john@d@wr|ter @end|ng |rom gm@||@com (John Writer)
Date: Mon, 21 Jan 2019 14:57:13 +0530
Subject: [R-SIG-Finance] Query on strucchange package
In-Reply-To: 
References: 
Message-ID: 

Hi There,

I have a query about the usage of the breakpoints() function in the strucchange package. The example on page 14 of the reference manual for strucchange shows

breakpoints(Nile ~ 1)

where Nile is the time series for which breakpoints are to be calculated. What does this command mean? The strucchange package is based on the Bai-Perron structural break model (page 13) using linear models, yet it seems to me that no other variable is used here. I am not sure whether I have misunderstood the paper or this command. I would be grateful if someone could explain.

Thanks.

[[alternative HTML version deleted]]

From mmm@mmm1900 @end|ng |rom gm@||@com Sat Jan 26 03:25:35 2019
From: mmm@mmm1900 @end|ng |rom gm@||@com (mmm ammm)
Date: Sat, 26 Jan 2019 04:25:35 +0200
Subject: [R-SIG-Finance] the package nmof
Message-ID: 

Dear all,

I'm hoping that one of you can help me with the following code, which is used for asset selection based on the NMOF package; please guide me to where the mistake is.

The error message is: "Error in colSums(x) : 'x' must be an array of at least two dimensions".

The entire code is below (it works with DEopt but not with PSopt); it is for asset selection, exactly as in the NMOF package.

require("NMOF")
na <- 31

nn <- read.table("n.txt")  # nn is a 31*31 matrix.
Sigma <- data.matrix(nn)

OF2 <- function(x, data) {
    ## res <- colSums(data$Sigma %*% x * x)
    res <- colSums(Sigma %*% x * x)
    n <- colSums(x)
    res <- res / n^2
}

####### pso #############
data <- list(na  = na,
             max = rep( 0.05, na),
             min = rep(-0.05, na))

algo <- list(nP = 31L,
             nG = 1000L,
             c1 = 0.5,
             c2 = 1.5,
             max  = rep( 0.05, na),
             min  = rep(-0.05, na),
             iner = 0.7, initV = 1, maxV = 0.2)

system.time(sol <- PSopt(OF = OF2, algo = algo, data))

From peg@rc|@m76 @end|ng |rom gm@||@com Sat Jan 26 05:02:36 2019
From: peg@rc|@m76 @end|ng |rom gm@||@com (Pedro Garcia)
Date: Fri, 25 Jan 2019 23:02:36 -0500
Subject: [R-SIG-Finance] the package nmof
In-Reply-To: 
References: 
Message-ID: 

It may be that 'x' is a data structure that you need to turn into an array; you may also need to transpose 'x' when multiplying.

Good luck,
Pedro

On Fri, 25 Jan 2019 at 21:26, mmm ammm wrote:
> Dear all,
> i'm hoping that one of you can help me in the following code that is
> used for asset selection based on nmof package [...]
> nn<- read.table("n.txt") # n is the a 31*31 matrix.
> Sigma <- data.matrix(nn)
> [...]
> system.time(sol <- PSopt(OF = OF2, algo = algo, data))
>
> [[alternative HTML version deleted]]

From e@ @end|ng |rom enr|co@chum@nn@net Sat Jan 26 20:50:14 2019
From: e@ @end|ng |rom enr|co@chum@nn@net (Enrico Schumann)
Date: Sat, 26 Jan 2019 20:50:14 +0100
Subject: [R-SIG-Finance] the package nmof
In-Reply-To: (mmm ammm's message of "Sat, 26 Jan 2019 04:25:35 +0200")
References: 
Message-ID: <87a7jn6vwp.fsf@enricoschumann.net>

>>>>> "m" == mmm ammm writes:

 m> Dear all,
 m> i'm hoping that one of you can help me in the following code that is
 m> used for asset selection based on nmof package [...]
 m> nn<- read.table("n.txt") # n is the a 31*31 matrix.
 m> Sigma <- data.matrix(nn)
 m> [...]
 m> system.time(sol <- PSopt(OF = OF2, algo = algo, data))

You could get rid of the error by setting 'loopOF' to FALSE (as part of the settings passed with the list 'algo'). I will explain below what this setting does.

But in any case, are you sure your objective function does what it should? If I read it correctly, it assumes that 'x' is logical. But both DEopt and PSopt work with numeric (i.e. real-valued) vectors.

What 'loopOF' does: Differential Evolution and Particle Swarm Optimisation are multiple-solution methods, aka population-based methods. The NMOF implementations 'DEopt' and 'PSopt' arrange the populations as matrices; every column in such a matrix represents one solution. To compute the objective function of the solutions, with the default settings both 'DEopt' and 'PSopt' use a loop. The objective function should thus receive a single solution as input, and should evaluate to a single number.

Sometimes an objective function may be computed for the whole population (i.e. all solutions) in one step. In such a case, the objective function should expect the population matrix (i.e. all solutions) as input, and should evaluate to a vector: the objective-function values corresponding to the columns of the population matrix.
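To make the column-wise convention concrete, here is a small self-contained illustration (an editor's sketch in base R; the NMOF internals are more involved, and neither DEopt nor PSopt is called here):

```r
## A population of 10 candidate solutions stored column-wise, and two
## equivalent ways to evaluate the quadratic form x' Sigma x for each.
set.seed(1)
Sigma <- crossprod(matrix(rnorm(16), 4, 4))   # a 4x4 covariance-like matrix
P     <- matrix(runif(4 * 10), nrow = 4)      # population: solutions as columns

of_one <- function(x) c(crossprod(x, Sigma %*% x))  # one solution -> one number
of_all <- function(X) colSums(X * (Sigma %*% X))    # whole population -> vector

looped     <- apply(P, 2, of_one)  # what the default loop effectively does
vectorised <- of_all(P)            # the form a loopOF = FALSE objective takes
all.equal(looped, vectorised)
```

The first form is what the solvers call under the default loopOF = TRUE; the second must accept the whole population matrix at once.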
However, since the user specifies the objective function, 'DEopt'/'PSopt' cannot know automatically in which way the objective function is written; so you need to tell the functions by setting 'loopOF' to TRUE (the default) or to FALSE.

kind regards
Enrico

--
Enrico Schumann (maintainer of package NMOF)
Lucerne, Switzerland
http://enricoschumann.net

From mmm@mmm1900 @end|ng |rom gm@||@com Sun Jan 27 03:01:33 2019
From: mmm@mmm1900 @end|ng |rom gm@||@com (mmm ammm)
Date: Sun, 27 Jan 2019 04:01:33 +0200
Subject: [R-SIG-Finance] the package nmof
In-Reply-To: <87a7jn6vwp.fsf@enricoschumann.net>
References: <87a7jn6vwp.fsf@enricoschumann.net>
Message-ID: 

Dear Enrico,

Thank you so much. It works now. But the function DEopt worked without changing this setting and gave approximately the same result; could you please explain this to me?

Many thanks

On 26/01/2019, Enrico Schumann wrote:
>>>>>> "m" == mmm ammm writes:
>
> m> Dear all,
> m> i'm hoping that one of you can help me in the following code [...]
> m> nn<- read.table("n.txt") # n is the a 31*31 matrix.
> m> Sigma <- data.matrix(nn)
> m> [...]
> m> system.time(sol <- PSopt(OF = OF2, algo = algo, data))
>
> You could get rid of the error by setting 'loopOF' to FALSE (as part
> of the settings passed with the list 'algo'). [...]
>
> Sometimes an objective function may be computed for the whole
> population (i.e. all solutions) in one step. In such a case, the
> objective function should expect the population matrix (i.e. all
> solutions) as input, and should evaluate to a vector: the
> objective-function values corresponding to the columns of the population
> matrix.
However, since the user specifies the > objective function, 'DEopt'/'PSopt' cannot know > automatically in what way the objective function is > written; so you need to tell the functions by setting > 'loopOF' to TRUE (the default) or to FALSE. > > kind regards > Enrico > > > -- > Enrico Schumann (maintainer of package NMOF) > Lucerne, Switzerland > http://enricoschumann.net > From e@ @end|ng |rom enr|co@chum@nn@net Mon Jan 28 08:19:20 2019 From: e@ @end|ng |rom enr|co@chum@nn@net (Enrico Schumann) Date: Mon, 28 Jan 2019 08:19:20 +0100 Subject: [R-SIG-Finance] the package nmof In-Reply-To: (mmm ammm's message of "Sun, 27 Jan 2019 04:01:33 +0200") References: <87a7jn6vwp.fsf@enricoschumann.net> Message-ID: <871s4xdzbb.fsf@enricoschumann.net> >>>>> "m" == mmm ammm writes: m> Dear Enrico, m> Thank you so much. It works now. m> But, the function DEopt worked without changing this setting and gave m> approximately the same result; could you please explain this for me? m> Many thanks The reason is that 'DEopt' does not drop the dimension when a solution is selected and passed to the objective function; a single solution remains a matrix (of one column). Such a conversion is simple to do in the objective function: use 'as.matrix()' to create a column vector; or 'c()' or 'drop()' to drop the dimension and create a vector. kind regards Enrico m> On 26/01/2019, Enrico Schumann wrote: >>>>>>> "m" == mmm ammm writes: >> m> Dear all, m> i'm hoping that one of you can help me in the following code that is m> used for asset selection based on nmof package and please guide me m> where is the mistake: >> m> the error message is: "Error in colSums(x) : 'x' must be an array of m> at least two dimensions". >> m> The entire code is below (it works with DEopt) but does not with >> PSO; m> it is for asset selection exaclty from the package nmof. >> m> require("NMOF") m> na<-31 >> m> nn<- read.table("n.txt") # n is the a 31*31 matrix. 
m> Sigma <- data.matrix(nn) >> m> OF2 <- function(x, data) { m> # res <- colSums (data$Sigma %*% x * x) m> res <- colSums (Sigma %*% x * x) m> #z<-c(x,x) m> n <- colSums (x); res <- res / n^2 m> } m> ####### pso ############# m> data <- list( m> na = na, m> max = rep( 0.05, na), m> min = rep(-0.05, na) m> ) m> algo <- list(nP = 31L, m> nG = 1000L, m> c1 = 0.5, m> c2 = 1.5, m> #min = data$min, max = data$max, m> max = rep( 0.05, na), min = rep(-0.05, na), m> #repair = repair, pen = penalty, m> iner = 0.7, initV = 1, maxV = 0.2 m> #printBar = FALSE, printDetail = TRUE m> ) m> #x<-array(x, c(2,2)) >> m> system.time(sol <- PSopt(OF = OF2,algo = algo, data)) >> >> You could get rid of the error by setting 'loopOF' to >> FALSE (as part of the settings passed with list >> 'algo'). I will explain below what this setting does. >> >> But in any case, are you sure your objective function >> does what it should? If I read it correctly, it >> assumes that 'x' is logical. But both DEopt and PSopt >> work with numeric (i.e. real-valued) vectors. >> >> What 'loopOF' does: Differential Evolution and Particle >> Swarm Optimisation are multiple-solution methods, aka >> population-based methods. The NMOF implementations >> 'DEopt' and 'PSopt' arrange the populations as >> matrices; every column in such a matrix represents one >> solution. To compute the objective function of the >> solutions, with the default settings both 'DEopt' and >> 'PSopt' use a loop. The objective function should thus >> receive a single solution as input, and should evaluate >> to a single number. >> >> Sometimes an objective function may be computed for the >> whole population (i.e. all solutions) in one step. In >> such a case, the objective function should expect the >> population matrix (i.e. all solutions) as input, and >> should evaluate to a vector: the objective-function >> values corresponding to the columns of the population >> matrix. 
However, since the user specifies the >> objective function, 'DEopt'/'PSopt' cannot know >> automatically in what way the objective function is >> written; so you need to tell the functions by setting >> 'loopOF' to TRUE (the default) or to FALSE. >> >> kind regards >> Enrico >> >> >> -- >> Enrico Schumann (maintainer of package NMOF) >> Lucerne, Switzerland >> http://enricoschumann.net >> -- Enrico Schumann Lucerne, Switzerland http://enricoschumann.net From p@nk@j@b| @end|ng |rom y@hoo@com Mon Jan 28 14:35:42 2019 From: p@nk@j@b| @end|ng |rom y@hoo@com (Pankaj K Agarwal) Date: Mon, 28 Jan 2019 13:35:42 +0000 (UTC) Subject: [R-SIG-Finance] Fama-MacBeth Procedure for multiple independent variables. References: <883646581.2158546.1548682542458.ref@mail.yahoo.com> Message-ID: <883646581.2158546.1548682542458@mail.yahoo.com> Dear All,

Hope the question qualifies to be included here. As you are surely aware, the Fama-MacBeth (1973) two-step procedure involves:

1. Regressing the time series of each decile of y on the time series of each decile of x, resulting in 10 slopes.
2. Regressing the cross-section of deciles 1-10 of y on the slopes obtained in step 1, for each month.
3. Obtaining the time series of slopes from step 2 and testing it for significance.

Now say y is return and x is VaR. Deciles of x were created based on decreasing VaR, and the corresponding decile returns were the ys. The issue is: what if I wish to add another x (say x1 and x2 now) and implement the Fama-MacBeth procedure? On what basis should the deciles then be formed: x1, x2, or both? Can someone help, along with R code too?
Regards, Pankaj K Agarwal +91-98397-11444 [[alternative HTML version deleted]] From cgm|| @end|ng |rom m@n@com Mon Jan 28 17:23:09 2019 From: cgm|| @end|ng |rom m@n@com (Curtis Miller) Date: Mon, 28 Jan 2019 16:23:09 +0000 Subject: [R-SIG-Finance] GARCH parameter estimation with rugarch: estimates seem inaccurate Message-ID: Hello all, Over a year ago I wrote a blog post about the problems I was having estimating the parameters of GARCH models via fGarch. I got a lot of feedback and I've now followed up with another article taking that feedback into account: https://ntguardian.wordpress.com/2019/01/28/problems-estimating-garch-parameters-r-part-2-rugarch/ First, I switched from fGarch to rugarch, which is supposedly still maintained. I also looked at other parameter combinations in simulation experiments that others requested. It seems that rugarch isn't necessarily better when it comes to parameter accuracy and one needs a lot of data (on the order of thousands) to get good estimates of the parameter values. That said, CIs computed are highly unreliable even at large sample sizes and there is certainly no "silver bullet" optimization algorithm. I'd like feedback if I'm not doing things right. I heard once that others could not replicate my results; that is, they have reliable estimates for GARCH parameters. But I never found out who those people were and they did not give me their code to see what I was doing wrong. If the community is aware of better approaches, I would like to hear them as well. Thank you all, Curtis Miller From mmm@mmm1900 @end|ng |rom gm@||@com Mon Jan 28 18:49:36 2019 From: mmm@mmm1900 @end|ng |rom gm@||@com (mmm ammm) Date: Mon, 28 Jan 2019 19:49:36 +0200 Subject: [R-SIG-Finance] the package nmof In-Reply-To: <871s4xdzbb.fsf@enricoschumann.net> References: <87a7jn6vwp.fsf@enricoschumann.net> <871s4xdzbb.fsf@enricoschumann.net> Message-ID: Many thanks Enrico for your help.
On 28/01/2019, Enrico Schumann wrote: >>>>>> "m" == mmm ammm writes: > > m> Dear Enrico, > m> Thank you so much. It works now. > m> But, the function DEopt worked without changing this setting and > gave > m> approximately the same result; could you please explain this for me? > > m> Many thanks > > The reason is that 'DEopt' does not drop the dimension > when a solution is selected and passed to the objective > function; a single solution remains a matrix (of one > column). > > Such a conversion is simple to do in the objective > function: use 'as.matrix()' to create a column vector; > or 'c()' or 'drop()' to drop the dimension and create a > vector. > > kind regards > Enrico > > m> On 26/01/2019, Enrico Schumann wrote: > >>>>>>> "m" == mmm ammm writes: > >> > m> Dear all, > m> i'm hoping that one of you can help me in the following code that is > m> used for asset selection based on nmof package and please guide me > m> where is the mistake: > >> > m> the error message is: "Error in colSums(x) : 'x' must be an array of > m> at least two dimensions". > >> > m> The entire code is below (it works with DEopt) but does not with > >> PSO; > m> it is for asset selection exaclty from the package nmof. > >> > m> require("NMOF") > m> na<-31 > >> > m> nn<- read.table("n.txt") # n is the a 31*31 matrix. 
> m> Sigma <- data.matrix(nn) > >> > m> OF2 <- function(x, data) { > m> # res <- colSums (data$Sigma %*% x * x) > m> res <- colSums (Sigma %*% x * x) > m> #z<-c(x,x) > m> n <- colSums (x); res <- res / n^2 > m> } > m> ####### pso ############# > m> data <- list( > m> na = na, > m> max = rep( 0.05, na), > m> min = rep(-0.05, na) > m> ) > m> algo <- list(nP = 31L, > m> nG = 1000L, > m> c1 = 0.5, > m> c2 = 1.5, > m> #min = data$min, max = data$max, > m> max = rep( 0.05, na), min = rep(-0.05, na), > m> #repair = repair, pen = penalty, > m> iner = 0.7, initV = 1, maxV = 0.2 > m> #printBar = FALSE, printDetail = TRUE > m> ) > m> #x<-array(x, c(2,2)) > >> > m> system.time(sol <- PSopt(OF = OF2,algo = algo, data)) > >> > >> You could get rid of the error by setting 'loopOF' to > >> FALSE (as part of the settings passed with list > >> 'algo'). I will explain below what this setting does. > >> > >> But in any case, are you sure your objective function > >> does what it should? If I read it correctly, it > >> assumes that 'x' is logical. But both DEopt and PSopt > >> work with numeric (i.e. real-valued) vectors. > >> > >> What 'loopOF' does: Differential Evolution and Particle > >> Swarm Optimisation are multiple-solution methods, aka > >> population-based methods. The NMOF implementations > >> 'DEopt' and 'PSopt' arrange the populations as > >> matrices; every column in such a matrix represents one > >> solution. To compute the objective function of the > >> solutions, with the default settings both 'DEopt' and > >> 'PSopt' use a loop. The objective function should thus > >> receive a single solution as input, and should evaluate > >> to a single number. > >> > >> Sometimes an objective function may be computed for the > >> whole population (i.e. all solutions) in one step. In > >> such a case, the objective function should expect the > >> population matrix (i.e. 
all solutions) as input, and > >> should evaluate to a vector: the objective-function > >> values corresponding to the columns of the population > >> matrix. However, since the user specifies the > >> objective function, 'DEopt'/'PSopt' cannot know > >> automatically in what way the objective function is > >> written; so you need to tell the functions by setting > >> 'loopOF' to TRUE (the default) or to FALSE. > >> > >> kind regards > >> Enrico > >> > >> > >> -- > >> Enrico Schumann (maintainer of package NMOF) > >> Lucerne, Switzerland > >> http://enricoschumann.net > >> > > -- > Enrico Schumann > Lucerne, Switzerland > http://enricoschumann.net > From @|ex|o@ @end|ng |rom 4d@c@pe@com Mon Jan 28 19:02:23 2019 From: @|ex|o@ @end|ng |rom 4d@c@pe@com (alexios ghalanos) Date: Mon, 28 Jan 2019 18:02:23 +0000 (UTC) Subject: [R-SIG-Finance] GARCH parameter estimation with rugarch: estimates seem inaccurate In-Reply-To: Message-ID: Hi Curtis, There is a function in rugarch called ugarchdistribution for performing these types of experiments: spec1 <- ugarchspec(mean.model = list(armaOrder = c(0,0), include.mean = FALSE), fixed.pars = list("omega" = 0.2, "alpha1" = 0.2, "beta1" = 0.2)) d=ugarchdistribution(spec1, n.sim=2000, m.sim=100, recursive = TRUE, recursive.length = 6000, solver.control=list(trace=1)) Try this and perhaps also read this blog post: http://www.unstarched.net/2012/12/26/garch-parameter-uncertainty-and-data-size/ Could we benefit from a better nonlinear solver? Perhaps. Could we benefit from code contributions to make it better? Definitely. Feel free to contribute. Best, Alexios On Mon, 28 Jan 2019 16:23:09 +0000, Curtis Miller wrote: > Hello all, > > Over a year ago I wrote a blog post about the problems I was having > estimating the parameters of GARCH models via fGarch. 
I got a lot of > feedback and I've now followed up with another article taking that > feedback into account: > https://ntguardian.wordpress.com/2019/01/28/problems-estimating-garch-parameters-r-part-2-rugarch/ > > First, I switched from fGarch to rugarch, which is supposedly still > maintained. I also looked at other parameter combinations in simulation > experiments that others requested. > > It seems that rugarch isn't necessarily better when it comes to > parameter accuracy and one needs a lot of data (in the order of > thousands) to get good estimates of the parameter values. That said, CIs > computed are highly unreliable even at large sample sizes and there is > certainly no "silver bullet" optimization algorithm. > > I'd like feedback if I'm not doing things right. I heard once that > others could not replicate my results; that is, they have reliable > estimates for GARCH parameters. But I never found out who those people > were and they did not give me their code to see what I was doing wrong. > > If the community is aware of better approaches, I would like to hear > them as well. > > Thank you all, > > Curtis Miller > > _______________________________________________ > R-SIG-Finance at r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-sig-finance > -- Subscriber-posting only. If you want to post, subscribe first. > -- Also note that this is not the r-help list where general R questions should go. 
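A single run of the kind of experiment Alexios's ugarchdistribution automates can also be sketched by hand. The following is a minimal parameter-recovery check, not code from the thread: the seed and parameter values are illustrative, and the `@path$seriesSim` accessor for the simulated series is an assumption about the uGARCHpath object.

```r
## Sketch: simulate from a known sGARCH(1,1) and refit (illustrative values).
library(rugarch)

true.spec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(0, 0), include.mean = FALSE),
  fixed.pars     = list(omega = 0.2, alpha1 = 0.2, beta1 = 0.2))

set.seed(1)
sim <- ugarchpath(true.spec, n.sim = 2000)   # one simulated path of length 2000
x   <- as.numeric(sim@path$seriesSim)        # assumed accessor for the series

est.spec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(0, 0), include.mean = FALSE))
fit <- ugarchfit(est.spec, data = x)
round(coef(fit), 3)   # compare against omega = alpha1 = beta1 = 0.2
```

Repeating this over many seeds and sample sizes is, in miniature, what ugarchdistribution does via n.sim, m.sim, and the recursive options.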
From e@ @end|ng |rom enr|co@chum@nn@net Fri Feb 1 14:28:04 2019 From: e@ @end|ng |rom enr|co@chum@nn@net (Enrico Schumann) Date: Fri, 01 Feb 2019 14:28:04 +0100 Subject: [R-SIG-Finance] Alternative solvers in rugarch (was: GARCH parameter estimation with rugarch: estimates seem inaccurate) In-Reply-To: (alexios ghalanos's message of "Mon, 28 Jan 2019 18:02:23 +0000 (UTC)") References: Message-ID: <87lg2zd4ez.fsf@enricoschumann.net> >>>>> "alexios" == alexios ghalanos writes: alexios> Hi Curtis, alexios> There is a function in rugarch called ugarchdistribution for performing these types of experiments: alexios> spec1 <- ugarchspec(mean.model = list(armaOrder = c(0,0), include.mean = FALSE), alexios> fixed.pars = list("omega" = 0.2, "alpha1" = 0.2, "beta1" = 0.2)) alexios> d=ugarchdistribution(spec1, n.sim=2000, m.sim=100, recursive = TRUE, recursive.length = 6000, solver.control=list(trace=1)) alexios> Try this and perhaps also read this blog post: alexios> http://www.unstarched.net/2012/12/26/garch-parameter-uncertainty-and-data-size/ alexios> Could we benefit from a better nonlinear solver? Perhaps. alexios> Could we benefit from code contributions to make it better? Definitely. alexios> Feel free to contribute. alexios> Best, alexios> Alexios I am not using 'rugarch' and only had a brief look at the code. But is it possible to "plug in" alternative solvers, i.e. without changing the package code? If not, that could be a useful feature, as it would allow solvers to be tested quickly. An "external" solver would have to comply with some interface convention, i.e. the solver would have to be provided as a function that takes certain defined input arguments and evaluates to defined outputs. alexios> On Mon, 28 Jan 2019 16:23:09 +0000, Curtis Miller wrote: >> Hello all, >> >> Over a year ago I wrote a blog post about the problems I was having >> estimating the parameters of GARCH models via fGarch.
I got a lot of >> feedback and I've now followed up with another article taking that >> feedback into account: >> https://ntguardian.wordpress.com/2019/01/28/problems-estimating-garch-parameters-r-part-2-rugarch/ >> >> First, I switched from fGarch to rugarch, which is supposedly still >> maintained. I also looked at other parameter combinations in simulation >> experiments that others requested. >> >> It seems that rugarch isn't necessarily better when it comes to >> parameter accuracy and one needs a lot of data (in the order of >> thousands) to get good estimates of the parameter values. That said, CIs >> computed are highly unreliable even at large sample sizes and there is >> certainly no "silver bullet" optimization algorithm. >> >> I'd like feedback if I'm not doing things right. I heard once that >> others could not replicate my results; that is, they have reliable >> estimates for GARCH parameters. But I never found out who those people >> were and they did not give me their code to see what I was doing wrong. >> >> If the community is aware of better approaches, I would like to hear >> them as well. >> >> Thank you all, >> >> Curtis Miller >> -- Enrico Schumann Lucerne, Switzerland http://enricoschumann.net From jo@h@m@u|r|ch @end|ng |rom gm@||@com Fri Feb 1 14:35:31 2019 From: jo@h@m@u|r|ch @end|ng |rom gm@||@com (Joshua Ulrich) Date: Fri, 1 Feb 2019 07:35:31 -0600 Subject: [R-SIG-Finance] R/Finance 2019: Call for Presentations Message-ID: R/Finance 2019: Applied Finance with R May 17 and 18, 2019 University of Illinois at Chicago Call for Presentations The eleventh annual R/Finance conference for applied finance using R will be held on May 17 and 18, 2019 in Chicago, IL, USA at the University of Illinois at Chicago. The conference will cover topics including portfolio management, time series analysis, advanced risk tools, high-performance computing, market microstructure, and econometrics. 
All will be discussed within the context of using R as a primary tool for financial model development, risk management, portfolio construction, and trading. >From its midwest beginnings, word of the conference spread among trading desks and universities, until it became the primary meeting for academics and practitioners interested in using R in quantitative finance. It has featured presentations from prominent academics and practitioners, and we anticipate another exciting line-up for 2019. We invite you to submit complete papers in pdf format for consideration. We will also consider one-page abstracts (in txt or pdf format) although more complete papers are preferred. We welcome submissions for full talks (approximately 20 min.), abbreviated "lightning talks" (approx. 6 min.), and (1 hr.) pre-conference tutorials. Both academic and practitioner proposals related to R are encouraged. All slides will be made publicly available at conference time. Presenters are strongly encouraged to provide working R code to accompany the slides. Ideally, data sets should be made public for the purposes of reproducibility (though we realize this may be limited due to contracts with data vendors). Preference may be given to innovative research or presenters who have released R packages. Please submit proposals online at http://go.uic.edu/rfinsubmit Submissions will be reviewed and accepted on a rolling basis with a final submission deadline of March 1, 2019. Submitters will be notified on a rolling basis via email by March 15, 2019 of acceptance, presentation length, and financial assistance (if requested). Financial assistance for travel and accommodation may be available to presenters. Requests for financial assistance do not affect acceptance decisions. Requests must be made at the time of submission, and should indicate why assistance is being requested. Requests made after submission are unlikely to be fulfilled. 
Assistance will be granted at the discretion of the conference committee. Additional details will be announced via the conference website http://www.RinFinance.com/ as they become available. Information on previous years' presenters and their presentations is also at the conference website. We will make a separate announcement when registration opens, usually sometime in mid to late March. For the conference committee: Petra Bakosova, Gib Bassett, Peter Carl, Dirk Eddelbuettel, Soumya Kalra, Brian Peterson, Dale Rosenthal, Jeffrey Ryan, Justin Shea, Joshua Ulrich -- Joshua Ulrich | about.me/joshuaulrich FOSS Trading | www.fosstrading.com R/Finance 2018 | www.rinfinance.com From hk@hr@ @end|ng |rom gm@||@com Tue Feb 19 13:10:37 2019 From: hk@hr@ @end|ng |rom gm@||@com (Hannu Kahra) Date: Tue, 19 Feb 2019 14:10:37 +0200 Subject: [R-SIG-Finance] Mixed integer programming Message-ID: I have tried to replicate Example 5.1 in Luenberger: Investment Science. Here is the model model <- MIPModel() %>% add_variable(x[i], i = 1:7, type = "binary") %>% set_objective(200*x[1]+30*x[2]+200*x[3]+60*x[4]+50*x[5]+100*x[6]+50*x[7]) %>% add_constraint(100*x[1]+20*x[2]+150*x[3]+50*x[4]+50*x[5]+150*x[6]+150*x[7] <= 500) solve_model(with_ROI("glpk",verbose = TRUE)) I get the following error Error in UseMethod("solve_model") : no applicable method for 'solve_model' applied to an object of class "function" model
Mixed integer linear optimization problem Variables: Continuous: 0 Integer: 0 Binary: 7 Model sense: maximize Constraints: 1 What is wrong? Thank you in advance. -Hannu [[alternative HTML version deleted]] From hk@hr@ @end|ng |rom gm@||@com Tue Feb 19 13:38:16 2019 From: hk@hr@ @end|ng |rom gm@||@com (Hannu Kahra) Date: Tue, 19 Feb 2019 14:38:16 +0200 Subject: [R-SIG-Finance] Mixed integer programming In-Reply-To: References: Message-ID: I forgot to mention that I am using the ompr package.
An example is given here https://blog.revolutionanalytics.com/2016/12/mixed-integer-programming-in-r-with-the-ompr-package.html -Hannu On Tue, Feb 19, 2019 at 2:10 PM Hannu Kahra wrote: > I have tried to replicate Example 5.1 in Luenberger: Investment Science. > > Here is the model > model <- MIPModel() %>% > add_variable(x[i], i = 1:7, type = "binary") %>% > > set_objective(200*x[1]+30*x[2]+200*x[3]+60*x[4]+50*x[5]+100*x[6]+50*x[7]) > %>% > > add_constraint(100*x[1]+20*x[2]+150*x[3]+50*x[4]+50*x[5]+150*x[6]+150*x[7] > <= 500) > solve_model(with_ROI("glpk",verbose = TRUE)) > > I get the following error > > Error in UseMethod("solve_model") : > no applicable method for 'solve_model' applied to an object of class "function" > > modelMixed integer linear optimization problem > Variables: > Continuous: 0 > Integer: 0 > Binary: 7 > Model sense: maximize > Constraints: 1 > > What is wrong? Thank you in advance. > > -Hannu > > [[alternative HTML version deleted]] From e@ @end|ng |rom enr|co@chum@nn@net Tue Feb 19 14:45:51 2019 From: e@ @end|ng |rom enr|co@chum@nn@net (Enrico Schumann) Date: Tue, 19 Feb 2019 14:45:51 +0100 Subject: [R-SIG-Finance] Mixed integer programming In-Reply-To: (Hannu Kahra's message of "Tue, 19 Feb 2019 14:10:37 +0200") References: Message-ID: <87h8czhoxc.fsf@enricoschumann.net> >>>>> "Hannu" == Hannu Kahra writes: Hannu> I have tried to replicate Example 5.1 in Luenberger: Investment Science. 
Hannu> Here is the model Hannu> model <- MIPModel() %>% Hannu> add_variable(x[i], i = 1:7, type = "binary") %>% Hannu> set_objective(200*x[1]+30*x[2]+200*x[3]+60*x[4]+50*x[5]+100*x[6]+50*x[7]) Hannu> %>% Hannu> add_constraint(100*x[1]+20*x[2]+150*x[3]+50*x[4]+50*x[5]+150*x[6]+150*x[7] Hannu> <= 500) Hannu> solve_model(with_ROI("glpk",verbose = TRUE)) Hannu> I get the following error Hannu> Error in UseMethod("solve_model") : Hannu> no applicable method for 'solve_model' applied to an object of class Hannu> "function" Hannu> modelMixed integer linear optimization problem Hannu> Variables: Hannu> Continuous: 0 Hannu> Integer: 0 Hannu> Binary: 7 Hannu> Model sense: maximize Hannu> Constraints: 1 Hannu> What is wrong? Thank you in advance. Hannu> -Hannu If you want people to help you, you need to provide a reproducible example. Also, please do not post in HTML, as code examples get scrambled and become unreadable. (If I had to venture a guess, I'd think you've forgotten one of those `%>%` before 'solve_model'.) -- Enrico Schumann Lucerne, Switzerland http://enricoschumann.net From er|cjberger @end|ng |rom gm@||@com Tue Feb 19 15:01:44 2019 From: er|cjberger @end|ng |rom gm@||@com (Eric Berger) Date: Tue, 19 Feb 2019 16:01:44 +0200 Subject: [R-SIG-Finance] Mixed integer programming In-Reply-To: <87h8czhoxc.fsf@enricoschumann.net> References: <87h8czhoxc.fsf@enricoschumann.net> Message-ID: Hi Hannu, I figured out the problem. The following code works for me. Note that I generally find it helpful to write function calls with the name of the package they come from, as in ompr::MIPModel() rather than just MIPModel(). This is partly style and partly to avoid issues when functions get masked by other functions of the same name. 
Here is my code: library(magrittr) library(ompr) library(ROI) library(ROI.plugin.glpk) library(ompr.roi) mymodel <- ompr::MIPModel() %>% add_variable(x[i], i = 1:7, type = "binary") %>% set_objective(200*x[1]+30*x[2]+200*x[3]+60*x[4]+50*x[5]+100*x[6]+50*x[7]) %>% add_constraint(100*x[1]+20*x[2]+150*x[3]+50*x[4]+50*x[5]+150*x[6]+150*x[7] <= 500) z <- ompr::solve_model(mymodel, ompr.roi::with_ROI(solver="glpk")) z # Status: optimal # Objective value: 610 summary(z) # Length Class Mode #model 3 optimization_model list #objective_value 1 -none- numeric #status 1 -none- character #solution 7 -none- numeric #solution_column_duals 1 -none- function #solution_row_duals 1 -none- function z$solution # x[1] x[2] x[3] x[4] x[5] x[6] x[7] # 1 0 1 1 1 1 0 HTH, Eric On Tue, Feb 19, 2019 at 3:46 PM Enrico Schumann wrote: > >>>>> "Hannu" == Hannu Kahra writes: > > Hannu> I have tried to replicate Example 5.1 in Luenberger: Investment > Science. > Hannu> Here is the model > Hannu> model <- MIPModel() %>% > Hannu> add_variable(x[i], i = 1:7, type = "binary") %>% > Hannu> > set_objective(200*x[1]+30*x[2]+200*x[3]+60*x[4]+50*x[5]+100*x[6]+50*x[7]) > Hannu> %>% > > Hannu> > add_constraint(100*x[1]+20*x[2]+150*x[3]+50*x[4]+50*x[5]+150*x[6]+150*x[7] > Hannu> <= 500) > Hannu> solve_model(with_ROI("glpk",verbose = TRUE)) > > Hannu> I get the following error > > Hannu> Error in UseMethod("solve_model") : > Hannu> no applicable method for 'solve_model' applied to an object > of class > Hannu> "function" > > Hannu> modelMixed integer linear optimization problem > Hannu> Variables: > Hannu> Continuous: 0 > Hannu> Integer: 0 > Hannu> Binary: 7 > Hannu> Model sense: maximize > Hannu> Constraints: 1 > > Hannu> What is wrong? Thank you in advance. > > Hannu> -Hannu > > If you want people to help you, you need to provide a > reproducible example. Also, please do not post in HTML, > as code examples get scrambled and become unreadable. 
> > (If I had to venture a guess, I'd think you've > forgotten one of those `%>%` before 'solve_model'.) > > > -- > Enrico Schumann > Lucerne, Switzerland > http://enricoschumann.net > > _______________________________________________ > R-SIG-Finance at r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-sig-finance > -- Subscriber-posting only. If you want to post, subscribe first. > -- Also note that this is not the r-help list where general R questions > should go. > [[alternative HTML version deleted]] From hk@hr@ @end|ng |rom gm@||@com Tue Feb 19 15:58:02 2019 From: hk@hr@ @end|ng |rom gm@||@com (Hannu Kahra) Date: Tue, 19 Feb 2019 16:58:02 +0200 Subject: [R-SIG-Finance] Mixed integer programming In-Reply-To: References: <87h8czhoxc.fsf@enricoschumann.net> Message-ID: Hi Eric, thank you very much. Best regards, Hannu On Tue, Feb 19, 2019 at 4:02 PM Eric Berger wrote: > Hi Hannu, > I figured out the problem. The following code works for me. Note that I > generally find it helpful to write function calls with the name of the > package they come from, as in ompr::MIPModel() rather than just MIPModel(). > This is partly style and partly to avoid issues when functions get masked > by other functions of the same name. 
> > Here is my code: > > library(magrittr) > library(ompr) > library(ROI) > library(ROI.plugin.glpk) > library(ompr.roi) > > mymodel <- ompr::MIPModel() %>% > add_variable(x[i], i = 1:7, type = "binary") %>% > > set_objective(200*x[1]+30*x[2]+200*x[3]+60*x[4]+50*x[5]+100*x[6]+50*x[7]) > %>% > > add_constraint(100*x[1]+20*x[2]+150*x[3]+50*x[4]+50*x[5]+150*x[6]+150*x[7] > <= 500) > > z <- ompr::solve_model(mymodel, ompr.roi::with_ROI(solver="glpk")) > > z > # Status: optimal > # Objective value: 610 > > summary(z) > # Length Class Mode > #model 3 optimization_model list > #objective_value 1 -none- numeric > #status 1 -none- character > #solution 7 -none- numeric > #solution_column_duals 1 -none- function > #solution_row_duals 1 -none- function > > z$solution > > # x[1] x[2] x[3] x[4] x[5] x[6] x[7] > # 1 0 1 1 1 1 0 > > HTH, > Eric > > > On Tue, Feb 19, 2019 at 3:46 PM Enrico Schumann > wrote: > >> >>>>> "Hannu" == Hannu Kahra writes: >> >> Hannu> I have tried to replicate Example 5.1 in Luenberger: >> Investment Science. >> Hannu> Here is the model >> Hannu> model <- MIPModel() %>% >> Hannu> add_variable(x[i], i = 1:7, type = "binary") %>% >> Hannu> >> set_objective(200*x[1]+30*x[2]+200*x[3]+60*x[4]+50*x[5]+100*x[6]+50*x[7]) >> Hannu> %>% >> >> Hannu> >> add_constraint(100*x[1]+20*x[2]+150*x[3]+50*x[4]+50*x[5]+150*x[6]+150*x[7] >> Hannu> <= 500) >> Hannu> solve_model(with_ROI("glpk",verbose = TRUE)) >> >> Hannu> I get the following error >> >> Hannu> Error in UseMethod("solve_model") : >> Hannu> no applicable method for 'solve_model' applied to an object >> of class >> Hannu> "function" >> >> Hannu> modelMixed integer linear optimization problem >> Hannu> Variables: >> Hannu> Continuous: 0 >> Hannu> Integer: 0 >> Hannu> Binary: 7 >> Hannu> Model sense: maximize >> Hannu> Constraints: 1 >> >> Hannu> What is wrong? Thank you in advance. >> >> Hannu> -Hannu >> >> If you want people to help you, you need to provide a >> reproducible example. 
Also, please do not post in HTML, >> as code examples get scrambled and become unreadable. >> >> (If I had to venture a guess, I'd think you've >> forgotten one of those `%>%` before 'solve_model'.) >> >> >> -- >> Enrico Schumann >> Lucerne, Switzerland >> http://enricoschumann.net >> >> _______________________________________________ >> R-SIG-Finance at r-project.org mailing list >> https://stat.ethz.ch/mailman/listinfo/r-sig-finance >> -- Subscriber-posting only. If you want to post, subscribe first. >> -- Also note that this is not the r-help list where general R questions >> should go. >> > [[alternative HTML version deleted]] From jo@hu@@@eg@| @end|ng |rom gm@||@com Fri Mar 1 18:53:07 2019 From: jo@hu@@@eg@| @end|ng |rom gm@||@com (Josh Segal) Date: Fri, 1 Mar 2019 12:53:07 -0500 Subject: [R-SIG-Finance] Question on rmgarch - dccspec Message-ID: Hi guys/Alexios, I'm finding that dccspec ignores the input for lag.max. Looking at the code, I see: if(is.null(lag.max)) VAR.opt$lag.max = NULL else VAR.opt$lag.max = as.integer(min(1, lag.max)) Why is min taken against 1? This seems to defeat the purpose. Thanks, Josh [[alternative HTML version deleted]] From pro|@@m|t@m|tt@| @end|ng |rom gm@||@com Fri Mar 1 19:50:22 2019 From: pro|@@m|t@m|tt@| @end|ng |rom gm@||@com (Amit Mittal) Date: Sat, 2 Mar 2019 00:20:22 +0530 Subject: [R-SIG-Finance] Question on rmgarch - dccspec In-Reply-To: References: Message-ID: <5c797eef.1c69fb81.86f8c.8dda@mx.google.com> Josh, It is explicitly mentioned in the documentation that the dccfit works only for one lag. It is able to manage multiple markets because it works with this limitation. This limitation works beautifully to get large datasets reproducible and effective analysis, and additional gains I feel may be unwieldy and minimal without this restriction and need not be prioritized. Best Regards Amit +91 7899381263 Please request Skype as available. 5th Year FPM (Ph.D.)
in Finance and Accounting Area Indian Institute of Management, Lucknow, (U.P.) 226013 India http://bit.ly/2A2PhD AEA Job profile : http://bit.ly/AEAamit FMA 2 page profile : http://bit.ly/FMApdf2p SSRN top10% downloaded since July 2017: http://ssrn.com/author=2665511 From: Josh Segal Sent: 01 March 2019 23:23 To: r-sig-finance at r-project.org Subject: [R-SIG-Finance] Question on rmgarch - dccspec Hi guys/Alexios, I'm finding that dccspec ignores the input for lag.max. Looking at the code, I see: if(is.null(lag.max)) VAR.opt$lag.max = NULL else VAR.opt$lag.max = as.integer(min(1, lag.max)) Why is min taken against 1? This seems to defeat the purpose. Thanks, Josh [[alternative HTML version deleted]] _______________________________________________ R-SIG-Finance at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-finance -- Subscriber-posting only. If you want to post, subscribe first. -- Also note that this is not the r-help list where general R questions should go. [[alternative HTML version deleted]] From @|ex|o@ @end|ng |rom 4d@c@pe@com Fri Mar 1 19:58:54 2019 From: @|ex|o@ @end|ng |rom 4d@c@pe@com (Alexios Ghalanos) Date: Fri, 1 Mar 2019 10:58:54 -0800 Subject: [R-SIG-Finance] Question on rmgarch - dccspec In-Reply-To: <5c797eef.1c69fb81.86f8c.8dda@mx.google.com> References: <5c797eef.1c69fb81.86f8c.8dda@mx.google.com> Message-ID: <46AFA803-0B45-45BF-AF5D-35C44D2E5A7A@4dscape.com> I'll take a look over the weekend. Thanks for reporting. Alexios > On Mar 1, 2019, at 10:50 AM, Amit Mittal wrote: > > Josh, > > It is explicitly mentioned in the documentation that the dccfit works only for one lag. It is able to manage multiple markets because it works with this limitation.
This limitation also helps keep analyses of large datasets reproducible and effective, and the additional gains from allowing more lags would, I feel, be minimal and unwieldy, so lifting the restriction need not be prioritized. > > > Best Regards > Amit > +91 7899381263 > > Please request Skype as available > 5th Year FPM (Ph.D.) in Finance and Accounting Area > Indian Institute of Management, Lucknow, (U.P.) 226013 India > http://bit.ly/2A2PhD > AEA Job profile : http://bit.ly/AEAamit > FMA 2 page profile : http://bit.ly/FMApdf2p > SSRN top10% downloaded since July 2017: http://ssrn.com/author=2665511 > > From: Josh Segal > Sent: 01 March 2019 23:23 > To: r-sig-finance at r-project.org > Subject: [R-SIG-Finance] Question on rmgarch - dccspec > > Hi guys/Alexios, > > I'm finding that dccspec ignores the input for lag.max. Looking at the > code, I see: > if(is.null(lag.max)) VAR.opt$lag.max = NULL else VAR.opt$lag.max = > as.integer(min(1, lag.max)) > Why is min taken against 1? This seems to defeat the purpose. > > Thanks, > Josh > > [[alternative HTML version deleted]] > > _______________________________________________ > R-SIG-Finance at r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-sig-finance > -- Subscriber-posting only. If you want to post, subscribe first. > -- Also note that this is not the r-help list where general R questions should go.
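The clamp Josh quotes is easy to poke at in isolation: for any lag.max >= 1, min(1, lag.max) returns 1, so the user's setting can never take effect. A minimal sketch, assuming (unconfirmed, pending Alexios's look at the code) that max() was the intended clamp:

```r
# The expression as it appears in the quoted dccspec code: min() caps lag.max at 1
clamp_as_written <- function(lag.max) as.integer(min(1, lag.max))

# The presumed intent: enforce a floor of one lag, but honour larger values
clamp_presumed <- function(lag.max) as.integer(max(1, lag.max))

clamp_as_written(4)  # 1 -- the user's lag.max = 4 is silently discarded
clamp_presumed(4)    # 4
clamp_presumed(0)    # 1
```

Either way, a one-character swap of min for max would make lag.max behave as documented while still guarding against non-positive inputs.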
> From jo@hu@@@eg@| @end|ng |rom gm@||@com Fri Mar 1 20:35:29 2019 From: jo@hu@@@eg@| @end|ng |rom gm@||@com (Josh Segal) Date: Fri, 1 Mar 2019 14:35:29 -0500 Subject: [R-SIG-Finance] Question on rmgarch - dccspec In-Reply-To: <46AFA803-0B45-45BF-AF5D-35C44D2E5A7A@4dscape.com> References: <5c797eef.1c69fb81.86f8c.8dda@mx.google.com> <46AFA803-0B45-45BF-AF5D-35C44D2E5A7A@4dscape.com> Message-ID: Thanks, Alexios. Amit, I don't see that in the documentation; where are you looking? I do see examples in rmgarch.tests with dcc and lag > 1. On Fri, Mar 1, 2019 at 1:59 PM Alexios Ghalanos wrote: > I'll take a look over the weekend. Thanks for reporting. > > Alexios > > > > On Mar 1, 2019, at 10:50 AM, Amit Mittal > wrote: > > > > Josh, > > > > It is explicitly mentioned in the documentation that the dccfit works > only for one lag. It is able to manage multiple markets because it works > with this limitation. This limitation also helps keep analyses of large > datasets reproducible and effective, and the additional gains from allowing > more lags would, I feel, be minimal and unwieldy, so lifting the restriction > need not be prioritized > > > > > > Best Regards > > Amit > > +91 7899381263 > > > > Please request Skype as available > > 5th Year FPM (Ph.D.) in Finance and Accounting Area > > Indian Institute of Management, Lucknow, (U.P.) 226013 India > > http://bit.ly/2A2PhD > > AEA Job profile : http://bit.ly/AEAamit > > FMA 2 page profile : http://bit.ly/FMApdf2p > > SSRN top10% downloaded since July 2017: http://ssrn.com/author=2665511 > > > > From: Josh Segal > > Sent: 01 March 2019 23:23 > > To: r-sig-finance at r-project.org > > Subject: [R-SIG-Finance] Question on rmgarch - dccspec > > > > Hi guys/Alexios, > > > > I'm finding that dccspec ignores the input for lag.max. Looking at the > > code, I see: > > if(is.null(lag.max)) VAR.opt$lag.max = NULL else VAR.opt$lag.max = > > as.integer(min(1, lag.max)) > > Why is min taken against 1? This seems to defeat the purpose.
> > > > Thanks, > > Josh > > > > [[alternative HTML version deleted]] > > > > _______________________________________________ > > R-SIG-Finance at r-project.org mailing list > > https://stat.ethz.ch/mailman/listinfo/r-sig-finance > > -- Subscriber-posting only. If you want to post, subscribe first. > > -- Also note that this is not the r-help list where general R questions > should go. From pro|@@m|t@m|tt@| @end|ng |rom gm@||@com Sat Mar 2 06:39:59 2019 From: pro|@@m|t@m|tt@| @end|ng |rom gm@||@com (Amit Mittal) Date: Sat, 2 Mar 2019 11:09:59 +0530 Subject: [R-SIG-Finance] Question on rmgarch - dccspec In-Reply-To: References: <5c797eef.1c69fb81.86f8c.8dda@mx.google.com> <46AFA803-0B45-45BF-AF5D-35C44D2E5A7A@4dscape.com> Message-ID: <5c7a1730.1c69fb81.28097.7162@mx.google.com> Sure. As Alexios said, he is looking into it. I can confirm that my project would not gain materially from additional lags: it spans 10 financial markets over 10 years of data, and 1 lag gives robust, effective results from dccfit (up to 4 markets at a time). I can't find that information now. There are only three reliable DCC documents; one is Zivot's presentation on using it, and the others can be checked on bit or CRAN. Best Regards Amit +91 7899381263 Please request Skype as available. 5th Year FPM (Ph.D.) in Finance and Accounting Area Indian Institute of Management, Lucknow, (U.P.)
226013 India http://bit.ly/2A2PhD AEA Job profile : http://bit.ly/AEAamit FMA 2 page profile : http://bit.ly/FMApdf2p SSRN top10% downloaded since July 2017: http://ssrn.com/author=2665511 From: Josh Segal Sent: 02 March 2019 01:05 To: Alexios Ghalanos Cc: Amit Mittal; r-sig-finance at r-project.org Subject: Re: [R-SIG-Finance] Question on rmgarch - dccspec Thanks, Alexios. Amit, I don't see that in the documentation; where are you looking? I do see examples in rmgarch.tests with dcc and lag > 1. On Fri, Mar 1, 2019 at 1:59 PM Alexios Ghalanos wrote: I'll take a look over the weekend. Thanks for reporting. Alexios > On Mar 1, 2019, at 10:50 AM, Amit Mittal wrote: > > Josh, > > It is explicitly mentioned in the documentation that the dccfit works only for one lag. It is able to manage multiple markets because it works with this limitation. This limitation also helps keep analyses of large datasets reproducible and effective, and the additional gains from allowing more lags would, I feel, be minimal and unwieldy, so lifting the restriction need not be prioritized > > > Best Regards > Amit > +91 7899381263 > > Please request Skype as available > 5th Year FPM (Ph.D.) in Finance and Accounting Area > Indian Institute of Management, Lucknow, (U.P.) 226013 India > http://bit.ly/2A2PhD > AEA Job profile : http://bit.ly/AEAamit > FMA 2 page profile : http://bit.ly/FMApdf2p > SSRN top10% downloaded since July 2017: http://ssrn.com/author=2665511 > > From: Josh Segal > Sent: 01 March 2019 23:23 > To: r-sig-finance at r-project.org > Subject: [R-SIG-Finance] Question on rmgarch - dccspec > > Hi guys/Alexios, > > I'm finding that dccspec ignores the input for lag.max. Looking at the > code, I see: > if(is.null(lag.max)) VAR.opt$lag.max = NULL else VAR.opt$lag.max = > as.integer(min(1, lag.max)) > Why is min taken against 1? This seems to defeat the purpose. > > Thanks, > Josh > >
[[alternative HTML version deleted]] > > _______________________________________________ > R-SIG-Finance at r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-sig-finance > -- Subscriber-posting only. If you want to post, subscribe first. > -- Also note that this is not the r-help list where general R questions should go. > > >? ? [[alternative HTML version deleted]] > > _______________________________________________ > R-SIG-Finance at r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-sig-finance > -- Subscriber-posting only. If you want to post, subscribe first. > -- Also note that this is not the r-help list where general R questions should go. > [[alternative HTML version deleted]] From jo@h@m@u|r|ch @end|ng |rom gm@||@com Mon Mar 4 19:21:59 2019 From: jo@h@m@u|r|ch @end|ng |rom gm@||@com (Joshua Ulrich) Date: Mon, 4 Mar 2019 12:21:59 -0600 Subject: [R-SIG-Finance] R/Finance 2019: Call for Presentations In-Reply-To: References: Message-ID: Call for Diverse Presenters The lack of diversity in tech is well-known, and the R/Finance committee feels the situation in quantitative finance is potentially even worse than in tech. There has been great work on increasing diversity in the broader R community over the past several years. We need the help of the R community to make the same progress in the quantitative R finance community. In an effort to increase diversity of the presentations and presenters at the R/Finance conference, the committee asks and encourages members of under-represented or historically marginalized groups to respond to the Call for Presentations. A full paper is not required. We especially encourage those who were previously unaware of R/Finance to submit their talk proposals. There is travel/accommodation funding available if that may prevent you from attending and presenting. R/Finance is dedicated to a safe, productive, and welcoming environment free from discrimination and harassment. 
The conference expects all participants to treat each other with respect and doesn?t tolerate harassment of any kind as fitting of the University environment. For details, please peruse the code of conduct here. The CFP is open until this Friday. We look forward to hearing from you! For the conference committee: Petra Bakosova, Gib Bassett, Peter Carl, Dirk Eddelbuettel, Soumya Kalra, Brian Peterson, Dale Rosenthal, Jeffrey Ryan, Justin Shea, Joshua Ulrich On Fri, Feb 1, 2019 at 7:35 AM Joshua Ulrich wrote: > > R/Finance 2019: Applied Finance with R > May 17 and 18, 2019 > University of Illinois at Chicago > > Call for Presentations > > The eleventh annual R/Finance conference for applied finance using R > will be held on May 17 and 18, 2019 in Chicago, IL, USA at the > University of Illinois at Chicago. The conference will cover topics > including portfolio management, time series analysis, advanced risk > tools, high-performance computing, market microstructure, and > econometrics. All will be discussed within the context of using R as > a primary tool for financial model development, risk management, > portfolio construction, and trading. > > From its midwest beginnings, word of the conference spread among > trading desks and universities, until it became the primary meeting > for academics and practitioners interested in using R in quantitative > finance. It has featured presentations from prominent academics and > practitioners, and we anticipate another exciting line-up for 2019. > > We invite you to submit complete papers in pdf format for > consideration. We will also consider one-page abstracts (in txt or > pdf format) although more complete papers are preferred. We welcome > submissions for full talks (approximately 20 min.), abbreviated > "lightning talks" (approx. 6 min.), and (1 hr.) pre-conference > tutorials. Both academic and practitioner proposals related to R are > encouraged. > > All slides will be made publicly available at conference time. 
> Presenters are strongly encouraged to provide working R code to > accompany the slides. Ideally, data sets should be made public for > the purposes of reproducibility (though we realize this may be limited > due to contracts with data vendors). Preference may be given to > innovative research or presenters who have released R packages. > > Please submit proposals online at http://go.uic.edu/rfinsubmit > > Submissions will be reviewed and accepted on a rolling basis with a > final submission deadline of March 1, 2019. Submitters will be > notified on a rolling basis via email by March 15, 2019 of acceptance, > presentation length, and financial assistance (if requested). > > Financial assistance for travel and accommodation may be available to > presenters. Requests for financial assistance do not affect acceptance > decisions. Requests must be made at the time of submission, and should > indicate why assistance is being requested. Requests made after > submission are unlikely to be fulfilled. Assistance will be granted at > the discretion of the conference committee. > > Additional details will be announced via the conference website > http://www.RinFinance.com/ as they become available. Information on > previous years' presenters and their presentations are also at the > conference website. We will make a separate announcement when > registration opens, usually sometime in mid to late March. 
> > For the conference committee: > Petra Bakosova, Gib Bassett, Peter Carl, Dirk Eddelbuettel, Soumya > Kalra, Brian Peterson, Dale Rosenthal, Jeffrey Ryan, Justin Shea, > Joshua Ulrich > > -- > Joshua Ulrich | about.me/joshuaulrich > FOSS Trading | www.fosstrading.com > R/Finance 2018 | www.rinfinance.com -- Joshua Ulrich | about.me/joshuaulrich FOSS Trading | www.fosstrading.com R/Finance 2018 | www.rinfinance.com From @te|@no@|@cu@ @end|ng |rom un|m|@|t Mon Mar 11 14:52:18 2019 From: @te|@no@|@cu@ @end|ng |rom un|m|@|t (stefano iacus) Date: Mon, 11 Mar 2019 14:52:18 +0100 Subject: [R-SIG-Finance] [COURSE] YSS2019: Summer School on Computational and Statistical Methods for Stochastic Process Message-ID: [Apologies for cross-posting] YSS2019: The first YUIMA Summer School on Computational and Statistical Methods for Stochastic Processes 25-28 June 2019, Brixen-Bressanone, Italy This 4-day course aims to introduce researchers, PhD students and practitioners to several aspects of numerical and statistical analysis of time series through the R language and, in particular, the YUIMA package. The course covers R programming, time series data handling, simulation, and numerical and statistical analysis for several types of models, including point processes, stochastic differential equations driven by Brownian motion with or without jumps, fractional Brownian motion, and Lévy processes. For detailed information see the course page at: https://yuimaproject.com/yss2019/ Registration closes on May 20th 2019! Stefano ----------------------------------- Prof. Stefano M. Iacus, Ph.D.
Department of Economics, Management and Quantitative Methods University of Milan Via Conservatorio, 7 I-20123 Milan - Italy Ph.: +39 02 50321 461 Fax: +39 02 50321 505 Twitter: @iacus http://scholar.google.com/citations?user=JBs9tJ4AAAAJ&hl=en http://orcid.org/0000-0002-4884-0047 ------------------------------------------------------------------------------------ Please don't send me Word or PowerPoint attachments if not absolutely necessary. See: http://www.gnu.org/philosophy/no-word-attachments.html [[alternative HTML version deleted]] From dodgerdodger12 @end|ng |rom gm@||@com Mon Mar 11 23:13:39 2019 From: dodgerdodger12 @end|ng |rom gm@||@com (Rodger Dodger) Date: Mon, 11 Mar 2019 22:13:39 +0000 Subject: [R-SIG-Finance] refining trailing stop loss in simple trend following strategy Message-ID: Dear All, I'm after some help with the trailing stop loss functionality in quantstrat. I have written a simple script that uses EMAs as a trend filter, enters positions based on breakouts from Donchian Channels and rides the position with a trailing stop loss based on ATR. The script runs OK, but I've run into the following problems trying to refine this strategy in quantstrat. 1. When specifying the ATR lookback period, anything above n=1 generates an error: "Error in if (threshold > 0) threshold = -threshold : missing value where TRUE/FALSE needed" Can anyone explain what's going on here? I'd like to set the lookback window much further back. Also - is the ATR stop loss working as it should? This seems to hinge on the threshold part of the 'ruleSignal' function. Could anyone explain how this part works? I could not find any documentation. 2. Ideally, I want the ATR stop loss to only move in the direction of the winning position. E.g. if we are long and the position moves in our favour, then the stop moves up with the Close - 2*ATR band. However, if the price moves down, I want the stop to remain fixed at its last highest value, rapidly closing out the winning position.
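Outside quantstrat, the ratchet described in point 2 can be prototyped in a few lines with cummax(): a long stop built this way only ever rises. A toy sketch on made-up numbers (the 2*ATR band follows the script below; the NA handling is this editor's guess about the n > 1 error, since TTR's ATR starts with NA values and `if (NA > 0)` fails in R with exactly the "missing value where TRUE/FALSE needed" message):

```r
library(TTR)  # for ATR()

# Toy high/low/close bars standing in for one long trade's holding period
hlc <- cbind(high  = c(100, 102, 105, 104, 103, 107),
             low   = c( 98, 100, 102, 101, 100, 104),
             close = c( 99, 101, 104, 102, 101, 106))

atr <- ATR(hlc, n = 2)[, "atr"]       # the first values are NA by construction
raw_stop <- hlc[, "close"] - 2 * atr  # the Close - 2*ATR band

# Ratchet: a long stop may only move up, never down; NA bars contribute nothing
ratchet_stop <- cummax(ifelse(is.na(raw_stop), -Inf, raw_stop))
```

The same leading NAs are a plausible culprit behind point 1: with a longer lookback, early signals would hand ruleSignal an NA threshold. That is a guess from the error message, not a confirmed quantstrat diagnosis, but it would explain why n = 1 happens to work.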
This is what I currently do manually, but I would love to have a way of automating it. I'm struggling to think of a way to do this within the quantstrat framework. Below I copy a reproducible version of the script. Any guidance would be gratefully received. Rodg # packages require(quantstrat) require(IKTrading) require(knitr) # settings currency('USD') Sys.setenv(TZ="UTC") # variables start_date <- Sys.Date() - 365 end_date <- Sys.Date() init_equity <- 1e6 tradesize <- 100 adjustment <- TRUE # stock data symbols <- c("MSFT", "IBM") # symbols used in our backtest getSymbols(Symbols = symbols, src = "yahoo", from = start_date, to = end_date, adjust = TRUE) # retrieve data stock(symbols, currency = "USD", multiplier = 1) # tells quantstrat what instruments are present and what currency to use #### STRATEGY strategy.st <- portfolio.st <- account.st <- "firststrat" # naming strategy, portfolio and account # remove old portfolio and strategy from environment rm.strat(portfolio.st) rm.strat(strategy.st) # initialize portfolio, account, orders and strategy objects initPortf(portfolio.st, symbols = symbols, currency = "USD") initAcct(account.st, portfolios = portfolio.st, currency = "USD", initEq = init_equity) initOrders(portfolio.st) strategy(strategy.st, store=TRUE) # Plots candleChart(MSFT, up.col = "green", dn.col = "red", theme = "white") addEMA(n = c(26,12), on = 1, col = c("red", "blue")) DCH <- DonchianChannel(MSFT[,2:3], n=20, include.lag = TRUE) plot(addTA(DCH$high, on=1, col='red')) plot(addTA(DCH$low, on=1, col='blue')) candleChart(IBM, up.col = "black", dn.col = "red", theme = "white") addEMA(n = c(26,12), on = 1, col = c("red", "blue")) DCH <- DonchianChannel(IBM[,2:3], n=20, include.lag = TRUE) plot(addTA(DCH$high, on=1, col='red')) plot(addTA(DCH$low, on=1, col='blue')) ## INDICATORS # 26 EMA add.indicator(strategy = strategy.st, name = 'EMA', arguments = list(x = quote(Cl(mktdata)), n=26), label = 'EMA26') # 12 EMA add.indicator(strategy = strategy.st, name = 'EMA', arguments = list(x =
quote(Cl(mktdata)), n=12), label = 'EMA12') # Add the Donchian indicator add.indicator(strategy = strategy.st, # correct name of function: name = "DonchianChannel", arguments = list(HL = quote(HLC(mktdata)[, 2:3]), n = 20, include.lag = TRUE), label = "Donchian") ### SIGNALS ## Long #1: sigComparison specifying when 12-period EMA above 26-period EMA add.signal(strategy.st, name = 'sigComparison', arguments = list(columns=c("EMA12", "EMA26"), relationship = "gt" ), label = "longfilter") #2: Price over the Donchian add.signal(strategy.st, name="sigComparison", arguments=list(columns=c("Close","high.Donchian"),relationship= "gt"), label="Hi.gt.Donchian") #Enter long when DC break out plus EMAs crossed add.signal(strategy.st, name = "sigFormula", arguments = list(formula = "longfilter & Hi.gt.Donchian", cross = TRUE), label = "longentry") ## Short #1: sigComparison specifying when 12-period EMA below 26-period EMA add.signal(strategy.st, name = 'sigComparison', arguments = list(columns=c("EMA12", "EMA26"), relationship = "lt" ), label = "shortfilter") #2: crosses under the Donchian add.signal(strategy.st, name="sigComparison", arguments=list(columns=c("Close","low.Donchian"), relationship= "lt"), label="Lo.lt.Donchian") #Enter short when DC break down plus EMAs crossed add.signal(strategy.st, name = "sigFormula", arguments = list(formula = "shortfilter & Lo.lt.Donchian", cross = TRUE), label = "shortentry") ## ATR Stops stopATR <- function(HLC, n, maType, noATR){ stop.atr <- noATR * ATR(HLC=HLC, n=n, maType=maType) return(stop.atr) } add.indicator(strategy = strategy.st, name = "stopATR", arguments = list(HLC=quote(HLC(mktdata)[, 2:4]), n=1, maType="EMA", noATR= 2)) ## RULES ## Exits # exit longs add.rule(strategy.st, name='ruleSignal', arguments = list(sigcol="longentry", sigval=TRUE, orderqty='all', ordertype='stoptrailing', orderside='long', threshold='atr.stopATR.ind', tmult=FALSE, replace=FALSE, orderset='long' ), type='chain', parent='EnterLONG', 
label='ExitLong') # exit shorts add.rule(strategy.st, name='ruleSignal', arguments = list(sigcol="shortentry", sigval=TRUE, orderqty='all', ordertype='stoptrailing', orderside='short', threshold='atr.stopATR.ind', tmult=FALSE, replace=FALSE, orderset='short' ), type='chain', parent='EnterSHORT', label='ExitShort') ## Entries # Enter long add.rule(strategy.st, name='ruleSignal', arguments=list(sigcol='longentry' , sigval=TRUE, orderside='long' , ordertype='market', prefer='High', orderqty= tradesize, osFUN=osMaxPos, replace=FALSE, orderset='long' ), type='enter', label='EnterLONG' ) # Enter short add.rule(strategy.st, name='ruleSignal', arguments=list(sigcol='shortentry', sigval=TRUE, orderside='short', ordertype='market', prefer='Low', orderqty=-tradesize, osFUN=osMaxPos, replace=FALSE, orderset='short' ), type='enter', label='EnterSHORT' ) # restrict to 1 entry per side. for(symbol in symbols){ addPosLimit(portfolio = portfolio.st, symbol = symbol, timestamp = start_date, maxpos = tradesize) } ## PERFORMANCE ANALYTICS out <- applyStrategy(strategy = strategy.st, portfolios = portfolio.st) updatePortf(portfolio.st) daterange <- time(getPortfolio(portfolio.st)$summary)[-1] updateAcct(account.st, daterange) updateEndEq(account.st) # look at orderbook ob <- getOrderBook(portfolio.st) # orderbook for each symbol ob_MSFT <- data.frame(ob$firststrat$MSFT) ob_IBM <- data.frame(ob$firststrat$IBM) # plot trades myTheme<-chart_theme() myTheme$col$dn.col<-'lightblue' myTheme$col$dn.border <- 'lightgray' myTheme$col$up.border <- 'lightgray' chart.Posn(Portfolio = portfolio.st, theme=myTheme, Symbol = "MSFT", TA= c("add_EMA(n=12, col='blue')", "add_EMA(n=26, col='red')")) chart.Posn(Portfolio = portfolio.st, theme=myTheme, Symbol = "IBM", TA= c("add_EMA(n=12, col='blue')", "add_EMA(n=26, col='red')")) # get trad stats tstats <- tradeStats(portfolio.st) kable(t(tstats)) [[alternative HTML version deleted]] From ||y@@k|pn|@ @end|ng |rom gm@||@com Wed Mar 13 05:52:36 2019 From: 
||y@@k|pn|@ @end|ng |rom gm@||@com (Ilya Kipnis) Date: Wed, 13 Mar 2019 00:52:36 -0400 Subject: [R-SIG-Finance] Lo catches slow Message-ID: Minimum reproducible example: require(quantmod) require(TTR) getSymbols('SPY') data <- cbind(OHLC(SPY), stoch(HLC(SPY))) head(Lo(data)) head(HLC(data)) Lo catches both the low from the OHLC object (which it's supposed to), and the slowD column (because it has low in the name), which it isn't supposed to. This can cause issues down the line with other functions that call functions that search for the Low column which might cause a dimnames error down the line. [[alternative HTML version deleted]] From jo@h@m@u|r|ch @end|ng |rom gm@||@com Wed Mar 13 11:58:40 2019 From: jo@h@m@u|r|ch @end|ng |rom gm@||@com (Joshua Ulrich) Date: Wed, 13 Mar 2019 05:58:40 -0500 Subject: [R-SIG-Finance] Lo catches slow In-Reply-To: References: Message-ID: On Tue, Mar 12, 2019 at 11:53 PM Ilya Kipnis wrote: > > Minimum reproducible example: > > require(quantmod) > require(TTR) > > getSymbols('SPY') > data <- cbind(OHLC(SPY), stoch(HLC(SPY))) > head(Lo(data)) > head(HLC(data)) > > Lo catches both the low from the OHLC object (which it's supposed to), and > the slowD column (because it has low in the name), which it isn't supposed > to. > This is a known issue that is hard to fix. See https://github.com/joshuaulrich/quantmod/issues/24 > This can cause issues down the line with other functions that call > functions that search for the Low column which might cause a dimnames error > down the line. > > [[alternative HTML version deleted]] > > _______________________________________________ > R-SIG-Finance at r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-sig-finance > -- Subscriber-posting only. If you want to post, subscribe first. > -- Also note that this is not the r-help list where general R questions should go. 
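Until the column matching in quantmod is tightened, a defensive workaround is to select the Low column by its exact suffix rather than via Lo(). A sketch on synthetic data (no download needed; the SYMBOL.Low naming follows quantmod's convention):

```r
library(quantmod)
library(xts)

# Synthetic two-column object reproducing the collision
x <- xts(matrix(1:10, ncol = 2), order.by = Sys.Date() + 1:5)
colnames(x) <- c("SPY.Low", "slowD")   # "slowD" also matches a grep for "low"

ncol(Lo(x))   # 2 -- both columns are caught, the behaviour reported above

# Selecting by the exact ".Low" suffix is unambiguous
low_only <- x[, grep("\\.Low$", colnames(x))]
ncol(low_only)   # 1
```

This sidesteps the bug in downstream code, but of course does not fix Lo() itself.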
-- Joshua Ulrich | about.me/joshuaulrich FOSS Trading | www.fosstrading.com R/Finance 2019 | www.rinfinance.com From e@ @end|ng |rom enr|co@chum@nn@net Wed Mar 13 12:14:54 2019 From: e@ @end|ng |rom enr|co@chum@nn@net (Enrico Schumann) Date: Wed, 13 Mar 2019 12:14:54 +0100 Subject: [R-SIG-Finance] Lo catches slow In-Reply-To: (Joshua Ulrich's message of "Wed, 13 Mar 2019 05:58:40 -0500") References: Message-ID: <87d0mvgh5d.fsf@enricoschumann.net> >>>>> "Joshua" == Joshua Ulrich writes: Joshua> On Tue, Mar 12, 2019 at 11:53 PM Ilya Kipnis wrote: >> >> Minimum reproducible example: >> >> require(quantmod) >> require(TTR) >> >> getSymbols('SPY') >> data <- cbind(OHLC(SPY), stoch(HLC(SPY))) >> head(Lo(data)) >> head(HLC(data)) >> >> Lo catches both the low from the OHLC object >> (which it's supposed to), and the slowD column >> (because it has low in the name), which it isn't >> supposed to. >> Joshua> This is a known issue that is hard to fix. See Joshua> https://github.com/joshuaulrich/quantmod/issues/24 Perhaps one could add a warning for this case, either in `Lo` or `has.Lo`? i <- grep("Low", colnames(x), ignore.case = TRUE) if (length(i) > 1L) warning("more than one column match 'low': ", paste(colnames(x)[i], collapse = " ")) >> This can cause issues down the line with other functions that call >> functions that search for the Low column which might cause a dimnames error >> down the line. 
>> Joshua> -- Joshua> Joshua Ulrich | about.me/joshuaulrich Joshua> FOSS Trading | www.fosstrading.com Joshua> R/Finance 2019 | www.rinfinance.com -- Enrico Schumann Lucerne, Switzerland http://enricoschumann.net From b@k|un@| @end|ng |rom y@hoo@com Wed Mar 27 22:06:34 2019 From: b@k|un@| @end|ng |rom y@hoo@com (Baki UNAL) Date: Wed, 27 Mar 2019 21:06:34 +0000 (UTC) Subject: [R-SIG-Finance] Problem while installing keras package References: <1844348511.11079727.1553720794438.ref@mail.yahoo.com> Message-ID: <1844348511.11079727.1553720794438@mail.yahoo.com> Hello, I am trying to install the keras package, but I got the following error: > library(keras) > install_keras(method = "conda") Creating r-tensorflow conda environment for TensorFlow installation... Solving environment: ...working... failed # >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<< Traceback (most recent call last): ... File "C:\Users\user\ANACON~1\lib\site-packages\urllib3\util\url.py", line 199, in parse_url raise LocationParseError(url) urllib3.exceptions.LocationParseError: Failed to parse: proxy.server.com:port During handling of the above exception, another exception occurred: Traceback (most recent call last): ... File "C:\Users\user\ANACON~1\lib\site-packages\requests\adapters.py", line 414, in send raise InvalidURL(e, request=request) requests.exceptions.InvalidURL: Failed to parse: proxy.server.com:port `$ C:\Users\user\ANACON~1\Scripts\conda create --yes --name r-tensorflow python=3.6` environment variables: CIO_TEST= CONDA_ROOT=C:\Users\user\ANACON~1 HOMEPATH=\Users\user HTTPS_PROXY= HTTP_PROXY= REQUESTS_CA_BUNDLE= SSL_CERT_FILE= active environment : None user config file : C:\Users\user\Documents\.condarc conda version : 4.5.12 conda-build version : 3.17.6 python version : 3.7.1.final.0 base environment : C:\Users\user\ANACON~1 (writable) platform : win-64 user-agent : conda/4.5.12 requests/2.21.0 CPython/3.7.1 Windows/7 Windows/6.1.7601 administrator : False netrc file : None offline mode : False An unexpected error has occurred. Conda has prepared the above report. Upload did not complete. Error: Error 1 occurred creating conda environment r-tensorflow How can I fix this problem.
Thanks

[[alternative HTML version deleted]]

From b@k|un@| @end|ng |rom y@hoo@com Wed Mar 27 22:23:08 2019
From: b@k|un@| @end|ng |rom y@hoo@com (Baki UNAL)
Date: Wed, 27 Mar 2019 21:23:08 +0000 (UTC)
Subject: [R-SIG-Finance] Problem while installing keras package 2
References: <156328651.11067155.1553721788440.ref@mail.yahoo.com>
Message-ID: <156328651.11067155.1553721788440@mail.yahoo.com>

Hello

I get an error while installing the keras package. In my previous mail the error text was not readable, so I have attached the error to this mail as text and Word files.

Best regards

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: error.docx
Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Size: 22455 bytes
Desc: not available
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: error.txt
URL:

From d@n|rzu| @end|ng |rom gm@||@com Thu Mar 28 08:56:46 2019
From: d@n|rzu| @end|ng |rom gm@||@com (Danir Zulkarnaev)
Date: Thu, 28 Mar 2019 10:56:46 +0300
Subject: [R-SIG-Finance] Time-scale of Value at risk
In-Reply-To: <18500d24cf0544c58ccb2f0a19249715@sberbank.ru>
References: <18500d24cf0544c58ccb2f0a19249715@sberbank.ru>
Message-ID:

Hi guys!

Could you please help me to understand some things about Value at Risk?

1. How do I time-scale nonnormal parametric Value at Risk (modified VaR, Student-t VaR, skewed Student-t VaR)? Is there any rule of thumb like square-root-of-time for the normal distribution? Are there packages to do this in R?

2. How does ugarchroll in rugarch estimate Value at Risk? Does it compute the theoretical quantile of the particular distribution that has been set in ugarchspec?

3. If I simulate 10000 paths with ugarchpath and find the X% worst scenarios, is that equivalent to Monte Carlo VaR?

Thanks!
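[Editorial note on question 1: square-root-of-time scaling is exact only for i.i.d. normal returns with zero mean; for Student-t or Cornish-Fisher (modified) VaR it is at best a rough approximation, and there is no generally agreed scaling rule. A minimal base-R sketch of the normal case; the volatility, confidence level, and horizon are illustrative assumptions:]

```r
# Square-root-of-time scaling of parametric VaR.
# Exact only under i.i.d. normal returns with zero mean; for
# modified or Student-t VaR it is only an approximation.
mu    <- 0        # assumed zero daily mean (illustrative)
sigma <- 0.01     # daily volatility (illustrative)
alpha <- 0.01     # 99% confidence level
h     <- 10       # target horizon in days

var_1d <- -(mu + sigma * qnorm(alpha))   # one-day normal VaR
var_hd <- sqrt(h) * var_1d               # h-day VaR via sqrt-of-time
```

[For modified VaR, PerformanceAnalytics::VaR(R, method = "modified") computes the one-period Cornish-Fisher estimate; a common alternative to scaling it is to re-estimate at the target horizon or to simulate, as discussed in the reply below.]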
[[alternative HTML version deleted]]

From t|e|tch1 @end|ng |rom jhu@edu Thu Mar 28 17:28:30 2019
From: t|e|tch1 @end|ng |rom jhu@edu (Terry Leitch)
Date: Thu, 28 Mar 2019 16:28:30 +0000
Subject: [R-SIG-Finance] Time-scale of Value at risk
In-Reply-To:
References: <18500d24cf0544c58ccb2f0a19249715@sberbank.ru>
Message-ID: <8499A388-AB47-481B-86D4-6C3AA2C6EF07@jhu.edu>

The rugarch package can do a simulated forward distribution that takes into account the GARCH model spec and the chosen error distribution. Given the complexity of your approach, I would recommend doing a forward sim and getting the VaR from the resulting simulation.

> On Mar 28, 2019, at 3:56 AM, Danir Zulkarnaev wrote:
>
> Hi guys!
>
> Could you please help me to understand some things about Value at Risk?
>
> 1. How do I time-scale nonnormal parametric Value at Risk (modified VaR,
> Student-t VaR, skewed Student-t VaR)? Is there any rule of thumb like
> square-root-of-time for the normal distribution? Are there packages to do
> this in R?
>
> 2. How does ugarchroll in rugarch estimate Value at Risk? Does it compute
> the theoretical quantile of the particular distribution that has been set
> in ugarchspec?
>
> 3. If I simulate 10000 paths with ugarchpath and find the X% worst
> scenarios, is that equivalent to Monte Carlo VaR?
>
> Thanks!
>
> [[alternative HTML version deleted]]
>
> _______________________________________________
> R-SIG-Finance at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-finance
> -- Subscriber-posting only. If you want to post, subscribe first.
> -- Also note that this is not the r-help list where general R questions should go.
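[Editorial note: a rough sketch of the forward simulation Terry describes, using rugarch. The GARCH(1,1)/skewed-t spec, the 10-day horizon, the 99% level, and the synthetic return series are illustrative assumptions, not a prescription; this also illustrates question 3, since the empirical quantile of the simulated paths is a Monte Carlo VaR.]

```r
library(rugarch)

# Synthetic stand-in for the user's daily return series (assumption)
set.seed(42)
returns <- rnorm(1000, mean = 0, sd = 0.01)

# GARCH(1,1) with skewed Student-t errors (illustrative spec)
spec <- ugarchspec(
  variance.model     = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model         = list(armaOrder = c(0, 0)),
  distribution.model = "sstd"
)
fit <- ugarchfit(spec, data = returns)

# Simulate 10000 forward paths over a 10-day horizon
sim <- ugarchsim(fit, n.sim = 10, m.sim = 10000)

# Aggregate each path to a 10-day return; the empirical 1% quantile
# of the simulated distribution gives the 10-day 99% Monte Carlo VaR
paths   <- colSums(fitted(sim))
var_10d <- -quantile(paths, probs = 0.01)
```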
>

From d@n|rzu| @end|ng |rom gm@||@com Thu Mar 28 17:44:09 2019
From: d@n|rzu| @end|ng |rom gm@||@com (Danir Zulkarnaev)
Date: Thu, 28 Mar 2019 19:44:09 +0300
Subject: [R-SIG-Finance] Time-scale of Value at risk
In-Reply-To: <8499A388-AB47-481B-86D4-6C3AA2C6EF07@jhu.edu>
References: <18500d24cf0544c58ccb2f0a19249715@sberbank.ru> <8499A388-AB47-481B-86D4-6C3AA2C6EF07@jhu.edu>
Message-ID:

Terry, thank you for your help

Thu, 28 Mar 2019 at 19:28, Terry Leitch wrote:

> The rugarch package can do a simulated forward distribution that takes
> into account the GARCH model spec and the chosen error distribution. Given
> the complexity of your approach, I would recommend doing a forward sim
> and getting the VaR from the resulting simulation.
>
> > On Mar 28, 2019, at 3:56 AM, Danir Zulkarnaev wrote:
> >
> > Hi guys!
> >
> > Could you please help me to understand some things about Value at Risk?
> >
> > 1. How do I time-scale nonnormal parametric Value at Risk (modified VaR,
> > Student-t VaR, skewed Student-t VaR)? Is there any rule of thumb like
> > square-root-of-time for the normal distribution? Are there packages to do
> > this in R?
> >
> > 2. How does ugarchroll in rugarch estimate Value at Risk? Does it compute
> > the theoretical quantile of the particular distribution that has been set
> > in ugarchspec?
> >
> > 3. If I simulate 10000 paths with ugarchpath and find the X% worst
> > scenarios, is that equivalent to Monte Carlo VaR?
> >
> > Thanks!
> >
> > [[alternative HTML version deleted]]
> >
> > _______________________________________________
> > R-SIG-Finance at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-sig-finance
> > -- Subscriber-posting only. If you want to post, subscribe first.
> > -- Also note that this is not the r-help list where general R questions
> > should go.

[[alternative HTML version deleted]]