[R] Maximum Likelihood Est. regarding the degree of freedom of a multivariate skew-t copula
Ravi Varadhan
RVaradhan at jhmi.edu
Wed Sep 23 15:36:21 CEST 2009
Why are you using "SANN" for optimizing over a smooth function of a scalar
parameter? Simulated annealing is generally quite slow, and is typically
used for "nasty" functions with multiple bumps and valleys.
Try `optimize' instead.
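Something along these lines should do (a sketch, untested; it assumes your c_density() from the message below is defined, and the search interval c(2, 100) for the degrees of freedom is only a guess):

# one-dimensional minimization of the negative log-likelihood over df
fit <- optimize(c_density, interval = c(2, 100))
fit$minimum    # ML estimate of the degrees of freedom
fit$objective  # negative log-likelihood at the estimate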
Ravi.
-----------------------------------------------------------------------------------
Ravi Varadhan, Ph.D.
Assistant Professor, The Center on Aging and Health
Division of Geriatric Medicine and Gerontology
Johns Hopkins University
Ph: (410) 502-2619
Fax: (410) 614-9625
Email: rvaradhan at jhmi.edu
Webpage: http://www.jhsph.edu/agingandhealth/People/Faculty_personal_pages/Varadhan.html
-----------------------------------------------------------------------------------
-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On Behalf Of John Reichenbächer
Sent: Wednesday, September 23, 2009 7:17 AM
To: r-help at r-project.org
Subject: [R] Maximum Likelihood Est. regarding the degree of freedom of a multivariate skew-t copula
Hello,
I have a big problem in calculating the maximum likelihood estimate of the degrees of freedom of a multivariate skew-t copula.
First of all I would like to describe what this is all about, so that you can understand my problem:
I have 2 time series with more than 3000 entries each. I would like to fit a multivariate skew-t copula to these time series.
Note: the program code works fine, but it is too slow to deliver adequate results in time.
In the code below I marked:
- yellow: the needed calculations and definitions of the data
- pink: the estimator of the correlation
- cyan: the log-likelihood function of the skew-t copula (NOTE: for the first consideration the skew parameter is 0, but I want to change it later on)
- blue: the calculation of the needed quantiles by uniroot
- dark blue: the value of the log-likelihood function
- purple: the starting parameters and the optim() call
PROBLEM: Executing the likelihood function by itself takes half a minute; the optim() call takes even longer, and I need many iterations (maybe 1000 or more). Is there a way to make it faster?
Thanks, John Reichenbächer
PS: The attachments are the time series that are used.
data<-read.table("NIKKEI.txt", header=T)
attach(data)
data<-read.table("DAX.txt", header=T)
attach(data)
my_dax<-mean(dax)
sd_dax<-sqrt(var(dax))
my_nik<-mean(nik)
sd_nik<-sqrt(var(nik))
P_dax<-pnorm(dax,mean=my_dax, sd=sd_dax) P_nik<-pnorm(nik,mean=my_nik,
sd=sd_nik)
# copula parameters: location xi, scale matrix Omega, skewness alpha
# (the skew parameter is 0 for the first consideration)
xi <- c(0, 0)
Omega <- diag(2)
alpha <- c(0, 0)

# search intervals for uniroot() and storage for quantiles and densities
ber1 <- c(-25, 25)
ber2 <- c(-25, 25)
u1 <- vector(length = length(P_dax))
u2 <- vector(length = length(P_nik))
z <- vector(length = length(P_dax))

# pink: Kendall's tau via a double loop over all pairs
s <- 0
for (i in 2:length(P_dax)) {
  for (j in 1:(i - 1)) {
    s <- s + sign((P_dax[j] - P_dax[i]) * (P_nik[j] - P_nik[i]))
  }
}
s <- s / choose(length(P_dax), 2)
# map tau to the off-diagonal scale entry: rho = sin(pi * tau / 2)
ndiag <- sin(pi * s / 2)
Omega[2, 1] <- Omega[1, 2] <- ndiag
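# A faster alternative (untested sketch): base R computes the same
# Kendall's tau in compiled code, which should beat the explicit double
# loop for series of length 3000:
# s <- cor(P_dax, P_nik, method = "kendall")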
# cyan: negative log-likelihood of the skew-t copula as a function of df
c_density <- function(v) {
  df <- v[1]
  for (i in 1:length(P_dax)) {
    # blue: invert the marginal skew-t CDFs numerically via uniroot()
    f <- function(z) {
      pmst(z, xi[1], Omega[1, 1], alpha[1], df) - P_dax[i]
    }
    u1[i] <- uniroot(f, ber1, tol = 0.000001)$root
    f <- function(z) {
      pmst(z, xi[2], Omega[2, 2], alpha[2], df) - P_nik[i]
    }
    u2[i] <- uniroot(f, ber2, tol = 0.000001)$root
    # copula density: joint density over the product of the marginal
    # densities (the marginal scales are Omega[1,1] and Omega[2,2],
    # not the full matrix Omega)
    zähler <- dmst(c(u1[i], u2[i]), xi, Omega, alpha, df)[1]   # numerator
    nenner <- dmst(u1[i], xi[1], Omega[1, 1], alpha[1], df) *  # denominator
      dmst(u2[i], xi[2], Omega[2, 2], alpha[2], df)
    z[i] <- zähler / nenner
  }
  # dark blue: value of the negative log-likelihood
  lnc <- log(z)
  erg <- (-1) * sum(lnc)
  return(erg)
}
# purple: starting value for df and the optimization itself
v <- c(10)
optim(v, c_density, method = "SANN", control = list(maxit = 20))
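A likely bottleneck is the 2 * n uniroot() calls per likelihood evaluation. Here is an untested sketch of a vectorized variant; it assumes the sn package's univariate skew-t quantile and density functions qst() and dst() accept vector arguments, and that the marginal scale is sqrt(Omega[i, i]) (both worth checking against your sn version):

c_density_fast <- function(v) {
  df <- v[1]
  # one vectorized quantile call per margin instead of n uniroot() calls
  u1 <- qst(P_dax, xi[1], sqrt(Omega[1, 1]), alpha[1], df)
  u2 <- qst(P_nik, xi[2], sqrt(Omega[2, 2]), alpha[2], df)
  # copula density: joint skew-t density over the product of the marginals
  num <- dmst(cbind(u1, u2), xi, Omega, alpha, df)
  den <- dst(u1, xi[1], sqrt(Omega[1, 1]), alpha[1], df) *
    dst(u2, xi[2], sqrt(Omega[2, 2]), alpha[2], df)
  -sum(log(num / den))
}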