# [R-SIG-Finance] Berkowitz Truncated Likelihood Ratio tail Test

alexios ghalanos alexios at 4dscape.com
Sat Jun 18 14:42:07 CEST 2011

```
Stefan,

I suggest you have a look at Kevin Dowd's paper "Backtesting Risk
Models within a Standard Normality Framework", available on his webpage:
http://web.me.com/kevindowd1958/web.me.com_kevindowd1958_Site/Financial_risk_management.html
Section 3, on the truncated distribution, is quite informative on the
'proper' application of the Berkowitz test to tail data.
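
Incidentally, I see two separate problems in your likelihood code below:
the second term divides by 2*sd(zt_norm) where formula (9) calls for
2*var(zt_norm), i.e. 2*sigma^2; and the mean and sd of the full, uncensored
sample are not the ML estimates of the censored likelihood, so the
'unrestricted' value need not exceed the restricted one and the LR can come
out negative. A rough sketch (untested, function and argument names my own)
which maximizes the censored likelihood numerically instead:

berkowitz.tail <- function(z, alpha = 0.05) {
    VaR   <- qnorm(alpha)
    zstar <- pmin(z, VaR)                  # censor at the VaR quantile
    # negative censored-normal log-likelihood, Berkowitz (2001) eq. (9)
    negloglik <- function(par) {
        mu <- par[1]; sigma <- par[2]
        tail  <- zstar[zstar < VaR]        # realized tail observations
        ntail <- sum(zstar >= VaR)         # censored (non-tail) count
        -(sum(dnorm(tail, mu, sigma, log = TRUE)) +
          ntail * pnorm(VaR, mu, sigma, lower.tail = FALSE, log.p = TRUE))
    }
    # unrestricted: maximize over (mu, sigma); restricted: mu = 0, sigma = 1
    fit <- optim(c(0, 1), negloglik, method = "L-BFGS-B",
                 lower = c(-Inf, 1e-6))
    LR  <- 2 * (negloglik(c(0, 1)) - fit$value)
    list(LR = LR, p.value = pchisq(LR, df = 2, lower.tail = FALSE))
}

Since the optimizer starts at the restricted point (0, 1), fit$value cannot
exceed negloglik(c(0, 1)), so the statistic is nonnegative by construction.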

Best,

Alexios

On 6/17/2011 7:20 AM, stefan strunz wrote:
> Hi guys,
>
> I'm getting quite desperate here and I hope you can help me out! I am trying to implement the tail likelihood ratio test that Berkowitz describes in his 2001 paper "Testing Density Forecasts, with Applications to Risk Management", which can be found here:
>
> http://www.ims.nus.edu.sg/Programs/econometrics/files/kw_ref_2.pdf
>
> I know that Alexios has implemented Berkowitz's "normal" LR test (see Equation (4)) in rgarch (many thanks for that, Alexios!), but I wanted to implement the one on page 469. It basically tests whether the density forecasts were good in the tail, not over the whole distribution. The formula seems pretty simple and I think I understand the concept; however, when I try to implement it I get negative likelihood ratios (see the histogram at the end of the code)! It would be really great if someone could take a fresh look at it and tell me where my error is. In particular, I am not sure that I've implemented formula (9) on page 469 correctly. The way I see it, the first sum is over all z* which are below the VaR, and the second term is just a constant times the logarithm of the cdf.
>
> Here is my commented code:
>
> #I calculate 1000 likelihood ratios, hence the sapply. But you can just run the inner part for one example.
>
>   results <-  sapply(1:1000,function(x){
>
>     #generating some data which has to be forecasted
>     data_to_forecast<- rnorm(1000,0,1)
>
>      #creating the probability integral transformations.
>      z_1 <- pnorm(data_to_forecast,0,1)
>
>      #check: it has to be uniformly distributed, which it is.
>      #hist(z_1)
>
>      #creating the inverse (see page 467)
>      zt_norm <- qnorm(z_1)
>
>      #cutting off the VAR violations
>
>      VAR <- qnorm(0.05)
>
>      zt_star <- NULL
>
>     #creating the new variable z*, see page 469
>     for(i in 1:length(zt_norm))
>     {
>         if(zt_norm[i] >= VAR){zt_star[i] = VAR}  #if it is not below VAR, censor at VAR
>         else{
>             zt_star[i] = zt_norm[i]      #if it is below VAR, use the realized value
>         }}
>    ##############
>    #implementing formula (9)
>    ############
>
>     #all the Z* which are smaller than VAR
>     z_tail <- zt_star[zt_star < VAR]
>
>      #number of observations whose z* is greater than or equal to VAR
>     length_z_nontail <- length(zt_norm) - length(z_tail)
>
>     #unrestricted likelihood
>     likeun <- (-length(z_tail)/2*(log(2*pi*var(zt_norm))) - sum((z_tail - mean(zt_norm))^2/(2*sd(zt_norm))) +
>                 length_z_nontail*(log(1-pnorm((VAR-mean(zt_norm))/sd(zt_norm)))))
>
>     #restricted likelihood
>     likere <- (-length(z_tail)/2*(log(2*pi*1)) - sum((z_tail - 0)^2/(2*1)) +
>                 length_z_nontail*(log(1-pnorm((VAR-0)/1))))
>
>     #the likelihood ratio
>           2*(likeun-likere)
>   })
>
>   #Problem: sometimes the likelihood of the restricted model is higher than that of the unrestricted, which cannot be, according to theory (see Greene 2003, for example). So the ratio is not chi-squared distributed then.
>   hist(results)
>
>
> Regards,
>
> Stefan
>
> _______________________________________________
> R-SIG-Finance at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-finance
> -- Subscriber-posting only. If you want to post, subscribe first.
> -- Also note that this is not the r-help list where general R questions should go.

```