[R-SIG-Finance] XTS with unique time stamps?

Jeffrey Ryan jeffrey.ryan at lemnica.com
Mon Jan 31 16:37:02 CET 2011


Brian, Worik

w.r.t. the new functionality in xts.

It is so bleeding edge that Brian gave you the wrong name ;-) Think
"make [the] index unique", i.e. make.index.unique().  It will probably
also be extended to cover the former case, removal of subsequent
non-unique observations/times, as well.
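
In other words, with the latest xts from R-Forge (borrowing the
variable name myxts from Brian's reply below), the call would look
like:

     myxts <- make.index.unique(myxts)   # note the name: not make.unique.index()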

HTH,
Jeff


?make.index.unique

make.index.unique             package:xts              R Documentation

Force Time Values To Be Unique

Description:

     A generic function to force sorted time vectors to be unique.
     Useful for high-frequency time-series where original time-stamps
     may have identical values. For the case of xts objects, the
     default ‘eps’ is set to ten microseconds (1e-05 seconds). In
     practice this advances each subsequent identical time by ‘eps’
     over the previous (possibly also advanced) value.

Usage:

     make.index.unique(x, eps = 1e-05, ...)

     make.time.unique(x, eps = 1e-05, ...)

Arguments:

       x: An xts object, or POSIXct vector.

     eps: value, in seconds, to add to force uniqueness.

     ...: unused

Details:

     The returned time-series object will have new time-stamps so that
     ‘isOrdered( .index(x) )’ evaluates to TRUE.

Value:

     A modified version of x.

Note:

     Incoming values must be pre-sorted, and no check is done to make
     sure that this is the case.  If the index values are of
     storage.mode ‘integer’, they will be coerced to ‘double’.

Author(s):

     Jeffrey A. Ryan

See Also:

     ‘align.time’

Examples:

     ds <- options(digits.secs=6) # so we can see the change

     x <- xts(1:10, as.POSIXct("2011-01-21") + c(1,1,1,2:8)/1e3)
     x
     make.index.unique(x)

     options(ds)
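
And, for anyone following along, a small illustrative sketch (the data
and variable names here are my own toy example, not from the help
page) contrasting the two routes Brian describes below: dropping rows
with duplicated indices versus keeping every row and nudging repeated
stamps forward by eps:

     library(xts)

     # toy tick series with three observations at the same second
     tm <- as.POSIXct("2011-01-21") + c(1, 1, 1, 2, 3)
     x  <- xts(1:5, order.by = tm)

     # route 1: keep only the first observation at each time stamp
     x_dedup <- x[!duplicated(index(x))]

     # route 2: keep every observation; repeated stamps are advanced
     # by eps seconds (1e-05 by default), preserving the original order
     x_unique <- make.index.unique(x, eps = 1e-05)

     options(digits.secs = 6)  # so the adjusted stamps are visible when printed
     x_unique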



On Mon, Jan 31, 2011 at 6:05 AM, Brian G. Peterson <brian at braverock.com> wrote:
> On Monday, January 31, 2011 12:55:03 am Worik wrote:
>> I am having trouble with non-unique time stamps in an xts.
>>
>> My underlying data has some repeated rows (in a csv file).
>>
>> How can I easily get rid of the duplicates?
>>
>> I feel I must be missing something simple.  If not I can concoct an
>> example to illustrate my problem.
>
> Worik,
>
> It depends on what you need.
>
> If you can remove the rows with duplicated indices, then a construction such
> as:
>
> myxts<-myxts[!duplicated(index(myxts))]
>
> should work.
>
> If you need all of the observations, and need to artificially make them unique
> (as is a common problem with tick data), then you will see discussion in the
> list archives here and other places regarding adding artificial indices to high
> frequency data while preserving order. You will need the latest xts from R-
> Forge and use a construction like this:
>
> myxts<-make.unique.index(myxts)
>
> which will (by default) add .00001 sec to each non-unique index after the
> first, preserving order, and providing every observation with a unique index.
> Note that this presumes that the original order of the observations was
> correct in the first place; no provision has been made if you have different
> circumstances.
>
> Thanks to Jeff Ryan for (very) recently adding this second method.
>
> Regards,
>
>  - Brian
>
> --
> Brian G. Peterson
> http://braverock.com/brian/
> Ph: 773-459-4973
> IM: bgpbraverock
>
> _______________________________________________
> R-SIG-Finance at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-finance
> -- Subscriber-posting only. If you want to post, subscribe first.
> -- Also note that this is not the r-help list where general R questions should go.
>



-- 
Jeffrey Ryan
jeffrey.ryan at lemnica.com

www.lemnica.com


