In that case, it raises the question of why you want to regularly space your data at all.
All the information is there, so why reduce it by regular spacing?

On May 21, 2009, Michael <comtech.usa@gmail.com> wrote:

In fact, I have the whole jump processes of the best bid and the best ask at a continuous level (in the sense of time-stamped arrival data), and also the jump process of the last trade price, again at a continuous level. Any more thoughts?

On Thu, May 21, 2009 at 9:51 AM, Hae Kyung Im <hakyim@gmail.com> wrote:
> Regarding the approach that turns irregular data into a regular series:
> it's a complex question, and how you approach it will depend on the
> specific problem.
>
> With your method, you would assume that the price is equal to the last
> traded price, or something like that. If there is no trade for some
> time, would it make sense to say that the price is the last traded
> price? If you wanted to actually buy or sell at that price, it's not
> obvious that you would be able to do so.
>
> Also, if you only look at the time series of instantaneous prices, you
> lose a lot of information about what happened in between the time
> points. It makes more sense to aggregate and keep, for example, open,
> high, low and close (see the aggregation sketch after the thread), or
> some statistics on the distribution of the prices between the
> endpoints.
>
> If what you need to calculate is correlations, then I would look at the
> papers that Liviu suggested; synchronicity seems to be critical (see
> the correlation sketch after the thread). I have heard there is an
> extension of TSRV to correlations.
>
> If you only need to look at univariate time series, you may be able to
> get away with your method more easily. It may not be statistically
> efficient, but it may give you a good enough answer in some cases.
>
> HTH
> Haky
>
> On Thu, May 21, 2009 at 10:38 AM, Michael <comtech.usa@gmail.com> wrote:
>> My data are price-change arrivals, irregularly spaced. But when there
>> is no price change the price stays constant, so for any time instant
>> you give me I can tell you the price at that very instant. Irregularly
>> spaced data can therefore easily be sampled into regularly spaced data
>> (the first sketch after the thread shows one way to do this in xts).
>> What do you think of this approach?
>>
>> On Thu, May 21, 2009 at 8:21 AM, Michael <comtech.usa@gmail.com> wrote:
>>> Thanks Jeff.
>>>
>>> By high frequency I really mean the tick data. For example, during
>>> peak times the price events can arrive at rates of hundreds to
>>> thousands per second, irregularly spaced.
>>>
>>> I've heard that forcing irregularly spaced data into regularly spaced
>>> data (e.g. through interpolation) loses information.
>>> Why is that so?
>>>
>>> Thanks!
>>>
>>> On Thu, May 21, 2009 at 8:15 AM, Jeff Ryan <jeff.a.ryan@gmail.com> wrote:
>>>> Not my domain, but you will more than likely have to aggregate to
>>>> some sort of regular/homogeneous series for most traditional tools
>>>> to work.
>>>>
>>>> xts has to.period to aggregate up to a lower frequency from
>>>> tick-level data. Coupled with something like na.locf, you can make
>>>> yourself some high-frequency 'regular' data from 'irregular' data.
>>>>
>>>> Regular and irregular of course depend on what you are looking at
>>>> (weekends missing in daily data can still be 'regular').
>>>>
>>>> I'd be interested in hearing thoughts from those who actually tread
>>>> in the high-frequency domain...
>>>>
>>>> A wealth of information can be found here:
>>>>
>>>> http://www.olsen.ch/publications/working-papers/
>>>>
>>>> Jeff
>>>>
>>>> On Thu, May 21, 2009 at 10:04 AM, Michael <comtech.usa@gmail.com> wrote:
>>>>> Hi all,
>>>>>
>>>>> I am wondering whether there are special toolboxes for handling
>>>>> high-frequency data in R.
>>>>>
>>>>> I have some high-frequency data and was wondering what meaningful
>>>>> experiments I can run on it.
>>>>>
>>>>> I am not sure whether the usual (low-frequency) financial time
>>>>> series textbook analysis tools will work for high-frequency data.
>>>>>
>>>>> Say I compute a correlation between two stocks using the
>>>>> high-frequency data, or fit an ARMA model to one stock; will the
>>>>> results be meaningful?
>>>>>
>>>>> Could anybody point me to a classroom-style treatment or a lab
>>>>> tutorial showing what meaningful experiments/tests I can run on
>>>>> high-frequency data?
>>>>>
>>>>> Thanks a lot!
>>>>
>>>> --
>>>> Jeffrey Ryan
>>>> jeffrey.ryan@insightalgo.com
>>>>
>>>> ia: insight algorithmics
>>>> www.insightalgo.com
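
To make the last-price sampling Michael describes concrete, here is a minimal sketch along the lines of Jeff's na.locf suggestion. The tick series is simulated purely for illustration (the object names, the made-up prices and the one-second grid are my own choices, not anything from the thread): the irregular ticks are merged onto a regular grid and the last traded price is carried forward into the empty slots.

library(xts)

## Simulated irregular ticks, purely for illustration:
## exponential inter-arrival times and small random price moves.
set.seed(1)
tick_times  <- as.POSIXct("2009-05-21 09:30:00", tz = "GMT") +
  cumsum(rexp(1000, rate = 5))
tick_prices <- 100 + cumsum(sample(c(-0.01, 0, 0.01), 1000, replace = TRUE))
ticks <- xts(tick_prices, order.by = tick_times)

## Regular one-second grid spanning the ticks; merge the ticks onto it,
## carry the last observed price forward (na.locf), keep only grid times.
grid    <- seq(start(ticks), end(ticks), by = "1 sec")
regular <- na.locf(merge(ticks, xts(, order.by = grid)))[grid]
head(regular)

Whether this regularised series is the right object to analyse is exactly the point debated above: it answers "what was the last traded price at time t", not "what price could I have traded at".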
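A similarly minimal sketch of the aggregation Haky and Jeff mention: to.period from xts turns the same simulated tick series into one-minute open/high/low/close bars, so the behaviour between sampling points is summarised rather than thrown away. The column-name prefix is just an illustrative choice.

library(xts)

## One-minute OHLC bars from the simulated tick series above.
## to.period keeps open, high, low and close for each interval
## instead of a single instantaneous price.
bars <- to.period(ticks, period = "minutes", k = 1, name = "PRICE")
head(bars)   # PRICE.Open, PRICE.High, PRICE.Low, PRICE.Close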
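Finally, on Haky's point about correlations and synchronicity: the sketch below is only the naive approach, forcing two simulated tick series onto a common one-minute close grid with align.time and na.locf before calling cor on log returns. The ticks_a and ticks_b objects are hypothetical stand-ins for two stocks' tick data, built like 'ticks' above; the papers Liviu suggested address the biases this kind of simple synchronisation can introduce at high frequencies, which is why they are worth reading before trusting such an estimate.

library(xts)

## ticks_a, ticks_b: hypothetical tick series for two stocks.
## Aggregate each to one-minute closes, round the bar timestamps up to
## the minute boundary so the two series share an index, fill gaps with
## the last close, and correlate the log returns.
cl_a <- align.time(to.period(ticks_a, period = "minutes", k = 1)[, 4], n = 60)
cl_b <- align.time(to.period(ticks_b, period = "minutes", k = 1)[, 4], n = 60)
both <- na.locf(merge(cl_a, cl_b))
r_a  <- diff(log(both[, 1]))
r_b  <- diff(log(both[, 2]))
cor(coredata(r_a), coredata(r_b), use = "complete.obs")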