[R] 4. Rexcel (Luis Felipe Parra)-how to run a code from excel

suiming47 suiming47 at gmail.com
Wed Nov 24 03:05:42 CET 2010


Hi Louis,

It's simple to run an R script from the Excel spreadsheet.
Just type your code, source("C:\\Quantil
Aplicativos\\Genercauca\\BackwardSelectionNC.r"), into a cell of a
worksheet. Then right-click the cell and select "run code" in the pop-up
menu.
Hope this helps.
Best,
Bernard 
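
Before wiring the call into RExcel, it can help to confirm from a plain R
session that the path resolves; a minimal sketch (using the path from the
original question):

```r
# Minimal sketch: check the script path before sourcing it from RExcel
path <- "C:\\Quantil Aplicativos\\Genercauca\\BackwardSelectionNC.r"
if (file.exists(path)) {
  source(path)  # runs the script exactly as the worksheet cell would
} else {
  warning("script not found: ", path)
}
```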

-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
On Behalf Of: r-help-request at r-project.org
Sent: 23 November 2010 19:00
To: r-help at r-project.org
Subject: R-help Digest, Vol 93, Issue 23

Send R-help mailing list submissions to
	r-help at r-project.org

To subscribe or unsubscribe via the World Wide Web, visit
	https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
	r-help-request at r-project.org

You can reach the person managing the list at
	r-help-owner at r-project.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."


Today's Topics:

   1. Re: "unexpected numeric constant" while reading tab delimited
      csv file (Mike Marchywka)
   2. how to sample lowess/loess into matrix ? (madr)
   3. Re: "unexpected numeric constant" while reading tab delimited
      csv file (Duncan Murdoch)
   4. Rexcel (Luis Felipe Parra)
   5. how to change number of characters per line for print() to
      sink()? (Nevil Amos)
   6. Invitation à se connecter sur LinkedIn
      (Aline Uwimana via LinkedIn)
   7. Re: How to combine Date and time in one column
      (Gabor Grothendieck)
   8. Tinn-R 2.3.7.0 released (Jose Claudio Faria)
   9. Rexcel (Luis Felipe Parra)
  10. Re: how to change number of characters per line for print()
      to	sink()? (jim holtman)
  11. Re: boxplot: reverse y-axis order (S Ellison)
  12. plot inside function does not work (Alaios)
  13. Re: how to sample lowess/loess into matrix ? (madr)
  14. Re: plot inside function does not work (Claudia Beleites)
  15. Re: Rexcel (Stephen Liu)
  16. Re: plot inside function does not work (Alaios)
  17. Invitation à se connecter sur LinkedIn (Yann Lancien via LinkedIn)
  18. Prob with merge (Joel)
  19. Re: Making a line in a legend shorter (Peter Ehlers)
  20. Re: Prob with merge (ONKELINX, Thierry)
  21. Re: calculating martingale residual on new data	using
      "predict.coxph" (Terry Therneau)
  22. Re: Prob with merge (Joel)
  23. What if geoRglm results showed that a non-spacial model fits?
      (Jimmy Martina)
  24. Help with plotting kohonen maps (Stella Pachidi)
  25. Re: ?summaryRprof running at 100% cpu for one hour ...
      (Kjetil Halvorsen)
  26. Re: ?summaryRprof running at 100% cpu for one hour ...
      (Mike Marchywka)
  27. hierarchical mixture modelling (Mike Lawrence)
  28. Check for is.object (Santosh Srinivas)
  29.  Fast Two-Dimensional Optimization (Wonsang You)
  30. Re: plotting a timeline (Łukasz Ręcławowicz)
  31. Re: memory profiling (Patrick Leyshock)
  32. Re: ?summaryRprof running at 100% cpu for one hour ...
      (Kjetil Halvorsen)
  33. Re: An empty grey diagram (Martin Maechler)
  34. Re: ?summaryRprof running at 100% cpu for one hour ...
      (Mike Marchywka)
  35. Re: Check for is.object (Jonathan P Daily)
  36. Re: solve nonlinear equation using BBsolve (Berend Hasselman)
  37. Re: Alternatives to image(...) and filled.contour(...) for
      2-D filled Plots (Ista Zahn)
  38. Re: statistical test for comparison of two classifications
      (nominal) (Matt Shotwell)
  39. Re: Check for is.object (Phil Spector)
  40. Re: "negative alpha" or custom gradient colors of data dots
      in scatterplot ? (Ista Zahn)
  41. Problem setting the number of digits in xtable (wphantomfr)
  42. RCurl : All connection are used ? (omerle)
  43. sm.ancova graphic (Lucia Cañas)
  44. Plotting a cloud/fog of variable density in rgl (JiHO)
  45. Re: RCurl : All connection are used ? (Mike Marchywka)
  46. Re: Ordeing Zoo object (Manta)
  47. Re: Plotting a cloud/fog of variable density in rgl
      (Duncan Murdoch)
  48. Re: How to produce a graph of glms in R? (Greg Snow)
  49. Re: Plotting a cloud/fog of variable density in rgl
      (Mike Marchywka)
  50. Re: data acquisition with R? (B.-Markus Schuller)
  51. Is it possible to make a matrix to start at row 0? (bogdanno)
  52. Re: Rexcel (csrabak)
  53. Re: question about constraint minimization (dhacademic at gmail.com)
  54. Re: Lost in POSIX (Dimitri Shvorob)
  55. Re: Find in R and R books (Georg Otto)
  56. Re: using rpart with a tree misclassification condition (meytar)
  57. Re: memory profiling (Patrick Leyshock)
  58. Some questione about plot (romzero)
  59. how do remove those predictor which have p value greater than
      0.05 in GLM? (shubha)
  60. Re: how to apply sample function to each row of a data frame?
      (wangwallace)
  61. Re: arima (tomreilly)
  62. Wait for user input with readline() (Nathan Miller)
  63. Re: txtProgressBar strange behavior in R 2.12.0
      (Viechtbauer Wolfgang (STAT))
  64. How to call web service in R (??)
  65. save a regression model that can be used later
      (Ni, Melody Zhifang)
  66. Re: question about constraint minimization (Ravi Varadhan)
  67. Re: Is it possible to make a matrix to start at row 0?
      (Joshua Wiley)
  68. Re: Is it possible to make a matrix to start at row 0?
      (Bert Gunter)
  69. Re: RGoogleDocs stopped working (Harlan Harris)
  70. Re: sm.ancova graphic (Peter Ehlers)
  71. Re: Wait for user input with readline() (Joshua Wiley)
  72. Re: Ordeing Zoo object (Gabor Grothendieck)
  73. Re: Is it possible to make a matrix to start at row 0?
      (Ben Bolker)
  74. Re: save a regression model that can be used later
      (David Winsemius)
  75. how to round only one column of a matrix ? (madr)
  76. Re: Is it possible to make a matrix to start at row 0?
      (baptiste auguie)
  77. Re: how to round only one column of a matrix ? (Phil Spector)
  78. Re: How to call web service in R (Steve Lianoglou)
  79. aggregate a Date column does not work? (Tan, Richard)
  80. R2WinBUGS help (bmiddle)
  81. Re: FW:  help with time Series regression please (tomreilly)
  82. Re: Is it possible to make a matrix to start at row 0?
      (Mike Marchywka)
  83. Re: aggregate a Date column does not work? (David Winsemius)
  84. I need a very specific unique like function and I don't know
      even how to properly call this (madr)
  85. Re: aggregate a Date column does not work? (Tan, Richard)
  86. R package "kernlab" can not be properly loaded (Xiaoqi Cui)
  87. Re: Rexcel (Erich Neuwirth)
  88. Re: aggregate a Date column does not work? (David Winsemius)
  89. Re: aggregate a Date column does not work? (Tan, Richard)
  90. Re: aggregate a Date column does not work? (Gabor Grothendieck)
  91. Re: "negative alpha" or custom gradient colors of data dots
      in... (madr)
  92. Re: how do remove those predictor which have p value greater
      than 0.05 in GLM? (Frank Harrell)
  93. Help: Standard errors arima (lucia)
  94. Re: Help: Standard errors arima (David Winsemius)
  95. Re: R2WinBUGS help (Uwe Ligges)
  96. Re: R package "kernlab" can not be properly loaded
      (David Winsemius)
  97. Re: R package "kernlab" can not be properly loaded (Uwe Ligges)
  98. Re: I need a very specific unique like function and I don't
      know even how to properly call this (Ista Zahn)
  99. Re: R2WinBUGS help (bmiddle)
  100. Re: how do remove those predictor which have p value greater
      than 0.05 in GLM? (shubha)
  101. Re: Help: Standard errors arima (Dennis Murphy)
  102. Probit Analysis: Confidence Interval for the LD50 using
      Fieller's and Heterogeneity (UNCLASSIFIED)
      (Kenney, Colleen T CTR USA AMC)
  103. how to calculate derivative (Yogesh Tiwari)
  104. Re: cpgram: access data, confidence bands (David Scott)
  105. Re: Probit Analysis: Confidence Interval for the LD50 using
      Fieller's and Heterogeneity (UNCLASSIFIED) (David Winsemius)
  106. Re: Probit Analysis: Confidence Interval for the LD50 using
      Fieller's and Heterogeneity (UNCLASSIFIED) (Dennis Murphy)
  107. empity value in colnames (M.Ribeiro)
  108. Re: ?summaryRprof running at 100% cpu for one hour ...
      (Kjetil Halvorsen)
  109. Re: empity value in colnames (David Winsemius)
  110. Re: ?summaryRprof running at 100% cpu for one hour ...
      (Mike Marchywka)
  111. Sporadic errors when training models using CARET (Kendric Wang)
  112. Re: I need a very specific unique like function and I don't
      know even how to properly call this (Phil Spector)
  113. Re: how do remove those predictor which have p value greater
      than 0.05 in GLM? (David Winsemius)
  114. Re: plotting a timeline (Peter Ehlers)
  115. Gap between graph and axis (Sebastian Rudnick)
  116. Re: Gap between graph and axis (Bill.Venables at csiro.au)
  117.  R on Androids? (Erin Hodgess)
  118. Re: how to loop through variables in R? (watashi at post.com)
  119. How to start default browser on R (Stephen Liu)
  120. question on "uniCox" (Shi, Tao)
  121. Re: Find in R and R books (Spencer Graves)
  122. Re: calculating martingale residual on new data using
      "predict.coxph" (Shi, Tao)
  123. Re: calculating martingale residual on new data using (Shi, Tao)
  124. Re: calculating martingale residual on new data using
      "predict.coxph" (Shi, Tao)
  125. Re: Find in R and R books (Mike Marchywka)
  126. Re: Fast Two-Dimensional Optimization (Ben Bolker)
  127. Re: how to loop through variables in R? (watashi at post.com)
  128. Re: How to start default browser on R (David Scott)
  129. Re: Find in R and R books (Spencer Graves)
  130. using the "apply" method for functions with multiple inputs
      (joeponzio)
  131. permalgorithm (Kere Klein)
  132. Re: using the "apply" method for functions with multiple
      inputs (David Winsemius)
  133. Re: How to start default browser on R (Stephen Liu)
  134. Re: How to start default browser on R (Ista Zahn)
  135. Re: how to calculate derivative (Ravi Varadhan)
  136. Re: How to start default browser on R (Stephen Liu)
  137. Re: How to start default browser on R (David Scott)
  138. Re: how to calculate derivative (Spencer Graves)
  139. Re: Is it possible to make a matrix to start at row 0?
      (David Stoffer)
  140. overlay histograms on map at map coordinates (Steve Bellan)
  141. Re: Kalman filter (David Stoffer)
  142. (no subject) (Mari Pesek)
  143. factorial ANOVA for block/split-plot design (Mari Pesek)
  144. Re: factorial ANOVA for block/split-plot design
      (RICHARD M. HEIBERGER)
  145. Re: question about constraint minimization (dhacademic at gmail.com)
  146. Explained GLZ model variation (Shai)
  147. Lattice and Quartz (Sarah Berry)
  148. compare GLM coefficients (Kayce anderson)
  149. Re: compare GLM coefficients (Michael Bedward)
  150. Calculating correlation (gireesh bogu)
  151. Re: Some questione about plot (Jim Lemon)
  152. Re: How to start default browser on R (Stephen Liu)
  153. Re: Calculating correlation (Tal Galili)
  154. About available datasets on PC (Stephen Liu)
  155. More detail in chart axis? (Noah Silverman)
  156. Error: cannot allocate vector of size x Gb (64-bit ... yet
      again) (derek eder)
  157. Re: About available datasets on PC (Jeff Newmiller)
  158. Re: More detail in chart axis? (Jim Lemon)
  159. Re: how to get rid of unused space on all 4 borders in
      plot()	render (Petr PIKAL)
  160. Re: how to loop through variables in R? (Ivan Calandra)
  161. Odp:  save a regression model that can be used later (Petr PIKAL)
  162. Re: Lost in POSIX (Jeff Newmiller)
  163. Re: About available datasets on PC (Stephen Liu)
  164. Re: Calculating correlation (Tal Galili)


----------------------------------------------------------------------

Message: 1
Date: Mon, 22 Nov 2010 06:29:34 -0500
From: Mike Marchywka <marchywka at hotmail.com>
To: <jdnewmil at dcn.davis.ca.us>, <madrazel at interia.pl>
Cc: r-help at r-project.org
Subject: Re: [R] "unexpected numeric constant" while reading tab
	delimited csv file
Message-ID: <BLU113-W14BF3EA0E3821E3C0CA8AEBE3D0 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"

----------------------------------------
> Date: Mon, 22 Nov 2010 01:57:54 -0800
> From: jdnewmil at dcn.davis.ca.us
> To: madrazel at interia.pl
> CC: r-help at r-project.org
> Subject: Re: [R] "unexpected numeric constant" while reading tab delimited
csv file
>
> madr wrote:
> > my csv file is very simple - just one line for purpose of this test:
> > 0{TAB}0
> >
> > and read function is this:
> > csvdata = read.csv(file="d:/s/test.csv",head=FALSE,sep="\t")
> >
> > then error comes:
> >
> > Error in source("d:/test.csv") :
> > d:/test.csv:1:9: unexpected numeric constant
> > 1: 0 0
> >
> >
> > but when I change the delimiter to ; (semicolon) the error no longer shows up
> >
> You seem to be referencing two different files somehow... one in the
> root directory of your drive D, and the other in a subdirectory D:/s.
> This may have something to do with it... or may be extraneous.
>
> You haven't indicated what your working environment is, though the OS

> mention a distinction between whatever this environment is (RGui?) and
> "console". Are you using Cygwin? could end-of-line termination (CRLF vs
> LF) be causing you difficulty?

The OP explained that, and if you believe the OP, changing the intended
file changes the error message. And yes, I would strongly suggest getting
Cygwin so you have some tools, rather than posting incomplete information
or calling tech support. In this case, you would use something like "od"
to verify that your file is what you expect. Just as printing numerical
data often truncates digits, unprintable chars don't always show up on
printing; "od -ax" or something similar may be informative. Changing the
tab may have caused the editor to change line endings or something else;
smart editors often mess stuff up.



>
> Perhaps you should follow the posting guide instructions...
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 2
Date: Mon, 22 Nov 2010 03:36:17 -0800 (PST)
From: madr <madrazel at interia.pl>
To: r-help at r-project.org
Subject: [R] how to sample lowess/loess into matrix ?
Message-ID: <1290425777216-3053458.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


code:

x <- rnorm(32)
y <- rnorm(32)
plot(x,y)
lines(lowess(x,y),col='red')

Now I need to sample the lowess function into a matrix where one series
will be X and the other will be the values of lowess at each X.
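
One possible approach (not from the thread): lowess() already returns its
fitted curve as x/y components, so binding them together gives the matrix
asked for; a sketch:

```r
# Sketch: lowess() returns a list with components $x (sorted) and $y (fit),
# so the requested matrix is just the two columns bound together
set.seed(1)
x <- rnorm(32)
y <- rnorm(32)
fit <- lowess(x, y)
m <- cbind(X = fit$x, lowess = fit$y)   # 32 x 2 matrix, ordered by X
```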
-- 
View this message in context:
http://r.789695.n4.nabble.com/how-to-sample-lowess-loess-into-matrix-tp30534
58p3053458.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 3
Date: Mon, 22 Nov 2010 06:38:52 -0500
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: madr <madrazel at interia.pl>
Cc: r-help at r-project.org
Subject: Re: [R] "unexpected numeric constant" while reading tab
	delimited csv file
Message-ID: <4CEA564C.4000908 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

madr wrote:
> my csv file is very simple - just one line for purpose of this test:
> 0{TAB}0
> 
> and read function is this:
> csvdata = read.csv(file="d:/s/test.csv",head=FALSE,sep="\t")
> 
> then error comes:
> 
> Error in source("d:/test.csv") : 
>   d:/test.csv:1:9: unexpected numeric constant
> 1: 0       0
> 
> 
> but when I change the delimiter to ; (semicolon) the error no longer shows up

You used source, not read.csv.  They aren't the same thing.

If you typed what you said you typed, then you've hidden the real 
read.csv function behind your own, and your own calls source.  But I 
don't think you typed what you said you typed.

Duncan Murdoch
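
For what it's worth, a call that really does use the read.csv machinery on a
tab-separated file might look like this (a sketch; read.delim is read.csv's
tab-delimited convenience wrapper):

```r
# Sketch: reading a tab-separated file without shadowing read.csv
csvdata <- read.delim("d:/s/test.csv", header = FALSE)
# equivalently: read.csv("d:/s/test.csv", header = FALSE, sep = "\t")
```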



------------------------------

Message: 4
Date: Mon, 22 Nov 2010 20:11:39 +0800
From: Luis Felipe Parra <felipe.parra at quantil.com.co>
To: r-help <r-help at r-project.org>
Subject: [R] Rexcel
Message-ID:
	<AANLkTi=rWSJheF2WLtmhe8Lg0byHZf1nr3Cdj18Q8+mq at mail.gmail.com>
Content-Type: text/plain

Hello, I am new to RExcel and I would like to run source code from the
Excel worksheet. I would like to run the following code

source("C:\\Quantil Aplicativos\\Genercauca\\BackwardSelectionNC.r")

from the Excel worksheet. Does anybody know how to do this?

Thank you

Felipe Parra

	[[alternative HTML version deleted]]



------------------------------

Message: 5
Date: Mon, 22 Nov 2010 23:22:25 +1100
From: Nevil Amos <nevil.amos at gmail.com>
To: R-help at r-project.org
Subject: [R] how to change number of characters per line for print()
	to	sink()?
Message-ID: <4CEA6081.70202 at sci.monash.edu.au>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

I am using R to read and reformat data that is then saved to a text file
using sink(). The file has a number of initial lines of comments and
summary data, followed by a print of a data.frame with no row names.
For example:

a<-c(100:120)
b<-c(rnorm(100:120))
c<-c(rnorm(200:220))
mydata<-data.frame(rbind(a,b,c))
sink("datafile.txt")

cat("comments about my data \n")
cat("other calculations returned as separate text comments on a line \n")
print(mydata,row.names=F)
sink()


I need the content of the text file to keep each row of the data frame
on a single line, thus (with intervening columns present, of course):

"datafile.txt"

comments about my data
other calculations returned as separate text comments on a line
           X1         X2          X3          X4          X5          
X6           
.....................................................................

X19         X20         X21


  100.0000000 101.000000 102.0000000 103.0000000 104.0000000 
105.0000000  ......................118.0000000 119.0000000 120.0000000
   -0.3380570  -1.400905   1.0396499  -0.5802181  -0.2340614   
0.6044928   ...................................-0.4854702  -0.3677461  
-1.2033173
   -0.9002824   1.544242  -0.8668653   0.3066256   0.2490254  -1.6429223 
.....................................   0.0861146   0.4276929  -0.3408604

How do I change a setting for print(), or use another function, to keep
each row of the data frame on a single line (of length up to approximately
300 characters) instead of wrapping the data frame into multiple lines
of text?

The problem: I end up with the data frame split into several sections,
one under another, thus:

"datafile.txt"

comments about my data
other calculations returned as separate text comments on a line
           X1         X2          X3          X4          X5          X6
  100.0000000 101.000000 102.0000000 103.0000000 104.0000000 105.0000000
   -0.3380570  -1.400905   1.0396499  -0.5802181  -0.2340614   0.6044928
   -0.9002824   1.544242  -0.8668653   0.3066256   0.2490254  -1.6429223
           X7           X8          X9         X10         X11         X12
  106.0000000 107.00000000 108.0000000 109.0000000 110.0000000 111.0000000
    0.3152427   0.15093494  -0.3316172  -0.3603724  -2.0516402  -0.4556241
   -0.6502265  -0.08842649  -0.3775335  -0.4942572  -0.0976565  -0.7716651
          X13         X14         X15         X16          X17        X18
  112.0000000 113.0000000 114.0000000 115.0000000 116.00000000 117.000000
    0.8829135   0.8851043  -0.7687383  -0.9573476  -0.03041968   1.425754
    0.2666777   0.6405255   0.2342905  -0.7705545  -1.18028004   1.303601
          X19         X20         X21
  118.0000000 119.0000000 120.0000000
   -0.4854702  -0.3677461  -1.2033173
    0.0861146   0.4276929  -0.3408604



------------------------------

Message: 6
Date: Mon, 22 Nov 2010 04:31:29 -0800 (PST)
From: Aline Uwimana via LinkedIn <member at linkedin.com>
To: James Holtman <r-help at r-project.org>
Subject: [R] Invitation à se connecter sur LinkedIn
Message-ID:
	<1473926269.6681467.1290429089742.JavaMail.app at ech3-cdn12.prod>
Content-Type: text/plain

LinkedIn
------------Aline Uwimana requested to add you as a connection on LinkedIn:
------------------------------------------

James,

I'd like to invite you to join my professional network online, on the
LinkedIn site.

Aline

Accept invitation from Aline Uwimana
http://www.linkedin.com/e/j2w180-ggtc5u4r-5e/dwA6EuZvdiRUO9LjukuviNUAHj8vHaN
g/blk/I2473001097_2/1BpC5vrmRLoRZcjkkZt5YCpnlOt3RApnhMpmdzgmhxrSNBszYOnPsVc3
4Mc3cTd399bSlUk6ZDtQd6bPkOcP4Sd3kPej4LrCBxbOYWrSlI/EML_comm_afe/

View invitation from Aline Uwimana
http://www.linkedin.com/e/j2w180-ggtc5u4r-5e/dwA6EuZvdiRUO9LjukuviNUAHj8vHaN
g/blk/I2473001097_2/39vdPAMcj0McPsQcAALqnpPbOYWrSlI/svi/
------------------------------------------

DID YOU KNOW you can be the first to know when a trusted member of your
network changes jobs? With Network Updates on your LinkedIn home page,
you'll be notified as members of your network change their current position.
Be the first to know and reach out!

http://www.linkedin.com/


-- 
(c) 2010, LinkedIn Corporation
	[[alternative HTML version deleted]]



------------------------------

Message: 7
Date: Mon, 22 Nov 2010 07:32:11 -0500
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: rnick <nikos.rachmanis at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] How to combine Date and time in one column
Message-ID:
	<AANLkTimU-LvRyziVgf0n_=W+WTH0TPYqgR4ZvxkWSXSu at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Nov 22, 2010 at 2:24 AM, rnick <nikos.rachmanis at gmail.com> wrote:
>
> Hello everyone,
>
> I am trying to build an xts object and I have run into some problems
> with the data handling. I would really appreciate it if someone could
> help me with the following:
>
> 1) I have an OHLC dataset with Time and Date in different columns. How
> could I combine date and time in one column in order to pass the new
> column on to xts? I have used cbind and data.frame before but I did not
> manage to yield any good results as the formatting of the file changes.
>
> Date       Time    O        H        L        C
> 1/2/2005   17:05   1.3546   1.3553   1.3546   1.35495
> 1/2/2005   17:10   1.3553   1.3556   1.3549   1.35525
> 1/2/2005   17:15   1.3556   1.35565  1.35515  1.3553
> 1/2/2005   17:25   1.355    1.3556   1.355    1.3555
> 1/2/2005   17:30   1.3556   1.3564   1.35535  1.3563
>
> 2) It is not clear to me what is the best way to construct the .xts
> object? Should I use only the Date & time to index, or should I also
> combine it with the rest of the variables?
>
>

Use read.zoo and then as.xts to convert the result to xts.  The following
shows it for chron date/times.  Replace textConnection(Lines) with
"myfile.dat" to read it from that file.  You can replace the FUN=
part with a conversion to any date/time class supported by xts.  In the
example below we assume that the date format is month/day/year.  See
R News 4/1.

Lines <- "Date Time O H L C
1/2/2005 17:05 1.3546 1.3553 1.3546 1.35495
1/2/2005 17:10 1.3553 1.3556 1.3549 1.35525
1/2/2005 17:15 1.3556 1.35565 1.35515 1.3553
1/2/2005 17:25 1.355 1.3556 1.355 1.3555
1/2/2005 17:30 1.3556 1.3564 1.35535 1.3563"

library(xts) # this also pulls in zoo and its read.zoo
library(chron)

z <- read.zoo(textConnection(Lines), header = TRUE, index = list(1, 2),
	FUN = function(d, t) as.chron(paste(as.Date(chron(d)), t)))

x <- as.xts(z)


-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com



------------------------------

Message: 8
Date: Mon, 22 Nov 2010 09:43:18 -0300
From: Jose Claudio Faria <joseclaudio.faria at gmail.com>
To: r-help at r-project.org
Subject: [R] Tinn-R 2.3.7.0 released
Message-ID:
	<AANLkTimGQHOWj_EzDhR_J-38QxG99KDGmq0OvGZ29eiZ at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Dear users,

A new version of Tinn-R was released today. Details below:

2.3.7.0 (Nov/22/2010)

    * Bug(s) fixed:
          - A bug related to the intermittent loss of connection with
Rgui.exe (or apparent freezing) was fixed.
    * Versions 2.3.6.4, 2.3.6.5, 2.3.6.6 and 2.3.6.7 were restricted to
pre-release testers.
    * The Application options interface was changed slightly:
          - Application options/R/Rterm was split into two tabs:
Error and Options. The Error tab has a new option, "Try to find error
in the editor", which enables the user to configure Tinn-R to find
errors in the editor when sending instructions to Rterm.
    * This version is fully compatible with Windows 7 and R 2.12.0.
    * The XPmenu component was removed from the project. Windows XP
users may find the Tinn-R appearance less attractive, but the
application is now more stable. As soon as possible, the project
will adopt a better option for skins.
    * Parts of the source code were optimized.

All the best,
-- 
///\\\///\\\///\\\///\\\///\\\///\\\///\\\///\\\
Jose Claudio Faria
Estatistica - prof. Titular
UESC/DCET/Brasil
joseclaudio.faria at gmail.com
///\\\///\\\///\\\///\\\///\\\///\\\///\\\///\\\



------------------------------

Message: 9
Date: Mon, 22 Nov 2010 20:53:14 +0800
From: Luis Felipe Parra <felipe.parra at quantil.com.co>
To: r-help <r-help at r-project.org>
Subject: [R] Rexcel
Message-ID:
	<AANLkTimtvxE9BvWqZ+=bK7LZOcAou_jrA3OFDJOZ5Gcj at mail.gmail.com>
Content-Type: text/plain

Hello, I am trying to use RExcel and I would like to know if it is possible
to use, in Excel, the following function I wrote for R

Pron = function(path="C:\\Quantil Aplicativos\\Genercauca\\V5\\"){
library(timeSeries)
library(maSigPro)
### CARGAR FUNCIONES (load helper functions)
source(paste(path,"\\fUtilidades\\BackwardSelectionNC.r",sep=""))
source(paste(path,"\\fUtilidades\\CriteriosDeComparacion.r",sep=""))
dataTSORG<-read.csv('entrada.csv', header = TRUE, sep = ",", quote="\"",
dec=".",fill = TRUE, comment.char="")
dataTSORG = ts(dataTSORG, start=c(1950,1), frequency=12)
dataTSORG = as.timeSeries(dataTSORG)
X = prcomp(dataTSORG[,2:40])$x
return(X)
}

Does somebody know if it's possible, and if so, how can I do it?

Thank you

Felipe Parra

	[[alternative HTML version deleted]]
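
One R-side way to check the function before involving Excel (a sketch; it
assumes the helper scripts and entrada.csv exist under the given path, and
"Pron.r" is a hypothetical file containing the definition above):

```r
# Sketch: exercise Pron() from a plain R session first, so any path or data
# problems surface before the Excel layer is involved
source("C:\\Quantil Aplicativos\\Genercauca\\V5\\Pron.r")  # hypothetical file defining Pron
X <- Pron("C:\\Quantil Aplicativos\\Genercauca\\V5\\")
dim(X)  # principal-component scores for columns 2:40
```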



------------------------------

Message: 10
Date: Mon, 22 Nov 2010 07:59:05 -0500
From: jim holtman <jholtman at gmail.com>
To: nevil.amos at sci.monash.edu.au
Cc: R-help at r-project.org
Subject: Re: [R] how to change number of characters per line for
	print() to	sink()?
Message-ID:
	<AANLkTi=_J6jSVAuVX8SiF8g+9w6pVLPx7v+HTqa-4_wm at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

?options
width

options(width = 1000)


On Mon, Nov 22, 2010 at 7:22 AM, Nevil Amos <nevil.amos at gmail.com> wrote:
> I am using r to read and reformat data that is then saved to a text file
> using sink(), the file has a number of initial lines of comments and
summary
> data followed by print of a data.frame with no row names.
> for example
>
> a<-c(100:120)
> b<-c(rnorm(100:120))
> c<-c(rnorm(200:220))
> mydata<-data.frame(rbind(a,b,c))
> sink("datafile.txt")
>
> cat("comments about my data \n")
> cat("other calculations returned as separate text comments on a line \n")
> print(mydata,row.names=F)
> sink()
>
>
> I need the content of the text file to keep each row of the data frame on
a
> single line thus (with intervening columns present of course)
>
> "datafile.txt"
>
> comments about my data
> other calculations returned as separate text comments on a line
> ? ? ? ? ?X1 ? ? ? ? X2 ? ? ? ? ?X3 ? ? ? ? ?X4 ? ? ? ? ?X5 ? ? ? ? ?X6
> ? ? .....................................................................
> ? ? ? X19 ? ? ? ? X20 ? ? ? ? X21
>
>
> ?100.0000000 101.000000 102.0000000 103.0000000 104.0000000 105.0000000
> ?......................118.0000000 119.0000000 120.0000000
> ?-0.3380570 ?-1.400905 ? 1.0396499 ?-0.5802181 ?-0.2340614 ? 0.6044928
> ...................................-0.4854702 ?-0.3677461 ?-1.2033173
> ?-0.9002824 ? 1.544242 ?-0.8668653 ? 0.3066256 ? 0.2490254 ?-1.6429223
> ..................................... ? 0.0861146 ? 0.4276929 ?-0.3408604
>
> How doI change setting for print() or use another function to keep each
row
> of the data frame as a single line ( of greater length up to approx 300
> characters) instead of wrapping the data frame into multiple lines of
text?
>
> The problem : I end up with the data frame split into several sections one
> under another thus
>
> "datafile.txt"
>
> comments about my data
> other calculations returned as separate text comments on a line
> ? ? ? ? ?X1 ? ? ? ? X2 ? ? ? ? ?X3 ? ? ? ? ?X4 ? ? ? ? ?X5 ? ? ? ? ?X6
> ?100.0000000 101.000000 102.0000000 103.0000000 104.0000000 105.0000000
> ?-0.3380570 ?-1.400905 ? 1.0396499 ?-0.5802181 ?-0.2340614 ? 0.6044928
> ?-0.9002824 ? 1.544242 ?-0.8668653 ? 0.3066256 ? 0.2490254 ?-1.6429223
> ? ? ? ? ?X7 ? ? ? ? ? X8 ? ? ? ? ?X9 ? ? ? ? X10 ? ? ? ? X11 ? ? ? ? X12
> ?106.0000000 107.00000000 108.0000000 109.0000000 110.0000000 111.0000000
> ? 0.3152427 ? 0.15093494 ?-0.3316172 ?-0.3603724 ?-2.0516402 ?-0.4556241
> ?-0.6502265 ?-0.08842649 ?-0.3775335 ?-0.4942572 ?-0.0976565 ?-0.7716651
> ? ? ? ? X13 ? ? ? ? X14 ? ? ? ? X15 ? ? ? ? X16 ? ? ? ? ?X17 ? ? ? ?X18
> ?112.0000000 113.0000000 114.0000000 115.0000000 116.00000000 117.000000
> ? 0.8829135 ? 0.8851043 ?-0.7687383 ?-0.9573476 ?-0.03041968 ? 1.425754
> ? 0.2666777 ? 0.6405255 ? 0.2342905 ?-0.7705545 ?-1.18028004 ? 1.303601
> ? ? ? ? X19 ? ? ? ? X20 ? ? ? ? X21
> ?118.0000000 119.0000000 120.0000000
> ?-0.4854702 ?-0.3677461 ?-1.2033173
> ? 0.0861146 ? 0.4276929 ?-0.3408604
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?



------------------------------

Message: 11
Date: Mon, 22 Nov 2010 13:02:17 +0000
From: "S Ellison" <S.Ellison at lgc.co.uk>
To: "emorway" <emorway at engr.colostate.edu>,	"Uwe Ligges"
	<ligges at statistik.tu-dortmund.de>
Cc: r-help at r-project.org
Subject: Re: [R] boxplot: reverse y-axis order
Message-ID: <scea69e8.069 at tedmail.lgc.co.uk>
Content-Type: text/plain; charset=US-ASCII

A simple alternative is to use "at" to control plot locations:

boxplot( ..., at=rev(1:nlevels(depthM)))

which just rearranges where they are plotted.

Example:
set.seed(1023)
x <- gl(5, 5)
y <- rnorm(25)
boxplot(y ~ x, horizontal = TRUE)
boxplot(y ~ x, at = rev(1:nlevels(x)), horizontal = TRUE)


Steve E

>>> Uwe Ligges <ligges at statistik.tu-dortmund.de> 21/11/2010 19:38:54
>>>


On 21.11.2010 20:30, emorway wrote:
>
> Hello,
>
> Searching this forum has enabled me to get pretty far in what I'm
> trying to do.  However, there is one more manipulation I would like to
> make and I haven't found a solution.  Using the data and code below, I
> generate the plot produced by the last command.  If possible I would
> like to reverse the order of the y-axis (bearing in mind horizontal=T)
> so that 0 is plotted at the uppermost part of the y-axis and 3 at the
> axis intercepts.  I've tried the experimental approach to no avail,
> meaning I've placed rev(...) around various arguments but with wrong
> results.
>
> Thanks,
> Eric
>
>
> df<-read.table(textConnection("Field Date AvgWTD Region variable
value hole
> depth depthM
> 204 17-Aug-00 2.897428989 US R1 NA R 1 0


[SNIP]

> 9A 09-Aug-00 1.482089996 US C6 NA C 6 1.8
> 9B 01-Jun-01 1.409700036 US C6 NA C 6 1.8
> 9B 09-Aug-00 3.837660074 US C6 NA C 6 1.8"),header=T)
> closeAllConnections()
> # The following call doesn't preserve the correct spacing between data:
> boxplot(value~depthM,data=df,horizontal=T,outline=F)
> # The following command preserves the correct spacing, but the order
> # needs to be reversed:
> boxplot(value~factor(depthM,
>     levels=c(0.0,0.3,0.6,0.9,1.2,1.5,1.8,2.1,2.4,2.7,3)),
>     data=df,horizontal=T,outline=F)


So if you want to reverse, either specify the levels in reverse order
or 
use rev() as in:

boxplot(value ~ factor(depthM,
     levels = rev(c(0.0,0.3,0.6,0.9,1.2,1.5,1.8,2.1,2.4,2.7,3))),
     data = df, horizontal = TRUE, outline = FALSE)


Uwe Ligges

______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help 
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html 
and provide commented, minimal, self-contained, reproducible code.

*******************************************************************
This email and any attachments are confidential. Any use...{{dropped:8}}



------------------------------

Message: 12
Date: Mon, 22 Nov 2010 05:19:21 -0800 (PST)
From: Alaios <alaios at yahoo.com>
To: Rhelp <r-help at r-project.org>
Subject: [R] plot inside function does not work
Message-ID: <413924.42773.qm at web120105.mail.ne1.yahoo.com>
Content-Type: text/plain

Hello everyone,
when I create a plot from the console (command line), it works fine. I have
created a function that plots based on the input. This function is called
plot_shad. When I call this function alone from the command line I get my
plot.

Then I tried to use another function, as depicted below, to do some
calculation before calling the function that does the plotting.

plot_shad_map<-function(f,CRagent,agentid){
  for (i in c(1:nrow(shad_map))){
    for (j in c(1:ncol(shad_map))){
     # Do something
    }
  }
  # This plots fine when used from the command line, but inside this
  # function it does not:
  plot_shad_f(shad_map)
  return(shad_map)
}

Unfortunately I get no plot. What might be the problem?

One more question: how do I get more plots at the same time? It seems that
each new plot I issue replaces the old one.

I would like to thank you in advance for your help.
Regards
Alex



      
	[[alternative HTML version deleted]]



------------------------------

Message: 13
Date: Mon, 22 Nov 2010 05:32:05 -0800 (PST)
From: madr <madrazel at interia.pl>
To: r-help at r-project.org
Subject: Re: [R] how to sample lowess/loess into matrix ?
Message-ID: <1290432725271-3053601.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


I found it, it was SO simple:

lowessline <- lowess(x,y)
write.csv(lowessline, "loess.csv")
-- 
View this message in context:
http://r.789695.n4.nabble.com/how-to-sample-lowess-loess-into-matrix-tp3053458p3053601.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 14
Date: Mon, 22 Nov 2010 14:34:02 +0100
From: Claudia Beleites <cbeleites at units.it>
To: r-help at r-project.org
Subject: Re: [R] plot inside function does not work
Message-ID: <4CEA714A.5070406 at units.it>
Content-Type: text/plain; charset=UTF-8; format=flowed

Alex, this may be FAQ 7.22

Claudia

On 11/22/2010 02:19 PM, Alaios wrote:
> Hello everyone,
> when I create a plot from the console (command line), it works fine. I
> have created a function that plots based on the input. This function is
> called plot_shad. When I call this function alone from the command line
> I get my plot.
>
> Then I tried to use another function, as depicted below, to do some
> calculation before calling the function that does the plotting.
>
> plot_shad_map<-function(f,CRagent,agentid){
>   for (i in c(1:nrow(shad_map))){
>     for (j in c(1:ncol(shad_map))){
>      # Do something
>     }
>   }
>   # This plots fine when used from the command line, but inside this
>   # function it does not:
>   plot_shad_f(shad_map)
>   return(shad_map)
> }
>
> Unfortunately I get no plot. What might be the problem?
>
> One more question: how do I get more plots at the same time? It seems
> that each new plot I issue replaces the old one.
>
> I would like to thank you in advance for your help.
> Regards
> Alex
>
> 	[[alternative HTML version deleted]]
>
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


-- 
Claudia Beleites
Dipartimento dei Materiali e delle Risorse Naturali
Università degli Studi di Trieste
Via Alfonso Valerio 6/a
I-34127 Trieste

phone: +39 0 40 5 58-37 68
email: cbeleites at units.it



------------------------------

Message: 15
Date: Mon, 22 Nov 2010 05:36:06 -0800 (PST)

To: Luis Felipe Parra <felipe.parra at quantil.com.co>,	r-help
	<r-help at r-project.org>
Subject: Re: [R] Rexcel
Message-ID: <666026.40186.qm at web113206.mail.gq1.yahoo.com>
Content-Type: text/plain; charset=utf-8

Hi,

For RExcel I would suggest subscribing;

http://mailman.csd.univie.ac.at/listinfo/rcom-l


They have a website on;
http://rcom.univie.ac.at/

B.R.
Stephen L



----- Original Message ----
From: Luis Felipe Parra <felipe.parra at quantil.com.co>
To: r-help <r-help at r-project.org>
Sent: Mon, November 22, 2010 8:11:39 PM
Subject: [R] Rexcel

Hello, I am new to RExcel and I would like to run source code from the
Excel worksheet. I would like to run the following code

source("C:\\Quantil Aplicativos\\Genercauca\\BackwardSelectionNC.r")

from the Excel worksheet. Does anybody know how to do this?

Thank you

Felipe Parra

    [[alternative HTML version deleted]]

______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.






------------------------------

Message: 16
Date: Mon, 22 Nov 2010 05:47:35 -0800 (PST)

To: r-help at r-project.org, Claudia Beleites <cbeleites at units.it>
Subject: Re: [R] plot inside function does not work
Message-ID: <946840.29824.qm at web120104.mail.ne1.yahoo.com>
Content-Type: text/plain

Dear Claudia,
I would like to thank you for your reply. According to FAQ 7.22 I have to
put a print() statement inside my function. What I do not understand is
where to put this line (at the beginning or at the end of the code?). I
tried both but I get an error message.

Moreover, I would like to ask whether any inputs should be passed to
print(). I tried passing it the ggplot object, but it didn't like it :(

Best Regards

7.22 Why do lattice/trellis graphics not work?

The most likely reason is that you forgot to tell R to display the
graph.  Lattice functions such as xyplot() create a graph object,
but do not display it (the same is true of ggplot2 graphics,
and Trellis graphics in S-Plus).  The print() method for the
graph object produces the actual display.  When you use these functions
interactively at the command line, the result is automatically printed,
but in source() or inside your own functions you will need an
explicit print() statement.
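A minimal sketch of the fix the FAQ describes, using lattice (the function and data names here are made up for illustration, not from the thread):

```r
library(lattice)

plot_in_function <- function(df) {
  p <- xyplot(y ~ x, data = df)  # creates a graph object; does not draw it
  print(p)                       # an explicit print() draws it inside a function
  invisible(p)
}

p <- plot_in_function(data.frame(x = 1:10, y = rnorm(10)))
```

The same pattern applies to ggplot2: assign the graph object inside the function and call print() on it.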



--- On Mon, 11/22/10, Claudia Beleites <cbeleites at units.it> wrote:

From: Claudia Beleites <cbeleites at units.it>
Subject: Re: [R] plot inside function does not work
To: r-help at r-project.org
Date: Monday, November 22, 2010, 1:34 PM

Alex, this may be FAQ 7.22

Claudia

On 11/22/2010 02:19 PM, Alaios wrote:
> Hello everyone,
> when I create a plot from the console (command line), it works fine. I
> have created a function that plots based on the input. This function is
> called plot_shad. When I call this function alone from the command line
> I get my plot.
>
> Then I tried to use another function, as depicted below, to do some
> calculation before calling the function that does the plotting.
>
> plot_shad_map<-function(f,CRagent,agentid){
>   for (i in c(1:nrow(shad_map))){
>     for (j in c(1:ncol(shad_map))){
>      # Do something
>     }
>   }
>   # This plots fine when used from the command line, but inside this
>   # function it does not:
>   plot_shad_f(shad_map)
>   return(shad_map)
> }
>
> Unfortunately I get no plot. What might be the problem?
>
> One more question: how do I get more plots at the same time? It seems
> that each new plot I issue replaces the old one.
>
> I would like to thank you in advance for your help.
> Regards
> Alex
>
> 	[[alternative HTML version deleted]]
>
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


-- 
Claudia Beleites
Dipartimento dei Materiali e delle Risorse Naturali
Università degli Studi di Trieste
Via Alfonso Valerio 6/a
I-34127 Trieste

phone: +39 0 40 5 58-37 68
email: cbeleites at units.it

______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



      
	[[alternative HTML version deleted]]



------------------------------

Message: 17
Date: Mon, 22 Nov 2010 05:53:40 -0800 (PST)
From: Yann Lancien via LinkedIn <member at linkedin.com>
To: James Holtman <r-help at r-project.org>
Subject: [R] Invitation à se connecter sur LinkedIn
Message-ID:
	<210238489.6957030.1290434020449.JavaMail.app at ech3-cdn05.prod>
Content-Type: text/plain

LinkedIn
------------Yann Lancien requested to add you as a connection on LinkedIn:
------------------------------------------

James,

I would like to invite you to join my professional network on LinkedIn.

Yann

Accept invitation from Yann Lancien
http://www.linkedin.com/e/j2w180-ggtf3iot-z/dwA6EuZvdiRUO9LjukuviNUAHj8vHaNg/blk/I2473133047_2/1BpC5vrmRLoRZcjkkZt5YCpnlOt3RApnhMpmdzgmhxrSNBszYOnPsQc3cPcjcTd399bPxgum8Utk5QbPcNdzkQe3kPej4LrCBxbOYWrSlI/EML_comm_afe/

View invitation from Yann Lancien
http://www.linkedin.com/e/j2w180-ggtf3iot-z/dwA6EuZvdiRUO9LjukuviNUAHj8vHaNg/blk/I2473133047_2/39vdPgMcPcNcPsQcAALqnpPbOYWrSlI/svi/

------------------------------------------

Why might connecting with Yann Lancien be a good idea?

People Yann Lancien knows can discover your profile:
Connecting to Yann Lancien will attract the attention of LinkedIn users. See
who's been viewing your profile:

http://www.linkedin.com/e/j2w180-ggtf3iot-z/wvp/inv18_wvmp/


-- 
(c) 2010, LinkedIn Corporation
	[[alternative HTML version deleted]]



------------------------------

Message: 18
Date: Mon, 22 Nov 2010 06:06:30 -0800 (PST)
From: Joel <joda2457 at student.uu.se>
To: r-help at r-project.org
Subject: [R] Prob with merge
Message-ID: <1290434790326-3053652.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


Hi

I'm trying to merge 2 data frames using merge but I don't get the result I
want.

Let's make this a small test, as my data set is too big to put in here :).

t2<-data.frame(a=c(1,2,3,4,5,6),b=c(11,11,NA,11,11,11))
t1<-data.frame(a=c(1,2,3,4,5,8),b=c(12,12,12,12,12,32))

this gives me:

> t1
  a  b
1 1 12
2 2 12
3 3 12
4 4 12
5 5 12
6 8 32
> t2
  a  b
1 1 11
2 2 11
3 3 NA
4 4 11
5 5 11
6 6 11

now when i merge i get:
> merge(t1,t2, by="a")
  a b.x b.y
1 1  12  11
2 2  12  11
3 3  12  NA
4 4  12  11
5 5  12  11


But what I want is it to look like:

   a b.x b.y
1 1 12 11
2 2 12 11
3 3 12 NA
4 4 12 11
5 5 12 11
6 8 32 NA

So I keep all of the rows from t1 and get an NA in those slots at the t2 part
of the merge.
Anyone know how to accomplish this?

Thx
//Joel 



-- 
View this message in context:
http://r.789695.n4.nabble.com/Prob-with-merge-tp3053652p3053652.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 19
Date: Mon, 22 Nov 2010 06:08:10 -0800
From: Peter Ehlers <ehlers at ucalgary.ca>
To: Luis Felipe Parra <felipe.parra at quantil.com.co>
Cc: r-help <r-help at r-project.org>
Subject: Re: [R] Making a line in a legend shorter
Message-ID: <4CEA794A.3000607 at ucalgary.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 2010-11-21 18:38, Luis Felipe Parra wrote:
> Hello, I am putting a legend with lines in a line plot and I would like to
> make the lines in the legend shorter. Does anybody knows how to do this?

I think the segment length is still hard-coded in legend().
Last July there was a request to lengthen the segment.
Maybe this post will help you:

  https://stat.ethz.ch/pipermail/r-help/2010-July/246599.html

Peter Ehlers

>
> Thank you
>
> Felipe Parra
>
> 	[[alternative HTML version deleted]]



------------------------------

Message: 20
Date: Mon, 22 Nov 2010 15:09:31 +0100
From: "ONKELINX, Thierry" <Thierry.ONKELINX at inbo.be>
To: Joel <joda2457 at student.uu.se>, <r-help at r-project.org>
Subject: Re: [R] Prob with merge
Message-ID: <3DB16098F738284D8DBEB2FC369916384D6283 at inboexch.inbo.be>
Content-Type: text/plain; charset="us-ascii"

merge(t1,t2, by="a", all.x = TRUE)
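Applied to the example from the question (re-created here, with the apparent t1/t2 typo in the posted code fixed), the suggested call performs a left outer join:

```r
# Rebuild the poster's two data frames.
t2 <- data.frame(a = c(1, 2, 3, 4, 5, 6), b = c(11, 11, NA, 11, 11, 11))
t1 <- data.frame(a = c(1, 2, 3, 4, 5, 8), b = c(12, 12, 12, 12, 12, 32))

# all.x = TRUE keeps unmatched rows of t1, filling t2's columns with NA.
m <- merge(t1, t2, by = "a", all.x = TRUE)
print(m)
# The row with a = 8 is retained, with b.y = NA, as requested.
```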

----------------------------------------------------------------------------
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek
team Biometrie & Kwaliteitszorg
Gaverstraat 4
9500 Geraardsbergen
Belgium

Research Institute for Nature and Forest
team Biometrics & Quality Assurance
Gaverstraat 4
9500 Geraardsbergen
Belgium

tel. + 32 54/436 185
Thierry.Onkelinx at inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to
say what the experiment died of.
~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data.
~ Roger Brinner

The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of
data.
~ John Tukey
  

> -----Original message-----
> From: r-help-bounces at r-project.org
> [mailto:r-help-bounces at r-project.org] On behalf of Joel
> Sent: Monday, 22 November 2010 15:07
> To: r-help at r-project.org
> Subject: [R] Prob with merge
> 
> 
> Hi
> 
> I'm trying to merge 2 data frames using merge but I don't get 
> the result I want.
> 
> Let's make this a small test, as my data set is too big to put 
> in here :).
> 
> t2<-data.frame(a=c(1,2,3,4,5,6),b=c(11,11,NA,11,11,11))
> t1<-data.frame(a=c(1,2,3,4,5,8),b=c(12,12,12,12,12,32))
> 
> this gives me:
> 
> > t1
>   a  b
> 1 1 12
> 2 2 12
> 3 3 12
> 4 4 12
> 5 5 12
> 6 8 32
> > t2
>   a  b
> 1 1 11
> 2 2 11
> 3 3 NA
> 4 4 11
> 5 5 11
> 6 6 11
> 
> now when i merge i get:
> > merge(t1,t2, by="a")
>   a b.x b.y
> 1 1  12  11
> 2 2  12  11
> 3 3  12  NA
> 4 4  12  11
> 5 5  12  11
> 
> 
> But what I want is it to look like:
> 
>    a b.x b.y
> 1 1 12 11
> 2 2 12 11
> 3 3 12 NA
> 4 4 12 11
> 5 5 12 11
> 6 8 32 NA
> 
> So I keep all of the rows from t1 and get an NA in those slots 
> at the t2 part of the merge.
> Anyone know how to accomplish this?
> 
> Thx
> //Joel 
> 
> 
> 
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/Prob-with-merge-tp3053652p3053652.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 



------------------------------

Message: 21
Date: Mon, 22 Nov 2010 08:11:15 -0600
From: Terry Therneau <therneau at mayo.edu>

Cc: r-help at r-project.org, dieter.menne at menne-biomed.de,
	r_tingley at hotmail.com
Subject: Re: [R] calculating martingale residual on new data	using
	"predict.coxph"
Message-ID: <1290435075.7350.3.camel at punchbuggy>
Content-Type: text/plain

 This feature has been added in survival 2.36-1, which is now on CRAN.
(2.36-2 should appear in another day or so)
     Terry T.

---------begin included message --------
I was trying to use "predict.coxph" to calculate martingale residuals on
test data; however, as pointed out before,

http://tolstoy.newcastle.edu.au/R/e4/help/08/06/13508.html

predict(mycox1, newdata, type="expected") is not implemented yet.
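Given Terry's note above that the feature is in survival 2.36-1, the call from the included message should now work. A minimal sketch on the package's built-in lung data (this example is not from the thread; it assumes survival 2.36-1 or later):

```r
library(survival)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)

# type = "expected" gives the expected number of events for each row of
# newdata; newdata must include the survival time and status columns.
newdata  <- lung[1:5, ]
expected <- predict(fit, newdata = newdata, type = "expected")

# Martingale residual = observed event indicator minus expected count
# (lung$status is coded 1 = censored, 2 = dead).
mart <- (newdata$status - 1) - expected
print(mart)
```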



------------------------------

Message: 22
Date: Mon, 22 Nov 2010 06:20:51 -0800 (PST)
From: Joel <joda2457 at student.uu.se>
To: r-help at r-project.org
Subject: Re: [R] Prob with merge
Message-ID: <1290435651761-3053675.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


Thx alot mate.
-- 
View this message in context:
http://r.789695.n4.nabble.com/Prob-with-merge-tp3053652p3053675.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 23
Date: Mon, 22 Nov 2010 09:27:08 -0500
From: Jimmy Martina <jimmyjuan1 at hotmail.com>
To: R-geo <r-sig-geo at stat.math.ethz.ch>, R <r-help at r-project.org>
Subject: [R] What if geoRglm results showed that a non-spacial model
	fits?
Message-ID: <BAY110-W26A01692A4E820BEE2C8BF943D0 at phx.gbl>
Content-Type: text/plain


Hi R-people:

Working in geoRglm, the AIC criterion shows me that the non-spatial model
describes the process better. It's the first time I'm facing this.

These are my results:
OP2003Seppos.AICnonsp-OP2003Seppos.AICsp
#[1] -4

(OP2003Seppos.lf0.p<-exp(OP2003Seppos.lf0$beta)/(1+exp(OP2003Seppos.lf0$beta))) #P non spatial
#[1] 0.9717596

(OP2003Seppos.lf.p<-exp(OP2003Seppos.lf$beta)/(1+exp(OP2003Seppos.lf$beta)))
#P spatial
#[1] 0.9717596

This must have an important influence on the kriging, because it shows the
following:

OP2003Sepposbin.krig<-glsm.krige(OP2003Seppos.tune,loc=OP2003Seppospro.pred.grid,bor=OP2003Sepposbor)
#glsm.krige: Prediction for a generalised linear spatial model 
#There were 50 or more warnings (use warnings() to see them)
#> warnings()
#Warning messages:
#1: In asympvar(kpl.result$predict, messages = FALSE) ... :
#  value of argument lag.max is not suffiently long
#2: In asympvar(kpl.result$predict, messages = FALSE) ... :
#  value of argument lag.max is not suffiently long

Help me, please.
	[[alternative HTML version deleted]]



------------------------------

Message: 24
Date: Mon, 22 Nov 2010 15:32:51 +0100
From: Stella Pachidi <stella.pachidi at gmail.com>
To: r-help at stat.math.ethz.ch
Subject: [R] Help with plotting kohonen maps
Message-ID:
	<AANLkTini0qSwDHVo_Pe9f2zh2Ga5mEWfBWhCXiwwQWYr at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Dear all,

I recently started using the kohonen package for my thesis project. I
have a very simple question which I cannot figure out by myself:

When I execute the following example code, from the paper of Wehrens
and Buydens (http://www.jstatsoft.org/v21/i05/paper):

R> library("kohonen")
Loading required package: class
R> data("wines")
R> wines.sc <- scale(wines)
R> set.seed(7)
R> wine.som <- som(data = wines.sc, grid = somgrid(5, 4, "hexagonal"))
R> plot(wine.som, main = "Wine data")

I get to have a plot of the codebook vectors of the 5-by-4 mapping of
the wine data, and it also includes which  variable names correspond
to each color. (same picture as in the paper)

However, when I run the som() function with my own data and I try to
get the plot afterwards:

library("kohonen")
self_Organising_Map <- som(data = tableToCluster, grid = somgrid(5, 2,
"rectangular"), rlen=1000)
plot(self_Organising_Map, main = "Kohonen Map of Clustered Profiles")

the resulting plot does not contain the color labels, i.e. the
variable names of my data table, even though they exist and are
included as column names of tableToCluster.

I also tried the following line:

plot(self_Organising_Map, type="codes", codeRendering = "segments",
ncolors=length(colnames(self_Organising_Map$codes)),
palette.name=rainbow, main = "Kohonen Map of Clustered Profiles \n
Codes", zlim =colnames(self_Organising_Map$codes))

but it had the same result.

If you could please help with what argument I should use to show the
[[elided Yahoo spam]]

Kind regards,
Stella

-- 
Stella Pachidi
Master in Business Informatics student
Utrecht University



------------------------------

Message: 25
Date: Mon, 22 Nov 2010 12:03:54 -0300
From: Kjetil Halvorsen <kjetilbrinchmannhalvorsen at gmail.com>
To: Uwe Ligges <ligges at statistik.tu-dortmund.de>
Cc: r-devel <r-help at r-project.org>
Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
Message-ID:
	<AANLkTim7qsgaidiEV65pbJfPLOhrHG4yy8jf6Eha==t9 at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

see below.

2010/11/21 Uwe Ligges <ligges at statistik.tu-dortmund.de>:
>
>
> On 21.11.2010 18:13, Kjetil Halvorsen wrote:

>>> ?save.image
>>
>> And at this point it has been running with one cpu at 100% for over an
>> hour!
>
>
> It's OK to take an hour (due to memory <-> disc IO) if it uses swap space
> heavily. Factor of 60 is not much given memory is faster than harddiscs by
> orders of magnitude.
>
> Uwe

It takes much more than an hour! I started a new process with the
problem yesterday around 18:00 and had to kill it this morning around
09:00. That's more than 15 hours.

Kjetil
>



------------------------------

Message: 26
Date: Mon, 22 Nov 2010 10:13:01 -0500
From: Mike Marchywka <marchywka at hotmail.com>
To: <kjetilbrinchmannhalvorsen at gmail.com>,
	<ligges at statistik.tu-dortmund.de>
Cc: r-help at r-project.org
Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
Message-ID: <BLU113-W11A7426FDE41EECEB92F7BBE3D0 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"






----------------------------------------
> Date: Mon, 22 Nov 2010 12:03:54 -0300
> From: kjetilbrinchmannhalvorsen at gmail.com
> To: ligges at statistik.tu-dortmund.de
> CC: r-help at r-project.org
> Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
>
> see below.
>
> 2010/11/21 Uwe Ligges :
> >
> >
> > On 21.11.2010 18:13, Kjetil Halvorsen wrote:
>
> >>> ?save.image
> >>
> >> And at this point it has been running with one cpu at 100% for over an
> >> hour!
> >
> >
> > It's OK to take an hour (due to memory <-> disc IO) if it uses swap
space
> > heavily. Factor of 60 is not much given memory is faster than harddiscs
by
> > orders of magnitude.
> >
> > Uwe
>
> It takes much more than an hour! I started a new process with the
> problem yesterday around 18:00 and had to kill it this morning around
> 09:00. That's more than 15 hours.


Again, see if you can run it under gdb, or at least look at the
tools you have to determine page faults. My brain has been corrupted
with 'dohs, but in task manager CPU usage drops when page faults start
or lock starvation etc. A blocking thread should yield IIRC. Waiting
for it to die a natural death may not be practical.

I just posted something on this after following another's suggestion, but
it should be easy for you to get developer tools, execute gdb,
point it at R and then break a few times. Debuggers don't speed anything
up, but presumably it gets into its limit cycle (infinite futile loop)
within a short time. Also, sometimes you get these loops due to memory
corruption with native code etc., so confusing results may take a few
different approaches to figure out.

Turning on profiling will at best destroy any memory coherence and at worst
add to VM thrashing. At least try to determine if you are faulting all over.


>
> Kjetil
> >
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 27
Date: Mon, 22 Nov 2010 09:13:19 -0600
From: Mike Lawrence <Mike.Lawrence at dal.ca>
To: "r-help at lists.R-project.org" <r-help at r-project.org>
Subject: [R] hierarchical mixture modelling
Message-ID:
	<AANLkTi=fzD+Jz1-DnOY239U3xycCRfriyZ1bF-AJjjAh at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi folks,

I have circular data that I'd like to model as a mixture of a uniform
and a Von Mises centered on zero. The data are from 2 groups of human
participants, where each participant is measured repeatedly within
each of several conditions. So, the data frame would look something
like:

########
    design = expand.grid(
        person = 1:20
        , variable1 = 1:2
        , variable2 = 1:3
        , repetition = 1:100
    )
    design$group = design$person %% 2
########

where each row would have a data point.

Now, I know how to fit the mixture of a uniform and Von Mises to each
individual cell of the data independently using the EM algorithm,
yielding estimates of the mixture proportion and Von Mises
concentration per cell. However, I of course want to assess the degree
to which the group and other variables in the design affect these
model parameters, and at least in the case of the proportion estimate,
I'm uncomfortable submitting the raw proportion to a test that is
going to assume Gaussian error (eg. ANOVA, or lmer(...,
family=gaussian)). I'm aware that lmer lets one specify non-gaussian
links, but as I understand it, if I wanted to, say, specify the
binomial link (which seems appropriate for a proportion), lmer wants
the data to be the raw 1's and 0's, not the proportion estimate
obtained from EM.

I've heard that there are hierarchical mixture modelling methods out
there (possibly Bayesian hierarchical mixture modelling) that might
let me achieve model fitting and inference in one step (eg. model the
mixture and influence on each parameter from the between and
within-person variables, and treating people as random effects), but
I'm having trouble tacking down instructions on how to do this.

[[elided Yahoo spam]]

Cheers,

Mike

--
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University

Looking to arrange a meeting? Check my public calendar:
http://tr.im/mikes_public_calendar

~ Certainty is folly... I think. ~



------------------------------

Message: 28
Date: Mon, 22 Nov 2010 20:44:51 +0530
From: "Santosh Srinivas" <santosh.srinivas at gmail.com>
To: <r-help at r-project.org>
Subject: [R] Check for is.object
Message-ID: <4cea88f0.1f44960a.1cfe.fffff56a at mx.google.com>
Content-Type: text/plain;	charset="us-ascii"

Hello,

I am trying to iteratively append data from multiple files into a
common object.

For this, I am using, in a loop:

NewObject <- rbind(NewObject,tempObject)

For the first iteration there is obviously no NewObject yet, so I wanted
to do
NewObject <- tempObject[0,]

Then I want to put the statement "NewObject <- tempObject[0,]" inside an
if statement, so that it is skipped once NewObject has been initialized.

But is.object doesn't seem to work.

What alternative check can I do? And is there a better way to achieve
what I want?

Thanks,
S
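One common workaround, a sketch not from the thread: since rbind(NULL, x) simply returns x, the accumulator can start as NULL and no if statement or is.object check is needed. Alternatively, exists("NewObject") tests whether the variable has been created yet.

```r
# Accumulate rows in a loop without a special first-iteration case.
NewObject <- NULL
for (f in 1:3) {
  tempObject <- data.frame(file = f, value = f * 10)  # stand-in for file data
  NewObject  <- rbind(NewObject, tempObject)          # rbind(NULL, df) is df
}
print(NewObject)

# Equivalent check-based version:
# if (!exists("NewObject")) NewObject <- tempObject[0, ]
```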



------------------------------

Message: 29
Date: Mon, 22 Nov 2010 07:17:22 -0800 (PST)
From: Wonsang You <you at ifn-magdeburg.de>
To: r-help at r-project.org
Subject: [R]  Fast Two-Dimensional Optimization
Message-ID:
	<AANLkTikBQPjKypX7Y9X7sn3h41nO=r8WOcRtbr2dEO40 at mail.gmail.com>
Content-Type: text/plain


Dear R Helpers,

I have attempted to use the "optim" function to solve a two-dimensional
optimization problem. It took around 25 seconds to complete the procedure.
However, I want to reduce the computation time to less than 7 seconds. Is
there any optimization function in R which is very rapid?
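Without seeing the objective function it is hard to say; two standard options that often speed optim() up considerably are supplying an analytic gradient with method = "BFGS", or switching to nlminb(). A sketch on a stand-in objective (the Rosenbrock function, as in the optim() help page; your actual objective will differ):

```r
# Stand-in 2-D objective and its analytic gradient.
f  <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2
gr <- function(p) c(-2 * (1 - p[1]) - 400 * p[1] * (p[2] - p[1]^2),
                    200 * (p[2] - p[1]^2))

# An analytic gradient usually cuts the number of evaluations substantially.
res <- optim(c(-1.2, 1), f, gr, method = "BFGS")
print(res$par)  # near c(1, 1)
```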

Best Regards,
Wonsang


-----
Wonsang You
Leibniz Institute for Neurobiology
-- 
View this message in context:
http://r.789695.n4.nabble.com/R-Fast-Two-Dimensional-Optimization-tp3053782p3053782.html
Sent from the R help mailing list archive at Nabble.com.

	[[alternative HTML version deleted]]



------------------------------

Message: 30
Date: Mon, 22 Nov 2010 16:19:14 +0100
From: Łukasz Ręcławowicz <lukasz.reclawowicz at gmail.com>
To: Marcin Gomulka <mrgomel at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] plotting a timeline
Message-ID:
	<AANLkTimfz8Eo1GZouw5hf_ROvSVDzk8mgJGjLMwCb5d4 at mail.gmail.com>
Content-Type: text/plain

2010/11/20 Marcin Gomulka <mrgomel at gmail.com>

> I'd rather do this with a dedicated
>  package function ( like axis() ).


Probably you have to write your own function, or tune the plot up manually:

plot(the_data$eventtime, abs(the_data$impact), type="h", frame.plot=FALSE,
     axes=FALSE, xlab="", ylab="", col="grey", lwd=2, ylim=c(-2,2),
     xlim=c(1913,2005))
text(the_data$eventtime, the_data$impact+.1, the_data$label, cex=.6, adj=1)
lines(x=c(1914,2003), y=c(0,0), lwd=2, col="blue", t="l")
axis(1, the_data$eventtime, pos=0, cex.axis=.5, padj=-2, tck=-.01)
-- 
Have a nice day

	[[alternative HTML version deleted]]



------------------------------

Message: 31
Date: Mon, 22 Nov 2010 07:24:56 -0800
From: Patrick Leyshock <ngkbr8es at gmail.com>
To: Uwe Ligges <ligges at statistik.tu-dortmund.de>, r-help at r-project.org
Subject: Re: [R] memory profiling
Message-ID:
	<AANLkTim62ptR2idZzbLDTZ5Mtm31_09pzSAyjT6EkK_1 at mail.gmail.com>
Content-Type: text/plain

> Using:
>
>   summaryRprof(memory="both")
>
> did the trick, thank you.  I had not been using that setting when calling
> summaryRprof.
>
> Thanks, Patrick
>
> 2010/11/20 Uwe Ligges <ligges at statistik.tu-dortmund.de>
>
>
>>
>> On 19.11.2010 21:48, Patrick Leyshock wrote:
>>
>>> I'm trying to configure Version 2.12.0 or R to do memory profiling.
>>>
>>> I've reconfigured the code:
>>>
>>> % ./configure --enable-memory-profiling=YES
>>>
>>> and verified that it's configured correctly by examining the output.  I
>>> then
>>> rebuild R:
>>>
>>> % make
>>>
>>> Then I fire up R and run a script, using Rprof with the memory-profiling
>>> switch set to TRUE:
>>>
>>> Rprof("output", memory.profiling=TRUE);
>>> # a bunch of R code
>>> Rprof(NULL);
>>>
>>
>>
>> When I do
>>
>> summaryRprof(memory="both")
>>
>> I see an additional column ...
>>
>> but since you have not said what you tried exactly, we cannot help very
>> much.
>>
>> Uwe Ligges
>>
>>
>>
>>  When I examine the output, however, using either R CMD Rprof from the
>>> shell,
>>> or summaryRprof from within R, the output I see is identical to the
>>> output I
>>> got when I ran R BEFORE I recompiled with memory profiling enabled.
>>>
>>> Anyone see something that I'm missing?
>>>
>>> Thanks, Patrick
>>>
>>>        [[alternative HTML version deleted]]
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>

	[[alternative HTML version deleted]]



------------------------------

Message: 32
Date: Mon, 22 Nov 2010 12:41:06 -0300
From: Kjetil Halvorsen <kjetilbrinchmannhalvorsen at gmail.com>
To: Mike Marchywka <marchywka at hotmail.com>
Cc: r-help at r-project.org, ligges at statistik.tu-dortmund.de
Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
Message-ID:
	<AANLkTimevZO2TA=5tZj92eZBwd5pc7GaNzRXfO=Rskx=@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

see below.

On Mon, Nov 22, 2010 at 12:13 PM, Mike Marchywka <marchywka at hotmail.com>
wrote:
>
>
>
>
>
> ----------------------------------------
>> Date: Mon, 22 Nov 2010 12:03:54 -0300
>> From: kjetilbrinchmannhalvorsen at gmail.com
>> To: ligges at statistik.tu-dortmund.de
>> CC: r-help at r-project.org
>> Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
>>
>> see below.
>>
>> 2010/11/21 Uwe Ligges :
>> >
>> >
>> > On 21.11.2010 18:13, Kjetil Halvorsen wrote:
>>
>> >>> ?save.image
>> >>
>> >> And at this point it has been running with one cpu at 100% for over an
>> >> hour!
>> >
>> >
>> > It's OK to take an hour (due to memory <-> disc IO) if it uses swap
>> > space heavily. A factor of 60 is not much, given that memory is faster
>> > than hard discs by orders of magnitude.
>> >
>> > Uwe
>>
>> It takes much more than an hour! I started a process with the
>> problem again yesterday around 18:00 and had to kill it this morning
>> around 09:00. That's more than 15 hours.
>
>
> Again, see if you can run it under gdb, or at least look at what
> tools you have to determine page faults. My brain has been corrupted
> by 'dohs, but in Task Manager CPU usage drops when page faults or lock
> starvation etc. start. A blocking thread should yield, IIRC. Waiting
> for it to die a natural death may not be practical.
>
Thanks. Will try. Actually, I tried yesterday to run R under gdb within
emacs, but it didn't work out. What I did (in emacs 23) was type
Ctrl-u M-x R
and then enter the option
--debugger=gdb

[[elided Yahoo spam]]

Kjetil


> I just posted something on this after following another's suggestion, but
> it should be easy for you to get developer tools, execute gdb,
> point it at R and then break a few times. Debuggers don't speed anything
> up, but presumably it gets into its limit cycle (infinite futile loop)
> within a short time. Also, sometimes you get these loops due to memory
> corruption with native code etc., so confusing results may take a few
> different approaches to figure out.
>
> Turning on profiling will at best destroy any memory coherence and at worst
> add to VM thrashing. At least try to determine if you are faulting all
> over.
>
>
>>
>> Kjetil
>> >
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 33
Date: Mon, 22 Nov 2010 16:44:27 +0100
From: Martin Maechler <maechler at stat.math.ethz.ch>

Cc: r-help at r-project.org
Subject: Re: [R] An empty grey diagram
Message-ID: <19690.36827.343657.468114 at lynne.math.ethz.ch>
Content-Type: text/plain; charset=us-ascii


>>>>>     on Sat, 20 Nov 2010 00:12:07 -0800 (PST) writes:

    > Hi Josh and David,
    > Problem solved.

    > Both following steps work.

    > 1)
    >> ToothGrowth
    >> attach(ToothGrowth)
    >> plot(dose,len)   # this step is needed.  Don't close the diagram.
    >> Otherwise the following command won't work.

    >> matrics=lm(len~dose)
    >> abline(matrics)

    > Graph displayed

    > 2)
    >> ToothGrowth
    >> attach(ToothGrowth)
    >> plot(dose, len)   # this step is needed.  Don't close the diagram.
    >> Otherwise the following command won't work.

    >> abline(lm(len ~ dose, data = ToothGrowth))

    > Graph displayed

Well, it is  *VERY BAD*  style and error-prone practice  nowadays,
to attach a data set.
 [The only thing to attach() are  save()d data *files*;
  there it's often tidier to attach() instead of to load() ...]


I have not followed all the things you tried .. or did not try.

A much better way of achieving the above (and yes; *never* close
  the graphics window within these) should be

 plot (    len ~ dose, data=ToothGrowth)
 abline(lm(len ~ dose, data=ToothGrowth))

and you can see {if you use a fixed-width font in your e-mail,
    	    	 it "springs into your eyes"}
how nicely the formula notation of graphics aligns with
the same in models.

If the above two simple lines do not work correctly,
then your R environment is "broken" in some way,
and maybe first doing  
    rm(list = ls())
may help.

Martin Maechler, 
ETH Zurich and R Core Team


    > B.R.
    > Stephen L


    > ----- Original Message ----
    > From: Joshua Wiley <jwiley.psych at gmail.com>

    > Cc: r-help at r-project.org
    > Sent: Sat, November 20, 2010 1:39:45 PM
    > Subject: Re: [R] An empty grey diagram


wrote:
    >> Hi David,
    >> 
    >> 
    >>> What happens when you follow the directions... i.e. type:
    >>> plot.new()    #???
    >> 
    >> abline(lm(len ~ dose, data = ToothGrowth))
    >> plot.new()
    >> 
    >> The grey background changes to white, still an empty graph

    > You cannot just use abline() on an empty graphic (well, you can but
    > you get an empty graph).  Please actually run my code, it will create
    > a scatter plot, then add a line.  Do not close the graphic device in
    > between.

    > with(ToothGrowth, plot(dose, len))
    > abline(lm(len ~ dose, data = ToothGrowth))




    > ______________________________________________
    > R-help at r-project.org mailing list
    > https://stat.ethz.ch/mailman/listinfo/r-help
    > PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
    > and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 34
Date: Mon, 22 Nov 2010 10:57:06 -0500
From: Mike Marchywka <marchywka at hotmail.com>
To: <kjetilbrinchmannhalvorsen at gmail.com>
Cc: r-help at r-project.org, ligges at statistik.tu-dortmund.de
Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
Message-ID: <BLU113-W27D876C2B7D93140F1135BE3D0 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"









----------------------------------------
> Date: Mon, 22 Nov 2010 12:41:06 -0300
> Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
> From: kjetilbrinchmannhalvorsen at gmail.com
> To: marchywka at hotmail.com
> CC: ligges at statistik.tu-dortmund.de; r-help at r-project.org
>
> see below.
>
> On Mon, Nov 22, 2010 at 12:13 PM, Mike Marchywka wrote:
> >
> >
> Thanks. Will try. Actually, I tried yesterday to run R under gdb within
> emacs, but it didn't work out. What I did (in emacs 23) was type
> Ctrl-u M-x R
> and then enter the option
> --debugger=gdb
>
[[elided Hotmail spam]]
>
> Kjetil

I rarely use gdb, but it did seem to work with R; I executed gdb from a
cygwin window and, IIRC, Ctrl-C worked fine as it broke into the debugger.
I guess you could try that: start gdb and attach, or invoke R from gdb.


  		 	   		  


------------------------------

Message: 35
Date: Mon, 22 Nov 2010 11:02:55 -0500
From: Jonathan P Daily <jdaily at usgs.gov>
To: "Santosh Srinivas" <santosh.srinivas at gmail.com>
Cc: r-help at r-project.org, r-help-bounces at r-project.org
Subject: Re: [R] Check for is.object
Message-ID:
	<OF1BD5F51B.26988C10-ON852577E3.00581042-852577E3.005830E8 at usgs.gov>
Content-Type: text/plain; charset="US-ASCII"

I think you want the function ?exists

>if(!exists("NewObject"))
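
A minimal sketch of the loop the questioner described, with exists()
guarding the one-time initialization (the file names and the file-reading
step are stand-ins, not from the original post):

```r
for (f in c("a.csv", "b.csv", "c.csv")) {     # stand-ins for the real files
  tempObject <- data.frame(file = f, n = 1)   # pretend this came from read.csv(f)
  if (!exists("NewObject")) {
    NewObject <- tempObject[0, ]              # zero-row frame, matching columns
  }
  NewObject <- rbind(NewObject, tempObject)
}
nrow(NewObject)                               # 3
```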

--------------------------------------
Jonathan P. Daily
Technician - USGS Leetown Science Center
11649 Leetown Road
Kearneysville WV, 25430
(304) 724-4480
"Is the room still a room when its empty? Does the room,
 the thing itself have purpose? Or do we, what's the word... imbue it."
     - Jubal Early, Firefly

r-help-bounces at r-project.org wrote on 11/22/2010 10:14:51 AM:

> [image removed] 
> 
> [R] Check for is.object
> 
> Santosh Srinivas 
> 
> to:
> 
> r-help
> 
> 11/22/2010 10:17 AM
> 
> Sent by:
> 
> r-help-bounces at r-project.org
> 
> Hello,
> 
> I am trying to recursively append some data from multiple files into a
> common object
> 
> For this, I am using in a loop
> 
> NewObject <- rbind(NewObject,tempObject)
> 
> 
> For the first loop, obviously there is no NewObject ... so I wanted to do
> NewObject <- tempObject[0,]
>
> Now when it loops again, I want to put the statement "NewObject <-
> tempObject[0,]" inside an if statement ... so that I can skip it once
> NewObject has been initialized.
> 
> But, is.object doesn't seem to work. 
> 
> What is the alternative check that I can do? And is there a better way to
> achieve what I want?
> 
> Thanks,
> S
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 36
Date: Mon, 22 Nov 2010 08:07:24 -0800 (PST)
From: Berend Hasselman <bhh at xs4all.nl>
To: r-help at r-project.org
Subject: Re: [R] solve nonlinear equation using BBsolve
Message-ID: <1290442044850-3053902.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


After my reply you sent me the following privately:



> Thank you for your response.  Now I adjust the parameters to be close
> to the values that I expected, and it gives me the following message.  I
> tried a few combinations.  My question is: why does it say "Unsuccessful
> convergence" but still give some answers?  Can I use the answers?
>  
>  
>> p0 <- c(2.5,25,25,0.8)
>> mgf_gammasum(p0)
> [1]   -17.3600000 -3410.6900000     0.3508769    -1.1028287
>> BBsolve(par = p0, fn = mgf_gammasum)[[1]]
>   Unsuccessful convergence.
> [1]  2.066909 44.068739 24.809270  0.972542
>> source(.trPaths[5], echo=TRUE, max.deparse.length=10000)
>> p0 <- c(1.7,36,50,0.9)
>> mgf_gammasum(p0)
> [1]    3.8400000 2601.0300000    0.7232021    0.2866732
>> BBsolve(par = p0, fn = mgf_gammasum)[[1]]
>   Unsuccessful convergence.
> [1]  2.0823407 18.3757502 49.9935914  0.9456666
>> p0 <- c(2,40,40,0.8)
>> mgf_gammasum(p0)
> [1]   17.6400000 2798.7100000    0.4883676   -0.5653881
>> BBsolve(par = p0, fn = mgf_gammasum)[[1]]
>   Unsuccessful convergence.
> [1]  2.059853 29.215478 39.882727  0.914894
> 


It is only giving you the values it stopped at.
You are only printing  [[1]]  of BBsolve's result.
BBsolve provides more information.
You can easily check if the result is usable.

Do something like this.

p0 <- c(2.5,25,25,0.8)
bb.result <- BBsolve(par = p0, fn = mgf_gammasum)
bb.result
mgf_gammasum(bb.result$par) 

You will see that BBsolve has NOT found a solution.

If you use nleqslv as follows you will see that the jacobian matrix in your
starting point is very ill-conditioned.

nleqslv(p0,mgf_gammasum)

All your other starting points have similar problems.
You really need to rethink your system of equations.

In future please also reply to the list and not only to me privately.

best

Berend
-- 
View this message in context:
http://r.789695.n4.nabble.com/solve-nonlinear-equation-using-BBsolve-tp3052167p3053902.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 37
Date: Mon, 22 Nov 2010 11:07:43 -0500
From: Ista Zahn <izahn at psych.rochester.edu>

Cc: R Project Help <R-help at r-project.org>
Subject: Re: [R] Alternatives to image(...) and filled.contour(...)
	for 2-D filled Plots
Message-ID:
	<AANLkTi=9Z+ozud2YRj4C2Jqu5xRJGph4vSusQw4UbRHp at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi Jason,
You do not say what you want the alternative to do, so it's hard to
know if this will be helpful. But one alternative is

# melt() comes from the reshape (later reshape2) package; ggplot2 is used below
library(reshape2)
library(ggplot2)

dat <- as.data.frame(ak.fan)
dat <- melt(dat, id.vars=c("x", "y"))

p <- ggplot(dat, aes(x=x, y=variable))
p + geom_tile(aes(fill=value))

-Ista


wrote:
>
> By any chance, are there any alternatives to image(...) and
> filled.contour(...)?
>
> I used Rseek to search for that very topic, but didn't turn up any
> leads...
>
> http://www.rseek.org/?cx=010923144343702598753%3Aboaz1reyxd4&newwindow=1&q=alternative+to+image+and+filled.contour&sa=Search&cof=FORID%3A11&siteurl=www.rseek.org%252F#1238
>
>
> I'm sure there are some out there, but I'm curious about some of the
> favorites and ones folks have had success using.
>
>
> Thanks for any insights and feedback.
>
> I would like to use the alternative 2-D fill function with the example I
> have been messing with, in place of image(...) or filled.contour(...):
>
>
>
> library(akima)
>
> hyp_distance<-seq(1,15)
> angle_deg_val<-seq(0,15)
>
>
> x_distance_val<-NULL
> y_distance_val<-NULL
>
> for(ii in 1:length(hyp_distance))
> {
>     for(jj in 1:length(angle_deg_val))
>     {
>         x_distance_tmp<-hyp_distance[ii]*cos(angle_deg_val[jj]*pi/180)
>         y_distance_tmp<-hyp_distance[ii]*sin(angle_deg_val[jj]*pi/180)
>
>         x_distance_val<-c(x_distance_val, x_distance_tmp)
>         y_distance_val<-c(y_distance_val, y_distance_tmp)
>     }
> }
>
>
> temperature_vals<-rnorm(length(x_distance_val), 75, 2)
>
> temp_samples<-cbind(x_distance_val, y_distance_val, temperature_vals)
>
> temp_samples_DF<-data.frame(x = x_distance_val, y = y_distance_val, z =
> temperature_vals)
>
>
> ak.fan <- interp(temp_samples[,1], temp_samples[,2], temp_samples[,3] )
>
> length_val<-floor(max(temperature_vals) - min(temperature_vals))*2
>
> color_vals_red_to_yellow_to_green<-colorRampPalette(c("red", "yellow",
>     "green"), space="Lab")(length_val)
> color_vals_green_to_yellow_to_red<-colorRampPalette(c("green", "yellow",
>     "red"), space="Lab")(length_val)
>
> plot(1,1, col = 0, xlim = c(min(x_distance_val), max(x_distance_val)),
>     ylim = c(min(y_distance_val), max(y_distance_val)),
>     xlab = "Room X Position (FT)", ylab = "Room Y Position (FT)",
>     main = "Room Temp vs Position")
>
> grid()
>
> # filled.contour(ak.fan, col = color_vals_red_to_yellow_to_green)
> # filled.contour(ak.fan, col = color_vals_green_to_yellow_to_red)
>
> # image(ak.fan, col = color_vals_red_to_yellow_to_green, add = TRUE)
> image(ak.fan, col = color_vals_green_to_yellow_to_red, add = TRUE)
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org



------------------------------

Message: 38
Date: Mon, 22 Nov 2010 11:09:10 -0500
From: Matt Shotwell <shotwelm at musc.edu>
To: "martin.tomko at geo.uzh.ch" <martin.tomko at geo.uzh.ch>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] statistical test for comparison of two
	classifications (nominal)
Message-ID: <1290442150.1756.74.camel at matt-laptop>
Content-Type: text/plain; charset="UTF-8"

Martin,

Pardon the delayed reply.

Bootstrap methods have been around for some time (late seventies?), but
their popularity seems to have exploded in correspondence with computing
technology. You should be able to find more information in most modern
books on statistical inference, but here is a brief:

The bootstrap is a method often used to establish an empirical null
distribution for a test statistic when traditional (analytical) methods
fail. The bootstrap works by imposing a null hypothesis on the observed
data, followed by re-sampling with replacement. The test statistic is
computed at each re-sample and used to build up an empirical null
distribution. The idea is to impose the null hypothesis while preserving
variability in the observed data, and thus the test statistic.

For example, suppose we observe some continuous scalar data and
hypothesize that the sample was observed from a population with mean
zero. We can impose this hypothesis by subtracting the sample mean from
each observation. Re-samples from these transformed data are treated as
having been observed under the null hypothesis.
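
In R, the mean-zero example above might be sketched like this (illustrative
only; the sample size, seed, and choice of t-like statistic are arbitrary):

```r
set.seed(1)
x <- rnorm(30, mean = 0.5)                    # observed sample
t_stat <- function(s) mean(s) / (sd(s) / sqrt(length(s)))
t_obs <- t_stat(x)

x0 <- x - mean(x)                             # impose H0: population mean zero
t_null <- replicate(2000, t_stat(sample(x0, replace = TRUE)))

p_value <- mean(abs(t_null) >= abs(t_obs))    # empirical two-sided p-value
```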

In the case of classification and partitioning, the difficulty is
formulating a meaningful null hypothesis about the collection of
classifications, and imposing the null hypothesis in a bootstrap
sampling scheme.

-Matt

On Wed, 2010-11-17 at 10:01 -0500, Martin Tomko wrote: 
> Thanks Mat,
> I have in the meantime identified the Rand index, but not the others. I 
> will also have a look at profdpm, which did not pop up in my searches.
> Indeed, the interpretation is going to be critical... Could you please 
> elaborate on what you mean by the bootstrap process?
> 
> Thanks a lot for your help,
> Martin
> 
> On 11/17/2010 3:50 PM, Matt Shotwell wrote:
> > There are several statistics used to compare nominal classifications, or
> > _partitions_ of a data set. A partition isn't quite the same in this
> > context because partitioned data are not restricted to a fixed number of
> > classes. However, the statistics used to compare partitions should also
> > work for these 'restricted' partitions. See the Rand index, Fowlkes and
> > Mallows index, Wallace indices, and the Jaccard index. The profdpm
> > package implements a function (?profdpm::pci) that computes these
> > indices for two factors representing partitions of the same data.
> >
> > The difficult part is drawing statistical inference about these indices.
> > It's difficult to formulate a null hypothesis, and even more difficult
> > to determine a null distribution for a partition comparison index. A
> > bootstrap test might work, but you will probably have to implement this
> > yourself.
> >
> > -Matt
> >
> > On Wed, 2010-11-17 at 08:33 -0500, Martin Tomko wrote:
> >    
> >> Dear all,
> >> I am having a hard time to figure out a suitable test for the match
> >> between two nominal classifications of the same set of data.
> >> I have used hierarchical clustering with multiple methods (ward,
> >> k-means,...) to classify my data into a set number of classes, and I
> >> would like to compare the resulting automated classification with the
> >> actual - objective benchmark one.
> >> So in principle I have a data frame with n columns of nominal
> >> classifications, and I want to do a mutual comparison and test for
> >> significance in difference in classification between pairs of columns.
> >>
> >> I just need to identify a suitable test, but I fail. I am currently
> >> exploring the possibility of using Cohen's kappa, but I am open to other
> >> suggestions. In particular, the fact that kappa seems to be mostly used on
> >> fallible, human annotators seems to bring in limitations that do not
> >> apply to my automatic classification.
> >> Any help will be appreciated, especially if also followed by a pointer
> >> to an R package that implements it.
> >>
> >> Thanks
> >> Martin
> >>
> >> ______________________________________________
> >> R-help at r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >>      
> >    
> 
> 

-- 
Matthew S. Shotwell
Graduate Student 
Division of Biostatistics and Epidemiology
Medical University of South Carolina



------------------------------

Message: 39
Date: Mon, 22 Nov 2010 08:58:30 -0800 (PST)
From: Phil Spector <spector at stat.berkeley.edu>
To: Santosh Srinivas <santosh.srinivas at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Check for is.object
Message-ID:
	<alpine.DEB.2.00.1011220852200.14043 at springer.Berkeley.EDU>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

Santosh -
    The simple answer to your question is to initialize 
NewObject to NULL before the loop, i.e.

    NewObject = NULL

However, I should point out this is one of the most 
inefficient ways to program in R.  A far better way is
to allocate enough space for NewObject outside your
loop, and "fill it in" in the loop.  Here's a simple
example to give you an idea of the difference in time
the two methods require:

> system.time({answer = matrix(NA,1000,5);
+              for(i in 1:1000)answer[i,] <- sample(10,5)})
    user  system elapsed
   0.020   0.000   0.017 
> system.time({answer=NULL;
+              for(i in 1:1000)answer=rbind(answer,sample(10,5))})
    user  system elapsed
   0.072   0.000   0.070

However, it gets even worse if the sample size is larger:

> system.time({answer = matrix(NA,10000,5);
+              for(i in 1:10000)answer[i,] <- sample(10,5)})
    user  system elapsed
   0.184   0.000   0.184 
> system.time({answer=NULL;for(i in 1:10000)
+              answer=rbind(answer,sample(10,5))})
    user  system elapsed
   5.492   0.032   5.562

Even if you don't know how big your newObject matrix will
become, it's still far more efficient to overallocate the 
matrix and then truncate it at the end.
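
If the final size is unknown, the overallocate-and-truncate pattern looks
roughly like this (the bound n_max and the stopping condition are assumed
for illustration, not part of the original example):

```r
n_max <- 1000                       # generous upper bound on the row count
answer <- matrix(NA, n_max, 5)
used <- 0
while (used < 37) {                 # pretend chunks arrive until some condition
  used <- used + 1
  answer[used, ] <- sample(10, 5)   # fill the next preallocated row
}
answer <- answer[seq_len(used), , drop = FALSE]  # truncate unused rows
dim(answer)                         # 37 5
```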

I'd strongly recommend that you avoid building your matrix
[[elided Yahoo spam]]

 					- Phil Spector
 					 Statistical Computing Facility
 					 Department of Statistics
 					 UC Berkeley
 					 spector at stat.berkeley.edu


On Mon, 22 Nov 2010, Santosh Srinivas wrote:

> Hello,
>
> I am trying to recursively append some data from multiple files into a
> common object
>
> For this, I am using in a loop
>
> NewObject <- rbind(NewObject,tempObject)
>
>
> For the first loop, obviously there is no NewObject ... so I wanted to do
> NewObject <- tempObject[0,]
>
> Now when it loops again, I want to put the statement "NewObject <-
> tempObject[0,]" inside an if statement ... so that I can skip it once
> NewObject has been initialized.
>
> But, is.object doesn't seem to work.
>
> What is the alternative check that I can do? And is there a better way to
> achieve what I want?
>
> Thanks,
> S
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



------------------------------

Message: 40
Date: Mon, 22 Nov 2010 12:04:59 -0500
From: Ista Zahn <izahn at psych.rochester.edu>
To: madr <madrazel at interia.pl>
Cc: r-help at r-project.org
Subject: Re: [R] "negative alpha" or custom gradient colors of data
	dots in scatterplot ?
Message-ID:
	<AANLkTikVspi-vK+qeB+QmH_qvqFi2+T2deeKhqyLxjTJ at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi,
I suggest taking a look at the plotting functions in the ggplot2
package. For example:


x <- rnorm(10000)
y <- x+rnorm(10000)
dat <- data.frame(x,y)

library(ggplot2)

p <- ggplot(dat, aes(x=x, y=y))
p + geom_point() # too much overplotting: compare to
dev.new()
p + geom_hex(binwidth=c(.1,.1))

Best,
Ista

On Sun, Nov 21, 2010 at 9:13 AM, madr <madrazel at interia.pl> wrote:
>
> I know that by setting alpha, for example col = rgb(0, 0, 0, 0.1), it is
> possible to see how much overlap there is in the plot. But a disadvantage
> is that single points are barely visible against the background. So I wonder
> if it is possible to make a setting so that single points would be almost
> black, but with more and more data on the same spot they would get more and
> more whitish. Or maybe it is possible to make lone data points black, but
> have overlapping ones tend to some particular color of choice?
> --
> View this message in context:
http://r.789695.n4.nabble.com/negative-alpha-or-custom-gradient-colors-of-data-dots-in-scatterplot-tp3052394p3052394.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org



------------------------------

Message: 41
Date: Mon, 22 Nov 2010 18:16:42 +0100
From: wphantomfr <wphantomfr at gmail.com>
To: r-help at r-project.org
Subject: [R] Problem setting the number of digits in xtable
Message-ID: <8E449FB3-99DF-4BC3-A1DA-EBDC075F6D0A at gmail.com>
Content-Type: text/plain

Dear list members,

I am currently using Sweave with LaTeX which is great.

I can use xtable for formatting output of tables, but I have a problem setting
the number of decimals in xtable when used with a data frame.

I have found an example on the net with a matrix, and it works.

For example this works :
> > tmp <- matrix(rnorm(9), 3, 3) 
> > xtmp <- xtable(tmp)
> > digits(xtmp) <- c(0,0,3,4)
> > print(xtmp, include.rownames = FALSE) # row names

produced :
> % latex table generated in R 2.12.0 by xtable 1.5-6 package
> % Mon Nov 22 17:35:00 2010
> \begin{table}[ht]
> \begin{center}
> \begin{tabular}{rrr}
>   \hline
> 1 & 2 & 3 \\ 
>   \hline
> -2 & -2.158 & 2.8886 \\ 
>   1 & 1.330 & 0.4677 \\ 
>   -0 & 0.486 & -0.3319 \\ 
>    \hline
> \end{tabular}
> \end{center}
> \end{table}



But this won't work :
> > mydata
>   	TEST                t ddl                   p CONDITION
> 2   R1 3.01109061083632  16 0.00828552765650315        C1
> 3   R2 3.30476953908811  16 0.00447412002109504        C1
> 4   DR 2.86343993410509  16  0.0112631908739966        C1
> 5   R1 1.05386387510206  16    0.30760068470456        C2
> 6   R2 3.04997140665209  16 0.00763921045771104        C2
> 7   DR 2.25175987512241  16  0.0387401575011488        C2

but 
> > xtable(mydata,digits=2)

produced

> % latex table generated in R 2.12.0 by xtable 1.5-6 package
> % Mon Nov 22 18:13:47 2010
> \begin{table}[ht]
> \begin{center}
> \begin{tabular}{rlllll}
>   \hline
>  & TEST & t & ddl & p & CONDITION \\ 
>   \hline
> 2 & R1 & 3.01109061083632 & 16 & 0.00828552765650315 & C1 \\ 
>   3 & R2 & 3.30476953908811 & 16 & 0.00447412002109504 & C1 \\ 
>   4 & DR & 2.86343993410509 & 16 & 0.0112631908739966 & C1 \\ 
>   5 & R1 & 1.05386387510206 & 16 & 0.30760068470456 & C2 \\ 
>   6 & R2 & 3.04997140665209 & 16 & 0.00763921045771104 & C2 \\ 
>   7 & DR & 2.25175987512241 & 16 & 0.0387401575011488 & C2 \\ 
>    \hline
> \end{tabular}
> \end{center}
> \end{table}




I have also tried setting the digits with c(0,0,4,0,4,0), and also using the
'display' argument to specify the type of each column... no way...


What am I missing ?


Thanks in advance

Sylvain Climent


	[[alternative HTML version deleted]]



------------------------------

Message: 42
Date: Mon, 22 Nov 2010 18:33:07 +0100 (CET)
From: omerle <omerle at laposte.net>
To: r-help at r-project.org
Subject: [R] RCurl : All connection are used ?
Message-ID: <1277930.342.1290447187435.JavaMail.www at wwinf8202>
Content-Type: text/plain

Hi everybody,

I got a problem with the ftpUpload function from the RCurl package. My goal
is to Upload a lot of files from a local directory to a web server.

1st try :

for (i in 1:length(files)){
    ftpUpload(what=files[i], to=files[i])
}
At i=11 I get (my server allows only 10 open connections):
Erreur dans curlPerform(url = to, upload = TRUE, readfunction =
uploadFunctionHandler(what,  :
  Got a 421 ftp-server response when 220 was expected

2 nd Try :

ftpConnection=getCurlHandle(userpwd=ftp$userpwd, maxconnects=1,
    fresh.connect=0)
for (i in 1:length(files)){
    ftpUpload(what=files[i], to=files[i], curl=ftpConnection)
}
And I got this error after 30 files (the error is linked not to the web
server but to the R session):
Error in file(file, "rb") : all connections are in use

I read the documentation and the options of the curl library, but I can't
find how to solve my problem, even though I think it is linked to not
closing the connections I opened. Do you have any idea how to solve it?

Thanks,

Olivier Merle


A free e-mail account, guaranteed for life, with extra services. Interested?
I'm creating my mailbox at www.laposte.net

	[[alternative HTML version deleted]]



------------------------------

Message: 43
Date: Mon, 22 Nov 2010 18:47:16 +0100
From: Lucia Cañas <lucia.canas at co.ieo.es>
To: <r-help at R-project.org>
Subject: [R] sm.ancova graphic
Message-ID:
	<50EB6473669C6741AC34FF5692DCC4560DA79E at ieocoruna.co.ieo.es>
Content-Type: text/plain


Hi R-Users,

I am working with sm.ancova (in the package sm) and I have two problems with
the graph, which is automatically generated when sm.ancova() is run.

1 - Besides the fitted lines, the observed data appear automatically in
the graph. I would prefer that only the fitted lines appear. I checked
sm.options, but I could not find a way to keep the observed data out of
the graph.

2 - I would like to change the size of the numbers on the axes. Again, I
checked sm.options, but I could not find the correct way.



Thank you in advance,

Lucía

	[[alternative HTML version deleted]]



------------------------------

Message: 44
Date: Mon, 22 Nov 2010 18:51:04 +0100
From: JiHO <jo.lists at gmail.com>
To: R Help <r-help at stat.math.ethz.ch>
Subject: [R] Plotting a cloud/fog of variable density in rgl
Message-ID:
	<AANLkTi=eYZV6xjQdKHnP1ak0yO4VQDrPMp_hRpMX=d6Z at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hi everyone,

I want to plot a 3D interpolation of the concentration of aquatic
organisms. My goal would be to have the result represented as clouds
with a density proportional to the abundance of organisms, so that I
could fly (well, swim actually ;) ) through the scene and see the
patches here and there. Basically, I want to do something like this:
	http://www.youtube.com/watch?v=27mo_Y-aU-c
but simpler and with only clouds.

I thought about doing it this way:
1- interpolate to a fine grid
2- plot points at each grid intersection of transparency inversely
proportional to abundance
3- blur/fog a bit each point to create the general impression of a cloud

So far I am stuck on 3 but maybe there is a better overall solution.
Here is some code that reads the result of the interpolation on a
coarse grid and plots it:

	# read a set of gridded data points in 3D
	d = read.table("http://dl.dropbox.com/u/1047321/R/test3Ddata.txt", header=T)

	# plot (note: alpha is not defined in this first snippet; pick a fixed value)
	library("rgl")
	alpha = 0.2
	spheres3d(d$x, d$y, d$z, alpha=alpha, radius=0.05)

And here is a version that actually performs the interpolation on a
random set of points in 3D through kriging, in case you want to try
with increased precision.

	# create a set of random data points in 3D
	n = 50
	data3D = data.frame(x = runif(n), y = runif(n), z = runif(n), v = rnorm(n))

	# do 3d interpolation via kriging
	library("gstat")
	coordinates(data3D) = ~x+y+z
	range1D = seq(from = 0, to = 1, length = 10)
	grid3D = expand.grid(x = range1D, y = range1D, z = range1D)
	gridded(grid3D) = ~x+y+z
	res3D = krige(formula = v ~ 1, data3D, grid3D, model = vgm(1, "Exp", .2))

	# convert the result to a data.frame
	d = as.data.frame(res3D)

	# compute transparency (proportional to the interpolated value)
	maxD = max(d$var1.pred)
	minD = min(d$var1.pred)
	alpha = (d$var1.pred - minD)/(maxD - minD)
	# reduce maximum alpha (all points are semi-transparent)
	alpha = alpha/5

	# plot
	library("rgl")
	spheres3d(d$x, d$y, d$z, alpha=alpha, radius=0.05)


I saw the fog effect but it seems to add a fog in the scene to
increase depth. What I want is my scene to actually look like a fog.

Thanks in advance for any help. Sincerely,

JiHO
---
http://maururu.net



------------------------------

Message: 45
Date: Mon, 22 Nov 2010 13:13:55 -0500
From: Mike Marchywka <marchywka at hotmail.com>
To: <omerle at laposte.net>, <r-help at r-project.org>
Subject: Re: [R] RCurl : All connection are used ?
Message-ID: <BLU113-W1133131680D4E669D5227EBE3D0 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"


I guess I would just comment that for many tasks I try to keep the work
in dedicated tools: in this case, command-line versions of curl or even
wget. The reason is things like this that come up when talking to foreign
entities. Also, the learning curve can be amortized over many other
efforts that require the same file transfers etc.
I just mention this to invite rebuttals from R experts.




----------------------------------------
From: omerle at laposte.net
To: r-help at r-project.org
Date: Mon, 22 Nov 2010 18:33:07 +0100
Subject: [R] RCurl : All connection are used ?


Hi everybody,

I got a problem with the ftpUpload function from the RCurl package. My goal
is to Upload a lot of files from a local directory to a web server.

1st try :

for (i in seq_along(files)) {
  ftpUpload(what = files[i], to = files[i])
}
At i=11 I get the following (my server has only 10 open connections
available):
Erreur dans curlPerform(url = to, upload = TRUE, readfunction =
uploadFunctionHandler(what, :
Got a 421 ftp-server response when 220 was expected

2 nd Try :

ftpConnection = getCurlHandle(userpwd = ftp$userpwd, maxconnects = 1,
                              fresh.connect = 0)
for (i in seq_along(files)) {
  ftpUpload(what = files[i], to = files[i], curl = ftpConnection)
}
And I got this error after 30 files (the error is not linked to the web
server but to the R session):
Error in file(file, "rb") : all connections are in use

I read the documentation and the options of the curl library, but I can't
find how to solve my problem, even though I think it is linked to not
closing the connections I opened. Do you have any idea how to solve it?
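[A sketch of one workaround, not from the thread itself: reuse a single curl
handle and force garbage collection inside the loop so that any local file
connections left open by each upload are released. The directory name and
credentials are placeholders.]

```r
library(RCurl)
files <- list.files("upload_dir")              # hypothetical local directory
h <- getCurlHandle(userpwd = "user:password")  # reuse one curl handle
for (f in files) {
  ftpUpload(what = f, to = paste0("ftp://ftp.example.com/", f), curl = h)
  gc()  # release any file connections left open by the upload
}
```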

Thanks,

Olivier Merle



[[alternative HTML version deleted]]


______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 46
Date: Mon, 22 Nov 2010 10:35:02 -0800 (PST)
From: Manta <mantino84 at libero.it>
To: r-help at r-project.org
Subject: Re: [R] Ordeing Zoo object
Message-ID: <1290450902168-3054192.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


And what if I want to order the series from the smallest to the largest
value, keeping the date index, in order to see when the values were
predominantly negative, etc.?
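[A minimal sketch of one possible approach, not from the thread: sort the
core data of the zoo object and carry the index along in a data frame. The
values and dates below are toy examples.]

```r
library(zoo)
z <- zoo(c(3, -1, 2, -5), as.Date("2010-11-01") + 0:3)
ord <- order(coredata(z))
# keep each value paired with its original date
data.frame(date = index(z)[ord], value = coredata(z)[ord])
```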

Thanks,
Marco
-- 
View this message in context:
http://r.789695.n4.nabble.com/Ordeing-Zoo-object-tp955868p3054192.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 47
Date: Mon, 22 Nov 2010 13:55:13 -0500
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: JiHO <jo.lists at gmail.com>
Cc: R Help <r-help at stat.math.ethz.ch>
Subject: Re: [R] Plotting a cloud/fog of variable density in rgl
Message-ID: <4CEABC91.1080402 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 22/11/2010 12:51 PM, JiHO wrote:
> Hi everyone,
>
> I want to plot a 3D interpolation of the concentration of aquatic
> organisms. My goal would be to have the result represented as clouds
> with a density proportional to the abundance of organisms, so that I
> could fly (well, swim actually ;) ) through the scene and see the
> patches here and there. Basically, I want to do something like this:
> 	http://www.youtube.com/watch?v=27mo_Y-aU-c
> but simpler and with only clouds.

rgl doesn't make everything in OpenGL available.  I'm not sure exactly 
how those clouds were done, but it wouldn't really be easy to do them in 
rgl.

I think you can come closest to what you want within rgl by using 
sprites rather than rendering transparent spheres.  See 
example(sprites3d).
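[For instance, the plotting step from the original code could be rewritten
with sprites roughly like this -- a sketch assuming `d` and `alpha` as
computed earlier in the thread:]

```r
library(rgl)
# point sprites are camera-facing quads; with low alpha they blend
# into a cloud-like haze more readily than shaded spheres
sprites3d(d$x, d$y, d$z, alpha = alpha, radius = 0.05,
          lit = FALSE, color = "white")
```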

Duncan Murdoch

> I though about doing it this way:
> 1- interpolate to a fine grid
> 2- plot points at each grid intersection of transparency inversely
> proportional to abundance
> 3- blur/fog a bit each point to create the general impression of a cloud
>
> So far I am stuck on 3 but maybe there is a better overall solution.
> Here is some code that reads the result of the interpolation on a
> coarse grid and plots it:
>
> 	# read a set of gridded data points in 3D
> 	d = read.table("http://dl.dropbox.com/u/1047321/R/test3Ddata.txt",
header=T)
>
> 	# plot
> 	library("rgl")
> 	spheres3d(d$x, d$y, d$z, alpha=alpha, radius=0.05)
>
> And here is a version that actually performs the interpolation a
> random set of points in 3D through kriging in case you want to try
> with increase precision.
>
> 	# create a set of random data points in 3D
> 	n = 50
> 	data3D = data.frame(x = runif(n), y = runif(n), z = runif(n), v =
rnorm(n))
>
> 	# do 3d interpolation via kriging
> 	library("gstat")
> 	coordinates(data3D) = ~x+y+z
> 	range1D = seq(from = 0, to = 1, length = 10)
> 	grid3D = expand.grid(x = range1D, y = range1D, z = range1D)
> 	gridded(grid3D) = ~x+y+z
> 	res3D = krige(formula = v ~ 1, data3D, grid3D, model = vgm(1, "Exp",
.2))
>
> 	# convert the result to a data.frame
> 	d = as.data.frame(res3D)
>
> 	# compute transparency (proportional to the interpolated value)
> 	maxD = max(d$var1.pred)
> 	minD = min(d$var1.pred)
> 	alpha = (d$var1.pred - minD)/(maxD - minD)
> 	# reduce maximum alpha (all points are semi-transparent)
> 	alpha = alpha/5
>
> 	# plot
> 	library("rgl")
> 	spheres3d(d$x, d$y, d$z, alpha=alpha, radius=0.05)
>
>
> I saw the fog effect but it seems to add a fog in the scene to
> increase depth. What I want is my scene to actually look like a fog.
>
> Thanks in advance for any help. Sincerely,
>
> JiHO
> ---
> http://maururu.net
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 48
Date: Mon, 22 Nov 2010 12:01:26 -0700
From: Greg Snow <Greg.Snow at imail.org>
To: Sonja Klein <sonja.klein.07 at aberdeen.ac.uk>,
	"r-help at r-project.org"	<r-help at r-project.org>
Subject: Re: [R] How to produce a graph of glms in R?
Message-ID:
	<B37C0A15B8FB3C468B5BC7EBC7DA14CC633FEC591B at LP-EXMBVS10.CO.IHC.COM>
Content-Type: text/plain; charset="us-ascii"

Look at Predict.Plot (and possibly TkPredict) in the TeachingDemos package.
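In base R the same idea can also be sketched by hand: predict over a grid of
distances with the other covariates held fixed. (The data frame `birds` and
the distance range here are hypothetical, standing in for the poster's data.)

```r
# hedged sketch: predicted sighting probability vs. distance
fit <- glm(Response ~ NEdist + I(NEdist^2) + Distance + I(Distance^2),
           family = binomial, data = birds)        # 'birds' is made up
nd <- data.frame(Distance = seq(0, 500, length.out = 100),
                 NEdist   = median(birds$NEdist))  # hold NEdist fixed
plot(nd$Distance, predict(fit, newdata = nd, type = "response"),
     type = "l", xlab = "Distance", ylab = "P(sighting)")
```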

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
801.408.8111


> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> project.org] On Behalf Of Sonja Klein
> Sent: Saturday, November 20, 2010 4:52 AM
> To: r-help at r-project.org
> Subject: [R] How to produce a graph of glms in R?
> 
> 
> I'm very new to R and modeling but need some help with visualization of
> glms.
> 
> I'd like to make a graph of my glms to visualize the different effects
> of
> different parameters.
> I've got a binary response variable (bird sightings) and use binomial
> glms.
> The 'main' response variable is a measure of distance to a track and
> the
> parameters I'm testing for are vegetation parameters that effect the
> response in terms of distance.
> My glm is: glm(Response~NEdist+I(NEdist^2)+Distance+I(Distance^2) which
> is
> the basic model and where I add interactions to, like for exampls
> Visibility
> as an interaction to Distance
> (glm(Response~NEdist+I(NEdist^2)+Distance*Visibility+I(Distance^2)))
> 
> I'd now like to make a graph which has the response variable on the y-
> axis
> (obviously). But the x-axis should have distance on it. The NEdist is a
> vector that is just co-influencing the curve and has to stay in the
> model
> but doesn't have any interactions with any other vectors.
> I'd then like to put in curves/lines for the different models to see if
> for
> example visibility effects the distance of the track to the first bird
> sighting.
> 
> Is there a way to produce a graph in R that has these features?
> --
> View this message in context: http://r.789695.n4.nabble.com/How-to-
> produce-a-graph-of-glms-in-R-tp3051471p3051471.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 49
Date: Mon, 22 Nov 2010 14:05:52 -0500
From: Mike Marchywka <marchywka at hotmail.com>
To: <murdoch.duncan at gmail.com>, <jo.lists at gmail.com>
Cc: r-help at stat.math.ethz.ch
Subject: Re: [R] Plotting a cloud/fog of variable density in rgl
Message-ID: <BLU113-W6EEA06C04558173792A5ABE3D0 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"







----------------------------------------
> Date: Mon, 22 Nov 2010 13:55:13 -0500
> From: murdoch.duncan at gmail.com
> To: jo.lists at gmail.com
> CC: r-help at stat.math.ethz.ch
> Subject: Re: [R] Plotting a cloud/fog of variable density in rgl
>
> On 22/11/2010 12:51 PM, JiHO wrote:
> > Hi everyone,
> >
> > I want to plot a 3D interpolation of the concentration of aquatic
> > organisms. My goal would be to have the result represented as clouds
> > with a density proportional to the abundance of organisms, so that I
> > could fly (well, swim actually ;) ) through the scene and see the
> > patches here and there. Basically, I want to do something like this:
> > http://www.youtube.com/watch?v=27mo_Y-aU-c
> > but simpler and with only clouds.
>
> rgl doesn't make everything in OpenGL available. I'm not sure exactly
> how those clouds were done, but it wouldn't really be easy to do them in
> rgl.
>
> I think you can come closest to what you want within rgl by using
> sprites rather than rendering transparent spheres. See
> example(sprites3d).
>
If you only have two things with simple properties, namely point emitters
as your organisms and a uniform concentration of transparent scatterers
(the fog), you can probably derive geometrical-optics expressions for the
ray-trace results and just integrate those over your source distribution.
This should be reasonably easy in R. I haven't been to SIGGRAPH since 1983,
so I can't help much, but you can probably find analytical solutions for
fog on Google and just sum over your source distribution. I guess you could
even do some wave optics, etc., as presumably the fog could be handled as a
function of wavelength just as easily. In any case, if you only have two
basic things with simple distributions, this should be reasonably easy to
do in R with your own code.



------------------------------

Message: 50
Date: Mon, 22 Nov 2010 15:28:29 +0100
From: "B.-Markus Schuller" <b.markus.schuller at googlemail.com>
To: r-help at r-project.org
Subject: Re: [R] data acquisition with R?
Message-ID: <4CEA7E0D.2090606 at googlemail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Thanks a lot, Matt!

I will have a look at the options you suggested.

Cheers,
Mango

-- 

-----------------------------------------
Never run for the bus.
Never skip tea.



------------------------------

Message: 51
Date: Mon, 22 Nov 2010 10:23:07 -0800 (PST)
From: bogdanno <bodinsoul at gmail.com>
To: r-help at r-project.org
Subject: [R] Is it possible to make a matrix to start at row 0?
Message-ID:
	<ef7292d4-2217-4646-97ce-0e8d53c732c9 at n30g2000vbb.googlegroups.com>
Content-Type: text/plain; charset=ISO-8859-1

I want to make a matrix that is indexed from row (and column) 0, not 1.
Can I do that? How?
Thanks



------------------------------

Message: 52
Date: Mon, 22 Nov 2010 16:15:17 -0200
From: csrabak <crabak at acm.org>
To: r-help at stat.math.ethz.ch
Subject: Re: [R] Rexcel
Message-ID: <icebvm$qdh$1 at dough.gmane.org>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Em 22/11/2010 10:11, Luis Felipe Parra escreveu:
> Hello, I am new to RExcel and I would like to run a source file from the
> Excel worksheet. I would like to run the following code
>
> source("C:\\Quantil Aplicativos\\Genercauca\\BackwardSelectionNC.r")
>
> from the Excel worksheet. Does anybody know how to do this?
>
> Thank you
>
Felipe,

Look at the section "Startup" in the RExcel help. In a nutshell, if you 
want the code to run immediately when the spreadsheet is loaded, create a 
worksheet called "RCode" and put your source there.

Other options are available. See the docs.

--
Cesar Rabak



------------------------------

Message: 53
Date: Mon, 22 Nov 2010 08:10:07 -0800 (PST)
From: "dhacademic at gmail.com" <dhacademic at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] question about constraint minimization
Message-ID:
	<AANLkTi=21mXKG4dEskMMN66xt0n_GYxT7dzc_ceb1f3m at mail.gmail.com>
Content-Type: text/plain


Hi,

I have struggled with this "bound optimization with equality constraint"
using the optim function for two days, but I still fail to prepare a good
input. Can anyone help prepare the input for my specific case? Many thanks.

Best,
Hao


On Sat, Nov 20, 2010 at 3:17 AM, Hans W Borchers [via R] <
ml-node+3051338-309339578-202837 at n4.nabble.com<ml-node%2B3051338-309339578-2
02837 at n4.nabble.com>
> wrote:

> dhacademic <at> gmail.com <dhacademic <at> gmail.com> writes:
>
> >
> >
> > Hi,
> >
> > I am a beginner of R. There is a question about constraint minimization.
> A
> > function, y=f(x1,x2,x3....x12), needs to be minimized. There are 3
> > requirements for the minimization:
> >
> > (1) x2+x3+...+x12=1.5 (x1 is excluded);
> > (2) x1=x3=x4;
> > (3) x1, x3 and x5 are in the range of -1~0, respectively. The rest
> variables
> > (x2, x4, x6, x7, ...., x12) are in the range of 0~1, respectively.
> >
> > The "optim" function is used. And part of my input is as follow, where
> > "xx1r" represents the x12:
> >
> > xx1r=1.5-x[2]-x[1]-x[1]-x[3]-x[4]-x[5]-x[6]-x[7]-x[8]-x[9]
> > start=rnorm(9)
> > up=1:9/1:9*1
> > lo=1:9/1:9*-1
> > out=optim(start,f,lower=lo,upper=up,method="L-BFGS-B",hessian=TRUE,
> > control=list(trace=6,maxit=1000))
> >
> > There are two problems in this input. the "up" and "lo" only define a
> range
> > of -1~1 for x1 to x11, which can not meet the requirement (3). In
> addition,
> > there is not any constraint imposed on x12. I have no idea how to
specify
> a
> > matrix that can impose different constraints on individual variables in
a
>
> > function. Any suggestion is highly appreciated.
> >
> > Best,
> > Hao
> >
>
> I don't see any direct need for real 'constraint' optimization here,
> it is a 'bounded' optimization where you are allowed to use
>
>     lower <- c(-1,0,-1,0,-1,0,0,0,0,0,0,0)
>     upper <- c( 0,1, 0,0, 0,1,1,1,1,1,1,1)
>
> Otherwise, your description is confusing:
>   (1) Did you change f to a new function with 9 variables, eliminating
>       x3, x4, and x12 ?
>   (2) x4 (being equal to x1) has to be in [-1, 0] but also in [0, 1]?
>   (3) If you need to restrict x12 to [0, 1] also, you cannot eliminate it.
>       Either keep x12 and use an equality constraint, or use inequality
>       constraints on xxlr.
>
> Hans Werner
>
> ______________________________________________
> [hidden email]
<http://user/SendEmail.jtp?type=node&node=3051338&i=0>mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
> ------------------------------
>  View message @
>
http://r.789695.n4.nabble.com/question-about-constraint-minimization-tp30508
80p3051338.html
>
> To unsubscribe from question about constraint minimization, click
here<http://r.789695.n4.nabble.com/template/NamlServlet.jtp?macro=unsubscrib
e_by_code&node=3050880&code=ZGhhY2FkZW1pY0BnbWFpbC5jb218MzA1MDg4MHwtNjM2Nzc0
NA==>.
>
>

-- 
View this message in context:
http://r.789695.n4.nabble.com/question-about-constraint-minimization-tp30508
80p3053912.html
Sent from the R help mailing list archive at Nabble.com.

	[[alternative HTML version deleted]]



------------------------------

Message: 54
Date: Mon, 22 Nov 2010 01:41:42 -0800 (PST)
From: Dimitri Shvorob <dimitri.shvorob at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] Lost in POSIX
Message-ID: <1290418902451-3053329.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


> Nor would I call this much of an improvement in clarity... what about
"min"? You want to know the minimum?

LOL. (And apologies for the insensitivity). Thank you for help, Jeff. This
works, but I am still curious to see a solution based on "trunc", if anyone
can find it. 
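[For the record, a trunc-based sketch does exist for POSIXt objects; whether
it fits the original problem depends on the units needed:]

```r
x <- as.POSIXct("2010-11-22 13:45:37")
trunc(x, units = "mins")   # drops the seconds, keeping the minute
```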
-- 
View this message in context:
http://r.789695.n4.nabble.com/Lost-in-POSIX-tp3052768p3053329.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 55
Date: Mon, 22 Nov 2010 11:19:29 +0000
From: Georg Otto <gwo at well.ox.ac.uk>
To: r-help at stat.math.ethz.ch
Subject: Re: [R] Find in R and R books
Message-ID: <xotr5edbo72.fsf at well.ox.ac.uk>
Content-Type: text/plain; charset=us-ascii




> Also, when I try to search in Google using, for example, the word R
> inside the search term, I get very few results, as the R confuses the
> search engine. When I was looking for something in Matlab it was of
> course easier to get results, as the search engine performs better.
> What are your tricks when you want to find some function that provides
> some functionality?

To search R-specific sites the best place to go is this one:

http://www.rseek.org/

Cheers,

Georg



------------------------------

Message: 56
Date: Mon, 22 Nov 2010 00:20:09 -0800 (PST)
From: meytar <meytar at techunix.technion.ac.il>
To: r-help at r-project.org
Subject: Re: [R] using rpart with a tree misclassification condition
Message-ID: <1290414009182-3053230.post at n4.nabble.com>
Content-Type: text/plain


[[elided Yahoo spam]]
Yes, I meant the apparent error rate.
Following your advice, if I use "rpart" to build a full tree, what pruning
command would be appropriate and would let me supply the total error rate
I'm looking for as an input to the pruning procedure?
Thank you very much
Meytar
-- 
View this message in context:
http://r.789695.n4.nabble.com/using-rpart-with-a-tree-misclassification-cond
ition-tp3053167p3053230.html
Sent from the R help mailing list archive at Nabble.com.

	[[alternative HTML version deleted]]



------------------------------

Message: 57
Date: Mon, 22 Nov 2010 07:23:25 -0800
From: Patrick Leyshock <pleyshock at gmail.com>
To: Uwe Ligges <ligges at statistik.tu-dortmund.de>
Cc: r-help at r-project.org
Subject: Re: [R] memory profiling
Message-ID:
	<AANLkTinWYjpGPjFre3Q43rNi1x3ZgLYG7Y6wAQcqzd31 at mail.gmail.com>
Content-Type: text/plain

Using:

  summaryRprof(memory="both")

did the trick, thank you.  I had not been using that setting when calling
summaryRprof.

Thanks, Patrick
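[For completeness, the working sequence amounts to the following sketch,
with a placeholder expression standing in for the code actually profiled:]

```r
Rprof("output", memory.profiling = TRUE)
x <- replicate(100, rnorm(1e4))   # placeholder code to profile
Rprof(NULL)
# memory = "both" adds the memory column to the timing summary
summaryRprof("output", memory = "both")
```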

2010/11/20 Uwe Ligges <ligges at statistik.tu-dortmund.de>

>
>
> On 19.11.2010 21:48, Patrick Leyshock wrote:
>
>> I'm trying to configure version 2.12.0 of R to do memory profiling.
>>
>> I've reconfigured the code:
>>
>> % ./configure --enable-memory-profiling=YES
>>
>> and verified that it's configured correctly by examining the output.  I
>> then
>> rebuild R:
>>
>> % make
>>
>> Then I fire up R and run a script, using Rprof with the memory-profiling
>> switch set to TRUE:
>>
>> Rprof("output", memory.profiling=TRUE);
>> # a bunch of R code
>> Rprof(NULL);
>>
>
>
> When I do
>
> summaryRprof(memory="both")
>
> I see an additional column ...
>
> but since you have not said what you tried exactly, we cannot help very
> much.
>
> Uwe Ligges
>
>
>
>  When I examine the output, however, using either R CMD Rprof from the
>> shell,
>> or summaryRprof from within R, the output I see is identical to the
output
>> I
>> got when I ran R BEFORE I recompiled with memory profiling enabled.
>>
>> Anyone see something that I'm missing?
>>
>> Thanks, Patrick
>>
>>        [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>

	[[alternative HTML version deleted]]



------------------------------

Message: 58
Date: Mon, 22 Nov 2010 03:14:17 -0800 (PST)
From: romzero <romzero at yahoo.it>
To: r-help at r-project.org
Subject: [R] Some questione about plot
Message-ID: <1290424457967-3053430.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


Q1: How can I draw the LSD (least significant difference) on a plot?
Like this:
http://r.789695.n4.nabble.com/file/n3053430/LSD.jpg

Q2: How can I draw a secondary scale on the axis?

Thanks for the help.


-- 
View this message in context:
http://r.789695.n4.nabble.com/Some-questione-about-plot-tp3053430p3053430.ht
ml
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 59
Date: Mon, 22 Nov 2010 08:13:32 -0800 (PST)
From: shubha <shuba.pandit at gmail.com>
To: r-help at r-project.org
Subject: [R] how do remove those predictor which have p value greater
	than 0.05 in GLM?
Message-ID: <1290442412855-3053921.post at n4.nabble.com>
Content-Type: text/plain; charset=UTF-8


Hi R users,
I am an intermediate user of R. I am using a GLM (libraries MASS, VEGUS)
and a backward stepwise logistic regression, but I have a problem removing
the predictors whose p-values are above 0.05. I don't want the final
backward stepwise logistic regression model to include variables with
p-values above 0.05.

For example, first I run the model:
 name <- glm(dep ~ env1 + env2 + ..., family = binomial, data = new)

After that, I ran a stepwise selection on name:

name.step <- step(name, direction = "backward")

Here I still got variables which were not significant; for example, SECCHI
was not significant (see the example below), but it was still in the model.
How can I remove the variables which are not significant in
forward/backward stepwise selection?

Another question: when I set direction = "backward", I got the same results
as with "forward". That is really strange; why are the results the same for
backward and forward? I checked two other statistical packages (Statistica
and SYSTAT), and they gave correct results, I think. But I need to use R
for further analysis, so I need to fix this problem. I have spent a lot of
time trying to figure it out without success. Could you please give your
suggestions? It would really be a great help. Please see the example below
of retained predictors whose p-values are greater than 0.05 after stepwise
logistic regression.
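[A brief note, not from the thread: step() selects by AIC, not by p-values,
so it can legitimately keep terms with p > 0.05, and that is also why
"forward" and "backward" can converge on the same model. To drop a term by
hand, update() can be used -- a sketch using the SECCHI term from the
output below:]

```r
# refit without a chosen term; '.' stands for the existing formula parts
name2 <- update(name.step, . ~ . - SECCHI)
```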

Thank
Shubha Pandit, PhD
University of Windsor
Windsor, ON, Canada
====


> summary(step.glm.int.ag1)

Call:
glm(formula = ag1less ~ GEARTEMP + DOGEAR + GEARDEPTH + SECCHI +
    GEARTEMP:SECCHI + DOGEAR:SECCHI + GEARTEMP:DOGEAR + GEARTEMP:GEARDEPTH +
    DOGEAR:GEARDEPTH, family = binomial, data = training)

Deviance Residuals:
    Min       1Q   Median       3Q      Max 
-2.1983  -0.8272  -0.4677   0.8014   2.6502 

Coefficients:
                    Estimate Std. Error z value Pr(>|z|)   
(Intercept)         3.231623   1.846593   1.750 0.080110 . 
GEARTEMP           -0.004408   0.085254  -0.052 0.958761   
DOGEAR             -0.732805   0.182285  -4.020 5.82e-05 ***
GEARDEPTH          -0.249237   0.060825  -4.098 4.17e-05 ***
SECCHI              0.311875   0.297594   1.048 0.294645   
GEARTEMP:SECCHI    -0.080664   0.010079  -8.003 1.21e-15 ***
DOGEAR:SECCHI       0.066555   0.022181   3.000 0.002695 **
GEARTEMP:DOGEAR     0.030988   0.008907   3.479 0.000503 ***
GEARTEMP:GEARDEPTH  0.008856   0.002122   4.173 3.01e-05 ***
DOGEAR:GEARDEPTH    0.006680   0.004483   1.490 0.136151   
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 3389.5  on 2751  degrees of freedom
Residual deviance: 2720.4  on 2742  degrees of freedom

AIC: 2740.4

Number of Fisher Scoring iterations: 6

==========================

> glm.int.ag1 <- glm(ag1less ~ GEARTEMP + DOGEAR + GEARDEPTH + SECCHI +
+     SECCHI*GEARTEMP + SECCHI*DOGEAR + SECCHI*GEARDEPTH + GEARTEMP*DOGEAR +
+     GEARTEMP*GEARDEPTH + GEARDEPTH*DOGEAR, data = training,
+     family = binomial)
> summary(glm.int.ag1)

Call:
glm(formula = ag1less ~ GEARTEMP + DOGEAR + GEARDEPTH + SECCHI +
    SECCHI * GEARTEMP + SECCHI * DOGEAR + SECCHI * GEARDEPTH +
    GEARTEMP * DOGEAR + GEARTEMP * GEARDEPTH + GEARDEPTH * DOGEAR,
    family = binomial, data = training)

Deviance Residuals:
    Min       1Q   Median       3Q      Max 
-2.1990  -0.8287  -0.4668   0.8055   2.6673 

Coefficients:
                    Estimate Std. Error z value Pr(>|z|)   
(Intercept)         2.909805   1.928375   1.509 0.131314   
GEARTEMP            0.005315   0.087159   0.061 0.951379   
DOGEAR             -0.721864   0.183708  -3.929 8.52e-05 ***
GEARDEPTH          -0.235961   0.064828  -3.640 0.000273 ***
SECCHI              0.391445   0.326542   1.199 0.230622   
GEARTEMP:SECCHI    -0.082296   0.010437  -7.885 3.14e-15 ***
DOGEAR:SECCHI       0.065572   0.022319   2.938 0.003305 **
GEARDEPTH:SECCHI   -0.003176   0.005295  -0.600 0.548675   
GEARTEMP:DOGEAR     0.030571   0.008961   3.412 0.000646 ***
GEARTEMP:GEARDEPTH  0.008692   0.002159   4.027 5.66e-05 ***
DOGEAR:GEARDEPTH    0.006544   0.004495   1.456 0.145484   
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 3389.5  on 2751  degrees of freedom
Residual deviance: 2720.0  on 2741  degrees of freedom
AIC: 2742

Number of Fisher Scoring iterations: 6



-- 
View this message in context:
http://r.789695.n4.nabble.com/how-do-remove-those-predictor-which-have-p-val
ue-greater-than-0-05-in-GLM-tp3053921p3053921.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 60
Date: Mon, 22 Nov 2010 09:56:37 -0800 (PST)
From: wangwallace <talenttree at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] how to apply sample function to each row of a data
	frame?
Message-ID: <1290448597442-3054117.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


I tried it. It works perfectly. You saved my life.
-- 
View this message in context:
http://r.789695.n4.nabble.com/Re-how-to-apply-sample-function-to-each-row-of
-a-data-frame-tp3050933p3054117.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 61
Date: Mon, 22 Nov 2010 10:43:15 -0800 (PST)
From: tomreilly <tomreilly at autobox.com>
To: r-help at r-project.org
Subject: Re: [R] arima
Message-ID: <1290451395576-3054206.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


Nuncio,

No, there is no requirement to subtract the mean.

It is required that the residuals are n.i.i.d. (i.e., constant mean and
constant variance).  If you have an upward-trending series, for example,
then the series would need to be detrended so that the mean is constant.

There are many many steps to doing this right.  Email me at
sales at autobox.com to hear more.

Tom



-- 
View this message in context:
http://r.789695.n4.nabble.com/arima-tp2993543p3054206.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 62
Date: Mon, 22 Nov 2010 11:13:13 -0800
From: Nathan Miller <natemiller77 at gmail.com>
To: r-help at r-project.org
Subject: [R] Wait for user input with readline()
Message-ID:
	<AANLkTik-U6f+OXv3M=C11+3D7168PwPe1SN6rHcYN5Vv at mail.gmail.com>
Content-Type: text/plain

Hello,

I am trying to write a script that includes a prompt for user input using
readline(). I am running into the problem that when I run readline() as a
single line the prompt works perfectly, but when I try to run a block of
code which includes the readline function, the script doesn't wait for the
user input. I have seen this question posted before when I searched, but I
didn't find a suitable answer. Is there a means of ensuring that the
script does not proceed until a value has been entered at readline()? Can
I put readline in a function that will wait for input?

Are there other options for getting user input that require the script to
wait for the input?

Thanks for your help,

Nate
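[One commonly suggested workaround, sketched here as an assumption about the
use case rather than an answer from the thread: read from stdin, which
blocks both in source()d scripts and under Rscript, whereas readline() only
pauses at the interactive top level.]

```r
# a prompt that waits even when the surrounding code is source()d or
# run via Rscript
ask <- function(prompt = "Enter a value: ") {
  cat(prompt)
  if (interactive()) readline() else readLines(file("stdin"), n = 1)
}
```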

	[[alternative HTML version deleted]]



------------------------------

Message: 63
Date: Mon, 22 Nov 2010 16:09:40 +0100
From: "Viechtbauer Wolfgang (STAT)"
	<wolfgang.viechtbauer at maastrichtuniversity.nl>
To: Jalla <zazzala at googlemail.com>, "r-help at r-project.org"
	<r-help at r-project.org>
Subject: Re: [R] txtProgressBar strange behavior in R 2.12.0
Message-ID:
	<077E31A57DA26E46AB0D493C9966AC730BE0812678 at UM-MAIL4112.unimaas.nl>
Content-Type: text/plain; charset="utf-8"

I believe nobody has responded so far, so maybe this is not a widespread
issue. However, I have also encountered this since upgrading to R 2.12.0
(Windows 7, 64-bit). In my simulations where I use txtProgressBar(), the
problem usually disappears after the bar has progressed a certain amount,
but it's quite strange nonetheless. The characters that appear are
gibberish and include some Asian symbols. Here is a screenshot:

http://www.wvbauer.com/screenshot.jpg

sessionInfo():

R version 2.12.0 (2010-10-15)
Platform: x86_64-pc-mingw32/x64 (64-bit)

locale:
[1] LC_COLLATE=English_United States.1252 
[2] LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

No idea what could be causing this. 

Best,

-- 
Wolfgang Viechtbauer
Department of Psychiatry and Neuropsychology
School for Mental Health and Neuroscience
Maastricht University, P.O. Box 616
6200 MD Maastricht, The Netherlands
Tel: +31 (43) 368-5248
Fax: +31 (43) 368-8689
Web: http://www.wvbauer.com


> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
> On Behalf Of Jalla
> Sent: Saturday, November 20, 2010 23:49
> To: r-help at r-project.org
> Subject: [R] txtProgressBar strange behavior in R 2.12.0
> 
> 
> Hi,
> I am running R 2.12.0 (windows).
> 
> example(txtProgressBar)
> 
> gives me some funny screen output with all kinds of special characters
> appearing and disappearing. It's happening on two different mashines since
> vs. 2.12.0. Is this a known issue?
> 
> Best,
> Jalla
> 
> --
> View this message in context:
> http://r.789695.n4.nabble.com/txtProgressBar-strange-behavior-in-R-2-12-0-
> tp3051976p3051976.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

------------------------------

Message: 64
Date: Mon, 22 Nov 2010 17:51:10 +0800
From: ?? <xiagao1982 at gmail.com>
To: r-help at r-project.org
Subject: [R] How to call web service in R
Message-ID:
	<AANLkTintJO3AVsFhBn0eTrPnOK832E1E7GLa2Wsi9Q+a at mail.gmail.com>
Content-Type: text/plain

Hello everyone,

[[elided Yahoo spam]]

Gao Xia

	[[alternative HTML version deleted]]



------------------------------

Message: 65
Date: Mon, 22 Nov 2010 15:02:20 +0000
From: "Ni, Melody Zhifang" <z.ni at imperial.ac.uk>
To: "'r-help at r-project.org'" <r-help at r-project.org>
Subject: [R] save a regression model that can be used later
Message-ID:
	<BD9258C2D51F8040927E2E5D2DD39C7C2930926AA6 at ICEXM3.ic.ac.uk>
Content-Type: text/plain

Hi everyone

I have a question about how to save a regression model in R and how to
retrieve it for making predictions in a new session.

To be more specific, I fitted a multilevel logistic regression model using
lmer from the "lme4" package. I then successfully made predictions using
fitted(mymodel).

Since the data are complex (three levels, nested, numerous categorical and
continuous variables describing types of laparoscopic surgery), the
computer takes quite a while to fit the model. I wonder whether it is
possible to save the fitted model so that I don't have to fit it again for
making predictions every time I start a new R session.

I searched the mailing-list archive. Suggestions include using save() to
save the model as "mymodel.rda" and then load("mymodel.rda") into the
workspace. I tried this without success (on Windows); it returned the error
message: "Error in object$fitted : $ operator is invalid for atomic
vectors"

Did I do anything wrong?  Any help on this topic is much appreciated

BW, Melody

--
Dr Melody Ni
Imperial College
Department of Surgery and Cancer
10th floor, QEQM Building
St. Mary's Hospital
London W2 1NY
Tel/Fax: +44 (0) 20 331 27657/26309
z.ni at imperial.ac.uk<mailto:z.ni at imperial.ac.uk>

	[[alternative HTML version deleted]]
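The reported "$ operator is invalid for atomic vectors" error is what you get
if the *return value* of load() is treated as the model: load() restores
objects under their original names and returns only a character vector of
those names. A minimal sketch, using lme4's bundled sleepstudy example rather
than the poster's (unshown) data and model:

```r
library(lme4)

# Fit once (illustrative model, not the poster's):
fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
save(fm, file = "mymodel.rda")

# --- later, in a fresh R session ---
load("mymodel.rda")   # restores the object under its original name, 'fm'
head(fitted(fm))

# The likely pitfall: load() returns only the *names* of restored objects,
# so 'mymodel' below is the character string "fm", not the fitted model,
# and fitted(mymodel) then fails with an error like the one reported.
mymodel <- load("mymodel.rda")
```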



------------------------------

Message: 66
Date: Mon, 22 Nov 2010 14:24:47 -0500
From: "Ravi Varadhan" <rvaradhan at jhmi.edu>
To: <dhacademic at gmail.com>, <r-help at r-project.org>
Subject: Re: [R] question about constraint minimization
Message-ID: <004f01cb8a7a$ed137670$c73a6350$@edu>
Content-Type: text/plain;	charset="us-ascii"

I do not understand the constraint x1 = x3 = x4.  If this is correct, you
only have 10 unknown parameters.

If you can correctly formulate your problem, you can have a look at the
packages "alabama" or "BB".  The function `auglag' in "alabama" or the
function `spg' in "BB" may be useful.

Ravi.

-------------------------------------------------------
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology School of Medicine Johns
Hopkins University

Ph. (410) 502-2619
email: rvaradhan at jhmi.edu


-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On
Behalf Of dhacademic at gmail.com
Sent: Monday, November 22, 2010 11:10 AM
To: r-help at r-project.org
Subject: Re: [R] question about constraint minimization


Hi,

I have struggled with this "bound optimization with equality constraint" for
two days using the optim function, but still fail to prepare a good input.
Can anyone help to prepare the input for my specific case? Many thanks.

Best,
Hao


On Sat, Nov 20, 2010 at 3:17 AM, Hans W Borchers [via R] <
ml-node+3051338-309339578-202837 at n4.nabble.com<ml-node%2B3051338-309339578-202837 at n4.nabble.com>
> wrote:

> dhacademic <at> gmail.com <dhacademic <at> gmail.com> writes:
>
> >
> >
> > Hi,
> >
> > I am a beginner of R. There is a question about constraint minimization.
> A
> > function, y=f(x1,x2,x3....x12), needs to be minimized. There are 3
> > requirements for the minimization:
> >
> > (1) x2+x3+...+x12=1.5 (x1 is excluded);
> > (2) x1=x3=x4;
> > (3) x1, x3 and x5 are in the range of -1~0, respectively. The rest of the
> > variables (x2, x4, x6, x7, ...., x12) are in the range of 0~1,
> > respectively.
> >
> > The "optim" function is used. And part of my input is as follow, where
> > "xx1r" represents the x12:
> >
> > xx1r=1.5-x[2]-x[1]-x[1]-x[3]-x[4]-x[5]-x[6]-x[7]-x[8]-x[9]
> > start=rnorm(9)
> > up=1:9/1:9*1
> > lo=1:9/1:9*-1
> > out=optim(start,f,lower=lo,upper=up,method="L-BFGS-B",hessian=TRUE,
> > control=list(trace=6,maxit=1000))
> >
> > There are two problems in this input. The "up" and "lo" only define a
> > range of -1~1 for x1 to x11, which cannot meet requirement (3). In
> > addition, there is not any constraint imposed on x12. I have no idea how
> > to specify a matrix that can impose different constraints on individual
> > variables in a function. Any suggestion is highly appreciated.
> >
> > Best,
> > Hao
> >
>
> I don't see any direct need for real 'constraint' optimization here,
> it is a 'bounded' optimization where you are allowed to use
>
>     lower <- c(-1,0,-1,0,-1,0,0,0,0,0,0,0)
>     upper <- c( 0,1, 0,0, 0,1,1,1,1,1,1,1)
>
> Otherwise, your description is confusing:
>   (1) Did you change f to a new function with 9 variables, eliminating
>       x3, x4, and x12 ?
>   (2) x4 (being equal to x1) has to be in [-1, 0] but also in [0, 1]?
>   (3) If you need to restrict x12 to [0, 1] also, you cannot eliminate it.
>       Either keep x12 and use an equality constraint, or use inequality
>       constraints on xxlr.
>
> Hans Werner
>
> ______________________________________________
> [hidden email]
<http://user/SendEmail.jtp?type=node&node=3051338&i=0>mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
> ------------------------------
>  View message @
>
http://r.789695.n4.nabble.com/question-about-constraint-minimization-tp3050880p3051338.html
>
> To unsubscribe from question about constraint minimization, click
here<http://r.789695.n4.nabble.com/template/NamlServlet.jtp?macro=unsubscribe_by_code&node=3050880&code=ZGhhY2FkZW1pY0BnbWFpbC5jb218MzA1MDg4MHwtNjM2Nzc0NA==>.
>
>

-- 
View this message in context:
http://r.789695.n4.nabble.com/question-about-constraint-minimization-tp3050880p3053912.html
Sent from the R help mailing list archive at Nabble.com.

	[[alternative HTML version deleted]]

______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
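Hans Werner's bound vectors plug straight into optim()'s L-BFGS-B method. A
sketch with a placeholder objective (the poster's f is not shown, so the
function below is only an assumption):

```r
f <- function(x) sum((x - 0.1)^2)   # placeholder 12-variable objective

lower <- c(-1, 0, -1, 0, -1, 0, 0, 0, 0, 0, 0, 0)
upper <- c( 0, 1,  0, 0,  0, 1, 1, 1, 1, 1, 1, 1)
start <- (lower + upper) / 2        # feasible starting point

out <- optim(start, f, method = "L-BFGS-B", lower = lower, upper = upper)
out$par
```

The equality constraints (x1 = x3 = x4 and the sum condition) still have to be
handled separately, either by eliminating variables inside f or via auglag()
in "alabama" / spg() in "BB" as Ravi suggests.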



------------------------------

Message: 67
Date: Mon, 22 Nov 2010 11:27:09 -0800
From: Joshua Wiley <jwiley.psych at gmail.com>
To: bogdanno <bodinsoul at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Is it possible to make a matrix to start at row 0?
Message-ID:
	<AANLkTikfD_yAkHNwdV26dQmqADiMUmHS_AZ4qfrMy70=@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hi,

You can do it, but it would be very difficult (think reworking all
indexing yourself) and you probably should not even try (nothing else
that was expecting indexing to work as the R gods intended it to would
work once you had done your rework).

What has led you to want to index from 0?  If it is some problem you
are having, I can almost certainly promise you it will be easier for
us to show you how to approach it differently and index from 1 than to
change the underlying framework so you can index from 0.

Cheers,

Josh

On Mon, Nov 22, 2010 at 10:23 AM, bogdanno <bodinsoul at gmail.com> wrote:
>
> I want to make the matrix to be indexed from row (column) 0, not 1
> Can I do that? How?
> Thanks
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



--
Joshua Wiley
Ph.D. Student, Health Psychology
University of California, Los Angeles
http://www.joshuawiley.com/



------------------------------

Message: 68
Date: Mon, 22 Nov 2010 11:37:29 -0800
From: Bert Gunter <gunter.berton at gene.com>
To: bogdanno <bodinsoul at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Is it possible to make a matrix to start at row 0?
Message-ID:
	<AANLkTinwygygCqK3LNRKm4JrpVOmMek5ecNwjMceL-tA at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Eh??? Why would you want to do that?? (R isn't C).

So the simple answer is: you can't.

The other answer is, well of course you sort of can via, e.g.

for(i in 0:9)  {
   z <- myMatrix[i+1,]
  ...
}

But as Josh said, I think this falls into the class of "You are just
asking for trouble, so don't do it."

Cheers,
Bert





On Mon, Nov 22, 2010 at 10:23 AM, bogdanno <bodinsoul at gmail.com> wrote:
> I want to make the matrix to be indexed from row (column) 0, not 1
> Can I do that? How?
> Thanks
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Bert Gunter
Genentech Nonclinical Biostatistics



------------------------------

Message: 69
Date: Mon, 22 Nov 2010 14:38:18 -0500
From: Harlan Harris <harlan at harris.name>
To: Tal Galili <tal.galili at gmail.com>, Farrel Buchinsky
	<fjbuch at gmail.com>
Cc: r-help at r-project.org, Duncan Temple Lang <duncan at wald.ucdavis.edu>
Subject: Re: [R] RGoogleDocs stopped working
Message-ID:
	<AANLkTin=c2Ashu8SQtgV=9A6__MCmyFxwQewMpzoUAML at mail.gmail.com>
Content-Type: text/plain

No joy for me. :(

I'd had version 0.4-1 installed previously, and re-pulling that URL and
reinstalling, plus setting RCurlOptions as specified, do not help for me.
Exactly the same behavior. It doesn't matter whether I call getGoogleAuth
directly or let getGoogleDocsConnection do it for me.

 -Harlan

On Fri, Nov 19, 2010 at 10:27 PM, Farrel Buchinsky <fjbuch at gmail.com> wrote:

> Harlan and Tal have had problems. I had lots too. I spent hours getting it
> to work. Terrible process to go through but RGoogleDocs is so useful that
[[elided Yahoo spam]]
>
> My problems were overcome when
>
>    1. I used the latest zip file by Duncan Temple Lang see below
>    2. I inserted an options line that loosened the ssl security - do not
>    know if that was a good thing or not but it got it to work
>
> Duncan said:
> "I have put an updated version of the source of the package with
> these changes. It is available from
> http://www.omegahat.org/RGoogleDocs/RGoogleDocs_0.4-1.tar.gz
> There is a binary for Windows  in
> http://www.omegahat.org/RGoogleDocs/RGoogleDocs_0.4-1.zip
>
> Here is my script that works. Does yours look like this?
>
> library(RGoogleDocs)
> packageDescription("RGoogleDocs")
> ps <-readline(prompt="get the password in ")
> options(RCurlOptions = list(capath = system.file("CurlSSL", "cacert.pem",
> package = "RCurl"), ssl.verifypeer = FALSE))
> sheets.con = getGoogleDocsConnection(getGoogleAuth("fjbuch at gmail.com", ps,
> service ="wise"))
> ts2=getWorksheets("OnCall",sheets.con) #OnCall is just the name of a
> spreadsheet
> names(ts2)
> y2005<-sheetAsMatrix(ts2$y2005,header=TRUE, as.data.frame=TRUE, trim=TRUE)
>
> Finally, I am willing to offer you a TeamViewer session where we can take
> control of one another's computers and see if the problem is code or the
> installation. I warn you that I am neither a programmer nor a developer,
> just a very enthusiastic RGoogleDocs user who probably perseveres more than
> is good for him.
>
> Farrel Buchinsky
>
>


On Fri, Nov 19, 2010 at 1:35 PM, Tal Galili <tal.galili at gmail.com> wrote:

> I second Harlan's call.
>
>
> ----------------Contact
> Details:-------------------------------------------------------
> Contact me: Tal.Galili at gmail.com |  972-52-7275845
> Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
> www.r-statistics.com (English)
>
>
----------------------------------------------------------------------------
------------------
>
>
>
>
> On Fri, Nov 19, 2010 at 5:00 PM, Harlan Harris <harlan at harris.name> wrote:
>
>> Any new thoughts on this? I really want to get this working again! Is
>> there someone else that can help or somewhere else I should be asking?
>>
>> Thanks!
>>
>>  -Harlan
>>
>>
>> On Wed, Nov 17, 2010 at 10:16 AM, Harlan Harris <harlan at harris.name>
>> wrote:
>>
>> > Following up again. I found on the forums for the Google Apps API this
>> > thread that seems to be about a similar issue:
>> >
>>
http://www.google.com/support/forum/p/apps-apis/thread?tid=1c22cb44eb5cbba6&hl=en&search_impression_id=ab161b010ecf8803%3A12c5a65ce83&search_source=related_question
>> >
>> > It's using Java and is rather over my head, but it seems to suggest that
>> > something related to the content type might be wrong? Does this offer
>> any
>> > suggestions on how to fix my use of RGoogleDocs?
>> >
>> > Thanks,
>> >
>> >  -Harlan
>> >
>> >
>> > On Mon, Nov 15, 2010 at 12:16 PM, Harlan Harris <harlan at harris.name
>> >wrote:
>> >
>> >> Thanks, Duncan. Finally getting a chance to follow up on this...
>> >>
>> >> I tried again, changing and resetting my password, and trying to specify
>> >> my login and password manually in the getGoogleDocsConnection argument
>> >> list.
>> >> I also tried removing either or both of the service and error options.
>> >> No luck in any case. I also tried a different Google account, also with
>> >> no luck.
>> >>
>> >> I've also tried tweaking the URL being generated by the code, and in
>> >> all cases, I get a 403: Forbidden error with content
>> >> "Error=BadAuthentication".
>> >>
>> >> I don't really know enough about how authentication is supposed to work
>> >> to get much farther. Can you help? Should I try the Google API forum
>> >> instead?
>> >>
>> >>  -Harlan
>> >>
>> >>
>> >>
>> >>> From: Duncan Temple Lang <duncan at wald.ucdavis.edu>
>> >>> To: r-help at r-project.org
>> >>> Date: Wed, 10 Nov 2010 10:33:47 -0800
>> >>> Subject: Re: [R] RGoogleDocs stopped working
>> >>>
>> >>> Hi Harlan
>> >>>
>> >>>  I just tried to connect to Google Docs and I had ostensibly the same
>> >>> problem.
>> >>> However, the password was actually different from what I had
>> specified.
>> >>> After resetting it with GoogleDocs, the getGoogleDocsConnection()
>> worked
>> >>> fine. So I don't doubt that the login and password are correct, but
>> >>> you might just try it again to ensure there are no typos.
>> >>> The other thing to look at is the values for Email and Passwd
>> >>> sent in the URL, i.e. the string in url in your debugging
>> >>> below. (Thanks for that by the way). If either has special characters,
>> >>> e.g. &, it is imperative that they are escaped correctly, i.e. converted
>> >>> to %26.  This should happen and nothing should have changed, but it is
>> >>> worth verifying.
>> >>>
>> >>>  So things still seem to work for me. It is a data point, but not one
>> >>> that gives you much of a clue as to what is wrong on your machine.
>> >>>
>> >>>  D.
>> >>>
>> >>
>> >> On Wed, Nov 10, 2010 at 10:36 AM, Harlan Harris <harlan at harris.name
>> >wrote:
>> >>
>> >>> Hello,
>> >>>
>> >>> Some code using RGoogleDocs, which had been working smoothly since the
>> >>> summer, just stopped working. I know that it worked on November 3rd, but
>> >>> it doesn't work today. I've confirmed that the login and password still
>> >>> work when I log in manually. I've confirmed that the URL gives the same
>> >>> error when I paste it into Firefox. I don't know enough about this web
>> >>> service to figure out the problem myself, alas...
>> >>>
>> >>> Here's the error and other info (login/password omitted):
>> >>>
>> >>> > ss.con <- getGoogleDocsConnection(login=gd.login, password=gd.password, service='wise', error=FALSE)
>> >>> Error: Forbidden
>> >>>
>> >>> Enter a frame number, or 0 to exit
>> >>>
>> >>> 1: getGoogleDocsConnection(login = gd.login, password = gd.password,
>> >>> service = "wise", error = FALSE)
>> >>> 2: getGoogleAuth(..., error = error)
>> >>> 3: getForm("https://www.google.com/accounts/ClientLogin", accountType
>> =
>> >>> "HOSTED_OR_GOOGLE", Email = login, Passw
>> >>> 4: getURLContent(uri, .opts = .opts, .encoding = .encoding, binary =
>> >>> binary, curl = curl)
>> >>> 5: stop.if.HTTP.error(http.header)
>> >>>
>> >>> Selection: 4
>> >>> Called from: eval(expr, envir, enclos)
>> >>> Browse[1]> http.header
>> >>>     Content-Type            "text/plain"
>> >>>     Cache-control           "no-cache, no-store"
>> >>>     Pragma                  "no-cache"
>> >>>     Expires                 "Mon, 01-Jan-1990 00:00:00 GMT"
>> >>>     Date                    "Wed, 10 Nov 2010 15:24:39 GMT"
>> >>>     X-Content-Type-Options  "nosniff"
>> >>>     X-XSS-Protection        "1; mode=block"
>> >>>     Content-Length          "24"
>> >>>     Server                  "GSE"
>> >>>     status                  "403"
>> >>>     statusMessage           "Forbidden\r\n"
>> >>> Browse[1]> url
>> >>> [1] "https://www.google.com/accounts/ClientLogin?accountType=HOSTED%5FOR%5FGOOGLE&Email=***&Passwd=***&service=wise&source=R%2DGoogleDocs%2D0%2E1"
>> >>> Browse[1]> .opts
>> >>> $ssl.verifypeer
>> >>> [1] FALSE
>> >>>
>> >>>
>> >>> > R.Version()
>> >>> $platform
>> >>> [1] "i386-apple-darwin9.8.0"
>> >>>
>> >>> $arch
>> >>> [1] "i386"
>> >>>
>> >>> $os
>> >>> [1] "darwin9.8.0"
>> >>>
>> >>> $system
>> >>> [1] "i386, darwin9.8.0"
>> >>>
>> >>> $status
>> >>> [1] ""
>> >>>
>> >>> $major
>> >>> [1] "2"
>> >>>
>> >>> $minor
>> >>> [1] "10.1"
>> >>>
>> >>> $year
>> >>> [1] "2009"
>> >>>
>> >>> $month
>> >>> [1] "12"
>> >>>
>> >>> $day
>> >>> [1] "14"
>> >>>
>> >>> $`svn rev`
>> >>> [1] "50720"
>> >>>
>> >>> $language
>> >>> [1] "R"
>> >>>
>> >>> $version.string
>> >>> [1] "R version 2.10.1 (2009-12-14)"
>> >>>
>> >>>
>> >>> > installed.packages()[c('RCurl', 'RGoogleDocs'), ]
>> >>>             Package
>> >>> LibPath                                             Version Priority
>> Bundle
>> >>> Contains
>> >>> RCurl       "RCurl"
>> >>> "/Users/hharris/Library/R/2.10/library"             "1.4-3" NA
>> NA
>> >>> NA
>> >>> RGoogleDocs "RGoogleDocs"
>> >>> "/Library/Frameworks/R.framework/Resources/library" "0.4-1" NA
>> NA
>> >>> NA
>> >>>             Depends                         Imports LinkingTo
>> >>> Suggests       Enhances OS_type License Built
>> >>> RCurl       "R (>= 2.7.0), methods, bitops" NA      NA
>> >>> "Rcompression" NA       NA      "BSD"   "2.10.1"
>> >>> RGoogleDocs "RCurl, XML, methods"           NA      NA
>> >>> NA             NA       NA      "BSD"   "2.10.1"
>> >>>
>> >>>
[[elided Yahoo spam]]
>> >>>
>> >>>  -Harlan
>> >>>
>> >>>
>> >>
>> >
>>
>>        [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>

	[[alternative HTML version deleted]]
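Duncan's escaping remark in the quoted thread can be checked directly: base
R's URLencode() (or curlEscape() from RCurl) percent-encodes reserved
characters, turning '&' into %26 — the string below is only an example:

```r
# '&' inside a password must be percent-encoded before going into a URL:
URLencode("pa&ss", reserved = TRUE)   # "pa%26ss"

# RCurl offers the same via curlEscape():
library(RCurl)
curlEscape("pa&ss")
```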



------------------------------

Message: 70
Date: Mon, 22 Nov 2010 11:49:43 -0800
From: Peter Ehlers <ehlers at ucalgary.ca>
To: Lucia Cañas <lucia.canas at co.ieo.es>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] sm.ancova graphic
Message-ID: <4CEAC957.8040505 at ucalgary.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 2010-11-22 09:47, Lucia Cañas wrote:
>
> Hi R-Users,
>
> I am working with sm.ancova (in the package sm) and I have two problems
with the graph, which is automatically generated when sm.ancova() is run.
>
> 1-Besides the fitted lines, the observed data appear automatically in
> the graph. I would prefer that only the fitted lines appear. I checked
> sm.options, but could not find a way to keep the observed data out of
> the graph.
>
> 2-I would like to change the size of the numbers on the axes. Again, I
> checked sm.options, but could not find the correct way.
>

Your second request is easy: just issue a par(cex.axis = ....) call
before the sm.ancova call.

For your first problem, I can't see a simple way; this seems to be
hard-coded in the function. But the function is easy to modify. Just
look for a couple of short loops containing this:

   text(rawdata$

(It's best to work with the source code in the *.tar.gz file.)
Remove or comment out those loops and save the function as, say,
my.sm.ancova. You'll also have to set the environment of
my.sm.ancova to that of sm.ancova.

Seems to me that it might be worth suggesting this and the ability
to fiddle with graphics parameters to the maintainer of sm.

Peter Ehlers

>
>
> Thank you in advance,
>
> Lucía
>
> 	[[alternative HTML version deleted]]
>
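A sketch of the axis-size fix described above, using made-up data and a
guessed smoothing parameter (the poster's data and sm version are unknown):

```r
library(sm)

# Illustrative data (assumption): two groups with a linear trend
x <- runif(60)
g <- rep(1:2, each = 30)
y <- 2 * x + g + rnorm(60, sd = 0.3)

par(cex.axis = 0.7)   # smaller axis numbers; set before calling sm.ancova
sm.ancova(x, y, g, h = 0.1, model = "equal")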



------------------------------

Message: 71
Date: Mon, 22 Nov 2010 11:55:10 -0800
From: Joshua Wiley <jwiley.psych at gmail.com>
To: Nathan Miller <natemiller77 at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Wait for user input with readline()
Message-ID:
	<AANLkTi=w+oQ19cinTY8fJ6NtGn=zwi8pNzxT11tOjO=- at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hi Nate,

There may be better ways, but on the couple instances I've wanted to
wait for keyboard input I used this type of paradigm:

foo <- function() {
  x <- 1:10
  y <- rnorm(10)
  input <- NA
  while(!isTRUE(input == "1") && !isTRUE(input == "2")) {
    cat("Please type '1' if you want the first variable on the x
           axis and '2' if you want the second.", fill = TRUE)
    input <- scan("", what = "character", n = 1, quiet = TRUE)
    if (isTRUE(input == "1")) {      # isTRUE() guards against empty input,
      plot(x, y)                     # which scan() returns as character(0)
    } else if (isTRUE(input == "2")) {
      plot(y, x)
    } else {cat("Sorry, I didn't catch that", fill = TRUE)}
  }
}

Perhaps it will be of some use to you.

Best regards,

Josh


On Mon, Nov 22, 2010 at 11:13 AM, Nathan Miller <natemiller77 at gmail.com>
wrote:
> Hello,
>
> I am trying to write a script that includes a prompt for user input using
> readline(). I am running into the problem that when I run readline() as a
> single line the prompt works perfectly, but when I try to run a block of
> code which includes the readline function, the script doesn't wait for the
> user input. I have seen this question posted before when I did a search,
> but I didn't find a suitable answer. Is there a means of ensuring that the
> script does not proceed until a value has been entered to readline()? Can
> I put readline in a function that will wait for input?
>
> Are there other options for getting user input that require that the
> script wait for user input?
>
> Thanks for your help,
>
> Nate
>
>        [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Joshua Wiley
Ph.D. Student, Health Psychology
University of California, Los Angeles
http://www.joshuawiley.com/
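A common workaround when readline() does not pause inside a source()d script
is to read a line from stdin() instead, which blocks in both cases. A sketch
(assumes a terminal R session; the function name and prompt are illustrative):

```r
ask <- function(prompt = "Enter 1 or 2: ") {
  repeat {
    cat(prompt)
    ans <- readLines(con = stdin(), n = 1)   # waits for one typed line
    if (length(ans) == 1 && ans %in% c("1", "2")) return(ans)
    cat("Sorry, I didn't catch that\n")
  }
}

# choice <- ask()   # does not return until "1" or "2" has been typed
```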



------------------------------

Message: 72
Date: Mon, 22 Nov 2010 14:56:39 -0500
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: Manta <mantino84 at libero.it>
Cc: r-help at r-project.org
Subject: Re: [R] Ordeing Zoo object
Message-ID:
	<AANLkTikKFhcP7r+EWekDXqx=C06xr+TDp9e+7zbpSQ=S at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Nov 22, 2010 at 1:35 PM, Manta <mantino84 at libero.it> wrote:
>
> And how about if I want to order the series from the smallest to the largest
> value, keeping the date index in order to see when the values were
> predominantly negative etc...
>

If you just want to look at it in this order then:

as.data.frame(dat)[length(dat):1,,drop = FALSE]

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com
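For ordering by value while keeping each observation's date, the same
data-frame view can be sorted on the coredata; a toy series for illustration
(the poster's actual data are not shown):

```r
library(zoo)

z <- zoo(c(0.3, -1.2, 0.8, -0.5), as.Date("2010-11-01") + 0:3)

# smallest to largest, with the dates preserved as row names:
as.data.frame(z)[order(coredata(z)), , drop = FALSE]
```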



------------------------------

Message: 73
Date: Mon, 22 Nov 2010 19:59:23 +0000 (UTC)
From: Ben Bolker <bbolker at gmail.com>
To: r-help at stat.math.ethz.ch
Subject: Re: [R] Is it possible to make a matrix to start at row 0?
Message-ID: <loom.20101122T204655-678 at post.gmane.org>
Content-Type: text/plain; charset=us-ascii

Bert Gunter <gunter.berton <at> gene.com> writes:

> 
> Eh??? Why would you want to do that?? (R isn't C).
> 
> So the simple answer is: you can't.
> 
> The other answer is, well of course you sort of can via, e.g.
> 
> for(i in 0:9)  {
>    z <- myMatrix[i+1,]
>   ...
> }
> 
> But as Josh said, I think this falls into the class of "You are just
> asking for trouble, so don't do it."
> 
> Cheers,
> Bert

  But if you still want to after all those warnings, you can ...
see the "Oarray" package, where the first letter of the package
name is a capital letter "oh" (O), not a zero (0).

 library("fortunes"); fortune("Yoda")

 There ought also to be a clever fortune() expressing the sentiment
that you may eventually find (weeks, months, or years later) that
changing the way you solve your problem to go with R's flow would
have been easier than implementing a solution that works around
the flow (examples abound: <<-, zero-based arrays, eval(parse()), 
storing names  of variables as character vectors and using get()
[FAQ 7.21], etc, etc, etc ...)
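A sketch of the Oarray route (assumes the package is installed from CRAN; the
constructor's offset argument sets where indexing starts):

```r
library(Oarray)

m <- Oarray(1:6, dim = c(2, 3), offset = c(0, 0))  # rows and cols start at 0
m[0, 0]   # first element
m[1, 2]   # last element
```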



------------------------------

Message: 74
Date: Mon, 22 Nov 2010 15:04:01 -0500
From: David Winsemius <dwinsemius at comcast.net>
To: "Ni, Melody Zhifang" <z.ni at imperial.ac.uk>
Cc: "'r-help at r-project.org'" <r-help at r-project.org>
Subject: Re: [R] save a regression model that can be used later
Message-ID: <EA74D6E5-DAF0-407E-8451-7F23BD7F84BC at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes


On Nov 22, 2010, at 10:02 AM, Ni, Melody Zhifang wrote:

> Hi everyone
>
> I have a question about how to save a regression model in R and how  
> to retrieve it for making predictions in a new session.
>
> To be more specific, I fitted a multilevel logistic regression model  
> using the lmer  from the "lme4" package. I then successfully make  
> predictions using  fitted(mymodel).
>
> Since data are complex (three levels, nested, numerous categorical  
> and continuous data describing types of laparoscopic surgery), the  
> computer takes quite a while to fit the MLM model.  I wonder whether  
> it's possible to save the fitted model so that I don't have to fit  
> it again for making predictions every time I start a new R session.
>
> I searched the mailing-list archive. Suggestions include using save  
> () to save the model as "mymodel.rda" and then use load(mymodel.rda)  
> into the workspace. I tried without success (in Windows), returning  
> the error message: "Error in object$fitted : $ operator is invalid  
> for atomic vectors"

How? ... did you "try" that is. Need code, not vague reports of failure.

>
> Did I do anything wrong?  Any help on this topic is much appreciated
>
> BW, Melody
>
> --
> Dr Melody Ni
> Imperial College
> Department of Surgery and Cancer
> 10th floor, QEQM Building
-- 
David Winsemius, MD
West Hartford, CT



------------------------------

Message: 75
Date: Mon, 22 Nov 2010 12:10:17 -0800 (PST)
From: madr <madrazel at interia.pl>
To: r-help at r-project.org
Subject: [R] how to round only one column of a matrix ?
Message-ID: <1290456617008-3054363.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


The round() function affects all values of a matrix; I want to round only the
column that is called 'y'.
-- 
View this message in context:
http://r.789695.n4.nabble.com/how-to-round-only-one-column-of-a-matrix-tp3054363p3054363.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 76
Date: Mon, 22 Nov 2010 21:10:20 +0100
From: baptiste auguie <baptiste.auguie at googlemail.com>
To: r-help at stat.math.ethz.ch
Subject: Re: [R] Is it possible to make a matrix to start at row 0?
Message-ID:
	<AANLkTi=UTX7mWNAA_wy5Yb=v6oorXTBG21KFwa+-3OnM at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Apparently He who starts from 0 needn't be called unfortunate,

fortune('indexed')

baptiste

On 22 November 2010 20:59, Ben Bolker <bbolker at gmail.com> wrote:
> Bert Gunter <gunter.berton <at> gene.com> writes:
>
>>
>> Eh??? Why would you want to do that?? (R isn't C).
>>
>> So the simple answer is: you can't.
>>
>> The other answer is, well of course you sort of can via, e.g.
>>
>> for(i in 0:9)  {
>>    z <- myMatrix[i+1,]
>>   ...
>> }
>>
>> But as Josh said, I think this falls into the class of "You are just
>> asking for trouble, so don't do it."
>>
>> Cheers,
>> Bert
>
> But if you still want to after all those warnings, you can ...
> see the "Oarray" package, where the first letter of the package
> name is a capital letter "oh" (O), not a zero (0).
>
>  library("fortunes"); fortune("Yoda")
>
>  There ought also to be a clever fortune() expressing the sentiment
> that you may eventually find (weeks, months, or years later) that
> changing the way you solve your problem to go with R's flow would
> have been easier than implementing a solution that works around
> the flow (examples abound: <<-, zero-based arrays, eval(parse()),
> storing names of variables as character vectors and using get()
> [FAQ 7.21], etc, etc, etc ...)
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



------------------------------

Message: 77
Date: Mon, 22 Nov 2010 12:17:15 -0800 (PST)
From: Phil Spector <spector at stat.berkeley.edu>
To: madr <madrazel at interia.pl>
Cc: r-help at r-project.org
Subject: Re: [R] how to round only one column of a matrix ?
Message-ID:
	<alpine.DEB.2.00.1011221216480.14043 at springer.Berkeley.EDU>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

Is this what you're looking for?

> mymatrix = matrix(rnorm(15),5,3,dimnames=list(NULL,c('x','y','z')))
> mymatrix
               x          y          z
[1,] -0.4459162 -2.3936837 -0.7401963
[2,]  0.9886466 -1.3955161 -1.3390314
[3,] -0.2086743  1.7984620 -0.8532579
[4,]  1.0985411  0.9315553 -1.3981632
[5,]  0.5787438  0.1719177  0.2246174
> mymatrix[,'y'] = round(mymatrix[,'y'])
> mymatrix
               x  y          z
[1,] -0.4459162 -2 -0.7401963
[2,]  0.9886466 -1 -1.3390314
[3,] -0.2086743  2 -0.8532579
[4,]  1.0985411  1 -1.3981632
[5,]  0.5787438  0  0.2246174


 					- Phil Spector
 					 Statistical Computing Facility
 					 Department of Statistics
 					 UC Berkeley
 					 spector at stat.berkeley.edu

On Mon, 22 Nov 2010, madr wrote:

>
> round() function affects all values of a matrix, I want only to round the
> column that is called 'y'.
> -- 
> View this message in context:
http://r.789695.n4.nabble.com/how-to-round-only-one-column-of-a-matrix-tp3054363p3054363.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



------------------------------

Message: 78
Date: Mon, 22 Nov 2010 15:19:27 -0500
From: Steve Lianoglou <mailinglist.honeypot at gmail.com>
To: Gao Xia <xiagao1982 at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] How to call web service in R
Message-ID:
	<AANLkTimoV-AyGAzu6NVhJX2N6mwRjxFsbmkKXep4pnx4 at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hi,

On Mon, Nov 22, 2010 at 4:51 AM, Gao Xia <xiagao1982 at gmail.com> wrote:
> Hello everyone,
>
[[elided Yahoo spam]]

Is RCurl what you're looking for?
http://cran.r-project.org/web/packages/RCurl/index.html

-steve
-- 
Steve Lianoglou
Graduate Student: Computational Systems Biology
 | Memorial Sloan-Kettering Cancer Center
 | Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact
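A minimal RCurl fetch, for orientation (the URL is only an example; for
SOAP-style services the omegahat SSOAP package is another option):

```r
library(RCurl)

page <- getURL("http://www.omegahat.org/RCurl/")   # plain HTTP GET
nchar(page)   # the response body arrives as one character string
```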



------------------------------

Message: 79
Date: Mon, 22 Nov 2010 15:39:58 -0500
From: "Tan, Richard" <RTan at panagora.com>
To: <r-help at r-project.org>
Subject: [R] aggregate a Date column does not work?
Message-ID:
	<3303FA84CE4F7244B27BE264EC4AE2A71510A385 at panemail.panagora.com>
Content-Type: text/plain

Hi, I am trying to aggregate (taking the max) over a Date column but get a
weird result. How do I fix this?



> a <- rbind(
+ data.frame(name='Tom', payday=as.Date('1999-01-01')),
+ data.frame(name='Tom', payday=as.Date('2000-01-01')),
+ data.frame(name='Pete', payday=as.Date('1998-01-01')),
+ data.frame(name='Pete', payday=as.Date('1999-01-01'))
+ )
> a
  name     payday
1  Tom 1999-01-01
2  Tom 2000-01-01
3 Pete 1998-01-01
4 Pete 1999-01-01
> aggregate(a$payday, list(a$name), max)
  Group.1     x
1     Tom 10957
2    Pete 10592



Thanks,

Richard




	[[alternative HTML version deleted]]



------------------------------

Message: 80
Date: Mon, 22 Nov 2010 12:41:19 -0800 (PST)
From: bmiddle <bmiddle at sandia.gov>
To: r-help at r-project.org
Subject: [R] R2WinBUGS help
Message-ID: <1290458479607-3054411.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


When I use the 'bugs' function from R, WinBUGS runs correctly, but R
freezes.
The only way to use R after this is to stop calculations (without my file
that documents the calculation). However, I want to save the output in R so
I can use it in further models. Does anyone know how to fix this problem?
-- 
View this message in context:
http://r.789695.n4.nabble.com/R2WinBUGS-help-tp3054411p3054411.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 81
Date: Mon, 22 Nov 2010 12:44:36 -0800 (PST)
From: tomreilly <tomreilly at autobox.com>
To: r-help at r-project.org
Subject: Re: [R] FW:  help with time Series regression please
Message-ID: <1290458676157-3054417.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


Cathy,

How does this model look?



[(1-B**4)]Y(T) =  20.767
                 +[X1(T)][(1-B**4)][(+ 56.1962)]      :PULSE  7/4   I~P00028test
                 +[X2(T)][(1-B**4)][(+ 74.4301)]      :PULSE  9/4   I~P00036test
                 +[X3(T)][(1-B**4)][(- 59.9872)]      :PULSE  6/3   I~P00023test
                 +[X4(T)][(1-B**4)][(- 27.2187)]      :PULSE  7/1   I~P00025test
                 +     [(1-  .435B** 1)]**-1  [A(T)]

-- 
View this message in context:
http://r.789695.n4.nabble.com/errors-appears-in-my-time-Series-regression-formula-tp1016593p3054417.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 82
Date: Mon, 22 Nov 2010 15:48:15 -0500
From: Mike Marchywka <marchywka at hotmail.com>
To: <bbolker at gmail.com>, <r-help at stat.math.ethz.ch>
Subject: Re: [R] Is it possible to make a matrix to start at row 0?
Message-ID: <BLU113-W5447FA07F7418936CF2F5BE3D0 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"









----------------------------------------
> To: r-help at stat.math.ethz.ch
> From: bbolker at gmail.com
> Date: Mon, 22 Nov 2010 19:59:23 +0000
> Subject: Re: [R] Is it possible to make a matrix to start at row 0?
>
> Bert Gunter gene.com> writes:
>
> >
> > Eh??? Why would you want to do that?? (R isn't C).
> >
> > So the simple answer is: you can't.


Eh, it is open source for a reason. I would recommend, however, that
if you let your zero-based version out into the wild you rename
it to R0 or something. Seriously, 1-based arrays annoy me too,
but I'm not sure what happens if you modify R. LOL.



> >
> > The other answer is, well of course you sort of can via, e.g.
> >
> > for(i in 0:9) {
> > z <- myMatrix[i+1,]
> > ...
> > }
> >
> > But as Josh said, I think this falls into the class of "You are just
> > asking for trouble, so don't do it."
> >
> > Cheers,
> > Bert
>
> But if you still want to after all those warnings, you can ...
> see the "Oarray" package, where the first letter of the package
> name is a capital letter "oh" (O), not a zero (0).
>
> library("fortunes"); fortune("Yoda")
>
> There ought also to be a clever fortune() expressing the sentiment
> that you may eventually find (weeks, months, or years later) that
> changing the way you solve your problem to go with R's flow would
> have been easier than implementing a solution that works around
> the flow (examples abound: <<-, zero-based arrays, eval(parse()),
> storing names of variables as character vectors and using get()
> [FAQ 7.21], etc, etc, etc ...)
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
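
For completeness, here is a minimal sketch of zero-based indexing with the Oarray package mentioned above (this assumes Oarray is installed from CRAN and uses its `offset` argument; treat it as an illustration, not a recommendation):

```r
# a minimal sketch of zero-based indexing via the Oarray package
library(Oarray)

# offset = 0 makes the first index 0 instead of R's default 1
x <- Oarray(1:10, dim = 10, offset = 0)
x[0]   # first element
x[9]   # last element
```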



------------------------------

Message: 83
Date: Mon, 22 Nov 2010 15:50:33 -0500
From: David Winsemius <dwinsemius at comcast.net>
To: "Tan, Richard" <RTan at panagora.com>
Cc: r-help at r-project.org
Subject: Re: [R] aggregate a Date column does not work?
Message-ID: <41B45939-081A-4A75-A55C-F5348722918C at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes


On Nov 22, 2010, at 3:39 PM, Tan, Richard wrote:

> Hi, I am trying to aggregate max a Date type column but have weird
> result, how do I fix this?

In the process of getting max() you coerced the Dates to numeric and  
now you need to re-coerce them back to Dates

?as.Date
as.Date(<your result>)  (possibly with an origin, if the default  
"1970-01-01" doesn't get used).

-- 
David.
>
>
>
>> a <- rbind(
>
> + data.frame(name='Tom', payday=as.Date('1999-01-01')),
>
> + data.frame(name='Tom', payday=as.Date('2000-01-01')),
>
> + data.frame(name='Pete', payday=as.Date('1998-01-01')),
>
> + data.frame(name='Pete', payday=as.Date('1999-01-01'))
>
> + )
>
>> a
>
>  name     payday
>
> 1  Tom 1999-01-01
>
> 2  Tom 2000-01-01
>
> 3 Pete 1998-01-01
>
> 4 Pete 1999-01-01
>
>> aggregate(a$payday, list(a$name), max)
>
>  Group.1     x
>
> 1     Tom 10957
>
> 2    Pete 10592
>
>
>
> Thanks,
>
> Richard
>
>
>
>
> 	[[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

David Winsemius, MD
West Hartford, CT



------------------------------

Message: 84
Date: Mon, 22 Nov 2010 12:53:41 -0800 (PST)
From: madr <madrazel at interia.pl>
To: r-help at r-project.org
Subject: [R] I need a very specific unique like function and I don't
	know even how to properly call this
Message-ID: <1290459221115-3054427.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


consider this matrix:

      [,1] [,2]
 [1,]    3   7
 [2,]    6   5
 [3,]    7   5
 [4,]    3   5
 [5,]    7   5
 [6,]    5   5
 [7,]    8   4
 [8,]    2   4
 [9,]    7   4
[10,]    0   6

I need to delete all rows where column 2 above and below has the same value,
so the effect would be:

      [,1] [,2]
 [1,]    3   7
 [2,]    6   5
 [6,]    5   5
 [7,]    8   4
 [9,]    7   4
[10,]    0   6

is there a built in function for that kind of operation or I must write one
from scratch ?
Is there a name for that kind of operation ?
-- 
View this message in context:
http://r.789695.n4.nabble.com/I-need-a-very-specific-unique-like-function-and-I-don-t-know-even-how-to-properly-call-this-tp3054427p3054427.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 85
Date: Mon, 22 Nov 2010 15:54:37 -0500
From: "Tan, Richard" <RTan at panagora.com>
To: "David Winsemius" <dwinsemius at comcast.net>
Cc: r-help at r-project.org
Subject: Re: [R] aggregate a Date column does not work?
Message-ID:
	<3303FA84CE4F7244B27BE264EC4AE2A71510A392 at panemail.panagora.com>
Content-Type: text/plain;	charset="us-ascii"

Thanks, adding as.Date('1970-01-01') to the result column works.

Richard

-----Original Message-----
From: David Winsemius [mailto:dwinsemius at comcast.net] 
Sent: Monday, November 22, 2010 3:51 PM
To: Tan, Richard
Cc: r-help at r-project.org
Subject: Re: [R] aggregate a Date column does not work?


On Nov 22, 2010, at 3:39 PM, Tan, Richard wrote:

> Hi, I am trying to aggregate max a Date type column but have weird
> result, how do I fix this?

In the process of getting max() you coerced the Dates to numeric and  
now you need to re-coerce them back to Dates

?as.Date
as.Date(<your result>)  (possibly with an origin, if the default  
"1970-01-01" doesn't get used).

-- 
David.
>
>
>
>> a <- rbind(
>
> + data.frame(name='Tom', payday=as.Date('1999-01-01')),
>
> + data.frame(name='Tom', payday=as.Date('2000-01-01')),
>
> + data.frame(name='Pete', payday=as.Date('1998-01-01')),
>
> + data.frame(name='Pete', payday=as.Date('1999-01-01'))
>
> + )
>
>> a
>
>  name     payday
>
> 1  Tom 1999-01-01
>
> 2  Tom 2000-01-01
>
> 3 Pete 1998-01-01
>
> 4 Pete 1999-01-01
>
>> aggregate(a$payday, list(a$name), max)
>
>  Group.1     x
>
> 1     Tom 10957
>
> 2    Pete 10592
>
>
>
> Thanks,
>
> Richard
>
>
>
>
> 	[[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

David Winsemius, MD
West Hartford, CT



------------------------------

Message: 86
Date: Mon, 22 Nov 2010 15:55:50 -0500 (EST)
From: Xiaoqi Cui <xcui at mtu.edu>
To: r-help <r-help at r-project.org>
Subject: [R] R package "kernlab" can not be properly loaded
Message-ID:
	<1590317927.1718871290459350354.JavaMail.root at marlbed.merit.edu>
Content-Type: text/plain; charset=utf-8

Hi,

I tried to load the package "kernlab" under R-v11 and R-v10; however, it gave
this error message:

Error in library.dynam(lib, package, package.lib) : 
  shared library 'kernlab' not found
In addition: Warning message:
package 'kernlab' was built under R version 2.12.0 
Error: package/namespace load failed for 'kernlab'

Has anybody loaded this successfully before? Thanks,

Xiaoqi



------------------------------

Message: 87
Date: Mon, 22 Nov 2010 21:56:24 +0100
From: Erich Neuwirth <erich.neuwirth at univie.ac.at>
To: csrabak <crabak at acm.org>, r-help <r-help at r-project.org>
Subject: Re: [R] Rexcel
Message-ID: <4CEAD8F8.4030305 at univie.ac.at>
Content-Type: text/plain; charset=ISO-8859-1

a) RExcel has its own mailing list (as the documentation tell you).
Please post RExcel related questions to the mailing list
accessible at rcom.univie.ac.at
b) For running code at startup, you have to create a worksheet (not a
workbook) named RCode in your workbook.


On 11/22/2010 7:15 PM, csrabak wrote:
> Em 22/11/2010 10:11, Luis Felipe Parra escreveu:
>> Hello I am new to RExcel and I would like to run a source code form the
>> excel worksheet. I would like to run the following code
>>
>> source("C:\\Quantil Aplicativos\\Genercauca\\BackwardSelectionNC.r")
>>
>> from the excel wroksheet. Does anybody know how to do this?
>>
>> Thank you
>>
> Felipe,
> 
> Look at the section "Startup" in the RExcel help. In a nutshell, if you
> want the code to run immediately at the loading of the spreadsheet,
> create a workbook called "RCode" and put your source there.
> 
> Other options are available. See the docs.
> 
> -- 
> Cesar Rabak
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



------------------------------

Message: 88
Date: Mon, 22 Nov 2010 15:57:45 -0500
From: David Winsemius <dwinsemius at comcast.net>
To: "Tan, Richard" <RTan at panagora.com>
Cc: r-help at r-project.org
Subject: Re: [R] aggregate a Date column does not work?
Message-ID: <60654551-AE43-4C06-9AE7-16B9F968324E at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes


On Nov 22, 2010, at 3:54 PM, Tan, Richard wrote:

> Thanks, add as.Date('1970-01-01') to the result column works.

But that should make them all the same date in 1970. Since aggregate  
renames the date column to "x", this should work:

as.Date( aggregate(a$payday, list(a$name), max)$x )
[1] "2000-01-01" "1999-01-01"

>
> Richard
>
> -----Original Message-----
> From: David Winsemius [mailto:dwinsemius at comcast.net]
> Sent: Monday, November 22, 2010 3:51 PM
> To: Tan, Richard
> Cc: r-help at r-project.org
> Subject: Re: [R] aggregate a Date column does not work?
>
>
> On Nov 22, 2010, at 3:39 PM, Tan, Richard wrote:
>
>> Hi, I am trying to aggregate max a Date type column but have weird
>> result, how do I fix this?
>
> In the process of getting max() you coerced the Dates to numeric and
> now you need to re-coerce them back to Dates
>
> ?as.Date
> as.Date(<your result>)  (possibly with an origin it the default
> "1970-01-01" doesn't get used.
>
> -- 
> David.
>>
>>
>>
>>> a <- rbind(
>>
>> + data.frame(name='Tom', payday=as.Date('1999-01-01')),
>>
>> + data.frame(name='Tom', payday=as.Date('2000-01-01')),
>>
>> + data.frame(name='Pete', payday=as.Date('1998-01-01')),
>>
>> + data.frame(name='Pete', payday=as.Date('1999-01-01'))
>>
>> + )
>>
>>> a
>>
>> name     payday
>>
>> 1  Tom 1999-01-01
>>
>> 2  Tom 2000-01-01
>>
>> 3 Pete 1998-01-01
>>
>> 4 Pete 1999-01-01
>>
>>> aggregate(a$payday, list(a$name), max)
>>
>> Group.1     x
>>
>> 1     Tom 10957
>>
>> 2    Pete 10592
>>
>>
>>
>> Thanks,
>>
>> Richard
>>
>>
>>
>>
>> 	[[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> David Winsemius, MD
> West Hartford, CT
>
>

David Winsemius, MD
West Hartford, CT



------------------------------

Message: 89
Date: Mon, 22 Nov 2010 16:08:35 -0500
From: "Tan, Richard" <RTan at panagora.com>
To: "David Winsemius" <dwinsemius at comcast.net>
Cc: r-help at r-project.org
Subject: Re: [R] aggregate a Date column does not work?
Message-ID:
	<3303FA84CE4F7244B27BE264EC4AE2A71510A39F at panemail.panagora.com>
Content-Type: text/plain;	charset="us-ascii"

Yes, I meant something like one of these:

b <- aggregate(a$payday, list(a$name), max)
b$x <- as.Date('1970-01-01') + b$x 
or
b$x <- as.Date(b$x, origin='1970-01-01')

Thanks.


-----Original Message-----
From: David Winsemius [mailto:dwinsemius at comcast.net] 
Sent: Monday, November 22, 2010 3:58 PM
To: Tan, Richard
Cc: r-help at r-project.org
Subject: Re: [R] aggregate a Date column does not work?


On Nov 22, 2010, at 3:54 PM, Tan, Richard wrote:

> Thanks, add as.Date('1970-01-01') to the result column works.

But that should make them all the same date in 1970. Since aggregate  
renames the date column to "x", this should work:

as.Date( aggregate(a$payday, list(a$name), max)$x )
[1] "2000-01-01" "1999-01-01"

>
> Richard
>
> -----Original Message-----
> From: David Winsemius [mailto:dwinsemius at comcast.net]
> Sent: Monday, November 22, 2010 3:51 PM
> To: Tan, Richard
> Cc: r-help at r-project.org
> Subject: Re: [R] aggregate a Date column does not work?
>
>
> On Nov 22, 2010, at 3:39 PM, Tan, Richard wrote:
>
>> Hi, I am trying to aggregate max a Date type column but have weird
>> result, how do I fix this?
>
> In the process of getting max() you coerced the Dates to numeric and
> now you need to re-coerce them back to Dates
>
> ?as.Date
> as.Date(<your result>)  (possibly with an origin it the default
> "1970-01-01" doesn't get used.
>
> -- 
> David.
>>
>>
>>
>>> a <- rbind(
>>
>> + data.frame(name='Tom', payday=as.Date('1999-01-01')),
>>
>> + data.frame(name='Tom', payday=as.Date('2000-01-01')),
>>
>> + data.frame(name='Pete', payday=as.Date('1998-01-01')),
>>
>> + data.frame(name='Pete', payday=as.Date('1999-01-01'))
>>
>> + )
>>
>>> a
>>
>> name     payday
>>
>> 1  Tom 1999-01-01
>>
>> 2  Tom 2000-01-01
>>
>> 3 Pete 1998-01-01
>>
>> 4 Pete 1999-01-01
>>
>>> aggregate(a$payday, list(a$name), max)
>>
>> Group.1     x
>>
>> 1     Tom 10957
>>
>> 2    Pete 10592
>>
>>
>>
>> Thanks,
>>
>> Richard
>>
>>
>>
>>
>> 	[[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> David Winsemius, MD
> West Hartford, CT
>
>

David Winsemius, MD
West Hartford, CT



------------------------------

Message: 90
Date: Mon, 22 Nov 2010 16:13:11 -0500
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: "Tan, Richard" <RTan at panagora.com>
Cc: r-help at r-project.org
Subject: Re: [R] aggregate a Date column does not work?
Message-ID:
	<AANLkTik0iSDu3id5WH3BDzEYPv928D7-YWYCW2yzf=rH at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Nov 22, 2010 at 3:39 PM, Tan, Richard <RTan at panagora.com> wrote:
> Hi, I am trying to aggregate max a Date type column but have weird
> result, how do I fix this?
>
>> a <- rbind(
>
> + data.frame(name='Tom', payday=as.Date('1999-01-01')),
> + data.frame(name='Tom', payday=as.Date('2000-01-01')),
> + data.frame(name='Pete', payday=as.Date('1998-01-01')),
> + data.frame(name='Pete', payday=as.Date('1999-01-01'))
> + )
>


Since it's already sorted, try this:

a[!duplicated(a$name, fromLast = TRUE), ]

Using sqldf also works:

library(sqldf)
sqldf("select name, max(payday) payday from a group by name order by name")


-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com



------------------------------

Message: 91
Date: Mon, 22 Nov 2010 13:13:41 -0800 (PST)
From: madr <madrazel at interia.pl>
To: r-help at r-project.org
Subject: Re: [R] "negative alpha" or custom gradient colors of data
	dots	in...
Message-ID: <1290460421895-3054465.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


Well, I "hacked" it, in a maybe not so elegant way:


x <- rnorm(100000)
y <- rnorm(100000)
png('out.png', width=600, height=500, units="px")
op <- par(bg='black', fg='gray', col='gray', col.axis='gray', col.lab='gray',
          col.main='gray', col.sub='gray', mai=c(0,0,0,0),
          tck=0.01, mgp=c(0, -1.4, 0), xaxs="i", yaxs="i")
plot(x, y, ylim=c(-20,20), xlim=c(min(x), max(x)), pch='.',
     col=rgb(1,1,1,1), yaxt="n", ann=FALSE)
# here is the solution - plot the same data with low alpha and a different
# color; blue is best because it is perceived as darkest
points(x, y, col=rgb(0,0,1,0.1), pch='.')
abline(h=0, lty=3, col="green")
abline(v=0, lty=3, col="green")
par(op)      # restore the old graphics settings (not par <- op)
dev.off()    # close the png device



http://i51.tinypic.com/maxmcl.png


[[repost, because subject was deleted and this message won't get into the
list]]
-- 
View this message in context:
http://r.789695.n4.nabble.com/negative-alpha-or-custom-gradient-colors-of-data-dots-in-scatterplot-tp3052394p3054465.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 92
Date: Mon, 22 Nov 2010 13:20:16 -0800 (PST)
From: Frank Harrell <f.harrell at vanderbilt.edu>
To: r-help at r-project.org
Subject: Re: [R] how do remove those predictor which have p value
	greater than 0.05 in GLM?
Message-ID: <1290460816357-3054478.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


What would make you want to delete a variable because P > 0.05?  That will
invalidate every aspect of statistical inference for the model.

Frank


-----
Frank Harrell
Department of Biostatistics, Vanderbilt University
-- 
View this message in context:
http://r.789695.n4.nabble.com/how-do-remove-those-predictor-which-have-p-value-greater-than-0-05-in-GLM-tp3053921p3054478.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 93
Date: Mon, 22 Nov 2010 15:29:11 -0600
From: lucia <lucia at thedietdiary.com>
To: r-help at r-project.org
Subject: [R] Help: Standard errors arima
Message-ID: <1E4A2E3E-19B8-4531-AEFB-A57CA477CB75 at thedietdiary.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

Hello,
I'm an R newbie. I've tried to search, but my search skills don't seem  
up to finding what I need. (Maybe I don't know the correct terms?)

I need the standard errors and not the confidence intervals from an  
ARIMA fit.

I can get fits:

 > coef(test)
                     ar1                     ma1               intercept time(TempVector) - 1900
             0.801459585             0.704126549            12.854527065             0.000520366

And confidence intervals:

 > confint(test)
                                2.5 %       97.5 %
ar1                      7.684230e-01  0.834496136
ma1                      6.742786e-01  0.733974460
intercept                1.217042e+01 13.538635652
time(TempVector) - 1900 -9.610183e-06  0.001050342
 >

http://stat.ethz.ch/R-manual/R-devel/library/stats/html/arima.html
Are any of these standard errors?

 > vcov(test)
                                  ar1           ma1     intercept time(TempVector) - 1900
ar1                      2.841144e-04 -5.343792e-05  1.028710e-05            2.725763e-08
ma1                     -5.343792e-05  2.319165e-04  9.990842e-07           -3.103661e-09
intercept                1.028710e-05  9.990842e-07  1.218299e-01            8.969206e-05
time(TempVector) - 1900  2.725763e-08 -3.103661e-09  8.969206e-05            7.311670e-08

Or is there a function that can give me standard errors for the  
coefficients on AR1, ma, and time?  (I don't care about the intercept.)
Thanks,
Lucia



------------------------------

Message: 94
Date: Mon, 22 Nov 2010 16:37:39 -0500
From: David Winsemius <dwinsemius at comcast.net>
To: lucia <lucia at thedietdiary.com>
Cc: r-help at r-project.org
Subject: Re: [R] Help: Standard errors arima
Message-ID: <F1D85621-8D27-46B9-B72F-22E82007414B at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes


On Nov 22, 2010, at 4:29 PM, lucia wrote:

> Hello,
> I'm an R newbie. I've tried to search, but my search skills don't  
> seem up to finding what I need. (Maybe I don't know the correct  
> terms?)
>
> I need the standard errors and not the confidence intervals from an  
> ARIMA fit.
>
> I can get fits:
>
> > coef(test)
>                   ar1                     ma1                
> intercept time(TempVector) - 1900
>           0.801459585             0.704126549             
> 12.854527065             0.000520366
>
> And confidence intervals:
>
> > confint(test)
>                               2.5 %       97.5 %
> ar1                      7.684230e-01  0.834496136
> ma1                      6.742786e-01  0.733974460
> intercept                1.217042e+01 13.538635652
> time(TempVector) - 1900 -9.610183e-06  0.001050342
> >
>
> http://stat.ethz.ch/R-manual/R-devel/library/stats/html/arima.html

That page says that there is a vcov function that can extract the  
variance-covariance matrix, so that should let you make s.e.'s pretty  
easily.
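
As a concrete sketch: for any arima() fit, the standard errors are the square roots of the diagonal of vcov(). The built-in lh series and the AR(1) order below are purely illustrative, not the poster's model:

```r
# fit an AR(1) to a built-in series and extract coefficient standard errors
fit <- arima(lh, order = c(1, 0, 0))
se <- sqrt(diag(vcov(fit)))   # one standard error per estimated coefficient
se
```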

-- 
David.

> Are any of these standard errors?
>
> > vcov(test)
>                                 ar1           ma1    intercept  
> time(TempVector) - 1900
> ar1                      2.841144e-04 -5.343792e-05  
> 1.028710e-05            2.725763e-08
> ma1                     -5.343792e-05  2.319165e-04  
> 9.990842e-07           -3.103661e-09
> intercept                1.028710e-05  9.990842e-07  
> 1.218299e-01            8.969206e-05
> time(TempVector) - 1900  2.725763e-08 -3.103661e-09  
> 8.969206e-05            7.311670e-08
>
> Or is there a function that can give me standard errors for the  
> coefficients on AR1, ma, and time?  (I don't care about the  
> intercept.)
> Thanks,
> Lucia
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

David Winsemius, MD
West Hartford, CT



------------------------------

Message: 95
Date: Mon, 22 Nov 2010 22:41:28 +0100
From: Uwe Ligges <ligges at statistik.tu-dortmund.de>
To: bmiddle <bmiddle at sandia.gov>
Cc: r-help at r-project.org
Subject: Re: [R] R2WinBUGS help
Message-ID: <4CEAE388.3070801 at statistik.tu-dortmund.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed



On 22.11.2010 21:41, bmiddle wrote:
>
> When I use the 'bugs' function from R, WinBUGS runs correctly, but R
freezes.
> The only way to use R after this is to stop calculations (without my file
> that documents the calculation).

R "freezes" as long as the WinBUGS process is open. After closing it, 
R should "respond" again.

If you like a more interactive way of dealing with some BUGS 
incarnation, try the BRugs package, which makes use of OpenBUGS, by typing

install.packages("BRugs")
library("BRugs")
?BRugs
?BRugsFit

Uwe Ligges


  However, I want to save the output in R so
> I can use it in further models. Does anyone know how to fix this problem?



------------------------------

Message: 96
Date: Mon, 22 Nov 2010 16:41:44 -0500
From: David Winsemius <dwinsemius at comcast.net>
To: Xiaoqi Cui <xcui at mtu.edu>
Cc: r-help <r-help at r-project.org>
Subject: Re: [R] R package "kernlab" can not be properly loaded
Message-ID: <A0F114AF-60B6-4672-8C2A-85A7848F42C7 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes


On Nov 22, 2010, at 3:55 PM, Xiaoqi Cui wrote:

> Hi,
>
> I tried to load the package "kernlab" under R-v11 and R-v10, however  
> it gave error message:
>
> Error in library.dynam(lib, package, package.lib) :
>  shared library 'kernlab' not found
> In addition: Warning message:
> package 'kernlab' was built under R version 2.12.0
> Error: package/namespace load failed for 'kernlab'

The current version of R and all of the packages in CRAN _is_ 2.12.0.

load()-ing packages that differ in their major versions is not  
advisable and not guaranteed to succeed. The Contributed Packages page  
of CRAN has at the very bottom a link to the package Archive.


David Winsemius, MD
West Hartford, CT



------------------------------

Message: 97
Date: Mon, 22 Nov 2010 22:45:21 +0100
From: Uwe Ligges <ligges at statistik.tu-dortmund.de>
To: Xiaoqi Cui <xcui at mtu.edu>
Cc: r-help <r-help at r-project.org>
Subject: Re: [R] R package "kernlab" can not be properly loaded
Message-ID: <4CEAE471.50802 at statistik.tu-dortmund.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed



On 22.11.2010 21:55, Xiaoqi Cui wrote:
> Hi,
>
> I tried to load the package "kernlab" under R-v11 and R-v10, however it
gave error message:
>
> Error in library.dynam(lib, package, package.lib) :
>    shared library 'kernlab' not found
> In addition: Warning message:
> package 'kernlab' was built under R version 2.12.0
> Error: package/namespace load failed for 'kernlab'
>
> Has anybody loaded this successfully before? Thanks,



Yes, when it was built for R-2.11.x, we were using R-2.11.x.

I guess you are under Windows (unstated), where some infrastructure was 
changed from R-2.11.x to R-2.12.x, and packages compiled for the latter 
won't work for the former.

Hence please run

install.packages("kernlab")

in order to get a version that fits your 
R, or even better, upgrade your version of R.


Uwe Ligges





> Xiaoqi
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 98
Date: Mon, 22 Nov 2010 21:51:40 +0000
From: Ista Zahn <izahn at psych.rochester.edu>
To: madr <madrazel at interia.pl>
Cc: r-help at r-project.org
Subject: Re: [R] I need a very specific unique like function and I
	don't know even how to properly call this
Message-ID:
	<AANLkTin6Yn1FPKiDZO3-WMKFO48_aqA_QFjHKh_4WC7U at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Here is a method for piecing it together using diff and indexing:

dat <- structure(c(3L, 6L, 7L, 3L, 7L, 5L, 8L, 2L, 7L, 0L, 7L, 5L, 5L,
                   5L, 5L, 5L, 4L, 4L, 4L, 6L), .Dim = c(10L, 2L),
                 .Dimnames = list(NULL, c("V1", "V2")))
# get the absolute difference between each value and the previous value
diffs <- abs(diff(dat[, 2], 1))
# combine the diffs with the original matrix, shifted down (is the previous
# value the same?) and up (is the next value the same?)
new.dat <- cbind(dat, c(NA, diffs), c(diffs, NA))
# sum the shifted diffs so that the value is 0 if the values above and
# below are both the same, and greater than zero otherwise
new.dat <- cbind(new.dat, rowSums(new.dat[, 3:4], na.rm = TRUE))
# keep only the rows for which the sum of the shifted diffs is not zero
final.dat <- new.dat[new.dat[, 5] != 0, 1:2]
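
The same idea can be written a bit more compactly, keeping a row whenever its column-2 value differs from the row above or the row below (a sketch, using the example matrix from the question):

```r
m <- matrix(c(3, 6, 7, 3, 7, 5, 8, 2, 7, 0,
              7, 5, 5, 5, 5, 5, 4, 4, 4, 6), ncol = 2)
d <- diff(m[, 2]) != 0            # TRUE where adjacent column-2 values differ
keep <- c(TRUE, d) | c(d, TRUE)   # differs from previous OR next row
m[keep, ]                         # rows 1, 2, 6, 7, 9, 10 survive
```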

HTH,
Ista
On Mon, Nov 22, 2010 at 8:53 PM, madr <madrazel at interia.pl> wrote:
>
> consider this matrix:
>
>       [,1] [,2]
>  [1,]    3   7
>  [2,]    6   5
>  [3,]    7   5
>  [4,]    3   5
>  [5,]    7   5
>  [6,]    5   5
>  [7,]    8   4
>  [8,]    2   4
>  [9,]    7   4
> [10,]    0   6
>
> I need to delete all rows where column 2 above and below has the same value,
> so the effect would be:
>
>       [,1] [,2]
>  [1,]    3   7
>  [2,]    6   5
>  [6,]    5   5
>  [7,]    8   4
>  [9,]    7   4
> [10,]    0   6
>
> is there a built in function for that kind of operation or I must write
one
> from scratch ?
> Is there a name for that kind of operation ?
> --
> View this message in context:
http://r.789695.n4.nabble.com/I-need-a-very-specific-unique-like-function-and-I-don-t-know-even-how-to-properly-call-this-tp3054427p3054427.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org



------------------------------

Message: 99
Date: Mon, 22 Nov 2010 14:03:43 -0800 (PST)
From: bmiddle <bmiddle at sandia.gov>
To: r-help at r-project.org
Subject: Re: [R] R2WinBUGS help
Message-ID: <1290463423081-3054531.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


> When I use the 'bugs' function from R, WinBUGS runs correctly, but R
> freezes. 
> The only way to use R after this is to stop calculations (without my file 
> that documents the calculation). 

R "freezes" as long as the WinBUGS process is open. After closing it, 
R should "respond" again. 

If you like a more interactive way of dealing with some BUGS 
incarnation, try the BRugs package, which makes use of OpenBUGS, by typing 

install.packages("BRugs") 
library("BRugs") 
?BRugs 
?BRugsFit

Thanks, Uwe. I think it was a problem with Windows. WinBUGS should close
automatically and it wasn't. Anyway, it seems that the problem is now fixed.
I may try the BRugs package soon (after I submit this project report!).
-- 
View this message in context:
http://r.789695.n4.nabble.com/R2WinBUGS-help-tp3054411p3054531.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 100
Date: Mon, 22 Nov 2010 14:10:04 -0800 (PST)
From: shubha <shuba.pandit at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] how do remove those predictor which have p value
	greater than 0.05 in GLM?
Message-ID: <1290463804045-3054540.post at n4.nabble.com>
Content-Type: text/plain


Thanks for the response, Frank. 
I am not saying that I want to delete variables simply because p > 0.05. My
concern is this: I am using backward stepwise logistic regression, which
keeps a variable in the final model only if it contributes significantly to
the model; otherwise it should not be in the final model. 
Other software gives the results I expect, but R does not. I want to keep
variables with p < 0.05 and exclude the rest from the model, since including
them affects the log-likelihood ratio and the AIC. How can I set a p-value
criterion of <= 0.05 for the model? Any suggestions. 
thanks
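For the archive: one way to sketch this kind of manual backward elimination in R is with drop1() in a loop. This is only an illustration (the data and formula below are made up), and dropping terms by p-value is exactly the practice Frank cautions against:

```r
## Minimal sketch: backward elimination by likelihood-ratio p-value,
## removing the least significant term until all remaining terms have
## p < 0.05. Illustrative data only.
set.seed(1)
d <- data.frame(y  = rbinom(200, 1, 0.5),
                x1 = rnorm(200), x2 = rnorm(200), x3 = rnorm(200))
fit <- glm(y ~ x1 + x2 + x3, data = d, family = binomial)
repeat {
  dr <- drop1(fit, test = "Chisq")
  p  <- dr[["Pr(>Chi)"]][-1]                 # skip the <none> row
  if (length(p) == 0 || all(p < 0.05)) break
  worst <- rownames(dr)[-1][which.max(p)]    # least significant term
  fit   <- update(fit, as.formula(paste(". ~ . -", worst)))
}
summary(fit)
```

Each pass refits the model without the term whose likelihood-ratio test has the largest p-value, stopping when every surviving term is significant at 0.05.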

-- 
View this message in context:
http://r.789695.n4.nabble.com/how-do-remove-those-predictor-which-have-p-val
ue-greater-than-0-05-in-GLM-tp3053921p3054540.html
Sent from the R help mailing list archive at Nabble.com.

	[[alternative HTML version deleted]]



------------------------------

Message: 101
Date: Mon, 22 Nov 2010 14:11:13 -0800
From: Dennis Murphy <djmuser at gmail.com>
To: lucia <lucia at thedietdiary.com>
Cc: r-help at r-project.org
Subject: Re: [R] Help: Standard errors arima
Message-ID:
	<AANLkTimNjEECU5Ee_+89vsy8D3JzHc9rZbBPhA7458JL at mail.gmail.com>
Content-Type: text/plain

Hi:

Here's an example (never mind the model fit... or lack thereof...)

str(AirPassengers)      # a built-in R data set

# Series is seasonal with increasing trend and increasing variance
plot(AirPassengers, type = 'l')
# STL decomposition
plot(stl(AirPassengers, 'periodic'))
# ACF and PACF of differenced series
par(mfrow = c(2, 1))
acf(diff(AirPassengers))
pacf(diff(AirPassengers))
par(mfrow = c(1, 1))

# Fit a basic seasonal model: SARIMA(0, 1, 1) x (0, 0, 1):
m1 <- arima(AirPassengers, order = c(0, 1, 1),
             seasonal = list(order = c(0, 0, 1), period = 12))

# Most models in R return lists; arima() is no different:
names(m1)
 [1] "coef"      "sigma2"    "var.coef"  "mask"      "loglik"    "aic"
 [7] "arma"      "residuals" "call"      "series"    "code"      "n.cond"
[13] "model"

# var.coef looks promising, so let's extract it:
m1$var.coef

# As David mentioned, vcov() also works (not just for time series, either):
vcov(m1)

Both should return the same covariance matrix of the estimated coefficients.
The standard errors are the square roots of the diagonal elements:

sqrt(diag(m1$var.coef))
sqrt(diag(vcov(m1)))

Compare this to the output from arima():
> m1

Call:
arima(x = AirPassengers, order = c(0, 1, 1), seasonal = list(order = c(0, 0,

    1), period = 12))

Coefficients:
         ma1    sma1
      0.2263  0.8015
s.e.  0.0805  0.0674


HTH,
Dennis



On Mon, Nov 22, 2010 at 1:29 PM, lucia <lucia at thedietdiary.com> wrote:

> Hello,
> I'm an R newbie. I've tried to search, but my search skills don't seem up
> to finding what I need. (Maybe I don't know the correct terms?)
>
> I need the standard errors and not the confidence intervals from an ARIMA
> fit.
>
> I can get fits:
>
> > coef(test)
>                   ar1                     ma1               intercept
> time(TempVector) - 1900
>           0.801459585             0.704126549            12.854527065
>       0.000520366
>
> And confidence intervals:
>
> > confint(test)
>                               2.5 %       97.5 %
> ar1                      7.684230e-01  0.834496136
> ma1                      6.742786e-01  0.733974460
> intercept                1.217042e+01 13.538635652
> time(TempVector) - 1900 -9.610183e-06  0.001050342
> >
>
> http://stat.ethz.ch/R-manual/R-devel/library/stats/html/arima.html
> Are any of these standard errors?
>
> > vcov(test)
>                                 ar1           ma1    intercept
> time(TempVector) - 1900
> ar1                      2.841144e-04 -5.343792e-05 1.028710e-05
>  2.725763e-08
> ma1                     -5.343792e-05  2.319165e-04 9.990842e-07
> -3.103661e-09
> intercept                1.028710e-05  9.990842e-07 1.218299e-01
>  8.969206e-05
> time(TempVector) - 1900  2.725763e-08 -3.103661e-09 8.969206e-05
>  7.311670e-08
>
> Or is there a function that can give me standard errors for the
> coefficients on AR1, ma, and time?  (I don't care about the intercept.)
> Thanks,
> Lucia
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

	[[alternative HTML version deleted]]



------------------------------

Message: 102
Date: Mon, 22 Nov 2010 14:24:50 -0500
From: "Kenney, Colleen T CTR USA AMC" <colleen.t.kenney at us.army.mil>
To: <r-help at r-project.org>
Subject: [R] Probit Analysis: Confidence Interval for the LD50 using
	Fieller's and Heterogeneity (UNCLASSIFIED)
Message-ID:
	
<0DE122C88FD3464296144E13610ADF700CC59A5C at APGR010BEC80008.nae.ds.army.mil>
	
Content-Type: text/plain;	charset="us-ascii"

Classification:  UNCLASSIFIED 
Caveats: NONE

A similar question has been posted in the past but never answered.  My
question is this: for probit analysis, how do you program a 95%
confidence interval for the LD50 (or LC50, EC50, etc.) that includes a
heterogeneity factor as written about in "Probit Analysis" by
Finney (1971)?  The heterogeneity factor comes into play through the
chi-squared test for homogeneity and is equal to h = chi^2/(k-2), where k
is the number of doses and k-2 is the degrees of freedom.

I have done a lot of research on this and really appreciate any help
[[elided Yahoo spam]]


Classification:  UNCLASSIFIED 
Caveats: NONE
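For future readers of the archive, a rough sketch of Finney's approach in R follows. It is illustrative only, not a vetted implementation: it fits a probit model (Bliss's classic 1935 beetle-mortality data stand in for real data), computes the heterogeneity factor h = chi^2/(k-2) from the Pearson chi-square, inflates the coefficient covariance matrix when h > 1, and solves Fieller's quadratic for the LD50 limits:

```r
## Illustrative only: Fieller CI for the LD50 from a probit fit, with
## Finney's heterogeneity factor. Data: Bliss (1935) beetle mortality.
logdose <- c(1.6907, 1.7242, 1.7552, 1.7842, 1.8113, 1.8369, 1.8610, 1.8839)
n       <- c(59, 60, 62, 56, 63, 59, 62, 60)
killed  <- c( 6, 13, 18, 28, 52, 53, 61, 60)

fit <- glm(cbind(killed, n - killed) ~ logdose,
           family = binomial(link = "probit"))
b <- coef(fit); V <- vcov(fit)
k <- length(logdose)
h <- sum(residuals(fit, type = "pearson")^2) / (k - 2)  # heterogeneity factor
if (h > 1) V <- V * h                 # Finney: inflate vcov when h > 1
tval <- if (h > 1) qt(0.975, df = k - 2) else qnorm(0.975)

## Fieller: solve (b0 + b1*m)^2 = t^2 * (v00 + 2*m*v01 + m^2*v11) for m
A <- unname(b[2]^2 - tval^2 * V[2, 2])
B <- unname(2 * (b[1] * b[2] - tval^2 * V[1, 2]))
C <- unname(b[1]^2 - tval^2 * V[1, 1])
ld50 <- unname(-b[1] / b[2])
ci   <- sort((-B + c(-1, 1) * sqrt(B^2 - 4 * A * C)) / (2 * A))
c(LD50 = ld50, lower = ci[1], upper = ci[2])
```

The LD50 is -b0/b1 on the log-dose scale; the interval comes from the two roots of the quadratic, which exist whenever the slope is well determined (g < 1 in Finney's notation).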



------------------------------

Message: 103
Date: Mon, 22 Nov 2010 12:35:55 -0700
From: Yogesh Tiwari <yogesh.mpi at googlemail.com>
To: r-help <r-help at stat.math.ethz.ch>
Subject: [R] how to calculate derivative
Message-ID:
	<AANLkTind3+hs91==Te155AvYTBB4o4XULE=mEfWTAFwx at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Dear R Users,

I have trends of two CO2 time series, each with 10 years of data. One is
sampled weekly and the other bi-weekly. I want to calculate the growth rate
(ppmv / year) of these CO2 trends, i.e. their time derivative in ppmv /
year.

How can I do this in R?

I have attached an example data file; I would appreciate it if anyone could
kindly help with it.

Thanks,

Regards,
Yogesh
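For the archive, one possible sketch (synthetic CO2-like data stand in for the attached file): fit a smoothing spline to the trend and evaluate its first derivative, which gives the growth rate directly in ppmv / year:

```r
## Minimal sketch with made-up data in place of the attachment:
## growth rate as the first derivative of a smoothing spline fit.
x <- seq(1993, 2003, length.out = 521)               # ~weekly time stamps
y <- 356 + 1.7 * (x - 1993) + 2 * sin(2 * pi * x)    # fake trend + cycle
fit  <- smooth.spline(x, y)
rate <- predict(fit, x, deriv = 1)$y                 # d(CO2)/dt, ppmv / year

## Crude alternative: first differences of the raw series
rate_fd <- diff(y) / diff(x)
```

The spline derivative smooths out sampling noise; the finite-difference version is simpler but amplifies it.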
-------------- next part --------------
1993.1218	356.8655
1993.1246	356.8723
1993.1273	356.8792
1993.13	356.886
[... approximately 1,490 further rows of tab-separated (decimal year, CO2 in
ppmv) pairs, running through 1997.2149	364.7052, omitted here for brevity ...]
1997.2177	364.7028
1997.2204	364.7003
1997.2231	364.698
1997.2259	364.6954
1997.2286	364.693
1997.2313	364.6906
1997.2341	364.6882
1997.2368	364.6858
1997.2396	364.6835
1997.2423	364.6811
1997.245	364.6788
1997.2478	364.6765
1997.2505	364.6742
1997.2533	364.6719
1997.256	364.6697
1997.2587	364.6674
1997.2615	364.6651
1997.2642	364.6628
1997.2669	364.6606
1997.2697	364.6584
1997.2724	364.6562
1997.2752	364.654
1997.2779	364.6519
1997.2806	364.6497
1997.2834	364.6476
1997.2861	364.6455
1997.2888	364.6434
1997.2916	364.6414
1997.2943	364.6393
1997.2971	364.6373
1997.2998	364.6352
1997.3025	364.6332
1997.3053	364.6312
1997.308	364.6293
1997.3107	364.6273
1997.3135	364.6254
1997.3162	364.6235
1997.319	364.6217
1997.3217	364.6198
1997.3244	364.618
1997.3272	364.6162
1997.3299	364.6144
1997.3326	364.6126
1997.3354	364.6109
1997.3381	364.6091
1997.3409	364.6075
1997.3436	364.6058
1997.3463	364.6041
1997.3491	364.6025
1997.3518	364.6009
1997.3546	364.5994
1997.3573	364.5978
1997.36	364.5963
1997.3628	364.5948
1997.3655	364.5933
1997.3682	364.5919
1997.371	364.5905
1997.3737	364.5891
1997.3765	364.5878
1997.3792	364.5865
1997.3819	364.5852
1997.3847	364.5839
1997.3874	364.5826
1997.3901	364.5814
1997.3929	364.5803
1997.3956	364.5791
1997.3984	364.578
1997.4011	364.5769
1997.4038	364.5758
1997.4066	364.5748
1997.4093	364.5738
1997.412	364.5728
1997.4148	364.5719
1997.4175	364.571
1997.4203	364.5702
1997.423	364.5693
1997.4257	364.5685
1997.4285	364.5678
1997.4312	364.567
1997.4339	364.5663
1997.4367	364.5657
1997.4394	364.565
1997.4422	364.5644
1997.4449	364.5638
1997.4476	364.5633
1997.4504	364.5629
1997.4531	364.5625
1997.4559	364.5621
1997.4586	364.5617
1997.4613	364.5613
1997.4641	364.561
1997.4668	364.5607
1997.4695	364.5605
1997.4723	364.5603
1997.475	364.5601
1997.4778	364.56
1997.4805	364.5599
1997.4832	364.5598
1997.486	364.5598
1997.4887	364.56
1997.4914	364.5601
1997.4942	364.5602
1997.4969	364.5604
1997.4997	364.5606
1997.5024	364.5608
1997.5051	364.5611
1997.5079	364.5614
1997.5106	364.5617
1997.5133	364.5621
1997.5161	364.5626
1997.5188	364.563
1997.5216	364.5635
1997.5243	364.5641
1997.527	364.5646
1997.5298	364.5656
1997.5325	364.5663
1997.5352	364.567
1997.538	364.5678
1997.5407	364.5686
1997.5435	364.5695
1997.5462	364.5704
1997.5489	364.5714
1997.5517	364.5724
1997.5544	364.5734
1997.5572	364.5745
1997.5599	364.5756
1997.5626	364.5767
1997.5654	364.578
1997.5681	364.5795
1997.5708	364.5809
1997.5736	364.5822
1997.5763	364.5837
1997.5791	364.5851
1997.5818	364.5866
1997.5845	364.5882
1997.5873	364.5898
1997.59	364.5914
1997.5927	364.5931
1997.5955	364.5948
1997.5982	364.5966
1997.601	364.5984
1997.6037	364.6002
1997.6064	364.6025
1997.6092	364.6045
1997.6119	364.6065
1997.6146	364.6086
1997.6174	364.6108
1997.6201	364.6129
1997.6229	364.6152
1997.6256	364.6174
1997.6283	364.6198
1997.6311	364.6221
1997.6338	364.6245
1997.6366	364.627
1997.6393	364.6295
1997.642	364.632
1997.6448	364.6346
1997.6475	364.6372
1997.6502	364.6399
1997.653	364.6433
1997.6557	364.6461
1997.6585	364.649
1997.6612	364.652
1997.6639	364.6549
1997.6667	364.658
1997.6694	364.6611
1997.6721	364.6642
1997.6749	364.6674
1997.6776	364.6706
1997.6804	364.6739
1997.6831	364.6772
1997.6858	364.6805
1997.6886	364.684
1997.6913	364.6874
1997.694	364.6909
1997.6968	364.6945
1997.6995	364.6988
1997.7023	364.7025
1997.705	364.7063
1997.7077	364.7101
1997.7105	364.714
1997.7132	364.7179
1997.7159	364.7219
1997.7187	364.7259
1997.7214	364.73
1997.7242	364.7341
1997.7269	364.7383
1997.7296	364.7425
1997.7324	364.7468
1997.7351	364.7511
1997.7379	364.7559
1997.7406	364.7604
1997.7433	364.765
1997.7461	364.7695
1997.7488	364.7742
1997.7515	364.7788
1997.7543	364.7836
1997.757	364.7883
1997.7598	364.7932
1997.7625	364.798
1997.7652	364.8029
1997.768	364.8079
1997.7707	364.8129
1997.7734	364.818
1997.7762	364.8231
1997.7789	364.8289
1997.7817	364.8342
1997.7844	364.8395
1997.7871	364.8449
1997.7899	364.8504
1997.7926	364.8559
1997.7953	364.8614
1997.7981	364.867
1997.8008	364.8726
1997.8036	364.8783
1997.8063	364.8841
1997.809	364.8898
1997.8118	364.8957
1997.8145	364.9016
1997.8172	364.9081
1997.82	364.9141
1997.8227	364.9202
1997.8255	364.9264
1997.8282	364.9326
1997.8309	364.9389
1997.8337	364.9452
1997.8364	364.9515
1997.8392	364.9579
1997.8419	364.9644
1997.8446	364.9709
1997.8474	364.9774
1997.8501	364.984
1997.8528	364.9911
1997.8556	364.9979
1997.8583	365.0047
1997.8611	365.0116
1997.8638	365.0185
1997.8665	365.0255
1997.8693	365.0325
1997.872	365.0395
1997.8747	365.0466
1997.8775	365.0538
1997.8802	365.061
1997.883	365.0682
1997.8857	365.0755
1997.8884	365.0829
1997.8912	365.0909
1997.8939	365.0984
1997.8966	365.106
1997.8994	365.1136
1997.9021	365.1212
1997.9049	365.129
1997.9076	365.1367
1997.9103	365.1445
1997.9131	365.1524
1997.9158	365.1603
1997.9185	365.1682
1997.9213	365.1762
1997.924	365.1843
1997.9268	365.1929
1997.9295	365.2011
1997.9322	365.2093
1997.935	365.2176
1997.9377	365.226
1997.9405	365.2344
1997.9432	365.2428
1997.9459	365.2513
1997.9487	365.2598
1997.9514	365.2684
1997.9541	365.277
1997.9569	365.2857
1997.9596	365.2944
1997.9624	365.3032
1997.9651	365.312
1997.9678	365.3208
1997.9706	365.3305
1997.9733	365.3395
1997.976	365.3486
1997.9788	365.3577
1997.9815	365.3669
1997.9843	365.3761
1997.987	365.3853
1997.9897	365.3946
1997.9925	365.404
1997.9952	365.4133
1997.9979	365.4228
1998.0007	365.4322
1998.0034	365.4417
1998.0062	365.4513
1998.0089	365.4609
1998.0116	365.4705
1998.0144	365.4802
1998.0171	365.4899
1998.0198	365.4997
1998.0226	365.5095
1998.0253	365.5194
1998.0281	365.5292
1998.0308	365.5392
1998.0335	365.5491
1998.0363	365.5592
1998.039	365.5692
1998.0418	365.5793
1998.0445	365.5894
1998.0472	365.6018
1998.05	365.6122
1998.0527	365.6226
1998.0554	365.6331
1998.0582	365.6436
1998.0609	365.6541
1998.0637	365.6647
1998.0664	365.6753
1998.0691	365.6859
1998.0719	365.6966
1998.0746	365.7073
1998.0773	365.7181
1998.0801	365.7289
1998.0828	365.7397
1998.0856	365.7511
1998.0883	365.7621
1998.091	365.7731
1998.0938	365.7842
1998.0965	365.7953
1998.0992	365.8064
1998.102	365.8176
1998.1047	365.8288
1998.1075	365.84
1998.1102	365.8513
1998.1129	365.8626
1998.1157	365.8739
1998.1184	365.8853
1998.1211	365.8967
1998.1239	365.9087
1998.1266	365.9202
1998.1294	365.9318
1998.1321	365.9434
1998.1348	365.9551
1998.1376	365.9668
1998.1403	365.9785
1998.1431	365.9902
1998.1458	366.002
1998.1485	366.0139
1998.1513	366.0257
1998.154	366.0376
1998.1567	366.0495
1998.1595	366.0615
1998.1622	366.0739
1998.165	366.0859
1998.1677	366.098
1998.1704	366.1102
1998.1732	366.1223
1998.1759	366.1345
1998.1786	366.1468
1998.1814	366.159
1998.1841	366.1713
1998.1869	366.1836
1998.1896	366.196
1998.1923	366.2083
1998.1951	366.2207
1998.1978	366.2332
1998.2005	366.246
1998.2033	366.2585
1998.206	366.2711
1998.2088	366.2837
1998.2115	366.2963
1998.2142	366.3089
1998.217	366.3216
1998.2197	366.3343
1998.2225	366.347
1998.2252	366.3598
1998.2279	366.3726
1998.2307	366.3854
1998.2334	366.3982
1998.2361	366.411
1998.2389	366.4239
1998.2416	366.4368
1998.2444	366.4501
1998.2471	366.4631
1998.2498	366.4761
1998.2526	366.4892
1998.2553	366.5022
1998.258	366.5153
1998.2608	366.5284
1998.2635	366.5415
1998.2663	366.5547
1998.269	366.5679
1998.2717	366.5811
1998.2745	366.5943
1998.2772	366.6075
1998.2799	366.6208
1998.2827	366.6341
1998.2854	366.6476
1998.2882	366.6609
1998.2909	366.6743
1998.2936	366.6877
1998.2964	366.7011
1998.2991	366.7145
1998.3018	366.728
1998.3046	366.7414
1998.3073	366.7549
1998.3101	366.7684
1998.3128	366.7819
1998.3155	366.7954
1998.3183	366.809
1998.321	366.8226
1998.3238	366.8361
1998.3265	366.8499
1998.3292	366.8635
1998.332	366.8771
1998.3347	366.8908
1998.3374	366.9045
1998.3402	366.9182
1998.3429	366.9319
1998.3457	366.9456
1998.3484	366.9593
1998.3511	366.9731
1998.3539	366.9868
1998.3566	367.0006
1998.3593	367.0144
1998.3621	367.0282
1998.3648	367.042
1998.3676	367.0559
1998.3703	367.0697
1998.373	367.0836
1998.3758	367.0974
1998.3785	367.1113
1998.3812	367.1252
1998.384	367.1391
1998.3867	367.153
1998.3895	367.1669
1998.3922	367.1808
1998.3949	367.1947
1998.3977	367.2087
1998.4004	367.2226
1998.4031	367.2366
1998.4059	367.2505
1998.4086	367.2645
1998.4114	367.2785
1998.4141	367.2924
1998.4168	367.3064
1998.4196	367.3204
1998.4223	367.3344
1998.4251	367.3484
1998.4278	367.3624
1998.4305	367.3764
1998.4333	367.3904
1998.436	367.4045
1998.4387	367.4185
1998.4415	367.4325
1998.4442	367.4465
1998.447	367.4606
1998.4497	367.4746
1998.4524	367.4887
1998.4552	367.5027
1998.4579	367.5168
1998.4606	367.5308
1998.4634	367.5449
1998.4661	367.5589
1998.4689	367.5729
1998.4716	367.5869
1998.4743	367.6009
1998.4771	367.615
1998.4798	367.629
1998.4825	367.6431
1998.4853	367.6571
1998.488	367.6711
1998.4908	367.6852
1998.4935	367.6992
1998.4962	367.7132
1998.499	367.7273
1998.5017	367.7413
1998.5044	367.7553
1998.5072	367.7692
1998.5099	367.7832
1998.5127	367.7972
1998.5154	367.8112
1998.5181	367.8252
1998.5209	367.8392
1998.5236	367.8531
1998.5264	367.8671
1998.5291	367.8811
1998.5318	367.8951
1998.5346	367.909
1998.5373	367.923
1998.54	367.9369
1998.5428	367.9509
1998.5455	367.9646
1998.5483	367.9785
1998.551	367.9924
1998.5537	368.0063
1998.5565	368.0202
1998.5592	368.034
1998.5619	368.0479
1998.5647	368.0617
1998.5674	368.0756
1998.5702	368.0894
1998.5729	368.1032
1998.5756	368.1171
1998.5784	368.1309
1998.5811	368.1447
1998.5838	368.1582
1998.5866	368.172
1998.5893	368.1857
1998.5921	368.1994
1998.5948	368.2131
1998.5975	368.2268
1998.6003	368.2405
1998.603	368.2542
1998.6057	368.2678
1998.6085	368.2814
1998.6112	368.2951
1998.614	368.3087
1998.6167	368.3223
1998.6194	368.3359
1998.6222	368.3495
1998.6249	368.3627
1998.6277	368.3762
1998.6304	368.3897
1998.6331	368.4032
1998.6359	368.4166
1998.6386	368.4301
1998.6413	368.4435
1998.6441	368.4569
1998.6468	368.4703
1998.6496	368.4837
1998.6523	368.497
1998.655	368.5104
1998.6578	368.5237
1998.6605	368.537
1998.6632	368.5503
1998.666	368.5636
1998.6687	368.5768
1998.6715	368.5896
1998.6742	368.6028
1998.6769	368.6159
1998.6797	368.629
1998.6824	368.6421
1998.6851	368.6552
1998.6879	368.6683
1998.6906	368.6813
1998.6934	368.6943
1998.6961	368.7073
1998.6988	368.7203
1998.7016	368.7332
1998.7043	368.7462
1998.707	368.7591
1998.7098	368.7716
1998.7125	368.7844
1998.7153	368.7972
1998.718	368.81
1998.7207	368.8227
1998.7235	368.8354
1998.7262	368.8481
1998.729	368.8608
1998.7317	368.8734
1998.7344	368.8861
1998.7372	368.8987
1998.7399	368.9112
1998.7426	368.9238
1998.7454	368.9363
1998.7481	368.9488
1998.7509	368.9613
1998.7536	368.9737
1998.7563	368.9862
1998.7591	368.9986
1998.7618	369.0109
1998.7645	369.0233
1998.7673	369.0356
1998.77	369.0479
1998.7728	369.0601
1998.7755	369.0724
1998.7782	369.0846
1998.781	369.0968
1998.7837	369.1089
1998.7864	369.121
1998.7892	369.1331
1998.7919	369.1452
1998.7947	369.1572
1998.7974	369.1692
1998.8001	369.1812
1998.8029	369.1932
1998.8056	369.2051
1998.8084	369.217
1998.8111	369.2288
1998.8138	369.2407
1998.8166	369.2525
1998.8193	369.2642
1998.822	369.276
1998.8248	369.2877
1998.8275	369.2993
1998.8303	369.311
1998.833	369.3226
1998.8357	369.3342
1998.8385	369.3457
1998.8412	369.3572
1998.8439	369.3637
1998.8467	369.3749
1998.8494	369.3862
1998.8522	369.3973
1998.8549	369.4085
1998.8576	369.4196
1998.8604	369.4307
1998.8631	369.4417
1998.8658	369.4527
1998.8686	369.4637
1998.8713	369.4746
1998.8741	369.4855
1998.8768	369.4964
1998.8795	369.5073
1998.8823	369.5181
1998.885	369.5283
1998.8877	369.539
1998.8905	369.5496
1998.8932	369.5602
1998.896	369.5708
1998.8987	369.5813
1998.9014	369.5918
1998.9042	369.6022
1998.9069	369.6127
1998.9097	369.623
1998.9124	369.6334
1998.9151	369.6437
1998.9179	369.654
1998.9206	369.6642
1998.9233	369.6744
1998.9261	369.684
1998.9288	369.6941
1998.9316	369.7041
1998.9343	369.7141
1998.937	369.724
1998.9398	369.7339
1998.9425	369.7438
1998.9452	369.7537
1998.948	369.7635
1998.9507	369.7732
1998.9535	369.7829
1998.9562	369.7926
1998.9589	369.8023
1998.9617	369.8119
1998.9644	369.8214
1998.9671	369.8304
1998.9699	369.8398
1998.9726	369.8492
1998.9754	369.8586
1998.9781	369.8679
1998.9808	369.8771
1998.9836	369.8863
1998.9863	369.8955
1998.989	369.9047
1998.9918	369.9138
1998.9945	369.9229
1998.9973	369.9319
1999	369.9409
1999.0027	369.9498
1999.0055	369.9583
1999.0082	369.9671
1999.011	369.9758
1999.0137	369.9846
1999.0164	369.9932
1999.0192	370.0019
1999.0219	370.0105
1999.0246	370.0191
1999.0274	370.0276
1999.0301	370.0361
1999.0329	370.0445
1999.0356	370.0529
1999.0383	370.0613
1999.0411	370.0696
1999.0438	370.0774
1999.0465	370.0856
1999.0493	370.0937
1999.052	370.1018
1999.0548	370.1098
1999.0575	370.1179
1999.0602	370.1258
1999.063	370.1338
1999.0657	370.1417
1999.0684	370.1495
1999.0712	370.1573
1999.0739	370.1651
1999.0767	370.1728
1999.0794	370.1805
1999.0821	370.1876
1999.0849	370.1952
1999.0876	370.2026
1999.0903	370.2101
1999.0931	370.2175
1999.0958	370.2249
1999.0986	370.2322
1999.1013	370.2395
1999.104	370.2467
1999.1068	370.2539
1999.1095	370.2611
1999.1123	370.2682
1999.115	370.2753
1999.1177	370.2823
1999.1205	370.2893
1999.1232	370.2963
1999.1259	370.3032
1999.1287	370.3101
1999.1314	370.3169
1999.1342	370.3237
1999.1369	370.3304
1999.1396	370.336
1999.1424	370.3425
1999.1451	370.349
1999.1478	370.3555
1999.1506	370.3619
1999.1533	370.3683
1999.1561	370.3747
1999.1588	370.381
1999.1615	370.3873
1999.1643	370.3933
1999.167	370.3994
1999.1697	370.4055
1999.1725	370.4116
1999.1752	370.4176
1999.178	370.4235
1999.1807	370.4295
1999.1834	370.4354
1999.1862	370.4412
1999.1889	370.447
1999.1916	370.4528
1999.1944	370.4585
1999.1971	370.4642
1999.1999	370.4699
1999.2026	370.4755
1999.2053	370.481
1999.2081	370.4858
1999.2108	370.4912
1999.2136	370.4966
1999.2163	370.5019
1999.219	370.5072
1999.2218	370.5124
1999.2245	370.5176
1999.2272	370.5227
1999.23	370.5279
1999.2327	370.5329
1999.2355	370.538
1999.2382	370.543
1999.2409	370.5479
1999.2437	370.5523
1999.2464	370.5571
1999.2491	370.5619
1999.2519	370.5666
1999.2546	370.5713
1999.2574	370.5759
1999.2601	370.5805
1999.2628	370.5851
1999.2656	370.5896
1999.2683	370.5941
1999.271	370.5985
1999.2738	370.6029
1999.2765	370.6073
1999.2793	370.6116
1999.282	370.6153
1999.2847	370.6195
1999.2875	370.6236
1999.2902	370.6277
1999.293	370.6318
1999.2957	370.6358
1999.2984	370.6398
1999.3012	370.6437
1999.3039	370.6476
1999.3066	370.6514
1999.3094	370.6553
1999.3121	370.659
1999.3149	370.6628
1999.3176	370.6665
1999.3203	370.6701
1999.3231	370.6738
1999.3258	370.6774
1999.3285	370.6809
1999.3313	370.6835
1999.334	370.6868
1999.3368	370.6902
1999.3395	370.6935
1999.3422	370.6967
1999.345	370.6999
1999.3477	370.7031
1999.3504	370.7063
1999.3532	370.7094
1999.3559	370.7125
1999.3587	370.7155
1999.3614	370.7185
1999.3641	370.7215
1999.3669	370.7244
1999.3696	370.7273
1999.3723	370.7294
1999.3751	370.7322
1999.3778	370.7349
1999.3806	370.7375
1999.3833	370.7401
1999.386	370.7427
1999.3888	370.7453
1999.3915	370.7478
1999.3943	370.7503
1999.397	370.7527
1999.3997	370.7551
1999.4025	370.7575
1999.4052	370.7599
1999.4079	370.7616
1999.4107	370.7638
1999.4134	370.766
1999.4162	370.7681
1999.4189	370.7702
1999.4216	370.7722
1999.4244	370.7742
1999.4271	370.7762
1999.4298	370.7782
1999.4326	370.7801
1999.4353	370.782
1999.4381	370.7839
1999.4408	370.7857
1999.4435	370.7875
1999.4463	370.7886
1999.449	370.7902
1999.4517	370.7918
1999.4545	370.7934
1999.4572	370.7949
1999.46	370.7964
1999.4627	370.7979
1999.4654	370.7993
1999.4682	370.8007
1999.4709	370.8021
1999.4736	370.8035
1999.4764	370.8048
1999.4791	370.8061
1999.4819	370.8073
1999.4846	370.8078
1999.4873	370.8089
1999.4901	370.81
1999.4928	370.8111
1999.4956	370.8121
1999.4983	370.813
1999.501	370.814
1999.5038	370.8149
1999.5065	370.8158
1999.5092	370.8167
1999.512	370.8175
1999.5147	370.8183
1999.5175	370.8191
1999.5202	370.8198
1999.5229	370.8198
1999.5257	370.8204
1999.5284	370.821
1999.5311	370.8215
1999.5339	370.822
1999.5366	370.8225
1999.5394	370.823
1999.5421	370.8234
1999.5448	370.8238
1999.5476	370.8242
1999.5503	370.8245
1999.553	370.8248
1999.5558	370.8251
1999.5585	370.8254
1999.5613	370.8249
1999.564	370.825
1999.5667	370.8251
1999.5695	370.8251
1999.5722	370.8252
1999.5749	370.8252
1999.5777	370.8252
1999.5804	370.8251
1999.5832	370.8251
1999.5859	370.825
1999.5886	370.8249
1999.5914	370.8248
1999.5941	370.8246
1999.5969	370.8244
1999.5996	370.8234
1999.6023	370.8231
1999.6051	370.8227
1999.6078	370.8223
1999.6105	370.8219
1999.6133	370.8215
1999.616	370.8211
1999.6188	370.8206
1999.6215	370.8201
1999.6242	370.8196
1999.627	370.8191
1999.6297	370.8185
1999.6324	370.8179
1999.6352	370.8173
1999.6379	370.8159
1999.6407	370.8151
1999.6434	370.8144
1999.6461	370.8136
1999.6489	370.8128
1999.6516	370.8119
1999.6543	370.8111
1999.6571	370.8102
1999.6598	370.8093
1999.6626	370.8084
1999.6653	370.8075
1999.668	370.8066
1999.6708	370.8056
1999.6735	370.8047
1999.6762	370.8028
1999.679	370.8017
1999.6817	370.8005
1999.6845	370.7994
1999.6872	370.7982
1999.6899	370.797
1999.6927	370.7958
1999.6954	370.7946
1999.6982	370.7933
1999.7009	370.7921
1999.7036	370.7908
1999.7064	370.7895
1999.7091	370.7882
1999.7118	370.7869
1999.7146	370.7847
1999.7173	370.7832
1999.7201	370.7818
1999.7228	370.7803
1999.7255	370.7787
1999.7283	370.7772
1999.731	370.7757
1999.7337	370.7741
1999.7365	370.7726
1999.7392	370.771
1999.742	370.7694
1999.7447	370.7678
1999.7474	370.7662
1999.7502	370.7646
1999.7529	370.7621
1999.7556	370.7603
1999.7584	370.7586
1999.7611	370.7568
1999.7639	370.755
1999.7666	370.7532
1999.7693	370.7513
1999.7721	370.7495
1999.7748	370.7477
1999.7775	370.7458
1999.7803	370.7439
1999.783	370.7421
1999.7858	370.7402
1999.7885	370.7383
1999.7912	370.7364
1999.794	370.7345
1999.7967	370.7326
1999.7995	370.7307
1999.8022	370.7287
1999.8049	370.7268
1999.8077	370.7249
1999.8104	370.7229
1999.8131	370.7209
1999.8159	370.719
1999.8186	370.717
1999.8214	370.715
1999.8241	370.713
1999.8268	370.711
1999.8296	370.709
1999.8323	370.707
1999.835	370.705
1999.8378	370.703
1999.8405	370.701
1999.8433	370.699
1999.846	370.6969
1999.8487	370.6949
1999.8515	370.6929
1999.8542	370.6908
1999.8569	370.6888
1999.8597	370.6867
1999.8624	370.6847
1999.8652	370.6826
1999.8679	370.6805
1999.8706	370.6785
1999.8734	370.6764
1999.8761	370.6743
1999.8789	370.6723
1999.8816	370.6702
1999.8843	370.6681
1999.8871	370.666
1999.8898	370.6639
1999.8925	370.6619
1999.8953	370.6598
1999.898	370.6577
1999.9008	370.6556
1999.9035	370.6535
1999.9062	370.6389
1999.909	370.6363
1999.9117	370.6338
1999.9144	370.6313
1999.9172	370.6288
1999.9199	370.6262
1999.9227	370.6237
1999.9254	370.6212
1999.9281	370.6187
1999.9309	370.6161
1999.9336	370.6136
1999.9363	370.6111
1999.9391	370.6086
1999.9418	370.6061
1999.9446	370.6029
1999.9473	370.6003
1999.95	370.5977
1999.9528	370.595
1999.9555	370.5924
1999.9582	370.5899
1999.961	370.5873
1999.9637	370.5847
1999.9665	370.5821
1999.9692	370.5795
1999.9719	370.5769
1999.9747	370.5743
1999.9774	370.5718
1999.9802	370.5692
1999.9829	370.566
1999.9856	370.5633
1999.9884	370.5607
1999.9911	370.558
1999.9938	370.5554
1999.9966	370.5528
1999.9993	370.5502
2000.0021	370.5475
2000.0048	370.5449
2000.0075	370.5423
2000.0103	370.5397
2000.013	370.5371
2000.0157	370.5345
2000.0185	370.532
2000.0212	370.5288
2000.024	370.5261
2000.0267	370.5235
2000.0294	370.5209
2000.0322	370.5182
2000.0349	370.5156
2000.0376	370.513
2000.0404	370.5104
2000.0431	370.5078
2000.0459	370.5052
2000.0486	370.5026
2000.0513	370.5001
2000.0541	370.4975
2000.0568	370.4949
2000.0595	370.4919
2000.0623	370.4892
2000.065	370.4866
2000.0678	370.4841
2000.0705	370.4815
2000.0732	370.4789
2000.076	370.4763
2000.0787	370.4738
2000.0815	370.4713
2000.0842	370.4687
2000.0869	370.4662
2000.0897	370.4637
2000.0924	370.4612
2000.0951	370.4587
2000.0979	370.4563
2000.1006	370.4538
2000.1034	370.4513
2000.1061	370.4489
2000.1088	370.4465
2000.1116	370.4441
2000.1143	370.4417
2000.117	370.4382
2000.1198	370.4358
2000.1225	370.4333
2000.1253	370.4309
2000.128	370.4284
2000.1307	370.426
2000.1335	370.4236
2000.1362	370.4212
2000.1389	370.4188
2000.1417	370.4165
2000.1444	370.4141
2000.1472	370.4118
2000.1499	370.4095
2000.1526	370.4071
2000.1554	370.4048
2000.1581	370.4021
2000.1608	370.3998
2000.1636	370.3975
2000.1663	370.3952
2000.1691	370.393
2000.1718	370.3907
2000.1745	370.3884
2000.1773	370.3862
2000.18	370.384
2000.1828	370.3818
2000.1855	370.3796
2000.1882	370.3775
2000.191	370.3753
2000.1937	370.3729
2000.1964	370.3707
2000.1992	370.3686
2000.2019	370.3665
2000.2047	370.3644
2000.2074	370.3623
2000.2101	370.3602
2000.2129	370.3581
2000.2156	370.3561
2000.2183	370.3541
2000.2211	370.3521
2000.2238	370.3501
2000.2266	370.3481
2000.2293	370.3462
2000.232	370.3442
2000.2348	370.342
2000.2375	370.3401
2000.2402	370.3382
2000.243	370.3363
2000.2457	370.3344
2000.2485	370.3325
2000.2512	370.3307
2000.2539	370.3288
2000.2567	370.327
2000.2594	370.3252
2000.2621	370.3235
2000.2649	370.3217
2000.2676	370.32
2000.2704	370.3181
2000.2731	370.3164
2000.2758	370.3147
2000.2786	370.313
2000.2813	370.3113
2000.2841	370.3097
2000.2868	370.3081
2000.2895	370.3065
2000.2923	370.3049
2000.295	370.3033
2000.2977	370.3018
2000.3005	370.3002
2000.3032	370.2987
2000.306	370.2973
2000.3087	370.2956
2000.3114	370.2942
2000.3142	370.2927
2000.3169	370.2913
2000.3196	370.2899
2000.3224	370.2885
2000.3251	370.2872
2000.3279	370.2858
2000.3306	370.2845
2000.3333	370.2832
2000.3361	370.2819
2000.3388	370.2807
2000.3415	370.2795
2000.3443	370.2782
2000.347	370.2769
2000.3498	370.2758
2000.3525	370.2746
2000.3552	370.2734
2000.358	370.2723
2000.3607	370.2712
2000.3634	370.2701
2000.3662	370.2691
2000.3689	370.2681
2000.3717	370.267
2000.3744	370.2661
2000.3771	370.2651
2000.3799	370.2641
2000.3826	370.2632
2000.3854	370.2623
2000.3881	370.2614
2000.3908	370.2606
2000.3936	370.2597
2000.3963	370.2589
2000.399	370.2581
2000.4018	370.2572
2000.4045	370.2565
2000.4073	370.2557
2000.41	370.255
2000.4127	370.2543
2000.4155	370.2537
2000.4182	370.253
2000.4209	370.2524
2000.4237	370.2518
2000.4264	370.2512
2000.4292	370.2507
2000.4319	370.2501
2000.4346	370.2496
2000.4374	370.2491
2000.4401	370.2487
2000.4428	370.2482
2000.4456	370.2478
2000.4483	370.2474
2000.4511	370.247
2000.4538	370.2467
2000.4565	370.2464
2000.4593	370.2461
2000.462	370.2458
2000.4648	370.2455
2000.4675	370.2453
2000.4702	370.2451
2000.473	370.2449
2000.4757	370.2447
2000.4784	370.2446
2000.4812	370.2445
2000.4839	370.2444
2000.4867	370.2443
2000.4894	370.2443
2000.4921	370.2442
2000.4949	370.2442
2000.4976	370.2443
2000.5003	370.2443
2000.5031	370.2444
2000.5058	370.2445
2000.5086	370.2446
2000.5113	370.2448
2000.514	370.2449
2000.5168	370.2451
2000.5195	370.2453
2000.5222	370.2456
2000.525	370.2458
2000.5277	370.2461
2000.5305	370.2464
2000.5332	370.2468
2000.5359	370.2472
2000.5387	370.2476
2000.5414	370.248
2000.5441	370.2484
2000.5469	370.2489
2000.5496	370.2493
2000.5524	370.2498
2000.5551	370.2504
2000.5578	370.2509
2000.5606	370.2515
2000.5633	370.2521
2000.5661	370.2527
2000.5688	370.2534
2000.5715	370.254
2000.5743	370.2547
2000.577	370.2555
2000.5797	370.2563
2000.5825	370.257
2000.5852	370.2578
2000.588	370.2587
2000.5907	370.2595
2000.5934	370.2604
2000.5962	370.2613
2000.5989	370.2622
2000.6016	370.2631
2000.6044	370.2641
2000.6071	370.2651
2000.6099	370.2661
2000.6126	370.2671
2000.6153	370.2681
2000.6181	370.2692
2000.6208	370.2703
2000.6235	370.2714
2000.6263	370.2726
2000.629	370.2738
2000.6318	370.275
2000.6345	370.2762
2000.6372	370.2774
2000.64	370.2787
2000.6427	370.28
2000.6454	370.2813
2000.6482	370.2826
2000.6509	370.284
2000.6537	370.2853
2000.6564	370.2867
2000.6591	370.2882
2000.6619	370.2896
2000.6646	370.2911
2000.6674	370.2926
2000.6701	370.2941
2000.6728	370.2962
2000.6756	370.2978
2000.6783	370.2994
2000.681	370.301
2000.6838	370.3027
2000.6865	370.3044
2000.6893	370.3061
2000.692	370.3078
2000.6947	370.3096
2000.6975	370.3113
2000.7002	370.3131
2000.7029	370.3149
2000.7057	370.3168
2000.7084	370.3186
2000.7112	370.3207
2000.7139	370.3226
2000.7166	370.3245
2000.7194	370.3265
2000.7221	370.3285
2000.7248	370.3305
2000.7276	370.3325
2000.7303	370.3346
2000.7331	370.3367
2000.7358	370.3388
2000.7385	370.3409
2000.7413	370.343
2000.744	370.3452
2000.7467	370.3475
2000.7495	370.3497
2000.7522	370.3519
2000.755	370.3542
2000.7577	370.3565
2000.7604	370.3588
2000.7632	370.3611
2000.7659	370.3635
2000.7687	370.3659
2000.7714	370.3683
2000.7741	370.3707
2000.7769	370.3731
2000.7796	370.3756
2000.7823	370.378
2000.7851	370.3805
2000.7878	370.3832
2000.7906	370.3858
2000.7933	370.3884
2000.796	370.391
2000.7988	370.3936
2000.8015	370.3962
2000.8042	370.3989
2000.807	370.4016
2000.8097	370.4043
2000.8125	370.407
2000.8152	370.4097
2000.8179	370.4125
2000.8207	370.4153
2000.8234	370.4181
2000.8261	370.4211
2000.8289	370.4239
2000.8316	370.4268
2000.8344	370.4297
2000.8371	370.4326
2000.8398	370.4356
2000.8426	370.4385
2000.8453	370.4415
2000.848	370.4445
2000.8508	370.4475
2000.8535	370.4505
2000.8563	370.4536
2000.859	370.4567
2000.8617	370.4597
2000.8645	370.4628
2000.8672	370.466
2000.87	370.4691
2000.8727	370.4723
2000.8754	370.4755
2000.8782	370.4787
2000.8809	370.4819
2000.8836	370.4851
2000.8864	370.4883
2000.8891	370.4916
2000.8919	370.4949
2000.8946	370.4988
2000.8973	370.5022
2000.9001	370.5055
2000.9028	370.5089
2000.9055	370.5124
2000.9083	370.5158
2000.911	370.5192
2000.9138	370.5227
2000.9165	370.5262
2000.9192	370.5297
2000.922	370.5332
2000.9247	370.5368
2000.9274	370.5403
2000.9302	370.5439
2000.9329	370.5475
2000.9357	370.551
2000.9384	370.5547
2000.9411	370.5586
2000.9439	370.5623
2000.9466	370.566
2000.9493	370.5697
2000.9521	370.5734
2000.9548	370.5771
2000.9576	370.5809
2000.9603	370.5847
2000.963	370.5885
2000.9658	370.5923
2000.9685	370.5961
2000.9713	370.5999
2000.974	370.6038
2000.9767	370.6076
2000.9795	370.6115
2000.9822	370.6154
2000.9849	370.6193
2000.9877	370.6232
2000.9904	370.6272
2000.9932	370.6311
2000.9959	370.6351
2000.9986	370.639
2001.0014	370.643
2001.0041	370.647
2001.0068	370.6511
2001.0096	370.6551
2001.0123	370.6591
2001.0151	370.6632
2001.0178	370.6673
2001.0205	370.6713
2001.0233	370.6754
2001.026	370.6796
2001.0287	370.6837
2001.0315	370.6878
2001.0342	370.692
2001.037	370.6961
2001.0397	370.7003
2001.0424	370.7045
2001.0452	370.7087
2001.0479	370.7129
2001.0507	370.7171
2001.0534	370.7213
2001.0561	370.7276
2001.0589	370.7319
2001.0616	370.7363
2001.0643	370.7407
2001.0671	370.7451
2001.0698	370.7495
2001.0726	370.7539
2001.0753	370.7583
2001.078	370.7627
2001.0808	370.7672
2001.0835	370.7716
2001.0862	370.7761
2001.089	370.7806
2001.0917	370.785
2001.0945	370.7898
2001.0972	370.7943
2001.0999	370.7989
2001.1027	370.8035
2001.1054	370.808
2001.1081	370.8126
2001.1109	370.8172
2001.1136	370.8218
2001.1164	370.8264
2001.1191	370.831
2001.1218	370.8357
2001.1246	370.8403
2001.1273	370.845
2001.13	370.8496
2001.1328	370.8546
2001.1355	370.8593
2001.1383	370.864
2001.141	370.8687
2001.1437	370.8735
2001.1465	370.8782
2001.1492	370.883
2001.152	370.8877
2001.1547	370.8925
2001.1574	370.8973
2001.1602	370.9021
2001.1629	370.9069
2001.1656	370.9117
2001.1684	370.9165
2001.1711	370.9216
2001.1739	370.9265
2001.1766	370.9314
2001.1793	370.9362
2001.1821	370.9411
2001.1848	370.946
2001.1875	370.9509
2001.1903	370.9559
2001.193	370.9608
2001.1958	370.9657
2001.1985	370.9706
2001.2012	370.9756
2001.204	370.9805
2001.2067	370.9855
2001.2094	370.9907
2001.2122	370.9957
2001.2149	371.0008
2001.2177	371.0058
2001.2204	371.0108
2001.2231	371.0158
2001.2259	371.0209
2001.2286	371.0259
2001.2313	371.0309
2001.2341	371.036
2001.2368	371.041
2001.2396	371.0461
2001.2423	371.0512
2001.245	371.0562
2001.2478	371.0616
2001.2505	371.0668
2001.2533	371.0719
2001.256	371.077
2001.2587	371.0822
2001.2615	371.0873
2001.2642	371.0925
2001.2669	371.0976
2001.2697	371.1028
2001.2724	371.1079
2001.2752	371.1131
2001.2779	371.1182
2001.2806	371.1234
2001.2834	371.1286
2001.2861	371.1341
2001.2888	371.1393
2001.2916	371.1445
2001.2943	371.1498
[Several hundred rows of two-column numeric data omitted here: column 1
increases from 2001.2971 to 2002.8248 in steps of about 0.0027, and
column 2 from 371.155 to 373.4021.]

------------------------------

Message: 104
Date: Tue, 23 Nov 2010 11:14:05 +1300
From: David Scott <d.scott at auckland.ac.nz>
To: Henri Mone <henriMone at gmail.com>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] cpgram: access data, confidence bands
Message-ID: <4CEAEB2D.3000306 at auckland.ac.nz>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

  On 22/11/10 22:54, Henri Mone wrote:
> Dear R experts, beginners and everyone else,
>
> I'm calculating "cumulative periodogram" using the command "cpgram"
> [1] from the MASS library. Here is a short example with the "lh"
> (hormone level) dataset:
>
>    library(MASS)
>    plot(lh,type="l",ylab="value",xlab="time", main="Hormone Levels (lh)")
>    spectrum(lh, main="Hormone Levels (lh)") # periodogram
>    cpgram(lh, main="Hormone Levels (lh)") # cumul. periodogram
>
> I got following two questions:
>
> 1. The command "cpgram" plots the cumulative periodogram without any
> problem. But I could not figure out any way to access the data of the
> plot (save it in a variable).
> the following command fails (contains no data):
>     >myObject<-cpgram(lh, main="Hormone Levels (lh)")
>     >summary(myObject)
>         Length  Class   Mode
>              0   NULL   NULL
>
> Is there an easy way to access the data of the  cumulative
> periodogram, or do I need to rewrite the "cpgram" function?
>
You need to rewrite cpgram. Have a look at the last line of the 
function; it is
invisible()
meaning it doesn't return anything. It is easy to change: replace the 
last line with, for example,

return(list(pgram = y, cum = cumsum(y)/sum(y)))

or whatever you actually want to return.
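Alternatively, rather than editing MASS's source, a hedged sketch that
computes comparable quantities directly from spec.pgram() (the taper value
mirrors cpgram's default; this approximates, rather than reproduces,
cpgram's internal computation):

```r
## Sketch: return the data behind a cumulative periodogram.
## Assumes spec.pgram()'s periodogram is close enough to what cpgram()
## computes internally -- check against the MASS source if exactness matters.
cpgram_data <- function(x, taper = 0.1) {
  sp <- spec.pgram(x, taper = taper, detrend = TRUE, plot = FALSE)
  y  <- sp$spec
  list(freq = sp$freq, pgram = y, cum = cumsum(y) / sum(y))
}

out <- cpgram_data(lh)   # lh: hormone-level series from the datasets package
str(out)
```

The `cum` component is the normalised cumulative periodogram, i.e. the
curve that cpgram() draws.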


> 2. The "cpgram" function plots with the default options the 95%
> confidence bands in the plot. The confidence band  are defined such
> that in 95% of the cases the true value will lie inside the bands. For
> most cases which I tested the cumulative periodogram is outside the
> confidence band. Does "cpgram" plot the confidence band of the the
> cumulative periodogram or for the periodogram (I think it is the
> cumulative periodigram, is this correct?). How should the confidence
> band in "cpgram" be interpreted? Some more description on this would
> be great.
>
>
It is the cumulative periodogram (as the name suggests). What did you 
test? Only a white noise process should stay inside the confidence 
bands. There is some information about the use of the cumulative 
periodogram in Venables and Ripley's book for which cpgram was written 
(but admittedly not a lot).

David Scott


_________________________________________________________________
David Scott	Department of Statistics
		The University of Auckland, PB 92019
		Auckland 1142,    NEW ZEALAND
Phone: +64 9 923 5055, or +64 9 373 7599 ext 85055
Email:	d.scott at auckland.ac.nz,  Fax: +64 9 373 7018

Director of Consulting, Department of Statistics



------------------------------

Message: 105
Date: Mon, 22 Nov 2010 17:21:09 -0500
From: David Winsemius <dwinsemius at comcast.net>
To: "Kenney, Colleen T CTR USA AMC" <colleen.t.kenney at us.army.mil>
Cc: r-help at r-project.org
Subject: Re: [R] Probit Analysis: Confidence Interval for the LD50
	using	Fieller's and Heterogeneity (UNCLASSIFIED)
Message-ID: <99B2DFBB-F6B5-4555-BBF7-4D5261612C4E at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes


On Nov 22, 2010, at 2:24 PM, Kenney, Colleen T CTR USA AMC wrote:

> Classification:  UNCLASSIFIED
> Caveats: NONE
>
> A similar question has been posted in the past but never answered.  My
> question is this: for probit analysis, how do you program a 95%
> confidence interval for the LD50 (or LC50, ec50, etc.), including a
> heterogeneity factor as written about in "Probit Analysis" by
> Finney(1971)?  The heterogeneity factor comes into play through the
> chi-squared test for homogeneity and is equal to h=chi^2/(k-2),  
> where k
> is the number of doses and k-2 are the degrees of freedom.
>
> I have done a lot of research on this and really appreciate any help
[[elided Yahoo spam]]

The reason it may not have had a reply ... assuming the question was  
homomorphic to this one ... is that there is no R content in this  
question. You may want to post on:

http://stats.stackexchange.com/

But I suggest you be somewhat more descriptive than just citing a  
40-year-old text.

-- 

David Winsemius, MD
West Hartford, CT



------------------------------

Message: 106
Date: Mon, 22 Nov 2010 14:28:56 -0800
From: Dennis Murphy <djmuser at gmail.com>
To: "Kenney, Colleen T CTR USA AMC" <colleen.t.kenney at us.army.mil>
Cc: r-help at r-project.org
Subject: Re: [R] Probit Analysis: Confidence Interval for the LD50
	using Fieller's and Heterogeneity (UNCLASSIFIED)
Message-ID:
	<AANLkTinRodwVPCrXm8SnfAfcn3QQ6-qJg4GnaNZMG0B3 at mail.gmail.com>
Content-Type: text/plain

Hi:

The MASS package has a function dose.p() to produce a CI for ED50, ED90 or
EDp in general (0 < p < 100).  It takes a model object (presumably from a
suitable logistic regression) as input. You could always take the code
already available and adapt it to your situation or you could investigate
one or more of the packages devoted to dose-response models.
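For concreteness, a minimal sketch (the doses and counts below are made
up, not from the original post; the heterogeneity scaling is an assumption
based on the h = chi^2/(k-2) definition quoted in the question, not
behaviour of dose.p itself):

```r
library(MASS)

## Hypothetical dose-response data: 'dead' out of 'n' at each dose.
dose <- c(1, 2, 4, 8, 16)
n    <- rep(50, 5)
dead <- c(2, 10, 26, 40, 48)

fit <- glm(cbind(dead, n - dead) ~ log(dose),
           family = binomial(link = "probit"))

## Estimated log-dose at p = 0.5, with its standard error.
ld50 <- dose.p(fit, p = 0.5)
se   <- attr(ld50, "SE")

## Approximate 95% CI on the dose scale. A Finney-style heterogeneity
## factor could be folded in by multiplying 'se' by sqrt(deviance/df)
## when the fit shows overdispersion (assumption, not dose.p's own doing).
exp(c(ld50) + c(-1.96, 1.96) * c(se))
```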

A useful thing to know in R is the sos package, starting with its primary
function, findFn():

library(sos)
findFn('ED50')

scares up the three major packages related to dose-response models with 40
matches to ED50.

HTH,
Dennis

On Mon, Nov 22, 2010 at 11:24 AM, Kenney, Colleen T CTR USA AMC <
colleen.t.kenney at us.army.mil> wrote:

> Classification:  UNCLASSIFIED
> Caveats: NONE
>
> A similar question has been posted in the past but never answered.  My
> question is this: for probit analysis, how do you program a 95%
> confidence interval for the LD50 (or LC50, ec50, etc.), including a
> heterogeneity factor as written about in "Probit Analysis" by
> Finney(1971)?  The heterogeneity factor comes into play through the
> chi-squared test for homogeneity and is equal to h=chi^2/(k-2), where k
> is the number of doses and k-2 are the degrees of freedom.
>
> I have done a lot of research on this and really appreciate any help
[[elided Yahoo spam]]
>
>
> Classification:  UNCLASSIFIED
> Caveats: NONE
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

	[[alternative HTML version deleted]]



------------------------------

Message: 107
Date: Mon, 22 Nov 2010 14:36:08 -0800 (PST)

To: r-help at r-project.org
Subject: [R] empity value in colnames
Message-ID: <1290465368687-3054564.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


Hi Guys.
I have a matrix which has names in every other column; the other ones are
empty ("")

example
     X10000     X10001     X10002
[1,] "A"    "G" "A"    "G" "G"   
[2,] "G"    "G" "A"    "G" "A"   
[3,] "G"    "G" "A"    "A" "A"   
[4,] "G"    "G" "A"    "A" "A"   
[5,] "A"    "G" "A"    "A" "A"

I am creating another matrix (called "subset" here) which cbinds information
from another matrix (not shown) and a subset of this example matrix above
(let's say columns 1 and 2), and saving it as a .prn file

subset
  Sequence family  clone female  male X10000 V2 X10001 V4
1    40003    400 540003  10005 22055      A  G      A  G
2    40011    400 540011  10005 22055      G  G      A  G
3    40014    400 540014  10005 22055      G  G      A  A
4    40042    400 540042  10005 22055      G  G      A  A
5    40057    400 540057  10005 22055      A  G      A  A


Every time I do it, it creates column names ("V2" and "V4" in bold) where
they should be empty ("").
Do you guys have any clue on how to fix this?

Thanks
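One hedged guess (the object and column positions below are illustrative
reconstructions, not your actual code): cbind()/data.frame() fill in
default names where a column name is missing, but you can blank the names
yourself just before writing, since write.table() does not re-mangle them:

```r
## Illustrative: a combined data frame in which two auto-generated
## names (positions 7 and 9, like V2/V4 in the example) should be empty.
subs <- data.frame(Sequence = c(40003, 40011), family = 400,
                   clone = c(540003, 540011), female = 10005,
                   male = 22055, X10000 = c("A", "G"), V2 = "G",
                   X10001 = "A", V4 = "G")

colnames(subs)[c(7, 9)] <- ""          # blank the unwanted names
write.table(subs, "subset.prn", quote = FALSE, row.names = FALSE)
```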



-- 
View this message in context:
http://r.789695.n4.nabble.com/empity-value-in-colnames-tp3054564p3054564.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 108
Date: Mon, 22 Nov 2010 19:59:04 -0300
From: Kjetil Halvorsen <kjetilbrinchmannhalvorsen at gmail.com>
To: Mike Marchywka <marchywka at hotmail.com>
Cc: r-help at r-project.org, ligges at statistik.tu-dortmund.de
Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
Message-ID:
	<AANLkTi=nDYVu_cuYuH3_0HOe1T99B_cy3ZA_VrgG_FnE at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

see  below.

On Mon, Nov 22, 2010 at 12:57 PM, Mike Marchywka <marchywka at hotmail.com>
wrote:
> ----------------------------------------
>> Date: Mon, 22 Nov 2010 12:41:06 -0300
>> Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
>> From: kjetilbrinchmannhalvorsen at gmail.com
>> To: marchywka at hotmail.com
>> CC: ligges at statistik.tu-dortmund.de; r-help at r-project.org
>>
>> see below.
>>
>> On Mon, Nov 22, 2010 at 12:13 PM, Mike Marchywka wrote:
>> >
>> >
>> Thanks. Will try. Really, I tried yesterday to run R under gdb within
>> emacs, but it didn't work out. What I did (in emacs 23) was typing
>> Ctrl-u M-x R
>> and then enter the option
>> --debugger=gdb
>>
[[elided Yahoo spam]]
>>
>> Kjetil
>
> I rarely use gdb, but it did seem to work with R. I executed gdb from a
> cygwin window, and IIRC Ctrl-C worked fine as it broke into the debugger.
> I guess you could try that: start gdb and attach, or invoke R from gdb.
>
>

OK, thanks. I started R with
R --debugger=gdb
in a shell, outside emacs. Then it works.

I did some unsystematic sampling with Ctrl-C. Most of the time it was stuck
in memory.c, apparently doing garbage collection.
Other files which occurred were unique.c and duplicate.c.

kjetil


>



------------------------------

Message: 109
Date: Mon, 22 Nov 2010 18:13:17 -0500
From: David Winsemius <dwinsemius at comcast.net>

Cc: r-help at r-project.org
Subject: Re: [R] empity value in colnames
Message-ID: <AB076202-FC9B-40FA-891B-9420CCF73EB8 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes


On Nov 22, 2010, at 5:36 PM, M.Ribeiro wrote:

>
> Hi Guys.
> I have a matrix which has names in every other column; the other ones
> are empty ("")
>
> example
>     X10000     X10001     X10002
> [1,] "A"    "G" "A"    "G" "G"
> [2,] "G"    "G" "A"    "G" "A"
> [3,] "G"    "G" "A"    "A" "A"
> [4,] "G"    "G" "A"    "A" "A"
> [5,] "A"    "G" "A"    "A" "A"
>
> I am creating another matrix

How?

> (I called here subset) which cbinds information
> from another matrix (not shown) and a subset of this example matrix  
> above

Off hand, I would guess that this "other matrix", of undescribed  
structure, is really a dataframe and you are actually calling  
cbind.data.frame

> (let's say column 1 and 2) and save as .prn file
>
> subset
>  Sequence family  clone female  male X10000 V2 X10001 V4
> 1    40003    400 540003  10005 22055      A  G      A  G
> 2    40011    400 540011  10005 22055      G  G      A  G
> 3    40014    400 540014  10005 22055      G  G      A  A
> 4    40042    400 540042  10005 22055      G  G      A  A
> 5    40057    400 540057  10005 22055      A  G      A  A
>
>
> Everytime I do it, it creates a column name ("V2" and "V4" in bold)  
> where it
> should be empty ("").

That's why I asked "How?"

> Do you guys  have any clue on how

Exactly. ............. How? You show us your "how" and we will show  
you ours.

> to to this?
>
> Thanks
>
>


David Winsemius, MD
West Hartford, CT



------------------------------

Message: 110
Date: Mon, 22 Nov 2010 18:17:30 -0500
From: Mike Marchywka <marchywka at hotmail.com>
To: <kjetilbrinchmannhalvorsen at gmail.com>
Cc: r-help at r-project.org, ligges at statistik.tu-dortmund.de
Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
Message-ID: <BLU113-W16E5F5F3BA4752BD2EDCAEBE3D0 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"

----------------------------------------
> Date: Mon, 22 Nov 2010 19:59:04 -0300
> Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
> From: kjetilbrinchmannhalvorsen at gmail.com
> To: marchywka at hotmail.com
> CC: ligges at statistik.tu-dortmund.de; r-help at r-project.org
>
> see below.
>
> On Mon, Nov 22, 2010 at 12:57 PM, Mike Marchywka wrote:
> >
> >
> >
> >
> >
> >
> >
> >
> > ----------------------------------------
> >> Date: Mon, 22 Nov 2010 12:41:06 -0300
> >> Subject: Re: [R] ?summaryRprof running at 100% cpu for one hour ...
> >> From: kjetilbrinchmannhalvorsen at gmail.com
> >> To: marchywka at hotmail.com
> >> CC: ligges at statistik.tu-dortmund.de; r-help at r-project.org
> >>
> >> see below.
> >>
> >> On Mon, Nov 22, 2010 at 12:13 PM, Mike Marchywka wrote:
> >> >
> >> >
> >> Thanks. Will try. Really, I tried yesterday, to run R under gdb within
> >> emacs, but it did'nt work out. What I did (in emacs 23) was, typing
> >> Ctrl-u M-x R
> >> and then enter the option
> >> --debugger=gdb
> >>
[[elided Hotmail spam]]
> >>
> >> Kjetil
> >
> > I rarely use gdb but it did seem to work with R but I executed gdb from
> > cygwin windoh and IIRC ctrl-C worked fine as it broke into debugger.
> > I guess you could try that- start gdb and attach or invoke R from gdb.
> >
> >
>
> OK, thanks. I started R with
> R --debugger=gdb
> in a shell, outside emacs. then it works.
>
> I did some unsystematic sampling with Ctrl-C. Most of the time it was
stuck
> in memory.c, apparently doing garbage collection.
> Other files which occured was unique.c, duplicate.c
>

You may want to try the R-devel list for better help now, but presumably
you can get symbols somewhere and a readable stack trace. Floundering in
memory management would be consistent with high CPU usage, since as far
as the OS is concerned the process is runnable. In Java you see this sort
of thing with lots of temporary objects being created. If it is gc, then
making lots of garbage and then needing a big contiguous area could slow
things down a lot. Once you are pretty sure you stopped it in a hotspot,
you can try stepping in and out of things and see if anything looks odd.

One other exploratory thing to try (this may or may not work with R and
your problem) is to get a snapshot of the memory and then use a utility
like "strings" to see if there is any indication of what is going on. If
objects are annotated at all, something may jump out, but it's hard to know.


> kjetil
>
>


------------------------------

Message: 111
Date: Mon, 22 Nov 2010 15:18:40 -0800
From: Kendric Wang <kendricw at interchange.ubc.ca>
To: r-help at r-project.org
Subject: [R] Sporadic errors when training models using CARET
Message-ID:
	<AANLkTimKokkPu6L4r1dmaunSvd3PJ-dAjed_CKZ84RWD at mail.gmail.com>
Content-Type: text/plain

Hi. I am trying to construct an svmLinear model using the "caret" package
(see code below). Using the same data, without changing any settings,
sometimes it constructs the model successfully, and sometimes I get an
index-out-of-bounds error. Is this expected behaviour? I would appreciate
any insights into this issue.


Thanks.
~Kendric


> train.y
 [1] S S S S R R R R R R R R R R R R R R R R R R R R
Levels: R S

> train.x
        m1      m2
1   0.1756  0.6502
2   0.1110 -0.2217
3   0.0837 -0.1809
4  -0.3703 -0.2476
5   8.3825  2.8814
6   5.6400 12.9922
7   7.5537  7.4809
8   3.5005  5.7844
9  16.8541 16.6326
10  9.1851  8.7814
11  1.4405 11.0132
12  9.8795  2.6182
13  8.7151  4.5476
14 -0.2092 -0.7601
15  3.6876  2.5772
16  8.3776  5.0882
17  8.6567  7.2640
18 20.9386 20.1107
19 12.2903  4.7864
20 10.5920  7.5204
21 10.2679  9.5493
22  6.2023 11.2333
23 -5.0720 -4.8701
24  6.6417 11.5139

> svmLinearGrid <- expand.grid(.C=0.1)
> svmLinearFit <- train(train.x, train.y, method="svmLinear",
tuneGrid=svmLinearGrid)
Fitting: C=0.1
Error in indexes[[j]] : subscript out of bounds

> svmLinearFit <- train(train.x, train.y, method="svmLinear",
tuneGrid=svmLinearGrid)
Fitting: C=0.1
maximum number of iterations reached 0.0005031579 0.0005026807
maximum number of iterations reached 0.0002505857 0.0002506714
Error in indexes[[j]] : subscript out of bounds

> svmLinearFit <- train(train.x, train.y, method="svmLinear",
tuneGrid=svmLinearGrid)
Fitting: C=0.1
maximum number of iterations reached 0.0003270061 0.0003269764
maximum number of iterations reached 7.887867e-05 7.866367e-05
maximum number of iterations reached 0.0004087571 0.0004087466
Aggregating results
Selecting tuning parameters
Fitting model on full training set


R version 2.11.1 (2010-05-31)
x86_64-redhat-linux-gnu

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=C              LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] splines   stats     graphics  grDevices utils     datasets  methods
[8] base

other attached packages:
 [1] kernlab_0.9-12  pamr_1.47       survival_2.35-8 cluster_1.12.3
 [5] e1071_1.5-24    class_7.3-2     caret_4.70      reshape_0.8.3
 [9] plyr_1.2.1      lattice_0.18-8

loaded via a namespace (and not attached):
[1] grid_2.11.1


-- 
MSc. Candidate
CIHR/MSFHR Training Program in Bioinformatics
University of British Columbia

	[[alternative HTML version deleted]]



------------------------------

Message: 112
Date: Mon, 22 Nov 2010 15:20:56 -0800 (PST)
From: Phil Spector <spector at stat.berkeley.edu>
To: madr <madrazel at interia.pl>
Cc: r-help at r-project.org
Subject: Re: [R] I need a very specific unique like function and I
	don't know even how to properly call this
Message-ID:
	<alpine.DEB.2.00.1011221517010.14043 at springer.Berkeley.EDU>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

Given a vector, x, with l = length(x), we can test whether the value
above each element is equal to it with

    abv = c(FALSE,x[-l] == x[-1])

and whether the value below is equal to it with

    blw = c(x[-l] == x[-1],FALSE)

So, for your problem (with l = nrow(dat)):

> abv = c(FALSE,dat[,2][-l] == dat[,2][-1])
> blw = c(dat[,2][-l] == dat[,2][-1],FALSE)
> dat[!(abv & blw),]
      [,1] [,2]
[1,]    3    7
[2,]    6    5
[3,]    5    5
[4,]    8    4
[5,]    7    4
[6,]    0    6
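Putting the pieces together as a self-contained script (data taken from
the quoted question; l is the row count, which the fragments above assume
is already defined):

```r
dat <- matrix(c(3,7, 6,5, 7,5, 3,5, 7,5, 5,5, 8,4, 2,4, 7,4, 0,6),
              ncol = 2, byrow = TRUE)
l <- nrow(dat)

abv <- c(FALSE, dat[-l, 2] == dat[-1, 2])  # row above has same col-2 value
blw <- c(dat[-l, 2] == dat[-1, 2], FALSE)  # row below has same col-2 value

dat[!(abv & blw), ]   # drop rows whose both neighbours match in column 2
```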

 					- Phil Spector
 					 Statistical Computing Facility
 					 Department of Statistics
 					 UC Berkeley
 					 spector at stat.berkeley.edu


On Mon, 22 Nov 2010, madr wrote:

>
> consider this matrix:
>
>      [,1] [,2]
> [1,]    3   7
> [2,]    6   5
> [3,]    7   5
> [4,]    3   5
> [5,]    7   5
> [6,]    5   5
> [7,]    8   4
> [8,]    2   4
> [9,]    7   4
> [10,]    0   6
>
> I need to delete all rows where column 2 above and below has the same
value,
> so the effect would be:
>
>      [,1] [,2]
> [1,]    3   7
> [2,]    6   5
> [6,]    5   5
> [7,]    8   4
> [9,]    7   4
> [10,]    0   6
>
> is there a built-in function for that kind of operation, or must I write
> one from scratch?
> Is there a name for that kind of operation ?
> -- 
> View this message in context:
http://r.789695.n4.nabble.com/I-need-a-very-specific-unique-like-function-and-I-don-t-know-even-how-to-properly-call-this-tp3054427p3054427.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



------------------------------

Message: 113
Date: Mon, 22 Nov 2010 18:43:41 -0500
From: David Winsemius <dwinsemius at comcast.net>
To: shubha <shuba.pandit at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] how do remove those predictor which have p value
	greater	than 0.05 in GLM?
Message-ID: <0911E03A-8277-4731-8AF4-E80777201E28 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes


On Nov 22, 2010, at 5:10 PM, shubha wrote:

>
> Thanks for the response, Frank.
> I am not saying that I want to delete variables because of p>0.5.

Presumably that was meant to be p > 0.05

> But my
> concern was: I am using backward stepwise logistic regression, it  
> keeps the
> variables in the final model if the variable significantly  
> contributing in
> the model.

Isn't that what backwards selection does?

> Otherwise, it should not be in the final model.

You're sure? How did you arrive at that conclusion?

> Using other software, they give correct results.

Correct? Please describe your standards for correctness.

> But R, did not. I want
> those variables if p<0.05, otherwise exclude from the model.

But you said above that was _not_ what you wanted. I'm confused about  
your posture here.

> If you include
> that variables, it will affect the Log likelihood ratio and AIC.

Yes, perhaps it will, ... so is the standard a p-value or a penalized  
estimate? When you toss out a variable, you are deluding yourself if you  
then ignore that act of deletion when specifying your degrees of freedom  
for the multiple hypothesis testing effort you have been conducting.

> I want to
> change a P-value criterion <=0.05 in the model.  Any suggestions.

More reading. Less reliance on canned software.


> thanks
>
> -- 


David Winsemius, MD
West Hartford, CT



------------------------------

Message: 114
Date: Mon, 22 Nov 2010 16:22:11 -0800
From: Peter Ehlers <ehlers at ucalgary.ca>
To: Marcin Gomulka <mrgomel at gmail.com>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] plotting a timeline
Message-ID: <4CEB0933.30406 at ucalgary.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 2010-11-20 14:26, Marcin Gomulka wrote:
> I was trying to recreate this kind of timeline plot:
> http://www.vertex42.com/ExcelArticles/create-a-timeline.html
>
> As you can see in their excel example, the events are nicely placed out on
> both sides of the timeline axis.
>
> AFAIK there is no function to do this nicely in R-project. Furthermore,
> graphics and lattice packages are unable to draw the x-axis in the middle
> of the plot. (datapoints cannot be plotted below the axis, as in the Excel
> example).
>
> My question: Is there a function to draw the x-axis inside the plot? (at a
> certain vertical position?) Is there a function for the whole timeline
> plot that I do not know about?
>
> I tried to visually replicate the plot using additional elements to create
> my own x-axis (code below). I have placed the x-axis ticks on a fixed
> y-height (-0.1 and -0.05), but this might look bad with a different
> dataset or at other image proportions. I'd rather do this with a dedicated
> package function ( like axis() ).
>

It wouldn't be difficult to write such a function.
Here's some code to get you started:

  with(the_data, {
    plot(eventtime, impact, type="h", axes=FALSE,
         ann=FALSE, col="grey", ylim=c(-.7,1.2))
    points(eventtime, impact, pch=95, font=5, cex=2, col=4)
    text(eventtime, impact, label, pos = 1 + 2*(impact > 0))
  })
  abline(h=0, lwd=2)
  axis(1, pos=0, lwd=2, lwd.ticks=1)

Peter Ehlers

> --
> mrgomel
> -----------------------
> Below is my example code in R:
>
>
> the_data<-
> structure(list(eventtime = c(1914L, 1917L, 1918L, 1939L, 1945L,
> 1963L, 1989L, 2001L, 2003L), impact = c(1, -.5, 0.8, 1, 0.8, 0.5,
> -.5, 0.5, 1), label = structure(c(8L, 7L, 4L, 9L, 5L, 2L, 3L,
> 1L, 6L), .Label = c("9/11", "Cuban crisis", "end of communism",
> "end WW1", "end WW2", "Iraq war", "start of communism", "WW1",
> "WW2"), class = "factor")), .Names = c("eventtime", "impact",
> "label"), class = "data.frame", row.names = c(NA, -9L))
>
>
> plot(the_data$eventtime, the_data$impact, type="h", frame.plot=FALSE,
>      axes=FALSE, xlab="", ylab="", col="grey")
> text(the_data$eventtime,the_data$impact, the_data$label)
> #axis(1)
> abline(h=0,lwd=2)
> text(axTicks(1),-0.1, axTicks(1))
> points(axTicks(1),rep(-0.05,length(axTicks(1))), type="h")
>
> 	[[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 115
Date: Tue, 23 Nov 2010 01:36:47 +0100
From: "Sebastian Rudnick" <rudnick at igb-berlin.de>
To: r-help at r-project.org
Subject: [R] Gap between graph and axis
Message-ID: <20101123002430.M57525 at igb-berlin.de>
Content-Type: text/plain;	charset=utf-8

Hi everyone!

I want to plot some precipitation data via plot(type="h"). Unfortunately there
is always a gap between the bars and the x-axis, so that the bars reach into
the "negative area" below 0 on the y-axis, which is very misleading. The
ylim parameter is set to 0 and the max of the precipitation, and the min value
of the precipitation is 0 as well.
I tried to fix this via the fig parameter, but I have no idea how to do it at
all.
I hope someone can help.

Thanks a lot,

Sebastian



------------------------------

Message: 116
Date: Tue, 23 Nov 2010 11:52:29 +1100
From: <Bill.Venables at csiro.au>
To: <rudnick at igb-berlin.de>, <r-help at r-project.org>
Subject: Re: [R] Gap between graph and axis
Message-ID:
	<1BDAE2969943D540934EE8B4EF68F95FB27A44FC0D at EXNSW-MBX03.nexus.csiro.
au>
	
Content-Type: text/plain; charset="us-ascii"

perhaps you need something like this.

par(yaxs = "i")
plot(runif(10), type = "h", ylim = c(0, 1.1))



-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On
Behalf Of Sebastian Rudnick
Sent: Tuesday, 23 November 2010 10:37 AM
To: r-help at r-project.org
Subject: [R] Gap between graph and axis

Hi everyone!

I want to plot some precipitation data via plot(type="h"). Unfortunately there
is always a gap between the bars and the x-axis, so that the bars reach into
the "negative area" below 0 on the y-axis, which is very misleading. The
ylim parameter is set to 0 and the max of the precipitation, and the min value
of the precipitation is 0 as well.
I tried to fix this via the fig parameter, but I have no idea how to do it at
all.
I hope someone can help.

Thanks a lot,

Sebastian

______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 117
Date: Mon, 22 Nov 2010 19:01:16 -0600
From: Erin Hodgess <erinm.hodgess at gmail.com>
To: R help <r-help at stat.math.ethz.ch>
Subject: [R]  R on Androids?
Message-ID:
	<AANLkTik5jP2cm5Gy0cDRvbkQURoQVc0vgYmE-hDYWG_B at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Dear R People:

Does R run on the Android, yet, please?  I'm about 99% sure that it
does not, but thought that I would double check.

Thanks,
Erin


-- 
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: erinm.hodgess at gmail.com



------------------------------

Message: 118
Date: Mon, 22 Nov 2010 20:07:08 -0500
From: watashi at post.com
To: yuliya.rmail at gmail.com, r-help at r-project.org
Subject: Re: [R] how to loop through variables in R?
Message-ID: <8CD58C1E2D65661-1718-A0F at web-mmc-m05.sysops.aol.com>
Content-Type: text/plain

-----Original Message-----

From: Yuliya Matveyeva <yuliya.rmail at gmail.com>
To: watashi at post.com
Sent: Mon, Nov 22, 2010 4:32 pm
Subject: Re: [R] how to loop through variables in R?


If you want a name-specific loop, assign names to your variables after
inserting them into the data.frame, like this:
colnames(df) <- c("var1","var23","var456","var44",...)
for (nam in colnames(df)) {
 myfunction(df[[nam]])
}
Data.frames support access by names.


Unfortunately, in this sense I would have to type the names in 1000 times...
isn't there any function that allows retrieving and assigning all the columns
automatically? read.table in "the R intro manual" just briefly says it can
read a table, but then there's no follow-up about how to access the data....
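For what it's worth, a minimal sketch of visiting every column without typing
each name (illustrative only; `myfunction` stands in for whatever per-column
analysis is intended):

```r
## Read the table; header = TRUE keeps the original column names.
df <- read.table("D:\\Working\\Statics.txt", header = TRUE)

## lapply visits every column, whatever the names are:
results <- lapply(df, myfunction)

## or, when the column name itself is needed inside the loop:
for (nam in names(df)) {
  cat("processing", nam, "\n")
  myfunction(df[[nam]])
}
```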



	[[alternative HTML version deleted]]



------------------------------

Message: 119
Date: Mon, 22 Nov 2010 17:20:22 -0800 (PST)

To: r-help at r-project.org
Subject: [R] How to start default browser on R
Message-ID: <116352.18459.qm at web113205.mail.gq1.yahoo.com>
Content-Type: text/plain; charset=utf-8

Hi folks,

Win7 64 bit
IE 64 bit

How to start IE on R?  TIA

B.R.
Stephen L






------------------------------

Message: 120
Date: Mon, 22 Nov 2010 17:23:29 -0800 (PST)

To: R help <r-help at stat.math.ethz.ch>
Subject: [R] question on "uniCox"
Message-ID: <741278.83327.qm at web30803.mail.mud.yahoo.com>
Content-Type: text/plain; charset=utf-8

Hi list,

I'm testing out the uniCox R package (version 1.0, on R 2.12.0, WinXP).

When I ran uniCox on my data, there are always some NAs in the beta matrix,
which in turn cause problems in the uniCoxCV call.  I don't see anything wrong
with the corresponding data (e.g. no NAs), and if I fit a univariate Cox model,
the features that give NA beta estimates are actually pretty significant.
Could you please let me know what happened and how to avoid this?

I've attached the outputs of the function calls below.

Thank you very much!


...Tao


> a <- uniCox(x=t(dat.ave.train.base), y=sampleinfo.ave.train.base$tm2dthr,
>   status=sampleinfo.ave.train.base$censrdth)
lambda value  1 out of  20
lambda value  2 out of  20
lambda value  3 out of  20
lambda value  4 out of  20
lambda value  5 out of  20
lambda value  6 out of  20
lambda value  7 out of  20
lambda value  8 out of  20
lambda value  9 out of  20
lambda value  10 out of  20
lambda value  11 out of  20
lambda value  12 out of  20
lambda value  13 out of  20
lambda value  14 out of  20
lambda value  15 out of  20
lambda value  16 out of  20
lambda value  17 out of  20
lambda value  18 out of  20
lambda value  19 out of  20
lambda value  20 out of  20
5  betas missing

> aa <- uniCoxCV(a, x=t(dat.ave.train.base), y=sampleinfo.ave.train.base$tm2dthr,
>   status=sampleinfo.ave.train.base$censrdth)
FOLD= 1
lambda value  1 out of  20
lambda value  2 out of  20
lambda value  3 out of  20
lambda value  4 out of  20
lambda value  5 out of  20
lambda value  6 out of  20
lambda value  7 out of  20
lambda value  8 out of  20
lambda value  9 out of  20
lambda value  10 out of  20
lambda value  11 out of  20
lambda value  12 out of  20
lambda value  13 out of  20
lambda value  14 out of  20
lambda value  15 out of  20
lambda value  16 out of  20
lambda value  17 out of  20
lambda value  18 out of  20
lambda value  19 out of  20
lambda value  20 out of  20
3  betas missing
1
Error in coxph(Surv(y[ii], status[ii]) ~ eta.new) :
  No (non-missing) observations

> a[[2]][(rowSums(is.na(a[[2]])))>0,]
         [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13]
[1,]  92.6641  NaN  NaN  NaN    0    0    0    0    0     0     0     0     0
[2,]      NaN    0    0    0    0    0    0    0    0     0     0     0     0
[3,] 567.3650  NaN    0    0    0    0    0    0    0     0     0     0     0
     [,14] [,15] [,16] [,17] [,18] [,19] [,20]
[1,]     0     0     0     0     0     0     0
[2,]     0     0     0     0     0     0     0
[3,]     0     0     0     0     0     0     0






------------------------------

Message: 121
Date: Mon, 22 Nov 2010 17:28:57 -0800
From: Spencer Graves <spencer.graves at structuremonitoring.com>
To: Georg Otto <gwo at well.ox.ac.uk>
Cc: r-help at stat.math.ethz.ch
Subject: Re: [R] Find in R and R books
Message-ID: <4CEB18D9.6030305 at structuremonitoring.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

       Other people like R Site Search 
(http://search.r-project.org/nmz.html), which is available via the 
standard R function "RSiteSearch".


       For me, the fastest literature search on virtually anything 
statistical is the "findFn" function in the "sos" package.  
(Disclaimer:  I'm the lead author of that package, so I may be biased.)  
"findFn" sorts search results to put the package with the most matches 
first.  The print method opens a table of the results in a web browser 
with hot links to the individual matches.  "sos" comes with a vignette, 
which includes an example of the "writeFindFn2xls" function.  This 
writes a "findFn" object to an Excel file with two sheets:  The second 
is all the matches found.  The first is a summary of the packages found 
with extra information not available via RSiteSearch.
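For example (a minimal illustration; the sos package must first be installed
from CRAN):

```r
library(sos)                            # install.packages("sos") if needed
hits <- findFn("martingale residual")   # search help pages across CRAN
hits                                    # print method opens results in a browser
writeFindFn2xls(hits)                   # optionally write the two-sheet Excel file
```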


       Hope this helps.
       Spencer


On 11/22/2010 3:19 AM, Georg Otto wrote:

>
>
>> Also, when I try to search in Google using, for example, the word R inside
>> the search term, I get very few results, as the R confuses the search engine.
>> When I was looking for something in Matlab it was of course easier to get
>> results, as the search engine performs better.
>> What are your tricks when you want to find some function that provides
>> some functionality?
> To search R-specific sites the best place to go is this one:
>
> http://www.rseek.org/
>
> Cheers,
>
> Georg
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



------------------------------

Message: 122
Date: Mon, 22 Nov 2010 17:33:22 -0800 (PST)

To: Terry Therneau <therneau at mayo.edu>
Cc: r-help at r-project.org, dieter.menne at menne-biomed.de,
	r_tingley at hotmail.com
Subject: Re: [R] calculating martingale residual on new data using
	"predict.coxph"
Message-ID: <616997.31494.qm at web30804.mail.mud.yahoo.com>
Content-Type: text/plain; charset=us-ascii

Thank you, Terry!




----- Original Message ----
> From: Terry Therneau <therneau at mayo.edu>

> Cc: r-help at r-project.org; dieter.menne at menne-biomed.de; r_tingley at hotmail.
com
> Sent: Mon, November 22, 2010 6:11:15 AM
> Subject: Re:  calculating martingale residual on new data using 
"predict.coxph"
> 
> This feature has been added in survival 2.36-1, which is now on CRAN.
> (2.36-2  should appear in another day or so)
>      Terry  T.
> 
> ---------begin included message --------
> I was trying to use "predict.coxph" to calculate martingale residuals on
> a test data; however, as pointed out before
> 
> http://tolstoy.newcastle.edu.au/R/e4/help/08/06/13508.html
> 
> predict(mycox1,  newdata, type="expected") is not implemented yet.  
> 
>



------------------------------

Message: 123
Date: Mon, 22 Nov 2010 17:36:18 -0800 (PST)

To: Frank Harrell <f.harrell at vanderbilt.edu>, r-help at r-project.org
Subject: Re: [R] calculating martingale residual on new data using
Message-ID: <562346.4353.qm at web30805.mail.mud.yahoo.com>
Content-Type: text/plain; charset=us-ascii

[[elided Yahoo spam]]

...Tao



----- Original Message ----
> From: Frank Harrell <f.harrell at vanderbilt.edu>
> To: r-help at r-project.org
> Sent: Sun, November 21, 2010 5:49:36 AM
> Subject: Re: [R] calculating martingale residual on new data using
> 
> 
> The tendency is to use residual-like diagnostics on the entire dataset that
> was available for model development.  For test data we typically run
> predictive accuracy analyses.  For example, one of the strongest validations
> is to show, in a high-resolution calibration plot, that absolute predictions
> (e.g., probability of survival at 2 years) are accurate.
> 
> Frank
> 
> 
> -----
> Frank Harrell
> Department of  Biostatistics, Vanderbilt University
> -- 
> View this message in context:
> http://r.789695.n4.nabble.com/calculating-martingale-residual-on-new-data-using-predict-coxph-tp3050712p3052377.html
>
> Sent  from the R help mailing list archive at Nabble.com.
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting  guide
http://www.R-project.org/posting-guide.html
> and provide commented,  minimal, self-contained, reproducible code.
>



------------------------------

Message: 124
Date: Mon, 22 Nov 2010 17:36:54 -0800 (PST)

To: David Winsemius <dwinsemius at comcast.net>
Cc: r-help at r-project.org, dieter.menne at menne-biomed.de,
	r_tingley at hotmail.com
Subject: Re: [R] calculating martingale residual on new data using
	"predict.coxph"
Message-ID: <212275.29986.qm at web30802.mail.mud.yahoo.com>
Content-Type: text/plain; charset=us-ascii

[[elided Yahoo spam]]

...Tao




----- Original Message ----
> From: David Winsemius <dwinsemius at comcast.net>

> Cc: r-help at r-project.org; dieter.menne at menne-biomed.de; r_tingley at hotmail.
com
> Sent: Sun, November 21, 2010 5:50:31 AM
> Subject: Re: [R] calculating martingale residual on new data using 
>"predict.coxph"
> 
> 
> On Nov 21, 2010, at 3:42 AM, Shi, Tao wrote:
> 
> > Hi David,
> > 
> > Thanks, but I don't quite follow your examples below.
> 
> I wasn't  really sure they did anything useful anyway.
> 
> > The residuals  you
> > calculated are still based on the training data from which your cox
model 
>was
> > generated.  I'm interested in the testing  data.
> 
>   The survest function in rms and the survfit function in survival will
> calculate survival probabilities given a model and newdata, and depending
> on your definition of "residual" you could take the difference between the
> calculation and validation data. That must be what happens (at least at a
> gross level of description) when Harrell runs his validate function on his
> cph models in the rms/Design package, although I don't know if something
> that you would recognize as a martingale residual is an identifiable
> intermediate.
>
>   If you are using survfit, it would appear from my reading that you would
> need to set the "individual" parameter to TRUE. I'm assuming you planned to
> calculate these (1 - expected) at the event times of the validation cohort
> (which it appears the default method fixes via the censor argument)?
> 
> --David
> 
> > 
> > 
> > Best,
> > 
> >  ...Tao
> > 
> > 
> > 
> > 
> > 
> > ----- Original  Message ----
> >> From: David Winsemius <dwinsemius at comcast.net>
> >>  To: David Winsemius <dwinsemius at comcast.net>

> >> dieter.menne at menne-biomed.de; r_tingley at hotmail.com
> >> Sent:  Fri, November 19, 2010 10:53:26 AM
> >> Subject: Re: [R] calculating  martingale residual on new data using
> >> "predict.coxph"
> >> 
> >> 
> >> On Nov 19, 2010, at 12:50 PM, David Winsemius  wrote:
> >> 
> >>> 
> >>> On  Nov 19, 2010, at  12:32 PM, Shi, Tao wrote:
> >>> 
> >>>> Hi   list,
> >>>> 
> >>>> I was trying to use "predict.coxph" to calculate martingale residuals
> >>>> on a test data, however, as pointed out before
> >>> 
> >>> What about resid(fit)?  It's my reading of Therneau & Grambsch [and of
> >>> help(coxph.object)] that they consider those martingale residuals.
> >> 
> >> The manner in which I _thought_ this would work was to insert some dummy
> >> cases into the original data and then to get residuals by weighting the
> >> cases appropriately. That doesn't seem to be as successful as I imagined:
> >> 
> >>> test1 <-  list(time=c(4,3,1,1,2,2,3,3),  weights=c(rep(1,7), 0),
> >> +                  status=c(1,1,1,0,1,1,0,1),
> >> +                 x=c(0,2,1,1,1,0,0,1),
> >> +                 sex=c(0,0,0,0,1,1,1,1))
> >>> coxph(Surv(time, status) ~ x, test1, weights=weights)$weights
> >> Error in fitter(X, Y, strats, offset, init, control, weights = weights, :
> >>   Invalid weights, must be >0
> >> #  OK then  make it a small number
> >>> test1 <-  list(time=c(4,3,1,1,2,2,3,3),  weights=c(rep(1,7), 0.01),
> >>  +                 status=c(1,1,1,0,1,1,0,1),
> >> +                 x=c(0,2,1,1,1,0,0,1),
> >> +                 sex=c(0,0,0,0,1,1,1,1))
> >>> print(resid( coxph(Surv(time, status) ~ x, test1, weights=weights) ), digits=3)
> >>       1       2       3       4       5       6       7       8
> >> -0.6410 -0.5889  0.8456 -0.1544  0.4862  0.6931 -0.6410  0.0509
> >> Now take out constructed case and  weights
> >> 
> >>> test1 <-  list(time=c(4,3,1,1,2,2,3),
> >> +                 status=c(1,1,1,0,1,1,0),
> >> +                 x=c(0,2,1,1,1,0,0),
> >> +                  sex=c(0,0,0,0,1,1,1))
> >>> print(resid( coxph(Surv(time, status) ~  x  , test1) ) ,digits=3)
> >>     1       2       3      4      5       6       7
> >> -0.632 -0.589  0.846  -0.154  0.486  0.676  -0.632
> >> 
> >> Expecting approximately the same residuals for the first 7 cases but not
> >> really getting it. There must be something about weights in coxph that I
> >> don't understand, unless one-hundredth of a case gets "up indexed" inside
> >> the machinery of coxph?
> >> 
> >> I still think that inserting a single constructed case into a real dataset
> >> of sufficient size ought to be able to yield some sort of estimate, and
> >> only be a minor perturbation, although I must admit I'm having trouble
> >> figuring out... why are we attempting such a maneuver? The notion of
> >> "residuals" around constructed cases makes me statistically suspicious,
> >> although I suppose that is just some sort of cumulative excess/deficit
> >> death fraction.
> >> 
> >>>>  http://tolstoy.newcastle.edu.au/R/e4/help/08/06/13508.html
> >>>> 
> >>>> predict(mycox1, newdata, type="expected") is not  implemented  yet.  
>Dieter
> >>>> suggested to use 'cph'  and 'predict.Design', but  from my reading so

far,
> >> I'm  not
> >>>> sure they can do that.
> >>>> 
> >>>> Do you other ways to calculate martingale residuals on a  new  data?
> >>>> 
[[elided Yahoo spam]]
> >>>> 
> >>>> ...Tao
> >> 
> >>  --David Winsemius, MD
> >> West Hartford, CT
> >> 
> >> 
> > 
> > 
> > 
> 
> David Winsemius, MD
> West Hartford,  CT
> 
>



------------------------------

Message: 125
Date: Mon, 22 Nov 2010 20:43:56 -0500
From: Mike Marchywka <marchywka at hotmail.com>
To: <spencer.graves at structuremonitoring.com>, <gwo at well.ox.ac.uk>
Cc: r-help at stat.math.ethz.ch
Subject: Re: [R] Find in R and R books
Message-ID: <BLU113-W111F8D13F02BDD7AC3C938BE3E0 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"







----------------------------------------
> Date: Mon, 22 Nov 2010 17:28:57 -0800
> From: spencer.graves at structuremonitoring.com
> To: gwo at well.ox.ac.uk
> CC: r-help at stat.math.ethz.ch
> Subject: Re: [R] Find in R and R books
>
> Other people like R Site Search
> (http://search.r-project.org/nmz.html), which is available via the
> standard R function "RSiteSearch".
>
>
> For me, the fastest literature search on virtually anything
> statistical is the "findFn" function in the "sos" package.
> (Disclaimer: I'm the lead author of that package, so I may be biased.)
> "findFn" sorts search results to put the package with the most matches
> first. The print method opens a table of the results in a web browser

Again, I have in the past taken the docs for various things like R, rendered
the HTML to text, and used things like grep to build my own indices.
However, your facility does seem in that line of thought.

Personally I haven't had a problem with Google Scholar, and probably even
CiteSeer would return good hits; R is not a common English word, so I think
Google can make use of it.




> with hot links to the individual matches. "sos" comes with a vignette,
> which includes an example of the "writeFindFn2xls" function. This
> writes a "findFn" object to an Excel file with two sheets: The second
> is all the matches found. The first is a summary of the packages found
> with extra information not available via RSiteSearch.
>
>
> Hope this helps.
> Spencer
>
>
> On 11/22/2010 3:19 AM, Georg Otto wrote:
> > Alaios writes:
> >
> >
> >> Also, when I try to search in Google using, for example, the word R inside
> >> the search term, I get very few results, as the R confuses the search engine.
> >> When I was looking for something in Matlab it was of course easier to get
> >> results, as the search engine performs better.
> >> What are your tricks when you want to find some function that provides
> >> some functionality?
> > To search R-specific sites the best place to go is this one:
> >
> > http://www.rseek.org/
> >
> > Cheers,
> >
> > Georg
> >
> > ______________________________________________
> > R-help at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.
html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 126
Date: Tue, 23 Nov 2010 01:50:28 +0000 (UTC)
From: Ben Bolker <bbolker at gmail.com>
To: r-help at stat.math.ethz.ch
Subject: Re: [R] Fast Two-Dimensional Optimization
Message-ID: <loom.20101123T024136-231 at post.gmane.org>
Content-Type: text/plain; charset=us-ascii

Wonsang You <you <at> ifn-magdeburg.de> writes:

> I have attempted "optim" function to solve a two-dimensional optimization
> problem. It took around 25 second to complete the procedure.
> However, I want to reduce the computation time: less than 7 second. Is
there
> any optimization function in R which is very rapid?

  This is not nearly enough information for us to help. The answer
depends on the characteristics of your objective function.  You may be
able to quadruple the speed of your optimization by coding your objective
function more efficiently in R or by re-coding it in C or C++. You may
be able to choose better starting conditions.  You may be able to pick
an optimization method that is more suitable for your objective function
(see ?optim and the "optimx" package on r-forge).
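As a small illustration of the method choice (a sketch using the standard
Rosenbrock test function, not the poster's actual objective):

```r
## Rosenbrock banana function: a classic 2-D test objective
fr <- function(p) {
  x <- p[1]; y <- p[2]
  100 * (y - x^2)^2 + (1 - x)^2
}

## Compare the default Nelder-Mead simplex with gradient-based BFGS;
## for smooth objectives BFGS typically needs far fewer evaluations.
system.time(o1 <- optim(c(-1.2, 1), fr))                   # Nelder-Mead
system.time(o2 <- optim(c(-1.2, 1), fr, method = "BFGS"))
c(o1$counts["function"], o2$counts["function"])            # evaluation counts
```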

  Ben Bolker



------------------------------

Message: 127
Date: Mon, 22 Nov 2010 21:11:14 -0500
From: watashi at post.com
To: r-help at r-project.org
Subject: Re: [R] how to loop through variables in R?
Message-ID: <8CD58CAD70284B2-1718-DC0 at web-mmc-m05.sysops.aol.com>
Content-Type: text/plain



d<-read.table("D:\\Working\\Statics.txt")

df <- cbind("Q1", "Q2", "Q3", "Q4", "Q5", "Q5A", "Q5B", "Q5C", "Q5D", "Q5E",
"Q5F", "Q5G", "Q6", "Q6A", "Q6B", "Q6C", "Q6D", "Q6E", "Q6F", "Q7", "Q8",
"Q9")
#Than you can loop through them simply by doing:
result <- numeric(length(df))
for (i in 1:(length(df)-1)) {
 result <- chisq.test(table(df[[i]], df[[i+1]]))
}

and then this error comes out:

Error: unexpected '}' in "}"


and how can I redirect the output of the chi-square test to a file instead
of console output?
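For reference, one way the loop might be written so that it parses and writes
its results to a file (an illustrative sketch, not from the original thread;
it assumes the question columns are in the data frame `d` read above):

```r
d <- read.table("D:\\Working\\Statics.txt", header = TRUE)
vars <- c("Q1", "Q2", "Q3", "Q4")    # extend with the remaining column names

sink("chisq-results.txt")            # redirect printed output to a file
for (i in seq_len(length(vars) - 1)) {
  res <- chisq.test(table(d[[vars[i]]], d[[vars[i + 1]]]))
  print(res)                         # print() is needed inside a loop
}
sink()                               # restore console output
```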


	[[alternative HTML version deleted]]



------------------------------

Message: 128
Date: Tue, 23 Nov 2010 15:16:04 +1300
From: David Scott <d.scott at auckland.ac.nz>

Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] How to start default browser on R
Message-ID: <4CEB23E4.8090209 at auckland.ac.nz>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

  On 23/11/10 14:20, Stephen Liu wrote:
> Hi folks,
>
> Win7 64 bit
> IE 64 bit
>
> How to start IE on R?  TIA
>
> B.R.
> Stephen L
>
>
?browseURL

-- 
_________________________________________________________________
David Scott	Department of Statistics
		The University of Auckland, PB 92019
		Auckland 1142,    NEW ZEALAND
Phone: +64 9 923 5055, or +64 9 373 7599 ext 85055
Email:	d.scott at auckland.ac.nz,  Fax: +64 9 373 7018

Director of Consulting, Department of Statistics



------------------------------

Message: 129
Date: Mon, 22 Nov 2010 18:20:35 -0800
From: Spencer Graves <spencer.graves at structuremonitoring.com>
To: Mike Marchywka <marchywka at hotmail.com>
Cc: r-help at stat.math.ethz.ch, gwo at well.ox.ac.uk
Subject: Re: [R] Find in R and R books
Message-ID: <4CEB24F3.3040108 at structuremonitoring.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi, Mike, et al.:


<in line>


On 11/22/2010 5:43 PM, Mike Marchywka wrote:
>
>
>
>
>
> ----------------------------------------
>> Date: Mon, 22 Nov 2010 17:28:57 -0800
>> From: spencer.graves at structuremonitoring.com
>> To: gwo at well.ox.ac.uk
>> CC: r-help at stat.math.ethz.ch
>> Subject: Re: [R] Find in R and R books
>>
>> Other people like R Site Search
>> (http://search.r-project.org/nmz.html), which is available via the
>> standard R function "RSiteSearch".
>>
>>
>> For me, the fastest literature search on virtually anything
>> statistical is the "findFn" function in the "sos" package.
>> (Disclaimer: I'm the lead author of that package, so I may be biased.)
>> "findFn" sorts search results to put the package with the most matches
>> first. The print method opens a table of the results in a web browser
> Again, I have in the past taken the docs for various things like R, rendered
> the HTML to text, and used things like grep to build my own indices.
> However, your facility does seem in that line of thought.
>
> Personally I haven't had a problem with Google Scholar, and probably even
> CiteSeer would return good hits; R is not a common English word, so I think
> Google can make use of it.

Thanks for this.


For me, anything that mixes math with worked examples is vastly superior 
to either alone, because I no longer have to puzzle, sometimes for hours, 
over a single line or page of mathematics:  I can try a variety of 
examples and walk through the code line by line until I understand.  In 
that way, I find R packages much more intelligible than theoretical 
treatises.  This is especially true when the R package comes with a 
vignette or companion documentation with script files working the 
examples (e.g., the "scripts" subdirectories for "nlme" and "fda").


Spencer

>> with hot links to the individual matches. "sos" comes with a vignette,
>> which includes an example of the "writeFindFn2xls" function. This
>> writes a "findFn" object to an Excel file with two sheets: The second
>> is all the matches found. The first is a summary of the packages found
>> with extra information not available via RSiteSearch.
>>
>>
>> Hope this helps.
>> Spencer
>>
>>
>> On 11/22/2010 3:19 AM, Georg Otto wrote:
>>> Alaios writes:
>>>
>>>
>>>> Also, when I try to search in Google using, for example, the word R inside
>>>> the search term, I get very few results, as the R confuses the search engine.
>>>> When I was looking for something in Matlab it was of course easier to get
>>>> results, as the search engine performs better.
>>>> What are your tricks when you want to find some function that provides
>>>> some functionality?
>>> To search R-specific sites the best place to go is this one:
>>>
>>> http://www.rseek.org/
>>>
>>> Cheers,
>>>
>>> Georg
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.
html
>>> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 130
Date: Mon, 22 Nov 2010 17:38:06 -0800 (PST)
From: joeponzio <joe.ponzio at gmail.com>
To: r-help at r-project.org
Subject: [R] using the "apply" method for functions with multiple
	inputs
Message-ID: <1290476286318-3054719.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii


hello r users,

i'm trying to use the apply method on a function with several inputs, but
cannot figure out how to send multiple arguments to the function (not
multiple runs of the same function, but one run of the function including
two variables - each used within the function).

a <- c(1:10,999,999,999)
b <- c(11:20,999,999,999)

tfun <- function(x,y){
  if( (x = 1 & y !=999) || (x > 1 & x < 999 & y == 999) )
    x1 <- 1
  else
    x1 <-0
}

#this doesn't work - gives an error " 'y' is missing
tfilt <- sapply(data.frame(a,b), tfun)

thanks,
joe

-- 
View this message in context:
http://r.789695.n4.nabble.com/using-the-apply-method-for-functions-with-multiple-inputs-tp3054719p3054719.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 131
Date: Tue, 23 Nov 2010 11:05:23 +1000
From: Kere Klein <k.klein at uq.edu.au>
To: "r-help at R-project.org" <r-help at R-project.org>
Subject: [R] permalgorithm
Message-ID:
	<DFD404226A5BC247866AEAE43341B65428367A2CCB at UQEXMB01.soe.uq.edu.au>
Content-Type: text/plain; charset="us-ascii"

To whom may it concern,

What is the proper way to analyse a dataset generated from permalgorithm with
time-varying covariates? It seems to me that interval censoring would be the
choice, but Surv suggests that the event would be regarded as having occurred
at the end of the time interval...

Best wishes,
Kere

Kerenaftali Klein PhD| Biostatistician | Queensland Clinical Trials &
Biostatistics Centre 
The University of Queensland | School of Population Health | Building 33,
Level 1| Princess Alexandra Hospital |Ipswich Road | Woolloongabba QLD 4102
| Australia Ph: +61 7 3176 3062| Fax: +61 7 3176 6826 | Email:
k.klein at uq.edu.au | Web: http://www.sph.uq.edu.au/qctbc


------------------------------

Message: 132
Date: Mon, 22 Nov 2010 21:43:25 -0500
From: David Winsemius <dwinsemius at comcast.net>
To: joeponzio <joe.ponzio at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] using the "apply" method for functions with multiple
	inputs
Message-ID: <DA7675BA-50F5-4CB0-9610-AF60012C4905 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes


On Nov 22, 2010, at 8:38 PM, joeponzio wrote:

>
> hello r users,
>
> i'm trying to use the apply method on a function with several  
> inputs, but
> cannot figure out how to send multiple arguments to the function (not
> multiple runs of the same function, but one run of the function  
> including
> two variables - each used within the function).
>
> a <- c(1:10,999,999,999)
> b <- c(11:20,999,999,999)
>
> tfun <- function(x,y){
>  if( (x = 1 & y !=999) || (x > 1 & x < 999 & y == 999) )

There is a problem with that first logical test: x = 1 assigns 1 to x rather
than testing for equality.

>    x1 <- 1
>  else
>    x1 <-0
> }
>

 > mapply("tfun", a, b)
  [1] 1 1 1 1 1 1 1 1 1 1 0 0 0

(but see above if that were not what you expected.)
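For completeness, a corrected sketch (note == rather than = in the first
comparison, and the value returned explicitly):

```r
a <- c(1:10, 999, 999, 999)
b <- c(11:20, 999, 999, 999)

tfun <- function(x, y) {
  ## TRUE when x is 1 with a real y, or x is real while y is the 999 code
  if ((x == 1 & y != 999) || (x > 1 & x < 999 & y == 999)) 1 else 0
}

mapply(tfun, a, b)
```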

> #this doesn't work - gives an error " 'y' is missing
> tfilt <- sapply(data.frame(a,b), tfun)
>
> thanks,
> joe
>
> -- 
> View this message in context:
> http://r.789695.n4.nabble.com/using-the-apply-method-for-functions-with-multiple-inputs-tp3054719p3054719.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

David Winsemius, MD
West Hartford, CT



------------------------------

Message: 133
Date: Mon, 22 Nov 2010 19:26:14 -0800 (PST)

To: David Scott <d.scott at auckland.ac.nz>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] How to start default browser on R
Message-ID: <389534.66648.qm at web113215.mail.gq1.yahoo.com>
Content-Type: text/plain; charset=utf-8

Hi David,

Thanks for your advice.

According to the Example on ?browseURL I tried:

1)
browseURL("file:http://www.r-project.org",
          browser="C:/Program Files/Internet Explorer/iexplore.exe")

It starts a small window asking for permission to accept ActiveX
-> OK

IE doesn't start

2)
browseURL("file:http://d:/R/R-2.5.1/html/index.html",
          browser="C:/Program Files/Internet Explorer/iexplore.exe")

same result as 1) above


What have I missed?  TIA


B.R.
Stephen L




----- Original Message ----
From: David Scott <d.scott at auckland.ac.nz>

Cc: "r-help at r-project.org" <r-help at r-project.org>
Sent: Tue, November 23, 2010 10:16:04 AM
Subject: Re: [R] How to start default browser on R

  On 23/11/10 14:20, Stephen Liu wrote:
> Hi folks,
>
> Win7 64 bit
> IE 64 bit
>
> How to start IE on R?  TIA
>
> B.R.
> Stephen L
>
>
?browseURL

-- 
_________________________________________________________________
David Scott    Department of Statistics
        The University of Auckland, PB 92019
        Auckland 1142,    NEW ZEALAND
Phone: +64 9 923 5055, or +64 9 373 7599 ext 85055
Email:    d.scott at auckland.ac.nz,  Fax: +64 9 373 7018

Director of Consulting, Department of Statistics





------------------------------

Message: 134
Date: Mon, 22 Nov 2010 22:39:15 -0500
From: Ista Zahn <izahn at psych.rochester.edu>

Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] How to start default browser on R
Message-ID:
	<AANLkTik4RKiUwPPDj+TYimm3pdrdxCJ2eoSsiR-0zaaA at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi Stephen,
I'm not sure if this is the problem, but you almost certainly do not
want the "file:" part. Try

browseURL("http://www.r-project.org")

-Ista


> Hi David,
>
> Thanks for your advice.
>
> According to the Example on ?browseURL I tried:
>
> 1)
> browseURL("file:http://www.r-project.org", browser="C:/Program
Files/Internet
> Explorer/iexplore.exe")
>
> It starts a small windows asking for permission to accept ActiveX
> -> OK
>
> IE doesn't start
>
> 2)
> browseURL("file:http://d:/R/R-2.5.1/html/index.html", browser="C:/Program
> Files/Internet Explorer/iexplore.exe")
>
> same result as 1) above
>
>
> What I have missed?  TIA
>
>
> B.R.
> Stephen L
>
>
>
>
> ----- Original Message ----
> From: David Scott <d.scott at auckland.ac.nz>

> Cc: "r-help at r-project.org" <r-help at r-project.org>
> Sent: Tue, November 23, 2010 10:16:04 AM
> Subject: Re: [R] How to start default browser on R
>
>  On 23/11/10 14:20, Stephen Liu wrote:
>> Hi folks,
>>
>> Win7 64 bit
>> IE 64 bit
>>
>> How to start IE on R?  TIA
>>
>> B.R.
>> Stephen L
>>
>>
> ?browseURL
>
> --
> _________________________________________________________________
> David Scott    Department of Statistics
>        The University of Auckland, PB 92019
>        Auckland 1142,    NEW ZEALAND
> Phone: +64 9 923 5055, or +64 9 373 7599 ext 85055
> Email:    d.scott at auckland.ac.nz,  Fax: +64 9 373 7018
>
> Director of Consulting, Department of Statistics
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org



------------------------------

Message: 135
Date: Mon, 22 Nov 2010 22:49:15 -0500
From: Ravi Varadhan <rvaradhan at jhmi.edu>
To: Yogesh Tiwari <yogesh.mpi at googlemail.com>
Cc: r-help <r-help at stat.math.ethz.ch>
Subject: Re: [R] how to calculate derivative
Message-ID: <718097d9ea7a.4ceaf36b at johnshopkins.edu>
Content-Type: text/plain; CHARSET=US-ASCII

Here is a simple approach:

data <- read.table("test-data.txt")

deriv <- diff(data$V2) / diff(data$V1)

times <- (data$V1[-1] + data$V1[-nrow(data)])/2  # midpoints of successive times

plot(times, deriv, type="l")

Another approach is to smooth the original data and then obtain derivatives
from the smooth

Ravi.

____________________________________________________________________

Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvaradhan at jhmi.edu


----- Original Message -----
From: Yogesh Tiwari <yogesh.mpi at googlemail.com>
Date: Monday, November 22, 2010 5:14 pm
Subject: [R] how to calculate derivative
To: r-help <r-help at stat.math.ethz.ch>


> Dear R Users,
>  
>  I have trend of two time series of CO2 each 10  years of data. One is
>  varying
>  weekly and another is bi-weekly. I want to calculate Growth rate ppmv 
> / year
>  of these CO2 trends. Therefore I want to calculate  time derivative 
> ppmv /
>  year.
>  
>  How to do it in R?
>  
>  Here I attached example data file, I would appreciate if any one 
> kindly can
>  help on it.
>  
>  Thanks,
>  
>  Regards,
>  Yogesh 
> ______________________________________________
>  R-help at r-project.org mailing list
>  
>  PLEASE do read the posting guide 
>  and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 136
Date: Mon, 22 Nov 2010 19:59:02 -0800 (PST)

To: Ista Zahn <izahn at psych.rochester.edu>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] How to start default browser on R
Message-ID: <64349.64091.qm at web113204.mail.gq1.yahoo.com>
Content-Type: text/plain; charset=utf-8

Hi Ista,

I see.  Your advice works.  Thanks

even without:
browser="C:/Program Files/Internet Explorer/iexplore.exe")

For a non-default browser it needs:
browser="C:/Program Files/Mozilla Firefox/firefox.exe"


What is "file:" used for?


B.R.
Stephen L



----- Original Message ----
From: Ista Zahn <izahn at psych.rochester.edu>

Cc: David Scott <d.scott at auckland.ac.nz>; "r-help at r-project.org" 
<r-help at r-project.org>
Sent: Tue, November 23, 2010 11:39:15 AM
Subject: Re: [R] How to start default browser on R

Hi Stephen,
I'm not sure if this is the problem, but you almost certainly do not
want the "file:" part. Try

browseURL("http://www.r-project.org")

-Ista


> Hi David,
>
> Thanks for your advice.
>
> According to the Example on ?browseURL I tried:
>
> 1)
> browseURL("file:http://www.r-project.org", browser="C:/Program
Files/Internet
> Explorer/iexplore.exe")
>
> It starts a small windows asking for permission to accept ActiveX
> -> OK
>
> IE doesn't start
>
> 2)
> browseURL("file:http://d:/R/R-2.5.1/html/index.html", browser="C:/Program
> Files/Internet Explorer/iexplore.exe")
>
> same result as 1) above
>
>
> What I have missed?  TIA
>
>
> B.R.
> Stephen L
>
>
>
>
> ----- Original Message ----
> From: David Scott <d.scott at auckland.ac.nz>

> Cc: "r-help at r-project.org" <r-help at r-project.org>
> Sent: Tue, November 23, 2010 10:16:04 AM
> Subject: Re: [R] How to start default browser on R
>
>  On 23/11/10 14:20, Stephen Liu wrote:
>> Hi folks,
>>
>> Win7 64 bit
>> IE 64 bit
>>
>> How to start IE on R?  TIA
>>
>> B.R.
>> Stephen L
>>
>>
> ?browseURL
>
> --
> _________________________________________________________________
> David Scott    Department of Statistics
>        The University of Auckland, PB 92019
>        Auckland 1142,    NEW ZEALAND
> Phone: +64 9 923 5055, or +64 9 373 7599 ext 85055
> Email:    d.scott at auckland.ac.nz,  Fax: +64 9 373 7018
>
> Director of Consulting, Department of Statistics
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org






------------------------------

Message: 137
Date: Tue, 23 Nov 2010 17:00:36 +1300
From: David Scott <d.scott at auckland.ac.nz>

Cc: "r-help at r-project.org" <r-help at r-project.org>,	Ista Zahn
	<izahn at psych.rochester.edu>
Subject: Re: [R] How to start default browser on R
Message-ID: <4CEB3C64.3000106 at auckland.ac.nz>
Content-Type: text/plain; charset=UTF-8; format=flowed

  On 23/11/10 16:59, Stephen Liu wrote:
> Hi Ista,
>
> I see.  Your advice works.  Thanks
>
> even without:
> browser="C:/Program Files/Internet Explorer/iexplore.exe")
>
> For a non-default browser it needs:
> browser="C:/Program Files/Mozilla Firefox/firefox.exe"
>
>
> What will be "file:" used for?
>
>
> B.R.
> Stephen L
>
You can use it to open a local file on your machine as well. I use this 
all the time with hwriter which writes html reports.
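For example (the path below is a made-up illustration, not from the thread):

```r
# A 'file:' URL points browseURL at a local file instead of a web page
report <- "C:/reports/index.html"        # hypothetical local HTML report
url <- paste0("file:///", report)
url
# browseURL(url)   # would open the report in the default browser
```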

David Scott

-- 
_________________________________________________________________
David Scott	Department of Statistics
		The University of Auckland, PB 92019
		Auckland 1142,    NEW ZEALAND
Phone: +64 9 923 5055, or +64 9 373 7599 ext 85055
Email:	d.scott at auckland.ac.nz,  Fax: +64 9 373 7018

Director of Consulting, Department of Statistics



------------------------------

Message: 138
Date: Mon, 22 Nov 2010 20:23:29 -0800
From: Spencer Graves <spencer.graves at structuremonitoring.com>
To: Ravi Varadhan <rvaradhan at jhmi.edu>
Cc: Yogesh Tiwari <yogesh.mpi at googlemail.com>,	r-help
	<r-help at stat.math.ethz.ch>
Subject: Re: [R] how to calculate derivative
Message-ID: <4CEB41C1.8070106 at structuremonitoring.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

       The "fda" package includes various implementations for smoothing 
data and differentiating the smooth, per Ravi's alternative approach.  
This is generally preferable to using first differences of raw data, 
because differencing raw data amplifies noise, while appropriate smooths 
eliminate much of the noise, leaving you with what you most want.  
Ramsay and Silverman (2005) Functional Data Analysis, 2nd ed. (Springer) 
suggest that if you want a second derivative, it is often wise to use 
quintic splines, because then the second derivative is a cubic spline.  
(The first derivative of a spline of order k is a spline of order k-1.)  
An example is provided in Figure 1.2 of Ramsay, Hooker and Graves (2009) 
Functional Data Analysis with R and Matlab (Springer).


       However, you don't have to get the book to see that example.  Just 
work through the script file "fdarm-ch01.R" in system.file('scripts', 
package='fda') on any computer with the 'fda' package installed.


       Hope this helps.
       Spencer


On 11/22/2010 7:49 PM, Ravi Varadhan wrote:
> Here is a simple approach:
>
> data<- read.table("test-data.txt")
>
> deriv<- diff(data$V2) / diff(data$V1)
>
> times<- (data$V1[-1] + data$V1[-3545])/2
>
> plot(times, deriv, type="l")
>
> Another approach is to smooth the original data and then obtain
derivatives from the smooth
>
> Ravi.
>
> ____________________________________________________________________
>
> Ravi Varadhan, Ph.D.
> Assistant Professor,
> Division of Geriatric Medicine and Gerontology
> School of Medicine
> Johns Hopkins University
>
> Ph. (410) 502-2619
> email: rvaradhan at jhmi.edu
>
>
> ----- Original Message -----
> From: Yogesh Tiwari<yogesh.mpi at googlemail.com>
> Date: Monday, November 22, 2010 5:14 pm
> Subject: [R] how to calculate derivative
> To: r-help<r-help at stat.math.ethz.ch>
>
>
>> Dear R Users,
>>
>>   I have trend of two time series of CO2 each 10  years of data. One is
>>   varying
>>   weekly and another is bi-weekly. I want to calculate Growth rate ppmv
>> / year
>>   of these CO2 trends. Therefore I want to calculate  time derivative
>> ppmv /
>>   year.
>>
>>   How to do it in R?
>>
>>   Here I attached example data file, I would appreciate if any one
>> kindly can
>>   help on it.
>>
>>   Thanks,
>>
>>   Regards,
>>   Yogesh
>> ______________________________________________
>>   R-help at r-project.org mailing list
>>
>>   PLEASE do read the posting guide
>>   and provide commented, minimal, self-contained, reproducible code.
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


-- 
Spencer Graves, PE, PhD
President and Chief Operating Officer
Structure Inspection and Monitoring, Inc.
751 Emerson Ct.
San José, CA 95126
ph:  408-655-4567



------------------------------

Message: 139
Date: Mon, 22 Nov 2010 20:29:11 -0800 (PST)
From: David Stoffer <dsstoffer at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] Is it possible to make a matrix to start at row 0?
Message-ID: <1290486551690-3054829.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii



bogdanno-2 wrote:
> 
> I want to make the matrix to be indexed from row (column) 0, not 1
> Can I do that? How?
> Thanks
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 


Try the Oarray package...  it did have some problems at one time, but maybe
it has been updated since then.  
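If Oarray doesn't suit, dimnames offer a base-R workaround: character
indices can emulate 0-based access without any package (a sketch, not true
0-based indexing, which base R matrices do not support):

```r
m <- matrix(1:6, nrow = 2)
rownames(m) <- 0:1          # label rows "0", "1"
colnames(m) <- 0:2          # label columns "0", "1", "2"
m["0", "2"]                 # character index: first row, third column -> 5
```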
-- 
View this message in context:
http://r.789695.n4.nabble.com/Is-it-possible-to-make-a-matrix-to-start-at-row-0-tp3054248p3054829.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 140
Date: Mon, 22 Nov 2010 20:41:05 -0800
From: Steve Bellan <sbellan at berkeley.edu>
To: r-help at r-project.org
Subject: [R] overlay histograms on map at map coordinates
Message-ID: <0B1FE6E7-2E3A-4AD0-9199-A1CB88393D10 at berkeley.edu>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

Hi all,

I'm trying to visualize animal movement data characteristics  
spatiotemporally by overlaying many different histograms on a map.  I  
want the histograms to be plotted at coordinates in the map that  
matches a region they describe.  If I was just doing this once, I'd  
fiddle in Illustrator after R and avoid the headache. But I'll be  
doing it many many times so it seems worth the elegance & repeatability.

For example, say we have the following data:

covariate <- rep(letters[1:10], each = 100)
shape.pars <- rep(seq(.5,.7, length.out = 10), each = 100)
zz <- rgamma(1000, shape = shape.pars,  scale = 2)
map <- data.frame(factor = unique(covariate), xx = rnorm(10),
                  yy = rnorm(10))

I'd like to be able to have 10 histograms plotted for each level of  
the covariate at the coordinates specified in the map data.frame.   
Ideally, I'd like to be able to specify whether they were plotted to  
the left, right, top, bottom or center of the coordinates similar to  
in text() and other graphical functions.  Looking around the archives,  
all I've come across are the viewport system in library(grid) which  
doesn't seem to be able to handle plot() hist() or other such  
functions.  Otherwise, I could create viewports for each histogram  
inside the map's coordinate system.

Any ideas?  Thanks!

Steve


Steve Bellan
MPH, Epidemiology
PhD Candidate, Environmental Science, Policy & Management
Getz Lab
University of California, Berkeley



------------------------------

Message: 141
Date: Mon, 22 Nov 2010 21:35:11 -0800 (PST)
From: David Stoffer <dsstoffer at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] Kalman filter
Message-ID: <1290490511301-3054858.post at n4.nabble.com>
Content-Type: text/plain; charset=UTF-8


It sounds like you've looked at the DLM, DSE, and SSPIR packages.  If not,
then certainly check them out.  Also, we have code for filtering, smoothing
and estimation in our text- go to www.stat.pitt.edu/stoffer/tsa3/ and look
at the code for chapter 6.  There's not a package for the text, but all the
code is in a compressed file that you can download.  The examples are
discussed in detail in the text, but I think looking at the code (and
Appendix R on the site) will be sufficient to set up your problem.
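The filtering recursions themselves are short enough to sketch in base R.
This is a generic textbook filter under the assumption of time-invariant A,
C, Q, R (my sketch, not code from the cited text or packages):

```r
# x[t] = A x[t-1] + w, w ~ N(0, Q);  y[t] = C x[t] + v, v ~ N(0, R)
kfilter <- function(y, A, C, Q, R, x0, P0) {
  p  <- nrow(A)
  xf <- matrix(0, p, length(y))        # filtered state estimates
  x <- x0; P <- P0
  for (t in seq_along(y)) {
    x <- A %*% x                                      # predict state
    P <- A %*% P %*% t(A) + Q                         # predict covariance
    K <- P %*% t(C) %*% solve(C %*% P %*% t(C) + R)   # Kalman gain
    x <- x + K %*% (y[t] - C %*% x)                   # update with y[t]
    P <- (diag(p) - K %*% C) %*% P
    xf[, t] <- x
  }
  xf
}
```

Parameter estimation then wraps such a filter in an optimizer over the
innovations log-likelihood, which is what dlmMLE and friends automate.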

David



Garten Stuhl wrote:
> 
> Hello,
> 
> 
> 
> I have completed my kalman filter problem with more details.
> 
> 
> 
> The transition- and the measurement equation is given by
> 
> 
> 
> x[t]=A[t]*x[t-1]+B[t]*epsilon[t]
> 
> y[t]=C[t]*x[t]+eta[t]
> 
> 
> 
> A, y, B and C are Matrices. Y[t] is the data input vector with 800
> elements
> (every t has one element)
> 
> 
> 
> My Model is described by the following
> (discretisation)
> stochastic differential equation
> 
> 
> 
> Lambda[t]=lambda[t-1]+kappa*lambda[t]*delta_t+epsilon_l
> 
> R[t]=R[t-1]+mu*delta_t+epsilon_r
> 
> epsilon_l=sigma_l*sqrt(delta_t)
> 
> epsilon_r=sigma_r*sqrt(delta_t)
> 
> 
> 
> Ln(S[t])=lambda[t]+R[t]
> 
> 
> 
> The paramters for estimation are:
> 
> kappa
> 
> mu
> 
> sigma_l
> 
> sigma_r
> 
> 
> 
> The state-space-model for this problem is:
> 
> 
> 
> x[t]=(lambda[t], R[t])'
> 
> A[t]=(1-kappa+delta_t, 0; 0, 1+mu)
> 
> B[t]=(1,0;0,1)
> 
> epsilon[t]=(epsilon_l, epsilon_r)'
> 
> C[t]=(1,1)
> 
> Eta[t]=0
> 
> 
> 
> I used several alternative methods (dlm, kalmanLike, fkf, kfilter) for
> parameter estimation but I don't understand the syntax and the correct
> input for model estimation.
> 
> 
> 
> Can anybody tell me which package is best suited to my problem and how to
> use it?
> 
> 
> 
> Thanks for helping.
> 
> 
> 
> Best,
> 
> Thomas
> 
> 	[[alternative HTML version deleted]]
> 
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 
View this message in context:
http://r.789695.n4.nabble.com/Kalman-filter-tp3049591p3054858.html
Sent from the R help mailing list archive at Nabble.com.



------------------------------

Message: 142
Date: Mon, 22 Nov 2010 23:36:53 -0600
From: Mari Pesek <marifrances at gmail.com>
To: r-help at r-project.org
Subject: [R] (no subject)
Message-ID:
	<AANLkTikd0ymrDTzJfjad=2CFJvLZEmZKP4gXDRO_ctGX at mail.gmail.com>
Content-Type: text/plain

Dear R Help -

I am analyzing data from an ecological experiment and am having problems
with the ANOVA functions I've tried thus far. The experiment consists of a
blocked/split-plot design, with plant biomass as the response. The following
is an overview of the treatments applied (nitrogen addition, phosphorus
addition, and seeded/not seeded) and at what level (block, main-plot, and
sub-plot):

- 6 experimental blocks divided into 2 main-plots
- each main-plot divided into 8 sub-plots, for a total of 96 sub-plots (6
blocks * 2 main-plots * 8 sub-plots)
- 16 experimental treatment conditions in a 4 x 2 x 2 factorial design:
   - N at 4 levels for each sub-plot
   - P at 2 levels for each sub-plot
   - Seed at two levels for each block (one level per main-plot)
- response variable = biomass
Block
   Main Plot 0 : No Seed
       Sub-Plot 0.1 : N0 P0
       Sub-Plot 0.2 : N0 P1
       Sub-Plot 0.3 : N1 P0
       Sub-Plot 0.4 : N1 P1
       Sub-Plot 0.5 : N2 P0
       Sub-Plot 0.6 : N2 P1
       Sub-Plot 0.7 : N3 P0
       Sub-Plot 0.8 : N3 P1
   Main Plot 1 : Seed
       Sub-Plot 1.1 : N0 P0
       Sub-Plot 1.2 : N0 P1
       Sub-Plot 1.3 : N1 P0
       Sub-Plot 1.4 : N1 P1
       Sub-Plot 1.5 : N2 P0
       Sub-Plot 1.6 : N2 P1
       Sub-Plot 1.7 : N3 P0
       Sub-Plot 1.8 : N3 P1

I've tried several different approaches to run an ANOVA (lmer, aov, lme) on
this data, trying to use type III SSs and include a random factor, but am
having problems. Any suggestions?

[[elided Yahoo spam]]

-Mari Pesek
-- 
Dept of Ecology and Evolutionary Biology
University of Kansas
Haworth Hall, 1200 Sunnyside Ave
Lawrence, KS 66045

	[[alternative HTML version deleted]]



------------------------------

Message: 143
Date: Mon, 22 Nov 2010 23:39:17 -0600
From: Mari Pesek <marifrances at gmail.com>
To: r-help at r-project.org
Subject: [R] factorial ANOVA for block/split-plot design
Message-ID:
	<AANLkTimenMxEoq_WGb6hD_Qifp1Y02dN+T3nh55N8LXP at mail.gmail.com>
Content-Type: text/plain

Dear R Help -

I am analyzing data from an ecological experiment and am having problems
with the ANOVA functions I've tried thus far. The experiment consists of a
blocked/split-plot design, with plant biomass as the response. The following
is an overview of the treatments applied (nitrogen addition, phosphorus
addition, and seeded/not seeded) and at what level (block, main-plot, and
sub-plot):

- 6 experimental blocks divided into 2 main-plots
- each main-plot divided into 8 sub-plots, for a total of 96 sub-plots (6
blocks * 2 main-plots * 8 sub-plots)
- 16 experimental treatment conditions in a 4 x 2 x 2 factorial design:
   - N at 4 levels for each sub-plot
   - P at 2 levels for each sub-plot
   - Seed at two levels for each block (one level per main-plot)
- response variable = biomass
Block
   Main Plot 0 : No Seed
       Sub-Plot 0.1 : N0 P0
       Sub-Plot 0.2 : N0 P1
       Sub-Plot 0.3 : N1 P0
       Sub-Plot 0.4 : N1 P1
       Sub-Plot 0.5 : N2 P0
       Sub-Plot 0.6 : N2 P1
       Sub-Plot 0.7 : N3 P0
       Sub-Plot 0.8 : N3 P1
   Main Plot 1 : Seed
       Sub-Plot 1.1 : N0 P0
       Sub-Plot 1.2 : N0 P1
       Sub-Plot 1.3 : N1 P0
       Sub-Plot 1.4 : N1 P1
       Sub-Plot 1.5 : N2 P0
       Sub-Plot 1.6 : N2 P1
       Sub-Plot 1.7 : N3 P0
       Sub-Plot 1.8 : N3 P1

I've tried several different approaches to run an ANOVA (lmer, aov, lme) on
this data, trying to use type III SSs and include a random factor, but am
having problems. Any suggestions?

[[elided Yahoo spam]]

-Mari Pesek
-- 
Dept of Ecology and Evolutionary Biology
University of Kansas
Haworth Hall, 1200 Sunnyside Ave
Lawrence, KS 66045

	[[alternative HTML version deleted]]



------------------------------

Message: 144
Date: Tue, 23 Nov 2010 00:59:09 -0500
From: "RICHARD M. HEIBERGER" <rmh at temple.edu>
To: Mari Pesek <marifrances at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] factorial ANOVA for block/split-plot design
Message-ID:
	<AANLkTimAK7kGT3Lgko+seYu9nKbBcSrnJ7oz4CpA71ib at mail.gmail.com>
Content-Type: text/plain

Please see the maiz example at the end of ?HH::MMC
You will need to run all the way to the end of the example.
If you don't yet have HH, you can get it with
install.packages("HH")

If you need to write back to the list, please include your attempts.

Rich
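One common way to code such a design in base R is aov with an Error() term;
a sketch with simulated data (the variable names and layout are my
assumptions, not the poster's):

```r
# Simulated layout: 6 blocks x 2 main plots (Seed) x 8 subplots (N x P)
dat <- expand.grid(Block = factor(1:6), Seed = factor(c("no", "yes")),
                   N = factor(0:3), P = factor(0:1))
set.seed(42)
dat$biomass <- rnorm(nrow(dat), mean = 10)

# Seed is the whole-plot factor: Error(Block/Seed) nests main plots in blocks
fit <- aov(biomass ~ Seed * N * P + Error(Block/Seed), data = dat)
summary(fit)
```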

	[[alternative HTML version deleted]]



------------------------------

Message: 145
Date: Mon, 22 Nov 2010 21:25:37 -0800 (PST)
From: "dhacademic at gmail.com" <dhacademic at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] question about constraint minimization
Message-ID:
	<AANLkTim2AgXLOWcaJ7AbFpO9PACz6=TdNJh3nfZ3iie7 at mail.gmail.com>
Content-Type: text/plain


Dear Prof. Ravi Varadhan,

Many thanks for the reply. In my case, besides x1=x3, x1=x4 ("x1=x3=x4" was
used in last post), another constraint is needed, x2+x3+x4+...+x12=1.5. So
there are 9 variables.

I have downloaded your code, but I do not even know how to call it from an R
program. Actually, I know very little about R.  I spent a lot of time
reading the R help files as well as many posts online, and finally prepared
the input file that I pasted in my last post. Unfortunately, it does not
work well. Can you please help revise the input file so that it works with
the constrOptim function? Or can you please show me how to call your code in
R and send me the input file?

[[elided Yahoo spam]]


Best,
Hao

On Mon, Nov 22, 2010 at 2:31 PM, Ravi Varadhan [via R]
<ml-node+3054297-1984476990-202837 at n4.nabble.com> wrote:

> I do not understand the constraint x1 = x3 = x4.  If this is correct, you
> only have 10 unknown parameters.
>
> If you can correctly formulate your problem, you can have a look at the
> packages "alabama" or "BB".  The function `auglag' in "alabama" or the
> function `spg' in "BB" may be useful.
>
> Ravi.
>
> -------------------------------------------------------
> Ravi Varadhan, Ph.D.
> Assistant Professor,
> Division of Geriatric Medicine and Gerontology School of Medicine Johns
> Hopkins University
>
> Ph. (410) 502-2619
> email: [hidden email]
>
>
> -----Original Message-----
> From: [hidden email] [mailto:[hidden email]] On Behalf Of [hidden email]
> Sent: Monday, November 22, 2010 11:10 AM
> To: [hidden email]
> Subject: Re: [R] question about constraint minimization
>
>
> Hi,
>
> I have struggled on this "bound optimization with equality constraint" by
> using optim function for two days, but still fail to prepare a good input.
> Can anyone help to prepare the input for my specific case? Many thanks.
>
> Best,
> Hao
>
>
> On Sat, Nov 20, 2010 at 3:17 AM, Hans W Borchers [via R] <[hidden email]> wrote:
>
> > dhacademic <at> gmail.com <dhacademic <at> gmail.com> writes:
> >
> > >
> > >
> > > Hi,
> > >
> > > I am a beginner of R. There is a question about constraint
> minimization.
> > A
> > > function, y=f(x1,x2,x3....x12), needs to be minimized. There are 3
> > > requirements for the minimization:
> > >
> > > (1) x2+x3+...+x12=1.5 (x1 is excluded);
> > > (2) x1=x3=x4;
> > > (3) x1, x3 and x5 are in the range of -1~0, respectively. The rest
> > variables
> > > (x2, x4, x6, x7, ...., x12) are in the range of 0~1, respectively.
> > >
> > > The "optim" function is used. And part of my input is as follow, where
> > > "xx1r" represents the x12:
> > >
> > > xx1r=1.5-x[2]-x[1]-x[1]-x[3]-x[4]-x[5]-x[6]-x[7]-x[8]-x[9]
> > > start=rnorm(9)
> > > up=1:9/1:9*1
> > > lo=1:9/1:9*-1
> > > out=optim(start,f,lower=lo,upper=up,method="L-BFGS-B",hessian=TRUE,
> > > control=list(trace=6,maxit=1000))
> > >
> > > There are two problems in this input. the "up" and "lo" only define a
> > range
> > > of -1~1 for x1 to x11, which can not meet the requirement (3). In
> > addition,
> > > there is not any constraint imposed on x12. I have no idea how to
> specify
> > a
> > > matrix that can impose different constraints on individual variables
in
>
> a
>
> >
> > > function. Any suggestion is highly appreciated.
> > >
> > > Best,
> > > Hao
> > >
> >
> > I don't see any direct need for real 'constraint' optimization here,
> > it is a 'bounded' optimization where you are allowed to use
> >
> >     lower <- c(-1,0,-1,0,-1,0,0,0,0,0,0,0)
> >     upper <- c( 0,1, 0,0, 0,1,1,1,1,1,1,1)
> >
> > Otherwise, your description is confusing:
> >   (1) Did you change f to a new function with 9 variables, eliminating
> >       x3, x4, and x12 ?
> >   (2) x4 (being equal to x1) has to be in [-1, 0] but also in [0, 1]?
> >   (3) If you need to restrict x12 to [0, 1] also, you cannot eliminate
> it.
> >       Either keep x12 and use an equality constraint, or use inequality
> >       constraints on xxlr.
> >
> > Hans Werner
> >
> > ______________________________________________
> > [hidden email] mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> >
http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> >
> > ------------------------------
> >  View message @
> >
>
>
http://r.789695.n4.nabble.com/question-about-constraint-minimization-tp3050880p3051338.html
> >
> > To unsubscribe from question about constraint minimization, click
> here<http://r.789695.n4.nabble.com/template/NamlServlet.jtp?macro=unsubscribe_by_code&node=3050880&code=ZGhhY2FkZW1pY0BnbWFpbC5jb218MzA1MDg4MHwtNjM2Nzc0NA==>.
> >
> >
>
> --
> View this message in context:
>
>
http://r.789695.n4.nabble.com/question-about-constraint-minimization-tp3050880p3053912.html
> Sent from the R help mailing list archive at Nabble.com.
>
>         [[alternative HTML version deleted]]
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
>
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
>
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
> ------------------------------
>  View message @
>
http://r.789695.n4.nabble.com/question-about-constraint-minimization-tp3050880p3054297.html
>
> To unsubscribe from question about constraint minimization, click
here<http://r.789695.n4.nabble.com/template/NamlServlet.jtp?macro=unsubscribe_by_code&node=3050880&code=ZGhhY2FkZW1pY0BnbWFpbC5jb218MzA1MDg4MHwtNjM2Nzc0NA==>.
>
>

-- 
View this message in context:
http://r.789695.n4.nabble.com/question-about-constraint-minimization-tp3050880p3054850.html
Sent from the R help mailing list archive at Nabble.com.

	[[alternative HTML version deleted]]
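Hans Werner's per-variable bounds can be passed straight to optim; a toy
sketch with a made-up quadratic objective (the poster's f is not shown in
the thread):

```r
# Bounded minimization with different limits per variable (L-BFGS-B)
f <- function(x) sum((x - 0.3)^2)            # hypothetical objective
lower <- c(-1, 0, -1, 0, -1, rep(0, 4))      # x1, x3, x5 in [-1, 0]
upper <- c( 0, 1,  0, 1,  0, rep(1, 4))      # the rest in [0, 1]
out <- optim(rep(0, 9), f, method = "L-BFGS-B", lower = lower, upper = upper)
out$par   # coordinates capped at 0 stay there; the rest reach 0.3
```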



------------------------------

Message: 146
Date: Mon, 22 Nov 2010 21:44:16 -0800 (PST)
From: Shai <shainova at gmail.com>
To: r-help at r-project.org
Subject: [R] Explained GLZ model variation
Message-ID:
	<093c0ad4-6426-48be-aaa5-04993054424a at 26g2000yqv.googlegroups.com>
Content-Type: text/plain; charset=ISO-8859-1

Hello,

I am using the MASS library to create some GLZ models. How can I know
how much (%) of the variation is explained by the model, and if it is
significant? I did not see this in the anova() or summary() tables...

Thanks!
Shai



------------------------------

Message: 147
Date: Mon, 22 Nov 2010 23:11:37 -0700
From: Sarah Berry <escalosobre at gmail.com>
To: r-help at r-project.org
Subject: [R] Lattice and Quartz
Message-ID: <B12499D0-CE15-4C74-9705-BD906D10F956 at gmail.com>
Content-Type: text/plain

I ran this script in a source file on my Mac:

library(lattice)
year <- 1900:2000
dollars <- (year-1899)^2
plot(year,dollars)
quartz("year")
histogram(~dollars)

The first plot appears in Quartz 2. The second quartz window, named year,
opens but the histogram doesn't appear. 

However, when I copy and paste this script directly into the R console, both
quartz windows (Quartz 2 and year) open and both plots appear.

As a counter example, the script below run as a source file, works as
expected, and I get two plots in two windows:

library(lattice)
year <- 1900:2000
dollars <- (year-1899)^2
plot(year,dollars)
quartz("year")
plot(year, dollars)

How do I get the lattice package to generate multiple plots in multiple
windows from a script run from a source file?
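A likely explanation (my note, not from the thread): lattice functions
return trellis objects that auto-print only at the top level; inside
source() the result must be printed explicitly, e.g.:

```r
library(lattice)
dollars <- (1900:2000 - 1899)^2
# Inside a sourced script, wrap lattice calls in print() to force drawing
print(histogram(~ dollars))
```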

Thank you ahead of time,

Sarah B.
	[[alternative HTML version deleted]]



------------------------------

Message: 148
Date: Mon, 22 Nov 2010 23:20:23 -0700
From: Kayce anderson <kaycelu at gmail.com>
To: r-help at r-project.org
Subject: [R] compare GLM coefficients
Message-ID:
	<AANLkTi==4cJTdVKgnw1K18KUq-VVXEGyjx70WOGJvMzi at mail.gmail.com>
Content-Type: text/plain

I have a data set of repeated abundance counts over time.  I am
investigating whether count data reduced to presence-absence (presence) data
will reveal similar population trends.  I am using a negative binomial
distribution for the glm (package MASS) because the count data contains many
zeros and extreme values.  "count" and "presence" are annual sums for each
metric.  I have also included sampling effort (visits) as an independent
variable because sampling varies between 29-33 visits per year.  My models
are:

glm.nb(count ~ year + visits) and
glm.nb(presence ~ year + visits)

I would like to test whether the coefficients for "year" are significantly
different between models.  Please advise me on the best method to make such
a comparison.

Thank you,
Kayce

	[[alternative HTML version deleted]]



------------------------------

Message: 149
Date: Tue, 23 Nov 2010 17:55:58 +1100
From: Michael Bedward <michael.bedward at gmail.com>
To: Kayce anderson <kaycelu at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] compare GLM coefficients
Message-ID:
	<AANLkTinoxyjWnP6WZMqCGc8Yu5aE0+eAvgfG6ZrXUiby at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hello Kayce,

My (very basic) understanding is that you can't directly compare the
coefficients across models that have different response variables, nor
can you use AIC and similar metrics of model goodness of fit.
Instead, I think you have to carefully define what you mean by "reveal
similar population trends".

If you treat the model with the count response as your reference, and
it predicts (for example) population decline of magnitude X over
period T, then you can investigate to what extent this same trend is
retrieved by the presence response model. But the specifics of the
comparison(s) should be closely tied to the population behaviours /
syndromes / critical points that you are most interested in. If there
are multiple behaviours of interest you want to know to what extent
the presence data perform as well as the count data for each of them.

That's my general take on the style of the approach. Hopefully others
here will have more detailed and knowledgeable comments for you.

Michael
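A sketch of that style of comparison (the simulated data and variable names below are invented, not Kayce's): since both models use a log link, each "year" coefficient translates into a percent change per year, and those implied trends can be compared on a common scale.

```r
library(MASS)

set.seed(1)
d <- data.frame(year = 1:20, visits = sample(29:33, 20, replace = TRUE))
d$count    <- rnbinom(20, mu = exp(3 - 0.05 * d$year), size = 2)
d$presence <- pmin(d$count, d$visits)   # crude stand-in for presence sums

m_count <- glm.nb(count ~ year + visits, data = d)
m_pres  <- glm.nb(presence ~ year + visits, data = d)

# Percent change per year implied by each model (log link):
trend <- function(m) 100 * (exp(coef(m)["year"]) - 1)
c(count = trend(m_count), presence = trend(m_pres))
```

Whether the two trends are "similar enough" is then a substantive judgement, as Michael says, not a single hypothesis test.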


On 23 November 2010 17:20, Kayce anderson <kaycelu at gmail.com> wrote:
> I have a data set of repeated abundance counts over time.  I am
> investigating whether count data reduced to presence-absence (presence)
> data will reveal similar population trends.  I am using a negative
> binomial distribution for the glm (package MASS) because the count data
> contains many zeros and extreme values.  "count" and "presence" are
> annual sums for each metric.  I have also included sampling effort
> (visits) as an independent variable because sampling varies between
> 29-33 visits per year.  My models are:
>
> glm.nb(count ~ year + visits) and
> glm.nb(presence ~ year + visits)
>
> I would like to test whether the coefficients for "year" are significantly
> different between models.  Please advise me on the best method to make
> such a comparison.
>
> Thank you,
> Kayce
>
>        [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



------------------------------

Message: 150
Date: Tue, 23 Nov 2010 14:57:23 +0800
From: gireesh bogu <girishbogu at gmail.com>
To: R-help at r-project.org
Subject: [R] Calculating correlation
Message-ID:
	<AANLkTi=_GXE1192XXSwaCg-WcugVDat4Rd1jKu8_GTzk at mail.gmail.com>
Content-Type: text/plain

Hi guys

I have an input file with multiple columns and rows.
Is it possible to calculate correlation of certain value of certain No (For
example x of S1 = 112) with all other values (for example start with x 112
corr a 3 of S1 = x-a 0.2 )

INPUT
*******

No  S1  S2  S3  S4  Sn
a    3     4     45  34   23
x   112   0    12   23   0
b    0     1     23   12   1
n    0     1     0      1    1

OUTPUT
***********

No  S1  S2  S3  S4  Sn
x-a  0.2   0.3 ...............
x-x  1     1  ................
x-b  0..........................
x-n  0.9 .......................
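One plausible reading of this (an assumption on my part, since the request is ambiguous): correlate row x with every other row across the samples S1..Sn. On a numeric matrix that is a one-liner:

```r
# Toy matrix matching the INPUT above:
m <- rbind(a = c(3, 4, 45, 34, 23),
           x = c(112, 0, 12, 23, 0),
           b = c(0, 1, 23, 12, 1),
           n = c(0, 1, 0, 1, 1))
colnames(m) <- c("S1", "S2", "S3", "S4", "Sn")

# Pearson correlation of row "x" with each row (x-x is 1 by construction):
round(apply(m, 1, function(r) cor(m["x", ], r)), 2)
```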

	[[alternative HTML version deleted]]



------------------------------

Message: 151
Date: Tue, 23 Nov 2010 18:33:59 +1100
From: Jim Lemon <jim at bitwrit.com.au>
To: romzero <romzero at yahoo.it>
Cc: r-help at r-project.org
Subject: Re: [R] Some questione about plot
Message-ID: <4CEB6E67.6050608 at bitwrit.com.au>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 11/22/2010 10:14 PM, romzero wrote:
>
> Q1: How can i draw LSD (Least significant difference) on a plot?
> Like this...
> http://r.789695.n4.nabble.com/file/n3053430/LSD.jpg
>
> Q2: How can i draw the axis secondary scale?
>
Hi romzero,
This is somewhat of a guess, but:

segments(2.3,1.6,2.3,1.6+<value of LSD>)
text(2.2,1.6+<value of LSD>/2,"LSD(P<=0.05)",adj=1)

where <value of LSD> is the length of your line.
For the "secondary scale", I assume you mean the little ticks on the 
right side of the plot. Maybe:

plotlim<-par("usr")
par(xpd=TRUE)
segments(plotlim[2], <y values for ticks>,
  plotlim[2] + diff(plotlim[1:2])/50, <y values for ticks>)

where <y values for ticks> are the vertical positions for the ticks.

Jim



------------------------------

Message: 152
Date: Mon, 22 Nov 2010 23:31:55 -0800 (PST)

To: David Scott <d.scott at auckland.ac.nz>
Cc: "r-help at r-project.org" <r-help at r-project.org>,	Ista Zahn
	<izahn at psych.rochester.edu>
Subject: Re: [R] How to start default browser on R
Message-ID: <163783.15789.qm at web113203.mail.gq1.yahoo.com>
Content-Type: text/plain; charset=utf-8

Hi David,

I see.  "File:" gives the full path to the .html file created/downloaded.
Then the browser will open that file.  Thanks.

I don't have hwriter package installed.

A side question: what is the corresponding R command to check whether a
package is already installed?

similar to;
On R (Windows)
Packages - Load package


B.R.
Stephen L
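For the side question, a small base-R sketch (the package names are just examples):

```r
# Is a package installed on this machine?
is_installed <- function(pkg) pkg %in% rownames(installed.packages())

is_installed("MASS")      # TRUE on a standard installation
is_installed("hwriter")   # FALSE here, per the message above
```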



----- Original Message ----
From: David Scott <d.scott at auckland.ac.nz>

Cc: Ista Zahn <izahn at psych.rochester.edu>; "r-help at r-project.org" 
<r-help at r-project.org>
Sent: Tue, November 23, 2010 12:00:36 PM
Subject: Re: [R] How to start default browser on R

  On 23/11/10 16:59, Stephen Liu wrote:
> Hi Ista,
>
> I see.  Your advice works.  Thanks
>
> even without:
> browser="C:/Program Files/Internet Explorer/iexplore.exe")
>
> For a non-default browser it needs:
> browser="C:/Program Files/Mozilla Firefox/firefox.exe"
>
>
> What will be "file:" used for?
>
>
> B.R.
> Stephen L
>
You can use it to open a local file on your machine as well. I use this 
all the time with hwriter which writes html reports.

David Scott

-- 
_________________________________________________________________
David Scott    Department of Statistics
        The University of Auckland, PB 92019
        Auckland 1142,    NEW ZEALAND
Phone: +64 9 923 5055, or +64 9 373 7599 ext 85055
Email:    d.scott at auckland.ac.nz,  Fax: +64 9 373 7018

Director of Consulting, Department of Statistics





------------------------------

Message: 153
Date: Tue, 23 Nov 2010 09:35:57 +0200
From: Tal Galili <tal.galili at gmail.com>
To: gireesh bogu <girishbogu at gmail.com>
Cc: R-help at r-project.org
Subject: Re: [R] Calculating correlation
Message-ID:
	<AANLkTi=VQ9N5a96Ytx1QPJJ65JrmeUCtWeDW+6+DJY=r at mail.gmail.com>
Content-Type: text/plain

Hi there,
I'm not sure I understand your question.

What are the two vectors you wish to check their correlation?
Are they the two rows x and a?
Because from your example it seems you are trying to do a correlation
between two single numbers (so probably I didn't get something straight).

Tal



----------------Contact
Details:-------------------------------------------------------
Contact me: Tal.Galili at gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
----------------------------------------------------------------------------
------------------




On Tue, Nov 23, 2010 at 8:57 AM, gireesh bogu <girishbogu at gmail.com> wrote:

> Hi guys
>
> I have an input file with multiple columns and rows.
> Is it possible to calculate correlation of certain value of certain No
(For
> example x of S1 = 112) with all other values (for example start with x 112
> corr a 3 of S1 = x-a 0.2 )
>
> INPUT
> *******
>
> No  S1  S2  S3  S4  Sn
> a    3     4     45  34   23
> x   112   0    12   23   0
> b    0     1     23   12   1
> n    0     1     0      1    1
>
> OUTPUT
> ***********
>
> No  S1  S2  S3  S4  Sn
> x-a  0.2   0.3 ...............
> x-x  1     1  ................
> x-b  0..........................
> x-n  0.9 .......................
>
>        [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

	[[alternative HTML version deleted]]



------------------------------

Message: 154
Date: Tue, 23 Nov 2010 00:05:15 -0800 (PST)

To: r-help at r-project.org
Subject: [R] About available datasets on PC
Message-ID: <835227.40335.qm at web113207.mail.gq1.yahoo.com>
Content-Type: text/plain; charset=utf-8

Hi folks,

Win7


On running;
data(package = .packages(all.available = TRUE))

it displays a list of datasets under:-
Data sets in package 'AER':

But I couldn't call/load any of them from the list.

> DJFranses
Error: object 'DJFranses' not found

> CreditCard
Error: object 'CreditCard' not found


But I can call/load;
> iris

Does the list show all datasets available in the repositories, NOT just
the datasets already downloaded/installed on this PC?  If YES, how do I
find the datasets available on this PC?


Packages -> Load Datasets
the list looks different.


Pls advise.  TIA

B.R.
Stephen L





------------------------------

Message: 155
Date: Tue, 23 Nov 2010 00:24:42 -0800
From: Noah Silverman <noah at smartmediacorp.com>
To: r-help <r-help at r-project.org>
Subject: [R] More detail in chart axis?
Message-ID: <4CEB7A4A.8040305 at smartmediacorp.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I have a series of data (about 80,000 pairs of x,y).

Plotting it shows a great chart.  However, R has randomly chosen about 6
labels for my x axis.  Now, clearly I can't show them all, but would
like some control over the granularity of what is displayed.  I can't
find anything in the documentation about controlling the axis data
labels.  Is there a way?

Alternately, is there a package that would allow me to zoom into an area
of the chart so I can see more detail?

Thanks,

-N



------------------------------

Message: 156
Date: Tue, 23 Nov 2010 09:26:48 +0100
From: derek eder <derek.eder at lungall.gu.se>
To: r-help at r-project.org
Subject: [R] Error: cannot allocate vector of size x Gb (64-bit ...
	yet	again)
Message-ID: <4CEB7AC8.6090304 at lungall.gu.se>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hello,

I am facing the dreaded "Error: cannot allocate vector of size x Gb" and 
don't understand
enough about R (or operating system) memory management to diagnose and 
solve the problem
-- despite studying previous posts and relevant R help -- e.g.:

"Error messages beginning cannot allocate vector of size indicate a 
failure to obtain memory,
either because the size exceeded the address-space limit for a process 
or, more likely,
because the system was unable to provide the memory.
[...] On all builds of R, the maximum length (number of elements)
of a vector is 2^31 - 1 ~ 2*10^9, as lengths are stored as signed integers.
In addition, the storage space cannot exceed the address limit."
- from Memory-limits {Base}


Simple question:  Given 64-bit R (AMD64 Linux) with a ulimit of
"unlimited", can the size of an R object exceed the amount of available
RAM?

Empirically my system with 4Gb RAM and ample Swap, is failing:

 >  x <- integer(10^9)

 > object.size(x)
4000000040 bytes

 > gc()
             used   (Mb) gc trigger   (Mb)  max used   (Mb)
Ncells    121195    6.5     350000   18.7    350000   18.7
Vcells 500124024 3815.7  606849099 4629.9 550124408 4197.2

 > matrix(x, ncol=16)
Error: cannot allocate vector of size 3.7 Gb

I don't understand how this operation violates the limits detailed in 
the Memory-limit help (above).

Thank you!


Derek Eder
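One note on the arithmetic, which may explain the apparent contradiction: matrix(x, ncol=16) must allocate a fresh copy of x while the original ~4 GB vector is still held, and it is that copy alone that the error message prices out.

```r
# x holds 10^9 four-byte integers:
vec_bytes <- 10^9 * 4
vec_bytes / 1024^3   # ~3.7 "Gb" -- the size quoted in the error message
# So matrix(x, ncol = 16) needs ~3.7 GB *in addition to* the ~4 GB
# already in use, which 4 GB RAM plus swap cannot provide in one piece.
```

No single documented limit is violated; the system simply cannot supply a second contiguous allocation of that size.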



----------------------------------------------------------------------------
---------------------

 > version
                _
platform       x86_64-pc-linux-gnu
arch           x86_64
os             linux-gnu
system         x86_64, linux-gnu
status
major          2
minor          11.1
year           2010
month          05
day            31
svn rev        52157
language       R
version.string R version 2.11.1 (2010-05-31)



derek at papanca:~$ top

top - 09:10:18 up 51 min,  4 users,  load average: 0.51, 0.51, 0.45
Tasks: 160 total,   2 running, 158 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us, 25.0%sy,  0.0%ni, 75.0%id,  0.0%wa,  0.0%hi,  0.0%si,  
0.0%st
Mem:   3796484k total,  3764852k used,    31632k free,    14204k buffers
Swap:  2929660k total,   834240k used,  2095420k free,    94800k cached

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
  2854 derek     20   0  239m 9260 5448 S    6  0.2   0:05.53 
gnome-terminal
  1164 root      20   0  218m  31m  10m S    4  0.8   1:29.71 Xorg
  3331 derek     20   0 19276 1324  944 R    1  0.0   0:00.6  top



------------------------------

Message: 157
Date: Tue, 23 Nov 2010 00:41:57 -0800
From: Jeff Newmiller <jdnewmil at dcn.davis.ca.us>

Cc: r-help at r-project.org
Subject: Re: [R] About available datasets on PC
Message-ID: <4CEB7E55.1090209 at dcn.davis.ca.us>
Content-Type: text/plain; charset=utf-8; format=flowed

You need to load the package and then load the data:

library(AER)
data("DJFranses")

then it will be available for you to work with.
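On the second part of the question, a sketch: data() already reports only datasets from *installed* packages, and its return value can be inspected directly rather than read off the screen.

```r
# Datasets shipped with installed packages (restricted here to "datasets"):
ds <- data(package = "datasets")
head(ds$results[, c("Package", "Item")])
```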

Stephen Liu wrote:
> Hi folks,
>
> Win7
>
>
> On running;
> data(package = .packages(all.available = TRUE))
>
> it displays a list of datasets under:-
> Data sets in package 'AER':
>
> But I couldn't call/load all of them on the list.
>
>   
>> DJFranses
>>     
> Error: object 'DJFranses' not found
>
>   
>> CreditCard
>>     
> Error: object 'CreditCard' not found
>
>
> But I can call/load;
>   
>> iris
>>     
>
> Whether the list shows all available datasets on repo, NOT the datasets
already 
> download/installed on PC?  If YES how to find the running/available
datasets on 
> PC?
>
>
> Packages -> Load Datasets
> the list looks different.
>
>
> Pls advise.  TIA
>
> B.R.
> Stephen L
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



------------------------------

Message: 158
Date: Tue, 23 Nov 2010 19:52:47 +1100
From: Jim Lemon <jim at bitwrit.com.au>
To: Noah Silverman <noah at smartmediacorp.com>
Cc: r-help <r-help at r-project.org>
Subject: Re: [R] More detail in chart axis?
Message-ID: <4CEB80DF.7090206 at bitwrit.com.au>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 11/23/2010 07:24 PM, Noah Silverman wrote:
> Hi,
>
> I have a series of data (about 80,000 pairs of x,y).
>
> Plotting it shows a great chart.  However, R has randomly chosen about 6
> labels for my x axis.  Now, clearly I can't show them all, but would
> like some control over the granularity of what is displayed.  I can't
> find anything in the documentation about controlling the axis data
> labels.  Is there a way?
>
> Alternately, is there a package that would allow me to zoom into an area
> of the chart so I can see more detail?
>
Hi Noah,
"axis" will let you choose the positions, but may leave some out if it 
thinks they are too close. "staxlab" (plotrix) allows you to specify 
positions and labels and can stagger or rotate the labels. zoomInPlot is 
only one way to show a section of the plot next to the original. 
zoomplot (TeachingDemos) allows you to zoom on the same plot. There are 
other functions that offer different zooming methods.

Jim
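A minimal base-graphics sketch of the first suggestion (made-up data): suppress the default axis, then place ticks yourself.

```r
x <- seq(0, 10, length.out = 200)
plot(x, sin(x), type = "l", xaxt = "n")  # draw without the default x axis
axis(1, at = seq(0, 10, by = 0.5))       # place ticks exactly where wanted
```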



------------------------------

Message: 159
Date: Tue, 23 Nov 2010 09:47:50 +0100
From: Petr PIKAL <petr.pikal at precheza.cz>
To: madr <madrazel at interia.pl>
Cc: r-help at r-project.org
Subject: Re: [R] how to get rid of unused space on all 4 borders in
	plot()	render
Message-ID:
	<OF5F77406D.9CCFFDAC-ONC12577E4.002F5706-C12577E4.00306366 at precheza.
cz>
	
Content-Type: text/plain; charset="US-ASCII"

Hi

r-help-bounces at r-project.org wrote on 21.11.2010 19:31:33:

> 
> I have looked into the par documentation, and the only setting for the
> size of the plot area was pin. But this setting makes the area
> inflexible, that is no

R intro manual

Chapter 12: Graphical procedures, page 71
A typical figure is....

>From that you can find that

pin
The current plot dimensions, (width,height), in inches.

mar
A numerical vector of the form c(bottom, left, top, right) which gives the 
number of lines of margin to be specified on the four sides of the plot. 
The default is c(5, 4, 4, 2) + 0.1.




So David's remark to look at the provided documentation seems to be
correct. R is not Tetris, but it comes with quite good documentation and
you should use it. It would help you a lot.

par(mar=c(2,1,1,1)+.1)

Regards
Petr


> matter how small or big I make the window, it stays the same. The
> default value has the advantage that, although the plot area it uses is
> always smaller than the device area, this area still changes with the
> window and can get bigger.
> -- 
> View this message in context: 
http://r.789695.n4.nabble.com/how-to-get-rid-of-
> unused-space-on-all-4-borders-in-plot-render-tp3052527p3052631.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 160
Date: Tue, 23 Nov 2010 09:56:25 +0100
From: Ivan Calandra <ivan.calandra at uni-hamburg.de>
To: r-help at r-project.org
Subject: Re: [R] how to loop through variables in R?
Message-ID: <4CEB81B9.8050901 at uni-hamburg.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi!

You haven't got any answer probably because you didn't provide a 
reproducible example. You can do it by copy/pasting the output of 
dput(d) or dput(df).
Moreover, before this email, I couldn't really understand what you were 
trying to do. It's not crystal clear now, but I think I got it.

First, when you read your txt, don't you already have the correct 
data.frame? What is the difference between d and df? It looks like your 
cbind() step is complicated. You can also index columns by their index 
numbers.
So let's say you want in df the columns 1 to 5 and 6 to 8 from d. You 
can do it like this:
sel <- c(1:5,6:8)  ## creates a vector with the columns indexes you want 
to have in df
df <- d[, sel]       ## extract these columns from d and assign it into 
[[elided Yahoo spam]]
You can also do it in one step of course:
df <- d[, c(1:5,6:8)]

Second, in your loop, you overwrite at each iteration the result from 
the previous one. You could do something like this:
result <- numeric(length(df))  ## shouldn't it be length(df)-1?
for (i in 1:(length(df)-1)) {
  result[i] <- chisq.test(table(df[[i]], df[[i+1]]))
  ## each computation will be stored in a different element of result
}


Next, chisq.test() returns a list, so it's not really a good idea to 
store the output in a vector.
Take a look at
str(chisq.test(table(df[[1]], df[[2]])))
to know which element(s) you want to keep.
You would probably want something like this:
chisq.test(table(df[[1]], df[[2]]))[1:3]

[[elided Yahoo spam]]
result <- vector(mode="list", length=length(df))
## create a list; shouldn't it here also be length(df)-1?
names(result) <- paste("chisq_df[[", 1:length(df), "]]_df[[",
                       (1:length(df))+1, "]]", sep="")
## that way your list is named, which makes it easier to remember
## what is what if you have lots of columns
for (i in 1:(length(df)-1)) {
  result[[i]] <- chisq.test(table(df[[i]], df[[i+1]]))[1:3]
  ## each computation will be stored in a different element of the list
}

Is it what you're looking for?
HTH,
Ivan



On 11/23/2010 03:11, watashi at post.com wrote:
>
> d<-read.table("D:\\Working\\Statics.txt")
>
> df<- cbind("Q1", "Q2", "Q3", "Q4", "Q5", "Q5A", "Q5B", "Q5C", "Q5D",
"Q5E", "Q5F", "Q5G", "Q6", "Q6A", "Q6B", "Q6C", "Q6D", "Q6E", "Q6F", "Q7",
"Q8", "Q9")
> #Than you can loop through them simply by doing:
> result<- numeric(length(df))
> for (i in 1:(length(df)-1)) {
>   result<- chisq.test(table(df[[i]], df[[i+1]]))
> }
>
> and then this error comes out:
>
> Error: unexpected '}' in "}"
>
>
> and how can I redirect the output of the chi-square test to a file instead
of console output?
>
>
> 	[[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

-- 
Ivan CALANDRA
PhD Student
University of Hamburg
Biozentrum Grindel und Zoologisches Museum
Abt. Säugetiere
Martin-Luther-King-Platz 3
D-20146 Hamburg, GERMANY
+49(0)40 42838 6231
ivan.calandra at uni-hamburg.de

**********
http://www.for771.uni-bonn.de
http://webapp5.rrz.uni-hamburg.de/mammals/eng/1525_8_1.php



------------------------------

Message: 161
Date: Tue, 23 Nov 2010 10:13:18 +0100
From: Petr PIKAL <petr.pikal at precheza.cz>
To: "Ni, Melody Zhifang" <z.ni at imperial.ac.uk>
Cc: "'r-help at r-project.org'" <r-help at r-project.org>
Subject: [R] Odp:  save a regression model that can be used later
Message-ID:
	<OF688B8342.0757530D-ONC12577E4.0032343E-C12577E4.0032B81A at precheza.
cz>
	
Content-Type: text/plain; charset="US-ASCII"

Hi
r-help-bounces at r-project.org napsal dne 22.11.2010 16:02:20:

> Hi everyone
> 
> I have a question about how to save a regression model in R and how to 
> retrieve it for making predictions in a new session.
> 
> To be more specific, I fitted a multilevel logistic regression model 
using the
> lmer  from the "lme4" package. I then successfully make predictions 
using 
> fitted(mymodel).
> 
> Since data are complex (three levels, nested, numerous categorical and 
> continuous data describing types of laparoscopic surgery), the computer 
takes 
> quite a while to fit the MLM model.  I wonder whether it's possible to 
save 
> the fitted model so that I don't have to fit it again for making 
predictions 
> every time I start a new R session.

When you quit an R session with the option save = Yes you get a file
.RData.  Whenever you start R from the directory containing this file you
get the saved environment back, together with your "mymodel".

I recommend creating a separate directory for each bigger task, in which
you can save your .Rhistory and .RData files without interfering with
other tasks.

Regards
Petr

> 
> I searched the mailing-list archive. Suggestions include using save () 
to save
> the model as "mymodel.rda" and then use load(mymodel.rda) into the 
workspace. 
> I tried without success (in Windows), returning the error message: 
"Error in 
> object$fitted : $ operator is invalid for atomic vectors"
> 
> Did I do anything wrong?  Any help on this topic is much appreciated

1.11 Data permanency and removing objects

from R-intro

Regards
Petr
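One likely cause of the reported error (a guess, since the failing code was not shown): load() restores the saved object under its *original* name and returns only that name as a character string, so `mymodel <- load("mymodel.rda")` leaves mymodel holding the string "mymodel", and fitted(mymodel) then fails with exactly the "$ operator is invalid for atomic vectors" message. A sketch with a stand-in lm fit:

```r
mymodel <- lm(dist ~ speed, data = cars)  # stand-in for the slow lmer fit
f <- tempfile(fileext = ".rda")
save(mymodel, file = f)                   # write the fitted model to disk

rm(mymodel)                               # simulate a fresh session
load(f)                    # restores the object *by its original name*
head(fitted(mymodel))      # predictions work again, no refit needed
```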


> 
> BW, Melody
> 
> --
> Dr Melody Ni
> Imperial College
> Department of Surgery and Cancer
> 10th floor, QEQM Building
> St. Mary's Hospital
> London W2 1NY
> Tel/Fax: +44 (0) 20 331 27657/26309
> z.ni at imperial.ac.uk<mailto:z.ni at imperial.ac.uk>
> 
>    [[alternative HTML version deleted]]
> 
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



------------------------------

Message: 162
Date: Tue, 23 Nov 2010 01:14:03 -0800
From: Jeff Newmiller <jdnewmil at dcn.davis.ca.us>
To: Dimitri Shvorob <dimitri.shvorob at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Lost in POSIX
Message-ID: <4CEB85DB.8010203 at dcn.davis.ca.us>
Content-Type: text/plain; charset=us-ascii; format=flowed

Dimitri Shvorob wrote:
>> Nor would I call this much of an improvement in clarity... what about
>>     
> "min"? You want to know the minimum?
>
> LOL. (And apologies for the insensitivity). Thank you for help, Jeff. This
> works, but I am still curious to see a solution based on "trunc", if
anyone
> can find it. 
>   
You mean like

trunc(df$t,units="mins")

?

See ?trunc.POSIXt for hints on arguments to "units" parameter...
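For instance (a minimal sketch):

```r
t0 <- as.POSIXct("2010-11-23 09:41:57", tz = "UTC")
trunc(t0, units = "mins")   # seconds dropped: 2010-11-23 09:41:00 UTC
```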



------------------------------

Message: 163
Date: Tue, 23 Nov 2010 01:21:29 -0800 (PST)

To: Jeff Newmiller <jdnewmil at dcn.davis.ca.us>
Cc: r-help at r-project.org
Subject: Re: [R] About available datasets on PC
Message-ID: <976004.93861.qm at web113205.mail.gq1.yahoo.com>
Content-Type: text/plain; charset=utf-8

Hi Jeff,

Tks for your advice.  I got it

B.R.
satimis



----- Original Message ----
From: Jeff Newmiller <jdnewmil at dcn.davis.ca.us>

Cc: r-help at r-project.org
Sent: Tue, November 23, 2010 4:41:57 PM
Subject: Re: [R] About available datasets on PC

You need to load the package and then load the data:

library(AER)
data("DJFranses")

then it will be available for you to work with.

Stephen Liu wrote:
> Hi folks,
>
> Win7
>
>
> On running;
> data(package = .packages(all.available = TRUE))
>
> it displays a list of datasets under:-
> Data sets in package 'AER':
>
> But I couldn't call/load all of them on the list.
>
>  
>> DJFranses
>>    
> Error: object 'DJFranses' not found
>
>  
>> CreditCard
>>    
> Error: object 'CreditCard' not found
>
>
> But I can call/load;
>  
>> iris
>>    
>
> Whether the list shows all available datasets on repo, NOT the datasets
already 
>
> download/installed on PC?  If YES how to find the running/available
datasets on 
>
> PC?
>
>
> Packages -> Load Datasets
> the list looks different.
>
>
> Pls advise.  TIA
>
> B.R.
> Stephen L
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>  





------------------------------

Message: 164
Date: Tue, 23 Nov 2010 12:16:36 +0200
From: Tal Galili <tal.galili at gmail.com>
To: gireesh bogu <girishbogu at gmail.com>
Cc: R-help at r-project.org
Subject: Re: [R] Calculating correlation
Message-ID:
	<AANLkTi=0fzzzrZ5kCf6thcVyDSzm2+2c=qEGr_pW1rVZ at mail.gmail.com>
Content-Type: text/plain

Hi Gireesh,
I seem to be missing something.
Could you give me the formula you think should be used to calculate this?
I don't get how you wish to get a "correlation" for only one pair of
numbers.
(or maybe I didn't understand what you explained - please try a simpler
example)


----------------Contact
Details:-------------------------------------------------------
Contact me: Tal.Galili at gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
----------------------------------------------------------------------------
------------------




On Tue, Nov 23, 2010 at 10:03 AM, gireesh bogu <girishbogu at gmail.com> wrote:

> in one output
>
> x - S1 i.e. 112 with all combinations ex:
>
>
> 112 vs 3 [ x-S1 vs a -S1]
>
> 112 vs 4 [ x-S1 vs a- S2 ]
>
> 112 vs 45
>
> 112 vs 34
>
> 112 vs 23
>
> 112 vs 112
>
> .
>
> ..
>
> 112 vs 1
>
>
> in second output
>
> x - S2 i.e. 0 with all . ex:
>
> 0 vs 3
>
> 0 vs 4
>
> 
>
> 
>
> 0 vs 1
>
>
> in next output
>
> x-S3 i.e. 12 with all  and so on
>
>
> Probably in the given input I should get 5 outputs because of 5 samples
> S1,2,3,4,n.
>
> If they are more like 69 or some thing then I should get 69 outputs or
> everything in one output if possible.
>
>
> Please let me know if it is still confusing.
>
>
> thanx
>
> On Tue, Nov 23, 2010 at 3:35 PM, Tal Galili <tal.galili at gmail.com> wrote:
>
>> Hi there,
>> I'm not sure I understand your question.
>>
>> What are the two vectors you wish to check their correlation?
>> Are they the two rows x and a?
>> Because from your example it seems you are trying to do a correlation
>> between two singular numbers (so probably I didn't get something
straight).
>>
>> Tal
>>
>>
>>
>> ----------------Contact
>> Details:-------------------------------------------------------
>> Contact me: Tal.Galili at gmail.com |  972-52-7275845
>> Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
>> www.r-statistics.com (English)
>>
>>
----------------------------------------------------------------------------
------------------
>>
>>
>>
>>
>> On Tue, Nov 23, 2010 at 8:57 AM, gireesh bogu
<girishbogu at gmail.com>wrote:
>>
>>> Hi guys
>>>
>>> I have an input file with multiple columns and rows.
>>> Is it possible to calculate correlation of certain value of certain No
>>> (For
>>> example x of S1 = 112) with all other values (for example start with x
>>> 112
>>> corr a 3 of S1 = x-a 0.2 )
>>>
>>> INPUT
>>> *******
>>>
>>> No  S1  S2  S3  S4  Sn
>>> a    3     4     45  34   23
>>> x   112   0    12   23   0
>>> b    0     1     23   12   1
>>> n    0     1     0      1    1
>>>
>>> OUTPUT
>>> ***********
>>>
>>> No  S1  S2  S3  S4  Sn
>>> x-a  0.2   0.3 ...............
>>> x-x  1     1  ................
>>> x-b  0..........................
>>> x-n  0.9 .......................
>>>
>>>        [[alternative HTML version deleted]]
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>>
>
>
> --
> Regards
>
> Gireesh
> #HP#8375-9256
>

	[[alternative HTML version deleted]]



------------------------------

_______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

End of R-help Digest, Vol 93, Issue 23



More information about the R-help mailing list