[R] nominal response model
Mauricio Romero
mauricio.romero at quantil.com.co
Fri Oct 15 18:21:32 CEST 2010
Is there a way to estimate a nominal response model?
To be more specific, let's say I want to calibrate:
\pi_{v}(\theta_j) = \frac{e^{\xi_{v} + \lambda_{v}\theta_j}}{\sum_{h=1}^{m} e^{\xi_{h} + \lambda_{h}\theta_j}}
where $\theta_j$ is the dependent variable and I need to estimate
$\xi_{h}$ and $\lambda_{h}$ for $h \in \{1, \ldots, m\}$.
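To illustrate, a minimal sketch on simulated data (not something I have validated): when $\theta_j$ is observed for every respondent, the model above is an ordinary multinomial logit, which nnet::multinom() can estimate under the usual identification constraint $\xi_1 = \lambda_1 = 0$.

```r
## Sketch with simulated data: fit the multinomial logit
## pi_v(theta_j) = exp(xi_v + lambda_v*theta_j) / sum_h exp(xi_h + lambda_h*theta_j)
## with nnet::multinom(), taking category 1 as the reference (xi_1 = lambda_1 = 0).
library(nnet)

set.seed(1)
n      <- 5000
theta  <- rnorm(n)                  # observed theta_j
xi     <- c(0, 0.5, -0.5)           # true intercepts (category 1 fixed at 0)
lambda <- c(0, 1.0,  2.0)           # true slopes     (category 1 fixed at 0)

eta <- sweep(outer(theta, lambda), 2, xi, "+")   # eta[j, v] = xi_v + lambda_v * theta_j
p   <- exp(eta) / rowSums(exp(eta))              # pi_v(theta_j)
y   <- apply(p, 1, function(pr) sample.int(3, 1, prob = pr))

fit <- multinom(factor(y) ~ theta, trace = FALSE)
coef(fit)  # rows give (xi_h, lambda_h) for h = 2, 3 relative to category 1
```

When $\theta_j$ is itself latent, this becomes Bock's (1972) nominal response model from item response theory, and IRT software (for example the mirt package's itemtype = "nominal", if I recall correctly) would be the place to look.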
Thank you,
Mauricio Romero
Quantil S.A.S.
Cel: 3112231150
www.quantil.com.co
"It is from the earth that we must find our substance; it is on the earth
that we must find solutions to the problems that promise to destroy all life
here"
-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On
Behalf Of r-help-request at r-project.org
Sent: Thursday, 14 October 2010 05:00 a.m.
To: r-help at r-project.org
Subject: R-help Digest, Vol 92, Issue 14
Send R-help mailing list submissions to
r-help at r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
r-help-request at r-project.org
You can reach the person managing the list at
r-help-owner at r-project.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."
Today's Topics:
1. Re: robust standard errors for panel data (Achim Zeileis)
2. Re: Bootstrapping Krippendorff's alpha coefficient (Jim Lemon)
3. Re: LME with 2 factors with 3 levels each (Ista Zahn)
4. Re: vertical kites in KiteChart (plotrix) (elpape)
5. exponentiate elements of a whole matrix (Maas James Dr (MED))
6. Re: exponentiate elements of a whole matrix (Barry Rowlingson)
7. Re: exponentiate elements of a whole matrix (Berwin A Turlach)
8. How to fix error in the package 'rgenoud' (Wonsang You)
9. Re: vertical kites in KiteChart (plotrix) (Jim Lemon)
10. Re: vertical kites in KiteChart (plotrix) (Michael Bedward)
11. Re: Lattice: arbitrary abline in multiple histograms
(Dennis Murphy)
12. Re: compare histograms (Dennis Murphy)
13. Re: Data Gaps (Dennis Murphy)
14. Date Time Objects (dpender)
15. Re: compare histograms (Michael Bedward)
16. Re: Lattice: arbitrary abline in multiple histograms (Alejo C.S.)
17. arima (nuncio m)
18. Re: vertical kites in KiteChart (plotrix) (elpape)
19. Re: LME with 2 factors with 3 levels each (Dennis Murphy)
20. Re: Data Gaps (dpender)
21. nnet help (Raji)
22. Re: How to fix error in the package 'rgenoud' (Wonsang You)
23. Nonparametric MANCOVA using matrices (Niccolò Bassani)
24. RODBC: forcing a special column to be read in as character
(RINNER Heinrich)
25. Re: How to fix error in the package 'rgenoud' (Wonsang You)
26. Re: Date Time Objects (Henrique Dallazuanna)
27. repeating a GAM across many factors (oc1)
28. Problem to create a matrix polynomial (Ron_M)
29. Re: Date Time Objects (dpender)
30. bootstrap in pROC package (zhu yao)
31. Re: Date Time Objects (Henrique Dallazuanna)
32. Re: Boxplot has only one whisker (David Winsemius)
33. Re: Date Time Objects (dpender)
34. "Memory not mapped" when using .C, problem in Mac but not in
Linux (David)
35. R: robust standard errors for panel data (Millo Giovanni)
36. Re: Data Gaps (Dennis Murphy)
37. bwplot change whiskers position to percentile 5 and P95
(Christophe Bouffioux)
38. NA with lmList (FMH)
39. Re: NA with lmList (Phil Spector)
40. strip month and year from MM/DD/YYYY format (Kurt_Helf at nps.gov)
41. Re: Pipeline pilot fingerprint package (Rajarshi Guha)
42. Re: R: robust standard errors for panel data
(max.e.brown at gmail.com)
43. Re: bwplot change whiskers position to percentile 5 and P95
(David Winsemius)
44. interaction contrasts (Kay Cichini)
45. Re: extract rows of a matrix (Greg Snow)
46. overlaying multiple contour plots (Jeremy Olson)
47. Re: Read Particular Cells within Excel (Greg Snow)
48. Pasting function arguments and strings (Manta)
49. Re: Plot table as table (Greg Snow)
50. Re: overlaying multiple contour plots (David Winsemius)
51. Re: Pasting function arguments and strings (Bert Gunter)
52. Re: Pasting function arguments and strings (Manta)
53. Re: Plotting Y axis labels within a loop (Steve Swope)
54. Re: Pasting function arguments and strings (Peter Langfelder)
55. Re: Pasting function arguments and strings (William Dunlap)
56. Re: "Memory not mapped" when using .C, problem in Mac but not
in Linux (William Dunlap)
57. Extracting index in character array. (lord12)
58. Re: Extracting index in character array. (Christian Raschke)
59. [OT] (slightly) - OpenOffice Calc and text files
(Schwab,Wilhelm K)
60. Re: Read Particular Cells within Excel (Henrique Dallazuanna)
61. Re: bootstrap in pROC package (Łukasz Ręcławowicz)
62. Re: [OT] (slightly) - OpenOffice Calc and text files (Albyn Jones)
63. Re: [OT] (slightly) - OpenOffice Calc and text files
(Peter Langfelder)
64. Re: [OT] (slightly) - OpenOffice Calc and text files
(Marc Schwartz)
65. Re: [OT] (slightly) - OpenOffice Calc and text files
(Joshua Wiley)
66. Re: [OT] (slightly) - OpenOffice Calc and text files
(Schwab,Wilhelm K)
67. Re: Pipeline pilot fingerprint package (Eric Hu)
68. Re: [OT] (slightly) - OpenOffice Calc and text files
(Schwab,Wilhelm K)
69. Coin Toss Simulation (Shiv)
70. Re: "Memory not mapped" when using .C, problem in Mac but not
in Linux (David)
71. loop (Julia Lira)
72. Regular expression to find value between brackets (Bart Joosen)
73. Re: Coin Toss Simulation (Dimitris Rizopoulos)
74. Re: Coin Toss Simulation (Erik Iverson)
75. Change global env variables from within a function (Jon Zadra)
76. Re: Change global env variables from within a function
(Duncan Murdoch)
77. Re: "Memory not mapped" when using .C, problem in Mac but not
in Linux (William Dunlap)
78. Re: [OT] (slightly) - OpenOffice Calc and text files
(Schwab,Wilhelm K)
79. Re: Regular expression to find value between brackets
(Henrique Dallazuanna)
80. (no subject) (Julia Lira)
81. Re: "Memory not mapped" when using .C, problem in Mac but not
in Linux (Berend Hasselman)
82. loop (Julia Lira)
83. Re: Regular expression to find value between brackets
(Erik Iverson)
84. Re: Regular expression to find value between brackets
(Bert Gunter)
85. Re: Change global env variables from within a function
(Erik Iverson)
86. Re: [OT] (slightly) - OpenOffice Calc and text files
(David Winsemius)
87. Re: loop (Erik Iverson)
88. Re: Regular expression to find value between brackets
(Bert Gunter)
89. Re: Regular expression to find value between brackets
(Matt Shotwell)
90. Re: Nonlinear Regression Parameter Shared Across Multiple
Data Sets (Jared Blashka)
91. Re: loop (Julia Lira)
92. Re: Change global env variables from within a function (Greg Snow)
93. Re: loop (Phil Spector)
94. Re: [OT] (slightly) - OpenOffice Calc and text files
(Charles C. Berry)
95. vectorizing: selecting one record per group (Mauricio Romero)
96. Re: Change global env variables from within a function
(David Winsemius)
97. Matrix subscripting to wrap around from end to start of row
(Alisa Wade)
98. Re: Regular expression to find value between brackets
(Gabor Grothendieck)
99. Re: vectorizing: selecting one record per group (Erik Iverson)
100. Re: Coin Toss Simulation (Shiv)
101. Re: [OT] (slightly) - OpenOffice Calc and text files
(Mike Marchywka)
102. Re: strip month and year from MM/DD/YYYY format
(Gabor Grothendieck)
103. Re: vectorizing: selecting one record per group
(Henrique Dallazuanna)
104. Re: Matrix subscripting to wrap around from end to start of
row (David Winsemius)
105. Re: [OT] (slightly) - OpenOffice Calc and text files
(Schwab,Wilhelm K)
106. Re: vectorizing: selecting one record per group (David Winsemius)
107. adding a named column to a Matrix (Alison Callahan)
108. Poisson Regression (Antonio Paredes)
109. Re: Matrix subscripting to wrap around from end to start of
row (Alisa Wade)
110. Re: adding a named column to a Matrix (Henrique Dallazuanna)
111. Re: adding a named column to a Matrix (David Winsemius)
112. Re: Matrix subscripting to wrap around from end to start of
row (David Winsemius)
113. Loop in columns by group (Julia Lira)
114. type II & III test for mixed model (array chip)
115. drilling down data on charts
(sachinthaka.abeywardana at allianz.com.au)
116. Re: drilling down data on charts
(sachinthaka.abeywardana at allianz.com.au)
117. Re: merging and working with BIG data sets. Is sqldf the
best way?? (Gabor Grothendieck)
118. Re: Poisson Regression (David Winsemius)
119. Re: repeating an analysis (Andrew Halford)
120. Re: Poisson Regression (Bill.Venables at csiro.au)
121. Re: Poisson Regression (Charles C. Berry)
122. Re: compare histograms (Michael Bedward)
123. Re: Matrix subscripting to wrap around from end to start of
row (Dennis Murphy)
124. Re: nnet help (Raji)
125. Re: Loop in columns by group (David Winsemius)
126. Basic data question (Santosh Srinivas)
127. Re: Basic data question (David Winsemius)
128. Plotting by Group (Adrian Hordyk)
129. several car scatterplots on one graph (cryan at binghamton.edu)
130. The width argument of stem() (Marcin Kozak)
131. Drop matching lines from readLines (Santosh Srinivas)
132. Re: Drop matching lines from readLines (Santosh Srinivas)
133. Re: Plotting by Group (Dieter Menne)
134. R and Oracle (siddharth.garg85 at gmail.com)
135. GridR error (Elizabeth Purdom)
136. Re: Boxplot has only one whisker (tom)
137. Replacing N.A values in a data frame (Santosh Srinivas)
138. robust standard errors for panel data - corrigendum
(Millo Giovanni)
139. Adding legend to lda-plot, using the MASS-package
(Arve Lynghammar)
140. spatial partition (ogbos okike)
141. Re: Data Gaps (dpender)
142. Re: (no subject) (Michael Bedward)
143. Re: Data Gaps (Dennis Murphy)
144. rounding issues (Federico Calboli)
145. Re: spatial partition (Michael Bedward)
----------------------------------------------------------------------
Message: 1
Date: Wed, 13 Oct 2010 12:06:21 +0200 (CEST)
From: Achim Zeileis <Achim.Zeileis at uibk.ac.at>
To: Max Brown <max.e.brown at gmail.com>
Cc: Giovanni_Millo at Generali.com, r-help at stat.math.ethz.ch
Subject: Re: [R] robust standard errors for panel data
Message-ID: <alpine.DEB.2.00.1010131157470.23222 at paninaro.uibk.ac.at>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
On Wed, 13 Oct 2010, Max Brown wrote:
> Hi,
>
> I would like to estimate a panel model (small N large T, fixed effects),
> but would need "robust" standard errors for that. In particular, I am
> worried about potential serial correlation for a given individual (not so
> much about correlation in the cross section).
>
> From the documentation, it looks as if the vcovHC that comes with plm
> does not seem to do autocorrelation,
My understanding is that it does, in fact. The details say
Observations may be clustered by '"group"' ('"time"') to account
for serial (cross-sectional) correlation.
Thus, the default appears to be to account for serial correlation anyway.
But I'm not an expert in panel-versions of these robust covariances. Yves
and Giovanni might be able to say more.
> and the NeweyWest in the sandwich
> package says that it expects a fitted model of type "lm" or "glm" (it
> says nothing about "plm").
That information in the "sandwich" package is outdated - prompted by your
email I've just fixed the manual page in the development version.
In principle, everything in "sandwich" is object-oriented now, see
vignette("sandwich-OOP", package = "sandwich")
However, the methods within "sandwich" are only sensible for
cross-sectional data (vcovHC, sandwich, ...) or time series data (vcovHAC,
NeweyWest, kernHAC, ...). There is not yet explicit support for panel
data.
hth,
Z
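For concreteness, a small sketch of the clustered covariance discussed above, using the Grunfeld data shipped with plm (the argument choices here are illustrative, not a recommendation from the original message):

```r
## Robust (cluster-by-group) standard errors for a fixed-effects plm fit,
## using plm's own vcovHC method together with lmtest::coeftest.
library(plm)
library(lmtest)

data("Grunfeld", package = "plm")
fe <- plm(inv ~ value + capital, data = Grunfeld, model = "within")

## Arellano-type covariance, clustered by group: robust to heteroskedasticity
## and to arbitrary serial correlation within each firm.
coeftest(fe, vcov = vcovHC(fe, method = "arellano", cluster = "group"))
```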
> How can I estimate the model and get robust standard errors?
>
> Thanks for your help.
>
> Max
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 2
Date: Wed, 13 Oct 2010 21:20:03 +1100
From: Jim Lemon <jim at bitwrit.com.au>
To: Łukasz Ręcławowicz <lukasz.reclawowicz at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Bootstrapping Krippendorff's alpha coefficient
Message-ID: <4CB587D3.9050504 at bitwrit.com.au>
Content-Type: text/plain; charset=ISO-8859-2; format=flowed
On 10/12/2010 08:58 PM, Łukasz Ręcławowicz wrote:
> Hi,
>
> I don't know how to sample such data, it can't be done by row sampling
> as default method on matrix in boot.
> Function takes matrix and returns single coefficient.
>
> #There is a macro but I want use R :)
> http://www.comm.ohio-state.edu/ahayes/SPSS%20programs/kalphav2_1.SPS
> library(concord)
> library(boot)
> # The data are rates among observers with NA's
> nmm<-matrix(c(1,1,NA,1,2,2,3,2,3,3,3,3,3,3,3,3,2,2,2,2,1,2,3,4,4,4,4,4,
> + 1,1,2,1,2,2,2,2,NA,5,5,5,NA,NA,1,1,NA,NA,3,NA),nrow=4)
>
> sample.rates<-function(matrix.data,i){
> #mixed.rates<-sample individual rates and put back in new matrix (?)
> return(kripp.alpha(mixed.rates)$statistic[i])
> }
> to.get<-boot(nmm, sample.rates, R=1e4, stype="i")
>
Hi Lukasz,
First, switch to the kripp.alpha function in the irr package. concord is
no longer maintained. The SPSS code would take some time to decipher and
translate into R, so I'll see if I can locate the algorithm. Professor
Krippendorff once wrote to me how he did it, so it must be available
somewhere.
Jim
------------------------------
Message: 3
Date: Wed, 13 Oct 2010 06:28:52 -0400
From: Ista Zahn <izahn at psych.rochester.edu>
To: Laura Halderman <lkh11 at pitt.edu>
Cc: r-help at r-project.org
Subject: Re: [R] LME with 2 factors with 3 levels each
Message-ID:
<AANLkTik4jCY3urY2kFMtL9=CNqhsT58M540cMcRp4Q=x at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi Laura,
If you want ANOVA output, ask for it! A general strategy that almost
always works in R is to fit 2 models, one without the term(s) you want
to test, and one with. Then use the anova() function to test them.
(models must be nested, and in the lmer() case you need to use REML =
FALSE).
So, try something like this:
m1 <- lmer(PTR ~ Test + Group + (1 | student), data=ptr)
m2 <- lmer(PTR ~ Test * Group + (1 | student), data=ptr)
anova(m1, m2)
Best,
Ista
On Tue, Oct 12, 2010 at 11:59 PM, Laura Halderman <lkh11 at pitt.edu> wrote:
> Hello. I am new to R and new to linear mixed effects modeling. I am
> trying to model some data which has two factors. Each factor has three
> levels rather than continuous data. Specifically, we measured speech at
> Test 1, Test 2 and Test 3. We also had three groups of subjects: RepTP,
> RepNTP and NoRepNTP.
>
> I am having a really hard time interpreting this data since all the
> examples I have seen in the book I am using (Baayen, 2008) either have
> continuous variables or factors with only two levels. What I find
> particularly confusing are the interaction terms in the output. The
> output doesn't present the full interaction (3 x 3) as I would expect
> with an ANOVA. Instead, it only presents an interaction term for one
> Test and one Group, presumably comparing it to the reference Test and
> reference Group. Therefore, it is hard to know what to do with the
> interactions that aren't significant. In the book, non-significant
> interactions are dropped from the model. However, in my model, I'm only
> ever seeing the 2 x 2 interactions, not the full 3 x 3 interaction, so
> it's not clear what I should do when only two levels of group and two
> levels of test interact but the third group doesn't.
>
> If anyone can assist me in interpreting the output, I would really
> appreciate it. I may be trying to interpret it too much like an ANOVA
> where you would be looking for main effects of Test (was there
> improvement from Test 1 to Test 2), main effects of Group (was one of
> the Groups better than the other) and the interactions of the two
> factors (did one Group improve more than another Group from Test 1 to
> Test 2, for example). I guess another question to pose here is, is it
> pointless to do an LME analysis with more than two levels of a factor?
> Is it too much like trying to do an ANOVA? Alternatively, it's possible
> that what I'm doing is acceptable, I'm just not able to interpret it
> correctly.
>
> I have provided output from my model to hopefully illustrate my
> question. I'm happy to provide additional information/output if someone
> is interested in helping me with this problem.
>
> Thank you,
> Laura
>
> Linear mixed model fit by REML
> Formula: PTR ~ Test * Group + (1 | student)
>    Data: ptr
>     AIC     BIC  logLik  deviance  REMLdev
>  -625.7  -559.8   323.9    -706.5   -647.7
> Random effects:
>  Groups   Name        Variance   Std.Dev.
>  student  (Intercept) 0.0010119  0.03181
>  Residual             0.0457782  0.21396
> Number of obs: 2952, groups: studentID, 20
>
> Fixed effects:
>                          Estimate  Std. Error  t value
> (Intercept)              0.547962    0.016476    33.26
> Testtest2               -0.007263    0.015889    -0.46
> Testtest1               -0.050653    0.016305    -3.11
> GroupNoRepNTP            0.008065    0.022675     0.36
> GroupRepNTP             -0.018314    0.025483    -0.72
> Testtest2:GroupNoRepNTP  0.006073    0.021936     0.28
> Testtest1:GroupNoRepNTP  0.013901    0.022613     0.61
> Testtest2:GroupRepNTP    0.046684    0.024995     1.87
> Testtest1:GroupRepNTP    0.039994    0.025181     1.59
>
> Note: The reference level for Test is Test3. The reference level for
> Group is RepTP. The interaction p value (after running pvals.fnc with
> the MCMC) for Testtest2:GroupRepNTP is p = .062 which I'm willing to
> accept and interpret since speech data with English Language Learners is
> particularly variable.
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org
------------------------------
Message: 4
Date: Wed, 13 Oct 2010 03:40:19 -0700 (PDT)
From: elpape <ellen.pape at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] vertical kites in KiteChart (plotrix)
Message-ID:
<AANLkTinCrrQTLe2WuaLHWhMUTewF5qJnPisGdPfDJf=X at mail.gmail.com>
Content-Type: text/plain
Hello,
I've attached the result I've got by applying your code (thanks for this!),
but I seem to have horizontal kites instead of vertical kites. I need to
rotate the entire graph, so to speak.
I've tried using viewpoint (found it on some forum), but this only seems to
work with lattice plots...
Do you know a solution to this last problem?
Thanks,
Ellen
On 13 October 2010 10:54, mbedward [via R] wrote:
> Oops, sorry, I left out a step in that last post
>
> After replace NAs with 0 in Xwide...
>
> # use distance col as row names
> rownames( Xwide ) <- Xwide[ , 1 ]
> Xwide <- Xwide[ , -1 ]
>
> kiteChart( Xwide )
>
>
> On 13 October 2010 19:49, Michael Bedward wrote:
>
> > Hello Ellen,
> >
> > First up I think you can use reshape to get your data into a form that
> > kiteChart will work with...
> >
> > # assuming your matrix or data.frame is called X
> > Xwide <- reshape(X, timevar="depth", idvar="distance", direction="wide")
> >
> > # replace NAs with 0 (don't think kiteChart likes NA)
> > Xwide[ is.na(Xwide) ] <- 0
> >
> > kiteChart(Xwide)
> >
> > I haven't considered all of your plot requirements here but hopefully
> > this will get you started.
> >
> > Michael
> >
> >
> >
> >
> > On 13 October 2010 19:11, elpape wrote:
> >>
> >> Dear everyone,
> >>
> >> I would like to create a kite chart in which I plot densities (width of
> the
> >> vertical kites) in relation to sediment depth (on reversed y-axis) for
6
>
> >> different locations (Distances from seep site, on x-axis on top of the
> >> plot). The dataset I would like to use is:
> >>
> >>
> >> Distance_from_seep_site Sedimentdepth Density
> >> 1100 0 107.8
> >> 1100 1 264.6
> >> 1100 2 284.2
> >> 1100 3 470.4
> >> 1100 4 58.8
> >> 100 0 98
> >> 100 1 176.4
> >> 100 2 499.8
> >> 100 3 548.8
> >> 100 4 401.8
> >> 100 5 107.8
> >> 10 0 51.3
> >> 10 1 22.8
> >> 10 2 79.8
> >> 10 3 68.4
> >> 10 4 17.1
> >> 10 5 5.7
> >> 10 6 17.1
> >> 5 0 188.1
> >> 5 1 267.9
> >> 5 2 376.2
> >> 5 3 233.7
> >> 5 4 165.3
> >> 5 8 5.7
> >> 5 9 5.7
> >> 2 0 74.1
> >> 2 1 102.6
> >> 2 2 85.5
> >> 2 3 91.2
> >> 2 4 34.2
> >> 2 5 5.7
> >> 2 6 11.4
> >> 2 8 11.4
> >> 2 10 28.5
> >> 2 11 22.8
> >> 0 0 461.7
> >> 0 1 273.6
> >> 0 2 79.8
> >> 0 3 68.4
> >> 0 4 34.2
> >> 0 5 22.8
> >> 0 6 51.3
> >> 0 8 68.4
> >> 0 9 39.9
> >> 0 11 22.8
> >>
> >> I have tried to rearrange the data, but I do not seem to get the output
> that
> >> I desire...
> >>
> >> Can anyone help me?
> >>
> >> Thank you so much!
> >> Ellen
> >>
> >>
> >>
> >> --
> >> View this message in context:
> >> http://r.789695.n4.nabble.com/vertical-kites-in-KiteChart-plotrix-tp2993295p2993295.html
> >> Sent from the R help mailing list archive at Nabble.com.
> >>
> >> ______________________________________________
> >> R-help at r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide
> >> http://www.R-project.org/posting-guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >>
> >
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
View this message in context:
http://r.789695.n4.nabble.com/vertical-kites-in-KiteChart-plotrix-tp2993295p2993473.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 5
Date: Wed, 13 Oct 2010 11:51:39 +0100
From: "Maas James Dr (MED)" <J.Maas at uea.ac.uk>
To: "r-help at r-project.org" <r-help at r-project.org>
Subject: [R] exponentiate elements of a whole matrix
Message-ID:
<9C2B89830110BF4A845878D9A31F3D9255D57ECA54 at UEAEXCHMBX.UEA.AC.UK>
Content-Type: text/plain; charset="us-ascii"
I've tried hard to find a way to exponentiate each element of a whole matrix
such that if I start with A
A = [ 2 3
2 4]
I can get back B
B = [ 7.38 20.08
7.38 54.60]
I've tried
B <- exp(A) but no luck.
Thanks
J
===============================
Dr. Jim Maas
University of East Anglia
------------------------------
Message: 6
Date: Wed, 13 Oct 2010 11:57:37 +0100
From: Barry Rowlingson <b.rowlingson at lancaster.ac.uk>
To: "Maas James Dr (MED)" <J.Maas at uea.ac.uk>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] exponentiate elements of a whole matrix
Message-ID:
<AANLkTinaPe9XonWjuuf5Mcu-2t32ufXM_8NUHcvOkW5T at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Wed, Oct 13, 2010 at 11:51 AM, Maas James Dr (MED) <J.Maas at uea.ac.uk>
wrote:
> I've tried hard to find a way to exponentiate each element of a whole
matrix such that if I start with A
>
> A = [ 2   3
>       2   4]
>
> I can get back B
>
> B = [ 7.38   20.08
>       7.38   54.60]
>
> I've tried
>
> B <- exp(A) but no luck.
Your matrix notation looks unlike R. We prefer cut n paste examples
here. In which case:
> A=matrix(1:4,2,2)
> A
[,1] [,2]
[1,] 1 3
[2,] 2 4
> exp(A)
[,1] [,2]
[1,] 2.718282 20.08554
[2,] 7.389056 54.59815
> B=exp(A)
> B
[,1] [,2]
[1,] 2.718282 20.08554
[2,] 7.389056 54.59815
so, works for me (as I expected it would).
Either you've redefined the exp function or you've accidentally
started matlab instead.
Barry
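An aside not raised in the thread, but worth noting since the confusion often comes from MATLAB: exp() in R is always elementwise. The matrix exponential e^A = I + A + A^2/2! + ... is a different operation, available for instance in the Matrix package:

```r
## exp() applies elementwise; Matrix::expm() computes the matrix exponential
## e^A = I + A + A^2/2! + ..., which is in general a different matrix.
library(Matrix)

A <- matrix(c(2, 2, 3, 4), 2, 2)
exp(A)           # elementwise, as the original poster wanted
expm(Matrix(A))  # matrix exponential -- not the same thing
```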
------------------------------
Message: 7
Date: Wed, 13 Oct 2010 19:02:13 +0800
From: Berwin A Turlach <berwin at maths.uwa.edu.au>
To: "Maas James Dr (MED)" <J.Maas at uea.ac.uk>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] exponentiate elements of a whole matrix
Message-ID: <20101013190213.44a5a53a at goodenia>
Content-Type: text/plain; charset=US-ASCII
On Wed, 13 Oct 2010 11:51:39 +0100
"Maas James Dr (MED)" <J.Maas at uea.ac.uk> wrote:
> I've tried hard to find a way to exponentiate each element of a whole
> matrix such that if I start with A
>
> A = [ 2 3
> 2 4]
>
> I can get back B
>
> B = [ 7.38 20.08
> 7.38 54.60]
>
> I've tried
>
> B <- exp(A) but no luck.
What have you tried exactly? And with which version? This should work
with all R versions that I am familiar with, e.g.:
R> A <- matrix(c(2,2,3,4),2,2)
R> A
[,1] [,2]
[1,] 2 3
[2,] 2 4
R> B <- exp(A)
R> B
[,1] [,2]
[1,] 7.389056 20.08554
[2,] 7.389056 54.59815
Cheers,
Berwin
========================== Full address ============================
Berwin A Turlach Tel.: +61 (8) 6488 3338 (secr)
School of Maths and Stats (M019) +61 (8) 6488 3383 (self)
The University of Western Australia FAX : +61 (8) 6488 1028
35 Stirling Highway
Crawley WA 6009 e-mail: berwin at maths.uwa.edu.au
Australia http://www.maths.uwa.edu.au/~berwin
------------------------------
Message: 8
Date: Wed, 13 Oct 2010 04:03:45 -0700 (PDT)
From: Wonsang You <you at ifn-magdeburg.de>
To: r-help at r-project.org
Subject: [R] How to fix error in the package 'rgenoud'
Message-ID:
<AANLkTimd8d_xjq=7xJJZRmfZwiVVAUuUvjoh4En6xJFM at mail.gmail.com>
Content-Type: text/plain
Dear R user fellows,
I would like to ask you about the package 'rgenoud' which is a genetic
optimization tool.
I ran the function 'genoud' with two variables to be minimized by the
following command.
result<-genoud(fn,nvars=2,starting.values=c(0.5,0),
pop.size=1000, max.generations=10, wait.generations=3)
Then, I had the following error message.
Error in solve.default(Djl) :
system is computationally singular: reciprocal condition number = 0
Can anyone give me some tip on how to fix the problem? Thank you for your
great help in advance.
Best Regards,
Wonsang You
-----
--
Wonsang You
Special Lab Non-Invasive Brain Imaging
Leibniz Institute for Neurobiology
http://www.ifn-magdeburg.de
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-fix-error-in-the-package-rgenoud-tp2993489p2993489.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 9
Date: Wed, 13 Oct 2010 22:19:08 +1100
From: Jim Lemon <jim at bitwrit.com.au>
To: elpape <ellen.pape at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] vertical kites in KiteChart (plotrix)
Message-ID: <4CB595AC.4090006 at bitwrit.com.au>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 10/13/2010 07:11 PM, elpape wrote:
>
> Dear everyone,
>
> I would like to create a kite chart in which I plot densities (width of
the
> vertical kites) in relation to sediment depth (on reversed y-axis) for 6
> different locations (Distances from seep site, on x-axis on top of the
> plot). The dataset I would like to use is:
>...
> I have tried to rearrange the data, but I do not seem to get the output
that
> I desire...
>
Hi Ellen,
It looks like I will have to do a bit of rewriting of the kiteChart
function. You can't just rearrange the data. I can probably get to it
this weekend. Thanks for pointing out the problem.
Jim
------------------------------
Message: 10
Date: Wed, 13 Oct 2010 22:21:56 +1100
From: Michael Bedward <michael.bedward at gmail.com>
To: Jim Lemon <jim at bitwrit.com.au>
Cc: r-help at r-project.org
Subject: Re: [R] vertical kites in KiteChart (plotrix)
Message-ID:
<AANLkTinScnX-NnOoqJXwFJTdyNN6nHLOrdzcdMr87SXZ at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Super ! An option for vertical plotting would be very nice.
Michael
On 13 October 2010 22:19, Jim Lemon <jim at bitwrit.com.au> wrote:
> On 10/13/2010 07:11 PM, elpape wrote:
>>
>> Dear everyone,
>>
>> I would like to create a kite chart in which I plot densities (width of
>> the
>> vertical kites) in relation to sediment depth (on reversed y-axis) for 6
>> different locations (Distances from seep site, on x-axis on top of the
>> plot). The dataset I would like to use is:
>> ...
>> I have tried to rearrange the data, but I do not seem to get the output
>> that
>> I desire...
>>
> Hi Ellen,
> It looks like I will have to do a bit of rewriting of the kiteChart
> function. You can't just rearrange the data. I can probably get to it this
> weekend. Thanks for pointing out the problem.
>
> Jim
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 11
Date: Wed, 13 Oct 2010 04:28:47 -0700
From: Dennis Murphy <djmuser at gmail.com>
To: David Winsemius <dwinsemius at comcast.net>
Cc: R-help at r-project.org
Subject: Re: [R] Lattice: arbitrary abline in multiple histograms
Message-ID:
<AANLkTi=R7p=fgEZr+5p3dattC0Dm=x82gsH6KcZUmfOv at mail.gmail.com>
Content-Type: text/plain
Hi:
David was on the right track...
library(reshape) # for the melt() function below
rnorm(100,5,3) -> A
rnorm(100,7,3) -> B
rnorm(100,4,1) -> C
df <- melt(data.frame(A, B, C))
names(df)[1] <- 'gp'
histogram(~ value | gp, data=df, layout=c(3,1), nint=50,
          panel=function(x, ...){
            panel.histogram(x, ...)
            panel.abline(v = c(5.5, 6.5, 4.5)[packet.number()],
                         col = 'red', lwd = 2)
          })
It took several iterations, but persistence sometimes pays off :)
HTH,
Dennis
On Tue, Oct 12, 2010 at 6:13 PM, David Winsemius
<dwinsemius at comcast.net>wrote:
>
> On Oct 12, 2010, at 8:21 PM, Alejo C.S. wrote:
>
> Dear list.
>>
>> I have three histograms and I want to add a vertical abline in a
different
>> place in each plot. The next example will plot a vertical line at x=5.5
in
>> the three plots:
>>
>> rnorm(100,5,3) -> A
>> rnorm(100,7,3) -> B
>> rnorm(100,4,1) -> C
>> rep(c("A","B","C"),each=100) -> grp
>> data.frame(G=grp,D=c(A,B,C)) -> data
>> histogram(~ D | G, data=data, layout=c(3,1), nint=50,
>> panel=function(x,...){
>> panel.histogram(x,...)
>> panel.abline(v=5.5)
>> })
>> But I'd like to plot a vertical line at 5.5 in plot A, 6.5 in plot B and
>> 4.5 in plot C. How can I do that?
>>
>
> histogram(~ D | G, data=data, layout=c(3,1), nint=50,
> panel=function(x,subscripts,groups,...){
> panel.histogram(x,...)
> panel.abline(v=c(5.5, 6.5, 4.5)[packet.number()])
>
> })
>
>>
>> Thanks in advance..
>>
>> A.
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> David Winsemius, MD
> West Hartford, CT
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 12
Date: Wed, 13 Oct 2010 04:36:22 -0700
From: Dennis Murphy <djmuser at gmail.com>
To: Michael Bedward <michael.bedward at gmail.com>
Cc: R help mailing list <r-help at r-project.org>
Subject: Re: [R] compare histograms
Message-ID:
<AANLkTinsO7DztZBjPxpg9rLeRhPfFOjqrcXDs5_85pn9 at mail.gmail.com>
Content-Type: text/plain
Hi:
This recent thread revealed that a package on R-forge for calculating earth
mover's distance is available:
http://r.789695.n4.nabble.com/Measure-Difference-Between-Two-Distributions-td2712281.html#a2713505
HTH,
Dennis
On Tue, Oct 12, 2010 at 7:39 PM, Michael Bedward
<michael.bedward at gmail.com>wrote:
> Just to add to Greg's comments: I've previously used 'Earth Movers
> Distance' to compare histograms. Note, this is a distance metric
> rather than a parametric statistic (ie. not a test) but it at least
> provides a consistent way of quantifying similarity.
>
> It's relatively easy to implement the metric in R (formulating it as a
> linear programming problem). Happy to dig out the code if needed.
>
> Michael
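Michael's transportation-LP formulation above can be sketched as follows. This
is a minimal illustration, not his actual code: it assumes the lpSolve package,
two 1-D histograms with equal total mass, and a ground distance equal to the
number of bins a unit of mass is moved.

```r
library(lpSolve)  # assumed; solves the transportation LP

# Earth mover's distance between two 1-D histograms p and q (equal total
# mass), with cost = |i - j| bins moved per unit of mass.
emd1d <- function(p, q) {
  n <- length(p); m <- length(q)
  cost <- abs(outer(seq_len(n), seq_len(m), "-"))
  # One constraint per source bin: total flow out of bin i equals p[i]
  row_con <- t(sapply(seq_len(n), function(i) {
    M <- matrix(0, n, m); M[i, ] <- 1; as.vector(M)
  }))
  # One constraint per sink bin: total flow into bin j equals q[j]
  col_con <- t(sapply(seq_len(m), function(j) {
    M <- matrix(0, n, m); M[, j] <- 1; as.vector(M)
  }))
  sol <- lp("min", as.vector(cost), rbind(row_con, col_con),
            rep("=", n + m), c(p, q))
  sol$objval
}

p <- c(0.5, 0.5, 0)  # mass in bins 1 and 2
q <- c(0, 0.5, 0.5)  # same mass shifted one bin right
emd1d(p, q)          # total work: 1 (each half unit moves one bin)
```

For histograms of unequal mass the equality constraints would need relaxing;
this sketch covers only the balanced case.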
>
> On 13 October 2010 02:44, Greg Snow <Greg.Snow at imail.org> wrote:
> > That depends a lot on what you mean by the histograms being equivalent.
> >
> > You could just plot them and compare visually. It may be easier to
> compare them if you plot density estimates rather than histograms. Even
> better would be to do a qqplot comparing the 2 sets of data rather than
the
> histograms.
> >
> > If you want a formal test then the ks.test function can compare 2
> datasets. Note that the null hypothesis is that they come from the same
> distribution, a significant result means that they are likely different
(but
> the difference may not be of practical importance), but a non-significant
> test could mean they are the same, or that you just do not have enough
power
> to find the difference (or the difference is hard for the ks test to see).
> You could also use a chi-squared test to compare this way.
> >
> > Another approach would be to use the vis.test function from the
> TeachingDemos package. Write a small function that will either plot your
2
> histograms (density plots), or permute the data between the 2 groups and
> plot the equivalent histograms. The vis.test function then presents you
> with an array of plots, one of which is the original data and the rest
based
> on permutations. If there is a clear meaningful difference in the groups
> you will be able to spot the plot that does not match the rest, otherwise
it
> will just be guessing (might be best to have a fresh set of eyes that have
> not seen the data before see if they can pick out the real plot).
> >
> > --
> > Gregory (Greg) L. Snow Ph.D.
> > Statistical Data Center
> > Intermountain Healthcare
> > greg.snow at imail.org
> > 801.408.8111
> >
> >
> >> -----Original Message-----
> >> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> >> project.org] On Behalf Of solafah bh
> >> Sent: Monday, October 11, 2010 4:02 PM
> >> To: R help mailing list
> >> Subject: [R] compare histograms
> >>
> >> Hello
> >> How do I compare two statistical histograms? How can I know if these
> >> histograms are equivalent or not?
> >>
> >> Regards
> >>
> >>
> >>
> >> [[alternative HTML version deleted]]
> >
> > ______________________________________________
> > R-help at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
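Greg's ks.test and chi-squared suggestions can be sketched like this on
simulated data (the bin choice for the chi-squared version is arbitrary and
affects its power):

```r
set.seed(42)
x <- rnorm(200)              # sample 1: N(0, 1)
y <- rnorm(200, mean = 0.5)  # sample 2: shifted mean

# Two-sample Kolmogorov-Smirnov test; H0: both samples come from the
# same continuous distribution.
ks.test(x, y)

# Chi-squared comparison of the two samples over shared bins:
breaks <- seq(min(x, y), max(x, y), length.out = 11)
counts <- rbind(table(cut(x, breaks)), table(cut(y, breaks)))
chisq.test(counts)  # may warn if some expected counts are small
```

As Greg notes, a significant result here says the samples likely differ, not
that the difference matters in practice.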
[[alternative HTML version deleted]]
------------------------------
Message: 13
Date: Wed, 13 Oct 2010 04:47:15 -0700
From: Dennis Murphy <djmuser at gmail.com>
To: dpender <d.pender at civil.gla.ac.uk>
Cc: r-help at r-project.org
Subject: Re: [R] Data Gaps
Message-ID:
<AANLkTik2BNZ_sDe=KRPeBqk+gmsFCW-JdZKFmPaG8+FW at mail.gmail.com>
Content-Type: text/plain
Hi:
Perhaps
?append
for simple insertions...
HTH,
Dennis
On Wed, Oct 13, 2010 at 1:24 AM, dpender <d.pender at civil.gla.ac.uk> wrote:
>
> R community,
>
> I am trying to write a code that fills in data gaps in a time series. I
> have no R or statistics background at all but the use of R is proving to
be
> a large portion of my PhD research.
>
> So far my code identifies where and the number of new entries required but
> I
> do not know how to add additional rows or columns into an array. Any
> advice
> on how this can be done?
>
> Here is an example:
>
> H [0.88 0.72 0.89 0.93 1.23 0.86]
> T [7.14 7.14 7.49 8.14 7.14 7.32]
> O [0 0 0 2 0 0]
>
> This says that in order to complete the data set 2 entries are required
> prior to H[4] and T[4] i.e. where O = 2.
>
> Thanks,
>
> Doug
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Data-Gaps-tp2993317p2993317.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 14
Date: Wed, 13 Oct 2010 04:51:51 -0700 (PDT)
From: dpender <d.pender at civil.gla.ac.uk>
To: r-help at r-project.org
Subject: [R] Date Time Objects
Message-ID: <1286970711066-2993524.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
I am trying to convert an array from numeric values back to date and time
format. The code I have used is as follows:
for (i in 0:(length(DateTime3)-1)) {
DateTime3[i] <- (strptime(start, "%m/%d/%Y %H:%M") + i*interval)
}
where start is "1/1/1981 00:00".
However, the created array (DateTime3) contains [1,] 347156400 347157600
347158800 347160000 347161200 347162400 347163600 NA
Does anyone know how I can change DateTime3 to the same format as start?
Thanks,
Doug
--
View this message in context:
http://r.789695.n4.nabble.com/Date-Time-Objects-tp2993524p2993524.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 15
Date: Wed, 13 Oct 2010 22:58:31 +1100
From: Michael Bedward <michael.bedward at gmail.com>
To: Dennis Murphy <djmuser at gmail.com>
Cc: R help mailing list <r-help at r-project.org>
Subject: Re: [R] compare histograms
Message-ID:
<AANLkTimVYzSzPn4J9bLVybXpnGkXJnf7EJrvtzx2x1oa at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Ah, that's interesting. I'll have a look because it's bound to be
better than my effort.
Many thanks Dennis.
Michael
On 13 October 2010 22:36, Dennis Murphy <djmuser at gmail.com> wrote:
> Hi:
>
> This recent thread revealed that a package on R-forge for calculating
> earth mover's distance is available:
>
> http://r.789695.n4.nabble.com/Measure-Difference-Between-Two-Distributions-td2712281.html#a2713505
>
> HTH,
> Dennis
>
> On Tue, Oct 12, 2010 at 7:39 PM, Michael Bedward
<michael.bedward at gmail.com>
> wrote:
>>
>> Just to add to Greg's comments: I've previously used 'Earth Movers
>> Distance' to compare histograms. Note, this is a distance metric
>> rather than a parametric statistic (ie. not a test) but it at least
>> provides a consistent way of quantifying similarity.
>>
>> It's relatively easy to implement the metric in R (formulating it as a
>> linear programming problem). Happy to dig out the code if needed.
>>
>> Michael
>>
>> On 13 October 2010 02:44, Greg Snow <Greg.Snow at imail.org> wrote:
>> > That depends a lot on what you mean by the histograms being equivalent.
>> >
>> > You could just plot them and compare visually. It may be easier to
>> > compare them if you plot density estimates rather than histograms. Even
>> > better would be to do a qqplot comparing the 2 sets of data rather than
>> > the histograms.
>> >
>> > If you want a formal test then the ks.test function can compare 2
>> > datasets. Note that the null hypothesis is that they come from the same
>> > distribution, a significant result means that they are likely different
>> > (but the difference may not be of practical importance), but a
>> > non-significant test could mean they are the same, or that you just do
>> > not have enough power to find the difference (or the difference is hard
>> > for the ks test to see). You could also use a chi-squared test to
>> > compare this way.
>> >
>> > Another approach would be to use the vis.test function from the
>> > TeachingDemos package. Write a small function that will either plot
>> > your 2 histograms (density plots), or permute the data between the 2
>> > groups and plot the equivalent histograms. The vis.test function then
>> > presents you with an array of plots, one of which is the original data
>> > and the rest based on permutations. If there is a clear meaningful
>> > difference in the groups you will be able to spot the plot that does
>> > not match the rest, otherwise it will just be guessing (might be best
>> > to have a fresh set of eyes that have not seen the data before see if
>> > they can pick out the real plot).
>> >
>> > --
>> > Gregory (Greg) L. Snow Ph.D.
>> > Statistical Data Center
>> > Intermountain Healthcare
>> > greg.snow at imail.org
>> > 801.408.8111
>> >
>> >
>> >> -----Original Message-----
>> >> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
>> >> project.org] On Behalf Of solafah bh
>> >> Sent: Monday, October 11, 2010 4:02 PM
>> >> To: R help mailing list
>> >> Subject: [R] compare histograms
>> >>
>> >> Hello
>> >> How do I compare two statistical histograms? How can I know if these
>> >> histograms are equivalent or not?
>> >>
>> >> Regards
>> >>
>> >>
>> >>
>> >> [[alternative HTML version deleted]]
>> >
>> > ______________________________________________
>> > R-help at r-project.org mailing list
>> > https://stat.ethz.ch/mailman/listinfo/r-help
>> > PLEASE do read the posting guide
>> > http://www.R-project.org/posting-guide.html
>> > and provide commented, minimal, self-contained, reproducible code.
>> >
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>
------------------------------
Message: 16
Date: Wed, 13 Oct 2010 09:04:01 -0300
From: "Alejo C.S." <alej.c.s at gmail.com>
To: R-help at r-project.org
Subject: Re: [R] Lattice: arbitrary abline in multiple histograms
Message-ID:
<AANLkTikPhgm870T6d38icv_dcZaULgEmaboT09vjVF-z at mail.gmail.com>
Content-Type: text/plain
Thanks all, works perfect.
Alejo
2010/10/13 Dennis Murphy <djmuser at gmail.com>
> Hi:
>
> David was on the right track...
>
> library(reshape) # for the melt() function below
>
> rnorm(100,5,3) -> A
> rnorm(100,7,3) -> B
> rnorm(100,4,1) -> C
> df <- melt(data.frame(A, B, C))
> names(df)[1] <- 'gp'
>
> histogram(~ value | gp, data=df, layout=c(3,1), nint=50,
> panel=function(x, ..., groups){
> panel.histogram(x, ...)
> panel.abline(v = c(5.5, 6.5, 4.5)[panel.number()],
> col = 'red', lwd = 2)
> })
>
> It took several iterations, but persistence sometimes pays off :)
>
> HTH,
> Dennis
>
>
> On Tue, Oct 12, 2010 at 6:13 PM, David Winsemius
<dwinsemius at comcast.net>wrote:
>
>>
>> On Oct 12, 2010, at 8:21 PM, Alejo C.S. wrote:
>>
>> Dear list.
>>>
>>> I have three histograms and I want to add a vertical abline in a
>>> different
>>> place in each plot. The next example will plot a vertical line at x=5.5
>>> in
>>> the three plots:
>>>
>>> rnorm(100,5,3) -> A
>>> rnorm(100,7,3) -> B
>>> rnorm(100,4,1) -> C
>>> rep(c("A","B","C"),each=100) -> grp
>>> data.frame(G=grp,D=c(A,B,C)) -> data
>>> histogram(~ D | G, data=data, layout=c(3,1), nint=50,
>>> panel=function(x,...){
>>> panel.histogram(x,...)
>>> panel.abline(v=5.5)
>>> })
>>> But I'd like to plot a vertical line at 5.5 in plot A, 6.5 in plot B and
>>> 4.5 in plot C. How can I do that?
>>>
>>
>> histogram(~ D | G, data=data, layout=c(3,1), nint=50,
>> panel=function(x,subscripts,groups,...){
>> panel.histogram(x,...)
>> panel.abline(v=c(5.5, 6.5, 4.5)[packet.number()])
>>
>> })
>>
>>>
>>> Thanks in advance..
>>>
>>> A.
>>>
>>> [[alternative HTML version deleted]]
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>> David Winsemius, MD
>> West Hartford, CT
>>
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
[[alternative HTML version deleted]]
------------------------------
Message: 17
Date: Wed, 13 Oct 2010 17:35:49 +0530
From: nuncio m <nuncio.m at gmail.com>
To: r-help at r-project.org
Subject: [R] arima
Message-ID:
<AANLkTikkuB6akyVP0Mj5F-MhseBYN1AYWv7U-HdG0HNq at mail.gmail.com>
Content-Type: text/plain
Hi useRs,
Is it required to remove the mean before using ARIMA models?
thanks
nuncio
--
Nuncio.M
Research Scientist
National Center for Antarctic and Ocean research
Head land Sada
Vasco da Gamma
Goa-403804
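For a stationary fit it is usually not required: arima() estimates a mean term
itself (include.mean = TRUE is the default when d = 0), and when d > 0 the mean
is differenced away. A small sketch on simulated data:

```r
set.seed(1)
x <- arima.sim(list(ar = 0.6), n = 500) + 10  # AR(1) around mean 10

# include.mean = TRUE is the default for d = 0, so no manual demeaning
# is needed; the "intercept" coefficient is the estimated series mean.
fit1 <- arima(x, order = c(1, 0, 0))
coef(fit1)["intercept"]  # close to 10

# Demeaning by hand and suppressing the mean term gives essentially the
# same AR coefficient:
fit2 <- arima(x - mean(x), order = c(1, 0, 0), include.mean = FALSE)
coef(fit2)["ar1"]
```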
[[alternative HTML version deleted]]
------------------------------
Message: 18
Date: Wed, 13 Oct 2010 05:08:58 -0700 (PDT)
From: elpape <ellen.pape at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] vertical kites in KiteChart (plotrix)
Message-ID:
<AANLkTi=DnhhXqnWxXdk+u5sqXarhyTU7MLuS6fKyfkUA at mail.gmail.com>
Content-Type: text/plain
Wow! Thanks!
On 13 October 2010 13:23, mbedward [via R] <
ml-node+2993501-2047744113-199230 at n4.nabble.com<ml-node%2B2993501-2047744113
-199230 at n4.nabble.com>
> wrote:
> Super ! An option for vertical plotting would be very nice.
>
> Michael
>
> On 13 October 2010 22:19, Jim Lemon <[hidden
email]<http://user/SendEmail.jtp?type=node&node=2993501&i=0>>
> wrote:
>
> > On 10/13/2010 07:11 PM, elpape wrote:
> >>
> >> Dear everyone,
> >>
> >> I would like to create a kite chart in which I plot densities (width of
> >> the
> >> vertical kites) in relation to sediment depth (on reversed y-axis) for
6
>
> >> different locations (Distances from seep site, on x-axis on top of the
> >> plot). The dataset I would like to use is:
> >> ...
> >> I have tried to rearrange the data, but I do not seem to get the output
> >> that
> >> I desire...
> >>
> > Hi Ellen,
> > It looks like I will have to do a bit of rewriting of the kiteChart
> > function. You can't just rearrange the data. I can probably get to it
> this
> > weekend. Thanks for pointing out the problem.
> >
> > Jim
> >
> > ______________________________________________
> > [hidden email] mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
> ------------------------------
> View message @
> http://r.789695.n4.nabble.com/vertical-kites-in-KiteChart-plotrix-tp2993295p2993501.html
> To unsubscribe from vertical kites in KiteChart (plotrix), click here.
>
>
>
--
View this message in context:
http://r.789695.n4.nabble.com/vertical-kites-in-KiteChart-plotrix-tp2993295p2993550.html
Sent from the R help mailing list archive at Nabble.com.
[[alternative HTML version deleted]]
------------------------------
Message: 19
Date: Wed, 13 Oct 2010 05:28:09 -0700
From: Dennis Murphy <djmuser at gmail.com>
To: Laura Halderman <lkh11 at pitt.edu>
Cc: r-help at r-project.org
Subject: Re: [R] LME with 2 factors with 3 levels each
Message-ID:
<AANLkTinVOgjg5ODti273JH8NpL02HwBjm_RP1CnnvtOC at mail.gmail.com>
Content-Type: text/plain
Hi:
On Tue, Oct 12, 2010 at 8:59 PM, Laura Halderman <lkh11 at pitt.edu> wrote:
> Hello. I am new to R and new to linear mixed effects modeling. I am
> trying to model some data which has two factors. Each factor has three
> levels rather than continuous data. Specifically, we measured speech at
> Test 1, Test 2 and Test 3. We also had three groups of subjects: RepTP,
> RepNTP and NoRepNTP.
>
Do you have three groups of subjects, where each subject is tested on three
separate occasions? Are the tests meant to be replicates, or is there some
other purpose for why they should be represented in the model? Based on this
description, it would appear to me that the groups constitute one factor,
the students nested within groups another, with three measurements taken on
each student. How many students per group?
>
> I am having a really hard time interpreting this data since all the
> examples I have seen in the book I am using (Baayen, 2008) either have
> continuous variables or factors with only two levels. What I find
> particularly confusing are the interaction terms in the output. The
output
> doesn't present the full interaction (3 X 3) as I would expect with an
> ANOVA. Instead, it only presents an interaction term for one Test and one
> Group, presumably comparing it to the reference Test and reference Group.
> Therefore, it is hard to know what to do with the interactions that
aren't
> significant. In the book, non-significant interactions are dropped from
the
> model. However, in my model, I'm only ever seeing the 2 X 2 interactions,
> not the full 3 X 3 interaction, so it's not clear what I should do when
only
> two levels of group and two levels of test interact but the third group
> doesn't.
>
Let's get the design straight first and the model will work itself out...
Dennis
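One way to get a single test of the full 3 x 3 interaction, rather than the
coefficient-by-coefficient Wald tests Laura is seeing, is a likelihood-ratio
comparison of nested models fit by ML. A hedged sketch (lme4 assumed, since the
posted output is from lmer(); the data frame below is a simulated stand-in for
the real data, with hypothetical sizes and values):

```r
library(lme4)  # assumed; the posted output is from lmer()

# Simulated stand-in for the real data (hypothetical numbers). Group is a
# between-student factor, Test is within-student.
set.seed(1)
ptr <- expand.grid(student = factor(1:21),
                   Test = factor(c("test1", "test2", "test3")),
                   rep = 1:4)
ptr$Group <- factor(c("RepTP", "RepNTP", "NoRepNTP"))[(as.integer(ptr$student) - 1) %% 3 + 1]
ptr$PTR <- 0.55 + rnorm(nrow(ptr), sd = 0.2)

# Fit with and without the interaction, using ML (REML = FALSE) so the
# likelihood-ratio test is valid; the comparison tests all four
# interaction coefficients at once, on 4 df.
m1 <- lmer(PTR ~ Test * Group + (1 | student), data = ptr, REML = FALSE)
m0 <- lmer(PTR ~ Test + Group + (1 | student), data = ptr, REML = FALSE)
anova(m0, m1)  # chi-squared test of the overall Test x Group interaction
```

This drops or keeps the interaction as a block, which sidesteps the question of
what to do when only some of the 2 x 2 contrasts look significant.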
>
> If anyone can assist me in interpreting the output, I would really
> appreciate it. I may be trying to interpret it too much like an ANOVA
where
> you would be looking for main effects of Test (was there improvement from
> Test 1 to Test 2), main effects of Group (was one of the Groups better
than
> the other) and the interactions of the two factors (did one Group improve
> more than another Group from Test 1 to Test 2, for example). I guess
> another question to pose here is, is it pointless to do an LME analysis
with
> more than two levels of a factor? Is it too much like trying to do an
> ANOVA? Alternatively, it's possible that what I'm doing is acceptable,
I'm
> just not able to interpret it correctly.
>
> I have provided output from my model to hopefully illustrate my question.
> I'm happy to provide additional information/output if someone is
interested
> in helping me with this problem.
>
> Thank you,
> Laura
>
>
> Linear mixed model fit by REML
> Formula: PTR ~ Test * Group + (1 | student)
> Data: ptr
> AIC BIC logLik deviance REMLdev
> -625.7 -559.8 323.9 -706.5 -647.7
> Random effects:
> Groups Name Variance Std.Dev.
> student (Intercept) 0.0010119 0.03181
> Residual 0.0457782 0.21396
> Number of obs: 2952, groups: studentID, 20
>
> Fixed effects:
> Estimate Std. Error t value
> (Intercept) 0.547962 0.016476 33.26
> Testtest2 -0.007263 0.015889 -0.46
> Testtest1 -0.050653 0.016305 -3.11
> GroupNoRepNTP 0.008065 0.022675 0.36
> GroupRepNTP -0.018314 0.025483 -0.72
> Testtest2:GroupNoRepNTP 0.006073 0.021936 0.28
> Testtest1:GroupNoRepNTP 0.013901 0.022613 0.61
> Testtest2:GroupRepNTP 0.046684 0.024995 1.87
> Testtest1:GroupRepNTP 0.039994 0.025181 1.59
>
> Note: The reference level for Test is Test3. The reference level for
Group
> is RepTP. The interaction p value (after running pvals.fnc with the MCMC)
> for Testtest2:GroupRepNTP is p = .062 which I'm willing to accept and
> interpret since speech data with English Language Learners is particularly
> variable.
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 20
Date: Wed, 13 Oct 2010 05:31:50 -0700 (PDT)
From: dpender <d.pender at civil.gla.ac.uk>
To: r-help at r-project.org
Subject: Re: [R] Data Gaps
Message-ID: <1286973110190-2993582.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Dennis,
Thanks for that. The problem now is that I am trying to use it in a for
loop. Based on the example before, 2 entries are required after H[3] as
specified by O. The problem is that when inserting values the length of the
array changes so I don't know how to tell the loop that. This is what I
have:
for (i in 1:length(H)) {
if (o[i]>0) {append(H, -1, after = i-1)}
}
Any ideas how to create a loop that allows the inclusion of more than 1
value?
Cheers,
Doug
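One way around the shifting indices, as a rough sketch using the arrays from
the earlier message: walk the vector backwards, so each insertion leaves the
positions still to be visited unchanged. Note also that append() returns a new
vector, so its result has to be assigned back (the loop above discards it).

```r
H <- c(0.88, 0.72, 0.89, 0.93, 1.23, 0.86)
O <- c(0, 0, 0, 2, 0, 0)

# Loop from the end so insertions do not shift the indices still to be
# processed; rep() inserts O[i] placeholder values (-1 here) before
# position i in a single call.
for (i in rev(seq_along(O))) {
  if (O[i] > 0) {
    H <- append(H, rep(-1, O[i]), after = i - 1)
  }
}
H  # 0.88 0.72 0.89 -1.00 -1.00 0.93 1.23 0.86
```

The same loop applied to T fills both series consistently.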
djmuseR wrote:
>
> Hi:
>
> Perhaps
>
> ?append
>
> for simple insertions...
>
> HTH,
> Dennis
>
> On Wed, Oct 13, 2010 at 1:24 AM, dpender <d.pender at civil.gla.ac.uk> wrote:
>
>>
>> R community,
>>
>> I am trying to write a code that fills in data gaps in a time series. I
>> have no R or statistics background at all but the use of R is proving to
>> be
>> a large portion of my PhD research.
>>
>> So far my code identifies where and the number of new entries required
>> but
>> I
>> do not know how to add additional rows or columns into an array. Any
>> advice
>> on how this can be done?
>>
>> Here is an example:
>>
>> H [0.88 0.72 0.89 0.93 1.23 0.86]
>> T [7.14 7.14 7.49 8.14 7.14 7.32]
>> O [0 0 0 2 0 0]
>>
>> This says that in order to complete the data set 2 entries are required
>> prior to H[4] and T[4] i.e. where O = 2.
>>
>> Thanks,
>>
>> Doug
>> --
>> View this message in context:
>> http://r.789695.n4.nabble.com/Data-Gaps-tp2993317p2993317.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
View this message in context:
http://r.789695.n4.nabble.com/Data-Gaps-tp2993317p2993582.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 21
Date: Wed, 13 Oct 2010 05:53:45 -0700 (PDT)
From: Raji <raji.sankaran at gmail.com>
To: r-help at r-project.org
Subject: [R] nnet help
Message-ID: <1286974425839-2993609.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi R-helpers, I am learning the nnet package now. I have seen that nnet can
be used for regression and classification. Can you give me more insight into
the other data mining and predictive techniques that the nnet package can be
used for? If possible, can you send me links to sample datasets with which
nnet can be tested, to get a better understanding of it?
--
View this message in context:
http://r.789695.n4.nabble.com/nnet-help-tp2993609p2993609.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 22
Date: Wed, 13 Oct 2010 05:58:31 -0700 (PDT)
From: Wonsang You <you at ifn-magdeburg.de>
To: r-help at r-project.org
Subject: Re: [R] How to fix error in the package 'rgenoud'
Message-ID:
<AANLkTi=emmAhFG+vUmrTspAr8Nc-EqzbPGwiWtuvD22f at mail.gmail.com>
Content-Type: text/plain
I need to correct the error message that I quoted in my original message;
sorry for the mistake.
Finally, I got the following error message after running the function
'genoud'.
Error in optim(foo.vals, fn = fn1, gr = gr1, method = optim.method, control
= control) :
non-finite finite-difference value [1]
When I executed 'traceback()' to find where the error occurred, I got the
following results. Unfortunately, I could not figure out what the problem
was from this information.
6: optim(foo.vals, fn = fn1, gr = gr1, method = optim.method, control =
control)
5: function (foo.vals)
{
ret <- optim(foo.vals, fn = fn1, gr = gr1, method = optim.method,
control = control)
return(c(ret$value, ret$par))
}(c(0.220878697173384, -13.3643173824871))
4: .Call("rgenoud", as.function(fn1), new.env(), as.integer(nvars),
as.integer(pop.size), as.integer(max.generations),
as.integer(wait.generations),
as.integer(nStartingValues), as.real(starting.values), as.vector(P),
as.matrix(Domains), as.integer(max), as.integer(gradient.check),
as.integer(boundary.enforcement), as.double(solution.tolerance),
as.integer(BFGS), as.integer(data.type.int),
as.integer(provide.seeds),
as.integer(unif.seed), as.integer(int.seed),
as.integer(print.level),
as.integer(share.type), as.integer(instance.number),
as.integer(MemoryMatrix),
as.integer(debug), as.character(output.path),
as.integer(output.type),
as.character(project.path), as.integer(hard.generation.limit),
as.function(genoud.optim.wrapper101), as.integer(lexical),
as.function(fnLexicalSort), as.function(fnMemoryMatrixEvaluate),
as.integer(UserGradient), as.function(gr1func), as.real(P9mix),
as.integer(BFGSburnin), as.integer(transform), PACKAGE = "rgenoud")
3: genoud(Qmin, nvars = 2, starting.values = InitVal, max.generations = 10,
wait.generations = 3, n = n, yper = yper, pertype = pertype) at
wFGN.R#75
On 13 October 2010 13:03, Wonsang You <you at ifn-magdeburg.de> wrote:
> Dear R user fellows,
>
> I would like to ask you about the package 'rgenoud' which is a genetic
> optimization tool.
> I ran the function 'genoud' with two variables to be minimized by the
> following command.
>
> result<-genoud(fn,nvars=2,starting.values=c(0.5,0),
>
> pop.size=1000, max.generations=10, wait.generations=3)
>
>
> Then, I had the following error message.
>
> Error in solve.default(Djl) :
>
> system is computationally singular: reciprocal condition number = 0
>
>
> Can anyone give me some tip on how to fix the problem? Thank you for your
> great help in advance.
>
> Best Regards,
> Wonsang You
>
-----
Wonsang You
Leibniz Institute for Neurobiology
--
View this message in context:
http://r.789695.n4.nabble.com/Re-How-to-fix-error-in-the-package-rgenoud-tp2993614p2993614.html
Sent from the R help mailing list archive at Nabble.com.
[[alternative HTML version deleted]]
------------------------------
Message: 23
Date: Wed, 13 Oct 2010 14:58:50 +0200
From: Niccolò Bassani <biostatistica at gmail.com>
To: r-help at stat.math.ethz.ch
Subject: [R] Nonparametric MANCOVA using matrices
Message-ID:
<AANLkTingyC8dYC3voVVAmU6jXouDNMKPR+8h9WamYSZR at mail.gmail.com>
Content-Type: text/plain
Dear R-users,
is anybody aware of some package or routine to implement nonparametric
Multivariate Analysis of Covariance (MANCOVA) using matrices instead of
single variable names? I found something for parametric MANCOVA which still
requires single variables to be used (ffmanova,vegan), but since both the
response matrix and the covariates matrix are quite large (306 and 152
variables, respectively) I have some difficulty in implementing this
model...
Something like Y ~ X*Z, where X is a design matrix, and Z is the covariate
matrix. Rows of both Y and Z are much less than respective columns...
Actually, the closest thing I found is the adonis function in the vegan
package (I've used it a lot, but only for MANOVA purposes); still, things
do not seem to work properly: namely, no residual df is left for
estimating the residual SS.
Thanks for any help!
Niccolò
[[alternative HTML version deleted]]
------------------------------
Message: 24
Date: Wed, 13 Oct 2010 14:59:53 +0200
From: RINNER Heinrich <HEINRICH.RINNER at tirol.gv.at>
To: "r-help at stat.math.ethz.ch" <r-help at stat.math.ethz.ch>
Subject: [R] RODBC: forcing a special column to be read in as
character
Message-ID:
<E18FDA5C64FC5645A25B2B52EB8CC9CE0395BF9558 at EXCHMCA.tirol.local>
Content-Type: text/plain; charset="us-ascii"
Dear R-users,
I am working with R version 2.10.1 and package RODBC Version: 1.3-2 under
windows.
Say I have a table "testtable" (in an Access data base), which has many
different columns, among them a character column "X" with "integer-like"
data as "0012345".
Using sqlFetch, I'd like to assure that column X is read in as a character
variable. So what I'm looking for is something between these two approaches:
> sqlFetch(channel, "testtable") # -> column X is automatically converted to
numeric (so "0012345" becomes 12345)
> sqlFetch(channel, "testtable", as.is = TRUE) # -> all columns are
converted to character
I guess I'm looking for something like argument "colClasses" in function
"read.table", is there a something similar in RODBC?
Your advice is appreciated;
kind regards
Heinrich.
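For what it's worth, RODBC documents its as.is argument as following the
read.table() conventions, so it should also accept column names, indices, or a
logical vector; something like sqlFetch(channel, "testtable", as.is = "X")
would then keep only X as character (untested here, no database at hand). The
read.table analogy, runnable without a database:

```r
# colClasses in read.table/read.csv does what is wanted for a flat file:
# a named vector fixes the class of the named columns and leaves the
# rest to the default type conversion.
txt <- "X,Y\n0012345,1\n0099999,2\n"
df <- read.csv(text = txt, colClasses = c(X = "character"))
df$X  # "0012345" "0099999" -- leading zeros preserved
df$Y  # still converted to a number
```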
------------------------------
Message: 25
Date: Wed, 13 Oct 2010 06:01:21 -0700 (PDT)
From: Wonsang You <you at ifn-magdeburg.de>
To: r-help at r-project.org
Subject: Re: [R] How to fix error in the package 'rgenoud'
Message-ID: <1286974881705-2993619.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
I need to correct the error message that I quoted in my original message;
sorry for the mistake.
Finally, I got the following error message after running the function
'genoud'.
Error in optim(foo.vals, fn = fn1, gr = gr1, method = optim.method, control
= control) :
non-finite finite-difference value [1]
When I executed 'traceback()' to find where the error occurred, I got the
following results. Unfortunately, I could not figure out what the problem
was from this information.
6: optim(foo.vals, fn = fn1, gr = gr1, method = optim.method, control =
control)
5: function (foo.vals)
{
ret <- optim(foo.vals, fn = fn1, gr = gr1, method = optim.method,
control = control)
return(c(ret$value, ret$par))
}(c(0.220878697173384, -13.3643173824871))
4: .Call("rgenoud", as.function(fn1), new.env(), as.integer(nvars),
as.integer(pop.size), as.integer(max.generations),
as.integer(wait.generations),
as.integer(nStartingValues), as.real(starting.values), as.vector(P),
as.matrix(Domains), as.integer(max), as.integer(gradient.check),
as.integer(boundary.enforcement), as.double(solution.tolerance),
as.integer(BFGS), as.integer(data.type.int),
as.integer(provide.seeds),
as.integer(unif.seed), as.integer(int.seed), as.integer(print.level),
as.integer(share.type), as.integer(instance.number),
as.integer(MemoryMatrix),
as.integer(debug), as.character(output.path),
as.integer(output.type),
as.character(project.path), as.integer(hard.generation.limit),
as.function(genoud.optim.wrapper101), as.integer(lexical),
as.function(fnLexicalSort), as.function(fnMemoryMatrixEvaluate),
as.integer(UserGradient), as.function(gr1func), as.real(P9mix),
as.integer(BFGSburnin), as.integer(transform), PACKAGE = "rgenoud")
3: genoud(Qmin, nvars = 2, starting.values = InitVal, max.generations = 10,
wait.generations = 3, n = n, yper = yper, pertype = pertype) at
wFGN.R#75
-----
Wonsang You
Leibniz Institute for Neurobiology
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-fix-error-in-the-package-rgenoud-tp2993489p2993619.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 26
Date: Wed, 13 Oct 2010 10:04:27 -0300
From: Henrique Dallazuanna <wwwhsd at gmail.com>
To: dpender <d.pender at civil.gla.ac.uk>
Cc: r-help at r-project.org
Subject: Re: [R] Date Time Objects
Message-ID:
<AANLkTikQDeY8ofPiTUahGB_u5AmZoJxGsdmDK9D10ox+ at mail.gmail.com>
Content-Type: text/plain
Try this:
sapply(0:(length(DateTime3)-1), function(i)as.character(strptime(start,
"%m/%d/%Y %H:%M") + i * interval))
On Wed, Oct 13, 2010 at 8:51 AM, dpender <d.pender at civil.gla.ac.uk> wrote:
>
> I am trying to convert an array from numeric values back to date and time
> format. The code I have used is as follows:
>
> for (i in 0:(length(DateTime3)-1)) {
>   DateTime3[i] <- strptime(start, "%m/%d/%Y %H:%M") + i*interval
> }
>
> where start <- [1] "1/1/1981 00:00"
>
> However the created array (DateTime3) contains [1,] 347156400 347157600
> 347158800 347160000 347161200 347162400 347163600 NA
>
> Does anyone know how I can change DateTime3 to the same format as start?
>
> Thanks,
>
> Doug
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Date-Time-Objects-tp2993524p2993524.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Henrique Dallazuanna
Curitiba-Parana-Brasil
25° 25' 40" S 49° 16' 22" W
[[alternative HTML version deleted]]
------------------------------
Message: 27
Date: Wed, 13 Oct 2010 04:50:18 -0700 (PDT)
From: oc1 <ocrowe at birdwatchireland.ie>
To: r-help at r-project.org
Subject: [R] repeating a GAM across many factors
Message-ID: <1286970618828-2993522.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi there,
I'm working on a biological dataset that contains up to 40 species and 100
sites. The response variable is index (trend) and the independent variable is
year; each combination has an index for each year between 1994 and 2008
inclusive. So my input file is along the following lines: species, site, year,
index. I'd like to run a GAM (plotting a smoothing spline, 5 df, Poisson) and
output the fitted values for each species/site combination. I have written the
code for the GAM and for outputting the fitted values, which is very simple
and brief (<5 lines), but am unsure how to repeat the process for each
combination. I would be grateful if someone could advise on this code,
including producing fitted values for each.
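[One common pattern for this kind of looped fit is sketched below. This is
only a sketch: the column names, the mgcv-style smooth s(year, k = 6), and
the synthetic data are assumptions, not the actual dataset or model.]

```r
# Sketch: fit the same Poisson GAM to every species/site combination by
# splitting the data frame and looping with lapply.  Data here is synthetic.
library(mgcv)
set.seed(1)
dat <- expand.grid(species = c("sp1", "sp2"), site = c("s1", "s2"),
                   year = 1994:2008)
dat$index <- rpois(nrow(dat), lambda = 5)

# one data frame per species/site combination
pieces <- split(dat, interaction(dat$species, dat$site, drop = TRUE))

# fit the same model to each piece (k = 6 gives roughly a 5-df smooth)
fits <- lapply(pieces, function(d)
  gam(index ~ s(year, k = 6), family = poisson, data = d))

# fitted values for each combination, as a named list
fitted_vals <- lapply(fits, fitted)
```

The same idea works with any model-fitting call in place of gam(); only the
function applied to each piece changes.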
Thanks,
Olivia
--
View this message in context:
http://r.789695.n4.nabble.com/repeating-a-GAM-across-many-factors-tp2993522p2993522.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 28
Date: Wed, 13 Oct 2010 02:31:54 -0700 (PDT)
From: Ron_M <ron_michael70 at yahoo.com>
To: r-help at r-project.org
Subject: [R] Problem to create a matrix polynomial
Message-ID: <1286962314278-2993411.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Dear all R users, I was trying to create a polynomial using polynom() from
the "PolynomF" package. My problem is that if I pass coefficients as simple
numerical values it works, but for matrix coefficients it does not. Here
is my try:
library(PolynomF)
z <- polynom()
## the following works
p1 <- 1 - 4.5*z
## however, the following does not give the intended result
p2 <- diag(3) - matrix(1:9, 3, 3)*z
In the 2nd example, I was expecting a matrix polynomial in "z" up to degree
1, which I did not get. Can anyone please tell me how to address this
problem?
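[PolynomF stores coefficients as a numeric vector, so matrix coefficients
fall outside its design. One possible workaround is sketched below; the
helper name matpoly_eval is invented for illustration, not part of any
package.]

```r
# Sketch: represent a matrix polynomial as a list of coefficient matrices
# (degree 0 first) and evaluate A0 + A1*z + A2*z^2 + ... at a scalar z.
matpoly_eval <- function(coefs, z) {
  # multiply each coefficient matrix by z^degree, then sum them up
  Reduce(`+`, Map(function(A, k) A * z^k, coefs, seq_along(coefs) - 1))
}

p1 <- list(diag(3), -matrix(1:9, 3, 3))  # represents I - M*z with M = matrix(1:9,3,3)
matpoly_eval(p1, 2)                      # equals diag(3) - 2*matrix(1:9, 3, 3)
```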
Thanks,
--
View this message in context:
http://r.789695.n4.nabble.com/Problem-to-create-a-matrix-polynomial-tp2993411p2993411.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 29
Date: Wed, 13 Oct 2010 06:16:15 -0700 (PDT)
From: dpender <d.pender at civil.gla.ac.uk>
To: r-help at r-project.org
Subject: Re: [R] Date Time Objects
Message-ID: <1286975775053-2993640.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thanks Henrique,
Do you have any idea why the first entry doesn't have the time as the start
specified is "1/1/1981 00:00"?
Is it something to do with being at midnight?
Doug
--
View this message in context:
http://r.789695.n4.nabble.com/Date-Time-Objects-tp2993524p2993640.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 30
Date: Wed, 13 Oct 2010 21:20:51 +0800
From: zhu yao <mailzhuyao at gmail.com>
To: R-project help <r-help at r-project.org>
Subject: [R] bootstrap in pROC package
Message-ID:
<AANLkTikC10HcMcq9Qc5FE572WOXTJ0jj-0GkvMNgXRh9 at mail.gmail.com>
Content-Type: text/plain
Dear useRs:
I use pROC package to compute the bootstrap C.I. of AUC.
The command was as follows:
roc1<-roc(all$D,all$pre,ci=TRUE,boot.n=200)
However, the result was:
Area under the curve: 0.5903
95% CI: 0.479-0.7016 (DeLong)
Why was the C.I. computed by the DeLong method?
Yao Zhu
Department of Urology
Fudan University Shanghai Cancer Center
Shanghai, China
[[alternative HTML version deleted]]
------------------------------
Message: 31
Date: Wed, 13 Oct 2010 10:23:57 -0300
From: Henrique Dallazuanna <wwwhsd at gmail.com>
To: dpender <d.pender at civil.gla.ac.uk>
Cc: r-help at r-project.org
Subject: Re: [R] Date Time Objects
Message-ID:
<AANLkTi=vU71cor73Nd5imxBS3untP3RMbkZH+Fjokv=+ at mail.gmail.com>
Content-Type: text/plain
Try this:
sapply(0:(length(DateTime3)-1), function(i)format(strptime(start,
"%m/%d/%Y %H:%M") + i * interval, "%Y-%m-%d %H:%M:%S"))
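[The reason the first entry lost its time, sketched in isolation: the
default character conversion of a date-time drops a time of exactly
midnight, while format() with an explicit format string always prints it.]

```r
# as.character() vs. format() on a midnight date-time
t0 <- strptime("1/1/1981 00:00", "%m/%d/%Y %H:%M")
as.character(t0)                 # "1981-01-01"          -- midnight dropped
format(t0, "%Y-%m-%d %H:%M:%S")  # "1981-01-01 00:00:00" -- midnight kept
```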
On Wed, Oct 13, 2010 at 10:16 AM, dpender <d.pender at civil.gla.ac.uk> wrote:
>
> Thanks Henrique,
>
> Do you have any idea why the first entry doesn't have the time as the
> start specified is "1/1/1981 00:00"?
>
> Is it something to do with being at midnight?
>
> Doug
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Date-Time-Objects-tp2993524p2993640.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Henrique Dallazuanna
Curitiba-Parana-Brasil
25° 25' 40" S 49° 16' 22" W
[[alternative HTML version deleted]]
------------------------------
Message: 32
Date: Wed, 13 Oct 2010 09:24:48 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: ivan.calandra at uni-hamburg.de
Cc: r-help at r-project.org
Subject: Re: [R] Boxplot has only one whisker
Message-ID: <BB17C09B-0555-4520-A466-FEF62090ED95 at comcast.net>
Content-Type: text/plain; charset=WINDOWS-1252; format=flowed;
delsp=yes
On Oct 13, 2010, at 5:06 AM, Ivan Calandra wrote:
> Well, you don't use the same data for both.
> Types 2 and 7 give different values for the 25th and 75th percentiles, which
> correspond more or less to the box hinges.
> If you take a look at how the whiskers are defined (look at
> ?boxplot.stats):
> "|coef| this determines how far the plot ?whiskers? extend out from
> the box. If |coef| is positive, the whiskers extend to the most
> extreme data point which is no more than |coef| times the length of
> the box away from the box."
> In your example, the box for "method1" is short; the 100% point is
> then further than 1.5*the length of the box (IQR), and is therefore
> considered an outlier. Note that the default for coef is 1.5 (hence
> the 1.5*IQR). On the other hand, for method 2, the box is longer;
> the largest point is therefore within 1.5*IQR and is plotted with the
> upper whisker.
> Understand what I mean?
> See that post too, they might have better explanations:
> http://finzi.psych.upenn.edu/Rhelp10/2010-May/238597.html
>
> But don't forget my second comment: you are passing summary data to
> boxplot(). From my understanding, you should not do that. boxplot() returns
> the statistics itself (look at the Value section of ?boxplot). Try:
> bx <- boxplot(d,ylab = "Beispiel 1",range = 1.5)
> bx$stats
> [,1] [,2]
> [1,] 2.7000 2.700
> [2,] 3.0000 2.900
> [3,] 3.3000 3.300
> [4,] 4.8625 6.225
> [5,] 4.8625 8.950
>
> You can see that boxplot() recomputed the summary stats from the
[[elided Yahoo spam]]
> You should provide raw data to boxplot(), not summary stats.
Agreed, boxplot is doing the summarization, but it then hands the
summaries on to bxp() to do the plotting. So if the OP wants to add
back in a whisker or a dot or to use summaries from another source,
then he should first look at the Value section of the boxplot help
page for the correct list structure, and then send that list to bxp().
The bxp help page also is where one would look to tweak various
plotting parameters, since those are better explained there.
--
David.
> If you want to input summary stats, there was a post some time ago
> on that:
> http://finzi.psych.upenn.edu/Rhelp10/2010-September/251674.html
>
> HTH,
> Ivan
>
Le 10/13/2010 10:33, tom a écrit :
>>
>> Ivan Calandra wrote:
[[elided Yahoo spam]]
>>> You have only five points, the last one being considered as outlier.
>>> Note that boxplot() requires a numeric vector for specifying data
>>> from
[[elided Yahoo spam]]
>>>
>> But why is only one of the boxplots missing its whisker? I use the
>> same data
>> for both boxplots.
>> thx,
>> tom
>>
>>
>>
>>
>
> --
> Ivan CALANDRA
> PhD Student
> University of Hamburg
> Biozentrum Grindel und Zoologisches Museum
> Abt. Säugetiere
> Martin-Luther-King-Platz 3
> D-20146 Hamburg, GERMANY
> +49(0)40 42838 6231
> ivan.calandra at uni-hamburg.de
>
> **********
> http://www.for771.uni-bonn.de
> http://webapp5.rrz.uni-hamburg.de/mammals/eng/mitarbeiter.php
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 33
Date: Wed, 13 Oct 2010 06:33:54 -0700 (PDT)
From: dpender <d.pender at civil.gla.ac.uk>
To: r-help at r-project.org
Subject: Re: [R] Date Time Objects
Message-ID: <1286976834524-2993663.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Perfect.
Thanks
--
View this message in context:
http://r.789695.n4.nabble.com/Date-Time-Objects-tp2993524p2993663.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 34
Date: Wed, 13 Oct 2010 15:46:55 +0200
From: David <david.maillists at gmail.com>
To: r-help at r-project.org
Subject: [R] "Memory not mapped" when using .C, problem in Mac but not
in Linux
Message-ID:
<AANLkTi=ihbDOS5NX-N5OmUHmeh7uw87oeuBdzAxi+G6G at mail.gmail.com>
Content-Type: text/plain
Hello,
I am aware this may be an obscure problem that is difficult to advise on, but
just in case... I am calling a C function from R on an iMac (almost brand new:
bought by my institution this year) and it gives a "memory not mapped" error.
Nevertheless, exactly the same code runs without problems on a powerful
computer using SuSE Linux, and also on my laptop from 2007 (32 bits, 2 GB RAM)
running Ubuntu. My supervisor says that he can run my code on his iMac (a
bit older than mine) without problems.
I have upgraded to the latest version of R, and I have tried compiling the C
code with the latest version of gcc (obtained from MacPorts), but the error
persists.
Any ideas?
Thank you very much in advance,
David
Can you please Cc me on any replies, in case I miss any of them
among the flood of emails :-)?
[[alternative HTML version deleted]]
------------------------------
Message: 35
Date: Wed, 13 Oct 2010 15:55:13 +0200
From: "Millo Giovanni" <Giovanni_Millo at Generali.com>
To: "Achim Zeileis" <Achim.Zeileis at uibk.ac.at>, "Max Brown"
<max.e.brown at gmail.com>
Cc: r-help at stat.math.ethz.ch
Subject: [R] R: robust standard errors for panel data
Message-ID:
<28643F754DDB094D8A875617EC4398B2069D2C1C at BEMAILEXTV03.corp.generali.net>
Content-Type: text/plain; charset="iso-8859-1"
Hello.
In principle Achim is right, by default vcovHC.plm does things the
"Arellano" way, clustering by group and therefore giving SEs which are
robust to general heteroskedasticity and serial correlation. The problem
with your data, though, is that this estimator is N-consistent, so it is
inappropriate for your setting. Conversely, clustering the other way
(cluster="time") would yield a T-consistent estimator, robust to
cross-sectional correlation: there's no escape, because the "big" dimension
is always used to get robustness along the "small" one.
Therefore the road to robustness along the "big" dimension is
some sort of nonparametric truncation. So:
** 1st (possible) solution **
In my opinion, you would actually need a panel implementation of Newey-West,
which is not implemented in 'plm' yet. It might well be feasible by applying
vcovHAC{sandwich} to the time-demeaned data but I'm not sure; in this case,
vcovHAC should be applied this way (here: the famous Munnell data, see
example(plm))
> library(plm)
> fm<-log(gsp)~log(pcap)+log(pc)+log(emp)+unemp
> data(Produc)
> ## est. FE model
> femod<-plm(fm, Produc)
> ## extract time-demeaned data
> demy<-pmodel.response(femod, model="within")
> demX<-model.matrix(femod, model="within")
> ## estimate lm model on demeaned data
> ## (equivalent to FE, but makes a 'lm' object)
> demod<-lm(demy~demX-1)
> library(sandwich)
> library(lmtest)
> ## apply HAC covariance, e.g., to t-tests
> coeftest(demod, vcov=vcovHAC)
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
demXlog(pcap) -0.0261497 0.0485168 -0.5390 0.59005
demXlog(pc) 0.2920069 0.0496912 5.8764 6.116e-09 ***
demXlog(emp) 0.7681595 0.0677258 11.3422 < 2.2e-16 ***
demXunemp -0.0052977 0.0018648 -2.8410 0.00461 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> ## same goes for waldtest(), lht() etc.
but beware, things are probably complicated by the serial correlation
induced by demeaning: see the references in the serial correlation tests
section of the package vignette. Caveat emptor.
** 2nd solution **
Another possible strategy is screening for serial correlation first: again,
see ?pbgtest, ?pdwtest and be aware of all the caveats detailed in the
abovementioned section of the vignette regarding use on FE models.
** 3rd solution **
Another thing you could do (Hendry and friends would say "should" do!) to
get rid of serial correlation is a dynamic FE panel, as the Nickell bias is
of order 1/T and so might well be negligible in your case.
Anyway, thanks for motivating me: I thought we'd provided robust covariances
all over the place, but there was one direction left ;^)
Giovanni
-----Original Message-----
From: Achim Zeileis [mailto:Achim.Zeileis at uibk.ac.at]
Sent: Wednesday, 13 October 2010 12:06
To: Max Brown
Cc: r-help at stat.math.ethz.ch; yves.croissant at univ-reunion.fr; Millo Giovanni
Subject: Re: [R] robust standard errors for panel data
On Wed, 13 Oct 2010, Max Brown wrote:
> Hi,
>
> I would like to estimate a panel model (small N large T, fixed
> effects), but would need "robust" standard errors for that. In
> particular, I am worried about potential serial correlation for a
> given individual (not so much about correlation in the cross section).
>
>> From the documentation, it looks as if the vcovHC that comes with plm
> does not seem to do autocorrelation,
My understanding is that it does, in fact. The details say
Observations may be clustered by '"group"' ('"time"') to account
for serial (cross-sectional) correlation.
Thus, the default appears to be to account for serial correlation anyway.
But I'm not an expert in panel-versions of these robust covariances. Yves
and Giovanni might be able to say more.
> and the NeweyWest in the sandwich
> package says that it expects a fitted model of type "lm" or "glm" (it
> says nothing about "plm").
That information in the "sandwich" package is outdated - prompted by your
email I've just fixed the manual page in the development version.
In principle, everything in "sandwich" is object-oriented now, see
vignette("sandwich-OOP", package = "sandwich")
However, the methods within "sandwich" are only sensible for cross-sectional
data (vcovHC, sandwich, ...) or time series data (vcovHAC, NeweyWest,
kernHAC, ...). There is not yet explicit support for panel data.
hth,
Z
> How can I estimate the model and get robust standard errors?
>
> Thanks for your help.
>
> Max
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
Ai sensi del D.Lgs. 196/2003 si precisa che le informazi...{{dropped:13}}
------------------------------
Message: 36
Date: Wed, 13 Oct 2010 07:12:49 -0700
From: Dennis Murphy <djmuser at gmail.com>
To: dpender <d.pender at civil.gla.ac.uk>
Cc: r-help at r-project.org
Subject: Re: [R] Data Gaps
Message-ID:
<AANLkTinNV+9uJkBtwHQrPFSySmdkufD1S4a4b8d4nmJ5 at mail.gmail.com>
Content-Type: text/plain
Hi:
On Wed, Oct 13, 2010 at 5:31 AM, dpender <d.pender at civil.gla.ac.uk> wrote:
>
> Dennis,
>
> Thanks for that. The problem now is that I am trying to use it in a for
> loop. Based on the example before, 2 entries are required after H[3] as
> specified by O. However, when inserting values the length of the
> array changes, so I don't know how to tell the loop that. This is what I
> have:
>
If you know what the replacement values are in advance, you can create a
list object where each component is a vector of values to insert and then
access the i-th component of the list in the loop. If the replacement values
are not known in advance, then it's not so obvious to me what you should do.
The function f() below takes three inputs: the first is a matrix whose rows
contain the original vectors, the second is a list of values to be inserted
(not necessarily equal in length from one row to the next), and the third is
a matrix whose rows indicate where the insertions should take place in the
vector (minus 1). which(x > 0) returns the index (or indices) where the value of
the vector x is positive. The output of the function is a list because the
output vectors are not necessarily the same length after insertion.
Here's a small example with two-row matrices for the values and indices, as
well as a two-component list for the inserts:
H <- c(0.88, 0.72, 0.89, 0.93, 1.23, 0.86)
T <- c(7.14, 7.14, 7.49, 8.14, 7.14, 7.32)
O <- c(0, 0, 0, 2, 0, 0)
R <- c(1.2, 1.4)
m <- rbind(H, T)
o <- rbind(O, c(0, 0, 1, 0, 0, 0))
l <- list(R1 = R, R2 = 1.6)
f <- function(mval, midx, mrep) {
  if(nrow(mval) != nrow(midx) || nrow(midx) != length(mrep))
    stop('unequal numbers of rows among inputs')
  n <- nrow(mval)
  out <- vector('list', n)
  for(i in seq_len(n))
    out[[i]] <- append(mval[i, ], as.numeric(mrep[[i]]),
                       after = which(midx[i, ] > 0) - 1)
  out
}
f(m, o, l)
[[1]]
[1] 0.88 0.72 0.89 1.20 1.40 0.93 1.23 0.86
[[2]]
[1] 7.14 7.14 1.60 7.49 8.14 7.14 7.32
HTH,
Dennis
> for (i in 1:length(H)) {
>   if (o[i]>0) {append(H, -1, after = i-1)}
> }
>
> Any ideas how to create a loop that allows the inclusion of more than 1
> value?
>
> Cheers,
>
> Doug
>
>
> djmuseR wrote:
> >
> > Hi:
> >
> > Perhaps
> >
> > ?append
> >
> > for simple insertions...
> >
> > HTH,
> > Dennis
> >
> > On Wed, Oct 13, 2010 at 1:24 AM, dpender <d.pender at civil.gla.ac.uk>
> wrote:
> >
> >>
> >> R community,
> >>
> >> I am trying to write a code that fills in data gaps in a time series.
> >> I have no R or statistics background at all but the use of R is proving
> >> to be a large portion of my PhD research.
> >>
> >> So far my code identifies where and how many new entries are required,
> >> but I do not know how to add additional rows or columns into an array.
> >> Any advice on how this can be done?
> >>
> >> Here is an example:
> >>
> >> H [0.88 0.72 0.89 0.93 1.23 0.86]
> >> T [7.14 7.14 7.49 8.14 7.14 7.32]
> >> O [0 0 0 2 0 0]
> >>
> >> This says that in order to complete the data set 2 entries are required
> >> prior to H[4] and T[4] i.e. where O = 2.
> >>
> >> Thanks,
> >>
> >> Doug
> >> --
> >> View this message in context:
> >> http://r.789695.n4.nabble.com/Data-Gaps-tp2993317p2993317.html
> >> Sent from the R help mailing list archive at Nabble.com.
> >>
> >> ______________________________________________
> >> R-help at r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide
> >> http://www.R-project.org/posting-guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >>
> >
> > [[alternative HTML version deleted]]
> >
> > ______________________________________________
> > R-help at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> >
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Data-Gaps-tp2993317p2993582.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 37
Date: Wed, 13 Oct 2010 16:05:15 +0200
From: Christophe Bouffioux <christophe.b00 at gmail.com>
To: r-help at r-project.org
Subject: [R] bwplot change whiskers position to percentile 5 and P95
Message-ID:
<AANLkTi=KFUvB2Ov1qP8aphwn+Nx-mE2359xr8f8=0iFo at mail.gmail.com>
Content-Type: text/plain
Dear R-community,
Using bwplot, how can I put the whiskers at percentile 5 and percentile 95,
in place of the default position coef=1.5?
Using panel=panel.bwstrip, whiskerpos=0.05, from the package agsemisc works,
but it changes the appearance of my boxplot and only works with an old
version of R, which I don't want, and I didn't find the option in the
box.umbrella parameters.
Many thanks
Christophe
Here is the code:
library(lattice)
ex <- data.frame(v1 = log(abs(rt(180, 3)) + 1),
v2 = rep(c("2007", "2006", "2005"), 60),
z = rep(c("a", "b", "c", "d", "e", "f"), e = 30))
ex2 <- data.frame(v1b = log(abs(rt(18, 3)) + 1),
v2 = rep(c("2007", "2006", "2005"), 6),
z = rep(c("a", "b", "c", "d", "e", "f"), e = 3))
ex3 <- merge(ex, ex2, by=c("v2","z"))
D2007 <- ex3[ex3$z=="d" & ex3$v2==2007, ]
D2006 <- ex3[ex3$z=="d" & ex3$v2==2006, ]
C2007 <- ex3[ex3$z=="c" & ex3$v2==2007, ]
quantile(D2007$v1, probs = c(0.05, 0.95))
quantile(D2006$v1, probs = c(0.05, 0.95))
quantile(C2007$v1, probs = c(0.05, 0.95))
bwplot(v2 ~ v1 | z, data = ex3, layout=c(3,2), X = ex3$v1b,
pch = "|",
par.settings = list(
plot.symbol = list(alpha = 1, col = "transparent",cex = 1,pch = 20)),
panel = function(x, y, ..., X, subscripts){
panel.grid(v = -1, h = 0)
panel.bwplot(x, y, ..., subscripts = subscripts)
X <- X[subscripts]
xmax =max(x)
X <- tapply(X, y, unique)
Y <- tapply(y, y, unique)
tg <- table(y)
panel.points(X, Y, cex=3, pch ="|" , col = "red")
#vcount <- tapply(v1, v2, length)
panel.text((xmax-0.2), (Y-0.15), labels = paste("N=", tg))
})
[[alternative HTML version deleted]]
------------------------------
Message: 38
Date: Wed, 13 Oct 2010 06:52:41 -0700 (PDT)
To: r-help at r-project.org
Subject: [R] NA with lmList
Message-ID: <745168.50562.qm at web38305.mail.mud.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1
Dear All,

I was trying to use the lmList function to get the lmList graphic but have a
problem creating the graphic. My data has missing values and an error occurred,
as stated below.
###################################################
library(nlme)
> a
schoolid spring score childid
1 1 0 550 345
2 1 1 568 345
3 1 0 560 456
4 1 1 NA 456
5 2 0 540 32
6 2 1 562 32
7 2 0 579 34
8 2 1 599 34
(lmlis1 <- lmList(score ~ childid | spring, data=a, na.action=T))
Error in UseMethod("getGroups") : no applicable method for "getGroups"
#######################################################
Could anybody advise me on how to write the command correctly, so that it
ignores the missing data?
Cheers
Fir
------------------------------
Message: 39
Date: Wed, 13 Oct 2010 07:51:46 -0700 (PDT)
From: Phil Spector <spector at stat.berkeley.edu>
Cc: r-help at r-project.org
Subject: Re: [R] NA with lmList
Message-ID: <alpine.DEB.2.00.1010130750560.2843 at springer.Berkeley.EDU>
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Please take a look at the documentation for lmList regarding
the na.action= argument. It should be a *function*, not TRUE
or FALSE. For example, try
lmList(score ~ childid | spring, data=a, na.action=na.omit)
- Phil Spector
Statistical Computing Facility
Department of Statistics
UC Berkeley
spector at stat.berkeley.edu
On Wed, 13 Oct 2010, FMH wrote:
> Dear All,
>
> I was trying to use the lmList function to get the lmList graphic but have a
> problem creating the graphic. My data has missing values and an error occurred,
> as stated below.
>
>
>
> ###################################################
>
> library(nlme)
>
>> a
> schoolid spring score childid
> 1 1 0 550 345
> 2 1 1 568 345
> 3 1 0 560 456
> 4 1 1 NA 456
> 5 2 0 540 32
> 6 2 1 562 32
> 7 2 0 579 34
> 8 2 1 599 34
>
>
> (lmlis1 <- lmList(score ~ childid | spring, data=a, na.action=T))
>
> Error in UseMethod("getGroups") : no applicable method for "getGroups"
>
> #######################################################
>
>
> Could anybody advise me on how to write the command correctly, so that it
> ignores the missing data?
>
> Cheers
> Fir
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 40
Date: Wed, 13 Oct 2010 09:49:59 -0500
From: Kurt_Helf at nps.gov
To: r-help at r-project.org
Subject: [R] strip month and year from MM/DD/YYYY format
Message-ID:
<OFB2D613C9.A1B0D0E8-ON862577BB.0051124F-862577BB.00517B37 at nps.gov>
Content-Type: text/plain; charset=US-ASCII
Greetings
I'm having difficulty with the strptime function. I can't seem to figure
out a way to strip the month (name) and year and create separate columns
from a column with MM/DD/YYYY formatted dates. Can anyone help?
Cheers
Kurt
***************************************************************
Kurt Lewis Helf, Ph.D.
Ecologist
EEO Counselor
National Park Service
Cumberland Piedmont Network
P.O. Box 8
Mammoth Cave, KY 42259
Ph: 270-758-2163
Lab: 270-758-2151
Fax: 270-758-2609
****************************************************************
Science, in constantly seeking real explanations, reveals the true majesty
of our world in all its complexity.
-Richard Dawkins
The scientific tradition is distinguished from the pre-scientific tradition
in having two layers. Like the latter it passes on its theories but it
also passes on a critical attitude towards them. The theories are passed
on not as dogmas but rather with the challenge to discuss them and improve
upon them.
-Karl Popper
...consider yourself a guest in the home of other creatures as significant
as yourself.
-Wayside at Wilderness Threshold in McKittrick Canyon, Guadalupe Mountains
National Park, TX
Cumberland Piedmont Network (CUPN) Homepage:
http://tiny.cc/e7cdx
CUPN Forest Pest Monitoring Website:
http://bit.ly/9rhUZQ
CUPN Cave Cricket Monitoring Website:
http://tiny.cc/ntcql
CUPN Cave Aquatic Biota Monitoring Website:
http://tiny.cc/n2z1o
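[A minimal sketch of one way to do what Kurt asks; the dates below are made
up, and note that %B gives the month name in the current locale.]

```r
# Split MM/DD/YYYY date strings into separate month-name and year columns.
dates <- c("10/13/2010", "01/05/2009")
d <- as.Date(dates, format = "%m/%d/%Y")
out <- data.frame(month = format(d, "%B"),  # month name (locale-dependent)
                  year  = format(d, "%Y"))  # four-digit year
out
```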
------------------------------
Message: 41
Date: Wed, 13 Oct 2010 10:52:20 -0400
From: Rajarshi Guha <rajarshi.guha at gmail.com>
To: Eric Hu <Eric.Hu at gilead.com>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] Pipeline pilot fingerprint package
Message-ID:
<AANLkTim25x7wvAZmVQjDCoU5A2bzBF1W_HW34EcDKqmx at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Tue, Oct 12, 2010 at 8:54 PM, Eric Hu <Eric.Hu at gilead.com> wrote:
> Hi,
>
> I am trying to see if I can use R to perform more rigorous regression
> analysis. I wonder if the fingerprint package is able to handle Pipeline
> Pilot fingerprints (ECFC6 etc.) now.
Currently no - does Pipeline Pilot output its ECFPs in a standard
format? If so, can you send me an example file? (Assuming they output
fp's for a single molecule on a single row, you could implement your
own line parser and supply it via the lf argument in fp.read. See
cdk.lf, moe.lf or bci.lf for examples.)
The other issue is how one evaluates similarity between variable
length feature fingerprints, such as ECFPs. One approach is to map the
features into a fixed length bit string. Another approach is to just
look at intersections and unions of features to evaluate the Tanimoto
score. It seems to me that the former leads to loss of resolution and
that the latter could lead to generally low Tanimoto scores.
Do you know what Pipeline Pilot does?
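[The intersection/union approach for variable-length fingerprints is easy
to sketch in base R; the feature IDs below are invented for illustration.]

```r
# Tanimoto (Jaccard) score on raw feature sets: size of the intersection
# over size of the union -- no mapping to a fixed-length bit string needed.
tanimoto <- function(a, b) length(intersect(a, b)) / length(union(a, b))

fp1 <- c(1001, 1002, 1005, 1009)  # hypothetical ECFP feature IDs
fp2 <- c(1001, 1005, 1011)
tanimoto(fp1, fp2)                # 2 shared features / 5 distinct = 0.4
```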
--
Rajarshi Guha
NIH Chemical Genomics Center
------------------------------
Message: 42
Date: Wed, 13 Oct 2010 17:08:42 +0200
From: max.e.brown at gmail.com
To: r-help at stat.math.ethz.ch
Subject: Re: [R] R: robust standard errors for panel data
Message-ID: <m21v7u3zb9.fsf at max.servers.cemfi>
Content-Type: text/plain; charset=us-ascii
Thanks Giovanni and Achim!
I will try out some of the things you suggested. Let's see how far I get :-)
------------------------------
Message: 43
Date: Wed, 13 Oct 2010 11:13:56 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Christophe Bouffioux <christophe.b00 at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] bwplot change whiskers position to percentile 5 and
P95
Message-ID: <9A437D82-A2FB-42EF-A2E9-CF632FCA1BC8 at comcast.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed; delsp=yes
On Oct 13, 2010, at 10:05 AM, Christophe Bouffioux wrote:
> Dear R-community,
>
> Using bwplot, how can I put the whiskers at percentile 5 and
> percentile 95,
> in place of the default position coef=1.5?
>
> Using panel=panel.bwstrip, whiskerpos=0.05, from the package
> agsemisc gives
> satisfaction, but changes the appearance of my boxplot and works
> with an old
> version of R, what I don't want, and I didn't find the option in
> box.umbrella parameters
Nope, you won't find it even if you search harder, but you do have a
lattice path forward. Just as base function boxplot() does the
calculations and then plots with bxp(), by default panel.bwplot sends
the data to boxplot.stats, but panel.bwplot also allows you to specify
an alternate function that returns plotting parameters differently as
long as those conforms to the requirements for structure. You can look
at boxplot.stats (it's not that big) and then construct an
alternative. The line you would need to alter would be the one
starting with: stats<-stats::fivenum(...), since you are changing the
values returned by fivenum(). You might get away with just changing
stats[1] and stats[5] to your revised specifications, although it has
occurred to me that you might get some of those "out" dots inside your
whiskers. (Fixing that would not be too hard once you are inside
boxplot.stats().)
It seemed to work for me with your data (at least to the extent of
plotting a nice 3 x 2 panel display). All I did was define an nboxplot.stats
by inserting this line after the line cited above:
stats[c(1,5)]<- quantile(x, probs=c(0.05, 0.95))
and then added the argument stats = nboxplot.stats inside your
panel.bwplot() call.
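A minimal sketch of the approach described above. Rather than editing a copy of boxplot.stats() internally, this wraps it and overwrites the whisker values afterwards; the name nboxplot.stats follows the example above, and the out-point filtering addresses the "dots inside your whiskers" issue mentioned:

```r
# Sketch: boxplot.stats() with whiskers at the 5th/95th percentiles.
nboxplot.stats <- function(x, coef = 1.5, do.conf = TRUE, do.out = TRUE) {
  stats <- boxplot.stats(x, coef = coef, do.conf = do.conf, do.out = do.out)
  stats$stats[c(1, 5)] <- quantile(x, probs = c(0.05, 0.95), na.rm = TRUE)
  # drop "out" points that now fall inside the widened/narrowed whiskers
  stats$out <- stats$out[stats$out < stats$stats[1] |
                         stats$out > stats$stats[5]]
  stats
}
```

It would then be passed along inside the panel function, e.g. panel.bwplot(x, y, ..., stats = nboxplot.stats), since panel.bwplot's stats argument accepts any function returning a boxplot.stats-shaped list.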
--
David.
> Many thanks
> Christophe
>
> Here is the code:
>
> library(lattice)
> ex <- data.frame(v1 = log(abs(rt(180, 3)) + 1),
> v2 = rep(c("2007", "2006", "2005"), 60),
> z = rep(c("a", "b", "c", "d", "e", "f"), e = 30))
>
> ex2 <- data.frame(v1b = log(abs(rt(18, 3)) + 1),
> v2 = rep(c("2007", "2006", "2005"), 6),
> z = rep(c("a", "b", "c", "d", "e", "f"), e = 3))
> ex3 <- merge(ex, ex2, by=c("v2","z"))
> D2007 <- ex3[ex3$z=="d" & ex3$v2==2007, ]
> D2006 <- ex3[ex3$z=="d" & ex3$v2==2006, ]
> C2007 <- ex3[ex3$z=="c" & ex3$v2==2007, ]
> quantile(D2007$v1, probs = c(0.05, 0.95))
> quantile(D2006$v1, probs = c(0.05, 0.95))
> quantile(C2007$v1, probs = c(0.05, 0.95))
>
> bwplot(v2 ~ v1 | z, data = ex3, layout=c(3,2), X = ex3$v1b,
> pch = "|",
> par.settings = list(
> plot.symbol = list(alpha = 1, col = "transparent",cex = 1,pch = 20)),
> panel = function(x, y, ..., X, subscripts){
> panel.grid(v = -1, h = 0)
> panel.bwplot(x, y, ..., subscripts = subscripts)
> X <- X[subscripts]
> xmax =max(x)
> X <- tapply(X, y, unique)
> Y <- tapply(y, y, unique)
> tg <- table(y)
> panel.points(X, Y, cex=3, pch ="|" , col = "red")
> #vcount <- tapply(v1, v2, length)
> panel.text((xmax-0.2), (Y-0.15), labels = paste("N=", tg))
> })
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 44
Date: Wed, 13 Oct 2010 08:19:21 -0700 (PDT)
From: Kay Cichini <Kay.Cichini at uibk.ac.at>
To: r-help at r-project.org
Subject: [R] interaction contrasts
Message-ID: <1286983161935-2993845.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
hello list,
i'd very much appreciate help with setting up the
contrast for a 2-factorial crossed design.
here is a toy example:
library(multcomp)
dat<-data.frame(fac1=gl(4,8,labels=LETTERS[1:4]),
fac2=rep(c("I","II"),16),y=rnorm(32,1,1))
mod<-lm(y~fac1*fac2,data=dat)
## the contrasts i'm interested in:
c1<-rbind("fac2-effect in A"=c(0,1,0,0,0,0,0,0),
"fac2-effect in B"=c(0,1,0,0,0,1,0,0),
"fac2-effect in C"=c(0,1,0,0,0,0,1,0),
"fac2-effect in D"=c(0,1,0,0,0,0,0,1),
"fac2-effect, A*B"=c(0,0,0,0,0,1,0,0),
"fac2-effect, A*C"=c(0,0,0,0,0,0,1,0),
"fac2-effect, A*D"=c(0,0,0,0,0,0,0,1))
summary(glht(mod,c1))
## now i want to add the remaining combinations
## "fac2, B*C"
## "fac2, B*D"
## "fac2, C*D"
## to the simultaneous tests to see whether the effects
## of fac2 within the levels of fac1 differ between
## each combination of the levels of fac1, or not ??
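One way to complete the set (a sketch, assuming the model's default treatment contrasts, under which the fac2 effects within levels B, C, D differ from each other only through the interaction coefficients in positions 6-8):

```r
## Pairwise differences of the fac1-specific fac2 effects reduce to
## differences of the interaction coefficients:
c2 <- rbind("fac2-effect, B*C" = c(0, 0, 0, 0, 0, 1, -1,  0),
            "fac2-effect, B*D" = c(0, 0, 0, 0, 0, 1,  0, -1),
            "fac2-effect, C*D" = c(0, 0, 0, 0, 0, 0,  1, -1))
summary(glht(mod, rbind(c1, c2)))  # all ten comparisons simultaneously
```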
[[elided Yahoo spam]]
yours,
kay
-----
------------------------
Kay Cichini
Postgraduate student
Institute of Botany
Univ. of Innsbruck
------------------------
--
View this message in context:
http://r.789695.n4.nabble.com/interaction-contrasts-tp2993845p2993845.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 45
Date: Wed, 13 Oct 2010 09:32:31 -0600
From: Greg Snow <Greg.Snow at imail.org>
To: li li <hannah.hlx at gmail.com>, r-help <r-help at r-project.org>
Subject: Re: [R] extract rows of a matrix
Message-ID:
<B37C0A15B8FB3C468B5BC7EBC7DA14CC633E8150FF at LP-EXMBVS10.CO.IHC.COM>
Content-Type: text/plain; charset="us-ascii"
> newmat <- oldmat[ c(rep(FALSE,19),TRUE), ]
Or
> newmat <- oldmat[ seq(20, nrow(oldmat), 20), ]
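Both one-liners select rows 20, 40, 60, ..., which a quick sanity check on a small matrix confirms:

```r
# Verify the recycled-logical and explicit-sequence idioms agree:
oldmat <- matrix(seq_len(100 * 5), nrow = 100)
a <- oldmat[c(rep(FALSE, 19), TRUE), ]    # logical index recycled down the rows
b <- oldmat[seq(20, nrow(oldmat), 20), ]  # explicit row positions
identical(a, b)   # TRUE: both give rows 20, 40, 60, 80, 100
```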
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
801.408.8111
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> project.org] On Behalf Of li li
> Sent: Tuesday, October 12, 2010 2:00 PM
> To: r-help
> Subject: [R] extract rows of a matrix
>
> Hi all,
> I want to extract every 20th row of a big matrix, say 10000 by 1000.
> What is the simplest way to do this?
[[elided Yahoo spam]]
> Hannah
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 46
Date: Wed, 13 Oct 2010 09:44:55 -0600
From: Jeremy Olson <Jeremy.Olson at lightsource.ca>
To: "r-help at r-project.org" <r-help at r-project.org>
Subject: [R] overlaying multiple contour plots
Message-ID:
<3C2F98D1F41963409E88FAD3274F7CC506379678D4 at srv-mail-04.clsi.ca>
Content-Type: text/plain
Dear All,
I have 4 or 5 contour plots that I need to overlay. Currently they are maps
showing hot and cold areas for specific elements.
I would like to combine these plots into one map, where each color will
correspond to a different element and you would be able to see the areas
that each element occupies on this map.
Is there a way to do this in 'R'? I am fairly new to this software, so any
help would be much appreciated.
Thank you in advance,
Jeremy
[[alternative HTML version deleted]]
------------------------------
Message: 47
Date: Wed, 13 Oct 2010 09:55:56 -0600
From: Greg Snow <Greg.Snow at imail.org>
To: <r-help at r-project.org>
Subject: Re: [R] Read Particular Cells within Excel
Message-ID:
<B37C0A15B8FB3C468B5BC7EBC7DA14CC633E81513C at LP-EXMBVS10.CO.IHC.COM>
Content-Type: text/plain; charset="us-ascii"
I don't know of any tools that would do this generally (RODBC could do it if
there were specific column names in the 1st row and a given column with
information to identify the rows, but this seems unlikely from your
description). A couple of possibilities:
In Excel you can highlight the columns of interest, right click and choose
'copy', then in R type:
> mydata <- read.delim('clipboard')
And you will have the data. This is easy and straightforward, but requires
clicking and therefore cannot be automated.
The read.xls function in the gdata package uses Perl to extract data from an
excel file and import it into R, it has an option for setting the beginning
row to read from, but not the general section that you ask for. The Perl
tools however can grab specific ranges of cells, you could adapt the
read.xls function (and Perl script) to do what you want in a more automated
way.
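As a rough, untested sketch of the non-Perl-editing route: read.xls() in gdata passes extra arguments through to read.table(), so a rectangular range such as B50:D100 can be approximated with skip/nrows plus column subsetting (assuming the sheet's data begins in column A; "report.xls" is a placeholder filename):

```r
library(gdata)  # requires Perl on the system
block <- read.xls("report.xls", sheet = "Inputs",
                  header = FALSE,
                  skip   = 49,   # skip rows 1..49, start at row 50
                  nrows  = 51)   # read rows 50..100
block <- block[, 2:4]            # keep columns B..D
```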
Good luck,
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
801.408.8111
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> project.org] On Behalf Of Jeevan Duggempudi
> Sent: Tuesday, October 12, 2010 6:17 PM
> To: r-help at r-project.org
> Subject: [R] Read Particular Cells within Excel
>
> Hello all,
>
> I have a business user who generates monthly reports in MS Excel in a
> particular
> format. The data I need is present in different portions of this excel
> file. Is
> there a way to read different cells from a particular excel worksheet?
> i.e.,
> cells b50:d100 in the Inputs worksheet. I am investigating
> odbcConnectExcel but
> did not yet see such capability.
>
>
> Appreciate your help.
>
> Jeevan
>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 48
Date: Wed, 13 Oct 2010 08:57:03 -0700 (PDT)
From: Manta <mantino84 at libero.it>
To: r-help at r-project.org
Subject: [R] Pasting function arguments and strings
Message-ID: <1286985423721-2993905.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Dear R community,
I am struggling a bit with a probably fairly simple task. I need to use some
already existing functions as arguments for a new function that I am going to
create. 'dataset' is an argument, and it comprises objects named
'mean_test', 'sd_test', 'kurt_test' and so on. 'arg1' tells what object I
want (mean, sd, kurt) while 'arg2' tells what to do to the object taken in
'arg1' (again could be mean, sd, but also any other operation/function).
I was thinking about something like:
myfunction<-function(dataset,arg1,arg2)
{
attach(dataset)
result=arg2(paste(arg1,"_test"))
return(result)
}
But this of course does not work! 'arg1' is of type closure and I cannot set
it as character. Moreover, paste will create a string, and I do not think I
can pass a string to the function in 'arg2'. What should I do?
Thanks,
Marco
--
View this message in context:
http://r.789695.n4.nabble.com/Pasting-function-arguments-and-strings-tp29939
05p2993905.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 49
Date: Wed, 13 Oct 2010 10:01:19 -0600
From: Greg Snow <Greg.Snow at imail.org>
To: Joel <joda2457 at student.uu.se>, "r-help at r-project.org"
<r-help at r-project.org>
Subject: Re: [R] Plot table as table
Message-ID:
<B37C0A15B8FB3C468B5BC7EBC7DA14CC633E815148 at LP-EXMBVS10.CO.IHC.COM>
Content-Type: text/plain; charset="us-ascii"
Also look at textplot in the gplots package and addtable2plot in the plotrix
package.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
801.408.8111
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> project.org] On Behalf Of Joel
> Sent: Wednesday, October 13, 2010 2:41 AM
> To: r-help at r-project.org
> Subject: Re: [R] Plot table as table
>
>
> Thx for the ideas will try them out.
>
> Have a wonderful day
>
> //Joel
> --
> View this message in context: http://r.789695.n4.nabble.com/Plot-table-
> as-table-tp2993270p2993344.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 50
Date: Wed, 13 Oct 2010 12:08:06 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Jeremy Olson <Jeremy.Olson at lightsource.ca>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] overlaying multiple contour plots
Message-ID: <D5A2A83B-CC61-4624-8518-11BDDF2384AE at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Oct 13, 2010, at 11:44 AM, Jeremy Olson wrote:
> Dear All,
>
> I have 4 or 5 contour plots that I need to overlay. Currently they
> are maps showing hot and cold areas for specific elements.
Providing paste-able examples is the standard way to present such
problems.
> I would like to combine these plots into one map, where each color
> will correspond to a different element and you would be able to see
> the areas that each element occupies on this map.
Hopefully there is not much overlap, so dealing with the
collisions/overlap is not a major issue?
> Is there a way to do this in 'R'? I am fairly new to this software,
> so any help would be much appreciated.
> Thank you in advance,
First, specify the plotting paradigm to be used (base, lattice, ggplot2).
If base, then use the "new=TRUE" parameter in par (and don't complain to
me if you think new=TRUE is a weird construction for keeping the old
plot active and adding additional elements. I find it backwards to how
I think "new" would be interpreted.) You will need to repeatedly invoke
par(new=TRUE) before each addition.
If lattice, probably panel.superpose(), which I don't really
understand very well, but I'm learning slowly.
If ggplot2, someone with experience will come along and advise us.
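For the base-graphics route specifically, contour() also accepts add=TRUE, which avoids the duplicated-axes issue of par(new=TRUE). A small sketch with made-up data:

```r
# Overlay two contour maps, one colour per "element" (invented data):
x <- y <- seq(-3, 3, length.out = 50)
z1 <- outer(x, y, function(a, b) dnorm(a) * dnorm(b))
z2 <- outer(x, y, function(a, b) dnorm(a - 1) * dnorm(b + 1))
contour(x, y, z1, col = "red")               # first element
contour(x, y, z2, col = "blue", add = TRUE)  # second element, same map
```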
--
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 51
Date: Wed, 13 Oct 2010 09:10:17 -0700
From: Bert Gunter <gunter.berton at gene.com>
To: Manta <mantino84 at libero.it>
Cc: r-help at r-project.org
Subject: Re: [R] Pasting function arguments and strings
Message-ID:
<AANLkTi=cw-y1-dhRv+xCX212Np-NF4L=WuHXk8WTj8up at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
If I understand you correctly, just arg2(arg1) will work fine...One of
the nice features of R.
Example:
f <- function(dat,fun=mean,...)
{
fun(dat, ...) ## ... allows extra arguments to fun
}
f( rnorm(10), trim=.05) ## trimmed mean
x <- 1:10
f(x, fun = max,na.rm=TRUE)
Cheers,
Bert
On Wed, Oct 13, 2010 at 8:57 AM, Manta <mantino84 at libero.it> wrote:
>
> Dear R community,
>
> I am struggling a bit with a probably fairly simple task. I need to use
some
> already existing functions as argument for a new function that I am going
to
> create. 'dataset' is an argument, and it comprises objects named
> 'mean_test', 'sd_test', 'kurt_test' and so on. 'arg1' tells what object I
> want (mean, sd, kurt) while 'arg2' tells what to do to the object taken in
> 'arg1' (again could be mean, sd, but also any other operation/function).
>
> I was thinking about something like:
>
> myfunction<-function(dataset,arg1,arg2)
>
> {
>
> attach(dataset)
> result=arg2(paste(arg1,"_test"))
>
> return(result)
>
> }
>
> But this of course does not work! 'arg1' is of type closure and I cannot
set
> it as character. Moreover, paste will create a string, and I do not think
I
> can pass a string to the function of 'arg2'. How should I do?
>
> Thanks,
> Marco
> --
> View this message in context:
http://r.789695.n4.nabble.com/Pasting-function-arguments-and-strings-tp29939
05p2993905.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Bert Gunter
Genentech Nonclinical Biostatistics
------------------------------
Message: 52
Date: Wed, 13 Oct 2010 09:28:31 -0700 (PDT)
From: Manta <mantino84 at libero.it>
To: r-help at r-project.org
Subject: Re: [R] Pasting function arguments and strings
Message-ID: <1286987311760-2993976.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thanks for your quick reply Bert, although I doubt it works.
The reason is that the names of the objects of the dataset all end with the
suffix '_test' and therefore I need to attach/paste/glue this suffix to the
'arg2' of the function. Any other idea?
--
View this message in context:
http://r.789695.n4.nabble.com/Pasting-function-arguments-and-strings-tp29939
05p2993976.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 53
Date: Wed, 13 Oct 2010 09:30:15 -0700 (PDT)
From: Steve Swope <steve at pgwg.com>
To: r-help at r-project.org
Subject: Re: [R] Plotting Y axis labels within a loop
Message-ID:
<15DC157F6456D742BE3781BE7559EFF8010BABB5 at PGGSBS.PGWG.LOCAL>
Content-Type: text/plain
Got it, thanks!
From: David Winsemius [via R]
[mailto:ml-node+2992845-2070125287-199206 at n4.nabble.com]
Sent: Tuesday, October 12, 2010 3:28 PM
To: Steve Swope
Subject: Re: Plotting Y axis labels within a loop
On Oct 12, 2010, at 5:54 PM, Steve Swope wrote:
>
> When I plot y axis labels with in a loop they (I) get confused. Here
> is some
> sample code:
>
> Fe<-c(1.1, 4.5, 7.2, 8.8)
> Mn<-c(9.6, 7.2, 5.3, 2.1)
> Cd<-c(2.2, 3.4, 6.1, 3.2)
> FeMnCd<-data.frame(Fe, Mn, Cd)
>
> par(mfrow=c(2,2))
>
> for(i in FeMnCd)plot(i, xlab="Event",ylab=colnames(FeMnCd)[i])
>
> The more plots per page, the crazier it gets! TIA
Think about what will be assigned to "i" and perhaps even print it to
your console, since it's not what you apparently expect:
for(i in FeMnCd){print(i); plot(i, xlab="Event",ylab=colnames(FeMnCd)
[i]) }
The source of your confusion may become more clear.
>
> Steve
> --
> View this message in context:
http://r.789695.n4.nabble.com/Plotting-Y-axis-labels-within-a-loop-tp299
2813p2992813.html
<http://r.789695.n4.nabble.com/Plotting-Y-axis-labels-within-a-loop-tp29
92813p2992813.html?by-user=t>
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
________________________________
View message @
http://r.789695.n4.nabble.com/Plotting-Y-axis-labels-within-a-loop-tp299
2813p2992845.html
To unsubscribe from Plotting Y axis labels within a loop, click here
<http://r.789695.n4.nabble.com/template/TplServlet.jtp?tpl=unsubscribe_b
y_code&node=2992813&code=c3RldmVAcGd3Zy5jb218Mjk5MjgxM3wtMTYxNzE0MDA0Mw=
=> .
--
View this message in context:
http://r.789695.n4.nabble.com/Plotting-Y-axis-labels-within-a-loop-tp2992813
p2993979.html
Sent from the R help mailing list archive at Nabble.com.
[[alternative HTML version deleted]]
------------------------------
Message: 54
Date: Wed, 13 Oct 2010 09:35:47 -0700
From: Peter Langfelder <peter.langfelder at gmail.com>
To: Manta <mantino84 at libero.it>
Cc: r-help at r-project.org
Subject: Re: [R] Pasting function arguments and strings
Message-ID:
<AANLkTimw7oXoPJEtzYyd98Bx005QL-EWwHh1_UiS8By4 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
From what I read, you want something like this:
myfunction<-function(dataset,arg1,arg2)
{
func = match.fun(arg2)
argument = dataset[, match(paste(arg1,"_test", sep=""), names(dataset))]
result=func(argument)
return(result)
}
On Wed, Oct 13, 2010 at 9:28 AM, Manta <mantino84 at libero.it> wrote:
>
> Thaks for your quick reply Bert, although I doubt it works.
>
> The reason is that the names of the objects of the dataset, all end with
the
> sufix '_test' and therefore I need to attach/paste/glue this suffix to the
> 'arg2' of the function. Any other idea?
> --
> View this message in context:
http://r.789695.n4.nabble.com/Pasting-function-arguments-and-strings-tp29939
05p2993976.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 55
Date: Wed, 13 Oct 2010 09:39:10 -0700
From: "William Dunlap" <wdunlap at tibco.com>
To: "Manta" <mantino84 at libero.it>, <r-help at r-project.org>
Subject: Re: [R] Pasting function arguments and strings
Message-ID:
<77EB52C6DD32BA4D87471DCD70C8D700038B1A51 at NA-PA-VBE03.na.tibco.com>
Content-Type: text/plain; charset="us-ascii"
Is this the sort of thing you are looking for?
> f <- function(testName, dataset, ...) {
+ testFunc <- match.fun(paste(testName, "_test", sep=""))
+ testFunc(dataset, ...)
+ }
> m_test <- function(x) "the m test"
> p_test <- function(x) "the p test"
> f("m", 17)
[1] "the m test"
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> -----Original Message-----
> From: r-help-bounces at r-project.org
> [mailto:r-help-bounces at r-project.org] On Behalf Of Manta
> Sent: Wednesday, October 13, 2010 9:29 AM
> To: r-help at r-project.org
> Subject: Re: [R] Pasting function arguments and strings
>
>
> Thaks for your quick reply Bert, although I doubt it works.
>
> The reason is that the names of the objects of the dataset,
> all end with the
> sufix '_test' and therefore I need to attach/paste/glue this
> suffix to the
> 'arg2' of the function. Any other idea?
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Pasting-function-arguments-and-s
trings-tp2993905p2993976.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 56
Date: Wed, 13 Oct 2010 09:54:33 -0700
From: "William Dunlap" <wdunlap at tibco.com>
To: "David" <david.maillists at gmail.com>, <r-help at r-project.org>
Subject: Re: [R] "Memory not mapped" when using .C, problem in Mac but
not in Linux
Message-ID:
<77EB52C6DD32BA4D87471DCD70C8D700038B1A61 at NA-PA-VBE03.na.tibco.com>
Content-Type: text/plain; charset="us-ascii"
This often happens when your C code uses memory that
it did not allocate, particularly when it reads or
writes just a little beyond the end of a memory block.
On some platforms or if you are lucky there is unused
memory between blocks of allocated memory and you don't
see a problem. Other machines may pack allocated memory
blocks more tightly and writing off the end of an array
corrupts another array, leading to the program crashing.
(Or reading off the end of an array may pick up data from
the next array, which leads to similar crashes later on.)
You may also be reading memory which has never been written
to, hence you are using random data, which could corrupt
things.
Use a program like valgrind (on Linux, free) or Purify (on
various platforms, but pricy) to detect memory misuse
in C/C++ code. As long as your code isn't #ifdef'ed for
various platforms, you can use valgrind to detect and fix
memory misuse on Linux and you will find that overt
problems on other platforms will go away.
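For R specifically, an R session or script can be run directly under valgrind via R's -d flag (this is the invocation documented in the "Writing R Extensions" manual; "myscript.R" is a placeholder for whatever script triggers the crash):

```sh
# Run the crashing script under valgrind (Linux):
R -d valgrind --vanilla < myscript.R

# More detail on where each bad access originates:
R -d "valgrind --leak-check=full --track-origins=yes" --vanilla < myscript.R
```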
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> -----Original Message-----
> From: r-help-bounces at r-project.org
> [mailto:r-help-bounces at r-project.org] On Behalf Of David
> Sent: Wednesday, October 13, 2010 6:47 AM
> To: r-help at r-project.org
> Subject: [R] "Memory not mapped" when using .C,problem in Mac
> but not in Linux
>
> Hello,
>
> I am aware this may be an obscure problem difficult to advice
> about, but
> just in case... I am calling a C function from R on an iMac
> (almost shining:
> bought by my institution this year) and gives a "memory not
> mapped" error.
>
> Nevertheless, exactly the same code runs without problem in a powerful
> computer using SuSE Linux, but also on my laptop of 2007, 32
> bits, 2 GB RAM,
> running Ubuntu. My supervisor says that he can run my code on
> his iMac (a
> bit older than mine) without problem.
>
> I have upgraded to the latest version of R, and I have tried
> compiling the C
> code with the latest version of gcc (obtained from MacPorts),
> but the error
> persists.
>
> Any ideas?
>
> Thank you very much in advance,
>
> David
> Can you please Cc to me any replies, just in case I may miss
> any of them
> among the whole amount of emails :-) ?
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 57
Date: Wed, 13 Oct 2010 09:56:17 -0700 (PDT)
To: r-help at r-project.org
Subject: [R] Extracting index in character array.
Message-ID: <1286988977155-2994029.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
If I have a character array:
list = c("A", "B", "C")
how do I access the third element without doing list[3]? Can't I find the
index of "C" using a particular function?
--
View this message in context:
http://r.789695.n4.nabble.com/Extracting-index-in-character-array-tp2994029p
2994029.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 58
Date: Wed, 13 Oct 2010 12:11:54 -0500
From: Christian Raschke <crasch2 at tigers.lsu.edu>
To: r-help at r-project.org
Subject: Re: [R] Extracting index in character array.
Message-ID: <4CB5E85A.40705 at tigers.lsu.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> which(list=="C")
[1] 3
See ?which
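match() is an alternative when only the first matching position is wanted (also note that `list` is better avoided as a variable name, since it masks base::list):

```r
x <- c("A", "B", "C")
which(x == "C")    # 3 (all matching positions)
match("C", x)      # 3 (first match only)
x[match("C", x)]   # "C"
```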
On 10/13/2010 11:56 AM, lord12 wrote:
> If I have a character array:
>
> list = c("A", "B", "C")
>
> how do I access the third element without doing list[3]. Can't I find the
> index of "C" using a particular function?
>
--
Christian Raschke
Department of Economics
and
ISDS Research Lab (HSRG)
Louisiana State University
Patrick Taylor Hall, Rm 2128
Baton Rouge, LA 70803
crasch2 at lsu.edu
------------------------------
Message: 59
Date: Wed, 13 Oct 2010 13:13:03 -0400
From: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
To: "r-help at r-project.org" <r-help at r-project.org>
Subject: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID:
<A93938CC72687245835AC0BE10199F7C84D47D04 at HSC-CMS01.ad.ufl.edu>
Content-Type: text/plain; charset="us-ascii"
Hello all,
I had a very strange looking problem that turned out to be due to unexpected
(by me at least) format changes to one of my data files. We have a small
lab study in which each run is represented by a row in a tab-delimited file;
each row identifies a repetition of the experiment and associates it with
some subjective measurements and times from our notes that get used to index
another file with lots of automatically collected data. In short, nothing
shocking.
In a moment of weakness, I opened the file using (I think it's version 3.2)
of OpenOffice Calc to edit something that I had mangled when I first entered
it, saved it (apparently the mistake), and reran my analysis code. The
results were goofy, and the problem was in my code that runs before R ever
sees the data. That code was confused by things that I would like to ensure
don't happen again, and I suspect that some of you might have thoughts on
it.
The problems specifically:
(1) OO seems to be a little stingy about producing tab-delimited text; there
is stuff online about using the csv and editing the filter and folks
(presumably like us) saying that it deserves to be a separate option.
(2) Dates that I had formatted as YYYY got chopped to YY (did we not learn
anything last time?<g>) and times that I had formatted in 24 hours ended up
AM/PM.
Have any of you found a nice (or at least predictable) way to use OO Calc to
edit files like this? If it insists on thinking for me, I wish it would
think in 24 hour time and 4 digit years :) I work on Linux, so Excel is off
the table, but another spreadsheet or text editor would be a viable option,
as would configuration changes to Calc.
Bill
------------------------------
Message: 60
Date: Wed, 13 Oct 2010 14:20:30 -0300
From: Henrique Dallazuanna <wwwhsd at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Read Particular Cells within Excel
Message-ID:
<AANLkTimCSt9QvV8Hpz1Tmg-fTYARMvUJk1g+wkn5KXyQ at mail.gmail.com>
Content-Type: text/plain
Try this:
library(RDCOMClient)
xl <- COMCreate("Excel.Application")
wk <- xl$Workbooks()$Open("Book1.xlsx")
do.call(cbind, wk$Sheets(1)$Range("B50:C55")$Value())
On Tue, Oct 12, 2010 at 9:17 PM, Jeevan Duggempudi wrote:
> Hello all,
>
> I have a business user who generates monthly reports in MS Excel in a
> particular
> format. The data I need is present in different portions of this excel
> file. Is
> there a way to read different cells from a particular excel worksheet?
> i.e.,
> cells b50:d100 in the Inputs worksheet. I am investigating
odbcConnectExcel
> but
> did not yet see such capability.
>
>
> Appreciate your help.
>
> Jeevan
>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Henrique Dallazuanna
Curitiba-Parana-Brasil
25° 25' 40" S 49° 16' 22" O
[[alternative HTML version deleted]]
------------------------------
Message: 61
Date: Wed, 13 Oct 2010 19:25:25 +0200
From: Łukasz Ręcławowicz <lukasz.reclawowicz at gmail.com>
To: zhu yao <mailzhuyao at gmail.com>
Cc: R-project help <r-help at r-project.org>
Subject: Re: [R] bootstrap in pROC package
Message-ID:
<AANLkTi=XQ-2AJ3eQTvL3wB_tjSZ8srruxYbDWuEL+q+k at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-2
Try:
ci.auc(all$D,all$pre,m="b")
2010/10/13 zhu yao <mailzhuyao at gmail.com>:
> Dear useRs:
>
> I use pROC package to compute the bootstrap C.I. of AUC.
>
> The command was as follows:
>
> roc1<-roc(all$D,all$pre,ci=TRUE,boot.n=200)
--
Have a nice day
------------------------------
Message: 62
Date: Wed, 13 Oct 2010 10:39:59 -0700
From: Albyn Jones <jones at reed.edu>
To: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID: <20101013173959.GA12088 at reed.edu>
Content-Type: text/plain; charset=us-ascii
How about emacs?
albyn
On Wed, Oct 13, 2010 at 01:13:03PM -0400, Schwab,Wilhelm K wrote:
> Hello all,
> <.....>
> Have any of you found a nice (or at least predictable) way to use OO Calc
to edit files like this? If it insists on thinking for me, I wish it would
think in 24 hour time and 4 digit years :) I work on Linux, so Excel is off
the table, but another spreadsheet or text editor would be a viable option,
as would configuration changes to Calc.
>
> Bill
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Albyn Jones
Reed College
jones at reed.edu
------------------------------
Message: 63
Date: Wed, 13 Oct 2010 10:41:02 -0700
From: Peter Langfelder <peter.langfelder at gmail.com>
To: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID:
<AANLkTi=+fB9+oxNEh2GoAzhvNXKyvfvGxcD_25oJSEu0 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Wed, Oct 13, 2010 at 10:13 AM, Schwab,Wilhelm K
<bschwab at anest.ufl.edu> wrote:
> Hello all,
>
> I had a very strange looking problem that turned out to be due to
unexpected (by me at least) format changes to one of my data files. We have
a small lab study in which each run is represented by a row in a
tab-delimited file; each row identifies a repetition of the experiment and
associates it with some subjective measurements and times from our notes
that get used to index another file with lots of automatically collected
data. In short, nothing shocking.
>
> In a moment of weakness, I opened the file using (I think it's version
3.2) of OpenOffice Calc to edit something that I had mangled when I first
entered it, saved it (apparently the mistake), and reran my analysis code.
The results were goofy, and the problem was in my code that runs before R
ever sees the data. ?That code was confused by things that I would like to
ensure don't happen again, and I suspect that some of you might have
thoughts on it.
>
> The problems specifically:
>
> (1) OO seems to be a little stingy about producing tab-delimited text;
there is stuff online about using the csv and editing the filter and folks
(presumably like us) saying that it deserves to be a separate option.
>
> (2) Dates that I had formatted as YYYY got chopped to YY (did we not learn
anything last time?<g>) and times that I had formatted in 24 hours ended up
AM/PM.
>
> Have any of you found a nice (or at least predictable) way to use OO Calc
to edit files like this? If it insists on thinking for me, I wish it would
think in 24 hour time and 4 digit years :) I work on Linux, so Excel is off
the table, but another spreadsheet or text editor would be a viable option,
as would configuration changes to Calc.
No idea about Calc, I use it regularly but only to view files (and
that mostly csv, not tab-delimited). The most primitive solution is to
use a plain text editor such as vi that will save everything as it
loaded it except for what you change. The second most primitive idea
(or maybe not so primitive after all) is to read the table into R and
manually fix it there such as table$column[row] = "ABCD" (this is my
favorite way of changing things :)). The third most primitive idea
which I have actually never used but which may be viable is to load it
into R and use the fix() function that pulls up a rather primitive but
functional data editor.
Peter
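A minimal sketch of the second approach (read the file into R, patch the one bad cell, and write it back tab-delimited; the file name and column here are hypothetical):

```r
# Read the tab-delimited file without letting R reinterpret anything
df <- read.delim("runs.txt", colClasses = "character")
# Patch the single mangled cell in place
df$start_time[3] <- "14:05"
# Write back tab-delimited, with no quoting or row names added
write.table(df, "runs.txt", sep = "\t", quote = FALSE, row.names = FALSE)
```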
------------------------------
Message: 64
Date: Wed, 13 Oct 2010 12:50:59 -0500
From: Marc Schwartz <marc_schwartz at me.com>
To: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID: <DAD89DF4-090B-499B-9FC3-671888C977CA at me.com>
Content-Type: text/plain; CHARSET=US-ASCII
On Oct 13, 2010, at 12:13 PM, Schwab,Wilhelm K wrote:
> Hello all,
>
> I had a very strange looking problem that turned out to be due to
unexpected (by me at least) format changes to one of my data files. We have
a small lab study in which each run is represented by a row in a
tab-delimited file; each row identifies a repetition of the experiment and
associates it with some subjective measurements and times from our notes
that get used to index another file with lots of automatically collected
data. In short, nothing shocking.
>
> In a moment of weakness, I opened the file using (I think it's version
3.2) of OpenOffice Calc to edit something that I had mangled when I first
entered it, saved it (apparently the mistake), and reran my analysis code.
The results were goofy, and the problem was in my code that runs before R
ever sees the data. That code was confused by things that I would like to
ensure don't happen again, and I suspect that some of you might have
thoughts on it.
>
> The problems specifically:
>
> (1) OO seems to be a little stingy about producing tab-delimited text;
there is stuff online about using the csv and editing the filter and folks
(presumably like us) saying that it deserves to be a separate option.
>
> (2) Dates that I had formatted as YYYY got chopped to YY (did we not learn
anything last time?<g>) and times that I had formatted in 24 hours ended up
AM/PM.
>
> Have any of you found a nice (or at least predictable) way to use OO Calc
to edit files like this? If it insists on thinking for me, I wish it would
think in 24 hour time and 4 digit years :) I work on Linux, so Excel is off
the table, but another spreadsheet or text editor would be a viable option,
as would configuration changes to Calc.
>
> Bill
I don't use OpenOffice (soon to be LibreOffice) much these days, but one of
the things that you can try, is when you go to save the file as a CSV and
edit the filter, there is an option there "Save cell content as shown". If
that is checked, then any cell formatting that has been applied, either by
default or by your actions, will be retained in the exported data. If that
is unchecked, then the 'raw' data is exported to the file. I just tried it
here (on OSX) and with the option checked, the years were exported with the
default two digits. The years were exported with four digits with the box
unchecked.
Unfortunately, I had no joy with a time field. The AM/PM formatting was
retained with the box checked or unchecked.
From what I can tell from a quick search, these default formats are
determined by the language/locale settings.
On Linux, a spreadsheet based alternative would be Gnumeric
(http://projects.gnome.org/gnumeric/) and of course, there is always Emacs,
which I have now used on Windows, Linux and OSX.
HTH,
Marc Schwartz
------------------------------
Message: 65
Date: Wed, 13 Oct 2010 10:53:46 -0700
From: Joshua Wiley <jwiley.psych at gmail.com>
To: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID:
<AANLkTi=0o82ttxQnbLgHCEjcVTtrGMVx8s5DAs321Fbf at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
On Wed, Oct 13, 2010 at 10:13 AM, Schwab,Wilhelm K
<bschwab at anest.ufl.edu> wrote:
> Hello all,
>
> I had a very strange looking problem that turned out to be due to
unexpected (by me at least) format changes to one of my data files. We have
a small lab study in which each run is represented by a row in a
tab-delimited file; each row identifies a repetition of the experiment and
associates it with some subjective measurements and times from our notes
that get used to index another file with lots of automatically collected
data. In short, nothing shocking.
>
> In a moment of weakness, I opened the file using (I think it's version
3.2) of OpenOffice Calc to edit something that I had mangled when I first
entered it, saved it (apparently the mistake), and reran my analysis code.
The results were goofy, and the problem was in my code that runs before R
ever sees the data. ?That code was confused by things that I would like to
ensure don't happen again, and I suspect that some of you might have
thoughts on it.
>
> The problems specifically:
>
> (1) OO seems to be a little stingy about producing tab-delimited text;
there is stuff online about using the csv and editing the filter and folks
(presumably like us) saying that it deserves to be a separate option.
It is a bit hidden, but it is doable: you can manually edit the extension to
.txt and edit the filter settings, then choose tab or a _few_ other options
that your heart desires. Importing should be similar.
>
> (2) Dates that I had formatted as YYYY got chopped to YY (did we not learn
anything last time?<g>) and times that I had formatted in 24 hours ended up
AM/PM.
The "general" cell format can be quite convenient, but it is also one of
the most awful creations in both Excel and Calc. Rants aside,
try forcing the cell format (I like text because it then generally just
treats the content as-is).
>
> Have any of you found a nice (or at least predictable) way to use OO Calc
to edit files like this? If it insists on thinking for me, I wish it would
think in 24 hour time and 4 digit years :) I work on Linux, so Excel is off
the table, but another spreadsheet or text editor would be a viable option,
as would configuration changes to Calc.
>
> Bill
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Joshua Wiley
Ph.D. Student, Health Psychology
University of California, Los Angeles
http://www.joshuawiley.com/
------------------------------
Message: 66
Date: Wed, 13 Oct 2010 13:53:46 -0400
From: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
To: Albyn Jones <jones at reed.edu>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID:
<A93938CC72687245835AC0BE10199F7C84D47D07 at HSC-CMS01.ad.ufl.edu>
Content-Type: text/plain; charset="us-ascii"
Albyn,
I'll look into it. In fact, I have a small book on it that I bought in my
very early days of using Linux. I quickly found TeX Maker (for the
obvious), Code::Blocks for C/C++ and I would not have started the move
without a working Smalltalk (http://pharo-project.org/home).
For editing data files, I really just want something that shows data in an
understandable grid and does not do weird stuff thinking it's being helpful.
Bill
________________________________________
From: Albyn Jones [jones at reed.edu]
Sent: Wednesday, October 13, 2010 1:39 PM
To: Schwab,Wilhelm K
Cc: r-help at r-project.org
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
How about emacs?
albyn
On Wed, Oct 13, 2010 at 01:13:03PM -0400, Schwab,Wilhelm K wrote:
> Hello all,
> <.....>
> Have any of you found a nice (or at least predictable) way to use OO Calc
to edit files like this? If it insists on thinking for me, I wish it would
think in 24 hour time and 4 digit years :) I work on Linux, so Excel is off
the table, but another spreadsheet or text editor would be a viable option,
as would configuration changes to Calc.
>
> Bill
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Albyn Jones
Reed College
jones at reed.edu
------------------------------
Message: 67
Date: Wed, 13 Oct 2010 11:04:40 -0700
From: Eric Hu <Eric.Hu at gilead.com>
To: "rajarshi.guha at gmail.com" <rajarshi.guha at gmail.com>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] Pipeline pilot fingerprint package
Message-ID:
<B5212860E8BF6F4098CA6A662B67D289182FA29436 at FCXVS02.na.gilead.com>
Content-Type: text/plain; charset="us-ascii"
Hi Rajarshi,
Here is a post I found from Pipeline pilot community help pages:
https://community.accelrys.com/message/3466
Eric
-----Original Message-----
From: Rajarshi Guha [mailto:rajarshi.guha at gmail.com]
Sent: Wednesday, October 13, 2010 7:52 AM
To: Eric Hu
Cc: r-help at r-project.org
Subject: Re: Pipeline pilot fingerprint package
On Tue, Oct 12, 2010 at 8:54 PM, Eric Hu <Eric.Hu at gilead.com> wrote:
> Hi,
>
> I am trying to see if I can use R to perform more rigorous regression
> analysis. I wonder if the fingerprint package is able to handle pipeline
> pilot fingerprints (ECFC6 etc) now.
Currently no - does Pipeline Pilot output their ECFP's in a standard
format? If so, can you send me an example file? (Assuming they output
fp's for a single molecule on a single row, you could implement your
own line parser and supply it via the lf argument in fp.read. See
cdk.lf, moe.lf or bci.lf for examples.)
The other issue is how one evaluates similarity between variable
length feature fingerprints, such as ECFPs. One approach is to map the
features into a fixed length bit string. Another approach is to just
look at intersections and unions of features to evaluate the Tanimoto
score. It seems to me that the former leads to loss of resolution and
that the latter could lead to generally low Tanimoto scores.
Do you know what Pipeline Pilot does?
--
Rajarshi Guha
NIH Chemical Genomics Center
------------------------------
Message: 68
Date: Wed, 13 Oct 2010 14:04:27 -0400
From: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
To: Peter Langfelder <peter.langfelder at gmail.com>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID:
<A93938CC72687245835AC0BE10199F7C84D47D08 at HSC-CMS01.ad.ufl.edu>
Content-Type: text/plain; charset="us-ascii"
Peter,
vi is *really* primitive =:0 R is a little late because I tend to do shape
changes prior to invoking R. However, I could load, tweak and re-save, and
then bring R back into it later. I never would have thought of it. Thanks!
Bill
________________________________________
From: Peter Langfelder [peter.langfelder at gmail.com]
Sent: Wednesday, October 13, 2010 1:41 PM
To: Schwab,Wilhelm K
Cc: r-help at r-project.org
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
On Wed, Oct 13, 2010 at 10:13 AM, Schwab,Wilhelm K
<bschwab at anest.ufl.edu> wrote:
> Hello all,
>
> I had a very strange looking problem that turned out to be due to
unexpected (by me at least) format changes to one of my data files. We have
a small lab study in which each run is represented by a row in a
tab-delimited file; each row identifies a repetition of the experiment and
associates it with some subjective measurements and times from our notes
that get used to index another file with lots of automatically collected
data. In short, nothing shocking.
>
> In a moment of weakness, I opened the file using (I think it's version
3.2) of OpenOffice Calc to edit something that I had mangled when I first
entered it, saved it (apparently the mistake), and reran my analysis code.
The results were goofy, and the problem was in my code that runs before R
ever sees the data. That code was confused by things that I would like to
ensure don't happen again, and I suspect that some of you might have
thoughts on it.
>
> The problems specifically:
>
> (1) OO seems to be a little stingy about producing tab-delimited text;
there is stuff online about using the csv and editing the filter and folks
(presumably like us) saying that it deserves to be a separate option.
>
> (2) Dates that I had formatted as YYYY got chopped to YY (did we not learn
anything last time?<g>) and times that I had formatted in 24 hours ended up
AM/PM.
>
> Have any of you found a nice (or at least predictable) way to use OO Calc
to edit files like this? If it insists on thinking for me, I wish it would
think in 24 hour time and 4 digit years :) I work on Linux, so Excel is off
the table, but another spreadsheet or text editor would be a viable option,
as would configuration changes to Calc.
No idea about Calc, I use it regularly but only to view files (and
that mostly csv, not tab-delimited). The most primitive solution is to
use a plain text editor such as vi that will save everything as it
loaded it except for what you change. The second most primitive idea
(or maybe not so primitive after all) is to read the table into R and
manually fix it there such as table$column[row] = "ABCD" (this is my
favorite way of changing things :)). The third most primitive idea
which I have actually never used but which may be viable is to load it
into R and use the fix() function that pulls up a rather primitive but
functional data editor.
Peter
------------------------------
Message: 69
Date: Wed, 13 Oct 2010 10:28:00 -0700 (PDT)
To: r-help at r-project.org
Subject: [R] Coin Toss Simulation
Message-ID: <1286990880619-2994088.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
I am trying a simple coin toss simulation, of say 200 coin tosses. The idea
is to have a data frame like so:
Experiment# Number_Of_Heads
1 104
2 96
3 101
So I do:
d <-data.frame(exp_num=c(1,2,3)); /* Just 3 experiments to begin with */
d$head_ct <-sum(sample(0:1,200,repl=TRUE));
d;
exp_num head_ct
1 1 85
2 2 85
3 3 85 /* the same scalar value is
applied to all the rows */
So I tried using "within", and "for", and making the "sum( " as a
function, and I end up with the same...I get the same constant value. But
what I want of course is different "samples"....
--
View this message in context:
http://r.789695.n4.nabble.com/Coin-Toss-Simulation-tp2994088p2994088.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 70
Date: Wed, 13 Oct 2010 20:12:59 +0200
From: David <david.maillists at gmail.com>
To: William Dunlap <wdunlap at tibco.com>
Cc: r-help at r-project.org
Subject: Re: [R] "Memory not mapped" when using .C, problem in Mac but
not in Linux
Message-ID:
<AANLkTimSoo9bve2pZ_74i8W1HgtxzDrGFYDKZ1La8WEc at mail.gmail.com>
Content-Type: text/plain
Thank you very much for your kind reply.
I used gdb, and it returns a reason "KERN_INVALID_ADDRESS" on a very simple
operation (obtaining the time step for a numerical integration). Please see
below.
But, um, I solved it by changing the function's arguments in the C code from
"unsigned long int" to "int". In R, those arguments were defined with
"as.integer". So, can you now understand why the error happens (on iMac and
not on Linux)? If yes, can you explain it to me? :-)
If some argument is defined in C as "unsigned long int", I see that the R
equivalent is *not* "as.integer". Which is it?
Thank you very much in advance,
David
Can you please Cc to me any replies, just in case I may miss any of them
among the whole amount of emails :-) ?
---------------------------------------
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000902ef3d68
[...]
139 double
time_step=(times[number_times[0]-1]-times[0])/(number_timesteps[0]);
(note: "number_times" and "number_timesteps" are integer arrays with a
single component, because they are arguments of the C function that need to
be passed from R. They were defined as "unsigned long int" in C, and
"as.integer" in R, giving the error. If I define them as "int" in C, the
error disappears).
2010/10/13 William Dunlap <wdunlap at tibco.com>
> This often happens when your C code uses memory that
> it did not allocate, particularly when it reads or
> writes just a little beyond the end of a memory block.
> On some platforms or if you are lucky there is unused
> memory between blocks of allocated memory and you don't
> see a problem. Other machines may pack allocated memory
> blocks more tightly and writing off the end of an array
> corrupts another array, leading to the program crashing.
> (Or reading off the end of an array may pick up data from
> the next array, which leads to similar crashes later on.)
> You may also be reading memory which has never been written
> to, hence you are using random data, which could corrupt
> things.
>
> Use a program like valgrind (on Linux, free) or Purify (on
> various platforms, but pricy) to detect memory misuse
> in C/C++ code. As long as your code isn't #ifdef'ed for
> various platforms, you can use valgrind to detect and fix
> memory misuse on Linux and you will find that overt
> problems on other platforms will go away.
>
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
>
> > -----Original Message-----
> > From: r-help-bounces at r-project.org
> > [mailto:r-help-bounces at r-project.org] On Behalf Of David
> > Sent: Wednesday, October 13, 2010 6:47 AM
> > To: r-help at r-project.org
> > Subject: [R] "Memory not mapped" when using .C,problem in Mac
> > but not in Linux
> >
> > Hello,
> >
> > I am aware this may be an obscure problem difficult to advice
> > about, but
> > just in case... I am calling a C function from R on an iMac
> > (almost shining:
> > bought by my institution this year) and gives a "memory not
> > mapped" error.
> >
> > Nevertheless, exactly the same code runs without problem in a powerful
> > computer using SuSE Linux, but also on my laptop of 2007, 32
> > bits, 2 GB RAM,
> > running Ubuntu. My supervisor says that he can run my code on
> > his iMac (a
> > bit older than mine) without problem.
> >
> > I have upgraded to the latest version of R, and I have tried
> > compiling the C
> > code with the latest version of gcc (obtained from MacPorts),
> > but the error
> > persists.
> >
> > Any ideas?
> >
> > Thank you very much in advance,
> >
> > David
> > Can you please Cc to me any replies, just in case I may miss
> > any of them
> > among the whole amount of emails :-) ?
> >
> > [[alternative HTML version deleted]]
> >
> > ______________________________________________
> > R-help at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
------------------------------
Message: 71
Date: Wed, 13 Oct 2010 19:16:18 +0100
From: Julia Lira <julia.lira at hotmail.co.uk>
To: <r-help at r-project.org>
Subject: [R] loop
Message-ID: <SNT126-W12F89B2ED39215CBC9C123D1550 at phx.gbl>
Content-Type: text/plain
Dear all,
I am trying to run a loop in my codes, but the software returns an error:
"subscript out of bounds"
I don't understand exactly why this is happening. My code is the
following:
rm(list=ls()) #remove almost everything in the memory
set.seed(180185)
nsim <- 10
mresultx <- matrix(-99, nrow=1000, ncol=nsim)
mresultb <- matrix(-99, nrow=1000, ncol=nsim)
N <- 200
I <- 5
taus <- c(0.480:0.520)
h <- c(1:20/1000)
codd <- c(1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81)
ceven <- c(2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82)
cevenl <- c(2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40)
#Create an object to hold results.
M <- matrix(0, ncol=82, nrow=nsim)
Mhb0 <- matrix(0, ncol=20, nrow=nsim)
Mhb1 <- matrix(0, ncol=20, nrow=nsim)
Mchb0 <- matrix(0, ncol=20, nrow=nsim)
Mchb1 <- matrix(0, ncol=20, nrow=nsim)
for (i in 1:nsim){
# make a matrix with 5 cols of N random uniform values
u <- replicate( 5, runif(N, 0, 1) )
# fit matrix u in another matrix of 1 column
mu <- matrix(u, nrow=1000, ncol=1)
# make auction-specific covariate
x <- runif(N, 0, 1)
mx <- matrix(rep(x,5), nrow=1000, ncol=1)
b0 <- matrix(rep(c(1),1000), nrow=1000, ncol=1)
#function for private cost
cost <- b0+b0*mx+mu
#bidding strategy
bid <- mx+((I+1)/I)+((I-1)/I)*mu
mresultb[,i] <- bid
mresultx[,i] <- mx
qf <- rq(formula = mresultb[,i] ~ mresultx[,i], tau= 480:520/1000)
# Storing result and does not overwrite prior values
M[i, ] <- coef(qf)
QI <- (1-0.5)/(I-1)
M50b0 <- M[,41]
M50b1 <- M[,42]
Mb0 <- matrix(M[,codd], nrow=nsim, ncol=20)
Mb1 <- matrix(M[,ceven], nrow=nsim, ncol=20)
for (t in cevenl){
Mhb0[ ,t] <- M[,(41+t)]-M[,(41-t)]
Mhb1[ ,t] <- M[,(42+t)]-M[,(42-t)]
}
}
Problem: the problem is in the inner for loop (over t) at the end. I want
the software to take the column (41+t) from the matrix called M and subtract
from it the column (41-t) of the same matrix M, such that the value of t
varies according to the vector cevenl above.
Thanks in advance!!!
Julia
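One likely cause, inferred from the code above rather than from any reply in this digest: Mhb0 and Mhb1 are created with only 20 columns, while t runs over the values of cevenl up to 40, so Mhb0[ ,t] eventually indexes past the last column. Indexing the destination matrix by position keeps it in range; a small sketch of the idea:

```r
nsim <- 10
Mhb0 <- matrix(0, ncol = 20, nrow = nsim)   # only 20 columns
M <- matrix(rnorm(nsim * 82), nrow = nsim, ncol = 82)
cevenl <- seq(2, 40, by = 2)                 # 20 values, but up to 40
for (k in seq_along(cevenl)) {
  t <- cevenl[k]
  # destination column k stays within 1..20; t only indexes into M
  Mhb0[, k] <- M[, 41 + t] - M[, 41 - t]
}
```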
------------------------------
Message: 72
Date: Wed, 13 Oct 2010 11:16:24 -0700 (PDT)
From: Bart Joosen <bartjoosen at hotmail.com>
To: r-help at r-project.org
Subject: [R] Regular expression to find value between brackets
Message-ID: <1286993784648-2994166.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi,
this should be an easy one, but I can't figure it out.
I have a vector of tests, with their units between brackets (if they have
units).
eg tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
Now I would like to have a function where I use a test as input, and which
returns the units
like:
f <- function (x) sub("\\)", "", sub("\\(", "",sub("[[:alnum:]]+","",x)))
this should give "", "%", "%", "mg/ml", but it doesn't do the job quite well.
After searching in the manual, and on the help lists, I cant find the
answer.
anyone?
Bart
--
View this message in context:
http://r.789695.n4.nabble.com/Regular-expression-to-find-value-between-brack
ets-tp2994166p2994166.html
Sent from the R help mailing list archive at Nabble.com.
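One sketch of an answer (not taken from a reply in this digest): capture the text between the parentheses with a single backreference, and blank out the entries that contain no brackets:

```r
tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
units <- function(x) {
  out <- sub(".*\\(([^)]*)\\).*", "\\1", x)  # keep what is inside ( )
  out[!grepl("\\(", x)] <- ""                # no brackets: no units
  out
}
units(tests)  # "" "%" "%" "mg/ml"
```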
------------------------------
Message: 73
Date: Wed, 13 Oct 2010 20:16:55 +0200
From: Dimitris Rizopoulos <d.rizopoulos at erasmusmc.nl>
Cc: r-help at r-project.org
Subject: Re: [R] Coin Toss Simulation
Message-ID: <4CB5F797.6060300 at erasmusmc.nl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
try this:
n <- 3
data.frame(Exp = seq_len(n),
Heads = rbinom(n, 200, 0.5))
I hope it helps.
Best,
Dimitris
On 10/13/2010 7:28 PM, Shiv wrote:
>
> I am trying a simple coin toss simulation, of say 200 coin tosses. The
idea
> is to have a data frame like so:
> Experiment# Number_Of_Heads
> 1 104
> 2 96
> 3 101
>
> So I do:
>
> d<-data.frame(exp_num=c(1,2,3)); /* Just 3 experiments to begin with */
> d$head_ct<-sum(sample(0:1,200,repl=TRUE));
> d;
> exp_num head_ct
> 1 1 85
> 2 2 85
> 3 3 85 /* the same scalar value
is
> applied to all the rows */
>
> So I tried using "within", and "for", and making the "sum( " as a
> function, and I end up with the same...I get the same constant value. But
> what I want of course is different "samples"....
>
>
>
--
Dimitris Rizopoulos
Assistant Professor
Department of Biostatistics
Erasmus University Medical Center
Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
Tel: +31/(0)10/7043478
Fax: +31/(0)10/7043014
Web: http://www.erasmusmc.nl/biostatistiek/
------------------------------
Message: 74
Date: Wed, 13 Oct 2010 13:18:32 -0500
From: Erik Iverson <eriki at ccbr.umn.edu>
Cc: r-help at r-project.org
Subject: Re: [R] Coin Toss Simulation
Message-ID: <4CB5F7F8.1080609 at ccbr.umn.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Shiv wrote:
> I am trying a simple coin toss simulation, of say 200 coin tosses. The
idea
> is to have a data frame like so:
> Experiment# Number_Of_Heads
> 1 104
> 2 96
> 3 101
>
> So I do:
>
> d <-data.frame(exp_num=c(1,2,3)); /* Just 3 experiments to begin with */
> d$head_ct <-sum(sample(0:1,200,repl=TRUE));
/* */ is not a valid comment in R, and you don't need ';' either :)
Just try evaluating the right hand side so that the R interpreter
prints the result. This is just calling sample once, and therefore
you only get one number, 85 in your case.
> d;
> exp_num head_ct
> 1 1 85
> 2 2 85
> 3 3 85 /* the same scalar value
is
> applied to all the rows */
>
> So I tried using "within", and "for", and making the "sum( " as a
> function, and I end up with the same...I get the same constant value. But
> what I want of course is different "samples"....
Look at ?replicate or just use ?rbinom directly
d <-data.frame(exp_num = 1:3)
d$head_ct <- rbinom(nrow(d), 200, prob = 0.5)
>
>
>
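To spell out why the original gave a constant: sum(sample(0:1, 200, repl=TRUE)) is evaluated exactly once and its single value recycled across all rows. Wrapping it in replicate() re-evaluates it per experiment; a sketch that stays with sample() rather than rbinom():

```r
n <- 3
d <- data.frame(exp_num = seq_len(n))
# replicate() evaluates the expression n times, one fresh sample per row
d$head_ct <- replicate(n, sum(sample(0:1, 200, replace = TRUE)))
```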
------------------------------
Message: 75
Date: Wed, 13 Oct 2010 14:19:35 -0400
From: Jon Zadra <jrz9f at virginia.edu>
To: r-help at r-project.org
Subject: [R] Change global env variables from within a function
Message-ID: <4CB5F837.6010105 at virginia.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi,
I've looked all over for a solution to this, but haven't had much luck
in specifying what I want to do with appropriate search terms. Thus I'm
turning to R-help.
In the process of trying to write a simple function to rename individual
column names in a data frame, I ran into the following problem: When I
rename the columns within my function, I can't seem to get it to apply
to the data frame in the global environment in a simple manner.
Given:
tempdf <- data.frame("a" = 1:6, "b" = 7:12)
#I can rename a to g this way:
names(tempdf)[names(tempdf)=="a"] <- "g"
#Wanting to simplify this for the future, I have the function:
colrename <- function(dframe, oldname, newname) {
names(dframe)[names(dframe)==oldname] <- newname
}
colrename(tempdf, "a", "g")
#However of course the change to tempdf stays within colrename(). I
could add "return(names(dframe))" to the function and then call the
function like:
#names(tempdf) <- colrename(tempdf, "a", "g")
#but at that point its not much simpler than the original:
#names(tempdf)[names(tempdf)=="a"] <- "g"
So, i'm wondering, is there a way to write code within that function so
that the tempdf in the global environment gets changed?
I thought of doing "names(dframe)[names(dframe)==oldname] <<- newname"
but then of course it doesn't replace "dframe" with "tempdf" in the
evaluation, so it seems to be a problem of looking at some variables
within the function environment but others in the parent environment.
Any help is greatly appreciated.
Thanks,
Jon
--
Jon Zadra
Department of Psychology
University of Virginia
P.O. Box 400400
Charlottesville VA 22904
(434) 982-4744
email: zadra at virginia.edu
<http://www.google.com/calendar/embed?src=jzadra%40gmail.com>
------------------------------
Message: 76
Date: Wed, 13 Oct 2010 14:28:06 -0400
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: zadra at virginia.edu
Cc: r-help at r-project.org, Jon Zadra <jrz9f at virginia.edu>
Subject: Re: [R] Change global env variables from within a function
Message-ID: <4CB5FA36.10705 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 13/10/2010 2:19 PM, Jon Zadra wrote:
> Hi,
>
> I've looked all over for a solution to this, but haven't had much look
> in specifying what I want to do with appropriate search terms. Thus I'm
> turning to R-help.
>
> In the process of trying to write a simple function to rename individual
> column names in a data frame, I ran into the following problem: When I
> rename the columns within my function, I can't seem to get it to apply
> to the data frame in the global environment in a simple manner.
>
> Given:
>
> tempdf<- data.frame("a" = 1:6, "b" = 7:12)
>
> #I can rename a to g this way:
> names(tempdf)[names(tempdf)=="a"]<- "g"
>
> #Wanting to simplify this for the future, I have the function:
> colrename<- function(dframe, oldname, newname) {
> names(dframe)[names(dframe)==oldname]<- newname
> }
>
> colrename(tempdf, "a", "g")
>
> #However of course the change to tempdf stays within colrename(). I
> could add "return(names(dframe))" to the function and then call the
> function like:
> #names(tempdf)<- colrename(tempdf, "a", "g")
> #but at that point its not much simpler than the original:
> #names(tempdf)[names(tempdf)=="a"]<- "g"
>
> So, i'm wondering, is there a way to write code within that function so
> that the tempdf in the global environment gets changed?
> I thought of doing "names(dframe)[names(dframe)==oldname]<<- newname"
> but then of course it doesn't replace "dframe" with "tempdf" in the
> evaluation, so it seems to be a problem of looking at some variables
> within the function environment but others in the parent environment.
>
> Any help is greatly appreciated.
Make the changes to a local copy, then save it to the global (or just
return the local copy as the value of the global). For example,
colrename<- function(dframe, oldname, newname) {
names(dframe)[names(dframe)==oldname]<- newname
dframe
}
called as
tempdf<- colrename(tempdf, "a", "g")
or if you really want to mess with things that don't belong to your
function,
colrename<- function(dframe, oldname, newname) {
name<- deparse(substitute(dframe))
names(dframe)[names(dframe)==oldname]<- newname
assign(name, dframe, env=globalenv())
}
(but watch out if you call this with something other than a simple
variable name as the first argument).
Duncan Murdoch
------------------------------
Message: 77
Date: Wed, 13 Oct 2010 11:30:14 -0700
From: "William Dunlap" <wdunlap at tibco.com>
To: "David" <david.maillists at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] "Memory not mapped" when using .C, problem in Mac but
not in Linux
Message-ID:
<77EB52C6DD32BA4D87471DCD70C8D700038B1AB8 at NA-PA-VBE03.na.tibco.com>
Content-Type: text/plain
I don't know much about the iMac. R's .C() passes
R-language integers as pointers to 32-bit ints. If
your iMac is 64-bit and sizeof(long)==8 in your compiler
(pretty common for 64-bit compilers but not so for
Microsoft Windows compilers), then Long[0] will
use the next 64 bits to form an integral value,
while Int[0] will use only the next 32 bits. Since R
only passes 32 bits per integer, your Long[0] would include
32 random bits in its value, and using it as a subscript
would often point to a place in memory that you don't own.
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
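The key fact behind that explanation is checkable from any R prompt (a
sketch of my own, not part of the original message):

```r
## R's integer type is always 32-bit, on every platform:
.Machine$integer.max        # 2147483647 = 2^31 - 1
## so .C() hands an integer vector to C as int* (32 bits per element).
## A C argument declared "unsigned long int" on a 64-bit Unix reads 64
## bits per element and therefore picks up 32 stray bits. There is no
## R-side equivalent of C's long: pass as.integer() and declare the C
## argument as int, or pass as.double() and declare it double.
stopifnot(identical(.Machine$integer.max, 2147483647L))
```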
________________________________
From: David [mailto:david.maillists at gmail.com]
Sent: Wednesday, October 13, 2010 11:13 AM
To: William Dunlap
Cc: r-help at r-project.org
Subject: Re: [R] "Memory not mapped" when using .C, problem in Mac
but not in Linux
Thank you very much for your kind reply.
I used gdb, and it returns a reason "KERN_INVALID_ADDRESS" on a
very simple operation (obtaining the time step for a numerical
integration). Please see below.
But, um, I solved it by changing the function's arguments in the
C code from "unsigned long int" to "int". In R, those arguments were
defined with "as.integer". So, can you now understand why the error
happens (on iMac and not on Linux)? If yes, can you explain it to me?
:-)
If some argument is defined in C as "unsigned long int", I see
that the R equivalent is *not* "as.integer". Which is it?
Thank you very much in advance,
David
Can you please Cc to me any replies, just in case I may miss any
of them among the whole amount of emails :-) ?
---------------------------------------
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000902ef3d68
[...]
139 double
time_step=(times[number_times[0]-1]-times[0])/(number_timesteps[0]);
(note: "number_times" and "number_timesteps" are integer arrays
with a single component, because they are arguments of the C function
that need to be passed from R. They were defined as "unsigned long int"
in C, and "as.integer" in R, giving the error. If I define them as "int"
in C, the error disappears).
2010/10/13 William Dunlap <wdunlap at tibco.com>
This often happens when your C code uses memory that
it did not allocate, particularly when it reads or
writes just a little beyond the end of a memory block.
On some platforms or if you are lucky there is unused
memory between blocks of allocated memory and you don't
see a problem. Other machines may pack allocated memory
blocks more tightly and writing off the end of an array
corrupts another array, leading to the program crashing.
(Or reading off the end of an array may pick up data
from
the next array, which leads to similar crashes later
on.)
You may also be reading memory which has never been
written
to, hence you are using random data, which could corrupt
things.
Use a program like valgrind (on Linux, free) or Purify
(on
various platforms, but pricy) to detect memory misuse
in C/C++ code. As long as your code isn't #ifdef'ed for
various platforms, you can use valgrind to detect and
fix
memory misuse on Linux and you will find that overt
problems on other platforms will go away.
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> -----Original Message-----
> From: r-help-bounces at r-project.org
> [mailto:r-help-bounces at r-project.org] On Behalf Of David
> Sent: Wednesday, October 13, 2010 6:47 AM
> To: r-help at r-project.org
> Subject: [R] "Memory not mapped" when using .C, problem in Mac
> but not in Linux
>
> Hello,
>
> I am aware this may be an obscure problem that is difficult to advise
> about, but just in case... I am calling a C function from R on an iMac
> (almost shining: bought by my institution this year) and it gives a
> "memory not mapped" error.
>
> Nevertheless, exactly the same code runs without problem on a powerful
> computer using SuSE Linux, and also on my laptop from 2007 (32 bits,
> 2 GB RAM) running Ubuntu. My supervisor says that he can run my code
> on his iMac (a bit older than mine) without problem.
>
> I have upgraded to the latest version of R, and I have tried compiling
> the C code with the latest version of gcc (obtained from MacPorts),
> but the error persists.
>
> Any ideas?
>
> Thank you very much in advance,
>
> David
> Can you please Cc to me any replies, just in case I may miss any of
> them among the whole amount of emails :-) ?
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained,
reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 78
Date: Wed, 13 Oct 2010 14:28:35 -0400
From: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
To: Albyn Jones <jones at reed.edu>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID:
<A93938CC72687245835AC0BE10199F7C84D47D09 at HSC-CMS01.ad.ufl.edu>
Content-Type: text/plain; charset="us-ascii"
[[elided Yahoo spam]]
Bill
________________________________________
From: Albyn Jones [jones at reed.edu]
Sent: Wednesday, October 13, 2010 2:14 PM
To: Schwab,Wilhelm K
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
emacs shows you exactly what is there, nothing more nor less.
it isn't a spreadsheet, but tabs will align columns.
albyn
On Wed, Oct 13, 2010 at 01:53:46PM -0400, Schwab,Wilhelm K wrote:
> Albyn,
>
> I'll look into it. In fact, I have a small book on it that I bought in
> my very early days of using Linux. I quickly found TeX Maker (for the
> obvious), Code::Blocks for C/C++, and I would not have started the move
> without a working Smalltalk (http://pharo-project.org/home).
>
> For editing data files, I really just want something that shows data in
> an understandable grid and does not do weird stuff thinking it's being
> helpful.
>
> Bill
>
>
> ________________________________________
> From: Albyn Jones [jones at reed.edu]
> Sent: Wednesday, October 13, 2010 1:39 PM
> To: Schwab,Wilhelm K
> Cc: r-help at r-project.org
> Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
>
> How about emacs?
>
> albyn
>
> On Wed, Oct 13, 2010 at 01:13:03PM -0400, Schwab,Wilhelm K wrote:
> > Hello all,
> > <.....>
> > Have any of you found a nice (or at least predictable) way to use OO
> > Calc to edit files like this? If it insists on thinking for me, I wish
> > it would think in 24 hour time and 4 digit years :) I work on Linux,
> > so Excel is off the table, but another spreadsheet or text editor
> > would be a viable option, as would configuration changes to Calc.
> >
> > Bill
> >
> > ______________________________________________
> > R-help at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
> --
> Albyn Jones
> Reed College
> jones at reed.edu
>
>
--
Albyn Jones
Reed College
jones at reed.edu
------------------------------
Message: 79
Date: Wed, 13 Oct 2010 15:34:27 -0300
From: Henrique Dallazuanna <wwwhsd at gmail.com>
To: Bart Joosen <bartjoosen at hotmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Regular expression to find value between brackets
Message-ID:
<AANLkTinAUe_3dA8SNc7hTK4+LKg-Y+nFFgXPRH5vo8v0 at mail.gmail.com>
Content-Type: text/plain
Try this:
replace(gsub(".*\\((.*)\\)$", "\\1", tests), !grepl("\\(.*\\)", tests), "")
On Wed, Oct 13, 2010 at 3:16 PM, Bart Joosen <bartjoosen at hotmail.com> wrote:
>
> Hi,
>
> this should be an easy one, but I can't figure it out.
> I have a vector of tests, with their units between brackets (if they have
> units).
> eg tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
>
> Now I would like to have a function where I use a test as input, and
> which returns the units
> like:
> f <- function (x) sub("\\)", "", sub("\\(", "",sub("[[:alnum:]]+","",x)))
> this should give "", "%", "%", "mg/ml", but it doesn't do the job quite
> well.
>
> After searching in the manual, and on the help lists, I can't find the
> answer.
>
> anyone?
>
> Bart
> --
> View this message in context:
>
http://r.789695.n4.nabble.com/Regular-expression-to-find-value-between-brackets-tp2994166p2994166.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Henrique Dallazuanna
Curitiba-Parana-Brasil
25° 25' 40" S 49° 16' 22" O
[[alternative HTML version deleted]]
------------------------------
Message: 80
Date: Wed, 13 Oct 2010 19:37:08 +0100
From: Julia Lira <julia.lira at hotmail.co.uk>
To: <r-help at r-project.org>
Subject: [R] (no subject)
Message-ID: <SNT126-W65613CF4307E85766E67D5D1550 at phx.gbl>
Content-Type: text/plain
Dear all,
I have just sent an email with my problem, but I think no one can see the
red part, because it is black. So, I am writing the code again:
rm(list=ls()) #remove almost everything in the memory
set.seed(180185)
nsim <- 10
mresultx <- matrix(-99, nrow=1000, ncol=nsim)
mresultb <- matrix(-99, nrow=1000, ncol=nsim)
N <- 200
I <- 5
taus <- c(0.480:0.520)
h <- c(1:20/1000)
alpha1 <- c(1:82)
aeven1 <- alpha1[2 * 1:41]
aodd1 <- alpha1[-2 * 1:41]
alpha2 <- c(1:40)
aeven2 <- alpha2[2 * 1:20]
#Create an object to hold results.
M <- matrix(0, ncol=82, nrow=nsim)
Mhb0 <- matrix(0, ncol=20, nrow=nsim)
Mhb1 <- matrix(0, ncol=20, nrow=nsim)
Mchb0 <- matrix(0, ncol=20, nrow=nsim)
Mchb1 <- matrix(0, ncol=20, nrow=nsim)
for (i in 1:nsim){
# make a matrix with 5 cols of N random uniform values
u <- replicate( 5, runif(N, 0, 1) )
# fit matrix u in another matrix of 1 column
mu <- matrix(u, nrow=1000, ncol=1)
# make auction-specific covariate
x <- runif(N, 0, 1)
mx <- matrix(rep(x,5), nrow=1000, ncol=1)
b0 <- matrix(rep(c(1),1000), nrow=1000, ncol=1)
#function for private cost
cost <- b0+b0*mx+mu
#bidding strategy
bid <- mx+((I+1)/I)+((I-1)/I)*mu
mresultb[,i] <- bid
mresultx[,i] <- mx
qf <- rq(formula = mresultb[,i] ~ mresultx[,i], tau= 480:520/1000)
# Storing result and does not overwrite prior values
M[i, ] <- coef(qf)
QI <- (1-0.5)/(I-1)
M50b0 <- M[,41]
M50b1 <- M[,42]
Mb0 <- matrix(M[,aodd1], nrow=nsim, ncol=20)
Mb1 <- matrix(M[,aeven1], nrow=nsim, ncol=20)
for (t in aeven2){
Mhb0[,t] <- M[,(41+t)]-M[,(41-t)]
Mhb1[,t] <- M[,(42+t)]-M[,(42-t)]
}
}
The problem is in the part:
for (t in aeven2){
Mhb0[,t] <- M[,(41+t)]-M[,(41-t)]
Mhb1[,t] <- M[,(42+t)]-M[,(42-t)]
}
I want R to subtract column (41-t) of the matrix M from column (41+t),
so that the matrix Mhb0 holds the result for each t, organized by
columns.
Does anybody know what exactly I am doing wrong?
Thanks in advance!
Julia
[[alternative HTML version deleted]]
------------------------------
Message: 81
Date: Wed, 13 Oct 2010 11:39:22 -0700 (PDT)
From: Berend Hasselman <bhh at xs4all.nl>
To: r-help at r-project.org
Subject: Re: [R] "Memory not mapped" when using .C, problem in Mac but
not in Linux
Message-ID: <1286995162849-2994220.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
- which version of Mac OS X?
- Which version of R? (version, architecture)
- Officially provided R or compiled by you? Official R is compiled with
Apple gcc.
- if R was compiled with Apple compiler, who knows what can happen if you
link with code compiled with a non Apple gcc?
- if the code runs ok on your supervisor's iMac: did he use the Apple
compiler? If so, I wouldn't be surprised if your MacPorts compiler is a
possible cause of your troubles.
- on 64-bit Mac OS X, long int is 64 bits and int is 32 bits; on 32-bit
Mac OS X both are 32 bits.
/Berend
--
View this message in context:
http://r.789695.n4.nabble.com/Memory-not-mapped-when-using-C-problem-in-Mac-but-not-in-Linux-tp2993689p2994220.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 82
Date: Wed, 13 Oct 2010 19:41:29 +0100
From: Julia Lira <julia.lira at hotmail.co.uk>
To: <r-help at r-project.org>
Subject: [R] loop
Message-ID: <SNT126-W29E2BA310C39A8BF008856D1550 at phx.gbl>
Content-Type: text/plain
Dear all,
I have just sent an email with my problem, but I think no one can see the
red part, because it is black. So, I am writing the code again:
rm(list=ls()) #remove almost everything in the memory
set.seed(180185)
nsim <- 10
mresultx <- matrix(-99, nrow=1000, ncol=nsim)
mresultb <- matrix(-99, nrow=1000, ncol=nsim)
N <- 200
I <- 5
taus <- c(0.480:0.520)
h <- c(1:20/1000)
alpha1 <- c(1:82)
aeven1 <- alpha1[2 * 1:41]
aodd1 <- alpha1[-2 * 1:41]
alpha2 <- c(1:40)
aeven2 <- alpha2[2 * 1:20]
#Create an object to hold results.
M <- matrix(0, ncol=82, nrow=nsim)
Mhb0 <- matrix(0, ncol=20, nrow=nsim)
Mhb1 <- matrix(0, ncol=20, nrow=nsim)
Mchb0 <- matrix(0, ncol=20, nrow=nsim)
Mchb1 <- matrix(0, ncol=20, nrow=nsim)
for (i in 1:nsim){
# make a matrix with 5 cols of N random uniform values
u <- replicate( 5, runif(N, 0, 1) )
# fit matrix u in another matrix of 1 column
mu <- matrix(u, nrow=1000, ncol=1)
# make auction-specific covariate
x <- runif(N, 0, 1)
mx <- matrix(rep(x,5), nrow=1000, ncol=1)
b0 <- matrix(rep(c(1),1000), nrow=1000, ncol=1)
#function for private cost
cost <- b0+b0*mx+mu
#bidding strategy
bid <- mx+((I+1)/I)+((I-1)/I)*mu
mresultb[,i] <- bid
mresultx[,i] <- mx
qf <- rq(formula = mresultb[,i] ~ mresultx[,i], tau= 480:520/1000)
# Storing result and does not overwrite prior values
M[i, ] <- coef(qf)
QI <- (1-0.5)/(I-1)
M50b0 <- M[,41]
M50b1 <- M[,42]
Mb0 <- matrix(M[,aodd1], nrow=nsim, ncol=20)
Mb1 <- matrix(M[,aeven1], nrow=nsim, ncol=20)
for (t in aeven2){
Mhb0[,t] <- M[,(41+t)]-M[,(41-t)]
Mhb1[,t] <- M[,(42+t)]-M[,(42-t)]
}
}
The problem is in the part:
for (t in aeven2){
Mhb0[,t] <- M[,(41+t)]-M[,(41-t)]
Mhb1[,t] <- M[,(42+t)]-M[,(42-t)]
}
I want R to subtract column (41-t) of the matrix M from column (41+t),
so that the matrix Mhb0 holds the result for each t, organized by
columns.
Does anybody know what exactly I am doing wrong?
Thanks in advance!
Julia
[[alternative HTML version deleted]]
------------------------------
Message: 83
Date: Wed, 13 Oct 2010 13:41:39 -0500
From: Erik Iverson <eriki at ccbr.umn.edu>
To: Bart Joosen <bartjoosen at hotmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Regular expression to find value between brackets
Message-ID: <4CB5FD63.1020106 at ccbr.umn.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Bart,
I'm hardly one of the lists regex gurus: but this can
get you started...
tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
x <- regexpr("\\((.*)\\)", tests)
substr(tests, x + 1, x + attr(x, "match.length") - 2)
Bart Joosen wrote:
> Hi,
>
> this should be an easy one, but I can't figure it out.
> I have a vector of tests, with their units between brackets (if they have
> units).
> eg tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
>
> Now I would like to have a function where I use a test as input, and which
> returns the units
> like:
> f <- function (x) sub("\\)", "", sub("\\(", "",sub("[[:alnum:]]+","",x)))
> this should give "", "%", "%", "mg/ml", but it doesn't do the job quite
> well.
>
> After searching in the manual, and on the help lists, I can't find the
> answer.
>
> anyone?
>
> Bart
------------------------------
Message: 84
Date: Wed, 13 Oct 2010 11:45:46 -0700
From: Bert Gunter <gunter.berton at gene.com>
To: Bart Joosen <bartjoosen at hotmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Regular expression to find value between brackets
Message-ID:
<AANLkTinSa4OTT3oEbukVku-MEdgqfHmoFs6heOy3gdYw at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
One way:
gsub(".*\\(([^()]*)\\).*", "\\1",tests)
Idea: Pick out the units designation between the "()" and replace the
whole expression with it. The "\\1" refers to the parenthesized
"([^()]*)" expression in the middle that picks out the units.
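Running that one-liner on the original vector shows the capture-group
idea in action (output added by me), including one quirk worth noting:

```r
tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
## The capture group ([^()]*) grabs the units; "\\1" replaces the whole
## match with just that group. Strings with no parentheses never match,
## so gsub() returns them unchanged -- which is why "pH" comes back as
## "pH" rather than "".
gsub(".*\\(([^()]*)\\).*", "\\1", tests)
# [1] "pH"    "%"     "%"     "mg/ml"
```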
Cheers,
Bert
On Wed, Oct 13, 2010 at 11:16 AM, Bart Joosen <bartjoosen at hotmail.com>
wrote:
>
> Hi,
>
> this should be an easy one, but I can't figure it out.
> I have a vector of tests, with their units between brackets (if they have
> units).
> eg tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
>
> Now I would like to have a function where I use a test as input, and which
> returns the units
> like:
> f <- function (x) sub("\\)", "", sub("\\(", "",sub("[[:alnum:]]+","",x)))
> this should give "", "%", "%", "mg/ml", but it doesn't do the job quite
> well.
>
> After searching in the manual, and on the help lists, I can't find the
> answer.
>
> anyone?
>
> Bart
> --
> View this message in context:
http://r.789695.n4.nabble.com/Regular-expression-to-find-value-between-brackets-tp2994166p2994166.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Bert Gunter
Genentech Nonclinical Biostatistics
------------------------------
Message: 85
Date: Wed, 13 Oct 2010 13:49:59 -0500
From: Erik Iverson <eriki at ccbr.umn.edu>
To: zadra at virginia.edu
Cc: r-help at r-project.org
Subject: Re: [R] Change global env variables from within a function
Message-ID: <4CB5FF57.3020102 at ccbr.umn.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hello,
Jon Zadra wrote:
> Hi,
>
> I've looked all over for a solution to this, but haven't had much luck
> in specifying what I want to do with appropriate search terms. Thus I'm
> turning to R-help.
>
> In the process of trying to write a simple function to rename individual
> column names in a data frame, I ran into the following problem: When I
> rename the columns within my function, I can't seem to get it to apply
> to the data frame in the global environment in a simple manner.
Not a direct answer to your question, but
does the ?rename function in the reshape package do what you want?
In general, the paradigm that I think R tries to promote is to
call functions that *return values* that you can assign to object
names, so calling "colrename" below to change an object in the
global environment is warned against for its side effects.
You could have the function return the modified data.frame, but
of course that's what the "rename" function I mention above does
already.
You might also see ?transform.
>
> Given:
>
> tempdf <- data.frame("a" = 1:6, "b" = 7:12)
>
> #I can rename a to g this way:
> names(tempdf)[names(tempdf)=="a"] <- "g"
>
> #Wanting to simplify this for the future, I have the function:
> colrename <- function(dframe, oldname, newname) {
> names(dframe)[names(dframe)==oldname] <- newname
> }
>
> colrename(tempdf, "a", "g")
>
> #However of course the change to tempdf stays within colrename(). I
> could add "return(names(dframe))" to the function and then call the
> function like:
> #names(tempdf) <- colrename(tempdf, "a", "g")
> #but at that point it's not much simpler than the original:
> #names(tempdf)[names(tempdf)=="a"] <- "g"
>
> So, I'm wondering, is there a way to write code within that function so
> that the tempdf in the global environment gets changed?
> I thought of doing "names(dframe)[names(dframe)==oldname] <<- newname"
> but then of course it doesn't replace "dframe" with "tempdf" in the
> evaluation, so it seems to be a problem of looking at some variables
> within the function environment but others in the parent environment.
>
> Any help is greatly appreciated.
>
> Thanks,
>
> Jon
------------------------------
Message: 86
Date: Wed, 13 Oct 2010 14:52:21 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID: <4E93F7BA-3D14-4E63-BC25-09057FBFC3A0 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Oct 13, 2010, at 1:13 PM, Schwab,Wilhelm K wrote:
> Hello all,
>
> I had a very strange looking problem that turned out to be due to
> unexpected (by me at least) format changes to one of my data files.
> We have a small lab study in which each run is represented by a row
> in a tab-delimited file; each row identifies a repetition of the
> experiment and associates it with some subjective measurements and
> times from our notes that get used to index another file with lots
> of automatically collected data. In short, nothing shocking.
>
> In a moment of weakness, I opened the file using (I think) version
> 3.2 of OpenOffice Calc to edit something that I had mangled
> when I first entered it, saved it (apparently the mistake), and
> reran my analysis code. The results were goofy, and the problem was
> in my code that runs before R ever sees the data. That code was
> confused by things that I would like to ensure don't happen again,
> and I suspect that some of you might have thoughts on it.
>
> The problems specifically:
>
> (1) OO seems to be a little stingy about producing tab-delimited
> text; there is stuff online about using the csv and editing the
> filter and folks (presumably like us) saying that it deserves to be
> a separate option.
You have been a little stingy yourself about describing what you did. I
see no specifics about the actual data used as input nor the specific
operations. I just opened an OO.o Calc workbook and dropped a character
vector, "1969-12-31 23:59:50", copied from help(POSIXct), into a2. I
then copied it to a3 and formatted it to be in the precanned format
MM/DD/YYYY HH:MM:SS, noticed that it had not been interpreted as a
date-time value at all, so I entered =TODAY()+TIME(13;0;0) in a4 and
=TIME(13;0;0) in a5, formatted to a user-specified custom time format
of YYYY-MM-DD HH:MM:SS.
Copied a5 to c1:c5,
saved to a text-csv file specifying the field separator as tab and
the text-delimiter as '"', and got:
""time" 1899-12-30 13:00:00
"1969-12-31 23:59:50" 1899-12-30 13:00:00
"1969-12-31 23:59:50" 1899-12-30 13:00:00
2010-10-13 13:00:00 1899-12-30 13:00:00
1899-12-30 13:00:00 1899-12-30 13:00:00
This handling of dates and times does not seem particularly difficult
to elicit and seems to represent dates in YYYY and times in "military
time".
>
> (2) Dates that I had formatted as YYYY got chopped to YY (did we not
> learn anything last time?<g>) and times that I had formatted in 24
> hours ended up AM/PM.
>
> Have any of you found a nice (or at least predictable) way to use OO
> Calc to edit files like this?
I didn't do anything I thought was out of the ordinary and so cannot
reproduce your problem. (This was on a Mac, but OO.o is probably going
to behave the same across *NIX cultures.)
--
David
> If it insists on thinking for me, I wish it would think in 24 hour
> time and 4 digit years :)
Is it possible that you have not done enough thinking for _it_?
> I work on Linux, so Excel is off the table, but another spreadsheet
> or text editor would be a viable option, as would configuration
> changes to Calc.
>
> Bill
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 87
Date: Wed, 13 Oct 2010 13:53:28 -0500
From: Erik Iverson <eriki at ccbr.umn.edu>
To: Julia Lira <julia.lira at hotmail.co.uk>
Cc: r-help at r-project.org
Subject: Re: [R] loop
Message-ID: <4CB60028.4090805 at ccbr.umn.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Julia,
Can you provide a reproducible example? Your code calls the
'rq' function which is not found on my system.
Any paring down of the code to make it more readable would
help us help you better, too.
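A sketch of the likely missing preamble, on the assumption that the
poster's rq() is the quantile-regression fitter from Roger Koenker's
quantreg package:

```r
## rq() is not in base R; the posted code needs the quantreg package
## loaded before the loop will run.
install.packages("quantreg")   # once, from CRAN
library(quantreg)
```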
Julia Lira wrote:
> Dear all,
>
>
>
> I am trying to run a loop in my codes, but the software returns an error:
"subscript out of bounds"
>
>
>
> I dont understand exactly why this is happenning. My codes are the
following:
>
>
>
> rm(list=ls()) #remove almost everything in the memory
>
> set.seed(180185)
> nsim <- 10
> mresultx <- matrix(-99, nrow=1000, ncol=nsim)
> mresultb <- matrix(-99, nrow=1000, ncol=nsim)
> N <- 200
> I <- 5
> taus <- c(0.480:0.520)
> h <- c(1:20/1000)
> codd <- c(1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81)
> ceven <- c(2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82)
> cevenl <- c(2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40)
> #Create an object to hold results.
> M <- matrix(0, ncol=82, nrow=nsim)
> Mhb0 <- matrix(0, ncol=20, nrow=nsim)
> Mhb1 <- matrix(0, ncol=20, nrow=nsim)
> Mchb0 <- matrix(0, ncol=20, nrow=nsim)
> Mchb1 <- matrix(0, ncol=20, nrow=nsim)
> for (i in 1:nsim){
> # make a matrix with 5 cols of N random uniform values
> u <- replicate( 5, runif(N, 0, 1) )
> # fit matrix u in another matrix of 1 column
> mu <- matrix(u, nrow=1000, ncol=1)
> # make auction-specific covariate
> x <- runif(N, 0, 1)
> mx <- matrix(rep(x,5), nrow=1000, ncol=1)
> b0 <- matrix(rep(c(1),1000), nrow=1000, ncol=1)
> #function for private cost
> cost <- b0+b0*mx+mu
> #bidding strategy
> bid <- mx+((I+1)/I)+((I-1)/I)*mu
> mresultb[,i] <- bid
> mresultx[,i] <- mx
> qf <- rq(formula = mresultb[,i] ~ mresultx[,i], tau= 480:520/1000)
> # Storing result and does not overwrite prior values
> M[i, ] <- coef(qf)
> QI <- (1-0.5)/(I-1)
> M50b0 <- M[,41]
> M50b1 <- M[,42]
> Mb0 <- matrix(M[,codd], nrow=nsim, ncol=20)
> Mb1 <- matrix(M[,ceven], nrow=nsim, ncol=20)
> for (t in cevenl){
> Mhb0[ ,t] <- M[,(41+t)]-M[,(41-t)]
> Mhb1[ ,t] <- M[,(42+t)]-M[,(42-t)]
> }
> }
>
>
>
> Problem: the problem is in the highlighted part of the loop. I want R
> to take column (41+t) of the matrix M and subtract from it column
> (41-t) of the same matrix, with t varying over the vector cevenl
> above.
>
>
>
> Why is this looping not working?
>
>
>
[[elided Yahoo spam]]
>
>
>
> Julia
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 88
Date: Wed, 13 Oct 2010 12:01:33 -0700
From: Bert Gunter <gunter.berton at gene.com>
To: Bart Joosen <bartjoosen at hotmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Regular expression to find value between brackets
Message-ID:
<AANLkTinb7VCVys=yx23QBoBB0LCTRa3oF1vXEpjcNpCA at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Note:
My original proposal, not quite right, can be made quite right via:
gsub(".*\\((.*)\\).*||[^()]+", "\\1",tests)
The "||" or clause at the end handles the case where there are no
parentheses in the string.
-- Bert
On Wed, Oct 13, 2010 at 11:16 AM, Bart Joosen <bartjoosen at hotmail.com>
wrote:
>
> Hi,
>
> this should be an easy one, but I can't figure it out.
> I have a vector of tests, with their units between brackets (if they have
> units).
> eg tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
>
> Now I would like to have a function where I use a test as input, and which
> returns the units
> like:
> f <- function (x) sub("\\)", "", sub("\\(", "",sub("[[:alnum:]]+","",x)))
> this should give "", "%", "%", "mg/ml", but it doesn't do the job quite
> well.
>
> After searching in the manual, and on the help lists, I can't find the
> answer.
>
> anyone?
>
> Bart
> --
> View this message in context:
http://r.789695.n4.nabble.com/Regular-expression-to-find-value-between-brackets-tp2994166p2994166.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Bert Gunter
Genentech Nonclinical Biostatistics
------------------------------
Message: 89
Date: Wed, 13 Oct 2010 15:12:17 -0400
From: Matt Shotwell <shotwelm at musc.edu>
To: Henrique Dallazuanna <wwwhsd at gmail.com>
Cc: Bart Joosen <bartjoosen at hotmail.com>, "r-help at r-project.org"
<r-help at r-project.org>
Subject: Re: [R] Regular expression to find value between brackets
Message-ID: <1286997137.1644.37.camel at matt-laptop>
Content-Type: text/plain; charset="UTF-8"
Here's a shorter (but more cryptic) one:
> gsub("^([^\\(]+)(\\((.+)\\))?", "\\2", tests)
[1] "" "(%)" "(%)" "(mg/ml)"
> gsub("^([^\\(]+)(\\((.+)\\))?", "\\3", tests)
[1] "" "%" "%" "mg/ml"
-Matt
On Wed, 2010-10-13 at 14:34 -0400, Henrique Dallazuanna wrote:
> Try this:
>
> replace(gsub(".*\\((.*)\\)$", "\\1", tests), !grepl("\\(.*\\)", tests),
"")
>
>
> On Wed, Oct 13, 2010 at 3:16 PM, Bart Joosen <bartjoosen at hotmail.com>
wrote:
>
> >
> > Hi,
> >
> > this should be an easy one, but I can't figure it out.
> > I have a vector of tests, with their units between brackets (if they
have
> > units).
> > eg tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
> >
> > Now I would like to have a function where I use a test as input, and
> > which returns the units
> > like:
> > f <- function (x) sub("\\)", "", sub("\\(", "",sub("[[:alnum:]]+","",x)))
> > this should give "", "%", "%", "mg/ml", but it doesn't do the job
> > quite well.
> >
> > After searching in the manual, and on the help lists, I can't find the
> > answer.
> >
> > anyone?
> >
> > Bart
> > --
> > View this message in context:
> >
http://r.789695.n4.nabble.com/Regular-expression-to-find-value-between-brackets-tp2994166p2994166.html
> > Sent from the R help mailing list archive at Nabble.com.
> >
> > ______________________________________________
> > R-help at r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
>
>
--
Matthew S. Shotwell
Graduate Student
Division of Biostatistics and Epidemiology
Medical University of South Carolina
------------------------------
Message: 90
Date: Wed, 13 Oct 2010 15:14:24 -0400
From: Jared Blashka <evilamarant7x at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] Nonlinear Regression Parameter Shared Across Multiple
Data Sets
Message-ID:
<AANLkTik0LhLW7EV4W2pmeO8zQdEyRmovbn7Lqm-zm9Oq at mail.gmail.com>
Content-Type: text/plain
As an addendum to my question, I'm attempting to apply the solution to the
robust non-linear regression function nlrob from the robustbase package, and
it doesn't work in that situation. I'm getting
allRobustFit <- nlrob(Y ~ (upper)/(1+10^(X-LOGEC50[dset])), data=all,
start=list(upper=max(all$Y), LOGEC50=c(-8.5,-8.5,-8.5)))
Error in nls(formula, data = data, start = start, algorithm = algorithm, :
parameters without starting value in 'data': LOGEC50
I'm guessing this is because the nlrob function doesn't know what to do with
a vector for a start value. Am I correct and is there another method of
using nlrob in the same way?
Thanks,
Jared
On Tue, Oct 12, 2010 at 8:58 AM, Jared Blashka
<evilamarant7x at gmail.com>wrote:
> Thanks so much! It works great.
>
> I had thought the way to do it relied on combining the data sets, but I
> couldn't figure out how to alter the formula to work with the combination.
>
> Jared
>
>
> On Tue, Oct 12, 2010 at 7:07 AM, Keith Jewell
<k.jewell at campden.co.uk>wrote:
>
>>
>> "Jared Blashka" <evilamarant7x at gmail.com> wrote in message
>> news:AANLkTinFfMuDugqNkUDVr=FMf0wrRTsbjXJExuki_MRH at mail.gmail.com...
>> > I'm working with 3 different data sets and applying this non-linear
>> > regression formula to each of them.
>> >
>> > nls(Y ~ (upper)/(1+10^(X-LOGEC50)), data=std_no_outliers,
>> > start=list(upper=max(std_no_outliers$Y),LOGEC50=-8.5))
>> >
>> > Previously, all of the regressions were calculated in Prism, but I'd
>> like
>> > to
>> > be able to automate the calculation process in a script, which is why
>> I'm
>> > trying to move to R. The issue I'm running into is that previously, in
>> > Prism, I was able to calculate a shared value for a constraint so that
>> all
>> > three data sets shared the same value, but have other constraints
>> > calculated
>> > separately. So Prism would figure out what single value for the
>> constraint
>> > in question would work best across all three data sets. For my formula,
>> > each
>> > data set needs its own LOGEC50 value, but the upper value should be
the
>> > same across the 3 sets. Is there a way to do this within R, or with a
>> > package I'm not aware of, or will I need to write my own nls function
to
>> > work with multiple data sets, because I've got no idea where to start
>> with
>> > that.
>> >
>> > Thanks,
>> > Jared
>> >
>> >
>> An approach which works for me (code below to illustrate principle, not
>> tried...)
>>
>> 1) combine all three "data sets" into one dataframe with a column (e.g.
>> dset) indicating data set (1, 2 or 3)
>>
>> 2) express your formula with upper as single valued and LOGEC50 as a
>> vector indexed by dset, e.g.
>> Y ~ upper/(1+10^(X-LOGEC50[dset]))
>>
>> 3) in the start list, make LOGEC50 a vector e.g. using -8.5 as start for
>> all
>> three LOGEC50 values
>> start = list(upper=max(std_no_outliers$Y), LOGEC50=c(-8.5, -8.5, -8.5))
>>
>> Hope that helps,
>>
>> Keith J
>>
>>
>
>
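Putting Keith's three steps together (an editor's sketch on made-up data; the column names X, Y, dset and the simulated true parameter values are invented for illustration):

```r
set.seed(1)
# Hypothetical combined data: three curves sharing one 'upper' plateau,
# with dset = 1, 2, 3 marking the original data set of each row
X    <- rep(seq(-11, -6, length.out = 25), 3)
dset <- rep(1:3, each = 25)
Y    <- 100 / (1 + 10^(X - c(-9, -8.5, -8)[dset])) + rnorm(75, sd = 2)
all  <- data.frame(X, Y, dset)

# One shared 'upper'; LOGEC50 is a length-3 vector indexed by dset
fit <- nls(Y ~ upper / (1 + 10^(X - LOGEC50[dset])),
           data  = all,
           start = list(upper = max(all$Y), LOGEC50 = c(-8.5, -8.5, -8.5)))
coef(fit)  # four estimates: the shared upper plus three LOGEC50 values
```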
------------------------------
Message: 91
Date: Wed, 13 Oct 2010 20:35:35 +0100
From: Julia Lira <julia.lira at hotmail.co.uk>
To: <eriki at ccbr.umn.edu>
Cc: r-help at r-project.org
Subject: Re: [R] loop
Message-ID: <SNT126-W173E6BA20DCB14273FECDED1550 at phx.gbl>
Content-Type: text/plain
Dear Eriki and all
To run Quantile regression, it is necessary to install the following package
in R:
install.packages("quantreg")
Then, write:
library(quantreg)
And the software will run.
rm(list=ls()) #remove almost everything in the memory
set.seed(180185)
nsim <- 10
mresultx <- matrix(-99, nrow=1000, ncol=nsim)
mresultb <- matrix(-99, nrow=1000, ncol=nsim)
N <- 200
I <- 5
taus <- c(0.480:0.520)
h <- c(1:20/1000)
alpha1 <- c(1:82)
aeven1 <- alpha1[2 * 1:41]
aodd1 <- alpha1[-2 * 1:41]
alpha2 <- c(1:40)
aeven2 <- alpha2[2 * 1:20]
#Create an object to hold results.
M <- matrix(0, ncol=82, nrow=nsim)
Mhb0 <- matrix(0, ncol=20, nrow=nsim)
Mhb1 <- matrix(0, ncol=20, nrow=nsim)
Mchb0 <- matrix(0, ncol=20, nrow=nsim)
Mchb1 <- matrix(0, ncol=20, nrow=nsim)
for (i in 1:nsim){
# make a matrix with 5 cols of N random uniform values
u <- replicate( 5, runif(N, 0, 1) )
# fit matrix u in another matrix of 1 column
mu <- matrix(u, nrow=1000, ncol=1)
# make auction-specific covariate
x <- runif(N, 0, 1)
mx <- matrix(rep(x,5), nrow=1000, ncol=1)
b0 <- matrix(rep(c(1),1000), nrow=1000, ncol=1)
#function for private cost
cost <- b0+b0*mx+mu
#bidding strategy
bid <- mx+((I+1)/I)+((I-1)/I)*mu
mresultb[,i] <- bid
mresultx[,i] <- mx
qf <- rq(formula = mresultb[,i] ~ mresultx[,i], tau= 480:520/1000)
# Storing result and does not overwrite prior values
M[i, ] <- coef(qf)
QI <- (1-0.5)/(I-1)
M50b0 <- M[,41]
M50b1 <- M[,42]
Mb0 <- matrix(M[,aodd1], nrow=nsim, ncol=20)
Mb1 <- matrix(M[,aeven1], nrow=nsim, ncol=20)
for (t in aeven2){
Mhb0[,t] <- M[,(41+t)]-M[,(41-t)]
Mhb1[,t] <- M[,(42+t)]-M[,(42-t)]
}
}
Here, the matrix M stores the result for all quantiles (by column)
considering each simulation i (by row).
Then, I try to make some algebraic simulations. I need to calculate Mhb0 such
that for each value of t, the column called t in matrix Mhb0 will be the
result of the subtraction of column (41-t) from the column (41+t) of the
matrix M. To be more precise:
for t = 2, 4, 6,...
Mhb0 will be a matrix such that in the first column I will have {column
(41+t) of M - column (41-t) of M}
This should be the loop, and I would have 20 columns, one for each even
number specified in aeven2.
Is that clear now? But the software says: "Error in Mhb0[, t] <- M[, (41 +
t)] - M[, (41 - t)] : subscript out of bounds"
What am I doing wrong?
Thanks a lot!
Julia
> Date: Wed, 13 Oct 2010 13:53:28 -0500
> From: eriki at ccbr.umn.edu
> To: julia.lira at hotmail.co.uk
> CC: r-help at r-project.org
> Subject: Re: [R] loop
>
> Julia,
>
> Can you provide a reproducible example? Your code calls the
> 'rq' function which is not found on my system.
>
> Any paring down of the code to make it more readable would
> help us help you better, too.
>
>
> Julia Lira wrote:
> > Dear all,
> >
> >
> >
> > I am trying to run a loop in my codes, but the software returns an
error: "subscript out of bounds"
> >
> >
> >
> > I don't understand exactly why this is happening. My codes are the
following:
> >
> >
> >
> > rm(list=ls()) #remove almost everything in the memory
> >
> > set.seed(180185)
> > nsim <- 10
> > mresultx <- matrix(-99, nrow=1000, ncol=nsim)
> > mresultb <- matrix(-99, nrow=1000, ncol=nsim)
> > N <- 200
> > I <- 5
> > taus <- c(0.480:0.520)
> > h <- c(1:20/1000)
> > codd <- c(1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81)
> > ceven <- c(2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82)
> > cevenl <- c(2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40)
> > #Create an object to hold results.
> > M <- matrix(0, ncol=82, nrow=nsim)
> > Mhb0 <- matrix(0, ncol=20, nrow=nsim)
> > Mhb1 <- matrix(0, ncol=20, nrow=nsim)
> > Mchb0 <- matrix(0, ncol=20, nrow=nsim)
> > Mchb1 <- matrix(0, ncol=20, nrow=nsim)
> > for (i in 1:nsim){
> > # make a matrix with 5 cols of N random uniform values
> > u <- replicate( 5, runif(N, 0, 1) )
> > # fit matrix u in another matrix of 1 column
> > mu <- matrix(u, nrow=1000, ncol=1)
> > # make auction-specific covariate
> > x <- runif(N, 0, 1)
> > mx <- matrix(rep(x,5), nrow=1000, ncol=1)
> > b0 <- matrix(rep(c(1),1000), nrow=1000, ncol=1)
> > #function for private cost
> > cost <- b0+b0*mx+mu
> > #bidding strategy
> > bid <- mx+((I+1)/I)+((I-1)/I)*mu
> > mresultb[,i] <- bid
> > mresultx[,i] <- mx
> > qf <- rq(formula = mresultb[,i] ~ mresultx[,i], tau= 480:520/1000)
> > # Storing result and does not overwrite prior values
> > M[i, ] <- coef(qf)
> > QI <- (1-0.5)/(I-1)
> > M50b0 <- M[,41]
> > M50b1 <- M[,42]
> > Mb0 <- matrix(M[,codd], nrow=nsim, ncol=20)
> > Mb1 <- matrix(M[,ceven], nrow=nsim, ncol=20)
> > for (t in cevenl){
> > Mhb0[ ,t] <- M[,(41+t)]-M[,(41-t)]
> > Mhb1[ ,t] <- M[,(42+t)]-M[,(42-t)]
> > }
> > }
> >
> >
> >
> > Problem: the problem is in the red part of the loop. I want the
software to take the column (41+t) from the matrix called M and subtract from
it the column (41-t) of the same matrix M, such that the value of t varies
according to the vector cevenl above.
> >
> >
> >
> > Why is this looping not working?
> >
> >
> >
> >
> >
> >
> > Julia
> >
> >
------------------------------
Message: 92
Date: Wed, 13 Oct 2010 13:36:08 -0600
From: Greg Snow <Greg.Snow at imail.org>
To: "zadra at virginia.edu" <zadra at virginia.edu>, "r-help at r-project.org"
<r-help at r-project.org>
Subject: Re: [R] Change global env variables from within a function
Message-ID:
<B37C0A15B8FB3C468B5BC7EBC7DA14CC633E8152F2 at LP-EXMBVS10.CO.IHC.COM>
Content-Type: text/plain; charset="us-ascii"
R is primarily a functional language, not a macro language, and what you are
trying to do is closer to macro programming.
The best approach is to think functionally and learn the more R-ish ways of
doing things (have the function return the changed data and let the caller do
the replacement, as has been shown in other responses).
If you really want the macro behavior (and are willing to live with the
consequences (don't complain that you were not warned)), then look at the
defmacro function in the gtools package and the article in the reference
section on its help page.
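A minimal sketch of the functional pattern Greg describes, using Jon's own colrename(): the function returns the modified data frame, and the caller reassigns it.

```r
# Return the modified data frame; the caller does the assignment
colrename <- function(dframe, oldname, newname) {
  names(dframe)[names(dframe) == oldname] <- newname
  dframe
}

tempdf <- data.frame(a = 1:6, b = 7:12)
tempdf <- colrename(tempdf, "a", "g")
names(tempdf)  # "g" "b"
```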
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
801.408.8111
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> project.org] On Behalf Of Jon Zadra
> Sent: Wednesday, October 13, 2010 12:20 PM
> To: r-help at r-project.org
> Subject: [R] Change global env variables from within a function
>
> Hi,
>
> I've looked all over for a solution to this, but haven't had much luck
> in specifying what I want to do with appropriate search terms. Thus
> I'm
> turning to R-help.
>
> In the process of trying to write a simple function to rename
> individual
> column names in a data frame, I ran into the following problem: When I
> rename the columns within my function, I can't seem to get it to apply
> to the data frame in the global environment in a simple manner.
>
> Given:
>
> tempdf <- data.frame("a" = 1:6, "b" = 7:12)
>
> #I can rename a to g this way:
> names(tempdf)[names(tempdf)=="a"] <- "g"
>
> #Wanting to simplify this for the future, I have the function:
> colrename <- function(dframe, oldname, newname) {
> names(dframe)[names(dframe)==oldname] <- newname
> }
>
> colrename(tempdf, "a", "g")
>
> #However of course the change to tempdf stays within colrename(). I
> could add "return(names(dframe))" to the function and then call the
> function like:
> #names(tempdf) <- colrename(tempdf, "a", "g")
> #but at that point its not much simpler than the original:
> #names(tempdf)[names(tempdf)=="a"] <- "g"
>
> So, I'm wondering, is there a way to write code within that function so
> that the tempdf in the global environment gets changed?
> I thought of doing "names(dframe)[names(dframe)==oldname] <<- newname"
> but then of course it doesn't replace "dframe" with "tempdf" in the
> evaluation, so it seems to be a problem of looking at some variables
> within the function environment but others in the parent environment.
>
> Any help is greatly appreciated.
>
> Thanks,
>
> Jon
> --
> Jon Zadra
> Department of Psychology
> University of Virginia
> P.O. Box 400400
> Charlottesville VA 22904
> (434) 982-4744
> email: zadra at virginia.edu
> <http://www.google.com/calendar/embed?src=jzadra%40gmail.com>
>
------------------------------
Message: 93
Date: Wed, 13 Oct 2010 12:48:31 -0700 (PDT)
From: Phil Spector <spector at stat.berkeley.edu>
To: Julia Lira <julia.lira at hotmail.co.uk>
Cc: r-help at r-project.org
Subject: Re: [R] loop
Message-ID:
<alpine.DEB.2.00.1010131246130.32077 at springer.Berkeley.EDU>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Julia -
Your subscript is out of range because in this loop:
for (t in aeven2){
Mhb0[,t] <- M[,(41+t)]-M[,(41-t)]
Mhb1[,t] <- M[,(42+t)]-M[,(42-t)]
}
t takes the values from 2 to 40 by 2, and you've
declared Mhb0 and Mhb1 as matrices with 20 columns.
So when t reaches 22, there is no corresponding
column in Mhb0 or Mhb1.
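One possible fix, following Phil's diagnosis (an editor's sketch, assuming Julia wants the 20 results stored in consecutive columns): index the destination by t's position in aeven2 rather than by t itself.

```r
nsim   <- 10
M      <- matrix(rnorm(nsim * 82), nrow = nsim, ncol = 82)  # stand-in for the real M
aeven2 <- seq(2, 40, by = 2)                                # the 20 values of t
Mhb0   <- matrix(0, nrow = nsim, ncol = 20)
Mhb1   <- matrix(0, nrow = nsim, ncol = 20)

# j runs 1..20, so the destination column never exceeds ncol(Mhb0)
for (j in seq_along(aeven2)) {
  t <- aeven2[j]
  Mhb0[, j] <- M[, 41 + t] - M[, 41 - t]
  Mhb1[, j] <- M[, 42 + t] - M[, 42 - t]
}
```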
- Phil Spector
Statistical Computing Facility
Department of Statistics
UC Berkeley
spector at stat.berkeley.edu
On Wed, 13 Oct 2010, Julia Lira wrote:
>
> Dear Eriki and all
>
>
>
> To run Quantile regression, it is necessary to install the following
package in R:
>
>
>
> install.packages("quantreg")
>
>
>
> Then, write:
>
>
>
> library(quantreg)
>
>
>
> And the software will run.
>
>
>
> rm(list=ls()) #remove almost everything in the memory
>
> set.seed(180185)
> nsim <- 10
> mresultx <- matrix(-99, nrow=1000, ncol=nsim)
> mresultb <- matrix(-99, nrow=1000, ncol=nsim)
> N <- 200
> I <- 5
> taus <- c(0.480:0.520)
> h <- c(1:20/1000)
> alpha1 <- c(1:82)
> aeven1 <- alpha1[2 * 1:41]
> aodd1 <- alpha1[-2 * 1:41]
> alpha2 <- c(1:40)
> aeven2 <- alpha2[2 * 1:20]
> #Create an object to hold results.
> M <- matrix(0, ncol=82, nrow=nsim)
> Mhb0 <- matrix(0, ncol=20, nrow=nsim)
> Mhb1 <- matrix(0, ncol=20, nrow=nsim)
> Mchb0 <- matrix(0, ncol=20, nrow=nsim)
> Mchb1 <- matrix(0, ncol=20, nrow=nsim)
> for (i in 1:nsim){
> # make a matrix with 5 cols of N random uniform values
> u <- replicate( 5, runif(N, 0, 1) )
> # fit matrix u in another matrix of 1 column
> mu <- matrix(u, nrow=1000, ncol=1)
> # make auction-specific covariate
> x <- runif(N, 0, 1)
> mx <- matrix(rep(x,5), nrow=1000, ncol=1)
> b0 <- matrix(rep(c(1),1000), nrow=1000, ncol=1)
> #function for private cost
> cost <- b0+b0*mx+mu
> #bidding strategy
> bid <- mx+((I+1)/I)+((I-1)/I)*mu
> mresultb[,i] <- bid
> mresultx[,i] <- mx
> qf <- rq(formula = mresultb[,i] ~ mresultx[,i], tau= 480:520/1000)
> # Storing result and does not overwrite prior values
> M[i, ] <- coef(qf)
> QI <- (1-0.5)/(I-1)
> M50b0 <- M[,41]
> M50b1 <- M[,42]
> Mb0 <- matrix(M[,aodd1], nrow=nsim, ncol=20)
> Mb1 <- matrix(M[,aeven1], nrow=nsim, ncol=20)
> for (t in aeven2){
> Mhb0[,t] <- M[,(41+t)]-M[,(41-t)]
> Mhb1[,t] <- M[,(42+t)]-M[,(42-t)]
> }
> }
>
>
>
> Here, the matrix M stores the result for all quantiles (by column)
considering each simulation i (by row).
>
> Then, I try to make some algebraic simulations. I need to calculate Mhb0
such that for each value of t, the column called t in matrix Mhb0 will be
the result of the subtraction of column (41-t) from the column (41+t) of the
matrix M. To be more precise:
>
>
>
> for t = 2, 4, 6,...
>
>
>
> Mhb0 will be a matrix such that in the first column I will have {column
(41+t) of M - column (41-t) of M}
>
> This should be the loop, and I would have 20 columns, one for each even
number specified in aeven2.
>
>
>
> Is that clear now? But the software says: "Error in Mhb0[, t] <- M[, (41 +
t)] - M[, (41 - t)] : subscript out of bounds"
>
>
>
> What am I doing wrong?
>
>
>
> Thanks a lot!
>
>
>
> Julia
>
>
>
>
>
>
------------------------------
Message: 94
Date: Wed, 13 Oct 2010 12:51:22 -0700
From: "Charles C. Berry" <cberry at tajo.ucsd.edu>
To: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID: <Pine.LNX.4.64.1010131231310.16987 at tajo.ucsd.edu>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
On Wed, 13 Oct 2010, Schwab,Wilhelm K wrote:
[[elided Yahoo spam]]
emacs org-mode can convert your tab delimited file to a 'table' that you
can edit either using org-mode functions OR as plain text by switching to
fundamental mode.
In emacs speak, just put the cursor at the top of a buffer holding your
file and do
M-x replace-string RET TAB RET | RET
I think, then move your cursor to a line that has a '|' in it and hit TAB,
and you have a neatly formatted table.
See,
http://orgmode.org/worg/org-tutorials/tables.php
for an intro.
A big advantage in using an org-mode table is you can place an R source
code block further down in the same file, and it can read in the data in
the table. Then you can go back to the table to edit, then rerun R, ...
I append an example below.
There is a load of tutorial info at
http://orgmode.org/worg/org-tutorials/index.php
HTH,
Chuck
#+begin_example
#+tblname: simpleDF
| a | b | c |
|---+---+---|
| 1 | 2 | 3 |
| 5 | 4 | 2 |
|---+---+---|
#+end_example
#+begin_src R :var df=simpleDF :results output :colnames yes
summary( df )
#+end_src
#+results:
: a b c
: Min. :1 Min. :2.0 Min. :2.00
: 1st Qu.:2 1st Qu.:2.5 1st Qu.:2.25
: Median :3 Median :3.0 Median :2.50
: Mean :3 Mean :3.0 Mean :2.50
: 3rd Qu.:4 3rd Qu.:3.5 3rd Qu.:2.75
: Max. :5 Max. :4.0 Max. :3.00
>
> Bill
>
>
>
> ________________________________________
> From: Albyn Jones [jones at reed.edu]
> Sent: Wednesday, October 13, 2010 2:14 PM
> To: Schwab,Wilhelm K
> Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
>
> emacs shows you exactly what is there, nothing more nor less.
> it isn't a spreadsheet, but tabs will align columns.
>
> albyn
>
> On Wed, Oct 13, 2010 at 01:53:46PM -0400, Schwab,Wilhelm K wrote:
>> Albyn,
>>
>> I'll look into it. In fact, I have a small book on it that I bought in
my very early days of using Linux. I quickly found TeX Maker (for the
obvious), Code::Blocks for C/C++ and I would not have started the move
without a working Smalltalk (http://pharo-project.org/home).
>>
>> For editing data files, I really just want something that shows data in
an understandable grid and does not do weird stuff thinking it's being
helpful.
>>
>> Bill
>>
>>
>> ________________________________________
>> From: Albyn Jones [jones at reed.edu]
>> Sent: Wednesday, October 13, 2010 1:39 PM
>> To: Schwab,Wilhelm K
>> Cc: r-help at r-project.org
>> Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
>>
>> How about emacs?
>>
>> albyn
>>
>> On Wed, Oct 13, 2010 at 01:13:03PM -0400, Schwab,Wilhelm K wrote:
>>> Hello all,
>>> <.....>
>>> Have any of you found a nice (or at least predictable) way to use OO
Calc to edit files like this? If it insists on thinking for me, I wish it
would think in 24 hour time and 4 digit years :) I work on Linux, so Excel
is off the table, but another spreadsheet or text editor would be a viable
option, as would configuration changes to Calc.
>>>
>>> Bill
>>>
>>>
>>
>> --
>> Albyn Jones
>> Reed College
>> jones at reed.edu
>>
>>
>
> --
> Albyn Jones
> Reed College
> jones at reed.edu
>
>
Charles C. Berry (858) 534-2098
Dept of Family/Preventive Medicine
E mailto:cberry at tajo.ucsd.edu UC San Diego
http://famprevmed.ucsd.edu/faculty/cberry/ La Jolla, San Diego 92093-0901
------------------------------
Message: 95
Date: Wed, 13 Oct 2010 15:01:17 -0500
From: "Mauricio Romero" <mauricio.romero at quantil.com.co>
To: <r-help at r-project.org>
Subject: [R] vectorizing: selecting one record per group
Message-ID: <4cb6100f.0c44970a.6fe3.699a at mx.google.com>
Content-Type: text/plain
Hi,
I want to select a subsample from my data, choosing one record from each
group. I know how to do this with a for.
For example: lets say I have the data:
A=cbind(rnorm(100),runif(100),(rep(c(1,2,3,4,5),20)))
Where the third column is the group variable. Then what I want is to select
5 observations. Each one taken randomly from each group.
INDEX =NULL
i=1
for(index_g in unique(A[,3])){
INDEX [i]=sample(which(A[,3]==index_g),1)
i=i+1
}
SEL=A[INDEX,]
Is there a way to do this without a 'for'? In other words, is there a way to
vectorize this?
Thank you,
Mauricio Romero
Quantil S.A.S.
Bogota,Colombia
www.quantil.com.co
"It is from the earth that we must find our substance; it is on the earth
that we must find solutions to the problems that promise to destroy all life
here"
------------------------------
Message: 96
Date: Wed, 13 Oct 2010 16:01:59 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: zadra at virginia.edu
Cc: r-help at r-project.org
Subject: Re: [R] Change global env variables from within a function
Message-ID: <F809917B-0A2E-46DC-A8CD-639E4A856966 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Oct 13, 2010, at 2:19 PM, Jon Zadra wrote:
> Hi,
>
> I've looked all over for a solution to this, but haven't had much
> luck in specifying what I want to do with appropriate search terms.
> Thus I'm turning to R-help.
>
> In the process of trying to write a simple function to rename
> individual column names in a data frame, I ran into the following
> problem: When I rename the columns within my function, I can't seem
> to get it to apply to the data frame in the global environment in a
> simple manner.
There already is a function (or perhaps a composition of two
functions) that does what you ask. It's the '<-' method for names()
usually invoked in its dyadic form:
names(object) <- character_vector
> df <- data.frame(one=rnorm(10), two=rnorm(10))
> names(df) <- c("three", "four")
> names(df)
[1] "three" "four"
In your case it would work like this:
> names(tempdf)[ which(names(tempdf)=="a") ] <- "g"
> names(tempdf)
[1] "g" "b"
--
David.
>
> Given:
>
> tempdf <- data.frame("a" = 1:6, "b" = 7:12)
>
> #I can rename a to g this way:
> names(tempdf)[names(tempdf)=="a"] <- "g"
>
> #Wanting to simplify this for the future, I have the function:
> colrename <- function(dframe, oldname, newname) {
> names(dframe)[names(dframe)==oldname] <- newname
> }
>
> colrename(tempdf, "a", "g")
>
> #However of course the change to tempdf stays within colrename(). I
> could add "return(names(dframe))" to the function and then call the
> function like:
> #names(tempdf) <- colrename(tempdf, "a", "g")
> #but at that point its not much simpler than the original:
> #names(tempdf)[names(tempdf)=="a"] <- "g"
>
> So, I'm wondering, is there a way to write code within that function
> so that the tempdf in the global environment gets changed?
> I thought of doing "names(dframe)[names(dframe)==oldname] <<-
> newname" but then of course it doesn't replace "dframe" with
> "tempdf" in the evaluation, so it seems to be a problem of looking
> at some variables within the function environment but others in the
> parent environment.
>
> Any help is greatly appreciated.
>
> Thanks,
>
> Jon
> --
> Jon Zadra
> Department of Psychology
> University of Virginia
> P.O. Box 400400
> Charlottesville VA 22904
> (434) 982-4744
> email: zadra at virginia.edu
> <http://www.google.com/calendar/embed?src=jzadra%40gmail.com>
>
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 97
Date: Wed, 13 Oct 2010 14:10:43 -0600
From: Alisa Wade <alisaww at gmail.com>
To: r-help at r-project.org
Subject: [R] Matrix subscripting to wrap around from end to start of
row
Message-ID:
<AANLkTimX+evNtDhHe9EQ350pLS-FkVpcSfFL=H5j4Drd at mail.gmail.com>
Content-Type: text/plain
Perhaps it is just that I don't even know the correct term to search for,
but I can find nothing that explains how to wrap around from the end to the
start of a row in a matrix.
For example, you have a matrix of 2 years of data, where rows are years, and
columns are months.
month.data = matrix(c(3,4,6,8,12,90,5,14,22, 8), nrow = 2, ncol=5)
I would like to take the average of months 5:1 for each year (for row 1
=12.5). However, I am passing the start month (5) and the end month (1) as
variables.
I would like to do something like
year.avg = apply(month.data[, start.month:end.month], MARGIN=1, mean)
But that gives me the average of months 1:5. (for row 1 =9.6)
I know I could use:
apply(month.data[, c(1,5)], 1, mean)
but I don't know how to pass start.month and end.month in that format,
because paste or sprintf forces them to strings, which are not accepted in a
subscript.
I have the feeling I am unaware of some obvious trick.
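One way to get the wrap-around (an editor's sketch): build the column indices with modular arithmetic, which handles any start/end pair whether or not it wraps.

```r
month.data  <- matrix(c(3, 4, 6, 8, 12, 90, 5, 14, 22, 8), nrow = 2, ncol = 5)
start.month <- 5
end.month   <- 1

n    <- ncol(month.data)
len  <- (end.month - start.month) %% n + 1         # how many months, wrapping
cols <- (start.month + seq_len(len) - 2) %% n + 1  # here: 5, 1
year.avg <- apply(month.data[, cols, drop = FALSE], MARGIN = 1, mean)
year.avg  # row 1: 12.5
```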
*****************************************************
Alisa A. Wade
Postdoctoral Center Associate
National Center for Ecological Analysis and Synthesis
wade at nceas.ucsb.edu
(406) 529-9722
home email: alisaww at gmail.com
------------------------------
Message: 98
Date: Wed, 13 Oct 2010 16:14:23 -0400
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: Bart Joosen <bartjoosen at hotmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Regular expression to find value between brackets
Message-ID:
<AANLkTimTv5WhH8vxBkQe+wu1wbrivXNALX6n7RDVpQUH at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Wed, Oct 13, 2010 at 2:16 PM, Bart Joosen <bartjoosen at hotmail.com> wrote:
>
> Hi,
>
> this should be an easy one, but I can't figure it out.
> I have a vector of tests, with their units between brackets (if they have
> units).
> eg tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
>
strapply in gsubfn can match by content, which is what you want.
The regular expression is a literal left paren "\\(", followed by a
capturing group ( holding the longest string that contains no right
paren, [^)]* ), followed by a literal right paren "\\)". strapply
passes the captured matches to the function in the third arg (here c,
which just concatenates them), and the result is simplified into a
character vector (rather than a list).
library(gsubfn)
strapply(tests, "\\(([^)]*)\\)", c, simplify = c)
e.g.
> strapply(tests, "\\(([^)]*)\\)", c, simplify = c)
[1] "%" "%" "mg/ml"
See http://gsubfn.googlecode.com for the gsubfn home page and more info.
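For completeness, a base-R alternative (an editor's sketch; it relies on an unmatched capture group substituting as the empty string, so tests without brackets yield ""):

```r
tests <- c("pH", "Assay (%)", "Impurity A(%)", "content (mg/ml)")
# First branch captures the bracket contents; the bare .* branch
# matches everything else, leaving the capture group empty
units <- sub(".*\\(([^)]*)\\).*|.*", "\\1", tests)
units
```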
--
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com
------------------------------
Message: 99
Date: Wed, 13 Oct 2010 15:17:16 -0500
From: Erik Iverson <eriki at ccbr.umn.edu>
To: Mauricio Romero <mauricio.romero at quantil.com.co>
Cc: r-help at r-project.org
Subject: Re: [R] vectorizing: selecting one record per group
Message-ID: <4CB613CC.7010803 at ccbr.umn.edu>
Content-Type: text/plain; charset=windows-1252; format=flowed
Hello,
There are probably many ways to do this, but I think
it's easier if you use a data.frame as your object.
The easy solution for the matrix you provide is escaping
me at the moment.
One solution, using the plyr package:
library(plyr)
A <- data.frame(a = rnorm(100),b = runif(100), c = rep(c(1,2,3,4,5),20))
ddply(A, .(c), function(x) x[sample(1:nrow(x), 1), ])
a b c
1 0.02995847 0.4763819 1
2 0.72035194 0.2948611 2
3 1.34963917 0.2057488 3
4 -1.99427160 0.1147923 4
5 -0.73612703 0.5889539 5
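A base-R alternative without extra packages (an editor's sketch): sample one row index per group. Drawing via i[sample(length(i), 1)] sidesteps sample()'s surprising behaviour when a group holds a single row.

```r
set.seed(42)
A <- data.frame(a = rnorm(100), b = runif(100), c = rep(1:5, 20))

# One random row index per level of the grouping column
idx <- tapply(seq_len(nrow(A)), A$c, function(i) i[sample(length(i), 1)])
SEL <- A[idx, ]
SEL$c  # one row from each group: 1 2 3 4 5
```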
Mauricio Romero wrote:
> Hi,
>
>
>
> I want to select a subsample from my data, choosing one record from each
> group. I know how to do this with a for.
>
>
>
> For example: lets say I have the data:
>
> A=cbind(rnorm(100),runif(100),(rep(c(1,2,3,4,5),20)))
>
> Where the third column is the group variable. Then what I want is to
select
> 5 observations. Each one taken randomly from each group.
>
>
>
>
>
> INDEX =NULL
>
> i=1
>
> for(index_g in unique(A[,3])){
>
> INDEX [i]=sample(which(A[,3]==index_g),1)
>
> i=i+1
>
> }
>
> SEL=A[INDEX,]
>
>
>
>
>
> Is there a way to do this without a 'for'? In other words, is there a way
> to 'vectorize' this?
>
>
>
> Thank you,
>
>
>
>
>
> Mauricio Romero
>
> Quantil S.A.S.
>
> Bogota, Colombia
>
> www.quantil.com.co
>
>
>
> "It is from the earth that we must find our substance; it is on the earth
> that we must find solutions to the problems that promise to destroy all life
> here"
>
>
>
>
>
>
>
> ------------------------------------------------------------------------
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 100
Date: Wed, 13 Oct 2010 12:16:05 -0700 (PDT)
To: r-help at r-project.org
Subject: Re: [R] Coin Toss Simulation
Message-ID: <227938.26341.qm at web43136.mail.sp1.yahoo.com>
Content-Type: text/plain
Erik- thank you very much. The rbinom worked.
Thanks !
GR
________________________________
From: Erik Iverson-3 [via R]
<ml-node+2994186-584049527-199273 at n4.nabble.com>
Sent: Wed, October 13, 2010 2:24:18 PM
Subject: Re: Coin Toss Simulation
Shiv wrote:
> I am trying a simple coin toss simulation, of say 200 coin tosses. The idea
> is to have a data frame like so:
> Experiment# Number_Of_Heads
> 1 104
> 2 96
> 3 101
>
> So I do:
>
> d <-data.frame(exp_num=c(1,2,3)); /* Just 3 experiments to begin with */
> d$head_ct <-sum(sample(0:1,200,repl=TRUE));
/* */ is not a valid comment in R, and you don't need ';' either :)
Just try evaluating the right hand side so that the R interpreter
prints the result. This is just calling sample once, and therefore
you only get one number, 85 in your case.
> d;
> exp_num head_ct
> 1 1 85
> 2 2 85
> 3 3 85 /* the same scalar value is
> applied to all the rows */
>
> So I tried using "within", and "for", and making the "sum( " as a
> function, and I end up with the same...I get the same constant value. But
> what I want of course is different "samples"....
Look at ?replicate or just use ?rbinom directly
d <-data.frame(exp_num = 1:3)
d$head_ct <- rbinom(nrow(d), 200, prob = 0.5)
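[Editorial note: the replicate() route mentioned above can look like this -- a sketch, not code from the thread. replicate() reruns sample() once per experiment, so each row gets its own head count, which is exactly what the single sum() call above failed to do.]

```r
# Sketch: replicate() reruns the expression once per experiment,
# so each row gets an independent head count.
set.seed(1)  # only so the sketch is reproducible
d <- data.frame(exp_num = 1:3)
d$head_ct <- replicate(nrow(d), sum(sample(0:1, 200, replace = TRUE)))
d
```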
>
>
>
______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 101
Date: Wed, 13 Oct 2010 15:10:29 -0400
From: Mike Marchywka <marchywka at hotmail.com>
To: <dwinsemius at comcast.net>, <bschwab at anest.ufl.edu>
Cc: r-help at r-project.org
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID: <BLU113-W193E2B80BDDED72030B93EBE550 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"
----------------------------------------
> From: dwinsemius at comcast.net
> To: bschwab at anest.ufl.edu
> Date: Wed, 13 Oct 2010 14:52:21 -0400
> CC: r-help at r-project.org
> Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
>
>
> On Oct 13, 2010, at 1:13 PM, Schwab,Wilhelm K wrote:
>
> > Hello all,
> >
> > I had a very strange looking problem that turned out to be due to
> > unexpected (by me at least) format changes to one of my data files.
> > We have a small lab study in which each run is represented by a row
> > in a tab-delimited file; each row identifies a repetition of the
> > experiment and associates it with some subjective measurements and
> > times from our notes that get used to index another file with lots
> > of automatically collected data. In short, nothing shocking.
> >
> > In a moment of weakness, I opened the file using (I think it's
> > version 3.2) of OpenOffice Calc to edit something that I had mangled
> > when I first entered it, saved it (apparently the mistake), and
> > reran my analysis code. The results were goofy, and the problem was
> > in my code that runs before R ever sees the data. That code was
> > confused by things that I would like to ensure don't happen again,
> > and I suspect that some of you might have thoughts on it.
> >
> > The problems specifically:
> >
> > (1) OO seems to be a little stingy about producing tab-delimited
> > text; there is stuff online about using the csv and editing the
> > filter and folks (presumably like us) saying that it deserves to be
> > a separate option.
>
> You have been little stingy yourself about describing what you did. I
> see no specifics about the actual data used as input nor the specific
> operations. I just opened an OO.o Calc workbook and dropped a
> character vector, "1969-12-31 23:59:50" copied from help(POSIXct) into
> > Have any of you found a nice (or at least predictable) way to use OO
> > Calc to edit files like this?
>
> I didn't do anything I thought was out of the ordinary and so cannot
> reproduce your problem. (This was on a Mac, but OO.o is probably going
> to behave the same across *NIX cultures.)
>
> --
> David
>
> > If it insists on thinking for me, I wish it would think in 24 hour
> > time and 4 digit years :)
>
> Is it possible that you have not done enough thinking for _it_?
>
> > I work on Linux, so Excel is off the table, but another spreadsheet
> > or text editor would be a viable option, as would configuration
> > changes to Calc.
> >
> > Bill
Probably instead of guessing and seeing how various things react, you
could go get a utility like octal dump or open in an editor that
has a hex mode and see what happened. This could be anything- crlf
convention,
someone turned it to unicode, etc. On linux or cygwin I think you have
"od" available. Then of course, if you know what R likes, you can use
sed to fix it...
------------------------------
Message: 102
Date: Wed, 13 Oct 2010 16:20:28 -0400
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: Kurt_Helf at nps.gov
Cc: r-help at r-project.org
Subject: Re: [R] strip month and year from MM/DD/YYYY format
Message-ID:
<AANLkTinvYyoLbbr8CijitLVHz2OzqdgA93ffgGB+q9qx at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Wed, Oct 13, 2010 at 10:49 AM, <Kurt_Helf at nps.gov> wrote:
> Greetings
> I'm having difficulty with the strptime function. I can't seem to
> figure a way to strip month (name) and year and create separate columns
> from a column with MM/DD/YYYY formatted dates. Can anyone help?
Try month.day.year in chron:
> library(chron)
> dat <- as.Date(c("6/12/2009", "4/15/2010"), "%m/%d/%Y")
> month.day.year(dat)
$month
[1] 6 4
$day
[1] 12 15
$year
[1] 2009 2010
> # or
> as.data.frame(month.day.year(dat))
month day year
1 6 12 2009
2 4 15 2010
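[Editorial note: a base-R alternative without chron, added as a sketch -- format() pulls each piece out of a Date, and as.integer() turns it numeric.]

```r
# Base-R sketch: format() extracts month/day/year fields from a Date,
# as.integer() converts the character results to numbers.
dat <- as.Date(c("6/12/2009", "4/15/2010"), "%m/%d/%Y")
data.frame(month = as.integer(format(dat, "%m")),
           day   = as.integer(format(dat, "%d")),
           year  = as.integer(format(dat, "%Y")))
```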
--
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com
------------------------------
Message: 103
Date: Wed, 13 Oct 2010 17:23:00 -0300
From: Henrique Dallazuanna <wwwhsd at gmail.com>
To: Mauricio Romero <mauricio.romero at quantil.com.co>
Cc: r-help at r-project.org
Subject: Re: [R] vectorizing: selecting one record per group
Message-ID:
<AANLkTikJbbPKmK9Yjnet2DFY0J4Sk2Nnmgpcj337ycBM at mail.gmail.com>
Content-Type: text/plain
Try this:
A <- data.frame(V1 = rnorm(100), V2 = runif(100), V3 = rep(c(1,2,3,4,5),20))
do.call(cbind, lapply(aggregate(. ~ V3, A, FUN = sample, size = 5), c))
On Wed, Oct 13, 2010 at 5:01 PM, Mauricio Romero <
mauricio.romero at quantil.com.co> wrote:
> Hi,
>
>
>
> I want to select a subsample from my data, choosing one record from each
> group. I know how to do this with a for.
>
>
>
> For example: let's say I have the data:
>
> A=cbind(rnorm(100),runif(100),(rep(c(1,2,3,4,5),20)))
>
> Where the third column is the group variable. Then what I want is to select
> 5 observations. Each one taken randomly from each group.
>
>
>
>
>
> INDEX =NULL
>
> i=1
>
> for(index_g in unique(A[,3])){
>
> INDEX [i]=sample(which(A[,3]==index_g),1)
>
> i=i+1
>
> }
>
> SEL=A[INDEX,]
>
>
>
>
>
> Is there a way to do this without a for? In other words is there a way to
> vectorize this?
>
>
>
> Thank you,
>
>
>
>
>
> Mauricio Romero
>
> Quantil S.A.S.
>
> Bogotá, Colombia
>
> www.quantil.com.co
>
>
>
> "It is from the earth that we must find our substance; it is on the earth
> that we must find solutions to the problems that promise to destroy all
> life
> here"
>
>
>
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
Henrique Dallazuanna
Curitiba-Parana-Brasil
25° 25' 40" S 49° 16' 22" O
------------------------------
Message: 104
Date: Wed, 13 Oct 2010 16:23:28 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Alisa Wade <alisaww at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Matrix subscripting to wrap around from end to start
of row
Message-ID: <92FE940F-573C-45F0-8BDF-4219A83A109C at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Oct 13, 2010, at 4:10 PM, Alisa Wade wrote:
> Perhaps it is just that I don't even know the correct term to search
> for,
> but I can find nothing that explains how to wrap around from the end
> to a
> start of a row in a matrix.
>
> For example, you have a matrix of 2 years of data, where rows are
> years, and
> columns are months.
> month.data = matrix(c(3,4,6,8,12,90,5,14,22, 8), nrow = 2, ncol=5)
>
> I would like to take the average of months 5:1 for each year (for
> row 1
> =12.5). However, I am passing the start month (5) and the end month
> (1) as
> variables.
>
> I would like to do something like
>
> year.avg = apply(month.data[, start.month:end.month], MARGIN=1, mean)
>
> But that gives me the average of months 1:5. (for row 1 =9.6)
>
> I know I could use:
> apply(month.data[, c(1,5)], 1, mean)
> start=5; end=1; year.avg = apply(month.data[, c(start,end)], 1, mean)
> year.avg
[1] 12.5 6.0
> but I don't know how to pass start.month, end.month into that format
> that
> because paste or sprintf forces them to strings, which are not
> accepted in a
> subscript.
>
> I have the feeling I am unaware of some obvious trick.
[[elided Yahoo spam]]
>
> *****************************************************
> Alisa A. Wade
> Postdoctoral Center Associate
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 105
Date: Wed, 13 Oct 2010 16:27:47 -0400
From: "Schwab,Wilhelm K" <bschwab at anest.ufl.edu>
To: Mike Marchywka <marchywka at hotmail.com>, "dwinsemius at comcast.net"
<dwinsemius at comcast.net>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
Message-ID:
<A93938CC72687245835AC0BE10199F7C84D47D0B at HSC-CMS01.ad.ufl.edu>
Content-Type: text/plain; charset="us-ascii"
I know *what* happened (Calc reformatted the data in ways I did not want or
expect). It is not end-of-line conventions; they reformatted the data
leaving the structure intact. As to why/how, that could depend on the
sequence of operations, so I thought to ask here to see if you had
collectively either found something specific to do or to avoid.
Gnumeric is now freshly installed and will get some testing; if I don't care
for it, I'll look more at emacs. I don't ask much of a spreadsheet
(show/edit a grid and maybe hide/show columns for complex data sets), but it
would be nice if it did not reformat everything every time I open a file :(
So far, gnumeric successfully opened a file; I will be a little less
[[elided Yahoo spam]]
Bill
________________________________________
From: Mike Marchywka [marchywka at hotmail.com]
Sent: Wednesday, October 13, 2010 3:10 PM
To: dwinsemius at comcast.net; Schwab,Wilhelm K
Cc: r-help at r-project.org
Subject: RE: [R] [OT] (slightly) - OpenOffice Calc and text files
----------------------------------------
> From: dwinsemius at comcast.net
> To: bschwab at anest.ufl.edu
> Date: Wed, 13 Oct 2010 14:52:21 -0400
> CC: r-help at r-project.org
> Subject: Re: [R] [OT] (slightly) - OpenOffice Calc and text files
>
>
> On Oct 13, 2010, at 1:13 PM, Schwab,Wilhelm K wrote:
>
> > Hello all,
> >
> > I had a very strange looking problem that turned out to be due to
> > unexpected (by me at least) format changes to one of my data files.
> > We have a small lab study in which each run is represented by a row
> > in a tab-delimited file; each row identifies a repetition of the
> > experiment and associates it with some subjective measurements and
> > times from our notes that get used to index another file with lots
> > of automatically collected data. In short, nothing shocking.
> >
> > In a moment of weakness, I opened the file using (I think it's
> > version 3.2) of OpenOffice Calc to edit something that I had mangled
> > when I first entered it, saved it (apparently the mistake), and
> > reran my analysis code. The results were goofy, and the problem was
> > in my code that runs before R ever sees the data. That code was
> > confused by things that I would like to ensure don't happen again,
> > and I suspect that some of you might have thoughts on it.
> >
> > The problems specifically:
> >
> > (1) OO seems to be a little stingy about producing tab-delimited
> > text; there is stuff online about using the csv and editing the
> > filter and folks (presumably like us) saying that it deserves to be
> > a separate option.
>
> You have been little stingy yourself about describing what you did. I
> see no specifics about the actual data used as input nor the specific
> operations. I just opened an OO.o Calc workbook and dropped a
> character vector, "1969-12-31 23:59:50" copied from help(POSIXct) into
> > Have any of you found a nice (or at least predictable) way to use OO
> > Calc to edit files like this?
>
> I didn't do anything I thought was out of the ordinary and so cannot
> reproduce your problem. (This was on a Mac, but OO.o is probably going
> to behave the same across *NIX cultures.)
>
> --
> David
>
> > If it insists on thinking for me, I wish it would think in 24 hour
> > time and 4 digit years :)
>
> Is it possible that you have not done enough thinking for _it_?
>
> > I work on Linux, so Excel is off the table, but another spreadsheet
> > or text editor would be a viable option, as would configuration
> > changes to Calc.
> >
> > Bill
Probably instead of guessing and seeing how various things react, you
could go get a utility like octal dump or open in an editor that
has a hex mode and see what happened. This could be anything- crlf
convention,
someone turned it to unicode, etc. On linux or cygwin I think you have
"od" available. Then of course, if you know what R likes, you can use
sed to fix it...
------------------------------
Message: 106
Date: Wed, 13 Oct 2010 16:29:15 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Erik Iverson <eriki at ccbr.umn.edu>
Cc: Mauricio Romero <mauricio.romero at quantil.com.co>,
r-help at r-project.org
Subject: Re: [R] vectorizing: selecting one record per group
Message-ID: <53A2CAF5-8BF6-45B9-AA73-4B50A2EB6C49 at comcast.net>
Content-Type: text/plain; charset=WINDOWS-1252; format=flowed;
delsp=yes
On Oct 13, 2010, at 4:17 PM, Erik Iverson wrote:
> Hello,
>
> There are probably many ways to do this, but I think
> it's easier if you use a data.frame as your object.
>
> The easy solution for the matrix you provide is escaping
> me at the moment.
Perhaps using sampling to derive an index?
A[ tapply(1:100, A[,3], sample, 1), ]
> A[tapply(1:100, A[,3], sample, 1), ]
[,1] [,2] [,3]
[1,] -1.9512142 0.9823905 1
[2,] 1.4983879 0.4961661 2
[3,] 0.7815468 0.3531835 3
[4,] -0.9210731 0.6508500 4
[5,] 0.2354838 0.8616220 5
--
David.
>
> One solution, using the plyr package:
>
>
> library(plyr)
> A <- data.frame(a = rnorm(100),b = runif(100), c = rep(c(1,2,3,4,5),
> 20))
> ddply(A, .(c), function(x) x[sample(1:nrow(x), 1), ])
>
> a b c
> 1 0.02995847 0.4763819 1
> 2 0.72035194 0.2948611 2
> 3 1.34963917 0.2057488 3
> 4 -1.99427160 0.1147923 4
> 5 -0.73612703 0.5889539 5
>
>
> Mauricio Romero wrote:
>> Hi,
>> I want to select a subsample from my data, choosing one record from
>> each
>> group. I know how to do this with a for.
>> For example: let's say I have the data:
>> A=cbind(rnorm(100),runif(100),(rep(c(1,2,3,4,5),20)))
>> Where the third column is the group variable. Then what I want is
>> to select
>> 5 observations. Each one taken randomly from each group.
>> INDEX =NULL
>> i=1
>> for(index_g in unique(A[,3])){
>> INDEX [i]=sample(which(A[,3]==index_g),1)
>> i=i+1
>> }
>> SEL=A[INDEX,]
>> Is there a way to do this without a "for"? In other words is there
>> a way to "vectorize" this?
>> Thank you,
>> Mauricio Romero Quantil S.A.S.
>> Bogotá, Colombia
>> www.quantil.com.co
>> "It is from the earth that we must find our substance; it is on the
>> earth
>> that we must find solutions to the problems that promise to destroy
>> all life
>> here"
>> ------------------------------------------------------------------------
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 107
Date: Wed, 13 Oct 2010 16:44:51 -0400
From: Alison Callahan <alison.callahan at gmail.com>
To: r-help at r-project.org
Subject: [R] adding a named column to a Matrix
Message-ID:
<AANLkTikF1d=sd+Nve7GM_NRN0_=9GWTtZQx82CSYd-ct at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hello all,
I am trying to use cbind to add a named empty column to a Matrix:
outputmatrix <- cbind(outputmatrix,kog_id = seq(0,0,0))
The problem I have is that kog_id is a variable that has a value e.g.
"KOG1234", but when I try to use this to name the added column, it
is named literally "kog_id" instead of "KOG1234".
How can I name a column by passing in a variable for the column name?
I am performing this action inside of a for loop, so I can't
necessarily know the position of the column in order to name it after
it is created.
Thanks,
Alison Callahan
-----------------------
PhD Candidate
Department of Biology
Carleton University
------------------------------
Message: 108
Date: Wed, 13 Oct 2010 16:50:58 -0400
From: Antonio Paredes <antonioparedes14 at gmail.com>
To: r-help at r-project.org
Subject: [R] Poisson Regression
Message-ID:
<AANLkTikXc5tvziGaxuV1GqM3CgNyPPpay-FQCC6uzQWE at mail.gmail.com>
Content-Type: text/plain
Hello everyone,
I wanted to ask if there is an R-package to fit the following Poisson
regression model
log(\lambda_{ijk}) = \phi_{i} + \alpha_{j} + \beta_{k}
i=1,\cdots,N (subjects)
j=0,1 (two levels)
k=0,1 (two levels)
treating the \phi_{i} as nuisance parameters.
Thank you very much
--
-Tony
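[Editorial note: no reply appears in this digest; the following is an editorial sketch, not a posted answer. With a modest number of subjects the \phi_{i} can simply enter glm() as a factor, leaving the j and k effects as the \alpha and \beta of the model. For many subjects, conditioning the \phi_{i} out -- e.g. the gnm package's "eliminate" mechanism -- is the usual scalable route; treat that pointer as an assumption to verify. The simulated data and coefficients below are illustrative only.]

```r
# Sketch: absorb the per-subject nuisance parameters phi_i as a
# factor in a plain Poisson glm(); "j" and "k" are then alpha, beta.
set.seed(42)
d <- expand.grid(id = factor(1:10), j = 0:1, k = 0:1)
d$lambda <- exp(0.5 + 0.3 * d$j - 0.2 * d$k)   # made-up true values
d$y <- rpois(nrow(d), d$lambda)
fit <- glm(y ~ id + j + k, family = poisson, data = d)
coef(fit)[c("j", "k")]   # the effects of interest; id terms are nuisance
```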
------------------------------
Message: 109
Date: Wed, 13 Oct 2010 14:54:39 -0600
From: Alisa Wade <alisaww at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] Matrix subscripting to wrap around from end to start
of row
Message-ID:
<AANLkTikn=KAnFicUcfYhD5GLz88N1hnsDCep932jAFd_ at mail.gmail.com>
Content-Type: text/plain
Thanks, David for the response.
Unfortunately, that only works for the case where I happen to only want the
last and first month "wrap".
But it doesn't work for any other case, e.g., say I want months start.month
= 4, end.month = 2.
Now
start=4; end=2; year.avg = apply(month.data[, c(start,end)], 1, mean)
would give me just the average of months 4 and 2, when I want the average of
months 4, 5, 1, 2.
I could do an if statement:
start = 4
end = 2
if (end < start ) {
start.script = paste(start, ":5", sep="")
end.script = paste("1:", 5-end, sep="")
final.script = paste("c(",start.script,",",end.script,")", sep="")}
out = apply(month.data[,final.script], 1, mean)
But, alas, final.script is now a string, and it can't be passed that way.
Is there either a way to tell the script that 4:2 does not mean 4,3,2, but
it means 4,5,1,2
or is there a way to pass final.script into the subscript in the apply
command
or am I missing something else?
Thanks again,
Alisa
On Wed, Oct 13, 2010 at 2:23 PM, David Winsemius
<dwinsemius at comcast.net>wrote:
>
> On Oct 13, 2010, at 4:10 PM, Alisa Wade wrote:
>
> Perhaps it is just that I don't even know the correct term to search for,
>> but I can find nothing that explains how to wrap around from the end to a
>> start of a row in a matrix.
>>
>> For example, you have a matrix of 2 years of data, where rows are years,
>> and
>> columns are months.
>> month.data = matrix(c(3,4,6,8,12,90,5,14,22, 8), nrow = 2, ncol=5)
>>
>> I would like to take the average of months 5:1 for each year (for row 1
>> =12.5). However, I am passing the start month (5) and the end month (1)
as
>> variables.
>>
>> I would like to do something like
>>
>> year.avg = apply(month.data[, start.month:end.month], MARGIN=1, mean)
>>
>> But that gives me the average of months 1:5. (for row 1 =9.6)
>>
>> I know I could use:
>> apply(month.data[, c(1,5)], 1, mean)
>>
>
> > start=5; end=1; year.avg = apply(month.data[, c(start,end)], 1, mean)
> > year.avg
> [1] 12.5 6.0
>
>
> but I don't know how to pass start.month, end.month into that format that
>> because paste or sprintf forces them to strings, which are not accepted in
>> a subscript.
>>
>> I have the feeling I am unaware of some obvious trick.
[[elided Yahoo spam]]
>>
>> *****************************************************
>> Alisa A. Wade
>> Postdoctoral Center Associate
>>
>
>
> David Winsemius, MD
> West Hartford, CT
>
>
--
*****************************************************
Alisa A. Wade
Postdoctoral Center Associate
National Center for Ecological Analysis and Synthesis
wade at nceas.ucsb.edu
(406) 529-9722
home email: alisaww at gmail.com
------------------------------
Message: 110
Date: Wed, 13 Oct 2010 17:58:21 -0300
From: Henrique Dallazuanna <wwwhsd at gmail.com>
To: Alison Callahan <alison.callahan at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] adding a named column to a Matrix
Message-ID:
<AANLkTikrYUCnBjkyKdfvU5O1a6g7oh0i22c7kZB2_h02 at mail.gmail.com>
Content-Type: text/plain
Try this:
`colnames<-`(cbind(m, kog_id = 0), c(colnames(m), kog_id))
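[Editorial note: a plainer two-step version of the same idea, added as a sketch -- bind an unnamed zero column, then set its name from the variable afterwards. `m` and `kog_id` below are illustrative stand-ins for the poster's objects.]

```r
# Sketch: cbind an unnamed column, then assign its name from the
# kog_id variable (so the value "KOG1234" is used, not the name).
m <- matrix(1:4, nrow = 2, dimnames = list(NULL, c("a", "b")))
kog_id <- "KOG1234"
m <- cbind(m, 0)
colnames(m)[ncol(m)] <- kog_id
colnames(m)   # "a" "b" "KOG1234"
```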
On Wed, Oct 13, 2010 at 5:44 PM, Alison Callahan
<alison.callahan at gmail.com>wrote:
> Hello all,
>
> I am trying to use cbind to add a named empty column to a Matrix:
>
> outputmatrix <- cbind(outputmatrix,kog_id = seq(0,0,0))
>
> The problem I have is that kog_id is a variable that has a value e.g.
> "KOG1234", but when I try to use this to name the added column, it
> is named literally "kog_id" instead of "KOG1234".
>
> How can I name a column by passing in a variable for the column name?
> I am performing this action inside of a for loop, so I can't
> necessarily know the position of the column in order to name it after
> it is created.
>
> Thanks,
>
> Alison Callahan
> -----------------------
> PhD Candidate
> Department of Biology
> Carleton University
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Henrique Dallazuanna
Curitiba-Parana-Brasil
25° 25' 40" S 49° 16' 22" O
------------------------------
Message: 111
Date: Wed, 13 Oct 2010 17:01:55 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Alison Callahan <alison.callahan at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] adding a named column to a Matrix
Message-ID: <CF864351-93C2-492A-B861-0A1216103A4C at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Oct 13, 2010, at 4:44 PM, Alison Callahan wrote:
> Hello all,
>
> I am trying to use cbind to add a named empty column to a Matrix:
>
> outputmatrix <- cbind(outputmatrix,kog_id = seq(0,0,0))
Not sure off the top of my head what the right way might be, but this
should work:
colnames(outputmatrix)[ which( colnames(outputmatrix) == "kog_id")] <- kog_id
>
> The problem I have is that kog_id is a variable that has a value e.g.
> "KOG1234", but when I try to use this to name the added column, it
> is named literally "kog_id" instead of "KOG1234".
>
> How can I name a column by passing in a variable for the column name?
> I am performing this action inside of a for loop, so I can't
> necessarily know the position of the column in order to name it after
> it is created.
>
> Thanks,
>
> Alison Callahan
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 112
Date: Wed, 13 Oct 2010 17:07:50 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Alisa Wade <alisaww at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Matrix subscripting to wrap around from end to start
of row
Message-ID: <AE5071BA-A566-4BA8-9BCF-5750789602AD at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Oct 13, 2010, at 4:54 PM, Alisa Wade wrote:
> Thanks, David for the response.
>
> Unfortunately, that only works for the case where I happen to only
> want the last and first month "wrap".
> But it doesn't work for any other case, e.g., say I want months
> start.month = 4, end.month = 2.
> Now
>
> would give me just the average of months 4 and 2, when I want the
> average of months 4, 5, 1, 2.
>
Then just put in the true starts and ends of the loop/series/ring/
whatever:
start=4; end=2; year.avg = apply(month.data[, c(start:5, 1:end)], 1,
mean)
If you don't know the length of the full cycle, there has to be some way
to derive length(series-object):
start=4; end=2; year.avg = apply(month.data[,
c(start:length(series-object), 1:end)], 1, mean)
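[Editorial note: the wrap-around index can also be built once with modular arithmetic -- a sketch; `wrap_cols` is a made-up helper, not from the thread. It handles both the wrapping case (5:1) and the plain case (1:5) with one rule.]

```r
# Sketch: build a wrap-around column index with modular arithmetic.
# Counts (end - start) mod n steps forward from start, wrapping at n.
wrap_cols <- function(start, end, n) {
  (seq(start - 1, by = 1, length.out = (end - start) %% n + 1) %% n) + 1
}
month.data <- matrix(c(3, 4, 6, 8, 12, 90, 5, 14, 22, 8), nrow = 2, ncol = 5)
wrap_cols(5, 1, 5)   # 5 1
wrap_cols(4, 2, 5)   # 4 5 1 2
apply(month.data[, wrap_cols(5, 1, 5)], 1, mean)   # 12.5  6.0
```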
--
David.
> I could do an if statement:
> start = 4
> end = 2
> if (end < start ) {
> start.script = paste(start, ":5", sep="")
> end.script = paste("1:", 5-end, sep="")
> final.script = paste("c(",start.script,",",end.script,")", sep="")}
> out = apply(month.data[,final.script], 1, mean)
>
> But, alas, final.script is now a string, and it can't be passed that
> way.
>
> Is there either a way to tell the script that 4:2 does not mean
> 4,3,2, but it means 4,5,1,2
> or is there a way to pass final.script into the subscript in the
> apply command
> or am I missing something else?
>
> Thanks again,
> Alisa
>
>
>
> On Wed, Oct 13, 2010 at 2:23 PM, David Winsemius <dwinsemius at comcast.net
> > wrote:
>
> On Oct 13, 2010, at 4:10 PM, Alisa Wade wrote:
>
> Perhaps it is just that I don't even know the correct term to search
> for,
> but I can find nothing that explains how to wrap around from the end
> to a
> start of a row in a matrix.
>
> For example, you have a matrix of 2 years of data, where rows are
> years, and
> columns are months.
> month.data = matrix(c(3,4,6,8,12,90,5,14,22, 8), nrow = 2, ncol=5)
>
> I would like to take the average of months 5:1 for each year (for
> row 1
> =12.5). However, I am passing the start month (5) and the end month
> (1) as
> variables.
>
> I would like to do something like
>
> year.avg = apply(month.data[, start.month:end.month], MARGIN=1, mean)
>
> But that gives me the average of months 1:5. (for row 1 =9.6)
>
> I know I could use:
> apply(month.data[, c(1,5)], 1, mean)
>
> > start=5; end=1; year.avg = apply(month.data[, c(start,end)], 1,
> mean)
> > year.avg
> [1] 12.5 6.0
>
>
> but I don't know how to pass start.month, end.month into that format
> that
> because paste or sprintf forces them to strings, which are not
> accepted in a
> subscript.
>
> I have the feeling I am unaware of some obvious trick.
[[elided Yahoo spam]]
>
> *****************************************************
> Alisa A. Wade
> Postdoctoral Center Associate
>
>
> David Winsemius, MD
> West Hartford, CT
>
>
>
>
> --
> *****************************************************
> Alisa A. Wade
> Postdoctoral Center Associate
> National Center for Ecological Analysis and Synthesis
> wade at nceas.ucsb.edu
> (406) 529-9722
> home email: alisaww at gmail.com
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 113
Date: Wed, 13 Oct 2010 22:20:52 +0100
From: Julia Lira <julia.lira at hotmail.co.uk>
To: <r-help at r-project.org>
Subject: [R] Loop in columns by group
Message-ID: <SNT126-W54140EB086EAFA5F78FC53D1550 at phx.gbl>
Content-Type: text/plain
Dear all,
I need to do a loop as following:
#Consider a matrix:
M <- matrix(1, nrow=10, ncol=20)
#Matrices to store the looping results
M1 <- matrix(0, nrow=10, ncol=400)
h <- c(1:20/1000)
#loop
for (j in h){
M1 <- M/(2*j)
}
But this means that the first 20 columns of matrix M1 (that is, columns
1:20) should show the results of M/(2*0.001). Then, the following 20 columns
of M1 (that is, columns 21:40) should show the results of M/(2*0.002) and so
on until columns 381:400 that would show the results of M/(2*0.02).
What should I include in the loop above in order to have this result?
Thanks a lot!
Julia
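[Editorial note: no reply appears in this digest; the following is an editorial sketch of one way to fill the 20-column blocks as described, not a posted answer. The loop as written overwrites all of M1 on every pass; indexing the block of columns for each j fixes that.]

```r
# Sketch: assign each M/(2*h[j]) into its own 20-column block of M1,
# so columns 1:20 use h[1], columns 21:40 use h[2], and so on.
M  <- matrix(1, nrow = 10, ncol = 20)
M1 <- matrix(0, nrow = 10, ncol = 400)
h  <- (1:20) / 1000
for (j in seq_along(h)) {
  cols <- ((j - 1) * 20 + 1):(j * 20)   # 1:20, 21:40, ..., 381:400
  M1[, cols] <- M / (2 * h[j])
}
M1[1, 1]     # 1 / (2 * 0.001) = 500
M1[1, 400]   # 1 / (2 * 0.020) = 25
```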
------------------------------
Message: 114
Date: Wed, 13 Oct 2010 14:42:28 -0700 (PDT)
To: r-help at r-project.org
Subject: [R] type II & III test for mixed model
Message-ID: <620485.27693.qm at web56304.mail.re3.yahoo.com>
Content-Type: text/plain
Hi, is there a package for getting type II or type III tests on mixed models
(lme or lmer), just like what Anova() in car package does for aov, lm, etc.?
Thanks
John
------------------------------
Message: 115
Date: Thu, 14 Oct 2010 10:00:02 +1100
From: sachinthaka.abeywardana at allianz.com.au
To: r-help at r-project.org
Subject: [R] drilling down data on charts
Message-ID:
<OFE9208170.2EAD3C45-ONCA2577BB.007DCBCC-CA2577BB.007E588C at allianz.com.au>
Content-Type: text/plain; charset="us-ascii"
Hey all,
Suppose a=b^2 for starters. I want to be able to create a graph that
displays a initially and if i was to click on 'a' to show 'b' on the chart
itself. Does anyone know if this is possible in R?
Also as an extension (not necessary as yet) to output the above into a
'html' file.
Thanks,
Sachin
--- Please consider the environment before printing this email ---
Allianz - General Insurance Company of the Year 2009+
+ Australia and New Zealand Insurance Industry Awards
This email and any attachments has been sent by Allianz Australia Insurance
Limited (ABN 15 000 122 850) and is intended solely for the addressee. It is
confidential, may contain personal information and may be subject to legal
professional privilege. Unauthorised use is strictly prohibited and may be
unlawful. If you have received this by mistake, confidentiality and any
legal privilege are not waived or lost and we ask that you contact the
sender and delete and destroy this and any other copies. In relation to any
legal use you may make of the contents of this email, you must ensure that
you comply with the Privacy Act (Cth) 1988 and you should note that the
contents may be subject to copyright and therefore may not be reproduced,
communicated or adapted without the express consent of the owner of the
copyright.
Allianz will not be liable in connection with any data corruption,
interruption, delay, computer virus or unauthorised access or amendment to
the contents of this email. If this email is a commercial electronic message
and you would prefer not to receive further commercial electronic messages
from Allianz, please forward a copy of this email to
unsubscribe at allianz.com.au with the word unsubscribe in the subject header.
------------------------------
Message: 116
Date: Thu, 14 Oct 2010 10:07:00 +1100
From: sachinthaka.abeywardana at allianz.com.au
To: sachinthaka.abeywardana at allianz.com.au
Cc: r-help at r-project.org
Subject: Re: [R] drilling down data on charts
Message-ID:
<OF5B493BDE.1A69E109-ONCA2577BB.007EE21B-CA2577BB.007EFBB4 at allianz.com.au>
Content-Type: text/plain; charset="us-ascii"
Just to clarify I meant opening it in a new window (and perhaps closing old
frame?).
Thanks again,
Sachin
From: Sachinthaka Abeywardana/HO/Allianz-AU
To: r-help at r-project.org
Date: 14/10/2010 10:00 AM
Subject: drilling down data on charts
Hey all,
Suppose a=b^2 for starters. I want to be able to create a graph that
displays a initially and if i was to click on 'a' to show 'b' on the chart
itself. Does anyone know if this is possible in R?
Also as an extension (not necessary as yet) to output the above into a
'html' file.
Thanks,
Sachin
------------------------------
Message: 117
Date: Wed, 13 Oct 2010 19:31:52 -0400
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: Chris Howden <chris at trickysolutions.com.au>
Cc: r-help at r-project.org
Subject: Re: [R] merging and working with BIG data sets. Is sqldf the
best way??
Message-ID:
<AANLkTi=gE-AYOwGjfQOQPvwvN+SLAr+4AJQKz9dNxP01 at mail.gmail.com>
Content-Type: text/plain; charset=windows-1252
On Tue, Oct 12, 2010 at 2:39 AM, Chris Howden
<chris at trickysolutions.com.au> wrote:
> I'm working with some very big datasets (each dataset has 11 million rows
> and 2 columns). My first step is to merge all my individual data sets
> together (I have about 20)
>
> I'm using the following command from sqldf
>
> data1 <- sqldf("select A.*, B.* from A inner join B
> using(ID)")
>
> But it's taking A VERY VERY LONG TIME to merge just 2 of the datasets (well
> over 2 hours, possibly longer since it's still going).
You need to add indexes to your tables. See example 4i on the sqldf home
page
http://sqldf.googlecode.com
This can result in huge speedups for large tables.
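For the archives, the pattern from example 4i can be sketched like this on toy data (the table and column names A, B and ID come from Chris's post; the data here are made up, and the multi-statement form of sqldf is one way to run the index creation and the join against the same SQLite database):

```r
library(sqldf)  # also pulls in RSQLite

# Toy stand-ins for the two 11-million-row tables
A <- data.frame(ID = 1:1000, x = rnorm(1000))
B <- data.frame(ID = 1:1000, y = rnorm(1000))

# All statements in the vector run against one SQLite database; the
# indexed copies are what the final join actually scans (cf. example 4i)
data1 <- sqldf(c(
  "create index ai on A(ID)",
  "create index bi on B(ID)",
  "select A.*, B.y from A inner join B using(ID)"
))
head(data1)
```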
--
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com
------------------------------
Message: 118
Date: Wed, 13 Oct 2010 20:21:41 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Antonio Paredes <antonioparedes14 at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Poisson Regression
Message-ID: <E0EB21E3-4DD8-4D1E-91E7-4B2E04FBAE8E at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Oct 13, 2010, at 4:50 PM, Antonio Paredes wrote:
> Hello everyone,
>
> I wanted to ask if there is an R-package to fit the following Poisson
> regression model
>
> log(\lambda_{ijk}) = \phi_{i} + \alpha_{j} + \beta_{k}
> i=1,\cdots,N (subjects)
> j=0,1 (two levels)
> k=0,1 (two levels)
>
> treating the \phi_{i} as nuisance parameters.
If I am reading this piece correctly there should be no difference
between a conditional treatment of phi_i in that model and results
from the unconditional model one would get from fitting with
glm(lambda ~ phi + alpha + beta ,family="poisson").
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.9679&rep=rep1&type=pdf
(But I am always looking for corrections to my errors.)
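A quick simulated check of that unconditional fit (the data, effect sizes, and factor names a and b here are made up for illustration):

```r
# Hypothetical data: N subjects, two 2-level factors, Poisson counts
set.seed(42)
N <- 50
d <- expand.grid(subject = factor(1:N), a = factor(0:1), b = factor(0:1))
eta <- 1 + rnorm(N)[d$subject] + 0.5 * (d$a == "1") - 0.3 * (d$b == "1")
d$y <- rpois(nrow(d), exp(eta))

# Unconditional fit: the subject effects enter as ordinary fixed factors
fit <- glm(y ~ subject + a + b, family = poisson, data = d)
coef(fit)[c("a1", "b1")]  # the two parameters of interest
```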
--
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 119
Date: Thu, 14 Oct 2010 10:38:51 +1000
From: Andrew Halford <andrew.halford at gmail.com>
To: Phil Spector <spector at stat.berkeley.edu>
Cc: r-help at r-project.org
Subject: Re: [R] repeating an analysis
Message-ID:
<AANLkTinnymwwq3jfxPdt4xg=CVfsRv8VC6x9_4JLR0Gx at mail.gmail.com>
Content-Type: text/plain
thanks Phil, I have your solution and another which I will attempt in the
next day or so and will post results to the list then.
cheers
andy
On Wed, Oct 13, 2010 at 10:30 AM, Phil Spector
<spector at stat.berkeley.edu>wrote:
> Andrew -
> I think
>
> answer = replicate(50,{fit1 <- rpart(CHAB~.,data=chabun, method="anova",
>
> control=rpart.control(minsplit=10,
> cp=0.01, xval=10));
> x = printcp(fit1);
> x[which.min(x[,'xerror']),'nsplit']})
>
> will put the numbers you want into answer, but there was no reproducible
> example to test it on. Unfortunately, I don't know of any way to suppress
> the printing from printcp().
>
> - Phil Spector
> Statistical Computing Facility
> Department of Statistics
> UC Berkeley
> spector at stat.berkeley.edu
>
>
>
>
>
> On Wed, 13 Oct 2010, Andrew Halford wrote:
>
> Hi All,
>>
>> I have to say upfront that I am a complete neophyte when it comes to
>> programming. Nevertheless I enjoy the challenge of using R because of its
>> incredible statistical resources.
>>
>> My problem is this .........I am running a regression tree analysis using
>> "rpart" and I need to run the calculation repeatedly (say n=50 times) to
>> obtain a distribution of results from which I will pick the median one to
>> represent the most parsimonious tree size. Unfortunately rpart does not
>> contain this ability so it will have to be coded for.
>>
>> Could anyone help me with this? I have provided the code (and relevant
>> output) for the analysis I am running. I need to run it n=50 times and
>> from
>> each output pick the appropriate tree size and post it to a datafile
where
>> I
>> can then look at the frequency distribution of tree sizes.
>>
>> Here is the code and output from a single run
>>
>> fit1 <- rpart(CHAB~.,data=chabun, method="anova",
>>>
>> control=rpart.control(minsplit=10, cp=0.01, xval=10))
>>
>>> printcp(fit1)
>>>
>>
>> Regression tree:
>> rpart(formula = CHAB ~ ., data = chabun, method = "anova", control =
>> rpart.control(minsplit = 10,
>> cp = 0.01, xval = 10))
>> Variables actually used in tree construction:
>> [1] EXP LAT POC RUG
>> Root node error: 35904/33 = 1088
>> n= 33
>> CP nsplit rel error xerror xstd
>> 1 0.539806 0 1.00000 1.0337 0.41238
>> 2 0.050516 1 0.46019 1.2149 0.38787
>> 3 0.016788 2 0.40968 1.2719 0.41280
>> 4 0.010221 3 0.39289 1.1852 0.38300
>> 5 0.010000 4 0.38267 1.1740 0.38333
>>
>> Each time I re-run the model I will get a slightly different output. I
>> want
>> to extract the nsplit number corresponding to the lowest xerror for each
>> run
>> of the model (in this case it is for nsplit = 0) over 50 runs and then
>> look
>> at the distribution of nsplits after 50 runs.
>>
>> Any help appreciated.
>>
>>
>> Andy
>>
>>
>> --
>> Andrew Halford
>> Associate Researcher
>> Marine Laboratory
>> University of Guam
>> Ph: +1 671 734 2948
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
--
Andrew Halford Ph.D
Associate Researcher Scientist
Marine Laboratory
University of Guam
Ph: +1 671 734 2948
[[alternative HTML version deleted]]
------------------------------
Message: 120
Date: Thu, 14 Oct 2010 11:44:15 +1100
From: <Bill.Venables at csiro.au>
To: <dwinsemius at comcast.net>, <antonioparedes14 at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Poisson Regression
Message-ID:
<1BDAE2969943D540934EE8B4EF68F95FB27A44F5E5 at EXNSW-MBX03.nexus.csiro.au>
Content-Type: text/plain; charset="us-ascii"
One possible way to treat parameters as "nuisance parameters" is to model
them as random. This allows them to carry a reduced parametric load.
There are many packages with functions to fit GLMMs. One you may wish to
look at is lme4, which has the glmer fitting function
library(lme4)
fm <- glmer(Y ~ A + B + (1|Subject), family = poisson, data = pData)
for example, may be a useful alternative to a fully fixed effects approach.
W.
-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On
Behalf Of David Winsemius
Sent: Thursday, 14 October 2010 10:22 AM
To: Antonio Paredes
Cc: r-help at r-project.org
Subject: Re: [R] Poisson Regression
On Oct 13, 2010, at 4:50 PM, Antonio Paredes wrote:
> Hello everyone,
>
> I wanted to ask if there is an R-package to fit the following Poisson
> regression model
>
> log(\lambda_{ijk}) = \phi_{i} + \alpha_{j} + \beta_{k}
> i=1,\cdots,N (subjects)
> j=0,1 (two levels)
> k=0,1 (two levels)
>
> treating the \phi_{i} as nuisance parameters.
If I am reading this piece correctly there should be no difference
between a conditional treatment of phi_i in that model and results
from the unconditional model one would get from fitting with
glm(lambda ~ phi + alpha + beta ,family="poisson").
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.9679&rep=rep1&type=pdf
(But I am always looking for corrections to my errors.)
--
David Winsemius, MD
West Hartford, CT
______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 121
Date: Wed, 13 Oct 2010 17:51:08 -0700
From: "Charles C. Berry" <cberry at tajo.ucsd.edu>
To: David Winsemius <dwinsemius at comcast.net>
Cc: r-help at r-project.org
Subject: Re: [R] Poisson Regression
Message-ID: <Pine.LNX.4.64.1010131744060.17583 at tajo.ucsd.edu>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
On Wed, 13 Oct 2010, David Winsemius wrote:
>
> On Oct 13, 2010, at 4:50 PM, Antonio Paredes wrote:
>
>> Hello everyone,
>>
>> I wanted to ask if there is an R-package to fit the following Poisson
>> regression model
>>
>> log(\lambda_{ijk}) = \phi_{i} + \alpha_{j} + \beta_{k}
>> i=1,\cdots,N (subjects)
>> j=0,1 (two levels)
>> k=0,1 (two levels)
>>
>> treating the \phi_{i} as nuisance parameters.
>
> If I am reading this piece correctly there should be no difference between a
> conditional treatment of phi_i in that model and results from the
> unconditional model one would get from fitting with
>
> glm(lambda ~ phi + alpha + beta ,family="poisson").
Right.
But if N is large, the model.matrix will be huge and there may be problems
with memory and elapsed time.
loglin() and loglm() will fit the same model without need for a
model.matrix (modulo having enough data to actually fit that model), and
large values of N are no big deal.
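For instance, the same main-effects model can be fit from a subject x A x B contingency table without ever forming the N-row model matrix (a sketch on a simulated table; the array, its counts, and its dimnames are made up):

```r
library(MASS)  # for loglm()

# Hypothetical N x 2 x 2 table of counts
set.seed(1)
N <- 200
tab <- as.table(array(rpois(N * 4, 5), dim = c(N, 2, 2),
                      dimnames = list(subject = 1:N, A = 0:1, B = 0:1)))

# Main effects only: log(lambda) = phi_i + alpha_j + beta_k
fit <- loglm(~ subject + A + B, data = tab)
fit
```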
HTH,
Chuck
>
>
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.9679&rep=rep1&type=pdf
>
> (But I am always looking for corrections to my errors.)
>
> --
> David Winsemius, MD
> West Hartford, CT
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
Charles C. Berry (858) 534-2098
Dept of Family/Preventive Medicine
E mailto:cberry at tajo.ucsd.edu UC San Diego
http://famprevmed.ucsd.edu/faculty/cberry/ La Jolla, San Diego 92093-0901
------------------------------
Message: 122
Date: Thu, 14 Oct 2010 12:15:37 +1100
From: Michael Bedward <michael.bedward at gmail.com>
To: Juan Pablo Fededa <jpfededa at gmail.com>, Rhelp
<r-help at r-project.org>
Subject: Re: [R] compare histograms
Message-ID:
<AANLkTi=oLNVM2Thy6w2YUdk6uxS5FB0+8ph6mkxPKRpo at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi Juan,
Yes, you can use EMD to quantify the difference between any pair of
histograms regardless of their shape. The only constraint, at least
the way that I've done it previously, is to have compatible bins. The
original application of EMD was to compare images based on colour
histograms which could have all sorts of shapes.
I looked at the package that Dennis alerted me to on RForge but
unfortunately it seems to be inactive and the nightly builds are
broken. I've downloaded the source code and will have a look at it
sometime in the next few days.
Meanwhile, let me know if you want a copy of my own code. It uses the
lpSolve package.
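Michael's own code isn't shown here, but the idea can be sketched as follows (a hypothetical implementation, not his actual code): EMD between two histograms with compatible bins is a transportation problem, with the bin-index gap as the default ground distance.

```r
library(lpSolve)

# Earth Mover's Distance between two histograms with compatible bins,
# posed as a transportation LP (a sketch)
emd <- function(h1, h2, cost = NULL) {
  h1 <- h1 / sum(h1)                      # normalise to equal total mass
  h2 <- h2 / sum(h2)
  n <- length(h1)
  if (is.null(cost))                      # default ground distance: bin gap
    cost <- abs(outer(seq_len(n), seq_len(n), "-"))
  sol <- lp.transport(cost, "min",
                      row.signs = rep("=", n), row.rhs = h1,
                      col.signs = rep("=", n), col.rhs = h2,
                      integers = NULL)    # allow continuous flows
  sol$objval
}

emd(c(1, 0, 0), c(0, 0, 1))  # all the mass moves two bins
```

Because the metric only needs compatible bins, it applies to bimodal histograms just as well as unimodal ones.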
Michael
On 14 October 2010 08:46, Juan Pablo Fededa <jpfededa at gmail.com> wrote:
> Hi Michael,
>
>
> I have the same challenge: can you use this earth mover's distance to
> compare bimodal distributions?
> Thanks & cheers,
>
>
> Juan
>
>
> On Wed, Oct 13, 2010 at 4:39 AM, Michael Bedward
<michael.bedward at gmail.com>
> wrote:
>>
>> Just to add to Greg's comments: I've previously used 'Earth Movers
>> Distance' to compare histograms. Note, this is a distance metric
>> rather than a parametric statistic (ie. not a test) but it at least
>> provides a consistent way of quantifying similarity.
>>
>> It's relatively easy to implement the metric in R (formulating it as a
>> linear programming problem). Happy to dig out the code if needed.
>>
>> Michael
>>
>> On 13 October 2010 02:44, Greg Snow <Greg.Snow at imail.org> wrote:
>> > That depends a lot on what you mean by the histograms being equivalent.
>> >
>> > You could just plot them and compare visually. It may be easier to
>> > compare them if you plot density estimates rather than histograms. Even
>> > better would be to do a qqplot comparing the 2 sets of data rather than
>> > the histograms.
>> >
>> > If you want a formal test then the ks.test function can compare 2
>> > datasets. Note that the null hypothesis is that they come from the same
>> > distribution, a significant result means that they are likely different
>> > (but the difference may not be of practical importance), but a
>> > non-significant test could mean they are the same, or that you just do
>> > not have enough power to find the difference (or the difference is hard
>> > for the ks test to see). You could also use a chi-squared test to
>> > compare this way.
>> >
>> > Another approach would be to use the vis.test function from the
>> > TeachingDemos package. Write a small function that will either plot
>> > your 2 histograms (density plots), or permute the data between the 2
>> > groups and plot the equivalent histograms. The vis.test function then
>> > presents you with an array of plots, one of which is the original data
>> > and the rest based on permutations. If there is a clear meaningful
>> > difference in the groups you will be able to spot the plot that does
>> > not match the rest, otherwise it will just be guessing (might be best
>> > to have a fresh set of eyes that have not seen the data before see if
>> > they can pick out the real plot).
>> >
>> > --
>> > Gregory (Greg) L. Snow Ph.D.
>> > Statistical Data Center
>> > Intermountain Healthcare
>> > greg.snow at imail.org
>> > 801.408.8111
>> >
>> >
>> >> -----Original Message-----
>> >> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
>> >> project.org] On Behalf Of solafah bh
>> >> Sent: Monday, October 11, 2010 4:02 PM
>> >> To: R help mailing list
>> >> Subject: [R] compare histograms
>> >>
>> >> Hello
>> >> How to compare? two statistical histograms? How i can know if these
>> >> histograms are equivalent or not??
>> >>
>> >> Regards
>> >>
>> >>
>> >>
>> >> [[alternative HTML version deleted]]
>> >
>> > ______________________________________________
>> > R-help at r-project.org mailing list
>> > https://stat.ethz.ch/mailman/listinfo/r-help
>> > PLEASE do read the posting guide
>> > http://www.R-project.org/posting-guide.html
>> > and provide commented, minimal, self-contained, reproducible code.
>> >
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>
------------------------------
Message: 123
Date: Wed, 13 Oct 2010 17:12:58 -0700
From: Dennis Murphy <djmuser at gmail.com>
To: Alisa Wade <alisaww at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Matrix subscripting to wrap around from end to start
of row
Message-ID:
<AANLkTinAhK0-tsfoNEPwGku-fPhp7mR5_9kMgxwpGUCU at mail.gmail.com>
Content-Type: text/plain
Hi:
This isn't particularly elegant, but I think it works:
# The function to be applied:
f <- function(x, idx) {
n <- length(x)
if(idx[1] < idx[2]) {idx <- seq(idx[1], idx[2])
} else { idx <- c(seq(idx[1], n), seq(1, idx[2])) }
mean(x[idx])
}
# tests
> month.data = matrix(c(3,4,6,8,12,90,5,14,22, 8), nrow = 2, ncol=5)
> month.data
[,1] [,2] [,3] [,4] [,5]
[1,] 3 6 12 5 22
[2,] 4 8 90 14 8
> apply(month.data, 1, f, idx = c(1, 3))
[1] 7 34
> apply(month.data, 1, f, idx = c(3, 1))
[1] 10.5 29.0
> apply(month.data, 1, f, idx = c(5, 2))
[1] 10.333333 6.666667
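The same wrap-around indexing can also be written with modular arithmetic (an alternative sketch, reproducing the test results on the same matrix):

```r
# Wrap-around mean via modular arithmetic
f2 <- function(x, idx) {
  n <- length(x)
  k <- (idx[2] - idx[1]) %% n            # number of steps, wrapping past n
  mean(x[(idx[1] - 1 + 0:k) %% n + 1])   # 1-based indices, wrapped
}

month.data <- matrix(c(3, 4, 6, 8, 12, 90, 5, 14, 22, 8), nrow = 2, ncol = 5)
apply(month.data, 1, f2, idx = c(5, 2))
```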
HTH,
Dennis
On Wed, Oct 13, 2010 at 1:10 PM, Alisa Wade <alisaww at gmail.com> wrote:
> Perhaps it is just that I don't even know the correct term to search for,
> but I can find nothing that explains how to wrap around from the end to a
> start of a row in a matrix.
>
> For example, you have a matrix of 2 years of data, where rows are years,
> and
> columns are months.
> month.data = matrix(c(3,4,6,8,12,90,5,14,22, 8), nrow = 2, ncol=5)
>
> I would like to take the average of months 5:1 for each year (for row 1
> =12.5). However, I am passing the start month (5) and the end month (1) as
> variables.
>
> I would like to do something like
>
> year.avg = apply(month.data[, start.month:end.month], MARGIN=1, mean)
>
> But that gives me the average of months 1:5. (for row 1 =9.6)
>
> I know I could use:
> apply(month.data[, c(1,5)], 1, mean)
> but I don't know how to pass start.month, end.month into that format that
> because paste or sprintf forces them to strings, which are not accepted in
> a
> subscript.
>
> I have the feeling I am unaware of some obvious trick.
[[elided Yahoo spam]]
>
> *****************************************************
> Alisa A. Wade
> Postdoctoral Center Associate
> National Center for Ecological Analysis and Synthesis
> wade at nceas.ucsb.edu
> (406) 529-9722
> home email: alisaww at gmail.com
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 124
Date: Wed, 13 Oct 2010 20:39:39 -0700 (PDT)
From: Raji <raji.sankaran at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] nnet help
Message-ID: <1287027579228-2994756.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi R-helpers , can you please give me more insights on the other data mining
and predictive techniques that the nnet package can be used for?
--
View this message in context:
http://r.789695.n4.nabble.com/nnet-help-tp2993609p2994756.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 125
Date: Wed, 13 Oct 2010 23:49:59 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Julia Lira <julia.lira at hotmail.co.uk>
Cc: r-help at r-project.org
Subject: Re: [R] Loop in columns by group
Message-ID: <68C24CBB-91B8-454A-AF3A-52B7C29ED927 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Oct 13, 2010, at 5:20 PM, Julia Lira wrote:
>
> Dear all,
>
> I need to do a loop as following:
> #Consider a matrix:
>
> M <- matrix(1, nrow=10, ncol=20)
> #Matrices to store the looping results
> M1 <- matrix(0, nrow=10, ncol=400)
> h <- c(1:20/1000)
> #loop
# (I've seen more informative comments)
> for (j in h){
> M1 <- M/(2*j)
>
> }
>
> But this means that the first 20 columns of matrix M1 (that is,
> columns 1:20) should show the results of M/(2*0.001).
Why would you think that?
> Then, the following 20 columns of M1 (that is, columns 21:40) should
> show the results of M/(2*0.002) and so on until columns 381:400 that
> would show the results of M/(2*0.02).
Dividing a matrix by a scalar produces a matrix of the _same_
dimensions. You overwrote the M1 matrix 20 times and ended up with the
results of only the last iteration.
>
> What should I include in the loop above in order to have this result?
Use an indexing strategy that limits the assignment to the particular
columns targeted.
Perhaps (although I forgot the factor of 2):
for (j in 1:20) {
M1[ ,(20*j - 19):(20*j)] <- rep(M[ ,j]/h[j], 20) }
> for (j in 1:20) {
+ M1[ ,(20*j -19):(20*j)] <- rep(M[,j]/h[j], 20) }
> str(M1)
num [1:10, 1:400] 1000 1000 1000 1000 1000 1000 1000 1000 1000
1000 ...
> M1[1,21]
[1] 500
> M1[1,41]
[1] 333.3333
--
David
>
> Thanks a lot!
>
> Julia
>
> [[alternative HTML version deleted]]
------------------------------
Message: 126
Date: Thu, 14 Oct 2010 09:22:37 +0530
From: "Santosh Srinivas" <santosh.srinivas at gmail.com>
To: "'r-help'" <r-help at r-project.org>
Subject: [R] Basic data question
Message-ID: <4cb67e8c.2a978e0a.7797.3409 at mx.google.com>
Content-Type: text/plain; charset="us-ascii"
I have a question about the output given below after running a few lines of
[[elided Yahoo spam]]
MF_Data <- read.csv("MF_Data_F.txt", header = F, sep="|")
temp <- head(MF_Data) #Get the sample Data
temp1 <- subset(temp, select= c(V1,V4,V6)) #where V1, V4, V6 are the col
names .. to Get the relevant data
names(temp1) <- c('Ticker', 'Price','Date') #Adjusted column names
Now as expected, I get:
> temp1
Ticker Price Date
1 106270 10.3287 01-Apr-2008
2 106269 10.3287 01-Apr-2008
3 102767 12.6832 01-Apr-2008
4 102766 10.5396 01-Apr-2008
5 102855 9.7833 01-Apr-2008
6 102856 12.1485 01-Apr-2008
BUT, for the below:
temp1$Price
[1] 10.3287 10.3287 12.6832 10.5396 9.7833 12.1485
439500 Levels: -101.2358 -102.622 -2171.1276 -6796.4926 -969.5193 ...
Repurchase Price
What is this line? "439500 Levels: -101.2358 -102.622 -2171.1276 -6796.4926
-969.5193 ... Repurchase Price"??
Many thanks for the help.
Santosh
------------------------------
Message: 127
Date: Thu, 14 Oct 2010 00:00:26 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: "Santosh Srinivas" <santosh.srinivas at gmail.com>
Cc: 'r-help' <r-help at r-project.org>
Subject: Re: [R] Basic data question
Message-ID: <F9041835-7680-4718-8F88-547323378415 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Oct 13, 2010, at 11:52 PM, Santosh Srinivas wrote:
> I have a question about the output given below after running a few
> lines of
[[elided Yahoo spam]]
>
> MF_Data <- read.csv("MF_Data_F.txt", header = F, sep="|")
> temp <- head(MF_Data) #Get the sample Data
> temp1 <- subset(temp, select= c(V1,V4,V6)) #where V1, V4, V6 are the
> col
> names .. to Get the relevant data
> names(temp1) <- c('Ticker', 'Price','Date') #Adjusted column names
>
> Now as expected, I get:
>> temp1
> Ticker Price Date
> 1 106270 10.3287 01-Apr-2008
> 2 106269 10.3287 01-Apr-2008
> 3 102767 12.6832 01-Apr-2008
> 4 102766 10.5396 01-Apr-2008
> 5 102855 9.7833 01-Apr-2008
> 6 102856 12.1485 01-Apr-2008
>
> BUT, for the below:
> temp1$Price
> [1] 10.3287 10.3287 12.6832 10.5396 9.7833 12.1485
> 439500 Levels: -101.2358 -102.622 -2171.1276 -6796.4926 -969.5193 ...
> Repurchase Price
>
> What is this line? "439500 Levels: -101.2358 -102.622 -2171.1276
> -6796.4926
> -969.5193 ... Repurchase Price"??
>
It tells you that the Price column got constructed as a factor. One of
the items in the input data couldn't be coerced to numeric, hence
looked like a character variable, and the default stringsAsFactors
setting of TRUE resulted in classifying that column as factor rather
than as numeric (or character). Your Date column is surely a factor
variable.
You may want to look at colClasses in the read.table help page.
The read.zoo function in the zoo package may have better behavior for
this sort of data input task.
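The factor trap David describes, in miniature (the values below are made up to mimic Santosh's file; as.numeric(as.character(.)) is the usual escape):

```r
# One stray text entry turns the whole column into a factor
prices <- factor(c("10.3287", "12.6832", "Repurchase Price"))

as.numeric(prices)    # wrong: returns the underlying level codes
num <- suppressWarnings(as.numeric(as.character(prices)))
num                   # right: 10.3287 12.6832 NA
```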
> Many thanks for the help.
>
> Santosh
--
David.
------------------------------
Message: 128
Date: Thu, 14 Oct 2010 12:12:30 +0800
From: Adrian Hordyk <Adrianh at netspace.net.au>
To: r-help at r-project.org
Subject: [R] Plotting by Group
Message-ID: <4CB6832E.1090407 at netspace.net.au>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi all,
My apologies if this is a very simple problem. I am very new to R,
having worked a lot previously with Excel. I recently completed a R
course with John Hoenig which introduced me to R.
My data:
X Y Species Group
0 0 A 1
. . A 1
. . A 1
. . A 1
. . A 1
. . A 1
99 1 A 1
0 0 B 1
. . B 1
. . B 1
. . B 1
99 1 B 1
etc etc etc etc
with values varying between 0 and 99 and 0 and 1 for each species. I
have about 30 different species, each categorised as 1 of 3 groups.
I want to plot X vs Y for each species onto the same plot, using lines.
In other words, I need 30 different lines each plotted between 0 and
1 on the x axis and 0 and 100 on the y, but
they need to be grouped by the Species, otherwise the lines return from
100 to 0 at the end of the data from each species.
Any help would be greatly appreciated,
Thanks
--
Adrian Hordyk
PhD Candidate
Centre for Fish and Fisheries
Biological Sciences and Biotechnology
Murdoch University
Phone: (08) 93606520
Mobile: 0438113583
Email: A.Hordyk at Murdoch.edu.au
------------------------------
Message: 129
Date: Thu, 14 Oct 2010 00:17:13 -0400
From: cryan at binghamton.edu
To: r-help at r-project.org
Subject: [R] several car scatterplots on one graph
Message-ID: <4CB68449.6846.4A2DDE at cryan.stny.rr.com>
Content-Type: text/plain; charset=US-ASCII
R version 2.11.1 on WinXP
How do I get 3 scatterplots with marginal boxplots (from the car
package) onto a single plot?
I have a data frame called bank
> dim(bank)
[1] 46 5
head(bank)
x1 x2 x3 x4 pop
1 -0.45 -0.41 1.09 0.45 0
2 -0.56 -0.31 1.51 0.16 0
3 0.06 0.02 1.01 0.40 0
4 -0.07 -0.09 1.45 0.26 0
5 -0.10 -0.09 1.56 0.67 0
6 -0.14 -0.07 0.71 0.28 0
library(car)
par(mfrow=c(2,2))
# following lines may be wrapped badly--sorry
with(bank, scatterplot(x1,x2,groups=pop, reg.line=FALSE,
smooth=FALSE, boxplots="xy", reset.par=FALSE))
with(bank, scatterplot(x1,x3,groups=pop, reg.line=FALSE,
smooth=FALSE, boxplots="xy", reset.par=FALSE))
with(bank, scatterplot(x1,x4,groups=pop, reg.line=FALSE,
smooth=FALSE, boxplots="xy", reset.par=FALSE))
I have tried various permutations of the reset.par= option: all
three lines FALSE, all 3 lines TRUE, the first TRUE and the others
FALSE, vice versa, etc. And always I get just one scatterplot showing
up on the device at a time, and occupying the whole thing.
Thanks.
--Chris Ryan
SUNY Upstate Medical University
Binghamton Clinical Campus
------------------------------
Message: 130
Date: Thu, 14 Oct 2010 07:11:22 +0200
From: Marcin Kozak <nyggus at gmail.com>
To: r-help at r-project.org
Subject: [R] The width argument of stem()
Message-ID:
<AANLkTimTKQOMEkru7gB=fkHE1OzuPJM9ZdM3PQjfQk3p at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Dear all,
The help page of stem() says that the width argument gives the desired
width of plot. However, I cannot come up with any idea what this width
stands for. This becomes especially puzzling if you run the following
examples:
> stem(islands)
The decimal point is 3 digit(s) to the right of the |
0 | 00000000000000000000000000000111111222338
2 | 07
4 | 5
6 | 8
8 | 4
10 | 5
12 |
14 |
16 | 0
> stem(islands, width = 0)
The decimal point is 3 digit(s) to the right of the |
0 | +41
2 | +2
4 | +1
6 | +1
8 | +1
10 | +1
12 |
14 |
16 | +1
The above examples are clear, but the ones below are not clear to me:
> stem(islands, width = 10)
The decimal point is 3 digit(s) to the right of the |
0 | +31
2 |
4 |
6 |
8 |
10 |
12 |
14 |
16 |
Note that for width = 10 there are only 10 leaves while there are 48
observations. In fact, taking width = 1, you got 47 leaves; width = 2,
46 leaves, and so on. Look at width = 20:
> stem(islands, width = 20)
The decimal point is 3 digit(s) to the right of the |
0 | 00000000+21
2 | 07
4 | 5
6 | 8
8 | 4
10 | 5
12 |
14 |
16 | 0
Now we have 36 leaves.
Could anyone please clarify what is going on here?
Kind regards,
Marcin
------------------------------
Message: 131
Date: Thu, 14 Oct 2010 11:27:57 +0530
From: "Santosh Srinivas" <santosh.srinivas at gmail.com>
To: "'r-help'" <r-help at r-project.org>
Subject: [R] Drop matching lines from readLines
Message-ID: <4cb69bed.13f88e0a.25d8.1cb8 at mx.google.com>
Content-Type: text/plain; charset="us-ascii"
Dear R-group,
I have some noise in my text file (encoding issues!) ... I imported a 200 MB
text file using readLines.
Used grep to find the lines with the error.
What is the easiest way to drop those lines? I plan to write back the
"cleaned" data set to my base file.
Thanks.
------------------------------
Message: 132
Date: Thu, 14 Oct 2010 11:49:31 +0530
From: "Santosh Srinivas" <santosh.srinivas at gmail.com>
To: "'r-help'" <r-help at r-project.org>
Subject: Re: [R] Drop matching lines from readLines
Message-ID: <4cb6a0fb.241b8f0a.051e.ffffa806 at mx.google.com>
Content-Type: text/plain; charset="us-ascii"
I guess "invert" does the trick.
For recording ... example ..
file <- grep("Repurchase Price", file, fixed = TRUE, invert = TRUE, value = TRUE)
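The full round trip might look like this (a sketch on a stand-in temp file rather than the real 200 MB file; note that value = TRUE is needed so grep returns the lines themselves, not their indices):

```r
# Stand-in for the real data file
f <- tempfile(fileext = ".txt")
writeLines(c("106270|10.3287", "Repurchase Price", "106269|10.3287"), f)

lines <- readLines(f)
# invert = TRUE drops the matching lines; value = TRUE returns text, not indices
clean <- grep("Repurchase Price", lines, fixed = TRUE,
              invert = TRUE, value = TRUE)
writeLines(clean, f)  # write the cleaned data back
```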
-----Original Message-----
From: Santosh Srinivas [mailto:santosh.srinivas at gmail.com]
Sent: 14 October 2010 11:28
To: 'r-help'
Subject: Drop matching lines from readLines
Dear R-group,
I have some noise in my text file (encoding issues!) ... I imported a 200 MB
text file using readLines.
Used grep to find the lines with the error.
What is the easiest way to drop those lines? I plan to write back the
"cleaned" data set to my base file.
Thanks.
------------------------------
Message: 133
Date: Wed, 13 Oct 2010 23:33:10 -0700 (PDT)
From: Dieter Menne <dieter.menne at menne-biomed.de>
To: r-help at r-project.org
Subject: Re: [R] Plotting by Group
Message-ID: <1287037990404-2994867.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Ado wrote:
>
>
> My data:
>
> X Y Species Group
> 0 0 A 1
> . . A 1
> . . A 1
> I want to plot X vs Y for each species onto the same plot, using lines.
> In other words, I need 30 different lines each plotted between 0 and 1 on
> the x axis and 0 and 100 on the y, but they need to be grouped by the
> Species, otherwise the lines return from 100 to 0 at the end of the data
> from each species.
>
>
Probably not exactly what you want, but some permutations of group and
|Species should work.
Dieter
library(lattice)
n= 200
dat = data.frame(X=sample(1:99,n,TRUE), Y=sample(0:1,n,TRUE),
Species=sample(c("A","B"),n,TRUE),Group=as.factor(sample(1:3,n,TRUE)))
# avoid criss-cross by ordering
dat = dat[with(dat,order(X,Y,Species,Group)),]
xyplot(X~Y|Species, group=Group,data=dat,type="l")
--
View this message in context:
http://r.789695.n4.nabble.com/Plotting-by-Group-tp2994780p2994867.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 134
Date: Thu, 14 Oct 2010 05:45:15 +0000
From: siddharth.garg85 at gmail.com
To: r-help at r-project.org
Subject: [R] R and Oracle
Message-ID:
<714481025-1287035116-cardhu_decombobulator_blackberry.rim.net-1187320602- at b
da205.bisx.produk.on.blackberry>
Content-Type: text/plain; charset="Windows-1252"
Hi
Can someone please help me with connecting to Oracle via R? I have been
trying to use ROracle but it's giving me a lot of trouble because of Pro*C.
Thanks&Regards
Siddharth
Sent on my BlackBerry® from Vodafone
------------------------------
Message: 135
Date: Thu, 14 Oct 2010 00:41:51 -0700
From: Elizabeth Purdom <epurdom at stat.berkeley.edu>
To: dennis.wegener at iais.fraunhofer.de
Cc: r-help at R-project.org
Subject: [R] GridR error
Message-ID: <4CB6B43F.1000109 at stat.berkeley.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi,
I am trying to use 'GridR' package for the first time, and I'm running
into a strange error from grid.check:
> grid.check(gridFun)
Error in exists(add) : invalid first argument
After playing around in recover mode, I see that this is because the
variable 'add' created by grid.check is blank:
Browse[1]> split[[1]][k]
[1] " : "
Browse[1]> add
[1] ""
The problem appears to be that my grid.input.Parameters (shown in full
below) has components that look like this:
'<anonymous> : <anonymous>'
so when it is split by '<anonymous>' I get add that is "", and this
gives me an error in 'exists(add)'. I have no real idea what this
function is doing, but is there a workaround for whatever is going on?
My function is quite complicated and calls a lot of local variables
which are themselves quite intricate, so I don't really know what other
[[elided Yahoo spam]]
> sessionInfo()
R version 2.11.1 (2010-05-31)
x86_64-unknown-linux-gnu
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=C LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] GridR_0.9.1 codetools_0.2-2 multicore_0.1-3
[4] Rsamtools_1.0.7 Biostrings_2.16.9 GenomicFeatures_1.0.6
[7] GenomicRanges_1.0.7 IRanges_1.6.11 projectManager_1.0
[10] XML_3.1-0
loaded via a namespace (and not attached):
[1] Biobase_2.8.0 biomaRt_2.4.0 BSgenome_1.16.5 DBI_0.2-5
[5] RCurl_1.4-3 RSQLite_0.9-2 rtracklayer_1.8.1 tools_2.11.1
Thanks,
Elizabeth
The full grid.input.Parameters:
Browse[1]> grid.input.Parameters
[1] " <anonymous>: no visible global function definition for
'myreadPairedAlignments'\n <anonymous>: no visible global function
definition for 'ScanBamParam'\n <anonymous>: no visible global function
definition for 'scanBamFlag'\n <anonymous>: no visible global function
definition for 'matchBamToNames'\n <anonymous>: no visible binding for
global variable 'GRangesList'\n <anonymous>: no visible binding for
global variable 'align2GRanges'\n <anonymous>: no visible binding for
global variable 'GRangesList'\n <anonymous>: no visible binding for
global variable 'align2GRanges'\n <anonymous>: no visible binding for
global variable 'align2GRanges'\n <anonymous>: no visible binding for
global variable 'align2GRanges'\n <anonymous>: no visible global
function definition for 'values'\n <anonymous>: no visible global
function definition for 'getTxIdsByGene'\n <anonymous>: no visible
global function definition for 'exonsBy'\n <anonymous>: no visible
global function definition for 'endoapply'\n <anonymous>: no visible
binding for global variable 'reduce'\n <anonymous> : <anonymous>: no
visible global function definition for 'alignPairs2Tx'\n <anonymous>: no
visible global function definition for 'seqnames'\n <anonymous> :
<anonymous> : <anonymous>: no visible global function definition for
'alignPairs2Tx'\n <anonymous> : <anonymous>: no visible global function
definition for 'alignPairs2Ex'\n <anonymous>: no visible global function
definition for 'seqnames'\n <anonymous> : <anonymous> : <anonymous>: no
visible global function definition for 'alignPairs2Ex'\n <anonymous>: no
visible global function definition for 'mclapply'\n <anonymous>: no
visible binding for global variable '.tabulateReads'\n <anonymous>: no
visible binding for global variable '.tabulateReads'\n"
--
-------------
Elizabeth Purdom
Assistant Professor
Department of Statistics
UC, Berkeley
Office: 433 Evans Hall
(510) 642-6154 (office)
(510) 642-7892 (fax)
Mailing: 367 Evans Hall Berkeley, CA 94720-3860
epurdom at stat.berkeley.edu
------------------------------
Message: 136
Date: Thu, 14 Oct 2010 01:04:06 -0700 (PDT)
From: tom <oana.tomescu at student.tugraz.at>
To: r-help at r-project.org
Subject: Re: [R] Boxplot has only one whisker
Message-ID: <1287043446835-2994962.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
You should provide raw data to boxplot(), not summary stats.
If you want to input summary stats, there was a post some time ago on that:
http://finzi.psych.upenn.edu/Rhelp10/2010-September/251674.html
<quote>
Overwriting $stats did the job for me. I wanted to show the effect of using
different quantile computation methods.
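A minimal sketch of that $stats trick, on made-up data (the quantile probabilities and type are chosen only for illustration):

```r
# boxplot(..., plot = FALSE) returns the five-number summary in $stats
# (a 5 x ngroups matrix); overwrite it and redraw with bxp().
x <- rnorm(100)
b <- boxplot(x, plot = FALSE)
b$stats[, 1] <- quantile(x, c(0.05, 0.25, 0.5, 0.75, 0.95), type = 6)
bxp(b)
```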
[[elided Yahoo spam]]
Tom
--
View this message in context:
http://r.789695.n4.nabble.com/Boxplot-has-only-one-whisker-tp2993262p2994962
.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 137
Date: Thu, 14 Oct 2010 13:47:31 +0530
From: "Santosh Srinivas" <santosh.srinivas at gmail.com>
To: "'r-help'" <r-help at r-project.org>
Subject: [R] Replacing N.A values in a data frame
Message-ID: <4cb6bca3.2a978e0a.7797.4734 at mx.google.com>
Content-Type: text/plain; charset="us-ascii"
Hello, I have a data frame as below ... in cases where I have N.A. I want
to use an average of the past date and next date .. any help?
13/10/2010 A 23
13/10/2010 B 12
13/10/2010 C 124
14/10/2010 A 43
14/10/2010 B 54
14/10/2010 C 65
15/10/2010 A 43
15/10/2010 B N.A.
15/10/2010 C 65
----------------------------------------------------------------------------
--------------------------
Thanks R-Helpers.
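One possible sketch, using linear interpolation from the zoo package within each series (the column names are illustrative, not from the post; with a single gap this is exactly the mean of the neighbouring values, and rule = 2 carries the nearest value into gaps at the series ends, as in the B series above):

```r
library(zoo)
df <- data.frame(date  = rep(c("13/10/2010", "14/10/2010", "15/10/2010"), each = 3),
                 id    = rep(c("A", "B", "C"), 3),
                 value = c(23, 12, 124, 43, 54, 65, 43, NA, 65))
# interpolate NAs separately within each id
df$value <- ave(df$value, df$id,
                FUN = function(v) na.approx(v, na.rm = FALSE, rule = 2))
```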
------------------------------
Message: 138
Date: Thu, 14 Oct 2010 10:28:58 +0200
From: "Millo Giovanni" <Giovanni_Millo at Generali.com>
To: "Achim Zeileis" <Achim.Zeileis at uibk.ac.at>, "Max Brown"
<max.e.brown at gmail.com>
Cc: r-help at stat.math.ethz.ch
Subject: [R] robust standard errors for panel data - corrigendum
Message-ID:
<28643F754DDB094D8A875617EC4398B206A1CE46 at BEMAILEXTV03.corp.generali.net>
Content-Type: text/plain; charset="iso-8859-1"
Hello again Max. A correction to my response from yesterday. Things were
better than they seemed.
I thought it over, checked Arellano's panel book and Driscoll and Kraay
(Rev. Econ. Stud. 1998) and finally realized that vcovSCC does what you
want: in fact, despite being born primarily for dealing with cross-sectional
correlation, 'SCC' standard errors are robust to "both contemporaneous and
lagged cross-sectional correlation", of which serial correlation is a
special case (lagged correlation of u_is with u_jt for i=j).
From Driscoll and Kraay's simulations, you need T>20-25 at a minimum but N
can be arbitrarily large. The bandwidth of the smoother is your choice, just
as in Newey-West, defaults to a reasonable value etc. etc., please see
?vcovSCC. A nice explanation (in a Stata context) in Hoechle (2007) on the
Stata journal, also here: http://fmwww.bc.edu/repec/bocode/x/xtscc_paper.pdf
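A minimal usage sketch on the Munnell data (the maxlag value here is arbitrary; see ?vcovSCC for the actual arguments and defaults):

```r
library(plm)
library(lmtest)
data(Produc)
femod <- plm(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp,
             data = Produc, model = "within")
# Driscoll-Kraay (SCC) standard errors in t-tests of the coefficients
coeftest(femod, vcov = function(x) vcovSCC(x, maxlag = 4))
```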
I still believe my home-made panel-Newey-West from yesterday would work, but
[[elided Yahoo spam]]
PS a couple of experts on the subject are bc/c-ed, please correct me if I'm
wrong.
Best,
Giovanni
-----Original Message-----
From: Millo Giovanni
Sent: Wednesday, 13 October 2010 14:16
To: 'Achim Zeileis'; Max Brown
Cc: r-help at stat.math.ethz.ch; yves.croissant at univ-reunion.fr
Subject: RE: [R] robust standard errors for panel data
Hello.
In principle Achim is right, by default vcovHC.plm does things the
"Arellano" way, clustering by group and therefore giving SEs which are
robust to general heteroskedasticity and serial correlation. The problem
with your data, though, is that this estimator is N-consistent, so it is
inappropriate for your setting. The other way round, on the converse
(cluster="time") would yield a T-consistent estimator, robust to
cross-sectional correlation: there's no escape, because the "big" dimension
is always used to get robustness along the "small" one.
Therefore the road to go to have robustness along the "big" dimension is
some sort of nonparametric truncation. So:
** 1st (possible) solution **
In my opinion, you would actually need a panel implementation of Newey-West,
which is not implemented in 'plm' yet. It might well be feasible by applying
vcovHAC{sandwich} to the time-demeaned data but I'm not sure; in this case,
vcovHAC should be applied this way (here: the famous Munnell data, see
example(plm))
> library(plm)
> fm<-log(gsp)~log(pcap)+log(pc)+log(emp)+unemp
> data(Produc)
> ## est. FE model
> femod<-plm(fm, Produc)
> ## extract time-demeaned data
> demy <- pmodel.response(femod, model="within")
> demX <- model.matrix(femod, model="within")
> ## estimate lm model on demeaned data
> ## (equivalent to FE, but makes a 'lm' object)
> demod<-lm(demy~demX-1)
> library(sandwich)
> library(lmtest)
> ## apply HAC covariance, e.g., to t-tests
> coeftest(demod, vcov = vcovHAC)
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
demXlog(pcap) -0.0261497 0.0485168 -0.5390 0.59005
demXlog(pc) 0.2920069 0.0496912 5.8764 6.116e-09 ***
demXlog(emp) 0.7681595 0.0677258 11.3422 < 2.2e-16 ***
demXunemp -0.0052977 0.0018648 -2.8410 0.00461 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> ## same goes for waldtest(), lht() etc.
but beware, things are probably complicated by the serial correlation
induced by demeaning: see the references in the serial correlation tests
section of the package vignette. Caveat emptor.
** 2nd solution **
Another possible strategy is screening for serial correlation first: again,
see ?pbgtest, ?pdwtest and be aware of all the caveats detailed in the
abovementioned section of the vignette regarding use on FE models.
** 3rd solution **
Another thing you could do (Hendry and friends would say "should" do!) to
get rid of serial correlation is a dynamic FE panel, as the Nickell bias is
of order 1/T and so might well be negligible in your case.
Anyway, thanks for motivating me: I thought we'd provided robust covariances
all over the place, but there was one direction left ;^)
Giovanni
-----Original Message-----
From: Achim Zeileis [mailto:Achim.Zeileis at uibk.ac.at]
Sent: Wednesday, 13 October 2010 12:06
To: Max Brown
Cc: r-help at stat.math.ethz.ch; yves.croissant at univ-reunion.fr; Millo Giovanni
Subject: Re: [R] robust standard errors for panel data
On Wed, 13 Oct 2010, Max Brown wrote:
> Hi,
>
> I would like to estimate a panel model (small N large T, fixed
> effects), but would need "robust" standard errors for that. In
> particular, I am worried about potential serial correlation for a
> given individual (not so much about correlation in the cross section).
>
> From the documentation, it looks as if the vcovHC that comes with plm
> does not seem to do autocorrelation,
My understanding is that it does, in fact. The details say
Observations may be clustered by '"group"' ('"time"') to account
for serial (cross-sectional) correlation.
Thus, the default appears to be to account for serial correlation anyway.
But I'm not an expert in panel-versions of these robust covariances. Yves
and Giovanni might be able to say more.
> and the NeweyWest in the sandwich
> package says that it expects a fitted model of type "lm" or "glm" (it
> says nothing about "plm").
That information in the "sandwich" package is outdated - prompted by your
email I've just fixed the manual page in the development version.
In principle, everything in "sandwich" is object-oriented now, see
vignette("sandwich-OOP", package = "sandwich")
However, the methods within "sandwich" are only sensible for cross-sectional
data (vcovHC, sandwich, ...) or time series data (vcovHAC, NeweyWest,
kernHAC, ...). There is not yet explicit support for panel data.
hth,
Z
> How can I estimate the model and get robust standard errors?
>
> Thanks for your help.
>
> Max
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 139
Date: Thu, 14 Oct 2010 01:31:23 -0700 (PDT)
From: Arve Lynghammar <arvely at gmail.com>
To: r-help at r-project.org
Subject: [R] Adding legend to lda-plot, using the MASS-package
Message-ID: <1287045083612-2994991.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi everyone!
I'm going to analyze the attached data set, but I'm not able to add the
correct legend. In other words, I can't see which species is which.
http://r.789695.n4.nabble.com/file/n2994991/test_discriminant_anal_alle.xls
test_discriminant_anal_alle.xls
These are my commands:
data<- read.table("clipboard",header=TRUE)
library(MASS)
output<-lda(Species~., data)
scores=predict(output, data)$x
plot(scores, col=rainbow(4)[data$Species], asp=1)
Is there a quick way to add a legend to the plot with the corresponding
colours?
There's probably an easy way to do it, but I've already spent too many days
on this...
Thank you in advance,
Arve L.
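Since the points are coloured with rainbow(4) indexed by the Species factor, indexing the same palette with the factor levels should give a matching legend, e.g. (the position and plotting symbol below are just examples; pch = 1 is the plot() default):

```r
# reuse the palette and the default plotting symbol in the legend
legend("topright", legend = levels(data$Species),
       col = rainbow(4), pch = 1)
```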
--
View this message in context:
http://r.789695.n4.nabble.com/Adding-legend-to-lda-plot-using-the-MASS-packa
ge-tp2994991p2994991.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 140
Date: Thu, 14 Oct 2010 10:32:19 +0200
From: ogbos okike <ogbos.okike at gmail.com>
To: R-help at stat.math.ethz.ch
Subject: [R] spatial partition
Message-ID:
<AANLkTinZkR9j0XBNOxp42tW+cp0GbGQ-nuhoQhR9t6=i at mail.gmail.com>
Content-Type: text/plain
Hi everybody,
I have a huge set of longitude and latitude data. I have used the raster
package to get a sense of its global distribution. But I wish to display the
information using a few points on the world map.
The data is of the form:
95.2156 0.8312
-65.3236 -3.3851
-65.2364 -3.2696
-65.2349 -3.2679
164.7025 -17.6404
148.8214 -4.8285
-67.6568 2.6477
-73.4833 -0.2350
40.9587 -16.8655
-61.6474 8.1179
93.3401 -0.2755
119.9011 -17.1733
-69.7640 1.1245
-149.3088 -20.0035
177.8753 -3.4200
-67.5590 3.0133
-21.9331 15.4684
120.2656 -17.6166
165.9368 -17.3888
164.7335 -17.6953
-74.0017 -1.8623
-71.1195 -3.3562
130.1496 -11.5775
I am thinking of a way of handling the data so that it can be
represented by fewer points when plotted on the map. I have tried using
the factor method but didn't get anywhere.
Thank you for any idea
Best
Ogbos
[[alternative HTML version deleted]]
------------------------------
Message: 141
Date: Thu, 14 Oct 2010 01:34:12 -0700 (PDT)
From: dpender <d.pender at civil.gla.ac.uk>
To: r-help at r-project.org
Subject: Re: [R] Data Gaps
Message-ID: <1287045252294-2994997.post at n4.nabble.com>
Content-Type: text/plain; charset=UTF-8
Thanks Dennis.
One more thing if you don't mind. How do I extract the individual H and T
'arrays' from f(m,o,l) so that I can combine them with a date/time array and
write them to a file?
Sorry if it's a simple question but I'm completely new to R.
Cheers,
Doug
--
View this message in context:
http://r.789695.n4.nabble.com/Data-Gaps-tp2993317p2994997.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 142
Date: Thu, 14 Oct 2010 20:32:46 +1100
From: Michael Bedward <michael.bedward at gmail.com>
To: Julia Lira <julia.lira at hotmail.co.uk>
Cc: r-help at r-project.org
Subject: Re: [R] (no subject)
Message-ID:
<AANLkTim8T3HfrRPA-z2cGkUnDjkQoA5L8DjX-cHL9Zzp at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hello Julia,
I'm afraid your code had multiple problems: variables declared but not
used, incorrect or unnecessary use of the "c" function, out-of-bounds
subscripts and overwriting of result objects.
Rather than point them all out in detail I've modified your code so
that it works (see below). Please study this code and compare it to
your original code. This will help you to understand how to do things
more simply and reliably in R. Don't be discouraged - it takes
practice :)
Note, I didn't know whether you really wanted to retain the resultx
and resultb matrices. I've left them in there just in case.
Michael
library(quantreg)  # provides rq() used below

bidding.simulation <- function(nsim=10, N=200, I=5) {
  set.seed(180185)
  tau <- seq(0.48, 0.52, 0.001)
  tau.mid <- which(tau == 0.5)
  Ntau <- length(tau)
  mresultx <- matrix(-99, nrow=I*N, ncol=nsim)
  mresultb <- matrix(-99, nrow=I*N, ncol=nsim)
  h <- seq(0.001, 0.020, 0.001)
  Mb0 <- matrix(0, nrow=nsim, ncol=Ntau)
  Mb1 <- matrix(0, nrow=nsim, ncol=Ntau)
  colnames(Mb1) <- colnames(Mb0) <- paste("tau", tau, sep=".")
  Mhb0 <- matrix(0, nrow=nsim, ncol=tau.mid - 1)
  Mhb1 <- matrix(0, nrow=nsim, ncol=tau.mid - 1)
  colnames(Mhb1) <- colnames(Mhb0) <- paste("width",
    tau[(tau.mid - 1):1] - tau[(tau.mid + 1):Ntau], sep=".")
  for (i in 1:nsim) {
    mu <- runif(I*N)
    mx <- rep(runif(N), I)
    b0 <- rep(1, I*N)
    # function for private cost
    cost <- b0 + b0*mx + mu
    # bidding strategy
    bid <- mx + ((I+1)/I) + ((I-1)/I)*mu
    mresultb[,i] <- bid
    mresultx[,i] <- mx
    qf <- rq(formula = bid ~ mx, tau = tau)
    coefs <- coef(qf)
    Mb0[i, ] <- coefs[1, ]
    Mb1[i, ] <- coefs[2, ]
    Mhb0[i, ] <- coefs[1, (tau.mid - 1):1] - coefs[1, (tau.mid + 1):Ntau]
    Mhb1[i, ] <- coefs[2, (tau.mid - 1):1] - coefs[2, (tau.mid + 1):Ntau]
  }
  # return results as a list
  list(Mb0=Mb0, Mb1=Mb1, Mhb0=Mhb0, Mhb1=Mhb1, mresultx=mresultx,
       mresultb=mresultb)
}
On 14 October 2010 05:37, Julia Lira <julia.lira at hotmail.co.uk> wrote:
>
> Dear all,
>
>
>
> I have just sent an email with my problem, but I think no one can see the
red part, because it is black. So, I am writing the code again:
>
>
>
> rm(list=ls()) #remove almost everything in the memory
>
> set.seed(180185)
> nsim <- 10
> mresultx <- matrix(-99, nrow=1000, ncol=nsim)
> mresultb <- matrix(-99, nrow=1000, ncol=nsim)
> N <- 200
> I <- 5
> taus <- c(0.480:0.520)
> h <- c(1:20/1000)
> alpha1 <- c(1:82)
> aeven1 <- alpha[2 * 1:41]
> aodd1 <- alpha[-2 * 1:41]
> alpha2 <- c(1:40)
> aeven2 <- alpha2[2 * 1:20]
> #Create an object to hold results.
> M <- matrix(0, ncol=82, nrow=nsim)
> Mhb0 <- matrix(0, ncol=20, nrow=nsim)
> Mhb1 <- matrix(0, ncol=20, nrow=nsim)
> Mchb0 <- matrix(0, ncol=20, nrow=nsim)
> Mchb1 <- matrix(0, ncol=20, nrow=nsim)
> for (i in 1:nsim){
> # make a matrix with 5 cols of N random uniform values
> u <- replicate( 5, runif(N, 0, 1) )
> # fit matrix u in another matrix of 1 column
> mu <- matrix(u, nrow=1000, ncol=1)
> # make auction-specific covariate
> x <- runif(N, 0, 1)
> mx <- matrix(rep(x,5), nrow=1000, ncol=1)
> b0 <- matrix(rep(c(1),1000), nrow=1000, ncol=1)
> #function for private cost
> cost <- b0+b0*mx+mu
> #bidding strategy
> bid <- mx+((I+1)/I)+((I-1)/I)*mu
> mresultb[,i] <- bid
> mresultx[,i] <- mx
> qf <- rq(formula = mresultb[,i] ~ mresultx[,i], tau= 480:520/1000)
> # Storing result and does not overwrite prior values
> M[i, ] <- coef(qf)
> QI <- (1-0.5)/(I-1)
> M50b0 <- M[,41]
> M50b1 <- M[,42]
> Mb0 <- matrix(M[,aodd1], nrow=nsim, ncol=20)
> Mb1 <- matrix(M[,aeven1], nrow=nsim, ncol=20)
> for (t in aeven2){
>   Mhb0[,t] <- M[,(41+t)]-M[,(41-t)]
>   Mhb1[,t] <- M[,(42+t)]-M[,(42-t)]
>   }
> }
>
>
>
> The problem is in the part:
>
> for (t in aeven2){
>   Mhb0[,t] <- M[,(41+t)]-M[,(41-t)]
>   Mhb1[,t] <- M[,(42+t)]-M[,(42-t)]
>   }
>
>
> Since I want the software to subtract from column (41+t) of matrix called
M the column (41-t), in such a way that the matrix Mhb0 will show me the
result for each t organized by columns.
>
>
>
> Does anybody know what exactly I am doing wrong?
>
>
>
> Thanks in advance!
>
>
>
> Julia
>
>        [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 143
Date: Thu, 14 Oct 2010 02:37:23 -0700
From: Dennis Murphy <djmuser at gmail.com>
To: dpender <d.pender at civil.gla.ac.uk>
Cc: r-help at r-project.org
Subject: Re: [R] Data Gaps
Message-ID:
<AANLkTinrAuFrh71J9WpNfZVg_8JAYACkw1e8CPn=cZ2g at mail.gmail.com>
Content-Type: text/plain
Hi:
The essential problem is that after you append items, the result is a list
with possibly unequal lengths. Trying to convert that into a data frame by
the 'usual' methods (do.call(rbind, ...) or ldply() in plyr) didn't work (as
anticipated). One approach is to initialize a maximum size matrix with NAs
and then replace the NAs by the contents of each component of the list. The
following function is far from elegant, but the idea is to output a data
frame by 'NA filling' the shorter vectors in the list and doing the
equivalent of do.call(rbind, list).
listNAfill <- function(l) {
  # input argument l is a list with numeric component vectors
  lengths <- sapply(l, length)
  m <- matrix(NA, nrow = length(l), ncol = max(lengths))
  for (i in seq_len(length(l)))
    m[i, ] <- replace(m[i, ], 1:lengths[i], l[[i]])
  as.data.frame(m)
}
# From the previous mail:
u <- f(m, o, l)
> u
[[1]]
[1] 0.88 0.72 0.89 1.20 1.40 0.93 1.23 0.86
[[2]]
[1] 7.14 7.14 1.60 7.49 8.14 7.14 7.32
> listNAfill(u)
V1 V2 V3 V4 V5 V6 V7 V8
1 0.88 0.72 0.89 1.20 1.40 0.93 1.23 0.86
2 7.14 7.14 1.60 7.49 8.14 7.14 7.32 NA
The result of listNAfill is a data frame, so you could cbind or otherwise
attach the date-time info and then use write.table() or some other
appropriate write function.
I'm sure there is some built-in way to NA fill the list components so that
they all have equal length, but I couldn't find it. The rbind.fill()
function from plyr works with data frame inputs, not lists. A quick search
of the archives on 'NA fill' didn't turn up anything I could use, either,
but I didn't look very hard. This seems to work as expected on the simple
example I used.
Better approaches are welcomed...
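One compact alternative along the same lines (pad each vector with NAs to the common length, then rbind):

```r
# same idea as listNAfill, written as pad-then-bind
listNAfill2 <- function(l) {
  n <- max(sapply(l, length))
  padded <- lapply(l, function(v) c(v, rep(NA, n - length(v))))
  as.data.frame(do.call(rbind, padded))
}
```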
HTH,
Dennis
On Thu, Oct 14, 2010 at 1:34 AM, dpender <d.pender at civil.gla.ac.uk> wrote:
>
> Thanks Dennis.
>
>
>
> One more thing if you don't mind. How to I abstract the individual H and
T
> arrays from f(m,o,l) so as I can combine them with a date/time array and
> write to a file?
>
>
> Sorry if it's a simple question but I'm completely new to R.
>
>
>
> Cheers,
>
>
>
> Doug
>
>
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Data-Gaps-tp2993317p2994997.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 144
Date: Thu, 14 Oct 2010 10:43:53 +0100
From: Federico Calboli <f.calboli at imperial.ac.uk>
To: r-help <r-help at stat.math.ethz.ch>
Subject: [R] rounding issues
Message-ID: <E670ACB6-7DCD-45E3-AA46-19FC978DDADE at imperial.ac.uk>
Content-Type: text/plain; charset=us-ascii
Hi All,
I'm running the soon-to-be-upgraded R 2.11.1 on an Intel Mac and on an
Ubuntu machine, and the problem I see is the same on both. I noticed the
following behaviour:
407585.91 * 0.8
[1] 326068.7 -- the right answer is 326068.728
round(407585.91 * 0.8, 2)
[1] 326068.7 -- same issue
407585.91/100
[1] 4075.859 -- the right answer is 4075.8591
I have no saved .Rwhatever in my environment, and I never set any option to
have such strange rounding. I'm obviously missing something, and I'd
appreciate suggestions.
Best,
Federico
--
Federico C. F. Calboli
Department of Epidemiology and Biostatistics
Imperial College, St. Mary's Campus
Norfolk Place, London W2 1PG
Tel +44 (0)20 75941602 Fax +44 (0)20 75943193
f.calboli [.a.t] imperial.ac.uk
f.calboli [.a.t] gmail.com
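For the archive: the stored values above are exact to double precision; it is only R's default 7 significant digits of printing that truncate the display, which can be checked with:

```r
# ask for more digits and the "missing" decimals reappear
print(407585.91 * 0.8, digits = 10)   # 326068.728
sprintf("%.4f", 407585.91 / 100)      # "4075.8591"
getOption("digits")                   # default print precision: 7
```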
------------------------------
Message: 145
Date: Thu, 14 Oct 2010 20:43:42 +1100
From: Michael Bedward <michael.bedward at gmail.com>
To: ogbos okike <ogbos.okike at gmail.com>
Cc: R-help at stat.math.ethz.ch
Subject: Re: [R] spatial partition
Message-ID:
<AANLkTinxGbAr+60GwBB0h+0g7FGfr5QsO5DO+suaU6OC at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Some quick ideas...
One very easy way would be to round them all to integer degrees and
remove the duplicates - or even just let the duplicates overwrite each
other in the plot.
A step up from that would be to create a matrix at some resolution
(e.g. 180 x 360 for a 1 degree global grid) and count the number of
points that fall into each matrix cell. Then plot those points with
symbol color or size to represent frequency categories.
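The matrix-count idea can be sketched like this (using a few of the points from the post; 1-degree cells via floor() are one choice among many):

```r
pts <- matrix(c( 95.2156,   0.8312,
                -65.3236,  -3.3851,
                -65.2364,  -3.2696,
                164.7025, -17.6404), ncol = 2, byrow = TRUE)
# bin each point into a 1-degree cell and count points per cell
lon.cell <- floor(pts[, 1])
lat.cell <- floor(pts[, 2])
counts <- as.data.frame(table(lon = lon.cell, lat = lat.cell))
counts <- counts[counts$Freq > 0, ]   # keep only occupied cells
```

Each row of `counts` is then one plottable point whose Freq can drive symbol size or colour.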
If you want to get fancy and look for "natural" spatial clusters of
points in your data there are a number of packages that can help you:
http://cran.r-project.org/web/views/Spatial.html
Hope this helps.
Michael
On 14 October 2010 19:32, ogbos okike <ogbos.okike at gmail.com> wrote:
> Hi everybody,
> I have a huge longitude and latitude data. I have used raster package to
> get the since of its global distribution. But I wish to display the
> information using a few points on the world map.
>
> The data is of the form:
> 95.2156 0.8312
> -65.3236 -3.3851
> -65.2364 -3.2696
> -65.2349 -3.2679
> 164.7025 -17.6404
> 148.8214 -4.8285
> -67.6568 2.6477
> -73.4833 -0.2350
> 40.9587 -16.8655
> -61.6474 8.1179
> 93.3401 -0.2755
> 119.9011 -17.1733
> -69.7640 1.1245
> -149.3088 -20.0035
> 177.8753 -3.4200
> -67.5590 3.0133
> -21.9331 15.4684
> 120.2656 -17.6166
> 165.9368 -17.3888
> 164.7335 -17.6953
> -74.0017 -1.8623
> -71.1195 -3.3562
> 130.1496 -11.5775
>
>
> I am thinking of a way of handling that data in such a way that they can
be
> represented by fewer points when plotted on the map. I have tried using
> factor method but didn't get any way.
>
> Thank you for any idea
> Best
> Ogbos
>
>        [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
_______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
End of R-help Digest, Vol 92, Issue 14
More information about the R-help
mailing list