[R] What is cast telling me?

Mark Knecht markknecht at gmail.com
Wed Jul 8 22:22:27 CEST 2009


On Wed, Jul 8, 2009 at 12:58 PM, <rmailbox at justemail.net> wrote:
>
> That you have non-unique rows in your data (as identified by the identifying variables).
>

Hmm...OK - so I cast it with PL_Pos, which has (in this data subset) a
unique value for each experiment. There are 25 experiments and there
are 25 rows, so each row in the output of cast should be unique, and
indeed (although it won't survive email) there is now only one EnTime
for each experiment. The 25 in the 755 (EnTime) vs 950 (ExTime) cell is
the number of 5-minute periods in that experiment, 5 minutes being the
observation rate of this data.

> cast(MyResults,PL_Pos + ExNum + EnTime ~ ExTime)
Aggregation requires fun.aggregate: length used as default
   PL_Pos ExNum EnTime 950 1125 1155 1210 1245 1305 1310
1   -1420    23   1125   0    0    0    0    0   22    0
2   -1340    13    755  25    0    0    0    0    0    0
3   -1120    22    850   0   33    0    0    0    0    0
4   -1040     5    855   0    0    0    0   48    0    0
5    -830     8    800   0    0   49    0    0    0    0
6    -810    10    935   0    0    0   33    0    0    0
7    -550    14    950   0    0    0    0    0    0   42
8    -400    17    750   0    0    0    0    0    0   66
9    -340     3    945   0    0    0    0    0    0   43
10   -310    12    750   0    0    0    0    0    0   66
11   -280    11   1210   0    0    0    0    0    0   14
12    -60    19    810   0    0    0    0    0    0   62
13    110     6   1245   0    0    0    0    0    0    7
14    160     9   1155   0    0    0    0    0    0   17
15    180    24   1305   0    0    0    0    0    0    3
16    440    16    815   0    0    0    0    0    0   61
17    520     1    800   0    0    0    0    0    0   64
18    530     2    755   0    0    0    0    0    0   65
19    680     7    925   0    0    0    0    0    0   47
20    700    15    740   0    0    0    0    0    0   68
21   1060    20    740   0    0    0    0    0    0   68
22   1080    25    805   0    0    0    0    0    0   63
23   1120    21    755   0    0    0    0    0    0   65
24   1720    18   1210   0    0    0    0    0    0   14
25   2150     4    820   0    0    0    0    0    0   60
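
If I read the message right, the default is just counting rows, so
spelling the aggregation out should give the same table; the cell
values are per-experiment observation counts. A sketch, assuming
MyResults is the same molten data frame used above:

library(reshape)
# Explicit fun.aggregate = length: count the molten rows in each cell
cast(MyResults, PL_Pos + ExNum + EnTime ~ ExTime, fun.aggregate = length)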

On the other hand, if I'm looking for a higher PL_Pos reading I can
cast the same variables this way, with EnTime first:

> cast(MyResults, EnTime + PL_Pos + ExNum ~ ExTime)
Aggregation requires fun.aggregate: length used as default
   EnTime PL_Pos ExNum 950 1125 1155 1210 1245 1305 1310
1     740    700    15   0    0    0    0    0    0   68
2     740   1060    20   0    0    0    0    0    0   68
3     750   -400    17   0    0    0    0    0    0   66
4     750   -310    12   0    0    0    0    0    0   66
5     755  -1340    13  25    0    0    0    0    0    0
6     755    530     2   0    0    0    0    0    0   65
7     755   1120    21   0    0    0    0    0    0   65
8     800   -830     8   0    0   49    0    0    0    0
9     800    520     1   0    0    0    0    0    0   64
10    805   1080    25   0    0    0    0    0    0   63
11    810    -60    19   0    0    0    0    0    0   62
12    815    440    16   0    0    0    0    0    0   61
13    820   2150     4   0    0    0    0    0    0   60
14    850  -1120    22   0   33    0    0    0    0    0
15    855  -1040     5   0    0    0    0   48    0    0
16    925    680     7   0    0    0    0    0    0   47
17    935   -810    10   0    0    0   33    0    0    0
18    945   -340     3   0    0    0    0    0    0   43
19    950   -550    14   0    0    0    0    0    0   42
20   1125  -1420    23   0    0    0    0    0   22    0
21   1155    160     9   0    0    0    0    0    0   17
22   1210   -280    11   0    0    0    0    0    0   14
23   1210   1720    18   0    0    0    0    0    0   14
24   1245    110     6   0    0    0    0    0    0    7
25   1305    180    24   0    0    0    0    0    0    3

Then I might notice that experiments that start early (EnTime < 830)
and others that start late (EnTime > 1130) might tend to have higher
PL_Pos values.
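
To check that hunch a bit more directly I could bin the entry times
and average PL_Pos per bin. A rough sketch; it assumes EnTime and
PL_Pos are constant within each experiment in MyResults, and the cut
points are just the ones mentioned above:

# One row per experiment (EnTime and PL_Pos repeat on every 5-minute row)
exps <- unique(MyResults[, c("ExNum", "EnTime", "PL_Pos")])
# Bin the entry times: early (<= 830), middle, late (> 1130)
exps$bin <- cut(exps$EnTime, breaks = c(0, 830, 1130, 2400),
                labels = c("early", "mid", "late"))
# Mean PL_Pos in each bin
tapply(exps$PL_Pos, exps$bin, mean)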

However, I'd like to get the PL_Pos values into the body of the table
rather than only as a row identifier. Does that mean I need to melt
the data a second time?
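
In other words, something like this is what I imagine (MyRaw is my
made-up name for the un-molten data, one row per 5-minute bar, and I'm
not sure this is the right approach):

library(reshape)
# Melt again with PL_Pos as the measured variable so its values, rather
# than row counts, end up in the cells of the cast table.
molten2 <- melt(MyRaw, id = c("ExNum", "EnTime", "ExTime"),
                measure = "PL_Pos")
# PL_Pos repeats on every bar within an experiment, so mean() just
# collapses the duplicates back to the single per-experiment value.
cast(molten2, EnTime + ExNum ~ ExTime, fun.aggregate = mean)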

Thanks,
Mark



