[R] importing text file with duplicate rows / indexing rows and columns
Prof Brian Ripley
ripley at stats.ox.ac.uk
Sun May 16 08:24:13 CEST 2004
The issue is not 'duplicate rows' but duplicated row names. You asked R
explicitly to make a column into row names -- if its values are not suitable
row names, don't do that. You can remove duplicated rows later (see ?unique),
but you cannot have duplicated row names in a data frame, so leave the row
names as the default numbers.
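A minimal sketch of that approach, using the file path from the question below
(read.delim() already assumes tab separation, so only the row.names argument
needs to be dropped):

test <- read.delim("~/docs/perl/expr_ctx.txt2", header = TRUE)

## Drop fully duplicated rows if desired (see ?unique / ?duplicated),
## keeping the default numeric row names.
test <- unique(test)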
On Sat, 15 May 2004 grr at grell.mailshell.com wrote:
> Could somebody advise me about importing a txt file as a data frame? I am using the command:
>
> test <- read.delim("~/docs/perl/expr_ctx.txt2", header = T, sep = "\t", row.names = 1)
>
> This gives me an error because there are duplicate rows.
>
> In the txt file, the columns are unique subjects and the rows are
> variables, so I had planned to transform the file after importing. The
> first row and column are text labels, which I could either leave in
> (with duplicate rows) or ask R to remove for me, saving another file
> with index values. But I can't figure out how to do either of these
> things.
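For the transformation described in the question, a rough sketch of the
transpose step (the object names here are illustrative, and the columns after
the label column are assumed to be numeric):

labels <- as.character(test[[1]])     # variable labels from the first column
mat <- t(as.matrix(test[, -1]))       # transpose so subjects become rows
colnames(mat) <- make.unique(labels)  # de-duplicate labels so they can serve as column names
expr <- as.data.frame(mat)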
--
Brian D. Ripley,                  ripley at stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595