[R] Developing a web crawler

rex.dwyer at syngenta.com
Thu Mar 3 14:58:07 CET 2011

Perl seems like a 10x better choice for the task, but try looking at the examples in ?strsplit to get started.
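A rough sketch of that strsplit approach, assuming RCurl is installed and using the R project homepage purely as a placeholder URL (the tag-stripping regex is a quick hack, not a real HTML parser):

```r
library(RCurl)

# Fetch the raw HTML of the page as a single character string.
html <- getURL("http://www.r-project.org/")

# Crudely strip HTML tags -- good enough for a word count,
# not for serious parsing (the XML package would do that properly).
text <- gsub("<[^>]+>", " ", html)

# Lower-case, split on runs of non-letters, drop empty strings.
words <- unlist(strsplit(tolower(text), "[^a-z]+"))
words <- words[words != ""]

# Frequency table of every word, sorted; or look one word up directly.
freq <- sort(table(words), decreasing = TRUE)
head(freq)
freq["project"]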

-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On Behalf Of antujsrv
Sent: Thursday, March 03, 2011 4:23 AM
To: r-help at r-project.org
Subject: [R] Developing a web crawler


I wish to develop a web crawler in R. I have been using the functionality
available in the RCurl package.
I am able to extract the HTML content of the site, but I don't know how to go
about analyzing the HTML-formatted document.
I wish to know the frequency of a word in the document. I am only acquainted
with analyzing data sets.
So how should I go about analyzing data that is not available in tabular form?

A few chunks of code that I wrote:
w <-
t <- readLines(w)

readLines also didn't prove to be of any help.

Any help would be highly appreciated. Thanks in advance.

View this message in context: http://r.789695.n4.nabble.com/Developing-a-web-crawler-tp3332993p3332993.html
Sent from the R help mailing list archive at Nabble.com.

R-help at r-project.org mailing list
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

