20.11. Creating a Robot

Problem

You want to create a script that navigates the Web on its own (i.e., a robot), and you'd like to respect the remote sites' wishes.

Solution

Instead of writing your robot with LWP::UserAgent, have it use LWP::RobotUA:

use LWP::RobotUA;
$ua = LWP::RobotUA->new('websnuffler/0.1', '[email protected]');
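
The object this returns is a drop-in replacement for an LWP::UserAgent, and you can tune how polite it is before making requests. The settings below only illustrate the interface; the values themselves are arbitrary:

$ua->delay(10);       # wait at least 10 minutes between requests to the same host
$ua->use_sleep(1);    # sleep to enforce the delay rather than returning an error response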

Discussion

To avoid having marauding robots and web crawlers hammer their servers, sites are encouraged to create a file with access rules called robots.txt. If you're fetching only one document with your script, this is no big deal, but if your script is going to fetch many documents, probably from the same server, you could easily exhaust that site's bandwidth.

When you create your own scripts to run around the Web, it's important to be a good net citizen. That means two things: don't request documents from the same server too often, and heed the advisory access rules in their robots.txt file.

The easiest way to handle this is to use the LWP::RobotUA module to create agents instead of LWP::UserAgent. This agent automatically knows to pull things slowly when repeatedly calling the same server. It also checks each site's robots.txt file to see whether you're trying to grab a file that is off limits. If you are, you'll get back a response like this:

403 (Forbidden) Forbidden by robots.txt
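
Here's a rough sketch of how a small robot might put this together. The agent name, contact address, and URLs are placeholders, not values from this recipe:

#!/usr/bin/perl -w
# politely fetch a handful of pages from one host
use strict;
use LWP::RobotUA;
use HTTP::Request;

my $ua = LWP::RobotUA->new('websnuffler/0.1', 'me@example.com');

for my $url ('http://www.example.com/',
             'http://www.example.com/stats/index.html') {
    my $response = $ua->request(HTTP::Request->new(GET => $url));
    if ($response->is_success) {
        printf "%s: fetched %d bytes\n", $url, length($response->content);
    } else {
        # a URL disallowed by robots.txt fails much like the 403 shown above
        print "Couldn't fetch $url: ", $response->status_line, "\n";
    }
}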

Here's an example robots.txt file, fetched using the GET program that comes with the LWP module suite:

% GET http://www.webtechniques.com/robots.txt
User-agent: *
     Disallow: /stats
     Disallow: /db
     Disallow: /logs
     Disallow: /store
     Disallow: /forms
     Disallow: /gifs
     Disallow: /wais-src
     Disallow: /scripts
     Disallow: /config

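LWP::RobotUA does this checking for you through the WWW::RobotRules module, which also ships with libwww-perl. If you ever need to consult a robots.txt file by hand, a minimal sketch (with placeholder URLs and agent name) looks like this:

use strict;
use LWP::Simple qw(get);
use WWW::RobotRules;

my $robots_url = 'http://www.example.com/robots.txt';
my $rules      = WWW::RobotRules->new('websnuffler/0.1');
$rules->parse($robots_url, get($robots_url) || '');

for my $url ('http://www.example.com/', 'http://www.example.com/stats/') {
    print $rules->allowed($url) ? "allowed:    " : "disallowed: ", $url, "\n";
}
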
A more interesting and extensive example is at http://www.cnn.com/robots.txt. This file is so big, they even keep it under RCS control!

% GET http://www.cnn.com/robots.txt | head
# robots, scram
# $Id: robots.txt,v 1.2 1998/03/10 18:27:01 mreed Exp $
User-agent: *
Disallow: /
User-agent:     Mozilla/3.01 (hotwired-test/0.1)
Disallow:   /cgi-bin
Disallow:   /TRANSCRIPTS
Disallow:   /development

See Also

The documentation for the CPAN module LWP::RobotUA(3); http://info.webcrawler.com/mak/projects/robots/robots.html for a description of how well-behaved robots act

