<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<HTML><HEAD><TITLE>Man page of WWW::RobotRules</TITLE>
</HEAD><BODY>
<H1>WWW::RobotRules</H1>
Section: User Contributed Perl Documentation (3pm)<BR>Updated: 2018-04-14<BR><A HREF="#index">Index</A>
<HR>
<A NAME="lbAB"> </A>
<H2>NAME</H2>

WWW::RobotRules - database of robots.txt-derived permissions
<A NAME="lbAC"> </A>
<H2>SYNOPSIS</H2>

<PRE>
 use WWW::RobotRules;
 my $rules = WWW::RobotRules->new('MOMspider/1.0');

 use LWP::Simple qw(get);

 {
    my $url = "http://some.place/robots.txt";
    my $robots_txt = get $url;
    $rules->parse($url, $robots_txt) if defined $robots_txt;
 }

 {
    my $url = "http://some.other.place/robots.txt";
    my $robots_txt = get $url;
    $rules->parse($url, $robots_txt) if defined $robots_txt;
 }

 # Now we can check if a URL is valid for those servers
 # whose "robots.txt" files we've gotten and parsed:
 if ($rules->allowed($url)) {
    my $c = get $url;
    ...
 }
</PRE>
<A NAME="lbAD"> </A>
<H2>DESCRIPTION</H2>

This module parses <I>/robots.txt</I> files as specified in
``A Standard for Robot Exclusion'', at
&lt;<A HREF="http://www.robotstxt.org/wc/norobots.html">http://www.robotstxt.org/wc/norobots.html</A>&gt;.
Webmasters can use the <I>/robots.txt</I> file to forbid conforming
robots from accessing parts of their web site.
<P>

The parsed files are kept in a WWW::RobotRules object, and this object
provides methods to check if access to a given <FONT SIZE="-1">URL</FONT> is prohibited. The
same WWW::RobotRules object can be used for one or more parsed
<I>/robots.txt</I> files on any number of hosts.
<P>

The following methods are provided:
<DL COMPACT>
<DT id="1">$rules = WWW::RobotRules->new($robot_name)<DD>
|
|
|
|
|
|
|
|
|
|
This is the constructor for WWW::RobotRules objects. The first
|
|
argument given to <I>new()</I> is the name of the robot.
|
|
<DT id="2">$rules->parse($robot_txt_url, $content, $fresh_until)<DD>
|
|
|
|
|
|
|
|
|
|
The <I>parse()</I> method takes as arguments the <FONT SIZE="-1">URL</FONT> that was used to
|
|
retrieve the <I>/robots.txt</I> file, and the contents of the file.
|
|
<DT id="3">$rules->allowed($uri)<DD>
|
|
|
|
|
|
|
|
|
|
Returns <FONT SIZE="-1">TRUE</FONT> if this robot is allowed to retrieve this <FONT SIZE="-1">URL.</FONT>
|
|
<DT id="4">$rules->agent([$name])<DD>
|
|
|
|
|
|
|
|
|
|
Get/set the agent name. <FONT SIZE="-1">NOTE:</FONT> Changing the agent name will clear the robots.txt
|
|
rules and expire times out of the cache.
|
|
</DL>
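<P>

As a small, hedged sketch of the <I>agent()</I> behaviour described above
(the agent names are illustrative, not part of the module's API):
<P>

<PRE>
 use WWW::RobotRules;

 my $rules = WWW::RobotRules->new('MOMspider/1.0');
 print $rules->agent, "\n";      # query the current agent name

 # Setting a new name clears all cached rules and expire times,
 # so every robots.txt file must be fetched and parsed again.
 $rules->agent('OtherBot/2.0');
</PRE>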
<A NAME="lbAE"> </A>
<H2>ROBOTS.TXT</H2>

The format and semantics of the ``/robots.txt'' file are as follows
(this is an edited abstract of
&lt;<A HREF="http://www.robotstxt.org/wc/norobots.html">http://www.robotstxt.org/wc/norobots.html</A>&gt;):
<P>

The file consists of one or more records separated by one or more
blank lines. Each record contains lines of the form
<P>

<PRE>
  &lt;field-name&gt;: &lt;value&gt;
</PRE>

<P>

The field name is case insensitive. Text after the '#' character on a
line is ignored during parsing. This is used for comments. The
following &lt;field-names&gt; can be used:
<DL COMPACT>
<DT id="5">User-Agent<DD>
|
|
|
|
|
|
The value of this field is the name of the robot the record is
|
|
describing access policy for. If more than one <I>User-Agent</I> field is
|
|
present the record describes an identical access policy for more than
|
|
one robot. At least one field needs to be present per record. If the
|
|
value is '*', the record describes the default access policy for any
|
|
robot that has not not matched any of the other records.
|
|
|
|
|
|
<P>
|
|
|
|
|
|
The <I>User-Agent</I> fields must occur before the <I>Disallow</I> fields. If a
|
|
record contains a <I>User-Agent</I> field after a <I>Disallow</I> field, that
|
|
constitutes a malformed record. This parser will assume that a blank
|
|
line should have been placed before that <I>User-Agent</I> field, and will
|
|
break the record into two. All the fields before the <I>User-Agent</I> field
|
|
will constitute a record, and the <I>User-Agent</I> field will be the first
|
|
field in a new record.
|
|
<DT id="6">Disallow<DD>
|
|
|
|
|
|
The value of this field specifies a partial <FONT SIZE="-1">URL</FONT> that is not to be
|
|
visited. This can be a full path, or a partial path; any <FONT SIZE="-1">URL</FONT> that
|
|
starts with this value will not be retrieved
|
|
</DL>
|
|
<P>
|
|
|
|
Unrecognized records are ignored.
|
|
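<P>

Because matching is by simple prefix, a value like ``/tmp'' also blocks
``/tmpdata''. A minimal sketch of this behaviour (the host and paths
are illustrative):
<P>

<PRE>
 use WWW::RobotRules;

 my $rules = WWW::RobotRules->new('ExampleBot/0.1');
 $rules->parse('http://www.example.com/robots.txt',
               "User-agent: *\nDisallow: /tmp\n");

 print $rules->allowed('http://www.example.com/index.html') ? "yes\n" : "no\n"; # yes
 print $rules->allowed('http://www.example.com/tmp/x.html') ? "yes\n" : "no\n"; # no
 print $rules->allowed('http://www.example.com/tmpdata')    ? "yes\n" : "no\n"; # no (prefix)
</PRE>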
<A NAME="lbAF"> </A>
|
|
<H2>ROBOTS.TXT EXAMPLES</H2>
|
|
|
|
|
|
|
|
The following example ``/robots.txt'' file specifies that no robots
|
|
should visit any <FONT SIZE="-1">URL</FONT> starting with ``/cyberworld/map/'' or ``/tmp/'':
|
|
<P>
|
|
|
|
|
|
|
|
<PRE>
|
|
User-agent: *
|
|
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
|
|
Disallow: /tmp/ # these will soon disappear
|
|
|
|
</PRE>
|
|
|
|
|
|
<P>

This example ``/robots.txt'' file specifies that no robots should visit
any <FONT SIZE="-1">URL</FONT> starting with ``/cyberworld/map/'', except the robot called
``cybermapper'':
<P>

<PRE>
  User-agent: *
  Disallow: /cyberworld/map/ # This is an infinite virtual URL space

  # Cybermapper knows where to go.
  User-agent: cybermapper
  Disallow:
</PRE>
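<P>

A hedged sketch of checking these rules for both kinds of robot (the
host name is illustrative):
<P>

<PRE>
 use WWW::RobotRules;

 my $txt = "User-agent: *\nDisallow: /cyberworld/map/\n\n" .
           "# Cybermapper knows where to go.\nUser-agent: cybermapper\nDisallow:\n";

 for my $agent ('cybermapper/1.0', 'SomeOtherBot/1.0') {
     my $rules = WWW::RobotRules->new($agent);
     $rules->parse('http://cyber.example.org/robots.txt', $txt);
     printf "%s: %s\n", $agent,
         $rules->allowed('http://cyber.example.org/cyberworld/map/x')
             ? 'allowed' : 'denied';
 }
 # cybermapper/1.0: allowed  (its own record has an empty Disallow)
 # SomeOtherBot/1.0: denied  (falls under the '*' record)
</PRE>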
<P>

This example indicates that no robots should visit this site further:
<P>

<PRE>
  # go away
  User-agent: *
  Disallow: /
</PRE>
<P>

This is an example of a malformed robots.txt file:
<P>

<PRE>
  # robots.txt for ancientcastle.example.com
  # I've locked myself away.
  User-agent: *
  Disallow: /
  # The castle is your home now, so you can go anywhere you like.
  User-agent: Belle
  Disallow: /west-wing/ # except the west wing!
  # It's good to be the Prince...
  User-agent: Beast
  Disallow:
</PRE>

<P>

This file is missing the required blank lines between records.
However, the intention is clear.
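<P>

To make the recovery concrete, here is a minimal sketch that feeds a
condensed version of this file to the parser (the expected output
assumes the record-splitting described under <I>User-Agent</I> above):
<P>

<PRE>
 use WWW::RobotRules;

 # Blank lines between records deliberately omitted, as in the file above.
 my $txt = join "\n",
     'User-agent: *',
     'Disallow: /',
     'User-agent: Belle',
     'Disallow: /west-wing/ # except the west wing!',
     '';

 my $rules = WWW::RobotRules->new('Belle/1.0');
 $rules->parse('http://ancientcastle.example.com/robots.txt', $txt);

 # The parser splits the record before the second User-agent line,
 # so Belle is only kept out of the west wing.
 print $rules->allowed('http://ancientcastle.example.com/library/')
     ? "library: allowed\n" : "library: denied\n";
 print $rules->allowed('http://ancientcastle.example.com/west-wing/door')
     ? "west-wing: allowed\n" : "west-wing: denied\n";
</PRE>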
<A NAME="lbAG"> </A>
|
|
<H2>SEE ALSO</H2>
|
|
|
|
|
|
|
|
LWP::RobotUA, WWW::RobotRules::AnyDBM_File
|
|
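<P>

For rules that persist between runs, a hedged sketch using the
companion modules (the cache file name is illustrative; see
WWW::RobotRules::AnyDBM_File for the authoritative interface):
<P>

<PRE>
 use WWW::RobotRules::AnyDBM_File;
 use LWP::RobotUA;

 # Keep parsed rules in a DBM file so they survive between runs.
 my $rules = WWW::RobotRules::AnyDBM_File->new('MOMspider/1.0', 'rules.db');
 my $ua    = LWP::RobotUA->new('MOMspider/1.0', 'me@example.com', $rules);
</PRE>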
<A NAME="lbAH"> </A>
|
|
<H2>COPYRIGHT</H2>
|
|
|
|
|
|
|
|
|
|
|
|
<PRE>
|
|
Copyright 1995-2009, Gisle Aas
|
|
Copyright 1995, Martijn Koster
|
|
|
|
</PRE>
|
|
|
|
|
|
<P>
|
|
|
|
This library is free software; you can redistribute it and/or
|
|
modify it under the same terms as Perl itself.
|
|
<P>
|
|
|
|
<HR>
<A NAME="index"> </A><H2>Index</H2>
<DL>
<DT id="7"><A HREF="#lbAB">NAME</A><DD>
<DT id="8"><A HREF="#lbAC">SYNOPSIS</A><DD>
<DT id="9"><A HREF="#lbAD">DESCRIPTION</A><DD>
<DT id="10"><A HREF="#lbAE">ROBOTS.TXT</A><DD>
<DT id="11"><A HREF="#lbAF">ROBOTS.TXT EXAMPLES</A><DD>
<DT id="12"><A HREF="#lbAG">SEE ALSO</A><DD>
<DT id="13"><A HREF="#lbAH">COPYRIGHT</A><DD>
</DL>
</BODY>
</HTML>