11.2 Standard Module urllib
This module provides a high-level interface for fetching data across
the World-Wide Web. In particular, the urlopen() function
is similar to the built-in function open(), but accepts
Uniform Resource Locators (URLs) instead of filenames. Some
restrictions apply -- it can only open URLs for reading, and no seek
operations are available.
It defines the following public functions:
- urlopen(url)
Open a network object denoted by a URL for reading. If the URL does
not have a scheme identifier, or if it has "file:" as its scheme
identifier, this opens a local file; otherwise it opens a socket to a
server somewhere on the network. If the connection cannot be made, or
if the server returns an error code, the IOError exception
is raised. If all went well, a file-like object is returned. This
supports the following methods: read(), readline(),
readlines(), fileno(), close() and
info().
Except for the last one, these methods have the same interface as for
file objects -- see section 2.1 in this
manual. (It is not a built-in file object, however, so it can't be
used at those few places where a true built-in file object is
required.)
The info() method returns an instance of the class
mimetools.Message containing the headers received from the
server, if the protocol uses such headers (currently the only
supported protocol that uses this is HTTP). See the description of
the mimetools module.
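In later Python versions this interface lives in urllib.request; a minimal
sketch of the behaviour described above, using a "file:" URL so that a local
file is opened (the temporary file and its contents are illustrative only):

```python
import os
import tempfile
import urllib.request  # where urlopen() lives in later Python versions
from pathlib import Path

# Create a local file to fetch; the path and contents are not meaningful.
fd, path = tempfile.mkstemp(suffix='.txt')
os.write(fd, b'hello, web')
os.close(fd)

# A "file:" URL opens the local file rather than a network socket.
f = urllib.request.urlopen(Path(path).as_uri())
try:
    data = f.read()      # file-like interface: read(), readline(), ...
    headers = f.info()   # headers describing the object, when available
finally:
    f.close()
os.remove(path)
```

The object supports the read methods described above but is not a true
built-in file object.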
- urlretrieve(url)
Copy a network object denoted by a URL to a local file, if necessary.
If the URL points to a local file, or a valid cached copy of the
object exists, the object is not copied. Return a tuple
(filename, headers) where filename is the
local file name under which the object can be found, and headers
is either None (for a local object) or whatever the
info() method of the object returned by urlopen()
returned (for a remote object, possibly cached). Exceptions are the
same as for urlopen().
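A short sketch of retrieving an object into a named local file, again using
the later urllib.request home of this function and a "file:" URL (the
temporary paths are illustrative assumptions, and passing an explicit
destination filename is a later extension of the interface):

```python
import os
import tempfile
import urllib.request  # where urlretrieve() lives in later Python versions
from pathlib import Path

# A source file standing in for the remote object.
fd, src = tempfile.mkstemp()
os.write(fd, b'payload')
os.close(fd)

# Copy the object denoted by the URL into a destination file;
# a (filename, headers) tuple is returned.
dest = src + '.copy'
filename, headers = urllib.request.urlretrieve(Path(src).as_uri(), dest)

with open(filename, 'rb') as f:
    copied = f.read()
os.remove(src)
os.remove(dest)
```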
- urlcleanup()
Clear the cache that may have been built up by previous calls to
urlretrieve().
- quote(string[, addsafe])
Replace special characters in string using the "%xx" escape.
Letters, digits, and the characters "_,.-" are never quoted.
The optional addsafe parameter specifies additional characters
that should not be quoted -- its default value is '/'.
Example: quote('/~connolly/') yields '/%7econnolly/'.
- quote_plus(string[, addsafe])
Like quote(), but also replaces spaces by plus signs, as
required for quoting HTML form values.
- unquote(string)
Replace "%xx" escapes by their single-character equivalent.
Example: unquote('/%7Econnolly/') yields '/~connolly/'.
- unquote_plus(string)
Like unquote(), but also replaces plus signs by spaces, as
required for unquoting HTML form values.
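Taken together, the four quoting functions invert one another. A sketch using
urllib.parse, where these helpers live in later Python versions (note that
there the optional parameter is named safe rather than addsafe, and the set of
never-quoted characters has grown slightly):

```python
import urllib.parse  # home of the quoting helpers in later Python versions

escaped = urllib.parse.quote('spam & eggs/')      # '/' is safe by default
# -> 'spam%20%26%20eggs/'
plussed = urllib.parse.quote_plus('spam & eggs')  # spaces become '+'
# -> 'spam+%26+eggs'
name = urllib.parse.unquote('/%7Econnolly/')      # '%7E' is the escape for '~'
# -> '/~connolly/'
back = urllib.parse.unquote_plus(plussed)         # undo quote_plus()
# -> 'spam & eggs'
```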
Restrictions:
- Currently, only the following protocols are supported: HTTP (versions
0.9 and 1.0), Gopher (but not Gopher+), FTP, and local files.
- The caching feature of urlretrieve() has been disabled
until I find the time to hack proper processing of Expiration time
headers.
- There should be a function to query whether a particular URL is in
the cache.
- For backward compatibility, if a URL appears to point to a local file
but the file can't be opened, the URL is re-interpreted using the FTP
protocol. This can sometimes cause confusing error messages.
- The urlopen() and urlretrieve() functions can
cause arbitrarily long delays while waiting for a network connection
to be set up. This means that it is difficult to build an interactive
web client using these functions without using threads.
- The data returned by urlopen() or urlretrieve()
is the raw data returned by the server. This may be binary data
(e.g. an image), plain text, or (for example) HTML. The HTTP protocol
provides type information in the reply header, which can be inspected
by looking at the content-type header. For the Gopher protocol,
type information is encoded in the URL; there is currently no easy way
to extract it. If the returned data is HTML, you can use the module
htmllib to parse it.
- Although the urllib module contains (undocumented) routines
to parse and unparse URL strings, the recommended interface for URL
manipulation is in module urlparse.