
WN Utility Programs


The main utility program used by WN is wndex, which produces the index.cache files from index files. Its use is described in detail in the chapter on creating your data hierarchy. In this chapter we consider some other utilities, mostly perl scripts, which are useful in maintaining your server.

13.1 Digest

Digest is a perl script which can be found in the bin directory of the distribution. This program is designed to work with the range feature of the WN server and with list searches. It produces a list of anchors or links to sections of a structured plain text document like an address list or a mail file.

Here is how it works. The digest utility is executed with three (or more) arguments. The first two arguments are regular expressions. The first regular expression should match the section separator of the structured file, and the second should match the beginning of the line to be used as the section title. (More about this below.) The next argument is the name of a structured file, like a mail file, news digest, or address list. Instead of a single structured file, several files can be listed, and digest will process their concatenation.

Now more about the regular expressions: Suppose our structured file is a mail file in its usual format with a number of messages. The first regular expression should match just the lines which are the beginning of each section (in this case each message). For a mail file a good choice would be '^From ' which matches the word "From" followed by a space at the beginning of a line.

The second regular expression matches the start of the line which you would like to be the title of the section. It is convenient to have the link text be everything after the occurrence of the matching pattern for this regular expression. So for the mail file we would choose '^Subject:' for this regular expression. The program will then produce a list of links, one for each message, with the anchor text taken from the contents of the message's Subject line (minus the word "Subject:"). Each link, when accessed, will produce a plain text document containing just that mail message.

So if our mail file is named foo, we should execute the command

digest "^From " "^Subject:" foo
Note the quotation marks which are needed to get the space after From. It produces a file named foo.index.html which consists primarily of an unordered list. Each item in the list is an anchor referring to a line range in foo -- the ranges being delimited by lines which match the first regular expression argument. In this case that means each range will start with a line beginning with "From " which is the marker in a mail file designating the start of a new message. The anchor label for each range is taken from the first line in the range which contains a match for the second regular expression and, in fact, as mentioned above, it will consist of everything on that line after the matched regular expression.
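
For concreteness, here is a small mail file of the sort digest handles; the addresses, dates, and subjects are invented for illustration:

From alice@example.com Mon Mar  4 09:15:02 1996
Subject: Meeting agenda

The agenda for Monday's meeting is attached.

From bob@example.com Tue Mar  5 14:20:11 1996
Subject: Budget figures

Here are the revised numbers.

Running the command above on this file would produce a foo.index.html whose list contains two anchors, labeled "Meeting agenda" and "Budget figures", each referring to the line range in foo which holds the corresponding message.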

The first line of each range or section is a line which matches the first regular expression, and the next matching line will begin the next section. Normally the search for a match for the anchor title regular expression begins with this first line. However, it is sometimes useful to skip this first line in the search for a title match. This can be done by starting the second regular expression with the character '$'. For example, the command

digest ^$ $^ foo
is a common one. It says to divide foo into sections (line ranges) which are separated by blank lines (the regular expression ^$ matches a blank line). To obtain an anchor title for each section the blank line is skipped (since the second regular expression starts with $) and then everything on the next line is taken as the title (since ^ matches the beginning of the next line). The regular expressions of this example would be useful, for example, for an address list foo which consisted of multiline records separated by blank lines with an individual's name on the first line of each record. The digest utility would then produce a foo.index.html file with an unordered list of anchors, one for each individual in the list. Selecting an anchor would present the record for that individual. Using a list search for this file would allow a form user to enter a name or regular expression and obtain a list of anchors for matching items.
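
For example, an address list suitable for this command might look like the following (the names and numbers are invented for illustration):

Alice Carroll
12 Looking Glass Lane
555-0142

Bob Dodgson
34 Wonderland Way
555-0187

Here digest would produce a foo.index.html containing two anchors labeled "Alice Carroll" and "Bob Dodgson", each presenting that individual's record when selected.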

The digest command can have any number of files listed after the regular expressions, and it will produce a single file whose name is the name of the first file with ".index.html" appended. This file will contain a list of links to all the sections of all the files given on the command line.

When digest writes the index file (e.g., foo.index.html), it adds two HTML comments to mark the start and end of the lines containing links to the records in your structured document. The markers look like this, where VERSION is the current version of digest:

<!-- Range list generated by digest/VERSION -->
<!-- End of range list generated by digest/VERSION -->

The first time digest writes an index file, it writes a default leader and trailer before and after the link lines. If digest finds an existing index file when it runs, it uses the information preceding the first marker and following the second marker as the leader and trailer for the new index file. This means you can run digest to create the initial index file, then edit the beginning and/or end of the file to modify the leader and trailer. Subsequent invocations of digest will retain your modifications each time the index file is recreated.
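
Schematically, then, a generated foo.index.html has the following shape. (This is only a sketch: the default leader and trailer digest actually writes, and the exact form of the anchors, may differ.)

...leader: HTML you are free to edit...
<!-- Range list generated by digest/VERSION -->
<ul>
<li> ...anchor for the first range...
<li> ...anchor for the second range...
</ul>
<!-- End of range list generated by digest/VERSION -->
...trailer: HTML you are free to edit...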

If you add the -b argument when you use digest (i.e., run the command "digest -b regexp1 regexp2 foo"), then it will produce a file foo.index.html which uses byte ranges rather than the default line ranges. This functions the same way, except that the server will log the number of bytes actually sent when a request is served (the server won't bother to count the bytes in a line range request).

There are fancier tools than digest for displaying mail archives, but this utility has great flexibility for dealing with a wide variety of structured files.

13.2 PNUTS

PNUTS (pronounced "peanuts") is an acronym for previous, next, up, top, search. It is a perl script which takes as argument the name of a file describing the hierarchical structure of a group of HTML files constituting a single virtual document. The pnuts program then searches these files for lines which begin with the string
<!-- pnuts -->
which it replaces with this string followed by a sequence of anchors like

[previous] [next] [up] [top] [search] [index]

with links to the relevant files in the virtual document. Actually it replaces this line with a single line starting with <!-- pnuts -->, followed by the anchors. That way the next time it is run, say after inserting a new chapter in your document, the "pnuts" line will be replaced by a new one with the appropriate links.

The pnuts program is run with a command like

pnuts -s dosearch.html -i docindex.html foo.pnuts

The argument -s dosearch.html is optional and supplies a URL for the [search] anchor to be substituted. Thus if just "dosearch.html" is used, this will be an anchor linking to a relative URL. Instead you could use a full URL like "http://hostname/dir/file". If there is no -s argument then there will be no search item in the list of items inserted by pnuts. The optional argument -i docindex.html is similar to the -s option except it provides the URL (relative or absolute) which should be anchored to [index]. This URL typically points to an HTML document created with indexmaker.

The file foo.pnuts contains the information by which pnuts knows which files to process and what the order of those files should be. It consists of a list of files relative to the current directory, one per line, in the order which should be reflected in the [next] and [previous] links. If a file is hierarchically one level lower than the previous file, this should be indicated by preceding its name with one more tab character than the preceding file. Here is an example:

top.html
second.html
<tab>firstsub.html
<tab><tab>subsub.html
<tab>secondsub.html
third.html

If this list is supplied to pnuts it will insert anchors into all these files wherever <!-- pnuts --> occurs. All those named [top] will point to the file top.html. In firstsub.html and secondsub.html the [up] link will point to second.html. The [previous] and [next] links will reflect the order top.html, second.html, firstsub.html, subsub.html, secondsub.html, third.html.
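
As an illustration, after pnuts runs on this example, the <!-- pnuts --> line in firstsub.html would be rewritten as a single line along these lines (the exact markup pnuts emits may differ slightly; the search and index targets assume the -s and -i options from the command above):

<!-- pnuts --> [<a href="second.html">previous</a>] [<a href="subsub.html">next</a>] [<a href="second.html">up</a>] [<a href="top.html">top</a>] [<a href="dosearch.html">search</a>] [<a href="docindex.html">index</a>]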

13.3 Indexmaker

This is a perl script whose function is to produce an index (in the usual sense, not the WN sense) for a virtual document consisting of a number of HTML files in a single directory. The index to this guide is a good example of how an index produced by indexmaker works. The indexmaker program is run with a command like

indexmaker -d path -t "Index Title" -o outputfile words

Here the -d, -t, and -o arguments are optional. The -t option supplies the title for the HTML document produced; if no -t argument is given then "Index" is used as the title. The -o option provides a name for the output HTML file, the default being docindex.html. The -d option should be the directory containing the files being indexed. It should either begin with a '/' and be relative to the WN root directory, or not begin with a '/' and be relative to the directory which will contain the docindex.html file. If there is no -d option then the docindex.html file must reside in the same directory as the files being indexed. If this is done, it is a good idea to add an "Attribute=nosearch" to the docindex.html record in the index file for the directory. Otherwise docindex.html will index itself in addition to the other files in the directory.
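
For instance, if the indexed files live in the directory /guide under the WN root while docindex.html will live elsewhere, an invocation might look like this (the title and path are invented for illustration):

indexmaker -d /guide -t "Guide Index" -o docindex.html words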

The final argument to indexmaker is the file words. It is a list of words or phrases, in alphabetical order, one per line, which you wish to appear in the index. One way to produce it is to use UNIX utilities to produce a list of all words in the files, then run sort -dfu on it and remove unsuitable words from the list.
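
For example, on most UNIX systems a rough first draft of the words file can be made with a pipeline like the following; this is only a sketch, and you will still want to prune unsuitable words by hand:

cat *.html | tr -cs 'A-Za-z' '\n' | sort -dfu > words

Here tr squeezes every run of non-letters into a newline, leaving one word per line, and sort -dfu sorts in dictionary order, folds upper and lower case together, and removes duplicates.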

The indexmaker program produces a long list of anchors, one for each word in the words file. Each word is linked to a context search for itself.

13.4 Uncache

Uncache is a perl script which reverses the action of wndex. It will convert an index.cache file to an index file. It reads from its standard input and writes to its standard output.

Thus when invoked with

uncache <index.cache >index
it will create a file named "index" (overwriting any other file of that name). This file may not be identical to the original index file used to create index.cache, but when wndex is run on this new index it should produce an index.cache identical to the one used as input for uncache.
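
As a quick consistency check you can perform the round trip yourself. This sketch assumes that wndex, run with no arguments, processes the index file in the current directory:

cp index.cache index.cache.orig
uncache <index.cache >index
wndex
cmp index.cache index.cache.orig

If cmp reports no difference, the recovered index file captures everything the cache contained.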

13.5 V2C

The perl script v2c converts logfiles produced by the server in the "verbose format" to files in the common log format handled by most server stats utilities. It can also extract the entries for each IP address of a "multi-homed" server which uses different data roots for different IP addresses.

Usage: v2c [-v] [-i IP#] <verboselog >commonlog

By default this script reads from standard input a WN logfile produced in the verbose format and writes a non-verbose one in the "common log format" to standard output. With the "-i IP-address" option it writes only those entries from the interface with the specified IP address. For example, if you have listed three IP addresses and corresponding data roots in the file wn/vhost.h, then "v2c -i 123.1.2.3 <logfile >log2" will create log2, the file of log entries for the interface with IP address 123.1.2.3. Adding the "-v" option gives the verbose form of log entries for this interface.
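
So, for example, a multi-homed server with three interfaces could split its combined verbose log into three common-format logs like this (the IP addresses are invented):

v2c -i 123.1.2.3 <logfile >log1
v2c -i 123.1.2.4 <logfile >log2
v2c -i 123.1.2.5 <logfile >log3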


John Franks <john@math.nwu.edu>