Default Gather Settings  
The gather module in PANDAS uses a standard interface developed specifically for PANDAS. It is designed to allow gathering software, such as offline browsers and web spiders, to be used, upgraded or changed without requiring the user to become familiar with different proprietary software packages and versions. Currently the primary gathering software used is HTTrack (see the HTTrack Website Copier home page for more information).
NB: The gather filters and settings in PANDAS are based on those used by the HTTrack Website Copier. Documentation on the HTTrack home page provides more detailed descriptions of the filters and settings, as well as other useful FAQs. See the HTTrack documentation page.
			 Basic Settings 
The default settings included in the gather module are designed to suit most gathering tasks. They include the following defaults, found on the Settings tab of the Gather Details screen:
			  
- Directory Travel - default setting is: Can go down. This means that the gatherer will gather all files in the starting directory and any sub-directories of that directory.
  Other Directory Travel options you can select are:
  - Stay on the same directory - use this if you do not wish to gather sub-directories of the starting directory.
  - Can go up - use this if you don't wish to gather sub-directories of the starting directory but you do want to capture files in directories above the starting directory.
  - Can go both up and down - you could use this option if you want to capture a whole web site but your starting directory is not at the top level. It allows you to start your gather at any level in the web site, and the gatherer will capture both sub-directories of the starting directory and any directories above it.
 
   
- Host Travel - default setting is: Stay on the same address. This means that the gatherer will remain on the same host (IP address/host name) as the starting directory. For example, if you start on http://www.olympics.com.au/index.html the gatherer will only gather from the http://www.olympics.com.au/ host, and not, for example, from http://shop.olympics.com.au/
  Other Host Travel options you can select are:
  - Stay on the same domain - this option allows the gatherer to gather from various hosts on the same domain. In the URL http://www.olympics.com.au/ the domain is olympics.com.au. You could use this option if, for example, you did wish to gather files on both the http://www.olympics.com.au/ host and the http://shop.olympics.com.au/ host.
  - Stay on the same location - this option restricts the gatherer to a major portion of the Internet, i.e. the domain type such as .com, .gov or .edu. It is unlikely you will have a use for this option when gathering specific titles.
  - Go everywhere - as the label suggests, this means that the gatherer will not be restricted to any address, domain or location. Without other options such as limiting the maximum depth of the archiving, this option will allow you to gather the whole Internet, or at least every place that there is a link to. It is strongly recommended that you do not use this option!
 
   
- Max. Depth - i.e. Maximum Depth: default setting is: 50. This means that the gatherer can go up to 50 sub-directory levels below the start directory. You will probably never need to increase this, although you can, as most web sites don't go beyond 20 levels. A default limit is set to avoid symbolic links causing an infinite recursion of directory levels, which is not good for the gatherer or for the site being gathered.
  Other options - you may find it useful to lower the Max. Depth when gathering sites where you want to gather some documents linked directly (i.e. one level) from your starting page, but doing so would also mean gathering the whole, much larger site, and you are unable to limit the gather effectively using URL filters. By limiting the gather depth to 2 or 3 levels (you may have to experiment to get it right for the particular site) you will prevent the gatherer from gathering the whole site, although you may still end up with some pages you don't want.
 
   
- Get Near Files - the default setting is: on (ticked). Having this option switched on means that non-HTML files like images, sound files, PDF and zip files that are linked directly from the gathered pages will also be gathered, irrespective of their domain or location. This means that you do not have to create URL filters to gather these types of files, which are often located in directories that are not necessarily sub-directories of your starting directory. If you do not wish to gather PDF or zip files, for example, you will need to switch this option off or create URL filters to deny those types of files. This option is not foolproof and you may find it necessary at times to also use URL filters to allow images in directories or locations not allowed by the default settings. (A sketch of how these defaults correspond to HTTrack's command-line options follows this list.)
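For reference, the defaults described above correspond roughly to HTTrack's own command-line options. The following is an illustrative sketch only: PANDAS builds the actual HTTrack configuration for you, and the output directory ./olympics is hypothetical.

 httrack "http://www.olympics.com.au/index.html" -O ./olympics -D -a -r50 -n

Here -D is "can only go down into sub-directories" (Directory Travel), -a is "stay on the same address" (Host Travel), -r50 sets the maximum mirror depth to 50 (Max. Depth) and -n gathers non-HTML files "near" a gathered page (Get Near Files).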
			    
			 Advanced Settings 
The Advanced Settings tab is not visible unless you tick the Advanced settings box on the Basic tab. You should not have any need to change these options; however, they are made available should you need to view them. One default to note is the Follow Robots rule, which is set to Never. This means that the gatherer is set to ignore robot exclusion rules. Many sites have robots.txt files to discourage web spiders, offline browsers and other robots from doing such things as downloading their sites. However, because the PANDORA Archive only includes sites for which we have obtained permission to archive from the publisher, our gatherer is set to ignore these robot exclusion rules.
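In HTTrack's command-line terms this corresponds to its robots option being set to "never". Again, this is a sketch only, with a hypothetical output directory; PANDAS applies the setting for you.

 httrack "http://www.olympics.com.au/" -O ./olympics -s0

The -s0 option tells HTTrack never to follow robots.txt or meta robots rules.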
			 Gather Filters 
			 Gather filters are scan rules you can apply to the gatherer to
				allow it to exclude or accept whole directories of files, certain types of
				files or individual files. If the archived version of the title you are
				checking is missing files or if it has gathered more than you expected,
				adjusting the gather filters may help. It is a good idea to spend time before
				you archive the site having a thorough look at the directory structure of the
				site to make sure you set the required gather filters at the outset.
				
  Gather filters are an important, powerful and versatile tool. The
				information included here is not intended to be comprehensive but rather to
				provide some general tips.  
			  To set up gather filters go to the Filters tab on the
				Edit Gather Settings screen.  
			  
- The first step is to select the type of filter you wish to apply from the Rule drop-down box. You can select whether you want the rule to apply just to file names, to file extensions, to folder (i.e. directory) names, to names on the same domain or host, or to names found in any link (i.e. 'links containing').
 
   
				- The next step is to enter the keyword for your rule. The
				  'keyword' only needs to be a string of characters (and/or numbers). It does not
				  need to be a word as such.
 
   
- Then you need to select either the includes links button (with the plus '+' sign) or the excludes links button (with the minus '-' sign).
 
   
- The gather filter will appear in the Filter List box. (NB: You can edit the filters in the gather Filter List box, and you can also enter filters directly into the Filter List if you know the syntax; however, it is advisable to construct the filters by selecting them from the drop-down box as described above.)
 
  
				- You can add as many gather filters as required. Repeat steps 1
				  to 3 above for each filter.
  
			   
			 Gather filter syntax
			 
- Selecting a filter described as "containing", e.g. File name containing, means that the string of characters you specify can appear anywhere in the part of the URL described by the filter. For example, the gather filter File name containing with the keyword entered as "copyright", selected as an "includes" filter, will appear in the Filter List as:
 +*/copyright*
  In this string, the + sign indicates that it is an includes filter and the * characters are wildcards. In this example the gatherer will pick up files with names containing the string "copyright" in any directory. It will also pick up files called "copyrights.htm", "copyrightstatement.html", "copyright.asp", etc.
 
  
- Selecting a filter such as This file name, This folder name or This link means that only the characters you specify can appear in that part of the URL described by the filter. For example, the filter This folder name with the keyword "images", selected as an includes filter, will appear in the Filter List as:
 +*/images/*
  This means that only files in a folder called "images" will be gathered, not files in folders called "image". However, the folder called "images" can appear anywhere in a URL, as indicated by the wildcard symbols before and after the directory.
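Excludes filters use the same syntax but begin with a minus sign. For example, a hypothetical pair of filters that allows an "images" folder on the shop host while refusing zip files might appear in the Filter List as follows (the host name and file extension are illustrative only; adjust them to suit the title you are gathering):

 +*shop.olympics.com.au/images/*
 -*.zip

The + filter allows files in the images folder on that host, and the - filter refuses any link ending in ".zip".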
			   
			  