Archiving issues frequently asked questions
Contents
- 1. General Information
- 1.1 What is a Host, a Domain and an Address?
- 1.2 How do I see the Source File?
- 1.3 How do I make an External Link?
- 2. Applications
- 2.1 What are Cascading Style Sheets?
- 2.2 Why do my Fonts look different?
- 2.3 What is Javascript?
- 2.4 Where are my Pop-Up files?
- 3. Missing Files
- 3.1 How do I fix a Broken Image?
- 4. Multimedia Files
- 4.1 How do I archive Media Files?
- 4.2 How do I embed FLV files?
- 4.3 Real Media Files
- 4.4 How do I save PDF files?
- 5. Using Gather Filters
- 5.1 How to Create Gather Filters
- 5.2 Using Filters on Difficult to Gather Sites
- 5.3 How to Create Gather Filters by Using the Rule Drop-down Menu
- 5.4 Some Questions and Answers
- 5.4.1 How do I limit gathering to only the chosen pages, when PANDAS has gathered the whole site?
- 5.4.2 Why are the Filters not working?
- 6. How do I make a link open up in a new browser window?
1. General Information
1.1 What is a Host, a Domain and an Address?
You will find options under the gather filters and the settings that refer to Host, Domain and Address.
If we take this URL as an example: http://pandora.nla.gov.au/manual/pandas/index.html
The HOST is the computer connected to the Internet (each host has an IP, Internet Protocol, address) that is hosting the files. In the URL above "pandora" is the host computer and, like all computers connected to the Internet, it has an IP number (e.g. an IP address looks like this: 192.102.239.46); "pandora" is just a name given to the computer because IP numbers are not memorable. So the HOST is the name of the specific machine referred to in the URL, and it is the first thing after the http://.
The DOMAIN is the group of computers, each with its own IP address, on a shared network. The National Library, for example, obviously has more than one computer (host) serving web pages to the Internet. Computers (hosts) on the same domain share a common part of their address (the domain name), so a domain can have a number of host computers that are networked. The shared domain in the example URL is "nla.gov.au".
The ADDRESS is in fact the URL. Every file on the Internet has an address. However, in HTTrack terms the ADDRESS is the URL without the file name, i.e. http://pandora.nla.gov.au/manual/pandas/. So the ADDRESS includes all the files under this URL, i.e. index.html, general.html, searching.html etc. This ties in with the HTTrack default setting for directory travel, "can go down". The default in PANDAS/HTTrack is to "stay on the same address" and, for directory travel, "can go down", so all files will be gathered from the starting ADDRESS and all its sub-directories.
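As a rough illustration using the example URL above:
http://pandora.nla.gov.au/manual/pandas/index.html
- Host: pandora (the machine name; its full host name is pandora.nla.gov.au)
- Domain: nla.gov.au
- Address: http://pandora.nla.gov.au/manual/pandas/
- File: index.html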
1.2 How do I see the source file?
The source is the coding behind each web document, which dictates how and what to display. To see the source file, go to the View option on your browser (IE & Netscape) and select Source; a new window will open in Notepad. If you want to edit a gathered file, you will first need to send it to your local drive before editing. You can also use Notepad to view the contents of other types of files, such as .js and .rm.
1.3 How do I make an External Link?
This is how to make an active external link display the PANDORA standard message for external links: "This file has not been archived".
The post-gather processing run by the gatherer should produce the PANDORA standard message for external links in the gathered files. However, every so often we find active external links in gathered files. This often happens with large sites, where the download and the post-gather processing take a long time. There is also a bug that causes post-gather processing to fail to complete when the gatherer is restarted in the middle of post-gather processing.
To fix the link, you need to:
- Locate the html file in WebDav in which the active external link occurs. In this example it is the file 20thCentury.html. Copy it to your desktop.
- Use an html editor, e.g. WordPad or Notepad, to open the html file saved on your desktop.
- Use Edit and Find to locate the mark-up of the active external link to the publisher's URL.
- Add ../external.html?link= before the publisher's URL. Always start with one ../ for testing purposes. Looking at how other links are referenced in the page may also help. An example of a full URL with the added code looks like this:
<a href="../external.html?link=http://www.theaustralian.news.com.au/story.html">
- Test whether the link works by copying the file from your desktop back to the appropriate folder in WebDav and refreshing the screen in your browser. (Always put the edited file back where it came from.)
- If it doesn't work, study the link carefully and check whether there are missing or extra directories in the broken link.
- You might need to manipulate the mark-up again, adding or removing ../ segments, a few times before it works.
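For example (the directory depth here is hypothetical), if the edited page sits two directory levels below the top of the archived instance, the link would need two ../ segments:
<a href="../../external.html?link=http://www.theaustralian.news.com.au/story.html">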
2. Applications
2.1 What are Cascading Style Sheets?
Cascading Style Sheets (CSS) are a mechanism for adding style (e.g. fonts, colors, spacing) to Web documents. To find out if a website you are archiving uses CSS, look for mark-up such as: <link rel="stylesheet" href="style/default.css" type="text/css"> in the source code.
2.2 Why do my Fonts look different?
If your gather appears to have a different font from the original, this is probably because you have either (a) not gathered the style sheet, or (b) the style sheet is not referenced properly.
To fix the problem: open the source document using Notepad or WordPad and check whether a style sheet is referenced. It will be a reference to a file with the extension ".css".
If you find such a reference in your webpage, check whether you have the file in your gather. If you don't, you need to gather it and add it to the gather (as in the FAQ for missing files). If you do have the file, then it must not be referenced correctly and you will have to move the reference up or down the hierarchy (by adding or subtracting "../") until it works.
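For example (the paths here are hypothetical), if the gathered page references the style sheet as:
<link rel="stylesheet" href="style/default.css" type="text/css">
but the archived copy of default.css actually sits one level higher than the page, changing the reference to:
<link rel="stylesheet" href="../style/default.css" type="text/css">
should restore the fonts.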
2.3 What is Javascript?
A problem similar to the above, concerning stylesheets not functioning properly, can also occur with websites using Javascript. This will occur if you have not gathered or correctly referenced a Javascript source file. These references appear in your source code somewhat like this: <script src="../utils/common.js">
Inputting the file can be done the same way as shown in Font Irregularities. However, if the function does not work after inputting the file, you may also need to edit the .js file in Notepad/WordPad to make it work in the archived version. Note there may be more than one .js file for a website.
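As a hypothetical example of the kind of edit sometimes needed, a gathered .js file might write out an absolute URL:
document.write('<img src="http://www.example.com.au/images/banner.gif">');
which would need to be changed to a relative path that resolves within the archived instance, e.g.:
document.write('<img src="../images/banner.gif">');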
2.4 Where are my Pop-Up files?
A Pop-up is a small new window which opens from a webpage. Gatherers can sometimes miss these files, making your gather incomplete. To fix the problem you need to find out the name of the missing file; with Pop-ups this can be difficult. You can find them, however, by looking in the Source Code.
The file will probably not have the words 'pop-up' in its name, so you will need to look for other references. Pop-ups can be made using HTML, CSS or Javascript and so can appear quite different. Some examples are:
- <.A HREF="view-source:http://www.filename.com.au/..">
- <.WINDOW.ONLOAD ="new" FUNCTION("SHOW('FILENAME');")..>
- <.A HREF="filename" ONCLICK="CPW_showWindow(1);...">
3. Missing Files
3.1 How do I fix a Broken Image?
In order for an image link to display an image file correctly, firstly, you need to have the file downloaded properly and saved in the right folder; secondly, you need to have the link correctly coded or referenced in the source file.
In order to know whether PANDAS has gathered the image file, you need to know the name of the file.
To determine the name of the image file, move the cursor to the broken image, right click the mouse, select Properties and the link of the broken image appears in a pop up window, for example http://pandas-s-prod.nla.gov.au/menu/MenuBar3.gif
The image file is MenuBar3.gif in this example.
To determine the correct location of the image file, go to the publisher site to locate the page where the broken link occurs. Sometimes it is a matter of removing the first part of the archived URL that displays the archived instance information for the page where the broken image occurs.
For example: http://pandas-s-prod.nla.gov.au/view/14179/20031007/www.apra.gov.au/Statistics/Australian-Banking-Statistics.html
Move the cursor to the image you want, right click the mouse, select Properties and the link of the image appears in a pop up window.
In this case it is at: http://www.apra.gov.au/menu/MenuBar3.gif
To see whether you have the image file downloaded properly and saved in the right folder, use the WebDav URL to create a WebDav instance in your Network Place. Follow the directory structure/level and locate the folder where the image file should be kept.
The image file, MenuBar3.gif should be found in the folder menu in WebDav.
If the file is not found, download it from the publisher's site onto your desktop. Copy and paste the image file to the menu folder in WebDav.
If the file is there, but it still does not work, it is most probably because the link is incorrectly referenced. In this case, in order for the link to work, the link should look like this:
http://pandas-s-prod.nla.gov.au/view/14179/20031007/www.apra.gov.au/menu/MenuBar3.gif and not: http://pandas-s-prod.nla.gov.au/menu/MenuBar3.gif
To fix the link, you need to:
- Locate the html file in which the broken link occurs and copy it to your desktop. In this case it is the file Australian-Banking-Statistics.html.
- Use an html editor, e.g. WordPad or Notepad, to open the html file saved on your desktop.
- Use Edit and Find to locate the link with the image file, MenuBar3.gif.
- Manipulate the link so that it appears as http://pandas-s-prod.nla.gov.au/view/14179/20031007/www.apra.gov.au/menu/MenuBar3.gif
- This may involve adding or removing ../../ etc. in the link (see the example after this list).
- Test whether the link works by copying the file from your desktop back to the appropriate folder in WebDav (in this case, the folder named Statistics) and refreshing the screen in your browser. (Always put the edited file back where it came from.)
- If it doesn't work, study the link carefully and check whether there are missing or extra directories in the broken link.
- You might need to manipulate the mark-up again, adding or removing ../../ in the link, a few times before it works.
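In the html file itself the fix usually means adjusting the relative reference; an illustrative before/after (not taken from the actual file):
<img src="/menu/MenuBar3.gif"> (a root-relative reference, which resolves to pandas-s-prod.nla.gov.au/menu/... in the archive and breaks)
<img src="../menu/MenuBar3.gif"> (a relative reference that resolves within the archived www.apra.gov.au instance)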
4. Multimedia Files
4.1 How do I archive Media Files?
Introduction
This document provides a general outline of procedures for archiving media files in PANDORA. In particular, detailed instructions are given for archiving and referencing Real Media files, as these files present a unique archiving problem for us.
Specifying domains for gathering media files in PANDAS gather settings
Prior to setting gather configurations for a resource, staff should check whether any associated media files are located on different domains (servers) from the selected resource. If so, these additional domains must be specified as extra URLs in the PANDAS gather settings so these files can be downloaded. If the additional domains are missed, the media files can be separately downloaded and added to the gathered instance via WebDav. (But note: if media files are located on secure servers they won't be gatherable or downloadable. The publisher must be contacted to supply these files to PANDORA.)
Problems encountered with archiving media files
Problems can occur when gathering media files that are not located on the same domain(s) as a selected electronic resource. If care is not taken when assessing the site for gathering, it is easy to miss specifying all the relevant domains that house associated media files. Locating these files and downloading them separately can fix this. However, if media files are located on secure servers they are not gatherable, and the publisher will have to be contacted to supply these files to PANDORA.
Generally when any media file is identified during quality assurance processing in PANDAS, users should routinely go into WebDav for the particular instance to ensure that the media file has been gathered. Mostly this can be verified by checking that the downloaded file is greater than 1KB. If doubt remains as to whether the downloaded file is the actual media file then opening the file directly from within the appropriate plug-in is necessary.
Checking gathered media files in WebDav
Routinely, whenever a media file is identified during the quality assurance process staff should view the instance in WebDav to establish if the media file has been gathered. Usually if the downloaded file is greater than 1KB then it will be the actual media file. If doubt remains, staff should open the file directly using its associated software plug-in (i.e. open the plug-in from your PC desktop and locate and play the file from there - do not open the file by clicking on it within WebDav).
4.2 How do I embed FLV files?
- Download the FLV file using a download tool (see the Tools webpage for examples), or capture the file directly after playing it on the live site in your browser and then retrieving it from your browser's temporary internet files cache.
- Rename the file to something meaningful.
- Copy the FLV file to the required directory in the instance (the same directory as the page, if there is one link to one file, is probably the easiest).
- Add the downloaded flvplayer.swf to the instance (probably best to add it to the same directory to keep the links simple). The FLV player can be downloaded from http://www.jeroenwijering.com/?item=JW_FLV_Player (nb. unpack the zip file and locate the file mediaplayer.swf; renaming it to flvplayer.swf is helpful, though not necessary).
- Replace the script used to point to YouTube (or whichever video website you found the media file on) with the following code (note this is for files that are all in the same directory):
<embed src="flvplayer.swf" width="425" height="336" allowfullscreen="true" allowscriptaccess="always" flashvars="&file=filename.flv&height=336&width=425" />
- NB "flvplayer.swf" is the link to the FLV player software and the file=filename.flv bit is the media file. For an example see the GetUp! Video linked from http://pandora.nla.gov.au/tep/51827 Direct link is http://pandora.nla.gov.au/pan/51827/20080506-1136/www.getup.org.au/campaign/MakeThisAHit%26id=339.html
4.3 Real Media Files
Real Media files can create particular archiving problems for us because of the practice of using metafiles. An Internet page may appear to have a direct link to a Real Media file, but examination of the source code shows that the link is only to a metafile. In HTML source code a reference to a metafile mostly has a .ram extension (or occasionally .rpm). When checking a gathered resource it can appear that the media file has been gathered because the media file plays OK, but the gathered metafile is simply activating an externally located (i.e. not archived) media file. These media files may reside on an open-access web server using the standard http:// protocol, or on a secure streaming server using a protocol such as pnm://. The latter servers are inaccessible to both gathering robots and manual access.
Metafiles
A metafile is a text file that points to the actual media file we want to gather. These metafiles are normally 1KB in size. Publishers use metafile referencing to prevent unauthorized downloading of their media files and to permit streaming of files so users don't have to wait until the file downloads before it plays.
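For illustration only (the URLs are hypothetical), the contents of a .ram metafile are typically just a single line pointing at the real media file, for example:
pnm://media.publisher.example.com/audio/interview.rm
or, on an open access http server:
http://www.publisher.example.com/audio/interview.rm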
Gathering Scenarios
The rest of this document outlines strategies for determining whether a Real Media file has been gathered or not and if not, how to download it and reference it. Firstly, there are several gathering scenarios staff may encounter:
Scenario 1: The gathered ".ram" file is actually the media file
Occasionally the .ram file is in fact the actual media file. Sometimes web site creators code the media file with a .ram extension rather than .ra or .rm. If the .ram file is in fact the media file (remember the file size can show this) then you don't need to download anything else. Just test that it works in the archive as you would any other media file.
Scenario 2: Only the metafile has gathered and the media file (.ra, .rm) is accessible for downloading from a directory listing on the publisher's website
- Open the metafile using WordPad (available through Windows Accessories). The file will now display an absolute URL. This URL points to the location of the media file.
- By examining the URL's protocol (http:// or pnm://) you can decide whether to try downloading the media file. If the media file is located on an http server you should be able to download it (see next step). If however the URL has a protocol such as a "pnm:" you're unlikely to be able to access the server. In this case contact the publisher to negotiate delivery of the media files.
- Cut and paste the URL into your browser address field, deleting the file name so that the URL points to the directory level and press enter. You will get a directory listing which includes the file or files you want to download (if not, see Scenario 3). Note: when downloading media files use Microsoft IE as this browser handles the task best.
- Right click on the link to the desired file. From the menu choose "Save Target As" and save the file directly into the appropriate directory in the archive.
Scenario 3: Metafile has been gathered and you cannot download the media file as you are denied a directory listing at the publisher's site
If you can gain access to the remote server but are denied access to a directory listing you will have to create your own directory listing in order to save the file:
- Create a basic HTML page using HotMetal or WordPad.
- Add the URL(s) for the file(s) you want to download to it (a sketch of such a page is shown after this list).
- Save the page to your PC using a suitably generic name as you can use this page over and over again (e.g. name might be 'Real Media files with denied directory listings.html').
- Open the page in your browser, right click on the link you have created, choose "Save Target As" from the displayed menu and save the file in the appropriate directory in the archive.
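A minimal sketch of such a page (the media file URL is hypothetical):
<html>
<body>
<a href="http://media.publisher.example.com/audio/interview.rm">interview.rm</a>
</body>
</html>
Open this page in your browser and use "Save Target As" on the link as described above.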
Referencing a Gathered or Downloaded Media File
Once a media file has been added to the archive (whether by downloading or by receiving it from the publisher) you need to link to the archived version either by referencing the media file directly from the referring HTML page or by editing the metafile. The choice of method depends on how the media file was acquired. If the file was freely available on the publisher's website or any other http server then the first method should be used (this will be uncommon though as the publisher probably didn't use a metafile in the first place). If however, the publisher had to supply us with the media files or they were gathered from a pnm server then the second method (Editing the metafile) should be used.
Referencing the media file directly from the referring HTML page
- Copy the gathered HTML file that references the media file and move it to your PC.
- Open the copied file in HotMetal or WordPad and edit the link pointing to the .ram file to point to the downloaded media file (the .ra or .rm file). This should always be a relative and not an absolute link so that when the archived version is moved or migrated the referencing will still work (e.g. ../../audio/irish_trad.ram will become ../../audio/irish_trad.ra)
- Save the file and move it back to the archive.
Editing the Metafile (.ram)
This method should always be used if the media files were located on a pnm server.
- Copy the gathered metafile and move it to your PC.
- Open the metafile in WordPad and replace the existing absolute (full) URL with the absolute URL pointing to the location of the archived media file (this location should be the National Library's Real Media Server - see next section for further details about moving files to this server and the required syntax for referencing files stored here).
- Save the file and move it back to the archive
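As a quick sketch (the file names are hypothetical; see the next section for the required NLA server syntax), an existing metafile line such as:
pnm://media.publisher.example.com/audio/irish_trad.rm
would be replaced with the absolute URL of the archived copy on the NLA Real Media server, e.g.:
pnm://www.nla.gov.au/pandora/33961/irish_trad.rm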
National Library's Real Media Server
The National Library has its own secure Real Media server where Real Media files can be securely stored and streamed (Note that other media file types cannot be stored here). Files obtained from a publisher's secure server should always be stored here. PANDORA partners will have to contact the Digital Archiving Section when they wish to move Real Media files to this server as they are unable to access it directly.
Procedures for moving media files to NLA's secure Real Media (PNM) Server and referencing them in the metafile
- First a drive needs to be mapped to the NLA's Real Media server:
- In Windows Explorer select a drive from the list of available drives appearing under your name
- Select Tools from the Menu Bar then select Map Network Drive
- Choose a free drive (one not in use) from the dropdown list in the Drive field box
- In the Folder field type \\ntuxstor\rms\pandora
- Click on the box 'Reconnect at login' and click Finish button
- You will now be presented with a list of folders including folders for the years 1999, 2000 and 2001, and folders representing individual PI's. You will need to create and name a folder in which to store the media files for your title instance. The PI or a word representing the title can be used for the folder name.
- Move the Real Media files from wherever they have been saved to, to your newly created folder on the Real Media server.
- Now change the reference within the metafile (as before) to reflect the location of the archived media files. The link must be an absolute link as the NLA Real Media server is external to the archive. The protocol, domain and folder for this server currently is: pnm://www.nla.gov.au/pandora/. Here is an example of the new absolute URL in a metafile: pnm://www.nla.gov.au/pandora/33961/architects.rm would represent the file 'architects.rm' located in the folder 33961 on the NLA Real Media server.
4.4 How do I save PDF files?
How can a PDF file displayed in my browser be saved to my desktop if there is no File/Save command?
You need to create your own Web page as follows:
- Use a text-editing program like WordPad or Notepad to create a new text file called pdflinks.htm. Save it to a convenient location for future use.
- Add the following line of coding to the file (a completed example is shown after this list): <A HREF="Paste the URL here">Right-click here</A>
- Go to your browser and select the URL of the PDF file. Press Ctrl-C to copy the URL.
- Paste this address into your WordPad/Notepad file in place of the "Paste the URL here" text.
- Save the pdflinks.htm file and close Wordpad/Notepad.
- Go to Windows Explorer, locate the pdflinks.htm file and double-click its icon to open it in a browser window. The words 'Right-click here' will appear as a link, in blue with an underline.
- Right-click on the link and select 'Save Target As' from the menu that appears. Specify the location where you want to store the PDF file.
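A completed pdflinks.htm might contain just this line (the PDF URL here is hypothetical):
<A HREF="http://www.example.gov.au/reports/annual_report2003.pdf">Right-click here</A>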
If you got to the PDF file directly from a link on a Web page, then go back to that Web page and proceed from the last step (right-clicking the link and choosing 'Save Target As').
5. How do I use Gather Filters?
Please note when using filters:
- Only use filters when necessary, since they can have an impact on PANDAS performance.
- Take extra care when using filters to expand gathering; it is generally best to use them to limit gathering, i.e. to prevent unnecessary files from being archived.
5.1 How to Create Gather Filters
Filters are constructed using command syntax. The main options include:
- + standing for 'accept'
- - standing for 'refuse'
- * standing for 'all' (wild card)
- *[name] standing for all names
(Other options are available from the HTTrack Website Copier's Advanced Filters section: http://www.httrack.com/html/filters.html)
Filters have an order of importance - the last of the filters that you create has precedence over all previous ones.
You can construct filters yourself, or you can use the Rule drop-down menu provided in PANDAS.
To construct gather filters yourself, key in the filters you wish to use directly into the Filter List area, found in the Filters part of the Gather Details section of the PANDAS record.
Use the above command language with the URL/part of URL of the web sites that you want to allow/disallow. For example, if you wish to include the whole of the NLA home page, use:
+ (include)
www.nla.gov.au (NLA URL)
* (all)
to construct:
+www.nla.gov.au/* (everything after www.nla.gov.au will be included, that is, everything that is contained in the specified domain will be gathered).
If you wish to be more precise, you can use:
+www.nla.gov.au/*.* (all names and types of files that come off the domain will be gathered)
or
+www.nla.gov.au/pub/*.* (all names and types of files that come off the directory 'pub' will be gathered)
Filters can be very specific. For instance, if you wish to include only gif images available in a certain directory, you can specify this by using:
+www.nla.gov.au/images/*.gif (all files in the directory 'images' that have the extension 'gif' will be included)
However, if you wish to use a broader filter for the same example, you can construct:
+*/images/*.gif (similar to above, but links to all domains that have a directory 'images' will be accepted, not just the ones found in 'www.nla.gov.au')
If you wish to include the entire website, including all its shared domains (for example the NLA shared domains www.nla.gov.au, pandora.nla.gov.au and shop.nla.gov.au), you can construct the filter:
+*[name].nla.gov.au/* (all nla.gov.au shared domains will be included).
5.2 Using Filters on Difficult to Gather Sites
When dealing with difficult sites that gather too much information and/or more than one domain, very precise gathering can be ensured by disallowing the whole of the web first:
-*
Afterwards, what should be gathered is specified, e.g.:
+*/gov/*.html
+*/2001/*.gif
In the above example everything will be refused except html pages contained in the 'gov' directory and gif images contained in the '2001' directory.
If you wish to gather only a certain number of pages from the site, you can specify this by first disallowing the whole of the site, and then allowing only the pages you wish to gather. For instance:
-www.nla.gov.au/padi/*.*
+www.nla.gov.au/padi/about.html
In the above example only the page www.nla.gov.au/padi/about.html would be gathered; other html pages linked to the site, e.g. www.nla.gov.au/padi/search.html, www.nla.gov.au/padi/padifeedback.html, etc., will be refused.
5.3 How to Create Gather Filters by Using the Rule Drop-down Menu
There are some options for setting up filters available in the PANDAS Rule drop-down menu that you can use to help construct the filters you need. However, please note that this is a limited list, which for practical reasons cannot include all available options. For other options please refer to the section that explains constructing filters directly using the command syntax (5.1 How to Create Gather Filters).
Filters can be used to include or to exclude gathering of certain web sites/pages/images (please see the PANDAS Manual at http://pandora.nla.gov.au/manual/pandas/gather_rules.html). For this purpose two buttons in the Filters area of Gather Details are used, labelled '+Links' (include) and '-Links' (exclude).
Options available in Rule drop-down menu:
File names with extension
This option is for including/excluding specified types of files, such as image files with extension gif. If you key in 'gif' and choose '+Links' OR '-Links' you will get the following filter: +*.gif OR -*.gif (all files with extension 'gif' will be included/refused). Please note that this is a broad filter and that it will include/refuse gif files from all various linked domains.
You could construct a more specific filter yourself that better suits your gathering circumstances. For instance, if you are gathering an NLA site but are missing some images in the directory 'images', you could use the filter:
+www.nla.gov.au/images/*.gif (only gif files coming from www.nla.gov.au/images/ will be included)
Or, a bit broader:
+*/images/*.gif (only gif images linked to any site with a directory 'images' will be included)
File name containing
This option is used for including/excluding files that have specified words within their names. If you key in 'index' and choose '+Links' OR '-Links' the following filter will be constructed: +*/*index* OR -*/*index* (every link that contains 'index' in its file name will be accepted/refused, including /nlaindex.html, /indexofpages.html, etc). Again a broad filter, it will allow/disallow links from various domains that have 'index' mentioned in their file names.
You could construct a more specific filter (example: you are gathering an NLA site and wish to include some information from the 'pub' directory): +www.nla.gov.au/pub/*index*.html (only files coming from www.nla.gov.au/pub/ that contain 'index' in the name of the html files will be included).
This file name
This is an option that will include exact file names that you specify. If you key in 'index' and choose '+Links' OR '-Links' the filter will be constructed: +*/index OR -*/index (all linked files called index will be allowed/disallowed). Since this is also a broad filter, there is a possibility that files from various domains are gathered or refused.
A more specific filter that could be used in some circumstances is the 'This link' option, found a bit lower in the Rule drop-down menu.
Folder names containing
This is an option that would include links to web pages that have the specified letters/words anywhere within their directory names. If you key in 'images' and choose '+Links' OR '-Links' you will get the filter: +*/*images*/* OR -*/*images*/* (all linked web sites that contain 'images' anywhere in their directories will be allowed/disallowed, including /picturesandimages/, /preimagesafter/ etc). Also a broad filter, since it would allow/disallow gathering from various linked domains.
A more controlled filter could be set up (in the case when you are gathering a NLA site and wish to include information from certain directories that have a common part of the name) as: +www.nla.gov.au/*images*/* - only web pages coming from www.nla.gov.au domain containing word 'images' in their directories will be allowed.
This folder name
This is an option that would gather all linked sites that have the specified directory name in their URLs. If you key in 'images' and choose '+Links' OR '-Links' you will get the filter: +*/images/* OR -*/images/* (all linked web sites that have a directory called 'images' will be allowed/disallowed). This is also a broad filter that would accept/refuse gathering from various domains.
A more specific filter could be set up (when an NLA page is being gathered and data is missing from the 'images' directory) as:
+www.nla.gov.au/images/* (in the case that everything in the NLA 'images' directory is required)
or even more specifically:
+www.nla.gov.au/images/*.jpg (in the case when only jpg files from the 'images' directory are required).
Links on this domain
This option is used when you wish to gather various shared domains starting from a particular site such as www.nla.gov.au (examples of nla.gov.au shared domains are: pandora.nla.gov.au, shop.nla.gov.au, etc). If you key in nla.gov.au and choose '+Links' OR '-Links', you will get a filter: +*[name].nla.gov.au/* OR -*[name].nla.gov.au/* This will include/exclude all 'nla.gov.au' domains that are linked to the site, such as pandora.nla.gov.au
Links on domains containing
Similarly to above this option is used to allow/disallow all shared domains containing specified letters in their URLs. If you key in 'nla' and choose '+Links' OR '-Links' you will construct a filter: +*[name].*[name]nla*[name].*[name]/* OR -*[name].*[name]nla*[name].*[name]/* This is a very broad filter that would allow/disallow gathering of all linked sites that have 'nla' in their shared domains.
Links from this host
This option is used when you wish to include/refuse all links coming from a specified domain. For instance, if you are gathering a site and wish to include all documents linked to it coming from the National Library you would select this option, key in 'www.nla.gov.au' and choose '+Links' OR '-Links'. A filter would be constructed: +www.nla.gov.au/* OR -www.nla.gov.au/* All links to the www.nla.gov.au domain will be allowed/disallowed. This filter is broad, since it could include the gathering of the whole of the specified domain.
A more specific filter could be set as:
+www.nla.gov.au/collect/*.* (only files contained in the directory 'collect' will be gathered)
or
+www.nla.gov.au/*.gif (only gif files from the specified domain will be allowed).
Links containing
Similarly to above, this option is used when you wish to include/exclude all links from web sites that contain specified letters anywhere in their URLs. If you key in 'nla' and choose '+Links' OR '-Links' you will construct a filter: +*nla* OR -*nla* This is a very broad filter and it could allow inclusion of a lot of linked sites.
This link
This option is used when you wish to include a specific link to the site that you are gathering. If you key in 'www.nla.gov.au/index.html' and choose '+Links' OR '-Links' a filter will be constructed: +www.nla.gov.au/index.html OR -www.nla.gov.au/index.html. This is a specific filter which will gather (or refuse) only the specified web page.
Please note that the gatherer will take the web page specified by the filter as a starting point, and that it will gather pages linked to this page as well, if they are on the same domain and/or directory. In order to avoid this, you would have to add more filters to disallow unwanted information.
All links
Regardless of what you key in, if you choose '+Links' OR '-Links' the result will be: +* OR -* (include/refuse all). This is the broadest of filters; everything will be allowed/disallowed.
5.4 Some Questions and Answers
Q: How do I limit gathering to only the chosen pages, when PANDAS has gathered the whole site?
A: In order to gather a specific file and not the whole of the web site, you would first have to disallow gathering of the whole site by using the following filter (that is, specify what you do not want to be gathered):
-www.liswa.wa.gov.au/*.*
After that, specify what you do want by using filters that allow all the information you want gathered, for example:
Pages:
+www.liswa.wa.gov.au/pbk*.html
+www.liswa.wa.gov.au/pba*.*
Images:
+www.liswa.wa.gov.au/*.gif
+www.liswa.wa.gov.au/*.jpg
Q: Why are the Filters not working?
A: There might be a number of reasons as to why this is happening. Here are some more common problems:
- Filters have an order of importance: the filter listed last takes precedence. If you have keyed in +www.nla.gov.au/images/*.* -*/images/* then nothing in the directory 'images' will be gathered, because the filter -*/images/* takes priority.
- In some cases, if the filters are not working, you could try being more specific. For instance, if a gif image is not being included, instead of using a broad +*.gif you could construct the filter +www.nla.gov.au/icons/2001/*.gif (this filter should pick up all gif files in the www.nla.gov.au/icons/2001/ directory).
- If you have changed the filters they might not take effect on the same day - try setting the gather for the next day.
6. How do I make a link open up in a new browser window?
Add target="_blank" to the HREF link tag. For example: <a href="http://www.pageresource.com/linkus.htm" target="_blank">