PCC Standing Committee on Automation (SCA)
2nd Task Group on Journals in Aggregator Databases
FINAL REPORT
October 23, 2001
Task Group Members: Jeanne Baker (U. Maryland); Kyle Banerjee
(Oregon State); Matthew Beacom (Yale); Karen Calhoun (SCA liaison); Ruth Haas
(Harvard); Jean Hirons (LC liaison); John Riemer (UCLA) - Chair
Introduction
The PCC SCA Task Group on Journals in Aggregator Databases successfully assisted
two more vendors in creating sets of cataloging records for the contents of
their aggregations. A few changes to the record-creation specifications are
needed, based on experience in using the records and on changes in the MARC
format; only a few further changes are necessary to adapt those serial specifications
to monographic resources. Yale University successfully tested record maintenance
by loading into an OPAC an updated set from the first vendor to have created a
set of records. This is a good time to transition from ad hoc task groups to
mainstream plans for addressing the bibliographic control needs for aggregated
resources, and the Task Group is presenting ideas on how its work can carry
on. A sampling of the various means currently being used for providing access
to the contents of aggregated resources is presented. Members of the Task Group
continued to raise awareness in the library community by making a number of
presentations on aggregators at conferences and publishing articles.
Progress Summary, 2000-2001
The library community continues to be strongly interested in a useful, cost-effective
and timely means for providing records to identify full-text electronic journal
titles and holdings in aggregator databases. The PCC Standing Committee on
Automation's Task Group on Journals in Aggregator Databases continued to pursue
vendor creation of sets of cataloging records.
In spring 2000, the group surveyed the institutions known to have downloaded
the EBSCO set of test records. Libraries that have successfully loaded the
set in their OPACs include California State University, Northridge; University
of Wisconsin--Eau Claire; Yale; and eight different online catalogs within
the University System of Maryland, including the global database of all USM
libraries. These libraries represent a variety of OPAC systems. Northridge
initially consolidated the data onto print-version records (the single-record technique),
while the others loaded the data as separate records across the board.
As evidence that these records make a difference, consider what Jina Wakimoto
reported in a poster session at this past June's North American Serials Interest
Group meeting: in the five-month period following the load, Cal State Northridge's
use of Academic Search Elite doubled from approximately 5,000 to more than 10,000
searches (statistics at http://library.csun.edu/jwakimoto/EBSCOstat.html).
The Task Group worked through the summer of 2000 with John Law and Connie
McGuire of Bell & Howell, and this culminated with the fall release of the long-awaited
aggregator records for all the ProQuest titles: http://www.infolearning.com/hp/Features/Marc/.
Early in 2001, the Gale Group offered on a test site both ASCII and MARC records
for the titles in its InfoTrac Web database products, at URL: http://www.galegroup.com:9966/catalog/marc.htm.
As had first occurred two years earlier, Task Group members met with
representatives of LexisNexis during the American Library Association
Annual Meeting in June 2001 to discuss the need for, and strategies to obtain,
a set of records for the titles in Academic Universe. LexisNexis is now seriously considering
a couple of record creation options.
As the content of aggregated resources grows, vendors are including full-text
document types that are not journal titles, and customers are asking where
the records are for their OPACs. Recently Task Group members reviewed a set
of test records from EBSCO for monographic resources and made some suggestions
for changes.
Throughout its work, the Task Group has given the highest priority to those
aggregations with the largest numbers of titles, which are also those with
the most volatile contents. Collaboration with the providers was crucial, since
they were uniquely positioned to keep up with added and dropped titles and
the latest volume coverage available. The more stable publisher- or project-based
aggregations seldom drop titles and lend themselves better to cooperative cataloging
projects within the bibliographic utilities. The electronic offerings of OCLC
WorldCat Collection Sets, http://www.stats.oclc.org/cgi-bin/db2www/wcs/wcs_cols.d2w/Electronic [Link
no longer works as of July 5, 2005],
appear to represent this type of aggregation exclusively.
Recommended Data Elements for Machine-Derived Serial Records
(These fields are taken from a cataloging record for another version of the
title.)
FIXED LENGTH FIELDS
All Leader and 006/007/008 bytes as appropriate
CONTROL FIELDS--0XX
001 Control number
003 Control number identifier
022 International Standard Serial Number
035 System control number(s)
VARIABLE FIELDS--1XX-9XX
1XX Main entry
240 Uniform title
245 Title statement (insert $h)
246 Varying form of title
250 Edition statement
260 Publication, etc. (Imprint)
310 Current publication frequency
362 Dates of pub., vol. designation
4XX Series statement
5XX Notes
6XX Subject added entries
700-730 Name/title added entries
773 Host item entry
780/785 Preceding/Succeeding entry
8XX Series added entries
856 Electronic location and access ($3, $u only)
Recommended Data Elements for Machine-Generated Serial
Records
(For use when cataloging records are unavailable to consult)
FIXED LENGTH FIELDS
All Leader and 006/007/008 bytes as appropriate
< Leader default:
byte 05 n, c, or d - depending on whether the record is new, corrected, or to
be deleted
byte 06 a - for language material
byte 07 s - for serial
byte 17 z - for not applicable
byte 18 u - for unknown conformance to AACR2 rules
byte 20 4 - for length of the length-of-field portion of entry map
byte 21 5 - for length of the starting-character-position portion of entry
map
byte 22 0 - for length of the implementation-defined portion
byte 23 0 - for undefined entry map character position
(all other bytes would default to "blank")>
< 006 default:
006/00 m - for computer file
006/09 d - for document
(all other bytes would default to "blank")>
< 007 default:
007/00 c - for computer file as a category of material
007/01 r - for "remote" as a specific material designation
007/02 'blank' - byte no longer defined
007/03 a - for one color (black-and-white counts as one color)
007/04 n - for not applicable in the case of remote resources
007/05 u - for unknown sound content in the resource>
< 008 default:
bytes 00-05 yymmdd - for date record created
byte 06 c or d or u - for continuing or dead, according to serial's publication
status; status would be u for unknown if the
holdings statement does not reach into the most recent 3 years
bytes 07-10 - year the serial began publication (not first year of full-text
availability) copy from 362 field
bytes 11-14 - year the serial ceased publication, or "9999" if open-ended
bytes 15-17 xx_ - for unknown place of publication
byte 18 u - for unknown frequency
byte 19 u - for unknown regularity
byte 20 'blank' - for agency assigning the ISSN
byte 21 p - for periodical as type of serial
byte 22 'blank' - for form of original item
byte 23 s - for electronic form of item
byte 24 'blank' - for nature of original work
bytes 25-27 'blanks' - for nature of contents
byte 28 u - for unknown if a government publication
byte 29 zero - for not a conference publication
bytes 30-32 'blanks' - for undefined
byte 33 'blank' - for original alphabet/script code
byte 34 zero - for successive entry record
bytes 35-37 language code - from the MARC Code List for Languages (usually the
first three letters of the English name of the language; exception: 'jpn' for Japanese)
byte 38 'blank' - for a (non)modified record
byte 39 d - for non-LC source of cataloging record>
CONTROL FIELDS--0XX
001 Control number
<vendor's control number, if any>
003 Control number identifier
<MARC organization code for the vendor>
022 International Standard Serial Number
035 System control number(s) (MARC org. code is parenthesized at the beginning
of the field)
<vendor file ID (a portion of this field) would serve as the key to record removal
if the subscription to the aggregator database is canceled>
VARIABLE FIELDS--1XX-9XX
1XX Main entry
<This field would probably be used much less often when records are not being
derived. If the vendor's brief listing of titles
gave the body name and generic title, separated by a period, this field could
be used.>
245 00 Title statement (insert $h)
<Omit initial articles for sake of titles indexing correctly, and set both
indicators to zero.>
250 Edition statement
<Might be applicable if online version only equates to one audience or geographic
edition>
260 Publication, etc. (Imprint)
<reflects publisher of original version; publication date ($c) can be omitted>
310 Current publication frequency
362 Dates of pub., vol. designation
<reflects facts of original publication, not range of volumes covered in aggregator>
4XX Series statement
<if applicable>
500 General note
<Standard wording: Record generated from vendor title list.>
506 Restrictions on access note
<if applicable: "Access limited to licensed institutions.">
516 Type of computer file or data note
<"Text (electronic journal)">
530 Additional physical form available note
<if applicable: "Online version of print publication.">
538 System details note
<"Mode of access: Internet.">
653 Index term-Uncontrolled
<would be used at vendor's discretion; probably would reflect a broad subject
categorization>
720 Added entry-Uncontrolled name
<1st indicator should default to "2" for non-personal name>
773 Host item entry
<Sample: 773 0_ $t Title of aggregation $d Place of publication : Publisher
of aggregation, date- $x ISSN of aggregation as a whole>
780/785 Preceding/Succeeding entry
<when known to vendor>
856 Electronic location and access ($3, $u, $z only)
<$3 to represent volumes covered within the aggregation.
$z at end of field to contain user instructions. Examples:
$z Available via ProQuest Direct. Search for this journal by title.
$z Consult "field by field instructions" to qualify a search by publication.
[for LexisNexis]
$z Search "Publications by title" in Dow Jones Publication Library.>
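The fixed-field defaults listed above lend themselves to mechanical assembly. As a rough illustration only (not part of the specification), a routine along the following lines could build the 40-byte 008 for a machine-generated serial record; the function name and parameters are hypothetical, and the begin/end years would come from the vendor's title data (362 field):

```python
from datetime import date

def serial_008_default(begin_year="1990", end_year="9999",
                       status="u", language="eng"):
    """Assemble a default 40-byte 008 field for a machine-generated
    serial record, following the byte values recommended above.
    Bytes not explicitly set default to blank."""
    b = [" "] * 40
    b[0:6] = list(date.today().strftime("%y%m%d"))  # 00-05 date record created
    b[6] = status                  # 06 publication status: c, d, or u
    b[7:11] = list(begin_year)     # 07-10 year publication began (from 362)
    b[11:15] = list(end_year)      # 11-14 year ceased, or 9999 if open-ended
    b[15:18] = list("xx ")         # 15-17 unknown place of publication
    b[18] = "u"                    # 18 unknown frequency
    b[19] = "u"                    # 19 unknown regularity
    b[21] = "p"                    # 21 periodical as type of serial
    b[23] = "s"                    # 23 electronic form of item
    b[28] = "u"                    # 28 government publication unknown
    b[29] = "0"                    # 29 not a conference publication
    b[34] = "0"                    # 34 successive entry record
    b[35:38] = list(language)      # 35-37 language code
    b[39] = "d"                    # 39 non-LC source of cataloging
    return "".join(b)
```

The same skeleton would serve for the monographic defaults below, with the book-specific bytes (06-14, 18-34) substituted.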
Proposed Data Elements for Machine-Derived Monographic
Records
(These fields are taken from a cataloging record for another version of the
title.)
FIXED LENGTH FIELDS
All Leader and 006/007/008 bytes as appropriate
CONTROL FIELDS--0XX
001 Control number
003 Control number identifier
020 International Standard Book Number
035 System control number(s)
VARIABLE FIELDS--1XX-9XX
1XX Main entry
240 Uniform title
245 Title statement (insert $h)
246 Varying form of title
250 Edition statement
260 Publication, etc. (Imprint)
4XX Series statement
5XX Notes
6XX Subject added entries
700-730 Name/title added entries
773 Host item entry
8XX Series added entries
856 Electronic location and access ($3, $u only)
Proposed Data Elements for Machine-Generated Monographic
Records
(For use when cataloging records are unavailable to consult)
FIXED LENGTH FIELDS
All Leader and 006/007/008 bytes as appropriate
< Leader default:
byte 05 n, c, or d - depending on whether the record is new, corrected, or to
be deleted
byte 06 a - for language material
byte 07 m - for monograph
byte 17 z - for not applicable
byte 18 u - for unknown conformance to AACR2 rules
byte 20 4 - for length of the length-of-field portion of entry map
byte 21 5 - for length of the starting-character-position portion of entry
map
byte 22 0 - for length of the implementation-defined portion
byte 23 0 - for undefined entry map character position
(all other bytes would default to "blank")>
< 006 default:
006/00 m - for computer file
006/09 d - for document
(all other bytes would default to "blank")>
< 007 default:
007/00 c - for computer file as a category of material
007/01 r - for "remote" as a specific material designation
007/02 'blank' - byte no longer defined
007/03 a - for one color (black-and-white counts as one color)
007/04 n - for not applicable in the case of remote resources
007/05 u - for unknown sound content in the resource>
< 008 default:
bytes 00-05 yymmdd - for date record created
byte 06 s or m - s for a single date of publication; m for multiple dates, when
publication spans more than one year (including open-ended publications)
bytes 07-10 - year of publication (not first year of full-text availability)
bytes 11-14 - four 'blanks' if originally published in a single year, otherwise
the last year of a multi-year publication or "9999" if open-ended
bytes 15-17 xx_ - for unknown place of publication
byte 18 a - if known to be illustrated, otherwise blank for unknown
bytes 19-21 'blank' - for no attempt to code presence of other illustrative
matter
byte 22 'blank' - for form of original item
byte 23 s - for the electronic form of current item
bytes 24-27 'blanks' - for nature of contents
byte 28 u - for unknown if a government publication
byte 29 zero - for not a conference publication
byte 30 zero - for not a festschrift
byte 31 zero - for no index within the publication
byte 32 'blank' - for undefined
byte 33 zero - for nonfiction
byte 34 'blank' - for no biographical content
bytes 35-37 language code - from the MARC Code List for Languages (usually the
first three letters of the English name of the language; exception: 'jpn' for Japanese)
byte 38 'blank' - for a (non)modified record
byte 39 d - for non-LC source of cataloging record>
CONTROL FIELDS--0XX
001 Control number
<vendor's control number, if any>
003 Control number identifier
<MARC organization code for the vendor>
020 International Standard Book Number
035 System control number(s) (MARC org. code is parenthesized at beginning
of the field)
<vendor file ID (a portion of this field) would serve as the key to record removal
if the subscription to the aggregator database is canceled>
VARIABLE FIELDS--1XX-9XX
1XX Main entry
<This field would probably be used much less often when records are not being
derived. If the vendor's brief listing of titles
gave the body name and generic title, separated by a period, this field could
be used.>
245 00 Title statement (insert $h)
<Omit initial articles for sake of titles indexing correctly, and set both
indicators to zero.>
250 Edition statement
260 Publication, etc. (Imprint)
<reflects publisher of original version>
4XX Series statement
<if applicable>
500 General note
<Standard wording: Record generated from vendor title list.>
506 Restrictions on access note
<if applicable: "Access limited to licensed institutions.">
516 Type of computer file or data note
<"Text (electronic [choose appropriate term from a list])">
530 Additional physical form available note
<if applicable: "Online version of print publication.">
538 System details note
<"Mode of access: Internet.">
653 Index term-Uncontrolled
<would be used at vendor's discretion; probably would reflect a broad subject
categorization>
720 Added entry-Uncontrolled name
<1st indicator should default to "2" for non-personal name>
773 Host item entry
<Sample: 773 0_ $t Title of aggregation $d Place of publication : Publisher
of aggregation, date- $x ISSN of aggregation as a whole>
856 Electronic location and access ($3, $u, $z only)
<$3 to represent volumes covered within the aggregation.
$z at end of field to contain user instructions. Examples:
$z Available via ProQuest Direct. Search for this publication by title.
$z Consult "field by field instructions" to qualify a search by publication.
[for LexisNexis]
$z Search "Publications by title" in Dow Jones Publication Library.>
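Both the serial and monographic specifications conclude with an 856 field restricted to subfields $3, $u, and $z. A vendor assembling such a field in MARCMaker-style mnemonic text might use something like the following sketch; the function name is hypothetical, and the indicator values "40" (HTTP, resource itself) and sample data are illustrative assumptions, not prescribed by the specification:

```python
def make_856(coverage, url, instruction=None):
    """Build an 856 field in MARCMaker-style mnemonic text:
    $3 coverage within the aggregation, $u URL, and an optional
    trailing $z with user instructions."""
    parts = ["=856  40$3" + coverage, "$u" + url]
    if instruction:
        parts.append("$z" + instruction)   # $z goes at the end of the field
    return "".join(parts)
```

For example, `make_856("v.1 (1990)-", "http://example.com/", "Search for this journal by title.")` would yield a single mnemonic line with the coverage, URL, and instruction in the prescribed order (the URL here is a placeholder).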
Comments on Record Content
A few changes have been made to the specifications presented in the first
Task Group's final report. Owing to changes in the MARC format, the value "s" (electronic
publication) should now appear in 008, byte 23, and the text of the 245 subfield
$h should change to "[electronic resource]."
In applying the machine-derived technique, it is critically important that
the vendor's title/ISSN list be completely accurate. If that list is used to
match against a file of cataloging records for deriving aggregator records,
the resulting set will omit any relevant titles missing from the initial
vendor list. When serials undergo title changes,
care must be taken to see that all the titles (records) are included. Records
for titles no longer being published should be closed out. Electronic coverage
statements can implausibly exceed the scope of the journal life dates of the
original publication if title changes and cessations are not processed.
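As a sketch of the kind of quality control implied here (hypothetical, and not part of the Task Group's specifications), a record loader could flag 856 subfield $3 coverage statements whose years fall outside the publication dates coded in 008 bytes 07-14:

```python
import re

def coverage_plausible(field_008, coverage_note):
    """Rough sanity check: do all years cited in an 856 $3 coverage
    statement fall within the publication dates coded in 008/07-14?"""
    begin = int(field_008[7:11])
    end_raw = field_008[11:15]
    end = 9999 if end_raw == "9999" or not end_raw.strip() else int(end_raw)
    # Pull four-digit years (1800s-2000s) out of the free-text coverage note.
    years = [int(y) for y in re.findall(r"\b(1[89]\d\d|20\d\d)\b", coverage_note)]
    return all(begin <= y <= end for y in years)
```

A set whose coverage statements fail such a check would be a candidate for the title-change and cessation processing described above.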
For the sake of users, the 856 subfield $3 needs to dependably contain the
electronic coverage of the publication and that coverage statement should be
correct; this is particularly true when the coverage differs among multiple
sources and staff or users need to know readily where to turn. Deficiencies
in these areas can jeopardize the acceptance of the cataloging records in the
library community.
For a clearer distinction in the serial records between the electronic coverage
offered in the aggregation and the original publication coverage, the data
in the 362 field could be preceded by "Original publication:" and
the first indicator changed to "1."
Test of Loading and Maintenance Activities
Yale University successfully experimented with updating loads in its NOTIS
7.1.0 OPAC for two EBSCO products, Academic Search Elite and Business Source
Premier. Yale had first loaded 2200 records in May 2000. Between the two
products there was an overlap of a few hundred titles, and Yale had duplicate
records then.
The initial URLs led to pages in the product Web sites that in turn led users
to the individual title. Coverage information was at first inadequate in some
cases as "embargoed" recent issues were not mentioned in the records.
Improvements in EBSCO's service in later updating loads have corrected those
problems.
Yale loaded the records into a high range of database record numbers in its
catalog so that it could eliminate the records easily. Yale thought it necessary
to design an exit strategy for removing the aggregator records, as selectors
might change their minds about a leased product in ways in which they
do not with purchased products, e.g., paper books and serials. The load into
a separate range of record numbers also facilitates the updating process. In
short, Yale loaded the set of records it first received and thereafter deleted
the loaded records and replaced them with the latest file from EBSCO. Yale
has been updating these records on roughly a monthly basis. The process is
crude but effective. It is a relatively simple process to delete the old file
and add the new. This simplicity is its virtue. If it were more complicated,
Yale feels it might never have done it, and the e-journals would not be represented
in the catalog at all.
While the use of separate aggregator records facilitates maintenance, it can
cause problems in the OPAC. Yale relies on the indexes in its NOTIS OPAC to
display the various "editions" of its journal titles to users. This
reliance is somewhat problematic as the records from EBSCO do not have uniform
titles and the OPAC does not adequately alert users to formats in the index
display. To compensate for this, Yale adds some information to the EBSCO records
as they enter the catalog, e.g., the addition of a genre heading (655_7 Electronic
journals).
Quoting from Matthew Beacom, "Effective problem solving has been one
of the hallmarks of the EBSCO service. Overall, EBSCO has been highly responsive
to Yale and the PCC SCA Task Group's needs and interests. In my opinion, EBSCO's
service of providing record sets for online e-journal aggregations is exemplary.
I cite it repeatedly as a best practice that should be used as a benchmark
by libraries when contracting for similar services and by other vendors when
designing and delivering similar services. Yale's success is due to EBSCO's
interest in the venture and its responsiveness to our needs as a customer."
Libraries practicing the single-record technique for the more volatile aggregations
are reluctantly concluding it is an unwise approach in the face of maintenance
workloads. It is too difficult to quickly remove the data when subsequent updated
record sets arrive for loading.
As record sets become available from multiple sources and title coverage overlaps
among aggregators, libraries are likely to remain concerned about the prevalence
of duplicate records. Libraries may need to change where the deduplication
effort takes place in the workflow. A library could collect all the monthly
and weekly updates from all sources and dedup and consolidate within those
records locally, just prior to loading them into the OPAC. In this manner, the
proliferation of records for the same title would be held to a single record
representing all the e-versions.
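A local deduplication pass of this kind could be sketched as follows. The record structure shown is a deliberate simplification (hypothetical dicts rather than full MARC records), keyed on ISSN and carrying every 856 field forward onto the surviving record:

```python
def consolidate_by_issn(record_sets):
    """Merge aggregator record sets so that each ISSN yields a single
    record, accumulating the 856 fields (one per e-version).
    Each record is a dict: {'issn': ..., 'title': ..., '856': [...]}.
    The first record seen for an ISSN supplies the descriptive data."""
    merged = {}
    for record_set in record_sets:
        for rec in record_set:
            key = rec["issn"]
            if key in merged:
                merged[key]["856"].extend(rec["856"])  # keep every e-version
            else:
                merged[key] = {"issn": key, "title": rec["title"],
                               "856": list(rec["856"])}
    return list(merged.values())
```

In practice a library would also need a fallback match point (title, for records lacking ISSNs) and a way to drop 856 fields belonging to canceled aggregations, but the principle is the same.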
Mainstreaming into PCC Activities
The conclusion of this Task Group's work does not end the need for PCC involvement
in bibliographic control measures for aggregators. Activity on this front is
starting to become mainstream in the PCC. The action plan stemming from the Library
of Congress Bicentennial Conference on Bibliographic Control for the New Millennium:
Confronting the Challenge of Networked Resources and the Web specifically cites "record
supply and maintenance for aggregated resources" in point 4.1 <http://www.loc.gov/catdir/bibcontrol/draftplan.html>.
The CONSER and BIBCO Coordinators announced on September 12, 2001 that Becky
Culbertson (UCSD) and Kate Harcourt (Columbia) have volunteered to begin compiling
lists of aggregations that have records available.
Members of the Task Group recommend that the PCC website serve as a clearinghouse
for two kinds of information: (1) a listing of which record sets are available,
from which source, and on what terms. The sources could include vendors, bibliographic
utilities, and individual libraries. For each set, the names of one or two institutions
that have successfully loaded it into their catalogs might be included, for the
benefit of peers who want to ask questions. (2) "Best practice" information for
providers of aggregated resources who want the latest specifications for record
sets.
In instances where providers would like to call on the expertise of PCC
members for help in creating record sets, this need could be met by a Standing
Committee on Automation subgroup devoted to this purpose. This unit would continue
for as long as the need appears to exist.
The Array of Solutions in Use
This sampler of the variety of solutions others are utilizing to provide access
to the contents of aggregators includes reflections on the advantages and disadvantages
of the strategies and cites example institutions. The Task Group extends special
thanks to Jina Wakimoto of California State University, Northridge, for sharing
her notes from her 2001 North American Serials Interest Group presentation
on aggregators, http://library.csun.edu/jwakimoto/NASIG2001JW.ppt.
A) The A-Z title lists many libraries started with were easy to produce via
HTML coding, but they are relatively static and quickly outdated. These browsing
lists do not lend themselves well to searching. Example institution: University
of California, Irvine http://www.lib.uci.edu/rraz/compre.html [Link no longer
works as of July 5, 2005]
B) A variety of web-based database approaches have been taken.
B1) One approach to creating a stand-alone database is to download the source
data into an MS Access table, clean up the data by adding ISSNs and other
information, and create a web interface to it. The database can be searched
by title only, and there is no connection with the print resources
in the OPAC. Example institution: Virginia Commonwealth University http://www.library.vcu.edu/ejournals/
B2) Another approach is to populate the database with both print and electronic
resources. This single-search database for all formats of full-text serials
is still restricted to title access. Example institution: California State
University, Chico http://aphid.csuchico.edu/lso/search.asp
B3) Another approach utilizes a remote database, jake. This involves PHP programming
to remove mention of resources not licensed by the library, as well as
library-licensed resources not involving full text. The drawbacks of using
this database are that it only tells the user where to find full-text e-journals,
and print resources are not integrated. Patrons must rekey the e-serial title
search to access the article, and searching is only by title. The necessary
program is free, but the technological knowledge may be difficult to come by.
Example institution: Notre Dame http://www3.nd.edu/ndlibs/jake/search.php
B4) One means of obtaining a database with multiple access points beyond title is
to construct it from the contents of the OPAC. This labor-intensive strategy
requires flagging in the OPAC the records desired in the separate database
and inputting the scope information to the OPAC records. The subject access
in the database will typically be broader than the specificity offered in the
OPAC. Example institution: Los Alamos National Laboratory http://lib-www.lanl.gov/ejournals/ejournals.htm
C) Multiple means of integrating e-resources into the OPAC have been attempted.
C1) Local cataloging efforts can utilize the single- or the separate-record
technique. Results of the application of the single-record technique are
difficult to share at the consortial level. However, the California Digital
Library's Shared Cataloging is distributing print-version records with Internet
addresses included, http://tpot.ucsd.edu/Cataloging/HotsElectronic/SOPAG/cdlguid2.html [Link
no longer works as of July 5, 2005],
and recipients of the records are comfortable loading them whether or not
the library happens to subscribe to hard copy.
C2) Local cataloging efforts can also make use of the separate-record technique,
as University of Illinois at Chicago has done in creating records for Wilson
Select titles via OCLC.
C3) Another use of the separate-record technique is for an individual library
to create MARC records from a vendor's title listing through automated means.
The vendor list is transformed into a text file and converted to MARC records,
using a utility like MARCMaker, and the records loaded into the OPAC. Each library
must create and maintain the records itself. The record sets contain few access
points beyond title and are only as accurate as the vendor data they are based
on. Example institution: University of Tennessee, Knoxville http://pac.lib.utk.edu:8000/WebZ/initialize:sessionid=0:next=html/UTKhomeframe.html [Link
no longer works as of July 5, 2005]
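The conversion described in C3 can be sketched in outline. The following hypothetical routine turns a tab-delimited vendor list (title, ISSN, URL) into MARCMaker-style mnemonic records carrying the minimal fields recommended in the machine-generated specifications above, dropping initial articles so titles index correctly; the field selection and indicator values are illustrative:

```python
import csv
import io

def titles_to_mnemonic(tsv_text):
    """Convert a tab-delimited vendor list (title, ISSN, URL) into
    MARCMaker-style mnemonic records with minimal access points.
    Backslashes stand for blank indicators in the mnemonic format."""
    records = []
    for title, issn, url in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        # Per the specification, omit initial articles and set 245
        # indicators to 00 so that titles index correctly.
        for article in ("The ", "A ", "An "):
            if title.startswith(article):
                title = title[len(article):]
                break
        records.append("\n".join([
            "=022  \\\\$a" + issn,
            "=245  00$a" + title + "$h[electronic resource]",
            "=500  \\\\$aRecord generated from vendor title list.",
            "=856  40$u" + url,
        ]))
    return "\n\n".join(records)
```

As noted above, records produced this way contain few access points beyond title and are only as accurate as the vendor data behind them.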
C4) Machine-created records supplied by vendors can provide a maximum of access
points and up-to-date coverage information in a single-source discovery tool,
the online catalog. Employing the separate-record technique in the loads facilitates
maintenance. Libraries wishing to use the single-record technique must dedup
vendor records via ISSN or other means against the existing records in the
OPAC. Information about the online version must be manually maintained on the
hard-copy records. A variation on this deduping work is adding the
URLs for the online versions to the holdings records rather than to the bibliographic
records of print-version titles. University of Washington is an example of a library
taking the single-bibliographic record approach with multiple holdings.
D) Another variation is hiring a vendor such as Serials Solutions http://www.serialssolutions.com/ or
TDnet http://www.tdnet.com/ to perform some of these same functions.
Raising Awareness in the Library Community
Presentations
Riemer, John and Jina Wakimoto. "Taming the Aggregators: Providing Access
to Journals in Aggregator Databases," presentation to the North American
Serials Interest Group, May 24, 2001.
Calhoun, Karen. "Aggregators and the Catalog: Where Will It End?" Presentation
at NELINET conference "E-Books and E-Journals," May 21-22,
2001, University of Massachusetts at Amherst.
Calhoun, Karen. "Access to Full Text Journals in Aggregator Databases:
A Workshop," presented for the Central New York Library Resources Council,
July 25, 2000, Syracuse NY.
Calhoun, Karen. "The Catalog, Next Generation: Access to Full Text Journals," presentation
at the New York Technical Services Librarians annual meeting, May 5, 2000,
New York NY.
Calhoun, Karen. "Access to Full Text Journals in Aggregations," presentation
for the Partners in Information and Innovation (Pi2) Meeting, February 2, 2000,
at Rensselaer Polytechnic Institute, NY.
Riemer, John. "Aggregators--Extending the Reach of Our Bibliographic
Control Efforts," presentation to the ALCTS SS Committee to Study Serials
Cataloging, January 17, 2000.
Baker, Jeanne, Jean Hirons, and Jean Pajerek. "Aggregator Aggravation:
Cataloging Issues and Challenges of Electronic Serial Aggregators," presentation
to American Association of Law Libraries, July 20, 1999.
Calhoun, Karen and John Riemer. "SCA Task Force on Journals in Aggregator
Databases," presentation to PCC Participants Meeting at ALA annual conference,
June 27, 1999, New Orleans LA.
Riemer, John. "CONSER's Aggregator Survey and the Work of Its Subgroup," presentation
to ALCTS Technical Services Administrators of Medium-sized Research Libraries
Discussion Group, June 26, 1999.
Baker, Jeanne. "Toward Better Access to Full-Text Aggregator Collections.
Designing an Aggregator Analytic Record," presentation to the North American
Serials Interest Group, June 12, 1999.
Calhoun, Karen. "E-Journals in Aggregators: Breaking the Rules," presentation
to the ALCTS CCS Catalog Management Discussion Group, January 30, 1999, Philadelphia
PA.
Riemer, John. "CONSER's Aggregator Survey and the Work of Its Subgroup," presentation
to the ALCTS
CCS Catalog Management Discussion Group, January 30, 1999, Philadelphia PA.
Publications
Riemer, John. "Updates on PCC Aggregators and E-versions Task Groups' Work," CONSERline:
Newsletter of the CONSER (Cooperative Online Serials) Program 18 (winter 2001).
Calhoun, Karen. "Redesign of Traditional Library Workflows: Experimental
Models for Electronic Resource Description." Invited paper for the Bicentennial
Conference on Bibliographic Control for the New Millennium, Nov. 15-17, 2000,
sponsored by the Library of Congress Cataloging Directorate. (Includes information
about aggregator projects) Available: http://www.loc.gov/catdir/bibcontrol/calhoun.html
Calhoun, Karen and Bill Kara. "Aggregation or Aggravation? Optimizing
Access to Full-Text Journals," ALCTS Online Newsletter 11:1 (spring
2000). Available: http://www.ala.org/alcts/alcts_news/v11n1/index.html
Riemer, John and Karen Calhoun. "Final report, PCC Standing Committee
on Automation (SCA) Task Group on Journals in Aggregator Databases," unpublished
report, January 2000. Available: http://www.loc.gov/catdir/pcc/aggfinal.html
Riemer, John. "Guest editorial: Aggregators--New Challenges to Bibliographic
Control," Cataloging & Classification Quarterly 28 (4) 1999. http://www.haworthpressinc.com/ccq/ccq28nr4guested.htm
[Link no longer works as of July 5, 2005]
Riemer, John. "CONSER's Aggregator Survey and the Work of the PCC Task
Group," Cataloging & Classification Quarterly 28 (4) 1999, pp. 7-13.
Calhoun, Karen and John Riemer. "Progress Continues for Aggregator Task
Group," CONSERline: Newsletter of the CONSER (Cooperative Online
Serials) Program 14 (summer 1999).
Calhoun, Karen and John Riemer. "Interim report, PCC Standing Committee
on Automation Task Group on Journals in Aggregator Databases," unpublished
report, May 1999.