
Rachel Newlin

Ann Santori
Jill Walker

Platform 9¾
Fantastic Potters and Where to Find Them
Our institution, Platform 9¾, is a physical collection that also mounts online digital
exhibits for researchers who cannot travel to our location. We collect and manage a number of
different types of resources -- including those related to the Harry Potter series of books, their
film adaptations, and their cultural impact.
Our digital collection, Fantastic Potters and Where to Find Them, consists of
photographs showing cover art of different editions of the novels in the Harry Potter series. We
have designed the digital exhibit as a companion piece to the formal physical collection (where
the books themselves would be on display), Harry Potter Around the World.
For our purposes, we have included only three edition examples here (the 2015
cover for the Jim Kay illustrated edition, the original American hardcover edition illustrated by
Mary GrandPré, and the paperback of the French translation illustrated by Jean-François
Ménard). However, if the exhibition were real, we would have a much larger
selection, potentially including photographs of books not held by Platform 9¾.
We are also operating under the assumption that Platform 9¾ holds physical copies of
each of these editions in its archives, and therefore we have created extra records in Dublin Core
(both Simple and Qualified) -- as well as used the <relatedItem> sandwich in MODS -- to
maintain the one-to-one principle and distinguish between our copy of the book itself and the
digital photograph our archival staff has personally taken to serve as a digital surrogate for our
online exhibit. We see in the future of this online exhibit the opportunity to gain digital access
to images of editions that we don't own, as well.
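The <relatedItem> sandwich described above can be sketched as follows. This is a minimal, hypothetical illustration, not one of our actual records: the titles and structure are placeholders, and only the pattern (the physical book's description nested inside the digital surrogate's record) reflects our approach.

```python
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"
ET.register_namespace("", MODS_NS)

def mods_digital_record(image_title: str, book_title: str) -> ET.Element:
    """Build a minimal MODS record for a digital surrogate that nests the
    physical book's description inside <relatedItem type="original">,
    preserving the one-to-one principle: this record describes the photo."""
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = image_title
    related = ET.SubElement(mods, f"{{{MODS_NS}}}relatedItem", type="original")
    rel_title = ET.SubElement(related, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(rel_title, f"{{{MODS_NS}}}title").text = book_title
    return mods

record = mods_digital_record(
    "Photograph of cover, Harry Potter à l'école des sorciers",
    "Harry Potter à l'école des sorciers (French paperback edition)",
)
print(ET.tostring(record, encoding="unicode"))
```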
We envision our user population as incredibly diverse. Some of the visitors to our digital
collections and our physical site would be run-of-the-mill Harry Potter fans. The benefit to
having a physical location means that we could also provide museum-style exhibits and
programming for this group of users on site. Others would be researching the sociological and
cultural phenomenon of Harry Potter (we expect the archive would not only include news
coverage items like newspaper clippings and television coverage, but also have ephemera related
to the books and movies, such as toys and action figures). Still, others might be investigating a
broader topic like illustrations in childrens books and the Harry Potter emphasis of the
collection would be of secondary interest.
Since we have such a wide range of users, we wanted our local elements to be as clear as
possible, both so that information would not be confused between our digital records
and our physical ones, and so that the records would serve the
Harry Potter fans visiting our collections as well as the scholars searching for specific
information about our items.
We chose to utilize MODS instead of VRA Core because we felt that the typical users of
our Platform 9¾ archive would not be art scholars specifically, and the level of granularity that
VRA Core provides would be unnecessary. Especially considering our use of a collection


with a basis in book materials, we thought that MODS would have more information that was
relevant to the book elements in both the physical records as well as the digital records. While
the cover is the focus of the digital exhibit, the artistic quality of it is not. MODS provided a
level of granularity with dates and locations that was perfect for our collection, where dates of
publication and copyright, as well as location of publication, would be important information.
When it came to our work process, we first decided to set deadlines for each part of our
assignment.
On November 17, we submitted our photos, a list of local elements, suggested controlled
vocabularies, what would be indexed, and input guidelines. From there, our crosswalk was
finalized on November 23, at which point we began separating elements relevant to our physical
records from those relevant to our digital records. On December 1, we started a rough draft of our
group report, knowing that we would need to go back and finalize things once our records were
produced. Rough drafts of our records were due on December 8, when we began editing them
as a group. Finally, on December 11, we met and finalized all our record changes, as well as
some issues with the MAP that weren't yet resolved. We found that setting deadlines
provided a basis for finishing the assignment on time and left us time to consider
adding and changing things in our MAP, especially (as predicted) when things changed after we
began record creation.
Overall, our group decided to divide the work for each aspect of the assignment in order
to best facilitate working on things without one another present. For this same reason, we chose
to pool all of our work into a group folder on Google Drive.
This meant that we each added elements -- and the reasoning behind those elements (with
columns for (a) local, (b) Simple Dublin Core, (c) Qualified Dublin Core, and (d) MODS) -- to
a Google Sheet. We also each chose a color to edit in, making it easier to know who had added
which elements.
We also created a number of Google Docs to house our ideas for and progress on sections
of the project (such as our group report) and local authority file. Each group member was to be
responsible for four Dublin Core records (Simple and Qualified for the digital image; Simple and
Qualified for the physical book) as well as one MODS record. We consistently added our records
in progress to the Google Doc, making changes as we saw fit. If there was a disagreement about
placement of certain local elements, we would leave a comment online and then resolve the issue
in person, minimizing miscommunication.
In creating our application profile, our first challenge was in attempting to uphold the
one-to-one principle. We felt that much of the information that pertained to the book would be
important access points for users searching the collection. We needed to make a clear distinction
between which elements and values described the physical book and which elements described
the digital image. We wanted to make sure that our metadata reflected user search patterns while
accurately representing the asset.
Considering this, we decided against doing one record where all that information would
be present, instead opting for the most realistic situation: our archive would have a record for the
physical book as well as the digital image of the book cover. We chose to do records for both of
the items with the intention that our work would create a fuller research experience for our users.
In addition, we decided against doing more than one MODS record for each item, seeing in


MODS a way for us to incorporate all the information from both the digital image and the
physical book in a way that was still clear, concise, organized, and easy to understand from a
user perspective.
Another challenge that presented itself in the application profile was choosing controlled
vocabularies. At first, we thought we would utilize all of the controlled vocabularies relevant to
each element, but quickly saw that gave too much room for unnecessary error.
Instead, we opted for the TGN for geographical elements, where the greater level of
granularity was important to our collection and its users. We decided that LCSH
would serve most of our controlled-vocabulary needs quite well, but that it wasn't
quite specific enough to serve our users in the way we would have liked. In navigating this
challenge, we opted for a local authority file in which we could create our own established forms
of Harry Potter-related characters, settings, places, etc. This local authority file gave us the
flexibility to tailor our collection's vocabulary in the ways we saw necessary to serve our users as
efficiently as possible.
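A local authority file of this kind can be sketched as a mapping from established headings to the variant forms they replace. The entries and the lookup helper below are hypothetical illustrations, not excerpts from our actual file:

```python
# Hypothetical local authority file entries: each established heading
# lists the variant ("use for") forms we redirect to it.
local_authority = {
    "Potter, Harry (Fictitious character)": {
        "use_for": ["Harry Potter (character)", "The Boy Who Lived"],
    },
    "Hogwarts School of Witchcraft and Wizardry": {
        "use_for": ["Hogwarts", "Hogwarts School"],
    },
}

def established_form(term: str) -> str:
    """Resolve a variant form to its established heading (illustrative)."""
    for heading, entry in local_authority.items():
        if term == heading or term in entry["use_for"]:
            return heading
    return term  # unrecognized terms pass through unchanged

print(established_form("Hogwarts"))
```

Catalogers enter whatever form they encounter, and the lookup normalizes it to the established heading, keeping index entries consistent across records.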
We also had an important decision to make regarding record identifiers. After much
discussion, we assigned only two record identifiers across each set of records: one for the record
of the digital image (i_12345) and one for the record of the physical object (b_12345). While we
understood that each record was unique and could logically have a unique identifier, we
reasoned that in a real archival context a repository would create only one record per item rather
than three. We therefore shared a single record identifier across schemas, treating each record as
a different expression of the same information rather than as multiple records that a single
repository would hold separately.
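The identifier scheme can be summarized in a short sketch. The i_/b_ prefixes and the 12345 example come from our scheme; the helper function itself is a hypothetical illustration:

```python
def record_ids(accession_number: str) -> dict:
    """Derive the paired identifiers from one accession number:
    'i_' for the record of the digital image, 'b_' for the record
    of the physical book (prefixes per our scheme)."""
    return {
        "digital_image": f"i_{accession_number}",
        "physical_book": f"b_{accession_number}",
    }

ids = record_ids("12345")
# ids["digital_image"] == "i_12345"; ids["physical_book"] == "b_12345"
```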
Once we began applying elements from our map to our records, we ran into another
challenge. This challenge was especially relevant to our Dublin Core records (both Simple and
Qualified). The publication information for our original book record, as well as our own archive's
publication location, didn't fit naturally anywhere.
This is where we learned that Dublin Core has a lot of deficiencies in describing location
information, especially if that information is complex. We tried a number of places, first
deciding that publisher would make the most sense. After reading the scope note for this
element on the Dublin Core website, though, we found that our location information didn't really
fit into it. So we decided to standardize a statement of publication place, which we would each
adapt to our own records and place in a description field.
Overall, we found that Dublin Core had us placing a lot of information in description
fields, and that it was a bit messy. We did our best to use these fields as clearly as possible,
making sure that everything we put into them was explicitly defined.
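As an illustration of the workaround, a Simple Dublin Core record for one of the physical books might carry the standardized place statement in a description field. All field values here are hypothetical stand-ins, not our actual record content:

```python
# Hypothetical Simple Dublin Core record for a physical book, with the
# standardized publication-place statement carried in a description
# field because no DC element cleanly fits complex place information.
physical_book_record = {
    "identifier": "b_12345",
    "title": "Harry Potter and the Sorcerer's Stone",
    "publisher": "Scholastic",
    "date": "1998",
    "description": [
        "Place of publication: New York (N.Y.), United States",
    ],
}

print(physical_book_record["description"][0])
```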
While most of the information we wanted in our records mapped well, there was
definitely a loss of specificity with Dublin Core. The difficulty was preserving information that
we wanted indexed in Dublin Core fields that are typically free text. Much of this information
ended up in the DC description element.
To mitigate this issue, when we created our local elements we did so in tandem with the
metadata schemes that we knew we would be using precisely so that information was not lost
between schemas, even if the desired level of specificity was not preserved.


We designed our local elements with two goals in mind. We wanted our local elements to
be accessible to the wide and diverse population of users we imagine our archive draws. The
language we used in our local elements was designed to be specific without being technical, so
that users of all ages and education levels would find our records useful. As we envision Platform 9¾
also serving as a research facility, we wanted our local elements to reflect a level of granularity
appropriate to both researchers and casual users.
Another challenge with our specific collection was the need to track data that applied
more to the physical object than the digital photograph. This is why we adhered so rigidly to the
one-to-one principle. We felt that this information was crucial in terms of how users of the
repository and collection would search for the information, but we did not feel that it was always
appropriate to include in the record pertaining to the digital photograph.
Therefore, in addition to creating the application profile, we also divided up our local
elements into those that applied to the digital object and those that applied to the physical book
covers themselves. This was a helpful exercise in that we were able to see what information was
missing from each of these profiles that was obscured by the application profile as a whole. This
was especially important as many of our elements apply to both the digital photograph and the
physical book cover. Until we separated the elements, it was easy to overlook the fact that we
had included only one element for publication date, and that a new local element needed to be
added for the date of digital publication.
Our approach to metadata quality focused on completeness and clear guidelines. As you
can see from the application profile, most of our elements are required. We decided we wished
for such a high level of granularity not only because of the wide range of users we imagine our
repository to serve, but also because of metadata sharing. Our particular project was very
susceptible to the "on a horse" problem, in which a description that is perfectly clear in its home
repository (a photograph captioned simply "on a horse") becomes ambiguous once the metadata
is harvested out of context. We addressed this issue by creating local elements that we knew
might seem obvious in the repository context but would be essential to anyone harvesting this
metadata. We also tried to think outside the box in terms of metadata that would be necessary or
helpful for items that are merely loaned to the repository rather than owned outright.
We also tried to be as specific as possible with our input guidelines so that even in cases
where our uses of elements were somewhat nontraditional, the input would be consistent for all
repository records. Using the Dublin Core and MODS elements as accurately as was feasible was
also a priority for our input guidelines. Our goal was to create metadata that would demonstrate
its quality by the completeness of its records and the accuracy of its element applications.
Our MAP did undergo some changes once we had all created our records. The first
change was the addition of record identifiers and the linking of our Dublin Core physical and
digital records through the source and relation fields. As we chose to uphold the one-to-one
principle by creating separate records, we used the source field in the digital record to point
toward the record for the physical book, so that users could access all the information of the
physical book from the digital record.
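The source-field linking can be sketched as a pair of records. The identifiers follow our i_/b_ scheme; the titles and other values are hypothetical:

```python
# Hypothetical pair of Simple Dublin Core records: the digital record's
# source field holds the physical record's identifier, letting users
# move from the surrogate to the original's full description.
physical_record = {
    "identifier": "b_12345",
    "title": "Harry Potter and the Sorcerer's Stone (hardcover)",
}
digital_record = {
    "identifier": "i_12345",
    "title": "Photograph of the cover of Harry Potter and the Sorcerer's Stone",
    "source": physical_record["identifier"],
}

print(digital_record["source"])
```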
The second major change to our MAP came from the discovery that we had mapped the
location of the publishers, both physical and digital, to coverage in Dublin Core and subject in
MODS. While this blatant misuse of the coverage element escaped notice in the creation of the
MAP, it was glaringly obvious to all of us once we began encoding our records. We rectified this
error by remapping the publisher location to yet another description field in Dublin Core and
placing it in the <originInfo> element in MODS. We also changed our controlled vocabulary


from the TGN to the MARC country codes with an accompanying text term. The only reason we
had decided on the TGN was to preserve the geographic hierarchy, but that hierarchy is only
beneficial in MODS in the subject context. Once we had remapped to the <originInfo> sandwich,
we felt that the MARC country codes were largely sufficient, although we also included a
free-text <placeTerm> field containing the city.
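The remapped <originInfo> pattern can be sketched as follows. The values (country code, city, publisher) are illustrative; only the pattern of pairing a coded MARC country placeTerm with a free-text placeTerm reflects our approach:

```python
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"
ET.register_namespace("", MODS_NS)

def origin_info(country_code: str, city: str, publisher: str) -> ET.Element:
    """Sketch of the <originInfo> sandwich: a coded MARC-country
    placeTerm plus a free-text placeTerm carrying the city."""
    origin = ET.Element(f"{{{MODS_NS}}}originInfo")
    place = ET.SubElement(origin, f"{{{MODS_NS}}}place")
    ET.SubElement(
        place, f"{{{MODS_NS}}}placeTerm",
        type="code", authority="marccountry",
    ).text = country_code
    ET.SubElement(place, f"{{{MODS_NS}}}placeTerm", type="text").text = city
    ET.SubElement(origin, f"{{{MODS_NS}}}publisher").text = publisher
    return origin

xml_out = ET.tostring(origin_info("fr", "Paris", "Gallimard Jeunesse"),
                      encoding="unicode")
print(xml_out)
```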
Overall, our group worked incredibly well together and we are proud of our final
products.

Jill Walker, French Edition

Ann Santori, Illustrated Edition

Rachel Newlin, Hardcover First Edition
