You are on page 1of 64

Volume 2, Issue 2, June 2005

FIGARO International Workshop
“Intellectual Property Rights Issues of Digital
Publishing - Presence and Perspectives”
Hamburg University
September 23rd – 24th, 2003









DOI: 10.2966/scrip.020205.142
© Contributors 2003. This work is licensed through SCRIPT-ed Open Licence (SOL).
(2005) 2:2 SCRIPT-ed

143

Workshop Chairman:
Peter Mankowski, Hamburg University


Participants:
Volker Grassmuck, Berlin
Willms Buhse, Digital World Services, Hamburg
F. Willem Grosheide, Utrecht University, Utrecht
Alexander Peukert, Max-Planck-Institut, Munich
Guido Westkamp, Queen Mary Institute, London
Burkhard Schäfer, Edinburgh Law School, Edinburgh
Elena Bordeanu, EDHEC Business School, Nice
Federica Gioia, Piacenza University, Piacenza
Concepción Saiz-Garcia, Valencia University, Valencia
Ziga Turk, SCIX-Project, Ljubiljana



Volker Grassmuck .................................................................................................144
Willms Buhse:........................................................................................................157
Willem Grosheide: .................................................................................................162
Alexander Peukert:.................................................................................................166
Guido Westkamp: ..................................................................................................170
Burkhard Schäfer: ..................................................................................................173
Elena Bordeanu:.....................................................................................................182
Federica Gioia:.......................................................................................................185
Concepción Saiz-Garcia: .......................................................................................189
Ziga Turk: ..............................................................................................................192
Summary: ...............................................................................................................199

(2005) 2:2 SCRIPT-ed

144

Volker Grassmuck
A. Intro
„The evolution of print teaches us [...] that what is fundamentally at
stake changes very little over time. [... T]he essential issues remain
few: there is control, there is property extended to new objects, and
there is lusting for profit. [...] The same remains true of the digital
era; the objectives of control do not change, only the tools do.“
(Jean-Claude Guédon, Université de Montréal, In Oldenburg‘s
Long Shadow)
One week ago the new German copyright law for the digital age came into effect. At
its core it is a privatization of copyright law. The balancing of interests until now
could be achieved through differentiated exceptions and limitations in a public
manner. Under the newly privatized order of law, exploiters are unilaterally setting
the rules by means of contract and technology. Especially in the online-realm, all
public interest claims have been waived.
Another privatization law is in effect in Germany since February 2002, the law on
patenting in universities. Universities thereby gained the right to patent inventions of
their employees. Connected to the law is a political program of establishing a
patenting system in the public universities, paid for from the proceeds from the sale of
the UMTS frequencies (35 million Euro until 2003) :an infrastructure of professional
patenting and patent exploitation agencies throughout the federal states. The returns
are shared 70% for the university, 30% for the inventor. Until then, only the inventor,
a researcher or more likely her professor, had the right to patent and economically
exploit an invention. But this, in the opinion of the German government, happened
much too rarely –something that from another viewpoint, makes perfectly sense,
because basic research is supported as public task precisely because it has no direct
economic relevance.
But the minister Edelgard Bulmahn sees it as a mobilization of a potential for
innovation that has been lying fallow. Just as in medieval agriculture, fallow or
wasteland was the result of the commons, which in turn became the source for private
and state property.
The private property model - inventors patenting their own work - in this case failed.
So the reform in a sense is a re-socialization of personal innovation based on common
means, i.e. the university facilities. But the overall goal is a commodification of
research results and the generation of revenues for the universities, with all the
property controls attached to it. And yes, there will be public funds for patent
litigation as well which is, of course, the central means of controlling intellectual
property.
The third story of privatization I want to talk about, is that of science publishing, of
the databanks of fundamental science. But before that, some basics:
Robert Merton, the great sociologist of science and completely inconspicuous of
party-political communism, based the essentially democratic, egalitarian ethos of
science on four pillars: communism, universalism, disinterestedness and organized
(2005) 2:2 SCRIPT-ed

145
scepticism, or in short: CUDOS. I would like to start by quoting from the passage
titled „Communism“ written in the 1940s and republished in the seminal collection
„The Sociology of Science“ from 1974.
"Communism", in the nontechnical and extended sense of common
ownership of goods, is a second integral element of the scientific
ethos. The substantive findings of science are a product of social
collaboration and are assigned to the community. They constitute a
common heritage in which the equity of the individual producer is
severely limited. An eponymous law or theory does not enter into the
exclusive possession of the discoverer and his heirs, nor do the
mores bestow upon them special rights of use and disposition.
Property rights in science are whittled down to a bare minimum by
the rationale of the scientific ethic. The scientist's claim to "his"
intellectual "property" is limited to that of recognition and esteem
which, if the institution functions with a modicum of efficiency, is
roughly commensurate with the significance of the increments
brought to the common fund of knowledge.
The institutional conception of science as part of the public domain is linked with the
imperative for communication of findings. Secrecy is the antithesis of this norm, full
and open communication its enactment. The pressure for diffusion of results is
reinforced by the institutional goal of advancing the boundaries of knowledge and by
the incentive of recognition which is, of course, contingent upon publication. A
scientist who does not communicate his important discoveries to the scientific
fraternity ... becomes the target for ambivalent responses.
„We feel that such a man is selfish and anti-social“ (Huxley).
This epithet is particularly instructive for it implies the violation of a definite
institutional imperative. Even though it serves no ulterior motive, the suppression of
scientific discovery is condemned.
Publication not only as ethical rule, but demanded by epistemology as well. Truth is
what has not yet been disproved by intersubjective scrutiny. It is the whole
community‘s task to confirm, refute, and extend the knowledge in published works.
And discovering truth - not helping finance universities - is the raison d‘etre of
science.
The communal character of science is further reflected in the recognition by scientists
of their dependence upon a cultural heritage to which they lay no differential claims.
Newton's remark - "If I have seen farther it is by standing on the shoulders of giants" -
expresses at once a sense of indebtedness to the common heritage and a recognition of
the essentially cooperative and selectively cumulative quality of scientific
achievement. The humility of scientific genius is not simply culturally appropriate but
results from the realization that scientific advance involves the collaboration of past
and present generations.
The communism of the scientific ethos is incompatible with the
definition of technology as "private property" in a capitalistic
economy.
(2005) 2:2 SCRIPT-ed

146
When you read Merton and exchange „scientific ethics“ by „hacker‘s ethics“ or „spirit
of free software“, the statements fit to an astonishing degree. Free Software is the
product of social collaboration which is contingent upon publication. It is owned in
common by a global community, which strives to advance the boundaries of
knowledge, very well aware that they are standing on the shoulders of giants.
Recognition and esteem are assigned to innovators. Freedom of free software is
ensured not by technology, but by means of a licensing contract, first of all the GNU
GPL. In 1985, Richard Stallman created the GNU Manifesto, the FSF and the GPL.
Since then the movement has become philosophically self-reflexive, institutionalized,
and juridically defensible. It has become a sustainable knowledge and innovation
commons.
The similarity to Merton should no be surprising, since free
software was born out of the academic culture of free exchange of
knowledge. Interestingly, this happened exactly at a point where this
free exchange was threatened by commodification. Today, maybe
science can learned something back from free software that it had
originally taught to it, but has since forgotten. Before I‘ll be
speaking about this process of forgetting, I would like to open up a
third scenario that hopefully will prove instructive for our issues.
Software, just as science is not a consumer good, like Hollywood products, but a tool.
Editors for text, image, and sound are means of production for knowledge.
Underlying the digital revolution is this special quality of the computer to empower
its users to create new things. The special quality of the Internet is to connect people
without gatekeepers and intermediaries, to bring the potential of the connective
intelligence to bear fruit.
I want to end my introduction with a third example besides science and free software
that powerfully demonstrates the paradigm shift.
The idea to let an encyclopedia emerge from the vast pool of interconnected brains is
rather old. One such project was Nupedia. Founded in March 2000, it was built on a
classical process of encyclopedia making. Entries had to be approved by at least three
experts in the relevant field, and professionally copy-edited to ensure a rigorous
quality-control. The declared aim was that articles will be „better than Britannica's“.
This was an open project. The articles were released are under the GNU General
Documentation License, but the editorial process was so strict that many people who
would have liked to contribute were frustrated. In March 2002, when funding ran out
and editor-in-chief, Larry Sanger resigned, about 20 articles had reached higher level
in this multi-tiered process.
Fortunately, before Nupedia died, Sanger was alerted to the Wiki software by Ward
Cunningham, a database of freely editable web-pages. On January 15, 2001,
Wikipedia was started, the free-for-all companion to Nupedia. In February there were
1000 entries already, in September 10.000 and after the first two years 100.000, and
Wikipedia had spawned off countless language versions including a very active
encyclopedia in Esperanto.
Different from the expert-based, quality-control model of Nupedia, Wikipedia‘s open
structure and the community-based self-organization allows to make full use of the
power of distributed connective intelligence: Given enough eyeballs, knowledge gets
(2005) 2:2 SCRIPT-ed

147
better. As a wiki, it allows for general public authorship and editing of any page. And
it works: when 30-40 people edit an article it does indeed get better.
Nu- and Wikipedia seem to prove the weaknesses of a closed controlled process
where people first have to show their credentials before they can participate, and an
open process where everyone is trusted until proven otherwise.
B. How is the situation of companies and especially online publishing houses
that scientific authors work for?
Brilliant! The three remaining major players have nicely developed the market and
divided it up among themselves. Authors and reviewers don‘t get paid, and they have
no choice but to publish or perish. Research libraries across the globe are a
guaranteed, „inflexible“ market. They also don‘t have a choice. When publishers go
online-only, they also eliminate the cost of printing and distribution. Whereas raising
prices in other markets will lead customers to buy from a competitor, here it leads to
competitors raising their prices as well. Whatever they say, companies don‘t want
perfect markets. They all hate the Microsoft and Elsevier - but only because they envy
them for their monopolistic position.
How did this „brilliant“ situation come about? It started from what Merton called
„recognition and esteem which ... is roughly commensurate with the significance of
the increments brought to the common fund of knowledge.“ How to measure and
allocate recognition commensurate with the achievements of a scientist?
Eugene Garfield invented Citation Indexing and set up the Institute of Scientific
Information (ISI) in 1958. The Science Citation Index is an ingenious self-referential
method for measuring „importance.“ Instead of setting up committees that vote on
value and grant reputation, a supposedly objective variable is being measured. Who
gets cited frequently must have said something important for the respective discipline.
The workings of science themselves produce the rough indicator of what the
community deems relevant.
In order to be able to practically handle this counting of citations, ISI defined a single
set of core-journals, a few thousand titles, a small fraction of all scientific journals
published in the world. Being in or out of the core set is determined by the impact
factor, a citation index on the journal-level. And this changed everything.
Scientific journals until then were mostly published by academies and learned
societies. They were not very lucrative, were not meant to be, just cover costs. In the
early 1960s, commercial interests realized that through ISI‘s citation hierarchy,
scientific journals could be turned into a profitable business.
At the same time - the late 1960s - a wave of new post-war university studies boomed
and with them new libraries which became the target of corporate interest. It was the
time when the „information society“ discourse started. Information came to be viewed
as central to economic growth.
A staggering escalation of prices set it. Chemical magazines cost $ 9,000 US/year on
average. Brain Research was the first to break through the $ 20,000 barrier. Most of
all journals in areas close to industry raise prices. On average, by 200% throughout
the last 15 years.
(2005) 2:2 SCRIPT-ed

148
The core set of 5-6.000 journals costs a library about 5 Mio per year. A „must“ for
every serious university and research library. A „cannot“ for all but the richest
libraries in the richest countries.
High degree of concentration: EU blocked Reed-Elsevier merger with Kluwer
announced in October 1997. In 2001, Academy Press merged with Elsevier. In 2002,
an obscure British equity fund bought Kluwer with its 700 journals. In May 2003 the
same company acquired Springer from Bertelsman for 1 billion . Reed-Elsevier now
holds about 25% of the market. The new Springer-Kluwer Group has 20%.
At the heart of the reputation economy is still the Institute of Scientific Information.
Founded by Garfield on a loan of $500 it grew rapidly, until in 1992 it was acquired
by Thomson Business Information. Thomson provides value-added information,
software applications and tools to more than 20 million users in the fields of law, tax,
accounting, financial services, higher education, reference information, corporate
training and assessment, scientific research and healthcare. The Corporation's
common shares are listed on the New York and Toronto Stock Exchanges. A real one-
stop shop. An in the basket: the backbone of the reputation system of science.
By the end of the eighties, the new publishing system was firmly in place and its
financial consequences had become hurtful enough to elicit some serious "ouches" on
the part of librarians. It even attracted the attention of some scientists, such as Henry
Barschall, the University of Wisconsin physicist who pioneered some very interesting
statistics showing that, between various journals, the cost/1,000 characters could vary
by two orders of magnitude; if weighted by the impact factor, the variations could
reach three orders of magnitude.
Simply pointing to this 1 to 1,000 range in weighted prices is enough to demonstrate
the total arbitrariness of the pricing of scientific journals, i.e., its complete
disconnection from actual production costs.
Barshall was sued by one publisher, Gordon and Breach, for allegedly distorting and
manipulating the market, not to win but to intimidate.
Scientific excellence, already somewhat skewed into scientific elitism, has by now
neatly dovetailed with financial elitism. Only the rich can read up-to-date scientific
information. For their part, poorer institutions in some rich countries and all
institutions in poor countries have suffered enormously from the financial bonanza
made possible by the revolutionary invention of the ‚core journals‘.
Scientific journals since the foundation of the first of a kind, the Philosophical
Transactions of the Royal Society of London by Henry Oldenburg in 1665, combine
five functions:
1. Dissemination of knowledge and
2. Archiving for future use. Both based on its physical form, print on paper. Now
being challenged by the digital revolution.
3. They are a public registry of innovation, validate originality, provide proof of
priority, „not unlike that of a patent office for scientific ideas“ (Guedon)
provide property title.
4. Filtering through peer-review „that guarantees as much one’s belonging to a
certain kind of club as it guarantees the quality of one’s work“ (Guedon)
evaluation instrument of individual scientists’ performance
(2005) 2:2 SCRIPT-ed

149
5. Branding. For authors being printed in Cell or Nature is proof of recognition.
Egalitarian ethics, hierarchical social system. An intellectual hierarchy based on
excellence, nobility granted by peers, based on publicly accessible scientific results.
1. Gatekeepers
Among scientists, those who manage to play an active editorial role in the publication
process enjoy a special and rather powerful role, that of "gatekeeper". As mediators,
they are supposed to extract the wheat from the chaff. Of course, this judgmental role
can only be justified if it is cloaked with the integrity (and authority) of the scientific
institution. Any hint of systematic arbitrariness or bias would threaten the whole
edifice of scientific communication. In this regard, a scientific editor acts a bit like the
Keeper of the Seal without which royal prerogative cannot be exerted in the physical
absence of the King. The difference lies in one important detail: in science, there is no
King: only truth and reality are supposed to prevail. Silently, the journal’s editor,
therefore, has come to occupy the role of guardian of truth and reality“ (Guedon).
If a scientist of some repute is offered the chance to head a new journal, the response
will generally be positive, perhaps even enthusiastic. The ability of offering this
status-enhancing role to various scientists lies, I believe, at the foundation of a de
facto and largely unexamined alliance between a number of key research scientists
and commercial publishers.
Seemingly a win-win situation, alas with a losing third estate: the libraries and
universities and research centers, governments that finance them. All these players see
their budgets flowing inexorably into the pockets of the large publishers.
2. Publishers
In a fruitful alliance with the gatekeeper scientists.
„The presence of a public registry of scientific innovations would
help create internal rules of behavior leading to a well structured,
hierarchical society“ (Guedon).
3. Authors
Imperative: publish or perish. The new numerical indicator became the measure for
reputation and basis for employment, career, and acquisition of research funds. The
non-monetary value translates directly into money.
4. Guedon, two modes:
• Scientists-as-readers, in information-gathering mode: conferences, seminars,
pre-prints, telephone calls, e-mail. invisible colleges
• Scientists-as-authors: footnoting, from most authoritative sources -> libraries.
Since the 1970s, journals force authors to transfer all their rights to them. This is
changing a bit thanks to the open access movement. The licensing agreement of the
Royal Society:
I assign all right, title and interest in copyright in the Paper to The
Royal Society with immediate effect and without restriction. This
assignment includes the right to reproduce and publish the Paper
(2005) 2:2 SCRIPT-ed

150
throughout the world in all languages and in all formats and media
in existence or subsequently developed.
I understand that I remain free to use the Paper for the internal
educational or other purposes of my own institution or company, to
post it on my own or my institution’s website or to use it as the basis
for my own further publication or spoken presentations, as long as I
do not use the Paper in any way that would conflict directly with
The Royal Society’s commercial interests and provided any use
includes the normal acknowledgement to The Royal Society. I assert
my moral right to be identified as the (co-)author of the Paper.
5. Libraries
are suffering the most from the „serial pricing crisis“: The transformation of a quest
for excellence into a race for elitist status bore important implications for any research
library claiming to be up to snuff: once highlighted, a publication becomes
indispensable, unavoidable. The race demands it. It must be acquired at all costs.
The reaction of libraries was to join and form consortia, sharing of legal experience
and gaining some weight in price negotiations. This did pressure vendors and taking
them by surprise. But those were quick to learn: Consortia bargained for full
collections. Publishers turned this around, offering package deals. The Big Deal: You
have some Elsevier journals already e.g. as part of „Science Direct“, how would you
like to pay a bit more and get access to all Elsevier core journals, say for $900,000 or
$1.5m, depending on client, currency fluctuations etc.
E.g. OhioLINK has contracted a "Big Deal" with Elsevier. As reported in the
September 2000 newsletter from OhioLINK, 68.4% of all articles downloaded from
the OhioLINK’s Electronic Journal Center came from Elsevier, followed far behind
by John Wiley articles (9.2%). Elsevier, although it controls only about 20% of the
core journals, manages to obtain 68.4% of all downloaded articles in Ohio. A local
monopoly. It affects the citation rate, impact factors of the journals, and attractiveness
of journals for authors.
Are they [libraries] not being temporarily assuaged by a kind of compulsory buffet
approach to knowledge whose highly visible richness disguises the distorted vision of
the scientific landscape it provides? In other words, are not "Big Deals" the cause of
informational astigmatism, so to speak?
Publishers gain „panoptic vision“: with regard to site licensing negotiations, and
usage statistics: Scientometrics specialists would die to lay their hands on such
figures; governmental planners also. With usage statistics you move faster and stand
closer to the realities of research than with citations. Usage statistics can be elaborated
into interesting science indicators of this or that, for example how well a research
project is proceeding on a line that might prepare the designing of new drugs or new
materials. The strategic possibilities of such knowledge are simply immense. It is
somewhat disquieting to note that such powerful tools are being monopolized by
private interests and it is also disquieting to imagine that the same private interests can
monitor, measure, perhaps predict. They can probably influence investment strategies
or national science policies. In short they could develop a secondary market of
meta-science studies that would bear great analogies with intelligence gathering.
(2005) 2:2 SCRIPT-ed

151
6. Digitization
If publishers were not paying authors and reviewers, they were still justified in
charging for their service of handling print and paper. Not any longer after scientific
publishing went online. Scientist‘s knowledge should release science from the
publisher‘s strangle-hold. The opposite is the case.
Market concentration escalated again through digitization and innovations in
copyright law and practice.
At the end of the 1980s, the Internet became a factor. 1991 Elsevier launched the
TULIP project: The University Licensing Program, a cooperative research project,
testing systems for networked delivery and use of journals together with 9 US
universities. Questions of profitability quickly linked up with questions of control,
and the technology was shaped to try and respond to these needs.
TULIP was conceived as a licensing system. It was inspired by the software industry,
which licensed rather than sold its products, in order to avoid some of the dispositions
of the copyright law, such as first sale provisions & fair use. Elsevier extended this
notion of license to scientific documents, thus setting a counterrevolution into motion.
• Distributed to each site on a CD-ROM to be mounted on local servers.
• Page images in TIFF files, too big to transmit over modem, printing very
slowly.
• Full-text search was offered but without user access to text files.
From this pilot, Elsevier retained only the licensing part and moved away from giving
sites a CD-ROM to access on its own servers.
Elsevier managed to invert the library’s function in a radical way: instead of
defending a public space of access to information by buying copies of books and then
taking advantage of the first sale provision of the copyright law, librarians were
suddenly placed in the position of restricting access to a privatized space. They no
longer owned anything; they had bought only temporary access under certain
conditions and for a limited number of specified patrons.
Licensing is „a lawyer’s paradise and a librarian’s hell.“ The traditional role of
librarians, epistemological engineering - organizing and cataloguing information -
and long-term preservation, was also made impossible. Publishers don‘t want to take
on the task of preservation, rather unload it onto libraries.
Aside from preservation, the libraries might become superfluous. Once secure
transactions and easy payment is there, end-users might access publisher‘s servers
directly. In effect, licensing contracts subvert copyright legislation on all but one basic
point: they do not question the fundamental legitimacy of intellectual property
The digital knowledge space - whether scientific, business or entertainment - is ruled
by licensing contracts and technological enforcement.
C. What are the legal limits of Digital Rights Management ?
Under the current regime of privatized copyright, there are essentially no limits to
DRM.
(2005) 2:2 SCRIPT-ed

152
General laws like data protection and privacy: assumed to apply but no effective
control. Enforcement of rights under exceptions and limitations: varies widely
throughout Europe, from Spain with possible fines of 6.000/day for noncompliance
to no enforcement at all. Germany: option to sue, but enactment of that provision was
postponed by one year.
There are no limits on the effective technological measures being protected, and there
are no limits to the purpose for which these DRMs may be used.
DRM is the powers of technology harnessed for controlling intellectual property.
DRM deals with objects of (money) economy.
Open Archives is the powers of technology harnessed for fast, highly distributed and
massively parallel open knowledge exchanges. OAI deals with scientific objects that
need to be dealt with first and foremost by science‘s own rules.
D. Charges for scientific online publications – hindrance to science or proper
business?
Rhetorical question: the monopoly fees that the large publishers are charging have
nothing to do with proper business - by whatever standards.
But maybe the question refers to charging authors for getting published, which has
become usual practice in print publishing of science books. BioMed Central, a
commercial open access archive, implements this model:

We use other business models. Article-processing charges [flat fee
of US$500 for each accepted manuscript, various provisions for
getting them waived] enable the costs of immediate, world-wide
access to research to be covered. The levying of a moderate and
reasonable charge for the costs of peer review and publication has
the merit of directly linking the charge with the service to authors
and the costs involved. Other sources of revenue include
subscription access to commissioned articles, sales of paper copies
of our journals to libraries, sales of reprints, advertising and
sponsorship, and most importantly the development of a range of
subscription-based value added services such as literature reviews
and evaluation, personalized information services delivered
electronically, provision of editorially enhanced databases, tools
that help scientists collaborate, and other software research aids.
$ 500 is a barrier to publication. Unclear where the money goes.
Faculty of 1000, an expert group rating articles in BioMed Central and other Open
Access Archives can be subscribed at 55/year.
E. Is the Internet a source of danger to the rights of scientific authors ?
Internet is a danger to anyone hoping to become rich and famous through her works,
and believes she will fail if she doesn‘t have total control over the use of her
published works. This hope is likely more prevalent among teenagers wanting to
(2005) 2:2 SCRIPT-ed

153
become pop stars than among scientists. But also scientists are perfectly legitimate in
expecting recognition for their work.
Endangered by plagiarism: Is that a problem in science? Apparently yes. Is it
increasing thru the Internet? Unlikely. Texts can be easily checked through Google.
The real danger: privatization of the databanks of fundamental science and of usage
data and scientific communications by corporations which are not accountable for it.
F. Do scientific authors need intellectual property law in an Anglo-American or
in a continental-European way to preserve their intellectual property rights?
What are the IPRs of scientists that need preservation? Attribution and integrity. Both
inalienable rights in continental European copyright law. Not so in USA and UK
where the attribution right explicitly has to be claimed. The Royal Society says: “I
assert my moral right to be identified as the author“. But then again, as a scientist you
don‘t have much choice.
G. How might a legal framework be adjusted in order to preserve intellectual
property rights of scientific authors?
Focus should not be on changing the law, a messy business and maybe unnecessary.
Freedom can be regulated by licensing contract, e.g. Creative Commons licenses.
For preserving the possibilities for scientists to fulfill their tasks as information
gatherers and as teachers, adjustments are needed to educational limitation of the
EUCD and its implementations, see for Germany in particular UrhG § 52a.
H. Is it useful to establish an open source system in the field of academic
publishing?
By all means, not only useful but crucial for public science to preserve the
justification of its very existence:
• Open source or free software (infrastructure)
• Open source information:
o data formats (convertibility)
o information itself:
public domain
info released under an open license (CC)
The digital revolution doesn‘t mean we do the same old things only now with bits.
„Real changes in power structures and social relations are in the offing“ (Guedon).
Most important innovation is not legal or technical but social: evaluation.
I. Free Access Science Movement
„Whatever may be the outcome of the political battle that is heating
up in the United States, it is easy to imagine how a system of open
archives, with unified harvesting tools and citation linkages
(2005) 2:2 SCRIPT-ed

154
constructed in a distributed manner, can threaten vast commercial
interests“ (Guedon).
First experiments with electronic journals came from a few exceptional scientists and
scholars (e.g., Jim O’Donnell at Penn, Stevan Harnad then at Princeton, etc.). The
motives were:
• to reduce publishing delays
• to decrease publishing costs (by at least 30%)
• to lower startup costs of journals
• to open up the possibility of free journals.
In 1991, Paul Ginsparg started his physics preprint server at the Los Alamos National
Laboratory: arXiv.org. - to my knowledge not yet as a consciously anti-publishing
initiative. It just seemed the right thing to do. Started in high-energy physics, it later
transferred to Cornell. Currently hold a quarter of a million articles.
It is today the primary source for large parts of physics. Darius Cuplinskas, Budapest
Open Access Initiative, reported on the case of a graduate student in Prague who put
an article on spring theory onto arXiv, within 3 days had responses from leading
theorists in the field. Got a scholarship. Without any filters and gatekeepers.
The SPARC initiative (The Scholarly Publishing and Academic Research Coalition)
launched in June 1998, and then „create change“ movement, attempting to reintroduce
competition, by creating or supporting journals that compete with the expensive major
journals. SPARC works with learned societies and university presses to position free
or comparatively reasonably priced journals head-on against their oligopoly
equivalents. It is successful in providing alternatives and to some degree keeping
prices of the major journals down (Elsevier‘s Tetrahedron Letters, already at $5,200
in 1995 appeared poised to reach $12,000 in 2001, leveling off at $9,000 in 2001,
because of Organic Letters, at $2,438 by far the most expensive of SPARC journals.)
Also raising awareness, networking, helping editors negotiate better deals, supporting
learned societies retain control over their journals. „A dozen SPARC-labeled journals
is wonderful, but, let us face it, it may appear to many as more symbolic than
anything else“ (Guedon).
Growing sense of crisis led the whole editorial board of the Journal of Logic
Programming to resign en masse from this Elsevier publication and found a new
journal, Theory & Practice of Logic Programming (Cambridge University Press).
Professor Maurice Bruynooghe even won a prize for this courageous move. In
February 2001, Elsevier launched the Journal of Logic & Algebraic Programming, 25
with a new editorial board. „For one Professor Bruynooghe, there may be ten others
eager to enjoy the opportunity to act as gatekeepers“ (Guedon).
At an October 1999 Meeting, the Santa Fe Conventions were formulated: distributed
e-print server with common metadata that can be harvest and searched easily.
This initiative has been taken over by OAI.
The principles are: interoperability, simplicity, distributedness. OAI refuses to design
any utopian SOE (standard of everything). As they say of the Internet,
"implementation precedes standardization."
(2005) 2:2 SCRIPT-ed

155
Public Library of Science started in October 2000. The Petition was signed by 30,000
scientists asking commercial publishers to release their articles 6 months after
publication for inclusion in an open access archive. It only managed to convince a
few. In summer 2001, PLoS decided that setting up their own journals is the only
way, in December 2002 it received a grant from the Gorden Moore Foundation.
In December 2001, on the initiative of Darius Cuplinskas from the Soros Open
Society Institute, the Budapest Open Access Initiative was started, also promoting
establishing of university-based open access archives and self-archiving.
Software & Information Industry Association (SIIA) complained that it competes with
private sector publishers. PubScience counts several important commercial publishers
on its side: Kluwer, Springer, Taylor & Francis, etc. In short, the PubScience initiative
appears to be dividing the publishers, according to an interesting fault line that ought
to be explored further—namely, that between publishers involved purely in
publishing and publishers also involved in the bibliographic indexing kind of
activities. House of Representatives removed all budgetary provisions for
PubSCIENCE, but the Senate restored them.
CogPrints, an electronic archive for self-archive papers in any area of Psychology,
neuroscience, and Linguistics, and many areas of Computer Science, Philosophy,
Biology, Medicine, Anthropology, as well as any other portions of the sciences that
are pertinent to the study of cognition.
Networked Computer Science Technical Reference Library (NCSTRL), a
collaboration of NASA, Old Dominion University, University of Virginia and
Virginia Tech. migrated to OAI in October 2001.
GNU EPrints was developed at Southampton, Stevan Harnad, for creating an OAI
compliant archive.
ARC, A Cross Archive Search Service, used to investigate issues in harvesting OAI
compliant repositories and making them accessible through a unified search interface.
German Academic Publishers (GAP), partner of FIGARO, supporting publishing by
University Presses, Research Institutions, Learned Societies and SMEs. Joint project
by universities in Hamburg, Karlsruhe and Oldenburg, funded by the DFG.
Commercial open archives: BioMedCentral, HighWire Press, Bepress, and BioOne
stand somewhere between a commercial outfit and a cooperative.
BioMed Central is somewhat different:
BioMed Central was created in response to the partial failure of the NIH-led
PubMed Central project: under the impulsion of Nobel prize winner Harold
Varmus, and then NIH director, PubMed Central sought to encourage journals
to free their content as quickly as possible, possibly from day one. There
again, the journal level was targeted by PubMed Central, but it behaved a little
too idealistically: Journals, especially commercial journals, were not ready to
give the store away, or even parts of it, and they strongly criticized the
PubMed Central venture; in the end, the NIH-based proposal was left with
very few concrete results, as could have probably been expected from the
beginning.
By contrast, BioMed Central, a part of the Current Science Group, locates itself
squarely as a commercial publisher; at the same time, it sees itself as a complement to
(2005) 2:2 SCRIPT-ed

156
PubMed Central. It invites scientists to submit articles which are immediately peer
reviewed; once accepted, they are published in both PubMed Central and BioMed
Central. Most of PubMed Central "successes" actually come from BioMed Central!
BioMed Central offers about 350 free peer-reviewed journals, and is working on
methods for citation tracking, calculating impact factors and other methods for
evaluating articles. The reasons given by BioMed Central to induce scientists to
publish with them deserve some scrutiny:
1. The author(s) keep their copyright;
2. High visibility
3. Long-term archiving
J. What is changing? From journals to archives.
The interest of commercial publishers is to keep pushing journal titles, and not
individual articles, as they are the foundation for their financially lucrative technique
of branding individual scientists.
“For scientists, print publication are related to career management, while publicly
archived digital versions deal with the management of intellectual quests” (Guedon).
But journals are not the only way of evaluation. Elsevier is experimenting with
archive evaluations on its Chemical Preprint Server.

If archives are open, mirrors can be established with little hassle and, as a result, the
archive has more chances to survive. Frequent replication and wide distribution, not
hardened bank vaults, have long been used by DNA to ensure species stability that
can span millions of years. We should never forget that lesson! Vicky Reich’s
LOCKSS project at Stanford University appears to have taken in the implications of
this "dynamic stability" vision for long-term preservation of documents. The model
should be discussed, refined and extended if needed by librarians. If openness can be
demonstrably and operationally linked with better long-term survival, it will have
gained a powerful argument that will be difficult to counter.
What needs to be done? Technology: yes, but not DRM which is a complete waste of
time, money and environmental quality. But...
K. Evaluation: An Open Panoptic Space
…to print means to select. In the digital world no preselection is needed. A self-
selection of contributors choosing a suitable archive for their works is sufficient.
Anyone is allowed to publish. There are no more economic, technical, skill
constraints.
In the digital world, selection and evaluation through usage becomes the dominant
mode. The peer review process tends to extend to the whole community almost
immediately.
Ginsparg knew well what information could emerge from the use statistics of his
server, but he refused to release them for ethical reasons and political prudence. If
(2005) 2:2 SCRIPT-ed

157
evaluation were ever to rely on his archives, it had better emerge as a conscious,
collective move stemming from the whole community of scientists.
In the end, the access to large corpora of texts, laid out in open archives, and
cross-linked in various ways, in particular through their citations, will open the way to
many different and useful forms of evaluation. It will also help monitoring the crucial
growth areas of science while placing this bit of intelligence gathering into the public
sphere.
The evaluation process will have to be torn out of the publishers’ grip and that of
gatekeepers, i.e., colleagues! Ways to tear this detrimental alliance apart while relying
on the real expertise of these gatekeepers must therefore be sought. It is not an easy
problem to solve, but it should clearly be on the agenda of learned societies,
university administrators, and, of course, librarians.
Guedon speculates that commercial open archives like Elsevier‘s: „Chemical Preprint
Server" (CPS) are there for testing ways to reconstruct a firm grip on the evaluation
process of science in the digital context.
If we imagine that a significant fraction of scientific knowledge should circulate
through open archives structured in the OAI spirit, it is easy to see that tools to
evaluate all sorts of dimensions of scientific life could also be designed and tested.
These tools might be designed as public good, by a combination of specialists in
scientometrics and bibliometrics—an ideal outcome in my opinion. This would
amount to creating an open panoptic space—a marvellous project for librarians. But ...
some of the main players will try either to destroy or control what they do not already
own, but, if suitably forewarned, scientific and library communities backed by lucid
administrators do pack a certain amount of firepower. Unlike consortial battles,
fraught as they are with many ambiguities, these are not unclear battles.
In short, with the digital world, the evaluation process stands ready to be reinvented in
a clear, rational way by the relevant research communities themselves. With a
well-designed principle of distributed intelligence, with the help of scientists
self-archiving their work, with the help also of selections that do not rest on the prior
reputation of a brand, but on the actual quality of each selected work, librarians hold
the key to developing a total, global mapping of science. The vision, in itself, is
dizzying, but it is not new; somewhere, it has lain in the background of Garfield’s
(and Vannevar Bush’s) thoughts and quests; we may just begin to have the tools and
the social know-how (distributed intelligence again) to do it all now. Let us do it!
L. Literature:
Jean-Claude Guédon (Université de Montréal), „In Oldenburg’s Long Shadow:
Librarians, Research Scientists, Publishers, and the Control of Scientific Publishing“,
held at ARL May 2001 Membership Meeting „Creating the Digital Future“, Last
Updated: October 26, 2001, http://www.arl.org/arl/proceedings/138/guedon.html.

Willms Buhse:
The difficulty I face here is that Digital World Services (DWS) as a Bertelsmann
company is not working in the field of academic publishing. So, what I will do is, I
(2005) 2:2 SCRIPT-ed

158
will briefly explain what my company does, and then I switch over to what I am doing
that is more similar to academic publishing – I will talk about some findings of my
research thesis about Digital Rights Management. I took some findings from the
music industry and from my analysis. I hope this combination will be helpful for your
conclusions.
DWS is part of the Arvato Group – the media services division within Bertelsmann,
which means that we are a service provider just like a printing facility or a facility that
manufactures CDs. We are a system provider, so we are building technology. We
have developed a platform, that is able to handle music, software games, video
publishing, and it protects these bits and distributes them to a number of different end
devices including PCs and mobile devices. The central part of this is something that is
called clearing house, which is very often mentioned as the core part of the DRM
system.
DWS believes that Digital Rights Management is central and critical to the success of
the digital content business. It is important to stress the word “business” as the media
industry has to rely on revenue streams in the digital age. DWS started about five
years ago and during that time we came to a couple of conclusions. We realised that
DRM is not just a technology, but it is actually a strategic business enabler. For
example, we are working in the field of mobile content distribution. If you distribute
games to your cell phone, then we actually have a DRM technology to protect these
games for certain types of cell phones. This is a strategic business enabler, because
otherwise many content providers would just not allow publishing this type of content
in a digital format. They would just stick to traditional means of distribution.
For them it is crucial to generate content revenues. Nevertheless, as a little disclaimer
it is not the only way of generating revenues. Of course, there are many others.
The technical, legal and business frameworks are coming together to facilitate DRM -
I will come to that a little later. Leading content providers and distributors are
currently building these platforms. If you look into certain systems, for example for
the online distribution of music, e.g. I- Tunes and Pressplay, they are all using DRM
systems. Another DRM standard that is evolving and being used across the wireless
industry is one provided by the Open Mobile Alliance (OMA), which is in fact an
open standard. That means that any company can actually join the OMA and help to
develop technical specifications. And the OMA has already successfully released
specifications for DRM last year. Today there are about 40 cell phone types currently
in the market, which have this kind of DRM technology implemented. It was released
about a year ago, so that is a big success, that in such a short timeframe this
technology has become implemented by many different network operators and
especially hand set providers.
What trends do we see, when it comes to DRM? First of all, why is DRM so relevant?
On the one hand you can see how we used to have closed networks such as the cable
networks. These cable networks or telecommunication networks are now opening up.
As they are opening up, there is an increasing need for DRM systems. DRM used to
be only relevant as some sort of a billing mechanism, as some sort of a pay-per-view.
Then you have the open internet which is uncontrolled and full of illegal content from
the perspective of music industry. It is becoming more and more controlled, as we see
certain technologies appearing, such as Liberty Alliance and Passport. Then there are
(2005) 2:2 SCRIPT-ed

159
certain security measures, TCG as an example but it certainly will take a while - a
couple of years.
We will get to some trusted gardens, something that is open, but still there is the
possibility to establish trust. We see this as the ideal environment for DRM.
On the other hand you can see the content suppliers certainly asking for a DRM
infrastructure to be in place, but also they are making content licences available. This
is certainly true for the music industry. The music industry currently licenses music to
a number of companies. That wasn't the case five years ago, where they were
extremely sceptical towards the new distribution channel.
On the environment side, there is a growing political support for IP protection and a
growing education, a growing discussion in this field. At least people realize that it is
an important area to work on. And more and more legal aspects are being solved in
this area. I think it is something companies can rely on. They know which
environment, which business environment they are working on.
From a technological perspective, DRM is certainly an extremely complex technology
but fast developing. There are number of things that are coming together. DRM is
something that can always be hacked – but there are developments when it comes to
security updates.
On the consumer side, they are demanding access to rich media content and they have
increasing access to bandwidth.
We see all these factors are coming together so that DRM actually has its place in
the value chain of digital content distribution.
As I promised earlier, I would like to present some ideas, which do not necessarily
reflect the opinions of my company. They are the results of my doctoral thesis at
Munich University, but I thought it is a good idea to share this with you.
Regarding the agenda, we take a quick look at the cost prospective. Then I would like
to look at some uncertainties in supply and demand, and I would like to introduce you
to the digital distribution matrix - four scenarios for academic publishing and a
conclusion. Again this refers a lot to online music.
Regarding the cost perspective, there are differences between first copy cost and
variable cost per unit. And when you look at how those costs are distributed within
the music industry, you actually can see that there is little you can do about
production, first copy cost, manufacturing cost and label cost. At least from the music
industry perspective the real potential of 50 percent lies in marketing and distribution
cost, which is in the end similar to the publishing process. There you can have impact
on the cost structure. As a result, that is actually something that really revolutionizes
an entire industry.
So, what are the uncertainties on the demand side? Where do the revenues actually
come from? Different sources, direct and indirect sources. There is a limited
willingness for the end consumer to actually pay for digitally delivered content. There
is a number of factors associated with that. It is very hard to explain to a consumer
who burns a CD by himself for probably 50 Eurocent why, if he bought it, it would
cost him 15 Euros. To explain this difference to him is extremely challenging. I think
some might actually apply to the academic world. Why should I buy a book for
100 Euros, if I can actually download it for a very, very small portion of that cost and
even print it for a fraction of that? So what will be the business model behind that?
(2005) 2:2 SCRIPT-ed

160
That’s from the demand side. On the supply side digital goods can be public or they
can be private. Most of digital goods today are public to the extent that you can find
them on the internet, even though copyright is associated to many of them. It almost
seems as it is a sort of looking at the “Darknet” as it was quoted from researches from
Microsoft. It is very difficult to control, and that is an essential part of privatizing
digital goods. Music might become non-excludable. So what is the copy protection
behind that?
I took these two questions, these two uncertainties, and combined them to a scenario
matrix, which you can see here. Still I would like to go briefly into the demand side
again, how to pay for digital content. There are different ways. Either you can pay
directly or indirectly for digital content. On the left side there is a little empirical
example, the willingness to pay for one song: three quarters of the people were
saying, no, I am not willing to pay for this. This is a Jupiter survey from some time
ago. These numbers have slightly changed, but still it is very tough to charge for
digital goods.
What are the business models ? If you come from the direct side, you can charge for
usage related, sort of for single transactions. This might be a charge for a single track.
You might even charge on a time based, that is you charge for an academic title and
you read it once or you print it once or you read it for 24 hours or something like that.
So, you get transaction based access to this.
A mix is some sort of a non-usage related charging, which might be a single fee, a
licence fee, device purchase or set-up fees or repetitive, which is a subscription fee.
For example you pay to access a data base a couple of Euros a month, you can rent it,
or even broadcasting fees are this kind of non-usage related fees. You can do this via
corporations - that means advertising - that means data mining with a huge challenge
of privacy issues, which are associated with that, certain commissions, cross selling,
even syndications. Or via certain institutions. That refers to fees and taxes for
example on end devices or on storage media.
How does this digitalized solution matrix look like? You have on the top supply side
online music as a public good or as a private good and you have on the demand side a
revenue model, which is indirect, or it is direct, and you go from scenario one to four.
The first scenario is the promotion. The reason to publish is fame. The publisher is
even willing to give it away for free.
The second scenario is a music service provider. In this case there is a direct charge,
still you pay not because of the content, but because of the service, so it might be for
example high quality access, very good search tools or personalisation tools.
The third scenario is a subscription scenario with some sort of a light-weight-DRM,
some sort of access control. People are willing to pay for the content directly.
Nevertheless, you still might apply from scenario two some technique in scenario
three, so that it motivates people to join this subscription. But the idea is that actually
people pay directly for the content in the sense that they pay indirectly for an access
fee.
And in the fourth scenario, which is called Super-Distribution, which means, that
actually every single item, digital item, be it an article, be it a book, is encrypted and
consumers pay for access to this digital item. People can forward it amongst each
other via e-mail, MMS etc. But still, whenever somebody wants to read it, rules apply
(2005) 2:2 SCRIPT-ed

161
and it can be charged for. Super-Distribution certainly also faces the challenge of fair
use rights. There are certain questions how to solve this. If you want to listen to music
for example and you buy this music, you can listen to it in your car, even when you
bought it for your PC.
Let’s apply these scenarios to academic publishing:
In scenario one, revenues are based on economies of attention, hence the promotion
scenario. What is the impact on academic publishing? Well, it is an author-sponsored
building-fame model. It can be done through promotions of peer to peer file sharing,
can be done for example through sponsoring or public funds or grants.
Scenario two, the service provider business model. As this is the service model it is a
question of connectivity. How do I get access to the type of content I really want to
get access to? In the case of music, it has to do with personalisation, recommendation
or bundling. This also can be done with academic research. Here you are paying for
the service that you do not have to look up, "when was the book published", and so on
and so on, and write that into the reference table. It is about hosting and aggregation,
it is about a community service for chats and message boards for example. It also can
be done through a peer review fee, so that a system is established that allows for a
faster peer review process for example.
Scenario three, the subscription case. Revenues are based on a light DRM. It is like an
access and account management. You do not necessarily encrypt every digital item,
but you have a restriction when it comes to access. What is the impact on academic
publishing here? It can be the scenario of providing updates, providing some sort of
continuous publishing, so that, when you are part of this subscription service, you will
always have access to the most actual version that is out there. It can be a service and
this can be controlled through some sort of a light DRM version that automatically
identifies older versions for example. It is a question of quality control. You can make
sure, that an article that was published by somebody is definitely published by this
person and not by somebody else. If you have an Open Source system, that is very
hard to control. It is more about reputation and branding, so you can actually build
this with these kinds of subscription systems and you can obviously charge for
example via monthly subscription fees.
Scenario four is the Super-Distribution case, where revenues are based on DRM
protection - that is where DRM becomes the business enabler. With music you could
imagine that system via MMS. You forward music clips and you can watch that clip
once for free, but if you want to buy it, you have to buy the right to do it. Those
technologies are being currently implemented. And I think this is exciting and it
allows for more flexible business models. Especially in this case standardisation is
extremely important. It is mandatory that the same system has to be by all the
participants. And that makes it extremely challenging, because obviously a company
providing these kinds of systems might become some sort of a gatekeeper and that has
to be closely watched. The impact on academic publishing is that for example peer to
peer collaboration could be possible with that; a controlled version management could
be possible and even more interesting, the sale of individual articles, chapters and
books.
After this bouquet of different scenarios as a conclusion, it is my opinion that all these
four scenarios will coexist. It is not that you pick and choose and only the promotion
scenario and the open source scenario will be around. I am sure that those four
(2005) 2:2 SCRIPT-ed

162
scenarios will be around for a long, long time. In case of the music industry those
scenarios on the left probably apply in 99.5 percent of the cases today - if you
consider how many people use Kazaa-like systems compared to how many people
legally buy music currently on the internet. I think the music industry has started a lot
of initiatives and they are trying to migrate more and more people from this “Darknet”
to legal users.
From an academic publishing perspective, authors and publications might actually
develop from scenario one to scenario four, meaning if you are at the beginning of
your research, you might have a huge interest in getting your work known. You are
actually willing to publish your articles without charging for it; probably you even
would be willing to pay for its broad distribution.
As a next step, you are probably going to a service provider, because you can see a
number of downloads that were done and he integrates your content into his huge
database and there is cost for that in the hope in the end he can charge for some sort of
access fees. Then you might actually move into a subscription scenario, where you are
part of a bigger journal or part of a bigger publication. And in the end an author might
actually be able to individually charge for his articles as mentioned in the last
scenario.
It is always a question: “what do you want as an author?”. Do you want to earn money
or do you want to get known in the world. And I think these four scenarios allow for
both cases. In my opinion it should be up to every individual to decide whether they
want to charge for their articles or not and so both systems should be coexistent.

Willem Grosheide:
A. Preliminary observations
I would like to stress that in my approach to e-publishing in its broadest sense, I
endorse the so-called Zwolle Principles 2002 (ZP) with regard to copyright
management(http://www.surfine/copyright/keyissues/scholarlyimplementation_zwoll
e_principles_pdf).
As you will know these principles are set within the framework laid down by the
Tempe Principles and accompanying documents stemming from the American
Association for the Advancement of Sciences (AAAS).
For reasons of convenience I would here like to quote the Objective of the ZP:
“To assist stakeholders – including authors, publishers, librarians,
universities and the public – to achieve maximum access to
scholarship without compromising quality or academic freedom and
without denying aspects of costs and rewards involved.”
In pursuing this objective the ZP promote the development of an appropriate
copyright management, taking into account the various, at times coinciding or
diverging, interests of all stakeholders involved by allocating specific rights to them.
It follows that I favour an approach towards e-publishing from the academic
community in cooperation, not in antagonism, with the community of scientific and
(2005) 2:2 SCRIPT-ed

163
educational publishers. The foregoing is said under the assumption that there are no
conflicting interests within the academic community, i.e. no differences of opinion
between scientific authors and scientific institutions. It appears that different schemes
are here applied worldwide.
As is well known, in recent years conflicts have arisen with regard to the ownership of
copyrights created for scientific work which has been created within academic
institutions. Who owns the copyrights in such a situation; the individual academic or
the institution?
Finally, although so-called moral rights aspects can easily come into play in the
context of e-publishing, e.g. with regard to the integrity of the content, I confine
myself in this statement to the economic aspects of copyright law as the ones that
prevail in the context of this workshop.
B. Issues of general interest
E-publishing in whatever form should only be taken up by the scientific community
itself if what it can offer to authors and public adds - in terms of formats, quality,
cost-saving, remuneration and the like - something to what scientific and educational
publishers can offer. As a consequence the academic community should abstain from
diving headlong into e-publishing activities executed adequately by scientific and
educational publishers.
It is a fallacy to believe - as is done by some - that copyright law is being washed
away through the digital colander. On the contrary, both on an international and a
national level copyright law has been substantially strengthened to adjust it to the
digital environment. See TRIPS Agreement 1994, WIPO Copyright Treaty 1996,
InfoSoc Directive 2001, DMCA.
As a consequence the prerogatives of the right owner have been extended towards an
access right, the enforcement of copyright law has been articulated, e.g. by providing
that the so-called three-step-test has to be applied by the courts, and the legal
protection of technological protective instruments has been introduced.
It is true to say that particularly the advent of technological devices to protect
copyright content, e.g. various forms of encoding, makes it possible to dispose of
copyrights by way of contract law, e.g. digital rights management. It is of note here
that the strengthening of the international and national copyright regimes is mainly
induced by economic considerations promoted by the so-called copyright industries,
and they are not necessarily beneficial to the interests of individual authors such as
academics.
This not to say that copyright law in its actual form can not serve the interests of
individual academics or the academic community at large.
This may also be true for the electronically making available (i.e. transmission,
distribution) on the Internet of text, either in the form of an e-book, in an e journal or
in any other digital format. Taking the notion of e-book as the terminus technicus with
regard to electronic publishing in general, according to a recent study of the
Association of American Publishers (AAP) and the American Library Association
(ALA)[AAP/ALA, White Paper -What Consumers Want in Digital Rights
Management, March 2003 (http://dx.doi.org/10.1003/whitepaperl)] no less than the
following three different interpretations of an e-book are used within the publishing,
(2005) 2:2 SCRIPT-ed

164
library and users communities: the content file (digital file), software application (the
electronic wrapper/envelope) and hardware device (the e book reader device).
Other definitions can be found in the e-book industry, such as literary work in the
form of a digital object (AAP), the content itself (netLibrary), or any full-text
electronic resource designed to be read on a screen (University of Virginia E-text
Center).
What is common to all these interpretations and definitions is their reference to the
metaphor of a book made of paper and cardboard. However, as e-books evolve,
reference to a book in its traditional form is expected to fade as comparisons to
products in print become less necessary.
It appears that today the notion of the traditional book is challenged by computer
technology and the variety of ways in which new products, designed to achieve the
same ends, are being developed (Karen Coyle, Stakeholders and Standards in the E-
book Ecology or It’s the Economics, Stupid, Library Hi Tech, Volume 10, No. 4, p.
314-324). So, if e-books are different the general question to be answered may be if
the applicable law should then also be different and should different forms of business
models then be used with regard to the dissemination? It seems that the general
answer to this question must be: It all depends.
With regard to the applicable law, the following issues, amongst other things, may be
taken into account are the relationships that are at stake. First the upstream
relationships between authors and exploiters. What counts here from the perspective
of the academic community are interests such as the broadest possible dissemination
of scientific content amongst other authors and consumers, integrity of the content
and reasonable costs and fair remuneration. This requires close monitoring of
exploitation contracts, i.e. no unrestricted application of the principle of the freedom
of contract with regard to, amongst other things, balancing proprietary claims of
copyright owners against access to information claims by authors and consumers
(e.g.: are exceptions and limitations to copyright law merely reflecting market
failure?) and considering transfer of rights clauses with respect to their legal nature
(assignment or license?) and their scope (general or purpose-restricted?).
Also the downstream relationships between authors and consumers such as the
relationships between exploiters and consumers have to be monitored. It is obvious
that, although the same legal instruments are available for authors and exploiters
acting as copyright owners, their application may differ since there may be a
divergence of interests here. For example, access to information may be more valued
amongst scholars than amongst scientific publishers.
With regard to the different forms of business models, again the same formats such as
particularly DRM, are available to both authors and exploiters but their use may differ
depending on who makes use of them. This may be particularly true taking account of
the different products that can be produced electronically, ranging from e-books, e-
journals to e-platforms. Whereas scholars will focus on rapid and broad dissemination
of their works and will have no concern with regard to exploitation in the sense of
profit making, the latter is crucial for scientific publishers.
(2005) 2:2 SCRIPT-ed

165
C. Issues of special interest
I. How might a legal framework be adjusted in order to preserve the intellectual
property rights of scientific authors?
It does not seem that the position of scientific authors is in any way different in this
respect from the position of literary authors, artists, composers or other creative
people. Preservation of intellectual property rights, i.e. copyright, in a digital context
can be attained by a combination of legal and technological measures; further, the
possibilities of enforcing the law are of the essence.
It is of note here that in recent times, next to the traditional so-called personal
copyright, the entrepreneurial copyright has emerged.
Entrepreneurial copyright should be understood as, amongst other things, the bundling
of copyrights in large industrial conglomerates, the co-called copyright industries, e.g.
Bertelsmann or EMI and particularly with regard to scientific publishing e.g. Elsevier
Science or Wolters/Kluwer. Obviously, these copyright industries try to pursue their
own commercial interests which do not necessarily coincide with the interests of the
(scientific) authors which they represent on the basis of an assignment or a license of
their copyrights. So, if the copyright industries refer to themselves as right owners this
reference can under these circumstances be misleading.
It follows that while with regard to the adjustment of copyright law to the digital
environment, the available options such as the introduction of an access right and, at
the same time, restricting the use of exceptions and limitations via an extension of the
three-step-test, all this together with either compulsory licenses, levies or DRM or a
combination of those instruments may be valued differently from the perspective of
scientific authors than from the perspective of scientific publishers.
What measures should be taken is primarily a policy issue, depending on how the
respective interests are balanced against each other.
II. Charges for scientific online publications - a hindrance to science or a proper
business?
A simple positive or negative answer to this question is not possible since charges can
be of various kinds, e.g. reimbursement of costs, remuneration of authors or profit
making. Neither is a simple answer necessary. It seems likely that the best answer will
be given on a case by case basis, e.g. the requirements for e-book publishing and the
publication of an e-journal are not necessarily the same.
III. Do scientific authors need intellectual property law in an Anglo-American or in a
continental European way in order to preserve their intellectual property rights?
This question is self-evident: there are no intellectual property rights without
intellectual property law. Further, it indeed makes a difference which legal system
applies: the perspective from the principle of the Anglo-American law work-made-
for-hire rule has different consequences than the contractual arrangement which
applies under continental law. However, in practice contracts may have more or less
the same effects under the continental rule as the work-made-for-hire rule due to
extensive assignments of copyrights which are usually made.
(2005) 2:2 SCRIPT-ed

166
IV. Is it useful to establish an open source system in the field of academic
publishing?
Since there is no established concept of open source it is difficult to say whether the
academic world will benefit from a system of open source. If open source stands for
establishing platforms for scholarly discussion, the beneficial effect is quite obvious.
V. What are the legal limits of Digital Rights Management?
In principle there are no legal limits to DRM other than those limitations which can be
found in positive law.
It is of note here that a distinction should be made between the management of digital
rights and the digital management of rights. The former comprises the technologies of
identification, metadata and languages, whereas the latter deals with encryption,
watermarking, digital signatures, privacy technologies and payment systems.
For example, if DRM makes it impossible for authors or consumers to make a copy
for private use this is not a violation of copyright law under the regime of the InFoSoc
Directive applicable in the EU (WIPO, Standing Committee on Copyright and related
Rights – Current Developments in the Field of Digital Rights Management,
SCCR/10/2 (Geneva August 2003).
VI. Is the Internet a source of danger to the rights of scientific authors?
Again, scientific authors do not have a special position in this respect compared with
other authors. Further, the Internet is both an opportunity and a danger, if the latter
means that legally and technologically unprotected scientific works may be easily
plagiarized, corrupted and exploited without the authorization of the author.
VII. How is the situation of companies and especially online publishing houses that
scientific authors work for?
This question cannot be answered because of a lack of adequate information.

Alexander Peukert:
Intellectual property rights issues of digital publishing - this issue was exactly the core
question of an expert opinion a colleague of mine and I prepared for the Heinz
Nixdorf Center for Information Management in the Max Planck Society. The
background is that the Max Planck Society launched a so called eDoc Server and
wanted to know in how far copyright could be an obstacle to its plans.
According to the official website of the eDoc-Server, it aims to:
• build a comprehensive resource of scientific information produced by the Max
Planck institutes, providing a stable location for its preservation and
dissemination,
• increase the visibility of the intellectual output of the Max Planck Institutes in
all the forms it takes in the era of the Internet
• strengthen the Society and the scientific community in negotiations with
publishers about the ownership of scientific research documents at a time
(2005) 2:2 SCRIPT-ed

167
where sky-rocketing journal prices and restrictive copyright undermine their
wide dissemination and persistent accessibility.
• contribute to a world wide, emerging scholarly communication system, which
exploits the full potential of the Internet and the digital representation and
processing of scientific information.
I suppose that these are typical purposes for parallel activities of other research
institutions (like the MIT or the Swiss Federal Institute of Technology in Zurich) and
that they are one of the reasons for our workshop today.
In the following minutes I do not want to bother you with details of our findings.
Basically, we had to tell the Max Planck Society that copyright law has severe
implications for a project like the eDoc Server. After all, the results of our study
summarized in the following show the complexity of today’s digital publishing in the
scientific domain.
Since the general standards of copyrightability are very low, basically all scientific
articles are subject to copyright protection. Article 9 paragraph 2 of the TRIPS
agreement makes it clear that Copyright protection shall extend to expressions and not
to ideas, procedures, methods of operation or mathematical concepts as such. This so
called idea-expression dichotomy preserves the public domain and fosters the
dissemination of knowledge. Scientific ideas as such are not subject to copyright
protection. Nevertheless, this limitation of the scope of copyright does not help much
when it comes to digital publishing. Courts have generally been generous in granting
copyright protection to scientific writings, illustrations of a scientific or technical
nature, such as drawings, plans, maps, sketches, tables and three-dimensional
representations. Besides, the growing importance of the legal protection of databases
has to be considered in this field.
Statutory limitations to this broad copyright protection do not apply to digital
publishing of whole articles or even books on the Internet, even if this is a non-profit
publication for educational or scholarly purposes. This assessment particularly relates
to a new limitation in the German copyright act as amended on September 13th, 2003.
§ 52a German CA proclaims that it shall be permissible to make available parts of a
printed work, works of minor volume or of individual contributions published in
newspapers or periodicals for persons that form a clearly defined group for their own
research activities. Obviously, this limitation does not cover a worldwide publication
on the Internet. It only applies to small groups of researchers that may share their
research results amongst them using computer networks. Moreover, digital publishing
on the Internet involves possibly all national copyright laws because this exploitation
takes place worldwide. Therefore, every national law would have to contain a broad
limitation of copyright in this respect. Otherwise, the right holder who did not agree to
the use of the work on the Internet could bar this use of the work invoking one
particularly far reaching national law.
To sum up, the publisher has to acquire the rights necessary for his envisaged
exploitation. And here we are in the middle of the whole problem: Who owns the
rights in scientific publications?
Under German law, copyright initially vests with the author. Even if scientific authors
are employed at universities or other research institutions, these employers or
commissioners have to contractually obtain rights in the work. German copyright law
does not follow the concept of works made for hire. Moreover, the employment
(2005) 2:2 SCRIPT-ed

168
contract between the researcher and the university is not deemed to contain an
implicit transfer of rights.
Therefore, the Max Planck Society has to acquire rights for digital publishing just as
commercial publishing houses have to do. By the way, at this point it should be
obvious that the Max Planck Society actually appears as a publisher and that this
behaviour causes legitimate concerns in the branch of commercial publishers.
However, this is not the whole picture. The one who is missing yet is the commercial
publisher. According to current standard contract forms used by publishers of
scholarly works, these publishers normally gain comprehensive rights in the
respective work that an author submits for publication in a scientific journal. This
transfer of rights covers the whole copyright term, all countries and every exploitation
right of commercial value, especially the right of making the work available to the
public. Admittedly, a number of publishers nowadays accept electronic preprints on
publicly accessible servers, including the author’s own home page. However, authors
are not allowed to update public server versions of their articles to be identical to the
articles as published. This means that articles may be freely accessible, but only in a
format that makes it difficult for others to quote the article. Moreover, it is unclear
whether this quite recent policy of some publishers also relates to prior publications
never been published online before and whether publishers accept a comprehensive
collection of all publications of an author on a server like eDoc.
In my view, this exemplary situation shed light on some fundamental issues about
digital publishing in the field of scholarly communication: The three parties involved
are the author, the university and the commercial publisher. Since nowadays most of
the creative output of the scientific community is published in journals that are edited
by companies doing this for commercial purposes, it is the publisher and not the
author who decides about whether or not copyright is installed to impede the free flow
of information. Research Institutions like the Max Planck Society are trying to change
this situation by becoming publishers themselves.
One at least theoretically conceivable way to solve these problems would be to
abolish copyright protection in the field of scientific publications or to limit copyright
protection to certain moral rights, especially the right to claim authorship of the work
and to object to any distortion, mutilation or other modification of the work. In this
case, the financing of these works would totally depend on public funding because
neither authors nor commercial publishers would be able to exclusively exploit works
of this kind. Note that in June the “Public Access to Science Act” was introduced in
the legislative process in the U.S. This proposal actually precludes copyright
protection for scholarly works that were created with public funding.
However, this proposal will not likely be implemented in the U.S. code anytime soon.
The reasons for this are manifold. Above all, international conventions on copyright
regulate the mandatory protection of every production in the scientific domain (see for
example Art. 2 of the Berne Convention).
Thus, we all will have to deal with copyright protection of scientific works.
Moreover, the implementation of technological protection measures poses a threat to
the vision of unrestricted dissemination of knowledge over the Internet. Commercial
publishers that use the Internet as a platform to distribute their products will certainly
rely on DRM systems in order to charge their customers. Without technological
measures, free riding will be the rule, not the exemption. And just as works in the
(2005) 2:2 SCRIPT-ed

169
scientific domain are protected by copyright, DRM systems are legally protected
against circumvention. These rules are also laid down in international conventions on
copyright, notably the WIPO Copyright Treaty and the WIPO Performances and
Phonograms Treaty of 1996, which makes it particularly difficult to change the law.
Therefore, we have to face the fact that DRM systems are part of the commercial use
of the Internet, also with regard to digital content of interest for the scientific
community.
Nevertheless, it seems to me that this factual and legal background does not
compellingly put an end to the vision of globally accessible and shared information.
The reason is - and here I can come back to my prior statements - that not everybody
is interested in implementing DRM-systems to protect digital content against certain
uses. Those right holders who primarily publish articles because they want to make
money will more likely impose electronic fences around digital content than will
scientific and academic authors who receive a regular salary and normally do not
depend upon revenues from publishing their works. Scientific authors are - above all -
interested in disseminating their works worldwide in order to earn as much reputation
as possible. Digitization and the Internet are the technical features that make this aim
cost-efficiently achievable for universities and institutions like the Max Planck
Society or even for the author himself. I believe that this differentiation is the key for
future copyright policies in the field of the scientific domain: publishers acting for
commercial purposes on the one hand and authors and research institutions on the
other hand. Legislators should provide measures to preserve for authors of scientific
works the control over their creations, because this group of right holders is less likely
to employ copyright to restrict access to their works.
Some aspects of this policy could be:
• Initial ownership of copyright should vest with the author.
• National legislators should think twice before applying the concept of works
made for hire in favour of universities and other publicly funded research
institutions because it is not impossible that universities as the future right
holders would evolve into commercial publishers that would again impose
technical measures restricting the access to digital content.
• Implementation of DRM systems must not be obligatory.
• It should be acknowledged that the exploiter is not free to implement DRM
systems as he sees fit, but that he has to acquire the right to do so from the
author.
• Publishers should not be granted a neighbouring right for their economic
performance (such a right is currently under debate in Germany).
• Copyright contract law should protect the author against overly broad transfers
of copyright.
Applying this approach will help authors to control the use of their work without
necessarily driving commercial publishers out of business. What commercial
publishers have to cope with is a new group of competitors: universities and even
authors. However, if Adam Smith’s “invisible hand” tells us the truth - something we
have reason to believe - then more acute competition serves the public interest.
Therefore, we should be happy about the implications of digitization and the Internet
in the field of publishing of scholarly works.
(2005) 2:2 SCRIPT-ed

170
Guido Westkamp:
The following remarks are intended to present a brief introduction into those national
and international norms and provisions which currently shape copyright law, in
particular with respect to the thematic scope of this workshop.
In short, copyright is undergoing the perhaps most drastic changes of its history.
These changes can briefly be summarised as a turn from a bundle of certain economic
right towards an access or use right. This is dangerous as it comes potentially close to
embracing access to and use of information and data.
The reason why this is the case is, again, manifold. One reason is, naturally, an
increase in electronic uses of copyrightable material and the ease of copying and
dissemination associated with it. The second reason has a political dimension: in the
European Union, two distinct copyright systems exist: according to the author’s rights
systems, protection is afforded to the personal intellectual creation embodied in a
work. The UK system is much more based upon investment than personality. This,
according to wisdom currently prevailing in the European Commission, necessitates
harmonisation under Article 95 EC as it is conceived as a potential barrier to trade.
Hence, European harmonisation has been introduced by way of directives, the final
two of which – the directive on the legal protection of databases and the directive on
copyright in the information society – are most relevant to the workshop topic.
Finally, international norm setting was achieved under the 1996 WIPO Copyright
treaties dealing with the conceived dangers of the internet.
The most important changes copyright has faced occur on two levels. First, the
traditional set of economic rights is expanded. Secondly, a new set of rights dealing
with the protection of digital rights management systems and technological measures
was introduced which is intended to regulate, where appropriate, access to
copyrighted works.
The analysis will, in turn, deal with these changes.
First, it should be noted that the threshold for determining copyright protection has
been left untouched. Despite a rise in informational works – works which must reflect
a personal intellectual creation by reason of an original selection or arrangement of
contents – neither has the European Union achieved any formulation nor has any
attempt been made to harmonise the threshold for copyright protection. In that sense,
jurisdictions are still relying on the catalogue of works under Article 2 of the Berne
Convention. This is particularly difficult to reconcile with the question as to when, in
particular, scientific and organised works are in fact protectable. The basic and
commonly shared opinion suggests that neither ideas embodied in a work nor
information “as such” is protectable under any type of Intellectual Property right. The
reason is that (1) information is discovered and not created and (2) that ideas have no
form by virtue of which these may be protected. The commonly used example is a
telephone directory, but the same principle applies to electronic databases, software
and compilations of research results. It needs to be pointed out, though, that the exact
demarcation lines are difficult to detect. However, the taking and re-compilation of
existing research and scientific does not amount to a reproduction if what has been
taken does not, in itself, reflect a personal intellectual creation.
The second principle, stemming from a more general notion of Intellectual Property
theory, is a basic distinction between copyright and industrial property rights, most
notably patents. Copyright gives a bundle of rights rather than a use right. A patent
(2005) 2:2 SCRIPT-ed

171
gives a use right but only under commercial circumstances. This highlights, again, the
basic distinction under which copyright does not protect information contained in a
work. Such monopoly only exists, for a much more limited period of time, in patent
law, and for this good reason patent law demands a process of filing and eventual
analysis as to novelty.
Turning to copyright once more, the exclusion of information is reflected in the basic
distinction between physical and non-physical rights. Physical rights are the right to
reproduce the work in a physical form and the right to (physically) distribute those
copies. Once a copy has been placed on the market, the author is not permitted to
exert any further control over its destination, i.e. it may be resold without any
authorisation. In terms of non physical rights, the most striking feature is that any
communication, by whatever means, must reach the public. Most copyright acts
include certain categories of these rights such as uses on television, broadcasting,
theatre (or other) performances etc. these acts are usually, but for the most
extraordinary purposes, always intended to reach a public, which in turn was defined
as denoting a commercial activity. This notion, though not explicitly expressed in
most copyright statutes, can at least implicitly be derived from the interplay of acts of
communication typically reaching a vast audience and the impact of this on the
ordinary meaning of the public: a large group of consumers sharing no ties.
In line with the general copyright principles, protection is granted for the expression
of a work. Ideas and Information “as such” are not subject to such monopoly.
Traditionally, the judicial treatment of the so called idea/expression dichotomy did not
cause any intricacies since the works in question were already published. The
principle therefore enabled a public domain to be built. For the scientific community,
in particular in areas such as natural sciences, engineering and life sciences, copyright
did not prevent access to such information.
This has changed. Exploiters of cultural works have detected a danger to the
commercial viability of their businesses and pledge for stricter protection. These
concerns have been met by various international and European instruments, in
particular:
• The introduction of a general right of communication to the public under the
1996 WIPO World Copyright Treaty for Authors and under the 1996 WIPO
World Performers and Phonogram Producers Protection Treaty (Articles 8 and
6 respectively). This right is understood to complement the non-physical rights
in respect of online transmission in closed or open networks. The acts which
would infringe copyright include the communication of a work to the public as
well the making available (such as on an internet site). There are serious
concerns about such right. Whereas a strict regime may well protect works of
high authorship at an acceptable standard, it appears to give no regard to the
issue of information access. For the purpose of copyright, the quality of a
work is normally the most decisive factor. The right will, however, apply
without such distinction. Secondly, it becomes quite impossible to determine
who the public will be. As noted, traditional copyright draws a distinction
between physical and non-physical rights. The list of non-physical rights
normally found refers to certain technologies which will reach the public, and
the term therefore does not require any elaborate attempts of definition. The
introduction of a much more general right of communication will now place
the emphasis for judicial interpretation on exactly such broad terms. This
(2005) 2:2 SCRIPT-ed

172
means that the commercial factor, which was at least notionally underlying
aspects such as broadcasting becomes inapplicable. Therefore, the right is
dangerously close to monopolising private (one to one) communications and
could even stretch to email communications. This is in contrast to traditional
copyright which does not imply any right as to the communication of a work.
• The European Union has introduced a general right of transient copying under
the 2001 directive on Copyright and Related Rights in the Information
Society. This refers to acts of technically necessary copies which occur once
information is transmitted through electronic networks. Here, temporary or
transient copies effect a faster transmission. They have no value as such. This
right subsists alongside the communication right. The directive has, to ease the
effects of this broad understanding, introduced a defence based upon either a
lawful use of for the benefit of an intermediate service provider who will not
be liable for copyright infringement. However, liability will be incurred if the
transient copy has economic significance. It remains entirely unclear whether
a transmit copy has such value, as it does not result in any tangible copy which
can be re-utilised. For providers of scientific material, the combined effect is
quite serous. Both the reproduction right, as understood in the sense the
directive suggests, and the right of communication to the public will avail
owners of copyrighted materials with a general use right. In effect, no
limitation will apply. As a use right, it remains immaterial whether the act of
access (or providing an opportunity to access for third parties by uploading
information) is intended to make use of the work or of the information
contained therein.
• Thirdly, most limitations with regard to the scientific or research use are
withering away. The directive contains a list of optional limitation member
states may introduce or uphold but seeks no general balancing test for certain
private copies or academic purpose in relation to digital copying. This further
makes the situation for the free dissemination of scientific material
complicated.
• An additional step towards a strict copyright management system is the
introduction of provisions for the protection of both Digital Rights
Management Systems and Technological Measures protecting copyrighted
works. Again, this is likely to cause a development toward an access right. As
yet, it remains dubious whether the access to information only will qualify as
copyright infringement or whether such act would, contrary to all principles of
tort law, amount to an infringement by simply committing an act to
circumvent such measures or manipulate rights management systems.
• The final step in legal development is likely to be a shift from traditional
copyright principles towards an assessment under different concepts. Most
notably, access to copyright works (indeed, database systems) was subjected
to scrutiny under competition principles, and here the most important current
disputes surrounds the character of such Intellectual Property rights as an
“essential facility”. Moreover, there is, at present, a new line of thought
developing which aims to assess Intellectual Property and its interface with
information and communication freedom under constitutional law. There is
nothing of substance as yet.
(2005) 2:2 SCRIPT-ed

173
Apart from these aspects, the online delivery of copyrighted material needs to be
viewed in light of the limitation which had been granted to the area of academic
research. There is a great divide globally and on a European level as to the scope of
limitations necessary and acceptable. The most important issue here is whether a new
levy system should be introduced which would then facilitate the licensing of
university libraries to disseminate academic materials. It should, of course, be noted
that a voluntary license – for the sheer fact of publication – can always be asked for,
but this will remains subject to a license agreement.
As to online delivery, it should also be noted that in relation to entirely open system
access and download facilities will also have an impact on foreign users. It still is
unclear which law will apply to the act of downloading a copyrighted work. The
conventional concepts – either the law of origin or the law of the country for which
protection is sought – are both fraught with difficulties in practical application.
From a legal point of view, it therefore appears difficult to advise in respect of the
FIGARO project on any one aspect. The authors would, however, like to state his
personal opinion that the complexities of current copyright law need to be addresses
so as to facilitate a more flexible, case by case judicial analysis. This could either be
achieved on the level of economic rights, by reinserting a general test based on
aspects such as the character of the work, the purpose of the use, the overall
social/commercial impact on the use of the work. This could systematically also be
achieved by introducing a general limitation under a wider fair use concept.

Burkhard Schäfer:
The traditional academic library faces a crisis. Research journal publishers have
increased their prices by an average of 10% per annum over the last decade. In the
same time, the introduction of the Research assessment exercise in the UK has
dramatically increased the pressure on academics to publish. The market reacted by
an equally dramatic increase in the number of journals. Libraries therefore have to
buy ever more, and more expensive journals. Library budgets in the UK however
grew if at all only in line of inflation. While arts, humanities and law have seen less
dramatic rises in the prices for journals, they too face a problem. Storage space is
increasingly at a premium, especially since university libraries quite often also
double as student microlabs. Electronic publishing can always address this second
issue, even if the electronic journals should remain as expensive as their paper
counterparts. Edinburgh and Glasgow Schools of Law are at present negotiating for
instance the possibility that hard copies of journals are kept in only one of the two
libraries for those journals that both subscribe to electronically. Electronic publishing
offers however even more radical solutions to the problems facing the academic
archive. Firstly, there is the potential for institutional self-archiving. In addition to
publishing in traditional journals, academics can upload their papers in open access
digital archives. In its simplest form, this can simply be their own or their
departmental website. On the next higher level, these archives can be located on the
level of universities. Finally and more ambitiously there are projects such as the
Budapest Open Access Initiative funded by the Soros Foundation Open Society
Institute. Its protocol allows all databases which comply with it to function as if they
were part of one central server, regardless of their physical location. Even small
institution-based servers can in this way generate large aggregates of research
(2005) 2:2 SCRIPT-ed

174
literature. Researchers can self-archive their papers and don’t need to upload the same
paper on several smaller discipline-based archives. Only this creates the necessary
mass to challenge the main journals. In the near future, technologies will be
available that can be used to transform these archives from cheap alternatives to paper
journals to superior competitors, for instance through the possibility to allow users to
customise hyperlinks between papers, a technology already used commercially in the
Juris system. OAI promotes the adoption of their protocol by repositories of all kinds.
Sites which meet the format requirements can register to become OAI-complaint and
have their records joined to an international federation of repositories. In the UK, the
joint information systems committee JISC supports institutional self-archiving.
Edinburgh University is one of the leading partners in a consortium to create and
promote such archives (The Nottingham University led SHERPA project)
Since electronic publishing has low initial costs (compared with paper publishing)
“free-to-air” journals form a complementary strategy to reduce costs, firstly by
offering free or low costs journals, secondly by creating market pressure for
established publishers to reduce their costs too. Learned societies in particular can act
as competitors to established publishers. In law, the electronic law journal project at
Warwick University offers two free to air law journals, the Web Journal of
Contemporary Legal Issues is another examples of this new breed of publications.
Unlike self-archiving, open access journals do not face the problem to negotiate
copyright release for articles published already in paper journals. They do however
face potentially higher costs for peer reviewing, copy-editing, proof reading and
general administration. Possibilities to meet these costs range from “author pays”
models to sponsorship to a system of “value added” publishing were the core texts
remain free of charge, but individuals and institutions are offered additional services
(like for instance the hyperlinking facility discussed above, customising of papers
etc). It also allows to tap into this huge underused resource (in Europe), students.
While student edited journals are the norm in US law schools, they are the exception
in the UK. Apart from the potential to reduce costs, they have pedagogical
advantages, from involving students early on in research to more reader friendly
writing by academics. Edinburgh University is presently developing a student edited
webjournal on IT and IP law.
These different approaches to technology facilitated non-traditional publishing create
very different legal problems. Self-archiving faces very obvious copyright issues not
encountered by free to air journals which publish original work. Other legal problems
are less obvious. One function of traditional publishers is to protect their authors from
litigation. Think for example of a research article that questions the efficacy of a
commercial drug. Will an underfunded open source archive/journal be as robust as a
commercial publisher in resisting threats of libel litigation? Will it simple remove
“dangerous” papers? To the extend that OAI establishes a successful monopoly,
should there be a legally recognised “right” of authors to upload their papers?
Conversely, under which conditions should an author have a right to withdraw an
uploaded paper? Imagine a lecturer who uses an archives facility to hyperlink papers
needed for a course, and who makes the so created customised archive available to his
students. Does this create potential violations of the moral rights of the authors, if for
instance they are linked to a derogatory comment of their publication? If a free paper
or archive changes ownership and becomes commercial, should authors have the right
to withdraw their papers?
(2005) 2:2 SCRIPT-ed

175
The main argument of my talk is hence the interaction of what is technologically
possible, institutionally required, legally permissible and economically or
academically desirable. Each aspect of this equation influences every other one, often
in unpredictable ways. Cheap as opposed to free online journals for instance might
initially increase costs to libraries, as they will be ordered in addition to existing high
impact journals, at least initially. Can institutions/publishers/governments financially
support these loss making journals, or does this raise unfair competition issues?
Universities and Ministries of education have established reward structures, often
based on instruments like citation indices. The precise nature of these instruments will
determine the interest the author has in his work. If there are problems (for instance
regarding the standing of online journals, or the problem of citation of online
archives) are solutions best developed in law (a moral right to be quoted), technology
(automatic electronic registration of “hits” on websites), in a change of these reward
structures, or a mixture of these measures? In what follows, I will explore the
interplay of these different issues, arguing for more radically new forms of publishing
which technology makes possible, and which pre-empts many of the pertinent legal
issues (but raises new ones)
The starting point for my own analysis is going to be where our discussion yesterday
ended, the issue of peer review and quality control. I think that this is the most
problematic “Red Herring” which has had a very negative influence on the entire
debate. One of the main reasons given by academic publishers for their high charges
is the service that they provide. The main component of this service is the system of
peer review. Because publishers guarantee the quality of the academic end product, so
their argument, they are not only entitled to but depended upon high charges for their
journals. This claim has two presuppositions: Firstly that the main result of academic
work is an object, e.g. an article. Secondly, that it is important to ensure the quality of
this article. Both presuppositions are false, I think. I want to start with the first of
them, and ask somewhat provocatively: what sort of “artists” are academics, what sort
of artist should they be? Are we more similar to a sculptor, to someone who invests a
lot of hard work to produce a physically touchable end-product, the sculpture? Or are
we more like performance artists, like singers or actors, who produce a less tangible
product?
My argument is that for all the wrong reasons, academics and academic products are
at present more similar to sculptors, or poet and novelist, we “produce” finished
objects (articles, books, inventions). What we should be doing is much more similar
to performance artist, inviting others (academics and the general public) to participate
as spectators in our work, not as buyers of it.
This obviously has implications for the debate on intellectual property. Is it really
copyright protection that we need, or is there some leverage to be gained from the
debate on the protection of rights similar to copyright, for instance the concept of
neighbouring rights?
The issue of quality control is directly at the heart of this matter. I think that what
academia is suffering from collectively, over the past 200 years at least, is a sort of
collective Stockholm Syndrome. The Stockholm syndrome is the psychological
theory that hostages tend to fall in love with their captors and that they develop
emotional bonds with them. I think that is very much what we did as academics. We
had to have peer review for mainly economic reasons, not epistemological reasons,
and we fell in love with our captor. We invented reasons why peer review is a good
(2005) 2:2 SCRIPT-ed

176
thing in itself, and why it is absolutely needed for quality enhancement. But that is
really only the same survival strategy hostages take towards their captor. It makes life
so much easier if we can reinvent ourselves as the willing participants in a totally
pointless process rather than its victims.
Why is peer review a bad thing? Normally it is seen as the hallmark of academic
publishing or academic activity. It is said to enhance or ensure quality. Now, if we
put our prejudices and comforting narratives aside for a moment, we can realise
immediately that this is untenable. Peer review does not enhance quality. There are
very few papers or results that are enhanced by the process of peer review.
Granted, sometimes one might have a good reviewer whose comments are
genuinely helpful. But one can equally get helpful comments from colleagues or
from people met at a conference. Peer review is not different in that respect from any
other person commenting on a first draft of a paper. What peer review does is not
primarily to enhance the quality of academic research. It produces or excludes
research that is too bad to be published, and that is not the same thing at all. It doesn’t
increase or improve the quality, it only deselect pieces of possibly low quality. Once
that distinction is made we realize what the purpose of peer review really is. In an
environment where the production of tangible forms of knowledge and its
dissemination is expensive and the channels of communication are restricted, some
form of peer review is indeed necessary. Traditional printing is expensive. You have
to cut down trees and transport them to some other place where they are manufactured
into paper, and then you have to transport the paper to somewhere else again, where
the expensive printing machinery is located, and as a result you have heavy books
that need to be transported yet again and finally stored. All this is expensive. It is also
costly on the libraries or the intended reader who has to find storage space Because of
these costs, it is not feasible to print every idea, thought or inspiration academics
might have. For all these reasons, we need a system, or rather we needed a system in
the past, where the worst of academic output was deselected at source. To the extend
however that we can reduce these costs, exclusion of low quality material becomes
less important.
This adjustment between costs of production and quality control is by no means
unique to our time. At a time when books where hand written, peer review and
negative quality control were even more important than they are today. Prior to the
invention of the printing press, copies had to be hand-made. Since it took great effort
and considerable investment in time to produce a single copy, it was crucial not to
waste it on works of insufficient quality. Consequently, the only book frequently
copies was the bible. This guaranteed quality due to the high standing of the author.
(though arguably, there wasn’t any peer review).
Now we have the internet, we can produce output without very high costs, and with
that peer review loses its raison d'être. There is no any longer the economic pressure
to deselect a low quality material. And with that we can stop to pretend that peer
review is actually enhancing academic research.
Not only that, there are other issues involved here. Firstly, peer review is failing even
measured against its own standards. Alan Sokal and Jean Bricmont started the
onslaught on the peer review system when their spoof article “transgressing the
boundaries - towards a transformative hermeneutic of quantum physics” was
published in the peer reviewed “Social text”. For some time, scientists could console
themselves with the thought that this problem was unique for a very specific branch of
(2005) 2:2 SCRIPT-ed

177
contemporary philosophy. However, last year’s scandal at the MIT paid put to this.
An academic tipped for the Nobel price on the strength of his long list of publications
in the leading peer-reviewed journals was finally found out as a fraudster who had
deliberately (and not even in a particularly sophisticated way) used falsified data.
There are two reasons for the demise of peer review as a useful tool for quality
control. One is increasing specialisation. It is increasingly difficult to find competent
and independent peers - more likely than not, they are going to be co-authors or
direct competitors. Secondly, the high costs of in particular empirical research make
replication of results increasingly difficult. Funding councils might be prepared to
finance groundbreaking new research. They are much less forthcoming in financing
the testing and evaluation of old results.
Not only is peer review inefficient, it is positively harmful. First, as discussed
extensively in the literature following Kuhn’s “Structure of Scientific Revolutions”, it
re-enforces conservative tendencies within established scientific paradigmata. In a
peer review system traditional knowledge is more easily accepted, newcomers and
new ideas have difficulties in passing through the border police of academic quality
control.
Second, it puts an epistemological important part of academic discourse behind closed
doors. Sometimes the reviewers actually have good ideas. The only way I can make
these public is by a small footnote. I thank my anonymous referee for this brilliant
comment. Would it not be much better to publish both of them together ? The internet
makes this possible. With that, there is no more reason to keep the process of review
behind closed doors. Internet publishing allows us in principle a process of
permanent, and open, peer review in which readers annotate constantly the published
text. You can think of an article which permanently accumulates comments, very
much like the process at Amazon for instance, where people can comment and write
reviews of books, for everyone to read directly on line. Epistemologically speaking,
this process is considerably more efficient. The number of reviewers is increased, the
critical process of debate is in the open, and the reviewing process is not completed
after an arbitrarily set (and rather short) period of time. The one possible comeback
argument in favour of peer review is that it creates trust in the wider public. Policy
maker, lawyers (evidence) and other members of the public need to know which
scientific theories they can trust, and which are “junk science”. However, even for
that function the anonymous peer review fails. Not only for the factual mistakes
mentioned above. To judge the validity of a statement, as any professional spy would
tell you, it is crucial to know “sources and methods”. To determine e.g. if a foreign
country is in the possession of dangerous weapons, to know that someone considers
this likely is not enough. To determine the credibility of such a statement, it is crucial
to know who these people are, if they have ulterior motives or a stake in the outcome
(disgruntled dissidents hoping for crime change etc.) The same holds true for quality
statements about academic work. Indeed, the US may be moving soon to a mandatory
publication of failed pharmaceutical experiments – information typically filtered out
through the peer review process of scientific publishing.
Traditional peer review – or, generally the idea that academics produce products, high
quality finished products also meant that academic research and teaching became
artificially separated. The very best articles, the leading articles in their field, are
often totally inappropriate for teaching proposes, because they don’t invite the reader
into the thought process which produced the paper. They give us authoritative end
results, but little guidance on how the author came from an initial research question to
(2005) 2:2 SCRIPT-ed

178
just that result. We give our students finished results, and they sit there and marble
how on earth anyone could come up with such an answer. Why? Because the way we
produce our knowledge does not invite the reader to participate in the process that
created that specific research output, and the authority bestowed by the reviewer’s
seal of approval discourages the sort of critical attitude that we want to instil in our
students. Internet publishing and its low marginal costs allow us to move away from
producing as output objects (papers and articles) to a process version of academic
research which is much more appropriate to what we are doing, epistemologically and
pedagogically.
That is why I think peer review is a “Red Herring”. The internet allows us finally to
produce and disseminate knowledge in a way that is appropriate for what we are
actually doing, a true image of the process that is scientific discovery. We can publish
all our papers as they emerge, which also increases the chance to find out cheaters.
This colleague at the MIT miraculously was able to publish 200 papers a year. Had he
been forced to demonstrate how he came to his results by a step to step process that
the reader could follow, it would have become evident at a much earlier stage that
something wrong was going on. As teachers, we all know this simple truth: Frequent
meetings with our students and explanations of their work as it progresses are the best
way to ensure that they are indeed the authors of the dissertations that they submit,
and the most reliable safeguard against plagiarism. We should expose our colleagues
to the same scrutiny that we consider appropriate for our research students, and a
process model of academic publishing allows us just that. Two emerging technologies
on the internet already address this issue. One was already mentioned in a different
context: Wikis, the most famous being wikipedia, allow a continuous process of
writing, criticism and rewriting of information. In addition to the benefits mentioned
in an earlier contribution, the ability for all users to amend anytime an earlier entry,
and where issues are particularly controversial, extend the encyclopaedia by a linked
discussion board, allows to make optimal use of “the wisdom of the crowds” and
brings the process of analytical criticism into the open. Blogging is another
development that supports the same idea. Last year, for the first time, students in my
master course used bloggs by influential researchers as source for their essays. Blogs
are an even better example of the process I indicated above: away from the static end-
product to participating in a performance by the author.
That is the first argument I want to make. Peer review was economically necessary in
the past, but it always was epistemologically counterproductive, and now we have the
means of academic production to get rid of it.
My second argument is that at the moment, we also observe a failure of the academic
imagination and we also, as a result, will make things worse before they get better.
What do I mean by that? There are examples of free to air academic journals. My
own association, BILETA, is involved with a journal for internet law and technology
which is free-to-air. We have very little production costs as papers are sent to referees
by e-mail and the reviewers work for free We can do that really on a shoestring
budget. But the format which we are using is still relatively close to that of the
traditional journal. At the centre is still the refereed article. We are slightly more
flexible in having a category for work in progress. This is already a big concession.
Based on what I said before, I think that is where we as academics are failing. We
should think much more radically of new ways to publish on the internet. One of these
more radical forms I have already indicated, publishing as continuous annotating and
commenting of articles. A draft paper is put online, other people can post their
(2005) 2:2 SCRIPT-ed

179
comments. The author then re-writes the paper and updates it constantly. This is the
process model of academic work, as opposed to the product model. Similar forms of
dynamic collaboration do already take place on the internet in contexts other than
academic publishing. Harvard University’s “open law” project produces compilations
of comments by experts on draft submissions to courts. The best known example, at
least as basis for an analogy, is the LINUX software system: multiple users constantly
upgrading and improving a system that is never really completed.
However, things are going to get worse rather than better, for some time to come at
least. As we have seen yesterday, excessive journal prices are at least partly due to
market failure in the field of academic publishing. First, there isn’t enough
competition between publishers, with a small number of publishers dominating the
market. Second, the people who decide which journals to buy are not the people who
actually have to invest money. We tell our librarians that we really need this new
important legal journal, and it is not us who are paying for it, it is our university. As a
result, we might be overenthusiastic in recommending journals for the library. If
library budgets and the salary budgets came from the same pot, and the savings we
made in recommending journals for the library translated into salary increases, this
might result in rather interesting competition in the field of academic publishing, but
it is of course not going to happen. This market failure that allowed academic
publishers to increase on average annually journal prices by 10% will get worse now
that traditional publishers move into electronic publishing. Source of concern is
especially the idea that we now need licences to access a website, rather than acquire
property of the journal itself. This potentially makes it much more difficult to switch a
journal against a (better priced, or better quality) competitor. Imagine that you
subscribe for ten years to a previously rather good journal. In due time, a better or
cheaper journal comes along, or the established journal loses its academic reputation
together with its editor. To swap for the better journal under the new system might
lose the access to the past backup copies, which are also only available on the
publisher’s website. This would mean in effect to lose the investment made over the
past ten years. This is a huge disincentive to switch. There are legally issues involved
here. In particular, I think that potentially, competition law is going to provide
academics and universities with some leverage, and it might be possible to argue that
there should be either a legal or contractual obligation to make backup copies
available if subscription was paid for them at the time they were originally published.
I now want to draw some very tentative conclusions for the legal issues involved in
the alternative form of publishing, and if IP is part of the problem or of the solution.
Firsy, I want to re-iterate a point made by most of the contributors. Academics aren’t
primarily motivated by the commercialisation of their research and are by and large
willing to make it available for free, provided that the “alternative payment” in terms
of recognition, citation and peer-esteem can be efficiently collected. IP regimes based
on the notion of commercial exploitation in the Anglo-American tradition are
therefore least suitable for academic publishing, online or offline and contributed to
the undesirable present situation in which tax funded academic research is made
available for free to publishers who can than reap the financial benefits of the initial
public investment. However, I think we have to be more specific than that and
exclude at least one form of academic publishing, the publication of teaching
materials. Since I do not compete with fellow academics for a scarce resource
“knowledge”, I do not need to exclude them from using my work. We do compete
however, as members of our teaching institutions, for the scarce resource “good” (or
(2005) 2:2 SCRIPT-ed

180
“fee paying”) student - at least in the Anglo-American world. Some of the
problematic consequences of this competition can already be seen. Some institutions,
including my own, make some teaching materials only available to registered
students, potentially limiting in the process their duty to disseminate knowledge. Nor
is it necessarily the case that a “commons” in academic writing is equally benign if it
comes to teaching material. The move by MIT to make all their teaching material
freely available online can also be interpreted as a rather sinister attempt to prevent
competition. Universities with sufficient endowment income might decide that while
it is unnecessarily burdensome for them to move into online education themselves,
less established institutions who need this source of income might use it to compete
eventually also in other fields, e.g. research. By making their own material available
for free, they decrease the value of paid-for online education offered by competitors.
Again, this is more a competition law problem than an IP issue (though of course,
conceptually they are the same, IP rights creating limited monopolies)
I already indicated above that the conceptual vocabulary of “neighbouring rights”
might be useful to develop a theory of copyright protection for “process publication”.
Technological solutions can further contribute to an adequate protection of the author.
“Moral rights” in the continental European tradition ensure the right to have one’s
work attributed, which is important if the predominant reward system is based on the
ubiquitous citations indices. In addition to these citation indices, it is possible to use
much more direct measurements of the impact of internet based publications that do
not rely on the observation of IP law. It is for instance possible to measure the hits
that a website receives, the number of downloads made of an article, or the links to a
website (already used for ranking purposes in search engines like google). Both
models can be refined. A low quality article might get a substantial number of hits by
users who leave the website more or less immediately when they realise that the work
in question is irrelevant. To address this distortion, users can be asked to “grade”
websites that they visit. This can be seen as a new application of Lessing’s idea of
“law as code”. IP protection of academic writing becomes less important if it
becomes physically impossible for a user to access an article without leaving behind a
trace that measures the impact or importance of this article.
However, this new form of publishing also creates new problems. As described
above, the new form of publication combines academic papers with publication of
annotations and comments. Discussion of IP issues in academic publishing focus so
far on the protection of the author of the original paper. However, the IP rights of the
commentators are in this model of equal importance. Partly, this is an issue for the
reward systems employed by several higher education regimes. To work to its full
potential, it is necessary to recognise the work invested in commenting on other
peoples’ writing in the form described. At present, peer reviewers are either
financially rewarded or work entirely for free. Under the new system, their work
would be of the same category as any other academic writing and should be
recognised as such. The system of “moral rights “ however can potentially create
conflicts between the commentator and the original author. The IP law protection
against “denigrating treatment” could allow an author to delete or otherwise suppress
comments that are seen as too negative, especially if papers are published on the
website of the author or his institution. Repositories managed by third parties, and the
contractual permission to accept annotations and comments, might be a way forward.
(2005) 2:2 SCRIPT-ed

181
Secondly, commentators could artificially increase their own impact rating by posting
low quality comments on high impact/high quality papers. Again, technological rather
than legal solutions might mitigate this problem
Another problem could be labelled: “quality control, the empire strikes back”. It is a
quality control problem in the sort of academic-work-as-process model which I am
advocating here. Obviously, one consequence of my proposals will be the
proliferation of low quality articles. It might be a problem to use or to optimize the
search engines to get only those articles that are of relevance to a potential user. On
the other hand, search engines become so important under this system that they could
seriously distort the quality measurements discussed above. Most of the pertinent
legal issues have already been discussed in other research on e-commerce, in
particular if there should be a right to be listed on search engines, if intentionally
misleading ranking should carry legal penalties, if financial incentives to rank one’s
site more highly (as in the case of google) are akin to fraud or conversely, if the use of
misleading meta tags by authors to increase the popularity of their site should carry
similar punishment.
A potentially even bigger problem is to authenticate the authors online. Amazon can
again serve as an example. Readers are invited to write reviews on books, and their
reviews influence buying behaviour. Obviously, this system is open to abuse if
authors write positive reviews of their own work under a pseudonym. In a system
where peer review is replaced by continuous commenting and annotating, authors
might feel tempted to post positive comments to their own papers. More generally, the
issue becomes how we can make sure that the person who claims to have written an
article online is really the person he claims to be. This is where some equivalent to
traditional publishers might become necessary, taking up the theme of FIGARO. The
third party repository would not so much guarantee the quality of the content of
academic writing, but the identity of the author. The question of anonymity and
pseudonymity is again a general problem for the internet, with discussions ranging
from internet democracy issues to the manipulation of share values in internet
discussion forums. How can you make sure that people are who they claim to be?
The final problem is how we can avoid the tragedy of the commons repeating itself?
If most academic writing is made available for free online, how can we prevent
someone appropriating the results? That is where IP is challenged. I am not totally
sure how far the Open Source software licence can really be translated or extended by
analogy to academic publishing. I think the way you might use Linux differs from the
way people might misappropriate academic results in publishing. It would be partly
an empirical question, partly a legal question of how we can ensure that things that
have been produced for free by the academic community remain free. In an extreme
scenario, someone like Bill Gates (who is in the process of copyrighting electronic
replications of all major works of art) could come around, “harvest” articles from free
websites, collects them in commercial databases with some “added value” and claim
ownership for it. Technically, this is indeed feasible provided again that owners of the
main search engines also have a property interest in the collections of academic
writing. Theoretically, it would be possible under such a scenario that third parties can
only access commercial collections (potentially protected by IP law on databases and
similar collections) of academic articles, while free sources are simply not listed any
longer. As with most of the issues discussed here, the solution once again might be in
the field of competition law rather than IP law as traditionally understood.
(2005) 2:2 SCRIPT-ed

182
Elena Bordeanu:
Firstly, I totally agree with your opinion on the quality control regarding the difficulty
to find out the origin of information on the Internet, who is the owner. Let me bring to
you an example from my teaching experience. There, most of the time, teachers
wanting to evaluate a student’s work and having reasonable doubts over the origin of
such a work, do searches on the Internet with keywords for finding any article that
would treat the same subject or that can be connected to the idea expressed by the
student.
Secondly, I also criticize your opinion as I am thinking of self-regulation. And talking
about self-regulation I am referring to the Common Law perspective which is very
different from the European one. In Europe we tend to regulate everything, to have
rules for every practical issue. Opposite to this, there is the experience of self-
regulation which is used, for example, for arbitration. Indeed, the internet has its
origin in the self-regulation idea. So, I don’t know if the question of open source
could be a good solution for academic research, but there are possibilities to apply it
properly.
After this introduction, let me carry on with my presentation. I wanted to say that it is
a pleasure to be here and talk about copyright in a country where printing has its
origins. I am talking about the idea of copyright related to the possibilities of
reproducing the work. Even the concept of copyright refers to the American approach.
We are talking about copy, the right to copy. In French law we are rather speaking
about the right of the author, “droit d'auteur”. So starting with that, we will see that
there is a big difference between the two approaches. And why I am talking about the
author's right? Because according to French law, the author is all important. He is the
person having all the rights. From this point of view can we talk about a real right to
copy? Is there a right that is conferred to the users to make copies? And I am only
referring to private copies here.
Some authors found the source of such a right in the European Convention of Human
Rights – these rights are well known, and in France appear in the declaration of 1789.
All this is in connection with the freedom to access information, which is a part of the
freedom of expression. And once again, I am wondering if there really is a right to
access the information.
The right to make private copies appeared in legal texts, European or national ones,
like an exception to the rights granted to the author of a copyrightable work. As I said,
he is the one having all the power over his creation. The Intellectual Property French
code says that “once a work has been disclosed, the author may not prohibit copies or
reproductions reserved strictly for the private use of the copier and not intended for
collective use with the exception of copies of works of art to be used for purposes
identical with those for which the original work was created and copies of software,
other than backup copies.” What is then its legal regime? I won’t lay emphasise on
this debated issue anymore but I would like to strengthen an idea related to one of the
contributions presented yesterday. It is about the relation between DRM and this
exception (could this be extended to all the other exceptions mentioned in the
European Directive?).
Digital Rights Management could be a valid answer to the question of implementing
copyright law. We are all aware of the fact that new technologies increase the transfer
(2005) 2:2 SCRIPT-ed

183
of information and make it difficult to track a copyrightable content, to find out who
the real author of a work is.
And if we are talking about technology, I would like to mention that not only the most
advanced of them could be used for protection. The simple ones could accompany the
legal framework. I am referring to a French case where a text belonging to the public
domain was published on two websites. It was a question of infringement of the
author rights of the first party by the second. I am now bringing this to you in order to
underline the issue of proof on the internet. How can we prove that we have an
original or a copy? This is an important question. In this case it was technology that
brought the answer. In particular, the two ways of typing the text were compared and
the result was that in the same cases we had the same errors, for example spaces
between the words. The parties managed to show that it was possible to compare two
texts on the internet. And it seems to me that such techniques could not only establish
content monopolies, they could also contribute to the disappearing of any exception
to this.
And we have seen that there are some limits - I am referring to software. For example,
it is common nowadays to buy a CD or some other goods on the internet and to
discover that they can’t be used on another medium. And related to this, let me
present you two other recent French cases (the first one of June 2003, the second of
September 2003). What happened in these cases? Two persons bought a CD with
music and discovered later that it was impossible to listen to these CDs on other
devices, for example on their car radios. And on the CDs it was only mentioned the
followings: “this CD contains a technical device that limits the possibilities of
copying it”. The persons finally complained against the distributor and the editor and
they won. It is important to look at the legal regimes invoked in the two cases. In the
first one the person chose to act on a consumer law basis and to refer to a failure in
delivering the correct information on the qualities of the product. And in the second
case the person used contract law to prove that the product bought had latent defects.
There is an article in the French civil code saying that “a seller is bound to a warranty
on account of the later defects of the thing, which is not adequate for the use for
which it was intended or which so impair that use, that the buyer would not have
bought or would only have given a lesser price for it had he known of them”.
Thus, we can find other ways of protecting information on the internet, not only the
technical ones or those related to intellectual property.
In these cases, judges gave an injunction to the distributor to use the following
sentence on the products: “Attention, this product can't be read on any machine or car
radio.”
Therefore the use of different anticopying devices won't be easy in practice. Most of
the users understand to preserve their right to copy, if we can actually talk about a
right to copy. And they will find more traditional legal ways for protecting it.
But why is this relevant for this meeting? We should actually be debating over the
academic research and copyright. Are we really concerned about this aspect of
copyright? It seems to me that academic research has two sides. On the one hand, we
need to communicate our ideas from a scientific point of view, but also from a
commercial point of view - and I would also like to refer to the four scenarios
presented yesterday by one of the participants. And on the other hand, we need to
have access to other resources, which actually implies making copies. The status is
(2005) 2:2 SCRIPT-ed

184
complex and is placed in the very heart of the debate on the relation between
copyright and new techniques.
What exactly is more appropriate to such a case? As I already said, I could think of
Open Source as an example. A question was raised yesterday about the difficulties to
put a boundary to such a creation. But what happened with the collective works or the
works made for hire? It seems to me that traditional IP law, is able to deal with such
kind of situations.
Another aspect I would like to focus on is the exception to the reproduction right
mentioned in the Article 5, 3
rd
paragraph of the European Directive. In France there is
a long debate over this issue. I am referring to the use of copyrighted works for the
sole purpose of illustration of teaching or scientific research. The main reason for
mentioning it is that most of the teachers are using illegally all kind of information for
their lectures. The fact of copying information and using it for your lecture is today
illegal. And apparently this situation will continue because France isn't ready to grant
such a right to universities or research laboratories. I want to quote a reply of the
French Ministry of Culture from April 2003, who was saying that this option can not
be considered because it is incompatible with international treaties as well as with
European directives, “letter et esprit de la directive”. The emphasis lies once again on
the author.
Meanwhile the French government has created a working group of representatives of
universities and, of course, of all the organisations representing the authors in order to
reach a fair and balanced deal.
In practice, the French universities make contracts with different organizations of
authors for determining the conditions of use of the works protected by copyright.
Here I want to refer to a presentation done yesterday that was addressing contracts
and the fact of transferring rights. Of course, you all know that in France it is
impossible to transfer the moral rights of the author. I would like to underline this.
The third point of my presentation is rather a complex of different legal aspects. I
would like to concentrate on personal data aspects. Someone mentioned yesterday the
control of academic content through subscriptions. But this implies a large amount of
personal data and we all remember what happened with Microsoft Passport. I would
also like to underline another issue, that of using different techniques for tracking
illegal content on the internet. The French commissioner for personal data protection
was requested to give his opinion about the use of such a device, software called Web
Control that was collecting IP-addresses on the internet. The French commissioner
disapproved it in March 2001 because IP-addresses are considered personal data.
Furthermore, I already mentioned some limits of digital rights management. But I am
wondering if law is really the answer to the problem. I would like to quote John Perry
Barlow who was saying that protections that will be developed will rely far more on
ethics and technologies than on law. Encryption will be the technical basis for most
intellectual property protection.
We should all admit that Digital Right Management could partially affect piracy. If
piracy could be decreased by just a few percentages points from using the digital right
management, than this might translate into millions of dollars of otherwise unrealised
revenues. But Digital Right Management does not come without a price. And here I
would like to refer to the cost of building, deploying or maintaining a DRM
infrastructure.
(2005) 2:2 SCRIPT-ed

185
Moreover, the DRM protected content is economically less valuable than unprotected
content. So, deploying DRM will result in fewer sales of legitimate content. The
question is whether or not the benefits of DRM outweigh its costs.
In addition, I would like to mention the issue of proprietary technology that does not
allow us to read a certain file on several media. For example, Acrobat gives us the
possibility to draw some limits to the use of documents. Thus, I can forbid printing or
copying such a document. If I manage to do the forbidden action by simply using
different software, could this be considered a circumvention of the technical
protection?
Will all this be enough for protecting copyright? Can we actually think about the end
of copyright law? I am confident that a balance will be drawn between the conflicting
interests of the authors and of all the other users or organization interested in it.

Federica Gioia:
1. The agenda prompted participants to envisage “how a legal framework might be
adjusted in order to preserve intellectual property rights of scientific authors”. As
formulated, it sounded as if it assumed that such adjustment is indeed necessary for
the purposes of the digital environment. I would like to question such an assumption.
Are we sure that the digital environment per se requires any adjustment of the
traditional principles of intellectual property whatsoever? A number of factors seems
to indicate a negative answer, namely:
i. The EU approach. Whereas no. 5 of the preamble to the “Info Directive”
(Directive 2001/29/EC of 22 May 2001 on the harmonization of certain
aspects of copyright and related rights in the information society) states that
while “technological development has multiplied and diversified the vectors
for creation, production and exploitation”, “no new concepts for the protection
of intellectual property are needed”; all that is required is that the laws on
copyright and related rights be “adapted and supplemented to respond
adequately to economic realities such as new forms of exploitation”..
ii. With specific regard to publishing, on the assumption that this is one of the
most significant ways in which the fundamental right to expression is
exercised, it is interesting to have a look at the Universal Declaration of
Human Rights (adopted and proclaimed by the General Assembly of the
United Nations more than half a century ago, when the digital world was long
to come: on 10 December 1948. Similar provisions are to be found in the
European Convention for the Protection of Human Rights and Fundamental
Freedoms as well as in the International Covenant on Economic, Social and
Cultural Rights). Article 19 UDHR provides that the right to freedom of
opinion and expression includes the right “to seek, receive and impart
information and ideas through any media and regardless of frontiers”. Indeed,
the last qualifications seem to encompass the very characteristics of the
Internet. It is widely accepted that the pursuit of academic research and the
related academic freedom are among the interests protected by freedom of
speech. Furthermore, the drafters of the UDHR were well aware of intellectual
property: Article 27 provides, on the one hand, that “Everyone has the right
freely to participate in the cultural life of the community, to enjoy the arts and
(2005) 2:2 SCRIPT-ed

186
to share in scientific advancement and its benefits”; on the other hand, that
“Everyone has the right to the protection of the moral and material interests
resulting from any scientific, literary or artistic production of which he is the
author”.
iii. a glance at the historical perspective: it has been noted that “throughout
history numerous innovations and advances have created more efficient
systems to disseminate data. For example, development of the mail system,
printing press, telegraph, telephone, radio and television all represent vast
improvements in the ability to disseminate information. The Internet is simply
the most recent, albeit the most effective, technological advance useful in the
dissemination of information”. As Beckerman-Rodau clarifies, the “migration
of intellectual property to the Internet” only “exacerbated the problem of
unauthorised copying and distribution of intellectual property, which militates
in favour of strong private property protection for intellectual property”.
iv. some countries, inter alia Italy, have enacted legislation to the effect that any
and all rules governing the publishing industry (including issues such as
authorizations, editors’ liability and the like) shall be the same, irrespective of
the paper or digital nature of the publication at stake (Italian Law 7 March
2001, no. 62, article 1). The need for all journals, including electronic journals,
to comply with the regulations on publishing can now be safely established,
after some uncertainties and discrepancies had arisen in case law.
The foregoing elements seems to convey and support the idea of intellectual property
law as being per se media-neutral, i.e. not influenced by the specific characteristics of
the means in which it is embodied or through which it is exploited, subject only to the
need to take into account any new factual scenario which might develop. As stated by
WIPO, “the fundamental guiding principles of copyright and related rights … remain
constant whatever may be the technology of the day: giving incentives to creators to
produce and disseminate new creative materials; recognizing the importance of their
contributions, by giving them reasonable control over the exploitation of those
materials and allowing them to profit from them; providing appropriate balance for
the public interest, particularly education, research and access to information; and
thereby ultimately benefiting society, by promoting the development of culture,
science, and the economy”. All such objectives seemed to be already well enshrined
in Article 27 UDHR.
2. Against this background, I have no doubt as to the need for scientific authors to
have their rights protected under intellectual property laws (I come thus to another
point of the agenda), as clearly mandated by Article 27 UDHR. Not only is there
definitely such a need, but there is no such thing as an alternative between the Anglo-
American as opposed to the continental-European approach. Both approaches are
needed: the Anglo-American, with a view to preserving the revenues possibly arising
in connection with the protected subject-matter, the continental, with a view to
ensuring the moral rights of the authors. While the EU has been most effective in
ensuring protection of the former aspect, it still has to take care of the latter. I will
here forward a second provocative remark: are we sure that intellectual property and
specifically copyright, as currently conceived and shaped, is aimed at protecting
authors? At least in the EU, the trend seems rather aimed at protecting those who, by
investing money and dedicating financial resources, make creation possible in the first
place. When it comes to copyright protection in the EU legislative documents,
(2005) 2:2 SCRIPT-ed

187
productivity, economic growth and employment are far more frequently mentioned
than individual creativity as objectives worth pursuing and achieving. Most of the
initiatives underpinning European copyright legislation seem to strengthen investors,
as opposed to authors; infrastructure providers, as opposed to content-providers. The
assumption underlying the whole European copyright system is that an appropriate
reward of those who sustain the financial burden of creativity is the condition upon
which the survival of the latter relies. Accordingly, authorship seems on its way to
evolve very much along the lines which led, in the field of patents and scientific
research, to enact rules vesting the employer with economic rights of patents for
inventions developed within the context of employment relationships. In the context
of scientific publishing, if title to economic rights is to be vested with the party
sustaining the financial burden underlying the scientific contribution and making it
possible, it might be appropriate to draw distinctions among protected subject-matters
based upon the kind and amount of financial and economic investment involved in
their creation. It is common knowledge that scientific research relies more heavily and
significantly on financing than literary, legal or artistic writing and publishing.
This seems the approach already taken by a number of universities. The University of
Oxford vests itself with rights’ ownership in respect of “all intellectual property …
devised, made or created … by persons employed by the University in the course of
their employment”, provided however that the University “will not assert any claim to
the ownership or copyright in … artistic works, books, articles, plays, lyrics, scores,
or lectures, apart from those specifically commissioned by the university”. Columbia
University vests the faculty with ownership of copyright whenever authors “make
substantial use of financial or logistical support from the University beyond the level
of common resources provided to faculty”, provided however that “even when
intellectual property rights are held by the University, revenues from new digital
media and other property should be shared among its creators”.
Once the principle is established, various contractual schemes and devices might be
developed, with a view to preserving adaptability and flexibility depending on the
circumstances at stake (namely, type of relationship, amount of support by the
institution, type of work and perspective forms of exploitation available; type of user).
Pursuing the objectives of keeping prices low, quality high and maximising
distribution of the information might require publishers to resort to alternative forms
of financing (other than charging a fee for consultation of and access to the
information as such, possibly including advertising banners, sponsoring and the like).
3. On the other hand, it might also be argued that copyright protection, whilst
necessary, is not sufficient and needs to be supplemented by other forms of protection
ensuring preservation of the economic viability. As a matter of fact, the most obvious
forms of exploitation of academic publishing are likely to fall within the scope of
copyright exceptions, as mandated recently by EU legislation: namely, those activities
could

benefit from the exception to the right of reproduction provided for “specific
acts of reproduction made by publicly accessible libraries, educational establishments
or museums, or by archives, which are not for direct or indirect economic or
commercial advantage” (Article 5.2.c, InfoDirective), as well as from the one
provided for “use for the sole purpose of illustration for teaching or scientific
research, as long as the source, including the author's name, is indicated, unless this
turns out to be impossible and to the extent justified by the non-commercial purpose
to be achieved”( Article 5.3.a, InfoDirective

). Similarly, under US law, uses of
protected materials for scientific, research-oriented purposes have always be
(2005) 2:2 SCRIPT-ed

188
construed as an exception falling under the fair use clause (parody, criticism,
comment, news reporting, teaching or scholarship), whose ultimate purpose has been
identified in preserving the freedom of “important forms of communication that are
the types of speech that the first Amendment seeks to protect from government
intervention” (Beckerman-Rodau).
In that respect, the sui generis protection guaranteed to the maker of a database under
the EU legislation (Directive 96/9/EC of 11 March 1996 on the legal protection of
databases) seems particularly promising (and a possible response to worries arising in
connection with “the situation of companies and especially online publishing houses
that scientific authors work for”). According to Article 5.2 thereof, a database is “a
collection of independent works, data or other materials arranged in a systematic or
methodical way and individually accessible by electronic or other means”. It is
uncontroversial that a collection of scientific works should qualify as a database
within the meaning of the Directive, as it is uncontroversial (Clark) that “an issue of a
learned journal is… a database, whether issued in paper or electronic form”.
4. The question as to whether charges for scientific online publications should be
understood as “hindrance to science as opposed to proper business” also should be
addressed under the traditional principles governing intellectual property laws. It has
been stated (Kamperman Sanders) that “in the face of expanding protection of
intellectual property rights, freedom of access for academics and civilians alike is
currently of grave concern”, epitomized by developments such as the WIPO treaties,
the Info Directive and the Database Directive. Again, I fail to see the “new” issue at
stake. IPR law, is, has always been, and will continue to be a matter of balance. This
is apparent even from the UDHR (Article 27) and it has been long acknowledged. The
academic community seems to be well aware of this: in June 2001, a group of Dutch
academics known as the Zwolle Group agreed to develop “a set of principles aimed at
optimising access to scholarly information in all formats, explaining the underlying
relationships of the stakeholders involved (i.e., authors, publishers, librarians,
universities and the general public) and providing a guide to good practice on
copyright policies in universities”. In December 2002 those statements were
elaborated into a set of principles, under the heading “Balancing stakeholder interests
in scholarship friendly copyright practices”, with a view to assisting such stakeholders
“to achieve maximum access to scholarship without compromising quality or
academic freedom and without denying aspects of costs and rewards involved” and on
the assumption that the aforementioned stakeholders share common goals, including
“attaining the highest standards of quality, maximising current and future access, and
ensuring preservation”. They explicitly endorsed the so called “Tempe principles”,
according to which “the academic community embraces the concepts of copyright and
fair use and seeks a balance in the interests of owners and users in the digital
environment. Universities, colleges, and especially their faculties should manage
copyright and its limitations and exceptions in a manner that assures the faculty
access to and use of their own published works in their research and teaching”.
The whole intellectual property system has long been familiar with the idea that
scientific, research-oriented uses of protected materials, as opposed to commercial
uses, should fall outside the scope of the exclusive rights. We have already mentioned
copyright; this is also true for patents (experimental use) and even trademarks (use for
descriptive purposes is allowed, provided not entailing likelihood of confusion and in
accordance with honest business practices: see EU Directive and CTMR).
(2005) 2:2 SCRIPT-ed

189
5. The foregoing remarks do not purport to suggest that addressing the issue of IP
rights in the digital context is a waste of time. Excluding the need for a substantive
review of the traditional principles and rules underpinning the intellectual property
system is not tantamount to saying that no awareness of the specific features of the
digital environment is advisable or that no amendment of the current practice might
be envisaged. Initiatives such as the Zwolle principles witness the desire of the
publishing community to nail down the respective needs and interests of the various
constituencies and to shape flexible practices adequately reflecting and balancing such
needs and interests. Only, the various stakeholders should be aware that appropriate
answers not necessarily lie in laws and regulations, but might as well be found in
contractual mechanisms (including the phase of dispute resolution, by encouraging a
wider recourse to Alternative Dispute Resolution mechanisms such as conciliation or
arbitration), possibly coupled with technological measures of protection

within the
meaning of the InfoDirective, provided such supplementary instruments comply with
the relevant mandatory provisions (and without prejudice to the possible impact that
such need for individualised, ad hoc contractual negotiations might have on the
feasibility of collective management of rights).
6. I would like to conclude by addressing the question as to whether Internet is “a
source of danger to the rights of scientific authors” by striking a note of optimism.
Internet is an opportunity as much as a danger: it may provide enhanced opportunities
for easy copying, but also definitely provides authors with enhanced visibility and
wider audiences. Furthermore, both publishing companies and authors could benefit
from and take advantage of the new rights arising in connection with the Internet: web
sites are now conceived as attracting copyright protection, or at least sui generis
protection under the EU Database Directive. Under Italian Law, once established that
the web site falls within the scope of copyright protection, it also follows that the
imitation of the general features of its appearance so as to entail a likelihood of
confusion is prohibited as an act of unfair competition. This provides the web site
authors with a type of protection which very much resembles the “right in the
typographical arrangement of published editions” granted to publishers under British
law. Further issues which might arise in respect of the use of trademarks or other
distinctive signs can be adequately addressed under the traditional laws on
trademarks, unfair competition and the like. Brief, I would like to conclude by
rejecting the commonplace statement that “digital technology …introduces new and
unprecedented threats of piracy” (Jeanneret), or that the relationship between
copyright and technology is “uneasy” (Ginsburg), and instead subscribing to Prof.
Cornish’s view that Internet is rather “an exotic concoction of prospects”.

Concepción Saiz-Garcia:
First of all, I would like to start stressing the need of protecting authors, and more
specifically, authors of scientific works, by means of Copyright law. Contractual law
is effective by protecting author’s interests in relation to the person who is going to
exploit the work. And even in such cases a specific regulation to strengthen the
always weaker position of the author to preserve contractual balance is necessary.
However, contractual law seems not to be sufficient to protect the author in case of
non-authorized exploitation acts carried out by third parties.
(2005) 2:2 SCRIPT-ed

190
Secondly, I think that scientific works should not be excluded from Copyright system.
Although many propose free access - or returning science to the scientists - to such
works; in my opinion, their exclusion would raise problems in determining which
works under art. 2 of the Bern Convention could be subsumed under that description -
for instance works of architecture, engineering projects... . Besides, as I said before,
without copyright protection authors would be completely unprotected against any
action of the publishing agents or any other third party. If necessary, let us remember
that one of the aims of this exclusive right is to encourage the creation of intellectual
works. And this is achieved through the two-fold nature of the faculties contained in
copyright: patrimonial and moral. In regard to patrimonial rights, I would like to say
that in my country, scientific authors are not adequately remunerated. As far as the
moral rights are concerned, I don’t believe that many authors would be willing to
publicly share their work unless a juridical order warrants paternity and integrity (both
inalienable and nontransferable rights) of those works.
It would be different if the author agreed on a free publication of his work. Just a
decade ago, it used to be possible only by transferring the exploitation rights, except
the communication to the public right, to the publisher in return for a payment, that in
the most cases was but a small percentage of the money obtained from the distribution
of copies of the work. Nevertheless, free distribution is nowadays a reality. It has
triggered a great change, not only from the point of view of authors of scientific work,
who are currently giving their copyrights to commercial publishers for free, but from
the viewpoint of public interest of free access to culture, because the use of the new
technologies has yielded a drastic reduction in the distribution costs of the works.
A good example of this new situation is FIGARO. Leo Waaijers said in one of his
works, that in FIGARO the author does not need to assign his copyright to the Front
Office. It is true that an exclusive rights transfer, in the sense of the classical
publishing contract, is not required. In FIGARO, the content distributor will need at
least an authorization to undertake the acts of exploitation that will be indispensable
for the on-line exploitation of the works. That’s why a contract between author and
publishing agent will be necessary. In that contract both the “reproduction right” and
“the right of making available to the public” must be authorized.
Reproduction: it must be authorized because, first, it is a previous step of the works
up-loading. Other reproductions such as transient or incidental and essential for the
technological process whose sole purpose is to enable a transmission in the network
between third parties by an intermediary, don’t need to be authorized. Art. 5 of the
Directive excludes them; this exclusion is mandatory for all member States, unlike
other limitations that are observed. All excluded reproductions acts are economically
irrelevant out of the on-line exploitation. And second, because, regardless of the
financial system of the publishing agent, for instance a pay per view or an open access
systems, the license they give the end user implies the transfer of the reproduction
right for private use.
However, a regulation of the parties’ interests should be completed with subsequent
terms in the contract. And at this point, I would like to refer to Digital Management
Systems. According to the above mentioned model, the author of scientific works
continues being the exclusive rightholder of his work. He has only authorized the up-
loading and the public availability of his work, so he still has to decide which way his
work will be exploited.
(2005) 2:2 SCRIPT-ed

191
Nowadays new technologies have allowed authors to control the on-line exploitation
of his work. As we all know, Digital Rights Management Systems enables the author
to decide whether his work shall be exploited through a license system - e. g. pay per
view system, subscriptions,…-, or not - free access system-. Nonetheless, authors
usually don’t have the proper infrastructure and skills to install such digital protection
systems, and normally the e-publisher will be the one who provides such
technologies. It is clear that the author must agree the use of such technological
measures with the e-publisher.
At the same time those measures will not only allow the control of the reproduction
for private use, but also prevent it. With this assertion we have come to one of the
most controversial issues of the current copyright system: the legal limit of
reproduction for private use.
The EC Directive 2001/29 of the European Parliament on the harmonisation of certain
aspects of copyrights and related rights in the information society, disposes in its art.
5. number 2 letter b) that “Member States may provide for exemptions or limitations
to the reproduction right provide for art. 2 in the following cases: b) In respect of
reproductions on any medium made by a natural person for private use and for ends
that are neither directly nor indirectly commercial, on condition that the rightholders
receive fair compensation which takes account of the application or non-application
of technological measures referred to in art. 6 to the work or subject-matter
concerned”.
I would like to recall the voluntary character of this disposition for the Member
States. The voluntary character may yield different regulation for each country. This
is not advisable given the global nature of the Internet.
We all know that the way authors have been compensated for the private reproduction
of their work is through a fixed amount of money, due to the impossibility of
controlling the mechanic reproductions. Nowadays, this initial foundation on a market
deficiency cannot be maintained for the reproduction right for private use by digital
means, as thanks to the DRMS the rightholder can control the reproduction of the
work. That’s why the detractors of the extension of reproduction right to the private
sphere, as with computer programmes and databases, need to find a new justification.
And this argument can only be found in the public interests on open access to culture,
education, science and information.
Generally, legal exemptions to exclusive rights don’t create subjective rights for the
users. They just limit exploitation rights aiming at the balance between the authors
and the public interests. They bring the constitutional requirement of promotion and
protection of culture, education, science and research to fruition. They are
consequently norms that are excluded from the free negotiation principle, otherwise
the agreement would be null and void. This means that technological measures that
hinder their beneficiary the enjoyment of such exemptions would constitute a right
abuse and a contra legem behaviour that should be corrected by the current authority.
I don’t believe that all that I’ve said in general about the exemptions could be said in
particular for the exemption of reproduction for private use. This is because its
primary foundation lies on the market deficiency of the technical impossibility of
controlling the reproduction of protected works by users that have access to them,
either legally or illegally. This is the reason why I believe that there is neither legal
support to affirm that there exists a subjective right of reproduction for private use,
(2005) 2:2 SCRIPT-ed

192
nor objective reasons of general interests that justify the defence of reproduction for
private use with compulsory character, as it happens with other exemptions. Thus this
limit is a non compulsory one that would allow the rightholders to ignore it in the
license contract through technological measures of access and reproduction control.
This seems to be the motivation of the European legislator who, in paragraph 4th. of
art. 6. number 4 of the Directive, gives an absolute priority to the will of the parties
over the general interest in which legal exemption are founded, with regard to on-line
exploitation. Beside, it will not prevent rightholders from adopting adequate measures
regarding the number of reproductions.
This solution has been adopted by the Spanish legislator on the draft reform of our
intellectual property law. It allows generally to limit reproduction for private use,
without preventing the rightholders from choosing not to do so, using technological
measures regarding the number of reproductions of works, which have been normally
made available through a license of use for each reproduction, either on-line or off-
line. Finally, in cases of onerous on-line transfers of works lacking technological
measures of reproduction control, reproduction for private use will subsist the way it
exists now: by compensating the damages through a fair compensation that, in any
case, should be adapted to the new means and materials of digital reproduction.
To finish my exposition, I would like to refer shortly to the current situation of
Spanish e-publishing enterprises.
There is still a total lack of organisation in this field. Some enterprises have
transferred to software houses specialized in e-publishing, exploiting their
reproduction right and the right of communication to the public and in particular the
right of making available to the public on-line their analogical archives. This
practice wouldn’t be a problem if the original contracts had transferred them the right
of communication to the public and in particular the right of making available to the
public. But this is surely not the case of contracts before the 90s. That is why a new
negotiation with those authors will be required.
Regarding works whose initial exploitation is through on-line means, enterprises are
in practice divided:
Some of them add to their classical publication contracts the right of communication
to the public. This way does not seem to be the best one because the current legal
regulation of publication contract is far from satisfying the new interests of the parties
in the on-line exploitation.
Other enterprises negotiate simply the exclusive transfer of the reproduction rights
and the right to communicate to the public. These contracts even regulate the kind of
technological measures and the salary of the author, which will be normally linked to
the kind of use: a percentage by access or time of use; an fixed amount by non
controlled access.

Ziga Turk:
I would like to start with a short overview of the SCIX-Project and the results of the
process analysis that was made there. Obviously we are all looking at how to change
the current scientific publishing process. We have identified that the enablers of this
(2005) 2:2 SCRIPT-ed

193
change are on one hand soft - like social, legal and business innovation kinds of
issues. On the other hand, there is a possibility of technical innovation. A framework
will be presented listing key innovation opportunities. And finally I will try to give
my personal answer to the questionnaire, which was part of the programme of this
seminar.
The reason for starting the SCIX-Project in the first place was that some of the key
partners have been involved in issues related to scientific publishing or to scientific
information exchange before the project started. We are otherwise involved in
information technology for construction, that is, using computers in architecture and
engineering. What we have been learning is that there is a big gap between what we
as researchers know is possible, and what the industry actually implemented. We have
identified, that the knowledge transfer process between the research community and
the industry is one of the major obstacles to change in that area.
With Prof. Bjoerk we have been running the ITcon Journal, the Electronic Journal of
Information Technology in Construction, since 1996. It is publishing volume 8 now -
a good track record for an electronic journal. With Prof. Bob Martens from Vienna we
have been setting up a kind of a topic archive of publications related to computer
edited architectural design. This is known as CUMINCAD. I think, practically every
active researcher in this community is a member and probably this site contains more
or less all the relevant works that have been published in this area even since the
1960s. A lot of the scanning of old historical work has been done also.
Research in the area started in 2000. We did a survey in our scientific community on
how people feel about electronic publishing or on the publishing process in general.
The conclusion was that people want to read electronic, but publish on paper, and that
there is a kind of a schizophrenic attitude towards this, because as readers they care
about totally different things than they do as writers.
SCIX-Project is a 24-months project, with an EU funding of 1.000.000 and 200
person months. The duration has been extended until 30
th
of April 2004. As you can
see, the partners are more or less from the edges of Europe. This tells you that the
benefits of electronic publishing are probably more imminent if readers do not have
the infrastructure of big countries or big institutions. At www.scix.net you can read all
about the project and all its deliverables. We have made it a policy to have everything
except the exploitation plan to be public.
There are two main areas of work. One is the business process analysis, in which
basically three types of studies are being done: (1) a theoretical model of the
publishing process using the IDEF0 mechanism, which is a well established
mechanism to model industrial or intellectual processes in order to analyse the
problems, before the re-engineering of the processes can take place. (2) A redesign of
the publishing process and (3) a study of the barriers to change.
The part of SciX is the technical innovation and technical demonstration, in which we
are looking at how to lower the technical bar for the independent publishers,
organizations or individuals to enter this open access publishing model.
We are adding another interesting angle to the project and that is how can this
scientific information be tailor-made for the industry? Our partner from Iceland is
analyzing how to use this open access content and make it more appealing for the
industrial readers and people from the practice. We understand that if something is an
open access publication, people then are free to exploit any business model and any
(2005) 2:2 SCRIPT-ed

194
business ideas with that content. There was some discussion today about the need to
restrict open access or whether you should allow anyone to make an advantage of
that.
I would like to start with the motives for scientific publishing. Traditionally they are
roughly five different motives for scientific publishing. (1) Registration of “who
thought of something first”, (2) quality certification, (3) dissemination through which
one also builds up his own prestige or becomes known in academic circles, (4) long-
term archiving, so the work does not get lost, (5) bibliometrics or some kind of
measurement of the quality of the research. We can add to these the profit or earning
money and the return on investment.
As we can see, it is quite clear from this diagram, that the scientists have very
different interests in the process from the publishers. The main motive of the library is
archiving. The librarians want to handle paper based journals and books, because by
the number of books and by the number of journals their budget is defined. And the
larger the budget, the more people they can employ. They have a vested interest. The
funding bodies' main interest is bibliometrics – a measure to decide whom to give the
research money and whom not.
Neither the scientist nor the libraries nor the funding bodies are interested in making a
profit out of that. The funding body that did make an investment into the research
process is definitely not interested in making the return of investment through
publishing. Of course, the publisher is interested in the return on investment, but we
are speaking here on the investment into the research, not investment into the printing
the scientific articles.
How do different media address these motives? This table has the paper based
journals in the first column and then various types of open access, the digital
publishing in the remaining ones. The green mark means that this is well taken care
of, and a read one that there is room for an innovation in this area.
And as we can see there is some problem, that paper based journals today have, with
the registration of who thought or who did something first, because without any
electronic means of comparing the content and given the sometimes very slow
publication process, which can last between one and three years, one is hesitant to
give them more credit in that area. They do very poorly at dissemination, because
actually by putting something in a scientific journal, the reach of that journal is rather
small, it is limited to the subscribers of that journal and to the libraries. Otherwise
they got quite well, actually they are better in every other category than any of the
open access digital publishing outlets with the given exception of dissemination.
The question for the SCIX-Project is how open access media can be as good as or
better than the paper based journals. Paper based journals have some problems:
restricted dissemination due to costs, high prices because of the inelastic market, bad
service to the authors because authors “publish or perish”.
It is not entirely true what the Open-Access-proponents say: The authors give the
journals copyright to their work for free. In exchange for that, the journals provide the
printing, the advertising and the indexing.
But I think, the vital problem in the current process, and one that the scientific
community should be most worried about is, that the control of the academic
community is outside of the academic community. So this autonomy, which the
(2005) 2:2 SCRIPT-ed

195
academic community should have or thinks it has, has in fact been diminished by the
commercial publishers.
The problem is, that, for example, a scientific publisher can invent a new journal on
some exotic topic. And this journal would, of course, accept some works. People, who
wrote this works, would have good bibliometric results or would be evaluated fairly
well and could therefore apply for research grants and money based on the fact that
they have published in these new journals. By getting these research grants their
organizations can afford to subscribe to these journals. The publishers are in a
position to steer what the priorities of scientific research are. Not to mention, that they
appoint the editors and they appoint the review boards, which give them a certain type
of control, what kind of work passes, whose work passes and whose work does not
pass the review process.
I think we will come to the conclusion that open access publishing is not free and that
the commercial publishers can improve their service. But I think the key issue is that
if we want scientists to keep their autonomy, the publication process should be in their
hands and not in the hands of some organizations, which are otherwise not involved
in the research or educational process.
What can we do to improve the problems of open access publishing? What are the
chances of innovation and improvement? When it comes to registration of primacy
(who thought of it first), electronic processing can be faster. What these sites could
have built in is digital signatures related to time stamping, so that it is known, when
something was submitted and did not change since that. Similarity measurements are
another opportunity. If you have texts in electronic form it is fairly easy to find out
very similar work and also to measure, to some extent, the originality.
As far as the quality certification goes, electronic journals are following more or less
the same principals as paper based journals. But there is a possible innovation in the
organization of publication model in the sense, that there is no need to have the one-
on-one mapping between the organization, that certifies that something is good, and
the publishing outlet. To put it in other words, if you publish a paper in a journal now,
it is the editorial board of that journal that certifies that this work is okay. What you
could have is a model where you have a paper published for example in institution
archives. And many different editorial boards would say, this work is okay, this work
is good. The paper does not end up in one journal, it is in the repository, but the
certification happens through other means.
It applies the same to the personal archives, but they would be probably a little
dubious, when it comes to that. As far as dissemination is concerned, I think, we have
the upper hand in relation to paper journals already. Particularly regarding electronic
journals and topic archives, I don't think there is much that can be done to improve
that. When it comes to institutional archives and personal archives, I think, they need
to be first and foremost Google-friendly.
I am not so sure about the importance of the OAI compatibility. When I personally
search for stuff, I go to Google or I go to some topic specific archive. But I can't
remember when I was last going to a search engine that was based on OAI. OAI,
however, is very important as a means to aggregate institutional archives.
The problem of institutional archive is that I would never go searching Frauenhofer's
and then go to the Stanford University and then go to MIT and so on and so on. These
institutions need to make it friendly for the search engines, so that their data could
(2005) 2:2 SCRIPT-ed

196
somehow get accumulated, probably by topic archives using in the end OAI
standards.
When it comes to archiving, innovation is needed in the sense that local libraries or
national libraries need to get involved. The current excuse for their existence is
handling of paper. It could be a major breakthrough, if we could engage them in
handling electronic publications and maybe they could also take some of the roles of
the publishers in this area as well.
As to the measurement of impact and other bibliometric studies, the problem is - and
it appears everywhere in the business of publishing, of scientific publishing and paper
based scientific publishing - that the control is somehow shifting strongly from the
scientists and the libraries to the funding bodies. We are seeing that the major clients
of publishing in general are becoming the organizations, who measure how good you
are and how good you are not. If you look at the business perspective, somehow the
ISI - the Institute for Scientific Information - is doing very well in providing that
information. I think Elsevier is already developing a similar system, which would try
to be competing with that. And for electronic journals to enter into that system is a
fairly long and slow process. The innovation here could only come from some open
citation approaches.
Probably sustainability is the major problem of electronic journals and of topic
archives today. A model, which would actually work on the long run has not been
invented yet. For some time one can work on a shoestring budget or for some time
one can work on enthusiasm, but throughout history I am not sure how many
successful things have worked in the a long run on enthusiasm. Sooner or later
business takes over.
Some of the barriers on the transition path are technical. In relation to electronic
journals, workflow systems need to be put in place. In relation to all other types of
archives it should be made easy, through technology, to set things up and to manage it
on a day to day basis. As far as business related items are concerned there is a
problem of who pays for the electronic archives. There are social problems like
recognition of electronically published texts on the one hand and the
acknowledgement of the public interest in scientific publications openly available on
the other. Furthermore there are organizational problems: Who is the publisher, when
it comes to electronic journals? Is it just some guy on the internet? Is it a proper
organization? Is it a library? Is it a university? Is it a professional publisher, who is
doing electronic business? And as to the institutional and personal archives, I think,
there are new rolls for old organizations, particularly the libraries.
The work in SciX that I was mentioning is related to a business process analysis. But
with technical innovation, we tried to make it really simple for people to enter into the
open access publishing. Multiple languages and the support for sustainability in the
sense that the maintenance of these archives really doesn’t take too much effort.
The idea behind the architecture of the site is that there are a number of core-services,
which are on this layer here. If you combine them differently, you create different
types of publications. If you want to have electronic journals, you probably have user
management reviews and annotations as well as discussions, in addition to the
repository, which stores the work. But if you want to support a conference or
something similar, the workflow system is different. These components like the Lego
blocks and the user decides what she wants to create with them.
(2005) 2:2 SCRIPT-ed

197
A special module is dealing with this content syndication towards the industrial user.
It uses the same path or access to the actual data. We use XML for all that web-
services.
A minor part of information exchange can happen with the protocol for metadata
harvesting by OAI. Parts of this system were developed within SciX. Parts were using
existing open source code. All these services on this layer are, although using in part
open source code, homogeneous.
What is up and running now is using this technology? Common code base is shared
by one journal, three conferences, five topic archives including the electronic
publishing conference series and their archives. Currently we have three languages
implemented, German, English, Slovenian, Spanish will be added fairly soon. So the
code is written in a way which allows translation by people and by programmers.
The final innovation or something that comes on top is not fully operational, but
probably will be fairly soon, is a kind of support for rentable services, again using the
same code base. When you go to a website and would like to create a journal or a
personal archive or institutional archive, you can set it up through the web only
through clicking on the web interface like you select on a form that you want to set up
an open archive. You fill in some information, e.g. who the manager is, what the
password was and it is done for you and you can start uploading your works or your
pictures or works of art or if you want - well, whatever content. The plain version
looks like that, but there are some 200 parameters, which you can change such as the
colours, the structure of the data base itself and the interfaces to maybe other services.
As to the IPR questions that this workshop asked: the baseline for answering those
questions is actually on this slide. This diagram here shows the costs related to
creating a scientific article or the costs related to lifecycle of a scientific work, in
several columns. There are some costs involved in the time of the scientist, who was
doing the research the first one. Writing the paper is the second one. The review
process is the third one. The publication itself is there light grey and the retrieval is
the dark grey. And the order of magnitude of the publisher’s profit is also in the order
of magnitude of these boxes here. What we need to know or have in mind when it
comes to scientific publishing is that by buying scientific publication, one does not
pay for the innovation, which is described in the work. One does not pay for the costs
that were associated with making that work possible. And this is dramatically
different to any other kinds of publishing. If I buy a CD of music or a video, what I
am contributing with that money is to the effort, to the costs of making that video. If I
buy a scientific journal, I am not contributing to the costs required to do the research.
I think, if one would look at the copyright laws and on any kind of legal framework
related to that, this is a very important difference. It is intellectual property that should
be somehow protected or managed. This property was created by public funding and
it is not the publication that is supposed to make some return on investment. The legal
framework should ensure, that this intellectual property right addresses the interest of
the public, which in the first place made the invention possible, and that covered the
cost of the invention, not the cost of the publishing.
Charges for online publication? I think, that in the end somebody needs to pay. Ideally
one would have these online publications managed by existing organizations with
existing budgets like, for example libraries. Then this could be quite cost-efficient.
(2005) 2:2 SCRIPT-ed

198
The important is the autonomy of the scientific community. So it should be the
scientific community that is providing the publication media. In that respect
something like BioMed is not different from Elsevier. There is somebody at the top of
BioMed, who decides to create a new journal on some exotic genetic theme. In that
case, this person who - I am not sure, what his relation to the academic community is
- is in exactly the same position as Elsevier setting up a new paper based journal.
I knew nothing about the difference between Continental andAnglo-Saxon IP-law
before I came here. But I learned this morning that in scientific publishing it seems
that there are some authors that need protection and others do not need to be
protected.
Woody Allen cannot make a movie without somebody who gives him the money for
the movie. He as author can be in conflict, in relation to copyright, with the producer.
But Elsevier gave me no money at all to do some research. Why would their interest
need to be taken care of? Somehow the Continental model seems much more
appropriate than the Anglo-American one in this particular case.
Referring to Open Source Systems I am sure that there is a difference between Open
Source and Open Access. Open Source means to me for example that I get the Word
version with everything open. I can copy and paste this text or maybe improve it. And
Open Access is a PDF or Postscript-File which let me do the reading or printing, but
not much more. There is a possibility of some invention on this Open Source model in
the sense that somebody posts a paper and somebody else can take the three quarters
of it, improve it and puts himself at the end of the author list. Or one could use some
text analysis of systems do actually compute, calculate, what the contribution of one
or the second author was. I think for Open Source some invention is needed.
The problem with Open Source publishing is that if I would give word files out,
people would be able maybe to steal my pictures or be able to improve on my pictures
without a very clear reference to me in some sense.
I think the scientific community is not interested in the legal limits of Digital Rights
Management. As a scientist I am not interested in earning one or two dollars for each
click on the paper that I have published. It is marginal, these costs, these revenues are
so small in comparison to the cost of doing the research.
Is internet the danger or an opportunity? I agree with, what Prof. Gioia was saying: It
is a challenge. Danger is in the ease of copy-paste that, students are using extensively
to produce “their” seminar assignments. But it is also good. It is what they will also
need to do in real life. Why type something in again, if it is out there. What we need
to teach them is to credit that properly. I have totally no problem accepting a paper
from a student that would be 75 percent cut-paste from various sources, if he says,
this is from there, this is from there and this is from there and this is what I did. And if
every person would do five percent original of what others have done, we would have
enormous progress.
The other danger of the internet is that people get so much involved in electronic
dissemination on the internet that they forget about how to score the proper points for
their evaluation and promotion. It may not be too good for their careers.
(2005) 2:2 SCRIPT-ed

199
Summary:
A. General situation
As a result of the workshop it can be said that there is no comprehensive, ultimate and
unanimous opinion about the requirements of DRM and digital publishing. Depending
on their respective backgrounds, the individual participants (lawyers, journalists and
computer professionals) had different answers to the questions posed in the
workshop’s principles and gave importance of various aspects in the field of DRM
and in the academic field of electronic publishing.
According to the different statements the situation in the field of academic publishing
is as follows: The competition between individual publishing houses is restricted by
the monopoly of a few players in the market. From the point of view of the publishing
houses, there is no interest in changing this situation, since their financial position
would only worsen with increased competition. The present day market situation
mainly affects authors of scientific papers who have to pay the publishing houses for
publishing their work. The situation is, according to several participants, similarly
critical for universities and their libraries, whose interests match those of the authors.
Universities have to raise large amounts of money for procuring books. Public money
is thus spent twice: Once in paying the researchers for their research work and again
for buying scientific books that are published. As a reaction to the monopoly, many
universities have set up their own publishing units or have forged alliances with other
universities and research institutions.
According to Prof. Turk there are five different motives for scientific publishing. First
registration of “who thought of something first”, (2) quality certification, (3)
dissemination through which one also builds up his own prestige or becomes known
in academic circles, (4) long-term archiving, so the work does not get lost, (5)
bibliometrics or some kind of measurement of the quality of the research. Prof. Turk
opined that the main problem of evaluating is the fact that measurement of all
scientific publications is done by commercial investors rather than by the scientific
community or libraries. Prof. Turk criticized this development because the evaluation
of scientific contributions should be done by the general public.
In Mr. Schafer´s opinion the current practice of peer-review is problematic. This
process dis-empowers authors and, at the same time, does not result in any
improvement in the quality of scientific works. In this context, Mr. Schafer pointed
out a few cases in the recent past, where scientific works had proved to be wrong or
faked, in spite of having been subject of a peer-review.
B. Situation of online-publishing
While outlining the general conditions of online-publishing, Prof. Turk pointed out
the following advantages: contrary to paper-based journals, online-publishing would
permit a very quick dissemination of knowledge. Traditional scientific journals, on
the other hand, could reach only a small group of readers within the scientific
community and in universities. Worldwide access to the results of scientific research
can be significantly faster if online-publishing is used, and one can see which scientist
has achieved what results at any point in time. Scientific works can thus be compared
and evaluated more easily. Long-term operation of an electronic journal would be
problematic because the finances for a long-term use would not be available.
(2005) 2:2 SCRIPT-ed

200
Moreover, there would be difficulties with public acceptance for electronically
published texts and knowledge that a text has been published electronically. Finally,
organizational problems like who is to be regarded as the publisher of an electronic
journal would also have to be solved.
Mr. Schafer emphasized that electronic publishing could offer a solution to the
problems facing the academic archive. Firstly, there is the potential for institutional
self-archiving. In addition to publishing in traditional journals, academics can upload
their papers in open access digital archives. On the next higher level, these archives
can be located on the level of universities. Finally and more ambitiously there would
be projects like the Budapest Open Access Initiative funded by the Soros Foundation
Open Society Institute. Its protocol would allow all databases which comply with it to
function as if they were part of one central server, regardless of their physical
location. Small institution-based servers can in this way create aggregates of research
literature, allowing researchers to self-archive without the need to upload the same
paper on several smaller discipline-based archives, most of which lack the necessary
mass to be successful competitors to the main journals. According to Mr. Schafer in
the near future technologies would be available that can be used to transform these
archives from cheap alternatives to paper journals to superior competitors, for
instance through the possibility to allow users to customise hyperlinks between
papers, a technology already used commercially in the Juris system.
Mr. Schafer also said that unlike self-archiving, open access journals would not face
the problem to negotiate copyright release for articles published already in paper
journals. They would however face potentially higher costs for peer reviewing, copy-
editing, proof reading and general administration. Possibilities to meet these costs
range from “author pays” models to sponsorship to a system of “value added”
publishing were the core texts remain free of charge, but individuals and institutions
are offered additional services (like for instance the hyperlinking facility discussed
above, customising of papers etc). Cheap as opposed to free online journals for
instance would initially increase costs to libraries, as they will be ordered in addition
to existing high impact journals, at least initially.
Dr. Westkamp said that exploiters of cultural works had detected a danger to the
commercial viability of their businesses and pledges for stricter protection. These
concerns have been met by various international and European instruments, in
particular the introduction of a general right of communication to the public under the
1996 WIPO World Copyright Treaty for Authors and under the 1996 WIPO World
Performers and Phonogram Producers Protection Treaty (Articles 8 and 6
respectively). This right would be understood to complement the non-physical rights
in respect of online transmission in closed or open networks. The acts which would
infringe copyright would include the communication of a work to the public as well
the making available (such as on an internet site). The European Union had
introduced a general right of transient copying under the 2001 directive on Copyright
and Related Rights in the Information Society. This would refer to acts of technically
necessary copies which occur once information is transmitted through electronic
networks. For providers of scientific material, the combined effect would be quite
serious. Both the reproduction right, as understood in the sense the directive suggests,
and the right of communication to the public will avail owners of copyrighted
materials with a general use right. In effect, no limitation will apply. As a use right, it
would remain immaterial whether the act of access (or providing an opportunity to
access for third parties by uploading information
(2005) 2:2 SCRIPT-ed

201
Further, the participants rated the development of the market for commercial online-
publishing companies as another critical factor. Here, a monopolization is indicated
(Elsevier / Springer-Kluwer Group) similar to the situation of traditional publishing
houses. Apparently, some online publishers have demanded up to US-$ 500,00 for
publishing scientific treatises. These publishers justify these costs saying that they
provide other services in addition to the publishing, such as peer-reviews. According
to the participants, such enormous amounts do not correspond to the savings that
online-publishers receive by publishing on the Internet instead of in paper-based
journals. Thus, it was pointed out that the administrative costs for digital archiving of
scientific works would come up to a mere US-$ 55.00 a year. In this context, it is to
be noted that the online-publishers often get access to further sources of income such
as sale of paper editions, advertisements and sponsorships through the publication of
scientific articles.
Commercial online-publishing is particularly detrimental to universities. Instead of
being able to sell knowledge in materially embodied form such as books, libraries
have to pay for access to knowledge. If the subscription to an electronic journal is
cancelled, the result is a loss of knowledge, because the universities would no longer
have access to it. As a consequence, universities are less willing to make use of new
electronic journals, even if these are of a very high quality. Commercial online-
publishing thus contributes to the death of the free market, on the one hand. On the
other hand, the current situation deteriorates quality of research and academics.
To solve this problem, the participants called for a process of emancipation of the
scientific community and the continuance and expansion of existing alliances. Efforts
undertaken so far in this direction have often failed due to the unimaginativeness of
the scientific community, resulting in the strong position of the publishing companies.
I. Legal solutions
In Prof. Grosheide’s opinion, the legal right to copyright protection of scientific
authors would not differ from that of other creative persons. Prof. Turk on the
contrary emphasized that the price of a music CD or a video would depend on the
respective manufacturing costs while the price of a scientific journal would be much
higher than the costs of the research. According Dr. Westkamp it should also be noted
that it is still unclear which law will apply to the act of downloading a copyrighted
work. The conventional concepts – either the law of origin or the law of the country
for which protection is sought – would both be fraught with difficulties in practical
application.
Most of the participants agreed that the works of scientific authors needed copyright
protection. According to some participants, this protection ought to be provided
mainly at the contractual level. There will certainly be disadvantages if a third person
makes use of the work without the author’s permission. That is why authors should
take a careful look at the publishers’ usage regulations for their clients, before signing
a contract with an online-publisher.
J. Role of IP law
The estimation of the role of copyright in the field of digital publishing varied.
Dr. Peukert considered the abolition of copyright protection for scientific
publications, or alternatively, restricting it to protection for the author against changes
to his works by third persons. This would mean that publications could only be
(2005) 2:2 SCRIPT-ed

202
financed by public funding, but the present problems of online-publications should be
solved. At the same time Dr. Peukert doubted the practical feasibility of his
suggestion, because this would imply international treaties.
In Dr. Westkamp´s opinion copyright would be undergoing the perhaps most drastic
changes of its history. One reason would be an increase in electronic uses of
copyrightable material and the ease of copying and dissemination associated with it.
The second reason would have a political dimension: In the European Union, two
distinct copyright systems exist: According to wisdom currently prevailing in the
European Commission, necessitates harmonisation under Article 95 EC as it is
conceived as a potential barrier to trade. Hence, European harmonisation had been
introduced by way of directives, the final two of which – the directive on the legal
protection of databases and the directive on copyright in the information society –
would be most relevant to the workshop topic. Finally, international norm setting had
been achieved under the 1996 WIPO Copyright treaties dealing with the conceived
dangers of the internet.
According to Mr. Schafer copyright regulations were needed that protect contents
which were freely available originally from commercial exploitation by third parties.
In addition to this identification and authentication of online-authors should also be
ensured.
In this respect, Mr. Schafer’s view was opposed by Prof. Bordeanu. She preferred
solving the problem on the basis of self regulation through the legal departments. In
her opinion, it was not necessary to adapt the traditional copyright regulations for the
field digital publications. The best protection, according to her, would be technology
and corresponding ethics among the users.
Prof. Gioia called attention to the fact that copyright itself was independent of the
medium of publication. Thus, there would be no need to modify and adapt the existing
copyright legislations at the European level especially in the field of digital
publishing. Moreover, Prof. Gioia questioned protection of authors as a goal of the
European copyright legislation. With regard to the latest EU legislation, strengthening
commercial users would seem to be the legal purpose.
Prof. Saiz-Garcia found the possibility of a voluntary implementation of Art. 5 No. 2
lit. a; b) RiLi 2001/29 problematic. This, in her opinion, would create non-uniform
pre-requisites within the community and is not designed for global availability of
online-publications.
There were also different opinions on the advantages and disadvantages of the
continental copyright regulations versus the Anglo-Saxon rights. Mr. Grassmuck was
clearly against a legal model based on the Anglo-Saxon ideal. Prof. Grosheide opined
that the differences would play no role, at least at the contractual level. Prof. Gioia
favored a mixed model based on both legislations.
2. National differences
During their talks, some of the participants referred to national regulations and the
legal situation in their home countries. On the whole, the impression was of a very
heterogeneous regulatory structure.
Dr. Peukert firstly referred to § 52a UrhG, which permits the usage of published
works exclusively by a definite, restricted group of persons for their scientific
research. According to his views, § 52a UrhG can only be applied to a small group of
(2005) 2:2 SCRIPT-ed

203
persons who can exchange the results of their research within a network. This
regulation did not touch upon the world-wide usage of research results via Internet,
although this is a necessary part of every national copyright regulation considering the
global availability of online-publications.
Further, Dr. Peukert drew attention to the regulations in the German copyright act
which tie these to an author. This would also be applicable for authors employed in a
scientific institution. Hence, it would be necessary for universities to contractually
allow the usage of the works of their scientists.
Mr. Schafer reported that in the UK traditional academic libraries would face a crisis
as research journal publishers had increased their prices by an average of 10% per
annum over the last decade. In the same time introduction of the Research assessment
exercise the UK had dramatically increased the pressure on academics to publish.
Furthermore in the UK, the joint information systems committee JISC would support
institutional self-archiving. Edinburgh University is one of the leading partners in a
consortium to create and promote such archives (The Nottingham University led
SHERPA project).
Considering the potential to reduce the high costs of electronic publishing Mr. Schafer
pointed out that one possibility could be student edited journals. Although these
journals would be the exception in the UK, they would be the norm in US law
schools. Student edited journals would also have pedagogical advantages, from
involving students early on in research to more reader friendly writing by academics.
Edinburgh University would presently develop a student edited webjournal on IT and
IP law.
Prof. Bordeanu propounded that universities in France would be entering into
contracts with authors, and the conditions for using copyright protected material
would be laid down in these contracts. Prof. Bordeanu also said that it is not possible
to transfer ethical values.
Furthermore the exception provided under Art. 5 Abs. 3 RiLi 2001/29 in relation to
multiplication rights poses problems in the field of teaching and learning. The course
material of most professors would have to be seen as illegal, if the guidelines are
applied, since copying of information is not allowed. Despite this problem the French
government apparently would believe that there would be no need for further action
as international treatises and European directives would oppose another regulation.
According to Prof. Gioia usage and copyright regulations in Italy and a few other
member states would not depend on the medium used. As confirmed by the legal
departments, digital usage is considered equivalent to usage on paper. Moreover, web-
sites are covered under the provisions for copyright and competition protection in
Italy. This is meant to apply to both the larger design of a web-site as well as to
special features. The creator of a web-site would be covered by a protective right that
is reminiscent of the “right in the typographical arrangement of published editions”
under British law.
Prof. Saiz-Garcia reported that the Spanish legislator hade made provisions in
principle for restricting the right to multiplication to private usage only, in their draft
for the implementation of the copyright directive. However, this should not at the
same time prevent the owner of the right from using technical means to circumvent
this and make copies of the work, which would otherwise be possible only through a
license.
(2005) 2:2 SCRIPT-ed

204
The situation of Spanish online-publishing houses is, according to Prof. Saiz-Garcia,
marked by a lack of organization. Some publishing companies had shifted to
publishing exclusively in the Internet. Since the publications in the Internet would
also contain traditional works from an earlier date, the usage rights for these would
not contain any contractual terms for such (digital) publication. For this reason, older
contracts with the individual authors would have to be re-phrased accordingly. In the
case of works, which are envisaged exclusively for online-publication, practice is
apparently divided: some publishing companies have merely added the possibility of
using the work through Internet to the existing contract. However, since the
regulations for existing contracts would not address the requirements of Internet-
publications, Prof. Saiz-Garcia advocates against this practice. Other publishers, on
the other hand, would incorporate comprehensive regulations for online-usage of
works in their contracts.
II. Legal limits and perspectives of DRM
In the assessment of the possibilities and outlook for DRM in the field of academic
publishing, participants agreed that DRM could not be bound by any legal limits. As
far as the authorization of DRM in the academic field was concerned, however,
opinion was divided.
Some of the participants completely rejected DRM in the academic field. The main
argument against DRM was that it would restrict free access to knowledge and hence
oppose the basic principle of the Internet, namely free access to information. DRM,
they said, would not offer a legal position to the people requesting the information,
but mere access. Implementation of DRM-systems in commercial online-publishing
companies would probably be inevitable, because the only alternative would be free
usage of these services.
Prof. Bordeanu and Dr. Peukert were skeptical about the success of DRM. Prof.
Bordeanu considered DRM as a possibility for ensuring copyright protection through
technical measures. One special field of application here would be author
authentication. Further, DRM would be suited for countering unauthorized usage of
works, leading to high profits. The usage of DRM-systems would necessitate
considerable investments for setting up the DRM-infrastructure. Whether these
investments would pay off would remain questionable.
Dr. Peukert was mainly concerned for the difference between commercial and
scientific publications. Unlike commercial authors, scientists would be only
marginally interested in financial exploitation of their works as scientific authors use
the Internet as an inexpensive way of presenting their work worldwide and acquiring
a reputation thereby.
According to Mr. Buhse from the point of view of commercial legal representatives
DRM would be the key to success in the electronic contents business. The
significance of DRM would not be restricted to the possibility of generating high
turnover through the Internet. He said that the use of DRM would be based on a
strategic component as well, since it contains legal, technical and business aspects.
The need for DRM can be traced back to several other developments. At present there
would be four different business models, which could hold their own in future as well,
and can be applied to the academic field. These models envisage, among other things,
remunerations for one-time usage or access over a restricted period of time. It is also
(2005) 2:2 SCRIPT-ed

205
possible to restrict usage to the type of usage, individual parts or even charge for
particular services.
Regarding the role of DRM Prof. Grosheide pointed out that individual usage
business models for DRM and the usage forms in each case would have to be
considered. Prof. Grosheide also drew attention to the difference between the
management of digital rights and digital management of rights. On the one hand
management of digital rights would include technologies for identification, metadata
and languages. Digital management of rights on the other hand would cover
decryption, watermarks, digital signatures, protection technologies and payment
systems.
Finally, many participants highlighted the growing significance of database protection
as an area of application for DRM. Here, protection for scientific publications would
make room for the EU legislation, because online-collections of scientific
publications would have to be seen as a database in the sense of Art 5.2. of the
database directive (RiLi 96/9/EC) and would therefore enjoy the concomitant
protection.
III. Open Source Solutions
The use of open source systems was assessed differently by some of the participants,
both in terms of its definition as well as from an advanced perspective.
Prof. Grosheide basically welcomed the use of open source systems, as long as they
would promote scientific encounters. He also noted that no such system would be in
place as yet in the field of academic publishing.
Prof. Turk was of a similar opinion. For him, there was no great difference between
open source and open access. He associated the possibility of making changes to
electronic documents with the concept of open source. Open access on the contrary
provides access to documents where only reading is possible, as in the case of PDF-
files. According to Prof. Turk, one area of application for open source systems would
be a model, which would allow authors to exchange documents or parts of documents
among themselves, for purposes of editing. Further, one could also consider a tool that
could analyze text and summarize it. At the same time, open-source systems would be
prone to the risk of third persons appropriating the contents of documents or even
eliminating them.

(2005) 2:2 SCRIPT-ed

143

Workshop Chairman: Peter Mankowski, Hamburg University

Participants: Volker Grassmuck, Berlin Willms Buhse, Digital World Services, Hamburg F. Willem Grosheide, Utrecht University, Utrecht Alexander Peukert, Max-Planck-Institut, Munich Guido Westkamp, Queen Mary Institute, London Burkhard Schäfer, Edinburgh Law School, Edinburgh Elena Bordeanu, EDHEC Business School, Nice Federica Gioia, Piacenza University, Piacenza Concepción Saiz-Garcia, Valencia University, Valencia Ziga Turk, SCIX-Project, Ljubiljana

Volker Grassmuck .................................................................................................144 Willms Buhse:........................................................................................................157 Willem Grosheide: .................................................................................................162 Alexander Peukert:.................................................................................................166 Guido Westkamp: ..................................................................................................170 Burkhard Schäfer: ..................................................................................................173 Elena Bordeanu:.....................................................................................................182 Federica Gioia:.......................................................................................................185 Concepción Saiz-Garcia: .......................................................................................189 Ziga Turk: ..............................................................................................................192 Summary:...............................................................................................................199

(2005) 2:2 SCRIPT-ed

144

Volker Grassmuck
A. Intro

„The evolution of print teaches us [...] that what is fundamentally at stake changes very little over time. [... T]he essential issues remain few: there is control, there is property extended to new objects, and there is lusting for profit. [...] The same remains true of the digital era; the objectives of control do not change, only the tools do.“ (Jean-Claude Guédon, Université de Montréal, In Oldenburg‘s Long Shadow) One week ago the new German copyright law for the digital age came into effect. At its core it is a privatization of copyright law. The balancing of interests until now could be achieved through differentiated exceptions and limitations in a public manner. Under the newly privatized order of law, exploiters are unilaterally setting the rules by means of contract and technology. Especially in the online-realm, all public interest claims have been waived. Another privatization law is in effect in Germany since February 2002, the law on patenting in universities. Universities thereby gained the right to patent inventions of their employees. Connected to the law is a political program of establishing a patenting system in the public universities, paid for from the proceeds from the sale of the UMTS frequencies (35 million Euro until 2003) :an infrastructure of professional patenting and patent exploitation agencies throughout the federal states. The returns are shared 70% for the university, 30% for the inventor. Until then, only the inventor, a researcher or more likely her professor, had the right to patent and economically exploit an invention. But this, in the opinion of the German government, happened much too rarely –something that from another viewpoint, makes perfectly sense, because basic research is supported as public task precisely because it has no direct economic relevance. But the minister Edelgard Bulmahn sees it as a mobilization of a potential for innovation that has been lying fallow. Just as in medieval agriculture, fallow or wasteland was the result of the commons, which in turn became the source for private and state property. The private property model - inventors patenting their own work - in this case failed. So the reform in a sense is a re-socialization of personal innovation based on common means, i.e. the university facilities. But the overall goal is a commodification of research results and the generation of revenues for the universities, with all the property controls attached to it. And yes, there will be public funds for patent litigation as well which is, of course, the central means of controlling intellectual property. The third story of privatization I want to talk about, is that of science publishing, of the databanks of fundamental science. But before that, some basics: Robert Merton, the great sociologist of science and completely inconspicuous of party-political communism, based the essentially democratic, egalitarian ethos of science on four pillars: communism, universalism, disinterestedness and organized

The institutional conception of science as part of the public domain is linked with the imperative for communication of findings. in the nontechnical and extended sense of common ownership of goods."If I have seen farther it is by standing on the shoulders of giants" expresses at once a sense of indebtedness to the common heritage and a recognition of the essentially cooperative and selectively cumulative quality of scientific achievement. And discovering truth . nor do the mores bestow upon them special rights of use and disposition. the suppression of scientific discovery is condemned..(2005) 2:2 SCRIPT-ed 145 scepticism. Newton's remark . "Communism". . An eponymous law or theory does not enter into the exclusive possession of the discoverer and his heirs. Even though it serves no ulterior motive. or in short: CUDOS. The scientist's claim to "his" intellectual "property" is limited to that of recognition and esteem which. Publication not only as ethical rule. Truth is what has not yet been disproved by intersubjective scrutiny. It is the whole community‘s task to confirm. Secrecy is the antithesis of this norm. becomes the target for ambivalent responses.is the raison d‘etre of science. refute. is roughly commensurate with the significance of the increments brought to the common fund of knowledge. The humility of scientific genius is not simply culturally appropriate but results from the realization that scientific advance involves the collaboration of past and present generations. and extend the knowledge in published works. contingent upon publication. Property rights in science are whittled down to a bare minimum by the rationale of the scientific ethic.. if the institution functions with a modicum of efficiency. The substantive findings of science are a product of social collaboration and are assigned to the community. but demanded by epistemology as well.not helping finance universities . „We feel that such a man is selfish and anti-social“ (Huxley). I would like to start by quoting from the passage titled „Communism“ written in the 1940s and republished in the seminal collection „The Sociology of Science“ from 1974. full and open communication its enactment. This epithet is particularly instructive for it implies the violation of a definite institutional imperative. A scientist who does not communicate his important discoveries to the scientific fraternity . They constitute a common heritage in which the equity of the individual producer is severely limited. The communism of the scientific ethos is incompatible with the definition of technology as "private property" in a capitalistic economy. The communal character of science is further reflected in the recognition by scientists of their dependence upon a cultural heritage to which they lay no differential claims. of course. is a second integral element of the scientific ethos. The pressure for diffusion of results is reinforced by the institutional goal of advancing the boundaries of knowledge and by the incentive of recognition which is.

The special quality of the Internet is to connect people without gatekeepers and intermediaries. but by means of a licensing contract. Editors for text. image. The declared aim was that articles will be „better than Britannica's“. and Wikipedia had spawned off countless language versions including a very active encyclopedia in Esperanto. just as science is not a consumer good. this happened exactly at a point where this free exchange was threatened by commodification. Underlying the digital revolution is this special quality of the computer to empower its users to create new things. Sanger was alerted to the Wiki software by Ward Cunningham. On January 15. like Hollywood products. since free software was born out of the academic culture of free exchange of knowledge. In February there were 1000 entries already. and juridically defensible. The similarity to Merton should no be surprising. It has become a sustainable knowledge and innovation commons. 2001. In March 2002.000 and after the first two years 100. to bring the potential of the connective intelligence to bear fruit. in September 10. Entries had to be approved by at least three experts in the relevant field. a database of freely editable web-pages. It is owned in common by a global community. but a tool. Freedom of free software is ensured not by technology. very well aware that they are standing on the shoulders of giants. it was built on a classical process of encyclopedia making. about 20 articles had reached higher level in this multi-tiered process. Since then the movement has become philosophically self-reflexive.(2005) 2:2 SCRIPT-ed 146 When you read Merton and exchange „scientific ethics“ by „hacker‘s ethics“ or „spirit of free software“. when funding ran out and editor-in-chief. I would like to open up a third scenario that hopefully will prove instructive for our issues. and sound are means of production for knowledge. knowledge gets . maybe science can learned something back from free software that it had originally taught to it. One such project was Nupedia. Interestingly. quality-control model of Nupedia. before Nupedia died. This was an open project. Fortunately. Today. The idea to let an encyclopedia emerge from the vast pool of interconnected brains is rather old. the statements fit to an astonishing degree. Wikipedia was started. first of all the GNU GPL. Software. the free-for-all companion to Nupedia. Richard Stallman created the GNU Manifesto. Different from the expert-based. and professionally copy-edited to ensure a rigorous quality-control. The articles were released are under the GNU General Documentation License. Before I‘ll be speaking about this process of forgetting. but has since forgotten. I want to end my introduction with a third example besides science and free software that powerfully demonstrates the paradigm shift. which strives to advance the boundaries of knowledge.000. Larry Sanger resigned. institutionalized. but the editorial process was so strict that many people who would have liked to contribute were frustrated. Founded in March 2000. In 1985. Recognition and esteem are assigned to innovators. Free Software is the product of social collaboration which is contingent upon publication. the FSF and the GPL. Wikipedia‘s open structure and the community-based self-organization allows to make full use of the power of distributed connective intelligence: Given enough eyeballs.

Research libraries across the globe are a guaranteed. On average. commercial interests realized that through ISI‘s citation hierarchy.“ Instead of setting up committees that vote on value and grant reputation. It was the time when the „information society“ discourse started. They all hate the Microsoft and Elsevier . How is the situation of companies and especially online publishing houses that scientific authors work for? Brilliant! The three remaining major players have nicely developed the market and divided it up among themselves. At the same time . The workings of science themselves produce the rough indicator of what the community deems relevant. just cover costs. by 200% throughout the last 15 years. They were not very lucrative. and an open process where everyone is trusted until proven otherwise. They also don‘t have a choice. a few thousand titles.the late 1960s . In order to be able to practically handle this counting of citations. they also eliminate the cost of printing and distribution. a supposedly objective variable is being measured.. a small fraction of all scientific journals published in the world. B. Whereas raising prices in other markets will lead customers to buy from a competitor. were not meant to be. In the early 1960s. companies don‘t want perfect markets. When publishers go online-only. Nu. and they have no choice but to publish or perish. . Brain Research was the first to break through the $ 20. Authors and reviewers don‘t get paid. it allows for general public authorship and editing of any page. a citation index on the journal-level..and Wikipedia seem to prove the weaknesses of a closed controlled process where people first have to show their credentials before they can participate. Most of all journals in areas close to industry raise prices.but only because they envy them for their monopolistic position. is roughly commensurate with the significance of the increments brought to the common fund of knowledge. Chemical magazines cost $ 9.“ How to measure and allocate recognition commensurate with the achievements of a scientist? Eugene Garfield invented Citation Indexing and set up the Institute of Scientific Information (ISI) in 1958. The Science Citation Index is an ingenious self-referential method for measuring „importance. And it works: when 30-40 people edit an article it does indeed get better. A staggering escalation of prices set it. ISI defined a single set of core-journals. Scientific journals until then were mostly published by academies and learned societies. And this changed everything. Who gets cited frequently must have said something important for the respective discipline. here it leads to competitors raising their prices as well. scientific journals could be turned into a profitable business. Information came to be viewed as central to economic growth. As a wiki. „inflexible“ market.(2005) 2:2 SCRIPT-ed 147 better. Whatever they say.a wave of new post-war university studies boomed and with them new libraries which became the target of corporate interest.000 barrier. How did this „brilliant“ situation come about? It started from what Merton called „recognition and esteem which .000 US/year on average. Being in or out of the core set is determined by the impact factor.

if weighted by the impact factor. Barshall was sued by one publisher. Scientific journals since the foundation of the first of a kind. 3. print on paper. Gordon and Breach. the cost/1. Founded by Garfield on a loan of $500 it grew rapidly. until in 1992 it was acquired by Thomson Business Information. Academy Press merged with Elsevier. an obscure British equity fund bought Kluwer with its 700 journals. Archiving for future use. It even attracted the attention of some scientists. Dissemination of knowledge and 2. has by now neatly dovetailed with financial elitism. the variations could reach three orders of magnitude. reference information. Now being challenged by the digital revolution. At the heart of the reputation economy is still the Institute of Scientific Information. Reed-Elsevier now holds about 25% of the market. The Corporation's common shares are listed on the New York and Toronto Stock Exchanges.e. higher education. the University of Wisconsin physicist who pioneered some very interesting statistics showing that. i. A real onestop shop. In 2002. the Philosophical Transactions of the Royal Society of London by Henry Oldenburg in 1665. A „must“ for every serious university and research library. combine five functions: 1. Scientific excellence. such as Henry Barschall. An in the basket: the backbone of the reputation system of science. Only the rich can read up-to-date scientific information. already somewhat skewed into scientific elitism. not to win but to intimidate. A „cannot“ for all but the richest libraries in the richest countries. „not unlike that of a patent office for scientific ideas“ (Guedon) provide property title. The new Springer-Kluwer Group has 20%. corporate training and assessment. tax. financial services. software applications and tools to more than 20 million users in the fields of law. Simply pointing to this 1 to 1. for allegedly distorting and manipulating the market. High degree of concentration: EU blocked Reed-Elsevier merger with Kluwer announced in October 1997. For their part. 4.000 range in weighted prices is enough to demonstrate the total arbitrariness of the pricing of scientific journals. validate originality. Filtering through peer-review „that guarantees as much one’s belonging to a certain kind of club as it guarantees the quality of one’s work“ (Guedon) evaluation instrument of individual scientists’ performance . scientific research and healthcare.(2005) 2:2 SCRIPT-ed 148 The core set of 5-6. accounting. In 2001. the new publishing system was firmly in place and its financial consequences had become hurtful enough to elicit some serious "ouches" on the part of librarians. poorer institutions in some rich countries and all institutions in poor countries have suffered enormously from the financial bonanza made possible by the revolutionary invention of the ‚core journals‘. By the end of the eighties.000 journals costs a library about 5 Mio per year. Thomson provides value-added information.. provide proof of priority. its complete disconnection from actual production costs. between various journals. They are a public registry of innovation. Both based on its physical form.000 characters could vary by two orders of magnitude. In May 2003 the same company acquired Springer from Bertelsman for 1 billion .

(2005) 2:2 SCRIPT-ed

149

5. Branding. For authors being printed in Cell or Nature is proof of recognition. Egalitarian ethics, hierarchical social system. An intellectual hierarchy based on excellence, nobility granted by peers, based on publicly accessible scientific results.
1. Gatekeepers

Among scientists, those who manage to play an active editorial role in the publication process enjoy a special and rather powerful role, that of "gatekeeper". As mediators, they are supposed to extract the wheat from the chaff. Of course, this judgmental role can only be justified if it is cloaked with the integrity (and authority) of the scientific institution. Any hint of systematic arbitrariness or bias would threaten the whole edifice of scientific communication. In this regard, a scientific editor acts a bit like the Keeper of the Seal without which royal prerogative cannot be exerted in the physical absence of the King. The difference lies in one important detail: in science, there is no King: only truth and reality are supposed to prevail. Silently, the journal’s editor, therefore, has come to occupy the role of guardian of truth and reality“ (Guedon). If a scientist of some repute is offered the chance to head a new journal, the response will generally be positive, perhaps even enthusiastic. The ability of offering this status-enhancing role to various scientists lies, I believe, at the foundation of a de facto and largely unexamined alliance between a number of key research scientists and commercial publishers. Seemingly a win-win situation, alas with a losing third estate: the libraries and universities and research centers, governments that finance them. All these players see their budgets flowing inexorably into the pockets of the large publishers.
2. Publishers

In a fruitful alliance with the gatekeeper scientists. „The presence of a public registry of scientific innovations would help create internal rules of behavior leading to a well structured, hierarchical society“ (Guedon).
3. Authors

Imperative: publish or perish. The new numerical indicator became the measure for reputation and basis for employment, career, and acquisition of research funds. The non-monetary value translates directly into money.
4. Guedon, two modes:

• •

Scientists-as-readers, in information-gathering mode: conferences, seminars, pre-prints, telephone calls, e-mail. invisible colleges Scientists-as-authors: footnoting, from most authoritative sources -> libraries.

Since the 1970s, journals force authors to transfer all their rights to them. This is changing a bit thanks to the open access movement. The licensing agreement of the Royal Society: I assign all right, title and interest in copyright in the Paper to The Royal Society with immediate effect and without restriction. This assignment includes the right to reproduce and publish the Paper

(2005) 2:2 SCRIPT-ed throughout the world in all languages and in all formats and media in existence or subsequently developed. I understand that I remain free to use the Paper for the internal educational or other purposes of my own institution or company, to post it on my own or my institution’s website or to use it as the basis for my own further publication or spoken presentations, as long as I do not use the Paper in any way that would conflict directly with The Royal Society’s commercial interests and provided any use includes the normal acknowledgement to The Royal Society. I assert my moral right to be identified as the (co-)author of the Paper.
5. Libraries

150

are suffering the most from the „serial pricing crisis“: The transformation of a quest for excellence into a race for elitist status bore important implications for any research library claiming to be up to snuff: once highlighted, a publication becomes indispensable, unavoidable. The race demands it. It must be acquired at all costs. The reaction of libraries was to join and form consortia, sharing of legal experience and gaining some weight in price negotiations. This did pressure vendors and taking them by surprise. But those were quick to learn: Consortia bargained for full collections. Publishers turned this around, offering package deals. The Big Deal: You have some Elsevier journals already e.g. as part of „Science Direct“, how would you like to pay a bit more and get access to all Elsevier core journals, say for $900,000 or $1.5m, depending on client, currency fluctuations etc. E.g. OhioLINK has contracted a "Big Deal" with Elsevier. As reported in the September 2000 newsletter from OhioLINK, 68.4% of all articles downloaded from the OhioLINK’s Electronic Journal Center came from Elsevier, followed far behind by John Wiley articles (9.2%). Elsevier, although it controls only about 20% of the core journals, manages to obtain 68.4% of all downloaded articles in Ohio. A local monopoly. It affects the citation rate, impact factors of the journals, and attractiveness of journals for authors. Are they [libraries] not being temporarily assuaged by a kind of compulsory buffet approach to knowledge whose highly visible richness disguises the distorted vision of the scientific landscape it provides? In other words, are not "Big Deals" the cause of informational astigmatism, so to speak? Publishers gain „panoptic vision“: with regard to site licensing negotiations, and usage statistics: Scientometrics specialists would die to lay their hands on such figures; governmental planners also. With usage statistics you move faster and stand closer to the realities of research than with citations. Usage statistics can be elaborated into interesting science indicators of this or that, for example how well a research project is proceeding on a line that might prepare the designing of new drugs or new materials. The strategic possibilities of such knowledge are simply immense. It is somewhat disquieting to note that such powerful tools are being monopolized by private interests and it is also disquieting to imagine that the same private interests can monitor, measure, perhaps predict. They can probably influence investment strategies or national science policies. In short they could develop a secondary market of meta-science studies that would bear great analogies with intelligence gathering.

(2005) 2:2 SCRIPT-ed
6. Digitization

151

If publishers were not paying authors and reviewers, they were still justified in charging for their service of handling print and paper. Not any longer after scientific publishing went online. Scientist‘s knowledge should release science from the publisher‘s strangle-hold. The opposite is the case. Market concentration escalated again through digitization and innovations in copyright law and practice. At the end of the 1980s, the Internet became a factor. 1991 Elsevier launched the TULIP project: The University Licensing Program, a cooperative research project, testing systems for networked delivery and use of journals together with 9 US universities. Questions of profitability quickly linked up with questions of control, and the technology was shaped to try and respond to these needs. TULIP was conceived as a licensing system. It was inspired by the software industry, which licensed rather than sold its products, in order to avoid some of the dispositions of the copyright law, such as first sale provisions & fair use. Elsevier extended this notion of license to scientific documents, thus setting a counterrevolution into motion. • • • Distributed to each site on a CD-ROM to be mounted on local servers. Page images in TIFF files, too big to transmit over modem, printing very slowly. Full-text search was offered but without user access to text files.

From this pilot, Elsevier retained only the licensing part and moved away from giving sites a CD-ROM to access on its own servers. Elsevier managed to invert the library’s function in a radical way: instead of defending a public space of access to information by buying copies of books and then taking advantage of the first sale provision of the copyright law, librarians were suddenly placed in the position of restricting access to a privatized space. They no longer owned anything; they had bought only temporary access under certain conditions and for a limited number of specified patrons. Licensing is „a lawyer’s paradise and a librarian’s hell.“ The traditional role of librarians, epistemological engineering - organizing and cataloguing information and long-term preservation, was also made impossible. Publishers don‘t want to take on the task of preservation, rather unload it onto libraries. Aside from preservation, the libraries might become superfluous. Once secure transactions and easy payment is there, end-users might access publisher‘s servers directly. In effect, licensing contracts subvert copyright legislation on all but one basic point: they do not question the fundamental legitimacy of intellectual property The digital knowledge space - whether scientific, business or entertainment - is ruled by licensing contracts and technological enforcement.
C. What are the legal limits of Digital Rights Management ?

Under the current regime of privatized copyright, there are essentially no limits to DRM.

D. and other software research aids. Enforcement of rights under exceptions and limitations: varies widely throughout Europe. $ 500 is a barrier to publication. DRM is the powers of technology harnessed for controlling intellectual property. Article-processing charges [flat fee of US$500 for each accepted manuscript. BioMed Central.(2005) 2:2 SCRIPT-ed 152 General laws like data protection and privacy: assumed to apply but no effective control. which has become usual practice in print publishing of science books. The levying of a moderate and reasonable charge for the costs of peer review and publication has the merit of directly linking the charge with the service to authors and the costs involved. a commercial open access archive. various provisions for getting them waived] enable the costs of immediate. implements this model: We use other business models. from Spain with possible fines of 6. OAI deals with scientific objects that need to be dealt with first and foremost by science‘s own rules. and there are no limits to the purpose for which these DRMs may be used. world-wide access to research to be covered. sales of reprints. advertising and sponsorship. and believes she will fail if she doesn‘t have total control over the use of her published works. But maybe the question refers to charging authors for getting published. E. personalized information services delivered electronically. DRM deals with objects of (money) economy.by whatever standards. Charges for scientific online publications – hindrance to science or proper business? Rhetorical question: the monopoly fees that the large publishers are charging have nothing to do with proper business . Other sources of revenue include subscription access to commissioned articles. tools that help scientists collaborate.000/day for noncompliance to no enforcement at all. Open Archives is the powers of technology harnessed for fast. highly distributed and massively parallel open knowledge exchanges. Unclear where the money goes. provision of editorially enhanced databases. Germany: option to sue. Faculty of 1000. Is the Internet a source of danger to the rights of scientific authors ? Internet is a danger to anyone hoping to become rich and famous through her works. This hope is likely more prevalent among teenagers wanting to . an expert group rating articles in BioMed Central and other Open Access Archives can be subscribed at 55/year. sales of paper copies of our journals to libraries. but enactment of that provision was postponed by one year. There are no limits on the effective technological measures being protected. and most importantly the development of a range of subscription-based value added services such as literature reviews and evaluation.

g. But then again. Freedom can be regulated by licensing contract. For preserving the possibilities for scientists to fulfill their tasks as information gatherers and as teachers. see for Germany in particular UrhG § 52a. H. How might a legal framework be adjusted in order to preserve intellectual property rights of scientific authors? Focus should not be on changing the law. Creative Commons licenses. Free Access Science Movement „Whatever may be the outcome of the political battle that is heating up in the United States. Most important innovation is not legal or technical but social: evaluation. Is it useful to establish an open source system in the field of academic publishing? By all means. Endangered by plagiarism: Is that a problem in science? Apparently yes. Not so in USA and UK where the attribution right explicitly has to be claimed.(2005) 2:2 SCRIPT-ed 153 become pop stars than among scientists. a messy business and maybe unnecessary. Texts can be easily checked through Google. But also scientists are perfectly legitimate in expecting recognition for their work. Do scientific authors need intellectual property law in an Anglo-American or in a continental-European way to preserve their intellectual property rights? What are the IPRs of scientists that need preservation? Attribution and integrity. F. „Real changes in power structures and social relations are in the offing“ (Guedon). not only useful but crucial for public science to preserve the justification of its very existence: • • Open source or free software (infrastructure) Open source information: o data formats (convertibility) o information itself: public domain info released under an open license (CC) The digital revolution doesn‘t mean we do the same old things only now with bits. I. e. Is it increasing thru the Internet? Unlikely. it is easy to imagine how a system of open archives. as a scientist you don‘t have much choice. Both inalienable rights in continental European copyright law. The real danger: privatization of the databanks of fundamental science and of usage data and scientific communications by corporations which are not accountable for it. G. The Royal Society says: “I assert my moral right to be identified as the author“. adjustments are needed to educational limitation of the EUCD and its implementations. with unified harvesting tools and citation linkages .

It just seemed the right thing to do. Jim O’Donnell at Penn. but. it later transferred to Cornell.org. Theory & Practice of Logic Programming (Cambridge University Press). Budapest Open Access Initiative. and then „create change“ movement. by creating or supporting journals that compete with the expensive major journals." .(2005) 2:2 SCRIPT-ed constructed in a distributed manner. . attempting to reintroduce competition. 25 with a new editorial board. It is today the primary source for large parts of physics. The SPARC initiative (The Scholarly Publishing and Academic Research Coalition) launched in June 1998. Elsevier launched the Journal of Logic & Algebraic Programming. OAI refuses to design any utopian SOE (standard of everything).. supporting learned societies retain control over their journals.g. helping editors negotiate better deals. „For one Professor Bruynooghe. simplicity. Paul Ginsparg started his physics preprint server at the Los Alamos National Laboratory: arXiv. it may appear to many as more symbolic than anything else“ (Guedon). within 3 days had responses from leading theorists in the field. The motives were: • • • • to reduce publishing delays to decrease publishing costs (by at least 30%) to lower startup costs of journals to open up the possibility of free journals. reported on the case of a graduate student in Prague who put an article on spring theory onto arXiv. 154 First experiments with electronic journals came from a few exceptional scientists and scholars (e. Darius Cuplinskas. let us face it. leveling off at $9. Stevan Harnad then at Princeton. because of Organic Letters. at $2. Started in high-energy physics. At an October 1999 Meeting.438 by far the most expensive of SPARC journals. As they say of the Internet. It is successful in providing alternatives and to some degree keeping prices of the major journals down (Elsevier‘s Tetrahedron Letters.000 in 2001. Currently hold a quarter of a million articles. distributedness. the Santa Fe Conventions were formulated: distributed e-print server with common metadata that can be harvest and searched easily.to my knowledge not yet as a consciously anti-publishing initiative. there may be ten others eager to enjoy the opportunity to act as gatekeepers“ (Guedon). Professor Maurice Bruynooghe even won a prize for this courageous move. „A dozen SPARC-labeled journals is wonderful. Without any filters and gatekeepers. can threaten vast commercial interests“ (Guedon). In February 2001. "implementation precedes standardization. networking.000 in 2001. In 1991.). Growing sense of crisis led the whole editorial board of the Journal of Logic Programming to resign en masse from this Elsevier publication and found a new journal. already at $5. SPARC works with learned societies and university presses to position free or comparatively reasonably priced journals head-on against their oligopoly equivalents. Got a scholarship. This initiative has been taken over by OAI. etc.) Also raising awareness.200 in 1995 appeared poised to reach $12. The principles are: interoperability.

supporting publishing by University Presses. A Cross Archive Search Service. etc. also promoting establishing of university-based open access archives and self-archiving. the journal level was targeted by PubMed Central. but it behaved a little too idealistically: Journals. especially commercial journals. By contrast. the PubScience initiative appears to be dividing the publishers. PLoS decided that setting up their own journals is the only way. funded by the DFG. PubMed Central sought to encourage journals to free their content as quickly as possible. Stevan Harnad. but the Senate restored them. as could have probably been expected from the beginning. a collaboration of NASA. HighWire Press. used to investigate issues in harvesting OAI compliant repositories and making them accessible through a unified search interface. Biology. in the end. Springer. Karlsruhe and Oldenburg. In summer 2001. Medicine. according to an interesting fault line that ought to be explored further—namely. Old Dominion University. on the initiative of Darius Cuplinskas from the Soros Open Society Institute. as well as any other portions of the sciences that are pertinent to the study of cognition. in December 2002 it received a grant from the Gorden Moore Foundation. GNU EPrints was developed at Southampton. migrated to OAI in October 2001. an electronic archive for self-archive papers in any area of Psychology. German Academic Publishers (GAP). a part of the Current Science Group. PubScience counts several important commercial publishers on its side: Kluwer. BioMed Central. Philosophy. or even parts of it.(2005) 2:2 SCRIPT-ed 155 Public Library of Science started in October 2000. and Linguistics. Learned Societies and SMEs. The Petition was signed by 30. and many areas of Computer Science. that between publishers involved purely in publishing and publishers also involved in the bibliographic indexing kind of activities. locates itself squarely as a commercial publisher. Taylor & Francis. were not ready to give the store away.000 scientists asking commercial publishers to release their articles 6 months after publication for inclusion in an open access archive. ARC. Bepress. There again. House of Representatives removed all budgetary provisions for PubSCIENCE. the NIH-based proposal was left with very few concrete results. Anthropology. In December 2001. Networked Computer Science Technical Reference Library (NCSTRL). It only managed to convince a few. Software & Information Industry Association (SIIA) complained that it competes with private sector publishers. and then NIH director. Joint project by universities in Hamburg. for creating an OAI compliant archive. at the same time. partner of FIGARO. Research Institutions. and they strongly criticized the PubMed Central venture. it sees itself as a complement to . BioMed Central is somewhat different: BioMed Central was created in response to the partial failure of the NIH-led PubMed Central project: under the impulsion of Nobel prize winner Harold Varmus. and BioOne stand somewhere between a commercial outfit and a cooperative. the Budapest Open Access Initiative was started. University of Virginia and Virginia Tech. CogPrints. possibly from day one. neuroscience. Commercial open archives: BioMedCentral. In short.

. What needs to be done? Technology: yes. print publication are related to career management. Frequent replication and wide distribution. Ginsparg knew well what information could emerge from the use statistics of his server. mirrors can be established with little hassle and. A selfselection of contributors choosing a suitable archive for their works is sufficient. If archives are open.. technical. skill constraints. and not individual articles. Anyone is allowed to publish. and is working on methods for citation tracking. once accepted. But journals are not the only way of evaluation. refined and extended if needed by librarians. Elsevier is experimenting with archive evaluations on its Chemical Preprint Server. while publicly archived digital versions deal with the management of intellectual quests” (Guedon). Long-term archiving J. K. If . What is changing? From journals to archives. It invites scientists to submit articles which are immediately peer reviewed. but he refused to release them for ethical reasons and political prudence. The author(s) keep their copyright. In the digital world. If openness can be demonstrably and operationally linked with better long-term survival. We should never forget that lesson! Vicky Reich’s LOCKSS project at Stanford University appears to have taken in the implications of this "dynamic stability" vision for long-term preservation of documents. 2. High visibility 3. have long been used by DNA to ensure species stability that can span millions of years. as they are the foundation for their financially lucrative technique of branding individual scientists. selection and evaluation through usage becomes the dominant mode.(2005) 2:2 SCRIPT-ed 156 PubMed Central. not hardened bank vaults. “For scientists. as a result. it will have gained a powerful argument that will be difficult to counter. In the digital world no preselection is needed. they are published in both PubMed Central and BioMed Central. But. money and environmental quality. but not DRM which is a complete waste of time. the archive has more chances to survive. The interest of commercial publishers is to keep pushing journal titles. The reasons given by BioMed Central to induce scientists to publish with them deserve some scrutiny: 1. Most of PubMed Central "successes" actually come from BioMed Central! BioMed Central offers about 350 free peer-reviewed journals. The model should be discussed. Evaluation: An Open Panoptic Space …to print means to select. There are no more economic. calculating impact factors and other methods for evaluating articles. The peer review process tends to extend to the whole community almost immediately.

but it should clearly be on the agenda of learned societies. I . collective move stemming from the whole community of scientists. librarians hold the key to developing a total. The evaluation process will have to be torn out of the publishers’ grip and that of gatekeepers. will open the way to many different and useful forms of evaluation. Literature: Jean-Claude Guédon (Université de Montréal). it had better emerge as a conscious. these are not unclear battles. the access to large corpora of texts. held at ARL May 2001 Membership Meeting „Creating the Digital Future“. Willms Buhse: The difficulty I face here is that Digital World Services (DWS) as a Bertelsmann company is not working in the field of academic publishing. if suitably forewarned. Last Updated: October 26. Research Scientists.arl. laid out in open archives.. with the digital world. university administrators. the evaluation process stands ready to be reinvented in a clear.. Unlike consortial battles. The vision. fraught as they are with many ambiguities. 2001. but. in particular through their citations. So. but it is not new. rational way by the relevant research communities themselves. http://www. It is not an easy problem to solve. i. some of the main players will try either to destroy or control what they do not already own. of course. colleagues! Ways to tear this detrimental alliance apart while relying on the real expertise of these gatekeepers must therefore be sought.e. scientific and library communities backed by lucid administrators do pack a certain amount of firepower. This would amount to creating an open panoptic space—a marvellous project for librarians. it is easy to see that tools to evaluate all sorts of dimensions of scientific life could also be designed and tested. librarians.. is dizzying. These tools might be designed as public good. in itself. Let us do it! L. with the help of scientists self-archiving their work. In the end. it has lain in the background of Garfield’s (and Vannevar Bush’s) thoughts and quests. we may just begin to have the tools and the social know-how (distributed intelligence again) to do it all now. and cross-linked in various ways. „In Oldenburg’s Long Shadow: Librarians. But . Guedon speculates that commercial open archives like Elsevier‘s: „Chemical Preprint Server" (CPS) are there for testing ways to reconstruct a firm grip on the evaluation process of science in the digital context. In short. by a combination of specialists in scientometrics and bibliometrics—an ideal outcome in my opinion. what I will do is. and.org/arl/proceedings/138/guedon. global mapping of science. somewhere. It will also help monitoring the crucial growth areas of science while placing this bit of intelligence gathering into the public sphere. but on the actual quality of each selected work. If we imagine that a significant fraction of scientific knowledge should circulate through open archives structured in the OAI spirit. Publishers. with the help also of selections that do not rest on the prior reputation of a brand.(2005) 2:2 SCRIPT-ed 157 evaluation were ever to rely on his archives. and the Control of Scientific Publishing“. With a well-designed principle of distributed intelligence.html.

What trends do we see. DWS believes that Digital Rights Management is central and critical to the success of the digital content business. For them it is crucial to generate content revenues. there are many others. legal and business frameworks are coming together to facilitate DRM I will come to that a little later. Another DRM standard that is evolving and being used across the wireless industry is one provided by the Open Mobile Alliance (OMA). we are working in the field of mobile content distribution.(2005) 2:2 SCRIPT-ed 158 will briefly explain what my company does. there is an increasing need for DRM systems. but it is actually a strategic business enabler. DWS is part of the Arvato Group – the media services division within Bertelsmann. which is in fact an open standard. Of course. software games. I hope this combination will be helpful for your conclusions. e. If you distribute games to your cell phone. Leading content providers and distributors are currently building these platforms. Then you have the open internet which is uncontrolled and full of illegal content from the perspective of music industry. Then there are . when it comes to DRM? First of all. The central part of this is something that is called clearing house. which is very often mentioned as the core part of the DRM system. I took some findings from the music industry and from my analysis. These cable networks or telecommunication networks are now opening up. for example for the online distribution of music.Tunes and Pressplay. which have this kind of DRM technology implemented. Today there are about 40 cell phone types currently in the market. why is DRM so relevant? On the one hand you can see how we used to have closed networks such as the cable networks. The technical. That means that any company can actually join the OMA and help to develop technical specifications. so that is a big success. And the OMA has already successfully released specifications for DRM last year. If you look into certain systems. They would just stick to traditional means of distribution. that is able to handle music. as some sort of a pay-per-view. I. such as Liberty Alliance and Passport. as a little disclaimer it is not the only way of generating revenues.g. that in such a short timeframe this technology has become implemented by many different network operators and especially hand set providers. which means that we are a service provider just like a printing facility or a facility that manufactures CDs. As they are opening up. We are a system provider. they are all using DRM systems. It is important to stress the word “business” as the media industry has to rely on revenue streams in the digital age. as we see certain technologies appearing. DRM used to be only relevant as some sort of a billing mechanism. DWS started about five years ago and during that time we came to a couple of conclusions. For example. It was released about a year ago. because otherwise many content providers would just not allow publishing this type of content in a digital format. and then I switch over to what I am doing that is more similar to academic publishing – I will talk about some findings of my research thesis about Digital Rights Management. It is becoming more and more controlled. so we are building technology. This is a strategic business enabler. We have developed a platform. and it protects these bits and distributes them to a number of different end devices including PCs and mobile devices. then we actually have a DRM technology to protect these games for certain types of cell phones. Nevertheless. video publishing. We realised that DRM is not just a technology.

There are number of things that are coming together. Regarding the cost perspective. So. DRM is something that can always be hacked – but there are developments when it comes to security updates. where they were extremely sceptical towards the new distribution channel. The music industry currently licenses music to a number of companies. And when you look at how those costs are distributed within the music industry. which is in the end similar to the publishing process. TCG as an example but it certainly will take a while . there is a growing political support for IP protection and a growing education. On the environment side. direct and indirect sources. but still there is the possibility to establish trust. On the consumer side. it would cost him 15 Euros. I think some might actually apply to the academic world. At least from the music industry perspective the real potential of 50 percent lies in marketing and distribution cost. first copy cost. To explain this difference to him is extremely challenging. Why should I buy a book for 100 Euros. It is very hard to explain to a consumer who burns a CD by himself for probably 50 Eurocent why. There you can have impact on the cost structure. there are differences between first copy cost and variable cost per unit.a couple of years. which business environment they are working on. DRM is certainly an extremely complex technology but fast developing. They know which environment. As a result. On the other hand you can see the content suppliers certainly asking for a DRM infrastructure to be in place. that is actually something that really revolutionizes an entire industry. There is a limited willingness for the end consumer to actually pay for digitally delivered content. very small portion of that cost and even print it for a fraction of that? So what will be the business model behind that? . and I would like to introduce you to the digital distribution matrix . We see this as the ideal environment for DRM. a growing discussion in this field. Again this refers a lot to online music. they are demanding access to rich media content and they have increasing access to bandwidth. manufacturing cost and label cost. As I promised earlier. but also they are making content licences available. This is certainly true for the music industry.(2005) 2:2 SCRIPT-ed 159 certain security measures. which do not necessarily reflect the opinions of my company. if I can actually download it for a very. if he bought it. That wasn't the case five years ago. Regarding the agenda. something that is open. Then I would like to look at some uncertainties in supply and demand. From a technological perspective. what are the uncertainties on the demand side? Where do the revenues actually come from? Different sources. And more and more legal aspects are being solved in this area. At least people realize that it is an important area to work on. but I thought it is a good idea to share this with you. We see all these factors are coming together so that DRM actually has its place in the value chain of digital content distribution. we take a quick look at the cost prospective. We will get to some trusted gardens. There is a number of factors associated with that. I would like to present some ideas.four scenarios for academic publishing and a conclusion. They are the results of my doctoral thesis at Munich University. you actually can see that there is little you can do about production. I think it is something companies can rely on.

You might even charge on a time based. you can rent it. It is very difficult to control. This might be a charge for a single track. which are associated with that. What are the business models ? If you come from the direct side. digital item. device purchase or set-up fees or repetitive. For example you pay to access a data base a couple of Euros a month. People can forward it amongst each other via e-mail. I am not willing to pay for this. and you go from scenario one to four. you get transaction based access to this. whenever somebody wants to read it. But still.that means data mining with a huge challenge of privacy issues. The publisher is even willing to give it away for free. these two uncertainties. that actually every single item. which is indirect. The third scenario is a subscription scenario with some sort of a light-weight-DRM. is encrypted and consumers pay for access to this digital item. cross selling. So what is the copy protection behind that? I took these two questions. Most of digital goods today are public to the extent that you can find them on the internet. which is a subscription fee. So. some sort of access control. It almost seems as it is a sort of looking at the “Darknet” as it was quoted from researches from Microsoft. even though copyright is associated to many of them. but because of the service. On the left side there is a little empirical example. But the idea is that actually people pay directly for the content in the sense that they pay indirectly for an access fee. On the supply side digital goods can be public or they can be private. which is called Super-Distribution. sort of for single transactions. how to pay for digital content. still you pay not because of the content. so that it motivates people to join this subscription. rules apply . the willingness to pay for one song: three quarters of the people were saying. or it is direct. you can charge for usage related. Still I would like to go briefly into the demand side again. a licence fee. There are different ways.that means advertising . MMS etc. In this case there is a direct charge. that is you charge for an academic title and you read it once or you print it once or you read it for 24 hours or something like that. People are willing to pay for the content directly. How does this digitalized solution matrix look like? You have on the top supply side online music as a public good or as a private good and you have on the demand side a revenue model. This is a Jupiter survey from some time ago. so it might be for example high quality access. be it an article. That refers to fees and taxes for example on end devices or on storage media.(2005) 2:2 SCRIPT-ed 160 That’s from the demand side. Music might become non-excludable. Or via certain institutions. which you can see here. which might be a single fee. The first scenario is the promotion. Either you can pay directly or indirectly for digital content. and that is an essential part of privatizing digital goods. which means. A mix is some sort of a non-usage related charging. or even broadcasting fees are this kind of non-usage related fees. but still it is very tough to charge for digital goods. even syndications. Nevertheless. The reason to publish is fame. and combined them to a scenario matrix. These numbers have slightly changed. you still might apply from scenario two some technique in scenario three. You can do this via corporations . be it a book. very good search tools or personalisation tools. And in the fourth scenario. no. The second scenario is a music service provider. certain commissions.

hence the promotion scenario. It is like an access and account management. Scenario three. You forward music clips and you can watch that clip once for free. Especially in this case standardisation is extremely important. it is about a community service for chats and message boards for example. and write that into the reference table. Revenues are based on a light DRM. Those technologies are being currently implemented. As this is the service model it is a question of connectivity. Scenario four is the Super-Distribution case. How do I get access to the type of content I really want to get access to? In the case of music. providing some sort of continuous publishing. If you want to listen to music for example and you buy this music. a controlled version management could be possible and even more interesting. so that. Here you are paying for the service that you do not have to look up. that is very hard to control.(2005) 2:2 SCRIPT-ed 161 and it can be charged for. that an article that was published by somebody is definitely published by this person and not by somebody else. What is the impact on academic publishing? Well. Let’s apply these scenarios to academic publishing: In scenario one. the subscription case. the sale of individual articles. And I think this is exciting and it allows for more flexible business models. It can be done through promotions of peer to peer file sharing. It also can be done through a peer review fee. where revenues are based on DRM protection . recommendation or bundling. And that makes it extremely challenging. "when was the book published". it has to do with personalisation. If you have an Open Source system. It is mandatory that the same system has to be by all the participants. I am sure that those four . Scenario two. This also can be done with academic research. and so on and so on.that is where DRM becomes the business enabler. With music you could imagine that system via MMS. It is a question of quality control. chapters and books. What is the impact on academic publishing here? It can be the scenario of providing updates. it is an author-sponsored building-fame model. revenues are based on economies of attention. but you have a restriction when it comes to access. but if you want to buy it. you have to buy the right to do it. so you can actually build this with these kinds of subscription systems and you can obviously charge for example via monthly subscription fees. The impact on academic publishing is that for example peer to peer collaboration could be possible with that. It can be a service and this can be controlled through some sort of a light DRM version that automatically identifies older versions for example. even when you bought it for your PC. You do not necessarily encrypt every digital item. There are certain questions how to solve this. you will always have access to the most actual version that is out there. It is not that you pick and choose and only the promotion scenario and the open source scenario will be around. when you are part of this subscription service. You can make sure. It is about hosting and aggregation. it is my opinion that all these four scenarios will coexist. because obviously a company providing these kinds of systems might become some sort of a gatekeeper and that has to be closely watched. It is more about reputation and branding. can be done for example through sponsoring or public funds or grants. Super-Distribution certainly also faces the challenge of fair use rights. you can listen to it in your car. the service provider business model. After this bouquet of different scenarios as a conclusion. so that a system is established that allows for a faster peer review process for example.

surfine/copyright/keyissues/scholarlyimplementation_zwoll e_principles_pdf).if you consider how many people use Kazaa-like systems compared to how many people legally buy music currently on the internet. not in antagonism. probably you even would be willing to pay for its broad distribution. Do you want to earn money or do you want to get known in the world. In my opinion it should be up to every individual to decide whether they want to charge for their articles or not and so both systems should be coexistent. And in the end an author might actually be able to individually charge for his articles as mentioned in the last scenario. You are actually willing to publish your articles without charging for it. you are probably going to a service provider. universities and the public – to achieve maximum access to scholarship without compromising quality or academic freedom and without denying aspects of costs and rewards involved. In case of the music industry those scenarios on the left probably apply in 99. Then you might actually move into a subscription scenario. long time. It is always a question: “what do you want as an author?”.5 percent of the cases today . I endorse the so-called Zwolle Principles 2002 (ZP) with regard to copyright management(http://www. with the community of scientific and . at times coinciding or diverging. meaning if you are at the beginning of your research. Willem Grosheide: A. From an academic publishing perspective. It follows that I favour an approach towards e-publishing from the academic community in cooperation. where you are part of a bigger journal or part of a bigger publication. interests of all stakeholders involved by allocating specific rights to them. Preliminary observations I would like to stress that in my approach to e-publishing in its broadest sense. I think the music industry has started a lot of initiatives and they are trying to migrate more and more people from this “Darknet” to legal users. As you will know these principles are set within the framework laid down by the Tempe Principles and accompanying documents stemming from the American Association for the Advancement of Sciences (AAAS). because you can see a number of downloads that were done and he integrates your content into his huge database and there is cost for that in the hope in the end he can charge for some sort of access fees. As a next step.(2005) 2:2 SCRIPT-ed 162 scenarios will be around for a long. publishers.” In pursuing this objective the ZP promote the development of an appropriate copyright management. authors and publications might actually develop from scenario one to scenario four. librarians. you might have a huge interest in getting your work known. For reasons of convenience I would here like to quote the Objective of the ZP: “To assist stakeholders – including authors. taking into account the various. And I think these four scenarios allow for both cases.

although so-called moral rights aspects can easily come into play in the context of e-publishing. White Paper -What Consumers Want in Digital Rights Management. transmission. WIPO Copyright Treaty 1996.g. various forms of encoding. March 2003 (http://dx. and the legal protection of technological protective instruments has been introduced. Issues of general interest E-publishing in whatever form should only be taken up by the scientific community itself if what it can offer to authors and public adds .as is done by some .doi. both on an international and a national level copyright law has been substantially strengthened to adjust it to the digital environment.1003/whitepaperl)] no less than the following three different interpretations of an e-book are used within the publishing. in an e journal or in any other digital format.that copyright law is being washed away through the digital colander. This not to say that copyright law in its actual form can not serve the interests of individual academics or the academic community at large.e. digital rights management. It appears that different schemes are here applied worldwide. e. makes it possible to dispose of copyrights by way of contract law. with regard to the integrity of the content. I confine myself in this statement to the economic aspects of copyright law as the ones that prevail in the context of this workshop. the enforcement of copyright law has been articulated. e.something to what scientific and educational publishers can offer. DMCA. As is well known. according to a recent study of the Association of American Publishers (AAP) and the American Library Association (ALA)[AAP/ALA. It is true to say that particularly the advent of technological devices to protect copyright content.(2005) 2:2 SCRIPT-ed 163 educational publishers. in recent years conflicts have arisen with regard to the ownership of copyrights created for scientific work which has been created within academic institutions. Who owns the copyrights in such a situation. remuneration and the like . See TRIPS Agreement 1994. the individual academic or the institution? Finally. cost-saving. It is a fallacy to believe . either in the form of an e-book.g. Taking the notion of e-book as the terminus technicus with regard to electronic publishing in general.in terms of formats. no differences of opinion between scientific authors and scientific institutions. As a consequence the academic community should abstain from diving headlong into e-publishing activities executed adequately by scientific and educational publishers. B. . and they are not necessarily beneficial to the interests of individual authors such as academics.e.org/10. e. e. The foregoing is said under the assumption that there are no conflicting interests within the academic community. This may also be true for the electronically making available (i. InfoSoc Directive 2001.g. As a consequence the prerogatives of the right owner have been extended towards an access right. On the contrary. distribution) on the Internet of text.g. i. It is of note here that the strengthening of the international and national copyright regimes is mainly induced by economic considerations promoted by the so-called copyright industries. by providing that the so-called three-step-test has to be applied by the courts. quality.

With regard to the different forms of business models. . It is obvious that.e. Stupid.: are exceptions and limitations to copyright law merely reflecting market failure?) and considering transfer of rights clauses with respect to their legal nature (assignment or license?) and their scope (general or purpose-restricted?). or any full-text electronic resource designed to be read on a screen (University of Virginia E-text Center). i. No. access to information may be more valued amongst scholars than amongst scientific publishers. ejournals to e-platforms. the content itself (netLibrary). 4. the latter is crucial for scientific publishers. although the same legal instruments are available for authors and exploiters acting as copyright owners. It appears that today the notion of the traditional book is challenged by computer technology and the variety of ways in which new products. This may be particularly true taking account of the different products that can be produced electronically. software application (the electronic wrapper/envelope) and hardware device (the e book reader device). again the same formats such as particularly DRM. What counts here from the perspective of the academic community are interests such as the broadest possible dissemination of scientific content amongst other authors and consumers. What is common to all these interpretations and definitions is their reference to the metaphor of a book made of paper and cardboard. designed to achieve the same ends. Also the downstream relationships between authors and consumers such as the relationships between exploiters and consumers have to be monitored. reference to a book in its traditional form is expected to fade as comparisons to products in print become less necessary. Other definitions can be found in the e-book industry. For example. their application may differ since there may be a divergence of interests here. amongst other things. Whereas scholars will focus on rapid and broad dissemination of their works and will have no concern with regard to exploitation in the sense of profit making. no unrestricted application of the principle of the freedom of contract with regard to. First the upstream relationships between authors and exploiters. may be taken into account are the relationships that are at stake. With regard to the applicable law. such as literary work in the form of a digital object (AAP). if e-books are different the general question to be answered may be if the applicable law should then also be different and should different forms of business models then be used with regard to the dissemination? It seems that the general answer to this question must be: It all depends. Stakeholders and Standards in the Ebook Ecology or It’s the Economics.g. as e-books evolve. balancing proprietary claims of copyright owners against access to information claims by authors and consumers (e. amongst other things. So.(2005) 2:2 SCRIPT-ed 164 library and users communities: the content file (digital file). the following issues. are being developed (Karen Coyle. are available to both authors and exploiters but their use may differ depending on who makes use of them. ranging from e-books. This requires close monitoring of exploitation contracts. Library Hi Tech. p. 314-324). However. integrity of the content and reasonable costs and fair remuneration. Volume 10.

Preservation of intellectual property rights. next to the traditional so-called personal copyright. it indeed makes a difference which legal system applies: the perspective from the principle of the Anglo-American law work-madefor-hire rule has different consequences than the contractual arrangement which applies under continental law. i. depending on how the respective interests are balanced against each other. the requirements for e-book publishing and the publication of an e-journal are not necessarily the same.a hindrance to science or a proper business? A simple positive or negative answer to this question is not possible since charges can be of various kinds. Further. artists. Charges for scientific online publications . in a digital context can be attained by a combination of legal and technological measures. e. What measures should be taken is primarily a policy issue. It is of note here that in recent times. at the same time. e. e. Entrepreneurial copyright should be understood as. in practice contracts may have more or less the same effects under the continental rule as the work-made-for-hire rule due to extensive assignments of copyrights which are usually made. Obviously. further. these copyright industries try to pursue their own commercial interests which do not necessarily coincide with the interests of the (scientific) authors which they represent on the basis of an assignment or a license of their copyrights.g. the bundling of copyrights in large industrial conglomerates. So. III. levies or DRM or a combination of those instruments may be valued differently from the perspective of scientific authors than from the perspective of scientific publishers.g. all this together with either compulsory licenses. .(2005) 2:2 SCRIPT-ed C. It follows that while with regard to the adjustment of copyright law to the digital environment. Do scientific authors need intellectual property law in an Anglo-American or in a continental European way in order to preserve their intellectual property rights? This question is self-evident: there are no intellectual property rights without intellectual property law.g. Neither is a simple answer necessary. if the copyright industries refer to themselves as right owners this reference can under these circumstances be misleading. Bertelsmann or EMI and particularly with regard to scientific publishing e. the possibilities of enforcing the law are of the essence. composers or other creative people. Issues of special interest 165 I.g. the co-called copyright industries. reimbursement of costs. amongst other things. remuneration of authors or profit making. II. restricting the use of exceptions and limitations via an extension of the three-step-test. copyright.e. Elsevier Science or Wolters/Kluwer. How might a legal framework be adjusted in order to preserve the intellectual property rights of scientific authors? It does not seem that the position of scientific authors is in any way different in this respect from the position of literary authors. However. the available options such as the introduction of an access right and. the entrepreneurial copyright has emerged. It seems likely that the best answer will be given on a case by case basis.

Is the Internet a source of danger to the rights of scientific authors? Again. Further. digital signatures. Standing Committee on Copyright and related Rights – Current Developments in the Field of Digital Rights Management. it aims to: • build a comprehensive resource of scientific information produced by the Max Planck institutes. if the latter means that legally and technologically unprotected scientific works may be easily plagiarized.this issue was exactly the core question of an expert opinion a colleague of mine and I prepared for the Heinz Nixdorf Center for Information Management in the Max Planck Society. whereas the latter deals with encryption. watermarking. providing a stable location for its preservation and dissemination. the Internet is both an opportunity and a danger. The former comprises the technologies of identification. The background is that the Max Planck Society launched a so called eDoc Server and wanted to know in how far copyright could be an obstacle to its plans. For example. VI. It is of note here that a distinction should be made between the management of digital rights and the digital management of rights. corrupted and exploited without the authorization of the author. increase the visibility of the intellectual output of the Max Planck Institutes in all the forms it takes in the era of the Internet strengthen the Society and the scientific community in negotiations with publishers about the ownership of scientific research documents at a time • • . According to the official website of the eDoc-Server. scientific authors do not have a special position in this respect compared with other authors. if DRM makes it impossible for authors or consumers to make a copy for private use this is not a violation of copyright law under the regime of the InFoSoc Directive applicable in the EU (WIPO. Is it useful to establish an open source system in the field of academic publishing? Since there is no established concept of open source it is difficult to say whether the academic world will benefit from a system of open source. How is the situation of companies and especially online publishing houses that scientific authors work for? This question cannot be answered because of a lack of adequate information. SCCR/10/2 (Geneva August 2003). privacy technologies and payment systems.(2005) 2:2 SCRIPT-ed 166 IV. VII. Alexander Peukert: Intellectual property rights issues of digital publishing . metadata and languages. the beneficial effect is quite obvious. What are the legal limits of Digital Rights Management? In principle there are no legal limits to DRM other than those limitations which can be found in positive law. If open source stands for establishing platforms for scholarly discussion. V.

emerging scholarly communication system. works of minor volume or of individual contributions published in newspapers or periodicals for persons that form a clearly defined group for their own research activities. these employers or commissioners have to contractually obtain rights in the work. • contribute to a world wide. And here we are in the middle of the whole problem: Who owns the rights in scientific publications? Under German law. 2003. we had to tell the Max Planck Society that copyright law has severe implications for a project like the eDoc Server. After all. Basically. the publisher has to acquire the rights necessary for his envisaged exploitation. tables and three-dimensional representations. every national law would have to contain a broad limitation of copyright in this respect. Courts have generally been generous in granting copyright protection to scientific writings. this limitation of the scope of copyright does not help much when it comes to digital publishing. In the following minutes I do not want to bother you with details of our findings. § 52a German CA proclaims that it shall be permissible to make available parts of a printed work. which exploits the full potential of the Internet and the digital representation and processing of scientific information. copyright initially vests with the author. This so called idea-expression dichotomy preserves the public domain and fosters the dissemination of knowledge. It only applies to small groups of researchers that may share their research results amongst them using computer networks. the results of our study summarized in the following show the complexity of today’s digital publishing in the scientific domain. Otherwise. basically all scientific articles are subject to copyright protection. procedures. Since the general standards of copyrightability are very low. maps.(2005) 2:2 SCRIPT-ed 167 where sky-rocketing journal prices and restrictive copyright undermine their wide dissemination and persistent accessibility. Article 9 paragraph 2 of the TRIPS agreement makes it clear that Copyright protection shall extend to expressions and not to ideas. German copyright law does not follow the concept of works made for hire. such as drawings. the employment . Moreover. the growing importance of the legal protection of databases has to be considered in this field. methods of operation or mathematical concepts as such. Statutory limitations to this broad copyright protection do not apply to digital publishing of whole articles or even books on the Internet. Scientific ideas as such are not subject to copyright protection. Besides. Moreover. I suppose that these are typical purposes for parallel activities of other research institutions (like the MIT or the Swiss Federal Institute of Technology in Zurich) and that they are one of the reasons for our workshop today. Nevertheless. plans. sketches. Even if scientific authors are employed at universities or other research institutions. Therefore. This assessment particularly relates to a new limitation in the German copyright act as amended on September 13th. digital publishing on the Internet involves possibly all national copyright laws because this exploitation takes place worldwide. illustrations of a scientific or technical nature. Obviously. even if this is a non-profit publication for educational or scholarly purposes. To sum up. this limitation does not cover a worldwide publication on the Internet. the right holder who did not agree to the use of the work on the Internet could bar this use of the work invoking one particularly far reaching national law.

we all will have to deal with copyright protection of scientific works. The one who is missing yet is the commercial publisher. this proposal will not likely be implemented in the U. 2 of the Berne Convention). Therefore. However. However. authors are not allowed to update public server versions of their articles to be identical to the articles as published. the implementation of technological protection measures poses a threat to the vision of unrestricted dissemination of knowledge over the Internet. In my view. This means that articles may be freely accessible. this exemplary situation shed light on some fundamental issues about digital publishing in the field of scholarly communication: The three parties involved are the author. Moreover.(2005) 2:2 SCRIPT-ed 168 contract between the researcher and the university is not deemed to contain an implicit transfer of rights. this is not the whole picture. the financing of these works would totally depend on public funding because neither authors nor commercial publishers would be able to exclusively exploit works of this kind. free riding will be the rule. especially the right of making the work available to the public. In this case. it is unclear whether this quite recent policy of some publishers also relates to prior publications never been published online before and whether publishers accept a comprehensive collection of all publications of an author on a server like eDoc. By the way. However. Without technological measures. the university and the commercial publisher. code anytime soon. One at least theoretically conceivable way to solve these problems would be to abolish copyright protection in the field of scientific publications or to limit copyright protection to certain moral rights. Thus. including the author’s own home page. mutilation or other modification of the work. all countries and every exploitation right of commercial value. Commercial publishers that use the Internet as a platform to distribute their products will certainly rely on DRM systems in order to charge their customers. the Max Planck Society has to acquire rights for digital publishing just as commercial publishing houses have to do. Since nowadays most of the creative output of the scientific community is published in journals that are edited by companies doing this for commercial purposes. And just as works in the . international conventions on copyright regulate the mandatory protection of every production in the scientific domain (see for example Art.S. Note that in June the “Public Access to Science Act” was introduced in the legislative process in the U. Admittedly. Moreover. According to current standard contract forms used by publishers of scholarly works. Research Institutions like the Max Planck Society are trying to change this situation by becoming publishers themselves. The reasons for this are manifold. these publishers normally gain comprehensive rights in the respective work that an author submits for publication in a scientific journal. This transfer of rights covers the whole copyright term. a number of publishers nowadays accept electronic preprints on publicly accessible servers. especially the right to claim authorship of the work and to object to any distortion. Above all. This proposal actually precludes copyright protection for scholarly works that were created with public funding. but only in a format that makes it difficult for others to quote the article. it is the publisher and not the author who decides about whether or not copyright is installed to impede the free flow of information. at this point it should be obvious that the Max Planck Society actually appears as a publisher and that this behaviour causes legitimate concerns in the branch of commercial publishers.S. not the exemption.

. but that he has to acquire the right to do so from the author. Therefore. Those right holders who primarily publish articles because they want to make money will more likely impose electronic fences around digital content than will scientific and academic authors who receive a regular salary and normally do not depend upon revenues from publishing their works. What commercial publishers have to cope with is a new group of competitors: universities and even authors. notably the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty of 1996. Therefore.then more acute competition serves the public interest. Digitization and the Internet are the technical features that make this aim cost-efficiently achievable for universities and institutions like the Max Planck Society or even for the author himself. we should be happy about the implications of digitization and the Internet in the field of publishing of scholarly works. because this group of right holders is less likely to employ copyright to restrict access to their works.something we have reason to believe . Some aspects of this policy could be: • • Initial ownership of copyright should vest with the author. It should be acknowledged that the exploiter is not free to implement DRM systems as he sees fit. which makes it particularly difficult to change the law. I believe that this differentiation is the key for future copyright policies in the field of the scientific domain: publishers acting for commercial purposes on the one hand and authors and research institutions on the other hand. However. also with regard to digital content of interest for the scientific community. Nevertheless.that not everybody is interested in implementing DRM-systems to protect digital content against certain uses. Copyright contract law should protect the author against overly broad transfers of copyright. if Adam Smith’s “invisible hand” tells us the truth . The reason is . DRM systems are legally protected against circumvention. Legislators should provide measures to preserve for authors of scientific works the control over their creations. Implementation of DRM systems must not be obligatory.and here I can come back to my prior statements . These rules are also laid down in international conventions on copyright. it seems to me that this factual and legal background does not compellingly put an end to the vision of globally accessible and shared information. • • • • Applying this approach will help authors to control the use of their work without necessarily driving commercial publishers out of business. we have to face the fact that DRM systems are part of the commercial use of the Internet.(2005) 2:2 SCRIPT-ed 169 scientific domain are protected by copyright. Scientific authors are . National legislators should think twice before applying the concept of works made for hire in favour of universities and other publicly funded research institutions because it is not impossible that universities as the future right holders would evolve into commercial publishers that would again impose technical measures restricting the access to digital content. Publishers should not be granted a neighbouring right for their economic performance (such a right is currently under debate in Germany).above all interested in disseminating their works worldwide in order to earn as much reputation as possible.

a new set of rights dealing with the protection of digital rights management systems and technological measures was introduced which is intended to regulate. an increase in electronic uses of copyrightable material and the ease of copying and dissemination associated with it. in itself. scientific and organised works are in fact protectable. First. This. The basic and commonly shared opinion suggests that neither ideas embodied in a work nor information “as such” is protectable under any type of Intellectual Property right. but the same principle applies to electronic databases. according to wisdom currently prevailing in the European Commission. The most important changes copyright has faced occur on two levels. necessitates harmonisation under Article 95 EC as it is conceived as a potential barrier to trade. naturally. First. A patent . copyright is undergoing the perhaps most drastic changes of its history. the final two of which – the directive on the legal protection of databases and the directive on copyright in the information society – are most relevant to the workshop topic. European harmonisation has been introduced by way of directives. However. again. reflect a personal intellectual creation. access to copyrighted works. It needs to be pointed out. The UK system is much more based upon investment than personality. jurisdictions are still relying on the catalogue of works under Article 2 of the Berne Convention. is a basic distinction between copyright and industrial property rights. software and compilations of research results. that the exact demarcation lines are difficult to detect. This is particularly difficult to reconcile with the question as to when. The second reason has a political dimension: in the European Union. manifold. the traditional set of economic rights is expanded. These changes can briefly be summarised as a turn from a bundle of certain economic right towards an access or use right. One reason is. in particular. Hence. it should be noted that the threshold for determining copyright protection has been left untouched. in particular with respect to the thematic scope of this workshop. The reason why this is the case is. Copyright gives a bundle of rights rather than a use right. two distinct copyright systems exist: according to the author’s rights systems. In that sense. deal with these changes. though. The commonly used example is a telephone directory. most notably patents. Despite a rise in informational works – works which must reflect a personal intellectual creation by reason of an original selection or arrangement of contents – neither has the European Union achieved any formulation nor has any attempt been made to harmonise the threshold for copyright protection. international norm setting was achieved under the 1996 WIPO Copyright treaties dealing with the conceived dangers of the internet. The second principle. In short. the taking and re-compilation of existing research and scientific does not amount to a reproduction if what has been taken does not. The reason is that (1) information is discovered and not created and (2) that ideas have no form by virtue of which these may be protected. This is dangerous as it comes potentially close to embracing access to and use of information and data. Secondly. The analysis will.(2005) 2:2 SCRIPT-ed Guido Westkamp: 170 The following remarks are intended to present a brief introduction into those national and international norms and provisions which currently shape copyright law. stemming from a more general notion of Intellectual Property theory. in turn. Finally. where appropriate. protection is afforded to the personal intellectual creation embodied in a work.

i. broadcasting. these acts are usually. apply without such distinction. can at least implicitly be derived from the interplay of acts of communication typically reaching a vast audience and the impact of this on the ordinary meaning of the public: a large group of consumers sharing no ties. but for the most extraordinary purposes. The list of non-physical rights normally found refers to certain technologies which will reach the public. Turning to copyright once more. Most copyright acts include certain categories of these rights such as uses on television. copyright did not prevent access to such information. protection is granted for the expression of a work. the basic distinction under which copyright does not protect information contained in a work. The acts which would infringe copyright include the communication of a work to the public as well the making available (such as on an internet site). though not explicitly expressed in most copyright statutes. Traditionally. must reach the public.e. For the purpose of copyright. which in turn was defined as denoting a commercial activity. There are serious concerns about such right. The right will. the most striking feature is that any communication. Ideas and Information “as such” are not subject to such monopoly.(2005) 2:2 SCRIPT-ed 171 gives a use right but only under commercial circumstances. These concerns have been met by various international and European instruments. it appears to give no regard to the issue of information access. For the scientific community. the exclusion of information is reflected in the basic distinction between physical and non-physical rights. Whereas a strict regime may well protect works of high authorship at an acceptable standard. Secondly. theatre (or other) performances etc. and for this good reason patent law demands a process of filing and eventual analysis as to novelty. As noted. in particular: • The introduction of a general right of communication to the public under the 1996 WIPO World Copyright Treaty for Authors and under the 1996 WIPO World Performers and Phonogram Producers Protection Treaty (Articles 8 and 6 respectively). This highlights. again. however. the quality of a work is normally the most decisive factor. The introduction of a much more general right of communication will now place the emphasis for judicial interpretation on exactly such broad terms. Once a copy has been placed on the market. and the term therefore does not require any elaborate attempts of definition. the judicial treatment of the so called idea/expression dichotomy did not cause any intricacies since the works in question were already published. This . Such monopoly only exists. In line with the general copyright principles. in patent law. This right is understood to complement the non-physical rights in respect of online transmission in closed or open networks. traditional copyright draws a distinction between physical and non-physical rights. in particular in areas such as natural sciences. the author is not permitted to exert any further control over its destination. it may be resold without any authorisation. Exploiters of cultural works have detected a danger to the commercial viability of their businesses and pledge for stricter protection. by whatever means. it becomes quite impossible to determine who the public will be. The principle therefore enabled a public domain to be built. This notion. Physical rights are the right to reproduce the work in a physical form and the right to (physically) distribute those copies. In terms of non physical rights. This has changed. always intended to reach a public. engineering and life sciences. for a much more limited period of time.

(2005) 2:2 SCRIPT-ed 172 means that the commercial factor. Moreover. as it does not result in any tangible copy which can be re-utilised. at present. This refers to acts of technically necessary copies which occur once information is transmitted through electronic networks. This right subsists alongside the communication right. a new line of thought developing which aims to assess Intellectual Property and its interface with information and communication freedom under constitutional law. introduced a defence based upon either a lawful use of for the benefit of an intermediate service provider who will not be liable for copyright infringement. the right is dangerously close to monopolising private (one to one) communications and could even stretch to email communications. this is likely to cause a development toward an access right. As a use right. the combined effect is quite serous. it remains immaterial whether the act of access (or providing an opportunity to access for third parties by uploading information) is intended to make use of the work or of the information contained therein. The directive has. and the right of communication to the public will avail owners of copyrighted materials with a general use right. There is nothing of substance as yet. database systems) was subjected to scrutiny under competition principles. An additional step towards a strict copyright management system is the introduction of provisions for the protection of both Digital Rights Management Systems and Technological Measures protecting copyrighted works. most limitations with regard to the scientific or research use are withering away. Here. They have no value as such. • • • . • The European Union has introduced a general right of transient copying under the 2001 directive on Copyright and Related Rights in the Information Society. For providers of scientific material. Both the reproduction right. This further makes the situation for the free dissemination of scientific material complicated. Therefore. This is in contrast to traditional copyright which does not imply any right as to the communication of a work. liability will be incurred if the transient copy has economic significance. In effect. access to copyright works (indeed. Thirdly. contrary to all principles of tort law. Most notably. and here the most important current disputes surrounds the character of such Intellectual Property rights as an “essential facility”. to ease the effects of this broad understanding. temporary or transient copies effect a faster transmission. The directive contains a list of optional limitation member states may introduce or uphold but seeks no general balancing test for certain private copies or academic purpose in relation to digital copying. It remains entirely unclear whether a transmit copy has such value. The final step in legal development is likely to be a shift from traditional copyright principles towards an assessment under different concepts. as understood in the sense the directive suggests. there is. no limitation will apply. As yet. However. Again. which was at least notionally underlying aspects such as broadcasting becomes inapplicable. it remains dubious whether the access to information only will qualify as copyright infringement or whether such act would. amount to an infringement by simply committing an act to circumvent such measures or manipulate rights management systems.

In addition to publishing in traditional journals. As to online delivery. It should. It still is unclear which law will apply to the act of downloading a copyrighted work. regardless of their physical location. Electronic publishing can always address this second issue. The most important issue here is whether a new levy system should be introduced which would then facilitate the licensing of university libraries to disseminate academic materials. Research journal publishers have increased their prices by an average of 10% per annum over the last decade. From a legal point of view. academics can upload their papers in open access digital archives. Firstly. case by case judicial analysis. especially since university libraries quite often also double as student microlabs. Library budgets in the UK however grew if at all only in line of inflation. the purpose of the use. The conventional concepts – either the law of origin or the law of the country for which protection is sought – are both fraught with difficulties in practical application. Finally and more ambitiously there are projects such as the Budapest Open Access Initiative funded by the Soros Foundation Open Society Institute. On the next higher level. There is a great divide globally and on a European level as to the scope of limitations necessary and acceptable. Burkhard Schäfer: The traditional academic library faces a crisis. they too face a problem. this can simply be their own or their departmental website. the introduction of the Research assessment exercise in the UK has dramatically increased the pressure on academics to publish. This could either be achieved on the level of economic rights. This could systematically also be achieved by introducing a general limitation under a wider fair use concept.(2005) 2:2 SCRIPT-ed 173 Apart from these aspects. The authors would. In its simplest form. While arts. Its protocol allows all databases which comply with it to function as if they were part of one central server. In the same time. however. Electronic publishing offers however even more radical solutions to the problems facing the academic archive. like to state his personal opinion that the complexities of current copyright law need to be addresses so as to facilitate a more flexible. humanities and law have seen less dramatic rises in the prices for journals. Even small institution-based servers can in this way generate large aggregates of research . but this will remains subject to a license agreement. even if the electronic journals should remain as expensive as their paper counterparts. these archives can be located on the level of universities. the overall social/commercial impact on the use of the work. the online delivery of copyrighted material needs to be viewed in light of the limitation which had been granted to the area of academic research. there is the potential for institutional self-archiving. be noted that a voluntary license – for the sheer fact of publication – can always be asked for. it should also be noted that in relation to entirely open system access and download facilities will also have an impact on foreign users. it therefore appears difficult to advise in respect of the FIGARO project on any one aspect. by reinserting a general test based on aspects such as the character of the work. The market reacted by an equally dramatic increase in the number of journals. and more expensive journals. of course. Storage space is increasingly at a premium. Edinburgh and Glasgow Schools of Law are at present negotiating for instance the possibility that hard copies of journals are kept in only one of the two libraries for those journals that both subscribe to electronically. Libraries therefore have to buy ever more.

Researchers can self-archive their papers and don’t need to upload the same paper on several smaller discipline-based archives. One function of traditional publishers is to protect their authors from litigation. Learned societies in particular can act as competitors to established publishers. but individuals and institutions are offered additional services (like for instance the hyperlinking facility discussed above. if for instance they are linked to a derogatory comment of their publication? If a free paper or archive changes ownership and becomes commercial. under which conditions should an author have a right to withdraw an uploaded paper? Imagine a lecturer who uses an archives facility to hyperlink papers needed for a course. a technology already used commercially in the Juris system. Sites which meet the format requirements can register to become OAI-complaint and have their records joined to an international federation of repositories. Does this create potential violations of the moral rights of the authors. from involving students early on in research to more reader friendly writing by academics. While student edited journals are the norm in US law schools. OAI promotes the adoption of their protocol by repositories of all kinds.(2005) 2:2 SCRIPT-ed 174 literature. proof reading and general administration. Think for example of a research article that questions the efficacy of a commercial drug. Self-archiving faces very obvious copyright issues not encountered by free to air journals which publish original work. Apart from the potential to reduce costs. the joint information systems committee JISC supports institutional self-archiving. Edinburgh University is one of the leading partners in a consortium to create and promote such archives (The Nottingham University led SHERPA project) Since electronic publishing has low initial costs (compared with paper publishing) “free-to-air” journals form a complementary strategy to reduce costs. should there be a legally recognised “right” of authors to upload their papers? Conversely. Unlike self-archiving. and who makes the so created customised archive available to his students. the electronic law journal project at Warwick University offers two free to air law journals. should authors have the right to withdraw their papers? . for instance through the possibility to allow users to customise hyperlinks between papers. It also allows to tap into this huge underused resource (in Europe). they are the exception in the UK. students. Will an underfunded open source archive/journal be as robust as a commercial publisher in resisting threats of libel litigation? Will it simple remove “dangerous” papers? To the extend that OAI establishes a successful monopoly. Only this creates the necessary mass to challenge the main journals. copy-editing. They do however face potentially higher costs for peer reviewing. firstly by offering free or low costs journals. In law. secondly by creating market pressure for established publishers to reduce their costs too. Possibilities to meet these costs range from “author pays” models to sponsorship to a system of “value added” publishing were the core texts remain free of charge. Other legal problems are less obvious. customising of papers etc). In the UK. Edinburgh University is presently developing a student edited webjournal on IT and IP law. These different approaches to technology facilitated non-traditional publishing create very different legal problems. the Web Journal of Contemporary Legal Issues is another examples of this new breed of publications. they have pedagogical advantages. In the near future. open access journals do not face the problem to negotiate copyright release for articles published already in paper journals. technologies will be available that can be used to transform these archives from cheap alternatives to paper journals to superior competitors.

technology (automatic electronic registration of “hits” on websites). what sort of artist should they be? Are we more similar to a sculptor. Because publishers guarantee the quality of the academic end product. Is it really copyright protection that we need. or a mixture of these measures? In what follows. If there are problems (for instance regarding the standing of online journals. they are not only entitled to but depended upon high charges for their journals. the sculpture? Or are we more like performance artists. This obviously has implications for the debate on intellectual property. that it is important to ensure the quality of this article. inviting others (academics and the general public) to participate as spectators in our work. The Stockholm syndrome is the psychological theory that hostages tend to fall in love with their captors and that they develop emotional bonds with them. One of the main reasons given by academic publishers for their high charges is the service that they provide. academics and academic products are at present more similar to sculptors. books. not as buyers of it. The main component of this service is the system of peer review. Each aspect of this equation influences every other one. as they will be ordered in addition to existing high impact journals. we “produce” finished objects (articles. Cheap as opposed to free online journals for instance might initially increase costs to libraries. not epistemological reasons. e. I think that what academia is suffering from collectively. arguing for more radically new forms of publishing which technology makes possible. is a sort of collective Stockholm Syndrome. or poet and novelist. I want to start with the first of them. at least initially. What we should be doing is much more similar to performance artist. or is there some leverage to be gained from the debate on the protection of rights similar to copyright. or the problem of citation of online archives) are solutions best developed in law (a moral right to be quoted). over the past 200 years at least.g. who produce a less tangible product? My argument is that for all the wrong reasons. often in unpredictable ways. Can institutions/publishers/governments financially support these loss making journals. to someone who invests a lot of hard work to produce a physically touchable end-product.(2005) 2:2 SCRIPT-ed 175 The main argument of my talk is hence the interaction of what is technologically possible. I think that this is the most problematic “Red Herring” which has had a very negative influence on the entire debate. I think that is very much what we did as academics. I think. in a change of these reward structures. We invented reasons why peer review is a good . Both presuppositions are false. and ask somewhat provocatively: what sort of “artists” are academics. or does this raise unfair competition issues? Universities and Ministries of education have established reward structures. like singers or actors. often based on instruments like citation indices. The precise nature of these instruments will determine the interest the author has in his work. institutionally required. I will explore the interplay of these different issues. so their argument. This claim has two presuppositions: Firstly that the main result of academic work is an object. the issue of peer review and quality control. inventions). and we fell in love with our captor. Secondly. and which pre-empts many of the pertinent legal issues (but raises new ones) The starting point for my own analysis is going to be where our discussion yesterday ended. We had to have peer review for mainly economic reasons. for instance the concept of neighbouring rights? The issue of quality control is directly at the heart of this matter. an article. legally permissible and economically or academically desirable.

where the worst of academic output was deselected at source. Alan Sokal and Jean Bricmont started the onslaught on the peer review system when their spoof article “transgressing the boundaries . it is not feasible to print every idea.(2005) 2:2 SCRIPT-ed 176 thing in itself. Consequently. There are very few papers or results that are enhanced by the process of peer review. where the expensive printing machinery is located. if we put our prejudices and comforting narratives aside for a moment. scientists could console themselves with the thought that this problem was unique for a very specific branch of . Since it took great effort and considerable investment in time to produce a single copy. All this is expensive. Peer review is not different in that respect from any other person commenting on a first draft of a paper. Now we have the internet. Firstly. sometimes one might have a good reviewer whose comments are genuinely helpful. But one can equally get helpful comments from colleagues or from people met at a conference. and that is not the same thing at all. and with that peer review loses its raison d'être.towards a transformative hermeneutic of quantum physics” was published in the peer reviewed “Social text”. This guaranteed quality due to the high standing of the author. But that is really only the same survival strategy hostages take towards their captor. the only book frequently copies was the bible. In an environment where the production of tangible forms of knowledge and its dissemination is expensive and the channels of communication are restricted. This adjustment between costs of production and quality control is by no means unique to our time. It produces or excludes research that is too bad to be published. thought or inspiration academics might have. It doesn’t increase or improve the quality. For all these reasons. Granted. It makes life so much easier if we can reinvent ourselves as the willing participants in a totally pointless process rather than its victims. To the extend however that we can reduce these costs. Now. It is said to enhance or ensure quality. it was crucial not to waste it on works of insufficient quality. we need a system. At a time when books where hand written. peer review is failing even measured against its own standards. exclusion of low quality material becomes less important. it only deselect pieces of possibly low quality. For some time. Prior to the invention of the printing press. You have to cut down trees and transport them to some other place where they are manufactured into paper. there are other issues involved here. Once that distinction is made we realize what the purpose of peer review really is. peer review and negative quality control were even more important than they are today. and as a result you have heavy books that need to be transported yet again and finally stored. we can produce output without very high costs. And with that we can stop to pretend that peer review is actually enhancing academic research. There is no any longer the economic pressure to deselect a low quality material. Traditional printing is expensive. Peer review does not enhance quality. some form of peer review is indeed necessary. Why is peer review a bad thing? Normally it is seen as the hallmark of academic publishing or academic activity. (though arguably. It is also costly on the libraries or the intended reader who has to find storage space Because of these costs. and then you have to transport the paper to somewhere else again. What peer review does is not primarily to enhance the quality of academic research. or rather we needed a system in the past. there wasn’t any peer review). Not only that. we can realise immediately that this is untenable. copies had to be hand-made. and why it is absolutely needed for quality enhancement.

The only way I can make these public is by a small footnote. it puts an epistemological important part of academic discourse behind closed doors. but little guidance on how the author came from an initial research question to . One is increasing specialisation. are often totally inappropriate for teaching proposes. generally the idea that academics produce products. Would it not be much better to publish both of them together ? The internet makes this possible. the high costs of in particular empirical research make replication of results increasingly difficult. Not only is peer review inefficient. Not only for the factual mistakes mentioned above. it re-enforces conservative tendencies within established scientific paradigmata.more likely than not. if a foreign country is in the possession of dangerous weapons. to know that someone considers this likely is not enough. this process is considerably more efficient. They are much less forthcoming in financing the testing and evaluation of old results. There are two reasons for the demise of peer review as a useful tool for quality control. last year’s scandal at the MIT paid put to this. and which are “junk science”. and open. Funding councils might be prepared to finance groundbreaking new research. I thank my anonymous referee for this brilliant comment. Sometimes the reviewers actually have good ideas. the US may be moving soon to a mandatory publication of failed pharmaceutical experiments – information typically filtered out through the peer review process of scientific publishing.g. lawyers (evidence) and other members of the public need to know which scientific theories they can trust. Indeed. In a peer review system traditional knowledge is more easily accepted. Policy maker. It is increasingly difficult to find competent and independent peers . Secondly. To judge the validity of a statement. To determine e. First. because they don’t invite the reader into the thought process which produced the paper. as discussed extensively in the literature following Kuhn’s “Structure of Scientific Revolutions”. With that. it is positively harmful. However. if they have ulterior motives or a stake in the outcome (disgruntled dissidents hoping for crime change etc. peer review in which readers annotate constantly the published text.) The same holds true for quality statements about academic work. To determine the credibility of such a statement. very much like the process at Amazon for instance. the leading articles in their field. the critical process of debate is in the open. The very best articles. An academic tipped for the Nobel price on the strength of his long list of publications in the leading peer-reviewed journals was finally found out as a fraudster who had deliberately (and not even in a particularly sophisticated way) used falsified data. Internet publishing allows us in principle a process of permanent. there is no more reason to keep the process of review behind closed doors.(2005) 2:2 SCRIPT-ed 177 contemporary philosophy. The number of reviewers is increased. as any professional spy would tell you. Traditional peer review – or. high quality finished products also meant that academic research and teaching became artificially separated. for everyone to read directly on line. and the reviewing process is not completed after an arbitrarily set (and rather short) period of time. Epistemologically speaking. You can think of an article which permanently accumulates comments. However. newcomers and new ideas have difficulties in passing through the border police of academic quality control. Second. it is crucial to know “sources and methods”. where people can comment and write reviews of books. even for that function the anonymous peer review fails. The one possible comeback argument in favour of peer review is that it creates trust in the wider public. They give us authoritative end results. they are going to be co-authors or direct competitors. it is crucial to know who these people are.

Based on what I said before. which also increases the chance to find out cheaters. Blogging is another development that supports the same idea. Blogs are an even better example of the process I indicated above: away from the static endproduct to participating in a performance by the author. That is why I think peer review is a “Red Herring”. as a result. Why? Because the way we produce our knowledge does not invite the reader to participate in the process that created that specific research output. This is already a big concession. In addition to the benefits mentioned in an earlier contribution. As teachers. allow a continuous process of writing. students in my master course used bloggs by influential researchers as source for their essays. We can publish all our papers as they emerge. Had he been forced to demonstrate how he came to his results by a step to step process that the reader could follow. We have very little production costs as papers are sent to referees by e-mail and the reviewers work for free We can do that really on a shoestring budget. epistemologically and pedagogically. A draft paper is put online. and the most reliable safeguard against plagiarism. the most famous being wikipedia. Last year. publishing as continuous annotating and commenting of articles. I think that is where we as academics are failing. and a process model of academic publishing allows us just that. for the first time. we all know this simple truth: Frequent meetings with our students and explanations of their work as it progresses are the best way to ensure that they are indeed the authors of the dissertations that they submit. and the authority bestowed by the reviewer’s seal of approval discourages the sort of critical attitude that we want to instil in our students. But the format which we are using is still relatively close to that of the traditional journal. and where issues are particularly controversial. it would have become evident at a much earlier stage that something wrong was going on. BILETA. One was already mentioned in a different context: Wikis. other people can post their . extend the encyclopaedia by a linked discussion board. the ability for all users to amend anytime an earlier entry. One of these more radical forms I have already indicated. Two emerging technologies on the internet already address this issue. is involved with a journal for internet law and technology which is free-to-air. Internet publishing and its low marginal costs allow us to move away from producing as output objects (papers and articles) to a process version of academic research which is much more appropriate to what we are doing. and now we have the means of academic production to get rid of it. Peer review was economically necessary in the past. That is the first argument I want to make. criticism and rewriting of information. We are slightly more flexible in having a category for work in progress. We give our students finished results. My second argument is that at the moment. allows to make optimal use of “the wisdom of the crowds” and brings the process of analytical criticism into the open.(2005) 2:2 SCRIPT-ed 178 just that result. This colleague at the MIT miraculously was able to publish 200 papers a year. but it always was epistemologically counterproductive. will make things worse before they get better. a true image of the process that is scientific discovery. The internet allows us finally to produce and disseminate knowledge in a way that is appropriate for what we are actually doing. we also observe a failure of the academic imagination and we also. We should think much more radically of new ways to publish on the internet. My own association. What do I mean by that? There are examples of free to air academic journals. At the centre is still the refereed article. We should expose our colleagues to the same scrutiny that we consider appropriate for our research students. and they sit there and marble how on earth anyone could come up with such an answer.

but it is of course not going to happen. Firsy. This market failure that allowed academic publishers to increase on average annually journal prices by 10% will get worse now that traditional publishers move into electronic publishing. online or offline and contributed to the undesirable present situation in which tax funded academic research is made available for free to publishers who can than reap the financial benefits of the initial public investment. we might be overenthusiastic in recommending journals for the library. and the savings we made in recommending journals for the library translated into salary increases. I think we have to be more specific than that and exclude at least one form of academic publishing. it is our university. excessive journal prices are at least partly due to market failure in the field of academic publishing. as opposed to the product model. provided that the “alternative payment” in terms of recognition. However. I now want to draw some very tentative conclusions for the legal issues involved in the alternative form of publishing. the people who decide which journals to buy are not the people who actually have to invest money. This would mean in effect to lose the investment made over the past ten years. Harvard University’s “open law” project produces compilations of comments by experts on draft submissions to courts. or better quality) competitor. things are going to get worse rather than better. Imagine that you subscribe for ten years to a previously rather good journal. competition law is going to provide academics and universities with some leverage. this might result in rather interesting competition in the field of academic publishing. for some time to come at least. In due time. a better or cheaper journal comes along. There are legally issues involved here. and if IP is part of the problem or of the solution. IP regimes based on the notion of commercial exploitation in the Anglo-American tradition are therefore least suitable for academic publishing. rather than acquire property of the journal itself. with a small number of publishers dominating the market. I do not need to exclude them from using my work. If library budgets and the salary budgets came from the same pot. and it is not us who are paying for it. is the LINUX software system: multiple users constantly upgrading and improving a system that is never really completed. This potentially makes it much more difficult to switch a journal against a (better priced. Academics aren’t primarily motivated by the commercialisation of their research and are by and large willing to make it available for free. I think that potentially. The best known example. This is a huge disincentive to switch. To swap for the better journal under the new system might lose the access to the past backup copies. and it might be possible to argue that there should be either a legal or contractual obligation to make backup copies available if subscription was paid for them at the time they were originally published. This is the process model of academic work. We tell our librarians that we really need this new important legal journal. which are also only available on the publisher’s website. Since I do not compete with fellow academics for a scarce resource “knowledge”. As a result. there isn’t enough competition between publishers. the publication of teaching materials. Second. at least as basis for an analogy. as members of our teaching institutions. First. Source of concern is especially the idea that we now need licences to access a website. We do compete however. citation and peer-esteem can be efficiently collected. for the scarce resource “good” (or . The author then re-writes the paper and updates it constantly. or the established journal loses its academic reputation together with its editor.(2005) 2:2 SCRIPT-ed 179 comments. I want to re-iterate a point made by most of the contributors. Similar forms of dynamic collaboration do already take place on the internet in contexts other than academic publishing. In particular. As we have seen yesterday. However.

Under the new system. Some of the problematic consequences of this competition can already be seen. IP rights creating limited monopolies) I already indicated above that the conceptual vocabulary of “neighbouring rights” might be useful to develop a theory of copyright protection for “process publication”. peer reviewers are either financially rewarded or work entirely for free. Some institutions. users can be asked to “grade” websites that they visit. Both models can be refined. As described above.g. the number of downloads made of an article.(2005) 2:2 SCRIPT-ed 180 “fee paying”) student . However. To work to its full potential. e. potentially limiting in the process their duty to disseminate knowledge. the IP rights of the commentators are in this model of equal importance. research. especially if papers are published on the website of the author or his institution. it is possible to use much more direct measurements of the impact of internet based publications that do not rely on the observation of IP law. including my own. they decrease the value of paid-for online education offered by competitors. Partly. The IP law protection against “denigrating treatment” could allow an author to delete or otherwise suppress comments that are seen as too negative. the new form of publication combines academic papers with publication of annotations and comments. Repositories managed by third parties. this new form of publishing also creates new problems.at least in the Anglo-American world. In addition to these citation indices. It is for instance possible to measure the hits that a website receives. their work would be of the same category as any other academic writing and should be recognised as such. A low quality article might get a substantial number of hits by users who leave the website more or less immediately when they realise that the work in question is irrelevant. . which is important if the predominant reward system is based on the ubiquitous citations indices. Nor is it necessarily the case that a “commons” in academic writing is equally benign if it comes to teaching material. IP protection of academic writing becomes less important if it becomes physically impossible for a user to access an article without leaving behind a trace that measures the impact or importance of this article. this is an issue for the reward systems employed by several higher education regimes. This can be seen as a new application of Lessing’s idea of “law as code”. Discussion of IP issues in academic publishing focus so far on the protection of the author of the original paper. might be a way forward. this is more a competition law problem than an IP issue (though of course. it is necessary to recognise the work invested in commenting on other peoples’ writing in the form described. Universities with sufficient endowment income might decide that while it is unnecessarily burdensome for them to move into online education themselves. “Moral rights” in the continental European tradition ensure the right to have one’s work attributed. However. conceptually they are the same. To address this distortion. make some teaching materials only available to registered students. and the contractual permission to accept annotations and comments. less established institutions who need this source of income might use it to compete eventually also in other fields. By making their own material available for free. Again. The move by MIT to make all their teaching material freely available online can also be interpreted as a rather sinister attempt to prevent competition. At present. Technological solutions can further contribute to an adequate protection of the author. The system of “moral rights “ however can potentially create conflicts between the commentator and the original author. or the links to a website (already used for ranking purposes in search engines like google).

On the other hand. As with most of the issues discussed here. The question of anonymity and pseudonymity is again a general problem for the internet. Obviously. It is a quality control problem in the sort of academic-work-as-process model which I am advocating here. . Readers are invited to write reviews on books. The third party repository would not so much guarantee the quality of the content of academic writing. This is where some equivalent to traditional publishers might become necessary. Again. commentators could artificially increase their own impact rating by posting low quality comments on high impact/high quality papers. in particular if there should be a right to be listed on search engines. search engines become so important under this system that they could seriously distort the quality measurements discussed above. Obviously. Technically. Most of the pertinent legal issues have already been discussed in other research on e-commerce. if intentionally misleading ranking should carry legal penalties. technological rather than legal solutions might mitigate this problem Another problem could be labelled: “quality control. taking up the theme of FIGARO. collects them in commercial databases with some “added value” and claim ownership for it. “harvest” articles from free websites. the issue becomes how we can make sure that the person who claims to have written an article online is really the person he claims to be. with discussions ranging from internet democracy issues to the manipulation of share values in internet discussion forums. how can we prevent someone appropriating the results? That is where IP is challenged. authors might feel tempted to post positive comments to their own papers. How can you make sure that people are who they claim to be? The final problem is how we can avoid the tragedy of the commons repeating itself? If most academic writing is made available for free online. if financial incentives to rank one’s site more highly (as in the case of google) are akin to fraud or conversely. I am not totally sure how far the Open Source software licence can really be translated or extended by analogy to academic publishing. In an extreme scenario. one consequence of my proposals will be the proliferation of low quality articles. the empire strikes back”. it would be possible under such a scenario that third parties can only access commercial collections (potentially protected by IP law on databases and similar collections) of academic articles. someone like Bill Gates (who is in the process of copyrighting electronic replications of all major works of art) could come around. this system is open to abuse if authors write positive reviews of their own work under a pseudonym. More generally. It would be partly an empirical question. while free sources are simply not listed any longer. this is indeed feasible provided again that owners of the main search engines also have a property interest in the collections of academic writing. In a system where peer review is replaced by continuous commenting and annotating. the solution once again might be in the field of competition law rather than IP law as traditionally understood.(2005) 2:2 SCRIPT-ed 181 Secondly. but the identity of the author. It might be a problem to use or to optimize the search engines to get only those articles that are of relevance to a potential user. Theoretically. A potentially even bigger problem is to authenticate the authors online. and their reviews influence buying behaviour. Amazon can again serve as an example. partly a legal question of how we can ensure that things that have been produced for free by the academic community remain free. I think the way you might use Linux differs from the way people might misappropriate academic results in publishing. if the use of misleading meta tags by authors to increase the popularity of their site should carry similar punishment.

In French law we are rather speaking about the right of the author. After this introduction.” What is then its legal regime? I won’t lay emphasise on this debated issue anymore but I would like to strengthen an idea related to one of the contributions presented yesterday. I wanted to say that it is a pleasure to be here and talk about copyright in a country where printing has its origins. most of the time. like an exception to the rights granted to the author of a copyrightable work. There. And why I am talking about the author's right? Because according to French law. I don’t know if the question of open source could be a good solution for academic research. “droit d'auteur”. European or national ones. the author may not prohibit copies or reproductions reserved strictly for the private use of the copier and not intended for collective use with the exception of copies of works of art to be used for purposes identical with those for which the original work was created and copies of software. So starting with that. but there are possibilities to apply it properly. he is the one having all the power over his creation. I am wondering if there really is a right to access the information. In Europe we tend to regulate everything.(2005) 2:2 SCRIPT-ed Elena Bordeanu: 182 Firstly. So. I also criticize your opinion as I am thinking of self-regulation. All this is in connection with the freedom to access information. the internet has its origin in the self-regulation idea. there is the experience of selfregulation which is used. We are all aware of the fact that new technologies increase the transfer . The Intellectual Property French code says that “once a work has been disclosed. And once again. And talking about self-regulation I am referring to the Common Law perspective which is very different from the European one. and in France appear in the declaration of 1789. to have rules for every practical issue. who is the owner. for example. the right to copy. As I said. The right to make private copies appeared in legal texts. teachers wanting to evaluate a student’s work and having reasonable doubts over the origin of such a work. I totally agree with your opinion on the quality control regarding the difficulty to find out the origin of information on the Internet. Secondly. we will see that there is a big difference between the two approaches. From this point of view can we talk about a real right to copy? Is there a right that is conferred to the users to make copies? And I am only referring to private copies here. Let me bring to you an example from my teaching experience. Indeed. I am talking about the idea of copyright related to the possibilities of reproducing the work. Opposite to this. do searches on the Internet with keywords for finding any article that would treat the same subject or that can be connected to the idea expressed by the student. He is the person having all the rights. We are talking about copy. which is a part of the freedom of expression. other than backup copies. let me carry on with my presentation. It is about the relation between DRM and this exception (could this be extended to all the other exceptions mentioned in the European Directive?). for arbitration. Even the concept of copyright refers to the American approach. Digital Rights Management could be a valid answer to the question of implementing copyright law. the author is all important. Some authors found the source of such a right in the European Convention of Human Rights – these rights are well known.

that the buyer would not have bought or would only have given a lesser price for it had he known of them”. And in the second case the person used contract law to prove that the product bought had latent defects. to find out who the real author of a work is.and I would also like to refer to the four scenarios presented yesterday by one of the participants. which actually implies making copies. In particular. And if we are talking about technology. let me present you two other recent French cases (the first one of June 2003. In this case it was technology that brought the answer. I am now bringing this to you in order to underline the issue of proof on the internet. The status is . It is important to look at the legal regimes invoked in the two cases. judges gave an injunction to the distributor to use the following sentence on the products: “Attention. I am referring to a French case where a text belonging to the public domain was published on two websites. Thus. There is an article in the French civil code saying that “a seller is bound to a warranty on account of the later defects of the thing. this product can't be read on any machine or car radio. In the first one the person chose to act on a consumer law basis and to refer to a failure in delivering the correct information on the qualities of the product. the second of September 2003). The parties managed to show that it was possible to compare two texts on the internet. The simple ones could accompany the legal framework. for example on their car radios. we need to have access to other resources. we need to communicate our ideas from a scientific point of view.” Therefore the use of different anticopying devices won't be easy in practice. for example spaces between the words.(2005) 2:2 SCRIPT-ed 183 of information and make it difficult to track a copyrightable content. What happened in these cases? Two persons bought a CD with music and discovered later that it was impossible to listen to these CDs on other devices. In these cases. which is not adequate for the use for which it was intended or which so impair that use. Most of the users understand to preserve their right to copy. The persons finally complained against the distributor and the editor and they won. On the one hand. but also from a commercial point of view . And on the CDs it was only mentioned the followings: “this CD contains a technical device that limits the possibilities of copying it”. And related to this. And they will find more traditional legal ways for protecting it.I am referring to software. I would like to mention that not only the most advanced of them could be used for protection. not only the technical ones or those related to intellectual property. And it seems to me that such techniques could not only establish content monopolies. It was a question of infringement of the author rights of the first party by the second. it is common nowadays to buy a CD or some other goods on the internet and to discover that they can’t be used on another medium. For example. How can we prove that we have an original or a copy? This is an important question. But why is this relevant for this meeting? We should actually be debating over the academic research and copyright. Are we really concerned about this aspect of copyright? It seems to me that academic research has two sides. And on the other hand. if we can actually talk about a right to copy. And we have seen that there are some limits . the two ways of typing the text were compared and the result was that in the same cases we had the same errors. they could also contribute to the disappearing of any exception to this. we can find other ways of protecting information on the internet.

deploying or maintaining a DRM infrastructure. I would like to concentrate on personal data aspects. A question was raised yesterday about the difficulties to put a boundary to such a creation.(2005) 2:2 SCRIPT-ed 184 complex and is placed in the very heart of the debate on the relation between copyright and new techniques. the French universities make contracts with different organizations of authors for determining the conditions of use of the works protected by copyright. The French commissioner for personal data protection was requested to give his opinion about the use of such a device. Someone mentioned yesterday the control of academic content through subscriptions. But Digital Right Management does not come without a price. Of course. What exactly is more appropriate to such a case? As I already said. Another aspect I would like to focus on is the exception to the reproduction right mentioned in the Article 5. I would like to underline this. But what happened with the collective works or the works made for hire? It seems to me that traditional IP law. than this might translate into millions of dollars of otherwise unrealised revenues. that of using different techniques for tracking illegal content on the internet. And apparently this situation will continue because France isn't ready to grant such a right to universities or research laboratories. I would like to quote John Perry Barlow who was saying that protections that will be developed will rely far more on ethics and technologies than on law. I already mentioned some limits of digital rights management. of all the organisations representing the authors in order to reach a fair and balanced deal. Encryption will be the technical basis for most intellectual property protection. . 3rd paragraph of the European Directive. But I am wondering if law is really the answer to the problem. The French commissioner disapproved it in March 2001 because IP-addresses are considered personal data. who was saying that this option can not be considered because it is incompatible with international treaties as well as with European directives. And here I would like to refer to the cost of building. of course. I am referring to the use of copyrighted works for the sole purpose of illustration of teaching or scientific research. In practice. “letter et esprit de la directive”. But this implies a large amount of personal data and we all remember what happened with Microsoft Passport. Meanwhile the French government has created a working group of representatives of universities and. The main reason for mentioning it is that most of the teachers are using illegally all kind of information for their lectures. I could think of Open Source as an example. software called Web Control that was collecting IP-addresses on the internet. The third point of my presentation is rather a complex of different legal aspects. The fact of copying information and using it for your lecture is today illegal. In France there is a long debate over this issue. you all know that in France it is impossible to transfer the moral rights of the author. We should all admit that Digital Right Management could partially affect piracy. I would also like to underline another issue. Here I want to refer to a presentation done yesterday that was addressing contracts and the fact of transferring rights. Furthermore. If piracy could be decreased by just a few percentages points from using the digital right management. The emphasis lies once again on the author. is able to deal with such kind of situations. I want to quote a reply of the French Ministry of Culture from April 2003.

Federica Gioia: 1. The question is whether or not the benefits of DRM outweigh its costs. all that is required is that the laws on copyright and related rights be “adapted and supplemented to respond adequately to economic realities such as new forms of exploitation”. deploying DRM will result in fewer sales of legitimate content. 5 of the preamble to the “Info Directive” (Directive 2001/29/EC of 22 May 2001 on the harmonization of certain aspects of copyright and related rights in the information society) states that while “technological development has multiplied and diversified the vectors for creation. So. could this be considered a circumvention of the technical protection? Will all this be enough for protecting copyright? Can we actually think about the end of copyright law? I am confident that a balance will be drawn between the conflicting interests of the authors and of all the other users or organization interested in it. Thus. namely: i. It is widely accepted that the pursuit of academic research and the related academic freedom are among the interests protected by freedom of speech. Are we sure that the digital environment per se requires any adjustment of the traditional principles of intellectual property whatsoever? A number of factors seems to indicate a negative answer. the drafters of the UDHR were well aware of intellectual property: Article 27 provides. With specific regard to publishing. The agenda prompted participants to envisage “how a legal framework might be adjusted in order to preserve intellectual property rights of scientific authors”. Whereas no. that “Everyone has the right freely to participate in the cultural life of the community. Social and Cultural Rights). “no new concepts for the protection of intellectual property are needed”. . production and exploitation”. If I manage to do the forbidden action by simply using different software. In addition. it is interesting to have a look at the Universal Declaration of Human Rights (adopted and proclaimed by the General Assembly of the United Nations more than half a century ago. the DRM protected content is economically less valuable than unprotected content.. on the one hand. on the assumption that this is one of the most significant ways in which the fundamental right to expression is exercised. Furthermore. when the digital world was long to come: on 10 December 1948. to enjoy the arts and ii. I can forbid printing or copying such a document. The EU approach. I would like to question such an assumption. Article 19 UDHR provides that the right to freedom of opinion and expression includes the right “to seek. receive and impart information and ideas through any media and regardless of frontiers”. Similar provisions are to be found in the European Convention for the Protection of Human Rights and Fundamental Freedoms as well as in the International Covenant on Economic. it sounded as if it assumed that such adjustment is indeed necessary for the purposes of the digital environment.(2005) 2:2 SCRIPT-ed 185 Moreover. For example. Indeed. I would like to mention the issue of proprietary technology that does not allow us to read a certain file on several media. Acrobat gives us the possibility to draw some limits to the use of documents. As formulated. the last qualifications seem to encompass the very characteristics of the Internet.

and the economy”. editors’ liability and the like) shall be the same. irrespective of the paper or digital nature of the publication at stake (Italian Law 7 March 2001. by giving them reasonable control over the exploitation of those materials and allowing them to profit from them. science. As Beckerman-Rodau clarifies. All such objectives seemed to be already well enshrined in Article 27 UDHR. after some uncertainties and discrepancies had arisen in case law. particularly education. it still has to take care of the latter. While the EU has been most effective in ensuring protection of the former aspect. recognizing the importance of their contributions. which militates in favour of strong private property protection for intellectual property”. Against this background. technological advance useful in the dissemination of information”. as clearly mandated by Article 27 UDHR. no. with a view to ensuring the moral rights of the authors. including electronic journals. radio and television all represent vast improvements in the ability to disseminate information. the “migration of intellectual property to the Internet” only “exacerbated the problem of unauthorised copying and distribution of intellectual property. albeit the most effective. Not only is there definitely such a need. development of the mail system. make creation possible in the first place. with a view to preserving the revenues possibly arising in connection with the protected subject-matter. The need for all journals. Both approaches are needed: the Anglo-American. 2. is aimed at protecting authors? At least in the EU. When it comes to copyright protection in the EU legislative documents. inter alia Italy. 62. iv. telegraph. not influenced by the specific characteristics of the means in which it is embodied or through which it is exploited. subject only to the need to take into account any new factual scenario which might develop. to comply with the regulations on publishing can now be safely established. by investing money and dedicating financial resources. literary or artistic production of which he is the author”. have enacted legislation to the effect that any and all rules governing the publishing industry (including issues such as authorizations. research and access to information.(2005) 2:2 SCRIPT-ed 186 to share in scientific advancement and its benefits”. . For example. that “Everyone has the right to the protection of the moral and material interests resulting from any scientific. printing press. on the other hand.e. telephone. a glance at the historical perspective: it has been noted that “throughout history numerous innovations and advances have created more efficient systems to disseminate data. but there is no such thing as an alternative between the AngloAmerican as opposed to the continental-European approach. I will here forward a second provocative remark: are we sure that intellectual property and specifically copyright. “the fundamental guiding principles of copyright and related rights … remain constant whatever may be the technology of the day: giving incentives to creators to produce and disseminate new creative materials. and thereby ultimately benefiting society. The Internet is simply the most recent. article 1). some countries. I have no doubt as to the need for scientific authors to have their rights protected under intellectual property laws (I come thus to another point of the agenda). iii. the trend seems rather aimed at protecting those who. the continental. The foregoing elements seems to convey and support the idea of intellectual property law as being per se media-neutral. as currently conceived and shaped. As stated by WIPO. i. providing appropriate balance for the public interest. by promoting the development of culture.

possibly including advertising banners. type of user). provided however that the University “will not assert any claim to the ownership or copyright in … artistic works.a. the most obvious forms of exploitation of academic publishing are likely to fall within the scope of copyright exceptions. under US law. provided however that “even when intellectual property rights are held by the University. as well as from the one provided for “use for the sole purpose of illustration for teaching or scientific research. InfoDirective ). books. type of work and perspective forms of exploitation available. made or created … by persons employed by the University in the course of their employment”. On the other hand. plays. is not sufficient and needs to be supplemented by other forms of protection ensuring preservation of the economic viability. is indicated. quality high and maximising distribution of the information might require publishers to resort to alternative forms of financing (other than charging a fee for consultation of and access to the information as such.3. Most of the initiatives underpinning European copyright legislation seem to strengthen investors. or by archives. authorship seems on its way to evolve very much along the lines which led. it might also be argued that copyright protection. scores. legal or artistic writing and publishing. Columbia University vests the faculty with ownership of copyright whenever authors “make substantial use of financial or logistical support from the University beyond the level of common resources provided to faculty”. to enact rules vesting the employer with economic rights of patents for inventions developed within the context of employment relationships. economic growth and employment are far more frequently mentioned than individual creativity as objectives worth pursuing and achieving. if title to economic rights is to be vested with the party sustaining the financial burden underlying the scientific contribution and making it possible. This seems the approach already taken by a number of universities. as opposed to authors. as mandated recently by EU legislation: namely. articles. infrastructure providers. with a view to preserving adaptability and flexibility depending on the circumstances at stake (namely. which are not for direct or indirect economic or commercial advantage” (Article 5. The University of Oxford vests itself with rights’ ownership in respect of “all intellectual property … devised. type of relationship. In the context of scientific publishing. 3. whilst necessary. as long as the source. amount of support by the institution. including the author's name.(2005) 2:2 SCRIPT-ed 187 productivity. The assumption underlying the whole European copyright system is that an appropriate reward of those who sustain the financial burden of creativity is the condition upon which the survival of the latter relies. Accordingly. educational establishments or museums. Similarly. lyrics. unless this turns out to be impossible and to the extent justified by the non-commercial purpose to be achieved”( Article 5. in the field of patents and scientific research. it might be appropriate to draw distinctions among protected subject-matters based upon the kind and amount of financial and economic investment involved in their creation. sponsoring and the like).2. revenues from new digital media and other property should be shared among its creators”. as opposed to content-providers. research-oriented purposes have always be . It is common knowledge that scientific research relies more heavily and significantly on financing than literary. InfoDirective). apart from those specifically commissioned by the university”. As a matter of fact. those activities could benefit from the exception to the right of reproduction provided for “specific acts of reproduction made by publicly accessible libraries.c. various contractual schemes and devices might be developed. Pursuing the objectives of keeping prices low. or lectures. Once the principle is established. uses of protected materials for scientific.

the Info Directive and the Database Directive. provided not entailing likelihood of confusion and in accordance with honest business practices: see EU Directive and CTMR). as it is uncontroversial (Clark) that “an issue of a learned journal is… a database. . authors. teaching or scholarship). explaining the underlying relationships of the stakeholders involved (i. according to which “the academic community embraces the concepts of copyright and fair use and seeks a balance in the interests of owners and users in the digital environment. and ensuring preservation”.e. whose ultimate purpose has been identified in preserving the freedom of “important forms of communication that are the types of speech that the first Amendment seeks to protect from government intervention” (Beckerman-Rodau). It is uncontroversial that a collection of scientific works should qualify as a database within the meaning of the Directive. including “attaining the highest standards of quality.2 thereof. as opposed to commercial uses. I fail to see the “new” issue at stake. and will continue to be a matter of balance. maximising current and future access. a database is “a collection of independent works. We have already mentioned copyright. The academic community seems to be well aware of this: in June 2001. IPR law. a group of Dutch academics known as the Zwolle Group agreed to develop “a set of principles aimed at optimising access to scholarly information in all formats. It has been stated (Kamperman Sanders) that “in the face of expanding protection of intellectual property rights. and especially their faculties should manage copyright and its limitations and exceptions in a manner that assures the faculty access to and use of their own published works in their research and teaching”. data or other materials arranged in a systematic or methodical way and individually accessible by electronic or other means”. colleges. should fall outside the scope of the exclusive rights. whether issued in paper or electronic form”. universities and the general public) and providing a guide to good practice on copyright policies in universities”. research-oriented uses of protected materials. The question as to whether charges for scientific online publications should be understood as “hindrance to science as opposed to proper business” also should be addressed under the traditional principles governing intellectual property laws. this is also true for patents (experimental use) and even trademarks (use for descriptive purposes is allowed. They explicitly endorsed the so called “Tempe principles”. with a view to assisting such stakeholders “to achieve maximum access to scholarship without compromising quality or academic freedom and without denying aspects of costs and rewards involved” and on the assumption that the aforementioned stakeholders share common goals. The whole intellectual property system has long been familiar with the idea that scientific.(2005) 2:2 SCRIPT-ed 188 construed as an exception falling under the fair use clause (parody. under the heading “Balancing stakeholder interests in scholarship friendly copyright practices”. According to Article 5. Universities. This is apparent even from the UDHR (Article 27) and it has been long acknowledged. In December 2002 those statements were elaborated into a set of principles. criticism. Again. is.. In that respect. comment. epitomized by developments such as the WIPO treaties. the sui generis protection guaranteed to the maker of a database under the EU legislation (Directive 96/9/EC of 11 March 1996 on the legal protection of databases) seems particularly promising (and a possible response to worries arising in connection with “the situation of companies and especially online publishing houses that scientific authors work for”). librarians. 4. has always been. publishers. news reporting. freedom of access for academics and civilians alike is currently of grave concern”.

and more specifically. possibly coupled with technological measures of protection within the meaning of the InfoDirective. This provides the web site authors with a type of protection which very much resembles the “right in the typographical arrangement of published editions” granted to publishers under British law. Only. I would like to conclude by addressing the question as to whether Internet is “a source of danger to the rights of scientific authors” by striking a note of optimism. or at least sui generis protection under the EU Database Directive. Furthermore. Cornish’s view that Internet is rather “an exotic concoction of prospects”. However. but might as well be found in contractual mechanisms (including the phase of dispute resolution. once established that the web site falls within the scope of copyright protection. Initiatives such as the Zwolle principles witness the desire of the publishing community to nail down the respective needs and interests of the various constituencies and to shape flexible practices adequately reflecting and balancing such needs and interests. both publishing companies and authors could benefit from and take advantage of the new rights arising in connection with the Internet: web sites are now conceived as attracting copyright protection. by encouraging a wider recourse to Alternative Dispute Resolution mechanisms such as conciliation or arbitration). the various stakeholders should be aware that appropriate answers not necessarily lie in laws and regulations. . or that the relationship between copyright and technology is “uneasy” (Ginsburg). contractual law seems not to be sufficient to protect the author in case of non-authorized exploitation acts carried out by third parties. it also follows that the imitation of the general features of its appearance so as to entail a likelihood of confusion is prohibited as an act of unfair competition. authors of scientific works. but also definitely provides authors with enhanced visibility and wider audiences. Further issues which might arise in respect of the use of trademarks or other distinctive signs can be adequately addressed under the traditional laws on trademarks. I would like to start stressing the need of protecting authors. Under Italian Law. and instead subscribing to Prof. And even in such cases a specific regulation to strengthen the always weaker position of the author to preserve contractual balance is necessary. by means of Copyright law. Excluding the need for a substantive review of the traditional principles and rules underpinning the intellectual property system is not tantamount to saying that no awareness of the specific features of the digital environment is advisable or that no amendment of the current practice might be envisaged. provided such supplementary instruments comply with the relevant mandatory provisions (and without prejudice to the possible impact that such need for individualised. Brief. The foregoing remarks do not purport to suggest that addressing the issue of IP rights in the digital context is a waste of time. Internet is an opportunity as much as a danger: it may provide enhanced opportunities for easy copying. ad hoc contractual negotiations might have on the feasibility of collective management of rights). unfair competition and the like. Contractual law is effective by protecting author’s interests in relation to the person who is going to exploit the work. I would like to conclude by rejecting the commonplace statement that “digital technology …introduces new and unprecedented threats of piracy” (Jeanneret). Concepción Saiz-Garcia: First of all.(2005) 2:2 SCRIPT-ed 189 5. 6.

the content distributor will need at least an authorization to undertake the acts of exploitation that will be indispensable for the on-line exploitation of the works. Although many propose free access . Just a decade ago. this exclusion is mandatory for all member States. Nevertheless. without copyright protection authors would be completely unprotected against any action of the publishing agents or any other third party. In regard to patrimonial rights. Other reproductions such as transient or incidental and essential for the technological process whose sole purpose is to enable a transmission in the network between third parties by an intermediary. who are currently giving their copyrights to commercial publishers for free. 2 of the Bern Convention could be subsumed under that description for instance works of architecture. In FIGARO. to the publisher in return for a payment. It is true that an exclusive rights transfer. except the communication to the public right. it is a previous step of the works up-loading.. As far as the moral rights are concerned.. their exclusion would raise problems in determining which works under art. Art. . And at this point. don’t need to be authorized. And second. I would like to say that in my country.or returning science to the scientists . let us remember that one of the aims of this exclusive right is to encourage the creation of intellectual works. If necessary. but from the viewpoint of public interest of free access to culture. It has triggered a great change.to such works. Besides. that in FIGARO the author does not need to assign his copyright to the Front Office. regardless of the financial system of the publishing agent. free distribution is nowadays a reality. the license they give the end user implies the transfer of the reproduction right for private use. 5 of the Directive excludes them. And this is achieved through the two-fold nature of the faculties contained in copyright: patrimonial and moral. the author of scientific works continues being the exclusive rightholder of his work. I think that scientific works should not be excluded from Copyright system. Reproduction: it must be authorized because. that in the most cases was but a small percentage of the money obtained from the distribution of copies of the work. I don’t believe that many authors would be willing to publicly share their work unless a juridical order warrants paternity and integrity (both inalienable and nontransferable rights) of those works. engineering projects. a regulation of the parties’ interests should be completed with subsequent terms in the contract. A good example of this new situation is FIGARO. It would be different if the author agreed on a free publication of his work. in the sense of the classical publishing contract. He has only authorized the uploading and the public availability of his work. .(2005) 2:2 SCRIPT-ed 190 Secondly. in my opinion. as I said before. for instance a pay per view or an open access systems. According to the above mentioned model. That’s why a contract between author and publishing agent will be necessary. not only from the point of view of authors of scientific work. In that contract both the “reproduction right” and “the right of making available to the public” must be authorized. scientific authors are not adequately remunerated. I would like to refer to Digital Management Systems. is not required. first. unlike other limitations that are observed. However. because. it used to be possible only by transferring the exploitation rights. Leo Waaijers said in one of his works. All excluded reproductions acts are economically irrelevant out of the on-line exploitation. so he still has to decide which way his work will be exploited. because the use of the new technologies has yielded a drastic reduction in the distribution costs of the works.

due to the impossibility of controlling the mechanic reproductions. number 2 letter b) that “Member States may provide for exemptions or limitations to the reproduction right provide for art. This is not advisable given the global nature of the Internet.…-.(2005) 2:2 SCRIPT-ed 191 Nowadays new technologies have allowed authors to control the on-line exploitation of his work. With this assertion we have come to one of the most controversial issues of the current copyright system: the legal limit of reproduction for private use. We all know that the way authors have been compensated for the private reproduction of their work is through a fixed amount of money. The voluntary character may yield different regulation for each country. this initial foundation on a market deficiency cannot be maintained for the reproduction right for private use by digital means. That’s why the detractors of the extension of reproduction right to the private sphere. 6 to the work or subject-matter concerned”. I would like to recall the voluntary character of this disposition for the Member States. Digital Rights Management Systems enables the author to decide whether his work shall be exploited through a license system . but also prevent it. pay per view system. legal exemptions to exclusive rights don’t create subjective rights for the users. The EC Directive 2001/29 of the European Parliament on the harmonisation of certain aspects of copyrights and related rights in the information society. g. . They are consequently norms that are excluded from the free negotiation principle. They just limit exploitation rights aiming at the balance between the authors and the public interests. education.free access system-. Nonetheless. As we all know. 5. or not . otherwise the agreement would be null and void. as with computer programmes and databases.e. disposes in its art. on condition that the rightholders receive fair compensation which takes account of the application or non-application of technological measures referred to in art. subscriptions. either legally or illegally. 2 in the following cases: b) In respect of reproductions on any medium made by a natural person for private use and for ends that are neither directly nor indirectly commercial. education. and normally the e-publisher will be the one who provides such technologies. science and information. They bring the constitutional requirement of promotion and protection of culture. as thanks to the DRMS the rightholder can control the reproduction of the work. This is because its primary foundation lies on the market deficiency of the technical impossibility of controlling the reproduction of protected works by users that have access to them. This means that technological measures that hinder their beneficiary the enjoyment of such exemptions would constitute a right abuse and a contra legem behaviour that should be corrected by the current authority. science and research to fruition. I don’t believe that all that I’ve said in general about the exemptions could be said in particular for the exemption of reproduction for private use. Generally. authors usually don’t have the proper infrastructure and skills to install such digital protection systems. Nowadays. This is the reason why I believe that there is neither legal support to affirm that there exists a subjective right of reproduction for private use. need to find a new justification. At the same time those measures will not only allow the control of the reproduction for private use. And this argument can only be found in the public interests on open access to culture. It is clear that the author must agree the use of such technological measures with the e-publisher.

To finish my exposition. an fixed amount by non controlled access. using technological measures regarding the number of reproductions of works. We have identified that the enablers of this . Regarding works whose initial exploitation is through on-line means.(2005) 2:2 SCRIPT-ed 192 nor objective reasons of general interests that justify the defence of reproduction for private use with compulsory character. This way does not seem to be the best one because the current legal regulation of publication contract is far from satisfying the new interests of the parties in the on-line exploitation. it will not prevent rightholders from adopting adequate measures regarding the number of reproductions. of art. That is why a new negotiation with those authors will be required. either on-line or offline. Beside. There is still a total lack of organisation in this field. Ziga Turk: I would like to start with a short overview of the SCIX-Project and the results of the process analysis that was made there. without preventing the rightholders from choosing not to do so. gives an absolute priority to the will of the parties over the general interest in which legal exemption are founded. exploiting their reproduction right and the right of communication to the public and in particular the right of making available to the public on-line their analogical archives. Obviously we are all looking at how to change the current scientific publishing process. in cases of onerous on-line transfers of works lacking technological measures of reproduction control. enterprises are in practice divided: Some of them add to their classical publication contracts the right of communication to the public. These contracts even regulate the kind of technological measures and the salary of the author. Finally. It allows generally to limit reproduction for private use. should be adapted to the new means and materials of digital reproduction. in paragraph 4th. with regard to on-line exploitation. as it happens with other exemptions. But this is surely not the case of contracts before the 90s. Thus this limit is a non compulsory one that would allow the rightholders to ignore it in the license contract through technological measures of access and reproduction control. which will be normally linked to the kind of use: a percentage by access or time of use. 6. I would like to refer shortly to the current situation of Spanish e-publishing enterprises. This practice wouldn’t be a problem if the original contracts had transferred them the right of communication to the public and in particular the right of making available to the public. Other enterprises negotiate simply the exclusive transfer of the reproduction rights and the right to communicate to the public. This seems to be the motivation of the European legislator who. number 4 of the Directive. Some enterprises have transferred to software houses specialized in e-publishing. which have been normally made available through a license of use for each reproduction. This solution has been adopted by the Spanish legislator on the draft reform of our intellectual property law. in any case. reproduction for private use will subsist the way it exists now: by compensating the damages through a fair compensation that.

And finally I will try to give my personal answer to the questionnaire. A framework will be presented listing key innovation opportunities. With Prof. Bjoerk we have been running the ITcon Journal. since 1996. The duration has been extended until 30th of April 2004. that the knowledge transfer process between the research community and the industry is one of the major obstacles to change in that area. Bob Martens from Vienna we have been setting up a kind of a topic archive of publications related to computer edited architectural design. This tells you that the benefits of electronic publishing are probably more imminent if readers do not have the infrastructure of big countries or big institutions. which is a well established mechanism to model industrial or intellectual processes in order to analyse the problems. because as readers they care about totally different things than they do as writers. I think. which was part of the programme of this seminar. Research in the area started in 2000.(2005) 2:2 SCRIPT-ed 193 change are on one hand soft .000 and 200 person months. SCIX-Project is a 24-months project. The part of SciX is the technical innovation and technical demonstration. We have made it a policy to have everything except the exploitation plan to be public. legal and business innovation kinds of issues. using computers in architecture and engineering. and that there is a kind of a schizophrenic attitude towards this. before the re-engineering of the processes can take place. We are adding another interesting angle to the project and that is how can this scientific information be tailor-made for the industry? Our partner from Iceland is analyzing how to use this open access content and make it more appealing for the industrial readers and people from the practice. A lot of the scanning of old historical work has been done also.net you can read all about the project and all its deliverables. We are otherwise involved in information technology for construction. At www. with an EU funding of 1. organizations or individuals to enter this open access publishing model.like social. We understand that if something is an open access publication. One is the business process analysis. (2) A redesign of the publishing process and (3) a study of the barriers to change.scix. On the other hand. As you can see. There are two main areas of work. It is publishing volume 8 now a good track record for an electronic journal. in which basically three types of studies are being done: (1) a theoretical model of the publishing process using the IDEF0 mechanism. The reason for starting the SCIX-Project in the first place was that some of the key partners have been involved in issues related to scientific publishing or to scientific information exchange before the project started. With Prof. the Electronic Journal of Information Technology in Construction. practically every active researcher in this community is a member and probably this site contains more or less all the relevant works that have been published in this area even since the 1960s. We have identified. What we have been learning is that there is a big gap between what we as researchers know is possible. the partners are more or less from the edges of Europe. in which we are looking at how to lower the technical bar for the independent publishers. there is a possibility of technical innovation. The conclusion was that people want to read electronic.000. This is known as CUMINCAD. and what the industry actually implemented. people then are free to exploit any business model and any . We did a survey in our scientific community on how people feel about electronic publishing or on the publishing process in general. that is. but publish on paper.

How do different media address these motives? This table has the paper based journals in the first column and then various types of open access. because actually by putting something in a scientific journal. Otherwise they got quite well. They do very poorly at dissemination. It is not entirely true what the Open-Access-proponents say: The authors give the journals copyright to their work for free. the journals provide the printing. because without any electronic means of comparing the content and given the sometimes very slow publication process. the more people they can employ. The green mark means that this is well taken care of. Neither the scientist nor the libraries nor the funding bodies are interested in making a profit out of that.(2005) 2:2 SCRIPT-ed 194 business ideas with that content. one is hesitant to give them more credit in that area. Paper based journals have some problems: restricted dissemination due to costs. actually they are better in every other category than any of the open access digital publishing outlets with the given exception of dissemination. The main motive of the library is archiving. it is quite clear from this diagram. that the control of the academic community is outside of the academic community. the vital problem in the current process. Traditionally they are roughly five different motives for scientific publishing. (1) Registration of “who thought of something first”. high prices because of the inelastic market. The funding bodies' main interest is bibliometrics – a measure to decide whom to give the research money and whom not. The question for the SCIX-Project is how open access media can be as good as or better than the paper based journals. The funding body that did make an investment into the research process is definitely not interested in making the return of investment through publishing. They have a vested interest. not investment into the printing the scientific articles. As we can see. (4) longterm archiving. the advertising and the indexing. the reach of that journal is rather small. which can last between one and three years. and a read one that there is room for an innovation in this area. (2) quality certification. And as we can see there is some problem. with the registration of who thought or who did something first. So this autonomy. it is limited to the subscribers of that journal and to the libraries. There was some discussion today about the need to restrict open access or whether you should allow anyone to make an advantage of that. because by the number of books and by the number of journals their budget is defined. The librarians want to handle paper based journals and books. the digital publishing in the remaining ones. But I think. bad service to the authors because authors “publish or perish”. (5) bibliometrics or some kind of measurement of the quality of the research. but we are speaking here on the investment into the research. I would like to start with the motives for scientific publishing. the publisher is interested in the return on investment. that the scientists have very different interests in the process from the publishers. And the larger the budget. which the . In exchange for that. so the work does not get lost. We can add to these the profit or earning money and the return on investment. (3) dissemination through which one also builds up his own prestige or becomes known in academic circles. and one that the scientific community should be most worried about is. Of course. that paper based journals today have.

has in fact been diminished by the commercial publishers. What can we do to improve the problems of open access publishing? What are the chances of innovation and improvement? When it comes to registration of primacy (who thought of it first). The problem of institutional archive is that I would never go searching Frauenhofer's and then go to the Stanford University and then go to MIT and so on and so on. Similarity measurements are another opportunity. but they would be probably a little dubious. To put it in other words. And this journal would. I think. who wrote this works. When it comes to institutional archives and personal archives. electronic journals are following more or less the same principals as paper based journals. But I can't remember when I was last going to a search engine that was based on OAI. so that their data could . this work is okay. and the publishing outlet. for example. accept some works. What you could have is a model where you have a paper published for example in institution archives. What these sites could have built in is digital signatures related to time stamping. but the certification happens through other means. I don't think there is much that can be done to improve that.(2005) 2:2 SCRIPT-ed 195 academic community should have or thinks it has. whose work passes and whose work does not pass the review process. that certifies that something is good. the publication process should be in their hands and not in the hands of some organizations. OAI. it is in the repository. this work is good. when something was submitted and did not change since that. I think we will come to the conclusion that open access publishing is not free and that the commercial publishers can improve their service. of course. if you publish a paper in a journal now. when it comes to that. which are otherwise not involved in the research or educational process. the originality. The publishers are in a position to steer what the priorities of scientific research are. I think. The paper does not end up in one journal. It applies the same to the personal archives. a scientific publisher can invent a new journal on some exotic topic. however. But there is a possible innovation in the organization of publication model in the sense. As far as dissemination is concerned. I am not so sure about the importance of the OAI compatibility. I go to Google or I go to some topic specific archive. By getting these research grants their organizations can afford to subscribe to these journals. that there is no need to have the oneon-one mapping between the organization. it is the editorial board of that journal that certifies that this work is okay. is very important as a means to aggregate institutional archives. These institutions need to make it friendly for the search engines. they need to be first and foremost Google-friendly. what kind of work passes. that they appoint the editors and they appoint the review boards. so that it is known. Not to mention. When I personally search for stuff. to some extent. But I think the key issue is that if we want scientists to keep their autonomy. If you have texts in electronic form it is fairly easy to find out very similar work and also to measure. electronic processing can be faster. which give them a certain type of control. would have good bibliometric results or would be evaluated fairly well and could therefore apply for research grants and money based on the fact that they have published in these new journals. People. And many different editorial boards would say. we have the upper hand in relation to paper journals already. The problem is. that. As far as the quality certification goes. Particularly regarding electronic journals and topic archives.

There are social problems like recognition of electronically published texts on the one hand and the acknowledgement of the public interest in scientific publications openly available on the other. If you combine them differently. A model. For some time one can work on a shoestring budget or for some time one can work on enthusiasm. But with technical innovation. if we could engage them handling electronic publications and maybe they could also take some of the roles the publishers in this area as well. In relation to all other types of archives it should be made easy. but throughout history I am not sure how many successful things have worked in the a long run on enthusiasm. who is doing electronic business? And as to the institutional and personal archives. the problem is . of scientific publishing and paper based scientific publishing .(2005) 2:2 SCRIPT-ed 196 somehow get accumulated. somehow the ISI . probably by topic archives using in the end OAI standards. which stores the work. workflow systems need to be put in place. or is in of As to the measurement of impact and other bibliometric studies. Some of the barriers on the transition path are technical. you create different types of publications. we tried to make it really simple for people to enter into the open access publishing. I think. It could be a major breakthrough. through technology. . Probably sustainability is the major problem of electronic journals and of topic archives today.is doing very well in providing that information. particularly the libraries. which would try to be competing with that. I think Elsevier is already developing a similar system. But if you want to support a conference or something similar. When it comes to archiving. Sooner or later business takes over. We are seeing that the major clients of publishing in general are becoming the organizations. when it comes to electronic journals? Is it just some guy on the internet? Is it a proper organization? Is it a library? Is it a university? Is it a professional publisher. Furthermore there are organizational problems: Who is the publisher. you probably have user management reviews and annotations as well as discussions. These components like the Lego blocks and the user decides what she wants to create with them. The work in SciX that I was mentioning is related to a business process analysis. to set things up and to manage it on a day to day basis.that the control is somehow shifting strongly from the scientists and the libraries to the funding bodies. the workflow system is different. And for electronic journals to enter into that system is a fairly long and slow process. in addition to the repository. who measure how good you are and how good you are not. innovation is needed in the sense that local libraries national libraries need to get involved. The current excuse for their existence handling of paper. If you look at the business perspective. As far as business related items are concerned there is a problem of who pays for the electronic archives. The innovation here could only come from some open citation approaches. there are new rolls for old organizations. The idea behind the architecture of the site is that there are a number of core-services. which are on this layer here.the Institute for Scientific Information .and it appears everywhere in the business of publishing. which would actually work on the long run has not been invented yet. In relation to electronic journals. Multiple languages and the support for sustainability in the sense that the maintenance of these archives really doesn’t take too much effort. If you want to have electronic journals.

who was doing the research the first one. We use XML for all that webservices. the structure of the data base itself and the interfaces to maybe other services. I am not contributing to the costs required to do the research. to the costs of making that video. which is described in the work. which in the first place made the invention possible. This diagram here shows the costs related to creating a scientific article or the costs related to lifecycle of a scientific work. All these services on this layer are. A minor part of information exchange can happen with the protocol for metadata harvesting by OAI. that this intellectual property right addresses the interest of the public. Spanish will be added fairly soon. The plain version looks like that. The review process is the third one. you can set it up through the web only through clicking on the web interface like you select on a form that you want to set up an open archive. Currently we have three languages implemented. if one would look at the copyright laws and on any kind of legal framework related to that. is a kind of support for rentable services. this is a very important difference. but there are some 200 parameters. e. homogeneous. German. This property was created by public funding and it is not the publication that is supposed to make some return on investment. again using the same code base. but probably will be fairly soon. There are some costs involved in the time of the scientist.g. The legal framework should ensure. Ideally one would have these online publications managed by existing organizations with existing budgets like. So the code is written in a way which allows translation by people and by programmers. Then this could be quite cost-efficient. One does not pay for the costs that were associated with making that work possible. one does not pay for the innovation. what I am contributing with that money is to the effort. What we need to know or have in mind when it comes to scientific publishing is that by buying scientific publication. which you can change such as the colours. When you go to a website and would like to create a journal or a personal archive or institutional archive. for example libraries. The final innovation or something that comes on top is not fully operational. What is up and running now is using this technology? Common code base is shared by one journal. who the manager is. that in the end somebody needs to pay. English. five topic archives including the electronic publishing conference series and their archives. what the password was and it is done for you and you can start uploading your works or your pictures or works of art or if you want . although using in part open source code. Charges for online publication? I think. I think. If I buy a CD of music or a video. It is intellectual property that should be somehow protected or managed. Writing the paper is the second one. in several columns. Parts of this system were developed within SciX. And the order of magnitude of the publisher’s profit is also in the order of magnitude of these boxes here.well. Slovenian. It uses the same path or access to the actual data. And this is dramatically different to any other kinds of publishing. If I buy a scientific journal. and that covered the cost of the invention. You fill in some information. Parts were using existing open source code. As to the IPR questions that this workshop asked: the baseline for answering those questions is actually on this slide.(2005) 2:2 SCRIPT-ed 197 A special module is dealing with this content syndication towards the industrial user. three conferences. whatever content. The publication itself is there light grey and the retrieval is the dark grey. . not the cost of the publishing.

if it is out there. we would have enormous progress. Why would their interest need to be taken care of? Somehow the Continental model seems much more appropriate than the Anglo-American one in this particular case. It may not be too good for their careers. students are using extensively to produce “their” seminar assignments. Danger is in the ease of copy-paste that. The other danger of the internet is that people get so much involved in electronic dissemination on the internet that they forget about how to score the proper points for their evaluation and promotion.(2005) 2:2 SCRIPT-ed 198 The important is the autonomy of the scientific community. It is what they will also need to do in real life. this is from there and this is from there and this is what I did. I knew nothing about the difference between Continental andAnglo-Saxon IP-law before I came here. this person who . but not much more. But it is also good. I can copy and paste this text or maybe improve it. what his relation to the academic community is . what the contribution of one or the second author was. Referring to Open Source Systems I am sure that there is a difference between Open Source and Open Access. In that respect something like BioMed is not different from Elsevier. He as author can be in conflict. Woody Allen cannot make a movie without somebody who gives him the money for the movie. So it should be the scientific community that is providing the publication media. . these revenues are so small in comparison to the cost of doing the research. In that case. I think for Open Source some invention is needed. What we need to teach them is to credit that properly. As a scientist I am not interested in earning one or two dollars for each click on the paper that I have published. people would be able maybe to steal my pictures or be able to improve on my pictures without a very clear reference to me in some sense. Or one could use some text analysis of systems do actually compute. in relation to copyright. this is from there. improve it and puts himself at the end of the author list. if he says. what Prof. Open Source means to me for example that I get the Word version with everything open. Gioia was saying: It is a challenge. Why type something in again. with the producer.I am not sure. Is internet the danger or an opportunity? I agree with. The problem with Open Source publishing is that if I would give word files out.is in exactly the same position as Elsevier setting up a new paper based journal. calculate. these costs. And if every person would do five percent original of what others have done. I think the scientific community is not interested in the legal limits of Digital Rights Management. It is marginal. And Open Access is a PDF or Postscript-File which let me do the reading or printing. There is a possibility of some invention on this Open Source model in the sense that somebody posts a paper and somebody else can take the three quarters of it. But Elsevier gave me no money at all to do some research. who decides to create a new journal on some exotic genetic theme. But I learned this morning that in scientific publishing it seems that there are some authors that need protection and others do not need to be protected. I have totally no problem accepting a paper from a student that would be 75 percent cut-paste from various sources. There is somebody at the top of BioMed.

Universities have to raise large amounts of money for procuring books. (5) bibliometrics or some kind of measurement of the quality of the research.(2005) 2:2 SCRIPT-ed Summary: A. does not result in any improvement in the quality of scientific works. Depending on their respective backgrounds. ultimate and unanimous opinion about the requirements of DRM and digital publishing. Turk there are five different motives for scientific publishing. Long-term operation of an electronic journal would be problematic because the finances for a long-term use would not be available. Turk criticized this development because the evaluation of scientific contributions should be done by the general public. This process dis-empowers authors and. General situation 199 As a result of the workshop it can be said that there is no comprehensive. Prof. Schafer´s opinion the current practice of peer-review is problematic. Prof. many universities have set up their own publishing units or have forged alliances with other universities and research institutions. there is no interest in changing this situation. similarly critical for universities and their libraries. From the point of view of the publishing houses. online-publishing would permit a very quick dissemination of knowledge. (3) dissemination through which one also builds up his own prestige or becomes known in academic circles. B. . The situation is. in spite of having been subject of a peer-review. In this context. on the other hand. and one can see which scientist has achieved what results at any point in time. could reach only a small group of readers within the scientific community and in universities. so the work does not get lost. As a reaction to the monopoly. Mr. Scientific works can thus be compared and evaluated more easily. according to several participants. since their financial position would only worsen with increased competition. The present day market situation mainly affects authors of scientific papers who have to pay the publishing houses for publishing their work. In Mr. Situation of online-publishing While outlining the general conditions of online-publishing. Schafer pointed out a few cases in the recent past. whose interests match those of the authors. Prof. Turk pointed out the following advantages: contrary to paper-based journals. (4) long-term archiving. Turk opined that the main problem of evaluating is the fact that measurement of all scientific publications is done by commercial investors rather than by the scientific community or libraries. According to the different statements the situation in the field of academic publishing is as follows: The competition between individual publishing houses is restricted by the monopoly of a few players in the market. Public money is thus spent twice: Once in paying the researchers for their research work and again for buying scientific books that are published. the individual participants (lawyers. journalists and computer professionals) had different answers to the questions posed in the workshop’s principles and gave importance of various aspects in the field of DRM and in the academic field of electronic publishing. First registration of “who thought of something first”. (2) quality certification. where scientific works had proved to be wrong or faked. Worldwide access to the results of scientific research can be significantly faster if online-publishing is used. According to Prof. Traditional scientific journals. at the same time.

These concerns have been met by various international and European instruments. Dr. Cheap as opposed to free online journals for instance would initially increase costs to libraries. They would however face potentially higher costs for peer reviewing. for instance through the possibility to allow users to customise hyperlinks between papers. at least initially. the combined effect would be quite serious. The acts which would infringe copyright would include the communication of a work to the public as well the making available (such as on an internet site). Both the reproduction right. in particular the introduction of a general right of communication to the public under the 1996 WIPO World Copyright Treaty for Authors and under the 1996 WIPO World Performers and Phonogram Producers Protection Treaty (Articles 8 and 6 respectively). but individuals and institutions are offered additional services (like for instance the hyperlinking facility discussed above. This right would be understood to complement the non-physical rights in respect of online transmission in closed or open networks. This would refer to acts of technically necessary copies which occur once information is transmitted through electronic networks. regardless of their physical location. Schafer in the near future technologies would be available that can be used to transform these archives from cheap alternatives to paper journals to superior competitors. allowing researchers to self-archive without the need to upload the same paper on several smaller discipline-based archives. Firstly. As a use right. copyediting. there is the potential for institutional self-archiving. open access journals would not face the problem to negotiate copyright release for articles published already in paper journals. academics can upload their papers in open access digital archives. a technology already used commercially in the Juris system. it would remain immaterial whether the act of access (or providing an opportunity to access for third parties by uploading information . Schafer emphasized that electronic publishing could offer a solution to the problems facing the academic archive. no limitation will apply. as understood in the sense the directive suggests. According to Mr. Finally. The European Union had introduced a general right of transient copying under the 2001 directive on Copyright and Related Rights in the Information Society. and the right of communication to the public will avail owners of copyrighted materials with a general use right. Possibilities to meet these costs range from “author pays” models to sponsorship to a system of “value added” publishing were the core texts remain free of charge. most of which lack the necessary mass to be successful competitors to the main journals. proof reading and general administration. these archives can be located on the level of universities. organizational problems like who is to be regarded as the publisher of an electronic journal would also have to be solved. as they will be ordered in addition to existing high impact journals. Schafer also said that unlike self-archiving. there would be difficulties with public acceptance for electronically published texts and knowledge that a text has been published electronically. Small institution-based servers can in this way create aggregates of research literature. In effect. Finally and more ambitiously there would be projects like the Budapest Open Access Initiative funded by the Soros Foundation Open Society Institute. For providers of scientific material. Mr. Mr. Westkamp said that exploiters of cultural works had detected a danger to the commercial viability of their businesses and pledges for stricter protection. In addition to publishing in traditional journals. Its protocol would allow all databases which comply with it to function as if they were part of one central server.(2005) 2:2 SCRIPT-ed 200 Moreover. On the next higher level. customising of papers etc).

According Dr. a monopolization is indicated (Elsevier / Springer-Kluwer Group) similar to the situation of traditional publishing houses. the participants rated the development of the market for commercial onlinepublishing companies as another critical factor. These publishers justify these costs saying that they provide other services in addition to the publishing. Prof. the result is a loss of knowledge. Dr. Turk on the contrary emphasized that the price of a music CD or a video would depend on the respective manufacturing costs while the price of a scientific journal would be much higher than the costs of the research. the participants called for a process of emancipation of the scientific community and the continuance and expansion of existing alliances. Peukert considered the abolition of copyright protection for scientific publications. According to some participants. on the one hand. According to the participants. some online publishers have demanded up to US-$ 500. There will certainly be disadvantages if a third person makes use of the work without the author’s permission. because the universities would no longer have access to it. restricting it to protection for the author against changes to his works by third persons. On the other hand. it is to be noted that the online-publishers often get access to further sources of income such as sale of paper editions. Commercial onlinepublishing thus contributes to the death of the free market. advertisements and sponsorships through the publication of scientific articles. even if these are of a very high quality. Thus.00 a year. such enormous amounts do not correspond to the savings that online-publishers receive by publishing on the Internet instead of in paper-based journals. or alternatively. libraries have to pay for access to knowledge. before signing a contract with an online-publisher. This would mean that publications could only be . To solve this problem. the current situation deteriorates quality of research and academics. resulting in the strong position of the publishing companies. If the subscription to an electronic journal is cancelled. this protection ought to be provided mainly at the contractual level. Grosheide’s opinion. Efforts undertaken so far in this direction have often failed due to the unimaginativeness of the scientific community.00 for publishing scientific treatises. In this context. Here. Apparently. it was pointed out that the administrative costs for digital archiving of scientific works would come up to a mere US-$ 55. Westkamp it should also be noted that it is still unclear which law will apply to the act of downloading a copyrighted work. The estimation of the role of copyright in the field of digital publishing varied. Commercial online-publishing is particularly detrimental to universities. Most of the participants agreed that the works of scientific authors needed copyright protection. universities are less willing to make use of new electronic journals. I. Legal solutions In Prof. Instead of being able to sell knowledge in materially embodied form such as books. such as peer-reviews.(2005) 2:2 SCRIPT-ed 201 Further. As a consequence. the legal right to copyright protection of scientific authors would not differ from that of other creative persons. That is why authors should take a careful look at the publishers’ usage regulations for their clients. The conventional concepts – either the law of origin or the law of the country for which protection is sought – would both be fraught with difficulties in practical application.

it was not necessary to adapt the traditional copyright regulations for the field digital publications. Gioia questioned protection of authors as a goal of the European copyright legislation. Peukert firstly referred to § 52a UrhG. Grosheide opined that the differences would play no role. two distinct copyright systems exist: According to wisdom currently prevailing in the European Commission. In her opinion. international norm setting had been achieved under the 1996 WIPO Copyright treaties dealing with the conceived dangers of the internet. Prof. Prof. 2 lit. Prof. according to her.(2005) 2:2 SCRIPT-ed 202 financed by public funding. In this respect. There were also different opinions on the advantages and disadvantages of the continental copyright regulations versus the Anglo-Saxon rights. b) RiLi 2001/29 problematic. One reason would be an increase in electronic uses of copyrightable material and the ease of copying and dissemination associated with it. restricted group of persons for their scientific research. Mr. She preferred solving the problem on the basis of self regulation through the legal departments. In Dr. Bordeanu. In addition to this identification and authentication of online-authors should also be ensured. Westkamp´s opinion copyright would be undergoing the perhaps most drastic changes of its history. Peukert doubted the practical feasibility of his suggestion. Finally. Schafer copyright regulations were needed that protect contents which were freely available originally from commercial exploitation by third parties. During their talks. Grassmuck was clearly against a legal model based on the Anglo-Saxon ideal. strengthening commercial users would seem to be the legal purpose. Prof. Gioia favored a mixed model based on both legislations. at least at the contractual level. Thus. because this would imply international treaties. some of the participants referred to national regulations and the legal situation in their home countries. 5 No. Prof. European harmonisation had been introduced by way of directives. This. On the whole. Dr. Gioia called attention to the fact that copyright itself was independent of the medium of publication. the final two of which – the directive on the legal protection of databases and the directive on copyright in the information society – would be most relevant to the workshop topic. a. there would be no need to modify and adapt the existing copyright legislations at the European level especially in the field of digital publishing. Moreover. With regard to the latest EU legislation. would be technology and corresponding ethics among the users. in her opinion. Mr. but the present problems of online-publications should be solved. which permits the usage of published works exclusively by a definite. Saiz-Garcia found the possibility of a voluntary implementation of Art. Schafer’s view was opposed by Prof. According to his views. would create non-uniform pre-requisites within the community and is not designed for global availability of online-publications. the impression was of a very heterogeneous regulatory structure. § 52a UrhG can only be applied to a small group of . According to Mr. Hence. At the same time Dr. necessitates harmonisation under Article 95 EC as it is conceived as a potential barrier to trade. The second reason would have a political dimension: In the European Union. The best protection.

Furthermore in the UK. Moreover. Edinburgh University is one of the leading partners in a consortium to create and promote such archives (The Nottingham University led SHERPA project). Gioia usage and copyright regulations in Italy and a few other member states would not depend on the medium used. digital usage is considered equivalent to usage on paper. The creator of a web-site would be covered by a protective right that is reminiscent of the “right in the typographical arrangement of published editions” under British law. Prof. since copying of information is not allowed. the joint information systems committee JISC would support institutional self-archiving. which would otherwise be possible only through a license. Furthermore the exception provided under Art. websites are covered under the provisions for copyright and competition protection in Italy. 5 Abs. The course material of most professors would have to be seen as illegal. Prof. Dr. . Saiz-Garcia reported that the Spanish legislator hade made provisions in principle for restricting the right to multiplication to private usage only. if the guidelines are applied. from involving students early on in research to more reader friendly writing by academics.(2005) 2:2 SCRIPT-ed 203 persons who can exchange the results of their research within a network. In the same time introduction of the Research assessment exercise the UK had dramatically increased the pressure on academics to publish. This would also be applicable for authors employed in a scientific institution. Bordeanu also said that it is not possible to transfer ethical values. 3 RiLi 2001/29 in relation to multiplication rights poses problems in the field of teaching and learning. Schafer reported that in the UK traditional academic libraries would face a crisis as research journal publishers had increased their prices by an average of 10% per annum over the last decade. although this is a necessary part of every national copyright regulation considering the global availability of online-publications. it would be necessary for universities to contractually allow the usage of the works of their scientists. According to Prof. Prof. Schafer pointed out that one possibility could be student edited journals. Edinburgh University would presently develop a student edited webjournal on IT and IP law. Student edited journals would also have pedagogical advantages. in their draft for the implementation of the copyright directive. Despite this problem the French government apparently would believe that there would be no need for further action as international treatises and European directives would oppose another regulation. As confirmed by the legal departments. they would be the norm in US law schools. this should not at the same time prevent the owner of the right from using technical means to circumvent this and make copies of the work. However. Hence. Peukert drew attention to the regulations in the German copyright act which tie these to an author. and the conditions for using copyright protected material would be laid down in these contracts. Further. This regulation did not touch upon the world-wide usage of research results via Internet. Mr. Considering the potential to reduce the high costs of electronic publishing Mr. Although these journals would be the exception in the UK. Bordeanu propounded that universities in France would be entering into contracts with authors. This is meant to apply to both the larger design of a web-site as well as to special features.

Legal limits and perspectives of DRM In the assessment of the possibilities and outlook for DRM in the field of academic publishing. Dr. Bordeanu and Dr. since it contains legal. At present there would be four different business models. He said that the use of DRM would be based on a strategic component as well. It is also . scientists would be only marginally interested in financial exploitation of their works as scientific authors use the Internet as an inexpensive way of presenting their work worldwide and acquiring a reputation thereby. namely free access to information. Some of the participants completely rejected DRM in the academic field. In the case of works. Since the publications in the Internet would also contain traditional works from an earlier date. since the regulations for existing contracts would not address the requirements of Internetpublications. participants agreed that DRM could not be bound by any legal limits. they said. Peukert were skeptical about the success of DRM. The significance of DRM would not be restricted to the possibility of generating high turnover through the Internet. the usage rights for these would not contain any contractual terms for such (digital) publication. Some publishing companies had shifted to publishing exclusively in the Internet. leading to high profits. however. Whether these investments would pay off would remain questionable. Unlike commercial authors. Prof. Implementation of DRM-systems in commercial online-publishing companies would probably be inevitable. According to Mr. opinion was divided. Bordeanu considered DRM as a possibility for ensuring copyright protection through technical measures. but mere access. Further. according to Prof. DRM would be suited for countering unauthorized usage of works. Buhse from the point of view of commercial legal representatives DRM would be the key to success in the electronic contents business. remunerations for one-time usage or access over a restricted period of time. The main argument against DRM was that it would restrict free access to knowledge and hence oppose the basic principle of the Internet. which are envisaged exclusively for online-publication. would incorporate comprehensive regulations for online-usage of works in their contracts. Prof. among other things. technical and business aspects. which could hold their own in future as well. For this reason. marked by a lack of organization. II. The need for DRM can be traced back to several other developments. practice is apparently divided: some publishing companies have merely added the possibility of using the work through Internet to the existing contract. on the other hand. One special field of application here would be author authentication. Other publishers. Saiz-Garcia. However. These models envisage. Peukert was mainly concerned for the difference between commercial and scientific publications. older contracts with the individual authors would have to be re-phrased accordingly. because the only alternative would be free usage of these services. and can be applied to the academic field. Saiz-Garcia advocates against this practice. would not offer a legal position to the people requesting the information. The usage of DRM-systems would necessitate considerable investments for setting up the DRM-infrastructure.(2005) 2:2 SCRIPT-ed 204 The situation of Spanish online-publishing houses is. DRM. As far as the authorization of DRM in the academic field was concerned. Prof.

Finally. protection for scientific publications would make room for the EU legislation.(2005) 2:2 SCRIPT-ed 205 possible to restrict usage to the type of usage. He associated the possibility of making changes to electronic documents with the concept of open source. watermarks. which would allow authors to exchange documents or parts of documents among themselves. At the same time. Turk. Prof. Prof. one could also consider a tool that could analyze text and summarize it. of the database directive (RiLi 96/9/EC) and would therefore enjoy the concomitant protection. Grosheide also drew attention to the difference between the management of digital rights and digital management of rights. Regarding the role of DRM Prof. one area of application for open source systems would be a model. both in terms of its definition as well as from an advanced perspective. Further. . open-source systems would be prone to the risk of third persons appropriating the contents of documents or even eliminating them. Open access on the contrary provides access to documents where only reading is possible. many participants highlighted the growing significance of database protection as an area of application for DRM. as in the case of PDFfiles. individual parts or even charge for particular services. Grosheide pointed out that individual usage business models for DRM and the usage forms in each case would have to be considered. metadata and languages. On the one hand management of digital rights would include technologies for identification. According to Prof. Grosheide basically welcomed the use of open source systems.2. protection technologies and payment systems. For him. there was no great difference between open source and open access. Prof. Open Source Solutions The use of open source systems was assessed differently by some of the participants. digital signatures. Turk was of a similar opinion. Here. for purposes of editing. because online-collections of scientific publications would have to be seen as a database in the sense of Art 5. Digital management of rights on the other hand would cover decryption. III. He also noted that no such system would be in place as yet in the field of academic publishing. as long as they would promote scientific encounters.