
SUSTAINABLE DEVELOPMENT OF ENERGY, WATER AND
ENVIRONMENT SYSTEMS


PROCEEDINGS OF THE CONFERENCE ON SUSTAINABLE DEVELOPMENT OF ENERGY,
WATER AND ENVIRONMENT SYSTEMS, 2–7 JUNE 2002, DUBROVNIK, CROATIA

Sustainable Development of
Energy, Water and
Environment Systems

Edited by

Naim H. Afgan
UNESCO Chair Holder, Instituto Superior Tecnico, Lisbon, Portugal
Željko Bogdan
Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb,
Croatia
Neven Duić
Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb,
Croatia

A.A. BALKEMA PUBLISHERS LISSE / ABINGDON / EXTON (PA) / TOKYO



CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2004 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Version Date: 20140806
International Standard Book Number-13: 978-90-5809-662-3 (Hardback)
International Standard Book Number-13: 978-1-4822-8393-8 (eBook - PDF)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and
information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission
to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic,
mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or
retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact
the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides
licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has
been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without
intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com


Sustainable Development of Energy, Water and Environment Systems, Afgan, Bogdan & Duić (eds)
© 2004 Swets & Zeitlinger, Lisse, ISBN 90 5809 662 9

Table of contents

Foreword IX

Sustainability concept

Sustainability concept for energy, water and environment systems 3
Naim Hamdia Afgan
Sustainable energy path 23
Hiromi Yamamoto & Kenji Yamaji
Methodology to construct material circulatory network in a local community 29
Ichiro Naruse, Masaya Hotta, Tomoyuki Goto & Kimito Funatsu
Application of emergy analysis to sustainable management of water resources 37
Laura Fugaro, Maria Pia Picchi & Ilaria Principi

Sustainability assessment method

Method of allocation of the weights by fuzzy logic for a sustainable urban model 47
Francesco Gagliardi & Mariacristina Roscia
Fuzzy cost recovery in planning for sustainable water supply systems in
developing countries 57
Kameel Virjee & Susan Gaskin

Possibility theory and fuzzy logic applications to risk assessment problems 67
M.N. Carcassi, G.M. Cerchiara & L. Zambolin

Social aspect of sustainable development

Research on woods as sustainable industrial resources – evaluation of tactile warmth
for woods and other materials 79
Yoshihiro Obata, Kozo Kanayama & Yuzo Furuta

New and renewable energy sources for water and environment sustainable development

The surface water retention basins as a tool for new and renewable water and
energy sources 91
P.S. Kollias, V.P. Kollias & S.P. Kollias


Solar photocatalytic oxidation: a sustainable tool for reclaiming biologically treated
municipal wastewater for high quality demand re-use? 99
H. Gulyas, I. Ilesanmi, M. Jahn & Z. Li
Development of solar testing station for flat-plate water-cooled solar collectors 109
Emin Kulić, Sadjit Metović, Haris Lulić & Muhamed Korić
The application of solar radiation for the treatment of lake water 119
Davor Ljubas, Nikola Ružinski & Slaven Dobrović
Helinet energy subsystem: an integrated hydrogen system for stratospheric applications 129
Evasio Lavagno & Raffaella Gerboni

Sustainable development of environment systems

Dynamic simulation of pollutant dispersion over complex urban terrains: a tool
for sustainable development, control and management 139
K. Hanjalić & S. Kenjereš

Study of environmental sustainability: The case of Portuguese polluting industries 151
Manuela Sarmento, Diamantino Durão & Manuela Duarte

The factors which affect the decision to attain ISO 14000 161
Šime Čurković
Creation of a recycling-based society optimised on regional material and energy flow 171
N. Goto, T. Tabata, K. Fujie & T. Usui

Environmental, energy and economic aspects and sustainability in thermal processing of
wastes from pulp production 181
Oral J., Sikula J., Puchyr R., Hajny Z., Stehlik P. & Bebar L.
Worldwide use of ethanol: a contribution for economic and environmental sustainability 189
Cortez, Luís A.B., Griffin, Michael W., Scaramucci, José A., Scandiffio,
Mirna Gaya & Braunbeck, O.A.
Environmental aspects of socio-economic changes for industrial region in Russia
in transition economy 197
Boris Korobitsyn & Anna Luzhetskaya
Waste incineration in Swedish municipal energy systems – modelling the effects of
various waste quantities in the City of Linköping 203
Kristina Holmgren & Michael A. Bartlett
Tidal power generation: a sustainable energy source? 213
Alain A. Joseph

Modelling and simulation of energy, water and environment systems

Modelling of energy and environmental costs for sustainability of urban areas 221
Alfonso Aranda, Ignacio Zabalza & Sabina Scarpellini
Alteration of chemical disinfection to environmentally friendly disinfection by UV-radiation 231
Slaven Dobrović, Nikola Ružinski & Hrvoje Juretić


Modelling the geographic distribution of scattered electricity sources 241
Poul Alberg Østergaard

Dynamic stock modelling: a method for the identification and estimation of future waste
streams and emissions based on past production and product stock characteristics 249
Ayman Elshkaki, Ester van der Voet, Veerle Timmermans & Mirja Van Holderbeke
Comparison of fouling data from alternative cooling water sources 259
Malcolm Smith, Andrew Jenkins & Colin Grant
Influence of the critical sticking velocity on the growth rate of particulate fouling
in waste incinerators 269
M.S. Abd-Elhady, C.C.M. Rindt, J.G. Wijers & A.A. van Steenhoven

A general mathematical model of solid fuels pyrolysis 279
Gabriele Migliavacca, Emilio Parodi, Loretta Bonfanti, Tiziano Faravelli,
Sauro Pierucci & Eliseo Ranzi

Thermo-economic analysis of energy, water and environment systems

The EnergyPLAN model: CHP and wind power system analysis 291
Henrik Lund & Ebbe Münster
Electric Power System Expansion Planning 301
Tatjana Kovačina & Edina Dedović
Development of an air staging technology to reduce NOx-emissions in grate fired boilers 309
B. Staiger, S. Unterberger, R. Berger & Klaus R.G. Hein
Systemic approach for techno-economic evaluation of triple hybrid (RO, MSF and
power generation) scheme including accounting of CO2 emission 319
Sergei P. Agashichev & Ali M. El-Nashar
New fossil fuel energy technologies – a possibility of improving energy efficiency in
developing countries 337
Alija Lekić

Sustainable development of water systems

Water management of a small river basin toward sustainability (the example of
the Slovenian river Paka) 349
Emil Šterbenk, Alenka Roser Drev & Mojca Bole
A simplified model for long term prediction on vertical distributions of
water qualities in Lake Biwa 357
Takashi Hosoda & Tomohiko Hosomi

Author index 367


Foreword

Sustainability has become a leading guideline for the future of humanity. It is one of the buzzwords that have been introduced through the discussion of economic and social development, once it was realized that our society would need a new strategy for its further development.
In this respect the Brundtland Commission has introduced a definition of sustainable development as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs”.
The United Nations Conference on Environment and Development, held in Rio de Janeiro in 1992, provided fundamental principles and a program of action for achieving sustainable development.
Ten years after Rio, the World Summit on Sustainable Development (Johannesburg, 2002) adopted the Plan of Implementation as a framework of governance for sustainable development.
“At the domestic level, sound environmental, social and economic policies, democratic institutions responsive to the needs of the people, the rule of law, anti-corruption measures, gender equality and an enabling environment for investment are the basis for sustainable development. As a result of globalization, external factors have become critical in determining the success or failure of developing countries in their national efforts”.
The First Dubrovnik Conference on Sustainable Development of Energy, Water and Environment Systems, held in June 2002, was devoted to elucidating scientific and engineering problems related to the future development of these fields.
The conference was focused on the following objectives:
• To discuss the sustainability concept of energy, water and environment and its relation to
global development
• To analyze potential scientific and technological processes reflecting energy, water and
environment exchange
• To present energy, water and environment system models and their evaluation
• To present multi-criteria assessment of energy, water and environment systems taking into
consideration economic, social, environmental and resource use aspects.
The Proceedings comprise selected papers and lectures presented at the conference. Only those
papers which have been recommended by the reviewers have been included in this volume.
The first part of the Proceedings is devoted to the Sustainability Sciences. It comprises subjects
which are of interest for understanding the sustainability framework.
The second part is devoted to the Sustainability Concept for different systems. The papers
in this group of sessions reflect approaches to the design of the sustainability concept for
energy, water and environment systems.
The main subject of the third part is the Evaluation of Systems. There are three groups of
systems, namely energy, water and environment systems. A number of papers are devoted to the
sustainability of these systems, with the aim of evaluating the complexity of the systems under consideration.
The editors of the Proceedings would like to express their high appreciation to the Reviewers for their
help in attaining a high standard of scientific quality in this volume.

Editors
Naim Afgan, Željko Bogdan, Neven Duić


Sustainability concept


Sustainability concept for energy, water and environment systems

Naim Hamdia Afgan
UNESCO Chair Holder, Instituto Superior Tecnico, Lisbon, Portugal

ABSTRACT: This review aims to introduce the historical background of the development of the sustainability concept for energy, water and environment systems. In the assessment of global energy and water resources, attention is focused on resource consumption and its relevance to future demand. In the review of the development of the sustainability concept, special emphasis is devoted to the definition of sustainability and its relation to the historical background of the sustainability idea.
In order to introduce the measurement of sustainability, attention is devoted to the definition of the respective criteria. There have been a number of attempts to define criteria for the assessment of the sustainability of market products. Taking those criteria as a basis, a specific application to energy system design is introduced.

INTRODUCTION

Throughout its history, our civilisation has developed under constraints encompassing economic, social and ecological perspectives. Since the beginning of the industrial revolution, the need has been recognised for the harmonised development of different commodities leading to a better life. In this respect, economic and social development has been based on the natural capital available at the respective level of technology development.
Through the history of human society there have been different patterns of social structure, which have led from pre-neolithic to industrial society. Each successive social structure has differed in the complexity of its internal organization. The industrial revolution triggered a new pattern of complexity, determined by the need to generate more and more power for use in everyday life. The invention of steam generators, steam engines, steam turbines and many other energy conversion systems increased the comforts of human life and also initiated dramatic changes in the social structure of human society. It has become obvious that the welfare harvested by increasing the use of energy resources has brought additional complexity to the organization of human society. New scientific achievements and technological progress have opened a new venture in the development of our society. In this respect, we need to look ahead in order to see whether we can forecast our future on the near-term and long-term scale. This is the reason that a number of scholars have devoted substantial attention to the future of our society. It is obvious that there is a need to dwell on the complexity of this issue in order to understand the processes which are going to affect our future.
It should be noticed that, through the history of human society, changes in the pattern of social structures have been linked to the cyclic development of human organization. These changes are the result of critical states reached at specific periods of time, reflecting the need for the addition of new complexity to human society. In this respect the industrial revolution introduced commodities to our society which in themselves contribute to the increase of complexity. Nearing the end of the industrial revolution, it has become evident that complexity indicators such as population, economics, material resources, social structure and religious devotion have reached a state which requires our special attention.
A number of scholars have emphasized individual aspects of the present state of our civilization. In particular, attention has been focused on indicators related to material resources and the environment. In our history there have been many attempts to emphasize different aspects of the use of material resources. Some of these are drawn from ethical principles founded in religious faith, according to which we ought to act in compliance with the human role in the divine order. Warnings have been issued as signs that we are reaching certain limits beyond which irreversible changes are expected. The first and second energy crises have shown the vulnerability of the present state of our society. Recently it has been claimed that the concentration of CO2 is reaching a limit which may trigger irreversible changes in the environment, with catastrophic consequences for life on our planet.
Energy resources have always played an important role in the development of human society. Since the industrial revolution, energy has been a driving force for the development of modern civilization. Technological development and the consumption of energy, along with the increase in world population, are interdependent. The Industrial Revolution, especially the momentum created by the change from reciprocating engines to the great horsepower of steam engines in the late nineteenth century, brought about a revolution in power and began a drastic increase in both the consumption and the population of the world.

LIMITS

Energy, water and environment are essential commodities needed for human life on our planet. In the development of our civilisation these three commodities have served as the fundamental resources for economic, social and cultural development. In the early days of human history it was believed that the resources of these commodities were abundant. With the industrial revolution, the use of these resources became the essential driving force for economic and social development. With the increase of population and the corresponding increase in the standard of living, natural resources have become scarce in some specific regions. With the further increase in demand it has become evident that the scarcity of natural resources may assume a global dimension and affect human life on our planet. The Club of Rome was among the first to draw world-scale attention to the potential limits in the availability of natural capital on our planet. The energy crises of 1973 and 1979 focussed the attention of the community at large on investigating the limits of energy resources [1]. This was the moment when our society, through its different institutions, launched programs aimed at investigating the global scarcity of natural resources on our planet. It has become obvious that modern society has to adopt a new philosophy in its development, one which has to be based on limited natural resources.

Energy
Boltzmann [2], one of the fathers of modern physical chemistry, wrote in 1886 that the struggle for life is not a struggle for basic elements or energy, but a struggle for the availability of negative entropy in the energy transfer from the hot Sun to the cold Earth. In fact, life on the Earth requires a continuous flux of negative entropy as the result of the solar energy captured by photosynthesis [3]. The Sun is an enormous machine that produces energy by nuclear fusion and offers planet Earth the possibility of receiving large quantities of negative entropy. Every year the Sun sends 5.6 × 10^24 joules of energy to the Earth, and 2 × 10^11 tons of organic material are produced by photosynthesis. This is equivalent to 3 × 10^21 joules/year. Through the billions of years since the creation of the planet Earth this process has led to the accumulation of an enormous amount of energy in the form of different hydrocarbons. Most fossil fuels belong to the type of material in which molecular binding is due to the Van der Waals potential between every two molecules of the same material. Mankind’s energy resources rely heavily on the chemical energy stored in fossil fuels. Table 1 shows the assessed energy resources [4,5].
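As a quick order-of-magnitude check (added here for illustration, using only the figures quoted above), the fraction of the annual solar input that ends up stored as chemical energy in organic material is
\[ \frac{3\times 10^{21}\ \mathrm{J/yr}}{5.6\times 10^{24}\ \mathrm{J/yr}} \approx 5\times 10^{-4}, \]
i.e. only about 0.05% of the incoming solar energy is fixed by photosynthesis.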


Table 1. Assessed energy resources.

       Total        CPE    North      Latin      West                 Asia-      Middle
       (10^9 toe)   %      America %  America %  Europe %   Africa %  Pacific %  East %
Oil     95          11.5    4.9       13.5        3.2        7.9       2.7       56.3
Gas     85          41.5    8.3        3.7        3.5        6.1       6.2       26.7
Coal   530          46.6   26.6        0.6        9.8        7.5       8.9        0.0
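To read the table, the regional percentages apply to the total resource in the first column; for example (a worked illustration using only the figures above), the Middle East share of the assessed oil resources corresponds to about
\[ 0.563 \times 95 \times 10^{9}\ \mathrm{toe} \approx 53 \times 10^{9}\ \mathrm{toe}. \]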

Energy and matter constitute the earth’s natural capital, which is essential for human activities such as industry, amenities and services. Our natural capital, as inhabitants of the planet earth, may be classified as:
• Solar capital (provides 99% of the energy used on the Earth)
• Earth capital (life-support resources and processes, including human resources)
These, and other, natural resources and processes comprise what has become known as “natural
capital” and it is this natural capital that many suggest is being rapidly degraded at this time. Many
also suggest that contemporary economic theory does not appreciate the significance of natural
capital in techno-economic production.
All natural resources are, in theory, renewable, but over widely different time scales. If the time period for renewal is small, they are said to be renewable. If the renewal takes place over a somewhat longer period of time that still falls within the time frame of our lives, they are said to be potentially renewable. Since the renewal of certain natural resources is only possible through geological processes, which take place on such a long time scale, for all practical purposes we should regard them as non-renewable. Our use of natural material resources is associated with no loss of matter as such. Basically, all earth matter remains with the earth, but in a form in which it cannot be used easily. The quality, or useful part, of a given amount of energy is invariably degraded by use, and we say that entropy has increased.
The abundance of energy resources in the early days of the industrial development of modern society caused the development strategy of our civilization to be based on the assumption that energy resources are unlimited and that there are no other limitations which might affect the development of human welfare. It has been recognized that the pattern of energy resource use has been strongly dependent on technology development. In this respect it is instructive to observe the change in the consumption of different resources through the history of energy consumption. Worldwide use of primary energy sources since 1850 is shown in Figure 1 [6,7].
F is the fraction of the market taken by each primary-energy source at a given time. It can be noticed that two factors affect the energy pattern in history: the first is related to technology development and the second to the availability of the respective energy resources. Obviously, this pattern of energy source use developed under constraints inherent in the total level of energy resource consumption, and it reflects the existing social structure both in numbers and in diversity [8,9,10,11]. The world energy consumption is shown in Figure 2.
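The ordinate of Figure 1, log F/(1−F), corresponds to the classical logistic-substitution description of market penetration, in which the share ratio of each source follows an approximately log-linear trend in time. A minimal sketch of that relation is given below; the coefficients are hypothetical, chosen only to illustrate a declining source, and are not fitted to the data behind Figure 1.

def market_share(year, a=-0.02, b=39.0):
    """Market share F implied by a log-linear trend log10(F / (1 - F)) = a * year + b.

    The coefficients a and b are illustrative placeholders, not values fitted to Figure 1.
    """
    log_ratio = a * year + b      # log10 of the share ratio F / (1 - F)
    ratio = 10.0 ** log_ratio     # the share ratio itself
    return ratio / (1.0 + ratio)  # solve F / (1 - F) = ratio for F

for year in (1850, 1900, 1950, 2000, 2050):
    print(year, round(market_share(year), 3))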
Looking at the present pattern of energy source consumption, it can be noticed that oil is the major contributor, supplying about 40% of energy. Next, coal supplies around 30%, natural gas 20% and nuclear energy 6.5%. This means that fossil fuels currently supply about 90% of present energy use. In the last several decades our civilization has witnessed changes which call our long-term prospects into question. Fossil fuel is a non-recyclable, exhaustible natural resource that will no longer be available one day. In this respect it is of common interest to learn how long fossil fuel resources will be available, as they are the main source of energy for our civilization. This question has attracted the attention of a number of distinguished authorities, trying to forecast the energy future of our planet. The report of the Club of Rome, “Limits to Growth”, published in 1972, was among the first to point to the finite nature of fossil fuels.


Figure 1. Market penetration of primary energy sources: evolution of market share, log F/(1−F), for wood, coal, oil, natural gas, nuclear energy and solar or fusion, 1850–2100.

Figure 2. World energy consumption, 1850–1992 (total consumption in 1850: 500 Mtoe; in 1992: 9350 Mtoe, of which oil 33.7%, coal 23.4%, natural gas 19.9%, biomass 11.7%, nuclear 6.3%, hydro-geo 5.8%).

After the first and second energy crises, the community at large became aware of the possible physical exhaustion of fossil fuels. The amount of fuel available depends on the cost involved. For oil, it was estimated that the proved amount of reserves has, over the past twenty years, leveled off at 2.2 trillion barrels producible under $20 per barrel. Over the last 150 years we have already used up about one-third of that amount, or about 700 billion barrels, which leaves a remainder of roughly 1.5 trillion barrels. Compared with present consumption, this means that oil is available only for the next 40 years or so. Figure 3 shows the ratio of discovered resources to yearly consumption for the fossil fuels.
From this figure it can be noticed that coal is available for the next 250 years and gas for the next 50 years. Also, it is evident that, even as fuel consumption increases, new technologies aimed at the discovery of new resources are becoming available, leading to a slow increase in the time period for exhausting the available energy sources.
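The “years of availability” quoted here are essentially static reserves-to-production ratios, i.e. remaining resources divided by the current rate of consumption. The sketch below shows the calculation; the consumption rate used is an assumed round number, not a figure taken from the chapter, and the resulting lifetime scales inversely with that assumption.

def static_lifetime(remaining_reserves, annual_consumption):
    """Static reserves-to-production ratio: years left at the current consumption rate."""
    return remaining_reserves / annual_consumption

# Remaining oil of the order quoted in the text (about 1.5 trillion barrels) and an
# assumed consumption of roughly 27 billion barrels per year (about 75 million barrels/day).
remaining_oil_barrels = 1.5e12
assumed_annual_consumption = 27e9
print(f"Static oil lifetime: about {static_lifetime(remaining_oil_barrels, assumed_annual_consumption):.0f} years")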


Figure 3. Residual life forecast of energy resources: ratio of discovered resources to yearly consumption (years) for coal, natural gas and oil, 1945–1990.

Figure 4. Correlation between per-capita income (US$/yr) and per-capita energy consumption (MBtu/yr) of selected industrialized and developing countries, including Canada, the USA, Germany, the UK, France, Japan, Italy, Korea, North Korea and Hong Kong (Source: Herman Daly, Steady-State Economics, Washington, Island Press, 1991).

It is known that energy consumption depends on two main parameters: the amount of energy consumed per capita and the growth of population. It has been shown that there is a strong correlation between Gross Domestic Product per capita and energy consumption per capita. Figure 4 shows economic output and energy consumption for a number of countries in 1991.
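As a minimal illustration of the correlation plotted in Figure 4, the sketch below computes a Pearson correlation coefficient for a handful of hypothetical (income, energy) pairs; the numbers are invented placeholders, not the data behind the figure.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical (per-capita income in US$/yr, per-capita energy use in MBtu/yr) pairs.
income = [1000, 5000, 12000, 20000, 28000]
energy = [15, 60, 130, 220, 320]
print(f"Pearson correlation: {pearson(income, energy):.2f}")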


Compared with the available resources, it is easily foreseen that the depletion of energy resources is an imminent process which our civilization will face in the near future. Nevertheless, whatever the accuracy of our prediction methods and models, it is obvious that any inaccuracy in our calculations may affect only the time scale, not the essential understanding that the process of energy resource depletion has begun and requires human action before adverse effects become irreversible [12].
Natural resource scarcity and economic growth are in fundamental opposition to each other [14]. The study of contemporary and historical beliefs showed that: (1) natural resources are economically scarce, and become increasingly so with the passage of time; (2) the scarcity of resources opposes economic growth. There are two basic versions of this doctrine. The first, the Malthusian, rests on the assumption that there are absolute limits; once these limits are reached, continuing population growth requires an increasing intensity of cultivation and, consequently, brings about diminishing returns per capita. The second, or Ricardian, version viewed the diminishing returns as a current phenomenon reflecting the decline in the quality of resources brought within the margin of profitable cultivation. Besides these two models, there is also the so-called “Utopian case”, in which there is no resource scarcity. There have been several attempts to apply these models to energy resources in order to define the correlation between specific energy resources and economic growth. The substantial questions related to scarcity, its measurement and growth are: (1) whether the scarcity of energy resources has been and/or will continue to be mitigated, and (2) whether the scarcity has “de facto” impacted economic growth. An analysis based on relative energy prices and unit costs has been applied to natural gas, bituminous coal, anthracite coal and crude oil. The analysis for the USA can serve as an indication of the future trend on a world scale. Using two measures of scarcity – unit cost and relative resource price – the changes in the trend of resource scarcity for natural gas, bituminous coal, anthracite coal and crude oil over several decades are shown in Figure 5 [14].
It can be noticed that each one of these energy resources became significantly scarcer during the decade of the 1970s. The situation reversed itself during the 1980s. The change that took place has implications for future economic growth to the extent that resource scarcity and economic growth are interrelated.

Figure 5. Scarcity factor for fossil fuels (natural gas, bituminous coal, anthracite coal and crude oil), 1890–1990.


Even though it has not been proved that short-term fluctuations in energy resource scarcity have a substantial impact on long-term economic growth, the need has become obvious for active involvement in the allocation of scarce, non-renewable energy resources, in view of their potential effect on economic growth.

Environment
The use of primary energy resources is a major source of emissions [15,16,17,18]. Since fossil fuels have demonstrated their economic superiority, more than 88% of primary energy in the world in recent years has been generated from fossil fuels. However, the exhaust gases from combusted fuels have accumulated to an extent where serious damage is being done to the global environment. The accumulated amount of CO2 in the atmosphere is estimated at about 2.75 × 10^12 t. The global warming trend from 1900 to 1997 is shown in Figure 6 [19].
The future trend of carbon dioxide emissions to the atmosphere can be seen in Figure 8.
It is rather obvious that a further increase of CO2 will lead to disastrous effects on the environment. Also, the emissions of SO2, NOx and suspended particulate matter will substantially exacerbate the effects on the environment.
On a world scale, coal will continue to be a major source of fuel for electric power generation.

Figure 6. Global warming trend 1900–1990 (temperature deviation from the 1950–1990 mean).

Figure 7. Cumulative CO2 production, 1950–1996 (in 1000 million tonnes CO2 per year).


Figure 8. Forecast for CO2 emission, 1990–2050, by region (developing countries, China, India, USA, Japan, NIS, Eastern Europe, EU).

Many developing countries, such as China and India, will continue to use inexpensive, abundant, indigenous coal to meet growing domestic needs. This trend greatly increases the use of coal worldwide as the economies of other developing countries continue to expand. In this respect, the major long-term environmental concern about coal use has shifted from acid rain to greenhouse gas emissions – primarily carbon dioxide from combustion. It is expected that coal will continue to dominate China’s energy picture in the future. The share of coal in primary energy consumption is forecast to be no less than 70% during the period 1995–2010. In 1993 China produced a total of 1.114 billion tons of coal; for 2000 the planned production was 1.5 billion tons, and by 2010 it is expected to reach 2.0 billion tons. Since China is the third largest energy producer in the world, after the USA and Russia, its contribution to the global accumulation of CO2 will be substantial if the respective mitigation strategies are not adopted. The example of China is instructive in assessing the future development of developing countries and their need for accelerated economic development.
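For orientation, the average annual growth rates implied by those production milestones can be checked with a few lines of arithmetic (a sketch using only the figures quoted above):

def cagr(start, end, years):
    """Compound average annual growth rate between two production levels."""
    return (end / start) ** (1.0 / years) - 1.0

# Coal production figures quoted in the text, in billion tons.
prod_1993, prod_2000, prod_2010 = 1.114, 1.5, 2.0
print(f"1993-2000: about {cagr(prod_1993, prod_2000, 7):.1%} per year")
print(f"2000-2010: about {cagr(prod_2000, prod_2010, 10):.1%} per year")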

Water
In this part, the sustainability of desalination systems, an essential component of human-made or built capital, is discussed with respect to their important contribution to life-support systems. Figure 9 shows the distribution of the global stock of water.
Ninety-seven point five percent of the total global stock of water is saline and only 2.5% is fresh water. Approximately 70% of this global freshwater stock is locked up in the polar icecaps, and a major part of the remaining 30% lies in remote underground aquifers. In effect, only a minuscule fraction of the available freshwater (less than 1% of total freshwater, or 0.007% of the total global water stock) in rivers, lakes and reservoirs is readily accessible for direct human use. Furthermore, the spatial and temporal distribution of the freshwater stocks and flows is hugely uneven. Hydrologists estimate the average annual flow of all the world’s rivers to be about 41,000 km³/yr; less than a third of this potential resource can be harnessed for human needs. This is further reduced by pollution such as discharges from industrial processes, drainage from mines and leaching of the residues of fertilizers and pesticides used in agriculture. The World Health Organization (WHO) has estimated that 1000 cubic meters per person per year is the benchmark level below which chronic water scarcity is considered to impede development and harm human health.
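The percentages quoted above are mutually consistent: taking 0.007% of the total stock as readily accessible and 2.5% of the total stock as fresh, the accessible share of all freshwater is roughly
\[ \frac{0.007\%}{2.5\%} \approx 0.28\% < 1\%, \]
i.e. about seven parts in 100,000 of the planet’s total water.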
Several countries are technically in a situation of water scarcity, i.e. with less than 1000 cubic meters of renewable water per year per head of population. Water shortage is predicted to increase significantly, mainly as a result of the increase in population.


Figure 9. Global stock of water: fresh water makes up 2.5% of the Earth’s total stock, of which about 70% is locked in the polar ice caps and 30% is on and under the earth’s surface; less than 1% of the world’s fresh water (about 0.007% of the total water stock) is accessible for direct human use in lakes, rivers, reservoirs and accessible shallow underground sources (freshwater lakes 0.009%, saline lakes and inland seas 0.008%, soil water 0.005%, atmosphere 0.001%, stream channels 0.0001%).

The Dublin Statement of January 1992 on Water and Sustainable Development and the subsequent
Rio Earth Summit Agenda 21, Chapter 18, Protection of the quality and supply of freshwater
resources, are closest to the present context since desalination augments fresh water resources.
Chapter 30 of Agenda 21 is also important in the context of desalination since it draws the attention
of leaders of business and industry including transnational corporations, and their representative
organizations in general, to their critical role in helping the world achieve the goals for sustainable
development.
The April 1998 report of the World Business Council for Sustainable Development under the United Nations Environment Programme (UNEP) provides clear guidelines on the role of companies in the movement towards sustainable fresh water resources management. Prominent among these are:
• Industry should take reasonable preventive action now.
• Companies can and should improve the efficiency with which they use, recycle and treat water.
• Companies can and should become more active in water basin and water catchment planning
and management.
• Water pricing more in line with the real costs encourages less wasteful consumption, recycling
and reuse, and wider adoption of “best practice.”
Desalination systems are of paramount importance in the process of augmenting fresh water
resources and happen to be the main life support systems in many arid regions of the world. The
world has seen a 22-fold increase in desalination capacity since 1972 and the figure continues
to rise. In 1997 the total desalination capacity was 22,730,000 cubic meters of fresh water per
day. That represents a doubling in global capacity over ten years and a 22-fold increase over 25
years. Yet, desalinated seawater is only about one thousandth of the fresh water used worldwide.
Desalinated water costs several times more than the water supplied by conventional means. The
countries in the Arabian Gulf Region heavily subsidize the costs to render it affordable. In some of
these countries, water is subsidized so heavily that users make little effort to curb their use. Water
consumption would be greatly reduced if the price were closer to the true cost of production.
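The capacity growth quoted above implies the following average annual growth rates (a small arithmetic sketch using only the numbers in the text):

def annual_growth(total_factor, years):
    """Average compound annual growth rate implied by a total growth factor over a period."""
    return total_factor ** (1.0 / years) - 1.0

print(f"Doubling over 10 years: about {annual_growth(2, 10):.1%} per year")
print(f"22-fold over 25 years:  about {annual_growth(22, 25):.1%} per year")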


SUSTAINABILITY DEFINITIONS

In recent years “sustainability” has become a popular buzzword in the discussion of resource use and environmental policy. Before any further discussion of the subject, it is necessary to define and properly assess the term we are going to use. So, what is sustainability? Among the definitions most often adopted are the following.
(a) for the World Commission on Environment and Development (Brundtland Commission) [22]
“Development that meets the needs of the present without compromising the ability of future
generations to meet their own needs”
(b) for the Agenda 21, Chapter 35 [23]
“Development requires taking long-term perspectives, integrating local and regional effects
of global change into the development process, and using the best scientific and traditional
knowledge available”
(c) for the Council of Academies of Engineering and Technological Sciences [24]
“It means the balancing of economic, social, environmental and technological considerations,
as well as the incorporation of a set of ethical values”
(d) for the Earth Charter [25]
“The protection of the environment is essential for human well-being and the enjoyment of
fundamental rights, and as such requires the exercise of corresponding fundamental duties”
(e) Thomas Jefferson, Sept. 6, 1789 [26]
“Then I say the earth belongs to each generation during its course, fully and in its own right;
no generation can contract debts greater than may be paid during the course of its existence”
Each of the five definitions emphasizes a specific aspect of sustainability. Definitions (a) and (e) imply that each generation must bequeath enough natural capital to permit future generations to satisfy their needs. Even if there is some ambiguity in this definition, the intention is that we should leave our descendants the ability to survive and meet their own needs. Also, there is no specification of the form in which resources are to be left, nor of how much is needed for future generations, because it is difficult to anticipate future scenarios.
Definitions (b) and (c) are more political pleas for actions to be taken at the global, regional and local levels, in order to stimulate the United Nations, governments and local authorities to plan development programs in accordance with scientific and technological knowledge. In particular, the ethical aspect of the future development actions to be taken to achieve sustainable development should be noticed in definition (c).
Definition (d) is based on religious beliefs emphasizing responsibility and duties toward nature and the Earth. In this respect it is of interest to note that the Old Testament, in which the story of creation is told, is a fundamental basis for the Hebrew and Christian doctrine of the environment. In the world of Islam, nature is the basis for human consciousness. According to the Koran, while humankind is God’s vice-regent on Earth, God is the Creator and Owner of nature. Human beings are His trusted administrators; they ought to follow God’s instructions, that is, acquiesce to the authority of the Prophet and of the Koran regarding nature and natural resources.
With respect to the normative dimension, sustainability implies the acknowledgement of a hierarchy of dependence among economy, society and environment: the market economy depends on society and the environment. While societies are possible without a market economy, neither can exist without the natural environment. Thus, economic processes are subordinated to social and ecological constraints.
In this context, sustainability refers to claims and commitments to:
• Compatibility between social, economic and environmental goals at all levels;
• Social equity and social justice as an overriding goal;
• Recognition of cultural diversity and multiculturalism;
• Support and maintenance of biodiversity.
Strategically, sustainability implies a system of governance at all levels – local to global – that
appropriately implements policies that move toward sustainability, especially with respect to social


equity and social justice, the compatibility between social, economic and environmental goals, and
the participation of local actors.
Sustainability requires the identification of different goals and ways and means of their imple-
mentation, the critical re-evaluation and assessment of institutions and institutional arrangements,
as well as the identification of possible actors and conflicts among them.

SUSTAINABILITY SCIENCE

Meeting fundamental human needs while preserving the life-support systems of planet Earth is the
essence of sustainable development, an idea that emerged in the early 1980s from scientific per-
spectives on the relation between nature and society [27]. During the late ’80s and ’90s, however,
much of the science and technology community became increasingly estranged from the prepon-
derantly societal and political processes that were shaping the sustainable development agenda.
This is now changing as efforts to promote a sustainability transition emerge from international
scientific programs, the world’s scientific academies, and independent networks of scientists.

Core questions
A new field of sustainability science is emerging that seeks to understand the fundamental character of interactions between nature and society [28]. Such an understanding must encompass the interaction of global processes with the ecological and social characteristics of particular places and sectors. The regional character of much of what sustainability science is trying to explain means that relevant research will have to integrate the effects of key processes across the full range of scales from local to global. It will also require fundamental advances in our ability to address such issues as the behaviour of complex self-organizing systems, as well as the responses, some irreversible, of the nature-society system to multiple interacting stresses. Combining different ways of knowing and learning will permit different social actors to work in concert, even with much uncertainty and limited information.
With a view toward promoting research necessary to achieve such advances, we propose an initial
set of core questions for sustainability science. These are meant to focus research attention on both
the fundamental character of interactions between nature and society and on society’s capacity to
guide those interactions along more sustainable trajectories.

Research strategies
The sustainability science that is necessary to address these questions differs to a considerable
degree in structure, methods, and content from science, as we know it. In particular, sustainability
science will need to do the following:
1. span the range of spatial scales between such diverse phenomena as economic globalisation and
local farming practices,
2. account for both the temporal inertia and urgency of processes like ozone depletion,
3. deal with functional complexity such as is evident in recent analyses of environmental
degradation resulting from multiple stresses,
4. recognize the wide range of outlooks regarding what makes knowledge usable within both
science and society,
5. define the criteria and indicators for the sustainability assessment of energy, water and envi-
ronment systems that are to provide guidance for the efforts directed to a transition toward
sustainability,
6. recognise the limits for energy, water and environment that mark irreversible changes on our
planet,
7. make sustainability operational in everyday life, with a paradigm manifesting interdisciplinarity
and multidisciplinarity.


Pertinent actions are not ordered linearly in the familiar sequence of scientific inquiry, where action lies outside the research domain [28]. In areas like climate change, scientific exploration and practical application must occur simultaneously. They tend to influence and become entangled with each other [29].
In each phase of sustainability science research, novel schemes and techniques have to be used, extended, or invented. These include observational methods that blend remote sensing with fieldwork in conceptually rigorous ways, integrated place-based models that are based on semi-quantitative representations of entire classes of dynamic behaviour, and inverse approaches that start from outcomes to be avoided and work backwards to identify relatively safe corridors for a sustainability transition. New methodological approaches for decisions under a wide range of uncertainties in natural and socio-economic systems are becoming available and need to be more widely exploited, as does the systematic use of networks for the utilization of expertise and the promotion of social learning. Finally, in a world put at risk by the unintended consequences of scientific progress, participatory procedures involving scientists, stakeholders, advocates, active citizens, and users of knowledge are critically needed.

Next steps
In the coming years, sustainability science needs to move forward along three pathways. First, there should be wide discussion within the scientific community – North and South – regarding key questions, appropriate methodologies, and institutional needs. Second, science must be connected to the political agenda for sustainable development, using in particular the forthcoming “Rio + 10” conference, the World Summit on Sustainable Development, to be held in South Africa in 2002. Third (and most important), research itself must be focused on the character of nature-society interactions, on our ability to guide those interactions along sustainable trajectories, and on ways of promoting the social learning that will be necessary to navigate the transition to sustainability. It is along this pathway – in the field, in the simulation laboratory, in users’ meetings, and in the quiet study – that sustainability science has already begun to flourish.

SUSTAINABILITY CONCEPT DEFINITION

Sustainable development encompasses economic, social, and ecological perspectives of conservation and change. In accordance with the WCED, it is generally defined as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs”. This definition is based on the ethical imperative of equity within and between generations. Moreover, apart from meeting the basic needs of all, sustainable development implies sustaining the natural life-support systems on Earth, and extending to all the opportunity to satisfy their aspirations for a better life. Hence, sustainable development is more precisely defined as a process of change in which the exploitation of resources, the direction of investments, the orientation of technological development, and institutional change are all in harmony and enhance both current and future potential to meet human needs and aspirations [31,32,33,34].
This definition involves an important transformation and extension of the ecologically based con-
cept of physical sustainability to the social and economic context of development. Thus, terms of
sustainability cannot be defined exclusively from an environmental point of view or on the basis of
attitudes. Rather, the challenge is to define operational and consistent terms of sustainability from an
integrated social, ecological, and economic system perspective. This gives rise to two fundamental
issues that need to be clearly distinguished before integrating normative and positive issues in an
overall framework.
The first issue is concerned with the objectives of sustainable development; that is, “what
should be sustained” and “what kind of development do we prefer”. These are normative questions
that involve value judgments about society’s objectives with respect to social, economic,
and ecological system goals. These value judgments are usefully expressed in terms of a social
welfare function, which allows an evaluation of trade-offs among the different system goals.
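In the notation commonly used for such formulations (an illustrative sketch; the symbols are not defined in the chapter), these value judgments can be summarized by a social welfare function over the economic, social and ecological system goals,
\[ W = W\left(G_{\mathrm{econ}},\, G_{\mathrm{soc}},\, G_{\mathrm{ecol}}\right), \]
in which the trade-off between any two goals is given by the ratio of the corresponding marginal welfare contributions, e.g. \( (\partial W/\partial G_{\mathrm{econ}})/(\partial W/\partial G_{\mathrm{ecol}}) \), along a contour of constant W.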


The second issue deals with the positive aspect of sustainable development; that is, the feasibility problem of “what can be sustained” and “what kind of system we can get”. It requires one to understand how the different systems interact and evolve, and how they could be managed. Formally, this can be represented in a dynamic model by a set of differential equations and additional constraints. The entire set of feasible combinations of social, economic and ecological states describes the inter-temporal transformation space of the economy in the broadest sense [35,36,37].
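A generic way to write such a dynamic model (an illustrative sketch; the notation is not taken from the chapter) is to collect the economic, social and ecological stocks in a state vector x(t) driven by management decisions u(t):
\[ \dot{x}(t) = f\big(x(t),\,u(t)\big), \qquad g\big(x(t),\,u(t)\big) \le 0, \]
where f describes how the coupled systems interact and evolve and the constraints g delimit the admissible states; the set of all state combinations reachable under these equations corresponds to the inter-temporal transformation space referred to above.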

STRONG VS. WEAK SUSTAINABILITY

The meaning of sustainability is the subject of intense debate among environmental and resource
economists [38]. Perhaps no other issue separates the traditional economic view of the natural
world from the views of most natural scientists. The debate currently focuses on the substitutability
between the products of the market economy and the environment – manufactured capital and
natural capital – a debate captured in the terms weak vs. strong sustainability. It is increasingly clear
that the criteria for weak sustainability, based on the requirements for maintaining economic output,
are inconsistent with the conditions necessary to sustain the ecosystem services of the natural world.

Weak sustainability
The concept of weak sustainability has come to dominate discussions of natural resource and environmental policy. According to Brekke, “A development is said to be weakly sustainable if the development is non-diminishing from generation to generation”. This is by now the dominant interpretation of “sustainability” – dominant, that is to say, among economists, not among ecologists and most other natural scientists.
In the pages below we follow E.O. Wilson in arguing for “consilience” between economics and the natural sciences. That is, definitions and procedures from one discipline should conform to solidly verified knowledge in other disciplines. If our goal is to preserve necessary features of the natural world, policies designed to ensure sustainable economies should be consistent with the requirements for the long-term survival of the human species, including maintaining the resilience and stability of ecosystems.
An instructive example of the extreme implications of weak sustainability in practice is the small Pacific island nation of Nauru. In 1900 one of the world’s richest phosphate deposits was discovered on Nauru, and today, as a result of over ninety years of phosphate mining, about eighty percent of the island is totally devastated. At the same time, the people of Nauru have had, over the past several decades, a high per-capita income. Income from phosphate mining enabled the Nauruans to establish a trust fund estimated to be as large as $1 billion. Interest from this trust fund should have ensured a substantial and steady income and thus the economic sustainability of the island. Unfortunately, the Asian financial crisis, among other factors, has wiped out most of the trust fund. The people of Nauru now face a bleak future. Their island is biologically impoverished and the money the Nauruans traded for their island home has vanished. The “development” of Nauru followed the logic of weak sustainability, and it shows clearly that weak sustainability may be consistent with a situation of near complete environmental devastation. More importantly, weak sustainability can cause extreme sensitivity to either natural disturbances (e.g., diseases in the case of agriculture focusing on only a few crops) or economic disturbances (international financial markets, as in the case of Nauru). Such extreme sensitivity of regional systems to external factors is a telling argument against weak sustainability.

Strong sustainability
The alternative to weak sustainability is strong sustainability. In Brekke’s words: “The second interpretation, known as ‘strong sustainability’, sees sustainability as non-diminishing life opportunities…” This should be achieved by conserving the stock of human capital, technological capability, natural resources and environmental quality.


Under the strong sustainability criteria, minimum amounts of the different types of capital (economic, ecological, social) should be independently maintained, in physical/biological terms. The major motivation for this insistence derives from the recognition that natural resources are essential inputs to economic production, consumption and welfare that cannot readily be substituted by other forms of capital; a further motivation is quasi-moral, namely the acknowledgment of environmental integrity and the “rights of nature”.
Actually, both the “weak” and the “strong” criteria, as formulated above, involve an implicit assumption that may be challenged. They both imply a centralized decision-making process and a decision maker who decides on behalf of “society” among alternative programmes and plans. But the real world is not at all like that. In reality, virtually all economic decisions are decentralised among many narrow interests, namely individuals, family groups, communities of people with common interests, or firms. Even with the best intentions as regards future generations and planetary welfare, most decision makers will optimise within a much narrower context. On the other hand, if firms were to sell “services” rather than “products”, and all material goods were regarded as productive “capital” rather than “throughput”, the incentives facing decentralised managers would be much more consistent with planetary sustainability. Decentralised decision makers at the family or firm level would not, and need not, choose between weak and strong.

MEASURING SUSTAINABILITY

Measuring sustainability is a major issue as well as a driving force of the discussion on sustainable development. Developing tools that reliably measure sustainability is a prerequisite for identifying non-sustainable processes, informing decision-makers of the quality of products and monitoring impacts on the social environment. The multiplicity of indicators and measuring tools being developed in this fast-growing field shows the importance of the conceptual and methodological work in this area. The development and selection of indicators require parameters related to the reliability, appropriateness, practicality and limitations of measurement [39,40,41].
In order to cope with the complexity of sustainability-related issues for different systems, the indicators have to reflect the wholeness of the system as well as the interaction of its subsystems. Consequently, indicators have to measure the intensity of the interactions among the elements of the system and between the system and its environment. In this view, there is a need for indicator sets related to the interaction processes that allow an assessment of the complex relationship of every system with its environment.

Characteristics of effective indicators


An indicator is something that points to an issue or condition. Its purpose is to show you how well
a system is working. If there is a problem, an indicator can help you determine what direction to
take to address the issue. Indicators are as varied as the types of systems they monitor [42, 43, 44].
However, there are certain characteristics that effective indicators have in common:
• Effective indicators are relevant; they show you something about the system that you need to know.
• Effective indicators are easy to understand, even by people who are not experts.
• Effective indicators are reliable; you can trust the information that the indicator is providing.
• Lastly, effective indicators are based on accessible data; the information is available or can be gathered while there is still time to act.
Indicators can be useful as proxies or substitutes for measuring conditions that are so complex
that there is no direct measurement. For instance, it is hard to measure the “quality of life in my
town” because there are many different things that make up quality of life and people may have
different opinions on which conditions count most. A very simple substitute indicator is “Number
of people moving into the town compared to the number moving out.”
Examples of familiar measurements used as indicators in everyday life include:
• Wave height and wind speed are indicators of storm severity
• Barometric pressure and wind direction are indicators of upcoming weather changes


• Won-lost record is an indicator of player skills


• Credit-card debt is an indicator of money-management skills
• Pulse and blood pressure are indicators of fitness
Note that these are all numeric measurements. Indicators are quantifiable. An indicator is not the
same thing as an indication, which is generally not quantifiable, but just a vague clue. In addition
to being quantifiable, effective indicators have the four basic characteristics noted below.

Relevant
An indicator must be relevant, that is, it must fit the purpose for measuring. As indicators, the gas
gauge and the report card both measure facts that are relevant. If, instead of measuring the amount
of gas in the tank, the gas gauge showed the octane rating of the gasoline, it would not help you
decide when to refill the tank. Likewise, a report card that measured the number of pencils used
by the student would be a poor indicator of academic performance.

Understandable
An indicator must be understandable. You need to know what it is telling you. There are many
different types of gas gauges. Some gauges have a lever that moves between “full” and “empty”
marks. Other gauges use lights to achieve the same effect. Some gauges show the number of gallons
of gasoline left in the tank. Although different, each gauge is understandable to the driver. Similarly,
with the report card, different schools have different ways of reporting academic progress. Some
schools have letter grades A through F. Other schools use numbers from 100 to 0. Still other schools
use written comments. Like the gas gauge, these different measures all express the student’s progress
or lack of progress in a way that is understandable to the person reading the report card.

Reliable
An indicator must be reliable. You must trust what the indicator shows. A good gas gauge and an
accurate report card give information that can be relied on. A gas gauge that shows the tank is
empty when in fact it is half full would make you stop for gasoline before it is needed. A gas gauge
that shows the tank is half full when in fact it is empty would cause you to run out of gas in an
inconvenient place. Similarly, if a student’s grade were reported wrong, an honours student could
be sent for remedial work and a student who needs help would not get it. An indicator is only useful
if you know you can believe what it is showing you.

Accessible data
Indicators must provide timely information. They must give you information while there is time to
act. For example, imagine a gas gauge that only gave you the amount of gasoline in the tank when
the engine was started. After you have been driving for several hours, that reading is no longer
useful. You need to know how much gasoline is in the tank at each moment. Similarly, a report card
distributed a week before graduation arrives too late to give a student remedial help. In order for
an indicator to be useful in preventing or solving a problem, it must give you the information while
there is still time to correct the problem.
However, there is a real danger that traditional data sources and traditional indicators will focus
attention on the traditional solutions that created an unsustainable community in the first place.
It may be tempting to keep measuring “number of jobs,” but measuring “number of jobs that pay
a liveable wage and include benefits” will lead to better solutions. Discussions that include the
phrase “but you can’t get that data” are not going to lead to indicators of sustainability. In fact, if
you define a list of indicators and find that the data is readily available for every one of them, you
probably have not thought hard enough about sustainability. Try to define the best indicators and
only settle for less as an interim step while developing data sources for better indicators [45].


[Figure: schematic of indicator grouping, linking Society (population, lifestyle, culture, social organization), Environment (atmosphere, hydrosphere, land, biota, minerals) and Economy (agriculture, households, industry, transport, services) through flows of goods and services, natural resources and impacts.]
Figure 10. Indicators grouping.

[Figure: triangle with Eco-efficiency, Socio-efficiency and Economic-efficiency at the corners, converging on the General Index of Sustainability.]
Figure 11. General index of sustainability.

The example of an energy system can be used to demonstrate the complexity involved in the definition of sustainability indicators. As regards energy systems, attention can be focused on two approaches: one based on treating the energy system as a complex system, and the other based on the conceptual options of the system. The complex system approach sees the energy system as an entity changing in time as the result of internal and external interactions.
Sustainability can be presented in the form of a triangular pyramid, where each corner of the base represents one of the efficiencies to be included in the assessment of any system, and the fourth corner represents the Sustainability Index value. Figure 11 shows the three efficiency indicators of sustainability as the three corners of a triangle, together with the Sustainability Index.


The Sustainability Index is obtained when a balance is found between the issues of all three efficiencies, reflecting the imposed constraints. In order to obtain the Sustainability Index for the option under consideration, the weighting coefficients for the efficiencies have to be determined. Decision-making theory is used to calculate the weighting coefficients. In particular, non-numerical constraints are generated to represent the relations between the criteria.
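As a purely illustrative sketch of this aggregation step (not the authors' actual procedure; the efficiency values, the criterion ranking and all function names below are hypothetical, loosely in the spirit of the randomised-weight treatment of non-numerical information used in [45]), the Sustainability Index can be formed as a weighted sum of normalised efficiency indicators, with the weights drawn at random subject only to an ordinal ranking of the criteria:

import random

def sustainability_index(efficiencies, ranking, samples=10000):
    """Weighted sum of normalised efficiency indicators (values in [0, 1]).

    The weights are generated at random and then sorted so that the
    highest-ranked criterion in `ranking` always receives the largest weight,
    i.e. only ordinal (non-numerical) information about the criteria is used.
    The average index over all sampled weight vectors is returned."""
    total = 0.0
    for _ in range(samples):
        w = sorted((random.random() for _ in ranking), reverse=True)
        s = sum(w)
        weights = {name: wi / s for name, wi in zip(ranking, w)}  # weights sum to 1
        total += sum(weights[name] * efficiencies[name] for name in ranking)
    return total / samples

# Hypothetical normalised efficiencies for one energy system option
option = {"economic": 0.70, "ecological": 0.55, "social": 0.60}
print(sustainability_index(option, ranking=["economic", "ecological", "social"]))

Different options can then be compared by their index values under the same ranking of the criteria.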
The interactions between the three aspects of sustainability emphasise that sustainable development is not a static concept which can be easily translated and quantified. It is a dynamic concept that is the result of a process of social learning involving many actors. For instance, in order to know which system is more sustainable it is necessary to formulate and share visions about the value of non-economic elements like biodiversity or cultural heritage. And because visions and the underlying ecological and social values change over time, the assessment of all three aspects of sustainability, including the process of social learning and global environmental change, has to be treated as a continuous process.
For the assessment of systems, attention will be focused on the following three efficiency definitions:
Economic efficiency
The traditional method for the assessment of systems is based on the econometric justification of the use of capital needed for unit production. This method has been the essential basis for the decision-making procedure in the selection of systems. It has proved to be a driving force for the development of economic welfare in the industrial society. One of the basic assumptions in this procedure was the assumption of abundant resources. With the developing notion that scarcity imposes limits on the use of resources, it has been realised that besides resource limits there are also other limits which play an important role in the decision-making process.
Indicators for the economic efficiency assessment are: investment cost (including material cost), fuel cost, thermal efficiency, and operation and maintenance cost. These indicators are the result of the optimisation procedure adopted, with the respective optimisation function and design parameters of the system.
Ecological efficiency
Following recognition of the effect of combustion products on the environment, new indicators have been introduced into the decision-making procedure for the selection of systems. In this respect the Kyoto Protocol has imposed local, regional and global limits on CO2 emissions, which are to be followed in the design, operation and selection of new energy systems. This has led to the development and introduction of indicators which are of importance for the ecological aspects of the respective energy system.
Indicators for the ecological efficiency assessment are the concentrations of the product species which are supposed to have adverse effects on the local, regional and global environment. Ecological efficiency can be evaluated by monitoring and assessing those indicators which contribute to the general quality of the environment.
Social efficiency
The social aspect of any human endeavour is of paramount importance for the successful selection of possible options. Lately it has become evident that the social aspect of any engineering system is an important part of the total quality of the system. In this respect the criteria designed to assess the social aspect of the system are of the same importance as the economic and environmental criteria. For the formulation of social criteria it is necessary to create a system of indicators of sustainable development which provide a reference for the respective type of system and may be used in the numerical evaluation of the system. In order to meet this requirement, it is necessary to develop specific techniques for the calculation of indicators which are aimed at reflecting the social merits of the energy system.
Indicators for the social efficiency assessment are: job opportunity, diversification of qualification, community benefits and local safety consequences. The job opportunity indicator is designed to take into consideration the number of jobs created by the respective system.


REFERENCES

1. Meadows D.H., Meadows D.L., Randers J., Behrens W.W. III, The Limits to Growth, Universe Books, New York, 1972
2. Boltzmann L., Vorlesungen über Gastheorie, Vol.1, Leipzig, 1896
3. Ohta T., Energy Technology, Elsevier Science, 1994
4. Marchetti C., Check on the Earth – Capacity for Man, Energy, Vol.4, pp.1107–1117, 1979
5. Master C.D., World resources of Crude Oil, Natural Gas, Bitumen’s and Shale Oil. Topic 25, World
Petroleum Congress Publ, Houston, 1987
6. Marchetti C., The “Historical Instant” of Fossil Fuel, Symptom of Sick World, Int. Journal Hydrogen
Energy, 16, pp.563–575, 1991
7. Marchetti C., Society as Learning System, Techno. Forecast. Soc. Changes, 18, pp.267–282, 1980
8. Arnold M.St.J., Kersall G.J., Nelson D.M., Clean Efficient Electric Generation for the Next Century: British Coal Topping Cycle, Combustion Technology for Clean Environment, Ed: M.G. Carvalho, W.A. Fiveland, F.C. Lockwood, Ch. Papadopoulos
9. Mazzuracchio P., Raggi A., Barbiri G., New Method for Assessment the Global Quality of Energy
System, Applied Energy 53, pp.315–324, 1996
10. Noel D., A Recommendation of Effect of Energy Scarcity on Economic Growth, Energy, Vol.2, p.1–12,
1995
11. Farinelli U., Alternative Energy Sources for the Third World: Perspective, Barriers, Opportunity,
Pontifical Academy of Science, Plenary Session, Oct.25–29, 1994
12. Keatiny M., Agenda for Change, Center for Our Common Future, 1993
13. WEC Message for 1997, Briefing Notes
14. Barnett H.J., Morse Ch., Scarcity and Growth, Resources for Future, Inc., 1963
15. Mackey R.M., Probert S.D., National Policy for Achieving Thrift, Environmental Protection, Improved
Quality of Life and Sustainability, Applied Energy 51, pp.243–367, 1995
16. Price T., Probert S.D., An Energy and Environmental Strategy for the Rhymney Value, South Walls,
Applied Energy 51, pp.139–195, 1995
17. Mackey R.M., Probert S.D., NAFTA Countries Energy and Environmental Interdependence, Applied
Energy 52, pp.1–33, 1995
18. Mackey R.M., Probert S.D., Energy and Environmental Policies of the Developed and Developing
Countries within the Evolving Oceania and South-East Asian Trading Block, Applied Energy 51,
pp. 369–406, 1995
19. Hought R.A., Woodwell G.M., Global Climatic Change, Scientific American, April issue, 1989
20. Al Gobaisi D, Sustainability of Desalination Systems, EURO Course on Sustainability Assessment of
Desalination Plants, Vilamore, 2000
21. Darwish Al Gobaisi, “Sustainable Use of Our Planetary Natural Capital for Life Support on the Earth”,
IEEE Systems, Man and Cybernetics Conference, Tunisia, 1998.
22. Report of The United Nation Conference on Environment and Development, Vol.1, Chapter 7, June,
1992
23. Agenda 21, Chapter 35, Science for Sustainable Development, United Nations Conference on
Environment and Development, 1992
24. Declaration of the Council of Academies of Engineering and Technological Sciences
25. The Earth Chapter: A Contribution Toward its Realization, Franciscan Centre of Environment Studies,
Roma, 1995
26. Jenkinson C.S., The Quality of Thomas Jefferson’s Soul, White House Library
27. Annan K.A., WE, the Peoples of United Nations in the 21st Century, United Nations, New York, 2000
28. Kates R.W., et al., Sustainability Science, Science, 27 April 2001, Vol.292, pp.641–642
29. National Research Council, Board on Sustainable Development, Our Common Journey: Transition
Toward Sustainability, National Academic Press, DC, 1999
30. Watson R., et al., Protecting Our Planet, Securing Our Future, United Nations Environmental Programme,
Nairobi, 1998
31. van den Kroonenberg H.H., Energy for Sustainable Development: Post-Rio Challenges and Dutch Response, Resources, Conservation and Recycling, 12, 1994
32. Hammond G.F., Energy and the Environment, Towards a Collaborative Research Agenda: Challenges
for Business and Society, Macmillan Press, Basingstoke, 2000
33. Hammond G.F., Energy, Environment and Sustainable Development: A UK Perspective, Trans. ICHemE,
Vol. 78, Part B, July 2000


34. Veziroglu T.N., Ozay K., Achieving Sustainable Future, International Journal Hydrogen Energy (to be published)
35. Binswanger M., Technological Progress and Sustainable Development: what about the rebound effect?, Ecological Economics, Vol.36, pp.119–132, 2001
36. Neance M.B., Sustainable Development in 21st Century: making sustainability operational
37. Pemberton M., Ulph D., Measuring Income and Measuring Sustainability, Scand. J. of Economics,
Vol.10, No.1, pp.25–40, 2001
38. Ayres R.U., van den Bergh J.C.M, Gowdy J.M., Strong versus Weak Sustainability: Economics, Natural
Sciences and “Consilience”, Environmental Ethics, Vol.23, pp.155–168
39. UNEP Working Group on Sustainable Development, Internet Communication, 1997
40. Indicators of Sustainable Engineering, Physical Sciences Research Council, Dec.1996
41. D’Angelo E., Perrella G., Bianco R., Energy Efficiency Indicators of Italy, ENEA Centro Ricerche
Casaccia, Roma, RT/ERG/96/3
42. Cafier G., Conte G., Rome as a Sustainable City, Agency for a Sustainable Mediterranean Development,
1995
43. Afgan N.H, Carvalho M.G., Sustainability Assessment Method for Energy Systems, Kluwer Academic
Publisher, New York, 2000
44. Afgan N.H., Al Gobaisi D., Carvalho M.G., Cumo M., Energy Sustainable Development, Renewable
and Sustainable Energy Reviews, 2(1998), pp.235–286.
45. Afgan N.H., Carvalho M.G., Hovanov A.N., Energy System Assessment with Sustainability Indicators,
Energy Policy, 28 (2000), pp.603–612
46. Afgan N., Carvalho M.G., Multi-criteria Assessment of New and Renewable Energy Power Plants,
International Journal ENERGY, Vol.27, pp.739–755, 2002
47. Afgan N., Carvalho M.G., Prstic S., Bar-Cohen A., Sustainability Assessment Of Aluminium Heat Sink
Design, International Journal Heat Transfer Engineering, Vol.24, No.4, 2003


Sustainable energy path

Hiromi Yamamoto
Socioeconomic Research Center, Central Research Institute of Electric Power Industry (CRIEPI),
Tokyo, Japan

Kenji Yamaji
School of Frontier Sciences, University of Tokyo, Tokyo, Japan

ABSTRACT: The purpose of this study is to analyse measures to reduce CO2 emissions and to promote renewable energy. Using a global land use and energy model (GLUE) and a CO2 emission scenario with a strict upper limit, we conducted a simulation and obtained the following results. Bioenergy will supply 33% of all primary energy consumption. However, the consumption of wind and photovoltaics will be only 1.8% and 1.4% of all primary energy consumption, respectively. Oceania, Sub-Sahara Africa and Latin America, where the population densities are low and the bioenergy resources are plentiful, will obtain over two-thirds of all their primary energy from renewables in 2050. In order to realise sustainable energy systems, it is not sufficient to introduce strict limits on CO2 emissions. We need to use bioenergy resources as much as possible; in addition we need to develop new technologies concerning energy savings, wind and photovoltaics.

INTRODUCTION

The use of fossil fuels causes not only resource exhaustion but also environmental problems such as global warming. Before we exhaust all the fossil fuels or cause catastrophic climate change, we need to develop new energy systems that are fossil-fuel-free and fully renewable. However, a clear path to sustainable energy systems using completely renewable energy has not yet been found.
In order to discover the path to sustainable energy systems, the authors are developing the global land use and energy model (GLUE) [1]. The model determines the global energy supply systems of the future on the basis of cost minimisation. The model covers all major energy resources, both fossil fuels and renewables, and all major energy conversion technologies, including a variety of power generation, gasification and liquefaction technologies.
In this study, we conduct a simulation using the GLUE model and develop a renewables-intensive scenario. We then discuss the role and the limits of the current renewables and the problems that need to be solved in order to realise sustainable energy systems.

OUTLINE OF THE MODEL

In this section we outline the structure of the GLUE model.
The model calculates the optimal energy systems, including bioenergy systems, from 2000 to 2050 in ten-year steps. The world is divided into 11 regions (Table 1).
The model is formulated using the Linear Programming (LP) technique in the GAMS package. The objective function of the model is the sum of the energy system costs.
The model consists of two parts: an energy systems part and a land use part. The energy systems part is based on a global energy systems model named New Earth 21 (NE21) and the land use part is based on a global land use and energy model (GLUE-11) (Figure 1) [1].
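As a toy illustration of this kind of cost-minimising LP formulation (the actual GLUE model is written in GAMS and is far larger; the technology names, costs, emission factors and demand figures below are invented for the example), a primary energy mix meeting a fixed demand under a CO2 cap can be chosen with an off-the-shelf LP solver:

from scipy.optimize import linprog

# Hypothetical supply options with unit costs (arbitrary units per EJ)
# and CO2 intensities (GtC per EJ); none of these numbers come from GLUE.
options = ["coal", "gas", "bioenergy", "wind"]
cost = [1.0, 1.5, 2.0, 2.5]
co2 = [0.025, 0.015, 0.0, 0.0]

demand = 500.0   # EJ/year of primary energy to be supplied
co2_cap = 4.0    # GtC/year allowed in this decade

# Minimise total cost subject to: supply meets demand, emissions stay below the cap
res = linprog(c=cost,
              A_ub=[co2], b_ub=[co2_cap],
              A_eq=[[1.0] * len(options)], b_eq=[demand],
              bounds=[(0, None)] * len(options),
              method="highs")

for name, x in zip(options, res.x):
    print(f"{name}: {x:.1f} EJ/year")

In GLUE the same basic idea is applied over 11 regions, several decades and a much richer set of resources and conversion technologies, with the land use part constraining the bioenergy supply potential.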


Table 1. Regions in the model.

No. Regions

1 North America
2 Western Europe
3 Japan
4 Oceania
5 Centrally Planned Asia
6 Middle East North Africa
7 Sub-Sahara Africa
8 Latin America
9 Former USSR & Eastern Europe
10 Southeast Asia
11 South Asia

[Figure: structure of the model. The land use sub-model combines a wood sector (wood demand per capita for paper, timber and traditional and modern fuelwood, regional wood supply and demand, woody biomass residues, forest felling and forest protection parameters) and a food sector (population, food demand per capita, productivity of arable land and pasture, allocation of arable land between food and energy crop production); it is connected to the energy sub-model through the modern bioenergy supply potential.]
Figure 1. Structure of the model.


The land use part covers a wide range of land uses and biomass flows, including food chains, material recycling, and the discharge of biomass residues.
Those two parts are connected through common variables concerning the bioenergy supply potential. The number of constraints is about 4,300 in the energy systems part and about 2,100 in the land use part.
We prepared the data set for GLUE using data from FAO, IPCC, the World Bank, DOE, and so on. The details of the data set are explained in reference [1].

CO2 EMISSION SCENARIO

We set the CO2 emission scenario, called CP3F, that GLUE uses. CP3F is a scenario in which CO2 emissions are strictly controlled.
We assume that in CP3F the CO2 emissions in 2010 will be at the amounts determined in the Kyoto Protocol at COP3 in the developed regions, and will be unconstrained in the developing regions of the model. In addition, we allow the trading of CO2 emission rights among the developed regions.
We assume that the CO2 emissions in and after 2020 will be 30% less than the 2010 amounts in the developed regions, and 30% less than the amounts in the case without CO2 constraints in the developing regions. In addition, we allow the trading of CO2 emission rights between all the regions in the model.
In the CP3F scenario, therefore, severe constraints on CO2 emissions are imposed in and after 2020.
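Purely to make these scenario rules concrete (the function below is only a paraphrase of the assumptions stated above, not part of GLUE; it also ignores the trading of emission rights between regions, and the numerical values in the example are invented):

def cp3f_cap(is_developed, year, kyoto_2010, unconstrained):
    """Hypothetical CP3F-style CO2 cap (GtC/year) for one region and one decade.

    kyoto_2010    -- the region's Kyoto Protocol amount for 2010
    unconstrained -- the region's emissions in the no-constraint case for that year
    Returns None where no cap applies (developing regions in 2010)."""
    if year <= 2010:
        return kyoto_2010 if is_developed else None
    reference = kyoto_2010 if is_developed else unconstrained
    return 0.7 * reference  # 30% below the reference amount in and after 2020

# Invented example: a developed region with a Kyoto amount of 1.5 GtC/year
for year in (2010, 2020, 2050):
    print(year, cp3f_cap(True, year, kyoto_2010=1.5, unconstrained=2.0))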

SIMULATION RESULTS

Using the GLUE model and the CO2 emission scenario CP3F, we conducted a simulation and obtained the following results.
We assume that the final energy demand will increase following the IPCC SRES-B2 scenario. In order to satisfy the demand, the primary energy consumption will increase too.
The total consumption of fossil fuels will decrease between 2040 and 2050. This is because the costs of fossil fuels will become relatively disadvantageous compared with renewables, due to the increase in the mining costs of fossil energy and the increase in the costs of CO2 discharge in the CP3F scenario (Figures 2 and 3).
However, a certain amount of the bioenergy resources that are practically usable will be used by 2050. Most biomass residues that are practically usable will be used by 2050. All the supply potential of energy crops produced on surplus arable land will be used by 2050. Two thirds of the forest resources in the world will be used, which means that two thirds of the forest will be converted into man-made forest, where we may lose bio-diversity.

[Figure: stacked world primary energy consumption from 2010 to 2050, in EJ/year, broken down into solid fossil, unconventional oil, oil, natural gas, nuclear, hydro, wind (incl. geothermal), PV, modern fuelwood, energy crops and biomass residues.]
Figure 2. Primary energy consumption in the world (in EJ/year).


[Figure: shares of the same primary energy sources in world primary energy consumption from 2010 to 2050, in percent.]
Figure 3. Primary energy consumption in the world (in percent).

[Figure: primary energy consumption in 2050 in each of the 11 regions, in EJ/year, broken down into fossil total, nuclear, hydro, wind (incl. geothermal), PV, modern fuelwood, energy crops and biomass residues.]
Figure 4. Primary energy consumption in each region (in EJ/year).

(Currently the natural forest makes up around 9/10 of all the forest in the world and the man-made forest around 1/10.)
Using the bioenergy resources mentioned above, bioenergy will supply 33% of all primary energy consumption.
However, the consumption of wind and photovoltaics will be only 1.8% and 1.4% of all primary energy consumption, respectively (Figures 2 and 3).
Bioenergy resources can be used to follow energy demands, since their harvest and transport can be planned.
Wind and photovoltaics, however, are intermittent, and their introduction will have an upper limit determined by the stability of the electric power system.
When we analyse the results for 2050 regionally, Oceania, Sub-Sahara Africa and Latin America, where the population densities are low and the bioenergy resources are plentiful, will obtain over two-thirds of all their primary energy from renewables (Figures 4 and 5).


[Figure: shares of the primary energy sources in each of the 11 regions and in the world in 2050, in percent.]
Figure 5. Primary energy consumption in each region (in percent).

CONCLUSION

Fossil fuel, consumed on a large scale by industrial societies, causes problems of global warming as well as resource exhaustion. Therefore, the world has started measures to reduce CO2 emissions and to promote renewable energy. In order to analyse such measures we developed a global land use and energy model (GLUE).
Using the model and a CO2 emission scenario with a strict upper limit, we conducted a simulation and obtained the following results.
Bioenergy will supply 33% of all primary energy consumption. However, the consumption of wind and photovoltaics will be only 1.8% and 1.4% of all primary energy consumption, respectively.
Oceania, Sub-Sahara Africa and Latin America, where the population densities are low and the bioenergy resources are plentiful, will obtain over two-thirds of all their primary energy from renewables in 2050.
In order to realise sustainable energy systems, it is not sufficient to introduce strict limits on CO2 emissions.
We need to use bioenergy resources as much as possible; in addition we need to develop new technologies concerning energy savings, wind and photovoltaics.
Especially concerning wind and photovoltaics, we need not only to reduce the plant costs but also to develop new technologies that can avoid the problem of the stability of the electric power system, such as innovative electricity storage systems and space photovoltaic systems.
The authors will further refine the GLUE model and continue to evaluate sustainable energy systems.

REFERENCE

1. Yamamoto, H. et al., Bioenergy in Energy Systems Evaluated by a Global Land Use and Energy
Optimisation Model, CRIEPI Report Y01005, 2001.


Methodology to construct material circulatory network in a local community

Ichiro Naruse∗, Masaya Hotta and Tomoyuki Goto


Department of Ecological Engineering, Toyohashi University of Technology, Japan

Kimito Funatsu
Department of Knowledge-Based Information Engineering, Toyohashi University of
Technology, Japan

ABSTRACT: In order to realize a social system with sustainable development, it is necessary for local communities to construct an inter-industry material circulatory network. Under this conception, wastes evolving from one industry can be regarded as reusable materials for another. This study develops a methodology to construct circulatory networks among different industries for reusable materials. Two types of databases were developed, based on industrial survey data. The first database contains information about the raw materials, products and wastes received or emitted by each of the industries in a subject community. The second database defines conversion technologies that enable the transformation of wastes into reusable materials. Based on these databases, material flows in the community were analysed by a network simulator program developed in this study.

INTRODUCTION

Problems on waste treatment have been commonly recognized worldwide, especially for densely
populated countries or urban areas, since it is hard to secure landfill areas for a long term. While,
present economic growth has obliged to manufacture huge amount of product, and absolutely
causing increase of waste amount. Although it is necessary to support economic growth due to
material production processes in order to keep human activities, local and global environmental
issues have triggered debate on whether the present social policies and/or systems could sustain
the future societies or not.
Recently, a lot of technologies to save energy and resources have been developed and commer-
cialised worldwide. It is important to continue developing and assisting those technologies in the
near future. However, if huge amounts of products and wastes will still continue being manufactured
and emitted, respectively, environmental load is bound to increase. From those viewpoints, a new
concept of sustainable development has been proposed. The term of Zero-Emission may also be
one of key words to reveal the meaning of sustainable development. In order to realize this concept,
it is necessary to develop the available and sophisticated methodologies. In reality, municipalities
and local governments in Japan have made many efforts to treat wastes to safe materials and to
advocate the reduction of municipal wastes to inhabitants. Additionally, they also continue to direct
safe treatment of industrial wastes, based on some laws on the industrial wastes treatment. Some
municipalities carry out investigation on annual amount of municipal and industrial wastes in the
community. Although the databases obtained in that community can be valuable, they do not always
apply to the future policies on waste management well. No available methodologies to utilize those

∗ Corresponding author. e-mail: naruse@eco.tut.ac.jp


Only pie charts or bar graphs are usually shown in reports on the waste investigation results obtained.
This study develops a methodology to construct circulatory networks among different industries for reusable materials, in order to make suitable use of the developed databases. The databases provide the input data for the network simulator program, which was developed in this study. The network simulator can analyse material flows in the local community. Finally, an appropriate inter-industry material circulatory network for the community is proposed. Under the present conditions, it is impossible to use the databases of municipal and industrial wastes investigated by the municipality directly as input data for the simulator. Therefore, two types of databases were prepared, based on the information extracted from the industrial survey data at Toyohashi city in Aichi prefecture, Japan. The first database contains information about the raw materials, products and wastes received or emitted by each of the industries in the subject community. The second database defines conversion technologies that enable the transformation of wastes into reusable materials. Using the two databases obtained, we analysed the material circulatory networks at Toyohashi.

CONCEPT FOR THE DEVELOPMENT OF INDUSTRIAL MATERIAL CIRCULATORY NETWORK

Figure 1 describes the methodology used to develop a material circulatory network in a local community. The methodology consists of four main stages. At the first stage, two types of database are prepared. One of the databases contains information about the raw materials, products and wastes received or emitted by each of the industries and waste treatment companies in a subject community. This information was collected by a questionnaire. The second database defines conversion technologies that enable the transformation of wastes into reusable materials. This database was drawn up by searching journals, newspapers, homepages, patents and so forth. The details will be described later. At the second stage, based on those databases, the network simulator analyses the material circulatory networks.

[Figure: four-step scheme. Step 1: preparation of the databases (input-output information of industrial companies and of waste treatment companies; information on conversion technologies of wastes). Step 2: analysis of the material circulatory network with the network simulator. Step 3: synthesis of problems and evaluation of the network using assessment indices (reduction rates of resources and wastes, environmental emissions, cost, etc.), with feedback to Step 1. Step 4: proposal of the best material circulatory network.]
Figure 1. Concept of development of industrial material circulatory network in a local community.


Under the present conditions, the simulator optimises the total reduction in the mass of raw materials and wastes in the subject community by comparing the masses before and after networking. The third stage plays the role of evaluating the obtained answer. The simulator can use several assessment indices, such as the amounts of resources introduced and wastes evolved, environmental emissions like CO2, NOx and SOx, cost, and so forth. In this work we select the amounts of resources introduced and wastes evolved as indices. It is usually hard to reach the optimum solution for an effective network in only one calculation. Therefore, the problems identified by the analysis can contribute to a revision of the contents of the first and second stages. Unfortunately, this feedback process cannot yet be carried out automatically; the investigators have to find a solution. After iterating the process described, the best material circulatory network is proposed at the fourth stage. The analytical network solution for materials will contribute to policy making on waste management, in order to enhance the trade of reusable materials by introducing new waste conversion technologies in a subject community.

SUMMARY OF DATABASE CONTENTS

Two types of essential database were first constructed from the industrial survey data. The first database describes the flows of raw materials, products and wastes to or from each company, for both the industrial companies and the waste treatment companies in a subject community. The second database defines conversion technologies that enable the transformation of wastes into reusable materials.
The contents of the questionnaire for industrial and waste treatment companies are shown in Figure 2. In the questionnaire for industrial companies, the important information extracted was the material name, condition and consumption or production rate of the resources, products and wastes relating to the three main products of each company. For the waste treatment companies, on the other hand, the questionnaire asks about the main wastes accepted, the treatment technologies applied, and the products and wastes involved in those technologies. The input data include the name, characteristics, and volume or mass of recoverable materials. The output information, on the other hand, includes the names of the products, the final disposal method and the conversion technology involved.
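As a purely illustrative sketch of how such questionnaire records might be held in the two databases (all class and field names below are hypothetical, not the authors' actual schema):

from dataclasses import dataclass, field

@dataclass
class MaterialFlow:
    name: str                 # material name, e.g. "waste plastics"
    condition: str            # physical condition / type of waste
    rate_t_per_month: float   # consumption, production or emission rate

@dataclass
class CompanyRecord:
    company: str
    kind: str                 # "industrial" or "waste treatment"
    location: tuple           # (x, y) coordinates used later by the simulator
    inputs: list = field(default_factory=list)    # resources or wastes accepted
    outputs: list = field(default_factory=list)   # products and wastes emitted

@dataclass
class ConversionTechnology:
    name: str
    waste_in: str             # name of the waste accepted
    product_out: str          # name of the product manufactured
    conversion_rate: float    # tonnes of product per tonne of waste

# Hypothetical entry built from one questionnaire answer
plastic8 = CompanyRecord("Plastic 8", "industrial", (4.2, 1.7),
                         inputs=[MaterialFlow("plastic resin", "pellets", 120.0)],
                         outputs=[MaterialFlow("waste plastics", "solid", 40.0)])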
Figure 3 indicates the contents of the data sources in the database of conversion technologies. The important information in this database is the names and amounts of the wastes accepted and of the products manufactured from them, as well as the conversion technologies applied.

[Figure: questionnaire contents. Industrial companies: input (material names and volumes of resources) and output (the three main products and the types and volumes of wastes). Waste treatment companies: input (names and types of wastes accepted) and output (products, the conversion technology applied and the final disposal method).]
Figure 2. Industrial and waste treatment companies questionnaire contents.


Figure 3 shows the detailed information extracted. Although it is difficult to collect all the necessary information, the important items mentioned above were at least obtained.
The results of these questionnaires and the number of conversion technologies are shown in Figure 4. About 1,139 industrial companies in Toyohashi with more than 10 workers were selected and the questionnaire administered. As a result, 236 companies responded. For the waste treatment companies, all companies in the Mikawa area, which includes Toyohashi, were selected, and half of those companies responded. The second database defines conversion technologies that enable the transformation of wastes into reusable materials. This database was drawn up by searching through journals, newspapers, homepages, patents and so forth. Finally, after analysis to merge similar technologies, about 383 technologies were obtained.

[Figure: information recorded for each conversion technology: the wastes accepted (conditions, reception volume), the products manufactured (properties, production volume), the conversion rate, and the plant's other resource and energy requirements and environmental emissions.]
Figure 3. Information on the database of conversion technologies.

[Figure: summary of the surveys. Industrial companies (Toyohashi city, companies with more than 10 workers): 1,139 contacted, 236 answers (20.7%). Waste treatment companies (East Mikawa region including Toyohashi): all 64 contacted, 32 answers (50%). Conversion technologies collected from journals, patents, newspapers, etc.: 383.]
Figure 4. Abstract of the questionnaires and number of conversion technologies.


CONCEPT OF NETWORK SIMULATOR OF MATERIAL CIRCULATION

All of the information in the databases forms the input data for the network simulator developed. The contents of the input data are shown in Figure 5. When these databases are linked to the simulator, it is necessary to convert the data sheet into a special data sheet that the simulator can read; generally, an Excel file is acceptable. Figure 6 shows the outline of the network simulator. In the analysis, material flows of products and wastes between the companies in the community are drawn as thin red and black arrows. Material flows of resources, products and wastes between the subject community and external communities are shown as thick white, red and black arrows, respectively.

[Figure: contents of the input data for each plant: plant name, input information (names and received volumes of raw materials), output information (names and release volumes of products and wastes), area information (coordinates) and plant classification (industrial company, waste treatment company, conversion technology).]
Figure 5. Contents of the input data to the network simulator.

[Figure: outline of the network simulator. The input-output information of industrial and waste treatment companies and the conversion technology database are fed into the simulator, which computes flows of wastes and products within the subject area and exchanges with external communities. The purpose function, the sum over all flows of material volume multiplied by transfer distance, is minimised by the simplex method subject to the restriction terms, and the resulting waste volume reduction (t/month) is reported.]
Figure 6. Outline of the network simulator.


A house graphic represents a company. There are three designs of house in a solution image, indicating an industrial company, a waste treatment company, and a company with an appropriate conversion technology. The material flow between companies is determined by selecting the route with the minimum value of material volume multiplied by transfer distance. These calculations are optimised by the simplex method.
The locations of the industrial and waste treatment companies are placed on an X-Y coordinate graph using their address data. The position of a company with a conversion technology is installed analytically on the graph, using the database of conversion technologies and the adopted algorithms. Basically, the program selects several conversion technologies when the name of the waste emitted from a certain company corresponds to the name of a resource of a conversion technology in the database. The program then finds the companies accepting the product manufactured by the conversion technology. If several technologies are chosen as candidates, the program selects the technology with the highest conversion efficiency. The location of a new company with the conversion technology is automatically determined by the simplex method, as described above.
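As a toy illustration of the volume-times-distance minimisation described above (this is not the authors' simulator; the company names, waste volumes, acceptance capacities and distances are invented), a small transportation problem of this kind can be set up and solved with a standard LP routine:

import numpy as np
from scipy.optimize import linprog

# Hypothetical waste suppliers (t/month) and companies able to accept that waste
supply = {"Plastic 8": 40.0, "Plastic 6": 25.0}
accept = {"Waste treatment 1": 30.0, "Plastic 1": 35.0}

# Hypothetical transfer distances (km), suppliers as rows and acceptors as columns
dist = np.array([[6.0, 2.0],
                 [3.0, 8.0]])

ns, na = dist.shape
c = dist.ravel()                        # objective: sum over flows of volume * distance

A_ub = np.zeros((ns + na, ns * na))
for i in range(ns):                     # each supplier ships at most its waste volume
    A_ub[i, i * na:(i + 1) * na] = 1.0
for j in range(na):                     # each acceptor receives at most its capacity
    A_ub[ns + j, j::na] = 1.0
b_ub = list(supply.values()) + list(accept.values())

# Ship the largest feasible total volume (here supply and capacity both sum to 65 t/month)
A_eq = [np.ones(ns * na)]
b_eq = [min(sum(supply.values()), sum(accept.values()))]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (ns * na), method="highs")
print(res.x.reshape(ns, na))            # t/month shipped from each supplier to each acceptor

The authors' simulator additionally decides which conversion technology to introduce and where to place it, by matching waste names to technology inputs and preferring the technology with the highest conversion efficiency.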

ANALYTICAL RESULTS AND DISCUSSIONS

Material circulatory network among similar industrial group


First, the material circulatory network within a similar industrial group was simulated. In simulating the network, both the waste treatment companies and the conversion technologies were considered. Furthermore, one company may dispose of the same materials as the resources consumed in the company. In order to account for this situation, the program calculates the fraction of self-recycling of the material within the company. Figure 7 shows an example of the solution obtained for the case of the plastics industries selected as the subject industrial companies; in this calculation the conversion technologies are not taken into account. In the figure, a black diamond with a short line indicates that the marked company has the potential of self-recycling its waste. From the databases, 9 plastics companies and 2 waste treatment companies are selected as objects.

[Figure: material flows (t/month) among 9 plastics companies and 2 waste treatment companies in Toyohashi, without conversion technologies, showing flows of wastes and products, external exchanges and self-recycling; resource input 6,547 t/month and product output 4,212 t/month.]
Figure 7. Material circulatory network for plastic industrial companies with waste treatment companies.


The figure suggests that "Waste Treatment 1" treats the waste evolved from "Plastic 6", and "Plastic 3" accepts the product from "Waste Treatment 1" as a resource. Several companies appear to have the potential for self-recycling. External exchange of materials still remains for all of the companies, since the main task of a company is to sell its own products. As a result, 41 t/month each of waste and resources are reduced in the simulation, compared with the situation before networking.
Figure 8 shows the simulation result when conversion technologies are considered. As seen from the figure, the simulator selects a conversion technology, in which plastics are transformed into fuel oil, at "Plastic 8". Therefore, a new material flow from "Plastic 8" to "Plastic 1" appears. The reason for the simulator to select this conversion technology and location is as follows: "Plastic 8" evolves a large amount of plastic waste, while "Plastic 1" requires an energy source to produce electricity, process utility heat and so forth. The simulator determines one conversion technology to satisfy demand and supply in both companies, and determines the location of the company with the technology, based on the mass converted and the distance between the demand and supply companies. As a result, both the amount of resources consumed and the amount of waste emitted are reduced. Comparing this figure with Figure 7, a further 38 t/month of resources and wastes are reduced, even though the total amount of product is the same.

Material circulatory network among inter-industries


The simulations described above were carried out for the other types of industry as well, and the optimum network within each industrial group was obtained. The overall reduction in the amount of resources was 1,047 t/month over all the companies covered by the questionnaire. As a next step, the material circulatory network among inter-industries was simulated using the same procedures. In this simulation, however, the location of each type of industry could not be determined. Figure 9 shows the material circulatory network among inter-industries. It can be seen from the figure that two thick arrows appear. The thick red arrow indicates a product flow from the "Ceramic industry" to the "Construction industry": wasted ceramic materials are converted into construction materials in the ceramic industries. The thick black line denotes a waste flow from the "Electrical and mechanical industry" to the "Metal industry". The main waste is metallic scrap evolving from the machining processes in those industries.

[Figure: the same plastics network with a conversion technology from waste plastics to fuel oil selected at "Plastic 8", creating a new material flow to "Plastic 1"; material flows in t/month, resource input 6,545 t/month and product output 4,212 t/month.]
Figure 8. Material circulatory network for plastic industrial companies with waste treatment companies and conversion technologies.


[Figure: material circulatory network among inter-industries (food and beverage, farming, textile, paper, chemical, construction, ceramic, electrical and mechanical, metal and other industries), comparing material volumes before networking, after same-category networking and after different-category networking; resources are reduced by 10,899 t/month and wastes by 2,233 t/month (units t/month).]
Figure 9. Material circulatory network among inter-industries.

As the "Metal industry" generally has recycling technologies and/or processes for wasted metals, the "Metal industry" accepts the metallic materials as a waste. Finally, 10,899 and 2,233 t/month of resources and wastes, respectively, are reduced compared with the amounts before networking. The reason for the reduction in the amount of product is the increased consumption within the subject community.

CONCLUSIONS

This study developed a methodology to construct circulatory networks among different industries for reusable materials. Two types of databases were prepared, based on the industrial survey data. The first database contains information about the raw materials, products and wastes received or emitted by each of the industries in a subject community. The second database defines conversion technologies that enable the transformation of wastes into reusable materials. Based on these databases, material flows in the community were analysed by a network simulator program developed in this study. As a result, an optimum hypothetical material circulatory network among different industries was obtained, reducing the amounts of resources introduced into and wastes emitted from the region.


Application of emergy analysis to sustainable management of water resources

Laura Fugaro∗ , Maria Pia Picchi & Ilaria Principi


Department of Chemical and Biosystems Sciences, University of Siena, Siena, Italy

ABSTRACT: The purpose of this study is to evaluate water as a natural resource and to analyse its management in a specific area by emergy analysis. The first part of this study is focused on natural resources analysis: we evaluated the emergy flow that supports an Italian surface water body, the Misa river. The second part of the study consists of an evaluation of the domestic water distribution system of 5 municipalities in the area. A preliminary embodied energy accounting and an emergy analysis have been applied in order to underline the role of non-renewable inputs in producing drinking water. We consider all the products and services necessary to extract water from reservoirs, to treat it and, finally, to provide it to consumers. The results obtained underline the strict correlation between the energy cycle and water distribution. Non-renewable inputs represent 64% of the final total emergy value necessary to provide water to consumers.

INTRODUCTION

The concept of Sustainable Development applied to water resources management has to consider water as the most precious resource on the whole planet. A reliable water supply and the protection of aquatic resources through adequate water management are essential to support all aspects of human life and the dependent aquatic and terrestrial ecosystems.
Freshwater abstracted in Italy is used for urban supply (19.6%), agriculture (50%), industry (19.7%) and for producing energy (11%) [1]. The principal source of water for domestic supply is groundwater (85%), because, historically, groundwater has provided a local, high-quality and economical source of drinking water. Water stress is generally related to an over-proportionate abstraction of water in relation to the sources available in a particular area. Urban demand for freshwater can exceed the local long-term availability of the resources.
The aim of this study was to evaluate the sustainability of water resources management focus-
ing on natural and artificial cycles. The analysis we performed evaluated both renewable inputs
related to natural water reservoirs and non-renewable inputs related to their exploitation for human
consumption.
The system we analysed is in the Province of Ancona, located in central Italy. The natural cycle was studied focusing on a local river, the Misa. The Misa river runs over 48 km within the region, encompassing 375 km² of watershed. The artificial cycle was investigated through a local aqueduct system providing drinking water to five municipalities in the Province. The aqueduct distributed 6.6 million m³ of water in the year 2000 to 42,500 people, through a 668 km long pipeline system.

THE EVALUATION APPROACHES

By transforming flows of energy and material into the amount of emergy required for their pro-
duction, emergy analysis provides a basis to compare dissimilar flows, such as natural resources

∗ Corresponding author. e-mail: fugaro@unisi.it


and economic inputs. This ability makes emergy analysis a valuable tool to evaluate management plans operating at the interface between natural and economic systems.
By definition, "Emergy is the amount of available energy of one form directly or indirectly required to provide a given flow or storage of energy or matter" [2]. Accordingly, solar emergy is the sum of all inputs of solar energy directly or indirectly required in a process. The amount of input emergy per unit of output energy is called the solar transformity. It gives a measure of the convergence of solar emergy through a hierarchy of processes or levels; it can be considered a "quality factor", intended as a measure of the intensity with which the biosphere supports the flow under study.
The total emergy of an item can be expressed as

solar emergy = amount of item × solar transformity

Each input flow can be expressed in different units: joule, gram, kcal and £. The solar emergy is usually measured in solar emergy joules (seJ), while the unit for the solar transformity is solar emergy joules per joule of product (seJ/J) or solar emergy joules per gram of product (seJ/g).
Emergy analysis is based on a donor system of value, where the values of products are based on how much work is required to produce them. Emergy analysis also corrects for different types and qualities of energy by transforming all inputs back to a common denominator (the solar emjoule). This makes emergy evaluation appropriate for examining biophysical systems.
In order to underline the role of energy inputs and to quantify the correlation between drinking water and the energy cycle, we performed an embodied energy analysis. This is "the process of determining the energy required directly or indirectly to allow a system to produce a specific good or service" [3,4]. The major objective of the embodied energy analysis is to minimise conventional (fossil) energy inputs per unit of desired system output.
The embodied energy of a product may be expressed as

$$E=\sum_j E_j=\sum_j m_j\,c_j \qquad (1)$$

[Figure 1 (energy systems diagram): renewable sources (sunlight and wind, rain, earth heat, spring water, surface water, ground water) and purchased inputs (imported water, sand and pipelines, cement, fuels and electricity, chemicals and machinery, human labour) support the domestic water distribution system, which feeds domestic, agricultural and industrial use and the wastewater treatment system, with water leaving the system as output.]

Figure 1. Energy diagram of water resources.


where E is the total energy cost (J) of a given item, Ej is the energy associated with the jth input
and cj is the global unit energy cost of production (J/kg) of the jth mass flow mj (kg).
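As a numerical sketch of Eq. (1) (not part of the original analysis), two of the mass flows later reported in Table 3 can be combined as follows:

```python
# Embodied energy sketch for Eq. (1): E = sum_j m_j * c_j, where m_j is the mass of the
# j-th input and c_j its global unit energy cost (here in toe per kg, as in Table 3).
# The two flows below are taken, for illustration, from Table 3.
flows = {
    #  input        (mass m_j in kg/yr, unit energy cost c_j in toe/kg)
    "chemicals": (6.91e4, 3.58e-4),
    "fuels":     (8.25e4, 1.28e-3),
}

E = sum(m * c for m, c in flows.values())
print(f"E = {E:.0f} toe/yr")   # ~25 + ~106 = ~130 toe/yr, matching the Table 3 rows
```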

RESULTS

Natural and artificial systems are diagrammed using the energy systems language (Figure 1) [5].
The diagrams represent the principal variables, sources, storages, processes and energy flows of the
water system. Table 1 reports the transformities taken from other studies together with the new
values calculated in this study [6, 7, 8, 9, 10, 11, 12].
The emergy evaluation of the Misa River is reported in Table 2, which lists all the inputs necessary
to produce river water. Each input was multiplied by its appropriate transformity (or emergy per
unit) to obtain its emergy contribution in solar emergy joules per year. Once the total emergy
budget was obtained by adding together all the input rows of the emergy table, the transformity
was calculated by dividing the total emergy required by the process by the yearly output of the
Misa River.
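As a cross-check of this procedure (a sketch, not part of the paper), summing the rain, earth heat and spring water rows of Table 2 reproduces the total budget; insolation is not added to the sum here, presumably to avoid double counting with rain (our reading, not stated in the text):

```python
# Each of the rain, earth heat and spring water flows of Table 2 is multiplied by its
# transformity and the products are summed; insolation is left out of the total,
# presumably to avoid double counting with rain (an interpretation, not stated in the text).
inputs = {
    #  item            (yearly flow in J, transformity in seJ/J)
    "precipitation": (3.80e14, 1.45e5),
    "earth heat":    (4.16e14, 1.20e4),
    "spring water":  (1.70e12, 3.40e5),
}

total_emergy = sum(flow * transformity for flow, transformity in inputs.values())
print(f"total emergy = {total_emergy:.2e} seJ/yr")   # ~6.07e19 seJ/yr, as in Table 2
```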
Surface water reservoirs, together with groundwater reservoirs, represent both inputs and outputs
of artificial water distribution networks. In order to better evaluate the role of natural resources in the
water cycle, it is important to determine the emergy flow that supports the Misa River.
The transformity obtained was then applied in the emergy analysis of the local aqueduct. A natural

Table 1. Transformities or emergy per mass used in this study.

Item Transformity or emergy/mass References

Surface water 8.14 × 105 sej/g This study


Ground water 3.40 × 105 sej/g Updated value from Brown M. et al., 1991 [6]
Supplied water 2.37 × 106 sej/g This study
Electricity 1.43 × 105 sej/J Bastianoni S. et al., 2000 [7]
Chemicals 2.65 × 105 sej/g Bjorklund J. et al., 2001 [8]
Human labour 7.38 × 106 sej/J Ulgiati S. et al., 1994 [9]
Fuels 9.89 × 104 sej/J Susani L., 2001 [10]
Polyethylene and PVC 5.87 × 109 sej/g Buranakarn V., 1998 [11]
Concrete 1.00 × 109 sej/g Odum H.T., 1996 [5]
Pig iron 1.00 × 109 sej/g Odum H.T., 1996 [5]
Pipeline 3.52 × 109 sej/g Bastianoni S. et al., 2001 [12]
Sand 1.00 × 109 sej/g Odum H.T., 1996 [5]
Machinery 6.70 × 109 sej/g Updated value from Brown M. et al., 1991 [6]

Table 2. Emergy evaluation of the Misa River.

Item Raw units Unit Solar transformity (sej/unit) Emergy flow (sej/yr)

Input
1 Insolation 1.44 × 1018 J 1 1.44 × 1018
2 Precipitation 3.80 × 1014 J 1.45 × 105 5.52 × 1019
3 Earth heat 4.16 × 1014 J 1.20 × 104 4.99 × 1018
4 Spring water 1.70 × 1012 J 3.40 × 105 5.79 × 1017
Total emergy 6.07 × 1019
Output
Water 7.46 × 1013 g 8.14 × 105


Table 3. Embodied energy evaluation of an aqueduct supplying drinking water in the Province of
Ancona, Italy.

Input Raw unit Unit Toe/unit Energy (Toe/year)

1 Pipeline
Polyethylene 7.70 × 103 kg 1.00 × 10−3 7.72 × 100
Iron 8.02 × 104 kg 1.19 × 10−4 9.55 × 100
Concrete 4.01 × 103 kg 5.97 × 10−5 2.39 × 10−1
Pig iron 1.02 × 104 kg 1.19 × 10−4 1.21 × 100
PVC 7.16 × 102 kg 7.17 × 10−5 5.13 × 10−2
2 Electricity
Extraction 6.94 × 1012 J 3.11 × 10−11 2.15 × 102
Distribution 6.98 × 1012 J 3.11 × 10−11 2.17 × 102
3 Chemicals 6.91 × 104 kg 3.58 × 10−4 2.48 × 101
4 Fuels 8.25 × 104 kg 1.28 × 10−3 1.06 × 102
5 Machinery 6.75 × 104 kg 1.19 × 10−4 8.04 × 100
6 Tanks (concrete) 1.85 × 106 kg 5.97 × 10−5 1.10 × 102
7 Sand surface
Polyethylene 8.35 × 105 kg 2.39 × 10−7 2.00 × 10−1
Iron 1.20 × 106 kg 2.39 × 10−7 2.86 × 10−1
Concrete 5.42 × 104 kg 2.39 × 10−7 1.30 × 10−2
Pig iron 5.93 × 104 kg 2.39 × 10−7 1.42 × 10−2
PVC 4.61 × 104 kg 2.39 × 10−7 1.10 × 10−2
Total energy toe 7.00 × 102
goe 7.00 × 108
Joe 2.93 × 1013
Energy required per unit of drinking water
4.92 × 106 m3 1.42 × 10−4 toe/m3
4.92 × 106 m3 1.42 × 10+2 goe/m3
4.92 × 106 m3 5.96 × 106 Joe/m3

system is usually complicated to model, because of the high number of variables, and of relations
among them, that contribute to its organisation. Modelling the emergy flows of a natural system creates
a hierarchy of the energy and material flows that support that system, by ascribing different values of
transformity to them. If more solar emergy is used to support a system, its transformity is usually greater
and its quality and position in the hierarchy are higher.
The transformity of the Misa River has been calculated considering solar energy, rain, deep earth
heat and spring water as inputs (Table 2). The watershed (377 km²) is a semi-impermeable surface
that collects and then carries precipitation to the water stream [13]. The yearly emergy flow supporting
the Misa River is 6.07 × 10^19 seJ/yr. The output of the system is the amount of water running
in the river every year (7.46 × 10^7 m³). The transformity of the river has been calculated by
dividing the emergy flow supporting the system by the water output, giving 8.14 × 10^5 seJ/g.
This result is in accordance with the values reported in the literature: 4.00 × 10^5 seJ/g as a global
value [5] and 5.12 × 10^5 seJ/g as a national value [14].
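Explicitly, with the river output expressed as a mass (7.46 × 10^7 m³ ≈ 7.46 × 10^13 g), the calculation reduces to a single division:

$$\tau_{\mathrm{Misa}}=\frac{6.07\times10^{19}\ \mathrm{seJ/yr}}{7.46\times10^{13}\ \mathrm{g/yr}}\approx 8.14\times10^{5}\ \mathrm{seJ/g}$$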
The second part of the analysis is the study of the domestic water supply system, which involves
extracting water from natural storages, treating it and finally distributing it [15]. The analysis of the
construction materials of the aqueduct considers the pipeline, the sand used in laying the pipes, and
the concrete used for the tanks. Maintenance inputs are implicitly accounted for through the different
lifetimes of the infrastructure.
The other inputs are surface water and groundwater, human labour, fuels, electricity and the
chemicals used for treating water. All the information about the system was obtained through personal
interviews. The notes documenting the input data and the calculations required to generate the
values are available from the authors.

Table 4. Emergy evaluation of an aqueduct supplying drinking water in the Province of Ancona, Italy.

Input Raw unit Unit Solar transformity (sej/unit) Emergy flow (sej/yr)

1 Water
Surface water 3.65 × 1012 g 8.14 × 105 2.97 × 1018
Spring water 1.21 × 1012 g 3.40 × 105 4.10 × 1017
Ground water 1.75 × 1012 g 3.40 × 105 5.95 × 1017
2 Pipeline
Polyethylene 7.70 × 106 g 5.87 × 109 4.52 × 1016
Iron 8.02 × 107 g 3.52 × 109 2.83 × 1017
Concrete 4.01 × 106 g 1.00 × 109 4.01 × 1015
Pig iron 1.02 × 107 g 1.00 × 109 1.02 × 1016
PVC 7.16 × 105 g 5.87 × 109 4.20 × 1015
3 Electricity
Extraction 6.94 × 1012 J 1.43 × 105 9.92 × 1017
Distribution 6.98 × 1012 J 1.43 × 105 9.98 × 1017
4 Chemicals 6.91 × 107 g 2.65 × 109 1.83 × 1017
5 Fuels 3.71 × 1012 J 9.89 × 104 3.66 × 1017
6 Machinery 6.75 × 107 g 6.70×109 4.52 × 1017
7 Human labour 4.03 × 1010 J 7.38 × 106 2.97 × 1017
8 Tanks (concrete) 1.85 × 109 g 1.00 × 109 1.85 × 1018
9 Sand surface
Polyethylene 8.35 × 108 g 1.00 × 109 8.35 × 1017
Iron 1.20 × 109 g 1.00 × 109 1.20 × 1018
Concrete 5.42 × 107 g 1.00 × 109 5.42 × 1016
Pig iron 5.93 × 107 g 1.00 × 109 5.93 × 1016
PVC 4.61 × 107 g 1.00 × 109 4.61 × 1016
Total emergy 1.16 × 1019
Drinking water 4.92 × 106 m3 2.37 × 1012

The global energy table (Table 3) reports each mass input required to sustain the aqueduct sys-
tem; each item was multiplied by its specific associated energy (tonnes of oil equivalent per unit) [16].
The embodied energy analysis shows an energy consumption of 0.14 grams of oil equivalent per
litre of drinking water. Considering that the average use of drinking water in the area is 293 litres
per day per person, the final daily consumption of oil equivalent is 41 grams for each inhabitant
of the region. We can also evaluate the CO2 released in producing potable water by multiplying the
total energy by a standard conversion factor of 3.22 g CO2 per gram of oil equivalent [17]. The
resulting daily emission of CO2 per person, due to water consumption, is 134 g.
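The per-capita figures follow directly from the per-litre value (0.142 goe/litre in Table 3):

$$0.142\ \tfrac{\mathrm{goe}}{\mathrm{litre}}\times 293\ \tfrac{\mathrm{litres}}{\mathrm{person\cdot day}}\approx 41.6\ \tfrac{\mathrm{goe}}{\mathrm{person\cdot day}};\qquad 41.6\times 3.22\ \tfrac{\mathrm{g\,CO_2}}{\mathrm{goe}}\approx 134\ \tfrac{\mathrm{g\,CO_2}}{\mathrm{person\cdot day}}$$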
The main input is electricity, accounting for 61% of the total energy required to support the
system, followed by the infrastructure (18%) and the fuels (15%).
The emergy evaluation is shown in Table 4. The emergy flow of surface water is evaluated using the
transformity calculated in the Misa River analysis, while the transformities of spring and ground water
are taken from the literature [5]. Water resources account for 33% of the total emergy flow supporting
the aqueduct and represent the only renewable input to the system. The other inputs concurring to
the production of drinking water are dominated by the infrastructure materials (PVC, polyethylene,
iron, concrete, pig iron, sand), which account for 38% of the total emergy flow. The energy
sources used in the process (electricity and fuels) represent 22% of the emergy supporting the
aqueduct.

DISCUSSION

Emergy analysis of the natural system, the Misa River, gives results similar to, but higher than, those
reported in the literature. Since there is no way of improving the efficiency of a natural process, the


[Figure 2 (bar chart): percentage contribution of each input (water, pipeline, electricity, chemicals, fuels, machinery, human labour, tanks, sand) according to the emergy analysis and to the embodied energy analysis.]

Figure 2. Role of inputs in emergy analysis and in embodied energy analysis of an aqueduct in the Province
of Ancona, Italy.

conclusion we can reach is that our system needs more input from the environment to support the
production of surface water. The transformity obtained by comparing surface water systems does
not give any information about the quality of the resource analysed in terms of its properties,
since this is not the aim of emergy analysis. To understand the artificial water supply system better,
we applied two different methodologies: embodied energy analysis and emergy analysis.
The first approach indicates that one litre of drinking water embodies 0.14 goe; the main con-
tribution is electricity, which represents 61% of the total embodied energy. This approach does
not consider the use of resources that cannot be linked to fossil fuels, such as water and human
labour. On the other hand, emergy analysis has the property of evaluating every input on a common
basis, giving its contribution in terms of the solar energy required to generate it. The analysis
performed on the aqueduct reveals that natural resources account for only one third of the total emergy
flow supporting the system, the evaluation being dominated by non-renewable resources.
An important difference revealed by the comparison of the two methodologies is the role of the
sand used in the construction of the pipeline (Figure 2). Evaluating the embodied energy of this
input results in 0.07% of the total embodied energy of the system, while evaluating its emergy
contribution results in 19%. In order to explain the two different results it is necessary to consider
that in performing emergy analysis of sand, its sedimentary cycle and its turnover time are taken
into account.

CONCLUSIONS

In this study we have investigated a complex system, taking into account both its natural and its
artificial aspects. We have calculated the real value of water as a natural resource, and we have anal-
ysed the environmental inputs involved in the management of water resources. Both methodologies
applied in the study of the aqueduct clearly point out how strictly the artificial water cycle depends
on the energy cycle related to its exploitation. Reducing the consumption of drinking water therefore
means not only preserving natural resources but also preventing the waste of energy and of
non-renewable resources. Water is too often considered a "free" resource with no or low economic
value because it is thought to be completely renewable. This study can be considered a basis for
detailed monitoring studies in the future, aimed at assessing the real value of water in order to
improve its management.


REFERENCES

1. Ministero dell’ambiente, 1998. Relazione sullo stato dell’ambiente (in Italian), 1999.
2. Odum H.T., Self Organisation, Transformity and Information, Science, vol. 242, pp. 1132–1139, 1988.
3. Herendeen A.R., Embodied energy, embodied everything…now what?, Advances in Energy Studies:
Energy Flows in Ecology and Economy, S. Ulgiati, M.T. Brown, M. Giampietro, R.A. Herendeen, and
K. Mayumi (Eds), MUSIS Publisher, Roma, Italy, pp. 13–48, 1998.
4. Federici M., Ulgiati S., Verdesca D. and Basosi R., Efficiency and sustainability of passengers and
commodities transportation system. The case of Siena, Italy, in press to Ecological Indicators, 2002.
5. Odum H.T., Environmental accounting. Emergy and environmental decision making, Wiley & Sons,
New York, 1996.
6. Brown M.T. and Arding J.E., Transformity Working Paper, Center for Wetlands, Environmental
Engineering Science, University of Florida, Gainesville, Florida, 1991.
7. Bastianoni S., Marchettini N., Principi I. and Tiezzi E., Sviluppo di un modello di analisi emergetica
per il sistema elettrico nazionale, University of Siena, Italy, unpublished manuscript in italian, 2000.
8. Bjorklund J., Emergy analysis of municipal wastewater treatment and generation of electricity by
digestion of sewage sludge, Resources, Conservation and Recycling, Vol. 31, pp. 293–316, 2001.
9. Ulgiati S., Odum H.T. and Bastianoni S., Emergy use, environmental loading and sustainability. An
emergy analysis of Italy, Ecological Modelling, Vol. 73, pp. 215–268, 1994.
10. Susani L., Analisi termodinamica dei processi di produzione dell’energia elettrica mediante il calcolo di
nuove transformity delle risorse petrolifere, M.S. Thesis, University of Siena, Department of Chemical
and Biosystems Sciences, in Italian, 2001.
11. Buranakarn V. Evaluation of recycling and reuse of building materials using the emergy analysis
method, Doctoral Dissertation; College of architecture, University of Florida, Gainesville, 1998.
12. Bastianoni S., Fugaro L., Principi I. and Tiezzi E., Implementazione di un sistema di contabilità
ambientale su scala provinciale e intercomunale, University of Siena, Italy, unpublished manuscript in
italian, 2001.
13. Regione Marche, Relazione sullo Stato dell’Ambiente della Regione Marche, in italian, 2000.
14. Fugaro L., Marchettini N. and Principi I., Environmental accounting of water resources in the
Samoggia river area, Paper submitted to the Second Emergy Research Conference, Gainesville, FL,
20–22 September 2001.
15. Buenfil A.A., Sustainable use of potable water in Florida: an emergy analysis of water supply and
treatment alternatives. Emergy Synthesis, edited by M.T. Brown, Florida pp. 107–116, 2000.
16. Boustead I. and Hancock G.F., Handbook of Industrial Energy Analysis, Ellis Horwood Limited, p. 442,
1978.
17. Sipila K., Johansson A, Saviharju K. Can fuel-based energy production meet the challenge of fighting
global warming? A chance for biomass and cogeneration? Bioresour Technol. 43, 7–12, 1993.


Sustainability assessment method


Method of allocation of the weights by fuzzy logic for a sustainable urban model

Francesco Gagliardi & Mariacristina Roscia
Department of Electric Engineering, University of Naples "Federico II", Italy

ABSTRACT: Environmental indicators are characterized by a low degree of aggregation and a
high amount of information [1]. An indicator must give a synthetic representation of an environmental
reality so that it can easily be used by policy makers, and the various environmental subsystems
need to be connected into an integrated system. This paper illustrates a possible ecosystem-city
model. To define a model that allows the sustainability of a city to be estimated, the indicators are
not aggregated directly, since their structures differ; instead, a weight is assigned to each indicator
with reference to the other indicators, and the weights are calculated by a procedure based on fuzzy
logic. The final result is a combination of the values assigned by various judges for various criteria,
processed through fuzzy logic so as to obtain greater objectivity.

INTRODUCTION

One method of environmental observation that is increasingly standing out is the use of indicators,
which help "to read" the state of the environment in its several aspects by selecting, among all the
information available, the items that are really meaningful for explaining a particular situation, with
a descriptive, evaluative, forecasting or decisional aim.
At this point the problem is to define the meaning of an environmental indicator: an indicator
furnishes a synthetic description of an environmental reality through a value or a parameter; however,
the information it conveys is wider than the value itself and has to be specified in relation to the type
of user of the indicator and to the context in which it is placed.
The process of choosing indicators for studying a specific context is a fundamental step, in
relation to the objectives, considering that a good indicator must be specific, sensitive, practical
and pertinent to the case under study, carefully defined, but above all dynamic and in continuous
evolution, because the environment is a complex system that cannot be observed in only one way [2].

SUSTAINABLE INDICATORS: UNCERTAINTY IN THE DEFINITION

Sustainability (or un-sustainability) is not easy to measure: it is not directly observable as a natural
or direct consequence of the reading of environmental indicators [3], and no international or European
agreement on sustainability indicators has yet been reached.
The risk of generating confusion by using an indicator for both environmental and sustainability
measurement frequently occurs. The relationship between the environment and human decisions is
interconnected, and at present it is impossible to assert that a clear difference exists between objective
and subjective indicators.
However, it remains to be explained how environmental information should be prepared so as to
allow a synthetic evaluation. Since the codification of environmental indicators into categories
encloses in itself some elements of arbitrariness, the moment of the technical definition of an
indicator is the one in which the characteristics justifying its use in a given direction should mainly
emerge.
Some essential steps in the preparation of environmental indicators are the following (Opschoor
and Reijnders, 1991):
• identification of the spatial and temporal context taken as reference for the survey of the
database;
• decision on the type of information to be transferred and choice of a synthesis
method;
• check of the properties that should characterize the definition of an environmental indicator.
In this way it becomes possible to equip the policy maker with information of "ready consultation",
that is, information that puts him in a position to follow and to estimate the effects of an
intervention.

APPLICATIONS OF FUZZY LOGIC FOR EVALUATING ENVIRONMENTAL PLANS

The indicators arranged by the scientific community are commonly characterized by a low degree
of aggregation and a high amount of information, while policy makers would need a higher degree of
aggregation and a smaller amount of information. Since the different indicators are not homogeneous,
as results from their various structures, a weight can be assigned to every indicator to allow
a possible aggregation. This assignment can be made by combining values assigned by different
judges according to different criteria.
Everyday natural language is made up of indefinite, inaccurate and polyvalent concepts that
can make decisional processes approximate. The theory of "fuzzy logic", or "fuzzy set theory",
resembles human reasoning in its use of approximate information and uncertainty to generate
decisions. It was specifically designed to mathematically represent uncertainty and vagueness and
to provide formalized tools for dealing with the imprecision intrinsic to many problems [4]. The scope
of this work is to assign, by fuzzy logic, weights to the different indicators that can be taken into
consideration in an environmental impact assessment, so as to obtain greater homogeneity and objectivity.
Typically the base structure for an environmental plan is the matrix

$$\begin{array}{c|ccc}
 & G_1 & \cdots & G_J \\ \hline
A_1 & \varphi_{11} & \cdots & \varphi_{1J} \\
\vdots & \vdots & & \vdots \\
A_I & \varphi_{I1} & \cdots & \varphi_{IJ}
\end{array}\qquad(1)$$

where $G_j$ indicates an objective or environmental characteristic and $G=\{G_1, G_2, \ldots, G_J\}$ is a set
of $J$ environmental characteristics; $A_i$ is an alternative or option and $A=\{A_1, A_2, \ldots, A_I\}$ is a set
of mutually exclusive plans; $\varphi_{ij}$ indicates the result of plan $A_i$ with regard to objective $G_j$.
Generally, weights $\{W_1, W_2, \ldots, W_J\}$ are introduced to represent the different value of the various
objectives. The following method allows weights to be assigned to $m$ alternatives $A_1, \ldots, A_m$.
To this end, $n$ experts or judges $J_1, \ldots, J_n$ provide information based on the criteria $C_1, \ldots, C_K$.
The information assigned by the judges is expressed as trapezoidal fuzzy numbers*

$$(\alpha/\beta,\ \gamma/\delta)\qquad(2)$$

where $\alpha$, $\beta$, $\gamma$, $\delta$ are real numbers satisfying $\alpha \le \beta \le \gamma \le \delta$ [5], see Figure 1.

* Trapezoidal fuzzy numbers are used because they are more comprehensible to the expert judges. In fact, "about 7"
can be indicated by the notation (6/7, 7/8), while "between 6 and 7" can be indicated by the notation (6/6, 7/7).


[Figure 1: trapezoidal membership function with breakpoints α, β, γ, δ.]

Figure 1. Relation α ≤ β ≤ γ ≤ δ.

The following steps give the weights of the indicators:

1. The judges express their opinions, both as evaluations of the criteria and as the importance of each
indicator with respect to every criterion, using values in the interval [0, L]. The criteria matrix
obtained is

$$T=\begin{array}{c|cccc}
 & J_1 & J_2 & \cdots & J_n \\ \hline
C_1 & & & & \\
C_2 & & & b_{kj} & \\
\vdots & & & & \\
C_K & & & &
\end{array}\qquad(3)$$

where
$$b_{kj}=\left(\varepsilon_{kj}/\zeta_{kj},\ \eta_{kj}/\theta_{kj}\right)\qquad(4)$$

and, for every criterion $C_k$ $(1 \le k \le K)$, the alternatives matrix is

$$T_k=\begin{array}{c|cccc}
 & J_1 & J_2 & \cdots & J_n \\ \hline
A_1 & & & & \\
A_2 & & & a^k_{ij} & \\
\vdots & & & & \\
A_m & & & &
\end{array}\qquad(5)$$

where
$$a^k_{ij}=\left(\alpha^k_{ij}/\beta^k_{ij},\ \gamma^k_{ij}/\delta^k_{ij}\right)\qquad(6)$$

2. The weights can be computed in two ways:

a) For every judge $J_j$ the indicator weight is obtained from the criteria as²

$$w_{ij}=\frac{1}{KL}\otimes\left[\left(a^1_{ij}\otimes b_{1j}\right)\oplus\cdots\oplus\left(a^K_{ij}\otimes b_{Kj}\right)\right]\qquad(7)$$

and so on for all the judges; the average value of the fuzzy weights $w_{ij}$ is then

$$w_i=\frac{1}{nL}\otimes\left[w_{i1}\oplus\cdots\oplus w_{in}\right]\qquad(8)$$

which is again a fuzzy number.

b) From the fuzzy numbers $a^k_{ij}=(\alpha^k_{ij}/\beta^k_{ij},\ \gamma^k_{ij}/\delta^k_{ij})$ and $b_{kj}=(\varepsilon_{kj}/\zeta_{kj},\ \eta_{kj}/\theta_{kj})$ given by the
judges, average values are computed, e.g.

$$\alpha_{ik}=\sum_{j=1}^{n}\frac{\alpha^k_{ij}}{n}\qquad(9)$$

² The symbols ⊗ and ⊕ represent fuzzy multiplication and addition, respectively. For example, if A = (1, 2, 3, 4)
and B = (2, 3, 3, 4), then A ⊗ B = (1·2, 2·3, 3·3, 4·4) = (2, 6, 9, 16) and A ⊕ B = (1+2, 2+3, 3+3, 4+4) = (3, 5, 6, 8).
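A minimal sketch of this element-wise trapezoidal arithmetic (the helper names are ours, not the paper's):

```python
# Simplified fuzzy arithmetic of the footnote: trapezoidal numbers are stored as
# 4-tuples (a, b, g, d) and combined breakpoint by breakpoint.

def fuzzy_add(A, B):
    return tuple(x + y for x, y in zip(A, B))

def fuzzy_mul(A, B):
    return tuple(x * y for x, y in zip(A, B))

A = (1, 2, 3, 4)
B = (2, 3, 3, 4)
print(fuzzy_mul(A, B))  # (2, 6, 9, 16)
print(fuzzy_add(A, B))  # (3, 5, 6, 8)
```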


to obtain

$$m_{ik}=\left(\alpha_{ik}/\beta_{ik},\ \gamma_{ik}/\delta_{ik}\right)\qquad(10)$$
$$n_k=\left(\varepsilon_k/\zeta_k,\ \eta_k/\theta_k\right)\qquad(11)$$

The indicator weight can then be computed from the relation

$$w_i=\frac{1}{KL}\otimes\left[\left(m_{i1}\otimes n_1\right)\oplus\cdots\oplus\left(m_{iK}\otimes n_K\right)\right]\qquad(12)$$

3. Once the values $\left(a^k_{ij},\ b_{kj}\right)$ or $\left(m_{ik},\ n_k\right)$ have been obtained, the weights can be expressed as

$$w_i=\left(W_i\,[L_1,L_2]/X_i,\ Y_i/Z_i\,[U_1,U_2]\right)\qquad(13)$$

where the diagram of the membership function is:
• zero to the left of $W_i$,
• $L_1y^2+L_2y+W_i=x$ in $[W_i, X_i]$,
• a horizontal line from $(X_i, 1)$ to $(Y_i, 1)$,
• $U_1y^2+U_2y+Z_i=x$ in $[Y_i, Z_i]$,
• zero to the right of $Z_i$,

with:

$$W_i=\sum_{k=1}^{K}\frac{\alpha_{ik}\varepsilon_k}{KL}\qquad(14)$$
$$X_i=\sum_{k=1}^{K}\frac{\beta_{ik}\zeta_k}{KL}\qquad(15)$$
$$Y_i=\sum_{k=1}^{K}\frac{\gamma_{ik}\eta_k}{KL}\qquad(16)$$
$$Z_i=\sum_{k=1}^{K}\frac{\delta_{ik}\theta_k}{KL}\qquad(17)$$
$$L_1=\sum_{k=1}^{K}\frac{(\beta_{ik}-\alpha_{ik})(\zeta_k-\varepsilon_k)}{KL}\qquad(18)$$
$$L_2=\sum_{k=1}^{K}\frac{\alpha_{ik}(\zeta_k-\varepsilon_k)+\varepsilon_k(\beta_{ik}-\alpha_{ik})}{KL}\qquad(19)$$
$$U_1=\sum_{k=1}^{K}\frac{(\delta_{ik}-\gamma_{ik})(\theta_k-\eta_k)}{KL}\qquad(20)$$
$$U_2=-\sum_{k=1}^{K}\frac{\theta_k(\delta_{ik}-\gamma_{ik})+\delta_{ik}(\theta_k-\eta_k)}{KL}\qquad(21)$$

The terms $W_i$, $X_i$, $Y_i$, $Z_i$ are the components of the fuzzy weight, while the terms
$L_1$, $L_2$, $U_1$, $U_2$ are the coefficients of the second-order polynomials that represent the membership of
the fuzzy weight (see Fig. 2). The membership functions of the averaged inputs are:

$$m_{ik}=\left(\alpha_{ik}/\beta_{ik},\ \gamma_{ik}/\delta_{ik}\right)\qquad(22)$$
$$n_k=\left(\varepsilon_k/\zeta_k,\ \eta_k/\theta_k\right)\qquad(23)$$
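As a numerical cross-check of Eqs (14)–(21) (a sketch, not part of the paper), the components of the weight of indicator 1 can be computed from the first rows of Tables 6 and 7 below; K = 4, and L is assumed here to be 10 (the judges' scale [0, L]), which reproduces the first row of Table 8:

```python
# Eqs (14)-(21) for indicator 1 ("pollution monitoring"), with K = 4 criteria and,
# by assumption, L = 10 for the judges' scale [0, L].
# m[k] = (alpha, beta, gamma, delta) from Table 7, row 1; n[k] = (eps, zeta, eta, theta) from Table 6.
K, L = 4, 10
KL = K * L

m = [(4.6, 5.2, 6.0, 6.2), (6.8, 7.0, 7.6, 7.6), (2.4, 3.0, 3.6, 3.6), (1.4, 1.4, 2.0, 2.0)]
n = [(5.0, 5.6, 6.0, 6.6), (6.0, 6.6, 7.0, 7.4), (6.6, 6.8, 7.6, 8.0), (4.6, 5.4, 6.2, 6.6)]

# (a, b, g, d) stand for (alpha, beta, gamma, delta); (e, z, h, t) for (eps, zeta, eta, theta)
W  = sum(a * e for (a, b, g, d), (e, z, h, t) in zip(m, n)) / KL
X  = sum(b * z for (a, b, g, d), (e, z, h, t) in zip(m, n)) / KL
Y  = sum(g * h for (a, b, g, d), (e, z, h, t) in zip(m, n)) / KL
Z  = sum(d * t for (a, b, g, d), (e, z, h, t) in zip(m, n)) / KL
L1 = sum((b - a) * (z - e) for (a, b, g, d), (e, z, h, t) in zip(m, n)) / KL
L2 = sum(a * (z - e) + e * (b - a) for (a, b, g, d), (e, z, h, t) in zip(m, n)) / KL
U1 = sum((d - g) * (t - h) for (a, b, g, d), (e, z, h, t) in zip(m, n)) / KL
U2 = -sum(t * (d - g) + d * (t - h) for (a, b, g, d), (e, z, h, t) in zip(m, n)) / KL

print([round(v, 3) for v in (W, X, Y, Z)])      # [2.152, 2.582, 3.224, 3.479]  (Table 8, row 1)
print([round(v, 3) for v in (L1, L2, U1, U2)])  # [0.015, 0.415, 0.003, -0.258] (Table 8, row 1)
```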


[Figure 2: the trapezoidal numbers (αik/βik, γik/δik) and (εk/ζk, ηk/θk) and the resulting fuzzy weight, which has membership 1 between Xi and Yi, left branch L1y² + L2y + Wi = x on [Wi, Xi] and right branch U1y² + U2y + Zi = x on [Yi, Zi].]

Figure 2. Fuzzy number weights.

They are equal to 0 for x ≤ α and x ≥ δ (respectively x ≤ ε and x ≥ θ), and equal to 1 for β ≤ x ≤ γ
(respectively ζ ≤ x ≤ η). In the intermediate ranges, for instance between $\alpha_i$ and $\beta_i$, the membership
functions are linear and can be expressed by

$$x=(\beta_i-\alpha_i)\,y+\alpha_i\qquad(24)$$

Taking the fuzzy products into account, the membership function of the resulting weight is expressed
by the relations

$$L_1y^2+L_2y+W_i=x,\qquad U_1y^2+U_2y+Z_i=x\qquad(25)$$

so that the weight $w_i$ is expressed [6], [7] as $\left(W_i\,[L_1,L_2]/X_i,\ Y_i/Z_i\,[U_1,U_2]\right)$.
4. Once the weights have been obtained as fuzzy numbers, it is necessary to obtain a real, or "crisp",
number by a "defuzzification" method. One such method is based on the average values, using the
following relation [8]:

$$F(A_i)=\frac{1}{2}\int_0^1\left[g_1(y|A_i)+g_2(y|A_i)\right]dy=\frac{1}{6}\left(L_{1i}+U_{1i}\right)+\frac{1}{4}\left(L_{2i}+U_{2i}\right)+\frac{1}{2}\left(Z_i+W_i\right)\qquad(26)$$
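Applied to the first row of Table 8 below, Eq. (26) reproduces the first defuzzified value in Table 9:

$$F(A_1)=\tfrac{1}{6}(0.015+0.003)+\tfrac{1}{4}(0.415-0.258)+\tfrac{1}{2}(3.479+2.152)\approx 0.003+0.039+2.816=2.858$$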

SUSTAINABLE URBAN MODEL

The example concerns a possible sustainable city model. It is built with 5 judges, 4 criteria (economy,
environment, energy and urban plan) and 18 indicators. To make the indicators homogeneous, so
that they can be compared, their weights are calculated with fuzzy logic [9]. The methodology is the
following: the judges express, by fuzzy numbers, their opinion on the criteria and on the indicators
evaluated with respect to every criterion. The criteria and indicator matrices obtained are shown in
Tables 1–5.
The resulting database is used to calculate the weights from the average values of the criteria and
of the indicators given by the judges. The fuzzy average values nk obtained for the criteria and the
values mik obtained for the i-th indicator under the k-th criterion are shown in Tables 6–7.


Table 1. Criteria matrix.

Criteria J1 J2 J3 J4 J5

Economy 4 5 5 6 5 5 5 5 6 7 7 8 4 5 6 7 6 6 7 7
Environment 6 7 7 8 5 5 5 5 7 8 8 9 5 5 7 7 7 8 8 8
Energy 8 8 9 9 6 7 8 9 6 6 7 7 7 7 8 9 6 6 6 6
Urban plan 4 5 6 7 5 5 6 6 4 6 7 7 5 5 6 6 5 6 6 7

Table 2. Indicators matrix, evaluated by economy criteria.

Economy criteria

J1 J2 J3 J4 J5

Pollution monitoring 5 5 6 6 5 5 5 6 4 5 6 6 6 6 7 7 3 5 6 6
NO2 6 6 6 6 5 6 6 6 4 5 6 6 5 5 6 6 7 7 7 7
CO 6 7 7 7 5 6 6 7 4 5 5 5 5 5 5 6 5 5 5 5
Water waste 4 5 6 6 5 5 6 6 4 4 5 5 4 5 5 5 6 6 6 6
NO3 6 6 7 7 6 6 6 6 6 6 6 7 5 5 5 5 4 6 6 6
Cleaning efficiency 6 6 6 6 6 6 7 7 3 3 4 4 4 4 5 5 3 4 4 5
RSU 5 5 6 6 5 5 6 6 4 4 4 4 3 4 4 5 5 5 5 5
Separated litter 6 6 6 6 6 6 6 7 5 5 6 6 6 6 6 7 5 5 6 7
Public transportation 7 7 8 8 7 7 7 7 6 6 7 7 4 5 6 7 6 7 8 8
Only pedestrian way 4 5 5 5 4 4 4 5 3 4 4 5 5 6 6 6 4 4 5 6
Cycling-path 3 4 5 5 3 3 4 5 4 4 4 4 3 4 4 4 4 4 5 5
Green area 7 7 8 8 7 7 7 8 7 7 7 7 6 6 7 7 5 6 8 8
Car 8 8 9 9 7 8 8 8 8 8 8 8 6 6 7 7 8 8 8 8
GWh household 9 9 9 9 9 9 9 9 7 7 8 8 7 9 9 9 6 7 8 9
Fuel 9 9 9 9 9 9 9 9 8 8 8 7 7 9 9 9 8 8 8 9
Respiratory pathology deaths 7 8 8 9 7 7 8 8 6 6 7 7 6 7 8 9 8 8 9 9
ISO certification 4 4 5 6 4 4 5 5 5 5 6 6 3 4 5 6 4 4 5 5
Agenda XXI 2 3 3 4 3 3 4 4 2 3 4 5 3 3 4 4 4 4 4 4

Table 3. Indicators matrix, evaluated by environment criteria.

Environment criteria

J1 J2 J3 J4 J5

7 7 8 8 7 7 7 7 6 7 7 7 6 6 7 7 8 8 9 9
7 7 7 7 6 7 7 7 6 6 7 7 6 6 8 8 7 7 7 7
7 7 7 7 7 7 7 7 5 5 7 7 5 6 7 8 6 6 7 7
5 6 6 7 5 5 5 6 6 6 6 6 6 6 7 7 5 7 8 8
7 7 7 7 7 7 7 7 6 6 7 7 7 8 8 8 6 7 8 8
8 8 9 9 7 8 8 8 6 6 7 7 6 6 8 8 6 7 8 9
8 9 9 9 8 8 8 9 7 7 8 8 6 6 8 8 8 8 8 8
7 8 8 9 7 7 8 9 7 7 8 8 6 7 7 8 6 6 8 8
5 6 6 7 5 5 5 6 5 5 6 6 4 5 6 7 6 6 7 7
5 5 5 5 4 4 5 5 3 4 5 6 3 3 5 5 4 4 5 5
3 4 5 6 3 4 4 4 3 3 4 4 5 5 5 5 4 4 4 4
6 6 7 7 6 7 7 7 5 5 7 7 6 6 6 6 6 6 6 7
6 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 5 6 7 7
5 6 7 8 5 5 6 6 4 4 6 6 5 5 6 6 6 6 7 8
6 7 7 7 6 6 7 7 6 7 8 8 6 6 8 8 5 6 6 6
6 6 6 6 5 5 6 6 4 4 6 6 5 5 6 6 6 6 6 6
4 4 5 5 5 5 5 5 4 4 5 5 4 5 6 6 3 3 5 5
2 2 3 3 4 4 5 5 3 3 4 4 5 5 6 6 3 3 4 4


Table 4. Indicators matrix, evaluated by energy criteria.

Energy criteria

J1 J2 J3 J4 J5

3 4 4 4 2 3 4 4 3 3 4 4 2 3 4 4 2 2 2 2
5 5 6 6 4 5 6 7 5 6 6 6 6 6 7 7 5 5 6 6
5 5 6 6 4 5 6 7 5 6 6 6 6 6 7 7 5 5 6 6
1 1 2 2 1 2 2 3 2 2 2 2 2 2 3 3 2 3 4 4
5 5 6 6 4 5 6 7 5 6 6 6 6 6 7 7 5 5 6 6
2 2 3 4 1 2 2 3 2 3 3 4 2 3 4 5 1 2 3 4
2 2 3 4 1 2 2 3 2 3 3 4 2 3 4 5 1 2 3 4
2 2 3 3 1 2 3 4 2 2 3 3 2 2 4 4 3 3 4 4
6 6 7 7 7 7 8 8 7 7 8 8 6 7 8 9 7 7 8 9
2 2 3 3 1 2 3 3 2 2 2 2 3 3 3 3 2 2 3 3
2 2 3 3 1 2 3 3 2 2 2 2 3 3 3 3 2 2 3 3
1 1 3 3 2 2 3 3 1 2 2 3 2 2 3 3 1 1 1 1
7 7 8 8 6 6 7 7 5 6 7 8 5 5 8 8 7 7 9 9
8 8 8 8 7 7 9 9 6 7 8 9 7 7 8 8 7 7 8 9
8 8 8 8 7 7 9 9 6 7 8 9 7 7 8 8 7 7 8 9
4 4 5 6 3 3 4 5 4 4 5 5 1 2 3 4 2 2 3 3
1 1 1 1 1 1 2 2 2 2 2 2 1 1 2 2 1 1 1 1
4 5 6 7 5 5 6 6 6 6 7 7 7 7 7 7 5 5 6 6

Table 5. Indicators matrix, evaluated by urban plan criteria.

Urban plan criteria

J1 J2 J3 J4 J5

1 1 2 2 2 2 3 3 1 1 1 1 2 2 2 2 1 1 2 2
3 3 3 3 2 2 3 3 2 2 2 2 2 2 3 3 1 1 2 2
3 3 3 3 2 2 3 3 2 2 2 2 2 2 3 3 1 1 2 2
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
3 3 3 3 2 2 3 3 2 2 2 2 2 2 3 3 1 1 2 2
6 6 7 7 7 7 8 8 5 6 7 8 7 7 8 8 7 7 9 9
7 7 7 7 7 7 8 8 7 7 9 9 6 6 8 8 6 7 8 9
3 4 5 5 4 4 6 6 2 2 3 3 2 2 4 4 3 3 3 3
7 7 8 8 6 6 8 8 6 7 8 8 8 8 9 9 8 8 8 8
7 8 8 8 8 8 9 9 6 6 8 8 7 7 7 8 7 7 9 9
5 5 6 6 4 4 5 6 5 5 5 5 6 6 6 6 4 4 6 6
9 9 9 9 6 6 8 8 7 7 8 8 6 7 8 9 5 6 7 8
6 6 6 6 6 6 7 7 6 6 8 8 5 5 9 9 5 6 6 7
2 2 3 3 3 3 4 4 1 1 2 2 2 2 2 2 3 3 3 3
2 2 2 2 1 1 2 2 2 2 3 3 1 1 1 1 1 1 2 2
2 2 3 3 3 3 3 3 1 1 3 3 2 2 2 2 3 3 3 3
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
5 5 6 6 5 5 6 6 4 4 6 6 6 6 6 6 5 6 7 7

Table 6. Criteria average value.

n1 = 5 5.6 6 6.6
n2 = 6 6.6 7 7.4
n3 = 6.6 6.8 7.6 8
n4 = 4.6 5.4 6.2 6.6

Table 7. Indicators average value.

m11 = 4.6 5.2 6 6.2    m12 = 6.8 7 7.6 7.6    m13 = 2.4 3 3.6 3.6     m14 = 1.4 1.4 2 2
m21 = 5.4 5.8 6.2 6.2  m22 = 6.4 6.6 7.2 7.2  m23 = 5 5.4 6.2 6.4     m24 = 2 2 2.6 2.6
m31 = 5 5.6 5.6 6      m32 = 6 6.2 7 7.2      m33 = 5 5.4 6.2 6.4     m34 = 2 2 2.6 2.6
m41 = 4.6 5 5.6 5.6    m42 = 5.4 6 6.4 6.8    m43 = 1.6 2 2.6 2.8     m44 = 1 1 1 1
m51 = 5.4 5.8 6 6.2    m52 = 6.6 7 7.4 7.4    m53 = 5 5.4 6.2 6.4     m54 = 2 2 2.6 2.6
m61 = 4.4 4.6 5.2 5.4  m62 = 6.6 7 8 8.2      m63 = 1.6 2.4 3 4       m64 = 6.4 6.6 7.8 8
m71 = 4.4 4.6 5 5.2    m72 = 7.4 7.6 8.2 8.4  m73 = 1.6 2.4 3 4       m74 = 6.6 6.8 8 8.2
m81 = 5.6 5.6 6 6.6    m82 = 6.6 7 7.8 8.4    m83 = 2 2.2 3.4 3.6     m84 = 2.8 3 4.2 4.2
m91 = 6 6.4 7.2 7.4    m92 = 5 5.4 6 6.6      m93 = 6.6 6.8 7.8 8.2   m94 = 7 7.2 8.2 8.2
m101 = 4 4.6 4.8 5.4   m102 = 3.8 4 5 5.2     m103 = 2 2.2 2.8 2.8    m104 = 7 7.2 8.2 8.4
m111 = 3.4 3.8 4.4 4.6 m112 = 3.6 4 4.4 4.6   m113 = 2 2.2 2.8 2.8    m114 = 4.8 4.8 5.6 5.8
m121 = 6.4 6.6 7.4 7.6 m122 = 5.8 6 6.6 6.8   m123 = 1.4 1.6 2.4 2.6  m124 = 6.6 7 8 8.4
m131 = 7.4 7.6 8 8     m132 = 6 6.2 6.4 6.6   m133 = 6 6.2 7.8 8      m134 = 5.6 5.8 7.2 7.4
m141 = 7.6 8.2 8.6 8.8 m142 = 5 5.2 6.4 6.8   m143 = 7 7.2 8.2 8.6    m144 = 2.2 2.2 2.8 2.8
m151 = 8.2 8.6 8.6 8.6 m152 = 5.8 6.4 7.2 7.2 m153 = 7 7.2 8.2 8.6    m154 = 1.4 1.4 2 2
m161 = 6.8 7.2 8 8.4   m162 = 5.2 5.2 6 6     m163 = 2.8 3 4 4.6      m164 = 2.2 2.2 2.8 2.8
m171 = 4 4.2 5.2 5.6   m172 = 4 4.2 5.2 5.2   m173 = 1.2 1.2 1.6 1.6  m174 = 1 1 1 1
m181 = 2.8 3.2 3.8 4.2 m182 = 3.4 3.4 4.4 4.4 m183 = 5.4 5.6 6.4 6.6  m184 = 5 5.2 6.2 6.2

Table 8. Weights components.

W X Y Z L1 L2 U1 U2

1 2.152 2.582 3.224 3.479 0.015 0.415 0.003 −0.258


2 2.69 3.089 3.771 4.064 0.011 0.388 0.002 −0.295
3 2.58 2.995 3.646 4.031 0.014 0.401 0.01 −0.395
4 1.764 2.165 2.609 2.907 0.017 0.384 0.006 −0.304
5 2.72 3.155 3.776 4.101 0.014 0.421 0.005 −0.33
6 2.54 3.098 3.959 4.528 0.017 0.541 0.017 −0.586
7 2.683 3.224 3.995 4.565 0.014 0.527 0.017 −0.587
8 2.342 2.718 3.562 4.056 0.011 0.365 0.017 −0.511
9 3.394 3.915 4.883 5.435 0.017 0.504 0.013 −0.565
10 2.205 2.65 3.398 3.799 0.017 0.428 0.013 −0.414
11 1.847 2.214 2.83 3.127 0.013 0.354 0.007 −0.304
12 2.66 3.131 3.961 4.418 0.015 0.456 0.011 −0.468
13 3.459 3.924 4.918 5.362 0.011 0.454 0.006 −0.45
14 3.108 3.527 4.402 4.892 0.013 0.406 0.011 −0.501
15 3.211 3.673 4.418 4.801 0.016 0.446 0.004 −0.387
16 2.345 2.673 3.444 3.878 0.007 0.321 0.012 −0.446
17 1.413 1.62 2.149 2.371 0.006 0.201 0.006 −0.228
18 2.326 2.663 3.517 3.85 0.011 0.326 0.008 −0.341

Table 9. Defuzzified values and normalized weights.

Defuzzification Weight normal Defuzzification Weight normal

1 2.858 0.48 10 3.011 0.51


2 3.402 0.57 11 2.503 0.42
3 3.311 0.56 12 3.54 0.59
4 2.359 0.4 13 4.414 0.74
5 3.436 0.58 14 3.98 0.67
6 3.528 0.59 15 4.024 0.68
7 3.614 0.61 16 3.083 0.52
8 3.167 0.53 17 1.887 0.32
9 4.404 0.74 18 3.087 0.52

The weight components obtained are reported in Table 8.

To obtain the crisp value of each weight, the "defuzzification" is carried out using the average
value method, and the weights are then normalized, as shown in Table 9.
The analysis of the weight results shows that, according to the opinions expressed by the judges, the
sustainable city is particularly influenced by public transportation, fuel, household GWh and cars, while
a low sensitivity is associated with water consumption and ISO 14000 certified companies.

CONCLUSION

The methodology applied for calculating indicator weights with respect to selected criteria points
out the importance of decision-maker subjectivity. In assigning the weight of one indicator with
respect to another, every decision maker tends to reason in a less objective way.
An urban planner, for example, will give more importance to cycling paths or green areas, while
a chemical engineer will pay more attention to air pollution problems. Nevertheless the proposed
system, even though it starts from subjective evaluations, makes it possible to combine different
opinions on the various indicators by means of different criteria. Moreover, the final result is a
combination of the values assigned by different judges for the various criteria through fuzzy numbers,
which translate verbal expressions into numerical quantities.

ACKNOWLEDGMENT

The authors thank Prof. D. Zaninelli for the precious contribution.

REFERENCES

1. Agati L., Ancilla G., Indicators/Index finalized to the acquaintance of the environment. Seminar
“Sistemi informativi di governo per l’ambiente” Italy 4–5 May 1999.
2. www.regione.liguria.it/territor/9_agenda/not_ind.htm
3. Italian Coordinament Agende 21 Locali: Sustainable indicators, Florence 10 September 1999.
4. Smith, P.N., Applications of Fuzzy Sets in the Environmental Evaluation of Projects. Journal of
Environmental Management (1994) 42, 365–388.
5. Buckley, J. J., Ranking alternatives using fuzzy number. Fuzzy Sets and Systems (1985) 15, 21–31.
6. Yager, R. R., Fuzzy decision-making including unequal objectives. Fuzzy Sets and System (1978) 1,
87–95.
7. Smith, P. N., Applications of Fuzzy Sets in the Environmental Evaluation of Projects. Journal of
Environmental Management (1994) 42, 365–388.
8. Yager, R. R., A procedure for ordering fuzzy subsets of the unit interval. Information Sciences (1981)
24, 143–161.
9. Buckley, J. J., The fuzzy mathematics of finance. Fuzzy Sets and Systems (1987) 21, 257–273.


Fuzzy cost recovery in planning for sustainable water supply systems in developing countries

Kameel Virjee & Susan Gaskin
Department of Civil Engineering and Applied Mechanics, McGill University, Montreal, Canada

ABSTRACT: Providing water to all the world’s inhabitants is a daunting task. In order to make the
task at all feasible, efficiency in planning is required. Demand-responsive project design is directly
related to the sustainability of rural water systems, and cost recovery is a significant indicator of
demand responsiveness. As such, cost recovery can be used as a proxy indicator of sustainability.
Here, a cost recovery criterion is developed using fuzzy set theory as an uncertainty representation
tool. This criterion is based on the Hamming distance measure and allows for the quantitative
distinction between costs and revenues. Such a tool has utility in the comparison of alternative
projects and policies for sustainable project selection.

INTRODUCTION

The ubiquitous provision of basic water supplies to all of the world’s population is, most definitely,
a necessity. For this goal to become at all feasible, it is required that scarce resources be used
effectively. The effective use of resources implies that any particular implemented project or system
shall meet the demands of the anticipated users for the entirety of the system’s design life. This
definition of effectiveness is congruous with the notion of project sustainability as demarcated in
[1]. So, the objective of effective resource utilization is equivalent to the goal of sustainable project
design.
The sustainability of projects is influenced by a number of different factors. For example, environ-
mental quality, financial management, and institutional capacity all influence project sustainability
[2]. In the developing countries’ water supply and sanitation context, it has been shown that the
degree to which a system meets the demands of the anticipated users is directly related to the ability
of the system to meet its design objectives through time, that is, its sustainability [3]. Here, the
demands of the users are differentiated from their “needs”, which can be regarded as having been
developed without consultation with the users. As a relationship between the demand responsive-
ness of a system and its sustainability exists, it is reasonable to assume that the degree to which
a particular solution meets the demand of the users can be treated as a sustainability criterion.
Demand responsiveness is a multidimensional concept, and can be defined through four major
principles [4].
• Water should be managed as an economic, as well as social, good.
• Management of the water resource should be at the lowest appropriate level.
• A holistic approach to the utilization of water should be employed.
• Women should play a key role in the management of water.
The requirement that water be managed as an economic good, implies that the charge levied
upon users be commensurate with the costs of making the resource available. The degree to which
the costs of producing water are met by revenue generated through user payments is dependent
upon the willingness to pay for the service by the users. Cost recovery, regarded as the degree to


which user payments meet the costs of operating the system, then, can be regarded as a measure of
demand responsiveness. By extension, the notion of cost recovery also should be correlated with
the sustainability of the project. If the costs of the system can be recouped through user payments
it will be able to pay for its maintenance through time. Also, when users are paying for the system,
they will place demands for the level of service it delivers, and will ensure that the institutions
supporting the system are developed and maintained. Thus, user payments and their relation to the
costs of operating and maintaining the system provide insight into the financial and institutional
sustainability of a water supply system.
Cost recovery, then, can be regarded as a necessary condition for project sustainability. Given this,
it seems quite useful to ensure that its estimation be conducted prior to the implementation of any
new system so that the potential effectiveness of a proposed system can be predicted. In attempting
to predict cost recovery, both the costs and revenues must be predicted. Also, due to uncertainty in
these predictions, a suitable method for comparing predicted costs and revenues must be developed.
In this paper, the prediction of revenues and costs, using fuzzy methods is discussed and a
fuzzy cost recovery criterion is developed. In the section immediately following, the prediction and
modeling of costs and revenues in rural water supply systems is discussed. A brief discussion of
fuzzy set theory and fuzzy linear regression follows. Finally, a cost-recovery sustainability criterion,
for the comparison of fuzzy costs and revenues, is developed.

REVENUE AND COST PREDICTION IN WATER SUPPLY SYSTEMS

In order to predict the level of cost recovery in a rural water system, it is required that revenues be
anticipated. Revenues are dependent upon the number of connectors and the tariff charged to them.
The number of connectors is also dependent upon a number of factors, such as the tariff to be levied
for use, the specifications, in terms of quality and reliability, of the new system and various socioe-
conomic factors, such as income and education. It is useful, then, to be able to model the number of
connectors given a particular suite of project features, as this allows for the prediction of revenues.
In order to assess the number of potential users who will connect to a system, it is required
that their maximum willingness to pay be established. There are a number of methods available
to estimate maximum willingness to pay. The travel cost method uses an estimation of the actual
time spent travelling to and waiting for the service (e.g. the time taken to walk to and wait in line
for a days water supply) to derive some economic value of the service [5]. Alternatively, choice
modeling, where respondents are presented with a number of different policy options and asked to
value them in a pairwise manner, can be used [6]. The most frequently applied method in rural water
supply willingness to pay estimation, however, is the contingent valuation method [7]. This method
involves the presentation of a single hypothetical policy option to respondents, who are then required
to indicate whether they would participate in the project given some financial cost. A number of
methods are available to arrive at the financial cost that is agreeable to the respondent, their
maximum willingness to pay. Dichotomous choice models allow for the respondent to answer only
yes or no to a single improved policy. Alternatively, respondents can be repeatedly asked about their
participation given different prices for the same policy option. The latter approach is known as an iterative
bidding game, and is the most widely used method in willingness-to-pay surveys for water services [7]. In all
of these methods, the various explanatory variables, as given by economic theory, are also surveyed.
Regression analysis is often used to construct a mathematical model representing respondents’
willingness to pay and its variation with the various explanatory socioeconomic variables. The
correct method for constructing such models is to use a probit regression model, with the dependent
variable as the decision to connect to the system or not. Ordinary least squares regression, with the
dependent variable as the midpoint of the interval in which the true maximum bid lies, however,
has been shown to yield results which are consistent with those given by a probit regression model
[8]. Virjee and Gaskin [9] show that fuzzy linear regression [10, 11] has the potential to model
imprecision in the structure of the regression model, but that current methods do not have the ability
to adequately treat independent variables whose values are nominal. Such fuzzy regression methods


are conceptually enticing as they allow for non-stochastic uncertainty. Also, fuzzy regression allows
for extensions in survey methods to capture linguistic uncertainty in willingness to pay bids.
Willingness to pay functions can be used to predict the number of connectors given various
tariff levels. Given the number of connectors, the revenue- tariff relation can be built. Using fuzzy
regression techniques to build the willingness to pay behavioral model allows for the construction
of a fuzzy revenue function. Such a function gives the interval in which the revenue lies for each
tariff level, and so connection rate.
In the development of cost estimates, expert judgement is required. Also, there is uncertainty
inherent in the estimate of construction times, soil types, etc. Particularly where judgements are
based on linguistic assessments, as is often the case with expert judgements, fuzzy set theory
has application. Thus, fuzzy cost estimates of feasible alternatives will represent the uncertainty
involved. Costs involved in a particular system will be fixed, independent of number of connectors,
as well as variable. Based on past experience with rural water schemes in similar geographic
contexts and expert judgement, fuzzy cost estimates can be developed and used in conjunction
with the fuzzy revenue functions developed via fuzzy regression, to assess the possibility of cost
recovery for a given alternative. For example, Chang et al. [12] discusses the use of fuzzy regression
in the modeling of costs of wastewater treatment facilities in Taiwan.
The use of fuzzy sets here allows for a more robust measure of cost recovery than the com-
parison of the expected costs and revenues. It also has the potential to represent uncertainty in the
human judgement used to estimate costs and revenues, and so to portray more accurately the potential
for cost recovery and thus the sustainability of a rural water supply system.

FUZZINESS AND FUZZY LINEAR REGRESSION

Central to the idea of incorporating fuzziness into the assessment of cost recovery for rural water
supply projects is the concept of a fuzzy number. A fuzzy number is defined over an interval, where
each point in the interval is assigned some degree of membership in the set of the number. Dubois
and Prade [13] defined a Left-Right (L-R) fuzzy number, M, as having a membership function
µM (x), where
$$\mu_M(x)=\begin{cases}L\left((\alpha-x)/c_{\mathrm{left}}\right) & x\le\alpha;\ c_{\mathrm{left}}>0\\ R\left((x-\alpha)/c_{\mathrm{right}}\right) & x\ge\alpha;\ c_{\mathrm{right}}>0\end{cases}\qquad(1)$$
L and R represent left and right reference functions, α is the mean value of M , and cleft and cright
are the left and right spreads of M . The spreads of the fuzzy number are indicative of its fuzziness,
and so with increasing spreads, fuzziness, too, increases. Symbolically,
M = (α, cleft , cright )LR (2)

The reference functions, L and R, have the following properties:


1. L(z) = L(−z)
2. L(0) = 1
3. L is non-increasing on the interval [0, ∞)
Figure 1 shows a general representation of the above reference functions for fuzzy numbers, and Figure 2
shows an example of an L-R fuzzy number. Reference functions describing L and R often take the
form 1 − x^p [14]. With p = 1, a triangular fuzzy number results. This simple representation
of a fuzzy number is often used when the shape of the L-R reference functions is not known more
specifically. A graphical representation of a symmetric triangular fuzzy number (TFN) is shown in
Figure 3, where α represents the center of the fuzzy number and c is the fuzzy half-width.
As each point in a fuzzy set has, related to it, a membership value, we can define the level of
inclusion, h, as in Figure 3. The h-level set of some fuzzy set A is
Ah = {x ∈ X , µA (x) ≥ h} (3)
In words, Ah is the set of all elements of X whose membership in A is at least h.
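A minimal sketch (not from the paper) of a symmetric TFN, its membership function of the form used later in Eqs (6) and (7), and the h-level set of Eq. (3):

```python
# Symmetric triangular fuzzy number (TFN) with centre alpha and half-width c.

def tfn_membership(x, alpha, c):
    """mu(x) = max(0, 1 - |alpha - x| / c) for a symmetric TFN."""
    return max(0.0, 1.0 - abs(alpha - x) / c)

def h_level_set(alpha, c, h):
    """The interval {x : mu(x) >= h} of a symmetric TFN, for 0 < h <= 1."""
    half_width = c * (1.0 - h)
    return (alpha - half_width, alpha + half_width)

print(tfn_membership(4.5, alpha=4.0, c=2.0))  # 0.75
print(h_level_set(4.0, 2.0, h=0.5))           # (3.0, 5.0)
```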


[Figure 1: the reference functions L(z) and R(z) equal 1 at z = 0 and decrease towards 0 as z increases.]

Figure 1. Example of L and R reference functions.

[Figure 2: an L-R fuzzy number with membership 1 at x = α, left branch L((α − x)/cleft) and right branch R((x − α)/cright).]

Figure 2. L-R fuzzy number.

[Figure 3: a symmetric triangular fuzzy number centred at α with half-width c; the h-level set at h = 0.5 (the degree of fit) is the interval over which the membership is at least 0.5.]

Figure 3. Symmetric triangular fuzzy number.


Based on the above, and using simplified symmetric triangular fuzzy numbers (TFNs), where
L = R = 1 − x, fuzzy linear regression is a tool that can be used to fit observed data to linear models.
The resultant model gives values of the dependent variable as symmetric TFNs. Thus, in the case of
willingness to pay surveys, the modeled willingness to pay of users is fuzzy. Using fuzzy regression
allows for the building of a fuzzy revenue function, based on the fuzziness induced in predicted
willingness to pay by model uncertainty.
Fuzzy linear regression is normally formulated as a linear or quadratic programming problem
as follows.

$$\begin{aligned}
\min\ & V\\
\text{subject to:}\quad & \alpha^{t}x_i+(1-h)\sum_j c_j x_{ij}\ \ge\ y_i+(1-h)e_i\\
& -\alpha^{t}x_i+(1-h)\sum_j c_j x_{ij}\ \ge\ -y_i+(1-h)e_i\qquad c_j\ge 0,\quad i=1\ldots M
\end{aligned}\qquad(4)$$

where V is the vagueness of the model, normally equal to the sum of the fuzzy half widths of the
model parameters, h is the level of fit desired, specified by the modeler, and ei is the measured
uncertainty in the value of the dependent variable, Yi .
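The programme above is linear and can be sketched, for instance, with scipy.optimize.linprog (an implementation choice, not the authors'); |x| is used in the spread term, which reduces to the paper's c_j x_j for non-negative regressors, and the data at the bottom are purely hypothetical:

```python
# Sketch of the fuzzy linear regression programme of Eq. (4) as a linear programme.
import numpy as np
from scipy.optimize import linprog

def fuzzy_linear_regression(X, y, e, h=0.5):
    """X: (M, p) regressors (include a column of ones for the intercept);
    y: (M,) observed centres; e: (M,) measured spreads of the observations."""
    M, p = X.shape
    absX = np.abs(X)
    cost = np.concatenate([np.zeros(p), np.ones(p)])   # minimise vagueness V = sum of spreads c_j
    # alpha.x_i + (1-h) c.|x_i| >= y_i + (1-h) e_i  and  alpha.x_i - (1-h) c.|x_i| <= y_i - (1-h) e_i
    A_ub = np.vstack([np.hstack([-X, -(1 - h) * absX]),
                      np.hstack([ X, -(1 - h) * absX])])
    b_ub = np.concatenate([-(y + (1 - h) * e), y - (1 - h) * e])
    bounds = [(None, None)] * p + [(0, None)] * p       # alpha free, spreads c >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.x[p:]                         # centres alpha, spreads c

X = np.column_stack([np.ones(4), [1.0, 2.0, 3.0, 4.0]])  # intercept + one regressor
y = np.array([2.1, 3.9, 6.2, 7.8])                       # hypothetical willingness-to-pay data
alpha, c = fuzzy_linear_regression(X, y, e=np.zeros(4), h=0.5)
print(alpha, c)   # fitted fuzzy model: Y(x) = (alpha . x, c . |x|) as a symmetric TFN
```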

FUZZY COST RECOVERY

So, based on the fuzziness in the willingness to pay for improved services, a fuzzy revenue function
can be developed. Equally, due to the fuzziness induced in the cost estimate, a fuzzy cost relation
can be built. Figure 4 shows a general fuzzy cost and revenue graph. The revenues are zero at a
tariff of zero and increase to a maximum before falling to zero again at some absolute maximum
tariff. Costs, composed of a fixed and variable cost, are maximum when the number of connectors is
maximum, when tariffs are zero, and fall to be equal to the fixed costs when there are no connectors.
Here, both revenues and costs are only considered as the operations and maintenance values. Capital

[Figure 4: fuzzy revenue and cost curves plotted against increasing tariff P (equivalently, decreasing number of connectors Ω); the central h = 1 curves, αr and αc, are bounded by the h = 0 limits given by the spreads cr and cc.]

Figure 4. Fuzzy costs and revenues.


[Figure 5: the symmetric triangular fuzzy sets R(pi) and C(pi) for revenues and costs at tariff pi, with centres αr, αc and spreads cr, cc; the cost set lies inside the revenue set.]

Figure 5. Fuzzy costs and revenues at pi, Ωi.

costs are generally not covered entirely through user payments [8]. Instead they are met through
the contribution of donors, central and regional governments and other bodies. As such, the cost
recovery of capital costs is less important with regard to the system's functionality through time.
Figure 4 shows costs and revenues given a tariff, pi, or a rate of participation in the project, Ωi.
The fuzzy revenue is represented by {αr, cr} and the costs by {αc, cc}. These two fuzzy sets are
shown in Figure 5. As can be seen, the fuzzy set representing the costs is included in the revenue
set. Based only on the equating of the costs and revenues with µ(x) = 1, the most possible curve,
cost recovery would be anticipated. The use of fuzzy sets here illustrates that there is uncertainty in
this as both the costs and revenues can take values in their support intervals. It becomes apparent
that it is necessary to develop some comparison method, which defines the degree to which costs
could be recovered.
The problem of discerning the certainty of cost-recovery given two fuzzy sets can be viewed as
an issue of ranking two fuzzy numbers. That is, if the fuzzy revenue is greater than or equal to the
fuzzy cost, cost recovery will exist. A number of methods have been developed in the literature in
order to tackle the problem of ordering sets.
Yager [15] introduced a function, the integral of the mean of the h-level sets of a fuzzy set, to
give a value in ℝ+ to each set. Sets with higher values, based on this function, are ranked higher.
This is of little use in the comparison of costs to revenues, as it does not help in the distinguishing
of degrees of certainty that costs will be recovered.
Modarres and Nezhad [16] introduce a preference ratio, which investigates the degree to which a
fuzzy number is larger than another over the x-axis. This index separates the x-axis into two regions,
one where one of the numbers is preferred, and another where the other is preferred. Again, this
has little utility in the case of cost-recovery, as an overall impression of to whether costs will be
recovered, is desired.
For a fuzzy revenue set, with membership µR(Ωi), and a fuzzy cost set, with membership µC(Ωi),
defined at some level of project participation, Ωi, we can define a cost recovery criterion, ICR, as
having the following properties:
1. ICR = 1 iff αr − cr ≥ αc + cc
2. ICR = 0 iff αr + cr ≤ αc − cc
3. ICR ∈ [0, 1]


That is, if there is no overlap between µR(Ωi) and µC(Ωi), ICR can take values only in {0, 1}, where
ICR = 0 implies that cost recovery is impossible and ICR = 1 implies that cost recovery is certain.
Thus, there is a transition between necessity and impossibility of cost recovery. The intermediate
stages of possibility are related to the uncertainty arising due to overlap between the cost and
revenue sets and so the index, ICR , should be related to the degree of overlap between the two sets.
We can use the concept of distance to distinguish between the cost and revenue fuzzy sets. The
Minkowski distance represents a class of distance measures:

$$d(A,B)=\left[\int_X\left|\mu_A(x)-\mu_B(x)\right|^{p}dx\right]^{1/p}\qquad(5)$$

where p ≥ 1. The Minkowski distance is a summation of the differences between the membership
functions of the two fuzzy sets, where the definition of distance is affected by the parameter p. By
setting p = 1, the Hamming distance is specified.
The membership functions of the symmetric TFNs, the costs and the revenues, are defined as follows:

$$\mu_R(x)=1-\frac{|\alpha_R-x|}{c_R}\qquad\text{for }\alpha_R-c_R\le x\le\alpha_R+c_R\qquad(6)$$
$$\mu_C(x)=1-\frac{|\alpha_C-x|}{c_C}\qquad\text{for }\alpha_C-c_C\le x\le\alpha_C+c_C\qquad(7)$$
Also,
µA∩B (x) ≤ min (µA (x), µB (x)) (8)

The fuzzy set defined by the intersection operator can be regarded as a zone of indifference or
equivalence. As such, the total area of the revenue set which does not overlap with the cost set can be
considered as the content of the revenue set which is not equal to the cost. Of this non-overlapping
region, some may be less than the cost and some more. In an attempt to characterize the potential for
cost recovery, the areas defined by the revenue exceeding the cost for a given h will be considered
positive. Those areas where costs exceed revenues shall be negative.
We shall define the point E, on X , the universal set, as the point with the highest membership in
the fuzzy intersection set.
µR=C (E) = sup min (µR (x), µC (x)) (9)

We can now define the degrees to which the relations ≤ and ≥ hold true, m≤ and m≥:

$$m_{\ge}=m(\mu_R\ge\mu_C)=\int_{E}^{\infty}\left[\mu_R(x)-\mu_C(x)\right]dx\qquad(10)$$

$$m_{\le}=m(\mu_R\le\mu_C)=\int_{0}^{E}\left[\mu_R(x)-\mu_C(x)\right]dx\qquad(11)$$

following the Hamming distance measure. We also define

$$m_{=}=\int_X\mu_{A\cap B}(x)\,dx\qquad(12)$$

and $m_R$, the area of the revenue membership function:

$$m_R=\int_X\mu_R(x)\,dx\qquad(13)$$


[Figure 6: membership functions µC(x) and µR(x) of the example cost and revenue sets over x from 0 to 7, with the overlap area m= and the area m≥, where the revenue exceeds the cost, indicated.]

Figure 6. Example 1.

The cost-recovery index is then:

$$I_{CR}=\begin{cases}\dfrac{1}{m_R}\left(\dfrac{1}{2}\,m_{=}+m_{\ge}\right) & \text{for }\alpha_r\ge\alpha_c\\[2ex]1-\dfrac{1}{m_R}\left(\dfrac{1}{2}\,m_{=}+m_{\le}\right) & \text{for }\alpha_r\le\alpha_c\end{cases}\qquad(14)$$

We can see that the properties required of this index are met by the above formulation.

Proofs
1. If αr − cr ≥ αc + cc, then m= = 0 and αr ≥ αc, so m≥ = mR and ICR = 1.
2. If αr + cr ≤ αc − cc, then m= = 0 and αr ≤ αc, so m≤ = mR and ICR = 0.
3. By definition, m= + m≥ ≤ mR and m= + m≤ ≤ mR, so ICR ≤ 1; and if αr + cr ≤ αc − cc, min(ICR) = 0.

EXAMPLE

Figure 6 shows two fuzzy sets with membership functions µR(x) and µC(x).

\alpha_r \ge \alpha_c \;\therefore\; I_{CR} = \frac{1}{m_R}\left(\frac{1}{2}m_{=} + m_{\ge}\right); \quad
m_R = \frac{1}{2}(6-2)(1) = 2; \quad
m_{=} = \frac{1}{2}(4-2)\!\left(\frac{1}{2}\right) = \frac{1}{2};

m_{\ge} = \frac{1}{2}(6-4)(1) + \frac{1}{2}(1)(1) = 1\tfrac{1}{2}; \quad
I_{CR} = \frac{1}{2}\left(\frac{1}{2}\cdot\frac{1}{2} + 1\tfrac{1}{2}\right) = \frac{7}{8}
Such a value implies that cost recovery is quite possible, but not certain in the fuzzy sense.
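As a numerical check of Example 1, the sketch below (ours, not from the paper) discretises the two membership functions and evaluates Eqs. (9)–(14) directly. The revenue set is the TFN with αr = 4, cr = 2 read from Figure 6; the cost parameters αc = 2, cc = 2 are likewise our reading of the figure rather than values stated in the text. Under these assumptions the index reproduces ICR = 7/8.

```python
import numpy as np

def tfn(alpha, c):
    # Symmetric TFN membership of Eqs. (6)-(7)
    return lambda x: np.clip(1.0 - np.abs(alpha - x) / c, 0.0, None)

def cost_recovery_index(alpha_r, c_r, alpha_c, c_c, x_max=20.0, n=200001):
    """Discretised Eq. (14): I_CR from the areas m_R (13), m_= (12) and m_>= (10) or m_<= (11)."""
    x = np.linspace(0.0, x_max, n)
    mu_r, mu_c = tfn(alpha_r, c_r)(x), tfn(alpha_c, c_c)(x)
    m_r = np.trapz(mu_r, x)                            # area under the revenue set, Eq. (13)
    overlap = np.minimum(mu_r, mu_c)
    m_eq = np.trapz(overlap, x)                        # overlap area, Eq. (12)
    E = x[np.argmax(overlap)]                          # point of highest intersection membership, Eq. (9)
    if alpha_r >= alpha_c:
        mask = x >= E
        m_ge = np.trapz((mu_r - mu_c)[mask], x[mask])  # Eq. (10)
        return (0.5 * m_eq + m_ge) / m_r
    mask = x <= E
    m_le = np.trapz((mu_r - mu_c)[mask], x[mask])      # Eq. (11)
    return 1.0 - (0.5 * m_eq + m_le) / m_r

print(cost_recovery_index(4.0, 2.0, 2.0, 2.0))         # ~0.875, i.e. 7/8 as in Example 1
```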

CONCLUSIONS

In this paper a cost recovery criterion is developed. Based on the Hamming distance, this criterion
allows costs and revenues of water supply systems, represented as symmetric TFNs, to be compared.
The criterion has the beneficial characteristic of approaching a value of one as cost recovery becomes certain.


A value of zero, in contrast, implies that cost recovery is impossible. Movement away from either of
these two extremes represents not only a shift towards the other outcome but also an increase in the
general uncertainty involved in comparing the two values, with uncertainty at its maximum when
ICR = 1/2.

NOMENCLATURE

Ah h-level set
c The fuzzy spread of fuzzy number M
d(A, B) Distance between sets A and B
h Degree of fit
ICR Cost recovery index
L(x), R(x) Left and Right reference functions
m≤ Degree to which µR ≤ µC
m≥ Degree to which µR ≥ µC
m= Degree to which µR = µC
mR Area under µR
p Level of tariff for water services
V Vagueness measure
α The mean value of fuzzy number M
µA∩B Intersection between sets A and B
µM (x) The membership function of a fuzzy number M
 The proportion of potential users connecting to improved system

REFERENCES

1. Kleemeier, E., The impact of participation on sustainability: An analysis of the Malawi Rural Piped
Scheme Program, World Development, Vol. 28, No. 5, pp 929–944, 2000.
2. European Union, Towards Sustainable Water Resources Management, Part 1: Strategic Approach, Ratio-
nal and Key Concepts at www.europa.eu/int/comm/development/publicat/water/en/part1index_en.htm,
last seen June 2000.
3. Katz, T., The link between demand-responsiveness and sustainability; evidence from a global study,
World Bank, Washington, DC, 1998.
4. Water and Sanitation Program, Community Water Supply and Sanitation Conference, UNDP-World
Bank, www.wsp.org, 1998.
5. Loomis, J., Environmental valuation techniques in water resource decision making, Journal of Water
Resources Planning and Management, Vol. 126, No. 6, pp 339–344, 2000.
6. Blamey, R., Gordon, J. and Chapman, R., Choice modelling: Assessing the environmental values of
water supply options, Australian Journal of Agricultural and Resource Economics, Vol. 43, No. 3, pp
337–356, 1999.
7. Briscoe, J., Furtado de Castro, P., Griffin, C., North, J. and Olsen, O., Toward equitable and sustainable
rural water supplies: A contingent valuation study in Brazil, The World Bank Economic Review, Vol.
4, No. 2, pp 115–134, 1990.
8. Whittington, D., Mujwahuzi, M., McMahon, G., and Choe, K., Willingness to pay for water in Newala
District, Tanzania: Strategies for cost recovery, WASH field report No. 246, 1989.
9. Virjee, K. and Gaskin, S., Fuzzy linear regression and willingness to pay for water services, Proceedings
1st Annual Environmental and Water Resources Systems Analysis Symposium, Roanoke, VA, May 19–22,
2002.
10. Tanaka, H., Uejima, S. and Asai, K., Linear regression analysis with fuzzy model, IEEE transactions
on systems, man, and cybernetics, Vol. SMC-12, No. 6, pp 903–907, 1982.
11. Tanaka, H., Interval regression analysis by quadratic programming approach, IEEE transactions on
fuzzy systems, Vol. 6, No. 4, pp 473–481, 1998.
12. Chang, N-B., Chen, Y. L. and Chen H. W., A fuzzy regression analysis for the construction cost
estimation of wastewater treatment plants. I. Theoretical development, Journal of Environmental Science
and Health, Part A – Environmental Science and Engineering, Vol. 32, No. 4, pp 885–899, 1997.


13. Dubois, D. and Prade, H., Operations on fuzzy numbers, International Journal of Systems Science, Vol. 9,
No. 6, pp 613–626, 1978.
14. Bardossy, A., Bogardi, I. and Duckstein, L., Fuzzy regression in hydrology, Water Resources Research,
Vol. 26, No. 7, pp 1497–1508, 1990.
15. Yager, R., A procedure for ordering fuzzy subsets of the unit interval, Information Sciences, Vol. 24,
pp 143–161, 1981.
16. Modarres, M. and Sadi-Nezhad, S., Ranking fuzzy numbers by preference ratio, Fuzzy Sets and Systems,
Vol. 118, pp 429–436, 2001.


Possibility theory and fuzzy logic applications to risk assessment problems

M.N. Carcassi, G.M. Cerchiara & L. Zambolin
Dipartimento di Ingegneria Meccanica Nucleare e della Produzione (DIMNP), Università degli Studi di Pisa (Italia)

ABSTRACT: In risk analysis it is common to use probability to quantify the degree of occurrence
of an event. In most cases, however, the identification of the main risk factors in an industrialized
area is affected by large errors, caused by incomplete knowledge of the behaviour of the parameters
involved or of the relations among them. If the target is a more precise description of the phenomenon,
fuzzy sets can be used to describe the parameters of the problem and fuzzy measures to deepen
knowledge of it. The quantities employed by the Theory of Possibility are Necessity, Possibility,
Belief and Plausibility; when the ignorance associated with a problem is zero, they reduce to the
classical probability, so the purpose of these quantifiers is to characterize a complex problem while
decreasing the associated uncertainty. When the uncertainty is zero, the problem is not uncertain but
only imprecise, and probability suffices for a complete description of the phenomenon. The paper
illustrates in which areas of industrial risk assessment, and in which related case studies, Possibility
Theory can be applied with advantage.

INTRODUCTION

In risk assessment the quantity most widely used to measure the degree of occurrence of an
event is the probability. The technique is widely applied, but the results are not always satisfactory:
some international benchmarks have shown how often the results can be affected by errors spanning
a few orders of magnitude. Recent research programmes have therefore turned to the study of the
uncertainties involved in PRA calculations, with the aim of reducing the uncertainty of the results.
A problem is defined as complex when its parameters cannot be schematized in a strict way (e.g.
human behaviour), when the relations between them are vague (e.g. vulnerability models), or when
some of them are unknown (e.g. when the transition from deflagration to detonation occurs in an
explosion). As we will see, the use of Possibility Theory involves a mental approach slightly different
from the deterministic one, but it does not exclude the use of PRA. It is important to point out that
these results cannot substitute for classical PRA studies; the aim is to complete the analysis, to help
the decision maker faced with a complex problem, and to give more information than is obtainable
from the classical probabilistic approach alone.

THE MEASURES OF THE EVIDENCE

For the treatment of problems with a high degree of complexity, the Theory of Possibility defines
four quantities, Necessity, Possibility, Credibility and Plausibility [1,2], which are conceptual and
mathematical generalizations of probability. Whenever the ignorance associated with the parameters
of the problem is different from zero, probability alone is not sufficient to describe the problem.
Starting from the Aristotelian premise that if an event is necessary its contrary is impossible,
Necessity is defined as the measure of an event for which not only do the preconditions of occurrence
exist, but the conditions are such as to make its occurrence ineluctable.


A quantity expressing a weaker concept than Necessity is the Possibility of an event, for which
only the presuppositions for its occurrence exist. Uncertainty and imprecision can be defined through
these two measures. The substantial difference between them is that uncertainty depends on the
ignorance internal to the (complex) problem, whereas imprecision is connected with the evaluations
and measurements made on the parameters of the problem. Credibility and Plausibility are
epistemological measures of the event; they refer conceptually to Necessity and Possibility respectively,
and they are defined on a body of knowledge about an event according to the logic of inference.
Inference can be of two kinds, deductive and inductive: everything that can be deduced from the
body of knowledge about an event is credible, and everything that is not in contradiction with it is
plausible. In the world of imprecision, if we consider a measurement of a quantity G with magnitude
M that must belong to an interval of values I, we can write [2]:

G ∈ I is possible if M ∩ I ≠ ∅;  G ∈ I is necessary if M ⊆ I.   (1)

The concepts of possibility and necessity can be extended by considering uncertainty instead of
imprecision. In the next section we treat the measures of credibility and plausibility of an event and
their relations with probability and the Theory of Evidence.

THE CREDIBILITY AND THE PLAUSIBILITY

All that is known (even incompletely) about a phenomenon represents its evidence; all that can
be deduced from the body of evidence about the phenomenon represents its credibility; and what,
on the contrary, is not in contrast (by induction) with the evidence of the phenomenon is identified
with its plausibility. Given the definition of credibility, it is easy to see that the ignorance associated
with a complex event A, belonging to the universe of events X, lies between what is credible of A
and what is credible of Ā. In formulas [4] (with Cr(A) = Credibility of the event A and Pl(A) =
Plausibility of the event A):

Ignorance(A) = 1 − [Cr(A) + Cr(Ā)]   (1)
Ignorance(A) = [Pl(A) + Pl(Ā)] − 1   (2)

Figure 1 represents the universe of states X and the sets giving the evidence of A and of Ā, in the
case of a non-complex event [Anc; Ānc] and of a complex event [Ac; Āc].
Let Pr(Anc) be the probability of the event Anc. When the event is not complex [Anc] the ignorance
is equal to zero, and from equations (1) and (2) we obtain:

Cr(Anc) = Pl(Anc) = Pr(Anc)   (3)

Figure 1. Graphic representation of a complex and vague event Ac and a non-complex and crisp event Anc.

1 Set-theoretic symbols used: ∩ = intersection; ∪ = union; ⊆ = contained in; ∈ = belongs to; ∉ = does not belong to; Ā = not A; ∅ = null set.


Pr(Anc ) + Pr(Ānc ) = 1 (4)


Starting from the conceptual definitions of Credibility and Plausibility, we have so far defined them
linguistically. The relations between Cr(A) and Pl(A) must satisfy both equations (1) and (2):
Pl(A) = 1 − Cr(Ā) (5)
Cr(A) = 1 − Pl(Ā) (6)
For example, if A ≡ [a, b, c] with An = 3 elements, its power set PA has PAn = 2^An elements,
called focal elements, such that PA ≡ [(∅), (a), (b), (c), (a, b), (a, c), (b, c), (a, b, c)]. If x is an element
of a universe X, we can assign a membership value of x to each of the subsets A ⊂ PX (PX = power
set of X) according to the knowledge that we have of the phenomenon x; in this way we obtain a
credibility map of x measured in membership terms. Credibility and Plausibility are functions such
that [2,4]:
Cr(A) + Cr(Ā) ≤ 1 (7)
Pl(A) + Pl(Ā) ≥ 1 (8)
This is an obvious result if one considers that an event, before becoming credible, must be plausible,
and that the Plausibility measure is therefore always greater than, or at most equal to, that of
Credibility. In summary we have:

Cr(A) ≤ Pr(A) ≤ Pl(A)   (with ignorance ≠ 0)   (9)
Cr(A) = Pr(A) = Pl(A)   (with ignorance = 0)   (10)
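Returning to the power-set example above, a few lines of Python (ours, added for illustration) enumerate the 2^An candidate focal elements:

```python
from itertools import chain, combinations

def power_set(universe):
    """All 2^n subsets of a finite universe: the candidate focal elements of the example A = [a, b, c]."""
    s = list(universe)
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

subsets = power_set(['a', 'b', 'c'])
print(len(subsets), subsets)   # 8 subsets, from the empty set up to {a, b, c}
```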

THE PROBABILITY CORES

To calculate Credibility and Plausibility we define the Probability Cores, quantities that represent
the probabilistic value assigned to the evidence on the parameters of the complex problem. They
constitute the knowledge, however poor, that we have of the problem. The function expressed by
these quantities is defined as [2]:

m: P(X) → [0, 1]   (11)

with P(X) the power set of the universe X. The binding conditions that the Probability Core functions
must satisfy are:

m(∅) = 0   (12)

\sum_{A \in P(X)} m(A) = 1   (13)

It is important to point out that the Probability Core m(A) represents the evidence associated with
the set constituted by the event, but not the evidence relative to all its subsets. We define:

Cr(A) = \sum_{B \subseteq A} m(B)   (14)

Pl(A) = \sum_{B \cap A \neq \emptyset} m(B)   (15)

Cr(A) is the total evidence of A and of all the subsets B contained in A; Pl(A) is the total evidence
of A, of all the subsets B contained in A, and of all the sets B that intersect A. From relations (14)
and (15) it follows that Cr(A) ≤ Pl(A). The Probability Cores are data of the problem under analysis;
they are provided by experts and constitute the knowledge that we have of the parameters of the
problem and of the interactions among them.
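As an illustration of Eqs. (14)–(15), the following Python sketch computes Credibility and Plausibility from a given assignment of Probability Cores. The universe {a, b, c} and the core values are invented for the example, not taken from the paper; the only constraints respected are Eqs. (12)–(13).

```python
def credibility(A, m):
    """Eq. (14): sum of the cores of all focal elements B contained in A."""
    return sum(mass for B, mass in m.items() if B <= A)

def plausibility(A, m):
    """Eq. (15): sum of the cores of all focal elements B having non-empty intersection with A."""
    return sum(mass for B, mass in m.items() if B & A)

# Hypothetical body of evidence on the universe X = {a, b, c}; cores sum to 1 (Eqs. 12-13)
m = {
    frozenset({'a'}): 0.2,
    frozenset({'a', 'b'}): 0.3,
    frozenset({'a', 'b', 'c'}): 0.5,   # residual ignorance assigned to the whole universe
}
X = frozenset({'a', 'b', 'c'})
A = frozenset({'a', 'b'})
print(credibility(A, m), plausibility(A, m))          # 0.5 and 1.0: Cr(A) <= Pl(A)
print(1 - credibility(A, m) - credibility(X - A, m))  # ignorance on A, Eq. (1): 0.5
```

Applied to the cores of Table 1 later in the paper (treating each union such as R5 ∪ R6 as the set {R5, R6}), the same functions reproduce, for example, Cr(R5 ∪ R6) = 0.023 + 0.023 + 0.054 = 0.1 for case 1.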


NECESSITY AND POSSIBILITY MEASURES

Consider a set X of imprecise and uncertain knowledge. Among the subsets of X we have the null
set ∅, which represents the impossible event, and the generic set A ⊂ X, which has a certain degree
of evidence based on the knowledge that we have of the elements constituting it. Calling g(A) the
confidence measure of what can occur, we have:

g(∅) = 0;  g(X) = 1   (16)

If A is impossible then g(A) = 0; if on the contrary it represents the certain event then g(A) = 1.
In a universe X of infinitely many elements we can consider a nested sequence on PX such that,
calling Ai the subsets of PX with 1 ≤ i ≤ n, we have A1 ⊆ A2 ⊆ … ⊆ Ai (first type) or A1 ⊇ A2 ⊇ … ⊇ Ai
(second type):

\text{If } A_1 \subseteq A_2 \Rightarrow g(A_1) \le g(A_2); \qquad \lim_{n \to +\infty} g(A_n) = g\!\left(\lim_{n \to +\infty} A_n\right)   (17)

Let’s consider, for the rest, a nested sequence of first type on X, we define a congruous body of
evidence a set of Plausibility Pl(Ai ) [or Credibility Cr(Ai )] not in logical conflict among them. If
A is a generic event and S is the sure event belong to a nested sequence of PX then the Necessity
of A is defined as the function g(A) = ν(A) such that:

ν(A) = 1 if S ⊆ A; ν(A) = 0 in the other cases. (18)

we define Possibility of A the function g(A) = π(A) such that:

π(A) = 1 if A ∩ S  = ; π(A) = 0 in the other cases. (19)

Let’s consider A, B ∈ PX we define on PX :

g(A ∩ B) = ν(A ∩ B) ≤ min [g(A), g(B)] = min[ν(A), ν(B)]


g(A ∪ B) = π(A ∪ B) ≥ max [g(A), g(B)] = max[π(A), π(B)] (20)
ν(A ∩ B) ≤ min[ν(A), ν(B)]; π(A ∪ B) ≥ max[π(A), π(B)] (21)

On a nested sequence of PX we define:

g(A ∩ B) = Cr(A ∩ B) = ν(A ∩ B) = min[ν(A), ν(B)]
g(A ∪ B) = Pl(A ∪ B) = π(A ∪ B) = max[π(A), π(B)]   (22)

For a dual universe X, constituted by A and Ā, from equations 5, 6 and 22 we write:

g(A ∩ Ā) = Cr(A ∩ Ā) = ν(A ∩ Ā) = min[ν(A), ν(Ā)] = 0
g(A ∪ Ā) = Pl(A ∪ Ā) = π(A ∪ Ā) = max[π(A), π(Ā)] = 1   (23)

ν(A) = 1 − π(Ā);  π(A) = 1 − ν(Ā)   (24)

The following relations are analogous to equations 9 and 10, in the cases, respectively, of
uncertainty associated with the event different from zero and of uncertainty equal to zero:

ν(A) ≤ Pr(A) ≤ π(A)   (25)
ν(A) = Pr(A) = π(A)   (26)

In the particular case of a dual universe X ≡ (A; Ā), the Necessity and Possibility measures coincide
with Credibility and Plausibility.


THE POSSIBILITY DISTRIBUTION FUNCTION

Given the dual character of the Necessity and Possibility measures described by relations 24, a
Possibility Distribution Function p(x) is defined as a mapping of x ∈ X into the unit interval:

p: X → [0, 1]   (27)

Considering a nested sequence of Ai on PX constituted by n elements, the possibility measure of
A ∈ PX will be [2,4]:

\pi(A) = \max_{x \in A} p(x)   (28)

The distribution will be an ordered sequence of values pi, with 1 ≤ i ≤ n and n the length of the
distribution:

p(x) = [p1(x), p2(x), …, pn(x)]   (29)

Since the Plausibility can be calculated through equation (15) in terms of Probability Cores,
we obtain Pl(xi) = π(xi) = p(xi). Fixing the Probability Cores distribution m = (µ1, µ2, …, µn),
with µi = m(Ai), we have:

\pi(x_i) = \sum_{k=i}^{n} m(A_k)   (30)

or, recursively, π(xi) = µi + π(x_{i+1}) with π(x_{n+1}) = 0, so that:

π(x1) = µ1 + µ2 + µ3 + … + µn   {this term is always equal to one, by equation (13)}
π(x2) = µ2 + µ3 + … + µn
π(x3) = µ3 + … + µn
…
π(xn) = µn   (31)

There are essentially two procedures for determining a Possibility Distribution. The first
prescribes fixing the sure event x1, belonging to A1, as the first element of the sequence
A1 ⊆ A2 ⊆ … ⊆ Ai, with value π(x1) = 1; then defining, through the probability cores µi, the events
A2 … An with greater ignorance; and finally calculating the distribution π(xi). The other way of
determining π(xi) is to fix the Probability Cores, then to identify the nested sequence of maximum
length, which represents the largest body of evidence for the problem, and finally to calculate the
π(xi) distribution. If we consider a quantity associated with a large uncertainty, it can be shown that,
in the Fuzzy Logic framework, the Possibility Distribution can be used as a Membership Function.
With analogous considerations it is possible to determine the Necessity Distribution ν(xi) [2]. From
equation 25 and the distribution definition given by equation 30, it follows that the Probability
Distribution is bounded above by the Possibility Distribution and below by the Necessity one. When
the ignorance associated with the problem is null, the two distributions π(xi) and ν(xi) converge to
the Probability. This implies that, while a low Possibility degree also indicates a low Probability (in
the limit, an impossible event is also improbable), a high Possibility degree is neither necessary nor
sufficient to guarantee a high Probability. We can likewise deduce that a high Necessity degree
implies a high Probability degree (in the limit, a certain event has a probability of 100%), whereas a
low Necessity degree is neither necessary nor sufficient to establish a low Probability degree for the
event under study.
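A minimal sketch of this construction (ours, not from the paper): given the cores µi of a nested sequence A1 ⊆ … ⊆ An, the recursion π(xi) = µi + π(x_{i+1}) of Eqs. (30)–(31) takes only a few lines. The cores below are hypothetical values chosen only so that they sum to one.

```python
def possibility_distribution(cores):
    """pi(x_i) = sum_{k >= i} mu_k, built with pi(x_i) = mu_i + pi(x_{i+1}), pi(x_{n+1}) = 0 (Eqs. 30-31)."""
    pi, tail = [], 0.0
    for mu in reversed(cores):       # accumulate from the widest set A_n back to A_1
        tail += mu
        pi.append(tail)
    return list(reversed(pi))

mu = [0.4, 0.3, 0.2, 0.1]            # hypothetical cores on A1 ⊆ A2 ⊆ A3 ⊆ A4, summing to 1 (Eq. 13)
print(possibility_distribution(mu))  # approximately [1.0, 0.6, 0.3, 0.1]; pi(x1) = 1 as required
```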

CALCULATION OF CREDIBILITY AND PLAUSIBILITY THROUGH EVENTS TREE

As an example we consider the layout of a deep fryer shown in Figure 2; choosing as initiating event (I.E.)
the failure of the Thermostat (3), the Events Tree of Figure 3 gives the accident sequence {D}.


Figure 2. Scheme of the plant: (1) Electric deep fryer; (2) Oil; (3) Thermostat; (4) High-temperature switch;
(5) Smoke detector; (6) Sprinkler.

Figure 3. Configuration of the accident sequences of events for the plant of Figure 2.

Now let us submit the problem to a possibilistic analysis through the Events Tree. We consider an
existing degree of ignorance on the working of the Smoke Detector (which can be invalidated by
possible air currents) and of the Sprinkler (wrong measurement of the temperature, late intervention),
and we suppose, hypothetically, that all the components considered are independent, even though
the same feed serves both the Thermostat (3) and the Switch (4); a configuration of events (component
failures) that can possibly lead to the fire is illustrated in Figure 3. We take
Pr(R3) = Pr(R4) = Pr(R5) = Pr(R6) = 10⁻³, where Pr(Rj) is the probability of failure of the j-th
component; in consequence Pr(A) ≅ 10⁻³, Pr(B) ≅ 10⁻⁶, Pr(C) ≅ 10⁻⁹ and Pr(D) ≅ 10⁻¹². Having
fixed the base events (failures of the components), the values of the Probability Cores are assigned,
in accordance with equation (13), to every focal element of the reference power set; in this way we
can assign a probabilistic membership value of the event "fire" to each of the sets which constitute it.
The probability of an event without ignorance factors coincides with the values of Credibility
and Plausibility of the event itself {see equation (10)}; in particular, for the event "failure of the
Thermostat (3)" (event A) the probabilistic value Pr(R3) = 10⁻³ coincides with its values of Pl(R3)
and Cr(R3). Proceeding with the assignment of the Probability Cores, we have divided the ignorance
between the two events R5 and R6 (Case 1) and their combinations with the other focal elements;
using equations (13) and (14) for the calculation of the Credibility, the resulting values are illustrated
in Table 1 below.


Table 1. Probability Cores and Credibilities of the focal elements in cases 1 & 2.

Elements of power set    N.d.P. Case 1   N.d.P. Case 2   Credibility case 1                Credibility case 2
R3                       0.001           0.001           Cr(R3) = 0.001                    Cr(R3) = 0.001
R4                       0.001           0.001           Cr(R4) = 0.001                    Cr(R4) = 0.001
R5                       0.023           0.001           Cr(R5) = 0.023                    Cr(R5) = 0.001
R6                       0.023           0.001           Cr(R6) = 0.023                    Cr(R6) = 0.001
R3 ∪ R4                  0.002           0.002           Cr(R3 ∪ R4) = 0.004               Cr(R3 ∪ R4) = 0.004
R3 ∪ R5                  0.024           0.002           Cr(R3 ∪ R5) = 0.048               Cr(R3 ∪ R5) = 0.004
R3 ∪ R6                  0.024           0.002           Cr(R3 ∪ R6) = 0.048               Cr(R3 ∪ R6) = 0.004
R4 ∪ R5                  0.024           0.002           Cr(R4 ∪ R5) = 0.048               Cr(R4 ∪ R5) = 0.004
R4 ∪ R6                  0.024           0.002           Cr(R4 ∪ R6) = 0.048               Cr(R4 ∪ R6) = 0.004
R5 ∪ R6                  0.054           0.002           Cr(R5 ∪ R6) = 0.1                 Cr(R5 ∪ R6) = 0.004
R3 ∪ R4 ∪ R5             0.05            0.003           Cr(R3 ∪ R4 ∪ R5) = 0.125          Cr(R3 ∪ R4 ∪ R5) = 0.12
R3 ∪ R4 ∪ R6             0.05            0.003           Cr(R3 ∪ R4 ∪ R6) = 0.125          Cr(R3 ∪ R4 ∪ R6) = 0.12
R3 ∪ R5 ∪ R6             0.1             0.003           Cr(R3 ∪ R5 ∪ R6) = 0.249          Cr(R3 ∪ R5 ∪ R6) = 0.12
R4 ∪ R5 ∪ R6             0.1             0.003           Cr(R4 ∪ R5 ∪ R6) = 0.249          Cr(R4 ∪ R5 ∪ R6) = 0.12
R3 ∪ R4 ∪ R5 ∪ R6        0.5             0.972           Cr(R3 ∪ R4 ∪ R5 ∪ R6) = 1         Cr(R3 ∪ R4 ∪ R5 ∪ R6) = 1

Table 2. Values of the Probability Cores, Plausibilities and Credibilities in case 3, in which the Thermostat (3) is considered the critical element.

Elements of power set    N.d.P. (Case 3)   Credibility (Case 3)               Plausibility (Case 3)
R3                       0.6               Cr(R3) = 0.6                       Pl(R3) = 0.941
R4                       0.001             Cr(R4) = 0.001                     Pl(R4) = 0.306
R5                       0.001             Cr(R5) = 0.001                     Pl(R5) = 0.266
R6                       0.001             Cr(R6) = 0.001                     Pl(R6) = 0.266
R3 ∪ R4                  0.06              Cr(R3 ∪ R4) = 0.661                Pl(R3 ∪ R4) = 0.996
R3 ∪ R5                  0.02              Cr(R3 ∪ R5) = 0.621                Pl(R3 ∪ R5) = 0.996
R3 ∪ R6                  0.02              Cr(R3 ∪ R6) = 0.621                Pl(R3 ∪ R6) = 0.996
R4 ∪ R5                  0.002             Cr(R4 ∪ R5) = 0.004                Pl(R4 ∪ R5) = 0.379
R4 ∪ R6                  0.002             Cr(R4 ∪ R6) = 0.004                Pl(R4 ∪ R6) = 0.379
R5 ∪ R6                  0.002             Cr(R5 ∪ R6) = 0.004                Pl(R5 ∪ R6) = 0.337
R3 ∪ R4 ∪ R5             0.05              Cr(R3 ∪ R4 ∪ R5) = 0.714           Pl(R3 ∪ R4 ∪ R5) = 0.975
R3 ∪ R4 ∪ R6             0.05              Cr(R3 ∪ R4 ∪ R6) = 0.714           Pl(R3 ∪ R4 ∪ R6) = 0.975
R3 ∪ R5 ∪ R6             0.05              Cr(R3 ∪ R5 ∪ R6) = 0.694           Pl(R3 ∪ R5 ∪ R6) = 0.935
R4 ∪ R5 ∪ R6             0.05              Cr(R4 ∪ R5 ∪ R6) = 0.059           Pl(R4 ∪ R5 ∪ R6) = 0.3
R3 ∪ R4 ∪ R5 ∪ R6        0.091             Cr(R3 ∪ R4 ∪ R5 ∪ R6) = 1          Pl(R3 ∪ R4 ∪ R5 ∪ R6) = 1

The Credibility of a fire in the real case of non-null ignorance is again, according to equation (23), equal to 10⁻³.
Comparing the failure data of the components affected by uncertainty (case 1) with the same data
under a smaller degree of uncertainty (case 2, see Table 1), we notice that the limit imposed in
case 2 corresponds to the maximum degree of uncertainty; in this limiting hypothesis we have
supposed that the ignorance is spread over the working of all the components of the system and is
very small {N.d.P.(R3 ∪ … ∪ R6) = Probability Core of the event "fire" associated with the event
(R3 ∪ … ∪ R6) = 0.948}. The values of the Plausibility calculated through the focal elements in
cases 1 and 2 {equation (15)} are very high, above 0.75; as regards the Plausibility, calculated as
prescribed by equation (15), the situation is as described in Table 1. Let us now consider, unlike
case 2, an ignorance that is well identifiable in a single component (or in a reduced group of
components). The results of such a possibilistic analysis are illustrated in Table 2.
group of components). In the Table 2 the results are illustrated of a Possibilistic analysis considering


In this analysis (case 3) a large ignorance is associated with the working of the Thermostat (3). The
results of the Credibility calculation show a lower probability limit that cannot be improved further
except through technological development of the components or the addition of redundancies, while
the Plausibility calculation, on the contrary, does not give significant results.

CALCULATION OF THE POSSIBILITY DISTRIBUTIONS THROUGH THE EVENTS TREE

If the Probability Cores distribution is m = (µ1, µ2, …, µn) for the three cases analyzed in the
previous section, it is possible to obtain the Possibility Distributions π(xi) as prescribed by equation
(31) {or, recursively, π(xi) = µi + π(x_{i+1}) with π(x_{n+1}) = 0}. Referring to the Events Tree of
Figure 3, the nested sequence of events to consider is Event A ⊆ Event B ⊆ Event C ⊆ Event D.
The Probability Cores distributions [mi] were normalized [mi^(n)] using the sum of the µj, [Sµj], so
that equation (13) is respected:

m1 = (0.001, 0.002, 0.05, 0.5), Sµj = 0.053 → m1^(n) = (0.003, 0.007, 0.09, 0.9)
m2 = (0.001, 0.002, 0.003, 0.972), Sµj = 0.978 → m2^(n) = (0.001, 0.002, 0.003, 0.996)
m3 = (0.6, 0.06, 0.05, 0.091), Sµj = 0.801 → m3^(n) = (0.744, 0.075, 0.061, 0.12)
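For the three normalised core distributions just listed, equation (31) gives the following possibility values over the nested sequence A, B, C, D. The code is ours; the π values below are simply what the cumulative sums of the quoted cores produce and have not been read off Figure 4.

```python
def possibility(cores):
    """Eq. (31): pi over a nested sequence is the cumulative sum of the cores from the i-th element onwards."""
    return [sum(cores[i:]) for i in range(len(cores))]

m1n = (0.003, 0.007, 0.09, 0.9)      # case 1, normalised cores quoted in the text
m2n = (0.001, 0.002, 0.003, 0.996)   # case 2
m3n = (0.744, 0.075, 0.061, 0.12)    # case 3
for m in (m1n, m2n, m3n):
    print([round(p, 3) for p in possibility(m)])
# approximately [1.0, 0.997, 0.99, 0.9], [1.002, 1.001, 0.999, 0.996] and [1.0, 0.256, 0.181, 0.12]
```

The case-2 values slightly exceed one only because the normalised cores printed in the text themselves sum to 1.002 after rounding.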

The Possibility distributions of the event fire (I) for the considered cases are illustrated in
Figure 4. Considering that the Possibility Distribution Function constitutes the upper limit of the
Probability of the event, from the analysis of Figure 4 we can see that for case 1 and case 2
(ignorance concentrated on two components and diffuse ignorance, respectively) the possibility of
a fire represents an upper limit too high to be significant. In the third case we are able to
discriminate the most representative terms. Case 3 gives useful information about a high degree of
risk (possibilistic rather than probabilistic) when there is a large ignorance on the correct working
of a component whose failure coincides with the initiating event considered in the Events Tree
study. While the Possibility Distribution Function πi(I) is the upper limit of the Probability of the
event, the Necessity Distribution Function νi(Ī) is the lower limit of the Probability of the event
"not fire", obtainable according to equations (25), as illustrated in Figure 4.
Since νi(Ī) is the lower limit of the probability of the event "not fire", for cases 1 and 2 the
probability that a fire does not occur cannot be lower, respectively, than 10⁻¹ and 5·10⁻². For case 3
the probability of (Ī) is bounded below by a distribution which shows as critical event only the
first one (failure of the Thermostat = Event A). Of remarkable interest would be the calculation, for
the three considered cases, of νi(I), in order to determine a lower limit on the probability of fire. Consider

Figure 4. Distributions of possibility and necessity of the event fire versus the failures (A, B, C, D) in the
three considered Cases.


the histograms of Figure 5, showing the Probability Cores used, according to equation (31), for the
Possibility calculation. The Necessity of a generic event Ei belonging to the universe X can be
obtained through the Probability Cores [3]:

\forall E_i \in X \;\rightarrow\; \nu(E_i) \equiv \mathrm{Max}\,[m(E_i) - m(\bar{E}_i)], \quad \text{with } m(\bar{E}_i) \equiv \mathrm{Max}\,[m(E_j)] \;|\; E_j \in X,\; E_j \neq E_i

For a compound event A we obtain:

\forall A \subseteq X \;\rightarrow\; \nu(A) \equiv \sum_{E_i \in A} \mathrm{Max}\,[m(E_i) - m(\bar{A})], \quad \text{where } m(\bar{A}) \equiv \mathrm{Max}\,[m(E_i)] \;|\; E_i \notin A
In the first two cases the Necessity Distributions are ν1(I) = ν2(I) = 0. For case 3 the distribution
is illustrated in Figure 6. It shows the importance, for the correct working of the plant, of the
Thermostat, which by hypothesis is the component with which a large ignorance was associated,
while in the first two cases the analysis through the Necessity and Possibility distributions does not
give significant results.

Figure 5. Values of the Probability Cores m (I) relative to the event fire, used in the three considered cases
and graphic inference for the calculation of the necessity distribution.

Figure 6. Possibility and necessity distributions of the event fire versus the failures (A,B,C,D) in the case 3.


CONCLUSION

The Theory of Evidence, combined with classical probability, has been applied with success to risk
assessment problems. From the results of the applications we can observe that the theory is useful
in the sense that it can help the decision maker by giving more information about a complex
problem; for example, it can be used as an analysis tool for identifying the critical members of a
system and the probability trend of the Top Event versus the failures of the components of the
system itself. The applications also show that the role of the expert constitutes an important part of
the building of the model. To improve the model and make the results more incisive, one can think
of combining the expert opinions by Dempster's rule, which has not been done in this work. An
interesting evolution of the model consists in the use of frequencies in place of the Probability
Cores [3], combined by Dempster's rule with the opinions provided by experts.

REFERENCES

1. Shafer, G., A Mathematical Theory of Evidence, Princeton University Press, 1976.
2. Dubois, D. and Prade, H., Possibility Theory: An Approach to Computerized Processing of Uncertainty, Plenum Press, New York, 1988.
3. Dubois, D. and Prade, H., Unfair coins and necessity measures: Towards a possibilistic interpretation of histograms, Fuzzy Sets and Systems, Vol. 10, April 1983.
4. Ross, T.J., Fuzzy Logic with Engineering Applications, McGraw-Hill, New York, 1995.
5. Zadeh, L.A., Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and Systems, 1978.
6. Dubois, D. and Prade, H., A class of fuzzy measures based on triangular norms: A general framework for the combination of uncertain information, International Journal of General Systems, 1982.


Social aspect of sustainable development


Research on woods as sustainable industrial resources – evaluation of tactile warmth for woods and other materials

Yoshihiro Obata∗ & Kozo Kanayama
Institute for Structural and Engineering Materials, National Institute of Advanced Industrial Science and Technology, Nagoya, Japan

Yuzo Furuta
Biological Function Science Course, Graduate School of Kyoto Prefectural University, Kyoto, Japan

ABSTRACT: The importance of wood from the viewpoint of sustainable development is discussed:
it is a sustainable resource, in contrast to limited mineral resources, and it is a stock of carbon fixed
from carbon dioxide by photosynthesis, which counts against global warming. The engineering
evaluation of its good points can encourage the more effective use of wood as a substitute for mineral
resources and support sustainable forestry. In this paper, the good tactile warmth of wood is treated
as one of these good points. The relationship between the contact surface temperature and the thermal
effusivity is derived from a theoretical analysis of the heat transfer phenomenon, and it explains
existing experimental knowledge on the tactile warmth of wood and other materials rationally and
quantitatively. Finally, the contact surface temperature and the thermal effusivity are proposed as
engineering measures with which to evaluate the tactile warmth of wood and other materials.

INTRODUCTION

Wood as prospective sustainable resource


Recently, the concept of "sustainable development" has been taken seriously in the fields of
resources, energy and environment, which are essential for the continuous development of human
society. The sustainable use of resources without damage to the environment is our challenge and
our duty, to meet not only the needs of the present society but also those of future generations.
Minerals such as metals, oil and gas are important resources for engineering materials and energy,
and we fear their shortage in the near future because they are finite. The development of new
substitute resources for industrial materials is required in order to save the use of limited mineral
resources as much as possible for the needs of future generations. We also face a serious global
environmental problem, global warming, caused by the greenhouse effect of several gases such as
carbon dioxide, chlorofluorocarbons, methane and nitrous oxide. Carbon dioxide in particular is
estimated to account for 57% of the greenhouse contribution [1]. The principal sources of carbon
dioxide are considered to be fossil fuels and deforestation. The reduction of greenhouse gas
emissions is a global problem that all countries should tackle.
Wood is an attractive alternative to limited mineral resources, because it can be a sustainable
resource if the cycle of cutting down, planting and growing trees is continued [2]. It also has an
advantage with respect to the global warming problem, since trees grow by fixing carbon from
carbon dioxide through photosynthesis. Some people may think that, to counter global warming,
we should not cut down trees. Generally speaking, however, younger forests have a higher ability
to fix carbon from carbon dioxide than older ones [3]. This implies that it is important to cut down older

∗ Corresponding author. e-mail: y-obata@aist.go.jp


Figure 1. Better use of wood for longer time contributes to the sustainable development (flow chart: trees – wood – industrial materials – recycle – wooden waste – energy source, within a sustainable forestry cycle).

and inactive trees and to replace them with younger and active ones. The Five-Story Pagoda of the
Horyu temple in Japan, which is the world's oldest surviving wooden structure and a UNESCO world
cultural heritage site, gives us another important hint with respect to global warming: the main wood
pole of the pagoda was cut down in A.D. 594 [4], which means that it has stored the fixed carbon
dioxide for over 1400 years. Another important idea is therefore to use wood as long as possible, to
postpone the time at which the carbon fixed from carbon dioxide by photosynthesis becomes carbon
dioxide again.
Figure 1 shows, as a flow chart, how the use of wood as a substitute for mineral resources in
industrial materials helps against the problems of resource shortage and global warming. Figure 1
also contains the idea of recycling wood for longer storage of the fixed carbon dioxide. Technologies
to improve the bad points of wood for more effective use of low-value wood such as thinned wood,
to evaluate its good points so that wood is used more as a substitute for mineral resources, and to
recycle wood so that it serves longer as a stock of fixed carbon dioxide will encourage us to pursue
sustainable development with the system shown in Figure 1.

Good tactile warmth of wood


Good tactile warmth is one of the points on which wood is better than other materials. For example,
a wooden exterior handrail has good tactile warmth regardless of the season, whereas a metallic one
feels too hot to touch in summer and too cold in winter. Recently, compressed wood made from
thinned wood has been used for exterior handrails; it can be expected to substitute for the metallic
handrail because it has not only good tactile warmth but also sufficient strength. Engineering
measures for evaluating the tactile warmth of both wood and metals are needed in order to compare
their tactile warmth objectively, so that wood can be used more as a substitute for metal.
There have been several reports on the relationship between tactile warmth and physical
quantities [5–8]. Their measure of tactile warmth is based on statistics of the judgements of
experimental subjects. Some of these papers paid attention to the relation between sensory tactile
warmth and thermal conductivity, pointing out that materials with smaller thermal conductivity were
felt to be warmer. Okajima et al. treated building materials including wood and metals, but concluded
that it was difficult to find a simple expression for the relation between thermal conductivity and
sensory tactile warmth [6]. Harada et al. reported that the sensory tactile warmth of wood has a high
negative linear correlation with the logarithm of thermal conductivity [8], and Harada's result has
been generally accepted as the relation between the sensory tactile warmth and the physical quantities
of wood [9]. But it is impossible to compare the tactile warmth of wood and metals from these results,
because their measure of sensory tactile warmth is only valid within the closed group of materials
on which the sensory inspection was carried out.


The aim of this study is to obtain the relation between tactile warmth and material properties
from a theoretical analysis of the heat transfer phenomenon common to wood and other materials,
and to establish an evaluation system for the tactile warmth of materials. A desirable measure of
tactile warmth should be determinable absolutely from physical quantities, applicable not only to
woods but also to other materials, and easy to understand, so that the good points of wood can be
explained to end users and wood can be used more as a substitute for mineral resources. Firstly, we
analyze the transient one-dimensional heat conduction problem in which two semi-infinite bodies
come into contact with each other, and derive the relationship between the contact surface
temperature and the thermal effusivity [10]. We then review the sensory tactile warmth of wood
reported in Ref. [8] in terms of the contact surface temperature and thermal effusivity; other
experimental knowledge on tactile warmth is also discussed with these properties. Finally, as a result
of these discussions, we propose the contact surface temperature as an engineering measure of
tactile warmth.

THEORY

Let us consider the heat transfer phenomenon that governs when our hand comes into contact with
various materials. We pull back our hand unconsciously and quickly when we touch something too
hot or too cold. This suggests that tactile warmth is sensed sharply immediately after the contact,
after which it is judged whether it is safe to keep the hand in contact with the material. Moreover,
the human sensory organs for warmth and coldness are located about 0.4 mm and 0.2 mm below the
skin, respectively [11]. The heat transfer phenomenon governing tactile warmth is therefore the one
that occurs near the contact surface in a short time after the contact. We can then derive an
approximate analytical model for this transient response from the transient one-dimensional heat
conduction problem in which two semi-infinite bodies, with different material properties and
different uniform initial temperatures, are placed in contact with each other at their free surfaces.
Now consider the heat transfer problem in which two semi-infinite bodies come into contact with
each other at x = 0 and the temperatures at the surfaces of both bodies become equal. The basic
equations of transient one-dimensional heat conduction, the initial conditions and the boundary
conditions are given as follows.
Basic equations:

\lambda_H \frac{\partial^2 T_H(t,x)}{\partial x^2} = C_H \rho_H \frac{\partial T_H(t,x)}{\partial t}   (1)

\lambda_M \frac{\partial^2 T_M(t,x)}{\partial x^2} = C_M \rho_M \frac{\partial T_M(t,x)}{\partial t}   (2)

Initial conditions:

T_H(0,x) = T_{iniH} \quad \text{at } t = 0   (3)

T_M(0,x) = T_{iniM} \quad \text{at } t = 0   (4)

Boundary conditions:

T_H(t,0) = T_M(t,0) = T_{cs}(t) \quad \text{at } x = 0   (5)

\lambda_H \left( \frac{\partial T_H(t,x)}{\partial x} \right)_{x=0} = \lambda_M \left( \frac{\partial T_M(t,x)}{\partial x} \right)_{x=0} \quad \text{at } x = 0   (6)

where T is the temperature, x is the coordinate, t is the time, λ is the thermal conductivity, C is the
specific heat, ρ is the density, Tini is the initial temperature before contact and Tcs is the contact
surface temperature. The subscripts of H and M denote hand and material, respectively.


The procedure of analysis is well known and the contact surface temperature Tcs is given by a
very simple expression [12]:

T_{cs} - T_{iniM} = \frac{T_{iniH} - T_{iniM}}{1 + \eta_M/\eta_H}   (7)

Table 1. Thermophysical properties used for calculation.

No.  Materials                              Face    λ [W/(m·K)]   C [kJ/(kg·K)]   ρ [kg/m³]   η [kJ/(m²·s^1/2·K)]

Woods [8]
1 Balsa (Ochroma lagopus) long. 0.070 1.633 130 0.122
2 Kiri∗ (Paulownia tomentosa) long. 0.134 1.591 330 0.265
3 Hinoki* (Chamaecyparis obtusa) long. 0.155 1.633 380 0.310
4 Karamatsu* (Larix leptolepis) long. 0.203 1.633 490 0.403
5 Karamatsu* (Larix leptolepis) end 0.313 1.674 510 0.517
6 Seraya (Parahorea sp.) long. 0.190 1.633 560 0.416
7 Seraya (Parahorea sp.) end 0.350 1.633 570 0.571
8 Buna* (Fagus crenata) long. 0.230 1.633 700 0.513
9 Yachidamo* (Fraxinus mandshurica) long. 0.229 1.633 710 0.515
10 Yachidamo* (Fraxinus mandshurica) end 0.494 1.674 710 0.767
11 Kapur (Dryobalanops sp.) long. 0.243 1.633 730 0.538
12 Kapur (Dryobalanops sp.) end 0.394 1.674 740 0.699
13 Itayakaede* (Acer mono) long. 0.262 1.633 760 0.570
14 Itayakaede* (Acer mono) end 0.445 1.674 710 0.728
15 Shirakashi* (Quercus myrsinaefolia) long. 0.330 1.633 1020 0.742
16 Shirakashi* (Quercus myrsinaefolia) end 0.486 1.674 940 0.874
Other materials [8]
17 polystyrene foam 0.034 1.340 12 0.023
18 polyurethane foam 0.048 1.800 27 0.048
19 epoxy resin 0.386 1.047 1180 0.690
20 cement mortar 1.419 0.921 2050 1.637
Glass [14]
21 Pyrex 1.100 0.730 2230 1.338
Rocks [14]
22 marble 2.8 0.810 2600 2.428
23 granite 4.3 1.100 2650 3.540
Metals & Alloys [14]
24 bismuth 7.86 0.126 9800 3.115
25 manganese 7.82 0.479 7470 5.290
26 titanium 21.9 0.522 4506 7.177
27 steel 43 0.465 7850 12.53
28 aluminum alloy 193 0.893 2730 21.69
29 gold 315 0.129 19300 28.00
30 silver 427 0.237 10490 32.58
31 copper 355 0.415 8940 36.29
Organs of human body [15]
32 palm 0.512 – – 1.263
33 back of the hand 0.593 – – 1.346
34 sole 0.407 – – 1.012
35 instep 0.593 – – 1.346
∗ Japanese names are used for these woods without corresponding English names.

Legend: λ; thermal conductivity, C; specific heat, ρ; density, η; thermal effusivity.

82

© 2004 by Taylor & Francis Group, LLC


chap-08 19/11/2003 14: 45 page 83

where

\eta = \sqrt{\lambda C \rho}   (8)

is the thermal effusivity. We should note that Tcs in Eq. (7) is constant in time.
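As a small numerical sketch of Eqs. (7) and (8) (ours, not code from the paper), the snippet below recomputes the thermal effusivity of two entries of Table 1 and the corresponding contact temperature rise for a palm at the initial temperatures used later in the paper; the conversion of the specific heat from kJ to J is ours.

```python
import math

def effusivity(lam, c, rho):
    """Eq. (8): eta = sqrt(lambda * C * rho), here in J/(m^2 s^0.5 K)."""
    return math.sqrt(lam * c * rho)

def contact_surface_temp(T_hand, T_mat, eta_hand, eta_mat):
    """Eq. (7): Tcs = T_mat + (T_hand - T_mat) / (1 + eta_mat / eta_hand)."""
    return T_mat + (T_hand - T_mat) / (1.0 + eta_mat / eta_hand)

# Properties from Table 1 (specific heat converted from kJ/(kg K) to J/(kg K))
eta_balsa = effusivity(0.070, 1633.0, 130.0)    # ~122, i.e. 0.122 kJ/(m^2 s^0.5 K) as tabulated
eta_copper = effusivity(355.0, 415.0, 8940.0)   # ~3.63e4, i.e. 36.29 kJ/(m^2 s^0.5 K)
eta_palm = 1263.0                               # palm, from Table 1

T_hand, T_room = 305.0, 293.0                   # K, the initial temperatures used below
print(contact_surface_temp(T_hand, T_room, eta_palm, eta_balsa) - T_room)   # ~10.9 K: balsa feels warm
print(contact_surface_temp(T_hand, T_room, eta_palm, eta_copper) - T_room)  # ~0.4 K: copper feels cold
```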

NUMERICAL RESULT AND DISCUSSION

Numerical conditions and material properties


Let us consider the situation in which a human hand comes into contact with various materials
at room temperature. Considering that a person feels moderate warmth when the average skin
temperature is 33–34°C [13], we give the following initial temperatures to the hand and the materials:
TiniH = 32°C = 305 K and TiniM = Troom = 20°C = 293 K. TiniH is set 1–2 K lower than the average
skin temperature since the hand is an extremity of the human body. Table 1 shows the thermophysical
properties used for the numerical calculations in this work.

Evaluation of the sensory tactile warmth on woods with contact surface temperature
The sensory tactile warmth of wood was reported in detail by Harada et al. [8], who showed that the
sensory tactile warmth of woods is proportional to the logarithm of the wood's thermal conductivity.
We review their result in terms of the contact surface temperature. Figure 2 shows the relationship
between their sensory tactile warmth and our contact surface temperature: the sensory tactile warmth
has a high positive correlation with the logarithm of the temperature difference between the contact
surface temperature and the material's initial temperature.
The relationship is expressed as follows:

S ∝ K1 log(Tcs − Troom) ∝ K2 log(1 + ηM/ηH)⁻¹   (9)

where K1 and K2 are constants. Eq. (9) coincides with Fechner's formula, which relates the state of
mind S to the stimulus R as follows:

S = K log R   (K: constant)   (10)

This suggests that the contact surface temperature and the thermal effusivity can be used as measures
of tactile warmth.

Figure 2. Sensory tactile warmth and contact surface temperature (S versus Tcs − TiniM [K] on a logarithmic scale, for insulation, plastics, cement mortar and wood, longitudinal and end faces; TiniH = 305 K, TiniM = 293 K).


Table 2. Sensory tactile warmth and material properties.

Materials          Face    S      λ [W/(m·K)]    η [kJ/(m²·s^1/2·K)]
Seraya wood        long.   4.36   0.190          0.416
Seraya wood        end     3.25   0.350          0.571
Shirakashi wood    long.   3.09   0.330          0.742
Shirakashi wood    end     1.88   0.486          0.874

Legend: S = sensory tactile warmth; λ = thermal conductivity; η = thermal effusivity.

Figure 3. Contact surface temperature for various materials (Tcs − TiniM [K] versus ηM [kJ/(m²·s^1/2·K)] for wood, insulation, glass, cement mortar, rock and metals & alloys; curves for ηH = ηpalm and ηH = ηcopper; TiniH = 305 K, TiniM = 293 K).

The thermal conductivity of wood in the fibre direction is 2.25–2.75 times that in the direction
perpendicular to it [16]. Harada et al. treated the influence of the wood's anisotropic thermal
conductivity on the sensory warmth and reported that touching the end face of a wood felt colder
than touching the longitudinal face of the same wood. Table 2 shows, however, that in a mixed
contact system with end and longitudinal faces of different woods, the order of thermal conductivity
does not correspond to the order of sensory tactile warmth, whereas the order of thermal effusivity
does. This result suggests that the thermal effusivity evaluates the tactile warmth of wood more
accurately than the thermal conductivity.

Contact surface temperature for woods and other materials


Equation (7) is applicable not only to wood but also to other materials. Figure 3 shows the relationship
between the contact surface temperature and the thermal effusivity for various materials. The solid
line represents the contact surface temperature when the thermal effusivity of the palm is used as ηH
in Eq. (7). The line shows that materials with smaller thermal effusivity are felt to be warmer than
those with larger effusivity. It also shows that the difference in contact surface temperature between
different woods is very large, which explains our experience that we can easily distinguish the tactile
warmth of different woods in spite of the small differences in their thermophysical properties. Copper
was used as a heat source instead of the human palm in some experimental works [6, 8]. The dotted
line shows the contact surface temperature when the thermal effusivity of copper is used as ηH instead
of that of the palm; in this case it is difficult to distinguish the contact surface temperatures of
different woods.
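The last point can be made concrete with a small sketch (ours, not from the paper) that evaluates Eq. (7) for two woods of Table 1, once with the palm and once with copper playing the role of the hand, as in the experiments of [6, 8]:

```python
def delta_tcs(eta_hand, eta_mat, dT=12.0):
    """Eq. (7) as a rise above the material temperature; dT = TiniH - TiniM = 12 K here."""
    return dT / (1.0 + eta_mat / eta_hand)

eta_palm, eta_copper = 1.263, 36.29   # kJ/(m^2 s^0.5 K), from Table 1
woods = {"Kiri (long. face)": 0.265, "Shirakashi (long. face)": 0.742}

for name, eta in woods.items():
    print(name, round(delta_tcs(eta_palm, eta), 1), round(delta_tcs(eta_copper, eta), 2))
# palm:   ~9.9 K for Kiri vs ~7.6 K for Shirakashi (a gap of ~2.4 K, easy to feel);
# copper: ~11.91 K vs ~11.76 K (a gap of ~0.15 K, hard to distinguish)
```

This matches the qualitative behaviour described for the solid and dotted lines of Figure 3.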
Figure 4 shows the contact surface temperature as a function of the ratio of the material's thermal
effusivity to that of the hand. The horizontal axis is on a logarithmic scale. The slope of the curve is
very gradual not only when

84

© 2004 by Taylor & Francis Group, LLC


chap-08 19/11/2003 14: 45 page 85

Figure 4. Sensible materials for the human hand (Tcs − TiniM [K] versus ηM/ηH on a logarithmic scale for insulation, wood, concrete, glass, rock and metals & alloys; ηH = ηpalm = 1.263 kJ/(m²·s^1/2·K), TiniH = 305 K, TiniM = 293 K).

Figure 5. Influence of different thermal effusivities of the hand (Tcs − TiniM [K] versus ηH [kJ/(m²·s^1/2·K)] for Kiri wood, Shirakashi wood, Pyrex, steel, aluminum alloy and copper; the value ηH = ηpalm is marked).

the ratio ηM/ηH is larger than 10 but also when it is smaller than 0.1; like metals, insulating materials
are therefore difficult to distinguish by tactile warmth. The ratio of wood's thermal effusivity to the
palm's lies between 0.1 and 1, where the slope is very sharp. This gives us a new understanding that,
from the viewpoint of tactile warmth, wood is a very sensitive material for human beings.
There are differences in the hand's thermal effusivity among individuals. Figure 5 shows the
contact surface temperature with several materials for various thermal effusivities of the hand. A
hand with smaller thermal effusivity feels wood colder than a hand with a larger one, but the former
can choose the warmer wood more easily, because the difference in contact surface temperature
between two woods is larger for it. This idea may also apply to wild animals: animals with smaller
thermal effusivity feel wood colder but can choose the warmer wood for their nest, while animals
with larger thermal effusivity are not good at choosing the warmer wood but feel the same wood
warmer than animals with smaller thermal effusivity.
Now we treat the contact with materials in summer and winter. Figure 6 shows the contact
surface temperatures when the hand comes in contact with several materials at the various initial

85

© 2004 by Taylor & Francis Group, LLC


chap-08 19/11/2003 14: 45 page 86

Figure 6. Contact surface temperature for materials with high/low initial temperatures (Tcs − TiniM [K] versus TiniM [K] for aluminum alloy, steel, Pyrex, Shirakashi wood and Kiri wood; TiniH = 305 K).

temperatures. For metals, the difference between the contact surface temperature and the initial
temperature is almost zero regardless of the initial temperature; this means that the contact surface
temperature with metals is almost the metal's initial temperature, so metals feel too hot at high initial
temperatures and too cold at low ones. Wood, on the other hand, is warmer than metal at low initial
temperatures and colder at high ones. This explains why an exterior handrail made of aluminum is
too hot to touch in summer and too cold in winter, while a wooden handrail has good tactile warmth
regardless of the season.

CONCLUDING REMARKS

We have argued that wood is a promising material against the shortage of resources and global
warming from the viewpoint of sustainable development, and we have treated the tactile warmth of
wood in order to encourage its wider use as a substitute for other industrial materials. From the
discussion on the evaluation of the tactile warmth of wood and other materials we draw the following
concluding remarks.
1. The relationship between the contact surface temperature and the thermal effusivity was derived
from the theoretical analysis of the transient one-dimensional heat conduction problem for the
contact of two semi-infinite bodies.
2. The contact surface temperature decreases for materials with higher thermal effusivity.
3. The sensory warmth of wood is proportional to the logarithm of the contact surface temperature.
4. The thermal effusivity evaluates sensory warmth properly in a mixed contact system of end and
longitudinal faces of wood.
5. The relationship between the contact surface temperature and the thermal effusivity explains
rationally why woods are felt to be much warmer than metals. It also explains why each wood
differs markedly in tactile warmth from other species of wood in spite of the small differences in
their material properties.
6. The relationship between the contact surface temperature and the thermal effusivity explains
rationally why wood has good tactile warmth regardless of the season, whereas metals are felt too
hot in summer and too cold in winter to touch.
As a result of the above remarks, we propose the contact surface temperature and the thermal
effusivity as engineering measures for evaluating the tactile warmth of wood and other materials
objectively.


NOMENCLATURE

T: Temperature [K]
Tini : Initial temperature [K]
Tcs : Contact surface temperature [K]
λ: Thermal conductivity [W/(m · K)]
C: Specific heat [J/(kg · K)]
ρ: Density [kg/m3 ]
η: Thermal effusivity [J/(m2 · s1/2 · K)]
S: Sensory tactile warmth

REFERENCES

1. Masters, G.M., Introduction to environmental engineering and science, Prentice Hall, Englewood Cliffs,
p. 389, 1991.
2. Kanayama, K., Importance of forestry resource from the viewpoint of industries (in Japanese), Wood
Industry, vol. 52, pp. 446–449, 1997.
3. Sugiyama, M., Usage of wood and environment problem – ideas in Finland – (in Japanese), Wood
Industry, vol. 54, pp. 440–443, 1999.
4. Article of Mainichi News Paper, Main pole was cut down in A.D. 594. Look over the argument on
rebuilding of Horyu- temple. (in Japanese), Mainichi morning news paper in Feb. 20, 2001.
5. Wada, Y., Oyama, T. and Imai, S. ed., Psychology handbook on sensation and perception (in Japanese),
Seishin-shobo, Tokyo, pp. 15–16, 778, 807, 1969.
6. Okajima, T., Tanahashi. I., Yasuda, T. and Takeda, Y., Tactile warmth of building materials (in Japanese),
Transaction Architectural Institute of Japan, vol. 245, pp. 1–7, 1976.
7. Matsui, I. and Kasai, Y., Study on the surface sensation of building materials – Warms and cool: Part
I (in Japanese), Transaction Architectural Institute of Japan, vol. 263, pp. 21–32, 1978.
8. Harada, Y., Nakado, K. and Sadoh, T., Thermal properties and sensory warmth of wood surfaces (in
Japanese), Journal of the Japan Wood Research Society, vol. 29, pp. 205–212, 1983.
9. Imamura, Y., Kawai, S., Norimoto, M. and Hirai, T., Wood and woody materials (in Japanese),
Toyo-shoten, Tokyo, pp. 301–303, 1997.
10. Obata, Y., Kohara, M., Furuta, Y., Kanayama, K., Evaluation of Tactile Warmth of Wood by Thermal
Effusivity (in Japanese), Journal of the Japan Wood Research Society, vol. 46, pp. 137–143, 2000.
11. Japan Society of Mechanical Engineers ed., Biomechanics (in Japanese), Ohmsha, Tokyo, pp. 213–218,
1991.
12. Incropera, F.P. and DeWitt, D.P., Fundamentals of heat and mass transfer, 3rd ed., John Wiley & Sons,
New York, pp. 259–262, 1990.
13. Bioengineering Publishing Committee ed., Bio-engineering (in Japanese), Baihukan, Tokyo, p. 54,
1992.
14. Japan Society of Thermophysical Properties ed., Thermophysical property handbook (in Japanese),
Yokendo Ltd., Tokyo, pp. 22–27, 64, 450–451, 493, 1990.
15. Yokoyama, S., Heat Transfer Phenomenon in Living Body (in Japanese), Hokkaido University Press,
Sapporo, p. 75, 1993.
16. Nakado, K., Wood Engineering (in Japanese), Yokendo Ltd., Tokyo, pp. 113–116, 1985.


New and renewable energy sources for water and environment sustainable development


The surface water retention basins as a tool for new and renewable
water and energy sources

P.S. Kollias
Dr. civil sanitary engineer

V.P. Kollias
Dr. physicist researcher, National University of Athens

S.P. Kollias
Dipl. mathematician, University of Athens

ABSTRACT: The alterations to the hydrological cycle and the uneven distribution of yearly rainfall
among different regions, at regional, local and global level, have created the need for the construction
of retention basins. The collected water can be used for water supply, irrigation, industrial use,
fire-fighting, environmental needs and other purposes. Hydrogeological studies are necessary to
identify suitable drainage basin areas. In addition, treated wastewater from secondary or advanced
treatment plants can be used efficiently for secondary uses. Power production from existing or
created falls at retention basins, used for multipurpose targets, can also offer energy.

INTRODUCTION

Environmental degradation has produced climatic changes and affected the hydrologic cycle, which is already burdened by human activities. Moreover, the uneven distribution of yearly rainfall among different regions has created the need to store water in retention basins and to diminish the rejection of water to the sea. This good-quality water can be used, after any necessary complementary treatment, for water supply, irrigation, industrial and other uses. Site selection for the construction of retention basins requires extensive investigation of the hydrological basin, the flooded area, the type of dam and further geotechnical surveys.
Moreover, treated wastewater, after secondary or advanced treatment, can be used for irrigation, industrial and other secondary uses. The economic and technical design is also examined.

CLIMATIC CHANGE AND HYDROLOGIC CYCLE

Degradation of the natural environment produces climatic changes and affects the hydrological cycle. Water deposited by precipitation on the land and on the surface of water bodies flows by surface runoff into lakes, streams, rivers and oceans, while some of it moves into the earth beneath us as groundwater. Human activities interfere with the hydrological cycle through the following mechanisms. First, overgrazing, overcultivation and deforestation increase the reflection of sunlight from the earth's surface and heat the atmosphere; clouds are dispersed and rain becomes less frequent [1]. Second, the loss of vegetated areas lowers evaporation from them and hence rain cloud formation, which results in less water. Moreover, the increasing amount of dust in the atmosphere increases reflectivity and decreases rainfall.
Finally, studies of the hydrological cycle have shown an uneven distribution of yearly rainfall among different regions, at local, regional and global level. All this has led to the necessity of creating retention basins and artificial lakes.


Such basins diminish the rejection of precious surface water, through the hydrographical network, to the sea. If evenly distributed, world runoff is estimated to be able to support a world population ten times larger than today's.
Water needs include water supply, irrigation, industry, fire fighting, environmental necessities and others, and they must be met in the way that best safeguards sustainable development together with environmental protection. Water managers must be involved and contribute to the objectives of environmental management. Intensive information of the general public about using water economically will also reinforce the sustainability of water retention basins.

SURFACE RUNOFF CONTAINMENT THROUGH THE CREATION OF RETENTION BASINS

The deficit in the hydrologic balance created by overexploitation can only be faced by containing surface runoff. Rain water can be contained by constructing retention basins at selected sites that cover the requirements of the intended use: water supply of a specific population, irrigation of agricultural land, industrial use or environmental use. The use of retention basins located at special places inside wooded areas, for fire-fighting purposes, can also be examined. Secondary treated wastewater, or tertiary treated wastewater (filtration through activated carbon or sand, in single or double filters), gives the option of recirculating water to suitable secondary uses. With these applications the hydrologic balance can be improved.

HYDROLOGICAL INVESTIGATIONS FOR THE SITE SELECTION OF A RETENTION BASIN AND THE POSSIBILITY OF CONSTRUCTING A SMALL HYDROELECTRIC POWER UNIT

Investigations for site selection and possible water quality

The first step consists of examining the existing precipitation and evapotranspiration data and the soil water losses from infiltration and percolation to groundwater aquifers. The geologic formations and the soil structure are investigated through drilled sampling wells.
Next, the water needs of the region, for water supply, irrigation, fire fighting etc., are examined and the water volume necessary for storage is estimated.
The height of the dam must be greater than that required for water storage, in order to accommodate the level of waves in the basin, especially during high floods. The necessary water storage must also be planned to cover the needs of a dry year (in the USA, one every 25 years).
For the area Es of the hydrological basin supplying the water and the area Ec of the collection basin, a rule of thumb gives Ec = 0.1 Es. A topographical survey is necessary for fixing the collection basin [1,2].

Collected water [1,2] must have suitable quality for the planned uses. Treatment is necessary:
• natural treatment and disinfection, such as rapid filtration and disinfection;
• chemical treatment and disinfection, such as prechlorination, coagulation, flocculation, sedimentation, filtration and disinfection.
In addition, if the collected water is to be used for irrigation, the dissolved salts and the conductivity should remain within defined standards.
Geotechnical survey of the site for the retention basin and of the dam location
The hydrogeological investigations described above must be made more detailed. At the location where the dam will be constructed to form the retention basin for water storage, a geological section is necessary.


This section is obtained from drillings and well logs, identifying the soil layers. The ground surface of the storage area is also examined for its permeability (injection tests).
Other studies examine the stability of the banks around the basin, against possible dangers of earth slides into the water storage space. The materials necessary for the dam construction (earth for an earth dam, stones for a stone dam and inert materials for a concrete dam) should preferably be available as close as possible to the dam site.

The construction of a small hydro plant

Small hydroelectric plants are an interesting energy source and are environmentally acceptable, since they diminish the dependence on fossil fuels. Their use has started to increase in recent years, in order to contribute to the large energy needs and to reinforce sustainability. With a small hydro plant, cheap energy can be supplied to an agricultural area or sold to the public electricity network. The plants are classified according to the water flow and the height of the water fall; the power can sometimes reach 10 MW. Small hydro can be further divided into mini-hydro (<500 kW) and micro-hydro (<100 kW).
A hydro plant, in combination with the constructed retention basin, must have the following units:
• a discharge chamber, from which the waterfall starts; it regulates the transmitted flow;
• a valve chamber, constructed after the discharge chamber at the top of the pressure pipe, for adjusting the feed to the water turbine;
• the pressure pipe, which transfers the water from the retention basin to the water turbine;
• the building housing the hydroelectric installation; if it is near a residential area, it must be provided with special sound insulation;
• the escape channel, through which the water that has passed through the turbine is removed; and
• the electric power transmission line.

TECHNOLOGY AND HYDRAULIC CALCULATIONS FOR RETENTION BASIN CAPACITY AND OCCUPIED AREA

Technology of construction
Retention basins are constructed principally in agricultural areas, with storage capacities ranging from 10,000 m3 to 4,000,000 m3. They are created by means of a dam embankment around the land area that will be submerged. An earth dam is preferred, because it is best adjusted to the foundation requirements; stonework or concrete dams (gravity or buttress dams) are also used. The dam height ranges from 5 to 20 m.
Table 1 presents the relation between height, nature of material and slopes for earth dams.

Table 1. Relation between height, nature of material and slopes for earth dams.

Height of the dam   Type of the dam                          Upstream slope   Downstream slope

<5 m                Homogeneous material                     1/2.5            1/2
                    With zones                               1/2              1/2
5–10 m              Homogeneous material                     1/2              1/2.5
                    Homogeneous with great clay content      1/2.5            1/2.5
                    With zones                               1/2              1/2.5
10–20 m             Homogeneous material                     1/2.5            1/2.5
                    Homogeneous with great clay content      1/3              1/2.5
                    With zones                               1/2.5            1/3

93

© 2004 by Taylor & Francis Group, LLC


chap-09 19/11/2003 14: 45 page 94

Calculations of retention basin capacity and occupied area

First, the water requirements for water supply, irrigation and other uses are calculated (if it is a multipurpose storage basin). Adding these quantities gives the total volume S to be stored. If Ht is the water height at the highest point and Ec the occupied area, a first estimate requires a stored water volume S equal to Ec · Ht/3. If an earth dam is used for the embankment, its height is found by adding to the water height H the following terms (a short worked sketch follows the equations):
• the wave height

  P = h + v²/(2g)                                              (1)

  from the Mallet–Pacquant formula, where

  h = 1/2 + (1/3) · √l                                         (2)

  with v the velocity and l the basin length in km;
• the wind height hwind, the height hw of the water over the overflow weir and a further allowance hs for security reasons;
• which together give the total height Ht and the occupied area Ec:

  Ht = H + P + hwind + hw + hs                                 (3)

  Ec = 3S/Ht                                                   (4)
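
For illustration, the following minimal Python sketch chains equations (1)–(4); all input values (stored volume, water height, velocity, basin length and the smaller height allowances) are hypothetical and serve only to show how the quantities combine.

import math

def dam_height_and_area(S, H, v, l, h_wind, h_w, h_s, g=9.81):
    """Total dam height Ht and occupied area Ec from equations (1)-(4)."""
    h = 0.5 + math.sqrt(l) / 3.0        # eq. (2), basin length l in km
    P = h + v ** 2 / (2.0 * g)          # eq. (1), wave height in m
    Ht = H + P + h_wind + h_w + h_s     # eq. (3), total height in m
    Ec = 3.0 * S / Ht                   # eq. (4), occupied area in m2
    return Ht, Ec

# hypothetical example: 90,000 m3 stored behind 10 m of water, 0.5 km long basin
Ht, Ec = dam_height_and_area(S=90_000, H=10.0, v=2.0, l=0.5,
                             h_wind=0.3, h_w=0.4, h_s=0.5)
print(f"total height Ht = {Ht:.1f} m, occupied area Ec = {Ec:.0f} m2")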

BASINS FOR COLLECTION OF TREATED USED WATERS FROM SECONDARY OR ADVANCED TREATMENT PLANTS

Water treated in secondary or advanced treatment plants can be collected in storage basins and used, according to its quality, for secondary purposes such as irrigation or industrial use, depending on the effluent quality:
• After secondary treatment, the flow is collected in retention basins that also work as stabilization ponds. These need an extensive storage area, because of their shallow depth (about 1.5 m), and a residence time for further BOD5 removal and quality improvement.
• After advanced treatment, which leads to a better effluent quality, the water can be stored in a retention basin of smaller volume, because a shorter residence time is needed before the treated water can be put to a secondary use.
It must be mentioned that the sanitary hazards of treated water use come from toxic chemicals contained in it or from disease-causing organisms. For heavy metals transferred with irrigation water, the tolerable feed of treated water per hectare must therefore be calculated. In the case of sprinkler irrigation, the irrigated area must also be far from dwellings (more than 1 km, because the aerosol created affects the respiratory system).

HYDRAULIC TECHNOLOGY FOR POWER PRODUCTION FROM EXISTING FALLS FOR RENEWABLE ENERGY POSSIBILITIES

The created water falls

These include the height of the water in the retention basin, to which is added the difference in height from the foot of the dam down to the location of the respective hydromechanical equipment.


Table 2. Classification of energy produced, with the considered fall value.

                       Water fall (m)
Power (kW)      Low          Medium        High

5–50            1.5–15       15–50         50–150
50–500          2–20         20–100        100–250
500–5000        3–30         30–120        120–400

Table 2 gives a classification of the energy produced according to the considered fall value.
The hydraulic power P is given by the formula P = ρ · g · Q · H, where g is the gravity acceleration (9.81 m/s2), ρ the water density (1000 kg/m3), Q the water flow and H the height of the water fall in m. A rule of thumb gives P = 10 QH (taking 10 instead of 9.81 for the gravity acceleration; P is then obtained in kW for Q in m3/s and H in m). The energy produced is given by the formula W = P · t · n · f, where W is the electrical energy produced in kWh, P the hydraulic power, t the operating time (e.g. 24 × 365 = 8760 h for a year), n the turbine–generator efficiency, ranging from 0.5 to 0.9, and f a coefficient related to the seasonal variations of the water flow.
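
As a quick numerical illustration of these two formulas, the Python sketch below evaluates the rule of thumb P = 10 QH and the yearly energy W = P · t · n · f; the flow, head, efficiency and seasonal coefficient are hypothetical values chosen only for the example.

def hydro_power_kw(Q, H):
    """Rule-of-thumb hydraulic power in kW (Q in m3/s, H in m)."""
    return 10.0 * Q * H

def yearly_energy_kwh(P_kw, n=0.7, f=0.8, hours=8760):
    """Electrical energy in kWh: W = P * t * n * f."""
    return P_kw * hours * n * f

P = hydro_power_kw(Q=0.5, H=20.0)     # hypothetical: 0.5 m3/s over a 20 m fall
W = yearly_energy_kwh(P)              # assumed efficiency 0.7 and seasonal factor 0.8
print(f"P = {P:.0f} kW, W = {W:,.0f} kWh per year")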

Calculation of the active height of a waterfall


The active height of a waterfall is the geometric difference between the steady water level of the retention basin and the level of the discharge channel after the water turbine, from which the friction losses in the pressure pipe and in the water turbine are subtracted.

Classification of water turbines according to water movement


The following types are distinguished (a small selection sketch is given after the list):
• if the water moves along the turbine radius, the turbine belongs to the Banki–Mitchell (cross-flow) type, for waterfalls of 2–200 m and water flows from 20 l/s to 10,000 l/s;
• if the movement is along the turbine axis, it belongs to the Kaplan turbines, for waterfalls of about 10 m and water flows of 5–100 m3/s;
• a combination of the two preceding movements gives the mixed type, the Francis turbine, for waterfalls of 10 to 100 m and water flows up to 30 m3/s;
• finally, when the flow is tangential to the wheel, we have the Pelton turbine, for waterfalls of more than 200 m and small water flows.
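
The head and flow ranges listed above can be turned into a simple screening helper. The Python sketch below is only an illustrative aid (the function name and the exact boundary handling are our own assumptions); it returns the turbine types whose listed ranges fit a given site.

def candidate_turbines(head_m, flow_m3s):
    """Suggest turbine types whose typical head/flow ranges (as listed above) fit the site."""
    candidates = []
    if 2 <= head_m <= 200 and 0.02 <= flow_m3s <= 10:
        candidates.append("Banki-Mitchell (cross-flow)")
    if head_m <= 10 and 5 <= flow_m3s <= 100:
        candidates.append("Kaplan")                    # "about 10 m" taken as an upper bound
    if 10 <= head_m <= 100 and flow_m3s <= 30:
        candidates.append("Francis")
    if head_m > 200:
        candidates.append("Pelton")
    return candidates or ["no listed type matches - check the site data"]

print(candidate_turbines(head_m=35, flow_m3s=0.5))     # e.g. the 35 m head of the case study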

Economic and technical design of a small hydro plant

In general the cost depends on the available hydraulic head.
• Small hydro plants with a large head have a low cost: they need a smaller water flow for a given power and are served by less expensive equipment. According to French statistics, for heads of 50–200 m and electric power of 150–1000 kW the cost is split into:
  – 50–60% for the electromechanical equipment and the turbine;
  – 40–50% for civil engineering works.
  The cost ranges from 12,000 to 16,000 F/kW.
• For small hydro plants with a head <10 m in the USA, the cost ranges from 12,000 to 16,000 F/kW.
The maintenance cost is low and the service life can be about thirty (30) years.


ENVIRONMENTAL PROBLEMS

Water pollution transferred to retention basins and measures to face it

Pollution is transferred with the collected waters from human activities:
• excess use of fertilizers, pesticides and insecticides, which creates agricultural wastes;
• increase of the nitrate concentration of the transferred waters to values above 50 mg/l, from pollution by stock-farm units (danger of cyanosis and of the production of nitrosamines, which are suspected carcinogens);
• leachate pollution from landfills, through groundwater infiltration to the receivers;
• other pollution from cesspools etc.
There is also the development of a eutrophication problem, from the input of large quantities of nitrogen, phosphorus and organic materials to the water. This results in degradation of the water quality and impacts on aquatic life.
To face pollution, the following measures are taken:
• short term: mass dispersion of copper sulphate;
• medium term: diffusion of pure oxygen or air, to maintain aerobic conditions;
• long term: maintaining aerobic conditions through continuous oxygen input, with mechanical equipment applied at the necessary depth.

Environmental impacts
The environmental impacts of a small hydro plant are the following:
• aesthetic impact on the landscape from the constructions (intake, basin, pressure pipe, electric power transmission lines);
• noise impact from the water turbine, which must be kept below 50 dBA at a distance of 10 m from the plant;
• biological impacts on aquatic life, which is affected by eutrophication problems etc.

A CASE STUDY

Description of the existing retention basin

The community of Great Eleftherohori (Larissa District, 800 inhabitants) depends for its water supply on a retention basin. It is constructed at a distance of 2 km from the community and also serves the irrigation of a small agricultural area. The topographical study gave the possible water storage volumes shown in Table 3.
Other data:
• Hydrological basin: 36.0 ha
• Water requirements: 40,000 m3
• Water storage (maximum): 90,000 m3
• Height of the dam: 13 m
• Crest width: 5 m
• Upstream and downstream slopes: 1/3.5 and 1/2.5

Table 3. Possible storage of water volumes.

Depth (m) 6 7 8 9 10 11 12 13

Volume (m3 ) 3080 8810 15850 25330 37110 50710 67550 90000


• Thickness of the filter: 0.6 m
• Slope protection with an insulating layer of masonry
• Maximum precipitation water flow: 17 m3/s
• Storm weir and downstream protection with concrete stones
• Laboratory tests of the water quality showed the following results [12]:
  – satisfactory water transparency, but with the appearance of aquatic plants;
  – no eutrophication problem.

The prospect of creating a hydro plant

The available height from the retention basin when it is full is 13 m, as seen from Table 3. The small hydro plant could be constructed near the agricultural area that will be irrigated. Subtracting the losses of the transferred water and adding the geometric difference down to the irrigation site, an available head H of 35 m can be created.
In connection with the water distribution, the retention basin has multipurpose targets (water supply and irrigation):
• for water supply: 800 inhabitants · 110 l/day · 365 days = 33,120 m3/year
• for irrigation: 90,000 − 33,120 = 56,880 m3/year
Accepting a two-month irrigation period, we have

  Q = 56,880/(60 · 86,400) ≈ 0.012 m3/s   and   P = 10 · 0.012 · 35 ≈ 4 kW        (5)

This power could be sold to the public electricity network.
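
A minimal sketch of this case-study arithmetic (Python; it simply re-applies the rule of thumb P = 10 QH to the volumes quoted above, and the small deviation from the rounded flow in the text comes from rounding):

# case-study figures: 56,880 m3 of stored water used for irrigation over a
# two-month period, with an available head of 35 m
irrigation_volume_m3 = 56_880
period_s = 60 * 86_400                 # two months expressed in seconds
Q = irrigation_volume_m3 / period_s    # average flow, roughly 0.011 m3/s
P = 10 * Q * 35                        # rule-of-thumb power in kW
print(f"Q = {Q:.3f} m3/s, P = {P:.1f} kW")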

CONCLUSIONS

The consequences of human activities on the hydrologic cycle and the uneven distribution of yearly rainfall among different regions have created the need to construct retention basins, in order to stop the rejection of precious surface water to the sea. Water supply, irrigation and industrial use can be served more efficiently from this water. In addition, treated wastewaters from secondary or advanced treatment can be stored in retention basins and used for suitable secondary purposes. Hydrogeologic and geotechnic investigations are necessary for the site where the retention basin will be constructed and for the technology that will be used. The evaluation of existing or created falls for power production with small hydro plants will also support the renewable energy possibilities. The present tendency is to seek the application of sustainable development, with retention basins as a real tool for such a perspective.

REFERENCES

1. Vesilind, A., Environmental Pollution, 1975.
2. Kollias, P.S., Water Treatment, 1990.
3. Service Technique de l'Urbanisme, France, Guide technique des bassins de retenue d'eaux pluviales, 1994.
4. Kollias, P.V. and Batrakouli, V., The small hydropower plants and their energy evaluation. The case of the hydropower plant of Vatsounia community. Mediterranean Conference on Policies and Strategies for Desalination and Renewable Energies, Santorini, 21–23 June 2000.
5. Kollias, P.S. and Kollias, V.P., The retention basin technology and the eutrophication facing. Third Greek Congress of the Hydrotechnical Union, 2000.
6. Barnes, D., Bliss, P.J., Gould, B.W. and Valentine, Water and Wastewater Engineering Systems, 1983.


7. Monition L., Lenir M., Roux J., Les microcentrales Hydroelectriques, pp. 188, 1981.
8. Lamb J.C., Brookhart M.V., Next generation BRP process achieves higher phosphorus removal levels,
Pollution prevention European Edition, Volume 4, Issue 4, August 31–32, 1994.
9. Keplan F., Senelier Y., La pollution des eaux par les composes de l’azote et de phosphore, TSM L’EAU,
No. 6, 1985.
10. Riguard A., Brandel E., Technologie de l’injection, d’oxygene pur dans les retenues d’eau de grande
profondeur. Example, de la retenue du Gouet-Saint Brienc. T.S.M. L’EAU, No. 6, 1985.
11. Saunier B., Saout le M., Bilan des substances nutritives dans les retenues. Cas de la Rance superieure,
T.S.M. L’ EAU, No. 6, 1985.
12. Kollias P.S., Blatsi A., Study of situation of Great Eleftherohori retention basin, 1980.


Solar photocatalytic oxidation: a sustainable tool for reclaiming biologically
treated municipal wastewater for high quality demand re-use?

H. Gulyas∗, I. Ilesanmi, M. Jahn & Z. Li


Institute of Municipal and Industrial Wastewater Management, Technical University Hamburg-Harburg,
Hamburg, Germany

ABSTRACT: Secondary effluents of different municipal wastewater treatment plants have been
treated by heterogeneous photocatalytic oxidation with 1 or 10 g TiO2 /l in lab scale stirred vessel
reactors using UV-A lamps. The results show that it is generally possible to use solar photocat-
alytic oxidation as a sustainable process for removing organics from biologically treated municipal
wastewaters in order to reclaim them for high-quality re-use options. Disinfection was also achieved.
However, the obtained kinetic data show that area demand for solar photocatalytic oxidation for
reclamation of municipal effluents is high even with high sky and solar radiation. In order to
avoid contaminants which cannot be removed by advanced oxidation processes (like heavy met-
als), it is recommended that solar photocatalytic oxidation is applied for reclaiming biologically
treated greywater separately collected in ecological sanitation schemes (without mixing it with
toilet wastewater or industrial wastewater) instead of biologically treated municipal wastewaters.

INTRODUCTION

In the future, scarcity of water is expected to increase in many regions all over the world. This makes
the exploitation of uncommon water sources necessary. The re-use of reclaimed wastewaters is one
option to relieve situations of water famine. In such regions with severe water scarcity, the re-use
of reclaimed wastewater even for drinking purposes or at least for irrigation of agricultural areas
must be taken into consideration.
If reclaimed wastewater is reused as drinking water, one problem occurs with organic wastewater
constituents, which are not completely removed during secondary wastewater treatment. Hundreds
of different organic compounds have been detected by HPLC as well as by gas chromatography
coupled to mass spectrometry in effluents of municipal wastewater treatment plants [1,2]. Among
them are organics with known genotoxic and/or carcinogenic potential [1], endocrine disruptors
(like tributyl tin [3]) and pharmaceuticals [4] with unknown subchronic effects in case of using
secondary effluents as drinking water source without further treatment except disinfection.
Also humic substances, which represent high amounts of organics in biologically treated munic-
ipal wastewaters [2,5], have been shown to affect animals, for example exposure of carp (Cyprinus
carpio) to fulvic acids resulted in expression of heat shock protein 70 in gills, but not in muscles
which have less intense contact with the surrounding water phase [6]. Occurrence of heat shock
protein is an indicator for stress situations. Fulvic acids are also suspected to act as endocrine dis-
ruptors as they affect reproduction of the nematode Cenorhabditis elegans [7]. Moreover, organic
constituents of raw drinking waters – especially humic substances – represent trihalomethane forma-
tion potential in case of drinking water disinfection by chlorination. Jekel and Ernst [8] recommend

∗ Corresponding author. e-mail: holli@tu-harburg.de


that the DOC of tertiary effluents of municipal wastewater treatment plants infiltrated to aquifers
for replenishment of insufficient groundwater should not exceed the DOC of the groundwater –
especially when the groundwater is intended to be used for drinking water purposes. DOC concen-
trations of aquifers at the German capital are reported to be in the range of 3 to 5 mg/l [8] while DOC
concentrations of secondary effluents of German municipal wastewater treatment plants frequently
are in the range of 15 mg/l.
Advanced oxidation processes (AOPs) are a suitable means to remove low concentrations of
organics from wastewaters efficiently and unselectively [9]. In these processes, hydroxyl radicals
are produced at ambient temperature. Hydroxyl radicals are strong oxidants and abstract hydrogen
atoms from C-H groups forming organic radicals, which undergo further reactions with dissolved
oxygen, a diradical, generating organic perhydroxyl radicals. In this way, the organic compounds
are oxidized stepwise finally yielding carbon dioxide. However, AOPs require high energy inputs
(e.g. by producing ozone and operating UV lamps in the process ozone/UV). An AOP, which is
capable of using solar energy, is the heterogeneous photocatalytic oxidation [10].
Heterogeneous photocatalytic oxidation is a simple process utilizing semiconductor particles
(like the nontoxic titanium dioxide) suspended in wastewater which are irradiated with UV light
(wavelengths below 400 nm). By this, electrons are excited from the valence band to the conduc-
tion band. The electrons elevated energetically to the conduction band as well as the resulting
positively charged electron holes, h+ , in the valence band are mobile and migrate to the surface
of the semiconductor particle. Here the electrons may reduce dissolved oxygen adsorbed to the
particle surface yielding superoxide anion radicals, ·O2−, which disproportionate into hydrogen
peroxide and molecular oxygen. Hydrogen peroxide is reduced by mobile electrons at the parti-
cle surface in a Fenton type reaction forming hydroxyl radicals, ·OH, and hydroxide anions. The
electron holes, on the other hand, are strong oxidants able to oxidize water molecules to hydroxyl
radicals. They also can directly oxidize organic molecules adsorbed to the photocatalyst surface.
The hydroxyl radicals too are strong oxidants which oxidize organic wastewater constituents. A
good overview of relevant reactions in heterogeneous photocatalytic oxidation is given by Serpone
et al. [11]. Because of the very short life span of the oxidants (electron holes, hydroxyl radicals),
their oxidative action is restricted to the surface of the photocatalyst particles. As a consequence,
only those organic wastewater constituents adsorbed to the photocatalyst are oxidized. There-
fore, the reaction rate of heterogeneous photocatalytic oxidation obeys Langmuir-Hinshelwood
kinetics:

  r = k · K · c / (1 + K · c)                                     (1)

This equation gives the reaction rate, r, as a function of the concentration of the dissolved
organic, c. In this equation, k is the rate constant of the reaction, and K is the (photo) adsorption
coefficient. With very low concentrations of the dissolved organic, the Langmuir-Hinshelwood
equation converges to a pseudo-first order kinetics. This range can be easily applied for design of
solar photocatalytic oxidation reactors [12]. The pseudo-first order kinetics is assumed to be valid for
photocatalytic oxidation of biologically treated municipal wastewaters as their DOC concentrations
are in the range of about 10 to 20 mg/l [2].
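
To make the limiting behaviour concrete, the short Python sketch below evaluates the Langmuir-Hinshelwood rate of equation (1) and its pseudo-first-order approximation r ≈ k · K · c; the rate constant k and adsorption coefficient K are purely illustrative and are not values measured in this work.

def lh_rate(c, k=1.0, K=0.05):
    """Langmuir-Hinshelwood rate, eq. (1): r = k*K*c / (1 + K*c)."""
    return k * K * c / (1.0 + K * c)

def pseudo_first_order(c, k=1.0, K=0.05):
    """Low-concentration limit of eq. (1): r ~ (k*K)*c."""
    return k * K * c

for c in (1.0, 10.0, 100.0):                       # illustrative DOC levels in mg/l
    print(c, round(lh_rate(c), 3), round(pseudo_first_order(c), 3))
# the two expressions agree only while K*c remains small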
Although solar photocatalytic oxidation requires high reactor surface areas, it is assumed to
be highly suitable for many developing countries where high amounts of sky and solar radiation
are available. Most of the investigations on wastewater treatment by heterogeneous photocatalytic
oxidation have been performed with aqueous solutions of organic model compounds. However,
predictions based on results of such experiments are limited. Therefore, this study has been executed
with actual secondary effluents of several municipal wastewater treatment plants in North Germany
in order to obtain design data for low-tech solar photocatalytic oxidation reactors. To obtain information about the dimensions of lagoons (with poor mechanical agitation), which can be used as simple reactors for photocatalytic oxidation as suggested by Matthews [13], batch experiments were
performed in laboratory scale reactors which have been only moderately agitated.


EXPERIMENTAL PROCEDURES

Sampling and preparation of biologically treated wastewaters


Grab samples of secondary effluents of five different municipal wastewater treatment plants in
North Germany (Bad Bramstedt, Dradenau, Lüneburg, Seevetal, Winsen/Luhe) were allowed to
settle for at least one day in a refrigerator in order to remove residual settleable flocs. The clear
supernatants were then used for photocatalytic oxidation experiments either directly or after removal
of inorganic carbon compounds (total inorganic carbon, TIC). For TIC removal, concentrated
hydrochloric acid (p.a.) was added dropwise to 1.4 l of the secondary effluent until pH has been
adjusted to approximately 2. Then oxygen gas was bubbled through the solution for 10 min for
purging CO2 generated from TIC by acidification. Subsequently, pH was readjusted to about 7 by
addition of 1 N NaOH.
For comparison, the effluent (grab sample) of a constructed wetland with vertical flow for
biological treatment of separately collected greywater from the EXPO settlement “Flintenbreite”
in the city of Lübeck [14] has been subjected to photocatalytic oxidation. In these samples also,
numbers of E. coli and total faecal coliforms have been determined.

Photocatalytic oxidation experiments


For photocatalytic oxidation, suspensions of the photocatalyst titanium dioxide (“P25”, Degussa-
Hüls, Hanau, FRG; concentrations: 1 or 10 g/l) in secondary effluents were irradiated with a UV-A
radiator (face tanning unit HD 172, Philips GmbH, Hamburg, FRG) with known UV light intensity
(about 25 W/m2 at a distance of 30 cm from the casing edge of the radiator) at room temperature
[(20 ± 3)°C]. The Bad Bramstedt effluent was treated in a porcelain bowl with a magnetic stirring bar
(length: 4 cm; stirrer speed: 250 rpm). After taking the first sample of the TiO2 /effluent suspension,
irradiation of a residual 350 ml of the suspension was started. Diameter of the suspension surface
was 14 cm at the beginning of the experiment. The TiO2 suspensions in secondary effluents of the
other four municipal wastewater treatment plants have been irradiated in glass beakers (diameter:
10.8 cm; initial suspension volume: 0.95 l). Mixing was also executed by magnetic stirring (250 rpm,
length of stirring bar: 7 cm). Samples (50 ml) from the irradiated suspensions were taken hourly. The
samples were subsequently centrifuged (10 min, 4500× g) and the supernatants analyzed for TOC.
The TOC concentrations have been corrected for evaporation of water, which has been determined
by weighing the entire reactor before and after taking samples for TOC analyses.

Chemical and microbiological analyses


TOC analysis and determination of buffer capacity KS4.3 (mols of protons necessary to adjust pH of
1 l of wastewater to 4.3) have been performed according to “German Standard Methods for Water,
Wastewater and Sludge Analyses” [15]. Analysis of ortho-phosphate concentrations was executed
by means of cuvette tests containing the same reagents as prescribed in the German Standard
Methods. The pH was analysed with pH probes.
E. coli and total coliforms were determined by means of plates containing selective agar “Chro-
mocult” (Merck, Darmstadt, FRG) according to a method of Manafi and Kneifel [16]. The surface
of the medium was either inoculated directly with small volumes of the greywater samples or in
case of low bacteria concentrations with membrane filters after filtration of 100 ml of the grey-
water. Colonies of E. coli were stained dark blue-violet while other coliforms gave pink colonies
after being incubated for 24 h at 36◦ C.

RESULTS AND DISCUSSION

TOC removal rates during oxidation of the Bad Bramstedt secondary effluent slightly decreased
when photocatalyst concentration was increased from 1 to 10 g/l (Figure 1.). This is not unexpected,


Figure 1. TOC removal by photocatalytic oxidation of the settled secondary effluent of the municipal treatment
plant “Bad Bramstedt” with different photocatalyst concentrations; VLo = 0.35 l.

Figure 2. TOC removal by photocatalytic oxidation of the settled secondary effluents of the municipal treat-
ment plants Winsen/Luhe, Lüneburg, Seevetal, and Dradenau in the absence of photocatalyst and with different
photocatalyst concentrations; VLo = 0.95 l; squares and thin lines indicate results of experiments with original
effluents, bold lines and asterisks give results of experiments with effluents with TIC being removed prior to
oxidation.

since Kawaguchi [17] and also Gupta and Tanaka [18] have shown that enhancement of mineral-
ization velocity with increasing photocatalyst concentrations can be described by a saturation type
curve, because high photocatalyst particle concentrations lead to a high amount of reflected photons
which consequently are not contributing to the generation of electron holes and hydroxyl radicals.
In Figure 2, results of photocatalytic oxidation experiments with the secondary effluents of the
other four municipal wastewater treatment plants are shown. As expected, UV-A light did not cause
any mineralization in the absence of TiO2 , irrespective of the presence of the radical scavenger
hydrogen carbonate (which represents most of the TIC in the pH range around 7). That the graphs


Table 1. Phosphate concentrations and buffer capacities of investigated secondary effluents as well as
kinetic data received in the different photocatalytic oxidation experiments; n.a.: not analyzed; r: coefficient of
correlation.

B. Bramst. Winsen/L. Lüneburg Seevetal Dradenau

phosphate conc. [mg PO4³⁻/l] n.a. 0.09 0.29 0.39 0.59
buffer capacity KS4.3 [mmol/l] n.a. 2.7 1.6 2.3 2.1
VLo [l] 0.35 0.95 0.95 0.95 0.95
VLmean [l] 0.225 0.825 0.825 0.825 0.825
1 g TiO2 /l; original effluents:
pH 7.8–9.2 7.4–8.2 7.2–8.3 6.8–8.5 6.8–7.7
kTOC [(Wh)−1 ] 0.388 0.10 no degr. 0.02 0.063
r −0.9630 −0.7999 – −0.2931 −0.7556
VL · kTOC [l/Wh] 0.087 0.083 0 0.017 0.052
1 g TiO2 /l; effluents acidified, CO2 purged, pH readjusted:
pH – 5.9–7.0 6.5–7.2 6.6–7.4 5.0–6.9
kTOC [(Wh)−1 ] no exp. 0.440 0.40 0.207 0.135
r – −0.9644 −0.8245 −0.9792 −0.8529
VL · kTOC [l/Wh] – 0.363 0.33 0.171 0.112
10 g TiO2 /l; original effluents:
pH 7.9–8.3 7.4–7.8 7.0–7.8 7.0–8.0 6.9–7.7
kTOC [(Wh)−1 ] 0.278 0.173 0.085 no degr. 0.162
r −0.9622 −0.7661 −0.2993 – −0.5639
VL · kTOC [l/Wh] 0.063 0.143 0.070 0 0.134
10 g TiO2 /l; effluents acidified, CO2 purged, pH readjusted:
pH – 5.3–5.8 5.6–6.3 – 5.0–5.6
kTOC [(Wh)−1 ] no exp. no degr. 0.157 no exp. 0.386
r – – −0.4633 – −0.9170
VL · kTOC [l/Wh] – 0 0.13 – 0.316

of original effluents are lying above the graphs for effluents liberated from hydrogen carbonate may
indicate a certain error in TOC analysis (TIC determined in the difference method might be too
low) or presence of some volatile organics in the original effluent, which might have been purged
together with CO2 .
Except for the secondary effluent of the treatment plant Lüneburg, mineralization was detected
in all original effluents when TiO2 concentration was 1 g/l. However, the rates were smaller than in
the oxidation experiment with the Bad Bramstedt effluent. But this is mainly caused by the higher
volumes of wastewaters processed in the experiments given in Figure 2. Increasing photocatalyst
concentration from 1 g/l to 10 g/l resulted in increased reaction rates in three original effluents
given in Figure 2 (Winsen/Luhe, Lüneburg, Dradenau), but not for the Seevetal effluent. With a
second grab sample of the secondary “Seevetal” effluent, a photocatalytic oxidation experiment
was run with 10 g TiO2 /l for a period of three days without purging CO2 . In this experiment, reaction
rate was lower than in comparable experiments with the effluents of the plants Winsen/Luhe and
Dradenau, but higher than in the experiment with the Lüneburg effluent (data not shown).
In the experiments with 1 g TiO2 /l, removal of TIC prior to photocatalytic oxidation resulted in
reaction rates enhanced for up to one order of magnitude (see also Table 1). This shows that CO2
species in biologically treated municipal wastewaters represent an important radical scavenger.
With 10 g TiO2 /l, the rise of reaction rate because of CO2 purging was less pronounced than in the
experiments with the lower photocatalyst concentration.
In Table 1, the kinetic parameters of all experiments as well as some effluent characteristics are
given. The constants kTOC have been derived from linear regression of ln TOCcorr. (Eaccum.) data


Figure 3. Constants kTOC · VL for photocatalytic oxidation (TiO2 concentration: 1 g/l) of secondary effluents
Winsen/Luhe, Lüneburg, Seevetal and Dradenau which have been acidified and purged for TIC removal (with
subsequent pH re-adjustment to about 7).

assuming a pseudo-first order kinetics, with reaction time substituted by accumulated light energy
passing the suspension surface in the reactors. Coefficients of correlation, r, have been very poor in
many experiments indicating a small decrease of TOCcorr. within the experiments and scattering of
TOC data. Best quality of correlations was obtained in the experiments with 1 g TiO2 /l in effluents
with purged TIC.
As the constant kTOC is dependent on liquid volume in the reactor, it is multiplied with the mean
liquid volume in the reactor [12]. The resulting constant kTOC · VL is not dependent on liquid vol-
ume. In Figure 3, the constants for experiments with 1 g TiO2 /l in effluents liberated from hydrogen
carbonate are presented as a function of phosphate concentrations of the secondary effluents Win-
sen/Luhe, Lüneburg, Seevetal and Dradenau. The decrease of kTOC · VL with increasing phosphate
concentration is in accordance with the fact that phosphate is also a radical scavenger and that
phosphate ions compete with organics for binding sites on the photocatalyst.
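
The kinetic evaluation described above can be reproduced in a few lines of code. The sketch below uses made-up (Eaccum., TOCcorr.) pairs rather than the measured data, fits ln TOCcorr. against the accumulated light energy to obtain kTOC, and then scales the constant by the mean liquid volume as done in Table 1.

import numpy as np

# hypothetical batch run: accumulated UV-A energy (Wh) and corrected TOC (mg/l)
E = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
TOC = np.array([12.0, 9.8, 8.1, 6.6, 5.4])

slope, intercept = np.polyfit(E, np.log(TOC), 1)   # pseudo-first-order fit
k_toc = -slope                                     # kTOC in (Wh)^-1
V_mean = 0.825                                     # mean liquid volume in l (as in Table 1)
print(f"kTOC = {k_toc:.3f} (Wh)^-1, kTOC*VL = {k_toc * V_mean:.3f} l/Wh")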
Although removal of TIC prior to photocatalytic oxidation of secondary effluents is beneficial
for the efficiency of the process (more TOC is removed by the same amount of photons), it is
not recommended for technical scale photocatalytic oxidation process as it means consumption
of chemicals (acids, bases). Therefore, for design of sequential batch reactors for photocatalytic
oxidation of biologically treated municipal wastewaters the lower constants kTOC · VL for the orig-
inal effluents have to be taken into consideration. As the high photocatalyst concentration of 10 g/l
does not efficiently increase kTOC · VL , the technical scale photocatalytic purification of secondary
effluents should be performed with 1 g TiO2 /l. The average of the constants determined under these
conditions given in Table 1 is about 0.05 l/Wh. For safety, in the following design example for
lagoons for photocatalytic oxidation of secondary effluents with a suspended TiO2 concentration
of 1 g/l, a constant of 0.01 l/Wh is selected. The lagoon is processed as an SBR (solar irradiation
for several days, sedimentation of the photocatalyst over night).
The following equation for design of photocatalytic oxidation sequential batch reactors can be
used [12]:
  ΔmTOC/ΔE = −kTOC · VL · cTOC                                    (2)

The amount of TOC mass, ΔmTOC, mineralized by irradiation with the light energy ΔE depends on the TOC concentration. In this simple design method, ΔmTOC is calculated stepwise for certain light energy "packages" (e.g. 10 kWh) passing the reactor surface. After calculating ΔmTOC for the initial TOC concentration, the TOC concentration resulting from this theoretical mineralization in a given volume of wastewater is calculated, and the resulting concentration is used again in equation (2) to determine ΔmTOC in the next calculation step, etc.


Figure 4. E. coli counts in greywater (“infl. constr. wetl.”), greywater which has been treated in a con-
structed wetlands (“effl. constr. wetl.”) and additionally by UV-A irradiation (“effl. 3 h irr.”), by addition
of 10 g/l TiO2 (“effl. + 10 g/l TiO2”), or by simultaneous irradiation for 3 h in the presence of 10 g/l TiO2
(“effl. + TiO2 + irr.”).

With the assumed kTOC · VL = 0.01 l/Wh (for secondary effluents and a photocatalyst concentration of 1 g/l),
a UV-A light energy demand of 1500 kWh is calculated for the reduction of the TOC of 10 m3
of a secondary effluent from 10 to 2 mg/l. Considering that only about 5 % of the entire sky and
solar radiation is UV irradiation (i.e. light with wavelengths below about 400 nm which is suitable
for utilization in photocatalytic oxidation), about 30,000 kWh of sunlight must be absorbed by a
photocatalytic oxidation reactor for the desired TOC reduction in 10 m3 . Taking minimum values
for sky and solar radiation into consideration, an area of about 13,000 m2 would be necessary in
Hamburg/Germany [minimum sky and solar radiation in December: 0.33 kWh/(m2 · d)], but only
1065 m2 in the Northern Province in South Africa [minimum sky and solar radiation in June:
4.03 kWh/(m2 · d)] for achieving a final TOC of 2 mg/l within 7 days. By distributing a volume of
10 m3 on to such a large area of about 1000 m2 , the depth of the titanium dioxide suspension in
secondary effluent will be in the range of about 1 cm. This would cause problems with evaporation.
Therefore, longer solar irradiation periods will have to be selected reducing the required area and
increasing the depth of the suspension. Moreover, covering the lagoons e.g. with UV-translucent
Plexiglass® planes would be helpful in preventing evaporation losses.
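
A minimal Python sketch of this stepwise design calculation is given below; it re-uses the numbers of the example above (kTOC · VL = 0.01 l/Wh, 10 m3 treated from 10 to 2 mg/l TOC, 10 kWh energy packages, 5 % UV share, 7-day batches) and therefore comes out close to, but slightly above, the rounded 1500 kWh and 30,000 kWh quoted in the text.

def uv_energy_demand_kwh(volume_m3, toc_in, toc_out, k_toc_vl=0.01, step_kwh=10.0):
    """Stepwise use of eq. (2): UV-A energy (kWh) needed to lower TOC from toc_in to toc_out (mg/l)."""
    volume_l = volume_m3 * 1000.0
    c = toc_in                                          # current TOC concentration, mg/l
    energy_kwh = 0.0
    while c > toc_out:
        removed_mg = k_toc_vl * c * step_kwh * 1000.0   # mg mineralized by one energy package
        c -= removed_mg / volume_l
        energy_kwh += step_kwh
    return energy_kwh

uv = uv_energy_demand_kwh(10, 10.0, 2.0)             # roughly 1600 kWh of UV-A light
solar = uv / 0.05                                    # only about 5 % of sunlight is UV
area_hamburg = solar / (7 * 0.33)                    # m2 at 0.33 kWh/(m2 d), 7-day batch
area_south_africa = solar / (7 * 4.03)               # m2 at 4.03 kWh/(m2 d)
print(round(uv), round(solar), round(area_hamburg), round(area_south_africa))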
As heterogeneous photocatalytic oxidation includes an adsorption step, reaction rates are highly
dependent on hydrodynamics. Poorly agitated lagoons represent a simple technology, but they are
not very efficient. Solar photocatalytic oxidation in Plexiglass® double skin sheet reactors exhibits
higher constants kTOC · VL , but requires pumps [12].
Photocatalytic oxidation is a treatment process which also leads to disinfection of wastewaters
as has been shown in experiments with greywater biologically pre-treated in constructed wetlands:
E. coli (Figure 4) have been reduced in the effluent of a constructed wetland for greywater treatment
from 58/100 ml to 1/100 ml and total coliforms (data not shown) from 36,000/100 ml to 14/100 ml
by a three hour UV-A irradiation of 400 ml of the greywater in the presence of 10 g TiO2 /l. As the
addition of titanium dioxide reduced E. coli by one log (Figure 4: “effl. + 10 g/l TiO2”), adsorption
of microorganisms to photocatalyst is assumed to contribute to the disinfection. UV-A irradiation in
the absence of the photocatalyst hardly affected E. coli counts (Figure 4: “effl. 3 h irr.”). Disinfection
in photocatalytic oxidation is therefore mainly referred to generation of hydroxyl radicals.
There will be certain advantages when biologically treated greywater, a partial stream of domestic
wastewater, is used instead of biologically treated municipal wastewater: At first, the kinetics of the
process are in the same order of magnitude for both types of effluents (kTOC · VL = 0.064 l/Wh for
an effluent of constructed wetlands for greywater treatment with 1 g TiO2 /l without removing TIC).
Secondly, it is expected that biologically treated greywater represents smaller concentrations of the
radical scavenger ortho-phosphate (unless automatic dishwashing products are used which contain
phosphate!). Thirdly, biologically treated entire municipal wastewaters may be contaminated with


heavy metals by industrial wastewater discharges to the sewer, which are not removed by photo-
catalytic oxidation. Finally, greywater is poorer in faecal microorganisms, so that its disinfection
with photocatalytic oxidation is safer than disinfection of municipal effluents.

CONCLUSIONS

Photocatalytic oxidation of secondary effluents of municipal wastewater treatment plants in lagoons is a suitable process for preparing them for re-infiltration to aquifers. However, it requires relatively
large areas for safe TOC reduction, and the application of the sunlight-operated photocatalytic
oxidation will be restricted to regions with relatively high sky and solar irradiation. Less area
demand can be achieved by using more sophisticated reactors like the Plexiglass® double skin sheet
reactor, which displays better hydrodynamic conditions. Nevertheless, photocatalytic oxidation may
be looked at as a sustainable process for improving the quality of biologically treated municipal
wastewaters, because it will not require high inputs of energy except the renewable solar energy.
Also hygienic conditions of effluents of biological wastewater treatment plants are improved by
heterogeneous photocatalytic oxidation. A disadvantage of using municipal effluents as a source is that harmful inorganic wastewater constituents like heavy metals are not removed by photocatalytic oxidation. Therefore, separate collection of greywater, i.e. that part of municipal wastewater which is
not contaminated with high concentrations of nutrients (originating from urine and faeces) and with
hazardous constituents potentially present in industrial wastewaters, and its biological treatment
with subsequent photocatalytic oxidation is expected to be a safer source for indirect drinking water
re-use (after re-infiltration into aquifers).

REFERENCES

1. Clark, L.B., Rosen, R.T., Hartman, T.G., Alaimo, L.H., Louis, J.B., Hertz, C., Ho, C.-T. and Rosen,
J.D., Determination of Nonregulated Pollutants in Three New Jersey Publicly Owned Treatment Works
(POTWs), Res. Journal WPCF Vol. 63, No. 2, pp 104–113, 1991.
2. Gulyas, H., Discharge of Organic Contaminants to Rivers with Treated Municipal Wastewaters, Water
Pollution IV – Modelling, Measuring and Prediction, eds. R. Rajar and C.A. Brebbia. Computational
Mechanics Publications, Southampton, UK, 1997, pp 711–722.
3. Donard, O.F.X., Quevauviller, P. and Bruchet, A., Tin and Organotin Speciation during Wastewater and
Sludge Treatment Processes, Water Res. Vol. 27, No. 6, pp 1085–1089, 1993.
4. Ternes, T.A., Occurrence of Drugs in German Sewage Treatment Plants and Rivers, Water Res. Vol.
32, No. 11, pp 3245–3260, 1998.
5. Manka, J., Rebhun, M., Mandelbaum, A. and Bortinger, A., Characterization of Organics in Secondary
Effluents, Environ. Sci. Tech. Vol. 8, No. 12, pp 1017–1020, 1974.
6. Wiegand, C. and Steinberg, C.E.W., Direct Effects of Suwannee River Humic Substances on Carp
Detoxication Enzymes and hsp 70, Poster, 8th Meeting of the Nordic Chapter of the IHSS, Frederiksborg,
May 2001.
7. Höss, S., Haitzer, M., Traunspurger, W. and Steinberg, C., Refractory Dissolved Organic Matter Can
Influence the Reproduction of Caenorhabditis elegans (Nematoda), Freshwat. Biol. Vol. 46, No. 1,
pp 1–10, 2001.
8. Jekel, M. and Ernst, M., Entfernung gelöster organischer Stoffe bei der Wiederverwendung kommunaler
Abwässer zur Grundwasseranreicherung, 58. Darmstädter Seminar, Abwasserwiederverwendung in
wasserarmen Regionen. – Einsatzgebiete, Anforderungen, Lösungsmöglichkeiten, Schriftenreihe WAR
116, Verein zur Förderung des Instituts WAR, Darmstadt, FRG, 1999, pp 117–132.
9. Gulyas, H., Processes for the Removal of Recalcitrant Organics from Industrial Wastewaters, Wat. Sci.
Tech. Vol. 36, No. 2–3, pp 9–16, 1997.
10. Freudenhammer, H., Bahnemann, D., Bousselmi, L., Geissen, S.-U., Ghrabi, A., Saleh, F., Si-Salah,
A., Siemon, U. and Vogelpohl, A., Detoxification and Recycling of Wastewater by Solar-Catalytic
Treatment, Wat. Sci. Tech. Vol. 35, No. 4, pp 149–156, 1997.
11. Serpone, N., Pelizzetti, E. and Hidaka, H., Identifying Primary Events and the Nature of Intermediates
Formed during the Photocatalyzed Oxidation of Organics Mediated by Irradiated Semiconductors,


Photocatalytic Purification and Treatment of Water and Air, eds. D.F. Ollis and H. Al-Ekabi, Elsevier
Science Publishers, Amsterdam, 1993, pp 225–250.
12. Gulyas, H., Stürmer, R. and Hintze, L., Experiences with Solar Application of Photocatalytic Oxidation
for Dye Removal from a Model Textile Industry Wastewater, Water Pollution VI – Modelling, Measuring
and Prediction, ed. C.A. Brebbia, WIT Press, Southampton, UK, 2001, pp 153–165.
13. Matthews, R.W., Photocatalysis in Water Purification: Possibilities, Problems and Prospects, Photocat-
alytic Purification and Treatment of Water and Air, eds. D.F. Ollis and H. Al-Ekabi. Elsevier Science
Publishers, Amsterdam, 1993, pp 121–138.
14. Otterpohl, R., Design of Highly Efficient Source Control Sanitation and Practical Experiences,
Decentralised Sanitation and Reuse: Concepts, Systems and Implementation, eds. P. Lens, G. Zeeman
and G. Lettinga, IWA Publishing, London, 2001, pp 164–179.
15. Wasserchemische Gesellschaft–Fachgruppe in der Gesellschaft Deutscher Chemiker in Gemeinschaft
mit dem Normenausschuß Wasserwesen (NAW) im DIN Deutsches Institut für Normung e.V., ed.,
Deutsche Einheitsverfahren zur Wasser-, Abwasser- und Schlammuntersuchung, loose-leaf-collection,
51st supplementary leaflet, Deutsches Institut für Normung e.V., Berlin, 2002.
16. Manafi, M. and Kneifel, W., Ein kombiniertes Chromogen-Fluorogen-Medium zum simultanen Nachweis
der Coliformengruppe und von E. coli in Wasser, Zentralbl. Hyg. Vol. 189, No. 3, pp 225–234, 1989.
17. Kawaguchi, H., Dependence of Photocatalytic Reaction Rate on Titanium Dioxide Concentration in
Aqueous Suspensions, Environ. Technol. Vol. 15, No. 2, pp 183–188, 1994.
18. Gupta, H. and Tanaka, S., Photocatalytic Mineralisation of Perchloroethylene Using Titanium Dioxide,
Wat. Sci. Tech. Vol. 31, No. 9, pp 47–54, 1995.


Development of solar testing station for flat-plate water-cooled solar collectors

Emin Kulić∗, Sadjit Metović & Haris Lulić


Mechanical Engineering Faculty, University of Sarajevo, Sarajevo, Bosnia and Herzegovina

Muhamed Korić
Unioninvest dd, Sarajevo, Bosnia and Herzegovina

ABSTRACT: Theoretical aspects of the thermal processes in solar collector operation under steady-state working conditions are briefly elaborated as a background to the test methods given in ASHRAE Standard 93-86 and European Standard EN 12975-2.
A short technical description of the operational flow diagram, including the data acquisition system and its interconnection with LabVIEW software for result evaluation, is presented. Measured thermal efficiencies of some solar collectors produced in Bosnia and Herzegovina and abroad are also shown.

INTRODUCTION

Work on solar energy utilization in B&H was initiated in the early 1980s by developing and building our own flat-plate, water-cooled solar collector. For the purpose of measuring collector thermal efficiency, a solar test station was designed and built at the Mechanical Engineering Faculty in Sarajevo. Unfortunately, the test facility and the measuring equipment were destroyed during the recent war. This stopped our further activities to improve the performance of the collector produced in-house. Therefore, after the war our first step was the reconstruction of the flat-plate, water-cooled collector test station. The next task is to measure the thermal efficiencies of some donated collectors. The main goal of our further activities is to redevelop, redesign, model and produce an improved solar collector of high performance, based on the earlier version, which is shortly described in this text.

THE BASIC INFORMATION ABOUT SOLAR COLLECTOR TEST METHODS WHEN USING EUROPEAN STANDARD EN 12975-2 AND ASHRAE STANDARD 93-86

Determination of solar collector characteristics by testing


A number of solar collector testing methods are available around the world. Our attention was focused on the two most widely spread techniques (European Standard EN 12975-2 and ASHRAE Standard 93-86). After studying them, we decided to choose the European standard EN 12975-2. It is worth pointing out that this standard and ASHRAE Standard 93-86 have some small differences in the prescribed testing procedures.

∗ Corresponding author. e-mail: ekulic@yahoo.com


According to European standard EN 12975-2, a number of separate tests must be performed to obtain a complete certificate for the tested collector:
1. Internal pressure tests for absorbers
2. High-temperature resistance
3. Exposure
4. External thermal shock
5. Internal thermal shock
6. Rain penetration
7. Mechanical load and impact resistance
8. Thermal performance
9. Freeze resistance
10. Final inspection
Tests 1, 2, 3, 4, 5, 6, 7, 9 and 10 give information on the physical conditions which the collector has to withstand during its operating lifetime. Test 8 gives information on the effectiveness of the solar collector in transforming the incoming solar radiation into useful thermal energy carried away by the cooling medium.
We consider test 8 the most important of all the stated tests, because all the essential thermal and operational characteristics of the collector, namely (i) the instantaneous thermal efficiency, (ii) the incidence angle modifier and (iii) the collector time constant, are determined using this test. This information enables determination of the thermal efficiency curve, and thus the sizing of a solar system based on the thermal energy loads and the available solar radiation at the locality, using e.g. the f-Chart method.
The equation for the instantaneous thermal efficiency can be written as:

  ηi = Qu/(Aa · IT) = FR (τα)e − FR UL (Ti − Ta)/IT                (1)
where:
Qu – useful energy gain
Aa – aperture area
IT – irradiance on tilted plate
FR – collector heat removal factor
(τα)e – transmittance-absorptance product
UL – collector overall heat loss coefficient
Ti – inlet water temperature
Ta – ambient air temperature
Equation (1) gives the useful heat output obtained, at a pre-determined temperature level of the operating fluid, when the collector is exposed to real outdoor conditions. The term FR in Equation (1) is a number which, in combination with (τα)e and UL, can be used to characterize a collector. These values are determined under test conditions that are generally similar to the operating conditions under which the collector will produce most of its useful heat output. Collector tests are done when the level of solar radiation is high enough and most of it is beam radiation (incidence angle less than 30°). The same or a similar useful heat output as determined under test conditions can be obtained, under the real operating conditions of the collector, only during the period of the day around solar noon.
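
For illustration, the short Python sketch below computes one data point of the efficiency curve from equation (1), i.e. ηi plotted against the reduced temperature (Ti − Ta)/IT; the readings (flow rate, temperatures, irradiance, aperture area) are hypothetical and not taken from the measurements reported here. Fitting a straight line through many such points yields FR(τα)e as the intercept and FR UL as the slope.

def efficiency_point(m_dot, cp, t_in, t_out, t_amb, irradiance, area):
    """One steady-state test point: instantaneous efficiency and reduced temperature."""
    q_useful = m_dot * cp * (t_out - t_in)           # useful energy gain, W
    eta = q_useful / (area * irradiance)             # left-hand side of eq. (1)
    x = (t_in - t_amb) / irradiance                  # abscissa (Ti - Ta)/IT, m2K/W
    return x, eta

# hypothetical readings: 0.02 kg/s of water through a 2 m2 aperture
x, eta = efficiency_point(m_dot=0.02, cp=4186.0, t_in=40.0, t_out=48.0,
                          t_amb=25.0, irradiance=900.0, area=2.0)
print(f"(Ti - Ta)/IT = {x:.4f} m2K/W, eta_i = {eta:.2f}")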

Incidence Angle Modifier (Kτα )


The incidence angle modifier (Kτα) is a function of the solar radiation incidence angle and is defined as the ratio of the transmittance-absorptance product at any incidence angle, (τα)e, to that at an incidence angle of zero degrees, (τα)n, as shown by Equation (2):

  Kτα = (τα)e/(τα)n                                               (2)
The dependence of (τα)e on the angle of incidence of the radiation to which the collector is exposed varies from one collector to another, and the standard test methods include experimental estimation of this effect.
The effective transmittance-absorptance product, (τα)e, is a very important optical characteristic of a solar collector.


Figure 1. Solar collector.

Its value can be determined from the free term included in equation (1) for small incidence angles. The incidence angle modifier introduces a correction of the effective transmittance-absorptance product to account for its dependence on the incidence angle of the solar radiation.
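
A common single-parameter representation used in this kind of testing fits the measured modifier as Kτα = 1 − b0 (1/cos θ − 1); the short sketch below only illustrates that fit form, and the value of b0 is hypothetical rather than a result of this work.

import math

def incidence_angle_modifier(theta_deg, b0=0.10):
    """Single-parameter incidence angle modifier: K = 1 - b0*(1/cos(theta) - 1)."""
    theta = math.radians(theta_deg)
    return 1.0 - b0 * (1.0 / math.cos(theta) - 1.0)

for theta in (0, 30, 45, 60):                        # typical test angles in degrees
    print(theta, round(incidence_angle_modifier(theta), 3))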

Collector Time Constant


In the thermal testing procedure for solar collectors, the collector time constant is the first value that has to be determined. This parameter is required to define the time interval for all required measurements (water and air temperatures, radiation), as prescribed by the corresponding standard. This very important characteristic shows the behaviour of the collector under transient conditions, when sudden changes in collector operation occur, e.g. when the incident radiation angle or the inlet fluid temperature changes suddenly.

Energy balance of solar collector


Setting of appropriate area of solar collector
Various areas of the solar collector (shown in Figure 1) are used to define ηi. It is necessary to specify the area used clearly, so that the same area basis can be used in the subsequent design calculations based on the collector test results. According to Figure 1, the following areas are defined:
a) Gross area: Ag = B × L (m2 )
b) Aperture area: Aa = ba × la (m2 )
c) Unshaded absorber plate area: AA = bA × lA (m2 )
The gross collector area is defined as the total area occupied by a collector module, that is, the total area of the collector array divided by the number of modules in the array. The aperture area is defined as the unobstructed cover area, i.e. the total cover area minus the area of the cover supports.

Instantaneous energy balance of the solar collector in steady operation


At an appropriate time of a clear day the solar collector can be considered to operate in a steady state. The assumption is that the mass flow rate of the heat transfer fluid is maintained and that the other inlet parameters (IT, ti, to, ta) remain constant during the considered time interval. In this case the instantaneous energy balance is written as below (see Figure 2). The absorbed solar irradiance and the overall heat loss of the collector are calculated as:

Qa = IT · (τα)e · Aa [W] (3)


Qov.loss = Aa · UL · (tA − ta )[W] (4)

where: tA [◦ C] – temperature of absorber area.


Figure 2. Solar collector energy balance: ΣQ = 0, i.e. Qa − Qu − Qov.loss = 0, hence Qu = Qa − Qov.loss (Qa – absorbed solar irradiance, Qov.loss – overall heat loss of the collector).

The analytic form of the useful heat gain of the collector is:

Qu = Aa · [IT · (τα)e − UL · (tA − ta )] [W] (5)

When calculating Qu from Equation (5) one should keep in mind the “changes” of the absorber temperature tA during the testing time interval.

Heat removal factor – FR


It is convenient to define a quantity that relates the actual useful energy gain of a collector to the useful gain that would result if the whole collector surface were at the fluid inlet temperature. This quantity is called the heat removal factor, FR, which can be determined from Equation (6) as:

FR = [ṁ · cp · (to − ti)] / {Aa · [IT · (τα)e − UL · (ti − ta)]}    (6)

The quantity FR is equivalent to a conventional heat exchanger effectiveness, which is defined as the ratio of the actual heat transfer to the maximum possible heat transfer. The maximum possible useful energy gain (heat transfer) in a solar collector occurs when the whole collector is at the inlet fluid temperature, since in that case the heat loss to the surroundings is at a minimum. The collector heat removal factor times this maximum possible useful energy gain is equal to the actual useful energy gain, Qu:
Qu = Aa · FR · [IT · (τα)e − UL · (ti − ta )] [W] (7)
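A minimal sketch of Equations (6) and (7) is given below; the measurement values are invented purely for illustration and are not data from the described test station.

```python
def heat_removal_factor(m_dot, cp, t_out, t_in, A_a, I_T, tau_alpha_e, U_L, t_amb):
    """FR from Equation (6): actual useful gain divided by the maximum possible gain."""
    actual_gain = m_dot * cp * (t_out - t_in)                       # W
    max_gain = A_a * (I_T * tau_alpha_e - U_L * (t_in - t_amb))     # W
    return actual_gain / max_gain

def useful_gain(A_a, F_R, I_T, tau_alpha_e, U_L, t_in, t_amb):
    """Qu from Equation (7), in W."""
    return A_a * F_R * (I_T * tau_alpha_e - U_L * (t_in - t_amb))

FR = heat_removal_factor(m_dot=0.02, cp=4180.0, t_out=55.0, t_in=45.0,
                         A_a=2.0, I_T=900.0, tau_alpha_e=0.85, U_L=6.0, t_amb=25.0)
print(round(FR, 3), round(useful_gain(2.0, FR, 900.0, 0.85, 6.0, 45.0, 25.0), 1))
```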

Experimental determination of the instantaneous thermal efficiency function diagram
As the first step in drawing the diagram it is necessary to calculate the values of the instantaneous thermal efficiency function η = f(x) as a function of the independent variable x = (ti − ta)/IT. The values of x are determined during the experiment. From the calculated values (a minimum of 16 independent tests) it is possible to plot the measured values and draw the diagram. After that it is necessary to determine the equation of the instantaneous thermal efficiency using one of the recognized methods of regression analysis.
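As a sketch of this step, the following Python fragment fits the straight line η = FR(τα)e − FR·UL·x to a set of (x, η) points by least squares; the data below are synthetic placeholders generated only to demonstrate the fit, not measured values.

```python
import numpy as np

# Synthetic placeholder points (x_i, eta_i); a real fit would use the >= 16
# measured efficiency points required by the standards.
x = np.linspace(0.0, 0.08, 16)
eta = 0.74 - 5.5 * x + np.random.normal(0.0, 0.01, x.size)

slope, intercept = np.polyfit(x, eta, 1)   # least-squares straight line
FR_tau_alpha = intercept                   # optical efficiency term FR*(tau*alpha)_e
FR_UL = -slope                             # heat loss term FR*UL [W/(m2 K)]
print(round(FR_tau_alpha, 3), round(FR_UL, 2))
```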
According to European Standard EN 12975-2 and ASHRAE Standard 93-86, at least 16 independent tests, giving 16 instantaneous points of thermal efficiency, are required in order to draw the graph and numerically determine the equation of this function. To obtain each specific point of instantaneous thermal efficiency, the following quantities involved in the calculation of the thermal efficiency values must be measured:
It : global solar irradiation on aperture area of collector
m: mass flow rate of heat transfer fluid
t1 : inlet temperature of heat transfer fluid
t2 : outlet temperature of heat transfer fluid
ta : ambient temperature
cp : specific heat of heat transfer fluid at its average temperature
According to the ASHRAE standard, prior to the beginning of each test measurement of the above-mentioned quantities, steady conditions of collector operation lasting not less than 15 minutes have to be provided at the solar testing station. This means that steady values of temperature and mass flow rate have to be maintained and the solar irradiation has to be more than 790 W/m2.
Each point of the instantaneous thermal efficiency is calculated as:

η = ∫_{T1}^{T2} ṁ · cp · (t2 − t1) dT / ( Ag · ∫_{T1}^{T2} It dT )    (8)

T1 : starting time of test measurement for determination of the instantaneous thermal efficiency
at that point
T2 : ending time of test measurement for determination of the instantaneous thermal efficiency
at that point
According to ASHRAE Standard 93-86 the time interval (T2 − T1) is equal to the time constant of the collector or, if the time constant is less than 5 minutes, it is taken to be 5 minutes.
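The evaluation of Equation (8) over one test interval can be sketched as below; the record is a synthetic placeholder and the trapezoidal rule is our own choice of numerical integration.

```python
import numpy as np

def efficiency_point(time_s, m_dot, cp, t2, t1, I_t, A_g):
    """One efficiency point from Equation (8): the ratio of the time-integrated
    useful heat to the time-integrated irradiation over (T1, T2)."""
    gained = np.trapz(m_dot * cp * (t2 - t1), time_s)   # J
    received = A_g * np.trapz(I_t, time_s)              # J
    return gained / received

# Illustrative 5-minute record sampled every 10 s (all values are placeholders):
t = np.arange(0.0, 301.0, 10.0)
eta = efficiency_point(t,
                       m_dot=np.full_like(t, 0.02), cp=4180.0,
                       t2=np.full_like(t, 55.0), t1=np.full_like(t, 45.0),
                       I_t=np.full_like(t, 850.0), A_g=2.2)
print(round(eta, 3))
```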

Thermal efficiency testing of solar collectors according to ASHRAE Standard 93-86
According to this standard there are two basic ways to perform all of the required thermal efficiency tests of solar collectors:
• in a closed laboratory space where a solar radiation simulator is used,
• under natural outdoor conditions, where the collector is exposed to solar radiation on a clear day and the required intensity of radiation is available.

For this investigation the following outdoor conditions are to be provided:
• the average intensity of the global solar radiation on the plane of the collector aperture is to be not less than 790 W/m2 during the test interval,
• the angle between the direct radiation and the normal to the plane of the collector aperture is to be less than 30°,
• the fraction of diffuse radiation is to be no more than 20%,
• ambient temperature: ta < 30 °C,
• wind velocity over the collector aperture: 2.2 < u < 4.5 m/s.
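The conditions listed above can be checked programmatically before a test record is accepted; the helper below is only an illustrative sketch using the thresholds from the list, not part of any standard.

```python
def test_conditions_ok(I_avg, incidence_deg, diffuse_fraction, t_amb, wind):
    """True if one test interval satisfies the outdoor conditions listed above."""
    return (I_avg >= 790.0 and            # global radiation on the aperture plane, W/m2
            incidence_deg < 30.0 and      # beam incidence angle, deg
            diffuse_fraction <= 0.20 and  # diffuse share of the radiation
            t_amb < 30.0 and              # ambient temperature, deg C
            2.2 < wind < 4.5)             # wind velocity over the aperture, m/s

print(test_conditions_ok(830.0, 22.0, 0.15, 24.0, 3.1))  # True
print(test_conditions_ok(760.0, 22.0, 0.15, 24.0, 3.1))  # False: radiation too low
```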

Selection of the heat transfer fluid operation-temperature levels


To determine the function of the instantaneous thermal efficiency, collectors are tested at 4 different inlet temperature levels of the heat transfer fluid:
1. level: t1 ∼ ta, for which η1 = ηmax
2. level: t1 for which η2 = 0.7 ηmax
3. level: t1 for which η3 = 0.4 ηmax
4. level: t1 for which η4 = 0.1 ηmax


Basic information about solar collector test methods according to European Standard EN 12975-2
The basic definitions in this standard for collector tests in this area show no important differences in comparison to ASHRAE Standard 93-86 or the first version of that standard from 1977. In comparison to the American standard there are, however, some differences and additions concerning:
• the definition of the operating temperatures of the heat transfer fluid used to determine the function of the instantaneous thermal efficiency,
• the calculation of the individual instantaneous thermal efficiency values,
• the argument of the function of the instantaneous thermal efficiency of the collector, which is determined from the average temperature of the heat transfer fluid during the test interval,
• the test interval length,
• the presentation of the obtained test results,
• the thermal capacity of the solar collector,
• the determination of the incidence angle modifier.

SCHEMES AND DIAGRAMS

Technological scheme
The basic technological scheme of the Solar Testing Station is shown in Figure 3, while Figures 4 to 6 show photographs of the testing station from different angles.

Data acquisition and transformation


A flow diagram of the LabVIEW software used as part of the data acquisition system is shown in Figure 7. Signals coming from the measuring devices, placed at specific positions on the Solar Testing Station and on the solar collector being tested, are collected by the data acquisition card and then sorted and used for the calculation of the different parameters required by the test procedure. The collected data include various temperatures, fluid flow rates, irradiation, wind velocity, etc.

Technological and thermal characteristics of the initial in-house collector
The initial technology for further solar collector development, which consists of a sandwich panel with an added absorber and glazing, is shown in Figures 8 and 9. This collector can also serve as a construction element (wall or roof).

1. Testing panel
2. Heat exchanger – chiller
3. Storage
4. Circulating pump
5. Fine filter
6. Expansion vessel
7. Flow rate controller
8, 12, 13. Flow rate measurement
9, 10. Electrical heaters
11. Temperature controller
14, 15, 16. Temperature sensors
17. Temperature recorder
18. Temperature plotter
19, 20, 21. Global, diffuse and long-wave radiation measurement, respectively
22. Solar radiation integrator
23. Solar radiation plotter
24. Thermo-anemometer

Figure 3. Schematic technological drawing of solar testing station.


Figure 4. Testing station.
Figure 5. Testing station.

Figure 6. Testing station.

Figure 7. LabVIEW acquisition data scheme.


Standard construction panel and its transformation into a Sandwich Solar Collector - SSC
Figure 8. Further solar collector development: a) standard construction panel, b) transformed solar collector (joining and sealing of panels).

Figure 9. Different views and cross sectional area of solar sandwich collector.

Figure 10 shows experimental values of the incidence angle modifier for one of the tested collectors, while Figure 11 shows measured values of the thermal efficiency for the same collector.
The thermal efficiency of some collectors tested in Sarajevo before the last war is shown in Figure 12.


Figure 10. Experimental determination of Incident angle modifier.

Figure 11. Experimental values of thermal efficiency of one of tested collectors.


Figure 12. Comparative plots of thermal efficiencies for 4 different collectors.

CONCLUSIONS

From the presented material one may conclude that the experimental test station has been successfully reconstructed. A number of collectors have been tested and their thermal efficiencies obtained and compared with the manufacturers' information.

REFERENCES

1. Duffie J.A., Beckman W.A.: Solar Engineering of Thermal Processes. John Wiley & Sons, New York,
1985.
2. Method of Testing to Determine the Thermal Performance of Solar Collectors, ASHRAE ANSI/ASHRAE
93-1986.
3. Methods of Testing to Determine the Thermal Performance of Solar Collectors; ASHRAE,
ANSI/ASHRAE 93-1986 (RA 96).
4. Thermal Solar Systems and Components – Collectors Part 1: General requirements, prEN 12975-1:1997.
5. Thermal Solar Systems and Components – Collectors Part 2: Test Methods, prEN 12975-2:1997.
6. Methods of Testing to Determine the Thermal Performance of Unglazed Flat-Plate Liquid Type Solar
Collectors, ASHRAE, ANSI/ASHRAE 96-1980 (RA 89).
7. Test Methods for Solar Collectors – Part 3: Thermal Performance of Unglazed Liquid heating Collectors
(Sensible heat transfer Only) including pressure Drop, ISO; ISO 9806-3, 1995.
8. Kulić E., Sirbubalo S., Korić M.: Survey of Solar Energy Utilization, Prewar and Postwar Activities in
Bosnia and Herzegovina, Workshop on Renewable Energy Technologies – Dissemination and Market
Development, 3–6 April, 2000, Islamabad, Pakistan.
9. Kulić E., Koric M.: Solar testing station for testing flat-plate water cooled solar collectors, Interklima,
Zagreb, April, 26–27, 2001.



The application of solar radiation for the treatment of lake water

Davor Ljubas∗ , Nikola Ružinski & Slaven Dobrović


Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Croatia

ABSTRACT: The topic of this research is the use of solar radiation for reduction of Natural
Organic Matter (NOM) content in surface waters. NOM in drinking water could cause several health
problems, especially after contact with disinfecting chemicals. Solar radiation alone does not have enough energy for sufficient degradation of NOM, but in combination with a heterogeneous photocatalyst – titanium dioxide (TiO2) – and other chemicals the degradation potential can increase. A number of tests with lake water exposed to solar radiation in a non-concentrating reactor were performed, and the photodegradation of NOM was studied for various combinations of doses and crystal structures of TiO2 with H2O2. The irradiation intensity was estimated from global radiation measurements. The best performance for NOM degradation was obtained with the combination of 1 g/L TiO2, both anatase and rutile, + solar radiation. Economically acceptable doses of H2O2 decrease the efficiency of the process.

INTRODUCTION

Advanced oxidation processes (AOPs) are based on the generation of very reactive species – hydroxyl radicals (OH•) – that can oxidize a wide spectrum of organic matter in water quickly and nonselectively. Solar photocatalytic oxidation with a semiconductor such as TiO2 is one of the AOPs [1–5].
The topic of this study is the use of solar radiation for the reduction of the NOM content in surface waters, because NOM in drinking water can cause several health problems, especially after contact with disinfecting chemicals [6–9]. We tried to involve solar radiation in water treatment and to estimate its benefits [10,11]. Solar radiation alone does not have enough energy for sufficient degradation of NOM, but in combination with a heterogeneous photocatalyst – titanium dioxide (TiO2) – and other chemicals the degradation potential can increase. When TiO2 is illuminated by UV radiation with a wavelength below 400 nm, the photons excite valence band electrons across the band gap into the conduction band, leaving holes in the valence band. The holes in TiO2 react with water molecules or OH− ions and produce hydroxyl radicals OH• [12].
The experimental part of this work involves a series of tests with lake water exposed to solar radiation in a non-concentrating reactor, studying the photodegradation of NOM for various combinations of doses and crystal structures of TiO2 (anatase and rutile) with various doses of hydrogen peroxide (H2O2). The degradation of NOM was followed by TOC, UV absorbance at 254 nm (A(254)) and colour measurements. The irradiation intensity was estimated from global solar radiation measurements, using the fact that only 3–5% of the solar spectrum on the Earth's surface is UV radiation (below 400 nm) [10,13]. The authors used an average value of 4% of the global solar radiation for the estimation of the useful radiation intensity for the photocatalytic process. The experiments were performed in Zagreb, Croatia, at latitude 45.82° N.
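The estimate of the useful UV intensity is a simple fraction of the measured global radiation; the one-line sketch below reproduces the 4% rule used by the authors (the function name is ours).

```python
def estimated_uv_irradiance(global_solar_w_m2, uv_fraction=0.04):
    """Estimated UV (< 400 nm) part of the global solar radiation, using the 4% average."""
    return uv_fraction * global_solar_w_m2

print(estimated_uv_irradiance(685.4))  # ~27.4 W/m2, as quoted for Figure 2
```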
The results of the research showed that the best performance for NOM degradation was obtained with the combination of 1 g/L TiO2, both anatase and rutile, + solar radiation. The addition of economically acceptable doses of H2O2 decreases the efficiency of the photocatalytic degradation process. Only a very high dose of H2O2 has a positive impact on the degradation rate of NOM. Additionally, the observed degradation rate of NOM presents a significant potential for use in real plants, especially in combination with other technological steps of drinking water preparation, like settling, coagulation–flocculation or biological filtration, since more biodegradable NOM substances could be formed [14–16].

∗ Corresponding author. e-mail: davor.ljubas@fsb.hr

MATERIALS AND METHODS

Materials
The water used in the experiments was lake water from “Jezero kod Njivica” at Njivice, Island of Krk, Croatia. The average composition of the lake water is given in Table 1.
TiO2 powders of different crystal structures, anatase and rutile, with a purity of 99.9+%, were provided by Aldrich. Analytical grade H2O2 (30%) was provided by Kemika, Zagreb, Croatia. The samples were filtered by vacuum filtration through 0.45 µm cellulose nitrate membranes, provided by Sartorius, Germany. All the chemicals and materials were used as received, without further purification.

Apparatus
The experimental setup of this study is shown in Figure 1. The effective volume of the reactor
(135 mm diameter × 72 mm H) was 800 mL.
The dissolved oxygen content was measured by oximeter OXI 325, WTW, Germany. The global
solar radiation was measured by a pyranometer KIPP & ZONEN, mounted at an automatic mete-
orological station META 801, by the Institute Jozef Stefan, Slovenia. Temperature was controlled
with digital thermometer type 110, TESTO, Germany. pH value of the water was measured with a
pH-meter, type 540 GLP, WTW, Germany.

Procedures
Preparation of the samples
The samples of lake water were carried in 25 L polycarbonate resin containers and kept in a refrigerator (+4 °C) for a maximum of 7 days. All the dishes were washed with Helmanex II solution, HNO3:H2SO4 1:1 and demineralised water of Millipore quality (0.05 µS/cm), and dried at 50 °C. The water was taken out of the refrigerator 12–15 hours before the experiments to reach a temperature of approximately 24 °C. After dosing TiO2 and/or H2O2 into 800 mL of raw water, the reactor was kept “in the dark” and mixed for 10 minutes at 400 rpm to homogenise the suspension.

Table 1. Parameters of the “Jezero kod Njivica” lake water.

Parameter Range

pH value 8.10–8.20
Turbidity, NTU 0.6–1.2
Absorbance, A (254) nm/cm−1 0.111–0.135
KMnO4 consuming capacity, mg/L 18.50–20.50
Colour – ADMI, mg/L 9–13
Colour – Pt-Co-455 nm (0.45 µm filtrated) 5–11
Colour – Pt-Co-465 nm (0.45 µm filtrated) 4–10
Alkalinity, mg/L as CaCO3 185–200
Iron, µg as Fe (II+III) 40
Aluminium, µg as Al 50
DOC, mg/L C 4.65–5.10


Solar light experiments


In the main series of experiments the solar light exposure time was 2 hours. Samples for analysis were taken at the beginning and after 30, 60 and 120 minutes. During the experiments the following parameters were controlled and analysed: temperature, dissolved oxygen content, absorbance at 254 nm after filtration through a 0.45 µm membrane, colour and pH value. During the experiments the mixing speed was kept at 400 rpm to ensure the creation of a vortex and thereby enable the dissolution of oxygen from the air, which is necessary for the performance of the photocatalytic process.

Experiments “in the dark”


Experiments “in the dark” were conducted with the reactor covered by aluminium foil to block the impact of solar radiation. In this way the adsorption of NOM on the TiO2 particles could also be observed.

Analyses
The NOM content in water was determined, after filtration through a 0.45 µm membrane, with a TOC analyser type 5050 (SHIMADZU, Japan) and, for absorbance measurements, with a UV-VIS spectrophotometer type 8430 (HEWLETT PACKARD, USA). A DR/4000 spectrophotometer (HACH, USA) was used for colour measurement.

RESULTS AND DISCUSSION

The main experiments were performed with a 2-hour exposure time. The dissolved oxygen content in all the experiments was between 95 and 100% of the saturation value. The pH value at the end of the experiments was between 8.1 and 8.6.
The degradation potentials of the pure TiO2 crystal structures, anatase and rutile, are shown in Table 2.

Figure 1. Experimental setup: 1. reactor made of borosilicate glass 3.3, 2. ceramic support, 3. PVC thermostatic bath with demineralised water, 4. magnetic stirrer type IKA MAG (IKA, Germany), 5. pipes for connection to cooling bath type L (LAUDA, Germany), 6. PVC insulation lid, 7. PTFE stir bar (10 mm diameter × 40 mm length).


Table 2. Relative absorbance and DOC of the anatase (An) and rutile (R) TiO2 solution, after
2 hours of experiment and filtration through 0.45 µm membrane.

TiO2 , g/L Temperature, ◦ C A/A0 (254), % DOC/DOC0 , % A/A0 (254), %, “in dark”

0.05 An 24.0±1 73.56 99.38 98.6
0.1 An 24.0±1 57.49 100 97.9
0.5 An 24.0±1 46.37 86.76 93.2
0.8 An 24.0±1 39.31 81.16 94.2
1.0 An 24.0±1 31.47 80.13 91.1
1.5 An 24.0±1 32.41 83.93 87.3
0.05 R 24.0±1 85.56 98.84 95.3
0.1 R 24.0±1 71.47 100 96.2
0.5 R 24.0±1 35.23 88.24 92.4
0.8 R 24.0±1 36.45 77.49 100.0
1.0 R 24.0±1 25.62 78.99 100.0
1.5 R 24.0±1 26.79 79.40 96.1

Figure 2. Absorbance at 254 nm, A(254), during 4 hours of solar irradiation for 1 g/L An TiO2 and 1 g/L R TiO2. Average global solar radiation: 685.4 W/m2, estimated UV radiation: 27.4 W/m2.

Following the results from Table 2, it was decided to continue the experiments only with doses of 1 g/L TiO2. These doses showed the best absorbance removal for both TiO2 photocatalysts. The best DOC removal was also obtained for the dose of 1 g/L An TiO2, while 1 g/L R TiO2 showed the second best result. Regarding the absorbance removal during the experiments “in the dark” (Table 2), it can be concluded that only a weak adsorption relationship exists between NOM and TiO2 particles.
The bicarbonate content of the lake water is relatively high: the alkalinity range was 185–200 mg/L as CaCO3. It can be supposed that NOM in lakes with a lower bicarbonate content could be degraded more efficiently [17].
The degradation of NOM during 4 hours of solar irradiation is shown in Figure 2. This experiment showed continuous activity of TiO2 in the degradation of NOM. Following Figure 2, there is reason to believe that the process could lead to total degradation and destruction of NOM if the solar exposure time is long enough.
The additional use of H2O2 and the mixing of the TiO2 crystal structures did not improve the performance of the degradation process, as shown in Table 3. Only the economically unacceptable dose of 54 mg/L H2O2 showed better DOC and absorbance removal.
A qualitative view of the NOM degradation process for the anatase and the rutile TiO2 photocatalyst is given in Figures 3–6.


Table 3. Additional use of H2 O2 and mixing of An and R TiO2 .

TiO2 , g/L H2 O2 , mg/L Temperature, ◦ C A/A0 (254 nm), % DOC/DOC0 , %

0.5 An + 0.5 R 0 24.0±1 41.18 91.34
0.25 An + 0.25 R 0 24.0±1 43.24 90.51
1.0 An 6 24.0±1 36.09 94.31
1.0 An 18 24.0±1 34.57 90.48
1.0 An 54 24.0±1 31.97 91.55
1.0 R 6 24.0±1 24.09 87.87
1.0 R 18 24.0±1 21.92 85.68
1.0 R 54 24.0±1 9.26 74.89


Figure 3. The change of the UV spectrum after 30, 60 and 120 minutes of solar irradiation of the sample
with 1 g/L An TiO2 and filtration through 0.45 µm membrane. Global solar radiation: 802.2 W/m2 , estimated
UV radiation: 32.1 W/m2 .


Figure 4. The colour removal during the experiment with 1 g/L of An TiO2 . Global solar radiation:
802.2 W/m2 , estimated UV radiation: 32.1 W/m2 .



Figure 5. The change of the UV spectrum after 30, 60 and 120 minutes of solar irradiation of the sample
with 1 g/L R TiO2 and filtration through 0.45 µm membrane. Global solar radiation: 826.8 W/m2 , estimated
UV radiation: 33.1 W/m2 .


Figure 6. The colour removal during the experiment with 1 g/L of R TiO2 . Global solar radiation: 826.8 W/m2 ,
estimated UV radiation: 33.1 W/m2 .

Figure 7. Settling velocity (V/Vmax versus time) for both An and R TiO2.


Figure 8. Relative absorbance at 254 nm, A(254)/A0(254), of water after 2 hours of exposure to solar radiation with three times reused An and R TiO2 (1st run UV: 14.2 W/m2, 2nd run: 24.6 W/m2, 3rd run: 22.4 W/m2, 4th run: 18.8 W/m2).

Reuse of TiO2
As can be seen in Figure 7, after the photooxidation process the settling of TiO2, without any chemical settling accelerator, is practically finished after 1 hour. The settling was performed in a separation flask. The TiO2 slurry was taken from the flask, dried, weighed and used again in the experiments.
The experiments observing the degradation potential of the 3 times reused TiO2 photocatalyst gave the results shown in Figure 8.
The experiments with settling and separation of TiO2 from water in a simple separation flask showed an average recovery of 98.5% for An TiO2 and 97.8% for R TiO2. It can therefore be estimated that the average loss of TiO2 per cycle is about 2%. In [18] a commercial price of TiO2 of 14.5 $/kg was assumed. Thus the operational cost of this technological step – solar photocatalytic degradation of NOM – at a water treatment plant can be estimated as 0.29 $/m3 of raw water. An additional dose of 54 mg/L of H2O2 would increase the cost of this water treatment step by 0.24 $/m3 of water, assuming a commercial price of 4.5 $/kg for H2O2.
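The cost figures quoted above can be reproduced with simple arithmetic; the short check below uses only the doses, the 2% loss per cycle and the unit prices given in the text.

```python
# Back-of-the-envelope check of the quoted operating costs.
tio2_dose = 1.0          # kg of TiO2 per m3 of water (1 g/L)
loss_per_cycle = 0.02    # 2% of the catalyst is lost in each settling/reuse cycle
tio2_price = 14.5        # $/kg, commercial price taken from [18]
print(round(tio2_dose * loss_per_cycle * tio2_price, 2))   # 0.29 $/m3 of raw water

h2o2_dose = 0.054        # kg of H2O2 per m3 of water (54 mg/L)
h2o2_price = 4.5         # $/kg
print(round(h2o2_dose * h2o2_price, 2))                    # ~0.24 $/m3 of water
```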

CONCLUSIONS

Solar energy as an alternative resource to meet future energy and ecological demands has yet to be exploited in many areas of human activity. Drinking water preparation could be one of them, especially through the use of solar radiation in Advanced Oxidation Processes (AOPs).
The following conclusions could be drawn from the above study:
• Natural Organic Matter (NOM) in lake water of the given composition can be partly mineralised (up to 25%) and partly degraded in a photocatalytic process including 2 hours of solar exposure. The degradation efficiency of NOM showed an average absorbance A(254) removal of 60–80%. The degradation is achieved at basic pH values, avoiding the need for pH adjustment.
• The TiO2 photocatalyst can be separated and reused after each treatment with an average loss of 2%. The operating cost of the technological step with TiO2 solar photooxidation, estimated for the lab-scale system, is about 0.29 $/m3 of water.
• Increasing the catalyst loading increases the efficiency of degradation and mineralisation of NOM only up to 1 g/L.
• The TiO2 photocatalyst, both anatase and rutile, showed almost the same NOM degradation potential even after 3 cycles of separation and reuse in the experiments.


• Economically acceptable doses of H2O2 in the solar photocatalytic process decrease the efficiency of NOM degradation.
This research showed that photocatalytic processes have the potential to be applied, after a series of lab- and pilot-scale experiments, in existing water preparation plants and thereby increase the efficiency of some technological steps, like biological filtration and degradation.

ACKNOWLEDGEMENTS

This study was supported by the Ministry of Science and Technology of the Republic of Croatia within the framework of Project No. 120 040. The authors wish to thank the Department of Geophysics, Faculty of Science, University of Zagreb, for the global solar radiation measurements, the Analytical Laboratory of the Public Waterworks Company “Vodoopskrba i odvodnja” – Zagreb for the TOC/DOC measurements, and the Public Waterworks Company “Ponikve” – Krk for technical support.

NOMENCLATURE

NOM – Natural Organic Matter


AOPs – Advanced Oxidation Processes
UV – Ultraviolet
VIS – Visible
λ – Wavelength, nm
A (254) – Absorbance at 254 nm, path 1 cm
OH• – Hydroxyl radical
DOC – Dissolved organic carbon
An TiO2 – Anatase crystal form of TiO2
R TiO2 – Rutile crystal form of TiO2
TOC – Total organic carbon
Pt-Co – Colour unit being equal to 1 mg/L platinum as chloroplatinate ion
ADMI – Colour value of American Dye Manufacturers Institute

REFERENCES

1. Hoffmann, M. R., Martin, S. T., Choi, W., Bahnemann, D. W., Environmental applications of
semiconductor photocatalysis, Chem. Rev., Vol. 95, No. 3, pp 69–96, 1995.
2. Alfano, O. M., Bahnemann, D., Cassano, A. E., Dillert, R., Goslich, R., Photocatalysis in water
environments using artificial and solar light, Cat. Today, Vol. 58, pp 199–230, 2000.
3. Mills, A., Le Hunte, S., An overview of semiconductor photocatalysis, Journal of photochemistry and
photobiology A: Chemistry, Vol. 108, pp 1–35, 1997.
4. Legrini, O., Oliveros, E., Braun, A. M., Photochemical Processes for Water Treatment, Chem. Rev.,
Vol. 93, No. 2, pp 671–698, 1993.
5. Goslich, R., Dillert, R., Bahnemann, D., Solar water treatment: principles and reactors, Wat. Sci. Tech.,
Vol. 35, No. 4, pp 137–148, 1997
6. Minear, R. A., Amy, G. L., Water Disinfection and Natural Organic Matter: Characterization and
Control, ACS Symposium Series 649, American Chemical Society, Chicago, 1996.
7. Waller, K., Swan, S. H., Delorenze, G., Hopkins, B., Trihalomethanes in drinking water and spontaneous
abortion, Epidemiology, Vol. 9, No. 2, pp 134–140, 1998.
8. Fawell, J., Robinson, D., Bull, R., Birnbaum, L., Butterworth, B., Daniel, P., Galalgorchev, H., Hauchman,
F., Julkunen, P., Klaassen, C., Krasner, S., Ormezavaleta, J., Tardiff, T., Disinfection by-products in
drinking water – critical issues in health effects research, Environmental Health Perspectives, Vol. 105,
No. 1, pp 108–109, 1997.


9. Bull, R. J., Birnbaum, L. S., Cantor, K. P., Rose, J. B., Butterworth, B. E., Pegram, R., Tuomisto, J.,
Water chlorination – essential process or cancer hazard, Fundamental and Applied Toxicology, Vol 28,
No. 2, pp 155–166, 1995.
10. Serpone, N., Salinaro, A., Hidaka, H., Horikoshi, S., Knowland, J., Dunford, R., Beneficial and
deleterious effects of solar radiation, Proc. of the Inter Solar Energy Conference, 14–17. 06. 1998.,
Solar Engineering ASME, pp 287–298, 1998.
11. Vidal, A., Developments in solar photocatalysis for water purification, Chemosphere, Vol. 36, No. 12,
pp 2593–2606, 1998.
12. Linsebigler, A. L., Lu, G., Yates Jr., J.T., Photocatalysis on TiO2 surfaces: Principles, Mechanisms and
selected results, Chem. Rev., Vol. 95, No. 3, pp 735–758, 1995.
13. Kulišić, P. Novi izvori energije: sunčana energija i energija vjetra, Školska knjiga, Zagreb, 1991.
14. Herrmann, J. -M., Matos, J., Disdier, J., Guillard, Ch., Laine, J., Malato, S., Blanco, J., Solar
photocatalytic degradation of 4-chlorophenol using the synergistic effect between titania and activated
carbon in aqueous suspension, Cat. Today, Vol. 54, pp 255–265, 1999.
15. Kagaya, S., Shimuzu, K., Arai, R., Hasegawa, K., Separation of titanium dioxide photocatalyst in its
aqueous suspensions by coagulation with basic aluminium chloride, Water Research, Vol. 33, No. 7,
pp 1753–1755, 1999.
16. Bekbölet, M., Çeçen, F., Özkösemen, G., Photocatalytic oxidation and subsequent adsorption
characteristics of humic acids, Wat. Sci. Tech., Vol. 34, No. 9, pp 65–72, 1996.
17. Bekbölet, M., Balcioglu, I., Photocatalytic degradation kinetics of humic acid in aqueous TiO2
dispersions: the influence of hydrogen peroxide and bicarbonate ion, Wat. Sci. Tech, Vol. 34, No. 9, pp
73–80, 1996.
18. Vidal, A., Diaz, A. I., El Hraiki, A., Romero, M., Muguruza, I., Senhaji, F., Gonzalez, J., Solar
photocatalysis for detoxification and disinfection of contaminated water: pilot plant studies, Cat. Today,
Vol. 54, pp 283–290, 1999.



Helinet energy subsystem: an integrated hydrogen system


for stratospheric applications

Evasio Lavagno∗ & Raffaella Gerboni


Energy Department, Politecnico di Torino, Torino, Italy

ABSTRACT: HeliNet is a telecommunication infrastructure based on HAVE – High Altitude


(17 km) Very long (6 months) Endurance – unmanned solar aerodynamic platforms, named
HELIPLAT® (Helios Platforms).
HELIPLAT® is a monoplane of the twin-boom tail type, driven by eight brushless motors. Its energy subsystem is based on a closed hydrogen cycle. The platform surfaces are covered with photovoltaic arrays; during the day the solar power is used to supply the electric motors and to feed an electrolyser which produces hydrogen and oxygen. The gases are stored at high pressure and used during the night to feed the fuel cells which supply the motors; the water produced feeds the electrolyser, closing the cycle.
The key features of this hydrogen-based propulsion system are the cost-effectiveness of the infrastructure (compared to satellite systems) and the sustainability of the solution, thanks to its favourable relationship with the environment: it does not induce any atmospheric or electromagnetic pollution.

THE HELINET PROJECT

Goals
HeliNet is a telecommunication infrastructure based on HAVE (High Altitude Very long Endurance)
unmanned solar aerodynamic platforms, named HELIPLAT® (Helios Platforms). The project
addresses the design of the telecommunication network topology, architecture, protocols, and
communication interface towards the applications. As for these latter, three pilot applications in
strategic areas are addressed: positioning and navigation, environmental surveillance and broadband
communications.
Its main expected outputs from the platform point of view are the aerodynamic and structural
design of the HELIPLAT® platform, the design of the energy subsystem, including solar cells,
fuel cells and electrolyser, and the design and realisation of a scaled size prototype. Besides, from
the payload point of view the main output will be the study of pilot applications in the fields of
localisation, environmental data processing and transmission, broadband services.
HeliNet is to be considered as an integrated infrastructure able to yield several services in a cost
effective and sustainable manner.

Partners
Partners in the project are some Universities (Politecnico di Torino (I), Ecole Polytecnique Fédérale
de Lausanne (CH), Universitat Politècnica de Catalunya (E), Budapest University of Technology
and Economics (HU), University of York (UK)), research centres (Institut Jozef Stefan (SLO)) and
industrial partners (Fastcom S.A. (CH), EnigmaTEC (UK), Construcciones Aeronáuticas S.A. (E),
Carlo Gavazzi Space S.p.a. (I)).

∗ Corresponding author. e-mail: lame@polito.it


Figure 1. HELIPLAT.

Features
The project (Ref. IST-1999-11214) started in January 2000 and will come to an end in December 2002. The EC-funded budget was 2.90 million Euros in the framework of the 5th FWP.
The whole project can be divided into two areas of interest: the stratospheric platform and the payload. A brief description of the two aspects follows.

The platform
HELIPLAT® is an aircraft able to climb autonomously to 17 km and to fly at that altitude powered by the sun. It is a carbon fibre monoplane of the twin-boom tail type, driven by eight brushless motors (Figure 1).
The platform has wings 73 m long and 2.71 m wide, for a total area of about 163 m2, and a horizontal tail 33 m long and 1.8 m wide, with an area of 60 m2. The total area of the two wings available for covering with the photovoltaic arrays is about 212 m2. Each of the two booms can be considered as a tapered cylinder about 10 m long with a diameter of about 600 mm. The payload will be positioned in a central box.

The payload
The payload will be represented by the hardware and software necessary to exploit HELIPLAT® unique features for localisation, broadband applications and environmental surveillance. To this latter end, for instance, optical hardware compatible with the HELIPLAT requirements is employed, able to take and locally analyse pictures of the ground for risk prevention, long-term change monitoring and emergency management.

THE ENERGY SUBSYSTEM

In the framework of research on the sustainability of any means of transportation, the adopted fuel represents a fundamental issue. It is hard to define whether HELIPLAT is fuelled by the sun or by hydrogen: in any case the direct impact on the environment is null. The design of a subsystem which can integrate these two resources is a challenge and the heart of the whole project.


Figure 2. The energy subsystem: solar cells, electric motors, electrolyser, fuel cells, water, and H2 and O2 tanks.

Figure 3. Solar radiation curve during one day (19th June) and the subsystem's energy management: solar power, available power (including solar cell efficiency), energy demanded by the motors, power provided by the fuel cells and extra energy to the electrolyser.

The closed loop


The most innovative aspect of this project is the integrated propulsion system constituted by fuel
cells, electrolyser and electric motors. The flow of energy through the system is described in
Figure 2.
Gas and water tanks and auxiliaries complete the system.
The power collected by the solar panels during the day is supplied to the electric motors to drive the platform. Not all the available power is used for this task (Figure 3): what is left feeds the electrolyser, which produces high-pressure hydrogen and oxygen from the stored water. The gases are stored in separate cylinders and feed the fuel cells during the night, so that the electric power needed by the motors is always assured.

Power from the sun


Unlike all other sun-powered aircraft, a stratospheric platform does not need to worry about clouds, as it flies above them. A detailed simulation of the solar radiation variations during the day and during the year was performed using analytical formulas provided by a NASA study [1], [2].

Figure 4. Hydrogen mass variation during the platform mission.
The input data were the wing surface covered by panels (total irradiated area), the flight altitude and the latitude. For our simulations the chosen latitude was 38° N. According to the calculations, the maximum peak power obtained from the sun over the whole wing surface was 35 kW. Of course, during the day the power assumes the full range of values between zero and the peak value (at noon). In Figure 3 a typical one-day solar radiation curve is represented, combined with the real available electric power after the solar panel conversion. There is a considerable variation of solar power availability during the year: this means that the duration of the mission cannot cover the whole year, because in wintertime there would not be enough power to be stored to get through the long nights.

Power transformation: the electrolyser


The power needed by the motors has a fairly constant value of 6.2 kW throughout the day and the night; thus there is extra power by day and a lack of power by night (Figure 3, dotted area). To exploit the extra power, a polymeric membrane electrolyser was chosen. To evaluate the mass of gases produced by the electrolyser during each day, only the net power is to be considered. If a take-off day is fixed, it is possible to evaluate how much hydrogen is produced and stored: it is interesting to notice that in summertime more hydrogen is produced than is consumed, so that the mass of stored hydrogen would grow significantly, forcing the choice of bigger cylinders or higher pressures. It was therefore chosen to limit the maximum hydrogen mass storage to 10 kg. Figure 4 shows the hydrogen mass pattern during the mission.
Once the hydrogen cylinder is full, the electrolyser is asked to stop the production of gas. Production starts again on the following day to restore the amount of gas consumed during the night.
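The day/night hydrogen bookkeeping described above can be sketched as a simple mass balance. In the fragment below only the 6.2 kW motor demand and the 10 kg storage cap come from the text; the conversion factors and the daily surplus are assumed, illustrative numbers, not HeliNet design data.

```python
H2_PER_KWH_IN = 0.025    # kg of H2 produced per kWh fed to the electrolyser (assumed)
H2_PER_KWH_OUT = 0.06    # kg of H2 consumed per kWh delivered by the fuel cells (assumed)
CAP_KG = 10.0            # maximum hydrogen mass storage chosen for the platform

def day(h2_kg, surplus_kwh):
    """Daytime: surplus solar electricity feeds the electrolyser until the cylinder is full."""
    return min(CAP_KG, h2_kg + surplus_kwh * H2_PER_KWH_IN)

def night(h2_kg, night_hours):
    """Night: the fuel cells draw down the stored hydrogen to deliver 6.2 kW to the motors."""
    return max(0.0, h2_kg - 6.2 * night_hours * H2_PER_KWH_OUT)

h2 = 10.0  # initial reserve at take-off, kg
for d in range(5):
    h2 = night(day(h2, surplus_kwh=120.0), night_hours=9.0)
    print(f"day {d}: {h2:.2f} kg of H2 in storage")
```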
Another option could be to ask the electrolyser to produce just the exact amount of gas necessary for the following night, setting its working point at an average power value and operating all day long. This solution would offer two advantages: (1) it would not be necessary to install a big electrolyser able to manage peak powers of 35 kW, as it would remain at a lower constant level; (2) the operation of the equipment would be more regular, without sudden stops from high power values. The weak point would be the precise regulation of the operation: it would be necessary to forecast every day the energy needed for the following night. This would link the production system and the software/control system very tightly, with a loss of reliability (e.g. the need for active protection).

Gas transformation: the fuel cell


A polymeric electrolyte membrane fuel cell (PEM) stack is adopted to convert hydrogen and oxygen
streams into electric energy for the motors. The only by-product is water (and heat), which restores
that used by the electrolyser.


Figure 3 (dashed area) shows the task of the fuel cell on a generic day: to produce the required 6.2 kW during the night and to supply the missing power during the particular periods of dusk and dawn.
At the exact moment when the fuel cell stops operating, the electrolyser starts.

The mission
The duration of the mission is limited by the maximum amount of hydrogen the platform can carry. An early departure in the year means smaller amounts of available solar energy, shorter daylight hours and the need to use previously stored hydrogen to get through the long nights. If a considerable hydrogen reserve is stored before departure, this allows flying through many long nights. The reserve is restored during spring and summer (Figure 4). When the storage tanks are full, the extra power from the solar cells is exploited for facility conditioning. The stored mass will decrease again when summer is ending and the nights lengthen again. The mission lasts as long as hydrogen is available. An example of a mission period could be: take-off on the 17th of April and landing on the 20th of August, with an initial reserve of 10 kg of hydrogen and oxygen.

The platform system integration


One of the most demanding points of this project is the weight and volume balance. Equipment which in ground applications can easily weigh tons without causing any relevant problem to the overall system could not be sustained in an aerospace application.

Requirements and components


The total maximum weight admitted for the energy subsystem is about 400 kg. According to an in-depth market analysis, it was possible to identify a fuel cell stack and an electrolyser which could fit in the small space inside the platform. The fuel cell's power density is about 350 W/kg, while the electrolyser's power density is about 1000 W/kg [3]. It should be remembered that the external temperature is −60 °C and that all this equipment has to deal with water. This means that proper conditioning and, thus, extra weight for insulating materials have to be considered.

Fluids weights and volumes


As said before, the foreseen typical mission requires an initial reserve of hydrogen and oxygen. At take-off the platform will carry almost only gases, because water is produced by the fuel cell activity during the night. When the hydrogen mass reaches its lowest value for the first time (Figure 4), it means that all the gases have been transformed into water and the water tank will be full (about 90 kg), ready to feed the electrolyser. Thus the total weight of the three fluids (gases + water) will not change very much during the mission (about 100 kg).
The challenging point with hydrogen is its very low density: this forces a significant increase of the operating pressure. A pressure value of 150 bar would reduce the volume of the total mass to 0.75 m3 (0.375 m3 for oxygen).

Layout
The subsystem equipment can find a proper placement inside the booms (cylinders with a promi-
nence – “room” – in the part under the wing, 600 mm diameter, 1000 mm maximum height in the
prominence) and inside the horizontal wing spar (cylinder, 300 mm diameter). Figure 5 shows the
central box with sections of the two spars: only the bigger one would be exploited. Figure 6 shows
the layout of equipment inside the boom.
For best weight balancing and for reliability reasons it was decided to share the masses between the two booms, so all the equipment is replicated inside each of the two booms.
The cylinders containing the gases, which do not suffer from the very low temperature but instead benefit from it through a further reduction of their volume, are inserted inside the wing spar and in the cylindrical part of the boom, while the electrolyser, which needs to exploit gravity as much as possible, is placed inside the “room”, so that it can develop in height.


Figure 5. The central box with sections of wing spars.

Figure 6. Lateral view of the boom: layout of H2 /O2 tanks and electrolyser unit.

Heat exchange
The “room” has to be fully and carefully insulated and conditioned in order to avoid freezing of the water. It is easier to insulate a single room than to protect the whole platform, so all the delicate equipment has been concentrated in this space. It should also be considered that the fuel cell and the electrolyser work in different parts of the day – never simultaneously – and that their efficiencies (55% and 88%, respectively) mean that significant power is dissipated as heat. This heat can be used to warm the room. A careful study was made to understand whether this integration of heat sources was effective, and it was demonstrated that a temperature of about 20 °C with a variation of ±10 °C could be maintained if a proper combination of insulating materials and heat bridges was chosen.
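A rough estimate of the heat released into the “room” follows directly from the efficiencies and the power levels; in the sketch below only the 6.2 kW demand and the 55%/88% efficiencies come from the text, while the daytime electrolyser input power is an assumed example value.

```python
fc_output_kw = 6.2          # electric power delivered by the fuel cells at night
fc_efficiency = 0.55
fc_heat_kw = fc_output_kw * (1.0 - fc_efficiency) / fc_efficiency
print(round(fc_heat_kw, 1))  # ~5.1 kW of waste heat while the fuel cells run

el_input_kw = 20.0          # assumed, illustrative daytime electrolyser input
el_efficiency = 0.88
print(round(el_input_kw * (1.0 - el_efficiency), 1))  # 2.4 kW of waste heat by day
```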

Vibrations
The thermal bridges could be positively used not only to help in the conditioning management of the room but also to limit the vibrations of the equipment and so to lengthen its life. It has been demonstrated in several terrestrial applications that fuel cell stacks suffer from excessive vibrations. The life cycle of a stratospheric platform is considerably shorter than that of a car, but failure has to be avoided all the same, because the malfunctioning of critical equipment such as the fuel cell could cause the abortion of the mission, with consequent serious damage to the service.

Innovation
During the design of the subsystem it was possible to recognise that the electrolyser represents the
most critical point. Its concept is based on the separation by gravity of water and gases: this leads


Figure 7. In-vessel solution: vessel containing water, the electrolyser and auxiliaries, with hydrogen stored in the upper part and H2 and O2 pipes.

to a need for vertical development. At the same time, the gases produced are at ambient pressure, and only heavy machines (of the order of magnitude of tons) can produce gases at about 100 bar. Such a solution is unfeasible in a stratospheric application, where the total weight of the platform is about 800 kg.

In-vessel solution 1
An integrated solution was proposed. The electrolyser could be inserted inside the water tank with only a small increase of the tank volume. Some free space should be left in the upper part of the tank, where the freshly produced hydrogen could be stored, while the oxygen is driven directly into its storage cylinder (Figure 7). The vessel could be kept closed by a throttle valve, which opens only when the desired inner pressure is reached. As the electrolyser is immersed in water, the differential pressure between the two end plates is kept null, removing the need for heavy plates. The heat dissipated by the electrolyser would be immediately absorbed by the water, which could dissipate it by conduction and convection.
Theoretically, the reachable pressure could be very high: FEM analysis of the vessel is under way in order to determine whether pressures of about 150 bar are sustainable. First results show no major problems with this solution.

In-vessel solution 2
An even easier heat balance could be reached if a Regenerative Fuel Cell (RFC) were placed inside the vessel; it could work in both directions throughout the entire mission, without the need for duplicate equipment. RFCs are under study in several research centres and some companies are evaluating their performance.

SUSTAINABILITY AND FINAL CONCLUSIONS

The above described energy subsystem allows some final considerations about the relationship between a network of stratospheric platforms and the environment.
HeliNet's operation and main purpose can fully guarantee sustainability. It operates thanks to a closed loop based on hydrogen and oxygen, two non-polluting gases, and the product of their “combustion” is water. In case of an accidental release of gas, no damage would be caused to the ozone layer. The reliability of the installed equipment is assured by redundancy. Compared to common satellites, the take-off of HeliNet is absolutely non-polluting, as it is able to leave the ground without any external help. One of HeliNet's main tasks is to monitor the territory in search of environmental problems (e.g. flood and fire prevention and fast alarm). Besides, its operation does not induce electromagnetic pollution.
It can be said that, if the final evaluations of the energy subsystem and of its integration with the other parts of the platform lead to a positive result about its feasibility, HeliNet will be a brilliant example of environment-friendly technology and a witness to the power of water and sun.


REFERENCES

1. Colozza, A.J., Effect of Power System Technology and Mission Requirements on High Altitude Long
Endurance Aircraft, NASA CR 194455, 1994.
2. Colozza, A.J., Effect of Date and Location on Maximum Achievable Altitude for a Solar Powered
Aircraft, NASA CR 202326, 1997.
3. Dornheim, M.A., Special Fuel Cells Key to Months-Long Flight, Aviation Week & Space Technology,
February 28th 2000.


Sustainable development of
environment systems



Dynamic simulation of pollutant dispersion over complex urban


terrains: a tool for sustainable development, control and
management

K. Hanjalić & S. Kenjereš


Department of Applied Physics, Delft University of Technology, Delft, The Netherlands

ABSTRACT: We present computer simulation and animations of diurnal air movement and
pollutant dispersion over complex terrain with heat and emission islands. The method, based on
numerical solution of momentum, energy and concentration equations in time and space using an
algebraic turbulence closure for subscale (unresolved) motion, can account for terrain topography
and dynamics of meteorological synoptic conditions. The case study presented is a realistic scenario
over a medium sized town situated in a mountain valley during windless winter days when the lower
atmosphere is capped by an inversion layer preventing any escape of pollutants. The air movement
and pollutant dispersion are governed primarily by the day ground heating and night cooling and
by the terrain configuration. The results include the predictions of local values (and their time and
space variation) of air velocity, temperature and pollutant concentration. The approach can be used
for regulation of emission during critical weather periods, as well as for long-term planning of
urban and industrial development, for optimum location of industrial zones and for design of city
transportation and traffic systems.

INTRODUCTION

The concept of sustainable development of the urban environment implies a much broader challenge than just regulating, monitoring and restricting pollutant emission, as often understood by some environmentalists and policy makers. Imposing general emission standards, while not accounting
for local terrain topography, climate and weather peculiarities, which all affect the dynamics of
pollutant spreading and transformation, may lead to either too stringent or to too lenient regula-
tions, each with undesirable consequences. Too stringent and static emission regulations, which
may be justified only over short periods of time and under extreme circumstances, can lead to
disruption in industrial processes and limitation to further urban and industrial development. On
the other hand, complex terrain topography and local microclimate can under some circumstances
(i.e. severe weather conditions) lead to excessive local environment degradation despite imposing
strict limitation on emission. The true sustainability concept asks for a dynamic, more exact and
accurate approach than hitherto applied, which should account for most factors influencing the air,
water and soil quality, including their absorptive and regenerative capacities and their dynamics in
time and space. This calls for accurate field predictions of details of pollutant spreading, its chem-
ical and physical transformation, deposition and absorption, accounting for terrain configuration,
urban canopy, and for the dynamics of local atmosphere and weather conditions.
Various approaches have been proposed for numerical modelling and simulation of lower
atmosphere over mesoscale rural and urban areas. Most work reported assumed a wind velocity distribution, while treating the pollutant as a passive scalar, with possible inclusion of thermal buoyancy effects. The dispersion models are usually based on semi-empirical deductions and integral parameters. Situations at micro and meso scales dominated by buoyancy are usually beyond the reach of such models, Hunt [1,2]. Large eddy simulation (LES) is a possible option (Nieuwstadt [3]), but a hybrid LES/RANS approach (‘ultra’ CFD problems; Hunt [4]) seems a more viable option that can provide, at lower cost, the required detailed insight into these complex phenomena.
Urban areas with street canyons and heat islands form a more complex canopy layer than rural areas. Moreover, urban settlements contain various sources of heat and pollution resulting from a variety of human activities, e.g. fossil power plants, industry, transportation, agriculture, etc. A high percentage of urban areas is also covered with concrete canopy and asphalt, which store and reflect the incoming radiation, warming the near-surface layer of air significantly above the temperature of its surroundings. As a result, urban areas form a kind of local heat island in the surrounding countryside, Stull [5]. The urban landscape with tall buildings and streets of different sizes, together with the surrounding terrain orography, makes an especially complex local configuration. Together with an imposed initial temperature distribution in the atmosphere (thermal stratification), such terrain configurations and the time variation of ground temperature and emission activities create a complex unsteady pattern of air movement, which governs the transport and spreading of atmospheric pollutants emitted from multiple sources. The prediction of these complex interactions is of vital importance for estimating the distribution of pollutants, especially the toxic ones that may pose a risk to human health. This is also the major prerequisite for optimum control of air quality and sustainable development of the urban environment.
We present some results of numerical simulations of air movement and pollutant dispersion
in a medium-size valley town with significant residential and industrial pollution sources. The
simulations have been performed using a transient Reynolds-averaged Navier-Stokes (T-RANS)
approach, which proved to be efficient, numerically robust and physically more accurate than
most methods reported in the literature. The method was earlier validated against well-documented
laboratory and simulation results that include separately each of the major factors influencing the flow
and dispersion dynamics, i.e. thermal stratification and terrain orography: unsteady turbulent
penetrative convection of an unstable mixed layer, and Rayleigh-Bénard convection over flat
and wavy surfaces of different topology over a range of Ra numbers. In order to accommodate a
very complex terrain orography, we use a finite-volume Navier-Stokes solver for three-dimensional
flows in structured non-orthogonal geometries, based on Cartesian vector and tensorial components
and collocated variable arrangement.
This article focuses on critical windless periods during winter when the lower atmosphere in
the valley is capped by an inversion layer that prevents any convection through it. Daytime ground
heating and nocturnal cooling, modulated by the terrain orography and thermal stratification, alone generate
the air movement and the pollutant dispersion. While any realistic conditions can be imposed, we
consider here, as an illustration, a simplified situation with sinusoidal variations of temperature
and concentration in space and time, imitating two diurnal cycles.
The results include the predictions of local values (and their time and space variation) of air
velocity, temperature and pollutant concentration. The approach can be used as a tool for ensuring
sustainable development and long-term planning of urban and industrial growth, for optimising
location of industrial zones, of city transportation and traffic systems, as well as for ad hoc regulation
of emission during critical synoptic periods. The same approach can be applied to predict water
movement and pollutant transport in lakes and oceans of complex coastal and bed topography,
accounting for daily solar radiation and night cooling, as well as for wind effects.

THE T-RANS RATIONALE: EQUATIONS AND SUBSCALE MODELS

T-RANS is in essence a Very Large Eddy Simulation (VLES) in which the stochastic motion
is modelled using a $\langle k \rangle$–$\langle \varepsilon \rangle$–$\langle \theta^2 \rangle$ algebraic stress/flux/concentration (ASM/AFM/ACM) single-point
closure as the "subscale model", where $\langle \, \rangle$ denotes the ensemble-averaged time-resolved
turbulence properties. The turbulent stress tensor, $\tau_{ij} = \langle u_i u_j \rangle$, the heat flux vector, $\tau_{\theta j} = \langle \theta u_j \rangle$,
and the concentration flux vector, $\tau_{c j} = \langle c u_j \rangle$, were derived by truncation of the modelled
RANS parent differential transport equations, assuming weak equilibrium but retaining all major
flux production terms (all treated as time-dependent). In contrast to Large Eddy Simulation, the
contributions of both modes to the turbulent fluctuations are of the same order of magnitude.
Environmental fluid flows are described by the standard conservation laws for mass, momentum,
energy and concentration. For the resolved motion, the equations can be written in essentially the same
form as for LES:

$$ \frac{\partial \langle U_i \rangle}{\partial t} + \langle U_j \rangle \frac{\partial \langle U_i \rangle}{\partial x_j} = \frac{\partial}{\partial x_j}\left( \nu \frac{\partial \langle U_i \rangle}{\partial x_j} - \tau_{ij} \right) - \frac{1}{\rho}\frac{\partial \left( \langle P \rangle - P_{\mathrm{ref}} \right)}{\partial x_i} + \beta g_i \left( \langle T \rangle - T_{\mathrm{ref}} \right) \qquad (1) $$

$$ \frac{\partial \langle T \rangle}{\partial t} + \langle U_j \rangle \frac{\partial \langle T \rangle}{\partial x_j} = \frac{\partial}{\partial x_j}\left( \frac{\nu}{Pr} \frac{\partial \langle T \rangle}{\partial x_j} - \tau_{\theta j} \right) \qquad (2) $$

$$ \frac{\partial \langle C \rangle}{\partial t} + \langle U_j \rangle \frac{\partial \langle C \rangle}{\partial x_j} = \frac{\partial}{\partial x_j}\left( \frac{\nu}{Sc} \frac{\partial \langle C \rangle}{\partial x_j} - \tau_{c j} \right) \qquad (3) $$

where $\langle \, \rangle$ stands for resolved ensemble-averaged quantities and $\tau_{ij}$, $\tau_{\theta i}$ and $\tau_{c i}$ represent the contributions
of the unresolved scales to the momentum, temperature and concentration equations respectively,
which are provided by the subscale model. In the present work, which is still at a preliminary
stage, the adopted "subscale" expressions are as follows. For the turbulent stresses we applied an
eddy-viscosity model, modified to include the production due to thermal buoyancy:

$$ \tau_{ij} = -\nu_t \left( \frac{\partial \langle U_i \rangle}{\partial x_j} + \frac{\partial \langle U_j \rangle}{\partial x_i} \right) + \frac{2}{3}\langle k \rangle \delta_{ij} - C_\phi \beta \frac{\langle k \rangle}{\langle \varepsilon \rangle}\left( g_i \tau_{\theta j} + g_j \tau_{\theta i} \right) \qquad (4) $$

Algebraic AFM/ACM counterparts express the turbulent heat and concentration fluxes:

$$ \tau_{\theta i} = -C_\phi \frac{\langle k \rangle}{\langle \varepsilon \rangle}\left( \tau_{ij}\frac{\partial \langle T \rangle}{\partial x_j} + \xi \tau_{\theta j}\frac{\partial \langle U_i \rangle}{\partial x_j} + \eta \beta g_i \langle \theta^2 \rangle \right) \qquad (5) $$

$$ \tau_{c i} = -C_\phi \frac{\langle k \rangle}{\langle \varepsilon \rangle}\left( \tau_{ij}\frac{\partial \langle C \rangle}{\partial x_j} + \xi \tau_{c j}\frac{\partial \langle U_i \rangle}{\partial x_j} \right) \qquad (6) $$

The latter expression presumes a neutrally buoyant pollutant. The closure of the expressions for the
subscale quantities is achieved by solving in time and space the equations for the turbulence kinetic
energy $\langle k \rangle$, its dissipation rate $\langle \varepsilon \rangle$ and the temperature variance $\langle \theta^2 \rangle$, resulting in a three-equation model
(Kenjereš and Hanjalić [6,7]; Hanjalić and Kenjereš [8]):

$$ \frac{D \langle k \rangle}{D t} = D_k + P_k + G_k - \langle \varepsilon \rangle \qquad (7) $$

$$ \frac{D \langle \varepsilon \rangle}{D t} = D_\varepsilon + P_{\varepsilon 1} + P_{\varepsilon 2} + G_\varepsilon - Y \qquad (8) $$

$$ \frac{D \langle \theta^2 \rangle}{D t} = D_\theta + P_\theta - \langle \varepsilon_\theta \rangle \qquad (9) $$
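To illustrate how the algebraic subscale closure operates, the following minimal sketch (in Python/NumPy) evaluates Eqs. (4)-(5) at a single grid point. It is not the authors' implementation: the coefficient values (C_phi, xi, eta), the eddy viscosity and the local flow state are illustrative assumptions, and the implicit coupling between the stress and the heat flux is resolved here by a simple fixed-point iteration.

```python
import numpy as np

# Illustrative single-point evaluation of the algebraic subscale closure, Eqs. (4)-(5).
# Coefficients and the local state below are assumed placeholder values, not the
# calibrated constants of the paper's model.
C_phi, xi, eta = 0.2, 0.6, 0.6                    # assumed closure coefficients
beta = 3.4e-3                                     # thermal expansion coefficient [1/K]
g = np.array([0.0, 0.0, -9.81])                   # gravity vector [m/s^2]
nu_t = 1.0                                        # assumed eddy viscosity [m^2/s]

k, eps, theta2 = 0.5, 0.05, 0.1                   # <k>, <eps>, <theta^2> at one point
dUdx = 1e-3 * np.array([[1.0, 0.5, 0.0],          # resolved velocity gradient dU_i/dx_j [1/s]
                        [0.0, -1.0, 0.2],
                        [0.1, 0.0, 0.0]])
dTdx = np.array([0.0, 0.0, -0.01])                # resolved temperature gradient [K/m]

tau_theta = np.zeros(3)                           # start the iteration from zero heat flux
for _ in range(50):
    # Eq. (4): buoyancy-extended eddy-viscosity stress
    tau = (-nu_t * (dUdx + dUdx.T)
           + (2.0 / 3.0) * k * np.eye(3)
           - C_phi * beta * (k / eps) * (np.outer(g, tau_theta) + np.outer(tau_theta, g)))
    # Eq. (5): algebraic heat-flux model (implicit in tau_theta, hence the iteration)
    tau_theta_new = -C_phi * (k / eps) * (tau @ dTdx
                                          + xi * (dUdx @ tau_theta)
                                          + eta * beta * g * theta2)
    if np.linalg.norm(tau_theta_new - tau_theta) < 1e-12:
        tau_theta = tau_theta_new
        break
    tau_theta = tau_theta_new

# Recompute the stress once with the converged heat flux for consistency
tau = (-nu_t * (dUdx + dUdx.T) + (2.0 / 3.0) * k * np.eye(3)
       - C_phi * beta * (k / eps) * (np.outer(g, tau_theta) + np.outer(tau_theta, g)))
print("subscale stress tau_ij:\n", tau)
print("subscale heat flux tau_theta_i:", tau_theta)
```

The same pattern extends to the concentration flux of Eq. (6) by replacing the temperature gradient with the concentration gradient and dropping the buoyancy term.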

CASE STUDY: DIURNAL AIR CIRCULATION AND POLLUTANT DISPERSION OVER A TOWN VALLEY

The potential of the T-RANS approach for predicting atmospheric thermal convection and pollutant
transport in a real, full-scale situation over a mountainous conurbation is illustrated for the case
of the diurnal variation of air movement and pollutant dispersion over a medium-sized town situated
in a mountain valley, with distinct residential and industrial zones. The simulated domain covers
12 × 10 × 2.5 km and is filled by a numerical mesh with an average cell size of 100 m in each
direction.
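As a quick order-of-magnitude check (assuming the stated 100 m spacing applies uniformly in all three directions), the implied grid size is:

```python
# Rough cell count implied by the stated 12 x 10 x 2.5 km domain and 100 m average cell size
nx, ny, nz = int(12e3 / 100), int(10e3 / 100), int(2.5e3 / 100)
print(nx, ny, nz, nx * ny * nz)   # 120 x 100 x 25 = 300000 cells
```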
In the absence of any field data, a hypothetical scenario of a diurnal cycle was adopted, in which
the ground over the complete solution domain is uniformly heated and cooled in a sinusoidal manner
with diurnal and nocturnal temperature amplitudes of ±10 °C. On top of this time-dependent but
spatially uniform ground temperature variation, we superimposed additional ground heating and
cooling over the residential and industrial islands. Again a sinusoidal variation was imposed, but
now both in time (over the diurnal cycle) and in space, with temperature extremes of Tg = ±2 °C
and ±1 °C, respectively, at the centre of each of these two zones.
The two zones are also assigned different pollutant emissions (C = 50% and 100%, respectively)
during the day, and zero emission during the night. A satellite picture of the conurbation considered,
together with the numerical mesh over the terrain, the solution domain and a schematic of the imposed
boundary conditions, is shown in Figure 1. Two consecutive diurnal cycles were simulated (0-24 h,
day (I) and day (II)) with a time step of 150 s. Wall functions were applied at the ground plane. At
the top boundary, we prescribed a constant potential temperature and assumed a symmetry
(frictionless) boundary condition for the velocity and for the passive pollutant concentration. The
side boundaries were placed sufficiently far from the domain of interest and, for convenience,
treated as symmetry planes for all variables, in order to simulate a windless situation as closely as
possible. Two different situations with respect to the imposed thermal stratification were analysed.
The imposed vertical profile of the potential temperature of dry air was assumed uniform in the
lower atmosphere and linear in the upper layer, with a gradient of ΔT/Δz = 4 K/km. The base of the
inversion layer (the switch from the uniform to the linear temperature profile) is located at
z/H = 2/3 (≈ 1600 m above the deepest point of the valley) for the first case ("weak stratification"),
and at z/H = 1/3 (≈ 800 m) for the second case ("strong stratification"). The domain height (H) and
the characteristic initial temperature gradients give very high values of the Rayleigh number,
i.e. O(10^17).

Figure 1. Above: IRS-IC satellite image showing the specified case: a middle-sized town located in a mountain
area. Below: imposed boundary conditions for the two distinct areas (industrial and residential), represented
by different pollution emission values (C = 100% and 50%, respectively) and different heat source intensities
(Tg = ±1 °C and ±2 °C, respectively).
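A minimal sketch of the imposed diurnal forcing described above is given below. The amplitudes, the 150 s time step and the day/night emission switching follow the text; the sine phase, the spatial footprint of the zonal increments and the zone-centre coordinates are assumptions introduced only for illustration.

```python
import numpy as np

DAY = 24 * 3600.0  # diurnal period [s]

def ground_temperature(t, x, y, T0=283.0,
                       zone_centres=((4000.0, 5000.0), (8000.0, 5000.0)),  # residential, industrial (hypothetical)
                       zone_amps=(2.0, 1.0), zone_radius=1500.0):
    """Ground temperature [K] at time t [s] and horizontal position (x, y) [m]:
    a uniform +/-10 K sinusoidal diurnal cycle plus sinusoidal-in-space increments
    of +/-2 K (residential) and +/-1 K (industrial) peaking at the zone centres."""
    diurnal = np.sin(2.0 * np.pi * t / DAY)
    T = T0 + 10.0 * diurnal
    for (xc, yc), amp in zip(zone_centres, zone_amps):
        r = np.sqrt((x - xc) ** 2 + (y - yc) ** 2)
        footprint = np.cos(0.5 * np.pi * np.minimum(r / zone_radius, 1.0))  # 1 at centre, 0 beyond radius
        T += amp * footprint * diurnal
    return T

def emission_rate(t, zone):
    """Relative pollutant emission: industrial 100%, residential 50% during the
    daytime heating half of the cycle, zero during the night."""
    daytime = np.sin(2.0 * np.pi * t / DAY) > 0.0
    base = {"industrial": 1.0, "residential": 0.5}[zone]
    return base if daytime else 0.0

# Example: sample two diurnal cycles with the 150 s time step used in the simulations
times = np.arange(0.0, 2 * DAY, 150.0)
T_centre = ground_temperature(times, 4000.0, 5000.0)
print(T_centre.min(), T_centre.max())   # roughly T0 - 12 K to T0 + 12 K at the residential centre
```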
Since no measurements were available, no quantitative validation of the presented simulations
was possible. Instead, we present some results that illustrate features of interest for environmental
studies. The time evolution of the thermal plumes in the initial stage of the first-day cycle is shown
for the two types of thermal stratification in Figures 2 and 3, respectively. Thermal plumes are
identified as isosurfaces of the potential temperature, coloured by the intensity of the vertical
velocity component. For the weak stratification, Figure 2 shows that strong thermal plumes appear
at the initial stage of heating, originating both over the areas characterized by a locally increased
rate of heating (the residential and industrial zones) and from the ridges of the surrounding hills.
As time progresses, the plumes reach and penetrate the capping inversion region, inducing mixing
and deformation of the initially almost horizontal inversion layer, Figure 2a. At the end
of the first-day cycle, owing to nocturnal cooling, the thermal plumes are no longer observed. The
capping inversion layer has moved closer to the ground and has returned almost fully to its initial
horizontal organization.
Many of the above-described features of the thermal convection can also be observed for the strong
stratification, Figure 3. The main difference lies in the dynamics of the time evolution, since the
capping inversion layer is located significantly closer to the ground (approximately at the peak
elevation of the highest hill in the domain considered) and significantly suppresses the development
of the convective motion. This is confirmed in Figure 3a, where communication between the ground
and the inversion layer is established only over the residential and industrial zones, in the form of
an "urban" thermal plume.

Figure 2. Thermal plume evolution (isosurfaces of potential temperature ⟨T⟩ coloured by ⟨W⟩ velocity), panels
(a) and (b): weak stratification case.

Figure 3. Thermal plume evolution (isosurfaces of potential temperature ⟨T⟩ coloured by ⟨W⟩ velocity), panels
(a) and (b): strong stratification case.

Figure 4. Time evolution of the instantaneous trajectories in a vertical plane over the residential and industrial
zones, panels (a)-(c): left – weak stratification, right – strong stratification.
In order to provide a more detailed picture of the specific effects of the local thermally active
heat islands and of the terrain topology, Figure 4 shows in parallel the instantaneous trajectories
of massless fluid particles in a vertical plane over the residential and industrial zones, at different
time instants and for the two stratification conditions. The first two rows correspond to active heating
periods of the first-day cycle and the last row to the nocturnal cooling periods of the second-day cycle.


Figure 5. Time evolution of the instantaneous trajectories in a vertical plane over hilly terrain, panels (a)-(c):
left – weak stratification, right – strong stratification.

Figure 6. Velocity vectors and horizontal velocity profiles indicating the inertia of the upper layer, 2 h after the
onset of cooling/heating, day (II), weak stratification: left – up-slope motion, right – down-slope motion.

It is obvious that the capping inversion layer acts as a kind of barrier to the vertical convec-
tive movement, suppressing plume penetration and mixing, especially in the case of strong
stratification. The distinctive roll structures created during the initial stage of the diurnal cycle lose their
strength and identity under the stabilizing effect of the nocturnal cooling. It is noted that even a
very small undulation of the terrain surface has a great impact on the formation and orientation of
the convective rolls, Figure 4a. As time progresses, the interactions between these large struc-
tures become more intensive, resulting in vigorous motion that is especially noticeable over the
urban and industrial areas with elevated ground temperature. The strongest vertical deformations
of the capping inversion layer occur above these areas, Figure 4b. During the nocturnal periods, the
convective motion and the associated mixing decay, Figure 4c, and the characteristic roll structures
gradually disappear. They are replaced by weak inertial motion, which further decays with time.
Another source of roll-structure systems is the terrain orography, which is clearly visible in the
vertical cross-sections at locations where there are no local heat sources, i.e. outside the urban and
industrial areas, Figure 5. Here, the strongest deformations of the inversion layer are observed above
the highest hill peaks, Figures 5a and b.
It is interesting to note that a distinct roll structure can still be observed during the nocturnal
period, Figure 5c, albeit of low intensity in the strong stratification case. Further investigation of
the effect of the local terrain on the convective processes revealed an interesting inertial motion,
which is observed even 2 h after the onset of ground heating or cooling, Figure 6.


Figure 7. Time evolution of velocity vectors in a plane close to the ground (coloured by intensity of vertical
velocity component) for weak stratification.

Time evolutions of the velocity vectors, coloured by the intensity of the vertical velocity, in a plane close to
the ground are shown for the two stratifications in Figures 7 and 8. It is important to note the difference
in the dynamics of the velocity vectors: fast changes for the weak stratification, slow changes for the
strong stratification.
Similar conclusions can be drawn from the spatial distribution of massless particles released
over the industrial and residential areas, Figures 9 and 10. It is interesting to observe the difference
in the vertical distances reached in the two situations, a direct consequence of the location of the
capping inversion layer.
The time evolution of the concentration field of a passive pollutant is shown in Figure 11. Three
characteristic values of concentration were selected, corresponding to very strong (0.9·⟨Cmax⟩),
mild (0.1·⟨Cmax⟩) and weak (0.01·⟨Cmax⟩) pollution effects. The latter is used as a marker to
visualize how far the pollution front disperses over the simulated domain within the
two diurnal cycles. As mentioned earlier, in the specification of the boundary conditions for
the emission of the passive scalar, the industrial zone is characterized by twice as intensive an
emission as the residential area. Intensive pollution emission is imposed only during the
day, imitating normal daytime city activities (transportation, industrial activities), which are switched
off during the night. The dominant red-coloured plumes are clearly present over both the indus-
trial and residential zones during these active periods in both simulated scenarios. There are
several distinct differences between the two stratifications. The first is in the vertical
distance reached by the pollutant plumes. The second is in the dynamics
of these plumes, since the weak stratification provides better mixing and dispersion over sig-
nificantly larger areas than the strong stratification. It is interesting to note
the absence of a highly polluted red-coloured cloud at the end of the second-day cycle for the
weak stratification case, Figure 11d, which confirms the more efficient dispersion of the passive
contaminant.
The time evolutions of the pollution front (for two stratification situations) are shown in Figure 12.


Figure 8. Time evolution of velocity vectors in a plane close to the ground (coloured by intensity of vertical
velocity component) for strong stratification.

Figure 9. Spatial distribution of massless particles released over industrial and residential areas (coloured by
intensity of vertical velocity component) for weak stratification.


Figure 10. Spatial distribution of massless particles released over industrial and residential areas (coloured
by intensity of vertical velocity component) for strong stratification.

Figure 11. Time evolution of different passive pollutant concentrations: left – weak stratification, right –
strong stratification.


Figure 12. Time evolution of the pollution front (isosurface of small concentration value) for two different
conditions: above – weak, below – strong stratification.


CONCLUSION

Computer simulations of the diurnal dynamics of air circulation and pollutant dispersion under various
thermal stratifications in the lower atmosphere over a town valley with complex terrain orography
were performed using the time-dependent Reynolds-averaged Navier-Stokes (T-RANS) method.
The approach can be regarded as a very large eddy simulation, with a single-point closure play-
ing the role of the "subscale" model. In comparison with conventional LES, the model of
the unresolved motion (here a reduced algebraic stress/flux model, closed by ensemble-averaged
transport equations for the turbulence kinetic energy, its dissipation rate and the temperature variance,
⟨k⟩–⟨ε⟩–⟨θ²⟩) covers a much larger part of the turbulence spectrum, whereas the large determinis-
tic structures are fully resolved. Validation of the proposed T-RANS approach was performed for
situations in which the effects of thermal stratification and terrain orography were separated, and for
which good experimental and numerical databases exist. The full-scale simulations of pollutant
dispersion in a town valley with distinct residential and industrial zones, under different imposed
thermal stratifications, produced qualitatively very reasonable results and at the same time con-
firmed the numerical efficiency and robustness of the proposed approach. We believe that the T-RANS
approach can serve as a powerful and efficient tool for predicting the meso- and
sub-meso scale local environment over an urban canopy with complex terrain orography. This was
demonstrated for an example of a critical situation during a windless period capped by a strong inver-
sion, when the air movement and pollutant dispersion are governed solely by the terrain orography
and the diurnal activities in the residential and industrial zones. The method can be used as a tool for
ensuring sustainable development and for long-term planning of residential and industrial growth of
urban areas, for optimising the location of industrial zones and of city transportation and traffic systems,
as well as for ad hoc regulation of emissions during critical synoptic periods. The method can also be
applied to predict water movement and pollutant transport in lakes and oceans of complex coastal
and bed topography, accounting for daily solar radiation and night cooling, as well as for wind
effects.

ACKNOWLEDGEMENTS

The research of Dr. Saša Kenjereš has been made possible by a fellowship of the Royal Netherlands
Academy of Arts and Sciences (KNAW).

REFERENCES

1. Hunt, J.C.R., Eddy dynamics and kinematics of convective turbulence, In E.J. Plate et al. (eds.) Buoyant
Convection in Geophysical Flows, Kluwer Academic Publishers, The Netherlands, pp.41–82, 1998.
2. Hunt, J.C.R., Environmental forecasting and turbulence modelling, Physica D, Vol.133, pp 270–295,
1999.
3. Nieuwstadt, F.T.M., Mason, P.J., Moeng, C.H. and Schumann, U., Large-eddy simulation of the
convective boundary layer: A comparison of four computer codes, Proceedings of the 8th Turbulent
Shear Flow Symposium, Munich, 9–11 September, pp 1.4.1–1.4.6, 1991.
4. Hunt, J.C.R., UltraCFD for Computing Very Complex Flows, ERCOFTAC Bulletin, No.45, pp 22–23,
2000.
5. Stull, R.B., An Introduction to Boundary Layer Meteorology, Kluwer Academic Publ., 1988.
6. Kenjereš, S. and Hanjalić, K., Transient analysis of Rayleigh-Bénard convection with a RANS model,
Int. J. Heat and Fluid Flow, Vol.20, pp 329–340, 1999.
7. Kenjereš, S. and Hanjalić, K., Combined effects of terrain orography and thermal stratification on
pollutant dispersion in a town valley: a T-RANS simulation, Journal of Turbulence, Vol.3/026, 2002.
8. Hanjalić, K. and Kenjereš, S., T-RANS simulations of deterministic eddy structure in flows driven by
thermal buoyancy and Lorentz force, Flow, Turbulence and Combustion, Vol.66, pp 427–451, 2001.



Study of environmental sustainability: the case of Portuguese polluting industries

Manuela Sarmento∗
Department of Management, Armed Forces University, Portugal

Diamantino Durão
Department of Mechanical Engineering, Lusíada University, Portugal

Manuela Duarte
Department of Management, Superior Institute of Accounting and Administration, Portugal

ABSTRACT: The globalisation of the economy is a reality that every region, country and company
is facing. One of globalisation's perspectives is the environment, which strongly depends on the
behaviour of industries and on the utilization of natural resources. In fact, since the environment is
a public good, industrial management carries increased social responsibilities. To improve environmental
sustainability, management has to account for environmental facts, which is demonstrated through
the internalization of externalities, using financial accounting and its financial statements. The
majority of developed countries follow the rules of financial accounting harmonization issued by
international organizations, namely the International Accounting Standards Committee. The present
paper analyses environmental sustainability based on an original survey distributed among the most
potentially polluting Portuguese industries. It evaluates the industries' behaviour regarding the
environment and presents the correlation between financial accounting harmonization and
environmental polluting facts.

INTRODUCTION

Consumers and ecological lobbies are requiring companies to implement cleaner production
processes and "green" products. They are also pressing governments to enforce anti-pollution laws,
namely the polluter-pays principle, which in economic terms corresponds to the "internalisation
of negative environmental externalities" [1,2,3].
Since the environment is a public good, the present study on (1) the accounting harmonisation of
environmental facts by industries and (2) the evaluation of companies' behaviour regarding the
environment is a subject of great contemporary relevance and of interest to the international
scientific community, companies, countries, regions and cities. Accounting harmonisation of
environmental facts has implications for environmental sustainability, since it measures the damages
caused by polluting industries. The importance of this subject is demonstrated by the increasing
number of international conferences and meetings on environmental issues sponsored by the
United Nations (UN) and also by the environmental protection concerns of the European Union
(EU) [4,5]. Appropriate environmental strategic management by potentially polluting industries
will improve citizens' quality of life and will leave a healthy planet to the next generation
[6,7]. Other authors suggest that environmental leadership can bring competitive advantages
to companies [8,9,10] if they implement cleaner production processes [11] and sell "green"
products.

∗ Corresponding author. e-mail: msc@clix.pt

Their public image and market share will improve, which contributes to increased
profits [12,13].
This paper aims to give a better understanding of some environmental strategies implemented
by the management of potentially polluting industries in Portugal. In order to achieve this objective,
a survey with twenty-five questions was answered by 100 industries (large, medium and small)
located throughout Portugal.

ACCOUNTING HARMONIZATION

To measure correctly the environmental sustainability of industries, it is necessary to analyse the
Position Paper of the United Nations, the International Accounting Standards (IAS) and the
recommendation of the European Commission regarding international accounting standards.

Position Paper of the United Nations


According to the Position Paper produced by the United Nations Working Group on International
Standards of Accounting and Reporting [14], accounting for the environment has become
increasingly relevant to organizations (profit, non-profit, or central and local government organiza-
tions), as pollution has become a more prominent economic, social and political problem throughout
the world. Steps are being taken at national and international levels to protect the environment and
to reduce, prevent and mitigate the effects of pollution. The Position Paper deals with the accounting of
environmental costs and liabilities resulting from transactions and events that affect the economic
and financial results of an organization and that should be reported in the financial statements.

International Accounting Standards (IAS)


The International Accounting Standards 36 [15], 37 [16] and 38 [17] were approved by the IASC
Board in April 1998, and became effective for financial statements after July 1st, 1999.

IAS 36 – impairment of assets


The objective of this standard is to prescribe the procedures that an enterprise applies to ensure that
its assets are carried at no more than their recoverable amount. An asset is carried at more than its
recoverable amount if its carrying amount exceeds the amount to be recovered through use or sale
of the asset. In this case, the asset is described as impaired and the standard requires the enterprise
to recognize an impairment loss. The standard also specifies when an enterprise should reverse an
impairment loss, and it prescribes certain disclosures for impaired assets. The application of IAS 36
to the accounting release of environmental facts relates to the loss of value (impairment)
that may occur with:
• Tangible assets (technologies): impairment is connected with the obsolescence of the company's
technologies.
• Intangible assets (rights and contracts): impairment is associated with the abrogation of the contract.
In these cases the company should recognize an impairment loss so that the financial statements,
particularly the balance sheet, give a true and fair view of the assets.

IAS 37 – provisions, contingent liabilities and contingent assets


The objective of this standard is to ensure that appropriate recognition criteria and measurement
bases are applied to provisions, contingent liabilities and contingent assets and that sufficient
information is disclosed in the notes to the financial statements to enable users to understand their
nature, timing and amount. The application of IAS 37 to the accounting release of the environ-
mental facts relates to:
• Information inside the balance sheet: provisions for environmental liabilities and charges, which
allow the company to respond to a possible risk.
• Information outside the balance sheet: notes attached to the financial statements, which the company
must present, including all information related to environmental facts, so that the accounts
give a true and fair image of the company.

IAS 38 – intangible assets


The objective of this standard is to prescribe the accounting treatment for intangible assets that
are not dealt with specifically in another IAS. This standard requires an enterprise to recognize
an intangible asset if, and only if, certain criteria are met and also specifies how to measure the
carrying amount of intangible assets and requires certain disclosures about them. The
application of IAS 38 to the accounting release of environmental facts relates to:
• loss of value (impairment) that can occur due to environmental accidents;
• costs of intangible assets arising from environmental facts, for example R&D.

Recommendation of the European Commission


The objective of this Commission Recommendation [18], approved by the European Commission
(EC) in May 2001, is the recognition, measurement and disclosure of environmental issues in
companies' annual accounts and annual reports.
The EC issued this recommendation considering:
• the Treaty establishing the European Community, in particular Article 211 EC;
• the Fourth Council Directive [19];
• the Seventh Council Directive [20].

APPLICATION OF THE ANALYSED ACCOUNTING TOOLS

Environmental expenditure includes the costs of steps taken by an enterprise, or on its behalf by
third parties, to prevent, reduce or repair damage to the environment which results from its operating
activities. These costs include, amongst others, waste disposal and avoidance, protection of soil
and surface water and groundwater, protection of clean air and the climate, noise reduction, and
the protection of biodiversity and landscape. Only additional identifiable costs that are primarily
intended to prevent, reduce or repair damage to the environment should be included. Costs that
may influence favourably the environment but whose primary purpose is to respond to other needs,
for instance to increase profitability, health and safety at the workplace, safe use of the company’s
products or production efficiency, should be excluded. Where it is not possible to isolate separately
the amount of the additional costs from other costs in which it may be integrated, it can be estimated
in so far as the resulting amount fulfils the condition to be primarily intended to prevent, reduce or
repair damage to the environment.

Recognition
An environmental liability is recognised when it is probable that an outflow of resources embodying
economic benefits will result from the settlement of a present obligation of an environmental nature
that arose from past events, and the amount at which the settlement will take place can be measured
reliably. The nature of this obligation must be clearly defined and may be of two types:
• legal or contractual: the enterprise has a legal or contractual obligation to prevent, reduce or
repair environmental damage;
• constructive: a constructive obligation arises from the enterprise’s own actions when the enter-
prise has committed itself to prevent, reduce or repair environmental damage and has no dis-
cretion to avoid such action because, on the basis of published statements of policy or intention
or by an established pattern of past practice, the enterprise has indicated to third parties that it
will accept the referred responsibility.


Where the enterprise expects that some or all of the expenditures related to an environmental
liability will be reimbursed from another party, the reimbursement should be recognised only when
it is virtually certain that it will be received if the enterprise settles the obligation. Environmental
expenditure should not be capitalised but charged to the profit and loss account if it does not give
rise to future economic benefits.

Measurement
Environmental developments or facts may cause an existing fixed asset to be impaired, for example
in the case of site contamination. A value adjustment should be made if the amount recoverable
from the use of the site has declined below its carrying amount.
If the carrying amount of an asset already takes account of a loss in economic benefits because
of environmental reasons, the subsequent expenditure to restore the future economic benefits to its
original standard of performance can be capitalised, to the extent that the resulting carrying amount
does not exceed the recoverable amount of the asset. An environmental liability is recognised, when
a reliable estimate of the expenditure to settle the obligation can be made.
Furthermore, for measuring the amount of an environmental liability, the following should be
taken into consideration:
• Legal or contractual: the enterprise has a legal or contractual obligation to prevent, reduce or
repair environmental damage;
• Incremental direct costs of the remedial effort;
• Cost of compensation and benefits for those employees who are expected to devote a significant
amount of time directly to the restoration effort;
• Post-remedial monitoring requirements;
• Advances in technology, so long as it is probable that the governmental authority will approve
the technology.

ACCOUNTING DISCLOSURES OF ENVIRONMENTAL FACTS IN THE FINANCIAL STATEMENTS

The accounting of environmental facts using IAS 36, 37 and 38, the EU Recommendation and the UN
Position Paper should be applied in the following financial statements:
• Balance sheet: provisions for environmental accidents are carried under "Provisions for Liabilities",
in detailed sub-accounts.
• Profit and loss statement: environmental costs are carried in a dedicated account, separated
from the others.
• Annexes (to the balance sheet and profit and loss statement):
– description of the recognition and measurement methods;
– extraordinary environmental expenditures carried in the results account;
– detailed information under "Provisions for Liabilities and Charges";
– disclosure of environmental contingent liabilities;
– references to uncertainties in the evaluation;
– detailed description of situations involving significant environmental liabilities;
– information related to the use of present value;
– disclosure of the discount rate and of the amount not yet discounted in liabilities;
– description and amount of environmental expenditures considered in the profit and loss statement;
– description and amount of environmental expenses capitalized during the fiscal year;
– description of costs incurred in fines, penalties and compensation to third parties, where
significant;
– public incentives received for environmental protection;
– references to a separate environmental report;
– references to the environmental audit.
• Activity report:
– description of the relevant environmental issues, namely the investments made or in progress;
– strategies and programmes for environmental protection;
– improvements made in key domains of environmental protection;
– measures in progress;
– qualitative and comparative eco-efficiency ratios.

Portuguese official accounting plan


According to the Portuguese official accounting plan, the annex to the balance sheet and profit and
loss statement has the following items in which accounting environmental facts can be disclosed
on an off-balance-sheet basis:
#3: measurement criteria and calculation methods used for the determination of value;
#8: installation and R&D expenditures, where relevant, should be justified;
#31: financial commitments, including contingencies for responsibilities for environmental
damages, if they are not reflected in the balance sheet;
#34: environmentally relevant provisions made during the fiscal year should be allocated to the
provisions for liabilities account;
#46: relevant extraordinary costs and profits should be included in the profit and loss
statement;
#48: other information of relevance to a comprehensive view of the accounts, such as environmental
protection subsidies or tax incentives.

PORTUGUESE COMPANIES FACING THE ENVIRONMENT

The environmental sustainability of Portuguese companies was analysed through a survey sent
by mail to 520 top managers of potentially polluting companies in Portugal [21].

Characteristics of the survey


The survey aimed at assessing the behaviour of potentially polluting companies and the
inclusion of environmental facts in their accounts, as referred to in the UN Position Paper, the International
Accounting Standards and the EC Recommendation. Before being launched, the twenty-five-question
survey was validated by a panel composed of 10 top managers of industrial companies.
In order to calculate the size (n) of an adequate sample from a finite population of size N, guaranteeing
a confidence level (λ) and a precision level (D) for the population proportion (p), the following
expression (1) was used [22,23]:

$$ n = \frac{p\,(1-p)}{\left(D/z_{\alpha/2}\right)^{2} + p\,(1-p)/N} \qquad (1) $$

For a precision level D = ±8% and a confidence level λ = 95%, the normal distribution gives the value
z_{α/2} = 1.96. In the most pessimistic hypothesis, when the dispersion is maximum, the proportion is
p = 0.5. For these values, the sample should have a size of n = 100. For the construction of
the database we received 119 completed questionnaires; however, 19 were rejected because they had several
missing values. The rate of valid responses to the survey was therefore 19.2%.
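For reference, expression (1) is easy to evaluate directly; the short sketch below implements it. The population size N of potentially polluting companies is not stated in this excerpt, so the value used in the example call is only an assumption.

```python
import math

def sample_size(N, D=0.08, z=1.96, p=0.5):
    """Minimum sample size for a finite population of size N, precision D,
    normal quantile z and population proportion p, following expression (1)."""
    return p * (1 - p) / ((D / z) ** 2 + p * (1 - p) / N)

# Example with N taken, for illustration only, as the 520 surveyed companies;
# the paper reports n = 100 for its own population figure.
print(math.ceil(sample_size(N=520)))   # 117 with this assumed N
```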

Results of the survey to polluting industrial companies


The database built with the information gathered from the survey was analysed using the Statistical
Package for Social Sciences – SPSS 10.0 [24,25]. The identification (industrial activity and size)
of the companies belonging to the sample is shown in Table 1.


Table 1. Industrial activity and dimension of the inquired companies.

Industrial activity                      %     Dimension            %
Oil                                       8     Large enterprise    22
Paint, ink, polish & lacquer             15     Medium enterprise   31
Plastic                                  12     Small enterprise    47
Paper                                    11
Tanning                                  28
Cement                                    3
Cattle (breed, conserve, slaughter)      23

Table 2. Mean values and standard deviations of factors.

Factors                                                         Mean value (xm)   Standard deviation (s)

Company has environmental concerns                                    4.3                 0.6
Company has caused polluting accident                                 1.9                 1.1
Environmental investments                                             3.6                 1.1
Loss in financial investments due to environmental accidents         1.3                 0.6
Environmental insurance policy                                        2.0                 1.0
Inclusion of environmental facts in accounts                          2.5                 1.2

Figure 1. Environmental investments and assets. Left: frequency of environmental investments – never 8%,
sometimes 42%, often 23%, always 27%. Right: type of environmental assets – none 8%, intangible 4%,
tangible 67%, tangible/intangible 21%.

Table 2 presents the mean values (xm) and standard deviations (s) for the six factors under
research. These factors were chosen on the basis of empirical research carried out by the authors and of
the environmental concerns of the selected panel of top managers of potentially polluting industries.
The survey applies a five-level Likert scale [1: never, 2: seldom, 3: sometimes, 4: often,
5: always]. The questions "does your company have environmental concerns?" and "has your
company ever caused any polluting accident?", with mean values of xm = 4.3 and xm = 1.9 respec-
tively, show that the companies are highly concerned with environmental issues.
Also significant are the values xm = 3.6 and xm = 1.3, obtained respectively for the questions
"does your company make environmental investments?" and "has your company suffered losses
in financial investments due to environmental accidents?"
Figure 1 shows that 92% of the polluting industrial companies are investing in the environment:
67% declared that they are investing in tangible assets, 4% in intangible assets and 21% in both
tangible and intangible assets.
Figure 2 illustrates that 88% of the companies have never suffered losses in their financial
investments due to environmental accidents.


Figure 2. Loss in financial investments and environmental insurance. Left: losses due to environmental
accidents – never 88%, sometimes 6%, often 6%, always 0%. Right: environmental insurance – never 38%,
seldom 37%, sometimes 19%, often 4%, always 2%.

Figure 3. Companies that include environmental factors in their accounts. Left: frequency of accounting release
of environmental facts – never 23%, seldom 27%, sometimes 34%, often 8%, always 8%. Right: where the
accounting release is carried – balance sheet 8%, off-balance sheet 69%, not carried 23%.

Figure 4. Environmental provisions and inclusion of environmental factors in annual reports. Left:
environmental provisions (yes/no) – 25/75% in 1999, 30/70% in 2000, 34/66% in 2001. Right: environmental
facts in the annual report (yes/no) – 37/63% in 1999, 42/58% in 2000, 52/48% in 2001.

The question in Table 2, "does your company have an environmental insurance policy?", has a mean
value of xm = 2.0. In fact, 38% of the companies have never taken out any form of environmental
insurance, and 62% declared that their environmental insurance policies are only those required by law.
Figure 3 shows that 23% of the companies never included environmental facts in their accounts
and only 8% did so on a permanent basis. According to Table 2, the mean value for "inclusion of
environmental facts in accounts" is xm = 2.5. Only 8% of the companies disclose environmental facts
in their balance sheet, while 69% do so on an off-balance-sheet basis.
Figure 4 presents the percentage of companies that carry environmental provisions in their
balance sheet using IAS 36, 37 and 38. This percentage has been increasing over the last three years, rising
to 34% in 2001. The inclusion of environmental facts in the annual management report has also
been increasing, from 37% in 1999 to 52% in 2001.

CONCLUSIONS

The environmental behaviour of companies is of crucial importance. Since the environment is a public
good, companies have social responsibilities and should include the internalisation of exter-
nalities within their financial statements. Different stakeholders, including regulatory authorities,
investors, financial analysts and the general public, need to know how companies deal with
environmental issues. As a result, the accounting of environmental factors has advantages not only
for the company and the users of financial information but also for society as a whole:

• The company gains an advantage that is directly or indirectly related to its assets. The direct relationship is
with intangibles, namely market image, which contributes to increased competitive and comparative
advantages for the company. The indirect relationship is with tangibles, such as increased sales
and turnover.
• Harmonised guidelines mean that users of financial statements will receive meaningful, compa-
rable and consistent information, enabling them to compare companies and reinforce initiatives
in the area of environmental protection.
• Society also benefits from the inclusion of environmental factors in accounts, since it will
encourage companies to be concerned with a clean environment and thereby benefit the
community.

Accounting harmonisation of environmental facts can be achieved using tools based on the stan-
dards and recommendations studied in this paper: IAS 36, 37 and 38, the UN Position Paper and the EC
Recommendation. The European Commission intends to clarify existing rules and provide more
specific guidance on the recognition, measurement and disclosure of environmental issues
in the annual reports and accounts of companies. This is particularly relevant for the disclosure,
within the notes to the accounts, of environmental expenditures that have been charged to the profit
and loss account or that have been amortised. This is a key factor in the transparency of
financial information.
This paper discussed the behaviour of potentially polluting industries in Portugal in relation to
environmental sustainability, based on a survey sent to 520 companies (22% large enterprises, 31%
medium enterprises and 47% small enterprises) belonging to 7 economic activity sectors.
Important conclusions of the survey are presented in the previous section. Further conclusions
can be drawn from the enquiry, such as:

• 92% of the companies make environmental investments because of the negative impacts of
probable ecological accidents.
• 67% of environmental investments are in tangible assets.
• Investment by industries to protect the environment has a mean value of xm = 3.6 on the
five-level Likert scale (1 = never, 5 = always).
• Few companies have an environmental insurance policy (xm = 2.0).
• 23% of the companies never disclose environmental facts in their accounts.
• In 2001, 69% of the companies included environmental facts in their accounts on an off-balance-
sheet basis.
• The share of companies carrying environmental provisions in the balance sheet has been increasing
over the last three years, reaching 34% in 2001.
• In 2001, 52% of the companies included environmental facts in their annual management report.

Sustainable development will be achieved when all companies incorporate environmental facts
into their management strategies and into their financial statements, showing social responsibility
and concern for the community.


REFERENCES

1. Brockhoff K, Chakrabarti AK, Kirchgeorg M. Corporate strategies in environmental management.
Journal of Research Technology Management 1999; July–August: 26–30.
2. European Commission. Application du principe du polluteur-payeur dans les états-membres bénéficiaires
du fonds de cohesion. EC, 2000.
3. Whalley N, Whitehead B. It’s not easy being green. Harvard Business Review 1994; May–June: 46–52.
4. European Commission. Global evaluation – the environment in Europe: orientations for the future. EC,
2000.
5. Duarte M. Environmental companies and financial accounting. MSc thesis, IST, Technical University of
Lisbon, 2001.
6. Henriques I, Sadorsky P. The determinants of an environmentally responsive firm: an empirical
approach. Journal of Environmental Economics and Management 1996; 30:381–395.
7. Porter ME, Linde C. Toward a new conception of the environment-competitiveness relationship. Journal
of Economic Perspectives 1995; 9, 4:97–118.
8. Klassen RD, McLaughlin CP. The impact of environmental management on firm performance. Journal
of Management Science 1996; 42:1199–1214.
9. Roy M, Boiral O, Lagacé D. Environmental commitment and manufacturing excellence: a comparative
study within canadian industry. Journal of Business Strategy and the Environment 2001; 10, 5:257–268.
10. Shrivastava P. Environmental technologies and competitive advantage. Journal of Strategic Management
1995; 16:183–200.
11. Prakash A. Why do firms adopt ‘beyond-compliance’ environmental policies. Journal of Business
Strategy and the Environment 2001; 10, 5:286–299.
12. Duarte M, Sarmento M. Environmental analysis of polluting industrial companies. In: Carvalho G,
editor. VI International Conference on Technologies and Combustion for a Clean Environment, Porto,
Portugal, vol. 3. 2001. pp. 1237–1244.
13. Esty DC, Porter ME. Industrial ecology and competitiveness. Journal of Industrial Ecology 1998;
2:35–43.
14. United Nations. Position paper of the United Nations working group on standards of accounting and
reporting. ISAR-Td/COM.2/ISAR/3, 12 March, 1998.
15. International Accounting Standard Board. International accounting standard 36. IASB, September 1998.
16. International Accounting Standard Board. International accounting standard 37. IASB, September 1998.
17. International Accounting Standard Board. International accounting standard 38. IASB, September 1998.
18. European Commission. Recommendation C. 1495, 2001/453/EC, 30/May, 2001.
19. European Commission. Fourth directive. 78/660/EEC, 25/July, 1978.
20. European Commission. Seventh directive. 83/349/EEC, 13/June, 1983.
21. National Institute of Statistics. Statistics of enterprises. Lisbon: Ed. INE, 2000.
22. Newbold P. Statistics for business and economics. New Jersey: Prentice Hall, 1995.
23. Reis E. Multivariate statistics. Lisbon: Ed. Silabo, 1997.
24. Sarmento M. Behaviour of quality groups facing key variables. Journal Technica 1997; 2:17–27.
25. Sarmento M. Evaluation based on strategical factors: the case of Lisbon expo’98. In: Bureau International
d’ Expositions editor. International expositions. Paris: BIE. 1999 p. 37–54.



The factors which affect the decision to attain ISO 14000

Šime Čurković
Western Michigan University, Haworth College of Business Management Department,
Kalamazoo, USA

ABSTRACT: Formally adopted in 1996 by the International Organization for Standardization,
ISO 14000 represents a voluntary international environmental standard that is likely to be
adopted by the vast majority of corporations. While the literature is clearly divided in its assessment
of ISO 14000, an underlying common theme is that the decision to achieve ISO 14000 certification
constitutes a major undertaking for most firms. Such an undertaking, it is argued, does not take place
in a vacuum; rather, it is a response to a number of factors or influences. However, no research
to date has empirically identified these factors and explained how they can be leveraged into a
competitive advantage. In this article, we use qualitative case studies to identify the factors that
affect the decision to attain ISO 14000 certification, and we also explain how these factors can
influence the level of success achieved during the certification process.

INTRODUCTION

The primary objective of this article is to explore the implications of ISO 14000 for environmental
management. Developing a more accurate and realistic understanding of the implications of ISO
14000 certification will help alleviate some of the potential disappointments in the outcomes often
associated with ISO 14000. The literature is clearly divided in its assessment of ISO 14000, which
is viewed as either a variant of TQEM or a paper-driven process of limited value. An examination
of this international environmental standard was inspired by recent visits to a number of manufac-
turing facilities. It was discovered that managers not only embrace the ISO 14000 criteria, they
view it as an integral part of their future success. These managers insist that ISO 14000 is worth
pursuing, not only because their customers might demand it, but also because it improves
performance.
These findings raise an interesting issue, which pertains to the decision to pursue ISO 14000
certification. That is, if there is a real benefit to being ISO 14000 certified, then what factors
influence this decision? Examples from these field visits introduce the factors that influenced
certification and critically challenge the criticisms commonly associated with ISO 14000. The
article is organized as follows. First, we define and provide a background for ISO 14000. Then
we use examples from managerial experience to identify the factors that affected certification
status. The research concludes with an evaluation of the factors underlying the decision to attain
ISO 14000 certification and of how these factors can be leveraged to obtain a competitive advantage.

ISO 14000 AS A MEANS OF ACHIEVING A COMPETITIVE ADVANTAGE

In the course of interviewing managers and touring manufacturing facilities for a number of
recent research projects, the author was repeatedly struck by certain factors identified as having
a critical impact on firms' predisposition and progress toward attaining ISO 14000
certification. Many times we were told that these factors influenced not only their decision to
pursue ISO 14000 but also the level of success achieved during the
certification process. What follows is an attempt to re-conceptualize ISO 14000 as a program
that can lead to a competitive advantage. Our approaches to studying ISO 14000 are qualitative
and based on field studies. The next section details the qualitative methods used to conduct this
research.

METHODOLOGY

The purpose of this study was to identify why companies seem to embrace ISO 14000 even though
the standards have been the subject of great debate and criticism. Since the focus of this research
was exploratory in nature (rather than confirmatory), qualitative data collection methods were
used. Field-based data collection methods were used to ensure that the important variables were
identified. It also helped us develop an understanding of why these variables might be important.
A small detailed sample fit the needs of the research more than a large-scale survey would have.
The method followed was similar to the grounded theory development methodology suggested
by Glasser and Strauss (1967). In instances where a well-developed set of theories regarding a
particular branch of knowledge does not exist, Eisenhardt (1989) and McCutcheon and Meredith
(1993) suggest that theory building can best be done through case study research. The researchers
participating in this project relied primarily on the methods of qualitative data analysis developed by
Miles and Huberman (1994), which consisted of simultaneous data collection, reduction, display,
and conclusions testing. The end result was a series of case studies in which each case was treated
as a replication.

Sample selection
Cook and Campbell (1979) suggested that random samples of the same population be used in theory
testing research. However, the sample selected for qualitative research such as in this study should
be purposeful (Eisenhardt 1989; Miles and Huberman 1994). The goal of this study was to identify
variables that explain the predisposition of ISO 14000 across manufacturing settings. Furthermore,
the research set out to address a variety of ISO 14000 outcomes. Firms at different stages of
ISO 14000 adoption and of different industries, products, processes and sizes were selected, based on a
literature search and general knowledge of appropriate case-study candidates. In addition, other issues
important to manufacturing strategy were addressed, which would not have been served by limiting the
sample solely to successful adopters of ISO 14000. Therefore, the sample included industries such as office
furniture which the literature suggested would have a high, but not universal, rate of ISO 14000
adoption. Table 1 describes the number of firms involved in the field research, the industry, and
the annual sales.
Each of the firms selected was chosen to represent different stages of ISO 14000 certification
(e.g., assessing suitability, planning to implement, currently implementing, successfully imple-
mented). The other firms included in the study were chosen because they were in the same industries
as the firms found in the literature search. The objective of this sampling approach was to construct
a sample of firms that would be diverse enough to capture the variance of ISO 14000 variables
across firms and products that may be overlooked in a single industry or product sample.

Table 1. Firms in the sample.

No. of firms    Industry                       Annual sales ($)

5               Tier I automotive suppliers    25M-5B
3               Chemical                       15B
3               Office and furniture           1B
2               Aerospace                      33M
1               Windows and doors              1B
1               OEM specialty trucks           25M
1               Pharmaceutical                 15B


An initial idea of the level of ISO 14000 understanding and implementation at each potential
firm was obtained through preliminary screening over the telephone. Some of the questions used
in making our initial assessment can be found in Table 2. Twenty-two firms were initially contacted
and screened.
After the initial screening, which also addressed the willingness of the company to participate,
16 firms were again contacted and site visits were arranged. The interviews were conducted with
several managers responsible for portions of the ISO 14000 certification process at each site.
Some titles of the people interviewed include “manager of ”: environmental health and safety, cor-
porate quality services, supervisor/planning group, plant planner, global director of development,
environmental science and assessment, new product group, and design engineering.

Interview protocol
Eisenhardt (1989) suggested that a researcher should have a well developed interview protocol
before making the site visits. A structured interview protocol was used at all of the plants. The
interview protocol was developed based on the researchers’ general understanding of ISO 14000.
The protocol was pre-tested at four manufacturing facilities and then used for the 16 firms included
in this study. Minor changes were made to the protocol after the pre-test. Questions focused on
previous and current EMS, and the roles and experiences of the players involved. Interviews were
conducted in the respondents’ facilities, and discussions focused on the consideration of ISO 14000
as an important part of their EMS and the factors affecting their predisposition towards ISO 14000.
Research concerning environmental issues is fraught with socially desirable response bias. To
limit such responses, several different managers were questioned at each site. After each visit the
protocol was reviewed and, where needed, updated to accommodate new lessons learned. This constant
updating of the protocol after each visit is the foundation of grounded theory development (Glaser and
Strauss 1967). When the sessions involved multiple respondents, all comments or views of the managers
were recorded separately. Subsequent coding of the notes highlighted any differing views among the managers.
All respondents were asked if they were ISO 14000 certified. In addition, their reasons for
certification (or for not being certified) were solicited. Of the 16 companies, 14 were certified, while
the remaining 2 were considering certification. Finally, we discussed the outcomes of certification
with those firms, which were certified. This research is built primarily on the responses of the
12 firms that were certified. However, the comments and concerns of the non-certified firms were
also used to help explain why firms may be reluctant to adopt the environmental standard.
Qualitative theory building research is an iterative process. Eisenhardt (1989) suggested that data
collection and data analysis should be done simultaneously. In other words, the data from one case is
collected and then analyzed before the next replication is performed. Improvements in the protocol

Table 2. Initial assessment questions.

• Is your plant ISO 14000 certified?
• Why are you (not) certified?
• If not certified, are you considering certification?
• What is your overall impression of ISO 14000?
• Has ISO 14000 improved the overall competitive stance of your plant?
• Specifically, how has ISO 14000 influenced your environmental performance?
• How has ISO 14000 influenced your ability to provide the level and types of service required by
your customers?
• Please detail the types of documentation performed to be certified. Were these activities valuable?
• Please describe the types of continuous improvement activities performed at the plant. Has certification
helped/hindered or not affected these efforts?
• Do you feel you have received a good return on this investment?

Note: Managers were asked to explain all of their answers.


can be made between replications by collecting data in this manner. Important issues that are raised
in early cases can be included in the protocol for subsequent replications. This ability to refine
and improve upon the protocol between cases is a significant advantage of this type of research.
The actual process was therefore iterative: data for one case were collected and analyzed, the
protocol was improved, and then the next case was collected.

Data collection
The primary data collection was done using structured interviews in a field setting. Sixteen plants
in 7 industries were visited over a one-year period. In the sample of 16 installations (one installation
per site), 14 different companies were represented. The plants were located in 6 mid-western states:
(1) Michigan; (2) Ohio; (3) Indiana; (4) Illinois; (5) Wisconsin; and (6) Minnesota.
Structured interviews at each plant generally took place with the plant manager as well as
the environmental manager. At most plants, additional interviews also took place with company
presidents or vice presidents, manufacturing engineers, quality engineers, purchasing managers,
and designers. At three of the smaller plants, interviews were limited to the plant manager or
president.
Data were collected following a strict interview protocol that included a tour of the plant. The
primary researcher was accompanied on all visits by a second researcher who reviewed all field
notes prior to final coding. The use of multiple respondents and multiple interviewers at every
plant helped limit possible biases introduced by a single respondent and researcher. The field notes
identified responses to all of the protocol questions, answers to other questions that were raised
during the interview and plant tour, and other information such as company publications.

Data analysis
The two main components of data analysis were within case and across case analysis. Within case
analysis helped us examine ISO 14000 in a single context, while the across case analysis served as
a form of replication in which the constructs of interest in one setting were tested in other settings.
One concern was controlling for the effects of the researchers’ a priori beliefs as to the reasons why
ISO 14000 was embraced. This was accomplished in a variety of ways. First, the primary researcher
wrote up the field notes prior to coding. The secondary researcher, who also went to the plant,
reviewed these notes.
The second step was intended to mitigate confirmation bias: the amount of within case analysis
performed before the cross case analysis was limited. Miles and Huberman
(1994) note that the acts of coding and data reduction are actually forms of data analysis. In other
words, the act of coding could lead to confirmation bias problems in future cases. Therefore, coding
for within case analysis was limited to categorizing the individual case on previously identified
constructs and identifying interesting new issues to pursue at future sites. We were more open to
alternative explanations raised in future replications by avoiding comparisons early in the research.
The between case analysis consisted of looking for patterns of firms’ experiences with ISO
14000 across the various organizations. Between case analysis is facilitated by using a variety of
tools to reduce the amount of data and to display the data in a meaningful fashion (Miles and
Huberman 1994; Yin 1994). Data reduction was done primarily through categorization. A number
of categories were formed based on the literature. Through a process of combination, renaming,
and redefining, the data were reduced to seven main concepts that were most frequently noted as
reasons for embracing ISO 14000. The factors included: Previous experience with Total Quality
Management; Past success with quality-based certification processes such as ISO 9000 or QS 9000;
Previous experience with cross-functional teams and management; Firm size/Number of full-time
equivalents; Nature of corporate ownership (foreign-owned plants are more likely to pursue and
receive ISO 14000 certification); End sales; and Exports, including exports to the European Union.
Following each interview, the field notes were typed. To facilitate data coding and analysis, a
meta-matrix display was constructed. The next step involved coding the data using Nudist® quali-
tative data analysis software. On reviewing the first six field notes, a list of several primary codes
was developed to capture information in different meta-environmental categories. The researchers
reviewed the transcribed field notes for all 16 of the site visits at least three times. In doing this,
the events and processes observed at each site were classified into an EMS category, and into
several other complementary environmental categories, including product and process hazards,
factors affecting predisposition towards ISO 14000, metrics, tools, options, and opportunities. The
meta-matrix is available from the authors upon request.
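As an illustration only, a site-by-category display of this kind could be assembled as in the following sketch; the site names, categories, and counts are hypothetical placeholders, not data from the study:

```python
# Illustrative meta-matrix: rows are visited sites, columns are coding
# categories of the kind described in the text, and cells count coded
# passages. All identifiers and numbers are invented placeholders.
import pandas as pd

coded_passages = [
    {"site": "Plant A", "category": "EMS", "n": 7},
    {"site": "Plant A", "category": "Predisposition", "n": 4},
    {"site": "Plant B", "category": "EMS", "n": 5},
    {"site": "Plant B", "category": "Product/process hazards", "n": 3},
]

meta_matrix = (pd.DataFrame(coded_passages)
                 .pivot_table(index="site", columns="category",
                              values="n", fill_value=0))
print(meta_matrix)
```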
To check the reliability of the coding, an approach suggested by Miles and Huberman (1994)
was applied: reliability = number of agreements/total number of items. They suggest that 70 percent inter-coder
reliability is appropriate when using multiple raters to code field notes. An agreement was achieved
when at least two of the three primary researchers agreed on the coding used. The total number of
agreements minus the number of disagreements comprised the actual number of agreements used
in the reliability formula. The coding of each interview had reliabilities ranging from 0.90 to 1.00,
with an average inter-coder reliability of 0.95.
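A minimal sketch of how such a check could be computed, assuming hypothetical per-item codes from the three raters (the simple agreements/total ratio is shown; the adjustment for disagreements described above is omitted):

```python
# Hypothetical inter-coder reliability in the Miles and Huberman (1994)
# style: an item counts as an agreement when at least two of the three
# raters assign it the same code.
from collections import Counter

def intercoder_reliability(coded_items):
    """coded_items: list of (rater1, rater2, rater3) code tuples."""
    agreements = sum(
        1 for codes in coded_items
        if Counter(codes).most_common(1)[0][1] >= 2)   # >= 2 raters agree
    return agreements / len(coded_items)

# Invented codes for four passages of one interview
items = [("TQM", "TQM", "TQM"), ("ISO9000", "TQM", "ISO9000"),
         ("teams", "teams", "size"), ("exports", "EU", "ownership")]
print(intercoder_reliability(items))                   # 0.75 for this toy data
```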

RESULTS

ISO 14000 has only recently been introduced and many organizations are still struggling with the
decision whether to implement the system and get certified. This may be attributable to having no
clear picture of the critical factors for successful implementation of ISO 14000. In this section,
case studies, combined with the literature, are used to determine which major factors affect the
decision to attain ISO 14000 certification and how these factors can influence the level of success
achieved during the certification process.

Past experience with total quality management


There has been a great deal of discussion within the literature about the role of Total Quality
Management (TQM) in environmental programs such as ISO 14000. Namely, environmental systems are viewed
as being TQM systems modified to deal with pollution issues. The gradual evolution of quality to
include aspects of the environment has been anticipated by several authors. The “no waste” aim
of ISO 14000 closely parallels the TQM goal of “zero defects.” Because the two systems share
a similar focus, it makes sense to use many of the tools, methods, and practices of TQM when
implementing an EMS such as ISO 14000.
Several of the companies visited utilized TQM approaches to developing their environmental
systems. Some of the relevant TQM principles which were integrated into their ISO 14000 based
programs included: (1) a systems analysis process orientation that aimed to reduce inefficiencies
and identify product problems; and (2) data-driven tools, such as cause and effect diagrams, quality
evolution charts, pareto analysis, and control charts. A chemical manufacturer was among the first
to extend their TQM initiatives to an EMS. Some of the TQM principles, which were integrated
into their waste minimization programs, included the use of pareto analysis and control charts to
signal pollution problems with the manufacturing process.
A first-tier automotive supplier also described that during the ISO 14000 certification process,
statistical tools were both appropriate and beneficial for eliminating errors in air emission sam-
pling/monitoring procedures. Several of the companies were also using benchmarking techniques
to assess conformance to the ISO 14000 standards. One company regularly audited its facilities
throughout the world in the areas of public relations, direct environmental impact, incident preven-
tion, and continuous improvement. Standards were developed in each of these areas at the facility
level, ensuring business unit commitment, and a score was generated for each facility.

Past experience with ISO 9000 and QS 9000


It has been argued that ISO 14000 builds on the foundation established by ISO 9000 and
QS 9000. Both of these certification processes are quality oriented, with QS 9000 oriented

165

© 2004 by Taylor & Francis Group, LLC


chap-16 19/11/2003 14: 47 page 166

towards the automotive industry and ISO 9000 more broad-based in its focus. Both ISO 9000
and QS 9000 are also process-based. Finally, both require external auditing and assessment
before certification can be conferred. These traits are very much in evidence in the ISO 14000
certification process. In addition, it has been argued that past experience with these two quality-
based certification processes positively prepares a firm to plan for and attain ISO 14000
certification.
It was observed at several of the companies visited that ISO 14000 status was positively influ-
enced by the status of the plant in terms of either ISO 9000 or QS 9000 certification. All of the
respondents agreed that operating two separate quality and environmental management systems
would have been wasteful and redundant. Integration was not only possible at the facilities, it was
preferable. Since they already had an ISO and/or QS 9000 quality management system in place
and wanted to implement an ISO 14000 EMS, integration was the next logical step. By using the
synergy that exists between the two management systems, an EMS implementation was achieved
with marginal additional expenditures.
The early research on the relationship between ISO/QS 9000 and ISO 14000 is decidedly
mixed. While our findings suggest that firms who have achieved ISO 9000 certification should
have a relatively easier time achieving ISO 14000 certification, this may not actually be the case.
Several managers did warn us that significant differences between the two standards exist. Wilson
(2000) likewise suggested that if these distinctions are not recognized, the potential advantages of an
ISO 14000 EMS, as well as the synergies of an integrated quality and environmental management
system, will not be seen. An existing QMS cannot be transformed into an EMS by merely replacing
the word “quality” with the word “environmental.” ISO/QS 9000 focuses on waste as it applies to
process inefficiencies, whereas ISO 14000 tends to focus more on concrete outputs, such as solid
and hazardous waste.

Current status of cross-functional programs


Ultimately, to be certified on the ISO 14000 standards, the plant’s personnel must be able to work
together. Many of the problems uncovered during the process of attaining ISO 14000 certification
cannot be addressed by one functional area or group working in isolation. As a result, it is expected
that success in implementing cross-functional programs should have a significant influence on the
plant’s progress and status in attaining ISO 14000 certification.
A team orientation that uses the knowledge of employees to develop solutions for waste prob-
lems was integrated into the EMS for several of the cases. One company showed that employee
involvement can be promoted by improving employee-management interaction and by promoting
responsibility for the environment at all levels of the organization, including individual
employees. Using such a team orientation for environmental management has already been advocated by
a number of groups, most notably the Global Environmental Management Initiative (GEMI) and
the Council on Environmental Quality (CEQ).
Another facility, whose environmental managers complained that a noncompliance analysis
was taking too long to finish, assembled a team to: (1) arrive at a specification for turnaround
time; and (2) analyze the reasons for existing turnaround time. The team working on the analysis
delays showed that almost all of the turnaround time could be attributed to two factors: (1) a lack
of communication between divisions within the company to anticipate information needs; and
(2) a lack of standards for technicians. Shortly after beginning their improvement process, the
analysis team used a histogram to measure how close they were to achieving their time-reduction
goal. The histogram showed that they had reduced the mean delivery time and dispersion by
over half.

Firm size/full-time equivalents


FTEE, which reports the number of employees in terms of full time equivalents, is a proxy for
corporate size. It is included because some researchers have argued that ISO 14000 certification is

166

© 2004 by Taylor & Francis Group, LLC


chap-16 19/11/2003 14: 47 page 167

primarily pursued by larger firms. That is, the larger the firm, the more likely it is to attempt and
to achieve ISO 14000 certification. It was acknowledged during our investigation that adoption
is most likely by larger firms. They have the staffing and environmental specialists to implement
it. One manager said that the firm would not apply ISO 14000 at its smaller facilities. If there is already
an environmental management system in place targeting waste reduction, then there may be little
advantage in applying ISO 14000 to a small site.

End sales
End sales capture the percentage of total sales made by the plant that go directly to the end consumer,
as compared to another industrial customer. It has been argued that the more a plant or firm sells
directly to the end consumer, the greater the probability of it being interested in attaining ISO 14000
certification. The reason is that end consumers are more interested in the environmental activities
of the supplier. Achieving ISO 14000 certification for such firms offers a method of differentiating
their products and their corporate image from that of their competitors.
Subsequently, the development of EMS initiatives received a considerable boost among first-tier
suppliers in industries such as automotive. Led by Rover Group and the European assembly operations
of Toyota and Honda, demonstration of an operating EMS such as ISO 14000 is now one of the criteria
for inclusion on the approved supplier list. The justification was increasing consumer pressure on the
OEMs to demonstrate their commitment to improved environmental performance, as well as the need to
reduce resource consumption. The “recyclability content” of new car models is increasingly becoming
an important marketing attribute.

Ownership
A U.S.-based, publicly traded pharmaceutical firm had a facility in Belgium, which was its
first facility to become ISO 14000 certified. The firm claims this was largely due to cultural influences
in the European Union. Another facility visited was a Tier 1 supplier to the automotive industry.
This is a publicly traded company owned by a larger company from England. Its primary product
is automotive glass, and the plant performs only assembly, with no cutting, bending, or fabrication. The direction
for ISO 14000 certification came from the parent company in Britain. Headquarters said that all
facilities globally had to be certified by 1999. Another plant is part of a privately owned foreign
subsidiary. They make braking systems (e.g., calipers, wheel cylinders, etc.) and most of their parts
are for passenger cars and light-duty trucks. The motivation for pursuing ISO 14000 certification
was the German influence, with other European facilities already being ISO 14000
certified. There is a strong environmental corporate culture coming from Europe, and all of the
parent company’s European plants were certified, so the company wanted to try something similar
in North America.
Most ISO 14000 registered sites in the U.S. are operated by affiliates of companies headquartered
elsewhere. Many of these larger firms say they consider ISO 14000 to be inherently unfair to
companies in the U.S. This perception was a result of the drafting process, in which the U.S. had
much less input than European countries. U.S. corporations were behind the curve but are moving
more aggressively now.

Exports/exports to the European Union


These two variables measure different aspects of export sales. The first variable captures the
percentage of total sales made by the plant/firm that consist of exports. The second variable
measures the percentage of total sales made by the plant/firm that consists of exports des-
tined to the European Union. Both variables are based on the view that ISO 14000 certification
is most desirable internationally overall, and in the European Union, specifically. As the per-
centage of sales going to exports increase, the firm is increasingly likely to seek ISO 14000
certification.
167

© 2004 by Taylor & Francis Group, LLC


chap-16 19/11/2003 14: 47 page 168

A privately owned company in the U.S. manufactures automotive glass, windows, mirrors, and the glass
touch screens seen on instrument panels. The CEO made a trip to Europe to visit the company’s
facilities there, saw those plants being pressured by customers to pursue ISO 14000, and then made
certification a corporate priority. Their customers in Europe include Rover, Vauxhall, Opel, and
Volvo. These companies placed pressure on the European facilities to become ISO 14000 certified.
By comparison, their North American customers have so far expressed only less formal interest in
ISO 14000.

DISCUSSION

While many factors have been cited as influencing the predisposition toward ISO 14000 certification
and the value of this certification, certain factors were identified as having a critical impact on
predisposition and progress toward attaining this new form of certification. These factors included:
Previous experience with Total Quality Management; Past success with quality-based certification
processes, such as ISO 9000 or QS 9000; Previous experience with cross-functional teams and
management; Firm size/Full-Time Employee Equivalents; Nature of corporate ownership (foreign-
owned plants are more likely to pursue and receive ISO 14000 certification); End sales; and Exports,
including exports to the European Union.
These factors describe a situation where the respondents saw ISO 14000 as an extension of the
TQM movement. They also describe a situation in which respondents recognized that success with
ISO 14000 requires cross-functional teams and cooperation. There seems to be recognition that
succeeding with ISO 14000 requires more than simply introducing a new program or creating a
new department. Rather, ISO 14000 is an undertaking that requires the participation of multiple
parties working together. It is argued that these various factors act to pre-condition the firm and its
systems to the introduction, acceptance, and progress on ISO 14000.
This research was exploratory in nature and qualitative data collection methods were used.
Our findings need to be evaluated in future studies which use confirmatory techniques to build
and evaluate a model that explains the factors underlying the decision to attain ISO 14000
certification and the level of progress in becoming certified. In future studies, researchers
should view progress in ISO 14000 certification as a dependent variable that can be explained
in terms of the critical (independent) explanatory variables identified in this article.

CONCLUDING COMMENTS

Customer demands and government regulation have driven and will continue to drive the acceptance of
ISO 14000. Although many U.S. industries have not indicated that they will require their suppliers
to become certified, many suppliers are seeking certification because they believe it will happen. It
has already happened in the U.S. automotive industry. If well implemented, ISO 14000 can result in
less pollution, greater efficiencies, cost reductions, and improved productivity. Clearly, the extent
of the improvements and the amount of the savings depend on several factors independent of
ISO 14000.
ISO 14000 is a trend in environmental management that cannot be ignored. In fact, for those
companies that wish to remain competitive and improve their environmental systems, it can be
an invaluable tool. Many managers warned that ISO 14000 certification can result in non-value
added costs if it is pursued only for its marketing or regulatory appeal. The true commercial value
associated with ISO 14000 can only be achieved when it is made consistent with a company’s strate-
gic direction. This means using the ISO 14000 standards as a foundation for a much broader system
such as TQEM. The experiences of these companies can serve as an illustration for organizations
contemplating pursuing certification. Through its standardization of environmental systems, ISO
14000 can help an organization not only reduce waste, but also gain a competitive advantage in the
international marketplace.


REFERENCES

1. Carpenter GD. Total quality management: A journey to environmental excellence. Environment Today
1991; 27 (45).
2. Angell LC, Klassen RD. Integrating environmental issues into the mainstream: an agenda for research
in operations management. Journal of Operations Management 1999; 17 (5): 575–598.
3. Lally AP. ISO 14000 and environmental cost accounting: The gateway to the global market. Law &
Policy in International Business 1998; 29 (4): 501–538.
4. Scott A. Profiting from ISO 14000. Chemical Week 1999; 161 (36): 83–85.
5. Zuckerman A. Ford, GM set ISO 14000 requirements. Iron Age New Steel 2000; 16 (3): 58.
6. Abarca D. Implementing ISO 9000 & ISO 14000 concurrently. Pollution Engineering 1998; 30 (10):
46-48.
7. Begley R. Is ISO 14000 worth it? The Journal of Business Strategy 1996; 17 (5): 50–55.
8. Montabon F, Melnyk SA, Sroufe R, Calantone RJ. ISO 14000: Assessing its impact on corporate
performance. Journal of Supply Chain Management 2000; 36 (2): 4–16.
9. Mroz JG. Will ISO 14000 bring you more harm than good? Quality Digest 1997; 1 (1): 35–39.
10. Clark D. What drives companies to seek ISO 14000 certification? Pollution Engineering 1999; 1 (3):
14.
11. Bhat VN. Total quality management – An ISO 14000 Approach, Quorum Books, Greenwood Publishing
Group, P.O. Box 5007, Westport, Conn., 1998.
12. Epstein M. Measuring Corporate Environmental Performance: Best Practices for Costing and Managing
an Effective Environmental Strategy. Montvale, NJ: The IMA Foundation for Applied Research, Inc.,
1996.
13. Makower J. The E-Factor: The Bottom-Line Approach to Environmentally Responsible Business. New
York: Tiden Press, Inc., 1993.
14. Makower J. Beyond the Bottom Line: Business for Social Responsibility. New York: Tiden Press, Inc.,
1994.
15. Florida R. Lean and green: The move to environmentally conscious manufacturing. California
Management Review 1996; 39 (1): 80–105.
16. Porter M. America’s green strategy. Scientific American 1991; 1 (3): 168.
17. Porter M, Van der Linde C. Green and competitive – ending the stalemate. Harvard Business Review
1995a; 3 (4): 120–134.
18. Porter M, Van der Linde C. Toward a new concept of the environment-competitive relationship. Journal
of Economic Perspectives 1995b; 9 (4): 97–118.
19. Walley N, Whitehead B. It’s not easy being green. Harvard Business Review 1998; 5 (3): 46–52.
20. Tibor T, Feldman I. ISO 14000: A guide to the new environmental management standards. Irwin
Professional Publishing, Burr Ridge, IL, 1996.
21. Hale GJ. ISO 14000 integration tips. Quality Digest 1997; 1 (2): 39–43.
22. Hammer B. Pollution prevention: The cost effective approach towards ISO 14000 compliance.
Environmentally Conscious Design and Manufacturing List-Server 1996, University of Toronto, March
20, 1996.
23. Eisenhardt K. Building theories from case study research. The Academy of Management Review 1989;
14 (4): 532–550.
24. Glaser B, Strauss A. The discovery of grounded theory: Strategies for qualitative research. Aldine,
Chicago, 1967.
25. Miles M, Huberman A. Qualitative data analysis: A sourcebook of new methods. Sage Publications,
Newbury Park, CA., 1994.
26. Cook T, Campbell D. Quasi-experimentation – Design and analysis issues for field settings. Houghton-
Mifflin, Boston., 1979.
27. Klassen R. The implications of environmental management strategy for manufacturing performance.
Doctoral Dissertation, University of North Carolina at Chapel Hill, 1995.
28. Logsdon J. Organizational responses to environmental issues: oil refining companies and air pollution. In
L.E. Preston (Ed.), Research in Corporate Social Performance and Policy, vol. 7, JAI Press, Inc.,
Greenwich, CT, p. 47–71, 1985.
29. Shelton R. Organizing for successful DfE: lessons from winners and losers. Proceedings of the IEEE
International Symposium on Electronics and the Environment, Orlando, Florida, 1995.
30. Messick S. Dimensions of social desirability. Educational Testing Services, Princeton, NJ, 1959.
31. Yin R. Case study research: Design and methods. Sage, Thousand Oaks, CA Publications, 1994.


32. Curkovic S, Melnyk SA, Handfield RB, Calantone RJ. Investigating the linkage between total quality
management and environmentally responsible manufacturing. IEEE Transactions on Engineering
Management 2000; 47 (4): 1–21.
33. May DR, Flannery BL. Cutting waste with employee involvement teams. Business Horizons 1995; 38
(5): 28–38.
34. Sarkis J, Rasheed A. Greening the manufacturing function. Business Horizons 1995; 38 (5): 17–27.
35. Willig J. Environmental TQM. New York: McGraw-Hill, Inc., 1994.
36. Renzi MF, Capelli L. Integration between ISO 9000 and ISO 14000: Opportunities and limits, total
quality management. Abingdon, July, Carfax Publishing Company, 11 (4–6), S849, 2000.
37. Wilson RC. An integrated ISO effort may boost efficiency. Pollution Engineering 2000; 31 (2): 33.
38. Wilson RC. EMS, QMS: What’s the difference? Pollution Engineering 2000; 32 (4): 41.
39. Global Environmental Management Initiative (GEMI). Environmental self-assessment program (Third
Ed.), Washington, D.C.: GEMI, November 1994.
40. Global Environmental Management Initiative (GEMI). ISO 14000 environmental management system
self-assessment checklist. GEMI: Washington, D.C., March 1996.
41. Council of Great Lakes Industries (CGLI). Total quality environmental management: Primer and
assessment matrix. Detroit, MI: CGLI, 1994.
42. Enander RT, Pannullo D. Employee involvement and pollution prevention. Journal of Quality &
Participation 1990; 1 (3): 50–53.
43. Gripman SR. How to involve employees in solid waste reduction. Environmental Manager 1991, 3 (1):
9–10.
44. Gupta M, Sharma K. Environmental operations management: An opportunity for improvement.
Production and Inventory Management Journal 1996; 37 (3): 40–46.
45. Cook J, Seith BJ. Designing an effective environmental training program. Journal of Environmental
Regulation 1992; 2 (1): 53–62.
46. Cramer JM, Roes B. Total employee involvement: Measures for success. Total Quality Environmental
Management 1993; 3 (1): 39–52.
47. Marguglio BW. Environmental Management Systems. New York, NY: M. Dekker, 1991.
48. Cook J, Seith BJ. Environmental training: A tool for assuring compliance. Journal of Environmental
Regulation 1991; 1 (2): 167–172.



Creation of a recycling-based society optimised on regional material and energy flow

N. Goto∗, T. Tabata & K. Fujie


Department of Ecological Engineering, Toyohashi University of Technology, Toyohashi, Japan

T. Usui
Aichi Science and Technology Foundation, Kariya, Japan

ABSTRACT: One method of establishing a new society involving low environmental load, a
so-called “recycling-based society”, is to recycle waste. If waste is incinerated, energy is produced.
However, if waste is to be converted to new materials, energy will have to be provided. Energy supply
and demand is hence a key concept in making a recycling-based society a feasible proposition. The
objective of this study was to develop a method of assessing the optimal extent of a recycling based
society from the viewpoint of energy.
The method is as follows. Energy consumption in the recovery process is estimated using the
distribution of waste discharge sites and recycle facility locations having the minimum waste trans-
port distance. The energy consumption in the conversion and the energy supply in the incineration
are estimated using specific data from existing facilities. As an example, we apply the method to
the Aichi Prefecture in Japan.

INTRODUCTION

In order to establish a recycling-based society, efficient use of resources (human, industrial) and
energy is essential. One way of achieving this is to optimize regional material flow. This, in turn,
means constructing a regional system not only by reducing waste in the manufacturing process
but also by utilizing the waste in other industries. Although there have been numerous efforts at
reducing the environmental load at individual company level, these efforts have not directly reduced
the environmental load in a whole region. This requires an integrated system involving reduction
of the whole regional environmental load.
A recycling-based society requires detailed analysis and optimisation of the current material flow.
Five institutes in Austria, Germany, Japan, The Netherlands and the United States [1] estimated national
material flows by accumulating statistical data. This work covered all materials at the national level. Several
studies have attempted to analyse material flow at the regional level. Hekkert et al [2] analysed
the paper and wood flow in The Netherlands. Kasakura et al [3] analysed plastic flow in Japan.
Tripathi and Sah [4] investigated material and energy balances in rural areas of India. Applications of
material flow analysis have also been attempted by several researchers. Masui et al [5] developed an economic
model including material flow and analysed the waste management system in Japan. Fujie et al [6]
estimated the regional material flow for all industries in the Aichi prefecture using the prefecture’s
input-output table. To optimize regional material flow, waste generated in one industry should be
utilized in other industries as a recyclable resource. This waste utilization among industries can
be called an “industrial network”. Goto et al [7] also developed a method to design an industrial
network by comparing regional waste flows.
∗ Corresponding author. e-mail: goto@eco.tut.ac.jp


One method of establishing a recycling-based society is to recycle waste. The recycling of
waste consists of several processes such as recovery, transportation and waste conversion. If the
waste is incinerated, energy is produced. If the waste is to be converted to new raw materials or
products, energy is required. Thus, the extent of waste recovery has repercussions
on the energy balance of the overall recycling process. This balancing of energy supply and demand is
a key concept in determining an efficient extent of a recycling-based society. The objective of this study
is to develop a model allowing the representation of an energy-optimized recycling-based society.

METHOD

Figure 1 shows the concept of this method. Energy consumption in the recycling process occurs in the
recovery and conversion steps; the following subsections describe how each is estimated and how,
together, they determine the feasible extent of waste recovery.

Recovery process
Energy consumption in the recycling process occurs both in recovery and conversion. In order to
assess the energy consumption during the recovery it is necessary to know the distance between the
waste discharge site and the recycling facility. The conversion process requires energy; therefore
it is important to decide how this energy can be supplied. Knowing this, we can then estimate how
much energy will be consumed and generated as a function of recovery scale.

Distribution of waste discharge


In order to evaluate the amounts of industrial waste discharge in a region, specific information such
as the kinds of waste, their amounts and so on, is required. For this, Japanese manufacturing industry
was classified into 34 sectors, such as the food industry, the plastic products industry, the transportation
equipment industry, and so on. Although the industrial waste discharge of each industry could be determined
by collating reports of industrial waste treatment in the Aichi prefecture, the individual waste discharges
from each facility would remain unknown. A survey by questionnaire is an effective method, but it is
enormously time- and capital-consuming and, in any case, it is impossible to cover all necessary

[Diagram omitted. Recoverable labels: recycle process = recovery (waste discharge site, transport distance, location of recycle facility) and conversion (thermal recycle/energy supply, material recycle/energy consumption); the waste discharge distribution and the minimum-transport facility locations determine the waste recovery extent.]

Figure 1. Concept of this method.


items (plants, factories). Therefore, in this study, a simple method to estimate the industrial waste
discharge per factory using statistical data was devised. For accounting purposes, each industrial
branch was classified into ten characteristic size classes according to the number of employees per factory
(see Table 1). The industrial waste discharge per factory is then estimated by apportioning the total
discharge of each of the 34 industrial sectors to the size classes (equation 1) and dividing by the
number of factories of the given size class (equation 2):
Wi,j = Wi × Xj / X                                   (1)

WFi,j = Wi,j / Fi,j                                  (2)
To help select the most appropriate parameters for use in correlations, several were compared.
Among these were: the number of employees, production level and the cost of raw materials in
manufacturing industry. On comparing the quality of correlations arrived at using these parameters
with “real world” data on existing factories (obtained by questionnaire), production figures proved to
yield the best relationship.
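A minimal sketch of this estimate, using figures taken loosely from Table 1 for the plastic products sector purely as an illustration (the function and variable names are ours, not the authors’):

```python
# Sketch of equations (1) and (2): apportion a sector's total waste
# discharge W_i to firm-size classes in proportion to a scaling
# parameter X_j (here production), then divide by the number of
# factories F_ij to get the discharge per factory.

def waste_per_factory(W_i, X_total, size_classes):
    """size_classes: list of dicts with keys 'X_j' and 'F_ij'."""
    per_factory = []
    for c in size_classes:
        W_ij = W_i * c["X_j"] / X_total          # equation (1)
        per_factory.append(W_ij / c["F_ij"])     # equation (2), [t/year/factory]
    return per_factory

# Roughly Table 1: sector waste 115,300 t/year, production parameter total 3,370
classes = [{"X_j": 130, "F_ij": 1160},           # 4 to 9 employees
           {"X_j": 1500, "F_ij": 3}]             # more than 1,000 employees
print(waste_per_factory(115_300, 3_370, classes))
```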

Location of recycle facility


Minimization of the waste transportation distance between the waste discharge site and the recycling
facility was estimated based on the mesh map shown in Figure 2. The mesh colour reflects the
amount of waste discharged in each mesh. The transportation distance is defined to be the distance
between two meshes. The total distance of waste transportation between the waste discharge site and
the recycling facility is described by equation (3).

Dj = Σi∈M Σj∈N Di,j xi,j                             (3)

The parameter xi,j is 1 when waste is transported from waste discharge site i to the targeted recycling
facility, and 0 when it is transported from the waste discharge site to another recycling facility.
The sum of the minimum distances of waste transportation from the waste discharge site to the
recycling facility is given by equation (4).

Di,j = min (j ∈ N | Di,1, Di,2, . . . , Di,n)        (4)

This equation selects the best combination of all minimum distances of waste transportation by
calculating the distance between each waste discharge site and recycling facility.

Table 1. Estimates for the plastic products industry. Production is the value for the manufacturing
industry and the number of factories is the value for the plastic products industry.

Firm size [employees]   Production [billion yen]   Number of factories   Industrial waste discharge [t/year]
Total                   3,370                      2,130                 115,300
4 to 9                  130                        1,160                 4,300
10 to 19                130                        440                   4,500
20 to 29                140                        240                   4,800
30 to 49                130                        100                   4,600
50 to 99                250                        120                   8,700
100 to 199              280                        40                    9,600
200 to 299              190                        15                    6,500
300 to 499              270                        10                    9,100
500 to 999              350                        4                     11,800
more than 1,000         1,500                      3                     51,400

[Mesh map omitted. Legend: waste paper discharge per mesh in t/year (0–30, 30–50, 50–100, 100–300, 300–500); circles mark recycle facility locations (1)–(4).]

Figure 2. Group of waste recovery. Coloured meshes show waste discharge, and circles show the locations of the facilities. The distance between waste sites and the facility in the same frame is the minimum.

[Log–log plot omitted: total waste transport distance [km] versus number of waste generation meshes for waste metal, organic sludge, slag, waste wood, inorganic sludge, waste paper and waste tire; fitted trend y = 4.29x, R² = 0.81.]

Figure 3. Relationship between the number of waste discharge sites and the total waste transport distance per facility.

In our case study, Figure 2 shows the spatial distribution of waste paper transportation in Toyohashi
city and the suburban area. When industrial waste from a given waste discharge site is transported
to the recycling facility, the distance of waste transportation is minimized by transport to the nearest
recycling facility.
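A minimal sketch of this nearest-facility assignment (equations (3) and (4)), using invented mesh coordinates and facility locations rather than the study’s data:

```python
# Assign each waste discharge mesh to its nearest recycle facility
# (equation (4)) and sum the resulting transport distances (equation (3)).
# Coordinates and the distance metric are invented for illustration.
import math

discharge_sites = [(2, 3), (5, 1), (9, 7), (4, 8)]   # mesh centres on a km grid
facilities = [(3, 3), (8, 6)]                        # candidate facility meshes

total_distance = 0.0
assignment = {}                                      # site index -> facility index
for i, (sx, sy) in enumerate(discharge_sites):
    dists = [math.hypot(sx - fx, sy - fy) for (fx, fy) in facilities]
    j = dists.index(min(dists))                      # x_ij = 1 only for this j
    assignment[i] = j
    total_distance += dists[j]

print(assignment, round(total_distance, 1))
```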
We have examined the relationships between the number of waste discharge sites, number of
recycle facilities and total waste transport distance. Figure 3 shows a good example of one of these
for the number of waste discharge sites and total distance of waste transportation in the Aichi
prefecture, Japan (Figure 5). In general, as the number of waste discharge sites increases, the total
distance of waste transportation increases linearly. This result suggests that it is possible to estimate


[Plot omitted: total movement [ton-km] and waste recycle amount [t/recycling facility] versus number of recycling facilities (1, 3, 43, 106).]

Figure 4. Relationship among the number of facilities, the total waste transport distance and the waste recycle amount per facility.
[Schematic omitted: energy needed for material recycle (Emr) and energy supplied by thermal recycle (Ew) plotted against the amount of waste recovered, which is split into waste discharge for material recycle and waste discharge for thermal recycle.]

Figure 5. Relationships among the amount of waste recovered, the energy needed for material recycle and the energy supplied by thermal recycle.

the transportation distance for one waste type if the number and the location of recycle facilities
and the number of its discharge sites can be determined.
Figure 4 shows the relationship between the number of recycle facilities, total waste transport
distances and waste recycle amounts per facility in the Aichi prefecture, Japan (see also Figure 5).
As the number of recycle facilities increases, both the total waste transportation distance and the waste
recycle amount per facility decrease. Thus, from the point of view of energy consumption and
cost, it is not effective for a recycle facility to treat only a small amount of waste. A minimum
treatment amount should be set, which in turn determines the maximum number of facilities that can be
established.


Conversion process and scale of waste recycle


The conversion process also needs to be considered from the viewpoint of mass and energy. The
conversion process may be classified into a thermal recycle and a material recycle. The thermal
recycle supplies energy. In contrast, the material recycle requires energy. Ideally, the recycle process
should not consume additional energy. Therefore, the recycle process should use unutilized energy
sources, such as wind, solar and waste energy.
Emr + Et ≤ Ef + En + Ew (5)
In order to satisfy equation (5), Ef should be at a minimum, Ew (or En) at a maximum, and Emr
should be at a minimum. Ew is proportional to the amount of recovered waste, i.e. the extent of
waste recovery. Emr and Et are also proportional to the amount of recovered waste. If only Ew and
Emr are considered, it is better that Ew is equal to Emr as shown in Figure 5.
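A minimal sketch of this break-even idea, assuming simple per-tonne energy coefficients and treating Et as the transport energy demand (all values and names are illustrative assumptions, not figures from the study):

```python
# Split a recovered waste stream between material recycling (consuming
# Emr) and thermal recycling (supplying Ew), and check whether
# Emr + Et <= En + Ew holds with no fossil energy (Ef = 0).
# All coefficients below are invented for illustration.
def energy_surplus(recovered_t, material_fraction,
                   e_material=3300.0,   # MJ consumed per t material-recycled
                   e_thermal=5000.0,    # MJ supplied per t thermally recycled
                   e_transport=50.0,    # MJ per t for collection transport (Et)
                   E_natural=0.0):      # available natural energy En [MJ]
    m = recovered_t * material_fraction
    Emr = m * e_material
    Ew = (recovered_t - m) * e_thermal
    Et = recovered_t * e_transport
    return (E_natural + Ew) - (Emr + Et)  # >= 0: equation (5) holds with Ef = 0

# Scan the material-recycling fraction for a 500 t/day recovery
for f in (0.2, 0.4, 0.6, 0.8):
    print(f, round(energy_surplus(500, f)))
```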

RESULTS

As shown in Figure 6, we have applied the system to a model region, the Aichi prefecture, located
in the centre of Japan.

Distribution of waste discharge


Figure 7 shows the distribution of factories in Aichi prefecture. Each dot shows a factory. Figure 8
shows the distribution of production. The distribution map is described as a 1 km × 1 km mesh
map. The maps show that a large number of factories and production units are located in the western
region of the Aichi prefecture. Using these figures, we can estimate the distribution of industrial
waste discharge shown in Figure 9. The west of Aichi is correspondingly replete with industrial
waste discharge sources.

Scale of waste recycle


The method has been applied to waste paper recycling in the Aichi prefecture. Transport and
conversion conditions are shown in Table 2. Figure 10 shows the material and energy balance of the
paper recycling process in the Aichi prefecture. In the Aichi prefecture, the amount of waste paper discharged

[Map omitted: location of the Aichi prefecture in central Japan, with Tokyo, Osaka, Nagoya, Toyota and Toyohashi marked.]

Figure 6. Aichi prefecture in Japan.


is 510 [t/day]. The conversion from waste paper to recycled paper requires 220 [t/day] of waste
wood, while the waste wood discharge in the Aichi prefecture is 530 [t/day]. In addition, the combustible
waste discharge is 1,710 [t/day]. This burnable waste and the remaining waste wood, 2,020 [t/day] in total,
are used as a heat source. Conversion of 510 [t/day] of waste paper to recycled pulp requires 420
[10³ MJ/day], conversion of 220 [t/day] of waste wood to pulp requires 1,750 [10³ MJ/day], and
conversion of the pulp to recycled paper requires 1,770 [10³ MJ/day], i.e. 3,940 [10³ MJ/day] in total.
On the other hand, the burnable waste supplies 5,810 [10³ MJ/day]. Thus, thermal recycling can supply
enough energy to recycle the waste paper. The number of facilities for paper recycling in the Aichi
prefecture will therefore be one, located in the centre of the region (see Figure 10).
On the other hand, in the case of waste iron recycling, thermal recycling can supply only 4%
of the energy that the iron recycling requires. The iron process discharges a large amount of waste, in
which case four recycle facilities are required to minimize the transport distance (see Figure 11).
From Figures 10 and 11, it is evident that the extent of waste recovery depends on the nature
of the waste being treated.

Figure 7. Distribution of factories in the Aichi prefecture.

Figure 8. Distribution of production in the Aichi prefecture.


Figure 9. Waste discharge distribution in the Aichi prefecture.

Table 2. Transport and conversion condition of paper recycle.

Transport              Truck
  Fuel                 Light oil [–]
  Mileage              4.0 [km/l]
  Burden               8.8 [t]
  Energy consumption   38.6 [MJ/l]
Conversion
  Energy consumption   3,300 [MJ/t]

Figure 10. Extent of waste paper recovery.

CONCLUSION

In this study, we have developed a method to determine the spatial distribution of industrial waste
discharge sites and the locations of recycle facilities that yield the minimum transport distance and
the best energy balance. From the results we can estimate the extent of waste recovery; this obviously


Figure 11. Extent of waste iron recovery.

[Schematic omitted: overlapping recovery areas of different sizes for wastes A, B and C; wastes A and B go to material recycle and waste C to thermal recycle.]

Figure 12. Different sizes of waste recovery extent in a recycling-based society.

depends on the nature of the waste. A recycling-based society possesses several layers, each of which
shows a different magnitude of waste recovery extent, as shown in Figure 12. Figure 12 represents one
phase of the recycling-based society.

ACKNOWLEDGEMENTS

A part of the research described in this paper was supported by the Japan Science and Technology
Corporation (JSTC). The authors would like to express their appreciation.

NOMENCLATURE

Wi,j : waste discharge of firm size j of industry i,


Wi : waste discharge of industry i,
X : total of the parameter for the manufacturing industry,
Xj : total of the parameter for firm size j,


WF i, j : waste discharge per factory of firm size j of industry i and


Fi, j : number of factory of firm size j of industry i.
Dj : total waste transportation distance between waste discharge sites and recycle facility j.
Di, j : the waste transportation distance between waste discharge site i and recycle facility j.
M : waste discharge site (M = {1, 2, …, n})
N : recycling facility (N = {1, 2, …, n})
xi,j : assignment parameter (1 if waste from discharge site i is transported to recycle facility j, 0 otherwise)
Emr : energy demand for material recycle
Et : energy demand for waste transportation
Ef : fossil fuel energy
En : natural energy
Ew : waste energy

REFERENCES

1. Hunter C. eds. 2000. Weight of Nations. World Resources Institute


2. Hekkert M.P., Joosten L.A.J. and Worrell E. 2000. Analysis of the paper and wood flow in The
Netherlands. Resource Conservation and Recycling. 30, 29–48
3. Kasakura T., Noda R. and Hashiudo K. 1999. Trends in waste plastics and recycling. Journal of
Material Cycles and Waste Management. 1, 33–37
4. Tripathi R.S., Sah V.K. 2001. Material and energy flows in high-hill, mid-hill and valley farming
systems of Garhwal Himalaya. Agriculture, Ecosystem and Environment. 86, 75–91
5. Masui T., Morita T. and Kyogok U.J., 2000. Analysis of recycling activities using multi-sectoral
economic model with material flow. European Journal of Operational Research. 122, 405–415
6. Fujie K., Goto N., Usui T. 2001. A Method to Evaluate Regional Material Flow for Designing Recycling
Society. Proceedings of the EcoDesign. Tokyo
7. Goto N., H.-Y. Hu and Fujie K. 2000. Study on a method to estimate waste reduction analyzing
regional material flow for zero emission. Proceeding of the First China-Japan Symposium on Chemical
Engineering. Beijing. China



Environmental, energy and economic aspects and sustainability in thermal processing of wastes from pulp production

J. Oral, J. Sikula, R. Puchyr & Z. Hajny


EVECO Brno Ltd., Brno, Czech Republic

P. Stehlik & L. Bebar


Technical University of Brno, Faculty of Mechanical Engineering, Institute of Process and
Environmental Engineering, Brno, Czech Republic

ABSTRACT: The paper shows a concrete way to solve the environmental problems connected with the
treatment of sludge from pulp and paper production, with emphasis on sustainability aspects.
A thermal treatment unit with a capacity of more than 100 tons of wet sludge per day has been
completely rebuilt because of increasingly stringent environmental laws. The retrofit has been
realized in three stages. First, environmental problems were solved by rebuilding the incineration
and off-gas cleaning systems. The second stage of the retrofit can be characterized as a
“waste-to-energy” one: thanks to the installation of a heat recovery system for preheating
combustion air and water for steam generation, $0.16 million is saved per annum. Since the pulp plant
is located in a mining area, the third stage of the retrofit consists in substituting the currently used
fuel with mining gas containing approximately 50 to 60% CH4, which results in additional economic and
environmental benefits.

INTRODUCTION

Problems to be solved
Industrial production of pulp and paper is very demanding in terms of energy and raw materials
consumption. That is why the pulp producers make every effort to utilize as much energy and raw
material as possible and to minimize waste. Sludge originating in pulp mills is among the most
important wastes that have to be treated.
Pulp and paper plants produce large amounts of sludge. The environmental problems [1] connected
with this waste were not solved satisfactorily in the past; in the first half of the 20th century an
unsatisfactory situation from the point of view of environmental protection existed in many countries
of the world, and wastewater from plants was discharged directly into rivers. Wastewater treatment
facilities were introduced step by step into production plants; however, it was still necessary to solve
the disposal of the concentrated sludge. Moreover, this biological sludge gradually decays. Landfilling
is therefore not a suitable option from the environmental point of view, and thermal treatment of the
sludge proved to be an appropriate solution to the problem.

Characteristic features of waste


The sludge comes from wastewater treatment and sludge housekeeping of a process producing
bleached pulp and fodder yeast. The treated sludge from one industrial plant, which is a typical
representative of such a process, has the composition shown in Table 1.
The heating value ranges from 0.7 to 1.8 MJ/kg.


Table 1. Basic features of sludge.

Component Mass %

Dry components 13 to 35
Dry matter 4 to 7.5
Water 65 to 84

Figure 1. Flowsheet of original (previous) incineration unit.

The wet sludge has a very soft, paste-like consistency. Its characteristic properties change only
partially with the content of pulp fibre.

RETROFIT OF SLUDGE INCINERATOR

Sludge incinerator
A unit for the thermal treatment of sludge in one pulp and paper plant can serve as an example of a
large industrial incinerator. The incineration unit processes the sludge characterized above. The
maximum design capacity of the sludge incinerator is 130 tons of wet sludge per day and the design
operation is 8,280 h/year; the realistic maximum throughput achieved, for technical and technological
reasons, was 105 to 110 t/day, and the actual operation time was about 7,000 hours per annum.
The incinerator was put into operation nearly twenty years ago. The original and/or previous
technology (see Figure 1) was designed as a relatively simple one. A multiple hearth combustion
chamber with a fluidised bed can be considered the basic device of the unit. One-stage water
scrubbing without additives was used for de-dusting the flue gas.
De-watered sludge coming from the wastewater treatment process unit enters the upper section of the
multiple hearth combustion chamber. The sludge enters in the form of large lumps corresponding


Table 2. Concentration of emissions and pollutants before retrofit.

Pollutant         Measured values [mg/Nm³]   Allowable limit [mg/Nm³]
NOx               50 to 227                  350
CO                2200 to 3100               100
CxHy              <100                       20
Solid particles   100 to 157                 30

to the dimensions of equipment (bulk conveyer) used for their transport. The rotating rabble arms
and supporting air-cooled rotating central shaft gradually move the sludge down over the other four
hearths. The flue gas passing by the sludge in counter flow dries it gradually. The lumps of sludge,
pre-dried that way, fall from the last hearth into the fluidised bed of the combustion chamber where
they are incinerated. The flue gas from the incineration process further passes multiple hearth
combustion chamber and is cooled down by water evaporation from the sludge. After leaving the
multiple hearth combustion chamber the flue gas enters a scrubber where it is first of all cooled
down to its saturation temperature (about 60 °C) by water injection. Then, in the scrubbing
stage, a part of the polluting substances is removed.
Emission concentration values for a common operational regime of the incinerator before retrofitting
are compared with the valid allowable legislative emission limits (the legislation is in accordance with
that of the EU) in Table 2.
The high value of solid particles was due to rather low efficiency of the off-gas cleaning sys-
tem. The high content of carbon monoxide and Cx Hy was caused by an unsuitable processing
arrangement in the incineration unit. The lack of flue gas flow control across the multiple hearths
of the combustion chamber caused local increases in hearth temperature, which resulted in partial
pyrolysis of the sludge already during drying. Chlorine emission limits were also exceeded before
the change of bleaching technology from chlorine-based to chlorine-free. After this change, meeting
the limits was basically without problems.
The high content of sulphur in the sludge was due to the composition of wood used for this pulp
production.
The original incinerator technology and equipment were up-to-date at the time of construction;
however, they gradually became both physically and technically obsolete. That applied in particular
to the way of cleaning the flue gas. After the new legislation on atmospheric pollution protection
came into force, the equipment became unsatisfactory and its retrofitting was decided on [2].

RETROFIT OF INCINERATOR UNIT

First stage of retrofit – solution of environmental problems


First, the environmental problems to be solved and the economic and technical feasibility of solving them were analysed. Three possible ways were evaluated. The first was drying of the sludge in a dryer followed by incineration in a waste heat boiler, but the drying step was found to be very complicated because of the sludge properties. The second consisted in building a completely new two-stage incinerator, but this was found too expensive and hardly feasible in view of the limited site area. Retrofit of the existing incineration plant therefore proved to be the only realistic alternative.
The complete retrofit was divided into three stages. The main achievement of the first stage was satisfaction of the environmental limits.
The process flow sheet after completing this stage is shown in Figure 2. A general overhaul of the existing fluidised bed combustion chamber was carried out during the first stage. The combustion chamber was supplemented by an afterburner chamber with a thermo-reactor (secondary combustion chamber) ensuring combustion of the over-limit content of carbon monoxide and hydrocarbons. The multiple hearth combustion chamber with fluidised bed was supplemented by a by-pass of flue gas, which enabled efficient control of the volumetric flow rate of flue gas through the hearths and of the related temperature.

Figure 2. Technology of incineration after completing the first stage of retrofit.
The transport of the sludge to the combustion chamber was solved by means of a high-pressure volumetric pump that replaced the original bulk conveyer. The transport of the sludge through the individual hearths remained unchanged.
In accordance with the wish of the investor, only partial utilisation of the flue gas heat content was considered in the first stage of retrofitting. The respective part of the heat content was utilised in a plate recuperative heat exchanger; the rest of the heat was extracted by a quench.
The realised off-gas cleaning was a multi-stage one. The original scrubber remained in use; however, it was modified to satisfy the new process conditions. A liquid droplet separator and a particulate solids (fly ash) separator were incorporated. NaOH solution was injected into the water retained in the solids separator in order to maintain the required pH value there by neutralising harmful acid substances (namely SO2). Part of the unreacted NaOH solution was utilised for neutralisation in the flue gas scrubber.
The purified off-gas flowed on to the stack. The necessary draught for transport of the off-gas through the whole system was ensured by a flue gas fan placed between the contact cooler and the off-gas scrubber.
After completion of the construction work, a guarantee test verified the process parameters of the combustion unit after retrofitting. The guarantee test was carried out at three operational regimes of the incineration unit:
Regime I: operation at a throughput equal to the immediate production of sludge at the time of the guarantee test.
Regime II: operation at the realistic maximum throughput of sludge to be processed during a 24-hour period, i.e. operation at 85 t/day.
Regime III: operation at the minimum required throughput during a 24-hour period (approximately 35 t/day).
Operational parameters of the incineration unit, namely the consumption of some process utilities at the characteristic throughput of each of the three regimes, are presented in Table 3. Similarly, the emission levels determined in the course of the guarantee tests are presented for each of the three regimes in Table 4.

Table 3. Operational parameters in the course of guarantee tests.

                                        Regime I   Regime II   Regime III
Sludge throughput [t/d]                 71         85          35
Natural gas flow rate [mN3/h]           357        395         348
Demineralised water flow rate [m3/h]    2.97       3.22        2.73
Power demand [kVA]                      163        171         155

Table 4. Emissions during guarantee tests.

                Concentration measured during operation [mg/mN3]      Emission limit
Pollutant       Regime I       Regime II      Regime III              [mg/mN3]
NOx             261 ± 4        261 ± 4        280 ± 4                 350
CO              4 ± 1          2 ± 1          4 ± 1                   100
CxHy            0.53 ± 0.28    0.16 ± 0.19    0.07 ± 0.1              20
Solid particles –              28.9 ± 0.2     –                       30
The realistic maximum throughput of sludge to be processed decreased after completion of the first stage of retrofitting. The reason was the investor's requirement to cool down the flue gas in the contact cooler (quench) placed downstream of the recuperative heat exchanger. Owing to the steam generated by evaporation of the injected cooling water, the volume of flue gas increased considerably. The equipment downstream of the quench cooler naturally had to be dimensioned with respect to an optimum ratio of investment cost to capacity. The consequence of this dimensioning philosophy was an upper limit on the off-gas flow and, correspondingly, a limited realistic maximum throughput of sludge to be processed.

Second stage of retrofit


The second stage of retrofit can be characterised as a “waste-to-energy” one. Its purpose was to increase the utilisation of the heat contained in the flue gas leaving the after-burner chamber and to increase the realistic throughput of sludge processed in the incineration unit.
At present the heat carried by the flue gas is only partially used, for pre-heating the fluidisation and combustion air; the rest is rejected in the contact cooler. Within the framework of the second stage of retrofitting, the existing recuperative heat exchanger was replaced by a heat exchanger of another type and the contact (quench) cooler was replaced by a flue gas/demi-water heat exchanger. The demi-water pre-heated in the latter exchanger is further heated and de-gassed in the existing factory heat and power plant and used as feed-water for the steam boilers. The contact cooler is retained as a back-up for cases of flue gas cooling breakdown.
The increase of the realistic throughput of the incineration unit is related to the replacement of the contact cooler, which eliminates the bottleneck in the cold part of the technology. The technology modification after completing the second stage of retrofit is shown in Figure 3 (the part of the overall flowsheet differing from Figure 2). Basic process parameters are shown in Table 5. The emission concentrations will be reviewed later.
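The recovered heat duty is not reported explicitly in the paper; the sketch below only illustrates the underlying relation Q = m·cp·ΔT, taking the demi-water flow range later reported in Table 5 and an assumed temperature rise of 20 K (an illustrative assumption, not a measured value):

# Illustrative estimate of the heat recovered by pre-heating demi-water
# (only the flow rates come from Table 5; the temperature rise is assumed).
rho = 1000.0        # kg/m3, density of water
cp = 4.19           # kJ/(kg K), specific heat of water
delta_T = 20.0      # K, assumed temperature rise of the demi-water

for flow_m3_per_h in (108.0, 234.0):          # flow range from Table 5
    m_dot = rho * flow_m3_per_h / 3600.0      # kg/s
    q_MW = m_dot * cp * delta_T / 1000.0      # recovered duty in MW
    print(f"{flow_m3_per_h:5.0f} m3/h -> about {q_MW:.1f} MW under these assumptions")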

Third stage of retrofit


The pulp and paper plant is located in a former mining area. The mining operations have already finished; however, mining gas is still available. The third stage of retrofit is a project for substituting mining gas for natural gas as the fuel of the incineration plant. This project is currently being completed.

Figure 3. Changes of technology after the second stage of retrofit.

Table 5. Basic process parameters after the second stage of retrofit.

Sludge throughput [t/day]                      105 to 110
Natural gas flow rate [m3/h]                   460 to 550
Flow rate of pre-heated demi-water [m3/h]      108 to 234

Table 6. Composition of the mining gas.

Component   Volume %
CH4         55
N2          38
CO2         4
O2          3

Since it belongs to the greenhouse gases [3], mining gas cannot simply be emitted into the ambient air, so the third stage of retrofit will have a highly positive environmental impact. The price of mining gas is less than half the price of natural gas, so there will also be an economic benefit for the paper plant. The mining gas composition is shown in Table 6.
The mining gas will be fed at ambient temperature. Currently it is transported from the mines by vacuum pumps, but the pressure fluctuates strongly; the transport line will therefore be supplemented with a supercharger. To allow combustion of the new fuel, a dual burner was installed on the combustion chamber with fluidised bed [4]. A saving of more than 50% of the natural gas will be achieved after the third stage of the retrofit is finished.
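Because the mining gas contains only about 55% methane (Table 6), its heating value is well below that of natural gas, and a correspondingly larger volumetric flow is needed for the same firing duty. A rough check, treating methane as the only combustible component and using typical lower heating values (assumed figures, not plant measurements):

# Rough comparison of mining gas and natural gas heating values
# (assumed lower heating values; the composition is from Table 6).
LHV_CH4 = 35.8            # MJ/m3, lower heating value of methane (approximate)
LHV_natural_gas = 34.0    # MJ/m3, typical pipeline natural gas (assumed)
ch4_fraction = 0.55       # volume fraction of CH4 in the mining gas

LHV_mining_gas = ch4_fraction * LHV_CH4            # about 19.7 MJ/m3
volume_ratio = LHV_natural_gas / LHV_mining_gas    # about 1.7
print(f"mining gas LHV ~{LHV_mining_gas:.1f} MJ/m3; "
      f"~{volume_ratio:.1f} m3 of mining gas replaces 1 m3 of natural gas")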

Simulation – efficient computational tool for retrofit design


Evaluation of the important process parameters for the retrofit of the unit was performed using the software system TDW for simulating processes for thermal treatment and incineration of wastes [5].

Figure 4. Flowsheet generated by software system TDW.

Development of this software was initiated by the need to support and facilitate the activities of designers and operators. The software package is based on modelling, i.e. performing a mass and energy balance.
The design of a unit consists in drawing a flowsheet; the flowsheet obtained with the simulation program is similar to those generated by professional software packages. After flowsheet generation, input data have to be specified for all pieces of equipment, for the inlet streams, and for those outlet streams where a specification is required (e.g. the temperature of the leaving flue gas). After completing the flowsheet and the input data specification, the program can be run and the results viewed.
A sample flowsheet generated with this software is shown in Figure 4. It represents the process after the first stage of retrofit and is in agreement with Figure 2.
The benefit of using the software lies in investigating various operating regimes and parametric sensitivity, and in using it for decision-making, process parameter optimisation, etc. Simulation of the various operational alternatives is indispensable.
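The TDW package itself is not described in detail here; the minimal sketch below (all property values and stream data are assumed for illustration) only shows the kind of single-equipment mass and energy balance such a simulator solves, here for an evaporative quench that cools flue gas by water injection:

# Minimal illustration of a quench-cooler mass/energy balance
# (assumed property values; this is not the TDW software).
cp_gas = 1.1        # kJ/(kg K), mean specific heat of flue gas (assumed)
cp_water = 4.19     # kJ/(kg K), liquid water
h_evap = 2257.0     # kJ/kg, latent heat of evaporation at about 100 C
cp_steam = 2.0      # kJ/(kg K), water vapour (assumed mean value)

def quench_water_demand(m_gas, t_in, t_out, t_water=20.0):
    """kg/s of injected water needed to cool m_gas (kg/s) of flue gas from t_in to
    t_out (deg C), simplified by assuming the injected water evaporates completely."""
    q_released = m_gas * cp_gas * (t_in - t_out)                # heat given up by the gas
    q_per_kg_water = (cp_water * (100.0 - t_water) + h_evap     # heat 1 kg of water to 100 C and evaporate it
                      + cp_steam * max(t_out - 100.0, 0.0))     # then superheat the vapour to t_out
    return q_released / q_per_kg_water

# Example: 10 kg/s of flue gas cooled from 350 C to 160 C needs roughly 0.8 kg/s of water.
print(f"water demand ~{quench_water_demand(10.0, 350.0, 160.0):.2f} kg/s")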

EVALUATION OF RETROFIT

The primary purpose of the retrofit was first of all to meet the more stringent environmental legislation on the purity of the emitted off-gas. A comparison of the average emission levels before retrofit and after the waste-to-energy stage of retrofit, including the allowable limits, is shown in Table 7.
Table 7. Emissions before and after retrofit.

                Concentration [mg/mN3]                      Allowable limit   Emissions
Pollutant       Before retrofit    After second stage       [mg/mN3]          reduction [t/yr]
NOx             50 to 227          200 to 250               350               0
CO              2200 to 3100       20                       100               187.6
CxHy            <100               0                        20                7.0
Solid particles 100 to 157         20 to 25                 30                7.5

Realisation of the second stage of retrofit brought considerable economic benefit. The only acceptable alternative to incineration is sludge landfilling. The cost of landfilling one ton of sludge is approximately US$ 10.50, but seasonally the cost can climb up to US$ 30.50 per ton (such high costs occur in the season with high ambient temperatures, which intensify the biological processes in the sludge). Sludge landfilling, even at the high costs mentioned above, is cheaper than thermal treatment in the unit after the first stage of retrofit only. However, after completion of the second stage of retrofit, the gross specific profit of incineration is about US$ 3.50 per ton of sludge incinerated. This profit is achieved owing to the heat supplied via the heated demi-water to the plant heat and power network. Thanks to this profit, the payback period is expected not to exceed six years.
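As an order-of-magnitude illustration only (the throughput and operating time are the approximate figures quoted earlier in this paper; the plant's actual accounting is not reproduced here), the annual benefit implied by these specific values can be estimated:

# Illustrative annual-benefit estimate from the specific figures quoted in the text.
throughput_t_per_day = 107.5      # mid-point of the 105 to 110 t/day throughput
operating_hours = 7000.0          # approximate annual operation
gross_profit_per_t = 3.50         # US$/t, gross specific profit of incineration
landfill_cost_per_t = 10.50       # US$/t, landfilling cost outside the hot season

sludge_per_year = throughput_t_per_day * operating_hours / 24.0      # about 31,000 t/yr
profit = sludge_per_year * gross_profit_per_t                        # about US$ 110,000/yr
avoided_landfill = sludge_per_year * landfill_cost_per_t             # about US$ 330,000/yr
print(f"~{sludge_per_year:,.0f} t/yr incinerated: gross profit ~US$ {profit:,.0f}/yr, "
      f"avoided landfilling ~US$ {avoided_landfill:,.0f}/yr")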

CONCLUSION

The technology of the incineration unit, as designed and realised, is in accordance with the waste-to-energy philosophy and meets the requirements for efficient processing of the sludge resulting from wastewater treatment. The unit efficiently utilizes the energy released during sludge incineration both for its own use and for heating a utility stream subsequently used within the whole plant.
Comparison of the parameters of sludge incineration based on the waste-to-energy approach with those of sludge landfilling (the only acceptable alternative to incineration) leads to the conclusion that thermal treatment of the sludge by a suitable technology utilizing the energy from incineration is favourable both economically and environmentally. In some cases landfilling can be economically advantageous, but it remains unacceptable for environmental reasons.
The retrofitted unit for the thermal treatment of sludge is one of the biggest industrial incinerators in the Czech Republic. After completion of the retrofit, this incinerator can also be ranked among the most modern ones from the point of view of process technology, equipment and process control. Thanks to its capacity it covers the plant's sludge processing needs and, within the limits of its potential, it efficiently utilizes the energy supplied and saves fuel for steam production in the local heating and power plant. All this is achieved while simultaneously satisfying the emission limits given by the valid environmental legislation.

REFERENCES

1. Oral, J., Sikula, J., Hajny, Z., Puchyr, R., Trunda, P. and Stehlik, P., Thermal Processing of Wastes
From Pulp and Paper Plant as a Solution of Environmental Problems, 6th World Congress of Chemical
Engineering, Proceeding on CD ROM, Melbourne, Australia, (September 23–27, 2001).
2. Oral, J., Hajny, Z. and Stehlik, P., Experience with Thermal Treatment of Hazardous Industrial Wastes
in the Czech Republic, International Conference on Incineration & Thermal Treatment Technologies,
Proceedings, pp. 721–728, Orlando, Florida, USA, (May 10–14, 1999).
3. Kiely, G., Environmental Engineering, McGraw-Hill Publishing Co., Maidenhead (1997).
4. Bebar, L., Canek, J., Kermes, V., Stehlik, P. and Oral, J., Low NOx Burners – Recent Development,
Equipment, Experience, Modelling, International Conference on Incineration & Thermal Treatment
Technologies, Proceedings on CD ROM, Philadelphia, Pennsylvania, (May 14–18, 2001).
5. Stehlik, P., Puchyr, R. and Oral, J., Simulation of Processes for Thermal Treatment of Wastes, Waste
Management 20, pp. 435–442 (2000).


Worldwide use of ethanol: a contribution for economic and environmental sustainability

Cortez, Luís A.B.
Interdisciplinary Center for Energy Planning – NIPE, School of Agricultural Engineering – FEAGRI, State University of Campinas – UNICAMP

Griffin, Michael W.
Green Design Initiative – Carnegie Mellon University – CMU

Scaramucci, José A.
Institute for Mathematics, Statistics and Scientific Computing – IMECC, State University of Campinas – UNICAMP

Scandiffio, Mirna Gaya
Faculty of Mechanical Engineering – FEM, State University of Campinas – UNICAMP

Braunbeck, O.A.
School of Agricultural Engineering – FEAGRI, State University of Campinas – UNICAMP

ABSTRACT: The use of ethanol from biomass as a gasoline substitute in cars and light trucks is possibly one of the most attractive and feasible alternatives for dealing with global warming and for helping to relieve developing countries' trade balances by cutting oil imports. Ethanol consumption of 13 billion liters annually helps Brazil to avoid nearly 26 million metric tons of carbon dioxide emissions. Annual world production of bio-fuel ethanol is about 26 billion liters, of which Brazil is responsible for about 60%, followed by the U.S. and China. Roughly 60% of world ethanol is produced from sugar crops, mainly sugarcane and sugar beet; the remainder comes from grain, mainly corn. The key to making ethanol competitive as a gasoline additive is the ability to produce it from low-cost biomass. Brazil has continually decreased its costs for ethanol production from sugarcane. However, more work is necessary to develop new technologies for ethanol production from biomass, particularly from lignocellulosic materials. This paper aims to provide some information and perspectives about the status of fuel ethanol production and use around the world.

INTRODUCTION

Ethanol is perhaps the most attractive short to medium term alternative to gasoline in cars and light trucks. By reducing the use of gasoline, ethanol can be important in lowering greenhouse gas emissions, improving environmental quality and improving economic sustainability. Many different countries are considering ethanol as the most suitable fuel for octane enhancement and for substituting gasoline in different proportions.
Despite the existing commercial barriers and the absence of regulatory mechanisms, the evolution of international agreements has led to vigorous actions towards: (i) decreasing dependence on imported oil by promoting policies of energy self-sufficiency in liquid fuels, thereby redefining their place in today's energy geopolitics; (ii) raising environmental awareness in order to decrease the emissions of gases responsible for the greenhouse effect, mainly CO2; and (iii) improving air quality, mainly in large cities, a problem that is becoming a main public concern, notably in countries with higher per capita income.
Several initiatives have been implemented in different countries – Holland, Sweden, France, Spain, the United States, Canada, China, Japan, Australia – stimulating the production and exports, or even the imports, of ethanol [1].

WHY IS ETHANOL THE WAY TO GO?

In a recent study Lave et al. [2] pointed out three technologies having the potential to reduce net emissions from motor vehicles: (i) batteries; (ii) fuel cells; (iii) ethanol. Each of these technologies has its own advantages and drawbacks, and none is in reality a "perfect put-in substitute" for the present worldwide fossil fuel model. The point is that the hitherto successful "fossil fuel model" is being seriously jeopardized by the medium term destructive action of its pollutant gases, requiring a short to medium term solution or at least a reasonable remedy. In the analysis presented by Lave et al. the existing individual vehicle transportation model itself is not questioned. They proposed that, instead of inducing consumers to choose more efficient vehicles for personal transportation through higher taxes on gasoline consumption, ethanol could be used in different proportions as an alternative to gasoline. That is to say, let consumers continue choosing their "fuel-hungry" sport utility vehicles (SUVs) and light trucks (which account for more than half of new vehicle sales in the US), but induce them to turn to ethanol as fuel. The rationale is that efficiency arguments do not apply in general to consumers' choices of vehicles for personal transportation: any improvement in engine efficiency ends up merely as a further incentive for consumers to buy vehicles with more powerful engines. That is not to say that efficiency initiatives should be neglected. Each of the technologies mentioned above is briefly discussed below, with the zero-CO2 emissions target in mind.

Batteries
Electricity has to be produced to charge the batteries, so it is important that the source of primary energy be renewable, such as hydro or biomass.1 Unfortunately battery-powered cars are expensive and represent a potential public health menace. To get even a 100-mile range, about 500 kg of batteries are required for a two-passenger car. Making and recycling these batteries is expensive, leading to a large increase in the cost of driving. If the current US fleet of 200 million vehicles were run on current lead acid, nickel cadmium, or nickel metal hydride batteries, the amount of these metals discharged to the environment would increase by a factor of 20 to 1,000, raising vast public health concerns [2].

Fuel cells
Fuel cells basically emit water vapor and are much more efficient than internal combustion engines. Unfortunately, fuel cells are still extremely expensive and cannot compete with current engines. Major technology breakthroughs are required to make fuel cells attractive for light vehicles. Also, the environmental implications of fuel cells cannot be known until we know what materials and processes will be used and how the hydrogen will be produced [2].

Ethanol, the way to go!


Ethanol gives a relatively "clean" combustion because the CO2 delivered back to the atmosphere was extracted from it in the photosynthesis process. Because ethanol can be continuously produced in the long term, it is generally accepted that it can be considered a fairly renewable fuel. Ethanol can also be produced economically, at least in some areas of the world today, such as Brazil. In Brazil, ethanol has been produced on a large scale since the beginning of the Proálcool programme in 1975 [3]. Year after year sugarcane has been cultivated, yielding high biomass production at ever lower costs. Presently Brazil produces the lowest cost ethanol in the world (around US$ 200/m3), which is distributed country-wide at about half the price of gasoline, without any government subsidy.

1 Nuclear technology also produces zero CO2 emissions but cannot be considered a feasible alternative for worldwide electricity generation. The main difficulty in expanding nuclear technology is its negative image, summarized in the popular saying "not in my backyard".
Therefore, if fuel ethanol presents these advantages and is competitive in Brazil, the questions raised here are: could ethanol be produced on a large scale in a sustainable way? How much ethanol can be produced sustainably? Could other countries adopt a fuel ethanol model similar to the one in Brazil?
One important point, often considered a "drawback" of the renewable ethanol model in comparison with the fossil fuel model, is that ethanol requires growing a raw material. This is a key issue when it comes to deciding whether the fuel ethanol model can be applied worldwide. Feasible alternatives for producing enough raw material on a large scale to meet the enormous requirements of the world vehicle fleet are discussed below.

RAW MATERIALS FOR LARGE SCALE ETHANOL PRODUCTION

Although ethanol can be produced from sugar, starch and fiber, currently the major source of ethanol is fermentation of sugar from corn and sugarcane. Although sugarcane ethanol has become very competitive in Brazil, sugarcane is planted mainly for its sugar content. The ethanol technology has advanced significantly in the 25 years since the Proálcool started, but it is not likely that we can depend on this technology to achieve more ambitious targets such as producing a worldwide alternative fuel. The US produces ethanol basically from corn, which is highly subsidized (US$ 0.54 per gallon) and is not likely to become competitive with sugarcane either.
The most abundant raw material available worldwide for ethanol production is fiber. Fiber can be obtained economically from sugarcane, agricultural residues and dedicated crops such as eucalyptus. Lignocellulose-derived ethanol can be produced, harvested and processed so that there are no net carbon dioxide emissions and environmental quality is enhanced relative to current land use (less erosion, less use of chemicals and water, less soil loss).
The technology for converting lignocellulosic materials into ethanol is the hydrolysis process, which has not been fully developed. The use of lignocellulosic materials as a source of sugars for conversion to ethanol is developing rapidly, particularly in the US and Brazil.

THE BRAZILIAN EXPERIENCE

The use of fuel alcohol in Brazil dates back to the 1930s, when alcohol was first blended with gasoline, thus providing a "sink" for excess sugar production. Sugar producers have proved to be highly influential in Brazilian politics, and this explains the government's constant preoccupation with providing relief for the industry's surplus.
This was the case in the early 70s, when the government demonstrated a great deal of creativity: it not only increased the alcohol fraction in gasoline, but launched a program for pure ethanol as a vehicle fuel, named Proálcool [3]. This innovative program was conceived in the wake of a global energy crisis by the military regime, which regarded energy supply as a high priority on its national security agenda.
At the beginning of the 80s the country was highly indebted when it was overtaken by a serious foreign exchange crisis. Because of that, and fueled by a second oil price shock, investments in new ethanol plants continued and production grew rapidly between 1980 and 1985, reaching around 10 billion liters/year, enough to fill the tanks of the more than 4 million pure-ethanol-fueled cars and to blend ethanol with gasoline for the remaining vehicle fleet.
The second half of the 80s was marked by a relief in world oil prices, and ethanol production stagnated. It was also a time for new priorities in Brazil, as the country launched several stabilization programs in a row in attempts to control ever growing inflation. However, demand for ethanol continued to grow, and the imbalance between a stagnating supply and an increasing demand culminated in the 1989 supply crisis, which forced the government to import ethanol and methanol.
This crisis was a serious drawback to the program and caused many vehicle owners to reject new ethanol-fueled cars. The share of pure ethanol vehicles in production has plummeted since then and is nearly 2% today. Therefore, all of the ethanol produced in the country in the 1996/97 season, nearly 13.7 billion liters/year, replacing nearly 200,000 barrels of petroleum/day, is used either to fuel relatively old vehicles (generally pre-1990) or to blend with gasoline at a 22% volume content.
Today Brazil produces nearly 300 million tons of sugarcane, nearly 50% of it dedicated to ethanol production. The country's ethanol production is around 13 billion liters annually and has remained at this level since the 1989 crisis. Although consumption of hydrated ethanol (used in E100 vehicles) is declining as a result of consumer skepticism, consumption of anhydrous ethanol in blended gasoline (in E24 vehicles) is increasing at a rate that almost compensates for the former.
The 2001/2002 sugarcane harvest in Brazil is expected to be 10% higher in cane production. The present scenario also indicates an increase in oil prices, with a corresponding increase in gasoline prices at the pumps in Brazil, widening the price gap between gasoline and ethanol and increasing, although not significantly yet, consumers' preference for and trust in ethanol vehicles. Although it is still too early to forecast, the gasoline price hikes caused by the current crisis in the Middle East may lead to a revival of consumers' interest in E100 vehicles in Brazil.

THE US EXPERIENCE

In 1990 the president of the United States signed the "Clean Air Act Amendments" (CAAA), regulating the use of reformulated, oxygenated gasoline, mainly in highly polluted regions and especially during the winter months; according to this document, a certain percentage of the oxygenated fuels should derive from renewable sources, ethanol becoming the best choice. The government intervention was not limited to the political sphere. Economically, several incentives were offered to farmers through tax exemptions, at both the federal and state levels, besides special credit programs. The tax exemption, beginning in January 1991, was set at 5.4 US cents per gallon of blend for a minimum ethanol share of 10%, representing 54 cents per gallon of ethanol. In 1998 the government approved a law continuing the subsidy until 2007; the subsidy is to decline steadily, reaching US$ 0.51 in 2005. These actions are stimulating several other new projects for ethanol production, which are likely to increase the US production capacity by 60% – to approximately 11 billion liters – in the next two or three years. The present production level in the US is about 7.5 billion liters of fuel ethanol.

THE ETHANOL PRODUCTION AND USE IN OTHER COUNTRIES2

The European Union has demonstrated its commitment to reducing the greenhouse effect and is working seriously to stimulate the use of fuels derived from renewable sources of energy. Tax exemption for biofuels used in the transport sector has been a key instrument for reducing gas emissions in Holland and other European countries; France produces fuel ethanol mainly from sugar beets; the Swedish government has invested more than US$ 4 million over the last three years to bring the advantages of using renewable fuels to public attention, and by 2010 Sweden will have 15% of its transportation sector based on biofuels; Spain, a major ethanol producer in Europe, is increasing its participation by buying new ethanol plants in the US (High Plain Corp.).
In Asia, China is increasing the share of ethanol in its gasoline up to the 10% level. The US has initiated several different projects in that country to produce ethanol from corn. Japan is also developing an important program to add ethanol to diesel. There is now also an alliance called the "Global Alliance", which groups countries such as Canada, Australia, Thailand and Guatemala and has trade in fuel ethanol as its objective.

2 Text based on reference [1].
On the other hand, increasing the supply of fuel ethanol requires further actions to promote the production of low cost renewable ethanol from the raw materials presently available in the world. Today fuel ethanol can be produced using sugarcane in countries like Brazil, corn in the US and China, and beets in the European Union and China. Besides the "traditional" raw materials, the use of lignocellulosic materials as a source of sugars convertible to ethanol is being developed at a certain pace. Enzymatic hydrolysis is expected to become commercially available in the next ten years, which will allow the use of agricultural residues such as corn and sugarcane residues, abundant and produced at relatively low cost in the US, Brazil, China and India, to name a few countries. More applied research on hydrolysis needs to be done in already existing commercial plants to speed up the development of the technology.

A WORLDWIDE PRODUCTION OF ETHANOL

Replacing the world's present consumption of gasoline would take more than 2 trillion liters of ethanol. At the average yields of ethanol per hectare of energy crops in Brazil, this would take up to 400 million hectares of land dedicated to energy crops, establishing a large agricultural industry to produce the raw materials. This represents an immense expansion of ethanol production and requires a well planned and reasoned development program to ensure that the many environmental, social and economic concerns are addressed properly.
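A back-of-envelope check of the land figure (the per-hectare yield below is an assumption in the range typically cited for Brazilian sugarcane ethanol, roughly 5,000 to 6,000 litres per hectare; it is not a number given in this paper):

# Back-of-envelope check of the land required to replace gasoline with ethanol.
ethanol_needed_litres = 2.0e12               # >2 trillion litres (figure quoted above)
assumed_yields_l_per_ha = (5000.0, 6000.0)   # assumed sugarcane-ethanol yields

for y in assumed_yields_l_per_ha:
    area_mha = ethanol_needed_litres / y / 1.0e6
    print(f"at {y:.0f} L/ha -> about {area_mha:.0f} million hectares")
# roughly 330 to 400 million hectares, consistent with the figure quoted above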

MARKET STRATEGIES FOR GASOLINE SUBSTITUTION

Fuel ethanol can be used to increase the octane number of gasoline (blends of up to 25% are presently used in Brazil) and to fuel E100 vehicles. In the US, fuel-ethanol-producing states have been using the "gasohol" blend, at a lower percentage than Brazil, for more than two decades. In both countries the use of ethanol in gasoline blends was strongly motivated by farmers' political pressure to compensate for their market losses in sugarcane and corn.

The market for ETBE and MTBE


Oxygenates, including fuel ethanol, methyl tertiary butyl ether (MTBE), ethyl tertiary butyl ether (ETBE), and tertiary amyl methyl ether (TAME), are added to gasoline to make combustion cleaner and consequently to reduce toxic exhaust pollution, particularly CO.
Ethanol is a base chemical that can be used directly as an additive or for ETBE production. The extensive use of ethanol and ETBE could boost the use of renewable fuels, with a significant impact on the atmosphere. The US demand for ethanol could increase further once MTBE is eliminated from gasoline; its phase-out by 2002 has already been announced by the State of California, which alone uses nearly 25% of the global production of MTBE. Presently, the US imports around 80% of the MTBE utilized in the country. The Renewable Fuel Standard (RFS) and the Renewable Fuels Association (RFA) estimate that by 2010 approximately 6% of the fuel consumed in the US will come from renewable sources; in the case of ethanol the demand could reach 20 billion liters/year [4,5,6]. However, the best oxygenating additive could be ethanol itself; accepting the concept of E10 to E85 flexible engines would resolve this issue completely.

International trade of ethanol


The volume of ethanol traded worldwide is still very small compared to fossil fuels. Transportation costs do not affect the costs associated with fossil fuels as significantly as they do those of ethanol, because the production costs of ethanol are significantly higher. Producing fuel ethanol requires an agricultural raw material, which represents nearly 70% of the final fuel cost. Therefore, if any single step is to be considered a limiting factor, it should be the agricultural phase. This partly explains why Brazil is able to produce low cost ethanol from sugarcane, a crop very well suited for large scale production in Brazil, at least up to the present production level. The US also produces a significant amount of ethanol, although with high and, in the long term, unsustainable subsidies. American corn-based ethanol production from high cost corn to substitute for low price gasoline is an equation difficult to sustain in the long run.
Therefore, Brazil and a few other tropical countries may have absolute and comparative advantages in producing fuel ethanol surpluses to feed the high levels of consumption in markets such as the US, Europe and Japan. Probably the major difficulty in implementing an international ethanol trade is the barrier created by the influence of local farmers, who treat any foreign commodity as a threat. American and European strategists in particular also regard their agricultural sectors as important to their national security policies.3
It is important to mention that any strategy aimed at the creation of a large international market for ethanol should associate the product with energy, never with an agricultural commodity. For instance, ethanol entering the US should be seen as a substitute for imported oil, not for the alcohol produced from the corn supplied by local farmers. It should be a "crowding in" situation with respect to domestic agricultural producers if the ethanol initiative is to succeed. Therefore, if large volumes of fuel ethanol cannot be traded between Brazil and the developed countries, what could be done? A possible strategy may be first to protect and encourage the build-up of stronger local fuel ethanol markets by decreasing the supply risk. Although Brazil is a major fuel ethanol producing country, it has already experienced its own supply crisis, in 1989.
What is being proposed here is the creation of a strategic ethanol regulatory volume, maintained at a certain level, that could be drawn on by any country as needed. Brazil will probably have an extra 2–3 billion liters of ethanol per year in the coming years. This surplus could be used to create a 5 billion liter regulatory stock, enough to avoid a major supply crisis in the consuming countries. Another objective of the proposed regulatory volume of fuel ethanol could be to assign certain quantities to be blended with gasoline in newly created markets, such as China and India. The new renewable fuel can in fact become an important element in a new economic development model, which is presented below.

THE CLEAN DEVELOPMENT TRADE MECHANISM – CDTM

Since the Clean Development Trade Mechanism (CDTM) was proposed during the negotiations of the Kyoto protocol, it has attracted worldwide interest. CDTMs allow companies located in developing countries to buy carbon certificates from companies located in other countries. Although the CDTM can be considered an efficient mechanism for reducing CO2 emissions throughout the globe, it still does not represent an adequate solution for assuring environmental and economic prosperity on a global scale.
It is well known that the main aspiration of developing countries is to achieve sufficient economic development to satisfy not only their basic needs but also to close the gap with the other countries of the world. This requires more and more energy inputs and investments. In the last few decades, notably after World War II, the developing nations experienced an important surge in economic growth, with a correspondingly fast increase in their energy consumption, which prompted them to abandon traditional low-grade biomass energy and use ever larger quantities of fossil fuels. Many developing countries still carry high foreign debts, which may lead to political instability, a long term threat to economic prosperity.
What is proposed here is a mechanism by which not only fuel ethanol but also other renewable, zero net CO2 emission sources of energy could be used and traded to help developing countries reduce foreign debt and enhance their participation in the global economy. Renewable energy sources such as ethanol, wood, agricultural residues and vegetable oils could be traded directly or indirectly in a more liberalized international market. For instance, all goods produced from renewable sources of energy would have a zero import tax, thus helping many developing countries to overcome foreign exchange constraints.

3 According to a personal communication from the Brazilian Agribusiness Association (ABAG), ethanol fuel from Brazil is taxed 2.5% in addition to US$ 0.54/gallon, resulting in a tariff near 50%.
For example, Brazil presently has a "clean" energy matrix compared with most developed countries. Nearly 60% of the Brazilian energy matrix is composed of renewable sources, mainly hydro (40%), wood and charcoal burning (10%) and sugarcane (10%). However, Brazil, with its current US$ 400 billion foreign debt, which requires nearly US$ 30 billion/year to finance, receives no incentives to export "green" products. On the contrary, its charcoal-based steel industry has recently been heavily taxed by the US and Europe. There are several other examples, in other countries as well, in which the CDTM could be implemented, resulting in important benefits worldwide.

COMPLEMENTARY STUDIES TO INCREMENT ETHANOL PRODUCTION

Under present conditions it is not possible to implement a worldwide large scale program to substitute a fuel ethanol model for the existing fossil fuel model. There are serious key problems to be resolved, such as: (i) how much land and how many resources will be necessary to produce such a large quantity of fuel ethanol? Are there significant and viable natural resources available worldwide? Is the idea politically sound? (ii) Is the technology for producing fuel ethanol from low cost lignocellulosic materials thoroughly developed? (iii) Is there enough capital available for investment in distilleries and agricultural infrastructure? (iv) Is there a sufficient fertilizer supply? (v) What will be the environmental impact of a sugarcane area much larger than the present one? (vi) What will be the impact of pests on much larger areas of sugarcane?

Studies on the sustainability aspects


Increasing ethanol production to large scale quantities deserves a profound and careful interdisciplinary study considering the downstream impacts on the environment, labour and the economy. Today the cultivated area occupied by sugarcane in Brazil is about 5 million ha, and Brazil is the leading country in sugar and ethanol production. Sugarcane ranks third in cultivated area, well below soybeans and corn, which require nearly 13 million ha each. Some specialists claim that a first phase of increasing fuel ethanol production in Brazil, raising the cultivated area from 5 to 13 million ha, could be accomplished without any significant negative impact on the environment. Other specialists claim that sugarcane can be cultivated on nearly 40% of Brazilian territory, representing nearly 300 million ha. However, a more in-depth study is required before implementing any large scale project in Brazil [7].

Studies on fiber collection and hydrolysis for converting fibers into ethanol
Besides the sustainability aspects already mentioned, there are other important limitations to overcome. The US and Brazil are today firmly engaged in applied research to produce ethanol from lignocellulosic materials. Brazil would then be able to supply large volumes of fuel ethanol to countries with limited natural resources, such as land and water. However, there are still important technical barriers limiting the use of fibers for ethanol production. In Brazil the use of cane trash (not available today because most cane is still burned prior to harvesting) could become feasible if present harvesting technology progresses to the level of collecting the trash at low cost [8]. Some difficulties have been overcome, and at the present stage trash can be produced and delivered at the factory gate at a cost of US$ 10/ton (dry matter).
It is also necessary to address the problems involved in converting lignocellulosic materials into fuel ethanol, mainly the development of the whole technological package of enzymatic hydrolysis and the adaptation of the "dilute acid pre-treatment" process to accomplish simultaneous saccharification and fermentation, a process named "SSF". Further studies are also needed to implement a pilot plant in Brazil and to determine its proper operating parameters.

CONCLUSION

The main issue addressed in this paper was the implications of a worldwide large scale production of fuel ethanol to substitute for the existing fossil fuel model. The main advantages and difficulties have been discussed.
There are certain levels up to which ethanol production could be increased without significant impacts. Politically, the well-known difficulties of the traditional petroleum sector are now combined with the singularities of the agricultural (mainly corn and sugarcane) sectors. It is important to state that sugarcane is a highly political product, with which several developing countries have been conducting their foreign policies for centuries.
Possibly the best long term strategy is to create markets, at local and international scales, and to promote them in such a way as to create a "renewable fuel mentality", particularly in the developed nations. Another important aspect is to associate the environmental benefits of fuel ethanol, which helps to reduce the greenhouse effect, with the social and economic benefits derived from the proposed Clean Development Trade Mechanism (CDTM). The economies of many developing countries could benefit substantially from the implementation of this strategy, helping all of us to achieve progress and stability, and a more equitable world.

REFERENCES

1. Berg, C., World Trade Production and Trade to 2000 and Beyond, www.distill.com/berg.htm, 17p.,
January 1999.
2. Lave, L.B., Griffin, W.M., MacLean, H., The Ethanol Answer to Carbon Emissions, Issues in Science
and Technology on line, http://bob.nap.edu/issues/18.2/lave.html, 9p., Winter 2001.
3. Rosillo-Calle, F.; Cortez, L.A.B., Towards PROALCOOL II – A Review of the Brazilian Bioethanol
Programme, Biomass & Bioenergy, 14(2), 115–124, 1998.
4. DiPardo, J., Outlook for Biomass Ethanol Production and Demand, Energy Information Administration,
14p. year not available.
5. Renewable Fuels Association – RFA, Ethanol – Industry Outlook – 1999 and Beyond, Washington
D.C. USA, www.ethanolrfa.org, 1999.
6. Urbanchuck, J.M., Ethanol’s Role in Mitigating the Adverse Impact of Rising Energy Costs on U.S.
Economic Growth, AUS Consultants, 2001.
7. Cortez, L.A.B., Braunbeck, O., Rosillo-Calle, F., Bauen, A., Environmental Aspects of the Brazilian
alcohol program: Why should we consider it?, International Conference on Agricultural Engineering
– AgEng/Oslo 98, Paper 98-E-034, Oslo 24–27 August 1998, pp. 241–242.
8. Braunbeck, O., Bauen, A., Rosillo-Calle, F., Cortez, L.A.B., Prospects for Green Cane Harvesting and
Cane Residue Use in Brazil, Biomass & Bioenergy, 17(6): 495–506, 1999.


Environmental aspects of socio-economic changes for an industrial region in Russia in a transition economy

Boris Korobitsyn∗ & Anna Luzhetskaya


Institute of Industrial Ecology, Ural Branch of Russian Academy of Sciences, Ekaterinburg, Russia

ABSTRACT: Sverdlovsk Oblast was chosen as a case study to diagnose the sustainability of the present socio-economic configuration of an old industrial region of the Russian Federation in a transition economy and to analyze tendencies describing possible changes in regional sustainability in the future. As a result of the drastic drop in industrial production output during the last ten years, the emission of pollutants into the environment has also decreased significantly. But during the same period most of the region's mining and metallurgical combines became even more outmoded because of a lack of funding for reconstruction. The second negative tendency is that the structure of the regional economy is becoming more and more "heavy": the share of raw materials and semi-finished products has increased. The third negative tendency is the dynamics of the specific expenditure of energy and resources per unit of industrial production output.

INTRODUCTION

In spite of the sharp decrease of industrial production output in Russia in the last decade, environmental contamination levels remain high, so environmental priority assessment is an objective of vital importance. Sverdlovsk Oblast, as one of the most industrialized territories of Russia, was chosen for a case study.
The Russian Federation consists of 89 administrative units (Oblasts, Republics and Krais). Sverdlovsk Oblast is situated in the Middle and North Urals and on the adjacent territory of the Western-Siberian Lowland, within the following geographical coordinates: latitude 56°00'–62°00' North and longitude 57°10'–66°10' East. The area of Sverdlovsk Oblast is 194.8 thousand square km (1.1% of the territory of Russia). In the north Sverdlovsk Oblast borders on Komi Republic, in the east on Tumen Oblast, in the south on Kurgan Oblast, Chelyabinsk Oblast and Bashkortostan Republic, and in the west on Perm Oblast (Figure 1).
Sverdlovsk Oblast ranks fourth in population (4676.7 thousand people in 1996) among the Russian regions. The Oblast is highly urbanized: 87% of the population lives in towns. The largest cities are Ekaterinburg (1277.8 thousand people), Nizniy Tagil (408.4 thousand people) and Kamensk-Uralskiy (197.5 thousand people).
The economy of the Oblast is based on rich mineral resources. Sverdlovsk Oblast accounts for 23% of Russia's iron ore output, 71% of bauxite, 6% of copper ore, 20% of fire clay and 97% of asbestos output. The industrial specialization of the Oblast is in the field of heavy industry: ferrous and non-ferrous metallurgy, diversified mechanical engineering, and the chemical, timber, woodworking, pipe and paper industries. Sverdlovsk Oblast is among the most environmentally unfavourable regions of Russia. The main reason for this situation is the extremely high concentration of environmentally dangerous industries. The other reasons are that the majority of the region's vast mining and metallurgical combines are outmoded, a lack of pollution control technologies, and poor exploitation of the existing pollution control equipment.

∗ Corresponding author. e-mail: kba@ecko.uran.ru

Figure 1. Geographical position of Sverdlovsk Oblast.

SOME REGIONAL HISTORY

The Urals is the oldest source of minerals and raw materials in Russia. Rich natural resources were the foundation for the industry and economy of Sverdlovsk Oblast.
The industrial development of the Urals started at the beginning of the XVIII century. The rate of growth of the mining and smelting industry was unprecedented: from 1700 to 1861, 205 metallurgical plants were built, 144 for ferrous metallurgy and 61 for copper metallurgy. It was at that time that the seeds of many present environmental problems were sown. Because falling water was the only source of energy for factory machines and mechanisms, all plants were located near small rivers that were easy to dam. The majority of Urals towns sprang up around these old plants. Therefore, nowadays these towns suffer from a shortage of water, because the small rivers cannot satisfy the water requirements of modern industrial centers. On the other hand, these small rivers are incapable of bearing the contemporary anthropogenic pressure, which is becoming ever heavier.
Already at the end of the XVIII century, the first environmental crisis, caused by the damaging felling of forests, occurred in the industrial part of the Urals. Until 1924, all Ural blast furnaces used only charcoal. As a result, the territory around the oldest and largest plants was deforested within a radius of about 10–20 kilometers. Because of the wood fuel shortage, many plants were forced to change their type of activity or curtail production.
At the end of the XIX century a new stage of industrial development of the Urals began: the beginning of the timber industry. The development of industrial logging caused widespread environmental damage from the floating of untied logs.
The next stage of the industrial development of Sverdlovsk Oblast took place at the time of Stalin's industrialization in the 1930s, when the rapid growth of new heavy industrial enterprises began. Newly constructed and restored enterprises were converted from charcoal to coal as fuel, but the process of deforestation continued because the construction demanded a lot of wood. The industrialization was accompanied by population migration from the European part of the USSR to the Urals: during the 1920s–1940s the population of the main industrial towns of Sverdlovsk Oblast increased 3–6 times. This industrialization stage was accompanied by the appearance of new environmental problems. To reduce the cost of construction and operation of enterprises, new ferrous and non-ferrous metallurgical, chemical, heavy engineering and power engineering plants were concentrated in small towns that became large industrial centers. The industrial technologies in use at that time were environmentally very imperfect, and such industrial concentration caused intolerably high contamination of air and surface water.


Table 1. Industrial production output in the mining and processing industry of Sverdlovsk Oblast.

Production                          1990 [1]   1995 [2]   1996 [3]   1997 [3]   1998 [3]   1999 [3]   2000 [3]
Electric energy, million kWh        60519      38956      37375      36778      37518      35531      43481
Coal, thousand t                    4370       2987       3067       2859       2597       2305       2255
Peat, thousand t                    no data    302        165        191        113        130        59
Iron ore, thousand t                14658      11373      10568      9276       8572       9817       10588
Cast iron, thousand t               6886       5400       5034       4007       2707       4041       4714
Steel, thousand t                   10547      6694       6532       5402       3874       5365       6473
Mineral fertilizers, thousand t     360        20         13         11         8          11         12
Sulphuric acid, thousand t          1160       498        446        514        421        567        670
Timber, thousand m3                 15427      5988       4331       3335       3115       3070       3085
Saw-timber, thousand m3             4417       1607       1183       1126       1024       937        889
Paper, thousand t                   75         28         25         23         21         25         39

These negative environmental tendencies of industrial development became stronger during the Second World War in 1941–1945, when many enterprises were evacuated from the European part of the USSR to the Urals. All evacuated enterprises were relocated to the territories of already existing ones, which increased the industrial concentration and pollutant emissions in the region.
After the Second World War, the Urals, as a well-developed region remote enough from the frontiers of the USSR, was chosen as the primary region for the development of the Soviet nuclear industry. Two of the ten closed cities of the Russian Ministry of Atomic Industry and the Beloyarskaya Nuclear Power Station are situated in Sverdlovsk Oblast.

PRESENT ECONOMIC-ENVIRONMENTAL TRENDS

During the period of drastic reforms in the economy of Russia since 1990, the dynamics of the economic-environmental situation in Sverdlovsk Oblast have been characterised by the following tendencies.
As a result of the general decrease of industrial production output, the extraction and consumption of natural resources decreased too (Table 1).
The structure of the regional economy is becoming more and more "heavy": the share of raw materials and semi-finished products has increased, and the share of the most environmentally dangerous industries in the total industrial production output continues to grow (Table 2).
As a result of the drastic drop in industrial production output during the last ten years, the emission of pollutants into the environment has also decreased significantly. This is a positive tendency, but the decrease in the intensity of environmental impact is significantly smaller than that in industrial production output. For example, the consumption of fresh water and the wastewater discharge from industrial, municipal and agricultural sources have even increased since 1990 (Table 3).
Hazardous industrial wastes are one more continuing problem in the region, increasing in both
quantity and toxicity. Mining and heavy industry such as steel and copper production is the largest
sours of solid wastes. In the process of production of 1 metric ton of metal about 100 tons of solid
wastes are produced at the stage of ore mining and processing and up to 8 tons of slugs and other
wastes at the stage of smelting. Average annual production of such wastes in Sverdlovsk Oblast
is about 2 billion cubic meters per year. The most part of wastes produced in the process of ore
mining and processing are not extremely hazardous and so not officially classified as toxic ones but
they take up a lot of territory (at present more than 2 thousand square kilometres) and for decades
will remain a source of water and air contamination in the result of leaking and wind dispersion.
The most hazardous solid and liquid industrial wastes which are classified as toxic in accordance
with Russian regulation practice (this classification is based on the set of factors which take into
account both the impact of waste on the environment and toxicity for man) are usually stored at

199

© 2004 by Taylor & Francis Group, LLC


chap-20 19/11/2003 14: 47 page 200

Table 2. Changes in the structure of the regional industrial production output, %.

1990 1995 1996 1997 1998 1999 2000


Branch of industry [1] [2] [3] [3] [3] [3] [3]

Power engineering 3.8 12.6 13.9 15.9 15.3 10.8 10.1


Fuel industry 0.2 0.4 0.5 0.5 0.5 0.4 0.3
Ferrous metallurgy 18.3 29.5 28.2 25.5 19.6 19.8 24.0
Non-ferrous metallurgy 25.5 18.4 16.6 17.4 23.3 31.0 28.2
Chemical and petrochemical 5.0 4.1 3.2 3.3 2.8 2.6 2.2
Mechanical engineering 26.1 15.8 17.0 18.0 17.9 15.6 17.0
Timber and woodworking 4.6 3.0 2.7 2.2 2.1 2.1 2.0
Production of building 3.7 5.1 5.5 5.5 4.9 3.7 3.9
materials
Light industry 3.7 0.6 0.5 0.5 0.5 0.4 0.4
Food industry 5.5 7.3 7.9 7.7 9.4 10.2 8.8

Table 3. Environmental impact intensity.

Indicator                                                   1990   1995   1996   1997   1998   1999   2000
                                                             [1]    [2]    [3]    [3]    [3]    [3]    [3]

Total emission of air pollutants from
stationary sources, mt/y                                    2793   1474   1394   1361   1279   1273   1471
Total consumption of fresh water, million m3/y              2000   2278   2192   2363   2390   2134   2304
Wastewater discharge from industrial, municipal
and agricultural sources, million m3/y                       766    912    878    827    841    836    826

Table 4. Specific expenditures of energy and resources for production of one ton of rolled stock [5–7].

Year   Steel (kg)   Fuel equivalent (kg)   Oxygen (m3)   Electric energy (kWh)

1990     1248             615                  64                 90
1992     1238             668                  73                 88
1993     1236             673                  70                 89
1995     1216             689                  70                 91
1996     1199             674                  67                 87
1997     1181             654                  68                 83

It seems that the industrial toxic waste problem will not be solved for decades, because of the absence of waste-free technologies in mining and metallurgy and the economic ineffectiveness of processing these wastes in the present state of the economy.
Another negative tendency is the dynamics of the specific expenditures of energy and resources per unit of finished product. The common tendency in the world is for such specific expenditures to decrease, but in the regional industry these characteristics remain approximately the same. As an example, the dynamics of specific expenditures in ferrous metallurgy are presented in Table 4.
To diagnose the present socio-economic and environmental situation from the viewpoint of sustainability, and to analyse whether current tendencies point towards a future that might be more sustainable or not, an indicative approach was selected. The Core Set of environmental indicators developed by the
Organization for Economic Co-operation and Development [4] was used to show and interpret trends in the environment and related human activities in Sverdlovsk Oblast, and to provide a basis for international comparison.
It was found that many environmental impact and environmental quality indicators for Sverdlovsk Oblast are lower than might be expected and are quite comparable with those of OECD countries that are usually considered environmentally successful. However, economy-environment indicators (such as emissions of pollutants per capita or per unit of Gross Domestic Product) demonstrate that the environmental effectiveness of the regional economy is very low in comparison with the most developed countries.
The main tasks of the Oblast’s (and Russia’s) industry now are to meet the challenge of the new century and to adopt the concept of sustainable development, so as to achieve coordinated progress in both economic growth and environmental protection.

ACKNOWLEDGMENT

This work was funded by the Russian Fund of Basic Research, Project # 02-06-96406.

REFERENCES

1. Sverdlovsk Oblast in 1990–1993. Report of the Sverdlovsk State Department of Statistics, Sverdlovsk,
1994. (in Russian)
2. Sverdlovsk Oblast in 1992–1996. Report of the Sverdlovsk State Department of Statistics #050,
Ekaterinburg, 1997. (in Russian)
3. Sverdlovsk Oblast in 1996–2000. Report of the Sverdlovsk State Department of Statistics #04001,
Ekaterinburg, 1997. (in Russian)
4. Towards sustainable development: Environmental indicators. Organisation for Economic Co-operation
and Development, Paris, France. 1998. 129 p.
5. Lisisn V.C., Stal’ (“Steel”), No. 10, p. 1, 1999.
6. Yuzov O.V., Isaev D.F., Stal’ (“Steel”), No. 10, p. 72, 1999.
7. Brodov A.A., Lopatina E.M., Stal’ (“Steel”), No. 4, p. 64, 1996.


Waste incineration in Swedish municipal energy systems – modelling the effects of various waste quantities in the city of Linköping

Kristina Holmgren∗
Division of Energy Systems, Department of Mechanical Engineering, Linköping Institute of Technology, Sweden

Michael A. Bartlett
Department of Chemical Engineering and Technology/Energy Processes, Royal Institute of Technology, Sweden
∗ Corresponding author. E-mail: kriho@ikp.liu.se

ABSTRACT: This study investigates the impact on the energy system in the city of Linköping,
Sweden, if different amounts of waste are used as a fuel. Two different levels of electricity prices
are also studied. The effects on system costs and carbon dioxide emissions are examined. The study is carried out using the optimisation model MODEST. The most important results are the following: it is economically favourable for the energy utility to use waste as a fuel, and an increased electricity price leads to more locally produced electricity, resulting in a lower cost for the district heating production. It also means lower carbon dioxide emissions globally, since the assumption is made that the locally generated electricity replaces electricity produced in coal condensing power plants.

INTRODUCTION

Waste management and the energy system are linked, since waste is widely used as a fuel in the Swedish district heating supply. New laws and regulations concerning the waste management system have come into effect or are planned for the near future, both in Sweden and in the European Union. The most important regulations in Sweden are the introduction of a landfill tax of 250 SEK/ton on January 1st 2000 and a ban on landfilling combustible waste from January 1st 2002. This impacts heavily on the energy system.
The consequences for the power and heat production in the city of Linköping, Sweden, if different amounts of waste are used as a fuel, are investigated. The aim is to give an understanding of what it means for an energy utility to use waste as a fuel in the present situation. The study object is the utility company Tekniska Verken, which runs the district heating system in the city of Linköping. In this system, a large waste incineration plant is the base supplier of heat to the district heating grid. The plant also produces electricity when integrated with an oil-fired gas turbine. The effects of lower amounts of waste incineration are studied, as well as of two different levels of electricity price. The issues in focus are which fuels are used for district heating production, the amount of electricity produced and from which fuels, the system costs (which in this study are the operational costs of satisfying the district heating demand) and the variations in carbon dioxide emissions.

WASTE MANAGEMENT IN SWEDEN

Waste management is an area in focus both in Sweden and in the European Union.

Table 1. Treatment methods for municipal waste 2000.

Treatment method         Part of the waste treated (%)

Material recovery         29
Biological treatment       8
Energy recovery           38
Landfill                  24
Hazardous waste           <1

In the European Union, the strategy is to break the existing correlation between waste production and economic growth, in order to decrease the amount of waste by 20% between 2000 and 2010 and by 50% by 2050. The Community’s waste hierarchy gives preference first to waste prevention, then to waste recovery (reuse, material recovery and energy recovery, where material recovery is preferred) and lastly to waste disposal (waste incineration where no energy is recovered, and landfill) [1]. Several regulations have come into effect or will come into effect in the near future. The most important are the following:
January 1st 2000: Introduction of a tax on landfill of 250 SEK/ton∗ in Sweden [2]. This tax was raised to 288 SEK/ton from January 1st 2002 [3]. (This increase in tax is not used in the calculations in this study.)
January 1st 2002: Introduction of a ban in Sweden on landfilling combustible waste [4].
December 28th 2002: Latest date for implementing, at the national level, the European Union directive concerning emissions from waste incineration plants. This directive can mean higher costs for flue gas cleaning and measuring [5].
January 1st 2005: A ban in Sweden on landfilling organic waste [4].
The Swedish government carried out an investigation into the results of the January 1st 2000 landfill tax, which showed that the tax has successfully decreased the amount of waste landfilled in Sweden. The investigation also studied what the consequences would be if a tax on waste incineration were introduced. The reason for doing this is, among other things, to make material recovery more attractive compared to energy recovery [6].
In this study, the focus is on municipal waste. Municipal waste is defined as waste from households and waste that is similar in content, for example waste from offices and schools. The amount of municipal waste in Sweden was 3.8 million tons in 1999. Table 1 shows how the municipal waste was treated by different methods [7].
As can be seen, energy recovery is today the most common way to treat municipal waste. The trend is towards decreased landfill and increased energy recovery, material recovery and biological treatment. The ban on landfilling combustible waste and the tax on landfill are, of course, incentives for this. In 1994 a law on producer’s responsibility was introduced, directing manufacturers to take care of packaging, paper and rubber tires in an environmentally acceptable way. It has later been extended to include cars and electrical and electronic devices. The law states the proportions of these wastes that must be recovered through material or energy recovery [8].
In the coming years there will be a shortage of waste treatment capacity, which means that municipalities will have difficulties complying with the ban on landfilling combustible waste. Many municipalities are planning new waste treatment facilities. Waste incineration will increase by 90% and biological treatment by 100% by 2005–2008 if all planned projects are carried through, but this will not be enough for the near future [9]. If certain conditions are fulfilled, for example if there are plans for new treatment facilities, it is possible to be granted an exemption from the ban on landfilling combustible waste. The exemption has to be renewed every year [10].

∗ 1 € equals 9.06 SEK (April 3rd 2002).


WASTE AS AN ENERGY SOURCE IN SWEDEN

The energy content in waste can be recovered in several ways. Waste may be digested, with the resulting gas used as a fuel, or the waste can be incinerated, releasing energy. The latter is more common and is the focus of this study.
In Sweden, the district heating network is extensive. District heating provides about 40% of the total heating demand of buildings, or 40 TWh of heat per year [11]. Waste is an important fuel for this supply. Currently there are 22 waste incineration facilities in Sweden, incinerating 1.9 million tons of waste (this includes municipal waste as well as industrial waste). These facilities produced 5.6 TWh of heat and 0.3 TWh of electricity in 1999 [7]. There are no waste incineration plants in Sweden that are used only for the destruction of waste without recovering the energy content. The European Union is promoting increased production of electricity from renewable sources; the aim is to increase its share from 13.9% in 1997 to 22% in 2010 [12]. As the biodegradable part of waste is included in the Union’s definition of renewable energy sources, waste incineration can be one of the ways to achieve this goal. However, the waste hierarchy, which is explained in the previous chapter, must be respected. This means that incineration of unsorted municipal waste should not be furthered if it undermines the waste hierarchy.
The European Union has also recognised that combined heat and power production (CHP) is an effective way to use resources. Raising the amount of electricity produced in CHP plants is seen as one of the measures needed to fulfil the Kyoto agreement of decreasing the EU’s carbon dioxide emissions by 8% between 1990 and 2008–2012. The European Commission aims for a doubling of electricity generation from CHP plants between 1994 and 2010, to 18% [13]. The large heat sink which the district heating networks provide gives Sweden an advantage in achieving this goal. There are, however, difficulties with CHP when waste is used as fuel. A critical factor in achieving a high electrical efficiency is the inlet pressure and temperature to the steam turbine. It is highly desirable to heat the steam above its boiling point (superheating), typically to values above 400°C. When using waste as a fuel, corrosion problems are encountered when superheating the steam. There are many impurities in unsorted waste, and corrosion of particular concern arises from high concentrations of chlorine compounds in the furnace. At surface temperatures above 350°C the lifetime of the superheater’s piping is severely shortened. There are several ways to deal with this problem. One such solution, which is used in the waste incineration plant in Linköping, is to use a clean fuel to carry out the superheating, creating a so-called hybrid system.
The most important environmental problems associated with waste incineration are the flue gas emissions and the ashes from the incineration. The flue gases contain hazardous substances such as heavy metals (e.g. lead, cadmium and mercury) and dioxins. Today, all waste incineration facilities have advanced flue gas cleaning systems, and the emissions of hazardous substances have decreased dramatically since the 1980s. However, dioxins and heavy metals end up in the flue gas ashes. These ashes amount to about 4% of the weight of the municipal waste and are classified as hazardous waste; they have to be landfilled in a safe way in order to prevent leakage. The bottom ash is about 19% of the weight of the municipal waste and is mostly landfilled, even though it might be used for road construction and the covering of landfills [14].
In the Swedish tax system, waste is considered a biomass fuel and is taxed accordingly. Fossil fuel used to produce heat is subject to carbon dioxide tax and energy tax, while biomass fuel used to produce heat is exempt from these taxes. Fuel used for electricity generation is also exempt from these taxes. In reality, municipal waste is not carbon dioxide neutral, since it contains, for example, plastics of petrochemical origin. An estimated figure for the release of fossil carbon dioxide from unsorted waste incineration is 83 kg CO2/MWh of fuel, in comparison with 274 kg CO2/MWh of fuel for oil and 335 kg CO2/MWh of fuel for coal [15]. These figures are used in this study.

DESCRIPTION OF THE POWER AND HEAT PRODUCTION IN LINKÖPING

Linköping is the fifth largest city in Sweden, with a population of 130 000. It is situated 200 kilometres southwest of Stockholm.


Figure 1. A simplified view of the heat and power system in Linköping.

Tekniska Verken, a company owned by the local government, operates the power and district heating system in Linköping. The total demand for heat from the district heating grid is around 1340 GWh/year, and this demand has to be met by Tekniska Verken’s plants. The base supplier of heat is the waste incineration plant, the Gärstad plant. This plant consists of three boilers with a total capacity of 73 MW and an additional 16 MW from flue gas condensing. Electricity can also be produced when the plant is integrated with an oil-fired gas turbine, a so-called hybrid system. The hybrid system has a capacity of 50 MWel and can operate at part load. There are two other CHP plants in the system. The central power plant has three boilers with different capacities. Boiler 1 (63 MW) uses coal, rubber and biomass. Boiler 2 (137 MW) uses oil, but also some animal fat. Boiler 3 (55 MW plus 19 MW of flue gas condensing) uses biomass and plastic waste. This plant can produce electricity and heat, or use a direct condenser for the sole production of heat. The third CHP plant consists of two oil-fuelled diesel engines, with a total capacity of 14 MW electricity and 13 MW heat and steam. Tekniska Verken also operates some hydro power plants and a wind power plant. For peak loads in the district heating demand there are oil-fuelled hot water plants and two electric boilers. The total demand for electricity in Linköping is around 1200 GWh per year. This can be provided by Tekniska Verken’s production plants or bought from outside suppliers. Figure 1 shows a simplified view of the heat and power system in Linköping.

METHODOLOGY OF STUDY

An optimisation model, MODEST (Model for Optimisation of Dynamic Energy Systems with Time-dependent components and boundary conditions), is used to perform the study [16]. This is a linear programming model that minimises the cost of supplying the demanded heat and/or power over the analysed period. Several alternative energy supply and conservation possibilities may be included in an analysis. The model has a flexible time division, which can reflect demand peaks and diurnal, weekly, seasonal and long-run variations of energy demand and other parameters. The main purpose of the MODEST modelling program is to find suitable investments in energy systems, but the program can also be used for operation optimisation of existing plants. The latter is the model’s function in this study.
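To make the kind of calculation described above concrete, the following is a minimal, heavily simplified sketch in Python (our own illustration, not the MODEST code): a single-time-step linear programme that dispatches three notional heat producers to meet a given demand at minimum fuel cost. The fuel costs are loosely inspired by Table 3, while the capacities and the demand are purely assumed.

from scipy.optimize import linprog

# Illustrative fuel costs in SEK/MWh (waste has a negative cost, as in Table 3)
# and assumed heat capacities in MWh for a single time step (invented numbers).
costs = [-118, 91, 293]        # waste boiler, biomass boiler, oil boiler
capacities = [90, 60, 200]     # maximum heat each plant can deliver in the step
demand = 180                   # heat demand to be met in this time step

# Minimise total fuel cost subject to meeting demand and respecting capacities.
result = linprog(c=costs,
                 A_eq=[[1, 1, 1]], b_eq=[demand],
                 bounds=[(0, cap) for cap in capacities])

print(result.x)    # dispatch: waste and biomass at full load, oil covers the rest
print(result.fun)  # corresponding minimum fuel cost in SEK

A full model of the kind described above repeats this choice over many time steps and plants, and also includes electricity sales, taxes and investment options.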
In the study there are six different scenarios. The scenarios consist of three different levels of
waste used at the Gärstad plant and two levels of electricity price. The fuel that replaces the waste
at the Gärstad plant is biomass since it is the only other fuel that can be used at the plant due
to suitable moisture and heat content. The potential for producing biomass fuel in Sweden is not
fully exploited, therefore the assumption can be made that increasing the use of biomass fuel at
the Gärstad plant does not compete with the use of biomass fuel in other plants. The scenarios are
presented in Table 2.


Table 2. Scenarios.

Scenario   Waste used at the Gärstad plant      Electricity price
           (percentage of boiler capacity)      (yearly increase during a ten-year period)

1          100                                  The same as the general inflation
2          100                                  7.5% over the general inflation
3           65                                  The same as the general inflation
4           65                                  7.5% over the general inflation
5           43                                  The same as the general inflation
6           43                                  7.5% over the general inflation

Today there is a shortage of waste incineration facilities in Sweden, and a decrease in the amount of waste to incinerate is unlikely to occur. This can, however, change in the future. Competition over the waste might arise in the region, since nearby communities are planning to build waste incineration facilities. Increased material recovery might also decrease the amount of waste that needs to be treated in waste incineration facilities, and new restrictions may be introduced limiting the amount of waste that can be used for energy recovery.
It is important to be aware of the system boundaries in this study. The comparison concerns waste versus other fuels when used for district heating, and the consequences for the energy utility of using waste. Incineration is not compared to material recycling of the waste, and the energy needed for transportation of waste is not included in the study.
The reason for the scenarios with a 7.5% increase in the electricity price over general inflation
during a ten-year period is to investigate the consequences for the energy system in Linköping
when the hybrid cycle at the Gärstad plant is operated. It was found in preliminary studies that this
price rise is necessary for profitable electricity production at the Gärstad plant. It can be argued
that this is a substantial increase in the electricity price, since it means that the electricity price
doubles during a ten-year period. However, in continental Europe the electricity prices are higher
than in Sweden. The Swedish electricity prices are expected to approach the continental European
electricity prices when facing a deregulated European electricity market [17].
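As a back-of-the-envelope check of the statement above: a price rising by 7.5% per year over ten years grows by a factor of (1 + 0.075)^10 ≈ 2.06, i.e. it slightly more than doubles.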
The following issues will be analysed in the six different scenarios: which fuels are used for district heating, the differences in system costs, the amount of electricity produced and from which fuels, and the amount of carbon dioxide emitted.
The electricity prices used in the simulations vary between 161 SEK/MWh in December and January and 112 SEK/MWh in July. The electricity price in Sweden is highly dependent on the outdoor temperature, since electricity is extensively used for heating in buildings not connected to the district heating grid. Table 3 shows the fuel costs, including fuel handling costs and taxes, used in the simulations. As can be seen in the table, waste has a negative cost. This is due to the cost of alternative treatment methods, particularly the landfill tax of 250 SEK/ton.

RESULTS

This chapter shows the results of the optimisations made with the MODEST model [18]. Table 4 shows the fuels used for district heating production in the different scenarios. The most important result here is which fuel fills the energy shortfall caused by decreasing waste incineration at the Gärstad plant. As can be seen, biomass fuel at the Gärstad plant replaces a large part of the energy deficit, but not all of it. Boiler 1 at the central CHP plant, which uses biomass and plastic waste as fuels, also replaces part of the energy shortfall. It is cheaper to use boiler 1 at the central CHP plant than to incinerate biomass fuel at the Gärstad plant; thus, the capacity of boiler 1 at the central CHP plant is fully utilised before the Gärstad plant incinerates biomass. Another interesting result is that in the scenarios with low electricity prices the electric boiler is used for heat production. This is not economically feasible in the scenarios with a high electricity price.


Table 3. Fuel costs including fuel handling costs and taxes, SEK/MWh.

Fuel                                             Fuel costs

Gärstad plant
 – Waste                                           −118
 – Biomass                                          124
 – Light fuel oil for heat production               293
 – Light fuel oil for electricity production        150
Power plant
 – Biomass boiler                                    91
 – Coal boiler, electricity production               65
 – Coal boiler, heat production                     136
 – Oil boiler, electricity production               150
 – Oil boiler, heat production                      293
 – Oil boiler, heat from animal fat                 198

Table 4. Fuel used for district heating production (GWh/year).

Fuels used in the different scenarios                    1      2      3      4      5      6

Gärstad plant, waste                                   854    854    533    533    356    356
Gärstad plant, biomass                                   0      0    167    165    270    268
Central CHP plant, biomass, plastic waste              370    370    439    439    502    502
Central CHP plant, coal, rubber, biomass,
  industrial tax∗                                       37     65     53     81     53     81
Central CHP plant, coal, rubber, biomass               182    189    182    192    182    192
Central CHP plant, animal fat                           40     40     40     40     40     40
Central CHP plant, heavy fuel oil, industrial tax        0     17      0     17      0     17
District heating plants, heavy fuel oil,
  industrial tax                                         1      4      1      4      1      4
Electric boiler, industrial tax                         48      1     48      1     48      1
Heat wasted                                             61     61      3      3      0      0

∗ Heat delivered to industries is taxed differently from heat delivered to households: it carries 30% of the carbon dioxide tax applied to heat for households and none of the energy tax.

Table 5 shows the fuels used for electricity production in the different scenarios. As can be seen, the Gärstad plant produces electricity in the scenarios with a high electricity price, and the central CHP plant produces even more electricity. Furthermore, the power output of the central CHP plant also increases when district heating production at the Gärstad plant decreases; the decrease in heat production at the Gärstad plant thus “frees up” a portion of the district heating demand for the central power plant to utilise, allowing more operational hours during which the co-generation of heat and power is feasible. Boiler 1 is used for this increase, using coal/rubber/biomass as its fuel.
The system cost is the cost of supplying heat to meet the district heating demand during a ten-year period. Only operational costs are included in these costs; hence the interest lies in the comparison between the scenarios. The most important factor contributing to the differences in system cost is the amount of waste used as a fuel for district heating: the more waste incinerated at the Gärstad plant, the lower the system cost. The electricity price is also an important factor for the total system cost, since electricity produced at Tekniska Verken’s plants generates an income in the system. The scenarios with a high electricity price, and therefore a higher local production of electricity, have a lower system cost than the scenarios with the same amount of incinerated waste but a low electricity price. The system cost varies between −347 MSEK for the cheapest scenario (scenario 2) and 512 MSEK for the most expensive (scenario 5).


Table 5. Fuel used for electricity production (GWh/year).

Scenario   Central CHP plant,   Central CHP plant,        Gärstad plant,
           heavy fuel oil       coal, rubber, biomass     light fuel oil

1               61                   107                        0
2              129                   131                      126
3               61                   133                        0
4              129                   188                      126
5               61                   152                        0
6              129                   243                      126

Table 6. Carbon dioxide emissions.

           Electricity production    CO2 crediting   CO2 emissions from   CO2 emissions from power     Comparable CO2 emissions
           at Tekniska Verken's      (kton/year)     local plants         used in the electric boiler  between the scenarios
Scenario   plants (GWh/year)                         (kton/year)          (kton/year)                  (kton/year)

1              141                      135              216                   46                          127
2              287                      275              293                    1                           19
3              162                      155              208                   46                           99
4              321                      307              296                    1                          −10
5              178                      170              204                   46                           80
6              350                      335              303                    1                          −31

The environmental effects are studied with regard to carbon dioxide emissions. Table 6 shows the amount of emissions for the different scenarios. The scenarios have different emissions of carbon dioxide from heat and electricity production at the local plants. However, different amounts of electricity are produced at Tekniska Verken’s plants (the power produced at Tekniska Verken’s hydro power stations is not included, since it is the same in all scenarios, 56 GWh per year) and the electric boiler is used to a different extent in the different scenarios. These variations have to be taken into consideration. The consequences for the external power system are assumed to fall upon coal condensing power plants, since these are the current marginal power producers in the European electricity system [19]; the coal condensing plants affected are assumed to have an efficiency of 35%, which results in carbon dioxide emissions of 960 kg/MWh. The carbon dioxide emissions from the electricity used in the electric boiler are added to the local carbon dioxide emissions, and the electricity produced at the local plants is credited against the local amount of carbon dioxide emissions. The comparable amount of carbon dioxide emissions can be seen in the sixth column of Table 6.
Carbon dioxide emissions decrease with a decrease in the amount of waste incinerated at the Gärstad plant. This is because waste is mainly, but not fully, replaced by biomass fuel with zero net emissions of carbon dioxide. The comparable carbon dioxide emissions decrease with increased electricity production at Tekniska Verken’s plants. This is due to the method of comparison, in which electricity from outside suppliers is produced with a fuel with high carbon dioxide emissions and in condensing plants that have a lower efficiency than the CHP plants that Tekniska Verken operates. The use of electric boilers for district heating production also has a large impact on carbon dioxide emissions when compared in this manner.
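A minimal sketch of this emission bookkeeping in Python (our own code; only the figures and the 960 kg CO2/MWh marginal coal-condensing assumption come from the paper, and that value is itself consistent with the 335 kg CO2/MWh of fuel quoted earlier for coal at 35% efficiency, since 335/0.35 ≈ 957):

MARGINAL_FACTOR = 0.96  # kton CO2 per GWh of electricity (960 kg/MWh, coal condensing)

def comparable_co2(local_emissions_kton, boiler_electricity_gwh, local_generation_gwh):
    # Local plant emissions, plus the emissions attributed to the electricity used
    # in the electric boiler, minus the credit for locally generated electricity
    # that is assumed to displace coal condensing power.
    boiler_emissions = boiler_electricity_gwh * MARGINAL_FACTOR
    credit = local_generation_gwh * MARGINAL_FACTOR
    return local_emissions_kton + boiler_emissions - credit

# Scenario 1: 216 kton/year from the local plants (Table 6), an electric boiler
# delivering about 48 GWh/year of heat (Table 4, here taken as roughly equal to
# the electricity it consumes), and 141 GWh/year of locally generated electricity.
print(round(comparable_co2(216, 48, 141)))  # about 127 kton/year, matching Table 6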

CONCLUDING DISCUSSION

Today, with the present tax system, it is profitable for an energy company to use waste as a fuel
for district heating purposes. There exists a double incentive in the tax system due to the landfill
tax, presently 288 SEK/ton, and the favourable taxation in the energy sector, which treats waste as biomass fuel. The question of introducing a tax on waste incineration has been raised, and it is likely that this will happen in the future. This would make other waste treatment methods more competitive compared to incineration, and it would also be in line with the waste hierarchy in the European Union, which states that material recovery of waste is preferred over energy recovery.
Combined heat and power is regarded as an efficient way to use resources and a way to decrease carbon dioxide emissions. However, it is not favoured in the Swedish tax system, since fuel producing heat is taxed if the heat is used for district heating but not if the heat is wasted in a condensing plant. There is a risk that waste incineration removes the heat sink for combined heat and power production, since it is complicated to produce electricity with waste as fuel. This risk should be noted in the present situation, with many plans for investment in waste incineration facilities. In the results of this study, it can be seen that with lower amounts of heat production from waste incineration, power production in the system increases, since it is possible for the central CHP plant to operate for more hours.
Carbon dioxide emissions from the European system decrease with an increased electricity
production at Tekniska Verken’s plants. This result is highly dependent on the assumption that
electricity from outside suppliers is produced at coal condensing plants but shows the importance
of efficient plants such as those with the cogeneration of power and heat. A higher electricity price
is needed to make the gas turbine in the Gärstad plant profitable to operate, and hence facilitate
CHP production at the plant. While the electricity price in Sweden is likely to increase in the future
when the European electricity market is completely deregulated, the policy instruments, such as
taxes and subsidies, should be reviewed in order to promote CHP in a better way.
A large increase in carbon dioxide emissions occurs in the scenarios with heat production from electric boilers. This shows the importance of using electricity for electricity-specific purposes and not for heating purposes.

ACKNOWLEDGMENT

The work has been carried out under the auspices of The Energy Systems Programme, which
is financed by the Swedish Foundation for Strategic Research, the Swedish Energy Agency and
Swedish industry. The authors are grateful to Mats Bladh, Dag Henning and Jörgen Sjödin for
valuable comments on the paper.

REFERENCES

1. European Commission, Environment 2010: Our future, Our Choice – The Sixth Environment Action
Programme, COM (2001) 31 final, European Commission, Brussels, 2001.
2. Ministry of Finance, Lag (1999:673) om skatt på avfall (Law (1999:673) on the waste tax, in Swedish),
Ministry of Finance, Stockholm, 1999.
3. Ministry of Finance, Lag (2001:960) om ändring i lagen (2001:673) om skatt på avfall (Law (2001:960)
on change in the law (2001:673) on the waste tax, in Swedish), Ministry of Finance, Stockholm, 2001.
4. Ministry of the Environment, Förordning (2000:512) om deponering av avfall (Ordinance (2000:512)
on landfill of waste, in Swedish), Ministry of the Environment, Stockholm, 2000.
5. The European Parliament and the Council of the European Union, Directive 2000/76/EC of the
European Parliament and of the Council of 4 December 2000 on the incineration of waste, The
European Parliament and the Council of the European Union, Brussels, 2000.
6. Ministry of Finance, Skatt på avfall idag – och i framtiden (Tax on waste today – and in the future, in
Swedish), SOU 2002:9, Fritze, Stockholm, 2002.
7. Swedish Association of Waste Management, Swedish Waste Management 2000, Swedish Association
of Waste Management, Malmö, 2000.
8. Ministry of the Environment, SFS 1997:185, Förordning (1997:185) om producentansvar för förpack-
ningar (Ordinance 1997:185 on producer‘s responsibily for packages, in Swedish), Ministry of the
Environment, Stockholm, 1997.


9. Swedish Association of Waste Management, Kapacitet för att ta hand om brännbart och organiskt avfall
(Capacity to take care of combustible and organic waste, in Swedish), RVF-report 00:13, Swedish
Association of Waste Management, Malmö, 2000.
10. Swedish Environmental Protection Agency, Naturvårdsverkets föreskrifter om hantering av brännbart
avfall NFS 2001:17 (The Swedish Environmental Protection Agency‘s directions on handling of
combustible waste NFS 2001:17, in Swedish), Swedish Environmental Protection Agency, Stockholm,
2001.
11. Swedish National Energy Administration (STEM), Energy in Sweden 2001, STEM, Eskilstuna, 2001.
12. The European Parliament and the Council of the European Union, Directive 2001/77/EC of the European
Parliament and of the Council of 27 September 2001 on the promotion of electricity produced from
renewable energy in the internal electricity market, The European Parliament and the Council of the
European Union, Brussels, 2001.
13. European Commission, A Community Strategy to promote Combined Heat and Power (CHP) and to
dismantle barrier to its development, COM (97) 514 final, European Commission, Brussels, 1997.
14. Swedish Association of Waste Management, Förbränning av avfall: En kunskapssammanställning om
dioxiner, (Waste-to-energy: An inventory and review about dioxins, in Swedish) RVF-report 01:13,
Swedish Association of Waste Management, Malmö, 2001.
15. Swedish National Board for Industrial and Technical Development (NUTEK), Avfall-94 (Waste-94, in
Swedish), R1994:17, NUTEK, Stockholm, 1994.
16. Henning, D., MODEST – An Energy-System Optimisation Model applicable to local utilities and
countries, Energy – the International Journal, Vol. 22, No. 12, pp. 1135–1150, 1997.
17. Süleyman D., Volvo Faces a Deregulated European Electricity Market, Dissertation 663, Linköping
Institute of Technology, Linköping, 2000.
18. Bartlett M., Holmgren K., Waste Incineration in Swedish Municipal Energy Systems, Arbetsnotat 19,
Program Energisystem, Linköping Institute of Technology, Linköping, 2001.
19. Swedish National Energy Administration (STEM), Beräkning av CO2 -reduktion och specifika kostnader
i det energipolitiska omställningsprogrammet (Calculation of CO2 -reduction and specific costs in the
energy-political programme for changeover, in Swedish), ER 15:1999, STEM, Eskilstuna, 1999.


Tidal power generation: a sustainable energy source?

Alain A. Joseph
Department of Electrical and Computer Engineering, Dalhousie University, Halifax, Nova Scotia, Canada

ABSTRACT: The Annapolis Tidal Generating Station is the first and only active tidal generat-
ing station in North America. Located in Eastern Canada on the Bay of Fundy, the station takes
advantage of the world’s highest tides (up to 9 m) to generate 32 GWh of electricity per year.
Structures used to direct tidal water flows for electricity generation influence biological activity
and alter water levels in tidal basins. Economic and ecological sustainability factors are discussed
for three tidal power stations: Annapolis, La Rance, and Kislaya Guba. At Annapolis, improvements
in construction and operation have reduced ecological disruption compared to previous tidal power
plants.
Annapolis has been in continuous operation for 17 years, producing clean, emission-free
electricity. The station demonstrates that tidal generation is both feasible and sustainable.

INTRODUCTION

Tidal power
Humans have always had a close relationship to the oceans. For the first time in our history we have
the ability to harness the incredible power of the oceans through tidal electricity generation. Tidal
power has been a reality since 1966 when the first large scale commercial 240 MW generating
station opened at La Rance, France. In the nearly forty years that have followed, only limited
advances in tidal power have been realized. Modest single turbine pilot generating stations have
been built in Kislaya Guba, Russia (1968), in JiangXia, China (1980), and in Annapolis Royal,
Canada (1984) [1, 2]. None of these stations exceeds 20 MW generating capacity. China also has
a limited number of smaller (<500 kW) stations.
The few existing tidal generating stations capture a tiny fraction of the available tidal energy
estimated at nearly 3 terawatts (TW) [3]. With increasing global demand for energy and an emphasis
on renewable energy sources tidal power has become a topic worthy of study and discussion. There
are many technical and environmental challenges to consider before advocating the widespread
implementation of tidal generation. This paper will examine issues surrounding the current state
of tidal electricity generation, the sustainability of this energy source, and the future of tidal power.
The Annapolis Royal Tidal Generating Station will be used as an example for discussion.

Defining sustainability
Before questioning the sustainability of various power generating regimes, we must discuss what is meant by the term “sustainability”. There are many interpretations of this concept; however, all agree that sustainability involves taking into account the long-term effects of present practices. A sustainable concept, project, action, object, or idea must endure without having detrimental effects. Some would argue that in order for something to be sustainable it must contribute rather than consume.
The concept of sustainability varies with the context of discussion. In the specific example of electricity generation, sustainability can be viewed in several ways.


Figure 1. Location of Annapolis Royal Tidal Generating Station (maps adapted from [5, 11]).

Nearly all electricity generating systems are based on power plant life-expectancies of at least 20 years, ranging up to 75 years for hydro power systems. In human terms, these are large time-spans: an entire career or an entire lifetime. In environmental terms, this represents only a brief period of activity. It is important that power generating systems endure long enough to pay back the costs of construction and the investment associated with such projects, but it is also important that power generating systems pay back the ecosystems from which they “borrow” energy. It is in this sense that many conventional energy systems such as thermal electricity generation fail to contribute, and technologies such as tidal, wind, solar, geothermal and wave power show the greatest promise.

THE ANNAPOLIS ROYAL TIDAL POWER PROJECT

The Bay of Fundy, on Canada’s Atlantic coast, has some of the highest tides in the world. The length and funnel shape of the bay channel tidal waters to heights of 16 m. In the 1970s, energy prices and concerns about oil reserves encouraged several studies [4, 5] into the potential to develop the Bay of
Fundy tidal resource for electricity generation. A detailed examination of the tidal cycles, geology,
biology, and economic implications of the project was produced and 39 potential generating sites
were identified [1]. Two large projects were examined in detail: a 3800 MW site (B9) on Cobequid
Bay and an 1100 MW site (A8) on the Cumberland Basin.
An existing tidal barrage in Annapolis Royal was selected as a pilot plant location to test the
Straflo Turbine, a cost saving design produced by the Escher-Wyss Corporation. Success with the
Straflo Turbine in Annapolis would encourage the larger B9 and A8 projects.
The Annapolis Royal location was ideal for testing tidal generation. In 1960, the Maritime
Marshland Rehabilitation Administration built a dam and sluiceway to control tides and allow
greater land use on the up-river side of the barrage. A relatively simple modification of the existing
barrage allowed a 7.6 m Straflo Turbine unit to be installed. The Annapolis turbine is the largest
Straflo turbine constructed to date [6]. The exact details of the construction and commissioning of the Annapolis facility have been documented by other authors [7, 8].
The Annapolis project has been considered a technical success. Lessons learned from La Rance
and Kislaya Guba were incorporated into the Annapolis design and helped to reduce construction
costs. The principal design variation is the type of turbine employed: the Straflo design is more
compact, reducing the size of the turbine powerhouse support structures and decreasing the depth
of excavation required for installation. Cage debris barriers were not installed, relying instead on
tidal flushing to clear the turbine passageway.
Some initial problems were encountered with the standstill seals of the original design. These failed immediately and, when replaced, failed again.

Figure 2. Cross section of the Annapolis powerhouse structure (Diagram Courtesy of NSPI).

The turbine’s hydrostatic seals function adequately at standstill. If major renovation of the turbine is required, stop logs are put in place and the unit is de-watered [9]. High humidity and salt levels corroded piping in the turbine’s salt water-cooling system; installation of fiberglass pipes solved this problem. The entire water-based cooling system was eventually eliminated, as it encouraged high humidity levels in the rotor/stator assembly and led to corrosion of uninsulated copper. The existing air-cooling system was found to be sufficient to maintain operating temperatures in the turbine and replaced the water-cooling system. Bare copper plating, used for its heat dispersion properties in the generator setup, was later coated with epoxy to prevent corrosion and arcing in the system. It was also learned that the hydrostatic seals require cleaning every 3–7 years to remove manganese deposited by seawater. With these minor modifications and adjustments, the plant has gone on to operate without problems for over 17 years.
The Annapolis Royal Tidal Generating Station feeds into the regional power grid via a 69 kV
transmission line. The Maritime Integrated System (MIS) distributes power over a large region
and is capable of absorbing tide-generated power even at off-peak times. In this manner Annapolis
offsets the fossil fuel requirements of local thermal generating stations by approximately 50,000
barrels of oil per year.
Operation of the Annapolis station can be conducted remotely via the Nova Scotia Power Incorporated (NSPI) Ragged Lake Control Center over 100 km away. Computer software linked to the system controls allows plant parameters to be monitored and adjusted from a small laptop computer. During normal daytime working hours an operator monitors the plant. The operator programs parameters for upcoming tidal cycles, taking into consideration variables such as precipitation, river levels, barometric pressure, and other factors which have been observed to affect tidal variation. The replacement of a computerized system with a human operator in 1991 has resulted in production gains of up to 10%, and human operators have continued to improve. In March 2002 the plant produced a record one-day output of 132 MWh.
The Annapolis station also houses the region’s Tourist Information Bureau and an interpretive center with displays on the local ecosystem, tides, the tidal basin, and the inner workings of the generating station. Anyone who has experienced the noise and dust of a coal-fired thermal station would remark on the cleanliness and silence of tidal power.
Financial success was never a consideration for Annapolis Royal. The purpose of the site was to test the potential of the Straflo turbine for low-head applications, to promote advances in tidal power technology, and to demonstrate tidal power to the public before implementing much larger tidal schemes. The facility cost $60 million (CDN) to construct, or $3000/kW, very high compared to other generating stations in the region. A typical thermal plant costs closer to $400/kW.


In order to reduce the per kilowatt costs of tidal power most planners develop large schemes and
the economies of scale greatly reduce per unit costs. Nevertheless, the financial break-even point
on the large Fundy A8 and B9 projects was estimated at 35 years [5].
The useful life of tidal barrage generating stations is estimated at 75 years, and the durability of La Rance and Kislaya Guba and the trouble-free operation of Annapolis suggest this goal is attainable. The
financial break-even point for tidal power systems is much later than for thermal technologies, and
is comparable to nuclear power projects. Despite many planned projects, there are few commercial
tidal plants due to prohibitively high start-up costs. Tidal power remains a financially risky option
so long as cheap fossil fuels are available.
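Taken together, the figures quoted above imply an installed capacity of roughly $60,000,000 / $3000 per kW ≈ 20,000 kW (20 MW); with an annual output of 32 GWh, this corresponds to a capacity factor of about 32,000 MWh / (20 MW × 8760 h) ≈ 18%.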

ENVIRONMENTAL CONSIDERATIONS FOR TIDAL POWER

While large scale tidal power is judged to be marginally profitable under the right conditions, the environmental costs and benefits are largely unknown. Tidal power displaces fossil fuel consumption, thereby reducing CO2 emissions. Tidal power is indefinitely renewable, with benefits comparable to those from wind and solar systems. Tidal systems are also less likely to succumb to the sedimentation problems which reduce output capacity in hydro dams: the flushing action of tides keeps flow-ways clear. Construction of tidal barrages leads to potential problems with fish and bird migrations, reduces inter-tidal habitat, and will alter species composition in the local ecosystem. With fewer than 10 well-documented tidal stations, only one full scale tidal plant (La Rance), and fewer than 40 years of experience with tidal systems, our understanding of the environmental consequences of tidal power is limited to specific case studies.

La Rance (France)
During construction of La Rance, cofferdams sealed the head pond from the sea for a period of three years. This effectively destroyed the ecosystem in the head pond. It was ten years before the ecosystem stabilized and, even then, what presently exists is a substantially different ecosystem from the one found prior to construction of the barrage [10].

Kislaya Guba (Russia)
Experiences from Kislaya Guba show considerable agreement with observations from La Rance. In this case, the head pond basin was sealed for four years, destroying the basin ecosystem. Stabilization is reported to have occurred after 5 years. However, plant operation was altered and finally ceased in 1981. By 1983 de-oxygenation and other effects had eliminated 8 of 11 fish species and led to the loss of valuable fisheries species such as scallops, bivalves, and salmon. The station returned to normal operation in 1983 and ecosystem regeneration has occurred [10].

Annapolis Royal (Canada)
Annapolis was constructed with minimal ecological disturbance. The principal environmental concerns at Annapolis are the effects on nearby agricultural land, shoreline erosion, flooding, sedimentation, and fish mortality [11]. Groundwater and soil monitoring has established that salinity levels upstream of the barrage have not increased. Shoreline erosion did occur at advanced rates immediately following the plant opening; however, these rates have stabilized to the levels found prior to development. Numerous fish mortality studies have been conducted at the site by the Acadia Center for Estuarine Research. The findings of these studies remain controversial due to limitations in accurately assessing fish mortality. The present data suggest that mortality for fish that pass through the turbine is higher than originally anticipated. There is also evidence that the fish-way installed to divert fish away from the turbine has had limited success.


Measured turbine-induced mortality approaches 20% for some large species (e.g. Alosa sapidissima); however, populations of most species found in the basin remain stable [12], indicating that fish populations have been able to compensate.

CONCLUDING ANALYSIS

What are we attempting to sustain? If the provision of clean renewable energy is our ultimate goal,
tidal power is one of the options deserving serious consideration. There is a great deal of room for
improvement and innovation in the technologies currently used in tidal power generation. Attempts
to harness tidal energy have been very limited to date, and as with any emerging technology, lessons
must be learned before techniques can be optimized.
A number of interesting innovations for exploiting tidal power have yet to be fully tested. Free-
standing tidal turbines (not unlike wind turbine systems) have excellent potential, as do modular
“tidal fences” employing several vertical-axis turbines. The principles of tidal power are sound, but
the techniques used in power capture need to evolve.
There is no debating that tidal barrage systems such as those presently found at La Rance, Kislaya Guba, JiangXia, and Annapolis alter the surrounding ecosystem. These effects do not appear to be out of proportion to the benefits of offsetting fossil fuel consumption. Even the most dedicated marine biologists studying the Annapolis Royal ecosystem agree that tidal power is relatively benign compared to thermal electricity generation, which produces acid rains devastating to local fish populations. Unfortunately, measuring the effects of airborne pollutants is much more difficult than observing the influence of a tidal station. The true environmental costs of many energy sources such as nuclear, hydro, and thermal remain difficult to measure. Although the long-term environmental consequences of tidal power have yet to be fully understood, it seems evident that both ecosystems and communities would rather endure a tidal power plant than a nuclear or thermal power facility.
While the economics of tidal power generation are not yet encouraging, the inevitable decline of
fossil fuel availability suggests that tidal power will have a bright future. There are literally dozens
of well-planned tidal projects around the globe awaiting investment. The “tide” is rising and where
it will take us remains to be seen.

ACKNOWLEDGMENTS

The author wishes to thank Stuart MacDonald and Tom Foley with Nova Scotia Power Incorpo-
rated (NSPI) for providing technical information on the Annapolis Royal Generating Station, and
Dr. Graham Daborn and Jamie Gibson of the Acadia Center for Estuarine Research in Wolfville,
Nova Scotia, for providing environmental information on the Bay of Fundy eco-region.

REFERENCES

1. O’Kelly, F., Harnessing the Ocean’s Energy: Are We Ready for a Gift from the Sea? Hydro Review,
Vol. 10, No. 4, pp. 24–28, July 1991.
2. Bershtein, L.B., editor, Tidal plants: in operation, under design, studied and proposed, Appendix in
Tidal Power Plants, Korea Ocean Research and Development Institute, pp. 418–419, 1996.
3. Daborn, G.R., Tidal Power from the Bay of Fundy, Acadia University Institute, 1985.
4. Humphreys, E.W., Kirkpatrick, L.F., O’Connor, A.J., Collin, A.E., Cameron, R.B., and Thompson, E.D.,
Board Report: Re-assessment of Fundy Tidal Power. Report of the Bay of Fundy Tidal Power Review
Board and Management Committee, 57pp., Nov. 1977.
5. Clark, R.H., Committee Report: Re-assessment of Fundy Tidal Power. Report of the Bay of Fundy
Tidal Power Review Board and Management Committee, 507pp., Nov. 1977.
6. Douma, A., and Stewart, G.D., Annapolis Straflo Turbine will demonstrate Bay of Fundy Tidal Power
Concept, Hydro Review, pp. 1–8, 1990.


7. Rice, R.G., and Baker, G.C., The Annapolis Experience. Institute of Electrical and Electronics Engineers
(IEEE) publication, pp. 391–396, CH2498-4/87/0000-391, March 1987.
8. Delory, R.P., and Rice, R.G., Fundy Power, Paper presented to the Canadian Institute of Marine
Engineers, pp. 1–16, May 1987.
9. Rice, R.G., and Baker, G.C., Annapolis: The Straflo Turbine and Other Operating Experiences. Internal
Nova Scotia Power Report, 16pp., 1992.
10. Bershtein, L.B., editor, Environmental Impacts of Tidal Power Plants, Chapter 21 in Tidal Power Plants,
Korea Ocean Research and Development Institute, pp.383–395, 1996.
11. Daborn, G.R., and Dadswell, M.J., Natural and Anthropogenic Changes in the Bay of Fundy-Gulf of
Maine-Georges Bank System, pp. 547–560 in Natural and Man Made Hazards, Proceedings of the
International Symposium, El-Sabh, M.I, and Murty, T.S., eds. Riedel Publishing Co., 1986.
12. Gibson, J. (personal communication), Ph.D. candidate/Research Associate, Acadia Center for Estuarine
Research, Wolfville, Nova Scotia, Canada.


Modelling and simulation of energy, water and environment systems


Modelling of energy and environmental costs for sustainability of urban areas

Alfonso Aranda, Ignacio Zabalza∗ & Sabina Scarpellini
CIRCE – Centre of Research for Power Plant Efficiency-CPS – University of Zaragoza, Spain
∗ Corresponding author. E-mail: izabal@posta.unizar.es

ABSTRACT: The paper’s principal aim is to present the results achieved by the practical application of energy modelling to urban areas, based on two interrelated concepts: energy costs and environmental costs. The analysis has been carried out in three standard Municipalities located in a Mediterranean zone (Spain), selected on the grounds of their different sizes and socio-economic activities, in order to facilitate the extrapolation of the results.
The energy flows of the analysed urban areas have been quantified and classified. In addition, energy and environmental costs have been aggregated for each productive sector. Using the methodology proposed in this paper, innovative solutions could be specially designed for different areas in order to ensure the sustainable development of urban areas. Finally, the basis for changing the present development model in the Municipalities is set out by means of the application of the sustainability principles [1] established in Agenda 21 [2].

INTRODUCTION

At present, urban areas concentrate a large number of activities and a large population. This implies the importation and use of massive energy resources. The majority of the imported resources are not renewable, and a huge quantity of waste is generated. Large imbalances are caused by the inflows and outflows of materials, seriously altering the ecosystems from which resources are taken. Due to the exploitation of resources and the discharge of waste, these ecosystems are doubly damaged.
Approximately 20% of the world population, most of them inhabitants of urban areas in developed regions, consume 80% of the natural resources. Forecasts indicate that in 2025, 75% of the population will live in urban areas, which would imply an increase of 25% in comparison with the current situation.
Cities have a complex daily operation. They use large quantities of energy and materials in order to provide services to the citizens and to create the necessary infrastructures. The question is: will the future increase of energy and materials consumption be sustainable as the population grows in urban areas?
During the last decade, various authors have developed different methodologies for the calculation and analysis of the energy and environmental costs of different types of systems. For the study of these costs, an urban area can be pictured as a “black box” with big inflows and outflows of materials and energy [3] that supply the complex city structure.
The environmental costs can be directly analysed from the calculation of energy costs. From these costs it is possible to estimate the cities’ contribution to the greenhouse effect, because all the end uses of energy cause gas emissions and thermal pollution. This environmental cost analysis [4], which considers the transformation of the energy consumed into CO2, allows the balance between the CO2 emitted and the CO2 absorbed in a specific territory to be defined.

∗ Corresponding author. e-mail: izabal@posta.unizar.es

221

© 2004 by Taylor & Francis Group, LLC


chap-23 19/11/2003 14: 48 page 222

The pollution related to a city can be an important indicator for determining the territory that the city needs in order to assimilate its pollution. A city's environmental costs can thus be measured through territory occupation [5]. This ratio has to be calculated considering the potential occupation [6] of the territory necessary for assimilating the pollution, not only the urban area itself. Another way of calculating the environmental costs [7] is the use of "energy units", obtained by measuring the energy necessary for separating and neutralising the emissions.
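To make the CO2 balance idea concrete, a minimal sketch is given below (in Python); the emission factors and the per-hectare uptake rate are hypothetical placeholders introduced only for illustration, not values taken from [4–7] or from the project.

```python
# Minimal sketch of a CO2 emission/absorption balance for a territory.
# All numbers below are hypothetical placeholders, not data from the study.
EMISSION_FACTORS = {          # t CO2 per toe of end-use energy (assumed values)
    "electricity": 1.4,       # depends on the generation mix
    "natural_gas": 2.3,
    "oil_products": 3.1,
}

def co2_balance(energy_use_toe, forest_area_ha, uptake_t_per_ha=5.0):
    """Return (emissions, uptake, net) in t CO2/year for a municipality."""
    emissions = sum(EMISSION_FACTORS[fuel] * toe for fuel, toe in energy_use_toe.items())
    uptake = forest_area_ha * uptake_t_per_ha        # CO2 fixed by the surrounding territory
    return emissions, uptake, emissions - uptake

# Example: a small town consuming about 1 toe/inhabitant, split across fuels (invented split).
emissions, uptake, net = co2_balance(
    {"electricity": 400, "natural_gas": 300, "oil_products": 1300}, forest_area_ha=900)
print(f"Emissions {emissions:.0f} t, uptake {uptake:.0f} t, net balance {net:.0f} t CO2/year")
```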
The present study has been carried out in the framework of the "Costurbis" project (meaning "the costs of urban areas"), whose principal aim is to quantify the energy inflow of cities and its environmental impact. The project proposes a new vision of energy flows and analyses the real causes of massive energy consumption. Different urban areas are compared, identifying possible actions for their future sustainability. The control of energy and environmental costs in urban areas [8–9] therefore represents an essential instrument for achieving correct resource planning.

METHODOLOGY

The methodology can be summarised in the following points:

1. Selection of the urban areas for the analysis and definition of the main targeted sectors.
2. Summary, comparison and analysis of information and energy data.
3. Calculation of the energy flows and the energy and environmental costs.
4. Energy modelling by means of Savings-Investment curves [10].
5. Improvement proposals, reflections and conclusions.
The size, the Mediterranean weather conditions and the socio-economic features have been considered in the selection of the urban areas.
The analysed Municipalities have been grouped into three basic types: (1) small towns (1,000–2,000 inhabitants) located in rural areas, whose principal activities are agriculture and livestock breeding; (2) medium towns (10,000–18,000 inhabitants), whose principal sectors are industry, services, and agriculture and livestock breeding; and (3) large towns (30,000–50,000 inhabitants), with a socio-economic structure characterised by the primacy of the service sector and a strong industrial sector.
The residential, services and road transport sectors have mainly been analysed. In the information collection phase, data related to energy consumption and equipment, always referring to the year 2000, have been gathered. Likewise, energy audits of several centres of the services sector (including City Council buildings and public lighting) have been carried out. In the residential and transport sectors, the population provided useful energy information through a survey distributed, by random sampling, through several high schools. The survey asked about energy use habits, the main energy consumption and equipment of the houses, and the usual modes of transport. The data collected were then extrapolated to the total population.
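As an illustration of this extrapolation step, the following minimal sketch (assuming simple random sampling and invented survey figures) scales the per-home sample mean to the whole municipality and attaches a plain standard-error margin; it is not the error treatment actually used in the project.

```python
# Extrapolating per-home survey results to the whole municipality (hypothetical data).
import statistics

sample_kwh = [2900, 3300, 2750, 3600, 3100, 2850, 3400, 3050]   # electricity use per surveyed home
total_homes = 4200                                               # homes in the municipality (assumed)

mean_kwh = statistics.mean(sample_kwh)
std_err = statistics.stdev(sample_kwh) / len(sample_kwh) ** 0.5  # standard error of the mean

town_total_mwh = mean_kwh * total_homes / 1000
margin_mwh = 1.96 * std_err * total_homes / 1000                 # ~95% confidence half-width

print(f"Estimated municipal consumption: {town_total_mwh:.0f} MWh ± {margin_mwh:.0f} MWh")
```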
Data on global consumption and the associated costs for the towns, provided by the gas and electricity supply companies, have also been used. Processing all the data collected, the global energy inflows and outflows and the associated environmental impact [11] have been calculated for the towns. After evaluating the energy flows crossing the urban areas and detecting the energy inefficiencies, proposals to diminish the energy and environmental costs have been estimated, in order to obtain an energy model of the Municipality through the Savings-Investment law. This law allows the money saving that a Municipality can obtain from the investments made to be estimated.

Sources of error
In the development of this study, various Local Administrations and High Schools were involved in the polling phase and in the collection of questionnaires. The results of this collaboration were rather insufficient: more than 1,100 questionnaires were distributed, but only 19% were collected and


considered valid for analysing and extrapolating the results. This circumstance demonstrates the limited public awareness of environmental issues at different levels. These factors caused a larger margin of error in the extrapolation of results.

RESULTS

Energy flows in residential sector


Housing energy consumption depends on the constructive features of the dwelling, its energy systems and equipment, and the habits of its inhabitants. In the small towns, 64% of the population live in single-family houses, which generally implies a larger housing size. In the medium and large towns this percentage decreases to 28% and 4% respectively.
The average occupation density of homes in medium and large towns is 3.2–3.3 people/home, while in the small towns of rural areas the density falls to 2.8 people/home, which contrasts with the larger housing size. This could be caused by the social and labour changes experienced in rural areas in recent years, whose principal effect was the separation of rural activities from private houses, with the consequent increase in house area. Moreover, the average age of the houses is greater in the small towns, which implies that the window insulation is less efficient. These circumstances produce a greater heating consumption in the small towns, heating being the most important energy consumption in private houses.
The predominance of single-family houses and cottages in the small towns means that almost 100% of the heating installations are individual. These installations are obviously less efficient than centralised ones. The share of single-family houses is inversely proportional to the size of the Municipality, while the share of centralised installations in buildings is directly proportional to the city size. In the medium towns centralised installations represent 9–10%, while in the large towns they reach 50% of the total.
The availability of natural gas (which implies cleaner emissions) increases in the larger Municipalities, mainly due to the distribution networks existing at present. Only 50% of the homes that use electricity for heating have storage heaters with a night tariff, while the rest still use inefficient conventional electric stoves. Moreover, and in spite of expectations, firewood is rarely used for heating (3%) in the analysed rural areas.
As regards home lighting, low-consumption lamps and fluorescent tubes are used in only 45–55% of the houses.
The annual energy consumption per capita in the analysed urban areas (Figure 1) is slightly higher than the Spanish annual average [12] of 0.21 toe/inhabitant. Because of the different home occupation densities in the analysed towns, it is also interesting to consider the annual energy consumption per home, which is likewise higher than the national annual average of 0.71 toe/home.
In the residential sector, thermal consumption is predominant, representing 66% of the total energy consumption of this sector.
Figure 1. Energy consumption in the residential sector: annual consumption per home (toe/home) and per inhabitant (toe/inhab) for the medium, large and small towns.


Table 1. Annual environmental impacts of the residential sector.

Size          SO2 (kg/inhab)   NOx (kg/inhab)   CO (kg/inhab)   HC (kg/inhab)
Large town    2947.3           1396.0           95.8            142.1
Medium town   3026.6           1433.6           98.1            145.9
Small town    3718.3           1759.6           118.8           179.1

Table 2. Mobility of the population with their own vehicle and vehicle density.

Size          Annual mobility (km/inhab)   Vehicle density (vehicles/1,000 inhab)
Large town    9,250                        564
Medium town   7,750                        542
Small town    8,500                        534

The annual electric consumption ranges from 3,065 kWh/home in the small towns to 3,835 kWh/home in the large towns. On the other hand, the annual thermal consumption per capita is 0.18 toe in the medium towns, 0.22 toe in the large towns and 0.33 toe in the small towns. Both results are a direct consequence of the lower occupancy of the houses in the small towns.
The annual energy cost per capita in the residential sector reaches €240–420. Approximately 59% of this cost is associated with electricity consumption.
In the medium and large towns, the energy consumption of the residential sector accounts for 44–48% of the polluting emissions of the town. This percentage reaches 70% in the small towns, where the residential sector is the most important one and the services sector is irrelevant. Table 1 shows the polluting emissions per capita associated with the residential sector.

Energy flows in transport sector


Increasingly powerful cars, together with the growing mobility of people, have cancelled out the improvements made in car efficiency.
One consequence of socio-economic development is the greater availability of private vehicles, which implies an increase in the average number of vehicles per capita (Table 2). The average vehicle density in the analysed urban areas exceeds by 21% the European Union ratio of 451 vehicles/1,000 inhabitants. In the urban areas analysed, the citizens' mobility [13–14] in their own vehicles is above the national ratio, estimated at 6,000 kilometres travelled per inhabitant and year.
Most of the transport energy consumption is generated within the urban areas and in the large towns, as a direct consequence of people's daily journeys to work. In contrast, in the small towns the journeys are mainly inter-urban trips, while in the large towns inter-urban journeys to leisure areas at the weekend are the main cause of consumption. Moreover, energy consumption has increased in the rural areas due to the intensive use of cars and the increasing use of tractors and other agricultural machinery.
Public transport is six times more efficient than private transport, but in the analysed towns some deficiencies were detected in the public transport service, which lowers its usage rates.
In a typical home, fuel consumption for transport is the largest energy consumption item. In the medium towns this consumption represents 65% of the total energy consumption of the home, while in the large and small towns these percentages reach 67% and 55% respectively.
The annual energy cost per capita for transport fuel is approximately €500 in the small and medium towns. In the large towns this cost is slightly higher, reaching €600. Table 3 presents the polluting emissions derived from transport.


Table 3. Annual environmental impacts of the transport sector according to type of fuel.

Size          Fuel       SO2 (kg/inhab)   NOx (kg/inhab)   Particles (kg/inhab)   CO (kg/inhab)   HC (kg/inhab)
Large town    Gasoline   0.065            4.874            0.662                  116.973         9.748
              Gasoil     9.546            9.069            6.348                  3.150           18.138
Medium town   Gasoline   0.066            5.352            0.357                  128.450         10.704
              Gasoil     5.581            5.302            3.665                  1.841           10.604
Small town    Gasoline   0.000            4.910            0.431                  119.380         9.991
              Gasoil     6.632            6.288            4.393                  2.153           12.489

Table 4. Annual environmental impacts of the public lighting.

Size          Consumption (kWh/inhab)   SO2 (kg/inhab)   NOx (kg/inhab)   CO (kg/inhab)   HC (kg/inhab)
Large town    132                       0.300            0.142            0.010           0.014
Medium town   101                       0.230            0.109            0.007           0.011
Small town    139                       0.317            0.150            0.010           0.016

Energy flows in municipal sector


Although the municipal sector represents only a small part of the total energy consumption of the Municipalities, a local policy that takes energy optimisation criteria into account can have a "multiplier effect" and encourage other users to manage energy adequately.
Public lighting in urban areas represents almost 50% of the electricity cost of the municipal sector. Moreover, on average 40% of the public lighting is wasted by emitting light towards the atmosphere (light pollution). The annual electricity consumption per capita for public lighting is in the range of 100–140 kWh, and the associated annual economic cost is €7–12/inhabitant, both depending on the urban structure of the Municipality. The implementation of lighting-level regulation systems [15] and the improvement of installation maintenance would represent an energy saving of around 20–30%.

Global energy flows


The annual energy consumption per capita in the urban areas is around 1 toe per inhabitant, of which 20% corresponds to electricity consumption. In the small towns the ratio is 1.13 toe/inhabitant, due to the greater thermal consumption caused by the lower occupancy of the houses. In the medium towns the ratio is 1 toe/inhabitant, and in the large towns it is about 1.23 toe/inhabitant due to the greater consumption of the urban transport sector.
Regarding the participation of the different sectors in the total energy consumption of the towns, the transport sector is the most important energy consumer, requiring 52% of the total energy. It is followed by the residential sector with an average share of around 32%, the commercial and services sector with 14%, and finally the municipal sector with 2% of the total energy consumption. The annual energy cost per capita (Figure 2) shows few variations between the urban areas, always within a range of €930–1,110.
The annual electricity cost per capita is similar in the urban areas analysed, about €300, while the annual thermal cost per capita is between €630 and €780, which represents approximately 70% of the total energy cost of the town.
Table 5 shows the polluting emissions per capita associated with the total energy consumption of the urban areas.

Figure 2. Annual energy cost per capita (€/inhab) for the medium, large and small towns.

Table 5. Annual environmental impacts for the towns.

Size          SO2 (t/inhab)   NOx (t/inhab)   CO (t/inhab)   HC (t/inhab)
Large town    6.63            3.15            0.33           0.35
Medium town   6.26            2.97            0.33           0.32
Small town    5.16            2.45            0.29           0.27

Energy modelling: analysis of inefficiencies and improvements


Starting from the identification and the quantification of energy saving actions that are necessary
for rectifying the detected inefficiencies, an energy model for each Municipality can be obtained
by means of the characterisation of the Municipality’s Savings-Investment curves. The principal
saving proposals quantified in this model are:
(a) Automation of thermal equipment and individual management of consumption.
(b) Promotion of solar thermal energy as a support to hot water systems.
(c) Promotion of more efficient heating technologies.
(d) Improvement of the thermal insulation of equipment and buildings.
(e) Increase of the efficiency of refrigeration equipment.
(f) Increase of the efficiency of lighting systems.
(g) Promotion of public and clean transport in the urban areas.
(h) Optimisation of the municipal electricity bills.
In order to quantify the modelling, two different scenarios have been taken into account:
Scenario A: the possibility of carrying out the proposed actions with the present level of awareness of the population.
Scenario B: the additional potential obtained by implementing the above-mentioned actions is evaluated taking into account the hypothetical results reached by means of an awareness campaign aimed at promoting rational and efficient energy use among the population.
The Savings-Investment curves have been obtained by ranking the energy saving proposals according to their economic profitability, from top to bottom. The ratio of total saving to total investment has been taken as the reference for the profitability of each action; it allows all the different types of investment (annual or multi-annual) to be evaluated with respect to the total saving obtained over the whole investment lifetime, independently of time. After ranking the proposals, the cumulative investments and savings are calculated, from the most profitable proposal to the least, and the Savings-Investment curve is obtained by plotting these points:

A(I) = AM · (1 − e^(−ε·I))    (1)


Figure 3. Savings-Investment curves for the towns in Scenario A (saving vs. investment, both in €/inhabitant, for the small, medium and large towns).

Figure 4. Savings-Investment curves for the towns in Scenario B (saving vs. investment, both in €/inhabitant, for the small, medium and large towns).

Table 6. Towns energy modelling.

              Scenario A                            Scenario B
Size          Maximum saving AM   Saturation       Maximum saving AM   Saturation      Awareness campaign
              (€/inhabitant)      coefficient ε    (€/inhabitant)      coefficient ε   cost (€/inhabitant)
Large town    511                 0.0292           787                 0.016           3.69
Medium town   443                 0.022            782                 0.0095          7.33
Small town    263                 0.059            1273                0.00563         23.04

In equation (1), A(I) is the total saving obtained over the total lifetime of the investment (€), I the total investment (€), AM the maximum reachable saving (€), and ε the saturation coefficient of the Savings-Investment curve (dimensionless).
The most profitable proposals appear at the beginning of the curve, and it can be observed that some small investments generate large savings. As the investment increases the saving grows too, but more moderately. The curve reaches a saturation point beyond a certain investment level, after which further investment no longer increases the saving. This asymptotic limit coincides with the Municipality's maximum reachable saving (AM).
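The construction and fitting of such a curve can be sketched in a few lines; the measures below are invented for illustration, and the use of scipy's curve_fit is an assumption of this sketch, not part of the original methodology.

```python
# Minimal sketch: building a Savings-Investment curve from a list of saving measures
# and fitting A(I) = AM * (1 - exp(-eps * I)) to the cumulative points.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measures: (total saving over the investment lifetime, investment), both in EUR/inhab.
measures = [(40.0, 5.0), (90.0, 30.0), (25.0, 20.0), (60.0, 70.0), (15.0, 50.0)]

# 1. Rank proposals by profitability (total saving / total investment), from top to bottom.
measures.sort(key=lambda m: m[0] / m[1], reverse=True)

# 2. Accumulate investments and savings from the most to the least profitable proposal.
savings, investments = zip(*measures)
I = np.cumsum(investments)
A = np.cumsum(savings)

# 3. Fit the saturation model of equation (1).
def model(i, a_m, eps):
    return a_m * (1.0 - np.exp(-eps * i))

(a_m, eps), _ = curve_fit(model, I, A, p0=[A[-1], 0.01])
print(f"Maximum reachable saving AM = {a_m:.0f} EUR/inhab, saturation coefficient eps = {eps:.4f}")
```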
The increase in energy saving obtained by means of an awareness campaign can reach 54% in the large towns (Table 6). This increase would reach 76% in the medium towns and would


reach 383% in the small towns. This means that raising the awareness of the population achieves better results in the small towns, although there the per capita costs of the campaigns are higher because of the minimum fixed costs of carrying out a campaign.
The saturation coefficient indicates how quickly the curve approaches its saturation level, i.e. the maximum reachable saving AM. In Scenario A this coefficient is inversely proportional to the size of the municipality, due to the profitability of the actions carried out in each case. However, this changes radically when an awareness campaign is carried out: all three coefficients are reduced because of the greater investment per capita. The reduction is especially important in the small towns, where the per capita dissemination cost would be higher, but the energy saving achieved increases.

CONCLUSIONS

Analysing the results, we can conclude that the present situation of the Municipalities is very far from the ideal principles of sustainable energy use. However, implementing strategies oriented towards the development of new sustainability criteria can reverse this situation. In this sense, implementing the principles of Agenda 21 is absolutely necessary:
(a) Sustainable local policies have to be carried out by the Municipalities in order to obtain a multiplying effect among the population. The implementation of awareness campaigns, together with an environmental education programme at schools, could cause an important reduction of energy consumption in the Municipalities.
(b) Municipalities have to diversify energy consumption and exploit endogenous resources in order to improve air quality and reduce greenhouse gases. At present there is a deeply ingrained use of conventional energy sources, based on a lack of information about the advantages that the use of renewable energies could represent. In this context, the promulgation of a Local Ordinance for the incorporation of solar thermal energy systems in buildings and the promotion of biomass (firewood) as a fuel for heating are urgent.
(c) The massive use of private cars with a low occupancy rate has to be reduced, increasing the use of more efficient means of transport. It is necessary to minimise urban mobility needs in the large towns by fostering efficient urban planning and compact urban structures. Likewise, it is essential to promote the use of public transport in order to reach higher usage rates, similar to those obtained in the countries of northern and central Europe.
(d) New actions are urgently needed to promote recycling among the population, as well as the re-utilisation of materials and exhaustive waste sorting.
In summary, the analysis detected an urgent need to strengthen the Municipalities' competencies in energy matters, also at the European level. The aim is to promote renewable energies and environmental protection by means of implementing the Agenda 21 principles.

ACKNOWLEDGEMENTS

This paper has been developed from the results obtained in the framework of the biannual "Costurbis" project financed by the Spanish Ministry for Science and Technology. We would like to thank the Municipalities of Huesca, Alcañiz and Muel for their collaboration in the compilation of information and energy data. We also wish to mention the collaboration of the students and High Schools involved in the polling phase.

REFERENCES

1. Ulrich Von Weizsäcker, E., Lovins, A.B. and Lovins L.H. Factor 4 Duplicar el bienestar con la mitad
de recursos naturales, Galaxia Gutenberg – Círculo de Lectores, 1997.


2. Agenda 21, the Rio Declaration on Environment and Development, and the Statement of principles for
the Sustainable Management of Forests adopted by more than 178 Governments at the United Nations
Conference on Environment and Development (UNCED) held in Rio de Janeiro, Brazil, 3 to 14 June
1992.
3. Naredo, J.M. and Frías, J. Flujos de energía, agua, materiales e información en la Comunidad de
Madrid, Conserjería de Economía de la Comunidad de Madrid, Madrid, 1988.
4. Valero, A., Subiela, V. and Cortés, C. Balance de emisiones y consumos de dióxido de carbono en
España, Energía, Año XX, September/October, pp.97–107, 1994.
5. De Groot, W.T. Environmental Science Theory. Concepts and methods in a one-world problem-oriented
paradigm, Elsevier Science Publishers, Amsterdam, 1992.
6. Cleveland, C.J. Natural Resource Scarcity and Economic Growth Revisited: Economic and Biophysical
Perspectives, R. Constanza, ed. Ecological Economics – The Science and Management of Sustainability,
Columbia Univ. Press, New York, 1991.
7. Kümmel, R. and Schüssler, U. Heat equivalents of noxious substances: a pollution indicator for
environmental accounting, Ecological Economics, 3, pp.139–156, 1991.
8. Prats, F. Sostenibilidad y políticas urbanas y locales: el caso de las ciudades españolas, Biblioteca
Ciudades para un futuro más sostenible.
9. Sánchez de Muniáin, J.L. El papel de los municipios en la gestión de la energía, Genera Congress,
Madrid, March 2001.
10. Valero, A., Muñoz, M. and Lozano, M.A. A general theory of exergy saving III. Energy saving and
thermoeconomics, Computer-Aided Engineering and Energy Systems Vol. 3: Second Law Analysis and
Modelling – AES, Vol. 2–3, New York, 1986.
11. EMEP – CORINAIR; Atmospheric Emission Inventory Guidebook, Gordon Mc Innes, European
Environment Agency, First Edition, Copenhagen, 1996.
12. Instituto para la Diversificación y el Ahorro de Energía – Ministerio de Ciencia y Tecnología; El
consumo de energía de las familias españolas, Madrid, 1999.
13. López, C. Transporte y movilidad: el factor energético y ambiental, Genera Congress, Madrid, March
2001.
14. Estevan, A. and Sanz, A. Hacia la reconversión ecológica del transporte en España, Los libros de la
catarata, 1996.
15. Royo J. et al. Análisis del potencial de ahorro y eficiencia energética en Aragón, Gobierno de Aragón –
Dpto Industria, Comercio y Desarrollo, Colección Datos Energéticos de Aragón, Zaragoza, 2000.



Alteration of chemical disinfection to environmentally friendly disinfection by UV-radiation

Slaven Dobrović, Nikola Ružinski & Hrvoje Juretić


Power Engineering Department, Faculty of Mechanical Engineering and Naval Architecture,
University of Zagreb, Zagreb, Croatia

ABSTRACT: In the summer season the waterworks Ponikve distributes approximately 7000 m3 of treated water from lake "Jezero" daily through the complex drinking water distribution system of the northern part of the island of Krk, Croatia. In order to avoid problems with the disinfection byproducts of both chlorine and chlorine dioxide, which were present due to the natural organic matter in the lake water, a large-scale pilot test of UV irradiation for the disinfection of drinking water was started in 2000. This paper presents the results of a 9-month investigation period that covers different seasonal qualities of the raw water. During the test period, intensive bacteriological monitoring showed that the microbial water quality standards could be met. Despite the natural organic matter (NOM) content and the very high summer water temperatures (up to 30°C), the disinfection process was successfully conducted solely with UV radiation. The waterworks Ponikve continues to replace its chemical disinfection equipment with UV radiation.

INTRODUCTION

The term disinfection refers to the selective inactivation of pathogens or disease-causing organisms [1]. The most widely used technology is the addition of chemicals such as chlorine and chlorine dioxide. Nowadays, UV disinfection is becoming increasingly recognized as a practical and effective alternative to the chemical disinfection of water. The main reason for this is the discovery that a large number of byproducts are formed by the reaction of chlorine with naturally occurring organic materials such as humic and fulvic acids or other organic molecules present in water [2]. Chlorination byproducts in drinking water were discovered in 1974, and epidemiologic assessment of cancer risk started shortly thereafter [3]. Concerns have been raised regarding the potential health effects associated with exposure to disinfection byproducts. For many of the byproducts, carcinogenicity and mutagenicity have been proven or are highly suspected [4–8].
The waterworks Ponikve distributes approximately 7000 m3 of treated water from lake "Jezero" daily during the summer season through the complex drinking water distribution system of the northern part of the island of Krk. Due to the constant natural organic matter content of the lake water, and in spite of adsorption with powdered activated carbon during the summer months, there is always a certain amount of residual organic loading in the finished drinking water. Until 1997 disinfection was done with chlorine at the end of the treatment process. As a result of increased public concern about the continuous presence of trihalomethanes in the distributed drinking water, the waterworks converted to disinfection with chlorine dioxide. Compared to chlorine, chlorine dioxide is a more efficient biocide that does not directly form halogenated hydrocarbons [1], and it was believed that the problem of disinfection byproducts would be solved. Instead of trihalomethanes, however, increased levels of chlorite ion were frequently present. Since Croatian regulations set 0.2 mg/l as the maximum contaminant level for chlorite [9], the problem of DBPs was still far from solved.

At the beginning of 2000, field-testing of UV disinfection of drinking water started on the existing distribution system.

MATERIALS AND METHODS

Distribution system
The evaluation was undertaken in the potable water distribution system of the city of Omišalj, which is supplied from two reservoirs (see Figure 1). The upper town area is supplied from the reservoir Hamec (volume 150 m3) and the lower zone from the reservoir Boki (volume 120 m3).
The lake water "Jezero", with total and carbonate hardness of 250 and 220 mg CaCO3/L respectively, was known to contain 4.65 to 5.1 mg/L DOC, mainly attributed to humic substances.

Ultraviolet units
A total of 4 UV disinfection units with low-pressure mercury lamps radiating 254 nm short-wave ultraviolet light was installed, providing a dose of more than 30 mWs/cm2. The Boki site was equipped with two units containing 10 lamps each, and the Hamec site with two 6-lamp units (models CI 10L and CI 6L respectively, manufacturer Ideal Horizons-Wedeco). The quartz sleeves were cleaned every 6 hours by an automatic, pneumatically driven mechanical wiping system.
It is important to note that the distribution system was chlorinated in April and October 2002 using the standard procedure with sodium hypochlorite (active chlorine 5–10 ppm, duration 12 hours).
The turbidity of the water samples was measured on a Hach laboratory turbidimeter, model 2100AN, according to the procedure specified in Standard Methods. The permanganate value was estimated as the consumption capacity of potassium permanganate in accordance with Merck's procedure.
Four types of bacteriological examination of the water samples were carried out to determine the disinfection efficiency: total coliforms, fecal coliforms, fecal Streptococcus and total microbial count. The microbiological analyses were performed in the laboratory of the waterworks Ponikve and in the Croatian National Institute of Public Health in Rijeka.

Figure 1. Drinking water distribution system of the northern part of the island Krk: water treatment plant at lake "Jezero", P.K. Brgud, reservoir Hamec (upper zone of Omišalj) and reservoir Boki (lower zone of Omišalj).


RESULTS AND DISCUSSION

Water quality
The quality of the water being disinfected can be summarized as very hard, with constant residuals of highly absorbing humic substances and with low but non-negligible turbidity. The following figures show the measured values of temperature, permanganate value and turbidity of the raw lake water and of the finished water in both reservoirs, Boki and Hamec, during the test period.
The expected seasonal change of temperature for a relatively shallow lake in a Mediterranean climate can be seen in Figure 2. The temperature difference between the lake water and the water in the distribution system switches from an average of +4°C in spring to an average of −5°C in autumn.
Figure 3 shows that the organic content of the raw water is quite stable, with some fluctuations that could be connected with wind blowing over the lake surface, causing mixing of the deeper water, richer in DOC, with the surface layers.
The observed fluctuations certainly influence the applied UV dose in the disinfection process. Both the presence of NOM and the turbidity of the water deteriorate the effectiveness of UV water disinfection. These physico-chemical characteristics make the treated water from lake "Jezero" difficult to disinfect.

Figure 2. Water temperature (°C) of the raw lake water and of reservoirs Boki and Hamec during the test period (March–November).

Figure 3. Organic matter content, measured as permanganate value (mg/L), of the raw lake water and of reservoirs Boki and Hamec during the test period (March–November).


Figure 4. Turbidity (NTU) of the raw lake water and of reservoirs Boki and Hamec during the test period (March–November).

Figure 5. Influence of water characteristics on UV disinfection performance. Direct effects: scattering of UV light and shading by suspended solids (turbidity); absorption of UV light by UV-absorbing compounds (organic, such as humic matter; inorganic, such as Fe or Mn). Indirect effect: fouling of the quartz envelopes due to the hardness of the water.

Influence of water quality on disinfection process


The photochemistry is relatively independent of pH, temperature and ionic strength, and therefore variations in these water quality parameters have minimal impact on the disinfection process once the irradiation reaches the water [10,11]. The temperature of the water being treated influences the temperature of the lamp body, which in turn affects the lamp's UV output. When the water temperature reaches the 30–35°C level, a decrease of approximately 20% in UV radiation is expected.
On the other hand, there are water quality parameters that directly or indirectly influence the total transmissivity, i.e. the ability of the lamp energy output at the germicidal wavelength to be transmitted through the water. These reduce the intensity of light within an engineered UV system and require an inversely proportional increase in exposure time to compensate for the reduced intensity when delivering a given UV dose [11,12].
The yellowish-brown colour of the water of lake "Jezero" indicates the presence of colloidally dispersed humic substances, which absorb in the relevant UV spectrum, as do iron and manganese. Turbidity, i.e. suspended solids, scatters the UV light and also creates zones of shade and regions of reduced irradiation [11]. Finally, the high hardness of the water is a source of minerals that form scale on the quartz sleeves through thermal precipitation [13]. These mechanisms of deterioration of the disinfection process efficiency are shown in Figure 6.
Considering the quality of the lake water being disinfected, the most critical issue is the presence of UV-absorbing organic compounds, mostly in the form of humic substances.


Figure 6. Mechanisms of deterioration of UV disinfection efficiency: inorganic fouling (carbonate layer) of the quartz envelope (sleeve), incomplete penetration (I1 << I0), scattered UV light, particles in shade and zones of limited cellular damage.

The size of the UV equipment is specified according to the transmittance of the water, and at the maximum allowed flowrate the guaranteed UV254nm dose should be 30 mJ/cm2. That value applies for an initially clean quartz sleeve and a defined absorbance of the water.

Dose of UV light
Disinfection reactors have to provide a sufficient dose of UV light in order to cause the desired germicidal effect. The quantity of UV energy, or UV dose, depends on:
• the intensity of the UV lamp irradiance,
• the exposure time, determined by the flowrate through the UV chamber,
• the UV transmission qualities of the water.
Generally, it has been found that the more complex the microorganism, the more sensitive it is to UV irradiation. Thus, viruses are the least sensitive, then bacterial spores, and finally bacteria as the most sensitive [14,15]. Until recently, protozoa such as Cryptosporidium parvum and Giardia lamblia appeared to go against this trend, as it was thought that they were very insensitive to UV because of the difficulty of penetrating the shell in their cyst or oocyst state. However, recent work has demonstrated that these organisms are in fact quite sensitive to UV [16,17]. This means that UV disinfection can now be safely extended to cover almost all pathogens at very moderate UV doses.
As a microbe passes through a UV disinfection system, it receives varying irradiance levels from one or more UV lamps. The exact dose depends on the specific path of the organism through the reactor: those that travel closer to the lamp surfaces or follow longer trajectories receive higher doses. As the fate of any specific organism is difficult to determine, the hydraulic characterization of the disinfection vessel usually relies only on the Mean Residence Time (MRT) of an ideal reactor, i.e. it is assumed that any element of water entering the chamber travels from the inlet to the outlet in a time equal to the reactor volume divided by the flow rate. In reality the disinfection chambers do not behave as ideal reactors, because of phenomena such as short-circuiting, dead zones and back-mixing. This results in a broad distribution of the Real Residence Time (RRT) of each element of the disinfected water. The narrower the distribution of RRT, the better the UV disinfection chamber.
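As a rough illustration of how these quantities combine, the sketch below estimates an average UV dose under idealised assumptions (depth-averaged Beer-Lambert attenuation of the lamp irradiance and exposure time taken as the MRT); the lamp intensity, absorbance and chamber dimensions are invented and do not describe the installed CI 10L and CI 6L units.

```python
# Minimal sketch: average UV dose in an ideal-reactor approximation.
import math

def mean_uv_dose(i0_mw_cm2, absorbance_per_cm, layer_cm, volume_l, flow_l_s):
    """Average UV dose in mWs/cm2 for a single annular water layer."""
    # Depth-averaged irradiance across the water layer (Beer-Lambert attenuation).
    a = absorbance_per_cm * math.log(10)          # convert decadic absorbance to base e
    i_avg = i0_mw_cm2 * (1 - math.exp(-a * layer_cm)) / (a * layer_cm)
    # Exposure time taken as the Mean Residence Time of an ideal reactor.
    mrt_s = volume_l / flow_l_s
    return i_avg * mrt_s

# Example values (hypothetical): 12 mW/cm2 at the sleeve, A254 = 0.10 1/cm,
# 3 cm water layer, 40 L chamber volume, 10 L/s flow.
print(f"UV dose ~ {mean_uv_dose(12, 0.10, 3, 40, 10):.1f} mWs/cm2")
```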

Figure 7. Total coliforms (CFU/100 ml) prior to disinfection and in the distribution systems of Boki and Hamec, during the test period (March–November).

Figure 8. Fecal coliforms (CFU/100 ml) prior to disinfection and in the distribution systems of Boki and Hamec, during the test period (March–November).

Assessment of efficacy of UV disinfection


As can be seen in Figures 7–10, total bactericidal efficacy was achieved during the 9-month test period for the three microbial groups – total coliforms, fecal coliforms and fecal Streptococcus – microorganisms whose presence in water is not allowed at all. Inactivation of the microorganisms detected as the total microbial count group was not absolute, but the great majority of the results still meet the Croatian regulations on the microbiological purity of water [9].
There are great differences in views and practices regarding residual disinfection among some
highly developed countries. The USA favors the application of chlorine and chloramine residuals
while several European countries strongly discourage such practice.
Chlorine is an effective primary disinfectant, but when applied to water as the disinfection resid-
ual, its effectiveness in protecting water from recontamination during distribution and storage is
quite questionable [18]. According to [19], maintaining a free residual concentration in a distribution system does not provide significant inactivation of pathogens, and actually gives a false sense of safety with little active protection of public health.
There is no better alternative to maintaining a clean distribution system and treating the water to the point of biological stability. Biologically stable water does not support significant growth of microorganisms owing to the insufficient level of nutrients such as biodegradable organic matter and ammonia [20,21].


Figure 9. Fecal Streptococcus (CFU/100 ml) prior to disinfection and in the distribution systems of Boki and Hamec, during the test period (March–November).

Figure 10. Total microbial count group (CFU/ml) prior to disinfection and in the distribution systems of Boki and Hamec, during the test period (March–November).

CONCLUSION

This study at the waterworks Ponikve on Krk, Croatia, has proven that the use of chlorine or chlorine dioxide in the drinking water distribution system to ensure the microbiological quality of drinking water is no longer necessary. According to 1104 microbiological analyses of disinfected water from the distribution system during the 10-month investigation period, the sole use of UV disinfection equipment provides drinking water that fully meets the microbiological standards regarding pathogens. It is worth noting that the UV disinfection process was successful in spite of the unfavorable physical and chemical water characteristics and the complexity of the distribution system.
Additional investigation should be carried out in order to find the best solution to the inorganic fouling problem, considering the high water hardness in the region. Besides the mechanical wiping system, which proved successful but requires frequent maintenance, a combination of magnetic pretreatment and chemical cleaning should be tested.
The UV method of disinfection is now widely recognized by governments in many countries. In addition, as concern about chemical contamination increases, many companies have accepted UV as a viable water treatment option. Used by leading companies around the world, UV disinfection is trusted to provide microbiologically pure water without the unwanted effects of alternative treatments [22–24].


Croatian health authorities are becoming increasingly aware of the benefits of UV disinfection, especially after the results of this field test on the island of Krk.
It can be stated that UV irradiation provides an efficient, safe, economical and reliable method of water disinfection, and an increase in its use is welcome and expected.

ACKNOWLEDGEMENTS

Financial support from the Ministry of Science and Technology of the Republic of Croatia (scientific project 120040) is acknowledged. The authors wish to thank the public waterworks company Ponikve for its technical support.

REFERENCES

1. White, G. C., The Handbook of Chlorination and Alternative Disinfectants, Van Nostrand Reinhold,
New York, 1992.
2. Rook, J. J., Chlorination reactions of fulvic acids in natural waters, Environ. Sci. Tech., 11, pp 478–482,
1977.
3. Rook, J. J., Formation of haloforms during chlorination of natural waters, Water Treat. Exam, 23,
pp 234–235, 1974.
4. Isaac, R. A., Disinfection chemistry: selecting the appropriate agent, Water Environment and Technology,
Vol. 8, No. 9, 47–50, 1996.
5. Waller, K., Swan, S. H., Delorenze, G. and Hopkins, B., Trihalomethanes in drinking water and
spontaneous abortion, Epidemiology, 9, 2, 134–140, 1998.
6. Fawell, J., Robinson, D., Bull, R., Birnbaum, L., Butterworth, B., Daniel, P., Galalgorchev, H.,
Hauchman, F., Julkunen, P., Klaassen, C., Krasner, S., Ormezavaleta, J. and Tardiff, T., Disinfection
by-products in drinking water – critical issues in health effects research, Environmental Health
Perspectives, 105, 1, 108–109, 1997.
7. Bull, R. J., Birnbaum, L. S., Cantor, K. P., Rose, J. B., Butterworth, B. E., Pegram, R. and Tuomisto, J.,
Water chlorination – essential process or cancer hazard, Fundamental and Applied Toxicology, 28, 2,
155–166, 1995.
8. Simpson, K. L. and Hayes, K., Drinking water disinfection by-products – an Australian perspective,
Water Research, 32, 5, 1522–1528, 1998.
9. Pravilnik o zdravstvenoj ispravnosti vode za piće, Narodne novine, br. 46/94 and Pravilnik o izmjenama
i dopunama pravilnika o zdravstvenoj ispravnosti vode za piće, Narodne novine, 49/97.
10. Darby, J., Heath, M., Jacangelo, J., Loge, F., Swaim, P. and Tchobanoglous, G., Comparison of UV
Irradiation to Chlorination: Guidance for Achieving Optimal UV Performance, Project 91-WWD-1,
Water Environment Federation, Alexandria, 1995.
11. Darby, J., Emerick, R., Loge, F. and Tchobanoglous, G., The Effect of Upstream Treatment Processes
on UV Disinfection Performance, Project 95-CTS-3, Water Environment Federation, Alexandria, 1999.
12. Sommer, R., Haider, T., Cabaj, A., Pribil, W. and Lhotsky, M., Time dose Reciprocity in UV disinfection
of water, Water Science Technology, 38, 12, 145–150, 1998.
13. Lin, L. S., Johnston, C. T. and Blatchley, E. R., Inorganic fouling at quartz: Water interfaces in
ultraviolet photoreactors-I. Chemical characterization., Water Research, 33, 15, 3321–3329, 1999.
14. Liu, Z., Stout, J. E., Tedesco, L., Boldin, M., Hwang, C. and Yu, V. L., Efficacy of Ultraviolet light
in preventing Legionella colonization of a hospital water distribution system, Water Research, 29, 10,
2275–2280, 1995.
15. Blatchley III, E. R., Dumoutier, N., Halaby, T. N., Levi, Y. and Laine, J. M., Bacterial responses to
ultraviolet irradiation, Water Science and Technology, Vol. 43, No. 10, pp 179–186, 2001.
16. Cotton, C. A., Owen, D. M., Cline, G. C. and Brodeur, T. P., UV disinfection costs for inactivating
Cryptosporidium, Journal of American Water Works Association, Vol. 93, No. 6, pp 82–94, 2001.
17. Craik, S. A., Weldon, D., Finch, G. R., Bolton, J. R. and Belosevic, M., Inactivation of Cryptosporidium
parvum oocysts using medium- and low-pressure ultraviolet radiation, Water research,Vol. 35, No. 6,
1387–1398, 2001.
18. LeChevallier, M. W., Cawthon, C. D. and Lee, R. G., Factors Promoting Survival of Bacteria in
Chlorinated Water Supplies, Applied and Environmental Microbiology, Vol. 54, pp 649–654, 1988.


19. Payment, P., Poor efficacy of residual chlorine disinfectant in drinking water to inactivate waterborne
pathogens in distribution systems, Canadian Journal of Microbiology, Vol. 45, No. 8, 709–715, 1999.
20. Hu, J. Y., Wang, Z. S., Ng, W. J. and Ong, S. L., The effect of water treatment processes on the
biological stability of potable water, Water research, Vol. 33, No. 11, 2587–2592, 1999.
21. Van der Kooij, D., Maintaining quality without a disinfection residual, Journal of American Water
Works Association, Vol. 91, No. 1, pp 55–64, 1999.
22. Blatchley III, E. R., DoQuang, Z., Janex, M. L. and Laine, J. M., Process Modeling of Ultraviolet
Disinfection, Water Research, 38, 6, 63–69, 1998.
23. Hoyer, O., Testing performance and monitoring of UV systems for drinking water disinfection, Water
Supply, Vol. 16, No. 1–2, pp 424–429, 1998.
24. Acher, A., Fischer, E., Turnheim, R. and Manor, Y., Ecologically friendly wastewater disinfection
techniques, Water Research, 31, 6, 1398–1404, 1997.



Modelling the geographic distribution of scattered electricity sources∗

Poul Alberg Østergaard


Department of Development and Planning, Aalborg University, Aalborg, Denmark

ABSTRACT: In Denmark more than 40% of the electricity consumption is covered by scattered electricity sources, namely wind power and local CHP plants. This causes problems with regard to load balancing and possible grid overloads. The potential grid problems and methods for solving them are analysed in this article on the basis of energy systems analyses, the geographic distribution of consumption and production, and grid load-flow analyses. It is concluded that by introducing scattered load balancing, using local CHP plants actively and using interruptible loads such as heat pumps, the requirements on the transmission grid are lowered, thereby reducing or eliminating the need for grid reinforcement. It is important that load balance is kept at the local level and not just at an aggregate level.

INTRODUCTION

In 1990 Denmark introduced an ambitious national energy policy aimed at curbing CO2 emissions
by 20% relative to 1988 before 2005 and a long term aim of halving emissions [1]. These aims were
later reiterated in subsequent energy plans [2,3]. Important measures included expansion of both
wind power and the use of CHP (cogeneration of heat and power). The planned expansion would
bring Denmark to the absolute forefront in regards to the exploitation of wind power and the use
of CHP as well as create some problems.
CHP may be utilised on both large and small power plants. The large-scale CHP plants are
typically back-pressure or bleeding steam turbines supplying district heating to large urban areas
as well as electricity. The small plants are typically based on gas turbines or piston engines and
supply either district heating to small towns or heat for industrial use. More importantly in this
context, however, is the distinction between central plants and local plants. The electricity generation
on central CHP plants is controlled directly by the central load dispatch whereas that of the local
plants is not. The development in Denmark has been on large as well as small CHP plants but the
largest increase has been on the small local CHP plants.
Consequently, the development has hitherto resulted not only in fuel savings, due to the exploitation of a renewable energy source and the exploitation of the heat produced in thermal power plants. It has also resulted in a geographically scattered electricity production system in which there is no central control of the production of many of the electricity producing plants. In 2000 wind power alone supplied nearly 14% of the Danish electricity consumption, and together with local CHP the two supplied more than 40% [4]. As expansion will continue and as consumption is relatively stable, the share of wind power and local CHP is projected to increase. This is an interesting development from a technical perspective, as it reveals problems with regard to load balancing and possible grid overloads. These are the focal points of this article.

∗ The work is carried out as a part of the research programme Mosaik funded by the Danish Development
Programme for Renewable Energy.


Figure 1. The share of electricity consumption covered by local CHP and wind power in the Jutland-Funen area in 2001, as a duration curve (share of consumption vs. share of the year). Computed from raw data from [6].

Transmission systems have traditionally been designed to accommodate a radial electricity flow
from a few points of production to a multitude of points of consumption. The development towards
a geographically scattered production system has generated a need to look thoroughly into the geo-
graphic distribution of production and consumption and the impact on the layout of the transmission
system.
Denmark is for historical reasons split up into two asynchronous electricity systems that are operated independently. The focus in this article is on the Jutland-Funen area, where the main part of the population lives [5] and where wind power and local CHP have the highest penetration.
The Jutland-Funen area is already sometimes in a situation where production from wind turbines and local CHP plants surpasses consumption, as indicated in Figure 1. It is also clear from the figure that the two have a notable production throughout the year. At moments when production from wind turbines and local CHP plants exceeds consumption, the electricity flow is from the scattered points of production to the backbone of the transmission grid and on to surrounding areas or foreign countries. This is particularly the case in thinly populated areas with abundant wind resources, where production may surpass consumption considerably.
In 2001, local CHP generation and wind power constituted more than 50% of the electricity consumption 38% of the time, and more than 75% of the consumption 10% of the time. At its highest hourly value, wind and local CHP constituted 148.7% of electricity consumption. This leaves little latitude for regulation of the load balance, as the central power plants may also be subjected to limitations in their operation, either due to heat production or due to technical requirements. It also places high requirements on the transmission grid, which has to be able to pick up the production and transmit it to points of consumption outside the Jutland-Funen area.
In addition to being of a large magnitude, wind power and local CHP production may also change noticeably from hour to hour. Figure 2 demonstrates the changes that occur in wind and local CHP production. The curve shows the correlation between the change from hour to hour in the sum of local CHP production and wind power and the frequency at which it occurs. 20% of the time the average production in the subsequent hour will differ by more than 10% from the current hourly averaged value, and 5% of the time the subsequent value will differ by more than 24%. As wind power is proportional to the wind velocity cubed, and as a certain geographic equalisation is attained in Figure 2 owing to the size of the Jutland-Funen area, data for smaller areas would reveal larger changes.
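Statistics of this kind follow directly from the hourly series; the sketch below uses a synthetic random series as a stand-in for the 2001 data from [6] and computes both the duration statistics of the wind-plus-local-CHP share and the distribution of hour-to-hour jumps.

```python
# Sketch: duration and hour-to-hour change statistics for scattered generation.
# The series below are synthetic stand-ins for the hourly 2001 data from [6].
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
consumption = 2500 + 600 * rng.random(hours)   # MW, assumed profile
scattered = 900 + 1200 * rng.random(hours)     # MW wind + local CHP, assumed profile

share = scattered / consumption
print(f"Share > 50% of consumption: {np.mean(share > 0.50):.0%} of the year")
print(f"Share > 75% of consumption: {np.mean(share > 0.75):.0%} of the year")
print(f"Highest hourly share: {share.max():.1%}")

# Hour-to-hour change in scattered generation, relative to the current hourly value.
jump = np.abs(np.diff(scattered)) / scattered[:-1]
print(f"Jump > 10% occurs {np.mean(jump > 0.10):.0%} of the time")
print(f"95th percentile jump: {np.percentile(jump, 95):.0%}")
```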
The load balance is presently maintained by adjusting the few, large central CHP plants and
condensation plants. Import and export may also be applied to assist load balancing and a three-tier
tariff influences the local CHP plants to be operated according to a certain fixed diurnal variation.
A further expansion of wind power and local CHP would have a number of consequences.


Figure 2. Changes from hour to hour in the sum of local CHP and wind generation and the frequency at which they occur in Jutland-Funen in 2001 (jump from hour to hour vs. share of the year). Computed from data from [6].

• The response time of large power plants becomes important, as they would have to rapidly accommodate changes in e.g. the output of wind turbines. They would have to remain on stand-by to take over if production from wind and local CHP drops. This could result in a modest number of annual hours of operation at relatively high fixed costs.
• The three-tier tariff can be modified, but if it remains in the form of pre-defined production prices for pre-defined hours of the week it will still be too inflexible.
• Connections to surrounding countries could come under stress, and they or the surrounding countries may not be able to assist enough with load balancing. Reinforcement of the grid could be required. This may not be politically acceptable, as there is growing opposition to aerial transmission lines in Denmark.

SCOPE OF THE ARTICLE

Scattered balancing may be an appropriate option for addressing both problems in load balancing
and problems with possible transmission grid overloads. With scattered balancing, geographically
scattered plants such as local CHP plants, heat pumps and devices for using electricity for vehicles
are adjusted to balance production and consumption. This lowers the necessity to transmit electricity
and thus the requirements of the transmission grid.
Previous analyses have demonstrated that with the present stock of wind turbines and CHP plants,
line transmissions can be limited to magnitudes suitable for 150 kV lines if scattered load balancing
is introduced [7]. In the long term, analyses show that although transmission needs exceed what is
suitable with 150 kV lines, demands are still smaller with scattered load balancing [8].
The analyses in [7,8] take the geographic distribution of production and consumption into consideration, but weather conditions are assumed to be uniform throughout the area: if wind turbines are producing at half their nominal output in one area, wind turbines in other areas are assumed to be producing at half their nominal output too. With weather fronts continuously moving in and across Denmark, however, wind conditions cannot be expected to be the same throughout the country.
In this article, the different impacts on the transmission grid are analysed in two cases, both using scattered load balancing:
• where load balance is sought to be kept locally in each node throughout the system;
• where only the overall load balance of the entire system is kept.
The first case corresponds to a situation where the overall load balance is maintained and an even distribution of production and consumption in the different areas is assumed. The second case is modelled by distributing fixed amounts of wind power generation unevenly in the Jutland-Funen area.
Only the transmission grid at 150 and 400 kV in the Jutland-Funen area is considered.


Table 1. Description of the energy system modelled.

Year 2020 installed capacities
Wind           2500 MW on-shore
               1445 MW off-shore
CHP            2600 MW
Heat pumps     650 MW
Condensation   1900 MW
Consumption    24.87 TWh/year

Table 2. Description of the load case modelled.

Load situation                  Equipment     Load
Medium wind power generation    Heat pump     650 MW
High heat demand                Transport     273 MW
Low electricity consumption     Consumption   2313 MW
                                Off-shore     500 MW
                                On-shore      1064 MW
                                Power plant   176 MW
                                CHP           1297 MW

THE ENERGY SYSTEM SCENARIO

The analyses are based on the energy systems analysis model EnergyPlan and scenarios developed by Lund [9,10]. Using the EnergyPlan model, scenarios of consumption and production for various types of equipment are modelled and calculated for each hour of the year 2020. The system modelled can be described by the factors indicated in Table 1. The data stem from a report prepared by a working group set up by the Danish Energy Agency and may thus be seen as a semi-endorsed prognosis.
These analyses give 8784 hourly values for each type of production or consumption equipment. The grid response to every one of these data sets could be analysed, but this would be an unnecessarily burdensome task. Critical situations arise only when large imbalances exist between consumption on one side and local CHP and wind power generation on the other. These situations are identified manually, and in this article one situation is singled out for further analysis. The situation is characterised by a medium level of wind power generation, as only this level permits full wind power generation in one area of Denmark and no wind power generation in another. For the analyses, a January situation with low electricity consumption and high heat demand is identified. The load situation is detailed in Table 2.
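The manual screening of critical hours could also be automated. The following minimal sketch uses random placeholder series rather than actual EnergyPlan output (whose file format is not described here) and simply ranks the hours of the scenario year by the imbalance between consumption on one side and wind plus local CHP on the other:

```python
import numpy as np

# Hypothetical hourly series for the 8784 hours of the scenario year; in the study
# these come from the EnergyPlan model, here they are random placeholders.
hours = 8784
rng = np.random.default_rng(0)
consumption = rng.uniform(1500, 3500, hours)   # MW
wind = rng.uniform(0, 3945, hours)             # MW, on-shore plus off-shore
local_chp = rng.uniform(500, 2600, hours)      # MW

# Imbalance between local CHP + wind production and consumption.
imbalance = (wind + local_chp) - consumption

# List the hours with the largest absolute imbalance for closer inspection.
worst = np.argsort(-np.abs(imbalance))[:10]
for h in worst:
    print(f"hour {h}: wind {wind[h]:.0f} MW, CHP {local_chp[h]:.0f} MW, "
          f"consumption {consumption[h]:.0f} MW, imbalance {imbalance[h]:+.0f} MW")
```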

GEOGRAPHIC DISTRIBUTION OF PRODUCTION AND CONSUMPTION

Production and consumption of electricity are distributed geographically over 90 nodes at 150 kV and 400 kV in the Jutland-Funen area.
• Consumption. The consumption is distributed using maximum load data for the 150 kV nodes from [11] as indexes. The load data are from a 1991 prognosis for the year 2000.
• Transport. It is assumed that hydrogen production or battery chargers are distributed evenly with consumption.
• CHP. As CHP has penetrated the Danish energy system down to even small towns, the same indexes have also been used to distribute CHP generation, assuming that heat demand, and thus CHP electricity generation, is proportional to electricity demand.


Table 3. Active node load indexes.

Node   Consumption   Heat pumps   Transport   Wind inland   Wind off-shore   CHP   Power plants   Import   Export
KAE    83            83           83          135           150              83    0              0        0
KAS    0             0            0           0             0                0     0              1600     1600
NEV    0             0            0           0             0                0     657            0        0

• Heat pumps. As heat pumps are assumed to be established adjacent to the CHP plants, the same indexes are used here.
• Wind power. Inland wind turbines are distributed according to a listing of the stock in eight specific areas [12]. Within these areas, the wind turbines are distributed over a number of nodes. At present Denmark has only a few, relatively small off-shore wind parks, and these existing off-shore turbines are included in the inland stock. In addition, a 160 MW wind park is under construction off the West Coast and an even larger one may be erected off the East Coast, although there is some political debate. In the analysis, off-shore turbines are connected to the grid in a ratio of 1:2 in the two parks.
• Phase compensation is applied evenly over the nodes, using the maximum load data for the 150 kV nodes as indexes.
• Import and export are not applied in these analyses.
• No loads are applied to the 400 kV nodes. Production and consumption reach the 400 kV grid via the 150 kV grid.
The result is a matrix of nodes and indexes, as illustrated partially in Table 3.
In addition, all nodes are identified by their geographic co-ordinates. The co-ordinates as well as the node load indexes are fed into a model called PlanToGrid. Using the geographic co-ordinates, PlanToGrid produces a node map. This map is interlaced with a grid as indicated in Figure 3. For each square in the grid, an index value is multiplied by the inland wind power index of the nodes within the square to produce a new set of indexes for these nodes. By adjusting the grid indexes in PlanToGrid, different geographic distributions of wind power generation at a specific time can thus be modelled.
Hourly consumption and production values produced with the EnergyPlan model are then mul-
tiplied by these series of indexes thereby forming a matrix of active node loads. Using stated power
factors for each consumption and production category, a similar matrix is calculated with reactive
node loads.
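A minimal sketch of this bookkeeping is given below. The extra hypothetical node, the grid-square factors and the power factors are invented for illustration; only the Table 2 system loads and the Table 3 rows for KAE and NEV are taken from the paper.

```python
import numpy as np

# Node load indexes (rows: nodes, columns: categories); KAE and NEV are from Table 3,
# "node A" and the grid-square factors below are invented for illustration.
categories = ["consumption", "heat_pump", "transport", "wind_inland",
              "wind_offshore", "chp", "power_plant"]
index = np.array([
    [83, 83, 83, 135, 150, 83,   0],   # node KAE
    [40, 40, 40,  60,   0, 40,   0],   # hypothetical node A
    [ 0,  0,  0,   0,   0,  0, 657],   # node NEV
], dtype=float)
index /= index.sum(axis=0)             # each category now distributes to a total of 1

# Grid-square scaling of the inland wind column (uneven geographic distribution).
wind_col = categories.index("wind_inland")
index[:, wind_col] *= np.array([1.0, 0.5, 0.25])   # assumed factors per node's square
index[:, wind_col] /= index[:, wind_col].sum()

# System-wide loads per category for the load case of Table 2 (MW).
system_load = np.array([2313, 650, 273, 1064, 500, 1297, 176], dtype=float)

active = index * system_load                          # active node loads, MW
power_factor = np.full(len(categories), 0.95)         # assumed power factors
reactive = active * np.tan(np.arccos(power_factor))   # reactive node loads, Mvar
print(active.sum(axis=0))                             # recovers the system totals
```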
The highest proportion of the population of the Jutland peninsula lives in the Eastern half, whereas, with mostly Westerly winds, the best wind potentials are in the flat Western half. It is thus particularly interesting to examine the response of the grid to fronts moving in from the West, as this situation in particular generates production that locally exceeds consumption.
Two cases are analysed in this article: a case with an equal distribution of wind power generation and a case with an uneven distribution of wind power generation, corresponding to a situation with load balancing at the node level and a situation with load balancing only at the aggregate level. In the unevenly distributed case, the area is split into four columns, with indexes declining linearly from West to East as illustrated in Figure 3. Other uneven distributions have been modelled, but only this particular one is singled out and detailed in this article.

MODELLING GRID FLOWS

Using the node loads and grid data from Elsam [11], the EnergyProGrid model [13] is then applied
to determine voltages and line currents throughout the transmission system. EnergyProGrid does
this by solving a matrix equation through numerical approximation. By checking line currents
against stated maximum line currents, overloads are then identified by EnergyProGrid.
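A rough illustration of such an overload check is given below, using a simplified DC power-flow formulation rather than the full calculation performed by EnergyProGrid; the three-node network, reactances and limits are made up.

```python
import numpy as np

lines = [  # (from_node, to_node, reactance, thermal limit in MW) - invented data
    (0, 1, 0.05, 400),
    (1, 2, 0.04, 400),
    (0, 2, 0.08, 250),
]
injection = np.array([600.0, -200.0, -400.0])   # net MW injections, summing to zero

n = 3
B = np.zeros((n, n))                             # nodal susceptance matrix
for i, j, x, _ in lines:
    B[i, i] += 1 / x; B[j, j] += 1 / x
    B[i, j] -= 1 / x; B[j, i] -= 1 / x

theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], injection[1:])   # node 0 is the reference

for i, j, x, limit in lines:
    flow = (theta[i] - theta[j]) / x
    status = "OVERLOAD" if abs(flow) > limit else "ok"
    print(f"line {i}-{j}: {flow:7.1f} MW (limit {limit} MW) {status}")
```

With these made-up data the 0-2 line exceeds its limit, which is the kind of condition flagged in the analyses.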


Figure 3. Node map of Jutland-Funen with indication of the squares used for specifying distribution of wind
power generation.

RESULTS OF THE MODELLING AND CONCLUSION

In the case of an even distribution of wind power generation throughout the area – i.e. with the same wind conditions throughout the area – no lines are overloaded. This should be attributed to the even distribution of production from wind and CHP plants, the even distribution of consumption and the even distribution of the plants used for load balancing. No great need for transmission away from any part of the area arises, as potential surpluses and potential deficits are picked up locally.
If, however, an uneven distribution of wind power generation is assumed, then there is in fact a need for transmission away from certain parts of the area to the parts where consumption exceeds production.
This increases the general load on the transmission grid, and one West-East line is overloaded for this reason. Figure 4 shows a comparison of line currents in the two cases. Some lines carry smaller loads, but most lines carry a higher load. Some significantly larger increases are not included in the figure, as they concern lines that start from very low currents, which makes any increase large in percentage terms.
Different situations with uneven distribution of wind power generation have been analysed. Not
all situations result in overloaded lines and even the situations with overloaded lines may possibly
be avoided by different use of the transmission system or through different phase compensation.


[Figure 4 plot: number of lines (left axis) and line distance in km (right axis) plotted against the current ratio (0-5), with curves for "Number" and "Distance".]

Figure 4. Comparison of line currents with and without an even distribution of wind power. Ratios higher than 1 indicate lines that are loaded more heavily with the uneven distribution, and ratios lower than 1 indicate lines that are loaded less. Plotted at the midpoint of every 0.2 interval.

It is evident from the analyses, however, that a higher number of lines are closer to their transmission
capacity if there is an uneven distribution of wind power generation.
The analyses thus indicate that rather than maintaining load balance in the entire system seen
as one entity, load balance should be maintained throughout the grid thereby limiting line currents
and possible needs for grid reinforcements.
The analyses have furthermore been conducted assuming a transmission grid that is fully opera-
tional. A typical criterion when designing transmission grids is that they should not be overloaded
even in the event of a failure on one or two lines. Consideration for such criteria only amplifies the
need for local balancing throughout the grid.
Finally the analyses demonstrate that the geographic dimension needs consideration when mak-
ing energy systems analyses. On an aggregate level, the system may be in balance but on a
geographically detailed level, undesirable imbalances may occur.

REFERENCES

1. Danish Ministry of Energy, Energy 2000 – A plan of action for sustainable development, Danish
Ministry of Energy, Copenhagen, 1990.
2. Danish Ministry of Energy, Energy 2000 – The follow-up, Danish Ministry of Energy, Copenhagen, 1993.
3. Danish Ministry of Energy, Energy 21 – The Danish Government’s Action Plan for Energy 1996, Danish
Ministry of Environment and Energy, Copenhagen, 1996.
4. Danish Energy Agency, Energy statistics 2000 – Denmark’s production and consumption of energy,
Danish Energy Agency, Copenhagen, 2001. p. 8 (In Danish).
5. Statistics Denmark, Population in municipalities 1 January 1996, Statistics Denmark, Copenhagen,
1996 pp. 13–15.
6. Eltra spreadsheets with hourly year 2001 production and consumption data from www.eltra.dk/drift (In
Danish).
7. Lund, H. and Østergaard, P.A., Electric grid and heat planning scenarios with centralised and distributed sources of conventional, CHP and wind generation, Energy, Vol. 25, No. 4, pp. 299–312, 2000.
8. Østergaard, P.A., Transmission grid requirements with scattered and fluctuating renewable electricity sources, Applied Energy, Vol. 76, No. 1–3, pp. 247–255, 2003.
9. Danish Energy Agency, Report from the workgroup on electricity production from CHP and renewable
energy sources, Danish Energy Agency, Copenhagen 2001 (In Danish).
10. Danish Energy Agency, Attachments to report from the workgroup on electricity production form CHP
and renewable energy sources, Attachment 6, Danish Energy Agency, 2001 (In Danish).
11. Elsam, Grid expansion plan 1991 – Data foundation. Elsam, Skærbæk 1991 (In Danish).
12. Tech-wise. Wind database and wind models, Tech-wise, Fredericia, 2001 (In Danish).
13. Andersen, A.N., Calculations in Windows Elnet, EMD, Aalborg, 1999 (In Danish).


Dynamic stock modelling: a method for the identification and estimation of future waste streams and emissions based on past production and product stock characteristics

Ayman Elshkaki∗ & Ester van der Voet
Centre of Environmental Science (CML), Section of Substances and Products, Leiden University, Leiden, The Netherlands

Veerle Timmermans & Mirja Van Holderbeke
Flemish Institute for Technological Research (Vito), Integral Environmental Studies, Mol, Belgium

ABSTRACT: Large stocks of products, materials and substances have accumulated in society.
This article investigates the dynamic behaviour of these societal stocks in order to explore future
emissions and waste streams. We argue that a stock's dynamics are mainly determined by its inflow
and outflow characteristics. The stock’s inflow is determined by socio-economic factors which can
be quantified using regression analysis. Two processes determine the stock’s outflow: leaching and
delay. Leaching occurs during use and can be modelled as a function of the stock size. Delay is
related to the discarding of products after use and can be modelled as a delayed inflow distributed
over time. This approach is illustrated by the case of lead as applied in cathode ray tubes in the EU.
By applying this model to other lead applications and combining the results, the dynamic behaviour
of the total lead stock can be described.

INTRODUCTION

The extraction, use and discarding of materials give rise to environmental problems. Some of these
problems are related to resource depletion, others to pollution resulting from emissions during the
life cycle of these materials. A relatively new field of research, industrial ecology, studies society’s
metabolism to analyse the causes of these problems and indicate possibilities for a more sustainable
management of materials. Substance flow analysis (SFA) is one of the main analytical tools within
the industrial ecology research field. SFA is used to describe or analyse the flows of one substance
(group) in, out and through a system [1]. The system is a physical entity, often representing a
geographical area. In most cases, the SFA system is divided into two subsystems: the economic or
societal subsystem and the environmental subsystem.
SFA is based on the materials balance principle, which enables different types of analysis.
Substance flow accounts can be used to identify major flows and accumulations and, if available
for several years, to spot trends. Static SFA models can be used to identify causes of pollution
problems and to assess the effectiveness of measures [2]. Recently, it has been acknowledged that the
main difference between static and dynamic models in SFA lies in the inclusion of stocks in
society [3]. Stocks of products and materials in use are a major cause of disconnection between
the system’s inflow and its outflow in one year. Ignoring them may lead to very erratic forecasts of
future emissions and waste streams. Dynamic SFA models including stocks lead to more accurate
prediction of future resource use and waste streams. Considering stocks so far has resulted in a

∗ Corresponding author. e-mail: Elshkaki@cml.leidenuniv.nl


few specific substance stock inventories or models [4,5]. This paper contains an effort to define a general stock model.
The dynamics that determine the growth and decline of the stock of a substance over time are determined by the inflow and outflow of the materials and products it is contained in. This article will focus on the demand for the products that materials and substances occur in.
Over the last century, the increase of the global population and of GDP per head in developed countries has been accompanied by a rapid increase in material consumption [3]. In fact, the overall level of national income, the product composition of income and the material composition of products have been used to determine the intensity of use of materials in several studies [6,7]. In this article, a similar approach is adopted to estimate the stock's inflow. The modelling of the stock's outflow is mainly based on physical considerations, especially the mass balance.
The next section will outline a methodology for dynamic stock modelling, followed by a descrip-
tion of the cathode ray tube system, and finally a section containing a discussion of the results and
some conclusions.

DYNAMIC STOCK MODELLING APPROACH

In the use phase, goods with a life span of more than one year tend to accumulate: they do not
flow out again in the same year but remain stored in the use processes. Such applications stored
in the use phase are referred to as stocks. The mechanisms determining the stock dynamics can be classified into three levels:
• stocks of products, handled by producers and users (e.g. cathode ray tubes)
• stocks of materials that those products are composed of (e.g. lead oxide) and
• stocks of substances, contained within these materials and hence products (e.g. lead).
Stocks on these three levels have their own characteristics and dynamic behaviour.
The demand for a particular product is determined by significant variables such as its price
compared to the price of its closest substitute, and the level of overall economic activity [6]. Moreover, in the course of time technological developments may also affect the demand for a particular product because of the emergence of alternatives. They may also affect the demand for materials due to changes in product design towards less or different material use. For example, developments in lead-acid battery technology led to a reduction of the total weight of a battery from about 19 kg to 16.6 kg over a period of 15 years. Most of the reduction of about 2.5 kg was obtained by reducing the lead content of the battery [8].
The dynamic behaviour of the product stock which is mainly determined by the behaviour of
the stock’s inflow (purchases of new products) and stock’s outflow (discarded products) will be
described in the following sections.

Modelling the product stock inflow


The total inflow into a particular stock of products-in-use is determined by supply and demand, each
of these in turn determined by several further variables. Among these are socio-economic variables
such as GDP, population, technological developments and welfare, as well as other economic
factors such as the presence of alternatives and their relative prices. It is useful to start by making a qualitative model of the system: for example, whether or not there are any substitutes for the product and, if so, what their specifications are regarding material composition, performance and price; and whether or not the product is at present subject to rapid change due to technological improvements and, if so, in which direction.
The second step is to quantify the relationship between the inflow of the product and the most
influential variables (e.g. population, GDP). Time series data are required for the inflow on the one
hand, and the explanatory variables on the other hand.
To establish the relative importance of these variables on the shape of the inflow curve over time,
a regression model can be used. Regression analysis indicates which variables are significant

250

© 2004 by Taylor & Francis Group, LLC


chap-26 19/11/2003 14: 48 page 251

and contribute most to the shape of the inflow curve. It also examines the combined effect of
significant variables. The linear regression model used in this analysis is described by equation 1.
The adequacy of the regression model and the significance of the variables can be described in
statistical terms such as the adjusted coefficient of determinations (R2adj ), t tests and F statistics.

Y (t) = β0 + β1 · X1 (t) + β2 · X2 (t) + β3 · X3 (t) + β4 · X4 (t) + β5 · X5 (t) + ε(t) (1)


where Y is the inflow of a particular good,
X1, X2, X3, X4 and X5 are the different influential variables,
β0, β1, β2, β3, β4 and β5 are the regression parameters, and
ε is the model error.
The derived regression model described by equation 1 can further be used to estimate the future
inflow of goods. Projected values of the influential variables are then required. Such projections
are available for GDP and population in different scenario studies [9].
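As a minimal sketch of this step, with invented data in place of the real time series, the regression parameters of equation 1 can be estimated by ordinary least squares and the adjusted R² computed as reported later in Table 1:

```python
import numpy as np

# Placeholder time series standing in for the inflow and its explanatory variables.
years = np.arange(1988, 2000)
gdp = np.linspace(5000, 8000, len(years))       # assumed GDP series
pop = np.linspace(360, 375, len(years))         # assumed population series (millions)
T = years - years[0]                            # time proxy variable
rng = np.random.default_rng(1)
inflow = 100 + 0.01 * gdp + 0.2 * pop + 9 * T + rng.normal(0, 5, len(years))

# Ordinary least squares fit of equation (1) with three explanatory variables.
X = np.column_stack([np.ones(len(years)), gdp, pop, T])
beta, *_ = np.linalg.lstsq(X, inflow, rcond=None)

fitted = X @ beta
ss_res = np.sum((inflow - fitted) ** 2)
ss_tot = np.sum((inflow - inflow.mean()) ** 2)
n, k = X.shape
r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k)       # adjusted coefficient of determination
print("beta:", beta.round(3), " R2_adj:", round(r2_adj, 3))
```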

Modelling the product stock outflow


The outflow from the stock depends on two mechanisms: delay and leaching. Delay represents the discarding of products and is determined by the life span of the products. Empirical data on the life span are often not available [10]. Either an average life span or a certain life span distribution can be assumed. Possible types of distribution are the normal, Weibull or beta distributions. In this study, the Weibull distribution has been used, since it has been shown experimentally that the Weibull distribution provides a good fit to many types of lifetime data [11]. The outflow from the societal stock due to discarding is given by equation 2.
Fout(t) = Fin(t − L)   (2)
where Fout(t) is the outflow of goods at time t, Fin is the inflow of goods, and L is the life span.
Leaching refers to the emissions of the substance from the products during the use process.
The emissions during use can be described as a fraction of the stock. For different applications,
an emission rate can be established. For heavy metals for example, there are studies aimed at
establishing a corrosion-leaching coefficient [12]. The outflow during use is given by equation 3.
Fout(t) = C · S(t)   (3)
where S(t) is the size of the stock at time t and C is the leaching factor.
Reuse of products indirectly influences the stock's inflow as well as its outflow. Reuse can be affected by several factors, among them technical and economic factors, which mainly determine the collection rate, and environmental policy aspects. When reuse must be modelled, these factors should be accounted for. In this model, the reused stream is modelled as a fraction of the outflow by discarding, as given by equation 4. On the material and substance level, recycling also plays a role.
R(t) = α · Fout(t)   (4)
where R(t) is the amount to be reused at time t, α is the reuse rate and Fout(t) is the outflow of discarded goods as calculated by equation 2.

Modelling the product stock size


The change of the magnitude of the stock over time is the difference between the inflow and the
outflow as given by equation 5.
dS/dt = Fin(t) − Fout(t)   (5)
By knowing the initial value of the stock and the inflow of goods, it is possible to calculate the
stock as given by equation 6 and the future outflow using equations 2 and 3.

S(t + 1) = S(t) + Fin(t) − Fout(t)   (6)
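The following sketch strings equations 2–6 together for a single product stock. The inflow, Weibull parameters, leaching factor and reuse rate are illustrative assumptions, not the values fitted in the case study, and reused goods are here assumed to remain in the stock:

```python
import numpy as np

years = 60
inflow = np.full(years, 100.0)      # assumed constant inflow of 100 units/year
shape, scale = 3.0, 16.0            # assumed Weibull life-span parameters (mean ~14 yr)
leach_rate = 0.0                    # C in eq. (3); zero for CRTs, >0 for e.g. sheet lead
reuse_rate = 0.1                    # alpha in eq. (4), assumed

# Probability that a product is discarded at age a (difference of the Weibull CDF).
ages = np.arange(0, years + 1)
life_cdf = 1.0 - np.exp(-((ages / scale) ** shape))
life_pdf = np.diff(life_cdf)

stock = np.zeros(years + 1)
for t in range(years):
    # Eq. (2) with the single delay L replaced by a distributed life span.
    discard = sum(inflow[t - a] * life_pdf[a] for a in range(1, t + 1))
    leach = leach_rate * stock[t]            # eq. (3)
    reuse = reuse_rate * discard             # eq. (4); reused goods stay in the stock
    outflow = discard + leach - reuse
    stock[t + 1] = stock[t] + inflow[t] - outflow   # eq. (6)

print(f"stock after {years} years: {stock[-1]:.0f} units")
```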

251

© 2004 by Taylor & Francis Group, LLC


chap-26 19/11/2003 14: 48 page 252

[Figure 1 diagram: lead flows from extraction and refinery of Pb, through production of CRT televisions and monitors, to the use of CRT televisions and monitors and on to waste treatment, with imports (Imp), exports (Exp) and emissions (Em) entering and leaving the chain.]

Figure 1. The CRT life cycle: inflows and outflows of lead.


Figure 2. The inflow of CRT into the stock-in-use in the EU expressed in ktonnes of Pb.

CASE STUDY – LEAD IN CATHODE RAY TUBES

Lead was one of the first metals used by humankind, and its use has been extensive throughout history. Its unique properties, such as its corrosion resistance, high density and low melting point, make it suitable for several applications. A considerable use of lead is its application as a compound in Cathode Ray Tubes (CRTs). A CRT is one of the components of televisions and computer monitors. Lead is used in CRTs as a protection from harmful radiation. The average weight of one CRT is 13 kg, with a lead content of 2 kg. Figure 1 shows the CRT life cycle. The stock of CRTs is a part of this life cycle: it accumulates in the “use” phase.

The inflow of CRT into the societal stock


The inflow of CRTs into the product stock is calculated as the number of CRTs produced within the EU member states, plus the number of CRTs imported from outside the EU, minus the exported number. The inflow of CRTs is shown in Figure 2 [13] and is expressed in terms of the lead it contains, in ktonnes of Pb. To model the inflow, we assume that the inflow of these products is affected by the availability of a viable substitute, by GDP, and by the size of the population.

GDP
The Gross Domestic Product (GDP) is a standard measure of economic production and, as its mirror image, of income, in monetary terms. Poverty and the exploitation of natural resources are intimately connected to


it, either positively or negatively. On a general level, the correlation between GDP and the production and consumption of products holds by definition. For specific products such as CRTs, televisions and computer monitors this relation can be more complex, as some products tend to be used less with higher incomes and others more.

Population
There are several reasons to consider population a determinant of the use of products. In the first place, there is the general rule that it takes more to sustain more people. This refers mainly to basic needs. Nowadays, televisions and computers are shifting from luxury goods to basic items, and therefore a correlation between population and the consumption of these goods may exist.

Substitution
In the CRT system, no viable alternative for lead is known of at present. Some researchers indicate
that it could be possible to replace lead with other materials such as barium, strontium, and zirconium
but no such glass is commercially available and it is not known if these materials can be supplied
in sufficient quantities to meet the demand. It is likely that the use of cathode ray tubes will decline
as a result of the advent of flat screen displays. When forecasting future inflows, this development
can be included under various scenario assumptions.

The outflow of CRT from the societal stock


Since no emission of lead occurs during the use of CRTs, the outflow of CRTs from the stock is determined only by discarding: it equals the amount of discarded CRTs, including those in discarded TVs and PC monitors. The most important leakage to the environment will probably take place after the disposal stage, in waste management including recycling. The average lifetime of CRTs is about 15 years [14].
At the moment, the stream of CRT-containing waste is split between landfill (80%) and incineration (20%) [14]. However, for electrical and electronic equipment, future recycling rates are predicted to increase. Leaded glass could be returned to glass manufacturers for recycling. At present, the glass industry is not doing this because there is no economic incentive to do so [14]. It is likely that the proposed EU Directive on Waste Electrical and Electronic Equipment (WEEE) will change this situation.

EMPIRICAL ANALYSIS AND RESULTS

In this section, a preliminary empirical analysis of the CRT inflow, outflow, stock and waste stream in the EU economy is presented and the model outcome is discussed.

Modelling the inflow of the CRTs into the use system


To assess the relative importance of the explanatory variables for the CRT inflow, regression analysis is used. The independent variables used in the analysis are Gross Domestic Product (GDP), population (Pop), and a time variable (T), which is used as a proxy for the combined influence over time of other variables on the inflow trend. The period of analysis is 1988 to 1999. The fitting algorithm, which determines the regression parameters (β0, β1, β2, β3) in equation 7, uses the ordinary least squares (OLS) criterion [15].

Y (t) = β0 + β1 · GDP(t) + β2 · Pop(t) + β3 · T (t) + ε(t) (7)


where Y(t) is the inflow of goods at time t,
β0 is the overall mean response or regression intercept,
β1, β2 and β3 are the regression parameters, or main effects, of the factors GDP, Pop and T, and
ε(t) is the regression model error term.


Table 1. Results of the analysis of the individual factors on the inflow of CRT (t-values in parentheses).

Estimation  Variables     β0               β1                β2                β3              R2adj   F statistic
1           GDP           47.4 (1.1)       0.01 (3.186)      –                 –               0.43    10.151
2           Pop           2521 (−4.52)     –                 7.3 (4.846)       –               0.65    23.490
3           T             111.76 (8.498)   –                 –                 9.97 (6.10)     0.74    36.237
4           GDP, Pop      −3351 (−2.641)   −0.008 (−0.731)   9.7 (2.679)       –               0.63    11.517
5           GDP, T        133.5 (3.51)     −0.004 (−0.613)   –                 11.6 (3.629)    0.73    17.279
6           Pop, T        4811 (1.847)     –                 −13 (−1.804)      26.5 (2.855)    0.78    23.465
7           GDP, Pop, T   9225 (2.205)     0.015 (1.319)     −25.42 (−2.173)   37.1 (3.084)    0.8     17.384


Figure 3. Measured and calculated inflow of CRT, expressed in ktonnes of Pb.

Table 1 shows the results of the regression analysis. Estimations 1 and 2 show a positive correlation between the GDP and population variables and the inflow, with a fairly high coefficient of determination (R2adj). Estimation 3 shows a positive correlation between the inflow of CRT and T. The correlation between the inflow of CRT and each of the three variables separately is significant at the 95% probability level. The results indicate that the factors included in T (technological developments, substitution and maybe others) are important in determining the shape of the inflow, and should therefore be investigated further. The results also show that the coefficient β3 has a positive value, which means the inflow will increase over time. This indicates that in the past, substitution has not had any influence on the inflow shape; in the future, this may be different. It is clear from estimations 4, 5 and 6 that combining the variables improves the overall correlation. When all three variables are included in the regression equation (estimation 7), R2adj has the highest value. The t-test for the individual coefficients shows that in this equation GDP and population are not significant. The coefficient for population has an unexpected negative sign, though this coefficient is statistically insignificant. However, the F-test indicates that all the independent variables taken together are significant and contribute to the shape of the inflow. Therefore, the following model is used to calculate the inflow of CRT:

Inflow(t) = 9225 + 0.015GDP(t) − 25.42Pop(t) + 37.1T (t) (8)

Figure 3 shows the difference between the measured inflow of CRT and the inflow calculated from the regression model in equation 8.



Figure 4. Projected future inflow and outflow of CRT in the EU, expressed in ktonnes of Pb, based on the “baseline scenario” and a Weibull-distributed life span.


Figure 5. Amount of lead in the CRT stocks in the EU.

Modelling the outflow of CRTs


The outflow from the stock of the CRT system consists of the discarded CRTs, TVs and monitors. This flow is mainly determined by the life span of the CRT in these applications. The life span is assumed to be distributed in time; a Weibull distribution is used, assuming a minimum life span of 10 years, a maximum of 25 years and a most likely life span of 15 years. The outflow is included in Figure 4.

Modelling the future inflow and outflow of CRTs


The future inflow of CRT is calculated from the regression model given by equation 8 and projected values of the variables GDP and population. Projections are taken from a study for the EU 6th Action Programme [9]. An average GDP growth for the EU of 2.4% per year is projected from 2001 to 2010, slowing to 1.8% per year between 2011 and 2020, and 1.7% per year between 2020 and 2030. The population of the EU is expected to increase slightly during the first decade of this millennium. After 2010, the rate of population growth falls and is expected to stabilise after 2020 [9]. The future outflow of CRT is calculated from the past and future inflow of CRT and the Weibull distribution of the life span. Figure 4 shows the future inflow and outflow of CRT in the EU. The possible substitution of CRT by flat screen displays is as yet ignored.
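A sketch of this projection step is given below: equation 8 evaluated with the scenario growth rates quoted above. The 1999 starting values for GDP and population are placeholders (the paper does not state the units used in the regression), so the printed numbers are purely illustrative.

```python
# Equation (8) evaluated with the scenario growth rates; starting values are assumptions.
gdp, pop = 8000.0, 375.0      # placeholder 1999 values for EU GDP and population
T = 11                        # time index, with T = 0 in 1988

for year in range(2000, 2031):
    growth = 0.024 if year <= 2010 else (0.018 if year <= 2020 else 0.017)
    gdp *= 1 + growth
    if year <= 2010:
        pop *= 1.001          # assumed slight population growth, stabilising later
    T += 1
    inflow = 9225 + 0.015 * gdp - 25.42 * pop + 37.1 * T   # equation (8)
    if year % 10 == 0:
        print(year, round(inflow, 1), "ktonnes Pb (illustrative)")
```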

Modelling the stock’s size of CRT


The only determinants of the change of the CRT in-use stock over time, as described by equation 6, are the stock's inflow and outflow, as calculated above. In addition, the initial magnitude of the stock is needed; it corresponds to the number of TVs and computers owned by people in the EU member states in 1988. These figures can be found in a UNDP overview [16]. The development of the CRT stock size is shown in Figure 5.



Figure 6. Future recycled, landfilled, and incinerated lead flows of CRT in the EU.

Waste stream of CRT


The draft WEEE Directive states that 75% of the collected equipment containing cathode ray tubes should be recycled or recovered. With a collection rate of 25% or 50%, this implies overall recycling rates of 19 to 37.5%. An average recycling rate of 26.75% has been used in modelling the future recycled stream. It is also assumed that 80% of the remaining waste stream will be landfilled and 20% will be incinerated, which implies that 59% of the total discarded stream will be landfilled and 14.75% will be incinerated. The results for the three streams based on these assumptions are shown in Figure 6.
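The arithmetic behind these split fractions can be sketched as follows (the paper rounds the resulting percentages slightly):

```python
# Overall recycling rate implied by the draft WEEE targets at two collection rates.
recovery_target = 0.75                 # 75% of collected CRT equipment recycled/recovered
for collection in (0.25, 0.50):
    print(f"collection {collection:.0%}: overall recycling {recovery_target * collection:.2%}")

# Split of the total discarded stream used in the model.
recycling = 0.2675                     # average recycling rate assumed in the paper
landfill = 0.80 * (1 - recycling)      # 80% of the non-recycled remainder
incineration = 0.20 * (1 - recycling)  # 20% of the non-recycled remainder
print(f"recycling {recycling:.1%}, landfill {landfill:.1%}, incineration {incineration:.1%}")
```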

CONCLUSIONS

The flows of a certain substance in and through the economy are basically determined by the
economic demand for its applications. This implies that in order to model substance flows, the
materials wherein the substance occurs should be a central modelling issue, in turn based on the
analysis of the products the materials are applied in. In contrast, the outflows of the substance out of
the economy in the shape of waste and emissions are basically determined by the physical-chemical
properties of the substance. These properties determine the losses during use and the possibilities
for recycling. A substance flow model therefore must include both economic and physical-chemical
variables.
In this paper, a method is presented to model the inflow of products, based on socio-economic
variables such as GDP and population size. This method is applied to the case of lead applied in
cathode ray tubes.
It appears that the model leads to good results: for a time series in the past, the measured inflow compares well with the modelled inflow. In principle, the same equation can be used to model the future inflow; prognoses for GDP, population etc. must then be used instead of the measured values. The model forecasts, however, are only valid assuming that no unforeseen changes occur. Such changes would render the inflow equation useless.
In addition, a method is presented to model the product outflow based on two physical mechanisms: leaching and delay. Leaching is modelled as a fraction of the stock. Delay is determined by the products' life span. In the case of lead in CRTs, a Weibull distribution was used. The results of the outflow model have not yet been compared with real data.

OUTLOOK

The approach presented in this paper appears to work well on the product level. The next step is to
integrate the models for the different products into one framework at the substance level. The idea
is that the result will be more than the sum of its parts. On the one hand, the flow of the substance


is also determined by certain characteristics of the substance itself, which can be both functional and economic. On the other hand, recycling can be modelled most adequately at the substance level. This will be the subject of further research.

REFERENCES

1. Van der Voet, E., Substance from Cradle to Grave: Development of a Methodology for the Analysis
of Substance Flows Through the Economy and the Environment of a Region, Ph.D. Thesis, Leiden
university, Leiden, 1996.
2. Bringezu, S., Fischer-Kowalski, M., Kleijn, R. and Palm, V., Regional and National Material Flow
Accounting: From Paradigm to Practice of Sustainability, Proceedings of the ConAccount Workshop,
Leiden, January 21–23, 1997.
3. Bergbäck, B. and Lohm, U., Metals in Society. In: Brune, D. and Chapman, V., (Ed.), The Global
Environment – Science, Technology and Management, Scandinavian Scientific Press, Oslo, pp. 276–289,
1997.
4. Boelens, J. and Olsthorn, A., Software for Material Flow Analysis. In: Vellinga, V., Berkhout, F. and
Gupta, J., (Ed.), Sustainable sustainability, Kluwer, Dordrecht, pp. 115–130, 1998.
5. Kleijn, R. and Van der Voet, E., Chlorine in Western Europe, a MacTempo Case Study, Materials
Accounting as a Tool for Decision Making in Environmental Policy, Centre of Environmental Science
(CML), Leiden, April 1998.
6. Tilton, J., World Metal Demand – Trends and Prospects, Resources for the Future, Washington D.C.,
1990.
7. Moore, D. and Tilton, J., Economic Growth and the Demand for Construction Material, Resources
Policy, Vol. 22, No. 3, pp. 179–205, 1996.
8. International Lead and Zinc Study Group, Lead in Batteries, ILZSG, London, 1999.
9. Research for Man and Environment (RIVM), European Environmental Priorities: An Integrated
Economic and Environmental Assessment, RIVM, The Netherlands, 2001.
10. Kleijn, R., Huele, R. and Van der Voet, E., Dynamic Substance Flow Analysis: the Delaying Mechanism
of Stocks, With the Case of PVC in Sweden, Ecological Economics, Vol. 32, No. 2, pp. 241–254,
2000.
11. Melo, M., Statistical Analysis of Metal Scrap Generation: the Case of Aluminium in Germany,
Resources, Conservation and Recycling, Vol. 26, pp. 91–113, 1999.
12. Bentum, F., Verstappen, G. and Wagemaker, F., Watersysteemverkenningen: Een Analyse Van de
Problematiek in Aquatisch Milieu, WSV-doelgroepstudie Bouwmaterialen, Ministerie van Verkeer en
Waterstaat, The Netherlands, 1996.
13. Central Bureau of Statistics (CBS), Production & Trade Statistics, Eurostate, Voorburg, 2000.
14. Tukker, A., Buijst, H., Oers, L. and Van der Voet, E., Risk to Health and the Environment Related to
the Use of Lead in Products, TNO, Delft, The Netherlands, 2001.
15. Gijbels, I. and Rousson, V., A Non Parametric Least Square Test for Checking a Polynomial Relationship,
Statistics & Probability Letters, Vol. 51, pp 253–261, 2001.
16. United Nation Development Program (UNDP), Human Development Report, Oxford University Press,
New York, April 1992.


Comparison of fouling data from alternative cooling water sources

Malcolm Smith∗ & Andrew Jenkins


Process Centre NEL, East Kilbride, UK

Colin Grant
Department of Chemical and Process Engineering, University of Strathclyde, Glasgow, UK

ABSTRACT: The process industry in the UK uses vast quantities of potable water for process
cooling. It is possible that this procedure could be carried out using lower quality water but this
may cause heat exchanger fouling problems. Tests were done on canal water and sewage effluent in
NEL’s cooling water fouling circuit to compare them with potable water. Fouling data were gathered
as plots of fouling resistance against time. It was found that the sewage effluent fouled rapidly and
severely but that the canal water performed similarly to the current potable supply and would be
suitable for further investigation.

INTRODUCTION

The process industry in the UK uses large quantities of potable water for applications where a
lower water quality would suffice. The amount of water that is taken from good quality sources is
increasing but the volume available is not. One way to lessen this problem is to match the quality
of available water supplies with the end user’s requirements. If water from an alternative source
replaces an equivalent volume of potable water then this could be used elsewhere and may mean
that new storage or treatment facilities for potable water do not have to be created. An additional
benefit to the user will be that any new water supply is likely to be cheaper than mains water, which is subject to price restrictions [1].
NEL co-ordinated a UK government funded project that sought to draw attention to the possi-
bilities that exist for the sustainable use of water in the process industry. One key aspect of this was
investigating the use of water from alternative sources for cooling. Process cooling accounts for
about 40% of industrial water use [2] and is an application where the quality of the water used is
not as important as in others. The ability to tolerate relatively low water quality and the potentially
large amount of water that could be replaced makes cooling ideal for investigation.

Heat exchanger fouling


A major problem experienced in cooling circuits is heat exchanger fouling. Fouling is usually
defined as the presence of an undesirable deposit on a heat transfer surface that increases the
resistance to heat transfer. It occurs in a number of different forms but those most commonly
associated with cooling water are scaling of inverse solubility salts, biofilm growth and corrosion.
These are typically controlled by the addition of a combination of biocides, dispersants and corrosion
inhibiting chemicals.
Kern and Seaton [3] modelled fouling by stating that it was a balance between two competing
processes: deposition and removal of the foulant. This is described mathematically by equation 1.

∗ Corresponding author. e-mail: malcolmsmith2001@yahoo.co.uk


dRf/dt = φd − φr   (1)

The deposition rate was taken to be proportional to the rate at which the foulant passes the heat
transfer surface and the removal rate was said to be dependent on the deposit thickness and the
shear stress on the deposit. An asymptotic resistance occurs when the deposition and removal rates
are equal.
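A short numerical sketch of this balance is given below, with the removal rate taken as proportional to the fouling resistance so that the asymptote φd/k emerges; the rate constants are arbitrary and not taken from the paper.

```python
# Euler integration of eq. (1) with phi_r assumed proportional to Rf (phi_r = k * Rf).
phi_d = 2.0e-5      # assumed constant deposition rate, m2K/W per day
k = 0.2             # assumed removal rate constant, 1/day
dt, days = 0.1, 60.0

Rf, t = 0.0, 0.0
while t < days:
    Rf += (phi_d - k * Rf) * dt
    t += dt

print(f"Rf after {days:.0f} days: {Rf:.2e} m2K/W  (asymptote phi_d/k = {phi_d / k:.2e})")
```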
Taborek et al. [4] proposed alternative equations for the deposition and removal rates based on
the factors that had been observed to affect fouling. Thus, the deposition rate was related to the
surface temperature and the water quality and the removal rate was said to be dependent on the
deposit thickness, shear stress and the strength of the deposit.
The costs associated with fouling are enormous but difficult to quantify, as they arise in many different areas, such as additional capital, operating and maintenance costs and revenue lost through lost production time. Howarth et al. [5] estimate the cost of fouling in Europe to be about €5000 million. Clearly, changing water supplies could increase the amount of fouling in a system, and one important aspect of the project is ensuring that using alternative supplies will not add to this already substantial cost.
A recent description of some of the other issues associated with heat exchanger fouling is provided
by Bott [6].

Process cooling
One of the key operating parameters in an evaporative cooling circuit is the concentration ratio.
This relates the concentration of impurities in the system to the concentration of impurities found
in the water that is added to the system to replace evaporative losses. This is important in two ways.
Operating at higher ratios reduces the water requirement since less needs to be blown down from
the system to maintain the impurity concentration but the higher impurity concentration makes the
system more prone to fouling. When the water is less pure than potable water, it is possible that
cooling systems would have to be operated at lower ratios or to be treated more against the effects
of fouling to maintain performance.
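The water saving at higher concentration ratios follows from the standard evaporative cooling tower mass balance (not taken from the paper): with drift neglected, make-up equals evaporation plus blowdown, and the concentration ratio equals make-up divided by blowdown.

```python
# Standard cooling-tower mass balance: blowdown = evaporation / (ratio - 1).
def water_demand(evaporation, ratio):
    blowdown = evaporation / (ratio - 1.0)
    makeup = evaporation + blowdown
    return makeup, blowdown

evaporation = 10.0                    # m3/h, assumed evaporative loss
for ratio in (1.5, 2.0, 4.0, 5.0):    # ratios spanning those used in the test programme
    makeup, blowdown = water_demand(evaporation, ratio)
    print(f"ratio {ratio}: make-up {makeup:.1f} m3/h, blowdown {blowdown:.1f} m3/h")
```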
It is not unusual for cooling circuits to be operated with non-potable water; indeed many power
stations are situated next to the sea so that a large cooling water source is available. The equipment
in these plants has usually been designed with the intention that low quality water will be used e.g.
using non-corrosive materials and allowing for fouling in the design of equipment. If the equipment
is not designed for this purpose it may not function as intended in the event of a different water
supply being used.
The aim of this work was to show that water from some alternative sources could be used as
a cooling medium without deteriorating heat transfer performance. This would be achieved by
comparing the fouling data obtained from an experimental cooling circuit.

EXPERIMENTAL FACILITY

NEL has over 20 years experience of fouling monitoring of aqueous streams. The Process Fouling
Assessment Unit (PFAU) is one of the most established rigs and consists of nine NEL fouling
monitors arranged in a 3 × 3 matrix. The NEL fouling monitor is a sidestream monitor consisting of
a metallic tube that is interference fitted into an aluminium sleeve. The sleeve is heated by a coiled
electrical heating element, while the tube wall temperature is measured by thermocouples embedded
in the sleeve close to the tube. The fluid temperature is measured by PRTs upstream and downstream
of the heated section while a precision wattmeter measures the electrical power to the heater. The
fouling resistance is obtained from the change in the overall heat transfer coefficient over time.
This is done by continuously evaluating equations 2 and 3.


U = Q / [A · (Tw − (Ti + To)/2)]   (2)

Rf = 1/U − 1/U0   (3)
The benefits of different fouling monitoring techniques and in particular those of the NEL fouling
monitor are presented in Glen et al. [7].
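A minimal sketch of how equations 2 and 3 turn the monitor readings into a fouling resistance is given below; the measured values are invented for illustration, with the heated area chosen to match the 90 kW/m² flux mentioned later.

```python
# Overall heat transfer coefficient, eq. (2), and fouling resistance, eq. (3).
def heat_transfer_coeff(Q, A, Tw, Ti, To):
    """U = Q / (A * (Tw - (Ti + To) / 2)), with temperatures in degC and Q in W."""
    return Q / (A * (Tw - (Ti + To) / 2.0))

Q = 900.0      # W, electrical power to the heater (90 kW/m2 over the assumed area)
A = 0.01       # m2, heated tube area (assumed)

U0 = heat_transfer_coeff(Q, A, Tw=45.0, Ti=20.0, To=24.0)   # clean condition
U  = heat_transfer_coeff(Q, A, Tw=48.0, Ti=20.0, To=24.0)   # after some fouling

Rf = 1.0 / U - 1.0 / U0
print(f"U0 = {U0:.0f} W/m2K, U = {U:.0f} W/m2K, Rf = {Rf:.2e} m2K/W")
```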
The arrangement of monitors in the PFAU allows assessment of the effects of a number of
parameters at once such as fluid velocity, heat flux and tube material. In addition, the PFAU
contains instruments for monitoring the pressure drop across the test sections and water quality
parameters including pH, conductivity, turbidity and redox potential. The data is logged continually
on a PC and is transferred from the rig via an RF modem.
In previous cooling water studies, the PFAU was used in a once-through mode, with the water being taken from an existing cooling circuit and returned to the circuit, or to drain, after being tested. In this instance, the cooling circuit did not exist and, to show the effects of recirculating
the water, a pilot scale cooling circuit was constructed using the PFAU as the heat source. The
nature of the rig is such that additional monitoring, or treatment, equipment can be added as
required.
Sand filtered water was pumped through the PFAU where it was heated. It was then cooled in
an evaporative cooling tower and passed to a mixing tank from which it was recirculated through
the PFAU. Evaporation caused some water to be continually lost from the system and, to control
the impurity concentration, more was removed by an automatic hourly blowdown. These losses
were replaced by topping up the tank with more filtered water via a level control valve. The make
up water was continually pumped through the filter to give a good quality filtrate with any excess
going to drain. A schematic diagram of the facility is shown in Figure 1.
The water was continuously dosed with sodium hypochlorite solution to maintain the level
of free chlorine in the system at a level that would control the growth of potentially harmful
bacteria.

Process fouling
assessment unit
Cooling
tower
Sand
filter

Blowdown
Mixing
tank Make-up

Effluent discharge
channel or canal

Figure 1. Experimental cooling rig.


EXPERIMENTAL PROGRAMME

The work was undertaken for Grangemouth Water Users Group (GWUG)∗ so the sources tested
had to be relatively convenient for this area. A number of chemical and petrochemical companies
are based in Grangemouth, Scotland and currently take all of their cooling water from the mains
supply.
Secondary sewage effluent and water from a local canal were identified as being available in
sufficient quantities to have the potential to be used as cooling water. Water from the canal, the
effluent and the mains were tested in the cooling circuit. For the tests involving the sewage effluent
and the canal water, the water was continually pumped from the source through the sand filter
but because there was no mains supply at the test location it was supplied in batches. It was not
necessary to filter this water. The mains water was used to compare the fouling from the alternative
sources with those of the current supply.
Tests were carried out at two concentration ratios to allow the effect of this to be tested. A low
ratio of 1.5–2 and a higher ratio of 4–5 were used. These were representative of the concentration
ratios currently used on plants in Grangemouth with mains water.
The PFAU was set up with monitors made from tubes of stainless steel, mild steel and admiralty
brass. Velocities were chosen that were representative of those used on the plants to allow the effect
of this to be investigated. Each of the three lines was set up with a different velocity (0.5, 1.2 and
2.0 m/s) and one of each of the monitor types. To encourage the rapid development of fouling, the
heat flux into the monitors was set at 90 kW/m2 , which was as high as the equipment would allow.
Automatic samplers took composite water samples of the makeup water and the water in the
circuit over a 24-hour period to allow analysis for some key chemical parameters.
The tests were allowed to run until asymptotic fouling was observed in the majority of monitors
or when the heaters switched off as a result of an automatic high wall temperature cut out.

RESULTS AND DISCUSSION

Fouling trends
Fouling data were gathered as plots of fouling resistance (Rf) against time using equations 2 and 3. These showed four distinct trends over the course of the tests, which are illustrated in Figure 2. Tests ended prematurely if the over-temperature cut-out was activated.
Those that fouled, but did not reach an asymptote, could have produced this profile for a number of
reasons. Fouling in these cases appeared to be approximately linear, although they may have reached
an asymptote had the tests lasted longer. For those tests that activated the high temperature cut out,
fouling may have progressed towards an asymptote had the limit been set higher. For those that
exhibited no fouling, it is possible that the tests were stopped before fouling had started.
Corrosion
When the sewage effluent was tested, all the monitors fouled heavily except the mild steel monitors
at the two higher velocities. On inspection it was found that these monitors were severely corroded.
If these were corroded at the start of the test, the heat transfer performance may not have deteriorated
further during the course of the test. Corrosion of the mild steel monitors proved to be a problem
throughout the duration of all the tests, so much so that they had to be replaced part way through
the test programme. Although these were passivated before use, the new mild steel monitors were
also subject to severe corrosion. While this is a valuable observation in itself, the fouling data from
these is less predictable than from those monitors made from more inert materials and will not be
considered further here. A summary of the other fouling data obtained is shown in Table 1.

∗ GWUG were East of Scotland Water, British Waterways, BP, Avecia, Enichem, ONDEO Nalco, SEPA and
Cal Gavin. These companies provided additional funding for the project. East of Scotland Water are now part
of Scottish Water.


[Figure 2 sketch: fouling resistance versus time, showing the four observed profiles (fouling but no asymptote, premature end to test, asymptotic fouling, no fouling).]

Figure 2. Different fouling profiles observed in tests.

Table 1. Fouling data.

Water type   Tube material   Velocity (m/s)   Run time (days)   Fouling trend   Final Rf (m² K/kW)

Effluent Stainless 0.5 1 premature end 0.17


low ratio 1.2 5 premature end 0.18
2.0 12 no asymptote 0.07
Brass 0.5 2 premature end 0.16
1.2 12 no asymptote 0.50
2.0 12 no asymptote 0.10
Canal Stainless 0.5 6 premature end 0.19
low ratio 1.2 35 asymptotic 0.12
2.0 35 no fouling n/a
Brass 0.5 7 premature end 0.25
1.2 35 asymptotic 0.06
2.0 35 no fouling n/a
Canal Stainless 0.5 15 asymptotic 0.14
high ratio 1.2 15 asymptotic 0.10
2.0 15 asymptotic 0.04
Brass 0.5 15 asymptotic 0.12
1.2 15 asymptotic 0.06
2.0 15 asymptotic 0.03
Potable Stainless 0.5 9.5 asymptotic 0.17
low ratio 1.2 9.5 no asymptote 0.06
2.0 9.5 no fouling n/a
Brass 0.5 9.5 asymptotic 0.16
1.2 9.5 no fouling n/a
2.0 9.5 no fouling n/a
Potable Stainless 0.5 13 no asymptote 0.07
high ratio 1.2 11.5 no fouling n/a
2.0 13 no fouling n/a
Brass 0.5 13 no asymptote 0.07
1.2 13 no fouling n/a
2.0 13 no fouling n/a


[Figure 3 plot: fouling resistance (m²K/W, scale up to 2.0E-04) versus time (0-15 days) for stainless steel and admiralty brass monitors at 0.5 and 2.0 m/s.]

Figure 3. Effect of tube material on fouling.

Material effect
Comparing the fouling curves produced by the stainless steel and the admiralty brass monitors
suggests that the monitor type is unimportant with regard to the shape of the fouling curve.
However, for those tests that did not stop prematurely, the brass monitors consistently showed
lower fouling resistances. It has been suggested that copper alloys reduce the severity of biofouling
[8] which would mean a reduced fouling resistance at any given time. There is no other evidence
from these tests to back this up.
Another possible reason for this could be the fouling measurement technique. It has been shown
that the surface temperature affects the deposition rate but with the monitors being operated with
a constant heat flux, the surface temperatures should be the same. Because the measured wall
temperature, which is used to calculate U and therefore Rf , is not the actual temperature at the
surface, these temperatures are quite different for different materials. The full effect of this will
have to be investigated further. The effect of the tube material on fouling is shown in Figure 3 with
data taken from the canal water test at the high ratio.

Velocity effect
From Table 1 it can be seen that in those tests that did not end prematurely there is a relationship
between the velocity and the final fouling resistance. Figure 4 illustrates the change in fouling
resistance with time with data taken from the canal water test at the high concentration ratio. The
tubes were cleaned between tests and the deposits were always found to be soft and come off
easily. Thus, the increased surface shear at the higher velocities will have increased the removal
rate described in equation 1 and thus lowered the value at which the deposition and removal rates
are balanced.

Sewage effluent
The sewage effluent at the low concentration ratio caused far more severe fouling than either the
canal or potable water at low or high concentration ratios. In fact, this was so rapid and severe that
the tests on the effluent could only be carried out at the low concentration ratio, despite the tested
water being chlorinated and filtered. The deposits were found to be a mixture of organic and mineral
material, suggesting that biomatter was present in the system and was getting a sufficient supply of
nutrients to produce a biofilm on the heated surfaces. This was backed up by the large amount of


[Figure 4 plot: fouling resistance (m²K/W, scale up to 2.0E-04) versus time (0-15 days) at 0.5, 1.2 and 2.0 m/s.]

Figure 4. Effect of velocity on fouling resistance.

hypochlorite solution used during the course of the test, which indicates that there was a continual
source of oxidisable material.
It is clear that before this water can be used successfully for cooling, it will have to undergo
further treatment to remove biological matter and the nutrients in the water that could supply a
biofilm. A treatment such as reverse osmosis would be suitable because it removes over 90% of
dissolved species and also removes suspended matter.

Canal water
The results for the canal water can be seen in Table 1. At 2.0 m/s the only fouling that was witnessed
was with the high concentration canal water, which produced an asymptote at about 0.04 m2 K/kW.
In reality, this is some way below what would be used as a design value, so the water could be used under these conditions without any operational problems. The fouling resistances
specified by TEMA for cooling tower water are in the range 0.17–0.33 m2 K/kW [9]. At velocities
of 1.2 and 0.5 m/s, the experimental value increases although even these asymptotic values are still
less than the TEMA values.
Fouling at the low ratio, when it occurred, was worse than at the high ratio, but this may have been
because the bulk water temperature was much lower in the high ratio test. The development of
biofilms depends on the bulk water temperature and is seen to increase rapidly above a threshold
of about 15°C. The water in the low ratio test was generally above 15°C, whereas in the high
ratio test it was always below this value. No fouling was observed at the highest velocity. At the
lowest velocity the over-temperature cut-out was reached within one week. At 1.2 m/s asymptotic
fouling resistances of about 0.13 m2K/kW were seen. These values are also below the upper
limit of the TEMA range.
The tests using canal water did not produce a sufficient quantity of deposit for analysis but the
hypochlorite solution was used at a much lower rate than was observed with the effluent. This
suggests there was less biological matter in the incoming water or that the filter removed it.

Potable water
The potable water did not foul at 1.2 m/s or 2.0 m/s in either the low or the high ratio tests. This
is similar to the findings with the canal water, where only a small degree of fouling was found
at either velocity. At the lowest velocity there was again more fouling at the low ratio than at the
high ratio, as shown in Figure 5. This result conflicts with the expected behaviour, particularly as
the bulk temperatures were similar. Because this water arrived in batches, it is possible that


Figure 5. Comparison between canal water and potable water at high ratio and 0.5 m/s (fouling resistance versus time; curves shown for canal and potable water at both low and high concentration ratios).

the chemistries of the different batches differed. The batches were obtained from the same
hydrant, but the water may have originated from different sources.
These results indicate that, over the relatively short duration of these tests, the canal water
performs in a similar way to the current supply and would not require any more treatment than is
presently employed. Longer tests using the existing treatment regime would be required to prove this.

CONCLUSIONS

The experiments sought to show that water from alternative sources could replace some of the
potable water used for cooling in Grangemouth. Two sources, secondary sewage effluent and canal
water, were thought to be suitable. The effect of the water on heat exchanger fouling in a cooling
circuit was examined at different velocities and concentration ratios and with different exchanger
tube materials.
The sewage effluent fouled rapidly and severely and would be unsuitable for use without further
treatment. Water from the canal showed similar fouling characteristics to the current potable supply
and these were less severe than those observed with the effluent. At the higher velocities there was
only a small amount of fouling from both the canal and potable water. At the lowest velocity, fouling
was observed but this reached an asymptotic value that was below commonly used design fouling
values.
The initial results indicate that water from the canal could be used as cooling water without
needing treatment on site beyond what is already given to the mains water.

ACKNOWLEDGEMENTS

Malcolm Smith is an Associate of the Postgraduate Training Partnership (PTP) between the National
Engineering Laboratory (NEL) and Strathclyde University. The PTP scheme is a joint initiative
of the UK’s Department of Trade and Industry (DTI) and the Engineering and Physical Sciences
Research Council (EPSRC). The PTP scheme is supported by a grant from the DTI. Malcolm Smith
gratefully acknowledges grant support from both the EPSRC and NEL. Financial support for the
work described in this paper was provided by GWUG, whose support is gratefully acknowledged.


NOMENCLATURE

Rf Fouling resistance
t Time
φd Rate of deposition of foulant onto heat transfer surface
φr Rate of removal of foulant from heat transfer surface
U Overall heat transfer coefficient
U0 Overall heat transfer coefficient in clean condition
Q Electrical power input
A Heat transfer area
Tw Tube wall temperature
Ti Inlet fluid temperature
To Outlet fluid temperature

REFERENCES

1. OFWAT report, Final determinations: future water and sewerage charges 2000–05, 1999.
2. Latham, B., An Introduction to Water Supply in the UK, The Institution of Water and Environmental
Management, London, 1994.
3. Kern, D.Q., and Seaton, R.E., A Theoretical Analysis of Thermal Surface Fouling, Brit. Chem. Eng.,
Vol. 4, No. 5, pp 258–262, 1959.
4. Taborek, J., Aoki, T., Ritter, R.B., Palen, J.W. and Knudsen, J.G., Predictive Methods for Fouling
Behavior, Chem. Eng. Prog., Vol. 68, No.7, pp 69–78, 1972.
5. Howarth, J.H., Glen, N.F. and Jenkins, A.M., The Use of Fouling Monitoring Techniques, Understanding
Heat Exchanger Fouling and its Mitigation, Italy, May, 1997.
6. Bott, T.R., To Foul or Not to Foul – That is the Question, Chem. Eng. Prog., Vol. 97, No.11, pp 30–37,
2001.
7. Glen, N.F., Jenkins, A.M. and Howarth, J.H., Advanced Performance Monitoring Techniques, NEL/RSC
seminar: Sustainable Industrial Water Use, Edinburgh, April 9, 2001.
8. Müller-Steinhagen, H., Heat Exchanger Fouling, Mitigation and Cleaning Technologies, Institution of
Chemical Engineers, Rugby, 2000.
9. TEMA, Standards of the Tubular Heat Exchanger Manufacturers Association, Tubular Heat Exchanger
Manufacturers Association, New York, 1988.


Influence of the critical sticking velocity on the growth rate of particulate fouling in waste incinerators

M.S. Abd-Elhady∗, C.C.M. Rindt, J.G. Wijers & A.A. van Steenhoven
Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven,
The Netherlands

ABSTRACT: Gas side fouling of waste-heat recovery boilers is mainly caused by deposition of
particulate matter. The influence of the critical sticking velocity on the growth rate of particulate
fouling layers is described. The critical sticking velocity (CSV) of an incident particle hitting a
powdery layer is defined as the minimum impact speed at which a particle can rebound from the
surface. Since the CSV is a key parameter in the deposition mechanism, a well-defined experimental
set-up has been built to determine it. Experimental results show that the CSV increases with the
porosity of the fouling layer. A correlation is made between the CSV and the fouling layer thickness,
based on the experimental results and on the variation of porosity with thickness for thin sintered
powdery fouling layers as modelled by Todorov et al [1]. This correlation shows that the sticking
velocity decreases exponentially as the thickness increases. Therefore fewer particles are likely to
stick as the fouling layer builds up and consequently the deposition rate decreases. The change in
the critical sticking velocity as the fouling layer builds up contributes to the explanation of the
asymptotic growth of particulate fouling layers on the tube bundles of waste incinerators.

INTRODUCTION

Particulate fouling is defined as the deposition of unwanted materials (particles) on a heat exchange
surface. The fouling layers observed on the tube bundles of the economiser in a Dutch waste incinerator
were thin and powdery, owing to the low temperature of the flue gases. In the superheater
of the waste incinerator the flue gas temperature is so high that melting of the powdery
layer occurs. Because of melting, the layer becomes sticky, which leads to an increased sticking
probability and, therefore, a thicker layer.
In this study attention is given to particulate fouling of economizers. Figure 1 shows the fouling
factor Rf , the thermal resistance of the fouling layer, as a function of time for the economizer
tubes, as measured by van Beek et al [2]. The fouling factor shows an asymptotic behaviour,
levelling off at a constant value; at the same time the fouling layer thickness became constant. The growth
rate of the fouling layer is equal to the difference between the deposition rate and the removal rate.
Therefore, in order to estimate the growth rate, both the deposition rate and the removal rate have
to be known.
An important parameter that determines the deposition of an incident particle on the fouling
layer is the critical sticking velocity. The critical sticking velocity is defined as the minimum
impact speed at which a particle hitting a powdery layer can rebound from the surface. In this
study the influence of the critical sticking velocity on the deposition rate, and consequently on the
fouling rate, is investigated.
∗ Corresponding author. e-mail: M.S.Abdelhady@tue.nl


Figure 1. Fouling resistance Rf (m2K/W) as a function of time t (hr) for the economizer tube bundle, van Beek et al [2].

As the fouling layer builds up, its thermal resistance increases, causing the temperature difference
across the layer to increase and resulting in increased sintering of the fouling layer. Sintering is defined
as the bonding and agglomeration of particles due to temperature elevation. Todorov et al [1] determined
a relation between the pore diameter of sintered bronze and the specimen thickness,
$$D_{max} = D_{max}^{eq}\left(\frac{\delta}{20\,d_{p,m}}\right)^{-0.282} \qquad (1)$$

with Dmax defined as the instantaneous value of the maximum pore diameter in a specimen of given
thickness, Dmax^eq the equilibrium value of the maximum pore diameter at a layer thickness δ greater
than the critical thickness, which is measured to be 20 dp,m, and dp,m the particle mean diameter. Since porosity
is proportional to the pore diameter, eq. (1) can be written as,
$$P = P_{\infty}\left(\frac{\delta}{20\,d_{p,m}}\right)^{-0.282} \qquad (2)$$

where P is the porosity of the layer and P∞ is the final porosity attained at a thickness δ equal to
20 dp,m. The exponent value 0.282 for sintered porous bronze is assumed to be the same as for a
sintered porous ash deposit. When the thickness δ of the sintered layer is greater than 20 dp,m, the
porosity of the layer no longer changes. A supplementary condition is therefore added to eq. (2),

$$P = P_{\infty} \quad \text{for } \delta > 20\,d_{p,m} \qquad (3)$$

From eq. (2) it can be concluded that, as the thickness of a sintered powdery layer increases, the
porosity decreases. Figure 2 shows the variation of porosity with thickness for a sintered powdery
layer with a particle diameter of 54 µm and a final porosity, P∞, of 0.54. In this study the change in
the sticking velocity is measured as a function of the fouling layer porosity. From the experimental
measurements and the model of Todorov et al [1] a correlation can be made between the sticking
velocity and the fouling layer thickness. From this correlation it can be shown how the deposition
rate varies as the fouling layer grows.
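The curve of Figure 2 can be reproduced directly from eqs. (2) and (3); a short sketch for the quoted parameters (dp,m = 54 µm, P∞ = 0.54) is given below. Clipping the porosity to the physical bound of 1 at very small thicknesses is an added assumption, not part of eq. (2).

```python
import numpy as np

# Porosity of a thin sintered layer versus thickness, eqs. (2) and (3),
# for the Figure 2 parameters (d_pm = 54 micron, P_inf = 0.54).

def porosity(delta, d_pm=54e-6, p_inf=0.54):
    delta = np.asarray(delta, dtype=float)
    p = p_inf * (delta / (20.0 * d_pm)) ** -0.282      # eq. (2)
    p = np.where(delta > 20.0 * d_pm, p_inf, p)        # eq. (3)
    return np.minimum(p, 1.0)                          # physical bound (assumption)

for d in np.linspace(2e-4, 1.2e-3, 6):
    print(f"delta = {d*1e3:4.2f} mm -> P = {porosity(d):.2f}")
```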


Figure 2. Porosity versus thickness for a thin sintered fouling layer of particle diameter 54 µm and final porosity of 0.54 (the thickness δ = 20 dp,m at which P reaches P∞ is indicated).

Figure 3. Schematic diagram of the press and the preparation steps for the powdery layers: (left) insertion of powder into the press; (right) powder after pressing.

EXPERIMENTAL SET-UP AND EXPERIMENTAL PROCEDURE

Sample preparation
Layers with different porosities are prepared in order to determine the critical sticking velocity of an
incident particle as a function of the impacted layer porosity. Figure 3 shows a schematic of the press
and the steps taken to prepare the porous powdery layers. A certain mass m of powder
is poured into the press cavity and pressed to a final volume V. The average porosity P
of the prepared tablet is determined from the following equation,

$$P = 1 - \frac{m/V}{\rho_p} \qquad (4)$$

where ρp is the density of the particle material. By changing the amount of powder pressed, the
porosity of the formed tablet can be varied.
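For illustration, eq. (4) can be evaluated as follows; the copper density is a standard handbook value, while the mass and cavity volume in the example are invented and chosen only to show the order of magnitude.

```python
# Average tablet porosity from eq. (4): P = 1 - (m/V) / rho_p.

RHO_COPPER = 8960.0                     # kg/m3, density of copper

def tablet_porosity(mass_kg, volume_m3, rho_p=RHO_COPPER):
    return 1.0 - (mass_kg / volume_m3) / rho_p

# e.g. 4.7 g of copper powder pressed into a 1 cm3 cavity
print(f"P = {tablet_porosity(4.7e-3, 1.0e-6):.2f}")   # ~0.48
```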


Figure 4. The experimental set-up (left) and a typical recorded image (right).

Experimental set-up
The experimental set-up used to measure the sticking velocity of an incident particle consists of an
evacuated column in which particles impact on a well-defined surface. The impact of the particles is
recorded using a digital camera system. A pulsed light sheet illuminates each particle several times
in one camera image, and the impact velocity is determined from the average distance between two
successive illuminations (blobs) and the pulsation rate of the laser sheet. Further
details about the measurement procedure and analysis can be found in the thesis of van Beek [3].
At the bottom of the column the powdery layer is installed on an object table, with which the
surface on which the particles impact can be rotated in a vertical plane to investigate the
influence of the impact angle. Figure 4 shows the experimental set-up and a typical recorded image.

Experimental procedure
The prepared powdery layer is installed on the table of the vacuum column and particles are dropped
onto it. The impact speed, rebound speed, impact angle and rebound angle are measured; the incident
speed is varied by changing the drop height (the height of the vacuum column).
The objectives of the experiments are to determine the variation of the coefficient of restitution
with the impact speed, and the critical sticking velocity, for a particle hitting a porous powdery layer.
The coefficient of restitution e is equal to the normal rebound speed V1r,n divided by the normal
impact speed V1i,n of the incoming particle (m1) and is given by,

$$e = \frac{V_{1r,n}}{V_{1i,n}} \qquad (5)$$

The coefficient of restitution is a measure of how much energy is lost during impact. The critical
sticking velocity is found by increasing the impact speed of the incident particle incrementally until
the particle starts to rebound from the surface.
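A minimal sketch of this data reduction is given below: eq. (5) gives the coefficient of restitution, and the critical sticking velocity is taken as the lowest impact speed at which a rebound is observed. The sample observations are invented for illustration.

```python
# Coefficient of restitution, eq. (5), and a simple estimate of the
# critical sticking velocity from a set of impact observations.

def restitution(v_rebound_normal, v_impact_normal):
    """e = V1r,n / V1i,n; e = 0 means the particle stuck."""
    return v_rebound_normal / v_impact_normal

# (normal impact speed, normal rebound speed) in m/s; rebound 0.0 = stuck
observations = [(0.05, 0.0), (0.10, 0.0), (0.14, 0.0),
                (0.16, 0.04), (0.30, 0.12), (0.60, 0.30)]

v_critical = min(v_i for v_i, v_r in observations if v_r > 0.0)
print(f"critical sticking velocity ~ {v_critical:.2f} m/s")
for v_i, v_r in observations:
    print(f"V1i,n = {v_i:.2f} m/s -> e = {restitution(v_r, v_i):.2f}")
```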

EXPERIMENTAL RESULTS AND DISCUSSION

Coefficient of restitution versus normal impact speed


Figure 5 shows the measured coefficients of restitution for copper particles of diameter 54 µm
hitting a copper powdery layer of porosity 0.48. The powdery layer is made of copper particles


Figure 5. Experimental coefficient of restitution e for a copper particle∗ hitting a copper powdery surface of porosity 0.48 (coefficient of restitution versus normal impact speed V1i,n; the particle sticks below and rebounds above the critical sticking velocity of 0.15 m/s).

that have the same average diameter as the incident particle. If the normal
impact speed V1i,n is lower than 0.15 m/s the particle sticks; if it is higher
than 0.15 m/s the particle rebounds. The rebound behaviour is fitted by the continuous line shown
in Figure 5. It can be concluded that the critical sticking velocity for this condition is equal
to 0.15 m/s.
The theoretical sticking model developed by van Beek et al [4] is used to predict the rebound
behaviour of the copper particle. The importance of the model comes from its ability to predict
the rebound behaviour with high precision; it also gives good insight into the influence of the
particle diameter, material and impacted layer porosity on the critical sticking velocity. The sticking
model is based on treating the impact as a two-body collision, in which the second
body represents the bed of particles and is given a mass m2 proportional to the mass of the incident
particle m1,
$$C_m = \frac{m_2}{m_1} \qquad (6)$$
The proportionality factor Cm characterizes the deposit; it is an indication of how closely the particles
are packed together. The sticking model is used to predict the rebound behaviour of a copper particle
of average diameter 54 µm hitting a copper powdery layer made of the same particles and of
porosity 0.48. A proportionality factor Cm of 53 and a friction coefficient µ of 0.15 are used in the
sticking model. The experimental measurements and the sticking model results are presented together
in Figure 6. In the region where the particle rebounds from the surface, V1i,n > 0.15 m/s, the model
fits the measurements quite well. In the region where the particle sticks to the surface,
V1i,n < 0.15 m/s, the sticking model continues to predict the rebound behaviour as if there were no
sticking. The sticking velocity calculated by the model is 0.007 m/s, which is much lower than the
measured value. The calculated sticking velocity is based on the kinetic energy the incident particle
loses in overcoming adhesion forces during impact. This energy loss is small compared with the
energy actually lost in the powdery layer during impact, which is probably due to kinetic energy
transfer to other particles.

∗ The average diameter of the incident copper particle and the bed of particles is 54 µm.


Figure 6. Theoretical and experimental coefficient of restitution e for a copper particle hitting a copper powdery surface of porosity 0.48 (theoretical curve for Cm = 53 compared with the fit to the measured e at P = 0.48 from Figure 5; the critical sticking velocity calculated by the theoretical sticking model is indicated).

Figure 7. Critical sticking velocity versus porosity for a copper particle impacting a copper powdery layer (dp = 54 µm; experimental and theoretical values with exponential curve fit Vst = 0.0063 e^(6.72P)).

As the porosity of the powdery layer increases, more particles are set in motion on impact, so the
energy lost increases and the sticking velocity also increases. This trend is examined experimentally
by measuring the critical sticking velocity of an incident particle as a function of the layer porosity.

Critical sticking velocity versus layer porosity


Incident particle diameter equal to 54 µm
The previous experiment is repeated using layers of different porosities, and in each experiment
the critical sticking velocity is measured for the given layer. The results are presented in Figure 7,
which shows the critical sticking velocity for a copper particle of average diameter 54 µm hitting
a copper powdery layer of the same particle size, as a function of layer porosity. The sticking velocity
for a solid surface could not be measured because of instrumentation limits. The model developed


Figure 8. Critical sticking velocity versus porosity for a copper particle∗ impacting a copper powdery layer (incident particle dp = 42.5 µm; experimental and theoretical values with exponential curve fit Vst = 0.0082 e^(6.07P)).

by Roger and Reed [5] is used to calculate the sticking velocity for a particle hitting a solid surface.
The curve fit to the experimental data includes this theoretical sticking velocity at zero porosity
(solid surface). The empirical relation between the sticking velocity Vst and the porosity P can then
be approximated by,

$$V_{st} = 0.0063\,e^{6.72P} \qquad (7)$$

Incident particle diameter equal to 42.5 µm
The previous set of experiments is repeated using a copper particle of diameter 42.5 µm hitting
a copper powder layer of particle size 54 µm, in order to examine the influence of the particle
diameter on the critical sticking velocity. The experimental measurements and curve fit are plotted
in Figure 8. The empirical relation between the sticking velocity and porosity can be approximated by,

$$V_{st} = 0.0082\,e^{6.07P} \qquad (8)$$
From equations (7) and (8) it can be concluded that the relation between the sticking velocity and
porosity follows the exponential form,

$$V_{st} = a\,e^{bP} \qquad (9)$$

Factor a in eq. (9) is equal to the sticking velocity at zero porosity and can be determined from the
model of Roger and Reed [5]. Factor b is determined experimentally and is a function of the incident
particle diameter and material.
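For illustration, the fit of eq. (9) can be obtained by linear regression on ln(Vst), as sketched below. The data points are invented, chosen merely to lie close to the curve of eq. (7); in the paper the intercept a is instead fixed by the Roger and Reed solid-surface value.

```python
import numpy as np

# Fit of the exponential form of eq. (9), Vst = a * exp(b * P),
# by linear regression of ln(Vst) against porosity P.

porosity = np.array([0.30, 0.40, 0.48, 0.55, 0.62])        # invented
v_st     = np.array([0.048, 0.093, 0.158, 0.253, 0.410])    # m/s, invented

b, ln_a = np.polyfit(porosity, np.log(v_st), 1)
print(f"Vst ~ {np.exp(ln_a):.4f} * exp({b:.2f} * P)")
```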

Discussion
Equation (9) shows that, when the porosity of the layer decreases, its sticking velocity also decreases.
Sintering of the particulate fouling layers found on the economizer tubes of waste incinerators takes
place due to the temperature gradient across the fouling layer. Therefore, the model developed by

∗ The average diameter of the incident copper particle is 42.5 µm and for the porous layer particles is 54 µm.


Figure 9. Critical sticking velocity versus sintered layer thickness. The sintered layer is made of particles of average diameter dp,m = 54 µm and final porosity P∞ of 0.54 (the point δ = 20 dp,m, where P = P∞ = 0.54, is indicated).

Todorov et al [1] can be used to relate porosity to the sintered fouling layer thickness. Equation
(2) shows that, for thin sintered powdery layers, the porosity of the layer decreases as the thickness
increases. By combining equations (2) and (9), a relation is obtained between the sticking velocity
and the thickness of the fouling layer. The results are plotted in Figure 9 for a sintered fouling layer
of final porosity 0.54, made of particles of diameter 54 µm. As the thickness of the sintered fouling
layer increases, the sticking velocity decreases.
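The construction of Figure 9 can be sketched by substituting the porosity of eqs. (2) and (3) into eq. (9). The coefficients of eq. (7) are used here (54 µm copper particles), and the clipping of the porosity to 1 at very small thicknesses is an added assumption.

```python
import numpy as np

# Sticking velocity versus sintered layer thickness: eqs. (2)/(3) into eq. (9),
# with the eq. (7) coefficients and the Figure 9 layer parameters.

def porosity(delta, d_pm=54e-6, p_inf=0.54):
    delta = np.asarray(delta, dtype=float)
    p = p_inf * (delta / (20.0 * d_pm)) ** -0.282
    return np.minimum(np.where(delta > 20.0 * d_pm, p_inf, p), 1.0)

def sticking_velocity(delta, a=0.0063, b=6.72):
    return a * np.exp(b * porosity(delta))              # eq. (9)

for d in [2e-4, 4e-4, 6e-4, 1.08e-3, 1.4e-3]:
    print(f"delta = {d*1e3:5.2f} mm -> Vst = {sticking_velocity(d):.2f} m/s")
```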

CONCLUSION

Fouling layers found on the tube bundle of the economizer in a Dutch waste incinerator had an
average particle diameter of 50 µm and an asymptotic average thickness of 1 mm; the final
asymptotic thickness is approximately equal to 20 dp,m. As the fouling layer builds up, its
porosity decreases because of sintering effects. The model of Todorov et al [1]
shows that the porosity of a thin sintered layer decreases with thickness until the final thickness of
20 dp,m is reached. It can therefore be concluded, from the Todorov model and from the fouling layers found
on the economizer tubes, that the porosity of the layer continues to decrease as the
fouling layer grows until the final thickness is reached. The decrease in porosity causes the critical sticking
velocity of the fouling layer to decrease, and consequently the deposition rate of incident particles
on the layer decreases. The change in the critical sticking velocity as the fouling layer builds up
contributes to the explanation of the asymptotic growth of particulate fouling layers on the tube
bundles of waste incinerators.

REFERENCES

1. Todorov, R.P., Georgiev, V.P., Vityaz, P.A., Kaptsevich, V.M. and Sheleg, V.K., Regularity of Structure
of Porous Materials from Bronze Powder, Soviet Powder Metallurgy and Metal Ceramics, Vol. 25,
No. 3, pp. 194–196, March 1986.
2. Van Beek, M.C., Rindt, C.C.M., Wijers, J.G. and van Steenhoven, A.A., Analysis of Fouling in Refuse
Waste Incinerators, Heat Transfer Engineering, vol. 22, pp. 22–31, 2001.


3. Van Beek, M.C., Gas-Side Fouling in Heat-Recovery Boilers, Ph.D. thesis, Dept. of Mechanical
Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands, 2001.
4. Van Beek, M.C., Rindt, C.C.M., Wijers, J.G. and van Steenhoven, A.A., Gas Side Fouling in Refuse
Waste Incinerators, Proc. United Eng. Found. Conf. on Mitigation of Heat Exchanger Fouling and its
Economic and Environmental Implications, Banff, Canada, 1999.
5. Roger, D.E. and Reed, J., The Adhesion of Particles Undergoing an Elastic-Plastic Impact with a
Surface, J. of Physics D: Applied Physics, Vol. 17, pp. 677–689, 1984.


A general mathematical model of solid fuels pyrolysis

Gabriele Migliavacca & Emilio Parodi


Stazione Sperimentale per i Combustibili, San Donato Milanese, Italy

Loretta Bonfanti
ENEL Produzione Ricerca, Pisa, Italy

Tiziano Faravelli, Sauro Pierucci & Eliseo Ranzi


Department of Chemical Engineering, Politecnico di Milano, Milano, Italy

ABSTRACT: The aim of this work is to present and discuss a detailed kinetic model able to
describe the devolatilisation process of solid fuels under pyrolysis conditions. The major reason for
the interest in a better understanding of the pyrolysis and combustion of coal, biomasses and solid fuels
lies in the increasing concern about the environmental impact of large-scale combustion processes.
The common chemical and structural aspects of the different fuels are singled out and used as
the starting point to define the mathematical model. The formation of light gases and liquid tars
is the first step in the pyrolysis process. Particular attention is also devoted to the generality
and flexibility of the numerical and mathematical methods. Several comparisons with experimental
data are analysed, and the molecular weight distributions of the tars evolved from different coals at
different temperatures are also discussed.

INTRODUCTION

Coal devolatilisation is a process in which coal is transformed at elevated temperature to produce
gases, tar and char, where tar is defined as the room-temperature condensable species formed
during pyrolysis.
Models of coal devolatilisation have moved on from simple empirical expressions of total mass release,
involving one or two rate expressions (Anthony et al. 1976 [1]), to more complex descriptions of the
chemical and physical processes. Nevertheless, simple models, like the one developed by Kobayashi
et al. (1977) [2], remain important in CFD combustion simulations.
Gas formation has often been related to the thermal decomposition of specific functional groups in
coal; tar and char formation, on the other hand, are more complex, and the mechanistic modelling of
tar formation still has to be improved. The level of detail required in a model depends on its application:
in general combustion models simple “weight loss” models have often been employed, whereas
more sophisticated models are needed for a more detailed description.
Recent studies on the evolution of coal structure during pyrolysis (Smoot 1993 [3]), based on
solid state 13C-NMR, XPS, TG-FTIR and GC-MS, have facilitated the extension of the capabilities of
devolatilisation models to the prediction of the composition of volatile matter with time and
temperature.
Initially, C–C bond-cleavage reactions give rise to a mixture of different size fragments in the
form of a melted material called metaplast. Light species are released as gases. Successive bond
breaking leads to further gases and also to condensable species (tar) and to the formation of char.
Secondary gases are formed during char condensation reactions. Finally, tars can crack in the gas
phase forming soot and lighter gases.


Pitt (1962) [4] first treated coal as a mixture of a large number of species decomposing by
parallel first-order reactions with different activation energies. Similarly, Anthony et al. (1975) [5]
proposed the Distributed Activation Energy Model (DAEM). The DISCHAIN model (distributed-energy
chain statistics) used string statistics to predict monomer production; these species
are a source of volatile tar and can also polymerise at chain ends forming char. This model
also includes tar vaporisation as a multicomponent vapour–liquid equilibrium process. This flash
distillation mechanism led to the FLASHCHAIN model (Niksa et al. 1991) [6]. Furthermore,
Solomon et al. (1988) [7] combined the previously developed Functional Group model, used to
describe the gas evolution, with the Depolymerisation-Vaporisation-Cross-linking (DVC) algorithm
in the FG-DVC model.
Coal-dependent chemical structural parameters, deduced from solid-state 13C-NMR experiments,
are the starting point for the chemical percolation devolatilization (CPD) model developed
by Fletcher et al. (1990) [8]. The data are acquired using the extrapolation techniques of Solum
et al. (1989) [9] and the resulting parameters are used to obtain a statistical description of the coal
lattice. In this way it is possible to describe the size distribution of finite aromatic clusters, constituted
by several polyaromatic sites joined by intact bridges; these clusters are isolated from the remaining
structure by bond breaking. The detailed description of the devolatilization process requires experimental
data in order to derive the initial fraction of intact bridges, as well as data on the
kinetics of bond breaking and on the vapour pressure of heavy organic components.
Tar evolution represents a fundamental aspect of the devolatilisation process, and many studies
have been performed to identify the nature of the tar and to quantify its amount as a function of the
pyrolysis conditions. Despite this attention, many ambiguous and incongruent results are found
in the literature, and different conclusions may be deduced from the available data and experimental
evidence.
Among the experimental studies performed to characterise the general properties
of tar, those by Unger and Suuberg [10, 11] are of particular interest for model definition,
since they give a complete molecular weight distribution of coal tars. In these works all the
condensable species having a molecular weight greater than 100–150 a.u. are referred to as tar and
are characterised by gel permeation chromatography and osmometry. A general distinction is made
between tars collected from the washing of the reactor surface and extracts obtained by solvent
treatment of the residual char. The tar release shows a typical monotonic increase towards
an asymptotic maximum value, while the extract yield presents an asymmetric bell-shaped trend,
with a maximum in the range of 600–700 K and reaching zero at 1000 K. This extract fraction can
properly be considered a measure of the presence of a metaplastic phase, constituted by molecular
fragments of different dimensions. At low temperature these fragments are imprisoned inside the
carbon structure by their high molecular weight, while at higher temperature they can be vaporised
or react with the matrix, forming char. The molecular weight distributions of coal tar have their
maximum between 500 and 800 a.u., which shows a slight displacement towards higher values with
increasing temperature. The distributions' tails cover a range of molecular weights up to 4000 a.u., and
the presence of such high molecular weight compounds is more evident when the extracts are also considered.

MODEL DEVELOPMENT

The general features of the model have already been presented elsewhere [12], but they are briefly
summarised here for the sake of completeness, along with recent modifications and extensions.

Structural model
Starting from the experimental evidence of the macromolecular nature of coal, and from the analogies
between coal pyrolysis and other processes involving the thermal degradation of hydrocarbon
materials, such as oils and polymers, we focused our attention on the possibility of developing a general

Figure 1. Structural array: rows n1–n10 of increasing aromaticity (from coal towards char) and columns A–D of increasing fragmentation (towards tar).

and fundamental devolatilisation model based on intrinsic kinetic parameters. The final goal of
this study is to exploit the knowledge already embodied in models previously developed to
describe polymer degradation and refinery processes (Dente et al. 1997 [13]). On the other hand,
the peculiar nature of coal requires a specific approach and strong simplifications,
due to the complex and heterogeneous properties of this material.
The starting point of this model is a simplified, but realistic, description of the coal structure.
Recent studies have contributed to a better understanding of some fundamental aspects of its
macromolecular lattice. For example, Stock et al. (1993) [14] proposed a detailed structure of Pocahontas
#3 coal, identifying the aromatic polycyclic base units along with the structures of the side chains and
connecting bridges.
The specific information available on the coal structure is used to build an array, shown schematically
in Figure 1, which describes the main features of the starting material. The same array is also suitable
for describing the successive transformations of the lattice during pyrolysis, gasification and even
combustion processes. Each cell of this array represents a particular lump of aromatic sites,
characterised by an average number of aromatic carbons as well as by the number of intact bridges.
Four structural classes of lumped species are considered here, depending on the number of intact
bridges. The aromatic sites contain three attachments, which are either intact bridges (represented by solid
segments in Figure 1) or side chains (represented by segments with a small circle on the tip).
Although a maximum of three attachments is used in this work, a higher number of
attachments can easily be assumed in the model simply by modifying the array dimensions. All the
species on the same line (namely A, B, C and D, for the sake of simplicity) have an aromatic site with
the same arbitrary number of carbon atoms. As an example, the n1 aromatic carbons of the
first line could be 10, while the successive numbers (n2, n3, …) could be 20, 30, 60 and 120. In this
way, only a very limited number of lumped species (or ‘pseudo-components’) is needed to describe
the overall coal structure and its evolution.
The internal distribution of the different pseudo-species describes the coal structure at any instant
of the conversion process. The initial distribution for different coals is defined on the
basis of all the available experimental information. Experimental NMR data supply detailed and
essential parameters; however, these parameters can also be estimated from literature correlations
(Genetti et al. 1999 [15]).
In the present work a very simple rule was applied to determine the initial distributions, starting
from two available structural parameters: the mean number of aromatic carbons per cluster and the
mean number of intact bridges per cluster. From the first it is possible to estimate the vertical
spread of the distribution, splitting the total coal mass between the two rows closest to the experimental
value according to a simple “lever rule”. For instance, a coal having a mean of 15 aromatic carbons
281

© 2004 by Taylor & Francis Group, LLC


chap-29 19/11/2003 14: 49 page 282

per cluster will be split “fifty-fifty” between the rows having 10 and 20 carbons respectively, as
illustrated in the sketch below. The horizontal spread is determined analogously from the mean
number of intact bridges, splitting the coal between columns according to the same procedure.
The resulting distributions have a maximum of four occupied cells, generally occupying the
upper-left corner of the structural array, which can be considered the “coal region”. By contrast,
the lower-right zone of the array, corresponding to large and highly aromatic clusters, can be
considered the “char region”.
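A minimal sketch of this lever-rule initialisation follows. The row values are the example aromatic-carbon numbers quoted above, while the assignment of classes A–D to 0–3 intact bridges and the input mean values are assumptions made only for illustration.

```python
import numpy as np

# "Lever rule" placement of the initial coal mass in the structural array:
# split between the two rows bracketing the mean aromatic carbons per
# cluster, and between the two columns bracketing the mean intact bridges.

ROW_CARBONS = [10, 20, 30, 60, 120]     # aromatic carbons per cluster (example values)
COL_BRIDGES = [0, 1, 2, 3]              # intact bridges for classes A-D (assumed)

def lever_split(mean, levels):
    """Weights over 'levels' with mass only in the two levels bracketing 'mean'."""
    w = np.zeros(len(levels))
    for i in range(len(levels) - 1):
        lo, hi = levels[i], levels[i + 1]
        if lo <= mean <= hi:
            w[i + 1] = (mean - lo) / (hi - lo)
            w[i] = 1.0 - w[i + 1]
            return w
    raise ValueError("mean lies outside the tabulated levels")

row_w = lever_split(15.0, ROW_CARBONS)   # 15 aromatic carbons -> 50/50 split
col_w = lever_split(2.3, COL_BRIDGES)    # 2.3 intact bridges (illustrative)
initial_array = np.outer(row_w, col_w)   # mass fraction in each array cell
print(initial_array.round(2))
```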

Kinetic model
The kinetic scheme acts on the different pseudo-species (array elements) and progressively modifies
the starting distribution toward the final one, which gives detailed information on the char structure.
The operating conditions, in terms of holding time, heating ramp and maximum temperature,
determine the path followed during this process and hence define the evolution
of the coal system toward its final distribution, with the formation of light gases and tar species.
The coal devolatilisation process is described on the basis of only four main classes of propagation
reactions:
a) decomposition or fragmentation reactions;
b) condensation reactions;
c) cross-linking reactions;
d) aromatisation reactions.
The decomposition and fragmentation reactions account for the bond-breaking process, which is
progressively responsible for tar formation. This reaction class competes with the condensation
and cross-linking reactions, which are responsible for the formation of heavier species with progressively
larger aromatic sites, towards a graphite-like structure: condensation is assumed to act on species
still connected, while cross-linking reconnects free units with the coal matrix. A further reaction
class considers a progressive aromatisation of the structure through the progressive inclusion of
portions of the aliphatic side chains in the aromatic clusters. These reaction classes push the coal
structure towards the right and the bottom of the array and account for the final molecular weight
distribution of the tar and for the structural properties of the char.
During both condensation and cross-linking reactions the pseudo-species lose a portion of their
aliphatic side chains in the form of light gases and aliphatic compounds. Moreover, other specific
pyrolysis reactions account for the formation of permanent gases.
According to the assumptions of Solomon et al. (1988) [7], these reactions describe the behaviour
of tightly bound and loosely bound oxygenated functional groups. A similar approach is also applied
to describe the evolution of nitrogen- and sulphur-containing species.
A further important feature of the model accounts for the presence of guest molecules released
at relatively low temperatures in the form of light tars. Different pseudo-species, corresponding for
instance to C15 and C25 molecules, are used in the model and are treated, along with the other
D elements, as tar precursors in the flash distillation procedure.
As described elsewhere [16], the extension of the model to biomasses only required the
coupling of the cellulose and lignin devolatilisation behaviour with the coal devolatilisation process.
As already mentioned, the thermal degradation of coal can be considered a typical radical chain
mechanism, in which initiation and termination reactions define the overall radical concentration
active in the propagation reactions. These reaction classes are completely described by a limited set
of reference kinetic parameters. The adopted approach lumps together most of the molecules and all
of the radicals in order to drastically simplify the overall system; it was also successfully
used in the modelling of polymer thermal degradation (Faravelli et al. 2001 [17]).

Initiation reactions and termination reactions


Initiation reactions determine C–C bond cleavages of bridges to form radicals. Owing to the relatively
short aliphatic chains, these scissions occur in the benzyl position and/or are promoted by
specific functional groups, with the prevailing formation of resonantly stabilized radicals. Of


course, bimolecular termination reactions control the total radical concentration active in the
different classes of propagation reactions. H-abstraction reactions on the available sites of the
different bridges and side chains distribute the radical pool inside the structural array.

Propagation reactions
a. Decomposition This reaction class forces the distribution of the structural elements toward
the formation of tars. Kinetic parameters account for the apparent effect of the sequence of H-abstraction
and β-decomposition reactions of the long-lived, resonantly stabilised radicals, favoured
in benzyl-like positions.
b–c. Condensation and cross-linking A competitive pathway in the propagation reactions is the formation
of larger aromatic clusters. Condensation reactions act on still-connected species, while
cross-linking reactions connect free units with the coal matrix. Both reaction classes represent
the addition of benzyl-like radicals at a substituted position of an aromatic ring, with subsequent
dealkylation and gas formation.
d. Aromatisation reactions This class includes monomolecular and bimolecular aromatisation
mechanisms. The former accounts for the possibility that a portion of the side chain
is included into its own aromatic cluster; as a consequence, the chain length is uniformly
reduced with a corresponding increase in the aromatic cluster size. In the latter pathway,
two different aromatic clusters condense to form a new aromatic cluster larger than the sum of the
starting ones. In this case too, a portion of the aliphatic chain is included and gas may be
released.

Gas species formation


Permanent gases and hydrocarbons released by the coal matrix
Along with the previously described radical chain transformation of the coal matrix, several gases
are generated. Reactions such as condensation, cross-linking and aromatisation produce an amount of
gases corresponding to the degradation of the side chains expelled during the reaction. According
to the instantaneous composition of these chains, the total amount of gases is split into the
possible species (CO, CO2, H2, H2O, CH4, olefins and unsaturated hydrocarbons).
These mechanisms are not the only ones responsible for gas formation, so additional paths are also
considered in the model: some strictly connected with the matrix reactions described above and others
completely distinct from them. This is a consequence of the presence of different chemical
structures and functional groups, which can react in a variety of ways and at different rates. To
account for these various behaviours, some gas precursors are treated apart from the rest of the coal
matrix and are not included in the structural array.

CO
A fraction of the oxygen content, estimated according to a rank-dependent function, is considered
bound in highly stable structures, such as furan, so that it may evolve only under very severe
conditions. The slow release of this CO is characterised by a relatively high activation energy
in order to account for these tight bonds.

CO2, H2O
Both these species can be released easily at low temperatures, owing to the presence
of carbonyl, carboxyl and hydroxyl groups. Two asymptotic levels are set according to the coal rank,
and two parallel reactions with low activation energies are introduced to predict these degradation
processes.

NH3, HCN
These nitrogen-containing species are treated according to the known fate of nitrogen: a fraction of
the nitrogen, mainly in pyridine rings, is considered trapped in the aromatic clusters, while
the remaining portion is treated as a source of NH3 and HCN with appropriate evolution rates.


H2S, SO2
These species are considered the only possible pyrolysis products of the sulphur-containing groups,
so the coal sulphur and the corresponding hydrogen and oxygen are treated separately from the
matrix, using specific reaction kinetics.

RESULTS AND DISCUSSION

Many different series of data were selected and used to develop, optimise and validate the model:
information about the total amounts of gas and tar, the chemical composition of the different phases
and the species composition of the volatile matter, along with the molecular weight distributions of the tars, are
the basic elements on which this detailed and complex devolatilisation model was built. A very
critical analysis of the available experimental data is required for modelling purposes. The works
by Xu and Tomita [18], Unger and Suuberg [10] and Watt [19] were mainly used, owing to
their good level of detail and the wide range of ranks covered.

Model validation
First of all, the good performance of the model with respect to the major features of the process
(total volatiles, tar and gas amounts) is shown by means of three comparisons relative to a low
temperature (773 K), an intermediate temperature (1023 K) and a high temperature (1193 K) series
of tests. These data come from the eight different coals analysed by Xu and Tomita [18]. A Curie
point pyrolyser was used in the experiments to reach the set-point temperatures at a high heating
rate (≈3000 K/s), after which the temperature was maintained for 4 s. These experimental conditions
are simulated in the model by means of an assigned temperature profile, while no thermal balance
is actually included, since the very small size of the coal particles generally used in pyrolysis
experiments ensures the absence of significant thermal resistances.
The experimental data (full circles) are compared with the model predictions (empty
triangles) in Figures 2a and 2b as a function of rank, here taken as the carbon percentage in the
parent coal. As can be seen, the trends are fairly well predicted and the absolute values, for
both tar and gas, are generally in good agreement with the experiments. Although some spread
is present, a uniform decrease of both tar and gas is definitely observed with increasing carbon
content. The frequently reported low tar value for lignites is not clearly confirmed by the present
data. The application of the Genetti correlation functions [15] as a starting point for the coal matrix
definition is responsible for the nonlinear trend of the model with respect to the carbon content. The
high error level intrinsically connected with this kind of experimental measurement, and the many
different techniques used by researchers to pyrolyse coal samples and to measure the released
species, make it difficult to build a general model able to correctly predict all the available data.
Nevertheless, the model presented here shows good flexibility and generality with respect to the selected
experimental set.
A more detailed comparison is possible in the case of the Liddel bituminous coal [18], for
which experimental measurements of some gas species (CO, CO2, H2O, CH4, C2H4) are
available. As shown in Figures 3–4, the agreement is rather good, especially for CO2 and water,
while the hydrocarbon gases are slightly underestimated, mainly at higher temperatures, and CO is
overestimated at intermediate temperatures.

Tar molecular weight predictions


Another important feature the model is able to predict is the molecular weight distribution of the
tar evolved at different temperatures and from different coals. Little experimental
information on this subject is available, owing to the intrinsic difficulty of studying
such a complex mixture of heterogeneous organic compounds.
The Unger and Suuberg [10] data are very reliable and complete, so we used them as a reference.
In agreement with their findings, the predicted tar distributions show a maximum around 800 a.u.


Figure 2a. Tar, gas and total volatiles (three columns of panels) from eight different coals [18] at 773–1037–1193 K, plotted against rank; comparison with model predictions (empty triangles).

Figure 2b. Tar and total volatiles from Liddel coal [18] as a function of temperature. Comparison with model predictions.

Figure 3. Different light species (CO, CO2, H2O, CH4, C2H4) from Liddel coal [18] as a function of temperature. Comparison with model predictions.


Figure 4. Model predictions of tar molecular weight distributions of different coals [19] at 1050 K (absolute and relative distributions for the coals labelled zap, illi6, blue1, pitt8 and poc3).

Figure 5. Model predictions of tar molecular weight distributions for Wandoan coal [18] at different temperatures (718–1193 K; absolute and relative distributions).

and a long tail covering a range of many thousands of a.u. The rank dependence of the distributions'
shape is not very evident, especially at low temperatures. Figure 4 shows the predicted tar distributions
for five different coals of varying rank pyrolysed under the same temperature conditions
(1036 K); both absolute and relative distributions are presented.
A more evident change in the distribution shape is observed with the final pyrolysis temperature,
especially beyond 1000 K. The high molecular weight species, having higher boiling points, start
to vaporise, with a consequent shift and broadening of the corresponding tar distribution.
Figure 5 clearly shows this effect in the case of Wandoan coal [18]. This behaviour is
partially in agreement with the Unger and Suuberg experiments [10].

ACKNOWLEDGMENT

This work was financed by the fund established by the Italian Government to support research
activities of general interest of the electric sector (Ministerial Decree 26 January 2000).

REFERENCES

1. Anthony D.B., Howard J.B., AIChE Journal, 22, 4, p. 625, 1976.


2. Kobayashi H., Howard J.B., Sarofim A.S., 16th Symp. Comb., Combustion Inst., Pittsburgh, p. 411, 1977.


3. Smoot L.D., Fundamentals of Coal Combustion, Elsevier, Amsterdam, 1993.


4. Pitt J.G., Fuel, 41, 267, 1962.
5. Anthony D.B., Howard J.B., Hottel H.C., Meissner H.P., 15th Symp. (Int.) Comb., The Combustion
Inst., Pittsburgh, p. 1303, 1975.
6. Niksa S., Kerstein A.R., Energy & Fuels, 5, 647, 1991.
7. Solomon P.R., Hamblen D.G., Carangelo R., Serio M., Deshpande G.V., Energy & Fuels, 2, 405, 1988.
8. Fletcher T.H., Kerstein A.R., Pugmire R.J., Grant D.M., Energy & Fuels, 4, 54, 1990.
9. Solum M.S., Pugmire R.J., Grant D.M., Energy & Fuels, 3, 187, 1989.
10. Unger P.E., Suuberg E.M., Fuel, 63, p. 606, 1984.
11. Suuberg E.M., Unger P.E., Energy & Fuels, 1, pp. 305–308, 1987.
12. Migliavacca G., Faravelli T., Pierucci S., Ranzi E., Bonfanti L., Parodi E., Technologies and Combustion
for a Clean Environment Paper 19.3, p. 625–634, Oporto, July 2001.
13. Dente M., Bozzano G., Bussani G., Comp. Chem. Engng., 21, pp. 1125–1234, 1997.
14. Stock L.M., Muntean J.V., Energy & Fuels, 7, pp. 704–709, 1993.
15. Genetti D.B., Fletcher T.H., Pugmire R.J., Energy & Fuels, 13, 60–68, 1999.
16. Migliavacca G., Sartori V., Zanderigo E., Faravelli T., Ranzi E., Parodi E., Combustion and the
environment, S. Margherita Ligure, 16–19 September, 2001.
17. Faravelli T., Pinciroli M., Pisano F., Bozzano G., Dente M., Ranzi E., J. Anal. Appl. Pyrolysis, 60 (1),
103–121, 2001.
18. Xu W., Tomita A., Fuel, 66, p. 627, 1987.
19. Watt M., M.S. Thesis, BYU, 1996.


Thermo-economic analysis of energy, water and environment systems


The EnergyPLAN model: CHP and wind power system analysis

Henrik Lund∗
Department of Development and Planning, Aalborg University, Aalborg, Denmark

Ebbe Münster
PlanEnergi s/i, Skoerping, Denmark

ABSTRACT: Both CHP and wind power are essential for the implementation of European
climate change response objectives, and both technologies are intended for further expansion
in the coming decade. However, wind turbines depend on the wind, and CHP depends on heat
demand; consequently, production in some areas sometimes exceeds demand. The main
purpose of the EnergyPLAN model is to design suitable national energy planning strategies by
analysing the consequences of different national energy investments. The model emphasises the
analysis of different regulation strategies and different market economic optimisation strategies.
The model has been used in the work of an expert group convened by the Danish Energy Agency
for the Danish Parliament. Results are included in the paper in terms of strategies for managing
the integration of CHP and wind power in the future Danish energy supply.

INTRODUCTION

There is a growing trend towards decentralised energy production and supply in Europe. Both
increased decentralised production and the use of wind power will result in a growing number of
small and medium-sized producers connected to energy networks, and in particular to
electricity grids originally designed for monopolistic markets. Many new problems will therefore
arise, related to the management and operation of energy transfer and to the efficient distribution
of wind power and other renewable energy sources in the grids. In order to bring about a substantial
long-term penetration of distributed energy resources in Europe, it is necessary to address the key
issues related to their integration into existing and future energy systems. One of the most important
future challenges appears to be managing the integration of the fluctuating electricity
production from renewable energy sources and the electricity production from CHP units [1,2].
The EnergyPLAN computer model has been created to analyse and design suitable national strategies for the integration of electricity production from such fluctuating resources into the overall energy system. The main purpose is to design suitable national energy planning strategies by analysing the consequences of different national energy investments. The model emphasises the analysis of different regulation strategies. The analysis is carried out in hour-by-hour steps. The consequences are analysed both for different technical regulation strategies, facilitated by specific investments in the energy system, and for different market economic optimisation strategies. The model is divided into two sections. The first section makes a technical analysis based on demands and capacities. The second section makes an economic optimisation of the system behaviour based on further inputs of marginal costs and hour-by-hour price assumptions for the international electricity market.
The model has been used in the work of an expert group conducted by the Danish Energy Agency for the Danish Parliament [3,4]. This paper describes the model and some of the results from the work of the expert group. A detailed description of the algorithms of the model can be found in [5].

∗ Corresponding author. e-mail: Lund@i4.auc.dk

THE ENERGYPLAN MODEL

The model is an input/output model. General inputs are demands, capacities and the choice of a number of different regulation strategies emphasising import/export and surplus production of electricity. Outputs are energy balances and the resulting annual productions, fuel consumption and imports/exports. See also Figure 1. The details of this figure will be explained in the following sections.
The energy system in the EnergyPLAN model includes heat production from solar thermal, industrial CHP, CHP units, heat pumps, heat storage and boilers. District heating supply is divided into I) a group of boiler systems, II) a group of decentralised CHP systems, and III) a group of centralised CHP systems. In addition to the CHP units, the system includes electricity production from wind power, divided into onshore and offshore, and from traditional power plants (condensation plants).

Input data
The model requires three sets of input for the technical analysis. The first one is the annual district heating consumption and the annual consumption of electricity (including predefined solar thermal, industrial and DH boiler production inputs to district heating); this part also defines the electricity consumption from the transport sector, if any. The second one is the wind power capacity and the yearly distribution of the wind energy potential. The third one is the capacities and the operation efficiencies of CHP units, power stations, boilers and heat pumps. The third input also includes some technical limitations, namely the minimum CHP and power plant percentage of the load required to maintain grid stability. Furthermore, it includes the maximum heat pump percentage of the heat production, needed to achieve the specified efficiency of the heat pumps.
[Figure 1 (diagram, not reproduced here) gives an overview of EnergyPLAN model 4.4: inputs (fixed and flexible electricity demand and district heating demand; wind capacity, wind-year and distribution factor; solar and CSHP heat and electricity production; market price factors, i.e. multiplication factor, addition factor and dependence factor; marginal production costs for import/export; capacities and efficiencies of CHP, power plants, heat pumps, boilers and heat storage), internal distribution data for electricity, district heating, wind and market prices, the two regulation strategies (1. meeting heat demand, 2. meeting both heat and electricity demand), the means against critical surplus production (reducing wind, replacing CHP with boilers, replacing CHP with heat pumps), system-stability settings (minimum stabilisation capacity, stabilisation share of CHP and wind) and outputs as annual, monthly and hourly values (heat productions, electricity production, electricity import/export, forced electricity surplus production, fuel consumption and payments from import/export).]

Figure 1. The EnergyPLAN model.

For the economic calculations of exporting and/or importing electricity the model needs input to define the price variations on the international electricity market. The model has an internal hour-by-hour price variation based on the first operation year of the NordPool market. These price variations can be adjusted by the following inputs: a multiplication factor, an addition factor (DKK/MWh) and an adjustment price of non-predictable export/import. Moreover, a factor expressing market reactions to wind and CHP can change the market price (DKK/MWh per GW of difference between demand and wind plus CHP). Furthermore, input is needed in terms of the marginal production cost of one MWh of electricity produced when heat, otherwise produced by either boilers or heat pumps, is replaced by heat produced by CHP.
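To make these price inputs concrete, the following minimal Python sketch adjusts an internal hourly NordPool price with the multiplication, addition and dependence factors. The function name, the arguments and the simple linear form of the market reaction are illustrative assumptions, not the model's actual code.

def adjust_market_price(p_nordpool, demand_gw, wind_chp_gw,
                        mult=1.0, add=0.0, depend=0.0):
    """Illustrative hourly price adjustment (DKK/MWh).

    p_nordpool  : internal hour-by-hour NordPool price (DKK/MWh)
    demand_gw   : electricity demand in the hour (GW)
    wind_chp_gw : wind plus CHP production in the hour (GW)
    depend      : assumed market reaction (DKK/MWh per GW of difference)
    """
    return mult * p_nordpool + add + depend * (demand_gw - wind_chp_gw)

# Example: a surplus of wind plus CHP above demand lowers the adjusted price.
print(adjust_market_price(180.0, demand_gw=3.2, wind_chp_gw=3.8,
                          mult=1.1, add=10.0, depend=20.0))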

Internal model distribution data and initial calculations


The model includes a number of “hour by hour” distributions based on typical Danish data. The data are the electricity demand according to the actual distribution of the year 2000, a typical Danish district heating demand, solar thermal production based on the Danish Test Reference Year (TRY), and the distribution of wind turbine electricity production according to three historical wind years. These years are 1991 (mean), 1994 (high) and 1996 (low). The data also include the distribution of electricity market prices based on NordPool in its first year (summer 1999 till summer 2000). Four of the “hour by hour” distribution data sets are illustrated in Figure 2. To the left, the annual distribution on months is shown, and to the right, an example of a January week is given.

[Figure 2 (charts, not reproduced here) shows, for the electricity demand, the district heating demand, the wind power production and the solar thermal production, the distribution over the twelve months of the year (left) and over the hours of a January week (right), with the vertical axes in per cent.]

Figure 2. Four examples of model hour by hour distribution data.
The “hour by hour” district heating and electricity demands are found simply by distributing the annual demands according to the internal hour-by-hour distributions. The wind power production is found by multiplying the wind turbine capacity by the distribution of the chosen wind year. The productions of these wind years are derived from historical wind turbine data, leading to productions of either 2064, 2303 or 1799 kWh/year per kW of installed capacity. The wind year shown in Figure 2 relates to an average Danish wind year of 2064 kWh/year.
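As an illustration of these initial calculations, the short Python sketch below (illustrative names and toy numbers only, not part of the model) scales an annual demand over a normalised hour-by-hour distribution and turns an installed wind capacity into an hourly production series for a chosen wind year:

def hourly_series(annual_demand, distribution):
    """Distribute an annual demand over the hours of the year according to
    a relative hour-by-hour distribution (same unit as annual_demand)."""
    total = sum(distribution)
    return [annual_demand * d / total for d in distribution]

def wind_production(capacity_kw, wind_distribution, kwh_per_kw=2064):
    """Scale a relative wind distribution so that each kW of installed
    capacity produces the chosen wind-year total, e.g. 2064 kWh/year."""
    total = sum(wind_distribution)
    scale = capacity_kw * kwh_per_kw / total   # kWh per distribution unit
    return [scale * d for d in wind_distribution]

# Toy example with only four "hours":
print(hourly_series(41.09, [1.0, 0.8, 1.2, 1.0]))       # demand per hour slot
print(wind_production(3000.0, [0.2, 1.5, 0.9, 0.0]))    # kWh per hour slot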

Electricity demand modifications


The model uses an hourly distribution of the electricity demand based on statistical information from the year 2000, supplied by the two institutions responsible for electricity transmission and balance in the grids in Denmark: ELTRA, situated in West Denmark, and Elkraft-System, situated in East Denmark. This distribution can be modified by adding possible new demands arising from the use of electricity for transport (batteries and/or hydrogen). Such demands are assumed to be evenly distributed over the year, but can be made flexible within shorter periods. The transport demands can be divided into the following four categories:
a. Demand following a fixed distribution on a 24-hour basis (typically battery cars being charged overnight).
b. Demand freely distributed over a 24-hour period according to the actual electricity balance (similar to the above, but with the added possibility of concentrating the demand in the actual peak hours of e.g. wind production – this requires a method of communicating this knowledge to the consumers).
c. Demands which can be distributed freely over a week according to the actual electricity balance (similar to the above – relevant for consumers with extra battery capacity and for hydrogen operated vehicles).
d. Demands which can be distributed freely over a four-week period (similar to the above – relevant for hydrogen operated vehicles; optimal distribution of the demand over a period of this length requires a long-term prognosis of the electricity balance to be transmitted to the consumers. As this prognosis is partly based on a weather prognosis, this is hardly possible today, since at present a prognosis for a four-week period is not sufficiently reliable).
Additionally, part of the existing electricity demand can be specified to be flexible in the same way as the transport demands of categories b, c and d. Typically, these flexible demands will be connected to either room heating or cooling processes (air conditioning or cold stores). The flexible demands can be specified for the same three periods as the transport demands. For each period and for each type (cooling, heating), the maximum capacity must be stated. Note that the fixed demand input should then be decreased correspondingly, in order not to increase the total demand. A minimal sketch of how such a flexible demand can be placed within its period is given below.
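The sketch below (plain Python; the greedy placement rule and all names are illustrative assumptions, not the EnergyPLAN algorithm itself) moves a given flexible energy amount to the hours of its period with the largest production surplus, subject to the stated maximum capacity:

def place_flexible_demand(surplus_mw, energy_mwh, max_mw):
    """Distribute a flexible energy amount (MWh) over the hours of a period,
    preferring hours with the largest production surplus (MW) and never
    exceeding the maximum capacity in any single hour."""
    extra = [0.0] * len(surplus_mw)
    remaining = energy_mwh
    # Visit the hours from the largest to the smallest surplus.
    for hour in sorted(range(len(surplus_mw)), key=lambda h: -surplus_mw[h]):
        if remaining <= 0:
            break
        extra[hour] = min(max_mw, remaining)
        remaining -= extra[hour]
    return extra

# 24-hour example: 900 MWh of flexible demand, at most 200 MW in any hour.
surplus = [50, 300, 400, 100] + [0] * 20
print(place_flexible_demand(surplus, energy_mwh=900, max_mw=200))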
Based on the described “hour by hour distributions” and the rest of the input data, the model
calculates electricity and heat productions in two different regulation strategies.

Regulation strategy I: meeting heat demand

In this strategy, all units produce solely according to the heat demands. The industrial CHPs operate at constant capacity all year round. In district heating group I, the boiler simply supplies the difference between the district heating demand and the production from solar thermal and industrial CHP. For district heating groups II and III, the units are prioritised according to the following sequence: solar thermal, industrial CHP, CHP units, heat pumps and peak load boilers.
The electricity production from the CHPs and the consumption of the heat pumps can now be found by using the assumed efficiencies. The electricity production at the condensation power plants and the eventual export of electricity are determined by the following steps. Firstly, the electricity consumption is adjusted by any use of electricity for heat pumps (determined above), by electricity for transport, and by the influence of the flexible electricity demands. Afterwards, the production at the condensation plants is determined as the larger of the following two values: (1) the difference between the adjusted consumption and the production given by the wind and by the heat demand, or (2) the minimum production necessary in order to meet the requirements of stabilising the grid, i.e. the necessary share of production from units with stabilising ability, less the shares of the total CHP production and of the wind production which are specified in the input as having stabilising ability. In case the needed power production exceeds the specified maximum capacity, the remaining electricity will be imported.
The export of electricity can now be calculated as the difference between all power productions and the total demand. The export is divided into two categories: (1) Critical Surplus Electricity Production (CSEP) and (2) Exportable Surplus Production. Critical surplus production appears when the export exceeds the maximum capacities of the grid connections abroad. In practice this export is not possible and should be avoided by the use of different means, as discussed later on in this paper.
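The hour-by-hour core of this strategy can be summarised in a short sketch (Python; the variable names, the single aggregated plant group and the simplified units are assumptions for illustration, not the model's own code):

def strategy_one_hour(demand, wind, chp, pp_min_stab, pp_max, export_cap):
    """One hour of regulation strategy I (simplified).

    Condensing power production is the larger of (a) the residual demand left
    after wind and heat-demand-driven CHP and (b) the minimum production
    needed for grid stabilisation; import covers any need above pp_max.
    The resulting export is split into exportable and critical surplus."""
    pp = max(demand - wind - chp, pp_min_stab)
    imported = max(pp - pp_max, 0.0)
    pp -= imported
    export = wind + chp + pp + imported - demand
    critical = max(export - export_cap, 0.0)   # exceeds transmission capacity
    exportable = export - critical
    return pp, imported, exportable, critical

print(strategy_one_hour(demand=3500, wind=2500, chp=1800,
                        pp_min_stab=400, pp_max=3000, export_cap=1000))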

Regulation strategy II: meeting both heat and electricity demands


When choosing strategy II, the export of electricity is minimised mainly by the use of heat pumps at the combined heat and power plants. This increases the electricity demand and decreases the electricity production simultaneously, as the CHP units must decrease their heat production. By the use of extra capacity at the CHP plants combined with heat storage capacity, the production at the condensation plants is minimised by replacing it with CHP production. The following yearly distributions are the same as for strategy I:
• all demands, including transport and flexible demands,
• heat and electricity productions of industrial CHP plants,
• heat production of heat plants with boilers,
• electricity production from wind turbines.
The electricity production by CHP must at any time be lower than or equal to the maximum capacity and to the capacity corresponding to the heat demand left over by the industrial CHP and the solar thermal. Within these limits it is determined as the larger of the results of the following two considerations:
• The capacity which can ensure stability of the grid, together with any stabilising effect of the wind turbines, but without relying on any effect from the condensation plants.
• The capacity which results from reducing the CHP production found by strategy I, in order to minimise the electricity export found by strategy I. The possible reduction depends on whether the heat pumps at the CHP plants are already operating at maximum capacity, which for each hour is determined either by the technical limit or by the maximum share of the total heat demand. If they are not, the reduction in heat production caused by the reduction in electricity production can be balanced by an increase in heat production by the heat pumps, which also reduces the electricity export. If the heat pumps are already operating at maximum capacity, the CHP production has to be reduced by the size of the electricity export. For these reasons the reduction is calculated in two steps: first a reduction of CHP plus an increase of HP, then an eventual further reduction of CHP only. A minimal sketch of this two-step reduction is given below.
These calculations are performed for groups II and III separately, but with due consideration to the total stabilisation demands, etc. After having determined the production at the CHP plants, the productions of the heat pumps and the boilers are calculated in the same way as in strategy I.
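The two-step reduction can be sketched as follows (Python; the assumed coefficient of performance of the heat pumps and the assumed power-to-heat ratio of the CHP units, like all names, are illustrative values, not data from the model):

def reduce_chp_two_steps(export, chp, chp_min, hp, hp_max,
                         cop=3.0, power_to_heat=0.6):
    """Reduce electricity export in one hour, first by replacing CHP heat
    with heat-pump heat, then by reducing CHP alone (sketch only)."""
    # One MW of extra heat-pump load delivers cop MW of heat, replacing
    # cop * power_to_heat MW of CHP electricity, so the export falls by
    # (1 + cop * power_to_heat) MW per MW of extra heat-pump load.
    per_mw_hp = 1.0 + cop * power_to_heat
    hp_extra = max(0.0, min(hp_max - hp,
                            export / per_mw_hp,
                            (chp - chp_min) / (cop * power_to_heat)))
    chp -= hp_extra * cop * power_to_heat
    export -= hp_extra * per_mw_hp
    # Step 2: reduce CHP alone, one MW of export removed per MW of CHP,
    # with the missing heat covered by boilers or heat storage.
    chp_cut = min(max(export, 0.0), chp - chp_min)
    return chp - chp_cut, hp + hp_extra, export - chp_cut

print(reduce_chp_two_steps(export=500.0, chp=1200.0, chp_min=600.0,
                           hp=0.0, hp_max=150.0))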

Heat storage utilisation


To improve the possibilities of minimising the electricity export, heat storage capacity is included.
In two situations the storage can be loaded, and in two situations the storage can be unloaded. The
four loading and unloading situations are calculated as follows.

Loading by increasing the use of HP in situations with electricity export

Firstly, the smaller of the potential increase in district heating production from the HP and the potential increase in storage content is determined. The market price has to be below the marginal production cost of one MWh of electricity at the power plant, or the electricity export in question has to be critical export; otherwise the potential change is set to zero. Then, new values of heat pump production and export can be calculated.

Loading by replacing electricity production from condensing plants by CHP plants

First, the minimum condensing power plant production necessary in order to fulfil the requirements of stabilising the grid is found. Then the potential increase in district heating production from CHP and the potential increase in storage content are determined, and new values of CHP and power plant productions can be calculated.

Reducing the CHP production in situations with electricity export

Firstly, the minimum CHP production necessary in order to fulfil the requirements of stabilising the grid is found. Secondly, the smaller of the potential reduction and the actual storage content is found. Again, the market price has to be below the marginal production cost of one MWh of electricity at the CHP plant, or the electricity export in question has to be critical export; otherwise the change is set to zero. Finally, new values of CHP production and export can be calculated.

Reducing boiler production

The smaller of the potential reduction in boiler production and the potential reduction in storage content is determined, and new values of boiler production can be calculated.
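As an illustration of the first of these four situations, the sketch below (Python; the coefficient of performance linking heat-pump electricity to heat and all names are assumptions) loads the storage by increasing the heat-pump output when electricity is being exported and the price or criticality condition is met:

def load_storage_with_heat_pump(export, price, marginal_cost_pp, critical,
                                hp, hp_max, storage, storage_max, cop=3.0):
    """Increase heat-pump output and storage content in an hour with export,
    provided the market price is below the power plants' marginal production
    cost or the export is critical (sketch only)."""
    if export <= 0 or (price >= marginal_cost_pp and not critical):
        return hp, storage, export
    # Potential limited by spare HP capacity, spare storage volume and export.
    extra_el = min(hp_max - hp, (storage_max - storage) / cop, export)
    return hp + extra_el, storage + extra_el * cop, export - extra_el

print(load_storage_with_heat_pump(export=120.0, price=90.0, marginal_cost_pp=150.0,
                                  critical=False, hp=40.0, hp_max=100.0,
                                  storage=300.0, storage_max=420.0))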
Avoiding critical surplus electricity production (CSEP)

A number of means to reduce critical surplus electricity production have been identified and are implemented in the EnergyPLAN model: reducing wind power production, reducing CHP production, and/or replacing boiler production with HP. It is possible to specify one or more of these methods, which will then be used in the specified order (a minimal sketch of this ordered application is given after the descriptions below).

Reducing wind power production


Wind power production on-shore is reduced, by shutting down a number of wind turbines, and then
new values of export are calculated.

Replacing CHP production with boiler


First, the minimum CHP production necessary in order to fulfil the requirements of stabilising the grid is found according to the input specifications. Then, the potential reduction is found as the difference between the actual CHP production and this minimum production. Finally, the CHP is reduced by the critical electricity production within the limits of the potential reduction, and the heat production is replaced by production on the boilers. The strategy can be specified as prioritising either group II or group III.

Heat pump replacing boiler production


The heat pump production is increased by the critical electricity production within the limits of the
potential heat pump production. The potential is considered as the smallest value of the following
three: (1) The boiler production, (2) the heat pump capacity and (3) the maximum share of district
heating supplied from the HP specified. The Heat Pump heat production will replace the boiler
heat production. The strategy can be specified, as prioritising either group II or group III.
Heat pump replacing boiler and CHP production
Initially, the former regulation is performed. Afterwards, the CHP production is reduced, and the
HP production increased in the same manner as earlier described. Having calculated the potentials,
new values are found for the CHP units and the heat pump. This process is carried out for both
group III and group II.
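The ordered application of the chosen means can be sketched as follows (Python; the individual means and their limits are hypothetical placeholders, not the model's actual routines):

def remove_csep(csep, means):
    """Apply the chosen CSEP-reduction means in the specified order; each
    'mean' is a function returning the reduction it can deliver (sketch)."""
    for mean in means:
        if csep <= 0:
            break
        csep -= mean(csep)
    return max(csep, 0.0)

# Hypothetical order and limits: CHP -> boiler, heat pump -> boiler, stop wind.
means = [lambda c: min(c, 120.0),   # limited by the grid-stability minimum
         lambda c: min(c, 80.0),    # limited by heat-pump capacity and DH share
         lambda c: c]               # stopping wind turbines removes the rest
print(remove_csep(300.0, means))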


Import/export and condensation power


So far, the production of the condensation power plants has been calculated solely in order to meet the demands (avoid import of electricity) and to provide the necessary stabilisation of the grids. For both regulation strategies the calculation can stop here, or one can add an economic calculation in which the production is optimised according to export/import prices at the NordPool exchange. If the market price is higher than the marginal costs of replacing the boiler and/or the heat pump by CHP, the export should be increased, if the capacities allow it. If the price in a given hour is lower than the marginal production costs of the condensation plants and the corresponding production exceeds the level necessary for stabilisation, it is possible to replace the surplus by imported electricity. In doing so, the maximum transmission capacity for import and the minimum production with stabilising effect are observed. If, on the other hand, the price at NordPool in a given hour is higher than the marginal production costs of the condensation plants, the production is increased until the maximum capacity of the plants or of the transmission lines is reached. Performing this as the last step also ensures that the criterion of grid stabilisation is fulfilled.
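A minimal sketch of this final market step for one hour is given below (Python; the names, the single aggregated condensing plant and the simple capacity checks are illustrative assumptions):

def optimise_against_market(price, mc_pp, pp, pp_min_stab, pp_max,
                            import_cap, export_cap, export):
    """Run the condensing plants up when the spot price exceeds their marginal
    cost, and replace them by imports when it is below, while respecting the
    stabilisation minimum and the transmission capacities (sketch only)."""
    if price > mc_pp:
        room = min(pp_max - pp, export_cap - export)
        pp += max(room, 0.0)
    elif price < mc_pp and pp > pp_min_stab:
        pp -= min(pp - pp_min_stab, import_cap)
    return pp

print(optimise_against_market(price=260.0, mc_pp=200.0, pp=800.0,
                              pp_min_stab=400.0, pp_max=1500.0,
                              import_cap=600.0, export_cap=1000.0, export=200.0))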

THE CASE OF INTEGRATING RENEWABLE ENERGY IN DENMARK

In 2001, the EnergyPLAN model was used as one of the models in an expert group conducted by the Danish Energy Agency for the Danish Parliament. According to official Danish energy policy, the wind power production was supposed to increase from approx. 15 per cent of the electricity production in the year 2001 to between 40 and 50 per cent within approximately twenty years. The main task of the expert group was to address the problem of integrating such a wind production into the overall energy supply system [3]. The specific purpose of the EnergyPLAN analysis was to answer the following two questions [4]:
• Should future critical surplus electricity production problems (CSEP) be solved by investing in
new transmission lines, or should it be solved by introducing limitations/changes in production,
and if so how should different means be prioritised?
• Should flexibility be built into the system in order either to avoid exportable surplus electricity
production (ESEP) and/or to achieve the ability to exploit price fluctuations on the international
electricity market?
Plans for expanding wind power production in the period until year 2020 were used as a reference
scenario. Detailed analyses of this reference were made by the two companies ELTRA and Elkraft-
System, who have been appointed as responsible parties for balancing the electricity supply systems
in Western and Eastern Denmark. Their detailed calculations on the reference were reconstructed
on the EnergyPLAN model with the same assumptions of main data such as wind power capacity,
electricity demand, CHP-production etc.
In the case of regulation strategy I (meeting solely heat demands, as described above), EnergyPLAN came to the same conclusions as the other analyses of the reference, namely a total surplus production of 8 TWh/year, equal to 46 per cent of the wind power production. See also Table 1.

Table 1. Danish reference scenario year 2020.

Reference year 2020                       Western DK (Eltra)   Eastern DK (Elkraft)   Total
Electricity demand                        24.87 TWh            16.22 TWh              41.09 TWh
Wind power production                     12.16 TWh            5.55 TWh               17.71 TWh
Surplus production, total (TWh/year)      6.41 TWh             1.68 TWh               8.09 TWh
Surplus production, per cent of demand    26 per cent          10 per cent            20 per cent
Surplus production, per cent of wind      53 per cent          30 per cent            46 per cent
Critical surplus production               1.30 TWh             0.00 TWh               1.30 TWh


The surplus production includes a critical surplus production of 1.3 TWh located solely in the
western part of Denmark.
Firstly, the expert group analysed and evaluated a number of different means of reducing the surplus production of the reference on a marginal basis. Secondly, the potential of the best means was analysed and estimated. The means were analysed both for the case in which they are solely used to avoid critical surplus production and for the case in which they are used to avoid both critical and exportable surplus production. In the first case, typically, means with small investments proved to be the most cost effective, while in the latter case the means with high efficiency, and consequently low fuel consumption, showed the best profits.
Then the EnergyPLAN model was used to analyse two different combinations of means. The
first combination was designed, in order to avoid critical surplus production and consequently to
avoid investments in new transmission lines. The second combination was designed to incorporate
flexibility into the system in order to be able either to reduce the surplus production or to move the
surplus production from low price periods to high price periods.
To avoid solely critical surplus production the following combinations of means were
prioritised:

• Flexible demand (move/replace 200 MW within the period of one day).


• To exploit existing heat storage capacity in CHP units to move production from CHP units from
hours with a lot of wind power to hours with less wind power.
• To replace CHP production with boiler production in hours of critical surplus production.
• Electric heating of 350 MW to replace heat production from CHP units in hours of critical surplus production.
• To stop wind turbines.

All means are characterised by very low additional investments, if any. Meanwhile, it should be
emphasised that the last three means result in an increase in fuel demands. Therefore, the means
should only be used to avoid surplus production, which arises in few hours, such as the critical
surplus production appearing with high capacities for short periods.
The above mentioned combination of means was analysed on the EnergyPLAN model leading
to the following conclusions:

• All the critical surplus productions can be avoided.


• The last means (stopping wind turbines) is used very little (7 per cent of the total reduction) and
can be excluded, if the use of electric heating is slightly increased.
• The fuel saving is found to be only 1.14 TWh of fuel per TWh of electricity, which is relatively poor. Typically, electricity is produced with an efficiency of 40 per cent, so reductions should lead to savings of approximately 2.5 TWh of fuel per TWh of electricity.
• The costs of avoiding a critical surplus production of 1.3 TWh are 115 million DKK/year (investments plus fuel savings minus income from selling electricity). Consequently, such a strategy is much cheaper than investing in the necessary transmission lines, which were estimated at a total of DKK 9000 million.

To incorporate flexibility into the system, in order to be able either to reduce the surplus production or to move the surplus production from low price periods to high price periods, the following combination of means was analysed on the EnergyPLAN model:

• Flexible demand (move/replace 200 MW within the period of one day),
• Possibility of regulating the CHP units (EnergyPLAN regulation strategy II),
• 1000 MW of electric heat pumps to replace 25 per cent of the district heating production,
• Investment in additional heat storage capacity, in order to raise the total capacity so that one average day of district heating production (100 GWh) can be stored,
• Electric vehicles (battery and fuel cell) increasing the electricity demand by 1.6 TWh/year [6],
• The grid stability task being solved partly by small CHP units and wind turbines.


The above mentioned combination of means was analysed on the EnergyPLAN model, leading to the following conclusions:
• 99 per cent of all surplus production can be avoided. All critical surplus production is avoided with a high security margin.
• The fuel savings are on average found to be 2.17 TWh of fuel per TWh of electricity, which means that, in general, the surplus production is avoided with efficiencies equal to reducing solely condensing production and not CHP production. In other words: the high fuel efficiency of CHP and renewable energy in the Danish energy system is maintained.

CONCLUSION

The EnergyPLAN model uses a simplified representation of the complex Danish energy system, in which the individual power and heat plants are aggregated into a few groups of similar plants, leaving out the geographical distribution and the transmission lines. This makes the model very transparent and relatively easy to use. Still, the model has proven able to reconstruct the results of reference scenarios analysed on models that are comprehensive in their description of the individual energy plants. The model emphasises the evaluation of different operation strategies and of the constraints needed for maintaining grid stability (voltage and frequency). Additionally, the EnergyPLAN model contains a description of the international market based on NordPool and operation strategies to optimise the profits from selling and buying electricity. The model has also proven able to analyse the importance of different investment alternatives and different regulation strategies in future energy systems including high percentages of CHP and renewable energy sources.
The model is in the process of being integrated into the computer program EnergyPRO, which has been used to design and optimise most decentralised CHP plants in Denmark.

ACKNOWLEDGEMENTS

The findings in this article are the result of two research projects at Aalborg University, which have been supported financially by the Danish Energy Agency and by the Energy Research Programme conducted by the Danish Environment and Energy Ministry.

REFERENCES

1. Commission of the European Communities (2000) Green Paper, towards a European strategy for the
security of energy supply COM (2000) 769, Brussels.
2. European Commission (2000) Energy, Environment and Sustainable Development, Work Programme
Update, Brussels October 2000.
3. Danish Energy Agency (2001) Report from the workgroup on electricity production from CHP and RES, ISBN 87-7844-239-7. (in Danish)
4. Danish Energy Agency (2001) Attachments to report from the workgroup on electricity production from CHP and RES, Attachment 6, ISBN 87-7844-240-0. (in Danish)
5. Lund, H. Münster, E. Tambjerg, L.H. (2001) EnergyPLAN computer Model for Energy System Analysis,
Department of Development and Planning, Aalborg University, Denmark.
6. Nielsen, L.H. Jørgensen, K. (2000) Electric vehicles and renewable energy in the transport sector –
energy system consequences, Risø National Laboratory, Roskilde, Denmark.


Electric Power System Expansion Planning

Tatjana Kovačina & Edina Dedović


JP Elektroprivreda BiH Sarajevo, Bosnia and Herzegovina

ABSTRACT: Electric power system expansion planning is very important, not only for the state economy but also for overall social development. The paper presents a general view of the minimum-cost expansion planning concept (“least cost planning”), which is characteristic of monopolistic systems and which in open electricity systems is transformed into minimising prices through competition and maximising profit as the main criterion of the market. Some well-known models and program packages for electric power system expansion planning are also presented: ENPEP, GTMax and PRAO.
The paper respects the fact that the central goal of sustainable development is the “maintenance or increase of the overall assets (natural, man-made and human or social assets) available to future generations, while minimising the consumption of finite resources and not exceeding the carrying capacities of ecosystems”. Energy is essential for sustainable development [1].

INTRODUCTION

In many countries electricity markets are monopolies. The power producers trade among themselves, i.e. to sell energy or to provide reserve power. In most cases the markets are not transparent, and trading is done on an over the counter (OTC) basis only.
After liberalisation, power producers as well as customers face market risk, with prices that can be very volatile. It is important to realise that electrical power is different from other commodities. Two factors are responsible for the extremely high volatility of the price. On the one hand, electricity cannot be stored, or only in limited volumes. On the other hand, congestion of the grid or outages of production can lead to an imbalance of supply and demand within a very short time.
In open markets the number of market participants increases significantly. New business opportunities will attract new players to the markets. New products will be developed, e.g. financial products to hedge future production or demand.
The open electricity market changes many previously established views on the problems of development and planning of electric power systems. The open market does not solve all problems; it even generates some new ones. It opens up a new environment with new challenges and needs. To be able to meet them, new strategies and tools have to be prepared.
The market development from a monopolistic to an open market is shown in Figure 1.
Standard economic theory suggests that a company will always price its output at the marginal cost of the additional output.
In the long run a company will only produce at a level where the price covers the long-run average costs. Hence, it will only produce at a level where the marginal costs are above or equal to the average costs. With standard assumptions about the production function, the situation can be illustrated as in Figure 2.
With production at levels where the marginal costs are above the average costs, the production generates a profit for the company, a profit in the sense that the revenues are above the cost of the input factors [2].
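As a toy numerical illustration of this pricing rule (not taken from the paper), the sketch below finds the output level at which the price equals the marginal cost for an assumed quadratic variable cost function and checks that the price also covers the average variable cost:

def supply_quantity(price, a=2.0, b=10.0):
    """Assumed variable cost C(q) = a*q**2 + b*q, so marginal cost
    MC(q) = 2*a*q + b and average variable cost AVC(q) = a*q + b.
    The firm produces where price = MC, provided price covers AVC."""
    if price <= b:                    # price below the minimum marginal cost
        return 0.0
    q = (price - b) / (2 * a)         # price = MC(q)
    return q if price >= a * q + b else 0.0

for p in (8, 20, 60):
    print(p, supply_quantity(p))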


[Figure 1 (diagram, not reproduced here) illustrates three stages of market development: the monopolistic market (bilateral, non-transparent trading; producers trade among themselves), the bilateral OTC market (OTC derivatives, price index, free access to the grid, traders and brokers enter the market) and the power exchange / open market (spot and physical delivery, financial products, transparency, many market participants, high volume in financial trading).]

Figure 1. Market development from a monopolistic to an open market.

[Figure 2 (chart, not reproduced here) plots price against quantity, showing the marginal cost, average cost and average variable cost curves and the resulting supply curve.]

Figure 2. Production costs and supply curve.

PLANNING, TOOLS AND MODELS

Electric power system expansion planning depends on the market and economic relations in the electric utility. In vertically organised utilities the economic analysis deals with the electricity supply, and the planning concept “least cost planning” is named after the basic criterion of monopolistic systems, the least cost. A monopolistic electric power system has one supplier, the state owner, within its administrative borders.
In open systems the economy is oriented towards the consumers and their requirements and habits, and the main concept is based on minimising prices through better competition, but also on maximising profit, which is the main criterion of the market. In open systems the monopoly is replaced by a market of private owners.
The economic plan in open systems is selected on the basis of a multiple-criteria analysis, which takes into account the interests of all groups: consumers, producers, distributors and the state [3,4].
Different models are in use for electric power system expansion planning. In this paper three program packages are presented (ENPEP, GTMax and PRAO), of which two (ENPEP and PRAO) were used in the Public Electric Power Company “Elektroprivreda BiH”.


[Figure 3 (diagram, not reproduced here) shows the modular ENPEP structure, grouped into electric generating system analysis and overall energy system analysis, with the modules LDC (electric load), ELECTRIC (system expansion), ICARUS (hourly dispatch), IMPACT (emissions), MAED (energy demand), PLANT DATA, MACRO (economic), DEMAND (energy demand) and BALANCE (overall energy system).]

Figure 3. ENPEP structure.

Program package ENPEP


The Energy and Power Evaluation Program (ENPEP) is a set of microcomputer-based energy plan-
ning tools that are designed to provide an integrated analysis capability. ENPEP begins with a
microeconomic analysis, develops an energy demand forecast based on this analysis, carries out
an integrated supply/demand analysis for the entire energy system, evaluates the electric system
component of the energy system in detail, and determines the impacts of the alternative configura-
tions. This approach is an enhancement of existing techniques in that it places emphasis on looking at the electric system as an integral part of the entire energy supply system. Also, it explicitly considers the impacts that the power system has on the rest of the energy system and on the economy as a whole [5].
It is a modular system consisting of nine modules, as shown in Figure 3.
Model applications cover the entire spectrum of issues found in today’s complex energy markets,
such as:
• Energy policy analysis;
• Energy market projections;
• Energy and electricity demand forecasting;
• Analysis of production sector development options;
• Analysis of production costs, marginal costs, and spot-market electricity prices;
• Operation and management of Hydro power plants and reservoirs;
• Economic evaluation and timing of new investments in the power sector;
• Energy environmental trade-offs and decision analysis;
• Natural gas market analysis;
• Carbon emissions projections;
• Projections and emission control strategies for criteria pollutants (PM, SO2 , NOx , etc.);
• GHG mitigation studies;
• Power market and design studies;
• Interconnection studies; and
• Market deregulation issues.
ENPEP was developed by Argonne National Laboratory (USA).


Program package GTMax


The Generation and Transmission Maximisation (GTMax) model was developed by Argonne
National Laboratory (USA) to study the complex marketing and operational issues in today’s dereg-
ulated power markets. GTMax helps generation companies and utilities to maximise the value of
their system assets, taking into account firm and non-firm contracts, independent power producer
(IPP) agreements, bulk power transaction opportunities, and limitations of energy and transmission
resources [5].
An illustration of the use of the GTMax model is shown in Figure 4.
Summary of GTMax model features:
• Maximises company revenues
• Optimises hydro and thermal generation
• Considers firm contracts and IPP agreements
• Estimates regional economic clearing price of energy
• Simulates spot market transactions
• Quantifies the operational costs and revenues of an IPP
• Models energy exchange agreements
• Operates in convenient GIS interface.

Program package PRAO


For short-term and long-term expansion planning of medium voltage distribution networks with a radial structure, or a ring structure operated radially, the program package PRAO (Planification de Réseaux Assistée par Ordinateur – Computer Aided Network Planning) is in use. Electricite de France (EDF) developed PRAO for its distribution centres and for international use [6].
This software is suitable for network restructuring studies, planning studies (for a given time period) and decision-making studies.
With PRAO, the planner can study electrical MV distribution networks, for which he can define loading variations and development strategies. He can then test the behaviour of the networks under load and validate appropriate reinforcement solutions.
The study includes the electrical aspect and also the supply quality aspect (an estimate of the number of power cuts and the duration of each, as seen by the load points) for each development strategy studied within the network.
The planner can obtain a complete updated statement for each studied network development strategy, including the investment costs, losses, short- and long-term power cuts and operating costs.
Each year, the user can also calculate the benefit/cost ratio of the works he wants to put into service. These technical-economic calculations, together with the supply quality estimates, provide the planner with the information required to make an appropriate decision.

Figure 4. Illustration of GTMax model use.


Therefore, the objectives of PRAO are to:


• manage “spatial” data: networks and associated extension of reinforcement works
• manage “time” data: changes in consumption and development strategies
• carry out electrical calculations: calculate the state of a network or optimisation of its operating
scheme
• make network quality evaluations: daily quality (number and duration of power cuts seen by
customers caused by incidents on the MV network) and improving reliability (calculation of the
reliability factor following total or partial loss of a source substation)
• make economic calculations.

SHORT REVIEW OF ELECTRICITY SITUATION IN BOSNIA AND HERZEGOVINA

Net electricity generation in Bosnia and Herzegovina was 10407 GWh in 2000, of which 4805 GWh or 46% was produced in hydroelectric power plants and 5602 GWh or 54% in thermal power plants [7].
The total installed capacity of the power plants in Bosnia and Herzegovina was 3999 MW: 2042 MW or 51% in hydroelectric power plants and 1957 MW or 49% in thermal power plants.
The gross electricity consumption in Bosnia and Herzegovina was 9384 GWh in 2000. Within this amount, the share of distributive consumption was 77.2% (compared to 69.3% in 1990), the consumption of big consumers (consumers at 110 kV) was 19.6% (compared to 27.7% in 1990), while the share of transmission losses in the total consumption in 2000 was 3.3% (compared to 3.1% in 1990).
Electricity generation and consumption in Bosnia and Herzegovina for the period 1990–2000 are given in Figure 5.
The structure of the gross electricity consumption in Bosnia and Herzegovina in 1990 and 1996–2000 is given in Figure 6.
Three public power companies currently carry out electricity generation, transmission and distribution in Bosnia and Herzegovina.
Energy sector development studies in Bosnia and Herzegovina, carried out over the last five years by foreign consultants and within the electric utilities, considered the existing (monopolistic) organisational structure and applied the least-cost planning concept. The results were that the rehabilitation of units is feasible and cheaper than the installation of new capacities [8].
Bosnia and Herzegovina has unused thermal and hydro potential, and the analyses point in the direction that, besides the possible installation of traditional thermal and hydro capacities, the options of installing small hydro plants and using new technologies are also under consideration.

[Figure 5 (chart, not reproduced here) compares annual electricity generation, split into hydro (HPP) and thermal (TPP) power plants, with consumption in Bosnia and Herzegovina for 1990–2000, in GWh.]

Figure 5. Electricity generation and consumption in B&H (1990–2000).

[Figure 6 (chart, not reproduced here) shows the structure of the gross electricity consumption in GWh for 1990 and 1996–2000, split into distributive consumption, big consumers and transmission losses.]

Figure 6. Structure of electricity gross consumption in Bosnia and Herzegovina.
Today the electric power sector in Bosnia and Herzegovina is in a process of restructuring and privatisation, which is one of the preconditions for economic development and for inclusion in the world economy. The coming reform will be based on the Electricity Law approved in May 2000 by the Government. It takes into account the European Directive on the electricity market and the liberalisation of the electricity sector. The final goal is to make competition in electricity production and delivery possible, together with quality supply at minimum prices for the final users, sustainable development and the attraction of foreign investors.
The Electricity Law on Transmission, Power System Regulator and Operator in Bosnia and Herzegovina, approved in January 2002, foresees the establishment of the following bodies and companies by the end of 2002:
• Regulatory Power Commission of Bosnia and Herzegovina (for power system regulation),
• Independent System Operator (for power system operation),
• Transmission Company of Bosnia and Herzegovina (for the operation of the transmission network).
The goal of this Law is to accelerate the establishment of the electricity market in Bosnia and Herzegovina, as well as of the regional electricity market.

CONCLUSION

• Regardless of the complexity of electricity:
– Electricity prices show complex characteristics that are difficult to model: very high volatility, strong daily, monthly and annual periodicity, strong mean reversion, etc.


– Electricity markets are very young and more experience has to be gained to obtain mature price data for both spot and derivative contracts
– Electricity must be treated as any other commodity, and models and tools have to be adapted to the particular features of the commodity [2].
• The choice of technologies to advance sustainable development in any country is a sovereign choice, and each country will need a mix of technologies suited to its situation and needs. Central to the definition of sustainable development is the importance of expanding possibilities and keeping options open – not foreclosing them for future generations.

NOMENCLATURE

OTC Over the counter


ENPEP Energy and Power Evaluation Program
GTMax Generation and Transmission Maximisation model
IPP Independent power producer
PRAO Planification de Réseaux Assistée per Ordinateur-Computer Aided
Network Planning
GIS Geographical information system
WASP Wien Automatic System Planning Package

REFERENCES

1. The IAEA, Nuclear Power and Sustainable Development, www.iaea.org


2. Reichelt D. and other experts, Portfolio and risk management for Producers and Traders in an Open
Market, ELECTRA- CIGRE, No 199 December 2001
3. Kovacina T., Koritarov V., Hamilton B., Models for the Electric System Expansion Planning, V B&H
Conference-BHK CIGRE Proceedings, Neum, Bosnia & Herzegovina, 2001
4. Slipac Mr G., Power generating system planning in power system with different structure of ownership,
IV Conference HK CIGRE, Cavtat, Croatia, 1999
5. Overview of ENPEP and GTMax Models, www.adica.com
6. Planification de Réseaux Assistée per Ordinateur-Computer Aided Network Planning (PRAO), User
Manual, EDF, France, 1998
7. Dedovic E., Kovacina T., Electricity Generation and Consumption in Bosnia and Herzegovina,
Information-scientific journal of JP Elektroprivreda BiH, Sarajevo, Bosnia & Herzegovina, No 61,
December 2001
8. Bosnia and Herzegovina – War Damage and Long Term Development Strategy for the Energy Sector,
SGI Engineering/ EBRD, 1999


Development of an air staging technology to reduce NOx emissions in grate fired boilers

B. Staiger, S. Unterberger, R. Berger & Klaus R.G. Hein


Institute of Process Engineering and Power Plant Technology, University of Stuttgart,
Pfaffenwaldring, Stuttgart

ABSTRACT: Experiments with a newly designed Controlled Multiple Air Staging Technology (CMAST) in grate firings show a considerable reduction of NOx emissions. The applicability of the CMAST depends to a major extent on the fuel parameters. Fuels with a high moisture content cause a drop of the heat output in full load operation due to the reduced fuel conversion. As a consequence of the reduced temperatures in the furnace, the emissions of products of incomplete combustion rise in part load operation. To compensate for these effects more primary air is necessary, which prevents the realisation of the multiple air staging technique.
Experiments in laboratory, test and commercial firings help to understand the influence of different fuel characteristics on the combustion system and to detect the practical potentials and limits of the air staging. On this basis, concepts can be developed for an optimised operation of grate firings dependent on the fuel characteristics. These results promise a further improvement of the combustion technology using wood fuels.

INTRODUCTION

In order to reduce CO2 emissions, the deployment of decentralised biomass fired systems is supported by local governments and the European Commission. Apart from traditional wood fuels arising from wood processing, more and more types of wood fuels with very different characteristics are used, such as forestry and agricultural residues, energy crops etc. From the commercial point of view, residues from state-owned forests are very attractive for decentralised heat generation in public facilities like schools or hospitals.
Within the scope of a European research project [2], titled Optimised Combustion of Wood and Wood-waste Fuels in Stoker Fired Boilers, the Institute of Process Engineering and Power Plant Technology optimised grate firing systems with regard to their operation with different forestry residues and their emission behaviour.
In order to characterise the behaviour of the different types of the above mentioned fuels in a fixed bed combustion system, fundamental studies were carried out. A 240 kW test firing system was installed in the IVD laboratories. Concepts to operate the system adapted to the fuel, as well as a multiple air staging technique to reduce NOx emissions, were developed. These concepts have been implemented in a commercially operated 450 kW firing system.

DESCRIPTION OF THE TEST UNIT

For the combustion experiments and for the development and optimisation of a controlled multiple air staging technique (CMAST), a commercially available grate firing system was installed at the experimental test plant of the IVD.


Figure 1. Commercially available grate firing.

The grate firing unit


The maximum thermal capacity of the firing system, designed for burning biomass fuels, especially wood chips, is 240 kW. The firing system has a horizontal moving grate and is equipped with an integrated two-pass boiler for the preparation of hot water for heating purposes. Wood chips are fed by a screw feeder onto the moveable grate. Optionally preheated primary air can be distributed by two independent grate sections. The air volume flow of each grate section can be adjusted by a servo-damper and is measured by a hot wire anemometer installed in the air channel. For complete combustion in the standard configuration of the grate firing system, secondary air is added to the flue gas at the entrance and within the lower part of the burnout combustion chamber. After passing the upper part of the burnout combustion chamber, the flue gas enters the two-pass boiler, a multi-cyclone for particle precipitation and the chimney. The commercially available combustion system in its standard configuration is shown in Figure 1.
The grate firing system is similar to the system investigated in close co-operation with the project partner GAUSS, because the CMAST arrangement to be developed at the laboratory scale facility was applied and tested at the commercially operated GAUSS grate firing plant with a maximum thermal output of 450 kW.

Modifications for multiple air staging


For the adaptation of the commercial grate firing system to the experimental test conditions, measuring devices for the determination of the total amount of primary air and of the different amounts of secondary air for the different staging levels were installed. The fuel mass flow rate can be pre-set for different loading conditions as well as for different kinds of fuels. In addition to the standard configuration, 12 access ports (six on each side) were arranged on the side walls of the burnout zone in order to facilitate the proposed air staging with up to four different levels (A, B, C and D). For each air staging level, electrically preheated burnout air with a temperature in the range of 20°C to 350°C is supplied by a separate fan. The different air volume flows can be adjusted by dampers, and the temperature and volume flow of the burnout air can be measured for each level. The preheated burnout air of each staging level is added to the flue gas flow by different nozzles. The air nozzles are arranged on both side walls. The number and exact arrangement of the nozzles for optimised mixing and, therefore, optimised combustion were determined by numerical simulations.

Monitoring of the combustion process


For detailed gas analysis and temperature measurements in the burnout zone 10 measurement ports
were included in the design of the firing system. The composition of the gases released by the fuel bed can be determined by means of four additional measurement ports arranged at the surface of the fuel bed. Therefore, information can be obtained at the different combustion stages of the wood fuel (drying, pyrolysis, char burning). The information and data obtained by these measurements will be used for the validation and further development of a numerical model for the simulation of fixed bed combustion.
Flue gas concentrations, the flue gas temperature and the flue gas volume flow are measured downstream of the fan of the test facility. Together with the above mentioned continuously measured input data, a detailed description and characterisation of the combustion process can be obtained. Figure 2 shows the modified 240 kW grate firing system.

[Figure 2 (diagram, not reproduced here) shows the modified grate firing system: fuel storage and screw feeder, the two primary air grate sections (Prim. 1 and Prim. 2), the four secondary air levels A to D with ports for portable probes, the hot water boiler, the cyclone separator for fly ash, and the online gas analysis of the flue gas sample (O2, CO2, CO, NO, NO2, NOx, HC) via a heated ceramic filter, cooler and pump.]

Figure 2. Modified grate firing system.

FUEL ANALYSIS

The analysis and characterisation of the different wood fuels used during the first experimental measurement campaign in the commercially operated 450 kW grate firing system of the project partner GAUSS showed very low values of the net calorific value (NCV) as well as very high values of the water content of the fuel. These conditions are due to the harvesting and storage conditions of the wood fuels. The considered fuels were freshly cut, with a short storage duration of a few months. The storage was realised as a big pile without any protection against rain and snow. Taking into account the problems caused by this, the harvesting and storage conditions for the wood fuels to be used for the second measurement campaign during the heating period 1999/2000 were changed in order to improve the fuel quality and, therefore, to reduce the emissions and to improve the combustion and operation stability of the firing system. Trees and bushes were chopped after a storage of 2 years in the forest. The wood chips are stored in open containers standing under a roof for protection against rain. Figure 3 shows the influence of the storage on the fuel characteristics: due to the reduced water content the net calorific value increases. However, the volumetric calorific value stays nearly constant.


[Figure 3 (charts, not reproduced here) compares the minimum, average and maximum water content in % and net calorific value in kJ/kg of the fuels used in the 1998/99 and 1999/00 heating periods.]

Figure 3. Water content and net calorific value.

Table 1. Sample of fuels used during the measuring campaign.

Sample      Water content [%]   Ash content [%]   NCV [kJ/kg]
GA981125    46.2                3.57              8876
GA990128    42.4                1.94              9868
GA990311    51.4                2.16              7852

(Photographs of the three fuel samples GA 981125, GA 990128 and GA 990311, 80 mm scale, not reproduced here.)

INVESTIGATION OF THE COMBUSTION BEHAVIOUR OF THE COMMERCIALLY


AVAILABLE GRATE FIRING SYSTEM

The following results were obtained in several measuring campaigns during the heating period 1998/1999 at the 450 kW facility operated by GAUSS. The duration of each measurement was 24 hours. The objective of the measurements was to investigate the influence of the fuel parameters water content, ash content and heating value of the different fuels on the emissions and the operating practice of the facility.
The fuel
Table 1 shows a selection of fuels and their characteristics. The variation of the water content has a direct impact on the heating values. Fuels with a higher water or ash content cause operational problems like fouling of the heat exchange surfaces or slagging of the grate. High ash contents result from high shares of bark and needles, typical when using a fuel consisting of tree-tops and branches.
CO emissions
Figure 4 shows the influence of the fuel quality and the thermal output on the CO emissions. The figure contains mean values of longer periods under constant thermal output. The CO emission is a function of the fuel quality: the higher the water content of the fuel, the higher the emissions of products of incomplete combustion. Lower thermal output levels, corresponding to low temperatures in the reaction zone, show the influence of the fuel used even more clearly.
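The emission values in Figures 4 to 7 are referred to 13% O2 in the dry flue gas; a common way to convert a measured concentration to such a reference oxygen content is sketched below (a standard conversion, given here for illustration and not taken from the paper):

def to_reference_o2(conc_measured, o2_measured, o2_reference=13.0):
    """Convert a dry flue-gas concentration (e.g. mg/Nm3 of CO or NOx)
    measured at o2_measured vol-% O2 to the reference O2 content."""
    return conc_measured * (21.0 - o2_reference) / (21.0 - o2_measured)

# Example: 450 mg/Nm3 measured at 9% O2 corresponds to 300 mg/Nm3 at 13% O2.
print(to_reference_o2(450.0, o2_measured=9.0))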


[Figure 4 (chart, not reproduced here) plots the CO emissions in mg/Nm3 (at 13% O2) against the setting of the load control in kW for the fuels GA981125, GA990128 and GA990311, together with the limit of the 1. BImSchV.]

Figure 4. Influence of the fuel quality and the thermal output on the CO emissions.

[Figure 5 (chart, not reproduced here) plots the NOx emissions in mg/Nm3 (at 13% O2) against the setting of the load control in kW for the fuels GA981125, GA990128 and GA990311.]

Figure 5. Influence of the fuel quality and the thermal output on the NOx emissions.

NOx emissions
Another relevant value for assessing the combustion quality is the NOx emission (Figure 5). In
comparison with the CO emissions, a reverse trend is noticeable: the emissions rise to values of about
300 mg/m3 with better fuel quality. This may be explained by higher temperatures in the primary zone
when drier fuels are used, which promote the formation of NOx. A higher CO concentration
in the reaction zones may also have a NOx-reducing effect. However, the results show that
additional measures to reduce NOx emissions are desirable when dry fuels are used, whereas they
are not necessary when wet fuels are used.

DEVELOPMENT OF A MULTI-AIR STAGING TECHNIQUE TO REDUCE NOx EMISSIONS

CMAST-configuration
The CMAST system was realised during an extended measurement campaign at the IVD test facility.
Based on the results of the numerical simulation and the first measurements, the excess
air ratio in the primary zone was varied while using different air staging arrangements with three stages.


The secondary air was introduced either by the nozzles ABC or BCD (Figure 2). The secondary
air was distributed equally or proportionally. Equal distribution means every stage is supplied with
the same air flow. Proportional distribution means the added secondary air flow is proportional
to the flue gas flow at each stage.

Results
For the experiments, wood chips with a water content of about 25% were used. Figure 6 shows
the effect of the secondary air distribution. With the proportional distribution, a reduction of the
NOx emissions by 20% can be realised compared with the equal distribution, independent of the
excess air ratio in the primary zone. This can be explained by the preservation of reducing conditions
over a longer distance, which extends the reaction time available for reducing NOx. A further
enhancement of the reaction time by using the arrangement BCD (Figure 7) does not reduce NOx
much further. However, the best value of 182 mg/m3 was realised with this arrangement. Since the
CO emissions are acceptable in every case, the arrangement ABC is the best compromise between
reducing NOx and keeping a low CO emission level.

Figure 6. NOx and CO emissions (mg/m3 at 13% O2) vs. excess air ratio in the primary zone, arrangement ABC, for equal and proportionally shared secondary air flow rates.

Figure 7. NOx and CO emissions (mg/m3 at 13% O2) vs. excess air ratio in the primary zone, arrangements ABC and BCD.


IMPLEMENTATION OF THE CMAST SYSTEM IN THE 450 kW PLANT

According to the CMAST system realised and tested at the IVD test facility, the Gauss facility has
been modified. The Gauss facility supplies a school building with space heat.
To simulate the conditions of the test facility, a tube system of stainless steel has been installed
as shown in Figure 8. An additional fan has been installed for the secondary air supply. The
air flow is split into secondary air 1 and 2. Both flows are regulated with dampers by the original
control, analogous to the existing burnout air supply. The tubes in the horizontal flue gas pass above
the combustion chamber are installed close to the walls. Four air distribution boxes are mounted on the
tubes and serve as nozzles, analogous to the outlets A, B, C and D in the test firing. In order to
realise a proportional air distribution, the ratio of the air flow rates between 1.1 and 1.2 or 2.1 and
2.2 is adjusted by orifice plates. The proportional air distribution is required for an effective NOx
reduction (compare the Results section above).

Emission behaviour of the retrofitted 450 kW plant
The measurement campaign was carried out during the winter period of 1999/2000. During this
period, two different fuels with water contents of about 45% and 28% were used.

Figure 8. Modified 450 kW facility operated by GAUSS (primary air zones 1 and 2; secondary air 1 and 2 supplied by an additional fan via servo-motor-driven air dampers and flow rate orifices; SCHMID Pyrotronic control; flue gas sampling for O2, CO2, CO, HC, NO and NO2; fly ash and ash removal).


Figure 9. NOx and CO emissions (mg/m3 at 13 Vol.% O2) vs. thermal output (kW), original and CMAST (air staged) mode of operation.

At the beginning of the campaign an adjustment of the control was necessary. Figure 9 shows the
NOx emissions under original conditions and with the CMAST system, using the fuel with about 28%
water content. The reduction of the NOx emissions is in the order of 10–20% over the whole range of
thermal output from 100 kW to 450 kW. The CO emissions were reduced at the same time. The decrease
of the CO emissions can be explained by the better mixing conditions in the burnout chamber caused by
the improved design of the secondary air outlets.
In the 450 kW plant, the potential of the CMAST system is limited for the following reasons.
The operation as a heating system for a school building causes permanent load fluctuations during
the day. An immediate ignition of the fuel can only be guaranteed by a high amount of primary air.
Another problem is the high amount of leakage air coming in through the stoker. Because of
the constant under-pressure in the system, this amount is independent of the load conditions, which
results in an increasing excess air ratio at decreasing load. This leakage air acts as primary air
and reduces the effect of the air staging. The fuel supply could be sealed by the integration
of a cellular wheel feeder. An inhomogeneous distribution of the fuel at the beginning of the grate,
caused by the rotating stoker, leads to inhomogeneous combustion zones that are moved forward by the
grate. To guarantee a complete burnout, a higher amount of primary air is necessary. The part of the
primary air slipping through zones with less material acts as burnout air in the combustion
chamber, similar to the leakage air through the stoker.
The main problem is the changing fuel quality (compare the sections Fuel analysis and
Investigation of the combustion behaviour of the commercially available grate firing system). To
guarantee a complete burnout of wet fuels, a higher share of primary air is necessary, and the plant
is therefore adjusted to wet fuels. When drier fuels are burned, the share of primary air could be
reduced, but the plant is of course not re-adjusted to every new incoming fuel. With the set-up for wet
fuels, the potential for a reduction of the NOx emissions cannot be exploited.

CONCLUSIONS

The experiments showed that it is advantageous to reduce the share of primary air in every operation
mode. Unfortunately, this reduction is limited by the fuel quality. The measurements at IVD and
Gauss showed that fuels with higher water content do not allow a low primary air flow, since this would
cause a reduced fuel conversion rate and, as a result, a diminished thermal output. To guarantee


a complete burnout of the fuel on the grate during load fluctuations, the systems are usually operated
with high primary air ratios. The systems must therefore be adjusted to fuels with higher water
contents, so that fuels of higher quality cannot be exploited.
High load fluctuations and inhomogeneous grate covering demand high primary air flows and
limit the potential of the air staging.
Establishing the fuel quality as a control parameter would be a solution to this problem. Reliable
and affordable sensors for the detection of the fuel quality have to be developed. It can be assumed
that such sensors and the related control systems are a prerequisite for the successful application
of the multiple air staging technique.



Systemic approach for techno-economic evaluation of triple hybrid (RO, MSF and power generation) scheme including accounting of CO2 emission

Sergei P. Agashichev & Ali M. El-Nashar
Research Center, ADWEA, Abu Dhabi, UAE

ABSTRACT: A system of models for the evaluation of a triple hybrid (RO-MSF-power generation) scheme
has been developed. It includes models for (A) power generation, (B) reverse osmosis (RO) desalination
and (C) multistage flash (MSF) desalination. Each group of models, in turn, comprises a set of
submodels of different hierarchy levels, namely (1) technological, (2) energy, (3) ecological and (4)
economic submodels. The technological submodel gives technological characteristics vs. operating
load; the energy submodel calculates the fuel influx vs. load; the task of the ecological submodel is
the quantification of CO2 emissions; and the economic submodel gives the following economic indicators:
(a) water cost, (b) energy cost and (c) cost of CO2 emissions through an imposed carbon tax. The
paper analyses these indicators for various parameters, such as (1) load and efficiency
of the power generating system, (2) specific energy consumption for desalination, (3) specific CO2
emissions and (4) carbon taxes. The submitted model can be applied to the analysis of various schemes
including RO.

INTRODUCTION

In recent years one can see a growth of scientific, engineering and commercial interest in sustainable
technologies characterized by low fuel consumption and CO2 emissions.
Specific fuel consumption and CO2 emissions can be considered as indicators of technological
sustainability [1–3]. Different modifications of dual purpose or cogenerative technologies, in which
thermal desalination is integrated with a power-generating process, are widespread throughout
the world. The UAE is expected to invest US$ 46 billion over the next decades in cogeneration
projects for desalination [1]. In recent years a new generation of dual purpose technologies, namely the
triple hybrid including power generation, MSF and RO desalination, is becoming an attractive
alternative to conventional ones; in particular, the power-desalination complex in Fujairah will have
a capacity of 620 MW and 100 migd, of which 62.5 migd (284,000 m3/day) by MSF and 37.5 migd
(170,000 m3/day) by RO. Unlike conventional cogeneration processes, the triple hybrid includes an RO
process along with thermal desalination and power generation. Some authors state that RO can
successfully coexist with MSF rather than being a process that should replace it [2–13].

STATEMENT OF THE PROBLEM AND OBJECTIVES OF STUDY

The proposed study covers the techno-economic evaluation and assessment of sustainability for a triple
hybrid system including power generation, MSF and RO desalination.
In particular it is focused on the calculation of load-dependent techno-economic characteristics such
as (a) water cost, (b) cost of electricity, (c) cost of low-grade heat, (d) specific CO2 emissions and
(e) cost of CO2 emissions through an imposed carbon tax. The triple hybrid is an example of a system


of multilayer hierarchy with load-dependent resource consumption and emissions. As an object of
modeling it can be represented as an array of flexible multiparameter models and submodels of
various hierarchy levels. There are three groups of models underlying the system: (A)
models describing different power-generating technologies; (B) a model describing RO desalination
and (C) a model describing MSF desalination. Each group of individual models, in turn, contains a set
of matrix submodels of different hierarchy levels: (1) a technological submodel, (2) a fuel or
energy submodel, (3) an ecological submodel and (4) an economic submodel.

ECOTAXATION AND PROGRAMS OF EMISSION TRADING ARE ASPECTS OF SUSTAINABLE POLICY

The elaboration of an environmentally sound technological policy is becoming an unavoidable trend of
modern development. It is focused on complex measures aimed at the accelerated proliferation of
environmentally sound processes on the technological market. Recent documents and guidelines,
namely the Kyoto Protocol, the Treaties of Maastricht and Amsterdam, and the Rio and Oslo Accords
[14–21], put forward the foundation of an environmentally sound technological policy. The development
of emission trading programs and the implementation of ecotaxation systems are key aspects of the
forthcoming technological policy. The conferences on sustainable development held in Oslo recommended
shifting the tax burden from labour to the use of resources and damage to the environment [22].
Carbon dioxide is one of the most widespread greenhouse gases that contribute to global warming.
The level of recommended CO2 taxes, being one of the key economic assumptions in our study, is still
a disputable issue. Published data on the level of carbon tax are still scattered owing to the lack of
a coherent transnational policy. According to the latest available data [23], the level is “$25 to $85 per
tonne of carbon, the level many experts think may be needed if industrialized countries are serious
about the emission targets agreed on in 1997 at Kyoto”. This range of carbon tax is used in the current
study. The lack of a methodology for environmental accounting has hampered the estimation and
techno-economic evaluation of various processes. This issue is especially essential in the multicriteria
and complex analysis of multilayer and multistage processes like the triple hybrid. It is the systemic
methodology that can be successfully applied to the analysis of the behavior of comprehensive systems
like the triple hybrid.
Sustainability, being a complex characteristic, can be quantified by a group of multiparameter indicators.
In particular, in [24] the following groups of sustainability indicators are proposed: (1) indicators for
the estimation of water quality; (2) estimation of resources; (3) estimation of the environment; and
groups of indicators for the estimation of (4) social aspects and (5) efficiency. They can be used for a
complex evaluation of the sustainability of any technology, while the proposed study is focused on the
estimation of specific fuel consumption, cost of water, cost of energy and CO2 emissions, these being
key indicators for the evaluation of a dual purpose cogenerative plant.

TRIPLE HYBRID SYSTEM (POWER GENERATOR, RO AND MSF) AS AN OBJECT OF MODELING USING SYSTEMIC APPROACH

The triple hybrid process including power generation, MSF and RO desalination is an example of a
new generation of co-generative technology. It is becoming an attractive alternative to conventional
technologies. This process can be subdivided into three technological subsystems, namely (A) the
subsystem for power generation, (B) RO desalination and (C) MSF desalination. A simplified flow
diagram of the triple hybrid is shown in Figure 1.
Hybridisation of thermally and electrically driven processes can provide the following advantages:
(1) the ability to diversify the range of the power to water ratio, which is essential for regions where
the demand pattern is characterized by a low value of this ratio (e.g. Abu Dhabi); and (2) the possibility
to use the seasonal surplus of idle power. It should be noted that existing MSF-RO-power plants are
characterized by simple integration without hybridization of the operating regimes, in particular by


Figure 1. Flow diagram of the triple hybrid desalination plant (shown values correspond to the 100 MW power plant operating at 60% load): fuel input 231 MJ(fuel)/s into the gas turbine and 84 MJ(fuel)/s into the auxiliary boiler; electricity output 40 MW, with 12.5 MW(el) to the RO plant and 7.5 MW(el) to the MSF plant; heat from the HRSG and 71 MW(th) from the auxiliary boiler to the MSF plant; GT emission 1290 tonne CO2/day, AB emission 465 tonne CO2/day; RO and MSF outputs 60,000 m3/day each.

blending the distillate and using a common intake system, while many essential technological aspects
remain beyond the scope of the published studies. The incorporation of RO into existing
cogeneration plants can provide essential economic, environmental and operational advantages
over conventional dual and single purpose isolated plants [3–7]. The systemic approach can be used for
the analysis of multilayer and multiparameter systems like the triple hybrid, where the input and output
characteristics depend on the load and efficiency of the adjacent subsystem; in particular, the MSF output
is influenced by the characteristics of the HRSG and, in turn, by the load of the GT. This approach was
developed and widely implemented in chemical technology [25]. According to the methodology of the
systemic approach, any object can be considered as an array of subsystems of different hierarchy levels.
Therefore the triple hybrid (RO, MSF and power generation) technology can be subdivided into three
groups of models underlying the system: (A) models describing different power-generating technologies;
(B) a model describing RO desalination and (C) a model describing MSF desalination. Each group of
individual models, in turn, contains a set of matrix submodels of different hierarchy levels describing
different groups of parameters, namely the (1) technological, (2) fuel (or energy), (3) ecological and
(4) economic parameters of the considered technology.

1. The technological submodel is focused on the calculation of technological characteristics at different
operating loads of the generating systems;
2. The fuel or energy submodel covers the calculation of the fuel influx into the power generating systems
at different operating loads;
3. The task of the ecological submodel is the estimation of CO2 emissions at different operating regimes;
4. The economic submodel gives the values of economic indicators such as (a) the cost of water, (b) the cost
of energy and (c) the accounting of CO2 emissions through an imposed carbon tax (assuming rates of
environmental taxes recommended by EU tax legislation).


Figure 2. Systemic methodology (generalized structure of the model): Group A, submodels for energy generation (GT, HRSG, AB); Group B, submodels for RO desalination; Group C, submodels for MSF desalination; each group contains (1) a technological, (2) a fuel or energy, (3) an ecological and (4) an economic submodel.

The generalized structure of the model based on the systemic approach is shown in Figure 2; a GT with
unfired HRSG and an AB is considered as an example of the power generating subsystem.
The model allows the analysis of the behavior of economic and ecological indicators at various economic
assumptions and technological parameters, such as (1) the load, specific fuel consumption and efficiency
of the energy generating system, (2) the specific energy consumption for desalination, (3) the specific
emissions of CO2 and (4) the taxes on CO2 emissions. A sketch of how such a structure can be organised
is given below.
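
The following minimal sketch only mirrors the structure of Figure 2; the class and function names are hypothetical and do not come from the paper, and the individual submodels are supplied as plain callables of the GT load.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch of the systemic model structure of Figure 2: each
# subsystem group (A: GT/HRSG/AB, B: RO, C: MSF) stacks four submodels that
# are all evaluated for one operating load L of the gas turbine.
@dataclass
class SubsystemModel:
    technological: Callable[[float], Dict[str, float]]  # outputs vs. load
    energy:        Callable[[float], float]             # fuel or energy influx vs. load
    ecological:    Callable[[float], float]             # CO2 emissions vs. load
    economic:      Callable[[float], Dict[str, float]]  # cost indicators vs. load

    def evaluate(self, load: float) -> Dict[str, object]:
        """Evaluate all four hierarchy levels at one operating load."""
        return {
            "technological": self.technological(load),
            "energy": self.energy(load),
            "ecological": self.ecological(load),
            "economic": self.economic(load),
        }

def evaluate_triple_hybrid(groups: Dict[str, SubsystemModel], load: float):
    """Evaluate the three coupled groups (power generation, RO, MSF) at one load."""
    return {name: model.evaluate(load) for name, model in groups.items()}
```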

Exergy based characteristics in economic analysis


The complexity of the problem requires the development of a new generation of methodologies for the
multicriteria assessment of sustainability. A recent tendency is the introduction of exergy-based
characteristics, being exogenous to economics, into the theory of macroeconomic analysis. In particular,
[26] considers exergy as a factor of production along with labor and capital. Exergy, being proportional
to the potential future entropy production (or a measure of the “distance” from thermodynamic
equilibrium), is used as an exogenous input into the economic model. In the proposed study, exergy-based
submodels were used for the estimation of economic indicators. This methodological innovation made it
possible to reallocate the CO2 emissions between the output fluxes of energy based on the cost of the
exergy embodied in the corresponding flux.

POWER GENERATING SUBMODELS: (GT, UNFIRED HRSG AND AB)

This group covers an array of submodels describing the behavior of the power generating subsystems,
namely the GT, HRSG and AB. They include technological, fuel, ecological and economic submodels. Any
individual submodel, in turn, consists of a set of equations describing the technological,
resource-consuming, CO2 emission and economic characteristics. In particular, the model is focused on the
quantitative estimation of the load-dependent behavior of economic indicators such as (1) the cost of
electricity generated by the GT, (2) the cost of low-grade heat produced by the HRSG and (3) the cost of
heat produced by the AB. This group is characterized by two fuel influxes, namely into the GT and the AB,
and seven outputs: (1) heat from the HRSG to MSF, (2) heat from the AB to MSF, (3) electricity from the
GT to RO, (4) an auxiliary electricity flux from the GT to MSF, and (5) the electricity output to the grid,
along with two CO2 fluxes, namely (6) the CO2 emissions generated by the GT and (7) the CO2 emissions
generated by the AB.


Techno-economic assumptions underlying GT and unfired HRSG


(1) The size of the power-generating system is 100 MW; (2) the initial capital cost (GT + HRSG) is 50 mln
USD (assuming a specific capital cost of 500 $/kW (electr)); (3) the economic life of the capital equals
10 years. Based on these assumptions we get:
(A) the annual recovery of capital is 5 mln$/year; (B) the O&M costs are 5 mln$/year; (C) the expenses
for primary fuel, being load-dependent, are based on the results produced by the technological submodel
(they depend on the rate and overall fuel consumption on an annual basis under various loads of the
generating system).

Techno-economic assumptions underlying submodel of auxiliary boiler (AB)


(1) The boiler is used to supply the shortage of heat energy for MSF (in order to maintain a constant
production of the MSF plant). For simplification of the mathematical treatment, all technological
parameters of the boiler, such as fuel consumption and heat output, are expressed against the load of the
GT, while the efficiency of the boiler is assumed to be load-independent and equal to 0.85. The estimation
of the required rated capacity of the AB is based on the shortage of heat energy when the load of the GT
drops to 20%. In the considered case it is equal to 3.9 × 10^9 MJ (heat)/year (or ∼124 MW (thermal)).
Thus the thermal output of the boiler varies from zero to 124 MW, corresponding to a variation of the GT
load from 100 to 20%. (When the GT operates at 100% load the boiler is off.) (2) The initial capital cost
is 15 mln$ (assuming a specific capital cost of 130 $/kW (therm)); (3) the economic life of the capital is
equal to 10 years. Based on these assumptions we get: (A) the annual recovery of capital is 1.5 mln$/year;
(B) the O&M costs are 1.12 mln$/year; (C) the annual expenses for (or cost of) primary fuel, being
load-dependent, are based on the results produced by the technological submodel (they depend on the rate
and overall fuel consumption on an annual basis under various loads of the generating system). For
simplification of the mathematical treatment, all functional characteristics of the AB are expressed
against the load of the GT.
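
The rated capacity quoted above follows directly from the annual heat shortage when continuous operation over the year is assumed; a one-line check of that arithmetic (a reconstruction, not the authors' calculation):

```python
# Check of the auxiliary boiler rated capacity from the annual heat shortage
# quoted in the text (3.9e9 MJ of heat per year at 20% GT load), assuming
# continuous operation over the whole year.
SECONDS_PER_YEAR = 365 * 24 * 3600            # ~3.15e7 s
annual_heat_shortage_MJ = 3.9e9               # MJ (heat)/year

rated_capacity_MW = annual_heat_shortage_MJ / SECONDS_PER_YEAR   # MJ/s = MW(th)
print(f"required AB capacity ~ {rated_capacity_MW:.0f} MW(th)")  # ~124 MW
```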

Technological, fuel and ecological submodels for GT, unfired HRSG, AB


The technological submodels are based on correlations between the electricity and heat output and the GT
load. The submodel is coupled with the fuel resource submodel and with the models of the desalination
groups. The correlations can be approximated by the following equations.

ElGT (L) = AGT L + BGT (1)

HHRSG (L) = AHRSG L + BHRSG (2)


Where ElGT (L) and HHRSG (L) are the electricity and heat output vs. the load of the power generating
subsystem, in kW (el) and kW (therm). A similar mathematical formulation was used for the description of
the auxiliary boiler (AB).

HAB (L) = AAB L + BAB (3)

Where HAB (L) is the heat output from the AB vs. the load of the power generating subsystem, in kW (therm).
The fuel submodel is based on a correlation between the fuel consumption FGT (L) and the load, L, of the
power generating system (GT).
FGT (L) = AF/GT L + BF/GT (4)
Where FGT (L) is the fuel consumption in MJ (fuel)/s and A and B are coefficients.
A similar equation can be used for the auxiliary boiler (AB). In the proposed study the efficiency of
fuel utilization is assumed to be load-independent and equal to 0.85, i.e. its slope coefficient
(A) is zero.
FAB (L) = AF/ABi L + BF/AB (5)
Where FAB (L) is the fuel consumption of the AB in MJ (fuel)/s.
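
As a minimal illustration of such linear load correlations, the coefficients can be recovered from any two tabulated operating points. The sketch below does this for the GT fuel influx using the two end points later reported in Table 1 (126.96 MJ/s at L = 0.2 and 335.21 MJ/s at L = 1); it is a reconstruction for illustration only, not the authors' code.

```python
# Sketch: build a linear load correlation F(L) = A*L + B from two operating
# points, applied here to the GT fuel influx (end points taken from Table 1).
def linear_from_points(l1: float, f1: float, l2: float, f2: float):
    """Return (A, B) such that F(L) = A*L + B passes through both points."""
    a = (f2 - f1) / (l2 - l1)
    b = f1 - a * l1
    return a, b

A_F_GT, B_F_GT = linear_from_points(0.2, 126.96, 1.0, 335.21)

def fuel_gt(load: float) -> float:
    """GT fuel consumption in MJ (fuel)/s at relative load `load`."""
    return A_F_GT * load + B_F_GT

print(round(fuel_gt(0.6), 2))   # ~231.08 MJ/s, matching the value in Table 1
```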
The submodel describing the fuel consumption provides the input data for the estimation of the specific
CO2 emissions, and it is coupled with the emission (ecological) submodel.


Figure 3. Specific CO2 emission (kg CO2/kWh) with respect to the electricity and heat outflows vs. load (option GT + unfired HRSG + AB): curves for heat from the AB, heat from the GT/HRSG and electricity from the GT.

The system is characterized by an influx of primary energy and outgoing fluxes of electricity and heat,
accompanied by unused losses. Any energy flux leaving the system is accompanied by a definite amount of
CO2 emissions. The task of the ecological (emission) submodel is the reallocation of the CO2 emissions
among the fluxes of heat and electricity produced. It contains functions for the specific CO2 emissions
with respect to the heat and electricity produced, namely tonnes of CO2 per kWh of electricity produced
by the GT, tonnes of CO2 per kWh of heat generated by the HRSG and tonnes of CO2 per kWh of heat produced
by the AB. These characteristics are load-dependent. Any outgoing flux can be specified by a definite cost
of allocated exergy. Using this fact, the overall flux of CO2 emitted by the system can be subdivided into
two subfluxes, each proportional to the exergy content of the corresponding flux of heat or electricity.
The calculation is based on the following premises and assumptions.

Assumptions underlying calculation of specific CO2 emissions


The calculation of the CO2 emitted by the system is based on the carbon balance within the system. The
following assumptions are used: (1) the carbon content of the fuel is assumed to be 0.8 ton of carbon per
ton of primary fuel, which is equivalent to 2.88 ton of CO2 per ton of primary fuel; (2) the calorific
value of the fuel is assumed to be 44.8 MJ/kg, which corresponds to 6.43E-05 ton of CO2 per MJ of primary
fuel.

Assumptions underlying distribution of CO2 emissions between fluxes of heat and electricity
The estimation of the share of the emitted CO2 allocated to each flux is based on the cost of the exergy
allocated to the corresponding flux. The following assumptions are used: (3) the cost of the exergy
embodied in the losses is assumed to be zero; (4) the efficiency of the reference cycle was assumed to
be 0.24.
The load dependence of the specific CO2 emissions can be approximated by the following
linear functions.
CGT −EL (L) = AC/GT (EL) L + BC/GT (EL) (6)

CGT −TH (L) = AC/GT (TH ) L + BC/GT (TH ) (7)

Where CGT−EL (L) and CGT−TH (L) are the specific CO2 emissions of the GT with respect to the electricity
and heat outflow, in ton of CO2/kWh (electricity) and ton of CO2/kWh (heat), respectively. A similar
correlation was used for the estimation of the specific emissions produced by the auxiliary boiler (AB).

CAB−TH (L) = AC/AB(TH ) L + BC/AB(TH ) (8)

Where CAB−TH (L) is the specific CO2 emission of the AB with respect to the heat outflow, in ton of CO2
per kWh (heat). The behavior of these functions is shown in Figure 3. The data generated by the
ecological submodel are consolidated in Table 1.
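
The assumptions above translate into a short calculation; the sketch below is an illustrative reconstruction (CO2 factor from 2.88 t CO2 per t fuel and 44.8 MJ/kg, heat weighted by the reference-cycle efficiency 0.24 in the exergy split), not the authors' code, and the tabulated values (e.g. 0.703 and 0.169 kg CO2/kWh at 60% load) are somewhat higher than this sketch reproduces, presumably because of calculation details not reported in the paper.

```python
# Illustrative reconstruction of the ecological submodel assumptions: CO2
# factor of the primary fuel and exergy-based split of the emissions between
# the electricity and heat outputs of the GT + unfired HRSG subsystem.
CO2_PER_TON_FUEL = 2.88       # t CO2 per t fuel (0.8 t C per t fuel)
NCV_FUEL = 44.8               # MJ per kg of fuel
ETA_REFERENCE = 0.24          # reference-cycle efficiency (exergy weight of heat)

CO2_PER_MJ = CO2_PER_TON_FUEL / (NCV_FUEL * 1000.0)   # ~6.43e-5 t CO2 per MJ

def split_emissions(fuel_MJ_per_s: float, el_MW: float, heat_MW: float):
    """Specific emissions (kg CO2/kWh) allocated to electricity and heat."""
    co2_kg_per_s = fuel_MJ_per_s * CO2_PER_MJ * 1000.0
    el_kWh_per_s, heat_kWh_per_s = el_MW / 3.6, heat_MW / 3.6
    w_el, w_heat = el_kWh_per_s, ETA_REFERENCE * heat_kWh_per_s   # exergy weights
    total = w_el + w_heat
    return (co2_kg_per_s * w_el / total / el_kWh_per_s,
            co2_kg_per_s * w_heat / total / heat_kWh_per_s)

# Example at 60% GT load (231.08 MJ/s fuel, 60 MW el, 94.39 MW heat, Table 1):
print(split_emissions(231.08, 60.0, 94.39))    # roughly (0.65, 0.16) kg CO2/kWh
```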


Table 1. Load dependence of techno-ecological characteristics (GT + unfired HRSG) (data on fuel consumption are adopted from [27–28]).

Load of GT            0.2      0.4      0.6      0.8      1
MJ (fuel)/s           126.96   179.02   231.08   283.15   335.21
MW (electr)           20       40       60       80       100
MW (heat)             41.035   65.49    94.39    127.72   165.49
kg CO2/kWh (El)       0.869    0.786    0.703    0.620    0.538
kg CO2/kWh (Heat)     0.206    0.187    0.169    0.150    0.131

Table 2. Load dependence of techno-ecological characteristics for the AB (data on fuel consumption are adopted from [27–28]).

Load of GT            0.2      0.4      0.6      0.8      1
MJ (fuel)/s           146.42   117.65   83.65    44.43    0
MW (heat)             124.46   100.01   71.10    37.77    0
kg CO2/kWh (Heat)     0.272    0.272    0.272    0.2722   –

Economic submodel for estimation of the load-dependent cost of heat and electricity produced by GT, unfired HRSG, AB
The cost of electricity and heat does not remain constant and is strongly influenced by regime parameters
such as the load factor. The submodel is focused on the estimation of the load-dependent cost of heat and
electricity produced by the GT, HRSG and AB. Within this submodel the annual expenses are subdivided into
three groups: (A) the load-independent annual payment for the recovery of capital costs; (B) the
load-independent O&M expenses excluding payment for fuel or energy; and (C) the load-dependent fuel
expenses. One of the key aspects of the economic submodel is the redistribution of expenses between the
generated fluxes of electricity and heat produced by the GT and HRSG. The embodied exergy, being a value
exogenous to the economic submodel, was used as the basic characteristic for the economic evaluation of
the load-dependent fluxes.

Distribution of expenses between load-dependent fluxes of heat and electricity
Any fuel consuming subsystem is characterized by an influx of primary energy and outgoing fluxes of
electricity and heat, accompanied by losses. Any outgoing flux can be specified by a definite cost of
exergy. Using this fact, the overall manufacturing expenses can be subdivided into two load-dependent
parts, each proportional to the exergy embodied in the corresponding subflux of heat or electricity. The
calculation is based on the following assumptions: (1) the exergy cost of the losses is assumed to be
zero; (2) the efficiency of the reference cycle was assumed to be 0.24. The calculated data are presented
in Table 3.

SUBMODELS FOR RO AND MSF DESALINATION

This group includes an array of submodels describing the behavior of the RO and MSF subsystems. It covers
technological, energy, ecological and economic submodels. The MSF submodel is characterized by the
following input variables: (1) a load-dependent heat flux from the HRSG with a load-dependent cost; (2) a
load-dependent heat flux from the AB with a load-dependent cost and (3) a constant influx of electricity
from the GT with a load-dependent cost. The RO subsystem is characterized by a constant electricity influx
with a load-dependent cost. These fluxes are illustrated in Figure 1. In particular, the desalination
models are focused on the quantitative estimation of the load-dependent behavior of economic indicators


Table 3. Load-dependent cost of heat and electricity produced by GT and HRSG (structure of annual expenses). Cost of primary energy: 1 $/GJ.

Load of GT 0.2 0.4 0.6 0.8 1

(A) Annual expenses for capital recovery (GT + HRSG),


5 mln(cap)$/year
(B) O&M expenses (GT + HRSG),
5 mln(O&M)$/year
(C) Load-dependent fuel expenses∗ , mln$ (fuel)/year
Expenses for fuel (GT)
mln$/year, 4.00E + 00 5.65E + 00 7.29E + 00 8.93E + 00 1.06E + 01
(E) Cost of electricity (by GT)
$/kWh(El) 0.0479 0.03316 0.02522 0.02028 0.016926
(F) Cost of heat (by HRSG)
$/kWh∗∗ (HEAT) 0.0116 0.00801 0.006097 0.004903 0.00409
∗ Cost of primary fuel assumed to be 1$/GJ.

Table 4. Load-dependent cost of heat and electricity produced by AB (structure of annual expenses). Cost of primary energy: 1 $/GJ.

Load of GT 0.2 0.4 0.6 0.8 1

(A) Annual expenses for capital recovery (AB),


1.5 mln (cap)$/year
(B) O&M expenses (AB),
1.12 mln (O&M)$/year
(C) Load-dependent fuel expenses (AB)∗ , mln$ (fuel)/year
Expenses for fuel (AB)
mln$/year, 4.62 3.71 2.57 1.4 0
(F) Cost of heat (from AB)
$/kWh∗∗ (HEAT) 0.0066 0.0072 0.0084 0.0121 X
∗ Cost of primary fuel assumed to be 1$/GJ.

such as the cost of water and the CO2 tax allocated to a cubic meter of water under various levels of
possible carbon tax. The calculation of these characteristics is strongly influenced by load-dependent
variables such as the cost of heat, the cost of electricity and the specific CO2 emissions allocated to
both forms of energy.

Techno-economic assumptions underlying MSF submodel


The case study is based on the following assumptions: (1) the rated capacity of the MSF subsystem was
assumed to be 60000 m3/day; (2) the initial capital cost is 90 mln$ (assuming a specific capital cost of
1500 $/m3/day); (3) the economic life of the capital was assumed to be 10 years; (4) the specific O&M
cost (excluding energy expenses) is 90 $/m3/day. Data on the energy consumption of the MSF subsystem:
(1) PR = 10; (2) specific heat energy consumption 61 kWh (therm)/m3 (distillate); (3) auxiliary
electricity consumption 3 kWh (el)/m3 (distillate). Based on these assumptions: (A) the annual
capital recovery (MSF) is 9 mln$/year; (B) the annual O&M costs (MSF) are 5.4 mln$/year.

Techno-economic assumptions underlying RO submodel


The case study is based on the following assumptions: (1) the rated capacity of the RO subsystem is
60000 m3/day; (2) the initial capital investment is 60 mln$ (assuming a specific capital cost of
1000 $/m3/day); (3) the economic life of the capital is 10 years; (4) the specific O&M cost is
120 $/m3/day; (5) the specific energy consumption for RO desalination is assumed to be
5 kWh (el)/m3 (permeate).


Figure 4. Specific CO2 emission allocated to a cubic meter of water (kg CO2/m3) vs. load of the GT, for MSF and RO. RO specific energy consumption: 5 kWh (el)/m3 (permeate); MSF specific heat energy consumption: 60 kWh (thermal)/m3 (distillate) at PR = 10; MSF auxiliary electrical energy consumption: 3 kWh (el)/m3 (distillate).

Based on these assumptions: (A) the annual capital recovery (RO) is equal to 6 mln$/year; (B) the annual
O&M costs (RO) are equal to 7.2 mln$/year.
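
The fixed (load-independent) cost constituents that appear later in Tables 9 and 10 follow directly from these assumptions; the sketch below reproduces them under the assumption of full production over 365 days per year and straight-line capital recovery, while the energy and CO2-tax constituents come from the load-dependent submodels.

```python
# Sketch: specific (per m3) fixed cost constituents of the desalination
# subsystems from the assumptions above (60,000 m3/day, 365 days per year).
ANNUAL_PRODUCTION_M3 = 60_000 * 365           # ~2.19e7 m3/year

def specific_cost(annual_mln_usd: float) -> float:
    """Convert an annual expense in mln USD/year into USD per m3 of water."""
    return annual_mln_usd * 1.0e6 / ANNUAL_PRODUCTION_M3

# MSF: capital recovery 9 mln$/year, O&M 5.4 mln$/year
print(specific_cost(9.0), specific_cost(5.4))   # ~0.411 and ~0.247 $/m3 (cf. Table 9)
# RO: capital recovery 6 mln$/year, O&M 7.2 mln$/year
print(specific_cost(6.0), specific_cost(7.2))   # ~0.274 and ~0.329 $/m3 (cf. Table 10)

# The energy constituent is load dependent; e.g. for RO at 100% GT load:
print(5.0 * 0.0167)   # 5 kWh/m3 * 0.0167 $/kWh ~ 0.084 $/m3 (cf. Table 10)
```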

Technological, energy and ecological submodels for RO & MSF desalination


These submodels are based on mass and energy balances. Using the data generated by the model of the power
generating subsystem, they describe the allocation of the different forms of energy per cubic meter of
produced water. The ecological submodel gives load-dependent functions describing the specific CO2
emission allocated to a cubic meter of desalted water produced by RO and MSF, respectively. The submodel
is based on energy balance equations using the data on the specific CO2 emissions allocated to the heat
and electricity output. The load dependence of the specific CO2 emissions can be approximated by the
following linear functions.

CRO (L) = AC/RO L + BC/RO (9)

CMSF (L) = AC/MSF L + BC/MSF (10)

Where CRO (L) and CMSF (L) are the specific emissions in kg CO2 per m3 of water produced by RO and MSF,
respectively. The calculated values of the load-dependent inputs of the different forms of energy and the
allocated emissions are consolidated in Tables 5 & 6.

Economic submodels for RO and MSF


These submodels cover the estimation of the annual manufacturing expenses and the specific cost of water.

Estimation of annual manufacturing expenses (MSF)


The structure of the annual expenses contains load-independent and load-dependent constituents, namely:
(A) the load-independent annual payment for the recovery of capital costs, (B) the load-independent O&M
expenses excluding payment for energy, (C) the load-dependent energy expenses, which in turn can be
subdivided into (C1) the load-dependent cost of heat and electricity produced by the GT + unfired HRSG
and (C2) the load-dependent cost of heat produced by the auxiliary boiler (AB), and (D) the load-dependent
cost of CO2 emissions in case a carbon tax is imposed. The annual manufacturing expenses for MSF are
consolidated in Table 7.


Table 5. Load-dependent inputs of different forms of energy and allocated emissions for MSF. MSF specific heat energy consumption: 60.0 kWh (thermal)/m3 (distillate), corresponding to PR = 10; MSF auxiliary electrical energy consumption: 3 kWh (el)/m3 (distillate); constant heat required for MSF: 1.34E+09 kWh (heat)/year.

Load GT 0.2 0.4 0.6 0.8 1

(A) Input of heat produced by HRSG and CO2 emissions allocated to it.
Heat from HRSG,
KWh (heat-GT)/year 3.59E + 08 5.74E + 08 8.27E + 08 1.12E + 09 1.34E + 09
Annual emission,
kgCO2 (GT-Heat)/year 8.53E + 07 1.03E + 08 1.29E + 08 1.60E + 08 1.79E + 08
(B) Input of heat produced by AB and CO2 emissions allocated to it.
Heat from AB,
KWh (heat-AB)/year 9.79E + 08 7.65E + 08 5.11E + 08 2.19E + 08 0.00E + 00
Annual emission,
kgCO2 (AB-Heat)/year 2.67E + 08 2.08E + 08 1.39E + 08 5.98E + 07 0.00E + 00
(C) Input of electricity produced by GT and CO2 emissions allocated to it.
Electr by GT,
KWh (El-GT)/year 4.38E + 07 4.38E + 07 4.38E + 07 4.38E + 07 4.38E + 07
Annual emission,
kgCO2 (GT-Electr)/year 4.30E + 07 3.25E + 07 2.83E + 07 2.59E + 07 2.43E + 07
Total annual emission
kgCO2 (GT + AB)/year 3.95E + 08 3.44E + 08 2.97E + 08 2.45E + 08 2.04E + 08
Specific CO2 Emissions
kgCO2 /m3 (MSF) 18.03 15.68 13.54 11.20 9.29

Table 6. Load-dependent inputs of energy and allocated emissions for RO. Where RO-specific energy
consumption is 5 kWh (el)/m3 (permeate).

Load GT 0.2 0.4 0.6 0.8 1

Electr from GT,


kWh (el)/year 1.1E + 08 1.1E + 08 1.1E + 08 1.1E + 08 1.1E + 08
Annual CO2 emission,
kgCO2 /year 1.08E + 08 8.13E + 07 7.07E + 07 6.47E + 07 6.07E + 07
Specific CO2 emission,
kg CO2 /m3 (RO) 4.91 3.71 3.23 2.96 2.77

Estimation of annual manufacturing expenses (RO)


The structure of the annual expenses contains load-independent and load-dependent constituents, namely:
(A) the load-independent annual payment for the recovery of capital costs, (B) the load-independent O&M
expenses excluding payment for energy, (C) the load-dependent cost of electricity produced by the GT +
unfired HRSG (within this study, energy expenses are deliberately separated from the O&M group and
considered as an isolated item owing to their complicated load-dependent behavior) and (D) the
load-dependent cost of CO2 emissions in case a carbon tax is imposed. The calculated data are consolidated
in Table 8.

Structure of water cost produced by MSF and RO


It includes the following constituents: (A) the load-independent capital recovery cost; (B) the
load-independent O&M cost (excluding payment for energy); (C) the load-dependent energy cost, which in
turn can be subdivided into (C1) the load-dependent cost of heat and electricity produced by the GT +
unfired HRSG and (C2) the load-dependent cost of heat produced by the auxiliary boiler (AB) for MSF, and
(D) the load-dependent cost of CO2 emissions (this item is to be included in case the corresponding
legislation is imposed). The itemized structure of the load-dependent total production cost is given in
Tables 9 and 10.


Table 7. Structure of annual expenses for MSF desalination (60000 m3 /day).

Load of GT 0.2 0.4 0.6 0.8 1

(A) Load independent annual expenses for capital recovery 9 mln$ (Cap)/year
(B) Load independent annual O&M expenses 5.4 mln$ (O&M)/year
(C1) Load-dependent annual expenses for heat produced by HRSG
mln$(heat-HRSG)/year 4.642838 4.43656 4.7629496 5.2712669 5.426349
(C2) Load-dependent annual expenses for heat produced by AB.
mln$(heat-AB)/year 6.49E + 00 5.52E + 00 4.31E + 00 2.66E + 00 0.00E + 00
(C3) Load-dependent annual expenses for electricity produced by GT
mln$(el)/year 2.34E + 00 1.401145 1.0437219 0.8536398 0.73466
(D) Total annual expenses for electricity and heat (produced by HRSG + AB + GT) and
load-dependent cost of water
mln $(heat HRSG + heatAB + 13.47544 11.35728 10.119024 8.7878228 6.161009
electr. GT)/year
SUMMARY (MSF)
No carbon tax
Total annual expenses incl.
tax, mln$(Cap + OM + 27.87544 25.75728 24.519024 23.187823 20.56101
En+Tax)/year
Carbon tax is 30 $ per tone of emitted carbon
Carbon tax,
mln$(CO2 -tax)/year 3.29E + 00 2.86E + 00 2.47E + 00 2.05E + 00 1.70E + 00
Total annual expenses,
mln$(Cap + OM + 3.12E + 01 2.86E + 01 2.70E + 01 2.52E + 01 2.23E + 01
En + Tax)/year
Carbon tax is 40 $ per tone of emitted carbon
Carbon tax,
mln$(CO2 -tax)/year 4.39E + 00 3.82E + 00 3.30E + 00 2.73E + 00 2.26E + 00
Total annual expenses,
mln$(Cap + OM + 3.23E + 01 2.96E + 01 2.78E + 01 2.59E + 01 2.28E + 01
En + tax)/year
Carbon tax is 50 $ per tone of emitted carbon
Carbon tax,
mln$(CO2 -tax)/year 5.48E + 00 4.77E + 00 4.12E + 00 3.41E + 00 2.83E + 00
Total annual expenses,
mln$(Cap + OM + 3.34E + 01 3.05E + 01 2.86E + 01 2.66E + 01 2.34E + 01
En + Tax)/year
Carbon tax is 60 $ per tone of emitted carbon
Carbon tax,
mln$(CO2 -tax)/year 6.58E + 00 5.73E + 00 4.94E + 00 4.09E + 00 3.39E + 00
Total annual expenses,
mln$(Cap + OM + 3.45E + 01 3.15E + 01 2.95E + 01 2.73E + 01 2.40E + 01
En+Tax)/year

The load-dependent CO2 tax allocated to a cubic meter of water is based on the carbon of the primary fuel
allocated to the thermal and electrical energy consumed by MSF. It varies from 0.07 $/m3 to 0.15 $/m3 as
the load drops from 1 to 0.2 under a carbon tax of 30 USD per tonne of emitted carbon, and from
0.15 $/m3 to 0.3 $/m3 under a carbon tax of 60 USD per tonne of emitted carbon.
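
The conversion behind these figures is simple: the tax is levied per tonne of carbon, so the specific CO2 emission per cubic meter is scaled by the carbon mass fraction of CO2 (12/44). A short illustrative check against the MSF values of Tables 5 and 9 (a reconstruction, not the authors' calculation):

```python
# Check of the CO2-tax constituent of the water cost: the tax is quoted per
# tonne of emitted carbon, so kg of CO2 are converted with the mass ratio 12/44.
C_PER_CO2 = 12.0 / 44.0     # kg carbon per kg CO2

def co2_tax_per_m3(spec_emission_kg_co2_per_m3: float,
                   tax_usd_per_t_carbon: float) -> float:
    """CO2 tax charged per cubic meter of water, in USD/m3."""
    return spec_emission_kg_co2_per_m3 * C_PER_CO2 * tax_usd_per_t_carbon / 1000.0

# MSF at 20% GT load: 18.03 kg CO2/m3 (Table 5)
print(co2_tax_per_m3(18.03, 30))   # ~0.15 $/m3 (Table 9 gives 0.1503)
print(co2_tax_per_m3(18.03, 60))   # ~0.30 $/m3 (Table 9 gives 0.3005)
```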
The influence of the cost of primary fuel at various levels of carbon tax is shown in Figure 6.
The load-dependent CO2 tax allocated to a cubic meter of water produced by RO is based on the carbon of
the primary fuel allocated to the electrical energy consumed by RO desalination.


Table 8. Load-dependent annual cost for RO desalination at various carbon taxes. (Calculated projections are based on the following assumptions: RO capacity 60000 m3/day, cost of primary fuel 1 $/GJ, carbon tax varying from 0 to 60 USD per tonne of emitted carbon.)

Load GT 0.2 0.4 0.6 0.8 1

(A)Load independent annual expenses for capital recovery is 6 mln$(Cap)/year


(B) Load independent annual O&M expenses are 7.2 mln$(O&M)/year
(C) Load-dependent cost of energy.
Cost of electricity,
$/kWh (el) 0.053 0.0319 0.0238 0.0194 0.0167
Annual expenses for electr,
mln$(el)/year 5.850619 3.502863 2.6093047 2.1340996 1.83665
SUMMARY (RO)
No carbon tax
Total annual expenses,
mln$(Cap+ OM+ 19.05062 16.70286 15.809305 15.3341 15.03665
En+Tax)/year
Carbon tax is 30 $ per tone of emitted carbon
Carbon tax,
mln$(CO2 -tax)/year 8.96E − 01 6.77E − 01 5.89E − 01 5.39E − 01 5.06E − 01
Total annual expenses,
mln$(Cap + OM + 1.99E + 01 1.74E + 01 1.64E + 01 1.59E + 01 1.55E + 01
En + Tax)/year
Carbon tax is 40 $ per tone of emitted carbon
Carbon tax,
mln$(CO2 -tax)/year 1.19E + 00 9.03E − 01 7.86E − 01 7.19E − 01 6.74E − 01
Total annual expenses,
mln$(Cap+ OM+ 2.02E + 01 1.76E + 01 1.66E + 01 1.61E + 01 1.57E + 01
En + Tax)/year
Carbon tax is 50 $ per tone of emitted carbon
Carbon tax,
mln$(CO2 -tax)/year 1.49E + 00 1.13E + 00 9.82E − 01 8.99E − 01 8.43E − 01
Total annual expenses,
mln$(Cap+ OM+ 2.05E + 01 1.78E + 01 1.68E + 01 1.62E + 01 1.59E + 01
En+Tax)/year
Carbon tax is 60 $ per tone of emitted carbon
Carbon tax,
mln$(CO2 -tax)/year 1.79E + 00 1.35E + 00 1.18E + 00 1.08E + 00 1.01E + 00
Total annual expenses,
mln$(Cap+ OM+ 2.08E + 01 1.81E + 01 1.70E + 01 1.64E + 01 1.60E + 01
En + Tax)/year

It varies from 0.02 $/m3 to 0.04 $/m3 as the load drops from 1 to 0.2 under a carbon tax of 30 $ per tonne
of emitted carbon, and from 0.046 $/m3 to 0.082 $/m3 under a carbon tax of 60 $ per tonne of emitted carbon.
The influence of the cost of primary fuel at various levels of carbon tax is shown in Figure 8.

CONCLUSIONS AND ANALYSIS OF RESULTS

According to published expert estimates [2–13], RO can successfully coexist with MSF rather than being a
process that should replace it. The proposed study confirms this statement and gives a set of flexible
multiparameter submodels and a methodology for the techno-economic evaluation of the triple hybrid scheme.
It provides an array of load-dependent techno-economic indicators, namely the cost of low-grade heat, the
cost of electricity and the cost of water produced by GT, HRSG, AB, RO and MSF, within the context of the
considered schemes.


Figure 5. Cost of water produced by MSF ($/m3) vs. load of the power generating system at various levels of carbon tax (0, 30, 40, 50 and 60 $/t carbon; cost of primary fuel 1 $/GJ).

Figure 6. Cost of water produced by MSF ($/m3) vs. cost of primary fuel (1–4 $/GJ) at various values of imposed carbon tax (no tax and 30, 40, 50, 60 $/t carbon; at 100% GT load).

The conducted research was focused on the following groups of variables and assumptions:
(1) technological variables such as the load, (2) economic assumptions such as the cost of primary
energy and (3) aspects of ecological legislation, namely the CO2 tax.

Influence of GT load on characteristics


The behavior of techno-economic indicators such as the cost of electricity, cost of low-grade heat, cost
of water, specific fuel consumption and CO2 emissions is quite sensitive to variations of the GT load.
Thermal desalination is more affected by the load than RO. As a consequence it may be noted that:
(A) a drop of the load corresponds to a non-linear growth of the specific fuel consumption, namely it goes
from 3.6 to 10.7 J (primary fuel)/J (electricity) when the load drops from 1 to 0.2; (B) when the load
drops from 1 to 0.2, the cost of electricity generated by the GT and the cost of low-grade heat produced
by the HRSG and AB go from 0.017 to 0.048 $/kWh (el-GT), from 0.004 to 0.012 $/kWh (heat-HRSG)


Table 9. Load-dependent structure of the water cost produced by MSF (including CO2 taxes charged per cubic meter of water). Assumptions: (1) rated capacity of the power generating technology (GT + unfired HRSG) 100 MW, current load varying from 100% to 20%; (2) rated capacity of MSF 60000 m3/day (with constant output). Cost of primary energy 1 $/GJ, PR = 10, auxiliary electricity consumption 3 kWh (el)/m3.

Load of GT 0.2 0.4 0.6 0.8 1

Cost of water produced by MSF, $/m3


Capital cost, $/m3 0.411 0.411 0.411 0.411 0.411
O&M cost $/m3 0.2466 0.247 0.247 0.247 0.2466
Electr by GT, $/m3 0.1069 0.064 0.048 0.039 0.0335
Heat by HRSG, $/m3 0.212 0.203 0.217 0.241 0.2478
Heat by AB, $/m3 0.2965 0.252 0.197 0.122 0
No carbon tax imposed, Cost of primary fuel 1 $/GJ
Water cost, $/m3 1.2729 1.176 1.12 1.059 0.9389
Carbon tax is 30 $ per tone of emitted carbon, Cost of primary fuel 1 $/GJ
CO2 tax, $/m3 0.1503 0.131 0.113 0.093 0.0774
Water cost $/m3 1.4231 1.307 1.232 1.152 1.0163
Carbon tax is 40 $ per tone of emitted carbon, Cost of primary fuel 1 $/GJ
CO2 tax, $/m3 0.2003 0.174 0.15 0.125 0.1033
Water cost $/m3 1.4732 1.35 1.27 1.183 1.0421
Carbon tax is 50 $ per tone of emitted carbon, Cost of primary fuel 1 $/GJ
CO2 tax, $/m3 0.2504 0.218 0.188 0.156 0.1291
Water cost $/m3 1.5233 1.394 1.308 1.214 1.0679
Carbon tax is 60 $ per tone of emitted carbon, Cost of primary fuel 1 $/GJ
CO2 tax, $/m3 0.3005 0.261 0.226 0.187 0.1549
Water cost $/m3 1.5734 1.438 1.345 1.246 1.0938

Figure 7. Cost of water produced by RO ($/m3) vs. load of the power generating system at various levels of carbon tax (0, 30, 40, 50 and 60 $/t carbon; cost of primary fuel 1 $/GJ).

and from 0.012 to 0.0066 $/kWh (heat-AB), respectively; (C) when the load drops from 1 to 0.2, the cost of
water produced by MSF and RO goes from 0.93 to 1.27 USD per cubic meter (MSF) and from 0.68 to 0.87 USD
per cubic meter (RO), respectively (see Tables 9 & 10); (D) a drop of the load, in turn, increases the
specific CO2 emissions:


Table 10. Load-dependent structure of the water cost produced by RO (load-dependent constituents including CO2 taxes charged per cubic meter of water). Cost of primary energy 1 $/GJ, energy consumption 5 kWh/m3.

Load of GT 0.2 0.4 0.6 0.8 1

Cost of water produced by RO, $/m3


Capital cost, $/m3 0.274 0.274 0.274 0.274 0.274
O&M cost $/m3 0.329 0.329 0.329 0.329 0.3288
Electr by GT, $/m3 0.267 0.16 0.119 0.097 0.0839
No carbon tax imposed, cost of primary fuel 1 $/GJ
Water cost, $/m3 0.87 0.763 0.722 0.7 0.6866
Carbon tax is 30 $ per tone of emitted carbon, cost of primary fuel 1 $/GJ
CO2 tax, $/m3 0.041 0.031 0.027 0.025 0.0231
Water cost $/m3 0.911 0.794 0.749 0.725 0.7097
Carbon tax is 40 $ per tone of emitted carbon, cost of primary fuel 1 $/GJ
CO2 tax, $/m3 0.055 0.041 0.036 0.033 0.0308
Water cost $/m3 0.924 0.804 0.758 0.733 0.7174
Carbon tax is 50 $ per tone of emitted carbon, cost of primary fuel 1 $/GJ
CO2 tax, $/m3 0.068 0.052 0.045 0.041 0.0385
Water cost $/m3 0.938 0.814 0.767 0.741 0.7251
Carbon tax is 60 $ per tone of emitted carbon, cost of primary fuel 1 $/GJ
CO2 tax, $/m3 0.082 0.062 0.054 0.049 0.0462
Water cost $/m3 0.952 0.825 0.776 0.749 0.7328

Figure 8. Cost of water produced by RO ($/m3) vs. cost of primary fuel (1–4 $/GJ) at various values of imposed carbon tax (no tax and 30, 40, 50, 60 $/t carbon; at 100% GT load).


when the load drops from 1 to 0.2, the specific CO2 emissions allocated to a cubic meter of water produced
by MSF and RO go from 9.29 to 18.03 kg CO2/m3 (MSF) and from 2.77 to 4.91 kg CO2/m3 (RO), respectively
(see Tables 5 & 6).

Influence of cost of primary fuel on water cost


Thermal desalination, because of its relatively high energy consumption, is more vulnerable to a growth of
the cost of primary fuel than RO. The analysis of the calculated projections gives the following: an
increase of the cost of primary fuel from 1 to 4 $/GJ will be accompanied by the following growth.
(A) In the case of RO desalination, the cost of water goes from 0.69 to 0.82 USD per cubic meter when no
carbon tax is imposed, and from 0.73 to 0.86 USD per cubic meter when the imposed carbon tax is USD 60 per
tonne of emitted carbon. (B) In the case of MSF desalination, the cost of water is more negatively
influenced by the growth of the primary fuel cost, namely it goes from 0.94 to 1.37 USD per cubic meter
when no carbon tax is imposed, and from 1.09 to 1.53 USD per cubic meter when the imposed carbon tax is
USD 60 per tonne of emitted carbon.

Influence of carbon tax rate on cost of water


A reconfiguration of the tax system and the accounting of CO2 emissions will increase the cost of produced
water in accordance with the imposed carbon tax. The level of the optimum CO2 tax is still a debatable
issue; according to the published analysis [23], it can probably be ranged from 25 to 85 USD per tonne of
emitted carbon, and a similar range was assumed in the proposed study. When the imposed CO2 tax goes from
30 to 60 USD per tonne of emitted carbon, the cost of water produced by MSF ranges from 1.02 to 1.09 USD
per cubic meter at 100% of the GT load and from 1.27 to 1.57 USD per cubic meter at 20% of the load. A
similar behavior can be seen in the case of RO desalination: when the imposed CO2 tax goes from 30 to 60
USD per tonne of emitted carbon, the cost of water produced by RO ranges from 0.68 to 0.73 USD per cubic
meter at 100% of the GT load, and from 0.87 to 0.95 USD per cubic meter at 20% of the load (see
Tables 9 & 10).
Incorporating RO into the co-generative scheme decreases the cost of water and the average level of CO2
emissions per cubic meter, which would give a double benefit when a system of CO2 taxation is imposed.
The implementation of CO2 taxation would accelerate the diffusion of RO-based technologies on the
technological market.

NOMENCLATURE

CGT−EL (L)   Specific CO2 emission of the GT with respect to the electricity outflow, ton of CO2/kWh (electricity)
CGT−TH (L)   Specific CO2 emission of the GT with respect to the heat outflow, ton of CO2/kWh (heat)
CAB−TH (L)   Specific CO2 emission of the AB with respect to the heat outflow, ton of CO2/kWh (heat)
FGT (L)      Fuel consumption of the GT, MJ (fuel)/s
FAB (L)      Fuel consumption of the AB, MJ (fuel)/s
ElGT (L)     Electricity output vs. load of the power generating subsystem, kW (el)
HHRSG (L)    Heat output vs. load of the power generating subsystem, kW (therm)
HAB (L)      Heat output from the AB vs. load of the power generating subsystem, kW (therm)
AB           Auxiliary boiler
GT           Gas turbine
HRSG         Heat recovery steam generator

REFERENCES

1. Co-generation – a new generation at Middle East electricity, in The Newsletter of the Middle East
Desalination Research Center, Iss. 13, August 2001.


2. M.R. Al-Marafie, Prospects of hybrid RO/MSF desalting plants in Kuwait, Desalination, 72 (1989),
395–404.
3. A. El-Sayed, S. Ebrahim, A. Al-Saffar, M. Abdel-Jawad, Pilot study of MSF/RO hybrid systems, Desalination, 120 (1998), 121–128.
4. Mark Wilf, Klinko Kenneth, Optimization of seawater RO systems design, Desalination, 138 (2001)
299–306.
5. Javier Uche, Luis Serra, Antonio Valero, Hybrid desalting systems for avoiding water shortage in Spain, Desalination, 138 (2001), 329–334.
6. Z.K. Al-Bahri, W.T. Hanburry, T. Hodgkiess, Optimum feed temperatures for seawater reverse osmosis
plant operation in MSF/RO hybrid plant, Desalination, 138 (2001), 335–339.
7. L. Awerbuch, Power-desalination and importance of hybrid ideas, In: IDA World Congress on
Desalination, vol. 4, p. 184–192, 6–9 Oct. 1997, Spain.
8. B.A. Kamaluddin, J.L. Talavera, K. Wangnik, Reverse osmosis seawater desalination systems in co-
generation plants for additional production of drinking water using off-peak electrical energy, In: IDA
World Congress on Desalination, vol. 1, 387–413, 6–9 Oct. 1997, Spain.
9. K. Wangnik, A Global Overview of Water Desalination Technology and the Perspectives, In: Int.
Conference. Spanish Hydrological Plan and Sustainable Water Management, (Environmental Aspects,
Water Reuse and Desalination) Zaragossa, Spain, 13–14 June 2001.
10. Ali M. El-Nashar, The role of desalination in water management in the Gulf region, In: Int. Conference Spanish Hydrological Plan and Sustainable Water Management (Environmental Aspects, Water Reuse and Desalination), Zaragossa, Spain, 13–14 June 2001.
11. M. Abdel-Jawad et al., Fifteen years of R&D program in seawater desalination at KISR, Part I: RO system performance, Desalination, 135 (2001), 155–167.
12. S. Ebrahim, M. Abdel-Jawad et al., Fifteen years of R&D program in seawater desalination at KISR, Part II: Pretreatment technologies for RO systems, Desalination, 135 (2001), 141–153.
13. M.A.K. Al-Sofi, A.H. Hasan, E. F. El-Sayed, Optimum integrated power/MSF/RO plants, In: Research
Activities and Studies, Kingdom of Saudi Arabia, vol. 3, (1993), 61–83.
14. United Nations, (1993) Integrated Environmental and Economic Accounting, Series F, No 61, United
Nations, New York.
15. Skou Andersen (1994), The green tax reform in Denmark: shifting the focus of tax liability, Environmental Liability, vol. 2, No 2, pp 29–41.
16. Sterner T, (1994), Environmental tax reform: the Swedish experience, Studies in Environmental and
Development, Department of Economics, Goteborg University.
17. European Parliament Fact Sheets 2001, (4.9.1. Environmental policy: General Principles).
18. European Parliament Fact Sheets 2001, (3.4.7. The taxation of energy).
19. Federal Plan for Sustainable Development 2000–2004 (Adopted by the federal Government of Belgium
on 20 July 2000, and laid down by Royal Decree of 19 Sept. 2000), Brussels, 2000.
20. Agenda 21: Programme of Action for Sustainable Development, (Rio Declaration of Environment and
Development), 2–14 June 1992, Rio de Janeiro, Brazil.
21. R. Costanza, Ecological Economics: (The science and management of Sustainability), Columbia
University Press, NY, 1991.
22. Tim O’Riordan, Ecotaxation, Earthscan Pbl., UK, 1997.
23. A renaissance may not come, Economist, May 19th, 2001, p. 29–31.
24. Naim H. Afgan, Maria G. Carvalho, Darwish Al-Gobaisi, Indicators for Sustainability of Water Desalination System, IDA World Congress on Desalination and Water Reuse, Manama, Bahrain, March 8–13, 2002.
25. Kafarov V.V., Dorochov V.N., Systemic Analysis in Chemical Technology (Foundations of Strategy), vol. 1, Science Publ., Moscow, 1976 (in Russian).
26. Robert U. Ayres, Eco-thermodynamics and the second law, Ecological Economics, 26, (1998), 189–209.
27. Master Plan for the Power and Water Requirements During the Period 1995–2010 for WED. (Report HI
Operation and User Manual for Optimization Model and Probabilistic model for Reliability), Bechtel
Co. Ltd. 1996.
28. 1999 Update to Master Plan for the Power and Water Requirements During the Period 1995–2010 for
ADWEA, vol. 1. Bechtel Co. Ltd. 1999.


New fossil fuel energy technologies – a possibility of improving energy efficiency in developing countries

Alija Lekić
Mechanical Engineering Faculty Sarajevo, University of Sarajevo, Sarajevo, Bosnia and Herzegovina

ABSTRACT: The energy intensity in developing countries, especially in Eastern Europe and
Former Soviet Union countries, is considerably higher than in industrialized countries. The devel-
opment of new fossil fuel energy technologies might be a possibility to improve the energy efficiency
and thereby reduce the energy intensity in developing countries. Some of these new energy
technologies are briefly described: combustion in a pressurized fluidized bed, the integrated combined
cycle with coal gasification, the combined cycle with natural gas as a fuel, and fuel cells. The
operating principle of facilities based on the above-mentioned technologies is described, and their
basic characteristics and state of development are given. It is emphasized that facilities
using these technologies have higher efficiency than classic thermal power plants and that
they pollute the environment considerably less. A short overview of primary energy consumption,
the share of the different resources and an estimate of the savings that might be realized by using new
technologies is given.

INTRODUCTION

The average daily consumption of primary energy in the world in the year 1999 was 26.78 million
tons of oil equivalent (toe). (In 1973 this consumption was 17.28 million toe, and the
estimated consumption in the year 2020 is 40.55 million toe [1].) The importance of
energy for development can be judged from Figure 1, where the relation between GDP
and energy consumption in the world is shown. Clearly, if a higher GDP is wanted, a higher energy
consumption must be expected.
The industrialized countries consume the larger part of the energy, 54.9%, while their share of
the population is 15.8% [2]. The developing countries, whose share of the population is 77.3%, account for
31.9% of total energy consumption. It is clear from these data that the annual energy consumption
per capita in industrialized countries is almost 8.5 times larger than the corresponding consumption
in developing countries.
The industrialized countries generate a considerable part of total gross domestic product
(GDP), 78.3%, while the developing countries generate only 18.6%. Comparing the shares in
energy consumption and in GDP, it can be concluded that the consumption of primary energy per
$ of GDP in the developing countries is almost 2.5 times higher than in the industrialized countries,
which indicates that the efficiency of energy use in developing countries (even taking into account
differences in economic structure) is considerably lower than in the industrialized countries. (This
indicator is even worse in the Eastern European and Former Soviet Union countries. The energy intensity,
i.e. the energy consumption per unit of GDP, is given in Figure 2 for groups of countries.)
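As a quick consistency check, the two ratios quoted above follow directly from the published shares (simple arithmetic on the percentages, not taken from the cited sources):

$$\frac{54.9/15.8}{31.9/77.3} \approx 8.4, \qquad \frac{31.9/18.6}{54.9/78.3} \approx 2.4,$$

i.e. per-capita consumption roughly 8.5 times higher in the industrialized countries, and primary energy use per $ of GDP roughly 2.5 times higher in the developing countries.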
For that reason, the developing countries are spending much more energy to produce the same
value. It is obvious that developing countries, and especially the countries of Eastern Europe should
do a lot of efforts to reduce the energy intensity, if they want to become competitive. The data for
Bosnia and Herzegovina are, in every respect, less favourable than the indicators for developing
countries [3].


Figure 1. World total primary energy supply vs GDP [1].

Figure 2. Energy intensity [2].

The efficiency of energy use in industrialized countries has been partly achieved by applying new tech-
nologies. Although the time required for the development of new technologies is long and the investment
is considerable, several technologies for energy transformation have been developed. Some of these
technologies, whose applications are growing or will soon be growing, are briefly discussed.

FACILITIES WITH FLUIDIZED BED COMBUSTION

The technology of combustion in a pressurized fluidized bed, and its application in power plants,
has been developed over the last twenty years. The technology is based on combustion in a pressurized
fluidized bed combined with the use of both a gas and a steam turbine. A simple flow diagram of such a
plant is shown in Figure 3.

Figure 3. Flow diagram of the plant with pressurised fluidised bed combustion [4].
Coal is burned in the bed at a temperature of 840–870◦C and a pressure of 12 or 16 bar. (The ash
and sorbent for the reduction of sulfur oxides are present in the bed as well.) The combustion products,
after cleaning in cyclones and/or ceramic filters, flow to the gas turbine, and then to the economizer,
the condensate and feed water heaters, a filter and the stack. Steam is generated inside the fluidized bed and
then flows to the steam turbine. In some of the plants the steam is also reheated. Six power plants
with combustion in a pressurized fluidized bed, using two different units (modules), have been in
operation since 1990 [5]. One module is a unit with a total power of 70–100 MWe (with a gas
turbine of 17 MW), and the other is a unit with a total power of 350–425 MWe (with a gas turbine of
70 MW).
The basic characteristics of plants with combustion in pressurized fluidized bed are:
• Higher efficiency compared to power plants with pulverized coal combustion (the savings in
fuel are estimated to be 10–15%, with the possibility of an increase to 20–25% in the following
years [6]).
• A considerable reduction of sulfur and nitrogen oxides in the combustion products, so that
there is no need for flue gas cleaning.
• The possibility of using different types of solid fuels (all types of coal and biomass). Fuels in
the existing power plants have heating values from 8.5 to 29 MJ/kg, ash content from 2–47%,
moisture 5–30%, and sulfur content 0.1–9.0%.
• Compact facility. The footprint of a “six pack” P200 plant (6 gas turbines and 2 steam turbines),
which would have an output of about 600 MWe, including fuel silos and feed systems and ash
systems, is only ∼4150 m2 (56 × 74 m), and the height is 55 m [7].
The specific prices of facilities with a pressurized fluidized bed vary between $1900
and $3700/kW. However, it is predicted that the price could be reduced to $1000/kW for second-generation
facilities, and that the price of electricity generated in big power plants could be up to 20% lower
than in a conventional power plant with a flue gas cleaning system [6]. Lately, pressurized
fluidized bed combustion with a circulating fluidized bed has been under development. A demonstration
facility with a circulating fluidized bed, with an output of 265 MWe (58 MWe from the gas turbine and
207 MWe from the steam turbine), is under construction in the USA [8]. In this facility, coal is partly gasified
at temperatures of 955–980◦C, and the produced syngas is cooled to about 650◦C and then
used in the gas turbine combustion chamber. In this way, the temperature of the combustion products at the gas
turbine inlet will reach about 1290◦C, which will increase the efficiency of the facility to over 45%.
Besides ABB, some companies in Japan, the USA, Germany and other countries have started
to work on the development of facilities with a pressurized fluidized bed [9].
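For the same electricity output, fuel consumption scales with the inverse of efficiency, so the quoted fuel savings can be related to an efficiency gain by a simple relation (the efficiencies below are assumed, representative values, not figures from the paper):

$$\frac{\Delta F}{F} = 1 - \frac{\eta_{ref}}{\eta_{new}}, \qquad \text{e.g.}\quad \eta_{ref} = 0.36,\ \eta_{new} = 0.41 \ \Rightarrow\ \frac{\Delta F}{F} \approx 0.12,$$

which is consistent with the 10–15% fuel savings quoted for the first generation of these plants.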

COMBINED CYCLES WITH GASIFICATION

A number of facilities for the gasification of solid and liquid fuels have been developed, and the installed
capacity is over 43000 MWth of syngas, with annual growth rates of 4000 to 5000 MWth [10].
The development of power plants with coal gasification started in order to increase efficiency
and reduce pollution. In such a facility (Figure 4), coal, after milling, is fed to a gasifier, where
the gasification of coal with air, oxygen and/or their mixture with steam takes place. The
product of gasification is a low heating value gas (syngas), whose main components are carbon
monoxide and hydrogen. This gas, after cooling, purification and sulfur removal, is used in the gas
turbine combustion chamber. The combustion products, after expansion in the turbine, are used in a
steam generator to produce steam for the steam turbine part of the plant.
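For orientation, the composition of the syngas follows from the usual overall gasification reactions (standard chemistry recalled here for the reader, not taken from the paper), namely partial oxidation and the heterogeneous water-gas reaction:

$$\mathrm{C} + \tfrac{1}{2}\mathrm{O_2} \rightarrow \mathrm{CO}, \qquad \mathrm{C} + \mathrm{H_2O} \rightarrow \mathrm{CO} + \mathrm{H_2},$$

which is why carbon monoxide and hydrogen are the main combustible components of the low heating value gas.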
One of the first newer demonstration facilities with integrated coal gasification was built in the
Netherlands and started operation in 1994. The total power of the plant is 284 MWe [12], of
which 31 MW is used internally. Coal is gasified with oxygen (95% purity) and steam, and the
gasification takes place at a pressure of ∼28 bar and a temperature of ∼1500◦C [13,14]. The
obtained syngas is cooled to ∼250◦C, generating a fraction of the high and medium pressure steam.
Before being fed to the combustion chamber, nitrogen and steam are added to the syngas in order to
reduce the formation of NOx. The net efficiency of the plant is 43%.
An analysis performed for a thermal power plant in Bosnia and Herzegovina has shown that, by
replacing four old units with an optimal power output of 118 MWe by one unit with an integrated gasifi-
cation combined cycle, an output of 200 MWe can be obtained with the same amount of coal [15].

Figure 4. Flow diagram of power plant with coal gasification [11].


Lately, several power plants with gasification of coal, of a mixture of coal and petroleum coke, and
with gasification of liquid fuel containing a higher percentage of sulfur have been built. (The market
for oil containing a higher percentage of sulfur has been reduced due to environmental regulations.
In Italy, it is not allowed to burn oil in power plants if it contains more than 0.25–0.30% of sulfur
[16]. Therefore, the building of several power plants with oil gasification has started.) A power plant
of 400 MWe with gasification of lignite is under construction in the Czech Republic. In spite of the
existence of a demonstration plant and of plants in operation, there is a need for further development
of this technology. The advantages of this technology are clear:
• Flexibility in using different types of fuel
• Future potential for increasing efficiency
• The possibility of connection with other processes and of obtaining other products,
but the investment costs are still considerably high. It is expected that the capital investment for
second generation power plants with gasification will not be higher than $1400/kW, and that
the net efficiency will reach values of 0.48 or 0.50 [17].
Another possibility for increasing the efficiency of power plants with gasification is to use a turbine
driven by a mixture of combustion products and steam. There is no steam turbine in such a power plant;
instead, the steam generated during the gasification and cooling of the syngas is added to the combustion
products before the turbine, which increases the flow rate through the gas turbine. In this way the power
of the gas turbine is increased.

COMBINED CYCLES WITH NATURAL GAS AS A FUEL

The use of natural gas as a source of primary energy is continuously growing. The growth over the
ten years 1990–1999 was 16.6%, while the growth in oil use was 12.8% and the use of coal
dropped by 6.1%. (The largest share in consumption during the year 1999 was that of liquid fuel,
39.81%, followed by natural gas with 22.73% and coal with 22.18%. The share of nuclear energy was
6.62% and that of the others 8.66%.)
The use of natural gas in combined cycle power plants played a considerable part in the growth
of natural gas consumption. According to information in the Modern Power Systems journal, the
largest number of power plant contracts in 1999 was signed for natural gas fueled power plants.
The requests for building gas-fueled power plants were so numerous in Great
Britain that the Ministry of Energy refused to give permission for building several power plants
with natural gas as a fuel.
A combined cycle power plant with gas as a fuel, like the power plants with combustion
in a pressurized fluidized bed and the power plants with coal gasification, includes gas
and steam turbines.
Power plants with natural gas as a fuel are, however, simpler than the power plants with a fluidized bed
or with gasification. The basic characteristics of these facilities are [18]:
• Low specific investment costs
• The highest efficiency
• Relatively short time of design and construction
• High flexibility (short time to start the operation and good characteristics at lower loads)
• Low maintenance cost
• Low emission of noxious gases
The specific investment cost is in the range of US$400 to US$900 per installed kW, depending on
the model and power of the facility [19].
A considerable number of commercial combined cycle facilities have been built, and several
generations of facilities are in operation [20]. The power plants put into operation in recent years
reach efficiencies, for electricity generation only, of up to 58% [18,21,22]. The efficiency of a
combined cycle facility depends mostly on the gas turbine efficiency, which is strongly affected
by the temperature of the combustion products at the turbine inlet. Lately, considerable research on the
improvement of the gas turbine [21,23–26] and of the facility as a whole [22,27] has been under way. (A
simple flow diagram of one such facility is shown in Figure 5.)

Figure 5. Flow diagram of a new concept of combined cycle power plant [28].
For one of the development projects, a consortium of 95 universities and 8 gas turbine corporations
was formed [27]. The objectives of the project are:
• Net efficiency for utility scale systems higher than 60%
• Improvement of gas turbine efficiency by 15%
• NOx emission less than 9 ppm
• Reduction of electricity production cost by 10% from current levels
• Fuel flexibility, with the primary focus on natural gas.
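The 58–60% efficiency figures can be placed in context with a simple energy cascade over the gas turbine and the bottoming steam cycle. The sketch below is only an illustration; the function and all numerical values are assumed for the example and are not taken from the paper or from any particular machine.

```python
def combined_cycle_efficiency(eta_gt, eta_hrsg, eta_st):
    """Rough overall efficiency of a gas turbine + steam turbine combined cycle.

    eta_gt   -- gas turbine efficiency (fraction of fuel heat converted to work)
    eta_hrsg -- fraction of the remaining exhaust heat recovered in the HRSG
    eta_st   -- efficiency of the bottoming steam cycle
    The heat available to the bottoming cycle is taken as (1 - eta_gt) of the
    fuel heat, which neglects minor mechanical and stack losses.
    """
    return eta_gt + (1.0 - eta_gt) * eta_hrsg * eta_st

# Illustrative (assumed) values for a modern unit:
print(combined_cycle_efficiency(eta_gt=0.38, eta_hrsg=0.90, eta_st=0.35))  # ~0.58
print(combined_cycle_efficiency(eta_gt=0.40, eta_hrsg=0.90, eta_st=0.37))  # ~0.60
```

The cascade makes clear why the development effort concentrates on the gas turbine: raising the turbine inlet temperature lifts the gas turbine efficiency and with it the whole combined cycle.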

FUEL CELLS

Unlike the traditional energy conversion technologies based on the combustion of fossil fuels, fuel cells
are systems in which a part of the chemical energy of a fuel is directly converted into electricity.
In principle a fuel cell works similarly to a battery but, unlike a battery, a fuel cell does not require
recharging: it works as long as fuel is supplied. The core of the fuel cell consists
of two electrodes separated by an electrolyte. The energy conversion reactions take place
at the electrodes. The electrodes are porous: the gaseous fuel enters the pores of the anode, where it
reacts with the electrolyte, releasing electrons, while the oxidant flows to the cathode, in whose pores it
reacts.
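Because the conversion is electrochemical rather than thermal, the relevant ideal limit is not the Carnot efficiency but the ratio of the Gibbs free energy to the enthalpy of the cell reaction; for hydrogen at standard conditions (textbook thermodynamic values, quoted here only for illustration):

$$\eta_{max} = \frac{\Delta G}{\Delta H} = 1 - \frac{T\,\Delta S}{\Delta H}, \qquad \mathrm{H_2} + \tfrac{1}{2}\mathrm{O_2} \rightarrow \mathrm{H_2O(l)}: \quad \eta_{max} \approx \frac{237\ \mathrm{kJ/mol}}{286\ \mathrm{kJ/mol}} \approx 0.83 \ \text{at}\ 25^{\circ}\mathrm{C}.$$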
Several types of fuel cells of different characteristics and for different purposes have been or
are under development [29]. The working temperature range depends on the type of fuel cell and
may be from 50◦C to 1000◦C, which makes them convenient for different applications. So far, the
widest use in thermal engineering has been found by the phosphoric acid fuel cell (PAFC), which is
commercially available. However, the molten-carbonate and solid oxide fuel cells (MCFC and SOFC)
are considered the most promising for use in power plants and for cogeneration, owing to
their ability to work at higher temperatures (600–1000◦C) and in combined cycles.

Figure 6. The estimated electrical efficiency of facilities with different technologies [31].
The basic characteristics of fuel cells are:
• High efficiency
• Modular design
• The possibility of using different kinds of fuel
• Very low environmental pollution and a low level of noise
• Flexibility in operation
• The possibility of application in distributed systems
It is expected that the efficiency of a combined cycle with a fuel cell (of MCFC or SOFC type) and a gas
turbine could reach values of 0.72–0.74 [30]. A comparison of the predicted efficiencies of facilities
with fuel cells and some other technologies is shown in Figure 6.
The modular design makes the site preparation and construction of the facility (as well
as its later extension) considerably easier and shortens the time of construction. An important
characteristic of fuel cells is that the efficiency practically does not depend on the capacity of the
facility, which is not the case for most facilities for electric power generation.
The efficiency of a facility with fuel cells also changes very little with load variation.
These characteristics make fuel cells especially suitable for distributed systems, which have
lately been attracting more attention.
Lately, considerable funds have been invested in the further development of fuel cells. One of the
objectives is to reduce the price of the fuel cell [32].


CONCLUSION

All the above-mentioned technologies make possible the construction of power plants with higher
efficiency than traditional technologies. Besides that, all of them may be used for
cogeneration, which gives the opportunity for a more rational use of primary energy.
The increase in the efficiency of primary energy use in the industrialized countries in the period
from 1990 to 1997 was 3.78%, and in the developing countries 5.19%. This represents a considerable
saving in the consumption of primary energy. Some analyses [33] indicate that the increase in
efficiency in the next 20 years might reach 20 to 30%, which clearly shows what savings in primary
energy consumption may be realized by using new and more efficient technologies. If one considers
that the energy market in the European Union has been estimated at 380 billion Euro, and that the
primary energy consumption in developing countries in 1999 was more than twice as high, the
money savings that could be achieved by increasing efficiency become evident.
Increasing the efficiency not only makes production more economic but also reduces environmental
pollution, which contributes to the efforts the world is making to protect the environment.
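Purely as an illustration of the order of magnitude implied by these figures (simple arithmetic on the numbers quoted above, not an estimate from the cited studies), a 20–30% gain applied to a market of 380 billion Euro corresponds to

$$0.2 \times 380 \approx 76 \quad\text{to}\quad 0.3 \times 380 \approx 114 \ \text{billion Euro per year}$$

for the European Union alone, and proportionally more for the developing countries, whose primary energy consumption was stated to be more than twice as high.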

ACKNOWLEDGEMENT

The author would like to express his thanks to the Bosna S Oil Company from Sarajevo for the
support of this work.

REFERENCES

1. International Energy Agency World Energy Outlook 2000, http://www.iea.org/statist/keyword


2. International Energy Outlook 2002, IEA, U.S. DOE, Washington, March 2002, http://www.eia.doe.gov/
oiaf/ieo/index.html
3. Šehović H., Energy Future of Bosnia and Herzegovina, International Workshop and Round Table
Discussion, Sarajevo, June 2000.
4. Walter E., Sattler E., Krautz H. J., van den Berg C. A., Almhem P., A Lignite Fired Combined
Cycle Heat and Power Plant using Pressurized Fluidized Bed Combustion, 14th International ASME
Conference on Fluidized Bed Combustion, Vancouver, May 1997.
5. Anderson J., Anderson L., Technical and commercial trends in PFBC, Paper presented at Power Gen
Europe, Frankfurt, June 1st, 1999.
6. Olson E., Pressurised Fluidised Bed Combustion Technology, Greenhouse gas technology information
exchange, February 1996.
7. Jansson S. A., Anderson J., Progress of ABB’s PFBC Projects, 15th International Conference on
Fluidized Bed Combustion, Savannah, Georgia, USA, May 9–13, 1999.
8. “McIntosh Unit 4B Topped PCFB Demonstration Project”, Fact Sheet, DOE, December 2000.
9. Björklund J., Anderson L., Seminar on New Technologies on European Lignites and Brown Coal,
Athens, January 31, 1997.
10. Simbeck D., Johnson H., World Gasification Survey: Industry Trends & Developments, Gasification
Technologies Conference, San Francisco, October 2001.
11. “What is Gasification”, Gasification Technology Council, http://www.gasification.org/story/explaine/
explaine.html
12. Eurlings J. Th. G. M., Ploeg J. E. G., Process Performance of the SCGP at Buggenum IGCC,
Gasification Technologies Conference, San Francisco, October 1999.
13. Zon G. D., IGCC future under test at Buggenum, Modern Power Systems, Vol. 10, Issue 8, pp 39–45,
August 1990.
14. Zon G. D., Meijer C. G., “Coal Gasification and Combined Cycle”, International Power Generation,
Vol. 14, No. 3, pp 60–64, May 1991.
15. Puška S., The analysis of facility with integrated gasification combined cycle, (in Bosnian) Diploma
thesis, Mechanical Engineering Faculty, Sarajevo, 2001.


16. Farina L., Bressan L., Castagnoli E., Bellina G., ISAB IGCC plant enters operation phase, Modern
Power Systems, Vol. 19, Issue 8, pp 49–51, August 1999.
17. Atlas, IGCC, http://www.jrc.es
18. “Otahuhu single shaft CCGT demonstrates power of modularisation”, Staff report, Modern Power
Systems, Vol. 20, Issue 1, pp 42–43, January 2000.
19. Ryder G., The Advantages of Combined Cycle Plants: A “New Generation” Technology, Probe
International, Toronto, Canada, November 1997., http://www.nextcity.com
20. Chase D. L., Combined – Cycle Development Evaluation and Future, GE Power Systems, Schenectady,
NY, 10/2000.
21. Smith D., New CCGT technology aims for over 60 per cent efficiency at Cottam, Modern Power
Systems, Vol. 19, Issue 9, pp 40–43, September 1999.
22. “Agawam merchant GT24 combined cycle plant uses once – through HRSG”, Staff report, Modern
Power Systems, Vol. 19, Issue 9, pp 31–38, September 1999.
23. “GE 7FB GT announced with sales of 15 units”, Staff report, Modern Power Systems, Vol. 20, Issue
1, pp 20–22, January 2000.
24. Nedderman J., First 701 Gs prepare for commercial operation, Modern Power Systems, Vol. 19, Issue 1,
pp 37–41, January 1999.
25. Smith D., First 7H turbines go to Heritage Station, Scriba, Modern Power Systems, Vol. 19, Issue 10,
p 15, October 1999.
26. Varley J., First GTX100 reaches full power at Västhamn, Modern Power Systems, Vol. 12, Issue 12,
pp 31–33, December 1999.
27. Flin D., Aiming for a goal of over 70 per cent efficiency, Modern Power Systems, Vol. 19. Issue 5,
pp 45–50, May 1999.
28. Smith R. W., Polukort P., Maslak C. E., Jones C. M., Gardiner B. D., Advanced Technology Combined
Cycles, GE Power Systems, Schenectady, NY, 10/2000.
29. Kordesch K., Simader G., Fuel Cells and Their Application, VCH, Weinheim, 1996.
30. “Fuel Cells Overview”, NETL – Research in Progress: Advanced Power Systems, http://www.fetc.doe.
gov/products/power/fuelcells/overview.html
31. Layne A. W., Williams M., Samuelson S., Hoffman P., Development Status of Hybrids, FETC, Presented
at the 1999. IGTI TurboExpo, http://www.netl.doe.gov/scng/publications/ats/hybrid.pdf
32. Department of Energy: Fossil Energy – Advanced Fuel Cell Systems: Strategic Objectives, http://www.
fe.doe.gov/coal_power/fuel_cells/fc_so.html
33. “Energy Efficiency Initiative”, Vol. 1, OECD/IEA, 1997.


Sustainable development of water systems


Water management of a small river basin toward sustainability (the example of the Slovenian river Paka)

Emil Šterbenk, Alenka Roser Drev, Mojca Bole
ERICo Velenje Institute, Velenje, Slovenia

ABSTRACT: The Šalek Valley is overburdened with many human activities. The river Paka is
too small for an industrially and energetically intensive area. Not only environmental pollution
itself, but also changes in the river network have enlarged the sensitivity of the water bodies. Due to
coalmining the surface of the Šalek Valley has subsided and subsidence lakes have appeared. A high
standard of measures for environmental protection is necessary to prevent the pollution of these
lakes. Due to the poor quality of the river Paka at the beginning of nineties, a water improvement
programme was adopted. The main goal was to upgrade the Paka to the 2nd quality class. A lot of
environmental improvement activities have been carried out. Upgrading of the wastewater treatment
plant was the main aim of the programme. A financial grant (50%) was awarded by the European
Union’s ISPA financial instrument for carrying out the project.

FOREWORD

Mankind has come to the conclusion that natural resources are limited. In order to preserve them,
development must be directed in a sustainable way, which meets the needs of the present without
compromising the ability of future generations to meet their needs [1]. The principles of Agenda
21 and Local Agenda 21 [2] are founded on sustainable development. But the question is whether
sustainable development can be carried out in every local community, especially if various human
activities in that community are intensive. We try to answer this question for the example of water
body quality in a highly urbanised and industrially intensive region of Slovenia.

THE RIVER PAKA

The river Paka is the main water stream in the Šalek Valley but, together with its tributaries, it is too
small for a coalmine, a thermal power plant and the intensive industry of the valley. The average
flow of the river Paka in Šoštanj is only 2.6 m3/s with a large difference between the maximum
and minimum levels. The river Paka has a mountain stream flow regime (unbalanced pluvio-nival
regime). The maximum flow is in November due to autumn rainfalls, with the second maximum
in March, due to snow melting. Minimum flows are in January (snow retention) and in August
(rainfall deficit). The minimum flow is less than 0.2 m3/s and the maximum over 100 m3/s, which
is greater than the riverbed capacity. The Paka used to meander and flood without any harmful
consequences. After Velenje expanded, the riverbed was straightened, widened and regulated. The
riverbed was also shortened, so its slope increased, and consequently the environmental and biotic
qualities of the river Paka changed for the worse [3].
If the number of inhabitants in Velenje is compared with the average flow of the river Paka
the result shows that the flow of the river Paka is less than 0.09 l/s per inhabitant. If we carry out
the same mathematical operation for Celje, the result is almost 0.8 l/s per inhabitant. The result
for Kranj (4th town in Slovenia according to the number of inhabitants) is even better (1.4 l/s per
inhabitant), while in Ljubljana, the Slovenian capital, the result is 0.034 l/s per inhabitant. However,
if we compare the average minimum flow per inhabitant in these towns, the ratio for Velenje is
much more disadvantageous. At the minimum flow of the river Paka, the flow per inhabitant is only
0.005 l/s. But something should not be forgotten: the Coal-fired Power Plant (CPP) Šoštanj is a large
consumer of water resources. Its electrical power capacity of 750 MW is the biggest in Slovenia
and about one quarter of Slovenian power needs are generated in Šoštanj. The need for water at full
capacity is 700 l/s and more than 400 l/s of water evaporate from the cooling towers of the CPP.
Cooling water is pumped from Lake Družmirje or from the Paka. The brook Velunja, the main
tributary of the river Paka runs through Lake Družmirje and during dry periods of the year no water
drains from the lake to the river Paka.
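These specific flows are obtained simply as the mean discharge divided by the number of inhabitants; for example, with a population of roughly 30,000 assumed for Velenje (the paper quotes only the resulting ratios, not the population figures):

$$q = \frac{Q}{N} \approx \frac{2600\ \mathrm{l/s}}{30\,000} \approx 0.087\ \mathrm{l/s\ per\ inhabitant},$$

consistent with the quoted value of less than 0.09 l/s per inhabitant.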
Due to coalmining, the river network has changed. Up to the year 2000, the volume of surface subsidence
in the Šalek Valley has amounted to around 110 million m3. The subsidence area amounts to approximately 6 km2 of
the valley surface. A quarter of the Paka watershed turned into a more sensitive lake watershed. The
most remarkable consequence of coalmining is the appearance of subsidence lakes. The surface of
these lakes is over 2 km2, and their volume is over 36 million m3 (coalmining is still in progress and
the lakes are growing all the time). Thus coalmining changed the river network, and lakes appeared
in the central part of the valley. Lakes are far more sensitive to negative environmental impacts
than running waters. Despite the appearance of these large water reservoirs (lakes) there is not
enough water for all human activities in this region. The water network is very weak. Due to the
ground configuration and hydro-asymmetry (the Paka runs on the southern edge of the valley) lakes
appeared in the areas of the Paka tributaries. The lakes are young and anthropogenic and have some
characteristics different from natural lakes. The appearance of the lakes triggered not only a reorganization
of the river network, but also changes in the water cycle of the Šalek waters. While the
water mass in the smallest Škale lake theoretically changes approximately five times a year, the
water in the largest Velenje lake changes only once in two and a half years. The rate of pollution of
the tributaries Velunja, Sopota and Lepena, whose waters are collected in the large northern part of
the Šalek Valley, is of extreme importance, since they run into the lakes. The lake water quality can
be ascertained from the quality of the lake inflows. In the dry season of the year, when evaporation
exceeds rainfall, the outflows from the lakes are smaller than the inflows.

Figure 1. The regulated riverbed of the river Paka in the central part of the Šalek Valley. Photo: E. Šterbenk, 2002.

THE HUMAN IMPACT ON THE WATER BALANCE IN THE PAKA RIVER BASIN

The average flow of the river Paka in Šoštanj is only 2.6 m3/s (78 million m3 of water a year). The
Šoštanj CPP, which is the biggest user of water in the Paka river basin, alone uses up to 700 l of
water per second, which is more than four times the absolute minimum flow of the
Paka in Šoštanj. The annual water consumption in the CPP is around 12.5 million m3, and up to
8 million m3 of water is evaporated for cooling purposes.

Figure 2. Quantity of drinking water sold to industry and to all consumers (in 1000 m3). Source: Velenje public utility.

Figure 3. The Šoštanj Thermal Power Plant is the biggest consumer of water in the Paka river basin. Photo: E. Šterbenk, 2002.
The waterworks in the Paka river basin consist of 16 water sources, 530 km of pipelines and a
drinking water treatment plant. The waterworks supply fresh water to more than 40,000 inhabitants
and almost 22,000 employed persons. During the nineties water consumption from the waterworks
has been falling. In 1989 it was more than 6 million m3 and in 2000 it was reduced to 4.6 million m3.
More than 2.7 million m3 of drinking water is pumped from the neighbouring drainage basin of
the brook Ljubija. The main water source in the upper part of the Paka river basin is the “Gorica”
pumping site where more than 2 million m3 of drinking water is pumped. In fact 6.4 million m3 of
water is collected from water sources and water losses represent 24.6% of the accumulated quan-
tity. These losses could be reduced by the renovation of the waterworks. Technological losses of
almost 200,000 m3 of water (washing of filters, use of water from hydrants) are unavoidable.
Wastewater is collected in a sewage system and (partly) treated in the Šoštanj wastewater treatment
plant. Annually, more than 5.8 million m3 of water is treated and returned to the river Paka.
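The waterworks figures above can be tied together in a simple annual balance. The sketch below only re-adds the rounded numbers quoted in the text (all in million m3 per year), so the small residual reflects rounding rather than an error in the data:

```python
# Annual balance of the Paka basin waterworks, using the rounded figures
# quoted in the text (all quantities in million m3 per year).
collected            = 6.4    # abstracted from all water sources
loss_fraction        = 0.246  # network losses: 24.6% of the collected quantity
technological_losses = 0.2    # filter washing, hydrant use, etc.
sold_to_consumers    = 4.6    # consumption in the year 2000

network_losses = collected * loss_fraction                      # ~1.57
accounted_for  = sold_to_consumers + technological_losses + network_losses
residual       = collected - accounted_for                      # ~0.03 (rounding)

print(f"network losses ~{network_losses:.2f} million m3")
print(f"accounted for  ~{accounted_for:.2f} of {collected} million m3 collected")
print(f"residual       ~{residual:.2f} million m3")
```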

THE HUMAN IMPACT ON THE WATER QUALITY IN THE PAKA RIVER BASIN

Intensive human activities influence the quality of water bodies. In the late eighties almost all water
bodies in the Šalek Valley were extremely polluted. So, their quality has been monitored since 1987
at eight sampling points along the Paka, at twenty-one sampling points along its tributaries and at
four sampling points in the lakes. The monitoring is carried out four times a year at each sampling
point (the quality of water in the lakes is monitored at each metre of their depth profile).
For the monitoring, both chemical and biological analyses are performed. Given the conditions
in the river Paka, it was decided to use a simple and transparent method that suits the character
of the river and provides a quick and accurate analysis. The method is based on analyses of leading
species and leading biocenoses. The pollution in the river Paka is a combined one, degradable
and non-degradable, which is why this method is believed to be the best way to demonstrate its
effects [4].

Figure 4. Water quality of the river Paka in 1991.
In 1991 the Paka entered the Velenje municipality area in the 1st quality class. In Velenje
the quality fell to the 2nd–3rd class, and below Velenje already into the 3rd. After the inflow
from Lake Velenje the Paka was in the 4th quality class. The reasons for this situation were an
incomplete sewage system in the urban and semi-urban areas, farm and industrial pollution, and
pollution due to ash transport from the Šoštanj CPP to the landfill. The main industrial source of
pollution was a leather factory in Šoštanj, whose negative impact on the river Paka was almost as
large as the impact of all the inhabitants in the area.

WATER IMPROVEMENT PROGRAMME OF THE RIVER PAKA

Due to the poor quality of the river Paka at the beginning of the nineties, the first Cadastre of
Water Quality [5] was elaborated and later a Water Improvement Programme [6] was adopted
in the Velenje Community. In 1994 the system of local communities in Slovenia was changed
and the Velenje Community was divided into three smaller communities. The new municipali-
ties of the area adopted a Water Improvement Programme for the river Paka, its tributaries and
the Šalek lakes. The programme was split into three sub-programmes: running waters, the lakes
and the sewage system. The main goal of the water improvement programme was to upgrade
the river Paka to the 2nd quality class. That was also the main goal of the sub-programme
on running waters. The main goal of the lakes programme was to bring all the lakes into a
moderate eutrophic state. The most important aim of the sewage system sub-programme was
to upgrade the sewage system and the WWTP. In the programme, water protection measures
were proposed, the funds needed for executing these measures were evaluated, and contractors for the
environmental protection projects and measures were appointed. The programme design process
included not only the local communities of the area and hired experts, but also companies and
(partly) the public. The Paka water improvement programme was the first one so broadly based
in Slovenia.
Since the adoption of the programme the quality of the river Paka has significantly improved.
The greatest negative impact was caused by hydraulic ash transport from the Šoštanj CPP to the
landfill. At the beginning of its existence Lake Velenje was used as a dump for ash and a reservoir
for ash transport water from the power plant. Water from Lake Velenje drained into the river Paka
and after the outlet from the lake the Paka used to be in 4th quality class. The pH of the transport
water was around 12. The pH of Lake Velenje was the same, therefore no living organisms could
survive in such an alkaline environment. The situation in the Paka was similar. Up to the early
eighties ash slurry was simply run into Lake Velenje, but afterwards the building of the ash landfill
began. Ash was retained in the landfill and only the transport water was run off to Lake Velenje. The
pH remained 12 because of the high alkalinity of the transport water. Then, in the autumn of 1994, a
closed loop system for the ash transport water was built. The transport water is now collected under
the landfill, pumped back to the CPP and cyclically used again for ash
transport, so it no longer enters Lake Velenje. The closed loop system has had a significant and
positive impact on the water quality of Lake Velenje. In only three years the lake water
quality has almost normalised [7]. In 1994 and 2000 desulphurisation plants were installed at the
Šoštanj CPP. The majority of the ash is mixed with gypsum and transported to the landfill by lorries,
so the impact on water was again reduced.
The leather factory in Šoštanj went bankrupt, so another source of pollution was removed. The
Gorenje Company is constantly upgrading the industrial WWTP and reducing the consumption
of water. The sewage system has been partly upgraded and partly rebuilt. The most important
improvement is a closed loop sewage collection system in the lake watershed. Also, the municipal
WWTP operates quite successfully taking into account that only the first phase has been completed.


Figure 5. Lake Velenje improved after the closed loop system for ash transport had been built. Photo: E. Šterbenk, 2002.

Figure 6. Water quality of the river Paka in 2000.

The existing WWTP was designed only for a primary treatment, but in fact one primary clarifier
is used as an aeration basin for high load biological treatment.
In 1991 the Paka was in 4th quality class below the outflow from Lake Velenje, but after the
closed loop system was built, it improved to the 2nd–3rd. The biggest polluter of the Paka is the
WWTP. After the outflow from the WWTP the quality of the Paka falls by one class. So the Paka
runs into the river Savinja in the 2nd–3rd quality class.


Figure 7. Quantity of water used in Gorenje, 1991–2000 (in m3). Source: Gorenje d.d.

WATER MANAGEMENT IN THE PAKA RIVER BASIN TOWARD SUSTAINABILITY

It is hard to discuss sustainable development in the field of water management in the Paka river
basin. There is an extreme disproportion between water quantity and its use. Our first goal should
be a more sustainable use of water. First steps in the direction of sustainability have already been
taken with the water improvement programme. The next steps are to complete the upgrading and
building of the sewage system and to upgrade the WWTP. The project for upgrading the WWTP
has already started. It is expected that the construction of phase 2 of the WWTP (total secondary
and tertiary treatment) will be finished by 2004. The Velenje public utility (Komunalno podjetje
Velenje) in co-operation with some other institutions and the Slovenian Ministry of Environment,
Spatial Planning and Energy, prepared all the necessary documentation for awarding ISPA financial
support. Subsequently the quality of the river Paka is expected to be no worse than the 2nd quality
class. The Paka will also be suitable for other functions like fishing. We will also have to examine
the possibilities of regenerating the Paka riverbed and to mitigate the consequences of its regulation.
The best way to use technological water for the CPP was already outlined in the study of Water
Management in the Exploitation Area of Velenje Colliery [8]. The Velenje municipality is one of the
first in Slovenia to introduce Local Agenda 21. The Water Improvement Programme is becoming
one of the projects of the Local Agenda and the Local Agenda is a further step toward sustainability.

CONCLUSION

Although the river Paka is one of the most burdened water bodies in Slovenia, the results of the water
improvement programme have shown that it is possible to direct development towards sustainability
even in highly populated areas. We cannot say yet that sustainable development is achieved but
it is possible to say that recent development is more sustainable. The results are quite good, but
we must not forget that the first activities (monitoring of water bodies) started back in 1987. The
improved state of the water bodies is a result of monitoring, a systematically prepared improvement
programme and coordinated and well-performed water protection measures. We believe the process
of constant environmental development in the Šalek Valley has been successfully started and will
be carried out through the Local Agenda 21.

REFERENCES

1. Our Common Future, World Commission on Environment and Development, 1987.


2. Agenda 21, United Nations Conference on Environment and development, UN 1992.


3. Šterbenk, E., Šalek Lakes – Coalmining Impact on the Šalek Valley Landscape Transformation.
Založništvo Pozoj, ERICo Velenje. Slovenia, 1999.
4. Rošer Drev, A., Rejic. M., Ramšak, R., The Qualitative State of the Torrential Water stream in Highly
Industrialized Areas – Biological Analyses. 5th International Conference Environmental Contamination.
Morges Switzerland, 1992, pp 308–310.
5. Rošer Drev, A., Ramšak, R., Bole, M., Kugonič, N., Cadastre of the Water Quality in the Velenje
Municipality. ERICo Velenje, Slovenia, 1992.
6. Šterbenk. E., Water improvement programme for the Velenje Municipality, ERICo Velenje, Slovenia, 1993.
7. Šterbenk, E., Ramšak, R., Reclaiming Polluted Waters: the Case of the Velenje Lake, Sustainable
development in the Slovenian Alps and Neighbouring Areas, First Anton Melik's Days of Geography.
University of Ljubljana, Faculty of Arts, Department of Geography, Slovenia, 1999, pp 215–224.
8. Skutnik, B. et al, Water Management in the Exploitation Area of the Velenje Colliery, PUV Celje,
ERICo Velenje, Slovenia, 2000.


A simplified model for long term prediction on vertical distributions of water qualities in Lake Biwa

Takashi Hosoda∗
Department of Civil Engineering, Kyoto University, Sakyo-ku, Kyoto, Japan
∗ Corresponding author. e-mail: hosoda@river4.kuciv.kyoto-u.ac.jp

Tomohiko Hosomi
Osaka Municipal Office, Kita-ku, Osaka, Japan

ABSTRACT: This paper describes a one-dimensional simplified model to predict the seasonal
variations of the vertical temperature and water quality distributions through a year in Lake Biwa,
which is the largest lake in Japan, using the monthly averaged meteorological data. A hydrodynamic
model with a k-ε turbulence model and the thermodynamic exchanges at a water surface was firstly
tested to reproduce seasonal variations of temperature distributions in the vertical direction. Then
the water quality parameters such as phytoplankton (Chlorophyll-a), zooplankton, organic and
inorganic nutrients are combined with a water temperature prediction model, using a standard
modelling of ecological processes. It was shown that the calculated water quality distributions
agree with the observed ones qualitatively, and the model may be applicable to predict the effect
of global warming, though further investigations are required to identify the model parameters.

INTRODUCTION

A vertical one-dimensional numerical model was developed to predict the seasonal variations of the
vertical temperature and water quality distributions through a year in Lake Biwa, Japan, and then
was verified through the comparisons of calculated results with the observed ones in this paper.
Lake Biwa is the biggest lake located in Shiga Prefecture, Japan (the central part of Japan), of
which the dimensions are 680 km2 in surface area, 27.5 × 109 m3 in storage volume and 104 m in
maximum depth.
Much research has been carried out intensively on the hydrodynamic and ecological
aspects of the lake, as summarized in [1].
The characteristics of the seasonal variations of vertical temperature distributions are summarized
as follows (The observed temperature distributions are shown in Figure 1 by using the Lake Biwa
Biogeochemical and Ecological Database (3. Water Quality Data by Shiga Prefectural Institute
of Public Health and Environmental Science) supervised by the Centre for Ecological Research,
Kyoto University.):
a. the water temperature is uniform in the vertical direction and is about 6◦C–8◦C during the winter
season (January to March).
b. the water body near the surface is warmed in early spring by the net solar radiation, and the
temperature in the surface layer increases gradually. The temperature continues to increase from
April to August, and the vertical distribution in the epilimnion is linear. The maximum surface
temperature is about 30◦C in August.
c. thermal convection due to heat radiation from the surface starts in September, and two distinct
layers with different temperatures can be observed. The depth of the interface between
epilimnion and hypolimnion becomes deeper, the surface temperature decreases from
September to January, and the temperature stratification disappears at the end of January.

Figure 1. Vertical temperature distributions in 1998 (distance from water surface (m) vs. temperature (◦C); monthly profiles, April to March).
A simplified hydrodynamic model with a k-ε turbulence model and the thermodynamic exchange
at a water surface was firstly introduced to predict the seasonal variations of temperature distri-
butions shown in Figure 1, using the monthly averaged meteorological data [2], and was verified
with the comparisons between calculated results and observed ones.
Then, a standard model of ecological processes with phytoplankton (chlorophyll-a), zooplankton,
organic and inorganic nutrients was combined with a water temperature prediction model. The long-
term predictions of temperature and water quality indices were carried out for ten years to verify
the applicability of a simplified temperature-water quality model developed in this paper.

TEMPERATURE PREDICTION MODEL

Basic equations
A vertical 1-dimensional temperature prediction model is composed of the heat balance equation,
momentum equation in the horizontal direction with a k–ε turbulence model as follows [3]:
[Heat balance equation]
$$\frac{\partial T}{\partial t} = \frac{\partial Q_z/\partial z}{\rho_a c_p} + \frac{\partial}{\partial z}\left( D_{Tz}\,\frac{\partial T}{\partial z} \right) \qquad (1)$$
[Momentum equation in the horizontal direction]
$$\frac{\partial U}{\partial t} = -\frac{1}{\rho_0}\frac{\partial p}{\partial x} + \frac{\partial}{\partial z}\left( \nu\,\frac{\partial U}{\partial z} - \overline{u'v'} \right) \qquad (2)$$
[k–ε equations]
$$\frac{\partial k}{\partial t} = -\overline{u'v'}\,\frac{\partial U}{\partial z} - \varepsilon + \frac{\partial}{\partial z}\left[ \left( \frac{D_{mz}}{\sigma_k} + \nu \right)\frac{\partial k}{\partial z} \right] + \frac{g}{\rho_0}\,D_{Tz}\,\frac{\partial \rho}{\partial z} \qquad (3)$$
$$\frac{\partial \varepsilon}{\partial t} = -c_{\varepsilon 1}\frac{\varepsilon}{k}\,\overline{u'v'}\,\frac{\partial U}{\partial z} - c_{\varepsilon 2}\frac{\varepsilon^2}{k} + \frac{\partial}{\partial z}\left[ \left( \frac{D_{mz}}{\sigma_\varepsilon} + \nu \right)\frac{\partial \varepsilon}{\partial z} \right] + c_{\varepsilon 3}\frac{g}{\rho_0}\frac{\varepsilon}{k}\,D_{Tz}\,\frac{\partial \rho}{\partial z} \qquad (4)$$
$$-\overline{u'v'} = D_{mz}\,\frac{\partial U}{\partial z} \qquad (5)$$
where z: spatial coordinate in the vertical direction from bottom to surface, t: time, T : water temper-
ature, DTz : turbulent diffusion coefficient, Qz : heat flux defined in Eq. (12), ρ0 : reference density
of water (=1000 kg/m3 ), cp : specific heat, U : velocity in the horizontal direction, p: pressure, v:
dynamic molecular viscosity, k: turbulent kinetic energy, ε: turbulent energy dissipation rate, Dmz :
turbulent eddy viscosity.
The following standard k–ε model constants are used:

cµ = 0.09, cε1 = 1.44, cε2 = 1.92, cε3 = 0, σk = 1.0, σε = 1.3

Considering the effect of density stratification, the eddy viscosity Dmz and diffusion coefficient
DTz are evaluated by the following equations [4,5]:
$$D_{mz} = c_\mu f_b\,\frac{k^2}{\varepsilon}, \qquad f_b = \frac{1}{1+0.2B}, \qquad B = \frac{g}{\rho_0}\left(\frac{k}{\varepsilon}\right)^{2}\frac{\partial \rho}{\partial z} \qquad (6)$$
$$D_{Tz} = \frac{1}{Pr_{t0}}\,\frac{1}{1+0.24B}\,D_{mz}, \qquad Pr_{t0} = 1/1.6 \qquad (7)$$
Additional mixing due to internal and surface waves is considered by adding the following
coefficient to DTz:
$$D_{EX} = c_{EX}\sqrt{k_s}\;l_s \exp\!\left( -\frac{z_s - z}{l_s} \right) d_{dump} \qquad (8)$$
where ls : mixing length scale (=0.5 m), cEX : empirical constant (=0.2), and ddump : empirical
constant (=0.1).
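As a minimal sketch of how Eqs. (6)–(8), as reconstructed above, combine in practice, the eddy viscosity and diffusivity at a grid point could be evaluated as follows (the function and variable names are ours; only the constants are those listed in the text):

```python
import math

C_MU = 0.09                          # c_mu of the standard k-epsilon model
PR_T0 = 1.0 / 1.6                    # Pr_t0 = 1/1.6, Eq. (7)
C_EX, L_S, D_DUMP = 0.2, 0.5, 0.1    # constants of the wave-mixing term, Eq. (8)

def eddy_coefficients(k, eps, drho_dz, rho0=1000.0, g=9.81):
    """Buoyancy-damped eddy viscosity D_mz and thermal diffusivity D_Tz, Eqs. (6)-(7)."""
    B = (g / rho0) * (k / eps) ** 2 * drho_dz      # stratification parameter
    f_b = 1.0 / (1.0 + 0.2 * B)
    D_mz = C_MU * f_b * k ** 2 / eps
    D_Tz = D_mz / (PR_T0 * (1.0 + 0.24 * B))
    return D_mz, D_Tz

def wave_mixing(k_s, z, z_s):
    """Additional near-surface mixing coefficient D_EX due to internal and surface waves, Eq. (8)."""
    return C_EX * math.sqrt(k_s) * L_S * math.exp(-(z_s - z) / L_S) * D_DUMP
```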

Conditions of numerical simulations


The boundary conditions at a water surface z = zs to calculate the momentum equation are denoted
as follows (the subscript s indicates the water surface):
$$(\nu + D_{mz})\,\frac{\partial U}{\partial z} = u_{*s}^{2}, \qquad k_s = \frac{u_{*s}^{2}}{c_\mu^{1/2}}, \qquad \varepsilon_s = \frac{c_\mu^{3/4}\,k_s^{3/2}}{0.4\,z_s} \qquad (9)$$

where u∗s : friction velocity at a water surface induced by wind, which is evaluated by an empirical
formula and zs : a half of spatial mesh size for the finite difference scheme.
The pressure term in Eq. (2) is regarded as a parameter, and the value of the term is gradually
changed to make the discharge in the horizontal direction zero at each time step.
The heat exchange at a water surface is given as Eq. (10):
$$D_{Tz}\,\frac{\partial T}{\partial z} = \frac{Q_0}{\rho_a c_p} \qquad (z = z_s) \qquad (10)$$

where Q0 : heat absorbed at a water surface per unit area and time, which can be evaluated using
the following relation.
Q0 = βQS − (Qe + Qc + Qrw + Qa ) (11)
where QS : net solar radiation (= (1 − Ar ) QS0 , QS0 : solar radiation, Ar : albedo (=0.06)) , Qe :
evaporation heat flux, Qc : sensible heat flux, Qrw : water surface radiation heat flux, Qa : net
atmospheric radiation heat flux, and β: ratio of absorbed to net incoming radiation (=0.5).
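A minimal sketch of how Eqs. (1), (10) and (11) fit together numerically is given below: one explicit finite-difference step of the heat balance equation on a uniform vertical grid, with the surface heat flux entering through the boundary condition. The momentum and k–ε equations, the empirical flux formulas and all profile values are omitted or treated as placeholders; this is an illustration of the structure, not the authors' code.

```python
import numpy as np

def step_temperature(T, D_Tz, Qz, Q0, dz, dt, rho_a=1000.0, c_p=4186.0):
    """One explicit Euler step of Eq. (1) on a uniform grid (index 0 = bottom).

    T    -- temperature profile (deg C)
    D_Tz -- turbulent diffusion coefficient at the cells (m2/s)
    Qz   -- penetrating short-wave heat flux profile, Eq. (12) (W/m2)
    Q0   -- net heat flux absorbed at the surface, Eq. (11) (W/m2)
    rho_a, c_p -- density and specific heat appearing in Eq. (1); placeholder values.
    """
    T_new = T.copy()
    dQz_dz = np.gradient(Qz, dz)                      # radiative source term
    flux = D_Tz[:-1] * (T[1:] - T[:-1]) / dz          # diffusive flux between cells
    div_flux = np.zeros_like(T)
    div_flux[1:-1] = (flux[1:] - flux[:-1]) / dz
    T_new += dt * (dQz_dz / (rho_a * c_p) + div_flux)
    # Surface boundary condition, Eq. (10): D_Tz dT/dz = Q0 / (rho_a c_p)
    T_new[-1] = T_new[-2] + dz * Q0 / (rho_a * c_p * max(D_Tz[-1], 1e-9))
    return T_new
```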


Figure 2. Results of temperature distributions (y (m) vs. T (deg)).
Figure 3. Results of velocity distributions (y (m) vs. velocity (m/s)).

Each term in Eq. (11) is evaluated using empirical formulas of common use. Qz in Eq. (1) is also
evaluated as Eq. (12):
$$Q_z = (1-\beta)\,Q_S \exp\!\big(-\kappa\,(z_s - z)\big), \qquad \kappa = \kappa_0 + \kappa_P P \qquad (12)$$
where κ0: attenuation coefficient (=0.2), κP: dispersion coefficient due to phytoplankton (=0.001),
P: chlorophyll-a in phytoplankton.
To simulate the effect of natural convection, which occurs between early autumn and winter,
when the density at an elevation zc is less than the average density from zc to the water surface, the
temperatures of the layer above zc are replaced by the constant value Tave calculated from Eq. (13):
$$\int_{z_c}^{z_s} T\,dz = (z_s - z_c)\,T_{ave} \qquad (13)$$
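The natural-convection adjustment of Eq. (13) can be sketched as the following mixing step (a minimal illustration on a uniform grid with index 0 at the bottom; the density formula is a simple placeholder and this is not the authors' implementation):

```python
import numpy as np

def density(T):
    """Placeholder fresh-water density (kg/m3): a quadratic fit around the 4 deg C maximum."""
    return 1000.0 * (1.0 - 6.8e-6 * (T - 4.0) ** 2)

def convective_adjustment(T):
    """Whenever the density at an elevation z_c is less than the average density from
    z_c up to the surface, replace the temperatures of the layer above z_c by their
    mean value T_ave, Eq. (13)."""
    T = T.copy()
    for i in range(len(T) - 2, -1, -1):              # scan z_c downward from the surface
        if density(T[i]) < density(T[i:]).mean():    # column above z_c is unstable
            T[i:] = T[i:].mean()                     # Eq. (13): uniform T_ave over the layer
    return T

# Example: cooler (denser) water overlying warmer water is mixed out.
print(convective_adjustment(np.array([6.0, 6.5, 7.0, 9.0, 8.0])))
```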

CALCULATED RESULTS OF TEMPERATURE

The calculated results of temperature, velocity and diffusion coefficient distributions are shown in
Figures 2, 3 and 4.
The characteristics of the seasonal variations of temperature described in the INTRODUCTION
are well reproduced in the calculated results during both the stratification and the
destratification periods. The characteristics of the turbulent diffusion coefficient can be summarized
as follows: (1) the coefficient is very small during the stratification period from April to September,
and (2) the large values during the winter induce strong mixing in the depth-wise direction and make
the temperature distributions uniform. The nutrients with high concentration in the lower layer are
also transported upward, and are then taken up by phytoplankton in early spring.

MODELLING OF ECOLOGICAL PROCESSES

An ecological model is composed of the kinetics of chlorophyll-a in phytoplankton, carbon in
zooplankton, organic and inorganic nitrogen, organic and inorganic phosphorus and dissolved
oxygen, as shown in the following equations [6,7,8].


Figure 4. Results of diffusion coefficient distributions (y (m) vs. DT (m2/s); panels (a) and (b)).

[Chlorophyll-a in phytoplankton]
$$\frac{\partial P}{\partial t} = (G_P - D_P)\,P - \frac{\partial (w_O P)}{\partial z} + D\,\frac{\partial^2 P}{\partial z^2} \qquad (14)$$
$$G_P = S_P \cdot R_{Ph} \cdot \theta_{Ph}^{(T-20)} \cdot \frac{I_\gamma}{I_S}\exp\!\left(1 - \frac{I_\gamma}{I_S}\right) \cdot \frac{C_{IN}}{K_{IN}+C_{IN}} \cdot \frac{C_{IP}}{K_{IP}+C_{IP}} \qquad (15)$$
$$D_P = R_{CP} \cdot \theta_{CP}^{(T-20)} + C_g \cdot \frac{K_{PP}}{K_{PP}+P} \cdot Z \qquad (16)$$
[Carbon in zooplankton]
$$\frac{\partial Z}{\partial t} = (G_Z - D_Z)\,Z + D\,\frac{\partial^2 Z}{\partial z^2} \qquad (17)$$
$$G_Z = \alpha \cdot a_Z \cdot C_g \cdot \frac{K_{PP}}{K_{PP}+P} \cdot P \qquad (18)$$
$$D_Z = R_Z \cdot \theta_Z^{(T-20)} \qquad (19)$$
[Inorganic nitrogen]
$$\frac{\partial C_{IN}}{\partial t} = -\beta_N \cdot G_P \cdot P + \beta_N (1-a_Z) \cdot C_g \cdot \frac{K_{PP}}{K_{PP}+P} \cdot P \cdot Z + R_N \cdot \theta_N^{(T-20)} \cdot (C_{ON} - \beta_N P - \gamma_N Z) + E_{IN}\,\theta_{EIN}^{(T-20)}/\Delta z + D\,\frac{\partial^2 C_{IN}}{\partial z^2} \qquad (20)$$
[Organic nitrogen]
$$\frac{\partial C_{ON}}{\partial t} = \beta_N \cdot G_P \cdot P - \beta_N (1-a_Z) \cdot C_g \cdot \frac{K_{PP}}{K_{PP}+P} \cdot P \cdot Z - R_N \cdot \theta_N^{(T-20)} \cdot (C_{ON} - \beta_N P - \gamma_N Z) - \frac{\partial}{\partial z}\big(w_N (C_{ON} - \gamma_N Z)\big) + D\,\frac{\partial^2}{\partial z^2}(C_{ON} - \gamma_N Z) \qquad (21)$$
[Inorganic phosphorus]
$$\frac{\partial C_{IP}}{\partial t} = -\beta_P \cdot G_P \cdot P + \beta_P (1-a_Z) \cdot C_g \cdot \frac{K_{PP}}{K_{PP}+P} \cdot P \cdot Z + R_P \cdot \theta_P^{(T-20)} \cdot (C_{OP} - \beta_P P - \gamma_P Z) + E_{IP}\,\theta_{EIP}^{(T-20)}/\Delta z + D\,\frac{\partial^2 C_{IP}}{\partial z^2} \qquad (22)$$
[Organic phosphorus]
$$\frac{\partial C_{OP}}{\partial t} = \beta_P \cdot G_P \cdot P - \beta_P (1-a_Z) \cdot C_g \cdot \frac{K_{PP}}{K_{PP}+P} \cdot P \cdot Z - R_P \cdot \theta_P^{(T-20)} \cdot (C_{OP} - \beta_P P - \gamma_P Z) - \frac{\partial}{\partial z}\big(w_P (C_{OP} - \gamma_P Z)\big) + D\,\frac{\partial^2}{\partial z^2}(C_{OP} - \gamma_P Z) \qquad (23)$$
[Dissolved oxygen]
$$\frac{\partial DO}{\partial t} = K_{DO1}(DO_{sat} - DO) + K_{DO2}\big(G_P - R_{CP}\cdot\theta_{CP}^{(T-20)}\big)P - K_{DO3}\,\theta_{DO}^{(T-20)}\,COD - E_{DO}\,\theta_{EDO}^{(T-20)}/\Delta z + D\,\frac{\partial^2 DO}{\partial z^2} \qquad (24)$$
$$K_{DO3}\,COD \approx K_{DO4}\,(C_{ON} - \beta_N P - \beta_N Z) \qquad (25)$$

where:
P(µgchla /l): chlorophyll-a in phytoplankton,
Z(mg-C /l): carbon in zooplankton,
CIN (µg- N /l): inorganic nitrogen,
CIP (µg-P /l): inorganic phosphorus,
CON (µg−N /l): organic nitrogen,
COP (µg−P /l): organic phosphorus,
DO(mg/l): dissolved oxygen,
SP = exp (−µS · P),
µS = 0.00385: space effect function for primary production,
RPh (1/sec): growth rate of phytoplankton at 20◦ C (=2/(24*60*60)),
θPh : temperature coefficient on primary production of
phytoplankton (=1.05),
Iγ ≡ Qz : light intensity at z,
IS (kcal/m2 /sec): optimum light intensity (=2000/(24*60*60)),
|wO |(m/sec): settling velocity of phytoplankton (=0.1),
D: turbulent diffusion coefficient,
KIN (µg/l): half saturation constant (=25),
KIP (µg/l): half saturation constant (=2),
RCP (1/sec): respiration and mortality rate of phytoplankton (=0.01/(24*60*60)),
θCP : temperature coefficient on respiration of phytoplankton (=1.05),
Cg (l/mg-C /sec): grazing rate of phytoplankton by zooplankton (=0.25/(24*60*60)),
KPP (µg/l): saturation constant of grazing (=60),
Gz : grazing term of zooplankton,
Dz : mortality of zooplankton,


α(mg-C /µg-chla ): rate of carbon to chlorophyll-a in phytoplankton (=0.05),


az : assimilation rate of carbon in phytoplankton (=0.6),
RZ (1/sec): mortality rate of zooplankton (=0.02/(24*60*60)),
θZ : temperature coefficient on mortality rate of zooplankton (=1.05),
βN (µg−N /µg−chla ): N/Chl.a in phytoplankton (=10),
RN (1/sec): mineralization rate of organic N (=0.027/(24*60*60)),
θN : temperature coefficient on mineralization rate of N (=1.05),
γN (µg−N /mg- C ): N/C in zooplankton (=200),
EIN (µg/m2/sec ): release rate of inorganic N at bottom (30/(24*60*60)),
θEIN : temperature coefficient on release rate of IN (=1.12),
|wN |(m/sec): settling velocity of organic N (=0.1),
βP (µg−P /µg−Chla ): P/Chl.a in phytoplankton (=1),
RP (1/sec): mineralization rate of organic P (=0.2/(24*60*60)),
θP : temperature coefficient on mineralization rate of P (=1.05),
γP (µg−P /mg−C ): P/Chla in zooplankton (=26),
EIP (µg/m2/sec ): release rate of inorganic P at bottom (=0.0/(24*60*60)),
θEIP : temperature coefficient on release rate of IP (=1.12),
|wP |(m/sec): settling velocity of organic P (=0.1),
DO(mg/l): dissolved oxygen (mg/l),
DOsat : saturated dissolved oxygen concentration,
KDO1 , KDO2 , KDO3 : constants (=0.000035, 0.06, 0.00000002, respectively),
θDO : temperature coefficient on mineralization rate of COD (=1.05),
EDO (mg/m2/sec ): consumption rate of DO at bottom (=15/(24*60*60)),
θEDO : temperature coefficient on consumption rate of DO (=1.12).
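To make the structure of Eqs. (15)–(16) concrete, the phytoplankton growth and loss rates can be transcribed directly with the parameter values listed above (per-second rates, temperature in °C, light and concentrations in the units of the list; a sketch for illustration, not the authors' code):

```python
import math

# Parameter values as listed above (per-day rates converted to per-second)
R_PH, THETA_PH = 2.0 / 86400, 1.05    # growth rate at 20 deg C and its temperature coefficient
I_S = 2000.0 / 86400                  # optimum light intensity (kcal/m2/s)
K_IN, K_IP = 25.0, 2.0                # half-saturation constants (ug/l)
MU_S = 0.00385                        # space-effect constant
R_CP, THETA_CP = 0.01 / 86400, 1.05   # respiration/mortality rate and temperature coefficient
C_G, K_PP = 0.25 / 86400, 60.0        # grazing rate (l/mg-C/s) and grazing saturation constant (ug/l)

def growth_rate(P, T, I, C_in, C_ip):
    """Specific growth rate G_P of phytoplankton, Eq. (15)."""
    space = math.exp(-MU_S * P)                        # space effect S_P
    light = (I / I_S) * math.exp(1.0 - I / I_S)        # Steele-type light limitation
    return (space * R_PH * THETA_PH ** (T - 20.0) * light
            * C_in / (K_IN + C_in) * C_ip / (K_IP + C_ip))

def loss_rate(P, T, Z):
    """Specific loss rate D_P (respiration, mortality and grazing), Eq. (16)."""
    return R_CP * THETA_CP ** (T - 20.0) + C_G * K_PP / (K_PP + P) * Z

# e.g. mid-summer surface conditions (illustrative values only)
print(growth_rate(P=3.0, T=28.0, I=1500.0 / 86400, C_in=50.0, C_ip=1.0))
print(loss_rate(P=3.0, T=28.0, Z=0.05))
```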

CALCULATED RESULTS OF WATER QUALITIES

The long-term calculations were carried out for ten years with the temperature prediction. Figure 5
shows the seasonal variations of phytoplankton, zooplankton and other nutrients concentrations
at a surface layer after ten years from the start of calculation. The seasonal variations of vertical
distributions are also shown in Figure 6.
The calculated temporal variations of phytoplankton at the surface layer show two peaks during the
year, in early spring and late autumn (Figure 5), as seen in the observed variations of
chlorophyll-a shown in Figure 7. The limitation by inorganic phosphorus in the surface layer during
summer is also reproduced in Figure 5, and inorganic nutrients in the epilimnion are supplied from
the hypolimnion both by the thermal convection in autumn and by the large eddy diffusivity during
the winter, when the vertical temperature distributions are uniform.
The observed distributions of phytoplankton and dissolved oxygen are shown in Figures 7 and 8
by using the Biwako Biogeochemical and Ecological Database supervised by the Centre for Eco-
logical Research, Kyoto University. It is pointed out that the calculated water quality distributions
agree with the observed ones qualitatively, and the model may be applicable to predict the effect
of climate change.
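
One concrete mechanism behind this temperature sensitivity is the saturation value DO_sat in the reaeration term of eq. (24), which falls as the water warms. The text reproduced here does not state which saturation correlation was used; as an illustration, a commonly used empirical polynomial for fresh water (often attributed to Elmore and Hayes) gives values of the right magnitude:

def do_saturation(temp_c):
    """Saturated dissolved oxygen in fresh water (mg/l) at sea-level pressure.
    A commonly used empirical polynomial (often attributed to Elmore and Hayes);
    shown only as one possible choice, not necessarily the one used in the paper."""
    return (14.652
            - 0.41022 * temp_c
            + 0.0079910 * temp_c**2
            - 0.000077774 * temp_c**3)

# worked values spanning the seasonal range of surface-water temperature in a lake
print("DO_sat( 5 C) = %.2f mg/l" % do_saturation(5.0))    # about 12.8 mg/l in winter
print("DO_sat(25 C) = %.2f mg/l" % do_saturation(25.0))   # about 8.2 mg/l in summer

The drop of roughly 4.5 mg/l between 5 °C and 25 °C indicates the order of magnitude by which warming alone shifts the surface oxygen boundary condition; any monotonic DO_sat(T) relation of similar magnitude would play the same role in eq. (24).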

CONCLUSIONS

This paper describes a simplified one-dimensional model to predict the seasonal variations of the vertical temperature and water quality distributions through a year in Lake Biwa. The calculated temperature and water quality distributions agree qualitatively with the observed ones, so the model may be applicable to predicting the effect of global warming.


Figure 5. Calculated seasonal variations of water qualities at surface or bottom layer. (Panels: chlorophyll-a, zooplankton, inorganic nitrogen and inorganic phosphorus in the surface layer, and dissolved oxygen in the bottom layer, each plotted against day of year, Jan.–Dec.)

Figure 6. Calculated vertical distributions of water qualities. (Panels: dissolved oxygen and chlorophyll-a in phytoplankton against height z (m) up to the water surface at 50 m; profiles labelled by month, 1–12.)


Figure 7. Observed seasonal variations of Chlorophyll-a and Dissolved Oxygen (The Lake Biwa Biogeochemical and Ecological Database). (Panels: chlorophyll-a in phytoplankton in the surface layer and dissolved oxygen in the bottom layer, Jan. 1996 to Jan. 2000.)

Figure 8. Observed vertical distributions of Chlorophyll-a and Dissolved Oxygen (The Lake Biwa Biogeochemical and Ecological Database). (Panels: chlorophyll-a and DO in 1998 against depth Zs−Z (m) down to 80 m; profiles for Apr., June, Aug., Oct., Dec. and Feb.)

ACKNOWLEDGEMENTS

The authors gratefully acknowledge the Centre for Ecological Research, Kyoto University, for the free use of the Lake Biwa Biogeochemical and Ecological Database.

REFERENCES

1. Somiya, I. (Editor): Lake Biwa – Its water environment and water quality formation, Gihodo, 2000 (in Japanese).
2. Hikone meteorological observatory: The weather in Shiga Prefecture, 1992 (in Japanese).
3. Hosoda, T. and Hosomi, M.: A simplified model to predict the seasonal variations of water temperature in Lake Biwa, Proc. of 5th Symposium on Environmental Hydrodynamics, JSFM, 611–612, 2000 (in Japanese).
4. Launder, B.E.: On the effects of a gravitational field on the turbulent transport of heat and momentum, J. Fluid Mech., Vol. 67, 569–581, 1975.
5. Ushijima, S.: Turbulence modelling of temperature stratified flow and its application, Ph.D. thesis, Kyoto University, 1989 (in Japanese).
6. Orlob, G.T.: Mathematical modelling of water quality, John Wiley & Sons, 1983.
7. Somiya, I.: An ecological model on mass balance and water quality indices in a lake, Proc. of 2nd Symposium on Eutrophication Problems (Research Report No. 18), National Institute of Environmental Pollution, 114–155, 1981 (in Japanese).
8. Matsuo, N.: Prediction of water temperature, turbidity and water quality indices in stratified reservoirs, Ph.D. thesis, Kyoto University, 1982 (in Japanese).


Author index

Abd-Elhady M.S. 269
Alain A. Joseph 213
Alenka Roser Drev 349
Alfonso Aranda 221
Alija Lekić 337
Ali M. El-Nashar 319
Andrew Jenkins 259
Anna Luzhetskaya 197
Ayman Elshkaki 249
Bebar L. 181
Berger R. 309
Boris Korobitsyn 197
Braunbeck O.A. 189
Carcassi M.N. 67
Cerchiara G.M. 67
Colin Grant 259
Cortez 189
Davor Ljubas 119
Diamantino Durão 151
Ebbe Münster 291
Edina Dedović 301
Eliseo Ranzi 279
Emil Šterbenk 349
Emilio Parodi 279
Emin Kulić 109
Ester van der Voet 249
Evasio Lavagno 129
Francesco Gagliardi 47
Fujie K. 171
Gabriele Migliavacca 279
Goto N. 171
Griffin 189
Gulyas H. 99
Hajny Z. 181
Hanjalić K. 139
Haris Lulić 109
Henrik Lund 291
Hiromi Yamamoto 23
Hrvoje Juretić 231
Ichiro Naruse 29
Ignacio Zabalza 221
Ilaria Principi 37
Ilesanmi I. 99
Jahn M. 99
José A. 189
Kameel Virjee 57
Kenjereš S. 139
Kenji Yamaji 23
Kimito Funatsu 29
Klaus R.G. Hein 309
Kollias P.S. 91
Kollias S.P. 91
Kollias V.P. 91
Kozo Kanayama 79
Kristina Holmgren 203
Laura Fugaro 37
Li Z. 99
Loretta Bonfanti 279
Luís A.B. 189
Malcolm Smith 259
Manuela Duarte 151
Manuela Sarmento 151
Maria Pia Picchi 37
Mariacristina Roscia 47
Masaya Hotta 29
Michael A. Bartlett 203
Michael W. 189
Mirja Van Holderbeke 249
Mirna Gaya 189
Mojca Bole 349
Muhamed Korić 109
Naim Hamdia Afgan 3
Nikola Ružinski 119, 231
Oral J. 181
Poul Alberg Østergaard 241
Puchyr R. 181
Raffaella Gerboni 129
Rindt C.C.M. 269
Sabina Scarpellini 221
Sadjit Metović 109
Sauro Pierucci 279
Scandiffio 189
Scaramucci 189
Sergei P. Agashichev 319
Sikula J. 181
Šime Čurković 161
Slaven Dobrović 119, 231
Staiger B. 309
Stehlik P. 181
Susan Gaskin 57
Tabata T. 171
Takashi Hosoda 357
Tatjana Kovačina 301
Tiziano Faravelli 279
Tomohiko Hosomi 357
Tomoyuki Goto 29
Unterberger S. 309
Usui T. 171
van Steenhoven A.A. 269
Veerle Timmermans 249
Wijers J.G. 269
Yoshihiro Obata 79
Yuzo Furuta 79
Zambolin L. 67
