Intelligent Computing
and Information Science
Volume Editor
Ran Chen
The Key Laboratory of Manufacture and Test
Chongqing University of Technology
Chongqing, 400054, P. R. China
E-mail: sanshyuan@hotmail.com
ISSN 1865-0929
ISBN-10 3-642-18133-3 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-18133-7 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
springer.com
© Springer-Verlag Berlin Heidelberg 2011
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper 06/3180
Preface
October 2010
Ran Chen
ICICIS 2011 Committee
Conference Chairman
Ran Chen Chongqing University of Technology, China
Publication Chair
Wenli Yao Control Engineering and Information Science
Research Association, Hong Kong
International Frontiers of Science and
Technology Research Association
Co-Sponsored by
Control Engineering and Information Science Research Association
International Frontiers of Science and Technology Research Association
Chongqing Xueya Conferences Catering Co., Ltd
Chongqing University of Technology
Table of Contents – Part II
Fuzzy Control for the Swing-Up of the Inverted Pendulum System . . . . . 454
Yu Wu and Peiyi Zhu
Study on USB Based CAN Bus for Data Measurement System . . . . . . . . . 544
Weibin Wu, Tiansheng Hong, Yuqing Zhu, Guangbin He,
Cheng Ye, Haobiao Li, and Chuwen Chen
Multi Scale Adaptive Median Filter for Impulsive Noise Removal . . . . . . . 124
Xiangzhi Bai, Fugen Zhou, Zhaoying Liu, Ting Jin, and Bindang Xue
Keywords: fourth party platform integrated payment service, unified electronic currency, unified billing.
1 Introduction
Electronic currency has strong vitality. Compared with traditional currency, it has unique advantages: it improves the operational efficiency of capital, reduces the transaction costs of clearing, and is free of the constraints of time and space. However, we cannot ignore the problems that have appeared in the development of e-currency, such as the lack of unity, security issues and regulatory issues.
To solve the above problems of electronic money, a complete solution must be proposed from the perspective of the entire payment chain. Therefore, this paper develops a unified e-currency based on the fourth party platform integrated payment service.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 1–6, 2011.
2 X. Yong and H. Qiqi
Unified e-currency includes two functions: one is recharging with traditional currency for consumption; the other is recharging with other electronic cash.
A. Users apply to the fourth party integrated payment service for an account.
B. Users log in to the account, buy unified e-currency and download an electronic purse through the net; the unified e-currency is then stored in the electronic purse.
C. Users choose commodities, then transfer the product information and unified electronic currency to the merchant.
D. The merchant confirms the legitimacy of the unified electronic currency with the fourth party integrated payment service.
E. Once the legality of the unified electronic currency is confirmed, the merchant delivers the goods.
F. The fourth party integrated payment service makes a unified settlement of funds according to the financial information.
A. The user logs in to the unified e-currency account and enables the dead e-currency exchange function, then selects dead e-currency A for exchange.
B. The user recharges unified e-currency with dead e-currency A. The fourth party platform integrated payment service adds the funds to the Internet unified e-currency account according to the e-currency exchange rate and sends an encrypted recharge message to dead e-currency issuer A; issuer A deducts the funds from the dead e-currency A account according to the encrypted recharge message.
C. The user recharges account B with the unified e-currency; the dead e-currency issuer B then adds funds to the dead e-currency B account according to the encrypted information.
D. The fourth party platform integrated payment service makes a unified settlement and performs hedge liquidation according to the recharge messages.
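The conversion in step B can be sketched as follows. This is a minimal illustration: the issuer class, the account variables, the exchange rate and the amounts are hypothetical assumptions, not part of the paper's design.

```python
# Hypothetical sketch of recharge step B: convert dead e-currency A into
# unified e-currency at the platform's exchange rate, then notify the issuer.

def recharge_unified(dead_amount, rate):
    """Amount of unified e-currency credited for `dead_amount` of
    dead e-currency A at the platform's exchange rate."""
    return dead_amount * rate

class Issuer:
    """Stand-in for dead e-currency issuer A."""
    def __init__(self, balance):
        self.balance = balance
    def apply_recharge_message(self, amount):
        # The issuer deducts the funds on receiving the recharge message
        # (encryption of the message is omitted in this sketch).
        self.balance -= amount

issuer_a = Issuer(balance=100.0)
unified_account = 0.0

spent = 40.0                                        # dead e-currency A used
unified_account += recharge_unified(spent, rate=0.5)
issuer_a.apply_recharge_message(spent)

print(unified_account)   # 20.0
print(issuer_a.balance)  # 60.0
```

The platform's hedge liquidation in step D would then net such recharge messages across issuers rather than moving funds per transaction.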
During the exchange, the unified e-currency purse must be connected to the Internet. Exchange funds are deposited directly into the electronic wallet, which helps keep the unified e-currency separate from the funds in the Internet account.
A. The user logs in to the unified e-currency wallet and selects the recharge method: electronic cash recharge.
B. The user recharges the unified e-currency purse with e-currency A.
C. The issuer of e-currency A receives and sends the recharge data through a special interface, then reduces the amount of e-currency A.
D. The fourth party platform integrated payment service receives the data through the special interface and sends data to the unified e-currency purse to increase the amount in the e-purse.
E. The fourth party platform integrated payment service makes a unified settlement; the funds of e-currency A in bank A then go to the platform's temporary bank account.
When the user uses bank account A to buy unified e-currency, the funds remain in the temporary clearing account of the fourth party platform integrated payment service. When the unified e-currency is used for shopping, the liquidation process is as follows:
Unified Electronic Currency Based on the Fourth Party 5
A. The user buys unified e-currency, and the purchase information is passed to the fourth party platform integrated payment service.
B. The fourth party platform integrated payment service receives the purchase information, which includes the user account information, funds information and so on.
C. The fourth party platform integrated payment service sends completion information to the central bank, and the central bank then transfers the funds from the user's bank to the temporary clearing account of the fourth party platform integrated payment service.
D. The user pays with the unified e-currency purse, and the secure payment information is passed to the fourth party platform integrated payment service.
E. The fourth party platform integrated payment service receives the user's secure payment information, which includes the user's payment amount, the user's bank information, and the business information. Based on the business information, the platform determines the merchant's bank account information.
F. The fourth party platform integrated payment service clears the financial and bank account information received, analyzes the flow of funds between the various banks, and transmits the completion information to the central bank.
G. According to the information received, the central bank makes the capital settlement and then moves the funds from the temporary clearing account of the fourth party platform integrated payment service to the merchant banks' clearing accounts.
H. The central bank transfers the liquidation information to the fourth party platform integrated payment service; the platform then clears this information and sends it to the merchant's head bank.
I. The head bank sends the specific account information and financial information to the merchant's bank branches, completing the primary account and fund settlement.
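The role of the temporary clearing account in steps C and G above can be sketched as two book transfers. The account names and amounts below are assumptions made for illustration only.

```python
# Illustrative sketch of the settlement chain: user bank -> platform
# temporary clearing account -> merchant bank (names and amounts assumed).

accounts = {
    "user_bank": 500.0,
    "platform_clearing": 0.0,   # fourth party temporary clearing account
    "merchant_bank": 0.0,
}

def transfer(src, dst, amount):
    """Central-bank style book transfer between clearing accounts."""
    assert accounts[src] >= amount, "insufficient funds"
    accounts[src] -= amount
    accounts[dst] += amount

# Step C: the user buys unified e-currency; funds park in the clearing account.
transfer("user_bank", "platform_clearing", 120.0)
# Step G: after payment, the central bank settles the funds onward to the
# merchant bank's clearing account.
transfer("platform_clearing", "merchant_bank", 120.0)

print(accounts)
# user_bank: 380.0, platform_clearing: 0.0, merchant_bank: 120.0
```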
When making purchases, the user can also directly use non-unified e-currency. The platform can determine the issuer and the RMB exchange rate according to the type of e-currency, and clears the funds directly with the issuer's bank account; the currency need not be converted into unified e-currency. The liquidation process is the same as for the direct use of unified e-currency.
Dead e-currency. The amount of dead e-currency exchanged is usually small, and the exchange involves the interests of various merchants. Therefore, dead e-currency is cleared by directly hedging accounts: users' accounts are hedged directly among the dead e-currency issuers, without involving an actual transfer of funds.
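The "hedging accounts directly" idea can be sketched as multilateral netting of mutual claims among dead e-currency issuers, so that only net positions remain and no funds actually move. The issuer names and amounts below are hypothetical.

```python
# Minimal netting sketch: mutual claims among dead e-currency issuers are
# offset against each other instead of being settled by fund transfers.

from collections import defaultdict

def net_positions(claims):
    """claims: list of (creditor, debtor, amount). Returns each issuer's
    net position; a positive value is owed TO that issuer."""
    net = defaultdict(float)
    for creditor, debtor, amount in claims:
        net[creditor] += amount
        net[debtor] -= amount
    return dict(net)

claims = [("A", "B", 30.0), ("B", "A", 25.0), ("A", "C", 10.0)]
print(net_positions(claims))
# {'A': 15.0, 'B': -5.0, 'C': -10.0}
```

Only these small net positions would ever need real settlement, which matches the paper's point that direct hedging avoids actual transfers for the bulk of the exchanged amounts.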
Undead e-currency clearing and settlement. When users choose the undead e-currency exchange function, the fourth party platform integrated payment service must reach an agreement with the undead e-currency issuers and provide an interface for them, with clearing performed through the platform. Its liquidation mainly involves inter-bank settlement; the process is the same as using Internet banking to buy unified e-currency.
5 Summary
Unified e-currency is the ultimate trend in the development of e-currency; its full realization will cover the whole industry and the entire network and replace all other types of e-currency. The purpose of e-currency exchange is to replace the other e-currencies smoothly in the early stage of development. The security and supervision of unified e-currency must keep up with its development, which requires the state and the community to pay adequate attention.
Modeling and Power Flow Analysis for Herringbone
Gears Power Dual-Branching Transmission System
1 Introduction
The herringbone gear power-flow transmission system has been widely used in aeroengines and reducers to achieve high-speed, high-power split flow and confluence [1-3]. The power dual-branching gear transmission system, composed of herringbone gears in a dual-branching structure to implement dual-path power flow, is simple and compact [4]. Besides, herringbone gears as transmission components have the unique characteristics of large contact ratio, smooth transmission and low noise, which enable the system, on the one hand, to meet the requirements of heavy-duty working conditions; on the other hand, the system has broad application prospects in fields such as aviation and marine transmissions owing to its reduced size and weight.
In this study, the mechanical structural model of the power flow transmission system is presented. Based on the torque analysis of the herringbone gears, the mutual meshing influence between related gear pairs of the system is investigated through the analysis of deformation compatibility, and the power flow is solved by programming. In practice, clearance is unavoidable in the meshing of gear pairs owing to processing and installation errors of the gear system [5-6], and its influence on the power split is analyzed.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 7–15, 2011.
8 X. Yang et al.
(Figure: torques T1, T6, T12, T13, T21, T24, T31, T35, T42, T46, T53, T56, T64, T65 acting on the gears and shafts of the system.)
The conditions of torque equilibrium, obtained from the gears and shafts illustrated in figure 2, are represented by the following equations.
Fig. 2. Relationship model of various gear pairs meshing rotation angle and gear shafts twist
angle
The relationship of various gear pairs meshing rotation angle and gear shafts twist
angle is represented as follows,
Δφ12(T12) = φ1 − φ2·i12 ,   Δφ24(T21) = φ2 − φ4 ,
Δφ46(T46) = φ4 − φ6·i46 ,   Δφ13(T13) = φ1 − φ3·i13 ,      (5)
Δφ35(T31) = φ3 − φ5 ,       Δφ56(T56) = φ5 − φ6·i56 .
Here, φi is the rotation angle of gear i, and Δφij(Tij) is the angular distortion of gear i with respect to gear j under the action of torque Tij, that is, the loaded transmission error; Δφij(Tij) is a function of Tij.
Gears 2 and 3 are gears of the same level, as are pinions 4 and 5. Therefore the numbers of teeth of gears 2 and 3 are equal, as are those of pinions 4 and 5, which yields the following equations,
Equations (5), (6) and (7) yield the deformation compatibility equations of the system
as follows,
Δφ12 (T12 ) r1 / r2 − Δφ24 (T12 ) r2 / r1 + Δφ46 (T12 ) r2 / r1 = Δφ13 (T13 ) r1 / r3 − Δφ35 (T13 ) r3 / r1 + Δφ56 (T13 ) r3 / r1 (8)
The relationship between the torsional meshing stiffness Kij and the load Tij is nonlinear; in the vicinity of a given load, however, it can be approximated as linear [9-10]. Accordingly, the meshing angular distortion of a gear pair can be represented in simplified form as follows,
Δφij = Tij / (Kij·ri²) (9)
Kij = 2 × 10³·Cγij·b (10)

Here b is the total width of the herringbone gears, in mm; Cγij = (0.75·εαij + 0.25)·C′ij is the mean value of the total rigidity in the end section of gears i and j, abbreviated as the "meshing stiffness", in N/(mm·μm); εα is the transverse contact ratio; C′ij = 1/q is the maximum stiffness in the normal section of one gear pair,
Gear 2 and pinion 4, and gear 3 and pinion 5, are connected through the load-sharing torsion shafts, and their angular deformation is represented by the twist deformation of the shaft [11].
Δφ24 = −(32·r2·l24·T12) / (G·π·r1·(d2⁴ − d4⁴)) ,  Δφ35 = −(32·r3·l35·T13) / (G·π·r1·(d3⁴ − d5⁴)) (11)
where G is the shear elastic modulus of the shaft material, in MPa (for steel, G = 8.1 × 10⁴ MPa); lAB (A = 2, 3; B = 4, 5) is the length of the shaft under torque, in mm; di (i = 2, 3, 4, 5) is the diameter of the shaft connecting the first and second levels.
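As a numeric illustration of the twist terms in equation (11), the sketch below evaluates Δφ24. Only G = 8.1 × 10⁴ MPa comes from the text; the radii, shaft length, diameters and torque are invented values for illustration.

```python
# Numeric sketch of one shaft-twist term from equation (11).
# Lengths in mm, torque in N·mm, G in MPa (all values assumed except G).

import math

G = 8.1e4             # shear modulus of steel, MPa (from the text)
r1, r2 = 40.0, 120.0  # assumed pitch radii, mm
l24 = 200.0           # assumed shaft length under torque, mm
d2, d4 = 50.0, 30.0   # assumed shaft diameters, mm
T12 = -1.2e6          # assumed branch torque, N·mm (negative, as in table 3)

dphi24 = -(32 * r2 * l24 * T12) / (G * math.pi * r1 * (d2**4 - d4**4))
print(dphi24 > 0)     # twist angle has the opposite sign to the torque
```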
As mentioned above, Δφij(Tij) is the angular distortion of gear i with respect to gear j under the action of torque Tij. The variable Δφij(Tij) includes the initial gap between the gears, corresponding to the relative twist angle Δφij0 before the tooth surfaces of gears i and j come into contact, and the relative twist angle Δφij1(Tij) at the mesh point after the tooth surfaces contact and deform.
Equations (8), (9), (10), (11) and (12) yield the deformation compatibility equations
under the action of torque Tij of meshing gear i and gear j as follows,
T12·[1/(K12·r1·r2) − (32·r2·l24)/(G·π·r1·(d2⁴ − d4⁴)) + r2/(K46·r4²·r1)] − T13·[1/(K13·r1·r3) − (32·r3·l35)/(G·π·r1·(d3⁴ − d5⁴)) + r3/(K56·r5²·r1)] = 0 (13)
The linear equations (4) and (13) conveniently yield load bearing torque of each gear
pair Tij , which is the condition of power split of system.
Envision that the output gear of the gear train is rigidly fixed against rotation and a torque is applied to the input pinion. As the torque is applied, the input shaft rotates by some amount because of the deformations [4]; this rotation of the input pinion relative to the output gear is called the load windup. With the gear and meshing parameters held constant, the parallelism error of the gear-pair axes caused by manufacture and installation [11-12] (hereinafter abbreviated as the axis error) directly produces a deviation of the center distance between gear pairs, inevitably leading to a gap in the meshing process, corresponding to a non-zero initial twist angle Δφij0.
The gap due to the center-distance error is projected along and perpendicular to the line of action; the projection along the line of action can be transformed into a clearance angle, while the projection perpendicular to the line of action can be ignored [13]. According to the geometrical relationship of the tooth face of herringbone gears, when the axial movement of gear i (the gap of the center-distance error) is ΔEi, the relative twist angle corresponding to the initial clearance of the gear pair is represented as follows:

Δφij0 = ΔEi·sin αt·cos β / Rij (14)
where Rij is the mean radius of gyration for the meshing point with respect to the
shaft of gear.
If the gap of the center-distance axial error ΔE2 occurs between pinion 1 and gear 2, the deformation compatibility equations are as follows,
If the gap of the center-distance axial error ΔE4 occurs between pinion 4 and gear 6, the deformation compatibility equations are as follows,
If the gap of the center-distance axial error occurs in every gear pair simultaneously, the deformation compatibility equations are as follows,
T12·[1/(K12·r1·r2) − (32·r2·l24)/(G·π·r1·(d2⁴ − d4⁴)) + r2/(K46·r4²·r1)] − T13·[1/(K13·r1·r3) − (32·r3·l35)/(G·π·r1·(d3⁴ − d5⁴)) + r3/(K56·r5²·r1)]
= ΔE2·sin αt·cos β·r1/r2² − ΔE4·sin αt·cos β/r4 + ΔE3·sin αt·cos β·r1/r3² − ΔE5·sin αt·cos β/r5 (19)
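Once the coefficients of T12 and T13 in (19) are evaluated, the power split follows from a 2×2 linear solve. The sketch below assumes a torque-equilibrium closure of the form T12 + T13 = T1 (equation (4) is not reproduced in this excerpt) and uses invented coefficient values: with a zero right-hand side the two branches split equally, and a small gap-error term unbalances them, mirroring the trend in table 3.

```python
# Sketch of the power-split solve: equation (19) (which reduces to (13)
# when every gap error ΔE is zero) plus an assumed closure T12 + T13 = T1.
# All numeric values are illustrative assumptions.

def solve_split(c12, c13, rhs, t1):
    """Solve c12*T12 - c13*T13 = rhs together with T12 + T13 = t1,
    by Cramer's rule on [[c12, -c13], [1, 1]]."""
    det = c12 + c13
    t12 = (rhs + c13 * t1) / det
    t13 = (c12 * t1 - rhs) / det
    return t12, t13

# Symmetric branches, no gap error: equal split.
print(solve_split(c12=1e-6, c13=1e-6, rhs=0.0, t1=-2492.0))

# A small right-hand-side error term unbalances the two branches.
print(solve_split(c12=1e-6, c13=1e-6, rhs=3.0e-5, t1=-2492.0))
```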
5 Calculation Results
The input power of pinion 1 is 1556 kw , rotational speed 6000 r / min. The parameters
of gears of transmission system are listed in table 1, and the parameters of middle
torsion shaft listed in table 2.
5.1 The Effect on Power Flow Caused by the Gap of Center Distance Axial Error

The gap of the center-distance axial error of the gear pairs produces the corresponding initial twist angles and affects the power split among the gear pairs of the system, which is reflected in changes of the transmitted torques. For standard installation and for different errors, the power split computed by programming in this study is listed in table 3.
Table 3. The situation of power split under different errors

Item      | Standard installation | Error in gear pair 12 | Error in gear pair 13 | Error in gear pair 46 | Error in gear pair 56 | Error in every gear pair
Ei / mm   | 0     | 0.01  | 0.02  | 0.01  | 0.02  |
T12 / N·m | -1246 | -1231 | -1216 | -1061 | -938  | -1242
T13 / N·m | -1246 | -1261 | -1276 | -1431 | -1554 | -1250
T46 / N·m | -4154 | -4104 | -4054 | -3537 | -3126 | -4142
T56 / N·m | -4154 | -4204 | -4254 | -4771 | -5182 | -4166
The load on each gear differs at the different engaging positions of the system, as explained in detail in the authors' other papers. The power flow charts of the gear pairs in the various situations are fitted through loaded tooth contact analysis [14]. The power flow of the system in the various situations over two mesh periods (corresponding to ten engaging positions) is shown in figure 3.
6 Conclusions
This study was done to better understand split-path transmissions and to support their application in the future. From the analytical study, the following conclusions can be drawn:
(1) The method developed in this study greatly simplifies solving the power split of a complex gear transmission system and increases the computational efficiency, and it offers guidance for three- and four-branch transmission structures.
(2) The effect on the power split of adjusting the tooth-flank gap can be conveniently calculated using the method developed. Under normal conditions, the power split between the two branches is equal, but even an insignificant gap results in a tremendous change; the branched structure is therefore sensitive to the gap of the center-distance axial error.
(3) The variation of power flow caused by the gap of the center-distance axial error is consistent with the graphs of power split caused by installation error. Errors cause unbalanced loading of the two sides of a gear, which easily leads to one-sided overload. Consequently, the authors propose achieving load-sharing of the power split as far as possible by changing the tooth-flank gap through adjustment of the axial gear position.
(4) The precision required for the gap and installation error is within the capabilities of available and proven manufacturing processes. The model provides important theoretical guidance for achieving valid error compensation for load-sharing.
References
1. Litvin, F.L., Fuentes, A.: Gear Geometry and Applied Theory. Cambridge University
Press, Cambridge (2004)
2. Litvin, F.L., Zhang, J., Handschuh, R.F., Coy, J.J.: Topology of Modified Helical Gears.
Surface Topography 3, 41–58 (1989)
3. White, G.: Design Study of A 375-kw Helicopter Transmission with Split-torque Epicyclic
and Bevel Drive Stage. J. Mech. Eng. Sci. 197C, 213–224 (1983)
4. Timothy, L., Krantz, A.: Method to Analyze and Optimize the Load Sharing of Split Path
Transmissions. In: Seventh International Power Transmission and Gearing Conference,
San Diego (1996)
5. Zhang, Y., Fang, Z.: Analysis of Transmission Errors Under Load of Helical Gears with
Modified Tooth Gears. ASME Journal of Mechanical Design 119, 120–126 (1997)
6. Umeyama, M., Kato, M., Inoue, K.: Effects of Gear Dimensions and Tooth Surface
Modifications on The Loaded Transmission Error of A Helical Gear Pair. ASME Journal
of Mechanical Design 120, 119–125 (1998)
7. Yao, Y., Yan, H.S.: A New Method for Torque Balancing of Planar Linkages Using Non-
circular Gears. Proceedings of the Institution of Mechanical Engineers Part C-Journal of
Mechanical Engineering Science 217, 495–503 (2003)
8. Gu, J.G., Fang, Z.D., Zhou, J.X.: Modeling and Power Flow Analyzing for Power Split System of Spiral Bevel Gears. In: Feng, G., Xijun, Z. (eds.) Proc. of 2009 International Workshop on Information Security and Application, pp. 401–404. Academy Publisher, Finland (2009)
9. Peng, H., Liu, G.L.: Tooth Contact Analysis of Face-gear Meshing. Mechanical Science
and Technology for Aerospace Engineering 27, 92–95 (2008)
10. Vecchiato, D.: Tooth Contact Analysis of a Misaligned Isostatic Planetary Gear Train.
Mechanism and Machine Theory 41, 617–631 (2006)
11. Shao, X.R.: Influence of Load Sharing Coefficient on the Manufacture and Assemble Error
of the Planetary Gear NGW. Journal of Northeast Heavy Machinery Institute 18, 306–309
(1994)
12. Li, T., Pan, C.Y., Li, Q., Zhang, L.J.: Analysis of assembly error affecting on directing
precision of spherical gear attitude adjustment mechanism. Acta Armamentarii. 30, 962–
966 (2009)
13. Chang, S.L., Tsay, C.B., Tseng, C.H.: Kinematic Optimization of a Modified Helical Gear
Train. ASME Journal of Mechanical Design 119, 307–314 (1997)
14. Gu, J.G., Fang, Z.D., Pang, H., Wang, C.: Modeling and Load Analysis of Spiral Bevel
Gears Power Split System. Journal of Aerospace Power 24, 2625–2630 (2009)
Research on SaaS and Web Service Based Order Tracking
1 School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
2 Chong Qing Automobile College, Chongqing University of Technology, Chongqing 400054, China
1 Introduction
A DVE takes outsourcing as its main business mode and allocates customer orders to appropriate suppliers by bidding or negotiation. The real-time query of order status is called order tracking. Providing an order tracking service in a DVE helps customers participate in the manufacturing process of their orders and thus improves their loyalty to the DVE. The service also benefits the core enterprise of the DVE in reallocating orders to ensure order completion.
Research on order tracking is rare. For order tracking in discrete-manufacturing enterprises, an order execution process analysis and modeling method based on key-node tracking of information in an MTO environment was proposed [1]. To develop an order tracking system for auto assembly, the hardware structure and software design based on barcode technology were discussed [2]. Mobile agents were used to develop online order tracking across global logistic alliances [3]. For order tracking of alliance members within a supply chain, mobile agents were employed to develop a prototype of order tracking on the web [4]. Previous research focuses on single enterprises or alliance enterprises with stable membership. The existing solutions and systems, however, are powerless for a DVE, whose member enterprises join or leave dynamically and have distributed geographical locations, heterogeneous hardware and software, and different information levels. This paper therefore seeks a method for order tracking in a DVE.
* Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 16–22, 2011.
2 System Design
After receiving orders, the core enterprise of the DVE classifies them, divides them into subtasks and allocates them to suppliers. The suppliers arrange production for the orders and store the related data. While an order is being processed, the customer can query its status and give feedback on the manufacturing method. In addition, the core enterprise of the DVE may reallocate orders according to their real-time status. The process is shown in figure 1. Because the suppliers have different information systems and are distributed globally, the main problems to be solved are how to collect manufacturing data from the suppliers and how to provide order tracking on the web.
SaaS provides online software: customers purchase the service as needed, while the copyright belongs to the software provider, who is responsible for maintenance, daily operations and technical support for the software [5]. Web services are used to integrate and share data or applications across heterogeneous platforms. Therefore, according to the application features of SaaS and the technical characteristics of web services, a SaaS and web service based order tracking solution was designed, as shown in figure 2.
The system includes a SaaS-based manufacturing data management system, UDDI, and an order tracking & querying module. The idea is that suppliers without an information system use the SaaS service to store production data in the manufacturing data DB, while suppliers with an information system encapsulate it as a web service and publish it to UDDI. The order tracking & querying module first finds, from the order-allocation DB, the supplier ID related to the order; it then acquires the data either directly from the manufacturing data DB or by invoking the web service deployed at the remote supplier enterprise.
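The dispatch logic of the order tracking & querying module can be sketched as below. The order IDs, supplier IDs, table layouts and the stand-in for a remote web-service call are all hypothetical assumptions; a real system would query the DBs and invoke a SOAP/REST service instead of dictionaries and lambdas.

```python
# Sketch of the order tracking & querying module: look up the supplier for
# an order, then read the SaaS-hosted manufacturing DB directly or fall
# back to the supplier's own (remote) web service. All names are assumed.

ORDER_ALLOCATION = {"PO-1001": "S01", "PO-1002": "S02"}    # order -> supplier ID
SAAS_MFG_DB = {("S01", "PO-1001"): "milling finished"}     # SaaS-stored production data
SUPPLIER_SERVICES = {"S02": lambda order: "assembly 60% done"}  # stand-in web-service call

def track_order(order_id):
    supplier = ORDER_ALLOCATION[order_id]         # step 1: find the supplier ID
    if (supplier, order_id) in SAAS_MFG_DB:       # supplier uses the SaaS service
        return SAAS_MFG_DB[(supplier, order_id)]
    return SUPPLIER_SERVICES[supplier](order_id)  # supplier's own web service

print(track_order("PO-1001"))  # milling finished
print(track_order("PO-1002"))  # assembly 60% done
```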
18 J. Jiang et al.
3 Key Problem
In addition, it is important to keep tenants' data secure. Security here includes the authorization of customer accounts and of database access. For customer account security, centralized authentication is used: the software provider manages all customer accounts, and the manager of a customer account is authorized to manage its user accounts, as shown in figure 5. For database access security, the MD5 algorithm and a three-tier architecture are used, as shown in figure 6.
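The MD5 step can be sketched with the standard library as below. Note that MD5 is a one-way digest rather than encryption, and it is no longer considered secure for credentials; the sketch only mirrors the design named in the text, and the password value is an assumption.

```python
# Minimal stdlib sketch of the MD5 step in the paper's security design:
# the data layer stores a digest, and the business layer compares digests
# instead of plaintext.

import hashlib

def md5_digest(secret: str) -> str:
    return hashlib.md5(secret.encode("utf-8")).hexdigest()

stored = md5_digest("tenant-password")   # what the data layer keeps

def verify(attempt: str) -> bool:
    return md5_digest(attempt) == stored

print(verify("tenant-password"))  # True
print(verify("wrong"))            # False
```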
(Figures 5 and 6: the account authorization workflow (apply for an account, verify it, create customer and user roles, allocate modules to the roles, and bind accounts to roles with role-module inheritance) and the three-tier architecture (presentation layer; business layer with encryption/decryption and a web-service data-query module; encrypted DB in the data layer).)
4 Prototype System
To meet the needs of the virtual business model, a platform for the product customization of mechanical parts in a network environment was developed for an enterprise. Its modules include 2D/3D product display on the web, customer-needs submission, quoting, order classification and allocation, supplier management and production management. Because the current order tracking methods by telephone, fax or email suffer from a complex tracking process, high communication costs, the constraints of working hours and slow response, the SaaS and web service based order tracking system was added to the platform and a prototype was developed. The process of order tracking in the prototype is shown in figure 8.
5 Conclusions
algorithm and the three-tier architecture were used to ensure the security of customer data. To encapsulate an application system as a web service, solutions at the three layers of data, business logic and application program were researched. On the basis of the research in this paper, a web-based order tracking prototype was developed that realizes order tracking across enterprises. Next, we will modify and improve it until it can be put into practical application. The research in this paper serves as a reference for the development of similar application systems.
Acknowledgment

This work is supported by the National Natural Science Foundation of China Youth Project (contract no. 50905133).
References
1. Chen, X., Tang, R., Wang, Z.: Research on Order Track Manage System Oriented to Discrete Manufacturing. J. Light Industry Machinery 28, 111–115 (2010) (in Chinese)
2. Chen, S., Liu, T.: Barcode Order Tracking System for Auto Assemblage. J. Journal of
Wuhan University of Technology 27, 75–78 (2005) (in Chinese)
3. Trappey, A.J.C., Trappey, C.V., Hou, J.L., et al.: Mobile agent technology and application
for online global logistic service. J. Industrial Management & Data System 104, 168–183
(2004)
4. Cheng, C.-B., Wang, C.: Outsourcer selection and order tracking in a supply chain by mobile agents. J. Computers & Industrial Engineering 55, 406–422 (2008)
5. Choudhary, V.: Software as a Service: Implications for Investment in Software Develop-
ment. In: Proceedings of the 40th Hawaii International Conference on System Sciences, pp.
209–218. IEEE Press, Hawaii (2007)
Research and Analyses on Unsteady Heat Transfer of
Inner Thermal Insulation Wall during Multi-temperature
Refrigerated Transportation
1 Introduction
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 23–29, 2011.
24 G. Liu, R. Xie, and Y. Sun
By the basic theory of heat transfer, when studying unsteady heat transfer problems of a single homogeneous material (shown in fig. 1), the answer can be obtained by solving the partial differential equations:
∂t(x,τ)/∂τ = a·∂²t(x,τ)/∂x² ,  0 < x < l, τ > 0
q(x,τ) = −λ·∂t(x,τ)/∂x ,  0 < x < l, τ > 0      (1)
t(x,0) = 0
Here t is the temperature in °C, q the heat flux in W/m², a the thermal diffusivity of the wall material in m²/h, λ the thermal conductivity of the wall material in W/(m·°C), x the wall coordinate in m, and τ the time variable in h.
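Equation (1) can be cross-checked numerically. The sketch below runs an explicit finite-difference scheme for ∂t/∂τ = a·∂²t/∂x² on a slab whose faces are held at 0 °C, starting from a hot interior; the material and grid values are assumptions chosen only to keep the scheme stable.

```python
# Explicit finite-difference check of the diffusion part of equation (1)
# with assumed values: the interior cools toward the 0 °C boundaries.

a = 1.0e-3                 # thermal diffusivity, m^2/h (assumed)
l, n = 0.1, 21             # wall thickness (m) and number of grid points
dx = l / (n - 1)
dtau = 0.2 * dx * dx / a   # stable explicit time step (factor 0.2 < 0.5)

t = [0.0] + [10.0] * (n - 2) + [0.0]   # initial temperature profile, °C
for _ in range(200):
    t = [0.0] + [
        t[i] + a * dtau / dx**2 * (t[i + 1] - 2 * t[i] + t[i - 1])
        for i in range(1, n - 1)
    ] + [0.0]

print(max(t) < 10.0, min(t) >= 0.0)   # True True: cools, never below 0
```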
Applying the Laplace transform to the variables x and τ in equation (1) turns the original differential equations into algebraic equations; for x = l, the inverse Laplace transform then yields the mathematical expressions for the wall temperature distribution and heat flux distribution, shown in (2).
Here T(l,s) is the Laplace transform in time of the temperature on the calculation section l, Q(l,s) is the Laplace transform in time of the heat flux on the calculation section l, and s is the Laplace transform variable.
Given the boundary conditions, the Laplace transforms of the temperature and heat flux at any position of the partition can be obtained from (2); applying the inverse Laplace transform to T(l,s) and Q(l,s) then yields the final solution. Formula (2), however, holds only for a wall of a single homogeneous material, whereas the wall of a multi-temperature refrigerated truck is a combination of multi-layer insulation materials (shown in fig. 2); the transfer matrix of the partition thermal system should therefore include the inside and outside air layers and the multi-layer partition, etc.
$$\begin{bmatrix} T(l,s) \\ Q(l,s) \end{bmatrix} =
\begin{bmatrix} 1 & -1/\alpha_n \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} A_n(s) & -B_n(s) \\ -C_n(s) & D_n(s) \end{bmatrix}
\begin{bmatrix} A_{n-1}(s) & -B_{n-1}(s) \\ -C_{n-1}(s) & D_{n-1}(s) \end{bmatrix}
\cdots
\begin{bmatrix} A_1(s) & -B_1(s) \\ -C_1(s) & D_1(s) \end{bmatrix}
\begin{bmatrix} 1 & -1/\alpha_w \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} T(0,s) \\ Q(0,s) \end{bmatrix} \qquad (3)$$
where $\alpha_n$ is the convective heat transfer coefficient of the air inside the refrigerated truck in W/(m²·°C), $\alpha_w$ is that of the air outside the truck in W/(m²·°C), and for each layer $i$: $A_i(s) = \cosh(\sqrt{s/a}\,l_i)$; $B_i(s) = \sinh(\sqrt{s/a}\,l_i)/(\lambda\sqrt{s/a})$; $C_i(s) = \lambda\sqrt{s/a}\,\sinh(\sqrt{s/a}\,l_i)$; $D_i(s) = \cosh(\sqrt{s/a}\,l_i)$.
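As an illustration of how the transfer matrix of Eq. (3) is assembled, the following Python sketch multiplies the inside air-film matrix, the layer matrices built from the elements $A_i$–$D_i$, and the outside air-film matrix for a given Laplace variable $s$. The layer properties used in any example are hypothetical, and the function names are ours, not the paper's.

```python
import cmath

def layer_matrix(s, a, lam, l):
    """2x2 transfer matrix of one homogeneous layer, from the elements
    A = cosh(q*l), B = sinh(q*l)/(lam*q), C = lam*q*sinh(q*l), D = A,
    with q = sqrt(s/a), arranged with the signs of Eq. (3)."""
    q = cmath.sqrt(s / a)
    A = cmath.cosh(q * l)
    B = cmath.sinh(q * l) / (lam * q)
    C = lam * q * cmath.sinh(q * l)
    return [[A, -B], [-C, A]]

def air_film(alpha):
    """Convective air-film matrix [[1, -1/alpha], [0, 1]]."""
    return [[1.0, -1.0 / alpha], [0.0, 1.0]]

def mat_mul(M, N):
    """Plain 2x2 matrix product."""
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

def wall_matrix(s, layers, alpha_in, alpha_out):
    """Overall transfer matrix of Eq. (3): inside air film, then the wall
    layers (each given as (a, lam, l)), then the outside air film."""
    M = air_film(alpha_in)
    for (a, lam, l) in layers:
        M = mat_mul(M, layer_matrix(s, a, lam, l))
    return mat_mul(M, air_film(alpha_out))
```

A convenient sanity check is that every factor has determinant 1 (since $\cosh^2 - \sinh^2 = 1$), so the determinant of the overall matrix should also be 1.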
Setting $B(s) = 0$ gives a transcendental equation whose roots $a_i$ must be found in order to solve for the heat transfer behavior of the wall. Using the Heaviside expansion theorem and the superposition method over discrete time, the reaction coefficients $Y(j)$ of the wall under a unit triangular-wave disturbance are obtained, as shown in (4).
$$\begin{cases}
Y(0) = K + \displaystyle\sum_{i=1}^{\infty} \frac{B_i}{\Delta\tau}\left(1 - e^{-a_i \Delta\tau}\right), & j = 0 \\[8pt]
Y(j) = -\displaystyle\sum_{i=1}^{\infty} \frac{B_i}{\Delta\tau}\left(1 - e^{-a_i \Delta\tau}\right)^2 e^{-(j-1) a_i \Delta\tau}, & j \ge 1
\end{cases} \qquad (4)$$
where $Y(j)$ is the $j$-th reaction coefficient in W/(m²·°C), $K$ the heat transfer coefficient of the wall in W/(m²·°C), $i$ the index of the equation roots, $a_i$ the $i$-th root of $B(s) = 0$ of the wall thermal-system transfer matrix, $\Delta\tau$ the discrete time interval in h, and $B_i$ a coefficient given by (5):
$$B_i = -1\big/\left[a_i^2\, B'(-a_i)\right] \qquad (5)$$
Then, the heat through the wooden partition can be determined by (6).
$$HG(n) = \sum_{j=0}^{\infty} Y(j)\, t_z(n-j) - K t_n \qquad (6)$$
where $HG(n)$ is the heat gain through the wall at time step $n$ in W/m², $t_z(n)$ is the integrated temperature outside the vehicle at time step $n$ in °C, and $t_n$ is the air temperature inside the vehicle in °C.
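Equations (4) and (6) can be sketched numerically as follows. This is an illustrative Python fragment: the roots $a_i$ and coefficients $B_i$ in any example are made up, not values from the paper. A useful check is that the series $\sum_j Y(j)$ converges exactly to $K$, which is the property the truncation error analysis below relies on.

```python
import math

def reaction_coefficients(K, roots, B, dtau, n_terms):
    """Reaction coefficients Y(j) of the wall under a unit triangular
    temperature wave, per Eq. (4); `roots` are the a_i from B(s) = 0 and
    `B` the corresponding coefficients B_i of Eq. (5)."""
    Y = []
    for j in range(n_terms):
        if j == 0:
            y = K + sum(Bi / dtau * (1.0 - math.exp(-ai * dtau))
                        for ai, Bi in zip(roots, B))
        else:
            y = -sum(Bi / dtau * (1.0 - math.exp(-ai * dtau)) ** 2
                     * math.exp(-(j - 1) * ai * dtau)
                     for ai, Bi in zip(roots, B))
        Y.append(y)
    return Y

def heat_gain(Y, t_outside, t_inside, K, n):
    """Heat gain HG(n) at time step n, per Eq. (6): convolution of the
    reaction coefficients with the outside integrated temperature."""
    return sum(Y[j] * t_outside[n - j] for j in range(min(len(Y), n + 1))) \
        - K * t_inside
```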
In short, the essence of the above analysis method is that modern control theory is introduced and the Laplace transform is adopted: the wall is regarded as a thermodynamic system, the temperature boundary conditions varying arbitrarily with time are decomposed into a series of triangular pulses, and the heat flux change caused by a unit triangular temperature wave is solved in advance and taken as the reaction coefficient of the thermal system. The pulses act linearly and independently, and the thermodynamic system itself is a linear system; applying the superposition principle, the heat flux changes caused by the individual pulses are superimposed according to the convolution principle to obtain the thermal condition of the overall system. Because the final result is a series, this method is called the reaction coefficient method. In the building heat transfer field it has been applied to a certain extent and is recognized as the optimal calculation method and the development direction for unsteady heat transfer. However, owing to the great differences between buildings and refrigerated vehicles in operating conditions, structure, and materials, the knowledge of the reaction coefficient method acquired in the construction field cannot be applied directly to refrigerated vehicles; it must be analyzed concretely according to the specific problem.
From (4) to (6), the heat transfer reaction coefficient $Y(j)$ of the wall is the sum of an infinite series of exponential terms related to the roots $a_i$ of the transfer-matrix element $B(s)$. The number of terms $Y(j)$ selected in the calculation directly affects both the accuracy and the computational complexity. At the same time, when the number of reaction coefficients becomes large enough, $Y(j)$ and the exponential terms related to the roots tend to zero, so the problem of accurately computing the unsteady heat transfer can be translated into two problems: how to select the range of root values of the element $B(s)$, and how many heat transfer reaction coefficients should be calculated.
In architecture, there has been more research on how to define the range of root values of the element $B(s)$, but opinions vary with building type and accuracy standard: ASHRAE states that accuracy is generally guaranteed in air-conditioned buildings when the roots are calculated down to $(-50)$ [1]; references [2, 3] suggest that $(-40)$ is a proper value; the Japanese HASP program takes $(-25)$ as appropriate, while a Canadian program sets the standard at $(-100)$ [4, 5]. On the number of heat transfer reaction coefficients there are fewer studies, since the related difference can be eliminated by adopting periodic reaction coefficients, which relies on the fact that the conditions of ordinary buildings do not change [6]. That approach is not suitable for calculating the heat transfer of the walls of multi-temperature refrigerated vehicles, which by nature are moving all the time [7]. Therefore, the number of reaction coefficients must be defined in the calculation of the unsteady heat transfer. In addition, how the vehicle's speed affects its heat transfer behavior while moving, and to what degree, should also be considered in the analysis.
To sum up, because of the large differences between the vehicle and an ordinary building, their heat transfer behaviors differ, and their reaction coefficients differ even more. Therefore, when the reaction coefficient method is adopted in research on unsteady heat transfer in multi-temperature refrigerated vehicles, a comprehensive analysis must be made of different speeds, different numbers of equation roots, different numbers of reaction coefficients, and other variable factors according to the vehicle's characteristics.
country were chosen to analyze: (1) 1.5 mm weathering steel, 200 mm insulation, and 1.5 mm alloy steel; (2) 4 mm rubber floor cloth, 19 mm plywood, 4 mm glass floor, 115 mm insulation, and 2 mm alloy steel; (3) 1.5 mm weathering steel, 135 mm insulation, and 1.5 mm alloy steel.
For a comprehensive analysis of how the root values of the element $B(s)$ influence the solution precision, the range of $(-a_i)$ was initially set from 0 to $(-10000)$. The study of the multi-temperature refrigerated truck showed that only the first five roots of the $B(s)$ element equation have a significant impact on the reaction coefficients. Taking the $Y(j)$ computed over the full root range down to $(-10000)$ as the standard values, substituting them into equation (4), and comparing the results with the $Y(j)$ obtained from the first five roots gives the analysis curves for material 1 shown in Fig. 3. As the figure shows, the fewer terms of $Y(j)$ are taken, the more sensitive the result is to the number of roots of the $B(s)$ element equation, and as the number of roots increases, the influence decreases. On the other hand, the better the thermal insulation of the envelope, the greater the influence of the number of equation roots on the $B(s)$ element. For material 1, if only one root is taken, 32 reaction coefficients have calculation errors exceeding $1.0 \times 10^{-15}$%; for material 2 there are 15, and for material 3 there are 11. This trend persists as the number of roots increases. The first five $B(s)$ elements and the corresponding roots for the envelope of the multi-temperature refrigerated truck are shown in Table 1. Setting the range of $(-a_i)$ for the roots of the $B(s)$ element to $(-10)$, $(-22)$, and $(-24)$ for the roof, wall, and floor of the truck, respectively, satisfies the accuracy requirement; for simplicity, the range of $(-a_i)$ is set to $(-25)$ in the calculation.
Fig. 3. Error with different numbers of roots
Fig. 4. The K with different materials
Using the basic properties of the reaction coefficients, let $K(j)$ be the sum of the first $j$ terms of the series, and take the heat transfer coefficient defined by the first 100 terms of the reaction coefficient series in (6) as the standard value. Then, when different numbers of reaction coefficient terms are chosen, the relative error $\delta_j$ is:
$$\delta_j = \frac{K - K(j)}{K} \times 100\% \qquad (7)$$
where $\delta_j$ is the relative error, in %, when different numbers of reaction coefficient terms are chosen in the energy consumption calculation, and $K(j)$ is the heat transfer coefficient value obtained from the first $j$ terms of the reaction coefficient series, in W/(m²·°C).
Based on the foregoing analysis, the roots of the element $B(s)$ are taken over the range of $(-a_i)$ down to $(-25)$. By (7), the relative error for each wall surface with different numbers of reaction coefficient terms can be calculated and plotted, as shown in Figure 4. The figure shows that when fewer than 10 terms are taken, each additional term reduces the error significantly, but the improvement slows progressively. For the roof, the error is below one percent when more than 16 terms are taken, and below one ten-thousandth when 29 terms are taken; for the wall and the floor of the vehicle, the error falls below one ten-thousandth at 15 and 13 terms, respectively. Therefore, taking the reaction coefficients up to the 30th term is sufficient to meet the requirements for a multi-temperature refrigerated truck.
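The convergence test just described can be sketched as follows; this is an illustrative fragment in which $Y$ is any precomputed reaction coefficient series, and the function name is ours.

```python
def truncation_error_percent(Y, j, standard_terms=100):
    """Relative error delta_j of Eq. (7): K(j) is the sum of the first j
    reaction coefficients, and the sum of the first `standard_terms`
    terms is taken as the standard heat transfer coefficient K."""
    K_std = sum(Y[:standard_terms])
    K_j = sum(Y[:j])
    return abs(K_std - K_j) / K_std * 100.0
```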
5 Summary
refrigerated vehicles. However, it has so far been applied only in the construction field, not to refrigerated vehicles. Accordingly, this paper numerically calculated the unsteady heat transfer of the vehicle envelope using the principles of the response coefficient method and computer-aided analysis, and studied the influence of various factors (different speeds, different numbers of equation roots, and different numbers of response coefficient terms) on the accuracy of the refrigerated vehicle load calculation. The conclusions are as follows.
(1) The range of $(-a_i)$ should be set to $(-25)$ in the selection of the roots of the element $B(s)$.
(2) For the number of response coefficient terms, it is reasonable to take values up to the 30th term.
(3) This paper improved the heat transfer response coefficient method, created conditions for the application of this method in multi-temperature refrigerated transportation, and provided support for the design optimization of multi-temperature refrigerated vehicles and the energy saving of the food cold chain.
Acknowledgements
The project was supported by the Natural Science Foundation of Guangdong Province
(No.9451009101003189) and Natural Science Foundation of China (No. 51008087).
References
Abstract. This article first introduces the principle and evaluation procedure of TOPSIS. It then uses the TOPSIS method to assess eight prefecture-level cities of a province of China on the basis of statistical data. The evaluation results demonstrate that the application of TOPSIS to industrial economic benefits is reasonable. TOPSIS has practical value in the economic field and can provide a scientific basis for scientific management.
1 Introduction
Industrial enterprises are the pillar industry of national economy, the evaluation of
economic benefit is not only an important part of industrial policy, but also an impor-
tant responsibility of managers. How to find a practical, quantitative and accurate
multi-objective comprehensive evaluation method in recent years is an research topics
of common concern. This paper introduces systems engineering TOPSIS method, the
economic efficiency of industrial enterprises a comprehensive evaluation, and
achieved satisfactory results.
TOPSIS is short for Technique for Order Preference by Similarity to Ideal Solution. The method selects the best alternative by its closeness to the ideal solution: the final choice should be the one closest to the positive ideal solution and farthest from the negative ideal solution. Using this method, one first defines the positive and negative ideal solutions for the assessment. Although neither may actually exist among the alternatives (otherwise no assessment would be needed), both can be idealized as the best and the worst points, against which the distances of all feasible alternatives are measured. The alternatives are then ranked, evaluated, and analyzed by an assessment factor. Overall, the algorithm of the method proceeds as follows:
* This work is supported by the National Social Science Foundation of China (No. 09&ZD014).
Evaluation of the Industrial Economic Benefits Based on TOPSIS
If $c_j$ is a cost-based indicator, the smaller the index value, the better the evaluation result; let
$$z_{ij} = \frac{y_{ij}}{\max\{y_{ij} \mid 1 \le i \le n\}} \qquad (2)$$
If $c_j$ is a moderate-type indicator, the index values closest to the most satisfactory value $a_j^{\#}$ are the most favorable; let
$$\max = \max\{|y_{ij} - a_j^{\#}| \mid 1 \le i \le n\}; \quad \min = \min\{|y_{ij} - a_j^{\#}| \mid 1 \le i \le n\};$$
$$z_{ij} = \frac{\max - |y_{ij} - a_j^{\#}|}{\max - \min} \qquad (3)$$
2. Normalize the evaluation matrix to make the index values dimensionless; let
$$r_{ij} = \frac{z_{ij}}{\sqrt{\sum_{i=1}^{n} z_{ij}^2}} \qquad (4)$$
3. Construct the weighted matrix. The normalized index weight vector can be obtained by expert survey or AHP; the weighted matrix is
$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & x_{2m} \\ \cdots & \cdots & \ddots & \cdots \\ x_{n1} & x_{n2} & \cdots & x_{nm} \end{bmatrix}; \quad x_{ij} = r_{ij} \cdot w_j;$$
32 B. Wei, F. Dai, and J. Liu
4. Determine the positive and negative ideal solutions. Let $x_j^+ = \max_i x_{ij}$; then $x^+ = \{x_1^+, x_2^+, \ldots, x_m^+\}$ is the positive ideal solution. Let $x_j^- = \min_i x_{ij}$; then $x^- = \{x_1^-, x_2^-, \ldots, x_m^-\}$ is the negative ideal solution.
$$D_i = \sqrt{\sum_{j=1}^{m} \left(x_{ij} - x_j^-\right)^2} \qquad (6)$$
$L_i$ (defined analogously with $x_j^+$) and $D_i$ are the distances of alternative $A_i$ from the positive and negative ideal solutions, respectively.
6. Determine the assessment factor, by which the alternatives are compared and analyzed; define
$$C_i = \frac{D_i}{D_i + L_i} \qquad (7)$$
Clearly, the closer the assessment factor $C_i$ is to 1, the closer the alternative $A_i$ is to the positive ideal solution; the closer $C_i$ is to 0, the closer $A_i$ is to the negative ideal solution.
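The steps above can be collected into a short Python sketch. Two points are our assumptions rather than the paper's: the normalization of step 2 is applied column-wise over the alternatives (the common TOPSIS convention, consistent with Eq. (4)), and the weighting of step 3 is applied to the normalized values $r_{ij}$. The function name is ours.

```python
import math

def topsis(matrix, weights):
    """TOPSIS ranking sketch: vector-normalize each column, weight it,
    find the positive/negative ideal solutions, and return the closeness
    coefficient C_i = D_i / (D_i + L_i) for each alternative (row)."""
    n, m = len(matrix), len(matrix[0])
    # Step 2: vector normalization over the alternatives (Eq. (4)).
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    r = [[row[j] / norms[j] for j in range(m)] for row in matrix]
    # Step 3: weighting the normalized values.
    x = [[r[i][j] * weights[j] for j in range(m)] for i in range(n)]
    # Step 4: positive and negative ideal solutions.
    pos = [max(x[i][j] for i in range(n)) for j in range(m)]
    neg = [min(x[i][j] for i in range(n)) for j in range(m)]
    # Step 5: Euclidean distances to both ideal points.
    L = [math.sqrt(sum((x[i][j] - pos[j]) ** 2 for j in range(m))) for i in range(n)]
    D = [math.sqrt(sum((x[i][j] - neg[j]) ** 2 for j in range(m))) for i in range(n)]
    # Step 6: closeness coefficient (Eq. (7)).
    return [D[i] / (D[i] + L[i]) for i in range(n)]
```

An alternative that dominates every indicator scores exactly 1, and one dominated on every indicator scores 0.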
Indicators                      A       B       C       D       E       F       G       H
Labor productivity              14250   27362   12521   8560    26895   16850   17520   7950
Net output rate                 49.97   81.52   64.35   52.64   85.89   82.61   79.20   74.76
Output rate of fixed assets     52.42   75.64   58.64   68.50   78.10   65.72   49.67   64.51
Interest rate of fixed assets   11.24   28.56   31.00   9.68    20.78   19.85   24.69   10.83
Capital tax rate                15.64   19.88   20.69   19.67   20.13   11.35   9.99    9.97
Sales tax rate                  12.35   20.10   17.35   16.85   19.99   12.34   15.66   11.31
The evaluation index system established here already satisfies the same-trend requirement, so the data are normalized directly:
$$Z = \begin{bmatrix}
0.2817 & 0.2436 & 0.2858 & 0.1886 & 0.3437 & 0.2716 \\
0.5409 & 0.3975 & 0.4124 & 0.4794 & 0.4369 & 0.4421 \\
0.2475 & 0.3137 & 0.3197 & 0.5203 & 0.4547 & 0.3816 \\
0.1692 & 0.2566 & 0.3734 & 0.1625 & 0.3730 & 0.3706 \\
0.5316 & 0.4188 & 0.4258 & 0.3488 & 0.4424 & 0.4397 \\
0.3331 & 0.4013 & 0.3583 & 0.3332 & 0.2494 & 0.2714 \\
0.3463 & 0.3862 & 0.2708 & 0.4144 & 0.2196 & 0.3445 \\
0.1571 & 0.3645 & 0.3517 & 0.1818 & 0.1971 & 0.2488
\end{bmatrix}$$
Obtain the optimal value vector and the worst value vector:
$$Z^+ = (0.5409,\ 0.4188,\ 0.4258,\ 0.5203,\ 0.4547,\ 0.4421);$$
$$Z^- = (0.1571,\ 0.2436,\ 0.2708,\ 0.1625,\ 0.1971,\ 0.2488);$$
Calculate the distance of each city from the positive and negative ideal solutions; taking city A as an example:
$$D_a^+ = \sqrt{(0.2817 - 0.5409)^2 + (0.2436 - 0.4188)^2 + \cdots + (0.2716 - 0.4421)^2} = 0.5186;$$
From Table 2 we can see that cities B, E, and C are the best and cities G, D, F, A, and H are in the middle tier of industrial economic benefits; there is no inferior city. This indicates that the industrial economic development of the eight cities is balanced.
4 Summary
TOPSIS is a systems engineering approach for the comprehensive evaluation. In re-
cent years, it has begun to be used for economic and health fields. The law of the
original data with the trends and normalized, not only to eliminate the influence of
different indicators of dimension, but also make full use of the original data can be
quantitatively evaluate the pros and cons of different units level, the results of objec-
tive and accurate. In this paper, the development of the current status quo of industrial
enterprises, the establishment of the industrial economic benefit evaluation index
system, the method is introduced into the field of industrial economy, the economic
benefits of comprehensive evaluation of multiple units, results in good agreement
with the actual situation.
Optimization and Distributing Research of Measuring
Points for Thermal Error of CNC Machine Based on
Weighted Grey Relative Analysis
Qin Wu 1,2, Jianjun Yang 1,2, Zhiyuan Rui 1, and Fuqiang Wang 1
1 School of Mechanical and Electronical Engineering, Lanzhou University of Technology, Lanzhou, 730050, China
2 Key Laboratory of Digital Manufacturing Technology and Application, The Ministry of Education, Lanzhou University of Technology, Lanzhou, 730050, China
1261099906@qq.com, yangjj@lut.cn, zhiy_rui@163.com, wangfq@lut.cn
1 Introduction
As the components of CNC machine tools being uneven heated in operation, so the
distribution of temperature field of machine tools being complex, the resulting thermal
deformation will cause the change of relative position between cutting tools and
workpiece , it can be said that the thermal error is the largest error source of CNC
machine tool. According to the statistics, the thermal error in the proportion of the total
error of machine tool can reach 70% [1]. So how to improve the prediction accuracy
and robustness of the error model, become the issue that so many scholars from dif-
ferent countries developing the large number of fruitful research. Chen J. S. of Uni-
versity of Michigan integrated the geometric error and thermal error, defined 32 new
machine errors, and conducted the error model of machine tool by using multiple re-
gression analysis[2]. Yang Jian-Guo of Shanghai Jiao Tong University proposed the
means of thermal error modeling robustly based on the thermal modal analysis, and
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 35–40, 2011.
© Springer-Verlag Berlin Heidelberg 2011
36 Q. Wu et al.
established the thermal error model applying multiple linear regression method and the
gray correlation method [3]; Du Zheng-Chun, etc. advanced the thermal error com-
pensating and modeling method based on Radial Basis Function (RBF) neural network
for CNC lathe. Although these analysis methods improved the identification speed and
accuracy of thermal error, but the prediction accuracy and generalization ability of the
model is not high, and it is difficult to achieve the requirements of precision machining
or ultra-precision machining.
Since the temperature field is non-stationary and time-varying and its influencing factors are complex, a large number of temperature sensors must be arranged on the machine tool in order to avoid the loss of useful information caused by too few measuring points. However, so many measuring points not only increase the workload of measuring and processing data; coupling may also occur between nearby measuring points. It is therefore necessary to select critical temperature measuring points for thermal error modeling.
Since the thermal error is obviously grey in character, and the measurements of a limited number of temperature points constitute a small sample with poor information relative to the overall temperature distribution of the machine tool, this article establishes a weighted grey correlation analysis model based on the measured temperature data and selects the critical temperature measuring points for building the thermal error prediction model.
Grey relational analysis does not require a large sample or a typical distribution of the data series; in processing the experimental data of each temperature measuring point, the method can work with very little experimental data.
The key of grey relational analysis is the construction of the grey relational grade, a quantity that reflects the degree of correlation among the factors [4]. However, the formulas commonly used to calculate the relational coefficient and the relational grade focus only on the area between the two curves to identify their similarity, ignoring the trend of the curves and the weighting differences among the factors. In this paper, weighting coefficients are added on the basis of an optimized sample data sequence; this makes the results better reflect the degree of proximity between the two curves.
Let $X = \{x_\theta \mid \theta = 0, 1, 2, \ldots, m\}$ be the set of related factor data sequences, with $x_0$ the reference sequence and $x_i$ ($i = 1, 2, 3, \ldots, m$) the comparison sequences; $x_\theta(k)$ is the value of $x_\theta$ at point $k$, where $k = 1, 2, 3, \ldots, n$.
In order to obtain incremental information and increase the information density in the modeling, the data sequences are processed by the Inverse Accumulated Generating Operation (IAGO). Let $y = (y(1), y(2), y(3), \ldots, y(n))$ be the inverse accumulated generated sequence of $x = (x(1), x(2), x(3), \ldots, x(n))$, where $y(1) = x(1)$ and $y(k) = x(k) - x(k-1)$, $k = 2, 3, \ldots, n$.
For $x_0$ and $x_i$, let:
$$\zeta_i(k) = \frac{\xi \cdot \max\limits_{i \in m}\max\limits_{k \in n} |x_0(k) - x_i(k)|}{\lambda_1 |x_0(k) - x_i(k)| + \lambda_2 |y_0(k) - y_i(k)| + \xi \cdot \max\limits_{i \in m}\max\limits_{k \in n} |x_0(k) - x_i(k)|} \qquad (1)$$
In formula (1), $\zeta_i(k)$ is the grey relational coefficient of $x_i$ with respect to $x_0$ at time $k$; $\xi$ is the distinguishing coefficient, $0 < \xi < 1$; $\lambda_1$ is the weighting coefficient of displacement and $\lambda_2$ is the weighting coefficient of the rate of change, with $\lambda_1, \lambda_2 \ge 0$ and $\lambda_1 + \lambda_2 = 1$. In practice, the coefficients can be adjusted appropriately according to the emphasis of the specific problem.
Formula (1) meets the definition of a grey relational space. The grey relational grade is then calculated as [5]:
$$\gamma_i = \gamma(x_0, x_i) = \frac{1}{n} \sum_{k=1}^{n} \zeta_i(k) \qquad (2)$$
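Formulas (1) and (2), together with the IAGO preprocessing, can be sketched in Python as follows. This is illustrative: the parameter defaults $\xi = \lambda_1 = \lambda_2 = 0.5$ are our choice (they satisfy $\lambda_1 + \lambda_2 = 1$), not values from the paper, and the function names are ours.

```python
def iago(x):
    """Inverse accumulated generating operation:
    y(1) = x(1), y(k) = x(k) - x(k-1)."""
    return [x[0]] + [x[k] - x[k - 1] for k in range(1, len(x))]

def weighted_grey_grade(x0, xi_list, xi_coef=0.5, lam1=0.5, lam2=0.5):
    """Weighted grey relational grades of Eqs. (1)-(2): the displacement
    term |x0(k)-xi(k)| is blended with the trend term |y0(k)-yi(k)| of
    the IAGO sequences, weighted by lam1 and lam2 (lam1 + lam2 = 1)."""
    y0 = iago(x0)
    # global maximum displacement difference over all sequences and times
    dmax = max(abs(a - b) for xi in xi_list for a, b in zip(x0, xi))
    grades = []
    for xi in xi_list:
        yi = iago(xi)
        coefs = [xi_coef * dmax /
                 (lam1 * abs(a - b) + lam2 * abs(c - d) + xi_coef * dmax)
                 for a, b, c, d in zip(x0, xi, y0, yi)]
        grades.append(sum(coefs) / len(coefs))  # Eq. (2): mean coefficient
    return grades
```

A comparison sequence identical to the reference sequence gets a grade of exactly 1, as expected.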
In this experiment, a vertical machining center was the subject investigated; the temperature at each measuring point and the spindle thermal error data were collected. Since the temperature field of the machine is subject to many factors, 16 temperature sensors were arranged on the machining center to fully detect the changes in the temperature field. Their distribution was as follows: sensors No. 1 and 2 were placed on the spindle box near the spindle bearings; No. 3 and 4 at the spindle nose; No. 5, 6, and 7 on the rolling guides of the X, Y, and Z axes; No. 8, 9, and 10 on the ball screws of the X, Y, and Z axes close to the screw nuts; No. 11 on the front column; No. 12 monitored the temperature of the surroundings; No. 13 and 14 were placed on both sides of the bed; No. 15 on the table; and No. 16 at the cooling fluid tank under the bed.
Three non-contact eddy current displacement sensors were used to measure the thermal error of the vertical machining center. They were installed in a specially designed experimental fixture arranged around the spindle and were used to measure the thermal drift error of the spindle in the X, Y, and Z directions [6].
The experiment adopted a dry run to simulate the cutting cycle. The spindle speed was 3000 r/min, the feed speed was 25 m/min, and the cooling fluid was kept circulating. The sampling time was 5 hours of continuous operation with a sampling interval of 15 min. The temperature of each measuring point and the thermal error in the z direction as functions of time are shown in Fig. 1 and Fig. 2, respectively.
Fig. 1. The temperature of each measuring point changing with time
Fig. 2. The thermal error in the z-axis direction changing with time
Analysis of the Experimental Data. The data sequences of the temperature measuring points are taken as the comparison sequences (sub-factors) of the grey system, and the data sequence of the spindle thermal error in the z direction as the reference sequence (mother factor); each data series takes 11 time nodes. The data were first made dimensionless and then substituted into equation (1) to calculate the relational coefficients between the comparison series $x_i(k)$ ($i = 1, 2, 3, \ldots, 16$; $k = 1, 2, 3, \ldots, 11$) and the reference series $x_0(k)$. The grey relational coefficient matrix $B$ is formed as follows:
$$B = \begin{bmatrix}
0.8721 & 0.9041 & 0.9773 & 0.8651 & 0.8016 & 0.7686 & 0.7760 & 0.7515 & 0.6715 & 0.6036 & 0.5553 \\
0.8721 & 0.9043 & 0.9769 & 0.8654 & 0.8011 & 0.7680 & 0.7759 & 0.7518 & 0.6713 & 0.6037 & 0.5556 \\
0.8721 & 0.9043 & 0.9769 & 0.8654 & 0.8012 & 0.7678 & 0.7756 & 0.7508 & 0.6706 & 0.6030 & 0.5549 \\
0.8721 & 0.9042 & 0.9769 & 0.8659 & 0.8022 & 0.7691 & 0.7766 & 0.7525 & 0.6717 & 0.6039 & 0.5557 \\
0.8721 & 0.9039 & 0.9774 & 0.8666 & 0.8028 & 0.7705 & 0.7784 & 0.7540 & 0.6732 & 0.6051 & 0.5567 \\
0.8721 & 0.9039 & 0.9772 & 0.8669 & 0.8030 & 0.7707 & 0.7787 & 0.7540 & 0.6731 & 0.6052 & 0.5569 \\
0.8721 & 0.9039 & 0.9773 & 0.8668 & 0.8031 & 0.7706 & 0.7780 & 0.7538 & 0.6734 & 0.6053 & 0.5572 \\
0.8721 & 0.9036 & 0.9779 & 0.8671 & 0.8037 & 0.7720 & 0.7799 & 0.7555 & 0.6743 & 0.6060 & 0.5574 \\
0.8721 & 0.9036 & 0.9780 & 0.8671 & 0.8036 & 0.7701 & 0.7773 & 0.7530 & 0.6725 & 0.6046 & 0.5562 \\
0.8721 & 0.9042 & 0.9766 & 0.8662 & 0.8026 & 0.7697 & 0.7773 & 0.7530 & 0.6723 & 0.6044 & 0.5561 \\
0.8721 & 0.9034 & 0.9079 & 0.8674 & 0.8033 & 0.7710 & 0.7795 & 0.7557 & 0.6753 & 0.6071 & 0.5585 \\
0.8721 & 0.9034 & 0.9066 & 0.8673 & 0.8034 & 0.7707 & 0.7791 & 0.7556 & 0.6752 & 0.6071 & 0.5583 \\
0.8721 & 0.9037 & 0.9052 & 0.8669 & 0.8031 & 0.7709 & 0.7798 & 0.7557 & 0.6756 & 0.6075 & 0.5590 \\
0.8721 & 0.9035 & 0.9040 & 0.8677 & 0.8038 & 0.7715 & 0.7804 & 0.7562 & 0.6761 & 0.6079 & 0.5591 \\
0.8721 & 0.9027 & 0.9011 & 0.8697 & 0.8065 & 0.7752 & 0.7848 & 0.7605 & 0.6793 & 0.6106 & 0.5615
\end{bmatrix}$$
Substituting the relational coefficients into equation (2) gives the normalized relational grades between each temperature measuring point and the thermal error:
$\gamma_1' = 0.6875$, $\gamma_2' = 0.6719$, $\gamma_3' = 0.6250$, $\gamma_4' = 0.75$, $\gamma_5' = 0.8750$, $\gamma_6' = 0.8438$, $\gamma_7' = 0.8906$, $\gamma_8' = 0.9688$, $\gamma_9' = 1$, $\gamma_{10}' = 0.8438$, $\gamma_{11}' = 0.7969$, $\gamma_{12}' = 0.0313$, $\gamma_{13}'$, $\gamma_{14}'$, $\gamma_{15}'$, $\gamma_{16}'$.
Application and Validation of the Model. Generally, measuring points arranged on the same part should be grouped into one class, and the number of selected critical temperature measuring points should match the number of key components of the machine tool as closely as possible. In this case, the measuring points can be divided into six categories: (1, 2, 3, 4); (5, 6, 7); (8, 9, 10); (11); (12, 13, 14, 15); (16). In each category, the critical temperature measuring point is the one with the maximum relational grade. Finally, the five sensors No. 4, 7, 9, 11, and 16 were chosen as the key temperature measuring points for modeling the thermal error; they correspond to five locations: the spindle end, the z-axis rolling guide, the y-axis screw nut, the column, and the cooling tank.
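The selection rule just described (within each group, take the point with the maximum relational grade) can be sketched as follows. Note that the grades for points 13-16 in any example would be hypothetical placeholders, since their values are not given above.

```python
def select_critical_points(grades, groups):
    """Pick, in each group of measuring points, the point with the
    maximum grey relational grade (point numbers are 1-based, matching
    the sensor numbering in the text)."""
    return [max(group, key=lambda p: grades[p - 1]) for group in groups]
```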
Finally, the modified GM modeling approach was used to build thermal error models based on all 16 temperature measuring points and on the 5 selected critical temperature measuring points; the results are shown in Fig. 3 and Fig. 4. Obviously, when modeling with all 16 measuring points, coupling among the data of the measuring points affected the accuracy of the model; when modeling with the five key measuring points identified by the weighted grey relational analysis, the accuracy of the model was greatly improved, the fitting precision of the curve was relatively high, and the residual error was small.
Fig. 3. The thermal error model based on 16 temperature variables
Fig. 4. The thermal error model based on 5 temperature variables
3 Conclusion
In this study, the data collected by the data acquisition system were grey with respect to the thermal distribution of the overall machine tool, and the data series were small samples without an established typical distribution; it was therefore very suitable to analyze and model the thermal error with grey system theory [8]. Using the weighted grey relational analysis model described above, the arrangement of temperature measuring points describing the temperature field that affects the thermal error of the machine tool was optimized. As validated by the modeling, the prediction precision of the resulting thermal error model was quite high.
Acknowledgements
References
1. Jun, N.: A Perspective Review of CNC Machine Accuracy Enhancement through Real-time
Error Compensation. China Mechanical Engineering 8, 29–33 (1997)
2. Hong, Y., Jun, N.: Dynamic neural network modeling for nonlinear, non-stationary machine
tool thermally induced error. International Journal of Machine Tool Manufacture 45,
455–465 (2005)
3. Jianguo, Y., Weiguo, D.: Grouping Optimization Modeling by Selection of Temperature
Variables for the Thermal Error Compensation on Machine Tools. China Mechanical En-
gineering 15, 10–13 (2004)
4. Youxin, L., Longting, Z., Min, L.: The Grey System Theory and The Application in The
Mechanical Engineering. Press of National University of Defense Technology (2001)
5. Yongxiang, L., Hengchao, T., Jianguo, Y.: Application of Grey System Theory in Optimizing the Measuring Points of Thermal Error on Machine Tools. Machine Design & Research 2, 78–81 (2006)
6. Jiayu, Y., Hongtao, Z., Guoliang, L.: Optimization of Measuring Points for Machine Tool
Thermal Error Modeling Based on Grouping of Synthetic Grey Correlation Method. Journal
of Hunan University (Natural Sciences) 35, 37–41 (2008)
7. Jiayu, Y., Hongtao, Z., Guoliang, L.: Application of a New Optimizing Method for the
Measuring Points of CNC Machine Thermal Error Based on Grey Synthetic Degree of
Association. Journal of Sichuan University (Engineering Science Edition) 40, 160–164
(2008)
8. Bryan, J.B.: International Status of Thermal Error Research. Annals of the CIRP 39, 645–656 (1990)
A DS-UWB Cognitive Radio System Based on Bridge
Function Smart Codes
1 Beihang University, XueYuan Road No.37, HaiDian District, BeiJing, China
xuyafei0208@126.com, shenghong@buaa.edu.cn
2 Beijing University of Chemical Technology, 15 BeiSanhuan East Road, ChaoYang District, Beijing, China
Keywords: UWB; Bridge function; Smart code; BER; Cognitive radio; MPI.
1 Introduction
42 Y. Xu et al.
This paper is organized as follows. Section II describes the bridge function smart code sequence matrix and its autocorrelation properties. Section III elaborates the SIMULINK/MATLAB system model. Section IV gives the simulation results with a qualitative description. Section V is the conclusion.
The autocorrelation function satisfies
$$R_{aa}(\tau) = \begin{cases} 1, & \tau = 0 \\ 0, & \tau \neq 0 \end{cases} \quad \text{in a small interval } \tau_0 \qquad (1)$$
and the cross-correlation function satisfies
$$R_{ab}(\tau) = \delta \qquad (2)$$
where $\delta$ is far less than 1. Paper [4] discusses the first sort of bridge function correlation (copying first and then shifting), obtains a class of smart code sequences, and derives the following two theorems, quoted here:
Theorem 1
(1) Each sequence in the group $Bri_{q,k}(m)$ has the same zero-correlation-zone
length: when $0 < \tau < 2k$, $R_{Bri(m)}(\tau) = 0$.
(2) Any two different sequences in the group have the same zero-correlation-zone
length: when $0 \le \tau \le 2k$, $R_{Bri(m),Bri(n)}(\tau) = 0$.
By the definition of smart code, the bridge function sequences in the group are
therefore a special kind of smart code sequence.
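The zero-correlation-zone property of Theorem 1 can be checked numerically. The bridge-function construction itself is not reproduced in this excerpt, so the sketch below uses Sylvester-construction Hadamard (Walsh) rows as a hypothetical stand-in sequence family; the function names are illustrative, not taken from the paper.

```python
def aperiodic_corr(a, b, tau):
    """Aperiodic correlation R_ab(tau) of equal-length +/-1 sequences, normalized."""
    n = len(a)
    if tau >= n:
        return 0.0
    return sum(x * y for x, y in zip(a[:n - tau], b[tau:])) / n

def hadamard(order):
    """Sylvester-construction Hadamard matrix of size 2**order (stand-in family)."""
    H = [[1]]
    for _ in range(order):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = hadamard(3)                                     # eight length-8 rows
assert abs(aperiodic_corr(H[1], H[1], 0) - 1.0) < 1e-12   # unit peak at tau = 0
```

A smart-code family would in addition show vanishing correlation for all shifts inside the zero-correlation zone, which the same `aperiodic_corr` helper can verify shift by shift.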
Theorem 2
Fig. 1. The diagram of cognitive radio system model for one user
In the UWB pulse generator, a Gaussian pulse waveform is generated. The output
single-pulse signal is then passed to the modulator.
Signal model
The transmission signal can be expressed as follows
$$s(t) = \sum_{n=-\infty}^{\infty} d_{\lfloor n/G \rfloor}\, c_n\, p(t - nT_c) \qquad (3)$$
$p(t)$ is the normalized pulse with duration $T_c$; $c_n$ is the spreading code with
period $G$; the symbol period is $T_s = GT_c$, so $G$ is the spreading gain.
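A minimal sketch of evaluating the transmitted signal of Eq. (3) at one time instant follows; the second-derivative Gaussian monocycle and all numeric parameters below are assumptions for illustration, not values taken from the paper.

```python
import math

def gauss_monocycle(t, tau=0.2e-9):
    """Second-derivative Gaussian monocycle (assumed pulse shape), peak 1 at t = 0."""
    x = (t / tau) ** 2
    return (1 - 4 * math.pi * x) * math.exp(-2 * math.pi * x)

def ds_uwb_signal(data_bits, code, pulse, Tc, t):
    """Evaluate s(t) = sum_n d_[n/G] * c_n * p(t - n*Tc), Eq. (3), for finite data."""
    G = len(code)                      # spreading gain: chips per symbol
    total = 0.0
    for n in range(len(data_bits) * G):
        d = data_bits[n // G]          # symbol value repeated over G chips
        c = code[n % G]                # spreading code, period G
        total += d * c * pulse(t - n * Tc)
    return total

bits = [1, -1]                         # hypothetical antipodal data
code = [1, -1, 1, 1]                   # hypothetical length-4 spreading code
Tc = 1e-9                              # assumed chip duration
v = ds_uwb_signal(bits, code, gauss_monocycle, Tc, 0.0)
```

At `t = 0` only the first chip contributes appreciably, so `v` is close to `d_0 * c_0 = 1` for these assumed parameters.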
The modulated signal is sent into the standard UWB channel (SV/IEEE 802.15.3a).
The SV/IEEE 802.15.3a channel model [3] gives the impulse response

$$h(t) = \sum_{l=0}^{L-1} \sum_{k=0}^{K-1} \alpha_{k,l}\, \delta(t - T_l - \tau_{k,l}) \qquad (4)$$
where $n_{IPI,u}(i')$ is the inter-pulse interference and $n_{ISI,u}(i')$ is the
inter-symbol interference. The total multipath interference is thus the sum of the
contributions of the individual multipath components, and each contribution splits
into these two parts. By the definition of the IEEE 802.15.3a channel model, the
energy of the multipath components decays at an exponential rate.
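The double sum of Eq. (4) with the exponential cluster/ray decay described above can be sketched as a tap list. Regular arrival spacing and random tap polarity are simplifying assumptions made here for illustration; the full Saleh-Valenzuela model draws cluster and ray arrivals from Poisson processes.

```python
import math
import random

def sv_channel(L=3, K=4, Gamma=5e-9, gamma=1e-9, dT=2e-9, dtau=0.5e-9, seed=1):
    """Simplified S-V taps for h(t) = sum_l sum_k a_{k,l} delta(t - T_l - tau_{k,l}).
    Returns (delay, amplitude) pairs with exponential cluster/ray power decay."""
    rng = random.Random(seed)
    taps = []
    for l in range(L):
        Tl = l * dT                          # cluster arrival (regular spacing assumed)
        for k in range(K):
            tau = k * dtau                   # ray arrival within the cluster
            power = math.exp(-Tl / Gamma) * math.exp(-tau / gamma)
            amp = rng.choice([-1, 1]) * math.sqrt(power)   # random polarity
            taps.append((Tl + tau, amp))
    return taps

taps = sv_channel()                          # 3 clusters x 4 rays = 12 taps
```

The first tap carries unit power and later taps decay, matching the qualitative behavior the text attributes to the IEEE 802.15.3a model.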
From the literature [5], the variance of the multipath interference is

$$\sigma^2_{MPI} = \frac{1}{G^2} \sum_{l=0}^{N-1} \sum_{\substack{k=0 \\ M+k \neq m}}^{M-1} e^{-l\Delta\tau/\Gamma}\, e^{-k\Delta\tau/\gamma} \left( R_c^2(m-M-k) + R_c(m-M-k) \right) \qquad (7)$$
4 Simulation Results
The system simulation is modeled in the Simulink/Matlab software environment. The
UWB channel impulse response has four types (CM1, CM2, CM3, or CM4); in our
simulation we select CM2. The transmitted binary data are set to 10000 bits with a
symbol period of 4 ns. Because the simulation results depend on the specific
realization of the channel impulse response, the channel influence should be
corrected by averaging over a number of channel realizations. However, this
increases the time cost, so we make an appropriate balance between channel
averaging and simulation time.
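The averaging over channel realizations described above can be sketched as follows; `simulate_once` is a placeholder for one full Simulink-style BER run, and the toy BER model is purely hypothetical.

```python
def average_ber(simulate_once, n_realizations=10):
    """Average per-realization BER estimates over several channel draws,
    trading accuracy against simulation time as discussed above."""
    total = 0.0
    for seed in range(n_realizations):
        total += simulate_once(seed)     # one run per channel realization
    return total / n_realizations

# Toy stand-in for one simulation run (hypothetical BER values)
ber = average_ber(lambda seed: 0.01 * (1 + (seed % 3)), n_realizations=6)
```

Increasing `n_realizations` reduces the variance of the BER estimate at a proportional cost in simulation time, which is exactly the balance the text describes.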
The simulation results are shown in Figures 2 and 3. Figure 2 uses the bridge
function smart code sequence matrix of Theorem 1 with parameters from k=1 to k=4;
Figure 3 uses the special bridge function smart code sequence matrix of Theorem 2,
also with parameters from k=1 to k=4. It can be seen from the figures that the
system using the bridge function smart code sequence matrix has better bit error
rate performance than the one using the Walsh function sequence. As the value of k
in the bridge function sequence increases, the zero correlation zone of the smart
sequences grows, and the system performance improves. This is because the zero
correlation zones in the bridge function smart code autocorrelation function
effectively reduce the multipath interference. From paper [7] we know that the
multipath interference of the IEEE 802.15.3a channel is determined by the spreading
code period, the multipath gain, and the autocorrelation function of the spreading
sequence. Once the channel parameters and the spreading sequence are fixed, the
spreading code period and the multipath gain are fixed too. Therefore, the
multipath interference is mainly determined by the correlation function of the
spreading code sequences. As the zero correlation zone of the correlation function
increases, formula (9) shows that the system achieves better BER performance.
[Figure: Bit Error Rate vs. Eb/No (dB); curves: Walsh with zeros padding, Walsh(8), Bridge(3,1), Bridge(3,2)]
Fig. 2. Using the bridge function smart code sequence matrix of Theorem 1, with q = 4 and parameters from k=1 to k=4
[Figure: Bit Error Rate vs. Eb/No (dB); curves: Walsh(8) with one zero padding, Walsh(16), Bridge(4,1), Bridge(4,2), Bridge(4,3)]
Fig. 3. Using the special bridge function smart code sequence matrix of Theorem 2, with q = 4 and parameters from k=1 to k=4
5 Conclusion
This paper proposes a direct-sequence ultra wideband (DS-UWB) cognitive radio
system model which uses the bridge function smart code as the spread-spectrum
sequence, and compares its bit error rate (BER) with that of a system using the
Walsh sequence as the spreading code. The non-periodic autocorrelation function of
each sequence in the bridge function smart code sequence matrix has zero
correlation zones (ZCZs). By using the bridge function smart code sequence matrix
and choosing an appropriate value of k, we obtain better anti-multipath-fading and
BER performance; the simulation results verify this conclusion. This paper presents
a single-user system and verifies that the zero correlation zones of the bridge
function smart code autocorrelation function suppress multipath interference well.
In a Code Division Multiple Access (CDMA) system, sequences with small
cross-correlation are desired; ideally the cross-correlation is 0, i.e., two code
sequences are orthogonal, so that different users can be distinguished. Smart code
sequences therefore have broad application prospects in communication systems.
Acknowledgments
This work is supported by the Fundamental Research Funds for the Central
Universities under grant No. YWF-10-02-023, China.
References
1. Hong, S., Liu, K., Qi, Y.: A new direct-sequence UWB transceiver based on Bridge func-
tion sequence. In: 2010 Second International Conference on Computational Intelligence
and Natural Computing, September 13-14, pp. 209–212 (2010)
2. Di, J., Hong, S., Zhang, Q.: An UWB Cognitive Radio System Based on Bridge Function
Sequence Matrix and PSWF Pulse Waveform. In: 2010 Second International Conference
on Computational Intelligence and Natural Computing, September 13-14, pp. 162–165
(2010)
3. Fisher, R., Kohno, R., Ogawa, H., Zhang, H., Takizawa, K., McLaughlin, M., Welborn,
M.: DS-UWB physical layer submission to 802.15 task group 3a, pp. 15–40. IEEE P, Los
Alamitos (2005)
4. Shaterian, Z., Ardebilipour, M.: Direct Sequence and Time Hopping ultra wideband over
IEEE.802.15.3a channel model. In: 16th International Conference on Telecommunications
and Computer Networks, SoftCOM 2008, September 25-27, pp. 90–94 (2008)
5. Yang, Z., Qian, Y., Li, Y., Bi, X.: IEEE 802.15.3a channel DS-UWB multipath interfer-
ence analysis. Journal of Natural Science of Heilongjiang University 24(1) (February
2007)
6. Sablatash, M.: Adaptive and Cognitive UWB Radio and a Vision for Flexible Future Spec-
trum Management. In: 10th Canadian Workshop on Information Theory, CWIT 2007, June
6-8, pp. 41–44 (2007)
7. Zhang, F., Zhang, Q.: A new type of smart code sequence and the correlation function.
Journal of Telemetry, Tracking and Command (September 2005)
Problems and Countermeasures of
Zhejiang High-Tech Enterprises
Industry-University-Institute
Cooperation in China
1 Introduction
Industry-university-institute cooperation is valuable experience gained from
economic and technical development. At present, industry-university-institute
cooperation has become a strategic measure that speeds up technical development and
achievement transfer, increases overall national strength, and strengthens economic
competitiveness. In 2008, Chinese industry-university-institute cooperative
technology development contracts amounted to 900 billion yuan, accounting for 37% of
the total contract amount of technology market transactions. The whole nation has
established 8 industry-university-institute strategic alliances, 129 productivity
promotion centers, and 67 university science parks, with 1115 colleges and
universities participating in the research cooperation. In recent years,
industry-university-institute cooperation has enjoyed rapid development but has
also met some new problems; many researchers have analyzed and discussed these
issues from all sides. Cohen, Nelson and Walsh (2002) found that not only was
formal cooperation very important, but informal cooperation was also very
important, even more so [1]. Monjon and Waelbroeck (2003) considered that commissioned R&D in a variety of
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 47–52, 2011.
© Springer-Verlag Berlin Heidelberg 2011
48 Q. Zhou, C.-F. Mao, and L. Hou
2 Problem Analysis
Between 2006 and 2008, the average operating income of Zhejiang high-tech
enterprises was 513.5 million yuan. The average R&D investment was 13.27 million
yuan, accounting for 2.5% of the average operating income, and the average
cooperative R&D investment was 5.15 million yuan, accounting for 0.97% of the
average operating income. About 86 enterprises engaged in
industry-university-institute cooperation, with 351 projects in total, nearly 3
projects per enterprise. These data suggested that industry-university-institute
cooperation of Zhejiang high-tech enterprises had a booming trend and had become an
important way for Zhejiang high-tech enterprises to pursue technical innovation.
But through questionnaires and interviews, we also found many problems.
Firstly, industry-university-institute cooperation still remains at a low level.
The main motivations for industry-university-institute cooperation of Zhejiang
high-tech enterprises were to develop new products, improve product quality,
improve technology and equipment, and develop new markets, each accounting for more
than 30%. But the objectives of patent development and technical standards, the
main ways to enhance core competitive advantage, held only 20% and 6%, respectively.
Secondly, the mode of cooperation and the way of interest distribution were
relatively undiversified. The main mode of industry-university-institute
cooperation of Zhejiang high-tech enterprises was the cooperative development
pattern, accounting for 86.0%. Other patterns, such as commissioned development, building R&D institutions and
technology transfer, each accounted for less than 10%. At the same time, 62.0% of
Zhejiang high-tech enterprises completed cooperation by providing R&D funds to
universities and research institutes. Other arrangements, such as profit
distribution according to sales, retained profits, or investment proportion, held
only an 18% share.
Thirdly, the secondary R&D ability of Zhejiang high-tech enterprises is not strong.
The cooperative approach has always been an important innovation method selected by
Zhejiang high-tech enterprises, but it often resulted in short-term goals for
industry-university-institute cooperation, so that most cooperation remained at the
imitative phase. Meanwhile, because many decision makers could not understand
industry-university-institute cooperation well, many enterprises did not want to
invest much R&D funding in it. This directly resulted in high-tech enterprises
lacking "secondary innovation" capacity, and therefore in weak R&D achievements.
Finally, the risk of industry-university-institute cooperation was obvious and
serious. At present, Zhejiang high-tech enterprises were relatively satisfied with
their industry-university-institute cooperation: more than 40% of enterprises
considered their cooperation fine, and more than 10% intended to cooperate again
after the current cooperation. But more than 40% of enterprises considered that
there were many problems with industry-university-institute cooperation; among
them, 14% often encountered disputes over the distribution of interest, and 10%
often disputed with partners over intellectual property. At the same time,
enterprises faced market and technology risks during
industry-university-institute cooperation: 50% of enterprises considered the main
risk to be technical risk, while 60% considered it marketing risk. These risks
directly lead to uncertainty: 70% of industry-university-institute cooperation
projects could not meet the expected target, 13% could not distribute interest
reasonably, 9% faced conflicts from inconsistent relationships, and so on.
3 Countermeasures
Enterprise is the main body of innovation, and enterprise innovation needs the
intellectual support of universities and research institutes; it is even more
important for government to provide the necessary public innovation goods and
services. Through government support, enterprises, research institutes and
universities can become more innovative. This section analyzed how different
factors affected the absorption of technology and knowledge by Zhejiang high-tech
enterprises from universities and research institutions. It then analyzed the role
of government in industry-university-institute cooperation and gave some
suggestions for government to help Zhejiang high-tech enterprises with cooperative
R&D.
As we can see from Table 1, Zhejiang high-tech enterprises'
industry-university-institute cooperation was influenced by many factors. The main
influence factors included lack of information on technology, the high fee of
technology transfer, lack of intermediate test and engineering capability, and so
on. These results indicated that Zhejiang high-tech enterprises'
industry-university-institute cooperation required
Table 1. The Influence Factors of Enterprises Achieving Technology from Universities and Research Institutes (first five columns: response shares, %; last column: mean score)

Lack of information on technology                      5.95  15.86  40.17  26.49  11.53   3.22
The high fee of technology transfer                    5.41  13.33  34.41  33.15  13.70   3.36
Not mature of technology                               3.78   9.55  27.57  36.58  22.52   3.65
Unclear of technology ownership                       12.79  18.56  32.25  22.16  14.24   3.06
No advantage compared to similar foreign technology    5.59  13.87  29.55  31.35  19.64   3.46
Unclear of technology application prospect             6.30  13.15  28.65  33.16  18.74   3.45
Not strong of intermediary services                   19.46  25.59  37.66  13.33   3.96   3.58
Lack of intermediate test and engineering capability   7.75  16.76  30.63  30.63  14.23   3.29
efficient utilization of government policy, which can guide cooperation among
enterprises, universities and research institutes to improve R&D performance.
As can be seen from Table 2, Zhejiang high-tech enterprises had very high
expectations of government policy countermeasures to promote
industry-university-institute cooperation. Many enterprises hoped that government
would establish an information service platform to strengthen exchanges and
cooperation, establish a fixed department responsible for technology transfer,
encourage the establishment of R&D consortia, and so on. These expectations scored
more than 4 points, indicating that Zhejiang high-tech enterprises hoped government
could play an effective role in information communication and become an information
bridge for industry-university-institute cooperation.
Therefore, government should develop effective guide and support mechanism for
Zhejiang high-tech enterprises industry-university-institute cooperation.
Firstly, government should vigorously promote the information service system.
Information is an important resource for technical innovation, especially
technology and market information, so government needs to provide comprehensive
information, such as cooperative information on potential partners and patent
information, and build information databases. Government should ensure all
information comes from authoritative, authentic and efficient sources.
4 Conclusion
Zhejiang high-tech enterprises' industry-university-institute cooperation has
achieved some results, but some problems remain, such as the low cooperative level,
undiversified distribution modes, weak secondary R&D ability, and obvious risk.
This paper summarized and analyzed these issues through an empirical test and
offered some policy suggestions for government to improve Zhejiang high-tech
enterprises' industry-university-institute cooperation, including vigorously
promoting the information service system, urging enterprises to enhance their level
of cooperation, playing an important role in constructing and improving the
technical intermediary service organization system, and building a better external
environment for industry-university-institute cooperation.
Acknowledgement
The authors would like to thank NSFC, China, for funding projects 70903020 and
71010107007. We would also like to thank the Science and Technology Department of
Zhejiang Province for funding project GK090903005.
References
1. Cohen, Nelson, Walsh: Links and Impacts: The Influence of Public Research on Industrial
R&D. Management Science 48, 1–23 (2002)
2. Monjon, Waelbroeck: Assessing Spillovers from Universities to Firms: Evidence from
French Firm-level Data. International Journal of Industrial Organization 21, 1255–1270
(2003)
3. Zhong W.-j., Mei S.-e., Xie Y.-y.: Analysis of Technical Innovation Modes for the Indus-
try-university-institute cooperation. China Soft Science 8, 174–181 (2009)
4. Mao, J.-q., Liu, L.: Problems & Development Measures in Combination of Industry, Study &
Research in China. Theory and Practice of Education 29, 23–25 (2009)
5. Chiesa, Coughlan, Voss: Development of a Technical Innovation Audit. Journal of Pro-
duction Innovation Management 13, 105–136 (1996)
Research on Puncture Area Calibration of Image
Navigation for Radio Frequency Ablation Robot
1 Introduction
Radio frequency ablation is an interventional operation that releases radio
frequency current at some part of the intracardiac region through an electrode
catheter, causing coagulation necrosis of the local endocardium and subendocardial
myocardium so as to destroy the origin points of rapid arrhythmia. The eradication
rate of this operation for paroxysmal supraventricular tachycardia is above 90%, so
it has become an important and effective means of treating atrial fibrillation in
cardiology. The puncture and implantation of the ablation electrode catheter are
long operations under X-ray radiation, and the rays seriously harm the doctors, so
it is necessary to research how an image-navigated robot can perform these
operations instead.
The primary problem in reaching the required surgical accuracy is calibrating the
surgery system. During the calibration process, a mathematical model is often used
to describe the camera; it describes the process from scene projection to image.
The pin-hole model is a common idealized model: physically, the
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 53–58, 2011.
© Springer-Verlag Berlin Heidelberg 2011
54 P. Wang et al.
image point is the intersection of the image plane and the line connecting the
optical center and the object point; its greatest advantage is its concise,
practical and accurate linear imaging relationship. Simulating the camera imaging
process linearly with pin-hole imaging is an approximation of the actual camera,
while the obtained puncture image has serious geometric distortion; the geometric
distortions change the image point positions in the scene, so they must be
compensated. The lens distortion coefficient, which reflects the distortion effect,
is introduced into the ideal pin-hole perspective model in this paper.
where $\delta_r$ is the nonlinear distortion at the image point with polar
coordinates $(r, \varphi)$, $r$ is the radial distance between the image center and
the pixel point, and $k_1, k_2, k_3, k_4, \cdots$ are the distortion coefficients.
During radial distortion correction, we found that a complicated distortion model
not only fails to enhance measurement accuracy but makes the numerical calculation
unstable: from the third term of the polynomial onward, the accuracy contribution
is less than the disturbance introduced by the correction. Therefore only the first
two terms are adopted in this paper, and the final radial distortion correction
model is:

$$\begin{cases} x_r = x_d \left(1 + k_1 r^2 + k_2 r^4\right) \\ y_r = y_d \left(1 + k_1 r^2 + k_2 r^4\right) \end{cases} \qquad (2)$$
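The two-term radial correction of Eq. (2) translates directly into code; the coefficient values below are hypothetical, chosen only to illustrate a point with mild barrel distortion.

```python
def correct_radial(xd, yd, k1, k2):
    """Two-term radial correction, Eq. (2): x_r = x_d * (1 + k1*r^2 + k2*r^4)."""
    r2 = xd * xd + yd * yd               # squared radial distance from the center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xd * scale, yd * scale

# Hypothetical coefficients applied to a point at radius r = 1
xr, yr = correct_radial(0.6, 0.8, k1=-0.05, k2=0.001)
```

Both coordinates are scaled by the same radial factor, so the correction preserves the direction of each image point while adjusting its distance from the image center.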
In practice, varying degrees of eccentricity necessarily exist; that is, the
optical centers of the lens elements cannot be strictly collinear. This defect
causes so-called centrifugal distortion, which makes the image characteristic
parameters unstable. The simplified distortion components along the x and y axes
are:

$$\begin{bmatrix} \delta_x \\ \delta_y \end{bmatrix} = \begin{bmatrix} \sin\varphi & \cos\varphi \\ \cos\varphi & -\sin\varphi \end{bmatrix} \begin{bmatrix} \delta_r \\ \delta_t \end{bmatrix} \qquad (3)$$
where $\delta_r$ and $\delta_t$ are the radial and tangential distortion components
respectively, $m_1$ and $m_2$ are constant coefficients, $r$ is the radial distance
between the image center and the pixel point, $\varphi$ is the angle between the
positive y axis and the radial line on which the image point lies, and $\varphi_0$
is the angle between the positive y axis and the radial line at the maximum
tangential distortion. Only the first two order distortions are taken, so from
formula (3) we can get:
$$\begin{cases} x_t = m_1 \left(3x^2 + y^2\right) + 2m_2 xy \\ y_t = m_2 \left(x^2 + 3y^2\right) + 2m_1 xy \end{cases} \qquad (4)$$
Taking thin-lens distortion into account as well, all the nonlinear distortions
mentioned above exist in the puncture images taken by the optical lens; the
nonlinear distortion is the superposition of the three distortions, so the total
distortion is:
$$\begin{cases} \Delta x = x_d \left(1 + k_1 r^2 + k_2 r^4\right) + m_1 \left(3x^2 + y^2\right) + 2m_2 xy + n_1 r^2 \\ \Delta y = y_d \left(1 + k_1 r^2 + k_2 r^4\right) + m_2 \left(x^2 + 3y^2\right) + 2m_1 xy + n_2 r^2 \end{cases} \qquad (5)$$
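The combined distortion model of Eq. (5) can be sketched as a single function; following the formula as written, the same distorted coordinates feed the radial and tangential terms, and every coefficient value used below is a placeholder.

```python
def total_distortion(xd, yd, k1, k2, m1, m2, n1, n2):
    """Total distortion of Eq. (5): radial + centrifugal (tangential) + thin-lens terms."""
    r2 = xd * xd + yd * yd
    radial = 1 + k1 * r2 + k2 * r2 * r2
    dx = xd * radial + m1 * (3 * xd * xd + yd * yd) + 2 * m2 * xd * yd + n1 * r2
    dy = yd * radial + m2 * (xd * xd + 3 * yd * yd) + 2 * m1 * xd * yd + n2 * r2
    return dx, dy

# With all coefficients zero the model reduces to the undistorted point
dx, dy = total_distortion(0.3, 0.4, 0, 0, 0, 0, 0, 0)
```

Setting individual coefficients to zero isolates each distortion family, which is convenient when checking calibration results term by term.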
The work model of the radio frequency ablation robot is shown in Fig. 1, and its
model schematic diagram in Fig. 2. The operation accuracy is directly influenced by
the calibration and conversion of the system coordinates. Six coordinate systems
can be built in the system: world coordinates, camera coordinates, image
coordinates, pixel coordinates, robot coordinates and patient coordinates. The
conversion from the puncture point to the image coordinates needs 4 matrices.
According to the basic working principle of the system, the movement locus of the
surgical robot is determined by the target point correctly calibrated and mapped in
each coordinate system.
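The chain of four conversion matrices can be sketched as the composition of 4x4 homogeneous transforms; identity placeholders stand in for the actual calibration matrices, which depend on the calibrated system and are not given in this excerpt.

```python
def matmul4(A, B):
    """Compose two 4x4 homogeneous transforms (row-major nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical chain: puncture point -> image coordinates via 4 matrices
I4 = [[float(i == j) for j in range(4)] for i in range(4)]
chain = I4
for T in [I4, I4, I4, I4]:          # placeholders for the 4 calibration matrices
    chain = matmul4(chain, T)
```

Composing the matrices once and reusing the product keeps the per-point mapping to a single matrix-vector multiplication.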
The polynomial can be obtained with the least squares method, giving the polynomial
shown below:

$$\begin{cases} x = \cos\varphi \, X + k_1 r^2 X + \sin\varphi \, Y + 3\sin\varphi \, m_1 X^2 + \sin\varphi \, n_1 r^2 + \sin\varphi\cos\varphi \\ y = \sin\varphi \, Y + k_1 r^2 Y + \cos\varphi \, X + 3\cos\varphi \, m_2 Y^2 + \cos\varphi \, n_1 r^2 + \sin\varphi\cos\varphi \end{cases} \qquad (8)$$
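The least-squares step can be illustrated on a stripped-down, single-coefficient version of the model; the full fit of Eq. (8) solves a small linear system in the same way. The data points and the coefficient value below are synthetic.

```python
def fit_k1(samples):
    """Least-squares estimate of a single radial coefficient k1 in the
    reduced model x_obs = X * (1 + k1 * r^2), from (X, r, x_obs) samples."""
    num = den = 0.0
    for X, r, x_obs in samples:
        a = X * r * r                # regressor multiplying k1
        num += a * (x_obs - X)       # residual against the k1 = 0 model
        den += a * a
    return num / den

# Synthetic observations generated with k1 = -0.04; the fit recovers it
data = [(X, r, X * (1 - 0.04 * r * r)) for X, r in [(1.0, 0.5), (2.0, 1.0), (0.5, 1.5)]]
k1 = fit_k1(data)
```

With noise-free synthetic data the closed-form least-squares estimate reproduces the generating coefficient exactly, which is a useful sanity check before fitting real calibration images.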
From the two figures above, we see that the originally crooked side boundary is
eliminated after correcting the distorted image; taking area as the parameter, the
measured area changes during the correction, and the area measured after correction
is much closer to the real value. Imaging system correction should therefore be
done to ensure puncture accuracy.
5 Conclusions
In this paper an image navigation puncture region calibration method is proposed;
it is realized with a comprehensive distortion correction projection model, and we
test the method with experiments. The results show that the method is simple and
effective and, moreover, enhances the location accuracy of the puncture point. It
thus lays a foundation for an auxiliary radio frequency ablation surgical robot to
accurately control the patient pose, enhances the safety of the intervention, and
meanwhile solves the radiation problem of doctors operating directly under X-ray.
The robot coordinate system will be further researched to effectively realize fast
positioning of the puncture point.
References
1. Fei, B.: The safety issues of medical robotics. Reliability Engineering and System
Safety 73, 183–192 (2001)
2. Zhang, Z.: Camera calibration with dimensional objects. IEEE Transactions on Pattern
Analysis and Machine Intelligence 26(7), 892–899 (2004)
3. Machacek, M.: Two-step calibration of a stereo camera system for measurements in large
volumes. Measurement Science and Technology 14, 1631–1639 (2003)
4. Devernay, F.: Automatic calibration and removal of distortion from scenes of structured
environments. Machine Vision and Application 13, 14–24 (2001)
Low Power and Robust Domino Circuit with Process
Variations Tolerance for High Speed Digital Signal
Processing
Jinhui Wang, Xiaohong Peng, Xinxin Li, Ligang Hou, and Wuchen Wu
VLSI & System Lab, Beijing University of Technology, Beijing 100124, China
wangjinhui888@bjut.edu.cn
Abstract. Utilizing the sleep switch transistor technique and dual threshold
voltage technique, a source following evaluation gate (SEFG) based domino
circuit is presented in this paper for simultaneously suppressing the leakage
current and enhancing noise immunity. Simulation results show that the leakage
current of the proposed design can be reduced by 43%, 62%, and 67% while
improving 19.7%, 3.4 %, and 12.5% noise margin as compared to standard low
threshold voltage circuit, standard dual threshold voltage circuit, and SEFG
structure, respectively. The dependence of the static-state leakage current on the
combination of inputs and clock signal is also analyzed, and the minimum leakage
states of different domino AND gates are obtained. Finally, the leakage power
characteristic under process variations is discussed.
1 Introduction
Domino circuits are commonly employed in register, cache, and high performance
microprocessors [1]. As technology scales down, the supply voltage must be reduced
to keep the dynamic power within acceptable levels [2]. However, to meet the
performance requirements, the threshold voltage (Vt) and gate oxide thickness (tox) of
the transistors must be reduced to accompany the supply voltage scaling down, which
leads to exponential growth of the sub-threshold leakage (Isub) and the gate leakage
(Igate) [3]. Especially, for domino circuits, the excess leakage current also degrades the
noise immunity. This further highlights the importance of leakage current reduction.
There has been much related work on leakage reduction in domino circuits, such as
the dual Vt technique [4], the sleep transistor technique [5], P-type domino
circuits [6], the source following evaluation gate (SEFG) [7], and so on. However,
no single technique completely solves both the leakage and the robustness problems.
Therefore, in this paper, a novel domino design combining several techniques is
proposed to simultaneously suppress the leakage current and enhance noise immunity.
2 Proposed Circuits
As described in Section 1, the excess leakage current has become an important issue
to threaten the performance of domino circuit [2]. Dual Vt technique is an effective
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 59–65, 2011.
© Springer-Verlag Berlin Heidelberg 2011
60 J. Wang et al.
technique to reduce Isub by using low Vt and high Vt transistors on the timing critical
paths and non-critical paths, respectively. Utilizing this technique, gating all the initial
inputs of the domino gates into a low Isub state is required. But dual Vt domino
circuits do not take the effect of Igate into account and therefore cannot minimize
the total leakage current. To reduce Igate, a P-type domino circuit adopting
low-leakage PMOS transistors instead of high-leakage NMOS transistors in the input
network, and a sleep transistor technique based on adding a current switch in the
sleep state, have been proposed [5], [6]. Furthermore, to improve the robustness of
the circuits, Kim proposed the SEFG (source following evaluation gate) structure
[7]. In the SEFG, the output of the source follower is limited by the gate input
voltage and does not depend on the width of the discharging path even if a noise
input lasts for a long time, as shown in Fig. 1 (b) and (c). In this paper, the
SEFG structure, the dual Vt technique, and the sleep transistor are employed in a
P-type domino gate, as shown in Fig. 1 (d). When the gate works,
sleep transistor is turned on. In the pre-discharge phase, clock is set high. Dynamic
node is discharged to ground. Evaluation phase begins when the clock signal is set
low. Provided that the necessary input combination to charge the evaluation node is
applied, the circuit evaluates and the dynamic node is charged to Vdd, otherwise,
preserving until the following pre-discharge phase. When the gate sleeps, sleep
transistor is cut-off to lower the leakage current. Thus, compared to other existing
domino circuits, the proposed circuit could realize low power and high noise
immunity design with a little active power penalty.
Fig. 1. (a) Common Source (b) Source Following (c) SEFG structure (d) Proposed design
3 Simulation Results
The analysis in this paper is based on H-spice tool and 45 nm BSIM4 models [8]. The
device parameters are listed in Table 1. To evaluate the performance of the proposed
design, the following four two-input AND domino gates are simulated: the standard
low Vt domino gate (s_Low), the standard dual Vt domino gate (s_dual), the SEFG
gate, and the proposed design. Each domino gate drives a capacitive load of 8 fF. All
AND gates are turned to operate at 1 GHz clock frequency. To achieve a reasonable
comparison, all of the circuits are sized to have the similar delay. The delay is
calculated from 50% of signal swing applied at the inputs to 50% of signal swing
observed at the output.
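The 50%-to-50% delay measurement can be sketched as follows, assuming sampled waveforms exported from the simulator; the ramp data below are hypothetical (times in ns, voltages in V).

```python
def crossing_time(t, v, level):
    """First time v(t) crosses `level`, linearly interpolated between samples."""
    for i in range(1, len(t)):
        if (v[i - 1] - level) * (v[i] - level) <= 0 and v[i] != v[i - 1]:
            frac = (level - v[i - 1]) / (v[i] - v[i - 1])
            return t[i - 1] + frac * (t[i] - t[i - 1])
    raise ValueError("level never crossed")

def delay_50(t, vin, vout, vdd):
    """Propagation delay: 50% swing at the input to 50% swing at the output."""
    return crossing_time(t, vout, 0.5 * vdd) - crossing_time(t, vin, 0.5 * vdd)

# Hypothetical ramp waveforms: the output lags the input by 0.2 ns
t    = [0.0, 0.2, 0.4, 0.6, 0.8]
vin  = [0.0, 0.5, 1.0, 1.0, 1.0]
vout = [0.0, 0.0, 0.5, 1.0, 1.0]
d = delay_50(t, vin, vout, vdd=1.0)
```

Linear interpolation between samples avoids quantizing the delay to the simulator's time step, which matters when circuits are sized to match delays closely.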
The leakage current, active power, and noise immunity of these gates are simulated
and compared. To analyze the leakage current, two typical sleep temperatures are
considered: (1) 110 °C, assuming the sleep mode is short and the temperature stays
at 110 °C during the short sleep period; (2) 25 °C, assuming the sleep period is
long and the temperature has fallen to room temperature. When considering noise
immunity, the same noise signal is coupled to all of the AND gates, so this
situation represents the worst-case noise condition. In order to quantify the noise
margins, the noise signal is assumed to be a 2.5 GHz square wave with 87.5% duty
cycle. The maximum tolerable noise amplitude is defined as the signal amplitude at
the inputs that induces a 10%-Vdd rising/falling of the voltage at the output of
the AND gate.
Fig. 2. (a) Comparison of the active power and the noise immunity of the four gates. (b) Comparison of the leakage current of the four gates.
Table 3. Leakage current of the four gates for different input vector and clock states at 25 °C and 110 °C (A)

              Input vector and clock state (25 °C)       Input vector and clock state (110 °C)
              CLIH     CHIH     CLIL     CHIL            CLIH     CHIH     CLIL     CHIL
Proposed      6.02e-8  6.23e-8  1.76e-7  1.39e-7         2.94e-7  9.37e-7  3.17e-7  1.55e-6
SEFG          1.86e-7  2.14e-7  1.97e-7  2.49e-7         5.32e-5  1.05e-6  1.45e-6  1.61e-6
s_Low         1.84e-7  1.66e-7  1.59e-7  2.01e-7         2.07e-6  6.62e-7  1.21e-6  1.27e-6
s_Dual        1.81e-7  1.44e-7  1.07e-7  1.79e-7         1.57e-7  6.59e-7  1.67e-7  1.27e-6

CLIH (clock=low, inputs=high), CHIH (clock=high, inputs=high), CLIL (clock=low, inputs=low), CHIL (clock=high, inputs=low).
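Reading the minimum-leakage state off Table 3 can be automated; the dictionary below transcribes the 25 °C columns of the table.

```python
# Leakage currents (A) from Table 3 at 25 C, keyed by (clock, inputs) state
leak_25C = {
    "Proposed": {"CLIH": 6.02e-8, "CHIH": 6.23e-8, "CLIL": 1.76e-7, "CHIL": 1.39e-7},
    "SEFG":     {"CLIH": 1.86e-7, "CHIH": 2.14e-7, "CLIL": 1.97e-7, "CHIL": 2.49e-7},
    "s_Low":    {"CLIH": 1.84e-7, "CHIH": 1.66e-7, "CLIL": 1.59e-7, "CHIL": 2.01e-7},
    "s_Dual":   {"CLIH": 1.81e-7, "CHIH": 1.44e-7, "CLIL": 1.07e-7, "CHIL": 1.79e-7},
}

def min_leakage_state(table):
    """Pick the (clock, input) combination with the smallest leakage per gate."""
    return {gate: min(states, key=states.get) for gate, states in table.items()}

best = min_leakage_state(leak_25C)
```

The result matches the discussion below: every minimum-leakage state at 25 °C has the clock low, the standard gates prefer low inputs, while the proposed gate prefers high inputs.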
Fig. 2(a) shows the normalized active power and noise immunity of the four domino
gates. The noise margin of the proposed circuit is 19.7%, 3.4%, and 12.5% higher
than that of the low Vt domino circuit, the dual Vt domino circuit, and the SEFG
structure, respectively. This results from the additional P-keeper transistor in
the proposed circuit: though the P-keeper is only half the size of the transistor
in the pull-up network, it improves the noise immunity effectively. As also shown
in Fig. 2(a), the active power of the proposed design is increased by 45.8%
compared to the standard dual Vt domino circuit. As discussed in Section 2, the
proposed circuit has more transistors and thereby consumes more active power; in
addition, to achieve the same delay, the sizes of the sleep transistor and of the
PMOS transistor in the inverter must be increased, which adds further active power.
However, the proposed circuit shows better leakage characteristics and noise
immunity than the other circuits, including the SEFG structure.
Table 3 lists the leakage current of the four gates in different input vectors and clock
states at typical sleep temperatures. It can be observed that the leakage current
characteristic depends strongly upon both the input vector and the clock state.
Therefore, to obtain the minimum leakage state, which can be used to optimize the
leakage current, a detailed analysis is required.
On the one hand, when the sleep temperature is 25 °C, the minimum leakage states
of all AND gates share one common character: the clock signal is low. This can be
explained as follows. When the clock signal is low, the low-Vt clock PMOS transistor
is turned on and produces a total leakage current of 5.3, as shown in Table 2; the
low-Vt/high-Vt clock NMOS transistor is turned off and produces a total leakage
current of 126.2/60.4. When the clock signal is high, the low-Vt clock PMOS
transistor is turned off and produces a total leakage current of 56.3; the
low-Vt/high-Vt clock NMOS transistor is turned on and produces a total leakage
current of 159.1/124. Therefore, a low clock signal decreases the total leakage
current, as can be seen in Table 3.
The leakage current of the standard domino circuits at 25 °C is minimized when the
inputs are low, as shown in Table 3. When the inputs are high, the PMOS transistors
connected to the inputs (input-PMOS) are turned off and produce both Isub and Igate
(totally 56.3). But with low inputs, these PMOS transistors only produce Igate (only
5.3). Thus, low inputs minimize the total leakage current. However, in the SEFG
structure and the proposed design, there are several additional transistors, including
the P-keeper and the sleep transistor. Low inputs would turn on the input-PMOS,
which would result in the conduction of these additional transistors. A current loop
is then formed and the stack effect [9] vanishes, thereby increasing the leakage
current significantly. Thus, high input signals to the input-PMOS achieve the
minimum leakage current in both the SEFG structure and the proposed design.
Low Power and Robust Domino Circuit with Process Variations Tolerance 63
On the other hand, when the temperature is 110 °C, high inputs minimize the leakage
current. This is because Isub increases exponentially as temperature increases, due to
the reduction of Vt and the increase of the thermal voltage. But Igate, unlike Isub,
has a very weak dependence on temperature. Therefore, at 110 °C, Isub makes a larger
contribution to the total leakage current than Igate. Whether the clock input signal is
high or low, one of the transistors connected to the clock input is non-conducting and
produces a large Isub. To suppress this large Isub, the stack effect is required; the
input-PMOS with a high signal realizes this important effect. Isub is only produced
in the turned-off state. In the standard dual Vt domino circuit and the proposed
design, the Isub of the high-Vt off-NMOS connected to the clock (value: 1.2) is less
than that of the low-Vt off-PMOS connected to the clock (value: 22.7), so a low clock
signal is efficient in suppressing Isub. On the contrary, in the standard low Vt
domino circuit and the SEFG structure, the Isub of the low-Vt off-NMOS connected
to the clock (value: 33.3) is larger than that of the low-Vt off-PMOS connected to
the clock (value: 22.7), so a high clock signal helps suppress the leakage current,
as can be seen in Table 3.
From the above analysis, the minimum leakage state at 25 °C of the proposed design
and the SEFG structure is CLIH, and the minimum leakage state of the standard
domino circuits is CLIL. At 110 °C, the minimum leakage state of the proposed
design and the standard dual Vt domino circuit is CLIH, but in the SEFG structure
and the standard low Vt domino circuit, the leakage is minimum in the CHIH state.
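These minimum-leakage states can be read directly off Table 3. As a quick sanity check, the following sketch (a reading aid, not part of the paper; values transcribed from Table 3) picks the lowest-leakage input/clock state for each gate and temperature:

```python
# Leakage currents (A) transcribed from Table 3; states CLIH/CHIH/CLIL/CHIL
# are defined in the footnote under the table.
leakage = {
    25:  {"Proposed": {"CLIH": 6.02e-8, "CHIH": 6.23e-8, "CLIL": 1.76e-7, "CHIL": 1.39e-7},
          "SEFG":     {"CLIH": 1.86e-7, "CHIH": 2.14e-7, "CLIL": 1.97e-7, "CHIL": 2.49e-7},
          "s_Low":    {"CLIH": 1.84e-7, "CHIH": 1.66e-7, "CLIL": 1.59e-7, "CHIL": 2.01e-7},
          "s_Dual":   {"CLIH": 1.81e-7, "CHIH": 1.44e-7, "CLIL": 1.07e-7, "CHIL": 1.79e-7}},
    110: {"Proposed": {"CLIH": 2.94e-7, "CHIH": 9.37e-7, "CLIL": 3.17e-7, "CHIL": 1.55e-6},
          "SEFG":     {"CLIH": 5.32e-5, "CHIH": 1.05e-6, "CLIL": 1.45e-6, "CHIL": 1.61e-6},
          "s_Low":    {"CLIH": 2.07e-6, "CHIH": 6.62e-7, "CLIL": 1.21e-6, "CHIL": 1.27e-6},
          "s_Dual":   {"CLIH": 1.57e-7, "CHIH": 6.59e-7, "CLIL": 1.67e-7, "CHIL": 1.27e-6}}}

# For each temperature and gate, select the state with the smallest leakage.
min_state = {temp: {gate: min(states, key=states.get)
                    for gate, states in gates.items()}
             for temp, gates in leakage.items()}
print(min_state)
```

Running this reproduces the states quoted in the text: CLIH for the proposed design and SEFG at 25 °C, CLIL for the standard circuits at 25 °C, and CLIH/CHIH at 110 °C as stated.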
Fig. 2(B) compares the leakage current of the four different gates at their minimum
leakage states. The leakage current of the proposed design is the smallest of the four
gates, due to the dual Vt technology and the sleep transistor technology. At 25 °C,
the leakage current of the proposed design decreases by 43%, 62%, and 67%
compared to the standard dual Vt domino circuit, the standard low Vt domino circuit,
and the SEFG structure, respectively. However, Isub increases exponentially with
increasing temperature. Although the sleep transistor suppresses leakage current
efficiently, the leakage current of the proposed design exceeds that of the standard
dual Vt domino circuit. But the simulation results in Fig. 2(B) indicate that at
110 °C the proposed design decreases the leakage current by 55% and 72% compared
to the standard low Vt domino circuit and the SEFG structure, respectively. In
conclusion, the proposed design has a great advantage in decreasing leakage power at
typical sleep temperatures.
[Fig. 3: sample-count histograms of leakage power at T = 25 °C (Proposed, CLIH vs.
Dual Vt, CLIL) and at T = 110 °C (Proposed, CLIH vs. Low Vt, CHIH).]
Fig. 3. Leakage power distribution curves of the standard low threshold voltage circuit, the dual
threshold voltage circuit, and the proposed circuit under process variations
Fig. 3 shows the leakage power distribution curves of the standard low threshold
voltage circuit, the dual threshold voltage circuit, and the proposed circuit at two
typical temperatures under process variations. The three kinds of circuits are set in
their minimum leakage power states (standard low threshold voltage circuit: CHIH;
dual threshold voltage circuit: CLIL; proposed circuit: CLIH). It can be seen that the
leakage current distribution curves of the dual threshold voltage circuit and the
proposed circuit cross at 55 nW at 25 °C: 85% of the samples of the proposed circuit
are lower than 55 nW, and 76% of the samples of the dual threshold voltage circuit
are higher than 55 nW. The leakage current distribution curves of the standard low
threshold voltage circuit and the proposed circuit cross at 0.4 µW at 110 °C: 99% of
the samples of the proposed circuit are lower than 0.4 µW, and 89% of the samples of
the standard low threshold voltage circuit are higher than 0.4 µW. These results
indicate that the proposed design reduces the leakage current in the majority of the
samples even under process variations, consistent with the analysis of the nominal
case.
5 Summary
In this paper, a novel domino circuit structure is proposed to suppress the leakage
current and enhance the noise immunity. Based on the simulation results, the input
vector and clock state of the gate are discussed to obtain the minimum leakage state.
Finally, the leakage characteristic under process variations is analyzed.
References
1. Stackhouse, B., et al.: A 65 nm 2-Billion Transistor Quad-Core Itanium Processor. IEEE
Journal of Solid-State Circuits 44, 18–31 (2009)
2. Gong, N., et al.: Analysis and Optimization of Leakage Current Characteristics in Sub-
65nm Dual Vt Footed Domino Circuits. Microelectronics Journal 39, 1149–1155 (2008)
3. Wang, J., et al.: Low Power and High Performance Zipper CMOS Domino Full-adder
Design in 45nm Technology. Chinese Journal of Electronics 37, 266–271 (2009)
4. Kao, J.T., Chandrakasan, A.P.: Dual-threshold voltage techniques for low-power digital
circuits. IEEE Journal of Solid-State Circuits 35, 1009–1018 (2000)
5. Liu, Z., Kursun, V.: Leakage Biased PMOS Sleep Switch Dynamic Circuits. IEEE
Transactions on Circuits and Systems 53, 1093–1097 (2006)
6. Hamzaoglu, F., Stan, M.R.: Circuit-level techniques to control gate leakage for sub-100nm
CMOS. In: Proc. Int. Symp. on Low Power Electronics and Design, pp. 60–63. IEEE Press,
New York (2002)
7. Kim, J., Roy, K.: A leakage tolerant high fan-in dynamic circuit design technique. In:
Solid-State Circuits Conference, ESSCIRC 2001, pp. 309–313. IEEE Press, New York
(2001)
8. Predictive Technology Model (PTM), http://www.eas.asu.edu/~ptm
9. Lee, D., et al.: Analysis and Minimization Techniques for Total Leakage Considering Gate
Oxide Leakage. In: ACM/IEEE Design Automation Conference, pp. 175–180. IEEE Press,
New York (2003)
10. Wang, J., et al.: Monte Carlo Analysis of Low Power Domino Gate under Parameter
Fluctuation. Journal of Semiconductors 30, 125010-1–125010-5 (2009)
Detection of Attention-to-Rest Transition from EEG
Signals with the Help of Empirical Mode Decomposition
1 Introduction
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 66–71, 2011.
© Springer-Verlag Berlin Heidelberg 2011
The obtained SSVEP signals are decomposed into IMFs using the EMD method.
EMD has been demonstrated as a method for processing nonlinear and non-stationary
signals. From the oscillatory activities of the decomposed SSVEP, it is noticed that
the attention-to-rest transitions are clearly shown in the 6th IMF. Due to this
observation, we focus on examining the 6th IMF so as to detect the attention-to-rest
transitions. As a consequence, this may be used for detecting the idle period of an
SSVEP based BCI system.
2 Methodology
The signals of the SSVEP based BCI system were obtained from 7 volunteers aged
18–30. Each volunteer was stimulated with a series of flashing lights, corresponding
to 20 Hz, 15 Hz, 10 Hz, 8 Hz and 7.5 Hz, recorded via electrodes PO3, PO4, POz,
Oz, O1 and O2 at a sampling rate of 600 Hz. Firstly, the volunteer looked attentively
at the flashing light, with the first stimulus frequency of 20 Hz, for 6 s, and then
rested for 4 s. Secondly, the volunteer looked at the flashing lights at a stimulus
frequency of 15 Hz for the next 6 s, and rested for 4 s, and so on. The above 5
stimulus frequencies were repeated one more time, resulting in a total experiment
time of 100 s. Signal processing is then used to obtain the EEG signals. Fig. 1 offers
a complete set of the original SSVEP-EEG signals; the red areas refer to the time
when the volunteer is gazing at the flickering lights of the specified stimulus
frequency. Every volunteer repeated the experiment 6 times, so the EEG database is
composed of a total of 42 full-set signals.
EMD is applied to decompose the SSVEP signals. Fig. 2 shows the 11 IMFs and one
residue decomposed from the second channel (PO4) of the original EEG signal. From
the oscillatory activities of the 5th and 6th IMFs, each time the volunteer gazed at the
flickering light and then turned to a rest condition, the amplitudes rise greatly in the
transition period. All the experiments show that this transition phenomenon appears
most clearly in the 6th IMF. Fig. 3 depicts an enlargement of the 6th IMF from the
corresponding EEG signal. The red dotted lines indicate the locations of 6 s, 16 s,
26 s, 36 s, 46 s, 56 s, 66 s, 76 s, 86 s and 96 s. We can clearly see high amplitudes at
the start of each transition. Therefore, it is useful to look more closely at some of the
more important features of the 6th IMF.
The 6th IMF is further analyzed by the Fast Fourier Transform (FFT). Fig. 4
shows the corresponding Fourier spectrum of the 6th IMF, with a peak at 1 Hz. The
frequency contents of the 6th IMF from all the signals in our EEG database are found
to be at a very low frequency, between 0.5 Hz and 2 Hz. We therefore extend this
observation into the idea that during the attention-to-rest transition, a very low
frequency component occurs in the EEG.
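The spectral-peak observation can be illustrated with a small sketch. The signal below is a hypothetical stand-in for the 6th IMF (a ~1 Hz oscillation under a slow envelope, not real EEG data); its FFT peak lands near 1 Hz, inside the 0.5–2 Hz band reported in the text:

```python
import numpy as np

fs = 600  # Hz, the paper's sampling rate
t = np.arange(0, 10, 1 / fs)

# Hypothetical stand-in for the 6th IMF: a ~1 Hz oscillation with a slow
# Gaussian envelope, mimicking the transition-related activity.
imf6 = np.exp(-((t - 5.0) ** 2) / 4.0) * np.sin(2 * np.pi * 1.0 * t)

# Fourier spectrum of the IMF; the dominant component should sit near 1 Hz.
spectrum = np.abs(np.fft.rfft(imf6))
freqs = np.fft.rfftfreq(imf6.size, d=1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(f"spectral peak at {peak_hz:.1f} Hz")
```

With a 10 s record the frequency resolution is 0.1 Hz, so the peak resolves cleanly within the delta band.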
68 C.M. Ng and M.I. Vai
Fig. 2. EMD decomposed signals of channel PO4 of EEG signal with 11 IMFs and residue
Fig. 3. Enlargement of the 6th IMF Fig. 4. Fourier Spectrum of the 6th IMF
A finite impulse response (FIR) equiripple band-pass filter (with low cut-off at
0.5 Hz and high cut-off at 2 Hz) is therefore designed and applied to the original
EEG signal in order to preserve the frequency content of the 6th IMF. The power of
the filtered EEG signal is computed by applying a window of length 500 ms moving
along the signal. The filtered EEG signal is then divided into 10 segments of 10 s
duration (6 s of focusing on the flickering lights and 4 s of resting). The highest
power of each 10 s segment is found to be located at 6.832 s, 17.03 s, 26.77 s,
36.99 s, 46.74 s, 56.82 s, 66.89 s, 77.78 s, 86.58 s and 96.88 s. All of them fall within
the resting durations, within 0.58 s – 1.78 s right after the volunteer stops gazing at
the flickering lights. Fig. 5 illustrates the power spectrum of the EEG signal after the
band-pass filter is applied, with the highest-power locations of each 10 s segment
also marked.
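The filtering-and-windowing pipeline can be sketched as follows on a synthetic 10 s segment (a stand-in signal, not the recorded EEG; scipy's windowed-sinc `firwin` is used here in place of the paper's equiripple design, which `signal.remez` could provide):

```python
import numpy as np
from scipy import signal

fs = 600  # Hz
t = np.arange(0, 10, 1 / fs)  # one 10 s segment: 6 s attention + 4 s rest

# Synthetic stand-in: a 20 Hz SSVEP-like response while attending (0-6 s)
# plus a ~1 Hz burst just after 6 s, mimicking the transition activity.
x = np.where(t < 6, np.sin(2 * np.pi * 20 * t), 0.0)
x += 2.0 * np.exp(-((t - 6.5) ** 2) / 0.5) * np.sin(2 * np.pi * 1.0 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Linear-phase FIR band-pass keeping 0.5-2 Hz, applied with zero phase.
taps = signal.firwin(1201, [0.5, 2.0], pass_zero=False, fs=fs)
y = signal.filtfilt(taps, 1.0, x)

# Power in a 500 ms window moving along the filtered signal.
win = int(0.5 * fs)
power = np.convolve(y ** 2, np.ones(win) / win, mode="same")
peak_time = t[np.argmax(power)]
print(f"highest power at {peak_time:.2f} s")  # lands shortly after the 6 s mark
```

The 20 Hz component is rejected by the band-pass, so the windowed power peaks inside the rest period, shortly after gazing stops, which is the behavior the detection relies on.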
3 Experimental Results
3.1 Accuracy
Fig. 6 gives a clearer demonstration of the locations of the highest powers for the
filtered EEG signal. In this figure, the original SSVEP-EEG signal is shown in blue,
the green areas are the 6 s durations of gazing at the flickering lights, and the red
dotted lines are the occurrences of the highest power found in the filtered EEG
signal. As expected, all the highest powers are located in the resting durations and
occur right after the volunteer stops watching the flickering lights. It can be
concluded that a very low frequency occurs during the attention-to-rest transition.
The same analysis procedure is applied to all the SSVEP signals in the EEG
database. Table 1 summarizes the accuracy of detecting the attention-to-rest
transition point; the mean accuracy is 82.6%. Accordingly, our method is able to
detect the idle period of the SSVEP based BCI system.
Fig. 6. Original EEG signal with (i) green areas are the time of watching flickering lights. (ii)
the red lines are the occurrences of the highest power from the filtered EEG signal.
Table 1. Detection accuracy of the attention-to-rest transition for each volunteer

          Volunteers
          1        2        3        4        5        6        7
Trial 1   90%      90%      100%     80%      90%      70%      90%
Trial 2   80%      70%      70%      70%      80%      70%      80%
Trial 3   100%     70%      90%      80%      100%     70%      80%
Trial 4   90%      70%      80%      80%      100%     100%     90%
Trial 5   90%      90%      70%      70%      100%     60%      80%
Trial 6   100%     80%      80%      70%      100%     70%      80%
Accuracy  91.67%   78.33%   81.67%   75.00%   95.00%   73.33%   83.33%
The experimental results lead to the conclusion that a very low frequency occurs at
the transition point of the idle period. Since this frequency band (less than 2 Hz)
belongs to the delta waves, it is reasonable to suppose that the attention-to-rest
transition might be related to an increase in delta EEG activity. Delta activity has
been found during some continuous attention tasks [1]. An increase in delta power
has been reported in different types of mental tasks [2]; it is due neither to ocular
movements nor to any other artifact. Research has shown that an increase in delta
EEG activity may be related to attention to internal processing during the
performance of a mental task [2]. Delta power increases in conditions such as
attention, activation of working memory, letter identification, etc. [3][4]. It has been
pointed out in some Go/No-Go analyses, studied with event-related potentials (ERP),
that there is a power increase at 1 Hz in both Go and No-Go conditions. The increase
in delta activities during the No-Go conditions is related to the inhibition of
non-relevant stimuli (Harmony et al., 1996), signal matching and decision making
(Basar-Eroglu et al., 1992) [3]. On the other hand, delta power becomes high during
target relative to non-target processing, namely, in relation to the rest condition [5].
4 Discussion
In this paper, we began with the analysis of the SSVEP based BCI system by the
EMD method. The transition response is found in the 6th IMF of the SSVEP signals.
Therefore, a band-pass filter (0.5 – 2 Hz) is used to preserve only the very low
frequency content of the original SSVEP signals. Consequently, the occurrence of a
very low frequency during the attention-to-rest transition is demonstrated. This
phenomenon is examined with different SSVEP signals obtained from different
persons. As a result, the attention-to-rest transitions are successfully detected with a
mean accuracy of 82.6%. This result leads to the conclusion that during the
attention-to-rest transition, a very low frequency occurs, which means that delta
waves are being evoked. To put it the other way round, when the volunteer turns
from an attentively focusing stage to an unfocused stage, there is an increase in delta
EEG activity. This phenomenon may be related to the inhibition of non-relevant
stimuli, signal matching, and a state of non-target processing [3]. As a consequence,
our method is able to detect the idle period of the SSVEP based BCI system.
5 Conclusion
The EMD method offers a powerful tool for analyzing nonlinear and non-stationary
signals such as EEG. It offers a key to understanding the components of the SSVEP
signals, in which the attention-to-rest transition can be detected by means of the
features of the chosen IMF. The most likely explanation is that the attention-to-rest
transition is accompanied by the occurrence of delta activities. Since there is room
for further improvement in the detection accuracy, we will further analyze the
behavior of the EMD method, as well as the IMFs.
References
1. Wikipedia, http://en.wikipedia.org/wiki/Electroencephalography
2. Harmony, T., Fernández, T., Silva, J., Bernal, J., Díaz-Comas, L., Reyes, A., Marosi, E.,
Rodríguez, M., Rodríguez, M.: EEG Delta Activity: An Indicator of Attention to Internal
Processing during Performance of Mental Tasks. International Journal of Psychophysiol-
ogy 24(1-2), 161–171 (1996)
3. Harmony, T., Alba, A., Marroquín, J.L., González-Frankenberger, B.: Time-Frequency-
Topographic Analysis of Induced Power and Synchrony of EEG Signals during a Go/No-
Go Task. International Journal of Psychophysiology 71(1), 9–16 (2009)
4. Schroeder, C.E., Lakatos, P.: Low-Frequency Neuronal Oscillations as Instruments of Sen-
sory Selection. Trends in Neurosciences 32(1), 9–18 (2009)
5. Doege, K., Bates, A.T., White, T.P., Das, D., Boks, M.P., Liddle, P.F.: Reduced Event-
Related Low Frequency EEG Activity in Schizophrenia during an Auditory Oddball Task.
Psychophysiology 46(3), 566–577 (2009)
A Traffic Information Estimation Model Using Periodic
Location Update Events from Cellular Network
Abstract. In recent years considerable concern has arisen over building Intelligent
Transportation Systems (ITS), which focus on efficiently managing the road
network. One important purpose of ITS is to improve the usability of transportation
resources so as to extend vehicle durability and reduce fuel consumption and
transportation times. Before this goal can be achieved, it is vital to obtain correct and
real-time traffic information, so that traffic information services can be provided in a
timely and effective manner. Using Mobile Stations (MS) as probes to track vehicle
movement is a low-cost and immediate way to obtain real-time traffic information.
In this paper, we propose a model to analyze the relation between the amount of
Periodic Location Update (PLU) events and traffic density. Finally, the numerical
analysis shows that this model is feasible for estimating the traffic density.
1 Introduction
In recent years considerable concern has arisen over building Intelligent
Transportation Systems (ITS), which focus on efficiently managing the road
network. One important purpose of ITS is to improve the performance of
transportation so as to extend vehicle durability and reduce fuel consumption and
travel time. Before this goal can be achieved, it is vital to obtain correct and
real-time traffic information, including traffic density, traffic flow, speed, travel
time, traffic conditions, and traffic accidents, so that traffic information services can
be provided in a timely and effective manner.
At present, the methods of collecting real-time traffic information can be classified
into three categories as follows.
(1). Stationary traffic information detectors [1]
(2). Global Position System (GPS)-based probe car reporting [2]
(3). Tracking the location of mobile users through the cellular network [3-9]
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 72–77, 2011.
© Springer-Verlag Berlin Heidelberg 2011
By far the most intuitive way is to set up stationary traffic information detectors on
important road segments to gather the information, but this approach requires
excessive setup and maintenance costs [1]. In addition, several studies use
GPS-based cars as probes, but the sample size of GPS-based probe cars needs to be
high enough to infer the traffic information accurately [2]. Additional costs are
incurred when these GPS-based probe cars periodically report the traffic information
over the air. To reduce the aforementioned building and maintenance costs,
cost-effective and immediate alternatives need to be found. Since nearly everyone
owns a Mobile Station (MS), it seems advisable to collect traffic information by
using the MS as a probe [3-9].
Cellular networks have rigorous management processes to keep track of the
movement of MSs. Events such as Location Updates (LU) are triggered by mobility
management in the cellular network. By grouping neighboring cells, Location Areas
(LA) can be defined to describe the high-level locations of MSs. When an MS moves
from one LA to another, a Normal LU (NLU) event is triggered to inform the
cellular network of the MS's latest LA. Besides, a Periodic LU (PLU) event is
triggered to update the MS's LA information periodically if no NLU event has been
triggered within a period of time. Through the PLU and NLU processes, the network
always knows the current cell and LA of an MS. Therefore, the amount of PLU
events is related to the density of MSs in a specific cell. For this reason, analyzing
the occurrence times and related information of these mobility management events
can yield real-time traffic information.
In this paper, we propose a model to analyze the relation between the amount of
PLU events and traffic density information. We also derive a specific formula to de-
scribe this relation and provide a numeric analysis of it.
The remainder of the paper is as follows. In Section 2, we propose the analytical
model and the derived formula. The numeric analysis is provided in section 3. Finally,
conclusions are given in Section 4.
2.1 Scenario (1): No Call between Two Consecutive PLU Events
Fig. 1 shows the space diagram for scenario (1). A car carrying an MS moves along
the road. The first PLU event is triggered at time t0, and then the car enters the cell
at time t1. The second PLU event is triggered at time t2, and then the car leaves the
cell at time t3. The timing diagram is illustrated in Fig. 2.
The following assumptions and parameters in the model are defined below.
• There is only one road residing in the cell.
• c (hr): The cycle time of the PLU.
74 B.-Y. Lin, C.-H. Chen, and C.-C. Lo
• x (hr): The time difference between the first PLU event and entering the cell.
We assume the time function f(x) is a uniform distribution function, f(x) = 1/c.
• d (km): The length of road segment covered by a cell.
• v (km/hr): The average speed of a car crossing a cell.
In this scenario, the probability of PLU triggered in the cell is as formula (1).
\[
\Pr(\text{Scenario 1}) = \Pr\left( x + \frac{d}{v} > c \right)
= \int_{c - d/v}^{c} f(x)\, dx
= \int_{c - d/v}^{c} \frac{1}{c}\, dx
= \frac{d}{vc}
\tag{1}
\]
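Formula (1) can be checked with a tiny Monte Carlo experiment (the parameter values below are illustrative choices of mine, not from the paper), drawing x uniformly over the PLU cycle:

```python
import random

c, d, v = 1.0, 1.0, 50.0  # PLU cycle (hr), segment length (km), speed (km/hr)
rng = random.Random(0)
n = 200_000

# A PLU fires inside the cell when the residual cycle time at entry, c - x,
# is shorter than the crossing time d/v, i.e. when x + d/v > c.
hits = sum(rng.uniform(0, c) + d / v > c for _ in range(n))
print(hits / n, "vs closed form", d / (v * c))
```

With these values the closed form gives d/(vc) = 0.02, and the empirical frequency agrees to within sampling noise.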
[Fig. 1 sketch: a road segment of length d crossing the cell.]
Fig. 1. The scenario diagram for vehicle movement and PLU events on the road when there is
no call between two consecutive PLU events
[Fig. 2 sketch: timeline t0 (first PLU) → t1 (enter cell) → t2 (second PLU) → t3 (leave cell),
with x = t1 − t0, cell crossing time d/v, and PLU cycle c.]
Fig. 2. The time diagram for vehicle movement and PLU events on the road when there is no
call between two consecutive PLU events
2.2 Scenario (2): Several Calls between Two Consecutive PLU Events
The first PLU event is triggered at time t0. A call arrives at time t1, and then the car
enters the cell at time t2. The second PLU event is triggered at time t3, and then the
car leaves the cell at time t4. After that, a second call arrives at time t5. The space
diagram is illustrated in Fig. 3 and the timing diagram in Fig. 4.
The following assumptions and parameters in the model are defined below.
• The call arrivals to/from one MS per car along the road can be evaluated. The
call arrival rate to a cell is λ (call/hr).
• The call inter-arrival time function g(t) is an exponential distribution function,
g(t) = λe^{−λt}.
• The call inter-arrival time tCIA is exponentially distributed [10] with mean 1/λ.
• y (hr): The time difference from the first call arrival to entering the cell.
• The time function h(y) is a uniform distribution function, so h(y) = 1/(2c).
In this scenario, the probability of PLU triggered in the cell is as formula (2).
\[
\Pr(\text{Scenario 2}) = \Pr\left( t_{CIA} > c \,\cap\, y + \frac{d}{v} > c \right)
= \Pr(t_{CIA} > c) \times \Pr\left( y > c - \frac{d}{v} \right)
= \int_{c}^{\infty} g(t)\, dt \times \int_{c - d/v}^{c} h(y)\, dy
= \int_{c}^{\infty} \lambda e^{-\lambda t}\, dt \times \frac{d}{2vc}
= e^{-\lambda c} \frac{d}{2vc}
\tag{2}
\]
[Fig. 3 sketch: 1st Location Update at t0, 1st call arrival at t1, enter cell at t2,
2nd Location Update at t3, leave cell at t4, 2nd call arrival at t5.]
Fig. 3. The scenario diagram for vehicle movement and PLU events on the road when there are
several calls between two consecutive PLU events
[Fig. 4 sketch: timeline t0 … t5 with call inter-arrival time tCIA, PLU cycle c,
y = t2 − t1, and cell crossing time d/v.]
Fig. 4. The time diagram for vehicle movement and PLU events on the road when there are
several calls between two consecutive PLU events
We use formula (3) to combine the two scenarios and obtain the probability of a
PLU event in a specific cell for traffic density estimation.

\[
\Pr(PLU) = \Pr(t_{CIA} > c) \times \Pr(\text{Scenario 1})
+ \Pr(t_{CIA} < c) \times \Pr(\text{Scenario 2})
= e^{-\lambda c} \times \frac{d}{vc}
+ \left( 1 - e^{-\lambda c} \right) \times \left( e^{-\lambda c} \frac{d}{2vc} \right)
= \left( 3 - e^{-\lambda c} \right) \times \left( e^{-\lambda c} \frac{d}{2vc} \right)
\tag{3}
\]
Formula (3) gives the probability of a PLU event being triggered in a specific cell by
one car. To find the amount of PLU events triggered in a specific cell by all cars, we
multiply formula (3) by the traffic flow f (car/hr). The amount of PLU events r
(event/hr) on the road segment can then be expressed as formula (4) to estimate the
traffic density D (car/km).
\[
r = f \times \Pr(PLU)
= f \times \left( 3 - e^{-\lambda c} \right) \times \left( e^{-\lambda c} \frac{d}{2vc} \right)
= D \times \left( 3 - e^{-\lambda c} \right) \times \left( e^{-\lambda c} \frac{d}{2c} \right)
\tag{4}
\]

where D = f / v (car/km) is the traffic density.
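The rate expression in formula (4) can be evaluated directly; the sketch below (parameter values taken from the numeric analysis in Section 3) shows that both r and D fall as speed rises, i.e. they move together:

```python
import math

def plu_event_rate(f, lam, d, c, v):
    """Amount of PLU events r (event/hr) on the road segment, per formula (4)."""
    pr_plu = (3 - math.exp(-lam * c)) * math.exp(-lam * c) * d / (2 * v * c)
    return f * pr_plu

f, lam, d, c = 5000.0, 1.0, 1.0, 1.0  # car/hr, call/hr, km, hr
for v in (10, 50, 100):               # km/hr
    D = f / v                         # traffic density (car/km)
    r = plu_event_rate(f, lam, d, c, v)
    print(f"v={v:3d} km/hr  D={D:6.1f} car/km  r={r:6.1f} event/hr")
```

At v = 10 km/hr this yields roughly r ≈ 242 event/hr against D = 500 car/km, and both quantities shrink by the same factor as v grows, which is the positive relationship Fig. 5 reports.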
3 Numeric Analysis
In this section, we analyze the relation between the amount of PLU events and the
traffic density to evaluate the feasibility of our traffic information estimation model.
For the purpose of demonstration, we adopt the following parameters to estimate the
traffic density and the amount of PLU events: f = 5000 car/hr, λ = 1 call/hr,
d = 1 km, and c = 1 hr. Fig. 5 shows a positive relationship between the amount of
PLU events and the traffic density. Therefore, this model can be used to estimate
traffic information for ITS.
[Fig. 5 sketch: traffic density (car/km) and amount of PLU events plotted against
speed (10–100 km/hr) for f = 5000 car/hr, λ = 1 call/hr, d = 1 km, c = 1 hr; both
curves decrease together as speed increases.]
Fig. 5. The relation between the amount of PLU events and traffic density with different
vehicle speeds
4 Conclusions
This paper studied an analytic model of the PLU event rate, taking communication
behavior and traffic information into account, to evaluate the feasibility of traffic
information estimation from cellular data. In the experiments, the results show a
positive relationship between the amount of PLU events and the traffic density. This
model can be used to estimate traffic information for ITS to analyze traffic
congestion, accidents, transportation delays, etc.
References
1. Middleton, D., Parker, R.: Vehicle Detector Evaluation. Report 2119-1. Project Number 0-
2119. Texas Transportation Institute (2002)
2. Cheu, R.L., Xie, C., Lee, D.H.: Probe Vehicle Population and Sample Size for Arterial
Speed Estimation. Computer-Aided Civil and Infrastructure Engineering (17), 53–60
(2002)
3. Ygnace, J., Drane, C., Yim, Y.B., de Lacvivier, R.: Travel time estimation on the San-
Francisco bay area network using cellular phones as probes. University of California,
Berkeley, PATH Working Paper UCB-ITS-PWP-2000-18 (2000)
4. Fontaine, M.D., Smith, B.L.: Probe-based traffic monitoring systems with wireless loca-
tion technology: an investigation of the relationship between system design and effective-
ness. Transportation Research Record: Journal of the Transportation Research
Board (1925), 3–11 (2005)
5. Bar-Gera, H.: Evaluation of a cellular phone-based system for measurements of traffic
speeds and travel times: A case study from Israel. Transportation Research Part C (15),
380–391 (2007)
6. Caceres, N., Wideberg, J.P., Benitez, F.G.: Deriving origin-destination data from a mobile
phone network. IET Intelligent Transport Systems 1(1), 15–26 (2007)
7. Logghe, S., Maerivoet, S.: Validation of travel times based on cellular floating vehicle
data. In: Proceedings of the 6th European Congress and Exhibition on Intelligent Transpor-
tation Systems, Aalborg, Denmark (2007)
8. Caceres, N., Wideberg, J.P., Benitez, F.G.: Review of traffic data estimations extracted
from cellular networks. IET Intelligent Transport Systems 2(3), 179–192 (2008)
9. Gundlegard, D., Karlsson, J.M.: Handover location accuracy for travel time estimation in
GSM and UMTS. IET Intelligent Transport Systems 3(1), 87–94 (2009)
10. Bolotin, V.A.: Modeling call holding time distributions for CCS network design and per-
formance analysis. IEEE Journal on Selected Areas in Communications 12(3), 433–438
(1994)
The Application of Virtual Reality on Distance Education
Zehui Zhan
1 Introduction
With the development of network technology, such as higher broadband transport
rates, Virtual Reality techniques have gradually been taken into consideration by
distance education organizations. Modern educational technologies are making
Virtual Reality a promising instructional means, where modeling and simulation can
be used to display the structure and trends of natural, physical or social systems, so
as to provide an experiential and observable environment for students. The purpose
of this paper is to analyze the applicability of Virtual Reality in Distance Education.
2 Virtual Reality
As described on Wikipedia, Virtual Reality was first proposed by Jaron Lanier in
1989 [1]. It is also known as "Artificial Reality", "Cyberspace", "Artificial Environ-
ments", "Synthetic Environments" and "Virtual Environments". Virtual Reality is a
kind of perceived environment usually simulated by computer. Most virtual reality
environments are primarily visual experiences, displayed either on a computer screen
or through special stereoscopic displays, but some simulations include additional
sensory information, such as sound through speakers or headphones. Some advanced
and experimental systems have included limited tactile information, known as force
feedback.
Virtual Reality is based on computer technology and hardware equipment to
implement a kind of illusory space that can be seen, heard, touched and smelled by
users. It can be a computer simulation of a 3D environment, containing either
entities that exist in the world or fictive characters that never existed. In a virtual
reality environment, by dint of computer hardware, network techniques, broadband
and 3D computing capacity, users can enter the virtual world through a
Human-Computer Interface.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 78–83, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Besides, Virtual Reality is a subject that integrates human factors with information technology. Its main purpose is to convey information through artificial constructs and experience. Complex and abstract objects can be divided into sub-objects and expressed as specific symbols in the virtual environment. Virtual Reality deliberately incorporates many human factors and intends to magnify the effects it exerts on personal perception. It is therefore built on the integration of psychology, control theory, computer graphics, database design, real-time distributed systems, electronics, robotics, multimedia techniques, etc.
Moreover, distinguished from a static CAD model, Virtual Reality is a dynamic, open environment that responds to users' input such as gestures, sounds and keyboard commands. Through these interactions, users feel immersed in the virtual space. An administrator can supervise and control the environment through another interface as well.
3.1 Immersion
Immersion emphasizes the "real" experience. Ideally, users should throw themselves entirely into the virtual system without being aware that it is a simulation. This perfect illusory effect is a sense of presence, produced by three factors: imagery, interaction and behavior. Imagery guarantees the high fidelity of 3D effects; for example, parallax, field of view, depth and angle should be controlled to generate the feeling of "presence". Interaction in Virtual Reality is varied; users should feel natural when they perform an action in the virtual environment. Behavior means that objects in the virtual environment obey the laws of nature, or the rules set by designers, when they move or interact; this is also called the autonomy of a Virtual Reality system. For example, when an object receives a force, it moves away and exerts a reaction force at the same time, in accordance with the laws of physics. According to human cognitive theory, these three factors interconnect and act on one another, and together produce the immersion effect.
3.2 Interaction
The goal of interaction is to provide convenient and vivid interactive means that enable users to act inside the virtual space and get real-time responses, while at the same time enabling the system to collect feedback from users.
3.3 Imagination
Imagination is the factor that makes a Virtual Reality system stand out. Most Virtual Reality systems are not merely a kind of high-end interface, but applications focused on specific fields and problem domains. They therefore require not only the understanding and digestion of techniques, but also bold imagination.
80 Z. Zhan
Desktop Virtual Reality implements the simulation on a personal computer or low-end workstation. The computer screen usually serves as the window through which users observe the virtual space, while peripheral equipment is used to control the stand-ins and other interaction. In desktop Virtual Reality, 3D visual effects (such as stereo glasses) can enhance users' immersion. Moreover, CRT screens and stereo-image techniques offer high screen resolution at a low price. This kind of virtual reality is therefore the easiest to spread widely and has the strongest vitality.
Immersive Virtual Reality uses interaction equipment such as head-mounted displays and data gloves to temporarily block the users' visual and auditory perception of the outside world and enclose them in a close-up environment, so that they immerse deeply in the virtual system. There are three typical kinds of immersive Virtual Reality: cave immersion, cabin-seat immersion and screen-projection immersion. Cave immersion is a 360-degree immersion system. Cabin-seat immersion seats the user in a cabin with a window-like screen that entitles them to "look outside", so the user can turn around and browse the space without wearing any Virtual Reality equipment. Projection immersion uses a projector to cast the users' own image onto the screen, so that they can see themselves on screen and interact with the virtual environment around them.
Distributed Virtual Reality connects users in different locations, letting them share the same environment and work together in the virtual space. Unlike other forms of Virtual Reality, which only enable people to interact with the virtual environment, it realizes communication between different users as well.
To sum up, Virtual Reality can display a rich and colorful space in which users study, work, live and entertain themselves. It brings a brand-new flavor and outlook to society and education. Some kinds of virtual digital worlds, such as virtual communities, virtual families, virtual tourism and virtual language guides, are already a part of our life that cannot be neglected. At the same time, modern education is trying to make good use of this technology to improve education quality and learning effects.
The drawbacks of the existing online learning environment can be summed up as follows:
First, today's online education is short of real experience. According to the constructivism theory mentioned earlier in this paper, knowledge should be organized within an environment, so that learners can link it actively and fulfill the process
One of the best possible solutions for the drawbacks mentioned above is to combine cooperative learning techniques with virtual reality techniques: set up a distributed virtual learning environment that shows information in a three-dimensional dynamic format, so as to achieve better realism, natural interaction and immersive effects.
There are several advantages to applying Virtual Reality in the online virtual classroom:
First, virtual reality enriches the multimedia representation formats in the virtual classroom. 2D and 3D formats take the place of dull interaction; they make the interface realistic and vivid, and convenient for users to get information. An enjoyable feeling is conveyed to users and increases their cooperation efficiency.
Second, virtual reality improves the cooperative user interface in the virtual classroom. A 3D distributed cooperative learning environment is much friendlier than a simple 2D software interface. Users experience a deeper immersion, which makes the interface more acceptable and easier to use. Each user's status is also more transparent to others, so teachers and classmates are able to know each other and collaborate better.
Third, virtual reality is good for cultivating learners' non-intellective factors. In the virtual classroom, stand-ins represent learners. Users can select a stand-in similar to their own characteristics, or to the characteristics they would like to cultivate. This enables the system to identify the user's personality, cultural background and knowledge structure at the first stage. Everyone communicates with classmates and teachers in the "first person", and the system can then record and analyze their inclinations and interests, so as to choose the best way for them to learn.
Fourth, virtual reality increases the whole system's efficiency. The realistic three-dimensional senses give students quick entry into study status and keep them concentrated; in this way, learning efficiency is increased. In addition, the harmonious human-computer interface carries real-world interaction almost directly into the virtual classroom, so users do not have to adapt passively to the computer interface. Virtual Reality thus saves time and effort by avoiding extra cognitive burden for users.
Actually, some countries have already paid attention to virtual learning environments. The United States was the first country to apply Virtual Reality to education. In 1992, East Carolina University set up the Virtual Reality Educational Lab, which aims to confirm the feasibility of Virtual Reality education, evaluate Virtual Reality hardware and software, study the educational effects generated by Virtual Reality and its applications in practice, and compare the effects of Virtual Reality with those of other educational media. The United Kingdom develops Virtual Reality education with great passion as well. The first educational Virtual Reality project was set up in a Newcastle-upon-Tyne middle school, based on Dimension International technology; Virtual Reality applied to language training and industrial training has been explored there. Meanwhile, the VIRART (Virtual Reality Applications Research Team) project at Nottingham University is also researching virtual learning systems, focusing on Virtual Reality input equipment and training for disabled people.
3D techniques are much more complex than 2D, which is why they are not yet as prevalent. However, as technology develops, when data communication is more advanced and setting up a 3D space no longer takes so much effort, 3D virtual environments will certainly become popular for online education.
Summary
This paper analyzed the possibility and necessity of applying Virtual Reality to distance education. The definition, features and classification of Virtual Reality have been summed up, and the merits of building virtual classrooms for distance education have been pointed out. Future research is needed on the design and implementation of virtual classrooms and courseware, especially on the design of the virtual classroom interface.
Acknowledgement
This work is supported by the Natural Science Foundation of Guangdong Province in China (No. 8451063101000690) and the Scientific and Technological Planning Project of Guangdong Province in China (No. 2009B070300109).
References
[1] Information on, http://en.wikipedia.org/wiki/Virtual_reality
[2] Burdea, G., Coiffet, P.: Virtual Reality Technology. John Wiley & Sons, New York (1994)
[3] Robertson, G., Czerwinski, M., van Dantzich, M.: Immersion in Desktop Virtual Reality. Retrieved from http://www.research.microsoft.com/en-us/um/people/marycz/uist97.pdf
Framework Design of Unified Cross-Authentication
Based on the Fourth Platform Integrated Payment*
Abstract. The essay advances a unified authentication scheme based on the fourth integrated payment platform. The research aims at improving the compatibility of authentication in electronic business and providing a reference for the establishment of a credit system, by seeking a way to carry out standard unified authentication on an integrated payment platform. The essay introduces the concept of the fourth integrated payment platform and puts forward its overall structure and components. The main issues of the essay are the design of the credit system of the fourth integrated payment platform and the PKI/CA structure design.
1 Background
The data released by iResearch, a professional consulting company, show that the deal size of China's network payment market reached 212.1 billion yuan in the first quarter of 2010, an increase of 17.8% over the previous quarter and 93.5% over the same period of the previous year. It is obvious that the network payment industry remains in rapid development and has become one of the fastest growing industries in the Internet world [1]. While the network payment industry is growing at high speed, two main problems remain unsolved. One lies in the mismatch between its rapid development and its related infrastructure. The other is the contradiction between the massive need for online payment security and the poor compatibility of existing online payment tools [2].
According to the Computer Security Net, there are 36 government CAs and licensed brand CAs [3]. One of the key elements in protecting and securing ID authentication is a stable and secure network; however, over-sophisticated and duplicated network security instruments also negatively affect the efficiency and compatibility of network payment.
* The National Soft Science Research Program, 2010B070300016.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 84–88, 2011.
© Springer-Verlag Berlin Heidelberg 2011
The fourth integrated payment platform is also called the fourth-party payment (TFPP for short). The fourth-party payment is an integrated service provider offering electronic payment and its value-added services, as Figure 1 shows below.
Through its payment and value-added services, it integrates the resources and technologies owned by different service providers, standardizes the process of electronic commerce, and offers electronic payment supervision interfaces. The fourth-party payment consists of an interface service module and a support system service module. The interface services include a unified access interface for electronic business vendors, a unified access interface for value-added service providers, a product display medium, a payment interface, a bank interface, an electronic payment supervision interface, unified settlement, credit assessment, and a tax control interface. The value-added services include a security and risk management module and a general information management module.
The new PKI/CA based on TFPP is very similar in structure to traditional PKI/CA. PKI is responsible for binding user identities to the public keys that PKI manages; CA is responsible for the management of certificates [5]. With its payment function, the authentication center of the fourth-party payment based on PKI/CA has the structure shown in Fig. 2.
86 X. Yong and H. Yujin
As shown in Fig. 2, the fourth-party payment unified cross-authentication links six major substance domains. Foreground requests for authentication are usually sent by the authentication centers of buyers and sellers, who initiate the authentication process. Between the buyers/sellers and the background are the CAs of intermediary service providers and third-party payment platform providers, which do not actually trade commodities but profit by offering intermediary services. The CAs of finance and government administration operate in the background and are responsible for supervision and control. The government administration CA joins the cross-authentication mainly because current government online payment supervision facilities cannot keep pace with the evolution of online payment, and because of online trade tax issues. Government agencies are able to levy taxes and to cooperate with financial institutions in supervising and managing deposited funds.
The information flows can be categorized into two kinds. The first is the request and authentication information transported from the substance domains to the fourth-party payment, indicated by dotted lines in Fig. 2. The second, shown by solid lines, is the response from the fourth-party payment.
In the fourth-party payment platform, unified authentication must allow a single user to hold multiple identities. By using "domains" and declaring the attribute status, a user can obtain the corresponding permissions after verification. The most important function of the fourth-party payment center is managing the credit of the participating entities and guaranteeing that each participant gains the permissions and information assigned to its role. Next is providing authentication service that is cross-domain, cross-system and cross-application to the participants.
The fourth-party payment PKI is similar to traditional PKI as a system, but has more flexibility in credit administration and authentication. The framework of PKI in TFPP is shown below.
3. Attribute confirmation
Just as stated above, every domain has a sole attribute value. The attribute value is saved in the extension part of certificates that take X.509 v3 as the standard platform. The fourth-party payment therefore needs to verify the attribute values: one purpose is to confirm the authenticity of the claimed status, determine which domain it belongs to, and assign jurisdiction; another is to transmit the information to the fourth-party payment CA, which provides the data for authentication.
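As a toy illustration of the attribute-confirmation step described above (all domain values and permission names here are hypothetical, and a real implementation would first parse the attribute out of the X.509 v3 extension), the mapping from a verified attribute value to a domain and its granted permissions might be sketched as:

```python
# Hypothetical domain table: attribute value -> granted permissions.
# The values below are illustrative only, not taken from the paper.
DOMAIN_PERMISSIONS = {
    "buyer": {"request_authentication", "query_order_status"},
    "seller": {"request_authentication", "publish_products"},
    "third_party_payment": {"relay_authentication", "settlement"},
    "government": {"supervision", "tax_levy"},
}

def confirm_attribute(attribute_value):
    """Confirm a claimed status: return its domain and permissions,
    or raise if the attribute value matches no known domain."""
    if attribute_value not in DOMAIN_PERMISSIONS:
        raise ValueError("unconfirmed domain attribute: %r" % attribute_value)
    return attribute_value, DOMAIN_PERMISSIONS[attribute_value]
```

This separates the two purposes named in the text: the lookup confirms the claimed status and determines the domain, while the returned permission set is what would be forwarded to the fourth-party payment CA.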
4. Certificate warehouse
Resembling traditional PKI, the fourth-party payment PKI also uses a certificate warehouse to store and retrieve certificates, certificate revocation lists, and so on. The public certificate information in the warehouse can be queried by participants and the public; as for the information in the secret part, its authenticity and security are safeguarded by the issuing organization's effective and reliable binding of it to the certificate holder.
We can see that the fourth-party payment system includes two major functions: identity authentication and cross-credit transmission.
4 Conclusion
This article puts forward the unified cross-authentication of the fourth-party payment based on PKI/CA by studying the problems in network payment, especially the contradictions in secure authentication. Its characteristic lies in the extensibility of the trust relationships in the PKI system. Different domains and different participants establish mutual trust relationships through the fourth-party payment acting as a bridge CA, reducing the number of cross-authentication certificates between them and raising efficiency.
References
1. iResearch China: The market scale of China network payment reached 212.1 billion yuan in the first quarter of 2010 (2010), http://irs.iresearch.com.cn
2. Financial Times: A report on China integrated liquidation platform. Cio360, http://www.cio360.net/
3. The list of brand CA in China, http://www.infosec.org.cn/zydh/ca.html
4. Xu, Y., Fang, C.: A theoretical framework of fourth party payment. In: The International Conference on E-Business and E-Government (ICEE 2010), May 7-9 (2010)
5. Ming, Q.: Electronic Commerce Security, 2nd edn. Higher Education Press (2006)
Analysis towards VMEM File of a Suspended Virtual
Machine
1 Introduction
* Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 89–97, 2011.
© Springer-Verlag Berlin Heidelberg 2011
90 Z. Song, B. Jin, and Y. Sun
machine, a temporary file suffixed with .vmem is created. When the suspend button is pressed, the Virtual Machine Monitor (VMM) saves all the contents of the virtual machine's pseudo-physical memory into this file, whose size is the same as the memory size given in the virtual machine's configuration file (.vmx). So when the virtual machine is resumed, the VMM fetches data from this file and can recreate an environment that is exactly the moment the virtual machine was suspended, without any differences or modifications.
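The size relation just described can be checked mechanically. The following Python sketch is our own illustration, not part of the paper's Perl tooling; it assumes the standard `memsize` key in the .vmx file, which gives the guest memory size in megabytes:

```python
import os
import re

def vmem_matches_vmx(vmem_path, vmx_path):
    """Check that the .vmem file size equals the memsize (in MB)
    declared in the virtual machine's .vmx configuration file."""
    with open(vmx_path) as f:
        match = re.search(r'memsize\s*=\s*"(\d+)"', f.read())
    if match is None:
        raise ValueError("no memsize entry in " + vmx_path)
    mem_bytes = int(match.group(1)) * 1024 * 1024
    return os.path.getsize(vmem_path) == mem_bytes
```

A mismatch would suggest the file is truncated or belongs to a differently configured guest, so a check like this is a cheap sanity test before any deeper analysis.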
So our investigation starts from this .vmem file. The next section discusses the VMEM structure. Section 3 studies the useful information that can be obtained from the VMEM. The demos show the steps by which we are able to find useful information in the Windows XP SP3 heap structure.
2 Overview of VMEM
Fig. 1. EPROCESS and its internal substructures such as KPROCESS, PEB and DISPATCHER_HEADER (among the fields shown: DirectoryTableBase at +0x018, UniqueProcessId at +0x084, ActiveProcessLinks at +0x088, ImageFileName at +0x174 and ActiveThreads at +0x1a0 in EPROCESS; ImageBaseAddress at +0x008 and ProcessHeap at +0x018 in PEB; Type, Absolute, Size, Inserted, SignalState and WaitListHead in DISPATCHER_HEADER)
So the next step is to convert the virtual addresses to physical ones, so that we know where the data is located within the memory image. According to Intel's publications [4], a virtual address can be translated to a physical address via segmentation and paging mechanisms. We cover only paging, because we have two inputs and want one output:

Physical Address = TranslationFunc ( Virtual Address, CR3 );

The virtual address is obtained directly, while the CR3 value of each process is stored in the 4 bytes beginning at offset 0x018 inside EPROCESS (the first member of the DirectoryTableBase array). The conversion process differs, however: it is done by hardware (the MMU in the CPU) on a real machine, but by a software MMU (functions of the VMM) in a virtual machine. The traditional non-PAE x86 paging translation failed to output correct physical addresses in our experiments, and we struggled to figure out, through various tests, that PAE-mode paging was in use. This seemed incredible at first, because PAE was not turned on in the BIOS of the host machine, and there were no clues about PAE in the Windows XP SP3 guest OS; but we soon realized that either kind of paging mechanism is possible when it is achieved by software. The following figure shows the paging mechanism adopted by VMware Workstation 7.0.0 build 203739.
Fig. 2. Linear Address Translation with PAE enabled (4-KByte Pages) [4]
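As an illustration of the PAE walk in Fig. 2, the following Python sketch is our own reconstruction, not the paper's Perl script. It assumes 4-KByte pages (no 2-MByte large pages), present table entries, and that guest-physical addresses map directly to offsets in the .vmem file:

```python
import struct

def pae_translate(vmem, vaddr, cr3):
    """Walk PAE page tables (PDPT -> PD -> PT) inside a .vmem image
    and return the physical (file) offset of a 32-bit virtual address."""
    def read_u64(offset):
        vmem.seek(offset)
        return struct.unpack("<Q", vmem.read(8))[0]

    pdpt_index = (vaddr >> 30) & 0x3    # bits 31-30 select 1 of 4 PDPT entries
    pd_index = (vaddr >> 21) & 0x1FF    # bits 29-21 select 1 of 512 PD entries
    pt_index = (vaddr >> 12) & 0x1FF    # bits 20-12 select 1 of 512 PT entries
    page_offset = vaddr & 0xFFF         # bits 11-0: byte within the 4-KB page

    pdpte = read_u64((cr3 & ~0x1F) + pdpt_index * 8)   # PDPT is 32-byte aligned
    pde = read_u64((pdpte & ~0xFFF) + pd_index * 8)
    pte = read_u64((pde & ~0xFFF) + pt_index * 8)
    return (pte & ~0xFFF) | page_offset
```

This is the TranslationFunc of the previous section specialized to PAE; entries here are 64 bits wide, which is precisely what distinguishes the PAE walk from the classic two-level 32-bit one.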
With the gap from virtual to physical bridged, more information can be revealed via the PEB. Executable files under Windows (e.g., *.exe) are organized in PE format; some sections are loaded into memory from disk, while others are not statically stored but are dynamically created by the Windows subsystem. The heap is an obvious example.
The Windows Heap Manager is a subsystem used throughout Windows to provision dynamically allocated memory. It resides on top of the virtual memory interfaces provided by the kernel, which are typically accessed via VirtualAlloc() and VirtualFree(). Basically, the Heap Manager is responsible for providing a high-performance software layer through which software can request and release memory using the familiar malloc()/free()/new/delete idioms [5].
Each process typically has multiple heaps, and software can create new heaps as
required. There is a default heap for the process, known as the process heap, and a
pointer to this heap is stored in the PEB. All of the heaps in a process are kept in a
linked list hanging off of the PEB.
The 4 bytes beginning at offset 0x090 are a pointer to a pointer array, ProcessHeaps, whose size is given by NumberOfHeaps in the PEB. The first member of this array is the default process heap, which equals ProcessHeap in the PEB. Each member of the array points to a HEAP structure. The HEAP structure contains a pointer array of 64 elements, each of which points to a HEAP_SEGMENT structure or is filled with NULL. FirstEntry in HEAP_SEGMENT is a pointer to a HEAP_ENTRY structure, which reveals the size of the current block, the size of the previous block, and the unused bytes in the current block. The value of Size/PreviousSize is the actual byte count divided by 8. Flags indicates whether the block is in use, the last one, or a virtual one.
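To make the layout just described concrete, here is a small Python sketch of how consecutive blocks could be enumerated from a FirstEntry offset. This is our own illustration, not the paper's lsheap_xpsp3.pl script; it assumes the 8-byte _HEAP_ENTRY header whose first two 16-bit fields are Size and PreviousSize, both in units of 8 bytes:

```python
import struct

def walk_heap_entries(buf, first_entry_offset, max_entries=1000):
    """Yield (offset, size_in_bytes) for consecutive heap blocks.
    Size and PreviousSize are stored as multiples of 8 bytes, so the
    next header lies size_units * 8 bytes after the current one."""
    offset = first_entry_offset
    for _ in range(max_entries):
        size_units, prev_units = struct.unpack_from("<HH", buf, offset)
        if size_units == 0:          # no plausible further block
            break
        yield offset, size_units * 8
        offset += size_units * 8
```

A real walker would additionally validate each header (e.g., check that PreviousSize matches the previous block's Size, and inspect Flags for the last-entry marker) before trusting the chain.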
[Figure: the _PEB, _HEAP, _HEAP_SEGMENT and _HEAP_ENTRY structures and their links. ProcessHeaps (+0x090 in _PEB) points to an array of NumberOfHeaps (+0x088) pointers to _HEAP structures, the first of which equals ProcessHeap (+0x018); each _HEAP holds Segments[0]..Segments[63] (+0x058 to +0x154) pointing to _HEAP_SEGMENT structures; FirstEntry (+0x020) in _HEAP_SEGMENT points to the first _HEAP_ENTRY, whose fields include Size (+0x000), PreviousSize (+0x002), SmallTagIndex (+0x004), Flags (+0x005), UnusedBytes (+0x006) and SegmentIndex (+0x007).]
Although there are some insightful researches into Windows heap exploitation [5], they are of little help here, because their usage assumes a live state, whereas we see the data statically. Bridging the gap between live and static is much more difficult than address translation.
4 Demos
With the detailed description in the last section, we use Perl scripts to analyze the .vmem file. The procedure can be summarized as follows.
1. Identify all the processes in the .vmem file by the specific characteristics in DISPATCHER_HEADER.
2. Find the physical location of a process's PEB within the .vmem file by address translation.
3. Get detailed information directly or indirectly from the PEB, for example, searching for heap contents via ProcessHeaps from the PEB.
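Step 1 can be sketched as a plain byte scan. In the following Python sketch (our illustration, not the lsproc Perl code), the 4-byte signature 0x03 0x00 0x1b 0x00 for the Type and Size fields of _DISPATCHER_HEADER is the value commonly used when scanning for XP EPROCESS blocks; treat it as an assumption to be verified per OS version and service pack:

```python
def find_eprocess_candidates(image, signature=b"\x03\x00\x1b\x00"):
    """Return every file offset where the assumed _DISPATCHER_HEADER
    signature of an EPROCESS block appears in a raw memory image.
    Each hit is only a candidate and still needs sanity checking
    (e.g., a printable ImageFileName, plausible pointers)."""
    offsets = []
    pos = image.find(signature)
    while pos != -1:
        offsets.append(pos)
        pos = image.find(signature, pos + 1)
    return offsets
```

Because the signature is short, false positives are expected; the follow-up validation of each candidate is what makes the scan usable in practice.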
The following demo was done under VMware Workstation 7.0.0 build 203739 with a guest OS of Windows XP SP3. The guest virtual machine is configured with a memory size of 256 MB.
Scenario I
Start the virtual machine, log in to the Windows XP SP3 guest, and open the default Notepad program in XP. Type in one line, "This is a demo", and DO NOT save it to any .txt file on disk. Then press the suspend button on the VMware Workstation control panel immediately. Now follow the steps below:
1. Identify the processes in the VMEM using a lsproc-series Perl script that we modified from the lsproc tool. From the picture captured below, we can see that the offset of the EPROCESS structure of process notepad.exe is 0x0186e1b8.
2. Use a Perl script named lsheap_xpsp3.pl to display the structures of the heap organization and their physical offsets within the VMEM file. There are altogether 10 heaps in our demo, and we show only 3 of them in the picture captured below (Fig. 5) due to the length of this paper.
3. By walking deeper into the heap contents beginning from the first entry of the default heap (starting at 0x000a0000), we found the UNICODE string "This is a demo" located at virtual address 0x000b1b08, which is offset 0x07e02b08 in the VMEM file, as shown in Fig. 6. Note that the 8 bytes from 0x07e02b00 are the HEAP_ENTRY part.
Scenario II
This is a similar scenario, but the outcome of our findings is striking.
Start the virtual machine, log in to the Windows XP SP3 guest, and open the latest Windows Live Messenger (version 14.0.8117.416) in XP. Type in the password of an account that was stored beforehand, and log in normally. Then press the suspend button on the VMware Workstation control panel immediately.
This time, while walking through the heap contents of the process msnmsgr.exe, we accidentally found the plain-text password in the context of the MSN account information. We assume that the password was used in the login process and released without clearing the contents. For privacy reasons, the results are not published here. This may become a potential vulnerability, and users of MSN risk privacy leakage.
The method of analyzing the .vmem file proposed in this paper has several significances.
1. It is a general method for searching for useful information step by step, and can thus be ported to different paging mechanisms or operating systems.
2. Other vendors' virtualization products with a pause function may have similar architectures, so some attention should be paid to them.
3. As the .vmem file is a binary file representing the physical memory underlying the guest operating system, the tricks taken by rootkits to hide in the OS no longer work. It is another way to detect whether there are hidden processes in your system that you do not yet know about.
Our method of searching for sensitive data in the .vmem file still has some limits:
1. Diversity. The actual structure of a .vmem file varies with the paging mechanism and the operating system version or service pack, and it is difficult to develop an all-in-one program to solve the diversity problem. A script language like Perl is more powerful here, being agile and flexible to changes. Besides, the analysis of a .vmem file depends on the ability of the analyst, and more scripts can be prepared to deal with different situations (e.g., different versions and service packs of Windows).
2. Efficiency. Identifying the specific characteristic bytes in such a large file and then checking the candidates is a time-consuming process; it usually takes several minutes. The subsequent steps, however, can be done by scripts in seconds because no comparison is needed. Optimizing the scripts to improve efficiency is thus a future direction.
3. Granularity. In our demo, the plain-text contents of the heap can be attributed to the process they belong to, i.e., the analysis is at process granularity. We hope future investigation can reveal more information, so that it can be used in the forensic field without question.
Acknowledgement
This paper is supported by the Special Basic Research Program, Ministry of Science and Technology of the People's Republic of China (No. 2008FY240200), and the Key Project Funding, Ministry of Public Security of the People's Republic of China (No. 2008ZDXMSS003).
References
Abstract. This paper deals with an approach to the optimization and reconfiguration of advanced manufacturing modes based on the object-based knowledge mesh (OKM) and an improved immune genetic algorithm (IGA). To explore the optimization and reconfiguration of a new OKM from the user's function requirements, an optimization procedure aiming at the user's maximum function-satisfaction is proposed. Firstly, based on the definitions of the fuzzy function-satisfaction degree relationships between the users' requirements and the OKM functions, and of the multiple fuzzy function-satisfaction degrees of the relationships, the optimization model of the OKM multiple set operation expression is constructed. The OKM multiple set operation expression is then optimized by the immune genetic algorithm, with the steps of the OKM optimization presented in detail as well. Based upon the above, the optimization and reconfiguration of an advanced manufacturing mode are illustrated by an actual OKM example. The proposed approach proves to be very effective.
1 Introduction
At present, various advanced manufacturing modes and new concepts emerge con-
stantly. Though these advanced manufacturing modes are of different advantages,
there has been no advanced manufacturing system that can contain all of the advanced
manufacturing modes suitable for all kinds of manufacturing enterprises. If all kinds
of complementary advanced manufacturing modes are transformed into their corre-
sponding advanced manufacturing knowledge in the advanced manufacturing system,
the enterprise can be allowed to select the most appropriate combination of advanced
manufacturing modes or the best mode for operation. Therefore, a knowledge mesh
(KM) was brought forward to formally represent complex knowledge such as ad-
vanced manufacturing modes, information systems. And to solve the information
exploration in KM representation, OKM was brought forward [1~3].
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 98–103, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Optimization and Reconfiguration of Advanced Manufacturing Mode 99
Fig. 1. Optimization flow: the evaluated OKM multiple sets and their function-satisfaction degrees are combined, via the OKM multiple set operators and the satisfaction degree operators, into a function-satisfaction degree expression; the immune genetic algorithm is then performed, and the OKM multiple set operation mapping rules relate the OKM multiple set operation expression with optimal function-satisfaction degree to the function-satisfaction degree expression with optimal function-satisfaction degree
Theorem 1: There is a one-to-one mapping between the OKM multiple set operation
expression with brackets and N non-bracket operators and the function-satisfaction
degree expression with N operators and the non-repetitive permutation of operator
priorities.
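Theorem 1 can be illustrated with a small sketch (the operand names and priority values below are hypothetical, not taken from the paper): given N non-bracket operators and a non-repetitive permutation of their priorities, repeatedly combining the operands around the highest-priority remaining operator yields exactly one fully bracketed expression, and distinct permutations yield distinct bracketings.

```python
def bracket(operands, operators, priorities):
    """Map an operator-priority permutation to the unique fully
    bracketed expression: repeatedly fuse the two operands around
    the highest-priority remaining operator."""
    operands = list(operands)
    ops = list(zip(operators, priorities))
    while ops:
        i = max(range(len(ops)), key=lambda k: ops[k][1])  # highest priority left
        op, _ = ops.pop(i)
        operands[i:i + 2] = [f"({operands[i]} {op} {operands[i + 1]})"]
    return operands[0]

# priorities (3, 1, 2): the leftmost '+' binds first, the '-' last
print(bracket(["M1", "M2", "M3", "M0"], ["+", "-", "+"], [3, 1, 2]))
# → ((M1 + M2) - (M3 + M0))
```

Running the same operators under a different priority permutation produces a different bracketing, which is the one-to-one correspondence the theorem states.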
100 C. Xue and H. Cao
The objective of optimization is to obtain the optimized chromosome with the best
fitness vector. Thus, the objective function is
J = max_{f_{m'n'}} ε(f_{m'n'}(x), x_0),

where f_{m'n'} is the fitness vector determined by the n'th chromosome in the m'th generation, which varies with the chromosomes; x is the n'th chromosome in the m'th generation; and x_0 is the ideal fitness vector, a row vector of ones, since the ideal satisfaction degree of each OKM function is 1.
2.3 Immune Genetic Algorithm for OKM Multiple Set Operation Expression
Optimization
Compared with the standard GA, the IGA, a genetic algorithm based on immune principles, has the following advantages: (1) an immune memory function; (2) a function for maintaining antibody diversity; (3) a self-adjustment function. When fitness is calculated in the improved IGA, a niche algorithm can be used to select the best chromosome while maintaining the diversity of the population. The improvements proposed in this paper are as follows:
Each fitness is adjusted according to (1).
sh(d) = 1 − d/σ_share, if d ≤ σ_share;  sh(d) = 0, if d > σ_share .  (1)

d = d(op_i, op_j) / (m × popsize1) .  (2)

where σ_share is the niche radius (generally σ_share = 0.1), d(op_i, op_j) is the Hamming distance, m is the number of genes, and popsize1 is the number of chromosomes in each sub-group. The new method of calculating fitness is then given in (3).
ε(x_opi) = ε'(x_opi) / Σ_{opj=1}^{N} sh(x_opi, x_opj) .  (3)

f = ( Σ_{opj=1}^{N} sh(x_opi, x_opj) ) / log( Σ_{opj=1}^{N} sh(x_opi, x_opj) − 0.02 ) .  (4)
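A minimal sketch of the niche fitness sharing in (1)-(3), assuming binary-coded chromosomes and Hamming distance (the function names and the toy population are illustrative):

```python
def sharing(xi, xj, m, popsize1, sigma_share=0.1):
    """sh(d) from (1)-(2): normalized Hamming distance with niche radius."""
    d = sum(a != b for a, b in zip(xi, xj)) / (m * popsize1)
    return 1.0 - d / sigma_share if d <= sigma_share else 0.0

def shared_fitness(raw, pop, m, popsize1):
    """Adjusted fitness from (3): raw fitness divided by the niche count."""
    return [raw[i] / sum(sharing(pop[i], pop[j], m, popsize1)
                         for j in range(len(pop)))
            for i in range(len(pop))]

# two identical chromosomes share a niche and are penalized relative to a lone one
pop = [[0, 1, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]]
print(shared_fitness([3.0, 3.0, 2.0], pop, m=4, popsize1=len(pop)))
# → [1.5, 1.5, 2.0]
```

The duplicated chromosomes end up with a lower shared fitness than the unique one, which is how the niche step preserves population diversity.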
4 Examples
Following the steps in the optimization of an advanced manufacturing mode, a simple example is given to show the optimization process. The user requirements are first transformed into a layer structure according to step 1 of Procedure 1, as shown in Fig. 2.
h_1^L = w_1^{L*}(H̃_1)^T = (0, 0, 0.6594, 0, 0.6857, 0, 0.6535), with w_1^{L*} = (0, 0, 1, 0, 1, 0, 1).

h_2^L = w_2^{L*}(H̃_2)^T = (0.832, 0, 0, 0.642, 0, 0.71, 0.538), with w_2^{L*} = (1, 0, 0, 1, 0, 1, 1).

h_3^L = w_3^{L*}(H̃_3)^T = (0, 0.751, 0, 0.542, 0.496, 0, 0.816), with w_3^{L*} = (0, 1, 0, 1, 1, 0, 1).

h_4^L = w_4^{L*}(H̃_4)^T = (0.685, 0, 0.579, 0.785, 0, 0.573, 0), with w_4^{L*} = (1, 0, 1, 1, 0, 1, 0).
Following step 5 of Procedure 1, the best OKM multiple set operation expression is ((M_2 − (M_1 + M_3)) + M_0) − M_3, with best fitness vector {0.832, 0.751, 0.6594, 0.785, 0.6857, 0.71, 0.816}.

We can see that after the reconfiguration of the OKM, the user satisfaction degree is improved compared with the original OKMs, because the functions are enriched.
5 Conclusions
The optimization of advanced manufacturing modes is studied based on the OKM and the IGA. Starting from the user function requirements of the OKM and the optimization of the OKM multiple set expressions, the optimization problem of advanced manufacturing modes aiming at maximum user satisfaction is solved. As verified by the example, the proposed method can help an enterprise select the optimal combination of advanced manufacturing modes, or the single best mode, for operation.
References
1. Yan, H.S.: A new complicated-knowledge representation approach based on knowledge
meshes. IEEE Transactions on Knowledge and Data Engineering 18, 47–62 (2006)
2. Xue, C.G., Cao, H.W.: Formal representation approach to enterprise information system
based on object knowledge mesh. In: 2008 Chinese Control and Decision Conference
(2008)
3. Cao, H.W., Xue, C.G.: A Study on reconfiguration of enterprise information system based
on OKM. In: The 8th World Congress on Intelligent Control and Automation (2010)
4. Yager, R.R.: Fuzzy modeling for intelligent decision making under uncertainty. IEEE
Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 30, 60–70 (2000)
5. Xue, C.G., Cao, H.W.: Evaluation and Decision Making on Advanced Manufacturing
Modes Based on Object-based Knowledge Mesh and User Satisfaction (in press)
Modeling and Simulation of Water Allocation System
Based on Simulated Annealing Hybrid Genetic Algorithm
1 Introduction
By transferring water to North China from the upstream, midstream and downstream reaches of the Changjiang River, the three transfer lines of the South-to-North Water Transfer Project come into being [1,2]. At the same time, by connecting the Changjiang River, the Huaihe River, the Yellow River and the Haihe River, the overall structure of water resources in China is formed. Construction of the first stage of the eastern route has begun, and how to allocate water reasonably and effectively has become a public concern. To simplify the research, we choose a simple node lake as the object and discuss water allocation by means of product distribution theory [3,4].
To develop the research, the following assumptions are made: (1) Water consumed to maintain the ecological environment in areas around the node lake is provided by the Project, and the priority-of-guarantee principle is followed. (2) The distribution of water for residential living use follows the population distribution pattern, and the water buying price is the same for all districts when the node lake allocates water. (3) When the node lake allocates water for other uses, the ideas and competitive mechanisms of supply chain management (SCM) are used, and water is distributed according to the operational mode of a quasi-market economy. (4) Conflicts between the lake and the node districts are solved by consultation, as illustrated in [4].
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 104–109, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Modeling and Simulation of Water Allocation System Based on SHGA 105
The first condition is the survival constraint, which mainly means that the living needs of the residents within the areas along the Project must be met first during water allocation. The survival constraint is defined as:

Q_is ≥ K_i b_i .  (1)

where Q_is denotes the water quantity for living use allocated to the ith district, K_i denotes the total population of the ith district, b_i denotes the per capita water shortage quantity for living use, and n is the number of node districts.
The second constraint condition is the total water resource, defined as:

Σ_{i=1}^{n} (Q_is + Q_ig + Q_in + Q_it) ≤ Q .  (2)

where Q is the allocable water amount in the node lake, and Q_ig, Q_in, Q_it are the water quantities allocated to the ith district for industrial, agricultural and ecological uses respectively.
The third constraint condition is water demand ability which is defined as:
Qis+Qig+Qin+Qit≤Qi max . (3)
The fourth condition is the coordination degree constraint, defined as:

Ξ_B1(ℑ_1) Ξ_B2(ℑ_2) ≥ Ξ* .  (5)

where ℑ_1 = Σ_{i=1}^{n} z_i ℑ_1^i and ℑ_2 = Σ_{i=1}^{n} z_i ℑ_2^i, with ℑ_1^i and ℑ_2^i determined by Q_is, Q_ig, Q_in, Q_it, Q_imin and Q_imax; Ξ_B1 and Ξ_B2 are membership functions of the coordination degree, ℑ_1* and ℑ_2* are the optimal ratios, Ξ* is the optimal coordination degree, z_i is the ratio of the ith district with z_i = 1/n, h^{i0} and h^i are the economic growth indexes of the ith district during the baseline period and the planning period respectively, and l_2^{i0} and l_2^{i} are the values of environmental improvement of the ith district during the baseline period and the planning period respectively.
106 J. Zhu and S. Wang
The fifth condition is the constraint on water quantity for ecological use. According to the principle of guaranteeing water quantity, this constraint condition is:

Q_it ≥ Q_itmin .  (6)

where Q_itmin denotes the minimum water quantity for ecological use in the ith district.
The sixth condition is the non-negativity constraint on the parameters:

Q_is ≥ 0, Q_ig ≥ 0, Q_in ≥ 0, Q_it ≥ 0 .  (7)
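The hard constraints (1), (2), (3), (6) and (7) can be gathered into a single feasibility check. The sketch below uses illustrative names and omits the coordination constraint (5):

```python
def feasible(Qs, Qg, Qn, Qt, K, b, Qt_min, Qi_max, Q_total):
    """Check survival (1), total resource (2), demand ability (3),
    ecological minimum (6) and non-negativity (7) for n districts,
    where Qs/Qg/Qn/Qt hold living/industrial/agricultural/ecological
    allocations per district."""
    n = len(Qs)
    ok_nonneg = all(q >= 0 for q in Qs + Qg + Qn + Qt)                     # (7)
    ok_survival = all(Qs[i] >= K[i] * b[i] for i in range(n))              # (1)
    totals = [Qs[i] + Qg[i] + Qn[i] + Qt[i] for i in range(n)]
    ok_total = sum(totals) <= Q_total                                      # (2)
    ok_demand = all(totals[i] <= Qi_max[i] for i in range(n))              # (3)
    ok_eco = all(Qt[i] >= Qt_min[i] for i in range(n))                     # (6)
    return ok_nonneg and ok_survival and ok_total and ok_demand and ok_eco

# a one-district toy instance: population 2, per capita shortage 2
print(feasible([4.0], [1.0], [1.0], [1.0],
               K=[2.0], b=[2.0], Qt_min=[0.5], Qi_max=[10.0], Q_total=10.0))
```

Such a predicate is what the penalty terms of the fitness function later measure violations against.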
The goals include comprehensive water utilization and income of the lake. The first
goal is comprehensive water utilization. Comprehensive benefit of water utilization
includes economic, environmental and social benefits. The paper only discusses eco-
nomic benefits of water utilization which includes industrial and agricultural produc-
tion benefits, living benefit and ecological benefit. This goal is defined as:
f_1(Q) = max{ Σ_{i=1}^{n} e_is Q_is + Σ_{i=1}^{n} e_ig Q_ig + Σ_{i=1}^{n} e_in Q_in + Σ_{i=1}^{n} e_it Q_it } .  (8)

where e_is, e_ig, e_in, e_it are the net benefit coefficients of water for living, industrial, agricultural and ecological use in the ith district respectively, and n is the number of node districts.
The second goal is the income of the lake. According to [4,5], each node lake is an independent legal entity and plays the role of distributing water resources. Therefore, the goal of the node lake includes not only maximizing the comprehensive benefit of water utilization but also maximizing the income of the node lake. The income goal of the lake is:
f_2(Q) = max{ Σ_{i=1}^{n} p_s Q_is + Σ_{i=1}^{n} p_i (Q_ig + Q_in + Q_it) − C_c Σ_{i=1}^{n} (Q_is + Q_ig + Q_in + Q_it) } .  (9)
where p_s is the price of water for living use; according to [4], it is the same for all districts. In (9), p_i is the price of water resources for uses other than living use, for which price information is asymmetrical across districts, and C_c is the unit water cost.
The first step is the encoding and decoding method. In the above model there are many continuous variables whose value ranges are broad. To improve the operational efficiency and the accuracy of solutions of the SHGA method, we use decimal floating-point encoding [6,7]. The value range of each continuous variable is divided into n aliquots, and each genic value of a chromosome is denoted by an integer in [1, n+1]. Each variable is encoded according to its floating-point number. The transformation from the genic value JY_i to the true value JC_ij of a gene of the chromosome is as follows:

JC_ij = JC_ij^min + (JY_i − 1)(JC_ij^max − JC_ij^min) / n .  (10)

where [JC_ij^min, JC_ij^max] is the value range of JC_ij.
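The encoding and decoding in (10) can be sketched as follows (the range values below are illustrative; `n_aliquots` plays the role of n):

```python
import random

def encode(n_aliquots, n_genes):
    """A random chromosome: each genic value JY_i is an integer in [1, n+1]."""
    return [random.randint(1, n_aliquots + 1) for _ in range(n_genes)]

def decode(jy, jc_min, jc_max, n_aliquots):
    """Eq. (10): map a genic value JY_i back to the true value JC_ij."""
    return jc_min + (jy - 1) * (jc_max - jc_min) / n_aliquots

print(decode(1, 0.0, 10.0, 100))    # genic value 1 maps to the lower bound: 0.0
print(decode(101, 0.0, 10.0, 100))  # genic value n+1 maps to the upper bound: 10.0
```

The integer gene thus walks the variable's range in n equal steps, which keeps crossover and mutation simple while preserving the continuous search space.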
The second step is construction of fitness degree function. This paper uses exactly
non-differentiable penalty function to deal with constraint conditions:
G(JM) = g(JM) ± ζ_1 { Σ_{i=1}^{m1} S_i(JM) + Σ_{i=m1+1}^{m1+m2} min(0, S_i(JM)) + Σ_{i=m1+m2+1}^{m1+m2+m3} max(0, S_i(JM)) } .  (11)
where JM is the decision variable after encoding, G(JM) is the fitness degree, and g(JM) is the goal function; if the goal is a minimization, g(JM) enters negatively, and if the goal is a maximization, g(JM) enters positively. ζ_1 is the penalty factor; m1, m2 and m3 are the numbers of constraint conditions of type "=", "≥, >" and "≤, <" respectively; and S_i(JM) is the expression of the ith constraint condition [7].
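One way to read the penalty fitness (11) for a maximization goal is sketched below (the interface and the penalty weight are illustrative; each S_i enters as a residual of its constraint type):

```python
def fitness(g_value, eq, ge, le, zeta1=1000.0):
    """Penalty fitness in the spirit of (11) for a maximization goal:
    any violated constraint lowers G(JM).
    eq: residuals that must equal 0; ge: residuals that must be >= 0;
    le: residuals that must be <= 0."""
    penalty = (sum(abs(s) for s in eq)          # '=' constraints
               - sum(min(0.0, s) for s in ge)   # '>=': negative residual violates
               + sum(max(0.0, s) for s in le))  # '<=': positive residual violates
    return g_value - zeta1 * penalty

print(fitness(10.0, eq=[0.0], ge=[2.0], le=[-1.0]))              # feasible: 10.0
print(fitness(10.0, eq=[0.0], ge=[-0.5], le=[0.0], zeta1=10.0))  # one violation: 5.0
```

A feasible individual keeps its raw goal value, while each violation subtracts ζ_1 times its magnitude, steering the search back toward the feasible region.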
The third step is genetic manipulation. It is similar to that of a general genetic algorithm and mainly includes four operations: selection, crossover, mutation and the simulated annealing operation.
The fourth step is the determination of parameters. In actual operation, the discrete aliquot number (n), population scale (size), penalty factor (ζ_1), dynamic crossover rate (η_1), mutation rate parameter (η_2) and initial temperature of simulated annealing (T_0) are the important factors, and the influences of n, size, ζ_1 and T_0 are relatively large. An oversized population leads to excessive computation and decreases the optimizing speed; on the contrary, an undersized population makes the solution converge prematurely to a single individual. Generally speaking, the value of size lies in [50, 100]. If T_0 is too high, the distribution range of tentative points is broad and the operational speed decreases; if T_0 is too low, the distribution range of tentative points is narrow and the searching capacity of simulated annealing cannot be exerted completely.
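The simulated annealing operation grafted onto the genetic loop can be sketched with the Metropolis acceptance rule; the parameter values below follow the ranges discussed above, and all names are illustrative:

```python
import math
import random

def anneal_accept(f_old, f_new, T):
    """Metropolis rule: always keep an improvement; keep a worse
    offspring with probability exp((f_new - f_old) / T)."""
    if f_new >= f_old:
        return True
    return random.random() < math.exp((f_new - f_old) / T)

T = T0 = 100.0   # initial temperature: too high slows search, too low narrows it
alpha = 0.95     # geometric cooling schedule
size = 80        # population scale chosen within [50, 100]
for generation in range(200):
    # ... selection, crossover, mutation, then anneal_accept on each offspring ...
    T *= alpha
```

As T falls, worse offspring are accepted ever more rarely, so the hybrid moves from broad exploration toward pure greedy selection.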
4 Case Study
Suppose there are eight water supply districts around a node lake, total water quantity
in a supply period in the node lake is 2×108m3, buying price of water for living use is
2.6RMB/m3, unit water cost is 2.4RMB/m3, minimum water demand for ecological use
in each node district is 50×104m3, per capita water shortage quantity is 2m3. Other
parameters are shown in Table 1.
Table 1. Parameters of the node districts

Node district                                1      2      3      4      5      6      7      8
Net benefit coefficient, industrial use      0.396  0.366  0.411  0.319  0.442  0.512  0.378  0.462
Net benefit coefficient, agricultural use    0.198  0.186  0.176  0.213  0.211  0.201  0.179  0.192
Net benefit coefficient, ecological use      0.098  0.101  0.231  0.121  0.129  0.211  0.192  0.145
Net benefit coefficient, living use          0.396  0.366  0.411  0.319  0.442  0.512  0.378  0.462
Minimum water requirement                    500    620    780    560    660    710    780    800
Maximum water requirement                    1200   2310   3120   4120   2860   2130   5190   4670
Total population                             60     71     56     52     67     72     66     54
Water buying price                           3.6    3.2    3.5    3.8    4      3.7    3.9    4.1
According to the goal function and constraint conditions, we establish water alloca-
tion model aimed at the case study. Substituting pertinent coefficients into the model,
we obtain the result (as shown in Table 2).
Table 2. Water allocation result

Node district                  1      2      3      4      5      6      7      8
Water for living needs         120    142    112    104    134    144    132    162.8
Process water                  596    680    1022   1311   1798   928    2342   2463
Agricultural water             357    568    822    1050   753    674    1276   1433
Water for ecological needs     50.2   56.1   62.4   58.6   61.9   64.2   61.7   63.5
Water supply guarantee rate    93.57  82.56  84.67  91.25  96.05  84.98  89.45  96.83
Firstly, when the water buying price offered by a district is high, the water supply guarantee rate of that district is also high: the managers of the node lake prefer to sell more water to such a district. Secondly, we calculate μ_1 = 0.921, which shows the coordination extent between water use and demand. Additionally, we calculate μ_2 > 0.8, which shows the coordination extent between economic development and environmental improvement in the areas around the lake. Thirdly, not all water resources in the node lake are distributed, because the lake must hold some buffer water; this is why the allocable water quantity is less than 2×10^8 m^3 in the case study.
References
1. Hu, J.L., Ge, Y.X.: Research on the Water Distribute Mode and Harmonized System of
Huanghe River. Management World 20, 43–53 (2004) (in Chinese)
2. Zhou, L., Huang, Z.H.: Hybrid Genetic Algorithm for the Multi-objective Nonlinear Water
Resources Optimization Model. Water Resources and Power 23, 22–26 (2005) (in
Chinese)
3. Zhao, J.S., Wang, J.Z.: Theory and Model of Water Resources Complex Adaptive Alloca-
tion System. Acta Geographica Sinica 69, 39–48 (2002) (in Chinese)
4. Huang, F.: Optimal Operation Model for Large-Scale Water Resources System Having
Multiple Resources and Multiple Users. Journal of Hydraulic 47, 91–96 (2002) (in
Chinese)
5. Li, X.P.: Research Review on Water Configuration. Haihe Water Resource 21, 13–15
(2002) (in Chinese)
6. Ren, C.X., Zhang, H., Fan, Y.Z.: Optimizing Dispatching of Public Transit Vehicles Using
Genetic Simulated Annealing Algorithm. Journal of System Simulation 17, 40–44 (2005)
(in Chinese)
7. Wen, P.C., Xu, X.D., He, X.G.: Parallel Genetic Algorithm/Simulated Annealing Hybrid
Algorithm and Its Applications. Computer Science 30, 21–25 (2003) (in Chinese)
Study on Feed-Forward MAP-Based Vector
Control Method of Vehicle Drive Motor
Abstract. To address the narrow high-efficiency area and the over-current problem that arise when conventional vector control is applied to vehicle drive motors, this paper proposes a feed-forward MAP-based vector control method for vehicle drive motors. According to the required motor torque and speed, the magnitude and direction of the voltage space vector are controlled directly to achieve torque control. The calculation of the torque and field current components is therefore not needed, which not only avoids the over-current that PID closed-loop control leads to, improving the reliability of the controller, but also avoids the dependence on current measurement, improving control precision and motor efficiency. Finally, simulation results and a motor bench test prove that this method can significantly enlarge the high-efficiency area and that it is suitable for vehicle running conditions.
1 Introduction
With the increase in automobile production and the number of vehicles in use, growing pressure on oil demand and environmental protection forces the global vehicle industry to seek new energy-saving power systems, which accelerates the research progress of electric vehicle technology [1]. Nowadays many technical problems still need to be solved, one of which is to find an efficient and reliable motor control method suited to drive motor operating characteristics [2].
Vector control and direct torque control are the two kinds of vehicle drive motor control methods widely used [3]. Vector control is a feedback loop control: the torque current component and the magnetizing current component are obtained by coordinate transformation based on measuring the three-phase currents, and then PID control is carried out according to the deviation between the actual and demanded current components to adjust the output voltage vector [4,5]. However, because the integral link in PID control can hardly meet the demand for rapid response, over-current phenomena are inevitable, which shortens the life of the IGBTs in the motor controller [6]. In addition, the current sensor's measuring error is relatively large under small-current operating conditions, so the motor control performance is poor [7].
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 110–115, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Study on Feed-Forward MAP-Based Vector Control Method of Vehicle Drive Motor 111
Direct torque control uses hysteresis control and a switch choice table to make the power inverter output a reasonable voltage vector, based on observation of the motor torque and stator flux, so as to control the stator flux linkage and torque [8]. But the coupling effect needs to be eliminated during magnetic flux control, so a decoupling model is necessary [9]. Moreover, because of obvious current and torque ripple, its dynamic response characteristic is poor [10].
Thus, this paper proposes an improved feed-forward MAP-based vector control method for vehicle drive motors. The magnitude and direction of the voltage space vector are controlled directly to realize field-oriented control according to the demanded torque and speed. This not only avoids the over-current caused by closed-loop control, but also avoids the dependence on current measurements, which improves control precision and efficiency.
Fig. 1. Voltage space vector U and phase angle θ in the d-q coordinate frame
The drive motor control system includes the speed and angle position sensor, the motor control three-dimensional MAP inquiring module, the space voltage vector PWM module, the inverter module, the battery and the motor, as shown in figure 2, in which T* is the command torque, n* is the motor speed, U* is the optimal voltage amplitude, θ* is the optimal voltage phase, and φ* is the rotor mechanical angle position.
First, the optimal motor control three-dimensional MAP is formed through calibration experiments and stored in the motor control program in the form of tables. Second, the MAP inquiring module builds a curved-surface differential model. Third, the optimal voltage amplitude U* and the optimal voltage phase θ* are obtained by curved-surface differential calculation according to T*, n* and the parameters at each vertex of the model.
112 Y. Zhou et al.
Last, the space voltage vector PWM module gives the voltage vector phase θ* + φ* + 90° in the α-β coordinate frame, according to θ* and φ*, and then produces six PWM control signals to drive the inverter module.
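The MAP inquiry amounts to interpolating over the calibrated (T*, n*) grid; a bilinear sketch (the grid and values below are made-up calibration numbers, not from the paper):

```python
def map_lookup(t_req, n_req, t_axis, n_axis, table):
    """Bilinearly interpolate the calibrated MAP at (t_req, n_req);
    table[i][j] is the calibrated value at (t_axis[i], n_axis[j])."""
    def cell(axis, v):
        # locate the grid cell containing v and the fractional position in it
        i = 0
        while i < len(axis) - 2 and v > axis[i + 1]:
            i += 1
        return i, (v - axis[i]) / (axis[i + 1] - axis[i])
    i, tx = cell(t_axis, t_req)
    j, ty = cell(n_axis, n_req)
    top = table[i][j] * (1 - ty) + table[i][j + 1] * ty
    bot = table[i + 1][j] * (1 - ty) + table[i + 1][j + 1] * ty
    return top * (1 - tx) + bot * tx

# U* amplitude (V) on a 2x2 torque (Nm) x speed (rpm) grid, made-up numbers
u_map = [[40.0, 60.0], [50.0, 80.0]]
print(map_lookup(5.0, 1500.0, [0.0, 10.0], [1000.0, 2000.0], u_map))  # 57.5
```

The same lookup would be applied to a second table holding θ*, so one query of the two MAPs yields the full feed-forward voltage command.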
Fig. 2. Structure of the drive motor control system: the motor control three-dimensional MAP inquiring module receives T* and n* and passes U* and θ* to the space voltage vector PWM module, which sends six PWM control signals to the inverter module powered by the battery; the inverter drives the motor, and the speed and angle position sensor feeds back n* and φ*
3 Simulation Analysis
A feed-forward MAP-based vector control simulation model for a permanent magnet synchronous motor is built in Simulink, as shown in figure 3. The model mainly includes the permanent magnet synchronous motor module, inverter module, MAP inquiring module, space voltage vector PWM module, and speed and position detection module. The MAP inquiring module gives the optimal voltage amplitude and phase; the space voltage vector PWM module produces the six PWM control signals.
A simulation experiment is carried out with this model under the following conditions: the initial speed is 700 rpm, the initial load is 6 Nm, and the load suddenly drops to 2 Nm at 0.04 s. The speed and torque curves are shown in figures 4 and 5 respectively. The speed increases rapidly at first, reaches a maximum of 730 rpm at 0.005 s, then fluctuates around 700 rpm and quickly settles at 700 rpm. When the load drops suddenly at 0.04 s, the speed fluctuates but soon stabilizes at 700 rpm again. The torque fluctuates greatly at first, with a peak value of 33 Nm, and soon stabilizes at 6 Nm. When the load drops suddenly at 0.04 s, the torque fluctuates but soon stabilizes at 2 Nm. The simulation results show that the system has good speed and torque control performance.
4 Experiment Analysis
To further verify the feasibility of the vehicle drive motor control method and to study the motor control and calibration methods in depth, we built a drive motor experiment platform. Figure 6 is the experimental platform system diagram, in which 1 is the host computer, 2 the battery, 3 the power analyzer, 4 the on-line battery charger, 5 the eddy current dynamometer, 6 the bench, 7 the tested motor, 8 the tested motor controller, 9 the cooling system, 10 the CAN communication transceiver, and 11 the dynamometer controller. The actual experimental platform is shown in figure 7.
The figures above show that the motor's high-efficiency area is enlarged, up to 80%, which can meet the demand of keeping high efficiency over a large speed and torque range for vehicle drive motors.
5 Conclusions
This paper has studied in depth a MAP-based feed-forward vector control method suitable for vehicle drive motors. Motor efficiency at the same operating point differs with different voltage amplitudes and angles, so it is possible to improve motor efficiency by using a MAP formed from the optimal voltage amplitude and angle. Simulation and bench experiments show that this motor control method significantly expands the motor's high-efficiency area. It is an efficient and reliable control method for vehicle drive motors.
References
1. Haddoun, A., Benbouzid, M.H., Diallo, D.: A loss-minimization DTC scheme for EV in-
duction motors. J. IEEE Transactions on Vehicular Technology 56(1), 81–88 (2007)
2. Timko, J., Zilková, J., Girovský, P.: Shaft sensorless vector control of an induction motor.
J. Acta Technica CSAV (Ceskoslovensk Akademie Ved) 52(1), 81–91 (2007)
3. Kumar, R., Gupta, R.A., Bhangale, S.V.: Vector control techniques for induction motor
drive: A review. J. International Journal of Automation and Control 3(4), 284–306 (2009)
4. Nait Seghir, A., Boucherit, M.S.: A new neural network based approach for speed control
of PM synchronous motor. J. WSEAS Transactions on Circuits and Systems 6(1), 87–93
(2007)
5. Badsi, B., Masmoudi, A.: DTC of an FSTPI-fed induction motor drive with extended
speed range. J. COMPEL - The International Journal for Computation and Mathematics in
Electrical and Electronic Engineering 27(5), 1110–1127 (2008)
6. Vaez-Zadeh, S., Jalali, E.: Combined vector control and direct torque control method for
high performance induction motor drives. J. Energy Conversion and Management 48(12),
3095–3101 (2007)
7. Trentin, A., Zanchetta, P., Gerada, C.: Optimized commissioning method for enhanced
vector control of high-power induction motor drives. J. IEEE Transactions on Industrial
Electronics 56(5), 1708–1717 (2009)
8. Kadjoudj, M., Taibi, S., Benbouzid, M.E.H.: Permanent-magnet-synchronous-motor speed
control using fuzzy adaptive controller. J. Advances in Modeling and Analysis C 62(3-4),
43–55 (2007)
9. Vavrus, V., Vittek, J., Malek, M.: Velocity vector control of a linear permanent magnet
synchronous motor. J. Komunikacie 9(4), 14–19 (2007)
10. Singh, B., Jain, P., Mittal, A.P.: Sensorless DTC IM drive for an EV propulsion system us-
ing a neural network. J. International Journal of Electric and Hybrid Vehicles 1(4), 403–
423 (2008)
Condition Monitoring and Fault Diagnosis of Wet-Shift
Clutch Transmission Based on Multi-technology*
1 School of Mechanical and Vehicle Engineering, Beijing Institute of Technology,
Beijing 100081, China
2 School of Mechanical & Electrical Engineering, Beijing Information Science & Technology
University, Beijing 100192, China
1 Introduction
At present, the wet shift clutch transmission is widely used in tracked armored vehicles and the engineering machinery industry. This transmission device, also called a wet-shift clutch transmission, has a compact structure and high transmission efficiency, and is easy to operate. However, how to test and control the running state of the transmission quickly has been the most important issue to be solved, which is also the key to ensuring the riding quality of the transmission and to launching maintenance in time [1].
* This work was supported by the National Natural Science Foundation under Grant No. 50975020
and the Key Laboratory Foundation of Beijing under Grant No. KF20091123205.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 116–123, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Condition Monitoring and Fault Diagnosis of Wet-Shift Clutch Transmission 117
As shown in Figure 2, the oil is mainly divided into three circulating subsystems: (1) the hydraulic torque converter's hydraulic transmission oil, radiator oil, and transmission lubrication oil; (2) the transmission clutch control oil and transmission lubrication oil; (3) the hydraulic pump motor system's steering power flow transmission oil and bearing lubrication and cooling oil. These three oil circulating systems have different flows and pressure changes.
The working process of the hydraulic lubrication system is analyzed in this paper, and it is pointed out that oil analysis technology plays an important role in monitoring the wear sites and wear process of the wet-shift clutch transmission, in studying wear failure types and wear mechanisms, and in oil evaluation; besides, it is an important means of performing condition monitoring and fault diagnosis on mechanical equipment without stopping and disassembling it [3].
The methods and steps of condition monitoring and fault diagnosis for the transmission of a tracklayer with wet-shift clutch, using oil analysis technology, the function parameter test method and vibration analysis technology, are introduced as shown in Fig. 3.

Fig. 3. Schematic drawing for state monitoring and fault diagnosis of wet-shift clutch transmission

As shown in Fig. 3, function parameter testing, oil analysis and vibration testing are used to monitor the wet-shift clutch transmission. Function parameter testing is intuitive and efficient, but the wet-shift clutch transmission has many kinds of function parameters and it is impossible to test them all, so it is very important to choose representative signals to test. Because fault symptoms can be caught by oil analysis, it is another efficient method for monitoring and diagnosing transmission faults. Vibration monitoring is the third method of transmission state monitoring, but because of the influence of running conditions it is difficult to test and analyze the vibration of the wet-shift clutch transmission, so vibration monitoring is only a supplementary means.
120 M. Chen, L. Wang, and B. Ma
In this study, considering the limited quantity of data and the randomness of the samples [4], grey theory was selected to forecast the wear extent, and the selection method and modeling steps of GM(1,1) grey modeling using the oil analysis data are proposed, as shown in Fig. 4.

Fig. 4. The grey modeling steps for fault diagnosis using oil analysis data
As shown in Fig. 4, for sampled data with unequal intervals, judge whether the oil data and the sampled data collected from each sensor meet the scale requirement. If not, judge again after taking the logarithm of the element concentrations in the oil and the other data. If the scale condition is met, continue to judge whether the data meet the sampling interval requirement for modeling, max(Δt_k)/min(Δt_k) < K, where Δt_k is the sampling interval and K denotes the modeling threshold. The scale condition is δ(k) ∈ (e^{−2/(n+1)}, e^{2/(n+1)}), where δ is the scale function and n is the number of sampled data.

When the sampling interval requirement for modeling, max(Δt_k)/min(Δt_k) < K, is met, build the grey model through the difference quotient method. If the modeling condition is not met, the sampling intervals shall be optimized.
After the optimization of sampling intervals, judge whether the sampling sequence is smooth. If it is, the model shall be built by the grey time-sequence fitting method. If the data fluctuate, the grey model shall be built through the improved time-sequence fitting method based on the wear curve.

After the grey model is built through the difference quotient method, the time-sequence fitting method or the time-sequence fitting method based on the wear curve, the fitting precision shall be checked. If it passes the check, grey prediction can proceed. If it does not, the model shall be rebuilt with a different modeling dimension until the fitting precision requirement is met.
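For equally spaced data, the grey modeling step can be sketched with a standard least-squares GM(1,1) in pure Python (the wear series below is synthetic and illustrative):

```python
import math

def gm11(x0, horizon=1):
    """Fit a GM(1,1) grey model to the series x0 and forecast
    `horizon` further points."""
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:                  # accumulated generating operation (1-AGO)
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    y = x0[1:]
    # least squares for x0(k) = -a*z(k) + b via the 2x2 normal equations
    m = n - 1
    szz, sz = sum(zi * zi for zi in z), sum(z)
    szy, sy = sum(zi * yi for zi, yi in zip(z, y)), sum(y)
    det = szz * m - sz * sz
    a = -(m * szy - sz * sy) / det          # developing coefficient
    b = (szz * sy - sz * szy) / det         # grey input
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + horizon)]

wear = [1.0, 1.1, 1.21, 1.331]    # synthetic, nearly exponential wear data
print(gm11(wear, horizon=1)[-1])  # one-step-ahead forecast, close to 1.4641
```

GM(1,1) fits a first-order grey differential equation to the accumulated series, which is why it copes with the short, monotone wear records that oil analysis typically yields.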
The test parameters for the transmission in this study mainly include: control oil pressure, shift clutch pressure, rotation speed of each shaft, and the oil systems of the steering system and the hydraulic torque converter. The sequence and method of testing each parameter are demonstrated in Fig. 5.
As shown in Fig. 5, first check the gear transmission pressure based on the signals of the control pressure provided by the front pump of the gearbox and the pressures of the gear shift clutch and lockup clutch. If the pressure of the clutches at the gear being tested is normal, determine the current gear of the gearbox; in case of any anomaly, record the amplitude of the abnormal signals to indicate the anomaly at the current gear.

When the current gear is determined, convert and display the current input revolution speed and vehicle speed based on the collected input revolution speed and three-shaft revolution speeds, and calculate the transmission ratio of the gearbox from the input and output revolution speeds. If the calculated transmission ratio equals that of the gear, it indicates a "normal state" for the current gear. If it does not, the system diagnoses and finds the specific faulty position and displays the diagnostic result.
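The gear check above reduces to comparing the measured input/output transmission ratio with the nominal ratio of the engaged gear; a sketch with made-up nominal ratios:

```python
NOMINAL_RATIO = {1: 4.12, 2: 2.48, 3: 1.54, 4: 1.00}   # made-up gear ratios

def check_gear(gear, n_in, n_out, tol=0.05):
    """Return 'normal state' if the measured transmission ratio matches
    the nominal ratio of the engaged gear within tolerance `tol`."""
    measured = n_in / n_out
    nominal = NOMINAL_RATIO[gear]
    if abs(measured - nominal) / nominal <= tol:
        return "normal state"
    return f"fault: gear {gear} measured ratio {measured:.2f}, expected {nominal:.2f}"

print(check_gear(2, n_in=2000.0, n_out=810.0))    # within 5% of 2.48: normal state
print(check_gear(2, n_in=2000.0, n_out=1200.0))   # ratio mismatch flagged as fault
```

A slipping clutch or a wrong engaged gear shows up as a ratio outside the tolerance band, which is exactly the anomaly the diagnosis step records.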
After the current gear is displayed, check the pressure of the hydraulic circuit at the
high-pressure outlet and low-pressure inlet of the steering system. If the pressure of the
high/low-pressure hydraulic circuit is within the normal ranges, it will indicate “normal
pressure” for the steering system. If the pressure of the high/low-pressure hydraulic
circuit exceeds the range of technical indexes, it will record and indicate the faulty
position and specific value.
After the steering system is found normal, check whether the lubrication pressure in the gearbox's lubrication system is normal, mainly measuring the lubrication pressure at the first, second and third transmission shafts. If the lubrication pressure of a shaft is within the range of normal technical indexes, it will indicate "normal lubrication pressure" for the corresponding shaft; if it exceeds the range of technical indexes, it will record and indicate the faulty position and the specific value.
The gearbox's temperature sensor is installed inside the lubrication system's
hydraulic circuit; before the lubrication pressure is checked, the diagnosis system
displays the gearbox temperature.
122 M. Chen, L. Wang, and B. Ma
Condition Monitoring and Fault Diagnosis of Wet-Shift Clutch Transmission 123
After the lubrication system is found to be normal, the hydraulic pressures at the
inlet and outlet of the hydraulic torque converter are checked. If the pressure
difference between the inlet and outlet of the torque converter is within the normal
technical limits, "normal pressure" is indicated for the hydraulic torque converter;
if it exceeds those limits, the faulty position and the specific value are recorded
and indicated.
Finally, the collected signals and fault messages are stored, and an inspection report
is generated and printed.
5 Conclusions
This study indicates that oil analysis and functional-parameter analysis are the main
methods for condition monitoring of the wet-shift clutch transmission, with vibration
analysis as an auxiliary method.
It is validated that selecting representative functional signals for
condition-monitoring analysis and, once fault symptoms are found, applying grey
modeling to the oil-analysis data to forecast the time of fault occurrence can satisfy
the demands of condition monitoring and fault diagnosis during regular transmission
operation.
References
[1] Wang, L., Ma, B., Zheng, C., et al.: A Study on Running-in Quality Evaluation
Method of Power-shift Steering Transmission Based on Oil Monitoring. Lubrication
Engineering 7(33), 35–38 (2008)
[2] Li, H.-y., Wang, L.-y., Ma, B.: A Study on No-load Running-in Wear of Power-shift
Steering Transmission Based on Spectrum Analysis. Spectroscopy and Spectral
Analysis 29(4), 1013–1016 (2009)
[3] Wang, L., Ma, B., Li, H., et al.: A Study on Running-in Quality Evaluation Method
of Power-shift Steering Transmission Based on Performance Parameter 3(33), 86–88 (2008)
[4] Deng, J.: Grey Control System. Huazhong Institute of Technology Press, Wuhan,
China (1985)
Circulant Graph Modeling Deterministic Small-World
Networks
Chenggui Zhao*
Abstract. In recent years, many studies have revealed that some technological
networks, including the Internet, are small-world networks, which is attracting
attention from computer scientists. One can decide whether a real network is
small-world by checking whether it has high local clustering and small average path
distance, the two distinguishing characteristics of small-world networks. So far,
researchers have presented many small-world models that dynamically evolve a
deterministic network into a small-world one by stochastically adding vertices and
edges to the original network; rather few works have focused on deterministic models.
In this paper, the circulant graph, an important kind of Cayley graph, is proposed as
a model of deterministic small-world networks, considering its simple structure and
significant adaptability. It is shown that the circulant graph constructed in this
paper exhibits the two expected small-world characteristics. This work should be
useful because circulant graphs have served as models of communication and computer
networks, and the small-world characteristic will be helpful in the design and
analysis of their structure and performance.
1 Introduction
Research on small-world networks has produced a large body of literature in recent
years. In 1998, Watts and Strogatz [3] called real networks exhibiting the phenomenon
of six degrees of separation small-world networks, and described some stochastic
models of them. These models have clustering coefficients much larger than those of
random networks, but a small average path length. Some regular networks have a large
clustering coefficient and a large average path length, while random networks of the
same size and average node degree have a much smaller clustering coefficient and
average path length. Watts et al. randomly rewire the edges of a regular network,
such as a loop network, with probability p to construct a small-world network, such
that the average path length decreases dramatically as p increases while the
clustering coefficient decreases only slowly.
* Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 124–127, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Many Cayley graphs have been used to design interconnection networks, and the
circulant graph is an important class of the Cayley graph family. A digraph X=(V,E)
is defined by a set V of vertices and a set E={(u,v) | u, v∈V} of arcs. When the
subset E is symmetric, we can identify the two opposite arcs (u, v) and (v, u) with
the undirected edge (u, v). Let G be a finite group and S a subset of G. If every
element of G can be expressed as a finite product of powers of the elements of S,
the subset S is said to be a generating set for G and the elements of S are called
generators of G; in this case, we also say that G is generated by S.
Let Cay(G, S) denote the graph whose vertices are the elements of G and whose arcs
are the ordered pairs (g, gs) for g∈G, s∈S. Cay(G, S) is called the Cayley digraph of
the group G and the subset S; if S is a generating set of G, then Cay(G, S) is called
the Cayley digraph of G generated by S. If the identity element 1 of G is not
included in S and S = S⁻¹, then Cay(G, S) is a simple undirected graph. Unless noted
otherwise, all graphs in this paper are undirected.
First, let us define a special class of Cayley graphs, which has been applied to
model computer networks for many years.
Definition. A circulant graph X(n, S) is a Cayley graph Cay(Zn, S) on Zn, where n is
a power of 2. That is, it is a graph whose vertices are labelled {0, 1, …, n − 1},
with two vertices labelled i and j adjacent if and only if i − j (mod n) ∈ S, where
S, as a subset of Zn, satisfies S = −S and 0 ∉ S.
Second, a particular circulant graph is constructed by deliberately selecting the
generating set S of Zn. If n = dt, where d is a factor of n, one can select d to
obtain different node degrees of Cay(Zn, S). Let S = {1, t, 2t, …, dt}. Then
Cay(Zn, S) is a Cayley graph, because S is clearly a generating set of Zn, noting
that it includes the generator 1 of Zn. Together with S = −S, it follows that
Cay(Zn, S) is undirected and connected. Let the symbol X stand for Cay(Zn, S) for
simplicity. The next section shows that X has the small-world characteristics: a
large clustering coefficient and a small average path length.
Obviously, the undirected Cayley graph X(G, S) of a group G with respect to the
generating set S is a regular graph of degree d. Babai et al. proved that every group
of order n has log2 n + O(log log n) elements x1,...,xt that form a generating set of
G. It follows that G has a set of log2 n + O(log log n) generators such that the
resulting Cayley graph has logarithmic diameter, so X has average path length no
larger than log2 n. For a general survey of Cayley graphs with small diameters, see [4].
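The construction and the two small-world indicators can be checked numerically. The sketch below builds X(n, S) for the assumed example values n = 64, t = 16 (the paper fixes neither), dropping the offset dt ≡ 0 (mod n) since 0 ∉ S, and measures clustering and average path length with plain breadth-first search:

```python
from collections import deque
from itertools import combinations

def circulant_graph(n, offsets):
    """Adjacency sets of X(n, S) with S = {±s : s in offsets}."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for s in offsets:
            adj[v].add((v + s) % n)
            adj[v].add((v - s) % n)
        adj[v].discard(v)          # enforce 0 not in S
    return adj

def average_clustering(adj):
    """Mean local clustering coefficient over all vertices."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path distance over ordered vertex pairs, via BFS."""
    n = len(adj)
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

# n = 64, t = 16, S = {±1, ±16, ±32, ±48} (the dt = 64 ≡ 0 offset is dropped)
adj = circulant_graph(64, [1, 16, 32, 48])
print(average_clustering(adj), average_path_length(adj))
```

For this instance the clustering coefficient is well above that of a comparable random graph while the average path length stays small, consistent with the claim of the section.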
4 Conclusion
We have constructed a good model for small-world networks in a simple way. It can be
expected that this model will improve some circulant-graph-based designs in
technological networks by making them small-world networks. On the other hand, this
model may also serve as a model of deterministic small-world networks.
References
1. Xiao, W., Parhami, B.: Cayley graphs as models of deterministic small-world
networks. Information Processing Letters 97, 115–117 (2005)
2. Comellas, F., Sampels, M.: Deterministic small-world networks. Physica A 309,
231–235 (2002)
3. Watts, D.J., Strogatz, S.H.: Collective dynamics of small-world networks.
Nature 393, 440–442 (1998)
4. Heydemann, M.C., Ducourthial, B.: Cayley graphs and interconnection networks. Graph
Symmetry, Algebraic Methods and Applications. NATO ASI C 497, 167–226 (1997)
5. Xiao, W.J., Chen, W.D., Parhami, B.: On Necessary Conditions for Scale-Freedom in
Complex Networks, with Applications to Computer Communication Systems. Int’l J. Sys-
tems Science (to appear) (e-publication in March 2010)
6. Xiao, W.J., Peng, L., Parhami, B.: On General Laws of Complex Networks. In: Zhou, J.
(ed.) Proc. 1st Int’l Conf. Complex Sciences, Part 1, pp. 118–124. Springer, Heidelberg
(2009)
Research on Risk Manage of Power Construction
Project Based on Bayesian Network
1 Introduction
Risk management of power construction projects is to identify, analyze, evaluate,
predict and effectively treat the risks in a construction project. It must deal with
the uncertain factors and reduce the cost of the project, so that the project is
finished securely and scientifically. Because of the high investment and high risk
involved, risk management is an important part of project management, and advanced
technology must be used in it.
This article discusses the risk management of power construction projects based on
Bayesian networks, and carries out quantitative management for risk management.
3.1 The Relationship between Bayesian Network and the Power Risk
Management
Uncertain factors are the main problem of risk management in modern electrical
engineering, and the advantages of the Bayesian network make it more agile and
scientific for this purpose. Building the risk management model of a power project on
Bayesian networks is mainly justified by the following points:
(1) Bayesian networks have a solid theoretical foundation, and their reasoning
results are widely acceptable. At the same time, the independence relations in a
Bayesian network can express the relationship between a power engineering project and
its risks.
(2) Mature software and reasoning algorithms exist for Bayesian networks; the key is
to obtain the values of the conditional probabilities. Applying Bayesian network
reasoning algorithms, the required probabilities can be obtained even under
conditions of incomplete information.
(3) The expressiveness of Bayesian networks is well suited to risk prediction. We can
improve the network structure and parameters through the reasoning process, and
update the probability information.
130 Z. Jia, Z. Fan, and Y. Li
A Bayesian network can be constructed in two ways: by self-learning or by manual
construction. When sufficient training samples are available, the network structure
can be learned from the data. For a risk management system, however, the risks are
uncertain and sufficient training data are not available; therefore the Bayesian
network must be established and applied manually.
Constructing a Bayesian network for risk management is divided into the following
three steps:
(1) Determine the node contents
A Bayesian network is composed of nodes, and different nodes correspond to different
risk events. Therefore, we must determine the risk events that exist in the project
implementation process. Project management contains nine knowledge areas: scope
management, time management, cost management, human resources management, risk
management, quality management, purchasing management, communication management and
integrated management. Knowledge areas important to the project should be analyzed in
depth to identify the influencing factors at each level and the properties of the
corresponding factors, while knowledge areas with little impact on project
implementation can be analyzed only shallowly. The hierarchical structure refers to
the risk events within a knowledge area.
(2) Determine the relationships between nodes
After the node contents are determined, a certain method must be followed to
determine the relationship between each pair of nodes and the Bayesian network
inference. In Bayesian network inference, three types are common: causal reasoning,
diagnostic reasoning and supportive reasoning.
(A) Causal reasoning. The conclusion is obtained by reasoning from top to bottom; the
objective is to get the result. Given a cause (evidence), Bayesian network inference
yields the probability of the outcome.
(B) Diagnostic reasoning. Reasoning proceeds through the structure from bottom to
top; the purpose is to find the factors that may have occurred and to identify the
causes of a risk as well as its probability. This kind of reasoning is used to
control the risk in time, find its source, and prevent recurrence once the risk
occurs.
(3) Determine the probability distributions
The probability distribution consists of two parts: determining the probabilities of
the top-level parent nodes, P(Ei), and determining the conditional probabilities
P(Ei | Pb(Ei)). Assigning the probability distributions between events requires
extensive knowledge of risk management, and the probabilities are determined by
risk-management experts according to their experience. Therefore, a built Bayesian
network already contains authoritative expert knowledge. Because projects are complex
and their features change, the above three steps can be applied alternately and
repeatedly until the construction of the Bayesian network is finished.
4 Empirical Analysis
There are many risks in an electrical construction project, so this paper takes the
risk management of cost as an example. Cost risk is an important part of management
in electric power construction projects: cost risk can lead directly to economic
risk, exposing the project to enormous economic losses, and large amounts of money
will also be lost if cost risk is not taken into account.
In the Bayesian network for cost control, we can see the influences and relationships
among the different nodes. The states of the nodes are again determined by experts
based on experience and knowledge, and can also be characterized by analyzing the
node's own data. In our model, each node has three states, 1, 2, 3, corresponding to
low, intermediate and advanced levels.
This figure is a two-way tracking Bayesian network developed from the original
acyclic graph of risk, so both directions of Bayesian reasoning are performed at the
same time.
According to Figure 2, the result is reasoned out from expert knowledge, causal
inference and diagnostic reasoning. When the technology-factor risk rating is E = 1,
the probabilistic relationships are as follows:
P(E = 1 | K = 1, W = 1, C1 = 1, C2 = 1, C3 = 1, J = 1)
P(E = 1 | K = 2, W = 1, C1 = 1, C2 = 1, C3 = 1, J = 1)
…
P(E = 1 | K = 3, W = 3, C1 = 3, C2 = 3, C3 = 3, J = 3)
P(C3 = 1 | E = 1) = P(C3 = 1, E = 1) / P(E = 1)
The other situations are similar. According to the assumed network diagram, the
resources consist of six overlapping parts; each part can produce risk, with
probabilities P(B1), P(B2), …, P(B6), where P(Bi) > 0 (i = 1, …, 6). Then, according
to the previous algorithm, the assessment results are the following:
P1 = P(K | C0) = P(K ∩ C0) / P(C0)
   = P(C0 | K) P(K) / [P(K)P(C0 | K) + P(W)P(C0 | W) + P(C1)P(C0 | C1)
     + P(C2)P(C0 | C2) + P(C3)P(C0 | C3) + P(J)P(C0 | J)]
   = 0.1 × 0.2 / (0.1 × 0.2 + 0.2 × 0.1 + 0.1 × 0.25 + 0.3 × 0.1 + 0.2 × 0.15 + 0.1 × 0.2)
   = 0.14
P2 = P(W | C0) = P(W ∩ C0) / P(C0) = 0.14
P3 = P(C1 | C0) = P(C1 ∩ C0) / P(C0) = 0.17
P4 = P(C2 | C0) = P(C2 ∩ C0) / P(C0) = 0.21
P5 = P(C3 | C0) = P(C3 ∩ C0) / P(C0) = 0.21
P6 = P(J | C0) = P(J ∩ C0) / P(C0) = 0.14
(2) Overall assessment
According to the above analysis, we can draw the conclusion that the project's
cost-control risk level is:
P(C0) = 0.1 × 0.2 + 0.2 × 0.1 + 0.1 × 0.25 + 0.3 × 0.1 + 0.2 × 0.15 + 0.1 × 0.2 = 0.145
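The worked numbers above follow directly from the law of total probability and Bayes' rule. A short recomputation, using the priors and conditionals of the example (node labels K, W, C1–C3, J follow the paper):

```python
# Priors P(node) and conditionals P(C0 | node) from the worked example above.
priors = {"K": 0.1, "W": 0.2, "C1": 0.1, "C2": 0.3, "C3": 0.2, "J": 0.1}
likelihood = {"K": 0.2, "W": 0.1, "C1": 0.25, "C2": 0.1, "C3": 0.15, "J": 0.2}

# Total probability of the cost-risk event C0 (law of total probability).
p_c0 = sum(priors[x] * likelihood[x] for x in priors)

# Posterior P(node | C0) for each risk factor (Bayes' rule, diagnostic reasoning).
posteriors = {x: priors[x] * likelihood[x] / p_c0 for x in priors}

print(round(p_c0, 3))  # 0.145
print({x: round(p, 2) for x, p in posteriors.items()})
```

The posteriors reproduce P1–P6 above (0.14, 0.14, 0.17, 0.21, 0.21, 0.14) and the overall level 0.145.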
Because many factors affect cost control, this paper studies only a few risk factors
based on Bayesian networks; a discussion of local risk factors can also be used to
evaluate the overall risk. Because the overall cost-risk factors are more complex,
this article only gives a local cost-risk assessment model and the corresponding
algorithm. With reference to this example of calculation and analysis, a
comprehensive assessment of the actual cost can be completed.
5 Conclusion
Applying a Bayesian network model to risk management can solve the problem caused by
a lack of historical data, and the ranking of key factors can be obtained through
scenario analysis and causal analysis, with the aim of controlling the project more
effectively. However, the analytical method in this paper is hypothetical; real data
must be used in practical work to obtain more valid results and make full use of
Bayesian networks in risk management.
References
1. Li, D.J.: A serial decoding method based on Bayesian networks. Communication
Technology 115(4), 38–40 (2001)
2. Zhang, S.: Bayesian Networks in Decision Support Systems. Computer Engineering (10),
1–3 (2004)
3. Jia, Z., Fan, Z., Jiang, M.: Distribution Network Planning Based on Entropy Fuzzy Com-
prehensive Method. In: 2010 International Conference on AMM, pp. 26–28, p. 780 (2010)
4. Evergreen, F.: Engineering factors affecting the cost of construction of the project. Shiji-
azhuang Railway Institute (11), 158–160 (2003)
5. Zang, W.Y.: Bayesian Networks in stock index futures early warning of the risk. Science
of Science and Technology Management (10), 122–125 (2003)
6. Cooper, G.: Computational complexity of probabilistic inference using Bayesian belief
networks. Artificial Intelligence 42(2), 393–405 (1990)
The Design of Logistics Information Matching Platform
for Highway Transportation*
Daqiang Chen1, Xiaoxiao Zhu1, Bing Tong1, Xiahong Shen1, and Tao Feng2
1
College of Computer Science & Information Engineering, Zhejiang Gongshang University,
No.18, Xuezheng Str.,Xiasha University Town, Hangzhou, 310018, China
2
College of Economics and Business administration, China University of Geosciences,
No. 388 Lumo Road, Wuhan, 430000, China
chendaqiang@mail.zjgsu.edu.cn
Abstract. The state of logistics development during the financial crisis requires
shippers' and carriers' overall goals to focus on cost reduction. This paper first
analyzes the current problem of information mismatch between shipper and carrier, and
describes the shippers' and carriers' demands for an information platform. Then,
based on a requirement investigation and questionnaire statistics, the specific
demands for a logistics information matching platform are analyzed. Finally, a
logistics information matching platform system for highway transportation is
designed.
1 Introduction
With the development of the modern logistics industry, shortcomings in the
establishment and application of logistics information systems have turned out to be
the "bottleneck" of logistics development in China, directly hampering communication
and cooperation between logistics enterprises and users, and between logistics
enterprises and related government departments, and holding back the quality of
logistics services [1]. These shortcomings also seriously affect the competitiveness
of China's logistics enterprises [2]. With modern information technology and a
logistics information matching platform, horizontal integration between logistics
enterprises and manufacturing companies can be achieved, regional logistics
information resources can be shared, the configuration of social logistics resources
can be optimized, logistics cost can be reduced, and the whole process of logistics
operation can be upgraded. The optimization of the logistics supply chain needs the
participation of various types of business partners (especially the suppliers and
demanders of the related logistics services, such as the shipper and carrier in
highway transportation) and involves complex and varied operations. Functions such as
information release, exchange, and optimal matching between the relevant
participants, which support the logistics supply chain over the Internet, can be
easily accepted by shippers and carriers.
Therefore, making the best use of shipper and carrier information and constructing a
logistics information platform are of great significance in promoting the service
level and efficiency of the logistics industry. Based on a detailed analysis of the
state of logistics development during the financial crisis, together with an
information demand analysis and questionnaire statistics, this paper analyzes the
current domestic problem of information mismatch between shipper and carrier, their
overall cost-reduction goal and their demand for an information platform, and
proposes an overall design framework for a highway transportation logistics
information matching platform. The organization of the paper is as follows. In
Section 2, the problem of information mismatch between shipper and carrier in highway
transportation is described and analyzed, and the functional demands on the logistics
information matching platform for highway transportation are identified. In Section
3, the structural framework, system functions and database of this information
matching platform are designed. Section 4 implements six main system functions
according to its E-R diagram. The final section summarizes the major conclusions and
suggests further application prospects of this system.
the quality and effectiveness of the questionnaire directly, 85 percent were
conducted through in-depth visits and face-to-face distribution of questionnaires.
The objects of the survey include professional carriers, transportation intermediary
organizations, the transportation departments of manufacturing enterprises, and large
and medium-sized manufacturing enterprises.
These questionnaires separate the respondents into shippers and carriers. The shipper
investigation shows that 78% of shippers had trouble finding capacity for all the
goods they needed to send, and 95% of shippers are willing to share their information
with carriers. The carrier investigation shows that only 13% can currently find
freight demand information via the Internet, while 92% of carriers are willing to
search for freight demand information via the Internet. The results show that the
overall demand of shippers and carriers for an Internet-based logistics information
platform is huge and urgent.
The functional demands of shippers and carriers have the following points in common:
a) quickly searching for matching sources, b) information release, c) member service,
d) credit evaluation, e) online transactions, and f) intelligent distribution
solutions. In addition, shippers also want functions for quickly searching and
matching enterprises' verified information sources, shipping quotes, preponderant
routes, and coordinated logistics-chain guidance services.
According to the basic demands of shippers and carriers, a B/S structure for a modern
logistics information matching management system (as shown in Fig. 1) is suggested,
which integrates the user browser, a WEB server and a MYSQL database to achieve
nationwide logistics information registration, inquiry, optimized matching, online
transactions and feedback.
The advantage of adopting this B/S structure is that all applications reside on
application servers, which retains the advantages of easier maintenance, management
and cost reduction. Updating data at workstations reduces the workload of system
maintenance and modification, and suits the mobile applications of the Internet age.
The key to the database is the table structure design; the data modeling method
employed is the E-R method. The logistics information matching platform for highway
transportation should mainly include the following four basic tables:
• Car resource information tables, which mainly record the basic information of the
carrier's vehicle resources;
• Goods resource information tables, which mainly record the basic information of
the shipper's freight resources;
• Complaint information tables, which can be divided into complaints about the
carrier's vehicles and complaints about the shipper's freight;
• Feedback information tables, which mainly record feedback information about the
platform.
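The four basic tables can be sketched as follows. SQLite is used here only for illustration (the platform itself uses MYSQL), and every column name is an assumption inferred from the required input fields described later in the paper, not the authors' actual schema:

```python
# Minimal sketch of the platform's four basic tables; column names are
# assumptions inferred from the text, not the authors' schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE car_resource (      -- carrier's vehicle resource
    id INTEGER PRIMARY KEY,
    owner_name TEXT NOT NULL,
    ownership TEXT,
    insurance TEXT,
    destination TEXT,
    start_date TEXT,
    model TEXT,
    quantity INTEGER,
    volume REAL
);
CREATE TABLE goods_resource (    -- shipper's freight resource
    id INTEGER PRIMARY KEY,
    owner_name TEXT NOT NULL,
    destination TEXT,
    start_date TEXT,
    quantity INTEGER,
    volume REAL
);
CREATE TABLE complain_info (     -- complaints about either resource type
    id INTEGER PRIMARY KEY,
    resource_type TEXT CHECK (resource_type IN ('car', 'goods')),
    resource_id INTEGER NOT NULL,
    reason TEXT NOT NULL
);
CREATE TABLE feedback_info (     -- feedback about the platform itself
    id INTEGER PRIMARY KEY,
    user_name TEXT,
    content TEXT
);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

Splitting complaints into one table with a `resource_type` discriminator, rather than two separate tables, is a design choice of this sketch; the paper's wording admits either layout.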
Figure 3 is the system's E-R diagram, in which the user entity is involved in
processing the complaint information about the carrier's vehicles, the complaint
information about the shipper's freight, the carrier's vehicle resource information,
the shipper's freight resource information and the feedback information; the
carrier's vehicle resource information and the shipper's freight resource information
are correlated with each other by information matching.
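The matching correlation between vehicle and freight records could be sketched as below; the matching keys (shared destination and start date, sufficient vehicle volume) are illustrative assumptions, since the paper does not specify the matching rule:

```python
# Hypothetical information-matching step pairing freight with vehicles;
# the matching keys are assumptions, not the platform's actual rule.
def match_resources(cars, goods):
    """Pair each freight record with vehicles that share its destination
    and start date and offer enough volume."""
    matches = []
    for g in goods:
        for c in cars:
            if (c["destination"] == g["destination"]
                    and c["start_date"] == g["start_date"]
                    and c["volume"] >= g["volume"]):
                matches.append((g["id"], c["id"]))
    return matches

cars = [{"id": 1, "destination": "Hangzhou", "start_date": "2010-10-01", "volume": 20.0},
        {"id": 2, "destination": "Wuhan", "start_date": "2010-10-01", "volume": 8.0}]
goods = [{"id": 7, "destination": "Hangzhou", "start_date": "2010-10-01", "volume": 15.0}]
print(match_resources(cars, goods))  # [(7, 1)]
```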
To ensure the safety of the platform, users are allowed to enter the system only
after inputting the correct username and password. "Register" lets a new user
register, defaulting to an ordinary user. After registration, the user's status and
accessibility are locked, and the account can be used only after being unlocked by a
super user or the full administrator of the system.
There is no difference between super users and ordinary users in this function. The
owner's name, ownership, insurance, destination, start date, model, quantity and
volume of the carrier's vehicle are required for information input; empty input
causes an incorrect-information error. The information input for the shipper's
freight resources is set up similarly.
There is no difference between super users and ordinary users in this function
either. The carrier's vehicle resource ID and the reason are required; empty input
causes an incorrect-information error. Complaint information about the shipper's
freight resources is set up similarly.
In this function, a super user or the full administrator of the system can perform
operations such as adding a user, deleting a user, locking or unlocking a user,
modifying a password and returning. The add-user operation can add super users and
ordinary users. For the delete function, a super user can delete super users,
ordinary users and also themselves (except the full administrator). The lock and
unlock functions depend on the current status of the target user, i.e. super users
can lock or unlock ordinary users and super users. No locked user is allowed to log
in to the system.
5 Conclusion
In the age of the Internet economy, with an Internet-based logistics information
matching platform, carriers and shippers can reduce their operating costs, reduce the
empty-running rate of vehicles and improve production capacity, to a certain extent
avoid the waste of resources in regional logistics operation, and indirectly reduce
traffic volume and relieve traffic congestion, which is the key to improving
operational efficiency.
The information matching platform proposed in this paper has strong fault tolerance,
safety, convenient operation, stability and comprehensiveness, and is easy to expand
and transplant; it can enhance cooperation between node enterprises in the logistics
chain, promote the development of logistics informatization, and could further
promote e-business development in the logistics industry. Although it is constrained
by factors such as funds, technology and enterprise reputation, which leave some
information matching functions insufficient and still to be perfected, we believe it
has significant application prospects.
References
1. Zhao, J.J., Yu, B.Q.: Modern logistics management. Peking University Press, Beijing
(2004)
2. Qin, W.Q., Wu, J.M.: New Access to Carrier and Shipper Information: Logistics
Information Real-time Intelligent Matchmaking Platform. Logistics Technology
(210-211), 190–193 (2010)
3. Liu, C.: Analysis into the demands and Functions of Huai’An Logistics Information Plat-
form. Logistics Engineering and Management 31(9), 30–31 (2009)
4. Bagualaiwang logistics information, http://www.8glw.com
5. Public Logistics Information Service Platform of Zhejiang Province,
http://www.zj56.com.cn
6. China Wutong Logistics Information Website, http://www.chinawutong.com
An Intelligent Prediction Method Based on Information
Entropy Weighted Elman Neural Network*
1 Introduction
An artificial neural network is an intelligent tool for dealing with complex
nonlinear problems. With an appropriate choice of the number of network layers and of
hidden-layer cells, a neural network can approximate an arbitrary continuous
nonlinear function and all of its derivatives with arbitrary precision, and is thus
widely used for prediction in industrial processes.
In fault prediction, the network input values basically contribute to the network
output values with the same probability. To overcome this deficiency of essentially
equal contribution of the neural network inputs to the predicted output, this paper
proposes a prediction method based on an information entropy weighted Elman network,
combining information entropy theory with the Elman neural network.
* Supported by the National Natural Science Foundation of China (50975020) and
Funding Project (PHR20090518) for Academic Human Resources Development in
Institutions of Higher Learning under the Jurisdiction of Beijing Municipality.
The information entropy weighted neural network prediction method proposed in this
paper uses information entropy to indicate the contribution weight of a network input
to the predicted network output. Information entropy measures the average amount of
uncertainty of an information source from an overall, objective perspective: the
smaller the information entropy, the more definite the information [1-3]. Suppose a
condition monitoring and fault prediction system has n information sources (sensors),
x1, x2, …, xn, and the probabilities of the required information being provided by
these sources are p1, p2, …, pn. The information structure of the system is:
S = (X; P) = (x1 x2 x3 … xn ; p1 p2 p3 … pn)    (1)
The weight coefficients calculated from formula (4) reflect the amount of information
carried by each neural network input.
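Formula (4) is not reproduced in this excerpt, so the classic entropy-weight scheme is shown below as an assumed stand-in for the weighting step: each input channel's normalized entropy is computed, and channels whose samples are more definite (lower entropy) receive larger weights.

```python
# Assumed entropy-weighting sketch (the paper's formula (4) is not shown
# in this excerpt); uses the classic entropy-weight method as a stand-in.
import math

def entropy_weights(channels):
    """channels: one sample list per information source (sensor).
    Returns one weight per channel, summing to 1; lower-entropy
    (more definite) channels receive larger weights."""
    m = len(channels[0])
    entropies = []
    for samples in channels:
        total = sum(samples)
        freqs = [s / total for s in samples]
        # entropy normalized by log(m) so it lies in [0, 1]
        h = -sum(f * math.log(f) for f in freqs if f > 0) / math.log(m)
        entropies.append(h)
    slack = [1.0 - h for h in entropies]
    return [s / sum(slack) for s in slack]

# The first channel's samples are concentrated (more definite),
# so it receives the larger weight.
w = entropy_weights([[1.0, 1.0, 1.0, 5.0], [1.0, 2.0, 3.0, 4.0]])
print(w)
```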
144 T. Chen, X.-l. Xu, and S.-h. Wang
The Elman network used in the information entropy weighted Elman neural network is a
typical feedback neural network, which has a simple structure yet stronger computing
power and a stronger ability to adapt to time-varying characteristics than forward
neural networks [4]. Unlike RBF, BP and other forward networks, the Elman network
adds a context layer to the hidden layer, in addition to the input, hidden and output
layers. The context layer acts as a one-step delay operator and memorizes the
previous output value of the hidden-layer units [5,6]. The structure of the Elman
neural network is shown in Figure 1.
As shown in Figure 1, k stands for the moment; y, x, u and xc respectively represent the out-
put of the network, the output of the hidden layer, external input of network, and
output of context layer. w1,w2,w3 respectively stands for the connection weight ma-
trixes from the context layer to hidden layer, from the input layer to hidden layer,
from the hidden layer to the output layer respectively. b1 and b2 are the threshold
values of input layer and hidden layer.
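One time step of the network described above can be sketched as follows (a minimal illustration; variable names mirror the paper's notation, with the biases placed per the standard Elman formulation and random weights standing in for trained ones):

```python
import numpy as np

def elman_step(u, x_prev, w1, w2, w3, b1, b2):
    # Hidden layer combines the external input u with the context layer x_prev,
    # i.e. the hidden output memorized from the previous moment via the
    # one-step delay operator.
    x = np.tanh(w1 @ x_prev + w2 @ u + b1)
    y = w3 @ x + b2          # output layer
    return y, x              # x is fed back as the context for the next moment

# 2 inputs, 3 hidden units, 1 output; weights are random for illustration only
rng = np.random.default_rng(0)
w1, w2, w3 = rng.normal(size=(3, 3)), rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
b1, b2 = np.zeros(3), np.zeros(1)
x = np.zeros(3)                          # empty context at the first moment
for u in ([0.1, 0.2], [0.3, 0.1]):       # unroll two time steps
    y, x = elman_step(np.array(u), x, w1, w2, w3, b1, b2)
```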
The information-entropy-weighted Elman neural network, based on the Elman neural
network, adds an information entropy weighted processing layer between the
input layer and hidden layer. Its structure is shown in Figure 2. The weighted
processing layer identifies the contribution of each network input to the predicted
network output, and yields a consistent description of the running condition of
mechanical equipment.
This paper takes the flue gas turbine in a large-scale petrochemical company as the
research object. The flue gas turbine is a key device in the catalytic cracking
energy recovery system. To ensure safe operation and scientific maintenance,
conducting fault prediction will effectively avoid contingencies, save a great
deal of maintenance fees and increase the equipment utilization rate.
In accordance with the operating characteristics of the flue gas turbine, we collect
the historical vibration data measured at the YT-7702A bearing point of the turbine,
and extract the vibration attribute value of the 1 double frequency component to build
a 3-layer information-entropy-weighted Elman network for condition trend prediction.
In the constructed network, we select the vibration attribute values of five consecutive
days as the network input, and the vibration attribute value of the sixth day as the
output; that is, the input layer has 5 neurons and the output layer has 1 neuron. The
iterative prediction mode, formed by iterating single-step predictions, is used.
The transfer function from the network input layer to the hidden layer adopts the hy-
perbolic tangent sigmoid function. To use this function effectively and to ensure
the nonlinear approximation ability of the neural network, we normalize the sample
data into the (-0.9, 0.9) interval:
\hat{x}_i = \frac{1.8\,(x_i - x_{\min})}{x_{\max} - x_{\min}} - 0.9   (5)
After training, the actual value of the network output is obtained by the inverse
transformation; that is, anti-normalization is applied to the prediction results to
recover the actual predicted values.
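Formula (5) and its inverse (anti-normalization) can be sketched as:

```python
def normalize(x, xmin, xmax):
    # Formula (5): map a sample value into the (-0.9, 0.9) interval
    return 1.8 * (x - xmin) / (xmax - xmin) - 0.9

def denormalize(xhat, xmin, xmax):
    # Inverse transform recovering the actual prediction value
    return (xhat + 0.9) * (xmax - xmin) / 1.8 + xmin
```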
The LM algorithm is used for neural network training. As the LM algorithm does not
depend strongly on the initial value, it largely overcomes the inherent flaws and
shortcomings of the BP network; it attains the speed and precision of Newton's method
without computing the Hessian matrix. In terms of the number of training iterations
and accuracy, the LM algorithm is clearly superior to the conjugate gradient method
and to the BP algorithm with a variable learning rate [7,8].
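A single LM weight update can be sketched generically (an illustration of the update rule Δw = (JᵀJ + μI)⁻¹Jᵀe, not the paper's specific training code):

```python
import numpy as np

def lm_update(jacobian, error, mu):
    # Levenberg-Marquardt step: solve (J^T J + mu*I) dw = J^T e.
    # J^T J approximates the Hessian, so the exact Hessian is never computed;
    # mu blends between gradient descent (large mu) and Gauss-Newton (small mu).
    J = np.atleast_2d(np.asarray(jacobian, dtype=float))
    e = np.asarray(error, dtype=float)
    A = J.T @ J + mu * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ e)

# With J = I and mu = 1, the step is exactly half the residual
dw = lm_update([[1.0, 0.0], [0.0, 1.0]], [2.0, 4.0], 1.0)
```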
The optimal number of nodes in the hidden layer is determined by the trial-and-error
method. Table 1 shows the prediction performance for different numbers of neurons in
the hidden layer of the constructed neural network.
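The trial-and-error selection can be sketched generically (the scoring callable is hypothetical; in the paper it corresponds to training the network once per candidate size and measuring its error, as in Table 1):

```python
def select_hidden_nodes(candidates, train_and_score):
    # Train/evaluate once per candidate hidden-layer size and keep the size
    # with the smallest error (e.g. MAPE or MSE on validation data).
    errors = {h: train_and_score(h) for h in candidates}
    return min(errors, key=errors.get)

# Toy scorer standing in for real network training: error minimized at 11 nodes
best = select_hidden_nodes(range(8, 14), lambda h: (h - 11) ** 2)
```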
As can be seen from Table 1, when the number of neurons in the hidden layer is 11,
the prediction errors MAPE and MSE are minimal and the number of training
steps is intermediate; after a comprehensive assessment of training steps and errors,
with priority given to error performance, the optimal number of hidden-layer nodes
is finally determined to be 11.
The information-entropy-weighted Elman neural network is used to predict the vibra-
tion attribute value of the flue gas turbine, and a comparison with the plain Elman
neural network is made. The prediction results of the information-entropy-weighted
Elman and Elman methods are shown in Figure 3, and Table 2 shows the prediction
performance of the two methods.
4 Conclusion
The information-entropy-weighted neural network is an intelligent prediction method
combining neural networks with information entropy theory. It overcomes the
deficiency that every neural network input contributes with basically the same
probability to the predicted output. The prediction results show that the information-
entropy-weighted Elman neural network has higher prediction accuracy and better
real-time prediction performance. The analysis indicates that the proposed method is
feasible for condition trend prediction of large equipment, with broad application
prospects.
Acknowledgments
The authors appreciate the comments of the anonymous reviewers, and thank the
scholars listed in the references, whose wisdom and creative achievements gave us
inspiration.
References
1. Barnum, H., Barrett, J., Clark, L.O., et al.: Entropy and information causality in general
probabilistic theories. New Journal of Physics 3, 1–32 (2010)
2. Zhang, Q.-R.: Information conservation, entropy increase and statistical irreversibility for
an isolated system. Physica A (388), 4041–4044 (2009)
3. Zhang, J., Mi, X.: Neural Network and Its Application in Engineering. China Machine
Press, Beijing (1996)
4. Elman, J.L.: Finding Structure in Time. Cognitive Sci. (14), 179–211 (1990)
5. Raja, S., Toqeer, N., Suha, B.: Speed Estimation of an Induction Motor using Elman Neu-
ral Network. Neurocomputing (55), 727–730 (2003)
6. Ciarlini, P., Maniscalco, U.: Wavelets and Elman Neural Networks for Monitoring Envi-
ronmental Variables. Journal of Computational and Applied Mathematics (221), 302–309
(2008)
7. Arab, C.M., Beglari, M., Bagherian, G.: Prediction of Cytotoxicity Data (CC50) of Anti-
HIV 5-pheny-l-phenylamino-1H-imidazole Derivatives by Artificial Neural Network
Trained with Levenberg–Marquardt Algorithm. Journal of Molecular Graphics and Model-
ling (26), 360–367 (2007)
8. Bahram, G.K., Susan, S.S., Troy, N.H.: Performance of the Levenberg–Marquardt Neural
Network Training Method in Electronic Nose Applications. Sensors and Actuators
B (110), 13–22 (2005)
A Multi-layer Dynamic Model for Coordination Based
Group Decision Making in Water Resource Allocation
and Scheduling
1 Introduction
The institutional reform of water management in China is moving toward unified basin
water resource management and integrated regional water affairs management, and
requires building cooperative mechanisms of multi-party participation, democratic
consultation, common decision-making, individual responsibility and efficient
execution. However, current water allocation management and decision support
systems remain largely at the individual decision-making level, while the practical
requirements of water resource regulation call for transforming the individual
decision-making mode into a group decision-making mode.
Many researchers [1-5] have expounded that the selection of a water re-
source decision-making scheme is not an individual decision-making problem but a
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 148–153, 2011.
© Springer-Verlag Berlin Heidelberg 2011
A Multi-layer Dynamic Model for Coordination Based Group Decision Making 149
group decision-making problem; however, these studies mainly pay attention to water
resource issues such as water resource bearing capacity, rather than to the group
decision-making problem of how to cooperate efficiently.
According to the characteristics of water resource configuration, this paper proposes
a multi-layer dynamic model of coordination-based group decision making for water
resource allocation and scheduling. The model describes a multi-objective, multi-layer,
multi-time-interval, multi-restriction and multi-round decision-making process built on
cooperation. To solve the conflict problem of group schemes in the model, this paper
proposes a conflict resolution scheme based on group utility distance optimization.
Finally, the model is preliminarily validated on the Swarm simulation platform.
and constraint opinions for the upper (K-1)-th layer come into being and are fed
back to the (K-1)-th layer's cooperative group; otherwise, objectives, schemes and
constraints should be adjusted, a bonus-penalty factor should be added, and the
process returns to Step k2.
U_t^j(a_i) = \big|\, |\min a_i^0 - \max a_i^0| / 2 - |\min a_i^j - \max a_i^j| / 2 \,\big|   (1)

where U_t^j(a_i) is the distance between the center points of two normalized utility
regions, and t denotes the t-th adjusting round.
Definition 3-2: The group adjusting inclination describes the optimization of the index
vector distance after group adjusting, and is denoted B(a_i):

B(a_i) = \Big| \sum_{j=1}^{N} U_t^j(a_i) \Big| - \Big| \sum_{j=1}^{N} U_{t-1}^j(a_i) \Big|, \quad i = 1, \ldots, M   (2)
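Formulas (1) and (2) can be sketched directly (here a_i^0 and a_i^j are read as the value ranges of attribute a_i in the reference scheme and in decision maker j's scheme; this reading of the notation is an assumption):

```python
def utility_distance(a0_vals, aj_vals):
    # Formula (1): distance between the half-widths of the two normalized
    # utility regions (reference region vs. decision maker j's region)
    half0 = abs(min(a0_vals) - max(a0_vals)) / 2
    halfj = abs(min(aj_vals) - max(aj_vals)) / 2
    return abs(half0 - halfj)

def group_inclination(U_t, U_prev):
    # Formula (2): B(a_i) < 0 means round t reduced the summed distances,
    # i.e. the conflict over attribute a_i is smoothing out
    return abs(sum(U_t)) - abs(sum(U_prev))

d = utility_distance([0.0, 4.0], [1.0, 2.0])      # |2 - 0.5| = 1.5
b = group_inclination([0.2, 0.1], [0.5, 0.4])     # negative: conflict shrinking
```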
cannot be satisfied, which is taken as the globally optimized adjusting attribute, and
ΔU_l^*(a_i) adjusting is performed.
Rule 2: If the decision maker does not make any adjustment, then a penalty equal to
K·ΔU_l^*(a_i) is given, where K is greater than 1.
Rule 3: If the decision maker makes the optimized adjustment and the sharpness of the
current conflict tends to smooth out, i.e., B(a_i) shows a declining trend, then
preference-compensated encouragement can be given.
Rule 4: If the decision maker's adjustment is not the optimized one, then no reward or
punishment is given.
② Decision makers’ cooperative strategy
Rule 1: Observe the globally optimized adjusting attribute; if that attribute lies
within the decision maker's own tolerance range of preference adjustment, the
preference sequence is adjusted according to the globally optimized adjusting
attribute.
Rule 2: If the utility is incompatible with the attribute-adjusting preference, i.e.,
the utility loss is too large, then, by observing the other conflicting attributes, the
attribute closest to the personal preference structure is chosen and yielded on, which
helps to bring about a new optimized group inclination.
Because group preference opinions are embodied in the cooperator's adjustments, the
game among decision makers is implicit in the interaction between decision makers
and cooperators.
4 Simulations
This paper conducts simulation experiments on the Swarm simulation platform, making
use of partial statistical data published in references [6-9], and simulates group
decision-making composed of three decision makers and one cooperator.
(1) Satisfaction convergence under independent preferences
Taking the initial configuration set in reference [10], decision makers are assumed to
hold independent preferences and to be concerned only with the constraints issued by
the coordinator, so that satisfaction depends on the coordinator's tendency. Simulated
results are shown in Figure 1.
From Figure 1 we can see that if the cooperator imposes mandatory steps, i.e., a
prescriptive plan, the randomly distributed satisfaction degrees can hardly converge.
The configuration of reference [10] yields a high satisfaction degree upstream but a
low satisfaction degree downstream, and the average satisfaction degree is not high.
From the standpoint of the algorithm design, under the independence assumption and a
mandatory strategy, the reference standard of each decision maker is unique and later
effects are not considered. Moreover, the decision makers' strategies are mutually
irrelevant and their conditions independent, so they cannot interact; thus the random
character of the satisfaction degree demonstrates the rationality of the algorithm's
design and implementation.
(2) Satisfaction convergence under correlated preferences
Taking the initial configuration set in [10], the decision makers and the cooperator
are assumed to adopt correlated preferences; that is, the other participants'
preferences are considered, and each personal strategy is modulated according to the
group preference inclination. The simulated results are shown in Figure 2.
152 W. Huang et al.
If differentiated water prices for peak and valley periods are introduced, then under
a complementary configuration of water-use times the long-term satisfaction degree is
high and increases year by year, while convergence is fast, as shown in Figure 2.
These results accord with the long-term benefit preference built into the algorithm
design.
5 Conclusion
This paper proposes MCGD (Multi-layer Cooperative Dynamic Group Decision), a group
decision-making method that meets the needs of water resource allocation and
scheduling by combining multi-objective, multi-layer, multi-period, multi-attribute
and multi-round group decision-making. The characteristic of cooperative multi-layer
dynamic group decision-making is that group decisions are adopted to the greatest
extent and the results satisfy multiple parties through cooperation, instead of
settling for an incompletely compromised equivalent solution. Through cooperation and
compromise, decision makers are impelled to avoid conflicts, and an integrally
optimized solution is obtained under conditions satisfactory to every party. This
decision-making mode corresponds to the dynamic configuration of limited water
resources.
Acknowledgment. This work is supported by the National Natural Science Foundation
of China (No. 50479018) and the Fundamental Research Funds for the Central Uni-
versities of China (No. 2009B20414).
References
1. Wang, H., Qin, D.Y., Wang, J.H.: Concept of system and methodology for river basin water
resources programming. Shuili Xuebao 2002(8), 1–6 (2002) (in Chinese)
2. Chen, S.Y., Huang, X.C., Li, D.F.: A multi-objective and group-decision-making model of
water resources and macro-economic sustainable development in Dalian. Shuili Xue-
bao 2003(3), 42–48 (2003) (in Chinese)
1 Introduction
*
Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 154–164, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Analysis of Mode Choice Performance among Heterogeneous Tourists 155
congested urban road network and limited parking spaces will make it difficult for
individual transport to be used during the Expo; as such, high rates of public
transport utilization will be necessary. Hence, exploring the trip mode choice
behaviour of Expo tourists is the keystone of traffic planning for Expo 2010 Shanghai,
especially regarding the differences among heterogeneous tourists from various
departure areas.
Over the past few decades, research interest in the link between travel choice be-
haviour and the contribution of various independent variables has blossomed. More
recently, discrete choice models based on SP methods have become popular among
academics, governments and consulting companies to explore many aspects of trans-
portation, including mode choice behaviour, urban forms, levels of service, prices and
so on (Fowkey and Preston 1991; Bhat 1997, 2005; Raney etl. 2000; Punj and Brookes
2001; Johansson et al. 2006; McMillan 2007; Lu and Peeta 2009).
Tourists differ from daily commuters in several ways that suggest different analysis
methods may be necessary. First, tourists are usually unacquainted with the urban
transport system of the visited city and may prefer direct and comfortable trip modes.
Second, because of differences in time and expense budgets, the variability of
tourists' choice behaviours may be greater than that of urban commuters. Third, the
features of tourists' origins and destinations are more complicated than those of
daily commuters, so tourist-oriented management strategies must be flexible in
location as well as scale to account for spatially shifting demand.
Many strategies have been proposed for distinguishing among groups of
travellers, including ones based on attribute cutoffs, clusters of travel attitudes, moti-
vations or preferences, behavioural repertoires for different activities and hierarchical
information integration (Swait, 2001; Johansson et al., 2006; Steg, 2005; Van Exel et
al., 2005; Anable and Gatersleben, 2005; Bamberg and Schmidt, 2001; Tam et al.,
2008; Molin and Timmermans, 2009).
This paper presents a joint model system to investigate the differences in trip mode
choice behaviour among tourist groups. The model system takes the form of a joint
clustering analysis and hierarchical logit (HL) model. The remainder of this paper is
organised as follows. The next section describes the clustering analysis method used
to distinguish the various types of tourists, since the choice sets vary with the
attributes of the groups. This is followed by the structure of the HL model, which is
characterised by grouping all subsets of tourist-correlated options into hierarchies;
each nest of the HL model is represented by certain tourist features that differ from
the others. Then the joint model system is used to estimate trip shares on a survey
sample of potential tourists for Expo 2010 Shanghai, conducted at an airport, a train
station and highway service stations in Shanghai. The last section presents the
concluding comments.
indicates that Shanghai visitors will account for 25% of total visitors; visitors from the
Yangtze Delta, 40%; visitors from other regions of China, 30%; and visitors from
overseas, 5%, as shown in Figure 1.
Fig. 1. Predicted composition of visitors to Expo 2010 Shanghai by departure area
(Shanghai local 25%, Yangtze Delta 40%, other domestic regions 30%, overseas 5%)
Because of the range of tourists to Expo 2010 Shanghai and their various attributes, it
will obviously be difficult to obtain satisfactory results putting all Expo tourists into a
single group for analysis and modelling. In addition, as the World Expo has not pre-
viously been held in China, there is no reference to aid in the understanding or predic-
tion of tourist trip mode choice behaviour over the duration of this mega-event. Given
such a backdrop, this paper developed a two-stage gradual stated preference survey
method for the in-depth study of Expo tourist trip mode choice behaviour. The
procedure is presented in Figure 2 (Stage 1: research into trip mode choice and design
of the stated preference survey; data collection and homogeneity analysis).
Based on the Stage 1 survey, cluster analysis is utilized to classify the potential
Expo tourists. Cluster analysis groups data objects based only on information found in
the data that describes the objects and their relationships. Its aim is that objects
within a group be similar to one another and different from objects in other groups.
The procedure of Cluster Analysis is:
Step 1 is transforming the variables to standard scores. In clustering, the scale of
the original data often affects comparison and computation; hence, transforming the
variables to standard scores is necessary. There are n subjects, each with p
variables. Cluster analysis begins with an n×p original matrix, as follows:
X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}   (1)
where x_{ij} and x_{kj} denote the j-th variable of the i-th and k-th subjects.
Then the distance matrix is:
D = (d_{ij}) = \begin{pmatrix} d_{11} & d_{12} & \cdots & d_{1n} \\ d_{21} & d_{22} & \cdots & d_{2n} \\ \vdots & \vdots & & \vdots \\ d_{n1} & d_{n2} & \cdots & d_{nn} \end{pmatrix}   (4)

where d_{ii} = 0 and d_{ij} = d_{ji}.
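Steps 1 and 2 can be sketched as follows (the pairwise distance formula between the standardization step and matrix (4) is not reproduced above, so Euclidean distance is assumed here):

```python
import math

def standardize(X):
    # Step 1: z-scores per variable (column) across the n subjects
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    sds = [math.sqrt(sum((row[j] - means[j]) ** 2 for row in X) / n) for j in range(p)]
    return [[(row[j] - means[j]) / sds[j] for j in range(p)] for row in X]

def distance_matrix(Z):
    # Step 2: symmetric n*n matrix with d_ii = 0 and d_ij = d_ji
    n = len(Z)
    return [[math.sqrt(sum((Z[i][k] - Z[j][k]) ** 2 for k in range(len(Z[0]))))
             for j in range(n)] for i in range(n)]

Z = standardize([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
D = distance_matrix([[0.0, 0.0], [3.0, 4.0]])
```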
158 S. Jiang, Y. Du, and L. Sun
Step 3 is choosing a clustering algorithm, which sorts the subjects into significantly
different groups such that the subjects within each group are as homogeneous as
possible and the groups are as different from one another as possible. Several types
of clustering prove useful in practice, such as agglomerative hierarchical cluster
analysis, conceptual clustering, k-means cluster analysis and fuzzy cluster analysis.
K-means cluster analysis, which is suitable for large samples, is chosen in this
paper. The k-means algorithm assigns each point to the cluster whose center is
nearest; the center is the average of all the points in the cluster, that is, its
coordinates are the arithmetic mean for each dimension taken separately over all the
points in the cluster. The main advantages of this algorithm are its simplicity and
speed, which allow it to run on large datasets.
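A minimal k-means sketch following the description above (assignment to the nearest center, then recomputing each center as the per-dimension arithmetic mean; initialization and iteration count are simplified assumptions):

```python
import random

def kmeans(points, k, iters=10, seed=0):
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        # assign each point to the cluster whose center is nearest
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # move each center to the arithmetic mean of its points, per dimension
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of three points each
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, 2)
```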
According to the data analysis of the Stage 1 survey, we find that trip mode shares
vary considerably across distinct groups. Therefore, this paper uses the unconstrained
first choice of trip mode, travel time and travel cost as the characteristic variables
for clustering. Three types of groups emerge after 10 iterations. The ratios of
tourists from different departure areas in each type are shown in Table 1, and the
analysis-of-variance results are shown in Table 2.
Table 1. Ratios of tourists from different departure areas in each type

Type I      76.82%    4.71%    18.46%
Type II     14.38%   11.09%    74.53%
Type III    17.51%   69.59%    12.90%
Table 2. ANOVA

                Cluster                  Error
                Mean Square    df        Mean Square    df       F           Sig.
First Choice    1000.144       2         .775           1185     1290.372    .000
These results reveal that the three variables of each type are statistically
significant (p<0.001). Hence the classification is reasonable, and the three types can
be distinguished significantly. There are significant differences in the ratios of
tourists from different departure areas in each cluster. Therefore, tourists are
categorized according to their departure areas into local visitors, out-of-town
one-day-trip visitors and out-of-town lodging visitors. In the Stage 2 survey, group
classification was carried out based on the different attributes favoured by these
three types of tourists.
This section describes the model development based on the above group division and
the Stage 2 survey data. With the development of stated preference survey techniques,
various new and powerful random utility choice modelling tools have been put into
practice. In this paper, the hierarchical logit (HL) model is used to distinguish the
differences in trip mode choice behaviour among heterogeneous interviewees.
The hierarchical logit model is specified to account for the fact that alternatives
are not independent. Dependencies between trip mode alternatives can be represented
by the scale differences of the error components of different facets. The structure of
the HL model is characterised by grouping all subsets of tourist-correlated options
into hierarchies, with each nest represented by certain tourist features that differ
from the others. Based on the results of the Stage 1 survey, the upper nests are
park-and-ride and direct arrival for out-of-town one-day-trip visitors, whereas the
upper nests are private transport and public transport for local and out-of-town
lodging visitors.
The two kinds of nested structure for the different Expo visitor groups are depicted
in Figure 3 and Figure 4. The former is for local and out-of-town lodging visitors;
the latter is for out-of-town one-day-trip visitors.
Fig. 3. The choice of trip mode nested under transport type for local and out-of-town
lodging visitors (nest utilities UA, UB over the alternatives taxi, private car,
subway and Expo shuttle bus)
Fig. 4. The choice of trip mode nested under transport type for out-of-town
a-day-trip visitors (nest utilities UA, UB over the alternatives taxi, bus, subway
and private car)
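The two-level HL choice probabilities depicted in Figures 3 and 4 can be sketched as follows (a generic nested logit with nest scale parameters; the utilities and parameter values are illustrative assumptions, not the authors' estimation code):

```python
import math

def nested_logit_probs(V, nests, theta):
    # V: systematic utility per alternative; nests: nest -> list of alternatives;
    # theta: nest scale in (0, 1]. P(alt) = P(nest) * P(alt | nest), where the
    # nest probability uses the logsum (inclusive value) of its alternatives.
    logsums = {m: theta[m] * math.log(sum(math.exp(V[a] / theta[m]) for a in alts))
               for m, alts in nests.items()}
    denom = sum(math.exp(s) for s in logsums.values())
    probs = {}
    for m, alts in nests.items():
        p_nest = math.exp(logsums[m]) / denom
        within = sum(math.exp(V[a] / theta[m]) for a in alts)
        for a in alts:
            probs[a] = p_nest * math.exp(V[a] / theta[m]) / within
    return probs

# Structure of Fig. 3: private vs. public transport nests (utilities hypothetical)
p = nested_logit_probs(
    {"taxi": 0.0, "car": 0.4, "subway": 1.2, "shuttle": 1.3},
    {"private": ["taxi", "car"], "public": ["subway", "shuttle"]},
    {"private": 0.8, "public": 0.8})
```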
we finally determine that the multinomial logit model choice utility function for the
Expo trip mode choice is

V_{in} = ASC_i + \beta_1 x_{1n} + \beta_2 x_{2n} + \beta_3 x_{3n}   (6)
where ASC_i is the alternative-specific constant of mode i, and x_{1n}, x_{2n} and
x_{3n} are the characteristic variables walking time, travel time and travel cost.

Table 3. Estimation results of the trip mode choice models (coefficient / t-statistic)

                         All Tourists       Local Visitor      Out-of-town          Out-of-town
                                            Group              A-day-trip Group     Lodging Group
Constant coefficients
  Taxi                   0 / -              0 / -              0 / -                0 / -
  Tube                   1.76 / 15.3**      2.31 / 3.8**       -0.073 / 2.4*        1.29 / 5.2**
  Expo Shuttle Bus       1.96 / 11**        2.26 / 4.7**       -0.23 / -1.75        1.3 / 3.8**
  Private Car            -0.02 / -0.9       1.13 / 12.6**      -0.13 / 2.1*         -0.65 / 2.1*
Characteristic variable coefficients
  Walking Time           -0.0028 / -0.2     -0.009 / -4**      -0.046 / -2.4*       -0.029 / -1.96*
  Travel Time            -0.0066 / -2.6*    -0.006 / -3.3**    -0.021 / -3.4**      -0.017 / -1.99*
  Travel Cost            -0.01 / -2.9**     -0.014 / -2.24*    -0.018 / -4.1**      -0.018 / -2.16*
Other variable coefficients
  NESTA                  1 / 5.6**          1.22 / 11.5**      1.01 / 3.1**         1 / 1.1E3**
  NESTB                  9.33 / 2.7**       7.49 / 3.1**       7.53 / 2.39*         5.23 / 2*
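As an illustration of equation (6), the systematic utility of one alternative can be evaluated directly; here the all-tourist coefficients for the subway (tube) alternative from Table 3 are used, while the attribute values (walking time, travel time, travel cost) are hypothetical:

```python
def mode_utility(asc, betas, xs):
    # Equation (6): V_in = ASC_i + beta1*x1n + beta2*x2n + beta3*x3n
    return asc + sum(b * x for b, x in zip(betas, xs))

# ASC = 1.76; betas for walking time, travel time, travel cost (Table 3, all tourists);
# a trip with 5 min walking, 40 min in-vehicle time, 20 yuan cost (hypothetical values)
v_tube = mode_utility(1.76, [-0.0028, -0.0066, -0.01], [5, 40, 20])
```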
departure area greatly influences trip mode choice behaviour. Hence, the model clas-
sification is based on tourist departure area. The results of the estimation of the trip
choice model parameters for four models are shown in Table 3. A probability level of
<0.05 was used as the threshold for statistical significance. The goodness-of-fit of the
logit models was assessed using the adjusted ρ² statistic, where an adjusted ρ² value
greater than 0.2 indicates a good fit. As the adjusted ρ² value for the overall model
is less than 0.2, the goodness-of-fit of that model does not meet the requirement. The
adjusted ρ² values for the classification models are all greater than 0.2, lying in
the 'good fit' range. Furthermore, the t-statistic values are significant at the 5%
level. These parameters imply that the classification models improve analytic
precision effectively.
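The goodness-of-fit statistic can be sketched with McFadden's adjusted ρ² (this standard definition is assumed, as the paper does not reproduce the formula; the log-likelihood values below are hypothetical):

```python
def adjusted_rho_squared(ll_model, ll_zero, n_params):
    # McFadden's adjusted rho-squared: 1 - (LL(beta) - K) / LL(0),
    # penalizing the fitted log-likelihood by the number of parameters K
    return 1.0 - (ll_model - n_params) / ll_zero

# Hypothetical fitted and null log-likelihoods with K = 5 parameters
r = adjusted_rho_squared(-800.0, -1000.0, 5)
```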
The model parameters for the local visitor group show that the coefficient of walking
time is much greater in magnitude than that of travel time; that is, walking time has
a greater influence on the utility value than travel time does. This indicates that
local visitors are more sensitive to walking time and less sensitive to waiting and
riding time, perhaps because a long walking time requires greater endurance and
reduces comfort. Local visitors are also sensitive to changes in travel cost.
The model parameters for the out-of-town one-day-trip visitor group show that the
coefficients of both walking time and travel time are greater in magnitude than that
of travel cost; that is, time utility outweighs travel cost utility, indicating that
these visitors are strongly sensitive to travel time but relatively less sensitive to
travel cost.
The model parameters for the out-of-town lodging visitor group show that the
coefficient of walking time is much greater in magnitude than that of travel time,
the same pattern as for local visitors. Moreover, the coefficient of walking time is
greater than that of travel cost, indicating that these visitors are strongly
sensitive to walking time but relatively less sensitive to travel cost.
5 Conclusions
On the basis of the detailed analysis of abundant survey data and the in-depth explo-
ration of trip mode choice behaviour and its effect on Expo 2010 Shanghai, we offer the
following main conclusions.
1) For a case such as Expo 2010 Shanghai, which involves diverse tourist groups, a
single-form trip mode choice behaviour model is insufficient to reflect the effect of
the variables and obtain reliable results. The clustering analysis method is used in
this paper to distinguish the various types of Expo tourists, since the choice sets
vary with the attributes of the groups. Sorted by departure area, the tourists of Expo
2010 Shanghai can be divided into three kinds: local visitors, out-of-town
one-day-trip visitors and out-of-town lodging visitors.
2) A joint model system associating clustering analysis with the HL model is
developed in this paper to investigate the differences in trip mode choice
behaviour among heterogeneous tourist groups. The statistical parameters of the three
models constructed with this joint system show that the modelling method improves
analytic precision effectively.
3) The model parameter estimation results for the three tourist groups show that
travel time, walking time and travel cost are all effective influencing factors but
differ in utility among the groups. Local visitors are more sensitive to walking time
and total travel cost; out-of-town one-day-trip visitors are more concerned about
total travel time; and out-of-town lodging visitors are highly sensitive to walking
time and total travel time.
Acknowledgements
This paper is based on the results of a research project that was supported by a research
grant (60804048) from the National Natural Science Foundation of China (NSFC) and
a research grant (NCET-08-0407) from the New Century Excellent Talents in Univer-
sity. The authors take sole responsibility for all views and opinions expressed in the
paper. The authors would like to acknowledge the following colleagues from the
Traffic Police Office in Shanghai and the University of Hong Kong for their support,
contributions and ideas that made this work possible: Mr Li Yin, Mr Xia Haiping, Dr
Zhou Xiaopeng, Ms Xiao Bin and Professor Wong SC.
References
1. Anable, J.: Complacent car addicts or aspiring environmentalists? Identifying travel be-
haviour segments using attitude theory. Transport Policy 12, 65–78 (2005)
2. Bamberg, S., Schmidt, P.: Theory-driven subgroup-specific evaluation of an intervention to
reduce private car use. Journal of Applied Social Psychology 31(6), 1300–1329 (2001)
3. Bhat, C.R.: Work travel mode choice and number of non-work commute stops. Transpor-
tation Research Part B 31(1), 41–54 (1997)
4. Bhat, C.R.: The multiple discrete-continuous extreme value (MDCEV) model: Role of
utility function parameters, identification considerations, and model extensions. Transpor-
tation Research Part B 42(3), 274–303 (2008)
5. Fowkey, T., Preston, J.: Novel approaches to forecasting the demand for new local rail
services. Transportation Research Part A 25(4), 209–218 (1991)
6. Johansson, M.V., Heldt, T., Johansson, P.: The effects of attitudes and personality traits on
mode choice. Transportation Research Part A 40(6), 507–525 (2006)
7. Lu, J.L., Peeta, S.: Analysis of the factors that influence the relationship between business
air travel and videoconferencing. Transportation Research Part A 43(8), 709–721 (2009)
8. McMillan, T.E.: The relative influence of urban form on a child’s travel mode to school.
Transportation Research Part A 41(1), 69–79 (2007)
9. Molin, E.J.E., Timmermans, H.J.P.: Hierarchical information integration experiments and
integrated choice experiments. Transport Reviews 29(5), 635–655 (2009)
10. Punj, G., Brookes, R.: Decision constraints and consideration-set formation in consumer
durables. Psychology and Marketing 18(8), 843–863 (2001)
11. Raney, E.A., Mokhtarian, P.L., Salomon, I.: Modeling individuals’ consideration of strate-
gies to cope with congestion. Transportation Research Part F 3, 141–165 (2000)
12. Steg, L.: Car use: lust and must. Instrumental, symbolic and affective motives for car use.
Transportation Research Part A 39(2-3), 147–162 (2005)
13. Swait, J.D.: A non-compensatory choice model incorporating attribute cutoffs. Transporta-
tion Research: Part B 35(10), 903–928 (2001)
14. Tam, M.L., Lam, W.H.K., Lo, H.P.: Modeling air passenger travel behavior on airport
ground access mode choices. Transportmetrica 4(2), 135–153 (2008)
15. Van Exel, N.J.A., de Graaf, G., Rietveld, P.: Getting from A to B: operant approaches to
travel decision making. Operant Subjectivity 27(4), 194–216 (2005)
16. Rui, Y., Keping, L., Jie, Y.: Traffic forecast for visitors in the World Expo 2010 Shanghai
arena. Journal of Tongji University (Natural Science) 35(8), 1053–1058 (2007)
Hybrid Schema Matching for Deep Web
College of Computer Science and Technology, Jilin University, Changchun 130012, China
Key Laboratory of Symbol Computation and Knowledge Engineering of the Ministry of
Education, Changchun, China
chenke0616@163.com, wanli@jlu.edu.cn
1 Introduction
Schema matching is the process of developing semantic matches between two or
more schemas. It is the first step and a critical part of data integration, which has
many important application areas, such as e-business, enterprise information
integration, database integration, the Semantic Web, and so on.
For the deep web, as the first step of deep web data integration, schema matching is
an important task. Currently, most research on deep web schema matching is limited
to the query interface and ignores the abundant information in query result pages. As
Figure 1 shows, the query interface (a) exposes only a few schema attributes, while
the partial query result page (b) supplies additional attributes such as format,
publication date, edition, and so on.
Based on the above observations, this paper proposes a hybrid schema matching
technique that fully utilizes existing schema matching techniques; in addition, because
it appropriately takes the schema information in query result pages into consideration,
it can effectively improve the accuracy and comprehensiveness of data integration.
* This work is supported by the National Natural Science Foundation of China under Grant
No.60973040, No.60903098; the Science and Technology Development Program of Jilin
Province of China under Grant No. 20070533; the Specialized Research Foundation for the
Doctoral Program of Higher Education of China under Grant No.200801830021;
the basic scientific research foundation for the interdisciplinary research and innovation pro-
ject of Jilin University under Grant No.200810025.
∗∗ Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 165–170, 2011.
© Springer-Verlag Berlin Heidelberg 2011
2 Related Work
2.1 Element-Level Matching
3 Problem Description
We formalize the schema matching problem as follows [1]:
The input is a set of schemas S = {S_1, S_2, ..., S_q}, ∀S_i ∈ S, 1 ≤ i ≤ q. Each schema
contains a set of attributes A = {A_1, A_2, ..., A_n} which are extracted from a query
interface and its query result pages. We assume that these schemas come from the same
domain. The problem of schema matching is to find all matchings M = {M_1, M_2, ..., M_p}.
A matching M_j (1 ≤ j ≤ p) is represented as G_j1 = G_j2 = ... = G_jw.
A schema matching example is S = {S_1, S_2} in the books domain, with S_1 = {Title, Writer,
ISBN-10, ISBN-13, Publication year, Subject, Price, Format} and S_2 = {Title, First
Name, Last Name, Subject, ISBN, Keyword, Publication Date}. The result of schema
matching is {ISBN-10} = {ISBN-13} = {ISBN}, {Writer} = {First Name, Last Name},
and {Publication Date} = {Publication year}.
Before schema matching, we need to preprocess the attribute sets contained in the
multiple schemas; the main actions of this preprocessing are combining attributes and
checking for duplicates.
There are two types of attribute combination: the combination of a query interface's
attributes with the query result pages' attributes within a single schema, and the
combination of attribute sets across multiple schemas. Whether combining within a
single schema or across multiple schemas, the same set-union operation "∪" is
performed.
After combining attributes, since many duplicates exist in the combined attribute set,
we need to detect and delete them. Table 1 shows the attribute sets of four schemas;
after combining attributes and removing duplicates,
S_merge = {Title, Author, ISBN, Keyword, Format, Publication Date, Price, List price,
Publication year, Binding, Last Name, First Name, Book description, ISBN-10,
ISBN-13, Publisher, Subject}.
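As a small sketch (function and variable names are ours, not the paper's), the combination operation "∪" with automatic duplicate removal can be expressed by representing each schema's attributes as a set:

```python
# Each schema is represented as a set of attribute names, so the union
# operation merges schemas while removing exact duplicates in one step.
def merge_schemas(schemas):
    """Combine the attribute sets of several schemas with set union."""
    merged = set()
    for attributes in schemas:
        merged |= attributes  # the "∪" operation from the text
    return merged

s1 = {"Title", "Writer", "ISBN-10", "ISBN-13", "Subject", "Price", "Format"}
s2 = {"Title", "First Name", "Last Name", "Subject", "ISBN", "Keyword"}
s_merge = merge_schemas([s1, s2])
print(sorted(s_merge))
```

Note that set union only removes exact duplicates; synonymous attributes such as Writer and Author survive the merge and are left for the matching step to resolve.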
For the schema matching discovery algorithm, this paper draws on the mining
algorithm of [9], but differs from it. When calculating matching scores, [9] relies on
the observation that synonym attributes are rarely co-present in the same schema.
However, in the query interfaces provided by [10], synonyms often do exist within
the same schema, so that algorithm has some limitations.
(1) Grouping score calculation
A grouping score between two attributes evaluates the possibility that the two
attributes are grouping attributes. We found that attributes which appear
simultaneously in the same query interfaces and query result pages are more likely to
belong to the same group, for instance the attributes First Name and Last Name in
schema S_3. The grouping score is

X_pq = C_pq / min(C_p, C_q)    (1)

where C_p and C_q are the numbers of schemas containing A_p and A_q, respectively,
and C_pq is the number of schemas in which the two attributes co-occur.
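A minimal sketch of the grouping score of Eq. (1), assuming each schema is represented as a set of attribute names and that C_p, C_q and C_pq are occurrence and co-occurrence counts over the schema collection:

```python
def grouping_score(schemas, a_p, a_q):
    """X_pq = C_pq / min(C_p, C_q): the co-occurrence count divided by
    the smaller of the two individual occurrence counts."""
    c_p = sum(1 for s in schemas if a_p in s)
    c_q = sum(1 for s in schemas if a_q in s)
    c_pq = sum(1 for s in schemas if a_p in s and a_q in s)
    if min(c_p, c_q) == 0:
        return 0.0  # one attribute never occurs: no grouping evidence
    return c_pq / min(c_p, c_q)

schemas = [
    {"Title", "First Name", "Last Name", "Subject"},
    {"Title", "First Name", "Last Name", "ISBN"},
    {"Title", "Author", "ISBN"},
]
print(grouping_score(schemas, "First Name", "Last Name"))  # 1.0
```

Here First Name and Last Name co-occur in every schema that contains either, so their grouping score is maximal.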
The grouping score threshold τ_g is used to control whether two attributes should be
placed in the same group.
In this paper, we employ the cosine similarity to calculate the matching score between
two attributes A_p and A_q. The cosine measure is the inner product of two vectors
normalized to unit length, and equals the cosine of the angle between the two vectors:

Y_pq = (V^Ap ⋅ V^Aq) / (|V^Ap| |V^Aq|) = ∑_{k=1}^{n} (V_k^Ap ∗ V_k^Aq) / ( √(∑_{k=1}^{n} (V_k^Ap)²) ⋅ √(∑_{k=1}^{n} (V_k^Aq)²) )    (2)

where V^Ap and V^Aq are the two attribute vectors, n is the dimension of the
vectors, and V_k^Ap and V_k^Aq are their k-th components.
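The matching score of Eq. (2) is a generic cosine similarity and can be sketched as follows (how the attribute vectors themselves are built is not specified in this excerpt, so the vectors below are illustrative):

```python
import math

def cosine_score(v_p, v_q):
    """Y_pq = (V_p . V_q) / (|V_p| |V_q|): the cosine of the angle
    between two equal-length attribute vectors."""
    dot = sum(a * b for a, b in zip(v_p, v_q))
    norm_p = math.sqrt(sum(a * a for a in v_p))
    norm_q = math.sqrt(sum(b * b for b in v_q))
    if norm_p == 0 or norm_q == 0:
        return 0.0  # a zero vector carries no similarity information
    return dot / (norm_p * norm_q)

print(cosine_score([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_score([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```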
5 Experiments
5.1 Dataset
We chose the TEL-8 dataset from the UIUC web integration repository [10] to test
our schema matching technique. TEL-8 contains 447 original query interfaces in
eight domains: Airfares, Hotels, Car Rentals, Books, and so on. We pick only two
representative domains: Books and Automobiles.
5.2 Results
Part of the schema matching test results is shown in Table 2. The first column gives
the domain (Books or Automobiles), the second column lists the matching results,
and the third column shows the manual judgment of whether each matching result is
correct. The results show that although the matching accuracy exceeds 90%, some
problems still remain.
6 Conclusions
This paper proposed a hybrid schema matching technique for matching schemas in
the deep web. The schema not only takes the query interface's attributes into
account, but also adds abundant schema information from the query result pages. The
system first combines the attribute set of each schema and deletes duplicated
attributes; it then separately calculates the matching score and grouping score for
each pair of attributes. Based on these two indicators, the system mines the attribute
sets and finally obtains the matching result.
References
1. He, B., Chang, K.C.-C.: Discovering complex matchings across Web query interfaces: A
correlation mining approach. In: ACM SIGKDD Conference, pp. 147–158 (2004)
2. Cohen, W., Ravikumar, P., Fienberg, S.: A comparison of string metrics for matching
names and records. In: Proceedings of the workshop on Data Cleaning and Object Con-
solidation at the International Conference on Knowledge Discovery and Data Mining,
KDD (2003)
3. Madhavan, J., Bernstein, P., Rahm, E.: Generic schema matching with Cupid. In: Proceed-
ings of the Very Large Data Bases Conference (VLDB), pp. 49–58 (2001)
4. Euzenat, J., Valtchev, P.: Similarity-based ontology alignment in OWL-lite. In: Proceed-
ings of the European Conference on Artificial Intelligence (ECAI), pp. 333–337 (2004)
5. Valtchev, P., Euzenat, J.: Dissimilarity measure for collections of objects and values. In:
Liu, X., Cohen, P.R., Berthold, M.R. (eds.) IDA 1997. LNCS, vol. 1280, pp. 259–272.
Springer, Heidelberg (1997)
6. Do, H.H., Rahm, E.: COMA - a system for flexible combination of schema matching ap-
proaches. In: VLDB 2001, pp. 610–621 (2001)
7. Maedche, A., Staab, S.: Measuring similarity between ontologies. In: Gómez-Pérez, A.,
Benjamins, V.R. (eds.) EKAW 2002. LNCS (LNAI), vol. 2473, pp. 251–263. Springer,
Heidelberg (2002)
8. Ehrig, M., Sure, Y.: Ontology mapping - an integrated approach. In: Bussler, C.J., Davies,
J., Fensel, D., Studer, R. (eds.) ESWS 2004. LNCS, vol. 3053, pp. 76–91. Springer, Hei-
delberg (2004)
9. Su, W., Wang, J., Lochovsky, F.H.: Holistic Schema Matching for Web Query Interfaces.
In: Ioannidis, Y., Scholl, M.H., Schmidt, J.W., Matthes, F., Hatzopoulos, M., Böhm, K.,
Kemper, A., Grust, T., Böhm, C. (eds.) EDBT 2006. LNCS, vol. 3896, pp. 77–94.
Springer, Heidelberg (2006)
10. Chang, K.C.-C., He, B., Li, C., Zhang, Z.: The UIUC Web integration repository. Com-
puter Science Department, University of Illinois at Urbana-Champaign (2003),
http://metaquerier.cs.uiuc.edu/repository
Study on DS/FH Technology Used in TT&C System
Abstract. This paper discusses the characteristics of the DS/FH communication
system and studies how DS/FH technology can be used in a TT&C system. A
scheme applying DS/FH to TT&C is presented, including the working model and
the hardware design. The results from a validation board show that the scheme is
feasible and suitable for TT&C systems.
1 Introduction
DS/FH technology is widely used in civil and military communications for its good
properties. The combined DS/FH technique has many advantages: a low probability
of interception, longer communication distance, immunity to multi-path effects, and
support for code division multiple access [1,2]. It is considered one of the most
powerful anti-jamming communication methods. High reliability is needed in a TT&C
system to keep an unmanned vehicle from going out of control. This paper presents a
study on how to design a TT&C system with DS/FH technology. Key technologies
and hardware design methods are introduced.
2 Structure of System
The telemetry system is responsible for transmitting the vehicle's monitoring data
during flight so that the ground control center knows the state of the vehicle in real
time [3]. The flight states, flight parameters, and the states of the equipment carried
on the unmanned vehicle are transmitted to the ground control center. These data
support the remote control system in making control decisions. The task of the
remote control system is to transmit control commands to the unmanned vehicle, to
control the equipment carried on the vehicle, and to ensure that the vehicle completes
the flight schedule [4]. The function of antenna tracking is to keep the antenna
pointed at the vehicle during the whole flight, which ensures continuous and stable
communication between the ground control center and the aircraft. Diagrams of a
general TT&C system are shown in Fig. 1 and Fig. 2.
Traditional transmitters in TT&C fall into two categories. The first transmits high-speed
data in a wide band, including SAR, CCD television, high-resolution digital cameras,
and some electronic reconnaissance equipment; this category of transmitter usually
uses OFDM technology. The other category carries narrow-band data: only the
remote control commands from the control center and the telemetry information from
the vehicle. The frequency band of this category of transmitter is about tens of kHz,
and BPSK or DSSS modulation is usually used. This paper studies the second
category, applying DS/FH technology to the TT&C transmitter to improve reliability
and security.
As we know, direct sequence spread spectrum has a distance measuring property. It
provides a second way to obtain information from the vehicle besides the telemetry
channel. From the phase of the spread spectrum code we can obtain the accurate
range from the vehicle to the control center, and from the Doppler frequency we
obtain the vehicle's radial velocity. This information is an important complement to
the telemetry system. The schematic diagram is shown in Fig. 3.
Fig. 3. Schematic diagram of TT&C system with DS/FH modulation working flow
As shown in Fig. 3, a control command is sent by the ground control center. When
the DS/FH remote control receiver on the unmanned vehicle receives the modulated
signal, the telemetry transmitter copies the control command frame and sends it back
to the control center to verify that the correct data have been received. Meanwhile,
the uplink signal is reflected by the surface of the vehicle, and the reflected signal is
received by the ground receiver using an antenna with symmetric polarization
direction. The supplementary information about the vehicle all comes from this
reflected signal.
After the data are channel coded, the I branch is formed by combining the coded data
with a short spread spectrum code, and the Q branch by combining them with a long
code. Two codes of different lengths are used to guarantee both a high data transfer
rate and a long range for distance measurement: the I branch is used for
communication and the Q branch for distance measuring. Frequency hopping is
implemented with a DDS+PLL loop. Fig. 5 shows the structure of the receiver.
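As a toy illustration of the spreading step (our own sketch; the chip values and code lengths below are illustrative, and the real system combines channel-coded data with its PN codes in hardware):

```python
def spread(bits, code):
    """DS spreading: each data bit (+1/-1) multiplies one full period
    of the spreading code's chips."""
    return [b * c for b in bits for c in code]

short_code = [1, -1, 1, 1]                 # I branch: short code, fast data
long_code = [1, 1, -1, 1, -1, -1, 1, -1]   # Q branch: long code, ranging
i_branch = spread([1, -1], short_code)
q_branch = spread([1, -1], long_code)
print(i_branch)  # [1, -1, 1, 1, -1, 1, -1, -1]
```

The short code keeps the I branch's symbol period short (high data rate), while the long code gives the Q branch a long unambiguous range for distance measurement.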
3 Key Technologies
Acquisition of the DS/FH signal requires searching for the time of the next hop, the
carrier frequency, and the phase of the spreading code, so the first important issue is
how to complete acquisition in the shortest time. In this system, a folded matched
filter is
used due to its rapid acquisition performance. It consists of 32 matched filters running
at twice the sampling rate and 32 ROMs. The local pseudo code is stored as the
matched filter coefficients in the 32 ROMs, each with a depth of 8.
The folded matched filter obtains the code phase. To speed up the total acquisition,
multiple parallel channels are needed; the number of channels depends on the
frequency range and the ratio of the code rate to the carrier Doppler frequency.
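The essence of the acquisition step, finding the spreading-code phase by correlation, can be illustrated in software. This is a simplified serial correlator for clarity, not the 32-filter folded FPGA structure described above:

```python
def find_code_phase(received, pn_code):
    """Correlate the received chip sequence (+1/-1 values) against every
    cyclic shift of the local PN code; the shift with the largest
    correlation peak is the estimated code phase."""
    n = len(pn_code)
    best_phase, best_corr = 0, float("-inf")
    for phase in range(n):
        corr = sum(received[(phase + k) % n] * pn_code[k] for k in range(n))
        if corr > best_corr:
            best_phase, best_corr = phase, corr
    return best_phase, best_corr

# Toy example: a 7-chip m-sequence delayed by 3 chips.
pn = [1, 1, 1, -1, 1, -1, -1]
rx = pn[-3:] + pn[:-3]          # received signal = code delayed by 3
print(find_code_phase(rx, pn))  # (3, 7): peak at the true phase
```

An m-sequence is used because its off-peak cyclic autocorrelation is -1, so the correlation peak at the correct phase is unambiguous.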
There is much work to do to establish equivalent fast linear functions to replace the
accurate but nonlinear operations. Our laboratory completed this work in 2010, and
the tracking loop, implemented entirely in FPGA, now works stably.
The carrier series is generated by the FPGA through a DDS unit. The frequency
synthesizer cascades the DDS and a PLL, with the PLL set to frequency
multiplication mode. The PLL is well known for its high spectral purity [5], while the
DDS has the advantages of rapid frequency changes and high resolution. In this
system the two are combined: we use the DDS to drive the PLL and obtain a hopping
carrier adjustable over a wide range, and thanks to the PLL the output signal has a
high-purity spectrum. Note that the settling time of the PLL is on the order of 200 µs,
during which data cannot be received normally, so data should be transmitted only in
the later part of each hop interval.
4 Experiment
The experimental environment is shown in Fig. 9. The code rate is 10 MHz, the
carrier frequency ranges from 15 MHz to 25 MHz, the data rate is 10 kHz, the code
length in one period is 256 bits, and the frequency hopping rate is 1 khop/s. In Fig. 8,
the left board is the receiver and the right one is the transmitter.
The results show that the DS/FH system works successfully at a data rate of 10 kHz,
which is sufficient for a narrow-band TT&C system. Fig. 10 shows the tracking
results captured with a logic analyzer.
5 Conclusion
This paper discussed the characteristics of the DS/FH communication system and
studied how DS/FH technology can be used in a TT&C system. A scheme applying
DS/FH to TT&C was presented, including the working model and the hardware
design. The results from the validation board show that the scheme is feasible and
suitable for TT&C systems.
References
[1] Simone, L., Salerno, N., Maffei, M.: Frequency Hopping Techniques for Secure Satellite
TT&C: System Analysis & Trade. Alcatel Alenia Space, Italy. IEEE, Los Alamitos
(2006)
[2] Sung, H.Y., Sung, K.P.: DS/CDMA PN code acquisition by neural network. In: Vehicu-
lar Technology Conference, pp. 276–280. IEEE, Los Alamitos (1995)
[3] Zhang, X.: TT&C and Electronic System Design for Pilotless Helicopter. Acta Aero-
nautica et Astronautica Sinica 23(5), 432–434 (2002)
[4] Lu, G., Zhang, X.: Design and Implementation of the Telemetry System for Pilotless
Helicopter. Journal of Beijing University of Aeronautics and Astronautics 29, 113–115
(2003)
[5] Cercas, F.: A Hardware Phase Continuous Frequency Synthesizer for Frequency Hop-
ping Spread Spectrum. In: Technical Conference Proceedings, pp. 403–406 (1989)
RFID-Based Critical Path Expert System for Agility
Manufacture Process Management
Abstract. This paper presents a critical path expert system for agility manufacture
process management based on radio frequency identification (RFID) technology.
The paper shows that agility manufacture processes can be made visible and
controllable with RFID: critical paths and activities can be easily identified and
tracked by RFID tracing technology, and the expert system can optimize the
bottleneck of the task process with a critical-path adjusting and reforming method.
Finally, the paper gives a simple application example of the system, discussing how
to adjust the critical paths and how to make the process more agile and flexible with
the critical path expert system. With an RFID-based critical path expert system,
agility manufacture process management becomes more effective and efficient.
1 Introduction
In China, industries have used computer network platforms for several decades to
improve their management. Information services for product processes, material
storage, daily tasks, and industry policies provide a lot of support for industrial
management. But because most production tasks have cost and time limits and share
labor and equipment resources, it is difficult to manage tasks that generally have
complex links with one another. Assembly manufacturers always want to control
every critical path in a product process. However, computer network platforms such
as ERP, MRPII, and SAP cannot provide real-time monitoring information. Real-time
activities are usually traced and regulated manually, and the collected information
always lags behind the real-time response it should support, so the real-time
information of each working section impacts the others in an agile process.
Identifying the real-time activity information of a product being handled in
interactive task flows is still a big problem for managers and researchers. Therefore,
the problem is how to make manufacture processes traceable, controllable, and
optimizable with real-time information tracking technology. We also explore how to
use the real-time information for process
adjusting and to improve the agility and responsiveness of the manufacture process
with an expert system.
Many studies of radio frequency identification (RFID) applications have been done
over the past ten years, but agility manufacture management and product life cycle
tracing of the manufacture process have seldom been addressed, because agility
manufacture process management is very complex: managers must face problems of
resource dispatching, machine checking, and member management every day. RFID
tracing technology is important for process management. Some retailers and
manufacturers, such as Wal-Mart and China International Marine Containers, have
already used RFID technology to trace their suppliers' deliveries [2, 3].
The critical path method is one of the simplest methods of managing a process. Since
the critical paths (CPs) determine the task completion time, it is important for
managers to focus on CPs to control task time, decrease costs, and allocate resources.
The critical path method (CPM) was developed to identify the CP [6].
RFID can track every activity of the process to obtain real-time information,
making it possible to easily address the variability and interdependencies of activities
in the process. With the help of RFID, exact calculation of the probability
distribution of the activities in the task becomes feasible.
This paper discusses the architecture of the RFID-based CP expert system
(RCPES). The critical path method (CPM) is used to support process model design
in a complex manufacturing network, and the paper explores how the RCPES
rebuilds the process model with real-time information.
The paper first discusses the theory and applications of CPM and RFID tracing
technology in Section 2. The architecture and databases of the RCPES and the expert
decision steps are given in Section 3. Section 4 presents a simple application example
to explain the working steps of the RCPES. Finally, conclusions are presented in
Section 5.
CPM is a method used to determine the start and completion dates of a process. It is
convenient to use CPM to model the paths of manufacture task management, and
CPM has been applied in many industrial fields [6]. It is also an effective time
management method. First, the duration of each task is calculated, and the start and
end dates of the whole manufacture process are defined. A network topology diagram
is then built according to the logical order and relationships of the activities. Finally,
the longest path in the diagram is identified as the critical path.
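The final step, taking the longest path through the activity network as the critical path, can be sketched for a small acyclic activity network (the task names and durations below are illustrative, not from the paper's example):

```python
def critical_path(durations, edges):
    """Longest path from 'Start' to 'End' in an acyclic activity network.
    durations: {node: duration}; edges: list of (from, to) pairs."""
    succ = {node: [] for node in durations}
    for a, b in edges:
        succ[a].append(b)

    memo = {}
    def longest(node):  # (total duration, path) from this node onward
        if node in memo:
            return memo[node]
        if not succ[node]:  # terminal node
            memo[node] = (durations[node], [node])
        else:
            best = max(longest(nxt) for nxt in succ[node])
            memo[node] = (durations[node] + best[0], [node] + best[1])
        return memo[node]

    return longest("Start")

durations = {"Start": 0, "T1": 3, "T2": 2, "T3": 4, "End": 0}
edges = [("Start", "T1"), ("Start", "T2"), ("T1", "T3"),
         ("T2", "End"), ("T3", "End")]
print(critical_path(durations, edges))  # (7, ['Start', 'T1', 'T3', 'End'])
```

Since the Start-T1-T3-End chain takes 7 time units versus 2 for Start-T2-End, it is the critical path: any delay on it delays the whole process.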
However, a CPM diagram with fixed durations is not suitable for dynamic process
management. Because the real world is always changing, the critical path
management strategy cannot follow the changes given the complexity of
recalculating the CPM durations. To increase the decision quality of process
management, RFID tracing technology is used to track the activities in the process
and collect real-time data for dynamic decision making.
RFID technology uses wireless signals for sending and receiving. RFID interrogators
read and write radio signals in RFID tags without contact [1]. RFID can handle
multiple labels at a time. An RFID tag stores more data than a barcode and the data
can be rewritten. RFID tags are also waterproof and heat-resistant and operate over
longer distances than barcodes, so RFID is well suited to real-time information
tracing.
RFID technology has been successfully applied in many industries, such as
healthcare and retail. RFID can help improve patient safety, identification,
medication management, and medical processes in the healthcare field [4]. In retail
management, RFID tracks the movement of goods for better customer service and
assists inventory management in controlling stocking problems. RFID technology is
also deployed in other areas, such as transport payment systems and tracking valuable
assets for libraries and museums.
An agility manufacture process is a multi-dimensional, dynamic, and complex
system. Its management involves the cooperation of multiple tasks and uncertain
risks. According to the durations and relationships of tasks, the critical path is found
by CPM in the first step to create the process model. Every activity is then tracked by
the RFID system to refine the process model so that it fits the real manufacture
management.
The framework of the RCPES, shown in Fig. 1, involves an RFID tracing system, a
CP expert system, and three databases: the process database, the knowledge database,
and the RFID database. There are two interfaces: the RFID interface connects the
RFID system with the CP expert system, and the expert interface connects the CP
expert system with ERP, SCM, or other systems in the E-industry network platform.
An RFID tracing system is composed of readers, tags, and the RFID database. The
readers are located at every assembly operation station on the product assembly line,
and the tags are attached to the core materials of the product. The readers collect the
arrival time, duration, and departure time of all activities and send the data to the
RFID database. After the RFID database finishes collecting the whole process data,
it packages and transmits them to the CP expert system.
The CP expert system is composed of four modules and two databases, the
knowledge database and the process database. After the tracking data are received,
the CP expert system compares the initial hypothesis parameters with the real tracked
information to find timed-out activities; the system then works to find the reasons for
the timeouts and gives adjusting strategies. If these strategies are accepted, the
system modifies the initial parameters and simulates the process to test the new
parameters. If the new parameters do not pass the test, the system is asked to redo the
strategies until the parameters pass.
Fig. 1. Framework of the RCPES: the RFID system (readers, tags, RFID DB) connects through
the RFID interface to the CP expert system (knowledge DB, processes DB), which connects
through the expert interface to the E-industry network platform (ERP, SCM, products DB,
resources DB, materials DB)
3.2 Databases
There are three basic databases in the RCPES: the RFID database, the process
database, and the knowledge database. The RFID database stores all the real-time
information about activities tracked by the RFID system. The process database stores
process and activity parameters; these parameters supply all the information
necessary to identify the processes and activities. The knowledge database stores
expert knowledge for critical path judgment, CPM algorithms, optimal-strategy rules,
and process models; rules for the cost-sensitive strategy and the time-limit-sensitive
strategy are also saved there.
With these databases, the RCPES can control every real activity of the product in
the manufacture process. At every tracking node, three types of data are recorded:
reach, work, and leave. The reach time of a tracked object relates to all activities
before that node; if it is longer than anticipated, the earlier activities are checked by
the expert system. The work time of the tracked object gives the duration of each
activity and resource information, which supports expert knowledge learning and
process optimization. The leave time is used for process adjustment and testing.
Moreover, the relationships between products, materials, resources, and processes are
saved in the knowledge database. When a strategy involves adjusting materials and
resources, the expert system visits the databases in the ERP or SCM system for
material and resource information.
and adjusted by the Posttest Expert module, the optimal process model is saved into
the knowledge database.
The five major decision steps of the CP expert system are as follows:
Step 1: Build a new process model structure with the Process Design Expert module;
Step 2: Analyze the critical path and build the CPM diagram with the Process Design
Expert module, then save the parameters and CPM diagram in the process DB;
Step 3: Simulate the model with the Prognostic Expert module and adjust the model
parameters;
Step 4: Start the RFID tracing system, receive the real-time information from it, and
analyze the process model with the Prognostic Expert module;
Step 5: Make the expert decision, select the optimal schemes, and save the optimal
process model into the knowledge DB.
Steps 2–5 are run several times to rebuild the process model structure and improve
the process effectiveness.
(Figure: example task process network with activities T1–T10 between the Start and End nodes)
5 Conclusions
In recent years, applications of RFID technology in logistics and supply chain
management have generated solid benefits and advantages. This paper focused on
critical path (CP) identification in agility manufacturing and proposed a CP expert
system based on RFID, which integrates RFID technology with the critical path
method and makes it possible to control the dynamic process of real-world tasks. The
system also studies every change in task performance to optimize the process, thus
increasing the decision quality of agility manufacture management.
In applications, RFID technology has automated daily workflow transactions,
thereby reducing cost and human error and enabling swift responses during data
entry operations. The initial goal of this research has been achieved. In short, RFID
applied to industrial management can improve the efficiency of process operation
and raise the quality and effectiveness of expert decisions.
References
1. Chen, J.L., Chen, M.C., Chen, C.W., Chang, Y.C.: Architecture Design and Performance
Evaluation of RFID Object Tracking Systems. J. Computer Communications 30, 2070–
2086 (2007)
2. Sangwan, R.S., Qiu, R.G., Jessen, D.: Using RFID Tags For Tracking Patients, Charts And
Medical Equipment Within An Integrated Health Delivery Network. In: Proceedings of the
IEEE Networking, Sensing and Control Conference, pp. 1070–1074 (2005)
3. Finkenzeller, K.: RFID Handbook, 2nd edn. Wiley, Chichester (2003)
4. Guercini, S., Runfola, A.: Business Networks and Retail Internationalization: A Case
Analysis In The Fashion Industry. J. Industrial Marketing Management 39, 908–916
(2010)
5. Um, I., Cheon, H., Lee, H.: The Simulation Design And Analysis Of A Flexible Manufac-
turing System With Automated Guided Vehicle System. J. Journal of Manufacturing Sys-
tems 28, 115–122 (2009)
6. Tian, G.Z., Yu, J., He, J.S.: Towards Critical Region Reliability Support For Grid Work-
flows. J. J. Parallel Distrib. Comput. 69, 989–995 (2009)
Research on the Evolutionary Strategy Based on AIS and
Its Application on Numerical Integration
Li Bei
1 Introduction
The immune system of an organism primarily refers to the response of the organism
to a given stimulus. It consists of organs, tissues, and cells which have immune
functions [1], and it can use antibodies to eliminate invading antigens. The operating
mechanisms of the immune system can be simulated to create an artificial immune
system [2]. An artificial immune system (AIS) tries to learn the natural defense
mechanism against outside material, and then provides excellent mechanisms for
adaptive learning, self-organization, and self-memory. AIS also combines the
advantages of classifiers, neural networks, and machine reasoning, which gives it
huge potential for dealing with all kinds of complex problems [3].
Nowadays, artificial immune algorithms based on AIS have attracted many scholars
and are widely used in many fields, such as control [4], programming [5], data
processing [6], and knowledge discovery [7]. At the same time, many problems can
be modeled mathematically, and these problems are often so complex that they
cannot be calculated accurately by traditional methods, so we need simulation
methods to solve them. A case in point is numerical integration, which is widely
used; however, solving complex numerical integrals is a very difficult problem
awaiting solution.
In this paper, we first study the evolutionary strategy based on AIS, and then apply
this new strategy to solving the definite integral problem.
The definite integral problem is defined as follows. Suppose a function f(x) is
continuous on the closed interval [a, b] and has primitive function F(x). Then we can
use the Newton–Leibniz formula to evaluate the definite integral of f(x) on [a, b]:

∫_a^b f(x) dx = F(x) |_a^b = F(b) − F(a)
This traditional method has played a major role in solving numerical integration. However, it cannot solve all definite integral problems, for two reasons given in reference [8]:
① Some integrands have such a complex structure that their primitive function cannot be expressed in terms of basic elementary functions, or is very difficult to obtain.
② Some integrands f(x) have no concrete expression, because f(x) is given only by discrete data and figures.
For these reasons, many scholars have proposed traditional methods to approximate numerical integrals, for example the rectangular formula, the trapezoid formula, the Simpson formula, and the Romberg formula. These methods cut the solution space at equal distances and then sum the function values over the pieces. However, they also have drawbacks. For example, the trapezoid formula fits non-smooth integrands and does not calculate other functions very well. Generally speaking, the best method would cut the solution space according to the actual shape of the function, creating sub-solution spaces that vary in width, and then sum the sub-functions over these sub-solution spaces. This process is more accurate than the traditional methods.
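The contrast between an equal-distance cut and a shape-adapted cut can be sketched as follows (a Python sketch, not part of the paper; the hand-picked non-uniform partition for f(x) = x^4 on [0, 2] is only an illustration, since the strategy described below evolves its partitions rather than fixing them):

```python
def trapezoid_sum(f, points):
    """Trapezoid sum of f over a partition given by sorted split points."""
    return sum((points[i + 1] - points[i]) * (f(points[i]) + f(points[i + 1])) / 2
               for i in range(len(points) - 1))

f = lambda x: x ** 4
exact = 2 ** 5 / 5                       # integral of x^4 over [0, 2] = 6.4

uniform = [0.0, 0.5, 1.0, 1.5, 2.0]      # equal-distance cut, 4 pieces
adaptive = [0.0, 0.8, 1.3, 1.7, 2.0]     # narrower pieces where f grows fastest

print(abs(trapezoid_sum(f, uniform) - exact))   # ~0.66
print(abs(trapezoid_sum(f, adaptive) - exact))  # ~0.47, same number of pieces
```

With the same number of sub-intervals, placing the split points where the integrand changes fastest reduces the trapezoid error, which is the intuition the evolutionary strategy exploits.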
Traditional AIS is based on the biological immune system; it simulates the immune functions of organisms to solve problems by constructing an artificial immune algorithm (AIA). On the face of it, AIA is very similar to the genetic algorithm: both establish a selection mechanism whereby superior individuals prosper and inferior ones are eliminated, and both use a population for heuristic search of the solution space in order to find an optimum. However, the artificial immune algorithm includes many new strategies compared with the genetic algorithm, for example self-memory and learning, feedback mechanisms, genetic diversity, and the theory of clonal selection [9]. These new strategies ensure that AIA can find better solutions in the solution space than traditional evolutionary algorithms. A basic framework of the artificial immune algorithm can be described as follows:
Research on the Evolutionary Strategy Based on AIS and Its Application 185
First, the concepts of antigen and antibody are defined. Generally speaking, the antigen is the optimal solution that satisfies the objective function and the constraint conditions, while an antibody is a candidate solution in the solution space. The process of the algorithm is as follows:
Step 1: Input the antigen.
Step 2: Initialize an antibody population randomly.
Step 3: Calculate the fitness values. The fitness value includes two parts. The first part evaluates the objective function to obtain the fitness between antigen and antibody; the second part calculates the Euclidean distance between antibodies. The first fitness value reflects the matching degree between antibody and antigen; the second reflects the concentration of antibodies.
Step 4: Check the iteration condition; if it is fulfilled, the algorithm stops, otherwise it continues.
Step 5: Select antibodies. In the ordinary course of selection, the good individuals are those with a higher matching degree and a lower concentration. These individuals are kept for the next iteration.
Step 6: Apply immune operators to the good individuals in order to create new antibodies.
Step 7: Update the whole antibody population.
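The steps above can be sketched as a minimal loop (in Python; the toy objective, the affinity and concentration measures, and all parameter values are illustrative assumptions, since the framework does not fix them):

```python
import random

def immune_optimize(objective, bounds, pop_size=20, iters=200,
                    keep=10, clones=2, mutate_sd=0.1, seed=0):
    """Minimal artificial-immune loop following Steps 1-7 above.
    Antibodies are candidate solutions; the antigen is the objective.
    Parameter names and values here are illustrative assumptions."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]        # Step 2
    for _ in range(iters):                                      # Step 4 loop
        # Step 3a: matching degree between antigen and antibody.
        affinity = {x: -objective(x) for x in pop}
        # Step 3b: mean distance to the rest of the population;
        # a large value means a low antibody concentration.
        spread = {x: sum(abs(x - y) for y in pop) / len(pop) for x in pop}
        # Step 5: keep individuals with high affinity and low concentration.
        pop.sort(key=lambda x: (affinity[x], spread[x]), reverse=True)
        elites = pop[:keep]
        # Step 6: immune operator, here cloning with Gaussian mutation.
        children = [min(max(x + rng.gauss(0, mutate_sd), lo), hi)
                    for x in elites for _ in range(clones)]
        pop = elites + children                                 # Step 7
    return min(pop, key=objective)

best = immune_optimize(lambda x: (x - 1.5) ** 2, bounds=(0.0, 4.0))
print(best)  # close to 1.5
```

Because the elites are always carried over, the best antibody found so far is never lost, which mirrors the self-memory property described above.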
We can see from the above that the artificial immune algorithm has good abilities of adaptive search and self-control, which is inspiring. In this paper, a new evolutionary strategy is proposed based on the features of the artificial immune algorithm. It is used to calculate numerical integrals by fitting curves according to the characteristics of the integrand. Two questions must be solved here:
① how to use antibody individuals to search for the optimum in the solution space;
② how to construct the fitness function in order to fit the curves.
3 Solving Numerical Integration Based on the Artificial Immune Algorithm
In this paper, every individual corresponds to a set of split points in the solution space, so finding the optimum set of split points in the solution space can be treated as the optimization problem to be solved. The concrete process is described as follows.
Taking the definite integral problem as an example, the objective function is set as f(x), corresponding to an antibody population x of scale m, x = {x1, x2, ..., xm}. Every antibody is encoded with real numbers, and every individual is encoded as xi = (xi1, xi2, ..., xin), i = 1, 2, ..., m. The solution space is the interval [a, b].
For each individual in the current population, the best individual serves as a guide for updating the whole population. First, an individual other than the best is selected, and its sub-intervals are compared with and adjusted according to the corresponding sub-intervals of the best individual. The concrete process is as follows: each individual in the population defines n + 1 sub-intervals with n + 2 endpoints (n split points plus the two extremes a and b). According to their position in the solution space, these points are rearranged in increasing order. For a sub-interval of individual xi, its corresponding
4 Experiments
In reference [10], the trapezoid formula and the Simpson formula are used to calculate the following integrand functions on the closed interval [0, 2]: ① x^2, ② x^4, ③ √(1 + x^2), ④ 1/(1 + x), ⑤ sin x, ⑥ e^x. At the same time, we use the new evolutionary strategy to solve these six integrals. The results and their comparison [11] are given in Table 1.
Table 1. Comparison of the exact solutions with the Simpson formula, the trapezoid formula, and the AIS-based strategy

function            ①        ②        ③        ④        ⑤        ⑥
Exact solution      2.667    6.400    2.958    1.099    1.416    6.389
Simpson formula     2.667    6.667    2.964    1.111    1.425    6.421
Trapezoid formula   4.000    16.000   3.236    1.333    0.909    8.389
AIS                 2.66639  6.38984  2.95769  1.09901  1.41559  6.389301
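As a check, the trapezoid and Simpson entries of Table 1 can be reproduced with the basic one-interval rules on [0, 2] (a Python sketch, not part of the paper):

```python
import math

def trapezoid(f, a, b):
    """Basic trapezoid rule on [a, b] (a single interval)."""
    return (b - a) * (f(a) + f(b)) / 2

def simpson(f, a, b):
    """Basic Simpson rule on [a, b] (one parabola through three points)."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

cases = {
    "x^2":         (lambda x: x ** 2,               8 / 3),
    "x^4":         (lambda x: x ** 4,               32 / 5),
    "sqrt(1+x^2)": (lambda x: math.sqrt(1 + x * x), 2.958),
    "1/(1+x)":     (lambda x: 1 / (1 + x),          math.log(3)),
    "sin x":       (lambda x: math.sin(x),          1 - math.cos(2)),
    "e^x":         (lambda x: math.exp(x),          math.e ** 2 - 1),
}
for name, (f, exact) in cases.items():
    print(f"{name:12s} exact={exact:.3f} "
          f"trapezoid={trapezoid(f, 0, 2):.3f} simpson={simpson(f, 0, 2):.3f}")
```

Running the script reproduces every trapezoid and Simpson entry of the table, for example trapezoid = 4.000 and Simpson = 2.667 for x^2.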
As shown in Table 1, the results calculated by the new evolutionary strategy based on the artificial immune algorithm are more precise than those calculated by the traditional trapezoid and Simpson formulas.
5 Conclusions
In this paper, according to the features of the artificial immune algorithm, a new evolutionary strategy is proposed to solve numerical integration. The strategy makes full use of the global search ability and parallel computation of the artificial immune algorithm, and constructs judgment principles based on the relationships between antibody and antigen and between antibodies, in order to solve numerical integrals more accurately. The experiments show that the results of numerical integration calculated by this evolutionary strategy are satisfactory, which indicates that the new strategy is a practical and effective method.
References
1. Qi, A., Du, C.: Immune System and Nonlinear Modeling. Shanghai Scientific and Technological Education Publishing House, Shanghai (1998)
2. Dasgupta, D.: Artificial Immune System and Their Applications. Springer, Heidelberg
(1999)
3. Ding, Y., Ren, L.: Artificial Immune Systems: Theory and Applications. Pattern Recogni-
tion and Artificial Intelligence 13(1), 52–59 (2000)
4. Sasaki, M., Kawafuku, M., Takahashi, K.: An immune feedback mechanism based adap-
tive learning of neural network controller. In: 6th International Conference on Neural In-
formation Processing, vol. 2, pp. 502–507. IEEE Computer Society Press, Los Alamitos
(1999)
5. Gao, J.: The Application of the Immune Algorithm for Power Network Planning. System
Engineering-Theory & Practice (5), 119–123 (2001)
6. Shao, X., Chen, Z., Lin, X.: A Novel Algorithm for Fitting Analytical Signals-an Immune
Algorithm. Chinese Journal of Analytical Chemistry 28(2), 152–155 (2000)
7. Timmis, J., Neal, M., Hunt, J.: Data analysis using artificial immune systems, cluster
analysis and Kohonen networks:some comparisons. In: IEEE SMC 1999 Conference Pro-
ceedings, vol. 3, pp. 922–927. Institute of Electrical and Electronics Engineers, Incorpo-
rated (1999)
8. Ke, S.: Advanced Mathematics. Beijing University of Aeronautics & Astronautics Press,
Beijing (2007)
9. Jiao, L., Du, H.: Development and Prospect of the Artificial Immune System. ACTA Elec-
tronica Sinica 31(10), 1540–1548 (2003)
10. Burden, R.L., Faires, J.K.: Numerical Analysis, 7th edn., pp. 190–212. Brooks/Cole,
Thomson Learning, Inc. (2001)
11. Nie, L.: Artificial Fish Swarm Algorithm and its Application. Guangxi University for Na-
tionalities, Guangxi
Prediction of the NOx Emissions from
Thermal Power Plant Based on
Support Vector Machine Optimized by
Chaos Optimization Algorithm
Abstract. With the development of the thermal power industry, statistics on NOx emissions have become important. In this paper, based on the traditional support vector machine model, we establish a support vector machine model optimized by a chaos optimization algorithm, which improves the prediction accuracy of the SVM model. Using NOx emission data from 1995 to 2009, we predict the NOx emissions from thermal power plants in the year 2010 and verify the reasonableness of the COSVM model.
0 Introduction
Relevant materials indicate that 70% of the NOx discharged into the atmosphere of our country comes from the direct burning of coal; coal-fired power plants consumed more than half of the total coal consumption after 2005, and this share is expected to exceed 76% in 2020. Controlling the discharge of NOx is therefore particularly important.
Some domestic and international organizations and scholars have estimated and predicted the NOx emissions of our country. Zhu Fahua and Wang Sheng predicted the thermal power NOx emissions of our country from three different angles (power generation growth, emission standards, and emission performance) and studied the current discharge situation and the countermeasures for controlling it. Wang Zhixuan and Zhao Yi synthetically considered factors such as coal properties, combustion mode, and unit capacity of coal-fired power plants, set up a method for calculating the actual NOx emissions and emission performance of coal-fired generating units, and predicted the NOx emissions of our country for 2010.
With the gradual development of artificial intelligence technology, the support vector machine (SVM), as a newly developed method, has been widely used in many fields owing to its strong ability to treat non-linear problems.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 189–194, 2011.
© Springer-Verlag Berlin Heidelberg 2011
190 J. Wang, J. Kang, and H. Liang
We choose three factors that influence thermal power plant NOx emissions: generation, installed capacity, and gross domestic product (GDP). These serve as the inputs of the support vector machine; the NOx emission is regarded as the output, realizing the prediction of the emission.
1 Theoretical Foundation
Suppose there is a training set of samples (xi, yi), in which xi ∈ Rn is the input and yi is the output value. The basic idea is to find a non-linear mapping φ(x) from the input space to the output value: the data are mapped into a high-dimensional space, and a hyperplane is constructed there for data classification. SVM uses structural risk minimization instead of the traditional empirical risk minimization, thereby overcoming many shortcomings of neural networks. After the introduction of the ε-insensitive loss function, SVM can solve non-linear problems by linear regression in the high-dimensional space.
Chaos has the characteristics of randomness, ergodicity, and regularity, and exhibits rich non-linear dynamics. Chaos optimization is a new kind of search algorithm; its basic idea is to map the variables from the chaos space to the solution space and then to search by exploiting the randomness, ergodicity, and regularity of the chaotic variables. Chaotic search jumps out of local optimal solutions easily, so its range of application is wide.
min MSE = min (1/n) ∑_{t=1}^{n} ((f(x) − y) / f(x))²   (1)

s.t. a_i ≤ x_i ≤ b_i, i = 1, 2, 3
x = [x1, x2, x3] = [C, ε, σ]
Prediction of the NOx Emissions from Thermal Power Plant Based on SVM Optimized 191
This paper generates the chaotic dynamics with the Logistic map:

t_{k+1} = μ t_k (1 − t_k)   (2)
(3) Initialization. Set the iteration counter k = 1. The initial chaotic variables and the initial optimization variables are identical, cxi(0) = xi(0). Let the optimal value of the first step be xi* = xi(0).
(4) First carrier wave. Substitute the chaotic variables cxi(k − 1) of step k − 1 into formula (2) and obtain the chaotic variables cxi(k) of step k. The range of the chaotic variables is adjusted to the corresponding intervals of the optimization parameters in the various segments: i = 1, 2, …, n; j = 1, 2, …, N.
(5) Chaotic search. Cross and compose the values in the N intervals of every optimization parameter to obtain the value vectors s = [x11(k), x22(k), …, xnN(k)] of the N^n objective function variables, and find the vector s* that minimizes the objective function among them; the subscript vector of the optimal variable is xb = [h1, h2, …, hn]. Compare f(s*) and f(x*): if f(s*) is smaller than f(x*), let x* = s*, xb* = xb, cx* = cx; otherwise x* remains unchanged and k is increased by 1.
(6) Repeat steps (4) and (5). After several steps, if the value of x* has remained unchanged all the time, carry on with step (7).
(7) Set p = 1 and μ = 1, and narrow the interval range that the chaotic search covers:

a'_i = x_i* − λ (b_i − a_i)
b'_i = x_i* + λ (b_i − a_i)   (4)
λ = 2^(−μ)

(8) Second carrier wave. Substitute the chaotic variables cxi(p − 1) of step p − 1 into formula (2) to obtain the chaotic variables cxi(p) of step p, and adjust the range of the chaotic variables to the range given by formula (4).
(9) Search again. Compare f(x(p)) with f(x*): if f(x(p)) is smaller than f(x*), set x* = x(p); otherwise x* remains unchanged and p is increased by 1.
(10) Repeat steps (8) and (9). After several steps, if the value of x* has remained unchanged all the time, set μ = μ + 1.
(11) Repeat step (10) several times; the final x* is the global optimal solution of the objective function, that is, the optimum parameters of the SVM.
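The two-stage scheme can be sketched as follows (a simplified Python sketch: the N-segment cross-composition of step (5) is replaced by keeping the best sampled point, and the toy error surface, seeds, and parameter values are illustrative assumptions):

```python
def chaos_search(objective, bounds, n_iters=2000, mu=4.0, seed=0.345):
    """First-stage chaotic search (Steps 3-6, simplified): iterate the
    Logistic map of formula (2) and carry each chaotic variable into its
    optimization interval [a_i, b_i]; keep the best point seen."""
    t = [seed + 0.01 * i for i in range(len(bounds))]   # distinct seeds
    best_x, best_f = None, float("inf")
    for _ in range(n_iters):
        t = [mu * ti * (1 - ti) for ti in t]            # formula (2)
        x = [a + ti * (b - a) for ti, (a, b) in zip(t, bounds)]  # carrier
        fx = objective(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def refine(objective, bounds, x_star, lam=0.25, n_iters=2000):
    """Second-stage search (Steps 7-9, simplified): shrink each interval
    around the incumbent x* via formula (4), then search chaotically again."""
    shrunk = [(max(a, xi - lam * (b - a)), min(b, xi + lam * (b - a)))
              for (a, b), xi in zip(bounds, x_star)]
    return chaos_search(objective, shrunk, n_iters=n_iters, seed=0.456)

# Toy stand-in for the SVM parameter search: tune (C, sigma) on a mock
# error surface whose minimum sits at C = 10, sigma = 0.5.
bounds = [(1.0, 100.0), (0.01, 6.0)]
mock_error = lambda p: (p[0] - 10) ** 2 / 100 + (p[1] - 0.5) ** 2
x1, f1 = chaos_search(mock_error, bounds)
x2, f2 = refine(mock_error, bounds, x1)
```

In an actual COSVM, `mock_error` would be replaced by the cross-validation error of an SVM trained with the candidate parameters.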
The prediction of NOx emissions in the power generation industry is in fact a data generalization and fitting problem: the model first learns from the input-output data, and then, for inputs not in the training set, the support vector machine computes the output data, that is, the predicted values.
Suppose there are m years of emission data in total, with n emission indexes in each year. X = {x1, x2, …, xm} is the input set for NOx emission, and Y = {y1, y2, …, ym} is the set of emission values. The array A = (aij) is the index array of X; there are n indexes, and aij denotes the j-th index in the i-th year. Xi = {ai1, ai2, …, ain} is the input vector of the SVM, and yi is the regression target value.
Many factors influence the NOx emissions of the power generation industry. Installed capacity, generation, and gross domestic product (GDP) are selected as the influencing indexes of NOx emission. Regarding these 3 indexes as the input of the support vector machine, the NOx emission as the predicted value Y, and using the parameters optimized by the chaos optimization algorithm in the actual calculation, the emission prediction model according to support vector machine theory is as follows:
Y = f(x) = ∑_{i=1}^{l} (α_i − α_i*) k(x_i, x) + b   (5)
4 Empirical Research
We choose as samples the annual installed capacity and generation of the thermal power plants of our country, the annual gross domestic product (GDP) of our country from 1995 to 2009, and the annual NOx emissions of the thermal power plants of our country. The sample data of the 10 years from 1995 to 2004 serve as training samples, and the sample data of the 5 years from 2005 to 2009 serve as testing samples to test the accuracy of the model; finally, the model is used to predict the NOx emission of thermal power generation in 2010.
In the initial data the units of the indexes are not unified, so the index values are first made dimensionless to allow the subsequent calculation. The dimensionless treatment uses the proportion method.
z_i = x_i / ∑_{i=1}^{n} x_i,   with ∑_{i=1}^{n} z_i = 1   (6)
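The proportion method of equation (6) can be sketched directly (Python; the yearly values are invented for illustration, not the paper's data):

```python
def proportion_normalize(xs):
    """Dimensionless treatment by the proportion method of equation (6):
    divide each index value by the column total so the shares sum to 1."""
    total = sum(xs)
    return [x / total for x in xs]

# Illustrative (not the paper's) yearly values of one index:
gdp = [8.9, 9.9, 10.9, 12.0, 13.5]
z = proportion_normalize(gdp)
print(z, sum(z))
```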
The dimensionless sample data between 1995 and 2004 are chosen as the training samples of the SVM, and the data between 2005 and 2009 as testing samples, to test the prediction accuracy of the SVM based on the chaos algorithm.
The parameters are set as follows: the code length of C is 10, the code length of ε is 9, and the code length of σ is 11. The value ranges of the parameters are: 1 ≤ C ≤ 10000, 0.0001 ≤ ε ≤ 0.1, 0.01 ≤ σ ≤ 600.
For comparison, a BP neural network, a plain SVM, and the SVM optimized by the chaos algorithm predict the test samples at the same time. The relative deviation Et and the root relative deviation MSE are used as evaluation indexes.
E_t = (x_t − y_t) / x_t × 100%   (7)

MSE = (1/n) ∑_{t=1}^{n} ((x_t − y_t) / x_t)²   (8)
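The two evaluation indexes can be sketched directly from equations (7) and (8) (Python; the emission figures below are invented for illustration, not the paper's data):

```python
def relative_deviation(x_t, y_t):
    """Relative deviation E_t of equation (7), in percent;
    x_t is the actual value, y_t the predicted value."""
    return (x_t - y_t) / x_t * 100

def mean_squared_relative_deviation(actual, predicted):
    """MSE of equation (8): mean of the squared relative deviations."""
    n = len(actual)
    return sum(((x - y) / x) ** 2 for x, y in zip(actual, predicted)) / n

# Illustrative (not the paper's) yearly emission figures, in 10,000 tons:
actual = [800.0, 830.0, 850.0, 860.0, 870.0]
predicted = [790.0, 845.0, 846.0, 872.0, 864.0]
print([round(relative_deviation(x, y), 2) for x, y in zip(actual, predicted)])
print(mean_squared_relative_deviation(actual, predicted))
```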
MATLAB 7.0 software is used to realize the COSVM, SVM, and BPN models for predicting the NOx emissions from thermal power plants. The prediction results are shown in Table 2.
Finally, using the sample data of all 15 years from 1995 to 2009 as training samples for the SVM, the predicted NOx emission from thermal power plants in 2010 is 8,706,500 tons.
The table above analyzes the prediction results of the COSVM model, the SVM model, and the BP neural network model. Comparing the root relative deviations of the models, the root deviation of the COSVM model, 1.37%, is well below that of the other two models, whose root deviations are above 2%; the BPN model has the highest root deviation, 2.70%, so its precision is lower than that of the model built in this paper.
Finally, the SVM optimized by the chaos algorithm predicts the NOx emission from thermal power plants in 2010; the result obtained is 8,706,500 tons.
5 Conclusions
This paper has set up a support vector machine model optimized by an improved chaos algorithm on the basis of the traditional support vector machine. Finally, it predicts the NOx emission of thermal power generation of our country for 2010, offering an effective analytical method for further investigation of the NOx emission problem of thermal power generation in our country.
References
[1] Gao, J., Wang, Y., Zhang, B.: Countermeasure of Atmospheric Nitrogen Oxide Pollution
in China. Environmental Protection Science 10(30), 1–3 (2004)
[2] Zhu, F.-h., Wang, S., Zheng, Y.-f.: NOx emitting current situation and forecast from thermal power plants and countermeasures. Energy Environmental Protection 18(1), 1–5 (2004)
[3] Wang, Z.-x., Zhao, Y., Pan, L.: Study on the estimation methods for NOx emission from
coal-fired power plants in China. Electric Power 42(4), 59–62 (2009)
[4] Yuan, X., Wang, Y.: The method of choose the SVM parameters based on that the Chaos
optimize algorithm. Control and Decision 21(1), 111–113 (2006)
[5] Zhang, Z., Ma, L., Sun, Y.: Load-forecasting model by Chaos theory and SVM. Power
System and Automatic Chemical Newspaper 20(6), 31–35 (2008)
Optimizing of Bioreactor Heat Supply and Material
Feeding by Numerical Calculation
Zhiwei Zhou, Boyan Song, Likuan Zhu, Zuntao Li, and Yang Wang
Abstract. Cell culture at large scale normally uses a stirred structure, and the distribution of the temperature field is very important to large-scale cell culture. Some cells are very sensitive to their circumstances: a local temperature that is too high or too low influences cell survival and lowers the cell quantity per unit volume. This paper simulates the temperature field under three different heating conditions and then analyzes and contrasts the simulation results. The mixing situation in a bioreactor is extremely significant for nutrient transmission. Usually the average mixture time in a bioreactor is measured, and the mixing circumstances are improved through changes to the stirred impeller and the bioreactor structure. This paper adopts a numerical calculation method to investigate the flow field in the bioreactor. It obtains the mixture time of the bioreactor by releasing a virtual tracer in the simulated flow field and detecting the time-variation curve of the tracer density in the bioreactor.
1 Introduction
Cell culture is an exothermic reaction, and it is sensitive to temperature variation [1]. Thus the requirements of bioreactor design for temperature control are comparatively high; especially as the bioreactor volume increases, heat transfer and temperature control are important factors in bioreactor design [2]. In the reaction process of the bioreactor, the cause of temperature variation is the net heat produced during the cell culture. The intensity of cell metabolism is proportional to the temperature, but normal cell metabolism and growth are impaired when the temperature is beyond the optimal range. Usually the temperature of cell culture is about 37 °C with a permissible changing range of around ±0.5 °C [3], and the cells get the best growth conditions at that specific temperature. It is very difficult to guarantee the homogeneity of temperature in the bioreactor, especially in cell culture at large scale [4]. Thus, optimizing the heat supply is necessary; it helps the bioreactor to attain temperature homogeneity.
Cells need to obtain nutrients from outside to grow and metabolize. The types and transfer speed of the nutrients are very important in the regulation of cell metabolism [5]. The reaction of cell culture is a biological process engineering which is based on the
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 195–202, 2011.
© Springer-Verlag Berlin Heidelberg 2011
The three kinds of heating wall are simulated separately; the schematic diagram of the bioreactor wall is depicted in Fig. 1. Mode A: integral wall; mode B: bottom wall of the stirred bioreactor; mode C: side wall of the stirred bioreactor.
The temperature field contour diagrams of the simulation results over time are shown in Fig. 2~4. From the temperature field contours of the three models at different times, it is clear that the fluid temperature increases gradually as the warm-up time increases.
Fig. 2. Temperature contour of axial section of model A (a. t=2s, b. t=10s, c. t=500s, d. t=1500s)
Fig. 3. Temperature contour of axial section of model B (a. t=2s, b. t=10s, c. t=300s, d. t=3200s)
Fig. 4. Temperature contour of axial section of model C (a. t=2s, b. t=10s, c. t=480s, d. t=1800s)
In all three models, the fluid flowing near the wall is heated first. Then, under the impetus of the impeller, the heated flow follows the axial flow field in the bioreactor, forming a large axial circulation that diffuses the temperature. The high-temperature area increases gradually while the bulk temperature difference in the bioreactor decreases gradually. After the flow field reaches a stable state, the temperature distribution tendency in the bioreactor stays basically the same even as time changes. The temperature in the impeller region is lower than at the wall; the reasons include the strong turbulence intensity and the fast convective heat transfer there. However, the fluid flow velocity at the top wall of the bioreactor and in the taper inducing area is comparatively slower, so convective heat transfer is also slower and the temperature of this area is comparatively high. Fig. 5 shows the average temperature curves of the monitoring facet for the three models.
Fig. 5. Facet average total temperature (K) over time for models A, B, and C
From Fig. 5, the fluid achieves temperature balance when the heating method is integral wall heating. However, it is difficult for the fluid to achieve temperature balance when bottom heating is adopted, because the heating area is small and the heat transfer needs a long time. Thus the heating method in bioreactor design should adopt the integral wall as the heat source, and the temperature distribution can afford a theoretical basis for temperature control.
The operating medium is water, the stirring velocity is 70 r/min, and the tracer is a 4 mol/L KCl solution, added respectively at locations A and B in Fig. 6. The center facet is set as a monitoring facet to monitor the average molarity variation over the whole facet.
Fig. 7 and Fig. 8 show the tracer concentration distributions when the feeding locations are A and B, respectively.
These figures show the concentration distribution; the MRF approach is adopted to simulate the tracer concentration distribution at different times, which visualizes the process of tracer distribution. Panel (a) of each figure shows the concentration when the tracer is added initially. The tracer then diffuses from its initial position in the axial and radial directions under the influence of molecular and turbulent diffusion. It is clear from the diagrams that the tracer concentration in the bioreactor finally achieves homogeneity.
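Reading the mixture time off the monitored tracer curve can be sketched as follows (Python; the ±5% steadiness band and the sample curve are illustrative assumptions, not values from the simulation):

```python
def mixing_time(times, conc):
    """Estimate the mixture time from a monitored tracer concentration
    curve: the first time after the curve last leaves a +/-5% band
    around its final steady value (an assumed criterion)."""
    final = conc[-1]
    t_mix = times[0]
    for i, c in enumerate(conc):
        if abs(c - final) > 0.05 * final:
            t_mix = times[i + 1]   # the curve settles only after this sample
    return t_mix

times = [0, 5, 10, 15, 20, 25, 30]             # seconds
kcl = [0.0, 0.9, 0.5, 0.62, 0.58, 0.60, 0.60]  # illustrative facet molarity
print(mixing_time(times, kcl))  # 15
```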
Fig. 9 shows the average KCl molarity of the monitoring facet for feeding locations A and B, simulated with the MRF method.
By contrast, it is found that with the MRF method the average tracer molarity for feeding at B reaches a steady value earlier, which indicates that the mixture time for feeding at B is shorter than for feeding at A.
Conclusion
This paper sets up a numerical model for temperature field simulation and adopts the SM method to simulate and contrast the three heating structures. The results suggest that, under the influence of the flow pattern produced by the impeller, the temperature distribution in the bioreactor remains basically unchanged over time after the flow field reaches a steady state, and the temperature of the impeller area is lower than that near the wall. With integral wall heating, the fluid in the bioreactor reaches temperature balance fast because of the large heating area.
The MRF method is then used to simulate and optimize the feeding method; the average molarity of the monitoring facet reaches a stable value earlier when the tracer is fed at location B. This shows that the mixture time for feeding at B is shorter than for feeding at A. The stirred bioreactor mixture time can be obtained from the variation curve of tracer concentration with time.
Acknowledgement
This work is supported by the HPC Center of Harbin Institute of Technology.
References
1. Zhang, W., Seki, M., Furusaki, S.: Effect of temperature and its shift on growth and antho-
cyanin production in suspension cultures of strawberry cells. Plant Science 127(2), 207–
214 (1997)
2. Gammell, P., Barron, N., Kumar, N., Clynes, M.: Initial identification of low temperature
and culture stage induction of miRNA expression in suspension CHO-K1 cells. Journal of
Biotechnology 130(3), 213–218 (2007)
3. Stolzing, A., Scutt, A.: Effect of reduced culture temperature on antioxidant defences of
mesenchymal stem cells. Free Radical Biology and Medicine 41(2), 326–338 (2006)
4. Tirri, R., Soini, S., Talo, A.: The effects of temperature and adrenergic agonists on cardiac
myocytes of perch (Perca fluviatilis) in cell culture conditions. Journal of Thermal Biol-
ogy 19(6), 393–401 (1994)
5. Liu, C.: Optimal control for nonlinear dynamical system of microbial fed-batch culture.
Journal of Computational and Applied Mathematics 232(2), 252–261 (2009)
6. Haut, B., Ben Amor, H., Coulon, L., Jacquet, A., Halloin, V.: Hydrodynamics and mass
transfer in a Couette–Taylor bioreactor for the culture of animal cells. Chemical Engineer-
ing Science 58(3-6), 777–784 (2003)
7. Martin, Y., Vermette, P.: Bioreactors for tissue mass culture: Design, characterization, and
recent advances. Biomaterials 26(35), 7481–7503 (2005)
8. Yu, P., Lee, T.S., Zeng, Y., Low, H.T.: A 3D analysis of oxygen transfer in a low-cost mi-
cro-bioreactor for animal cell suspension culture. Computer Methods and Programs in
Biomedicine 85(1), 59–68 (2007)
9. Zeng, Y., Lee, T.-S., Yu, P., Low, H.-T.: Numerical study of mass transfer coefficient in a
3D flat-plate rectangular microchannel bioreactor. International Communications in Heat
and Mass Transfer 34(2), 217–224 (2007)
10. Langheinrich, C., Nienow, A.W., Eddleston, T., Stevenson, N.C., Emery, A.N., Clayton,
T.M., Slater, N.K.H.: Oxygen Transfer in Stirred Bioreactors Under Animal Cell Culture
Conditions. Food and Bioproducts Processing 80(1), 39–44 (2002)
Research on Technique of the Cyberspace Public
Opinion Detection and Tracking
Abstract. The task of topic detection has become a hot research direction in the field of natural language processing in recent years. Cyberspace public opinion has a significant impact on large numbers of internet users, which makes effective public opinion detection and tracking very important. We select effective features from the story documents, and the vector center model is taken to represent the text documents. A distance-based clustering algorithm is carried out for public opinion detection; it identifies the emergence of new events and also merges each story into the corresponding cluster. Finally, we give a performance evaluation by F-value and entropy. The system achieves a performance of 76% F-value on the test set. The technique of topic detection can monitor information sources in various languages, and it will provide efficient guidance for judging hot spots on the web.
1 Introduction
Public opinion detection and tracking concerns the fields of natural language processing, artificial intelligence, information security, and so on. This task monitors the hot topics in forums, BBS, and other sources. It collects the latest news and views, applies classifying and clustering algorithms, and finally presents the monitoring results to end users, such as the relevant functional departments.
The task of topic detection is the research field most relevant to public opinion detection and tracking. The international conference on Topic Detection and Tracking, referred to as TDT [1], is sponsored by the United States Defense Advanced Research Projects Agency (DARPA) and is held every year, attracting many research institutions to participate. Topic detection and tracking studies the re-organization of text information, and various research institutions have discussed it [2,3,4].
The selection of the clustering strategy is important for this task. In recent years, subspace clustering algorithms have been proposed; in particular, some practical subspace clustering algorithms have emerged to resolve the problem of high-dimensional
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 203–207, 2011.
© Springer-Verlag Berlin Heidelberg 2011
204 Y. Du, J. Liu, and M. He
data. The CLIQUE algorithm is widely used [5], specifically for clustering high-dimensional data sets. WaveCluster (Clustering with Wavelets) [6] is a clustering algorithm based on grids and density; it can effectively handle large data sets, and its computational complexity is low.
In the Vector Space Model (VSM), a text is represented as a collection of
feature items. The vector space becomes too large when more feature terms are
extracted, so feature selection is important for dimension reduction.
We adopt the TFIDF method to select features and to calculate the weight of
each feature item. A text document is treated as a set of terms t, each with a
corresponding weight value w, so that a document is represented as multiple
<t, w> pairs. Each text document d is mapped to a vector in the VSM:
V(d) = (t1, w1(d), t2, w2(d), …, tn, wn(d)), where ti denotes the i-th feature
term and wi(d) denotes the weight of ti in the text document d.
The weight value is calculated as in equation (1):

    w_i(d) = ( tf_i · log(N / nt_i) ) / sqrt( Σ_{k=1}^{n} (tf_k)^2 · log^2(N / nt_k) )        (1)

Here, tf_i denotes the frequency with which t_i appears in document d, N
denotes the total number of text documents used for feature extraction, and
nt_i denotes the number of documents in the set in which t_i appears.
Therefore, (w_1(d), …, w_i(d), …, w_n(d)) is treated as a vector in the
n-dimensional vector space.
A higher TFIDF value indicates that a feature term has better discrimination.
During dimension reduction, the feature terms with higher TFIDF values are
usually kept to represent the text documents. We carried out experiments with
varying degrees of dimension reduction and analyzed the results.
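To make the weighting concrete, the following sketch computes equation (1) for a toy corpus. The function and data names are illustrative, not from the paper:

```python
import math

def tfidf_vector(doc, docs, vocab):
    """TF-IDF weight vector of one document, following equation (1):
    w_i(d) = tf_i * log(N/nt_i) / sqrt(sum_k (tf_k)^2 * log^2(N/nt_k))."""
    N = len(docs)
    raw = []
    for t in vocab:
        tf = doc.count(t)                       # term frequency in this document
        nt = sum(1 for d in docs if t in d)     # number of documents containing t
        raw.append(tf * math.log(N / nt) if nt else 0.0)
    norm = math.sqrt(sum(w * w for w in raw))
    return [w / norm for w in raw] if norm else raw

docs = [["war", "peace", "news"], ["war", "sports"], ["peace", "talks", "news"]]
print(tfidf_vector(docs[0], docs, ["war", "peace", "sports"]))
```

The denominator in equation (1) normalizes each document vector to unit length, so the SIM measure below reduces to a dot product of unit vectors.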
The K-means clustering algorithm is commonly used, but its initial centers are
selected randomly, which directly affects the final clustering result. We
instead use the largest-minimum-distance algorithm to implement the clustering
process. This algorithm takes objects that are as far apart as possible as the
initial cluster centers and thereby strives for a better partition of the data
set. It avoids the disadvantage of K-means that the initial cluster centers may
be close to each other and thus fail to reflect the distribution of the data set.
The dissimilarity between objects is measured by the SIM and Euclidean-distance
(DIS) methods. The largest-minimum-distance algorithm is as follows.
Research on Technique of the Cyberspace Public Opinion Detection and Tracking 205
1. Given the text document set D = {d_1, d_2, …, d_n}, select one document
object at random, labeled d_i, as the initial cluster center, labeled C_1.
2. Calculate the distance between C_1 and every other document object in D, and
select the object with the maximum distance (or minimum similarity) as the
second cluster center C_2. We use two measures, denoted SIM and DIS, for this
calculation:

    SIM:  sim(d_i, T_j) = Σ_{k=1}^{n} w_k(d_i) · w_k(T_j) / ( sqrt( Σ_{k=1}^{n} w_k^2(d_i) ) · sqrt( Σ_{k=1}^{n} w_k^2(T_j) ) )        (2)

    DIS:  dis(d_i, d_j) = [ Σ_{k=1}^{n} |w_ki − w_kj|^p ]^{1/p}        (3)
A higher DIS value means a lower similarity. Experiments on the effect of p
show that p = 2 achieves the better result.
3. For each document d_i in D, calculate the distance between d_i and every
cluster center; the minimum of these values is labeled MinDist_i. Then find the
object with the largest such value among all remaining documents, namely:

    δ = max_i ( MinDist_i )        (4)

4. Set a threshold λ. If δ is larger than λ, take the corresponding object d_j
as a new cluster center.
5. Repeat from step 3 until no eligible new cluster center remains.
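Steps 1–5 can be sketched as follows, using DIS with p = 2 (Euclidean distance) as the dissimilarity measure. The function names and toy data are illustrative, not the authors' code:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def largest_min_distance_centers(points, lam):
    """Select initial cluster centers by the largest-minimum-distance rule.
    `lam` is the threshold λ from step 4."""
    centers = [points[0]]                     # step 1: arbitrary first center
    # step 2: the point farthest from the first center becomes the second center
    centers.append(max(points, key=lambda p: euclidean(p, centers[0])))
    while True:
        # step 3: for each point, distance to its nearest center (MinDist_i)
        min_dists = [min(euclidean(p, c) for c in centers) for p in points]
        delta = max(min_dists)                # equation (4)
        if delta <= lam:                      # step 5: stop when none qualifies
            return centers
        centers.append(points[min_dists.index(delta)])  # step 4: new center

pts = [(0, 0), (0.1, 0), (10, 0), (10, 0.2), (5, 9)]
print(largest_min_distance_centers(pts, lam=3.0))  # → [(0, 0), (5, 9), (10, 0.2)]
```

Swapping in the SIM measure only requires replacing `euclidean` with a cosine dissimilarity and flipping the max/min accordingly.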
The DIS and SIM methods are both applied for distance computation during
clustering, and the resulting system performance differs, as shown in Fig. 1.
The SIM method achieved better performance on the test sets of all 4 forums.
On the whole, system performance gradually improved, then leveled off as the
vector-space dimension increased, and finally even decreased. This indicates
that increasing the dimension brings in weakly discriminating features, which
not only increase the running-time and space overhead but also harm system
performance. Among the test data sets, performance on the News forum is the
lowest, and the performance difference between DIS and SIM gradually narrows as
the vector-space dimension increases; the dispersion of the news corpus makes
clustering more difficult.
We also report results using the entropy metric; the entropy value varies as
the dimension of the vector space increases. The results on the 4 forums are
shown in Table 1.
We find a common phenomenon across the different test data sets: the system
reaches its minimum entropy when the vector-space dimension is 100 to 150,
where the clustering algorithm achieves its best performance. However,
performance drops significantly as the dimension increases to 200, because
noisy features are introduced that degrade the clustering.
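The paper does not spell out its entropy formula; a common definition of clustering entropy (cluster-size-weighted label entropy, lower is better) is sketched below under that assumption:

```python
import math

def clustering_entropy(clusters):
    """Cluster-size-weighted entropy of a clustering result (lower is better).
    Each cluster is a list of the true topic labels of its members."""
    total = sum(len(c) for c in clusters)
    h = 0.0
    for c in clusters:
        for label in set(c):
            p = c.count(label) / len(c)           # label proportion in cluster
            h -= (len(c) / total) * p * math.log2(p)
    return h

mixed = [["politics", "politics", "sports"], ["sports", "sports"]]
pure = [["politics", "politics"], ["sports", "sports"]]
print(clustering_entropy(mixed))   # > 0: the first cluster mixes two topics
print(clustering_entropy(pure))    # 0.0: every cluster is pure
```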
5 Conclusion
The main purpose of public opinion detection is to monitor information sources
in various languages and to give a warning when a new topic appears. We
establish a vector space model to represent the text information and adopt a
distance-based clustering algorithm for topic detection, dividing the
collection of real-time data into clusters of different topics. The system
performance is evaluated by F-value and entropy, and the results indicate that
our approach is effective.
The task of topic detection is to detect an unknown topic in advance, facing
real-time data. Future research will focus on detecting timed news reports and
on extracting distinctive features to study adaptive detection models and
strategies.
Acknowledgement
This work is supported by the National Natural Science Foundation of China
under grant No. 60803086 and the Science and Technology Program of Beijing
Municipal Education Commission (KM200910005009).
References
1. Allan, J., Carbonell, J., Doddington, G., Yamron, J., Yang, Y.: Topic Detection and
Tracking Pilot Study: Final Report. In: Proceedings of the DARPA Broadcast News Tran-
scription and Understanding Workshop, pp. 194–218. Morgan Kaufmann Publishers, Inc.,
San Francisco (1998)
2. Zhang, K., Li, J.Z., Wu, G.: New Event Detection Based on Indexing tree and Named Entity.
In: Proceedings of the 30th Annual International ACM SIGIR Conference, Amsterdam
(2007)
3. Makkonen, J., Ahonen-Myka, H., Salmenkivi, M.: Simple Semantics in Topic Detection and
Tracking. Information Retrieval 7(3-4), 347–368 (2004)
4. Brants, T., Chen, F.R., Farahat, A.O.: A System for New Event Detection. In: Proceedings
of the 26th Annual International ACM SIGIR Conference on Research and Development in
Information Retrieval, pp. 330–337. ACM Press, New York (2003)
5. Agrawal, R., Gehrke, J., Gunopulos, D., Raghavan, P.: Automatic Subspace Clustering of
High Dimensional Data. In: Data Mining and Knowledge Discovery, pp. 5–33. Springer
Science Business Media, Inc., The Netherlands (2005)
6. Sheikholeslami, G., Chatterjee, S., Zhang, A.: WaveCluster: A wavelet-based clustering
approach for multidimensional data in very large databases. The VLDB Journal 8(4),
289–304 (2000)
A High Performance 50% Clock Duty Cycle Regulator
Abstract. A low-jitter clock duty-cycle corrector circuit for high-performance
ADCs is presented in this paper. The circuit converts low-accuracy input
signals of different frequencies into a clock with 50% pulse width. Results
show that the circuit locks the duty cycle rapidly, to an accuracy of 50% ± 1%
within 200 ns. The circuit accepts input duty cycles from 10% to 90%, and clock
jitter is suppressed to less than 5 ps. The detection method used in the
circuit is insensitive to noise and process mismatch. The circuit is
implemented in a 0.18 μm CMOS process.
Keywords: clock duty cycle; pulse width adjustment circuit; duty cycle de-
tection circuit; delay units.
1 Introduction
In high-performance A/D design, the clock is a key signal that directly affects
the speed and accuracy of the sample-and-hold circuits. With increasing
resolution and lower supply voltages, the phase noise and jitter of the sample
clock greatly affect accuracy. Consequently, a low-jitter, high-accuracy sample
clock plays a significant role in the research of high-performance A/D
converters.
Nowadays, the clock in an ADC is produced by a PLL (Phase-Locked Loop) or DLL
(Delay-Locked Loop). However, a PLL or DLL can only align the input and output
clocks; it cannot adjust the duty cycle of the input clock. In a SHA, a 50%
duty-cycle clock is needed, so a duty-cycle adjustment circuit is required to
stabilize the input clock at a 50% duty cycle. Traditional duty-cycle
adjustment circuits regulate both the rising and falling edges, which generates
large clock deviation [1]. Such circuits often contain a charge pump, whose
matching directly limits the accuracy of the clock [1,2].
A high-performance 50% duty-cycle regulator based on mixed circuits is given in
this paper. Compared with traditional duty-cycle adjustment circuits [1,2], it
has several advantages. First, it needs no special reference clock, so the
problem of mismatch between the input clock and a reference clock is
eliminated, which simplifies the circuit and improves its performance. Second,
the circuit adjusts only the falling edge of the input clock, so the accuracy
requirement is low. Third, a continuous-time integrator, rather than the
traditional charge pump, is applied to detect the duty cycle of the output
clock; with no charge-pump mismatch, high performance is realized [2].
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 208–214, 2011.
© Springer-Verlag Berlin Heidelberg 2011
3 Circuit Design
The pulse-width adjustment circuit in Fig. 2 is formed by two phase-regulation
circuits and a basic RS flip-flop. The two inputs of the RS flip-flop, V1 and
V2, cannot be zero at the same time, so V1 and V2 must be changed into broad
pulse signals, which is realized by the phase adjustment circuit [5].
In Fig. 2, Vin is the input clock. After being adjusted by the phase adjustment
circuit, it generates a broad pulse V1, which is passed to a delay line
controlled by Vback; Vback is produced by the continuous-time integrator that
detects Vout. The delay circuit produces a reference pulse Vdelay whose rising
edge lags V1 by a time controlled by Vback. Vdelay generates a broad pulse V2,
similar to V1, through the phase adjustment circuit. From V1 and V2, a simple
RS flip-flop produces a clock with 50% duty cycle. The pulse-width adjustment
timing diagram is shown in Fig. 3.
210 P. Huang, H.-H. Deng, and Y.-S. Yin
From Fig. 3 we can see that the rising edge of Vout is decided by the rising
edge of Vin, while the falling edge of Vout is determined by the rising edge of
Vdelay, that is, by the delay time of the delay unit.
The traditional duty-cycle detection circuit adopts a charge pump; however,
such a method requires accurate matching of the charging and discharging loops.
This paper presents a new method to detect the 50% duty cycle, based on a
continuous-time integrator, as shown in Fig. 4. The duty-cycle detection
circuit checks whether the output clock has a 50% duty cycle and outputs Vback.
In Fig. 4, R, C, and C0 are the integral resistance, integral capacitance, and
filter capacitance respectively; Vb is a fixed reference voltage [4].
From Fig. 4, we find that:

    Vback = Vb + (1/RC) [ ∫_0^{tl} (Vb − VSS) dt + ∫_{tl}^{T} (Vb − Vdd) dt ]        (1)

in which RC is the time constant of the integrator, th and tl are the high and
low durations of Vout, th + tl = T, and T is the input clock period. Vdd and
VSS are the high and low voltages of Vout. So:

    Vback = Vb + (1/RC) [ Vb · T − VSS · tl − Vdd · th ]        (2)

    Vback = Vb + (T/RC) [ Vb − VSS · K − Vdd · (1 − K) ]        (3)

where K = tl / T.
We know from (3) that only when th = T/2, i.e., K = 1/2 (the duty cycle is
50%), is the change of Vback over one period zero. Although Vback is a variable
and the circuit adjusts dynamically, as long as the change of Vback over one
period is zero, the duty cycle of the output clock is 50%. Otherwise, Vback
continually rises or falls because of the residual integral. Such a detection
method is not affected by matching problems. The rate of change of the
integrator output voltage is inversely proportional to RC: the larger RC is,
the higher the accuracy but the lower the speed, and vice versa. The transient
response is shown in Fig. 4.
Fig. 4. The duty-cycle detection circuit and its transient response
Fig. 5. (a) Op amp; (b) bias circuit
When input signal’s frequency is low, the output swing of integrator is large, which
causes output clock jitter. In order to reduce the side effect, the swing of integrator is
limited in 1/10 supply voltage, that is, the swing is 0.18V, RC≥0.5e – 6. considering the
margin and the area of layout, R=250KΩ, C = 2. 5 p F. the output of integrator is dy-
namic, the adjustment process is dynamic and capable of resisting interference.
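As a sanity check on equation (2), the following sketch computes the net change of Vback over one clock period with the component values given above (R = 250 kΩ, C = 2.5 pF). The clock period, supply rails, and the choice Vb = (Vdd + VSS)/2 are illustrative assumptions, not values stated in the paper:

```python
def dvback_per_period(duty, T=5e-9, R=250e3, C=2.5e-12, vdd=1.8, vss=0.0):
    """Net change of Vback over one clock period, from equation (2):
    dVback = (1/RC) * (Vb*T - Vss*tl - Vdd*th), with Vb = (Vdd + Vss)/2."""
    vb = (vdd + vss) / 2.0
    th = duty * T          # high time of Vout
    tl = T - th            # low time of Vout
    return (vb * T - vss * tl - vdd * th) / (R * C)

print(dvback_per_period(0.50))   # ~0: no net drift at exactly 50% duty cycle
print(dvback_per_period(0.60))   # negative: Vback drifts, driving the loop to correct
```

The sign of the drift for duty cycles away from 50% is what steers the delay line until the loop settles.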
The op amp in the integrator is shown in Fig. 5(a); a folded-cascode structure
is adopted, and the bias circuit is shown in Fig. 5(b). The requirements on the
op amp are not high: with a gain of 70 dB and a unity-gain bandwidth of 2 GHz,
it satisfies the requirements.
In Fig. 4, M4–M9 form a Schmitt trigger. When V1 is low, M3 is on and V10 is
high; then M4 and M5 are on, Vdelay is low, M8 is off, and M9 is on, working as
a source follower. When V1 is high, M2 is on and the capacitor at V10 is
charged through M2 and M1, with the current controlled by Vback via M1. As V10
drops, when V10 ≤ VDD − Vthp, M7 turns on; the voltage at the drain of M6 is
low, so even when V10 falls to 1/2 VDD, M6 is still off and V10 stays low,
until M4 and M5 turn off, the resistances of M4 and M5 increase, and Vdelay
rises, finally reaching V10 − Vs6 ≥ Vthp, at which point M6 turns on. Because
of the positive feedback of M9, Vdelay then rises sharply to VDD.
Here the flip-flop has two functions: it restrains jitter, because the
switching voltage is above or below 1/2 VDD; and it has positive feedback,
which makes the rising and falling edges steeper.
5 Summary
A high-performance 50% duty-cycle regulator circuit is given in this paper. The
circuit consists of the pulse-width adjustment circuit, the duty-cycle
detection circuit, and the delay unit, which together regulate the pulse width
to a 50% duty cycle. The duty-cycle detection circuit adopts a continuous-time
integrator to check whether the output duty cycle is 50% and to control the
falling edge. The structure of the circuit is simple and easily realized, and
it has a great capacity for resisting interference from the input clock.
Acknowledgments. This work was supported by the National Natural Science
Foundation of China (No. 61076026).
References
1. Huang, H.-Y., Liang, C.-M., Chiu, W.-M.: 1-99% Input Duty 50% Output Duty Cycle
Corrector. In: ISCAS 2006, pp. 4175–4178 (2006)
2. Han, S.-R., Liu, S.-I.: A Single-Path Pulsewidth Control Loop with a Built-In Delay-Locked
Loop. IEEE Journal of Solid-State Circuits, 1130–1135 (2010)
3. Lu, P., Zheng, Z., Ren, J.: Delay-locked Loop and Its Applications. Research & Progress of
Solid State Electronics, 81–88 (2005)
4. Du, Z., Yin, Q., Wu, J., Pan, K.: High Precision Duty Cycle Corrector with Fixed Falling
Edge. Microelectronics, 739–743 (2007)
5. Zhang, H., Zhou, S.-T., Zhang, F.-H., Zhang, Z.-F.: Low Jitter Clock Stabilizer for
High-Speed ADC. Semiconductor Technology, 1143–1146 (2008)
Study on Risk of Enterprise’ Technology Innovation
Based on ISM
Hongyan Li
1 Introduction
Technology innovation is an important foundation for an enterprise to survive
and develop. To enhance long-term competitiveness, an enterprise must attach
importance to technology innovation, but innovation activities may fail because
of large risks. Technology innovation risk comes mainly from the external
environment, the project itself, the innovating enterprise, the project
management level, and so on. With so many influencing factors, evaluating the
degree of technology innovation risk reasonably will not only help the
enterprise prevent such risks, but also improve the success rate and the
benefits of technology innovation. In this paper, for the problems encountered
in technology innovation, we use Interpretative Structural Modeling (ISM) to
make a useful exploration of risk evaluation for technical innovation.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 215–220, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Domestic and foreign scholars have researched the factors of technology
innovation risk (success or failure) extensively [1-5], but because of their
different research views, their conclusions about the key factors also differ.
By systematically summarizing these studies, we can obtain the general risk
factors of technology innovation. However, this paper argues that these risk
factors can only serve as a reference for risk assessment; they should not be
used directly as the evaluation index system, as is done in some of the
literature. Even if technology innovation projects are very similar, at
different times, in different places, or in different implementing enterprises,
the risk factors, or the degrees to which they matter, will differ. Based on
the literature, a preliminary summary of technology innovation risk factors was
made and arranged into a risk system of five subsystems and almost 30 risk
factors. On this basis, a questionnaire survey was designed, and 20 experts
were invited to analyze and answer it anonymously (the experts came mainly from
textile and clothing enterprises, industry associations, higher education and
research institutions, and government management). After a few iterations, the
experts' views gradually converged, and finally 16 major risk factors (S1–S16)
of accepting textile and clothing industry transfer were obtained, as shown in
Table 1.
3 ISM Model
    a_ij = 1, if i ≠ j and s_i directly influences s_j;
    a_ij = 0, if i = j, or s_i does not directly influence s_j        (1)

    (A + I) ≠ (A + I)^2 ≠ … ≠ (A + I)^k = (A + I)^{k+1}   (k < n − 1)        (2)

    M = (A + I)^{k+1}

The matrix multiplication in equation (2) follows Boolean algebra (1+1=1,
1+0=0+1=1, 1×1=1, 1×0=0×1=0); I is the identity matrix.
(4) Using standard or practical methods, build a hierarchical structure model
based on the reachability matrix, and express the model as a multi-level
hierarchical directed graph.
(5) Compare the structural model with the existing awareness model; if the two
do not match, return to step (1) and modify the relevant elements and the ISM.
Through studying and learning from the structural model, the original awareness
model is modified. After feedback, comparison, correction, and learning, a
satisfactory, enlightening, and instructive structural analysis result can be
obtained.
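The reachability computation of equation (2) can be sketched as follows on a small hypothetical adjacency matrix; the paper's 16×16 matrix works the same way:

```python
def reachability_matrix(a):
    """ISM reachability matrix M = (A + I)^k under Boolean algebra (1 + 1 = 1),
    iterating Boolean powers until they stabilize, as in equation (2)."""
    n = len(a)
    m = [[a[i][j] | (i == j) for j in range(n)] for i in range(n)]   # A + I
    while True:
        # Boolean matrix product: OR of ANDs
        nxt = [[int(any(m[i][k] and m[k][j] for k in range(n)))
                for j in range(n)] for i in range(n)]
        if nxt == m:
            return m
        m = nxt

# Hypothetical 4-factor system with influences S1->S2, S2->S3, S4->S3 (0-based)
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 0],
     [0, 0, 1, 0]]
M = reachability_matrix(A)
print(M)
```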
Following the above ISM modeling steps, we establish the ISM model of the risk
factors of the textile and clothing industry accepting coastal industry
transfer.
Based on the 16 risk factors identified above, the experts again discussed and
determined the relationships between them; in accordance with these binary
relations, we obtain the adjacency matrix A = (a_ij)_{n×n}, as shown in Table 2
(S1 is written as 1, and so on).
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
S1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0
S2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
S3 1 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0
S4 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0
S5 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
S6 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
S7 0 0 1 0 1 1 0 0 0 0 0 0 0 0 1 0
S8 0 0 0 0 0 0 1 0 1 1 0 0 0 0 0 1
S9 0 0 0 0 1 1 1 0 0 0 0 0 0 1 1 1
S10 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0
S11 1 1 1 0 0 1 0 0 0 0 0 1 1 1 0 1
S12 1 1 1 0 0 1 0 0 1 0 0 0 1 0 0 0
S13 0 0 0 0 0 0 1 1 1 1 0 1 0 1 1 1
S14 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 1
S15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
S16 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0
The corresponding reachability matrix M, computed from A under Boolean algebra, is:
 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
S1 1 1 1 0 1 1 0 0 0 0 1 1 1 1 1 1
S2 1 1 1 0 1 1 0 0 0 0 1 1 1 1 1 1
S3 1 1 1 0 1 1 0 0 0 0 1 1 1 1 1 1
S4 0 0 0 1 1 1 0 1 1 1 1 1 1 1 1 1
S5 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 1
S6 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 1
S7 1 1 1 0 1 1 1 0 0 0 1 1 1 1 1 1
S8 0 0 0 0 1 1 0 1 0 1 1 1 1 1 1 1
S9 0 0 0 0 1 1 0 0 1 0 1 1 1 1 1 1
S10 0 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1
S11 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 1
S12 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 1
S13 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 1
S14 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 1
S15 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 1
S16 0 0 0 0 1 1 0 0 0 0 1 1 1 1 1 1
(3) Find the reachable and antecedent sets and carry out the inter-level
division. R(n_i) is the reachable set of S_i, composed of all factors whose
entry equals 1 in row i of matrix M, that is, R(n_i) = { n_j ∈ N | m_ij = 1 }.
A(n_i) is the antecedent set of S_i, composed of all factors whose entry equals
1 in column i of matrix M, that is, A(n_i) = { n_j ∈ N | m_ji = 1 }. The
reachable and antecedent sets of the 16 risk factors are identified in turn and
summarized in Table 4.
(4) Inter-level division of the reachability matrix. According to Table 4, the
first (top) level set is L1 = {5, 6, 11, 12, 13, 14, 15, 16}. Deleting L1 from
the reachable sets gives new reachable and antecedent sets; dividing the risks
level by level in this way, we get L2 = {1, 2, 3}, L3 = {10}, L4 = {8, 9}, and
L5 = {4, 7}. Therefore, the original risk factors are divided into five levels
from top to bottom.
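The level-by-level division described above can be sketched as follows; the standard ISM criterion R(n_i) ∩ A(n_i) = R(n_i) is assumed, and the 4-factor reachability matrix is a toy example with 0-based indices:

```python
def ism_levels(m):
    """Level partition of a reachability matrix: a factor belongs to the
    current (top) level when R(n_i) is a subset of A(n_i); remove the level's
    factors and repeat on the remainder."""
    n = len(m)
    remaining = set(range(n))
    levels = []
    while remaining:
        level = []
        for i in remaining:
            reach = {j for j in remaining if m[i][j]}    # R(n_i), restricted
            ante = {j for j in remaining if m[j][i]}     # A(n_i), restricted
            if reach <= ante:                            # R ∩ A = R
                level.append(i)
        levels.append(sorted(level))
        remaining -= set(level)
    return levels

M = [[1, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 1, 1]]
print(ism_levels(M))  # → [[2], [1, 3], [0]]
```

As in the paper's partition, the first level contains the "sink" factors that influence nothing further; the deepest level contains the root causes.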
(5) ISM model establishment. According to the reachability matrix results, the
structural model is established, as shown in Fig. 1.
Fig. 1. ISM hierarchical structure model of the risk factors (first level: S5,
S6, S11–S16; second: S1–S3; third: S10; fourth: S8, S9; fifth: S4, S7)
4 Model Analysis
As can be seen from the model, marketing capability (in the enterprise
capability risk subsystem) and the strength of competitors (in the market risk
subsystem) lie in the fifth level, which shows that marketing ability and
competitor strength are the deepest-level, most basic risks of all. To ensure
the success of technology innovation, an enterprise should therefore have
strong marketing ability and face weaker competitors.
The production equipment level, financial strength, and management ability in
the environmental risk and enterprise capacity risk subsystems lie in the
second to fourth levels. They play important roles among the risk factors: they
are affected by the fifth-level risk factors and determine the risk factors in
the first and second levels.
The risk factors of the market risk and project risk subsystems in the first
level are the direct risk factors in the process of an enterprise's technology
innovation. These risk factors are the direct manifestation of innovation
ability; they influence each other and promote or restrict the enterprise's
technology innovation.
5 Conclusion
Based on the above analysis, we propose the following countermeasures for an
enterprise's technology innovation. First, the enterprise must have a strong
marketing capability, so that new products can successfully enter the market
and be accepted by consumers. Second, the enterprise should implement
technology innovation in a stable external environment and with strong
corporate capacity, continuously improving its technology and management level
to ensure innovation success. Finally, it should choose a capable project
leader and set up a strong project team, to keep abreast of market and
technology information and to ensure technological advancement and maturity.
References
1. Fu, J.J.: Technology innovation– Enterprise development in China, p. 472. Business Man-
agement Press (1992)
2. Xiang, W.M., Fang, W., Shi, Q.: The Fuzzy Comprehensive Evaluation of the Risk of Technology
Innovation for Enterprises. Journal of Chongqing University 26, 142–144 (2003)
3. Zhang, Z.Y., Yuan, G.Q., Li, Z., Wang, M.: The fuzzy cluster analysis and research on
technology innovation risk factors. Journal of the Hebei Academy of Sciences 21, 56–60
(2004)
4. Peng, C., Li, L.: Forecasting management of enterprise's technology innovation risk. Studies
in Science of Science 24, 634–641 (2006)
5. Wang, L.X., Li, Y., Ren, R.M.: Risk Evaluation for Enterprise's Technology Innovation
Based on Grey Hierarchy Method. Systems Engineering - Theory & Practice, 98–104 (2006)
6. Warfield, J.N.: Participative Methodology for Public System Planning. Computers & Electrical
Engineering 1 (1973)
7. Wang, Y.L.: Systems Theory, Methods and Applications. Higher Education Press (1998)
8. Xie, L.S.: The Preventive Measures against the Environmental Risks in the West Accep-
tance of the Shifted Eastern Industry. Commercial Research 1, 95–97 (2009)
9. Tang, L.Y., Zhang, Q.Y., Wang, H.Q.: Research on Risk of Software Industry Carrying on
International Industry Transfer Based on ISM. Value Engineering 8, 1–4 (2008)
10. Chang, Y., Liu, X.D., Yang, L.: Application of Interpretative Structural Modeling in the
analysis of high-tech enterprises technologic innovation capability. Science Research
Management 2, 42–49 (2003)
Study on Reservoir Group Dispatch of Eastern Route of
South-to-North Water Transfer Project Based on
Network
Hongyan Li
1 Introduction
The South-to-North Water Transfer Project is a major infrastructure project
intended to solve the uneven spatial and temporal distribution of water
resources in China. Now that the project is in operation, the rational
allocation and dispatch of water resources is a measure of its success or
failure; research on water dispatching is therefore very important. On the
water dispatch of the Eastern Route of the South-to-North Water Transfer
Project (ERSNWT), many domestic scholars have made deep studies from different
views and obtained many useful results, but most take a technical point of
view, and the final results basically take the form of dispatching diagrams.
For example, Zhang Jianyun and Liu Wei [1,2], on the basis of system
generalization, took the minimum energy consumption and system water shortage
as a comprehensive goal and obtained the ERSNWT dispatching map. Later, Zhao
Yong [3] used system simulation theory to establish a water quantity dispatch
model and obtained the control lines of the ERSNWT. However, a drawn dispatch
chart can only yield a reasonable, not necessarily optimal, result. Network
flow analysis is an optimization technique that computes quickly and requires
little storage; the more reservoirs there are and the more complex the
relationships between water systems, the more
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 221–226, 2011.
© Springer-Verlag Berlin Heidelberg 2011
apparent its advantages become. Some scholars at home and abroad [4-8] have
applied this method to the optimal operation of reservoirs with encouraging
results, but only to dispatch cascaded hydroelectric stations; it has not yet
been applied to reservoir dispatch on the ERSNWT.
Fig. 1. Generalized system network of the ERSNWT (Hongzhe Lake, Luoma Lake,
Nansi Lake, Dongping Lake, Datun Lake, Dalangdian, Beidagang, Qianqingwa; QV:
storage, QX: discharge, QC: pumping, QG: local supply; the red line denotes
flood discharge)
From previous data, months in which flood water must be discharged are rare.
Meanwhile, in the reservoirs south of the Yellow River, pumping and discharging
cannot occur at the same time, and if there is flooding, the first task is
flood discharge, at the maximum discharge rate. Therefore, flood discharge is a
special circumstance and can be reflected through the flood damage loss; when
designing the dispatch structure, only pumping, not discharge, is represented.
Dispatching every month, with the horizontal direction representing space and
the vertical direction representing time, the reservoir group dispatching
network structure of the ERSNWT is shown in Fig. 2.
QV_t^i: the storage capacity of reservoir i at the beginning of period t;
QX_t^i: the quantity discharged from the higher-level reservoir to reservoir i
in period t; QG_t^i: the quantity drawn from reservoir i by the regions around
it in period t; QC_t^i: the quantity reservoir i pumps from the higher-level
reservoir in period t; S: source; T: sink.
Fig. 2. Time-expanded dispatching network structure of the ERSNWT reservoir
group (periods 1–12; S: source, T: sink; QV, QX, QC, QG subscripted by period
and reservoir as defined above)
3 Dispatch Model
3.1 Assumptions
The paper made the following basic assumptions: All water use regions around ①
②
reservoir will be looked as one region. The same period in the same reservoir of
③
water damage losses are the same. In the calculation of energy used use pumping
head design, without considering the head with the water level caused by the change
④
of efficiency changes. As the regulation capacity of river is much smaller than the
reservoir, so in the calculation of water shortage losses in the system only consider the
shortage situation around reservoir, without considering water shortage on both sides
of the river.
This paper selects the minimization of the sum of water shortage losses,
pumping energy consumption, flood control losses, and reservoir storage energy
consumption as the objective function.
… price of reservoir i at period t; d_t^i: storage electric energy price of
reservoir i at period t; the other symbols have the same meanings as before.
3.3 Restraints
QG^i: the total water quantity reservoir i supplies to the regions around it;
QI^i: the total natural water rights of reservoir i that can be allocated;
QP^i: the total project water rights of reservoir i that can be allocated,
QP^i = QC^{i+1} − QC^i.
③ Reservoir storage capacity restraint:

    QV_min^i ≤ QV_t^i ≤ QV_max^i        (4)

where QV_min^i and QV_max^i are the dead storage capacity and the maximum
storage capacity.
④ Pumping capacity restraint:

    QC^i ≤ QC_max^i        (5)

where QC^i and QC_max^i are the water quantity reservoir i pumps from the
higher level and the overall design pumping capacity.
⑤ Reservoir discharging capacity restraint:

    QX^i ≤ QX_max^i        (6)

where QX^i and QX_max^i are the water quantity reservoir i discharges to the
lower-level reservoir and the maximum discharge capacity.
⑥ Non-negativity restraint: all the variables in the model are non-negative.
(2) Construct the directed graph W(f^(0)). (3) Find the shortest path from the
source to the sink in W(f^(n)) (using the Floyd algorithm). If there is no
shortest path (its length is ∞), then f^(n) is the minimum-cost flow and the
calculation ends; if there is a shortest path, it corresponds to an augmenting
chain in the original network. Adjust the arc flows on the augmenting chain;
the adjustment quantity is

    θ = min [ min_{(v_i,v_j)∈μ+} ( c_ij − f_ij^{n−1} ),  min_{(v_i,v_j)∈μ−} ( f_ij^{n−1} ) ]        (7)

    f_ij^n = f_ij^{n−1} + θ,  (v_i, v_j) ∈ μ+
    f_ij^n = f_ij^{n−1} − θ,  (v_i, v_j) ∈ μ−        (8)
    f_ij^n = f_ij^{n−1},      otherwise

(4) With the adjusted flow f^(n), return to step (2).
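Steps (2)–(4) amount to the successive-shortest-path method for minimum-cost flow. The sketch below uses Bellman–Ford in place of Floyd for the shortest-path search, and the graph is a toy example, not the 196-arc dispatch network:

```python
def min_cost_max_flow(n, arcs, s, t):
    """Successive-shortest-path min-cost flow: repeatedly find the cheapest
    augmenting path in the residual network and push theta units along it
    (equations (7)-(8)). `arcs`: list of (u, v, capacity, unit_cost)."""
    INF = float("inf")
    graph = [[] for _ in range(n)]   # graph[u] entries: [v, cap, cost, rev_idx]
    for u, v, cap, cost in arcs:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual arc
    flow = total_cost = 0
    while True:
        dist = [INF] * n
        dist[s] = 0
        prev = [None] * n                 # (node, arc) used to reach each node
        for _ in range(n - 1):            # Bellman-Ford over residual arcs
            for u in range(n):
                if dist[u] == INF:
                    continue
                for e in graph[u]:
                    v, cap, cost, _ = e
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, e)
        if dist[t] == INF:                # no augmenting chain: flow is optimal
            return flow, total_cost
        theta, v = INF, t                 # equation (7): min residual capacity
        while v != s:
            u, e = prev[v]
            theta = min(theta, e[1])
            v = u
        v = t
        while v != s:                     # equation (8): adjust arc flows
            u, e = prev[v]
            e[1] -= theta
            graph[e[0]][e[3]][1] += theta
            v = u
        flow += theta
        total_cost += theta * dist[t]

# Tiny example: two parallel routes from source 0 to sink 3
arcs = [(0, 1, 2, 1), (0, 2, 1, 2), (1, 3, 2, 1), (2, 3, 1, 1)]
print(min_cost_max_flow(4, arcs, 0, 3))  # → (3, 7)
```

For the dispatch model, arc costs would encode shortage, energy, and flood losses, and capacities the restraints (4)–(6).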
Table 1. Water resources time allocation result of the first phase of ERSNWT(Dry year)
Table 2. Water resources time allocation result of the first phase of ERSNWT(Normal year)
There are 196 arcs and more than ten thousand paths in the model; after the
40th iteration, the minimum-cost maximum flow is found. The results are shown
in Table 1 and Table 2.
5 Conclusions
From the results of the dispatch model, the following conclusions can be drawn: (1) In a normal year, the optimal storage capacity of the lake in the flood season is usually near the minimum capacity; in the non-flood season, the optimal storage capacity stays under the maximum storage capacity. (2) In a dry year, the optimal storage capacity of the lake in the flood season is close to the maximum storage capacity. (3) Whether in a dry or a normal year, less water is pumped from the higher lake in the flood season, and more water is provided to the surrounding area.
References
1. Zhang, J.Y., Chen, J.Y.: Study on Optimum Operation of the East-Route South-to-North
Water Transfer Project. Advances In Water Science 3, 198–204 (1995)
2. Zhao, Y., Xie, J.C., Ma, B.: Water dispatch of east-route of South-to-North Water Transfer
Project based on system simulation method. Shuili Xue Bao 11, 38–43 (2002)
3. Liu, G.W.: Water Transfer Interbasin Operation Management. China WaterPower Press
(1995)
4. International Commission on Irrigation & Drainage. Application of Systems Analysis to
Problems of Irrigation, Control Drainage & Flood, 131–147 (1980)
5. Martin, Q.W.: Optimal Operation of Multiple Reservoir Operations. Water resource plan-
ning management, Div ASCE 109 (1983)
6. Mei, Y., Feng, S.: A Nonlinear Network Flow Model for Selecting Dead Storage of a Multireservoir Hydropower System. International Journal of Hydroelectric Energy 7, 168–175 (1989)
7. Yang, K., Dong, Z.-c., Zhang, J.-y.: Decomposition-Coordination Network Analysis Method Used in Changjiang Flood Prevention System Optimal Operation. Journal of Hohai University 28, 77–81
8. Luo, Q., Song, C.-h., Lei, S.-l.: Non-linear Network Flow Programming of Reservoir Group Systems. Engineering Journal of Wuhan University 34, 22–26 (2001)
Immune Clone Algorithm to Solve the Multi-object
Problems
1 Introduction
The biological immune system is a highly complex, parallel, and self-adaptive system. It can identify and eliminate antigenic foreign matter; at the same time, it has the abilities of learning, memory, and self-adaptation, which maintain the balance of the organism's internal environment. The artificial immune system (AIS), put forward in recent years based on the theory of the biological immune system, is a new class of algorithms that has been widely used in many fields and has demonstrated superior performance. AIS has attracted the attention of many scholars, especially in the field of multi-objective optimization [1]. Yang Dongdong et al. proposed a preference-rank immune memory clone selection algorithm to solve multi-objective optimization problems with a large number of objectives [2]. Shang Ronghua et al. proposed an immune clonal multi-objective optimization algorithm, which treats constrained optimization as a multi-objective optimization with two objectives [3]. Shang Ronghua et al. also pointed out that a new algorithm based on immune clonal theory can be applied to complex multi-objective test problems, with much better performance in both convergence and diversity [4].
Because multi-objective problems are very complex, a traditional immune algorithm applied to them has significant deficiencies: for example, it cannot
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 227–232, 2011.
© Springer-Verlag Berlin Heidelberg 2011
effectively control the scope of the population, which degrades the algorithm's performance, and it has an inherently slow search rate. In this paper, based on the basic concepts and theory of the biological immune system, an improved immune clone algorithm (IICA) is proposed to solve multi-objective problems.
To solve the key problems in MOPs, an improved immune clone algorithm is proposed based on AIS theory. In AIS, the immune mechanism of organisms is embodied in their recognition of self and non-self, after which the organism can exclude the non-self. That is, the organism can recognize and eliminate antigenic foreign objects in order to maintain its physiological balance. Material that can induce the organism's immune response and react specifically with the corresponding antibody is called an antigen [3]. From this description, the immune process is especially suitable for solving multi-objective optimization problems. However, the number of objective functions in a MOP is more than one, and as this number grows, the number of Pareto-optimal solutions increases sharply beyond any limited extent. If an evolutionary algorithm is used to solve a MOP, all solutions may become mutually non-dominated; under such circumstances, all solutions are equal by the above definitions. This is not a good signal for the solving process, since it makes the evolutionary process more difficult to move forward. In this paper, a new clone mechanism is introduced into the immune algorithm to overcome this problem; at the same time, a concept called the preference difference is defined, and a trichotomy overlap method is adopted to build the improved immune clone algorithm (IICA).
In this paper, the actual multi-objective optimization problem is treated as the antigen: y = F(a) = (f_1(a), f_2(a), ..., f_m(a)). A candidate solution of the MOP is treated as an antibody: a = (a_1, a_2, ..., a_n), where a_1, a_2, ..., a_n comply with the restriction conditions of equation (1). The algorithm flow of the IICA consists of four operations. The population at each iteration is denoted A(it). The detailed steps of the IICA appear below.
The reason for combining immunity and cloning is that this operation can expand the solution space and improve the solution quality; the essence of the clone operation is to copy the optimal individuals of the parent population into the subsequent population. For the population A_0(it), before cloning it we introduce the definition of the preference difference, to make sure the candidate solutions are distributed evenly in the solution space.
Definition 1 (preference difference ε): for a MOP with candidate solution a, a ∈ A_0(it), the preference difference ε is defined as:

ε = Σ_{i=1}^{q} (0 − g_i(a))² + (Σ_{j=1}^{p} h_j(a))²
The preference difference ε measures the gap between the current candidate solution a and the actual feasible region. Generally, the larger ε is, the greater the difference between a and the feasible region; accordingly, the possibility that a becomes a feasible solution grows smaller. In this paper, an individual that satisfies ε = 0 according to Definition 1, and that also satisfies the definition of a Pareto-optimal solution, is considered a Pareto-optimal solution. At the end of every iteration, we can find a_Pareto = (a_p1, a_p2, ..., a_pm), where 1 ≤ m ≤ n. a_Pareto then passes through the immune clone operation, giving A'(it). A fitness function is defined to determine the population clone size:
y_s = w_1·f_1(x) + w_2·f_2(x) + ... + w_m·f_m(x),
where w_1 + w_2 + ... + w_m = 1. In this paper, w_1 = w_2 = ... = w_m = 1/m. As a result, this fitness function determines the clone size of each individual as s = int(λ·(1/y_s)), where int is the rounding function and λ is a coefficient related to the clone size.
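The preference difference and the fitness-based clone size can be sketched as follows. This is an illustrative reading, not the authors' implementation: the violation term is clipped at zero so that feasible candidates give ε = 0 (consistent with the text's use of ε = 0 as a feasibility test), and the function names and the default λ are hypothetical.

```python
def preference_difference(a, ineq, eq):
    """Definition 1 (illustrative): epsilon for candidate a, with
    inequality constraints g_i(a) <= 0 and equality constraints
    h_j(a) = 0. Violations are clipped so feasible points give 0."""
    return (sum(max(0.0, g(a)) ** 2 for g in ineq)
            + sum(h(a) for h in eq) ** 2)

def clone_size(fvals, lam=10.0):
    """Clone count s = int(lam * (1 / y_s)) with equal weights 1/m."""
    m = len(fvals)
    y_s = sum(f / m for f in fvals)
    return int(lam / y_s)

# Candidate a = 0.5 under the single constraint g(a) = a - 1 <= 0
eps_feasible = preference_difference(0.5, [lambda a: a - 1], [])
eps_violating = preference_difference(2.0, [lambda a: a - 1], [])
s = clone_size([1.0, 3.0])  # y_s = 2.0, so s = int(10 / 2) = 5
```

Note the design effect of s = int(λ/y_s): individuals with smaller weighted objective values (better, under minimization) receive larger clone counts.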
In this paper, an immune mutation operation is introduced to maintain the diversity of the population; at the same time, this operation is intended to prevent the good information of parent individuals from being lost through the immune recombination operation, so it is important that individuals can perform a local search after recombination. For this reason, the immune mutation operation performs global search and local search alternately in the whole solution space. The concrete process is described as follows.
If ∀ a_i''(it) ∈ A''(it), we obtain a new a_i'''(it) by the following clone immune operation: the antibody mutates in a local space, and by using the clone immune operation the whole population strikes a balance between global search and local search.
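The alternation between global and local search can be illustrated as follows; the uniform-resampling/Gaussian-perturbation pair and the even/odd iteration switch are illustrative assumptions, not the paper's exact mutation operator.

```python
import random

def immune_mutate(a, it, lo=0.0, hi=1.0, sigma=0.05):
    """Alternate global search (uniform resampling on even iterations)
    with local search (small Gaussian perturbation on odd ones)."""
    if it % 2 == 0:        # global: explore the whole solution space
        return [random.uniform(lo, hi) for _ in a]
    # local: perturb around the current antibody, clipped to bounds
    return [min(hi, max(lo, x + random.gauss(0.0, sigma))) for x in a]

random.seed(0)
g = immune_mutate([0.5, 0.5], it=0)   # global step
l = immune_mutate([0.5, 0.5], it=1)   # local step
```

The global step keeps the population from collapsing onto one region, while the local step refines good antibodies without discarding the parent information.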
Example 1. Minimize f_1(x) = x_1 + 2·x_2 and f_2(x) = −x_1 − x_2, subject to x_1 ∈ [0,1], x_2 ∈ [0,1].
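Example 1's Pareto front can be checked independently with a brute-force non-dominated filter over a grid of candidate solutions; this is a verification aid, not the IICA itself.

```python
import itertools

def dominates(u, v):
    """u dominates v (minimization) when u is no worse in every
    objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

# Evaluate Example 1's objectives on a coarse grid over [0,1] x [0,1]
grid = [i / 4 for i in range(5)]
points = [(x1 + 2 * x2, -x1 - x2)
          for x1, x2 in itertools.product(grid, grid)]
pareto = sorted({p for p in points
                 if not any(dominates(q, p) for q in points)})
```

For instance, (f_1, f_2) = (1, −1) at x = (1, 0) is non-dominated, while (0.5, −0.25) at x = (0, 0.25) is dominated by (0.5, −0.5) at x = (0.5, 0).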
The Pareto solution result obtained by the IICA is shown in Fig. 1. As can be seen from Fig. 1, the IICA successfully finds the Pareto-optimal solutions and the Pareto frontier, and at the same time the solutions are evenly distributed.
5 Conclusions
In this paper, an IICA is proposed by introducing a clone mechanism, the trichotomy overlap method, and the preference difference, and it is used to solve MOPs. The introduction of these new concepts into the IICA improves the quality of the Pareto-optimal solution set, expands the distribution range of the Pareto feasible solutions, maintains the diversity of the population, and improves the efficiency of the algorithm. The IICA successfully solved Example 1, showing that it has a satisfactory effect.
References
1. Wang, Y., Liu, L., Mu, S., et al.: Constrained multi-objective optimization evolutionary
algorithm. Journal of Tsinghua University (Science and Technology) 45, 1–5 (2005)
2. Yang, D., Jiao, L., Gong, M., et al.: Clone Selection Algorithm to Solve Preference Multi-
Objective Optimization. Journal of Software 21, 14–33 (2010)
3. Shang, R., Jiao, L., Ma, W.: Immune Clonal Multi-Objective Optimization Algorithm for
Constrained Optimization. Journal of Software 19, 2943–2956 (2008)
4. Shang, R., Jiao, L., Gong, M., et al.: An Immune Clonal Algorithm for Dynamic Multi-
Objective Optimization. Journal of Software 18, 2700–2711 (2007)
5. Gong, M., Jiao, L., Yang, D., et al.: Research on Evolutionary Multi-Objective Optimiza-
tion Algorithms. Journal of Software 20, 271–289 (2009)
6. Chen, F., Qin, B.: Realization of Immune Genetic Algorithm Based on Multi-Objective
Optimization in Matlab. Journal of Hunan University of Technology 21, 92–95 (2007)
Foreign Language Teachers’ Professional Development
in Information Age
1
College of Foreign Languages, Pan Zhihua University
2
College of Telegraphy, Pan Zhihua University
Pan Zhihua, Sichuan, China
bettyfan1988@163.com
1 Introduction
Since the 1980s, autonomous learning has been a hot topic in the field of language teaching abroad. Holec (1981) [1] holds that autonomous learning is "learners' ability to be responsible for their own studies". Domestic scholars have been introducing, empirically studying, and developing overseas autonomous learning theories since the 1990s. However, the majority of domestic research involves mainly the necessity, feasibility, and cultural suitability of autonomous learning and the investigation of learners' consciousness and capacity for autonomy, while research concerning teachers is only a supplementary topic. In the domestic foreign language field, which emphasizes improvement of teaching methods, acknowledging and studying the students' status as the subject is no doubt important; however, the training of learners' autonomy has set new requirements for the professional development of university foreign language teachers, because learners' autonomy takes teachers' professional development as its foundation and premise. With the arrival of the information age, the Internet and multimedia technologies have provided an interactive, three-dimensional, and vivid language teaching environment and rich study materials. Students have more opportunities, while at the same time foreign language
teachers face new challenges and opportunities. Therefore, research into foreign language teachers' professional development in the information age seems especially significant.
3.1 Studying Foreign Language Teaching Theories with the Support of Information Technology and Improving Scientific Research Levels
Foreign language teaching is influenced not only by linguistics but also, and more strongly, by pedagogy, psychology, curriculum theory, learning theory, teaching methodology, appraisal theory, educational research theories, and so on (Zhang Jianzhong, 1993) [3]. Most foreign language teachers have solid knowledge of the language, but when it comes to foreign language teaching theories, a considerable number understand very little, and some are even indifferent to them. In 2002 the Chinese Foreign Language Education Research Center conducted a questionnaire among more than 900 university English teachers from 48 colleges and universities nationwide, which indicates that the majority of the teachers lack a basic understanding of modern educational ideas, teaching methodology, and so on: 82.8 percent of the teachers hold that their own good basic English skills are enough to enable them to teach English well (Zhou Yan, 2002:409) [4].
Foreign language teaching theories are among the basic capabilities foreign language teachers must have. Therefore, foreign language teachers who attach importance to professional development should study and use them to guide their teaching practice, and explore foreign language teaching theories with Chinese characteristics.
University foreign language teachers who want to obtain professional development opportunities under the circumstances of scarce training resources and arduous teaching tasks may make full use of modern information technology such as multimedia and the Internet; while staying on the teaching job, they can actively obtain professional development opportunities through online study and monographic research.
For instance, foreign language teachers may conveniently surf English teaching websites through specialized gateway websites and related links, such as David Sperling's ESL Café (http://www.eslcafe.com) and CELIA (http://www.latrobe.edu.cn/edu.au/education/celia/celia.html), and study the latest foreign language education theories. They may also visit academic periodical websites such as Applied Linguistics (http://www.3.oup.co.uk/applij), ELT Forum (http://www.eltforum.com), and the Internet TESOL Journal (http://iteslj.org/index.html) to learn about calls for papers, academic conference news, scientific research findings, new teaching theories, and peer experience. Moreover, there are many academic databases on the Internet, such as the ERIC Digest material from the CAL (Center for Applied Linguistics) in the US education resource information system; there are also the EBSCOhost web database and the Chinese periodical full-text database, which are of extremely high academic value. The above network resources are greatly helpful in enhancing university English teachers' professional development and their scientific research levels.
for them, which has solved the past problem of their being unable to participate in group teaching research activities because of time conflicts. Online teaching research supported by information technology can last long enough to guarantee the teachers' time to participate in research activities, and they can share what they have gained with their colleagues, which guarantees the teachers' achievement of professional development. Besides, teachers do not need to give their views immediately on the topic being discussed; thus, they can think in depth after hearing more differing opinions and feedback. In addition, with the aid of the Internet, those who are unwilling to comment on other teachers' teaching face to face can express their views more freely.
Constructivist learning theory holds that the knowledge construction process inevitably includes exploring problems together, examining and disputing arguments together, and compromising with peers between different viewpoints. The application of information technology, for example the Internet, makes it possible for teachers to have online exchanges and dialogues; this provides opportunities for more teachers to understand new viewpoints and the explanation of new concepts, to compare them with their former understanding, and to extract new knowledge that may be utilized in other situations. Teachers' cooperation and interaction with the environment play an extremely vital role in deepening their understanding of what they have studied, and it is exactly this kind of cooperation in solving problems that improves their ability to construct new knowledge, to apply it in actual classroom instruction, and to enhance their professional level.
teaching behaviors. They may focus on a fragment or a certain problem in the teaching process, discuss it with colleagues, and find countermeasures to solve the problem, which enables teachers to gain a more direct understanding of their own teaching, form a higher self-awareness of their own teaching ability, unify their individuality, and form a personalized teaching style. Teachers should combine such reflection with research on foreign language teaching methodology, improve their teaching quality and specialized quality, speed up their professional development, and transform themselves into researchers and scholarly teachers.
Establishing an English teaching community characterized by peer communication and aided by information technology will enable teachers to study and discuss with peers anytime and anywhere. It is a persistent-effect mechanism: the conversational English teaching community promotes teachers' professional development and guarantees that study and reflection become part of teachers' daily professional life.
Only through peer dialogue and the confrontation of ideas can teachers truly comprehend the teaching ideas that the related teaching behaviors embody, and truly distill educational thought. With the support of information technology like the Internet, teachers are able to exchange views more openly and equally, thus avoiding the direct conflicts caused by face-to-face communication and guaranteeing the security of peer dialogues.
4 Conclusion
References
1. Holec, H.: Autonomy and Foreign Language Learning. Pergamon, Oxford (1981)
2. Williams, M., Burden, R.: Psychology for Language Teachers. Cambridge University Press,
Cambridge (1997)
3. Zhang, J.: Foreign Language Pedagogy, pp. 1–22. Zhejiang Education Publishing House,
Hangzhou (1993)
4. Zhou, Y.: English Teachers’ Training Urgently Awaits to Be Reinforced. Foreign Language
Teaching and Research 6, 408–409 (2002)
5. Peng, J.: Taking Great Pains to Promote the Teachers’ Professional Development to Im-
prove English Teaching Quality. Chinese Education Journal 60 (2004)
Active Learning Framework Combining Semi-supervised
Approach for Data Stream Mining
Abstract. In a real stream environment, labeled data may be very scarce, and labeling all data is very difficult and expensive. Our goal is to derive a model that predicts future instances' labels as accurately as possible. Active learning selectively labels instances and can tackle the challenges raised by the highly dynamic nature of data streams, but it ignores the effect of utilizing unlabeled instances, which can help to strengthen supervised learning. In this paper, we propose a framework that combines active and semi-supervised learning to get the advantages of both methods and boost the performance of the learning algorithm. This framework solves the active learning problem in addition to the challenges of evolving data streams. Experimental results on real data sets prove the effectiveness of our proposed framework.
1 Introduction
The management and processing of data streams have recently become a topic of active research in several fields of computer science. In data stream applications such as network monitoring, the stock market, and sensor networks, due to the need for online monitoring, answering users' queries should be time- and space-efficient [1].
Because of recent developments in storage technology and networking architectures, it is possible for broad areas of applications to rely on data streams for quick response and accurate decision making [2].
There are different challenges in data stream mining that cause many research issues in this field; we discussed these issues in detail in our previous work [1]. Some of the technical challenges that must be considered here are the evolving nature of streams and increasing data volumes. Since considerable numbers of labeled instances are needed to build a reliable model, while labeling training examples is costly and difficult, new settings must be designed to tackle all these challenges.
One recently proposed solution to address the issues mentioned above is to use active learning (AL) techniques [3], [4]. The goal of active learning is to maximize the prediction accuracy by labeling only a very limited number of instances, and the
2 Related Work
In addition to data stream classification, our research is closely related to existing work on both semi-supervised and active learning. We presented an analytical framework of supervised data stream classification algorithms in [1].
Semi-supervised and active learning frameworks for data stream classification are established research areas; here we introduce some recent and reliable research in this field.
Clustering-training, proposed in [5], is a semi-supervised framework that uses clustering to select confidently predicted unlabeled samples and uses them to incrementally re-train the classifier.
Yan Yu et al. propose SSAD, an anomaly detection algorithm for evolving data streams based on semi-supervised learning. SSAD uses an attenuation rule to decrease the effect of historical data on the detection result, and uses semi-supervised learning to extend the labeled dataset used for training, in order to cope with the lack of labeled data [6].
Clay Woolam et al. propose a label propagation framework for data streams that
can build good classification models even if the data are not labeled randomly [7].
SmSCluster is an approach that builds its model as micro-clusters using a semi-supervised clustering technique; classification is performed with the k-nearest-neighbor algorithm [8].
Shucheng Huang presents an active learning method, which is composed of two
components. One is for actively identifying significant changes and the other is a
light-weight uncertainty-based sampling algorithm [9].
Xingquan Zhu et al. propose a classifier-ensemble-based active learning framework, with the objective of maximizing the prediction accuracy of the classifier ensemble built from labeled stream data [10]. We use this framework in our method and describe it in more detail in the next section.
3 Ensemble-Based AL Approach
A general practice in active learning is to employ rules to determine the most needed instances. Most active learning approaches aim at building one single optimal model from the labeled data; however, none of them fits data stream environments. The aim of applying active learning to data stream classification is to label "important" samples, based on the data observed so far, such that the prediction accuracy on future examples is maximized.
To tackle the challenges raised by data streams' dynamic nature, we choose a reliable classifier-ensemble-based active learning framework which selectively labels instances from data streams to build an accurate classifier [10]. In that framework, a Minimal Variance principle is introduced to guide instance labeling from data streams. In addition, a weight-updating rule is derived to ensure that the instance labeling process can adaptively adjust to dynamically drifting concepts in the data. Fig. 1 shows a general classifier ensemble framework for active learning from data streams.
Fig. 1. A general classifier ensemble framework for active learning from data streams [10]
Based on the framework in Fig. 1, it is argued that minimizing the classifier ensemble's variance is equivalent to minimizing its error rate. Following this conclusion, an optimal-weight calculation method is derived to assign weight values to the classifiers such that they form an ensemble with minimum error rate. The minimization of the classifier ensemble error rate through variance reduction acts as the principle for actively selecting the most needed instances for labeling.
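One common way to realize such variance-driven weighting is to make each classifier's weight inversely proportional to its variance and combine member predictions by a weighted average. The sketch below illustrates this idea only; it is an assumption, not the exact weight derivation of [10].

```python
def optimal_weights(variances):
    """Weight each ensemble member inversely to its variance, so
    lower-variance classifiers dominate the combined prediction."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return [w / total for w in inv]

def ensemble_predict(probs, weights):
    """Weighted average of per-classifier class-probability vectors."""
    n_classes = len(probs[0])
    return [sum(w * p[c] for w, p in zip(weights, probs))
            for c in range(n_classes)]

w = optimal_weights([1.0, 3.0])    # lower variance -> larger weight
p = ensemble_predict([[0.8, 0.2], [0.4, 0.6]], w)
```

With variances 1.0 and 3.0 the weights come out as 0.75 and 0.25, so the lower-variance classifier's vote dominates the combined class probabilities.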
4 Semi-supervised AL Framework
Many semi-supervised learning methods have been developed. Self-training needs only one classifier, which is important for meeting the speed requirement.
We choose self-training to strengthen the learning engine in the AL framework with unlabeled instances. Based on the framework in Fig. 1, instead of using a supervised classifier on every chunk to build a model from the initial labeled instances, we apply a self-training algorithm with a new confidence measure. The self-training method is introduced in Algorithm 1 [11].
Algorithm 1. Self-training
Given: labeled data L, unlabeled data U
Algorithm:
Repeat:
1. Train f from the labeled data using supervised learning.
2. Apply f to the unlabeled instances and select a subset S of the unlabeled data.
3. Remove the subset S from U; add {(x, f(x)) : x ∈ S} to L.
Typically, S consists of the few unlabeled instances with the most confident predictions of f. For selecting the subset of unlabeled instances, we use the minimum classifier variance on the current unlabeled set in each chunk, based on Fig. 1.
Assume that σ²_{m,c} denotes the variance of η_{m,c}, where η_{m,c} is a random variable accounting for the variance of classifier m with respect to class c. In our system, σ²_{m,c} is calculated by

σ²_{m,c} = (1/|U|) Σ_{x∈U} (y_c − f_m(x))² . (1)
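A minimal runnable sketch of Algorithm 1, using a toy one-dimensional threshold learner and distance-from-boundary confidence as stand-ins for the paper's base classifier and minimum-variance measure:

```python
def fit(L):
    """Toy learner: threshold midway between the two labeled classes."""
    pos = [x for x, y in L if y == 1]
    neg = [x for x, y in L if y == 0]
    return (max(neg) + min(pos)) / 2

def self_train(labeled, unlabeled, k=2, rounds=3):
    """Algorithm 1: repeatedly train on L, then move the k most
    confident unlabeled points (farthest from the boundary) into L
    with their predicted labels."""
    L, U = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not U:
            break
        t = fit(L)                            # 1. train f on L
        U.sort(key=lambda x: -abs(x - t))     # 2. most confident first
        S, U = U[:k], U[k:]                   # 3. move S from U into L
        L += [(x, 1 if x > t else 0) for x in S]
    return fit(L)

t = self_train([(0.1, 0), (0.9, 1)], [0.0, 0.2, 0.8, 1.0])
```

Starting from one labeled point per class, the loop pseudo-labels the extreme points first, then the borderline ones, and the final threshold settles at 0.5.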
Fig. 2. Classifier ensemble accuracy: (a) Adult, (b) Letter, (c) CoverType
is a sparse dataset whose instances are evenly distributed, so a small portion of examples is insufficient to learn the genuine concepts underlying the data. On Letter, SeSAL performs inferior to MV in the majority of cases; hence, this characteristic forms our future research on these kinds of datasets.
6 Conclusions
In this paper, we propose a new research topic: the combination of active and semi-supervised learning for data streams with increasing data volumes and an evolving nature. In our proposed framework we use self-training with a new confidence measure to take advantage of unlabeled instances and boost the performance of the learning algorithm. Moreover, in our experiments on real data sets, we compared our algorithm with a fully supervised active learning method. The experiments show that the proposed method outperforms the compared method.
References
1. Kholghi, M., Hassanzadeh, H., Keyvanpour, M.R.: And Evaluation of Data Mining Tech-
niques for Data Stream Requirements. In: Second International Symposium on Computer,
Communication, Control and Automation (3CA), Taiwan, pp. 474–478 (2010)
2. Aggarwal, C.: Data Streams: Models and Algorithms. Springer, New York (2007)
3. Seung, H.S., Opper, M., Sompolinsky, H.: Query by committee. In: 5th Annual Workshop
on Computational Learning Theory, pp. 287–294 (1992)
4. Settles, B.: Active learning literature survey. Technical Report, Wisconsin-Madison (2009)
5. Wu, S., Yang, C., Zhou, J.: Clustering-training for Data Stream Mining. In: 6th IEEE In-
ternational Conference on Data Mining - Workshops, ICDMW 2006 (2006)
6. Yu, Y., Guo, S., Lan, S., Ban, T.: Anomaly Intrusion Detection for Evolving Data Stream
Based on Semi-supervised Learning. In: Köppen, M., Kasabov, N., Coghill, G. (eds.)
ICONIP 2008. LNCS, vol. 5506, pp. 571–578. Springer, Heidelberg (2009)
7. Woolam, C., Masud, M.M., Khan, L.: Lacking labels in the stream: Classifying evolving
stream data with few labels. In: Rauch, J., Raś, Z.W., Berka, P., Elomaa, T. (eds.) ISMIS
2009. LNCS, vol. 5722, pp. 552–562. Springer, Heidelberg (2009)
8. Masud, M.M., Gao, J., Khan, L., Han, J.: A Practical Approach to Classify Evolving Data
Streams: Training with Limited Amount of Labeled Data. In: 8th IEEE International Con-
ference on Data Mining, pp. 929–934 (2008)
9. Huang, S.: An Active Learning Method for Mining Time-Changing Data Streams. In: 2nd
International Symposium on Intelligent Information Technology Application (2008)
10. Zhu, X., Zhang, P., Lin, X., Shi, Y.: Active Learning From Stream Data Using Optimal
Weight Classifier Ensemble. J. IEEE Transactions on Systems Man, and Cybernetics—
Part B: Cybernetics (2010)
11. Zhu, X., Goldberg, A.B.: Introduction to Semi-Supervised Learning. Synthesis Lectures on
Artificial Intelligence and Machine Learning. Morgan & Claypool (2009)
12. Tumer, K., Ghosh, J.: Error correlation and error reduction in ensemble classifier. J. Con-
nection Sci. 8, 385–404 (1996)
13. Witten, I., Frank, E.: Data mining: practical machine learning tools and techniques. Mor-
gan Kaufmann, San Francisco (2005)
14. Newman, D., Hettich, S., Blake, C.: UCI Repository of machine learning (1998)
15. Quinlan, J.: C4.5: Programs for Machine learning. Morgan Kaufmann, San Francisco
(1993)
Sensorless Vector Control of the Charging Process for
Flywheel Battery with Artificial Neural Network
Observer
Abstract. The new type of flywheel battery requires a control system with a compact structure and low manufacturing cost. To meet this requirement, a new method for the sensorless vector control of the flywheel battery is proposed in this paper. The advantage of the proposed control system is that it does not need an extra sensor to obtain the flywheel speed and position information. The flywheel position, and thereby the speed, is determined by estimating the back electromotive force (EMF) using artificial neural network (ANN) observers. By doing so, the dimensions and cost of the driver system can be reduced. The ANN observers use the instantaneous values of the stator voltages and currents and the estimated error of the stator current as their inputs, and output the back-EMF components in the α-β reference frame. A simulation model was established in MATLAB/Simulink to carry out the numerical experiments. The test results demonstrate that the proposed charging control system for the flywheel battery has good control performance and good robustness. The speed/position estimation precision is high, and the error is acceptable over a wide speed range.
1 Introduction
The energy density and power density of the flywheel battery are greater than those of a chemical battery [1, 2], which makes the flywheel battery more useful for applications in wind power, EVs, UPS, aerospace, etc. In general, a flywheel energy storage system (FESS) is mainly composed of a flywheel, magnetic bearings which support the flywheel, a motor-generator which drives the flywheel and inter-converts mechanical and electrical energy, and a power converter (see Fig. 1 [3]). The power converter plays an important role in ensuring the high performance of the energy exchange. Refs. [4] and [5] employ PI controllers to realize the charging
and discharging, and [6] uses a fuzzy methodology for charging. However, these conventional vector controllers need extra electromechanical sensors, such as tachogenerators and encoders mounted on the flywheel, to acquire accurate speed and rotor position, and the complexity and cost of the driver system are correspondingly increased [7-10]. The extra sensors can be eliminated if a mathematical method is adopted to estimate the speed and position of the flywheel. This approach is called the sensorless control technique [11].
Hence, to optimize the control of the flywheel battery, a sensorless vector control scheme is used to estimate the position and hence the speed. Compared with other position estimation methods, such as the fuzzy observer [11] or voltage injection through the machine [10], the ANN provides learning methods with global asymptotic stability for controlling plants without exact mathematical models, and it is quite suitable for practical control applications.
A new sensorless vector control scheme based on the ANN observer is presented to
charge the flywheel battery in this work. The analysis, design and simulation of the
proposed controller are described. Ideal performances such as robustness, rapidity and
stability are achieved.
denote axis voltages, respectively; θ_r denotes the flywheel position, and the terms ω_r ψ_f sin θ_r and ω_r ψ_f cos θ_r represent the back EMF [8, 11].
It is noted that the flywheel position θ_r is contained in the back EMF components. Thus the information on the flywheel position can be extracted from the estimated back EMF components by means of inverse trigonometric functions [8, 9].
As usual in this control strategy, the direct-axis reference current is set to zero and the motor torque is then controlled by the quadrature-axis current, referring to (3) and (4). Fig. 2 illustrates the proposed sensorless vector charging control approach for the flywheel battery. In contrast to the classical vector control scheme, the distinguishing feature of the sensorless vector control is that the speed encoder is removed and replaced by a position estimator. RBF neural networks are adopted as the observers to calculate the back EMF in the model.
Fig. 2. The proposed sensorless vector charging control scheme for the flywheel battery (main blocks: flywheel battery; position and speed estimator)
As a mature technology, BP neural networks have been applied in the field of engineering control for several years. However, their ability is limited by the problem of local minima. Because the RBF (Radial Basis Function) network is better than the BP (Back Propagation) network in its optimal-approximation property, classification ability and learning speed, it can improve the control performance of the flywheel battery. The block diagram of the proposed ANN observers for position and speed estimation is given in Fig. 3.
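Although the observers in the model are RBF networks trained inside the simulation, the forward pass of a Gaussian RBF network with a linear output layer can be sketched in a few lines. This is a minimal illustration; the centers, widths and weights below are arbitrary placeholders, not the trained observer parameters.

```python
import math

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a Gaussian RBF network with a linear output layer."""
    out = 0.0
    for c, s, w in zip(centers, widths, weights):
        # squared distance from the input to this hidden unit's center
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        # Gaussian hidden activation, weighted into the linear output
        out += w * math.exp(-dist2 / (2.0 * s ** 2))
    return out

# Toy 2-input network (estimated current and current error) producing one
# scalar output, e.g. one back-EMF component.
centers = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
widths = [0.5, 0.5, 0.5]
weights = [0.2, -0.1, 0.3]
y = rbf_forward([0.5, 0.2], centers, widths, weights)
```

The RBF network's output is a smooth weighted sum of Gaussian bumps, which is what gives it the optimal-approximation property mentioned above.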
Two RBF observers are employed to obtain the two components of back
EMF, Eα and Eβ in the α-β reference frame. For each observer, there are two inputs
and one output. One of the inputs is the estimated current component and the other is
the error between the estimated and measured currents. The output is the estimated back EMF component. Since the back EMF terms contain the flywheel position information, as given in Eq. 2, it is feasible to obtain the position from these back EMFs.
The relationships between u_α, u_β, the estimated currents î_α, î_β, and the back EMFs E_α, E_β can be described as

  pL î_α = u_α − R_a î_α − E_α ,
  pL î_β = u_β − R_a î_β − E_β .      (2)
The estimated flywheel position can be calculated as

  θ̂_r = tan⁻¹( −E_α / E_β ) .      (3)
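Equation (3) can be checked numerically. The sketch below assumes the back-EMF components take the form E_α = −ω_r ψ_f sin θ_r and E_β = ω_r ψ_f cos θ_r (the sign convention implied by the terms quoted earlier), and recovers the rotor angle with a four-quadrant arctangent; ω_r and ψ_f are illustrative values only.

```python
import math

def estimate_position(E_alpha, E_beta):
    # Eq. (3): theta_hat = arctan(-E_alpha / E_beta); atan2 keeps the quadrant
    return math.atan2(-E_alpha, E_beta)

# Synthesize back EMF for a known rotor angle and verify it is recovered.
omega_r, psi_f, theta_r = 900.0, 0.05, 0.7   # speed (rad/s), flux linkage, true angle (rad)
E_alpha = -omega_r * psi_f * math.sin(theta_r)
E_beta = omega_r * psi_f * math.cos(theta_r)
theta_hat = estimate_position(E_alpha, E_beta)   # recovers theta_r
```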
Fig. 3. Block diagram of the RBF observers for flywheel position and speed estimation
4 Simulation Results
The simulation test for the control model shown in Fig. 4 is implemented in the MATLAB/Simulink software package. The performance of the proposed sensorless control technique is then validated with various speed references.
Fig. 4 shows the simulation results obtained for sensorless vector control using the
RBF observer. A sinusoidal reference speed of 1.0 Hz with a magnitude of 900 rad/s
was given to investigate the performance of the flywheel battery. It can be seen from
Fig. 4 that the estimated speed of the flywheel can track the reference effectively, and
the error between the estimated and actual speed is acceptable.
The estimated flywheel positions with reference speeds of 1000 rpm and 500 rpm are shown in Fig. 5 (a) and (b), respectively. It can be noted that the estimated
position can follow the track of actual position with limited errors. The simulation
results indicate that the proposed intelligent sensorless control system is effective for
charging the flywheel battery. Satisfactory performance can be obtained.
248 H. Qin et al.
Fig. 4. The simulation results of the estimated flywheel speed: (a) with various reference speed,
and (b) zoomed curves of picture (a)
Fig. 5. The estimated flywheel position: (a) with 1000 rpm reference speed, and (b) with 500
rpm reference speed
5 Conclusions
A new sensorless vector control system for the flywheel battery is studied in this paper. The position and speed information is estimated by means of the back EMF. Two RBF observers are applied to obtain the components of the back EMF in the α-β reference frame in an intelligent way. The system has been modeled and simulated in MATLAB/Simulink to investigate its dynamic behavior under the proposed controller. The simulation results show that the performance of the system is satisfactory: the flywheel speeds up quickly and precisely, the robustness is enhanced, and the charging procedure of the battery is stable at various operational speeds. The ANN-based sensorless vector controller achieves acceptable position and speed estimation errors over the wide speed range from 500 rpm to 1000 rpm. Considering the oscillation problem caused by switching working states, the charging and discharging simulations of the models were tested separately; this must be improved in the future. The proposed intelligent sensorless controller provides an effective method for the control of the flywheel battery.
References
1. Saitoh, T., Yamada, N., Ando, D., Kurata, K.: A grand design of future electric vehicle to reduce urban warming and CO2 emissions in urban area. Renewable Energy 30, 1847–1860 (2005)
2. Suzuki, Y., Koyanagi, A., Kobayashi, M., Shimada, R.: Novel applications of the flywheel
energy storage system. Energy 30, 2128–2143 (2005)
3. Briat, O., Vinassa, J.M., Lajnef, W., Azzopardi, S., Woirgard, E.: Principle, design and ex-
perimental validation of a flywheel-battery hybrid source for heavy-duty electric vehicles.
IET Electr. Power Appl. 1, 665–674 (2007)
4. Yu, J., Tang, S., Li, Z., Liu, K.: Research on the flywheel battery power conversion control
based on BP neural network. Control Engineering of China 17, 1–6 (2010)
5. Jia, Y., Cao, B.: DSP micro-controller based control system of flywheel charge and dis-
charge. Power Electronics 38, 58–60 (2004)
6. Fu, X., Xie, X.: The control strategy of flywheel battery for electric vehicles. In: IEEE In-
ternational Conference on Control and Automation, Guangzhou, China, pp. 492–496
(2007)
7. Batzel, T.D., Lee, K.Y.: Slotless permanent magnet synchronous motor operation without
high resolution rotor angle sensor. IEEE Trans. Indust. App. 15, 366–376 (2000)
8. Chen, Z., Tomita, M.D., Doki, S., Okuma, S.: An extended electromotive force model for
sensorless control of interior permanent-magnet synchronous motors. IEEE Trans. Indust.
App. 50, 288 (2003)
9. Kojabadi, H.M., Ahrabian, G.: Simulation and analysis of the interior permanent magnet synchronous motor as a brushless AC drive. Simulation Practice and Theory 7, 691–707 (2000)
10. Tursini, M., Petrella, R., Parasiliti, F.: Initial rotor position estimation method for PM motors. IEEE Trans. Indust. App. 39, 1630–1640 (2003)
11. Bilal, G., Mehmet, O.: Sensorless vector control of a permanent magnet synchronous motor with fuzzy logic observer. Electrical Engineering 88, 395–402 (2006)
Framework for Classifying Website Content Based on
Folksonomy in Social Bookmarking
1 Introduction
The rapid development of the Internet is associated with various information overload
phenomena, and difficulties associated with document management. The classification of various documents in the field of information retrieval has been extensively studied. TFIDF (Term Frequency–Inverse Document Frequency) is a method for calculating term weights in a document, and the resulting weights can be used to classify documents.
Other classification approaches include K-Nearest Neighborhood, Artificial Neural
Network, and others. Although these classification methods are extensively applied,
those based on keywords are associated with a semantic issue. In recent years many scholars have constructed ontologies to solve the semantic problem; however, ontology construction depends on expert knowledge in the problem domain, and the process of constructing knowledge requires the participation of knowledge engineers. Therefore, the most serious problem associated with the ontological approach is how to define expert knowledge in a manner that adequately represents domain knowledge.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 250–255, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Framework for Classifying Website Content Based on Folksonomy 251
In the dot-com bubble era, many Internet companies closed down, but the benefits provided by the Internet are ongoing, and websites have evolved from Web 1.0 to Web 2.0. Many enterprises have developed innovative Web 2.0 applications and are eager to use them. Traditional automated document classification is enhanced by user tagging using metadata [6]. The Internet is a platform that supports tagging; examples include the del.icio.us and HEMIDEMI websites. Del.icio.us, begun in 2003, was the first system to provide website bookmarking: users keep their own favorite web bookmarks and allow other users to query them by keywords. HEMIDEMI was established in 2005 in Taiwan as a bookmarking site, which also allows users to set favorite bookmarks. Although both bookmarking tools use Folksonomy, searching and browsing the results of classification causes information overload, reducing user-friendliness.
User-set labels are prone to three main problems: synonyms, semantic issues associated with keywords, and classification issues [3]. These problems have not yet been solved. This study aims to improve the Folksonomic weighting mechanism to solve the semantic problems of Folksonomy and eliminate the effects of poor classification. This study tries to answer two research questions. (1) Can we propose a mechanism to alleviate the problem of synonymous words in Folksonomy and improve the effect of automated document classification? (2) How can the usefulness of this mechanism be verified?
The paper is organized as follows. Section 2 discusses the key issues in the theoretical background. Section 3 introduces our research methodology. Section 4 presents a prototype system and experimentally demonstrates the effectiveness of the approach. Finally, Section 5 draws our conclusions.
2 Theoretical Background
Tagging is the ability of users to define information, using a keyword-based approach to describe their thoughts about specific web content [2]. User tagging behavior is information-describing behavior. According to the photo sharing website Flickr, a "tagging label helps you find some commonality among photographs based on a keyword or category." However, the descriptions made by users are not limited to photographs; they can cover films, music, bookmarked links, and blogs. Users can assign any name as a tag and quickly find other users with whom to share resources of all kinds. Tagging by keywords is performed by users; it is based not on general meaning but on the needs of the user, and it does not meet strict classification standards.
The number of websites that support tagging has gradually increased; Folksonomy refers to this phenomenon [1], [8]. The term combines the words "folk" and "taxonomy"; Folksonomy therefore refers to classification by users [6], [8]. Users can mark personal information and use tags as a basis for classification. Tagging is useful not only to the original tagger, but also to other users: Folksonomy effectively involves voting by users of a classification system. In an arbitrary keyword-based distributed classification system, a group of users may establish separate tags. Web 2.0 provides a timely solution to the problems of traditional classification. Nevertheless, Folksonomy based on user-defined keywords still has several problems. They
252 S.-M. Pi et al.
include the quality of users and of labeling, semantic problems associated with keywords, the lack of constraints on keywords, and poor classification results. In addition, public classification lacks accuracy, and tags have multiple meanings [3], [4], [6].
The purpose of information retrieval is to eliminate information overload [5]. The earliest and most extensively used weighting scheme is TFIDF [9]. TFIDF combines two main frequencies. The first is the term frequency within a document; a high frequency typically indicates importance. The second is the document frequency, i.e., the number of documents in which the term appears; terms that occur in many documents are less discriminative and therefore receive lower weights.
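The TFIDF weighting described above can be sketched as follows. This is a minimal illustration on a toy corpus; the paper's actual implementation runs inside its classification module, and TFIDF has several smoothing variants beyond this basic form.

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    # document frequency: number of documents containing each term
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency (normalized) times inverse document frequency
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weights

docs = [["web", "tag", "tag"], ["web", "ontology"], ["tag", "folksonomy"]]
w = tfidf(docs)
# in doc 0, "tag" appears twice and so outweighs "web"
```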
The WordNet lexical database is a development of modern psychological theory. It is a lexical vocabulary system used to construct an automated dictionary [7]. The WordNet lexical database was developed at Princeton University for cognitive research. WordNet mainly organizes English nouns, verbs, adjectives and adverbs into sets of synonyms. WordNet has a wide range of applications, and its website offers many open API (Application Program Interface) functions. The site is WordNet.net.
3 Research Methodology
The system framework consists of four modules: Folksonomy module, WordNet
synonym analysis module, Data storage components, and user behaviors module.
Figure 1 presents this system framework.
The system architecture of the data storage components is based mainly on various data files. The profile database contains the personal records of the main users, including personally composed information and stored bookmark tags. The parameters database mainly records the parameters required for classification, including the TFIDF classification weights and the settings of the follow-up classification criteria.
The Folksonomic system is the most important part of this study. The details of the operational process are as follows:
(1) Read the data of users' bookmarks: before the Folksonomy process, we have to collect all the data of users' bookmarks.
(2) Read the parameters of classification: at this stage, we have to find the TFIDF and weight values from the classification parameter file.
(3) Analysis of synonyms: at this stage, we send users' bookmarks to the synonym analysis module in WordNet and send the result back to the Folksonomy classification module.
(4) Classification: first, we use CKIP (Chinese Knowledge and Information Processing) for word segmentation. Second, we use WordNet to get synonym information. Third, we use TFIDF to calculate the weights. Finally, the classification details are adjusted.
(5) Output the result of classification.
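The steps above can be sketched end to end. The synonym table below is a toy stand-in for the WordNet synonym-analysis module, the category profiles are hypothetical precomputed weights, and CKIP word segmentation is omitted.

```python
from collections import Counter

# Toy synonym table standing in for the WordNet synonym-analysis module
SYNONYMS = {"movie": "film", "weblog": "blog"}

def normalize(tags):
    # Step (3): map each tag to its canonical synonym
    return [SYNONYMS.get(t, t) for t in tags]

def classify(bookmark_tags, category_profiles):
    """Steps (1)-(5) in miniature: read the tags, normalize synonyms,
    then score each category by the weighted overlap with its profile."""
    tags = Counter(normalize(bookmark_tags))
    scores = {cat: sum(tags[t] * w for t, w in profile.items())
              for cat, profile in category_profiles.items()}
    return max(scores, key=scores.get)

# Hypothetical category profiles (e.g. TFIDF weights learned beforehand)
profiles = {"cinema": {"film": 1.0, "actor": 0.5},
            "blogging": {"blog": 1.0, "post": 0.5}}
category = classify(["movie", "review"], profiles)  # "movie" -> "film" -> "cinema"
```

Merging synonyms before weighting is what lets two different user labels contribute to the same category, which is the core of the proposed improvement.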
Examples of the four classification cases are as follows:
- Conversion category: the two labels "影片" and "影集" are synonyms, so they will be converted into the same category.
- Chinese synonyms: inquiries about "好吃" and "好吃的" involve the same category.
- Words segmentation: after Chinese word segmentation processing, "溫泉之旅" should be classified into the "溫泉" category.
- Exceptional words: "部落格介紹" belongs to the "部落格" category.
In this study, the experiments arranged for actual users to use the system. When users browse the prototype system, an experimental design differentiates between the experimental group and the control group. The control group adopts traditional Folksonomy; the experimental group adopts the proposed method. Afterwards, we compare the results of the control group and the experimental group.
User identities and backgrounds are beyond the scope of this study. Therefore, to reduce the number of users and increase the convenience of sampling, participants were drawn from graduate students and college students who took part in the experiments. A questionnaire survey was utilized to measure user satisfaction.
5 Conclusion
(1) Actual system observation: in the study of the classification mechanism, the experiment ended when the number of labels reached 164, whereas traditional public classification required 253; the number of labels was effectively reduced by about 30%.
(2) User satisfaction survey: after completion of the experiment, the users were required to complete a satisfaction questionnaire. Statistical analysis of the results indicates that the proposed classification system performed significantly better: the Folksonomic mechanism provides significant improvements over traditional Folksonomy and helps users make information inquiries.
Future work includes two directions:
(1) Semantic issue: further semantic analysis should be carried out, or labels should be combined with the automatic construction of ontology, to improve public classification.
(2) Integration of Chinese and English synonyms: we hope to integrate Chinese and English synonyms to improve the integrity of the classification system.
Acknowledgments. The authors would like to thank the Taiwan National Science Council for its financial support (NSC96-2416-H-033-004). We also appreciate Ted Knoy for his editorial assistance.
References
1. Fichter, D.: Intranet Applications for Tagging and Folksonomies. Online 30(3), 43–45
(2006)
2. Golder, S., Huberman, B.A.: Usage Patterns of Collaborative Tagging Systems. Journal of
Information Science 32(2), 198–208 (2006)
3. Gordon-Murnane, L.: Social Bookmarking, Folksonomies, and Web 2.0 Tools. Searcher-
The Magazine for Database Professionals 14(6), 26–38 (2006)
4. Heymann, P., Koutrika, G., Garcia-Molina, H.: Can Social Bookmarking Improve Web
Search? In: Proceedings of the International Conference on Web Search and Web Data
Mining, pp. 195–206 (2008)
5. Kobayashi, M., Takeda, K.: Information Retrieval on the Web. ACM Computing Sur-
veys 32(2), 144–173 (2000)
6. Mathes, A.: Folksonomies - Cooperative Classification and Communication through
Shared Metadata (2004),
http://www.adammathes.com/academic/
computer-mediatedcommunication/folksonomies.html
7. Miller, G.A.: WordNet: A Lexical Database for English. Communications of the
ACM 38(11), 39–41 (1995)
8. Ohmukai, I., Hamasaki, M., Takeda, H.: A Proposal of Community-based Folksonomy
with RDF Metadata. Proceedings of the 4th International Semantic Web Conference,
ISWC 2005 (2005)
9. Salton, G., McGill, M.: Introduction to Modern Information Retrieval. McGraw-Hill, New
York (1983)
Research on Internal Damping Algorithm of
Marine Inertial Navigation System
Abstract. Due to gyro random walk and other factors, the Schuler oscillation amplitude of an undamped inertial navigation system (INS) increases with time. Marine inertial navigation systems are often required to work continuously for hours to days or even longer, and an over-divergent INS position error will cause loss of the navigation function. Thus, an appropriate algorithm must be added to the system to damp the Schuler oscillations. Damping networks for the INS level channels are designed. For marine INS applications, a new internal damping method is presented which uses the system's own measurements as the reference velocity rather than external inputs. Marine test results show that the proposed internal damping method can suppress the INS Schuler oscillation error and prevent its divergence, thereby improving the navigation accuracy of the INS in long-endurance applications; it thus has practical engineering value.
1 Introduction
An inertial navigation system (INS) is composed of a large number of parts and components, and many factors generate additional lag. Therefore, it is actually quite difficult to maintain the free-oscillation amplitude of the system when adjusting the Schuler tuning [1]. In inertial navigation systems, gyro drift is the main factor that leads to divergence of INS navigation errors, and it can be divided into a constant drift part and a random drift part [2]. The majority of the constant drift part can be compensated by prediction methods, but the random drift part causes divergence of the undamped INS navigation error, and the root mean square of the error is proportional to the square root of time [3-4].
In marine applications, an INS is often required to work continuously for hours to days or even longer, and an over-divergent INS position error will cause loss of the navigation function. Therefore, correction networks are needed to break the Schuler tuning condition and turn the undamped navigation system into an asymptotically stable system, and
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 256–261, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Research on Internal Damping Algorithm of Marine Inertial Navigation System 257
this ensures that the system error is bounded. In this way, the system becomes a damped inertial navigation system [5].
Damping of an INS can be divided into internal and external schemes. When external reference measurements can be provided continuously, external damping is often selected. However, in some special situations such as wartime, or for submarines and other underwater vehicles, continuous and accurate external reference measurements are not easy to obtain [6]. A new internal damping method is proposed in this paper, and the method is tested by marine experiments. The results show that the proposed internal damping method can effectively reduce the Schuler oscillation error of the INS and prevent its divergence over time, thereby improving navigation accuracy.
Fig. 1. Single-channel damping block diagram of the INS. ΔV_I and ΔV_r are the INS velocity error and the reference velocity error; k is the damping coefficient; ε is the gyro drift; Δϕ is the mathematical platform tilt angle; g denotes the local gravity; and R is the radius of the earth.
From the block diagram, the steady-state errors of the velocity and the platform tilt angle of the step response can be derived as equations (1) and (2):

  Δϕ_∞ = lim_{s→0} s·Δϕ(s) = −(k/g)·ΔV_r + (kR/g)·ε .      (1)
From equation (1), it can be seen that the reference velocity error causes a steady-state platform tilt angle error; nevertheless, it does not generate a steady-state INS velocity error. This means that even if the reference velocity is inaccurate, it will not lead to divergence of the position error.
The characteristic polynomial of the system is given in equation (3):

  Δ(s) = s² + k·s + g/R .      (3)
If the damping ratio is chosen as 0.5, the corresponding feedback coefficient k is 0.0012. The internal damping network is implemented in the marine INS in accordance with the parameters above.
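The value of k follows from matching the characteristic polynomial in (3) to the standard second-order form s² + 2ζω_n s + ω_n². A quick check, with nominal values of g and R assumed here:

```python
import math

g = 9.8        # local gravity (m/s^2), nominal value assumed
R = 6.371e6    # earth radius (m), nominal value assumed
zeta = 0.5     # chosen damping ratio

# Match Eq. (3), s^2 + k*s + g/R, to s^2 + 2*zeta*omega_n*s + omega_n^2:
omega_n = math.sqrt(g / R)   # Schuler natural frequency (rad/s), ~1.24e-3
k = 2.0 * zeta * omega_n     # feedback coefficient, ~0.0012
```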
2.2 Simulation
The simulation is carried out to verify the feasibility of the designed damping network. The INS is assumed to be stationary; the initial attitude errors are 10″, and the initial azimuth error is 30″. The gyro drifts of the three axes are 0.002°/h, 0.001°/h and 0.001°/h, respectively.
Fig. 2. Position errors of simulation results
Figure 2 shows the position errors of the undamped and the damped INS navigation results. In the figure, the solid lines denote the undamped navigation errors and the dotted lines denote the damped results. It can be seen that the designed damping network effectively suppresses the Schuler oscillation errors.
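The effect of the damping term can be reproduced with a crude single-channel sketch based on Fig. 1. This uses semi-implicit Euler integration, a constant drift standing in for the random drift, and illustrative initial errors; it is not the paper's simulation model.

```python
import math

def peak_velocity_error(k, g=9.8, R=6.371e6, dt=10.0, hours=25):
    """Integrate the single-channel error model of Fig. 1 and return the
    peak INS velocity error (m/s) seen during the last five hours."""
    eps = 0.001 * math.pi / (180 * 3600)   # constant gyro drift, 0.001 deg/h in rad/s
    dv = 0.0                               # initial velocity error
    dphi = 10.0 / 206265.0                 # initial platform tilt, 10 arcsec in rad
    peak = 0.0
    for i in range(int(hours * 3600 / dt)):
        dv += (-g * dphi - k * dv) * dt    # k*dv is the internal damping feedback
        dphi += (dv / R + eps) * dt        # tilt driven by velocity error and drift
        if i * dt > 20 * 3600:
            peak = max(peak, abs(dv))
    return peak

undamped = peak_velocity_error(k=0.0)
damped = peak_velocity_error(k=0.0012)   # Schuler oscillation largely removed
```

With k = 0 the initial tilt error keeps ringing at the Schuler period; with k = 0.0012 the oscillation decays and only the small steady-state offset of Eq. (1) remains.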
The average INS velocity over a period before entering the internal damping network can be approximately taken as the reference velocity, and proper strategies can be devised to judge whether the ship is maneuvering or not. When the ship is considered to be maneuvering, the internal damping network is cut off; otherwise, the internal damping network is applied and the reference velocity is updated. Since in marine applications the carriers cruise for most of the time, the internal damping strategy can be applied to suppress the Schuler oscillation and keep the error from diverging, thereby enhancing the INS navigation accuracy on long-endurance occasions.
The flow chart of INS internal damping strategy is shown as follows:
The INS internal damping data fusion strategy can be summarized as follows:
• Calculate the average INS east and north velocities v_Ek, v_Nk and accelerations a_Ek, a_Nk every minute; the average acceleration is calculated by a_k = (v_k − v_{k−1}) / 60.
• If the average accelerations of both channels continuously meet the threshold condition (|a_k| < a_th, a_th = 0.01 m/s²) 30 times, the internal damping network can be started from the 1800th second, and the reference velocity is set as v_Eref = v_E30, v_Nref = v_N30.
• If the average INS acceleration of the k-th minute does not meet the threshold condition during the damping process, that is, |a_k| > a_th (k > 30), the internal damping network should be cut off immediately to return the system to the undamped INS status.
260 L. Kui, L. Fang, and X. Yefeng
• If the difference between the average INS velocity of the k-th minute and the pre-set reference velocity exceeds a threshold during the damping process, that is, if |v_k − v_ref| > v_th (v_th = 1 m/s), the reference velocity should be updated with the current velocity.
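The four strategies above can be condensed into a small state machine. This is an illustrative sketch: the class and variable names are our own, and only a single velocity channel is shown.

```python
class InternalDampingManager:
    """Per-minute averages decide when to start, stop, or
    re-reference the internal damping network."""

    A_TH = 0.01   # acceleration threshold, m/s^2
    V_TH = 1.0    # velocity threshold, m/s

    def __init__(self):
        self.calm_minutes = 0
        self.damping_on = False
        self.v_ref = None

    def update(self, v_avg, a_avg):
        """v_avg, a_avg: per-minute average velocity and acceleration (one channel)."""
        if not self.damping_on:
            # Strategy 2: require 30 consecutive calm minutes before starting
            self.calm_minutes = self.calm_minutes + 1 if abs(a_avg) < self.A_TH else 0
            if self.calm_minutes >= 30:
                self.damping_on = True
                self.v_ref = v_avg
        else:
            if abs(a_avg) > self.A_TH:
                # Strategy 3: maneuvering detected, cut off the damping network
                self.damping_on = False
                self.calm_minutes = 0
            elif abs(v_avg - self.v_ref) > self.V_TH:
                # Strategy 4: re-reference to avoid accumulated reference error
                self.v_ref = v_avg
        return self.damping_on

mgr = InternalDampingManager()
for minute in range(30):
    mgr.update(v_avg=5.0, a_avg=0.001)   # 30 calm minutes -> damping starts
```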
In order to avoid problems caused by frequent switching between the damping and non-damping status of the INS, strategy 2 requires the threshold condition to be met 30 times before damping is started. Since the working hours in marine applications are usually long, and a marine INS is often of relatively high precision, cutting off the damping network for a period of time does not have much impact.
Strategy 3 is the condition for cutting off the damping network, and the goal of strategy 4 is to prevent the reference velocity error from accumulating when the ship continuously accelerates or decelerates. In strategy 4, the INS velocity is compared with the reference velocity during the damping process; if the difference is larger than the pre-set threshold, the reference velocity is replaced by the current velocity, which solves the problem.
Fig. 4. The ship's navigation route (longitude vs. latitude)

Fig. 5. Position errors ΔSe and ΔSn of the marine test
Figure 4 shows the ship's navigation route. The frigate departed from the port, patrolled in the target area for several days, and returned to the port when the tasks were finished. In figure 5, the position errors are presented; the solid lines are the position errors of the undamped INS and the dotted lines are the errors of the proposed internal damping INS. To further test the effectiveness of the proposed internal damping algorithm, 70 seconds of attitude error is added to the system at the 33rd hour of navigation. It can be seen that the INS navigation errors are effectively suppressed by the proposed internal damping algorithm.
5 Conclusion
In marine INS applications the carrier acceleration is often small, so adding appropriate damping algorithms can effectively suppress the Schuler oscillation error and avoid divergence of the navigation errors due to random gyro drifts, thereby improving navigation accuracy. For applications in which external reference measurements are absent, this paper presented a new internal damping algorithm that is independent of external reference inputs. The effectiveness of the method is verified by marine experiments, and the results show that the proposed method can effectively damp the Schuler oscillation errors of the system and improve navigation accuracy; it thus has excellent practical engineering applications.
References
1. Gao, W., Zhang, Y., Xu, B., Ben, X.: Analyse of Damping Network Effect on SINS. In:
Proceedings of the 2009 IEEE International Conference on Mechatronics and Automation,
Changchun, China, August 9-12, pp. 2530–2536 (2009)
2. Qin, Y., Zhang, H.: Kalman Filter and Integrated Navigation Principle, pp. 283–285. Northwestern Polytechnical University Press, Xi’an (1998) (in Chinese)
3. Titterton, D.H.: Strapdown Inertial Navigation, pp. 453–456. The Institution of Electrical
Engineers, Herts (2004)
4. Grammatikos, A., Schuler, A.R., Fegley, K.A.: Damping Gimballess Inertial Navigation
Systems. IEEE Transactions On Aerospace And Electronic Systems AES-3(3) (May 1967)
5. Cheng, J.-h., Zou, J.-b., Wu, L., Hao, Y.-l., Gan, S.: The Design of an Effective Marine
Inertial Navigation System Scheme. In: 2008 Workshop on Knowledge Discovery and
Data Mining, pp. 671–676 (2008)
6. Du, Y., Liu, J., Liu, R., Sun, Y.: Fuzzy Damped Algorithm in Strapdown Attitude Heading
Reference System. Journal of Nanjing University of Aeronautics and Astronautics 3(37),
274–278 (2005) (in Chinese)
Design and Implementation of Process Migrating among
Multiple Virtual Machines*
Si Shen, Zexian Zhang, Shuangxi Yang, Ruilin Guo, and Murong Jiang**
1 Introduction
In the cloud computing environment, system resources are shared, and dynamic resource allocation is one of the core problems for keeping task execution smooth. Process migration technology can be used to solve problems such as load imbalance, system breakdown, and user process death, so that tasks are not lost or reworked. The key problem of process migration is how to forward all of the state information to the destination node; in particular, how to handle the transformation of process state between different processors is a major difficulty. Process state information includes processor information, process code, data and heap information, and the state information of the migrating process maintained by the OS.
At present, there are many studies on process migration domestically and overseas, mainly including: process migration supported by the language and compiler [1]; dynamically detecting and translating possible pointers by traversing the stack [2];
* This work has been supported by Yunnan NSF Grant No.2008PY034 and the TianYuan Special Funds of the National Science Foundation of China 2011.
** Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 262–267, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Design and Implementation of Process Migrating among Multiple Virtual Machines 263
dividing the address space and reserving a specific virtual address for every processor's stack [4] [5]; application-level process migration in the C language [3] [6]; and so on. All of the methods mentioned above either need a new language, cannot detect all pointers in the stack, can only be used on homogeneous platforms, or apply only to structure-oriented languages, so their expansibility is low.
In this paper, we discuss a process migration technique using the Java Debug Interface (JDI) and demonstrate a migration program in three experimental environments: among physical machines, between a physical machine and a virtual machine, and among virtual machines. This technique works at the program level and is platform independent. It also has advantages such as strong generality, protecting the local environment from intrusion, and preventing malicious code from stealing local information.
Fig.: Architecture — a migrating layer (preprocessor, tracker, transfer agent, event
monitor, and user program) built on a platform layer (JVM and JPDA).
The preprocessor adds migration marks to the Java source code. The tracker
launches the target JVM and saves process state information automatically. The process
state information includes the types and values of variables and the location of the
current checkpoint. If the user process has been migrated from another node, all process
state information is restored first; the program then jumps to the recorded checkpoint
using a 'goto'-like method and continues to run. The monitor mainly monitors migration
events and information transmission events.
The platform layer provides platform-independent APIs and makes the program itself
platform independent. JPDA is the Java Platform Debugger Architecture; JDI
communicates with the JVM under this architecture. The JVM shields the differences
between operating systems, so the user process can migrate to any OS that has a
standard JVM.
Preprocessing
In order to migrate a process in a heterogeneous environment, the physical state of the
process must be converted to a logical state before the process can migrate through the
network. As preprocessing, migration marks are inserted into the source code of the user
program.
264 S. Shen et al.
Fig.: Migration flow — the originating node loads and runs the user program while
tracking the user process; on a migration request it accesses the current checkpoint and
transmits the process data; the destination node receives the user program data and
context, restores the process data if needed, and resumes from the checkpoint.
Upon a migration request, the process is migrated to a destination node, and all of its
data are restored there.
In the testing, three virtual machines (VMs) and two physical machines (PMs) are used.
PM1 and PM2 are physical machines (CPU 2.0 GHz, RAM 1 GB); VM1, VM2 and
VM3 are virtual machines (CPU 2.0 GHz, RAM 256 MB). VM3 runs in PM2.
A program that computes the factorial of 120 is run for testing, with a
"Thread.sleep(200);" after each step.
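The checkpointed computation can be sketched as follows, with migration reduced to handing a small state dictionary to the destination node; the names and structure are our illustration, not the paper's Java/JDI implementation:

```python
import math

def factorial_with_checkpoints(n, state=None):
    """Compute n! step by step, keeping a restorable checkpoint after each step.

    `state` holds every live variable ({"i": ..., "acc": ...}); shipping this
    dictionary to another node and resuming there is the logical-state transfer
    that the migration scheme performs.
    """
    if state is None:
        state = {"i": 1, "acc": 1}
    i, acc = state["i"], state["acc"]
    while i <= n:
        acc *= i
        i += 1
        state = {"i": i, "acc": acc}   # checkpoint after each step
        # a real run would sleep here, cf. Thread.sleep(200)
    return acc, state

# Run part of the computation "on PM1", then resume "on PM2" from the state.
partial, saved = factorial_with_checkpoints(60)
full, _ = factorial_with_checkpoints(120, state=saved)
```

Only the logical state crosses the network, which is why the scheme works across heterogeneous platforms.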
Fig. 4 shows that the user process is migrated to PM2 after checkpoint 2. At PM2, user
process is resumed from checkpoint 2 as shown in Fig. 4(b).
(a) Test program running in PM1 (b) Test program running in PM2
Fig. 4. The process of test program is migrated from PM1 to PM2
Fig. 5 shows the migration from PM1 to VM3. The top half of the figure is the interface
of PM1, and the bottom half is that of VM3. VM3 is a virtual machine created by VMware
Player running in PM1. The procedure for migration between a physical machine and a
virtual machine is the same as that among physical machines.
Fig. 5. Process migrated from PM1 to VM3 Fig. 6. Process migrated from VM1 to VM2
VM1, named xp1, runs in a VMware ESXi server. We start VM1, copy the process-
migrating program to it, and run the program. VM2 is configured identically to VM1.
Fig. 6 shows that test program starts running in VM1, and it is migrated to target
node (VM2) after checkpoint 2. User process is resumed in VM2, and it continues
running from checkpoint 2.
Table 1 compares the migration times for computing 120! in the three experimental
environments discussed above.
In the tests, migration involving virtual machines takes more time because a VM runs
on a virtualization layer, so its performance is somewhat lower than that of a physical
machine; another factor is that the target VM is nested in another VM in order to
simulate a multi-VM environment. A virtual machine has lower performance than a
physical machine with the same hardware configuration. Nevertheless, the overhead of
migration is at an acceptable level compared with the total computing time, so the
technology also has high value in practical applications.
Conclusion
In this paper, we have implemented a process migration technology across multiple
virtual machines. The migration process is essentially transparent to the user, both in
preprocessing the source code and in automatically storing and restoring process data.
The user does not need to understand the underlying theory, so usability is high.
Moreover, because a computing task can be restored at another node, the technology can
improve the efficiency of a distributed system and enhance its robustness.
References
1. Jul, E., Levy, H., Hutchinson, N., et al.: Fine-Grained Mobility in the Emerald System.
ACM Transactions on Computer Systems 6(2), 109–133 (1988)
2. Dimitrov, B., Rego, V.: Arachne: A Portable Threads System Supporting Migrant Threads
on Heterogeneous Network Farms. IEEE Transactions on Parallel and Distributed Sys-
tems 9(5), 459–469 (1998)
3. Jiang, H., Chaudhary, V.: Compile/Run-time Support for Thread Migration. In: Interna-
tional Parallel and Distributed Processing Symposium, IPDPS 2002, vol. 1, p. 0058b (2002)
4. Chase, J., Amador, F., Lazowska, E., Levy, H., Littlefield, R.: The Amber System: Parallel
Programming on a Network of Multiprocessors. In: ACM Symposium on Operating Systems
Principles (1998)
5. Casas, J., Konuru, R., Otto, S., Prouty, R., Walpole, J.: Adaptive Migration Systems for
PVM. Supercomputing (1994)
6. Jiang, H., Chaudhary, V.: Process/Thread Migration and Checkpointing in Heterogeneous
Distributed System. In: Proceedings of the 37th Hawaii International Conference on System
Sciences, Hawaii (2004)
7. Venners, B.: Inside Java Virtual Machine. McGraw-Hill, New York (1997)
8. Sun Microsystems, http://java.sun.com/products/jpda
9. Xiao, Q., Jiang, M., et al.: Cross-Platform Design and Implementation of Process Migrating.
Journal of Computer Applications 27(27) (2007)
10. Xiao, Q.: Process Migrating Technology Research Based on Java. Master's Degree Disser-
tation, Yunnan University (2007)
The Adaptability Evaluation of Enterprise Information
Systems
1 Introduction
Enterprise information systems (EIS) should be able to adapt to a changing internal
and external environment at any time, in order to enhance management efficiency and
add management functions; this ability is the enterprise information system's
adaptability (EISA). At present there are few references on EISA, and the main
research directions [1-5] are as follows: 1) analysis of EISA's influencing factors [1];
2) EISA evaluation [2]; 3) empirical studies of EISA; 4) how to enhance EISA. However,
the research on 4) mainly focuses on the software field; for example, reference [4]
proposes a structure and two rule modes with better adaptability. But software systems'
adaptability differs from EISA, and no reference evaluates adaptability from the
perspectives of both the EIS itself and its software.
Hence, based on GQM (Goal-Question-Metric), this paper proposes a set of
adaptability evaluation indices, and the corresponding evaluation model is given. This
work can support further research into EISA optimization.
3 Evaluation Model
In this paper, the model is based on the similarity to the ideal solution; the modeling
process is shown in Fig. 2 and is illustrated in detail below via the indices proposed in
this paper.
Step 1: according to Section 2, the grading standards for evaluation are as follows:
1) As for Time, the four levels A, B, C and D are 7, 5, 3, and 1 respectively.
2) As for Cost, the four levels A, B, C and D are 7, 5, 3, and 1 respectively.
3) As for absolute complex degree, the four levels A, B, C and D are 1, 3, 5, and
7 respectively.
4) As for relative complex degree, the four levels A, B, C and D are 7, 5, 3, and 1
respectively.
5) As for personal numbers involved in the adjustment, the four levels A, B, C
and D are 7, 5, 3, and 1 respectively.
6) As for the meeting degree involved in the adjustment, the four levels A, B, C
and D are 7, 5, 3, and 1 respectively.
7) As for the influence degree involved in the adjustment, the four levels A, B, C
and D are 7, 5, 3, and 1 respectively.
8) As for the confusion degree involved in the adjustment, the four levels A, B, C
and D are 7, 5, 3, and 1 respectively.
Note that if a degree falls between two standards, grades of 2, 4, or 6 are assigned
accordingly.
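The grading standards above can be captured in a small lookup table; the sketch below is illustrative, and the abbreviation "ACD" for absolute complex degree is our shorthand rather than the paper's:

```python
# A/B/C/D grade mapping from the standards in Step 1: most indices grade A
# highest (7); the absolute complex degree ("ACD") uses the reversed scale.
DESCENDING = {"A": 7, "B": 5, "C": 3, "D": 1}
ASCENDING = {"A": 1, "B": 3, "C": 5, "D": 7}

def grade(index_name, level):
    """Return the numeric grade for one index at one level."""
    table = ASCENDING if index_name == "ACD" else DESCENDING
    return table[level]
```

For example, `grade("Time", "A")` yields 7, while `grade("ACD", "A")` yields 1.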
Step 2: determine the weights of the indices. As different indices carry different
importance, it is necessary to assign them different weights, here according to the
Delphi method. For the indices proposed in this paper, the weights are:
w = (w1, w2, w3, w4, w5, w6, w7, w8) = (0.1, 0.1, 0.05, 0.05, 0.1, 0.4, 0.1, 0.1),
where w1, ..., w8 represent Time, Cost, absolute complex degree, relative complex
degree, personnel numbers involved in the adjustment, the meeting degree, the influence
degree, and the confusion degree, respectively.
Step 3: determine the sample matrix SM. In this step, according to the grading
standards of step 1, each index is graded by experts, constituting SM'; multiplying by
the weight of each index then gives SM.
Step 4: comprehensive evaluation of the sample matrix. First, obtain the ideal matrix
SM_I. Then calculate the distance d_i (i = 1, 2, ..., n) between each EIS and the
corresponding ideal matrix; in this paper, the Frobenius matrix norm is adopted.
According to the above steps, the final adaptability value is obtained; the larger the
final value, the better the adaptability.
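Steps 1 to 4 can be sketched numerically as follows. The grade values below are hypothetical, and we assume that the ideal matrix grades every index at 7 and that a smaller distance to the ideal means better adaptability:

```python
import numpy as np

# Hypothetical expert grades (rows: EIS1..EIS3, columns: the eight indices).
SM_prime = np.array([[3, 3, 5, 3, 3, 3, 3, 3],
                     [7, 5, 5, 5, 7, 7, 5, 5],
                     [5, 5, 3, 5, 5, 5, 5, 5]], dtype=float)
w = np.array([0.1, 0.1, 0.05, 0.05, 0.1, 0.4, 0.1, 0.1])
SM = SM_prime * w                        # weighted sample matrix (Step 3)
SM_I = 7.0 * w                           # assumed ideal: best grade 7 everywhere
d = np.linalg.norm(SM - SM_I, axis=1)    # per-EIS distance (the Frobenius norm
                                         # reduces to the row-wise 2-norm here)
best = int(np.argmin(d))                 # smaller distance -> better adaptability
```

With these invented grades the ranking d2 < d3 < d1 matches the paper's example, driven largely by the heavily weighted meeting degree.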
4 Examples
The evaluation indices set and model are exemplified via the following case. The
related data are shown in Table 1.
Table 1 lists, for each EIS, the graded indices T and C, the Complexity indices ACD and
RCD, and the Risk indices PNI, MD, ID and CD; the last row shown is
(0.5, 0.4, 0.25, 0.20, 0.5, 2.8, 0.3, 0.6).
Then the distance between each EIS and the ideal matrix, d_i (i = 1, 2, 3), is calculated
via Matlab 6.5, with the following result: since d_2 < d_3 < d_1, the second EIS's
adaptability is the best and the first EIS's is the worst.
What is more, the data in the table show that the meeting degree of EIS2 is the highest.
In other words, even if the other indices are better, the meeting degree is the most
important index; otherwise the investment of time, money and so on would be
meaningless.
5 Conclusions
According to GQM, this paper proposes a set of adaptability indices covering four
aspects: Time, Cost, Complexity, and Risk. An evaluation model is then proposed to
evaluate EISA and is exemplified through a case. The research in this paper can enrich
EISA theory and lay a foundation for EISA optimization.
References
1. Meglich-Sespico, P.: Exploring the key factors for justifying the adoption of a human
resource information system. In: Proceedings – Annual Meeting of the Decision Sciences
Institute, pp. 535–540 (2003)
2. Liu, W.-g.: An Evaluation Model and Its Implementation of Information System.
Computer Applications 23, 33–35 (2003)
3. Naing, T., Zainuddin, Y., Zailani, S.: Determinants of information system adoptions in
private hospitals in Malaysia. In: 2008 3rd International Conference on Information and
Communication Technologies: From Theory to Applications, pp. 1–2 (2008)
4. Yu, C., Ma, Q., Ma, X.-X., Lv, J.: An Architecture-oriented Mechanism for Self-
adaptation of Software Systems. Journal of Nanjing University (Natural Sciences) 42,
120–130 (2006)
5. Pan, J., Zhou, Y., Luo, B., Ma, X.-X., Lv, J.: An Ontology-based Software Self-adaptation
Mechanism. Computer Science 34, 264–269 (2007)
Structural Damage Alarm Utilizing Modified
Back-Propagation Neural Networks
Xiaoma Dong
Keywords: damage alarm, modal frequency ratio, steel truss girder bridge,
modified BPNN.
1 Introduction
Failures of large engineering structures are often caused by minute fatigue cracks, so
using efficient non-destructive methods to detect structural damage in advance has
become a research hotspot [1~8]. In order to avoid compound-factor identification and
to reduce the complexity of identification, a multi-level damage identification strategy
was proposed. The strategy divides the whole damage identification process into three
steps: the first is damage alarm, the second is damage location, and the third is damage
degree identification. Damage alarm is an important step in structural damage
identification; its objective is to evaluate the structural health and give an alarm signal.
According to the published literature [1~5], investigators in the structural damage
identification field mostly focus on damage location and damage degree identification,
while damage alarm has received less attention because it is considered easy to realize.
According to the published literature [6~8], the existing damage alarm methods are
mostly based on the conventional BPNN, and these methods did not take testing noise
into account, even though real test signals always contain noise. Therefore, in order to
avoid the disadvantages of the conventional BPNN, a modified BP neural network is
proposed for a structural damage alarm system in this paper. The experimental results
for a steel truss girder bridge show that the new method is better than the BPNN for
structural damage alarm.
The first layer is an input layer, the second layer is a hidden layer, and the third
layer is an output layer. The hidden layer node function adopts the non-linear
sigmoidal function

f(x) = 1 / (1 + e^(-x)) , (1)

where x is the neural node input. The output of the kth node in the hidden and output
layers can be described by

o_k = f(net_k) = f( Σ_j w_kj o_j ) , (2)

where net_k is the input of the kth node. The interconnection weights, adjusted so that
the prediction errors on the training set are minimized, are given by

Δw_ji(t) = η δ_j o_i , (3)

where 0 < η < 1 is the learning rate coefficient, Δw_ji(t) is the actual change in the
weight, δ_j is the error of the jth node, o_j is the actual output of the jth node of the
output layer, and y_j is the corresponding target output.
In order to control network oscillations during the training process, a momentum
coefficient 0 < α < 1 is introduced into the definition of the weight change (the
standard momentum form):

Δw_ji(t) = η δ_j o_i + α Δw_ji(t − 1) .
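Equations (1)–(3) with the momentum term can be sketched for a single output node as follows; this is a toy illustration with invented training data, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))               # node function, Eq. (1)

def train_step(w, x, y, eta=0.5, alpha=0.8, prev_dw=None):
    """One weight update for a single sigmoid output node, Eqs. (2)-(3),
    with the momentum term alpha * prev_dw added to damp oscillations."""
    o = sigmoid(w @ x)                            # node output, Eq. (2)
    delta = (y - o) * o * (1.0 - o)               # output-node error term
    dw = eta * delta * x                          # Eq. (3)
    if prev_dw is not None:
        dw = dw + alpha * prev_dw                 # momentum
    return w + dw, dw

rng = np.random.default_rng(0)
w = rng.normal(size=3)                            # toy initial weights
x, y = np.array([1.0, 0.5, -0.5]), 1.0            # invented sample and target
dw = None
for _ in range(200):
    w, dw = train_step(w, x, y, prev_dw=dw)
```

After training, the node output approaches the target, while the momentum term keeps successive weight changes from oscillating.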
Because frequency is a simple, economical and easily obtained modal parameter whose
precision is easily guaranteed, this paper chooses the MFCR (modal frequency change
ratio) as the input character parameter of the modified BPNN [9]. Figure 3 shows the
first four MFCRs under the undamaged condition and five damage conditions.
In view of measurement noise, normally distributed random data are added to every
MFCR to simulate actual measured data. The random data have a mean of 0 and a
standard deviation of 0.005, and the random data length is 300. The three hundred data
samples obtained under the undamaged condition are used to train the modified BPNN,
and the data samples obtained under the five damage conditions are used to test it.
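The construction of the noisy training samples can be sketched as follows; the undamaged MFCR values here are placeholders, since the paper's actual values come from Fig. 3:

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder values for the first four MFCRs under the undamaged condition.
mfcr_undamaged = np.array([0.0, 0.0, 0.0, 0.0])
# 300 samples of zero-mean Gaussian noise with standard deviation 0.005,
# superimposed on each MFCR to simulate measured data.
train = mfcr_undamaged + rng.normal(0.0, 0.005, size=(300, 4))
```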
The MFCR of damage case 3 is 1.3%, which is less than 1.5%. Through the preceding
analysis, the conclusion is reached that the modified BPNN can give a definite alarm if
the MFCR due to damage is not less than the measurement error.
5 Conclusion
In order to ensure bridge structure safety, it is very important to detect and repair
damage as soon as possible. Damage alarm is the first step in structural damage
identification, and an important one as well. The traditional damage alarm methods are
mostly based on the conventional BPNN, which is a global-approximation neural
network with weak mode-classifying and anti-noise abilities, so the damage alarm effect
based on the conventional BPNN is not good. This paper proposes a new modified BPNN
for a structural damage alarm system. The experimental results for a steel truss girder
bridge show that the proposed method is better than the old one for structural damage
alarm.
In addition, through modal frequency sensitivity analysis, the conclusion is reached
that the modified BPNN can give a definite alarm if the MFCR due to damage is not less
than the measurement error.
Acknowledgments. This research is sponsored by the Aviation Science Foundation
of China (No. 2008ZA55004).
References
1. Dutta, A., Talukdar, S.: Damage detection in bridges using accurate modal parameters.
Finite Elements in Analysis and Design 40, 287–304 (2004)
2. Zhao, J., Ivan, J.N., DeWolf, J.T.: Structural damage detection using artificial neural net-
works. J. Infrastruct. Syst. 4, 93–101 (1998)
3. Shi, Z.Y., Law, S.S.: Structural Damage Location From Modal Strain Energy Change.
Journal of Sound and Vibration 218, 825–844 (1998)
4. Stubbs, N.S., Osegueda, R.A.: Global non-destructive damage detection in solids. The Int.
J. of Analytical and Exp. Modal Analysis, 81–97 (1990)
5. Dong, X., Sun, Q., Wei, B., Hou, X.: Research on Damage Detection of Frame Structures
Based on Wavelet Analysis and Norm Space. In: ICIII 2009, pp. 39–41 (2009)
6. Ko, J.M., Ni, Y.Q., Chan, T.H.T.: Feasibility of damage detection of Tsing Ma bridge us-
ing vibration measurements. In: Aktan, A.E., Gosselin, S.R. (eds.) Nondestructive Evalua-
tion of Highways, Utilities and Pipelines IV. SPIE, pp. 370–381 (2000)
7. Worden, K.: Structural fault detection using a novelty measure. Journal of Sound and Vi-
bration 1, 85–101 (2001)
8. Chan, T.H.T., Ni, Y.Q., Ko, J.M.: Neural network novelty for anomaly detection of Tsing
Ma bridge cables. In: International Conference on Structural Health Monitoring 2000,
Pennsylvania, pp. 430–439 (1999)
9. Dong, X.-m., Zhang, W.-g.: Improving of Frequence Method and Its Application in Dam-
age Identification. Journal of Aeronautical Materials 26, 17–20 (2006)
Computation of Virtual Regions for Constrained
Hybrid Systems
1 Introduction
Hybrid systems are systems that include both discrete and continuous dynamics. In many
applications, hybrid systems have multiple operating modes, each described by a
different dynamical structure. In this paper, a class of hybrid systems is considered in
which transitions between the modes are caused by external events or disturbances and
include a finite-duration transient phase, as a mode transition may correspond to a
failure of the system [1]. Examples of systems with this type of protective switching
action include power systems [12] [13]. Invariant sets of hybrid systems play an
important role in the many situations where the dynamics are constrained in some way.
The purpose of this paper is to study the transient behavior and establish the invariant
sets of such systems.
The mode transitions are modeled, and the variability defines the region of the
dynamic system. An efficient computation for the invariant set is proposed; this method
is attractive because the invariant set is useful for designing switching strategies.
280 J. Li, Z. Ji, and H.-l. Pei
state vector are the sufficient condition for the existence and uniqueness of solutions.
A mode transition due to external events such as a fault or control switching can be
described by a sequence of discrete states. When a transition happens, for example
S_i → S_{i+1}, there may exist a reset function R_{i,i+1}(·) that resets the value of
the system state to a new value. However, a state transition does not cause an instant
reset of the continuous part of the system; there may exist a transient phase between
two discrete states. The model is shown in Figure 1, where the system has three phases
represented by M_0, M_1, M_2. A mode transition is defined as follows.
In Definition 1, when an event such as a fault happens, it causes a mode transition
to occur. The system dynamics change from ẋ(t) = f_0(x(t)) to ẋ(t) = f_1(x(t), w(t)),
where the signal w ∈ W represents the uncertainty in the transient dynamics and the
closed set W represents its range. When the mode transition is completed, the dynamics
change to ẋ(t) = f_2(x(t)).
Fig. 1 labels: ẋ = f_0(x), ẋ = f_1(x, w), ẋ = f_2(x).
The system modes f_i, i = 0, 1, 2, are Lipschitz continuous in x; the invariant set of
the system is discussed in the later sections.
From the theorem, a necessary and sufficient condition for system (3) is that at every
point on the boundary ∂κ the vector field is directed into the set. This can be
expressed as

n_κ(x)^T f(x) ≤ 0 , ∀x ∈ ∂κ . (5)
V(x), which defines the invariant set, is a function of x. There are two important
families of invariant sets: the classes of ellipsoidal sets and polyhedral sets. Both
mode transition dynamic systems and continuous systems have these types of invariant
sets.
Ellipsoidal sets are widely used as invariant sets in continuous systems. This
follows from the existence of a quadratic Lyapunov function for such systems and from
the fact that level sets of Lyapunov functions are invariant sets [8]. A corollary can
be deduced from this:
Theorem 1. For a system ẋ = Ax, x ∈ R^n, A ∈ R^{n×n}: if all eigenvalues of A have
non-positive real parts, then the system has an ellipsoidal invariant set.
Ellipsoidal sets are popular invariant sets. An ellipsoidal invariant set can be
expressed as

δ = {x ∈ R^n | x^T P x ≤ 1} (7)

or δ = {x ∈ R^n | (x − x_a)^T P (x − x_a) ≤ 1} . (8)
A polyhedral invariant set can be expressed as δ = {x : Fx ≤ 1} . (10)
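For a concrete instance of Theorem 1 and the ellipsoidal set (7), a Lyapunov equation can be solved numerically; the sketch below uses the stable matrix A2 from the example in Section 5 and checks condition (5) on sampled boundary points. It is an illustration, not the paper's Matlab procedure:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable mode A2 from the example; Theorem 1 applies since both eigenvalues are -1.
A = np.array([[-1.0, 0.0],
              [1.0, -1.0]])
# Solve A^T P + P A = -I (pass A^T, since the solver solves M X + X M^H = Q).
P = solve_continuous_lyapunov(A.T, -np.eye(2))
# {x : x^T P x <= 1} is then an ellipsoidal invariant set as in Eq. (7).
# Check condition (5) on sampled boundary points: n(x)^T f(x) = 2 x^T P A x <= 0.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
L = np.linalg.cholesky(np.linalg.inv(P))      # maps the unit circle to the boundary
boundary = (L @ np.vstack([np.cos(theta), np.sin(theta)])).T
inward = np.einsum("ij,jk,ik->i", boundary, P, boundary @ A.T)
```

Since V(x) = x^T P x decreases along trajectories, every sampled boundary point satisfies the inward-pointing condition.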
Post_f(χ_0) = ∪_{w∈W} {φ_f(t, t_1, x_0, w) : x_0 ∈ χ_0} ,
Pre_f(χ_f) = ∩_{w∈W} {x_0 : x_f = φ_f(t, t_1, x_0, w), x_f ∈ χ_f} . (12)
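The Post operator of Eq. (12) can be approximated by gridding the disturbance set and the initial set and integrating each pair forward; this is a crude sampling sketch rather than the exact polyhedral computation of [3], and the horizon, grids and initial box are invented:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Disturbed transient mode, using A1 and B1 from the example (an assumption).
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])
B1 = np.array([[0.0, 0.0], [0.0, 1.0]])

def flow(x0, w, t1=0.5):
    """phi_f(t1, 0, x0, w): integrate the mode under a constant disturbance w."""
    f = lambda t, x: A1 @ x + B1 @ np.array([w, w])
    return solve_ivp(f, (0.0, t1), x0, rtol=1e-8).y[:, -1]

# Grid a small initial box and the disturbance set |w| <= 1.
X0 = [np.array([a, b]) for a in (-0.2, 0.0, 0.2) for b in (-0.2, 0.0, 0.2)]
W = np.linspace(-1.0, 1.0, 5)
post = np.array([flow(x0, w) for x0 in X0 for w in W])
lo, hi = post.min(axis=0), post.max(axis=0)   # box enclosing the sampled states
```

The enclosing box is only an under-sampled picture of Post_f; guaranteed over-approximations need the set-valued methods cited in the text.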
The virtual viability region of S_{i+1} in state S_i is defined as in [1]; properties
of these operators as well as their computation are discussed in [3]. Considering
separately the given t_0 and the duration [t_0, t_f],

Post^{-1}(t_0, v_{i+1}) = ∩_{w∈W} {x | x_f = φ_w(t_f, t_0, x) ∈ v_{i+1}} . (13)
The computation of the safe region and the virtual region is important for
applications, and the computation algorithm has been given in [3]. The invariant sets
of the mode transition can be computed by the following procedure:
1. Give the initial set of the mode transition system.
2. Compute the invariant set of the pre-transition system by the method of ellipsoidal
sets or polyhedral sets.
3. Use the invariant set of the pre-transition system as the initial set of the
transition system, and compute the viability region as the invariant set.
4. The system changes to the post-transition mode after the duration of the transition;
the invariant set of the post-transition system is computed from the final viable
region of the transition system.
The computation of the invariant sets of the mode transition is important for
applications, but it is difficult to compute and represent them for high-dimensional
systems. The method in Section 4 is efficient for low-dimensional systems.
5 Example
In this section, a continuous dynamics in R^2 is chosen so that the trajectories and
sets are easy to visualize. The computation is based on convex computation and the
computational procedures for invariant sets of mode transitions, and can be completed
with Matlab toolboxes. Consider the mode transition system in Figure 1: modes M_0 and
M_2 are the pre-transition and post-transition modes, and mode M_1 is a transition mode
which is caused by the disturbance and lasts for a certain time.
Let the systems be given as follows:
A1 = [1 0; 1 1] , B1 = [0 0; 0 1] , A2 = [−1 0; 1 −1] ,
A3 = [1 0.5; 0.5 1] , B3 = [0 0; 0 1] , |u| ≤ 1 , ||x||_∞ ≤ 1 .
Starting from the pre-transition system f_1(x), the invariant set can be computed by an
iterative procedure. The initial set of the system is given; after iteration, δ_3 = δ_2,
so the invariant set of the pre-transition system f_1(x) is δ_3. Let the invariant set
of f_1(x) be the initial set of the transition system f_2(x). With the computation
algorithm proposed, the invariant sets of f_2(x) have been computed.
After a certain time, the integration of the viability region for system f_2(x)
evolves backward. The invariant sets of the mode transition systems are shown in
Figure 3.
6 Conclusion
In this paper, a method to compute the invariant sets for mode transition dynamics is
studied. Since the invariant sets can be computed efficiently, they make it possible to
apply model-predictive control, protection, and decisions on mode transitions before
the transient actually occurs. A simple example is given in this paper, and
applications to realistic problems are currently being studied. The computation for
complex systems, which may have large event sets, is difficult; more efficient methods
will be investigated in the next step.
The authors gratefully acknowledge the support of the National Science Foundation of
China (61001185, 61003271, 60903114).
References
1. Pei, H.-L., Krogh, B.H.: Stability Regions for Systems with mode Transition. In: Proc. of
ACC 2001 (2001)
2. Branicky, M.S.: Multiple Lyapunov Functions and other Analysis Tools for Switched and
Hybrid Systems. IEEE Transactions on Automatic Control 43(4) (April 1998)
3. Pei, H.-L., Krogh, B.H.: On the Operator Post^{-1}. Technical Report, Dept. of Electrical
and Computer Engineering, Carnegie Mellon University (2001)
4. Lygeros, J.: Lecture Notes on Hybrid Systems, Dept. of Electrical and Computer Engineer-
ing, University of Patras (February 2-6, 2004)
5. Donde, V.: Development of multiple Lyapunov functions for a hybrid power system with a
tap changer, ECE Dep., University of Illinois at Urbana Champaign (2001)
6. Mayne, D.Q., Rawlings, J.B.: Constrained Model Predictive Control: Stability and
Optimality. Automatica 36, 789–814 (2000)
7. Zhang, P., Cassandras, C.G.: An Improved Forward Algorithm for Optimal Control of a
Class of Hybrid Systems. In: Proc. Of the 40th IEEE CDC (2001)
8. Jirstrand, M.: Invariant Sets for a Class of Hybrid Systems. In: IEEE, CDC 1998 (1998)
9. Girard, A., Le Guernic, C., Maler, O.: Efficient computation of reachable sets of linear
time-invariant systems with inputs. In: Hespanha, J.P., Tiwari, A. (eds.) HSCC 2006.
LNCS, vol. 3927, pp. 257–271. Springer, Heidelberg (2006)
10. Blanchini, F.: Controlled-invariant sets (2006)
11. Chutinan, A., Krogh, B.H.: Computing Polyhedral Approximations to Flow Pipes for Dy-
namic Systems. In: 37th IEEE Conference on Decision & Control
12. Kundur, P.: Power System Stability and Control. McGraw-Hill, New York (1994)
13. Pai, M.A.: Power System Stability. North-Holland Publishing Co., Amsterdam (1981)
14. Boyd, S., et al.: Linear Matrix Inequalities in System and Control Theory. SIAM, Phila-
delphia (1994)
15. Bertsekas, Rhodes: Min-max infinite-time reachability problem (1971a)
Fault Diagnosis of Diesel Engine Using Vibration Signals
College of Marine Engineering, Dalian Maritime University, Dalian 116026, P.R. China
wangfl@dlmu.edu.cn, duanshulin66@sina.com
1 Introduction
The piston-liner wearing will lead to the change of the surface vibration response of
diesel engine [1]. When the piston-liner wearing occurs, the vibration signals of the
engine is non-stationary. The spectrum based on Fourier transform represents the
global rather than any local properties of the signals. For measured signals in practical
application with finite data length, a basic period of the data length is also implied,
which determines the frequency resolution. Although non-stationary transient signals
can have a spectrum by using Fourier analysis, it resulted spectrum for such signals is
broad band. For example, the spectrum of a single pulse has a similar spectrum to that
of white noise. Consequently, the information provided by Fourier analysis for tran-
sient signals were limited. In this paper, local wave analysis is introduced. Instead of
relying on convolution methods, this method use local wave decomposition (LWD)
and the Hilbert transform [2]. For a non-stationary signal, the Hilbert marginal spec-
trum offers clearer frequency energy decomposition than the traditional Fourier spec-
trum. However, the piston-liner wearing characteristics is always submerged in
the background and noise signals, which will cause the mode mixture and generate
undesirable intrinsic mode functions (IMFs). In order to decrease unnecessary noise
* Corresponding author.
286 F. Wang and S. Duan
Applying the Hilbert transform to each IMF, the original data can be expressed as

x(t) = Re Σ_{j=1}^{n} a_j(t) e^{i φ_j(t)} . (2)

This frequency–time distribution of the amplitude is designated the Hilbert
time–frequency spectrum,

H(ω, t) = Re Σ_{j=1}^{n} a_j(t) e^{i ∫ ω_j(t) dt} . (3)

We can also define the Hilbert marginal spectrum,

h(ω) = ∫_0^T H(ω, t) dt , (4)

where T is the total data length. The Hilbert marginal spectrum offers a measure of
the total amplitude distribution over instantaneous frequency.
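Equations (2)–(4) can be sketched numerically: the analytic signal of each IMF yields its instantaneous amplitude and frequency, and accumulating amplitude per frequency bin discretizes the marginal integral. The function below is an illustration, and the two "IMFs" are synthetic stand-ins for an LWD result:

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_marginal(imfs, fs, nbins=64):
    """Approximate the Hilbert marginal spectrum h(w) of Eq. (4).

    For each IMF the analytic signal gives the instantaneous amplitude a_j(t)
    and instantaneous frequency w_j(t); summing amplitude * dt into frequency
    bins discretizes the integral of H(w, t) over [0, T].
    """
    edges = np.linspace(0.0, fs / 2.0, nbins + 1)
    h = np.zeros(nbins)
    for imf in imfs:
        z = hilbert(imf)
        amp = np.abs(z)[:-1]                          # instantaneous amplitude
        phase = np.unwrap(np.angle(z))
        f_inst = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency (Hz)
        idx = np.clip(np.digitize(f_inst, edges) - 1, 0, nbins - 1)
        np.add.at(h, idx, amp / fs)                   # accumulate a_j(t) dt
    return edges, h

fs = 640.0
t = np.arange(1024) / fs
imfs = [np.cos(2 * np.pi * 30 * t), 3 * np.sin(2 * np.pi * 10 * t)]
edges, h = hilbert_marginal(imfs, fs)
centers = (edges[:-1] + edges[1:]) / 2.0
```

The resulting h concentrates its mass near 10 Hz and 30 Hz, the instantaneous frequencies of the two components.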
(1) Split: split the original signal X(k) of length L into the even set
Xe(k) = {x(2k), k ∈ Z} and the odd set Xo(k) = {x(2k+1), k ∈ Z}.
(2) Update: using a one-point update filter, compute the approximation signal
c(k) = (Xe(k) + Xo(k))/2.
(3) Select the prediction operator: design three different prediction operators,
where N is the number of neighboring c(k) used by the prediction operator and
k = 1 ~ L/2. An optimal prediction operator is selected for each transforming sample
by minimizing [d(k)]^2.
(4) Predict: compute the detail signal d(k) by using the optimal prediction operator.
Because we update first and the transform is only iterated on the low-pass
coefficients c(k), all c(k) depend on the data and are not affected by the nonlinear
predictor. These low-pass coefficients are then reused to predict the odd samples,
which gives the high-pass coefficients d(k). We use a linear update filter and let only
the choice of predictor depend on the data. With the selection criterion of minimizing
the squared error, an optimal prediction operator is selected for each transforming
sample, so that the wavelet function used fits the transient features of the original
signal.
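Steps (1)–(4) of the update-first lifting scheme can be sketched as follows; the three candidate predictors here (the nearest low-pass coefficient, the next one, and their average) are illustrative choices, not the paper's exact operators:

```python
import numpy as np

def lifting_step(x):
    """One level of the update-first lifting scheme with adaptive prediction.

    Split x into even/odd samples, update to get the low-pass c(k), then for
    each k pick, among three candidate predictors built from c, the one that
    minimizes the detail energy [d(k)]^2.
    """
    xe, xo = x[0::2], x[1::2]
    c = (xe + xo) / 2.0                              # update: c(k) = (Xe + Xo)/2
    same = c                                         # candidate 1: nearest c(k)
    nxt = np.roll(c, -1)                             # candidate 2: next c(k+1)
    avg = (c + nxt) / 2.0                            # candidate 3: their average
    d_all = xo[None, :] - np.stack([same, avg, nxt]) # detail under each predictor
    choice = np.argmin(d_all ** 2, axis=0)           # minimize [d(k)]^2 per k
    d = d_all[choice, np.arange(len(c))]
    return c, d

x = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))        # smooth test signal
c, d = lifting_step(x)
```

For a smooth signal the adaptively predicted details are much smaller than the raw even/odd differences, which is what makes subsequent thresholding effective.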
In signal denoising, thresholds are applied to modify the wavelet coefficients at each
level; the coefficients are modified via soft-thresholding with the universal threshold
at each level [6].
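Soft-thresholding with the universal threshold can be sketched as follows; the MAD noise estimate is a common default assumed here, not stated in the paper:

```python
import numpy as np

def soft_threshold(d, sigma=None):
    """Soft-thresholding with the universal threshold t = sigma * sqrt(2 ln N).

    When sigma is not given it is estimated with the robust MAD rule, a common
    default in wavelet denoising.
    """
    if sigma is None:
        sigma = np.median(np.abs(d)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(d)))
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

# Toy detail coefficients: small Gaussian noise plus two genuine features.
rng = np.random.default_rng(2)
coeffs = rng.normal(0.0, 0.01, 256)
coeffs[10] += 1.0
coeffs[100] += 1.0
out = soft_threshold(coeffs)
```

The two large coefficients survive (slightly shrunk), while the noise-level coefficients are set to or near zero.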
4 Simulation
Let us consider a signal consisting of amplitude- and frequency-modulated components:

x(t) = 1.5(1 + 0.2 sin(2π × 7.5t)) cos(2π × 30t + 0.2 sin(2π × 15t)) + 3 sin(2π × 10t) + 0.025 randn .

The total number of data points is n = 1024, the sampling frequency is 640 Hz, and
randn is an n × 1 vector of normally distributed random numbers with zero mean and unit
standard deviation.
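The simulation signal can be generated directly (the random seed is arbitrary); a quick Fourier check confirms that the global spectrum peaks near 10 Hz and 30 Hz while smearing the 7.5 Hz amplitude and 15 Hz frequency modulation into sidebands:

```python
import numpy as np

fs, n = 640.0, 1024
t = np.arange(n) / fs
rng = np.random.default_rng(0)               # arbitrary seed
# AM-FM component + 10 Hz sine + weak Gaussian noise, as defined in the text.
x = (1.5 * (1 + 0.2 * np.sin(2 * np.pi * 7.5 * t))
     * np.cos(2 * np.pi * 30 * t + 0.2 * np.sin(2 * np.pi * 15 * t))
     + 3 * np.sin(2 * np.pi * 10 * t)
     + 0.025 * rng.standard_normal(n))
# Global Fourier magnitude spectrum: peaks at 10 Hz and 30 Hz, but the
# modulation structure is only visible as sidebands, not explicitly.
X = np.abs(np.fft.rfft(x)) / n
freqs = np.fft.rfftfreq(n, 1.0 / fs)
```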
The amplitude of the modulation signal is a(t) = 1.5(1 + 0.2 sin(2π × 7.5t)), and the
instantaneous frequency of the modulated component varies over 27 ≤ f(t) ≤ 33.
Fig. 1 a) shows the Fourier spectrum of the simulation signal; the Fourier spectrum is
not capable of representing the frequency and amplitude characteristics of the
modulated signal. From the comparison of Fig. 1 b) and c), we can see that LWD after
denoising obtains better results than the straightforward LWD. The Hilbert marginal
spectrum shown in Fig. 1 d) represents the amplitude distribution changing with each
instantaneous frequency and captures the modulation characteristic of the simulation
signal.
Fig. 1. Analysis of the simulation signal. a) Fourier spectrum. b) results of the
straightforward LWD. c) results of LWD after denoising. d) Hilbert marginal spectrum.
5 Application
The proposed method is applied to diagnosing piston-liner wear faults of a diesel
engine. According to the fundamentals of diesel engines, vibrations are closely related
to the piston-liner impact. The vibrations generated by a 6BB1 diesel engine were
measured by an accelerometer mounted on the cylinder body of cylinder 3 at the position
corresponding to the top dead center. We collected three kinds of vibration signals
from the same cylinder, representing the no-wear, slight-wear, and serious-wear states
of the engine. All data were sampled at 25.6 kHz, the analysis frequency is 10 kHz, and
the rotating speed of the diesel engine is around 1100 r/min. Fig. 2 a) ~ c) show the
vibration signals in the no-wear, slight-wear, and serious-wear situations. Comparing
them in the time domain, the amplitude peaks of the no-wear and slight-wear signals are
about the same, with no distinct features, but the amplitude of the serious-wear signal
is the highest.
From the Hilbert marginal spectra shown in Fig. 2 d)~f), we can see that the marginal spectrum offers a measure of the amplitude distribution over instantaneous frequency. For the no-wear cylinder, the energy of the signal is mainly distributed in a lower frequency area, limited to a range below 2 kHz. For the slight-wear cylinder, the lower frequency energy content is low due to leakage of combustion, and
Fault Diagnosis of Diesel Engine Using Vibration Signals 289
Fig. 2. Vibration signals of diesel engines and Hilbert marginal spectra. a)~c) vibration signals for no wear, slight wear, and serious wear; d)~f) Hilbert marginal spectra corresponding to a)~c).
6 Summary
The lifting wavelet transform can overcome the denoising disadvantages of the traditional wavelet transform and is adopted to remove noise. It can reduce mode mixing in LWD, improve the quality of decomposition, and obtain much better decomposition performance. The proposed method can effectively extract the fault characteristic information of the piston-liner wear vibration signal.
References
1. Geng, Z., Chen, J., Barry Hull, J.: Analysis of engine vibration and design of an applicable
diagnosing approach. International Journal of Mechanical Sciences 45, 1391–1410 (2003)
2. Huang, N.E., Shen, Z., Long, S.R.: The empirical mode decomposition and the Hilbert
spectrum for nonlinear and nonstationary time series analysis. Proceedings of the Royal
Society of London 454, 903–995 (1998)
Abstract. In the free field, the directivity of sound pressure is omnidirectional and the directivity of vibration velocity is dipolar; combining sound pressure and vibration velocity information can bring many advantages for acoustic measurements. Under boundary conditions, however, the directivity of pressure and vibration velocity will be distorted by the diffraction wave. In this paper, a soft boundary element model of a finite cylinder baffle is established; the sound diffraction field of a plane wave incident on it at different frequencies and incident angles is calculated, and the directivity characteristics of pressure and vibration velocity at different frequencies and distances are analyzed.
1 Introduction
The directivity of pressure and vibration velocity of acoustic vector sensors will be affected by the sound diffraction wave under boundary conditions [1][2]. Recently, many scholars at home and abroad have studied this problem. Theoretical and experimental research on the influence of near-field acoustic diffraction, caused by a system consisting of an air-filled elastic spherical shell and an AVS, on the measurement results of acoustic vector sensors has been carried out [3-6]. Studies of the influence of a rigid spherical baffle on the beam pattern of an acoustic vector sensor line array showed that the beam pattern was seriously distorted by the sound diffraction wave [7]. The sound diffraction field of a rigid prolate spheroid for an arbitrary incident wave was calculated and its characteristics were summarized; the sound diffraction field of a plane acoustic wave from an impenetrable, soft or hard, prolate or oblate spheroid was also calculated [8-13]. Studies of the influence of an infinite circular cylinder baffle on the directivity of a vector sensor showed that the directivity was seriously distorted by the sound diffraction of the infinite circular cylinder [14-15]. In this paper, sound diffraction models of a sphere, a prolate spheroid, and a cylinder are established; the influences of these baffles on the directivity of an acoustic vector sensor are analyzed and compared. The conclusions of the paper will be useful for the design of the shape and geometry of baffles.
* Project supported by the National Natural Science Foundation of China (Grant No. 51009042).
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 291–295, 2011.
© Springer-Verlag Berlin Heidelberg 2011
292 J. Jianfei et al.
2 Principle
The model of sound diffraction by a finite cylinder baffle is shown in Fig. 1(a). The height and radius of the cylinder are h and r, respectively. The observation point is located on the x axis; l is the distance from the observation point to the origin (the center of the cylinder). The sound source is incident from a distance and rotates 360° around the observation point in the xoy plane. It is difficult to obtain analytical solutions of the sound diffraction field for a finite cylinder baffle, so numerical solutions are obtained with the boundary element method (BEM). The BEM model of the cylinder baffle is shown in Fig. 1(b). The height and radius of the cylinder baffle are h = 0.5 m and r = 0.5 m, respectively. The model is meshed finely enough to meet the calculation accuracy.
The medium inside the cylinder is air and the medium outside the cylinder is water; because the wave impedance of water is much larger than that of air, the boundary can be approximated as absolutely soft. As shown in Fig. 1(a), the observation point is located on the x axis, and the incident plane wave rotates 360° around the observation point in the xoy plane with an interval of 5°; the pressure, the vertical (x-direction) vibration velocity, and the horizontal (y-direction) vibration velocity are calculated at each incident angle. When the sound source is located on the positive half of the x axis, the incident angle is defined as 0°. All directivity amplitudes below have been normalized.
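The measurement grid and normalization described above can be sketched as follows (illustrative only; the free-field patterns — omnidirectional pressure and a dipole |cos θ| for the x-direction velocity — stand in for the computed BEM responses):

```python
import numpy as np

angles = np.arange(0.0, 360.0, 5.0)            # incident angles, 5-degree steps
theta = np.deg2rad(angles)

pressure = np.ones_like(theta)                 # free field: omnidirectional
v_vertical = np.abs(np.cos(theta))             # free field: dipole along x

def normalize(pattern):
    """Normalize a directivity pattern to unit maximum amplitude."""
    return pattern / np.max(np.abs(pattern))

p_norm = normalize(pressure)
v_norm = normalize(v_vertical)
```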
Fig. 2 shows the directivity of pressure and vibration velocity at 100 Hz, 700 Hz, 1300 Hz, and 3000 Hz when the distance l from the observation point to the origin is 0.5 m. As shown in Fig. 2, at 100 Hz the directivity of sound pressure is symmetrical, but the amplitude of the pressure is relatively small. As the frequency increases, reversed-phase superposition produces a depression in the pressure directivity. The directivity of the vertical (x-direction) vibration velocity basically loses its natural
Influences on the Directivity of Acoustic Vector Sensor 293
dipole shape, and as the frequency increases, grating lobes appear in the directivity pattern. At low frequency, the directivity of the horizontal (y-direction) vibration velocity of the diffraction wave is dipolar in shape. When the sound source is on the far side of the baffle relative to the observation point, the intensity of the diffraction wave at the observation point is enhanced as the frequency increases, so the dipole directivity is deflected to one side. Fig. 3 shows the directivity of pressure and vibration velocity at different frequencies when the distance l from the observation point to the origin is 3.225 m. Because the distance l is longer, the intensity of the diffraction wave is decreased, and the directivities of pressure and vibration velocity are only slightly distorted.
3 Conclusions
The BEM model of the finite cylinder is established, and the directivity of pressure and vibration velocity of the vector sensor with a soft finite cylinder baffle at different frequencies and distances is analyzed. The conclusions are as follows:
1. When the distance l from the vector sensor to the finite cylinder baffle is relatively small, the intensity of the diffraction wave is strong and the directivities of pressure and vibration velocity are strongly affected by it. When the distance l becomes longer, the intensity of the diffraction wave decreases, so the directivities of pressure and vibration velocity are only slightly distorted.
2. The influences of the diffraction wave on the vertical (x-direction) and horizontal (y-direction) vibration velocities are different. Most of the diffraction wave energy concentrates in the vertical (x) direction, so the vertical vibration velocity is more seriously affected by the diffraction wave than the horizontal one.
References
1. Sun, G., Li, Q., Yang, X., Sun, C.: A novel fiber optic hydrophone and vector hydrophone.
Physics 35(8), 645–653 (2008)
2. Jia, Z.: Novel sensor technology for comprehensive underwater acoustic information-
vector hydrophones and their applications. Physics 38(3), 157–168 (2009)
3. Kosobrodov, R.A., Nekrasov, V.N.: Effect of the diffraction of sound by the carrier of hy-
droacoustic equipment on the results of measurements. Acoust. Phys. 47(3), 382–388
(2001)
4. Shi, S., Yang, D., Wang, S.: Influences of sound diffraction by elastic spherical shell on
acoustic vector sensor measurement. Journal of Harbin Engineering University (27), 84–89
(2006)
5. Shi, S.: Research on vector hydrophone and its application for underwater platform. Doc-
tor Dissertation of Harbin Engineering University (2006)
6. Sheng, X., Guo, L., Liang, G.: Study on the directivity of the vector sensor with spherical
soft baffle plate. Technical Acoustics (9), 56–60 (2002)
7. Zhang, L., Yang, D., Zhang, W.: Influence of sound scattering by spherical rigid baffle to
vector-hydrophone linear array. Technical Acoustics 28(2) (April 2009)
8. Barton, J.P., Nicholas, L.W., Zhang, H., Tarawneh, C.: Near-field calculations for a rigid
spheroid with an arbitrary incident acoustic field. J. Acoust. Soc. Am. 113(3), 1216–1222
(2003)
9. Rapids, B.R., Lauchle, G.C.: Vector intensity field scattered by a rigid prolate spheroid. J.
Acoust. Soc. Am. 120(1), 38–48 (2006)
10. Roumeliotis, J.A., Kotsis, A.D., Kolezas, G.: Acoustic Scattering by an Impenetrable
Spheroid. Acoustical Physics 53(4), 436–447 (2007)
11. Ji, J., Liang, G., Wang, Y., Lin, W.: Influences of prolate spheroidal baffle of sound dif-
fraction on spatial directivity of acoustic vector sensor. SCIENCE CHINA Technological
Sciences 53(10), 2846–2852 (2010)
12. Ji, J., Liang, G., Liu, K., Li, Y.: Influences of soft prolate spheroid baffle on Directivity of
Acoustic Vector Sensor. In: IEEE International Conference on Information and Automa-
tion, ICIA 2010, pp. 650–654 (2010)
13. Ji, J., Liang, G., Huang, Y., Li, Y.: Influences on spatial directivity of acoustic vector sen-
sor by soft spherical boundary. In: The 2010 International Conference on Computational
and Information Sciences (2010)
14. Li, C.: Combined signal processing technology with acoustic pressure and particle veloc-
ity. Doctor Dissertation of Harbin Engineering University (2000)
15. Chen, Y., Yang, B., Ma, Y.: Analysis and experiment study on directivity of vector sensors
located on complicated boundaries. Technical Acoustics (25), 381–386 (2006)
The Method of Intervenient Optimum Decision
Based on Uncertainty Information
Lihua Duan
1 Introduction
One motive of traditional optimization theory is to express mankind's pursuit of the perfection of things. Practice has shown that people cannot judge perfection accurately, which exposes the limitations of traditional optimization theory. Literature [1] puts forward the system non-optimum theory, points out that a system's non-optimum decides its optimum, and proves the motto "Failure is the mother of success" from the angle of quantitative analysis. Literature [2] discusses the transformation from non-optimum to optimum systems and the conversion from disorder to order from the perspective of self-organization, and concludes that recognition and control of a non-optimum system are equivalent to the system achieving synergy. Literature [3] expounds that non-optimum analysis is a generalized interval analysis, a development based on sub-optimum theory and the third optimum method. Literature [4, 5, 6] studies methods for the foundation, recognition, and control of non-optimum systems. Using the non-optimum analysis method, the author solved the scheduling and decision-making problems of the Dalian Chemical Industries Parent Company in 1992. Actual application shows that this method is more practical and reliable than traditional optimization methods, and it has achieved greater economic benefits [7].
Practice proves that the system non-optimum theory is a system decision-making method with realistic background and practical significance. Meanwhile, it embodies characteristics of intelligent human behavior, so it has an important practical background and broad application prospects. In this paper, we bring forward the concept of the drowsy set on the basis of previous study and
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 296–301, 2011.
The Method of Intervenient Optimum Decision Based on Uncertainty Information 297
practice, and put forward the intervenient optimum decision-making method from the
angle of system optimization, then discuss its practical significance and concrete
solution methods.
Previous system analysis acknowledged that it is impossible to realize the optimum under limited conditions of time and resources. At the same time, behind the optimum there is always a series of hypotheses, middle-way decisions, and simplifications of data. Under most conditions, the hypotheses of the optimum do not hold. Although people have generalized this method to many fields, the results obtained can only be temporary, and sometimes the final goals cannot be achieved [1].
In real life there are no absolute optimums; only under certain conditions is there a differentiated relative optimum. The relative optimum can be seen as a satisfactory result, because there is a great deal of uncertainty and non-linearity, as explained by Simon. There are three defects in the traditional decision disciplines: ignoring the uncertainty of economic life; ignoring the non-linear relationships of real life; and ignoring the subjective limitations of the decision maker. Simon held that in the complicated real world only a minority of problems can be solved through the calculus of maximum and minimum values. Sometimes there are no optimal solutions at all, and under most conditions people just try to find a satisfactory approximate solution (a relative optimum solution) [2].
In real problems, the heart of decision analysis is how to define the boundary of a problem P between non-optimum and optimum. This is also the starting point of non-optimum analysis studies. The meaning of non-optimum lies in the definition of this boundary. It is the basic point of system transformation, and it is called the intervenient optimum. The states and behaviors of systems are in the intervenient optimum under most conditions. Moreover, the so-called optimum and non-optimum cannot exist independently: anything has optimum and non-optimum attributes in varying degrees. When the optimum degree is greater than the non-optimum degree, the system has optimum attributes; when the optimum degree is less, the system has non-optimum attributes. If the problem we study is a choice between optimum and non-optimum, then this kind of study is called intervenient optimum analysis.
Definition 1. Suppose $C = \{C_1|\theta_1, C_2|\theta_2, \ldots, C_n|\theta_n\}$ is the needed character set of problem $P$ under the uncertainty degrees $\theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$. Then $\forall C_i|\theta_i \to f(C_i)$, $\lambda \in (-n, n)$, $\theta_i \in \{\theta_1, \theta_2, \ldots, \theta_n\}$, where $f(C_i)$ is the needed eigenvalue under the recognized specification $\lambda$, and $\lambda = Z_r(C)/K_x(C)$. Here $O(C_i)$ expresses that problem $P$ has optimum attributes on the needed character $C_i$; $SO(C_i)$ expresses that problem $P$ has intervenient optimum attributes on the needed character $C_i$; $NO(C_i)$ expresses that problem $P$ has non-optimum attributes on the needed character $C_i$.
Definition 2 puts forward a method of judging optimum and non-optimum in real analysis problems. The target standard is $NO(C_i) = 0$ under the specification $\lambda$. If $NO(C_i) < 0$, the problem $P$ belongs to the non-optimum attribute, and the aim of study is to decrease the value of $NO(C_i)$; accordingly, the value of $SO(C_i)$ is increased. If $NO(C_i) > 0$, the problem $P$ belongs to the $\lambda$-optimum state, and the degree of the specification $\lambda$ decides the optimization standard of problem $P$. In fact, $NO(C_i) = 0$ is the basic condition. That is to say, optimization problems in reality should be intervenient optimum based on the specification $\lambda$.
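The decision rule of Definition 2 can be sketched as a small classifier (our naming; the input is the value of NO(C_i) under specification λ):

```python
def classify(no_value, tol=1e-12):
    """Classify a problem by the sign of NO(C_i), following Definition 2."""
    if no_value < -tol:
        return "non-optimum"          # aim: decrease NO(C_i)
    if no_value > tol:
        return "lambda-optimum"       # specification lambda sets the standard
    return "intervenient optimum"     # NO(C_i) = 0, the basic condition
```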
According to the above discussion, any problem $P$ has an eigenvalue $f(C_i)$ under the recognized specification $\lambda$; it is a description of the object's states. The critical issue, which determines whether it reflects the principles of authenticity and reliability, is how to define the value of $\lambda$. The determining method for the value of $\lambda$ is given as follows:
Uncertainty is an essential attribute of human cognition; it is divided into subjective uncertainty and objective uncertainty. Subjective uncertainty embodies the sensibility and perceptibility of mankind. If the uncertainty of the character attributes of things appears alternately and repeatedly in the brain, it is called the hesitant degree of things (drowsy degree, for short). It is a recognition process from uncertainty to certainty, and then from certainty to uncertainty. Thus we have:

$$\lambda = \frac{Z_r(C)}{K_x(C)}$$

In the analysis of uncertainty, hesitant sets should be built first; then the uncertain degree can be obtained by way of hesitant sets. In reality, uncertainty has certain distributions, which can be obtained through the statistical laws of the limited hesitant processes $\theta_1, \theta_2, \ldots, \theta_n$.
If a decision problem $P_i$ is composed of conditions $(C_i)$ and targets $(O_i)$ in any system $S$, then $D\{P: (C, O)\}$ is called the decision space. If a problem $P$ can be divided into an optimum category and a non-optimum category under the recognized conditions, and its non-optimum degree can be decreased while its optimum degree is increased, then the system is called an intervenient optimum system.
Definition 5. When there is a choice interval of optimization $O_{ab} = \langle a_o, b_o\rangle$ for problem $P$ in system $S$, there must exist an influence interval of non-optimum $N_{ab} = \langle a_n, b_n\rangle$. If $\forall o \in \langle a_o, b_o\rangle$ and $\forall n \in \langle a_n, b_n\rangle$, then $S_{ab} = O_{ab} \cap N_{ab}$ is called the intervenient optimum interval.
According to the above discussion, the coexistence is decided by the relevance between the optimum and non-optimum systems. Therefore, intervenient problems can be studied by using the correlated functions of Extenics.
300 L. Duan
① When $s_0 \notin S_0$ or $s_0 = a, b$, then $\rho(s_0, S_0, \lambda_0) = d(s_0, S_0) \ge 0$;
② When $s_0 \in S_0$ and $s_0 \ne a, b$, then $\rho(s_0, S_0, \lambda_0) < 0$ and $d(s_0, S_0) = 0$.
With the introduction of the concept of distance, the locational relations between a point and its intervenient interval can be described accurately by quantitative analysis. In classical mathematics, when a point is within an interval, the distance between the point and the interval is zero. Whereas in the analysis of the intervenient optimum, different points have different locations according to different values of the distance.
The concept of distance describes the locational relations between points and intervals. The original recognition that "it is the same within a class" has developed, through quantitative analysis, into "there are different degrees within a class".
In the analysis of the system intervenient optimum, we must consider not only the locational relations between points and intervenient intervals (or non-optimum and optimum), but also those between non-optimum intervals and the optimum interval, as well as those between a point and two intervals. Therefore, we have:

$$D(x, N_0, O) = \begin{cases} \rho(x, O) - \rho(x, N_0), & x \notin N_0 \\ -1, & x \in N_0 \end{cases}$$
$D(x, N_0, O)$ describes the locational relations between a point $x$ and the nested intervals $N_0$ and $O$. Based on the analysis of the values of the distance, the degree of intervenient optimum is expressed as follows:

$$J(x) = \lambda\,\frac{\rho(x, N_0)}{D(x, N_0, O)}$$
where $N_0 \subset O$ and the two intervals have no common end vertex. The range of the intervenient optimum degree is $(-\infty, +\infty)$. The above formula expresses the intervenient optimum degree of non-optimum analysis; it expands the recognition of non-optimum from qualitative description to quantitative description.
In the analysis of the intervenient optimum degree, $J(x) \ge 0$ expresses that $x$ belongs to the optimum system, and $k(x) \le 0$ expresses that $x$ belongs to the non-optimum system; the values of $J(x)$ and $k(x)$ express the respective degrees, and $k(x) = 0$ expresses that $x$ belongs to the boundary of the system. Therefore, the intervenient degree is a transfer tool, a quantitative description of things' transformation from non-optimum to optimum.
In this way, a new way of thinking comes into being, namely the intervenient optimum principle. The intervenient optimum principle means that in decision analysis, any target and behavior have optimum and non-optimum attributes in varying degrees. The optimum attribute under a non-optimum state is called the intervenient optimum; it coexists with the optimum and the non-optimum.
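A numerical sketch of these definitions (ours, not the author's code), assuming ρ is the classic Extenics extension distance ρ(x, ⟨a, b⟩) = |x − (a+b)/2| − (b−a)/2, which is negative inside the interval and positive outside, and taking λ = 1:

```python
def rho(x, interval):
    """Extenics extension distance from point x to interval <a, b>."""
    a, b = interval
    return abs(x - (a + b) / 2.0) - (b - a) / 2.0

def D(x, N0, O):
    """Nested-interval distance; equals -1 when x lies inside N0."""
    a, b = N0
    if a <= x <= b:
        return -1.0
    return rho(x, O) - rho(x, N0)

def J(x, N0, O, lam=1.0):
    """Intervenient optimum degree J(x) = lam * rho(x, N0) / D(x, N0, O)."""
    return lam * rho(x, N0) / D(x, N0, O)
```

For example, with N0 = (2, 4) nested in O = (0, 10), a point x = 5 lies outside N0 but inside O, so ρ(5, N0) = 1, D = −6, and J = −1/6.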
4 Conclusions
Intervenient optimum analysis studies system optimization problems from the angle of non-optimum analysis. It offers a fresh way of thinking for optimal decision-making.
Research indicates that the kernel of uncertainty decisions is to seek the measurement of uncertainties. Because of the existence of uncertainties, the system's optimization cannot be the optimum but the intervenient optimum. A drowsy attribute exists in the decision process for uncertain problems. Therefore the drowsy set is an effective method for resolving this kind of uncertain problem with the intervenient optimum attribute. If the drowsy attribute can be controlled and the drowsy number judged, the reliability of decisions will be improved.
References
1. Qu, Z., He, P.: Intelligence Analysis Based on Intervenient Optimum learning Guide Sys-
tem. In: International Conference on Computational Intelligence and Natural Computing,
pp. 363–366. IEEE Computer Society, Los Alamitos (2009)
2. Ping, H.: Theories And Methods of Non-optimum Analysis on systems. Journal of China
Engineering Science 5(7), 47–54 (2003)
3. He, J., He, P.: A New Intelligence Analysis Method Based on Sub-optimum Learning
Model. In: ETP International Conference on Future Computer and Communication, pp.
116–119. IEEE Computer Society, Los Alamitos (2009)
4. He, P.: Method of system non-optimum analysis based on Intervenient optimum. In: Proc.
of ICSSMSSD, pp. 475–478 (2007)
5. He, P.: Methods of Systems Non-optimum Analysis Based on Intervenient Optimum. In:
Wang, Q. (ed.) Proc. of the Int’l conf on System Science, Management Science and Sys-
tem Dynamic, pp. 661–670 (2007)
6. He, P., Qu, Z.: Theories and Methods of Sub-Optimum Based on Non-optimum Analysis.
ICIC Express Letters, An International Journal of Research and Surveys 4(2), 441–446
(2010)
The Deflection Identify of the Oil Storage Tank*
Jingben Yin**, Hongwei Jiao, Jiemin Zhang, Kaina Wang, and Jiahui Ma
1 Introduction
Usually, a gas station has several underground oil storage tanks for fuel and is equipped with an "oil depth measurement and management system". A flowmeter and an oil level meter are adopted to measure data such as the in/out oil volume and the oil depth. Via a pre-calibrated tank capacity table (the relationship between volume and oil depth), real-time calculation gives the variation of volume and oil depth.
Owing to foundation deformation, the position of many oil storage tanks will undergo longitudinal tilt and lateral deflection after some time in service, which changes the tank capacity table. According to related regulations, the oil tank needs to be recalibrated after a period of time.
In order to master the impact of deflection on the tank capacity table, we consider a small elliptical oil storage tank (an elliptical cylinder with cropped ends). We performed experiments on the tank without deflection and with a longitudinal tilt angle of α = 4.1°. After the study, we obtain the numerical table of the oil volume at each 1 cm of oil depth when the tank is deflected [1, 2].
* This paper is supported by the Natural Science Foundation of the Education Department of Henan Province (2008A110006, 2008B110004).
** Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 302–307, 2011.
The Deflection Identify of the Oil Storage Tank 303
2 Model Supposition
1) Assume a flowmeter and an oil level gauge are used to measure data such as the in/out oil volume and the oil level height within the tank;
2) Assume that after a period of use, causes such as foundation deformation may produce longitudinal tilt and lateral deflection of the tank's position (hereafter called deflection);
3) Assume the variation of oil volume with temperature is not considered;
4) Assume the small elliptical oil storage tank is an elliptical cylinder closely cropped at both ends;
5) Assume that in calculations the deformed tank is treated as a standard elliptical cylinder;
6) Assume that corrosion of the tank due to long service time is not considered;
7) Assume the effect of pressure on volume is not considered.
3 Sign Explanation
Y is the cumulative volume each time oil is added (L); X is the oil depth after the oil is added into the tank (cm); a is the semi-major axis of the elliptic cylinder; b is the semi-minor axis of the elliptic cylinder; l is half the length of the elliptic cylinder; m is the distance from the probe to the tank end; d is the difference between the float height after tilt and the float height when horizontal; V is the volume of the oil in the tank; H is the oil level height shown by the oil level probe; L is the length of the elliptic cylinder; Δh is the oil height at the tank end when H = 0; α is the longitudinal tilt angle; β is the lateral tilt angle; a₁ and b₁ are the radii of the spherical caps at the two ends of the cylindrical tank; c₁ is the height of the spherical caps; other signs will be explained when they are used.
Let

$$x = \frac{H - b}{b} \quad (0 \le H \le 2b,\; -1 \le x \le 1),$$

then

$$V(x) = \left(\frac{\pi}{2} + x\sqrt{1 - x^2} + \arcsin x\right) abL \tag{1}$$
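A quick numerical check of formula (1) (our sketch; the sample semi-axes and length are illustrative, not the paper's values): at x = −1 the tank is empty, at x = 0 it is half full, and at x = 1 it holds the full elliptic-cylinder volume πabL:

```python
import math

def V(x, a, b, L):
    """Oil volume from formula (1): (pi/2 + x*sqrt(1-x^2) + asin(x)) * a*b*L."""
    return (math.pi / 2 + x * math.sqrt(1 - x * x) + math.asin(x)) * a * b * L

a, b, L = 0.89, 0.6, 2.45   # example semi-axes and length (m); values are ours
full = math.pi * a * b * L  # volume of the full elliptic cylinder
```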
We approximate $\left(\frac{\pi}{2} + x\sqrt{1-x^2} + \arcsin x\right)$ by a polynomial to obtain an approximation of the oil volume in the tank. $x\sqrt{1-x^2} + \arcsin x$ is continuous on $[-1, 1]$, and its derivative $\left(x\sqrt{1-x^2} + \arcsin x\right)' = 2\sqrt{1-x^2}$ is bounded on $[-1, 1]$. The Chebyshev series of $x\sqrt{1-x^2} + \arcsin x$ is $\frac{1}{2}a_0T_0(x) + \sum_{n=1}^{\infty} a_nT_n(x)$, where

$$a_n = \frac{2}{\pi}\int_{-1}^{1}\frac{\left(x\sqrt{1-x^2} + \arcsin x\right)T_n(x)}{\sqrt{1-x^2}}\,dx,$$

$T_n(x)$ being the Chebyshev polynomial of order $n$; the series $\frac{1}{2}a_0T_0(x) + \sum_{n=1}^{\infty} a_nT_n(x)$ converges to $x\sqrt{1-x^2} + \arcsin x$ uniformly. From (1) we get

$$V(x) = \left(\frac{\pi}{2} + x\sqrt{1-x^2} + \arcsin x\right)abL = \left[\frac{\pi}{2} + \frac{1}{2}a_0T_0(x) + \sum_{n=1}^{\infty} a_nT_n(x)\right]abL. \tag{2}$$

Substituting the partial sum $\frac{1}{2}a_0T_0(x) + a_1T_1(x) + a_2T_2(x) + \cdots + a_nT_n(x)$ for $x\sqrt{1-x^2} + \arcsin x$, from (2) we get

$$V(x) \approx \left[\frac{\pi}{2} + \frac{1}{2}a_0T_0(x) + a_1T_1(x) + a_2T_2(x) + \cdots + a_nT_n(x)\right]abL, \quad x \in [-1, 1].$$
Case 1: $n$ is an even number.

$$a_n = \frac{2}{\pi}\int_{-1}^{1}\frac{\left(x\sqrt{1-x^2} + \arcsin x\right)T_n(x)}{\sqrt{1-x^2}}\,dx$$

Here $T_n(x)$ is an even function, $x\sqrt{1-x^2} + \arcsin x$ is an odd function, $\sqrt{1-x^2}$ is even, and the interval $[-1, 1]$ is symmetric, so the integrand is odd and

$$a_n = a_{2k} = 0 \quad (k = 0, 1, 2, \ldots) \tag{3}$$
$$a_n = \begin{cases} 0, & n = 2k \\[4pt] \dfrac{16}{(2k+1)^2\left[4 - (2k+1)^2\right]\pi}, & n = 2k+1 \end{cases} \quad (k = 0, 1, 2, \ldots) \tag{5}$$
From (2) and (5), we get

$$V(x) \approx \left\{\frac{\pi}{2} + \frac{16}{\pi}\left[\frac{1}{3}T_1(x) - \frac{1}{45}T_3(x) - \frac{1}{525}T_5(x) + \cdots + \frac{1}{(2k+1)^2\left[4-(2k+1)^2\right]}T_{2k+1}(x)\right]\right\}abL \quad (k = 0, 1, 2, \ldots) \tag{6}$$
We substitute for $x\sqrt{1-x^2} + \arcsin x + \frac{\pi}{2}$ the partial sum

$$P_3(x) = \frac{\pi}{2} + a_1T_1(x) + a_2T_2(x) + a_3T_3(x)$$

(with $a_2 = 0$ by (3)); when $k = 1$, we get

$$V(x) \approx \left[\frac{\pi}{2} + \frac{32}{45\pi}\left(9x - 2x^3\right)\right]abL$$
We substitute for $x\sqrt{1-x^2} + \arcsin x + \frac{\pi}{2}$ the partial sum

$$P_5(x) = \frac{\pi}{2} + a_1T_1(x) + a_3T_3(x) + a_5T_5(x);$$
306 J. Yin et al.
when $k = 2$, we get

$$V(x) \approx \left[\frac{\pi}{2} + \frac{16}{1575\pi}\left(615x - 80x^3 - 48x^5\right)\right]abL$$
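The two truncations can be checked numerically (our sketch); over [−1, 1] the cubic and quintic approximations stay within about 0.014 and 0.004 of the exact bracket, so the relative volume error is below roughly half a percent:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
exact = np.pi / 2 + x * np.sqrt(1 - x ** 2) + np.arcsin(x)

# cubic (k = 1) and quintic (k = 2) Chebyshev truncations from the text
p3 = np.pi / 2 + (32 / (45 * np.pi)) * (9 * x - 2 * x ** 3)
p5 = np.pi / 2 + (16 / (1575 * np.pi)) * (615 * x - 80 * x ** 3 - 48 * x ** 5)

err3 = np.max(np.abs(exact - p3))   # worst-case error of the cubic truncation
err5 = np.max(np.abs(exact - p5))   # worst-case error of the quintic truncation
```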
According to the above approximate functions, we can easily calculate the oil storage volume for a given oil level height. With the measured oil volume and height data, we can use MATLAB to fit the function between volume and depth [4-6]:

$$y = -0.0025x^3 + 0.5394x^2 + 4.4910x + 42.1041$$

According to the requirements of the problem, we can use the available data and the fitted polynomial model to obtain the numerical table of the oil volume at each 1 cm of depth when the tank is deflected. By analysing the error we obtain the results below (Table 1). Scatter charts of the two groups of data were drawn with MATLAB (Figure 1): one is the chart of the model data, the other of the real data.
From Figure 1 and Table 1, we see that the model curve is very close to the real curve. Apart from individual points, the error between the computed data and the measured data is small, so the model we built is accurate and accords with the facts.
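The MATLAB fit mentioned above corresponds to an ordinary least-squares cubic fit; a sketch with synthetic data (the sample points are ours, generated from the fitted polynomial for illustration, not the paper's measurements):

```python
import numpy as np

# hypothetical depth (cm) / cumulative volume (L) samples for illustration
depth = np.linspace(5.0, 115.0, 23)
volume = -0.0025 * depth ** 3 + 0.5394 * depth ** 2 + 4.4910 * depth + 42.1041

coeffs = np.polyfit(depth, volume, 3)      # cubic least-squares fit
model = np.poly1d(coeffs)                  # y(x) as a callable polynomial
```

On noise-free cubic data the fit recovers the coefficients essentially exactly; with real measurements it returns the least-squares best cubic.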
5 Model Estimate
For problem one, we build a fitting polynomial model, analyze the error of the result, and compare the model chart with the real data; the error is small. For problem two, we process the real data with the Lagrange interpolation method, the piecewise linear interpolation method, and the cubic spline interpolation method. According to the calculated data, we obtain the numerical table of the oil depth at each 1 cm when the tank is deflected, but the amount of calculation is huge.
References
1. Wu, L., Li, B.: Mathematics experiment and modeling. National Defense Industry Press,
Beijing (2008)
2. Huadong Normal University Mathematics Department: Mathematics Analysis. Higher
Education Press, Beijing (2008)
3. Sun, H., Guan, J.: Calculate oil storage volume when the cross section is ellipse with ap-
proaching method. Tube Piece and Equipment 3, 29–31 (2001)
4. Yang, M., Xiong, X., Lin, J.: MATLAB foundation and mathematics software. Dalian
Natural Science University Press, Dalian (2006)
5. Jiang, J., Hu, L., Tang, J.: Numerical value analysis and MATLAB experiment. Science
Press, Beijing (2004)
6. Jiang, Q., Xing, W., Xie, J., Yang, D.: University mathematics experiment. Tsinghua Uni-
versity Press, Beijing (2005)
PID Control of Miniature Unmanned Helicopter Yaw System Based on RBF Neural Network*
1 Introduction
The design of flight control system of Miniature Unmanned Helicopter (MUH) in-
cludes modeling of yaw dynamics and control algorithm. The MUH is a very compli-
cated system, and it has highly nonlinear, time-varying, unstable, deep coupling char-
acteristics.
Two main approaches are used for MUH modeling: modeling based on the laws of
mechanics and modeling based on system identification. Laws-of-mechanics models are
usually large and very complicated, and some parameters are very hard to measure [1].
System identification modeling can produce accurate low-order models [2, 8]. In this
paper, the laws-of-mechanics method is used to analyze the yaw system characteristics;
then a SISO model of the MUH in hovering or low-velocity flight mode is established
using the system identification method.
The most commonly used yaw control algorithm is the PID controller [3]. It is based
on a linearized model or on the assumption that the model is decoupled, but the yaw
dynamics is a nonlinear, time-varying, coupled model, so the PID control law is limited
for yaw control. Nonlinear control designs include neural network control [4, 9] and
fuzzy control [5], but these methods are complicated and need an accurate dynamics
model. In
∗ This work was supported by the National “863” Project of China under Grant 2007AA04Z224.
** Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 308–313, 2011.
© Springer-Verlag Berlin Heidelberg 2011
fact, a control system based on a nonlinear SISO model has the advantages of simple
structure, high reliability and easy implementation. So, in this paper, the control
algorithm is based on a nonlinear SISO model. In order to improve the robustness and
self-adaptation of the traditional PID controller, a Radial Basis Function (RBF) neural
network algorithm is introduced to adjust the parameters of the PID controller. The
simulation results verify the RBF-PID controller's applicability.
The yaw dynamics model for MUH in hovering or low-velocity flight mode is [6]:
$$I_{zz}\dot{\omega}_{b3} = T_{\psi} - T_{MR} = F_T L_T - K_g \omega_{b3} - T_{MR} = a_T c_T \rho \pi R_T^3 \Omega_T^2 \left( \frac{B^3}{3}\delta_{\psi} - \frac{B^2}{2}\lambda_T \right) L_T - K_g \omega_{b3} - T_{MR} \quad (2)$$
Transform equation (2) [6]:
(2) The angular velocity of the main rotor remains unchanged. Because of the propor-
tional relation, the angular velocity of the tail rotor Ω_T remains unchanged too.
(3) The velocity of the MUH is zero or very small, so λ_T is assumed to remain unchanged.
Under the assumptions above, the simplified yaw dynamics is:
310 Y. Pan, P. Song, and K. Li
$$G_{\psi}(s) = \frac{\dfrac{a_T c_T \rho \pi R_T^3 \Omega_T^2 B^3 L_T}{3 I_{zz}}}{s^2 + \dfrac{K_g}{I_{zz}} s + \dfrac{-a_T c_T \rho \pi R_T^3 \Omega_T^2 B^2 \lambda_T L_T - 2 T_{MR}}{2 I_{zz}}} \quad (6)$$
In equation (6) the coefficients are constants, so the yaw movement dynamics of the
MUH in hovering or low-velocity flight mode can be described by a SISO model; an
output-error (OE) model structure yields a more accurate model. Using the system
identification method, the yaw dynamics model is obtained:
$$G_{\psi}(s) = \frac{1.33 s + 31.08}{s^2 + 3.16 s + 29.82} \quad (7)$$
Fig. 1. Structure of the RBF neural network (inputs x_1, …, x_n; hidden nodes h_1, …, h_m; output y_m)
The PID controller based on the RBF neural network is constructed from an RBF neural
network and a traditional PID controller, as shown in Fig. 2, using the incremental
algorithm as the basic PID [7]. The three inputs of the PID are as follows [10]:
$$xc(1) = error(k) - error(k-1)$$
$$xc(2) = error(k) \quad (8)$$
$$xc(3) = error(k) - 2\,error(k-1) + error(k-2)$$
The system mean square error is [10]:
$$E(k) = \frac{1}{2}\,error(k)^2 \quad (9)$$
$$\Delta k_p = -\eta \frac{\partial E}{\partial k_p} = -\eta \frac{\partial E}{\partial y} \frac{\partial y}{\partial \Delta u} \frac{\partial \Delta u}{\partial k_p} = \eta\, error(k)\, \frac{\partial y}{\partial \Delta u}\, xc(1)$$
$$\Delta k_i = -\eta \frac{\partial E}{\partial k_i} = -\eta \frac{\partial E}{\partial y} \frac{\partial y}{\partial \Delta u} \frac{\partial \Delta u}{\partial k_i} = \eta\, error(k)\, \frac{\partial y}{\partial \Delta u}\, xc(2) \quad (10)$$
$$\Delta k_d = -\eta \frac{\partial E}{\partial k_d} = -\eta \frac{\partial E}{\partial y} \frac{\partial y}{\partial \Delta u} \frac{\partial \Delta u}{\partial k_d} = \eta\, error(k)\, \frac{\partial y}{\partial \Delta u}\, xc(3)$$
4 Simulation
The difference equation of the yaw dynamics model can be written as follows:
$$y(k) = 1.927\, y(k-1) - 0.9385\, y(k-2) + 0.03204\, x(k-1) - 0.02\, x(k-2) \quad (11)$$
A 3-6-1 structure of the RBF network is adopted. The PID parameters k_p, k_i, k_d are
adjusted by self-learning of the RBF neural network. The sampling period is 0.02 s, the
learning rate is η = 0.3, the inertia coefficient is α = 0.6, and the initial values of the
proportional, differential and integral coefficients are 0.005, 0.5 and 0.1, respectively.
For comparison, a traditional PID controller is introduced, whose proportional,
differential and integral coefficients are 1.2, 1 and 0, respectively. The simulation
time is 40 s.
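The pieces of the controller described above can be sketched as follows. This is a hedged illustration, not the authors' code: the incremental control law Δu = k_p·xc(1) + k_i·xc(2) + k_d·xc(3) is the standard form assumed from [7], and the Jacobian ∂y/∂Δu, which the paper obtains from the RBF identifier, is passed in here as a plain number.

```python
# Sketch of the PID inputs of Eq. (8), the incremental control law,
# the gradient gain updates of Eq. (10), and the plant of Eq. (11).
# Hypothetical helper names; dy_du stands for the Jacobian dy/d(du)
# that the paper identifies with the RBF network.

def pid_inputs(e, e1, e2):
    """xc(1..3) from Eq. (8): e = error(k), e1 = error(k-1), e2 = error(k-2)."""
    return e - e1, e, e - 2 * e1 + e2

def pid_increment(kp, ki, kd, xc1, xc2, xc3):
    """Incremental PID control increment (standard form assumed from [7])."""
    return kp * xc1 + ki * xc2 + kd * xc3

def update_gains(kp, ki, kd, eta, e, dy_du, xc1, xc2, xc3):
    """Gradient updates of Eq. (10)."""
    return (kp + eta * e * dy_du * xc1,
            ki + eta * e * dy_du * xc2,
            kd + eta * e * dy_du * xc3)

def plant(y1, y2, x1, x2):
    """Yaw dynamics difference equation (11)."""
    return 1.927 * y1 - 0.9385 * y2 + 0.03204 * x1 - 0.02 * x2

# One worked step with errors 0.5, 0.2, 0.1 and gains (1, 0.5, 0.1)
xc1, xc2, xc3 = pid_inputs(0.5, 0.2, 0.1)          # 0.3, 0.5, 0.2
du = pid_increment(1.0, 0.5, 0.1, xc1, xc2, xc3)   # 0.57
kp, ki, kd = update_gains(1.0, 0.5, 0.1, 0.3, 0.5, 2.0, xc1, xc2, xc3)

# Steady-state (DC-gain) check of the plant under a unit input
y1 = y2 = 0.0
for _ in range(2000):
    y1, y2 = plant(y1, y2, 1.0, 1.0), y1
print(du, y1)   # y1 approaches 0.01204/0.0115, about 1.047
```

The DC-gain check confirms that the difference equation (11) is a stable discretization whose static gain matches that of the continuous model (7), 31.08/29.82 ≈ 1.04.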
In order to verify the anti-disturbance characteristic, a disturbance ζ(k) is added to
the output of the model:
$$y(k) = 1.927\, y(k-1) - 0.9385\, y(k-2) + 0.03204\, x(k-1) - 0.02\, x(k-2) + \zeta(k) \quad (12)$$
Fig. 5. Model parameter impulse variation response
Fig. 6. Model parameter step variation response
5 Conclusion
The PID controller based on the RBF neural network exhibits fast response, robustness
and adaptive ability. Compared to the traditional PID controller, the RBF-PID
controller has higher accuracy and stronger adaptability. For the nonlinear,
time-varying, coupled, complex dynamics of the MUH yaw system, the PID controller
based on the RBF neural network achieves satisfactory control results.
Acknowledgment
We would like to express our gratitude to all the colleagues in our laboratory for their
assistance.
References
1. Padfield, G.D.: Helicopter Flight Dynamics: The Theory and Application of Flying Quali-
ties and Simulation Modeling. AIAA Education Series (1996)
2. Shin, D.H., Kim, H.J., Sastry, S.: Control system design for rotorcraft-based unmanned ae-
rial vehicles using time-domain system identification. In: Proceedings of the 2000 IEEE
International Conference on Control Applications (2000)
3. Shim, H.: Hierarchical flight control system synthesis for rotorcraft-based unmanned aerial
vehicles. University of California, Berkeley (2000)
4. Prasad, J.V.R., Calise, A.J., Corban, J.E., Pei, Y.: Adaptive nonlinear controller synthesis
and flight test evaluation on an unmanned helicopter. In: IEEE Conference on Control Ap-
plication (1999)
5. Frazzoli, E., Dahleh, M.A., Feron, E.: Robust hybrid control for autonomous vehicle motion
planning. IEEE Transactions on Automatic Control (2000)
6. Kim, S.K.: Modeling, identification, and trajectory planning for a model-scale helicopter.
The dissertation for the degree of doctor (2001)
7. Zhang, M.-g., Wang, X.-g., Liu, M.-q.: Adaptive PID Control Based on RBF Neural Net-
work Identification. In: IEEE International Conference on Tools with Artificial Intelligence
(2005)
8. Mettler, B., Tischler, M.B., Kanade, T.: System Identification modeling of a small-scale
unmanned rotorcraft for flight control design. J. Journal of the American Helicopter Society,
50–63 (2002)
9. Pallett, T.J., Ahmad, S.: Adaptive neural network control of a helicopter in vertical flight.
Aerospace Control Systems 2(1), 264–268 (1993)
10. Yue, W., Feng, S., Zhang, Q.: An Auto-adaptive PID Control Method Based on RBF Neural
Network. In: International Conference on Advanced Computer Theory and Engineering
(ICACTE) (2010)
Identity-Based Inter-domain Authentication Scheme
in Pervasive Computing Environments
1 Introduction
In a pervasive environment, mobile users often roam into foreign domains to request
service. Hence, efficient and secure inter-domain authentication should be highly
emphasized [1]. When a user roams into a foreign domain, there is no trust between
the user and the foreign authentication server (FA), so the FA must cooperate with
the user's home authentication server (HA) to authenticate the user. During
inter-domain authentication, the user's real identity should be concealed in order to
prevent the user's sessions from being tracked by malicious parties. Besides mutual
authentication and key establishment, an inter-domain authentication scheme for
pervasive computing should meet the following requirements: (1) client anonymity: the
real identity of a user should not be divulged to the FA or to outsiders; (2)
non-linkability: outsiders cannot link different sessions to the same user.
Lin [1] proposed an inter-domain authentication protocol based on signcryption, but it
cannot realize user anonymity. Peng [2] proposed an identity-based (ID-based)
inter-domain authentication scheme. It realizes anonymity and non-linkability, but it
has high computation expense because users must perform expensive bilinear pairing
operations. Zhu [3] proposed a novel ID-based inter-domain authentication scheme,
which reduces the number of pairing operations and has higher efficiency. However, it
has a drawback in anonymity: the user uses the same temporary certificate as the
authentication proof in the reauthentication phase, which allows the user's sessions to
be tracked.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 314–320, 2011.
© Springer-Verlag Berlin Heidelberg 2011
This paper first presents a new ID-based signature (IBS) scheme. Then, an inter-domain
authentication scheme is constructed based on the new IBS scheme. It is shown that the
scheme achieves the security requirements of inter-domain authentication in pervasive
computing and has higher efficiency.
User-Key Extraction: Suppose IDA ∈ {0,1}* denotes user A’s unique identifier. PKG
generates A’s private key as follows:
(1) Choose at random rA ∈ Z q* and compute RA = rA P .
(2) Compute s A = rA + sc , where c = H1 ( IDA , RA ) .
A’s private key is the pair ( s A , RA ) , and it is sent to A via a secure channel.
The system architecture is shown in Fig. 1. There are two trust domains: Domain A and
Domain B. User A is located in Domain A. HA and FA denote the authentication servers
of Domain A and Domain B, respectively. A must first register with HA. When A wants to
access a resource in Domain B, FA must cooperate with HA to authenticate A. The
security of our scheme relies on the following assumption: HA and FA are honest and
trust each other.
3.3 Registration
A sends the identifier IDA to HA. HA checks the validity of IDA . Then HA generates
A’s private key ( s A , RA ) , where s A = rA + sHAc , c = H1 ( IDA , RA ) , rA ∈ Z q* . HA sends
( s A , RA ) to A. HA creates for A an account of the form < Ind A , IDA , RA > , where
Ind A = RA + H1 ( IDA , RA ) PHA is the index of A’s account.
3.4 Authentication
When A requests a resource in Domain B for the first time, A needs to run the
authentication protocol.
Step 1: A chooses at random x, y ∈ Z_q*, and computes Y = yP, X = xP, Y′ = Y + xP_HA.
A picks up the current time T_A, and computes h = H_2(ID_A, T_A, R_A, Y),
z = y + s_A h. A sends the message <ID_HA, T_A, X, Y′, h, z> to the FA.
Step2: After receiving the message, FA checks the validity of TA . FA rejects the
request if TA is not valid. Otherwise, FA does the following:
Step3: After receiving the message, HA checks the validity of TFA and Sig FA (mFA ) . If
the decision is positive, HA confirms FA is legal and does the following:
(1) Compute Y = Y ' − sHA X , Ind A = h −1 ( zP − Y ) , and search client accounts with
Ind A . If there is an account indexed by Ind A , obtain the corresponding identity infor-
mation and check whether the equality h = H 2 ( IDA , TA , RA , Y ) holds. If the decision is
positive, HA confirms A is a legal user.
(2) Pick up the current time T_HA, and compute k = H_3(Y). Then construct a message
m_HA = {ID_FA, ID_HA, T_FA, T_HA, E^ECC_{P_FA}(k)}, and compute the signature
Sig_HA(m_HA), where E^ECC_{P_FA} denotes the elliptic curve encryption scheme (ECES)
under FA's public key.
(3) Send a message < mHA , Sig HA ( mHA ) > to the FA.
Step4: After receiving the message, FA checks the validity of THA and Sig HA (mHA ) . If
the decision is positive, FA confirms HA and A are legal and does the following:
(1) Generate a temporary identifier IDA' and corresponding private key ( s 'A , RA' )
for A.
(2) Create for A an account of the form ( Ind A' , IDA' , RA' , time) , where
Ind A' = RA' + H1 ( IDA' , RA' ) PFA is the index of the account and time is the expiry date. IDA'
and ( s 'A , RA' ) can be generated in spare time.
(3) Decrypt E^ECC_{P_FA}(k) and get k. Then pick up the current time T′_FA, and send
the message <T′_FA, E_k(T′_FA, T_A, s′_A, R′_A, ID′_A)> to A, where E is the symmetric
encryption algorithm.
Step 5: After receiving the message, A checks the validity of T′_FA. If the decision is
positive, A computes k = H_3(Y) and decrypts E_k(T′_FA, T_A, s′_A, R′_A, ID′_A). Then A
checks the validity of T′_FA and T_A. If the decision is positive, A knows that FA is legal. A
318 S.-W. Huo, C.-Y. Luo, and H.-Z. Xin
uses k as the session key with the FA in future communications, and saves
IDA' , ( s 'A , RA' ) .
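The index-recovery check in step 3 — computing Ind_A = h⁻¹(zP − Y) and matching it against the stored account index — can be illustrated with a deliberately insecure toy sketch in which the elliptic-curve group is replaced by the integers modulo a prime, so that "scalar multiplication" kP becomes ordinary modular multiplication. All constants and names below are illustrative; a real implementation needs a genuine elliptic-curve group and random nonces.

```python
import hashlib

# Toy additive group: Z_q with generator P = 1, so k*P is just k mod q.
# INSECURE -- only for illustrating the algebra of the scheme.
q = 2305843009213693951  # the Mersenne prime 2**61 - 1
P = 1

def H(*parts):
    """Hash arbitrary parts to an element of Z_q (stands in for H1/H2)."""
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

# --- Setup (HA) ---
s_HA = 1234567                  # master secret (illustrative constant)
P_HA = s_HA * P % q             # public key

# --- Registration: extract A's key, store the account index ---
ID_A = "alice@domainA"
r_A = 7654321                   # would be random in practice
R_A = r_A * P % q
c = H(ID_A, R_A)
s_A = (r_A + s_HA * c) % q      # A's private key component
Ind_A = (R_A + c * P_HA) % q    # stored index; equals s_A * P

# --- Authentication step 1: A signs the timestamp T_A ---
T_A, y = "2011-01-01T00:00", 24680   # y would be random
Y = y * P % q
h = H(ID_A, T_A, R_A, Y)
z = (y + s_A * h) % q

# --- Step 3 (HA): recover the index and verify the signature ---
Ind = pow(h, -1, q) * ((z * P - Y) % q) % q
print(Ind == Ind_A)               # account located
print(h == H(ID_A, T_A, R_A, Y))  # signature check holds
```

The check works because zP − Y = s_A·h·P, so h⁻¹(zP − Y) = s_A·P, which is exactly the stored index Ind_A = R_A + H_1(ID_A, R_A)P_HA.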
3.5 Reauthentication
When user A requests a resource in Domain B again before the expiry date, A can run
the reauthentication protocol. In this case, the FA can quickly authenticate A without
the participation of HA.
Step 1: A chooses at random x, y ∈ Z_q*, and computes Y = yP, X = xP, Y′ = Y + xP_FA.
A picks up the current time T_A, and computes h = H_2(ID′_A, T_A, R′_A, Y),
z = y + s′_A h. A sends the message <T_A, X, Y′, h, z> to FA.
Step2: After receiving the message, FA checks the validity of TA . If the decision is
positive, FA does the following:
(1) Compute Y = Y′ − s_FA X, Ind′_A = h⁻¹(zP − Y), and search the client accounts with
Ind′_A. If there is an account indexed by Ind′_A, obtain the corresponding identity
information and check whether the equality h = H_2(ID′_A, T_A, R′_A, Y) holds. If the
decision is positive, FA knows that A is a legal user.
(2) Compute the session key k′ = H_3(Y). Pick up the current time T_FA, and send the
message <T_FA, E_{k′}(T_FA, T_A)> to A.
Step 3: After receiving the message, A checks the validity of T_FA. If the decision is
positive, A computes k′ = H_3(Y) and decrypts E_{k′}(T_FA, T_A). Then A checks the
validity of T_FA and T_A. If the decision is positive, A confirms FA is legal. A saves
k′ as the session key with FA.
4 Security Analysis
The proposed scheme can achieve the following security requirements:
Mutual Authentication: In the authentication phase, the entities can authenticate each
other. In step 3, HA authenticates A by verifying the signature (Y′, h, z), which is
the signature over the timestamp T_A using the IBS scheme in Section 2. We encrypt Y
to get Y′, so that only HA can verify the signature. Since the IBS scheme is secure
and a timestamp is used to check the freshness of the signature, this authentication
is secure. In steps 3 and 4, HA and FA authenticate each other by verifying the
other's signature; since the ECDSA is secure and a timestamp is used to check the
freshness of the signature, this authentication is also secure. In step 5, A
authenticates FA by decrypting E_k(T′_FA, T_A, s′_A, R′_A, ID′_A) and checking T′_FA
and T_A: because HA encrypts k using FA's public key and sends E^ECC_{P_FA}(k) to FA,
only a legal FA can decrypt E^ECC_{P_FA}(k). In step 4, FA trusts A because FA trusts
HA and HA has confirmed that A is legal; A is assured that FA indeed obtained k by
decrypting E_k(T′_FA, T_A, s′_A, R′_A, ID′_A) and checking T′_FA and T_A. In the
reauthentication phase, FA and A can renew the session key.
Client anonymity: No outsider or FA is able to learn user A's real identity. In the
authentication phase, the authentication information A submits contains only T_A and
its signature, without any identity information. Only HA can compute the index of A's
account, so neither outsiders nor the FA know A's real identity. In the
reauthentication phase, FA can determine A's temporary identifier but does not know
A's real identity.
Non-Linkability: No outsider is able to link two different sessions to the same user.
In the authentication phase, the authentication proof is the signature over the
timestamp T_A, so there is no linkability between different proofs and outsiders
cannot link different proofs to the same user. Similarly, non-linkability is achieved
in the reauthentication phase.
Our scheme achieves the same security as the scheme in [2]. The scheme in [3] has a
drawback in anonymity: in the reauthentication phase, the user uses the same temporary
certificate as the authentication proof, which allows the user's sessions to be
tracked. So our scheme is superior in security to the scheme in [3].
5 Performance Analysis
In this section, we compare the performance of our scheme in the authentication phase
with the schemes in [2,3], since they are all identity-based. In the comparison, only
the time of public-key operations is counted.
We suppose that the hardware platform of HA and FA is a PIV 2.1-GHz processor, and
the hardware platform of A is a 206-MHz StrongARM processor. The operation times of
the cryptographic primitives on HA/FA and A are obtained by experiment [5] and listed
in Table 1.
According to the data in Table 1, the running times of the three schemes are
computed and listed in Table 2.
The above results show that our scheme reduces the overall running time and the
client's running time. The reason is that the proposed protocol uses the new IBS
scheme without pairing and reduces the number of scalar multiplications on the client side.
6 Conclusion
This paper has presented an ID-based inter-domain authentication scheme for pervasive
environments. It can achieve such security requirements as mutual authentication,
secure session key establishment, client anonymity and non-linkability. It is superior
in both security and efficiency, and is more suitable for pervasive computing.
References
1. Yao, L., Wang, L., Kong, X.W.: An inter-domain authentication scheme for pervasive
computing environment. J. Computers and Mathematics with Applications 59(2), 811–821
(2010)
2. Peng, H.X.: An identity based authentication model for multi-domain. J. Chinese Journal
of Computers 29(8), 1271–1281 (2006)
3. Zhu, H., Li, H., Su, W.L.: ID-based wireless authentication scheme with anonymity. J.
Journal on Communications 30(4), 130–136 (2009)
4. Zhu, R.W., Yang, G., Wong, D.S.: An efficient identity-based key exchange protocol
with KGS forward secrecy for low-power devices. J. Theoretical Computer Science 378,
198–207 (2007)
5. Cao, X.F., Zeng, X.W., Kou, W.D.: Identity-based Anonymous Remote Authentication for
Value-added Services in Mobile Networks. J. IEEE Transactions on Vehicular Technol-
ogy 58(7), 3508–3517 (2009)
Computer Simulation of Blast Wall Protection under
Methane-Air Explosion on an Offshore Platform
1 Introduction
On an offshore platform, if combustible gas leaks, a large gas cloud may form and lead
to an accidental explosion. In some cases, the explosion undergoes a transition from
deflagration to detonation. The relatively high overpressure generated in an explosion
brings serious damage and potential loss of life. For example, in 1988 an
extraordinarily serious explosion occurred on a British offshore platform; more than
one hundred people lost their lives and the platform was finally abandoned [1]. So the
offshore industries have spent considerable effort on the qualification of explosion
threats, the quantification of blast overpressures, and design against them.
Since experiments can be very expensive, computer simulation has become very popular
for evaluating offshore explosion cases. Moreover, more scenarios can be considered in
the evaluation of explosion possibility. TNO used the multi-energy model for rapid
assessment of explosion overpressure [2]. UKOOA [3] employed CFD models in offshore
explosion assessment and concluded that CFD models can
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 321–326, 2011.
© Springer-Verlag Berlin Heidelberg 2011
322 C. Wang et al.
predict explosion evolution and overpressure reasonably well. Raman and Grillo [4]
gave guidance on the application of the TNO multi-energy method and on the selection
of model parameters related to the equipment layout and level of congestion. Clutter
and Mathis [5] simulated vapor explosions in offshore rigs using a flame-speed-based
combustion model. Pula et al. [6] adopted a grid-based approach and an enhanced onsite
ignition model to simulate offshore explosion overpressure.
On an offshore platform for methane exploitation in the China South Sea, blast walls
were designed to separate the process areas from the living quarters and lifeboats. In
these process areas, some potential failure may lead to methane leakage and further a
methane-air cloud explosion. In this paper, we employ computer modeling to evaluate
whether the blast wall is enough to protect the persons and lifeboats.
2 Computation Set-Up
The appropriate model for an offshore explosion in a methane-air gas cloud is the
Navier-Stokes equations for multiple thermally perfect species with reactive source
terms, described as follows:
$$\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} = \frac{1}{Re}\left(\frac{\partial F_v}{\partial x} + \frac{\partial G_v}{\partial y}\right) + S \quad (1)$$
where $Q = \{\rho_1, \rho_2, \dots, \rho_{NS}, \rho u, \rho v, \rho E\}^T$, $F = \{\rho_1 u, \rho_2 u, \dots, \rho_{NS} u, \rho u^2 + p, \rho u v, (\rho E + p)u\}^T$,
account, involving the reacting species CH4, O2, CO2, H2O and N2. The second-order
additive semi-implicit Runge-Kutta method [8] was employed to discretize the time term
and treat the stiffness of the chemical source terms. The convective terms were
integrated by the fifth-order Weighted Essentially Non-Oscillatory (WENO) scheme [9].
The viscous, heat and diffusion terms were evaluated using second-order central finite
differences.
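As an illustration of the convective-term treatment, the classical fifth-order WENO reconstruction of a flux value at the cell face i+1/2 can be sketched in one dimension. This is a generic sketch with the standard Jiang-Shu smoothness indicators and ideal weights (0.1, 0.6, 0.3), not the authors' solver.

```python
def weno5_face(f, i, eps=1e-6):
    """Fifth-order WENO reconstruction of f at face i+1/2
    from the five-point stencil f[i-2..i+2] (Jiang-Shu weights)."""
    # Smoothness indicators for the three candidate stencils
    b0 = 13/12*(f[i-2]-2*f[i-1]+f[i])**2 + 1/4*(f[i-2]-4*f[i-1]+3*f[i])**2
    b1 = 13/12*(f[i-1]-2*f[i]+f[i+1])**2 + 1/4*(f[i-1]-f[i+1])**2
    b2 = 13/12*(f[i]-2*f[i+1]+f[i+2])**2 + 1/4*(3*f[i]-4*f[i+1]+f[i+2])**2
    # Third-order candidate reconstructions
    q0 = (2*f[i-2] - 7*f[i-1] + 11*f[i]) / 6
    q1 = (-f[i-1] + 5*f[i] + 2*f[i+1]) / 6
    q2 = (2*f[i] + 5*f[i+1] - f[i+2]) / 6
    # Nonlinear weights built from the ideal weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s

# On smooth (here linear) data the reconstruction reproduces the exact
# face value: f = x on a unit grid gives f(2.5) for i = 2.
print(weno5_face([0.0, 1.0, 2.0, 3.0, 4.0], 2))  # ~2.5
```

Near a shock, the smoothness indicators b0..b2 grow on the stencils that cross the discontinuity, so their weights collapse and the reconstruction falls back to the smooth stencil, which is what suppresses spurious oscillations in the detonation fronts simulated here.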
Fig. 1 presents a local diagram of the blast wall. The blast wall height is 2.5 m. To
its left is the gas turbine generator skid (GTGS), with 15 m length, 3.2 m width and
10.5 m height. Here leakage of methane gas at high pressure may occur, and the
resulting methane-air gas cloud explosion emerges. To the right of the blast wall are
two lifeboats, which are used to carry the workers escaping from the offshore platform
in dangerous cases. So the current simulation aims at evaluating whether the blast
wall is enough to shelter the persons and lifeboats from explosion hazards. Due to the
relatively large computation scale and the chemical reaction stiffness, a 2D
simulation was carried out to elucidate the above problem, as shown in Fig. 2. The
computed domain extends 18 m in the X direction and 20 m in the Y direction. Below the
height of 10.5 m in the Y direction is the wall of the GTGS. The lifeboat plate is
7.5 m away from the blast wall; its height and width are both 2.7 m. The
pressure-outlet boundary conditions were imposed on the right and top boundaries, and
on the part of the left boundary above 10.5 m. A leakage with a methane amount of
0.14 kg, located close to the bottom or top of the GTGS, was considered, so a
semi-circular methane-air cloud formed with a diameter of 2 m. At its center, an
ignition with 4 times the Chapman-Jouguet (CJ) pressure and 1 times the CJ temperature
was imposed, in order to generate a detonation wave in the gas cloud. The diameter of
this ignition region is 0.2 m. Actually, such an explosion was considered the most
serious case; however, the explosion with the current leakage is much weaker than that
of a large leakage.

(a) Explosion close to the bottom of GTGS    (b) Explosion close to the top of GTGS
than that near the back surface, by about 5 to 10 times. Fourthly, on the ground
behind the blast wall, the shock wave is normally reflected rather than
Mach-reflected, as in Fig. 4(g). Lastly, in Fig. 4(h), a shock wave with the same
pressure as shown in Fig. 3(h) exists on the lifeboat plate.
According to the general criterion, if the explosion overpressure is 0.1 atm, bridges
and lifeboats are impaired; if the overpressure is 1.0 atm, the explosion wave leads
to death from lung and ear damage. In the current cases, close to the ground, the
blast wall and the lifeboat plate, the overpressure values are always more than 1 atm,
and locally more than 10 to 30 atm. So the current blast wall is not enough to keep
the persons and lifeboats safe. Additionally, if the leakage increases, even more
dangerous cases will arise. That is to say, the blast wall needs to be re-designed: it
is suggested that the height be raised or the shape of the blast wall be changed. This
will be re-evaluated in the further design.

Fig. 3. Explosion evolution when the leakage occurs close to the bottom of GTGS

Fig. 4. Explosion evolution when the leakage occurs close to the top of GTGS
4 Conclusions
The computer simulations described here provide detail descriptions of the blast wall
protection under the methane-air gas cloud explosion. The main conclusions can be
drawn as follows:
(1) The current computer program can successfully simulate the gas cloud explosion.
(2) Because the overpressure behind the blast wall and on the lifeboat plate is more
than 1.0 atm when the explosion wave passes, the current blast wall is not enough
to keep the persons and lifeboats safe. So the blast wall needs to be re-designed.
(3) The explosion wave of the methane-air gas cloud undergoes a successive process
of detonation formation, detonation transmission, shock attenuation, regular
reflection, Mach reflection, etc.
(4) The maximum pressure of the methane-air cloud detonation wave is about
18.5 atm. Additionally, when the shock wave reflects off a wall, the local pressure
can be more than twice the pressure behind the incident shock wave. So it is
extremely devastating and must be avoided at all times on an offshore platform.
Acknowledgment
This project was supported by grants from the Ph.D. Programs Foundation of the Ministry
of Education of China (Grant No. 20070358072) and Open Foundation of State Key
Laboratory of Explosion Science and Technology of Beijing Institute of Technology
(Grant No. KFJJ06-2).
References
1. The Public Inquiry into the Piper Alpha Disaster. The Hon Lord Cullen, presented to
Parliament by the Secretary of State for Energy by command of Her Majesty, Department
of Energy, London, HMSO (November 1990)
2. The Netherlands Organization for Applied Scientific Research (TNO), Internet website
(2004), http://www.tno.com.nl
3. UKOOA (UK Offshore Operators’ Association), Fire and explosion guidance. Part 1:
Avoidance and mitigation of explosions, Issue 1 (October 2003)
4. Raman, R., Grillo, P.: Minimizing uncertainty in vapor cloud explosion modeling. Process
Safety and Environmental Protection 83(B4), 298–306 (2005)
5. Clutter, J.K., Mathis, J.: Computational Modeling of Vapor Cloud Explosions in Off-shore
Rigs Using a Flame-speed Based Combustion Model. Journal of Loss Prevention in the
Process Industries 15, 391–401 (2002)
6. Pula, R., Khan, F.I., Veitch, B., Amyotte, P.R.: A Grid Based Approach for Fire and Ex-
plosion. Process Safety and Environmental Protection 84(B2), 79–91 (2006)
7. Anderson, W.K., Thomas, J.L., Van Leer, B.: AIAA Journal 26(9), 1061–1069 (1986)
8. Zhong, X.L.: Additive semi-implicit Runge-Kutta methods for computing high-speed non-
equilibrium reactive flows. Journal of Computational Physics 128, 19–31 (1996)
9. Shu, C.W.: Essentially Non-Oscillatory and Weighted Essentially Non-Oscillatory
Schemes for Hyperbolic Conservation Laws. ICASE Report 97-65 (1997)
Throughput Analysis of Discrete-Time Non-persistent
CSMA with Monitoring in Internet of Things
Abstract. With the development of the Internet of Things industry, more and more
scholars are studying the field of the Internet of Things. Monitoring the
transmission state of information is one of the important research fields in the
Internet of Things. This paper uses the discrete-time non-persistent CSMA random
access mode with monitoring function to realize monitoring of the transmission
state of information in the Internet of Things. We derive the throughput of the
system using the average cycle analysis method, and verify the correctness of the
analysis through computer simulations.
1 Introduction
The Internet of Things is defined as a network in which radio frequency
identification (RFID), infrared sensors, global positioning systems, laser scanners
and other information sensing devices connect any object to the Internet according to
agreed protocols, exchanging information and communicating in order to achieve
intelligent identification, location, tracking, monitoring and management. The
Internet of Things thus connects material objects to the Internet. This has two
meanings: first, the Internet is still the core and foundation, and the Internet of
Things is an extension and expansion of it; second, any goods can intelligently
exchange information and communicate with other goods [1].
The Internet of Things breaks with traditional thinking, which treated physical
infrastructure and IT infrastructure separately: on one side airports, roads and
buildings, and on the other data centers, personal computers, broadband and so on. In
the "Internet of Things" era, reinforced concrete, cables and chips are integrated
into a unified infrastructure. In this sense, the infrastructure is more like a new
earth site on which the operations of the world run, including economic management,
social management, production operation and even personal life [2].
In the Internet of Things, the access modes between persons and things, and between
things and things, are divided into polling multiple access and random multiple
access. Random access methods are further divided into discrete-time random multiple
access and continuous-time random multiple access
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 327–331, 2011.
© Springer-Verlag Berlin Heidelberg 2011
328 H. Ding, D. Zhao, and Y. Zhao
methods. This paper uses the discrete-time non-persistent CSMA random multiple access
system with monitoring function to achieve the "automation" and "intelligence"
features; it requires that the receiving end of the system can feed information back
to the sender, i.e., a monitoring function, enabling intelligent identification,
positioning, remote monitoring, network status tracking, fault alarm, and automatic
measurement and control.
In the Internet of Things, between humans and machines or between machines and
machines, the following must be achieved: a machine can actively report state
information during transmission to people, realizing fault alarm, and the system can
also be monitored remotely; machines can automatically communicate with each other
for data exchange, automatic measurement and control, data acquisition and
transmission, status tracking, etc. Remote monitoring, status tracking, fault alarm,
and automatic measurement and control require that the receiving end of the system
can feed information back to the sending end, i.e., a monitoring function. This paper
uses the discrete-time non-persistent CSMA random multiple access mode with
monitoring function to achieve this monitoring feature [3].
Fig. 1. The discrete-time non-persistent CSMA random access system with monitoring function
$$U_{BU} = \frac{aG e^{-aG}}{1 - e^{-aG}} \quad (5)$$
The average length of the period in which an information packet was successfully sent
or information packets collided in the channel is
$$E[B_U] = 1 + 2a \quad (6)$$
4 Simulation
Based on the analysis of the discrete-time non-persistent CSMA random multiple access
system with monitoring function, we conducted a computer simulation [7,8]. The
theoretical calculation and the computer simulation used the same parameter values,
taking a = 0.1; the results are shown in Fig. 2.
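The exact throughput expression of the monitored system is not reproduced in full here, but as a baseline the classical slotted (discrete-time) non-persistent CSMA throughput from Kleinrock and Tobagi [3], S = aG·e^(−aG) / (1 + a − e^(−aG)), can be evaluated over the same load points used in the simulation. The function name below is illustrative.

```python
import math

def slotted_np_csma_throughput(G, a):
    """Classical slotted non-persistent CSMA throughput [3]:
    S = a*G*exp(-a*G) / (1 + a - exp(-a*G)),
    with offered load G and normalized propagation delay a."""
    return a * G * math.exp(-a * G) / (1 + a - math.exp(-a * G))

# Sweep the offered load for a = 0.1, as in the simulation above
for G in (0.01, 0.4, 0.9, 1.4, 1.9, 2.6, 3.5, 7, 20):
    print(f"G = {G:5}: S = {slotted_np_csma_throughput(G, 0.1):.4f}")
```

The curve rises with G at light load, peaks, and then falls as collisions dominate, which is the qualitative shape expected of the theoretical curve in Fig. 2.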
Fig. 2. Throughput versus offered load G: theory and simulation (a = 0.1)
5 Conclusions
The simulation results agree well with the theoretical results, as shown in Fig. 2.
When the discrete-time non-persistent CSMA random access mode with monitoring
function is used, the monitoring of the transmission state of information in the
Internet of Things can be achieved more easily. The throughput analysis of this
random access system lays a good foundation for a more in-depth understanding of the
system and will help us find ways to optimize it.
References
1. Yao, W.: Basic Content of the Internet of Things. China Information Times 5, 1–3 (2010)
2. Li, Y., Chen, H.: Pondering on Internet of Things. Value Engineering (08), 126–127
(2010)
Throughput Analysis of Discrete-Time Non-persistent CSMA with Monitoring 331
3. Kleinrock, L., Tobagi, F.A.: Packet Switching in Radio Channels: Part I – Carrier Sense
Multiple-Access Modes and Their Throughput-Delay Characteristics. IEEE Transactions on
Communications 23(12), 1400–1416 (1975)
4. Liao, B.: The Softswitch-based Personal Monitoring Communications. ZTE Communica-
tions (04), 47–50 (2006)
5. Hu, X., Zhou, L.: Research on Signaling Monitor System of Switch Soft Networks. Tele-
communications Science (01), 34–37 (2007)
6. Zhao, D.: Study on a New Analyzing Method for Random Access Channel. In: Second
International Conference and Exhibition on Information Infrastructure, Beijing, April,
1998, pp. 16–29 (1998)
7. Zhao, D.: Study on Analyzing Method for Random Access Protocol. Journal of Electron-
ics 16(1), 44–49 (1999)
8. Zhou, N., Zhao, D., Ding, H.: Analysis of Multi-Channel and Random Multi-Access Ad
hoc Networks Protocol with Two-dimensional Probability. In: Computational Intelligence
and Industrial Applications Conference (Proceedings of ISCIIA 2006), November 22-25,
pp. 26–32 (2006)
The Effect of Product Placement Marketing on
Effectiveness of Internet Advertising
1 Introduction
The Internet has become the third-largest advertising medium in the US, representing
17% of the market (Snapp, 2010). Thousands of advertisers have turned to the Inter-
net as a prospective media for promoting their brands and transacting sales. Internet
advertising provides advertisers with an efficient and less expensive way of reaching
the consumers most interested in their products and services, and it provides consum-
ers with a far more useful marketing experience. Internet advertising transforms the
way consumers and companies find and interact with one another (Snapp, 2010).
A growing stream of product placement research has conducted surveys of con-
sumer and practitioner views on the practice and experiments to gauge product
placement’s impact on brand awareness, attitudes, and purchase intent (Wiles &
Danielova, 2009). In this internet era, product placement is no longer exclusively
for big companies with marketing budgets to match. From Facebook to Twitter to
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 332–337, 2011.
© Springer-Verlag Berlin Heidelberg 2011
bloggers, many ways exist on the Internet to chat up and spread the word about a
product (Johnson, 2009).
Despite the burgeoning popularity of product placement as a marketing tool on the
Internet, there is limited substantive empirical evidence regarding whether and how it
is effective in impacting consumer responses. In this study, we try to answer the ques-
tion: Is there any difference in the proper conducts of product placement between
internet and traditional media? In an effort to enhance understanding of the impact of
product placements on the Internet, our study purposefully manipulates the product
prominence (Subtle or Prominent) and presentation of the advertising (Video or Im-
ages). Based on the findings of previous studies, we have proposed that these factors
interact to influence the effectiveness of the product placement conducts.
2 Literature Review
In the effort to enhance the effectiveness of product placement, there is an old para-
dox known to marketers: "If you notice it, it's bad. But if you don't notice, it's worth-
less" (Ephron, 2003). Findings from previous studies (Homer, 2009) indicated that
customers will experience greater brand impression increases when product place-
ments are vivid and prominent, but when placements are subtle, consumers’ attitudes
toward the advertising are relatively positive. So our first question is: Does product
prominence (Subtle or Prominent) have an impact on the effectiveness of product
placement advertising on the Internet?
The advent of media-sharing sites, especially along with the so called Web 2.0
wave, has led to unprecedented internet delivery of multimedia contents such as im-
ages and videos, which have become the primary sources for online product place-
ment advertising. Industry and various academic studies have acknowledged the im-
portance of capturing a visual image of the placed product on the screen (Russell
1998, 2002). It is valuable for the product placement marketer to know the difference
in effectiveness when considering integrating their advertising with images or videos.
Therefore, our second research question is: Will the advertising effectiveness be im-
pacted differently when we present the product placement through video or through
still images?
In this study, the following hypotheses were tested:
H1: Product prominence significantly affects the effectiveness of product place-
ment advertising.
H1a: Subtle placements lead to a better advertising attitude than prominent
placements.
H1b: Prominent placements lead to a better brand impression than subtle
placements.
H1c: Subtle placements lead to higher user intention to click than prominent
placements.
H2: Product placement advertising that is presented through video can have a
greater effectiveness than advertising that is presented through images.
334 H.-L. Liao et al.
H2a: Product placement advertising presented through video leads to a better
advertising attitude than advertising presented through images.
H2b: Product placement advertising presented through video leads to a better
brand impression than advertising presented through images.
H2c: Product placement advertising presented through video leads to a higher
user intention to click than advertising presented through images.
The research model empirically tested in this study is shown in Fig. 1. It is drawn
from the constructs of product prominence, presentation of the advertising, and
effectiveness of product placement advertising. The model proposes that the type of
product prominence and the presentation of the advertising, as the independent
variables of this study, are potential determinants of the effectiveness of product
placement advertising.
3 Research Methodology
This study used a field experiment to test the research hypotheses. This section de-
scribes the participants, procedures, instrument development, and measures.
3.1 Participants
3.2 Procedures
Random sampling was used to assign users to the four groups shown in Table 1.
Subjects in each group were asked to access several web pages of fashion reports on
the SogiKing website with different product placement settings. After browsing the
web pages, the subjects were asked to complete a survey indicating their advertising
attitude, brand impression, and intention to click.

                      Prominent placement    Subtle placement
Video presentation    Group 1 (53)           Group 2 (63)
Image presentation    Group 3 (62)           Group 4 (64)
(N): number of subjects.
3.3 Instrument Development

This study developed four brand impression items asking subjects about the product
catalog, brand name, and product characteristics in the product placement
advertising. For each question, subjects who gave the right answer received 1 point;
subjects who gave the wrong answer received 0 points. The instrument for advertising
attitude included a combination of items derived from Garretson and Niedrich (2004),
Chang (2004), and Martin et al. (2003). Additionally, the subjects' intention to
click was assessed using a one-item questionnaire constructed following the
recommendations of Davis et al. (1989).
3.4 Measures
The reliability and validity of the instrument constructs were evaluated. The sample
showed a reasonable level of reliability (α > 0.70) (Cronbach, 1970). Construct
validity was examined using the principal component method with varimax rotation,
and factor analysis confirmed that the scales had adequate construct validity. The
factor loadings for all items exceeded 0.8, indicating that the individual items also
had discriminant validity.
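As an illustration of the reliability check reported above, Cronbach's alpha for a k-item scale is α = k/(k−1) · (1 − Σσ²_item / σ²_total). A minimal pure-Python sketch (the example data are invented for illustration, not the study's data):

```python
def cronbach_alpha(items):
    """items: one list of scores per scale item, aligned by subject."""
    k = len(items)            # number of items
    n = len(items[0])         # number of subjects

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(sample_var(item) for item in items)
    # total score per subject across all items
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1.0 - sum_item_vars / sample_var(totals))

# two perfectly correlated items -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```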
Table 2. The effect of product prominence and presentation of the advertising on the
effectiveness of product placement advertising

Independent variable              Dependent variable      F         P-value
Product prominence                Brand impression        295.132   0.000***
                                  Advertising attitude    507.331   0.000***
                                  Intention to click      282.915   0.000***
Presentation of the advertising   Brand impression        13.852    0.000***
                                  Advertising attitude    62.650    0.000***
                                  Intention to click      50.542    0.000***
Product prominence ×              Brand impression        0.737     0.391
Presentation of the advertising   Advertising attitude    58.607    0.000***
                                  Intention to click      4.953     0.027**
*** p < 0.01, ** p < 0.05, * p < 0.1.
All hypotheses were examined and supported. Video presentations led to a higher
level of advertising attitude, brand impression, and intention to click than image pres-
entations. Subtle placements had a higher level of advertising attitude and intention to
click than prominent placements, but prominent placements led to a higher brand
impression than subtle placements.
5 Conclusion
While several past experimental studies report that product placement has little impact
on brand attitudes, many practitioners maintain that placement can produce "home
runs," especially when certain guidelines are met (Homer, 2009). The primary goal of
this study was to investigate two potential factors that may help to increase the effec-
tiveness of the product placement conducts on the Internet, that is, the product promi-
nence (Subtle or Prominent) and presentation of the advertising (Video or Images).
Our results show that product prominence (Subtle or Prominent) and presentation of
the advertising (Video or Images) both significantly affect the effectiveness of
product placement advertising on the Internet. Specifically, advertising through
video with subtle placements yields the best results for advertising attitude and
users' intention to click; however, to achieve a better brand impression, the
advertising should be presented
through video with prominent placements. Since all hypotheses were supported, and
our findings indicate a consistency with previous evidence, we have concluded that
the effects of product placement conducts (Product Prominence and Presentation) on
the Internet are similar to the effect of product placement in other media. Our results
provide further evidence that the impact of product placement is not a simple phe-
nomenon, but rather that effects are qualified by many moderating factors. As a result,
prominent placements on the Internet can have undesirable consequences. We believe
that this further highlights the importance of "integrating" your advertising with the
content, which is frequently noted by both academics and industry experts but ignored
by many internet marketers.
References
1. Aaker, D.A.: Managing brand equity. The Free Press, New York (1991)
2. Brackett, L.K., Carr, B.N.: Cyberspace advertising vs. other media: Consumer vs. mature
student attitude. Journal of Advertising Research, 23–32 (2001)
3. Chang, C.: The interplay of product class knowledge and trial experience in attitude forma-
tion. Journal of Advertising 33(1), 83–92 (2004)
4. Cronbach, L.J.: Essentials of psychological testing. Harper and Row, New York (1970)
5. Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technology: A
comparison of two theoretical models. Management Science 35(8), 982–1003 (1989)
6. Ephron, E.: The Paradox of Product Placement. Mediaweek, 20 (June 2, 2003)
7. Garretson, J.A., Niedrich, R.W.: Creating character trust and positive brand attitudes.
Journal of Advertising 33(2), 25–36 (2004)
8. Gupta, P.B., Lord, K.R.: Product placement in movies: The effect of prominence and mode
on recall. Journal of Current Issues and Research in Advertising 20, 47–59 (1998)
9. Homer, P.M.: Product Placements: The Impact of Placement Type and Repetition on Atti-
tude. Journal of Advertising 38(3), 21–31 (2009)
10. Johnson, R.: Running the Show — Screen Shots: Product placements aren’t just for big
companies anymore. Wall Street Journal, Eastern edition, R.9 (September 28, 2009)
11. Lutz, R.J., Mackenzie, S.B., Belch, G.E.: Attitude Toward the Ad as a Mediator of Adver-
tising Effectiveness: Determinates and Consequences. In: Bagozzi, R.P., Tybout, M. (eds.)
Advance in Consumer Research, vol. 10, pp. 532–539. Association for Consumer Re-
search, Ann Arbor (1983)
12. Martin, B.A.S., Durme, J.V., Raulas, M., Merisavo, M.: Email Advertising: Exploratory
Insights from Finland. Journal of Advertising Research 43(3), 293–300 (2003)
13. Russell, C.A.: Toward a framework of product placement: Theory propositions. Advances
in Consumer Research 25, 357–362 (1998)
14. Russell, C.A.: Investigating the effectiveness of product placements in television shows:
The role of modality and plot connection congruence on brand memory and attitude. Jour-
nal of Consumer Research 29(3), 306–318 (2002)
15. Snapp, M.: Principles Online Advertisers Can Thrive By. R & D Magazines (2010),
http://www.rdmag.com/News/Feeds/2010/03/information-tech-principles-online-advertisers-can-thrive-by/
(last visit: May 9, 2010)
16. Vaughan, T.: Multimedia: Making it work. Journal of Marketing Research 30 (1993)
17. Wiles, M.A., Danielova, A.: The worth of product placement in Successful films: An
Event Study Analysis. Journal of Marketing 73, 44–63 (2009)
A Modular Approach to Arithmetic and Logic Unit
Design on a Reconfigurable Hardware Platform for
Educational Purpose
Abstract. The Arithmetic and Logic Unit (ALU) design is one of the important
topics in the Computer Architecture and Organization course in Computer and
Electrical Engineering departments. Existing ALU designs used as educational
tools are of a non-modular nature. As programmable logic technology has
developed rapidly, it has become feasible to implement ALU designs based on
Field Programmable Gate Arrays (FPGAs) in this course. In this paper, we adopt
a modular approach to ALU design based on FPGA. All the modules in the ALU
design are realized using schematic structures on Altera's Cyclone II
development board. Under this model, the ALU content is divided into four
distinct modules: an arithmetic unit (excluding multiplication and division
operations), a logic unit, a multiplication unit, and a division unit. Users
can easily design an ALU of any size since this approach is modular in nature.
The approach was then applied to the microcomputer architecture design named
BZK.SAU.FPGA10.0 in place of its current ALU unit.

Keywords: Arithmetic and Logic Unit design, Educational tool, FPGA, Computer
Architecture and Organization, Teaching method, Modular Approach.
1 Introduction
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 338–346, 2011.
© Springer-Verlag Berlin Heidelberg 2011
process of ALU design and to understand the ALU's inner structure thoroughly. In
order to improve the effect of experimental teaching of ALU design, we have adopted
a modular approach to it. A method is presented that allows the user to design an
ALU of any size in a few steps. The units obtained using this approach can be used
both alone and in systems that need an ALU. This approach is then applied to the
microprocessor design named BZK.SAU.FPGA10.0 [7], the FPGA implementation of
BZK.SAU [8] on the Altera DE2 Cyclone II development board. This structure is shown
in Fig. 1.
Fig. 1. Replacing the ALU unit in the BZK.SAU.FPGA10.0 microprocessor design with the
modular ALU unit obtained in this work
The proposed ALU has four units: an arithmetic unit (excluding multiplication and
division operations), a logic unit, a multiplication unit, and a division unit. The
top-level view of the ALU architecture is shown in Fig. 2. The output value of the
ALU according to the S1 and S0 selector pins is given in Table 1.
The design of an arithmetic unit of any size consists of only two steps. The first
step is to design a one-bit arithmetic unit, which has two 8-to-1 multiplexers and
one full adder circuit. The inner structure and block diagram of this unit, obtained
using the Quartus Web Edition software from Altera Corporation, are shown in Fig. 3
and Fig. 4.
The final step is to decide the size of the arithmetic unit: n one-bit arithmetic
unit blocks are used to obtain an n-bit arithmetic unit. Fig. 3 summarizes this
structure. It performs different operations according to the S2, S1, S0 and Carry_In
selector pins of the one-bit arithmetic unit block, as shown in Table 2.
Fig. 4. The inner structure of the one-bit arithmetic unit in Altera's FPGA environment
Table 2. The operation table according to the S2, S1, S0 and Carry_In selector pins
The S2, S1 and S0 selector pins are common to every unit in the n-bit arithmetic
unit. When the arithmetic unit of the ALU is used, the "Enable" selection input
enables or disables the arithmetic unit. The "Carry_In" selector pin status in
Table 2 is only available for the uppermost one-bit arithmetic unit in Fig. 3.
342 H. Oztekin, F. Temurtas, and A. Gulbag
The user-reserved area in Table 2 is for user-defined operations. The user can
define at most four operations; to define more than four operations, it is
sufficient to use larger multiplexers (16-to-1, 32-to-1, etc.) instead of the
multiplexers of the one-bit arithmetic unit.
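The slice-and-chain structure described above can be sketched behaviorally in Python. The multiplexer programming below is illustrative only (Table 2's actual encoding is defined by the schematic and is not reproduced here); the point is the wiring: two 8-to-1 muxes select the adder operands, and n slices are chained through the carry:

```python
def full_adder(a, b, cin):
    # gate-level one-bit full adder
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def one_bit_arith_slice(s, a, b, cin):
    """One slice: two 8-to-1 muxes feed a full adder.
    The mux programming here is a hypothetical example."""
    mux_x = (a, a, a, a, 0, 0, 1, 1)[s]
    mux_y = (b, (~b) & 1, 0, 1, b, (~b) & 1, 0, 1)[s]
    return full_adder(mux_x, mux_y, cin)

def n_bit_arith(x, y, n, s=0, cin=0):
    # chain n slices through the carry; with s=0 this computes x + y,
    # with s=1 and cin=1 it computes x - y in two's complement
    result, c = 0, cin
    for i in range(n):
        bit, c = one_bit_arith_slice(s, (x >> i) & 1, (y >> i) & 1, c)
        result |= bit << i
    return result, c
```

With this mux programming, selector 0 adds and selector 1 with Carry_In = 1 subtracts via the one's complement plus one, which mirrors how a single adder serves several arithmetic operations.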
It is possible to design a logic unit of any size in two steps, in the same way as
the arithmetic unit. The first step is to build a one-bit logic unit, which requires
an 8-to-1 multiplexer, an AND gate, an OR gate, an XOR gate, and two inverters. The
proposed one-bit logic unit performs seven operations according to the S2, S1 and S0
selector pins, as shown in Table 3; its inner structure and block diagram are shown
in Fig. 5 and Fig. 6.
S2 S1 S0   The operation
0  0  0    Out ← Input0 ∧ Input1
0  0  1    Out ← Input0 ∨ Input1
0  1  0    Out ← Input0 ⊕ Input1
0  1  1    Out ← ¬Input0
1  0  0    Out ← ¬Input1
1  0  1    Out ← Shifted(Right) Out
1  1  0    Out ← Shifted(Left) Out
1  1  1    Reserved Area
The shifting operations are meaningful when the logic unit consists of more than one
one-bit logic unit block. Therefore, the "ShiftRight" and "ShiftLeft" input pins can
only be used when the logic unit is composed of more than one one-bit logic unit
block. The final step in designing a logic unit of any size is to decide its size:
an n-bit logic unit has n one-bit logic unit blocks. This structure is shown in
Fig. 5.
The connections to the SR and SL input pins are made as follows. The uppermost SR
input pin is connected to the second bit of "Operand0", the next SR input pin to the
third bit of "Operand0", and so on, with the last SR input pin connected to logic
'0'. For the SL input pins, the uppermost SL input pin is connected to logic '0',
the second SL input pin to the first bit of "Operand0", the next SL input pin to the
second bit of "Operand0", and so on. The "Enable" input pin is used to enable or
disable the logic unit. If the user wants to define one more logic function, the
reserved area in Table 3 can be used; to define more than one additional logic
function, the current multiplexer must be replaced with a larger one (16-to-1,
32-to-1, etc.).
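The behavior of the n-bit logic unit can be sketched as follows. Two points are assumptions on our part: the 011/100 rows are modeled as complements (matching the two inverters in the circuit), and the shifts are modeled on Operand0 per the SR/SL wiring described above:

```python
def logic_unit(sel, op0, op1, n, enable=True):
    """Behavioral sketch of the n-bit logic unit (cf. Table 3).
    sel is the (S2, S1, S0) tuple; op0/op1 are n-bit integers."""
    if not enable:
        return 0
    mask = (1 << n) - 1
    ops = {
        (0, 0, 0): op0 & op1,
        (0, 0, 1): op0 | op1,
        (0, 1, 0): op0 ^ op1,
        (0, 1, 1): (~op0) & mask,       # assumed: complement of Operand0
        (1, 0, 0): (~op1) & mask,       # assumed: complement of Operand1
        (1, 0, 1): op0 >> 1,            # SR chain: MSB filled with logic '0'
        (1, 1, 0): (op0 << 1) & mask,   # SL chain: LSB filled with logic '0'
    }
    if sel not in ops:
        raise ValueError("reserved selector combination")
    return ops[sel]
```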
The multiplication unit design is a little more complicated than the other units.
Since our aim is to provide uncomplicated designs to be used as an educational tool,
we used a parallel multiplier design instead of the conventional multiplication
algorithm. The differences between these designs are presented in Section 4. The
parallel multiplier circuit and its FPGA implementation in the Quartus software
environment are shown in Fig. 7 and Fig. 8. An n-bit parallel multiplier circuit has
n² one-bit blocks and n−1 full adder circuits; the connections between these units
are as shown in Fig. 7. The inner structure of the one-bit block unit contains one
full adder circuit and an AND gate, as shown in Fig. 8.
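The array-multiplier idea, AND gates forming partial products and full-adder chains accumulating them, can be sketched behaviorally (the exact cell wiring of Fig. 7 is not reproduced; this is a functional model under that assumption):

```python
def full_adder(a, b, cin):
    # gate-level one-bit full adder
    return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

def parallel_multiply(x, y, n):
    """Behavioral sketch of an n-bit array multiplier: each partial
    product bit is an AND gate, and rows are summed bit-serially with
    full adders into a 2n-bit accumulator."""
    acc = [0] * (2 * n)
    for i in range(n):
        yi = (y >> i) & 1
        # row i of partial products (LSB first), shifted left by i positions
        row = [0] * i + [((x >> j) & 1) & yi for j in range(n)] + [0] * (n - i)
        carry = 0
        for k in range(2 * n):
            acc[k], carry = full_adder(acc[k], row[k], carry)
    return sum(bit << k for k, bit in enumerate(acc))
```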
Fig. 7. The FPGA implementation of the parallel multiplier circuit in Altera's Quartus
software environment
Fig. 8. The inner structure of one-bit block in Parallel multiplier circuit on Altera’s Quartus
Software environment
The division unit design is more complicated than the other units. In this work, the
division algorithm is developed as shown in Fig. 9(a). An n-bit division operation
requires an n-bit register with load and shift control pins for the dividend (Q), an
(n+1)-bit register with load and shift control pins for the divisor (M), an
(n+1)-bit register with only a load control pin for the remainder (A), an n-bit
down-counter with a load control pin (P, initially n), and an (n+1)-bit full adder
circuit. The algorithm has four states, and four flip-flops are sufficient to define
them; the flip-flop outputs are designed so that only one is logic '1' at a time. A
16-bit division process is implemented in this work as shown in Fig. 9(b). The inner
structure of this block is not given in detail; it can be examined by downloading it
from the web site.
Fig. 9. (a) The n-bit division algorithm (b) The block diagram of 16-bit division operation using
Altera’s Quartus software
Acknowledgement
This work was fully supported under TUBITAK (The Scientific and Technological
Research Council of Turkey) project no. 110E069.
References
1. Wang, A.: Computer Organization and Construction (3rd Version). Tsinghua University
Press, Beijing (2002)
2. Shangfu, H., Baili, S., Zhihui, W., Li, L.: The Virtual Experiment Design of Arithmetic
Unit Based on Object-Oriented Technology. In: 2010 Second International Conference on
Multimedia and Information Technology, IEEE Conferences, pp. 159–161 (2010)
3. Paharsingh, R., Skobla, J.: A Novel Approach to Teaching Microprocessor Design Using
FPGA and Hierarchical Structure. In: International Conference on Microelectronic System
Education, IEEE Conferences, pp. 111–114 (2009)
4. Hatfield, B., Rieker, M., Lan, J.: Incorporating simulation and implementation into teach-
ing computer organization and architecture. In: Proceedings of the 35th Annual Conference
Frontiers in Education, IEEE Conferences, pp. F1G-18 (2005)
5. Xiao, T., Liu, F.: 16-bit teaching microprocessor design and application. In: IEEE Interna-
tional Symposium on IT in Medicine and Education, IEEE Conferences, pp. 160–163
(2008)
6. Carpinelli, J.D.: The Very Simple CPU Simulator. In: 32nd Annual Frontiers in Educa-
tion, IEEE Conferences, pp. T2F-11–T2F-14 (2002)
7. BZK.SAU.FPGA10.0,
http://eem.bozok.edu.tr/database/1/BZK.SAU.FPGA.10.0.rar
8. Oztekin, H., Temurtas, F., Gulbag, A.: BZK.SAU: Implementing a Hardware and Soft-
ware-based computer Architecture simulator for educational purpose. In: Proceedings of
2010 International Conference on Computer Science and Applications (ICCDA 2010),
vol. 4, pp. 90–97 (2010)
9. Zhu, Y., Weng, T., Keng, C.: Enhancing Learning Effectiveness in Digital Design Courses
Through the Use of Programmable Logic. IEEE Transactions on Education 52(1), 151–156
(2009)
10. Oztekin, H.: Computer Architecture Simulator Design, Msc. Thesis, Institute of Science
Technology, Sakarya, Turkey (January 2009)
11. Modular ALU Design for BZK.SAU.FPGA.10.0,
http://eeem.bozok.edu.tr/database/1/ALU
A Variance Based Active Learning Approach for
Named Entity Recognition
1 Introduction
Natural Language Processing (NLP) and Information Extraction (IE), with their
various tasks and applications such as POS tagging, NER, NP chunking, and semantic
annotation, have been matters of concern in several fields of computer science for
years [1].
In order to automate these tasks, different machine learning approaches such as su-
pervised learning, semi-supervised learning and active learning are applied [2][3].
The Named Entity Recognition (NER) task is one of the most important subtasks in
information extraction. It is defined as the identification of Named Entities (NEs)
within text and their labeling with pre-defined categories such as person names,
locations, organizations, etc. [2]. In many works the NER task is formulated as a
sequence labeling
problem and thus can be done with machine learning algorithms supporting sequence
labeling task [4]. Moreover, in complex structured prediction tasks, such as parsing or
sequence modeling, it is considerably more difficult to obtain labeled training data
than for classification tasks (such as document classification), since hand-labeling
individual words and word boundaries is much harder than assigning text-level class
labels.
One way to address this issue is to use Active Learning (AL). In the AL scenario,
only examples of high training utility are selected for manual annotation in an
iterative
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 347–352, 2011.
© Springer-Verlag Berlin Heidelberg 2011
348 H. Hassanzadeh and M. Keyvanpour
2 Related Work
The authors of [6] propose a range of active learning strategies for IE based on
ranking individual sentences, and experimentally compare them on a standard dataset
for named entity extraction. They argue that, in active learning for IE, the
sentence should be the unit of ranking. Haibin Cheng et al. propose an AL technique
to select
the most informative subset of unlabeled sequences for annotation by choosing
sequences that have largest uncertainty in their prediction. Their active learning tech-
nique uses dynamic programming to identify the best subset of sequences to be anno-
tated, taking into account both the uncertainty and labeling effort [7].
A method called BootMark, for bootstrapping the creation of named-entity-annotated
corpora, is presented in [8]. The method requires a human annotator to manually mark
up fewer documents in order to produce a named entity recognizer with a given
performance than would be needed if the documents forming the base for the
recognizer were randomly drawn from the same corpus.
Kaiquan Xu et al. propose a semi-supervised semantic annotation method, which
uses fewer labeled examples to improve the annotating performance. The key of their
method is how to identify the reliably predicted examples for retraining [9].
Another work related to ours is that of Tomanek et al. [4]. They propose an ap-
proach to AL where human annotators are required to label only uncertain subse-
quences within the selected sentences, while the remaining subsequences are labeled
automatically based on the model available from the previous AL iteration round.
They use marginal and conditional probabilities as confidence estimators, and they
apply the semi-supervised scenario only as an auxiliary component alongside the AL.
where ε_c(x) is an added error. If we consider that the added error of the
classifier mainly comes from two sources, i.e., classifier bias and variance [10],
then the added error ε_c(x) in (1) can be decomposed into two terms, β and η_c(x),
where β represents the bias of the current learning algorithm, and η_c(x) is a
random variable that accounts for the variance of the classifier (with respect to
class c).
According to [11], classifiers trained using the same learning algorithm but
different versions of the training data suffer from the same level of bias but
different variance values. Assuming that we are using the same learning algorithm in
our analysis, we can, without loss of generality, ignore the bias term.
Consequently, the learner's probability of classifying x into class c becomes

f_c(x) = p(c|x) + η_c(x) .    (2)

According to Tumer and Ghosh's conclusion in [11], the classifier's expected added
error can be defined as

Err = σ²_{η_c} / s ,    (3)

where s is a term determined by the class posteriors. The variance is estimated as

σ²_θ = (1/|Υ_θ|) · Σ_{(x,y)∈Υ_θ} (y − f_θ(x))² ,    (4)

where θ is the current learner model and Υ_θ is a temporary set whose elements are
defined by Υ_θ = L ∪ {(x, ŷ)}, with ŷ the estimated target label for x predicted by
θ; |Υ_θ| denotes the number of examples in Υ_θ. Based on this analysis, our
confidence measure based on variance reduction is

φ(x) = Σ_c P(c|x; θ) · σ²_θ .    (5)
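As a toy illustration of the variance estimate in Eq. (4) (in our reconstruction of it), the following sketch averages the squared deviation between the model's output and the (possibly pseudo-) labels over a set of (x, y) pairs; the names are ours:

```python
def prediction_variance(predict, examples):
    """Eq. (4)-style estimate: mean squared deviation of the model's
    output from the labels in the set (which may include pseudo-labels
    produced by the current model itself)."""
    errors = [(y - predict(x)) ** 2 for x, y in examples]
    return sum(errors) / len(errors)

# toy model that always outputs 0.5, scored against three labeled points
var = prediction_variance(lambda x: 0.5, [(1, 1), (2, 0), (3, 1)])
```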
probability of each predicted label sequence to find the least confident sequences
in the unlabeled set. The remaining operations are performed on these selected
sentences.
In this step, the algorithm calculates the variance and the confidence measure
proposed in Section 3 for all the tokens in each selected sentence. Based on this
measure, tokens with the least confidence are picked to be manually labeled. To
exploit the remaining unlabeled tokens, we then apply a simple semi-supervised
scenario that uses the tokens about which the current model is certain. This
semi-supervised phase is described in the following.
In recent years, semi-supervised learning has become an efficient learning technique
in domains where automation is a matter of concern. Different semi-supervised
methods have been developed, but self-training is the most widely applied among
them. In our method, we apply a self-training approach beside the active learning
scenario to automatically label those tokens which have considerably high confidence
based on the proposed measure, and we update the current model with these
automatically labeled tokens. We use CRF as the base learner in our framework [13].
The advantage of this combined framework in a NER task is that, even when a sequence
is selected as an informative instance based on its low confidence, it can still
exhibit subsequences which do not add much to the overall utility and are thus
fairly easy for the current model to label correctly. Hence, within these selected
sequences, only those tokens which have high variances based on Equation (5) remain
to be manually labeled.
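The combined loop described above can be sketched as follows. Here `fit`, `confidence`, and `oracle` are placeholders for the CRF trainer, the variance-based confidence measure, and the human annotator (all names are ours, not the paper's):

```python
def active_learn(labeled, pool, fit, confidence, oracle,
                 rounds=3, batch=50, tau=0.9):
    """Sketch: least-confidence selection plus a self-training step."""
    model = fit(labeled)
    for _ in range(rounds):
        if not pool:
            break
        # query the examples the current model is least confident about
        pool.sort(key=lambda x: confidence(model, x))
        query, rest = pool[:batch], pool[batch:]
        labeled += [(x, oracle(x)) for x in query]        # manual annotation
        # self-training: auto-label examples the model is confident about
        auto = [(x, model(x)) for x in rest if confidence(model, x) >= tau]
        pool = [x for x in rest if confidence(model, x) < tau]
        model = fit(labeled + auto)                       # retrain
    return model, labeled
```

In the paper's setting the confidence threshold and the batch size of 50 sequences per iteration would play the roles of `tau` and `batch`.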
The data set used in our experiment is the CoNLL03 English corpus, a well-known
benchmark for the Named Entity Recognition task [14]. The details of the train and
test sets we used are shown in Table 1. The CoNLL03 corpus contains 9 label types,
which distinguish persons, organizations, locations, and names of miscellaneous
entities that do not belong to the previous three groups.
In our experiment we used the "Mallet" package as the CRF implementation [15],
employing the linear-chain CRF model in our system. All methods and classes are
implemented in Java. A set of common feature functions was employed, including
orthographical, lexical, morphological, and contextual ones. Unlike several previous
studies, we did not employ additional information from external resources such as
gazetteers. The overall experiment starts from 20 randomly selected sequences as the
initial labeled set (L). Our method picks 50 sequences in each iteration.
Table 1. Label types, sentences, and tokens of the training, test, and development sets
The three metrics widely used in the information retrieval field, precision, recall,
and F-score, were adopted in this experiment. Table 2 shows the results of our
approach on the CoNLL03 test set and development set. The number of manually labeled
tokens used for training the model in our method is 11464, which is only 5% of all
available labeled tokens in the training set. Beside these tokens, the model labels
44275 tokens automatically in a semi-supervised manner (overall, 55739 tokens, or
3000 sequences, were used for training). In fact, the automatically labeled tokens
are those that the algorithm determined to be confident samples based on the
proposed variance-based confidence measure. Table 2 also shows the evaluation of a
model trained on 52147 tokens (3000 sequences). Comparing the results shows that not
only does our approach use a dramatically lower number of manually labeled tokens to
create a predicting model, but the performance of the trained model is also slightly
higher than that of the model created with the same settings through random
sampling.
Table 2. Results of our AL method boosted by self-training and of Random Sampling (RS) (manually labeled tokens: 11464 for AL, 52147 for RS)
6 Conclusions
In this paper, we propose an active learning method for the NER task based on minimal variance. To take full advantage of the unlabeled data, we use self-training alongside our AL algorithm. In particular, we describe a prediction confidence measure based on minimizing the classifier's variance, and we showed that its minimization is equivalent to minimizing the classifier's prediction error rate. CRF is chosen as the underlying model for this experiment. The new strategy we propose gives data with representative information a much higher chance of selection and effectively improves the system's learning ability. The experiments showed that our approach uses considerably fewer manually labeled samples to produce the same result as random sample selection. In this work, a fixed number of training samples is added in each iteration; in future work we will investigate how to use the unlabeled data chosen by self-training in a more intelligent way.
352 H. Hassanzadeh and M. Keyvanpour
References
1. Keyvanpour, M., Hassanzadeh, H., Mohammadizadeh Khoshroo, B.: Comparative Classi-
fication of Semantic Web Challenges and Data Mining Techniques. In: 2009 International
Conference on Web Information Systems and Mining, pp. 200–203 (2009)
2. Nadeau, D., Sekine, S.: A survey of named entity recognition and classification. J. Linguisticae Investigation 30, 2–26 (2007)
3. Settles, B., Craven, M.: An Analysis of Active Learning Strategies for Sequence Labeling
Tasks. In: Empirical Methods in Natural Language Processing, pp. 1069–1078 (2008)
4. Tomanek, K., Hahn, U.: Semi-Supervised Active Learning for Sequence Labeling. In: 47th
Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pp. 1039–1047 (2009)
5. Settles, B.: Active learning literature survey. Technical Report, Wisconsin-Madison (2009)
6. Esuli, A., Marcheggiani, D., Sebastiani, F.: Sentence-Based Active Learning Strategies for
Information Extraction. In: 1st Italian Information Retrieval Workshop, IIR 2010 (2010)
7. Cheng, H., Zhang, R., Peng, Y., Mao, J., Tan, P.-N.: Maximum Margin Active Learning
for Sequence Labeling with Different Length. In: 8th Industrial Conference on Advances
in Data Mining, pp. 345–359 (2008)
8. Olsson, F.: On Privacy Preservation in Text and Document-based Active Learning for
Named Entity Recognition. In: ACM First International Workshop on Privacy and Ano-
nymity for Very Large Databases, Hong Kong, China (2009)
9. Xu, K., Liao, S.S., Lau, R.Y.K., Liao, L., Tang, H.: Self-Teaching Semantic Annotation
Method for Knowledge Discovery from Text. In: 42nd Hawaii International Conference on
System Sciences (2009)
10. Friedman, J.: On bias, variance, 0/1-loss, and the curse-of-dimensionality. Data Mining and Knowledge Discovery 1, 55–77 (1996)
11. Tumer, K., Ghosh, J.: Error correlation and error reduction in ensemble classifier. J. Con-
nection Sci. 8, 385–404 (1996)
12. Tomanek, K., Olsson, F.: A web survey on the use of active learning to support annotation
of text data. In: NAACL HLT Workshop on Active Learning for Natural Language
Processing, pp. 45–48. Boulder, Colorado (2009)
13. Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In: ICML 2001, pp. 282–289 (2001)
14. Sang, E.F.T.K., Meulder, F.d.: Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In: CoNLL 2003, pp. 155–158 (2003)
15. McCallum, A.K.: MALLET: A Machine Learning for Language Toolkit (2002),
http://mallet.cs.umass.edu
Research on Routing Selection Algorithm Based
on Genetic Algorithm
1 Introduction
The growth of network multimedia services promotes the demand for multimedia applications and the further development of existing network technology. As network applications develop, users place higher and higher requirements on network Quality of Service (QoS). How to provide various kinds of network QoS is a frontier research topic in computing, electronics, and communications. To improve the provision of network QoS and balance the network load, multiple service sites are often used to meet the requirements of network users; for example, a network often has a number of service sites satisfying the same service requirements, so that any one of them can serve a user's request. Network services such as WWW "mirror" sites and SOCK servers [2] belong to this structure. To study this traffic, the "routing" communication model has been proposed in recent years: given a network and a set of source and destination points, it seeks a routing path from the source to any destination point. Routing communication has been defined as a standard in the IPv6 traffic model [1].
There are two different modes of this routing communication model: one is application-level routing on the network [4,5],
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 353–358, 2011.
© Springer-Verlag Berlin Heidelberg 2011
354 G. Gao et al.
including the routing communication model and the strategy for selecting a target site; the other is network-layer routing [3]. The latter line of study has just started, and it mainly concerns the routing-table structure and routing algorithms for routed traffic.
This paper presents a new routing algorithm. The idea is to use the routing algorithm to serve routing requests with a short delay by reaching a local optimum, thus improving search efficiency and, in turn, the efficiency and quality of service of the network.
min { Σ_{i=1..n} Σ_{[k,l] ∈ P_i} delay([k,l]) }

subject to, for every link [k,l]:   Σ_{i=1..n, [k,l] ∈ P_i} B_i < B_kl
Here delay([k,l]) is the delay of each link on a path, and B_i is the flow of service requests associated with path P_i (a dynamic value). Each path P_i must satisfy the conditions that the delay over the links [k,l] of P_i is less than N_i, and that the total flow of transmission requests through each link [k,l] of the network is smaller than the link bandwidth B_kl.
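A minimal sketch of evaluating this objective and constraint for candidate paths; the link keys and data structures below are illustrative, not taken from the paper:

```python
def path_delay(path, delay):
    """Total delay of a path given as a list of links [k, l]."""
    return sum(delay[link] for link in path)

def links_feasible(paths, flows, bandwidth):
    """Check the constraint: for every link [k, l], the total flow B_i of
    all paths P_i that use the link stays below the link bandwidth B_kl."""
    load = {}
    for path, b in zip(paths, flows):
        for link in path:
            load[link] = load.get(link, 0) + b
    return all(load[link] < bandwidth[link] for link in load)
```

A genetic algorithm would call these two functions when computing the fitness of a chromosome that encodes a candidate set of paths.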
It can be seen from Figure 1 that, as the network size increases, the average number of generations required by the algorithm grows almost linearly. This is because, as the number of nodes increases, the search space of the genetic algorithm's variable-length chromosome coding grows, so more generations of evolution are needed to solve the problem. Figure 2 reflects how the routing cost changes with the number of network nodes. As the network size increases, the cost of the path found by the routing algorithm increases gradually, which is to be expected. But once the network reaches a certain size, the cost of the routing path stabilizes, which shows the advantage of the algorithm's local optimization.
[Figure: running time (0–2.0) versus number of destination nodes (3–9)]
Fig. 3. Running time as the number of destination nodes changes
We also experimented with the efficiency and convergence of the algorithm as the number of destination nodes changes. Here the number of network nodes is fixed at 20, the number of destination nodes varies from 3 to 10, the initial population size of the genetic algorithm is set to 20, and evolution runs for 60 generations; the crossover probability of the genetic algorithm is approximately 0.5 and the mutation probability approximately 0.01. Figure 3 shows that, with the network size fixed and the number of destination nodes changing, the cost of the algorithm's routing increases, but the increase is relatively stable, because a larger number of destination nodes makes it easier for the evolution to satisfy a destination node.
These network simulation experiments show that the proposed algorithm offers good service performance, balances the network load effectively, and improves search efficiency. The genetic algorithm's simplicity, universality, robustness, parallel search, and population-based optimization ensure the convergence of the algorithm and avoid routing loops.
5 Conclusion
Because network traffic is heavy and the network state is highly dynamic, traditional network routing methods cannot avoid temporary local network congestion, and it is difficult for them to meet the needs of network users. This paper presents a new genetic-algorithm-based routing algorithm, which achieves local optimization with a shorter delay, balances the network load better, and improves network utilization and quality of service. The algorithm is so simple and easy to implement that it can be applied to real networks.
References
1. Hinden, R., Deering, S.: IP version 6 addressing architecture. RFC 1884, IETF (1995)
2. Partridge, C., Mendez, T., Milliken, W.: Host Routing Selection server. RFC 1346, IETF
(1993)
3. Kwon, D.H., et al.: Design and Implementation of an Efficient Multicast Support Scheme for FMIPv6. In: INFOCOM 2006, pp. 1–12 (2006)
4. Jia, W.-j., Xuan, D., Zhao, W.: Integrated routing algorithms for Routing Selection mes-
sages. IEEE Communications Magazine 38(1), 48–53 (2000)
5. Jia, W.-j., Zhao, W.: Efficient Internet multicast routing using Routing Selection path
selection. Journal of Network and Systems Management 12(4), 417–438 (2002)
Optimization of the Performance Face Recognition
Using AdaBoost-Based
Abstract. Combining the results of several classifiers is one of the methods for increasing the efficiency of face recognition systems, and many researchers have paid attention to it in recent years. Although AdaBoost is one of the efficient boosting algorithms and has been used to decrease the dimensionality of the feature space extracted in face recognition systems, it has not been used as a classifier in face recognition systems. This paper addresses how to use this algorithm for classification in face recognition systems. First, classifier combination methods are evaluated; then the results of several combination methods are presented in comparison with single-classifier methods. The proposed method achieves 96.4% correct recognition and improves the results of the KNN method with PCA features. The AdaBoost method, based on weak learning, is used as the proposed classifier system with the aim of identity validation.
1 Introduction
The AdaBoost algorithm was presented by Freund and Schapire in 1995 for the first time [1]. One of the practical problems of boosting by filtering is its need for many training samples; this problem can be addressed by AdaBoost, in which it is possible to reuse the training information. As in boosting by filtering, AdaBoost relies on a weak training model. The aim of the algorithm is to find a final hypothesis with a low error rate with respect to the distribution D over the training samples. This algorithm is different from other boosting algorithms [2]: AdaBoost adapts to the errors of the weak hypotheses produced by the weak training model, and its performance bounds depend on the performance of the weak training model [3]. In this algorithm, easy samples of the training set, which previous weak hypotheses classified correctly, gain less weight, and hard samples, which were classified incorrectly, gain more weight. So the AdaBoost algorithm concentrates on the samples that are hard to classify.
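In the standard AdaBoost.M1 formulation (an assumption here, since the text does not give the formula), this reweighting scales correctly classified samples by beta = error/(1 − error) and then renormalizes, which is equivalent to boosting the relative weight of the hard samples:

```python
def adaboost_reweight(weights, correct, error):
    """One round of the standard AdaBoost.M1 weight update: correctly
    classified (easy) samples are scaled down by beta = error / (1 - error),
    misclassified (hard) samples keep their weight, then all weights are
    renormalized to sum to 1."""
    beta = error / (1.0 - error)
    new = [w * beta if ok else w for w, ok in zip(weights, correct)]
    z = sum(new)
    return [w / z for w in new]
```

With a weighted error of 0.25, for example, the single misclassified sample ends up carrying half of the total weight after one round.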
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 359–365, 2011.
© Springer-Verlag Berlin Heidelberg 2011
360 M. Faghani, M.J. Nordin, and S. Shojaeipour
Simulations were performed on a computer with an Intel® Pentium® M 1.70 GHz processor and 512 MB of RAM, using MATLAB 7.4.
In these methods, the decision of each classifier for an input pattern is counted as one vote, and the final decision is made from the total votes of the different classifiers [4]. The input pattern is assigned to the class with the most votes. If the classifiers are independent of each other and their recognition rates exceed 50%, increasing the number of classifiers increases the accuracy of the voting method.
• Unweighted Voting Method: the votes of all classifiers have the same weight. The winning criterion in these methods is the total number of votes. Complete agreement, majority vote, absolute majority, correction, and multi-stage voting are unweighted voting methods [5].
• Confidence Voting Method: each voter expresses a confidence level for each candidate. The higher the confidence in a candidate, the higher the winning probability; the candidate who gains the most votes is chosen [6].
• Ranked Voting Method: voters present lists of candidates ranked by their preference. The mean of the ranks given by the voters is taken as the final score of each candidate, and the candidate with the best (lowest) mean rank is chosen as the superior candidate [7].
In weighted voting methods, different criteria are used to assign a weight to each classifier; the most common is based on the classifier's performance on experimental samples. A compound system containing L independent classifiers achieves the highest accuracy when the weight of each classifier Di is chosen as wi = log(pi / (1 − pi)), where pi is the accuracy of classifier i.
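A minimal sketch of this weighted voting rule (the function and variable names are illustrative):

```python
import math
from collections import defaultdict

def weighted_vote(predictions, accuracies):
    """Weighted majority vote in which classifier i votes with weight
    w_i = log(p_i / (1 - p_i)), where p_i is its accuracy, as in the text."""
    scores = defaultdict(float)
    for label, p in zip(predictions, accuracies):
        scores[label] += math.log(p / (1.0 - p))
    return max(scores, key=scores.get)
```

Note that a single highly accurate classifier can outvote two mediocre ones, which a plain unweighted majority vote cannot do.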
2 Database
Image databases have been gathered by research centers for comparing different face recognition methods; using a common set of images, different methods can be compared. Different databases, such as PIE, AR, and FERET, have been created by different research centers. Each database has unique characteristics in addition to common ones, so analysis and discussion should be done according to the characteristics and results of each dataset; for better assessment, methods should be tested on different datasets. In the proposed face recognition method, the Yale and ORL databases are used for simulation. Each has characteristics useful for analyzing the results; we present their characteristics and images in the next section.
The ORL database is composed of 400 images of 40 persons, with ten images per person in different states. Images are 112×92 pixels with 256 grey levels. Variations include lighting, facial expression (cheerful, upset, open or closed eyes), facial details (with or without glasses), face covering, and in-depth rotation of about 20 degrees. These images were collected from 1992 to 1994, so the set also contains age changes. We made the images of this database smaller, to 46×46 pixels, in order to decrease computation.
The Yale database is composed of 165 images of 15 persons, with 11 images per person in different states. Variations include lighting, facial expression (cheerful, upset, sleepy, astonished), direct sunlight, light from the left side, light from the right side, and with or without glasses. The images have no in-depth or in-plane rotation. Images are 320×243 pixels with 256 grey levels. For simulation, the face region is cropped and edited. We made the images of this database smaller, to 50×50 pixels, in order to decrease computation.
The feature extraction and selection methods in this paper are PCA and LDA with selection of suitable coefficients. With LDA, 39 features were selected for the ORL database and 14 features for Yale. The best PCA results were obtained with 22 selected features for ORL and 22 features for Yale.
For comparison, a face recognition system based on the KNN method with both feature extraction methods was simulated, and the results were compared with the proposed method. The way the training images are selected is one of the factors affecting the recognition rate. To increase the statistical credibility of the results, we select images randomly and calculate the mean and standard deviation; experiments were repeated 30 times for each method. First the results of the PCA method and then of the LDA method are presented, followed by the related charts of the simulations on the two databases.
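For reference, the KNN baseline with k = 1 reduces to a nearest-neighbor lookup in the extracted feature space. A minimal sketch, in which the distance choice and all names are assumptions rather than the paper's implementation:

```python
def nn_classify(x, train_feats, train_labels):
    """Minimal 1-NN (KNN with k = 1) classifier over extracted feature
    vectors, using squared Euclidean distance."""
    best_label, best_dist = None, float("inf")
    for feat, label in zip(train_feats, train_labels):
        d = sum((a - b) ** 2 for a, b in zip(x, feat))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```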
[Figure: recognition rate (0.78–0.96) versus number of classes (0–40) for Adaboost-LDA and Adaboost-KNN]
Fig. 1. Comparison of results for every class between LDA and KNN for the ORL database
[Figure: recognition rate (0.78–0.98) versus number of classes (0–40) for Adaboost-PCA and Adaboost-KNN]
Fig. 2. Comparison of results for every class between PCA and KNN for the ORL database
4 Error Analysis
By evaluating several different weak learning methods in AdaBoost, and by examining the standard deviation across classes, we found that in our ORL experiments most errors relate to classes 5, 11, 15, and 40. Evaluating the images in these classes shows that their lighting conditions, backgrounds, and facial expressions vary more. In the Yale database, most errors relate to classes 6 and 12; evaluating the images in these classes shows uncontrolled lighting conditions. The FAR and FRR values of all methods were obtained for all images and then their means were calculated. As can be seen in Table 4, the decrease of FAR in the proposed method with respect to KNN is a significant improvement. The FRR of the proposed method with PCA features improved on the Yale database but increased in the other methods.
5 Conclusion
In this paper we used features extracted by the PCA and LDA transforms. For training and testing the system, 400 images from the ORL database and 165 images from the Yale database were used. AdaBoost as a compound classifier improved results with respect to a single classifier. The final model is robust against different facial states, lighting changes, and so on. The method achieves 96.4% correct recognition on the ORL database, improving the results of the KNN method with PCA features, and 94.3% on the Yale database, again performing better than the KNN method with PCA features. Finally, using the proposed method with LDA features, the recognition rate is 95.3% on the ORL database and 93.6% on Yale, better than the KNN method with LDA features.
References
1. Freund, Y., Schapire, R.E.: A Decision-Theoretic Generalization of On-Line Learning and
an Application to Boosting (1995)
2. Zhang, T.: Convex Risk Minimization. Annals of Statistics (2004)
3. Yang, P., Shan, S., Gao, W., Li, S.Z., Zhang, D.: Face Recognition Using Ada-Boosted Gabor Features. In: 6th IEEE International Conference on Automatic Face and Gesture Recognition, Korea, pp. 356–361 (2004)
4. Zhang, G., Huang, X., Li, S.Z., Wang, Y., Wu, X.: Boosting Local Binary Pattern (LBP)-
Based Face Recognition. In: Li, S.Z., Lai, J.-H., Tan, T., Feng, G.-C., Wang, Y. (eds.) SI-
NOBIOMETRICS 2004. LNCS, vol. 3338, pp. 179–186. Springer, Heidelberg (2004)
5. Ivanov, Y., Heisele, B., Serre, T.: Using component features for face recognition. In: 6th
IEEE International Conference on Automatic Face and Gesture Recognition, Korea, pp.
421–426 (2004)
6. Rowley, H.A., Baluja, S., Kanade, T.: Neural Network-based face Detection. In: 1996
IEEE Computer Society Conference on Computer Vision and Pattern Recognition, USA,
pp. 203–208 (1996)
7. Gökberk, B., Salah, A.A., Akarun, L.: Rank-based decision fusion for 3D shape-based
face recognition. In: Kanade, T., Jain, A., Ratha, N.K. (eds.) AVBPA 2005. LNCS,
vol. 3546, pp. 1019–1028. Springer, Heidelberg (2005)
Design and Implementation Issues of Parallel Vector
Quantization in FPGA for Real Time Image Compression
1 Introduction
Demand for communicating multimedia data through telecommunications networks and accessing multimedia data through the Internet is growing rapidly. Image data comprises a significant portion of multimedia data and occupies a large amount of the communication bandwidth for multimedia communication. As a result, the development of efficient image compression techniques continues to be an important challenge. Vector quantization (VQ) is a lossy compression technique used to compress picture, audio, or video data. Fig. 1 shows the block diagram of a typical vector quantizer engine. A vector quantizer maps k-dimensional vectors in the vector space into a finite set of vectors. Each of these finite vectors is called a code vector or a codeword, and the set of all the codewords is called a codebook. The vector quantizer compression engine consists of a complex encoder at the transmission side and a very simple decoder at the receiver side. The encoder takes the input vector and
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 366–371, 2011.
© Springer-Verlag Berlin Heidelberg 2011
outputs the index of the codeword that offers the lowest distortion. The lowest distortion is found by evaluating the Euclidean distance, in the form of the MSE (Mean Square Error), between the input vector and each codeword of the codebook. Once the index of the winning codeword (the codeword with least distortion) is sent through the communication channel, the decoder replaces the index with the associated codeword from an identical codebook.
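The full-search encoder step just described can be sketched as follows; this is a plain software illustration, not the paper's FPGA implementation:

```python
def encode_vector(x, codebook):
    """VQ encoder: return the index of the codeword with the minimum
    squared Euclidean distance (MSE up to a constant factor) to x."""
    best_index, best_dist = 0, float("inf")
    for i, cw in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(x, cw))
        if d < best_dist:
            best_index, best_dist = i, d
    return best_index

def decode_index(index, codebook):
    """VQ decoder: a simple table lookup in an identical codebook."""
    return codebook[index]
```

Only the index travels over the channel; the decoder's work is a single lookup, which is why the decoder is so much simpler than the encoder.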
Recent years have seen a rapid development of design methods. Advanced re-programmable devices are growing larger and larger (in terms of gate count), which enables the implementation of complicated digital units such as SoCs (Systems on Chip), processors, or specialized controllers with enhanced speed.
[Figure 1: block diagram of a VQ system: input image vectors → VQ encoder (codebook search/matching) → index → communication or storage channel → index → VQ decoder (codebook lookup) → recovered image]
MSE = (1/(MN)) Σ_{x=1..M} Σ_{y=1..N} [ I(x,y) − Î(x,y) ]²    (1)

where I is the original image, Î the reconstructed image, and M×N the image size.
The iterative codebook-training algorithm was executed in Matlab on the Lena image. Results for 10 iterations on the 256×256 Lena image, with a codebook size of 256 and vector dimension 16, gave an MSE decreasing from 75.9049 to 60.1845 and a PSNR increasing from 29.3281 to 30.3360 dB, within the acceptable value of 30 dB. As the codebook size and the vector dimension increase, this complex processing consumes more time and hence is not suitable for real-time applications. From the design point of view, the VQ encoder can be suitably designed as a hardware-software co-design.
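As a sanity check, the PSNR values quoted above follow from the MSE values through the standard relation for 8-bit images, PSNR = 10·log10(255²/MSE):

```python
import math

def psnr_from_mse(mse, peak=255.0):
    """Standard PSNR for 8-bit images: 10 * log10(peak^2 / MSE)."""
    return 10.0 * math.log10(peak * peak / mse)
```

Plugging in the two MSE values, 75.9049 and 60.1845, reproduces the quoted 29.33 dB and 30.34 dB figures.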
The software part can be implemented using the embedded MicroBlaze processor and internal memory of a Xilinx Virtex FPGA. Tasks such as off-line training of codebooks and prior computation of constants before the arrival of real-time image data execute without any timing constraints and can form the pre-processing or initialization of the VQ system. The complete hardware-software co-design is shown in Fig. 2. For the sake of testing, the VQ decoder is implemented here on the same module; it would, however, be implemented at the decoder end of the communication system.
[Figure 2: block diagram of the hardware-software co-design: internal memory, VQ encoder, and VQ decoder on the FPGA, with external memory]
Let X0, X1, … be the input vectors of dimension k = 2 for an image X, and let X0 be represented as
X0 = (X00, X01) (1)
Also, let a 2-dimensional codebook of size N, trained in Matlab using a number of standard images, be represented as follows. Since our module is designed to find a winning codeword among every 4 codewords, let the 4 codewords be represented as 2-dimensional data:
CW0 = ( CW00, CW01 )
CW1 = ( CW10, CW11 )
(2)
CW2 = ( CW20, CW21 )
CW3 = ( CW30, CW31 )
Let each codeword be associated with an Index IN for N=0,1,2,3
I0 - ( CW0 )
I1 - ( CW1 )
(3)
I2 - ( CW2 )
I3 - ( CW3 )
Algorithm: The generic algorithm used in the VQ implementation is as follows.
Step 1: The MSE between an input vector and codeword N is calculated as
D0(N) = (X00 − CW(N)0)² + (X01 − CW(N)1)²
For example, the MSE between X0 and CW0 (N = 0) is D00, and the MSE between X0 and CW1 (N = 1) is D01.
The VQ encoder involves the Euclidean distance measurement. This process is modified by using a look-up table, as discussed below, which enhances the speed compared with paper [4]. Here the mathematical analysis and the improvement made for the hardware implementation are discussed. Let D1 be the distortion difference for a 2-
370 K.R. Rasane and S.R.R. Kunte
dimensional input vector X0 between the codewords CW0 and CW1, i.e. D1 = D00 − D01. Expanding the squared terms and cancelling the parts common to both distortions, we get

D1 = (CW00² + CW01²) − (CW10² + CW11²) − 2[X00(CW00 − CW10) + X01(CW01 − CW11)]

which can be written as

D1 = A01 − X00[2(CW00 − CW10)] − X01[2(CW01 − CW11)]

where the constant A01 = (CW00² + CW01²) − (CW10² + CW11²) can be precomputed and stored in a look-up table.
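The algebra above can be checked numerically: the look-up-table form built from the precomputed constant A01 must agree exactly with the direct difference of the two squared distances. A small sketch (names illustrative):

```python
def mse2(x, cw):
    """Squared Euclidean distance between a 2-D input vector and a codeword."""
    return (x[0] - cw[0]) ** 2 + (x[1] - cw[1]) ** 2

def d1_via_lut(x, cw0, cw1):
    """Distortion difference D1 = D00 - D01 computed from the precomputed
    constant A01 plus two multiply-accumulate terms, as in the derivation."""
    a01 = (cw0[0] ** 2 + cw0[1] ** 2) - (cw1[0] ** 2 + cw1[1] ** 2)
    return a01 - x[0] * 2 * (cw0[0] - cw1[0]) - x[1] * 2 * (cw0[1] - cw1[1])
```

Only the sign of D1 is needed to decide the winner between CW0 and CW1, so the hardware never has to compute the full squared distances.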
5 Results
The VQ was tested for various dimensions of the vectors, i.e. for k = 2, 4, 6, 8, 12, and 16. The delays were analyzed for both designs, one using the conventional approach and the other the new architecture. Table 5 shows how this method compares in delay as the value of k increases, relative to [4]. Our proposed method is best suited for VQ of higher dimensions and can process images at 30 frames in less than 1 second, hence it is suitable for real-time use.
References
[1] Al-Haj, A.M.: An FPGA-Based Parallel Distributed Arithmetic Implementation of the 1-D
Discrete Wavelet Transform, Department of Computer Engineering, Princess Sumaya
University. Informatics 29, 241–247 (2005)
[2] Rasane, K., Kunte, S.R.: An Improved Reusable Component Architecture for ‘K’ Dimensional 4-Codebook VQ Encoder. In: ICEDST 2009, Manipal, India, pp. 271–274 (December 2009)
[3] Chen, P.Y., Chen, R.D.: An index coding algorithm for image vector quantization. IEEE
Transactions on Consumer Electronics 49(4), 1513–1520 (2003)
[4] Rasane, K., Kunte, S.R.: Speed Optimized LUT Based ‘K’ Dimensional Reusable VQ Encoder Core. In: ICECT 2010, Kuala Lumpur, Malaysia, May 7–10, pp. 97–100 (2010)
Discriminative Novel Information Detection of
Query-Focused Update Summarization
1 Introduction
Novel Information Detection was first introduced by TREC in 2002 [1]. The basic task is as follows: given a topic and an ordered set of documents segmented into sentences, return sentences that are both relevant to the topic and novel given what has
already been seen. This task models an application where a user is skimming a set of
documents, and the system highlights new, on-topic information. The Text Analysis
Conference (TAC1) is one of the most well-known series of workshops that provides
the infrastructure necessary for large-scale evaluation of natural language processing
methodologies and technologies. The update summarization task of TAC has been run since 2008; it aims to generate two short and fluent summaries, one for each of two chronologically ordered document sets, to meet the topic-relevant information need. The summary of the second document set should be written under the assumption that the user has already read the earlier documents; it should avoid repeating old information and inform the user of novel information about the specific topic.
* Corresponding author.
1 http://www.nist.gov/tac
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 372–377, 2011.
© Springer-Verlag Berlin Heidelberg 2011
where sim(Si, Sj) is the similarity of two sentences over their effective words, obtained by using WordNet synsets [6,7]. If the synset of word wik in WordNet is U and the synset of word wjl in WordNet is V, their similarity can be obtained as:

sim(w_ik, w_jl) = |U ∩ V| / |U ∪ V|    (3)
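Equation (3) is a Jaccard overlap of the two synsets. Modeling synsets as plain Python sets (real synsets would come from WordNet), a minimal sketch:

```python
def synset_similarity(u, v):
    """Word similarity of Equation (3): Jaccard overlap |U ∩ V| / |U ∪ V|
    of the two words' synsets, modeled here as plain sets."""
    u, v = set(u), set(v)
    if not u | v:
        return 0.0
    return len(u & v) / len(u | v)
```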
The basic system selects sentences using a feature fusion method to identify sentences that are highly query-relevant and have high information density; that is, the more relevant a sentence is to the query and the more important it is in the document set, the more likely it is to be included in the final summary.
First, we score every sentence's representative feature by obtaining its importance in the document set. The query-independent score of sentence S can be obtained as:

QI(S) = (Σ_{i=1..N} sim(S, S_i)) / N    (5)
where Novelty(S) is the novelty factor, determined by the degree to which sentence S is related to the old information; it can be obtained as:

Novelty(S) = 1 − Maxsim(S, old_content)    (9)

where "old_content" is document set A and "Maxsim" computes the similarity between S and each sentence in old_content and returns the highest value [5]. In Ref. [5] the cosine similarity is adopted; unlike them, we adopt the semantic similarity as described in Equ. 5.
In Equ. 9, although the query-focused part itself is unchanged, its influence grows relative to the novelty factor when sentence S contains old information, i.e., when Novelty(S) < 1.0; this prevents the summarization system from removing sentences closely related to the query.
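Equations (5) and (9) can be sketched directly. Here `sim` stands for the sentence similarity of the paper; the token-overlap version used to exercise the code is only an illustrative stand-in:

```python
def query_independent_score(s, sentences, sim):
    """Equation (5): average similarity of sentence s to the N sentences
    of the document set."""
    return sum(sim(s, si) for si in sentences) / len(sentences)

def novelty(s, old_content, sim):
    """Equation (9): 1 minus the highest similarity between s and any
    sentence of the old document set A."""
    return 1.0 - max(sim(s, o) for o in old_content)
```

A sentence identical to one in document set A gets novelty 0 and contributes nothing, while a sentence with no overlap keeps its full score.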
4 Experimental Results
In participating in TAC 2009, we submitted two runs, with IDs 53 and 14. To compare the effectiveness of different methods, ID 53 used the proposed discriminative
376 J. Chen and T. He
novelty detection method (DN henceforth), while ID 14 (BN henceforth) used the
same summarization system as ID 53 with the only difference being that its novelty
detection method is defined as the following:
Score(S) = [ σ · QF(S) / Σ_{i=1..N} QF(S_i) + (1 − σ) · QI(S) / Σ_{i=1..N} QI(S_i) ] · Novelty(S)    (11)
Like most mainstream novelty detection methods [1-5], BN treats the old information in an additional, independent stage. Table 1 shows the results of DN and BN in the manual evaluation of the TAC 2009 Update Summarization Task (set B). In Table 1 we can see that DN performs far better than BN when evaluated by the modified pyramid score, average numSCUs, macroaverage modified score with 3 models, and average overall responsiveness.
Table 1. Performance of DN and BN in TAC 2009 Update Summarization Task (set B) Manual Evaluation

      Average modified   Average   Macroaverage modified   Average overall
      (pyramid) score    numSCUs   score with 3 models     responsiveness
BN    0.14               2.18      0.14                    3.32
DN    0.24               3.50      0.23                    4.07
In Figure 2 we can also see that DN performed far better than BN when evaluated by automatic evaluation metrics such as ROUGE-2 [8]. More importantly, since BN falls outside the 95% confidence interval of DN when evaluated both by ROUGE-2 and by BE, the differences between BN and DN are significant, which implies the effectiveness of our proposed method.
5 Conclusions
This paper presented a new method for detecting novel information in query-focused update summarization. Unlike most current research, we treat the old information differentially according to the degree of relevance between the historical information and the query. Experimental results on TAC 2009 show the effectiveness of our proposed method. Although the proposed approach is simple, we hope that this novel treatment can inspire new methodologies in progressive summarization. In future work, we plan to further validate the effectiveness of our method on other large-scale benchmark corpora.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (No. 60773167), the Major Research Plan of the National Natural Science Foundation of China (No. 90920005), the 973 National Basic Research Program (No. 2007CB310804), the Program of Introducing Talents of Discipline to Universities (No. B07042), the Natural Science Foundation of Hubei Province (No. 2009CDB145), and the Chenguang Program of Wuhan Municipality (No. 201050231067).
References
1. Harman, D.: Overview of the TREC 2002 Novelty Track. In: The 11th Text Retrieval Con-
ference (TREC 2002). NIST Special Publication 500-251, Gaithersburg (2002)
2. Abdul-Jaleel, N., Allan, J., Croft, W.B., Diaz, F., Larkey, L., Li, X.Y.: Umass at TREC
2004, Novelty and Hard. In: The Thirteenth Text Retrieval Conference (TREC 2004).
NIST Special Publication 500-261, Gaithersburg (2004)
3. Schiffman, B., McKeown, K.R.: Columbia University in the Novelty Track at TREC 2004.
In: The Thirteenth Text Retrieval Conference (TREC 2004). NIST Special Publication
500-261, Gaithersburg (2004)
4. Eichmann, D., Zhang, Y., Bradshaw, S., Qiu, X.Y., Zhou, L., Srinivasan, P., Kumar, A.,
Wong, H.: Novelty, Question Answering and Genomics: The University of Iowa response.
In: The Thirteenth Text Retrieval Conference (TREC 2004). NIST Special Publication
500-261, Gaithersburg (2004)
5. Li, S.J., Wang, W., Zhang, Y.W.: TAC 2009 Update Summarization with Unsupervised
Methods. In: Text Analysis Conference (TAC 2009), Gaithersburg (2009)
6. Miller, G.A.: WordNet: A Lexical Database for English. Communications of the
ACM 38(11), 39–41 (1995)
7. Fellbaum, C.: WordNet: An Electronic Lexical Database. MIT Press, Cambridge (1998)
8. Lin, C.Y., Hovy, E.: Automatic Evaluation of Summaries Using N-gram Co-occurrence
Statistics. In: HLT-NAACL, Edmonton, Canada, pp. 71–78 (2003)
Visualization of Field Distribution of the Circular Area
Based on the Green Function Method
Abstract. The Green Function method is one of the basic methods for studying
the theory of the electromagnetic field. Starting from the Green formula, this
paper establishes the integral expression of a harmonic function on the plane
and thereby introduces the Green Function on the plane, finds the Green Function
formula of the circular area (the mathematical model) through the image method,
obtains the potential function at any given point of the circular area, then
programs the Green Function using Matlab, and finally achieves visualization of
the Green Function of the electric field of a charge inside and outside the
circular area.
1 Introduction
The introduction of the Green Function to solve problems of the electromagnetic field
is of great importance. Since the field equations are linear, any field source
distribution can be decomposed into a collection of point sources, and the field
generated by an arbitrary source distribution under given boundary conditions equals
the superposition of the fields generated by these point sources under the same
boundary conditions. Therefore, once the field of a point source under the given
conditions (the Green Function) is obtained, it can be used to find the field of an
arbitrary distribution of field sources under the same boundary conditions, and
visualization of the field can be achieved through computer simulation [1] [2] [3].
In order to establish the integral expression of a plane harmonic function, the plane
Green's formula is introduced as follows [4]:

$$\iint_D \left( v\nabla^2 u - u\nabla^2 v \right) d\sigma = \oint_\Gamma \left( v\frac{\partial u}{\partial n} - u\frac{\partial v}{\partial n} \right) ds \qquad (1)$$
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 378–384, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Visualization of Field Distribution of the Circular Area 379
In $D - D_\varepsilon$, $\nabla^2 \ln\frac{1}{r} = 0$; on the perimeter $\Gamma_\varepsilon$ the following expressions hold [5]:

$$\frac{\partial}{\partial n}\left(\ln\frac{1}{r}\right) = \frac{1}{r} = \frac{1}{\varepsilon} \qquad (4)$$
$$\int_{\Gamma_\varepsilon} \ln\frac{1}{r}\,\frac{\partial u}{\partial n}\,ds = 2\pi\varepsilon \ln\frac{1}{\varepsilon}\,\overline{\left(\frac{\partial u}{\partial n}\right)} \qquad (5)$$

$$\int_{\Gamma_\varepsilon} u\,\frac{\partial}{\partial n}\left(\ln\frac{1}{r}\right) ds = 2\pi\varepsilon\,\bar{u}\,\frac{1}{\varepsilon} \qquad (6)$$
Formulas (5) and (6) are obtained through the mean value theorem, where n is the
outward normal and the barred quantities denote values at some point of
$\Gamma_\varepsilon$. Substituting (5) and (6) into (3) and letting
$\varepsilon \to 0$, we have
$2\pi\varepsilon \ln\frac{1}{\varepsilon}\,\overline{\left(\frac{\partial u}{\partial n}\right)} \to 0$,
$D - D_\varepsilon \to D$, and $\bar{u} \to u(M_0)$. So we obtain the following
expression through the transformation of (3):
$$u(M_0) = \frac{1}{2\pi}\int_\Gamma \left( \ln\frac{1}{r}\,\frac{\partial u}{\partial n} - u\,\frac{\partial \ln\frac{1}{r}}{\partial n} \right) ds - \frac{1}{2\pi}\iint_D \ln\frac{1}{r}\,\nabla^2 u\, d\sigma \qquad (7)$$
When u is harmonic in D, the area integral vanishes and (7) reduces to:

$$u(M_0) = \frac{1}{2\pi}\int_\Gamma \left( \ln\frac{1}{r}\,\frac{\partial u}{\partial n} - u\,\frac{\partial \ln\frac{1}{r}}{\partial n} \right) ds \qquad (8)$$
380 G. Wu and X. Fan
Combining (8) with (9) and imposing the condition
$v|_\Gamma = \frac{1}{2\pi}\ln\frac{1}{r}\Big|_\Gamma$, we obtain:

$$u(M_0) = -\int_\Gamma u\,\frac{\partial}{\partial n}\left( \frac{1}{2\pi}\ln\frac{1}{r} - v \right) ds \qquad (10)$$
So the plane Green's function is defined as [6]:

$$G(M, M_0) = \frac{1}{2\pi}\ln\frac{1}{r} - v \qquad (11)$$
From the above it is known that, with expression (11), the potential of $M_0$ can be
obtained through (10), as follows:

$$u(M_0) = -\int_\Gamma u(M)\,\frac{\partial G}{\partial n}\,ds \qquad (12)$$
2.3 Green Function of the Circular Area

In order to obtain the Green Function of the circular domain, assume there is an
infinitely long grounded conducting cylinder of radius a in space, with an infinitely
long wire of line charge density $\varepsilon_0$ inside the cylinder and parallel to
it. Because of the symmetry of the cylinder, the electric potential in cylindrical
coordinates $(\rho, \varphi, z)$ does not depend on the coordinate z. So the potential
problem at any point in the cylinder can be transformed into a two-dimensional
circular-domain problem. The potential at any point of the cross section of the
circular field is composed of two parts: ① the potential at point $M_0$ inside the
circle produced by the wire of line charge density $\varepsilon_0$; ② the potential
produced by the image charge of charge density $\lambda$ at $M_1$ outside the circle.
Inside the circle, the potential of an arbitrary observation point M is the
superposition of the potential of the original charge inside the cylinder and the
potential of the image charge outside the cylinder [7]. Using Gauss's Law [8], we
obtain the Green's function expression [4]:
$$G = \frac{1}{2\pi}\ln\frac{1}{\overline{M_0 M}} + \frac{\lambda}{2\pi}\ln\frac{1}{\overline{M_1 M}} + C \qquad (13)$$
Suppose $OM_0 = \rho_0$, $OM_1 = \rho_1$, $OP = R$, $OM = \rho$,
$\angle MOP = \theta$. When the observation point M moves to the point P on the
circle, the given condition $G|_\Gamma = 0$ on the border gives [9]:

$$-\frac{1}{4\pi}\ln\left[ R^2 + \rho_0^2 - 2R\rho_0\cos(\gamma - \theta) \right] - \frac{\lambda}{4\pi}\ln\left[ R^2 + \rho_1^2 - 2R\rho_1\cos(\gamma - \theta) \right] + C = 0 \qquad (14)$$
Differentiating (14) with respect to $\theta$ and rearranging, we obtain:

$$\lambda = -1, \qquad \rho_1 = \frac{R^2}{\rho_0} \qquad (15)$$
So the Green's function of the first boundary value problem in the circular area is
obtained:

$$G = \frac{1}{2\pi}\ln\frac{1}{R\sqrt{\rho^2 - 2\rho\rho_0\cos\gamma + \rho_0^2}} + \frac{1}{2\pi}\ln\left[ \rho_0\sqrt{\rho^2 - 2\rho\frac{R^2}{\rho_0}\cos\gamma + \left(\frac{R^2}{\rho_0}\right)^2} \right] \qquad (16)$$
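As a numerical sanity check on (16), G should vanish on the boundary ρ = R. A short NumPy sketch (the values of R and ρ₀ are illustrative, not taken from the paper):

```python
import numpy as np

def green_circle(rho, gamma, rho0, R):
    """Green function of the circular area, Eq. (16): source at distance rho0
    from the center, observation point at polar coordinates (rho, gamma)."""
    r2  = rho**2 - 2*rho*rho0*np.cos(gamma) + rho0**2                 # |M0 M|^2
    r1s = rho**2 - 2*rho*(R**2/rho0)*np.cos(gamma) + (R**2/rho0)**2   # |M1 M|^2
    return (1/(2*np.pi))*np.log(1/(R*np.sqrt(r2))) \
         + (1/(2*np.pi))*np.log(rho0*np.sqrt(r1s))

R, rho0 = 1.0, 0.4
gammas = np.linspace(0, 2*np.pi, 100)
boundary = green_circle(R, gammas, rho0, R)   # G on rho = R should be ~0
```

On the boundary the two logarithms cancel exactly, which confirms the boundary condition $G|_\Gamma = 0$ used to derive (15).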
382 G. Wu and X. Fan
In Formula (16), the first term is the potential function generated by the original
charge inside the circle, and the second is the potential function generated by the
image charge outside the circle. For the first boundary value problem

$$\begin{cases} \nabla^2 u = 0 \\ u|_\Gamma = f(R, \theta) \end{cases} \qquad (17)$$

in polar coordinates, $ds = R\,d\theta$ in Formula (12), so the expression of the
potential function is:

$$u(M_0) = \frac{1}{2\pi}\int_0^{2\pi} f(R,\theta)\,\frac{R^2 - \rho_0^2}{R^2 + \rho_0^2 - 2R\rho_0\cos(\gamma - \theta)}\,d\theta \qquad (18)$$

From (18), under the given boundary conditions, the potential function of the circular
area can be obtained.
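Formula (18) is the Poisson integral for the disc, and it can be checked numerically against a known harmonic function: the boundary data f(R, θ) = R cos θ should reproduce u = ρ₀ cos γ at any interior point. A quick sketch (parameter values are illustrative):

```python
import numpy as np

def poisson_u(f, rho0, gamma, R, n=2000):
    """Evaluate Eq. (18) by the trapezoidal rule on the periodic interval [0, 2*pi):
    (1/2*pi) * integral equals the mean over an equispaced theta grid."""
    theta = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    kernel = (R**2 - rho0**2) / (R**2 + rho0**2 - 2*R*rho0*np.cos(gamma - theta))
    return np.mean(f(R, theta) * kernel)

R, rho0, gamma = 1.0, 0.5, 0.7
u = poisson_u(lambda R, th: R*np.cos(th), rho0, gamma, R)
# u should be close to rho0*cos(gamma)
```

The Poisson kernel also averages to 1, so constant boundary data returns that constant, as expected for a harmonic function.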
Expression (16) is the Green Function expression of the circular area. To draw the
visual graphics outside the circular area, $\rho > R$, $\rho_1 > R$ is required in
expression (16). Here we take $R = 1$, $\rho_1 = 2$, and take $\vec{OA}$ as the
direction of the polar axis; $\gamma$ is the included angle between OM and $OM_1$.
Using MATLAB to carry out the program design of expression (16) [12], we use
contour() statements to draw the contours, use the formula
$\vec{E} = -\nabla\varphi$ to obtain the electric field intensity vector [10], and use
the gradient() statement to draw the electric lines of force; the result is shown in
Figure 2 [11].
As the expression within the circular area is still (16), using the same programming
method as above we can draw the figure of the Green Function within the circular
region, but $\rho < R$, $\rho_1 < R$ is required in expression (16); here $R = 2$,
$\rho_1 = 1$. A revision of the program design used outside the circular area yields
the visual graphics shown in Figure 3 [11].
5 Conclusion

As can be seen from the above discussion, the Green Function is the scalar potential
function of a unit point source (point charge) under certain conditions, and in the
Green Function Method the key to the potential function is to determine the Green
Function. When the Green Function is given, the potential function of a source of
arbitrary distribution may be obtained through integration. In the interactive Matlab
working environment [12], the obtained mathematical expression of the scalar potential
function can be programmed to get accurate data of the potential function and
graphical visualization in the circular region, so the abstract concept of the
electric field can be transformed into visual data and graphics [13]. The above
research ideas can be widely applied in research on complex electromagnetic fields.
Acknowledgment
Subsidized by the Research Project of Education and Teaching Reform of Panzhihua
University (JJ0825&JJ0805).
References
1. Wu, G., Fan, X.: Visualization of Potential Function of the Spherical Region Based on the
Green Function Method. In: The International Conference on E-Business and E-
Government (iCEE 2010), pp. 2595–2598. IEEE Press, Los Alamitos (2010)
2. Wu, G.: Discussion of the Electromagnetic Field Based on the Green Function and Dyadic
Green Function. Journal of Panzhihua University 6, 38–44 (2008)
3. Feng, Z.: Dyadic Green Function of Electromagnetic Field and Application. Nanjing Insti-
tute of Electronic Technology, Nanjing (1983)
4. Wang, J., Zhu, M., Lu, H.: Electromagnetic Field and Electromagnetic Wave. Xi’an Elec-
tronic Science and Technology University Press, Xi’an (2003)
5. Wang, Y.: Mathematical Physics Equations and Special Functions. Publishing House of
Higher Education, Beijing (2005)
6. Liang, K.: Methods of Mathematical Physics. Publishing House of Higher Education,
Beijing (2003)
7. Yao, D.: Study Guide of Mathematical Physics Methods. Science Press, Beijing (2004)
8. Xie, X., Yuan, X., Zhang, T.: University Physics Tutorial (2000)
9. Yang, H.: Mathematical Physics Method and Computer Simulation. Electronic Industry
Press, Beijing (2005)
10. Wang, J., Zhu, M., Lu, H.: Study Guide of Electromagnetic Field and Electromagnetic
Waves. Xi’an Electronic Science and Technology University Press, Xi’an (2002)
11. Peng, F.: MATLAB Solution to Equations of Mathematical Physics and Visualization.
Tsinghua University Press, Beijing (2004)
12. Li, N., Qing, W., Cao, H.: Simple Tutorial of MATLAB7.0. Tsinghua University Press,
Beijing (2006)
13. Li, L., Wang, J.: Electromagnetic Field Teaching Using Graph Based on Matlab. Journal
of Xiao Gan University 5, 120–121 (2006)
Efficient Genetic Algorithm for Flexible Job-Shop
Scheduling Problem Using Minimise Makespan
Abstract. The aim of this paper is to minimise the makespan. Flexible job-shop
scheduling is very common in practice, and parallel machines are used in the job-shop
environment as a form of flexibility. This flexibility can be used for increasing the
throughput rate, avoiding production stops, removing bottleneck problems, and finally
achieving competitive advantages in economic environments. In contrast to the classic
job-shop, where there is one machine at each stage, in this problem the production
system consists of multiple stages, each with one or several parallel machines of
different speeds. The flexible job-shop scheduling problem with parallel machines
consists of two sub-problems: an assigning sub-problem and a sequencing sub-problem.
1 Introduction
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 385–392, 2011.
© Springer-Verlag Berlin Heidelberg 2011
386 H.G. Farashahi et al.
In a classic job-shop scheduling problem, each job has a fixed and distinct routing
that is not necessarily the same for all jobs. There is only one routing for each job,
which reflects the lack of flexibility in this environment. The job-shop scheduling
problem with parallel machines is a kind of flexible job-shop scheduling problem in
which the number of machines is more than one in at least one stage, with different
speed rates.
A review of the literature shows that one of the shortcomings in classic job-shop
scheduling research is that there is only one machine for processing each operation;
this means there is only one feasible process routing for each job, so there is no
flexibility in this environment [4]. The problem investigated in this research is a
critical extension of the traditional job-shop scheduling problem, in which each
operation can be processed on a set of parallel machines with different speeds
(uniform machines). Many researchers have studied the flexible job-shop, but most
studies are limited to the situation where the speed of each machine is the same, or
the processing time over all machines is the same. The job-shop scheduling problem
with uniform machines is an important problem that is often met in current scheduling
practice in manufacturing systems, because a method or algorithm able to solve the
uniform-machine case can also be applied to identical parallel machines.
The main objective in this research is to minimise the maximum completion time
(makespan). Pinedo shows that "a minimum makespan usually implies a high utilization
of the machine(s)" [5]. The utilization of bottleneck and near-bottleneck equipment is
directly associated with the system's throughput rate. Hence, reduction of the
makespan should also yield a higher throughput rate [6].

The scope of this research is to optimize the job sequences and the assignment of jobs
to machines in a job-shop environment in which a set of uniform machines exists for
processing each operation.
The problem with m stages and $l_v$ uniform machines at each stage v, v = 1, 2, …, m,
is denoted as follows: FJQm ║ Cmax.

The job-shop scheduling problem with uniform machines includes two sub-problems: a
routing sub-problem and a sequencing sub-problem. In the routing sub-problem, each
operation is allocated to one machine capable of processing it, and in the sequencing
sub-problem, the sequence of operations is determined. Two types of approaches have
been applied to solve these two sub-problems: hierarchical approaches and integrated
approaches. Since the problem is NP-hard, five heuristic approaches based on priority
dispatching rules and a genetic algorithm are proposed to give near-optimal solutions
in an acceptable amount of time.
3 Heuristic Methods
3.1 ECT Procedure
This procedure is based on the earliest completion time (ECT) rule, which is utilized
for solving this scheduling problem. This heuristic solves the problem in an
integrated way; that is, the information of jobs and machines is used simultaneously.
The steps for this method are presented as follows.
Step 0: Create set M with the first operation of all jobs as follows:
M = {Oj,1│1≤ j ≤ n } (1)
Step 1: Calculate cj,i for each operation of set M as follows:
cj,i = min{max(availv,r ,Cj,i-1 )+(Pj,i / Sv,r ) , r = 1,2,…,lv} (2)
Step 2: Choose the first operation (O*j,i) with the minimum completion time (cj,i)
from set M, and schedule it on the machine that completes it with the earliest
completion time.
O*j,i = {Oj,i ∈ M │ cj,i = min { cj,i ∈ M}} (3)
Step 3: Remove scheduled operation O*j,i from set M.
M = M \ {O*j,i} (4)
Step 4: If there is operation O*j,i+1 add it to set M.
M = M ∪ {O*j,i+1} (5)
Step 5: Return to step 1 until a complete schedule is generated.
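The ECT steps above can be sketched in Python under simplifying assumptions (each job passes through stages 1..m in order; `p`, `speed`, and `avail` mirror Pj,i, Sv,r, and availv,r; all code names are illustrative, not from the paper):

```python
def ect_schedule(p, speed):
    """p[j][i]: processing requirement of operation i of job j (performed at
    stage i); speed[v][r]: speed of machine r at stage v. Returns the makespan."""
    n, m = len(p), len(p[0])
    avail = [[0.0] * len(speed[v]) for v in range(m)]   # machine release times
    job_done = [0.0] * n                                # completion of job j's last op
    M = {(j, 0) for j in range(n)}                      # Step 0: first op of each job
    makespan = 0.0
    while M:
        # Steps 1-2: earliest completion time over all ready operations and machines
        best = None
        for (j, i) in sorted(M):
            for r, s in enumerate(speed[i]):
                c = max(avail[i][r], job_done[j]) + p[j][i] / s
                if best is None or c < best[0]:
                    best = (c, j, i, r)
        c, j, i, r = best
        avail[i][r] = c
        job_done[j] = c
        makespan = max(makespan, c)
        M.remove((j, i))            # Step 3: remove the scheduled operation
        if i + 1 < m:
            M.add((j, i + 1))       # Step 4: add its successor, if any
    return makespan                 # Step 5: loop until the schedule is complete
```

For example, two jobs over two stages with machine speeds `[[1, 2], [1]]` and requirements `[[4, 2], [3, 3]]` yield a makespan of 6.5 under this rule.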
The ECT rule is used for solving the assigning sub-problem. The SPT-ECT procedure
follows these steps:

Step 0: The virtual weight of each operation is calculated, based on the average speed
of all machines at its stage:

$$w_{j,i} = \frac{1}{l_v}\sum_{r=1}^{l_v} S_{v,r} \qquad (6)$$
Step 1: Compute the weighted processing time for each operation as follows:
P′j,i = Pj,i ⁄ wj,i (7)
Step 2: Create set H with the first operation of all jobs as follows:
H = {Oj,1│1≤ j ≤ n} (8)
Step 3: Create set M with operation(s) from set H which have the shortest weighted
processing time.
M = {Oj,i ∈ H │ P′j,i = min { P′j′,i′ │ Oj′,i′ ∈ H}} (9)
Step 4: Run the step 1 and step 2 from ECT procedure.
Step 5: Remove scheduled operation O*j,i from set H.
H = H \ {O*j,i } (10)
Step 6: If there is operation O*j,i+1 add it to set H.
H = H ∪ {O*j,i+1} (11)
Step 7: If H ≠ Ø, return to step 3; otherwise terminate.
This heuristic is a hierarchical approach based on the longest processing time (LPT)
rule and the ECT rule. The difference between this procedure and the previous one lies
only in step 3, which is proposed as follows:
Step 3: Create set M with operation(s) from set H which have the longest weighted
processing time.
M = {Oj,i ∈ H │ P′j,i = max { P′j′,i′ │ Oj′,i′ ∈ H }} (12)
This procedure is a hierarchical approach based on the most work remaining (MWKR) rule
and the ECT rule. Firstly, the MWKR rule is used for obtaining the sequence of
operations; for this purpose, a virtual weight is defined for each operation based on
the average speed of all machines at each stage. After that, the ECT rule is used for
solving the assigning sub-problem. The steps for this procedure are as follows:

Step 0: The virtual weight is calculated by the following equation for each operation:

$$w_{j,i} = \frac{1}{l_v}\sum_{r=1}^{l_v} S_{v,r} \qquad (13)$$

Step 1: Compute the weighted processing time for each operation as follows:

P′j,i = Pj,i ∕ wj,i (14)
Step 2: Create set H with the first operation of all jobs as follows:
H = {Oj,1│1 ≤ j ≤ n} (15)
Step 3: Calculate the weighted remaining processing time for each operation of set H as:
WR′j = P′j,i + P′j,i+1 + … + P′j,m (16)
Step 4: Create set M with operations from set H with most weighted remaining work.
M = {Oj,i ∈ H │ WR′j = max { WR′j′ │ Oj′,i′ ∈ H}} (17)
Step 5: Run the step 1 and step 2 from ECT procedure.
Step 6: Remove scheduled operation O*j,i from set H.
H = H \ {O*j,i } (18)
Step 7: If there is operation O*j,i+1 add it to set H.
H = H ∪ {O*j,i+1} (19)
Step 8: If H ≠ Ø return to step 3, otherwise terminate.
This procedure is based on the least work remaining (LWKR) rule and the ECT rule. This
method is similar to the MWKR-ECT procedure except for step 4, which is presented as
follows:

Step 4: Create set M with the operation(s) from set H with the least weighted
remaining work.
M = {Oj,i ∈ H │ WR′j = min {WR′j′ │ Oj′,i′∈ H}} (20)
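The four hierarchical variants differ only in which operation(s) are moved from H into M before the ECT assignment step, so the selection can be expressed as one parameterized rule (a sketch; all names are illustrative):

```python
def select_M(H, wpt, wr, rule):
    """H: set of ready operations (j, i); wpt[(j, i)]: weighted processing time
    P'_{j,i}; wr[j]: weighted remaining work WR'_j; rule names the variant."""
    keys = {
        "SPT-ECT":  lambda op: wpt[op],     # shortest weighted processing time
        "LPT-ECT":  lambda op: -wpt[op],    # longest weighted processing time
        "MWKR-ECT": lambda op: -wr[op[0]],  # most weighted remaining work
        "LWKR-ECT": lambda op: wr[op[0]],   # least weighted remaining work
    }
    key = keys[rule]
    best = min(key(op) for op in H)
    return {op for op in H if key(op) == best}  # ties keep all minimizers

H = {(0, 0), (1, 0)}
wpt = {(0, 0): 2.0, (1, 0): 3.0}
wr = {0: 5.0, 1: 4.0}
# select_M(H, wpt, wr, "SPT-ECT") picks job 0's operation; "LWKR-ECT" picks job 1's
```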
4 Experimental Results
To evaluate the performance of the proposed heuristic methods, each of the problem
instances consists of the following parameters: number of jobs, number of stages,
number of machines per stage, range of processing time for each operation and speed
of each machine. The levels of these parameters are shown in Table 1 and follow [7].
U[x,y] denotes a discrete uniform distribution between x and y.
Parameters                      Level
Number of jobs                  20 - 50 - 100 - 150 - 200 - 300
Number of stages                2 - 4 - 6 - 8 - 10
Number of machines per stage    U[1,5]
Processing time                 U[1,40]
Speed of machines               U[1,4]
To assess the effectiveness and efficiency of the five proposed heuristics, every
heuristic was run on the same 300 randomly generated problems. The experiments were
carried out to minimise the makespan over all jobs being scheduled.
To evaluate the performance of the proposed heuristic methods, the factor "loss" is
used as a figure of merit. This factor is described below, where makespan denotes the
maximum completion time obtained from each heuristic method separately for each
problem instance, while the lower bound is fixed for each problem instance and is
independent of the heuristic method used to solve it [8].
$$\text{loss} = \frac{\text{makespan} - \text{lower bound}}{\text{lower bound}} \qquad (21)$$
Moreover, the best and average loss factor and the best and average makespan obtained
from 10 replications of each test scenario are used for comparison of the proposed
heuristics.
Table 2 shows the computational results of all 300 problem instances by running
ECT, SPT-ECT, LPT-ECT, MWKR-ECT and LWKR-ECT rules. The best and the
average of makespan and the best, the average and standard deviation (s.d.) of loss
were reported by taking the mean over 300 problem instances in Table 2.
Summary statistics over all the randomly generated data are compared in Table 2.
These values indicate that the MWKR-ECT rule achieved the minimum makespan in up to
65% of all problem instances in comparison with the other heuristics.

Based on these data, the MWKR-ECT and ECT rules emerge as the best for most of the
problem instances, while LWKR-ECT, LPT-ECT and SPT-ECT perform worst in most of these
cases, respectively. The standard deviation of loss indicates that the ECT rule is in
general better than all the others.
each chromosome. So far, one iteration has been completed through one generation of
the algorithm. The algorithm converges toward an optimal or sub-optimal solution after
several generations and ends upon fulfilling its termination criterion. The
termination criterion can be set based on the maximum computation time, completion of
a certain number of iterations set in advance, no changes over several successive
iterations of the algorithm, or other specific conditions [9].
The major factors in designing a GA are: chromosome representation, initializing
the population, an evaluation measure, crossover, mutation, and selection strategy.
Also, the genetic parameters must be specified before running the GA, such as popu-
lation size (pop_size), number of generation (max_gen), probability of crossover (Pc)
and probability of mutation (Pm) [10].
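The design factors above can be illustrated with a minimal sketch of a two-part chromosome and its operators; the paper does not specify its actual encoding or operators, so the representation here (operation permutation plus machine assignment, order crossover, swap mutation) and all names are illustrative assumptions:

```python
import random

def init_population(pop_size, n_ops, machines_per_op):
    """Two-part chromosome: an operation sequence (a permutation of operation
    indices) and a machine assignment (one machine index per operation)."""
    pop = []
    for _ in range(pop_size):
        seq = random.sample(range(n_ops), n_ops)
        assign = [random.randrange(machines_per_op[o]) for o in range(n_ops)]
        pop.append((seq, assign))
    return pop

def crossover(p1, p2):
    """Order crossover on the sequence part, uniform crossover on the assignment."""
    seq1, a1 = p1
    seq2, a2 = p2
    n = len(seq1)
    i, j = sorted(random.sample(range(n), 2))
    middle = seq1[i:j]                               # slice kept from parent 1
    rest = [g for g in seq2 if g not in middle]      # fill from parent 2's order
    child_seq = rest[:i] + middle + rest[i:]
    child_assign = [random.choice(pair) for pair in zip(a1, a2)]
    return child_seq, child_assign

def mutate(ch, pm, machines_per_op):
    """Swap two sequence genes / reassign one machine, each with probability pm."""
    seq, assign = list(ch[0]), list(ch[1])
    if random.random() < pm:
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    if random.random() < pm:
        k = random.randrange(len(assign))
        assign[k] = random.randrange(machines_per_op[k])
    return seq, assign
```

Both operators preserve the permutation property of the sequence part, so every offspring decodes to a feasible operation order.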
The performance of the proposed genetic algorithm with a reinforced initial population
(GA2) (seeded with the solutions of ECT and MWKR-ECT) is compared with that of the
same algorithm using a fully randomized initial population (GA0).
Parameters              Levels
Machine distribution    Constant    Variable            Constant    Variable
Number of machines      2 - 6       U[1,4] - U[1,6]     2 - 10      U[1,4] - U[1,10]
Number of jobs          6                               30 - 100
Number of stages        2 - 4 - 8
Processing time         U[50,70] - U[20,100]
Speed of machines       U[1,4]
By combining these parameter levels, there are (4×3×3×2×1) = 72 different types of
test scenario. For each test scenario, 10 random problem instances are generated with
the same parameters; thus, 720 problem instances are generated to compare the
performance of the proposed GAs.
Table 4 shows the results obtained from algorithms GA0 and GA2 on the 720 problem
instances. This table includes the average value of the loss parameter and the average
makespan, as well as the number of times each algorithm managed to find the optimal
solution. As can be observed, GA2 is superior in all cases, which shows the better
performance of the proposed genetic algorithm with the reinforced initial population
compared to the genetic algorithm with a fully randomized initial population (GA0).
7 Conclusion

Three hundred problem instances were randomly generated or taken from the literature.
Results over all instances indicated that MWKR-ECT and ECT achieved the minimum
makespan in up to 65% and 34% of all instances, respectively, in comparison with the
other proposed heuristic procedures. Then, a genetic algorithm was presented for
solving the research problem in order to find the best solution. It has been shown
that the proposed genetic algorithm with a reinforced initial population, seeded with
the solutions obtained from MWKR-ECT and ECT, has better efficiency than the proposed
genetic algorithm with a fully random initial population. In future work, we propose
to develop heuristic algorithms based on solving the assignment sub-problem first and
then to compare those algorithms with algorithms that solve the sequencing sub-problem
prior to the assignment sub-problem.
References
1. Johnson, S.M.: Optimal two- and three-stage production schedules with setup times in-
cluded. Naval Research Logistics Quarterly, 61–68 (1954)
2. Bellman, R.: Mathematical aspects of scheduling theory. Journal of Society of Industrial
and Applied Mathematics 4, 168–205 (1956)
3. Sule, D.R.: Industrial Scheduling. PWS Publishing Company, Park-Plaza (1997)
4. Kim, Y.K., Park, K., Ko, J.: A symbiotic evolutionary algorithm for the integration of
process planning and job shop scheduling. Computers & Operations Research 30, 1151–
1171 (2003)
5. Pinedo, M.L.: Scheduling: Theory, Algorithms, and Systems, 3rd edn. Springer Science +
Business Media, LLC, New York (2008)
6. Cochran, J.K., Horng, S., Fowler, J.W.: A multi-population genetic algorithm to solve
multi-objective scheduling problems for parallel machines. Computers & Operations Re-
search 30, 1087–1102 (2003)
7. Nowicki, E., Smutnicki, C.: The flow shop with parallel machines: A tabu search ap-
proach. European Journal of Operational Research 106, 226–253 (1998)
8. Kurz, M.E., Askin, R.G.: Scheduling flexible flow lines with sequence-dependent setup
times. European Journal of Operational Research 159, 66–82 (2004)
9. Gen, M., Cheng, R.: Network Models and Optimization: Multiobjective Genetic Algorithm
Approach. Springer-Verlag London Limited, Heidelberg (2008)
10. Lee, Y.H., Jeong, C.S., Moon, C.: Advanced planning and scheduling with outsourcing in
manufacturing supply chain. Computers & Industrial Engineering 43, 351–374 (2002)
Core Image Coding Based on WP-EBCOT
Abstract. This paper proposes a new core image coding algorithm based on
Wavelet Packet Embedded Block Coding with Optimized Truncation (WP-EBCOT),
addressing the rich textures and complex edges characteristic of core images.
The entropy-based algorithm for best basis selection is used to decompose the
core image, and the wavelet packet subband structure is then tested with EBCOT
at various code block sizes; as a result, we find that the optimal code block
size is 64×64. Results show that the proposed algorithm outperforms baseline
JPEG2000 in PSNR and provides better visual quality for core images.
1 Introduction
Core samples are among the most fundamental geological data in the research of
exploration and development of oil and gas fields. Storing large numbers of core
samples as digital images by scanning is very conducive to preserving and analyzing
the core data. However, because the amount of core image data is huge, the images
must be compressed before being stored. At present, core images are mostly compressed
by the DCT-based JPEG algorithm, which causes blurring and blocking effects in texture
or edge regions at low bit rates. The wavelet transform can not only effectively
overcome the limitations of the Fourier transform in dealing with non-stationary
signals and the blocking artifacts of DCT coding, but also provide a multi-resolution
image representation, locating the information at any resolution level to realize
embedded coding schemes that give priority to the coding and transmission of the
important information in images.
In 2000, Taubman proposed the EBCOT compression algorithm [1], which was finally
adopted by the JPEG2000 image coding standard [2] as its core scheme. JPEG2000
operates on independent, non-overlapping blocks of quantized wavelet coefficients,
which are coded in several bit layers to create an embedded, scalable bit stream
(Tier-1 coding). Instead of zerotrees, the JPEG2000 scheme depends on a per-block
quad-tree structure, since the strictly independent block coding strategy precludes
structures across subbands or even code blocks. These independent code blocks are
passed down the coding pipeline and generate separate bit streams. Transmitting each
bit layer corresponds to a certain distortion level. The partitioning of the available
bit budget between the code blocks and layers (truncation points) is determined using a
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 393–398, 2011.
© Springer-Verlag Berlin Heidelberg 2011
394 Y. Nie, Y. Zhang, and J. Li
(a) (b)
Fig. 1. (a) Origin image. (b) 3 levels completely WP Decomposed image.
A complete n-level WP decomposition of a two-dimensional image produces $2^{2n}$
different subbands; for example, the 3-level completely decomposed WP of Peppers is
illustrated in Figure 1.
In order to achieve compression gain while keeping the computational load reasonably
low, the WP decomposition needs two things: a defined cost function for basis
comparison and a fast, comprehensive search scheme. The choice of cost function for
the best WP basis is essential to the coding performance and computational efficiency.
Ramchandran and Vetterli proposed a practical algorithm based on a rate-distortion
cost function, which considers the number of bits needed to approximate an image with
a given distortion [9]. Because this best basis selection involves embedded nonlinear
optimization problems, the overall complexity of the approach is extremely high.
Coifman and Wickerhauser proposed an entropy-based algorithm for best basis selection
[10]. Under this cost function, the optimal basis is chosen by pruning the complete
decomposition tree only once, but the cost function and the corresponding wavelet
packet decomposition do not take into account the subsequent encoding method, which
to a large extent determines the performance of a coding framework.
Although the entropy-based method leads to a sub-optimal basis, taking into account
the computational complexity of practical application, in this work we use the Shannon
entropy as the cost function for WP basis comparison instead of coding rate and
distortion.
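The entropy-based selection [10] prunes the full decomposition tree bottom-up: a node keeps its own basis if its Shannon entropy cost is lower than the summed cost of its children. A one-dimensional sketch of the idea (Haar filters for brevity; the paper's codec uses the 9/7 transform, and all names here are illustrative):

```python
import math

def shannon_cost(coeffs):
    """Shannon entropy cost: -sum p*log(p) over normalized coefficient energies."""
    energy = sum(c * c for c in coeffs)
    if energy == 0:
        return 0.0
    cost = 0.0
    for c in coeffs:
        p = c * c / energy
        if p > 0:
            cost -= p * math.log(p)
    return cost

def haar_step(x):
    """One Haar analysis step: (approximation, detail) half-band outputs."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def best_basis_cost(x, levels):
    """Cost of the best wavelet-packet basis for x, by single-pass pruning."""
    if levels == 0 or len(x) < 2:
        return shannon_cost(x)
    a, d = haar_step(x)
    children = best_basis_cost(a, levels - 1) + best_basis_cost(d, levels - 1)
    return min(shannon_cost(x), children)   # keep the cheaper of node vs. split
```

A constant signal, for instance, concentrates all energy into a single deep approximation coefficient, so the pruned tree reaches cost 0 while the unsplit representation does not.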
3 WP-EBCOT
Unlike the dyadic subband structure, the high-pass subbands are transformed further in
WP, so the energy of a high-pass subband is accumulated toward its upper left corner,
and the probability of significant coefficients occurring in the upper left corner of
the subband is extremely high. Moreover, the code block size is an important factor
affecting coding performance. In order to analyze the impact of the code block size
within wavelet packet decomposition subbands on coding efficiency, many experiments
were carried out to compare the coding efficiency of EBCOT with different code block
partitionings. Here we present only two test core image samples, shown in Figure 2,
with the results presented in Figure 3.
(a) (b)
Fig. 2. (a) Core image sample 1. (b) Core image sample 2.
(a) (b)
Fig. 3. (a) Core image sample 1: comparison of four different size of block for EBCOT and
WP-EBCOT coding PSNR. (b) Core image sample 2: comparison of four different size of
block for EBCOT and WP-EBCOT coding PSNR.
(a) (b)
Fig. 4. Best basis geometry: (a) Sample 1. (b) Sample 2.
Our results indicate that as the code block size decreases, the coding performance of
EBCOT is reduced for the same core image sample; the PSNR declines significantly when
the code block size varies from 32×32 to 16×16, but only slightly from 64×64 to 32×32.
In order to exploit the characteristics of the wavelet packet coefficient distribution
while also taking into account the influence of code block size on PSNR, this work
uses a code block size of 64×64.
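To make this concrete, a subband is partitioned into independent code blocks (64×64 here), and each block's quantized magnitudes are coded bit-plane by bit-plane from the most significant bit downward. A minimal sketch of these two ideas (conceptual only; not the EBCOT pass structure or arithmetic coder):

```python
def code_blocks(height, width, bs=64):
    """Partition a height x width subband into bs x bs code blocks; blocks on
    the last row/column may be smaller. Returns (row, col, h, w) tuples."""
    return [(r, c, min(bs, height - r), min(bs, width - c))
            for r in range(0, height, bs)
            for c in range(0, width, bs)]

def bit_planes(block, n_bits):
    """Split non-negative quantized magnitudes into bit-planes, MSB first;
    transmitting the planes one by one progressively refines every coefficient."""
    return [[(coef >> b) & 1 for coef in block]
            for b in range(n_bits - 1, -1, -1)]

blocks = code_blocks(256, 192, 64)        # a 256x192 subband gives 4 x 3 blocks
planes = bit_planes([5, 0, 3, 6], 3)      # 5=101, 0=000, 3=011, 6=110
```

Truncating the plane list after any prefix yields a coarser but valid reconstruction, which is the sense in which each transmitted bit layer corresponds to a distortion level.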
The WP decomposition results of the two core images are shown in Figure 4. Practical
tests were carried out with a 5-level 9/7 biorthogonal wavelet transform, using the WP
decomposition in the high-pass subbands.

The rest of WP-EBCOT is the same as EBCOT. The whole WP-EBCOT algorithm is implemented
in JasPer 1.900.1 [11], the open source software of the JPEG2000 still image
compression standard, and the compile platform is based on VC++.NET 2005.
4 Experimental Results
The PSNR comparisons of WP-EBCOT and EBCOT compression are shown in Figure 5.
Experimental results show that, compared to EBCOT, the proposed algorithm achieves
higher PSNR and obtains better visual effects, especially at low bit rates for the
core image.
(a) (b)
Fig. 5. (a) Core image sample 1: comparison of PSNR values for EBCOT and WP-EBCOT.
(b) Core image sample 2: comparison of PSNR values for EBCOT and WP-EBCOT.
5 Conclusions
This paper proposed WP-EBCOT to improve the compression performance of core images.
The results show that the proposed algorithm provides higher PSNR and better visual
quality for core images than EBCOT. One way to improve the performance further is to
design an improved cost function for WP best basis selection that takes into account
the code block factor in the EBCOT algorithm and provides an optimal distortion value
for a given bit rate.
References
[1] Taubman, D.: High performance scalable image compression with EBCOT. IEEE Trans.
on Image Processing 9(7), 1158–1170 (2000)
[2] Taubman, D., Marcellin, M.W.: JPEG2000-Image Compression Fundamentals, Stan-
dards and Practice. Kluwer Academic Publishers, Dordrecht (2002)
[3] Reisecker, M., Uhl, A.: Wavelet-Packet Subband Structures In The Evolution of The
Jpeg2000 Standard. In: Proceedings of the 6th Nordic Signal Processing Symposium -
NORSIG 2004, pp. 97–100 (2004)
[4] Shapiro, J.M.: Embedded image coding using zerotrees of wavelet coefficients. IEEE
Trans. Signal Process. 41(10), 3445–3462 (1993)
[5] Said, A., Pearlman, W.A.: A new, fast and efficient image codec based on set partitioning
in hierarchical trees. IEEE Trans. Circ. Syst. Video Technol. 6(6), 243–250 (1996)
[6] Pearlman, W.A., Islam, A., Nagaraj, N., Said, A.: Efficient, low-complexity image cod-
ing with a set-partitioning embed-ded block coder. IEEE Trans. Circ. Syst. Video Tech-
nol. 14(11), 1219–1235 (2004)
[7] Sprljana, N., Grgicb, S., Grgicb, M.: Modified SPIHT algorithm for wavelet packet im-
age coding. Real-Time Imaging 11, 378–388 (2005)
[8] Yang, Y., Xu, C.: A wavelet packet based block-partitioning image coding algorithm
with rate-distortion optimization. Science in China Series F: Information Sciences 51(8),
1039–1054 (2008)
[9] Ramchandran, K., Vetterli, M.: Best wavelet packet bases in a rate distortion sense. IEEE
Transactions on Image Processing 2(2), 160–175 (1993)
[10] Coifman, R.R., Wickerhauser, M.V.: Entropy-based algorithms for best basis selection.
IEEE Trans. Inform Theory, Special Issue on Wavelet Transforms and Multires. Signal
Anal. 38(3), 713–718 (1992)
[11] Adams, M.D.: JasPer Software Reference Manual (Version 1.900.0) (December 2006),
http://www.ece.uvic.ca/~mdadams/jasper/jasper.pdf
A New Method of Facial Expression
Recognition Based on SPE Plus SVM
1 Introduction
Facial expressions convey so much information about human emotions that they play an important role in human communication. In order to facilitate more intelligent and smart human-machine interfaces for multimedia products, automatic facial expression recognition (FER) has become a hot issue in the computer vision and pattern recognition community. One of the many difficulties in FER is the high dimensionality of the data. Extremely large data increases the cost of computation and brings about the so-called "curse of dimensionality" problem. Reducing data to fewer dimensions often makes analysis algorithms more efficient, and can help machine learning algorithms make more accurate predictions.
To reduce the dimensionality of hyper-dimensional data, many algorithms have been introduced. Among them, stochastic proximity embedding (SPE), proposed by D. K. Agrafiotis [1] in 2002, is an excellent self-organizing algorithm for embedding a set of related observations into a low-dimensional space that preserves the intrinsic dimensionality and metric structure of the data. The method is programmatically simple, robust, and convergent [2]. Typically, SPE can be applied to extract constraint surfaces of any desired dimension. Because it works directly with proximity data, it can be used for both dimensionality reduction and feature extraction [3].
In this paper, we use SPE as the dimensionality reduction method and adopt SVM as the classifier, testing on the JAFFE database for the study of FER. Compared with conventional algorithms such as PCA and LDA, our algorithm performs better; its best recorded recognition rate on FER was 69%.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 399–404, 2011.
© Springer-Verlag Berlin Heidelberg 2011
400 Z. Ying et al.
Given a set of proximities r_ij between k observations and a set of images on an m-dimensional display plane {x_i, i = 1, 2, …, k}, x_i ∈ ℜ^m, the problem is to place the x_i onto the plane in such a way that their Euclidean distances d_ij = ||x_i − x_j|| approximate as closely as possible the corresponding values r_ij. The quality of the projection is determined using an error function [4],

E = ( ∑_{i<j} f(d_ij, r_ij) / r_ij ) / ∑_{i<j} r_ij (1)

where f(d_ij, r_ij) is a pairwise error term, typically the squared difference

f(d_ij, r_ij) = (d_ij − r_ij)² (2)

The algorithm starts with an initial configuration and iteratively refines it by repeatedly selecting two points at random, and adjusting their coordinates so that their Euclidean distance on the map d_ij matches more closely their corresponding proximity r_ij. The correction is proportional to the disparity λ (r_ij − d_ij) / d_ij, where λ is a learning rate parameter that decreases during the course of refinement to avoid oscillatory behavior. The detailed algorithm is as follows:

x_i ← x_i + (λ/2) · (r_ij − d_ij)/(d_ij + ε) · (x_i − x_j) (3)
A New Method of Facial Expression Recognition Based on SPE Plus SVM 401
and

x_j ← x_j + (λ/2) · (r_ij − d_ij)/(d_ij + ε) · (x_j − x_i) (4)
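The pairwise refinement of Eqs. (3)-(4) can be sketched directly. The following minimal implementation, with an assumed linear learning-rate schedule and random initialization, is for illustration only and is not the authors' code.

```python
import random
from math import sqrt

def spe_embed(r, m=2, n_steps=20000, lam0=1.0, eps=1e-5, seed=0):
    """Minimal SPE sketch. r is a k x k symmetric proximity matrix;
    returns k points in R^m whose pairwise distances approximate r."""
    rng = random.Random(seed)
    k = len(r)
    x = [[rng.random() for _ in range(m)] for _ in range(k)]
    for t in range(n_steps):
        lam = lam0 * (1.0 - t / n_steps)           # decreasing learning rate
        i, j = rng.sample(range(k), 2)             # pick two points at random
        d = sqrt(sum((a - b) ** 2 for a, b in zip(x[i], x[j])))
        c = 0.5 * lam * (r[i][j] - d) / (d + eps)  # shared factor of Eqs. (3)-(4)
        for a in range(m):
            diff = x[i][a] - x[j][a]
            x[i][a] += c * diff                    # Eq. (3)
            x[j][a] -= c * diff                    # Eq. (4)
    return x
```

For three observations with unit pairwise proximities, for example, the embedded points settle into an approximately equilateral triangle.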
3 Experiment Analysis
3.1 Flows of the Experiment
The experiments with the proposed FER algorithm are carried out on the JAFFE database. The database contains 213 images in which ten persons each express the seven basic expressions (anger, disgust, fear, happiness, neutral, sadness and surprise) three or four times. We select 3 images of each expression of each person (210 images in all) for our experiments.
To eliminate unwanted redundant information affecting FER, the expression images are registered using eye coordinates and cropped with a mask to exclude the non-face area, as shown in Fig. 1.
Fig. 1. Samples of the JAFFE database with the non-face area excluded
Then, the images are resized to 64×64 pixels. After this step, we process the data with histogram equalization. The step after image preprocessing is dataset construction for SPE processing: we construct the dataset by reshaping each 64×64 image into a 4096×1 data-point. All data-points together constitute a dataset containing 210 data-points with 4096 dimensions. We can then reduce the dimensionality of the dataset using the SPE algorithm.
After data dimensionality reduction, the next step is training and classifying data using SVM. We divide the database into ten equally sized sets (each set corresponding to one specific person; the JAFFE database contains ten persons' facial expression images in all): nine sets are used for training and the remaining one for testing. This process is repeated so that each of the ten sets is used once as the testing set and nine times as the training set. The average result over all ten rounds is taken as the final expression recognition rate of the experiment.
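The subject-wise ten-fold protocol described above amounts to a leave-one-subject-out split, which might be sketched as follows; the (subject, sample) data layout is an assumption for illustration.

```python
def leave_one_subject_out(samples):
    """samples: list of (subject_id, sample) pairs. Yields one
    (train, test) split per subject: that subject's samples are held
    out for testing, all other subjects' samples form the training set."""
    subjects = sorted({sid for sid, _ in samples})
    for held_out in subjects:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield train, test

# With 10 subjects and 21 images each (210 in all), this produces ten
# rounds of 189 training and 21 testing images.
```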
In this work, we adopt OAA-SVM with the RBF kernel K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²)) as the classifier for FER. At first, we optimize the parameters
of the OAA-SVM model; this step ensures that the SVM classifier works at its best for FER, and involves optimizing the regularization constant C and the kernel argument σ².
The regularization constant C is a cost parameter that trades off margin size and training error in the SVM. At first, the FER performance improved with increasing values of C, reaching its peak (69.05%) at C = 250; afterwards, the recognition rate fluctuated widely as C continued to increase. The recognition rate rises sharply with increasing σ², and then becomes stable once σ² exceeds 2. The parameter σ² reaches its optimum at 6.4, where the recognition rate is 67.14%.
An experiment was conducted to analyze the relationship between the number of reduced dimensions and FER performance. The result, shown in Fig. 2, indicates that the dimensionality can be reduced to 40 while the recognition rate stays above 60%; the performance of FER is stable for dimensions between 40 and 200, with recognition rates almost all above 60%. The highest recognition rate, 67.14%, is attained at 48 dimensions. The best performance of the SPE+SVM algorithm thus occurs at a relatively low dimensionality, in the range of 40 to 60 dimensions. To compare with other algorithms, we substituted PCA for SPE as the dimensionality reduction toolkit and repeated this experiment. The result, also shown in Fig. 2, reveals the different feature extraction abilities of SPE and PCA.
Fig. 2. The effect of the reduced dimensionality of the dataset on FER performance (results acquired under OAA-SVM with parameters C = 250 and σ² = 6.4)
As we can see from this plot, besides FER performance that is superior to PCA+SVM on the whole, SPE also exhibits its ability as an excellent feature extractor. Since the best performance of the SPE+SVM algorithm on FER is attained in the range of 40 to 60 dimensions, relatively low compared with most conventional dimensionality reduction algorithms such as PCA, this suggests that SPE can produce maps that exhibit meaningful structure even when the data are embedded in fewer than their intrinsic dimensions.
A New Method of Facial Expression Recognition Based on SPE Plus SVM 403
We also compared some conventional algorithms with our proposed algorithm on FER, as shown in Table 1. The first four methods adopt OAA-SVM as the classifier and the last one adopts MVBoost. Table 1 shows the recognition rates of these methods. In this contrast experiment, we can see that SPE, as an effective approach to facial expression feature extraction, is clearly superior to PCA, KPCA, and LDA, and also better than the Gabor Histogram Feature + MVBoost algorithm.
Table 1. Performance comparison between different FER algorithms (in percent)
Fig. 3. FER performance comparison between the SPE+SVM and PCA+SVM algorithms on various expressions (results acquired under OAA-SVM with parameters C = 250 and σ² = 6.4)
As shown in Fig. 3, these two algorithms both record high recognition rates on the expressions 'anger', 'disgust', 'happiness' and 'surprise', with only slight differences in performance. However, for the remaining three expressions ('fear', 'neutral', and 'sadness'), which are recognized at low rates by both algorithms, the FER performances differ considerably: SPE+SVM behaves far better than PCA+SVM. To analyze this phenomenon, note that the separability of these three expressions is low. PCA, as a conventional dimensionality reduction tool, discards some of the useful distinguishing characteristics of these expressions when it reduces the dimensionality of the dataset, which results in poor FER performance. SPE, in contrast, is not just a dimensionality reduction tool but also acts as a feature extractor; it can preserve the minute differences between these expressions, and this specialty enables it to work well in fuzzy environments. This instance illustrates that the SPE algorithm is more robust than conventional dimensionality reduction algorithms such as PCA.
4 Conclusions
A new approach to facial expression recognition based on stochastic proximity embedding plus SVM was proposed. Tested on the JAFFE database, the proposed algorithm obtained satisfactory results. Its FER performance is better than that of traditional algorithms such as PCA, KPCA and LDA, and also superior to some newly introduced algorithms, such as FER based on Gabor Histogram Feature and MVBoost. Because SPE has the ability to extract features from the dataset, SPE+SVM can attain its best performance at a very low dimensionality of the dataset, in contrast with the conventional FER algorithm PCA+SVM; this advantage also makes the SPE+SVM algorithm more robust on FER than conventional dimensionality reduction algorithms such as PCA+SVM.
Acknowledgment
This paper was supported by NNSF (No.61072127, No. 61070167), Guangdong NSF
(No. 10152902001000002, No. 07010869), and High Level Personnel Project of
Guangdong Colleges (No. [2010]79).
References
[1] Agrafiotis, D.K., Xu, H.: A self-organizing principle for learning nonlinear manifolds. Proceedings of the National Academy of Sciences, 15869–15872 (2002)
[2] Rassokhin, D.N., Agrafiotis, D.K.: A modified update rule for stochastic proximity embedding. Journal of Molecular Graphics and Modelling 22, 133–140 (2003)
[3] Agrafiotis, D.K.: Stochastic Proximity Embedding. J. Comput. Chem. 24, 1251–1271
(2003)
[4] Nishikawa, N., Doi, S.: Optimization of Distances for a Stochastic Embedding and Clustering of High-Dimensional Data. In: The 23rd International Technical Conference on Circuits/Systems, Computers and Communications, pp. 1125–1128 (2008)
[5] Abe, S.: Analysis of Multiclass Support Vector Machines. In: International Conference on
Computational Intelligence for Modelling Control and Automation, pp. 385–396 (2003)
[6] Ying, Z., Zhang, G.: Facial Expression Recognition Based on NMF and SVM. In: Interna-
tional Forum on Information Technology and Applications 2009, vol. 3, pp. 612–615
(2009)
[7] Liu, X., Zhang, Y.: Facial Expression Recognition Based on Gabor Histogram Feature and
MVBoost. Journal of Computer Research and Development 44(7), 1089–1096 (2002)
Multiple Unmanned Air Vehicles Control Using
Neurobiologically Inspired Algorithms
Abstract. In order to develop and evaluate future Unmanned Air Vehicles for hazardous environmental monitoring, comprehensive simulation testing and analysis of new advanced concepts is imperative. This paper details an on-going proof of concept focused on the development of a neurobiologically-inspired system for the high-level control of an Air Vehicle team. This study, entitled Neurobiologically Enabled Autonomous Vehicle Operations, will evaluate initial System-Under-Test concept data by selecting well-defined tasks and evaluating performance based on assignment effectiveness, cooperation, and adaptability of the system. The system will be tested thoroughly in simulation and, if mature, will be implemented in hardware.
1 Introduction
The use of unmanned aerial vehicles (UAVs) for various military or civilian missions
has received growing attention in the last decade. Depending on the application, there
are many different ideas on how to measure the autonomy of a system. In [1] the
Autonomous Control Levels (ACL) metric was introduced. The majority of the tech-
niques that have been developed can be classified as either optimization-based meth-
ods that make use of extensive a priori information or reactive methods that use local
information to define a global planning strategy. Typically these methods require
offline processing due to their computational complexity and can thus show a lack of
robustness in dealing with highly dynamic environments. However, in some cases,
replanning phases during execution of the mission can be possible. The graph search
approaches that have been used extensively typify these methods. Reactive methods
take planning methods one step further by incorporating local information into the
control strategy to allow for changing conditions in the environment. Rather than
generating an a priori path through a given environment, reactive methods focus on
using local information to define a controller for a vehicle that ultimately gives rise to
the desired behavior. Potential function methods have long been exploited in the reac-
tive paradigm. However, there is an area of research that focuses on neurobiological
(or brain-based) design that has been less exploited in cooperative control.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 405–410, 2011.
© Springer-Verlag Berlin Heidelberg 2011
406 Y. Zhang and L. Wang
Brain-Based Design (BBD) may provide a unique solution to the cooperative con-
trol problem. The essence of BBD lies in its ability to adapt to changing conditions in
real time. By mimicking the design and operation of the human brain, these systems
can exhibit a great capacity for learning about their environment [2]. It is the ability to
make reasoned decisions that separates BBD from other cooperative control strate-
gies. The Neurobiologically Enabled Autonomous Vehicle Operations (NEAVO)
Study aims to exploit the capabilities of BBD to solve problems in the area of UAV
cooperative control.
2 Mission Tasks
In NEAVO, multiple UAVs will be used to perform tasks in the category of Reconnaissance, Surveillance, and Target Acquisition (RSTA). These tasks are especially
important in urban settings. Given the increased need for real-time information in
urban settings, perfecting RSTA UAV teams is an important research topic with ap-
plications in modern warfare and hazardous environmental monitoring. This process
is shown in Figure 1. For NEAVO, a subset of important RSTA tasks is identified as test cases for the demonstration of BBD. Two basic tasks have been identified: (1) tracking a moving target; (2) cooperative area search, as shown in Figure 2.
Fig. 1. The Kill Chain. Each step may iterate one or more times and be performed by one or
more assets.
Fig. 2. Two basic tasks (1) Tracking a Moving Target; (2) Cooperative Area Search
Multiple Unmanned Air Vehicles Control Using Neurobiologically Inspired Algorithms 407
2.3 Constraints
For the above tasks, several constraints are provided. A list of constraints follows.
(1) The team shall stay within a specified area of operations. This area may change
in shape or size during operations.
(2) The team must avoid all areas marked as no-fly zones, as shown in Figure 3. No-fly zones may be added, removed, or changed during operations.
(3) The system shall allow for team members to be added or removed at any time.
New members shall be commanded in the same way as existing team members.
(4) Steady-state winds shall be considered when planning team actions.
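Constraints (1) and (2) reduce to point-in-region tests against polygons that may change at run time. One way such a check might look is the standard ray-casting test, sketched here with illustrative geometry that is not taken from the NEAVO system:

```python
def in_polygon(pt, poly):
    """Ray-casting test: is point pt inside the polygon given as a list
    of (x, y) vertices? Casts a ray to the left of pt and counts edge
    crossings; an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for k in range(n):
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

An area-of-operations test would require `in_polygon` to be true, while a no-fly-zone test would require it to be false for every zone.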
3 System Models
All aircraft in NEAVO will be small UAVs. The team may be heterogeneous, but all
aircraft will be statically stable, autopilot controlled vehicles. Each vehicle will be a
waypoint follower. A route will be uploaded to the aircraft by the command system.
The focus of NEAVO is cooperative task planning. BBD has shown promise in the areas of video processing and target recognition, but these technologies are not being considered in the NEAVO effort. Sensors will be modeled as polygons of fixed size,
with a grid of pixels that represent a video image over that area. A notional sketch of
this type of sensor is shown in Figure 4.
Data will be passed from the sensor simulation to the mission planner, supplying the information seen and a confidence value. Initially, the sensor will be in a fixed orientation on board the aircraft, but sensor movement may be considered later in the NEAVO effort. When data is passed from the sensor to the planner, a confidence value is applied to the detection. The confidence can be expressed as a function C_s of the number of pixels on target, the amount of time that the target spends in the sensor field-of-view, the aspect angle to the target, and the level of contrast between the target and the surrounding pixels:

C_s = f(N_p, t_FOV, ψ, δ) (1)
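The paper leaves the form of f unspecified, so one hypothetical instance of Eq. (1) might squash each factor into [0, 1] and multiply them; the saturation constants and the multiplicative combination below are illustrative assumptions only.

```python
from math import cos, radians

def sensor_confidence(n_pixels, t_fov, aspect_deg, contrast,
                      n_ref=100.0, t_ref=2.0):
    """Hypothetical instance of C_s = f(N_p, t_FOV, psi, delta):
    each factor is mapped into [0, 1] and the results are multiplied.
    n_ref and t_ref are illustrative saturation constants."""
    p = min(n_pixels / n_ref, 1.0)           # pixels on target
    t = min(t_fov / t_ref, 1.0)              # dwell time in the field of view
    a = max(cos(radians(aspect_deg)), 0.0)   # head-on aspect scores highest
    d = max(min(contrast, 1.0), 0.0)         # contrast assumed already in [0, 1]
    return p * t * a * d
```

A multiplicative form means any single poor factor (e.g. a side-on aspect) drives the overall detection confidence toward zero.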
NEAVO aims to apply BBD to the task allocation and path planning steps of the
mission planning system. The result of the path planning step is a route instruction to
one or more UAVs.
5 Neurocomputation
Neurocomputation seeks to mimic the processing abilities of the brain to solve com-
plex problems [3]. The most popular of these algorithms is the artificial neural net-
work. ANNs consist of very simple processing units connected in a distributed fash-
ion. Each processing unit is modeled after the neuron and is typically characterized
by an activation function that may or may not produce an output based on the input
presented to it. Learning is accomplished by adjusting the weights that connect the
neurons to each other. In order to produce useful results, the neural network must be given some criterion for determining the goodness of its solutions; this criterion varies between different learning algorithms [4, 5].
Supervised learning algorithms make use of training data in the form of a set of in-
puts and outputs. Using information about the expected outputs given an input, the
network learns how to respond to new inputs that it hasn’t seen before. The perform-
ance of the neural network when acting on its own is entirely dependent on the behav-
iors present in the training data. The fitness of the solutions produced is determined
based on comparison to the output data in the training set. Essentially, these represent
desired outputs and the network is trained to minimize the difference between the
actual output and the desired output. Supervised learning mechanisms have been
successfully applied to problems involving handwriting recognition, pattern recogni-
tion, and information retrieval.
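The supervised scheme just described, adjusting weights to shrink the gap between actual and desired output, can be illustrated with a single perceptron-style unit. This toy example, which learns the logical AND function, is purely illustrative and is not a component of the NEAVO system.

```python
def train_perceptron(data, epochs=20, lr=1):
    """data: list of (inputs, target) pairs with 0/1 targets. A single
    unit with a step activation; each weight update moves the output
    toward the desired output, as in supervised learning."""
    n_inputs = len(data[0][0])
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            output = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - output          # gap between desired and actual
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Train on the logical AND function and build a predictor from the result.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

Because AND is linearly separable, the training loop converges after a few epochs and the learned unit classifies all four inputs correctly.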
Unsupervised learning mechanisms make use of training data as well. However,
there is no output data used in the training set. An input data set is used to fit a model
to observations. By forming a probability distribution over a set of inputs, the neural
network can be trained to output the conditional probability of an input given all pre-
vious inputs. As an example, consider a set of temperature data from a properly func-
tioning power plant. A neural network could be trained to determine the likelihood of
a particular reading given all previous readings, and could thus be used to monitor the
operation of the power plant. Unsupervised learning techniques have also been ap-
plied to data compression problems.
Reinforcement learning is unique in that it does not make use of training data. In-
stead, a set of possible actions is provided to the network for a given situation and a
system of rewards and penalties is used to direct behavior. At each step an estimate
of the future expected reward given a particular action is formed and the neural net-
work is trained to maximize its reward. In this way, reinforcement learning mecha-
nisms rely on direct interaction with the environment. Reinforcement learning fo-
cuses on online performance rather than a priori training and seeks to strike a balance
between the exploration of unknown areas and the exploitation of currently held
knowledge. This learning scheme has been successfully applied to robot control and
telecommunications problems.
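As a concrete illustration of the reward-driven scheme described above, the following toy tabular Q-learning example trains a value table purely from interaction, with no training data. The four-state corridor world is a hypothetical example, not part of the NEAVO system.

```python
import random

def q_learning(n_states=4, episodes=300, alpha=0.5, gamma=0.9,
               epsilon=0.2, seed=1):
    """Tabular Q-learning on a corridor of states 0..n_states-1.
    Actions: 0 = step left (bounded at 0), 1 = step right. Reaching the
    rightmost state pays reward 1 and ends the episode."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    goal = n_states - 1                      # terminal state; its value stays 0
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < epsilon:       # explore unknown actions
                a = rng.randrange(2)
            else:                            # exploit current estimates
                a = 0 if q[s][0] >= q[s][1] else 1
            s_next = max(s - 1, 0) if a == 0 else s + 1
            reward = 1.0 if s_next == goal else 0.0
            # Move the estimate toward reward + discounted future reward.
            target = reward + gamma * max(q[s_next])
            q[s][a] += alpha * (target - q[s][a])
            s = s_next
    return q
```

After training, "right" has the higher value in every state, i.e. the reward signal alone has directed the behavior toward the goal.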
6 Summary
The NEAVO Program seeks to evaluate the concept of a neurobiologically inspired
system controlling a RSTA UAV team. The simulation chosen to assess the system
concept is called MultiUAVs. Construction of MultiUAVs satisfies the need for a
simulation environment that researchers can use to develop, implement and analyze
cooperative control algorithms. Since the purpose of MultiUAVs is to make coopera-
tive control research accessible to researchers it was constructed primarily using
MATLAB and Simulink. Some of the simulation functions are programmed in C++.
During the simulation, vehicles fly predefined search trajectories until a target is en-
countered. Each vehicle has a sensor footprint that defines its field of view. Target
positions are either set randomly or they can be specified by the user. When a target
position is inside of a vehicle’s sensor footprint, that vehicle runs a sensor simulation
and sends the results to the other vehicles. With actions assigned by the selected coop-
erative control algorithm, the vehicles generate their own trajectories to accomplish
tasks. The simulation takes place in a three dimensional environment, but all of the
trajectory planning is for a constant altitude, i.e. two dimensions. Once each vehicle
has finished its assigned tasks it returns to its predefined search pattern trajectory. The
simulation continues until it is stopped or the preset simulation run time has elapsed.
Several issues are currently being explored in the NEAVO Program using MultiUAVs.
References
1. Clough, B.T.: Metrics, schmetrics! How the heck do you determine a UAV's autonomy anyway? In: Proceedings of the Performance Metrics for Intelligent Systems Workshop, Gaithersburg, MD (2002)
2. Reggia, J., Tagamets, M., Contreras-Vidal, J., Weems, S., Jacobs, D., Winder, R., Chabuk,
T.: Development of a Large-Scale Integrated Neurocognitive Architecture. DARPA In-
formation Processing Technology Office (2006)
3. Rumelhart, D., McClelland, J.: Parallel Distributed Processing: Explorations in the Micro-
structure of Cognition. MIT Press, Cambridge (1988)
4. Haykin, S.: Neural Networks: A Comprehensive Foundation. Prentice-Hall, Englewood
Cliffs (1999)
5. Nunes de Castro, L.: Fundamentals of Natural Computing: Basic Concepts, Algorithms,
and Applications. Chapman and Hall, Boca Raton (2006)
The Use of BS7799 Information Security Standard to
Construct Mechanisms for the Management of Medical
Organization Information Security
1 Introduction
The objective of information security is "to ensure the various interests which rely on the information system, and to avoid harm created by the loss of confidentiality, integrity, and availability" [5, 9, 14]. Past research on information security has constantly stressed information and communication technology (ICT) measures such as data encryption, firewall technology, and protection against computer viruses [2, 4, 6, 7, 8, 10, 11, 12, 16, 17, 18, 19, 20, 21, 22]. The primary responsibility of medical organizations is to provide patients with quality care; information security events can quickly leave a medical organization incapable of carrying out normal operations, with negative consequences for the rights of patients. Thus, the importance of maintaining the security of medical organizations lies not only in protecting important information from theft, forgery, or damage; more importantly, information security safeguards the reputation, value, and sustainable development of the medical organization.
To help enterprises achieve information security goals, the British Standards Institution (BSI) published the BS7799 information security management standard in 1995 (which became the ISO 17799 international standard in 2000) [1, 14], covering all aspects of information security. Current literature on risk management can only
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 411–416, 2011.
© Springer-Verlag Berlin Heidelberg 2011
412 S.-F. Liu, H.-E. Chueh, and K.-H. Liao
2 Literature Review
The frequency of information security events in medical organizations has been at-
tracting increased attention to medical organization security issues. According to
Smith and Eloff [15], the scope of security protection for medical organizations
should include assistance in avoiding: (1) physical damage to persons or property; (2)
privacy violations; (3) loss or destruction of medical information; (4) harm to opera-
tional integrity or consistency. Information security planning for medical organizations must be considered from the perspective of comprehensive security protection, including: defining the scope of system security protection, assessing risks, establishing internal control and auditing systems, exploring other management-related issues, and structuring the comprehensive information security needs of medical organizations.
Harmful information security events usually involve behavioral uncertainty, unpredictability, and the inability to understand what true security is. In one respect, risk management improves the performance and effectiveness of information security evaluation. After enhancing awareness of information security, organizations can use resources more efficiently, create better project management, and minimize waste. Willett (1901) [23] believed that "risk" and "chance" were different and should be distinguished by using "the degree of probability" for chance and "the degree of uncertainty" for risk. The general steps in managing risk are: first, identifying the risk; second, measuring the risk; third, selecting the proper tools; fourth, taking action; and fifth, evaluating the performance. Because a single information security event can trigger a chain of issues, dealing with information security events arbitrarily may cause unforeseen errors and losses.
The United States has information security operating standards such as the Control Objectives for Information and Related Technology (COBIT) and the Trusted Computer System Evaluation Criteria (TCSEC), while the United Kingdom has introduced the BS7799 information security management standard. Control Objectives for Information and Related Technology, the operating standard published by the Information Systems Audit and Control Association (ISACA) in 1995, is a comprehensive set of considerations based on information technology control standards and information technology security. The Trusted Computer System Evaluation Criteria, proposed by the U.S. National Computer Security Center (NCSC) in 1983, takes a systematic approach by dividing security issues into four categories, named A, B, C, and D, where the category of a computer system dictates the standard level of security required. The aim of the BS7799 information security management standard, developed
The Use of BS7799 Information Security Standard to Construct Mechanisms 413
by BSI, was to ensure the security of business information assets including software
and hardware facilities as well as data and information, by avoiding information secu-
rity-related damage caused by internal and external threats as well as operating mis-
takes of organizational staff. Simply put, the objective of BS7799 was to establish a
comprehensive information security management system by ensuring information
confidentiality, integrity, and availability [1, 3, 5, 9, 14].
Each standard has its own suitable application, but from the perspective of infor-
mation security management for medical organizations, BS7799 is best suited to en-
suring the security of medical organizations and their information-related assets. The
BS7799 information security management standard includes the protection of soft-
ware, hardware facilities, information, and avoiding various internal and external
threats as well as operating mistakes of organizational staff. It also covers various
aspects of security policy, from formulating policy related to security and delegating
security related responsibilities to assessing risk and access control. As for studies
applying BS7799 to the medical industry, Janczewski (2002) [15] researched the use
of BS7799 to develop the Healthcare Information System (HIS), and investigated the
basic security infrastructure behind the development of HIS.
3 Research Methods
The main purpose of this study was to help medical organizations use the most appropriate resources to build the most effective mechanism for information security management. This study uses the control items and control objectives of the BS7799 information security management standard as the foundation for establishing an information security management mechanism suitable for medical organizations. This study first identified all information assets, information security threats, and information security vulnerabilities within the organization based on BS7799's one hundred and twenty-seven control objectives, and formed a risk assessment scale by assessing the probability of occurrence and the degree of impact of security events. Then, according to the assessments of experts, we set weightings, ranked the information security events in order of priority, and structured a security management model.
Carroll [1] proposed a mathematical formula for using annualized loss expectancy
(ALE) to assess the necessary cost of asset protection and losses due to information
threats. The formula is as follows:
ALE = T × V. (1)
Here, T is the annual value of a given threat, while V is the estimated value of assets.
To evaluate the value at risk (VaR) of the BS7799 control objectives, we convert the
value T into the control objective's probability of occurrence (value P), and convert
the value V into the control objective's degree of impact (value I). The ALE is then
regarded as the VaR of each information security control objective.
P × I = VaR. (2)
Using the 127 control objectives in BS7799 as a framework, we build an assessment
table for information security management according to each objective's probability
of occurrence and degree of impact.
414 S.-F. Liu, H.-E. Chueh, and K.-H. Liao
Table 1. The comparative table for the value P (probability) and value I (impact)
Table 2. The comparative table for the value P (probability) and value I (impact)
Information security management mechanisms constructed in this way are not al-
ways the best model. Therefore, they must be continually assessed to maintain effec-
tiveness in risk management.
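The risk scale just described can be sketched in code. In this sketch the control-objective IDs, the P and I scores, the threshold values, and the quadrant semantics (Q2 high-P/high-I, Q3 high-P/low-I, Q4 low-P/high-I) are illustrative assumptions, not values taken from the study:

```python
# Hypothetical illustration of the VaR ranking: the IDs, scores and
# thresholds below are invented for the sketch, not the study's data.
controls = {
    "5-1": (4, 5),  # control objective -> (P, I) expert scores
    "8-3": (3, 1),
    "7-1": (1, 4),
    "6-5": (2, 2),
}

P_CUT = 2.5  # assumed cut between low and high probability
I_CUT = 2.5  # assumed cut between low and high impact

def quadrant(p, i):
    """Assumed quadrant semantics: Q2 = high P, high I (highest priority)."""
    if p >= P_CUT and i >= I_CUT:
        return "Q2"
    if p >= P_CUT:
        return "Q3"
    if i >= I_CUT:
        return "Q4"
    return "Q1"

# rank control objectives by VaR = P x I, highest risk first
ranked = sorted(controls.items(),
                key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
```

Ranking by VaR and then grouping by quadrant reproduces the kind of priority ordering shown in the procedural-order table.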
Risk Quadrant (Procedural Order): Control Items

Q2 (Highest priority): 1-1, 2-3, 2-4, 3-1, 4-5, 4-6, 4-7, 4-10, 5-1, 5-2, 5-3, 5-8,
5-9, 6-1, 6-2, 6-3, 6-8, 6-9, 6-10, 6-11, 6-13, 6-14, 6-15, 6-17, 6-18, 6-23, 7-2,
7-4, 7-6, 7-7, 7-8, 7-9, 7-11, 7-13, 7-14, 7-15, 7-17, 7-19, 7-20, 7-21, 7-23, 7-25,
7-26, 7-27, 7-28, 8-1, 8-2, 8-10, 8-13, 8-14, 8-17, 8-18, 9-1, 9-2, 9-3, 10-2, 10-3,
10-4, 10-6.

Q3 (Needs to be processed): 2-2, 2-6, 2-8, 5-6, 5-12, 5-13, 6-4, 6-6, 6-12, 6-16,
6-19, 6-20, 6-21, 6-22, 7-3, 7-10, 7-12, 7-16, 7-24, 8-3, 8-4, 8-5, 8-6, 8-7, 8-8,
8-9, 8-11, 8-12.

Q4 (Needs to be processed): 1-2, 4-1, 4-8, 5-7, 6-5, 7-1, 7-5, 7-30, 8-15, 8-16, 10-5.
5 Conclusion
Given the competitive climate in the medical industry, with the expansion of hospitals,
changes in government policy, and increasing demand for quality medical service,
many hospitals have had to face unprecedented operational pressures, forcing hospital
operators to pay closer attention to costs. The information security management
mechanisms developed in this study could help medical organizations assess their
level of information security risk and identify appropriate improvement strategies.
References
1. Arthur, E.H., Bosworth, S., Hoyt, D.B.: Computer Security Handbook. John Wiley &
Sons, New York (1995)
2. Badenhorst, K.P., Eloff, J.H.P.: Framework of a Methodology for the Life Cycle of Computer Security in an Organization. Computers & Security 8(5), 433–442 (1989)
3. Alberts, C.J., Dorofee, A.J.: An Introduction to the OCTAVE Method. The CERT® Coordination Center, CERT/CC (2001)
4. Ellison, R.J., Linger, R.C., Longstaff, T., Mead, N.R.: Survivable Network System Analy-
sis: A Case Study. IEEE Software 16(4), 70–77 (1999)
5. Eloff, J.H.P., Eloff, M.M.: Information security architecture. Computer Fraud & Secu-
rity 11, 10–16 (2005)
6. Eloff, M.M., Von Solms, S.H.: Information Security Management: A Hierarchical Frame-
work for Various Approaches. Computers & Security 19(3), 243–256 (2000)
7. Eloff, M.M., Von Solms, S.H.: Information Security Management: An Approach to Com-
bine Process Certification and Product Evaluation. Computers & Security 19(8), 698–709
(2000)
8. Ettinger, J.E.: Key Issues in Information Security. Information Security. Chapman & Hall,
London (1993)
9. Finne, T.: Information Systems Risk Management: Key Concepts and Business Processes.
Computers & Security 19(3), 234–247 (2000)
10. Gehrke, M., Pfitzmann, A., Rannenberg, K.: Information Technology Security Evaluation
Criteria (ITSEC): A Contribution to Vulnerability? In: The IFIP 12th World Computer
Congress on Information Processing, Madrid, pp. 7–11 (1992)
11. Gollmann, D.: Computer Security. John Wiley & Sons Ltd., UK (1999)
12. Gupta, M., Chaturvedi, A.R., Mehta, S., Valeri, L.: The Experimental Analysis of Informa-
tion Security Management Issues for Online Financial Services. In: The 2001 International
Conference on Information Systems, pp. 667–675 (2001)
13. Halliday, S., Badenhorst, K., Von Solms, R.: A business approach to effective information
technology risk analysis and management. Information Management & Computer Secu-
rity 4(1), 19–31 (1996)
14. ISO/IEC 17799. Information technology-code of practice for information security man-
agement. BSI, London (2000)
15. Janczewski, L.J., Shi, F.X.: Development of Information Security Baselines for Healthcare
Information Systems in New Zealand. Computers & Security 21(2), 172–192 (2002)
16. Schultz, E.E., Proctor, R.W., Lien, M.C.: Usability and Security An Appraisal of Usability
Issues in Information Security Methods. Computers & Security 20(7), 620–634 (2001)
17. Sherwood, J.: SALSA: A method for developing the enterprise security architecture and strategy. Computers & Security 2(3), 8–17 (1996)
18. Smith, E., Eloff, J.H.P.: Security in health-care information systems-current trends. Inter-
national Journal of Medical Informatics 54, 39–54 (1999)
19. Song, M.J.: Risk Management. Chinese Enterprise Develop Center, 33–456 (1993)
20. Trcek, D.: An Integral Framework for Information Systems Security Management. Com-
puters & Security 22(4), 337–360 (2003)
21. Von Solms, R.: Information Security Management: The Second Generation. Computers & Security 15(4), 281–288 (1996)
22. Von Solms, R., Van Haar, H., Von Solms, S.H., Caelli, W.J.: A Framework for Informa-
tion Security Evaluation. Information & Management 26, 143–153 (1994)
23. Willett, A.H.: The Economic Theory of Risk and Insurance. Ph.D. Thesis, Columbia University (1901)
An Improved Frame Layer Rate
Control Algorithm for H.264
Abstract. Rate control is an important part of video coding. This paper presents
an improved frame layer rate control algorithm that uses a combined frame
complexity measure and an adjusted quantization parameter (QP). The combined
frame complexity allows more reasonable bit allocation for each frame, and
adjusting the quantization parameter from the encoded frame information also
yields more accurate rate control. Experimental results show that, compared with
the original algorithm, our proposed algorithm reduces the actual bit rate error
of video sequences and achieves a better average PSNR with smaller deviation.
1 Introduction
The principal task of rate control in video communication is to collect the buffer
status and image activity, and to allocate a certain number of bits to each image of
the video so as to control the output rate and minimize image distortion.
In the rate control algorithm for H.264/AVC, the quantization parameter is used in
both the rate control and the rate distortion optimization (RDO), which leads to a
chicken-and-egg dilemma [1]. To solve this dilemma, much research has been done.
The work in [2] solves the dilemma by enhancing the ρ-domain model. A relational
model between rate and quantization step is advanced in [3] for the dilemma. Besides
those, a new rate control algorithm comprehensively considering HRD consistency
and the ratio of the mean absolute difference (MAD) is presented in [4]. Typically,
Li et al. in JVT-G012 [5] proposed a linear model for MAD prediction to solve the
chicken-and-egg dilemma; this method obtains good coding results. Although the
JVT-G012 proposal solves the dilemma well, it still has some problems. Since no
explicit R-Q model for intraframes is discussed there, the scheme in [6] introduces an
adaptive intraframe rate-quantization (R-Q) model, which aims at selecting accurate
quantization parameters for intra-coded frames. The work in [7] proposes separable
R-D models for color video coding. Also, the rate control algorithm of JVT-G012 has
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 417–421, 2011.
© Springer-Verlag Berlin Heidelberg 2011
418 X. Chen and F. Lu
weaknesses in frame layer bit allocation: it allocates bits based on the buffer status
without considering the frame complexity, and it does not take into account the
impact of the characteristics of the encoded frames on the current frame when
calculating QP. Therefore, this paper proposes an improved frame layer rate control
algorithm. By allocating bits in the frame layer based on the frame complexity and
adjusting the QP based on the encoded frames, our method is more effective both in
achieving rate control and in improving the image quality.
2 The Proposed Frame Layer Rate Control Algorithm

As the rate-distortion (R-D) model and the linear model for MAD prediction are not
accurate, we need to reduce the difference between the actual bits and the target bits
by calculating a target number of bits for each frame. The frame layer rate control
algorithm of JVT-G012 allocates bits to the non-encoded frames evenly, based only
on the buffer status, so it is difficult to reach the required coding bits with the same
quality given the differing frame complexity. Thus we must consider the frame
complexity when allocating bits in the frame layer, which gives more accurate
results. Our frame complexity FC is defined as follows [8]:
where MADratio(i, j) is the ratio of the predicted MAD of the current frame j to the
average MAD of all previously encoded P frames in the ith GOP, Cj = Hj / Hj−1,
where Hj is the average difference of the gray histogram between the current frame
and the previous reconstructed frame, and μ is a weighting coefficient.
The target bits allocated for the jth frame in the ith group of pictures (GOP) in
JVT-G012 is determined by the frame rate, the target buffer size, the actual buffer
occupancy and the available channel bandwidth:

          ⎧ 0.88×FC×β×Tr(ni,j)/Nr + (1−β)×{u(ni,j)/Fr + γ[Tbl(ni,j) − Bc(ni,j)]},              0 ≤ FC ≤ 1.1
  Ti(j) = ⎨ [0.8×(FC−1.15)+1.1]×β×Tr(ni,j)/Nr + (1−β)×{u(ni,j)/Fr + γ[Tbl(ni,j) − Bc(ni,j)]},  1.1 < FC ≤ 2.1    (2)
          ⎩ 1.15×β×Tr(ni,j)/Nr + (1−β)×{u(ni,j)/Fr + γ[Tbl(ni,j) − Bc(ni,j)]},                 FC > 2.1
where Ti(j) is the target bits allocated for the jth frame, Fr is the predefined frame
rate, u(ni,j) is the available channel bandwidth for the sequence, Tbl(ni,j) is the
target buffer level, Bc(ni,j) is the occupancy of the virtual buffer, β is a constant
whose value is 0.5 when there is no B frame and 0.9 otherwise, γ is a constant whose
value is 0.75 when there is no B frame and 0.25 otherwise, and T′r = Tr(ni,j)/Nr,
where Tr(ni,j) is the remaining bits of all non-coded frames in the ith GOP and Nr is
the number of P-frames remaining to be coded. The formula allocates fewer bits to a
frame with a lower complexity measure and more bits to a frame with a higher com-
plexity measure.
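As a minimal sketch, the piecewise allocation of Eq. (2) can be transcribed directly; the scale factors are taken verbatim from the formula, while the function and argument names are our own:

```python
def target_bits(FC, Tr, Nr, u, Fr, Tbl, Bc, beta=0.5, gamma=0.75):
    """Frame-layer target bits Ti(j) of Eq. (2): scale the mean remaining
    bits Tr/Nr by a complexity-dependent factor, then blend with the
    buffer-driven term u/Fr + gamma*(Tbl - Bc)."""
    buffer_term = u / Fr + gamma * (Tbl - Bc)
    if FC <= 1.1:
        scale = 0.88 * FC
    elif FC <= 2.1:
        scale = 0.8 * (FC - 1.15) + 1.1
    else:
        scale = 1.15
    return scale * beta * (Tr / Nr) + (1 - beta) * buffer_term
```

With β = 0.5 (no B frames), a frame whose complexity FC is below average receives proportionally fewer of the remaining bits Tr/Nr, exactly as stated above.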
The quadratic R-D model has two key parameters: the MAD and the header bits.
Inaccuracy in these two parameters means that the QP calculated by the quadratic
R-D model may not produce the desired number of coded bits. Therefore, we must
adjust the current QP using the coding information of the previous frame, which
achieves more accurate rate control.
When we obtain the value of Ti(j) using the improved algorithm, we calculate the
quantization parameter with the R-Q model. To take the feedback information of the
coded frames into account, we adopt a quantization parameter adjustment factor
ΔQ to adjust the value of QP. The value of ΔQ is determined by the actual bits and
the target bits of the previous frame.
        ⎧ QPj,        j = 2
QPj =   ⎨                        (4)
        ⎩ QPj + ΔQ,   j > 2
After the adjustment, the algorithm takes the coded frame information fully into
account and achieves good rate control; we then perform RDO.
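The QP update of Eq. (4) can be sketched as follows. The paper states only that ΔQ is determined by the previous frame's actual and target bits; the ±1 step rule and the 10% dead zone below are hypothetical choices for illustration:

```python
def adjust_qp(qp_model, j, actual_bits_prev, target_bits_prev):
    """Eq. (4): use the R-Q model QP for the first P frame (j = 2),
    otherwise add a feedback term dQ. The dQ rule here is assumed."""
    if j == 2:
        return qp_model
    if actual_bits_prev > 1.1 * target_bits_prev:
        dq = 1    # previous frame overshot its budget: quantize harder
    elif actual_bits_prev < 0.9 * target_bits_prev:
        dq = -1   # previous frame undershot: spend bits on quality
    else:
        dq = 0
    return qp_model + dq
```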
3 Experimental Results
We have implemented our proposed rate control algorithm by extending the JM8.6
test model software. The JVT-G012 rate control algorithm, as implemented in the
reference software JM8.6, is selected as the reference for comparison. The tested
sequences are in QCIF 4:2:0 format: suzie, football, mobile, foreman and coastguard.
In the experiments, the frame rate is set to 15 frames per second, the target bit rate is
set to 64 kb/s, the total number of frames is set to 100, the initial quantization
parameter is set to 28 and the length of the GOP is set to 25.
The experimental results are summarized in Table 1. Our proposed rate control
algorithm controls the bit rates more accurately: the maximum error of the actual bit
rates is 0.91%, less than half of the original algorithm's maximum error of 2.45%,
and the average error of the actual bits is 0.48%, a reduction of more than 50% from
the original algorithm's average error of 1.03%.
The proposed algorithm also improves the average PSNR and the PSNR deviation
significantly for the sequences. Table 1 shows that our method achieves an average
PSNR gain of about 0.54 dB with similar or lower PSNR deviation compared to the
JVT-G012 algorithm. In particular, for the sequences football and mobile, which have
high motion and complex texture, the proposed algorithm achieves PSNR gains of
0.85 dB and 1.31 dB respectively. The improvement for the sequence foreman, with
its moderate texture, is not obvious. For most sequences, the proposed algorithm
obtains a lower PSNR deviation than the JVT-G012 algorithm, which shows that it
can smooth the PSNR fluctuation between frames to some extent.
Table 1. Performance comparison of the proposed algorithm with the JVT-G012 algorithm on JM8.6
4 Conclusion
In this paper, we propose an improved frame layer rate control algorithm by using the
combined frame complexity and the adjusted quantization parameter. The algorithm
allocates bits in frame layer according to the frame complexity, and computes the
quantization parameter of current frame with the consideration of the previous frame
information. The experimental results show that our algorithm achieves more accurate
rate control and better average PSNR.
Acknowledgments
This work was supported by "Qing Lan Gong Cheng" program of Jiangsu Province of
China and National Natural Science Foundation of China (No. 10904073).
References
1. Ma, S.W., Gao, W., Wu, F., Lu, Y.: Rate control for JVT video coding scheme with HRD
considerations. In: Proceedings of the IEEE International Conference on Image Processing, vol. 3, pp. 793–796 (2003)
2. Shin, I.H., Lee, Y.L., Park, H.W.: Rate control using linear rate-ρ model for H.264. Signal Processing: Image Communication 19, 341–352 (2004)
3. Ma, S.W., Gao, W., Lu, Y.: Rate-distortion analysis for H.264/AVC video coding and its
application to rate control. IEEE Trans. on Circuit Syst. for Video Technol. 15, 1533–1544
(2005)
4. Li, Z.G., Gao, W., Pan, F.: Adaptive rate control for H.264. Journal of Visual Communica-
tion and Image Representation 17(2), 376–406 (2006)
5. Li, Z.G., Pan, F., Lim, K.P.: Adaptive basic unit layer rate control for JVT, JVT-G012. In: Proceedings of the 7th JVT Meeting, Pattaya, Thailand, pp. 7–14 (2003)
6. Jing, X., Chau, L.P., Siu, W.C.: Frame complexity-based rate-quantization model for
H.264/AVC intraframe rate control. IEEE Signal Processing Letters 15, 373–376 (2008)
7. Chen, Z.Z., Ngan, K.N.: Towards rate-distortion tradeoff in real-time color video coding. IEEE Trans. Circuits Syst. Video Technol. 17, 158–167 (2007)
8. Chen, X., Lu, F.: An Improved Rate Control Scheme for H.264 Based on Frame Complexity
Estimation. Journal of Convergence Information Technology (accepted)
Some Properties in Hexagonal Torus as Cayley Graph
Zhen Zhang1,2
1
Department of Computer Science, Jinan University, Guangzhou 510632, P.R. China
2
Department of Computer Science, South China University of Technology
Guangzhou 510641, P.R. China
zhang2003174@yahoo.com.cn
Abstract. Vertices in the hexagonal mesh and torus network are placed at the
vertices of a regular triangular tessellation, so that each node has up to six
neighbors. The network has been proposed as an alternative interconnection
network to the mesh-connected computer and is also used to model cellular
networks, where vertices are base stations. Hexagonal tori are known to belong
to the class of Cayley graphs. In this paper, we use Cayley-graph formulations
of the hexagonal torus to develop elegant routing and broadcasting algorithms.
1 Introduction
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 422–428, 2011.
© Springer-Verlag Berlin Heidelberg 2011
which complicates the routing algorithm. Their broadcasting algorithm, on the other
hand, is very elegant. Xiao et al. propose that hexagonal meshes and tori, as well as
honeycomb and certain other pruned torus networks, belong to the class of Cayley
graphs, and that they also possess other interesting mathematical properties [3, 4].
Using Xiao's scheme, Zhang et al. gave an optimal routing algorithm [8], but their
routing algorithm is very complicated. They also gave an upper bound on the
diameter.
The rest of this paper is organized as follows. In Section 2, we give some definitions
of Cayley graphs that are used in this paper, and we also propose a coordinate system
for hexagonal networks that uses two axes, x and y. Based on this addressing scheme,
a simple routing algorithm is developed in Section 3, and we prove that this algorithm
is optimal. We also give a broadcasting algorithm using the Cayley graph and Coset
graph properties in Section 4. The paper concludes with Section 5.
We can adopt a coordinate system for hexagonal networks that uses two axes, x and
y, as shown in Fig. 2. Using the results obtained for hexagonal meshes according to
Lemma 1, we can deal with problems on the hexagonal torus that are, in general,
more difficult. We then have the following result.
Lemma 2. For the hexagonal torus Hl×k and integers a and b, l > a ≥ 0 , k > b ≥ 0 ,we
have dis((0, 0), (a, b))=min(max(a, b), max(l-a, k-b), l-a+b, k+a-b).
Proof: See [4]. ■
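Lemma 2 can be checked mechanically: breadth-first search on the Cayley graph of Zl×Zk with generator set S = {(±1, 0), (0, ±1), (1, 1), (-1, -1)} must agree with the closed form. A small Python sketch:

```python
from collections import deque

GENS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]

def bfs_dist(l, k, a, b):
    """Shortest-path distance from (0, 0) to (a, b) in the hexagonal
    torus H_{l x k}, computed by plain BFS over the six generators."""
    dist = {(0, 0): 0}
    q = deque([(0, 0)])
    while q:
        x, y = q.popleft()
        for dx, dy in GENS:
            v = ((x + dx) % l, (y + dy) % k)
            if v not in dist:
                dist[v] = dist[(x, y)] + 1
                q.append(v)
    return dist[(a % l, b % k)]

def lemma2_dist(l, k, a, b):
    # closed form of Lemma 2
    return min(max(a, b), max(l - a, k - b), l - a + b, k + a - b)

# exhaustive agreement on a small torus
assert all(bfs_dist(5, 7, a, b) == lemma2_dist(5, 7, a, b)
           for a in range(5) for b in range(7))
```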
According to the Lemma 2 and the properties of Coset graph, we can develop routing
and broadcasting algorithm of the hexagonal torus.
Route(p=(a, b)) // returns p'=(a', b'), the first vertex on a path from p to e=(0, 0)
{
  compute d = dis((0, 0), (a, b));
  if d = 0 then success;
  if d = max(a, b) then {
    if b = 0 then {a' = a-1; b' = b;}
    if a = 0 then {a' = a; b' = b-1;}
    if a > 0 and b > 0 then {a' = a-1; b' = b-1;}
  }
  if d = max(l-a, k-b) then {
    if k-b = 0 then {a' = a+1; b' = b;}
    if l-a = 0 then {a' = a; b' = b+1;}
    if l-a > 0 and k-b > 0 then {a' = a+1; b' = b+1;}
  }
  if d = l-a+b then {
    if b > 0 then {a' = a; b' = b-1;}
    if b = 0 then {a' = a+1; b' = b;}
  }
  if d = k-b+a then {
    if a > 0 then {a' = a-1; b' = b;}
    if a = 0 then {a' = a; b' = b+1;}
  }
}
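A runnable transcription of Route (Python used for illustration; coordinates are reduced modulo l and k after each move to realize the torus wrap-around, which the pseudocode leaves implicit):

```python
def dis(l, k, a, b):
    """Closed-form distance of Lemma 2 from (0, 0) to (a, b) in H_{l x k}."""
    return min(max(a, b), max(l - a, k - b), l - a + b, k + a - b)

def route_step(l, k, a, b):
    """One step of Route: the next vertex on a shortest path to (0, 0)."""
    d = dis(l, k, a, b)
    if d == 0:
        return (0, 0)
    if d == max(a, b):
        if b == 0:
            a = a - 1
        elif a == 0:
            b = b - 1
        else:
            a, b = a - 1, b - 1
    elif d == max(l - a, k - b):
        if k - b == 0:
            a = a + 1
        elif l - a == 0:
            b = b + 1
        else:
            a, b = a + 1, b + 1
    elif d == l - a + b:
        if b > 0:
            b = b - 1
        else:
            a = a + 1
    else:  # d == k - b + a
        if a > 0:
            a = a - 1
        else:
            b = b + 1
    return (a % l, b % k)

# every vertex of a 5x7 torus reaches (0, 0) in exactly dis() steps,
# which is the optimality claim of Theorem 1 below
for a0 in range(5):
    for b0 in range(7):
        v, steps = (a0, b0), 0
        while v != (0, 0):
            v = route_step(5, 7, *v)
            steps += 1
        assert steps == dis(5, 7, a0, b0)
```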
Theorem 1. The algorithm Route is optimal.
Proof. Let d’=dis((0, 0), (a’, b’)), then we only need to prove that d’=d-1. As an ob-
vious result, we know that |d-d’|≤1.
Case 1: d=max(a, b).
1. Subcase a: b=0 and a>0, that is, d=a. Let a’=a-1 and b’=b, then we have
max(a’, b’)=a-1. By Lemma 2, we can get d’ ≤max(a’, b’)=a-1=d-1, that is
d’=d-1.
2. Subcase b: a=0 and b>0, that is, d=b. Let a’=a and b’=b-1, then we have
max(a’, b’)=b-1. By Lemma 2, we can get d’ ≤max(a’, b’)=b-1=d-1, that is
d’=d-1.
3. Subcase c: a>0 and b>0. Let a’=a-1 and b’=b-1, then we have max(a’, b’)=
max(a-1, b-1)=d-1. By Lemma 2, we can get d’ ≤max(a’, b’)=d-1, that is
d’=d-1.
Case 2: d=max(l-a, k-b).
1. Subcase a: k-b=0 and l-a>0, that is, d=l-a. Let a’=a+1 and b’=b, then we
have max(l-a’, k-b’)=l-a-1=d-1. By Lemma 2, we can get d’ ≤max(l-a’, k-
b’)=d-1, that is d’=d-1.
2. Subcase b: l-a=0 and k-b>0, that is, d=k-b. Let a’=a and b’=b+1, then we
have max(l-a’, k-b’)=k-b-1=d-1. By Lemma 2, we can get d’ ≤max(l-a’, k-
b’)=d-1, that is d’=d-1.
3. Subcase c: l-a>0 and k-b>0. Let a’=a+1 and b’=b+1, then we have max(l-
a’, k-b’)=max(l-a-1, k-b-1)=d-1. By Lemma 2, we can get d’ ≤max(l-a’, k-
b’)=d-1, that is d’=d-1.
Case 3: d=l-a+b.
1. Subcase a: b>0. Let a’=a and b’=b-1, then we have l-a’+b’=l-a+b-1=d-1.
By Lemma 2, we can get d’ ≤l-a’+b’ =d-1, that is d’=d-1.
2. Subcase b: b=0. Let a’=a+1 and b’=b, then we have l-a’+b’=l-a-1+b=d-1.
By Lemma 2, we can get d’ ≤l-a’+b’ =d-1, that is d’=d-1.
Case 4: d=k-b+a.
1. Subcase a: a>0. Let a’=a-1 and b’=b, then we have k-b’+a’=k+a-1-b=d-1.
By Lemma 2, we can get d’ ≤k-b’+a’ =d-1, that is d’=d-1.
2. Subcase b: a=0. Let a’=a and b’=b+1, then we have k-b’+a’=k+a-b-1=d-1.
By Lemma 2, we can get d’ ≤k-b’+a’ =d-1, that is d’=d-1. ■
4 Broadcasting Algorithm
Given a connected graph G and a message originator u, the broadcast time bM(u) of
the vertex u is the minimum time required to complete broadcasting from vertex u
under the model M. The broadcast time bM(G) of G under M is defined as the
maximum broadcast time of any vertex u in G, i.e. bM(G) = max{bM(u) | u∈V(G)}.
In [9],
Xiao proposed the upper bound of bM(G) based on the mathematical properties of
Cayley graph and Coset graph.
Theorem 2. Let G be a finite group and K ≤ G. Assume that Γ= Cay(G, S) and Δ=
Cos(G, K, S) for some generating set S of G. For a communication model M, let
bM(ΓK) be the minimum time required to complete broadcasting in the vertices of K
from the identity element e. Then, we have bM(Γ) ≤ bM(Δ) + bM(ΓK).
Proof. See [9]. ■
Let G = Zl×Zk and S = {(±1, 0), (0, ±1), (1, 1), (-1, -1)}. According to Theorem 2,
we develop a broadcasting algorithm for the hexagonal torus Γ = Cay(G, S). Let
K = {(z, 0) | z∈Zl}; then K ≤ Zl×Zk. It is obvious that ΓK = Cay(K, (±1, 0)) is a
cycle, and it is a subgraph of Γ. The Coset graph Δ = Cos(G, K, S) is a cycle too.
Without loss of generality, we assume the identity element e = (0, 0) is the source
vertex.
Broadcasting Algorithm:
Procedure for the source vertex e=(0, 0):
{send(message, (1, 0), 0);
send(message, (l-1, 0), 0);
send(message, (0, 1), 1);
send(message, (0, k-1), 2);}
Procedure for all vertices except the source vertex:
{receive(message, (x, y), C);
 if (C = ⌊k/2⌋ or C = ⌈k/2⌉) then stop;
 if (y = 0) then
   if (x < ⌊l/2⌋) then
     {send(message, (x+1, y), C);
      send(message, (x, y+1), C+1);
      send(message, (x, y-1 (mod k)), C+2);}
   else if (x > ⌈l/2⌉) then
     {send(message, (x-1, y), C);
      send(message, (x, y+1), C+1);
      send(message, (x, y-1 (mod k)), C+2);}
 if (y > 0 and y < ⌊k/2⌋) then
   send(message, (x, y+1), C+1);
 if (y > ⌈k/2⌉) then
   send(message, (x, y-1 (mod k)), C+1);}
It is easy to verify that bM(ΓK) = ⌈l/2⌉ and bM(Δ) = ⌈k/2⌉; by Theorem 2, the total
number of communication rounds is then at most ⌈l/2⌉ + ⌈k/2⌉.
5 Conclusion
A family of 6-regular graphs, called hexagonal meshes, is considered as a multiprocessor
interconnection network. Processing vertices on the periphery of the hexagonal mesh
are wrapped around to achieve regularity and homogeneity, and this type of graph is
also called hexagonal torus. In this paper, we use Cayley-formulations for the hex-
agonal torus to develop a simple routing algorithm. This routing algorithm is proved
to be optimal. We also develop a broadcasting algorithm using the theory of Cayley
graphs and Coset graphs. We then discuss the diameter of the hexagonal torus and
obtain the exact value of the diameter.
There are many interesting problems to be pursued for the hexagonal torus archi-
tecture, such as fault-tolerant routing, embedding properties, and the application of
the hexagonal torus to solve or reduce the complexity of some difficult problems.
These topics are all closely related to the associated routing and broadcasting algo-
rithms and will be addressed in our future works.
Acknowledgments
This paper is supported by the Fundamental Research Funds for the Central Universi-
ties (21610307) and Training Project of Excellent Young Talents in University of
Guangdong (LYM09029).
References
1. Chen, M.S., Shin, K.G., Kandlur, D.D.: Addressing, Routing and Broadcasting in Hexago-
nal Mesh Multiprocessors. IEEE Trans. Computers 39(1), 10–18 (1990)
2. Nocetti, F.G., Stojmenovic, I., Zhang, J.Y.: Addressing and Routing in Hexagonal Networks
with Applications for Tracking Mobile Users and Connection Rerouting in Cellular Net-
works. IEEE Trans.
3. Xiao, W.J., Parhami, B.: Further Mathematical Properties of Cayley Graphs Applied to Hex-
agonal and Honeycomb Meshes. Discrete Applied Mathematics 155, 1752–1760 (2007)
4. Xiao, W.J., Parhami, B.: Structural Properties of Cayley Graphs with Applications to Mesh
and Pruned Torus Interconnection Networks. Int. J. of Computer and System Sciences, Spe-
cial Issue on Network-Based Computing 73, 1232–1239 (2007)
5. Tosic, R., Masulovic, D., Stojmenovic, I., Brunvoll, J., Cyvin, B.N., Cyvin, S.J.: Enumera-
tion of polyhex hydrocarbons up to h=17. J. Chem. Inform. Comput. Sci. 35, 181–187
(1995)
6. Lester, L.N., Sandor, J.: Computer graphics on a hexagonal grid. Comput. Graph. 8, 401–409 (1984)
7. Carle, J., Myoupo, J.F.: Topological properties and optimal routing algorithms for three di-
mensional hexagonal networks. In: Proceedings of the High Performance Computing in the
Asia-Pacific Region HPC-Asia, Beijing, China, vol. I, pp. 116–121 (2000)
8. Zhang, Z., Xiao, W., He, M.: Optimal Routing Algorithm and Diameter in Hexagonal Torus
Networks. In: Xu, M., Zhan, Y.-W., Cao, J., Liu, Y. (eds.) APPT 2007. LNCS, vol. 4847,
pp. 241–250. Springer, Heidelberg (2007)
9. Xiao, W.J., Parhami, B.: Some Mathematical Properties of Cayley Graphs with Applications
to Interconnection Network Design. Int. J. Computer Mathematics 82, 521–528 (2005)
Modeling Software Component Based on Extended
Colored Petri Net*
1 Introduction
With the development of computer technology, software requirements are growing
rapidly, but current software development is unable to fulfill the demand. In order to
solve the problem of the industrialization of software, the concept of the software
component was put forward. Large complex software systems are composed of
many software components. Building software systems from reusable software com-
ponents has long been a goal of software engineers. While other engineering disci-
plines successfully apply the reusable component approach to build physical systems,
it has proven more difficult to apply in software engineering. A primary reason for
this difficulty is that distinct software components tend to be more tightly coupled
with each other than most physical components [1]. A component is simply a
data capsule. Thus information hiding becomes the core construction principle under-
*
This work has been supported by the National Science Foundation of China under Grant No.
60963007, by the Science Foundation of Yunnan Province, China under Grant No.
2007F008M, the Key Subject Foundation of School of Software of Yunnan University and
the Open Foundation of Key Laboratory in Software Engineering of Yunnan Province under
Grant No. 2010KS01, the promotion program for youth key teachers of Yunnan university
No.21132014, by the Science Foundation of Yunnan Province Education Department No.
09J0037 and Yunnan University, China under Grant No. ynuy200920.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 429–434, 2011.
© Springer-Verlag Berlin Heidelberg 2011
430 Y. Yu et al.
1) P∪T ≠ Φ and P∩T = Φ;
2) F ⊆ (P×T)∪(T×P);
3) M0 ⊆ P is the initial mark of the OR-transition Petri net system;
4) A transition t∈T is enabled in a marking M iff ∃p∈˙t, M(p)=1 and ∀p′∈t˙,
M(p′)=0. It is said that the transition t is enabled under the mark M and the place p.
Let a transition t∈T fire under a mark M and a place p; the mark M is then
transformed into the mark M′, and we say that M′ is reachable from M in one step.
M′ is the successor mark of M under t and p, written M(p)[t>M′, where ∀p′∈P:

M′(p′) = M(p′)−1, if p′ = p;
M′(p′) = M(p′)+1, if p′ ≠ p and p′∈t˙;
M′(p′) = M(p′),   otherwise.
Definition 2. In an OR-transition Petri net system ∑=<P, T, F, M0>, the
corresponding underlying net N=<P, T, F> is called an OR-transition Petri net.

Definition 3. In an OR-transition Petri net ORPN=<P, T, F>, let x, y∈T∪P. If
∃b1, b2, b3, …, bk∈T∪P such that <x, b1>, <b1, b2>, <b2, b3>, …, <bk, y>∈F, then
we say that y is structure-reachable from x, denoted xF*y.
Definition 4. 1) S is a limited and non-empty type set, also known as the color set;
2) The multi-set m is a function on the non-empty color set S: m∈(S→N). For the
non-empty set S, m = Σs∈S m(s)s is the multi-set of S, and m(s) ≥ 0 is called the
coefficient of s;
3) Let SMS be the set of all multi-sets based on S, and let m, m1, m2∈SMS and
n∈N; then:
(1) m1 + m2 = Σs∈S (m1(s) + m2(s))s;
(2) n × m = Σs∈S (n × m(s))s.
5) AF: F→SMS is the arc expression function, where SMS is the set of all multi-sets
based on S, and it meets the following condition: ∀f∈F, AF(f)∈(AP(P(f)))MS,
where P(f) denotes the place p incident to the arc f.
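The two multi-set operations above map directly onto counted collections. As an illustration (not part of the formalism), with Python's collections.Counter, whose stored counts are exactly the coefficients m(s):

```python
from collections import Counter

m1 = Counter({"red": 2, "green": 1})   # the multi-set 2'red + 1'green
m2 = Counter({"red": 1, "blue": 3})

# (m1 + m2)(s) = m1(s) + m2(s)
m_sum = m1 + m2

# (n x m)(s) = n x m(s), here with n = 4
n_scaled = Counter({s: 4 * c for s, c in m1.items()})
```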
Definition 6. An OR-transition colored Petri net (ORCPN) system is an 8-tuple
∑=<P, T, F, S, AP, AT, AF, M0>, where:
1) N=<P, T, F, S, AP, AT, AF> is an OR-transition colored Petri net, called the
underlying net of the ORCPN system;
2) M0 is the initial marking of the ORCPN system ∑=<P, T, F, S, AP, AT, AF, M0>,
and it meets the following condition: ∀p∈P: M0(p)∈(AP(p))MS.

Definition 7. M: P→SMS is the marking of the ORCPN system ∑=<P, T, F, S, AP,
AT, AF, M0>, where ∀p∈P: M(p)∈(AP(p))MS.
3 Software Component
A software component is simply a data capsule. Thus information hiding becomes the
core construction principle underlying components. A component can be implemented
in (almost) any language, not only in module-oriented and object-oriented languages
but even in conventional languages [5].
In software architecture, a component should include two parts: an interface and an
implementation. The interface defines the functions and specifications provided by
the component, and the implementation includes a series of related operations [6].
Therefore, in this paper, the definition of component is as follows:
Definition 8. A component C in software architecture is a 3-tuple C = <Interface,
Imp, Spec>, where:
1) Interface = IP∪OP is the set of component interfaces, where IP represents the
input interfaces of the component and OP represents the output interfaces;
2) Imp is the implementation of the component, and it includes a series of operations
t1, t2, ..., tn; each operation completes a specific function;
3) Spec represents the internal specification of the component, and it is mainly used
to describe the relationships between the implementation and the interfaces.
Definition 9. Each interface in component is a 2-tuple: p = <ID, DataType>, where
ID is the unique identifier of the interface p, DataType is the type of the information
which can be accepted by the interface p.
In a component, each input interface represents a certain set of operations. A
component can have several input interfaces, and the outside environment can
request services from one or more of them. The output interfaces of a component
describe the requests the component makes of the outside environment: when the
component completes a function, it may need other components to provide some help.
In a component, each operation completes a certain function, and it can be defined as:
Definition 10. An operation t is a 5-tuple t = <S, D, R, PR(X), PO(X, Y)>, where:
S is the syntax of the operation t: Y = t(X), where X = (x1, x2, ..., xm) is the input
vector of t and Y = (y1, y2, ..., yn) is the output vector. D = D1×D2×...×Dm is the
domain of the input vector, with xi∈Di (1≤i≤m). R = R1×R2×...×Rn is the range of
the output vector, with yj∈Rj (1≤j≤n); each Di and Rj is a legal data type. PR(X) is
called the pre-assertion and PO(X, Y) the post-assertion. An input vector X
satisfying PR(X) is called a legitimate input; for a legitimate input X, an output
vector Y satisfying PO(X, Y) is called a legitimate output [7].
From the definition, the implementation of the operation t requires certain conditions; when those conditions are met, the related operations are carried out.
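Definitions 8–10 can be sketched as plain data structures. The following Python sketch is illustrative only: all names (`Interface`, `Operation`, `Component`, `double`) are assumptions, not part of the paper's formal model, and the callable pre-/post-assertions stand in for PR(X) and PO(X, Y).

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Interface:
    """Definition 9: an interface is a 2-tuple <ID, DataType>."""
    id: str
    data_type: type

@dataclass
class Operation:
    """Definition 10: an operation t = <S, D, R, PR(X), PO(X, Y)>."""
    name: str                                    # stands in for the syntax S
    pre: Callable[[Sequence], bool]              # pre-assertion PR(X)
    post: Callable[[Sequence, Sequence], bool]   # post-assertion PO(X, Y)
    body: Callable[[Sequence], Sequence]         # computes Y = t(X)

    def __call__(self, x):
        if not self.pre(x):                      # only legal inputs are accepted
            raise ValueError(f"{self.name}: PR(X) violated")
        y = self.body(x)
        assert self.post(x, y), f"{self.name}: PO(X, Y) violated"
        return y                                 # a legal output

@dataclass
class Component:
    """Definition 8: a component C = <Interface, Imp, Spec>."""
    inputs: list   # IP: input interfaces
    outputs: list  # OP: output interfaces
    imp: list      # Imp: operations t1, ..., tn
    spec: dict     # Spec: relates interfaces to operations

# A toy operation: doubles each element of a non-empty vector.
double = Operation(
    name="double",
    pre=lambda x: len(x) > 0,
    post=lambda x, y: len(y) == len(x) and all(b == 2 * a for a, b in zip(x, y)),
    body=lambda x: [2 * a for a in x],
)

c = Component(
    inputs=[Interface("in1", list)],
    outputs=[Interface("out1", list)],
    imp=[double],
    spec={"in1": ("double", "out1")},
)
```

Calling `double([1, 2, 3])` returns `[2, 4, 6]`, while calling it on an empty vector raises an error, mirroring the legal-input condition.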
5 Conclusion
In component-based software engineering, components have become increasingly
important. Large complex software systems are composed of many software compo-
nents. A component is simply a data capsule. Thus information hiding becomes the
core construction principle underlying components. In order to descript the software
component effectively, first, the definitions of OR-transition Colored Petri Net and
component are given. And in OR-transition Colored Petri Net, the transitions can
effectively represent the operations of the components. So an approach is presented to
descript the software component formally based on OR-transition Colored Petri Net.
References
1. Weide, B.W., Hollingsworth, J.E.: Scalability of reuse technology to large systems requires local certifiability. In: Latour, L. (ed.) Proceedings of the Fifth Annual Workshop on Software Reuse (October 1992)
2. Taylor, R.N., Medvidovic, N., Anderson, K.M., et al.: A component- and message-based architectural style for GUI software. IEEE Transactions on Software Engineering 22(6), 390–406 (1996)
3. Shaw, M., Garlan, D.: Software architecture: Perspectives on an emerging discipline. Pren-
tice Hall, Inc., Simon & Schuster Beijing Office, Tsinghua University Press (1996)
4. Yong, Y., Tong, L., Qing, L., Fei, D., Na, Z.: OR-Transition Colored Petri Net and its Ap-
plication in Modeling Software System. In: Proceedings of 2009 International Workshop on
Knowledge Discovery and Data Mining, Moscow, Russia, January 2009, pp. 15–18 (2009)
5. Wang, Z.j., Fei, Y.k., Lou, Y.q.: The technology and application of software component.
Science Press, Beijing (2005)
6. Clements, P.C., Weiderman, N.: Report on the 2nd international workshop on development
and evolution of software architectures for Product families. Technique Report, CMU/SEI-
98-SR-003, Carnegie Mellon University (1998)
7. Li, T.: An Approach to Modelling Software Evolution Processes. Springer, Berlin (2008)
Measurement of Software Coupling Based on Structure Entropy∗
1 Introduction
Software complexity measures are meant to indicate whether the software has desirable attributes such as understandability, testability, maintainability, and reliability. As such, they may be used to point out parts of the program that are prone to errors. An important way to reduce complexity is to increase modularization [1]. As part of their structured design method, Constantine and Yourdon [2] suggested that the modularity of a software system be measured with two properties: cohesion and coupling. Cohesion represents how tightly the elements of a module in a software system belong together; high cohesion of modules is always an aim of software developers. Coupling, on the other hand, is the degree of interdependence between pairs of modules; the minimum degree of coupling is obtained by making modules as independent as possible. Ideally, a well-designed software system maximizes cohesion and minimizes coupling. Page-Jones gives three
∗ This work has been supported by the National Science Foundation of China under Grant No. 60963007, by the Science Foundation of Yunnan Province, China under Grant No. 2007F008M, by the Key Subject Foundation of the School of Software of Yunnan University, by the Open Foundation of the Key Laboratory in Software Engineering of Yunnan Province under Grant No. 2010KS01, by the promotion program for youth key teachers of Yunnan University No. 21132014, and by the Science Foundation of Yunnan Province project "The Research of Software Evolution Based on OOPN".
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 435–439, 2011.
© Springer-Verlag Berlin Heidelberg 2011
principal reasons why low coupling between modules is desirable [3]. Myers [1] refined the concept of coupling by presenting well-defined, though informal, levels of coupling. But how can the coupling of a module be measured formally?
The contribution of this paper is a novel approach to measuring the coupling of object-oriented software systems. The approach is based on structure entropy. Entropy, one of the most influential concepts to arise from statistical mechanics, measures the disorder in a system. Based on structure entropy, the approach can describe coupling better.
The remainder of this paper is organized as follows. Section 2 analyzes the coupling of object-oriented software systems. Section 3 gives a short introduction to the concept of structure entropy. In Section 4, an approach to measuring the coupling is presented.
Definition 4. The structure entropy of the object based on the attributes is defined as:

E_Ci-A = − Σ_{k=i1..ip} ρ(k) ln ρ(k), where Σ_{k=i1..ip} ρ(k) = 1.
Definition 5. For the object Ci (i=1, 2, …, n), if the number of methods is iq and the coupling fan-in degree of method Mk (k=i1, i2, …, iq) is id(Mk), let

ρ(k) = id(Mk) / Σ_{l=i1..iq} id(Ml).
Definition 6. The structure entropy of the object based on the methods is defined as:

E_Ci-M = − Σ_{k=i1..iq} ρ(k) ln ρ(k), where Σ_{k=i1..iq} ρ(k) = 1.
Definition 7. For the object Ci (i=1, 2, …, n) in the software system S, the sum of the coupling fan-ins of the attributes Aik and the methods Mil is called the coupling fan-in of the object Ci, denoted id(Ci):

id(Ci) = Σ_{k=i1..ip} id(Ak) + Σ_{k=i1..iq} id(Mk).
Definition 8. If there are n objects in the software system S, and the fan-in degree of object Ci (i=1, 2, …, n) is id(Ci), let

ρ(Ci) = id(Ci) / Σ_{l=1..n} id(Cl).
Definition 10. If there are n objects (C1, C2, …, Cn) in the software system S, the coupling of the object Ci based on structure entropy and fan-in degree is defined as:

H(Ci) = − (id(Ci) / n) × (ρ(Ci) ln ρ(Ci) / ln n).
Definition 11. The coupling of the software system S based on structure entropy and fan-in degree is defined as:

H(S) = − Σ_{i=1..n} (id(Ci) / n) × (ρ(Ci) ln ρ(Ci) / ln n) = Σ_{i=1..n} H(Ci), where Σ_{i=1..n} ρ(Ci) = 1.
From these definitions we can conclude that H(S) ≥ 0. If the value of H(S) is greater than 1, there is multiple coupling between objects in the software system S.
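Definitions 8–11 translate directly into a few lines of Python. This is a sketch under the assumption that only the fan-in degrees id(Ci) are given; the function name and the toy input are illustrative, not from the paper.

```python
import math

def system_coupling(fanins):
    """Coupling of each object (Definition 10) and of the system (Definition 11).

    fanins[i] is id(Ci), the coupling fan-in degree of object Ci; n = len(fanins).
    """
    n = len(fanins)
    total = sum(fanins)                  # sum of id(Cl) over all objects
    h = []
    for d in fanins:
        rho = d / total                  # Definition 8: rho(Ci)
        term = 0.0 if rho == 0.0 else rho * math.log(rho)
        h.append(-(d / n) * term / math.log(n))   # Definition 10: H(Ci)
    return h, sum(h)                     # Definition 11: H(S) = sum of H(Ci)

h_each, h_system = system_coupling([4, 2, 2])
```

For the toy fan-ins [4, 2, 2] this gives H(S) ≈ 0.84, a nonnegative value as the definitions require.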
5 Conclusion
Software complexity measures are meant to indicate whether the software has desirable attributes such as understandability, testability, maintainability, and reliability. An important way to reduce complexity is to increase modularization. The modularity of a software system is measured with two properties: cohesion and coupling, where coupling is the degree of interdependence between pairs of modules. In order to measure the coupling of a module formally, this paper analyzes the coupling of object-oriented software systems and gives the definition of the structure entropy. Based on the structure entropy, a novel approach to measuring the coupling of object-oriented software systems is presented.
References
1. Myers, G.: Reliable Software Through Composite Design. Mason and Lipscomb Publishers,
New York (1974)
2. Constantine, L.L., Yourdon, E.: Structured Design. Prentice-Hall, Englewood Cliffs (1979)
3. Page-Jones, M.: The Practical Guide to Structured Systems Design. YOURDON Press, New
York (1980)
4. Lai, Y., Ji, F.-x., Bi, C.-j.: Evaluation of the Command Control System’s Organizational
Structure (CCSOS) Based on Structure Entropy Model. Systems Engineering 19(4), 27–31
(2001) (in Chinese)
5. Yong, Y., Tong, L., Na, Z., Fei, D.: An Approach to Measuring the Component Cohesion
Based on Structure Entropy. In: Proceedings of 2008 International Symposium on Intelligent
Information Technology Application (IITA 2008), Shanghai, China, December 2008, pp.
697–700 (2008)
6. Yu, Y., Tang, J., Li, W., Li, T.: Approach to measurement of class cohesion based on struc-
ture entropy. System Engineering and Electronics 31(3), 702–704 (2009)
A 3D Grid Model for the Corridor Alignment
1 Introduction
A corridor of a highway or railway may generally be considered a continuous trajectory between two points in space through which service is provided. The corridor alignment problem refers to procedures for locating corridors of highways or railways in three-dimensional space. In the design stage, the space position of a corridor is expressed by vertical and horizontal alignments. The quality of the corridor alignment is frequently an important factor in determining the overall success of highway or railway engineering projects. A well-designed corridor provides positive results such as reduced project costs, increased levels of service, and improved accessibility of service. A poorly designed corridor may lead to negative environmental impacts, a poor level of service, and sometimes life-threatening danger.
The corridor alignment problem is to find a 3D route connecting two given points. The resulting corridor alignment should minimize the total costs associated with the route while meeting certain horizontal and vertical constraints. The corridor alignment is generally a piecewise linear trajectory, which is coarse; another design stage is usually employed to detail the alignment further. However, the procedure of finding a corridor has a great influence on that design stage.
Research on the corridor alignment problem must first set up a basic grid structure for the search methods. This paper first discusses the different grid structures in the literature. Then, 3D models in GIS and GMS (3D Geosciences Modeling System) are introduced, and from these models a new model for 3D corridor alignment is presented in Section 3.
found. Parker [1] developed a two-stage approach to select a route corridor subject to gradient constraints. Yuelei [2] employed a cellular automata model in the design of railway route selection. In addition, a heuristic tabu search algorithm has been used for solving the problem [3]. All these models employ a 2D model to realize a space alignment.
Square Grid: A square grid offers eight directions from each square (except for boundary squares, which have fewer than eight). Turner [4] and Nicholson [5] started developing a model based on this grid in which all kinds of relevant costs are evaluated for each square. The best corridor is found as the shortest path on the grid. The grid has blunt angular selectivity, since the selectable angles are only integer multiples of 45°, but it makes backward bends possible. The grid is the most frequently used for its simplicity.
Honeycomb Grid: A honeycomb grid (depicted in Fig. 1) offers twelve directions, and the angle between two adjacent directions is 30 degrees, smaller than in the square grid. This leads to shifts of at most 15 degrees and better angular selectivity. However, it imposes heavy computational effort on any search algorithm because of its larger search space.
3 3D Spatial Models
A corridor may be located on the earth's surface (fill earthwork or a bridge) or under it (cut earthwork or a tunnel), so it should be represented by a 3D spatial model. 3D geometric representation models provide geometric descriptions of objects for the storage, geometric processing, display, and management of 3D spatial data by computers in 3D GIS and 3D GMS (Geosciences Modeling System). These models are classified as facial models, volumetric models, and mixed models [9][10]. In facial models, the geometric characteristics of objects are described by surface cells. Volumetric models describe the interior of objects by using solid information instead of surface information; they include voxel, needle, TEN, block, octree, 3D Voronoi, tri-prism, and so on. A volumetric model, based on 3D spatial partition and the description of real 3D entities, emphasizes the representation of both the border and the interior of 3D objects [8].
A facial model emphasizes the surface description of 3D space; it is convenient for visualization and data updating but difficult for spatial analysis. For corridor alignment, detailed knowledge of the sub-surface is not necessary at present, so a natural extension of the facial model will meet the needs of corridor alignment. Facial models include the grid model, the TIN model, B-Rep (Boundary Representation), series sections, etc.
The grid and the TIN (triangulated irregular network) are two important structures of the digital elevation model (DEM), a digital representation of ground surface topography or terrain. A DEM can be represented as a raster (a grid of squares) or as a TIN, and may consist of raster points arranged in a regular grid pattern or of points in an irregular pattern. A DEM is a digital representation of a continuous variable over a two-dimensional surface by a regular array of z values referenced to a common datum:
z = f ( x, y ), x, y ∈ D (1)
where z represents the elevation of the point (x, y) in the field D. In particular, a DEM may be defined as a regular one by a gridded matrix of elevation values that represents the surface form [11].
The structure of a DEM with irregular triangulation is more complicated than that with a regular grid in creation, calculation, and data organization, whereas raster points arranged at regular grid corners may be represented simply by row and column numbers. The grid-based model can be extended to the 3D model on which a corridor is based, so the regular grid is chosen for this application.
z2 = f2(x, y), a plane at angle θ to the horizontal plane, is a slant plane. A new curve surface is constructed from the slant plane π2 and the surface π1:

π0: z0 = f0(i) = ω1·f1(i) + ω2·f2(i)      (2)

where ω1 and ω2 are the weights of the two surfaces, and ω1 + ω2 = 1. The curve surface π0 is named the tendency surface; it not only embodies the ground surface for the vertical alignment of the corridor but also meets the short-and-direct demand for the horizontal alignment of the corridor.
[Figure: the 3D grid structure, showing vertexes on vertical axes grouped into layers in x-y-z space]
Corner points extended up and down from the tendency surface and spaced at equal vertical intervals construct an axis. A point on axis i (i = 1, …, N_axis, where N_axis is the total number of axes) is called a vertex. Let dz denote the distance between two contiguous vertexes. The elevation of vertex k on axis i (k = 1, 2, …, N_layer, where N_layer is the total number of layers on one axis) is:

z_{i,k} = f0(i) − N_layer·dz + k·dz      (3)
A set of points with the same k on different axes is called a layer. All the points on the axes and layers form the search space of the path-finding algorithm. A 3D alignment can be represented as a sequence of adjacent vertexes originating from a designated start vertex and ending at a designated end vertex. The purpose of an algorithm based on the grid structure is to find contiguous vertexes that meet certain constraints in this space and form a 3D corridor alignment.
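The construction of equations (2) and (3) can be sketched in a few lines. The surfaces f1 and f2, the weights, and the grid sizes below are illustrative assumptions; only the two formulas come from the text.

```python
import numpy as np

N_axis, N_layer, dz = 5, 4, 2.0     # assumed grid sizes and vertex spacing

i = np.arange(N_axis)
f1 = 100.0 + 3.0 * np.sin(i)        # stand-in ground surface along the axes
f2 = 100.0 + 0.5 * i                # stand-in slant plane from start to end
w1, w2 = 0.6, 0.4                   # weights with w1 + w2 = 1

f0 = w1 * f1 + w2 * f2              # eq. (2): the tendency surface

# eq. (3): elevation of vertex k on axis i, for k = 1, ..., N_layer
k = np.arange(1, N_layer + 1)
z = f0[:, None] - N_layer * dz + k[None, :] * dz   # shape (N_axis, N_layer)
```

Row i of `z` holds the vertexes of axis i, column k is layer k, and the top layer (k = N_layer) coincides with the tendency surface; a 3D alignment is then a sequence of one vertex per axis.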
5 Application
This paper presents a kind of grid structure for optimizing highway or railway corridor alignments, on which optimization algorithms may be based. For example, we set up a kind of ant colony optimization method based on the grid structure. The basic process of forming a corridor with this method is: an ant placed on the start vertex selects another vertex from its candidate set according to a selection probability, repeating this action until the end vertex is reached. The candidate set of a vertex includes all the surrounding vertexes that satisfy the constraints of the corridor alignment. A geographic map to test the
444 K. Miao and L. Li
References
1. Parker, N.A.: Rural Highway Route Corridor Selection. Transportation Planning and Tech-
nology (3), 247–256 (1977)
2. Yuelei, H., Lingkan, Y.: Cellular automata model in design of railway route selection.
Journal of Lanzhou Jiaotong University 23(1), 6–9 (2004)
3. Hou, K.J.: A Heuristic Approach For Solving The Corridor Alignment Problem, p. 110.
Purdue University (2005)
4. Turner, A.K., Miles, R.D.: A Computer Assisted Method of Regional Route Location.
Highway Research Record 348, 1–15 (1971)
5. Nicholson, A.J., Elms, D.G., Williman, A.: Optimal highway route location. Com-
puter-Aided Design 7(4), 255–261 (1975)
6. Trietsch, D.: A family of methods for preliminary highway alignment. Transportation Sci-
ence 21(1), 17–25 (1987)
7. Jong, J.: Optimizing Highway Alignment with Genetic Algorithms. University of Maryland,
College Park (1998)
8. Cheng, P.: A Uniform Framework of 3D Spatial Data Model and Data Mining from the
Model. In: Li, X., Wang, S., Dong, Z.Y. (eds.) ADMA 2005. LNCS (LNAI), vol. 3584, pp.
785–791. Springer, Heidelberg (2005)
9. Lixin, W., Wenzhong, S.: GTP-based Integral Real-3D Spatial Model for Engineering Ex-
cavation GIS. Geo-Spatial Information Science 7(2), 123–128 (2004)
10. Li-xin, W., Wen-zhong, S., Gold, C.H.: Spatial Modeling Technologies for 3D GIS and 3D
GMS. Geography and Geo-Information Science 19(1), 5–11 (2003)
11. Weibel, R., Heller, M.: Digital Terrain Modelling. In: Maguire, D.J., Goodchild, M.F., Rhind, D.W. (eds.) Geographical Information Systems: Principles and Applications, pp. 269–297. Longman, London (1991)
The Consumers’ Decisions with Different Delay Cost in
Online Dual Channels
Shengli Chen
Abstract. By constructing linear and exponential delay cost functions, we formulate consumer decision models with different delay costs based on threshold strategies in the dual mechanism, and prove that there exists a unique symmetric Nash equilibrium in which the high-valuation consumers use a threshold policy to choose between the two selling channels. We then extend the model to a general delay cost function and find that consumers with higher valuations will choose threshold strategies to make their decision upon arriving at the website whenever their delay cost function increases continuously and strictly with the remaining auction time.
1 Introduction
With the rise of the Internet, online auctions are growing in popularity and are fundamentally changing the way many goods and services are traded. Nowadays, in the business-to-consumer market, many firms sell the same or almost identical products online using auctions and fixed prices simultaneously. For example, airline and cruise companies sell tickets through their own websites at posted prices, but also through auctions run by Priceline.com; IBM and Sun Microsystems sell their products at their own websites, but also offer some selected new and refurbished products through eBay.com auctions.
In the literature, several researchers have paid attention to the problem of jointly managing auction and list price channels. Wang studied the efficiency of posted-price selling and auctions [1]. Kultti studied the performance of posted prices and auctions [2]. Within the B2C framework, Vakrat and Seidmann compared prices paid through online auctions and catalogs for the same product [3]. In the infinite-horizon model of van Ryzin and Vulcano, the seller operates auctions and posted prices simultaneously and replenishes her stock in every period; however, the streams of consumers for the two channels are independent, and the seller decides how many units to allocate to each channel [4]. Etzion et al. studied the simultaneous use of posted prices and auctions in a different model [5]. Caldentey and Vulcano considered two different variants of the problem when customers have both listed price and auction channels [6]. Sun studied Web stores selling a product at a posted price and simultaneously running auctions for
the identical product; that work studies a dual mechanism in which an online retailer combines the two conventional mechanisms (posted price and auction) for multiple units of a product [7].
In this paper, we mainly study how consumers behave and make their purchasing decision when they arrive at the website in dual channels. We analyze simultaneous online auction and list price channels in a B2C framework, with consumers arriving according to a Poisson process and deciding which channel to join. We characterize the delay cost of high-valuation consumers by a linear function, an exponential function, and a general function, respectively. We formulate the consumer decision models with these delay costs based on threshold strategies in the dual mechanism, and prove that there exists a unique symmetric Nash equilibrium in which the high-valuation consumers use a threshold policy to choose between the two selling channels. We then reach a general conclusion: consumers with higher valuations will choose threshold strategies to make their decision upon arriving at the website whenever their delay cost function increases continuously and strictly with the remaining auction time, regardless of the form of the delay cost.
We model an online seller who offers identical items using two selling mechanisms, posted price and auctions, simultaneously. The seller's objective is to maximize his revenue per unit time. The seller chooses the auction duration T, the quantity to auction q, and the posted price p. Without loss of generality, we assume that the marginal cost of each unit is zero. The seller's publicly declared reserve price is r. We also assume that the seller can satisfy any demand. Consumers visit the website according to a Poisson process with rate λ, and each consumer is interested in purchasing one unit of the good. Consumers have independent private values for the good. We assume that each consumer's valuation vi is independently drawn from a probability distribution with cumulative distribution function F(⋅) with support [vl, vh], where r ≤ vl. We assume that consumers can observe both channels on arrival, with no additional costs; hence, consumers are fully informed.
We model the auctions using the sealed-bid (q+1)-price format with risk-neutral bidders having unit demand and independent private values for the good. In a sealed-bid (q+1)-price auction, the dominant strategy for each bidder is to bid his true valuation of the item. By doing so, the bidder sets an upper bound on the price that he is willing to pay: he will accept any price below his reservation price and none above.
In our setting, the posted price first splits the consumers into two parts: consumers with low valuations (i.e., valuations less than the posted price) and those with high valuations (i.e., valuations greater than or equal to the posted price). All low-valuation consumers become bidders in the auctions. A fraction β of the high-valuation consumers also become bidders, while a fraction 1 − β purchase at
the posted price. The probability of participation, β, depends on the design variables (q, T, and p), and much of the analysis in this section is devoted to its determination. Some bidders win the auction and some lose; the high-valuation losing bidders will purchase at the posted price at the auction's conclusion.
We characterize consumer i by his valuation vi and by the time remaining in the auction when he arrives, t_c. Low-valuation consumers, those with vi < p, cannot buy the item at its posted price, because the value they would get from doing so is negative; they therefore choose between bidding and staying out of the market. High-valuation consumers, those with vi ≥ p, prefer to receive the item earlier rather than later and may choose not to bid, purchasing at the posted price instead, if the remaining time of the auction is significantly long. Low-valuation consumers, however, must choose to bid, because they have no other option for obtaining the item. We define U_A^-(vi, t_c) as the maximum expected value from participating in the auction for a consumer of type (vi, t_c) with vi < p. Here Pr(win | b) is the probability that the consumer wins the item in the auction by bidding b, and E[auction_payment | b] is the expected auction payment of a bidder who bids b.
When consumers who will participate in the auction arrive at the website and the time remaining in the auction is long enough, they do not obtain the item instantly, and their utility from buying the item by bidding is reduced; we call this negative utility the delay cost, and take it to be an increasing function of the time remaining until the end of the auction. We first assume that the delay cost is linear in the time remaining until the auction ends. If Dc(t_c) denotes the delay cost, the linear delay cost is

Dc(t_c) = w_l·t_c ,  for a low-valuation consumer
Dc(t_c) = w_h·t_c ,  for a high-valuation consumer      (2)
where w_l denotes the delay cost per unit time of low-valuation consumers, w_h denotes the delay cost per unit time of high-valuation consumers, and w_h > w_l. In general, low-valuation consumers also prefer to receive the item earlier rather than later; to simplify the problem, however, we assume that the delay cost per unit time is w_l = 0 for consumers with vi < p. The decision problem faced by these consumers with vi < p is

Max{U_A^-(vi, t_c), 0}      (4)
448 S. Chen
High-valuation consumers, those with vi ≥ p , would buy the item for its posted price
if auctions were not offered. High-valuation consumers choose between buying the
item for its posted price and participating in the auction. It is never optimal for these
consumers to do nothing, because their utility from buying the item for the posted
price is nonnegative. We assume that when high-valuation consumers purchase the
item for its posted price, they obtain the item instantly. When they choose to bid, they
are choosing to experience a delay in obtaining and using the item, because they must
wait until the end of the auction. Hence, when choosing to bid, these consumers incur
a delay cost that is an increasing function of the time remaining until the end of the
auction.
If we define U_A^+(vi, t_c) as the expected net revenue from participating in the auction for a consumer with valuation vi ≥ p, the corresponding net revenue from buying the item at its posted price is

U_B^+(vi, t_c) = vi − p      (6)
Because the consumer evaluates the expected payoff from bidding, using an optimal bidding strategy, and compares it with the payoff from purchasing the item at the posted price, a high-valuation consumer arriving with t_c time units remaining in the auction solves the following optimization problem:

Max_{j∈{A,B}} U_j^+(vi, t_c)      (7)
In the sealed (q+1)-price auction, it is a dominant strategy for bidders with valuations below the posted price to bid their true values, and for bidders with valuations above the posted price to bid the posted price. In other words, if a buyer with valuation vi decides to enter the auction, then he will bid b(vi), where

b(vi) = vi ,  if vi < p
b(vi) = p ,   if vi ≥ p      (8)
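Equation (8) is a one-line function; this Python rendering is a sketch with illustrative names.

```python
def bid(v_i: float, p: float) -> float:
    """Weakly dominant bid of equation (8) in the sealed (q+1)-price auction:
    bid the true valuation below the posted price, otherwise the posted price."""
    return v_i if v_i < p else p
```

For example, with posted price p = 5, a bidder with valuation 3 bids 3, and a bidder with valuation 7 bids 5.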
Consumers with vi ≥ p may choose between buying the item at its posted price and participating in the auction. A high-valuation consumer chooses to participate in the auction if and only if his net expected revenue from participating in the auction exceeds the net expected revenue from buying the item at its posted price. If we define ΔU^+(vi, t_c) as the excess of the expected revenue from participating in the auction over the expected revenue from buying the item at its posted price, the participation condition may be written as

Δπ(t_c) ≥ w_h·t_c      (11)
Definition 1. In the dual mechanism, the high-valuation consumer of type (vi, t_c) uses the following threshold policy with threshold time t∗ to choose between buying the item at its posted price and bidding in the auction: (1) When t∗ < T: if t_c ≤ t∗, the high-valuation consumer chooses to bid in the auction; if t_c > t∗, he chooses to buy the item at its posted price. (2) When t∗ = T, the high-valuation consumer chooses to buy the item at its posted price.
Although high-valuation consumers may choose between buying the item at its posted price and bidding in the auction according to the above threshold policy on the time remaining, they must calculate the threshold time before making their choice. The following result is therefore given.
Theorem 1. In a dual channel with a sealed-bid (q+1)-price auction, if bidders with linear delay cost follow the weakly dominant bidding strategy of equation (8), then the threshold strategies of all high-valuation consumers with threshold time t have a unique symmetric Nash equilibrium, where t is given by the solution of the following fixed-point equation:

(1) if Δπ_h(T) ≤ w_h·T, then Δπ_h(t) = w_h·t;
(2) if Δπ_h(T) > w_h·T, then t = T.
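Since Δπ_h(t) is increasing and the right-hand side w_h·t is linear, the fixed point of Theorem 1 can be found numerically, for instance by bisection. The surplus function `delta_pi` below is purely illustrative (the paper derives Δπ_h from the auction model); only the case split follows the theorem.

```python
import math

def threshold_time(delta_pi, w_h, T, tol=1e-9):
    """Solve Theorem 1's fixed point delta_pi(t) = w_h * t, capped at T.

    delta_pi must be continuous and increasing with delta_pi(0) = 0.
    """
    if delta_pi(T) > w_h * T:          # case (2): bidding dominates up to T
        return T
    lo, hi = 0.0, T                    # case (1): bisect g(t) = delta_pi(t) - w_h*t
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if delta_pi(mid) - w_h * mid > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative concave surplus, NOT the paper's actual delta_pi_h.
delta_pi = lambda t: 1.5 * (1.0 - math.exp(-0.05 * t))
t_bar = threshold_time(delta_pi, w_h=0.025, T=100.0)
```

With these assumed numbers the fixed point lies between 56 and 57; a consumer arriving with less remaining time than this bids, otherwise he buys at the posted price.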
Next, consider an exponential delay cost:

D′c(t_c) = 0 ,                  for a low-valuation consumer
D′c(t_c) = e^(w_h·t_c) − 1 ,    for a high-valuation consumer      (12)
In equation (12), when the time remaining t_c is zero or relatively small, the delay cost of a high-valuation consumer is equal or approximately equal to the linear delay cost of equation (2). When the time remaining t_c is very long, the high-valuation consumer's delay cost in equation (12) is greater than the linear delay cost of equation (2); in particular, the deviation between the linear and the exponential delay cost becomes very large. Therefore, the delay cost function of equation (12) is used to describe time-sensitive customers' delay cost. When high-valuation customers' delay cost is given by equation (12), will their decision behavior change? The following Theorem 2 shows that all high-valuation consumers with the exponential delay cost of equation (12) similarly choose between bidding in the auction and buying the item at the posted price according to the time remaining until the end of the auction.
Theorem 2. In a dual channel with a sealed-bid (q+1)-price auction, if bidders with exponential delay cost follow the weakly dominant bidding strategy of equation (8), then the threshold strategies of all high-valuation consumers with threshold time t̂ have a unique symmetric Nash equilibrium, where t̂ is given by the solution of the following fixed-point equation:

(1) if Δπ_h(T) ≤ e^(w_h·T) − 1, then Δπ_h(t̂) = e^(w_h·t̂) − 1;
(2) if Δπ_h(T) > e^(w_h·T) − 1, then t̂ = T.
Although all high-valuation consumers have the same decision mechanism under exponential and linear delay cost when they arrive at the website, their threshold strategies, and hence their threshold times, differ under the different delay costs. The following numerical experiments further illustrate the different threshold times under linear and exponential delay cost.

Fig. 1(a) depicts the two different threshold times under linear delay cost and exponential delay cost, where we assume the arrival rate λ = 1, the delay cost per unit time of high-valuation consumers w_h = 0.025, the auction length T = 100, and the posted
[Figure: panels (a) and (b) plot Δπ_h(t) against the linear delay cost Dc and the exponential delay cost D′c, marking the threshold times t and t̂]
Fig. 1. The different threshold times under the different delay costs
For the high-valuation consumers who arrive at the website, the delay cost may take forms other than the linear cost of equation (2) and the exponential cost of equation (12). We define the following general delay cost function:

D″c(t_c) = 0 ,          for a low-valuation consumer
D″c(t_c) = f_D(t_c) ,   for a high-valuation consumer      (13)
Theorem 3. In a dual channel with a sealed-bid (q+1)-price auction, if bidders with the general delay cost of equation (13) follow the weakly dominant bidding strategy of equation (8), then the threshold strategies of all high-valuation consumers with threshold time t̃ have a unique symmetric Nash equilibrium, where t̃ is given by the solution of the following fixed-point equation:

(1) if Δπ_h(T) ≤ f_D(T), then Δπ_h(t̃) = f_D(t̃);
(2) if Δπ_h(T) > f_D(T), then t̃ = T.
From the above Theorem 3, we can see that when the delay cost of all high-valuation
consumers with type (v_i, t_c) is denoted by equation (13), they similarly choose
between bidding in the auction and buying the item at the posted price according to the
time remaining until the end of the auction.
3 Conclusions
In this paper, we characterize the delay cost of high-valuation consumers by linear
function, exponential function and general function, respectively. Then we formulate
the consumer decision model with three different delay cost based on the threshold
strategies in dual-mechanism, and prove that there exists a unique symmetric Nash
equilibrium in which the high-valuation consumers use a threshold policy to choose
between the two selling channels. We find that higher-valuation consumers will adopt
threshold strategies upon arriving at the website as long as their delay cost function
increases strictly and continuously with the remaining auction time, regardless of the
particular form of the delay cost.
Our analysis relies on the independent private value model proposed by Riley and
Samuelson (1981), yet many auctions have bidders that follow common-value or affiliated-
value models; these bidders' valuations are determined, at least in part, by an unob-
servable but objective value of the item. In addition, the seller is assumed to be risk
neutral, which in practice is not always the case. When bidders are risk averse, the four
auction types that follow the rules of the family of auctions in Riley and Samuelson
(1981) will not generate identical expected revenues, and the optimal reserves will
change. These issues need to be considered in future research.
Acknowledgment
Financial support from a project with a special fund for the construction of key
disciplines in Shanxi Province, the Shanxi Nature Science Foundation under Grant
No. 2010JQ9006, and the Shanxi Department of Education Foundation under Grant No.
2010JK552 is gratefully acknowledged, as are the helpful comments from anonymous
reviewers.
References
1. Wang, W.: Auctions versus posted-price selling. American Economic Review, 838–851
(1993)
2. Kultti, K.: Equivalence of auctions and posted prices. Games and Economic Behavior, 106–
113 (1999)
3. Pinker, E., Seidmann, A., Vakrat, Y.: Managing online auctions: Current business and re-
search issues. Management Science, 1457–1484 (2003)
4. van Ryzin, G.J., Vulcano, G.: Optimal auctioning and ordering in an infinite horizon
inventory-pricing system. Operations Research 52(3), 195–197 (2004)
5. Etzion, H., Pinker, E., Seidmann, A.: Analyzing the simultaneous use of auctions and posted
prices for on-line selling. Working paper CIS-03-01, William E.Simon Graduate School of
Business Administration, University of Rochester (2003)
6. Caldentey, R., Vulcano, G.: Online auction and list price revenue management. Working
Paper, Stern School of Business, New York University
7. Sun, D.: Dual mechanism for an online retailer. European Journal of Operational Research,
1–19 (2006)
Fuzzy Control for the Swing-Up of the Inverted
Pendulum System
1 Introduction
The inverted pendulum system is a standard problem in the area of control systems
[1, 2]. The inverted pendulum combines multiple technologies from different fields
such as robotics, control theory and computer control. The system itself is inherently
unstable, high-order, multivariable, strongly coupled and nonlinear, so it serves as a
typical control object for study. Fuzzy control theory is a kind of intelligent control
that is currently both an emphasis and a difficulty of control theory [3-5]. Its merits
are that it does not depend on an accurate mathematical model of the system, it is
robust, and it offers intelligence, self-learning ability, suitability for computer control
and a friendly user interface. In this paper, we compare the fuzzy control of a linear
single inverted pendulum with PID and LQR control methods, and confirm the
superiority of fuzzy control for nonlinear, strongly coupled systems such as the
inverted pendulum.
2 Modeling
A schematic of the inverted pendulum is shown in Figure 1[6].
* Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 454–460, 2011.
© Springer-Verlag Berlin Heidelberg 2011
[Figure 1: schematic of the pendulum of length 2L and mass m at angle φ on the cart]
A motor-driven cart provides the horizontal motion; the cart position x and joint
angle φ are measured via quadrature encoders, and l denotes the distance from the
rotation axis to the pendulum's center of mass. Applying the laws of dynamics to the
inverted pendulum system gives the equations of motion:
$$
\begin{cases}
M\ddot{x} = F - b\dot{x} - N \\[4pt]
N = m\,\dfrac{d^{2}}{dt^{2}}\bigl(x + l\sin\theta\bigr)
\end{cases}
\qquad (1)
$$
It follows, combining the force balance (1) with the torque-equilibrium equation and
linearizing about the upright position, that the state-space model is

$$
\begin{bmatrix}\dot{x}\\ \ddot{x}\\ \dot{\varphi}\\ \ddot{\varphi}\end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 & 0\\[2pt]
0 & \dfrac{-(I+ml^2)b}{I(M+m)+Mml^2} & \dfrac{m^2gl^2}{I(M+m)+Mml^2} & 0\\[8pt]
0 & 0 & 0 & 1\\[2pt]
0 & \dfrac{-mlb}{I(M+m)+Mml^2} & \dfrac{mgl(M+m)}{I(M+m)+Mml^2} & 0
\end{bmatrix}
\begin{bmatrix}x\\ \dot{x}\\ \varphi\\ \dot{\varphi}\end{bmatrix}
+
\begin{bmatrix}0\\[2pt] \dfrac{I+ml^2}{I(M+m)+Mml^2}\\[8pt] 0\\[2pt] \dfrac{ml}{I(M+m)+Mml^2}\end{bmatrix}u
$$

$$
y=\begin{bmatrix}x\\ \varphi\end{bmatrix}
=\begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\end{bmatrix}
\begin{bmatrix}x\\ \dot{x}\\ \varphi\\ \dot{\varphi}\end{bmatrix}
+\begin{bmatrix}0\\ 0\end{bmatrix}u
\qquad (8)
$$
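As a sketch of how model (8) can be used, the matrices can be assembled numerically and a stabilizing LQR gain computed from them; the physical parameters and weighting matrices below are illustrative assumptions, not the experimental rig's measured values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative parameters (not the rig's measured values)
M, m, l, b, g = 1.0, 0.1, 0.25, 0.1, 9.8
I = m * l**2 / 3.0                     # rod inertia about its center (assumed shape)
q = I * (M + m) + M * m * l**2         # common denominator in (8)

A = np.array([
    [0, 1, 0, 0],
    [0, -(I + m*l**2)*b/q,  m**2 * g * l**2 / q, 0],
    [0, 0, 0, 1],
    [0, -m*l*b/q,           m*g*l*(M + m)/q,     0],
])
B = np.array([[0.0], [(I + m*l**2)/q], [0.0], [m*l/q]])

# LQR: minimize the integral of x'Qx + u'Ru; gain from the Riccati solution
Q = np.diag([10.0, 1.0, 10.0, 1.0])
R = np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # K = R^{-1} B' P

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

The closed-loop eigenvalues of A − BK all lie in the left half-plane, which is the state-feedback part that the paper later combines with the fuzzy controller.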
In fuzzy controller design and in practical applications of fuzzy control theory,
determining the membership functions from the fuzzy characteristics of the object
under study is a very important issue. The membership functions should reflect the
specific characteristics of the objective fuzziness [7, 8].
Control rules are the core of fuzzy control and are generally given by experts. In this
study, because the control object is a single inverted pendulum, and taking into account
program size and operating efficiency, many experiments led to the final choice of a
3 × 3 rule base. The control rules are listed in Table 1.
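A 3 × 3 rule base of this kind can be sketched as follows. Since Table 1 is not reproduced in the source, the membership functions, universes and rule consequents here are illustrative assumptions, using the common linguistic terms N (negative), Z (zero), P (positive):

```python
def tri(x, a, b, c):
    # Triangular membership function with feet at a, c and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Memberships for N, Z, P over the normalized universe [-1, 1]
def grades(x):
    return {"N": tri(x, -2.0, -1.0, 0.0),
            "Z": tri(x, -1.0, 0.0, 1.0),
            "P": tri(x, 0.0, 1.0, 2.0)}

# Illustrative 3 x 3 rule table: (error term, error-rate term) -> output level
RULES = {("N", "N"): -1.0, ("N", "Z"): -1.0, ("N", "P"):  0.0,
         ("Z", "N"): -0.5, ("Z", "Z"):  0.0, ("Z", "P"):  0.5,
         ("P", "N"):  0.0, ("P", "Z"):  1.0, ("P", "P"):  1.0}

def fuzzy_u(e, ec):
    # Product inference with weighted-average defuzzification
    # (zero-order Sugeno style, chosen here for brevity).
    ge, gec = grades(e), grades(ec)
    num = den = 0.0
    for (te, tec), out in RULES.items():
        w = ge[te] * gec[tec]
        num += w * out
        den += w
    return num / den if den > 0 else 0.0
```

By symmetry of the assumed table, zero error and error rate give zero output, and the output sign follows the error sign.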
Classical PID control can only act on a single input. Two schemes were considered
for achieving single-stage inverted pendulum control: integrating the state-space
approach with fuzzy control, and dual-loop fuzzy control of displacement. Based on
practical control needs and experience, a multi-input single-output (MISO) fuzzy
control structure was chosen. Because cascade control is complex in itself and
real-time adjustment of the model is somewhat difficult, the outer position loop
performed poorly when applied to the real inverted pendulum, leaving that scheme
with no advantage over LQR control. Therefore, this experiment chose the combination
of state space and fuzzy control; the system block diagram is shown in Figure 2.
Here ke, kec and ku are quantization and scaling factors, and K1T and K2T are
transposed rows of the system feedback gain matrix K. The single inverted pendulum
has four state variables (the angle and displacement errors and their rates of change),
while the designed fuzzy controller has only two input variables, so the four state
variables must be combined: the angle and displacement errors are weighted through
K1T to form the error input E, and likewise the angle and displacement error rates are
weighted through K2T to form the error-rate input Ec.
3.4 Single Inverted Pendulum Fuzzy Control Model and Real-Time Simulation
This experiment uses Matlab, so the control model was built in Matlab SIMULINK.
Because the experiment connects to the Googol single inverted pendulum for real-time
control, the Googol real-time control toolbox (GT400-SV) must be added; the final
model is shown in Figure 3. An automatic initialization module was added so that the
setup need not be repeated by hand each time, which is more convenient and
[Figure 3: SIMULINK real-time control model with GT400-SV initialization, velocity/acceleration/position outputs, subsystem, gain, derivative, sign and scope blocks, and a sine-wave position reference]
improves the safety of the experiment. Bringing the quantization and scaling factors
out in the model makes it convenient to adjust the fuzzy controller online during
design and debugging and to observe the actual input, output and control effect. The
swing-up module is shown in Figure 4.
[Figure 4: swing-up subsystem with gain, π-normalization and swing-logic blocks acting on the velocity and acceleration signals]
Results were gathered from the implementation of both swing-up control methods.
Data were collected from experimental runs in which each control scheme swings the
pendulum up from an initially downward position to the upright position and balances
it around the unstable equilibrium point. If an excessive disturbance causes the
inverted pendulum to fall, the adjustment model swings the pole up again and restores
the stable state. A plot of the controller output during an experimental run of the
single inverted pendulum fuzzy controller is shown in Figure 5; the upper oscilloscope
waveform is the cart position and the lower waveform is the pendulum angle. The
transition of the single inverted pendulum when the cart position set-point changes
from 0.1 m to 0.0 m is shown in Figure 6.
Fig. 5. Transition process of cart swing-up    Fig. 6. Cart position transition process from 0.1 m to 0.0 m
To analyze the features of single inverted pendulum fuzzy control, the results were
compared with PID control from classical control theory and LQR control from
modern control theory. The experimental curves obtained are as follows: the inverted
pendulum PID control curve is shown in Figure 7 and the inverted pendulum LQR
control curve in Figure 8.
Through comparison of these control methods, the differences between fuzzy control
and PID or LQR control are not difficult to find. PID control can only handle a
single-input single-output system, so using it on the multi-input inverted pendulum
entails a loss: it cannot control the horizontal displacement, and when the cart moves
to the limit switches the system goes out of control. LQR can control multi-input
multi-output systems and performs better than PID on the inverted pendulum, but it
has limitations: during the swing-up phase or under disturbance, an excessive
horizontal displacement can drive the system out of control, and the rate at which the
system settles into the stable state is not quick enough.
4 Conclusion
From the analysis of the experimental results it can be seen that the combined control
method of linear quadratic state feedback and fuzzy control retains the good features
of LQR control while adding those of fuzzy control: it is more robust and settles into
the stable state more quickly. This test also has a shortcoming, namely frequent small
oscillations in the steady state. This is mainly because fuzzy control cannot achieve
zero steady-state error and small deviations distort the controller output; moreover, to
shorten the swing-up process a larger scale factor was chosen, which increases the
system's small oscillations. If the fuzzy control rules are classified in more detail and
the scope of the universe of discourse is expanded, the oscillations can be reduced,
but this greatly changes the program and places higher requirements on the host
computer.
References
1. Butikov, E.I.: On the dynamic stabilization of an inverted pendulum. Am. J. Phys. 69(6), 1–
14 (2001)
2. Tao, C.W., et al.: Design of a Fuzzy Controller With Fuzzy Swing-Up and Parallel Distrib-
uted Pole Assignment Schemes for an Inverted Pendulum and Cart System. Control Systems
Technology 16(6), 1277–1288 (2008)
3. Yoshida, K.: Swing-up control of an inverted pendulum by energy-based methods. In:
Proceedings of the American Control Conference, pp. 4045–4047 (1999)
4. Anderson, C.W.: Learning to control an inverted pendulum using neural networks. Control
Systems Magazine 9(3), 31–37 (1989)
5. Muskinja, N., Tovornik, B.: Swinging up and stabilization of a real inverted pendulum. In-
dustrial Electronics 53(2), 631–639 (2006)
6. Huang, S.-J., Huang, C.-L.: Control of an inverted pendulum using grey prediction model.
IEEE Trans. Ind. Appl. 36, 452–458 (2000)
7. Ma, X.-J., Sun, Z.-Q., He, Y.-Y.: Analysis and design of fuzzy controller and fuzzy ob-
server. IEEE Transactions on Fuzzy Systems 6(1), 41–51 (1998)
8. Kovacic, Z., Bogdan, S.: Fuzzy Controller Design Theory and Applications. Taylor & Fran-
cis Group, LLC (2006)
A Novel OD Estimation Method Based on Automatic
Vehicle Identification Data
1 Introduction
The Origin-Destination (OD) matrix is the core information for transportation system
planning, design and operation. Traditionally, the OD matrix is gathered by the
large-scale sample surveys of vehicle trips. To avoid the labor-intensive, high-cost
and time-consuming problems of sample surveys, OD matrix estimation based on
detected traffic information has become a hot research topic. Since 1978, when Van
Zuylen and Willumsen first used the theory of maximum entropy to estimate the OD
matrix from traffic volumes, research on OD estimation has a history of more than 30
years. The methods include least squares, state-space models, information theory and
so on [1][2][3].
On the other hand, traffic information collection technology has shown a trend from
“fixed-point” detection toward “full-scale” Automatic Vehicle Identification (AVI)
detection.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 461–470, 2011.
© Springer-Verlag Berlin Heidelberg 2011
462 J. Sun and Y. Feng
The AVI technologies include the Video License Plate
Recognition, Vehicle-beacon communication identification technology and so on. The
fundamental of these technologies is that AVI detectors can collect the vehicle IDs,
the time of passing the detector and the vehicle location. With the application of AVI
technologies, some scholars have explored OD estimation under an AVI environment.
The studies can be divided into two categories. One is to revise the classical OD
estimation model and add the new information from AVI detection to improve the
accuracy of OD estimation. Dixon [5] extended the Kalman filter model, taking the
traffic volume and the travel time between the AVI detectors as observed variables to
estimate the OD matrix. Zhou [4] considered the impact of travel time when using
nonlinear least squares estimation. The other is to use high-resolution path (OD)
information to estimate the vehicle OD. Kwon used the Method of Moments (MOM)
to estimate the OD matrix of a highway in the California Bay Area based on toll
station data [6]. Dong analyzed the reliability of the OD matrix by using taxi GPS
data [7]. However, the above methods still face the following problems and
challenges:
(1) OD estimation is affected not only by network topology and traffic volume, but
also by route choice (the assignment matrix) and the corresponding traffic states.
Previous studies were limited by the data and supporting environment, so the OD was
estimated by assuming some parameters on an example network; its practicability
still needs to be improved.
(2) An AVI detector can obtain not only the link volume and travel time but also,
through data matching between multiple AVI detectors, the partial trajectories of
vehicles, which are more important for OD estimation.
(3) The AVI layout density and accuracy are limited by the field detection
environment, the investment budget, installation conditions and other restrictions. It is
therefore a big challenge to improve the accuracy of OD estimation under such
limited information.
Facing these problems, this paper proposes a new OD estimation method based on a
particle filter, which can effectively integrate the traffic volume, travel time and
partial trajectories from the AVI detectors. A probability model linking the AVI
particle samples and the historical OD is established using Bayesian estimation
theory, and the state of the random AVI particle samples is estimated by Monte Carlo
simulation. The state space of this model is classified and updated according to the
different kinds of AVI data, so that the Bayesian estimation results approximate the
actual state space more realistically.
At the AVI locations, the vehicle IDs, the time of passing the detector and the vehicle
location can be collected. Among a number of AVI detectors, the travel time and the
partial trajectory information can be calculated through vehicle ID matching. Because
AVI facilities cannot cover all sections and have detection errors, for any AVI
network the detected data can be divided into four categories. Case 1: both the vehicle
origin and destination are detected, expressed as Ox+Dy. Case 2: one of the vehicle
origin or destination plus a partial trajectory is detected, expressed as Ox/Dy+Path(S).
Case 3: only the vehicle origin or destination is detected, expressed as Ox/Dy. Case 4:
only a partial trajectory is detected, expressed as Path(S).
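The four cases can be expressed as a small classification routine; the boolean record fields used here are an illustrative encoding, not the paper's data format:

```python
def classify_avi_record(has_origin, has_destination, has_path):
    """Return the detection case (1-4) for one vehicle's AVI record,
    or None if the vehicle was never detected."""
    if has_origin and has_destination:
        return 1                      # Case 1: Ox+Dy
    if (has_origin or has_destination) and has_path:
        return 2                      # Case 2: Ox/Dy+Path(S)
    if has_origin or has_destination:
        return 3                      # Case 3: Ox/Dy
    if has_path:
        return 4                      # Case 4: Path(S)
    return None                       # undetected vehicle
```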
Based on the different kinds of AVI data, the OD estimation process can be described
as in Figure 1 and divided into six steps.
Step 1: Update the initial OD matrix according to the average growth rate of the
flows detected by the AVI facilities, taking the errors of the AVI facilities into
account.
Step 2: From the updated initial OD, obtain the initial path flow database 1 by
applying the initial assignment matrix.
Step 3: Expand the initial OD according to Case 1 to get the updated data
(Ox+Dy); path flow database 1 is then corrected with this updated data to obtain path
flow database 2, as shown in Figure 2. First, path flow database 1 is scanned and the
data matching Case 1 (Ox+Dy) are selected. Second, the different paths are
apportioned in proportion to their path flows. To reduce the weight in the OD
estimation of trips whose origin-destination is already known, the new path flow
database 2 is obtained by subtracting the partial path flow data of the Case 1 (Ox+Dy)
samples from path flow database 1.
Step 4: For the Case 2 (Ox/Dy+Path(S)) data, the first step is to search path flow
database 2 for matching data (Ox/Dy+Path(S)), which serve as prior information for
Bayesian estimation. The second step is to estimate and correct the path choice
probabilities by Bayesian estimation. The third step is to determine each vehicle's
actual origin-destination and trajectory by Monte Carlo simulation. The last step is to
accumulate the OD information from the previous step as OD estimation data 1 and to
update the path flow database accordingly. The process is shown in Figure 3; the
Bayesian estimation and Monte Carlo algorithms are described in Sections 2.3 and
2.4, respectively.
Step 5: Cases 3 and 4 are handled similarly to Case 2, yielding OD estimation
data 2, OD estimation data 3 and path flow database 3.
Step 6: The final OD matrix is obtained by accumulating all of the estimation data:
the updated Ox+Dy data, OD estimation data 1, OD estimation data 2 and OD
estimation data 3.
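Step 1 can be sketched as a growth-factor scaling of a seed OD matrix; the uniform factor and the toy matrices below are illustrative assumptions, since the exact update formula is not given in the source:

```python
def update_initial_od(seed_od, detected_flows, assigned_flows):
    """Scale a seed OD matrix by the average growth rate implied by
    AVI-detected link flows (Step 1, sketched with one uniform factor)."""
    growth = sum(detected_flows) / sum(assigned_flows)
    return [[cell * growth for cell in row] for row in seed_od]

# Toy example: detected flows are 20% above the flows assigned from the seed OD
seed = [[10.0, 20.0], [30.0, 40.0]]
updated = update_initial_od(seed,
                            detected_flows=[120.0, 180.0],
                            assigned_flows=[100.0, 150.0])
```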
[Figure 3: flowchart of Step 4 - path flows f1', ..., fi' feed the process; vehicle path-choice probabilities are estimated by Bayesian estimation, vehicle trajectories are confirmed by Monte Carlo simulation, and the OD estimation results are saved and counted per path]
$$P(D_2 \mid H_2) = \frac{D_2}{D_{total}} \qquad (2)$$

$$P(D_i \mid H_i) = \frac{D_i}{D_{total}} \qquad (3)$$

$$D_{total} = \sum_{i=1}^{n} D_i \qquad (4)$$

where $D_{total}$ is the total path flow through the detection section and $D_i$ stands for the volume of path $i$.
The posterior probabilities of all kinds of paths (Case 2, Case 3, Case 4) can be
obtained from the Bayesian formulas:
$$P(H_1 \mid D_1) = P(D_1 \mid H_1) \times P(H_1) \qquad (5)$$

$$P(H_2 \mid D_2) = P(D_2 \mid H_2) \times P(H_2) \qquad (6)$$

$$P(H_i \mid D_i) = P(D_i \mid H_i) \times P(H_i) \qquad (7)$$

where

$$P_{total} = P(H_1 \mid D_1) + P(H_2 \mid D_2) + \cdots + P(H_i \mid D_i) \qquad (8)$$

Supposing that under a given kind of detection information the path-selection
probabilities for a vehicle are $P_1, P_2, \ldots, P_i$:

$$P_1 = P(H_1 \mid D_1) / P_{total} \qquad (9)$$

$$P_2 = P(H_2 \mid D_2) / P_{total} \qquad (10)$$

$$P_i = P(H_i \mid D_i) / P_{total} \qquad (11)$$
As Figure 4 shows, assume vehicle No. 001 has been detected passing sections H1
and H2; through the default traffic assignment, the trajectories and path flows through
H1 and H2 can be gathered. Name the candidate paths No. 1 (links 1-7), No. 2 (links
3-9) and No. 3 (links 2-10). The prior probability of path selection can be obtained by
traffic survey or other techniques; assume the three priors are $P(H_1)=0.3$, $P(H_2)=0.3$
and $P(H_3)=0.4$. The path flows of No. 1, No. 2 and No. 3 are extracted from the path
flow database; assume the volume of path 1 is 20, path 2 is 30 and path 3 is 50. From
these path flows, the prior probabilities of the different paths based on the path
volumes are $P(D_1 \mid H_1)=0.2$, $P(D_2 \mid H_2)=0.3$, $P(D_3 \mid H_3)=0.5$. Correcting the priors by
Bayesian estimation, the (unnormalized) posterior probabilities of the paths are 0.06,
0.09 and 0.20. Finally, normalizing the posterior probabilities gives the final
probabilities P1=0.17, P2=0.25, P3=0.58.
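The numbers of this worked example follow directly from equations (5)-(11) and can be checked numerically (the normalized values land within rounding of those quoted):

```python
priors = [0.3, 0.3, 0.4]          # P(H_i): prior path-choice probabilities
likelihoods = [0.2, 0.3, 0.5]     # P(D_i | H_i): from path volumes 20/30/50

# Equations (5)-(7): unnormalized posteriors P(H_i | D_i)
posteriors = [p * l for p, l in zip(priors, likelihoods)]

# Equations (8)-(11): normalize by P_total
p_total = sum(posteriors)
final = [p / p_total for p in posteriors]
```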
The Monte Carlo method can simulate the real physical process and approximate the
real physical result, so the state-space choice of an AVI sample particle based on the
probabilities can be obtained by Monte Carlo simulation.
For all paths that pass an AVI detector, set:
$$W_1 = P_1 \qquad (12)$$

$$W_2 = P_1 + P_2 \qquad (13)$$

$$W_i = P_1 + P_2 + \cdots + P_i \qquad (14)$$

where $W_i$ is the cumulative sum of the path posterior probabilities; the other symbols are
the same as mentioned above.
Figure 4 again serves as an example of the Monte Carlo simulation with generated
random numbers. If the random number is less than W1, the path of vehicle 001 is
No. 1 and the corresponding OD is confirmed. If the random number lies between W1
and W2, the path of vehicle 001 is No. 2 and the corresponding OD is confirmed. The
remaining possibilities follow in the same way.
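The threshold rule of equations (12)-(14) is inverse-CDF sampling; a minimal sketch using the normalized probabilities from the worked example:

```python
import random
from itertools import accumulate

def sample_path(probs, r):
    """Pick the first path whose cumulative probability W_i exceeds r."""
    for idx, w in enumerate(accumulate(probs)):
        if r < w:
            return idx
    return len(probs) - 1   # guard against rounding when r is near 1.0

probs = [0.17, 0.25, 0.58]           # P1, P2, P3 from the worked example
rng = random.Random(0)               # seeded for reproducibility
draws = [sample_path(probs, rng.random()) for _ in range(10000)]
share_path1 = draws.count(0) / len(draws)
```

Over many draws the empirical path shares approach P1, P2, P3, which is how the simulation assigns each sampled vehicle an actual trajectory and OD pair.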
3 Case Study
In 2008, Shanghai began installing video license plate recognition equipment in its
expressway system. The North-South expressway, which has 17 entrances/exits and 9
sections equipped with AVI detectors, was chosen as the test site. The expressway
system and AVI facilities are shown in figure 5.
[Figure 5: expressway layout with AVI detectors at the LuBan, YanDong, TianMu and GongHeXin interchanges]
Because of the difficulty of obtaining the true OD, evaluating the accuracy of OD
estimation has long been difficult; several studies measure accuracy against an
assumed OD matrix [1]. Moreover, because some origins and destinations are located
on the ramps of expressway interchanges, a full-scale field survey cannot be
organized for safety reasons and the true OD cannot be obtained. We therefore use
simulation to measure the precision of the OD estimation. In accordance with the
field layout of the AVI detectors, “virtual detectors” were placed in a VISSIM
simulation model to reproduce the true AVI parameters and accuracy. The data for
OD estimation coming from the virtual detectors can then be compared with the
“true” OD of the VISSIM model, which resolves the accuracy-evaluation problem
[8]. The “true” OD was obtained by building the North-South expressway model in
the microscopic traffic simulator VISSIM based on the manual vehicle license survey
of March 2004. The model was calibrated using the volume data from the loop
detectors and the travel times from the AVI detectors. The relative errors of the
volumes and travel times from 7 sections and 9 AVI detectors were less than 15%
between the VISSIM model and the field outputs, which meets practical application
requirements [9].
Using the data collected by the virtual detectors in the North-South expressway
VISSIM model, the OD was estimated by the above method. The absolute errors of
the estimates are shown in figure 6 and the relative errors in figure 7.
Most of the absolute errors of the estimates are between 0 and 20, and a few are larger
than 20, such as those of OD pairs 1-2 and 19-2. The main reasons may include the
following: (1) some true OD values are relatively large and the corresponding path
flows in the traffic assignment are larger than others, so the selection probability of
paths with larger flows may exceed that of paths with smaller flows, making the
estimates larger; (2) random error in the simulation model biases the estimation
process. Most of the relative errors are between 0% and 20%, and a few exceed 40%,
such as those of OD pairs 15-11 and 15-12. The main reason is that the true values of
some OD pairs are relatively small; for example, the true value of OD pair 15-12 is
14 and the estimate is 20, giving a relative error of 42.86%. Finally, the overall
average relative error between the true values and the estimates is 12.09%, which
means the method yields an objectively realistic OD estimation result.
4 Conclusion
As AVI detectors are gradually deployed in urban networks, this paper proposed a
particle-filter-like OD estimation method for the AVI environment and tested it on
the Shanghai North-South expressway model. The main conclusions of this paper are
as follows:
(1) The different kinds of AVI information were classified according to AVI
detection features. Bayesian estimation makes the path-selection probabilities
approximate reality more closely, and modifying the path flows improves the
accuracy of the OD estimation.
(2) Applying this method to the case study of the Shanghai North-South
expressway yielded OD estimates of high accuracy.
(3) Because the true OD matrix is lacking, the OD estimation error and the
simulation model error may coexist. With this method, the accuracy can be described
as an overall average relative error of 12.09%, given a model error of less than 15%
and an AVI detection error of about 10%.
(4) The particle-filter-like approach provides a very good tool for researching OD
and path estimation. In further studies, the precision of OD estimation can be
improved and the impact of random parameters reduced by dynamic nonlinear
programming based on the link flows, further improving the accuracy and reliability
of OD estimation.
Acknowledgement
The authors would like to thank the Natural Science Foundation of China (50948056)
for supporting this research.
References
[1] Chang, Y.T.: Dynamic OD Matrix Estimation Model of Freeway With Consideration of
Travel Time. Journal of Tongji University 37(9), 1184–1188 (2009)
[2] Tsekeris, T., Stathopoulos, A.: Real-Time Dynamic Origin-Destination Matrix Adjustment
with Simulated and Actual Link Flows in Urban Networks. Transportation Research
Record 1857, 117–127 (2003)
[3] Bierlaire, M., Crittin, F.: An efficient Algorithm for Real-Time Estimation and Prediction of
Dynamic OD Tables. Operation Research 52, 116–127 (2004)
[4] Zhou, X.S., Mahmassani, H.S.: Dynamic Origin-Destination demand estimation using
automatic vehicle identification data. IEEE Transactions on Intelligent Transportation
Systems 17(1), 105–114 (2006)
[5] Dixon, M.P., Rilett, L.R.: Real-time OD estimation using automatic vehicle identification
and traffic count data. Journal of Computer Aided Civil Infrastructure Engineering 17(1),
7–21 (2002)
[6] Kwon, J., Varaiya, P.: Real-time estimation of origin-destination matrices with partial tra-
jectories from electronic toll collection tag data. Transportation Research Record, 119–126
(2005)
[7] Dong, J., Wu, J.: An Algorithm to Estimate OD Matrix With Probe Vehicle and Its Reli-
ability Analysis. Journal of Beijing Jiaotong University 29(3), 73–76 (2005)
[8] Sun, J., Yang, X., Liu, H.: Study on Microscopic Traffic Simulation Model Systematic
Parameter Calibration. Journal of System Simulation 19(1), 48–51 (2007)
[9] Sun, J., Li, K., Wei, J., Su, G.: Dynamic OD Estimation Simulation Optimization Based on
Video License Plate Recognition. Journal of Highway and Transportation Research and
Development 26(8), 130–135 (2009)
Stress Field in the Rail-End during the Quenching
Process
Abstract. Railways are clearly developing toward higher speeds and heavier
loads, so the required rail quality is becoming more and more strict. Quenching
is highlighted as the final means of improving rail properties. The stress field in
U71Mn rail-ends during quenching was simulated with FEM software, and the
various factors that may influence the stress field distribution were investigated.
The results show that the rail-end stress can be significantly diminished if the
heating, the holding time and the air-blast pressure during wind cooling are
chosen accurately. This is significantly valuable for the choice of the relevant
parameters for the heat treatment of U71Mn heavy rails.
1 Introduction
U71Mn heavy rails are discarded when phenomena such as spalling and cracking
occur on their ends after trains have run on them for a period of time. Frequently
changing rails affects train schedules and diminishes railway efficiency; moreover,
subtle cracks are a potential hazard to the safe movement of trains. Spalling and
cracking appear on rails mainly because the thermal stress and structural stress
(together called the residual stress) generated during the rail-end quenching process
are not entirely eliminated, and the superposition of the rails' residual stresses and the
impact stresses introduced by the continuous hitting of wheel hubs against the rails
then exceeds the rail strength. In this paper, the stress distribution during rail-end
quenching and its causes were investigated, and a numerical simulation was carried
out in order to improve the quenching process and diminish the internal stress during
quenching.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 471–476, 2011.
© Springer-Verlag Berlin Heidelberg 2011
472 S. Xu et al.
The change law of the residual stress during the entire wind cooling of the rail-end
was: at the early stage of cooling, the surfaces were placed in tension and the centers
in compression; at the later stage of cooling, the surfaces were compressed while the
centers were tensed. The final residual stress state at the end of cooling was
compressive stress in the surfaces and tensile stress in the centers. As this shows, the
variation of the thermal stress introduced during rail-end quenching is complex and
instantaneous, and its change law as a function of time cannot be described precisely
by formulas or experience. By the finite element method, however, not only can the
thermal stress distribution at any given time during quenching be simulated, but the
stress concentration and magnitude can also be obtained directly.
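The early-stage pattern described above (surface tension, center compression) can be illustrated with a one-dimensional sketch: explicit finite-difference cooling of a slab by surface convection, with the elastic thermal stress approximated as σ(x) ≈ Eα(T̄ − T(x)). The material values and the 1D geometry are illustrative simplifications, not the paper's FEM model:

```python
# 1D slab cooled by surface convection at x = 0; symmetry (insulation) at the center.
E_mod, alpha = 2.1e11, 1.2e-5       # Young's modulus [Pa], expansion [1/K] (illustrative)
n, steps = 51, 2000                  # nodes over the half-thickness, time steps
r, Bi = 0.4, 0.2                     # per-step Fourier number, mesh Biot number
T_amb, T0 = 25.0, 910.0              # ambient and initial temperatures [C]

T = [T0] * n
for _ in range(steps):
    Tn = T[:]
    # Surface node: conduction from the interior plus convection to the air
    Tn[0] = T[0] + 2*r*(T[1] - T[0]) + 2*r*Bi*(T_amb - T[0])
    for i in range(1, n - 1):
        Tn[i] = T[i] + r*(T[i+1] - 2*T[i] + T[i-1])
    # Center node: insulated by symmetry
    Tn[-1] = T[-1] + 2*r*(T[-2] - T[-1])
    T = Tn

T_mean = sum(T) / n
sigma = [E_mod * alpha * (T_mean - Ti) for Ti in T]   # positive = tension
```

Partway through cooling the surface is the coolest point, so σ at the surface is tensile and σ at the center compressive, matching the early-stage behavior described in the text.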
The chemical composition of the material used for U71Mn heavy rails is shown in
Table 1. The heavy rail's heat treatment was as follows: first, the section within
200 mm of the rail-end was heated to 910 ℃ over 40 s in the electromagnetic field,
then held for 5 s, cooled for 25 s by intensive convection of cold air, and finally
air-cooled to room temperature. The wind-cooling apparatus is shown in Fig. 2.
Table 1. Chemical composition of materials used for U71Mn heavy rails (wt%)
The heavy-rail 3D model, plotted with three-dimensional software according to the
dimensions regulated by the YB(T)68-1987 standard for the 60 kg/m heavy rail
(presented in Fig. 3), was exported into the finite element analysis software and swept
with the SOLID5 element to obtain the finite element model shown in Fig. 4.
Referring to the Materials Handbook, the rail density was taken as 7920 kg/m3; the
remaining main physical properties are shown in Table 2. The heavy rail's thermal-
physical parameters, such as relative permeability, specific heat, resistivity and
thermal conductivity, vary with temperature as given in the table. They were applied
as tabular loads in the analysis.
The direction of the magnetic induction lines was kept parallel to the rail-end
boundary by setting boundary conditions for the electromagnetic field; the distribution
of the induced current was calculated by loading the coils with an alternating current
of magnitude 1.12e6 A/m2, frequency 1000 Hz, and duration 40 s. The magnetic
field was then loaded into the temperature field as an initial condition by the
sequential coupling approach. Meanwhile, the initial temperature was set to 25 ℃ and
the temperature distribution was obtained under this load. Finally, the temperature
histories were loaded onto the finite element model as initial conditions for the stress
solution, yielding the stress distribution in the rail-end.
Table 2. Thermal-physical properties of U71Mn heavy rail as a function of temperature

T [℃]                            25       100      200       300      400      500      600      700      800      900      1000     1100
Relative permeability            200      194.5    187.6     181      169.8    157.3    140.8    100.36   1        1        1        1
Specific heat [J/(kg·℃)]         472      480      498       524      560      615      700      1000     806      637      602      580
Resistivity [Ω·m]                1.84e-7  2.54e-7  3.39e-7   4.35e-7  5.41e-7  6.56e-7  7.90e-7  9.49e-7  1.08e-6  1.16e-6  1.20e-6  1.23e-6
Enthalpy [J/m3]                  9.16e+7  3.56e+8  7.5314e+8 1.16e+9  1.63e+9  2.12e+9  2.65e+9  3.19e+9  3.72e+9  4.22e+9  4.52e+9  5.14e+9
Thermal conductivity [W/(m·℃)]   93.23    87.68    83.53     80.44    78.13    76.02    74.16    71.98    68.66    66.49    65.92    64.02
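These tabular properties are applied as piecewise-linear loads over temperature. As an illustrative sketch (not part of the original ANSYS analysis; the function name is ours), the thermal-conductivity row of Table 2 can be interpolated with `numpy`:

```python
import numpy as np

# Temperature grid (℃) and thermal conductivity [W/(m·℃)] from Table 2;
# the other property rows can be interpolated in exactly the same way.
T_grid = np.array([25, 100, 200, 300, 400, 500, 600,
                   700, 800, 900, 1000, 1100], dtype=float)
k_grid = np.array([93.23, 87.68, 83.53, 80.44, 78.13, 76.02, 74.16,
                   71.98, 68.66, 66.49, 65.92, 64.02])

def thermal_conductivity(T):
    """Piecewise-linear interpolation of conductivity at temperature T (℃)."""
    return float(np.interp(T, T_grid, k_grid))
```

`np.interp` holds the end values outside the grid, which matches how FE codes commonly treat tabular material data beyond the tabulated range.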
3.3 Analysis of Factors Influencing Distribution for Thermal Stress Field in the
Heavy-Rail
Analysis of the stress distribution contours showed that the locations of the
maximum stress during heating under the two heating methods were at zone A, as
shown in Figures 5(1) and 6(1), respectively, with maximum stress values of 1.62 MPa
and 1.35 MPa. This occurred because zone A is located at the transition between the
heated and unheated regions; since the volume changes of the two regions were
nonuniform, zone A was compressed and subjected to compressive stress.
Meanwhile, the locations of the maximum stress at the end of wind cooling were at
zone B, as shown in Figures 5(3) and 6(3), respectively, with maximum stress values
of 0.8 MPa and 0.5 MPa. The main reason was that the radiating area at zone B, on
the cross section perpendicular to the rail's longitudinal direction, was smaller during
wind cooling, so the thermal stress at that location was the largest.
Numerous comparative analyses showed that the heating time of the rail-ends could
not be less than 30 s, but excessive heating time would decrease the production
efficiency of heavy rails. Taking both into consideration, the heating time was set
to 40±1 s.
(b) The stress field distributions in rail-ends cooled by air blast for 40 s, 50 s and
60 s at a pressure of 0.4 MPa are shown in Fig. 7(1), Fig. 7(2) and Fig. 7(3),
respectively; those at a pressure of 0.8 MPa are shown in Fig. 8(1), Fig. 8(2) and
Fig. 8(3), respectively. Analysis of Figures 7 and 8 showed that the stress
concentration regions, with values of 1.28 MPa in both cases, were at zone A
(Figures 7(1) and 8(1)) after 40 s of wind cooling; the residual-stress concentration
regions, with values of 0.59 MPa and 0.56 MPa, respectively, were at zone B
(Figures 7(3) and 8(3)) at the end of cooling.
Comparing the residual stress value in Figure 7(3) with that in Figure 8(3) indicates
that the residual thermal stress of rail-ends is lower when they are cooled by
compressed air at a pressure of 0.8 MPa. This is because compressed air at 0.8 MPa
has better convective heat-transfer behavior and lowers the temperature of the heated
regions more rapidly, shortening the time during which zone B is compressed and
decreasing the thermal stress. Based on the above analysis, and considering cost, the
pressure of the compressed air was set at 0.8 MPa in practical manufacturing to
diminish the residual stress.
Fig. 7. Stress distributions of heavy rails at different times when p = 0.4 MPa (heating for 30 s)
Fig. 8. Stress distributions of heavy rails at different times when p = 0.8 MPa (heating for 30 s)
4 Conclusion
The combined action of thermal and structural stress during rail-end quenching
makes the distribution of internal stress in the workpiece extraordinarily complex,
since the two act in opposite directions and are difficult to measure. Generally, the
stress and deformation in the rail-end during heat treatment are assessed indirectly by
gauging the final stress and deformation after heat treatment; this lags behind the
process and clearly hampers heavy-rail production. In this paper, the stress
distributions in the rail-ends during quenching were successfully simulated by the
FEM. Taking cost into account, a residual-stress-relief scheme was proposed: the
forced-air pressure is set at 0.8 MPa, the heating and holding times of the rail-end are
set to 40±1 s and 5 s, respectively, and after 40 s of air-blast cooling the rail is air
cooled for 1850 s. Practical production at a certain iron and steel corporation
indicated that these strategies are helpful for heavy-rail production and have a certain
value for wider adoption.
Acknowledgement
The research reported in this paper is supported by the Research Center of Green
Manufacturing and Energy-Saving & Emission Reduction Technology at Wuhan
University of Science and Technology. This support is gratefully acknowledged.
References
1. Jiang, G., Kong, J., Li, G.: Research on Temperature Distribution Model of Ladle and Its
Test. China Metallurgy 16(11), 30–36 (2006)
2. Li, G., Kong, J., Yang, J.: Simulation on Temperature Field of Heavy Rail during
Quenching. Heat Treatment Technology and Equipment 30(1), 13–15 (2009)
3. Kong, J., Xu, S., Li, G., Yang, J., Xiong, H., Jiang, G.: Temperature Field and Its
Influencing Factors of Heavy Rail During Quenching. Advanced Materials Research 15, 65–
69 (2010)
A Novel Adaptive Target Tracking Algorithm
in Wireless Sensor Networks
1 Introduction
Potential applications of Wireless Sensor Networks (WSNs) include environmental
monitoring, military surveillance, search and rescue, tracking of soldiers and vehicles,
traffic-conflict avoidance, and so on. Target tracking has become one of the major
applications of WSNs.
Solutions for mobile target tracking in WSNs can generally be divided into five
categories [1]: tree-based, cluster-based, prediction-based, mobicast message-based,
and hybrid tracking. Cluster-based tracking algorithms include DELTA [2], RARE [3]
and CODA [4]; they have better scalability and bandwidth usage than the other types,
but their energy consumption is relatively high. Prediction-based tracking algorithms
include double projections [5] and PES (Prediction-based Energy Saving) [6]; they
keep most of the nodes asleep until they are awakened by an active node, reducing
the energy consumption of target monitoring and tracking, but the tracking accuracy
cannot be guaranteed. Energy consumption and tracking accuracy in WSNs have
always been important problems, but there
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 477–486, 2011.
© Springer-Verlag Berlin Heidelberg 2011
2 IATT
This paper uses a strategy of dynamic cluster formation together with a prediction
mechanism. When a mobile target enters the sensor network's monitoring area, a
cluster head and a cluster that monitor the mobile target are dynamically formed.
This paper proposes the concepts of real cluster head and virtual cluster head, real
cluster and virtual cluster. In a real cluster, the cluster members collect information
periodically or adaptively while tracking the target. In a virtual cluster, the cluster
members are on call for target tracking, and only the cluster head engages in tracking.
Whether the target's state is anomalous is determined by observing changes in signal
intensity and the time the target spends within the cluster. When the target is
anomalous in a virtual cluster, the virtual cluster is transformed into a real cluster.
The cluster head can adaptively adjust the cluster sampling period, activating
sleeping cluster members into monitoring according to the target's motion state. The
cluster head then receives information from the cluster members and processes it.
Combined with the target's historical data, it estimates the target's current position,
adaptively predicts the future position and time interval, selects the next cluster head
according to the target's movement direction, and sends it a packet containing the
target information. The new cluster head generates a new cluster according to certain
rules and obtains data from the old cluster head. Combined with the new data
reported by the cluster members, it can obtain the target's motion state and adaptively
track the mobile target. When the predicted time interval used to generate the next
real cluster makes the formation of dynamic real clusters discontinuous, another level
of monitoring is introduced, namely virtual cluster monitoring, which ensures that the
mobile target is monitored in the WSN at all times.
The literature [7] proposed the Velocity Adaptive Target Tracking (VATT) algorithm,
which dynamically adjusts the collection frequency of the sensor nodes in the cluster
using only the speed. The literature [8] improved the VATT algorithm by adding
acceleration and the variation of the movement angle to the motion prediction. Once
acceleration is taken into account, Eq. (1) must be satisfied to ensure an accurate
description of the mobile target's trajectory.
l = v_n t + \frac{1}{2} a_n t^2    (1)

where t is the time interval to the next localization, t_n is the time of the current
collection, and t_{n+1} is the time of the next collection:

t = t_{n+1} - t_n    (2)
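As a quick numerical companion to Eqs. (1)-(2) (our sketch, not from the paper), the collection interval t can be recovered from a chosen trajectory increment l by solving the quadratic for its positive root:

```python
import math

def next_sample_interval(l, v_n, a_n):
    """Positive root t of l = v_n*t + 0.5*a_n*t**2 (Eq. 1).
    The next collection time is then t_{n+1} = t_n + t (Eq. 2)."""
    if abs(a_n) < 1e-12:                 # uniform motion: l = v_n * t
        return l / v_n
    disc = v_n ** 2 + 2.0 * a_n * l      # discriminant of the quadratic in t
    return (-v_n + math.sqrt(disc)) / a_n
```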
Fig. 1. Geometry of the collection-distance prediction: sample points A, B, C on the trajectory; arc center O; radius R; chord BC = d; arc length l; tracking error ΔE; angle change Δθ_n
Considering the acceleration solves the error caused by the change of velocity, but
the direction of the target's movement is random. If l is constant and the change of
the target's direction is large, the accuracy of the trajectory description will decline.
Therefore this paper uses the angle change and the tracking error ΔE to dynamically
and approximately determine the collection-location distance l for the next time
(Fig. 1), where B(x_{n-1}, y_{n-1}) and C(x_n, y_n) are sample points, OB ⊥ AB,
OQ ⊥ BC, l_{BC} = d, l_{PQ} = ΔE, and arc BC = l. The angle of the straight line
BC is

\theta_n = \arccos \frac{x_n - x_{n-1}}{\sqrt{(x_n - x_{n-1})^2 + (y_n - y_{n-1})^2}}    (3)
The tilt angle of the straight line AB is θ_{n-1}, and the angle between AB and BC is

\Delta\theta_n = \theta_n - \theta_{n-1}    (4)
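Eqs. (3)-(4) can be sketched as follows (our illustration; we use atan2 instead of the arccos of Eq. (3) so the sign of the angle survives, and wrap Δθ_n into (−π, π]):

```python
import math

def heading_angle(p_prev, p_cur):
    """Angle θ_n of the line BC through B(x_{n-1}, y_{n-1}) and C(x_n, y_n)."""
    return math.atan2(p_cur[1] - p_prev[1], p_cur[0] - p_prev[0])

def angle_change(theta_prev, theta_cur):
    """Δθ_n = θ_n - θ_{n-1} (Eq. 4), wrapped to (-π, π]."""
    d = theta_cur - theta_prev
    return math.atan2(math.sin(d), math.cos(d))
```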
From Fig. 1, the prediction of the location distance l is obtained from

\sin \Delta\theta_n = \frac{d/2}{R}, \qquad \cos \Delta\theta_n = \frac{R - \Delta E}{R}    (5)

whence

l = 2\pi R \cdot \frac{2\Delta\theta_n}{2\pi} = \frac{2 \cdot \Delta E \cdot \Delta\theta_n}{1 - \cos \Delta\theta_n}    (6)
A maximum localization interval is set for l: when l is larger than R_S, l is set equal
to R_S. When the target trajectory is a straight line, the minimum of l is

d = \frac{2 \cdot \Delta E \cdot \sin \Delta\theta_n}{1 - \cos \Delta\theta_n}    (7)
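Putting Eqs. (6)-(7) together with the R_S cap described above gives a small helper (an illustrative sketch; the names are ours):

```python
import math

def predicted_distance(dE, dtheta, Rs):
    """Next collection-location distance l per Eq. (6), capped at Rs."""
    if abs(dtheta) < 1e-9:        # near-straight track: fall back to the cap
        return Rs
    l = 2.0 * dE * abs(dtheta) / (1.0 - math.cos(dtheta))
    return min(l, Rs)

def chord_length(dE, dtheta):
    """Straight-line chord d per Eq. (7), the minimum of l."""
    return 2.0 * dE * math.sin(abs(dtheta)) / (1.0 - math.cos(dtheta))
```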
When the mobility of the target is large, namely the velocity and angle changes are
large, t will be relatively small, so the required accuracy of the trajectory description
can be achieved. Conversely, t will be relatively large, and the next localization time
interval t can be dynamically increased.
The error of the target's state largely reflects the tracking situation. If the direction
change of the mobile target is small, the adaptive predicted time interval between
WSN clusters, computed from the target's velocity and the amount of angle change,
can be taken as the maximum time interval for the formation of the next real cluster.
This should consider not only the target's past changes but also the possibility that
the target maintains its original motion direction, which gives the maximum distance
interval at which the two motion trajectories separate by the sensing radius. Fig. 2
shows, in Cartesian coordinates, the largest track distance formed by the adaptively
predicted real cluster head.
For simplicity of description, assume that the position of the mobile target coincides
with the cluster head at each time. The location of the mobile target is L_n(x_n, y_n)
at time t_n; at the previous time t_{n-1} the target location is L_{n-1}(x_{n-1}, y_{n-1});
the angle variation of the mobile target moving from L_{n-1} to L_n is θ_n. The
position L_{n+1}(x_{n+1}, y_{n+1}) of the mobile target at the next time t_{n+1} can
then be predicted.
Fig. 2. The largest track distance formed by the adaptively predicted real cluster head (positions L_{n-1}, L_n, L_{n+1}, L_{T(n)}, L_{S(n)}, L_{O(n)}, P_n, Q_n; sensing radius r_s; angle change θ_n)
The time interval for forming the real cluster head is Δt_n = t_{n+1} − t_n. The
target's velocity between L_n and L_{n+1} can be predicted as in the literature [9];
namely, the predictive velocity within Δt_n is
When the mobile target keeps a steady state, in order to reduce unnecessary cluster
formation and excessive information collection within the cluster, the real-cluster
formation time interval is adaptively adjusted based on the angle change between the
report of the target's current position and the historical reports. Since the target
motion is steady, it can be inferred that the target may continue to move with the
angle change θ_n in the near term. The figure shows that the two tracks separate by
the sensing radius when the target moves to L_{S(n)}, so the real-cluster formation
time interval needs to be increased. Assuming the previous predictions of the target
location hold, the predicted target location L_{n+1} is where the real cluster head
forms. If the mobile target maintains its current movement direction at position
L_{T(n)}, then when the target moves to position P_n, the node at L_{S(n)} can no
longer monitor the mobile target, which is beyond the sensing radius R_S. Since
L_nP_n is tangent to the circle centered at L_{S(n)}, it follows that L_{S(n)}P_n = R_S.
According to the literature [9], the target location L_{O(n)} can be calculated for
k (k = 1, 2, 3, ..., m) times; when k = m,

\begin{pmatrix} x_{O(n)} \\ y_{O(n)} \end{pmatrix} =
\begin{pmatrix} \cos\theta_n & -\sin\theta_n \\ \sin\theta_n & \cos\theta_n \end{pmatrix}
\begin{pmatrix} x_{n+1} + \tilde{v}_{x(n)} \cdot t \\ y_{n+1} + \tilde{v}_{y(n)} \cdot t \end{pmatrix}    (14)
The angle between L_nQ_n and the x-axis, denoted φ, can be obtained from historical
data. The coordinates of Q_n are therefore

\begin{pmatrix} x_{Q(n)} \\ y_{Q(n)} \end{pmatrix} =
\begin{pmatrix} x_n + L_nQ_n \cdot \cos\varphi \\ y_n + L_nQ_n \cdot \sin\varphi \end{pmatrix}    (16)
The distance between L_{O(n)} and Q_n is greater than or equal to R_S until the two
tracks first separate by the sensing radius. To keep the tracked trajectory from being
distorted or incurring a large tracking error, the predicted track curve length
L_nL_{n+1} must satisfy

L_n L_{n+1} = (m - 1) \cdot R_C    (18)
With the velocity of the mobile target estimated within Δt_n, the maximum time
interval for forming the real cluster head can be calculated as

T_{C(n)} = \frac{(m-1) \cdot R_C}{\tilde{v}_n}    (19)
When the predicted cluster-formation time interval makes the dynamic real clusters
discontinuous, virtual clusters and virtual cluster heads are used to fill the tracking
blind spots between real clusters, introducing another level of monitoring, namely
virtual cluster monitoring, to ensure that the mobile target is monitored at all times.
The two relevant variables are the predicted interval distance and interval period:

L_{T(n+i-1)} = i R_c    (20)

T_{T(n+i-1)} = \frac{L_{T(n+i-1)}}{\tilde{v}_n} = \frac{i R_c}{\tilde{v}_n}    (21)

where i is an integer satisfying 1 ≤ i < m − 1.
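Eqs. (19)-(21) reduce to a few lines of arithmetic; the sketch below (our names, not the paper's) returns the real-cluster interval and the schedule of intermediate virtual clusters:

```python
def real_cluster_interval(m, Rc, v):
    """Maximum time interval T_C(n) for the next real cluster, Eq. (19)."""
    return (m - 1) * Rc / v

def virtual_cluster_schedule(m, Rc, v):
    """(distance, time) pairs of intermediate virtual clusters,
    Eqs. (20)-(21), for integer i with 1 <= i < m - 1."""
    return [(i * Rc, i * Rc / v) for i in range(1, m - 1)]
```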
In addition, when the angle change of the mobile target becomes very small, the
adaptive prediction interval for real cluster formation tends to infinity, which is
clearly undesirable. To ensure reliable tracking, the largest range of adaptive
prediction must be bounded: for example, when m − 1 ≥ M, set m − 1 equal to M
(M should be chosen carefully according to the motion characteristics, the required
tracking quality, and so on). The expected dwell time of the target in a virtual
cluster is

T_{in} = \frac{2 R_S}{\tilde{v}_n}    (22)
For a steady target the velocity is stable, and the target stays within the virtual cluster
for a time in the interval [T_in − ΔT_in, T_in + ΔT_in]. If this range is not satisfied,
the target's motion is anomalous: the velocity may have decreased or increased, or
the trajectory may lie at the edge of the virtual cluster, and so on.
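The dwell-time check around Eq. (22) can be sketched as a simple predicate (our formulation of the rule above):

```python
def target_is_anomalous(dwell_time, Rs, v, dT):
    """True when the observed dwell time in the virtual cluster falls
    outside [T_in - dT, T_in + dT], with T_in = 2*Rs/v (Eq. 22)."""
    T_in = 2.0 * Rs / v
    return not (T_in - dT <= dwell_time <= T_in + dT)
```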
When the virtual cluster becomes aware of a target anomaly, there are three ways to
respond. First, if the target is still in the cluster, the virtual cluster is converted to a
real cluster. Second, if the virtual cluster head perceives that the signal strength is
weak but the target can still be perceived, it must promptly elect the next cluster
head, one that is a certain distance from the current cluster head and nearest to the
target. Third, if the virtual cluster head can no longer perceive the target, the
target-loss recovery algorithm must be used to recover tracking; this is the worst
case, in which target-tracking loss has occurred. Once an anomaly occurs in the
virtual cluster, the next cluster should be a real cluster, and the previously predicted
maximum time interval for real cluster formation must be abolished. Prediction of the
maximum time interval for real cluster formation can be reopened once the target
motion is stable.
4 Conclusion
Since past target tracking algorithms have not fully considered the mobility of the
target, this paper proposes a new target tracking algorithm, IATT. By considering the
target's velocity and angle changes, it reduces the complexity of the algorithm and
the network traffic load while ensuring tracking accuracy and extensibility, thereby
achieving lower energy consumption. Its performance is compared with a fixed-period
scheme through simulation and analysis of the tracking error and energy consumption
for both linear and curvilinear target motion. Simulation results show that, whether
the target moves along a straight line or a curve, adaptive adjustment of the
information collection and tracking time intervals can ensure tracking accuracy and
reliability while saving energy and extending the life of the entire monitoring network.
References
1. Bhatti, S., Xu, J.: Survey of Target Tracking Protocols using Wireless Sensor Network. In:
Fifth International Conference on Wireless and Mobile Communications (ICWMC 2009),
pp. 110–115 (2009)
2. Walchli, M., Skoczylas, P., Meer, M., et al.: Distributed Event Localization and Tracking
with Wireless Sensors. In: Proceedings of the 5th International Conference on
Wired/Wireless Internet Communications (2007)
2. Olule, E., Wang, G., Guo, M., Dong, M.: RARE: An Energy Efficient Target Tracking
Protocol for Wireless Sensor Networks. In: International Conference on Parallel Processing
Workshops (2007)
4. Chang, W.R., Lin, H.T., Cheng, Z.Z.: CODA: A Continuous Object Detection and Tracking
Algorithm for Wireless Ad Hoc Sensor Networks. In: 5th IEEE Consumer Communications
and Networking Conference, pp. 168–174 (2008)
5. Xu, Y., Lee, W.C.: On localized prediction for power efficient object tracking in sensor
networks. In: Proceedings 23rd International Conference on Distributed Computing Systems
Workshops, pp. 434–439 (2003)
6. Xu, Y., Winter, J., Lee, W.C.: Prediction-based Strategies for Energy Saving in Object
Tracking Sensor Networks. In: Proceedings of the IEEE International Conference on Mobile
Data Management, pp. 346–357 (2004)
7. Liu, Z.Y., Zhang, X.W., Chen, X.Q.: A Velocity Adaptive Target Tracking in Wireless
Sensor Network. J. Huazhong University of Science and Technology Transaction (Nature
Science) 33(suppl.), 335–338 (2005)
8. Yin, F.: Mobile Target Tracking Algorithm in WSN. Master's thesis, Hunan University,
Changsha, China (2007)
9. Kim, H., Kim, E., Han, K.: An Energy Efficient Tracking Method in Wireless Sensor
Networks. In: Koucheryavy, Y., Harju, J., Iversen, V.B. (eds.) NEW2AN 2006. LNCS,
vol. 4003, pp. 278–286. Springer, Heidelberg (2006)
Computer Management of Golden Section Effect of
Physical Pendulum
1 Introduction
The golden-section-effect physical pendulum is a self-made instrument of Hefei
University of Technology that combines the golden section with a compound
pendulum. A common compound pendulum serves physical functions such as
studying the relationship between the system's mass distribution and period changes,
verifying the rotational law and Steiner's theorem, and measuring rotational inertia
and gravitational acceleration [1]. Moreover, this instrument can optimize the extreme
value of the pendulum period by the golden section (0.618) method, and record data
and draw curves through a computer management system. It achieves good results
not only in better data handling but also in making the experiment more
comprehensive and interesting.
As Fig. 1 shows, the golden-section pendulum consists of a scale of length L hung
beneath a diamond-shaped frame. On the scale there is a bob of mass m and radius r,
which can move along the scale. Regard the end of the scale on the frame as the
pendulum fulcrum; define M as the pendulum mass, g as the gravitational
acceleration, and H as the vertical distance from the pivot to the pendulum centroid.
Suppose that when the pendulum centroid is at the fulcrum, the distance from the bob
to the fulcrum is x0 and the rotational inertia of the pendulum is J0; the rotational
inertia J is then obtained when the bob moves down to x (x > x0) [1].
Assuming the pendulum motion is simple harmonic vibration, from the period
formula of the pendulum,

T = 2\pi \sqrt{\frac{J}{mgh}}    (1)
From h = x − x0, we have

J = J_0 + mx^2 - mx_0^2 = J_0 + mh^2 + 2mhx_0    (2)

MH = mh    (3)

By inserting (2) and (3) into (1), we obtain

T(h) = 2\pi \sqrt{\frac{J_0 + mh^2 + 2mhx_0}{mgh}}
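The period formula can be evaluated directly. The sketch below (our arbitrary parameter values) also illustrates that T(h) has a minimum at h = sqrt(J0/m), the extreme value the 0.618 method later searches for:

```python
import math

def period(h, J0, m, x0, g=9.8):
    """Vibration period T(h) from the formula above."""
    J = J0 + m * h ** 2 + 2 * m * h * x0   # rotational inertia, Eq. (2)
    return 2.0 * math.pi * math.sqrt(J / (m * g * h))
```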
From the above formula, the vibration period of the pendulum contains two variables:
changing the bob position x and bob mass m is equivalent to changing the system
centroid position H and pendulum mass M. Once the bob mass m and the
diamond-shaped frame are fixed, the only remaining question is the effect on the
period T of changing the bob position h. It thus becomes simple to optimize the
extreme value using the golden-section method. Evidently the instrument is of a
state-of-the-art design.
Fig. 2. Block diagram of the measurement system: pendulum, photogate (phototube), data acquisition controller, computer, printer
The interface and functions of the operation software platform are divided into a
control area, a table area, a chart area and a result area [2].
Control area. In this area, we can set up system parameters; connect to and measure
with the controller; save or remove data; export data to draw curves; print charts;
translate and magnify the coordinate axes; and call up a calculator to compute test
points and experiment results.
Table area. Table 1 records the minimum-period data of the pendulum obtained by
the golden section (0.618) method. Table 2 records the auxiliary-point data used for
drawing the full spoon-shaped curve.
Chart area. The X–T coordinate system is built from position and period. The
spoon-shaped curve is automatically drawn in the coordinate system from the way the
pendulum period changes with position, once the data in Tables 1 and 2 are complete.
For observation and analysis, we can adjust the coordinate axes to obtain a clearer
spoon curve and to spread out the dense points at the bottom.
Result area. By automatic or manual calculation, we can obtain results such as the
minimum-period point Xmin, the conjugate points x1 and x2, the minimum period
Tmin, and h0, J0, etc.
3 Experiment Process
First, start up and configure the settings; the parameters include the pendulum mass,
no-extreme-value-point tips, experimental process prompts, the number of periods,
the number of decimal digits, curve-drawing requirements, etc. Second, place the bob
at the comparison points in turn and set it swinging slightly; the computer records the
swing times through the optical gate and collector to obtain the period [3].
When using the 0.618 method to select points, suppose the extreme value lies in the
interval [a, b] of the scale of length L. Place the bob at the two points
(0.500 ± 0.118)L in turn and measure the period at each. The point with the larger
period becomes an endpoint of the new interval and the points beyond it are
discarded; the point with the smaller period is retained for the next comparison. The
new test point is obtained from x_n = a_n + (0.500 ± 0.118)L_n, and the extreme
value is searched within the interval L_n = 0.618 L_{n-1}.
Each application of this method shrinks the test range by roughly 40%. On a 1 m
scale, the extreme value is narrowed to within 0.01 m after 10 tests, and the period
gap between the two compared points is then only a few milliseconds. The advantage
of this method is thus that it finds the minimum period rapidly, locating the extreme
value with the fewest tests along the best test path.
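The search procedure above can be sketched as a standard golden-section (0.618) minimum search (our implementation; with the pendulum period T(h) as the function to minimize):

```python
def golden_min(f, a, b, tol=1e-4):
    """0.618-method search for the minimizer of f on [a, b]: test two
    interior points at a + 0.382*(b-a) and a + 0.618*(b-a), keep the
    sub-interval containing the smaller value, and reuse the surviving
    interior point in the next iteration."""
    r = 0.6180339887498949
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                      # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                            # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)
```

Each iteration keeps 61.8% of the interval and costs only one new period measurement, which is exactly why the instrument needs so few test points.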
For the data in Table 1 of Fig. 3, students should test and compare the periods of two
comparison points. Using the 0.618 method to determine the new endpoints, the
comparison-point calculation is iterated until the extreme value of the pendulum is
reached and the minimum period can be calculated. Through continuous iteration,
students gain a better understanding of the advantage of golden-section points.
Additionally, the no-extreme-value tips function in the system can monitor the
experiment process: when this function is selected, if a wrong operation leaves no
minimum period value in the new interval, the system will issue a prompt, so that one
slip does not ruin the whole experiment. Students can judge whether the experiment
is successful by checking whether the period becomes smaller and smaller over the
iterations, whether the latest period gap between the two comparison points is very
small, and whether there is a minimum period value in the latest interval. If the
experiment fails, the wrong data should be deleted and retested.
In Table 2 of Fig. 3, the auxiliary points should be scattered and evenly chosen to
make the curve look good. For ease of reading, the auxiliary points can be chosen at
integer positions. Note that auxiliary points and comparison points must not duplicate
each other, or the auxiliary points are useless [4]. (The system will issue a prompt if
they are duplicated.)
After the measurements in Tables 1 and 2 are complete, the computer automatically
draws the curve and calculates the values of x0, h0 and J0. (Students are also
encouraged to draw the curve by hand.) From the curve, students can check whether
all the comparison points and auxiliary points lie on it. If a point deviates from the
curve and deforms it, the computer will issue a prompt to retest when the mouse
clicks on the wrong point. A smooth, pleasing curve is obtained after retesting.
4 Conclusions
Using advanced techniques such as applied mathematics and computing in physics
experiment teaching makes students deeply understand optimization methods and
makes the experiment process and results visual [5]. The learning enthusiasm and
initiative of students are fully mobilized. After finishing the experiment, students can
self-check the result, reducing the blindness of the experiment. This technical support
ensures that the experiment proceeds smoothly. Moreover, the whole experiment is
also an autonomous learning process in which students can find problems, solve
them, and explore conclusions through scientific research.
Acknowledgments
This research work was supported by the National Natural Science Foundation of
China (Nos. 10805012, 60872112 and 30900386) and the Science Research and
Development Fund of Hefei University of Technology (2010HGXJ0075 and
2009HGXJ0086).
References
1. Xiao, S., Ren, H., Mei, Z.-y.: College Physics experiment. USTC Publishing House, Hefei
(2008)
2. Ren, H., Chen, D.-y., Liang, J.: Affecting factors of vibration period of golden section effect
physical pendulum. Journal of Hefei University of Technology 32, 1106–1120 (2009)
3. Zhongyi, M., Linghu, N., Xiangyuan, L.: The Application of Fibonacci Method on Physical
Experiment-Physical Pendulum of Golden Section Effect. Physical Experiment of Col-
lege 19, 51–54 (2006)
4. Hou, Z.-j., Mei, Z.-y., Sun, W., Qing-jie, S.: Experimental Study of Physical Pendulum of
Golden Section. Experiment Science & Technology 6, 150–152 (2009)
5. Jing, J., Zhu, Y., Ren, H., Mei, Z.-y.: Study of bilingual teaching for physics experiment. In:
International Conference on E-Health Networking, Digital Ecosystems and Technologies
(EDT), vol. 2, pp. 424–426 (2010)
Simulation Study of Gyrotron Traveling Wave Amplifier
with Distributed-Loss in Facilities for Aquaculture
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 492–496, 2011.
© Springer-Verlag Berlin Heidelberg 2011
The beam current is kept below the threshold for the absolute instability of the
operating mode, and the interaction length is kept shorter than the critical length for
gyro-BWO oscillation [4]. The critical current of the TE01 mode is predicted to be
20 A, so a design beam current of 10 A is used, which provides a 50% safety margin.
All other potentially competing modes, including TE11(1), TE21(1) and TE02(2),
were also considered; the shortest critical length for gyro-BWO oscillation is
predicted to be 15 times the cylindrical radius.
The design is based on the use of MIG, and the basic parameters are calculated
from linear theory. Combined with the nonlinear simulations, we can choose and
optimize the most promising design. The circuit of the present design includes the
input coupler taper, the uniform interaction circuit, and a nonlinear output uptaper.
The collector follows the nonlinear output taper and also serves as the RF output
waveguide.
The design of the output uptaper and input coupler is very important for attaining the
desired output. The input coupler, with a coaxial slotted structure, is designed with
the HFSS code to convert the input power from the TE10 mode to the TE01 mode;
its characteristics are shown in Fig. 1 and Fig. 2. The uptaper of the output section
employs a Dolph-Chebyshev profile to ensure minimal mode conversion; its
characteristics are shown in Fig. 3 and Fig. 4.
Fig. 4. Dependence on frequency from HFSS simulation of the output uptaper’s performance
3 Conclusion
The simulation results show an instantaneous 3 dB bandwidth of 5% with a peak
power of 250 kW and 25% efficiency. Testing of the amplifier will be carried out in
future work.
References
1. Yeh, Y.S., et al.: W-Band Second-Harmonic Gyrotron Traveling Wave Amplifier with Distributed-Loss and Severed Structures. International Journal of Infrared and Millimeter Waves 25, 29–42 (2004)
2. David, B., et al.: Design of a W-Band TE01 Mode Gyrotron Traveling-Wave Amplifier with High Power and Broad-Band Capabilities. IEEE Transactions on Plasma Science 30, 894–902 (2002)
3. Chu, K.R., et al.: Theory and Experiment of Ultrahigh-Gain Gyrotron Traveling Wave Amplifier. IEEE Trans. Plasma Sci. 27, 391–402 (1999)
4. Lin, A.T., Chu, K.R., et al.: Marginal Stability Design Criterion for Gyro-TWTs and Comparison of Fundamental with Second Harmonic Operation. Int. J. Electronics 72, 873–885 (1992)
The Worm Propagation Model with Dual Dynamic
Quarantine Strategy
Abstract. Internet worms are becoming more and more harmful with the rapid
development of the Internet. Because network worms spread extremely fast and are
highly destructive, strong dynamic quarantine strategies are necessary. Inspired by
real-world approaches to the prevention and treatment of infectious diseases, this
paper proposes a quarantine strategy based on a dynamic worm propagation model:
the SIQRV dual-quarantine model. This strategy dynamically quarantines both
vulnerable and infected hosts and releases them after a certain period of time,
regardless of whether the quarantined hosts have passed a security check. Through
mathematical modeling, we find that when the basic reproduction number R0 is less
than a critical value, the system stabilizes at the disease-free equilibrium; that
is, in theory, the worm is completely eliminated. Finally, the close agreement
between the simulation results and the numerical analysis supports the validity of
the mathematical model. Our future work will focus on taking both time delay and
the dual-quarantine strategy into account and on further expanding the scale of our
simulation work.
1 Introduction
In the current high-speed network environment, the diversification of communication
channels and complex application environments indirectly increase the frequency of
network worm attacks and the damage caused by worms [1][2]. Since worms spread very
quickly and artificial defenses alone have little effect, an automatic defense
system is needed [3][4]. In such a system, once a host is infected, it is
automatically quarantined by the local defense system and treated by the security
group to prevent further spread of the worm. So as not to disrupt users' normal
activity, the duration of confinement should be limited to a short period, after
which the quarantined host is released.
498 Y. Yao et al.
2 Related Work
Mathematical modeling is one of the primary means of studying network worms.
Infectious disease models have been used to understand and simulate the biological
spread of infectious diseases [5]. Qing analyzes worm propagation models and points
out that Internet worm propagation can be described by the SEM and SIR models [6].
The Two-Factor model proposed by Zou adds parameters that influence worm propagation
to the SIR model [7]. To suppress the rapid spread of worms, Zou introduces a dynamic
quarantine strategy into the Two-Factor model [8]. However, the dynamic quarantine
model also has shortcomings: it does not consider the immunity of vulnerable hosts
after quarantine, and, out of consideration for users, hosts should not be
quarantined for long periods. Based on these observations, this paper proposes the
SIQRV dual-quarantine model.
In this propagation model, every host is in one of five states: S (Suspected), I
(Infectious), Q (Quarantined suspected), R (Quarantined infectious), and V
(Vaccinated). The state transition diagram of the quarantine model with newborns is
given in Fig. 1 under the following assumptions.
From the state transition diagram, the differential equations of the SIQRV model are
formulated:

\[
\begin{cases}
\dfrac{dS}{dt} = p\mu - \beta SI - \alpha_1 S - \omega S + \delta_1 Q - \mu S \\[4pt]
\dfrac{dI}{dt} = \beta SI - \gamma I - \alpha_2 I + \delta_2 R - \mu I \\[4pt]
\dfrac{dQ}{dt} = \alpha_1 S - \delta_1 Q - \rho_1 Q - \mu Q \\[4pt]
\dfrac{dR}{dt} = \alpha_2 I - \delta_2 R - \rho_2 R - \mu R \\[4pt]
\dfrac{dV}{dt} = (1-p)\mu + \omega S + \gamma I + \rho_1 Q + \rho_2 R - \mu V
\end{cases} \tag{1}
\]
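To make the dynamics of system (1) concrete, the sketch below integrates it with a simple forward-Euler step. All parameter values here are illustrative assumptions, not values from this paper. Note that the right-hand sides of (1) sum to μ(1 − (S+I+Q+R+V)), so the state components continue to sum to one when they start at one; this is a useful sanity check on any implementation.

```python
# Forward-Euler sketch of the SIQRV system (1).
# All parameter values are illustrative assumptions, not from the paper.
def siqrv_derivs(state, p, mu, beta, a1, a2, omega, gamma, d1, d2, r1, r2):
    S, I, Q, R, V = state
    dS = p * mu - beta * S * I - a1 * S - omega * S + d1 * Q - mu * S
    dI = beta * S * I - gamma * I - a2 * I + d2 * R - mu * I
    dQ = a1 * S - d1 * Q - r1 * Q - mu * Q
    dR = a2 * I - d2 * R - r2 * R - mu * R
    dV = (1 - p) * mu + omega * S + gamma * I + r1 * Q + r2 * R - mu * V
    return (dS, dI, dQ, dR, dV)

def simulate(state, params, dt=0.01, steps=5000):
    for _ in range(steps):
        d = siqrv_derivs(state, **params)
        state = tuple(x + dt * dx for x, dx in zip(state, d))
    return state

params = dict(p=0.5, mu=0.05, beta=0.6, a1=0.2, a2=0.3,
              omega=0.05, gamma=0.1, d1=0.05, d2=0.05, r1=0.1, r2=0.1)
final = simulate((0.9, 0.1, 0.0, 0.0, 0.0), params)  # S+I+Q+R+V stays 1
```

A production study would use an adaptive ODE solver rather than fixed-step Euler; the point here is only the structure of the vector field.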
Since S + I + Q + R + V = 1, we have V = 1 − S − I − Q − R, and system (1) is
equivalent to the differential system (3):

\[
\begin{cases}
\dfrac{dS}{dt} = p\mu - \beta SI - \alpha_1 S - \omega S + \delta_1 Q - \mu S \\[4pt]
\dfrac{dI}{dt} = \beta SI - \gamma I - \alpha_2 I + \delta_2 R - \mu I \\[4pt]
\dfrac{dQ}{dt} = \alpha_1 S - \delta_1 Q - \rho_1 Q - \mu Q \\[4pt]
\dfrac{dR}{dt} = \alpha_2 I - \delta_2 R - \rho_2 R - \mu R
\end{cases} \tag{3}
\]
It is obvious that there always exists an equilibrium E0*(S0*, I0*, Q0*, R0*) of
system (3), with

\[
S_0^* = \frac{p\mu}{\alpha_1 + \omega + \mu - A}, \qquad I_0^* = 0, \qquad
Q_0^* = \frac{\alpha_1}{\delta_1 + \rho_1 + \mu}\, S_0^*, \qquad R_0^* = 0,
\qquad \text{where } A = \frac{\delta_1 \alpha_1}{\delta_1 + \rho_1 + \mu}
\]

(substituting Q0* into dS/dt = 0 yields this denominator). And the basic
reproduction number is obtained:

\[
R_0 = \frac{\beta S_0^*}{\gamma + \alpha_2 + \mu} \tag{4}
\]
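The equilibrium expressions can be verified numerically: with I = R = 0, substituting S0* and Q0* back into the right-hand sides of system (3) should give zero residuals. The parameter values below are illustrative assumptions, not values from this paper.

```python
# Numerical check of the worm-free equilibrium of system (3).
# Parameter values are illustrative assumptions, not from the paper.
p, mu, beta = 0.5, 0.1, 0.8
a1, a2, omega, gamma = 0.2, 0.3, 0.05, 0.1
d1, d2, r1, r2 = 0.05, 0.05, 0.1, 0.1

A = d1 * a1 / (d1 + r1 + mu)
S0 = p * mu / (a1 + omega + mu - A)          # S0*
Q0 = a1 * S0 / (d1 + r1 + mu)                # Q0*
R0 = beta * S0 / (gamma + a2 + mu)           # basic reproduction number, Eq. (4)

# With I = R = 0, dS/dt and dQ/dt evaluated at (S0, Q0) should both vanish.
res_S = p * mu - a1 * S0 - omega * S0 + d1 * Q0 - mu * S0
res_Q = a1 * S0 - d1 * Q0 - r1 * Q0 - mu * Q0
```

With these values R0 comes out below one, the regime in which Theorem 3.2 predicts global stability of the worm-free state.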
sign, and is not identically zero in any subdomain of G; then the system has no
closed orbit in G.
Theorem 3.2. If and only if the basic reproduction number R0 < 1, system (3) has a
unique globally asymptotically stable equilibrium E0*(S0*, I0*, Q0*, R0*).
The Jacobian matrix of system (3) at the equilibrium is

\[
J = \begin{pmatrix}
a_1 & -\beta S_0^* & \delta_1 & 0 \\
0 & a_2 & 0 & \delta_2 \\
\alpha_1 & 0 & a_3 & 0 \\
0 & \alpha_2 & 0 & a_4
\end{pmatrix}
\]

where a1 = −α1 − ω − μ, a2 = βS0* − γ − α2 − μ, a3 = −δ1 − ρ1 − μ,
a4 = −δ2 − ρ2 − μ. The characteristic equation is

\[
(\lambda + \alpha_1 + \omega + \mu)(\lambda - \beta S_0^* + \gamma + \alpha_2 + \mu)(\lambda + \delta_1 + \rho_1 + \mu)(\lambda + \delta_2 + \rho_2 + \mu) = 0 \tag{7}
\]
If R0 = βS0*/(γ + α2 + μ) < 1, Eq. (7) has no positive root, which implies that the
equilibrium E0* is unique and globally asymptotically stable.
Fig. 2. Numerical result of the SIQRV model    Fig. 3. Simulation result of the SIQRV model
Fig. 5. Numerical and simulation curves: (a) the distribution of the quarantined
susceptible hosts; (b) the distribution of the quarantined infected hosts
5 Conclusion
In this paper, we improve the quarantine model by introducing the dual quarantine
strategy. In our model, the quarantine with the suspected hosts and the quarantine with
the infectious hosts are employed. From our mathematical analysis, the main results
are summarized as follows:
• This model is suitable for describing worm propagation with a dual quarantine
strategy.
• The simulation experiments imply that the dual dynamic quarantine strategy better
accommodates human considerations while controlling the worm propagation.
• Under the condition R0 = βS0*/(γ + α2 + μ) < 1, our model can reach a stable state
with the infections eliminated, which implies that worm propagation can eventually
be controlled.
These conclusions are not only derived theoretically but also verified well by our
numerical and simulation results. Due to space limitations, the impact of time delay
on this model and comparisons with more models cannot be discussed here; we will
address these questions in future work, as this paper is only a first step in this
field.
References
1. Spafford, E.H., et al.: The Internet worm program: an analysis. Technical report CSD-TR-823,
Department of Computer Science, Purdue University, pp. 1–29 (1988)
2. Kephart, J.O., White, S.R., et al.: Measuring and Modeling Computer Virus Prevalence. In:
Proceedings of the IEEE Symposium on Security and Privacy (1993)
3. Moore, D., Shannon, C., Voelker, G.M., Savage, S., et al.: Internet Quarantine: Requirements
for Containing Self-Propagating Code. In: IEEE INFOCOM (2003)
4. Kienzle, D.M., Elder, M.C., et al.: Recent worms: a survey and trends. In: WORM 2003
(October 2003)
5. Daley, D.J., Gani, J., et al.: Epidemic Modelling: An Introduction. Cambridge University
Press, Cambridge (1999)
6. Qing, S., Wen, W., et al.: A survey and trends on Internet worms. Computers & Security,
334–345 (2005)
7. Zou, C.C., Gong, W., Towsley, D., et al.: Code Red Worm Propagation Modeling and
Analysis. In: 9th ACM Symposium on Computer and Communication Security, Washington
DC, pp. 138–147 (2002)
8. Zou, C.C., Gong, W., Towsley, D., et al.: Worm Propagation Modeling and Analysis under
Dynamic Quarantine Defense. In: Proc. the 2003 ACM workshop on Rapid malcode Symp.
Computer and Communications Security, pp. 51–60. ACM Press, New York (2003)
Keyword Extraction Algorithm Based on Principal
Component Analysis
1 Introduction
With the development of the information era, means of expression have increasingly
diversified, and among them text information is irreplaceable. With the explosive
growth of text information on the network, acquiring it by manual means becomes
increasingly difficult, so improving the efficiency of information access becomes an
increasingly important issue. To organize and process massive text information
effectively, researchers have studied information retrieval, automatic abstracting,
text categorization, text clustering, etc., and all of these studies involve a key
basic problem: how to extract keywords from text.
Keywords highly summarize the main content of a text, which makes it easy for
readers to judge whether the text is what they need. Keywords can be used to measure
text relevance at small computational cost, enabling effective information
retrieval, text clustering, classification, etc. Text retrieval is the most
widespread application: after users enter keywords into a search engine, the system
returns all texts in which the keywords appear. The study of keyword extraction
began earlier abroad, and some practical or experimental systems have been built.
Although there are many domestic keyword extraction algorithms, the calculation of
keyword attribute weights remains a difficult point. Taking text as the research
object, this article improves the attributes of keywords and uses principal
component analysis to weight the attributes, eventually obtaining a total score for
each keyword.
504 C.-J. Li and H.-J. Han
\[
\begin{pmatrix}
b_{11} & b_{12} & \cdots & b_{1p} \\
b_{21} & b_{22} & \cdots & b_{2p} \\
\vdots & \vdots & & \vdots \\
b_{p1} & b_{p2} & \cdots & b_{pp}
\end{pmatrix}
=
\begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_p
\end{pmatrix}
\quad (\lambda_1 > \lambda_2 > \cdots > \lambda_p) \tag{2}
\]
We assume that R is the correlation matrix obtained after standardizing the original
indicators (x1, x2, …, xp). Then the λi (i = 1, …, p) are its eigenvalues, and ci1,
ci2, …, cip are the corresponding eigenvectors. Substituting the eigenvalues into
the formula αi = λi / Σ_{k=1}^{p} λk gives the contribution rate αi of the new
indicator zi, which is the weight coefficient of the keyword attribute.
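The contribution-rate computation can be sketched as follows. To keep the example dependency-free, it uses a 2×2 correlation matrix [[1, r], [r, 1]], whose eigenvalues have the closed form 1 ± r; a real implementation for the paper's p = 7 attributes would obtain the eigenvalues from a linear-algebra library.

```python
# PCA-style contribution rates for the 2x2 correlation matrix [[1, r], [r, 1]].
# Its eigenvalues are 1 + r and 1 - r (closed form, so no linalg library needed).
def contribution_rates(r):
    eigenvalues = [1 + r, 1 - r]
    total = sum(eigenvalues)                 # equals the trace, here p = 2
    return [lam / total for lam in eigenvalues]

alphas = contribution_rates(0.6)             # weights alpha_i, summing to 1
```

The weights always sum to one because the eigenvalues of a correlation matrix sum to its trace p.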
\[
F_i = \frac{n \times \log(M/m)}{N} \tag{3}
\]

Here n is the number of times the word Ci appears in the text, m is the number of
texts in which Ci appears, M is the total number of texts, and N is the total number
of words. Formula (3) shows that TF-IDF tends to filter out common words and retain
the important words.
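Formula (3) translates directly into code; the counts below are made-up illustrative values, not measurements from the paper's corpus.

```python
import math

# TF-IDF-style weight of formula (3): F = n * log(M / m) / N, where
# n = occurrences of the word in this text, m = number of texts containing
# the word, M = total number of texts, N = total words in this text.
def tf_idf_weight(n, m, M, N):
    return n * math.log(M / m) / N

# A rare word scores higher than a word appearing in nearly every text.
rare = tf_idf_weight(n=5, m=3, M=300, N=1000)
common = tf_idf_weight(n=5, m=280, M=300, N=1000)
```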
(2) Part of speech (POS)
Firstly, different POS correspond to different degrees of word importance in a text.
For example, cc (Coordinating Conjunction), o (Onomatopoeia), w (Punctuation), etc.
have little impact on the text, so we give these POS lower weights. Other POS, for
example nr (Name), nz (Terminology), v (Verb), etc., have a higher impact on the
text, so we give them higher weights.
Secondly, a Chinese sentence is composed of phrases, and the relation among words in
a phrase is one of modifying or being modified. Words in different positions show
different degrees of importance, so their weights differ. Through statistical
methods, we summarize six categories of phrases: subject-predicate, modification-
center, verb, verb-complement, parallel, and prepositional phrases. According to
their POS, we weight the words separately. Take the parallel phrase as an example:
it is a parallel relationship between two words, so the two words have the same
weight. In a verb-complement phrase, the verb is the core and the complement
modifies the verb, so the weight of the verb is higher than that of the complement.
Based on the two aspects above and statistical experiments, we divide the 79 POS we
generalized into five grades and normalize the weights:

\[
X_i = \frac{pos_i}{5} \tag{4}
\]
posi is the POS weight recorded in the weight table. Through formula (4), POS like
conjunctions have lower weights, whereas POS that are modified, like nouns, have
higher weights.
(3) Word length weighting
Word length also determines the importance of a word: longer Chinese words tend to
be more important.

\[
L_i = \frac{len_i}{L_{\max}} \tag{5}
\]

leni is the length of word Ci and Lmax is the length of the longest word.
(4) The contribution rate of the word
The meaning of a sentence is composed of the meanings of its words and the
relationships between them. Co-occurrence is the most direct relationship among
words, and the meaning of a sentence is embodied in the co-occurrence relations
among its words, so the co-occurrence relation within sentences is also a property
of a word. Assume the total number of times word Ci appears in the set M of all
sentences within the same text is Si, namely word frequency tf(Ci, M); the total
number of times word Cj appears in M is Sj, namely tf(Cj, M); and the co-occurrence
frequency of Ci and Cj is Sij.
\[
P_{ij} = \frac{S_{ij}}{S_{ii} + S_{jj} - S_{ij}} = \frac{S_{ij}}{S_i + S_j - S_{ij}} \tag{6}
\]

Pij is the co-occurrence probability [6] between words Ci and Cj; note that
Pij = Pji and Pii = 1.
Finally, we can obtain a co-occurrence probability matrix over the word space from
the word database. The contribution rate of the word Ci is:

\[
D_i = \sum_{j} P_{ij}
\]
A higher contribution rate means that a word often modifies or is modified by other
words. We believe such words are more important and make a greater contribution to
sentences; they receive higher weights, and the words associated with them should
also receive higher weights.
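The co-occurrence probability (6) and the contribution rate can be computed from per-sentence word sets, as the sketch below shows; the sentences and vocabulary are made-up illustrative data.

```python
# Contribution rate via sentence-level co-occurrence:
# P_ij = S_ij / (S_i + S_j - S_ij), D_i = sum over j of P_ij.
# Sentences are illustrative word lists, not data from the paper.
sentences = [
    ["network", "worm", "spread"],
    ["worm", "quarantine"],
    ["network", "quarantine", "worm"],
]

def contribution(word, vocabulary, sentences):
    total = 0.0
    for other in vocabulary:
        s_i = sum(word in s for s in sentences)    # sentences containing word
        s_j = sum(other in s for s in sentences)   # sentences containing other
        s_ij = sum(word in s and other in s for s in sentences)
        total += s_ij / (s_i + s_j - s_ij)         # Eq. (6), includes P_ii = 1
    return total

vocab = ["network", "worm", "quarantine"]
d_worm = contribution("worm", vocab, sentences)
```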
(5) Stop words
Stop words are characterized by high frequency in all documents but contribute very
little to the document subject. However, stop words should also be graded: a word
may not be a stop word in one article yet be a stop word in others. For statistical
purposes, this paper arranges stop words into four grades, with Ti taking the values
0, 0.2, 0.4, and 1. A value of 1 means the word is not a stop word; 0 means the word
is certainly a stop word and can be deleted directly.
(6) Keyword position [7]
Words in headlines are more likely to be keywords, so we assign a position attribute
value Wi of 0.8 if the word appears in the headline and 0.2 otherwise.
(7) Citation emphasis
This attribute represents whether the word is enclosed by quotation marks, brackets,
etc.
We collect statistics on all candidate keywords and obtain a sample of n
observations, denoted xij, where i stands for the sample (i = 1, 2, 3, …, n), that
is, a keyword of the text, and j stands for the indicator (j = 1, 2, 3, …, p); here
p equals 7. We use the Z-score:

\[
x'_{ij} = \frac{x_{ij} - \bar{x}_j}{s_j} \tag{7}
\]
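The Z-score standardization of Eq. (7) can be sketched per attribute column (here with the sample standard deviation); the values are illustrative.

```python
# Z-score standardization of one attribute column, Eq. (7):
# x'_ij = (x_ij - mean_j) / s_j. Input values are illustrative.
def z_score(column):
    n = len(column)
    mean = sum(column) / n
    var = sum((x - mean) ** 2 for x in column) / (n - 1)   # sample variance
    s = var ** 0.5
    return [(x - mean) / s for x in column]

standardized = z_score([0.2, 0.5, 0.9, 0.4])
```

After standardization each column has mean zero and unit sample variance, which is what makes the subsequent correlation-matrix eigenanalysis meaningful.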
Replace the main diagonal elements of the correlation matrix R obtained above by
(1 − λ), set the resulting determinant to zero, and solve for the p eigenvalues
λ1, λ2, …, λp. Substituting each eigenvalue into the equation AX = λX yields the
corresponding eigenvector.
From the preceding reasoning, λi is the variance of the ith new index, and the
eigenvectors are the coefficients of the orthogonal transformation

\[
z_i = c_{i1} x'_1 + c_{i2} x'_2 + \cdots + c_{ip} x'_p .
\]

The percentage of each new factor's corresponding variance, λi / Σ_{k=1}^{p} λk,
reflects the relative position of this variable among all the variables and is
called the contribution rate, denoted αi. αi is the weight coefficient of the new
index.
The keywords' final weight scores are then computed.
The test corpus was chosen to be as diverse as possible, including 100 conference
papers, 100 news agency text files, and 100 Internet texts, for a total of 300
experimental texts. For each article we conducted a manual statistical analysis of
keywords and identified 20 keywords.
Test 1: Contrast precision ratio
Acknowledgement
This work is supported by National 863 High-Tech program of China
(2009AA01Z304), National Natural Science Foundation of China (60603077) and
Shan Dong Province Natural Science Foundation of China (ZR2009GM029,
ZR2009GQ004, Y2008G37, Z2006G05).
References
1. Li, Y.-S., Zeng, Z.-X., Zhang, M., Yu, S.-J.: Application of Primary Component Analysis in
the Methods of Comprehensive Evaluation for many Indexes. Journal of Hebei University
of Technology 1(28), 94–197 (1999)
2. Liu, L., Zeng, Q.-T.: An Overview of Automatic Question and Answering System. Journal
of Shandong University of Science and Technology (Natural Science) 26(4) (2007)
3. Information on, http://www.ictclas.org/
4. Hua, B.-L.: Stop-word Processing Technique in Knowledge Extraction. New Technology of
Library and Information Service, 48–51 (2007)
5. Jia, Z.-F., Wang, Z.-F.: Research on Chinese Sentence Similarity Computation. Science &
Technology Information (11), 402–403 (2009)
6. Gong, J., Tian, X.-M.: Methods of Feature Weighted Value Computing based on Text Rep-
resentation. Computer Development & Applications 21(2), 46–48 (2008)
7. Deng, Z., Bao, H.: Improved keywords extraction method research. Computer Engineering
and Design 30(20), 4677–4680 (2009)
A Web Information Retrieval System
Abstract. An approach for the retrieval of price information from internet sites
is applied to real-world application problems in this paper. The Web
Information Retrieval System (WIRS) utilizes Hidden Markov Model (HMM)
for its powerful capability to process temporal information. HMM is an
extremely flexible tool and has been successfully applied to a wide variety
of stochastic modeling tasks. In order to compare the prices and features of
products from various web sites, the WIRS extracts prices and descriptions of
various products within web pages. The WIRS is evaluated with real-world
problems and compared with a conventional method and the result is reported
in this paper.
1 Introduction
The internet has become a vital source for information. However, as the number of
web pages continues to increase, it becomes harder for users to retrieve useful infor-
mation including news, prices of goods, and research articles. Commercial search
engines are widely used to locate information sources across web sites. One obstacle
that has to be faced when searching the web page via a query is that commercial
search engines usually return very large hit lists with a low precision. The list inevita-
bly includes irrelevant pages and the results are ranked according to query occur-
rences in the documents rather than correct answers within the documents. Especially,
these search engines focus on the recall ratio instead of the precision ratio. Users have
to find the relevant web pages from amongst irrelevant pages by manually fetching
and browsing pages to obtain specific information. For these reasons, many
researchers are trying to develop solutions to perform the web information retrieval
(WIR) function in a more efficient and automatic manner [1]-[4]. Early successes in WIR
include the PageRank algorithm [5] and the HITS algorithm of Kleinberg [6]. Other
linked-based methods for ranking web pages have been proposed including variants
of both PageRank and HITS [7]. The PageRank algorithm globally analyzes the entire
510 T.-H. Kim et al.
web graph while HITS algorithm analyzes a local neighborhood of the web graph
containing an initial set of web pages matching the user's query.
The process of extracting information from the result pages yielded by a search
engine is termed web information retrieval. Several automated or nearly automated
WIR methods have been proposed; representative methods include Mining Data Records
in web pages (MDR) [12], the Object Mining and Extraction System (OMINI) [13], and
Information Extraction based on Pattern Discovery (IEPAD) [14]. In this paper, a Web
Information Retrieval System, called WIRS, is utilized to extract prices of goods on
the internet. From the large number of web pages on the internet relevant to prices
of goods, the WIRS helps extract the prices of goods of interest with maximal
accuracy.
The remainder of this paper is organized as follows: A brief review of HMM and
the web information retrieval system based on HMM are presented in Section 2.
Section 3 describes an actual implementation of WIRS and experiments involving a
practical price extraction problem and presents results including comparisons with a
conventional MDR. Finally, Section 4 concludes the paper.
In this section, we demonstrate the performance of the WIRS on real-world data in
comparison with a conventional system, MDR, which is a state-of-the-art web
information extraction system based on HTML tag structure analysis. The data set for
training each product contains observations of 100 URLs returned from a general-purpose
search engine, Google, and the next 100 URLs are prepared for testing. Some typical
web sites with sufficient information about product features and prices are listed in
Table 1 while other web sites with irrelevant features are omitted. The trained HMM
used in the performance evaluation includes 8 states for labeling.
The performances are evaluated in terms of the precision and recall [18], which are
widely used to evaluate information retrieval and extraction systems. These are
defined as follows: Precision = CE/EO and Recall = CE/CO, where CE is the total
number of correctly extracted observations, EO is the total number of extracted ob-
servations on the page, and CO is the total number of correct observations (target
observations). Precision defines the correctness of the data records identified while
recall is the percentage of the relevant data records identified from the web page.
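These definitions translate directly into code; the counts below are taken from row 1 of Table 1 (www.flash-memory-store.com), where WIRS returned 16 records, all correct, out of 21 targets.

```python
# Precision = CE / EO and Recall = CE / CO, with CE = correctly extracted
# observations, EO = extracted observations, CO = correct (target) observations.
def precision(ce, eo):
    return ce / eo

def recall(ce, co):
    return ce / co

# Row 1 of Table 1: 21 target products; WIRS returned 16, all of them correct.
wirs_pr = precision(ce=16, eo=16)   # perfect precision
wirs_rc = recall(ce=16, co=21)      # misses 5 of the 21 targets
```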
3.2 Experiments
The WIRS is evaluated and compared with MDR on a real-world problem. An executable
program of MDR can be downloaded at [19]. MDR has a similarity threshold, which was
set at 60% in our test, as suggested by the authors of MDR. In MDR, the product
description or data record is treated as a block containing both the product
information and noisy information. The WIRS, however, can process web pages at a
deeper level, because it extracts the specific fields of a data record: image, name,
price, and URL, excluding noisy objects.
Table 1 shows the results of the performance comparison between the WIRS and MDR. A
total of 10 web sites with different formats and product information are evaluated
in these experiments. In Table 1, column 3 shows the number of target products
available in the corresponding URL shown in column 2. The listed web pages are
No  Web sites                     Products  WIRS Returned  WIRS Correct  MDR Returned  MDR Correct
1   www.flash-memory-store.com    21        16             16            38            20
2   www.tigerdirect.com           19        16             16            23            19
3   www.usbflashdrivestore.com    25        24             24            25            25
4   www.supermediastore.com       32        30             28            0             0
5   www.ecost.com                 25        22             22            27            25
6   www.pricespider.com           10        10             10            16            10
7   www.usanotebook.com           27        21             21            27            27
8   www.nextag.com                11        10             10            15            10
9   www.mysimon.com               25        21             21            11            16
10  shpping.aol.com               16        16             16            30            16
Total                             211       188            184           212           168
Recall Performance (Rc)                     WIRS: 87.2%                  MDR: 79.6%
Precision Performance (Pr)                  WIRS: 98.9%                  MDR: 79.2%
returned from a commercial search engine, Google, and the web pages include suffi-
cient information about product features including prices, images, and descriptions
regarding the specific products such as usb flash drive, laptop, web camera, computer
mouse, etc. The numbers shown in the rest of columns are the numbers of extracted
products and correctly extracted products by the WIRS and MDR systems. As shown in
Table 1, the average recall obtained by the WIRS is 87.2%, while MDR obtains 79.6%.
With respect to extraction precision, the WIRS proves to be the more powerful tool
for information extraction: its average precision is 98.9%, versus 79.2% for MDR.
In our experiments, MDR extracts records from HTML tables better than from
non-tables, while the WIRS method performs well in both cases. The WIRS method shows
far better performance than MDR because MDR was primarily designed to handle tables
only. In addition, MDR does not always identify the correct data sections for
extracting product records and sometimes extracts advertisement records. The WIRS
method is fully automated, since the extraction process is performed on URLs
returned from any general-purpose search engine.
4 Conclusion
In this paper, a novel and effective web information retrieval method for extracting
product prices from the internet is evaluated. This method can correctly identify
the data region containing a product record. When a data region consists of only one
data record, MDR fails to identify the data region correctly. The WIRS is evaluated
and compared with MDR on a real-world problem. The WIRS method overcomes the
drawbacks of the conventional MDR in processing HTML content. Experiments show that
the WIRS significantly outperforms MDR in terms of precision and recall.
Acknowledgment
This work was supported by the National Research Foundation of Korea Grant funded by
the Korean Government (2010-0009655) and by the MKE "Configurable Device and SW
R&D" (No. KI002168).
References
1. Chorbani, A.A., Xu, X.: A fuzzy markov model approach for predicting user navigation,
pp. 307–311 (2007)
2. Godoy, D., Amandi, A.: Learning browsing patterns for context-aware recommendation.
In: Proc. of IFIP AI, pp. 61–70 (2006)
3. Bayir, M.A., et al.: Smart Miner: A New Framework for Mining Large Scale Web Usage
Data. In: Proc. of Int. WWW Conf., pp. 161–170 (2009)
4. Cao, H., et al.: Towards Context-Aware Search by Learning A Very Large Variable
Length Hidden Markov Model from Search Logs. In: Proc. of Int. WWW Conf., pp. 191–
200 (2009)
5. Brin, S., Page, L.: The Anatomy of a Large-Scale Hypertextual Web Search Engine. In:
Proc. of Int. WWW Conf., pp. 107–117 (1998)
6. Kleinberg, J.M.: Authoritative Sources in a Hyperlinked Environment. Journal of the
ACM 46(5), 604–632 (1999)
7. Tomlin, J.A.: A New Paradigm for Ranking Pages on the World Wide Web. In: Proc. of.
Int. WWW Conf., pp. 350–355 (2003)
8. Riloff, E., Jones, R.: Learning Dictionaries for Information Extraction by Multi-Level Boot-
strapping. In: Proc. of the 16th National Conf. on Artificial Intelligence, pp. 811–816 (1999)
9. Sonderland, S.: Learning information extraction rules for semi-structured and free text.
Machine Learning 34(1), 233–272 (1999)
10. Leek, T.R.: Information Extraction Using Hidden Markov Models. Master thesis, UC, San
Diego (1997)
11. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech
recognition. Proc. of IEEE 77(2), 257–286 (1989)
12. Bing, L., Robert, G., Yanhong, Z.: Mining data records in web pages. In: Proc. of ACM
SIGKDD, pp. 601–606 (2003)
13. Buttler, D., Liu, L., Pu, C.: A fully automated object extraction system for the world wide
web. In: Proc.of IEEE ICDCS, pp. 361–370 (2001)
14. Chang, C., Lui, S.: IEPAD: Information extraction based on Pattern Discovery. In: Proc. of
WWW Conf., pp. 682–688 (2001)
15. Park, D.-C., Kwon, O., Chung, J.: Centroid neural network with a divergence measure for
GPDF data clustering. IEEE Trans. Neural Networks 19(6), 948–957 (2008)
16. Jiang, J.: Modeling Syntactic Structures of Topics with a Nested HMM-LDA. In: Proc. of
ICDM, pp. 824–829 (2009)
17. Park, D.-C., Huong, V.T.L., Woo, D.-M., Hieu, D., Ninh, S.: Information Extraction Sys-
tem Based on Hidden Markov Model. In: Yu, W., He, H., Zhang, N. (eds.) ISNN 2009.
LNCS, vol. 5551, pp. 55–59. Springer, Heidelberg (2009)
18. Raghavan, V.V., Wang, G.S., Bollmann, P.: A Critical Investigation of Recall and Preci-
sion as Measures of Retrieval System Performance. ACM Trans. Info. Sys. 7(3), 205–229
(1989)
19. http://www.cs.uic.edu/~liub/WebDataExtraction/
MDR-download.html
An Effective Intrusion Detection Algorithm Based on
Improved Semi-supervised Fuzzy Clustering
1 Introduction
With the rapid development of the Internet, network intrusion events happen
frequently, and intrusion detection technology plays an increasingly important role.
Data classification defines the processes of attack and attack recognition; specific
techniques such as pattern matching, statistical analysis, and integrity analysis
can realize this process. The essence of these methods is to compare normal data
with the data under inspection and, according to the difference, determine whether
the system has been invaded [1].
Like most machine learning systems, intrusion detection systems rely on existing
labeled data. But labeled data are difficult to obtain: professional workers must
spend a lot of time collecting and labeling large amounts of data. Unlabeled data
are much easier to obtain, but classification with unlabeled data alone is less
effective [2]. Clustering can identify natural clusters in unlabeled data, but class
divisions are not always consistent with natural clustering [3]. Fuzzy clustering
algorithms have better classification accuracy and generalization ability, and
evolutionary semi-supervised learning algorithms have better self-learning
properties; their combination is ESSFC (Evolutionary Semi-Supervised Fuzzy
Clustering). In this paper, Improved Evolutionary Semi-Supervised Fuzzy Clustering
(IESSFC) [4] is applied to intrusion detection systems: labeled data play the role
of chromosomes, and the class label information they provide is used to guide each
chromosome's
516 X. Li et al.
evolution. The fitness of each chromosome includes the cluster variance of the
unlabeled data and the misclassification error of the labeled data. The
classification structure obtained by IESSFC is used to classify new unlabeled data.
The experiments verify the efficiency of IESSFC.
Here, the superscript l indicates labeled data and u indicates unlabeled data;
n^l = |X^l|, n^u = |X^u|, and n = |X| = n^l + n^u. Assume that all data will be
classified into C clusters. A matrix representation of a fuzzy c-partition of X
induced by Eq. (1) has the form:
\[
U = \left[\, U^l \;\middle|\; U^u \,\right] =
\begin{bmatrix}
u_{11}^l & \cdots & u_{1 n_l}^l & u_{11}^u & \cdots & u_{1 n_u}^u \\
u_{21}^l & \cdots & u_{2 n_l}^l & u_{21}^u & \cdots & u_{2 n_u}^u \\
\vdots & & \vdots & \vdots & & \vdots \\
u_{c1}^l & \cdots & u_{c n_l}^l & u_{c1}^u & \cdots & u_{c n_u}^u
\end{bmatrix} \tag{2}
\]

where the columns of U^l are the fuzzy memberships of the labeled data and the
columns of U^u are the fuzzy memberships of the unlabeled data.
Here, the fuzzy values in U^l are determined by domain experts after a careful
investigation of X^l. In general, Eq. (2) should satisfy the following conditions:
\[
u_{ih}^l \in [0,1], \quad \sum_{i=1}^{c} u_{ih}^l = 1, \quad 1 \le i \le c,\; 1 \le h \le n_l \tag{3}
\]
\[
u_{ij}^u \in [0,1], \quad \sum_{i=1}^{c} u_{ij}^u = 1, \quad 1 \le i \le c,\; 1 \le j \le n_u \tag{4}
\]
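Conditions (3) and (4) say that each column of U is a probability vector over the c clusters. A small helper that turns raw nonnegative scores into valid memberships, with made-up values, can be sketched as:

```python
# Normalize columns of raw nonnegative scores so the memberships satisfy
# conditions (3)-(4): entries in [0, 1], each column summing to 1.
# The raw scores are illustrative, not data from the paper.
def to_fuzzy_partition(columns):
    return [[v / sum(col) for v in col] for col in columns]

raw = [[0.2, 0.6, 0.2], [1.0, 1.0, 2.0]]   # two data points, c = 3 clusters
U = to_fuzzy_partition(raw)
```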
The goal of the problem is to construct, using X, a classifier which can assign a
future new pattern to one or more pre-defined classes with as little error as
possible.
\[
v_i = \frac{\sum_{k=1}^{n_l} (u_{ik}^l)^2 x_k^l + \sum_{k=1}^{n_u} (u_{ik}^u)^2 x_k^u}{\sum_{k=1}^{n_l} (u_{ik}^l)^2 + \sum_{k=1}^{n_u} (u_{ik}^u)^2} \tag{5}
\]
\[
u_{ij}^{l\,\prime} = \left[ \sum_{h=1}^{c} \frac{\| x_j^l - v_i \|_C^2}{\| x_j^l - v_h \|_C^2} \right]^{-1} \tag{6}
\]

For i = 1, 2, …, c and j = 1, 2, …, n_l, the fuzzy memberships of the labeled data
can be recomputed as in (6), where C is a covariance matrix and
\( \| x_j^l - v_i \|_C^2 = (x_j^l - v_i)^T C (x_j^l - v_i) \). Accordingly, the
misclassification error of the labeled data, denoted E, can be measured as a
weighted sum of the variance between u_{ij}^l and u_{ij}^{l'}.
Minimizing the misclassification error of the labeled data is necessary for the classifier
to attain good generalization ability. In addition, fuzzy within-cluster variance is a
well-known measure of cluster quality in fuzzy clustering, defined as:
J = \sum_{j=1}^{n_u} \sum_{i=1}^{c} (u_{ij}^{u})^2 \, \|x_j^{u} - v_i\|_C^2   (8)
We can see that minimizing the fuzzy within-cluster variance is equivalent to maximizing
the similarity of data within the same cluster. Thus, we argue that the fuzzy within-cluster
variance of the unlabeled data can play the role of capacity control in our problem. We
define the objective function as follows:
f (U u , V ) = J + α ⋅ E (9)
Here, α should be proportional to the ratio n_u / n_l. Our problem has now been
converted to minimizing the objective function in Eq. (9). In this paper, evolutionary
programming is chosen to optimize the objective function: not only can it
alleviate the local-optima problem, it is also less sensitive to initialization. More
specifically, an evolutionary searching procedure is employed.
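As a concrete illustration, the objective in Eq. (9) can be evaluated as below. This is a minimal sketch, not the authors' code: the function name is ours, a plain Euclidean norm stands in for the covariance-weighted norm ‖·‖_C, and E uses unit weights because the paper does not specify the weighting.

```python
import numpy as np

def fuzzy_objective(U_u, U_l, U_l_prime, X_u, V, alpha):
    """Evaluate f(U^u, V) = J + alpha * E from Eq. (9).

    J: fuzzy within-cluster variance of the unlabeled data (Eq. (8)).
    E: misclassification error of the labeled data, simplified here to
       the unweighted squared difference between the expert-given
       memberships U^l and the recomputed memberships U^l'.
    Shapes: U_u is (c, n_u); U_l and U_l_prime are (c, n_l);
            X_u is (n_u, d); V is (c, d).
    """
    # squared Euclidean distances ||x_j^u - v_i||^2, shape (c, n_u)
    # (a plain norm is used here; the paper uses a covariance-weighted norm)
    d2 = ((X_u[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
    J = (U_u ** 2 * d2).sum()
    E = ((U_l - U_l_prime) ** 2).sum()
    return J + alpha * E
```

A usage note: with α = n_u / n_l, as suggested above, the labeled-data error term is scaled up so that the few labeled points are not swamped by the unlabeled ones.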
method can avoid selecting a boundary point as the guiding data. The centre points are
calculated as follows:
v_i = \left( \sum_{j=1}^{L_i} x_{ij} \right) \Big/ L_i   (10)

Here, L_i denotes the number of data in the ith class.
Second, the ESSFC algorithm selects the initial memberships of the unlabeled data at
random when building the classifier: first produce c real numbers r_{1j}, r_{2j}, …, r_{cj}
(r_{ij} ∈ [0,1], 1 ≤ i ≤ c) as one point of a chromosome, then calculate the fuzzy degree
u_{ij}^{uk} = r_{ij} / (r_{1j} + r_{2j} + … + r_{cj}). Because these numbers are randomly
generated, the resulting fuzzy degrees may not lie in the range of real intrusion data.
Such fuzzy degrees increase the burden of evolution and increase the variance of the
classification.
The proposed IESSFC overcomes this disadvantage by computing r_{1j}, r_{2j}, …, r_{cj}
from the distances between the unlabeled and the labeled data. Specifically, the distances
between each unlabeled datum and the centre points calculated by Eq. (10) are used, and
r_{1j}, r_{2j}, …, r_{cj} are calculated as follows:

r_{ij} = \|x_j^{u} - v_i\|^2 \,/\, \|x_j^{u}\|   (11)
This evolutionary process is more targeted, and the experimental results show that this
method can effectively speed up the evolutionary process.
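The guided initialization of Eqs. (10)-(11) can be sketched as follows. The function name is ours, and Eq. (11) is followed as printed (squared distance to the class center divided by the norm of the data point) — an assumption, since the original formula is partly garbled.

```python
import numpy as np

def guided_memberships(X_u, V):
    """Initialize fuzzy memberships of unlabeled data from their
    distances to the labeled-class centers (Eqs. (10)-(11)), instead
    of ESSFC's purely random initialization.

    X_u: (n_u, d) unlabeled data.
    V:   (c, d) class centers, i.e. the per-class means of the labeled
         data computed as in Eq. (10).
    Returns U of shape (c, n_u), each column summing to 1.
    """
    # r_ij = ||x_j^u - v_i||^2 / ||x_j^u||  (Eq. (11), as printed)
    num = ((X_u[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)  # (c, n_u)
    den = np.linalg.norm(X_u, axis=1)                           # (n_u,)
    R = num / den
    # u_ij = r_ij / (r_1j + ... + r_cj), i.e. normalize each column
    return R / R.sum(axis=0, keepdims=True)
```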
The main steps of IESSFC are as follows. Set the generation counter gen = 0 and the
maximum number of generations max-gen.
(1) For each k = 1, 2, …, p, generate the offspring U^{u(p+k)} according to (14):
u_{ij}^{u(p+k)} = (u_{ij}^{lk})^2 \, e^{-\|x_j^{u} - v_i^{k}\|_C} \Big/ \sum_{g=1}^{c} (u_{gj}^{lk})^2 \, e^{-\|x_j^{u} - v_g^{k}\|_C}   (14)
(2) Determine the new centers and the new objective function value using (15) and (16):

v_i^{p+k} = \left( \sum_{j=1}^{n_l} (u_{ij}^{lk})^2 x_j^{l} + \sum_{j=1}^{n_u} (u_{ij}^{uk})^2 x_j^{u} \right) \Big/ \left( \sum_{j=1}^{n_l} (u_{ij}^{lk})^2 + \sum_{j=1}^{n_u} (u_{ij}^{uk})^2 \right)   (15)

f^{p+k} = J^{p+k} + \alpha \cdot E^{p+k}   (16)
(3) Select the p fittest from the p + p matrices U^k according to the corresponding
values of f, to form the next generation of the population.
(4) gen = gen + 1.
(5) Repeat steps (1)-(4) until gen = max-gen or the allowed time is exhausted.
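The generate-evaluate-select structure of these steps can be sketched generically as below. This is an illustrative skeleton only: `offspring_fn` and `fitness_fn` are placeholders for the updates of Eqs. (14)-(16), and the helper names are ours.

```python
def select_survivors(population, fitness, p):
    """Step (3): from the p + p candidates (parents plus offspring),
    keep the p with the smallest objective value f, since Eq. (16)
    is being minimized."""
    order = sorted(range(len(population)), key=lambda k: fitness[k])
    keep = order[:p]
    return [population[k] for k in keep], [fitness[k] for k in keep]

def iessfc_loop(population, offspring_fn, fitness_fn, max_gen):
    """Outer loop of steps (1)-(5): generate one offspring per parent,
    evaluate all candidates, keep the p best, and repeat for max_gen
    generations (or until the allowed time is exhausted)."""
    p = len(population)
    for gen in range(max_gen):
        offspring = [offspring_fn(ind) for ind in population]  # steps (1)-(2)
        candidates = population + offspring
        fitness = [fitness_fn(ind) for ind in candidates]
        population, _ = select_survivors(candidates, fitness, p)  # step (3)
    return population
```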
Given a new datum, denoted x, and the c cluster centers v_1, v_2, …, v_c obtained with
IESSFC, the fuzzy membership u_i of x with respect to class i can be computed as:

u_i = \left[ \sum_{g=1}^{c} \left( \|x - v_i\|_C^2 \,/\, \|x - v_g\|_C^2 \right) \right]^{-1}   (17)
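Eq. (17) can be transcribed directly as below; a plain Euclidean norm again stands in for ‖·‖_C (assumption), and the function name is ours. The returned memberships sum to 1 over the classes.

```python
import numpy as np

def membership_of_new_point(x, V):
    """Fuzzy membership of a new pattern x w.r.t. each class (Eq. (17)).
    V holds the c cluster centers found by IESSFC. Note: if x coincides
    exactly with a center, the ratio below divides by zero; production
    code should handle that case separately."""
    d2 = ((V - x) ** 2).sum(axis=1)  # ||x - v_i||^2, shape (c,)
    # u_i = [ sum_g ( ||x - v_i||^2 / ||x - v_g||^2 ) ]^-1
    return 1.0 / (d2[:, None] / d2[None, :]).sum(axis=1)
```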
4 Experiments
The experimental data are from the "KDD Cup 1999 Data" [5], which includes 38 kinds of
attacks belonging to 4 broad categories. Some of the 41 features are redundant, and a large
feature-space dimension increases the computational complexity. To select useful features
for classification, we used the ID3 algorithm to construct a decision tree and select the
important attribute features. With ID3 we chose 10 features: 7 continuous features,
including duration and dst_bytes, and 3 discrete features: protocol_type, service, and flag.
The network data obtained must be converted into vector form for IESSFC. Continuous
features are processed with standard-deviation normalization. We define:
\bar{x}_{im} = \frac{1}{n} \sum_{f=1}^{n} x_{fm}   (18)

s_{im} = \left( \frac{1}{n-1} \sum_{f=1}^{n} (x_{fm} - \bar{x}_{im})^2 \right)^{1/2}   (19)

x'_{im} = (x_{im} - \bar{x}_{im}) / s_{im}   (20)
where \bar{x}_{im} is the mean of the mth continuous feature of X_i, s_{im} is its standard
deviation, and x'_{im} is the new continuous value after standard normalization. Discrete
features such as {tcp, udp, icmp} are converted into binary form as {1,0,0}, {0,1,0}, {0,0,1} [6].
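Eqs. (18)-(20) and the one-hot conversion can be sketched as follows (a minimal illustration; function names are ours):

```python
import numpy as np

def zscore_columns(X):
    """Eqs. (18)-(20): standardize each continuous feature column to
    zero mean and unit standard deviation. ddof=1 matches the 1/(n-1)
    factor in Eq. (19)."""
    mean = X.mean(axis=0)
    std = X.std(axis=0, ddof=1)
    return (X - mean) / std

def one_hot(value, categories):
    """Discrete features such as {tcp, udp, icmp} become the binary
    vectors {1,0,0}, {0,1,0}, {0,0,1} [6]."""
    return [1 if value == c else 0 for c in categories]
```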
We selected 10000 samples from KDDCUP99, including 4000 normal data, 6000
abnormal data. Abnormal data includes seven kinds of attacks, namely: mailbomb,
ipsweep, back, portsweep, smurf, snmpgetattack, mscan.
The parameters were set as follows: the population size p is 10, and the number of classes
is 8, one of which indicates normal traffic. From the 10000 normalized data, 7000 were
selected to generate the classifier; these comprise 2500 labeled and 4500 unlabeled data.
The centre points of the 8 classes were computed as the cluster centers. In calculating the
objective function, α = 4500/2500 = 1.8. The remaining 3000 data were used as test data.
The numbers of iterations were 20, 40, 60, 80, and 100, respectively. In the experiment we
compared ESSFC and IESSFC; the experimental results are shown in Table 1.
The experimental results show that IESSFC intrusion detection algorithm is more
effective than the intrusion detection algorithm based on ESSFC.
5 Conclusion
This paper proposed an effective intrusion detection system based on IESSFC, addressing
the fact that labeled data are more difficult to obtain than unlabeled data. When IESSFC
is applied to intrusion detection, the labeled data play a guiding role for the chromosomes:
the class label information is used to guide the evolution of each chromosome. The fitness
of each chromosome includes the cluster variance of the unlabeled data and the
misclassification error of the labeled data. The algorithm can use few labeled data together
with a large amount of unlabeled data to construct the intrusion detection classifier.
Experimental results show that the proposed algorithm has a higher detection rate than the
algorithm based on ESSFC.
References
1. Jiang, J., Ma, H., Ren, D.: A Survey of Intrusion Detection Research on Network Security.
Journal of Software 11(11), 1460–1466 (2000)
2. Castelli, V., Cover, T.: On the exponential value of labeled samples. Pattern Recognition
Letters 16(1), 105–111 (1995)
3. Jian, Y.: An Improved Intrusion Detection Algorithm Based on DBSCAN. Microcomputer
Information 25(13) (2009)
4. Liu, H., Huang, S.-t.: Evolutionary semi- supervised fuzzy clustering. Pattern Recognition
Letters 24, 3105–3113 (2003)
5. KDD Cup 1999 Data. University of California, Irvine [EB/OL] (March 1999),
http://kdd.ics.uci.edu/database/kddcup99/kddcup99.html
6. Wilson, D.R., Tony, R.M.: Improved heterogeneous distance functions. Journal of Artificial
Intelligence Research 6(1), 1–34 (1997)
What Does Industry Really Want in a
Knowledge Management System?
A Longitudinal Study of Taiwanese Case
1 Introduction
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 521–531, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Knowledge circulates within the organization all the time and creates value in use
[4]. The speed might be fast or slow, and the performance might be high or low. In
essence, knowledge is what employees know about customers, products, processes, and
past successes and failures [5]. A key challenge in the application of knowledge is
transferring it from where it was created or captured to where it is needed and should be
used [6]. Thus a “knowledge market” is formed within the organization in a business
context [7]. In order to manage the knowledge within the organization effectively, we
need to understand market forces in the knowledge market within an organization.
Most organizations assume that knowledge exchange happens automatically, needing no
driver, and that there will be no friction. As a consequence, many knowledge
management projects failed in the end [8].
From an industrial perspective, this paper starts with an empirical review of the
knowledge management development of Taiwanese companies from 2001 to 2010.
Second, three surveys on knowledge management issues, held in 2002, 2006, and 2010
among MIS and IT professionals in Taiwan, were conducted and compared to explore the
perceived understandings of, and requirements for, knowledge management system
applications. Finally, combining the intellectual capital theories of other researchers, we
propose a framework based on Michael Porter's value chain and Stan Shih's smiling
curve, and suggest that a company should use different knowledge management
approaches according to its main business function.
The period from 2001 to 2004 was very important to the knowledge management market
in Taiwan. On the user side, KM issues were widely discussed and the concepts gradually
matured. On the KMS vendor side, various KM application systems appeared, trying to
address KM issues from different perspectives.
There were 9 questions in the questionnaire, divided into 5 sections. The complete
questionnaire is shown in Table 1.
A. Company background
1. How many employees are there in your company?
2. Which industry does your company belong to?
B. Company's intention on deploying KMS
3. Does your company have any plan on deploying a KMS?
C. Consideration of a KMS
4. What is the major issue for your company on purchasing a KMS?
5. In your opinion, what is the major obstacle of your company on deploying a KMS?
D. Features and services
6. Which feature of a KMS do you need most?
7. In addition to KMS, what kind of service do you need most?
8. What kind of feature do you expect in the future on a KMS?
E. Action
9. Do you need any further assistance on KMS issues?
In question 1, we found that almost 30% of the enterprises had an employee size of
50-100, 100-500, or above. This is interesting: Taiwan has many small-and-medium
businesses (SMBs), yet they only accounted for 10% to 18% in our records. It implies
that KMS is more important to large-scale businesses than to smaller ones.
In question 2, high-tech companies were always the top industry by volume of seminar
attendees. The second was manufacturing in 2002 and 2010, and finance in 2006, when
manufacturing dropped to third. It is noticeable that attendees in 2006 were spread more
evenly across all sectors, because the government funded KM system building in its
'e-Taiwan' project (Table 2).
In question 3, we wanted to gauge the current market base of KMS in Taiwan. The
results showed that the percentage of "already deployed" enterprises is increasing, but
the percentage with "no plan" also increases, while the total "going to deploy" percentage
drops sharply across 2002, 2006, and 2010. This may imply that the KM market is moving
from its "growth" stage into its "maturity" stage, and that enterprises with no plan to
deploy a KMS may need more attention and effort to cultivate.
In questions 4 and 5, we wanted to find the major issues and obstacles for a company
choosing its KMS. In question 4, the results show that 'Functionalities' and
'Capabilities to integrate' were always the top two issues in choosing a KMS. 'Stability',
'Easy to maintain', and 'Extensible to new features' became more important over time;
notably, these concerns climbed above 'Price' in 2010, implying that KMS became a less
price-sensitive system and that the Total Cost of Ownership (TCO) of a KMS began to
be a concern.
In question 5, "the benefits of KMS are not sure" was the major obstacle to deploying
KMS throughout all three surveys. The choice "total cost might be high" increased over
time, which agrees with the viewpoint stated above. "There is no consensus for KMS in
our company" was also a common issue among attendees. This implies that companies
should first learn about the scope of KMS, reach consensus, and then turn it into action,
such as assessing TCO when deploying a KMS. However, verifying the benefits of
deploying a KMS remains a major challenge.
From question 6 to 8, we wanted to know which were the most desirable applications
and services in KMS. In question 6, the result showed both “Knowledge repository &
map”, and “Document management” were on the top list from three surveys.
“Knowledge search & retrieval”, and “Knowledge extraction from external sources”
appeared twice on the list as well. "Collaboration" was on the list in 2002, but no longer
seems as desirable, since the problem can also be solved by other software.
"Expert management" emerged in 2006, "Document security" climbed from No. 4
(in 2006) to No. 2 (tied with "Document management") in 2010, and "Knowledge
automation" emerged as No. 1 in 2010. These trends fit well with the changing demands
on KMS – from storing and retrieving internal knowledge, to managing experts, and
finally to extending the scope of knowledge management from internal to external
sources with knowledge automation, together with a growing caution about security
matters (Table 3).
In question 7, we found that the most wanted services in addition to KMS were
"Integration with the existing systems" and "Customization" in 2002 and 2006. This
implies that a KMS cannot perform as a standalone application in a company: it should
be integrated with other corporate information systems such as ERP (enterprise resource
planning), PDM/PLM (product development or lifecycle management), and HRM
(human resource management), to name a few. These matters now seem to be taken care
of well.
In question 9, we wanted to find out what action the attendees might take next. From
2002 to 2010, the percentage needing a trial version of a KMS decreased. We interpret
this change as follows: most people did not know what a KMS would be like back in
2002, but after these years of cultivation they have learned what it is, and have probably
even tried systems from several vendors a few times. They were therefore clearer about
what kind of KMS would best suit their needs, and attending a seminar seems sufficient
for them to compare the KM systems on the market.
Industrial Perspectives       | Staff-centric     | System-centric    | Customer-centric
Value Chain Process           | Research & Design | Manufacturing     | Marketing & Logistics
Knowledge Focus               | Domain Knowledge  | Process Knowledge | Product-to-Customer Knowledge
Intellectual Capital Category | Human Capital     | Structure Capital | Relationship Capital
From the surveys, we also found that the most useful applications were document
management, knowledge search and retrieval, and knowledge repository and map. The
emerging applications were expert management, document security, and knowledge
automation such as auto-classification, auto-abstract and auto-keyword generation. In
addition, the most wanted services were consulting service, success story-sharing, and
modularization when deploying a knowledge management system in the enterprise. This
implies that a KMS is not a stand-alone system. With a different industry focus, a KMS
can be tailored to users' needs, and organizations may therefore benefit more from
utilizing their KM system.
References
1 School of Printing and Packaging Engineering, Xi'an University of Technology, 710048 Xi'an, China
2 School of Mechanical Engineering, Xi'an Jiaotong University, 710072 Xi'an, China
3 Huawei Technologies Co., Ltd., 518129 Shenzhen, China
malie@xaut.edu.cn, hyzhang@xaut.edu.cn, liwei_109238@hotmail.com
Abstract. Sheet positioning time is one of the main factors influencing the printing
speed of a sheet-fed offset press. In the process positioning system, stepping motors,
transducers, and roller wheels replace the traditional mechanical front and side guide
system. Front and side guiding is finished while the paper is moving on the feeding
table. The paper position signal detected by the transducers is transferred to the
stepping motors, which control the wheels above the paper; the paper then moves in the
longitudinal and lateral directions, so that the front and side positions of the printing
paper are determined.
1 Introduction
The sheet positioning system is an important part of the printing process. Only when
adequate positioning time is ensured can the positioning system work accurately and
thereby guarantee registration precision between sheets and between colors, improving
the printing quality [1]. Sheet positioning is performed by a dedicated positioning
system. A traditional sheet positioning system includes the feed board, front guide, side
guide, and so on. The incoming sheet is first braked, then stabilized at the front lays, and
then aligned at the side lay.
At present, some high-speed printing presses have reached 18,000 sheets per hour or
more, which means the positioning time of the paper on the feed board is shorter. Under
this condition, the total sheet positioning time is only 0.03889 seconds. To reduce the
positioning time while ensuring the positioning effect, KBA [2] developed the
no-side-lay SIS (Sensoric Infeed System), in which side positioning is done by the
gripper bar: precise lateral displacement of the gripper bar enables the sheet to be exactly
positioned before being transferred to the first impression cylinder. The new sheet
positioning system [3] provides another solution to ensure sufficient sheet positioning
time; there, the side guiding is conducted dynamically during paper transport. This
paper presents another sheet positioning system, in which the front and side positioning
are finished on the feed table. We call it the process positioning system.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 532–537, 2011.
© Springer-Verlag Berlin Heidelberg 2011
The positioning principle and system structure of the process positioning system are
shown in Fig. 1. The arrow shows the direction of paper transport. The system is
composed of three motors, four paper rollers (each with a corresponding wheel below),
several sensors, and so on. M1 and M2 control two pairs of paper rollers that produce
the fore-and-aft movement of the paper (G1 and G1', G2 and G2', where G1' and G2'
are the corresponding wheels below G1 and G2, respectively). M3 controls the other
two pairs of paper rollers, which produce the left-and-right movement of the paper to
complete the side positioning of the sheet (G3 and G3', G4 and G4', where G3' and G4'
are the corresponding wheels below G3 and G4, respectively). Each motor's rotation is
triggered by the corresponding sensor. S1, S2, and S3 check the sheet position in the
fore-and-aft direction, and S4 checks the sheet position in the side direction. The front
positioning is finished before the side positioning.
In the process positioning system, the movement of the paper is accomplished mainly by
four paper rollers driven by stepping motors. Paper positioning must ensure the accuracy
of registration. Since the paper roller is the last actuating element, its minimum rolling
distance determines the paper positioning accuracy. The roller is driven by a stepping
motor, so the positioning accuracy is related to the motor's step angle. Therefore, the
positioning accuracy of the process sheet positioning system depends on the diameter
of the paper roller and the step angle of the stepping motor.
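Under these assumptions, the resolution can be computed as below. This is our illustrative formula (the arc length of one step along the roller circumference), not taken from the paper; the parameter values in the test are examples only.

```python
import math

def min_travel_mm(roller_diameter_mm, full_step_deg, microsteps=1):
    """Smallest paper displacement per (micro)step: the roller turns by
    full_step_deg / microsteps degrees, moving the sheet by the
    corresponding arc length of the roller circumference. This is the
    positioning resolution discussed above."""
    step_deg = full_step_deg / microsteps
    return math.pi * roller_diameter_mm * step_deg / 360.0
```

For example, a 20 mm roller driven by a 1.8° motor gives about 0.31 mm per full step, which is why step-angle subdivision (microstepping, discussed below) is needed to reach registration-grade accuracy.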
Since sheet positioning is one of the important parts of the printing process, it has strict
time constraints, which place high demands on the acceleration and deceleration of the
stepping motor. We choose an S-shaped acceleration/deceleration curve (shown in
Fig. 2) to control the start, stop, acceleration, and deceleration of the stepping motor [8].
The maximum output torque at each frequency can be obtained from the motor's
torque-frequency characteristic curve, but this curve is mostly non-linear with an overall
downward trend, which is not easy to compute. Therefore, straight-line segments are
used to fit the torque-frequency characteristic curve within a certain frequency range.
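One common S-curve choice (a smoothstep polynomial) can be sketched as follows. The paper does not give its exact curve, so this is an assumed form for illustration; the function name and parameters are ours.

```python
def s_curve_speed(t, t_acc, f_min, f_max):
    """S-shaped acceleration profile: the step frequency rises smoothly
    from f_min to f_max over t_acc seconds, avoiding the abrupt torque
    demand of a trapezoidal ramp. A smoothstep polynomial is used here
    as one common choice of S curve."""
    if t <= 0:
        return f_min
    if t >= t_acc:
        return f_max
    x = t / t_acc
    s = 3 * x * x - 2 * x * x * x  # smoothstep: s(0) = 0, s(1) = 1
    return f_min + (f_max - f_min) * s
```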
According to the chosen stepping motor and the required positioning accuracy, the step
angle must be subdivided, so a microstepping driving circuit is used. We select a chopper
constant-current circuit as the motor drive circuit, based on the required response time
and drive efficiency [9].
The stepping motor controller is mainly composed of an Atmel 52-series chip, a
MAX232 serial interface chip, an eight-digit common-cathode digital tube, a keyboard
and control chip ZLG7290C, an 82C53, etc. The controller sends CP pulses and
forward/reverse signals to the three stepping motors in accordance with the motion
cycle chart. Toshiba's TA8435H microstepping integrated controller chip is selected to
drive the stepping motors. The stepping motor drive board of the process positioning
system is shown in Fig. 3.
For stepping motor control, we use a data discretization method to optimize and revise
the start-up curve: the maximum pull-out torque values at different frequencies are
stored in memory, called out with a certain delay, and fitted into the speed-up curve to
achieve the speed-up effect.
[Fig. 4. Flow chart of the speed-up control: after initializing, the controller checks
whether constant speed has been reached; if not, it computes the grade step and speeds
up one grade at a time until the speed-up is complete, then exits.]
5 Conclusion
The process paper positioning system can extend the paper positioning time while
meeting the positioning precision. Using the data discretization method to optimize and
revise the start-up curve meets the demands of positioning time and precision. The
developed stepping motor subdivision driver can drive three stepping motors together,
perform the pulse output control of the stepping motors, subdivide the step angle, and
meet the positioning requirements.
References
1. Zhang, H.: Design of Printing Machine. Printing Industry Press, Beijing (2006)
2. One in three KBA Rapida 105 Sheetfed Presses Offer Revolutionary “No-sidelay Infeed”,
http://www.kba-print.de/vt/headlines/news.html
3. Li, W., Zhang, H.: New Paper Printing Equipment Positioning System. Packaging Engi-
neering 28, 104–106 (2007)
4. Liu, B., Cheng, S.: Stepping Motor and Drive Control System. Institute of Technology Press
(1997)
5. Kenjo, T., Sugawara, A.: Stepping Motors and their Microprocessor Controls, 2nd edn.
Clarendon Press, Oxford (1994)
6. Wu, J.: Sheet-fed Offset Printing Press. Chemical Industry Press (2005)
7. Harbin Institute of Technology: Stepping Motor. Science Press, Beijing (1979)
8. Aguilar, L., Melin, P., Castillo, O.: Intelligent Control of a Stepping Motor Drive Using a
Hybrid Neuro-fuzzy ANFIS Approach. Applied Soft Computing 3, 209–219 (2003)
9. Melin, P., Castillo, O.: Intelligent Control of a Stepping Motor Drive Using an Adaptive-
Neuro–fuzzy Inference System. Information Sciences 170, 133–151 (2005)
Research of Electronic Image Stabilization Algorithm
Based on Orbital Character∗
Abstract. Monocular vision is the key technology of the locomotive anti-collision
warning system, and the ranging precision influences the system's performance.
Addressing the reduction of accuracy caused by video jitter, this paper proposes a
new EIS algorithm based on the orbital (track) characteristic, which obtains the
global motion vector by extracting and matching local feature templates. Because
local feature templates are processed instead of the whole image, the speed of the
system is improved markedly. Simulation results indicate that the algorithm can
effectively eliminate the image migration produced by video jitter, removes the
resulting deviation in ranging precision, and satisfies the real-time requirements of
the system.
1 Introduction
Visual ranging technology is an effective way to enhance the safety of mine-pit
locomotive transportation. However, collisions of the compartments and vibration
between the locomotive and the track during travel easily make the image-acquisition
process unstable and reduce picture quality. In practice, a monocular-vision mine-pit
locomotive anti-collision warning system places very high demands on the stability of
the pictorial information. The quality of image stabilization determines the accuracy of
target detection and the precision of distance measurement, which in turn determine the
performance of the system.
Image stabilization returns an image to its equilibrium position or initial point when the
image has been displaced. Image stabilization technology comes in three kinds:
mechanical, optical, and electronic. Compared with the other methods, EIS (Electronic
Image Stabilization) corrects the video image sequence in software, making the
sequence smooth and stable [1].
The commonly used electronic image stabilization algorithms include the pixel gray
algorithm, the block matching algorithm, and the image feature algorithm [2]. Because the
∗ This work was supported by the national Science Foundation of Chongqing Education
Commission (KJ08A01).
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 538–543, 2011.
© Springer-Verlag Berlin Heidelberg 2011
track of the mine-pit locomotive has an obvious edge feature, this paper follows the
basic principle of the image-feature method and, using the invariability of the track's
characteristics, proposes an electronic image stabilization algorithm based on orbital
characteristic matching. Experiments demonstrate that this algorithm guarantees image
stability and increases the system's ranging precision.
The track's width is invariant, from which the ranging mathematical model is derived as
follows [3]:

d = \frac{a_1}{w_p + a_2}   (1)

where d is the distance between the target and the mine locomotive, w_p is the number of
pixels across the lowest row of the target, and a_1, a_2 are respectively

\begin{cases} a_1 = p_{11} h^3 + p_{12} h^2 + p_{13} h + p_{14} \\ a_2 = p_{21} h^3 + p_{22} h^2 + p_{23} h + p_{24} \end{cases}   (2)
Electronic image stabilization algorithms work in one of two ways: either calculate the
vertical and horizontal drift between the current frame and the first frame, or compare
the current frame with the preceding frame [4].
In Fig. 2, r_1, r_2, r_3 are the relative displacements between neighboring frames, and
R is the absolute displacement between the 1st frame and the 4th frame.
The frame movement is decomposed into a horizontal component Δx, a vertical
component Δy, and a rotation θ about the image center. Once the reference image is
selected, the following images are adjusted according to these three components [5].
A common characteristic-template selection divides the image into n regions A_1 of size
a × b, within which an a′ × b′ template A_2 is sought.
The criteria for the template [6] are: first, the characteristic template must contain an
obvious change in at least one domain; second, the computation must be smaller than
that of other methods; and third, the template must contain abundant detail.
The template choice must also satisfy two essential conditions. The first is that the
algorithm's workload can be adjusted according to the processor's performance, so as to
achieve the best matching effect. The second is that the matching template should select
a relatively static object in the image as the reference.
After characteristic-template selection, the next step is template matching. When the
traversal is finished, the matching template with the largest correlation to the
characteristic template is chosen. The horizontal and vertical distance between this
matching template and the characteristic template is the local movement vector, as
shown in Fig. 3.
[Fig. 3. Template matching: the searching block, the matching block, and the block to
be matched.]
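The local-motion search of Fig. 3 can be sketched as below. The paper does not specify its similarity measure, so the plain dot-product correlation here is an assumption, and the function name is ours.

```python
import numpy as np

def local_motion_vector(frame, template, top, left, search):
    """Slide the characteristic template over a (2*search+1)^2
    neighbourhood of its reference position (top, left) in the current
    frame and return the offset (dy, dx) with the largest correlation:
    the local movement vector of Fig. 3. A plain dot-product correlation
    is used here for brevity."""
    h, w = template.shape
    best, best_off = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue  # skip windows that fall outside the frame
            score = float((frame[y:y + h, x:x + w] * template).sum())
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off
```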
The mine locomotive operates on the track; when static or in steady movement, there is
no positional displacement between the locomotive's wheels and the track. Based on this
characteristic, we can designate an l × w matching template containing the track in the
lower part of the image, as shown in Fig. 4.
The most obvious characteristic is the two tracks in the middle of the image. After
extracting this template and pre-processing the image, the track's linear feature is
extracted with the Roberts operator, as shown in Fig. 5.
The system ranging model is given by (1); in the EIS process, only the vertical and
rotational displacements need to be adjusted. The algorithm is as follows:
step 1: Select the reference line n near the bottom of the image (10 ≤ n ≤ 20) and obtain
the number of pixels m between the tracks on this line;
step 2: Calculate the track's characteristic parameter, denoted (x_0, y_0);
step 3: After digital image processing, obtain the straight line L_1; designate the left
inner-track reference point of the image to be adjusted, denoted (x_1, y_1); calculate the
characteristic parameters (ρ_1, θ_1) of L_1; rotate L_1(ρ_1, θ_1) into L_1′(ρ_1, θ_0)
through (5):
\begin{bmatrix} x_1' \\ y_1' \end{bmatrix} = \begin{bmatrix} \cos\Delta\theta & \sin\Delta\theta \\ -\sin\Delta\theta & \cos\Delta\theta \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix}   (5)
step 4: Designate the pixel line whose difference from the track is Δx, carry out the
vertical compensation Δy so that the pixel line lies on line n at the bottom of the image,
and thereby realize the (Δx, Δy) compensation.
step 5: Substituting (3) into (4), we obtain the vibration compensation formula:

\begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = \begin{bmatrix} \cos\Delta\theta & \sin\Delta\theta \\ -\sin\Delta\theta & \cos\Delta\theta \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}   (7)
step 6: Search an arbitrary point on this straight line and compare it with the
corresponding point of the reference image; if they are consistent, finish; otherwise
return to step 3.
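The compensation of Eq. (7) can be sketched as below, taking the image centre as the origin of the rotation; the function name is ours.

```python
import math

def compensate(x1, y1, d_theta, dx, dy):
    """Eq. (7): rotate the point (x1, y1) by the jitter angle d_theta
    about the image centre (taken here as the origin) and add the
    translational compensation (dx, dy)."""
    c, s = math.cos(d_theta), math.sin(d_theta)
    x0 = c * x1 + s * y1 + dx
    y0 = -s * x1 + c * y1 + dy
    return x0, y0
```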
We used MATLAB to compute the gray-projection values of 520 consecutive images
(10 frames per second) and correlated them with the gray-projection value of the
reference frame; the maximum vertical displacement of the images was found to be
±20 pixels. Fig. 8 shows the results after adjustment by this electronic image
stabilization algorithm. The blank regions in the images are caused by the image shifting
of the adjustment process and do not affect the image quality.
(a) The initial frame (b) The 20th frame (c) The 100th frame (d) The 120th frame
We also carried out an experiment on ranging precision, using a laser range finder to
select ten reference distances; the results are shown in Fig. 9.
From Fig. 9, with the EIS algorithm proposed in this paper, the ranging precision is
improved and the range error is controlled within an acceptable scope. We can also see
that, as the distance increases, the ranging precision gradually decreases; this results
from the limitations of monocular ranging and is not directly related to the EIS
algorithm.
5 Conclusion
This paper proposed an electronic image stabilization algorithm based on the
straight-line characteristic of the track. By selecting a characteristic template and
matching local characteristic blocks, the displacement of the overall image is described
by the change of the track, which simplifies the template matching algorithm. The
experimental results indicate that the algorithm is effective and increases the system's
ranging precision.
References
1. Ondrej, M., Frantisek, Z., Martin, D.: Software video stabilization in a fixed point
arithmetic, Department of Intelligent Systems
2. Chen, Z.-j., Li, Z., Lei, S.: Electronic Image Stabilization Method Based on Feature Block
Matching. Science Technology and Engineering J. 7, 11 (2007)
3. Niu, B., Liang, S., Xian, X., Gan, P.: Design of the tramcar anti-collision warning system
based on s3c2440. Chinese Journal of Scientific Instrument (2008)
4. Lin, C.-T., Hong, C.-T., Yang, C.-T.: Real-Time Digital Image Stabilization System Using
Modified Proportional Integrated Controller. IEEE Transactions on Circuits and Systems for
Video Technology 8(3) (March 2009)
5. Vella, F., Castorina, A., Mancuso, M., Messina, G.: Digital image stabilization by adaptive
block motion vectors filtering. IEEE Trans. 48(3), 796–800 (2002)
6. Brooks, A.C.: Real-Time Digital Image Stabilization, EE420 Image Processing Computer
Project Final Paper, EED Northwestern University, USA, March 2003, p. 10 (2003)
Study on USB Based CAN Bus for Data Measurement
System
Weibin Wu, Tiansheng Hong, Yuqing Zhu∗, Guangbin He, Cheng Ye, Haobiao Li,
and Chuwen Chen
1 Introduction
Controller area network (CAN) is widely used in the field of discrete control; its
signal transmission uses a short frame structure with low time cost, an automatic
shut-off function and strong anti-interference ability [1]. Compared with other buses,
CAN offers outstanding performance, reliability and flexibility in data communication.
It is applied most extensively in the automobile industry and is the main bus in
automotive communication network technology. It is therefore well suited to analyzing
and studying all kinds of data within a vehicle, which has important practical
significance [2]. Meanwhile, the universal serial bus (USB) has gradually become the
most widely used connection standard in the field of computer peripherals. In control
and automation, USB configuration is more flexible and convenient, and its higher
processing speed can meet the demands of greater automation and intelligence [3].
However, the combination of USB technology with CAN bus technology is still at the
development stage [4]. With the rapid development of CAN bus interface rates and
generally higher requirements, the trend is for the USB interface to fully replace
traditional interfaces. Consequently, this study has broad prospects
∗ Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 544–549, 2011.
© Springer-Verlag Berlin Heidelberg 2011
[5, 7], and this paper focuses on the design of a feasible USB-based CAN data test
system for citrus and tree parameter measurements.
Fig. 1. CAN network
In this system, the CAN node uses the stand-alone controller MCP2515 for control and
the PCA82C250 transceiver to receive data; the USB side is controlled by the
PDIUSBD12 USB controller. The main components used in this design are the STC89C52
microcontroller (SCM), the CAN controller (MCP2515), the CAN transceiver (PCA82C250)
and the USB controller (PDIUSBD12). The hardware circuit design is divided into a
power supply module, a CAN control module and a USB control module.
As illustrated in Fig. 2, the system can be divided into MCU software modules and
computer software modules. The function of the MCU software is to transmit the CAN
bus data to the computer through the USB bus. The PC software is a human-machine
interface whose function is to analyze, display and save the CAN messages.
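The MCU-to-PC data path can be illustrated with a simple framing scheme. The 13-byte record layout below (4-byte ID, 1-byte DLC, 8 padded data bytes) is an assumption for illustration, not the actual PDIUSBD12 endpoint format:

```python
import struct

def pack_can_frame(can_id, data):
    """Pack one CAN frame into a fixed 13-byte record for a USB bulk
    transfer: 4-byte ID (big-endian), 1-byte DLC, 8 data bytes (zero
    padded). The record layout is a hypothetical example."""
    if not 0 <= can_id <= 0x1FFFFFFF:
        raise ValueError("CAN id out of range")
    if len(data) > 8:
        raise ValueError("CAN payload is at most 8 bytes")
    payload = bytes(data) + b"\x00" * (8 - len(data))
    return struct.pack(">IB8s", can_id, len(data), payload)

def unpack_can_frame(record):
    """Inverse of pack_can_frame, as the PC-side software would use it."""
    can_id, dlc, payload = struct.unpack(">IB8s", record)
    return can_id, payload[:dlc]
```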
(1) Software design of the CAN bus module
The software design of the CAN bus module includes the CAN initialization program,
the data-receiving process and several function programs. Their primary functions are
to report the basic state of the controller to the CAN network, to determine whether
the controller is working normally, to determine whether the CAN bus has faults, and
to receive and process messages sent by the other nodes on the CAN bus.
Receiving data is more complicated than sending data, because during message
reception conditions such as bus-off and error alarms must be handled. The MCP2515
can receive messages in two ways, namely interrupt mode and query (polling) mode;
here the interrupt receiving mode is used.
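The interrupt receiving mode can be sketched with a toy model of the MCP2515's two receive buffers and its interrupt flag register (a deliberate simplification of the real register map in the datasheet [5], not firmware):

```python
class MCP2515Model:
    """Toy model of the controller's two receive buffers (RXB0/RXB1)
    and the CANINTF interrupt flags, for illustration only."""
    def __init__(self):
        self.rx_buffers = [None, None]   # RXB0, RXB1
        self.canintf = 0                 # bit0: RX0IF, bit1: RX1IF

    def bus_message_arrives(self, frame):
        """A frame arrives from the bus: store it in a free buffer and
        raise the matching flag (which asserts the INT pin)."""
        for i in (0, 1):
            if self.rx_buffers[i] is None:
                self.rx_buffers[i] = frame
                self.canintf |= 1 << i
                return True
        return False                     # both buffers full: overflow

    def isr_read_messages(self):
        """Interrupt service routine: drain every buffer whose flag is
        set, then clear the flags so the interrupt line is released."""
        frames = []
        for i in (0, 1):
            if self.canintf & (1 << i):
                frames.append(self.rx_buffers[i])
                self.rx_buffers[i] = None
                self.canintf &= ~(1 << i)
        return frames
```

In query mode the MCU would poll `canintf` in its main loop instead of reacting to the interrupt line; the drain logic is the same.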
Fig. 2. Software structure of the system (flowchart blocks: Start, MCU, the CAN communication module, USB communication module, serial communication module, transmission, PC, End)
(1) Use the STC-ISP program download tool to download the ISP target program to the
CAN Bus Analyzer circuit.
(2) Link the CAN Bus Analyzer to another CAN Bus Analyzer communication module board,
use the STC-ISP serial debugging assistant to read the data captured by the CAN Bus
Analyzer, then observe and analyze them.
(3) Link the USB equipment to the computer, and observe whether the USB device
enumeration process completes successfully and whether the computer correctly
identifies the equipment.
(4) Use USB Bus Hound to capture the USB data packets, and observe whether the bus
data conform to theory.
(5) Use the CAN Bus Analyzer to read the USB device data, then observe and analyze
the data.
3) The CAN Bus Analyzer is a CAN-bus decoding, test, analysis and development tool
designed for this system. The data shown in the figure are the messages transmitted
to the computer through the USB bus after being captured from the CAN; the left
dialog displays the network information captured from the CAN bus, while the right
dialog configures the CAN Bus Analyzer (the bus baud rate is the data transmission
rate on the CAN bus). Analysis of the experimental data shows that, after the CAN Bus
Analyzer captured the bus network data, the data transmitted via the USB bus to the
CAN Bus Analyzer were the same as the data captured from the CAN bus, except for an
additional system time field, which is the analyzer's system time when capturing the
message. This proves that the system works normally. At the same time, the data are
stored in an Excel document, which makes it convenient for researchers to study and
analyze them.
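Storing the captured messages in a spreadsheet-friendly form can be as simple as writing CSV, which Excel opens directly. This is a sketch; the actual column layout of the analyzer's export is not specified in the paper:

```python
import csv
import io

def save_messages(messages, fileobj):
    """Write captured CAN messages to CSV. Each message is a tuple
    (system_time, can_id, data_bytes); the column names are examples."""
    writer = csv.writer(fileobj)
    writer.writerow(["SystemTime", "ID", "DLC", "Data"])
    for t, can_id, data in messages:
        writer.writerow([t, f"0x{can_id:03X}", len(data),
                         " ".join(f"{b:02X}" for b in data)])

# example: one captured message, written to an in-memory buffer
buf = io.StringIO()
save_messages([("12:00:01.250", 0x123, b"\x11\x22")], buf)
```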
4) The system's CAN Bus Analyzer was tested against a Kvaser USBcan II. As
illustrated in Fig. 5, in the interface after Kvaser CanKing sends message data, Chn
is the channel (0), Identifier is the message ID, Flg 0 means a standard frame and
Flg X an extended frame, DLC is the number of data bytes, D0–D7 is the data field,
Time is the time cost of sending the message, and Dir T/R means a transmitted/received
message. Fig. 6 shows the data display interface after the CAN bus analyzer receives
the messages; the data received by the CAN Bus Analyzer are the same as the data
captured on USB.
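A parser for trace lines in the Chn/Identifier/Flg/DLC/Data/Time/Dir format described above might look as follows; the whitespace-separated layout is an assumption, and CanKing's real output format may differ:

```python
def parse_canking_line(line):
    """Parse one trace line of the assumed form
    'chn id flg dlc d0..d(dlc-1) time dir'."""
    parts = line.split()
    chn = int(parts[0])
    ident = int(parts[1], 16)            # Identifier, hexadecimal
    extended = parts[2] == "X"           # Flg: 0 standard, X extended
    dlc = int(parts[3])
    data = bytes(int(b, 16) for b in parts[4:4 + dlc])
    time_ms = float(parts[4 + dlc])
    direction = parts[5 + dlc]           # 'T' transmit, 'R' receive
    return dict(chn=chn, id=ident, extended=extended,
                data=data, time=time_ms, dir=direction)
```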
4 Conclusion
The system hardware design basically satisfies the control system requirements. The
CAN node uses the stand-alone controller MCP2515 for control and the PCA82C250 to
receive data; the USB hardware is controlled by the PDIUSBD12 USB controller. Using
embedded C and Keil 3.0, the functions of the system were accomplished, and the
system software design satisfies the basic control requirements. A user control
program was designed that can display and save the CAN bus messages in real time,
which is convenient for research and data analysis.
The system has been debugged in actual measurement on test data of the USB CAN bus
system with a Kvaser USBcan II, and a comparative analysis was made of the data sent
by Kvaser CanKing and received by the CAN Bus Analyzer. The CAN Bus Analyzer
correctly received all data sent by Kvaser CanKing, showing that the test error is
zero.
Finally, this system has been successfully used in the Hill Citrus Orchard projects,
in combination with the Analysis System for Citrus LAI Spectrum Data project, to
transmit citrus leaf data.
Acknowledgment
This work was supported by the National Natural Science Foundation of China (Grant
No. 30871450) and the Special Fund for Agro-scientific Research in the Public
Interest of the Chinese Ministry of Agriculture (Grant No. 200903023).
References
1. Wu, K.: The CAN BUS system design principles and applications. Science Press, Beijing
(1998)
2. Yang, H., Ti, A., Ming, T.: The Can bus protocol analysis. Chinese Instruments 4, 1–4 (2002)
3. Peng, S.: Embedded Systems: Research and Development of USB Interface Technology Applications. Peking University Press, Beijing (2005)
4. Yan, C.: Design of a CAN Bus Control System Based on the USB Interface. Degree thesis, Southwest Jiaotong University, Chengdu (1997)
5. Microchip: MCP2515 Stand-Alone CAN Controller With SPI Interface. Microchip Tech-
nology Inc. (2003)
6. Philips company: PCA82C250 CAN controller interface. Philips Semiconductors (2000)
7. Philips company: PDIUSBD12 USB interface device with parallel bus. Philips Semicon-
ductors (2001)
8. Ceng, F., Yu, M.: MFC Example Programming Skills. Tsinghua University Press, Beijing (2008)
Research of WLAN CAN Bus Data Test System
Weibin Wu, Yuqing Zhu, Tiansheng Hong*, Cheng Ye, Zhijie Ye,
Guangbin He, and Haobiao Li
Abstract. Both controller area network (CAN) and wireless local area network
(WLAN) bus technologies are developing rapidly nowadays, but the study of
WLAN technology with CAN bus data remains relatively under-researched, so a
WLAN CAN bus data test system is presented in this paper. The system hardware
consists of two control modules, each of which can link to and test a CAN bus
network. The hardware of each module is composed of an STC12C5410AD
microcontroller, an MCP2515 CAN bus control unit, an MCP2551 CAN bus
transceiver and an RF903 RF wireless module. During the test process, the CAN
bus analyzer, a Kvaser USBcan II, receives and dispatches the CAN messages.
Test results showed that the system's error score was approximately zero when
messages were transmitted, and that the system can transmit messages from one
CAN panel point to the other. The communication of the system conforms to the
CAN protocol standard, and it also allows wireless message transmission to
other suitable networks meeting that standard. Finally, the system has been
applied to the project for hill citrus leaf area index (LAI) acquisition.
1 Introduction
Controller area network (CAN) is widely used in the field of discrete control; its
signal transmission uses a short frame structure with low time cost, an automatic
shut-off function and strong anti-interference ability. With the development of
complicated projects in which it is inconvenient to connect equipment by wired media,
there is a higher demand on network services [1, 2]. The wireless local area network
(WLAN) is developing rapidly and is going to be the main trend in networking [3, 8].
In the citrus information acquisition project, there is a need to gather information
about the oranges, such as leaf area index (LAI) and fruitage, but the complex
terrain makes it very hard to use the CAN system alone. However, the WLAN system in
conjunction with CAN provides a new way to solve the problem.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 550–555, 2011.
© Springer-Verlag Berlin Heidelberg 2011
to the other module by wireless technology when it receives the messages from the
CAN. At the same time, each module can receive the messages from other modules,
and then send the messages back to the CAN bus network. In this way, two CAN bus
networks can communicate without wired media.
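The two-module bridging scheme can be sketched as a pair of queues standing in for the RF903 radio link (a pure-Python model of the data flow, not firmware):

```python
import queue

class CanWlanBridge:
    """One bridge module (MCU + MCP2515 + RF radio), modelled as a pair
    of queues: frames from the local CAN segment go out over `tx`, and
    frames arriving over `rx` are re-injected on the local segment."""
    def __init__(self, tx, rx):
        self.tx, self.rx = tx, rx
        self.local_bus = []              # frames put back on local CAN

    def on_can_frame(self, frame):       # local CAN -> wireless
        self.tx.put(frame)

    def poll_radio(self):                # wireless -> local CAN
        while not self.rx.empty():
            self.local_bus.append(self.rx.get())

# two modules joined by a bidirectional radio link
a_to_b, b_to_a = queue.Queue(), queue.Queue()
module1 = CanWlanBridge(a_to_b, b_to_a)
module2 = CanWlanBridge(b_to_a, a_to_b)
```

A frame handed to `module1.on_can_frame` appears on `module2.local_bus` after the next `poll_radio`, mirroring how the two CAN networks communicate without wired media.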
Fig. 1. System structure: CAN bus — Module1 — WLAN — Module2 — CAN bus
is also produced by the Kvaser company; it can test and send signals conforming to
the CAN protocol standard, and the signals are shown on the computer screen.
(2) The CAN panel point module XhoCAN, shown in Figure 2, is used for sending and
receiving CAN messages.
(3) All system boards are produced independently.
(4) A computer and a number of wires should be prepared.
Fig. 2. CAN panel point module (blocks: CAN controller, microcontroller, power interface, RF)
(1) Connect the Kvaser USBCAN II to the computer, run the CanKing software and
then set the CAN communication baud rate to 500 kbit/s.
(2) Connect the CAN connector of Kvaser USBCAN II to the CAN connector of the
CAN panel point module and start the CanKing software to receive the CAN messages
from the CAN panel point module.
(3) Connect the CAN connector of CAN panel point module to the CAN connector of
one of the system modules and then connect the CAN connector of Kvaser USBCAN II
to the CAN connector of the other system modules. Afterwards, use CanKing to test the
CAN messages through the system and compare the CAN messages with the CAN
messages tested directly at step 2.
(4) Connect the CAN connector of Kvaser USBCAN II to the CAN connector of one of
the system modules, connect the CAN connector of the CAN panel point module to the
CAN connector of the other system modules and connect the CAN panel point module
to the computer. Then, open the sending window of the CanKing software. After setting
identifier and data length in sending window, press the send button to send the CAN
messages to the CAN panel point module through the system. The CAN panel point
module will send the received messages back to the computer and all data will be
shown in STC-ISP.
(5) Set different messages, send them repeatedly and compare them with the former
messages.
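The comparison of sent and received messages in the steps above amounts to computing an error score over the two message lists. A minimal sketch (the scoring rule is illustrative, not taken from the paper):

```python
def error_score(sent, received):
    """Fraction of messages lost or corrupted in transit; the tests in
    this paper report a score of zero. Messages are compared pairwise
    in order, and missing or extra frames also count as errors."""
    if not sent:
        return 0.0
    errors = sum(1 for s, r in zip(sent, received) if s != r)
    errors += abs(len(sent) - len(received))
    return errors / len(sent)
```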
5 Conclusion
After the tests, the following conclusions can be made; in addition, the system has
been used in the project for hill citrus leaf area index (LAI) acquisition.
1) The system hardware design is suitable for the control requirements. The CAN bus
interface was tested with CAN bus analysis equipment and the results showed that the
messages conform to the CAN protocol. The test results showed that this system can
use the CAN bus analyzer Kvaser USBcan II to send and receive CAN messages. The error
score of the transmission is 0.
2) The system can easily be expanded: there is no need to change the hardware
structure when adding a new module; modifying the program is sufficient, which allows
the network to achieve wireless communication.
Acknowledgment
This work was supported by the National Natural Science Foundation of China (Grant
No. 30871450) and the Special Fund for Agro-scientific Research in the Public
Interest of the Chinese Ministry of Agriculture (Grant No. 200903023).
References
1. Rao, R., Zhou, J., Wang, J., et al.: Field Bus CAN Principle and Use Technology, 2nd edn.
University of Aeronautics and Astronautics Press, Beijing (2007)
2. Han, B., Huo, C.: Field bus Instrument. Chemical Industry Press, Beijing (2007)
3. Yang, J., Li, Y., Yang, Z.: Wireless Local Area Network Build Practice, pp. 2–7. Electronic
Industry Press, Beijing (2006)
4. Lv, L., Wang, Z.: Base on PIC Singlechip Car CAN Wireless Transmission System. Industrial
Control Computer 21(8), 80–81 (2008)
5. Guan, Y., Li, Z.: Base on CAN bus’s Wireless Transmission Technical Research. Industrial
Control Computer 17(10), 56–57 (2004)
6. Hong, J.: STC12C5410AD Series Singlechip Chinese Guide. Science and Technology (2008)
7. Microchip: MCP2515 Data Sheet: Stand-Alone CAN Controller with SPI Interface (2005)
8. Microchip: MCP2551 CAN Controller Interface Data Sheet (2005)
A Framework of Simple Event Detection in
Surveillance Video
Weiguang Xu, Yafei Zhang, Jianjiang Lu, Yulong Tian, and Jiabao Wang
1 Introduction
With rapidly progressing production techniques, using video cameras for surveillance
has become common in both federal agencies and private firms. Most surveillance
systems need human operators to monitor them constantly; their effectiveness and
response are seriously constrained by this labor-intensive work pattern. An automatic
tool is urgently needed to overcome the limitation.
Video surveillance automation works in two key modes: alerting for known threatening
events in real time, and searching for events of interest after the fact. A large
amount of literature has come forth surveying event analysis in video [1,3,4,5,6],
where the word "event" appears frequently, as does its alternative, "activity".
However, the true meaning of "event" and "activity" is often ambiguous, even
misunderstood. Generally speaking, events can be categorized into simple ones and
complex ones. We define simple events as those detected directly from digital
features, which cannot be divided further; they are usually called "simple events",
"atomic events", "actions" or "event primitives". In contrast, complex events are
defined as those made up of series of sub-events; they are usually called "complex
events", "activities" or "interactions". This
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 556–561, 2011.
© Springer-Verlag Berlin Heidelberg 2011
paper focuses on a framework of detecting simple events rather than complex events.
Detailed procedure of the latter can be found in [5].
The rest of the paper is structured as follows: in Section 2, we describe a
framework that accepts raw video data from a surveillance camera as input and outputs
a series of simple events. The techniques of each module are detailed in Sections
3-6. Finally, conclusions and future work are drawn in Section 7.
2 Framework
In surveillance, the targets of interest are usually humans, cars, luggage and other
things, such as suspect people in a crowd or left luggage. Events happening to
these agents can be identified by observing their motion patterns. So, in the
framework, we detect events based on the trajectories of these objects. Fig. 1 shows
the data flow and functional modules in the framework.
The framework works as follows. While receiving data from the surveillance camera,
the system uses motion compensation to rectify the pixel shift caused by tuning or
unexpected shaking. Then frame differencing and some morphology-based procedures
are used to detect foreground pixels (blocks of objects), which should be moving
relative to the background. When an object is detected, we use a HOG-based approach
to determine whether it is a car, a person, or another kind of thing. The mean-shift
algorithm is used for tracking recognized agents. Before all of this, we make a list
of events that we need to detect in the particular scene, with each event associated
with a rule. The last task of our system is to check whether these rules are
satisfied; satisfaction of a rule implies occurrence of the corresponding event,
which is then added to the detected event list. Some additional information is also
recorded, such as the time the event happens, the event agent, the location, etc. All
the information in the list might be useful for further detection of complex events.
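The rule-checking step can be illustrated on trajectories. The two rules below (speeding and U-turn) and their thresholds are illustrative examples, not the paper's actual rule table:

```python
import math

def detect_simple_events(traj, speed_limit=10.0):
    """Check two illustrative rules on an object trajectory, given as a
    list of (t, x, y) samples: 'speeding' (segment speed above a limit)
    and 'u-turn' (overall heading reversal of roughly 180 degrees)."""
    events = []
    headings = []
    for (t0, x0, y0), (t1, x1, y1) in zip(traj, traj[1:]):
        dt = t1 - t0
        dx, dy = x1 - x0, y1 - y0
        if math.hypot(dx, dy) / dt > speed_limit:
            events.append(("speeding", t1))
        headings.append(math.atan2(dy, dx))
    if headings:
        turn = abs(headings[-1] - headings[0])
        if min(turn, 2 * math.pi - turn) > 0.9 * math.pi:
            events.append(("u-turn", traj[-1][0]))
    return events
```

Each detected event is returned with the time it was observed, mirroring the additional information recorded in the event list.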
3 Foreground Detection
Foreground detection aims at detecting interesting moving objects based on
background subtraction, which compares the current frame with a background image to
locate the moving foreground objects. This method can extract the shape of an object
well, provided that a static background model is available and adaptively updated by
modeling each pixel as a Gaussian mixture model [8]. In many cases the background
also moves over time due to a moving camera, so frames need to be stabilized first.
Two-frame background motion estimation is achieved by fitting a global parametric
motion model (affine or projective) to inlier matched SIFT-based feature points [9].
Furthermore, some morphological operations may be performed to improve the accuracy
of the object mask [10].
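A minimal background-subtraction pass with a crude morphological clean-up can be written in a few lines of NumPy, as a stand-in for the GMM model of [8] and the operations of [10]:

```python
import numpy as np

def erode(mask):
    """One erosion step with a 3x3 cross structuring element: a pixel
    survives only if its four direct neighbours are also foreground."""
    m = mask.copy()
    m[1:, :] &= mask[:-1, :]
    m[:-1, :] &= mask[1:, :]
    m[:, 1:] &= mask[:, :-1]
    m[:, :-1] &= mask[:, 1:]
    return m

def foreground_mask(frame, background, thresh=25):
    """Background subtraction followed by erosion to suppress isolated
    noise pixels; threshold in gray levels is an example value."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return erode(diff > thresh)
```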
In practice, factors such as real-time requirements also need to be considered.
Through experiment, we found that extracting and matching SIFT feature descriptors is
the bottleneck that limits efficiency. So we modify the traditional method to improve
efficiency by detecting SIFT feature points in the image based on the pyramid of its
low-scale space. As a consequence, the number of key points that need to be extracted and
matched is reduced drastically. If the affine matrix in low-scale space is estimated as

$$H' = \begin{bmatrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \\ 0 & 0 & 1 \end{bmatrix},$$

then the affine matrix in the original scale space is

$$H = \begin{bmatrix} a_1 & a_2 & a_3 \cdot scale_1 \\ a_4 & a_5 & a_6 \cdot scale_2 \\ 0 & 0 & 1 \end{bmatrix},$$

where $scale_1$ and $scale_2$ are the scale parameters used to transform the original
image into the low-scale image. Fig. 2 shows an experimental result we have obtained.
The image scale of the first column is 960×540; we scale the images to 320×180 and
extract SIFT feature points. The number of feature points is reduced from 687 and 570
to 51 and 34 respectively, and the matched inlier feature points are reduced from 160
to 16. The time cost of feature extraction and image matching is reduced from 2094 ms
to 205 ms. The quality of the detected foreground shows that accuracy is also
improved by using our approach.
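The lift from the low-scale estimate H' back to the original scale multiplies only the translation terms by the scale factors, as the formula above states; in code:

```python
import numpy as np

def lift_affine(H_low, scale1, scale2):
    """Lift an affine matrix estimated in low-scale space back to the
    original image scale: only the translation entries (a3, a6) are
    multiplied by the scale factors; the linear part is unchanged."""
    H = H_low.copy().astype(float)
    H[0, 2] *= scale1
    H[1, 2] *= scale2
    return H
```

For the 960×540 to 320×180 reduction used above, both scale factors are 3.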
that might happen in the surveillance scene. The list is diverse, but some
limitations exist. For example, U-turning, speeding up, etc. could not be detected
because key information was lost, i.e., the moving direction and speed of cars were
ignored in the definitions. To overcome this, we take into account both direction and
speed so as to detect more diverse events. Besides this, all the simple events we
define are atomic, which means they cannot be divided further. The first advantage of
this is that it makes simple event detection easier: complex rules cost more time to
check and cannot guarantee real-time detection, while simple rules do the opposite.
The second advantage is that the system can be extended easily. All the simple events
we define are listed in Table 1. The rules are checked regularly at a fixed time
interval. An event is added to the detected event list immediately when it is
detected, along with additional information such as time, location and agents. The
detected event list will be analyzed further to detect complex events.
The rules of simple events are designed to be checked easily, so as to enable the
system to work in real time.
In the future, some benchmark video datasets will be chosen and downloaded to test
the effectiveness and efficiency of the framework. Some specific algorithms and
parameter settings may be adjusted to fit each scene in the datasets. Besides this,
we are going to develop a system to detect more complex events, such as U-turning.
References
1. Leixing, X., Hari, S., et al.: Event mining in multimedia streams. Proceedings of the
IEEE 96(4), 623–647 (2008)
2. Comaniciu, D., Ramesh, V., et al.: Kernel-based object tracking. IEEE Transactions on
Pattern Analysis and Machine Intelligence (PAMI) 25(5), 564–577 (2003)
3. Shah, M., Javed, O., et al.: Surveillance in realistic scenarios. IEEE Multimedia 14(1),
30–39 (2007)
4. Morris, B.T., Trivedi, M.M., et al.: A survey of vision-based trajectory learning and
analysis for surveillance. IEEE Transactions on Circuits and Systems for Video
Technology 18(8), 1114–1127 (2008)
5. Turaga, P., Chellappa, R., et al.: Machine recognition of human activities: a survey. IEEE
Transactions on Circuits and Systems for Video Technology 18(11), 1473–1488 (2008)
6. Asaad, H.: Learning, Detection, representation, indexing and retrieval of multi-agent
events in videos. Dissertation of doctor of philosophy (2007)
7. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In:
Proceedings of the IEEE Conference on CVPR, pp. 886–893. IEEE Computer Society
Press, Los Alamitos (2005)
8. Stauffer, C., Grimson, W.: Learning patterns of activity using real-time tracking. IEEE
Transactions Pattern Analysis and Machine Intelligence 22(8), 747–757 (2000)
9. Lowe, D.G.: Distinctive image features from scale-invariant key points. International
Journal of Computer Vision 60(2), 91–110 (2004)
10. Zhaozhen, Y., Robert, C.: Moving object localization in thermal imagery by forward-
backward MHI. In: The 3rd IEEE Workshop on Object Tracking and Classification in and
Beyond the Visible Spectrum (2006)
11. Collins, R.T., Liu, Y., et al.: Online selection of discriminative tracking features. IEEE
Transactions on Pattern Analysis and Machine Intelligence 27(10), 1631–1643 (2005)
12. Comaniciu, D., Meer, P.: Mean shift: a robust approach toward feature space analysis.
IEEE Transactions on Pattern Analysis and Machine Intelligence 24(5), 603–619 (2002)
13. Comaniciu, D., Ramesh, V., et al.: Real-time tracking of non-rigid objects using mean
shift. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2,
pp. 142–149 (2000)
14. Zivkovic, Z., Krose, B.: An em-like algorithm for color-histogram-based object tracking.
In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition
(CVPR), vol. 1, pp. 798–803 (2004)
15. Porikli, F., Tuzel, O.: Multi-kernel object tracking. In: IEEE International Conference on
Multimedia & Expo., pp. 1234–1237 (2005)
ESDDM: A Software Evolution Process Model Based on
Evolution Behavior Interface*
Na Zhao1,3,**, Jian Wang2,**,***, Tong Li1,3, Yong Yu1,3, Fei Dai1, and Zhongwen Xie1
1
School of Software, Yunnan University, Kunming, China
2
College of Information and Automation Engineering,
Kunming University of Science and Technology, Kunming, China
3
Key Laboratory in Software Engineering of Yunnan Province, Kunming, China
zhaonayx@126.com, obcs2002@163.com, tli@ynu.edu.cn,
yuy1219@163.com, flydai@hotmail.com, xiezw56@126.com
1 Introduction
During software evolution, changes at various granularities occur continuously or
discontinuously. An evolution process model must embody the properties of evolution
and be able to define more dynamic components than traditional development does,
so that the changes can be described. By observation and analysis, the following
properties are found to exist in software evolution processes: iteration [2, 3],
concurrency, interleaving of continuous and discontinuous change [4], feedback-driven
systems [5, 6], and a multi-level framework.
This paper pays attention to the evolution behavior in the software life cycle from
the perspective of components.
* This work has been supported by the National Science Foundation of China under
Grant No. 60963007, by the Science Foundation of Yunnan Province, China under Grant
No. 2007F008M, the Key Subject Foundation of the School of Software of Yunnan
University, the Open Foundation of the Key Laboratory in Software Engineering of
Yunnan Province under Grant No. 2010KS01, by the Science Foundation of Yunnan
Province (The Research of Software Evolution Based on OOPN), and by the Science
Foundation of Yunnan University, No. 2009F36Q.
** These authors contributed equally to this work.
*** Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 562–567, 2011.
© Springer-Verlag Berlin Heidelberg 2011
2 Related Work
The components we are discussing here are process-based rather than components in the
ordinary sense; therefore, we introduce the concept of the evolution behavior
interface. It is a virtual interface that differs from general component interfaces.
The reason it is called a virtual interface is that it does not correspond to a
certain type of function, so the external caller of the component cannot invoke it
directly; the main purpose of the virtual interface is the evolution of components.
Thus, we classify component interfaces into two categories:
Functional interface: this kind of interface can be directly accessed by external
components and is used to describe the interaction between behaviors.
Evolution behavior interface: this kind of interface is used in the behavioral
evolution of components. It can be accessed from the exterior of components by
special methods.
Interface ::= <Functional Interface, Evolution Behavior Interface>
4 ESDDM
transition.
Role is an activity-related role set, with $Role = \{role_1, role_2, role_3, \cdots, role_m\} \wedge m \ge 1$.
IP (input products) is a set of input products $\{I_1, I_2, \ldots, I_n\}$; it lists
the input products that an activity needs. The input product set of an activity $a$
is written $IP(a)$.
OP (output products) is a set of output products $\{O_1, O_2, \ldots, O_n\}$; it
lists the output products after an activity is executed. The output product set is
written $OP(a)$.
(IP and OP correspond to the component's functional interface.)
I and O are the sets of inputs and outputs of activities. $I \subseteq P \times T$ is
the nonempty finite set of arcs mapping from places to activities; $O \subseteq T
\times P$ is the nonempty finite set of arcs mapping from activities to places.
F is a transition-related output function set; it is a set of tasks. It describes the
state changes from the pre-set to the post-set of an activity, i.e., the function of
an activity. It can be formally described as follows:

$$\forall q_j \in t^* \; \exists f_j : s(p_1) \times s(p_2) \times s(p_3) \times \cdots \times s(p_n) \to s(q_j)$$

Let $F_t = \{ f_j \mid j < m \wedge \forall q_j \in t^* \}$, where ${}^*t = \{p_1,
p_2, p_3, \ldots, p_n\}$ and $t^* = \{q_1, q_2, q_3, \ldots, q_m\}$. Here $s(p_i)$
describes the state of place $p_i$ in the place set P. The important role of the
output function is that it can describe the dynamic behavior of activities and
enables the system to describe activities at different granularities.
$G = \{G_1, G_2, \ldots, G_n\}$ is the set of targets. We should make sure that the
output products of activities meet the requirements of the targets; this establishes
the foundation for evaluating customer satisfaction.
C is the set of restriction functions; each is a combination of all the restricting
conditions related to a transition t, and any transition has a constraint function
connected with it. L denotes the finite set of predicates, and the constraint
function of a transition is denoted as $\forall t_i \in T \; \exists C_{t_i} : l_1
\wedge l_2 \wedge l_3 \wedge \cdots \wedge l_n \to \{0,1\}$, $l_i \in L$. Unless
declared otherwise, the constraint function $C_t$ of transition t has a default value
of 1.
Definition 4. A token can be denoted as $token_k = \langle value, type \rangle \wedge
type \in CS$, where $value$ denotes the value of the token (it may take any form,
such as a number, a structure, a set or a category) and $type$ is the category of the
token. Tokens are used to denote system resources.
4. Each token newly added to a place of the post-set has a value calculated from an
output function related to this transition: $\forall q_j \in t^*: value_k^{q_j} =
f_j(s(p_1), \ldots, s(p_n))$.
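The enabling and firing semantics defined above can be sketched as a toy interpreter; for simplicity, each place holds at most one token value, and the constraint and output functions are plain callables:

```python
def fire_transition(marking, preset, postset, constraint, outputs):
    """Fire one transition, as a sketch of the semantics above: the
    transition is enabled when every pre-set place is marked and the
    constraint function evaluates to 1; firing consumes the pre-set
    tokens and gives each post-set place a token whose value comes
    from the transition's output function f_q."""
    inputs = [marking[p] for p in preset]
    if any(v is None for v in inputs) or not constraint(inputs):
        return False                      # not enabled
    for p in preset:
        marking[p] = None                 # consume pre-set tokens
    for q in postset:
        marking[q] = outputs[q](inputs)   # new token value from f_q
    return True
```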
Next, we give the model mapping rules, which can be used to map the real-life
concurrent development process of software into a form of the XPN.
1. The resources used or produced in the concurrent development process of software
can be mapped into the token of the ESDDM.
2. The categories of the resources in the concurrent development process of software
can be mapped into the CS set in the ESDDM.
3. The activities in the concurrent development process of software can be mapped
into the transition in the ESDDM. A set of many activities can be abstracted into a
sub-process or project. Similarly, a process or sub-process can also be decomposed
into many activities.
4. The relevant responsibilities of the activities can be mapped into a role set of
transition in the ESDDM.
5. The specific functions of the activities can be mapped into an output function set of
transition in the ESDDM.
6. The conditions working from the beginning of the activities can be mapped into the
pre-set of transition in the ESDDM.
7. The terminal resulting from the activities can be mapped into the transition post-set
in the ESDDM.
8. The restrictions guaranteeing the beginning and the terminal of the activities can be
mapped into restrictive functions related to the transition in the ESDDM.
9. The beginning of the activities can be mapped into the enabling of a transition in
the ESDDM.
10.The terminal of the activities can be mapped into the firing of transition in the
ESDDM.
11.The time from the beginning to the terminal of the activities can be mapped into
the firing delay of transition in the ESDDM.
References
1. Li, T.: An Approach to Modelling and Describing Software Evolution Processes. Ph.D.
Thesis, De Montfort University, UK (February 2007)
2. Yang, H., Ward, M.: Successful Evolution of Software System. Artech House, Norwood
(2003)
3. Lehman, M.M.: Laws of Software Evolution Revisited. In: Montangero, C. (ed.) EWSPT
1996. LNCS, vol. 1149, pp. 108–124. Springer, Heidelberg (1996)
4. Aoyama, M.: Continuous and Discontinuous Software Evolution: Aspects of Software
Evolution across Multiple Product Lines. In: Proc. 4th International Workshop on
Principles of Software Evolution, pp. 87–90. ACM Press, New York (2001)
5. Chatters, B.W., et al.: Modelling a Software Evolution Process: a Long-Term Case Study.
Journal of Software Process: Improvement and Practice 5(2-3), 95–102 (2000)
6. Lehman, M.M., Ramil, J.F.: The Impact of Feedback in the Global Software Process.
Journal of Systems and Software 46(2-3), 123–134 (1999)
7. Zhao, N., Dai, F., Yu, Y., Li, T.: An Extended Process Model Supporting Software
Evolution. In: Proceedings of 2008 International Symposium on Intelligent Information
Technology Application (IITA 2008), Shanghai, China, pp. 1013–1016. IEEE Computer
Society, Los Alamitos (2008)
8. Mehta, A., Heineman, G.T.: Evolving Legacy System Features into Fine- Grained
Components. In: Proc. 24th International Conference on Software Engineering, pp. 417–
427. ACM Press, New York (2002)
9. Bandinelli, S., et al.: Process modeling in-the-large with SLANG. In: Proc. 2nd
International Conference on Software Process, pp. 75–83. IEEE Computer Society Press,
Los Alamitos (1993)
10. Osterweil, L.: Software processes are software too. In: Proc. 19th International Conference
on Software Engineering, pp. 540–548. ACM Press, New York (1997)
11. Osterweil, L.: Understanding process and the quest for deeper questions in software
engineering research. ACM SIGSOFT Software Engineering Notes 8, 6–14 (2003)
12. Zhao, N., Li, T., Yang, L.L., Yu, Y., Dai, F., Zhang, W.: The Resource Optimization of
Software Evolution Processes. In: Proceedings of 2009 International Conference on
Advanced Computer Control (ICACC 2009), Singapore, January 2009, pp. 32–36 (2009)
A Loading Device for Intervertebral Disc
Tissue Engineering
Lihui Fu1, Chunqiu Zhang1, Baoshan Xu2, Dong Xin3, and Jiang Li1
1
School of Mechanical Engineering, Tianjin University of Technology,
Tianjin Key Laboratory for Control Theory &
Applications in Complicated Industry Systems Tianjin, China
2
Orthopaedics, Tianjin Hospital Tianjin, China
3
Department of Mechanics, College of Mechanical Science and Engineering
Abstract. In recent years, continuously improving tissue-engineering technologies have brought hope for complete regeneration of the morphology and physiological function of the degenerated intervertebral disc. Based on the biomechanics of the intervertebral disc and the mechanical characteristics of human activities, we have developed a dual-frequency compression-torsion loading device for intervertebral disc tissue engineering. The device consists of three systems. The dual-frequency compression part contains two systems, producing low-frequency high-amplitude compression (frequency 0-3 Hz, amplitude 0-3 mm) and high-frequency low-amplitude compression (frequency 10-90 Hz, amplitude 0-40 μm). The rotary system applies torsional loading to the culture by back-and-forth rotation controlled by a stepper motor; the maximum rotation angle is set below 30°. The design goal of the loading device is to achieve a dual-frequency compression-torsion mechanical condition as close as possible to the in vivo condition of native disc tissue; the device can also be used to study the mechanobiology of the intervertebral disc.
1 Introduction
In the orthopedic field, low back pain and other spinal diseases caused by intervertebral disc degeneration have seriously affected people's normal lives; the annual cost in terms of morbidity, lost productivity, medical expenses and workers' compensation benefits is significant. Development of new treatment modalities is therefore critical, even as many new technologies and methods are being adopted for clinical treatment. In recent years, with the development of tissue engineering techniques, bone and cartilage tissue engineering has become an attractive way to repair tissue defects, and disc tissue engineering likewise offers a new route for the degenerated intervertebral disc, bringing hope that intervertebral discs with proper structure and physiological function can be cultivated [1].
The disc is the largest avascular tissue, comprising the nucleus pulposus, annulus fibrosus and cartilage endplates. The cartilage endplate plays a critical role in maintaining the viability of nucleus pulposus cells. The nutrition of the nucleus pulposus
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 568–573, 2011.
© Springer-Verlag Berlin Heidelberg 2011
2 Methods
The overall structure of the dual-frequency compression and torsion loading device is shown in Figure 1. Functionally, the device comprises three parts: a low-frequency high-amplitude system, a high-frequency low-amplitude system and a rotation system. The main framework of the device is two triangular panels and support columns. The device dimensions (160 mm × 130 mm × 60 mm) are small enough for it to be placed in an incubator for use.
The low-frequency high-amplitude system includes the stepping motor, cam, camshaft, return spring and mandrel. Once the stepping motor starts, the cam rotates with the camshaft. The cam stroke is adjustable by changing the adjusting screw; altering the cam rotation center thus adjusts the compressive displacement, providing different amounts of compression (Figure 2). The speed, acceleration and frequency of cam rotation are controlled by the controller. The return spring and the cam rotation drive the mandrel back and forth. Together, this machinery provides axial compression loads of different magnitude and frequency for the culture: the cam rotational speed ranges over 0-200 r/min, and the low-frequency and adjustable cam amplitude ranges are 0-3 Hz and 0-3 mm respectively.
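As a simple numerical illustration of the stated ranges (an assumption of this sketch: the two loading components are modeled as superimposed sinusoids, although the real cam and piezoelectric profiles need not be sinusoidal), the combined axial displacement can be written as:

```python
import math

def axial_displacement(t, lo_amp_mm=3.0, lo_freq_hz=1.0,
                       hi_amp_mm=0.040, hi_freq_hz=50.0):
    """Sum of the low-frequency high-amplitude (cam) component and the
    high-frequency low-amplitude (piezoelectric) component, both modeled
    as sinusoids chosen within the device's stated ranges."""
    lo = lo_amp_mm * math.sin(2.0 * math.pi * lo_freq_hz * t)
    hi = hi_amp_mm * math.sin(2.0 * math.pi * hi_freq_hz * t)
    return lo + hi

# The peak excursion never exceeds cam stroke plus piezo amplitude.
samples = [axial_displacement(k / 1000.0) for k in range(1000)]
peak_mm = max(abs(s) for s in samples)
```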
The rotary mechanism includes the motor control box, motor and loading platform. The motor output shaft connects to the loading platform, which contacts the base of the culture room directly. The culture room rotates together with the loading platform, driven by the motor, to apply torsional loading to the culture. Forward and reverse rotation of the motor makes the culture room rotate back and forth. The maximum angle of rotation is 30°.
Fig. 3. View of the profile along the centerline: 1-camshaft, 2-stepper motor a, 3-upper plate, 4-sleeve, 5-piezoelectric, 6-loading block, 7-culture room, 8-loading platform, 9-stepper motor b, 10-lower plate, 11-base of culture room, 12-cultures, 13-culture room cover, 14-connecting block, 15-brackets, 16-mandrel, 17-cam bracket, 18-cam
3 Results
The proposed dual-frequency compression-torsion loading device provides a suitable mechanical environment for intervertebral disc tissue engineering, including axial dual-frequency compression loading and torsional loading. The dual-frequency compression contains two systems: low-frequency high-amplitude (frequency 0-3 Hz, amplitude 0-3 mm) and high-frequency low-amplitude (frequency 10-90 Hz, amplitude 0-40 μm). The low-frequency high-amplitude loading is applied via cam rotation controlled by a stepper motor, and adjusting the motor speed produces loadings of different frequencies; the high-frequency load is produced by piezoelectric ceramics. The rotary system uses a stepper motor to control the stage rotation, and the torsion angle is controlled by adjusting the rotation step angle (back-and-forth rotation, with maximum rotation angle < 30°). The device can
4 Discussion
References
1. Kandel, R., Roberts, S.: Tissue engineering and the intervertebral disc: the challenges.
Spine 17(suppl. 4), S80–S491 (2008)
2. Korecki, C.L., MacLean, J.J.: Characterization of an in vitro intervertebral disc organ
culture system. Spine 16, 1029–1037 (2007)
3. Lotz, J.C., Hsieh, A.H.: Mechanobiology of the intervertebral disc. Biochemical Society
Transactions 30(part 6), 853–858 (2002)
4. Guehring, T., Nerlich, A.: Sensitivity of notochordal disc cells to mechanical loading: an
experimental animal study. Spine 19, 113–121 (2010)
5. Zhang, C., Zhang, X.: A Loading Device Suitable for Studying Mechanical Response of
Bone Cells in Hard Scaffolds. Journal of Biomedical Materials Research Part B: Applied
Biomaterials (2009)
6. Wang, D.-L., Han, M.: Bioreactors for animal cell culture and the related technologies.
China Biotechnology 23(11), 24–27 (2003)
7. Niu, P., Xiong, W.: The development of research on intervertebral disc degeneration model.
Spinal 20(2), 160–162 (2010)
8. Concaro, S., Gustavson, F.: Bioreactors for Tissue Engineering of Cartilage, Berlin,
Heidelberg (2008)
The Optimization of Spectrum Sensing Frame Time
1 Introduction
With the great increase in wireless applications, spectrum scarcity, which results from the traditional approach of fixed spectrum allocation, has become a pressing problem. However, recent studies [1-2] have shown that most licensed spectrum remains idle at any given time and location. Regarded as a promising solution for improving spectrum utilization efficiency, cognitive radio (CR) technology [3-4] has received considerable attention. CR enables the CR user, also termed the secondary user (SU), to utilize the spectrum opportunistically when the licensed user, also termed the primary user (PU), is not occupying it, while ensuring no interference to the PU. According to the IEEE 802.22 standard [5], known as the CR standard, the SU should stop its data transmission within 2 seconds after the start of the PU's data transmission.
As one of the key technologies in cognitive radio, spectrum sensing must rapidly and efficiently detect the PU's presence or absence, to avoid collisions and lost spectrum opportunities respectively. Therefore, the SU generally adopts the periodic spectrum sensing structure depicted in Fig. 1. Each spectrum sensing frame consists of a sensing slot (τ) and an action slot (T−τ). First, the SU carries out spectrum sensing in the sensing slot and obtains a channel-state result, idle or busy, meaning the PU is absent or present on the operating channel respectively. Then, the SU takes the corresponding action, data transmission or channel switching, in the action slot. After the SU's data
[Fig. 1. Periodic spectrum sensing frame structure: frames k, k+1 of length T on Channel A, each with a sensing slot τ and an action slot T−τ (data transmission with ack, or switching to Channel B)]
2 System Description
We use energy detection as the channel sensing method. After the sensing slot, the energy detector produces a decision statistic given by

X = (1/N) Σ_{n=1}^{N} [x(n)]²    (1)

where x(n) is the sampled received signal and N is the number of samples; if the sampling rate is f_s, then N = τ·f_s. Through energy detection, the SU must solve a binary hypothesis testing problem, i.e., determine whether the PU is transmitting (the operating channel is busy), denoted H1, or not, denoted H0:

H0: x(n) = w(n),    H1: x(n) = s(n) + w(n)
where n = 1, 2, …, N, w(n) is additive white Gaussian noise (AWGN), s(n) is the PU signal, η is the detection threshold, and θ is the output of the energy detector. The goal of spectrum sensing is to decide reliably between the two hypotheses, with a high probability of detection, Pd = Pr(H1|H1), and a low probability of false alarm, Pf = Pr(H1|H0).
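A minimal sketch of the energy detector of Eq. (1) and the resulting threshold test (the signal model and the particular threshold below are illustrative assumptions, not values from this paper):

```python
import random

def energy_statistic(x):
    """Eq. (1): X = (1/N) * sum over n of x(n)^2."""
    return sum(v * v for v in x) / len(x)

def sense(x, eta):
    """Decide H1 (PU present, channel busy) if X exceeds the threshold eta,
    otherwise decide H0 (PU absent, channel idle)."""
    return 1 if energy_statistic(x) > eta else 0

random.seed(0)
N = 5000                                                  # N = tau * fs samples
noise = [random.gauss(0.0, 1.0) for _ in range(N)]        # H0: x(n) = w(n)
busy = [w + random.gauss(0.0, 1.0) for w in noise]        # H1: s(n) + w(n)
eta = 1.5   # between the noise power (1.0) and the busy power (about 2.0)
```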
There are two cases in which the SU accesses the channel: 1) under hypothesis H0, the SU raises no false alarm about the presence of the PU; 2) under hypothesis H1, the SU mis-detects the PU and believes it absent. The SU can distinguish these two cases from the ack result. In case 2, a data transmission collision occurs between the PU and the SU, which directly degrades the PU's quality of service (QoS). There are likewise two cases in which the SU stops data transmission on the operating channel: a false alarm, and successful detection of the reappearing PU. The SU cannot distinguish these two cases. A false alarm causes a loss of spectrum opportunity. The spectrum sensing frame time T determines the maximum time during which the SU can be unaware of the PU's reappearance, and its choice depends on the type of the PU's QoS. When the sensing time τ is fixed, increasing the frame time T increases the time available for data transmission, so the throughput gain increases, but the probability of collision increases too.
[Figure: two-state Markov chain; state 0 (idle) and state 1 (busy); self-transition probabilities e^(−αT) and e^(−βT); cross-transition probabilities 1 − e^(−αT) and 1 − e^(−βT)]
Fig. 2. State transition
Let π, 0 < π < 1, denote the probability that the operating channel is idle at the beginning of a frame; it can be seen as a belief state, updated after each frame. We take the beginning of a frame as the starting point t = 0, and define t as the time for which the channel state remains idle or busy. When the channel is idle or busy, the probability density function of t is respectively given by
Since the SU has little knowledge of the channel state, the initial probability is given by the stationary probability

π₀ = β / (α + β)    (5)
In a frame, the SU first senses the channel state, then chooses an action based on the sensing result θ. The belief state is updated according to one of the following three cases.
Case 1: θ = 0, the SU carries out data transmission and receives ack = 1; the belief state in the next frame is

L₁(T) = e^(−αT)    (6)

L₂(T) = 1 − e^(−βT)    (8)
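The channel model of Fig. 2, the stationary belief of Eq. (5), and the case-1 update of Eq. (6) can be sketched as follows (Eq. (5) is the stationary distribution of the underlying continuous-time chain with rates α and β; the elided cases 2 and 3 are not reproduced here):

```python
import math

def transition_matrix(alpha, beta, T):
    """Per-frame transition probabilities of the two-state channel (Fig. 2):
    state 0 = idle, state 1 = busy; the channel stays idle with probability
    e^(-alpha*T) and stays busy with probability e^(-beta*T)."""
    p00 = math.exp(-alpha * T)
    p11 = math.exp(-beta * T)
    return [[p00, 1.0 - p00], [1.0 - p11, p11]]

def stationary_idle(alpha, beta):
    """Eq. (5): the SU's initial belief pi_0 = beta / (alpha + beta)."""
    return beta / (alpha + beta)

def belief_after_idle_frame(alpha, T):
    """Eq. (6), case 1 (sensed idle, transmitted, ack = 1): the belief that
    the channel is idle at the next frame is L1(T) = e^(-alpha*T)."""
    return math.exp(-alpha * T)

alpha, beta, T = 0.2, 0.5, 1.0       # illustrative rates, not from the paper
P = transition_matrix(alpha, beta, T)
pi0 = stationary_idle(alpha, beta)
```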
Suppose the energy costs per unit time of spectrum sensing and data transmission are Cs and Cd, the throughput gain per unit time of a successful data transmission is Cr, and the cost of a channel switch is Csw. Accordingly, the expected net gain in a frame for the SU can be calculated as
Pc = η₂(π, T)    (13)

To guarantee the QoS of the PU, suppose the maximal allowed collision probability is P^c_max; the optimization problem can then be defined as

R(π, T) = max_T { G(π, T) + λ Σ_{i=1}^{3} η_i(π, T) R(L_i(T), T) }    (14)

subject to Pc ≤ P^c_max
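A sketch of solving the constrained problem by grid search over T. The gain and collision functions below are illustrative stand-ins (assumptions of this sketch) for G + λΣηᵢR and η₂; only the search structure mirrors the formulation:

```python
import math

def optimize_frame_time(gain, collision_prob, p_c_max, T_grid):
    """Maximize the expected gain over candidate frame times T,
    subject to the collision constraint Pc(T) <= p_c_max."""
    feasible = [T for T in T_grid if collision_prob(T) <= p_c_max]
    return max(feasible, key=gain)

# Illustrative stand-ins: longer frames earn more throughput but raise the
# chance that the PU reappears mid-frame (collision).
tau, alpha = 0.1, 0.2
gain = lambda T: (T - tau) * math.exp(-alpha * T)
collision = lambda T: 1.0 - math.exp(-alpha * T)
T_grid = [0.2 * k for k in range(1, 51)]            # 0.2 s .. 10.0 s
T_star = optimize_frame_time(gain, collision, p_c_max=0.5, T_grid=T_grid)
# Here the constraint binds: the unconstrained optimum is T = tau + 1/alpha
# = 5.1 s, but collision(T) <= 0.5 forces T <= ln(2)/alpha ~= 3.47 s.
```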
4 Conclusion
In this paper, we consider a multi-channel CRN with energy detection as the underlying detection scheme. We then study the design of the optimal spectrum sensing frame time, taking into account the dynamics of the PU state and the restriction on collision probability, in order to maximize the expected throughput reward while also accounting for the energy cost. We adopt the framework of partially observable Markov decision processes (POMDPs) to analyze the problem.
Acknowledgment
This work has been supported by the National Natural Science Foundation of China
under Grant No. 60874108 and 60904035.
References
1. Federal Communication Commission, Facilitating Opportunities for Flexible, Efficient,
and Reliable Spectrum Use Employing Cognitive Radio Technologies, Rep. ET Docket
No. 03-108 (December 2003)
2. Roberson, D.A., Hood, C.S., LoCicero, J.L., MacDonald, J.T.: Spectral Occupancy and
Interference Studies in support of Cognitive Radio Technology Deployment. In: IEEE
Workshop on Networking Technologies for Software Defined Radio Networks (SDR
2006), pp. 26–35 (September 2006)
3. Mitola, J., Maguire, G.Q.: Cognitive radio: making software radios more personal. IEEE
Personal Communications 6(4), 13–18 (1999)
4. Haykin, S.: Cognitive radio: Brain-Empowered Wireless Communications. IEEE Journal
on Selected Areas in Communications 23(2), 201–220 (2005)
5. IEEE 802.22 WRAN System (2006)
6. Yücek, T., Arslan, H.: A Survey of Spectrum Sensing Algorithms for Cognitive Radio
Applications. IEEE Communications Surveys & Tutorials 11(1), 116–130 (First Quarter
2009)
7. Liang, Y.C., Zeng, Y.H., Peh, E.C.Y., Hoang, A.T.: Sensing-Throughput Tradeoff for
Cognitive Radio Networks. IEEE Transactions on Wireless Communications 7(4), 1326–
1337 (2008)
8. Pei, Y., Hoang, A.T., Liang, Y.-C.: Sensing-Throughput Tradeoff in Cognitive Radio
Networks: How Frequently Should Spectrum Sensing Be Carried Out? In: The 18th
Annual IEEE International Symposium on Personal, Indoor and Mobile Radio
Communications (PIMRC 2007), September 2007, pp. 1–5 (2007)
9. Ghasemi, A., Sousa, E.S.: Optimization of Spectrum Sensing for Opportunistic Spectrum
Access in Cognitive Radio Networks. In: Proc. 4th IEEE Consumer Communications and
Networking Conference (CCNC), January 2007, pp. 1022–1026 (2007)
10. Hoang, A.T., Liang, Y.C., Wong, D.T.C., Zeng, Y.H., Zhang, R.: Opportunistic Spectrum
Access for Energy-Constrained Cognitive Radios. IEEE Transactions on Wireless
Communications 8(3), 1206–1211 (2009)
11. Choi, K.W.: Adaptive Sensing Technique to Maximize Spectrum Utilization in Cognitive
Radio. IEEE Transactions on Vehicular Technology 59(2), 992–998 (2010)
A Novel Model for the Mass Transfer of Articular
Cartilage: Rolling Depression Load Device
Abstract. Mass transfer is essential to maintaining the proper physiological activity of tissue; cartilage in particular cannot function without a mechanical environment. Mechanical loading drives nutrients into and waste out of cartilage tissue, and changes in this process play a key role in biological activity. Researchers have typically used compression to study mass transfer in cartilage; here we establish, for the first time, a rolling depression load (RDL) device and put it into practice. The device is divided into a rolling control system and a compression adjusting mechanism. The rolling control system ensures that the roller applies pure rolling at uniform speed to the cultured tissue. The compression adjusting mechanism realizes different compressive magnitudes and uniform compression. Preliminary tests showed that rolling depression loading indeed enhances mass transfer in articular cartilage.
1 Introduction
The mass transfer process in an organism is an important aspect of necessary physiological activity: the movement of nutrients, growth factors, cytokines, metabolic waste and secreted or degraded molecules depends on mass transfer [1,2,3]. Many early tissue lesions and failures of tissue rebuilding occur precisely because mass transfer is unsuccessful [2,3]. Research on mass transfer is fundamental to clinical treatment and tissue repair, and it is also a key topic of tissue engineering.
Articular joints are supported by cartilage tissue, a complex multi-component material that is avascular. Approximately 90% of the tissue is the mechanically functional extracellular matrix, which is synthesized and remodeled by the chondrocytes within it [3,4,5]. Cartilage has several irreplaceable functions: bearing mechanical load, absorbing the stress waves of dynamic loading, reducing direct pressure on bone, and providing lubrication within synovial joints [3,4,5].
Under mechanical pressure, changes in the fluid and the tissue alter the mass transfer behavior of the extracellular matrix (ECM), which in turn can adjust the biosynthesis of the tissue. Research on internal transfer behavior in tissue is therefore important for understanding how nutrition and mechanical signals are transferred in cartilage.
An optimized loading process may enhance the effectiveness of mass transfer. Studies show that certain levels of dynamic load can change the movement of macromolecular solutes in tissue; to exploit this, many researchers have studied mass transfer in cartilage under different compressive conditions [2,5].
Here, we put forward a new loading approach: rolling depression load. In common daily activities (like walking and running), the joint contact area sweeps quickly over a region of the articular surfaces, and the two opposing cartilage surfaces have two relative movements: rolling and sliding [6]. In nature, the apparent compressive and shear deformations induced by the RDL are the primary loading modes experienced by native cartilage tissue, so the RDL can mimic the primary mechanical environment experienced by articular cartilage in vivo. We therefore designed this loading device to study cartilage mass transfer.
The overall structure is shown in Figure 1. Functionally, it can be divided into two systems: the rolling control system and the compression adjusting mechanism. The rolling control system makes the roller roll back and forth along the backboard by means of a stepper motor driving a screw rod, while the compression adjusting mechanism adjusts the compression applied to the culture. Structurally, the RDL device is divided into two parts: the mechanical structure and the electronic control. The frame of the device is a square board, on which the rolling control system and the compression adjusting mechanism are mounted. The device (except the control box) is small enough to be placed into an incubator for operation.
The rolling control system (Figure 2) includes the stepper motor, motor shaft, screw rod, connecting piece, linkage, roller, gear and rack. One side of the motor shaft connects to the stepper motor, and the other side links to the linkage; the linkage fits the slot of the terminal roller (see the cross-section in Figure 2), and the roller's shaft is embedded in the slot. The roller has gears on both sides, which mesh with racks fixed on the board. The roller provides the rolling depression load: the stepper motor drives the screw rod, making the roller roll back and forth. The rolling distance can vary over a large range; for a distance of 5 mm, for example, the frequency can reach 0.6 Hz. Because a screw-rod connection is adopted, the roller moves at a uniform velocity over the training bag. The specifications of the rolling control system are: maximum distance 30 mm, maximum rolling speed 100 cm/s, and maximum frequency 3 Hz.
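The uniform-speed, back-and-forth motion can be sketched as a triangle-wave position profile (a simplifying assumption of this sketch: acceleration at the turnaround points is ignored, and the stated maxima need not be attainable simultaneously):

```python
def roller_position(t, distance_mm=5.0, freq_hz=0.6):
    """Uniform-speed back-and-forth rolling: a triangle wave covering the
    given travel distance once forward and once back per cycle."""
    period = 1.0 / freq_hz
    phase = (t % period) / period              # 0..1 within one cycle
    if phase < 0.5:
        return distance_mm * (2.0 * phase)     # forward stroke
    return distance_mm * (2.0 - 2.0 * phase)   # return stroke

def rolling_speed_mm_s(distance_mm, freq_hz):
    """The constant speed implied by a travel distance and cycle frequency:
    the roller covers 2 * distance per cycle."""
    return 2.0 * distance_mm * freq_hz

# Even combining the stated maxima (30 mm travel at 3 Hz), the implied
# speed stays well below the 100 cm/s (1000 mm/s) cap of the system.
implied_max = rolling_speed_mm_s(30.0, 3.0)    # 180 mm/s
```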
Acknowledgments
The authors thank the Natural Science Foundation of China (31000422, 10872147) and the Tianjin Natural Science Foundation (09JCYBJC1400) for their support.
References
1. Quinn, T.M., Morel, V., Meister, J.J.: Static compression of articular cartilage can reduce
solute diffusivity and partitioning: implications for the chondrocyte biological response.
Journal of Biomechanics 34, 1463–1469 (2001)
2. Evans, R.C., Quinn, T.M.: Dynamic Compression Augments Interstitial Transport of a
Glucose-Like Solute in Articular Cartilage. Biophysical Journal 91, 1541–1547 (2006)
3. Greene, G.W., Zappone, B., Zhao, B., Derman, O.S., Topgaard, D., Rata, G., Jacob, N.:
Changes in pore morphology and fluid transport in compressed articular cartilage.
Biomaterials 29, 4455–4462 (2008)
4. Evans, R.C., Quinn, T.M.: Solute convection in dynamically compressed cartilage. Journal
of Biomechanics 39, 1048–1055 (2006)
5. Arkill, K.P., Winlove, C.P.: Solute transport in the deep and calcified zones of articular
cartilage. Osteoarthritis and Cartilage 16, 708–714 (2008)
6. Guilak, F., Butler, D.L., Goldstein, S.A., Mooney, D.J.: Functional Tissue Engineering, pp.
332–400. Springer, Heidelberg (2004)
7. Hung, C.T., et al.: A paradigm for functional tissue engineering of articular cartilage via
applied physiologic deformational loading. Ann. Biomed. Eng. 32, 35–49 (2004)
Sensor Scheduling Target Tracking-Oriented
Cluster-Based
1 Introduction
Wireless sensor networks have recently found wide use in applications such as area surveillance, environmental monitoring and the military field. They consist of many small, low-cost nodes with signal processing, wireless communication and power sources, which can be randomly deployed in a sensing area.
Since sensor nodes carry limited energy and it is usually difficult or impossible to exchange batteries, energy consumption must be a central concern in designing a wireless sensor network. How to design power-efficient software and algorithms that perform signal processing and communication, so as to prolong the system's lifetime, is thus a key problem for designers.
Target tracking is one of the important applications of wireless sensor networks; examples include tracking the movements of birds, small animals and insects in environmental applications, and vehicle tracking and detection on roads [1]. Because of the limited energy and communication constraints, the tracking task must be distributed over the sensor nodes.
Maheswararajah et al. present several suboptimal sensor scheduling algorithms for single-target tracking. The performance measure is the sum of the Kalman filtering error covariance of the target state and the weighted sum of sensor usage costs. They also propose an iterative suboptimal algorithm that iteratively refines the
sensor sequence. The proposed methods are shown to perform better than existing methods on a numerical single-target tracking problem, illustrating the effectiveness of the algorithms in realistic settings [2].
Hanzalek et al. present a methodology that provides a Time-Division Cluster Scheduling (TDCS) mechanism based on a cyclic extension of Resource-Constrained Project Scheduling with Temporal Constraints for WSNs. The objective is to meet all end-to-end deadlines of a predefined set of time-bounded data flows while minimizing the energy consumption of the nodes, by setting the TDCS period as long as possible. Since each cluster is active only once during the period, the end-to-end delay of a given flow may span several periods when flows in opposite directions exist. The performance evaluation of their scheduling tool shows that problems with dozens of nodes can be solved using optimal solvers [3].
Xu et al. proposed a geographical adaptive fidelity (GAF) algorithm that conserves energy by identifying nodes that are equivalent from a routing perspective. GAF keeps a constant level of fidelity by turning off unnecessary nodes. Simulations show that network lifetime increases proportionally to node density; GAF thus extends network lifetime by exploiting redundancy to conserve energy [4].
In this paper, we propose a cluster-based sensor scheduling algorithm for target tracking in wireless sensor networks, in which the sensing area is divided into clusters based on the transmitting range of the nodes. Node states change as the target moves, in order to conserve energy. The cluster head is selected by the remaining energy of the node, and a position function is also given. When a target moves into the sensing field, cluster heads are first selected, and then these cluster heads invite their neighboring sensors to form clusters. The trajectory of the target is estimated by a tracking function. The sensor information is passed on to the cluster heads, which, in turn, transmit it to the base station [5-15].
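A minimal sketch of the energy-based cluster-head selection described above (the data layout and the tie-breaking rule are assumptions of this sketch, not specified by the paper):

```python
import math

def select_cluster_head(nodes, target, sensing_range):
    """Pick as cluster head the node with the most remaining energy among
    the nodes whose distance to the target is within the sensing range.
    `nodes` maps node id -> (x, y, remaining_energy)."""
    in_range = {nid: n for nid, n in nodes.items()
                if math.hypot(n[0] - target[0], n[1] - target[1]) <= sensing_range}
    if not in_range:
        return None
    return max(in_range, key=lambda nid: in_range[nid][2])

nodes = {
    "n1": (1.0, 1.0, 0.8),
    "n2": (2.0, 1.0, 0.9),   # in range, most remaining energy
    "n3": (9.0, 9.0, 1.0),   # out of range of the target
}
ch = select_cluster_head(nodes, target=(1.5, 1.0), sensing_range=3.0)
```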
2 Cluster
Sensor nodes in a WSN can be divided into groups called clusters to realize efficient data gathering and network organization; each cluster has a cluster head (CH) and a number of member nodes. Clustering produces a two-layer hierarchy in the WSN: cluster heads (CHs) form the higher layer, while member nodes form the lower layer. The members of a cluster communicate with their CH directly, and a CH can forward the fused data to the central base station through other CHs.
The cluster heads, which have more power, can perform more complex tasks such as fusing and transmitting data, and can communicate with other cluster heads to send useful information toward the base station.
Different cell shapes have been proposed for grouping deployed nodes into clusters, such as circles, quadrilaterals and hexagons. The circle is often used because it corresponds to the shape of the radio transmission range, but it may result in overlap between clusters or uncovered areas. The hexagon is an ideal shape for partitioning a large area into adjacent, non-overlapping cells, as can be shown as follows.
588 D. Yan et al.
A quadrilateral structure for partitioning the sensing region is shown in Fig. 1 and Fig. 2, in which the nodes are divided into small clusters. The clusters are defined such that for two neighboring clusters A and B, all nodes in A can communicate with all nodes in B and vice versa.
[Figure: two neighboring square clusters A and B with cell side a; R is the transmission range]
Fig. 1. Quadrilateral Structure
Note that the transmitting range is R, so the distance between the two farthest possible nodes in any two neighboring clusters, for example node 1 in cluster A and node 2 in cluster B (Fig. 2), must not be larger than R. Based on the values marked in Fig. 1, this condition can be expressed as:

a² + (2a)² ≤ R²    (1)

a ≤ R/√5    (2)
[Figure: two neighboring hexagonal clusters A and B with cell side b; R is the transmission range]

b² + (2√3·b)² ≤ R²    (3)

b ≤ R/√13    (4)
Sensor Scheduling Target Tracking-Oriented Cluster-Based 589
S_q = a² = R²/5 = 0.2R²    (5)

S_h = 3 sin 60° · b² = (3√3/26) R² ≈ 0.2R²    (6)
Note that the one-hop coverage area of the quadrilateral structure is S_qc = 5S_q = R², while the one-hop coverage area of the honeycomb structure is S_hc = 7S_h ≈ 1.4R², so the one-hop coverage of the honeycomb structure is about 40% larger than that of the quadrilateral structure.
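The inequalities (1)-(4), the cell areas (5)-(6), and the roughly 40% coverage advantage can be checked numerically:

```python
import math

R = 1.0                                    # transmitting range (normalized)

# Quadrilateral (square) cell: a^2 + (2a)^2 <= R^2 gives a <= R / sqrt(5).
a = R / math.sqrt(5.0)
S_q = a * a                                # Eq. (5): R^2 / 5 = 0.2 R^2
S_qc = 5.0 * S_q                           # one-hop coverage: 5 cells

# Hexagonal cell: b^2 + (2*sqrt(3)*b)^2 <= R^2 gives b <= R / sqrt(13).
b = R / math.sqrt(13.0)
S_h = 3.0 * math.sin(math.radians(60.0)) * b * b   # Eq. (6): 3*sqrt(3)/26 R^2
S_hc = 7.0 * S_h                           # one-hop coverage: 7 cells

coverage_gain = S_hc / S_qc - 1.0          # about 0.4, i.e. roughly 40% larger
```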
3 System Utility
The number of sensors in the system is n and the set of sensors is M (M = {1, 2, …, n}). Let ni be the number of states of sensor i; each sensor has a sleeping state and several active states, and in different active states it can see different subsets of the set of targets. A sensor in an active mode incurs an additional cost ci. Let k be the number of targets and T the set of targets, T = {t1, t2, …, tk}. When a coalition C ⊆ M tracks a target tk, it gains a value V(C, tk) equal to the value of the fused information about target tk, denoted Inf(tk): V(C, tk) = Inf(tk). The sensor nodes then try to form a set of clusters {C1, C2, …, Cp} that maximizes the system utility:
Specifically, let visibility(i, si, tj) be true if sensor i in state si can see tj. Each sensor i selects its state si so as to maximize the system utility:

S = Σ_{j=1}^{k} V(C_j) − Σ_{i in active mode} c_i    (7)

S = Σ_{j=1}^{k} V({i : visibility(i, s_i, t_j) = true}, t_j) − Σ_{i in active mode} c_i    (8)
Because the value of a cluster is not known before the cluster actually performs the sensing task, we can use a predicted coalition value obtained from rough measurements made by the regional sensors. Because cluster values change as the targets move around, the set of sensors forming each cluster can also vary with time.
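A toy instance of utilities (7)-(8). The visibility encoding and the choice Inf(t) = cluster size are assumptions made purely for illustration:

```python
def system_utility(values, active_costs):
    """Eq. (7): S = sum of fused-information values V(C_j) over the k target
    clusters, minus the cost c_i of every sensor in an active mode."""
    return sum(values) - sum(active_costs)

def clusters_from_visibility(visibility, states):
    """Eq. (8): the cluster for target t_j is the set of sensors i whose
    chosen state s_i makes visibility(i, s_i, t_j) true."""
    return {tj: {i for (i, si, t), seen in visibility.items()
                 if seen and t == tj and states.get(i) == si}
            for tj in {t for (_, _, t) in visibility}}

# Toy instance: 3 sensors, 2 targets; sensor 3 sleeps (state 0).
states = {1: 1, 2: 1, 3: 0}
visibility = {
    (1, 1, "t1"): True, (2, 1, "t1"): True,
    (2, 1, "t2"): True, (3, 1, "t2"): True,   # sensor 3 would need state 1
}
C = clusters_from_visibility(visibility, states)
S = system_utility(values=[len(C["t1"]), len(C["t2"])],  # Inf(t) ~ cluster size
                   active_costs=[0.5, 0.5])              # sensors 1, 2 active
```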
4 Conclusion
In this paper, a cluster-based sensor scheduling algorithm using a honeycomb structure is proposed for target tracking in wireless sensor networks. The sensing region is divided into clusters based on the transmitting range of the nodes, and the cluster head is selected by the remaining energy of the node; at any time, only the cluster head is active, which saves energy. Node states change as the target moves. When a target moves into the sensing field, cluster heads are first selected, and then these cluster heads invite their neighboring sensors to form clusters. The trajectory of the target is estimated by a tracking function. The sensor information is passed on to the cluster heads, which, in turn, transmit it to the base station.
Acknowledgments. This work has been supported by the National Natural Science
Foundation of China under Grant No. 60874108 and 60904035, and by Directive Plan
of Science Research from the Bureau of Education of Hebei Province, China.
References
1. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: Wireless sensor networks: a
survey. Computer Networks 38, 393–422 (2002)
2. Maheswararajah, S., Halgamuge, S.K., Premaratne, M.: Sensor Scheduling for Target
Tracking by Suboptimal Algorithms. IEEE Transactions on Vehicular Technology 58(3),
1467–1479 (2009)
3. Hanzalek, Z., Jurcik, P.: Energy Efficient Scheduling for Cluster-Tree Wireless Sensor
Networks With Time-Bounded Data Flows: Application to IEEE 802.15.4/ZigBee. IEEE
Transactions on Industrial Informatics 6(3), 438–450 (2010)
4. Xu, Y., Heidemann, J., Estrin, D.: Geography-informed energy conservation for ad hoc
routing. In: Proceedings of the Seventh Annual ACM/IEEE International Conference on
Mobile Computing and Networking (ACM Mobicom), pp. 70–84 (2001)
5. Chamam, A., Pierre, S.: On the Planning of Wireless Sensor Networks: Energy-Efficient
Clustering under the Joint Routing and Coverage Constraint. IEEE Transactions on Mobile
Computing 8(8), 1077–1086 (2009)
6. Wang, Z., Zhang, J.: Energy Efficiency of Two Virtual Infrastructures for MANETs. In:
IPCCC 2006, pp. 547–552 (2006)
7. Li, X.-Y., Wan, P.-J.: Constructing minimum energy mobile wireless networks. ACM
Journal of Mobile Computing and Communication Reviews 5(4), 55–67 (2001)
8. Blough, D.M., Santi, P.: Investigating upper bounds on network lifetime extension for
cell-based energy conservation techniques in stationary ad hoc networks. In: Proc. of the
8th Annual International Conference on Mobile Computing and Networking, pp. 183–192 (2002)
9. Zebbane, B., Chenait, M., Badache, N., Zeghilet, H.: Topology control protocol for
conserving energy in wireless sensor networks. In: IEEE Symposium on Computers and
Communications, pp. 717–720 (2009)
10. Younis, O., Krunz, M., Ramasubramanian, S.: Node clustering in wireless sensor
networks: recent developments and deployment challenges. IEEE Network 20(3), 20–25
(2006)
11. Chamam, A., Pierre, S.: On the Planning of Wireless Sensor Networks: Energy-Efficient
Clustering under the Joint Routing and Coverage Constraint. IEEE Transactions on Mobile
Computing 8(8), 1077–1086 (2009)
12. Yan, D., Wang, J., Liu, L., Gao, J.: Collaborative signal processing cluster-based in
wireless sensor network. In: WiCOM 2008, vol. 4, p. 4678792 (2008)
13. Yan, D., Wang, J., Liu, L., Wang, B.: An Overlapping Cluster-Based Protocol for Target
Tracking in Wireless Sensor Networks. In: FCC 2010 (2010)
14. Yan, D., Wang, J., Liu, L., Wang, B.: Dynamic cluster formation algorithm target tracking-
oriented. In: 2010 International Conference on Computer Design and Applications, vol. 4,
pp. V4-357–V4-360 (2010)
15. Briers, M., Maskell, S.R., Reece, S., Roberts, S.J., Dang, V.D., Rogers, A., Dash, R.K.,
Jennings, N.R.: Dynamic sensor coalition formation to assist the distributed tracking of
targets: application to wide area surveillance. IEE (2005)
Improved Stack Algorithm for MIMO Wireless
Communication Systems
Li Liu, Jinkuan Wang, Dongmei Yan, Ruiyan Du, and Bin Wang
Abstract. The use of multiple antennas at both the transmitter and the receiver can
increase wireless communication system capacity enormously. The optimal detection
algorithm for MIMO systems is the maximum likelihood detection (MLD) algorithm,
which provides the best bit error rate (BER) performance. However, the computational
complexity of MLD grows exponentially with the number of transmit antennas and the
order of the modulation. An improved MIMO detection algorithm combining the
M-algorithm with the stack algorithm is presented in this paper. The proposed
algorithm is a multistage detection scheme consisting of three parts: MLD, the
M-algorithm, and the stack algorithm. In a MIMO communication system with m
transmit antennas, after QR decomposition of the channel matrix, MLD over the first
L layers is performed. The partial accumulated metrics are calculated and sorted,
producing an ordered set, from which the first M partial symbol vectors are selected
to form a new ordered set. Based on this new set, the stack algorithm searches for
the symbol vector with the minimum accumulated metric. Compared with the original
stack algorithm, the proposed combination reduces the number of sorting operations
and the probability of looking back, and improves the detection performance.
1 Introduction
Using multiple antennas at the transmitter and receiver, multiple-input multiple-output
(MIMO) wireless communication systems have been considered a promising technique for
their potential to significantly increase spectral efficiency and system performance
in rich-scattering multipath environments [1], [2]. The channel capacity increases
linearly with the minimum of the numbers of transmit and receive antennas.
In order to decode symbols corrupted by inter-antenna interference, efficient signal
detection algorithms for MIMO systems have attracted much interest in recent years,
and many detection algorithms have been proposed in the literature [3], [4].
Maximum likelihood detection (MLD) detects all sub-stream symbols jointly by choosing
the symbol vector that maximizes the likelihood function. From the viewpoint of bit
error rate, MLD is the optimal detection scheme.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 592–598, 2011.
© Springer-Verlag Berlin Heidelberg 2011
the i-th antenna, which is chosen from the complex constellation point set Ω. The
element h_ij of H(t) represents the channel gain between the j-th transmit and the
i-th receive antenna at discrete time t.
Assume the channel is a slowly time-varying flat-fading channel and that the channel
state information H is perfectly known at the receiver. For brevity of notation, the
discrete time index t is dropped in the following, so equation (1) can be written as
y = Hx + n (2)
When performing MLD, an exhaustive search has to be done over the whole alphabet set.
ML detection of the transmitted signal can be formulated as
x̂ = arg min_{x ∈ Ω^m} || y − Hx ||^2 (3)
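The exhaustive search in (3) can be sketched as follows; the 2×2 channel, the BPSK alphabet {−1, +1}, and the noise-free received vector are illustrative assumptions:

```python
import itertools
import numpy as np

def mld(y, H, alphabet):
    """Exhaustive ML detection per equation (3): test every candidate
    vector x in Omega^m and return the one minimizing ||y - Hx||^2."""
    m = H.shape[1]
    best, best_metric = None, np.inf
    for cand in itertools.product(alphabet, repeat=m):  # |Omega|^m candidates
        x = np.array(cand)
        metric = np.linalg.norm(y - H @ x) ** 2
        if metric < best_metric:
            best, best_metric = x, metric
    return best

# Hypothetical 2x2 channel, BPSK alphabet {-1, +1}, noise-free for clarity.
H = np.array([[1.0, 0.5],
              [0.2, 1.0]])
x_true = np.array([1, -1])
y = H @ x_true
print(mld(y, H, (-1, 1)))  # recovers x_true
```

The loop visits |Ω|^m candidates, which is exactly the exponential complexity the rest of the paper sets out to avoid.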
Performing the QR decomposition of the channel matrix, H = QR, where

R = [ R_{m×m} ; 0_{(n−m)×m} ] (4)
R_{m×m} is an upper triangular matrix. Noting that Q^H Q = I, after left-multiplying
the received signal by Q^H and ignoring the zero part at the bottom of R, equation (2)
can be rewritten as
y = Rx + n (5)
x̂ = arg min_{x ∈ Ω^m} || y − Rx ||^2 = arg min_{x ∈ Ω^m} Σ_{i=1}^{m} | y_i − Σ_{j=i}^{m} R_{i,j} x_j |^2 (6)
Here x̂ = [x̂_1, x̂_2, ..., x̂_m]. To account for the case when the decision is made on
the symbols from x_m down to x_k, 1 ≤ k ≤ m, we generalize the partial accumulated
metric as
d_k(x̂(k)) = Σ_{i=k}^{m} | y_i − Σ_{j=i}^{m} R_{i,j} x̂_j |^2 (8)
The stack algorithm, using a best-first tree search strategy, is an efficient
algorithm for finding the shortest path from a point to a destination in a weighted
graph: it always visits the best node first, i.e., the node with the minimum
accumulated metric in the list (memory). It starts with the root node and visits the
best node at each step. The best node is then replaced by its children, and the list
is sorted again based on the partial accumulated metrics. This process is repeated
until the best node is a leaf node, which yields the ML estimate. The flow chart of
the original stack algorithm is shown in Fig. 1.
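The best-first expansion described above can be sketched as follows, using the partial metric of equation (8) as the node weight; the 3×3 upper-triangular system and BPSK symbols are illustrative assumptions:

```python
import heapq
import numpy as np

def stack_detect(y, R, alphabet):
    """Best-first (stack) tree search: always expand the node with the
    smallest partial accumulated metric d_k of equation (8); the first
    leaf popped is the ML estimate."""
    m = len(y)

    def branch_metric(level, partial):
        # partial = (x_m, ..., x_level); cost contributed by row `level` of R.
        s = sum(R[level, j] * partial[m - 1 - j] for j in range(level, m))
        return abs(y[level] - s) ** 2

    heap = [(0.0, ())]  # (accumulated metric, partial symbol tuple)
    while heap:
        metric, partial = heapq.heappop(heap)
        level = m - 1 - len(partial)        # next layer to decide (m-1 .. 0)
        if level < 0:
            return np.array(partial[::-1])  # leaf reached: [x_1, ..., x_m]
        for s in alphabet:                  # replace the node by its children
            child = partial + (s,)
            heapq.heappush(heap, (metric + branch_metric(level, child), child))

# Hypothetical noise-free 3x3 upper-triangular system with BPSK symbols.
R = np.array([[1.0, 0.3, 0.2],
              [0.0, 1.0, 0.4],
              [0.0, 0.0, 1.0]])
y = R @ np.array([1, -1, 1])
print(stack_detect(y, R, (-1, 1)))  # recovers [1, -1, 1]
```

Here the heap plays the role of the sorted list (memory); pushing children and popping the minimum corresponds to the sort-and-expand cycle of Fig. 1.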
The original algorithm shown in Fig. 1 is iterative, and each iteration consists of
three basic steps. In the first and third steps, the candidate set needs to be sorted.
Additionally, inferior candidates must be evicted from the union of the previous
candidate set and the new candidates generated by the tree expansion. Because the
accumulated metrics are non-negative, the algorithm tends to look back to nodes with
short paths, which delays detection.
The proposed algorithm is based on the following model, consisting of two concatenated
detection modules, as shown in Fig. 2.
The proposed algorithm, which combines the idea of the M-algorithm with the stack
algorithm, reduces the computational complexity of ML detection through the newly
adopted parameters L and M. The trade-off between performance and complexity can be
adjusted by setting these parameters to different values, providing a proper
performance-to-complexity ratio.
The algorithm consists of four steps: forming the tree structure, performing ML
detection over the first L layers, selecting the first M symbol vectors, and
performing the stack algorithm.
In the tree-structure forming step, which also serves as initialization, the QR
decomposition of the channel matrix is performed first using the perfect knowledge of
the channel state information; after left-multiplying the received signal by Q^H and
ignoring the zero part at the bottom of R, the vector y and the matrix R are obtained.
This step forms a proper tree structure for the tree search.
In the second step, an exhaustive search over the first L layers is performed; the
partial accumulated metrics are computed according to equation (8), and the partial
symbol vectors are ordered by these metrics.
In the third step, the first M symbol vectors at the top of the ordered set are
selected, and the remaining candidates with larger accumulated metrics are deleted,
forming a new ordered set. Finally, stack detection is executed on the new ordered
set, and the symbol vector with the minimum accumulated metric is found and output as
the estimate of the transmitted symbols.
These steps reduce the number of sorting operations and the number of look-backs of
the original stack detection algorithm, which decreases the detection delay. The
trade-off between detection performance and computational complexity can be tuned by
adjusting the parameters L and M.
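The four steps above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the tree-forming step is assumed already done (y and the upper-triangular R are given), and the small system, BPSK alphabet, and choices L = 1, M = 2 are illustrative:

```python
import heapq
import itertools
import numpy as np

def m_stack_detect(y, R, alphabet, L, M):
    """Multistage sketch: exhaustive search over the first L layers
    (from layer m downwards), keep only the M best partial vectors,
    then run the stack (best-first) search seeded with that list."""
    m = len(y)

    def metric(partial):
        # Partial accumulated metric of equation (8); partial = (x_m, x_{m-1}, ...).
        total = 0.0
        for level in range(m - 1, m - 1 - len(partial), -1):
            s = sum(R[level, j] * partial[m - 1 - j] for j in range(level, m))
            total += abs(y[level] - s) ** 2
        return total

    # Stage 1: exhaustive over L layers, sort, keep the first M candidates.
    cands = [tuple(c) for c in itertools.product(alphabet, repeat=L)]
    heap = sorted((metric(c), c) for c in cands)[:M]
    heapq.heapify(heap)

    # Stage 2: stack search continued from the surviving partial vectors.
    while heap:
        _, partial = heapq.heappop(heap)
        level = m - 1 - len(partial)
        if level < 0:
            return np.array(partial[::-1])
        for s in alphabet:
            child = partial + (s,)
            heapq.heappush(heap, (metric(child), child))

# Hypothetical noise-free 3x3 system, BPSK, keeping M = 2 paths after L = 1 layer.
R = np.array([[1.0, 0.3, 0.2],
              [0.0, 1.0, 0.4],
              [0.0, 0.0, 1.0]])
y = R @ np.array([-1, 1, 1])
print(m_stack_detect(y, R, (-1, 1), L=1, M=2))  # recovers [-1, 1, 1]
```

Discarding all but M partial vectors after the first L layers is what shrinks the list the stack stage must keep sorted, at the risk of pruning the ML path when M is too small.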
4 Conclusions
References
1. Foschini, G.J., Gans, M.: On limits of wireless communications in a fading environment
when using multiple antennas. Wireless Personal Communications 6(3), 311–335 (1998)
2. Telatar, E.: Capacity of multi-antenna Gaussian channel. Europ. Trans. Telecommun. 10,
585–595 (1999)
3. Chun, K.L., Chun, J.: ML Symbol Detection Based on the Shortest Path Algorithm for
MIMO Systems. IEEE Trans. on Signal Processing 54(11), 5477–5484 (2007)
4. Xie, Y., Li, Q., Georghiades, C.N.: On Some Near Optimal Low Complexity Detectors for
MIMO Fading Channels. IEEE Trans. on Wireless Communications 6(4), 1182–1186
(2007)
5. Chin, W.H.: QRD based tree search data detection for MIMO communication systems. In:
Proc. IEEE VTC, June 2005, vol. 3, pp. 1624–1627 (2005)
6. Baek, M.-S., You, Y.-H., Song, H.-K.: Combined QRD-M and DFE detection technique
for simple and efficient signal detection in MIMO-OFDM systems. IEEE Transactions on
Wireless Communications 8(4), 1632–1638 (2009)
7. Kim, K.J., Yue, J., Iltis, R.A., Gibson, J.D.: A QRD-M/Kalman filter-based detection and
channel estimation algorithm for MIMO-OFDM systems. IEEE Trans. on Wireless
Communications 49(10), 2389–2402 (2005)
8. Choi, J., Hong, Y., Yuan, J.: An approximate MAP-based iterative receiver for MIMO
channels using modified sphere detection. IEEE Trans. on Wireless Communications 5(8),
2119–2126 (2006)
9. Wang, R.Q., Giannakis, G.B.: Approaching MIMO channel capacity with soft detection
based on hard sphere decoding. IEEE Trans. on Communications 54(4), 587–590 (2006)
10. Dai, Y.M., Yan, Z.Y.: Memory-Constrained Tree Search Detection and New Ordering
Schemes. IEEE Journal of Selected Topics in Signal Processing 3(6), 1026–1037 (2009)
11. Kim, T.H., Park, I.C.: High-Throughput and Area-Efficient MIMO Symbol Detection
Based on Modified Dijkstra’s Search. IEEE Transactions on Circuits and Systems I:
Regular Papers 57(7), 1756–1766 (2010)
The Cooperative Hunting Research of Mobile Wireless
Sensor Network Based on Improved Dynamic Alliance*
1 Introduction
Mobile wireless sensor networks can be used in many applications, such as industrial
detection, emergency rescue, military reconnaissance, and unmanned remote control.
They have many advantages over fixed wireless sensor networks, such as mobility and
repairability. Cooperation technology is useful for improving the performance of
mobile wireless sensor networks.
Dynamic alliance, based on an event-triggering mechanism, is widely applied to
cooperation in fixed wireless sensor networks and multi-robot systems because of its
low communication overhead and real-time merits. Many dynamic alliance approaches
have been proposed. For example, Han Li et al. use an ant colony parallel algorithm
to form multi-robot alliances [1]. Soh L.K. et al. first adopted a dynamic alliance
mechanism based on case reasoning to solve the task allocation problem for wireless
sensor networks [2]. Liu Mei et al. proposed a dynamic alliance formation approach
based on dynamic planning and eliminated the conflicts over sensor resources among
multiple dynamic alliances [3]. Chen Jian-xia et al. proposed a dynamic alliance
mechanism based on auction [4] and an alliance member updating scheme based on
* This work was supported by the National “863” Project under Grant 2007AA04Z224 of
China.
** Corresponding author.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 599–605, 2011.
© Springer-Verlag Berlin Heidelberg 2011
The mobile sensor nodes carry out random search in an area to find the target at the
cooperative hunting stage. When node i, or any other mobile node, detects target T,
this is treated as an event. Mobile node i, which found target T, sponsors an alliance
ALi. To save resources and prevent other nodes from sponsoring alliances repeatedly,
the sponsor node i sends a broadcast message telling its neighbor nodes that an
alliance has been established. Each node that receives the alliance-built message
replies with its own information, such as position and mission number, and the sponsor
node i creates the neighbor node set Ne.
The sponsor node i chooses one predetermined hunting strategy S according to the
environment information. S includes the task type Ty, the member number Num, the
required capability Ab, etc. For example, a pentagon hunting strategy can be set as:
five members, a maximum moving speed > 2 m/s, and a minimum distance of 5 m between
node i and target T.
Then the sponsor node uses its neighbor information, the data information, and
weighted coefficients to calculate the weighted sum value SUMj of each candidate node
j. The data information includes the remaining energy Enj, the maximum speed Vj, the
flag Joj of joining the alliance, the distance Dj to the target, the task number
Tnumj, and the number of rounds Rnumj in which the node declined to join an alliance.
Each item has an effect factor, denoted Ref1 to Ref6. The sum value SUMj of neighbor
node j can be calculated as

SUMj = Enj·Ref1 + Joj·Ref2 + Dj·Ref3 + Tnumj·Ref4 + Rnumj·Ref5 + Vj·Ref6 (1)
The sum values SUMj are sorted in descending order, and the candidate alliance member
set ALi is selected according to the member number required by the task.
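The scoring and ranking can be sketched as follows. Which effect factor pairs with which data item is not fully recoverable from the paper (only Vj with Ref6 is fixed), so the tuple ordering, sample values, and factors below are assumptions:

```python
def rank_candidates(neighbors, ref):
    """Weighted-sum scoring in the spirit of equation (1), followed by a
    descending sort. neighbors maps node id -> (En, Jo, D, Tnum, Rnum, V);
    ref holds the six effect factors Ref1..Ref6."""
    scores = {j: sum(w * x for w, x in zip(ref, data))
              for j, data in neighbors.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical neighbor data and effect factors (negative factors penalize
# distance, current task load, and past refusals to join).
neighbors = {1: (0.9, 1, 0.2, 0, 0, 2.5),
             2: (0.4, 1, 0.5, 1, 2, 1.5),
             3: (0.8, 0, 0.3, 0, 1, 3.0)}
ref = (1.0, 0.5, -1.0, -0.5, -0.2, 0.3)
order = rank_candidates(neighbors, ref)
print(order)  # [1, 3, 2]: take the top Num nodes as candidate members
```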
The sponsor node calculates the positions of the alliance members according to the
selected hunting strategy S and the member number required by the strategy. Fig. 1
shows the target set built from R and the hunting strategy S.
The target set {Tar1, ..., Tar5} and the current member positions are then passed to
the task allocation function, a genetic algorithm (GA). The GA computes the total
moving distance of the alliance members, and the assignment between target positions
and candidate members is obtained by minimizing this total moving distance.
The sponsor node calculates its arrival time at its target point from its own average
speed and its distance to that target point. Then the sponsor node sends an alliance
request Qj to every candidate alliance member to form the alliance, where:
Qj = [sponsor node ID, task type, target position information, arrival time].
If the sponsor node receives an alliance request from another node at the initial
stage, it abandons its own alliance sponsorship and executes the negotiation protocol
in the alliance formation stage as a candidate member.
The alliance members use the received target positions, their own current positions,
and the arrival time to calculate their required average moving speeds.
Because each node in a mobile wireless sensor network is mobile, the node positions
change constantly, and the more nodes there are, the more negotiation turns are
needed. Therefore, if the negotiation process is too long, some candidate alliance
members or the target may move too far during the negotiation; in that case, the
results calculated at the alliance initialization stage lose their practical
significance.
To improve the dynamic alliance in mobile wireless sensor networks, the process of
alliance formation should be as simple as possible. Define the minimum time of a
single negotiation as Tmin, the alliance member number as Al_num, and the number of
negotiation turns between the sponsor and a single member as Neg_num1. The total
negotiation time is then

Neg_time = Tmin · Al_num · Neg_num1 (2)

where Al_num and Tmin are constant; therefore the only way to speed up the negotiation
process is to decrease Neg_num1.
If two-thirds of the candidate members accept the alliance, the dynamic alliance is
formed and a confirmation message is sent to the members; otherwise, the attempt to
sponsor a dynamic alliance fails.
If the alliance is formed, the sponsor requires the members to execute their promised
tasks; otherwise, it requires them to discard their promised tasks. During the
negotiation process, if a member that negotiated successfully does not receive the
confirmation message within N cycle times, it discards the promise unilaterally and
releases its resources. N depends on the number of alliance members, the maximum time
of a single negotiation, and so on.
In the hunting application of mobile wireless sensor networks, the whole alliance
keeps hunting the target and tries to encircle it. The alliance may encounter
obstacles, or the target may move irregularly, during the hunting process. The
alliance members should therefore detect the target position, their own positions,
moving directions, etc. at intervals. The alliance sponsor uses this information to
estimate the moving direction of the target, adjust the hunting strategy, send the
next-step information to every alliance member, and lead the movement of the alliance
members.
The process could be described as follows.
1) Moving law forecasting of the target
The moving law of an object can be described by a quadratic function
s = a1·t^2 + a2·t + a3, where t is time and s is displacement. In this paper, the five
most recent position data of the target are used to estimate its next position. The
estimation approach, based on the least squares method, fits the quadratic function
piecewise to estimate the position of the target. The estimation process is shown in
Fig. 2.
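The piecewise fit can be sketched with a least-squares polynomial fit; the sample track values are illustrative:

```python
import numpy as np

def predict_next(times, positions, t_next):
    """Fit s = a1*t^2 + a2*t + a3 to the recent target positions by least
    squares and extrapolate to time t_next (one coordinate at a time)."""
    a = np.polyfit(times, positions, deg=2)   # least-squares quadratic fit
    return np.polyval(a, t_next)

# Hypothetical track: one coordinate of the target at five past instants,
# generated from s = 0.5*t^2 + t.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
s = np.array([0.0, 1.5, 4.0, 7.5, 12.0])
print(predict_next(t, s, 5.0))  # close to 17.5
```

Running the same fit on each coordinate of the five stored positions yields the predicted next position of the target.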
2) Updating the positions of alliance members
Because the target is mobile, the positions of the alliance members need to be updated
in real time. Every alliance member updates its goal position according to the
estimated coordinates at every cycle.
3) Leaving or disbanding the alliance
If, after N cycle periods, the sponsor has not received the position of a node, that
node is considered to have left the alliance, and no further position information is
sent to it. On the other hand, the sponsor may recruit new alliance members according
to the environment; if needed, new alliance members are added by the process of
A, B, C.
If the number of alliance members falls below one-third of the number required by the
task during N cycle periods, the sponsor discards the task, disbands the alliance, and
notifies every alliance member.
If the task is completed, the sponsor sends a message to disband the alliance. Each
alliance member stops executing the alliance task and flushes its task list, task
number, etc. On receiving the disbanding message from the sponsor, the alliance is
disbanded.
3 Hunting Strategy
Three basic tasks are included in the hunting strategy: searching for the target
position, surrounding the target, and shrinking the circle.
Mobile node i randomly selects a direction vector [a, b] in a definite area (a and b
are random values between 0 and 1) and searches for the target position along this
direction vector.
This process is executed at the initial stage, or whenever the target is lost during
the hunting process.
When a mobile node first finds the target, it sponsors the alliance and starts the
surrounding process. To keep the mobile nodes from getting too close to the target,
the alliance members form a large envelope circle around the target and distribute
themselves evenly on it.
When all the alliance members are within range of the target and the radius R of the
envelope circle is bigger than the distance required for hunting, the alliance sponsor
sends a shrinking command to update the goal position of every member. If some nodes
cannot reach their goal positions within the promised time during this process, a gap
may occur on the envelope circle. In that case, the surrounding behavior is executed
again and the envelope circle is enlarged.
4 Simulation
Many mobile nodes are placed at the initialization site in Fig. 3. When a node detects
the target, it sponsors the dynamic alliance as the sponsor. It then chooses some
mobile nodes to form an alliance and, after the calculation, allocates tasks to these
nodes in Fig. 4. In this simulation, five nodes were selected to hunt the target
according to the pentagon hunting strategy in Figs. 5-7. Fig. 8 shows the alliance
members' status when the procedure stops: the alliance members evenly surround the
target.
Fig. 3. Starting to hunt the target at the initialization site Fig. 4. Starting to hunt the target
Fig. 5. Beginning to surround the target Fig. 6. Beginning to shrink the envelope circle
Fig. 7. The whole process of the dynamic alliance Fig. 8. The alliance members have hunted the target
5 Conclusion
In this paper, an improved dynamic alliance mechanism was proposed to solve the
cooperative hunting of a single target in mobile sensor networks. The simulation
results indicate that the improved dynamic alliance mechanism is suitable for the
cooperative hunting task of mobile wireless sensor networks.
Acknowledgment
We would like to express our gratitude to all the colleagues in our laboratory for their
assistance.
References
1. Han, L., Xu, L., Dang, C.: Ant Colony Parallel Algorithm Based Multi-Robots Alliance
Generation. In: Proceedings of the 7th World Congress on Intelligent Control and
Automation, Chongqing, China, pp. 364–365 (June 2008)
2. Soh, L.K., Tsatsoulis, C.: Reflective Negotiating Agents for Real-Time Multisensor
Target Tracking. In: Proceedings of the International Joint Conference on Artificial
Intelligence (IJCAI 2001), pp. 1121–1127 (2001)
3. Liu, M., Li, H.-h., Shen, Y.: Research on Task Allocation Technique for Aerial Target
Tracking Based on Wireless Sensor Network. Journal of Astronautics 28(4), 960–965,
971 (2007)
4. Chen, J.-x., et al.: A Dynamic Task Allocation Scheme for Wireless Sensor Networks.
Information and Control 35(2), 189–192, 200 (2006)
5. Chen, J., Zang, C., Liang, W., Yu, H.: Auction-Based Dynamic Alliance for Single Target
Tracking in Wireless Sensor Networks. In: Proceedings of the World Congress on
Intelligent Control and Automation (WCICA), pp. 94–98 (2006)
6. Chen, J.-x., Yu, H.-b.: An Updating Scheme of the Working Dynamic Alliance for
Collaborative Task Allocation in Wireless Sensor Networks. Chinese Journal of Sensor
and Actuators 22(4), 499–504 (2009)
7. Jiang, J., Yan, J.-h., et al.: Heterogeneous multi-robot alliance formation based on artificial
immune system. Computer Engineering and Design 30(9), 2246–2248, 2253 (2009)
8. Zhou, P.-c., Hong, B.-r., Wang, Y.-h.: Multi-robot Cooperative Hunting Under Dynamic
Environment. ROBOT 27(4) (July 2005)
9. Zhang, S., Zhang, Z., Zhu, J.-C.: Dynamic Alliance based on Genetic Algorithm in
Wireless Sensor Networks. Computer Science 35(4), 20–22, 50 (2008)
10. Chen, J.-x., et al.: Multiple Targets Tracking Oriented Collaborative Task Allocation
Scheme in Wireless Sensor Networks. Information and Control 38(4), 412–416 (2009)
11. Liu, M., Li, H., et al.: Elastic neural network method for multi-target tracking task
allocation in wireless sensor network. Computers and Mathematics with Applications 57,
1822–1828 (2009)
Identification of Matra Region and Overlapping
Characters for OCR of Printed Bengali Scripts
Abstract. One of the important reasons for poor recognition rate in optical
character recognition (OCR) system is the error in character segmentation. In
case of Bangla scripts, the errors occur due to several reasons, which include
incorrect detection of matra (headline), over-segmentation and under-
segmentation. We have proposed a robust method for detecting the headline
region. Existence of overlapping characters (in under-segmented parts) in
scanned printed documents is a major problem in designing an effective
character segmentation procedure for OCR systems. In this paper, a predictive
algorithm is developed for effectively identifying overlapping characters and
then selecting the cut-borders for segmentation. Our method can be successfully
used to achieve high recognition results.
1 Introduction
In the field of OCR technology, researchers have noted that a large number of
recognition errors in OCR systems are due to the character segmentation errors
[1, 2, 4]. In the case of Bangla scripts, a major reason is incorrect detection of the
matra region. Previous techniques have used the knowledge that the headline (the row
with the maximum number of black pixels) runs along the middle of the actual matra
region. However, this may not be true for all fonts. To increase the accuracy, we have
designed an algorithm using the fact that the number of black pixels in the rows above
and below the matra region falls drastically.
The existence of overlapping (Fig. 1-a) and touching (Fig. 1-b) characters is the
other major problem that arises during segmentation and results in under-segmentation.
Another possible reason for under-segmentation is the presence of compound characters,
called “Juktakshar” (Fig. 1-c) in Bangla. A number of strategies for
segmenting touching characters in various Indian scripts are available in the literature
[3, 5]. Here we propose an algorithm to detect whether the cause of under-segmentation
is overlapping characters in the case of Bengali script. Our algorithm extends further
to separate the detected overlapping characters in this script. To the best of our
knowledge, no similar study is available for printed Bangla script.
Overlapping characters are different from touching characters and compound
characters in a way that in the first case, consecutive characters do not touch each
other.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 606–612, 2011.
© Springer-Verlag Berlin Heidelberg 2011
However, a portion of each of the two overlapping characters lies in the same
vertical column. As a result, we cannot find any cut-column between the two
characters. Since in the process of character segmentation, we separate characters by
generating cut-columns, overlapping characters have to be dealt with in a special way.
As noted earlier, overlapping can be avoided by segmenting the characters through
connected component analysis, but this has comparatively high time complexity.
Examples of characters of the Bangla script are shown in Fig. 2. For a large number
of characters (basic as well as compound), there exists a horizontal line at the upper
part, called the “matra” in Bangla. The characters of a word are actually connected
through the matra (see Fig. 3). Many characters, including vowel
modifiers have vertical strokes. The punctuation mark for period or full stop is also a
vertical line.
Text digitization is done by a flatbed scanner with a resolution between 100 and 600
dpi. The digitized images are usually in gray tone and need to be converted to
two-tone images; this step is called binarization. Various binarization methods are
available, such as Otsu's method, Sauvola's method, and Bernsen's method. When a
document page is fed to the scanner, it may be skewed by a few degrees. Researchers
have proposed a wide variety of skew detection algorithms based on projection
profiles, the Hough transform, nearest-neighbor clustering, docstrum analysis, line
correlation, etc.
Generally, the text lines are detected by finding the valleys of the projection
profile computed by a row-wise sum of black pixels. This profile shows large values
for the matra of an individual text line. The position between two consecutive matras,
where the projection profile height is minimum, denotes the boundary between two text
lines.
A text line may be partitioned into three zones. The upper zone denotes the portion
above the matra, the middle zone covers the portion of basic and compound
characters below the matra, and the lower zone mainly contains some vowel and
consonant modifiers. We call the imaginary line separating the middle and lower
zone, the base line. A typical zoning is shown in Fig. 3.
In existing methods of matra region detection, the single row of pixels containing
the maximum number of black pixels, i.e., the peak of the histogram for that line, is
considered the headline (Fig. 3). It is generally assumed that the headline runs along
the midline of the matra region, and a certain number of pixel rows above and below
the headline are taken as the matra region. However, the peak of the histogram may not
pass through the exact middle row of the actual matra region. Moreover, the width of
the matra region may vary with the font, so the proper region may not be detected
correctly.
Here we present a robust method for detecting the matra region accurately. In a text
line, the rows above and below the matra region contain far fewer black pixels than
the rows of the matra region itself. Let f(x) denote the number of black pixels in
row x. The first-order difference f(x+1) − f(x) then gives a positive peak where the
matra starts and a negative peak where the matra region ends. For example, the numbers
of black pixels in the rows of the upper half of a text line are

0 0 0 0 13 16 19 19 14 11 10 10 13 18 110 219 249 259 218 167 145 157 172 188 179 163

and their first-order differences are

0 0 0 13 3 3 0 -5 -3 -1 0 3 5 92 109 30 10 -41 -51 -22 12 15 16 -9 -16
Thus we can find the boundaries of the matra region with the following algorithm.

Let row u be the upper boundary of the text line,
row b the baseline of the text line, and
row h the imaginary headline.
H[i] is the number of black pixels in the i-th row of that line.
Calculate the midline m of the text line as
    m = h + (b - h) / 2
k = 0
For i = u to m - 1:
    D[k] = H[i+1] - H[i]
    k = k + 1
End
Find the j, 0 <= j < k, for which D[j] is maximum;
then row u + j + 1 is the upper boundary of the matra region.
Find the p, 0 <= p < k, for which D[p] is minimum;
then row u + p is the lower boundary of the matra region.
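A minimal runnable version of the algorithm, applied to the example row counts given above (row indices are relative to the upper boundary u of the text line):

```python
def matra_region(row_counts):
    """Locate the matra region from per-row black-pixel counts covering
    the rows from the top of the text line down to its midline.
    Returns (top, bottom) row indices of the matra region."""
    # First-order differences: the largest increase marks the row where
    # the matra starts, the largest decrease the row where it ends.
    d = [row_counts[i + 1] - row_counts[i] for i in range(len(row_counts) - 1)]
    j = max(range(len(d)), key=lambda k: d[k])  # biggest positive jump
    p = min(range(len(d)), key=lambda k: d[k])  # biggest negative drop
    return j + 1, p  # rows j+1 .. p belong to the matra

# Per-row black-pixel counts from the example in the text.
counts = [0, 0, 0, 0, 13, 16, 19, 19, 14, 11, 10, 10, 13, 18,
          110, 219, 249, 259, 218, 167, 145, 157, 172, 188, 179, 163]
print(matra_region(counts))  # (15, 18): the rows with counts 219..218
```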
The virtual pixel projection profile method for character segmentation treats
overlapping or touching characters as a single character and leads to
under-segmentation, which in turn causes failures in the character recognition phase.
Due to font style, inferior printing, or bad paper quality, a document may contain a
number of overlapping (Figure 7-a) or touching (Figure 7-c) characters. Table 1 gives
an idea of the frequency of overlapping characters in a document. Under-segmented
characters generally have an aspect ratio (width/height) larger than that of single
isolated characters. Another reason for under-segmentation is “Juktakshar”
(Figure 7-b).
We start from the point where the midline of the under-segmented character first
touches the leftmost character and run along the boundary of that character until the last
column (the rightmost column of the character segment) is reached. Figure 7 shows
this by marking the character segment's boundary with grey pixels. However, if we
reach the matra region before reaching the last column, we know that the
under-segmentation is due to overlapping characters (Figure 7-a). Otherwise (Figure 7-b
& 7-c) the reason for under-segmentation is not overlapping. Our next job is to deal
with the separation of the overlapping characters.
We obtain the cut-boundary by starting from the lowermost point of the left
character and following its right-hand boundary upwards, and can thus separate the two
overlapped characters along the cut-boundary.
5 Conclusion
A robust technique for accurately detecting the matra in Bangla scripts has been
developed. To the best of our knowledge, this is the first matra region detection
technique of high accuracy. A method for identifying overlapping characters and
generating a cut-boundary for segmenting them has also been
proposed here. The algorithms can be suitably modified for segmentation in other
Indian scripts that contain a matra (e.g., Devanagari, Gurmukhi). It is likely that this
approach will find application in other document processing problems.
Acknowledgement
We are grateful to Prof. Bidyut Baran Chaudhuri, CVPR Unit, Indian Statistical
Institute, for his support, and we would like to acknowledge the support and grants
provided by the Department of IT, Govt. of India, New Delhi, India.
References
1. Impedovo, S., Ottaviano, L., Occhinegro, S.: Optical character recognition-A survey.
International Journal of Pattern Recognition and Artificial Intelligence 5, 1–23 (1991)
2. Mori, S., Suen, C.Y., Yamamoto, K.: Historical review of OCR research and development.
Proceedings of IEEE 80(7), 1029–1058 (1992)
3. Garain, U., Chaudhuri, B.B.: On recognition of touching characters in printed Bangla
documents. In: Chaudhury, S., Nayar, S.K. (eds.) Proc. Indian Conf. Computer Vision,
Graphics, Image Processing, Delhi, India, pp. 377–380 (1998)
4. Chaudhuri, B.B., Pal, U.: A complete printed Bangla OCR system. Pattern Recognition 31,
531–549 (1998)
5. Bansal, V., Sinha, R.M.K.: Segmentation of Touching Characters in Devanagari in
Computer Vision Graphics and Image Processing. In: Chaudhury, S., Nayar, S.K. (eds.)
Recent Advances, pp. 371–401. Viva Books Private Limited (1999)
New Energy Listed Companies Competitiveness
Evaluation Based on Modified Data Envelopment
Analysis Model
Abstract. This paper introduces an ideal decision-making unit with minimum
input and maximum output on the basis of the traditional data envelopment
analysis (DEA) method, viewed from the point of efficiency. The DEA evaluation
model is modified so that the decision-making units can be fully evaluated and
ranked. The modified DEA model is applied to the competitiveness evaluation of
representative listed companies in China's new energy industry.
Through the analysis of data on 11 new energy listed
companies, the results indicate the effectiveness of this method, reflect the
competitiveness of new energy listed companies, and provide relevant decision-
making information for new energy industry operation.
1 Introduction
Continuously improving the competitiveness of new energy industries and ensuring
the rapid, sustained and healthy development of the new energy sector has important
practical significance and research value.
This paper establishes a modified data envelopment analysis (MDEA) model to
evaluate the competitiveness of representative listed companies in the new energy
industry and to provide relevant decision-making information for new energy industry
operation, with strong reliability and practicality.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 613–618, 2011.
© Springer-Verlag Berlin Heidelberg 2011
614 C. Gao, Z. Fan, and J.-z. Zhang
data analysis, based on the "Pareto optimal" concept, we determine whether a DMU is
DEA-efficient, that is, whether the DMU lies essentially on the "front surface" of the
production possibility set.
Suppose there are n decision-making units (DMUs), each with m types of input and
s types of output. The input vector of the jth DMU is x_j and its output vector is y_j.
The CCR model is the first and most widely used DEA model for evaluating the
technical efficiency of DMUs. To judge the effectiveness of decision-making units
conveniently, we choose the CCR model with a non-Archimedean infinitesimal:
$$\min\ \big[\,\theta - \varepsilon\,(e_1^{T} s^{-} + e_2^{T} s^{+})\,\big] \qquad (1)$$

$$\text{s.t.}\quad
\begin{cases}
\sum_{j=1}^{n} x_j \lambda_j + s^{-} = \theta x_0 \\
\sum_{j=1}^{n} y_j \lambda_j - s^{+} = y_0 \\
\lambda_j \ge 0,\ j = 1, 2, \dots, n \\
s^{+}, s^{-} \ge 0.
\end{cases} \qquad (2)$$

Here $e_1^{T} = (1,1,\dots,1)^{T} \in E^{m}$ and $e_2^{T} = (1,1,\dots,1)^{T} \in E^{s}$;
$\varepsilon$ is the non-Archimedean infinitesimal, and $s^{-}$ and $s^{+}$ are slack variables.

Assume that the optimal solution of the linear program above is
$\lambda_j^{*}, s^{+*}, s^{-*}, \theta^{*}$; according to these results we determine the
input-output efficiency:
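The CCR program (1)-(2) for one DMU is a small linear program that any LP solver can handle. A minimal sketch using SciPy (the variable layout and helper name are ours, not from the paper; column j of X and Y holds DMU j's inputs and outputs):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0, eps=1e-6):
    """Input-oriented CCR efficiency theta* of DMU j0.

    X: (m, n) input matrix, Y: (s, n) output matrix; columns are DMUs.
    Decision vector: [theta, lambda_1..lambda_n, s_minus (m), s_plus (s)].
    """
    m, n = X.shape
    s = Y.shape[0]
    # objective: min theta - eps * (sum s_minus + sum s_plus)
    c = np.concatenate(([1.0], np.zeros(n), -eps * np.ones(m + s)))
    A_eq = np.zeros((m + s, 1 + n + m + s))
    b_eq = np.zeros(m + s)
    # sum_j x_j lambda_j + s_minus - theta * x_j0 = 0
    A_eq[:m, 0] = -X[:, j0]
    A_eq[:m, 1:1 + n] = X
    A_eq[:m, 1 + n:1 + n + m] = np.eye(m)
    # sum_j y_j lambda_j - s_plus = y_j0
    A_eq[m:, 1:1 + n] = Y
    A_eq[m:, 1 + n + m:] = -np.eye(s)
    b_eq[m:] = Y[:, j0]
    bounds = [(None, None)] + [(0, None)] * (n + m + s)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[0], res.x[1:1 + n]  # theta*, lambda*
```

As a sanity check, with two DMUs that produce the same output from inputs 2 and 4, the second is rated theta* = 0.5, meaning it could shrink its input by half and still reach the efficient frontier.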
The idea of this method is to introduce an ideal DMU whose input is the minimum
and whose output is the maximum over all units. This ideal DMU is undoubtedly
DEA-efficient relative to the others. The weights that maximize its efficiency index are
relatively reasonable, and they serve as the common (public) weights. By computing
the relative efficiency index of every decision-making unit with these public weights
and comparing it with the ideal decision-making unit, we can rank all the
decision-making units. Because the ideal DMU takes the minimum input and maximum
output of all decision-making units, it represents an ideal state that may not actually
exist. However, DEA efficiency is a relative concept; using the ideal DMU we can
derive a reasonable set of public weights for all decision-making units and rank them
using it as a reference.
3 Empirical Study
There are 56 listed companies in China's new energy sector, involved in photovoltaic
power generation, biomass power, wind power equipment manufacturing, solar
thermal utilization and so on. Excluding the ST and *ST series, we select 11
representative companies in the industry as the research samples (accounting for
57% of new energy listed companies).
As the enterprises are in different professions, in order to conduct the research
objectively, in this paper we define a company as an organization that uses its
available resources, through production and operation, to make an economic profit.
The input indicators are: total assets, current assets, total number of employees,
annual output value and research funding; the output indicators are: net profit,
return on assets, research patents, number of sales areas and customer satisfaction.
The original data were input into the DEAP 2.1 software; the evaluation results are
shown in Table 1.
Table 1 shows that 10 of the DMUs are DEA-efficient, indicating that the
companies' overall level is high. DMU11 is non-DEA-efficient. The information
provided by the results above is as follows:
1) $\sum_{j=1}^{n} \lambda_j^{*} = 0.48$ shows that the economies of scale are
increasing; that is, if the company increases its investment, the output indicators will
increase correspondingly.

2) $s_i^{-*} \ne 0$ for DMU11 shows that the corresponding input indicators are
relatively excessive or inefficiently used; that is, $s_2^{-*} = 1778$, $s_3^{-*} = 4.37$,
$s_4^{-*} = 111$, $s_5^{-*} = 39.9$ indicate that the corresponding inputs are relatively
excessive, and it is necessary to make full use of the available resources to produce more.

3) $s_i^{+*} \ne 0$ for DMU11 also indicates that the corresponding output indicators
need to be strengthened: $s_2^{+*} = 20.3$, $s_5^{+*} = 20.6$ show that, compared with
other companies, the corresponding outputs need substantial improvement; the
company can formulate corresponding policies to improve the present status.
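The returns-to-scale reading in item 1) follows the standard rule of thumb: a sum of the optimal intensities below 1 indicates increasing returns, equal to 1 constant returns, and above 1 decreasing returns. A one-line check (our own helper, for illustration):

```python
def returns_to_scale(lambdas, tol=1e-9):
    """Classify returns to scale from the optimal CCR intensities lambda*."""
    total = sum(lambdas)
    if abs(total - 1.0) < tol:
        return "constant"
    return "increasing" if total < 1.0 else "decreasing"
```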
From the above calculation we can see that it is difficult to rank decision-making
units directly when several units are DEA-efficient at the same time. For such
ranking, the traditional DEA method alone is not enough, so we rank the
decision-making units according to the improved DEA method: ruling out the
non-DEA-efficient decision-making unit DMU11 and introducing the ideal
decision-making unit DMU12 (see Table 2).
A new DEA model is built with the 10 remaining efficient decision-making units.
The method is based on the following principle: the weights are determined with the
goal of maximizing the efficiency of the ideal DMU, and the ideal DMU solved for in
this way must be the most effective, i.e., its efficiency index must be 1. As a result,
the optimal weights obtained are those under which the ideal DMU is efficient. Since
the ideal DMU is indeed efficient relative to the other decision-making units, it is
reasonable to apply the weights calculated in this sense to all decision-making units,
which avoids the shortcoming of the traditional DEA model, in which the weights
calculated separately for each DMU are one-sided and not universally applicable.
According to the efficiency index of each decision-making unit, the units can then be
ranked; following the above-mentioned method and data, the decision-making units
and their efficiency indices are given in Table 3.
4 Conclusion
MDEA models are used extensively in production-management effectiveness
analysis, and the method has high validity and reliability. This paper solves and
calculates the model; the results can, to some extent, reflect the current
competitiveness of new energy companies' operations and provide useful information
and guidance for decision-makers.
References
1. Guo, Y.J.: Comprehensive Evaluation Theory and Method. Science Press, Beijing (2002)
2. Wang, X., Cui, Y., Yang, C.: Efficiency Evaluation for University Laboratory Based on
Multi-layer SVM. In: Proceeding of ICNC 2007 Conference, Haikou, August 2007, pp.
558–561 (2007)
3. Wang, J.-m., Chen, L.-r.: Using DEA to measure the relative efficiency of DSM
implementation. In: International Conference on Power System Technology (September
2006)
4. Czajkowski, Mu, X.-z., Liu, B.-y.: The research of new energy and renewable energy
development and industrialization (2009)
5. Liu, B.: Development and countermeasures of new energy technologies in China. Journal
of Liaoning University of Technology (Social Science Edition) (2), 30–33 (2009)
6. Shim, W.: Applying DEA technique to library evaluation in academic research libraries.
Library Trends 51(3), 312–332 (Winter 2003)
A SMA Actuated Earthworm-Like Robot
Y.K. Wang, C.N. Song, Z.L. Wang, C. Guo, and Q.Y. Tan
1 Introduction
620 Y.K. Wang et al.
the rear of the robot, respectively, as the setae of earthworms for temporary stopping.
Both the locomotion and clamping mechanisms are mimicked from the earthworm.
Fig. 2 shows the turning movement principle of the proposed robot. The difference
from the linear motion principle is that only the two SMA wires on one side are
actuated. As the steps from (a) to (d) in Fig. 2 are repeated, the turning motion of the
robot is achieved.
SMA actuator. Fig. 3 is the schematic analysis of the linear motion. The SMA wires
have an original length of LSO and the bias spring has an original length of LBO. The
bias spring rate is KB, which is almost independent of temperature change.
When the spring is assembled at the normal temperature, the SMA wires which have
low spring rate, KL, are elongated by LE-LSO and the bias spring is compressed by
LBO-LE. And then, the potential state is in equilibrium at the point of PL. When the
temperature of the SMA wire rises due to the current, the SMA wire rate is changed
into the high spring rate of KH. Therefore, the SMA wire's elongation is changed into
LR-LSO and the spring's compression into LBO-LR. At a high temperature, the potential
state is in equilibrium at the point of PH. The actuator reciprocates between the point
of PL and PH by heating and cooling. As this process is repeated, it generates the linear
motion.
In order to design the linear actuator, we calculate the maximum stroke of the
proposed actuator. We assume that the relation between force and deformation is linear.
From the equations in Fig. 3, the following relations are obtained.
At low temperature the equilibrium condition $K_L(L_E - L_{SO}) = K_B(L_{BO} - L_E)$
gives (1); at high temperature $K_H(L_R - L_{SO}) = K_B(L_{BO} - L_R)$ gives (2):

$$L_E = \frac{K_B}{K_B + K_L} L_{BO} + \frac{K_L}{K_B + K_L} L_{SO} \qquad (1)$$

$$L_R = \frac{K_B}{K_B + K_H} L_{BO} + \frac{K_H}{K_B + K_H} L_{SO} \qquad (2)$$

From formulas (1) and (2), the theoretical stroke of the two-way linear actuator is

$$\text{Stroke} = L_E - L_R =
\left(\frac{K_B}{K_B + K_L} - \frac{K_B}{K_B + K_H}\right) L_{BO} +
\left(\frac{K_L}{K_B + K_L} - \frac{K_H}{K_B + K_H}\right) L_{SO} \qquad (3)$$

where $K_B$ is the spring rate of the bias spring (silicone bellow); $K_L$ the spring rate
of the SMA at low temperature; $K_H$ the spring rate of the SMA at high temperature;
$L_{BO}$ the initial length of the bias spring; $L_{SO}$ the initial length of the SMA wire;
$L_E$ the system length in the elongation state; and $L_R$ the system length in the
retraction state [5].
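Numerically, the stroke in (3) is just the difference of the two equilibrium positions. A small sanity check in Python (the parameter values below are illustrative only, not the robot's actual spring rates):

```python
def actuator_stroke(KB, KL, KH, LBO, LSO):
    """Theoretical stroke of the two-way linear actuator, Eq. (3): LE - LR."""
    LE = (KB / (KB + KL)) * LBO + (KL / (KB + KL)) * LSO  # Eq. (1), cold equilibrium
    LR = (KB / (KB + KH)) * LBO + (KH / (KB + KH)) * LSO  # Eq. (2), hot equilibrium
    return LE - LR
```

For example, with KB = 1, KL = 1, KH = 3 (in consistent units), LBO = 10 and LSO = 5, the stroke evaluates to 1.25.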
A higher temperature of the SMA means a higher strain of the SMA wire and a larger
bending angle of the bias spring. The voltage of 3.6 V is too low to be applied, while
the voltage of 7.2 V can be chosen as the activating voltage. With this voltage, a high
temperature can be obtained with a pulse of 1 s, and the corresponding return time is
about 3 s. This means that a large bending angle and an actuating frequency of 0.25 Hz
can be obtained. Even with a pulse of 0.4 s, the temperature is already high enough to
activate the SMA wire to contract with a small strain, and a higher frequency can be
obtained [6].
[Figure: structure of the artificial earthworm; labels: bellow, 165 mm, electromagnet,
33 mm, bias spring, SMA wires, clamp, circuit board, baffle, battery]
The artificial earthworm weighs 135 g and consists of a body and two electromagnets.
The artificial earthworm is 146 mm in length. The length, width and height of
the SMA actuator are 42 mm, 18 mm and 16 mm, respectively.
The diameter of the SMA wire is 0.089 mm. The SMA actuator weighs 23 g. The
actuator weight ratio is 17.0%, much smaller than the muscle tissue to body weight
ratios of most earthworms, which are about 50%-60%. The power source and control
board are included in the body. Power is supplied by two 310 mAh Li-polymer
batteries in series. The control board consists of an infrared receiver and an MCU
(Mega8) controlled pulse generator module to drive the SMA wires.
Experiments. The straight moving and steering capabilities were investigated. The duty
ratio is defined as the ratio of pulse width to period. To realize steering, only the
SMA wires on one side of the earthworm are actuated.
Fig. 5. Motion analysis of the artificial earthworm by pulse voltage at a 25% duty ratio
Fig. 5 shows the earthworm's movement analysis at frequencies from 0 to 2 Hz at a
25% duty ratio. A higher frequency means a smaller power-on time, which results in
a smaller bending angle and amplitude of the earthworm. Speed increases as the
frequency decreases for both forward moving and turning, with an exception at
0.25 Hz. A maximum forward moving speed of 1.2 mm/s, a steering angle of
19.8° and a steering speed of 0.68 mm/s were achieved at a 25% duty ratio and
0.25 Hz. Fig. 6 shows the images of turning.
The advantages of the earthworm-like robot using an SMA actuator are low cost,
simple structure, quietness, low weight, low actuator to robot weight ratio and no
mechanical transmission parts.
4 Summary
An SMA-actuated earthworm-like robot with all-round movement was developed, and
the forward moving and steering capabilities of the artificial earthworm robot
were investigated. The results showed that the robot equipped with the SMA actuator
is qualified to serve as a module mimicking the natural earthworm.
In order to improve the performance of the artificial earthworm, more research on
materials, moving mechanisms and control methods is needed. A key problem that
should be addressed to improve the locomotion efficiency is the implementation of
micro-legs, which could increase the differential friction during the contraction and
elongation phases of the earthworm. Although the electromagnet provides a good
adhesive force, it also limits the movements of the earthworm in different
environments. Given the advantages of the artificial earthworm, we believe this work
may serve as a foundation for quiet micro-bionic worms.
References
1. Full, R.J.: Invertebrate locomotor systems. In: Dantzler, W.H. (ed.) Handbook of
Physiology, Section 13: Comparative Physiology, vol. II. Oxford University Press, New York
(1997)
2. Hirose, S.: Biologically Inspired Robots: Snake-Like Locomotors and Manipulators.
University Press, New York (1993)
3. Choi, H.R., Ryew, S.M., Jung, K.M., Kim, H.M., Jeon, J.W., Nam, J.D., Maeda, R., Tanie,
K.: Microrobot actuated by soft actuators based on dielectric elastomer. In: Proc. of
IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, pp. 1730–
1735 (2002)
4. Pratt, G.A.: Legged robots at MIT: what's new since Raibert. IEEE Robotics & Automation
Magazine 7(3), 15–19 (2000)
5. Kim, B., Lee, M.G., Lee, Y.P., Kim, Y., Lee, G.: An earthworm-like robot using shape
memory alloy actuator. Sensors and Actuators A 125, 429–437 (2006)
6. Wang, Z., Hang, G., Li, J., Wang, Y., Xiao, K.: A micro-robot fish with embedded SMA
wire actuated flexible biomimetic fin. Sensors and Actuators A 144(2), 354–360 (2008)
Design of Instruction Execution Stage for an Embedded
Real-Time Java Processor
1 Introduction
Java is a popular language for the development of server and desktop applications, but
in the embedded area, the requirements of low resource occupancy, high execution
efficiency, predictable execution time, etc. make conventional Java technology less
popular than C and assembler. However, features like portability, high development
efficiency, object orientation and safety make Java attractive for embedded
development. Sun's Java ME [1] provides an approach for the development of embedded
Java, mainly in the mobile area. The relatively large resource requirements and low
execution efficiency make the execution mode of a software Java virtual machine (JVM)
not well suited to some hard real-time applications. In light of this, JPOR-32 [2, 3], a
Java processor for embedded real-time applications, is designed as a hardware Java
execution engine more efficient than the software JVM.
626 G. Hu, Z. Chai, and W. Zhao
This paper presents the design of the instruction execution stage (EX) of JPOR-32. In
JPOR-32, the preprocessing work generates the memory image file, where the
unpredictable operations of a software JVM, such as dynamic class loading, verification,
preparation, resolution and initialization, are handled in advance. This image file
determines the structure of the run-time data area of the processor. An improved
garbage collection scheme suitable for embedded real-time Java avoids the common
influences of conventional garbage collection. In JPOR-32, the instructions are
optimized to fit the run-time data area for predictable execution time, and the operands
of the instructions are revised accordingly. The remainder of this paper is organized as
follows. Section 2 gives an overview of JPOR-32. Section 3 describes the pipeline
latches. A description of the control signals for the EX stage of JPOR-32 is provided in
Section 4. Section 5 gives the execution stage data path, and Section 6 concludes the paper.
2 JPOR-32 Overview
JPOR-32 is a 32-bit embedded real-time Java processor based on a stack, with a
five-stage pipeline consisting of instruction fetch (IF), buffering, instruction decode (ID),
EX and memory access. The bytecode instructions are reorganized in the buffering
stage, and the ID stage converts complex instructions into microinstructions and
generates control signals that are sent to the pipeline latches for EX. More detail about
the JPOR-32 pipeline structure is given in [2, 3]. In JPOR-32, the operand stack of the
JVM is implemented by sixteen general registers, SR0-SR15, at the top of the stack
register set. The method stack is implemented in data memory. The current method's
local variables are saved next to the invoking method's context, which is on top of the
method stack. The pointer SP, pointing to the top of the method stack, is saved in the
stack register set, as is the local variable section pointer LV. The base addresses of the
different memory areas directed by the image file, such as the constant pool, class area,
static area and heap, are also saved in the stack register set.
3 Pipeline Latches
The ID/EX pipeline latches are shown in Fig. 1. These latches transfer data for
instruction execution and memory access from the ID stage [2]. The 32-bit constant
register Cst_Reg stores the constant used by instructions such as pushing a constant onto
the stack, loading a local variable onto the stack or storing a value from the stack into a
local variable. Ctl_sig stores control signals. Byte2to4 stores 32-bit extended operands.
Twobit_Reg is a two-bit register that stores object operation data used by instructions
that deal with objects or arrays, and Classaddr stores the 32-bit class address data used
by instructions such as object creation.
stack register set, or sent to ALU_out for data memory addressing. Data written to
memory is sent from the stack register set to data memory through Mem_data for bit
reduction. The data read from memory is also extended and then written back to the
register set. In the execution stage, taking port A as an instance, the scheme of data
selection for the ALU input ports is described below.
if (A_in_sel == 000) A_in <= Cst_Reg; else
if (A_in_sel == 001) A_in <= Byte2to4; else
if (A_in_sel == 010) A_in <= Classaddr; else
if (A_in_sel == 011) A_in <= Regs_out; ...
And the ALU_op control operation is described below.
if (ALU_op == 00) ALU_out_in <= A_in; else
if (ALU_op == 01) ALU_out_in <= B_in; ...
The update operation of the operand stack pointer register RSP is described below.
if (RSP_en == 1 and clk is triggered) {
    if (RSP_ref == 0) RSP <= RSP + 1;
    else if (RSP_ref == 1) RSP <= RSP - 1;
    else if (RSP_ref == 2) RSP <= RSP + 2;
    else if (RSP_ref == 3) RSP <= RSP - 2;
    else if (RSP_ref == 4) RSP <= ALU_out_in;
}
RSPm1 <= RSP - 1;
RSPm2 <= RSP - 2;
RSPm3 <= RSP - 3;
RSPa1 <= RSP + 1;
RSPa2 <= RSP + 2;
For a stack register set write, the pointer used for the write operation must first be
updated, as described below.
RSP_wr <=RSP when RSP_sel = 0 else
RSP_wr <=RSPa1 when RSP_sel = 1 else
RSP_wr <=RSPa2 when RSP_sel = 2 else
RSP_wr <=RSPm1 when RSP_sel = 3 else
RSP_wr <=RSPm2 when RSP_sel = 4 else
RSP_wr <=RSPm3;
The operand stack general registers update operation is described as below.
The update operation of the operand stack general registers is described below.
if (SR_en == 1 and clk is triggered) {
    if (RSP_wr == 0) SR0 <= ALU_out_in;
    else if (RSP_wr == 1) SR1 <= ALU_out_in;
    ...
    else if (RSP_wr == 15) SR15 <= ALU_out_in;
}
For a stack register set read, the operation is described below.
RSPm1_out <= SR0  when RSPm1 = 0 else
RSPm1_out <= SR1  when RSPm1 = 1 else
RSPm1_out <= SR2  when RSPm1 = 2 else
...
RSPm1_out <= SR15 when RSPm1 = 15;
RSPm2_out <= SR0  when RSPm2 = 0 else
RSPm2_out <= SR1  when RSPm2 = 1 else
RSPm2_out <= SR2  when RSPm2 = 2 else
...
RSPm2_out <= SR15 when RSPm2 = 15;
...
RSPa2_out <= SR0  when RSPa2 = 0 else
RSPa2_out <= SR1  when RSPa2 = 1 else
RSPa2_out <= SR2  when RSPa2 = 2 else
...
RSPa2_out <= SR15 when RSPa2 = 15;
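The register-transfer descriptions above can be mirrored by a small behavioral model. A Python sketch (our own class, assuming the conventional push/pop use of the operand stack registers and the pointer RSP; not part of the JPOR-32 design files):

```python
class StackRegisterSet:
    """Behavioral model of the 16-register operand stack SR0-SR15 with pointer RSP."""

    def __init__(self):
        self.sr = [0] * 16   # SR0 .. SR15
        self.rsp = 0         # operand stack pointer

    def push(self, value):
        # write SRn at RSP, then RSP <= RSP + 1 (RSP_ref == 0)
        self.sr[self.rsp] = value
        self.rsp += 1

    def pop(self):
        # RSP <= RSP - 1 (RSP_ref == 1), then read SRn at the new RSP
        self.rsp -= 1
        return self.sr[self.rsp]
```

The read ports RSPm1_out, RSPm2_out, etc. then correspond to indexing the register array at rsp - 1, rsp - 2, and so on.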
6 Conclusion
Currently, Java is playing an increasingly important role in the embedded real-time
area owing to features like platform independence, high development efficiency,
robustness and safety. In this paper, taking JPOR-32 as an example, the instruction
execution stage design for an embedded real-time Java processor is presented. The
run-time operand stack and method stack are implemented, which lays the foundation
for executing a stack-based instruction set. For the instruction execution stage, the
control signal operation mechanism and the data path are presented with emphasis.
References
1. Java ME at a Glance,
http://www.oracle.com/technetwork/java/javame/overview/index.html
2. Hu, G., Chai, Z.L., Zhao, W.K., Tu, S.L.: Instruction Decode Mechanism for Embedded
Real-Time Java Processor JPOR-32. In: 2010 International Conference on Electronics and
Information Engineering, Kyoto (2010)
3. Hu, G., Chai, Z.L., Tu, S.L.: Memory Access Mechanism in Embedded Real-Time Java
Processor. In: The 2nd International Conference on Computer and Automation Engineering,
Singapore (2010)
4. Lindholm, T., Yellin, F.: The Java Virtual Machine Specification, 2nd edn.
Addison-Wesley, Reading (1999)
5. Chai, Z.L.: A RTSJ-based Java Processor for Embedded Real-Time Systems. Ph.D
dissertation, Fudan University (2006)
6. Bollela, G., Gosling, J., Brosgol, B., Dibble, P., Furr, S., Hardin, D., Turnbull, M.: The
Real-Time Specification for Java, 1st edn. Addison-Wesley, Reading (2000)
7. Venners, B.: Inside the Java Virtual Machine, 2nd edn. McGraw-Hill, New York (2003)
8. Schoeberl, M.: A Java processor architecture for embedded real-time systems. Journal of
Systems Architecture 54(1-2), 265–286 (2008)
9. Schoeberl, M.: JOP Reference Handbook: Building Embedded Systems with a Java
Processor. Beta ed., CreateSpace (2008)
A Survey of Enterprise Architecture Analysis Using
Multi Criteria Decision Making Models (MCDM)
Abstract. System design has become very important for software production due
to the continuous increase in the size and complexity of software systems. Building
an architecture for systems such as large enterprises is a complex design activity,
so selecting the correct architecture is a critical issue in the software engineering
domain. Moreover, in enterprise architecture selection, different goals and
objectives must be taken into consideration, as it is a multi-criteria decision-making
problem. Generally, the field of enterprise architecture analysis has progressed from
the application of linear weighting, through integer programming and linear
programming, to multi-criteria decision-making (MCDM) models. In this paper we
survey two multi-criteria decision-making models (AHP, ANP) to determine to what
extent they have been used in making powerful decisions in complex enterprise
architecture analysis. We have found that by using the ANP model, decision makers
of an enterprise can make more precise and suitable decisions in the selection of
enterprise architecture styles.
1 Introduction
In this section we provide an overview of the area of investigation, the inspiration for
this work and the results obtained.
The discipline of Enterprise Architecture (EA) has emerged in recent years in order
to recognize and manage the muddled nature of enterprise-wide IT systems in the real
world. This discipline focuses on the technical aspects as well as the various aspects of
the enterprise upon which the IT systems operate.
Generally, EA provides a knowledge base and support for making proper decisions
on the overarching IT-related issues within the underlying enterprise. Basically, EA
models act as maps carrying information about the current situation and strategies to
prescribe the future directions of the enterprise [3]. As a discipline, EA considers
632 M.J. Zia, F. Azam, and M. Allauddin
the improvement of different features of the enterprise and can deliver exceptional
benefits. It is critical to perform an architecture assessment before taking any decision
on exact scenario selection, because the risk and impact of EA are persistent across the
enterprise. EA analysis is the application of property assessment criteria to enterprise
architecture models [4].
Generally, software development organizations face many problems in selecting the
best design from a group of design alternatives such as various architecture styles
[5], [6]. Architecting systems like distributed software is a complex design activity
that involves making decisions about a number of interdependent design choices
relating to a range of design concerns. When selecting among numerous alternatives,
each decision has a different effect on different quality attributes. The stakeholders
participating in the decision making trigger conflicting constraints, for example about
cost and schedule. It becomes difficult for the architect to select architecture styles that
satisfy all criteria, which relates to the problem of measuring the importance of each
requirement in the best way. For precise and powerful decision support, evaluation
methods and techniques for architecture style selection [8] are usually used.
The primary purpose of this paper is to gain an understanding of the extent to which
different multi-criteria decision-making models are used in the field of enterprise
architecture analysis, and to find out the comparative strengths of these models.
Various methods using formal languages and influence diagrams have been proposed
for the analysis of enterprise architecture, but most recently, in 2009, the use of
multi-criteria decision-making models in the field of enterprise architecture analysis
was introduced. In this paper we survey two multi-criteria decision-making models to
determine to what extent we can make effective use of these models for the analysis of
enterprise architecture.
The models selected for this survey are as follows:
• Analytic hierarchy process (AHP) model
• Analytic network process (ANP) model
A framework is already provided using the AHP model through which, before
implementing expensive enterprise-wide scenarios, the scenarios are analyzed, their
quality attribute achievement level is predicted, and, according to the analysis results,
the decision-making process between them is supported. In this framework there is a
given set of quality attributes for enterprise architecture, with pre-defined criteria and
sub-criteria, and a set of enterprise architecture scenario candidates, from which the
architecture candidate most appropriate to the situation of the enterprise, i.e., the one
that best fulfills the quality requirements of the enterprise, is chosen [1].
We found from the surveyed models that as yet only the AHP model has been used in
the analysis of EA. The ANP model has been used for the analysis of software
architecture, but it has not yet been used in the field of EA, as property or quality
attribute assessment of software models differs from quality attribute assessment of
EA models. Software applications form one of the four fundamental layers of EA
based on [9] and [10]; therefore, assessing an attribute in enterprise architectures
includes assessing the attributes from different points of view, where assessing quality
attributes in software is only one of these points of view.
2 Survey of Models
Choosing the software architecture style to design good software architecture is one
of the important parts of the software design process, and it can help us satisfy both
non-functional and functional requirements precisely and correctly [5], [6].
The most recent research [1] discusses the use of the analytic hierarchy process
(AHP) for enterprise architecture analysis and assigns weights to the different criteria
and sub-criteria of each utility. It provides a quantitative method of measuring quality
attribute achievement for each scenario using AHP, based on the knowledge and
experience of EA experts and domain experts. The methodologies used by other
researchers deploy formal languages such as influence diagrams or their intended
version to support the analysis of EA, but in [1] the analytic hierarchy process (AHP),
a multi-criteria decision-making method, is used. This AHP-based approach helps
them select between different scenarios according to the level of quality attribute
achievement in different EAs.
Fig. 2. The solution provided in [1], based on the enterprise's situation, for measuring quality-attribute achievement for different EA scenarios. The yellow boxes represent the steps that carry out the pure analytic hierarchy process (AHP).
The general form of AHP is the ANP, which is powerful in dealing with complex decisions where interdependence exists in the decision model. Despite the growth of AHP applications in various fields involving decision making, ANP has only started to be used for architecture style selection in software engineering; an ANP model is used for software architecture analysis in [2].
Fig. 3. The framework containing the criteria cluster and the alternatives cluster for architecture selection [2]
2.3 Algorithm
Quantitative Approach
The five major steps for the quantitative approach are given below:
• A quantitative questionnaire is set up to collect the priorities of various stakeholders such as programmers, users, and architects.
• Estimate the relative importance between the alternative architecture styles and the selection criteria.
• Calculate the inconsistency of each pairwise comparison matrix by means of the consistency ratio.
• Place the eigenvectors of the individual matrices into the supermatrix.
• Raise the supermatrix to successively higher powers until the weights converge and remain stable.
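The five steps above can be sketched numerically. The outline below is illustrative only: it uses the standard AHP/ANP formulas rather than anything specific to the surveyed papers, truncates Saaty's random-index table to n = 3..5, and assumes a primitive column-stochastic supermatrix so that repeated squaring converges.

```python
import numpy as np

def priority_vector(A, iters=100):
    """Principal eigenvector of a pairwise comparison matrix (power iteration),
    normalized to sum to 1 -- the priority weights of one comparison matrix."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

def consistency_ratio(A):
    """CR = CI / RI; values below 0.10 are conventionally acceptable."""
    n = A.shape[0]
    lam = max(np.linalg.eigvals(A).real)      # principal eigenvalue
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # random indices (truncated table)
    return ci / ri

def limit_matrix(W, tol=1e-9, max_iter=1000):
    """Raise the column-stochastic supermatrix to successive powers until its
    columns stop changing; the stable columns hold the limit weights."""
    for _ in range(max_iter):
        W2 = W @ W
        if np.abs(W2 - W).max() < tol:
            return W2
        W = W2
    return W
```

A perfectly consistent comparison matrix (each entry the ratio of two true weights) lets `priority_vector` recover those weights exactly and yields a consistency ratio of essentially zero.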
The major benefit of the AHP model [1] is that it considers all possible combinations when assessing the quality attributes. The method also uses the knowledge and experience of EA experts and domain experts, and clearly indicates disagreements between participants. However, AHP represents no feedback from lower to higher levels and no inner dependence: the decision criteria, each element in the hierarchy, and the alternatives are all assumed to be independent of one another. This is not what happens in many real-world cases, where interdependence exists among the alternatives and elements. For this reason, the AHP decision approach is not suitable for complex enterprise architecture analysis.
ANP, also called the supermatrix technique, replaces hierarchies with networks in order to capture the outcome of feedback and dependence within and among clusters of elements. A hierarchy has a goal, levels of elements, and connections between them; a network has clusters of elements with dependencies among the elements. The mutual outer dependence of criteria in two different clusters is represented by feedback. Because ANP does not require independence among elements, it can be used as an effective tool in complex enterprise architecture analysis; moreover, the ANP approach allows both quantitative and qualitative information, making the method flexible [2].
2.5 Conclusion
This survey paper reports how to enhance the selection process of enterprise architecture by using an MCDM model. The survey indicates that feedback within the network structure, together with inner and outer dependence, rules out hierarchy shapes and calls for the network form to model the selection process. Because of the time-consuming nature of the ANP method, applications of AHP noticeably outnumber those of ANP; with an experienced focus group, however, the time taken for the entire process can be reduced and decision makers can reach proper decisions concerning architecture selection [12]. From this survey we find that the ANP model can be used to make more accurate and faster estimates in enterprise-architecture quality-attribute analysis. However, more papers presenting the use of ANP in architecture style selection are still needed: so far, the ANP approach has been used for software architecture analysis (one layer of EA), not for enterprise-level architecture analysis [2], and no proper ANP-based solution has yet been provided for the enterprise
References
1. Razavi Davoudi, M., Shams Aliee, F.: A New AHP-based Approach towards Enterprise
architecture Quality Attribute Analysis. IEEE Xplore (August 1, 2009)
2. Delhi Babu, K., Rajulu, G., Ramamohana Reddy, A., Kumari, A.: Selection of Architecture Styles using Analytic Network Process for the Optimization of Software Architecture.
International Journal of Computer Science and Information Security (IJCSIS) 8(1) (April
2010)
3. Armour, F.J., Kaisler, S.H., Liu, S.Y.: Building an enterprise architecture step by step.
IEEE IT Professional 1(4), 31–39 (1999)
4. Johnson, P., Lagerstrom, R., Naman, P., Simonsson, M.: Enterprise Architecture Analysis
with Extended Influence Diagrams. Information Systems Frontiers 9(2-3), 163–180 (2007)
5. Shaw, M., Garlan, D.: Software Architecture: Perspectives on an Emerging Discipline.
Prentice Hall, Englewood Cliffs (1996)
6. Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice, 2nd edn. Addison-Wesley Professional (2003)
7. Albin, S.T.: The Art of Software Architecture: Design Methods and Technique, 1st edn.
Wiley, Chichester (2003)
8. Dobrica, L., Niemela, E.: A Survey on Software Architecture Analysis Methods. IEEE
Transaction on Software Engineering (2005)
9. Federal Chief Information Officer (CIO) Council: Federal Enterprise Architecture Framework (FEAF), Version 1.1 (1999)
10. Spewak, S.H.: Enterprise Architecture Planning, Developing a Blueprint for Data, Appli-
cation and Technology. John Wiley & Sons, Inc., Chichester (1992)
11. Moaven, S., Habibi, J., Ahmadi, H., Kamandi, A.: A Decision Support System for Software Architecture-Style Selection. IEEE, Los Alamitos (2008)
12. ANP applied to project selection: Journal of construction engineering and management
(April 2005)
Research of Manufacture Time Management System
Based on PLM*
1 Introduction
With the development of enterprise management information, management leaps to a
new development platform through network and computer. Each management of
enterprises will be expanded to information. Manufacture time management of the
processing workshop is particularly important to enterprise, which is all essential to
the policymakers and administrators of enterprises, so enterprise's administrative
system should offer sufficient information and swift inquiry means to users.
But quota work tickets and labor-hour statistics have long been handled in the traditional way, which suffers from many defects: low efficiency, a high error rate, and the vast files and data it generates, all of which make inquiry, uploading, and maintenance difficult. A manufacture time management system combined with information technology is therefore badly needed to handle labor-hour information in a unified way.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 638–643, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Fig. 1 shows the current work flow. After the sales department issues a new job order, the dispatcher looks up the corresponding technological information in the BOM according to the job order and its product drawing number, and then dispatches work tickets. If a production process takes a long time, or needs other work posts to assist in finishing it, it can be split into several work tickets. The printed work tickets are given to workers, who process parts according to the information on the tickets. Every day, the work tickets are handed to the department statistician, and the process information is compiled. At the end of the month, several report forms are produced, such as finished labor-hour daily reports, finished job-order daily reports, and work-post/job-order monthly reports. Finished parts are handed to the testing center. For invalid labor hours caused by processing errors or defective materials, the testing center dispatches a Defect Material Notice, and the dispatcher then issues industry-invalid or material-invalid work tickets. During processing, if the quota hours are insufficient because of added processes or adjusted labor hours, staff can apply to the quota department for additional labor hours. When equipment breaks down or needs maintenance, staff can apply for equipment maintenance, and the equipment division dispatches machine-repair tickets according to the equipment maintenance sheet.
Dispatch list management consists of work ticket printing and special work ticket printing; work ticket printing covers the process quota and process-quota printing, in four concrete forms: single printing, parts printing, multi-selected printing, and gathered printing.
Data management consists of staff information management, work ticket cancellation, and oil labor-hour management.
The statistical analysis module mainly compiles quota manufacture time and the manufacture time of finished dispatch lists, and collects and counts the relevant data. The system integrates strong data-analysis functions, including e-reports, a query language, and decision support. It provides management personnel with detailed branch-plant production report forms and enterprise summary reports, and displays the collected analysis results in charts, serving as reference data for decision support and progress control by management.
The comprehensive query function can extract, filter, compress, and trace core data, enabling queries of work orders, equipment utilization rate, completed manufacture-time status by branch plant, completed manufacture-time status by occupation, and so on.
According to the functional requirements of the labor-hour management system, Excel VBA is adopted to develop the front-end processing procedures, and SQL Server 2005 is selected as the back-end database. The database consists of eight tables: user information, job order information, part information, process information, work ticket information, finished labor-hour statistics,
642 N. Jing, Z. Juan, and Z. Liangwei
staff information, and attendance. Once a job order is issued, its information is stored in the job order table and then divided into two tables: the part table and the process table. When a work ticket is printed, the information is called from the process table and the printing record is stored in the work ticket information table. The data of finished work tickets are stored in the statistics table.
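As a purely illustrative sketch of the table flow described above (the actual system uses Excel VBA with SQL Server 2005; all table and column names here are hypothetical, and only three of the eight tables are shown):

```python
import sqlite3

# In-memory stand-in for the eight-table design; hypothetical names throughout.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE job_order   (order_no TEXT PRIMARY KEY, product TEXT);
CREATE TABLE process     (order_no TEXT, seq INTEGER, work_post TEXT,
                          quota_hours REAL);
CREATE TABLE work_ticket (ticket_no INTEGER PRIMARY KEY AUTOINCREMENT,
                          order_no TEXT, seq INTEGER, printed_at TEXT);
""")

# A job order is issued and split into its processes.
con.execute("INSERT INTO job_order VALUES ('JO-001', 'crusher frame')")
con.executemany("INSERT INTO process VALUES (?, ?, ?, ?)",
                [("JO-001", 1, "milling", 2.5), ("JO-001", 2, "drilling", 1.0)])

# Printing a work ticket pulls its data from the process table and
# records the print in the work_ticket table.
rows = con.execute("SELECT seq FROM process WHERE order_no='JO-001'").fetchall()
for (seq,) in rows:
    con.execute("INSERT INTO work_ticket (order_no, seq, printed_at) "
                "VALUES ('JO-001', ?, date('now'))", (seq,))

tickets = con.execute("SELECT COUNT(*) FROM work_ticket").fetchone()[0]
```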
In its logical structure, the C/S model has fewer levels than the B/S model, so for the same task C/S is faster than B/S, making C/S more suitable for handling large amounts of data. Since work ticket printing and statistics involve large amounts of data, the C/S model is adopted in this module. As the users are administrative staff at all levels of the enterprise rather than IT professionals, and in line with the company's actual workshop conditions, the management information system creatively adopts Excel as the platform, using VBA as the development language and connecting to the SQL database through Excel as the framework of the whole system. VBA connects to the database by setting up an ODBC data source for each database, uses a Connection object to connect to the SQL Server database, and operates on the records in the database with a Recordset object. As long as users can use Excel, they can operate this system fluently; they do not need to master advanced IT technology.
Labor-hour query. This part can query the quota labor hours, both planned and actually finished, of each workshop, job order, and type of work. Department leaders have authority to query the statistics office, workshop, and labor-division reports; statisticians of a branch factory can query their own factory's monthly labor-hour reports. This part runs on the J2EE platform, adopting SQL Server 2000 as the database and Eclipse (with MyEclipse plug-ins) as the development tool. A Web client, a business-logic layer, and a data-persistence layer constitute a multi-layer system structure,
and the connection to the database is made via the JDBC DriverManager class, using its getConnection method.
5 Conclusion
This paper takes the workshop of a crusher manufacturing enterprise as its object: it studies the workshop labor-hour modeling method, sets up the workshop labor-hour management business model on the basis of this method, surveys the workshop on the spot, and analyzes the system requirements. Taking the machine shop of this factory as the target, we analyze its business demands and construct a workshop labor-hour management system model and information management oriented to the manufacturing process. Combined with Web technology and a VBA-based development approach, a workshop labor-hour management system over a hybrid network is developed.
References
[1] Yan, H., Zhong, L.-w., Ni, J.: Analysis and design of a manufacture time management in-
formation system in mechanical workshop. Manufacturing Automation 1, 13–15 (2010)
[2] Zhang, B., Liang, L.: Man-hour Optimization of Nested Block Work Based on Genetic Al-
gorithm and Dynamic Programming Method. Journal of Wuhan University of Technol-
ogy 2, 422–426 (2010)
[3] Guo, C., Zhou, D.: Man-hour quota system based on genetic neural network. Computer
Applications and Software 8, 205–208 (2010)
A Non-rigid Registration Method for Dynamic Contrast
Enhancement Breast MRI
1 Introduction
Breast cancer is the most common malignant disease in women worldwide [1-2]. A major goal of breast cancer diagnosis is early detection of malignancy. Compared with other imaging modalities such as X-ray mammography, dynamic contrast-enhanced breast magnetic resonance imaging (DCE breast MRI) has been shown to be highly sensitive for the early detection of breast carcinoma in patients [2]. During DCE breast MRI imaging, a series of volumetric images is acquired, and computed subtractions of the post-contrast time points from the pre-contrast base image are commonly used to suppress fat and to highlight areas of enhancement. The dynamics of contrast-medium uptake and the morphological features are used to differentiate lesions [3]. However, during the several-minute acquisition of DCE breast MRI, patient movement and respiration can lead to misinterpretation of enhancing regions, causing longer reading times and less clear diagnoses. This suggests that the image data be spatially registered before subtraction is performed.
In recent years, several registration methods have been proposed in the literature for
solving the specific problem of breast MRI registration [4-7] using rigid and nonrigid
transformations with different figures of merit. However, these methods, although very attractive, are often too time consuming for direct clinical application.
With the development and application of computer assistance for MR-based diagnosis of breast cancer, a method that can perform deformable registration of contrast-enhanced images within clinically acceptable execution time is required. The demons algorithm, which is similar to optical flow methods, is increasingly used for medical image registration by several teams because it is remarkably fast [8,9]. However, its assumption that image intensity remains constant over time makes demons unsuited to DCE breast MRI. Some research has been done to overcome this problem by correcting the image intensity [10]. In this paper, we propose a registration model for breast MRI in which the intensity of the post-contrast image is corrected first, and a fast demons approach is then employed to match the pre- and post-contrast images.
v_s = \frac{(m - s)\,\nabla s}{\|\nabla s\|^2 + (m - s)^2} \qquad (1)

Then, combining the displacements v_s and v_m, the total displacement at point p can be calculated as

v = v_s + v_m = (m - s)\left(\frac{\nabla s}{\|\nabla s\|^2 + (m - s)^2} + \frac{\nabla m}{\|\nabla m\|^2 + (s - m)^2}\right) \qquad (3)
646 Y. Wang et al.
In our paper, Eq. (3) is used to iteratively obtain the displacement between the baseline image and the deformed image more quickly and with fewer iterations.
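As an illustrative 2-D sketch (not the authors' implementation) of the symmetric displacement update in Eq. (3): a small `eps` is added here to guard the denominators, and the Gaussian smoothing of the field applied between iterations in a full demons loop is omitted.

```python
import numpy as np

def demons_update(s, m, eps=1e-12):
    """One symmetric demons displacement field per Eq. (3):
    v = (m - s) * ( grad(s)/(|grad(s)|^2 + (m-s)^2)
                  + grad(m)/(|grad(m)|^2 + (m-s)^2) )  for 2-D images
    s, m -- static (baseline) and moving (post-contrast) images."""
    gs_y, gs_x = np.gradient(s)
    gm_y, gm_x = np.gradient(m)
    diff = m - s
    den_s = gs_x**2 + gs_y**2 + diff**2 + eps
    den_m = gm_x**2 + gm_y**2 + diff**2 + eps
    vx = diff * (gs_x / den_s + gm_x / den_m)
    vy = diff * (gs_y / den_s + gm_y / den_m)
    return vx, vy
```

Identical images produce a zero field, and the `(m - s)^2` term in each denominator keeps the update bounded where gradients vanish.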
r_i(x) = M_i(x) - \hat{M}(x), \quad i = 1, \ldots, n \qquad (6)
4 Experiment Results
In order to demonstrate the practicality of the proposed registration, DCE breast MRI data from 30 subjects are used in this study. The MRI examinations were performed on a 1.5 T Signa EchoSpeed using an open breast coil that permitted the subject to lie prone. The image data originate from routine unilateral breast MRI examinations. The dynamic series were acquired using a three-dimensional gradient-echo sequence with fat saturation. Gadopentetate dimeglumine, 0.2 mmol/kg, was administered manually at a rate of about 3 ml/second. TR/TE times were 5.3/1.5 msec, and an interpolated 512×512 image was acquired at a resolution of 0.39 mm per pixel. Each volume took 100 seconds to acquire. Screen captures of suspicious ROIs drawn by the interpreting radiologist accompanied each data set. Fig. 1 is an experimental case, which shows, through the original and subtraction images, the image intensity change between different breast tissues and the movement of the patient during imaging.
For comparison purposes, rigid, original demons, and free-form deformation (FFD) based B-spline registration algorithms are employed in the experiments. For the demons and FFD registrations, we select three resolution levels, with a control-point separation of 12 mm for FFD [4].
To test the performance of the proposed registration method, we use two quantitative numerical indexes: mutual information (MI) and the correlation coefficient (CC) [4,7]. MI has been widely proposed as an optimal similarity measure for images and has already been used for breast image analysis. In this paper, we use normalized mutual information (NMI) to verify the efficiency of the registration; NMI was devised to overcome the sensitivity of MI to changes in image overlap and has been shown to be more robust than standard mutual information [5]:
\mathrm{NMI} = \frac{H(M) + H(R(S))}{H(M, R(S))} \qquad (9)

\mathrm{CC} = \frac{\sum (M - \bar{M})(R(S) - \overline{R(S)})}{\sqrt{\sum (M - \bar{M})^2 \, \sum (R(S) - \overline{R(S)})^2}} \qquad (10)
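The two measures in Eqs. (9) and (10) can be sketched as follows (an illustrative implementation, not the authors' code; the joint histogram bin count is an assumed parameter):

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information, Eq. (9): (H(M) + H(R(S))) / H(M, R(S)),
    estimated from a joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def H(p):                      # Shannon entropy of a distribution
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    return (H(px) + H(py)) / H(pxy.ravel())

def cc(a, b):
    """Correlation coefficient, Eq. (10)."""
    am, bm = a - a.mean(), b - b.mean()
    return (am * bm).sum() / np.sqrt((am**2).sum() * (bm**2).sum())
```

For identical images the joint histogram is diagonal, so the joint entropy equals each marginal entropy and NMI reaches its maximum of 2, while CC equals 1.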
Fig. 2. Comparison of the registration error between no registration (N-R), original demons (O-D), FFD, and the proposed demons (P-D) registration methods in terms of NMI and CC
Fig. 2 shows the comparison of the registration error in terms of NMI and CC. The comparison indicates that, for DCE breast MRI, the demons algorithm with intensity correction yields significantly improved registration in all cases. In the experiments we found that the proposed demons can perform better than FFD in cases with one kind of lesion or two different lesions; handling more lesion classes, for which estimation of the polynomial parameters becomes time consuming and complex, is the subject of our current study.
5 Conclusion
In this paper, we present a novel method based on the demons algorithm for the registration of DCE breast MRI. With intensity-enhancement correction, the proposed demons model overcomes the problem that the original demons algorithm, which assumes constant intensity, is unsuitable for DCE breast MRI, in which intensity is enhanced over time. In the experiments, NMI and CC are employed to evaluate the registration performance of the proposed method in comparison with different registration algorithms. We have observed that the proposed method provides better registration than the original demons and FFD algorithms.
Acknowledgment
This work is jointly supported by the National Natural Science Foundation (60962004), the Gansu Province Hi-Tech Research and Development Program (0708GKCA047), and the Gansu Province Natural Science Foundation (0803RJZA015).
References
1. Hayton, P., Brady, M., Tarassenko, L., et al.: Analysis of dynamic MR breast images using a model of contrast enhancement. Medical Image Analysis 1(3), 207–224 (1996)
2. Behrens, S., Laue, H., Althaus, M., et al.: Computer assistance for MR based diagnosis of
breast cancer: Present and future challenges. Computerized Medical Imaging and Graph-
ics 31, 236–247 (2007)
3. Kuhl, C.K., Mielcareck, P., Klaschik, S., Leutner, C., et al.: Dynamic breast MR imaging:
are signal intensity time course data useful for differential diagnosis of enhancing lesions?
Radiology 211, 101–110 (1999)
4. Rueckert, D., Sonoda, L.I., Hayes, C., et al.: Nonrigid registration using free-form defor-
mation: application to breast MR images. IEEE Trans Med Imaging 18, 712–721 (1999)
5. Lucht, R., Knopp, M.V., Brix, G.: Elastic matching of dynamic MR mammographic im-
ages. Magnetic Resonance in Medicine 43, 9–16 (2000)
6. Martel, A.L., Froh, M.S., Brock, K.K., et al.: Evaluating an optical-flow-based registration
algorithm for contrast-enhanced magnetic resonance imaging of the breast. Physics in
Medicine and Biology 52, 3803–3816 (2007)
7. Mainardi, L., Passera, K.M., Lucesoli, A., et al.: A Nonrigid Registration of MR Breast
Images Using Complex-valued Wavelet Transform. Journal of Digital Imaging 21(1), 27–
36 (2008)
8. Thirion, J.P.: Image matching as a diffusion process: an analogy with Maxwell’s demons.
Medical. Image Analysis 2, 243–260 (1998)
9. Wang, H., Dong, L., O’Daniel, J., et al.: Validation of an accelerated ‘demons’ algorithm
for deformable image registration in radiation therapy. Physics in Medicine and Biol-
ogy 50, 2887–2905 (2005)
10. Hayton, P., Brady, M., Tarassenko, L., Moore, N.: Analysis of dynamic MR breast images
using a model of contrast enhancement. Med. Image Anal. 1, 207–224 (1997)
11. Roche, A., Malandain, G., Ayache, N.: Unifying maximum likelihood approaches in medi-
cal image registration. Image. System. Technology 11, 71–80 (2000)
Iterative Algorithms for Nonexpansive
Semigroups with Generalized Contraction
Mappings in Banach Space
Qiang Lin
Abstract. The purpose of this paper is to prove that the iterative processes generated by a nonexpansive semigroup and a generalized contraction mapping converge strongly to a common fixed point p of the nonexpansive semigroup, and that p is the unique solution to some variational inequality. The result extends some theorems in the literature.
1 Introduction
For finding fixed points of nonexpansive mappings, Moudafi [1] introduced the viscosity approximation method in a real Hilbert space. Many authors have improved and generalized Moudafi's result (see [2,3]). The following implicit and explicit viscosity iterative schemes:

x_n = \alpha_n f(x_n) + (1 - \alpha_n) T(t_n) x_n. \qquad (1.1)

x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) T(t_n) x_n. \qquad (1.2)

were studied by Chen and He [4] and Song and Xu [5], where {T(t) : t ≥ 0} is a nonexpansive semigroup and f is a contraction mapping.
In this paper, we prove that the iterative processes {x_n} given by (1.1) converge strongly to a common fixed point p of {T(t) : t ≥ 0} under the condition that f is a generalized contraction mapping, and that p is the unique solution to some variational inequality. We establish a strong convergence result which generalizes some results given in [3-5].
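As a toy numerical illustration of scheme (1.1) — not part of the paper's analysis — take X = ℝ, the nonexpansive semigroup T(t)x = e^{-t}x (whose unique common fixed point is 0), and the contraction f(x) = x/2 + 1 (so ψ(s) = s/2). Each implicit equation is solved by an inner fixed-point iteration, which converges because the right-hand side is a contraction in x:

```python
import math

def T(t, x):
    # A nonexpansive semigroup on the real line; its common fixed point is 0.
    return math.exp(-t) * x

def f(x):
    # A contraction with psi(s) = s/2, hence a generalized contraction.
    return 0.5 * x + 1.0

def implicit_step(alpha, t, iters=200):
    """Solve x = alpha*f(x) + (1 - alpha)*T(t)x by inner fixed-point iteration."""
    x = 0.0
    for _ in range(iters):
        x = alpha * f(x) + (1 - alpha) * T(t, x)
    return x

# With alpha_n -> 0 and t_n -> infinity, x_n approaches the common fixed point 0,
# as the convergence theorem predicts for this example.
xs = [implicit_step(1.0 / n, float(n)) for n in range(1, 50)]
```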
2 Preliminaries
Throughout this paper, let X be a real Banach space and K a nonempty closed convex subset of X, and let J denote the normalized duality mapping from X into 2^{X^*} given by
J(x) = \{ f \in X^* : \langle x, f \rangle = \|x\| \, \|f\|, \ \|x\| = \|f\| \}, \quad \forall x \in X,

where X^* denotes the dual space of X and \langle \cdot, \cdot \rangle the generalized duality pairing.
∗
Lemma 2.1 ([6]). Let X be a real Banach space and J : X \to 2^{X^*} the normalized duality mapping. Then, for any x, y \in X, we have

\|x + y\|^2 \le \|x\|^2 + 2 \langle y, j(x + y) \rangle \qquad (2.1)

for all j(x + y) \in J(x + y).
We denote by ℝ⁺ the set of nonnegative real numbers. A nonexpansive semigroup is a family {T(t) : t ∈ ℝ⁺} of self-mappings on K such that
(i) T(0)x = x for x ∈ K;
(ii) T(t + s)x = T(t)T(s)x for t, s ∈ ℝ⁺ and x ∈ K;
(iii) \|T(t)x - T(t)y\| \le \|x - y\| for all t ∈ ℝ⁺ and x, y ∈ K;
(iv) for each x ∈ K, the mapping T(·)x from ℝ⁺ into K is continuous.
Lemma 2.3 ([8]). Let K be a nonempty closed convex subset of a Banach space X with a uniformly Gâteaux differentiable norm, and let {x_n} be a bounded sequence in X. If z_0 ∈ K, then

\mu_n \|x_n - z_0\|^2 = \min_{y \in K} \mu_n \|x_n - y\|^2

if and only if

\mu_n \langle y - z_0, J(x_n - z_0) \rangle \le 0, \quad \forall y \in K.
A mapping f : X → X is said to be
(i) a (ψ, L)-function if ψ : ℝ⁺ → ℝ⁺ is an L-function and d(f(x), f(y)) < ψ(d(x, y)) for all x, y ∈ X with x ≠ y;
(ii) a Meir-Keeler type mapping if for each ε > 0 there exists δ = δ(ε) > 0 such that d(f(x), f(y)) < ε for each x, y ∈ X with d(x, y) < ε + δ.
If we take ψ(t) = αt for each t ∈ ℝ⁺, where α ∈ (0, 1), then we obtain the usual contraction mapping.
If F(T) ≠ ∅, then K ∩ F(T) ≠ ∅.
Theorem 3.2. Let X be a real reflexive strictly convex Banach space with a uniformly Gâteaux differentiable norm, K a nonempty closed convex subset of X, {T(t)} a u.a.r. nonexpansive semigroup from K into itself such that F ≠ ∅, and f a generalized contraction mapping on K. Suppose \lim_{n \to \infty} t_n = \infty and \alpha_n \in (0, 1) with \lim_{n \to \infty} \alpha_n = 0. If {x_n} is defined by
x_n = \alpha_n f(x_n) + (1 - \alpha_n) T(t_n) x_n, \quad n \ge 1. \qquad (3.1)
\langle f(q) - q, J(p - q) \rangle \le 0 \qquad (3.4)

Adding up (3.3) and (3.4), we have

0 \ge \langle (p - f(p)) - (q - f(q)), J(p - q) \rangle = \langle p - q, J(p - q) \rangle - \langle f(p) - f(q), J(p - q) \rangle
\ge \|p - q\|^2 - \|f(p) - f(q)\| \, \|p - q\| > \|p - q\|^2 - \psi(\|p - q\|) \, \|p - q\|.
Step 2. We show that {x_n} is bounded. For any fixed p ∈ F, it follows from (3.1) that

\|x_n - p\|^2 \le \alpha_n \psi(\|x_n - p\|) \|x_n - p\| + \alpha_n \|f(p) - p\| \, \|x_n - p\| + (1 - \alpha_n) \|x_n - p\|^2. \qquad (3.5)
We have

\|x_n - p\| - \psi(\|x_n - p\|) \le \|f(p) - p\|,

or equivalently

\|x_n - p\| \le \eta^{-1}(\|f(p) - p\|).

Thus {x_n} is bounded, and so are {T(t_n)x_n} and {f(x_n)}.
for all h > 0, where C is any bounded subset of K containing {x_n}. Hence,

\|x_n - T(h)x_n\| \le \|x_n - T(t_n)x_n\| + \|T(t_n)x_n - T(h)(T(t_n)x_n)\| + \|T(h)(T(t_n)x_n) - T(h)x_n\|.

By Proposition 3.1, we can find p ∈ K such that p = T(h)p. Since h is arbitrary, it follows that p ∈ F. Using Lemma 2.3 together with p ∈ K, we get that

\mu_n \langle y - p, J(x_n - p) \rangle \le 0, \quad \forall y \in K.
From (3.5), we have

\mu_n \|x_n - p\| \left[ \|x_n - p\| - \psi(\|x_n - p\|) \right] \le \mu_n \langle f(p) - p, J(x_n - p) \rangle \le 0,

i.e., \mu_n \|x_n - p\| = 0. Hence, there exists a subsequence {x_{n_k}} ⊆ {x_n} which converges strongly to p ∈ F as k → ∞.
Step 4. We show that p is a solution in F of the variational inequality (3.2). In fact, for any x ∈ F,

\|x_n - x\|^2 \le \alpha_n \langle f(x_n) - x_n, J(x_n - x) \rangle + \alpha_n \|x_n - x\|^2 + (1 - \alpha_n) \|x_n - x\|^2.
We have

\langle x_n - f(x_n), J(x_n - x) \rangle \le 0. \qquad (3.6)
Since the sets {x_n - x} and {x_n - f(x_n)} are bounded and the duality mapping J is single-valued and weakly sequentially continuous from X to X^*, for any fixed x ∈ F, from Step 3 there exists x_{n_k} → p as k → ∞. It follows from (3.6) that p ∈ F is a solution of the variational inequality (3.2). From this we conclude that p ∈ F is the unique solution of the variational inequality (3.2). In a similar way it can be shown that each cluster point of the sequence {x_n} is equal to p. Therefore, the entire sequence {x_n} converges to p, and the proof is complete.
4 Conclusion
In this paper we proved the convergence of the implicit iterative processes generated by a nonexpansive semigroup and a generalized contraction mapping. Theorem 3.2 extends the corresponding results due to Petrusel and Yao [3] to the case of a nonexpansive semigroup {T(t) : t ≥ 0}, and the corresponding results due to Chen and He [4] and Song and Xu [5] to the case of a generalized contraction mapping f.
References
1. Moudafi, A.: Viscosity approximation methods for fixed-points problems. J. Math. Anal.
Appl. 241, 46–55 (2000)
2. Xu, H.K.: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal.
Appl. 298, 279–291 (2004)
3. Petrusel, A., Yao, J.-C.: Viscosity approximation to common fixed points of families of non-
expansive mappings with generalized contractions mappings. Nonlinear Anal. 69, 1100–
1111 (2008)
4. Chen, R., He, H.: Viscosity approximation of common fixed point of nonexpansive semi-
groups in Banach space. Appl. Math. Lett. 20, 751–757 (2007)
5. Song, Y., Xu, S.: Strong convergence theorems for nonexpansive semigroup in Banach spaces. J. Math. Anal. Appl. 338, 152–161 (2008)
6. Liu, L.S.: Ishikawa and Mann iterative processes with errors for nonlinear strongly accre-
tive mappings in Banach spaces. J. Math. Anal. Appl. 194, 114–125 (1995)
7. Benavides, T.D., Acedo, G.L., Xu, H.K.: Construction of sunny nonexpansive retractions
in Banach spaces. Bull. Aust. Math. Soc. 66, 9–16 (2002)
8. Takahashi, W., Ueda, Y.: On Reich‘s strong convergence for resolvents of accretive opera-
tors. J. Math. Anal. Appl. 104, 546–553 (1984)
9. Meir, A., Keeler, E.: A theorem on contraction mappings. J. Math. Anal. Appl. 28, 326–
329 (1969)
10. Lim, T.-C.: On characterizations of Meir-Keeler contractive maps. Nonlinear Anal. 46,
113–120 (2001)
11. Suzuki, T.: Moudafi’s viscosity approximations with Meir-Keeler contractions. J. Math.
Anal. Appl. 325, 342–352 (2007)
Water Distribution System Optimization Using Genetic
Simulated Annealing Algorithm*
Shihu Shu^{1,2,**}

^1 School of Environmental Science and Engineering, Tongji University, Shanghai, China
^2 National Engineering Research Center for Urban Water Resource, Shanghai, China
No. 230, Xuchang Rd, Shanghai 200092, China
Tel.: +86-021-55218273; Fax: +86-021-55215588
shushihu@tongji.edu.cn
Abstract. This water supply system optimization makes use of the latest advances in hybrid genetic algorithms to automatically determine the least-cost pump operation for each pump station in a large-scale water distribution system while satisfying simplified hydraulic performance requirements. Calibration results of the original model were good. The comparison shows that the difference between the simplified and the original models' simulations of nodal pressure and pipe flow was less than 5%, so the precision of the simplified model satisfies the requirement for pump operation optimization. The results of the optimal control model for the water distribution system of City S showed that using the simplified model instead of the microscopic model has a remarkable advantage: the calculation time of the optimal control model is greatly reduced while the precision remains essentially the same.
1 Introduction

* Part of this work was supported by the National Water Special Project of China (2009ZX07421-005) and the Shanghai Post-doctor Research Foundation (B type) Program (10R21420900).
** Corresponding author.
long to be pragmatic. Because network optimization decisions impose harsh requirements on hydraulic computation, research on optimization decisions using the microscopic model has been limited to small-scale water distribution networks. Therefore, a study on the simplification of the microscopic model was carried out and pump operation schemes were put forward.
This paper focuses on the development of an optimal operations model for real-
time control of multi-source, multi-tank water distribution systems. The objective is to
minimize the cost of the energy consumed for pumping. The program makes use of the latest advances in genetic simulated annealing (GSA) optimization to automatically determine the optimal pump operation policy for each pump station in a water distribution system that will best meet simplified hydraulic performance requirements.
To build the simplified hydraulic model, a new algorithm is developed for establishing the water network model. The simplification is based on static condensation (Ulanicki et al., 1996; Martinez & Perez, 1993; Zhao, 2003; Yan & Zhao, 1986; Edward & Khairy, 1995). For the large-scale network, only main pipes are retained to build a miniature model for optimization control. Layouts of the simplified and original networks are shown in Fig. 1 and Fig. 2. The original hydraulic model contains 5008 junctions and 7741 pipes with diameters of at least 75 mm, while the simplified model contains only 352 junctions and 551 pipes with diameters of at least 400 mm.
The accuracy of the models with respect to reality was analyzed. There are some online sampling points in the distribution system, and the calibration results for nodal pressure were good. However, there were no sampling points for flow and pressure in the simplified network, because the online sampling equipment was always installed in pipes with diameters less than 400 mm. Therefore, nodal pressure and pipe flow comparisons between the simplified and the original model simulations were done
instead of comparing the simplified modeling results with field observations. The results show that the difference between the simplified and the original model simulations of nodal pressure and pipe flow was less than 10%. The precision of the simplified model satisfies the requirements of pump scheduling, and the time for an optimization run of the simplified model is much less than that needed for a full-scale model.
$$
\begin{aligned}
\min\; & F = F_1 + F_2 \\
\text{s.t.}\; & f'(q', r', H', Q') = 0 \\
& H_{\min,i,j} \le H_{i,j} \le H_{\max,i,j} \\
& HS_{\min,i,k} \le H_{i,j,k} \le HS_{\max,i,k} \\
& \sum_{j=1}^{J} Q_{i,j} \le Q_{\max,i} + V_i \\
& VH_{i,j,\min} \le VH_{i,j} \le VH_{i,j,\max} \\
& VH_{\mathrm{init},i} = VH_{\mathrm{final},i} \\
& n_1 \le n_i \le \min\{n_2, n_e\}
\end{aligned} \tag{1}
$$
where F is the total cost, F1 is water treatment cost, and F2 is the power cost in the
pump station.
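In a GA-style solver, constraints such as the head bounds in Eq. 1 are commonly folded into the objective as penalty terms. The following is a hedged sketch of such an evaluation; the quadratic penalty form, the weight, and all numeric values are assumptions for illustration, not the paper's formulation.

```python
# Sketch: total cost F = F1 + F2 plus a quadratic penalty for any nodal-head
# bound violation; lower fitness is better. Weights are illustrative.

def fitness(cost_treatment, cost_power, heads, h_min, h_max, penalty=1e4):
    """Penalized objective: F1 + F2 plus penalties for head bound violations."""
    f = cost_treatment + cost_power
    for h, lo, hi in zip(heads, h_min, h_max):
        if h < lo:
            f += penalty * (lo - h) ** 2   # pressure too low at this node
        elif h > hi:
            f += penalty * (h - hi) ** 2   # pressure too high at this node
    return f

# A feasible solution incurs no penalty, so fitness is just F1 + F2:
print(fitness(1000.0, 2500.0, [30.0, 28.0], [25.0, 25.0], [40.0, 40.0]))  # 3500.0
```

In the paper's setting the heads would come from a run of the (simplified) hydraulic model for the candidate pump schedule.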
3 Algorithm
For the optimization operation decision problem described in Eq. 1, the decision variables are the switch states of the constant-speed pumps and the speeds of the variable-speed pumps. Since the decision variables include both continuous and discrete components, and the constraints include the high-dimensional, nonlinear network equations, the problem is a complex large-scale nonlinear programming problem to which traditional nonlinear programming methods are poorly suited.
In a large-scale network, the numbers of nodes and constraints are both so large that a great deal of hydraulic simulation and matrix inversion is needed when a gradient method is adopted, which makes such methods hard to apply. A hybrid genetic algorithm can instead be used to carry out the optimization, since the large-scale network system contains mostly constant-speed pumps, with a limited range of speed variation. The microscopic hydraulic model or the simplified model can be invoked so that the implicit constraints are satisfied automatically. The approach is well suited to network operation decisions, which involve optimizing a large-scale, highly nonlinear, multi-peak function, without requiring gradient information, and it has good overall stability.
The hybrid genetic algorithm combines the genetic algorithm (GA) and the simulated annealing (SA) algorithm (Ilich & Simonovic, 1998). GA and SA are both independently valid approaches to problem solving, each with certain strengths and weaknesses. GA has been applied to a large number of combinatorial optimization problems owing to its ability to achieve global or near-global optima. SA is used to select the individuals for the next generation and to control the mutation rate.
Recently many researchers have tried to combine GA and SA to provide a more powerful optimization method with good convergence control. Chen and Flann (1994) showed that a hybrid of GA and SA can perform better than either GA or SA alone on ten difficult optimization problems. Mahfoud and Goldberg (1995) also introduced a GA and SA hybrid. The combination of GA and SA, each offsetting the other's weaknesses, yields an excellent global search algorithm. Genetic simulated annealing (GSA) successfully combines the local stochastic hill-climbing of simulated annealing with the global crossover operation of the genetic algorithm, and has been widely used in optimization design, scheduling, and operation (Varanelli & Cohoon, 1995; Chen et al., 1998; Hiroyasu et al., 2000).
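A minimal GSA loop can be sketched as follows: GA crossover drives the global search, while an SA-style Metropolis test with a cooling schedule decides whether a mutated child replaces its parent. The objective below is a stand-in for the pump-cost evaluation, and all parameters (population size, cooling rate, mutation scale) are illustrative assumptions, not the paper's settings.

```python
import math
import random

# Hedged sketch of genetic simulated annealing (GSA): GA crossover for global
# search plus SA-controlled acceptance of mutated children. In the paper's
# setting, objective() would invoke the simplified hydraulic model.

def objective(x):                       # stand-in cost function (minimum at 0)
    return sum(xi * xi for xi in x)

def crossover(a, b):                    # one-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(x, scale=0.5):               # perturb one gene
    i = random.randrange(len(x))
    y = list(x)
    y[i] += random.gauss(0.0, scale)
    return y

def gsa(dim=4, pop_size=20, generations=200, t0=1.0, cooling=0.97):
    random.seed(0)                      # reproducible demo run
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            child = mutate(crossover(random.choice(elite), random.choice(elite)))
            parent = random.choice(elite)
            delta = objective(child) - objective(parent)
            # Metropolis test: keep improvements; occasionally accept worse ones
            if delta < 0 or random.random() < math.exp(-delta / temp):
                children.append(child)
            else:
                children.append(parent)
        pop = children
        temp *= cooling                 # cooling reduces uphill acceptance
    return min(pop, key=objective)

best = gsa()
print(objective(best))                  # close to the global optimum at 0
```

The cooling schedule is what gives the hybrid its convergence control: early generations tolerate uphill moves (exploration), late generations behave almost greedily (exploitation).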
The relevant program was developed, then tested and verified on an actual large-scale water distribution system in City S.
4 Case Study
The developed optimal operational control model has been applied to a number of water distribution networks of different sizes and degrees of complexity. Taking City S as an example, the schematic network is shown in Figure 1.
There are 3 WTPs and 9 booster stations in the network. The investigation of the pump stations focused on collecting information on existing operations, including actual operating conditions, historical operation, and discussions with the staff. The pumping facilities are well maintained and the pumps are in reasonably good condition. Site instruments work properly, and operation and maintenance records have been well kept.
In this study, we focus on one pump station in one WTP. Based on Eq. 1, a program was developed. The resulting 24 h optimal operation scheme is presented in Table 1.
The electricity consumption for pumps 3# and 5# was about 12,537 kWh per day, equivalent to 7,522.2 RMB (assuming an average electricity tariff of 0.6 RMB/kWh). The optimal pump schemes based on the simplified model saved 1% of the total electricity consumption compared to the usual experience-based operation. The total daily volume for the pump station is about 100,000 m3/d; the current annual water production is about 30 to 40 million m3 and the annual electricity consumption is about 2 to 3 million kWh. Optimal pump operation based on the simplified model will therefore save about 20 to 30 thousand kWh of electricity annually, equivalent to 12,000 to 18,000 RMB. When
the simplified model was used instead of the microscopic model, the calculation time
of optimal control model was greatly saved, but the precision was basically the same.
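The savings figures above can be checked with a few lines of arithmetic, using the average tariff of 0.6 RMB/kWh assumed in the text:

```python
# Quick arithmetic check of the cost figures reported above
# (average tariff assumed to be 0.6 RMB/kWh, as in the text).

TARIFF = 0.6                              # RMB per kWh

daily_kwh = 12537                         # pumps 3# and 5#, kWh per day
daily_cost = daily_kwh * TARIFF           # ≈ 7522.2 RMB per day

annual_kwh_low, annual_kwh_high = 2_000_000, 3_000_000
saving_kwh_low = annual_kwh_low / 100     # 1% saving → 20,000 kWh
saving_kwh_high = annual_kwh_high / 100   # → 30,000 kWh
saving_rmb_low = saving_kwh_low * TARIFF  # ≈ 12,000 RMB
saving_rmb_high = saving_kwh_high * TARIFF  # ≈ 18,000 RMB

print(round(daily_cost, 1), round(saving_rmb_low), round(saving_rmb_high))
# → 7522.2 12000 18000
```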
5 Conclusions
Using the model simplified from the microscopic model, optimal control models of the pumps were established by minimizing the total operating cost of the system while satisfying all required system constraints. The hybrid genetic algorithm was used to find the optimal pump operation strategies. When the simplified model was used instead of the microscopic model, the calculation time of the optimal control model was greatly reduced while the precision remained essentially the same. The method has been implemented and applied to the water system in City S. The results obtained for optimal pump control showed that the simplified model has remarkable advantages.
References
1. Coulbeck, B., Sterling, M.: Optimal Control of Water Distribution Systems. Proc. Institute
of Electrical Engineers 125(9), 43–48 (1978)
2. Ulanicki, B., Zehnpfund, A., Martinez, F.: Simplification of Water Distribution Network
Models. In: Hydroinformatics 1996, Balkema, Rotterdam, pp. 493–500 (1996)
3. Martinez, F., Perez, R.: Obtaining Macromodels from Water Distribution Detailed Models
for Optimization and Control Purposes. In: International Conference – Integrated Com-
puter Applications for Water Supply and Distribution, Leicester, pp. 1–4 (1993)
4. Zhao, H.B.: Water Network System Theory and Analysis, pp. 36–39. China Architecture
& Building Press (2003)
5. Chen, H., Flann, N.: Parallel Simulated Annealing and Genetic Algorithms: A Space of
Hybrid Methods. In: Davidor, Y., Männer, R., Schwefel, H.-P. (eds.) PPSN 1994. LNCS,
vol. 866, pp. 428–438. Springer, Heidelberg (1994)
6. Chen, H., Flann, N.S., Watson, D.W.: Parallel genetic simulated annealing: a massively
parallel SIMD algorithm. IEEE Transactions on Parallel and Distributed Systems 9, 126–
136 (1998)
7. Edward, J.A., Khairy, H.A.: Hydraulic – Network Simplification. J. Water Resour. Plng.
and Mgmt. 121(3), 235–240 (1995)
8. Varanelli, J.V., Cohoon, J.C.: Population-Oriented Simulated Annealing: A Ge-
netic/Thermodynamic Hybrid Approach to Optimization. In: Proc. 6th Int’l Conf. Genetic
Algorithms, pp. 174–181. M. Kaufman, San Francisco (1995)
9. Lansey, K.E., Awumah, K.: Optimal Pump Operations Considering Pump Switches. J.
Water Resour. Plng. and Mgmt. 120(1), 17–35 (1994)
10. Ilich, N., Simonovic, S.P.: Evolutionary Algorithm for Minimization of Pumping Cost.
Journal of Computing in Civil Engineering (10), 232–239 (1998)
11. Mahfoud, S.W., Goldberg, D.E.: Parallel recombinative simulated annealing: A genetic al-
gorithm. Parallel Computing 21(1), 1–28 (1995)
12. Hiroyasu, T., Miki, M., Ogura, M.: Parallel Simulated Annealing using Genetic Crossover.
In: Proc. IASTED Int’l Conf. on Parallel and Distributed Computing Systems, Las Vegas,
pp. 145–150 (2000)
13. Yan, X.S., Zhao, H.B.: Calculation theory of water distribution network, pp. 82–85. China
Architecture & Building Press (1986)
Distributed Workflow Service Composition Based on
CTR Technology
Abstract. Recently, WS-BPEL has gradually become the basis of a standard for web service description and composition. However, WS-BPEL cannot efficiently describe distributed workflow services because it lacks the necessary expressive power and formal semantics. This paper presents a novel method for modeling distributed workflow service composition with Concurrent TRansaction logic (CTR). The syntactic structures of WS-BPEL and CTR are analyzed, and new rules for mapping WS-BPEL into CTR are given. A case study shows that the proposed method is appropriate for modeling workflow business services in distributed environments.
1 Introduction
Scientific and commercial applications often require large-scale computational and
data business processes. Today, workflow-based applications, often because of their
scale, either in the number of processing steps or the size of the data they access, are
running under distributed environments [1]. Distributed workflows describe the rela-
tionship of their individual computational components and their input and output data
in a cross-domain way [2]. The Web Services Business Process Execution Language
(WS-BPEL) is a suitable language for describing the behavior of distributed business
processes [3-4]. WS-BPEL is a de facto standard for specifying and executing distributed workflows for web services [5]. It is also a language with rich expressivity compared to other languages for business process modeling, in particular those supported by workflow management systems.
Web service technologies for distributed workflow applications have been widely accepted by both industry and academia as the prevailing solution for distributed, platform-based, data-centric service integration. One of the strengths of distributed workflow services is their capacity to be composed: composing services rather than accessing a single service is essential and offers much greater benefit to users in a distributed environment [6].
However, WS-BPEL has several shortcomings that limit its ability to provide a foundation for seamless interoperability of distributed workflows. The semantics of
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 662–667, 2011.
© Springer-Verlag Berlin Heidelberg 2011
WS-BPEL are not always clearly defined, which complicates the adoption of the language for complex service composition. With the fast-growing number of workflow business applications, locating appropriate workflow services and composing them becomes tedious, time-consuming and error-prone.
This paper is structured as follows. Section 2 gives a brief introduction to CTR technology and WS-BPEL process modeling. Section 3 investigates the mapping of WS-BPEL into CTR. Section 4 verifies the proposed method on a case study. Section 5 summarizes the main results.
4 Experimental Results
In this section, we give an example of distributed workflow service composition and describe how it is expressed in WS-BPEL and translated into CTR formulas. Fig. 1 shows the corresponding application of CTR technology to model the sample workflow. The EU$ process and the US$ process handle cheque issues, which enter the system via the task Begin and exit via the task End.
By Definitions 1-6, the control flow of the workflow service orchestration is represented in CTR by:

workflow(W) ← (Pay_Req ∧ Payment type choice(W)) ⊗ (((EU$ ∧ Approval by Director(W)) ⊗ ((Approve ∧ Cheque for EU$ Bank(W)) | Confirmation(W))) ∨ ((US$ ∧ Cheque for US$ Bank(W)) ⊗ (Get Signature from Director | Confirmation(W)))),

where

Payment type choice(W) = request_resource[resource_id].START.assigned_wait_user[role_id].Payment_Type_Choice.release_resource[resource_id].FINISH,

Approval by Director(W) = request_resource[resource_id].START.timer<begin_time, end_time>.assigned_resource(resource_id).Approval.release_resource[resource_id].FINISH,

Cheque for EU$ Bank(W) = request_resource[resource_id].START.assigned_resource(resource_id).wait_user[role_id].Check_Cheque.release_resource[resource_id].FINISH,

Confirmation(W) = request_resource[resource_id].START.assigned_resource(resource_id).Confirmation_Cheque.release_resource[resource_id].FINISH.

Fig. 1. Application of the CTR technology for the case of distributed workflow services composition
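To make the composition operators concrete, CTR's serial conjunction (⊗) and concurrent conjunction (|) can be illustrated with small combinator classes that build an execution trace. This is only an illustration of the composition structure, not the paper's formalism; the task names follow the formulas above, and the concurrent case shows just one admissible interleaving.

```python
# Illustrative combinators (not the paper's formalism) for CTR-style
# composition: Seq models serial conjunction ⊗, Par models concurrent
# conjunction | (shown here as one admissible interleaving).

class Task:
    def __init__(self, name):
        self.name = name
    def run(self):
        return [self.name]

class Seq:                      # serial conjunction ⊗: left, then right
    def __init__(self, *parts):
        self.parts = parts
    def run(self):
        trace = []
        for p in self.parts:
            trace.extend(p.run())
        return trace

class Par:                      # concurrent conjunction |: parts may interleave
    def __init__(self, *parts):
        self.parts = parts
    def run(self):              # emit one admissible interleaving
        trace = []
        for p in self.parts:
            trace.extend(p.run())
        return trace

# EU$ branch of the cheque workflow, as one admissible composition:
eu_branch = Seq(Task("Payment type choice"),
                Task("Approval by Director"),
                Par(Task("Cheque for EU$ Bank"), Task("Confirmation")))
print(eu_branch.run())
# → ['Payment type choice', 'Approval by Director', 'Cheque for EU$ Bank', 'Confirmation']
```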
The following XML fragment is a general WS-BPEL definition of the EU$ process by Definitions 7-15. The US$ process is similar and can be inferred from it.
<bpel:process name="test" xmlns:tns="test">
  <bpel:sequence name="Begin">
    <bpel:receive name="Payment type choice"/>
    <bpel:if name="EU$">
      <bpel:flow name="Flow">
        <bpel:receive name="Approval by Director">
          <bpel:sources>
            <bpel:source linkName="link1"></bpel:source>
          </bpel:sources>
        </bpel:receive>
        <bpel:invoke name="Cheque for EU$ Bank">
          <bpel:sources>
            <bpel:source linkName="link2"></bpel:source>
          </bpel:sources>
        </bpel:invoke>
        <bpel:reply name="Issue Check"></bpel:reply>
        <bpel:reply name="Confirmation"></bpel:reply>
      </bpel:flow>
    </bpel:if>
    <bpel:invoke name="End" inputVariable="output"/>
  </bpel:sequence>
</bpel:process>
5 Conclusions
In this paper, we are motivated by issues related to the composition of distributed workflow services using CTR technology. The paper first reviews concurrent transaction logic and WS-BPEL, and then gives the mapping rules from WS-BPEL to concurrent transaction logic. The proposed method works well for characterizing the behaviors and interactions of distributed workflow service processes in terms of the semantics of CTR.
Acknowledgement
This work has been supported by the Important National Science & Technology Specific Project, China (2009ZX01043-003-003), the National Natural Science Foundation of China (No. 60703042), the Natural Science Foundation of Zhejiang Province, China (No. Y1080343), and the Research and Application Plan of Commonweal Technology in Zhejiang Province (No. 2010C31027).
References
1. Scott, C.A., Ewa, D.B., Dan, G., et al.: Scaling up workflow-based applications. Journal of
Computer and System Sciences 76, 428–446 (2010)
2. Jun, S., Georg, G., Yun, Y., et al.: Analysis of business process integration in Web service
context. Future Generation Computer Systems 23, 283–294 (2007)
3. Chi, Y.L., Hsun, M.L.: A formal modeling platform for composing web services. Expert
Systems with Applications 34, 1500–1507 (2008)
4. Wil, M.P., Kristian, B.L.: Translating unstructured workflow processes to readable BPEL:
Theory and implementation. Information and Software Technology 50, 131–159 (2008)
5. Chun, O., Eric, V., Wil, M.P.: Formal semantics and analysis of control flow in WS-BPEL.
Science of Computer Programming 67, 162–198 (2007)
6. Therani, M., Uttamsingh, N.: A declarative approach to composing web services in dynamic
environments. Decision Support Systems 41, 325–357 (2006)
Offline Optimization of Plug-In Hybrid Electric Vehicle
Energy Management Strategy Based on the Dynamic
Programming
1 Introduction
As the global oil crisis and environmental pollution become increasingly serious, improving energy efficiency and reducing environmental pollution have become the primary tasks of automotive industry development. The plug-in hybrid electric vehicle (PHEV) can not only significantly reduce dependence on oil resources but also effectively reduce urban air pollution, so it has become one of the most important technical means of vehicle energy saving and emission reduction [1,2].
Depending on how they are formulated, current hybrid vehicle energy management strategies fall mainly into three categories: rule-based strategies, intelligent-control-based strategies, and optimization-algorithm-based strategies. Each type has its advantages and disadvantages. Rule-based energy management is simple and practical for real-time online control, but it depends mainly on the designer's experience and steady-state efficiency maps, so it cannot optimize the dynamic performance of the system. Intelligent control strategies are robust and have good self-learning capability, making them well suited to nonlinear control system design, but the difficulty lies in describing the energy management optimization problem within the framework of an intelligent algorithm. Instantaneous optimal control strategies can
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 668–674, 2011.
© Springer-Verlag Berlin Heidelberg 2011
optimize the efficiency of the system in real time, but do not take into account the influence of changing driving conditions on vehicle performance. Global optimization control strategies can optimize system efficiency over the whole driving cycle, but the computation is heavy and not suitable for real-time control [3,4].
The energy management strategy obtained from the offline global optimization algorithm is the theoretically optimal control. Dynamic programming (DP) is a global optimization method that converts a complex decision problem into a series of sub-problems at different stages. It places few restrictions on the system state equation and the objective cost function, the system model can be a numerical model based on experimental data, and the iterative DP algorithm is well suited to computer solution, so DP is appropriate for solving the optimal control of hybrid vehicles. As a multi-stage decision optimization method, DP divides the problem into a number of related stages in time or space. A decision must be made at each stage; this decision determines not only the effectiveness of that stage but also the initial state of the next stage, and hence the evolution of the whole process (hence the name dynamic programming). Once the decision at each stage is determined, a decision sequence, known as a strategy, is obtained. The multi-stage decision problem is to find the strategy that maximizes the sum of the benefits of all stages.
For the PHEV energy management optimization problem, the selected cycle driving conditions should be discretized reasonably, so that the optimal control problem of vehicle performance over the entire cycle is converted into a decision problem over discrete time stages. Using the DP numerical solution method that follows, we expect to obtain the optimal PHEV control under specific cycle conditions, and the performance obtained under this optimal control is the best achievable performance of the vehicle. Since the shift schedule under the NEDC is fixed, the battery SOC can represent the state of the vehicular electric power system of the PHEV, from which the allocation of energy can be determined. Under the control of the global optimal solutions for different mileages, the corresponding SOC sequences vary along the mileage as shown in Figure 2.
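The backward DP recursion over the discretized SOC state grid can be sketched as follows. The power demand per stage, the fuel model, and the battery model below are toy stand-ins introduced for illustration; the paper's actual models are numerical maps from experimental data.

```python
# Hedged sketch of backward dynamic programming over discretized SOC states:
# at each stage the engine/battery power split is chosen to minimize the
# cumulative fuel cost. All models and numbers are illustrative stand-ins.

def dp_energy_management(demand, soc_grid, soc0):
    """Backward DP over SOC states.

    demand   : power demand per stage (kW), from the discretized drive cycle
    soc_grid : admissible SOC levels (discretized reachable state set)
    soc0     : index of the initial SOC state
    Returns the minimal total fuel cost from the initial state.
    """
    n = len(demand)
    INF = float("inf")
    cost_to_go = [0.0] * len(soc_grid)    # terminal cost is zero
    for k in range(n - 1, -1, -1):        # backward over stages
        new_cost = [INF] * len(soc_grid)
        for s in range(len(soc_grid)):
            # admissible controls (toy model): the battery may supply
            # 0..demand in 10 kW steps, one SOC level per 10 kW drawn;
            # the engine supplies the remainder.
            for steps in range(0, min(s, int(demand[k] // 10)) + 1):
                p_batt = steps * 10.0
                p_eng = demand[k] - p_batt
                fuel = 0.2 * p_eng        # toy fuel rate: 0.2 cost per kW
                nxt = s - steps           # battery discharge lowers SOC
                if cost_to_go[nxt] < INF:
                    new_cost[s] = min(new_cost[s], fuel + cost_to_go[nxt])
        cost_to_go = new_cost
    return cost_to_go[soc0]

demand = [20.0, 30.0, 10.0]               # kW per stage, illustrative
soc_grid = list(range(7))                 # 7 discrete SOC levels
print(dp_energy_management(demand, soc_grid, soc0=6))  # → 0.0 (all-electric)
```

Note that with a full battery the sketch drains the SOC to meet the whole demand electrically, mirroring the PHEV behavior discussed below: the optimal SOC trajectory declines with driving distance.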
Unlike in an HEV, to make full use of the energy charged from the electric grid, the SOC of the PHEV under the optimal control sequence declines as driving distance increases. According to the allocation of energy under the global optimal solutions for different mileages, the first level of the PHEV EMS can be divided into three cases, corresponding to different driving modes [6,7]:
Engine-Dominant Driving Mode. When the vehicle mileage exceeds 55 km, the decreasing rate of the SOC of the optimally controlled power cell slows down as the mileage increases, and the SOC approaches the lower limit only at the end of the driving range (when the NEDC cycle ends, the SOC increases slightly through regenerative braking energy recovery). This is because the energy charged from the grid is insufficient to meet the vehicle's driving power demand for the whole trip, so both the engine and the electrical power system are needed to drive the vehicle.
As can be seen from Figure 5, when the vehicle mileage exceeds NEDC×11 (121 km), the best-fit curve of the optimally controlled engine operating points stabilizes in the vicinity of the best-efficiency curve. This illustrates that the PHEV energy management strategy should make the engine work in the best-efficiency area while the vehicle's power requirement is met. In practical implementation, this strategy can be
described briefly as follows: whenever the power requirement exceeds the maximum power that the electrical power system can provide, or the start-control logic of the engine is met, the engine starts; after starting, it runs in the best-efficiency condition (on the best-efficiency curve), with auxiliary drive power provided by the motor. This driving mode, which uses an engine running in the best-efficiency area as the main power source and the electrical drive system as the auxiliary power source, is called the Engine-Dominant Driving Mode, as illustrated in Figure 3.
It should be added that when the vehicle mileage exceeds 55 km, the SOC approaches the lower limit only at the end of the driving range; the SOC does not bottom out early and then rise or stay at the lower limit. This is because the power cell's open-circuit voltage decreases and its charging/discharging internal resistance increases when the SOC is low, which leads to larger battery energy loss. The dynamic programming algorithm therefore shortens the running time of the power cell at low SOC, so the lowest SOC appears only once, at the end of the driving range.
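The loss mechanism just described can be illustrated numerically: for the same delivered power, a lower open-circuit voltage forces a higher current, and a higher internal resistance then inflates the I²R loss. The voltage and resistance curves below are illustrative assumptions, not measured cell data.

```python
# Toy illustration of battery loss vs. SOC: lower open-circuit voltage and
# higher internal resistance at low SOC increase the I^2 * R loss for the
# same delivered power. Curves are illustrative, not measured data.

def battery_loss(p_out_w, soc):
    """Approximate I^2*R loss (W) when delivering p_out_w at a given SOC."""
    v_oc = 300.0 + 60.0 * soc      # open-circuit voltage falls as SOC falls
    r_int = 0.30 - 0.15 * soc      # internal resistance rises as SOC falls
    current = p_out_w / v_oc       # ignoring the small IR voltage sag
    return current ** 2 * r_int

for soc in (0.9, 0.5, 0.1):
    print(f"SOC={soc:.1f}: loss ≈ {battery_loss(20000.0, soc):.0f} W")
```

The monotone increase of the loss as SOC falls is exactly why the DP solution avoids prolonged low-SOC operation.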
Fig. 4. NEDC×12 engine operating points under optimal control
Fig. 5. NEDC×13 engine operating points under optimal control
Fig. 6. NEDC×14 engine operating points under optimal control
Fig. 7. NEDC×15 engine operating points under optimal control
Power-Balance Driving Mode. When the vehicle mileage is between 55 km and 121 km, the total power of the optimally controlled powertrain comes neither from the motor alone nor from the engine alone, and the output ratio of the two power sources lies between those of the two driving modes discussed above. This is called the Power-Balance Driving Mode of the PHEV.
In this driving mode, it is more complicated to summarize the powertrain's working-mode switching rules and the energy flow distribution strategy in the various modes; acquiring them requires probability statistics, multiple linear regression, and other methods, which is the main problem to be solved in the next stage, the online energy management strategy design.
From the above analysis of the dynamic programming offline global optimization results, we sorted out the relations between vehicle mileage and PHEV driving modes; that is, the first level of the PHEV energy management strategy is determined. As illustrated in Figure 6, when the vehicle mileage is less than s1 (55 km), the vehicle runs in the Motor-Dominant Driving Mode, whose main power source is the electrical drive system; the engine is not engaged to drive the vehicle until the demanded power exceeds the maximum output of the motor. When the vehicle mileage exceeds s2 (121 km), the vehicle runs in the Engine-Dominant Driving Mode, where the engine runs along the best-efficiency curve providing most of the drive power, while the electrical drive system serves as an auxiliary source. When the vehicle mileage is between s1 and s2, the vehicle runs in the Power-Balance Driving Mode, where the output ratio of the two power sources lies between those of the two modes above, and the energy flow distribution pattern of the optimally controlled powertrain is obscure and needs further research.
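The first-level rule just stated reduces to a simple distance-based mode selector. The thresholds s1 = 55 km and s2 = 121 km come from the DP results for the NEDC cycle described above; the function itself is an illustrative sketch.

```python
# First-level PHEV mode selection by planned trip distance, as extracted
# from the DP results above (thresholds specific to the NEDC study).

S1_KM, S2_KM = 55.0, 121.0

def driving_mode(trip_km):
    """Select the PHEV driving mode from the planned trip distance (km)."""
    if trip_km < S1_KM:
        return "Motor-Dominant"    # electric drive; engine only on high demand
    if trip_km > S2_KM:
        return "Engine-Dominant"   # engine on best-efficiency curve, motor assists
    return "Power-Balance"         # intermediate output ratio of both sources

print(driving_mode(40))    # → Motor-Dominant
print(driving_mode(90))    # → Power-Balance
print(driving_mode(150))   # → Engine-Dominant
```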
3 Conclusion
To obtain optimal controls, an offline global optimization method was introduced. Dynamic programming was used to convert the optimal control problem of vehicle performance into a step-by-step decision-making problem. By rational discretization of the reachable state set and the admissible control set, we obtained a numerical solution of the global optimization algorithm. Through mixed Matlab and C++ programming, the optimal controls under different PHEV mileages were obtained. The optimization results showed that:
①. When the mileage is less than 55 km (NEDC × 5), the PHEV should run in the Motor-Dominant Driving Mode; when the mileage is greater than 121 km (NEDC × 11), the vehicle should run in the Engine-Dominant Driving Mode; when the mileage is between 55 km and 121 km, the vehicle should run in the Power-Balance Driving Mode.
②. When the vehicle mileage is 55 km, the energy obtained from the power grid is used most efficiently, so the best vehicle economy is obtained at this mileage. When the vehicle mileage is less than 165 km, the average equivalent fuel consumption is 2.9 L/100 km, an improvement of 63% over the prototype vehicle.
References
1. Chan, C.C.: The State of the Art of Electric and Hybrid Vehicles. Proceedings of the IEEE
Digital Object Identifier 90(2), 247–275 (2002)
2. Nowell, G.C.: Vehicle Dynamometer for Hybrid Truck Development. SAE Paper 2002-01-
3129 (2002)
3. Simpson, A.: Cost-Benefit Analysis of Plug-In Hybrid Electric Vehicle Technology. In:
22nd International Battery, Hybrid and Fuel Cell Electric Vehicle Symposium (EVS-22),
Yokohama, Japan, NREL/CP-540-40485 (October 2006)
4. Sasaki, M.: Hybrid Vehicle Technology - Current Status and Future Challenges - Effect of
Hybrid System Factors on Vehicle Efficiency. Review of Automotive Engineering 26(2)
(2005)
5. Gonder, J., Markel, T.: Energy Management Strategies for Plug-In Hybrid Electric Vehicles.
SAE Paper 2007-01-0290 (2007)
6. California Air Resources Board: California Exhaust Emission Standards and Test Proce-
dures for 2005 and Subsequent Model Zero-Emission Vehicles, and 2001 and Subsequent
Model Hybrid Electric Vehicles, in the Passenger Car, Light-Duty Truck and Medium-Duty
Vehicle Classes (March 26, 2004) (last amended December 19, 2003)
7. Markel, T., Simpson, A.: Plug-In Hybrid Electric Vehicle Energy Storage System Design.
In: Advanced Automotive Battery Conference (AABC), Baltimore, MD (May 2006)
Development of Field Information Monitoring System
Based on the Internet of Things*
1 Introduction
Over the last decade the Internet has been a major driver of global information and media sharing. With the advent of low-cost wireless broadband connectivity, it is poised to emerge as an "Internet of Things," in which the web provides a medium for physical-world objects to participate in interaction. The Internet of Things is a kind of
*
This research was supported by GuangDong Provincial Science and Technology Planning
Project of China under grant 2010B020315028.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 675–680, 2011.
© Springer-Verlag Berlin Heidelberg 2011
expansion of Ubiquitous Computing (ubiquitous comes from Latin and means "existing everywhere"). It is described as a self-configuring wireless network of sensors whose purpose is to interconnect all things [1]. The computer scientist Mark Weiser at the Xerox laboratory first proposed the concept of ubiquitous computing in 1991 [2], describing a worldwide network of intercommunicating devices. It integrates Pervasive Computing [3], Ambient Intelligence [4], and the Ubiquitous Network [5]. Although these concepts differ from the Internet of Things, their underlying ideas are quite consistent. At this point, it is easy to imagine that nearly all home appliances, as well as furniture, clothes, vehicles, roads, smart materials, and more, will be readable, recognizable, locatable, addressable and/or controllable via the Internet. This will provide the basis for many new applications, such as soil and plant monitoring, food supply chain monitoring, animal monitoring, forest monitoring, tourism support, homeland security, and pollution monitoring. The Internet of Things represents the future development trend of computing and communications technology.
Modern agriculture is the product of the industrial revolution in agriculture; it enables farmers to use new innovations, research and scientific advancements to produce safe, sustainable and affordable food. Its main features are a mature market, common use of industrial equipment, wide application of high technology, a complete industrial system, and serious attention to improving the ecological environment. In China, agriculture is in the period of transition from traditional to modern agriculture. At the same time, research on farmland data acquisition and processing, a very important aspect of modern agriculture, is at an initial stage in China. This paper mainly studies field information acquisition and processing technology based on the Internet of Things, sensors, and embedded systems.
2 System Architecture
The framework of the field information acquisition system based on the Internet of Things is shown in Figure 1.
The entire system can be divided into three parts: wireless sensor nodes, sink nodes
(intelligent nodes) and the control center. Field information (such as temperature, humidity,
etc.) is collected by the wireless sensor nodes and transmitted to the sink nodes
(intelligent nodes) over the ZigBee communication network. The sink nodes (intelligent
nodes) send the information through a network bridge to the control center, where
the information is displayed and stored. The sink nodes (intelligent nodes) in this
system act as gateways: they perform protocol conversion, transforming a data package
from the ZigBee protocol to the TCP/IP protocol before transmitting it upstream, and a data package from
the TCP/IP protocol back to the ZigBee protocol in the other direction. Since the control center is connected to the Internet,
operators can view and process the data of the control center through any Internet-enabled
computer or device (e.g. PDA, smart phone, etc.).
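As a minimal sketch of the gateway's protocol conversion, the snippet below re-encodes a hypothetical ZigBee sensor frame as a JSON record ready for a TCP stream, and back again. The frame layout, field names and scaling factors are illustrative assumptions, not the system's actual protocol.

```python
import json
import struct

# Hypothetical ZigBee payload layout (not from the paper):
# 2-byte node id, 2-byte signed temperature x100, 2-byte humidity x100, big-endian.
ZIGBEE_FMT = ">HhH"

def zigbee_to_tcp(payload: bytes) -> bytes:
    """Unpack a ZigBee sensor frame and re-encode it as a
    newline-terminated JSON record suitable for a TCP stream."""
    node, temp, hum = struct.unpack(ZIGBEE_FMT, payload)
    record = {"node": node, "temp_c": temp / 100, "rh_pct": hum / 100}
    return (json.dumps(record) + "\n").encode()

def tcp_to_zigbee(line: bytes) -> bytes:
    """Reverse conversion: JSON record back into the ZigBee frame."""
    rec = json.loads(line)
    return struct.pack(ZIGBEE_FMT, rec["node"],
                       round(rec["temp_c"] * 100), round(rec["rh_pct"] * 100))

frame = struct.pack(ZIGBEE_FMT, 7, 2350, 6120)   # node 7, 23.50 C, 61.20 %RH
line = zigbee_to_tcp(frame)
assert tcp_to_zigbee(line) == frame
```

The conversion is lossless in both directions, which is what lets the gateway bridge the two networks transparently.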
Development of Field Information Monitoring System Based on the Internet of Things 677
3 System Design
The Internet of Things here is a wireless sensor network in which many sensor nodes and
intelligent nodes are deployed randomly over certain areas; these nodes are the most
important and basic components of the Internet of Things. The intelligent nodes
play the role of sinks and act as gateways. The hardware of the nodes
consists of four components: a sensor module, a processor module, a wireless
communication module and a power supply module. The sensor module is responsible
for collecting information in the sensor field. The processor module is responsible for
coordinating the work of the various parts of the node. The wireless communication module
can be divided into two parts: one is responsible for wireless communication with
other sensor nodes, and the other for network communication with the
monitoring center. The power supply module provides the power
required by the sensor nodes. Figure 2 shows the structure.
The intelligent node is mainly responsible for running the operating system, the Web server
and the application programs. It receives user commands, sends requests to the
appropriate monitoring points, processes the data fed back by the monitoring points and then
returns feedback to the user through the network. Since user requests are sent to the wireless
module through a serial port, the intelligent node first completes the configuration of
the serial port and makes sure the serial device is open. It then sends the data in accordance
with the format prescribed by the communication protocol of the monitoring points
and waits for the data fed back by them. Figure 3 shows the software design flow of
the intelligent node.
The sensor nodes are responsible for collecting information in the sensor field and transmitting
the data in a timely manner. In our project, the CC2430 was chosen as
the CPU of the sensor nodes. The CC2430 is a true System-on-Chip (SoC) solution,
which improves performance and meets the low-cost, low-power requirements of IEEE 802.15.4 and
ZigBee applications. Its key
features include: a 2.4 GHz IEEE 802.15.4 compliant RF transceiver (industry-leading
CC2420 radio core); low current consumption (RX: 27 mA, TX: 27 mA, microcontroller
running at 32 MHz); a high-performance, low-power 8051 microcontroller
core with 32, 64 or 128 KB of in-system programmable flash; and two powerful USARTs
with support for several serial protocols. After power-up, the CC2430 carries out
system initialization. It then determines whether to be configured as a network
coordinator or as a terminal node according to the actual situation. After that, the sensor node
can collect and send data. Figure 4 shows the ZigBee sensor node flow chart and the circuit
of the CC2430.
Fig. 4. (a) ZigBee sensor node flow chart. (b) Schematic circuit diagram of CC2430.
The monitoring center software mainly includes a video display module, a parameter waveform
display and alarm module, a file service module and a network communication
module. The user interface and related function modules of the system are programmed
with Qt. The video display module is implemented by refreshing
images: the video frames collected by the terminal and sent to the monitor server
are compressed JPEG images. Owing to the persistence of vision of the human
eye, a sequence of static images played at 25 frames per second or more is perceived
as animation. The system therefore sets a timer with an interval of 40 ms, and each
timer event invokes the image display routine, refreshing 25 images
per second. The parameter waveform display and alarm module uses Qt's QPainter
drawing tools. The core function of QPainter is drawing, and it provides a rich set of
associated drawing functions; the system mainly uses drawLine(),
drawRect() and drawText() to draw the parameters. The file service
module is mainly responsible for storing the related image information and names the corresponding
files by time for the convenience of later inquiries. It is also
responsible for recording the environmental parameters in files, which serve as a
historical record for later inspection and analysis. The mainly used classes are
QFile, QFileInfo and QString in Qt. The network communication module performs
network programming through sockets. A stream socket is used to provide a connection-oriented,
reliable data transfer service, which ensures that the data are
transmitted error-free, without duplication, and received in
order. Because it uses the Transmission Control Protocol (TCP), the stream socket can
provide reliable data services.
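The stream-socket service described above can be sketched as a minimal TCP exchange. The port choice, threading setup and message format below are illustrative assumptions, not the monitoring center's actual code.

```python
import socket
import threading

def serve_once(server: socket.socket) -> None:
    """Accept one connection, read a parameter record, acknowledge it.
    TCP delivers the bytes in order and error-free."""
    conn, _ = server.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ACK " + data)

# Server side: bind to any free local port and serve in the background.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side: send a (hypothetical) environmental-parameter record.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"temp=23.5,rh=61.2")
reply = client.recv(1024)
client.close()
server.close()
assert reply == b"ACK temp=23.5,rh=61.2"
```

For such short localhost messages a single recv suffices; a production receiver would loop until the record delimiter arrives.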
680 K. Cai, X. Liang, and K. Wang
4 Conclusion
The real-time field information monitoring system can provide a scientific basis
for modern agricultural management, which is important for improving crop yield and
quality. On the other hand, the Internet of Things is a hot topic today and will lead to
the rise of new industries. In this paper, a farmland information
monitoring and warning system based on the Internet of Things is proposed in order
to overcome the limitations of existing field monitoring systems that use cables for
data transmission. The system can monitor temperature, humidity and much
other environmental information in real time, and the sensor nodes have low power consumption and
high performance, which satisfies the requirements of mobility, portability, multipoint-to-multipoint
and multipoint-to-point communication, and convenience in the field information monitoring
process. In the future, a compression algorithm will be used to reduce the data
bandwidth, and a new wireless module will be used to improve the transmission
speed.
References
1. Conner, M.: Sensors empower the “Internet of Things”, pp. 32–38 (2010)
2. Weiser, M.: The Computer for the 21st Century. Sci. Amer. (September 1991)
3. Satyanarayanan, M.: Pervasive Computing: Vision and Challenges. IEEE Personal Commu-
nications, 10–17 (August 2001)
4. Mulder, E.F., Kothare, M.V.: Title of paper. In: Proc. Amer. Control Conf., Chicago, IL,
vol. 5, pp. 3239–3243 (June 2000)
5. Murakami, T.: Establishing the Ubiquitous Network Environment in Japan, Nomura Re-
search Institute Papers, No. 66 (July 2003)
System Design of Real Time Vehicle Type Recognition
Based on Video for Windows (AVI) Files
Abstract. In this system, using motion detection technology, the data frames
containing vehicle images can be detected automatically from a Video for
Windows (AVI) file, and at the same time the vehicle type is recognized and displayed
automatically. The system’s process consists of five steps: reading the AVI
file and decomposing it into digital image frames; motion detection; vehicle digital
image processing; counting the number of black pixels included in the vehicle body
contour and projecting onto the car image; and vehicle type classification. In particular,
the algorithm that recognizes vehicles by counting the number of black pixels
included in the vehicle body contour is an innovative algorithm. Experiments
on actual AVI files show that the system design is simple and effective.
1 Introduction
An automatic, real-time vehicle type recognition system [1] is an important part of
Intelligent Transportation Systems [2][3] (ITS). In China, research on such systems
started later than abroad, and under all-weather conditions the recognition accuracy of
domestic vehicle recognition systems is not yet satisfactory. The primary methods [4][5][6]
of vehicle type recognition are: radio wave or infrared contour scanning, radar detection,
vehicle weighing, and annular coil and laser sensor measurement.
With digital image processing and recognition technology, there is more and more
research on video-based vehicle type recognition. Because of the wide range of applications
and the rich information of image detection, it can be used in road traffic monitoring, vehicle
type classification and recognition, automatic license plate recognition, automatic
highway tolling and intelligent navigation, so video-based vehicle type recognition is
a hot research direction.
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 681–686, 2011.
© Springer-Verlag Berlin Heidelberg 2011
682 W. Zhan and Z. Luo
Fig. 1. Pending image
Fig. 2. Background template
Fig. 3. Image after subtraction with background
Fig. 4. Image after threshold segmentation
Fig. 5. Image after inverse color
This shows that the setting of the rectangular image detection region in Section 3.2 and the
threshold value in Section 3.3 are reasonable and effective.
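The black-pixel counting step named in the abstract can be sketched as a scan over a thresholded (binary) image. The detection-region bounds and the 0 = black convention below are assumptions for illustration, not the paper's parameters.

```python
def count_black_pixels(image, top, left, bottom, right, black=0):
    """Count pixels equal to `black` inside the rectangular detection
    region [top:bottom, left:right] of a binary (thresholded) image.
    The count serves as the vehicle-type feature described above."""
    return sum(1
               for row in image[top:bottom]
               for px in row[left:right]
               if px == black)

# Tiny binary image after threshold segmentation: 0 = black (vehicle body),
# 255 = white (background).
binary = [
    [255, 0,   0,   255],
    [255, 0,   0,   0  ],
    [255, 255, 0,   255],
]
assert count_black_pixels(binary, 0, 0, 3, 4) == 6
```

A larger vehicle body contour yields a larger count, which is what makes the count usable as a classification feature once color normalization (the dark-car problem discussed below) is addressed.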
As shown in Fig. 9, when a car fleet (a queue of closely spaced cars) appears, the system
does not respond in time and the cars in the fleet are not separated in time, which leads
to recognition errors. A more effective algorithm needs to be designed to solve this
difficult recognition problem; it is a key question for future research.
During system testing, the recognition result for a dark-colored car was often smaller than
that for a light-colored car of the same vehicle type. As shown in Fig. 10, the
left image is a white Elysee and the right a black Passat; the count for the right car is smaller
than for the left, although the results should be the same. A solution to this problem will be
proposed in the future.
5 Conclusion
Vehicle type recognition technology can be widely used for automatic traffic statistics
and toll collection. Tests show that the system design is simple and effective, especially the algorithm
that counts the number of black pixels included in the vehicle body contour, which
is an innovative approach. However, the problems of car fleets and differently colored cars
should be studied further to achieve higher recognition accuracy.
References
1. Zhan, W.: Research of Vehicle Recognition System for the Road Toll Station Based on AVI
Video Flow. China University of Geosciences (2006)
2. Tian, B.: Research on Automatic Vehicle Recognition Technology in Intelligent Transporta-
tion System, pp. 23–26. XiDian University (2008)
3. Xia, W.: Video-Based Vehicle Classification Method Research, pp. 1–2, 7, 16, 24–29.
Huazhong University of Science & Technology (2007)
4. Ji, C.: Vehicle Type Recognition Based on Video Sequences. Journal of Liaoning University
of Technology (Natural Science Edition) 30, 5–7 (2006)
5. Cao, Z.: Vehicle Detection and Classification Based on Video Sequence, pp. 20–46.
ZheJiang University (2004)
6. Cao, Z., Tang, H.: Vehicle Type Recognition in Video. Computer Engineering and Applica-
tions 24, 226–228 (2004)
7. Xiong, S.: Research of Automobile classifying method based on Inductive Loop, pp. 2–5.
ChangSha University of Science and Technology (2009)
8. Cao, Z., Tang, H.: Vehicle Detection, Tracking and Classification in Video Image
Sequences. Journal of Computer Applications 3, 7–9 (2004)
Subjective Assessment of Women’s Pants’ Fit Analysis
Using Live and 3D Models
1 Introduction
Fit refers to how well a garment conforms to the three-dimensional human body.
However, the principles of fit vary from time to time and depend on fashion
culture, industrial norms and individual perceptions of fit. Ashdown (2009) noted several
factors affecting decisions about fit within research work [1].
LaBat and Delong suggested two external influences (the social message of the ideal
body and the fashion figure in the industry) and two personal influences (body
cathexis and the physical dimensional fit of clothing) that affect the consumer’s
satisfaction with clothing fit [2].
Whether a garment fits or not can be assessed using several approaches, such as
live models and dress forms. Live models have some advantages when used for assessing
clothing fit: they have real human body shapes and real
movements, and they can give sensible comments on the clothing. However, their
judgements tend to be personal conclusions based on subjective feeling and vary
from one model to another, and live models are always expensive.
Dress forms are static and can conveniently be used many times, but assessments on them
tend to be influenced by personal judgements of tension, and dress forms are not convenient
when fit assessment proceeds from photos or video. Fit researchers have
used scanned images and expert judges to evaluate the fit of pants (Ashdown et al.,
2004) [3]. They concluded that 3D scans have the potential to substitute for the
live fit analysis process in research and industry because 1) recordings can be rotated and
enlarged, 2) databases can be created of scans of a variety of body shapes and sizes wearing a
single size, and 3) garments can be scanned on fit models in multiple poses to evaluate
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 687–692, 2011.
© Springer-Verlag Berlin Heidelberg 2011
688 L. Zhang, W. Zhang, and H. Xiao
2 Experimental Methods
2.1 Test Samples and Sizes
Women’s pants were used as the experimental sample. The pant sloper was a trouser style
with two front darts, two back darts, a center back zipper and a waistband. This style
was chosen because of its ubiquity and its potential for use as a base pattern for other
styles. The test pants were made from medium-weight cotton muslin fabric in a
raw white color. Three sequential sizes of pants, according to the Chinese garment
standard GB/T 1335-97, were made for the test. The sizes used for the testing pants are shown
in Table 1.
Table 1. The sizes used for testing pants according to China garment size standard GB/T
1335-97
We also prepared one kind of next-to-skin underwear for the subjects. The sizes and
materials used for the test pants are shown in Table 2.
2.2 Subjects
The subjects of this study were 18 healthy females aged from 30 to 40 years, with
heights from 160 cm to 165 cm and BMI (body mass index) within 18.5–24.9.
3 Results
Two judges analyzed and assessed the fit of the women’s pants, producing 36 results in
total for estimating fit. Fig. 1 shows a comparison between the results from the live-model fit
analyses and the three-dimensional scan analyses at each location. The area with the
highest percentage of identical results between the two methods was the waistband
(88.9%). There was less agreement at the back crotch (36.1%), front crotch seam
(38.9%), back thigh (41.7%), left-side seam (41.7%), right-side seam (47.2%) and
back crotch seam (47.2%).
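The percentages above are consistent with identical-result counts out of the 36 total judge results (e.g. 32/36 ≈ 88.9% at the waistband, 13/36 ≈ 36.1% at the back crotch). The counts themselves are inferred from the reported percentages, not stated in the paper; the helper below just makes that arithmetic explicit.

```python
def agreement_pct(identical: int, total: int = 36) -> float:
    """Percentage of identical fit assessments between the two methods,
    rounded to one decimal as reported in the text."""
    return round(100 * identical / total, 1)

assert agreement_pct(32) == 88.9   # waistband
assert agreement_pct(13) == 36.1   # back crotch
```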
Fig. 1. Percentage of agreement of live models and three-dimensional scans fit analysis
The mean confidence score in rating misfit was calculated to analyze the relationship
between body location and the judges’ confidence level. Fig. 2
shows the confidence level on a 5-point scale: lower confidence (1), low
confidence (2), normal (3) and very confident (5). The back crotch was the area of the
lowest confidence, because the crotch point cloud was sometimes lost during scanning. The
waistband, darts and full hip were the areas of highest confidence.
The fit-correction differences were categorized into low (0–0.5 cm), medium (0.6–1 cm)
and high (1.1–1.5 cm). Where the fit assessments agreed on misfit, the
differences in alteration are compared in Table 3. The majority of alteration
amounts found in the fit correction were under 1.5 cm. The alterations of the front thigh and
back thigh were high, while the inseams, left-side seam, right-side seam, front
crotch seam, back crotch seam, darts and front crotch were low.
Table 3. Differences in Alteration Amounts between Live Model Fit Analyses and Three-
Dimensional Scan Analyses
4 Conclusions
Even though there was no perfect agreement in estimating the crotch and side-seam
areas, the high level of agreement at the waistband, hip and upper hip indicates the potential
of three-dimensional scans as an assisting tool for these body locations. There are
limitations of 3D body scanning for fit assessment with respect to missing areas, body posture and
movement, surface texture and accuracy. Future studies should improve data
processing, data display and the navigation interface. That would help in
utilizing three-dimensional scan data in the clothing industry.
References
1. Ashdown, S.: Introduction to Sizing and Fit Research. In: The Fit Symposium, Clemson
Appearance Research, South Carolina, Clemson (2000)
2. LaBat, K.L., Delong, M.R.: Body Cathexis and Satisfaction with Fit of Apparel. Cloth.
Text. Res. J. 8(2), 42–48 (Winter 1990)
3. Ashdown, S., Loker, S., Schoenfelder, K., Lyman-Clarke, L.: Using 3D Scans for Fit
Analysis. JTATM 4(1) (Summer 2004)
4. Ashdown, S., Loker, S.: Improved Apparel Sizing: Fit and Anthropometric 3-D Scan Data.
NTC Project: S04-CR01 (June 2005)
5. Ashdown, S., Loker, S.: Improved Apparel Sizing: Fit and Anthropometric 3-D Scan Data.
NTC Project: S04-CR01 (June 2006)
6. Casini, G., Pieroni, N., Quattocolo, S.: Development of a low cost body scanner for garment
construction. In: 12th ADM Int. Conf., Rimini, Italy, September 5-7, pp. A41–A48 (2001)
7. Alvanon: Fit Conference for the apparel industry, http://www.alvanon.com
8. Song, H.K., Ashdown, S.: An Exploratory Study of the Validity of Visual Fit Assessment
From Three-Dimensional Scans. Cloth. Text. Res. J. 8(4), 263–278 (2010)
9. Zafu: http://www.zafu.com/
10. Kohn, I.L., Ashdown, S.P.: Using Video Capture and Image Analysis to Quantify Apparel
Fit. Text. Res. J. 68(1), 17–26 (1998)
11. Schofield, N.A., Ashdown, S.P., Hethorn, J., LaBat, K., Salusso, C.J.: Improving Pant Fit
for Women 55 and Older through an Exploration of Two Pant Shapes. Cloth. Text. Res. J.
24(2), 147–160 (2006)
12. Brown, P., Rice, J.: Ready-to-wear Apparel Analysis. Prentice Hall Inc., Upper Saddle River
(1998)
A Geographic Location-Based Security Mechanism for
Intelligent Vehicular Networks
1 Introduction
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 693–698, 2011.
© Springer-Verlag Berlin Heidelberg 2011
694 G. Yan et al.
applications, vehicles need to know each other’s locations to avoid accidents, and insurance
adjudication of car accidents relies on locations. For public service applications,
vehicles must be aware of the locations of emergency vehicles in order to move aside for them.
For entertainment applications, both the locations of vehicles and the locations of resources
are needed to provide a high-quality service.
In VANETs, most applications require protection of location information. In some
environments, for example on the battlefield, the locations of vehicles need to be extremely
secure. Therefore, not only must the content be encrypted, but the
place where the encrypted message can be decrypted must also be specified. The
motivation of this paper is to present a location-based encryption method that
not only ensures message confidentiality, but also authenticates the identity and location
of communicating peers. In our approach, a special region (the decryption region)
is specified for the message receiver, who must be physically present in the decryption
region to decrypt the received message. To achieve this, the receiver’s
location information is converted into part of a key. We design an algorithm, dynamic
GeoLock, to convert the location information into the key. Our main contributions
are: 1) designing an algorithm to convert a location into a key; 2) extending the
decryption area from a square (proposed in GeoLock [1]) to any shape; and 3) improving
location error tolerance.
freedom of the size and the shape of the decryption region in order to obtain the feasi-
bility and the accuracy of decryption region prediction.
3.1 An Overview
Our technique involves a security key handshake stage and a message exchange stage,
as shown in Figure 2(a). In the key handshake stage, the client and the server negotiate
a shared symmetric key. The client generates two random numbers as keys, Key_S
and Key_C. Key_S is used to encrypt a message composed of the aggregated location
message and Key_C; this encrypted message is E{Req}. The client generates a GeoLock
based on the location of the server. This value is XOR-ed with Key_S and then
encrypted using the server’s public key Key_E to produce the ciphertext E{Key}. Both
E{Req} and E{Key} are transmitted to the server through the wireless channel. When
the server receives E{Key}, it decrypts it using the server’s private key Key_D to
recover the XOR of the GeoLock and Key_S. The GeoLock generated from the GPS
location of the server is then used to recover the secret key Key_S, and Key_S is used to
decrypt E{Req} to obtain the aggregated location message and the secret key Key_C.
In the message exchange stage, the server and client use the shared Key_C to communicate.
When the server wants to reply to a client, it generates a random number,
Key_S'. The reply message is encrypted using Key_S' to generate a ciphertext,
E{Rep}. Since the aggregated location message contains the client’s GPS position,
the server can generate a GeoLock of the client vehicle’s decryption region. This
GeoLock is XOR-ed with Key_S' and then encrypted with Key_C to generate a ciphertext,
E{Key'}. Both E{Rep} and E{Key'} are transmitted to the client through the
wireless channel. E{Key'} is then decrypted using Key_C to recover the XOR of the
client’s GeoLock and Key_S'. The client generates its GeoLock based on its
current location and uses it to recover the secret key Key_S'. E{Rep} is then decrypted
using Key_S', and the reply message is recovered. The client repeats the algorithm in
the message exchange stage to communicate with the server.
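The XOR step of the handshake can be sketched as follows. The GeoLock stand-in (a SHA-256 hash of the quantized region coordinates) and the omission of the outer public-key encryption of E{Key} are simplifying assumptions for illustration.

```python
import hashlib
import secrets

def geolock(region_coords: str) -> bytes:
    """Stand-in for the GeoLock mapping function: hash of the
    (quantized) decryption-region coordinates."""
    return hashlib.sha256(region_coords.encode()).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Sender side: hide the symmetric key Key_S behind the receiver's GeoLock.
key_s = secrets.token_bytes(32)
sent = xor(geolock("042915"), key_s)

# Receiver side: only a receiver whose own quantized location yields the
# same GeoLock value recovers Key_S.
recovered = xor(geolock("042915"), sent)
assert recovered == key_s

wrong = xor(geolock("043915"), sent)   # outside the decryption region
assert wrong != key_s
```

XOR is self-inverse, so recovering Key_S requires exactly the same GeoLock value; a receiver outside the region derives garbage and cannot decrypt E{Req}.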
The GeoLock mapping function converts geographic location, time and mobility
parameters into a unique value serving as a lock. This unique lock value validates that the
recipient satisfies certain restrictions, for example being within the decryption region during a certain
time interval. There are no preinstalled mapping tables in our proposal.
The process of generating a lock value/key is shown in Figure 2(b). First, all
the input parameters are processed separately. The location (x0, y0) is divided
by the side length L of the (square) decryption region; here each coordinate of P0(x0,
y0) is divided by 100, and the integer part of the quotient is kept. A larger L therefore
yields fewer digits in the output of the first step, and fewer digits result in a weaker lock key;
if the value of L is large, there is a risk that the lock key can be computed by brute-force attack.
Second, the outputs of the first step are multiplexed (concatenated) or reshuffled. Third, the output of the second step is hashed by a
hash function; in practice the hash function can be a mod operation or a
standard hash function such as the Secure Hash Algorithm (SHA) functions. An example is
shown in Figure 2(c). In the first step, the two coordinates are divided by the region length
100, giving (042.00, 915.00), and the integer parts (042, 915) are kept. In the second
step, the two numbers (042, 915) are multiplexed as 042915. In the third step, the multiplexed
number is hashed by SHA to generate the lock value.
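The three steps of this example can be sketched directly. The 3-digit zero-padding matches the 042915 example, and SHA-256 is an assumed choice from the SHA family.

```python
import hashlib

def geolock_value(x: float, y: float, region_len: float = 100.0) -> str:
    """Follow the three steps of Fig. 2(b)/(c): quantize each coordinate
    by the region length L, multiplex (concatenate) the integer parts,
    then hash the multiplexed string."""
    qx = int(x // region_len)               # step 1: integer part of x / L
    qy = int(y // region_len)               # step 1: integer part of y / L
    multiplexed = f"{qx:03d}{qy:03d}"       # step 2: (042, 915) -> "042915"
    return hashlib.sha256(multiplexed.encode()).hexdigest()  # step 3

# Sender quantizes from (04200, 91500); a receiver at (04250, 91520) lands
# in the same 100 m square, so both derive the identical lock value.
assert geolock_value(4200, 91500) == geolock_value(4250, 91520)
assert geolock_value(4200, 91500) != geolock_value(4300, 91520)
```

Any location inside the same L x L square collapses to the same multiplexed string, which is exactly why the receiver in Figure 3(a) regenerates the sender's lock value.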
From the recipient’s point of view, the secret key is recovered as follows. The recipient’s GPS
coordinates are read from the onboard GPS receiver, and the other parameters in Figure
2(b) can be obtained on the recipient vehicle b. If vehicle b satisfies the restrictions of the
decryption region in terms of location, time and relative speed, exactly the same lock
value will be generated. An example of the mapping function from the receiver’s point of view is
shown in Figure 3(a). The receiver vehicle b is located at (04250, 91520)
(UTM 10-digit coordinates) and the decryption region length L is 100 meters. In the first
step, the x and y coordinates (04250, 91520) are divided by the region length 100, giving (042.50,
915.20), and the integer parts (042, 915) are obtained. In the second step, the two numbers (042,
915) are multiplexed as 042915. At this point, the multiplexed number is exactly the same
as the one produced by the key generator on the sender side, so the hash function (SHA) generates exactly the
same key. This shows that the lock value generated on the receiver side is exactly
the same as the one computed by GeoLock on the sender’s side, and
the vehicle passes the geographic validation.
Fig. 2. GeoEncryption
In the previous section, a simple mapping function dealt with a square decryption
region. A square region can be used in many scenarios, such as a road intersection,
a segment of a highway, or a roadside infrastructure, but in some cases
the decryption region is not expected to be square. In general, the
decryption region can have any shape, e.g. a triangle or an irregular shape. As an
example, we use a triangle to represent an irregular shape and compare it with
a square. The mapping function is designed to convert a location into a key. If the decryption
region is irregular, we partition the shape into a set of square regions,
as shown in Figure 3(b). Although there is a small margin not covered
by the square regions, the majority of the shape is covered. When a sender wants
to encrypt a message, it predicts the recipient’s decryption region, which is a set of
square regions representing the original triangular decryption region. Figure 3(c)
shows the seven decryption regions. The sender computes the GeoLock value using
one of the square regions, chosen at random, and encrypts the message with that GeoLock
value to produce a ciphertext. The ciphertext is transmitted through the wireless
channel and received by a receiver. The receiver thereafter checks all the
sub-decryption squares. If the receiver is valid, one of the square regions
will produce the right key and decrypt the ciphertext.
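The sub-square check on the receiver side can be sketched as a membership test over grid cells of side L. The cell indices below are hypothetical, not the seven regions of Fig. 3(c).

```python
# Approximating an irregular decryption region by sub-squares: the region
# is a set of (col, row) grid cells of side L; a receiver is valid only
# if its location falls inside one of those cells.
L = 100.0

def cell_of(x: float, y: float) -> tuple:
    """Grid cell containing a location (the same quantization the
    GeoLock mapping function performs)."""
    return (int(x // L), int(y // L))

def can_decrypt(x: float, y: float, region_cells: set) -> bool:
    """Decryption succeeds only when the receiver's own cell is one of
    the sub-squares the sender used to approximate the region."""
    return cell_of(x, y) in region_cells

triangle_cells = {(42, 915), (43, 915), (42, 916)}   # hypothetical partition
assert can_decrypt(4250, 91520, triangle_cells)       # inside a sub-square
assert not can_decrypt(4450, 91520, triangle_cells)   # in the uncovered margin
```

The second assertion illustrates the margin effect measured in Section 4: a vehicle inside the triangle but outside every sub-square fails to decrypt.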
Fig. 3. GeoEncryption
4 Simulation Results
We used SUMO [5] and ns-2 for our simulation. SUMO is a microscopic
road traffic simulation package, and ns-2 is a network simulator for VANETs
[6], [7]. The network routing protocol was the Ad hoc On-Demand
Distance Vector (AODV) protocol, and the application was Constant Bit Rate (CBR) traffic with 16 packets
per second. The total number of vehicles was 320, and the map was 3.2 km x 3.2 km.
First, the decryption ratio versus location tolerance/precision was investigated. We
measured the decryption ratio as the ratio of successfully decrypted messages to
received messages. We varied the location tolerance because location
detection has a precision problem. As expected, increasing the location tolerance
decreases the decryption ratio, as shown in Figure 4(a). In addition, higher
speed causes a lower decryption ratio, because increases in location tolerance
and speed both increase the false location of vehicles. The decryption ratio of our
algorithm is higher than that of Al-Fuqaha’s algorithm [4], which does not consider
the prediction errors.
Security is not free. We measured the overhead (packet size increase)
and the decryption time while varying the updating pause time. Figure 4(b) shows that
the percentage of overhead increment decreases as the updating pause
increases. In our algorithm, the fixed-size square is less sensitive to changes in the
pause than Al-Fuqaha’s algorithm. We also compared the decryption
ratio of a square decryption region with that of a triangular decryption region. We
varied the location tolerance, with the other simulation parameters the same as
in the first simulation. The result is shown in Figure 4(c). The triangular
region has a smaller decryption ratio for both speeds, because the total
area of the sub-squares is less than that of the triangle: in some cases a vehicle is inside
the triangular region but not inside any sub-square, which causes a failure of
decryption. Therefore, the deviation of the sub-square approximation causes
degraded performance.
Fig. 4. Simulations
5 Conclusion
We have described a feasible and novel geographic location-based security mechanism for
vehicular ad hoc network environments based on the concepts proposed in [2], [3].
Compared with [2], [3], our algorithm is efficient according to our simulations. Future
work will integrate the model into existing security methods, and the shape of
the decryption region will be extended to any shape.
References
[1] Yan, G., Olariu, S.: An efficient geographic location-based security mechanism for vehicu-
lar ad hoc networks. In: Proceedings of the 2009 IEEE International Symposium on Trust,
Security and Privacy for Pervasive Applications (TSP 2009), October 12-14 (2009)
[2] Denning, D., MacDoran, P.: Location-based authentication: Grounding cyberspace for
better security. Computer Fraud and Security 1996(2), 12–16 (1996)
[3] Scott, L., Denning, D.E.: Location based encryption technique and some of its applica-
tions. In: Proceedings of Institute of Navigation National Technical Meeting 2003, Ana-
heim, CA, January 22-24, pp. 734–740 (2003)
[4] Al-Fuqaha, A., Al-Ibrahim, O.: Geo-encryption protocol for mobile networks. Comput.
Commun. 30(11-12), 2510–2517 (2007)
[5] Open source, Simulation of urban mobility, http://sumo.sourceforge.net
[6] Yan, G., Ibrahim, K., Weigle, M.C.: Vehicular network simulators. In: Olariu, S., Weigle,
M.C. (eds.) Vehicular Networks: From Theory to Practice. Chapman & Hall/CRC (2009)
[7] Yan, G., Lin, J., Rawat, D.B., Enyart, J.C.: The role of network and mobility simulators in
evaluating vehicular networks. In: Proceedings of The International Conference on Intelli-
gent Computing and Information Science (ICICIS 2011), Chongqing, China, January 8-9
(2011)
Intrusion-Tolerant Location Information Services in
Intelligent Vehicular Networks
1 Introduction
In the past few years, Vehicular Ad hoc NETworks (VANETs), known as Vehicle-to-
Vehicle and Vehicle-to-Roadside wireless communications, have received a huge
amount of well-deserved attention in the literature. The original impetus for the
interest in intelligent vehicular networks was provided by the need to inform fellow
drivers of actual or imminent road conditions, delays, congestion, hazardous driving
conditions, and other similar concerns. However, almost all advisories and other
traffic-safety related messages depend in a critical way on location information. For
example, traffic status reports, collision avoidance, emergency alerts, cooperative
driving, and resource availability are directly related to location information. If
location information is altered by malicious attackers, these applications will not
work at all and may cause real traffic accidents under certain conditions. Consider,
for example, a scenario in which the line of sight when merging onto a highway is
blocked by trees. If drivers trust only the location information received from other
vehicles, a real traffic accident can happen when a roadside intruder vehicle sends
false location information about other vehicles. Therefore, it is important to
ensure intrusion-tolerant location
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 699–705, 2011.
© Springer-Verlag Berlin Heidelberg 2011
700 G. Yan et al.
information. The main difficulty, however, comes from the different types of data
sources, which operate in a noisy environment. In reality, there exist malicious
attackers who can modify location information, and malicious vehicles can report
bogus positions. Therefore, it is necessary to collect location information from
different sources and estimate the real location from among the reports.
The motivation of this paper is to provide intrusion-tolerant location services in
intelligent vehicular networks through specially designed mechanisms. We describe an
adaptive algorithm that detects and filters the false location information injected
by intruders. Given a noisy environment of mobile vehicles, the algorithm estimates
the high-resolution location of a vehicle by refining low-resolution location input.
The collected location reports, used as input samples, are filtered by a statistical
method: the multi-granularity deviation factor. The contributions of this paper,
therefore, include: 1) filtering the false location information inserted by
intruders; 2) computing a high-resolution location estimate; 3) improving the
accuracy of filtering and location estimation; and 4) not requiring any probability
distribution of the input samples. The intrusion detection algorithm, compared with
other algorithms, is more applicable to real traffic situations.
between two random vectors x and y of the same distribution with covariance
matrix S:

d(x, y) = sqrt( (x - y)^T S^{-1} (x - y) )

Intuitively, the Mahalanobis distance of a test point is its distance from the center
of mass divided by the width of the ellipse/ellipsoid in that direction: OA/OW for
test point A and OB/OW for test point B in Figure 1(b). The larger the Mahalanobis
distance, the more likely the test point is a false location. This method requires
that all input location samples follow a normal distribution; the method we address
in this paper does not.
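The Mahalanobis-distance test above can be sketched in a few lines (our own pure-Python illustration for the 2-D case; the sample points and covariance values are invented, and the original work used MATLAB rather than this code):

```python
from math import sqrt

def mahalanobis_2d(x, mean, cov):
    """d = sqrt((x - m)^T S^{-1} (x - m)) for a 2-D point, with the
    2x2 covariance matrix S inverted in closed form."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    # quadratic form (x - m)^T S^{-1} (x - m)
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) \
        + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return sqrt(q)

# With the identity covariance the m-distance reduces to Euclidean distance:
d = mahalanobis_2d((3.0, 4.0), (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0)))  # 5.0
```

A reported position would then be flagged as a likely false location when its distance exceeds a threshold (e.g., three standard deviations).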
Fig. 1. Algorithms
In our previous work [8], we proposed the box-counting method, which filters outliers
by placing all location inputs into a grid area and counting the number of locations
in each grid cell. The subgrid with the largest number of locations is retained and
refined to obtain the final location estimate. However, the box-counting method has
one potential risk: if the grid granularity is not appropriate, location estimation
deviation arises. An example of the location estimation deviation caused by improper
grid granularity is shown in Figure 1(c). If we use the big grid composed of 4 small
cells in Figure 1(c), the location estimate is covered by the dotted circle. But if
we use a smaller grid granularity, say the shaded grid (one cell), the location
estimate is covered by the triangle shown in Figure 1(c). The distance between the
two location estimates is about 14.4 m. In this paper, we propose a statistical
method that avoids the effect of grid granularity.
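The box-counting idea of [8] can be sketched as follows (an illustrative re-implementation of ours, not the original code; the cell size and sample reports are made up):

```python
from collections import defaultdict

def box_count_estimate(points, cell_size):
    """Bucket location reports into square grid cells, keep the densest
    cell, and return the centroid of its points as the location estimate."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y))
    densest = max(cells.values(), key=len)
    n = len(densest)
    return (sum(p[0] for p in densest) / n, sum(p[1] for p in densest) / n)

# Three consistent reports and one bogus one; the outlier lands alone in
# a far-away cell and is discarded with it.
reports = [(4.0, 4.0), (5.0, 5.0), (6.0, 6.0), (100.0, 100.0)]
estimate = box_count_estimate(reports, cell_size=10.0)  # (5.0, 5.0)
```

Note that choosing a different `cell_size` can change which cell is densest and shift the resulting centroid, which is precisely the granularity sensitivity that motivates the MDEF approach below.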
Papadimitriou et al. [9] propose the Local Correlation Integral method (LOCI) for
finding outliers in large, multidimensional data sets. They introduced the multi-
granularity deviation factor (MDEF), which can cope with local density variations in
the feature space and detect both isolated outliers and outlying clusters. The basic
idea is that a point is flagged as an outlier if its MDEF value deviates
significantly (by more than three standard deviations) from the local averages.
Intuitively, the MDEF at radius r for a point pi is the relative deviation of its
local neighborhood density from the average local neighborhood density in its
r-neighborhood. Thus, an object whose neighborhood density matches the average local
neighborhood density will have an MDEF of 0, while outliers will have MDEFs far from
0. The main symbols and basic definitions we use are described in reference [9]; to
be consistent, we adopt the same notation here. For any pi, r, and α, the
multi-granularity deviation factor (MDEF) at radius r is defined as:
MDEF(pi, r, α) = 1 − n(pi, αr) / n̂(pi, r, α)    (1)

σMDEF(pi, r, α) = σn̂(pi, r, α) / n̂(pi, r, α)    (2)

where n(pi, αr) is the number of points in the αr-neighborhood of pi, n̂(pi, r, α)
is the average of n(p, αr) over all points p in the r-neighborhood of pi, and
σn̂(pi, r, α) is the standard deviation of those counts.
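A one-dimensional sketch of the MDEF computation (our own illustrative code following the definitions in [9]; the sample values and the default α = 0.5 are ours):

```python
def _neighbors(points, center, radius):
    """All points within `radius` of `center` (1-D, inclusive)."""
    return [p for p in points if abs(p - center) <= radius]

def mdef(points, pi, r, alpha=0.5):
    """MDEF(pi, r, alpha) = 1 - n(pi, alpha*r) / n_hat(pi, r, alpha),
    where n(p, s) counts points within s of p and n_hat averages
    n(p, alpha*r) over the r-neighborhood of pi."""
    n_pi = len(_neighbors(points, pi, alpha * r))
    counts = [len(_neighbors(points, p, alpha * r))
              for p in _neighbors(points, pi, r)]
    n_hat = sum(counts) / len(counts)
    return 1.0 - n_pi / n_hat

pts = [0.0, 1.0, 2.0, 3.0, 4.0, 10.0]  # a tight cluster plus one outlier
m_in = mdef(pts, 2.0, r=8.0)           # near 0 for an interior point
m_out = mdef(pts, 10.0, r=8.0)         # far from 0 for the outlier
```

For these values `m_in` ≈ −0.15 while `m_out` = 0.75, matching the intuition that outliers have MDEFs far from 0.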
4 Simulation
The proposed position filtering and estimating algorithms were simulated for
straight-line traffic using a tailored Java simulation engine. The vehicle mobility
model is implemented as IDM [10], [11], [12]. We assumed that 1% of vehicles send out
compromised positions; these malicious attackers create random bogus locations. The
remaining 99% of vehicles report honest location information. The input is processed
in MATLAB R2009a, where our algorithms are implemented.
To assess the accuracy of outlier filtering, we compared the filtering effect of
three algorithms: LOCI, m-distance, and box-counting. The sample size is varied from
100 to 210. We recorded the number of outliers filtered by the three algorithms;
Figure 2(a) shows the results.
Fig. 2. Comparison
References
[1] Widyawan, Klepal, M., Pesch, D.: Influence of predicted and measured fingerprint on the
accuracy of rssi-based indoor location systems. In: 4th Workshop on Positioning, Naviga-
tion and Communication, WPNC 2007, March 2007, pp. 145–151 (2007)
[2] Yan, G., Olariu, S., Weigle, M.C.: Providing VANET security through active position de-
tection. Computer Communications: Special Issue on Mobility Protocols for ITS/VANET
31(12), 2883–2897 (2008)
[3] Kim, H.-S., Choi, J.-S.: Advanced indoor localization using ultrasonic sensor and digital
compass. In: International Conference on Control, Automation and Systems, ICCAS
2008, October 2008, pp. 223–226 (2008)
[4] Bartelmaos, S., Abed-Meraim, K., Attallah, S.: Mobile localization using subspace track-
ing. In: Asia-Pacific Conference on Communications, October 2005, pp. 1009–1013
(2005)
[5] Moraitis, N., Constantinou, P.: Millimeter wave propagation measurements and charac-
terization in an indoor environment for wireless 4g systems. In: IEEE 16th International
Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC 2005, Sep-
tember 2005, vol. 1, pp. 594–598 (2005)
[6] Yan, G., Yang, W., Olariu, S.: Data fusion for location integrity in vehicle ad hoc net-
works. In: Proceedings of The 12th International Conference on Information Integration
and Web-based Applications and Services (iiWAS 2010), Paris, France, November 8-10
(2010)
[7] Mahalanobis, P.C.: On the generalised distance in statistics. Proceedings National Insti-
tute of Science, India 2(1), 49–55 (1936)
[8] Yan, G., Chen, X., Olariu, S.: Providing vanet position integrity through filtering. In:
Proceedings of the 12th International IEEE Conference on Intelligent Transportation Sys-
tems (ITSC 2009), St. Louis, MO, USA (October 2009)
[9] Papadimitriou, S., Kitagawa, H., Gibbons, P.B., Faloutsos, C.: LOCI: fast outlier detec-
tion using the local correlation integral. In: Proceedings of 19th International Conference
on Data Engineering, pp. 315–326 (2003)
[10] Yan, G., Ibrahim, K., Weigle, M.C.: Vehicular network simulators. In: Olariu, S., Weigle,
M.C. (eds.) Vehicular Networks: From Theory to Practice. Chapman and Hall/CRC
(2009)
[11] Yan, G., Lin, J., Rawat, D.B., Enyart, J.C.: The role of network and mobility simulators
in evaluating vehicular networks. In: Proceedings of The International Conference on In-
telligent Computing and Information Science (ICICIS 2011), Chongqing, China, January
8-9 (2011)
The Role of Network and Mobility Simulators in
Evaluating Vehicular Networks
1 Introduction
R. Chen (Ed.): ICICIS 2011, Part II, CCIS 135, pp. 706–712, 2011.
© Springer-Verlag Berlin Heidelberg 2011
the gaps left by previous work. Inspired by Kurkowski et al. [3], [2], we review and
classify the simulators used in vehicular network studies presented at the ACM
VANET workshops from 2004 to 2007. More importantly, we present a comparison of two
VANET simulation solutions and show the impact that the choice of mobility model and
topology has on various network metrics.
Here we present an overview of the vehicular mobility simulators used in papers from
ACM VANET as well as other publicly-available mobility simulators.
SHIFT/SmartAHS [4] is an Automated Highway Systems (AHS) simulator devel-
oped as part of the California PATH project at UC-Berkeley. It was originally built to
simulate automated vehicles, but a human driver model [5], based on the cognitive
driver model COSMODRIVE, was later added. SHIFT/SmartAHS is still available
for free download from PATH. All three papers from ACM VANET that used
SHIFT/SmartAHS come from the research group at UC-Berkeley.
The Microscopic Traffic Applet [6] is a Java simulator that implements the IDM
car-following model. The default scenarios are ring road, on-ramp, lane closing,
uphill grade, and traffic lights. As this is an applet designed to illustrate IDM, it does not
708 G. Yan et al.
include any method to import maps from other sources, set a path from source to
destination, or output a trace file for input into a network simulator.
VanetMobiSim [7] is an extension of the CanuMobiSim [8] project. CanuMobiSim
implemented Random Waypoint movement, and its trace files can be imported into
the NS-2, GloMoSim, or QualNet network simulators. VanetMobiSim adds an
implementation of IDM as well as IDM-LC (lane change) and IDM-IM (intersection
management). Maps can be generated by the user, randomly, or using the TIGER/Line
database. In addition, the mobility model used in VanetMobiSim has been
validated against the CORSIM mobility simulator.
SUMO (Simulation of Urban Mobility) [9] is an open-source mobility simulator
that uses Random Waypoint path movement and the Krauß car-following model.
SUMO supports maps from TIGER/Line and ESRI. MOVE [10] is an extension to
SUMO that allows its vehicle mobility traces to be imported into NS-2 or QualNet.
Since vehicular networks involve solely wireless communications, all of the network-
ing simulators described here support performing simulations with mobile wireless
nodes. We briefly describe the publicly-available simulators.
NS-2 [11] is an open-source discrete event network simulator that supports both
wired and wireless networks, including most MANET routing protocols and an im-
plementation of the IEEE 802.11 MAC layer. NS-2 is the most widely-used simulator
for networking research and is the most-used network simulator in the ACM VANET
workshop.
J-Sim [12] is an open-source simulation environment, developed entirely in Java.
J-Sim provides two mobility models: trajectory-based and random waypoint. J-Sim is
presented as an alternative to NS-2 because it is designed to be easier to use, but J-
Sim has not been updated since 2006.
SWANS (Scalable Wireless Ad hoc Network Simulator) [13] was developed to be
a scalable alternative to NS-2 for simulating wireless networks. Based on compari-
sons of SWANS, GloMoSim, and NS-2, SWANS was the most scalable, the most
efficient in memory usage, and consumed the least runtime [13,14]. Recently, the
network model in SWANS was validated against NS-2 [14]. It was shown that along
with better performance, SWANS delivered similar results as NS-2, at least for the
network components that were implemented in both.
3 Comparison by Experiment
In addition to analyzing what simulators have been used in previous VANET re-
search, we present a comparison of two mobility simulators to demonstrate the differ-
ences observed at the network level caused by different mobility models. We choose
to compare VanetMobiSim and SUMO not only because they are publicly available,
but also because they are both well-maintained and still continue to be developed and
improved. In order to keep the networking components the same, we use NS-2 as the
network simulator in these experiments. We constructed a 2000 m x 2000 m area of
city streets. Both VanetMobiSim and SUMO use a random trip generator to choose a
path between source and destination. We place 60 vehicles at five different initial
positions as shown in Figure 3(a). Once moving, vehicles never leave the map, mak-
ing random turning decisions at each intersection. The speed limits on the streets
range from 5-20 m/s (11-45 mph), and the vehicles are constrained by the speed limit.
SUMO and VanetMobiSim are run with the configuration described above. Both of
these mobility simulators generate a mobility trace that can be input into NS-2. Then,
NS-2 is run with the configuration as described below. For the car-following model,
VanetMobiSim uses IDM, and SUMO uses the Krauß Model.
Out of the 60 vehicles in the simulation, 10 are chosen (at random) as data sources
and 10 are chosen as data sinks. This results in 10 constant-bit rate (CBR) data flows.
Each data source sends four 512-byte packets per second, resulting in an offered load
of 16 kbps (kilobits per second) from each CBR flow. Packets are routed from source
to destination using the Ad-hoc On-Demand Distance Vector Routing (AODV) [15]
algorithm (described below). Each vehicle will buffer packets (in a finite queue) until
a route has been found to the destination.
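The offered load quoted above follows directly from the packet size and rate; as a quick check (not part of the simulation scripts themselves):

```python
packet_bytes = 512      # CBR packet size
packets_per_sec = 4     # packets sent per second per flow
flows = 10              # concurrent CBR flows

per_flow_bps = packet_bytes * 8 * packets_per_sec  # 16384 bit/s ≈ 16 kbps
total_bps = per_flow_bps * flows                   # ≈ 160 kbps total offered load
```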
1) Throughput: Figure 1(a) shows the throughput obtained with VanetMobiSim
mobility and SUMO mobility. We also show a line for offered load, which indicates
the amount of data that senders are transmitting into the network. Each dot on the
offered load line represents a new CBR flow starting. In both cases, the throughput is
bursty with spikes of high throughput along with periods of very low throughput. This
suggests that the topology is changing and that routes are being broken. To investigate
the performance further, we look at loss and delay.
2) Loss Rate: We computed the loss rate for CBR packets every 10 seconds in each
experiment and show the results in Figure 1(b). The loss rates are shown on the graph
in the time interval that the packets were sent. With SUMO mobility, no packets sent
before time 100 seconds were ever received, and with VanetMobiSim mobility, no
packets sent before time 50 seconds were received. This high rate of loss is due to
vehicles being initially deployed at the borders of the map (Figure 3(a)). Since the
average speed is about 20 m/s, and the area is 2000 m x 2000 m, vehicles need to
move about 1000 m before they are in range of one another. The time needed for this
movement is about 50 seconds. Once vehicles cover the whole area of the map, more
routing paths are available, and the loss rate begins to decrease.
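The roughly 50-second warm-up can be reproduced from the numbers in the text (the map size and average speed are as stated; treating "about 1000 m" as half the map side is the text's own approximation):

```python
map_side_m = 2000.0  # 2000 m x 2000 m simulated area
avg_speed = 20.0     # m/s average vehicle speed
# Vehicles start at the borders, so roughly half the map side must be
# covered before they come into range of one another.
warmup_s = (map_side_m / 2) / avg_speed  # 50.0 s
```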
3) Packet Delay: In Figures 2(a) and 3(b), we show the packet delay for each packet
successfully received by the destination with VanetMobiSim mobility and SUMO
mobility, respectively. In the figures, the delay for a packet is shown at the time the
packet is sent by the source. We give the delays from each CBR flow a different color
and symbol to distinguish them.
We note that the absence of packet delays up to time 50 seconds with VanetMo-
biSim mobility and up to time 100 seconds with SUMO mobility matches the loss
rates for each, since we can only compute packet delays for packets that are actually
received. In both figures there are delay spikes followed by sharp decreases in delay.
This is indicative of a queue draining, especially since all of the packets in the de-
creases are from the same CBR flow. We looked in particular at the first spike in
Figure 2(a), which is enlarged in Figure 2(c). All of the packets represented in this
area are from a single flow, from source node 7 to destination node 9. Source node 7
begins sending packets at time 4.08 seconds, but according to Figure 1(b), no packets
sent before time 50 seconds are received. The first packet is received 10 seconds after
it was sent. Subsequent packets have a linearly decreasing delay. The issue is that
AODV cannot set up a route from the source to the destination until time 61.04
seconds. Once the route is set up, all of the packets in the source's queue can be
sent and are delivered.
To further investigate the reason for the length of time needed to set up the route
between source node 7 and destination node 9, we show the position of the vehicles at the
time node 7 sends an AODV request (Figure 3(c)). Node 7 sends the RREQ to node 6,
but the next hop, node 0, is out of communications range as the distance between the
two vehicles is about 525 m, which is larger than the transmission range of 250 m.
Nodes 6 and 0 will only be able to communicate with each other once they are in
range, so the vehicles need to travel about 275 m before they are within 250 m of each
other and can communicate. Based on the speed that the two vehicles are traveling
(about 28 m/s), it will take about 10 seconds for them to travel the required distance
and be in communications range. Thus, the topology of vehicles has a dominant effect
on packet delay when there is no direct route. We look at the details of packet losses
to confirm this. In Figure 2(b) we show each dropped packet in the case of SUMO
mobility and the reason given for the drop. We note that in our loss rate figures, some
packets are not dropped, but just are never heard by the receiver. In this figure, we are
looking at specific reasons why packets were dropped at a node. The reasons include
queue overflow, wireless collision, packet has been retransmitted a maximum number
of times, and no route to the next host. Early in the simulation, several packets are
dropped because there is no route, and many are dropped because of queue overflow.
These queue overflow drops are occurring at the source nodes because they are not
able to deliver any of the CBR packets. Starting around 140 seconds, there are many
packets dropped due to collision. This indicates that routes are available and the wire-
less channel has become busy.
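The 10-second route-setup delay discussed above can likewise be reproduced from the distances and speed given in the text (treating the stated 28 m/s as the closing speed between nodes 6 and 0, as the discussion implies):

```python
separation_m = 525.0  # distance between nodes 6 and 0 when the RREQ is sent
tx_range_m = 250.0    # transmission range used in the experiments
closing_speed = 28.0  # m/s, from the text

gap_m = separation_m - tx_range_m        # 275 m still to close
time_to_range_s = gap_m / closing_speed  # ≈ 9.8 s, i.e. about 10 seconds
```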
4 Conclusion
We have presented an overview of publicly-available simulators used for vehicular
networking research. Typically, vehicle movements are determined by a vehicle mo-
bility simulator, and the movement trace is then input into a network simulator to
simulate communication between the vehicles. In addition to reviewing currently
available simulators, we performed experiments to compare the SUMO mobility gen-
erator and the VanetMobiSim mobility generator. We show that vehicle mobility can
greatly affect network performance in terms of throughput, loss, and delay.
References
[1] Haerri, J., Filali, F., Bonnet, C.: Mobility models for vehicular ad hoc networks: a survey
and taxonomy. IEEE Communications Surveys and Tutorials (epublication)
[2] Yan, G., Ibrahim, K., Weigle, M.C.: Vehicular network simulators. In: Olariu, S., Weigle,
M.C. (eds.) Vehicular Networks: From Theory to Practice. Chapman and Hall/CRC
(2009)
[3] Kurkowski, S., Camp, T., Colagrosso, M.: Manet simulation studies: the incredibles.
ACM SIGMOBILE Mobile Computing and Communications Review 9(4), 50–61 (2005)
[4] Antoniotti, M., Göllü, A.: SHIFT and SmartAHS: A language for hybrid systems engi-
neering, modeling, and simulation. In: Proceedings of the USENIX Conference of Do-
main Specific Languages, Santa Barbara, CA, pp. 171–182 (1997)
[5] Delorme, D., Song, B.: Human driver model for SmartAHS, Tech. rep., California PATH,
University of California, Berkeley (April 2001)
[6] Microsimulation of road traffic, http://www.traffic-simulation.de/
[7] Vanetmobisim project home page, http://vanet.eurecom.fr
[8] Canumobisim, http://canu.informatik.uni-stuttgart.de