Proceedings

2009 International Conference on Computer Engineering and Technology

ICCET 2009

Volume I


January 22 - 24, 2009 Singapore

Edited by Jianhong Zhou and Xiaoxiao Zhou

Sponsored by

International Association of Computer Science & Information Technology

Los Alamitos, California • Washington • Tokyo

Copyright © 2009 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries may photocopy beyond the limits of US copyright law, for private use of patrons, those articles in this volume that carry a code at the bottom of the first page, provided that the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923.

Other copying, reprint, or republication requests should be addressed to: IEEE Copyrights Manager, IEEE Service Center, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331.

The papers in this book comprise the proceedings of the meeting mentioned on the cover and title page. They reflect the authors' opinions and, in the interests of timely dissemination, are published as presented and without change. Their inclusion in this publication does not necessarily constitute endorsement by the editors, the IEEE Computer Society, or the Institute of Electrical and Electronics Engineers, Inc.

IEEE Computer Society Order Number P3521
BMS Part Number CFP0967F
ISBN 978-0-7695-3521-0
Library of Congress Number 2008909477

Additional copies may be ordered from:
IEEE Computer Society Customer Service Center
10662 Los Vaqueros Circle, P.O. Box 3014, Los Alamitos, CA 90720-1314
Tel: +1 800 272 6657, Fax: +1 714 821 4641
http://computer.org/cspress, csbooks@computer.org

IEEE Service Center
445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331
Tel: +1 732 981 0060, Fax: +1 732 981 9667
http://shop.ieee.org/store/, customer-service@ieee.org

IEEE Computer Society Asia/Pacific Office
Watanabe Bldg., 1-4-2 Minami-Aoyama, Minato-ku, Tokyo 107-0062 JAPAN
Tel: +81 3 3408 3118, Fax: +81 3 3408 3553
tokyo.ofc@computer.org

Individual paper REPRINTS may be ordered at: <reprints@computer.org>

Editorial production by Lisa O'Conner
Cover art production by Joe Daigle/Studio Productions
Printed in the United States of America by The Printing House

IEEE Computer Society

Conference Publishing Services (CPS)
http://www.computer.org/cps

2009 International Conference on Computer Engineering and Technology

ICCET 2009

Table of Contents
Volume - 1
Preface - Volume 1 ...........................................................................................................................................xiv
ICCET 2009 Committee Members - Volume 1..................................................................................xvi
ICCET 2009 Organizing Committees - Volume 1..........................................................................xvii

Session 1
Overlapping Non-dedicated Clusters Architecture .................................................................................................3
Martin Štava and Pavel Tvrdík

To Determine the Weight in a Weighted Sum Method for Domain-Specific Keyword Extraction ..................................................................................................................................................11
Wenshuo Liu and Wenxin Li

Flow-based Description of Conceptual and Design Levels ................................................................................16
Sabah Al-Fedaghi

A Method of Query over Encrypted Data in Database ........................................................................................23
Lianzhong Liu and Jingfen Gai

Traversing Model Design Based on Strong-Association Rule for Web Application Vulnerability Detection .......................................................................................................................28
Zhenyu Qi, Jing Xu, Dawei Gong, and He Tian

Attribute-Based Relative Ranking of Robot for Task Assignment ....................................................................32
B.B. Choudhury, B.B. Biswal, and R.N. Mahapatra

A Subjective Trust Model Based on Two-Dimensional Measurement .............................................................37
Chaowen Chang, Chen Liu, and Yuqiao Wang

A Genetic Algorithm Approach for Optimum Operator Assignment in CMS ................................................42
Ali Azadeh, Hamrah Kor, and Seyed-Morteza Hatefi


Dynamic Adaption in Composite Web Services Using Expiration Times .......................................................47
Xiaohao Yu, Xueshan Luo, Honghui Chen, and Dan Hu

An Emotional Intelligent E-learning System Based on Mobile Agent Technology .................................................................................................................................................................51
Zhiliang Wang, Xiangjie Qiao, and Yinggang Xie

Audio Watermarking for DRM Based on Chaotic Map ......................................................................................55
B. Lei and I.Y. Soon

Walking Modeling Based on Motion Functions ...................................................................................................60
Hao Zhang and Zhijing Liu

Preprocessing and Feature Preparation in Chinese Web Page Classification ..................................................64
Weitong Huang, Luxiong Xu, and Yanmin Liu

High Performance Grid Computing and Security through Load Balancing .....................................................68
V. Sugavanan and V. Prasanna Venkatesh

Research of the Synthesis Control of Force and Position in Electro-Hydraulic Servo System ..............................................................................................................................................................73
Yadong Meng, Changchun Li, Hao Yan, and Xiaodong Liu

Session 2
Features Selection Using Fuzzy ESVDF for Data Dimensionality Reduction ................................................81
Safaa Zaman and Fakhri Karray

PDC: Propagation Delay Control Strategy for Restricted Floating Sensor Networks .....................................................................................................................................................................88
Xiaodong Liu

Fast and High Quality Temporal Transcoding Architecture in the DCT Domain for Adaptive Video Content Delivery .....................................................................................................91
Vinay Chander, Aravind Reddy, Shriprakash Gaurav, Nishant Khanwalkar, Manish Kakhani, and Shashikala Tapaswi

Electricity Demand Forecasting Based on Feedforward Neural Network Training by a Novel Hybrid Evolutionary Algorithm ..........................................................................................98
Wenyu Zhang, Yuanyuan Wang, Jianzhou Wang, and Jinzhao Liang

Investigation on the Behaviour of New Type Airbag ........................................................................................103
Hu Lin, Liu Ping, and Huang Jing

Performance Evaluation of PNtMS: A Portable Network Traffic Monitoring System on Embedded Linux Platform ..................................................................................................................108
Mostafijur Rahman, Zahereel Ishwar Abdul Khalib, and R.B. Ahmad

PB-GPCT: A Platform-Based Configuration Tool .............................................................................................114
Huiqiang Yan, Runhua Tan, Kangyun Shi, and Fei Lu

A Feasibility Study on Hyperblock-based Aggressive Speculative Execution Model .........................................................................................................................................................................119
Ming Cong, Hong An, Yongqing Ren, Canming Zhao, and Jun Zhang


Parallel Method for Discovering Frequent Itemsets Using Weighted Tree Approach ...................................................................................................................................................................124
Preetham Kumar and Ananthanarayana V S

Optimized Design and Implementation of Three-Phase PLL Based on FPGA .............................................129
Yuan Huimei, Sun Hao, and Song Yu

Research on the Data Storage and Access Model in Distributed Environment .............................................134
Wuling Ren and Pan Zhou

An Effective Classification Model for Cancer Diagnosis Using Micro Array Gene Expression Data .............................................................................................................................................137
V. Saravanan and R. Mallika

Study and Experiment of Blast Furnace Measurement and Control System Based on Virtual Instrument ..................................................................................................................................142
Shufen Li and Zhihua Liu

A New Optimization Scheme for Resource Allocation in OFDMA Based WiMAX Systems .....................................................................................................................................................145
Arijit Ukil, Jaydip Sen, and Debasish Bera

An Integration of CoTraining and Affinity Propagation for PU Text Classification ............................................................................................................................................................150
Na Luo, Fuyu Yuan, and Wanli Zuo

Session 3
Ergonomic Evaluation of Small-screen Leading Displays on the Visual Performance of Chinese Users ...............................................................................................................................157
Yu-Hung Chien and Chien-Cheng Yen

Curvature-Based Feature Extraction Method for 3D Model Retrieval ...........................................................161
Yujie Liu, Xiaolan Yao, and Zongmin Li

A New Method for Vertical Handoff between WLANs and UMTS in Boundary Conditions ..........................................................................................................................................166
Majid Fouladian, Faramarz Hendessi, Alireza Shafieinejad, Morteza Rahimi, and Mahdi M. Bayat

Research on Secure Key Techniques of Trustworthy Distributed System .....................................................172
Ming He, Aiqun Hu, and Hangping Qiu

WebELS: A Multimedia E-learning Platform for Non-broadband Users .......................................................177
Zheng He, Jingxia Yue, and Haruki Ueno

Implementation and Improvement Based on Shear-Warp Volume Rendering Algorithm ..................................................................................................................................................................182
Li Guo and Xie Mei

Conferencing, Paging, Voice Mailing via Asterisk EPBX ................................................................................186
Ale Imran and Mohammed A. Qadeer

A New Mind Evolutionary Algorithm Based on Information Entropy ...........................................................191
Yuxia Qiu and Keming Xie


An Encapsulation Structure and Description Specification for Application Level Software Components ..................................................................................................................................195
Jin Guojie and Yin Baolin

Fault Detection and Diagnosis of Continuous Process Based on Multiblock Principal Component Analysis ..............................................................................................................................200
Libo Bie and Xiangdong Wang

Strong Thread Migration in Heterogeneous Environment ................................................................................205
Khandakar Entenam Unayes Ahmed, Md. Al-mamun Shohag, Tamim Shahriar, Md. Khalad Hasan, and Md. Mashud Rana

A DSP-based Active Power Filter for Three-phase Power Distribution Systems .........................................210
Ping Wei, Zhixiong Zhan, and Houquan Chen

Access Control Scheme for Workflow .................................................................................................................215
Lijun Gao, Lu Zhang, and Lei Xu

A Mathematical Model of Interference between RFID and Bluetooth in Fading Channel ......................................................................................................................................................................218
Junjie Chen, Jianqiu Zeng, and Yuchen Zhou

Optimization Strategy for SSVEP-Based BCI in Spelling Program Application ..........................................223
Indar Sugiarto, Brendan Allison, and Axel Gräser

Session 4
A Novel Method for the Web Page Segmentation and Identification .............................................................229
Jing Wang and Zhijing Liu

Disturbance Observer-Based Variable Structure Control on the Working Attitude Balance Mechanism of Underwater Robot ..........................................................................................232
Min Li and Heping Liu

Adaptive OFDM Vs Single Carrier Modulation with Frequency Domain Equalization ..............................................................................................................................................................238
Inderjeet Kaur, Kamal Thakur, M. Kulkarni, Daya Gupta, and Prabhjyot Arora

A Bivariate C1 Cubic Spline Space on Wang's Refinement .............................................................................243
Huan-Wen Liu and Wei-Ping Lu

Fast Shape Matching Using a Hybrid Model ......................................................................................................247
Gang Xu and Wenxian Yang

A Multi-objective Genetic Algorithm for Optimization of Cellular Manufacturing System ............................................................................................................................................252
H. Kor, H. Iranmanesh, H. Haleh, and S.M. Hatefi

A Formal Mapping between Program Slicing and Z Specifications ...............................................................257
Fangjun Wu

Modified Class-Incremental Generalized Discriminant Analysis ....................................................................262
Yunhui He

Controlling Free Riders in Peer to Peer Networks by Intelligent Mining .......................................................267
Ganesh Kumar. M, Arun Ram. K, and Ananya. A.R


Servo System Modeling and DSP Code Autogeneration Technology for Open-CNC ..........................................................................................................................................................272
Shukun Cao, Heng Zhang, Li Song, Changsheng Ai, and Xiangbo Ze

Extending Matching Model for Semantic Web Services ...................................................................................276
Alireza Zohali, Kamran Zamanifar, and Naser Nematbakhsh

Sound Absorption Measurement of Acoustical Material and Structure Using the Echo-Pulse Method ...........................................................................................................................................281
Liang Sun, Hong Hou, Liying Dong, and Fangrong Wan

Parallel Design of Cross Search Algorithm in Motion Estimation ..................................................................286
Fan Zhang

Influences of DSS Environments and Models on Current Business Decision and Knowledge Management ................................................................................................................................291
Md. Fazle Munim and Fatima Binte Zia

A Method for Transforming Workflow Processes to CSS ................................................................................295
Jing Xiao, Guo-qing Wu, and Shu Chen

Session 5
An Empirical Approach of Delta Hedging in GARCH Model .........................................................................303
Qian Chen and Chengzhe Bai

Multi-objective Parameter Optimization Technology for Business Process Based on Genetic Algorithm ..................................................................................................................................308
Bo Wang, Li Zhang, and Yawei Tian

Analysis and Design of an Access Control Model Based on Credibility ........................................................312
Chaowen Chang, Yuqiao Wang, and Chen Liu

Modeling of Rainfall Prediction over Myanmar Using Polynomial Regression ...........................................316
Wint Thida Zaw and Thinn Thu Naing

New Similarity Measure for Restricted Floating Sensor Networks .................................................................321
Yuan Feng, Xiaodong Liu, and Xiangqian Ding

3D Mesh Skeleton Extraction Based on Feature Points ....................................................................................326
Faming Gong and Cui Kang

Pairings Based Designated Verifier Signature Scheme for Three-Party Communication Environment ................................................................................................................................330
Han-Yu Lin and Tzong-Sun Wu

A Novel Shared Path Protection Scheme for Reliability Guaranteed Connection ................................................................................................................................................................334
Jijun Zhao, Weiwei Bian, Lirong Wang, and Sujian Wang

Generalized Program Slicing Applied to Z Specifications ................................................................................338
Fangjun Wu

PC Based Weight Scale System with Load Cell for Product Inspection ........................................................343
Anton Satria Prabuwono, Habibullah Akbar, and Wendi Usino


Short-Term Electricity Price Forecast Based on Improved Fractal Theory ....................................................347
Herui Cui and Li Yang

BBS Sentiment Classification Based on Word Polarity ....................................................................................352
Shen Jie, Fan Xin, Shen Wen, and Ding Quan-Xun

Applying eMM in a 3D Approach to e-Learning Quality Improvement ........................................................357
Kattiya Tawsopar and Kittima Mekhabunchakij

Research on Automobile Driving State Real-Time Monitoring System Based on ARM .....................................................................................................................................................................361
Hongjiang He and Yamin Zhang

Information Security Risk Assessment and Pointed Reporting: Scalable Approach ...................................................................................................................................................................365
D.S. Bhilare, A.K. Ramani, and Sanjay Tanwani

An Extended Algorithm to Enhance the Performance of the Gridbus Broker with Data Restoring Technique .............................................................................................................................371
Abu Awal Md. Shoeb, Altaf Hussain, Md. Abu Naser Bikas, and Md. Khalad Hasan

Session 6
Prediction of Ship Pitching Based on Support Vector Machines .....................................................................379
Li-hong Sun and Ji-hong Shen

The Methods of Improving the Manufacturing Resource Planning (MRP II) in ERP ........................................................................................................................................................................383
Wenchao Jiang and Jingti Han

A New Model for Classifying Inputs and Outputs and Evaluating the DMUs Efficiency in DEA Based on Cobb-Douglas Production Function ..................................................................390
S.M. Hatefi, F. Jolai, H. Kor, and H. Iranmanesh

The Analysis and Improvement of the Price Forecast Model Based on Fractal Theory ........................................................................................................................................................................395
Herui Cui and Li Yang

A Flash-Based Mobile Learning System for Learning English as Second Language ...................................................................................................................................................................400
Firouz B. Anaraki

Recognition of Trade Barrier Based on General RBF Neural Network ..........................................................405
Yu Zhao, Miaomiao Yang, and Chunjie Qi

An Object-Oriented Product Data Management .................................................................................................409
Fan Wang and Li Zhou

Study of 802.11 Network Performance and Wireless Multicasting .................................................................414
Biju Issac

A Novel Approach for Face Recognition Based on Supervised Locality Preserving Projection and Maximum Margin Criterion ....................................................................................419
Jun Kong, Shuyan Wang, Jianzhong Wang, Lintian Ma, Baowei Fu, and Yinghua Lu


Association Rules Mining Based on Simulated Annealing Immune Programming Algorithm .........................................................................................................................................424
Yongqiang Zhang and Shuyang Bu

Processing Power Estimation of Simple Wireless Sensor Network Nodes by Power Macro-modeling .....................................................................................................................................428
M. Rafiee, M.B. Ghaznavi-Goushchi, and B. Seyfe

A Fault-Tolerant Strategy for Multicasting in MPLS Networks ......................................................................432
Weili Huang and Hongyan Guo

A Novel Content-based Information Hiding Scheme ........................................................................................436
Jun Kong, Hongru Jia, Xiaolu Li, and Zhi Qi

Ambi Graph: Modeling Ambient Intelligent System .........................................................................................441
K. Chandrasekaran, I.R. Ramya, and R. Syama

Session 7
Research on Grid-based Short-term Traffic Flow Forecast Technology ........................................................449
Wang Xinying, Juan Zhicai, Liu Xin, and Mei Fang

A Nios II Based English Speech Training System for Hearing-Impaired Children .....................................................................................................................................................................452
Ningfeng Huang, Haining Wu, and Yinchen Song

A New DEA Model for Classification Intermediate Measures and Evaluating Supply Chain and its Members ..............................................................................................................................457
S.M. Hatefi, F. Jolai, H. Iranmanesh, and H. Kor

A Novel Binary Code Based Projector-Camera System Registration Method ..............................................462
Jiang Duan and Jack Tumblin

Non-temporal Multiple Silhouettes in Hidden Markov Model for View Independent Posture Recognition ..........................................................466
Yunli Lee and Keechul Jung

Classification of Quaternary [21s+1,3] Optimal Self-orthogonal Codes ........................................................471
Xuejun Zhao, Ruihu Li, and Yingjie Lei

Performance Analysis of Large Receive Offload in a Xen Virtualized System ...........................................475
Hitoshi Oi and Fumio Nakajima

An Improved Genetic Algorithm Based on Fixed Point Theory for Function Optimization .............................................................................................................................................................481
Jingjun Zhang, Yuzhen Dong, Ruizhen Gao, and Yanmin Shang

Example-Based Regularization Deployed to Face Hallucination ....................................................................485
Hong Zhao, Yao Lu, Zhengang Zhai, and Gang Yang

An Ensemble Approach for Semantic Assessment of Summary Writings .....................................................490
Yulan He, Siu Cheung Hui, and Tho Thanh Quan

A Fast Reassembly Methodology for Polygon Fragment ..................................................................................495
Gang Xu and Yi Xian


A Data Mining Approach to Modeling the Behaviors of Telecom Clients ....................................................500
Xiaodong Liu

Simulating Fuzzy Manufacturing System: Case Study ......................................................................................505
A. Azadeh, S.M. Hatefi, and H. Kor

Research of INS Simulation Technique Based on UnderWater Vehicle Motion Model .........................................................................................................................................................................510
Jian-hua Cheng, Yu-shen Li, and Jun-yu Shi

Modeling and Simulation of Wireless Sensor Network (WSN) with SpecC and SystemC .............................................................................................................................................................515
M. Rafiee, M.B. Ghaznavi-Ghoushchi, S. Kheiri, and B. Seyfe

Session 8
Sub-micron Parameter Scaling for Analog Design Using Neural Networks ..................................................523
A.A. Bagheri-Soulla and M.B. Ghaznavi-Ghoushchi

An Improved Genetic Algorithm Based on Fixed Point Theory for Function Optimization .............................................................................................................................................................527
Jingjun Zhang, Yuzhen Dong, Ruizhen Gao, and Yanmin Shang

P2DHMM: A Novel Web Object Information Extraction Model ....................................................................531
Jing Wang and Zhijing Liu

An Efficient Multi-Patterns Parameterized String Matching Algorithm with Super Alphabet ................................................................................................................................................536
Rajesh Prasad and Suneeta Agarwal

Research on Modeling Method of Virtual Enterprise in Uncertain Environments ............................................................................................................................................................541
Jihai Zhang

Design of Intrusion Detection System Based on a New Pattern Matching Algorithm ..................................................................................................................................................................545
Hu Zhang

To Construct Implicit Link Structure by Using Frequent Sequence Miner (FS-Miner) ................................................................................................................................................................549
May Thu Aung and Khin Nwe Ni Tun

Recognition of Eye States in Real Time Video ...................................................................................................554
Lei Yunqi, Yuan Meiling, Song Xiaobing, Liu Xiuxia, and Ouyang Jiangfan

Performance Analysis of Postprocessing Algorithm and Implementation on ARM7TDMI .......................................................................................................................................................560
Manoj Gupta, B.K. Kaushik, and Laxmi Chand

NURBS Interpolation Method with Feedrate Correction in 3-axis CNC System .........................................565
Liangji Chen and Huiying Li

Implementation Technique of Unrestricted LL Action Grammar ....................................................................569
Jing Zhang and Ying Jin


Improving BER Using RD Code for Spectral Amplitude Coding Optical CDMA Network ..........................................................573
Hilal Adnan Fadhil, S.A. Aljunid, and R. Badlishah Ahmad

USS-TDMA: Self-stabilizing TDMA Algorithm for Underwater Wireless Sensor Network ..........................................................578
Zhongwen Guo, Zhengbao Li, and Feng Hong

Mathematical Document Retrieval for Problem Solving ..........................................................583
Sidath Harshanath Samarasinghe and Siu Cheung Hui

Lossless Data Hiding Scheme Based on Adjacent Pixel Difference ..........................................................588
Zhuo Li, Xuezeng Pan, Xiaoping Chen, and Xianting Zeng

Author Index - Volume 1 ..........................................................593

Preface

Dear Distinguished Delegates and Guests,

The Organizing Committee warmly welcomes our distinguished delegates and guests to the International Conference on Computer Engineering and Technology 2009 (ICCET 2009), held on January 22 - 24, 2009 in Singapore. ICCET 2009, ICACC 2009 and ICECS 2008 are sponsored by the International Association of Computer Science and Information Technology (IACSIT) and the Singapore Institute of Electronics (SIE). They are organized to gather members of our international community of computer and control scientists so that researchers from around the world can present their leading-edge work, expanding our community's knowledge and insight into the significant challenges currently being addressed in that research. The conference aims to bring together researchers, scientists, engineers, and practitioners to exchange and share their experiences, new ideas, and research results about all aspects of the main conference themes and tracks, and to discuss the practical challenges encountered and the solutions adopted.

If you have attended a conference sponsored by IACSIT before, you are aware that the conferences together report the results of research efforts in a broad range of computer science. These conferences are aimed at discussing with all of you the wide range of problems encountered in present and future high technologies. The main goal of these events is to provide international scientific forums for the exchange of new ideas in a number of fields that interact in-depth through discussions with their peers from around the world. The main conference themes and tracks are Computer Engineering and Technology. Both inward research, in core areas of computer control, and outward research, multi-disciplinary, inter-disciplinary, and applications, will be covered during these events.

The conference has solicited and gathered technical research submissions related to all aspects of major conference themes and tracks. After the rigorous peer-review process, the submitted papers were selected on the basis of originality, significance, and clarity for the purpose of the conference. All the submitted papers in the proceeding have been peer reviewed by reviewers drawn from the scientific committee, external reviewers and the editorial board, depending on the subject matter of the paper. Reviewing and initial selection were undertaken electronically. This proceeding records the fully refereed papers presented at the conference, and the accepted papers of ICECS 2008 have been included in the ICCET proceeding as a special session.

The conference program is extremely rich, featuring high-impact presentations. The selected papers and additional late-breaking contributions to be presented as lectures will make an exciting technical program. The conference Program Committee is itself quite diverse and truly international, with membership from the Americas, Europe, Asia, Africa and Oceania.

The high quality of the program – guaranteed by the presence of an unparalleled number of internationally recognized top experts – can be assessed when reading the contents of the program. The program has been structured to favor interactions among attendees coming from many diverse horizons: scientifically, geographically, from academia and from industry. Included in this will to favor interactions are social events at prestigious sites. The conference will therefore be a unique event, where attendees will be able to appreciate the latest results in their field of expertise and to acquire additional knowledge in other fields.

We hope that all participants and other interested readers benefit scientifically from the proceedings and also find it stimulating in the process. We are grateful to all those who have contributed to the success of ICCET 2009, ICACC 2009 and ICECS 2008 in Singapore. We would like to thank the program chairs, the organization staff, and the members of the program committees for their work. Thanks also go to Ms. Lisa O'Conner, CPS Production Editor, IEEE Computer Society Conference Publishing Services (CPS), for her wonderful editorial service to this proceeding.

Finally, we would like to wish you success in your technical presentations and social networking. We hope you have a unique, rewarding and enjoyable week at ICCET 2009.

With our warmest regards,

Yi Xie
January 22 - 24, 2009
Singapore

ICCET 2009 Committee Members

V. Saravanan, Karunya University, India
Yi Xie, Cagayan State University, Philippines
Sevaux Marc, University of South-Brittany, France
Amir Masoud Rahmani, Islamic Azad University, Iran
Yang Laurence T., St. Francis Xavier University, Canada
Hrudaya Ku Tripathy, Institute of Advanced Computer and Research, India
Narasimhan V. Lakshmi, University of Newcastle, Australia
Jinlong Wang, Qingdao Technological University, China
Mahanti Prabhat Kumar, University of New Brunswick, Canada
Lau Bee Theng, Swinburne University of Technology Sarawak, Malaysia
Poramate Manoonpong, University of Gottingen, Germany
Tahseen A. Jilani, University of Karachi, Pakistan
Qian Chen, Columbia University, USA
Wen-Tsao Pan, Jinwen University of Science and Technology, China (Taiwan)
Gopalakrishnan Kasthurirangan, Iowa State University, USA
Anupam Shukla, Indian Institute of Information Technology, India
Gunter Glenda A., University of Central Florida, USA
Zhihong Xiao, Zhejiang Wanli University, China
Amrita Saha, West Bengal University of Technology, India
Wei Guo, Tianjin University, China

ICCET 2009 Organizing Committees

Honor Chairs
R.C. Eberhart, Purdue University, USA
A. Kandel, University of South Florida, USA
J.D. Pinter, Dalhousie University, Canada

Conference Chairs
S.R. Bhadra Chaudhuri, Bengal Engineering and Science University, India
Jianhong Zhou, Sichuan University
Xiaoxiao Zhou, Nanyang Technological University, Singapore

Conference Steering Committee
Yi Xie, Cagayan State University, Philippines
Hoang Huu Hanh, Hue University, Vietnam
Kamaruzaman Jusoff, Yale University, USA

Program Committee Chairs
S. Aqil Burney, University of Karachi, Pakistan
Nazir Ahmad Zafar, University of Central Punjab, Pakistan
Nashat Mansour, Lebanese American University, Lebanon

Publicity Chairs
Basim Alhadidi, Al Balqa' Applied University, Jordan
M. Aqeel Iqbal, Foundation University, Pakistan
Brian B. Shinn, Chungbuk National University, Korea


International Conference on Computer Engineering and Technology

Session 1


2009 International Conference on Computer Engineering and Technology

Overlapping Non-Dedicated Clusters Architecture

Martin Štava and Pavel Tvrdík
Department of Computer Science and Engineering
Czech Technical University in Prague
Prague, Czech Republic
{stavam2,tvrdik}@fel.cvut.cz

Abstract—Non-dedicated computer clusters promise more efficient resource utilization than conventional dedicated clusters. Existing non-dedicated clustering solutions either expect trust among participating users, or they do not take into account the possibility of running multiple independent clusters on the same set of computers. In this paper, we argue how the ability to run multiple independent clusters without requiring trust among participating users can be capitalized on to increase user experience and thus attract more users to participate in the cluster. A generic extension of non-dedicated clusters that satisfies these requirements is defined, and the feasibility of one particular extension is demonstrated on our implementation.

I. INTRODUCTION

Clusters built from commodity computers are a popular computational platform. They are used as a cost-effective alternative to expensive supercomputers [1], [2], as a scalable, highly available solution for commercial applications [3], as well as load leveling clusters for ordinary day-to-day use [4]. The most common form of clustering are dedicated clusters, in which all machines are fully dedicated to the cluster. The concept of traditional dedicated clusters was extended by several existing projects [5]–[8] to support the utilization of non-dedicated computers. These projects rely on users offering their idle computers to participate in a cluster.

Methods of attracting users to participate vary. In some environments, like university laboratories, the participation may be enforced by a system administrator. Some projects offer only a good feeling from the participation. The most interesting method used, however, seems to be a reciprocal offer of cluster computing power to volunteering users. In this case, the users are granted computing power of the cluster proportional to the power they have given to the cluster. In case there is just a single instance of a non-dedicated cluster running, the volunteers cannot use the earned cluster computing power directly from their machine; instead, they first need to log in to the cluster and perform their resource demanding computations there. Such a scenario is clearly not well suited for the volunteering users. For example, a user who is about to perform a parallel compilation on his machine would like to use his granted cluster time, but he cannot, since he can perform the cluster operations only from the cluster itself, not from his machine. On the other hand, if there is support for coexistence of multiple clusters, the machines of volunteering users can form their own clusters and the users can use the granted cluster time transparently from their machines.

The second important aspect for attracting users to participate in non-dedicated clusters is the trust relationship. Users offering their computers as non-dedicated computing resources should not be required to fully trust the cluster administrators, and neither should the administrators be required to trust the users. Any such trust requirement complicates forming and expansion of the cluster. The reciprocal computing power trading, however, needs a well suited cluster architecture.

In this paper, we first briefly review the most important existing architectures. Then we present a relaxation of existing architecture concepts and argue about its advantages over existing systems. The feasibility of the architecture is demonstrated for one particular case on our research clustering solution called Clondike [8].

II. SCOPE

We primarily focus on clusters attempting to provide a single system image (SSI) illusion at the operating system level. This is a reasonable limitation, since the motivation for developing SSI clusters is, similarly as in our architecture, in improving user experience.

III. EXISTING ARCHITECTURES

The most common form of clustering are dedicated clusters. In these clusters, all machines are fully dedicated to the cluster: they all share user account-space, process space, and file system. Kerrighed [9] or OpenSSI [10] are well known examples of such clusters.

An extension of dedicated clusters are non-dedicated clusters [5]–[8]. These clusters consist of one or more dedicated machines, usually standard workstations, forming a core of the cluster, and any number of non-dedicated machines. The non-dedicated machines can join or leave the cluster at any time, and their local environment is kept separated from the cluster environment. This separation is often achieved by running the cluster code inside virtual machines running on the non-dedicated machines.
These projects rely on users offering their idle computers to participate in a cluster. Existing non-dedicated clustering solutions either expect trust among participating users. but he can not. and any number of non-dedicated machines. S COPE We primarily focus on clusters attempting to provide a single system image (SSI) illusion at the operating system level. In case there is just a single instance of non-dedicated cluster running. since the motivation for developing SSI clusters is. They are used as a cost-effective alternative to expensive supercomputers [1]. II. the participation may be enforced by a system administrator. instead they first need to login to the cluster and perform their resource demanding computations there. usually standard workstations. Then we present a relaxation of existing architecture concepts and argue abouts its advantage over existing systems. I NTRODUCTION Clusters build from commodity computers are a popular computational platform. like university laboratories. similarly as in our architecture. The non-dedicated machines can join or leave cluster at any time. since he can perform the cluster operations only from the cluster itself.

In addition. Because of the nature of the proposed solution. We will refer to these blocks uniformly as nodes in both cases. . Examples of projects supporting the multi-cluster architecture are Mosix2 [11] or LSF [12]. By forming a core we mean that it defines its account space. Modern grid solutions. Figure 2. This is not really an architecture of a single cluster. Another significant architecture are multi-clusters. [14]. P ROPOSED ARCHITECTURE Figure 1. The main difference between VO and ONDC concept is that the virtual organizations are designed for a mutual cooperation Non-dedicated clusters are an extension of dedicated clusters. In this section. All nodes belong to a single cluster. but the key factor is that each node forms a core of its own SSI cluster. Mosix with its architecture is very close to the ONDC with all nodes as single machines. every node can use the others as its non-dedicated nodes. but they are still assumed to share the user account-space. Multi-clusters are as well a special case of ONDC. thus it is a super-set of both. current projects require trust among the participating clusters. A basic building block of the proposed cluster can be either a single machine or a dedicated SSI cluster. An interesting alternative to standard architectures is represented by the openMosix [4]/Mosix2 [11] projects. V. IV. User having a few machines at home would likely not use any of grid solutions to interconnect them. and 3 4 Figure 3. All clusters are isolated from each other. The nodes can be connected in an arbitrary way. ONDC can be useful as well for a local resource sharing. its own view of SSI still exists. As implied by the definition. The existing grid solutions are close to ONDC especially when ONDC is used for a large scale deployment in a distributed area. C OMPARISON WITH OTHER ARCHITECTURES illustrate schematically the difference among the three types of clusters. like XtreemOS [15]. and process space. Moreover.is often achieved by running the cluster code inside virtual machines running on the non-dedicated machines. its SSI view is fully separated from the SSI view of the node that is using this node as its non-dedicated node. In contrast. 2. all SSI views participated by the node. These two attributes imply a need for strong security model of the architecture implementation. There can be mixed environments. there should be no requirement for trust among participating nodes. but its architecture seems to be driven more by technical aspects than by an intentional design. Similarly as Mosix. A node can be used as a non-dedicated node by more than one node. ONDC. The biggest problem with Mosix is that it requires a full trust among all participating nodes (and it assumes either shared user account space or a consensus on mapping of user ids). The main difference is that the grid solutions are primarily targeted only on large scale deployments. where some blocks are single machines and some are clusters. In that case. Non dedicated cluster. we define our envisioned architecture. including the local view. Multi-clusters are gaining popularity during last years as a next logical step towards an idealized grid solution. The ONDC is an extension of the non-dedicated clusters. distinguishing explicitly between clusters and single machines where required. using each other as its non-dedicated node. we will refer to it in the paper as an overlapping non-dedicated cluster (ONDC). Figures 1. but the term refers to clusters interconnected together. 
V. COMPARISON WITH OTHER ARCHITECTURES

Non-dedicated clusters are an extension of dedicated clusters, and the ONDC is an extension of the non-dedicated clusters, thus it is a super-set of both.

Another significant architecture are multi-clusters. Multi-clusters have been gaining popularity during the last years as a next logical step towards an idealized grid solution. This is not really an architecture of a single cluster; the term rather refers to clusters interconnected together. Examples of projects supporting the multi-cluster architecture are Mosix2 [11] or LSF [12]. Cluster machines do not share a common file-system, but they are still assumed to share the user account-space. Users of a cluster are not provided standard SSI features, but rather a different SSI depending on the machine they are logged in. Moreover, the current projects require trust among the participating clusters. Multi-clusters are as well a special case of ONDC, in which every node is itself a cluster and every node can use the others as its non-dedicated nodes.

An interesting alternative to the standard architectures is represented by the openMosix [4]/Mosix2 [11] projects. Mosix with its architecture is very close to the ONDC with all nodes as single machines, but its architecture seems to be driven more by technical aspects than by an intentional design. The biggest problem with Mosix is that it requires a full trust among all participating nodes (and it assumes either a shared user account space or a consensus on the mapping of user ids).

The concept of overlapping clusters is similar to the virtual organization mechanisms (VO) [13], [14]. Modern grid solutions, like XtreemOS [15], are often based on VOs. The main difference between the VO and ONDC concepts is that the virtual organizations are designed for a mutual cooperation agreement and some degree of trust, while our solution does not require any relation among cluster users. The existing grid solutions are close to ONDC especially when ONDC is used for a large scale deployment in a distributed area. The main difference is that the grid solutions are primarily targeted only on large scale deployments, which may be a limiting factor in real deployments. ONDC can be useful as well for local resource sharing: a user having a few machines at home would likely not use any of the grid solutions to interconnect them, but an ONDC cluster may be a good candidate for that.

VI. ADVANTAGES OF THE ONDC OVER OTHER ARCHITECTURES

The key advantage of the ONDC architecture is a unique combination of a system without trust requirements and the ability to form a separate cluster from each participating node. This combination of features has a high potential for attracting users to participate in the cluster. By allowing a coexistence of multiple independent clusters, we enable a natural user rewarding mechanism, since users can be rewarded for their offered time with a proportional cluster computing power. By relaxing the trust requirement, ONDC based solutions could possibly attract a larger user base: if there are trust requirements, users generally have to undergo some registration and possibly a (mutual) trust review process, which itself may be sufficient to deter users from joining. In ONDC, users can easily join the cluster. The ability of each node to form its own cluster (and hence export its file system) is another factor contributing to easy expansion of a cluster.

An important architectural advantage of the ONDC architecture is a better fault-tolerance with respect to standard non-dedicated clusters. Fault-tolerance in standard non-dedicated clusters relies on the fault-tolerance mechanisms of its dedicated core. When the core fails, the whole cluster fails. In ONDC, when some core fails, the cluster formed by this core stops to work, but all other clusters are still functional (they just possibly lose some processes running on the crashed node). Clearly, this does not increase the fault-tolerance of any single cluster in the ONDC. But the non-dedicated clusters are generally based on the idea of utilizing idle machines, and the ONDC allows continuous utilization of those idle machines even in the presence of some cluster failures.

Another advantage of allowing each node to form its own cluster is a natural option of coexistence of different installations of clusters (even with conflicting versions of software installed on them) on the same physical hardware.
Another advantage of allowing each node to form its own cluster is a natural option of coexistence of different installations of clusters (even with conflicting versions of software installed on them) on the same physical hardware. In ONDC. But the non-dedicated clusters are generally based on the idea of utilizing idle machines and the ONDC allows continuous utilization of those idle machines even in presence of some cluster failures. indeed. All of the mentioned examples can coexist and cooperate as a single instance of ONDC. the computers in the laboratory can use his laptop. The ONDC architecture targets on using commodity computers. By allowing a coexistence of multiple independent clusters we enable a natural user rewarding mechanism. we will describe a few possible use-cases of ONDC in this section. ONDC can contribute a lot to such environments. For the rest of the time most of the cores would be idle. For example. since users can be rewarded by their offered time with the proportional cluster computing power. Currently. or there is some non-transparent job scheduling system. which itself may be sufficient to deter users from joining. U SE CASES In order to better illustrate the architecture. Another use case of ONDC are the multi-clusters. R ELATIONSHIP WITH MULTI CORE COMPUTING Any project being developed should plan for the future as well. where the users can send their jobs to be processed. It is always hard to predict future hardware evolution. Most of the cores would be utilized for a limited time when cpu intensive parallelized tasks are running. 5 The smaller scale example can be a university computer laboratory. users can easily join the cluster. VI. but whole clusters and multi-clusters can be connected and offer their spare computing power. These are just an illustrative examples. this does not increase fault-tolerance of any cluster in the ONDC. he can simply plug it to the network. each computer can form its own cluster. with ONDC not only single volunteer computers can be connected. based on the assumption that the users does not know each other and have a very limited trust among themselves. and start using the other computers as non-dedicated nodes of a cluster based on his laptop (and of course. There is a clear demand for such computing platforms.

we would like them to use all other nodes as detached nodes.power at the time of high CPU demand. These modules implement most of the lowest level functionality required for process migration and actually the process migration support itself. Second related extension is an ability to act as a detached node of multiple independent clusters. Scheduling As outlined in previous sections. A. while Beta will use Alfa and Gamma as detached nodes. changes required to meet the ONDC and how we address 2 key topics . IX. C. Clearly. but this is more technical question and is out of the scope of this paper). The userspace part performs all tasks that do not need to be directly in kernel. but it already supports most of the requirements on such systems. Clondike The original idea of Clondike [8]. network. Therefore Alfa will use Beta and Gamma as detached nodes.) and the cpu is not the only resource consumed by a running application (memory usage. Clusters based on Clondike consist of one dedicated node. disk I/O. it can act as a core of its own cluster. Changes required to support ONDC Clondike was since the beginning designed to allow cooperation of untrusted parties. an extension to standard Clondike system was required. The usage of detached nodes is continuously monitored by the core node and if there is some opportunity to migrate a core node local process to idle detached node. Clondike is still in an experimental research phase. This is technically a most complicated part and a description of this implementation is out of the scope of the paper. Users who work mostly with such programs would have some of their cores idle for most of the time. I MPLEMENTATION The proposed ONDC architecture is quite generic and there may be many implementations. system signals (for requesting migration) and as well using a standard linux kernel netlink sockets when a bidirectional interaction is required (for example non-preemptive scheduling decisions on process execution).). power consumption. the main advantage of overlapping clusters support is a possibility of a scheduling based on reciprocal computing power exchange. The implementation details can be found in [17]. In this section we will briefly describe our system and its technical background. It makes use of the kernel part of the system. the biggest problem here can be support for distributed shared memory and I/O bounded tasks. . we have started development of our own implementation of the architecture. With the migration support. This way. etc. since the usage of another core is not for free (cache conflicts. The second factor contributing to non-dedicated clusters generally is the observation that some programs are not easily parallelizable (either due to the nature of algorithms or due to a prohibiting complexity of such a parallelization). it is possible to utilize detached nodes that would sit idle otherwise. Machines of these users are good candidates to participate as nondedicated nodes in any non-dedicated clustering solution. while still offering its resources to another cluster. the migration mechanisms are used. Finally. To verify our ideas. called core node and a number of non-dedicated nodes called detached nodes. and Gamma) each of them forming its cluster. especially to measure impact of such a sharing. there is a userspace part of the system. there can be often some spare resources available in the network (of course. This is a natural requirement. Technical background The implementation consists of 3 parts. 
This is a typical setup of any non-dedicated cluster. A key feature of Clondike is a support for both preemptive and non-preemptive process migration based on process checkpointing. Mosix multi cluster solution has some support for overlapping clusters and the market based schedulers are one of the scheduling options used [21]. [19] studied in a standard non-dedicated Clondike environment [20] seems to be a promising candidate for our goals. if we have for example cluster of 3 nodes (Alfa. interacting with it via a special control file system (exporting cluster specific data about processes). Economy inspired market based schedulers [18]. From a practical point of view (coding and debugging) it is a big advantage to put as much functionality as possible to userspace.scheduling and security/trust handling. to address the trust less nodes cooperation requirements. The patch consist mostly of a simple hooks for second part of the system that are a kernel modules. so this functionality did not require any modification to match ONDC requirements. Gamma is then acting as a detached node for 2 independent clusters (and as a core node of its own cluster). monitoring or information distribution. D. more research is needed in this area. 6 B. [17] was to implement a first non-dedicated clustering solution based on the Linux operating system. The lowest level is a kernel patch. The system needs to allow coexistence of a core node and a detached node on a single physical machine. however. The existing projects closest to the ONDC architecture are multi-clusters and Mosix. They fail. etc. bus contention. Beta. like scheduling. Assuming that the limited high cpu demand periods do not always overlap for all machines. In order to allow overlapping clusters. that is kept as minimal as possible so that upgrades to new kernels are not unduly complicated. although some may contain a whole dedicated cluster as their core.

This strategy is very simple and has many problems in real life. Obviously, unlike the market strategies, it does not have any fairness guarantees. Moreover, it is usable only at the smallest scale and only for scheduling of sufficiently independent processes (i.e., not for collections of closely cooperating, highly dependent processes like, for example, an MPI application). Such processes may need some co-scheduling [22] techniques to perform well. Nonetheless, it served well for system testing, as will be demonstrated in the Performance section. Since the scheduler in Clondike resides completely in userspace, it is a much easier task to implement a new scheduler than in other similar systems.

E. Security and trust

The security functionality and trust management of the original Clondike system is directly applicable in the ONDC environment. In this section, we briefly review the mechanisms used. By the ONDC definition, there should be no trust requirement between node owners. Since non-dedicated clusters span administrative domain boundaries and they are potentially used in an untrusted network environment, the line security must be ensured. Clondike is currently relying on establishing IPSEC channels between nodes to provide transparent link level security.

From the security point of view, the architecture implies two main possible classes of attacks. First, an owner of a machine forming a cluster core node can try to send malicious code to a remote machine to get access to that node. Second, the owner of a machine acting as a detached node can read anything from the memory of cluster processes running on his machine; the owner of the detached node has superuser privileges, so he can do basically everything, including altering those processes' memory and code. The first attack is technically easy to prevent: the processes of cluster users run with the least possible privileges on the detached nodes, and thus the detached node is protected against these processes. There is no reliable way to prevent the second type of attacks. In some special cases, the results of remote execution can be quickly verified on the core node.

Our approach to this issue is based on deferring the security critical decisions to the cluster users. Each user can specify what programs can run on what machine; the details can be found in [23]. In addition, the user can specify the files from his file system that can be accessed from other nodes. All nodes for which no specific rules are specified are considered of a same trust level; we refer to them as anonymous nodes. The anonymous nodes can as well participate in the cluster: the users can use them for executing processes whose results can be easily verified, or processes that perform operations with non-sensitive data.

To illustrate the decisions the user can make, we will use an example with 3 nodes Alfa, Beta, and Gamma. A user on Alfa can trust the owner of Beta, but trust less (or perhaps not even know about) machine Gamma. The user would specify that any process can be migrated to Beta and that Beta can access any file on his file system. The user on Alfa has as well some process that performs resource demanding, but easily verifiable operations (NP-complete calculations are one possible example). He can then specify that this process can be executed on any node. The process may need to write its result somewhere, so the user also specifies the files that can be accessed from the other nodes, so that the result can be written.

The user defined restrictions have to be obeyed by the scheduler. The scheduler, however, cannot always know which files are going to be accessed by a process being migrated to a detached node. So the processes on the detached nodes must be monitored for file system access requests, and when a violating access is detected, an action must be taken. The action can be either a rollback to a previous process checkpoint, taken before migration to the detached node, or a process termination. Migration back to the core node is not an option.

The last problem in the user defined restrictions enforcement is a direct result of the preemptive migration support. Thanks to the migration capabilities, the process can visit many nodes during its lifetime. This means that the file system access violation checks must not consider only the node from which they are being performed, but all nodes that were visited during the process execution. If any of the visited nodes does not have the required privilege, the process is terminated. The problem can be illustrated again on our 3 nodes cluster example. When a non-sensitive process is migrated to the untrusted node Gamma, the owner can modify the process; for example, he can change it to delete all accessible files. If the process gets to execute these commands on Gamma, it will be eventually terminated due to an access violation. However, in case the process migrates back to Alfa before executing the deletion code, and the process visit history were not consulted, it would be allowed to execute and all accessible files would be deleted. A solution to this (and related) problems is discussed in [24], and a stigmata mechanism is proposed.
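The visited-nodes rule admits a compact sketch: every node a process has executed on taints it, and a file access is allowed only if every visited node was granted that file. This is only our illustration of the idea; the rule table and the function are hypothetical, not the stigmata mechanism of [24] itself.

```python
# Illustration of the visited-nodes check described above: an access succeeds
# only if EVERY node the process has visited holds the privilege. The rule
# table and this API are hypothetical, not the mechanism of [24].

def may_access(path, visited_nodes, allowed):
    """allowed maps a node name to the set of paths its guests may touch."""
    return all(path in allowed.get(node, set()) for node in visited_nodes)

allowed = {
    "Alfa": {"/home/user/result"},   # the core node itself
    "Beta": {"/home/user/result"},   # trusted node, granted the result file
    "Gamma": set(),                  # untrusted node, granted nothing
}

# A process that ran only on Alfa and Beta may write its result:
assert may_access("/home/user/result", {"Alfa", "Beta"}, allowed)
# Once it has visited Gamma, the same access is refused, and the process is
# rolled back to a pre-migration checkpoint or terminated:
assert not may_access("/home/user/result", {"Alfa", "Beta", "Gamma"}, allowed)
```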
D. Security and trust

The security functionality and trust management of the original Clondike system is directly applicable in the ONDC environment. In this section we briefly review the mechanisms used; details can be found in [23]. Since non-dedicated clusters span administrative domain boundaries and are potentially used in an untrusted network environment, link security must be ensured. Clondike currently relies on establishing IPsec channels between nodes to provide transparent link-level security.

By the ONDC definition, there should be no trust requirement between node owners. From the security point of view, this implies two main possible classes of attacks. First, an owner of a machine forming a cluster core node can try to send malicious code to a remote machine to get access to that node. This attack is technically easy to prevent: the processes of cluster users run with the least possible privileges on the detached nodes, so the detached node is protected against these processes (they are in fact isolated from the cluster environment as much as possible). Second, the owner of a machine acting as a detached node can read anything from the memory of cluster processes running on his machine, and he can alter those processes' memory and code. There is no reliable way to prevent this second type of attack, where the detached node owner reads or modifies cluster processes running on his machine. Our approach to this issue is based on deferring the security-critical decisions to the cluster users: each user can specify what programs can run on what machine.
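The user-defined placement rules can be pictured as a simple lookup consulted before each migration; the sketch below is ours and only illustrates the idea, not Clondike's actual rule format:

```python
# Illustrative per-user placement rules: which programs may run on which nodes.
rules = {
    ("make", "Beta"): True,     # compilation jobs may migrate to Beta
    ("payroll", "Beta"): False, # sensitive jobs must stay home
}

def may_migrate(program, target, default=False):
    """Defer the security-critical decision to the user's rule table."""
    return rules.get((program, target), default)

assert may_migrate("make", "Beta") and not may_migrate("payroll", "Beta")
```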

E. Anonymous nodes

Nodes whose owners are unknown can participate in the cluster as well; we refer to them as anonymous nodes. Since no trust can be established in them, the users can use them for executing processes whose results can be easily verified, or for processes that perform operations only on non-sensitive data. In some special cases the results of remote execution can be quickly verified on the core node; resource-demanding but easily verifiable operations (NP-complete calculations are one possible example) are a good fit.

To illustrate the decisions a user can make, we will use an example with 3 nodes: Alfa, Beta, and Gamma. A user on Alfa can trust the owner of Beta, but not trust (or perhaps not even know about) machine Gamma. The user on Alfa also has some process that performs a resource-demanding but easily verifiable operation; he can specify that this process may be executed on any node. In addition, the user can specify which files from his file system can be accessed from other nodes. For example, he could specify that any process can be migrated to Beta and that Beta can access any file on his file system. All nodes for which no specific rules are specified are considered to be of the same trust level.

The user-defined restrictions have to be obeyed by the scheduler. The scheduler, however, cannot always know which files are going to be accessed by a process being migrated to a detached node. So the processes on the detached nodes must be monitored for file system access requests, and when a violating access is detected, an action must be taken. The action can be either a rollback to a previous process checkpoint, taken before the migration to the detached node, or a process termination. Migration back to the core node is not an option, since the process may already have been altered by the owner of the detached node.

The last problem in the enforcement of user-defined restrictions is a direct result of the preemptive migration support. Thanks to the migration capabilities, a process can visit many nodes during its lifetime, and an owner of any of the visited detached nodes can alter the process being executed; for example, he can change it to delete all accessible files. The problem can be illustrated again on our 3-node cluster example. Suppose a non-sensitive process is migrated to the untrusted node Gamma, where the owner modifies it in this way. The process may need to write its result somewhere, so the user may have given write access to some restricted part of his file system to any node, so that the result can be written. If the process gets to the execution of the deletion commands on Gamma, it will eventually be terminated due to an access violation. But if the process migrates back to Alfa before executing the deletion code, and the process visit history is not consulted, the deletion would be allowed to execute and all accessible files would be deleted. This means that the file system access violation checks must not consider only the node from which the access is being performed, but all nodes that were visited during the process execution. A solution to this (and related) problems is discussed in [24], where a stigmata mechanism is proposed: each process is marked by a stigma of every node visited, and any sensitive operation is checked against the user-defined rules for all stigmata. If any of the visited nodes does not have the required privilege, the process is terminated.
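A minimal sketch of the stigmata check (ours, for illustration; [24] defines the actual mechanism): a sensitive operation is validated against the privileges of every visited node, not only the current one:

```python
# Illustrative stigmata check: a file access is allowed only if every node
# the process has visited is privileged for that file.
def access_allowed(stigmata, path, privileges):
    """stigmata: set of visited node names; privileges: node -> allowed paths."""
    return all(path in privileges.get(node, set()) for node in stigmata)

privileges = {"Alfa": {"/home/user/results"}, "Beta": {"/home/user/results"}}
process_stigmata = {"Alfa", "Beta", "Gamma"}   # process visited untrusted Gamma

# Gamma has no privilege for the path, so the access is denied even if the
# write is attempted after the process migrates back to Alfa.
assert not access_allowed(process_stigmata, "/home/user/results", privileges)
```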

X. PERFORMANCE

There is a vast amount of cases that could be measured. We have decided to demonstrate one common possible use case, the parallel compilation, based on the runtime observation of the system behavior while compiling. The application being compiled is the Linux kernel, which is a sufficiently large application to benefit from cluster parallelization. In our demonstration we use standard unmodified GNU tools [25] like make, gcc, etc.; the applications used were not designed to run in a cluster environment. This is a good representative of a harder class of problems, as there is a relatively big overhead due to a high communication-to-computation ratio.

Tests were performed on a realistic platform, using 4 heterogeneous computers. All test machines have the x86 64-bit architecture and are interconnected with standard 100 Mbps Ethernet. It is generally hard to compare machine performance, so rather than meaningless frequency numbers, Table 1 captures the time it takes to build the kernel on each of the machines, together with their other key characteristics. Another important value is the sequential portion of the build time, which includes mainly the final linking.

Table 1. PARAMETERS OF THE TEST MACHINES. MEMORY IS IN GIGABYTES, BUILD TIME IN THE FORMAT MINUTES:SECONDS, AND THE SEQUENTIAL TIME IS IN SECONDS.

  Name    Cores   Mem.   Build time   Seq. time
  Alfa      2      4       2:13           6
  Beta      2      2       3:28           7
  Gamma     2      2       3:46           8
  Delta     1      1       6:43          11

Each test was performed 10 times, and the presented values represent the minimum time achieved; it is not the purpose of this paper to do a detailed analysis of variance (the time variance was in the range of a few seconds). In cases when multiple concurrent compilations were measured, the chosen set of results was the one with the shortest compilation time on the slowest machine. Clondike does not yet have all security mechanisms implemented, but the most performance-demanding part of the security, the IPsec-based [26] channel security, was used, and therefore the performance figures are representative.

The first set of performed tests demonstrates the standard non-dedicated cluster functionality of Clondike. Figure 4 shows the compilation times of a single compilation started from the slowest machine (Delta), and Figure 5 captures the results of a single compilation started from the fastest machine (Alfa). Each group of bars shows compilation times for a different cluster configuration; the set of participating nodes always includes the node that started the compilation. In each bar group there are 3 running times: the best time achieved in the cluster, the best time achieved in the cluster with IPsec, and a theoretical minimum time. The first 2 times are clear. The theoretical minimal time accounts for the sequential part of the calculation, in order to isolate the inherent parallelization limitations (due to Amdahl's law) from the inefficiencies of the system itself. It is calculated as follows:

$T_{min} = S_t + \frac{T_s - S_t}{T_s} \cdot \frac{1}{\sum_{i \in N} 1/T_i}$   (1)

where $S_t$ denotes the sequential part of the compilation, $T_s$ denotes the time of the compilation on the node that started it, $T_i$ denotes the sequential compilation time on node $i$, and $N$ is the set of participating nodes. The ratio of the achieved times to the theoretical minimum times calculated this way reflects the overhead of the system (there is still some inherent limitation due to network transfers, etc., but this cannot be so easily cleaned up).

There are a few noteworthy observations regarding the graph in Figure 4. First, the measured times are very good, even though the overhead increases with the number of nodes; the increasing overhead is due to inefficiencies of the current experimental scheduler, which cannot effectively use all machines, especially at the end of the calculation. Second, the overhead due to security (represented by IPsec) is apparent, but still quite small (15% in the worst case). The quite low security overhead is due to the fact that IPsec was configured to use AES encryption, which is very fast on 64-bit platforms. In contrast to Figure 4, the compilation started from Alfa has a lower overhead both with and without IPsec, as can be seen in Figure 6, where the overheads of the runs corresponding to Figures 4 and 5 are expressed in percent. The key factor for the lower overhead is that a smaller percentage of the work is sent to other machines; in addition, it seems that the scheduler can more effectively use the other machines while running on Alfa. This is, however, an area that requires more research in the future.

[Figure 4. Graph with times when a single compilation was started from the machine Delta.]
[Figure 5. Graph with times when a single compilation was started from the machine Alfa.]
[Figure 6. Graph showing overheads of runs corresponding to Figures 4 and 5, both with and without IPsec.]
[Figure 7. Graph with times when each machine simultaneously started one compilation.]
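For concreteness, equation (1) can be evaluated directly for the machines of Table 1. The helper below is ours (times converted to seconds), and it lands close to the roughly one-minute best times reported in the following paragraphs:

```python
# Evaluate the theoretical optimal time of equation (1) (illustrative helper).
def theoretical_min(seq_time, starter_total, node_totals):
    """seq_time: S_t; starter_total: T_s; node_totals: T_i of participants."""
    parallel_fraction = (starter_total - seq_time) / starter_total
    combined_rate = sum(1.0 / t for t in node_totals)  # builds per second
    return seq_time + parallel_fraction / combined_rate

# Single compilation started from Delta (T_s = 6:43 = 403 s, S_t = 11 s)
# with all four machines of Table 1 participating:
totals = [133, 208, 226, 403]            # Alfa, Beta, Gamma, Delta in seconds
print(round(theoretical_min(11, 403, totals), 1))   # ~61.6 seconds
```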

The second set of tests captures an ONDC-specific use case in which all machines start a compilation at the same time. This is a use case that cannot be performed on any standard non-dedicated cluster, since only core machines can use the others in that case. The results of this test can be seen in Figure 7: each machine completes its work in about the same or better time than in the non-clustered case (the slightly worse times in some cases are only due to random time variations). Probably the most important number is the time of the compilation on the slowest machine (Delta), since it shows the total time spent from the start of the compilations until the end of the last one; ideally, in ONDC each machine should first do only its own compilation and offer resources only after it is done with it. If we divide this number by 4, we get the average build time of each of the kernels in this case. In numbers: the best non-secured time measured when starting a single compilation from Alfa was 1 minute and 2 seconds, while in the concurrent case the average would be around 58 seconds for a non-secured compilation, so we can see that this case is even more efficient in resource utilization. There can be many other test combinations even for this simple use case, but we believe the presented results demonstrate clearly enough that parallelization of ordinary programs can be achieved with acceptable overheads.

XI. CONCLUSIONS

In this paper, we have defined an extension of existing clustering concepts. We have discussed its relationship with other architectures and its advantages, and we have argued why we believe that the architecture is going to be even more interesting in the future. To verify our ideas, we have demonstrated the feasibility of one possible use case of the architecture.

ACKNOWLEDGEMENTS

This research has been supported by the Czech Grant Agency GACR under grant No. 102/06/0943 and by the research program MSMT 6840770014.
REFERENCES

[1] D. Ridge, D. Becker, P. Merkey, and T. Sterling, "Beowulf: Harnessing the power of parallelism in a pile-of-PCs," in Proceedings, IEEE Aerospace, 1997, pp. 79-91.
[2] D. P. Anderson, "BOINC: A system for public-resource computing and storage," in Grid Computing, Fifth IEEE/ACM International Workshop on, Nov. 2004, pp. 4-10. [Online]. Available: http://dx.doi.org/10.1109/GRID.2004.14
[3] L. A. Barroso, J. Dean, and U. Holzle, "Web search for a planet: The Google cluster architecture," IEEE Micro, vol. 23, no. 2, pp. 22-28, 2003. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1196112
[4] "openMosix," http://www.openmosix.org/.
[5] K. Kaneda, Y. Oyama, and A. Yonezawa, "A virtual machine monitor for utilizing non-dedicated clusters," in SOSP '05: Proceedings of the Twentieth ACM Symposium on Operating Systems Principles, New York, NY, USA: ACM, 2005, pp. 1-11.
[6] C. Kauhaus and A. Schafer, "Harpy: A virtual machine based approach to high-throughput cluster computing," 2005. [Online]. Available: http://www2.informatik.uni-jena.de/~ckauhaus/2005/harpy.pdf
[7] R. Novaes, P. Roisenberg, R. Scheer, C. Northfleet, J. Jornada, and W. Cirne, "Non-dedicated distributed environment: A solution for safe and continuous exploitation of idle cycles," in Proceedings of the Workshop on Adaptive Grid Middleware, 2003.
[8] M. Kacer, J. Capek, and P. Tvrdik, "Preemptive process migration in a cluster of non-dedicated workstations," in Parallel and Distributed Computing and Systems, 2005.
[9] "Kerrighed," http://www.kerrighed.org/.
[10] "OpenSSI," http://www.openssi.org/.
[11] "Mosix," http://www.mosix.org/.
[12] "LSF," http://www.platform.com/.
[13] M. Coppola, Y. Jegou, B. Matthews, C. Morin, L. P. Prieto, O. D. Sanchez, E. Yang, and H. Yu, "Virtual organization support within a grid-wide operating system," IEEE Internet Computing, vol. 12, no. 2, pp. 20-28, 2008.
[14] L. Winton, "A simple virtual organisation model and practical implementation," in ACSW Frontiers '05: Proceedings of the 2005 Australasian Workshop on Grid Computing and e-Research, Darlinghurst, Australia: Australian Computer Society, 2005, pp. 57-65.
[15] "XtreemOS," http://www.xtreemos.eu/.
[16] "SETI@home," http://setiathome.berkeley.edu/.
[17] M. Kacer, D. Langr, and P. Tvrdik, "Clondike: Linux cluster of non-dedicated workstations," in CCGrid '05: Proceedings of the Fifth IEEE International Symposium on Cluster Computing and the Grid, Washington, DC, USA: IEEE Computer Society, 2005, pp. 574-581.
[18] K. Lai, L. Rasmusson, E. Adar, L. Zhang, and B. A. Huberman, "Tycoon: An implementation of a distributed, market-based resource allocation system," Multiagent and Grid Systems, vol. 1, no. 3, pp. 169-182, 2005.
[19] R. Buyya, D. Abramson, and S. Venugopal, "The grid economy," Proceedings of the IEEE, vol. 93, no. 3, pp. 698-714, 2005.
[20] M. Kostal and P. Tvrdik, "Evaluation of heterogeneous nodes in a non-dedicated cluster," Master's thesis, Czech Technical University.
[21] L. Amar, A. Barak, J. Stosser, and D. Neumann, "Economically enhanced MOSIX for market-based scheduling in grid OS," in 8th IEEE/ACM International Conference on Grid Computing, 2007.
[22] M. Hanzich, F. Gine, P. Hernandez, F. Solsona, and E. Luque, "Coscheduling and multiprogramming level in a non-dedicated cluster," in Recent Advances in Parallel Virtual Machine and Message Passing Interface, 2004, pp. 327-336.
[23] M. Kacer and P. Tvrdik, "File system security in the environment of non-dedicated computer clusters," in PDCAT '07: Proceedings of the Eighth International Conference on Parallel and Distributed Computing, Applications and Technologies, Washington, DC, USA: IEEE Computer Society, 2007, pp. 445-452.
[24] M. Kacer and P. Tvrdik, "Protecting non-dedicated cluster environments by marking processes with stigmata," in Advanced Computing and Communications (ADCOM 2006), 2006, pp. 107-112.
[25] "GNU," http://www.gnu.org/.
[26] S. Kent and K. Seo, "Security Architecture for the Internet Protocol," RFC 4301 (Proposed Standard), Dec. 2005. [Online]. Available: http://tools.ietf.org/html/rfc4301

To Determine the Weight in a Weighted Sum Method for Domain-Specific Keyword Extraction

Wenshuo Liu, Wenxin Li
Key Laboratory of Machine Perception, Peking University; Beijing Supertool Internet Technology Co., Ltd, Beijing, China
{lwshuo, lwx}@pku.edu.cn

Abstract—Keyword extraction has been a very traditional topic in Natural Language Processing, because it is important for many text applications, such as document retrieval. However, most methods have been too complicated and slow to be applied in real applications, for example in web-based systems. Domain-specific keyword extraction came into sight when researchers found out that fully exploiting domain-specific information can greatly improve the performance of this task. This paper proposes an approach that completes some preparatory work, focusing on exploring the linguistic characteristics of a specific domain, in order to simplify the real extraction process. It is a weighted sum method, and the preparatory work focuses on finding the weights. Four different features are used: TF×IDF, the part-of-speech (PoS) tag, the relative position of first occurrence, and the chi-square statistic. Once we have the weights, the extraction can be completed by addition, multiplication, and sorting, which are quite simple operations for a modern computer. This preparatory part can be completed once and for all, and thus reduces the burden in the real extraction process. Experimental results show the effectiveness of the proposed approach.

I. INTRODUCTION

Keyword extraction is the process of extracting a few salient words from a certain text and using the words to summarize the content. This task has been widely studied for a long time in the natural language processing communities. Traditional methods focused on efficient algorithms to improve the performance of the task. In general, keyword extraction involves assigning scores to each candidate word considering various features, sorting the candidates according to the score, and choosing the few top ones. Some experiments show that, as long as we have a proper weight vector, the weighted sum of the feature vector can be a good choice for the score. Different domains have different characteristics in the usage of words, so a certain weight vector working well in one domain might be totally ineffective in another domain. The weight vector is thus mainly the domain-specific information we need to explore here, and this article is primarily about constructing a model to learn it. The model is very much like a perceptron. While in traditional methods every step must be performed for every document, the weight extraction part in my work is finished once and for all. The difference is illustrated in Figure 1.

II. RELATED WORK

Probabilistic methods and machine learning have been widely used in the task of keyword extraction. Turney (1999) developed the system GenEx, which exploited a genetic algorithm and used to be the state of the art. Eibe Frank, Gordon W. Paynter, Ian H. Witten, Carl Gutwin, and Craig G. Nevill-Manning (1999) described a simple procedure, called KEA, which was based on Naive Bayes. KEA was proved to be equally effective compared with GenEx, and even outperformed it when fully exploiting domain-specific information. Hulth (2004a) and Hulth (2004c) presented approaches using supervised machine learning; their approaches constructed the prediction models from texts with manually assigned keywords. Graph-based algorithms have also been explored: Xiaojun Wan, Jianwu Yang and Jianguo Xiao (2007) proposed an iterative reinforcement approach to simultaneously finish the tasks of keyword extraction and document summarization. Their approach fully exploited the sentence-to-sentence, word-to-word, and sentence-to-word relationships.

III. DATA REPRESENTATION

Web page articles, which are classified into suitable domains, are used for training and testing. Those articles all have manually assigned keywords for the model to learn. The input document is first split up to get separate terms. Words which are so common that they have no differentiating ability, such as "的", are stored in a stop list and removed during pre-processing. For any document, the terms themselves are useless; it is their attributes that matter. In this article, I choose four attributes, as mentioned above, so each candidate word is represented as a four-dimension feature vector.

A. TF×IDF

TF×IDF combines term frequency (TF) and inverse document frequency (IDF). It is designed to measure how specific a term T is to a certain document D:

TF×IDF(T, D) = P[term in D is T] × log P[T in a document]   (1)

The TF is measured by counting the times that term T occurs in document D, and the IDF by counting the number of documents in the corpus of a specific domain.

B. PoS

When inspecting manually assigned keywords, the vast majority turn out to be nouns. But there are still differences between different domains: in entertainment news, for example, keywords might often be people's names, while in the sports field verbs are also quite important. We count the occurrences of each kind of PoS tag among the manually assigned keywords in the whole corpus and then divide by the total number of keywords. For example, when we consider nouns:

PoS(noun) = (manually assigned keywords which are nouns) / (manually assigned keywords)   (2)

The results are numbers between 0 and 1, and they indicate which kinds of words are more likely to be keywords in the target domain.

C. Relative Position of First Occurrence

Not only the occurrence, but also the location of a term is important. Terms occurring in, for example, the first sentence of paragraphs, in headlines, and in sentences at certain positions are shown to contain more relevant terms. This feature is calculated as the number of words that precede the term's first appearance, divided by the document's length:

RPFO(T, D) = (position of first appearance) / (length of the document)   (3)

The result is a number between 0 and 1 and indicates the proportion of the document preceding the term's first appearance.
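The three document-level features defined so far, plus the chi-square statistic of the next subsection, can be sketched as follows (our illustration only; the IDF form used here is one common variant and not necessarily the exact formula of equation (1), and all names are invented):

```python
import math

# Illustrative computation of the four-dimensional feature vector
# (TFxIDF, PoS weight, relative position of first occurrence, chi-square).
def feature_vector(term, doc_terms, num_docs, docs_with_term,
                   pos_weight, chi_square):
    tf = doc_terms.count(term) / len(doc_terms)
    idf = math.log(num_docs / (1 + docs_with_term))   # a common IDF form
    rpfo = doc_terms.index(term) / len(doc_terms)     # words before 1st use
    return (tf * idf, pos_weight, rpfo, chi_square)

doc = "the cluster runs the cluster scheduler".split()
print(feature_vector("cluster", doc, num_docs=2563, docs_with_term=310,
                     pos_weight=0.76, chi_square=3.2))
```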

D. Chi-Square Statistic

The chi-square statistic is used to test the dependence between a term and a domain. In the equation below, n11 indicates the number of times that T occurs in domain D, n12 the number of times that T occurs in domains other than D, n21 the number of times that terms other than T occur in domain D, and n22 the number of times that terms other than T occur in domains other than D. For term T and domain D, the chi-square statistic is defined as:

CHI(T, D) = ((n11 × n22 - n12 × n21) × (n11 + n12 + n21 + n22)) / ((n11 + n12)(n21 + n22)(n11 + n21)(n12 + n22))   (4)

The higher this value is, the more dependent term T is on domain D. If n11×n22 - n12×n21 > 0, then term T is positively relevant to domain D, and if n11×n22 - n12×n21 < 0, then term T is negatively relevant to domain D.

IV. TRAINING MODEL

So far we have talked about how candidate keywords are generated and represented, but we still miss the weight vector. We need weights because the four features have different discriminating abilities: apparently, the more a feature can discriminate between keywords and non-keywords, the higher the weight it should be assigned. We can find the weights manually, and actually that is the inspiration of this work; but doing it manually is too time-consuming if we try to determine weight vectors for many domains. In order to get the weighted sum of the four-dimension feature vector, we use some learning ideas from the perceptron.

The weight vector is initially set to all zeros, namely (0, 0, 0, 0). In every article, we examine the difference between keywords and non-keywords: on each dimension we examine the average value of keywords and non-keywords respectively, calculate the difference between them, and divide the result by the sum over all candidate keywords. For example, for the feature TF×IDF:

β_TF×IDF = (E(keywords' TF×IDF) - E(non-keywords' TF×IDF)) / Σ TF×IDF   (5)

After similar calculations we have a vector β = (β_TF×IDF, β_PoS, β_FirstOccurrence, β_Chi) for the update, and we use it to update the weight vector ω by adding it:

ω_{n+1} = ω_n + β   (6)

Thus the weight vector is updated with every article, and after the nth article we have the weight vector ω_n.
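A compact sketch of this training loop (ours; the data layout, with each candidate as a 4-tuple of feature values, is invented):

```python
# Perceptron-like weight learning: for each training article, add the
# per-dimension difference beta (eq. 5) to the weight vector (eq. 6).
def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

def train(articles, dims=4):
    w = [0.0] * dims                      # weight vector starts at all zeros
    for keywords, non_keywords in articles:
        total = [sum(col) for col in zip(*(keywords + non_keywords))]
        beta = [(mean(ck) - mean(cn)) / t if t else 0.0
                for ck, cn, t in zip(zip(*keywords), zip(*non_keywords), total)]
        w = [wi + bi for wi, bi in zip(w, beta)]   # eq. (6)
    return w

# One toy article: one keyword candidate and one non-keyword candidate.
arts = [([(0.5, 1, 0.1, 2.0)], [(0.1, 0, 0.9, 0.2)])]
print(train(arts))
```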

V. EXPERIMENTS AND EVALUATION

We used the Chinese lexical analysis system ICTCLAS (Institute of Computing Technology, Chinese Lexical Analysis System), developed by the Chinese Academy of Sciences, to complete word segmentation and PoS tagging. We scraped 2563 web pages from http://tech.mop.com/; 1563 of them were used for training and 1000 for testing. All texts are about information technology (IT). The whole text and the keywords manually assigned in the meta keywords tag were extracted; all the articles in the training corpus have manually assigned keywords, and on average 6.8 keywords are manually assigned per text. The weight vector we get for this IT domain is (66.59, 22.250, 0.76, 4.202). In the following experiments we choose the 7 words with the highest scores as the keywords, and the extracted 7 words are compared with the manually assigned keywords.

If we denote:
ASMA = the number of terms both automatically selected and manually assigned,
AS = the number of terms automatically selected,
MA = the number of terms manually assigned,
then precision (P) and recall (R) are defined as:

P = ASMA / AS   (7)
R = ASMA / MA   (8)

P refers to the proportion of automatically selected keywords which are also manually assigned; R refers to the proportion of manually assigned keywords selected by this method. The F-measure combines precision and recall and is usually used as a standard information retrieval metric:

F = 2 × P × R / (P + R)   (9)

After observation, we found out that the keywords manually assigned are not all reliable. This method relies heavily on the data, so we manually refined 200 documents; 100 of them are used for training and the others for testing. Table 1 compares the performance of our method on the two data sets. The second experiment shows a significant increase, and the experiments thus show that better data did lead to a better performance.

TABLE I. THE PERFORMANCE

                 P       R       F
  raw data     0.554   0.449   0.496
  refined data 0.732   0.644   0.685
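For concreteness, equations (7)-(9) in code (ours); applied to the refined-data row of Table I, the F formula reproduces the reported 0.685:

```python
# Precision, recall and F-measure of eqs. (7)-(9).
def prf(selected, assigned):
    asma = len(set(selected) & set(assigned))
    p, r = asma / len(selected), asma / len(assigned)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Sanity check against Table I (refined data): P=0.732, R=0.644 -> F=0.685.
p, r = 0.732, 0.644
print(round(2 * p * r / (p + r), 3))   # 0.685
```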
VI. CONCLUSION AND FUTURE WORKS

In this article, we explored a new method for domain-specific keyword extraction. This method focuses on doing enough and effective preparatory work to explore the linguistic characteristics of a specific domain, and thus simplifies the real extraction task. As long as the data is reliable, this method can perform quite well. However, there is still much to improve: we still cannot make sure the weight vector we get is the optimum solution, and the weighted sums, which are used to rank the candidate keywords, are not fully exploited. Moreover, if used properly, the extracted keywords might well benefit text categorization, document retrieval, and other natural language processing tasks.

ACKNOWLEDGMENT

I'd like to thank Minghui Wu, Hao Xu, Xuan Zhao, Songtao Chi and Chaoxu Zhang from Beijing Supertool Internet Technology Co., Ltd for all the help and suggestions they have provided. I would especially like to take the opportunity to thank professor Von-Wun Soo, who has been so kind and given me a lot of valuable instructions while I was in National Tsing Hua University. I also want to thank the Chun-Tsung Endowment Fund for giving me a chance to take part in real research.

REFERENCES

[1] Anette Hulth, "Improved automatic keyword extraction given more linguistic knowledge," in Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP'03), Sapporo, July 2003, pp. 216-223.
[2] Anette Hulth, "Reducing false positives by expert combination in automatic keyword indexing," in Nicolov, N., Botcheva, K., Angelova, G., and Mitkov, R. (eds.), Recent Advances in Natural Language Processing III, Current Issues in Linguistic Theory (CILT), John Benjamins, 2004, pp. 367-376.
[3] Anette Hulth, "Combining Machine Learning and Natural Language Processing for Automatic Keyword Extraction," Ph.D. thesis, Department of Computer and Systems Sciences, Stockholm University, 2004.
[4] Anette Hulth, "Enhancing linguistically oriented automatic keyword extraction," in Proceedings of the Human Language Technology Conference / North American Chapter of the Association for Computational Linguistics Annual Meeting (HLT/NAACL 2004), Boston, May 2004.
[5] Anette Hulth and Beata B. Megyesi, "A study on automatically extracted keywords in text categorization," in Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Sydney, July 2006, pp. 537-544.
[6] Xiaojun Wan, Jianwu Yang and Jianguo Xiao, "Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction," in Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL'07), Prague, June 2007, pp. 552-559.
[7] Takashi Tomokiyo and Matthew Hurst, "A language model approach to keyphrase extraction," 2003.

[8] Li Sujian, Wang Houfeng, Yu Shiwen and Xin Chengsheng, "News-oriented automatic Chinese keyword indexing," in Sighan Workshop, ACL'03, Sapporo, July 2003.
[9] Peter D. Turney, "Learning algorithms for keyphrase extraction," Information Retrieval, 2(4), 2000, pp. 303-336.
[10] Eibe Frank, Gordon W. Paynter, Ian H. Witten, Carl Gutwin, and Craig G. Nevill-Manning, "Domain-specific keyphrase extraction," in Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI'99), 1999, pp. 668-673.
[11] Peter D. Turney, "Learning to extract keyphrases from text," Technical Report ERB-1057, National Research Council, Institute for Information Technology, 1999, NRC 44105.

Flow-based Description of Conceptual and Design Levels

Sabah Al-Fedaghi
Computer Engineering Department, Kuwait University, Kuwait
sabah@alfedaghi.com

Abstract—Building an Information System involves a first phase of conceptual modeling of the "real world domain," then a second phase of design of the software system. For the system analysis process that produces the conceptual description, object-oriented techniques or their semantic extensions are typically used; UML is an important tool in both of these processes. It is known that UML lacks constructs to facilitate rich conceptual modeling, and there are many proposed UML extensions that incorporate semantic features for conceptual modelling. This paper argues that it is not a matter of lacking appropriate conceptual structures, but rather that UML is inherently an unsuitable tool for use in conceptual modeling. It proposes an alternative to extending UML, through the development of a conceptual modeling methodology based on the notion of flow. This flow model provides a conceptually uniform applied description that is appropriate for the system analysis phase; its features suggest that it is also appropriate for the system design phase, thus easing the transition between these two stages.

Keywords—conceptual modeling; UML; software system analysis

I. INTRODUCTION

Understanding and communication processes in the business and organizational domain are an essential aspect of the development of information systems. Conceptual modeling is a technique that represents the "real world domain" and is used in this understanding and communication. A conceptual model is a picture describing this real world domain independent of any aspects of information technology. An information system (IS) represents a real business establishment and should reflect the reality of an organization and its operations. Consequently, building an IS begins by drawing a picture of the business establishment as part of a real-world domain. The resultant model serves as a guide for the subsequent information system design phase, which involves the description of the software system under development; this later phase typically utilizes object-oriented techniques, and object-oriented methods and languages (e.g., UML) are typically used for describing the software system. The object-oriented IS design domain deals with objects and attributes, while the real-world domain deals with things and properties.

In contrast, conceptual modeling languages are intended specifically for describing the application domain, whereas UML has been developed with the specific intention of designing and describing software. According to Bell [4], "production of UML diagrams," i.e., the IS design phase, is the single most important activity in the software-development life cycle. Researchers have examined, and some have proposed, extending the use of object-oriented software design languages such as UML to conceptual modeling (e.g., Dussart et al. [7], Evermann and Wand [10], Opdahl and Henderson-Sellers [12]). According to Evermann and Wand [9], "UML is suitable for conceptual modelling but the modeller must take special care not to confuse software aspects with aspects of the real world being modelled." The problem with this approach is "that such languages [e.g., UML] possess no real-world business or organizational meaning, i.e., it is unclear what the constructs of such languages mean in terms of the business" [9]. Scott W. Ambler observed that "although the UML is in fact quite robust, the reality is that it isn't sufficient for your modeling needs" [3]. He then suggested using "UML as a base collection of modeling techniques which you then supplement with other techniques to meet your project's unique needs" [3]. This paper concentrates on UML as the most widely used language for these purposes; UML is used as an example of an object-oriented modeling language and methodology.

This paper argues that enhancing UML to incorporate the semantics necessary for conceptual modeling will not lead to achieving the desired results. Instead, we present a new conceptual model called the flow model (FM), as an alternative to extending UML. We claim that FM provides conceptually a "better" description, and that it is also appropriate for the information system design phase, where it may complement UML; FM has not yet been formalized, however, and this latter use needs further research. The claims in this paper are illustrated in Fig. 1.

[Figure 1. The claims: for the conceptual model (real-world domain descriptions), FM is strongly claimed and UML or semantically extended UML only weakly claimed; for information system design and software modeling, FM + UML is claimed but needs further research.]

II. THE FLOW MODEL

The flow model (FM) was first introduced by Al-Fedaghi [2] and has been used since then in several applications such as information requirements [1]. A flow model is a uniform method to represent things that "flow," i.e., things that are exchanged, processed, created, transferred, and communicated. "Things that flow" include information, materials (e.g., in manufacturing), money, etc. To simplify the review of FM, we introduce flow in terms of information flow; even though this section is simply a review of the basic model, it includes new illustrations of it.

Information goes through a sequence of states as it moves through the stages of its lifecycle. The patient is a term that refers to the thing that receives the action; information, in this sense, is not a patient. The possible states are as follows:
1) Information is created (i.e., it is generated as a new piece of information, using different methods such as data mining).
2) Information is processed (i.e., it is subjected to some type of process, e.g., compressed, translated, mined).
3) Information is disclosed/released (i.e., it is designated as released information, ready to move outside the current sphere, like passengers ready to depart from airports).
4) Information is transferred (communicated) to another sphere (e.g., from a customer's sphere to a retailer's sphere).
5) Information is received (i.e., it arrives at a new sphere, like passengers arriving at an airport).
6) Information is stored (i.e., it remains in a stable state, without change, until it is brought back to the stream of flow again).
7) Information is destroyed.
8) Information is used (i.e., it is utilized in some action).

[Figure 2. States of information: created, processed, disclosed, received, communicated.]

To follow the information as it moves along different paths, we can start at any point in the stream. Suppose that information enters the processing stage, where it is subjected to some process. The following are the ultimate possibilities for the information:
1) It is destroyed.
2) It is stored.
3) It is processed in such a way that it generates new information (e.g., comparing certain statistics generates the information that Smith is a risk).
4) It is processed in such a way that it generates implied information (e.g., a is the father of b and b is the father of c generates the information that a is the grandfather of c).
5) It is used to generate some action (e.g., analogous to police rushing to a criminal's hideout after receiving an informant's tip).
6) It is disclosed and transferred to another sphere, where it is subjected to some process.
The first five states of information form the main stages of the stream of flow; these five information states are the only possible "existence" patterns in the stream of information. Storage and uses/actions, in contrast, are sub-stages, because they occur at different stages: information can be stored when it is created (stored created information), processed (stored processed information), or received (stored received/raw information). In the uses sub-stage, information is utilized in some action; using information indicates exiting the information flow to enter another type of flow, such as a flow of actions (e.g., upon decoding or processing an encoded message, the FBI sends its agents to arrest the spy who wrote it). The storage and uses/actions sub-stages (called gateways) can be found in any of the five stages; however, in the release and transfer stages information is not usually subject to these sub-stages, so we apply them only to the receiving, processing, and creation stages, without loss of generality.

To illustrate the "gateway" sub-stage, consider that a person in Barcelona uses the Internet to ask a person in New York whether it is raining in New York. The query flows to the receiving stage of the person in New York. The New Yorker's reception of the query triggers an action, such as physical movement to look outside to check whether it is currently raining: the gateway in his/her information system transforms the information flow into an action flow.

The five-stage scheme can be applied to individuals and to organizations. Suppose that a small organization has a manager and a secretary, and two departments with two employees in each department. It then comprises nine information schemes: one for the organization at large, one for the manager, one for the secretary, one for each of the two departments, and one for each of the four employees.
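Although FM is a diagramming notation rather than a programming construct, the stage vocabulary can be made concrete in a few lines of code. The transition table below is only our reading of the stage descriptions above, for illustration:

```python
# Illustrative encoding of FM's five stages plus the gateway sub-stages.
STAGES = ["created", "processed", "disclosed", "communicated", "received"]
GATEWAYS = {"stored", "used"}   # sub-stages, not applied to disclosure/transfer

ALLOWED = {
    "created":      {"processed", "disclosed", "stored", "used"},
    "processed":    {"created", "disclosed", "stored", "used"},
    "disclosed":    {"communicated"},
    "communicated": {"received"},          # crossing into another sphere
    "received":     {"processed", "stored", "used"},
}

def can_flow(src, dst):
    return dst in ALLOWED.get(src, set())

assert can_flow("disclosed", "communicated") and not can_flow("disclosed", "stored")
```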

III. UML VS. FM

To illustrate the advantages of FM-based conceptual modeling, we compare UML and FM descriptions. Fig. 3 shows a typical example of a UML graph [13]; the figure includes two use cases, Sell product and Customer support, offered by a Shop to the actors Customer and Vendor.

[Figure 3. Typical UML graph.]

Conceptually, the connections in Fig. 3 are ambiguous in the general case. The "Sell product"-Customer connection may denote a flow of information (e.g., an order, a response to an order), of products, and/or of money; the Vendor-"Sell product" connection may also denote any of these flows. In UML, the kind of connection is not clear. If we replace the Shop with a "Market," then the semantics of the connections must represent everything: eggs, money, personnel, etc. The actors in the UML description of Fig. 3 are modeled as black boxes without interiors. Moreover, the "Shop" is different from "Sell product" and "Customer support": it may be argued that "Sell product" is that part of the Shop that interacts with the customers, but the Shop itself is an active entity in the interaction, and the shop's description therefore seems to be incomplete. The actors are connected to the Shop only indirectly, through the two use cases it contains; at the same time, they seem to be directly connected to the interior of the Shop, although the "global sphere" of the Shop may or may not allow this direct interaction. The point here is that the semantics of the "lines" connecting the Shop to Customers and Vendors is ambiguous.

Consider also the following hypothetical situations:
1) A system that registers only information, e.g., a time clock that reads information on an ID card and registers the ID.
2) A system that reads information and processes it, e.g., a time clock that scans employees' fingerprints and then searches a fingerprint database to allow entrance.
3) A system that reads data, processes it, then produces new information, e.g., a machine that reads the health data of a person and compares it with tables to decide whether the person is fit for a certain job or not.
These three situations are all represented in UML by an actor and a use case.

Fig. 4 shows an FM description that corresponds to Fig. 3. In FM, the Shop and the actors are spheres. Spheres can receive, process, create, release, and communicate (transport) the flowing things; the "flowing things" here are information, products, and money, and each of them is represented by its own flow in the actual conceptual description. Each flow may necessitate different requirements that should be understood at this level by all participants; for example, generating personal information may be governed by privacy rules. Fig. 4 is drawn in a very general way to illustrate different directions of flow; there is an apparent materials (products) flow in the Customer-"Customer support" and Vendor-"Customer support" connections.

[Figure 4. FM description that corresponds to Figure 3: the Customer, Shop (Sell product, Customer support), and Vendor spheres, each with creation, processing, disclosure, receiving, and communication stages.]

The following example shows specific types of flows. Example: Imagine the following scenario:
- The customer creates an order for a product.
- The "sell product" module in the shop receives the order and processes it.
- The "sell product" module sends the request to another processing module, "customer support," to deliver the product to the customer.
- The "customer support" module sends a request to the vendor to deliver the product to the customer.
- The vendor receives the request, creates the product, and delivers it to the customer.

Fig. 5 shows the resultant flows in this scenario.
[Figure 5. Example FM with two types of flow: the order flows through the informational spheres of the Customer, Shop (Sell product, Customer support), and Vendor (steps 1-7), and triggers a product flow through the physical spheres of the Vendor and Customer (steps 8-9).]

In this scenario there are two types of flows: an information flow (orders and requests) and a physical products flow.

The scenario of Fig. 5 proceeds as follows. At circle 1 in the figure, the customer creates an order. It flows from the creation stage to the disclosure stage, then to the communication stage. The order crosses to the shop's informational sphere at circle 2 and flows to "sell product" in the shop's processing stage (circle 3). The "sell product" module sends the "customer support" module a request to initiate the delivery of the product (circle 4). The disclosure stage causes the creation of a delivery request that is released to the vendor (circle 5). The delivery request flows to the vendor's informational sphere (circle 6), which triggers (circle 7) actual actions in the vendor's physical sphere. In this last sphere the product is created (manufactured) and transported to the customer (circle 8). Finally, the customer's transporting action (e.g., going to the post office) allows him/her to receive the product (circle 9). Notice that the dotted arrow indicates a change in the type of flow, from informational flow to actions flow. Such a scenario is understood by the customer, the shop manager, analysts, and software developers.

We claim that the FM description is a "better" (i.e., more complete, more uniform) conceptual tool than UML. Even though the FM model appears to be an "extensive" description in comparison with the UML description, the FM representation is simple because of its repeated application of the five-stages schema and the uniform application of a single flow in each sphere. This is in line with Occam's razor, where one should not make more assumptions than the minimum needed. Our weaker claim is that FM can be complemented with some UML structures, such as procedural control flow inside the modules "sell product" and "customer support"; further research may lead to injecting some UML constructs into FM, but we will not discuss these details here. It has been observed that "ad hoc mixing of abstract notation (2D ovals, boxes), stick figures and 3-D elements make UML appear jarring ... More effort could have been made to construct a uniform and aesthetically pleasing representation" [13].
IV. COMPARISONS OF SOME FEATURES

FM differs from UML in its basic structure and components. This section compares some of their features.

A. Actors and Spheres

In UML, an actor is an outsider who interacts with the system. "Outside" here refers to controllability: controllability in UML refers to the interior processes of an outsider that the system cannot control. Distinguishing between the system and the actors in this way is suitable for computer developers, since in computer science there are, most of the time, two sides: the computer and its applications. The notion of actors is introduced in UML for lack of a common notion that covers systems, human beings, and other information spheres; actors are abstractions of something common to outsiders, and in UML we may also introduce artificial actors, such as those representing common behavior shared by a number of different actors. Every actor interacts with the system; interactions among actors are usually minimal, related to their interactions with the system, and cannot be represented in the same diagram as the system. UML also mixes information spheres with such notions as events; it employs heterogeneous notions.

In FM, there is no necessity to differentiate between actors and the system. Actors and the system are simply spheres of information that interact (hence communicate) with each other. All are "outside" each other, but interact with each other, so from the functional point of view they are all interacting entities; the only thing that distinguishes "the system" from other actors is that the system is the central actor. The system acts on actors and actors act on the system: when two systems interact with each other, then both are actors, and the interaction between the system and actors embeds controllability of interaction from either side. The five-stages schema in FM provides the missing common notion. In FM, events are products of spheres: no spheres means no events. An information sphere (as an example of a sphere) <includes> information spheres regardless of whether they are a system's information spheres or not. Roles (a term used extensively in UML-related works) are simply different kinds of interactions between two spheres, and a "navigable association," which indicates one actor controlling or "owning" the interactions, can be reflected in FM by the flow of information; the flow of information may thus eliminate the specification of certain kinds of associations among UML actors. The direction of arrows in FM has one meaning: the flow of information from one sphere to another, and a sphere does not "turn on" until the flow reaches it (i.e., it receives information). The FM approach is also suitable for describing generalizations (e.g., specialization, inheritance, or subclassing) as structural relationships among actors, although this is not a central concern in FM.

In FM there is a uniform modeling of the interiors and exteriors of spheres, regardless of whether spheres (e.g., actors) are in the interior or exterior of any other sphere (e.g., the system), since we can now model the interior flow (stages) of all assemblies. FM is thus more general: it represents any type of interaction topology, which is especially important for a network environment. This conceptual generalization eliminates a great deal of UML technicalities but also allows more expressive power; the sweeping generalization of notions, accompanied by zooming in on the interiors of spheres, promises abundant modeling benefits.

B. Use Cases

Use Case Diagrams are descriptions of the relationships between a group of Use Cases and the participants (e.g., actors who interact with the system) in the process. Use Cases are supposed to be top-level services that the system provides to its actors. The diagrams tell what the system should do; they facilitate requirement specification and communication among users, managers, and developers, since they specify requirements and features as they are commonly understood by all participants. It is important to notice that Use Cases are not (internal) design tools: internal behavior within the system should not appear at the top system level. Use Cases can <<include>> other Use Cases, but the status of common Use Cases (those used both by parts of the system and by actors) is not clear. UML also does not allow drawing interaction lines between the actors, because it is not clear what such interactions would mean.

Example: Suppose we want to diagram the interactions between a user, a Web browser, and the server it contacts. In UML we would, at the least, draw two diagrams, as in Fig. 6.

[Figure 6. The interactions between a user, a Web browser, and the server it contacts (from [6]): an Administrator starts/stops the Server, which serves pages; the User sets the target URL in the Browser and receives pages.]

Instead, Fig. 7 shows the corresponding FM description: different FM information spheres and the possible flows of information among them. In FM, it is easy to identify the Use Cases of outside spheres by identifying where they cross a sphere's boundary. Here we achieve simplification in addition to preserving the identification of Use Cases.

[Figure 7. Interactions among several infospheres.]

V. UML WITH SEMANTICS

There are many extensions of UML that add semantics for conceptual modeling. To achieve some depth in making comparisons with the FM methodology, this section discusses some of these proposals, concentrating on works that constrain UML constructs in order to align them with conceptual modeling. Evermann and Wand [9] proposed "to extend the use of object-oriented design languages [such as UML] into conceptual modeling for information system analysis." The proposed semantics are used to derive modeling rules and guidelines on the use of object-oriented languages for business domain modeling.

Fig. 8 shows an example of a UML class diagram (from Evermann and Wand [9]). It depicts a situation typically found in object-oriented models. Consider the edge between Customer and Order: it is supposed to stand for a relationship between them. But what type of relationship can exist between a customer (a person or corporation) and an "order"? Apparently, there is nothing in the "real world domain" called an "order": "order" means a request to purchase something, an act of some agent. Ordering, requesting, demanding, signaling, expressing, objecting, etc., all have a similar ontological nature: they are acts of an agent. Clearly, such a description is loaded with technical considerations and, consequently, is not suitable for conceptual models.
[Figure 8. Example UML class diagram without ontological semantics (from [11], used by [9]): a Customer (name, address, creditRating():String) is linked to Orders (dateReceived, isPrepaid, number:String, price:Money), each containing Order Lines (quantity:Integer, price:Money, isSatisfied:Boolean) that refer to Products; Personal Customer (creditCard#) and Corporate Customer (contactName, creditRating, creditLimit, remind(), billForMonth(Integer)) specialize Customer, and a Corporate Customer is associated with an Employee sales representative (multiplicity 0..1).]

In UML, recall, an actor is an outsider who interacts with the system, and the order stands side by side with its actor (e.g., its creator or processor), which is conceptually disturbing; the best we can do is to explain the edge between Customer and Order as an actor/actee correspondence. But what is the relationship between an agent and its acts? The answer in FM is that "acts flow out of agents," in the sense that agents create, process, receive, transfer, and communicate acts. In FM, orders and products are "things that flow" (a property is defined in FM as something that flows in entities), and the FM description of the system in Fig. 8 is shown in Fig. 9.

[Figure 9. FM description of the system in Figure 8: the Company sphere (with Order Line and Product sub-spheres) receives order flows from the Personal customer sphere and from the Corporate customer sphere (which contains the Employee sphere), and releases a product flow back to the customers (circles 1-5).]

In Fig. 9, the arrows (except in the product sphere) represent the flow of orders, and the black boxes represent the customers. According to our understanding of the case, orders are created in the corporate sphere either by the corporation itself or by some of its employees; notice that the corporate customer's sphere contains the employee's sphere. Orders are also created by individual (personal) customers. The flows from these sources of orders are received by the company (circles 1 and 2). We assume that the company has two units: one to control the queue of orders (the Order Line box) and one to deliver products (the Product box). The company diverts the flow to the Order Line sphere (circle 3), which in turn triggers (circle 4) the Product sphere to release the product (circle 5). The dotted arrow in the figure denotes a change in the type of flow, from a "flow of orders" to a "flow of products." To simplify the figure, the unused internal arrows have been removed, and we tried to stick to what is expressed by the UML model; the FM description could include a great deal more detail.

Notice that each entity in FM has multiple spheres. A business information sphere (here we do not mean the computer information system denoted previously as IS) is different from its physical sphere, which is different from its money sphere, and so on, just as a human being has several "systems": a digestive system, nervous system, musculoskeletal system, etc. Food does not flow in the nervous system, and obtaining oxygen is not the business of the digestive system. We claim that the FM model as described in Fig. 9 is a conceptualization that is suitable for both systems analysts and software developers.

VI. CONCLUSION

This paper has examined the problem of building a conceptual model of a "real world domain" as part of the process of developing a business information system. For this purpose, object-oriented techniques are typically utilized, based on UML as an object-oriented modeling language, and extensions of UML are proposed for conceptual modeling. While such efforts are commendable, this should not hinder researchers from seeking fundamentally different approaches. The flow model introduced in this paper is drastically different in nature from UML; however, there is a possibility that it can complement and be complemented by UML in developing new methodologies for conceptual modeling. It is also suitable for software developers, since the control flow is a basic concept in computer science. The flow notion is likewise familiar in analysis, where many flow-based models are used, such as the circular flow model used by economists and the supply chains used in operations research.

REFERENCES

[1] Al-Fedaghi, S. (2008). "Software Engineering Interpretation of Information Processing Regulations," IEEE 32nd Annual International Computer Software and Applications Conference (IEEE COMPSAC 2008), Turku, Finland, July 28 - August 1, 2008.
[2] Al-Fedaghi, S. (2006). "Some aspects of personal information theory," 7th Annual IEEE Information Assurance Workshop, United States Military Academy, West Point, NY, June 2006.
[3] Ambler, S. W. Be Realistic About the UML: It's Simply Not Sufficient. http://www.agilemodeling.com/essays/realisticUML.htm (accessed March 2008).
[4] Bell, A. E. (2004). "Death by UML Fever," Vol. 2, No. 1, March 2004.
[5] Bunge, M. (1977). Ontology I: The Furniture of the World. Dordrecht, Holland: D. Reidel Publishing Company.
[6] UML Use Case Diagrams: Tips and FAQ. Carnegie Mellon University. http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html (accessed March 2008).
[7] Dussart, A., Aubert, B. A., and Patry, M. An Evaluation of Inter-Organizational Workflow Modeling Formalisms. Working paper, Ecole des Hautes Etudes Commerciales, Montreal.
[8] Evermann, J. Thinking Ontologically: Conceptual versus Design Models in UML. In: Rosemann, M., and Green, P. (eds.), Ontologies and Business Analysis. Idea Group Publishing.
[9] Evermann, J., and Wand, Y. (2005). Ontology based object-oriented domain modelling: fundamental concepts. Requirements Eng 10: 146-160. http://www.mcs.vuw.ac.nz/~jevermann/EvermannWandRE05.pdf
[10] Evermann, J., and Wand, Y. (2001). Towards ontologically based semantics for UML constructs. In: Kunii, H., Jajodia, S., and Solvberg, A. (eds.), Proceedings of the 20th International Conference on Conceptual Modeling, Yokohama, Japan.
[11] Fowler, M. (2000). UML Distilled: A Brief Guide to the Standard Object-Oriented Modelling Language. Addison-Wesley, Reading, MA.
[12] Opdahl, A., and Henderson-Sellers, B. (2002). Ontological evaluation of the UML using the Bunge-Wand-Weber model. Software and Systems Modeling, Vol. 1, No. 1, pp. 43-67.
[13] Reference.com (2008). Unified Modeling Language. http://www.reference.com/browse/wiki/Unified_Modeling_Language (accessed March 2008).
VI. CONCLUSION

This paper has examined the problem of building a conceptual model of a "real world domain" as part of the process of developing a business information system. For this purpose, object-oriented techniques are typically utilized, based on UML as an object-oriented modeling language, and extensions of UML are proposed for conceptual modeling. While such efforts are commendable, this should not hinder researchers from seeking fundamentally different approaches. We claim that the FM model as described in Fig. 9 is a conceptualization that is suitable for both systems analysts and software developers. The flow model introduced in this paper is drastically different in nature from UML. The flow notion is familiar in analysis, where many flow-based models are used, such as the circular flow model used by economists and the supply chains used in operations research; in computer science, control flow is a basic concept. However, there is a possibility that the flow model can complement and be complemented by UML in developing new methodologies for conceptual modeling.

REFERENCES

[1] S. Al-Fedaghi, "Some aspects of personal information theory," 7th Annual IEEE Information Assurance Workshop, United States Military Academy, West Point, NY, 2006.
[2] S. Al-Fedaghi, "Software Engineering Interpretation of Information Processing Regulations," IEEE 32nd Annual International Computer Software and Applications Conference (IEEE COMPSAC 2008), Turku, Finland, July 28 - August 1, 2008.
[3] S. Ambler, "Be Realistic About the UML: It's Simply Not Sufficient," http://www.agilemodeling.com/essays/realisticUML.htm (accessed March 2008).
[4] A. Bell, "Death by UML Fever," ACM Queue, Vol. 2, No. 1, March 2004.
[5] M. Bunge, Ontology I: The Furniture of the World, D. Reidel Publishing Company, Dordrecht, Holland, 1977.
[6] D. Bell, "UML Use Case Diagrams: Tips and FAQ," Carnegie Mellon University, http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html (accessed March 2008).
[7] A. Dussart, B. Aubert, and M. Patry, "An Evaluation of Inter-Organizational Workflow Modeling Formalisms," Working paper, Ecole des Hautes Etudes Commerciales, Montreal, 2002.
[8] J. Evermann, "Thinking Ontologically: Conceptual versus Design Models in UML," in M. Rosemann and P. Green (eds.), Ontologies and Business Analysis, Idea Group Publishing, 2005.
[9] J. Evermann and Y. Wand, "Towards ontologically based semantics for UML constructs," in H. Kunii, S. Jajodia, and A. Solvberg (eds.), Proceedings of the 20th International Conference on Conceptual Modeling, Yokohama, Japan, Nov. 27-30, 2001.
[10] J. Evermann and Y. Wand, "Ontology based object-oriented domain modelling: fundamental concepts," Requirements Eng, 10: 146-160, 2005. http://www.mcs.vuw.ac.nz/~jevermann/EvermannWandRE05.pdf
[11] M. Fowler, UML Distilled: A Brief Guide to the Standard Object-Oriented Modelling Language, Addison-Wesley, Reading, MA, 2000.
[12] A. Opdahl and B. Henderson-Sellers, "Ontological evaluation of the UML using the Bunge-Wand-Weber model," Software and Systems Modeling, 1(1): 43-67, 2002.
[13] Reference.com, "Unified Modeling Language," http://www.reference.com/browse/wiki/Unified_Modeling_Language (accessed March 2008).

A Method of Query over Encrypted Data in Database

Lianzhong Liu and Jingfen Gai
Key Laboratory of Beijing Network Technology, School of Computer Science and Engineering
Beijing University of Aeronautics and Astronautics, Beijing, 100083 China
lz_liu@buaa.edu.cn, gaijingfen0313@163.com

Abstract-The encryption mechanism is an effective way to protect the sensitive data in a database from various attacks, but when data is encrypted the query performance degrades greatly. In this paper, a scheme to support query over encrypted data is proposed, with a flexible choice of encrypted fields and good efficiency, especially when only a few fields are sensitive. We extend the two-phase query framework, constructing different indexes to support different computations for different data types: a new method to construct a bucket index is proposed for numeric data, and a bloom filter compression algorithm is used for character strings. We perform range queries on numeric data through the bucket index, and complete fuzzy queries on character data effectively by using the bloom filter as the encrypted index; we analyze the false positive probability of the bloom filter to obtain optimal parameters. The experimental results show that the performance of queries in our scheme is improved compared with the pairs coding method and the traditional method.

Keywords-query; encrypted data; database

I. INTRODUCTION

The database is the most important part of an information system. Enterprise databases host much important data, while they are threatened by inside and outside attacks. Database encryption, as an active protection mechanism, is an effective way to protect data integrity and privacy. When the data is encrypted, however, how to query it efficiently becomes important and challenging.

Our encryption scheme is based on column-level encryption, and we classify the data into sensitive and not-sensitive; column-level encryption minimizes the number of bytes encrypted and avoids full-table decryption. We extend the two-phase framework to complete the query, constructing different indexes to support different computations for different data types in the two-phase query.

II. RELATED WORK

Hacigumus et al. introduced the DAS model [3], in which the database is provided as a service to the client. In [4] they then proposed the two-phase query, in which the query is completed on the client side and the server side together: the first phase returns a coarse result by using an index, and the second phase decrypts the encrypted result and performs a refined query. They also combined the privacy homomorphism technique to support arithmetic computation [5]. Many researches adopt the two-phase query method, in which constructing an index for the encrypted data is the key to improving performance; such index methods are supported by the DBMS. For numeric data they proposed the bucket index, which supports range queries.

There are also some researches on the fuzzy query of character strings, since a character string match is usually not as quick as the match of numeric data [2]. Zhengfei Wang proposed a pairs coding function to support fuzzy query over encrypted character data [7][8]: it encodes every two adjacent characters in sequence and converts the original string directly into another characteristic string by a hash function. This method cannot deal with a single character and can perform badly for big character strings. Paper [9] proposed a characteristics matrix to express a string, the matrix also being compressed into a binary string as the index, and focused on query performance at the cost of storage space: every character string needs a matrix of size 259x256, which is large, leads to much computation, and is not suitable for storage in a database; in addition, the length of the index comes to more than a hundred bits.
A compromise solution between performance and security can be achieved by encrypting only the sensitive fields [1]. Bijit Hore further optimized the bucket index method with respect to how to partition the buckets so as to obtain a trade-off between security and query performance [6].

In this paper we extend the two-phase query framework. We construct a bucket index for numeric data that supports range queries without storing and accessing metadata about partition information. For character data, we use a triple to express a character string as a set, build a bloom filter on it according to the real situation, and then convert the bloom filter into a numeric value, which is saved in the database as the index. We analyze the relations among the parameters that affect the false positive probability.

III. THE ARCHITECTURE OF ENCRYPTED DATABASE

A. Encrypted storage model

We use the relational model R(A1, ..., An) to represent a database relation.

B. The process of storing and querying encrypted data

Our scheme extends the two-phase query framework [4], as shown in Fig. 1. In order to speed up the query of encrypted data, an Encryption/Decryption (E/D) Engine layer is added between the application and the DBMS. The application interface is the only thing we provide to applications, acting like a client proxy; on the E/D engine side, an application listener listens for requests coming from every application interface, acting like a server proxy. The communication between the application and the E/D engine depends on the interaction of the two proxies.

Every time an application puts forward a request, the application interface accepts it and transfers it to the E/D Engine, where the Store Module or the Query Translator deals with it according to the SQL type, using the metadata in the Security Dictionary to translate the user SQL. The Store Module turns an insert SQL into a corresponding one with the data encrypted. The Query Translator splits an original query over unencrypted relations into: (1) a corresponding query over the encrypted relations to run on the DBMS, and (2) a client-query for post-processing the results of the server-query, which is sent to the QRF (Querying Result Filter) as a querying condition; see details in [4]. The QRF takes the encrypted result as input, calls the decryption function (only when encrypted data is involved, which is another virtue of column-level encryption), computes every record against the query condition, and sends the exact result set to the application through the secured communication.

The corresponding encrypted relation can be expressed as

    R^S(A_1^E, ..., A_m^E, A_1^S, ..., A_r^S, A_1^C, ..., A_c^C)    (1)

where a column with superscript E stands for an encryption column, superscript S for a bucket index column, and superscript C for a character string index column. For every sensitive column we thus get an encrypted column and index columns:

    A_i^E = Encrypt(A_i),  A_i^S = Sindex(A_i),  A_i^C = Cindex(A_i)    (2)

where Encrypt() is the encryption function, Sindex() is the bucketIndex() function, and Cindex() is the function generating the character index.

Figure 1. Architecture of storage and query of encrypted data.

In fact, the essence of the two-phase query is the generation and use of the index, which also decides the query efficiency. The construction of an index should follow two principles: firstly, the index should filter false records efficiently; secondly, it should be safe enough not to leak the true values. Moreover, there are many data types in a DBMS and many types of queries: numeric data needs equality and range queries, character data needs equality and fuzzy queries, and date data needs queries with "between...and". It is not possible for one index to support all computations, so we build different types of indexes according to the data type.
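To fix the flow of Fig. 1 before the index constructions are detailed, the following is a minimal sketch of the two-phase split, not the paper's implementation: the relation emp(salary), its encrypted form emp_enc(salary_E, salary_S), and the helper names are all illustrative assumptions.

    # Minimal sketch of the two-phase query translation, assuming a
    # plaintext table emp(salary) stored server-side as
    # emp_enc(salary_E, salary_S) with salary_S a bucket index.

    def server_query(lo_bucket, hi_bucket):
        # Phase 1 (coarse, runs on the DBMS): filter on the bucket
        # index only; the server never sees plaintext salaries.
        return ("SELECT salary_E FROM emp_enc "
                f"WHERE salary_S BETWEEN {lo_bucket} AND {hi_bucket}")

    def client_refine(rows, lo, hi, decrypt):
        # Phase 2 (refinement, runs in the QRF of the E/D engine):
        # decrypt the coarse result and drop the false positives
        # introduced by bucketing.
        return [v for v in (decrypt(r) for r in rows) if lo <= v <= hi]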
IV. CONSTRUCTING THE BUCKET INDEX FOR NUMERIC DATA

The encrypted column is used for equality queries. We build the bucket index to support range queries, mainly for numeric data and date data. Different from the bucket index in DAS [4], we construct an order-preserving bucket index by a series of computations, without storing and accessing metadata about partition information. The computations are as follows.

Step 1: We divide every data value v by a Seed to get the quotient and the residue r, and we take the digit number of v. The initial bucket number IB concatenates the two:

    IB = digitNum(v) || (v - r) / Seed    (3)

It is easily proved that a larger v has a larger IB; this characteristic is what supports range queries on numeric data.

Step 2: In order to enhance the security of the bucket number, we blur IB with a monotone increasing function F(x); a larger IB gives a larger F(IB). In this way we obtain the final bucket index, and the true value is obscured and safe on the condition that F(x) is secure. The Seed and F(x) are the only things we need to protect.
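The following is a minimal sketch of this construction under stated assumptions: v is a non-negative integer, "||" is read as decimal concatenation, and the Seed value and the affine F(x) are illustrative stand-ins for the secret choices.

    # Sketch of the order-preserving bucket index (Steps 1 and 2).
    SEED = 7  # secret bucket width (illustrative value)

    def digit_num(v: int) -> int:
        return len(str(v))                    # number of decimal digits of v

    def bucket_index(v: int) -> int:
        r = v % SEED
        quotient = (v - r) // SEED            # Step 1: (v - r) / Seed
        ib = int(str(digit_num(v)) + str(quotient))  # IB = digitNum(v) || quotient
        return 3 * ib + 11                    # Step 2: blur IB with a monotone F(x)

A larger v yields a larger index value, so a range predicate on the plaintext translates directly into a range predicate on the index column.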

V. CONSTRUCTING THE INDEX FOR CHARACTER STRINGS

Large quantities of string data are used in many applications, and fuzzy queries are used frequently. How to execute a query efficiently over encrypted string data is the emphasis of this paper.

A. Triple for a character string

We construct a triple to present a character string, which can express the characters in the string and the relations between the characters. It is defined as follows.

Definition 1 (Triple for a character string): For a string s = 'c1 c2 ... cn', let
    String[] w = {w1, ..., wn} be the set of words split by blank, comma, and full stop characters;
    char[] u = {a1, ..., ar} be the set of characters in s;
    String[] r = {r1, ..., rn} describe the relationships of adjoining characters.
At last we have the triple t = (w, u, r).

B. Bloom filter on the triple

A Bloom filter is a simple randomized data structure that answers membership queries with no false negatives and a small false positive probability [11]. It is space-efficient, using an m-bit array to represent a set S = {s1, s2, ..., sn}, as shown in Fig. 2. A bloom filter uses k independent random hash functions h1, ..., hk with range {0, ..., m-1}. Initially, all bits in the bit array are set to 0. For each element s in S, the bits hi(s) are set to 1 for 1 <= i <= k. To check whether an element x is in S, one checks whether all hi(x) are set to 1. If not, then clearly x is not a member of S; if all hi(x) are set to 1, we take x to be in S, with a false positive probability [12].

Figure 2. A bloom filter mapping the elements of a set S = {s1, ..., sn} into an m-bit array through k hash functions h1, ..., hk.

The probability of a false positive for an element is

    f = (1 - p)^k = (1 - (1 - 1/m)^(kn))^k ~= (1 - e^(-kn/m))^k    (4)

where n is the number of elements and m is the total bit size of the index; a large m and a small n both give a small f, so if n is large, a larger m is needed, or else the false positive probability will be high. For given m and n, f is smallest when k = (ln 2)(m/n) [12]. In practice, k must be an integer, and a smaller k might be preferred since it reduces the amount of computation necessary. When k = 1, the filter becomes the pairs coding function, which has

    f = 1 - p = 1 - (1 - 1/m)^n ~= 1 - e^(-n/m),  k = 1    (5)

and in that case the closer (ln 2)(m/n) is to 1, the better the false positive probability is. By the conclusion above, we can say that pairs coding is a special case of the bloom filter.

C. The bloom filter built on the triple

We build the bloom filter index on the triple according to the real situation. Every element in w, u, and r is encoded into multiple positions of the binary string with the tag '1'; we then convert all the hash positions into a numeric value, which is stored in the database as the index in the form of numeric data. The bloom filter algorithm we use is described in Fig. 3, where we use MD5 to obtain the k hash functions.

Now we can convert a LIKE query over the encrypted relation into a query over the index attribute. When a query condition is "LIKE '%matchStr%'", we first get the index value of 'matchStr', which is then sent to the QRF as a querying condition, and the server-side condition is changed to WHERE A_i^C & value = value. In this way, a fuzzy query over character data becomes an equivalent membership query over the bloom filter, performed only by a simple & (bitwise AND) operation on numeric data, with no full-table decryption any more.

D. Analysis of the false positive probability of the bloom filter

The key of our method is to minimize the number of false records returned, which depends on the false positive probability of the bloom filter; the false positive probability in turn depends on multiple factors, and we need to analyze these relations. Assume the condition of a query is "WHERE Ai LIKE '%matchStr%'". The probability of a not-matched record being returned is f^NUMBER(matchStr), where NUMBER(matchStr) stands for the number of elements that matchStr has. So the larger matchStr is, the smaller the probability of a false record being returned.
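The following is a minimal sketch of this index and of the converted LIKE test. It follows the construction above (k positions per element derived from MD5, the m-bit array stored as an integer), but the way the k hash values are carved out of one digest, and the parameter values, are our illustrative assumptions.

    import hashlib
    import re

    M, K = 31, 4  # index size in bits and number of hash functions (illustrative)

    def positions(element: str):
        digest = hashlib.md5(element.encode()).digest()
        # One illustrative way to derive k hash values from a single MD5 digest.
        return [int.from_bytes(digest[4*i:4*i+4], "big") % M for i in range(K)]

    def triple(s: str):
        words = re.split(r"[ ,.]+", s.strip())          # w: words
        chars = set(s) - set(" ,.")                     # u: single characters
        pairs = [s[i:i+2] for i in range(len(s) - 1)]   # r: adjoining characters
        return [t for t in words + list(chars) + pairs if t]

    def bf_index(s: str) -> int:
        bits = 0
        for element in triple(s):
            for p in positions(element):
                bits |= 1 << p          # set the hash positions to 1
        return bits                     # stored in the database as a numeric index

    def may_contain(index_value: int, match_str: str) -> bool:
        v = bf_index(match_str)
        return index_value & v == v     # the SQL form: bitand(A_iC, v) = v

A record whose index fails may_contain certainly does not match, so the server can discard it; records that pass are decrypted and re-checked by the QRF, which removes the false positives.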

VI. THE EXPERIMENT AND THE ANALYSIS

The purpose of our experiments is to test the performance of fuzzy queries, comparing our method with the pairs coding method (extended by different k) and the traditional full-table decryption method. We leave the efficiency of the bucket index out of account in this paper.

We have the following definition of the false query probability.

Definition 2: Assume the number of tuples returned in the first phase of the query is n1 and the number of tuples satisfying the original query is n2, with n1 >= n2. Then the false query probability is

    fq = (n1 - n2) / n1    (7)

By the way, fq is different from f: f is the probability that an element not in the set is taken as a member, while fq is the proportion of wrong records among those returned; fq can, however, reflect f. We obtain fq by executing the fuzzy query over the encrypted relation and recording n1 and n2.

The experimental environment is Windows XP on a P4 2.66 CPU with 1G RAM; the database is Oracle 10g and the programming language is Java. The encryption algorithm is DES, and the length of the key is 128. According to the TPC-H standard, we use the Benchmark Factory for Databases tool to generate our database with scale = 1, and we use the tables ORDERS and PART as the experimental data sources; the sensitive columns are O_COMMENT and P_CONTAINER, for which we build indexes.

Take the ORDERS relation as an example, with the index built on the sensitive column o_comment. For the query o_comment LIKE '%unusual%' with k = 1, m = 31, and len = 2, we get the index value 480497654, and the converted query condition is

    bitand(e_comment, 480497654) = 480497654    (6)

Fig. 4 shows the query over ORDERS (condition o_comment LIKE '%String%') and Fig. 5 shows the query over PART (condition p_container LIKE '%string%'), comparing the false query probability of the bloom filter method for k = 1 to 4 (m = 31), the extended pairs coding method with different k, and full-table decryption. The results show that the improvement of the bloom filter based index is obvious.

Figure 3. The bloom filter algorithm.

Figure 4. Comparison of bloom filter, pairs coding, and traditional methods (false query probability for the ORDERS query).

Figure 5. Comparison of different index methods (false query probability for the PART query).

VII. CONCLUSION

We proposed a query scheme to query encrypted data efficiently by using different types of indexes for different data types. Firstly, we construct a bucket index without saving the metadata of every partition, which enhances the security of the sensitive data. Secondly, we focus on the fuzzy query over encrypted character strings: we use a triple to denote a string and a bloom filter to check the existence of the elements of the triple. Thirdly, we analyzed the parameters of the bloom filter to obtain the minimal false positive probability, and then converted the query over the sensitive attribute into a query over the index attribute in the two-phase query. Compared with conventional

queries and the pairs coding method, the experimental results show that the performance is improved.

ACKNOWLEDGMENT

This work is partly supported by the National High-Tech Research and Development Plan of China under Grant No. 2005AA113040 and the Co-Funding Project of Beijing Municipal Education Commission under Grant No. JD100060630.

REFERENCES

[1] S. Sesay, Z. Yang, J. Chen, and D. Xu, "A Secure Database Encryption Scheme," Consumer Communications and Networking Conference (CCNC), 2005, pp. 49-53.
[2] Z. Wang, W. Wang, and B. Shi, "Storage and Query over Encrypted Character and Numerical Data in Database," Proceedings of the Fifth International Conference on Computer and Information Technology, 2005, pp. 591-595.
[3] H. Hacigumus, B. Iyer, and S. Mehrotra, "Providing Database as a Service," Proceedings of the International Conference on Data Engineering (ICDE), 2002, pp. 29-38.
[4] H. Hacigumus, B. Iyer, C. Li, and S. Mehrotra, "Executing SQL over Encrypted Data in the Database Service Provider Model," ACM SIGMOD Conference, 2002, pp. 216-227.
[5] H. Hacigumus, B. Iyer, and S. Mehrotra, "Efficient Execution of Aggregation Queries over Encrypted Relational Databases," Proceedings of Database Systems for Advanced Applications (DASFAA), 2004, pp. 125-136.
[6] B. Hore, S. Mehrotra, and G. Tsudik, "A Privacy-Preserving Index for Range Queries," Proceedings of the 30th VLDB Conference, 2004, pp. 720-731.
[7] Z. Wang, J. Dai, W. Wang, and B. Shi, "Fast Query over Encrypted Character Data in Database," Communications in Information and Systems, 2004, pp. 289-300.
[8] Z. Wang, W. Wang, and B. Shi, "Execution Query over Encrypted Character Strings in Databases," Frontier of Computer Science and Technology, 2007, pp. 90-97.
[9] Y. Zhang, W. Li, and X. Niu, "A Method of Bucket Index over Encrypted Character Data in Database," Intelligent Information Hiding and Multimedia Signal Processing, 2007, pp. 186-189.
[10] Y. Ohtaki, "Partial Disclosure of Searchable Encrypted Data with Support for Boolean Queries," Availability, Reliability and Security (ARES 08), 2008, pp. 1083-1090.
[11] J. Bruck, J. Gao, and A. Jiang, "Weighted Bloom Filter," Information Theory, 2006, pp. 2304-2308.
[12] M. Mitzenmacher, "Compressed Bloom Filters," IEEE/ACM Transactions on Networking, vol. 10, no. 5, 2002, pp. 604-612.

Traversing Model Design Based on Strong-association Rule for Web Application Vulnerability Detection

QI Zhenyu, XU Jing, GONG Dawei, TIAN He
Institute of Machine Intelligence, Nankai University, TianJin, China, 300071
qi_zhenyu@hotmail.com, xujing@nankai.edu.cn, flyingdonkeycn@msn.com, fyonatian@hotmail.com

Abstract-With software playing an ever more important function in the information society, software dependability is in higher demand, and web application vulnerability has become one of the biggest threats to software security. Detecting and solving vulnerabilities is an effective way to enhance software dependability, but the development of vulnerability detection is obviously slower than that of web applications. Active detection methods that simulate the hacker have therefore become the development trend. Most active methods traverse all web links and interactive units in the traversing step, which easily causes low efficiency and lacks pertinence. This paper focuses on the characteristics of web applications, especially web pages, and presents a traversing model based on strong-association rules. Firstly, the model applies the HITS algorithm to generate a series of pages which may be used by hackers for attacking. Then we adapt an improved Apriori algorithm to obtain an optimized frequent set, on the basis of which we deduce strong-association rules between the properties of interactive units and the ways of attacking. This makes detection more efficient and provides strong support for the simulated attack model, giving a method to traverse web pages aiming to detect web application vulnerabilities.

Keywords-software dependability; Web Vulnerability; HITS algorithm; Apriori algorithm

I. INTRODUCTION

Web applications have been applied in many kinds of fields, and the more popular a web application becomes, the more its vulnerabilities can be used for attacking. A web application consists not only of web pages but also of hyperlinks, active links, and active interactive units, which contain lots of logical relations; these relations are exactly what hackers look for. The web pages that are browsed and interacted with most frequently are the pages which interest hackers most. Can we find attack points in so much information as quickly as a hacker does? The answer is yes. The traversing problem in vulnerability detection is presented on the basis of this consideration: traversing everything provides a basis for effective detection but brings unavoidable cursoriness. Therefore, this paper presents a traversing model for web application vulnerability detection. The HITS algorithm on which it builds is demonstrated in section 2; in section 3 we discuss association rules.

II. OVERVIEW OF THE HITS ALGORITHM

HITS is the classical algorithm based on hyperlinks. When one page links to another page, it means that there exists some logical relation between them, because a logical relation means that there could be a data stream between the pages; several links from different pages to one page display the importance of that page. Link information can thus be used as an important resource for finding content correlation, and HITS distinguishes the importance of links according to weight. The HITS algorithm mines the link structure of the Web and discovers the thematically related Web communities [1, 2] that consist of 'authorities' and 'hubs'. Authorities are the central Web pages in the context of particular query topics; for a wide range of topics, the strongest authorities consciously do not link to one another, so they can only be connected by an intermediate layer of relatively anonymous hub pages, which link in a correlated way to a thematically related set of authorities. Authorities and hubs exhibit what could be called mutually reinforcing relationships: a good hub points to many good authorities, and a good authority is pointed to by many good hubs.

HITS gives every page p in the base set two non-negative weights, an authority weight Xp and a hub weight Yp. The algorithm first gets a start set of pages through the initial URL and then generates the base set on its basis; the base set includes the pages pointed to by the start set and the pages pointing to the start set, and the rest of the algorithm mainly works in the base set. The two types of Web pages are extracted by an iteration that consists of the following two operations:

    x_p = SUM_{q -> p} y_q ,    y_p = SUM_{p -> q} x_q    (1)

For a page p, the weight x_p is updated to be the sum of y_q over all pages q that link to p, where the notation q -> p indicates that q links to p; in a strictly dual fashion, the weight y_p is updated to be the sum of x_q over all pages q that p links to. If the value of x_p is big, the page is considered a good authority; if the value of y_p is big, the page is considered a good hub. Finally, the algorithm outputs a series of pages with the highest weights. Kleinberg [3] demonstrates that this algorithm converges; other papers [4], [5] also describe it.

We can mark the pages by integers {1, 2, ..., n} and define their n x n adjacency matrix A: if Pi points to Pj, the value of the matrix at (i, j) is 1, else 0. Writing the authority and hub weights as vectors x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn), the matrix form of formula (1) is

    x <- A^T y ,    y <- A x    (2)

Stretching out formula (2), we get

    x <- A^T y <- A^T A x <- (A^T A) x ,    y <- A x <- A A^T y <- (A A^T) y    (3)

so x and y can be deduced by formula (3). According to linear algebra theory, the iterative sequences converge, under standardization, to the eigenvectors of A^T A and A A^T, so the result has nothing to do with the initial vectors and the selected parameters [6].

Although the result of HITS is good, there are still some problems: the algorithm ignores the relation between the content of web pages and the way of attacking, so the pages getting high weight from HITS are only the places where vulnerabilities are easily generated. Therefore, this model additionally adapts association rules to obtain strong-association rules by analyzing the transaction data stored in the database.
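The following is a minimal sketch of the iteration, written directly from the matrix form above; the 4-page adjacency matrix is an illustrative example, not data from the paper.

    import numpy as np

    def hits(adj: np.ndarray, iters: int = 50):
        n = adj.shape[0]
        x = np.ones(n)  # authority weights
        y = np.ones(n)  # hub weights
        for _ in range(iters):
            x = adj.T @ y             # x_p = sum of y_q over q -> p
            y = adj @ x               # y_p = sum of x_q over p -> q
            x /= np.linalg.norm(x)    # standardization; convergence is
            y /= np.linalg.norm(y)    # independent of the initial vectors
        return x, y

    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]])
    authority, hub = hits(A)          # highest entries mark candidate pages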

III. ASSOCIATION RULE EXTRACTION IN WEB APPLICATIONS

On the client side, a web page is actually an HTML document, which can be obtained and read by the user easily. An HTML document is general text plus special tags, and these tags are the keywords for analyzing it. When we collect all the HTML documents of one web application displayed on the client side, together they constitute a document database. Every document can be viewed as a transaction, and the keywords in a document can be viewed as items, so the database can be displayed as: {document_id, set_of_keywords}. After pre-processing, association analysis deals with stems and removes useless words; it collects keywords that are displayed together and then finds their relations. In the meantime, some keywords in the webpage documents deserve particular attention, such as username, password, search, upload, and download, because there is an association between such text and the interactive form corresponding to it. Since the main source from which hackers get information is the information in web pages, it is important to find the relationship between web application pages and hacker attacks; association rules [7-9] can be used to extract these strong-association rules.

A. Association Rule

Association rule mining can be stated as follows. Let I = (i1, i2, ..., im) be a set of items, and let D = (T1, T2, ..., Tn) be a set of transactions in a database, where each transaction Tj (j = 1, 2, ..., n) is such that Tj is a subset of I. Each transaction is assigned an identifier, called a TID. Let A be a set of items; a transaction T is said to contain A if and only if A is a subset of T. An association rule is an implication of the form A -> B, where A and B are subsets of I and A and B are disjoint. The rule A -> B holds in the transaction set D with support s, where s is the percentage of transactions in D that contain both A and B; this is taken to be the probability P(A u B), so support(A -> B) = P(A u B) = s. The rule has confidence c in D if c is the percentage of transactions in D containing A that also contain B; this is taken to be the conditional probability P(B|A), so confidence(A -> B) = P(B|A) = support(A u B) / support(A) = c. If support(X) >= minsupport, X is a frequent itemset. If support(A -> B) >= minsupport and confidence(A -> B) >= minconfidence, then A -> B is a strong association rule, i.e., strong association rules satisfy both minimum support and minimum confidence. Popular association rule mining mines the strong association rules that satisfy both user-specified thresholds. The problem can be decomposed into two major steps: 1) find all frequent itemsets; 2) use the frequent itemsets to generate the strong rules. Once all frequent itemsets in the database D have been found, it is straightforward to generate the strong association rules from them.

B. Apriori algorithm

Apriori is an influential algorithm for mining frequent itemsets for Boolean association rules. It employs an iterative approach known as a level-wise search, where k-itemsets are used to explore (k+1)-itemsets. First, the set of frequent 1-itemsets, denoted L1, is found; L1 is used to find L2, the set of frequent 2-itemsets, which is used to find L3, and so on, until no more frequent k-itemsets can be found. The frequent k-itemsets are always marked as Lk, and finding each Lk requires one full scan of the database. Based on the definition of the frequent itemset, several properties hold: (1) if A is a subset of B, then support(A) >= support(B); (2) if A is a subset of B and A is a non-frequent itemset, then B is a non-frequent itemset; (3) if A is a subset of B and B is a frequent itemset, then A is a frequent itemset. In other words, if an itemset I is not frequent, namely P(I) < minsupport, and we add an item A to I, then the resulting itemset I u A cannot be more frequent than I, so I u A is not frequent.
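The level-wise search can be sketched as follows. The candidate generation here is the simple self-join of frequent (k-1)-itemsets (a full Apriori-gen would additionally prune candidates having an infrequent subset, per property (2)), and the tiny keyword "documents" are illustrative, not taken from the paper.

    from itertools import combinations

    def apriori(transactions, minsupport):
        n = len(transactions)
        def support(items):
            return sum(items <= t for t in transactions) / n
        # L1: frequent 1-itemsets
        items = {i for t in transactions for i in t}
        level = [frozenset([i]) for i in items
                 if support(frozenset([i])) >= minsupport]
        frequent, k = list(level), 2
        while level:
            # candidates: unions of frequent (k-1)-itemsets of size k
            candidates = {a | b for a, b in combinations(level, 2)
                          if len(a | b) == k}
            level = [c for c in candidates if support(c) >= minsupport]
            frequent += level
            k += 1
        return frequent

    docs = [{"username", "password", "login"},
            {"username", "password", "remember"},
            {"search", "submit"},
            {"username", "password", "submit"}]
    print(apriori(docs, minsupport=0.5))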
Two theorems follow. Theorem 1: Any itemset in Lk must be the superset of a certain itemset in Lk-1. Theorem 2: If a transaction does not contain any itemset in Lk-1, then deleting this transaction will not affect the calculation of Lj (j > k). The conclusion of Theorems 1 and 2 is that a transaction containing no itemset in Lk-1 can be deleted from the database: when such a transaction is found, it is marked deleted and considered no more in the later scans.

C. Improved Apriori algorithm APL

Based on the Apriori algorithm, an improved algorithm for association rule mining named APL is proposed. Built on a strategy of transaction-database fuzzy pruning, APL reduces the size of the database and improves the efficiency of the algorithm. It dynamically evaluates the support of all the counted itemsets: unlike Apriori, which fixes a new candidate set only after each complete database scan, APL can add a new candidate at any point; if all the subsets of an itemset are frequent, the itemset becomes a new candidate.

Regarding the "0" in Theorem 2 (a transaction that contains zero itemsets of Lk-1), can it be further changed? If the 0 can be replaced by a bigger number, then pruning the database will obviously be faster. We therefore extend Theorems 1 and 2 and put forward the theorems below. Theorem 3: Any itemset in Lk must be the superset of k certain itemsets in Lk-1. Theorem 4: If a transaction contains less than k itemsets in Lk-1, then deleting this transaction will not affect the calculation of Lj (j > k). Theorems 3 and 4 guarantee the following conclusion: every time the support of Lk is calculated, if a transaction contains less than k+1 itemsets in Lk, the transaction can be deleted from the database. Compared with Theorems 1 and 2, Theorems 3 and 4 evidently delete more invalid transactions, prune the database at a larger scale, further reduce the records that must be scanned next time, and improve the efficiency of the algorithm.

Algorithm APL: use the intermediate results of the last database scan to prune the database.
Input: transaction database D; minimum support threshold minsupport.
Output: all the frequent itemsets in D.

    L1 = {large 1-itemsets}
    for (k = 2; Lk-1 != empty; k++) {
        Ck = apriori_gen(Lk-1)            // newly generated candidate itemsets
        forall transactions t in D {
            if (t.delete == 0) {
                Lt = subset(Ck, t)        // candidate itemsets contained in t
                forall candidates l in Lt: l.count++
                if (|Lt| < k+1) t.delete = 1   // mark t; ignore it in later scans
            }
        }
        Lk = {l in Ck | l.count >= minsupport}
    }
    return L = union of all Lk            // all the frequent itemsets

IV. DATA IN WEB PAGES

By the output of the HITS and association-rule steps, we can deduce strong relations between interactive properties and attack types; it is fundamental for detection to collect the information hackers are interested in. Hackers cannot browse the source code directly: they interact through the browser as general users do, and get information just by analyzing the HTML document. Among all kinds of data in an HTML document, the most important are the dynamic forms and the links.

A dynamic form generates dynamic web pages by sending a request to the server; all kinds of CGI programs are responsible for responding to dynamic forms, so the input and output of CGI programs are the objects we concern ourselves with most. The content of a dynamic form is actually input boxes embedded in the tag <form>. A typical form is listed as follows:

    <FORM name="myform" action="http://myWEB/a.cgi" method="post">
    <input type=hidden name=site value='com' length=10>
    <input type=text name=ABC value="abc" length=10>
    <input type=hidden name=chatlogin value='in'>
    <input type=hidden name=product value='mail'>
    <input type=submit name=submission value="submit">
    </FORM>

A dynamic form can be located by finding the keyword <form>, and its properties are obtained through keywords like method (get, post), the input, hidden, and selection fields, name, value, length, type (text, hidden), and action (the CGI program name). Links are located through the href attribute of the tag <a>, such as <a href=http://news.sina.com.cn/20080110/n33543.shtml class=red>alcate bell</a>; the content embedded there gives the linked web pages. Through the responses from the server, these URL requests return pages in the form of documents; each document is stored in the database, which provides the data for analysis. If there are dynamic forms in a response, the corresponding request must include these forms. A sketch of this locating step follows the paragraph below.
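As a sketch of the locating step (not the paper's implementation), the standard html.parser module can collect the <a href> targets and the <form> fields together with the properties used later for rule mining:

    from html.parser import HTMLParser

    class PageScanner(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links, self.forms = [], []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "a" and "href" in a:
                self.links.append(a["href"])      # candidate pages to visit
            elif tag == "form":
                self.forms.append({"action": a.get("action"),
                                   "method": a.get("method"), "fields": []})
            elif tag == "input" and self.forms:
                # keep the properties used for association-rule mining
                self.forms[-1]["fields"].append(
                    {k: a.get(k) for k in ("type", "name", "value", "length")})

    scanner = PageScanner()
    scanner.feed('<form action="/a.cgi" method="post">'
                 '<input type="hidden" name="site" value="com"></form>'
                 '<a href="/next.html">next</a>')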
So we combine APL and feedback from the database to enhance performance; through several cycles of optimizing, it becomes quicker and better.

V. TRAVERSING MODEL BASED ON STRONG-ASSOCIATION RULES

From the hackers' view, their final purpose is to find weaknesses in the web application. Hackers attack web application vulnerabilities through interaction: they continuously look for useful information and then attack on purpose using that information. Since most traversing methods visit all links and interactive units, with the resulting low efficiency, this paper designs a more pertinent traversing model based on strong association, as shown in Fig. 1.

The HITS algorithm outputs a series of high-weight pages from the input initial URL; what needs attention is that each URL should indeed be part of the web application. The traversing depth and breadth can be set in advance, and we must avoid traversing the same page repeatedly. The documents of the high-weight pages provide the dataset for the Apriori algorithm, and by the association rule algorithm we obtain the frequent itemsets, from which the strong relations between interactive properties and attack types are deduced. Such relations reflect programmers' traditions and the rules of the IT field, and they can be used by prediction techniques based on experiment; it is also meaningful to combine the experience of test experts with web application traversal. Finally, this strong association is used to direct the vulnerability design in the simulated attack model, which provides support for traversing on purpose.

VI. CONCLUSION AND FURTHER WORK

This paper presents a traversing model for Web

application vulnerability detection. The model obtains a set of web pages by the HITS algorithm; this set provides the dataset for the Apriori algorithm, which improves the efficiency of the traversing process. Strong associations between interactive properties and attack types can be obtained and can then support the simulated attack model. Admittedly, it is still necessary to design a reasonable way to store the data obtained from the page documents; how to search easily, take less space, and avoid redundancy is our future work.

ACKNOWLEDGMENT

The research work here is sponsored by the Tianjin Science and Technology Committee under contracts 08ZCKFGX01100 and 06YFJMJC0003.

REFERENCES

[1] Kleinberg J, Lawrence S. The Structure of the Web. Science, 2001, 294: 1849-1850.
[2] Flake G W, Lawrence S, Giles C L, et al. Efficient Identification of Web Communities. Proc. of the Sixth International Conference on Knowledge Discovery and Data Mining (ACM SIGKDD-2000), 2000: 150-160.
[3] Kleinberg J. Authoritative Sources in a Hyperlinked Environment. Proc. of 9th ACM-SIAM Symposium on Discrete Algorithms, 1998. Also appeared as IBM Research Report RJ 10076, May 1997.
[4] Chakrabarti S, Dom B E, Gibson D, et al. Mining the Link Structure of the World Wide Web. IEEE Computer, 1999, 32(8).
[5] Bharat K, Henzinger M R. Improved Algorithms for Topic Distillation in a Hyperlinked Environment. In Proceedings of the ACM-SIGIR, 1998.
[6] Kleinberg J. Authoritative Sources in a Hyperlinked Environment. Proc. of 9th ACM-SIAM Symposium on Discrete Algorithms, 1998. Also appeared as IBM Research Report RJ 10076, May 1997.
[7] Agrawal R, Imielinski T, Swami A. Mining Association Rules between Sets of Items in Large Databases. In: Proceedings of the ACM SIGMOD Conference on Management of Data, 1993: 207-216.
[8] Agrawal R, Srikant R. Fast Algorithms for Mining Association Rules. In: Proc. of 20th Int. Conf. on Very Large Databases (VLDB '94), 1994: 487-499.
[9] Agrawal R, Srikant R. Fast Algorithms for Mining Association Rules in Large Databases. Technical Report FJ9893, San Jose, CA: IBM Almaden Research Center, 1994.

Figure 1. Traversing Model.

Attribute-based Relative Ranking of Robots for Task Assignment

B.B. Choudhury, Department of Mechanical Engineering, NIT Rourkela, Orissa, India, bbcnit@gmail.com
B.B. Biswal, Department of Mechanical Engineering, NIT Rourkela, Orissa, India, bibhuti.biswal@gmail.com
R.N. Mahapatra, Department of Mechanical Engineering, SIT Bhubaneswar, Orissa, India, rabindra@silicon.ac.in

Abstract-The availability of a large number of robot configurations has made robot workcell designers think over the issue of selecting the most suitable one for a given set of operations. The process of selecting the appropriate kind of robot must consider the various attributes of the robot manipulator in conjunction with the requirements of the various operations for accomplishing the task. The present work is an attempt to develop a systematic procedure for the selection of a robot based on an integrated model encompassing the manipulator attributes and the manipulator requirements. The work is also aimed at creating an exhaustive list of attributes and classifying them into distinct categories. The developed procedure can advantageously be used to standardize the robot selection process with a view to performing a set of intended tasks. The coding scheme for the attributes and the relative ranking of the manipulators are illustrated with an example.

Keywords-Multirobot; Relative ranking; Attributes

I. INTRODUCTION

Recent developments in information technology and engineering sciences have been the main reason for the increased utilization of robots in a variety of advanced manufacturing facilities. Today's market provides a variety of robot manipulators having different configurations and capabilities, and robots with vastly different specifications are available for a wide range of applications. The selection of a robot to suit a particular application and production environment from among the large number available in the market has therefore become a difficult task: just meeting the customer requirements can be a challenge, and the addition of system integration to the workcell design process may further complicate the picture.

The capability of a robot manipulator can be assessed through parameters like the number of joints, type of joints, joint placement, link lengths and shapes, workspace, manipulability, and ease and speed of operation. In order to select a suitable robot, both the kinematic and the dynamic aspects should be looked into; the speed of operation significantly depends on the complexities of the kinematic and dynamic equations and their computations. However, the parameters that determine the capability of a robot are heterogeneous in nature, and formulating an integrated model with all of them becomes a difficult task at times.

Several approaches to robot selection have been reported. Offodile et al. [1] developed a coding and classification system which was used to store robot characteristics in a database, and then selected a robot using economic modeling. Liang and Wang [2] proposed a robot selection algorithm combining the concepts of fuzzy set theory and hierarchical structure analysis; the algorithm was used to aggregate decision makers' fuzzy assessments of robot selection attribute weightings and to obtain fuzzy suitability indices. Rao and Padmanabhan [3] proposed a methodology based on digraph and matrix methods for the evaluation of alternative industrial robots: a robot selection index, obtained from a robot selection attributes function, in turn obtained from the robot selection attributes digraph, evaluates and ranks robots for a given industrial application; the digraph is developed from the robot selection attributes and their relative importance for the application considered, and the authors suggested a step-by-step procedure for evaluating the index. Zhao and Yashuhiro [4] introduced a genetic algorithm (GA) for an optimal selection and workstation assignment problem for a computer-integrated manufacturing (CIM) system. Boubekri et al. [5] developed an expert system for industrial robot selection considering functional, organizational, and economical factors in the selection process. The use of DEA for robot selection has been addressed by Khouja [6]. Huang and Ghandforoush [7] stated a procedure to evaluate and select robots depending on the investment and budget requirements and on comparing the suppliers of the robots, but they assumed that the user knows which robot to buy, the question being from whom to buy it.

Fortunately, a number of tools and resources are becoming available to help designers select the most suitable robot for a new application. However, none of these solutions can take care of all the demands and constraints of a user-specific robotic workcell design, and eventually the designers must use the available information and make their own decisions. In this paper, we propose a new mathematics-based methodology for robot selection to help designers identify feasible robots, and then outline the most appropriate cases for smoothing the robot selection process. The results of this study will help robot workcell designers develop a more efficient and effective method to select robots for robot applications.

II. MANIPULATOR ATTRIBUTES

With the growth in robot applications, large numbers of robots are available for various manufacturing applications, and proper identification of the manipulator attributes is critically important when comparing alternative robots. A robot manipulator can be specified by a number of quantitative attributes, such as payload capacity, repeatability, accuracy, maximum reach, and degree of freedom (DOF), which are usually considered basic parameters. There are also attributes which are informative in nature and may be denoted by some number whose numerical value has no significance. Some attributes, such as build quality or after-sales service, cannot be expressed quantitatively; these may be expressed by a rating on a scale of, say, 1-10. Others can be expressed through standard measures: for instance, reliability can be expressed in terms of Mean Time Between Failures (MTBF) or Mean Time To Repair (MTTR). There are also attributes whose quantification is not directly available and needs to be obtained by some mathematical model and analysis; the ease of operation, termed manipulability, can be quantified as a manipulability measure and used as an attribute. Attributes like life expectancy may be estimated through experimentation if not mentioned by the manufacturer. These data, individually or collectively, help the user select the most suitable robot for the task to be performed; in most cases, however, the user needs to be assisted in identifying the robot attributes logically. Sometimes it may be possible to arrive at a rational choice without formal application of a quantitative or semi-quantitative methodology, by mere articulation of which attributes are important in the context of the particular alternatives under consideration.

The manipulator attributes are grouped according to their broad area into general, physical, performance, structure, application, sophistication, control, availability, and task parameters, as given in Table I.

TABLE I. MAJOR ATTRIBUTES FOR MANIPULATOR AND TASK

General: 1. Price range; 2. Type of robot; 3. Arm geometry
Physical: 4. Type of actuators; 5. Weight of the robot; 6. Size of the robot; 7. Type of grippers supported; 8. Number of axes; 9. Space requirements of the robot; 10. Maximum reach; 11. Types of end effectors; 12. Payload of the robot
Performance: 13. Workspace; 14. Stroke; 15. Velocity; 16. Accuracy; 17. Repeatability; 18. Resolution
Structure/architecture: 19. Degree of freedom (DOF); 20. Type of joints
Application: 21. Working environment
Sophistication: 22. Maintainability; 23. Safety features
Control/feedback system: 24. Control of robotic joints; 25. Gripper control; 26. Sensors; 27. Programming method; 28. Number of input channels; 29. Number of output channels
Availability/reliability: 30. Down time; 31. Reliability
Task: 32. Space; 33. Time; 34. DOF required; 35. Force

Each of the 35 parameters is assigned a cell of the parameter coding scheme (Table II), and the value entered in a cell encodes the corresponding attribute information; a '0' represents that the information relating to that particular cell is not available. Although the number of joints, axes, and degrees of freedom are usually considered basic parameters, the coordinate system, the sequence of joints, and their respective orientations and arrangements are usually left untouched; robots with the same number of joints and joint sequence but with different joint orientations will have different performance characteristics. Table III shows the identification code of an example robot, specifying the attribute information with the allotted code in the respective cells.

TABLE III. IDENTIFICATION CODE (attribute / information / code)

Price range / - / 0; Type of robot / - / 0; Arm geometry / any one / 6; Type of actuators / any one / 4; Weight of the robot / - / 0; Size of the robot / - / 0; Type of grippers supported / - / 0; Number of axes / - / 0; Space requirements of the robot / - / 0; Maximum reach / 1000 mm / 4; Types of end effectors / - / 0; Payload of the robot / 5 kg / 5; Workspace / - / 0; Stroke / - / 0; Velocity / 50 deg/sec / 5; Accuracy / - / 0; Repeatability / +/-0.02 mm / 1; Resolution / - / 0; Degree of freedom / 2 / 7; Type of joints / - / 0; Working environment / - / 0; Maintainability / - / 0; Safety features / - / 0; Control of robotic joints / any one / 3; Gripper control / - / 0; Sensors / - / 0; Programming method / any one / 1; Number of input channels / - / 0; Number of output channels / - / 0; Down time / - / 0; Reliability / - / 0; Space / - / 4; Time / - / 4; DOF required / - / 8; Force / - / 5

This coding scheme can be used directly for a visual comparison between two robots, up to a certain extent.
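As an illustration of how such coded records might be handled (the attribute subset and the helper below are ours, not part of the paper), two robots can be compared cell by cell, treating code 0 as missing information:

    # Sketch of a cell-by-cell comparison of identification codes.
    ATTRIBUTES = ["price range", "arm geometry", "type of actuators",
                  "maximum reach", "payload", "velocity", "repeatability",
                  "control of joints", "programming method", "space",
                  "time", "dof required", "force"]  # abbreviated subset of the 35

    robot_a = [0, 6, 4, 4, 5, 5, 1, 3, 1, 4, 4, 8, 5]  # codes as in Table III

    def visual_compare(r1, r2):
        # report attributes where both robots carry information (code != 0)
        # and the codes differ
        return [a for a, c1, c2 in zip(ATTRIBUTES, r1, r2)
                if c1 and c2 and c1 != c2]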

III. THE ROBOT SELECTION PROCESS

The robot selection system can be divided into four activities: i) operation requirements and a data library of robots, ii) the coding scheme, iii) the selection of attributes, and iv) the ranking of robots. The first activity mainly consists of listing the requirements of the robots and the desired operations.

Recent advances in robotics technology allow robotic workcell engineers to design and implement more complex applications than ever before; some of these considerations also make the robot selection process more complicated. Based on different robot applications, some of the robot selection criteria can be ignored and others may become critically important. The robot selection criteria should include the key parameters such as cost, payload, maximum reach, maximum speed, repeatability, degrees of freedom, and swept area. The robot selection attributes are identified for the given application, and the robots are shortlisted on the basis of the identified attributes satisfying the requirements. A few attributes have a direct effect on the selection procedure; these may be set aside as 'pertinent attributes', as necessitated by the particular application and/or the user, and the selection procedure henceforth focuses solely on the pertinent attributes, leaving out the rest. The threshold values of the pertinent attributes may be assigned by obtaining information from the user and a group of experts; the database is then scanned, one attribute at a time, to eliminate the robot alternatives which have one or more attribute values falling short of the minimum required (threshold) values. A mini-database is thus formed comprising the satisfying solutions, i.e., the alternatives whose attributes all satisfy the acceptable levels of aspiration. The problem is now one of finding the optimum or best among these satisfying solutions, so the selection procedure needs to rank them in order of merit. To facilitate this search, an identification system has been made for all the robots in the database: the main attributes have been broken down into sub-attributes and sub-sub-attributes so that a robot manipulator can be identified in a very precise and detailed manner, since the information supplied by the manufacturer alone is not sufficient and is required to be more elaborate.

The first step of the ranking is to represent all the information available about the satisfying solutions in matrix form. Such a matrix is called the decision matrix D: each row is allocated to one candidate robot and each column to one attribute under consideration, so an element d_ij of D gives the raw (non-normalized) value, in its own units, of the jth attribute for the ith robot. The objective values of the robot selection attributes are of different dimensions and units; hence they are normalized. Normalization brings the data within a particular range, provides dimensionless and truly comparable magnitudes, and allows faster comparison in various formats. An element n_ij of the normalized specification matrix N is calculated as

    n_ij = d_ij / ( SUM_{i=1..m} d_ij^2 )^(1/2)

where d_ij is an element of the decision matrix D; n_ij indicates the standing of that particular attribute magnitude when compared to the whole range of magnitudes of all candidate robots, so N holds all attribute values on a common scale of 0 to 1.

The next step is to obtain information from the user or a group of experts on the relative importance of one attribute with respect to another, since the attributes have different importance for a particular application. The values of all such comparisons are stored in the weight matrix W, where w_ij contains the relative importance of the ith attribute over the jth attribute. The matrix V that combines the relative weights and the normalized specifications of the candidates is the weighted normalized matrix, with elements v_ij = w_j * n_ij.

IV. ILLUSTRATIVE EXAMPLE

The example considers a pick-and-place task, to be carried out by a suitable robot that is expected to avoid some obstacles. In order to demonstrate and validate the proposed methodology, five robots with different configurations and capabilities are considered. The minimum requirements for this application are tabulated in Table IV, and the attribute values of the candidate robots are given in Table V.

TABLE IV. MINIMUM REQUIREMENTS OF A ROBOT

Load capacity: minimum 5 kg; Repeatability: 0.02 mm; Velocity: at least 50 deg/sec; Types of drives: any one; Degree of freedom: at least 2; Arm geometry: any one; Control mode: any one; Robot programming: any one

TABLE V. CRITERIA FOR ROBOT SELECTION (Robot-1, Robot-2, Robot-3, Robot-4, Robot-5)

Maximum reach (MR): 1000, 2000, 5000, 5000, 5500
DOF (DF): 2, 3, 4, 5, 6
Payload (PL): 5, 10, 30, 40, 60
Velocity (VL): 50, 90, 120, 200, 250
Arm geometry (AG): 4, 9, 20, 20, 24
Actuator (AT): 7, 10, 3, 10, 7
Control mode (CM): 4, 6, 8, 10, 8
Repeatability (RT): 0.02 for Robot-1, with the remaining criteria Robot programming (RP), Space (SC)*, Time (TE)*, DOF (DF1)*, and Force (FR)* completing the thirteen. (The starred values pertain to task-1 of the fifteen tasks actually considered for the problem; only one task is reported here due to page constraints.)

The procedure for the selection of the robot is as follows.

Step 1: Formation of the decision matrix D from the rows of Table V; the repeatability values enter D in reciprocal form (e.g., 0.02 mm becomes 50), so that a larger entry is always preferable.

Step 2: Formation of the weight matrix W. Twelve different sets of weights are considered; in each set, the thirteen attributes MR, DF, PL, VL, AG, AT, CM, RT, RP, SC, TE, DF1, and FR receive relative importance values (for instance, the first set assigns MR = 13, DF = 2, PL = 6, VL = 1, ..., DF1 = 13, FR = 2).

Step 3: Calculation of the normalized specification matrix N from D.

Step 4: Calculation of the ranking factor sigma = W * N for every robot under every weight set, and of the total score, the sum of sigma over the attributes.
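A minimal sketch of Steps 3 and 4 follows, assuming NumPy; the matrix D below uses only the first four attribute columns of Table V, and the weight vector is the beginning of the first weight set, so the numbers are purely illustrative.

    import numpy as np

    D = np.array([[1000, 2,  5,  50],      # rows: robots
                  [2000, 3, 10,  90],      # columns: MR, DF, PL, VL
                  [5000, 4, 30, 120],
                  [5000, 5, 40, 200],
                  [5500, 6, 60, 250]], dtype=float)

    # Step 3: n_ij = d_ij / sqrt(sum_i d_ij^2), column by column
    N = D / np.sqrt((D ** 2).sum(axis=0))

    w = np.array([13, 2, 6, 1], dtype=float)  # one set of attribute weights

    sigma = N @ w                     # Step 4: ranking factor of each robot
    ranking = np.argsort(-sigma) + 1  # robots ordered by ranking factor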

The calculations of ranking factors are made for the other robots with a total of 12 different sets of weights. 674 0 .979 0.437 0 .109 0 .406 0 . 252 0 .5 0.TABLE IV.337 0 .461 0 . ‘W’.435 0 .57 3 5 ⎥ 3.979 0 .5 6 3. According to the results obtained and the analysis thereby. These factors were not previously considered in coding and evaluation. 35 35.0 6 0.02 3 0. management constraints and corporate policies. 571 0 .4 0.V.4895 0. before a final decision is taken to select a new robot.5 2 1 1 .691 0.475 0 . ⎡0 .5 1.447 ⎥ 0 .894 ∑σ 1000 2 5 50 4 7 4 0.359 DOF(DF1)* 3 Force (FR)* 5 Robot-2 2000 3 10 90 9 10 6 0.344 4.019 0 .447 ⎤ 0 .447 0 .344 0 .5 Time(TE)* 0.811 0.171 ⎢ ⎢ 0 .02 Robot programming(RP) 3 Space(SC)* 0.447 ⎥ ⎥ 0 .877 0.2 3 5 Step 4: Calculation of normalized value (N.285 0 . Parameter MR DF PL VL AG AT CM RT RP SC TE DF1 FR RANKING FACTOR WITH ONE SET OF WEIGHTAGE OF ROBOT-1 Value N. 61 0 .5 2 .28 3 5 Robot-4 5000 5 40 200 20 10 10 1. may be considered.33 3 5 ⎥ ⎥ 5 3 5 ⎥ ⎦ The calculation of the total ranking factor.647 0 . 876 0 .447 ⎥ ⎥ 0 .339 0.5 3 3 3 3 .33 8 5 TE DF1 FR ⎤ 2.146 0.337 0 . etc.5 1 DF1 FR ⎤ 13 2 ⎥ ⎥ 12 3 ⎥ ⎥ 11 4 ⎥ 10 5 ⎥ ⎥ 9 6 ⎥ 8 7 ⎥ ⎥ 7 8 ⎥ 6 9 ⎥ ⎥ 5 10 ⎥ ⎥ 4 11 ⎥ 3 12 ⎥ ⎥ 2 13 ⎦ Step 3: Calculation of the normalized specification matrix.505 0 .072 0 .019 0 .262 0.674 0.344 0.796 0.844 0. 259 0 .5 RP SC TE 1 1 2 3 4 5 6 6 5 4 3 2 1 6 5 4 3 2 1 1 2 3 4 5 6 1 1 .3 3 5 Robot-5 5500 6 60 250 24 7 8 1. 331 0 .5 2 2 2.356 0 .2 0.323 0 .3 3 5 Robot-3 5000 4 30 120 20 3 8 0. 713 0 .5 6 0. 174 0 .447 0.259 0.7715 .447 ⎥ ⎦ TABLE V.876 0. 174 0 . 803 0 .171 ⎢ ⎣ 0 .3 0. The procedure for the selection of the robot is as follows :Step 1: Formation of decision matrix. ‘D’.195 0 .438 00 . 237 0 .78 3 5 ⎥ ⎥ 3. The 1st and Step 2: Formation of weight matrix.039 0 .241 0 .155 0 .5 3 2 2. for one set weights in robot-1 is presented in Table VI.447 0 .33 3 5 ⎥ ⎥ 3.034 5.5 0 .844 ⎢ 0 .447 0 .5 2 .22 6 2.447 0 .345 0 . The normalized values of all these parameters are taken to form the decision matrix. the factors such as economic considerations.447 0 .398 0 .V 0.5 1.674 0.339 0 . However only one task has been considered for calculation due to page constraints.428 ⎢ N = ⎢ 0 . availability. and total score σ.5 1 1 6 13 2 Ranking factor(σ) 10.691 0 .199 0 . The values of these ranking factors for all the robots are given Table VII. ∑ Step 5: Calculating the average of the ranking factors of all the robots TABLE VI. ∑ σ.5 3 .345 0 .276 0 .45 0.796 2. Robot-5 and Robot-4 has the highest ranking factors should be recommended as the best robot alternative. Sl 1 2 3 4 6 7 8 9 Parameter MINIMUM REQUIREMENT OF A ROBOT Values minimum 5 kg.1 4 0.0 8 0.241 0 .145 0 .796 0 .344 0 .877 0 . CRITERIA FOR ROBOT SELECTION Criteria Robot-1 Maximum Reach(MR) 1000 DOF(DF) 2 Payload(PL) 5 Velocity(VL) 50 Arm geometry(AG) 4 Actuator(AT) 7 Control mode(CM) 4 Repeatability(RT) 0.146 0 .447 W 13 2 6 1 3 1 6 0. ⎡ MR DF ⎢1000 2 ⎢ ⎢2000 3 D =⎢ ⎢5000 4 ⎢5000 5 ⎢ ⎢5500 6 ⎣ PL VL 5 50 10 90 30 120 40 200 60 250 AG 4 9 20 20 24 AT 7 10 3 10 7 CM 4 6 8 10 8 RT 50 10 2 1 1 RP SC 3 2 4 2.287 0 . V. ranking factor ( σ =W * N). 
However.).713 0.02 mm at least 50 deg/sec any one at least 2 any one any one any one Load capacity Repeatability Velocity Types of drives Degree of freedom Arm geometry Control mode Robot programming ⎡ MR DF ⎢ 13 2 ⎢ ⎢ 12 3 ⎢ 4 ⎢ 11 ⎢ 10 5 ⎢ 6 ⎢ 9 W =⎢ 8 7 ⎢ 8 ⎢ 7 ⎢ 6 9 ⎢ 10 ⎢ 5 ⎢ 11 ⎢ 4 ⎢ 3 12 ⎢ 13 ⎣ 2 PL VL 6 5 4 3 2 1 1 2 3 4 5 6 1 2 3 4 5 6 6 5 4 3 2 1 AG 3 4 5 6 7 8 9 10 11 12 13 14 AT 1 2 3 4 5 6 6 5 4 3 2 1 CM 6 5 4 3 2 1 1 2 3 4 5 6 RT 0 .259 2.RESULTS AND DISCUSSION The robots are arranged in order of their ranking factor based on the significant attributes chosen keeping the application of the robots in view. 0.462 5.359 3 5 *These values pertain to task-1 of the fifteen tasks actually considered for the problem.159 0 .628 0.388 0 .972 1.
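As an illustration of Steps 3–5, the following Python sketch (not part of the original procedure; it uses only the first four attribute columns and a single hypothetical set of weights) normalizes a small decision matrix, applies the weights, and computes the ranking factor σ for each robot. Range normalization — dividing each column by its largest magnitude — is assumed here, since the paper only states that n_ij expresses the standing of a value against the whole range of candidate magnitudes.

# Hedged sketch of the matrix-method ranking (Steps 3-5).
# Assumption: n_ij = d_ij / max_i(d_ij), i.e. each attribute column is
# scaled by its largest magnitude to obtain dimensionless values.
import numpy as np

# Decision matrix D: rows = robots, columns = attributes
# (only MR, DF, PL, VL shown as an illustration).
D = np.array([
    [1000, 2,  5,  50],
    [2000, 3, 10,  90],
    [5000, 4, 30, 120],
    [5000, 5, 40, 200],
    [5500, 6, 60, 250],
], dtype=float)

w = np.array([13, 2, 6, 1], dtype=float)  # one hypothetical set of weights

N = D / D.max(axis=0)          # Step 3: normalized specification matrix
sigma = (w * N).sum(axis=1)    # Step 4: ranking factor, sigma_i = sum_j w_j * n_ij

for i, s in enumerate(sigma, start=1):
    print(f"Robot-{i}: sigma = {s:.3f}")

# Step 5 would average sigma over all 12 sets of weights before ranking.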

V. RESULTS AND DISCUSSION

The robots are arranged in order of their ranking factor based on the significant attributes chosen, keeping the application of the robots in view. The calculations of the ranking factors are made for all the robots with a total of 12 different sets of weights; the values of the total ranking factor Σσ obtained with the different sets of weights are collected in Table VII. The 1st and 2nd ranked robots have the highest figures amongst all the robots. On the basis of the ranking factors, the robots are rated as 'Low', 'Medium' and 'High' in relation to the group of robots under consideration, as shown in Table VIII. The ranking curves of the robots are shown in Fig. 1, and the average values of the ranking factor are presented in Fig. 2.

TABLE VII. VALUES OF THE TOTAL RANKING FACTOR Σσ FOR EACH ROBOT WITH THE 12 DIFFERENT SETS OF WEIGHTS

TABLE VIII. SCORES OF THE ROBOTS
Sl. No  Robot          Relative Ranking  Relative Rating
1       Robot-1 (R-1)  4                 Low
2       Robot-2 (R-2)  3                 Medium
3       Robot-3 (R-3)  2                 Medium
4       Robot-4 (R-4)  1                 High
5       Robot-5 (R-5)  1                 High

Figure 1. Ranking curves of robots
Figure 2. Average ranking factor of robots

According to the results obtained and the analysis thereby, Robot-5 and Robot-4 have the highest ranking factors and should be recommended as the best robot alternatives; in order to discriminate between these two robot alternatives, the ranking factor should be looked at. However, factors such as economic considerations, availability, management constraints, corporate policies, etc., which were not considered in the coding and evaluation, may additionally be taken into account before a final decision to select a new robot is made.

VI. CONCLUSION

Although ranking of robots on the basis of the manipulator parameters alone has been attempted by some previous researchers, ranking of the robots in view of performing a given set of tasks is a novel attempt. In the initial phase of the formulation, 35 attributes of the robots are identified and consciously coded to capture the characteristics of a robot manipulator precisely. The procedure provides a coding system for robots depicting the various attributes. It recognizes the need for, and processes the information about, the relative importance of attributes for a given application, without which inter-attribute comparison is not possible. As a result of the application of both numerical and qualitative inputs and outputs, two robot alternatives are found to be more efficient than the other candidates. The present work is aimed at developing a generalized tool that combines manipulator attributes and task requirements in a comprehensive manner for the relative ranking of the manipulators; it considers practical aspects and takes experimental data to form an integrated model for the relative ranking of the available candidate robots. The methodology developed through this work can be applied to any similar set-up and is sure to help designers and users in selecting robots correctly for the intended application. Essentially, the present work contributes a methodology based on matrix methods which helps in the selection of a suitable robot from among a large number of available alternatives.

A Subjective Trust Model Based on Two-Dimensional Measurement*

Chang Chaowen(1), Liu Chen(2), Wang Yuqiao(2)
1. Institute of Electronic Technology, Information Engineering University, Zhengzhou, China
2. School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
ccw@xdja.com

Abstract

Trust models like Beth, Jøsang and EigenRep evaluate the trust degree by the history of interactions or by reputations. All of these models are based on the trustee's outer information; however, it is more reasonable to obtain the evaluation from both the outer information and the inner attributes. Aiming at this issue, this paper obtains the trustee's inner attributes through the remote attestation mechanism under the trusted computing specification of the TCG, and designs a trust model based on two dimensions (attributes and experiences) in order to make the evaluation of the trust degree more flexible and reliable. The experiments show that this two-dimensions-based model can not only better avoid trust accumulation spoofing but also calculate the initial trust value more effectively.

1. Introduction

M. Blaze brought forward the concept of trust management and the corresponding trust model systems PolicyMaker [1] and KeyNote [2] for the first time in 1996. In the open environment, the essence of trust is to use a precise and reasonable method to describe or deal with the complicated trust relationships; it is very difficult to give a standard definition because of the complexity of the trust relationship and the variety of application requirements. At present, the study of trust management models can be divided into two parts: Credential-based models and Evidence-based models [3]. The Credential-based models, e.g. PolicyMaker [1], KeyNote [2] and REFEREE [4], describe the trust relationship effectively, whereas the Evidence-based models are able to reflect the variety, inaccuracy and evolvement of the environment adequately and to consider subjectivity.

There are some classic Evidence-based models, e.g. the Beth model, the Jøsang model and the EigenRep model. The Beth model [5] brought in the concept of experiences to describe and measure the trust relationship, and classifies trust into direct trust and recommendation trust. Jøsang proposed a trust management model based on subjective logic [6][7]: it gives the pcdf (probability certainty density function), which is characterized by the number of positive events and negative events that are observed, and brought in the concepts of evidence space and opinion space to describe and measure the trust relationship. This model is based on the Beta distribution function as the expression for posteriori probability estimates of binary events. EigenRep [8] is a classic global trust model at present; its kernel is that if an entity wants to know the global reputation value of any entity K, it needs to get the trust information about K from the entities J that have interacted with K, and then combine it with its own history of interactions. Some other researchers brought fuzzy theory and the cloud model into the research on trust management: the literature [10] proposed a new method based on fuzzy theory that uses the third party's recommended trust values, so that the assessment of users' trust values becomes more flexible and reliable; the literature [11] defined the assurance trust and the global reputation, assured the anonymity of entities and the secrecy and integrity of trust values, and prevents aiming cheat and combining cheat; the literature [12] proposed a formalism for subjective trust using the cloud model, with a qualitative reasoning mechanism of trust clouds to enable trust-based decisions.

What all of these typical models consider is the outer information of the trustee, e.g. the history of interactions, the reputations or the experiences, and this results in some problems [9]: (1) Using event probabilities to describe and measure the trust relationship results in confusion between the subjectivity of trust and random probability; this may not be very reasonable. (2) Most of these models calculate the arithmetical average value to integrate the trust degrees coming from different recommendation paths, so it is hard to eliminate the impact of malicious recommendations. (3) Although these models have calculating formulas for trust, most of them do not have an effective way to obtain the initial trust degree.

* This work was supported by the National High Technology Research and Development Program of China (863 Program, No. 2007AA01Z479).

2. Getting inner attributes based on TCG remote attestation

The TCG offers a specification for trusted computing, defining the security technology and developing industrialized criteria [13]. The Trusted Platform Module (TPM) [14][15] is the foundation stone of trusted computing. Integrity measurement [16][17] is a very important function of trusted computing: in a trusted node with a TPM, any module must be measured before it obtains the control right, and the measured values are preserved in the Platform Configuration Registers (PCRs) to prevent them from being tampered with. Based on the TPM, the measurement values in the PCRs cannot be tampered with or written in advance.

The TCG specification has defined the protocol for remote attestation [14], which offers us an effective method to get the inner information of the trustee. When an attestation begins, the trustee conveys the PCR values, the Stored Measurement Log (SML) and other correlative information to the remote trustor in a trusted way [18]. Based on the remote attestation protocol, the trustor can then estimate the security state of the trustee according to the integrity report: we can know whether the present state of the main application attributes is trusted or not, and thereby determine whether to interact with the trustee.

However, the platform configuration may not reflect the actual state of the platform sometimes [19][20]. For example, when software updates, the configuration will be changed and attested unsuccessfully, although the platform state is still secure and trusted. What is more, even when the platform state attests successfully, the interaction between the two parties may still fail because of the trustee's low enthusiasm in offering services to the trustor. So it is also not comprehensive and objective to evaluate the trust degree only by the TPM-based attestation.

3. A trust model based on two-dimensional measurement

As stated above, present models emphasize only one factor, either the outer factor or the inner factor, so it is hard for them to avoid being one-sided. In fact, there is a close relevancy between the outer information and the inner attributes: the outer information is the phenomenon and the inner attributes are the essence. Combining the remote attestation mechanism under the trusted computing specification of the Trusted Computing Group (TCG) [13] with the interaction experience between the two parties, and estimating the trust degree of the trustee synthetically, can make the measured results more accurate; thereby we can evaluate the trust degree from a brand-new aspect. In our model system, the trustee must be equipped with a TPM chip.

3.1 Definition and properties of trust

1) Roles in the trust relationship
Trustor: the proposer of the interacting services; it is also the verifying party.
Trustee: the provider of the interacting services; it is also the verified party.

2) Definitions
Definition 1: Trust is to believe others; it is a cognitive process in which the trustor comes to believe that the trustee has the ability to meet its requests, by judging the inner attributes and the outer behaviors of the trustee. This definition emphasizes that although trust is a subjective behavior, it must be based on the objective: trust is a dialectic unification of the subjective and the objective.
Definition 2: Trust degree is the quantitative denotation of the trust grade. This paper uses the subjective-logic-based model for reference and uses the ternary aggregate {b, d, u} to describe the trust relationship, where b is the trust degree, d is the distrust degree and u is the uncertainty degree. All of them are continuous values, {b, d, u} ∈ [0, 1]^3, and the aggregate satisfies b + d + u = 1.
Definition 3: The inner attributes trust degree is the trust degree of the trustee's inner attributes; here it means the trust grade of the integrity of the platform configuration.
Definition 4: The behavior trust degree is the trust degree of the trustee's outer behaviors; here it means the trust grade of the interaction history with other nodes, the recommendations of others, and so on.
Definition 5: The integrated trust degree is the weighted mean of the inner attributes trust degree and the behavior trust degree.

3) Properties of trust
According to the definitions above, trust has some properties:
Property 1: Uncertainty means that it is hard to describe trust in an accurate way; for everything, there is no absolute trust or absolute distrust. Thereby uncertainty can also be called relativity.
Property 2: Asymmetry means trust is one-way: A trusting B does not mean that B trusts A too.
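To make the integrity-measurement step concrete, the following Python sketch (an illustration of ours, not part of the paper) mimics how a measurement chain is accumulated: a PCR is never written directly, it is only extended, with each new value hashed together with the old one, which is what prevents the measured values from being tampered with or written in advance. SHA-1 is used as in TPM v1.2.

# Minimal sketch of PCR extend semantics and SML replay (illustrative only).
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR extend: PCR_new = H(PCR_old || measurement), so the final value
    # commits to the whole ordered sequence of measured modules.
    return hashlib.sha1(pcr + measurement).digest()

def measure(modules):
    pcr, sml = b"\x00" * 20, []
    for name, code in modules:
        digest = hashlib.sha1(code).digest()   # measure before handing over control
        sml.append((name, digest.hex()))
        pcr = extend(pcr, digest)
    return pcr, sml

def verify(reported_pcr, sml):
    # The trustor replays the SML and compares with the reported PCR value.
    pcr = b"\x00" * 20
    for _name, digest_hex in sml:
        pcr = extend(pcr, bytes.fromhex(digest_hex))
    return pcr == reported_pcr

pcr, sml = measure([("bootloader", b"<code>"), ("kernel", b"<code>"), ("app", b"<code>")])
print(verify(pcr, sml))   # True; any tampering with the SML breaks the chain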

Property 3: Incomplete transitivity means that if A trusts B and B trusts C, A still cannot trust C completely even though B recommends it. Some other literatures assume trust to be fully transitive, but with that assumption it is hard to avoid malicious recommendations effectively; in this paper we suppose that recommendation is only a reference, not an independent way to calculate the trust degree.

3.2 Descriptions of the trust relationship

1) Inner attributes trust degree evaluation algorithm
(1) Through the TCG protocol, the trustor gets from the trustee's platform the configuration integrity information which is needed; the information contains the PCR values, the SML and so on.
(2) Validate the configuration integrity of the trustee's platform, and obtain the number f of the PCR values which have been validated unsuccessfully among PCR0, PCR1, …, PCRn.

Assume that the n+1−f PCR values which have been validated successfully all have the same trusted situation, the ternary aggregate {bS, dS, uS}; bS denotes the possibility that the module has not been impacted by malicious code (modules may impact each other due to non-isolation). Assume also that the f PCR values which have been validated unsuccessfully all have the same trusted situation, the ternary aggregate {bF, dF, uF}; dF denotes the possibility that a single PCR value which has been validated unsuccessfully may destroy the security of the system (that a PCR value is validated unsuccessfully does not necessarily mean the system is threatened; e.g. a software update can result in a failed validation but be harmless). We can calculate the inner attributes trust degree TI = {bI, dI, uI} by the formula:

    bI = (f/(n+1))·bF + ((n+1−f)/(n+1))·bS
    dI = (f/(n+1))·dF + ((n+1−f)/(n+1))·dS
    uI = (f/(n+1))·uF + ((n+1−f)/(n+1))·uS          (3-1)

In the situation where neither the trust degree nor the distrust degree attenuates, both uS and uF are 0, and (3-1) simplifies to:

    bI = (f/(n+1))·bF + ((n+1−f)/(n+1))·bS
    dI = (f/(n+1))·dF + ((n+1−f)/(n+1))·dS
    uI = 0          (3-2)

2) Behavior trust degree evaluation algorithm
The behavior trust degree describes the trusted situation of the trustee's outer behaviors. We use the theory of evidence space in the Jøsang model [21][22] for reference, regarding the interactions that have been validated unsuccessfully as negative events and those validated successfully as positive events. Assume r denotes the number of positive events and s the number of negative events; we can calculate the behavior trust degree TO = {bO, dO, uO} by the formula:

    bO = r/(r+s+1)
    dO = s/(r+s+1)
    uO = 1/(r+s+1)          (3-3)

3) Integrated trust degree evaluation algorithm
The integrated trust degree is the weighted mean of the inner attributes trust degree and the behavior trust degree. Assume WI is the weight of the inner attributes trust degree and WO is the weight of the behavior trust degree, with WI + WO = 1. We can calculate the integrated trust degree T = {b, d, u} by the formula:

    b = WI·bI + WO·bO
    d = WI·dI + WO·dO
    u = WI·uI + WO·uO          (3-4)

4) The initial value of the integrated trust degree
In the initial state there have been no interactions between the two parties, so the parameters are r = 0 and s = 0, and the behavior trust degree is TO = {0, 0, 1}. The integrated trust degree T = {b, d, u} then becomes:

    b = WI·bI
    d = WI·dI
    u = WI·uI + WO          (3-5)

From this formula we can see that the initial trust value can be obtained from the inner attributes trust degree.

3.3 Evaluation of trust

The trustor sets a threshold T0 = {b0, d0, u0}; the setting of the threshold depends on the context and the subjective factors of the trustor, and we do not discuss how to set the threshold in this paper. The evaluation of trust contains three conditions, as follows:

Condition 1: the integrated trust degree satisfies T(b) ≥ T0(b0), T(d) < T0(d0) and T(u) < T0(u0). The trustor then regards the trustee as trusted, and the interactions are allowed.

Condition 2: the integrated trust degree satisfies T(b) < T0(b0), T(d) ≥ T0(d0) and T(u) < T0(u0). The trustor then regards the trustee as distrusted, and the interactions are not allowed.

Condition 3: the integrated trust degree satisfies T(b) < T0(b0), T(d) < T0(d0) and T(u) ≥ T0(u0). The trustor then sends requests to validate the configuration integrity of the trustee's platform.
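A compact Python rendering of formulas (3-1)–(3-5) and the threshold test may help; it is a sketch under the paper's definitions, with the situation triples {bS, dS, uS} and {bF, dF, uF}, the weights and the threshold chosen freely by the caller (the numeric values below are examples, not the paper's).

# Sketch of the two-dimensional trust computation, formulas (3-1)-(3-5).

def inner_trust(n, f, S, F):
    """TI from n+1 PCR values, f of which failed validation.
    S = (bS, dS, uS) for validated PCRs, F = (bF, dF, uF) for failed ones."""
    ok = (n + 1 - f) / (n + 1)
    bad = f / (n + 1)
    return tuple(bad * fv + ok * sv for sv, fv in zip(S, F))   # (3-1)

def behavior_trust(r, s):
    """TO from r positive and s negative interaction events."""
    total = r + s + 1
    return (r / total, s / total, 1 / total)                   # (3-3)

def integrated_trust(TI, TO, WI, WO):
    return tuple(WI * i + WO * o for i, o in zip(TI, TO))      # (3-4)

def decide(T, T0):
    b, d, u = T
    b0, d0, u0 = T0
    if b >= b0 and d < d0 and u < u0:
        return "trusted"            # Condition 1
    if b < b0 and d >= d0 and u < u0:
        return "distrusted"         # Condition 2
    return "re-attest"              # Condition 3: uncertainty too high

# Initial state: r = s = 0, so TO = (0, 0, 1) and T = (WI*bI, WI*dI, WI*uI + WO),
# i.e. the initial trust value comes from the inner attributes alone (3-5).
TI = inner_trust(n=9, f=2, S=(0.8, 0.2, 0.0), F=(0.2, 0.8, 0.0))
T = integrated_trust(TI, behavior_trust(0, 0), WI=0.5, WO=0.5)
print(decide(T, T0=(0.6, 0.3, 0.5)))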

4. Emulation experiment and analysis

We contrast our model with the subjective-logic-based model and the assurance-based trust model to illustrate its advantage in the anti-cheat aspect.

4.1 Experiment 1: avoiding malicious accumulation of trust degree

The calculating formula of the subjective-logic-based model is formula (3-3); in this method we suppose that every piece of interacting evidence influences the trust degree equally. There is a problem in this method: assume that the trustee offered services honestly in the first stage, so that the trust degree could accumulate to a very high grade, and then suddenly offered one malicious service. We call this problem the malicious accumulation of trust degree.

The subjective trust model based on assurance [11] proposed the concept of an attenuation element that can decrease the impact of the above problem. The attenuation element is β ∈ (0, 1]; β_m denotes the β value at the m-th interaction, with β_1 = 1. From the first time an interaction fails, β decreases by a certain value ∂ until a certain lower limit. The smaller this value is, the more obviously the trust degree attenuates and the faster the interacting experience is forgotten; the trust degree is then influenced mainly by the latest interactions. The variety of β can also reflect the stability of the interactions: if they are unstable, the β value will decrease.

Parameters are set as follows: the number of PCR values applied by the trustor is 10, i.e. n+1 = 10; there are 40 interactions between the two parties in total, of which the former 30 are successful and the later 10 are unsuccessful. At the beginning, the configurations of the trustee's platform are all right and all PCR values are validated successfully (f = 0); the malicious code runs and destroys the integrity of the platform at the 15th interaction, but the trustee conceals it deliberately and continues to offer the service. After the malicious code has run, the number of PCR values which are validated unsuccessfully is f = 2. Fig. 1 describes the trend of the trust degree in our model, in the Jøsang model and in the assurance-based model.

Figure 1. The trend of trust degree

The analysis of the result: in the former interactions, which are all successful, the trust degree in the Jøsang model accumulates gradually to a high level; when the interactions begin to fail, the trust degree in the assurance-based model descends much faster than in the Jøsang model. The assurance-based trust model can thus make a trustor who has suffered loss adjust its state, but it cannot prevent the cheat from happening. The trust model based on two dimensions proposed in this paper can avoid the cheat effectively: the model detects the startup of the malicious code, so the trust degree, which is at a high level at the beginning, descends to a quite low level immediately when the malicious code runs, and the loss can be avoided.

4.2 Experiment 2: setting the initial trust value

Assume that the trustee is a malicious node which has destroyed the integrity of its platform, and that it repeatedly joins the network 5 times independently; a mechanism based on random numbers produces 5 distinct initial trust values. Parameters are set as follows: the number of PCR values applied by the trustor is 10 (n+1 = 10); the number of PCR values which are validated unsuccessfully is f = 2; {bS, dS, uS} = {0.8, 0.2, 0}; {bF, dF, uF} = {0.2, 0.8, 0}; and, since there have been no interactions, r = 0 and s = 0.
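The gap between the two models in Experiment 1 can be reproduced with a few lines of Python (an illustrative re-creation, not the authors' code; the event schedule, weights and inner-trust values are assumptions):

# Re-creation of Experiment 1: 40 interactions, malicious code corrupts the
# platform at interaction 15 while the trustee keeps serving "successfully".
def josang_b(r, s):
    return r / (r + s + 1)                    # formula (3-3), behavior only

def two_dim_b(bI, bO, WI=0.5, WO=0.5):
    return WI * bI + WO * bO                  # formula (3-4), assumed weights

r = s = 0
for t in range(1, 41):
    r += 1                                    # behavior still looks good
    corrupted = t >= 15                       # attestation now fails (f > 0)
    bI = 0.2 if corrupted else 1.0            # assumed inner trust after corruption
    print(t, round(josang_b(r, s), 3), round(two_dim_b(bI, josang_b(r, s)), 3))

The behavior-only value keeps climbing after the corruption, while the two-dimensional value drops immediately at interaction 15 — exactly the accumulation cheat the inner dimension is meant to catch.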
the interactions are un-allowed.1} .0.8.0}.2. {bF. Condition 3: integrated trust degree satisfies T(b) < T 0(b0 ) . The subjective trust model based on assurance [11] has proposed the concept of attenuation element that can decrease the impact about the above problem. the number of positive events r=5 and the number of negative events s=5.0.2. the Jøsang model and the assurance-based model in which result changes. 4. n+1=10.e. After running of the malicious code. and the 40 .2}. WO=0. the more obviously the trust degree attenuates.dS.64.3.e.dF.0}. the threshold T0={0. the 4. Then every time the interacting successfully when configurations are all right.uS}={0. After the two parties interacted successfully for some times. 0}.0}. At the beginning of this experiment.3. n+1=10.dS. T(d) ≥ T 0(d 0 ) .5. Parameters are set as follows: The number of PCR values which are applied by the trustor is 10. β m denotes the β value at the m time. the former 30 times are successful and the later 10 times are unsuccessful.6.

Knapskog S J.1.2. Chen. Rohe.trustedcomputing group.2 The analysis of the result: There will be a very large possibility (the probability in this experiment is 3/5) to result in that trustor trusts the malicious node and interacts with it by the mechanism of random number.The Research on Key Technologies of Trust Management. the improved model does not consider the recommendation trust. and Usage Principles for TPM-Based Platforms Version 1. 0 3 0. Journal of System Simulation. Global IT Security[M]. No. 0. Proceedings of the European Sysposium on Research in Security (ESORICS).3. [EB/OL]. 0. In Proceedings of the 1st ACM Workshop on Scalable Trusted Computing (STC’06).Jøsang. Amsterdam:IOS Press 2005.2006.1 4 0.2 Design Principles.J. Loehr. [18]Trusted Computing Group. Zhang Guang-wei. https://www. A. E.html.9.6. Initial values 1 0.Xiaolan Zhang.14. Li He-song. https://www.Feigenbaum.19 No. 2006. References [1]M. In: ASIACC'07[C]. San Diego. (in Chinese) [10]Zhang Yan-qun. in New Security Paradigm Workshop (NSPW). [19]Sadeghi. 0.7. Subjective Evidential Reasoning[C]. Zhang Chen.8.1.1 the Initial Trust Value for 5 Random Numbers Seq.Kinateder. However. 5. M. The experiments show that this twodimensions-based model can not only avoid the trust accumulation spoofing better but also calculate the initial trust value more effectively. [21] A. Valuation of trust in open networks. [14]Trusted Computing Group(TCG). Tab. In: Gollmann D.July.Sadeghi.Blaze. 1-5 July. 541-549. TPM Specification v1.28 No. Michael J. Songqing Chen and RaviSandbu.org/specs/. Model of trust values assess based on fuzzy theory. Fu Jiang-liu. Computer Engineering and Design. 0.1510-1521. Journal of Information Engineering University. and C.Covington. This paper designs a trust model based on two-dimensions to make the evaluation of the trust degree more flexible and reliable. R. [17]Xinwen Zhang. Liu Chang-yu. 1998.2007.2. Subjective Trust Model Based on Assurance in Open Network.dstc. DC. and M. Proceedings of the 4th Nordic Workshop on Secure Computer System(NORDSEC’99). Klein B.pp.TCG Software Stack(TSS) Specification v1.IEEE Computer Society.164-173.org/specs/TPM/. [2] Jøsang A . A Logic for Uncertain Probabilities[J]. vol. france. Web Intelligence and agent Systems. Trust-based Decision Making for Electronic Transactions [EB/OL]. [20]L. http://security.1 5 0.Trent Jaeger. (in Chinese) [12]Meng Xiang-yi.4. 2004:223– 238. CA : USENIX press. Research on Subjective Trust Management Model Based on Cloud Model. EigenRep: Reputation Management in P2P Networks[C]. Journal of Software.pp.trusted computinggroup. ACM Press.Gray.Jøsang. [9] Li Xiao-yong. Liu Yi. 2004. The classic Evidence-based trust models considered trustee’s outer information. 2003: 123-134.R. June 2007. [8] S Kamvar. so that the essential characteristics of trustee cannot be reflected directly and exactly. [11]Gao Cheng-shi. Leendert Van Doorn. Shen Chang-xiang.2007. 3-18. 6.0.au/staff/ajosang/paper. SecureBus:Towards ApplicationTransparent Trusted Computing with Mandatory Access Control[A].2007. 0. 0. Implementation.2007: 117-126.Lacy.Decentralized Trust Management. Proceeding of the 9th Internation Conference on Information Processing and Management of Uncertainty in KnowledgeBased Systems (1PMU 2002).Washington. Research on Dynamic Trust Model for Large Scale Distributed Environment. [15] Trusted Computing Group(TCG) . Trusted platform module protection profile,July 2004. [16] Reiner Sailer. Proceedings of the 1996 IEEE Symposium on Security and Privacy. 
inaccuracy and evolvement adequately. ed. Landfermann.World Wide Web Journal,1997;2(2):l27~139 [5] Beth T.Jun.2. Brighton: Springer-Verlag. International journal of Uncertainty , Fuzziness and Knowledge-based Systems,2001,9(3):279—311. [3] Yuan Shi-jin.Singapore:ACM Press. 0.Vol. (in Chinese) [4] Chu Y-H , Feigenbaum J , La Macchia B et al . REFEREE : trust management for Web applications [J]. Kang Jian-chu.USA. (in Chinese) [13]TCG Best Practices Committee. 41 . Borcherding M. vol. A Model for Analysing Transitive Trust. A protocol for property-based attestation.6. 1994. Doctor Degree Paper for Fudan University. 0. Stuble. Conclusions In the open networks. In:Thirteenth Usenix Security Symposium[C]. Design.1996. Property-based Attestation for Computing Platforms: Caring about properties.8 No. Wien: Austruan Computer Society. so it is needed to be studied in the future works.2004.Feb.edu.2. Gui Xiao-lin. A.18. [7] Jøsang A. Schlosser MT.In: Proceedings of the 12th World Wide Web Conference Budapest: ACM Press.-R. not mechanisms. [6] Jøsang A. vol. 2005.1 2 0. May 2005.J.5. [22] A. Design and Implementation of a TCG-Based Integrity Measurement Architecture[A]. A Metric of Trusted Systems. 2002. the Evidence-based trust models consider subjectivity. Anecy. H.mechanism of random number get 5 distinct initial values. and C. Stueble.[EB/OL]. 0. 1999.

A Genetic Algorithm Approach for Optimum Operator Assignment in CMS

Ali Azadeh, Hamrah Kor, Seyed-Morteza Hatefi
Department of Industrial Engineering and Center of Excellence for Intelligent Base Experimental Mechanics and Department of Engineering Optimization Research, College of Engineering, University of Tehran, Iran
Aazadeh@ut.ac.ir, hkor@ut.ac.ir, s_m_hatefi@yahoo.com

Abstract—This paper presents a decision making approach based on a hybrid GA for determining the most efficient number of operators and the efficient measurement of operator assignment in a cellular manufacturing system (CMS). The objective is to determine the labor assignment in the CMS environment with the optimum performance.

Keywords—Genetic Algorithm; CMS; Entropy; simulation; decision making

I. INTRODUCTION

Cellular manufacturing systems are typically designed as dual resource constrained (DRC) systems, in which the number of operators is less than the total number of machines in the system. The productive capacity of DRC systems is determined by the combination of machine and labor resources; jobs waiting to be processed may be delayed because of the non-availability of a machine, of an operator, or of both. This fact makes the assignment of operators to machines an important factor in determining the performance of cellular manufacturing (CM) systems, and it therefore makes the development of a multifunctional workforce a critical element in the design and operation of CM systems.

II. METHODOLOGY

This paper presents a GA approach for selecting the optimum operator allocation (for further information about the GA method, see [4]). First, we generate the scenarios and run them in the Visual SLAM software (to obtain exact estimates, we consider the number of runs equal to 30) [7]. Then the GA approach is performed by employing the number of operators, the average waiting time of demand, the average lead time of demand, the number of completed parts, the operator utilization and the average machine utilization as attributes; these attributes are normalized, and the entropy method is used for determining the weights of the attributes. Finally, the GA solves the problem and specifies the best scenario: we use the GA to obtain a near-optimum ranking of the alternatives in accordance with the fitness function. In the fourth section, we show an empirical illustration.

We begin by defining the system and its components. The system is a set of permanent and temporary entities, taking into consideration the entities' attributes and the relationships between them; this set is directed to achieve a specified objective. Permanent entities, such as machines and manpower in a manufacturing system, are named servers. Temporary entities are incorporated into the system represented within the simulation model; they pass through and then leave the system. The attributes of each entity are considered as entity characteristics, which are required for identifying the temporary entities; examples of entity characteristics are the arrival time, the part number, the number of completed parts, and the processing time of the system for each temporary entity [8].

For solving MADM problems, it is generally necessary to know the relative importance of each criterion. It is usually given as a set of weights which are normalized and add up to one; the importance coefficients in the MADM methods refer to this intrinsic "weight". Some works deserve mention because they include information concerning the methods that have been developed for assessing the weights in a MADM problem. The entropy method is the method used for assessing the weights in the given problem. Entropy, in information theory, is a criterion for the amount of uncertainty represented by a discrete probability distribution, in which there is agreement that a broad distribution represents more uncertainty than does a sharply peaked one; the entropy idea is particularly useful for investigating contrasts between sets of data, and the decision matrix for a set of candidate scenarios contains a certain amount of information. Since the entropy method works on a predefined decision matrix and thus has direct access to the values of the decision matrix — here, the values of the attributes procured by means of simulation — it is the appropriate method for scenario selection problems. [5]
Figure 1 presents the proposed simulation and multi attribute approach for optimum operator assignment. Some works deserve mention because they include information concerning the methods that have been developed for assessing the weights in a MADM problem. in scenario selection problems. we generate most scenarios and run them in the Visual SLAM software (for to obtain the extinct answer. Furthermore. CMS . The system is a set of permanent and temporary entities taking into consideration the entities attributes and relationships between them. operator utilization and average machine utilization as attributes. decision making. In other words. is a criterion for the amount of uncertainty. simulation.Genetic Algorithm. the entropy method is the appropriate method.1109/ICCET. The attributes of each entity are considered as an entity characteristic. Iran Aazadeh@ut. and which add up to one. which is required for identifying the temporary entities. Since there is. the decision matrix for a set of candidate scenarios contains a certain amount of information.
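As a concrete illustration of the entropy weighting step, the sketch below (ours, not the authors'; the 3-scenario, 3-attribute decision matrix is made up) computes entropy-based weights from a normalized decision matrix in the usual way: attributes whose values differ strongly across scenarios carry more information and receive larger weights.

# Entropy weighting sketch for a decision matrix (rows = scenarios,
# columns = attributes); a standard formulation, assumed here since the
# paper does not spell the formulas out.
import numpy as np

X = np.array([[0.9, 120.0, 10.0],
              [0.8, 150.0, 12.0],
              [0.7,  90.0, 11.0]])      # hypothetical attribute values

P = X / X.sum(axis=0)                   # column-wise normalization
k = 1.0 / np.log(len(X))                # 1 / ln(m), m = number of scenarios
E = -k * (P * np.log(P)).sum(axis=0)    # entropy of each attribute
d = 1.0 - E                             # degree of diversification
w = d / d.sum()                         # entropy weights, sum to one

print(np.round(w, 3))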

III. PROBLEM DEFINITION

Manned cells are a very flexible system that can adapt quite easily and rapidly to changes in the customer's demand or in the product design. The cells described in this study are designed for flexibility, not for line balancing: the times for the operations at the stations do not have to be balanced. The balance is achieved by having the operators walk from station to station, depending on the required output for the cell; the walking multi-functional operators permit rapid rebalancing in a U-shaped cell. The sum of the operation times for each operator is approximately equal — in other words, any division of stations that achieves balance between the operators is acceptable. The ability to quickly rebalance the cell to obtain changes in the output of the cell can be demonstrated by the developed simulation model. [1]

The considered cell has eight stations and can be operated by one or more operators; the existing manned cell example for the case model is presented in Fig. 2. Once a production batch arrives at the cell, it is divided into transfer batches; the transfer batch size is the transfer quantity of intra-cell movements of parts. Operators perform scenario movements in the cell. The alternatives, consisting of reducing the number of operators in the cell, are as follows in detail:

1. eight operators (one operator for each machine)
2. seven operators (one operator to two machines and one operator for each of the others)
3. six operators (one by one operator to two by two machines and one operator for each of the others)
4. six operators (one by one operator to three machines and one operator for each of the others)
5. five operators (one by one operator to two by two machines and one operator for each of the others)
6. five operators (one operator to four machines and one operator for each of the others)
7. four operators (one by one operator to two by two machines and one operator for each of the others)
8. four operators (one by one operator to three machines and one operator for each of the others)
9. three operators (one by one operator to three machines and one operator to two machines)
10. three operators (one operator to four machines and one by one operator to two by two machines)
11. three operators (one operator to four machines, one operator to three machines and one operator to one machine)
12. two operators (one by one operator to four by four machines)

Figure 1. The overview of the integrated GA–simulation approach (data collection → input data → define scenarios → generation of output data by computer simulation → GA for operator assignment → final assignment by utilizing the GA)

Figure 2. The existing manned cell example for the case model (D: decoupler; the arrows mark operator movement with parts and operator movement when out of work)

In the developed model, different demand levels and part types have been taken into consideration: the types of parts that the cell can produce and the levels of demand within the cell are determined as two and three, respectively, in the experiment, and each received demand part has a specific type and level. The processing time of a job at each station depends on the part type. System performance is monitored for different workforce levels and shifts by means of simulation, and the results of the simulation experiments are used to compare the efficiency of the alternatives.

Each labor assignment scenario considers 1, 2 or 3 shifts per day, so the 12 alternatives compose 36 scenarios, which are selected as the core of our study. A flexible simulation model is built in Visual SLAM which incorporates all 36 scenarios for quick response and results. In the simulation experiments, the 36 scenarios are executed for 2000 hours (250 working days, each day composed of up to 3 shifts, each shift consisting of 8 hours of operation). Each scenario is also replicated in 30 runs, so that reasonable estimates of the means of all outputs can be obtained. After deletion of the transient state, the outputs collected from the simulation model are the average lead time of demand, the average waiting time of demand, the average operator and machine utilization, and the number of completed parts per annum. Table I shows the output of the simulation model.

The developed model includes some assumptions and constraints, as follows:
• The self-balancing nature of the labor assignment accounts for differences in operator efficiency.
• The time for the operators to move between machines is assumed to be zero; the machines are all close to each other.
• The machines have no downtime for the simulated time.
• The sum of the multi-function operation times for each operator is approximately equal.
• There is no buffer for the station work.
• When the machines are assigned to the operators, the cycle time of the bottleneck resource is chosen as close as possible to the cycle time of the operator.

IV. APPLICATION OF THE GA MODEL

The objective of the scenarios consisting of reducing the number of operators in the cell is to observe how the operation is distributed among the operators. The main structure of the GA in this study is formed on the assumption that the best scenario would be one whose indices each take the maximum of their possible values. The six attributes must be normalized and oriented in the same direction to be used in the GA, since some indices have the opposite order to the rest. In fact, scenario 37 — a hypothetical scenario with the best possible attribute values, which illustrates the maximum attainable abilities in operator assignment — is called our goal in the problem. The best sequence of the scenarios is an array which has the minimum total distance to this goal with high internal cohesion among its scenarios: to achieve the appropriate rank (array), the total distance between the first scenario in the array (which can be any of scenarios 1 to 36) and our goal (scenario 37), then between the second scenario and the first, and then between each subsequent scenario and the scenario ranked above it, are calculated respectively. Consequently, every possible array, considered as a 64-bit chromosome, yields a value of the total distance, which is a variable dependent on the scenarios' positions in the array.

The above concepts of genetics are achieved through a set of well-defined steps, as follows (a sketch of the resulting ranking loop is given after the list):

Step 1: Normalize the index vectors.
Step 2: Standardize the indices. They are standardized through a predefined mean and standard deviation for each index.
Step 3: Define the production module. This module is defined to create and manipulate the 50-individual population by filling it with randomly generated individuals; each individual is defined by a 64-bit string.
Step 4: Define the recombination module, which enlists four sections:
• Tournament selection operator, which chooses individuals with probability 80% from the population. This is a popular type of selection method in GA; the basic concept of tournament selection is that the best string in the population will win both its tournaments, while the worst will never win and thus never be selected. The other kinds of selection methods, named sigma scaling and rank selection, were also considered in order to determine the best method.
• Uniform crossover operator, which combines bits from the selected parents with probability 85%.
• Mutation operator, which consists of making (usually small) alterations to the values of one or more genes in a chromosome.
• Regeneration operator, which is used to create 100-individual generations.
Step 5: Define the evaluation module. The evaluating operator assesses the ability of each chromosome to satisfy the objective. The fitness function used to determine the goodness of each individual, based on the objectives, is a multivariate combination whose most prominent components are the total distance and the variance; our motivation in obtaining the best array for this problem is to minimize this fitness function.
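The following Python sketch (our reconstruction; the distance measure, encoding and parameter values are assumptions, not the authors' exact implementation) shows the core of such a ranking GA: a chromosome is an ordering of the 36 scenarios, and its fitness is the total distance along the chain goal → 1st → 2nd → …

# Sketch of ranking scenarios by total chained distance to an ideal "goal"
# scenario (the paper's hypothetical scenario 37). Illustrative only.
import random

def total_distance(order, scores, goal):
    """Sum of distances goal->first, first->second, ... along the ranking."""
    chain = [goal] + [scores[i] for i in order]
    return sum(abs(a - b) for a, b in zip(chain, chain[1:]))

def evolve(scores, goal, pop=50, gens=1000):
    n = len(scores)
    population = [random.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        # tournament selection: the better of two random orderings survives
        parents = [min(random.sample(population, 2),
                       key=lambda o: total_distance(o, scores, goal))
                   for _ in range(pop)]
        # an order-preserving swap mutation stands in for crossover + mutation
        children = []
        for p in parents:
            c = p[:]
            i, j = random.sample(range(n), 2)
            c[i], c[j] = c[j], c[i]
            children.append(c)
        population = sorted(parents + children,
                            key=lambda o: total_distance(o, scores, goal))[:pop]
    return population[0]

scores = [random.random() for _ in range(36)]   # stand-in scenario scores
best = evolve(scores, goal=1.0)
print([s + 1 for s in best][:5])                # top-ranked scenarios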

V. ANALYSIS OF VARIANCE (ANOVA)

ANOVA is used to evaluate the effects of the optimum operator assignment in the CMS model. First, it is tested whether the efficiencies have the same behavior in the GA, DEA and PCA models, i.e. whether the null hypothesis H0 is accepted; the DMUs' efficiencies are considered for the GA model in comparison with the DEA and PCA methods. According to the results, it is concluded that the three treatments differ at the 0.05 level. Furthermore, the least significant difference (LSD) method is used to compare the pairs of treatment means for all i. The results of the LSD comparisons reveal that treatment 1 (GA) produces significantly greater efficiencies than the other treatments; the advantage of the GA model with respect to efficiencies is shown in Table II, and the multiple comparisons are given in Table III, where the reported mean differences between the GA and the other methods (including 2.38752* and 3.98417*, with Std. Error = 0.24957 and Sig. = 0.000) are significant at the 0.05 level.

TABLE II. ANOVA (block: DF = 3, Sum of Squares = 243.972, Adj. Mean Square = 81.324; treatment: DF = 35; error: DF = 105; total: DF = 143; F = 112.18, P = 0.000)

TABLE III. MULTIPLE COMPARISONS between the GA, DEA and PCA treatments (LSD; * the mean difference is significant at the .05 level)

VI. RESULTS

We have inspired the GA approach by the TOPSIS methodology. As we know, the TOPSIS methodology is based on the minimum distance from the best scenario and the maximum distance from the worst alternative. In this GA approach, however, our goal is to obtain a total ranking of all scenarios in which the fitness function is the least distance between scenarios: we are not looking only for the best and the worst solution, because the solutions between these two points are also important to us. This is the difference between this approach and the current methods, and as can be seen in Table I, our GA method satisfies this goal. After producing 1000 generations, we reach the best fitness function value.

TABLE I. GA SOLUTION (scenario ranking)
Rank 1–18:  scenarios 31, 35, 25, 34, 28, 10, 26, 19, 13, 22, 7, 23, 36, 32, 16, 29, 14, 1
Rank 19–36: scenarios 8, 27, 30, 24, 33, 20, 11, 4, 17, 21, 15, 5, 12, 2, 18, 9, 3, 6

According to the results, scenario 12-2 — one by one operator to four by four machines, with two shifts per day — assigned to the system is the most efficient. The second best scenario is scenario 12-1, which is similar to scenario 12-2 but with one shift per day. The third is scenario 9-1, with five operators (one operator to four machines and one operator for each of the others) and one shift per day.

VII. SIGNIFICANCE

In the mentioned formula, the distance term indicates the total distance between adjacent scenarios [9], related to the chromosome, which can be shown by the sequence.
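For readers who want to replicate this kind of test, the sketch below (illustrative; the efficiency vectors are random stand-ins, and a plain pairwise t-test is used as a crude stand-in for LSD) runs a one-way ANOVA across the three methods' efficiencies using SciPy:

# One-way ANOVA across methods, plus pairwise comparisons (illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ga  = rng.normal(0.90, 0.05, 36)   # stand-in efficiencies per scenario
dea = rng.normal(0.80, 0.05, 36)
pca = rng.normal(0.75, 0.05, 36)

F, p = stats.f_oneway(ga, dea, pca)
print(f"F = {F:.2f}, p = {p:.4f}")

for name, other in (("DEA", dea), ("PCA", pca)):
    t, pt = stats.ttest_ind(ga, other)
    print(f"GA vs {name}: mean diff = {ga.mean() - other.mean():.3f}, p = {pt:.4f}")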

VIII. CONCLUSION

In this study, we used the GA, as a powerful method, to rank the operator assignment problem based on the attributes discussed in this paper. As shown in Table I, the two scenarios 31 and 6 are determined as the best and the worst, respectively. The GA approach is able to rank the alternatives by a near-optimum fitness function; it is based on the minimum distance between the orders of the scenarios and determines the best solution by ranking with minimum distance. For us, determining the best order of the scenarios is important, rather than just the best and the worst solutions; in fact, this is the main difference from the previous research [1, 2 and 6]. For future effort, we propose a hybrid simulation in which the GA and the simulation software operate simultaneously and select the useful scenario.

REFERENCES
[1] A. Azadeh, M. Anvari, "Implementation of Multivariate Methods as Decision Making Models for Optimization of Operator Allocation by Computer Simulation in CMS," in Proceedings of the 2006 Summer Computer Simulation Conference, Calgary, Canada, 2006.
[2] V. I. Cesani, H. J. Steudel, "A study of labor assignment flexibility in cellular manufacturing systems," Computers & Industrial Engineering, vol. 48, pp. 571–591, 2005.
[3] T. Ertay, D. Ruan, "Data envelopment analysis based decision model for optimal operator allocation in CMS," European Journal of Operational Research, vol. 164, pp. 800–810, 2005.
[4] R. L. Haupt, S. E. Haupt, "Practical Genetic Algorithms," second edition, John Wiley.
[5] M. Khouja, "The use of data envelopment analysis for technology selection," Computers & Industrial Engineering, vol. 28, no. 1, pp. 123–132, 1995.
[6] A. Azadeh, V. Ebrahimipour, et al., "A GA–PCA approach for power sector performance ranking based on machine productivity," Applied Mathematics and Computation, vol. 186, no. 2, pp. 1205–1215, 2007.
[7] A. Pritsker, "Introduction to Simulation and SLAM II," fourth edition, John Wiley and Systems Publishing Corporation, 1995.
[8] D. J. Johnson, U. Wemmerlov, "On the relative performance of functional and cellular layouts — an analysis of the model-based comparative studies literature," Production and Operations Management, vol. 5, no. 4, pp. 309–334, 1996.
[9] A. Shanian, O. Savadogo, "TOPSIS multiple-criteria decision support analysis for material selection of metallic bipolar plates for polymer electrolyte fuel cell," Journal of Power Sources, vol. 159, no. 2, pp. 1095–1104, 2006.

Dynamic Adaption in Composite Web Services Using Expiration Times

YU Xiaohao, LUO Xueshan, CHEN Honghui
Department of Information System and Management, National University of Defense Technology, Changsha 410073, China
yxhtgxx@yahoo.com.cn

HU Dan
Airforce Engineering University, Xi'an, Shaanxi 710077, China

Abstract—A key challenge in composite web processes is that the quality of the participating services changes during the lifetime of the process. To address the changes of web services, this paper presents a process adaption framework and algorithm that can support dynamic service selection for a failed task. In this context, we consider a "task" to be a capability offered by a web service, and we suppose that a failed web service may be substituted by a new one identified in the UDDI registry center. The replacement service should have the "same skill" [11] as the failed one, i.e. the same functionality and QoS. In this paper, we present an approach for discovering the "same skill" web services to replace the expired one by using the service expiration times.

Keywords—Web service adaption; Service matching; Expiration times

I. INTRODUCTION

In volatile environments [2], where QoS items such as deadlines, the quality of products and the cost of services may change frequently, a composite web process may become invalid if it is not updated with the changes. As an example, consider an information fusion application. There are a number of web services (WS) giving information about a certain target, and to complete the task we may design a web process using a web service composition method [5, 10]. When the web process executes, a previously selected web service may no longer satisfy the designer's requirement because of an unstable communication link, a target change or service unavailability. In these cases, it is necessary to find an available equivalent service to substitute the failed component which has been made unavailable.

A straightforward approach to the problem is to try a different process for achieving the same goal. This is time consuming and ignores the fact that different service providers may provide "same skill" [11] web services. Instead, we use the insight that service providers are often able to guarantee that their reliability rates and other quality-of-service parameters will remain fixed for some time t_exp, after which they may vary [2, 4]; WS providers may define t_exp in a WS-Agreement document.

II. RELATED WORK

Existing approaches to web service composition formulate the problem in different ways, depending mainly on how the optimal web services are selected to meet the users' requirements. The authors of [3] use pre-defined event-condition-action rules as a workflow adaptation strategy: when a change occurs in the environment, an event is triggered to adapt the workflow. While this method provides a commonly used basis for performing contingency actions and has been implemented in many BPEL execution engines, it has the limitation that in unpredictable conditions or complex workflows it cannot list all possible actions and preconditions that may arise.

Au et al. [2] provide a T-correct algorithm to ensure that the web process remains unchanged for T times; the web process is then recomposed after T times. This is time consuming and may bring on plan recomputations that do not bring about any change to the failed web process. In [4], the value of changed information with expiration times (VOC) and the query cost are used to judge whether it is worth choosing a new plan when a service parameter changes. This method can ensure that the best plan is selected and reduces the number of computations, but it still needs to recompose the web process. [12] offers a solution by using Bayesian learning, but it also needs to recompose the web services when computing the expected cost of following the optimal policy π*; moreover, the Bayesian learning method has no advantage in the time consumed updating the parameters. [11] offers an architecture of Relevance-Driven Exception Resolution, which is similar in concept to our approach, but it gives no algorithm for how to select the "same skill" services.

III. DEALING WITH WEB PROCESSES USING EXPIRATION TIMES

Recently, researchers have been increasingly turning their attention to managing processes in volatile environments, because the characteristics of the service providers who participate in a web process may change during the life-cycle of the process [11]. In this paper, differently from the previous methods based on recomposing the web process, and instead of finding the optimal policy and calculating the query cost, we take an architecture and an algorithm that find "same skill" services in order to replace the one that expired in the composite process flow [8].
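To make the role of the expiration times concrete, here is a small Python sketch (ours; the data layout is invented for illustration) that checks how long a composition's current service bindings are guaranteed to stay valid, given each provider's advertised t_exp, and flags the components that must be re-selected first:

# Sketch: given the advertised expiration time t_exp of each bound service,
# the composition as a whole is only guaranteed until the earliest expiry.
import time

bindings = {                      # component -> (service id, expiry timestamp)
    "fuse":   ("ws-alpha", time.time() + 12.0),
    "track":  ("ws-beta",  time.time() + 3.5),
    "report": ("ws-gamma", time.time() + 20.0),
}

def guaranteed_until(bindings):
    return min(exp for _sid, exp in bindings.values())

def expiring_components(bindings, horizon):
    """Components whose binding expires within `horizon` seconds."""
    now = time.time()
    return [c for c, (_sid, exp) in bindings.items() if exp - now <= horizon]

print("valid for", round(guaranteed_until(bindings) - time.time(), 1), "s")
print("replace soon:", expiring_components(bindings, horizon=5.0))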

A. Definitions

We assume that a service, Task1, is already functioning in the BPEL execution. A replacement, Task2, must have the "same skill", i.e. it must adapt to the web process context of the failed service. Note that the descriptions of inputs and outputs go beyond the specifications employed in a WSDL document: WSDL carries machine-oriented types and the operation names can differ, whereas input/output matching includes higher-level semantics. To formally express these semantics, domain ontologies or domain taxonomies can be employed [6, 9]. Motivated by functional matching and current concept similarity techniques [1], we define two matching rules as follows (a code sketch of the full test is given after this subsection).

(1) Input/output matching: Inputs and outputs are functional attributes of a web service. Let I_Task1 and I_Task2 be the input sets of Task1 and Task2, and O_Task1 and O_Task2 their output sets.

Rule 1: for each i' ∈ I_Task2 there exists an i ∈ I_Task1 with i' ⊇ i, where i' ⊇ i indicates that i = i' or i is a subclass of i'.

Rule 2: for each o ∈ O_Task1 there exists an o' ∈ O_Task2 with o ⊇ o', where o ⊇ o' indicates that o = o' or o' is a subclass of o.

These two rules — which make the input set of Task1 semantically equal to or contain Task2's, and the output set of Task2 semantically equal to or contain Task1's — ensure that Task2 adapts to the web process context of the failed web service Task1.

(2) QoS matching: There is a set of metrics that measure the quality of service, such as availability, execution cost, successful execution rate, reputation, frequency and so on; these kinds of QoS properties have been emphasized in previous research works [5]. Two providers of the same task supporting the same generic inputs and outputs may have different QoS values. Consider two web services that are functionally equivalent, i.e. they have the same input/output attributes, and suppose Task1, with availability A1 and cost C1, is already functioning in the BPEL execution. If Task1 expires, then Task2 (with availability A2 and cost C2) can replace it in the web process if the following condition is true:

(Input/Output matching) AND (A2 ≥ A1) AND (C1 ≥ C2)

B. Adaption framework

Fig. 1 shows the proposed service selection framework, which is designed to support dynamic adaption in composite web services using expiration times. The framework is composed of five modules: the Service Monitoring module (SM), the Failure Diagnosing module (FD), the Failure Recovery module (FR), the Composition Service Management module (CSM) and the Execution Engine module (EE). When a web service expires and the BPEL orchestrator decides to resolve it by finding and replacing the task that stopped the web process, the sequence of steps performed is as follows:

In step 1, when the SM module detects a BPEL execution exception, it sends a signal/message tagged with the failed component id to the EE module to stop the execution. In step 2, the FD module analyses the causes of the exception and then sends the failed service's characteristics (input and output parameters, QoS and so on) to the CSM module, which updates the database of replacement policies. The replacement policy can be a predefined policy, such as "minimize-cost" or "maximize-reputation", or the "default" policy [11] (all attribute values of the replacement task must not only match on inputs and outputs but also be QoS-"better" than the corresponding attributes of the failed task). To keep the "default" from changing with time, it is updated only when a component fails for the first time. In step 3, the CSM module returns the characteristics of the replacement web service. Using this message, in step 4 the FR module decides the execution path and selects the required web service in UDDI to replace the failed one. The EE module then executes the new web process in step 5.

Figure 1. Service replacement framework

C. Algorithm

In Fig. 2 we show the algorithm for adapting the web process in a volatile environment. The algorithm selects the "same skill" web service to replace the expired one (lines 4–17); if there are no expired services and the web process runs successfully, each service's execution count is incremented (lines 18–20). In line 12, the algorithm invokes the procedure which finds a service s* that can assure T times fixed to replace the expired one; the parameter T is designated by the policy. In line 14, t[s*] is set to zero, because s* is the newly selected service, and the query time is added to the other executing services, ES. Notice that a web service might expire while we query for the "same skill" web service, during which other services may also expire; we must anticipate this and add those services to the set in advance. In the algorithm, t_query is the time spent finding the "same skill" web service, and t_response is the time spent during the successful execution of the web process.
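The two matching rules plus the QoS condition translate directly into code. The sketch below (ours; the `is_subclass` ontology test and the service records are placeholders for whatever ontology reasoner and registry data are actually used) decides whether a candidate can replace a failed task:

# Sketch of the "same skill" test: Rule 1, Rule 2 and the QoS condition.

def covers(x, y, is_subclass):
    """x ⊇ y : the concepts are equal, or y is a subclass of x."""
    return x == y or is_subclass(y, x)

def same_skill(failed, candidate, is_subclass):
    # Rule 1: every input the candidate requires is already supplied by
    # an input of the failed task (possibly as a more specific concept).
    rule1 = all(any(covers(i2, i1, is_subclass) for i1 in failed["inputs"])
                for i2 in candidate["inputs"])
    # Rule 2: every output of the failed task is produced by the candidate
    # (possibly as a more specific concept).
    rule2 = all(any(covers(o1, o2, is_subclass) for o2 in candidate["outputs"])
                for o1 in failed["outputs"])
    # QoS: at least as available, and no more expensive.
    qos = (candidate["availability"] >= failed["availability"]
           and candidate["cost"] <= failed["cost"])
    return rule1 and rule2 and qos

subclass_of = {("GPSPosition", "Position")}          # toy ontology
is_sub = lambda a, b: (a, b) in subclass_of

failed = {"inputs": {"Position"}, "outputs": {"Report"},
          "availability": 0.95, "cost": 3.0}
cand = {"inputs": {"Position"}, "outputs": {"Report"},
        "availability": 0.97, "cost": 2.5}
print(same_skill(failed, cand, is_sub))              # -> True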

IV. EXPERIMENT

In this section, we simulated our algorithm using a web process scenario with several web service components, shown in Fig. 3. For each component we randomly created some relevant web services by using OWLS-TC v2.6, changing some of the web service description profiles, e.g. giving each service an expiration time drawn randomly from 1 s to 20 s. In our experiment, we randomly selected a web service for each component as an initialization, and we compared the successful execution times with the method based on process recomposition.

Figure 3. Experiment scenario

The results are shown in Fig. 4. In Fig. 4(a), we compare the two methods as the number of components in the web process increases. It can be seen that the average number of successful executions of the web process using our approach grows beyond that of the method based on process recomposition as the components increase, because the recomposition of a web process consumes more and more time when the number of components grows. However, the recomposition method may be more efficient when there is no satisfying web service for the failed component. In Fig. 4(b), we compare the runtimes taken in effectively executing the web process as we increase the average expiration time of the web services. Notice that the successful execution times of a web process increase as the expiration times increase: the shorter the web services' expiration times, the longer the time spent on finding replacements or on recomposing a web process.

Figure 4. Successful execution times of a web process using different methods: (a) the number of components changes; (b) the expiration times change

In the experiment, we find that our approach is excellent at increasing the number of successful executions of a web process when failed web services have to be replaced.

V. CONCLUSION AND FUTURE WORK

In this paper, we deal with the dynamic adaption of web processes in volatile environments. First, we define the two conditions that must be satisfied when we select the "same skill" web services; then we present the adaption process and the algorithm of our approach for dealing with an expired web service, which reuses the available web services in a web process and increases its successful execution times. Our future work is to abstract the Business Process Execution Language (BPEL) so that the process language supports the dynamic adaption of the web process, and to build ontology relationships between web services' interface descriptions. We will also study a storage strategy for "same skill" web services to accelerate the search times in the UDDI registry center.

REFERENCES

[1] A. Charfi, M. Mezini: Aspect-Oriented Web Service Composition with AO4BPEL. Proceedings of the European Conference on Web Services (2004).
[2] T.-C. Au, U. Kuter, D. Nau: Web service composition with volatile information. In: International Semantic Web Conference, 2005.
[3] D. Nau, T.-C. Au, O. Ilghami, U. Kuter, J. W. Murdock, D. Wu: SHOP2: An HTN planning system. JAIR 20 (2003) 379-404.
[4] P. Doshi, R. Goodwin, R. Akkiraju, K. Verma: Dynamic workflow composition using Markov decision processes. Journal of Web Services Research (JWSR), 2(1):1-17, 2005.
[5] Liangzhao Zeng, Boualem Benatallah, et al.: Quality Driven Web Service Composition. In: Proc. of the International World Wide Web Conference (WWW 2003), May 20-24, 2003.
[6] K. Sivashanmugam, K. Verma, A. Sheth, J. Miller: Adding semantics to web services standards. Proceedings of the 1st International Conference on Web Services (2003).
[7] R. Muller, U. Greiner, E. Rahm: AgentWork: a workflow system supporting rule-based workflow adaption. Journal of Data and Knowledge Engineering, 51(2):223-256, 2004.
[8] J. Harney, P. Doshi: Speeding up Adaption of Web Service Compositions Using Expiration Times. In: Proc. of the International World Wide Web Conference (WWW 2007), May 8-12, 2007, pp. 1023-1032.
[9] OWL Services Coalition: OWL-S: Semantic markup for web services (2004). OWL-S White Paper, http://www.daml.org/services/owl-s/1.1/owl-s.pdf
[10] Fernandez, S. Ossowski: Exploiting Organisation Information for Service Coordination in Multiagent Systems. In: Proc. of the 7th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2008), May 12-16, 2008, pp. 257-264.
[11] Kareliotis Christos, Vassilakis Costas, Georgiadis Panayiotis: Towards Dynamic Relevance-Driven Exception Resolution in Composite Web Services. LNCS 5266, Springer-Verlag Berlin Heidelberg, 2008, pp. 200-215.
[12] P. Wohed, et al.: Analysis of Web Services Composition Languages: The Case of BPEL4WS. 2003.

An Emotional Intelligent E-learning System Based on Mobile Agent Technology

Zhiliang Wang, Xiangjie Qiao, Yinggang Xie
School of Information Engineering, University of Science & Technology Beijing, Beijing, China
wzl@ies.ustb.edu.cn, qxj7711@163.com, yinggangxie@163.com

Abstract—The emergence of the concept "Affective Computing" makes the computer's intelligence no longer a purely cognitive one. In this paper, we construct an emotional intelligent e-learning system based on mobile agent technology. A dimensional model is put forward to recognize and analyze the student's emotion state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, with consideration of a teaching style based on the teacher's personality trait. A "man-to-man" learning environment is built to simulate the traditional classroom's pedagogy in the system. It offers the theoretical and modeling basis for realizing emotional intelligence in e-learning systems.

Keywords—emotional intelligence; mobile agent; virtual teacher; two-dimension model

I. INTRODUCTION

Learning is one of the cognitive processes affected by one's emotional state [3]. Researchers in neurosciences and psychology have found that emotions are widely related to cognition; they exert influence in various behavioral and cognitive processes, such as attention, long-term memorizing, decision-making, etc. [6][7]. Rozell and Gardner's [8] study pointed out that when people have negative attitudes towards computers, their self-efficacy toward using them reduces, which then reduces their chances of performing computer-related tasks well compared to those with positive attitudes towards computers. This research also emphasized that individuals with more positive affect exert more effort on computer-related tasks. As Wu Qinglin [4], an educational psychologist, said, an effective individualized learning system should be not only intelligent but also emotional. However, nowadays learning systems lack emotional interaction in the context of instruction. Current e-learning is referred to as cognitive education: current e-learning systems cannot instruct students effectively, since they do not consider the emotional state in the context of instruction. Though Intelligent Tutoring Systems (ITS) can provide individualized instruction by being able to adapt to the knowledge, learning abilities and needs of each individual student, they are still not as effective as one-on-one human tutoring. They cannot simulate the traditional classroom scenario, with an emphasis upon "man-to-man" instruction, in which the teacher and students face each other in class.

The Artificial Psychology theory put forward by Professor Zhiliang Wang [1] proposes imitating human psychological activities with artificial machines (computers, objective function algorithms) by means of information science. It offers a basis for realizing emotional intelligence in e-learning. In this project, we take the student's emotional state into account by analyzing his learning psychology or motivation, and we construct a virtual teacher agent to communicate with students in the process of learning. The virtual teacher can respond to the student's current state to regulate his emotion to the optimum by words or expressions. Thus we can establish a harmonious environment for human-computer interaction. So e-learning systems should have emotional intelligence to meet this need.

Traditional paradigms for building distributed applications are RPC and, most recently, its object cousins RMI and CORBA. For this class of paradigms, the functionality of applications is partitioned among participating nodes; different participants use message-passing to coordinate the distributed computation and exchange intermediate results and other synchronization information, so the system occupies the bandwidth frequently. Therefore we must consider the cost of the network when processing large amounts of emotional computation on the web. In the Mobile Agent paradigm, by contrast, the computation itself is partitioned and migrated toward resources. This largely reduces the data flow on the Internet, diminishes the network's load, conquers the network's latency, adapts to a new environment dynamically, and improves the system's robustness and fault tolerance. A mobile agent also has its particular characteristics of being Mobile, Autonomous, Personalized and Adaptive, and it is easy to implement information gathering and retrieval by utilizing its mobility property. In our project, we experiment with such a paradigm---Aglet. Aglet is shorthand for agent plus applet. It provides us an infrastructure for building distributed applications based on mobile agent technology.

II. THE ARCHITECTURE BASED ON MOBILE AGENT

The system's architecture is shown in Figure 1. Next we will introduce every server and component's functionality and role in the system.

Figure 1. System's architecture

The E-learning Service Center Server (ESCS) is in charge of the general management of the system. It initializes the entire environment (e.g., Student Server's creation), monitors other servers' activity, provides the registration service and cooperates with other agent systems. When a student enters our system for the first time, he should first register his information and wait for the system's verification. After the student passes the verification, the system will firstly provide three services in a web page for him to choose as he likes. One is "Class", which is provided for students to have classes; the second is "Question and Answer"; and the third is "Online Test". These three functions are all implemented based on mobile agent technology.

When students want to have classes with our system, the ESCS sends a Student Server Management Agent (SSMA) to create the Student Agent (SA) and manage other agents in the Student Agent Server; each student is matched with one SA respectively. At the same time, the ESCS creates a Teacher Agent and sends it to the Teacher Agent Server, and the Classroom Server receives the Mobile Student Agent's request to dispatch a Virtual Teacher Agent (VTA) to begin a class, so the student can start his class with a virtual teacher saying hello to him. Meanwhile, information about the courses and teachers is offered: the VTA tells the ESCS to create a Mobile Query Agent (MQA) and dispatch it to every campus aglet server registered in the ESCS, and the query results are then fetched and arranged for the students. The Information Gathering Agent (IGA) starts to gather pedagogical information and provide useful pedagogical tactics for the VTA through a data mining mechanism.

The VTA does four things in the context of instruction. First, listening to the student's changing state, especially his emotional state. Second, analyzing the student's emotional and cognitive data to decide the next action. Third, receiving the student's questions and returning the answers to the student. Fourth, regulating the student's state by activities such as word expressions or facial expressions.

III. STUDENT'S LEARNING PSYCHOLOGY MODEL

To interact most effectively, it is often useful to gain insight into "invisible" human emotions and thoughts by interpreting non-verbal signals. We often try to estimate a person's internal state by observing facial expressions, a visible change in heartbeat, voice inflections and even eye and other body movements. Teachers use these non-verbal signals to make judgments about the state of students to improve the accuracy and effectiveness of interactions. This non-verbal communication is completely lost when people communicate with computers [11]. In our research, the emotion of a student is mainly recognized from the facial expression captured by a camera. By analyzing the facial expression, we can get the student's learning psychology. We utilize a two-dimension model to describe a student's emotion: interest level and attention level.

Interest level mainly depends on the distance between the student and the computer. Generally speaking, if the student is interested in the instruction, he sits nearer to the computer to attend to the learning material. By computing the student's face area obtained by the camera, the interest level can be achieved [13] (see Equation (1)):

E_i = 0, for x ≤ x_Fmin; E_i = ((x − x_Fmin)/(x_Fmax − x_Fmin))^2, for x_Fmin ≤ x ≤ x_Fmax; E_i = 1, for x_Fmax ≤ x   (1)

Here, E_i is the interest level, x is the face area, x_Fmax is the maximum face area that can be detected, and x_Fmin is the minimum face area that can be detected.

Attention level detects whether the student is learning carefully enough [13] (see Equation (2)); if the student pays much attention to the instruction, his pupil becomes much bigger than usual:

E_a = 0, for x ≤ x_emin; E_a = ((p_i − x_emin)/(x_emax − x_emin))^2, for x_emin ≤ x ≤ x_emax; E_a = 1, for x_emax ≤ x   (2)

Here, E_a is the attention level, x is the space between the eyelids, p_i is the current averaged pupil size, x_emax is the maximum of the averaged pupil size and x_emin is the minimum of the averaged eyelids' distance.

Finally we get the general learning psychology of the student by Equation (3):

P = αE_i^2 + βE_a^2 + γC^2, with α + β + γ = 1   (3)

where C is the student's cognitive evaluation value, and α, β, γ are the weights of E_i, E_a and C respectively.
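The two piecewise levels and their combination are straightforward to compute from the camera measurements. The sketch below is a minimal illustration: the threshold values and the weights are invented for the example, and the squared-sum form of Equation (3) follows the reconstruction above.

```python
def ramp_squared(x, lo, hi):
    """Piecewise mapping of Equations (1) and (2): 0 below lo,
    ((x - lo)/(hi - lo))^2 between lo and hi, and 1 above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return ((x - lo) / (hi - lo)) ** 2

def learning_psychology(face_area, pupil_size, cognition,
                        f_min, f_max, e_min, e_max,
                        alpha=0.4, beta=0.4, gamma=0.2):
    """General learning psychology P from interest, attention and cognition,
    Equation (3). The weights alpha/beta/gamma must sum to 1."""
    e_i = ramp_squared(face_area, f_min, f_max)   # interest level, Eq. (1)
    e_a = ramp_squared(pupil_size, e_min, e_max)  # attention level, Eq. (2)
    p = alpha * e_i**2 + beta * e_a**2 + gamma * cognition**2
    return e_i, e_a, p

print(learning_psychology(face_area=5200, pupil_size=3.1, cognition=0.7,
                          f_min=2000, f_max=8000, e_min=2.0, e_max=5.0))
```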

IV. VIRTUAL TEACHER'S EMOTION MODEL

Three couples (six) of emotions are employed in the model: joy and anger, pride and sadness, hope and disappointment (see Figure 2). Since there is not a real teacher or a real classroom in an e-learning environment, we construct a teacher's avatar as a virtual teacher to analyze the student's emotion state and give proper regulation to adjust his negative emotion. Positive affects are fundamental in cognitive organization and thought processes; they also play an important role in improving creativity and flexibility in problem solving [8]. Reciprocally, negative affects can block thought processes [9]. It is an illusion to think that learning environments that don't consider motivational and emotional factors are adequate. The virtual teacher's avatar is revised from Ken Perlin's applet program (http://mrl.nyu.edu/~perlin/).

The virtual teacher's emotional model contains three modules: the Sense Module (SM), the Thinking Module (TM) and the Behavior Module (BM):

STB = <SM, TM, BM>

The virtual teacher listens to the student's learning state through the VTA by the Sense Module. Once the SM senses a change, it informs the TM to analyze the student's state and decide how to regulate it by activities like words or facial expressions. The Thinking Module is also a control module; it is made up of knowledge and personality. Knowledge serves as the regulation rule; it depends on psychological and pedagogical theory and experience.

As every person has his own personality, some teachers are rigorous while some are easy-going with their students; we take the virtual teacher's teaching style as his personal trait. To effectively regulate the student's state, we adopt the OCEAN model [12] to map the teaching style (see Table 1).

TABLE I. TEACHING STYLE VS. OCEAN

OCEAN | Teaching Style
Openness | Active
Conscientiousness | Responsible
Extraversion | Energetic
Agreeableness | Easy-going
Neuroticism | Rigorous

The words and phrases used differ according to the teaching style as well. For instance, when the teacher wants to give the student a suggestion, a rigorous teacher may express it as "You should…" or "Let's…", while an easy-going teacher will say "I suggest that you…", "Perhaps you would like to…" or "Maybe you could…", etc.

The change of the virtual teacher's emotional state depends on two factors: external stimuli and teaching style. External stimuli are the student's new emotional states. Suppose the probability of changing emotion is P_s (see (4)), where p_xy is the probability that emotion x changes to another emotion y:

P_s = [p_xx, p_xy, p_xz; p_yx, p_yy, p_yz; p_zx, p_zy, p_zz]   (4)

Then the change of the emotion is (see (5)):

Δe_s = e_s^n · P_s = [x_s, y_s, z_s] · P_s   (5)

θ_ep defines a weight of personality factor p for emotion e; for example, θ_joyO is the weight of personality factor O for the joy emotion. W_i denotes the percentage of each dimension of personality. The change contributed by the teaching style is obtained by (6):

Δp = [W_a, W_b, W_c] · [θ_aa, θ_ab, θ_ac; θ_ba, θ_bb, θ_bc; θ_ca, θ_cb, θ_cc]   (6)

Finally we get the new emotion e_s^{n+1} by Equation (7):

e_s^{n+1} = e_s^n + Δe_s + Δp = e_s^n + e_s^n · P_s + Δp = [x_l, y_l, z_l]   (7)

Figure 2. Various emotions for the virtual teacher

V. TESTBED

We have constructed a prototype and developed some agent modules to test the effectiveness of the architecture.
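One update step of Equations (4)-(7) is a small linear-algebra computation. The sketch below illustrates it; all the numeric values are invented for the example and are not taken from the paper.

```python
import numpy as np

def update_emotion(e_n, P_s, W, theta):
    """One step of the virtual teacher's emotion update, Equations (4)-(7):
    e_{n+1} = e_n + e_n @ P_s + W @ theta.
    e_n: current emotion state (3-vector); P_s: 3x3 emotion transition
    probabilities; W: personality percentages; theta: 3x3 personality weights."""
    delta_es = e_n @ P_s       # influence of the student's new state, Eq. (5)
    delta_p = W @ theta        # influence of the teaching style, Eq. (6)
    return e_n + delta_es + delta_p   # Eq. (7)

e_n = np.array([0.2, 0.5, 0.1])             # e.g. (joy, pride, hope) intensities
P_s = np.array([[0.6, 0.3, 0.1],
                [0.2, 0.7, 0.1],
                [0.1, 0.2, 0.7]])
W = np.array([0.5, 0.3, 0.2])
theta = np.full((3, 3), 0.05)
print(update_emotion(e_n, P_s, W, theta))
```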

Figure 3(a) shows the classroom server's management interface and (b) the student's learning window. The system's functionalities include: (1) Students can quickly find course-related information, and data sharing is well implemented. (2) Students can communicate with the real teacher or the virtual teacher emotionally: students can ask the teacher whenever they are confused by the instruction and can answer the questions forwarded by the virtual teacher as well, and the virtual teacher responds to the student's emotion state through verbal or non-verbal expressions. (3) Students can do online testing to test themselves independently.

(a) Classroom server's management window
(b) Student's learning window
Figure 3. System's interface

In addition, we have carried out incremental testing to test the system's scalability. Table 2 shows the feedback time when querying all the sub agent systems.

TABLE II. TESTING RESULT

subsystem's number | time for obtaining the first data | time for obtaining all data
2 | 13.2s | 15.7s
3 | 13.5s | 17.7s
4 | 14.5s | 21.5s
5 | 14.5s | 22.3s

VI. CONCLUSION AND FUTURE WORK

In this paper, we have constructed an emotional intelligent e-learning system based on mobile agent technology. In future work, we will revise the details of our system and find some school students to test the system's performance.

ACKNOWLEDGMENT

This paper is supported by the National Natural Science Foundation of China Grant #60573059 and Natural Science Foundation of Beijing Grant #KZ200810028016.

REFERENCES

[1] Wang Zhiliang. Artificial Psychology: a most Accessible Science Research to Human Brain. Journal of University of Science and Technology Beijing, 2000.
[2] R. W. Picard. Affective Computing. Cambridge, MA: MIT Press, 1997.
[3] D. Goleman. Emotional Intelligence. New York: Bantam Books, 1995.
[4] Wu Qinglin. Pedagogical Psychology---to Pedagogue. Shanghai: East China Normal University Press, 2003.
[5] C. Frasson, et al. Using cognitive Agents for Building Pedagogical Strategies in a Multistrategic Intelligent Tutoring System. Bayonne: Deuxième journée Acteurs, Agents et Apprentissage, 1998.
[6] A. Damasio. Descartes' Error: Emotion, Reason and the Human Brain. New York: Putnam Press, 1994.
[7] C. Idzikowski, A. Baddeley. Fear and performance in novice parachutists. Ergonomics, 30, 1987, 1463-1474.
[8] E. J. Rozell, W. L. Gardner. Cognitive, motivation, and affective processes associated with computer-related performance: a path analysis. The Journal of Computers in Human Behavior, 16 (2000), 199-222.
[9] G. Reed. Obsessional cognition: performance on two numerical tasks. British Journal of Psychiatry, 130, 1977, 184-185.
[10] A. Isen. Positive Affect and Decision Making. Handbook of Emotions, 2000, 336-337.
[11] A. Sarrafzadeh, S. Hosseini, S. Overmyer, C. Fan. Facial expression analysis for estimating learner's emotional state in intelligent tutoring systems. Proceedings of the 3rd IEEE International Conference on Advanced Learning Technologies, 2003.
[12] P. T. Costa, R. R. McCrae. Normal personality assessment in clinical practice: The NEO personality inventory. Psychological Assessment, 1992, (4): 5-13.
[13] An Ping, Zhiliang Wang, Xiangjie Qiao, Yinggang Xie, Lijuan Wang. Affective Recognition Based on Approach-Withdrawal and Careness. ISAI'06, 2006-08, 478-481.

Audio Watermarking for DRM based on Chaotic Map

B. Lei, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, leib0001@ntu.edu.sg
Y. Soon, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798

Abstract—This paper focuses mainly on using chaos encryption to protect AVS audio efficiently. A meaningful gray image embedded into digital compressed audio data is researched by quantizing MDCT coefficients (using integer lifting MDCT) of audio samples based on a chaotic map. A novel digital watermarking approach is introduced for audio copyright protection. The proposed watermarking algorithm can extract the watermarking image without help from the original digital audio signal. The experimental results show that the proposed digital watermarking approach is robust to audio degradations and distortions such as noise adding, resampling, low pass filtering, compression, and re-quantization. The digital watermarking scheme is able to provide feasible and effective audio copyright protection.

I. INTRODUCTION

Digital audio watermarking embeds inaudible information into digital audio data for the purposes of copyright protection, ownership verification, covert communication, and/or auxiliary data carrying. The digital watermarking technique has received a great deal of attention recently in industry and the academic community. With the widespread infusion of digital technologies and the ensuing ease of digital content transport over the Internet, Digital Rights Management (DRM) of multimedia data has become a critical concern. Audio content protection also plays an important role in many digital media applications, but current digital watermarking schemes mainly focus on image and video copyright protection [1-3]; audio watermarking methods have, to our knowledge, not been studied much. The Audio Video coding Standard (AVS) is China's second-generation source coding/decoding standard with full Intellectual Property rights. As the sixth part of the AVS standard, AVS DRM aims to offer a universal and open interoperable standard for various DRM requirements in the digital media industry.

Early work on audio watermarking embedding achieved inaudibility by placing watermarking signals in perceptually insignificant regions; one popular choice was the higher frequency region [4]. For a review of the early watermarking schemes and the main requirements of a watermarking scheme, the reader may consult [5]. Another trend in digital audio watermarking is to combine watermarking embedding with the compression or modulation process. Some previous works on MP3 are based on the frequency-selective method, which selects different frequency coefficients to be encrypted. In some systems, watermarking embedding is performed during vector quantization: the watermarking is embedded by changing the selected code vector or the distortion weighting factor used in the searching process. The integration can minimize unfavorable mutual interference between watermarking and compression, especially preventing the watermarking from being removed by compression.

The majority of watermarking schemes proposed to date use watermarkings generated from pseudorandom number sequences. Pseudorandom sequences have the advantage that they can be easily generated and recreated: a single seed (along with an initial value) will always reproduce the same sequence each time the generating function is iterated. Similarly, chaotic functions have been used to generate watermarking sequences [6-8].

Although the audio watermarking methods described above have their own features and properties, they share some common problems: (1) The amount of hidden information is small; some of them can only embed a pseudorandom bit sequence or a binary image. (2) The detection procedure needs the original digital audio signal. (3) The robustness and invisibility are not so good, because the Human Auditory System (HAS) is not taken into account adequately. (4) Most transform-domain digital audio watermarkings have round-off error problems, because they utilize floating point calculation.

The simplest chaotic sequence is the one-dimensional logistic map, which is unimodal and defined as

x_{k+1} = μ · x_k · (1 − x_k), x_k ∈ (0, 1)

where 0 < μ ≤ 4 is the bifurcation parameter; when 3.57 ≤ μ ≤ 4, x_k is in a chaotic state. The chaotic sequence is sensitive to initial values and random-alike, which can be used to select embedding points randomly. The motivation for using a chaotic function to generate a watermarking is that a single variable seeding the chaotic function will always result in the same output (mapping) when certain constraints or initial conditions are placed on the mapping. A chaotic function is unpredictable and indecomposable, yet contains regularity and is sensitive to initial conditions. The use of chaotic functions for the generation of watermarkings has been previously proposed [9-11], with, for example, the Bernoulli Map, the Skew Tent Map and also the logistic Map. In this study, the logistic map was selected, as it is a well-behaved chaotic function which has been extensively studied; a primary advantage is that it is possible to investigate the spectral properties of the resulting watermarking.

In this paper, we introduce a new adaptive digital audio watermarking algorithm based on the quantization of MDCT coefficients (Integer Lifting MDCT). The features of the proposed algorithm are as follows: (1) the embedded watermarking is a meaningful gray image; (2) blind detection, without resorting to the original digital audio signal or any other side information; (3) integer lifting MDCT to solve round-off error problems. The watermarking is generated using deterministic chaotic maps.

This paper proceeds as follows. Section 2 describes the proposed watermarking generation. In Section 3, the watermarking embedding and detection procedures are provided. Section 4 shows the experimental results and discussions. Conclusions are given in Section 5.

II. GENERATION OF WATERMARKING WITH CHAOTIC MAPS

Our proposed embedding and extraction scheme can be seen in Figure 1: the original audio passes through the AVS encoder to a compressed bitstream; the watermarking, scrambled with a chaotic sequence under a secret key, is embedded; the AVS decoder produces the watermarked audio, and watermarking extraction recovers the extracted watermark.

Figure 1. Proposed embedding and extraction scheme

III. BINARY IMAGE WATERMARKING EMBEDDING AND EXTRACTION

A. Pre-processing of Image Watermarking

The watermarking is a visually differentiable M1 × M2 binary image, which can be represented as:

W = {w(i, j), 0 ≤ i < M1, 0 ≤ j < M2}   (4)

where w(i, j) ∈ {0, 1}. Because W is a two-dimension binary image, a dimension-reduction operation is needed before embedding it into the audio. The two-dimension image is converted to a one-dimension sequence with equation (5):

V = {v(k) = w(i, j), 0 ≤ i < M1, 0 ≤ j < M2, k = i · M2 + j}   (5)

B. System Model

There are three ways to embed the binary image as a watermarking into audio A:

A'_i = A_i + α · x_i   (1)
A'_i = A_i · (1 + α · x_i)   (2)
A'_i = A_i · e^{α · x_i}   (3)

where A_i is the audio transform coefficient and α is the embedding stretch factor. We adopt equation (2), as it adapts to changes in A_i.
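The following sketch shows the logistic map generator and the multiplicative embedding rule of Equation (2). The stretch factor, the binarization threshold and the use of random numbers as stand-ins for MDCT coefficients are illustrative choices, not values from the paper.

```python
import numpy as np

def logistic_sequence(seed, mu, n):
    """Generate n values of the logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
    x = np.empty(n)
    x[0] = seed
    for k in range(n - 1):
        x[k + 1] = mu * x[k] * (1 - x[k])
    return x

def embed(coeffs, bits, alpha=0.05):
    """Multiplicative embedding A'_i = A_i * (1 + alpha * x_i), Equation (2),
    with x_i taken here as the watermark bit to embed."""
    return coeffs * (1.0 + alpha * bits)

rng = np.random.default_rng(1)
coeffs = rng.normal(size=8)                                    # stand-in for MDCT coefficients
bits = (logistic_sequence(0.38, 3.9, 8) > 0.5).astype(float)   # binarized chaotic sequence
print(embed(coeffs, bits, alpha=0.05))
```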

To eliminate the similarity between neighboring elements and enhance the watermarking robustness, we adopt a linear recursive shift register to sort all the elements in V; after sorting, the k'-th element in V shifts to the k-th element in V_p:

V_p = sort(V) = {v_p(k) = v(k'), 0 ≤ k, k' < M1·M2}   (6)

To further improve the watermarking's anti-attack ability, we modulate the watermarking sequence by a chaotic signal: a chaotic sequence {r(l)} is generated with secret seed K, and {r(l)} is used to modulate {v_p(k)} with modulation factor m to get {s(l)}:

s(l) = v_p(⌊l/m⌋) ⊕ r(l), r(l) ∈ {0, 1}, 0 ≤ l < m·M1·M2   (7)

Chaotic modulation has the advantages of high anti-interference ability, low-power spectral density and higher confidentiality; however, it improves robustness at the expense of increasing the signal capacity.

C. Watermarking Embedding

Given an original audio signal with L samples and an M1 × M2 binary image, the detailed embedding process is as follows:

(1) Segmentation: Segment M1 × M2 blocks of length N from the audio A, denoted as:

A_e = {A_e(k), 0 ≤ k < M1 × M2}   (8)

where A_e(k) is expressed as:

A_e(k) = {a(k · N + i), 0 ≤ i < N}   (9)

(2) Watermarking dimension reduction: Because the image is two-dimensional, it is converted to one dimension before it is embedded into the audio signal; w(m1, m2) in the image W is expressed with v(k):

V = {v(k) = w(m1, m2), 0 ≤ m1 < M1, 0 ≤ m2 < M2, k = m1 · M2 + m2}   (10)

(3) Pseudo-random sequence sorting: To eliminate the correlation between neighboring elements, we adopt the linear recursive shifter to sort all the elements of the sequence:

V_p = sort(V) = {v_p(k) = v(k'), 0 ≤ k, k' < M1·M2}   (11)

(4) Apply MDCT: Do the MDCT of all the audio blocks:

D_e(k) = MDCT(A_e(k))   (12)

where D_e(k) = {D_e(k)(m), 0 ≤ m < N} and D_e(k)(m) is the m-th coefficient of the k-th audio block.

(5) Choose the intermediate frequency coefficients: We choose the median coefficients in the MDCT domain to satisfy the digital audio signal.

(6) Embedding the watermarking: Adaptively modify the medium coefficients D_e(k)(m_w) to embed v_p(k):

D'_e(k)(i) = D_e(k)(i) · (1 + a · v_p(k)), for mk ≤ i < m(k+1); D'_e(k)(i) = D_e(k)(i), otherwise   (13)

where m is the modulation factor and a is the stretching factor, which can be changed to control the watermarking embedding intensity. If a is too large, the watermarking is robust to attack, but it can reduce the use value of the signal and cause hearing distortion; if a is too small, the embedded signal is too small to be perceptible, at the cost of robustness. The value can be chosen according to the application.

(7) Inverse MDCT: Do the inverse MDCT of D'_e:

A'_e(k) = IMDCT(D'_e(k))   (14)

(8) Replace A_e with A'_e to get the final watermarked signal.

D. Watermarking Extraction

The watermarking extraction algorithm is expressed as follows:

(1) Segmentation: Segment the original audio signal A_e and the watermarked audio A_se in the same way.

(2) MDCT: Do the MDCT of A_e and A_se.

(3) Extract the watermarking:

s'(l) = (D_se(k)(l) − D_e(k)(l)) / (a · D_e(k)(l)), mk ≤ l < m(k+1)   (15)

(4) Generate the chaotic sequence {r(l)} with key K, and then do the Exclusive OR operation to get {v_sp'(l)}:

v_sp'(l) = s'(l) ⊕ r(l)   (16)

(5) Reconstruct {v_sp'(k)} from {v_sp'(l)}: if the sum of the m segments of a group is more than ⌊m/2⌋ + 1, the watermarking bit v_sp'(k) is 1; otherwise, it is 0.
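The extraction steps (15)-(16) and the group vote are easy to exercise end to end. The sketch below is an illustration under simplifying assumptions: the parameter values are invented, a plain majority vote stands in for the paper's threshold, and random numbers stand in for the MDCT coefficients.

```python
import numpy as np

def extract_bits(d_marked, d_orig, r, a=0.05, m=5):
    """Sketch of extraction steps (15)-(17): recover s'(l) from the marked and
    original coefficients, XOR with the chaotic sequence r, then vote over
    each group of m values."""
    s = (d_marked - d_orig) / (a * d_orig)            # Eq. (15)
    s_bits = (np.abs(s) > 0.5).astype(int)            # threshold to {0, 1}
    v = s_bits ^ r                                    # Eq. (16), exclusive OR
    groups = v.reshape(-1, m)                         # one group per payload bit
    return (groups.sum(axis=1) > m // 2).astype(int)  # majority vote per group

# round-trip check against the embedding rule D' = D * (1 + a * s)
rng = np.random.default_rng(0)
payload = rng.integers(0, 2, 4)
r = rng.integers(0, 2, 20)
s = np.repeat(payload, 5) ^ r                         # modulated watermark, Eq. (7)
d = rng.normal(2.0, 0.3, 20)
d_marked = d * (1 + 0.05 * s)
print(extract_bits(d_marked, d, r, a=0.05, m=5), payload)  # bits match payload
```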

(6) Inverse sorting: The one-dimension sequence is obtained by inverse sorting, expressed as:

V_s = {v_s(k) = v_sp(k'), 0 ≤ k, k' < M1·M2}   (17)

The watermarking is then raised back to two dimensions, and the finally extracted watermarking image is shown in equation (18):

W_s = {w_s(i, j) = v_s(k), 0 ≤ i < M1, 0 ≤ j < M2, k = i · M2 + j}   (18)

IV. EXPERIMENT AND RESULTS

In this paper, we use a 22.05 kHz sampling rate, 8-bit quantization resolution, and mono channel audio signals of length 30s, containing speech, popular, country and classic music. A 64×64 binary image after chaotic encryption was embedded into the original audio. The initial value of the chaotic sequence is 0.8. PSNR and the normalized coefficient are adopted to objectively assess the original and extracted watermarking:

PSNR(w, w') = 10 · log10( max_n w'^2(n) / ( (1/N) · Σ_n (w'(n) − w(n))^2 ) )   (19)

NC(w, w_s) = Σ_{i=1}^{M1} Σ_{j=1}^{M2} w(i, j) · w_s(i, j) / Σ_{i=1}^{M1} Σ_{j=1}^{M2} w_s^2(i, j)   (20)

The embedding and extraction results are displayed in Figure 3. The original and watermarked audio are very similar visually and aurally, which can also be seen from the waveforms. The PSNR after embedding is 40.23 dB. Without any processing, the normalized correlation coefficient of the extracted watermarking is 1, and the watermarking can be extracted correctly.

Figure 3. Watermarking embedding and extraction comparison

A. Watermarking Security Test

The security test is shown in Figure 2. With the right key (seed = 0.38), the watermarking can be extracted correctly. If the seed is 0.3800001, with only a 0.0001 difference, the watermarking cannot be recovered correctly: the images are quite different, which means we cannot extract the watermarking without the correct key.

Figure 2. Watermarking security test (left: extracted image with correct key; right: extracted image without correct key)

B. Watermarking Attack Experiment and Analysis

To remove the effects of the observer's experience, the experimental and device conditions, and other objective and subjective factors, the watermarked audio is subjected to a variety of attacks; the watermarking is then extracted from the audio signal, and the similarity between the original and the extracted watermarking is computed. The attacks are realized as follows:

(1) AVS attack: The audio bitstream goes through the encoder and decoder; the compression ratio is 10:1.
(2) Clipping: Cut 10% of the original audio.
(3) Low pass filter: Using a Butterworth filter; the order is 6.
(4) Median filter: A 4th-order median filter.
(5) Gaussian noise: Adding Gaussian noise with mean 0 and variance 0.01.
(6) Re-sampling: Up-sample the original 22.05 kHz audio and then down-sample it to 11.025 kHz.
(7) Re-quantization: Quantize the audio from 16-bit to 8-bit, and then quantize it back to 16-bit.

The watermarkings extracted after the attacks are described in Figure 4.

Figure 4. Extracted watermarkings after attacks (a. original watermarking; b. clipping attack; c. low pass filter attack; d. re-sampling attack; e. re-quantization attack; f. adding noise attack; g. AVS compression attack)
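The two quality measures of Equations (19) and (20) can be computed directly. The sketch below illustrates them on synthetic data; the signals are random stand-ins, not the audio used in the paper.

```python
import numpy as np

def psnr(w, w_marked):
    """Equation (19): PSNR between the original and watermarked signals."""
    mse = np.mean((w_marked - w) ** 2)
    return 10 * np.log10(np.max(w_marked ** 2) / mse)

def nc(w, w_extracted):
    """Equation (20): normalized correlation of binary watermark images."""
    return np.sum(w * w_extracted) / np.sum(w_extracted ** 2)

rng = np.random.default_rng(0)
audio = rng.normal(size=1000)
marked = audio + rng.normal(scale=0.01, size=1000)  # stand-in for embedding
wm = rng.integers(0, 2, (64, 64))
print(psnr(audio, marked), nc(wm, wm))  # NC of identical marks is 1.0
```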

V. CONCLUSIONS

In this paper, AVS audio watermarking embedding, extraction and detection for DRM is realized based on a chaotic map in the MDCT domain. The chaotic key is introduced into the standard AVS audio bitstream to improve robustness, and in the MDCT domain the watermarking intensity and capacity can be modified adaptively. From the various attack experiments, we can see that the algorithm can resist a number of attacks and maintain robustness while retaining good quality; moreover, it is secure and easy to implement. The scheme can be applied to AVS DRM for copyright protection and can basically meet the application requirements for AVS DRM and copyright protection.

REFERENCES

[1] C. I. Podilchuk, E. J. Delp. "Digital watermarking: Algorithms and applications". IEEE Signal Processing Magazine, vol. 18, no. 4, pp. 33-46, 2001.
[2] S. Pereira, J. J. K. O Ruanaidh, et al. "A secure robust digital image watermarking". In: Proceedings SPIE, Electronic imaging: processing, printing and publishing in colour, 1998.
[3] M. H. Jakubowski, R. Venkatesan. "Image watermarking with better resilience". In: Proceedings of ICIP, 2000.
[4] M. Barni, F. Bartolini, A. Piva. "Improved wavelet-based watermarking through pixel-wise masking". IEEE Trans. Image Processing, vol. 10, pp. 783-791, 2001.
[5] "Identification and protection of multimedia information". Special issue, Proceedings of the IEEE 87, 1999.
[6] A. Tefas, A. Nikolaidis, N. Nikolaidis, V. Solachidis, S. Tsekeridou, I. Pitas. "Markov chaotic sequences for correlation based watermarking schemes". Chaos, Solitons & Fractals, vol. 17, pp. 567-573, 2003.
[7] R. L. Devaney. "A first course in chaotic dynamical systems - theory and experiment". Cambridge, MA: Perseus Books, 1992.
[8] S. Tsekeridou, V. Solachidis, N. Nikolaidis, A. Nikolaidis, A. Tefas, I. Pitas. "Bernoulli shift generated watermarkings: theoretic investigation". In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2001, pp. 1989-1992.
[9] A. Nikolaidis, I. Pitas. "Comparison of different chaotic maps with application to image watermarking". In: Proceedings of the IEEE International Symposium on Circuits and Systems, 2000, pp. 509-512.
[10] A. Mooney, J. G. Keating. "The impact of the theoretical properties of the logistic function on the generation of optically detectable watermarkings". In: Proceedings of SPIE Photonics North, vol. 5579, 2004, pp. 120-129.
[11] A. Mooney, J. G. Keating. "Noisy optical detection of chaos-based watermarkings". Technology for Optical Countermeasures, Optics/Photonics in Defence and Security, 2004, pp. 341-350.

Walking Modeling Based on Motion Functions

Hao Zhang, School of Computer Science and Technology, Xidian University, Xi'an, P. R. China, zhanghao@xidian.edu.cn
Zhijing Liu, School of Computer Science and Technology, Xidian University, Xi'an, P. R. China, liuzhijing@vip.163.com

Abstract—A modeling method of motion-function is presented to represent the motion in the detection of video images using the physical and motional characters of the body. It concentrates on the characters of video surveillance and results in walking functions by extracting and processing the data from the experiments. On the basis of Long Ye's idea [12], with the examples of walking videos, three functions between body biometrics and time are constructed: the proportion of step length to human stature and time, the human silhouette and time, and the coordinates of the human centroid and time. Body motions can then be detected in terms of these factors.

Keywords—computer vision, motion-function, video surveillance, motion recognition, biometrics

I. INTRODUCTION

The vision analysis [1] of body motion in computer vision consists of detection, tracking, identification and action understanding in sequences of video images. Because of the demands of sensitive scenarios, including banks, airports, customs and so on, identifying body movement under surveillance at a long distance is an important research domain. Many scholars, such as Murase and Sakai [3], Shutler [4], Johnson and Bobick [5], Brand [6,7], Oliver [8], Kitani [9], Liang Wang [10] and so on, have devoted themselves to this domain and presented many identification methods. There are three main methods [2] of action identification: based on templates, probability networks and syntactic techniques. However, many environmental restrictions affect the application of identification methods based on gait, so they do not meet the applicable demands of video surveillance. In this paper, we utilize the consecutive characters of video information together with physical laws, and present a method of motion-functions to analyze body motion; we then analyze the data of body motions to induce and validate the functions.

The rest of the paper is organized as follows. Section 2 introduces the theoretic model on biometrics. Section 3 describes modeling and functions in motion information. Section 4 discusses the experimental results. Conclusive remarks are addressed in the final section, Section 5.

II. THEORETIC MODEL

The physical characters of the human body and its motion are utilized in this model. It is universal that body motion is periodic: for example, while walking, the human's arms and legs move periodically, and the same holds for body motions such as walking, running, jumping, creeping etc. On this basis, the variations of body motion can be detected, and the goal of detection in body motion is achieved in consequence.

A. The Relationship between the Proportion of Step Length to Human Stature and Time

First of all, because of the restriction of physical factors, the step length is certain to be restricted by human stature; videlicet, step length and stature accord with a certain proportion.

B. The Relationship between Human Silhouette and Time

As video information represents 3D in the form of 2D, it is difficult to extract a 3D human model directly, while a 2D human silhouette can be extracted directly. The silhouette is labeled with a rectangle, and the states of body motion are determined by the variations of the rectangles' areas.

C. The Relationship between the Coordinates of Human Centroid and Time

Body motion in video information is equal to the mechanism motion of a body in physics. In physics, the centroid of the body is an important reference point of mechanism motion: its definition is based on a system of particles made of N ones, whose mass center is the centroid of the system. In the study of body motions, the centroid is the reference point used to describe the physical parameters which reflect the state of motion, such as orientation, velocity etc.

III. MODELING AND FUNCTIONING

With the characters of video surveillance [11], two walking videos of 90° and 45°, which are the angles between the recording orientation and the direction of body motion, are selected; for short, they are labeled as the 90° video and the 45° one respectively. On this basis, some experiments are implemented, the data of body motions are analyzed to induce the walking functions, and more experiments are required to validate all coefficients in the functions.
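The per-frame measurements and the sinusoidal fits described in the next section can be reproduced with a few lines of numerical code. The sketch below is an illustration under stated assumptions: the bounding-box series is synthetic (a noisy sinusoid standing in for measured video data), the frame rate is assumed to be 25 fps, and the fitted form follows the walking function used by the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic per-frame silhouette measurements standing in for video data:
# w = bounding-box width (step spread), h = height (stature), (x, y) = center.
t = np.arange(60) / 25.0                          # 60 frames at an assumed 25 fps
rng = np.random.default_rng(0)
w = 45 + 8 * np.sin(2 * np.pi / 0.7 * (t - 0.1)) + rng.normal(0, 0.3, t.size)
h = np.full(t.size, 170.0)
x, y = 100 + 75 * t, np.full(t.size, 50.0)

s = w * h                                         # rectangle area
f = w / h                                         # proportion of step length to stature
g = s[1:] / s[:-1]                                # area ratio of adjacent frames, Eq. (5)
d = np.hypot(x - x[0], y - y[0])                  # centroid displacement, Eq. (8)

def walk_func(tt, A, T, P, B):
    """Walking-function form used by the paper: A*sin(2*pi/T*(tt - P)) + B."""
    return A * np.sin(2 * np.pi / T * (tt - P)) + B

(A, T, P, B), _ = curve_fit(walk_func, t, f, p0=[0.05, 0.7, 0.0, 0.26])
k, b = np.polyfit(t, d, 1)                        # linear fit for d(t)
print(f"f(t): A={A:.3f} T={T:.3f} P={P:.3f} B={B:.3f};  d(t) = {k:.1f}t + {b:+.1f}")
```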

There are several steps as follows: detecting body motion; extracting the human silhouette with background subtraction; labeling it with a rectangle; and finding the length (l), width (w) and center coordinates (x, y) of the rectangle, as shown in Fig. 1.

Figure 1. The walking videos of 90° and 45°

A. The Relationship between the Proportion of Step Length to Stature and Time

From the physical characters of the human body, it is concluded that the proportion of step length to stature is between 0.5 and 0.6, while the proportion of a walking racer's step length to stature in competition is between 0.6 and 0.7 [13]. The proportions (f) of step length to stature are calculated by Eq. (1):

f(i) = w(i) / l(i), 1 ≤ i ≤ tn   (1)

where tn is the duration of the video. The discrete data are represented and fitted against time (t), as shown in Fig. 2, and the expressions were induced as follows.

The fitting curve of the 90° video:
f = 0.04 · sin(2π/0.7 · (t − 0.06)) + 0.36   (2)

The fitting curve of the 45° video:
f = 0.15 · sin(2π/0.7 · (t − 0.78)) · exp(−0.5 · t) + 0.38   (3)

Figure 2. The data and fitting curves on the proportion of step length to stature

B. The Relationship between Human Silhouette and Time

In video surveillance, the exact silhouette of a human is not extracted directly and immediately, due to the restrictions of recorders and algorithms. Therefore the areas (s) of the rectangles on the human silhouette are computed and utilized to represent it equivalently, as shown in Eq. (4):

s(i) = l(i) · h(i)   (4)

On the basis of the data extracted, the proportions (g) of the areas of rectangles between adjacent frames are found by Eq. (5):

g(i) = s_{i+1} / s_i, 1 ≤ i ≤ tn − 1   (5)

The data are fitted by means of the proportion of the areas of the rectangles against time, and the motion cycles are analyzed in order to induce the expressions of the walking function, as shown in Fig. 3.

The fitting curve of the 90° video:
g = 0.35 · sin(2π/0.7 · (t − 1.06)) + 1.1   (6)

The fitting curve of the 45° video:
g = 0.25 · sin(2π/0.7 · (t − 0.325)) + 0.17 · exp(−0.628 · x) + 0.68   (7)

Figure 3. The data and fitting curve on the proportion of the rectangle areas for the adjacent frames
On basis of the experiments. CONCLUSION d = ( x − x0 ) 2 + ( y − y0 ) 2 (8) where (x0. other simply ways of actions. Video Type f (10) The identification of body motion is one of main and latest orientation on study. As a result.38 ⋅ t + 1.Figure 3. THE EXPRESSIONS OF WALKING FUNCTIONS The Fitting Curves of 45°Video The Fitting Curves of 90°Video g d 2π ⋅ (t − P )) + B1 1 T 2π ⋅ (t − P2 )) + B2 g=A2 ⋅ si n( T d = A3 ⋅ t + B3 f = A1 ⋅ sin( f = A1' ⋅ sin( ' g = A2 ⋅ sin( 2π ⋅ (t − P ' )) ⋅ exp( B1' ⋅ t ) + C1' 1 ' T 2π ' ' ' ⋅ (t − P2' )) + B2 ⋅ exp(C2 ⋅ x) + D2 ' T d = A3' ⋅ t + B3' 62 . as its consequence. The data and fitting curves on the distance of adjacent rectangle centers C. were less than 10%. because of the recording angles and so on. angles. Based on the laws of mechanism motion. D). Euler distance is used as follow: IV. With the experiments on testing set. jumping. human centroid is used to describe physical parameters of body motion. periods and velocity are determined by variation of the relevant parameters. so the expressions were proved to be valid. as shown in Fig. and offset (B. as shown in Table 1. The expressions and results above prove that human actions. the relative errors of main parameters of walking functions. the functions of human walking are studied at large. C. such as period (T). Meanwhile. phase (P). the data were fitted.224 TABLE I. The data and fitting curve on the proportion of the rectangle areas for the adjacent frames Figure 4. The fitting curve of 90°video: d = 77.48 ⋅ t − 7. the characteristic data of human walking are extracted from videos. In video surveillance. 9 and 10. The Relationship between the Coordinates of Human Centroid and Time In mechanism motion. as shown in Fig. Consequently. as shown in Eq. These data are fitted by the characters of video surveillance to find the expressions of walking function. the validity of expressions is proved by experimental results. amplitude (A). a method [14] to calculate the centroid is presented and applied to the representation of silhouette characters [10]. by means of extracting data. 4. it will be applied to video surveillance in order to detect and affirm abnormal actions. By means of the relationship between distance (d) and time (t). EXPERIMENTAL RESULTS The experimental results above conclude three arrays of expressions on walking function in 90° and 45° videos. y0) is an initial pixel of video to represent the distance of body motion. The data are extracted.354 The fitting curve of 45°video: (9) d = 74. remain to be studied and presented further. In addition. the expressions were induced. such as running. it is not easy to calculate the centroid by extracting data. and complex ones which consist of many simply ones. 4. Consequently. the center of rectangle is used to replace the centroids equivalently. V. In contrast. The expressions of human walking derived from the experiments are with the characters of high efficiency in computation.

2004. pp. Chinese Journal of Computers. A Survey of Visual Analysis of Human Motion. 3. LIU Jian-bo. 1996. HU Wei-Ming. 24. Pattern Recognit Lett. pp. Weiming Hu. 35-37. 2005. ZOU Liang-chou. YAO Guo-Qiang. 1997. no. vol. Sakai R. WANG Liang. 2003. vol. no. Brand M. Tieniu Tan. Aug. Mar. IEEE Trans Pattern Anal Mach Intell. 334-352. Coupled hidden markov models for complex action recognition. no. 3. 353360. In Proc IEEE Southwest Symposium on Image Analysis and Interpretation. XU Shu-Kui. Bayesian computer vision system for modeling human interactions. Oliver N M. 3. Kitani K M. CHEN Feng. no. Harris C. Jan. YE Long. WANG Hui. Mar. XU Wen-li. Steve Maybank. 14. pp. HU Wei-Ming. 2006D90704017). Li Yong-bin. Rosario B. In Proc IEEE workshop on VS PETS. 301311. Journal of Communication University of China (Science and Technology). Chinese Journal of Electronics. 63 . pp. In Proc IEEE Comput Soc Conf Comput Vision Pattern Recognit. Beijing Sport University Press. Nixon M. The authors would also like to thank the anonymous reviewers for their valuable comments that resulted in an improved manuscript. 2004. A Survey on the Vision-Based Human Motion Recognition. 1. 2007. pp. FAN Yi-fang. pp. vol. In Proc International Conference on Audioand Video-based Biometric Person Authentication. 13-16. FU Jie. pp. no. 225-237. pp. 2000. 2000. TAN Tie-Niu. Shutler J. vol. pp. Basic Text of Track and Field. 94-99. 2001. Bobick A. 3. 1996. Aug. XIONG Xi-Bei. 34. pp.ACKNOWLEDGMENT The research was supported in part by the Ministry of Education of the People’s Republic of China and Research Project (Guangdong Province. pp. Statistical gait recognition via temporal moments. 291-295. Gait-Based Human Identification. Sato Y. 25. pp. In Proc Int Conf Autom Face Gest Recogn. A multi-view method for gait recognition using static body parameters. Oliver N. vol. Journal of Guangzhou Institute of Physical Education. no. no. ZHANG Qin. no. Johnson A. No. 155-162. 994-999. A Survey on Visual Surveillance of Object Motion and Behaviors. 1997. 831 – 843. YUAN Zhi-run. Understanding manipulation in video. 2. 84-90. pp. Sept. IEEE Trans Syst Man Cybern Pt C Appl Rev. Beijing. Murase H. Sugimoto A. 17. 2007. 22. Research on the Locating of Image Centroid of Human Body Movements——Locating the Human Body Centroid. vol. Deleted interpolation using a hierarchical Bayesian grammar network for recognizing human activity. Moving object recognition in eigenspace representation: gait analysis and lip reading. [6] [7] [8] [9] [10] [2] [11] [3] [12] [4] [13] [14] [5] Brand M. 8. Liang Wang. DU You-tian. vol. REFERENCES [1] WANG Liang. vol. 239-246. 35. Chinese Journal of Computers. TAN Tie-Niu. Feb. Pentland A. Gait Tracking Based on Motion Functional Modeling. Pentland A P. 3. 26. 2002.

Preprocessing and Feature Preparation in Chinese Web Page Classification

Weitong Huang, Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, huangwt@tsinghua.edu.cn
Yanmin Liu, Department of Computer Technology and Applications, Qinghai University, Xining, Qinghai, 810016, lyanmin@qhu.edu.cn
Luxiong Xu, Fujian Normal University, Fuqing Branch, Fuqing, Fujian, China, xlx@fjnu.edu.cn

Abstract—A detailed design and implementation of a Chinese web-page classification system is described in this paper, and some methods for Chinese web-page preprocessing and feature preparation are proposed. Experimental results on a Chinese web-page dataset show that the methods we designed can improve the performance from 75.82% to 81.88%.

Keywords—text classification; Chinese web-page; Chinese word segmentation; feature preparation

I. INTRODUCTION

With the rapid development of the Internet, the amount of web-pages has been increasing dramatically. While providing all-embracing information, the large collections of web-pages bring people a great challenge: how to find what they need. In order to organize and utilize information effectively, people hope to classify web-pages according to their contents. Web-page classification has been a hotspot in the research domains of text mining and web mining, and it is widely applied in the fields of vertical search, personalized search, etc.

In this paper, we introduce a Chinese web-page classification system we implemented. The system design and our proposed methods are described in Section 2. In Section 3, the experimental results on a Chinese web-page dataset are shown, together with some discussion. Finally, we conclude our work in Section 4.

II. SYSTEM DESIGN AND IMPLEMENTATION

A. System Architecture

There are three parts in our Chinese web-page classification system: web-page preprocessing, feature preparation, and web-page classification. The system architecture is illustrated in Figure 1.

Figure 1. A Chinese Web-page Classification System

B. Web-page Preprocessing

In the whole data mining task, 60% of the work is data preprocessing. There are six procedures in web-page preprocessing: HTML parsing, English lexical analysis, Chinese word segmentation, stemming, stopword removal, and vocabulary selection.

1) HTML Parsing

The purpose of HTML Parsing is to remove the HTML code that is irrelevant and to extract the text. A HTML web-page can be represented as a HTML tag tree, whose leaf nodes are plain text or hyperlink text. Because no special tags are prepared for text in standard HTML, we add user-defined tags for text in a web-page. The rule is: the non-hyperlink text is identified by a "text" tag, while the hyperlink text is identified by an "anchor" tag.

The procedure of HTML Parsing is as follows. Firstly, remove the HTML source code embedded in tags such as "style", "script", and "applet". Secondly, reserve the attributes of "meta" and "a" and discard the attributes of other tags; the reason is that the attributes of tags are of little help to the classification task, except those of "meta" and "a". Following the above rule, we save the parsing result as files in XML format for further preprocessing and feature extraction.

2) English Lexical Analysis and Chinese Word Segmentation

In English text, spaces are usually used as separators, while there are no separators between words in Chinese text; Chinese lexical analysis is therefore a prerequisite to Chinese information processing. At present, there are two open source Chinese word segmentation projects. One is ChineseSegmenter [5], which works with a version of the maximal matching algorithm: when looking for words, it attempts to match the longest word possible. This simple algorithm is surprisingly effective. The other is the Chinese lexical analysis system ICTCLAS [6], developed by the Institute of Computing Technology in China, which uses an approach based on a multi-layer HMM. ICTCLAS includes word segmentation, part-of-speech tagging and unknown word recognition, and it performs very well in practice with high accuracy. Because of ICTCLAS's higher precision, we adopt it as the Chinese word segmentation module in our classification system.

3) Stopword Removal and Stemming

Words with high frequency are not only unhelpful for distinguishing documents but also increase the dimension of the feature space; such words are called stopwords. We maintain a stopword list containing 580 English words and 637 Chinese words, which is used for stopword removal. In English, many words have variations; using stemming [7], words having the same stem can be considered equal.
Using stemming [7]. while the tables along the two sides. there are two open source Chinese word segmentation projects. it attempts to match the longest word possible. and x'-test (CHI). C. Given a document and a classifier. we adopt it as the Chinese word segmentation module in our classification system. The assumption of this method is: the features (words) with low document frequency have small influence on the classification accuracy. di ) P(c j | di ) i . we discover that in a content page (not a hub page). In this paper. many words have variations. Such words are called stopword. wdi . Feature Selection One difficulty in text classification is the high dimension of feature space. D. verbs. Once there is a HTML tag tree described above. di ) P(c j | di ) |V | i 1 + ∑ d ∈D N ( wt . remove noises and improve classification quality. the top and the bottom are often noises such as navigation bar and ads. Feature Extraction and Weighting HTML web-pages are semi-structured text. The task of “feature selection” is to remove less important words from the feature space. One is ChineseSegmenter [5] which works with a version of the maximal matching algorithm. The former includes “meta” and “title” which are the summary description of the whole page. The content text in the 65 freq( wt ) = Ci ∈C ∑ freq ( wt . As we know. This simple algorithm is surprisingly effective. while there are no separators between words in Chinese text. Part-OfSpeech tagging and unknown words recognition. adverbs. Many feature selection methods have been studied. A web-page consists of two parts: <head> part and <body> part. such as nouns. num(Ci ) is the number of documents in class Ci . We adopt document frequency selection in our system. nouns and verbs express more semantic meaning.k | c j ) k =1 P (c j ) indicates the document frequency of class c j relative to all other classes. mutual information (MI). using an approach based on multi-layer HMM. and the latter is the actual content text that we can see visually. Through experiments (see section 3. P( wt | c j ) = | V | + ∑ s =1 ∑ d ∈D N ( ws . P ( wt | c j ) indicates the frequency that the classifier expects word documents in class document c j . 4) Vocabulary Selection A sentence in natural language is made up of words of different parts of speech.3. and conjunctions. the hyperlink text is usually used for navigation or advertisement. words having the same stem can be considered as to be equal. information gain (IG). Chinese lexical analysis is a prerequisite to Chinese information processing.2). some algorithms [1] [2] can be applied to remove noises and extract content text. We maintain a stopword list containing 580 English words and 637 Chinese words. 2) English Lexical Analysis and Chinese Word Segmentation In English lexical analysis. In English. DF has comparable performance with IG and CHI. and it has the advantage of simplicity and low computation complexity.2. which is used for stopword removal.Usually the page layout is controlled by “table” tag. E. So it is effective to remove noises to only discard hyperlink text. When looking for words. Among them. The result shows that IG and CHI gain the best performance. we adopt a more simple and effective method to remove noises. for example. We use ICTCLAS[6] to extract nouns and verbs from a sentence. prepositions. [3][4] compared the above four methods. Institute of Computing Technology in China developed a Chinese lexical analysis system ICTCLAS [6] <body> part includes two types of text. 
4) Vocabulary Selection

A sentence in natural language is made up of words of different parts of speech, such as nouns, verbs, adjectives, adverbs, pronouns, prepositions, articles, and conjunctions. Among them, nouns and verbs express more semantic meaning, so choosing only nouns and verbs as feature words is a feasible way to reduce the feature space. We use ICTCLAS [6] to extract nouns and verbs from a sentence. Our experiments are in Section 3.2.

C. Feature Selection

One difficulty in text classification is the high dimension of the feature space. The task of feature selection is to remove less important words from the feature space, remove noise and improve the classification quality. Many feature selection methods have been studied, such as document frequency (DF), information gain (IG), mutual information (MI), and the χ²-test (CHI). [3][4] compared these four methods: the results show that IG and CHI gain the best performance and MI is the worst, while DF has performance comparable with IG and CHI and the advantage of simplicity and low computational complexity. We adopt document frequency selection in our system. The assumption of this method is that features (words) with low document frequency have a small influence on the classification accuracy. We use the revised formula below to compute the document frequency of word w_t:

freq(w_t) = Σ_{C_i ∈ C} freq(w_t, C_i) / num(C_i)

where freq(w_t, C_i) is the document frequency of w_t in class C_i and num(C_i) is the number of documents in class C_i.

D. Feature Extraction and Weighting

HTML web-pages are semi-structured text. A web-page consists of two parts: the <head> part and the <body> part. The former includes "meta" and "title", which are a summary description of the whole page; the latter is the actual content text that we can see visually. The <body> part includes two types of text: one is plain text, which is content related; the other is hyperlink text, which, as we know, is usually used for navigation or advertisement. Through experiments (see Section 3.2), we discover that in a content page (not a hub page), whose layout is usually controlled by the "table" tag, the tables along the two sides, the top and the bottom are often noises such as navigation bars and ads, while the content text lies in between. Some algorithms [1][2] can be applied to remove noises and extract the content text from the HTML tag tree described above; in this paper, we adopt a simpler and still effective method to remove noises: only discard the hyperlink text. Assigning different weights to different parts can further improve the quality of classification.

E. Classifier

1) Naive Bayes Classifier

The Naive Bayes classifier is a simple but effective text classification algorithm. Given a document and a classifier, we determine the probability P(c_j | d_i) that document d_i belongs in class c_j by Bayes' rule and the naive Bayes assumption:

P(c_j | d_i) ∝ P(c_j) P(d_i | c_j) ∝ P(c_j) ∏_{k=1}^{|d_i|} P(w_{d_i,k} | c_j)

where w_{d_i,k} denotes the k-th word in d_i. The probability that the classifier expects word w_t to occur in documents of class c_j is estimated as:

P(w_t | c_j) = (1 + Σ_{d_i ∈ D} N(w_t, d_i) P(c_j | d_i)) / (|V| + Σ_{s=1}^{|V|} Σ_{d_i ∈ D} N(w_s, d_i) P(c_j | d_i))

and the class prior, which indicates the document frequency of class c_j relative to all other classes, is:

P(c_j) = (1 + Σ_{d_i ∈ D} P(c_j | d_i)) / (|C| + |D|)

Here V is the vocabulary, D is the set of training documents, |C| is the number of classes, |D| is the number of training documents, N(w_t, d_i) is the count of the number of times word w_t occurs in document d_i, and for labeled training documents P(c_j | d_i) ∈ {0, 1}.
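With hard labels (P(c_j | d_i) ∈ {0, 1}, as in the paper's training setting), the smoothed estimates reduce to Laplace-smoothed counts. The sketch below illustrates the classifier on toy data; the example documents and labels are invented.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes with the smoothed estimates above:
    P(w|c) = (1 + count(w, c)) / (|V| + total_count(c)) and
    P(c) = (1 + |docs in c|) / (|C| + |D|)."""
    vocab = {w for d in docs for w in d}
    word_counts = defaultdict(Counter)
    class_docs = Counter(labels)
    for d, c in zip(docs, labels):
        word_counts[c].update(d)
    classes = sorted(class_docs)
    prior = {c: (1 + class_docs[c]) / (len(classes) + len(docs)) for c in classes}
    cond = {c: {w: (1 + word_counts[c][w]) / (len(vocab) + sum(word_counts[c].values()))
                for w in vocab} for c in classes}
    return classes, prior, cond, vocab

def classify(doc, classes, prior, cond, vocab):
    # log-space scoring of P(c) * prod_k P(w_k | c), ignoring unseen words
    scores = {c: math.log(prior[c]) +
                 sum(math.log(cond[c][w]) for w in doc if w in vocab)
              for c in classes}
    return max(scores, key=scores.get)

docs = [["stock", "market"], ["goal", "match"], ["market", "price"]]
labels = ["business", "sports", "business"]
model = train_nb(docs, labels)
print(classify(["market", "goal"], *model))
```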

III. EXPERIMENTS

A. Chinese Web-page Dataset

1) Dataset. We downloaded 6745 Chinese web-pages from http://www.sohu.com/ as our training set and 2814 web-pages from http://www.sina.com.cn/ as our test set. These web-pages are distributed among 8 categories; Table 1 shows the number of documents in the training and test sets for each category.

TABLE I. CHINESE WEB-PAGE DATASET

    Category Name    Training Set (from sohu)    Test Set (from sina)
    Auto                  841                         351
    Business              630                         263
    Entertainment        1254                         523
    Health                784                         327
    House                 736                         307
    IT                   1050                         438
    Sports                841                         351
    Women                 609                         254
    Sum                  6745                        2814

We deliberately chose the training set and the test set from two different sources to evaluate the effect of training. Our former work shows that the performance on a dataset drawn from only one data source is surprisingly high (usually above 95%), but that does not reflect the real world.

2) Evaluation Measure. We use the standard measures to evaluate the performance of our classification system: precision, recall, and the F1-measure [8], summarized over categories as Micro-F1 (a small sketch of the Micro-F1 computation is given at the end of this section).

B. Vocabulary Selection

We compare classification quality when using vocabularies of different parts of speech; the experimental results are shown in Figure 2.

Figure 2. Classification results using vocabularies of different parts of speech.

C. Feature Extraction and Weight Distribution

As a feature extraction scheme, we extract only the plain text of every web-page and compare it with the scheme that extracts the full text. The results, illustrated in Figure 3, show that the scheme of extracting only plain text as features can improve the classification quality by 3.11% by effectively removing the noises in the web-pages: the average Micro-F1 of the full-text scheme is 75.82%, while the plain-text scheme reaches 78.93%.

Figure 3. Classification results of different feature extraction schemes.

The header of a web-page (Section 2.1 defined plain text; by "header" we mean the "meta" and "title" fields), which is concise and exact, reflects how the page author generalized the content of the web-page; headers are always correlated with the topic of the web-page and contain few noises. Properly raising the header weight in the feature space can therefore improve the classification quality. We ran an experiment to determine the ratio of the header weight to the body text weight: we chose 4,000 feature words by the document frequency method and compared classification quality as the ratio of header weight to body text weight varies from 2:1 to 10:1 (we also varied the number of feature words from 1,000 to 10,000). The results, shown in Figure 4, give a Micro-F1 of 79.75% when header and body text share the same weight; Micro-F1 rises as the header weight is raised, reaching a maximum when the ratio of the header weight to the body text weight is 5:1. Therefore, we set the ratio of the header weight to the body text weight to 5:1.

Figure 4. Classification results of different header-weighting schemes.
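The following is a minimal sketch of the micro-averaged F1 computation used to summarize multi-category results (the function name is ours; in the single-label case micro-precision and micro-recall coincide with accuracy):

    def micro_f1(y_true, y_pred, classes):
        """Micro-averaged F1: pool per-class TP/FP/FN counts, then
        compute precision, recall and F1 from the pooled counts."""
        tp = fp = fn = 0
        for c in classes:
            tp += sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
            fp += sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
            fn += sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)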

The classification results of using only nouns and verbs are clearly better than those of using all words (Figure 2), which means nouns and verbs are enough to reflect the content of a web-page and can eliminate the noises caused by pronouns, quantifiers, adjectives, and adverbs. Figure 4 shows the experimental results of feature extraction after raising the header weight: the average Micro-F1 goes up from 78.93% to 81.00% thanks to increasing the header weight.

D. Final Experimental Results

Considering the experimental comparisons and analysis above, we set down a final web-page classification scheme: extract the plain text in web-pages, raise the ratio of the header weight to the body text weight to 5:1, select 4,000 feature words using the document frequency method, choose nouns and verbs as features, and use the Naive Bayes classifier to train and test the Chinese web-page dataset. On our Chinese web-page dataset, Micro-F1 is 81.88% when we use the preprocessing and feature preparation methods mentioned above. In comparison, if we extract the full text and do not use the special feature preparation methods, Micro-F1 is only 75.93%. The precision and recall of each category are given in Table 2.

TABLE II. CLASSIFICATION RESULTS OF EACH CATEGORY
(precision and recall of the final scheme for each of the eight categories — Auto, Business, Entertainment, Health, House, IT, Sports, and Women — with an overall Micro-F1 of 81.88%)

We are satisfied with this improvement: classifying web-pages drawn from a different source is harder, but this is the real world, because many pages cannot be assigned to only one category absolutely. For example, web-pages about women's health information can be classified into the category "Health" as well as the category "Women."

IV. CONCLUSION

In this paper, a series of web-page preprocessing and feature preparation methods are proposed. Through experiments, we draw the following conclusions: extracting only "plain text" can eliminate noises in web-pages effectively, and both raising the header weight and choosing only nouns and verbs as candidate features can improve the classification quality. Compared to the full-text method, the proposed methods improve Micro-F1 greatly, from 75.93% to 81.88%. In the future, we will enrich our Chinese web-page dataset and do experiments on larger and more varied datasets.

REFERENCES
[1] Lawrence Kai Shih and David R. Karger, "Using URLs and Table Layout for Web Classification Tasks," Proceedings of WWW'04, New York, USA, 2004.
[2] Songwei Shan, Shicong Feng, and Xiaoming Li, "A Statistical Approach for Content Extraction from Web Page," Journal of Chinese Information Processing, 18(5): 17-22, 2004.
[3] Yiming Yang and Jan O. Pedersen, "A Comparative Study on Feature Selection in Text Categorization," Proceedings of ICML, Nashville, Tennessee, USA, 1997.
[4] Chengjie Sun and Yi Guan, "A Comparative Study on Several Typical Feature Selection Methods for Chinese Web Page Categorization," Computer Engineering and Applications, 39(22): 146-148, 2003.
[5] ChineseSegmenter: http://www.mandarintools.com/segmenter.html
[6] ICTCLAS: http://www.nlp.org.cn/project/project.php?proj_id=6
[7] Porter Stemming Algorithm: http://www.tartarus.org/martin/PorterStemmer/
[8] C. J. van Rijsbergen, Information Retrieval, Butterworth, London, 1979.

HIGH PERFORMANCE GRID COMPUTING AND SECURITY THROUGH LOAD BALANCING

V. Prasanna Venkatesh, V. Sugavanan
Final year, Department of Information Technology, St. Joseph's College of Engineering, Chennai, India
E-mail: vprasanna.1310@gmail.com, sugavanan.v@gmail.com

Abstract

It has always been a great deal to balance both security and performance. Here we present a scenario in which the local scheduler of a networked system can schedule its jobs on and across other systems in the network securely. These issues are handled by employing a new level of managerial systems called Authenticated Distribution Managers (ADMs) for managing the cooperating systems. Any system can select any other system as an ADM and transfer its security policies to it; the ADM does the work of job division and integration based on the security implications of its authenticator. Thus the security is not compromised even if there is load distribution across various systems, and high performance is assured, along with enhancement in resource utilization and optimization. This approach clubs both distributed computing and load sharing, thereby creating an environment in which no system is idle even when only two systems are communicating. We thus present an overview of a high-performance network without security being compromised.

Keywords: Security, High Performance, Load sharing, Job division, Load balancing, Authenticated Distribution Managers (ADM), Security Authentication.

1. Introduction

The technique chosen here for discussion does not try to eliminate the shortcomings of either grid computing or load balancing, but instead combines the advantages of both concepts. The discussion proceeds by explaining what a grid system is, describing the load sharing ideas of the grid, and emphasizing its advantage of high resource utilization. The section following the discussion of grid computing explains the advanced methods of load balancing, which provide various security implementations and protocols for highly secure operations. When we look at the other side of the coin of such networked systems, security concerns arise. The new technique that we discuss later combines the two advantages: it detects a new opening in the security implementations of load balancing by distributing them through a grid system, which allows faster implementation of these security measures in an efficient manner. We advocate the development of this technique and try to throw light on the tremendous potential it possesses for providing high-performance output in a distributed network.

2. Grid computing

A grid is defined as a group of nodes that are connected to each other through a number of connecting paths for communication. Grid computing is a type of parallel computing technology in which the computers involved, called "nodes", are almost completely independent of each other in terms of resources, timing, and synchronization. These nodes are more heterogeneous than those present in other distributed systems, and the nodes involved in the process are voluntary. The interaction and mediation between the nodes of the grid are performed through a management interface; all transactions in the grid pass through this interface.

Figure: clients and services written in different languages (Java, C, Python, etc.) communicating through a management interface that provides security, authentication, and other protocol implementations.

Advantage: A grid computing system provides sharing of a task that a single node is unable to perform. Grid computing allows a cluster of loosely coupled computers to perform large tasks, or tasks that generally consume more resources and time than is feasible for a single system; the grid utilizes its available resources to provide enhanced performance. This is the main feature of grid computing: high performance through enhanced resource utilization.

Disadvantage: Grid computing is a new concept, and the implications in this field have not been explored as much as other areas of distributed systems. The obvious drawback of such new systems is their security implementation. For this reason, we look for systems that have well-established security implementations that can be performed faster through a grid system. Hence, in our new technique, we try to implement security in grid computing by using the advanced security provided by load balancing systems within a grid, for producing high performance.

Load balancing: Load balancing is the process in which the entire workload on a system is distributed among the nodes of a network, so that resource utilization and time management can be enhanced in the network.

This technique is very useful in providing optimization of the resources of the network: a server manages the distribution of the workload among the nodes of the network. The server that performs this task is called the "Load Balancer". The main features of a load balancer are:

Security: The load balancer provides security for the data that needs to be shared between systems at the time of load balancing. These security aspects involve the implementation of available security protocols such as: 1. the SSL (Secure Socket Layer) protocol for TLS (Transport Layer Security); 2. protection against DDoS (Distributed Denial of Service); and 3. security through firewalls.

Authentication: Authentication is required in load balancing when the load is distributed to many nodes in the network. It is necessary that each of the client systems is an authenticated node of the network so that data transfer can be performed; authentication of the client systems is therefore an important aspect of the discussion.

Reliability: Load balancing techniques provide reliability through redundancy. Redundancy is also required for authentication services and in the implementation of the basic feature of distributed systems, namely parallelism.

Advantage: The notable feature of the load balancing concept is its highly advanced security implementation through various security and authentication protocols, which is indispensable in any network that implements high security. This feature can be combined with the grid concept to enhance the grid's security implementations.

Disadvantage: The drawback of this concept lies in the fact that implementing so many security features on each and every node of a grid system results in enormous time consumption. It is thus unfeasible to apply the security of a load balancing system to each and every node of the grid every time two or more nodes are communicating; this would ultimately cause the main feature of grid systems — higher performance through faster outputs — to suffer attrition.

Approach: To attain our goal, we primarily explore the inherent parallelism principles of most complex problems, through the prism of their immanent natural distributed parallelism, by integrating them in a set of loosely coupled systems in a grid, and we seek effective and efficient methods of implementing the security aspects required for the distributed environment, to provide reliability to the entire network. The approach discussed below provides a simple, feasible solution to the above situation and aims at solving it by combining the two concepts into a single system: we implement the security aspects of a load balancing system by using the concept of distribution of security implementation throughout a grid, described next.

Role of the ADM: In a grid system, every node has access to the grid only after authentication has been performed by the respective management agent, so every node in the grid is authenticated. When a node is in communication with a system — which may or may not be part of the grid — it needs to implement its security standards and protocols. This node, which requires security implementation, is called the "primary node". Since it is possible to distribute the security aspects of one node by implementing them through other nodes in the grid, the primary node designates an Authenticated Distribution Manager (ADM). The ADM can be chosen from a set of idle systems present in the grid, whose states are monitored by a specially designated system in the grid. Once the primary node obtains its ADM, it hands over its security systems and protocol implementations to the ADM. The ADM then identifies other idle nodes in the grid and distributes to each of these nodes the load of security implementation for the primary node. In this way, the load of security implementation for one node in a grid can be balanced between all the idle nodes present in the grid, thus reducing the time required for the security implementation and also providing optimization of resources in the grid.

Figure: a communicating primary node designates an ADM, which distributes the primary node's security implementation, via the ADM, to idle nodes in the grid.

Advantages: The advantages of this new technique are as follows (an illustrative sketch of the mechanism is given after this list):
1. High security for grid systems, since every node is authenticated.
2. High performance for grid systems through faster security implementations.
3. Improved efficiency in grid computing.
4. Improved resource optimization by utilizing idle nodes in a grid.
5. Utilization of the resources of a grid system for security implementation.
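To make the ADM mechanism concrete, the following is a minimal illustrative sketch, not taken from the paper: a primary node hands its security checks to an ADM, which fans them out across idle nodes (simulated here by threads). All class, method, and node names are hypothetical:

    from concurrent.futures import ThreadPoolExecutor

    class ADM:
        """Authenticated Distribution Manager: receives a primary node's
        security policy and spreads its checks across idle grid nodes."""
        def __init__(self, idle_nodes):
            self.idle_nodes = idle_nodes      # nodes currently idle
            self.policies = {}                # security policy per primary

        def register_primary(self, primary_id, security_checks):
            # The primary node hands over its security policy (a list of
            # callables standing in for real protocol implementations).
            self.policies[primary_id] = security_checks

        def authorize(self, primary_id, request):
            # Run each check concurrently (one "idle node" per check);
            # the request is authorized only if every check passes.
            checks = self.policies[primary_id]
            with ThreadPoolExecutor(max_workers=len(self.idle_nodes)) as pool:
                results = pool.map(lambda check: check(request), checks)
            return all(results)

    # Toy checks standing in for SSL/firewall/DDoS logic:
    adm = ADM(idle_nodes=["node-3", "node-7"])
    adm.register_primary("node-1", [lambda r: r.get("authenticated", False),
                                    lambda r: len(r.get("payload", "")) < 1024])
    print(adm.authorize("node-1", {"authenticated": True, "payload": "job"}))

The design point of the sketch is that the primary node itself runs none of the checks: the time cost of the security implementation is absorbed by otherwise idle capacity.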

Scope for future research:
1. A possible drawback of the new approach lies in the implementation of the distribution algorithms in the ADM. Constraints may occur in situations where none of the nodes in the grid system is idle. In such situations, it might be necessary to maintain at least one node, called the "master node", that will pretend to be always idle in the grid, perform only the functions of the ADM, and also implement the security of the grid system.
2. Further possibilities in the above-discussed technique remain available for research, and further areas are to be explored in the future.

Conclusion and future vision: Thus, we enable the implementation of high performance in the grid system by balancing the load of security implementation between the idle nodes of the grid, thereby reducing the time required for a task in the grid; this in turn allows resource optimization. The idea discussed above allows the integration of two major, fast-developing techniques used in distributed environments — grid computing and load balancing — which nullify each other's drawbacks by implementing one technique within the other's domain. This not only integrates their advantages, but also provides tremendous scope for the development of both the security and the performance of any distributed system.

Key References:
1. Parvin Asadzadeh, Rajkumar Buyya, Chun Ling Kei, Deepa Nayar, and Srikumar Venugopal, "Global Grids and Software Toolkits," Technical Report GRIDS-TR-2004-5, Grid Computing and Distributed Systems Laboratory, University of Melbourne, Australia, July 1, 2004.
2. Asser N. Tantawi and Don Towsley, "Optimal static load balancing in distributed computer systems," Journal of the ACM, vol. 32, no. 2, pp. 445-465.

Research of The Synthesis Control of Force and Position in Electro-Hydraulic Servo System

Yadong Meng, Changchun Li, Hao Yan, Xiaodong Liu
School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, 100044, China
06116282@bjtu.edu.cn

Abstract

The synthesis control of force and position in an electro-hydraulic servo system has been applied in some special occasions. A typical system adopting this kind of control algorithm is analyzed. Applying unchanged structure theory, the compensated transfer functions are deduced, and the compensated state-space model is presented. Considering that the synthesis control should meet the requirements on force and position at the same time, the linear quadratic performance functional is introduced, and the optimized control formula of the synthesis control of force and position on total state feedback is deduced. The simulation results prove that the compensated state-space model has superior performance.

1. Introduction

Electro-hydraulic servo systems aiming at force control or position control independently are familiar. In some special occasions, however, force control and position control need to run simultaneously in one system. Consider the load simulator commonly used in the aerospace field: the time-position curve of the right side cylinder rod is given, and the time-force curve of the left side cylinder rod is given simultaneously; there is no relation between the two given curves, but both performance requirements must be met together. Such a load simulator is a typical force application system disturbed by a position interference signal, where the position interference signal is random and uncontrolled. Literature [1] discusses all kinds of force application systems with such influence in detail; the purpose of each control algorithm applied to this kind of system is to eliminate the position interference, or to keep the force unchanged [5]. In other existing electro-hydraulic servo systems, the force control and the position control always disturb each other, and the force feedback signal influences the position control: the system is regulated according to the force feedback value to ensure position control [2-4]; there the force control is auxiliary, while the position control is the final task of the system. This kind of control task has been widely studied in the robotics field [6,7], but few works have been done for electro-hydraulic servo systems; we consider it here as a special case.

2. System description and modeling

2.1. System description

Fig.1 shows a typical synthesis control system. It contains two proportion valves, two symmetric hydraulic cylinders, an elastic load (a spring) between the two cylinder rods, a position sensor, and two pressure sensors. The left side subsystem is the force application control system; the right side subsystem is the position control system. The driving force of the left side force application control system is worked out from the pressure difference multiplied by the efficient area, and the value measured by the force sensor is the elastic force of the spring. The equivalent input of the position interference signal to the system is zero. The system parameters are as follows: F_0 is the given driving force; F_1 is the practical driving force; D_0 is the given cylinder rod position; D_1 and D_2 are the practical positions of the two cylinder rods, respectively (D_2 being the practical position of the right side cylinder rod); W_11 and W_21 are the input signal amplifiers; W_12 and W_22 are the transfer functions of the two amplifiers, respectively; W_13 and W_23 are the transfer functions of the two proportion valves, respectively; W_14 is the coefficient of the force-electric transform; W_24 is the coefficient of the position-electric transform; F_s is the elastic force of the elastic load.

Further, B_{1V} and B_{2V} are the openings of the two proportion valve cores; I_1 and I_2 are the electric currents to the two proportion valves; Q_{1L} and Q_{2L} are the load flows of the two cylinders; P_{1L} and P_{2L} are the load pressures of the two proportion valves; K_{1B} and K_{2B} are the flow coefficients of the two proportion valves; K_{1p} and K_{2p} are the flow-pressure coefficients of the two proportion valves; C_{1t} and C_{2t} are the total leakage coefficients of the two cylinders; V_1 and V_2 are the overall volumes of pipeline and cylinder for the two sides; A_1 and A_2 are the efficient areas of the two cylinders; m_1 and m_2 are the total masses of the two cylinder rods; \beta_e is the bulk elastic modulus of the hydraulic oil; K_s is the elasticity coefficient of the spring.

Fig.1 System circuit

2.2. The system fundamental equations

This is a typical synthesis control system, with two input signals (the given driving force F_0 and the given cylinder rod position D_0) and two output signals (the practical driving force F_1 and the practical position D_2). According to the basic working theory of electro-hydraulic servo systems [1,8], the load flow equations of the two proportion valves are

    Q_{1L} = K_{1B} B_{1V} - K_{1p} P_{1L}                          (1)
    Q_{2L} = K_{2B} B_{2V} - K_{2p} P_{2L}                          (2)

The load flow equations of the two cylinders are

    Q_{1L} = A_1 s D_1 + C_{1t} P_{1L} + \frac{V_1}{4\beta_e} s P_{1L}   (3)
    Q_{2L} = A_2 s D_2 + C_{2t} P_{2L} + \frac{V_2}{4\beta_e} s P_{2L}   (4)

The movement equations of the two cylinder rods are

    m_1 s^2 D_1 = F_1 + K_s (D_2 - D_1)                             (5)
    m_2 s^2 D_2 = A_2 P_{2L} + F_s                                  (6)

The equations relating feedback and gain are

    B_{1V} = W_{13} I_1                                             (7)
    B_{2V} = W_{23} I_2                                             (8)
    I_2 = W_{22} (D_0 W_{21} - D_2 W_{24})                          (9)
    I_1 = W_{12} (F_0 W_{11} - F_1 W_{14})                          (10)

and the elastic force of the load spring is

    F_s = K_s (D_1 - D_2)                                           (11)

2.3. The system transfer functions

The feedback coefficients W_{14} and W_{24} relate to the operation of the system feedback signals; these two coefficients are set to zero while deducing the system open-circuit (open-loop) transfer functions, which we study first.

The left side force application subsystem has two inputs — the given driving force F_0 and the practical position of the right side cylinder D_2 — and a single output, the practical driving force F_1. From equations (1)(3)(5)(7)(10) we can deduce its transfer function:

    F_1(s) = \frac{A_1 K_{1B} W_{11} W_{12} W_{13} \left(\frac{m_1}{K_s} s^2 + 1\right) F_0 - A_1^2 s \, D_2}
                  {\frac{m_1 V_1}{4\beta_e K_s} s^3 + \frac{m_1 K_{1t}}{K_s} s^2 + \left(\frac{V_1}{4\beta_e} + \frac{A_1^2}{K_s}\right) s + K_{1t}}   (12)

where K_{1t} = K_{1p} + C_{1t} is the total flow-pressure coefficient of the left side force application subsystem.

The right side position control subsystem has two inputs — the given cylinder rod position D_0 and the elastic force of the elastic load F_s — and a single output, the practical position of the right side cylinder rod D_2. From equations (2)(4)(6)(8)(9) we can deduce its transfer function:

    D_2(s) = \frac{A_2 K_{2B} W_{21} W_{22} W_{23} \, D_0 - \left(\frac{V_2}{4\beta_e} s + K_{2t}\right) F_s}
                  {\frac{m_2 V_2}{4\beta_e} s^3 + m_2 K_{2t} s^2 + \left(\frac{K_s V_2}{4\beta_e} + A_2^2\right) s + K_{2t} K_s}   (13)

where K_{2t} = K_{2p} + C_{2t} is the total flow-pressure coefficient of the right side position control subsystem. The transfer function equations (12)(13) do not contain the feedback elements; they are open-circuit transfer functions (a numerical sketch of (13) follows below).
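As an illustration of how the D_0-to-D_2 path of equation (13) can be evaluated numerically (with F_s = 0), here is a minimal sketch; all parameter values below are assumptions chosen only for demonstration, not values from the paper:

    from scipy import signal

    # Illustrative parameter values (assumed, not from the paper)
    A2, m2, Ks = 1.2e-3, 50.0, 2.0e5          # m^2, kg, N/m
    V2, beta_e = 8.0e-4, 7.0e8                # m^3, Pa
    K2t, K2B = 2.0e-11, 0.5                   # flow-pressure and flow gains
    W21 = W22 = W23 = 1.0                     # unity amplifier/valve gains

    # Eq. (13) with Fs = 0 (open loop, W24 = 0): D2(s)/D0(s)
    num = [A2 * K2B * W23 * W22 * W21]
    den = [m2 * V2 / (4 * beta_e),            # s^3 coefficient
           m2 * K2t,                          # s^2 coefficient
           Ks * V2 / (4 * beta_e) + A2**2,    # s^1 coefficient
           K2t * Ks]                          # s^0 coefficient
    G = signal.TransferFunction(num, den)
    w, mag, phase = signal.bode(G)            # frequency response of the path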

3. Compensation by unchanged structure theory

3.1. Structure compensation

Literature [1] discusses unchanged structure theory in detail. In this design, two structure compensation elements are applied simultaneously, and the structure compensation is applied to the electro-hydraulic position control system and the electro-hydraulic force application system respectively, in order to eliminate the cross influence between the force application subsystem and the position control subsystem. The system control block diagram can be drawn from the system fundamental equations; the blocks connected by dashed lines in Fig.2 are the structure compensation blocks.

Fig.2 Frame chart of the system

The input signal of the compensation element in the force application system is the left side cylinder rod velocity s D_1. The compensated transfer function is

    T_1 = \frac{A_1}{W_{13} K_{1B}}                                 (14)

The input signal of the compensation element in the position control system is the right side cylinder load pressure P_{2L}. The compensated transfer function is

    T_2 = \left(\frac{V_2}{4\beta_e} s + K_{2t}\right) \frac{1}{W_{23} K_{2B}}   (15)

The system structure changes after compensation; the transfer function equations (12)(13) become

    F_1 = \frac{W_{11} W_{12} W_{13} K_{1B} A_1}{\frac{V_1}{4\beta_e} s + K_{1t}} \, F_0   (16)

    D_2 = \frac{W_{21} W_{22} W_{23} K_{2B}}{A_2 s} \, D_0          (17)

The force application control system has changed into a SISO system whose input signal is F_0 and whose output signal is F_1, and the position control system has changed into a SISO system whose input signal is D_0 and whose output signal is D_2. Applying suitable structure compensation can improve the system response speed and eliminate strong outer interference; the additional compensation elements are relative to the accuracy of the compensated model.

3.2. The compensated state-space model

The state-space equation is the foundation of modern control theory; for a multi-input, multi-output system, the state-space model is more suitable. Set the practical driving force F_1 = x_1, the practical cylinder rod position D_2 = x_2, the given driving force F_0 = u_1, and the given cylinder rod position D_0 = u_2. According to equation (16),

    \dot{x}_1 = a_{11} x_1 + b_{11} u_1                             (18)

where a_{11} = -\frac{4\beta_e K_{1t}}{V_1} and b_{11} = \frac{4\beta_e W_{11} W_{12} W_{13} K_{1B} A_1}{V_1}. According to equation (17),

    \dot{x}_2 = b_{22} u_2                                          (19)

where b_{22} = \frac{W_{21} W_{22} W_{23} K_{2B}}{A_2}. From equations (18)(19) we can deduce the following state-space equations:

    \dot{x} = A x + B u, \quad y = C x                              (20)

where

    x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} F_1 \\ D_2 \end{bmatrix}, \quad
    u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} F_0 \\ D_0 \end{bmatrix}, \quad
    A = \begin{bmatrix} a_{11} & 0 \\ 0 & 0 \end{bmatrix}, \quad
    B = \begin{bmatrix} b_{11} & 0 \\ 0 & b_{22} \end{bmatrix}, \quad
    C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

Equation (20) is the compensated state-space model of the synthesis control of force and position; this model is based on the two compensation elements added to the system.
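The decoupled model of equation (20) is straightforward to assemble and simulate numerically. The following sketch uses assumed parameter values (not from the paper) purely for illustration:

    import numpy as np
    from scipy import signal

    # Illustrative parameter values (assumed, not from the paper)
    beta_e, V1, K1t = 7.0e8, 1.0e-3, 2.0e-11
    A1, K1B, K2B, A2 = 1.2e-3, 0.5, 0.5, 1.2e-3
    W11 = W12 = W13 = W21 = W22 = W23 = 1.0

    a11 = -4 * beta_e * K1t / V1
    b11 = 4 * beta_e * W11 * W12 * W13 * K1B * A1 / V1
    b22 = W21 * W22 * W23 * K2B / A2

    # Eq. (20): x = [F1, D2]^T, u = [F0, D0]^T, y = x
    A = np.array([[a11, 0.0],
                  [0.0, 0.0]])
    B = np.array([[b11, 0.0],
                  [0.0, b22]])
    C = np.eye(2)
    D = np.zeros((2, 2))

    sys = signal.StateSpace(A, B, C, D)
    T = np.linspace(0, 1, 500)
    t, y, x = signal.lsim(sys, U=np.ones((500, 2)), T=T)  # step inputs

Note the structure of the compensated model: the force channel is a stable first-order lag, while the position channel is a pure integrator, with no cross terms between the two.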

4. Realization of the linear quadratic optimized control

4.1. Application of the linear quadratic optimized control

The control algorithm produced by linear quadratic optimized control is a function of the system state variables; a closed-loop feedback control is constructed on the basis of all state variables, so that the practical driving force and the practical position accord with the given driving force and the given position. Because the synthesis control of force and position in the electro-hydraulic servo system has two input control variables and requires two output variables to meet the given curves, introducing a performance functional is convenient. The real substance is to keep the error small while the consumed power is as small as possible — that is, to reach the optimized control of consumed power and error. We rewrite equation (20) as

    \dot{x}(t) = A x(t) + B u(t), \quad y(t) = C x(t)               (21)

with the initial condition

    x(t_0) = x_0                                                    (22)

The quadratic optimized control problem is as follows. Let the demanded output be z(t); the output error is

    e(t) = z(t) - y(t)                                              (23)

The performance functional is

    J = \frac{1}{2} \int_{t_0}^{t_f} \left[ e^T(t) Q_1 e(t) + u^T(t) Q_2 u(t) \right] dt   (24)

where Q_1 and Q_2 are positive definite matrices. In the time segment [t_0, t_f], we look for the optimized control u(t) that makes the performance functional reach its minimum value.

4.2. The realization of the optimized control

The above optimized control problem belongs to the multi-tracker problem in optimized control; solving it introduces the Riccati equation, and the solution is as follows [9]. The optimized control is

    u^*(t) = -Q_2^{-1} B^T P x(t) + Q_2^{-1} B^T g                  (25)

where P and g satisfy

    -P A - A^T P + P B Q_2^{-1} B^T P - C^T Q_1 C = 0               (26)
    g \approx \left[ P B Q_2^{-1} B^T - A^T \right]^{-1} C^T Q_1 z  (27)

The state trajectory under the optimized control obeys

    \dot{x}(t) = \left[ A - B Q_2^{-1} B^T P \right] x(t) + B Q_2^{-1} B^T g   (28)

The optimized control u^*(t) contains the state variable x(t); since the state variables change at every moment, a closed-loop optimized control by state feedback is obtained. This optimized control formula makes use of all the state information of the system, and it is easily realized in an electro-hydraulic servo system because the state and the other variables can be measured easily: for example, the pressures on the two sides of a cylinder can be measured, from which the driving force of the cylinder can be worked out. (A numerical sketch of the computation in (25)-(27) is given after the simulation setup below.)

5. The simulation of the system

In this design, we adopted physical model simulation, executed on both the non-compensated model and the compensated model. The signals of the given force and the given position are standard sinusoid signals, but their periods are different. The simulation parameters are as follows: the given driving force curve is a standard sinusoid with amplitude 10 N, maximum 55 N, minimum 45 N, and period 2 s; the given position curve is a standard sinusoid with amplitude 10 mm, maximum 5 mm, minimum -5 mm, and period 1 s. All other conditions are kept unchanged.

Fig.3 shows the curves of the given driving force and the practical driving force for the non-compensated model, and Fig.4 shows the same curves for the compensated model (axes in both: force in N versus time in s).

Fig.3 The given force and practical force before compensation
Fig.4 The given force and practical force after compensation

Comparing Fig.3 and Fig.4, we can deduce that the error is reduced after model compensation: without compensation the coupling influence is serious, whereas with compensation the error of the force application system disturbed by the position interference near the minimum force value is restrained. Fig.5 shows the curves of the given position and the practical position for the non-compensated model, and Fig.6 shows them for the compensated model (axes in both: position in mm versus time in s).
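The P and g of equations (25)-(27) can be computed directly from the model matrices. The following sketch reuses A, B, C from the state-space sketch above; the weighting matrices and the demand vector z are assumptions for illustration only:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    Q1 = np.diag([1.0, 1.0])      # output-error weight (assumed)
    Q2 = np.diag([0.01, 0.01])    # control-effort weight (assumed)

    # Eq. (26) is the algebraic Riccati equation
    # A^T P + P A - P B Q2^{-1} B^T P + C^T Q1 C = 0, solved for P:
    P = solve_continuous_are(A, B, C.T @ Q1 @ C, Q2)

    z = np.array([50.0, 0.005])   # demanded force (N) and position (m)
    # Eq. (27): feedforward term g for the constant demand z
    g = np.linalg.solve(P @ B @ np.linalg.inv(Q2) @ B.T - A.T, C.T @ Q1 @ z)

    # Eq. (25): u*(t) = -Q2^{-1} B^T P x + Q2^{-1} B^T g
    K = np.linalg.inv(Q2) @ B.T @ P
    def u_star(x):
        return -K @ x + np.linalg.inv(Q2) @ B.T @ g

Raising the entries of Q1 relative to Q2 trades control effort for tighter tracking of the demanded force and position, which is exactly the error-versus-power balance expressed by the functional (24).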

Fig.5 The given position and practical position before compensation
Fig.6 The given position and practical position after compensation

Comparing Fig.5 and Fig.6, the position error decreases, the practical position curve is closer to the given position curve, and the speed of the system response increases.

6. Conclusion

The force control circuit and the position control circuit in one electro-hydraulic servo system are compensated by unchanged structure theory simultaneously. The compensated state-space model is a simplified double-input, double-output system; the system state-space model is established on the basis of the compensated transfer functions. Utilizing the compensated state-space model and introducing the linear quadratic performance functional, the optimized control formula of the synthesis control of force and position on total state feedback is deduced. The simulation results indicate that the control performance of the system is improved obviously.

References
[1] Changnian Liu, Optimization Design Theory of Hydraulic Servo Systems. Beijing: Metallurgical Industry Press, 1989.
[2] Yongjian Feng and Jing Huang, "Motion Synchronization of Electro-hydraulic Servo System," Machine Tool & Hydraulics, 2007.
[3] Xinmin Wang and Weiguo Liu, "A Study of Suppress Strategy for Extra Force on Control Loading Hydraulic Servo System," Acta Aeronautica et Astronautica Sinica, 2002.
[4] Liwen Wang and Yuxia Cui, "The Self-adaptation Control Model for Eliminating Surplus Force Caused by Motion of Loading System," Acta Armamentarii, 2004.
[5] Changchun Li, Yadong Meng, and Xiaodong Liu, "Neural-network Internal Feedback Control for Electro-hydraulic Servo Loading," Journal of Simulation, 2007.
[6] Yun Zhang and Zhi Liu, "Intelligent Force/Position Control of Robotic Manipulators Based on Fuzzy Logic," Control Theory & Applications, 2007.
[7] Xianlun Wang and Yuhua Li, "Neural network robotic control for a reduced order position/force model," 2008.
[8] Hongren Li, Hydraulic Control Systems. Beijing: National Defence Industry Press, 1981.
[9] Bao Liu, Modern Control Theory. Beijing: China Machine Press.


International Conference on Computer Engineering and Technology

Session 2


Features Selection using Fuzzy ESVDF for Data Dimensionality Reduction

Safaa Zaman and Fakhri Karray
Electrical and Computer Engineering Department, University of Waterloo, Waterloo, Canada
szaman@pami.uwaterloo.ca, karray@pami.uwaterloo.ca

Abstract—This paper introduces a novel algorithm for features selection based on a Support Vector Decision Function (SVDF) and a Forward Selection (FS) approach with a fuzzy inferencing model. In the new algorithm, features are selected stepwise, one at a time, by using the SVDF to evaluate the weight value of each specified candidate feature, and then applying FS with the fuzzy inferencing model to rank the feature according to a set of rules based on a comparison of performance. We have examined the feasibility of our approach by conducting several experiments using five different datasets. The experimental results indicate that the proposed algorithm can deliver a satisfactory performance in terms of classification accuracy, False Positive Rate (FPR), training time, and testing time: using a fast and simple approach, the Fuzzy ESVDF algorithm produces an efficient features set and thus provides an effective solution to the dimensionality reduction problem in general, allows the extraction of perfectly interpretable rules, and improves the overall system performance of many applications.

Keywords—features selection; features ranking; support vector machines; support vector decision function; Sugeno fuzzy inferencing model; Fuzzy Enhancing Support Vector Decision Function (Fuzzy ESVDF).

I. INTRODUCTION

Dimensionality reduction [11, 12] is an important topic in machine learning. It reduces the number of features and removes irrelevant, redundant, or noisy data; it helps us to understand the data and reduces the measurement and storage requirements, thus improving the overall performance of the classifier and overcoming many problems such as the risk of "overfitting." Current dimensionality reduction methods can be categorized into two classes: features extraction and features selection. Features extraction [4, 5] involves the production of a new set of features from the original features in the data through the application of some mapping; the dominant features extraction techniques are Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Features selection [6, 7, 8, 9, 10], instead, selects the "best" subset of the original features. Elimination of useless (irrelevant and/or redundant) features enhances the accuracy of the classification while speeding up the computation; it also simplifies the classification by searching for the subset of features that best classifies the training set.

In terms of features selection, several researchers have proposed identifying important features through wrapper and filter approaches. The wrapper method [13, 14, 15, 16] exploits a machine learning algorithm to evaluate the goodness of features or of a features set. It provides better performance in the selection of suitable features, since it employs the performance of the learning algorithm as an evaluation criterion. The most used wrapper methods are Forward Selection (FS), Backward Elimination (BE), and Genetic search. However, wrapper approaches demand heavy computational resources: they tend to be much slower than feature filters because they must repeatedly call the induction algorithm and must be re-run when a different induction algorithm is used; on the other hand, they can achieve better results than filters, because they are tuned to the specific interaction between an induction algorithm and its training data. The filter method does not use any machine learning algorithm to filter out irrelevant and redundant features; instead, it uses the underlying characteristics of the training data to evaluate the relevance of the features or features set by some independent measures such as distance measures, correlation measures, and consistency measures [17, 18]. The most used techniques in this area are the Chi-Square measure, information gain, and the odds ratio.

This paper introduces a new features selection and ranking method, Fuzzy Enhancing Support Vector Decision Function (Fuzzy ESVDF). The approach integrates the Support Vector Decision Function (SVDF) and the Forward Selection (FS) approach with a fuzzy inferencing model to select the best features set as an application input. It is simple and fast, and, by producing an efficient features set, it improves the overall system performance of many applications.

II. BACKGROUND

This section introduces a brief description of Support Vector Machines (SVMs) and features ranking, the FS approach, and fuzzy inferencing systems.

A. SVMs and Feature Ranking

SVM is a machine learning method that is based on statistical learning theories [20]. The SVM classifier computes a hyperplane that separates the training data into two different sets corresponding to the desired classes. Since there are often many hyperplanes that separate the two classes, the SVM classifier picks the hyperplane that maximizes the margin. Training an SVM is a quadratic optimization problem with bound constraints and linear equality constraints:

    \max_\alpha W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j)   (1)

    subject to \quad C \ge \alpha_i \ge 0, \quad \sum_{i=1}^{n} \alpha_i y_i = 0   (2)

where \alpha_i are Lagrange multipliers, K is a kernel function, and C is the trade-off parameter between error and margin. For testing with a new data point z,

    f = w \cdot \phi(z) + b = \sum_{j=1}^{s} \alpha_{t_j} y_{t_j} K(x_{t_j}, z) + b   (3)

    w = \sum_{j=1}^{s} \alpha_{t_j} y_{t_j} \phi(x_{t_j})   (4)

where b is a bias value. The contribution weight of each feature can be calculated using the Support Vector Decision Function (SVDF), a simple weighted sum of the input features plus a bias:

    F(X) = \sum_{i=1}^{n} W_i X_i + b   (5)

The value of F(X) depends on both the X_i and W_i values: if F(X) is positive, X belongs to the positive class; otherwise, it belongs to the negative class. If W_i is a large positive (negative) value, then the i-th feature is one of the major features for the positive (negative) class, and the absolute value of W_i measures the strength of its contribution to the classification; if W_i is close to zero, the i-th feature does not contribute significantly to the classification.

B. Forward Selection (FS) Approach

The Forward Selection (FS) approach [19] selects the most relevant variables (features/attributes) based on stepwise addition of variables. The FS approach starts with an initial variable selected for the features subset; the predictor variables are then added one by one. If an added variable increases the system evaluation criterion, it stays in the features subset; otherwise, it is removed. The process continues until either all variables are used up or some stopping criterion is met. Through this process, variables are progressively incorporated into larger and larger subsets, yielding nested subsets of variables. The FS approach does not necessarily find the best combination of variables and is not guaranteed to find the optimal solution; however, it results in a combination that comes close to the optimum. In our case, the initial variable (feature) is the feature with the maximum weight value (the weight is calculated using (4) and (5)), and the system evaluation criterion is the system performance (the classification accuracy and the training time).

C. Fuzzy Inferencing Systems

A fuzzy inferencing model (also called a knowledge-based system) is a system that is able to make perceptions and new inferences or decisions using its reasoning mechanism, by interpreting the meaning of new incoming information to the database within the capabilities of the existing knowledge base [21]. An inference model consists of three basic components: 1. the knowledge base, which contains knowledge and expertise in the specific domain — domain-specific facts and heuristics that are useful for solving problems; 2. the database, a short-term memory that contains the current status of the problem, the inference states, and the history of the solutions to date (any new information generated from external sources, such as sensors or human interfacing, is stored in the database); and 3. the inference engine, the driver program of the knowledge-based system — depending on the new data arriving in the database, the inference engine operates on the knowledge in the knowledge base to solve problems and arrive at conclusions. The reasoning mechanism also involves a fuzzification process that transforms input values from crisp numerical values into fuzzy sets, and a defuzzification process that maps output control values from fuzzy sets back to crisp numerical values; defuzzification is required on the output in order to produce a crisp control signal. There are various fuzzy inferencing models; the most common are the Mamdani fuzzy model and the Sugeno fuzzy model [21]. The output of the first model is a fuzzy set obtained by aggregating the qualified fuzzy sets using the min-max compositional rule of inference, whereas the second model (the Sugeno model) generates a crisp consequence for each rule and then obtains the overall crisp output using the weighted average method. The Sugeno model is used in the work of this paper.

III. PROPOSED APPROACH (FUZZY ESVDF)

We propose a new features selection approach based on SVDF and FS with a fuzzy inferencing model. The fuzzy inferencing model is used to accommodate the learning approximation and the small differences between decision-making steps in the FS approach. Fuzzy ESVDF selects the features in a stepwise way, one at a time, while evaluating the weight (rank) of each specified candidate feature using (4) and (5) (a sketch of this weight computation follows below). As shown in Fig. 1, the algorithm starts by picking the three features with the highest weight values from the features set S1 (S1 contains all the features with weight values equal to or greater than one) and putting them in the features set S2, and then calculates the classification accuracy and training time for S2. The feature with the next highest weight value in S1 is then added to S2, and the performance metrics are recalculated. Through this process, two types of comparisons are made: a local fuzzy comparison and a global fuzzy comparison.
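For a linear kernel, \phi(x) = x and the weight vector of Eq. (4) has one entry per feature, so features can be ranked by |W_i| directly. The following is a minimal sketch of that computation (scikit-learn is used here only for illustration; the function name is ours):

    import numpy as np
    from sklearn.svm import SVC

    def svdf_feature_weights(X, y):
        """Rank features by the linear SVDF weight vector of Eq. (4):
        w = sum_j alpha_j y_j x_j, since phi(x) = x for a linear kernel."""
        clf = SVC(kernel="linear").fit(X, y)
        w = clf.coef_.ravel()           # one weight per feature
        order = np.argsort(-np.abs(w))  # largest |W_i| first
        return w, order

    # Toy usage: X is (n_samples, n_features), y in {-1, +1}
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = np.sign(X[:, 2] + 0.1 * rng.normal(size=100))  # feature 2 matters
    w, order = svdf_feature_weights(X, y)
    print(order[0])  # most informative feature index (likely 2)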

Figure 1. The Fuzzy ESVDF algorithm:

    [1] Calculate the global accuracy:
        Compute the accuracy and training time of all 41 features
        (Accuracy41, Train41).
        Compute the accuracy and training time of the features with
        weight >= 1 (Accuracy, Train).
        If Accuracy41 >= Accuracy: Global = Accuracy;
        else: Global = Accuracy41.
    [2] Create the features set:
        Sort the features set S1 in descending order of weight value.
        Pick the first three features as an initial features set S2.
        Compute the accuracy and training time of S2 (Accuracy1, Train1).
        If Global <= Accuracy1: continue_loop = 0 (exit);
        else: continue_loop = 1; count_loop = 0.
        While continue_loop == 1 and count_loop <= length(S1):
            Add the next feature f(i) from S1 into S2.
            Compute the accuracy and training time of S2
            (Accuracy2, Train2).
            If Accuracy2 < Accuracy1 and Train2 > Train1:
                Remove f(i) from S2; count_loop = count_loop + 1.
            Else:
                Accuracy1 = Accuracy2; Train1 = Train2;
                count_loop = count_loop + 1.
                If Global <= Accuracy1: exit the loop.
    [3] The selected features set = S2.

The local fuzzy comparison compares the performance of the features set S2 with the performance of the previous features set S2 (i.e., before the candidate feature was added). The comparison is ranked according to a fuzzy system that takes two inputs: the percentage of increase or decrease in the training time, and the percentage of increase or decrease in the accuracy (computed as the accuracy and time obtained with the current selected features minus those obtained with the previous selected features). Each input variable is represented by three fuzzy sets — "increase," "same," and "decrease" — with corresponding membership functions, as shown in Fig. 2. "Increase" refers to the case where the percentage of change is slightly positive, meaning the training time or accuracy slightly increases after a feature is added; "same" refers to the case where the training time and accuracy remain almost unchanged. The system has one output ranging from "0" to "1," where "0" represents a non-important feature and "1" represents an important feature. The knowledge base is implemented by means of "if-then" rules; nine rules are needed to describe the system and rank each feature as "important" or "non-important":

1. If the training time increases and the accuracy increases, the feature is important.
2. If the training time increases and the accuracy is unchanged, the feature is non-important.
3. If the training time increases and the accuracy decreases, the feature is non-important.
4. If the training time is unchanged and the accuracy increases, the feature is important.
5. If the training time is unchanged and the accuracy is unchanged, the feature is non-important.
6. If the training time is unchanged and the accuracy decreases, the feature is non-important.
7. If the training time decreases and the accuracy increases, the feature is important.
8. If the training time decreases and the accuracy is unchanged, the feature is important.
9. If the training time decreases and the accuracy decreases, the feature is non-important.

If the feature is ranked important, it is kept in the features set S2; otherwise, the added feature is ignored (a small executable sketch of this Sugeno model is given below).

The global fuzzy comparison compares the classification accuracy of the features set S2 with the global accuracy, which is equal to the minimum of two values: the accuracy of all the features and the accuracy of the features set S1. It is ranked according to a fuzzy system that takes only one input variable — the percentage of change in the accuracy (the selected features set accuracy minus the global accuracy) — represented by three fuzzy sets, "increase," "same," and "decrease," with corresponding membership functions, as shown in Fig. 3. The system has one output ranging from "0" to "1," where "0" represents a loop to continue and "1" represents a loop to stop. Only three rules are needed to describe the system and decide whether or not to keep adding features to S2:

1. If the accuracy increases, then stop adding features.
2. If the accuracy is unchanged, then stop adding features.
3. If the accuracy decreases, then continue adding features.

In the global fuzzy comparison, if the classification accuracy of the features set S2 is equal to or greater than the global accuracy, the algorithm stops and S2 becomes the selected features set.
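The following is a minimal executable sketch of a zero-order Sugeno model for the local comparison; the triangular membership breakpoints are assumptions (the paper's actual membership functions are shown in Fig. 2), and the rule consequents follow the nine rules above:

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def local_comparison(dt, da):
        """Inputs: percentage changes in training time (dt) and accuracy
        (da). Output in [0, 1]; 1 = important feature."""
        time_sets = {"decrease": tri(dt, -100, -10, 0),
                     "same":     tri(dt, -5, 0, 5),
                     "increase": tri(dt, 0, 10, 100)}
        acc_sets = {"decrease": tri(da, -100, -10, 0),
                    "same":     tri(da, -5, 0, 5),
                    "increase": tri(da, 0, 10, 100)}
        # Rule consequents: 1 = important, 0 = non-important
        rules = {("increase", "increase"): 1, ("increase", "same"): 0,
                 ("increase", "decrease"): 0, ("same", "increase"): 1,
                 ("same", "same"): 0, ("same", "decrease"): 0,
                 ("decrease", "increase"): 1, ("decrease", "same"): 1,
                 ("decrease", "decrease"): 0}
        num = den = 0.0
        for (t, a), out in rules.items():
            w = min(time_sets[t], acc_sets[a])   # rule firing strength
            num += w * out
            den += w
        return num / den if den else 0.0

    print(local_comparison(dt=-8.0, da=2.0))  # faster and more accurate -> ~1

The weighted-average step at the end is exactly the Sugeno defuzzification described in Section II.C.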

IV. EXPERIMENTS AND RESULTS

To evaluate the performance of our proposed approach, we chose five different datasets: four are taken from the UCI Irvine Machine Learning Repository [22], and the fifth is the KDD Cup 1999 data [23]. In this section, we first describe the content of the different datasets and the experimental settings, followed by the experimental results and a discussion.

A. Datasets Description

The first dataset, the SPECT Heart Dataset, describes the diagnosis of cardiac Single Proton Emission Computed Tomography (SPECT) images [22]. It contains 267 samples, of which 55 are normal (20.6%) and 212 are abnormal (79.4%). Each instance is characterized by 44 attributes.

The second dataset, the Wisconsin Diagnostic Breast Cancer (WDBC) dataset, describes characteristics of the cell nuclei present in an image as either benign or malignant [22]. It contains 569 samples, of which 357 are benign samples (62.74%) and 212 are malignant samples (37.26%). Each instance is characterized by 30 real-value attributes.

In the third dataset, the Hill and Valley Dataset, each record represents 100 points on a two-dimensional graph; when plotted in order (from 1 through 100) on the Y coordinate, the points create either a Hill or a Valley [22]. It contains 1212 samples, of which 612 are Hill samples (50.5%) and 600 are Valley samples (49.5%). Each instance is characterized by 100 real-value attributes.

The fourth dataset, the Wisconsin Breast Cancer (WBC) dataset, also describes characteristics of the cell nuclei present in an image as either benign or malignant [22]. It contains 699 samples, of which 458 are benign samples (65.52%) and 241 are malignant samples (34.48%). Each instance is characterized by nine attributes.

The last dataset, the KDD Cup 1999 data [23], contains TCP/IP dump data for a network, produced by simulating a typical U.S. Air Force LAN in order to configure and evaluate intrusion detection systems. It contains 6000 samples, of which 3000 are normal samples (50%) and 3000 are attack samples (50%). Each instance is characterized by 41 attributes.

B. Experimental Settings

Our experiment is divided into two main steps. In the first step, we apply the proposed algorithm (Fuzzy ESVDF) to select the appropriate features set for each dataset; in the second step, we validate the results by evaluating the selected features with NN and SVM classifiers. We carried out five validation experiments for each dataset (SPECT Heart, WDBC, Hill and Valley, WBC, and IDS), with different (training %)/(testing %) ratios: 50/50, 60/40, 70/30, 40/60, and 30/70. Each experiment was repeated ten times with a random selection of the training and testing data; for example, in one setting about 40% of the samples were randomly selected as the testing dataset and the remaining 60% were used as the training dataset, and the proposed algorithm was applied ten times with that split.

Figure 2. Sugeno fuzzy inferencing model for the local comparison (inputs: time and accuracy; output: feature rank).
Figure 3. Sugeno fuzzy inferencing model for the global comparison (input: accuracy; output: action).

C. Experimental Results

Fuzzy ESVDF was applied to the different datasets (SPECT Heart dataset, WDBC dataset, Hill and Valley dataset, WBC dataset, and IDS dataset) to select the best features set for the application. To evaluate our approaches, we used NN and SVM classifiers to classify a record as being either zero or one (binary classification). The different datasets are compared with respect to different performance indicators: number of features, classification accuracy, False Positive Rate (FPR is the frequency with which the application reports malicious activity in error), training time, and testing time.

The results of the SVM classifier for Fuzzy ESVDF on all datasets are presented in Table 1. For the SPECT Heart dataset, the number of features is reduced from 44 to 5 (cut nearly 88.6 %), and the classification accuracy based on the selected features set is better than the classification accuracy for the complete features set. For the WDBC dataset, the number of features is reduced from 30 to 4 (cut nearly 86.7 %); the classification accuracy, FPR, training time, and testing time in both experiments (using the four features selected by the Fuzzy ESVDF algorithm, and using the entire 30 features) are nearly the same. For the Hill and Valley dataset, the number of features is reduced from 100 to 11 (cut nearly 89 %); there is an obvious improvement in training time and testing time for the selected 11 features (attributes) with respect to the complete 100 features set. For the WBC dataset, the number of features is reduced from 9 to 3 (cut nearly 66.7 %), with no significant differences in classification accuracy, FPR, training time, and testing time between using the selected three features and all nine features. Finally, for the IDS dataset, the number of features is reduced from 41 to 7 (cut nearly 82.9 %); there is an obvious improvement in FPR, training time, and testing time, while the classification accuracy for the selected seven features does not show significant differences to that of using all 41 features.

TABLE I. COMPARISON OF DIFFERENT DATASETS USING SVMS (for each dataset the selected and complete feature sets -- SPECT Heart: 5/44, WDBC: 4/30, Hill and Valley: 11/100, WBC: 3/9, IDS: 7/41 -- are compared by accuracy, FPR, training time, and testing time)

The results of the NN classifier for Fuzzy ESVDF on all datasets are presented in Table 2. For the SPECT Heart dataset, Table 2 shows a significant improvement in classification accuracy; training time and testing time are also improved. For the WDBC dataset, the NN classifier shows significant improvement in training and testing time, and the classification accuracy and FPR in both experiments are nearly the same. For the Hill and Valley dataset, the classification accuracies for both features sets (the 11 selected features and the entire 100 features) are nearly the same, and Table 2 shows a significant improvement in training time for the selected features. For the WBC dataset, the NN classifier shows obvious improvement in training and testing time. Finally, for the IDS dataset, Table 2 shows no significant improvement in classification accuracy, but the training time and testing time based on the selected features are clearly reduced.

TABLE II. COMPARISON OF DIFFERENT DATASETS USING NNS (the same feature sets and performance indicators as Table 1, evaluated with the NN classifier)

D. Discussion

As shown in Table 1 and Table 2, there is a dramatic reduction in the number of features for all datasets after the application of Fuzzy ESVDF. The classification accuracy and FPR are also improved, and the classification accuracy, FPR, training time, and testing time based on the selected features set are very near to those of the system based on the complete features set. On the whole, the overall system performance is improved based on the selected features set. Table 3 compares execution times of the Fuzzy ESVDF approach for the five different datasets.
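Since the tables report accuracy and FPR for each feature set, the following minimal sketch shows how these two indicators can be computed from a binary classifier's predictions; the function and variable names are illustrative assumptions, not the authors' code.

    def accuracy_and_fpr(y_true, y_pred):
        """Classification accuracy and false positive rate for
        binary labels (0 = normal record, 1 = malicious record)."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        acc = (tp + tn) / len(y_true)
        # FPR: fraction of normal records flagged as malicious in error
        fpr = fp / (fp + tn) if (fp + tn) else 0.0
        return acc, fpr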

V. CONCLUSIONS AND FUTURE WORKS

SVDF is used to rank the input features by giving a weight value to each of them. However, by using the weights only, we are unable to specify the appropriate features set for a detection process, because selecting the features with the highest rank values (weights) cannot guarantee that combining these features creates the best features set based on the correlations among candidate features. Also, limiting the number of selected features to a specific value (e.g., 6 or 11, as proposed in previous works [14, 15, 16]) as an indicator for highest rank value may affect system performance, which may obstruct the modification and maintenance processes and impede the use of this approach in some types of applications.

Our new approach overcomes these limitations by proposing a Fuzzy Enhanced Support Vector Decision Function (Fuzzy ESVDF) for features selection based on SVMs and FS with the fuzzy inferencing model. The Fuzzy ESVDF approach improves the classification process by evaluating features weights through SVDF and studying the correlation between features through the application of the Forward Selection (FS) approach. The fuzzy inferencing model is used to accommodate the learning approximation and small differences in the decision making steps of the FS approach. To evaluate the proposed approach, we used SVM and NN classifiers with five different datasets (SPECT Heart dataset, WDBC dataset, Hill and Valley dataset, WBC dataset, and IDS dataset). The experimental results demonstrate that our approach can reduce training and testing time while retaining high classification accuracy.

The proposed approach (Fuzzy ESVDF) has many advantages that make it attractive for many features selection applications. First, evaluating features weights through SVDF and studying the correlation between these features through the application of the FS algorithm enable the approach to select efficiently the appropriate features set for the classification process; it cannot guarantee the optimal solution in terms of minimizing the number of features to the least number, but it gives a satisfactory features number with excellent performance results. Second, the algorithm is simple and efficient and does not require parameters initialization, facilitating the retention or modification of the system design and allowing this model to be used in a real time environment. Third, Fuzzy ESVDF is considered to be a features selection approach regardless of the type of classifier used, as we showed through our experiments. Finally, it combines good effectiveness with high efficiency; satisfactory performance can be obtained much more easily than with other approaches, and the advantage becomes more conspicuous for many applications, as our experiments show. Moreover, employing a reduced number of features selected by SVMs may be more advantageous than other conventional features selection methods [13, 24, 25]: the selected features subset is representative and informative and can be used to replace the complete features set, so it provides an effective solution to the dimensionality reduction problem in general.

TABLE III. EXECUTION TIME COMPARISON FOR THE DIFFERENT DATASETS

Dataset          No. of Complete Set   No. of Selected Set   Execution Time (sec)
SPECT Heart              44                     5                 125.725
WDBC                     30                     4                 134.215
Hill and Valley         100                    11                3000.792
WBC                       9                     3                   ...
IDS                      41                     7                   ...

Comparing the different datasets according to execution time, Table 3 shows that the proposed approach, Fuzzy ESVDF, becomes slow when the number of features increases to 100 features. In the case of the WBC dataset, the number of features is nine, and when the number of features triples (the case of the WDBC dataset), the execution time increases greatly (more than doubling); when the number of features is around 50 features the algorithm is reasonably fast, and when this number doubles (to 100 features) the execution time more than triples. However, this amount of time does not depend on the number of features only; it also depends on how fast the SVMs are, because the ranking approach depends on the system performance (classification accuracy and training time) that is calculated by SVMs, and the SVDF used in this approach also depends on SVMs. Thus, the efficiency of Fuzzy ESVDF depends on how well SVMs are able to classify the dataset.

In summary, the experimental results demonstrate the feasibility of the proposed approach. It produces an efficient features subset and gives the best performance in terms of training and testing time while retaining high classification accuracy regardless of the classifier used, making this approach a suitable features selection method for many applications. Future work will involve investigating the possibility and feasibility of implementing our approach in real time applications. We are also planning to improve our approach and decrease its execution time.
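To make the ranking-plus-correlation idea concrete, here is a minimal sketch of an SVDF-style forward selection loop. The use of scikit-learn's LinearSVC, the cross-validated accuracy criterion, and all names are our illustrative assumptions, not the authors' implementation (which also applies a fuzzy inferencing model to the decision steps).

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    def esvdf_like_selection(X, y, tol=1e-3):
        # Rank features by the magnitude of a linear SVM's weight vector
        # (an SVDF-like score), best first.
        svm = LinearSVC(dual=False).fit(X, y)
        ranking = np.argsort(-np.abs(svm.coef_[0]))
        selected, best_acc = [], 0.0
        for f in ranking:
            candidate = selected + [int(f)]
            acc = cross_val_score(LinearSVC(dual=False),
                                  X[:, candidate], y, cv=5).mean()
            # Keep the feature only if the combination helps, i.e. respect
            # correlations among candidate features instead of raw rank.
            if acc > best_acc + tol:
                selected, best_acc = candidate, acc
        return selected, best_acc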

REFERENCES

[1] S. Zaman and F. Karray, "Feature Selection for Intrusion Detection System Based on Support Vector Machine," 6th Annual IEEE Consumer Communications & Networking Conference (IEEE CCNC 2009), 10-13 January 2009. Unpublished.
[2] S. Zaman and F. Karray, "Fuzzy ESVDF Approach for Intrusion Detection Systems," The IEEE 23rd International Conference on Advanced Information Networking and Applications (AINA-09), May 26-29, 2009. Unpublished.
[3] A. K. Jain, R. P. W. Duin and J. Mao, "Statistical Pattern Recognition: A Review," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 1, January 2000, Page(s): 4-37.
[4] A. Webb, Statistical Pattern Recognition, second ed., Wiley, 2002.
[5] F. Karray and C. De Silva, Soft Computing and Intelligent Systems Design: Theory, Tools and Applications, first edition, 2004.
[6] A. Smola, "Statistical Learning Theory and Support Vector Machines."
[7] S. Mukkamala and A. Sung, "Detecting Denial of Service Attacks Using Support Vector Machines," The 12th IEEE International Conference on Fuzzy Systems, 25-28 May 2003, Page(s): 1231-1236.
[8] "A Framework for Countering Denial of Service Attacks," 2004 IEEE International Conference on Systems, Man and Cybernetics, Page(s): 3273-3278.
[9] V. Golovko, L. Vaitsekhovich, P. Kochurko and U. Rubanau, "Dimensionality Reduction and Attack Recognition Using Neural Network Approaches," International Joint Conference on Neural Networks, 2007, Page(s): 2734-2739.
[10] H. Gao, H. Yang and X. Wang, "Ant Colony Optimization Based Network Intrusion Feature Selection and Detection," The Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005.
[11] H. Almuallim and T. Dietterich, "Learning Boolean Concepts in the Presence of Many Irrelevant Features," Artificial Intelligence, vol. 69, nos. 1-2, 1994, Page(s): 279-305.
[12] J. Basak, R. De and S. K. Pal, "Unsupervised Feature Evaluation: A Neuro-Fuzzy Approach," IEEE Trans. Neural Networks, vol. 11, March 2000, Page(s): 366-376.
[13] W. Krzanowski, "Selection of Variables to Preserve Multivariate Data Structure Using Principal Components," Applied Statistics, vol. 36, 1987, Page(s): 22-33.
[14] A. Tamilarasan, S. Mukkamala, A. Sung and K. Yendrapalli, "Feature Ranking and Selection for Intrusion Detection Using Artificial Neural Networks and Statistical Methods," 2006 International Joint Conference on Neural Networks (IJCNN'06), July 16-21, 2006, Page(s): 4754-4761.
[15] A. Hofmann, T. Horeis and B. Sick, "Feature Selection for Intrusion Detection: An Evolutionary Wrapper Approach," 2004 IEEE International Joint Conference on Neural Networks, 25-29 July 2004, Page(s): 1563-1568.
[16] Khaja Mohammad Shazzad and Jong Sou Park, "Optimization of Intrusion Detection through Fast Hybrid Feature Selection," The Sixth International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT'05), 05-08 Dec. 2005, Page(s): 264-267.
[17] M. Law, M. Figueiredo and A. K. Jain, "Simultaneous Feature Selection and Clustering Using Mixture Models," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 9, September 2004, Page(s): 1154-1166.
[18] B. Krishnapuram, A. Hartemink, L. Carin and M. Figueiredo, "A Bayesian Approach to Joint Feature Selection and Classifier Design," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 9, September 2004, Page(s): 1105-1111.
[19] H. Wei and S. Billings, "Feature Subset Selection and Ranking for Data Dimensionality Reduction," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 1, January 2007, Page(s): 162-166.
[20] I. Fodor, "A Survey of Dimension Reduction Techniques," Technical Report UCRL-ID-148494, Center for Applied Scientific Computing, Lawrence Livermore Nat'l Laboratory, June 2002.
[21] P. Cunningham, "Dimension Reduction," Technical Report UCD-CSI-2007-7, August 8th, 2007.
[22] UCI Machine Learning Repository, http://archive.ics.uci.edu/ml/
[23] MIT Lincoln Laboratory, Information Systems Technology, http://www.ll.mit.edu/mission/communications/ist/index.html
[24] P. Mitra, C. A. Murthy and S. K. Pal, "Unsupervised Feature Selection Using Feature Similarity," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 3, March 2002, Page(s): 301-312.
[25] M. Dash, H. Liu and H. Motoda, "Consistency Based Feature Selection," Proceedings of the 4th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), Page(s): 98-109.

PDC: Propagation Delay Control Strategy for Restricted Floating Sensor Networks

Liu Xiaodong
College of Information Science and Technology, Ocean University of China, Qingdao, China
liuxiaodong.ouc@gmail.com

Abstract—Restricted Floating Sensor (RFS), proposed by Yunhao Liu, is an important model in offshore data collection. To adjust propagation delay on request is required by RFS networks. In this paper we propose a Propagation Delay Control (PDC) strategy to control the transmission time. This strategy gets the Waiting Time (WT) by modifying the First Weight (FW) and the Second Weight (SW) dynamically depending on orders from users. The strategy can modify the transmission delay adaptively depending on users together with the changeful condition of wireless links. The experiment shows our work succeeds in changing propagation delay by orders.

Keywords: Bit error rate, Propagation delay, Harbor, Sensor network

(This project is funded by the Chinese National High Technology Research and Development Plan (863 program) under Grant No. 2006AA09Z113, and by the Key Project of Chinese National Programs for Fundamental Research and Development (973 program) under Grant No. 2006CB303000.)

I. INTRODUCTION

There are significant interests in analyzing the siltation of the estuary and harbor, and there is a need for highly precise, real-time data in many offshore monitoring applications. H. Harbor, the second largest harbor for coal transportation (6.7 million tons per year) in China, currently suffers from the increasingly severe problem of silt deposition along its sea route (19 nautical miles long). The sea route has always been threatened by the silt from the shallow sea area, and monitoring sea depth costs this harbor more than 18 million US dollars per year. The siltation of H. Harbor is mainly affected by tide and wind blow, and the highly variable nature of wind brings more intensive effects: the siltation of the harbor can change in a moment. For example, records show that strong winds with wind forces of 9 to 10 on the Beaufort scale hit H. Harbor from Oct. 10th to Oct. 13th in 2003. The storm surge brought 970,000 m3 of silt to the sea route, which suddenly decreased the water depth from 9.5 m to 5.7 m and blocked most of the ships weighing more than 35 thousand tons.

In [1], Yunhao Liu proposed a Restricted Floating Sensor (RFS) model. By locating such sensors, the sea depth can be estimated efficiently without the help of extra ranging devices. In bad weather a shorter propagation delay is needed to ensure the safety of the channels; on the other hand, a longer response interval is necessary to reserve energy in better weathers. In this situation, we propose a Propagation Delay Control (PDC) strategy for RFS networks. The experiment shows our work succeeds in changing propagation delay by orders.

We organize this paper as follows: in Section II we discuss the PDC strategy; Section III describes our experiment and discusses the results we obtain from it; finally, the conclusion is given in Section IV.

II. DESIGN OF PDC

In this section, we mainly describe the design of PDC in detail. Our experiment site lies in the east coastal area of Qingdao, as shown in Fig. 1 a); the topology model is shown in Fig. 1 b), and the architecture of PDC is shown in Fig. 1 c).

Throughout this paper, we denote D as the length of the packet data item and H as the length of the header. Let vij denote a node of the network, where i stands for the layer and j for the sequence number within the same layer, and let V = {vij}. Let S(vi0j0) = {vij | vij in V, i = i0 + 1, j in Ji0j0} denote the aggregation of all child nodes of vi0j0 in the RFS network, where Ji0j0 contains all the sequence numbers of these nodes. In this paper we assume all nodes transmit data at a fixed frequency, and we let p stand for the fixed transferring rate of nodes.

Firstly we give the design of the FW Module. For each node, according to BER, the definition of the node's weight for transmission time (weight, for short) is given as follows:

w(vij) = 1 / (1 - BER)^(H+D)    (1)

where w(vij) is the average number of packets required for successfully transferring D bits of data. Then we give the definition of First Weight as follows:

Definition 1: for any node vi0j0 in the RFS network, the First Weight (FW) of vi0j0 is specified by the maximum FW of its child nodes plus its own weight:

fw(vi0j0) = max{ fw(vi0+1,j) : j in Ji0j0 } + w(vi0j0)    (2)

So it is straightforward that the First Weight of the leaf nodes is:

fw(vdepth,j) = w(vdepth,j)    (3)
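As a concrete illustration of Eq. (1): a packet of H header bits and D data bits survives a link with bit error rate BER with probability (1 - BER)^(H+D), so the expected number of transmissions is its reciprocal. The sketch below assumes illustrative packet sizes; they are not taken from the paper.

    def weight(ber: float, header_bits: int, data_bits: int) -> float:
        # Eq. (1): expected number of packet transmissions needed to
        # deliver one packet intact over a link with bit error rate `ber`.
        return 1.0 / (1.0 - ber) ** (header_bits + data_bits)

    # Example: an 11-byte header plus 29-byte payload at BER = 1e-4
    # (packet sizes are illustrative assumptions).
    print(weight(1e-4, 11 * 8, 29 * 8))   # ~1.03 packets on average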

Figure 1. PDC strategy for RFS networks: a) our experiment site, b) the topology model for RFS networks, c) the architecture of the PDC strategy (BER feeds the FW model; T and dT feed the AT model; the distance d feeds the SW model).

Then we design the SW Module. Because of their restricted motility, nodes in an RFS network float within a restricted area, so the topology is variable. Thus an amended method is needed to deal with the variability. Firstly we give a definition which is important in describing the variability: let dmin(vi0+1,j, vi0j0) denote the shortest distance ever detected between vi0+1,j and vi0j0, and let d(vi0+1,j, vi0j0) denote their current distance.

Definition 2: for any node vi0j0 in the RFS network, the Second Weight (SW) of vi0j0 is specified by the summation of its First Weight and the weighted maximum SW of its child nodes:

sw(vi0j0) = tw(vi0j0) + fw(vi0j0)    (4)

where

tw(vi0j0) = max{ [d(vi0+1,j, vi0j0) / dmin(vi0+1,j, vi0j0)] * sw(vi0+1,j) : j in Ji0j0 }

It is straightforward that tw(vdepth,j) = 0, so for leaf nodes sw(vdepth,j) = fw(vdepth,j). The transmission time of the nodes will be worked out according to the FW and SW.

In the PDC strategy, each node transmits its SW to its parent node, and the sink node finally receives sw(v1,j) for each subtree. Let Lj denote the subtree with root v1,j. After getting the upper limit T of the propagation delay, the sink works out the limit of propagation delay of Lj:

dT(Lj) = T - sw(v1,j) / p    (5)

Then a message carrying the information of dT(Lj) and sw(v1,j) is passed to the nodes of subtree Lj. For each node vik in Lj, the following expression is used to work out its own transmission (adjust) time:

at(vik) = [sw(vik) / sw(v1,j)] * dT(Lj) + fw(vik) / p    (6)

In particular, the adjust time of v1,j is

at(v1,j) = dT(Lj) + fw(v1,j) / p    (7)

Theorem 1: the propagation delay of Lj is less than T.

Proof: for the root node v1,j, from (5) and (7) we have

at(v1,j) = T + (fw(v1,j) - sw(v1,j)) / p

From (4) we know sw(v1,j) = tw(v1,j) + fw(v1,j) and tw(v1,j) = max{...} >= 0, so that

fw(v1,j) - sw(v1,j) = -(tw(v1,j)) <= 0

and hence at(v1,j) <= T. For any other node vik in the subtree (i > 1), sw(vik) / sw(v1,j) < 1 and, with the limitation of propagation delay, fw(vi,k) < fw(vi-1,j) and sw(vi,k) < sw(vi-1,j); therefore at(v1,j) > at(vik) for i > 1, and every node finishes within T.

As shown in (6), at(vik) is affected by many factors, among which T is the most dominating.
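The following compact sketch evaluates Eqs. (1)-(6) over a small sensor tree, following the definitions above. The node structure, field names, parameter values, and the example topology are our illustrative assumptions.

    class Node:
        def __init__(self, ber, d=1.0, dmin=1.0, children=None):
            # link BER and current/shortest-detected distance to parent
            self.ber, self.d, self.dmin = ber, d, dmin
            self.children = children or []

    H, D, p = 88, 232, 250.0    # header bits, data bits, transfer rate (assumed)

    def w(n):                   # Eq. (1): expected packets per delivery
        return 1.0 / (1.0 - n.ber) ** (H + D)

    def fw(n):                  # Eqs. (2)-(3): bottom-up first weight
        return w(n) + (max(fw(c) for c in n.children) if n.children else 0.0)

    def sw(n):                  # Eq. (4): second weight with distance correction
        tw = max(((c.d / c.dmin) * sw(c) for c in n.children), default=0.0)
        return tw + fw(n)

    def at(n, root_sw, dT):     # Eq. (6): per-node adjust (transmission) time
        return (sw(n) / root_sw) * dT + fw(n) / p

    root = Node(1e-4, children=[Node(2e-4, d=1.3, dmin=1.0), Node(1e-4)])
    T = 30.0                    # user-requested delay bound (seconds, assumed)
    dT = T - sw(root) / p       # Eq. (5)
    print(at(root, sw(root), dT))   # the root gets the longest adjust time, <= T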

The sink only needs to send a message containing the new limitation of propagation delay (denoted by t, with t < T) when there is a need to change the propagation delay. From (6) we know dT'(Lj) < dT(Lj), and the transmission time of each node can be shortened by [sw(vik) / sw(v1,j)] * (dT(Lj) - dT'(Lj)); each node can work out its new adjust time easily by the same expression. This ensures the propagation delay stays within the limitation. The propagation delay should be shortened when a storm comes; in the same way, we can change the propagation delay back to normal to decrease the energy consumption when the storm is over.

From Theorem 1 we know the adjust time of the root node is the longest in its subtree. For nodes on different layers, because of differences in BER, FW, degree and so on, the transmission time of each node will be different, so different nodes won't send data at the same time; this decreases the probability of collision and eases network congestion. Moreover, under the time-sensitive requirement, all nodes just transfer the data gathered within the limitation of delay instead of waiting for every child node. With limited power resources [2], it is vital for RFS nodes to minimize energy consumption during radio communication to extend the lifetime of the network. Compared with other transmission time control mechanisms, all of these are helpful to dampen congestion, reduce the number of collisions (resulting in fewer retransmissions), and converge the propagation delay to a certain set point. Except for T, all information is stored in the memory in this strategy.

III. EXPERIMENT

In this section, we evaluate the performance of the PDC strategy. We use TelosB motes and TinyOS as our development basis, and we take LEPS, introduced by Sun Limin et al. [4], as the routing protocol. The current system consists of 23 sensor nodes deployed in the field, reporting sensing data continuously to the base station; however, the complete system is designed to scale to hundreds of sensors covering the sea area off Qingdao. We are deploying the working system in Qingdao, China. The data centre of Ocean Sense has been launched and most of our data can be seen on the internet.

Because the condition of the channel between nodes is variable in different weathers, we compare the performance of PDC in terms of different wind scale and wave height. These conditions are shown in Table I.

TABLE I. CONDITIONS OF WEATHERS

Conditions   wind scale   wave height (m)
one          1-2          0.5
two          8-9          3

Figure 2 displays the adjustability of the response time on request (requested response time versus interval of packets); its panels were recorded in the different weather conditions. From the figure we get that the adaptability in condition one is better than in the other, worse conditions; this is mainly because of the interval between nodes caused by bad weathers. From (7) we know the transmission time of each node can be adjusted in these different conditions, and from the discussion we know (7) is suitable for all of them.

Figure 2. The adjustability of response time on request

IV. CONCLUSION

In this paper we proposed a Propagation Delay Control strategy for large scale RFS networks. This algorithm modifies the transmission time of nodes depending on the limitation of delay afforded by users. The experiment shows our work succeeds in changing propagation delay by orders. As discussed in the introduction, there are various conditions leading to changes of the propagation delay, and the strategy is efficient in these different conditions. Our future work will focus on designing an improved strategy for large-scale RFS networks.

REFERENCES

[1] Zheng Yang, Mo Li and Yunhao Liu, "Sea Depth Measurement with Restricted Floating Sensors," The 28th IEEE Real-Time Systems Symposium, Tucson, 2007: 469-478.
[2] V. Raghunathan, C. Schurgers, S. Park, et al., "Energy Aware Wireless Microsensor Networks," IEEE Signal Processing Magazine, 2002, 19(2): 40-50.
[3] Tian He, Brian M. Blum, John A. Stankovic and T. Abdelzaher, "AIDA: Adaptive Application Independent Aggregation in Sensor Networks," ACM Transactions on Embedded Computing Systems, 3(2): 426-457.
[4] Limin Sun, Tingxin Yan, et al., "Principle and Performance Evaluation of Routing Protocol in TinyOS," Computer Engineering, 2007, 33: 112-114.
[5] J. D. Spragins, J. L. Hammond and K. Pawlikowski, Telecommunications: Protocols and Design, Boston: Addison Wesley Publishing Company, 1991.

Fast and high quality temporal transcoding architecture in the DCT domain for adaptive video content delivery

Vinay Chander (vinay87@gmail.com), Aravind Reddy (aravind_k_iiitm@yahoo.co.in), Shriprakash Gaurav (gaurav.agnos@gmail.com), Nishant Khanwalkar (nis.iiitm@gmail.com), Manish Kakhani (manishkakhani@gmail.com), Shashikala Tapaswi (stapaswi@iiitm.ac.in)
Department of Information Technology, Indian Institute of Information Technology and Management, Gwalior, INDIA

Abstract

We propose an efficient temporal transcoding architecture in which motion change is considered. Our architecture operates entirely in the DCT domain, thus avoiding the computationally expensive operations of inverse DCT and DCT, unlike some architectures which operate partially or fully in the pixel domain. The macro block coding types are re-judged to reduce drift errors, and the re-encoding errors are also minimized. For motion vector composition, both bilinear interpolation vector selection (BIVS) and forward dominant vector selection (FDVS) are used. We have implemented the algorithms and carried out experiments for Mpeg-1 video sequences, and the results are found to be promising.

Keywords: Video transcoding, DCT domain, motion based frame skipping, motion vector composition and prediction error re-calculation

1. Introduction

The eminence of digital video on the internet is increasing by the day, and video communication over the Internet is becoming increasingly popular today. Video content delivery has many issues associated with it. This is due to the large variety of end devices that exist in today's internet and also because of the varying constraints of the channels that make up networks [7]. Transmission of videos over networks, especially over lossy channels (wireless links, low bit rate two way communication links, etc.), has always been a problem due to reasons like high bandwidth requirement, huge file sizes and the large variety of end terminals with different constraints. An overview of the other approaches for video content adaptation can be found in [1, 2 and 6]. In this paper, we attempt to help with the problem of video content delivery to heterogeneous end devices over channels of varying capacities by proposing a fast and efficient temporal transcoding architecture; we limit ourselves to solving the problem of bandwidth and end terminal frame rate requirements in the context of video content distribution over the internet.

A video may be considered as a stream of frames played in quick succession with very short time intervals, which is viewed and interpreted by the end user(s). In temporal transcoding of videos, pre-encoded frames may be dropped from the incoming sequence to freely adjust the video to meet network and client requirements [4, 5 and 14]. The short time interval between consecutive frames means that the contents of consecutive frames should be very close to each other; this fact brings about an important feature which is exploited to reduce the transmitted data size, and which can be made use of by dropping the less important frames. Since the final user of the video stream is a human, the biological features of the human visual system should be considered, and the priority in preserving data should be based on these features. Motion activity gives a measure of the motion in a frame and is defined as the sum of the motion vector components in that frame; we use the modified definition of motion activity [11], which is found to be better. If the motion activity of a frame is larger than a given threshold, the frame cannot be skipped, since it has considerable motion, and transcoding this frame improves the quality and smoothness of the video sequence delivered.

Thus we propose an architecture which adapts the incoming compressed video to the end terminal's frame rate constraint as well as the bandwidth of the available channel, by using a motion based frame skipping algorithm [5, 11] to skip frames. Our architecture operates completely in the compressed domain [3, 12].

When frames are dropped, the motion vectors that refer to the dropped frames become invalid and need to be recalculated with respect to their new reference frames that are part of the transcoded bit stream; we calculate the new motion vectors using motion vector composition. Also, the quantized DCT coefficients of the residual signal of non-skipped frames become invalid, as they may refer to reference frame(s) which no longer exist in the transcoded bit-stream. Thus, it becomes necessary to re-compute the new set of quantized DCT coefficients with respect to the past reference frame(s) that will act as reference in the transcoded bit-stream. The pixel-domain approach for re-calculating the prediction residual involves high computation due to the computationally expensive operations of inverse DCT, inverse quantization, DCT and re-quantization, in which re-encoding errors are also incurred. So, in this paper we present a fast frame skipping transcoding architecture that operates entirely in the DCT domain, which accelerates the transcoding process: we compute the quantized DCT coefficients for non-skipped frames entirely in the DCT domain, by processing the quantized DCT coefficients available from the incoming stream, even in the case of MC macro blocks. This is achieved using the block translation method [13, 15], thus further minimizing the re-encoding errors; this differs from [4]. The smoothness and quality of the transcoded video stream is also maintained by using an efficient motion based frame skipping algorithm. We implement a scheme wherein both bilinear interpolation vector selection (BIVS) [8] and forward dominant vector selection (FDVS) [9] are used; the method which is appropriate for the macro block under consideration is employed to make the process computationally cheap yet achieve good results. Our architecture allows skipping of B frames as well as P frames. We also present our scheme to re-judge the macro block coding types to minimize the drift errors [4].

To summarize our work: we skip frames on the basis of motion, compose the new motion vectors, re-judge the macro block coding types, and re-calculate the prediction residuals, all in the DCT domain. The rest of the paper is organized as follows. In Section 2 we explain our transcoding architecture. Section 3 describes the macro block re-encoding scheme, which covers three parts: (i) re-judging macro block types under different situations, (ii) motion vector composition, and (iii) re-calculation of the residual signal. Section 4 briefly summarizes our implementation and Section 5 presents the experimental results obtained. Finally, the conclusion and future scope are given in Section 6.

2. Transcoding architecture

The block diagram of our transcoding architecture is as depicted in Figure 1. Firstly, the input bit stream is parsed by a variable length decoder (VLD), which performs the extraction of header information, frame coding modes, macro block coding modes, motion vectors and the quantized DCT information for each macro block. This is followed by the calculation of motion activity for each frame in the video. For this, we use the modified definition of motion activity, which is briefly described below [11]:

(MA)m = |xm| + |ym| for an inter coded macro block; (MA)m = k for an intra coded macro block    (1)

where (MA)m is the motion activity of a macro block, k is a properly tuned constant, and |xm| and |ym| are the absolute values of the motion vector components of the macro block. Since an intra macro block is produced when there are many prediction errors (namely, the macro block is largely different from the reference area in the previous frame), we take intra macro blocks also into account in the motion activity computation: they are assigned the maximum motion activity value k, equal to the maximum size of the motion vectors, which corresponds to the search range used by the motion estimation procedure. The motion activity of a frame is calculated by summing up the motion activities of all the macro blocks of that frame, as given by equation (2):

MA(f) = sum over all macro blocks m of frame f of (MA)m    (2)

Figure 1. Transcoding Architecture
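A small illustration of Eqs. (1) and (2); the data layout (a list of per-macro-block coding modes and motion vectors) is our assumption for the sketch.

    def frame_motion_activity(macro_blocks, k):
        """macro_blocks: list of ('intra', None) or ('inter', (mv_x, mv_y));
        k: tuned constant assigned to intra blocks (e.g. the motion search range)."""
        total = 0
        for coding, mv in macro_blocks:
            # Eq. (1): per-macro-block motion activity
            total += k if coding == 'intra' else abs(mv[0]) + abs(mv[1])
        return total    # Eq. (2): summed over the whole frame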

Once the motion activities are computed, the next step involves the application of a motion based frame skipping algorithm as described in [11]. The algorithm is presented below:

Algorithm for frame skipping

Motion-based Policy (frame f):
    if (f = first frame)
        Thr(f) = 0;
    else
        Thr(f) = (Thr(f-1) + MA(f-1)) / 2;
    if (MA(f) <= Thr(f))
        skip f
    else
        transcode f

Thus, after performing this step, an initial list of frames that can be skipped is available. Then we select a value N (which we name the quality factor), chosen in such a way that the frame rate as well as the bit rate is reduced sufficiently to meet the requirements; this value of N gives the number of frames to skip per GOP (Group Of Pictures). Using the value N, the frames contained in the list are sorted by their motion activities, and the final list of frames to be skipped from the incoming bit stream is obtained: the N frames with the least motion activities amongst a GOP. By experiments, it is observed that till N=3 the smoothness and quality of the output video are maintained to a very good extent; the results are presented and described in Section 5. A runnable sketch of this policy is given after this algorithm.

The final list of the frames to be skipped includes both B frames as well as P frames. Since B frames are non-reference frames, they are first dropped from the bit stream, and no re-encoding is required in this case. The dropping of P frames is performed next. Dropping P frames requires a re-encoding scheme, because the motion vectors and prediction residuals of frames that refer to the dropped P frame become invalid. Consider a GOP (in the display order) IBBPBBPBBPBBPBB, and suppose the 7th frame (a P frame) is dropped. Then the 2 B frames that precede it (the 5th and 6th frames), the B frames that appear after it (the 8th and 9th frames), and the subsequent P frame (the 10th frame) need to be modified. This order is followed to avoid redundant re-encoding of frames. Our re-encoding scheme is presented in the following section.

3. Re-encoding architecture

When P frames are dropped, the frames that are part of the transcoded bit stream need re-encoding, depending on the coding mode of the macro blocks. The P frame skipper (the portion of Figure 1 which follows the B frame skipper) includes a switch S. When the switch opens, the transcoder performs the motion compensation in the DCT domain, which updates the DCT domain buffer holding the quantized DCT coefficients of the residuals. After the new motion vectors and prediction residuals are calculated, the output is fed to a variable length coder (VLC), thus producing the final bit stream.

Macro block type conversions and re-encoding:

Case 1: Intra macro blocks need no changes, as they do not contain references.

Case 2: In a B frame that appears before the dropped frame, if the macro block is (Forward) predicted, then it requires no changes. But if the macro block is (Backward) predicted or (Forward + Backward) predicted, its vectors that refer to the dropped frame need to be re-composed with respect to the new reference, using the motion vector composition scheme described below.

Case 3: In the frames that appear after the dropped frame, if the macro block is (Forward) predicted, we have the following sub-cases (Figure 2):

3.1. Intra coded macro block: If the referenced macro block (MBt-1) is Intra coded (which is not the case in Figure 2), then MBt is converted into an Intra macro block and is replaced by the referenced picture area.

Figure 2. New prediction errors computation for non-MC macro blocks

Description: Rt is the frame being modified, Rt-1 is the skipped frame, and Rt-2 is the new temporal reference of Rt. MBt is the macro block which is being modified, MBt-1 is the skipped macro block, and MBt-2 is the new reference area of MBt. Q[DCT(et)] is the original error term and Q[DCT(est)] is the modified error term; Ut and Vt are the original motion vector components of MBt, and Ust and Vst are their modified values.
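The runnable sketch below combines the adaptive-threshold policy with the per-GOP quality factor N; the function and variable names are ours, not the authors'.

    def select_skipped_frames(motion_activity, gop_size=15, n=3):
        """Mark low-motion candidate frames with the adaptive threshold of [11],
        then keep only the N least-active candidates per GOP."""
        candidates, thr = [], 0.0
        for f, ma in enumerate(motion_activity):
            # Thr(f) = (Thr(f-1) + MA(f-1)) / 2, Thr(first frame) = 0
            thr = 0.0 if f == 0 else (thr + motion_activity[f - 1]) / 2
            if f > 0 and ma <= thr:        # the first frame is never skipped
                candidates.append(f)
        skipped = []
        for gop_start in range(0, len(motion_activity), gop_size):
            in_gop = [f for f in candidates
                      if gop_start <= f < gop_start + gop_size]
            in_gop.sort(key=lambda f: motion_activity[f])
            skipped.extend(in_gop[:n])     # quality factor N per GOP
        return sorted(skipped)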

3.2. Inter coded macro block: If the referenced macro block (MBt-1) is Inter coded (Forward predicted), and the referenced area lies within the macro block boundary, then the vector of the referenced macro block is simply added to the vector of MBt, which gives the modified vector:

Ust = Ut + Ut-1    (3)

Vst = Vt + Vt-1    (4)

where Ust and Vst are the modified components of the motion vector of MBt. For Forward predicted macro blocks, the newly quantized DCT coefficients of the prediction error for MBt are given by:

Q[DCT(est)] = Q[DCT(et)] + Q[DCT(et-1)]    (5)

where

Q[DCT(et-1)] = Q[DCT(MBt-1)] - Q[DCT(MBt-2)]    (6)

Here Q[DCT(MBt-1)] and Q[DCT(MBt-2)] can be found by the block translation method available in the literature [13, 15]. The linear property of the DCT is used, and since DCT(et) and DCT(et-1) are divisible by the quantizer step-size, the new prediction error is calculated entirely in the DCT domain [15], unlike some architectures that operate partially in the pixel domain [10].

But if the referenced area does not lie within the macro block boundary (Figure 3), motion vector composition is required, and we have the following sub-cases:

3.2.1. MB1t-1, MB2t-1, MB3t-1 and MB4t-1 are all Inter coded: If all the involved macro blocks overlapped by the referenced area are Inter coded, then we use the Bilinear interpolation (BIVS) [8] to find the resultant vector.

3.2.2. One of the four (MB1t-1, MB2t-1, MB3t-1, MB4t-1) is an Intra macro block: In this case, we find out whether there exists an Inter macro block, among the neighbouring ones, with which the referenced area overlaps by more than 3/4th in area. If it exists, then FDVS [9] is used to find the resultant vector. Otherwise, MBt is converted into an Intra macro block and is replaced by the referenced picture area: if the neighbouring macro blocks are Intra coded, its quantized DCT coefficients can be calculated from the incoming quantized DCT coefficients of those neighbouring macro blocks (available from the DCT domain buffer), and in case the neighbouring macro blocks are not Intra coded, the coefficients can be found out by recursively tracing the vectors. It is found that the increase in bit rate due to this is not significant, since the percentage of these macro block types is quite low. Employing BIVS or FDVS, whichever is appropriate for the macro block under consideration, makes the process computationally cheap while leading to good results.

Figure 3. New prediction errors computation for non-MC macro blocks
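For illustration, here is a simplified FDVS-style composition of a motion vector across one skipped frame: the vector of the skipped-frame macro block with the dominant (largest) overlap is added to the current vector, as in Eqs. (3) and (4). The 16x16 block size, the dictionary layout, and all names are assumptions for the sketch.

    B = 16   # macro block size (assumed)

    def fdvs_compose(u, v, x, y, skipped_mv):
        """(u, v): vector of the macro block at (x, y) into the skipped frame;
        skipped_mv: maps a block's aligned top-left corner to its vector."""
        rx, ry = x + u, y + v                       # referenced area's corner
        base_x, base_y = (rx // B) * B, (ry // B) * B
        best_overlap, dominant = -1, (0, 0)
        for bx in (base_x, base_x + B):             # up to 4 overlapped blocks
            for by in (base_y, base_y + B):
                ox = max(0, min(rx + B, bx + B) - max(rx, bx))
                oy = max(0, min(ry + B, by + B) - max(ry, by))
                if ox * oy > best_overlap and (bx, by) in skipped_mv:
                    best_overlap, dominant = ox * oy, skipped_mv[(bx, by)]
        return u + dominant[0], v + dominant[1]     # Eqs. (3) and (4)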

Case 4: In the frames that appear after the dropped frame, if the macro block is (Forward + Backward) predicted, then the Forward vector needs to be modified, and this is done in the same way as discussed above for Case 3. After the new forward vector is found, the new quantized DCT coefficients can be found by the equation given below:

Q[DCT(est)] = Q[DCT(et)] + (Q[DCT(et-1)] / 2)    (7)

which can be derived as follows. We have the three equations:

Q[DCT(est)] = (Q[DCT(MBt)] - Q[DCT(MBt-2)])/2 + (Q[DCT(MBt)] - Q[DCT(MBt+1)])/2    (8)

Q[DCT(et)] = (Q[DCT(MBt)] - Q[DCT(MBt-1)])/2 + (Q[DCT(MBt)] - Q[DCT(MBt+1)])/2    (9)

Q[DCT(et-1)] = Q[DCT(MBt-1)] - Q[DCT(MBt-2)]    (10)

Eq. (8) - Eq. (9) - Eq. (10)/2 = 0, which gives us the required equation (7).

4. Implementation

In this section, we give a brief overview of our implementation. We have implemented our algorithms in C++; the implementation includes four header files, namely declarations.h, block_dct.h, frame_modifier.h and Huffman.h, containing functions for motion activity calculation, motion vector composition, re-calculation of prediction errors, and computation of the number of bits in the modified frames (this helps in evaluating the size of the output video stream). We have carried out our implementation for Mpeg-1 videos.

The Mpeg-1 video stream is taken as the input along with the quality factor (N), which is decided by the channel capacity and the target frame rate. The input is processed to get the details of the video stream, including the required header information, the motion vectors and the quantized DCT coefficients of each macro block. This is done using mpeg_stat, which is a free tool available at [16]. Thus the output generated by our modules includes:

1. The new motion vectors,
2. The quantized DCT coefficients of the modified frames,
3. The re-judged coding types of the macro blocks,
4. The new values of bits per frame.

5. Experimental results

We conducted experiments over mpeg-1 videos to evaluate the performance of our transcoding architecture, and the results for three sample mpeg-1 video sequences are presented. The three video sequences, ACT60.mpg, ADVERTISMENT.mpg and SCHONBEK.mpg, are compressed mpeg-1 videos, each with a resolution of 160x120 and having 15 frames per GOP with the following sequence: IBBPBBPBBPBBPBB. We reconstructed the new frames by performing inverse quantization and inverse DCT to compare them with the frames from the original video sequence. The average PSNR (peak signal to noise ratio) values for N=3 and N=4 are calculated and presented below; Table 1 gives the results for the video sequences (percentage reduction in the stream size, average PSNR values, and input and output frame rates for N=3 and N=4).

We also took observations on the smoothness of the output video, by showing the input and output video sequences to a group of 30 people. From their feedback it is observed that the output video is found to be very good till quality factor N=3 (for a GOP of size 15). From the results obtained, we observe that the average PSNR value decreases with an increase in the value of the quality factor N, which is a direct consequence of the increase in the drift errors with the increase in the value of N. Thereby, a good tradeoff between bit rate reduction and video quality is achieved. Figure 4(a) is a frame from the original sequence ACT60.mpg, and 4(b) and 4(c) correspond to the reconstructed frames for N=3 and N=4 respectively. Similarly, figures 5(a), 5(b) and 5(c) are images taken from the original and reconstructed sequences of ADVERTISMENT.mpg. The results obtained are found to be promising.
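The residual updates of Eqs. (5)-(7) amount to simple block arithmetic in the quantized DCT domain. A minimal sketch, with numpy arrays standing in for 8x8 quantized DCT blocks (names are ours):

    import numpy as np

    def forward_residual(q_et, q_mb_prev, q_mb_new_ref):
        # Eqs. (5)-(6): e_st = e_t + (MB_{t-1} - MB_{t-2}), all DCT-domain
        return q_et + (q_mb_prev - q_mb_new_ref)

    def bidirectional_residual(q_et, q_et_prev):
        # Eq. (7): for (Forward + Backward) predicted macro blocks the
        # skipped frame's residual contributes with weight 1/2
        return q_et + q_et_prev / 2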

Table 1. Average PSNR, % reduction in stream size, and change in frame rate for the 3 sample mpeg-1 video sequences (for each of ACT60.mpg, ADVERTISMENT.mpg and SCHONBEK.mpg: the input frame rate, and the % reduction in stream size, average PSNR (dB) and output frame rate under quality factors N=3 and N=4)

(a) Original frame (b) for N=3 (c) for N=4
Figure 4. Sample frame taken from ACT60.mpg

(a) Original frame (b) for N=3 (c) for N=4
Figure 5. Sample frame taken from ADVERTISMENT.mpg

6. Conclusion & future work

We have proposed a temporal transcoding architecture which helps in solving the problem of video transmission over networks that consist of channels of low bandwidth and end terminals with varying frame rate constraints. The architecture skips frames (both reference and non-reference frames) on the basis of motion and is low in complexity, because the transcoding is carried out completely in the DCT domain. The macro block coding re-judgement scheme and the re-calculation of the prediction residuals reduce the cumulative errors. Also, the re-encoding errors are low, because the quantized DCT coefficients are directly manipulated upon. From the results we have obtained, this architecture works well in terms of the output video quality for videos with high motion as well as low motion, while having a low computational complexity. One of the areas that still seems to be open is the design of efficient frame skipping policies.

The literature consists of frame skipping policies which are mainly defined by motion information in an experimental way. It would be interesting to investigate this problem in an analytical way, by the use of tools like dynamic programming and randomization, to design new frame skipping strategies.

References

[1] I. Ahmad, Xiaohui Wei, Yu Sun and Ya-Qin Zhang, "Video transcoding: an overview of various techniques and research issues," IEEE Transactions on Multimedia, vol. 7, issue 5, Oct. 2005, pp. 793-804.
[2] A. Vetro, C. Christopoulos and H. Sun, "Video Transcoding Architectures and Techniques: An Overview," IEEE Signal Processing Magazine, vol. 20, March 2003, pp. 18-29.
[3] Chia-Wen Lin and Yuh-Reuy Lee, "Fast algorithms for DCT-domain video transcoding," Proceedings of the International Conference on Image Processing (ICIP 2001), vol. 1, 2001, pp. 421-424.
[4] Chunrong Zhang, Shibao Zheng, Chi Yuan and Feng Wang, "A novel low-complexity and high-performance frame-skipping transcoder in DCT domain," IEEE Transactions on Consumer Electronics, vol. 51, issue 4, Nov. 2005, pp. 1306-1312.
[5] M. Bonuccelli, F. Lonetti and F. Martelli, "Temporal transcoding for mobile video communication," The Second Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services (MobiQuitous 2005), July 2005, pp. 502-506.
[6] J. Xin, C. Lin and M. Sun, "Digital Video Transcoding," Proceedings of the IEEE, vol. 93, no. 1, Jan. 2005, pp. 84-97.
[7] Jens Brandt and Lars Wolf, "Multidimensional Transcoding for Adaptive Video Streaming," Proceedings of the 17th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV'07), June 2007.
[8] J. Hwang, T. Wu and C. Hwang, "Dynamic Frame-Skipping in Video Transcoding," February 1998.
[9] J. Youn, M. Sun and C. Lin, "Motion Vector Refinement for High-Performance Transcoding," IEEE Transactions on Multimedia, vol. 1 (1), March 1999, pp. 30-40.
[10] K. Fung, Y. Chan and W. Siu, "New architecture for dynamic frame-skipping transcoder," IEEE Transactions on Image Processing, vol. 11, August 2002, pp. 886-900.
[11] H. Shu and L. Chau, "Frame-skipping Transcoding with Motion Change Consideration," Proceedings of the 2004 International Symposium on Circuits and Systems (ISCAS 2004), May 2004, pp. 773-776.
[12] Susie Wee, Bo Shen and John Apostolopoulos, "Compressed-Domain Video Processing," Hewlett-Packard Laboratories Technical Report HPL-2002-282, October 2002; to be published in the CRC Handbook on Video Databases.
[13] T. Shanableh and M. Ghanbari, "Hybrid DCT/pixel domain architecture for heterogeneous video transcoding," Signal Processing: Image Communication, vol. 18, September 2003.
[14] V. Patil and R. Kumar, "An Arbitrary Frame-Skipping Video Transcoder," IEEE International Conference on Multimedia and Expo (ICME 2005), July 2005, pp. 1456-1459.
[15] V. Patil and R. Kumar, "A DCT domain frame-skipping transcoder," IEEE International Conference on Image Processing (ICIP 2005), 11-14 Sept. 2005.
[16] mpeg_stat, a video analyzing tool for mpeg-1 videos, http://bmrc.berkeley.edu/ftp/pub/multimedia/mpeg/stat/

Electricity Demand Forecasting Based on Feedforward Neural Network Training by a Novel Hybrid Evolutionary Algorithm

Wenyu Zhang (College of Atmospheric Sciences, Lanzhou University, Lanzhou, 730000, China), Yuanyuan Wang, Jianzhou Wang, Jinzhao Liang (School of Mathematics & Statistics, Lanzhou University, Lanzhou, 730000, China)
yuzhang@lzu.edu.cn, wjz@lzu.edu.cn, liangjzh07@lzu.edu.cn, Chejxh07@lzu.edu.cn

Abstract

Electricity demand forecasting is an important index to make power development plans and dispatch the loading of generating units in order to meet system demand. In order to improve the accuracy of the forecasting, in this paper we apply the feedforward neural network for electricity demand forecasting, trained by a novel hybrid evolutionary algorithm called the AFSA-PSO-parallel-hybrid evolutionary (APPHE) algorithm. The main idea of the hybrid algorithm is to divide the particles into two subsystems: the first particles execute the AFSA algorithm while the second particles execute the PSO algorithm simultaneously, and then the largest fitness in the two systems is found. The whole system only has one largest fitness and one best position, and the best solution is transmitted back to the PSO populations; afterward the two subsystems execute the PSO algorithm simultaneously. The proposed method has been applied in a real electricity load forecasting task, and the results show that the proposed approach has a better generalization performance and is also more accurate and effective than the feedforward neural network trained by particle swarm optimization.

1. Introduction

With the establishment of the electric power market, a high forecast precision is much more important than before. The inaccuracy or large error in the forecast not only means that load matching is not optimized, but also influences the stability of the power system running. The traditional models of load forecasting, such as the time series model and the regression analysis model [1], are too simple to simulate the complex and fast change of the power load [2]. Artificial neural networks technology is an effective way to solve the complex non-linear mapping problem of short-term load forecasting of power systems. The feedforward neural network is a kind of neural network which has a better structure and has been widely used [3]. But there are still many drawbacks if we simply use neural networks, such as a slow training rate, being easy to trap into a local minimum point, and bad ability on global search, etc. [4].

In recent years, swarm intelligent methods such as the Particle Swarm Optimization (PSO) algorithm, the artificial immune algorithm and the Artificial Fish Swarm Algorithm (AFSA) have been applied in function optimization and parameters optimization. These algorithms reflect different better properties with their characteristics, such as scalability, fault tolerance, adaptation, speed, autonomy, and parallelism in different applications. Tao [5] applied the PSO algorithm in optimization; its usage is flexible and its convergence speed is fast, but the PSO algorithm has disadvantages such as sensitivity to initial values, premature convergence, slow convergence in the later stage of the evolution, and easily trapping into a local optimum. The AFSA has a strong ability of avoiding local extremum and achieving global extremum, and has been applied in nonlinear function optimization, parameter estimation, and parameters selection problems. AFSA and PSO are much similar in their inherent parallel characteristics, whereas experiments show that they have their specific advantages when solving different problems. What we would like to do is to obtain both their excellent features by synthesizing the two algorithms. Inspired by this idea, in this paper we propose one hybrid evolutionary algorithm based on the PSO and AFSA methods, crossing over the PSO and AFSA algorithms to train the feedforward neural network.

The results, which are compared with those of the feedforward neural network trained by the particle swarm optimization (PSO) algorithm, show much more satisfactory performance. The remainder of this paper is organized as follows: Section 2 provides a brief description of the multi-layer feedforward ANN, PSO, AFSA and the APPHE algorithm. Section 3 describes the research data and experiments. Section 4 summarizes and analyzes the empirical results and discusses the conclusions and future research issues.

2. Methodology

2.1. Multi-layer feedforward neural network

An FNN consists of an input layer, one or more hidden layers and an output layer. Every node in each layer is connected to every node in the adjacent layer. Suppose that the input layer has n nodes, the hidden layer has H hidden nodes, and the output layer has O output nodes. In this paper, the hidden transfer function and the output transfer function are both the sigmoid function. The computed output of the ith node in the output layer is defined as follows [7]:

yi = f( sum_{j=1..H} ( wij * f( sum_{k=1..n} vjk * xk + theta_vj ) + theta_wi ) ),  i = 1, ..., O    (1)

where yi is the output of the ith node in the output layer, xk is the input of the kth node in the input layer, wij is the connective weight between nodes in the hidden and output layers, vjk represents the connective weight between the nodes in the input and hidden layers, and theta_wi (or theta_vj) are bias terms that represent the threshold of the transfer function f. The learning error E can be calculated by the following formulation [8]:

E = sum_{k=1..q} Ek / q,  where  Ek = sum_{i=1..O} (yi_k - Ci_k)^2

where q is the number of total training samples and |yi_k - Ci_k| is the error between the actual output and the desired output of the ith output unit when the kth training sample is used for training. When the APPHE algorithm is used in evolving the weights of the feedforward neural network, every particle represents a set of weights and biases. We define the fitness function of the ith particle Xi as follows:

fitness(Xi) = E(Xi)    (2)

2.2. The PSO algorithm

The PSO algorithm was first proposed by Kennedy and Eberhart in 1995 [9] and can be performed by the following equations:

vid(t+1) = k * [ vid(t) + phi1 * rand1 * (pid - xid) + phi2 * rand2 * (pgd - xid) ]    (3)

xid(t+1) = xid(t) + vid(t+1)    (4)

where i = 1, ..., N and d = 1, ..., D; rand1 and rand2 are random numbers uniformly distributed within [0,1]; and vid is restricted to [-vmax, vmax], where vmax is a designated value. In order to guarantee the convergence of the PSO algorithm, the constriction factor k is used; when the constriction factor is used, the swarm converges quickly towards the optimal position. It is defined as follows:

k = 2 / | 2 - phi - sqrt(phi^2 - 4*phi) |,  phi = phi1 + phi2,  phi > 4    (5)

where phi1 = phi2 = 2.05, so phi is set to 4.1 and the constriction factor k is 0.729.

2.3. Standard AFSA algorithm

The Artificial Fish Swarm Algorithm was first proposed in 2002 [6]. In this paper, we adopt [10] to describe the AFSA algorithm.

2.3.1. The structure of the algorithm and definitions

Suppose that the searching space is D-dimensional and N fish form the colony. The AF individual state can be expressed with the vector X = (x1, x2, ..., xD), where xi (i = 1, ..., D) is the variable to be searched for the optimal value. The AF food consistence at the present position can be represented by FC = f(X), where FC is the objective function. The distance between AF individuals can be expressed as di,j = ||Xi - Xj||. Visual represents the vision distance, Step is the maximum step length, and delta is the crowd factor; Random(Step) represents a random number in the range [0, Step].

2.3.2. The description of the behaviors

(1) Searching behavior. We randomly select a new state Xj in the visual field of the current state Xi. Because the fitness here is an error to be minimized, the AF moves a step towards Xj only when the food consistence there is better:

xinext,k = xik + Random(Step) * (xjk - xik) / ||Xj - Xi||,  if FCj < FCi
xinext,k = xik + Random(Step),                              if FCi <= FCj    (6)

where k = 1, 2, ..., D, and xik and xinext,k represent the kth elements of the state vector Xi and of the AF's state Xinext at the next time step; FCi and FCj are the food consistences of Xi and Xj. In the second case, a state Xj is selected randomly again and judged against the forward condition; if it cannot be satisfied after try_number times, the AF moves a step randomly.

(2) Swarming behavior. An AF with the current state Xi seeks the companions in its current neighbourhood satisfying di,j < Visual, and nf denotes the number of these fellows. If nf >= 1, the AF explores the centre position Xc of its fellows:

xck = ( sum_{j=1..nf} xjk ) / nf    (7)

If FCc / nf < delta * FCi, which means that the fellow centre has high food consistence and the surroundings are not very crowded, the AF moves a step forward to the fellow centre:

xinext,k = xik + Random(Step) * (xck - xik) / ||Xc - Xi||    (8)

where FCc denotes the food consistence of the centre position. Otherwise, or if nf = 0, the AF executes the searching behavior.

(3) Following behavior. Let Xi denote the AF state at present, and let Xmin, with food consistence FCmin, be the minimum among its fellows in the near fields (di,j <= Visual). If nf >= 1 and FCmin / nf < delta * FCi, which means that the fellow Xmin has high food consistence and its surroundings are not very crowded, the AF moves a step forward to the fellow Xmin:

xinext,k = xik + Random(Step) * (xmin,k - xik) / ||Xmin - Xi||    (9)

Otherwise, or if nf = 0, the AF executes the searching behavior.
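As a concrete illustration of Eqs. (1)-(5), here is a minimal sketch of the sigmoid forward pass, the error-based particle fitness, and one constriction-factor PSO step; the array shapes and names are our assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fnn_forward(x, V, theta_v, W, theta_w):      # Eq. (1)
        hidden = sigmoid(V @ x + theta_v)            # input -> hidden layer
        return sigmoid(W @ hidden + theta_w)         # hidden -> output layer

    def fitness(particle, samples):                  # Eq. (2): fitness = error E
        V, theta_v, W, theta_w = particle
        return sum(np.sum((fnn_forward(x, V, theta_v, W, theta_w) - c) ** 2)
                   for x, c in samples) / len(samples)

    phi1 = phi2 = 2.05
    phi = phi1 + phi2                                          # 4.1
    k = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))     # Eq. (5), ~0.729

    def pso_step(v, x, p_best, g_best, v_max=2.0):   # Eqs. (3)-(4)
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = k * (v + phi1 * r1 * (p_best - x) + phi2 * r2 * (g_best - x))
        v = np.clip(v, -v_max, v_max)
        return v, x + v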

(4) Bulletin. The bulletin is used to record the optimal AF's state and the AF food consistence at that position. Each AF compares its own state with the bulletin and updates the bulletin when its state is better; the final value of the bulletin is the optimal value of the problem.

2.3.3. Selecting the behavior

According to the character of the problem, the AF evaluates the environment at present and then selects an appropriate behavior. The simplest way is the trial method: evaluate the values derived by the swarming behavior and the following behavior, and implement the behavior whose result is the minimum. The acquiescent (default) behavior is the searching behavior.

2.4. AFSA-PSO-parallel-hybrid evolutionary algorithm

The performance of the novel algorithm is described as follows:

1. Divide the particles into two, and then initialize the AFSA and PSO subsystems respectively.
2. Execute AFSA and PSO simultaneously, and calculate the fitness of their individuals.
3. Find the best solution in the two systems, and then transmit the best solution back to the PSO populations.
4. The two subsystems execute the PSO algorithm simultaneously. Update the bulletin with the better state of the AFs.
5. Memorize the best solution as the final solution, output it, and stop if the best individual in one of the two subsystems satisfies the termination criterion; if not, go to step 2.
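A skeleton of the Section 2.4 loop is sketched below. The population handling, the tolerance-based stopping rule, and the helper names afsa_iterate / pso_iterate (standing for the behaviors of Section 2.3 and Eqs. (3)-(4)) are illustrative assumptions.

    def apphe(particles, fitness, afsa_iterate, pso_iterate,
              max_iter=500, tol=1e-3):
        half = len(particles) // 2
        fish, swarm = particles[:half], particles[half:]   # step 1: split
        bulletin = min(particles, key=fitness)             # best state so far
        for _ in range(max_iter):
            fish = afsa_iterate(fish, fitness)             # step 2: AFSA half...
            swarm = pso_iterate(swarm, fitness)            # ...and PSO half, in parallel
            best = min(fish + swarm, key=fitness)          # step 3: system best
            bulletin = min([bulletin, best], key=fitness)  # step 4: update bulletin
            swarm[0] = bulletin                            # transmit best back to PSO
            fish = pso_iterate(fish, fitness)              # then both subsystems take
            swarm = pso_iterate(swarm, fitness)            # a PSO step simultaneously
            if fitness(bulletin) <= tol:                   # step 5: termination test
                break
        return bulletin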

3. Application

3.1 Problem description

Time series prediction is to deduce the future values of a time series according to its past values. Suppose that a time series {x_k}, k = 1, 2, ..., N, is given. Using the delays method, we represent the data in d-dimensional space by vectors

X_k = [x_k, x_{k+1}, ..., x_{k+(d-1)}]

where d is the embedding dimension. The prediction can be described as

X_{k+T} = f(X_k)    (10)

f: (x_k, x_{k+1}, ..., x_{k+(d-1)}) → (x_{k+T}, x_{k+1+T}, ..., x_{k+(d-1)+T})    (11)

so that

x_{k+(d-1)+T} = g(x_k, x_{k+1}, ..., x_{k+(d-1)})    (12)

where g is an unknown function and T is the prediction step: T = 1 means one-step-ahead prediction, and T > 1 means multi-step prediction. We can use the map f to make the prediction. In this work we try applying a feedforward neural network to estimate the unknown function g, and we adopt [11] to analyse the problem. In this paper we use an FNN with the structure 5-5-1 to address the problem, and the embedding dimension d is 5.

We collected a 28-day hourly load series from a state of Australia; this 672-observation series is used as the experimental data, with which the analysis and forecasting will be done. The multi-layer feedforward ANN is trained with the 26-day hourly load series and then forecasts the next two days.

Two error metrics are employed to evaluate the prediction performance. One is the related index, defined as

R² = 1 - Σ_{i=1..n} [Y_i - Ŷ_i]² / Σ_{i=1..n} [Y_i - Ȳ]²    (13)

where Ŷ_i is the forecast data, Y_i is the actual data, Ȳ is the average of the time series {Y_i, i = 1, 2, ..., n}, and n is equal to 24. The closer R² is to the value 1, the more satisfactory the performance. The other is the relative error, defined as

(1/n) Σ_{i=1..n} |Y_i - Ŷ_i| / Y_i    (14)

3.2 Simulation

Every weight in the network was initially set in the range [-60, 60], and all the biases in the network were set in the range [-50, 50]. The population size is 45 and the maximal iterative step is 500. The maximum velocity is assumed to be 2 and the minimum velocity -2. In the AFSA algorithm parameter setting, try_number = 15, Visual = 1, Step = 1.5 and δ = 1.3. Simulations are performed in Matlab 7.1 on a 2.8 GHz Pentium PC. The flow chart of the APPHE algorithm is shown in Figure 1.

Fig. 1. Flow chart of the APPHE algorithm.

The results are as follows. Figure 2 shows the predictions of the next two days' load series. From Figure 2 we can see that the predictions based on the APPHE-FNN algorithm are closer to the actual data than the PSO-FNN algorithm's. It is found from Table 1 that the related index and the relative error of the APPHE-FNN algorithm for the 2 predicted consecutive days are obviously much better than the PSO-FNN algorithm's. The training processes of the APPHE-FNN algorithm and the PSO-FNN algorithm are shown in Figure 3: the PSO-FNN may rapidly stagnate, so that the solution no longer improves, while the APPHE-FNN can still search progressively until the global optimum is found. Through the comparison analysis, it can be seen that the APPHE-FNN algorithm has a more accurate forecasting capacity and considerably better convergence than the PSO-FNN.
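The construction of the training patterns in equations (10)-(12) and the two metrics of equations (13)-(14) can be sketched as follows. The toy series and the naive persistence forecast standing in for the trained FNN are illustrative assumptions, not the paper's data or model.

/* Sketch of the delays method (eqs. 10-12) and of the related index R^2
 * (eq. 13) and relative error (eq. 14). */
#include <stdio.h>
#include <math.h>

#define N   48  /* length of the demo series      */
#define DIM 5   /* embedding dimension d          */
#define T   1   /* one-step-ahead prediction step */

/* build X_k = [x_k, ..., x_{k+(d-1)}] with target x_{k+(d-1)+T} */
static int make_patterns(const double *x, int n, double X[][DIM], double *y) {
    int m = 0;
    for (int k = 0; k + (DIM - 1) + T < n; k++, m++) {
        for (int j = 0; j < DIM; j++) X[m][j] = x[k + j];
        y[m] = x[k + (DIM - 1) + T];
    }
    return m;  /* number of patterns */
}

static double r2(const double *y, const double *yhat, int n) {
    double mean = 0.0, sse = 0.0, sst = 0.0;
    for (int i = 0; i < n; i++) mean += y[i] / n;
    for (int i = 0; i < n; i++) {
        sse += (y[i] - yhat[i]) * (y[i] - yhat[i]);
        sst += (y[i] - mean) * (y[i] - mean);
    }
    return 1.0 - sse / sst;  /* eq. (13) */
}

static double rel_err(const double *y, const double *yhat, int n) {
    double e = 0.0;
    for (int i = 0; i < n; i++) e += fabs(y[i] - yhat[i]) / y[i];
    return e / n;            /* eq. (14) */
}

int main(void) {
    double x[N], X[N][DIM], y[N], yhat[N];
    for (int i = 0; i < N; i++) x[i] = 100.0 + 10.0 * sin(0.26 * i); /* toy "load" */
    int m = make_patterns(x, N, X, y);
    for (int i = 0; i < m; i++) yhat[i] = X[i][DIM - 1]; /* persistence forecast */
    printf("m=%d  R2=%.3f  rel.err=%.3f\n", m, r2(y, yhat, m), rel_err(y, yhat, m));
    return 0;
}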

Table 1. The prediction results about R² and relative error for 2 consecutive days

Algorithm    Metric            27th     28th
PSO-FNN      R²                0.85     0.81
             Relative error    5.71%    5.92%
APPHE-FNN    R²                0.92     0.86
             Relative error    2.82%    3.64%

Fig. 2. The prediction results (electricity load, MW, over the 48 forecast hours) based on the APPHE-FNN algorithm and the PSO-FNN algorithm respectively, plotted against the actual data.

Fig. 3. Fitness curves of the FNN based on the APPHE algorithm and the PSO algorithm respectively, over 500 iterations.

4. Conclusions

In this paper, a feedforward neural network trained by the AFSA-PSO-parallel-hybrid evolutionary algorithm is proposed for electricity demand forecasting. The results show that the proposed algorithm has a better ability to escape from the local optimum and a better predicting ability than the PSO-FNN algorithm. High precision has a significant impact on the economic operation of the electric utility, since many decisions based on these forecasts have significant economic consequences. It should be pointed out that, although the processes are focused on electricity load forecasting, we believe that the proposed method can be used for many other complex time series forecasting tasks, such as financial series and hydrological series forecasting.

Acknowledgment

The research was supported by the NSF of Gansu Province in China under Grant (ZS031-A25-010-G).

References

[1] Niu Dongxiao, Cao Shuhua, Zhao Lei, et al. Power Load Forecasting Technology and Its Application. Beijing: China Electric Power Press, 2001.
[2] Dongxiao Niu, Jinchao Li, Jinying Li. Daily Load Forecasting Using Support Vector Machine and Case-Based Reasoning. 2007 Second IEEE Conference on Industrial Electronics and Applications, 2007, pp. 1271-1274.
[3] Changrong Wu, Juntao Zhang, Xinxin Li. The application of BP neural networks in hospital multifactor time series forecast. Fujian Computer, Jan 2005, pp. 38-39.
[4] Shu-xia Yang. Power Demand Forecast Based on Optimized Neural Networks by Improved Genetic Algorithm. Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, 13-16 August 2006, pp. 2877-2881.
[5] Tao Xiang, Xiaofeng Liao, Kwok-wo Wong. An improved particle swarm optimization algorithm combined with piecewise linear chaotic map. Applied Mathematics and Computation, Volume 190, Issue 2, 15 July 2007, pp. 1637-1645.
[6] Li Xiaolei, Shao Zhijiang, Qian Jinxin. An optimizing method based on autonomous animats: fish-swarm algorithm [J]. Systems Engineering Theory and Practice, 2002, 11:32-38.
[7] Chern-Hwa Chen, Jong-Cheng Wu, Jow-Hua Chen. Prediction of flutter derivatives by artificial neural networks. Journal of Wind Engineering and Industrial Aerodynamics, In Press, Corrected Proof, Available online 7 April 2008.
[8] Jing-Ru Zhang, Jun Zhang, Tat-Ming Lok, Michael R. Lyu. A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training. Applied Mathematics and Computation, Volume 185, Issue 2, 15 February 2007, pp. 1026-1037.
[9] J. Kennedy, R. Eberhart. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks (Perth, Australia), Piscataway, NJ: IEEE Service Center, 1995, pp. 1941-1948.
[10] Yi Luo, Ning Li, et al. The Optimization of PID Controller Parameters Based on Artificial Fish Swarm Algorithm. Proceedings of the IEEE International Conference on Automation and Logistics, August 18-21, 2007, Jinan, China, pp. 1058-1062.
[11] Xiaodong Wang, Haoran Zhang, Changjiang Zhang, Mingquan Ye. Time series prediction using LS-SVM with particle swarm optimization. Advances in Neural Networks, 2006, pp. 747-752.

Investigation on the Behaviour of New Type Airbag*

Hu Lin, Liu Ping
College of Mechanical Engineering, University of Shanghai for Science and Technology, Shanghai, China, 200093. Hulin888@sohu.com
Huang Jing
The State Key Laboratory of Advanced Design & Manufacture for Vehicle Body, Hunan University, Changsha, China, 410082. luckycitrine@163.com

Abstract - In this research a new type of airbag system (NAB) was developed, which consists of two flat layers and a middle layer with a tube-type structure. The performance of the prototype airbag was investigated using static deployment tests and a sled crash test with a Hybrid III dummy. An FE model of the airbag was developed and validated using results from the airbag deployment tests and the sled crash test. Computer simulations of the out-of-position (OOP) occupant with the NAB and with the normal driver-side airbag (DAB) were carried out using the HYBRID III FE model, and finally a parametric study was carried out to find the more sensitive and critical design variables for occupant injuries. Results indicate that the NAB needs less gas than the DAB to obtain the same load and displacement curve, that the leading velocity of the NAB is lower than that of the DAB, and that in the OOP simulation the acceleration of the dummy head using the NAB is smaller than that using the DAB.

Index Terms - Sandwiched airbag, OOP occupant protection, sled crash test, finite element model.

I. INTRODUCTION

Airbags have been proven to be very helpful in protecting drivers and passengers in many automotive crashes. One common side-effect, however, is that the airbag itself may harm occupants, especially children and small women, when it deploys improperly in a crash (Chritina et al., 1993; John et al., 1998; Alex et al., 1995). To achieve occupant protection during a crash, a fully-deployed airbag dissipates the frontal crash forces experienced by the driver over a larger body area and gradually decelerates the occupant's head and torso to prevent contact with other interior surfaces; to do so, the airbag itself must deploy rapidly, in less than 50 milliseconds. Development of the traditional airbag therefore requires a powerful inflation system and a quick response of the ignition system, which puts high requirements on the systems and results in high costs. Proper development of the airbag system depends on several factors, including the structure of the airbag, the inflation system, the ignition system and the ignition time.

An occupant positioned extremely close to the airbag module at the time the airbag begins to inflate is exposed to highly localized forces. Two phases of airbag deployment have been associated with high, injury-causing localized forces: the punch-out phase and the membrane-loading phase. According to the findings of biomechanics research, field automotive accidents and laboratory tests, the head, chest and neck are the most vulnerable body regions subject to injury while the airbag interacts with an OOP occupant. It has been found that chest injury is often associated with the airbag "punch-out" force and occurs at a very early stage of the airbag deployment, whereas neck injury is more likely induced by the "membrane" loading at relatively late deployment time [2]. It is believed that these side-effects would be minimized if the early development pattern of the airbag can be controlled.

To reduce the possible side-effects of the airbag system and to relax the requirements on the inflation system, a sandwiched tube-type airbag system (NAB) [3] was developed (Zhong, 2005) and is evaluated in this paper. It consists of an upper layer flat airbag, a lower layer flat airbag and a middle layer of tube-type airbags. The upper layer flat airbag is designed to touch the occupant in a crash, and the middle layer tube-type airbags are designed to support the upper layer and the lower layer. In the event of a crash, the middle layer is inflated first and most, while the other two layers are inflated later and less rapidly. The whole NAB volume is about 40 L. The objective of the new airbag system is to enhance occupant safety in passenger car collisions with a reduction of injury risks for small and OOP occupants.

To determine the potential for occupant protection, computer simulations of the OOP occupant with the NAB and the normal driver-side airbag (DAB) are carried out using the HYBRID III FE model. Two groups of deployment tests using the DAB and the NAB were performed to support and validate the computational modeling efforts and to compare their deployment properties; then the sled test and virtual tests were used to estimate the protection efficiency of the NAB; finally, the parametric study was carried out to find the more sensitive and critical design variables for occupant injuries.

II. METHOD AND MATERIALS

A prototype of the NAB was developed, manufactured and used [1], built up with a volume of 40 litres.

Experimental set-up. An experimental inflation device was designed as shown in Figure 1. The whole deployment system mainly consists of an air compressor, a tank, fast-acting valves and sensors. The device can be used to inflate the airbag and provide gas with prescribed pressure and leakage. The air compressor 1 provides compressed air to the tank 4; solenoid valve 2 functions as a switch between the air compressor and the tank; safety valve 3 prevents the system from pressure overloading; the outlet of the fast-acting valve is connected to the inlet of the new structure airbag. A/D board 8 converts the pressure voltage signals to numerical signals. Pressure sensors 9 and 6 are used to transfer the tank pressure signal and the airbag internal pressure signal, respectively, to the computer.

* Supported by Shanghai Leading Academic Discipline Project, Project No. J50503.
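The chain from sensor voltage to a numerical pressure reading can be illustrated with a short conversion routine. The resolution, reference voltage and sensor sensitivity below are assumptions made for the sketch; the paper does not specify the A/D board's parameters.

/* Illustrative conversion of an A/D sample to a pressure reading.
 * ADC_BITS, VREF and KPA_PER_V are assumed values, not the paper's. */
#include <stdio.h>

#define ADC_BITS   12
#define VREF       5.0     /* assumed full-scale voltage of the A/D board */
#define KPA_PER_V  100.0   /* assumed pressure-sensor sensitivity (kPa/V) */

static double counts_to_kpa(unsigned raw) {
    double volts = (double)raw / ((1u << ADC_BITS) - 1) * VREF;
    return volts * KPA_PER_V;
}

int main(void) {
    printf("raw 2048 -> %.1f kPa\n", counts_to_kpa(2048)); /* mid-scale sample */
    return 0;
}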

Fig. 1. Layout of the airbag test system: 1. Air Compressor; 2. Solenoid Valve; 3. Safety Valve; 4. Air Tank; 5. Fast Acting Valve; 6. Pressure Sensor; 7. Fixed Airbag; 8. A/D Board; 9. Pressure Sensor.

Airbag deployment test. The behaviors of the NAB system are investigated using the experimental device shown in Figure 1. The NAB system and the traditional airbag system are tested with the device under different loading conditions: (1) static deployment without impact, and (2) dynamic deployment with impact. The dynamic stiffness of the airbag system is estimated by dropping composite blocks onto the inflated airbag. Experimental results show that by adjusting the pressure alone, equivalent dynamic stiffness properties may be obtained with different types of airbags. This means that the NAB system has the potential to provide a load-displacement characteristic similar to the traditional airbag system's, although the NAB system requires much less gas.

III. NUMERICAL MODEL AND EXPERIMENTAL VALIDATION

NAB model. The NAB system is modeled with 18408 Belytschko-Tsay membrane elements, and material properties of Nylon 66 are assigned to the membrane elements. A contact interface is defined between the airbag and the dummy's head, neck, chest and hands; a self-contact interface is also defined within the airbag system. Simulations of the static and dynamic deployment of the NAB system were carried out with the LS-DYNA program to study the features of the inflation process, and the features of the airbag deployments were examined.

Static deployment. The high-speed films from the static airbag deployment are shown in Figure 3a; the airbag is mounted at an angle of 60 degrees to the horizontal plane. Figure 3b shows the simulation results, which are similar to those from the deployment tests.

Fig. 3. (a) The airbag during the inflation test; (b) simulation of the deployment process of the NAB, both at t = 0, 10, 20, 30 and 45 ms.

Sled crash test. To examine the actual behavior of the NAB system in protecting occupants, sled crash tests are carried out using a 50th percentile HYBRID-III dummy with a standard safety belt at an impact speed of 35 km/h. The tested NAB system is inflated from gas storage on the sled, and two energy-absorbing square metal beams of size 120 x 120 x 500 mm with a thickness of 1.2 mm are used. The acceleration signals of the dummy's head and of the test sled are measured and recorded; the measured acceleration peak value is 16 g at the dummy head center of gravity. Figure 4a shows the 50th percentile HYBRID III dummy response and contact with the NAB at 50 ms in the sled crash test.

Fig. 4. (a) Sled crash test of the NAB with the HIII dummy, at 50 ms; (b) simulation of the interaction of the NAB with the dummy, at 50 ms.

Virtual testing. In response to the side-effects of an airbag in low- and moderate-severity crashes, FMVSS 208, issued by NHTSA in May 2000, proposed that static OOP tests should be a mandatory requirement starting in 2003. These tests include performance requirements to ensure that airbags developed in the future do not pose an unreasonable risk of serious injury to OOP occupants [4]. In the FMVSS-208 NPRM, the two test positions for OOP situations, for the 5th percentile female "low risk deployment", are as shown in Figure 2: driver position 1 (~ISO 1) is to obtain maximum neck loading (membrane) and driver position 2 (~ISO 2) is to obtain maximum chest loading (punch-out) from the deploying airbag. Computer simulation was also conducted with the HYBRID-III dummy FE models. Then the virtual test results with DAB and NAB were compared, and the 5th percentile adult female dummy OOP virtual tests were used for the parametric analysis.

Fig. 2. (a) Out of position 1; (b) out of position 2.

From the virtual testing using the FE model, the dummy response and contact with the airbag at 50 ms is shown in Figure 4b, which is comparable with the crash test result; the magnitude and duration of the head acceleration curves are reasonably good. The time history plot of head accelerations measured from the test is presented and compared in Figure 5. The validated NAB module model will be applied in the OOP occupant simulation, as discussed in the next paragraph.

Fig. 5. The time history plots of the dummy's head acceleration from the sled crash test and from the simulation of the sled test.

Comparison of the deployment properties of NAB and DAB. To compare the deployment properties of the NAB and the normal DAB, and to assess the relative potential of different airbag designs to cause injury, two groups of static deployment simulations were conducted.

The first group of simulations measures the leading-edge speed of the normal DAB and the NAB. Here the normal DAB and the NAB have the same volume (40 L), use the same inflator parameters, and their leakage area was set to the same value (1152 mm²). Figure 6 shows the different leading-edge velocity results from the static deployment simulation. From the time history plot of the leading-edge velocity, the peak value of the normal DAB is 53.6 m/s at time 0.0225 s, while that of the NAB is 27.9 m/s at time 0.036 s. Reed et al. recorded ARS 3 (Abrasion Rating System) abrasions to human skin at a contact speed of 234 km/h (65 m/s), so the possible abrasion injury caused by the NAB is lower than ARS 3, and is lower than the possible abrasion injury caused by the DAB as well.

Fig. 6. Leading-edge velocity comparison.

In the second group of simulations, the normal DAB's volume is 60 L and the NAB's volume is 40 L, with the same inflator parameters. Figure 7 shows the volume history plots of the two types of airbag: the normal DAB reached its maximum volume at time 0.036 s, and the NAB at time 0.039 s. Figure 8 shows the pressure history plots; the peak pressure reached is 0.1445 MPa at about 0.037 s, so the NAB can use less gas to reach the same airbag inner pressure.

Fig. 7. Volume history plot.    Fig. 8. Pressure history plot.

IV. AIRBAG-DESIGN PARAMETRIC STUDY

Design variables and control levels. Some of the design parameters that might affect the airbag module performance in the driver's OOP conditions are: the airbag structure, the collocation and length of tethers, the vent hole and venting during inflation, the cover break-out force, the inflator tank pressure characteristics such as the slope and the peak, the mass flow rate, and the fabric material properties, e.g. modulus of elasticity, density and porous property, etc. [5].

The parametric study was carried out in two steps. In the first step, some initial design parametrics of the NAB's structure were studied, mainly to judge the influence of the tethers and the vent hole on the NAB; the collocation and length of the tethers and the size and location of the vent hole are selected as design variables [6]. The design variables and their control levels are shown in Table 1.

Table 1. Design variables and control levels

Design variable                          Level 1    Level 2   Level 3   Level 4
A = Tether collocation                   Col 1      Col 2     Col 3     Absence
B = Tether length                        100%       90%       80%       70%
C = Vent hole area                       1110 mm²   80%       70%       110%
D = Vent hole circumferential position   Pos 1      Pos 2     Pos 3     Pos 4
E = Vent hole radial position            Pos 1      Pos 2
F = Mass flow rate                       100%       70%

In the second step, based on the research results of the first step of the parametric study, the influence of the inflator characteristics and the fabric material properties is discussed; the design variables and their control levels are shown in Table 2.

Table 2. Design variables and control levels

Design variable                           Level 1   Level 2   Level 3
G = Inflator pressure peak (kPa)          120       160       200
H = Inflator pressure slope (ms)          0         4.5       6.5
I = Fabric material density (kg·m⁻³)      541       721       901
J = Fabric material porous property       0         50        100

Object function definition. According to the findings of biomechanics research, field automotive accidents and laboratory tests, the head, chest and neck are the most vulnerable body regions subject to injury while the airbag interacts with an OOP occupant. In order to evaluate the NAB's protection performance synthetically, the US NCAP injury index P_COMB and the neck injury criterion Nij are chosen as the object functions:

Object Function = MIN(P_COMB)    (1)

P_COMB presents both HIC and chest acceleration as only one function. It is formulated as follows [7]:

P_COMB = P_HEAD + P_CHEST - (P_HEAD x P_CHEST)    (2)

where P_HEAD and P_CHEST are defined as:

P_HEAD = 1 / (1 + EXP(5.02 - 0.00351 x HIC36))    (3)
P_CHEST = 1 / (1 + EXP(5.55 - 0.00693 x CHESTG))    (4)

Nij can present both the upper neck force and the upper neck moment; it is formulated as:

Nij = Fz / Fcritical + My / Mcritical    (5)

Simulation matrix. After formulation of the design problem, a design of experiments (DOE) analysis was conducted to design the test matrix and to provide enough representative test data. Traditional test work generally changes one design variable's value at a time while the other variables remain unchanged, so the number of experiments becomes very large and the mutually affecting relations of the variables cannot be obtained. The method of arranging experiments by orthogonal arrays permits many variables to change simultaneously; it can reduce the number of experiments greatly and makes it possible to estimate the effects of the variables more precisely. For the first step of the parametric study, the orthogonal array is defined as an L16 (4⁴ x 2²) matrix; for the second step, the orthogonal array is defined as an L9 (3⁴) matrix. In the virtual tests the 5th percentile female dummy is seated in three postures (in-position, out-of-position 1 and out-of-position 2), so the LS-DYNA simulations are required not 3⁴ x 3 times but only 27. Simulations are accomplished for each case of Table 3 and Table 4 using the validated LS-DYNA simulation models.

Parametric virtual test results. The virtual test results for the parametric study are shown in Table 3, which records, for each of the 27 runs (9 orthogonal-array cases x 3 dummy postures), the chest acceleration (Chest G), HIC36, the upper neck force Fz (N), the upper neck moment My (N·m), and the derived P_COMB and Nij.

Table 3. Virtual test results (dummy posture, run No. 1-9, Chest G, HIC36, Fz, My, Pcomb, Nij for the 27 simulations).

Analysis of results. The Design Exploration Tools of iSIGHT are used to analyze the previously mentioned virtual test results; the ANOVA, pareto plots and main effects graphs are recorded. ANOVA is the statistical analysis of the contribution of each factor, or of their interactions, to the variance of a response [8]; R in ANOVA is the coefficient of determination, in other words a measure of the accuracy of the model fit. The pareto plots indicate the relative effect of each design variable on occupant injury, and the main effects graphs indicate the desirable direction for each design variable.

Based on the above three-scenario parametric analysis, we can draw the conclusion that 'the pressure slope of the inflator' and 'the porous property of the fabric' are sensitive to occupant neck injury, while 'the pressure peak of the inflator' is sensitive to occupant head and chest injuries. The predicted desirable directions for the design variables are as follows: the pressure peak of the inflator should be chosen at a lower level, the pressure slope of the inflator at a lower or medium level, and the density and porous property of the fabric at a medium or higher level. In order to make the NAB provide good protection for occupants under all conditions, the choice of design variables should be considered synthetically, and the parameters of the NAB are adjusted in the direction of design improvement according to the above analysis.
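The object functions of equations (2)-(5) are straightforward to compute once the dummy responses are extracted from a simulation. In the following C sketch the logistic coefficients are those given in the text, while the critical neck values and the sample inputs are placeholders rather than figures from the paper.

/* Sketch of the object functions of eqs. (2)-(5): the combined NCAP
 * injury probability P_COMB and the neck criterion Nij. */
#include <stdio.h>
#include <math.h>

static double p_head(double hic36)    { return 1.0 / (1.0 + exp(5.02 - 0.00351 * hic36)); }
static double p_chest(double chest_g) { return 1.0 / (1.0 + exp(5.55 - 0.00693 * chest_g)); }

/* eq. (2): combined probability of head or chest injury */
static double p_comb(double hic36, double chest_g) {
    double ph = p_head(hic36), pc = p_chest(chest_g);
    return ph + pc - ph * pc;
}

/* eq. (5): axial neck force and moment normalised by load-case-specific
 * critical values */
static double nij(double fz, double fz_crit, double my, double my_crit) {
    return fz / fz_crit + my / my_crit;
}

int main(void) {
    double hic36 = 250.0, chest_g = 30.0;      /* placeholder responses        */
    double fz = 1200.0, my = 40.0;             /* N and N*m, placeholders      */
    double fz_crit = 4287.0, my_crit = 155.0;  /* assumed critical values only */
    printf("Pcomb = %.4f\n", p_comb(hic36, chest_g));
    printf("Nij   = %.4f\n", nij(fz, fz_crit, my, my_crit));
    return 0;
}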

V. RESULTS AND ANALYSIS

Effectiveness of NAB. To verify the NAB's protective efficiency for drivers of different statures, a series of virtual tests was carried out with the 5th percentile female dummy and the 50th percentile male dummy sitting in position 1 and position 2; the 60 L normal DAB and the 40 L NAB were used respectively, so a total of 8 simulations was carried out. The recorded injury criteria include: 15 ms HIC, 3 ms clip, chest deflection, neck tension, neck compression and Nij. The values normalized with respect to the percentages of injuries caused by the DAB are shown in Figure 9.

Fig. 9. NAB injury comparison with DAB: ratios of NAB to DAB injuries (HIC15, 3ms-Clip, Chest-D, Neck-T, Neck-C, Nij) for the 5th percentile dummy in OOP1 and OOP2 and the 50th percentile dummy in OOP1 and OOP2.

It is seen that all the injury ratios are less than 1, which indicates that the NAB could decrease the risk of OOP occupant injury and provide better protection.

VI. CONCLUSION

Results from computer simulation and sled crash tests show that the NAB system has good potential to provide protection to occupants as effective as the traditional airbag system's. At least two advantages may be obtained. Firstly, the NAB system requires less gas for the airbag to take the same amount of space; consequently, the inflation system can be made smaller and less expensive. Secondly, the system can deploy more rapidly to take as much space as possible between the occupant and the automobile interior, while the upper layer, which would impact the occupant, becomes softer; this is desirable to avoid unexpected injuries to the occupants. Numerical results also indicate that the NAB system would give less harm to the occupant if the ignition takes place properly. Nevertheless, the actual performance of the NAB system in a commercial product has to be examined with an actual inflation system activated by a matched ignition system. Further investigation will be performed subsequently to develop a new airbag product. The authors believe that the concept and prototype of the NAB system are worthwhile to exploit for the improvement of the airbag for occupant protection.

REFERENCES

[1] M. Khan and M. Moatamedi. A review of airbag test and analysis. International Journal of Crashworthiness, Volume 13, Issue 1, February 2008, pages 67-76.
[2] Jörg Hoffmann, Michael Freisinger, Manoj Mahangare, Peter Ritmeijer and Mike Blundell. Investigation Into the Effectiveness of Advanced Driver Airbag Modules Designed for OOP Injury Mitigation. In: Proceedings of the 20th International Technical Conference on the Enhanced Safety of Vehicles (ESV), paper No. 07-0319-O.
[3] Zhong Z. H. and He W. Sandwiched Tube-Type Airbag. Patent CN200410046609, 2005.
[4] Raj Roychoudhury, Dana Sun and Craig Hanson. 5th Percentile Driver Out of Position Computer Simulation. SAE paper No. 2000-01-1006.
[5] William Mu, Mohamed Hamid and Soongu Hong. Driver out-of-position injuries mitigation and advanced restraint features development. ESV 17th Conference, pages 1-5.
[6] J. Huang, Z. Zhong, L. Hu and D. Liu. Modeling and Simulation of Sandwiched Tube-Type Airbag and its Optimization using Design of Experiments. Proc. IMechE, Vol. 221, Part D: J. Automobile Engineering, pp. 153-163.
[7] Seybok Lee, et al. A Shape and Tether Study of Mid-Mounted Passenger Airbag Cushion Using Design of Experiments. 9th International MADYMO User's Meeting, Como, Italy, 2002.
[8] Neter, J., Wasserman, W. and Kutner, M. Applied Linear Statistical Models: Regression, Analysis of Variance, and Experimental Designs. Irwin Publishing, Boston, MA, 1990.

Hu Lin was born in Hunan province, P.R. China, on September 4. He received his doctoral degree in mechanical engineering from Hunan University in 2008, focuses his research on Automotive Safety and Electronics, and has published 25 articles. (Corresponding author; mobile phone: 08615821431148; E-mail: hulin888@sohu.com)

Performance Evaluation of PNtMS: A Portable Network Traffic Monitoring System on Embedded Linux Platform

Mostafijur Rahman, Zahereel Ishwar Abdul Khalib, R. B. Ahmad
School of Computer and Communication Engineering, Universiti Malaysia Perlis, P.O. Box 77, d/a Pejabat Pos Besar, 01007 Kangar, Perlis, Malaysia. Email: mostafijur21@hotmail.com

Abstract - The principal role of embedded software is the transformation of data and the interaction with physical world stimulus. The main concern in developing embedded software for network applications is the lack of published best-practice software architecture for optimizing performance by means of reducing protocol processing overhead. This paper presents the design, implementation, operation and performance evaluation of the Portable Network Traffic Monitoring System (PNtMS) on an embedded Linux platform. Because of the resource limitations in terms of processing power, memory usage and power consumption, the PNtMS was developed to make efficient usage of the limited resources. The system has been designed to capture network packet information from the network and to perform some statistical analysis. These data are then stored into log files, and the traffic information can be shown through a web browser or the onboard LCD. Results show that the PNtMS performs at par with an existing network protocol analyzer with minimal usage of RAM (578 KB).

Keywords - embedded Linux, network monitoring, single board computer.

I. INTRODUCTION

Rapid growth of hardware technologies has brought a large variety of smaller hardware architectures and platform orientations, which has been leading to a large demand for embedded software. Programmers are therefore focusing more and more on developing software on embedded systems to make it portable and platform independent. The principal role of embedded software is the transformation of data and the interaction with the physical world [1]. Embedded software is marked with the stamps of timeliness, concurrency, liveness, reactivity and heterogeneity [2]. The main problem in developing embedded software is inadequate software architecture for obtaining better performance by reducing protocol processing overhead.

The Internet has been growing dramatically over the past several years, in terms of the amount of traffic usage as well as connectivity, and it is used for increasingly diverse and demanding purposes, which leads to frequent changes in network status. It is therefore crucial to monitor the network in order to understand network behavior and to react appropriately; this will help us to design and provide more efficient networks in the future. The principal work of network monitoring software is collecting data from the Internet or intranet and analyzing those data. The common features of network monitoring software include providing data on the volume and types of traffic transferred within a LAN, the amount of traffic going through or coming from a system or application which is causing a bottleneck, the traffic generated per node, and the level of peak traffic [3].

Linux is a multi-tasking, multi-user, multi-processing operating system that can be purposely made for the required application and target hardware. According to a survey, commercial embedded Linux owns approximately 50 percent more share of the new project market than either Microsoft or Wind River Systems [4]. The Technology Systems (TS) [5] provides Single Board Computers (SBC) with the TSLinux operating system (OS), which attempts to be the optimized form of the kernel for a specified application; it is built to develop applications for a very small target that does not require a keyboard, floppy disks, or hard drives.

The primary goal of this work is to see how TSLinux copes with the limitations inherent in a low-end embedded platform, namely limited memory, a low-end processor (133 MHz) and storage of less than 1 GB, in producing a reliable embedded traffic monitoring system. In this paper, the implementation and performance evaluation of the PNtMS on the embedded platform will be discussed.

II. RELATED WORK

Numerous works have been done in the embedded system area. The proposed work here is to develop portable network monitoring and protocol analysis software on an embedded Linux board.

Work by Li and Chiang [6] proposed the implementation of a TCP/IP stack as a self-contained component, independent of operating system and hardware. For adapting the TCP/IP stack as a self-contained component for embedded systems, a zero-copy mechanism was incorporated to reduce protocol-processing overhead: data from the network card is received directly in the user buffer, and data from the user buffer is sent directly to the network card. It shows worst-case interrupt latencies of under 7 microseconds [10].

The navigation system for an autonomous underwater vehicle using the TS-7200 was developed by Sonia Thakur [7]. The objectives of that work were the implementation of a driver for the external ADC and the GPS receiver on a Linux SBC and the demonstration of such a setup in an autonomous navigation system. One of the main requirements of that work was to write the code as generically as possible so that it could be ported to other Linux-based SBCs.

Ahmad Nasir Che Rosli [8] implemented a face reader for biometric identification using an SBC (TS-5500). In his research the system was able to capture a face image, execute image preprocessing (such as color space conversion, grayscale modification and image scaling) and motion analysis, and perform biometric identification based on iris detection.

B. M. Ahmad and W. R. Mamat [9] implemented a web-based data acquisition system using a 32-bit SBC (TS-5500). In their research the system was able to take analog readings through sensors and convert them into digital data. It can be used to read analog sensors, for example in a sewer or septic early-warning system, to monitor a river or beach, and in large-scale areas such as agricultural and environmental fields. There, data transmission and data access are done by TCP or UDP connection and by web page respectively. The hardware platform setup and the device driver are expected to serve as a development platform for implementing a well-designed PNtMS.

III. TS-5400 SBC

Nowadays researchers focus their research on small devices; they often overlook the size, weight, cost, interchangeability and consistency of the hardware while trying to make systems portable and reliable with better performance. The Technology Systems (TS) provides different types of Single Board Computers, such as the TS-5000 and TS-7000 series. Among these models, the TS-5400 was chosen for this research because of its compatible architecture. The TS-5400 SBC runs on an AMD Elan520 processor at 133 MHz (approximately 10 times faster than 386EX-based products), with dimensions of 4.1" x 5.4". The AMD Elan520 was designed with a memory management unit (MMU) that supports Linux and many other operating systems; the core supports the 32-bit instruction set. The board has 16 MB of high-speed SDRAM, a 2 MB flash disk, dual autosensing 10/100 Mbps Ethernet interfaces, a compact flash (CF) card interface as IDE0 (the CF card in the socket appears as a hard drive to the operating system), an alphanumeric LCD interface, a matrix keypad interface on DIO2, and 3 PC/AT standard serial ports with 16-byte FIFOs. The TS-5400 is fanless, with a temperature range of -20° to +70°C and a power requirement of 5 V DC @ 800 mA. As a general-purpose controller, it provides a standard set of peripherals [5].

TSLinux is an open-source project and compact distribution based on GPL and GPL-like licensed applications, developed from "Linux From Scratch" and Mandrake. Although TSLinux is fairly generic, it is custom-made to be used on a Technologic Systems Single Board Computer and is unsupported in any other use [11]. The version of TSLinux used is 3.07a. The features of TSLinux include Glibc (V2.2.5), Kernel (V2.4.23), BASH, Telnet server and client, FTP server and client, an Apache web server with PHP, and other basic utilities. The total footprint is less than 18 MB (requiring a 32 MB or larger CF card). For development, TSLinux and DOS, being embedded distribution OSs, are installed onto the CF card and the onboard chip respectively, and the TSLinux providers supply development tools on their web site.

IV. SOFTWARE DEVELOPMENT

A. Preliminary setup

In the TS-5400, a device driver file TSKeypad is created at the /dev/SBC/ location; the TSKeypad file is used for the input function, and the keypad driver module needs to be mounted into the kernel. The keypad (4x4) and LCD panel (24x2) are used as I/O devices. To start the program automatically, the network file needs to be configured as DHCP for the eth0 NIC. To activate FTP and secure copy, the dropbear script is run as a daemon; it is used for security purposes. There are three ways to transfer files from the desktop PC to the TS-5400: FTP, ZMODEM, and secure copy.

B. Development environment

It is usual practice for most embedded system developments not to support onboard compilation, and the same goes for the target platform of this research: the TSLinux 3.07a operating system running onboard does not integrate a C compiler in its set of supported utilities, so the board requires cross compilation for its application development. Cross-platform compilation is the mechanism used to develop the firmware. On the host machine, minicom (a text-based modem control and terminal emulation program for Unix-like operating systems) is used to communicate with the target embedded system through the serial port. The communication between the development host and the target hardware is shown in Figure 1; this particular setup is common in embedded Linux development, but it is not the only possible setup that should work. Because the kernel version of the target OS (kernel 2.4.23) does not match the kernel version of the development platform (kernel 2.6.20-16), the compilation has to resort to chroot to ensure the compatibility of the developed application with the C libraries.

Figure 1. Embedded Linux development host and target system interconnection setup [12].

C. Components involved in the PNtMS

Figure 2 shows the component elements of the PNtMS. The structural breakdown of the PNtMS can be generically segregated into three parts, which can be mapped onto different layers of the OS. The first part is the probe, which captures all incoming packets from the network. This part operates at the network layer and captures data physically through the Network Interface Card (NIC); basically, it forces the NIC to run in promiscuous mode. The probe itself does not process packet headers at all, but transfers packets to storage without any packet loss. The next (kernel) layer contains the functionality for data acquisition and places the data into a special memory region (called the kernel buffer) to be read and used by a separate user-application program in the third layer [15]. This functionality is realized through the libpcap library package [14]. In the analyzer part, the packet filter extracts each packet's header information and stores it into a data buffer for further analysis; the gathered packet headers are analysed based on the header information of the TCP/IP protocol suite [13]. Because of this, and to adapt the PNtMS components to a resource-limited embedded system, pre-allocated memory is used rather than allocating buffers at run time.

Figure 2. Components involved in the PNtMS.
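A minimal sketch of the probe and filter described above, written against the same libpcap package the PNtMS relies on, is given below. The device name, the 1000-packet run and the counters are illustrative; the sketch assumes plain Ethernet framing carrying IPv4 and is not the PNtMS source.

/* Probe/filter sketch using libpcap: promiscuous capture with a 94-byte
 * snaplen (headers only, as in the PNtMS), counting transport protocols. */
#include <pcap.h>
#include <stdio.h>
#include <netinet/ip.h>
#include <net/ethernet.h>

static unsigned long n_tcp, n_udp, n_icmp, n_other;

static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes) {
    (void)user;
    if (h->caplen < sizeof(struct ether_header) + sizeof(struct ip)) return;
    const struct ip *iph = (const struct ip *)(bytes + sizeof(struct ether_header));
    switch (iph->ip_p) {
    case IPPROTO_TCP:  n_tcp++;  break;
    case IPPROTO_UDP:  n_udp++;  break;
    case IPPROTO_ICMP: n_icmp++; break;
    default:           n_other++;
    }
}

int main(void) {
    char err[PCAP_ERRBUF_SIZE];
    /* promiscuous mode, 94-byte snaplen, 1 s read timeout */
    pcap_t *p = pcap_open_live("eth0", 94, 1, 1000, err);
    if (!p) { fprintf(stderr, "pcap: %s\n", err); return 1; }
    pcap_loop(p, 1000, on_packet, NULL);   /* capture 1000 packets */
    printf("tcp=%lu udp=%lu icmp=%lu other=%lu\n", n_tcp, n_udp, n_icmp, n_other);
    pcap_close(p);
    return 0;
}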

D. Operation of the system

Our focus is to reduce memory usage and CPU processing; we therefore tried to use limited memory while displaying as much traffic information as we can. The PNtMS program captures packets, stores them into a buffer, and analyzes them. The available hosts are selected and their information updated, and the host records are then sorted according to their total captured bytes. After analysis, all host information is saved into a file for monitoring through a web browser.

The system is controlled by the 4x4 matrix keypad, which is used to set input data such as the bandwidth threshold (Kbps) and the time interval; when the measured bandwidth is greater than the configured one, that is, when the traffic peaks, an alarm is displayed on the LCD. The keypad can also control the program, for example starting and stopping it. Some statistical data are shown through the 24x2 LCD panel, such as the system IP, Ethernet status, process status, disk usage and memory usage. Some special options, such as shutting down and restarting the system, are set through the web browser, which also offers a browsable address that shows the total traffic information; the browser interface was developed using HTML and PHP code. Figure 3 shows the complete system setup of the PNtMS with the TS-5400, and Figures 4 and 5 show the input time interval and some statistical information of the available traffic and hosts.

Because of the memory limitation, a maximum of 31000 packets can be captured and more than 200 hosts' information can be stored in every interval; each packet record and each host record are limited to 94 and 66 bytes respectively.

Figure 3. The complete system setup of the PNtMS with the TS-5400.
Figure 4. Web-based traffic statistics on the TS-5400.
Figure 5. Web-based individual host statistics on the TS-5400.
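The fixed-size, pre-allocated host table implied by these limits can be sketched as follows; the cap of 200 hosts and the sort by total captured bytes follow the description above, while the record layout itself is our assumption.

/* Sketch of a pre-allocated host table: fixed-size records, capped at
 * 200 hosts per interval, sorted by total captured bytes. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define MAX_HOSTS 200

struct host {
    uint32_t ip;       /* IPv4 address            */
    uint64_t bytes;    /* bytes seen this interval */
    uint32_t packets;
};

static struct host table[MAX_HOSTS];  /* allocated once, never realloc'd */
static int n_hosts;

static void account(uint32_t ip, uint32_t len) {
    for (int i = 0; i < n_hosts; i++)
        if (table[i].ip == ip) { table[i].bytes += len; table[i].packets++; return; }
    if (n_hosts < MAX_HOSTS) {        /* new hosts are dropped once full */
        table[n_hosts].ip = ip;
        table[n_hosts].bytes = len;
        table[n_hosts].packets = 1;
        n_hosts++;
    }
}

static int by_bytes_desc(const void *a, const void *b) {
    const struct host *x = a, *y = b;
    return (y->bytes > x->bytes) - (y->bytes < x->bytes);
}

int main(void) {
    account(0x0A000001, 1500); account(0x0A000002, 400); account(0x0A000001, 900);
    qsort(table, n_hosts, sizeof table[0], by_bytes_desc);
    for (int i = 0; i < n_hosts; i++)
        printf("%08X  %llu bytes  %u pkts\n", (unsigned)table[i].ip,
               (unsigned long long)table[i].bytes, table[i].packets);
    return 0;
}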

V. RESULT AND PERFORMANCE EVALUATION

This section presents the results and performance evaluation of the PNtMS, after being integrated into the TSLinux OS, against Wireshark (V0.99.4). We compared our embedded software with Wireshark because it is renowned; even though Wireshark is a high-performance protocol analyzer with more features, we want to evaluate the performance of the PNtMS against it. The traffic measurement and analysis were carried out at the Embedded Kluster Lab, Universiti Malaysia Perlis. The experiment assumes that the traffic is typical Internet traffic, so that the characteristics of the analysis results can be applied to the whole Internet. Table 1 shows the experimental platform used to evaluate the software.

Table 1. Experimental platform

Feature                  TS-5400                                PC
Processor                AMD Elan520 at 133 MHz                 Intel(R) Pentium(R) 4 CPU, 2.66 GHz
Memory/storage           16 MB RAM (avg. 14.23 MB already       256 MB RAM, 80 GB secondary storage
                         used), 1 GB storage
Architecture type        Embedded SBC                           Desktop PC
Dimensions               4.1" x 5.4"                            larger than the TS-5400
Weight and portability   less than 1 kg, portable               more than 1 kg, not portable
OS                       TS-Linux 3.07a                         UBUNTU 7.04
Linux kernel             2.4.23                                 2.6.20-16-generic
NIC                      10/100 Ethernet                        Realtek RTL8139 Family PCI Fast Ethernet
I/O interface            Keypad (4x4) and LCD (24x2)            Keyboard and monitor

All packet headers of the packets transferred on the network during a 1 h period on September 15, 2008 were captured from the same network. The duration was rather short, but we carried out several experiments at different times and the results showed very similar traffic characteristics. Note that in Wireshark the per-packet size limit is 65535 bytes, so it can store more information about a packet than the PNtMS's 94-byte records. Table 2 summarizes the performance comparison.

Table 2. Performance comparison

                        PNtMS (TS-5400)     Wireshark 0.99.4 (PC)
Compiled with           gcc (2.95), libpcap gcc 4.1.2, with GTK+ 2.x, Glib 2.x,
                                            libpcap 0.9.x, GnuTLS, Gcrypt, ADNS,
                                            libz, libpcre, PortAudio <= V18
Execution type          Sequential          Sequential
Avg. CPU usage (%)      4.725               6.150
Avg. RAM usage (%)      3.675               14.175
Avg. packets/sec        4.347               7.347
Avg. packet size (B)    470.883             439.958
Avg. data rate          0.378 KBps          2.862 KBps

Figure 6 clearly demonstrates that the PNtMS can capture three types of network traffic, reporting the protocol capture state as percentages: (a) network layer, (b) transport layer and (c) application layer protocols. In Figure 6b the percentage of TCP (65%) traffic dominates, as the majority of communication takes place through the TCP protocol; a small number of ICMP packets (62 packets) are also captured and put into the overall statistics.

Figure 6. Different layers' protocol statistics: (a) network layer protocol, (b) transport layer protocol, (c) application layer protocol.
At the same time, Wireshark captured network and transport layer protocols such as IP (76.88%), TCP (59.69%), UDP (15.74%) and ICMP (0.96%), and application layer services such as HTTP (25.26%), SSH (17.90%), Netbios (11.83%) and DNS (0.97%). This is only a glimpse of what the PNtMS presents while performing the protocol analysis.

VI. CONCLUSION

Network performance measurement is an important aspect of network management. In this article, we compared and analyzed the PNtMS against a desktop-based high-performance network analyzer (Wireshark). From the analysis we can conclude that the PNtMS was successfully developed within limited usable memory (578 KB), processor and storage; the evaluation and validation of the PNtMS follow from the implementation and protocol analysis. Our work focuses on how to obtain better performance using the limited resources of the TS-5400, on showing more statistical analysis results, and on making the PNtMS user friendly. A challenging future task is to use a database on the CF card (1 GB) for monitoring long-term traffic history. The most challenging future work is to extend the PNtMS so that it can be used to monitor and analyze switched networks such as Fast Ethernet and Gigabit Ethernet, and to make it perform better.

REFERENCES

[1] E. A. Lee, "Embedded Software," in Advances in Computers, vol. 56, M. Zelkowitz, Ed. London: Academic Press, 2002.
[2] Minghui, Jing and Xuejian, "A heterogeneous evolutional architecture for embedded software," in Computer and Information Technology, CIT 2005, The Fifth International Conference on, 2005.
[3] Hong, Kwon and Kim, "WebTrafMon: Web-based Internet/Intranet network traffic monitoring and analysis system," Computer Communications, vol. 22, pp. 1333-1342, 1999.
[4] D. Geer, "Survey: Embedded Linux Ahead of the Pack," Distributed Systems Online, IEEE, vol. 5, October 2004.
[5] Technologic Systems Inc., "Technologic System Web Page," PC/104 Single Board Computers and Peripherals for Embedded Systems, 2008.
[6] C. Yun-Chen and C. Mei-Ling, "LyraNET: a zero-copy TCP/IP protocol stack for embedded operating systems," in Embedded and Real-Time Computing Systems and Applications, 2005. Proceedings. 11th IEEE International Conference on, 2005, pp. 123-128.
[7] S. Thakur and J. Conrad, "An embedded Linux based navigation system for an autonomous underwater vehicle," in SoutheastCon, 2007. Proceedings. IEEE, 2007, pp. 237-242.
[8] A. N. C. Rosli, M. Rizon, A. Y. M. Shakaff and M. Juhari, "Face Reader for Biometric Identification using Single Board Computer and GNU/Linux," in Proceeding of the 2nd International Conference on Informatics, Petaling Jaya, Selangor, Malaysia, 2007.
[9] B. M. Ahmad and W. R. Mamat, "Data Acquisition System Using 32 Bit Single Board Computer: Hardware Architecture and Software Development," in International Conference on Robotics, Vision, Information, and Signal Processing, Penang, Malaysia, 2005, pp. 901-905.
[10] Breinich, "Online book on real-time simulation of a satellite link using a standard PC running Linux as operating system," Salzburg, Austria, 1997.
[11] Technologic Systems Inc., "TS Product," Current ed.
[12] C. Hallinan, Embedded Linux Primer: A Practical Real-World Approach. New York: Prentice Hall PTR, 2007.
[13] Lynch and Rose, Internet System Handbook. Reading, MA: Addison-Wesley, 1993.
[14] S. McCanne, C. Leres and V. Jacobson, "Tcpdump," in Manual Page.
[15] T. Kushida, "An empirical study of the characteristics of Internet traffic," Computer Communications, vol. 22, pp. 1607-1618, 1999.

PB-GPCT: A Platform-Based Configuration Tool

Yan Huiqiang, Shi Kangyun, Tan Runhua, Lu Fei
Institute of Design for Innovation, Hebei University of Technology, Tianjin, China
E-mail: yanhuiqiang@scse.hebut.edu.cn

Abstract

In this paper the Platform-based Generic Product Configuration Tool (PB-GPCT) is presented, which was developed by the Institute of Design for Innovation, Hebei University of Technology. Being a structure-based and domain-independent system, the PB-GPCT can be used in different companies without any modification. Comparing with other configurators, the notion of platform is introduced into the PB-GPCT. The PB-GPCT is designed to be used by sales engineers for the configuration of complex products. The configuration model, configuration knowledge and configuration constraints are discussed in this paper.

1. Introduction

With competition in industry becoming more and more intense, companies have to support Mass Customization to keep costs low and meet individual requirements at the same time, so that customers can get customized products in tolerable time. Product family design is the most important method for implementing Mass Customization, and product families are usually designed based on a platform.

Meyer and Lehnerd define a platform as a "set of common components, modules, or parts from which a stream of derivative products can be efficiently created and launched" [1]. Muffato defines platform similarly as "a relatively large set of product components that are physically connected as a stable sub-assembly and are common to different final models" [2]. There are several methods for designing a platform: Simpson et al. give two types of platform design methods, (1) top-down and (2) bottom-up [3]; another way of categorizing platform design methods is to distinguish between module-based and scale-based platforms [4]. For brevity, the design approach of the platform is not discussed in this paper. In terms of the customer's requirements, the platform is determined first, and then different products are derived by selecting different components which obey the constraints among platforms and components. So the platform is the set of components and the constraints among them from which different products can be derived. The relationship between product families and platforms is shown in Figure 1.

Figure 1. Relationship between platform and product family: given user requirements, the products of a product family are derived from platforms (components plus constraints).

Mittal & Frayman define configuration as a form of design which selects an assembly of components from a set of pre-defined components to meet the customer's requirements [5]. Configuration is also described as selecting objects and their relations from a set of objects and a set of relations according to the customer's requirements [6]. Both definitions emphasize that a product is composed of components and the relationships among components. Corresponding to these definitions, the configuration model is defined as a set of pre-designed components, rules on how these can be combined into valid product variants, and rules on how to achieve the desired functions for a customer [8]. A product configurator is defined as a tool which supports the product configuration process so that all the design and configuration rules expressed in a product configuration model are guaranteed to be satisfied [7]. To achieve this goal, knowledge-based configuration methods are employed. The configuration model is the product model which is scoped within the conceptualization of the configuration domain.

This paper is organized into 4 sections. In section 1, the concept of platform-based configuration is introduced. In section 2, the techniques relevant to configuration are discussed. In section 3, the maintenance and analysis of constraints is presented. Lastly, the implementation of the PB-GPCT and future work are discussed.

2. The techniques relevant to configuration

2.1 Configuration model definition

PB-GPCT defines the configuration model (CM) as:

CM = ({P1, ..., Pm}, {C1, ..., Cn}, {R1, ..., Rs})

where Pi is a platform composed of component instances and component types, Ci is an element of the product model which is called a component type, and Ri are the constraints among platforms and component types. Configuration is the process of instantiating the configuration model. Corresponding to this, configuration is defined as:

Solution space = f(CMi, {R1, ..., Rn})

where CMi is the configuration model whose platform has been designated in terms of the customer's requirements. Configuration tasks are summarized to have the following characteristics [10]: (1) a set of components in the application domain; (2) a set of relations between the domain components; (3) control knowledge about the configuration process.

Component type. The component type is an element of the product. There are two kinds of component types. One is the virtual component, which does not have a physical correspondence; the properties of a virtual component are assigned by sales engineers at the configuration phase. The other is the physical component, which typically has a bill of material associated with it; the properties of a physical component are given by product experts before configuration.

Component instance. A component instance is a component type which has been instantiated by product experts: when all the properties of a component are given values, we call it a component instance. From the viewpoint of product family design, a component instance is a mass-produced component. Virtual components and physical components both have component instances associated with them.

Relations. The taxonomies of relations are Part-of and Is-a. ComponentB is-a componentA means that componentB is a kind of componentA, e.g. IDE-Unit and SCSI-Unit are two kinds of HD-Unit. ComponentB part-of componentA means that componentB is a part of componentA, e.g. VideoCard and HD-Unit are parts of MotherBoard.

Rules. Rules include not only the constraints among component types but also the constraints among component instances. The taxonomies of constraints are unary, binary, global and incremental constraints [9].

Platform. A platform consists of common elements from which a range of products can be derived. For a configurable PC, for example, the platform elements are the MotherBoard and the CPU: the sales engineer first selects the platform (determining the MotherBoard and CPU) according to the customer's requirements, and then, based on the platform, the other components are chosen.

2.2 Configuration model representation

UML (Unified Modeling Language) is the leading industrial object-oriented modeling language for software engineering and is also widely accepted for describing product models, so the PB-GPCT selects UML to describe the configuration model. OCL (Object Constraint Language) is a formal language used to describe expressions on UML models. CCL (Configuration Constraint Language) is based on OCL and is used to describe the constraints of the configuration model. Part of the configuration model of a configurable PC, expressed in UML and CCL, is shown in Figure 2.

Figure 2. Part of the configuration model of a configurable PC.

Configuration knowledge. Corresponding to this viewpoint, the configuration knowledge is categorized into component knowledge, platform knowledge, constraint knowledge and control knowledge. The configuration knowledge of PB-GPCT is shown in Figure 3.

Component knowledge. It is the element of the product; it describes the product architecture utilizing UML.
Component instance knowledge. As stated above, a component instance is a mass-produced component.
Platform knowledge. It describes the platforms, as defined above.
Constraint knowledge. It describes the rules among component types utilizing CCL.
Control knowledge. It is used to control the process of configuration; in terms of configuration policy, different control knowledge will be required.

According to Figure 3, all the knowledge is managed by product experts except for the control knowledge. The main tables which store the configuration knowledge base are shown in Figure 4.

Figure 3. Configuration knowledge
Figure 4. Main tables storing the configuration knowledge base

3. Configuration constraints
How to write and evaluate constraints is the core of PB-GPCT. The maintenance and valuation of constraints is shown in Figure 5.

Figure 5. Maintenance and valuation of constraints

3.1. Edit constraints
The Object Constraint Language (OCL) is an expression language based on first-order logic which enables the definition of constraints on object-oriented models. However, writing constraint expressions in OCL is an error-prone task. CCL is a sub-collection of OCL which can be written easily using the edit windows shown in Figure 6. Compared with OCL, the following grammars of OCL [11] are not supported by CCL:
Let expressions. The let expression allows one to define a variable that can be used in the constraint.
Tuples. A tuple consists of named parts, each of which can have a distinct type.
Messages. These are used to specify the communication between classes.

Figure 6. CCL constraint edit windows

The following are two constraint examples:
Constraint1: MotherBoard.port = HD-Unit.port
Constraint2: MotherBoard.port = CPU.port
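The paper does not give an executable form of constraint checking, but the idea behind Constraint1 and Constraint2 can be sketched directly. In the rough Python illustration below, the configuration dictionary, the attribute values and the function name are all hypothetical, not part of PB-GPCT.

# Minimal sketch: checking CCL-style attribute-equality constraints
# against the component instances chosen during configuration.
def satisfied(config, left, right):
    lcomp, lattr = left.split(".")
    rcomp, rattr = right.split(".")
    return config[lcomp][lattr] == config[rcomp][rattr]

config = {"MotherBoard": {"port": "IDE"},
          "HD-Unit": {"port": "IDE"},
          "CPU": {"port": "IDE"}}
constraints = [("MotherBoard.port", "HD-Unit.port"),   # Constraint1
               ("MotherBoard.port", "CPU.port")]       # Constraint2
print(all(satisfied(config, l, r) for l, r in constraints))  # True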

3.2. Constraint patterns
Writing constraint expressions is a time-consuming task. Moreover, there will be numerous constraints for complicated products in a configuration model, and contradictory constraints may be defined inadvertently. To address these issues, PB-GPCT provides a repository of constraint patterns that encapsulates the expertise needed to add constraints to a model easily and quickly, though users can still write constraints flexibly using the edit window shown in Figure 7. Constraint patterns are predefined constraint expressions that can be instantiated and subsequently configured for developing constraints in a concise way. The taxonomies of constraint patterns are Atomic Constraint Patterns and Composite Constraint Patterns [12]. The following patterns are supported now: (1) the AttributeValueRestriction pattern; (2) the UniqueAttributeValue pattern; (3) the Exists pattern; (4) the ForAll pattern; (5) the IfThenElse pattern. A future extension might support a constraint consistency check.

3.3. Analysis of constraints
After the constraints have been input, PB-GPCT has to ensure that their form is correct, through lexical analysis and syntactic analysis.
(1) Lexical analysis. It distinguishes between the identifiers and the keywords of CCL; the identifiers are component type names, component instance names and the properties of the component types. The result of this process is a sequence of tokens which becomes the input of the syntactic analysis.
(2) Syntactic analysis. The task of syntactic analysis is to determine whether the constraint expressions input by the user are correct. PB-GPCT employs the LL(1) [13] approach, because left recursion and backtracking do not exist in the grammars of CCL. For brevity, the details of CCL are not discussed in this paper.
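As a rough illustration of the two analysis steps, the sketch below tokenizes a constraint and checks it against a toy grammar, constraint ::= ident '.' ident '=' ident '.' ident, by recursive descent with one token of lookahead, which is all LL(1) needs when the grammar has no left recursion. The grammar is a simplification assumed for illustration; the real CCL grammar is not given in the paper.

import re

# Lexical analysis: split a constraint into identifier/operator tokens.
def tokenize(text):
    return re.findall(r"[A-Za-z][\w-]*|[=.]", text)

# Syntactic analysis (LL(1) recursive descent) for the toy grammar
#   constraint ::= ref '=' ref ;  ref ::= ident '.' ident
def parse(tokens):
    pos = 0
    def expect(pred):
        nonlocal pos
        if pos < len(tokens) and pred(tokens[pos]):
            pos += 1
            return True
        return False
    ident = lambda t: t not in "=."
    def ref():
        return expect(ident) and expect(lambda t: t == ".") and expect(ident)
    ok = ref() and expect(lambda t: t == "=") and ref()
    return ok and pos == len(tokens)

print(parse(tokenize("MotherBoard.port = HD-Unit.port")))  # True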

3.4. Valuation of constraints
At the configuration phase, PB-GPCT has to evaluate the configuration items selected by the user, to ensure that they are consistent with the constraints. To this end, the constraints are first transformed into first-order logic sentences [14]; the first-order logic sentences are then transformed into a specific language which is executed to evaluate the constraints.

4. Implementation
The configurator PB-GPCT employs the popular language C#, which is more efficient than Java, and uses SQL Server 2000 as the database. In order to support extensibility and upgradeability, a plug-in architecture is employed, since plug-in architectures can quickly customize applications by mixing and matching plug-ins, or by writing new plug-ins for missing functions. Constraint analysis and constraint evaluation are the main plug-in components of the system. The main GUI is shown in Figure 7.

Figure 7. The system of PB-GPCT

PB-GPCT has been applied at TIANJIN NO. 2 MACHINE TOOL CO., LTD. Using this software, the average designing time has declined from more than one month to about one week.

5. Conclusions and future work
In this paper, the theory of PB-GPCT is presented. Another task for the future will be to support 3-D configuration, which would make WYSIWYG (What You See Is What You Get) feasible in the system.

6. Acknowledgment
This is a project supported by the Natural Science Foundation of Tianjin City (07JCZDJC08900) and the National Natural Science Foundation of China (50675059).

References
[1] Meyer, M. H., and Lehnerd, A. P. The Power of Product Platforms. The Free Press, New York, NY. ISBN 0648-82580-5.
[2] Muffato, M. Introducing a platform strategy in product development. International Journal of Production Economics, 60-61, 1999, pp. 145-153.
[3] Simpson, T. W. Product platform design and customization: Status and promise. AI EDAM, no. 1 (special issue: Platform product development for mass customization), Jan. 2004.
[4] Simpson, T. W., Maier, J. R., and Mistree, F. Product platform design: method and application. Research in Engineering Design, vol. 13, 2001, pp. 2-22.
[5] Mittal, S., and Frayman, F. Towards a generic model of configuration tasks. In Proceedings of the 11th IJCAI, Morgan Kaufmann, San Mateo, CA, 1989, pp. 1395-1401.
[6] Guenter, A., and Kuehn, C. Knowledge-Based Configuration - Survey and Future Directions. In XPS-99 Proceedings, Lecture Notes in Artificial Intelligence No. 1570, Springer-Verlag, Wuerzburg, Germany, 1999.
[7] Hedin, G., Ohlsson, L., and McKenna, J. Product configuration using object-oriented grammars. In Proceedings of the 8th International Symposium on System Configuration Management (SCM-8), LNCS 1439, Springer-Verlag, Brussels, 1998, pp. 107-126.
[8] Oldham, K. Modelling Knowledge used in the Design of Hosiery Machines. In Proceedings of the 33rd International MATADOR Conference, 2000, pp. 284-293.
[9] Altuna, A., and Cabrerizo, A. Co-operative and Distributed Configuration. NOD 2004, Erfurt, Germany, September 27-30, 2004.
[10] Guenter, A., and Kuehn, C. Knowledge-Based Configuration - Survey and Future Directions. In XPS-99 Proceedings, Lecture Notes in Artificial Intelligence No. 1570, Springer-Verlag, Wuerzburg, 1999.
[11] OMG. Object Constraint Language Specification, 2006.
[12] Wahler, M., Koehler, J., and Brucker, A. Model-Driven Constraint Engineering. Electronic Communications of the EASST, vol. 5, 2006.
[13] Chen Ying. Principles of Compilation. Beijing, 2000.
[14] Felfernig, A., Friedrich, G., and Jannach, D. Generating product configuration knowledge bases from precise domain extended UML models. In Proceedings of the Conference on Software Engineering and Knowledge Engineering (SEKE 2000), Chicago, USA, July 2000, pp. 93-98.

A feasibility study on hyperblock-based aggressive speculative execution model

Ming Cong, Yongqing Ren, Canming Zhao, Jun Zhang, Hong An
Department of Computer Science and Technology, University of Science and Technology of China, Hefei, 230027, China
Key Laboratory of Computer System and Architecture, Chinese Academy of Sciences, Beijing, 100080, China
mcong@mail.ustc.edu.cn, {renyq, zcm, junzh}@mail.ustc.edu.cn, han@ustc.edu.cn

Abstract: The speculative execution model, which executes sequential programs in parallel through speculation, is an effective technique for making better use of growing on-chip resources and exploiting more instruction-level parallelism in applications. However, as speculation in high-ILP processors becomes more aggressive, efficiency is still limited by misspeculation penalties, high communication overheads, and so on. This paper focuses on the analysis of the hyperblock-based aggressive execution model, in the aspects of both control dependences and data dependences. We evaluate the feasibility of the aggressive speculative execution model on 8 applications from SPEC2K. Our experiments show that most applications, especially SPECFP applications, have good predictability, and that high prediction accuracy on control flow can be obtained using hyperblock-based branch prediction mechanisms. Furthermore, we analyze the factors which impact the expected prediction depth, and find that the depth depends more on the application itself than on the predictor. In the view of data flow, we propose a quantitative analysis of data dependence under the hyperblock-based execution model, and analyze the distributions of data dependences between hyperblocks and their impacts on the depth of prediction.

Keywords: hyperblock; speculative execution; prediction; control dependence; data dependence

I. INTRODUCTION
Modern CMOS technology brings an increasing number of transistors onto one chip, so how to effectively utilize the growing resources and exploit more parallelism to accelerate applications is an urgent problem for computer architects. Speculative execution [1], which executes programs aggressively, has become a mainstream technique for reducing the impact of dependences in high-performance microprocessors. To effectively exploit instruction-level parallelism (ILP) in programs, the control-flow and data-flow constraints inherent in a program must be overcome: dependences between instructions need to be detected and avoided to prevent pipeline stalls, among which control dependence and data dependence are the most important.

The block-based execution model [4] has been proposed to enlarge the instruction window. A hyperblock [2][3] is a set of predicated basic blocks combined by the compiler, in which control flow only enters from the top but may exit from one or more locations; with hyperblocks, a larger instruction window and more ILP can be achieved than with basic blocks. Recent works on computer architecture, such as TRIPS and Multiscalar, use block-atomic execution (tasks in Multiscalar [4]), in which each block is fetched, executed, and committed atomically, so that the processor behaves like a conventional processor with sequential semantics at the block level. Similar to multi-issue in superscalar processors, the block-based execution model utilizes additional resources to execute more blocks simultaneously on the processor substrate, with all but one executing speculatively, which may achieve high ILP and high resource utilization. However, the number of mis-speculations increases with the growing number of in-flight instructions and blocks, and the accompanying communication overheads and roll-back penalties cannot be neglected. Obviously, the feasibility of an aggressive execution model depends largely on the effectiveness and prediction accuracy of speculation.

This paper focuses on analyzing the feasibility of the aggressive speculative execution model and finding an appropriate degree of "aggressiveness" under the hyperblock-based execution model. We concentrate on finding a tradeoff and the maximum potential of aggressive execution that can be exploited. The rest of this paper is organized as follows. Section II describes the aggressive execution model in detail and analyzes it in the aspects of both control dependences and data dependences. Section III experimentally evaluates and analyzes the feasibility of aggressive execution, corresponding to the parts in Section II. Section IV introduces related work on aggressive execution models. Finally, in Section V we make our conclusions.

II. HYPERBLOCK-BASED AGGRESSIVE EXECUTION MODEL
Current research on ILP processors focuses on exposing more inherent parallelism in an application to obtain higher performance. In the aspect of control flow, we evaluate the performance of three branch predictors, analyze the factors affecting aggressive speculation on control flow, and estimate the expected prediction depth. In the view of data flow, we analyze the characteristics of control dependences and data dependences between adjacent hyperblocks, and propose a quantitative analysis method to detect data dependences under the hyperblock-based execution model.

A. Speculative execution on the control flow
1) Differences between hyperblock branch predictors and conventional predictors
Branch predictions are made based on branch history information. Branch predictions in the hyperblock-based model have high parallelism, but their mechanisms differ from those in conventional superscalar processors in the following aspects. First, in the superscalar model each branch has only one exit, so one bit of exit information is enough to represent taken or not taken (T/NT); in the block-based model each block has several exit points, so some bits of exit ID or of the branch target address must be reserved as the exit information. Correspondingly, the branch behaviors in the History Register Table (HRT) are replaced by the recent exit history of blocks. Secondly, predicting the exit taken out of multiple possible exits is a multi-way branching problem, so the accuracy of the predictors depends on the frequency of the historical representation of previous branch target addresses. Thirdly, in RISC architectures a dedicated adder in the fetch mechanism and the branch target buffer (BTB) can be used to compute PC-relative target addresses before they are computed by the ALU(s) in the execution stage, and the type of a branch instruction can easily be obtained from a pre-decoder; in the block-based model, however, the type of the exit points of a block is hard to obtain before the block commits, and the variable block sizes and the different target addresses to which an exit may correspond force us to predict the target addresses of all exit points. Conventional methods usually use a BTB and a return address stack (RAS), but using them unchanged here would degrade prediction accuracy and increase complexity, so we need a mechanism to predict the type of the exit points.

2) Design space of hyperblock-based branch predictors
In this section we describe the design space of hyperblock-based branch predictors in detail. Based on conventional branch predictors and the characteristics of hyperblocks, we consider a two-level predictor [5] which predicts the first branch that will be taken in a hyperblock: the first level predicts the exit point, and the second level then produces the branch target address, as shown in Figure 2.

Figure 1. Structure of the BTB
Figure 2. Two-level hyperblock-based branch predictors

a) Exit predictor. Corresponding to the branch behaviors of conventional branch predictors, we replace the T/NT bits stored in the Pattern History Table (PHT) with the exit numbers of each block. The predictors for exits [7] can be organized around the following methods.
Global predictor: the global predictor indexes a table of bimodal counters with the recent global history combined with branch instruction addresses to obtain the branch behavior.
Local predictor: the local branch predictor keeps two tables. One is a local BHT, indexed by the low-order bits of each branch instruction's address, which records the exit history of the N most recent executions of each branch. The other is a PHT, indexed by the value generated from the branch history in the BHT. Local prediction is slower than global prediction because it requires two subsequent table lookups for each prediction, but it may be favorable for deep prediction of a certain local branch.
Tournament predictor: since different branches may be predicted better with either local or global techniques, the tournament predictor uses a choice predictor to dynamically select the better method between the two predictions for each branch. The processor trains counters to prefer the correct prediction whenever the local and global predictions differ; the choice prediction is made from a table of 2-bit prediction counters indexed by the path history. It is nearly as accurate as the local predictor and almost as fast as the global predictor.

b) Block target address prediction. The block branch target address determined by the exit instruction is the starting block address of the next task. We use both the predicted exit and the branch address to determine the target, which is easy and versatile, so we only use the BTB for this prediction. As shown in Figure 1, the BTB handles the prediction of branch and call exits: it is indexed by the block address and the exit ID predicted by the exit predictor, and each BTB entry maintains the target addresses of several exits together with hysteresis bits. The RAS predicts the addresses of return exits once the types of the exits are known; a return address stack used for return address prediction makes the scheme more accurate. In addition, we use two HRTs, one for updating the prediction information and the other for recovery.

3) Evaluation of the three branch predictors
In this section we evaluate the prediction accuracy of the exit predictors, combined with the block target address predictor presented above, on 8 benchmarks from SPEC2000.

Figure 3. Prediction accuracy of the three branch predictors
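To make the two-level organization of section 2) concrete, the following sketch keeps a global exit-history register, hashes it with the block address into a PHT that stores exit IDs instead of taken/not-taken bits, and looks the predicted target up in a per-exit BTB. The table sizes, the hash and the history folding are illustrative assumptions, not the configuration evaluated in the paper.

# Sketch of a 2-level hyperblock exit predictor (global variant).
PHT_SIZE, HIST_BITS = 16384, 8

class ExitPredictor:
    def __init__(self):
        self.pht = [0] * PHT_SIZE          # stores exit IDs, not T/NT bits
        self.history = 0                   # recent exit history of blocks
        self.btb = {}                      # (block_addr, exit_id) -> target

    def predict(self, block_addr):
        idx = (block_addr ^ self.history) % PHT_SIZE
        exit_id = self.pht[idx]            # level 1: predict the exit point
        target = self.btb.get((block_addr, exit_id))  # level 2: target address
        return exit_id, target

    def update(self, block_addr, exit_id, target):
        idx = (block_addr ^ self.history) % PHT_SIZE
        self.pht[idx] = exit_id
        self.btb[(block_addr, exit_id)] = target
        mask = (1 << HIST_BITS) - 1        # fold the taken exit into the history
        self.history = ((self.history << 2) | exit_id) & mask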

B. Speculative execution on the data flow
Data dependence is an important factor that influences the effectiveness of multi-level speculative execution. We would not achieve the anticipated speedup if we only increased the speculation depth without considering the impact of data dependence, and to obtain benefits from data speculation we must execute instruction streams correctly. In this section, we conduct a quantitative analysis of data dependence under the hyperblock-based execution model.

The dependent relation between instructions in traditional processors can be described by the dependence distance (the number of instructions between data producers and consumers), but this measure is no longer applicable to the hyperblock-based model, because instructions have different impacts on separating producer and consumer, and the results may be unequal when a block has data dependences with blocks located in different places. We therefore define the term dependence depth to measure the degree of dependence between blocks:

Dependence depth = sum for i = 1..infinity of (P_i x i)    (1)

where i denotes the distance from the current block, and P_i denotes the proportion of the instructions in the current block which depend on instructions in the previous i-th block, relative to the total number of instructions within the current block. A larger dependence depth means a weaker dependence strength, so programs could benefit from speculating more blocks; the dependence depth is proportional to the potential depth of speculative execution.

III. EXPERIMENTAL EVALUATION
A. Methodology
Our experiments are performed on the TRIPS toolchain, which supports hyperblock-based multi-level speculation. It contains a compiler (Scale [3]), a functional simulator (tsim_arch) and a cycle-accurate simulator (tsim_proc) [6]; tsim_proc can generate trace files containing all the events of the simulated process. TRIPS is a block-atomic execution model: instructions in the same block can execute in parallel, and the size of the instruction window can reach up to 1024 by speculation. Although our experiments are based on TRIPS, our research is not limited to this architecture but aims at all current hyperblock-based execution models. We use 8 whole benchmarks written in C from the SPEC2000 benchmark suite, including 3 floating-point benchmarks (art, ammp, equake) and 5 integer benchmarks (gzip, mcf, vortex, parser, bzip2). We make two assumptions before the analysis: (1) a perfect branch predictor is assumed; (2) the complexity of the hardware implementation is not taken into account. These assumptions do not affect the analysis of the natural characteristics of the programs.

B. Speculative execution on control dependence between hyperblocks
A large instruction window is built to issue more independent instructions per cycle, so that more parallelism can be exploited across blocks; but the effective size of the instruction window is limited by the depth of control-flow speculation, so we evaluate the feasibility of predicting more aggressively and analyze the appropriate depth of prediction.

1) Evaluating the predictors at a fixed prediction depth. Both the global predictor and the local predictor are configured with 16384 entries, the BTB has 1024 entries, and the tournament predictor contains 8192 entries for each of its global and local components. From Figure 3, it is clear that the global predictor performs better than the local predictor on art, bzip, gzip and mcf, in which global histories are predominant, while equake and vortex, on which local histories have more impact, perform better with the local predictor. The tournament predictor performs best, as it can adapt to the patterns of the applications using both of the previous predictors; it takes full advantage of global and local histories and achieves better performance than both of them.

We then evaluate the prediction accuracy of the global and local predictors at fixed prediction depths (ranging from 1 to 7), as shown in Figure 4. For each predictor, the prediction accuracy decreases as the prediction depth increases, but the gradient slows down at deeper depths, and most applications still keep a high prediction accuracy even when the prediction depth is up to 7. This demonstrates that deeper prediction is feasible for most applications. The global and local predictors each have strengths on different types of applications at small depths, but the local predictor performs better as the depth increases, which indicates that it is more powerful for deeper speculative execution.

Figure 4. Prediction accuracy at prediction depths 1-7 with (a) the global predictor, (b) the local predictor and (c) the tournament predictor

2) Distribution of prediction depth over blocks. We introduce a quantitative method for intuitively denoting the potential of deep prediction for distinct applications. We predict each block with unbounded depth until the prediction cannot continue, so that each application is divided into several block sequences with various block counts; from a statistical analysis of the different prediction depths among the various sequences, we can estimate the potential of deep prediction for each application. Figure 5 illustrates the proportion of blocks predicted at different prediction depths (0~1, 2~5, 6~10, 10~infinity) on the tournament predictor, where depth 0 indicates blocks that cannot be predicted. Vortex, art and ammp have high proportions at the deeper prediction depths, which shows that these applications accommodate well to such a prediction model; bzip and gzip are just the opposite: the former group can be predicted aggressively, while the latter are less predictable. These differences are mostly attributed to the natural characteristics of the applications.

Figure 5. Proportion of blocks vs. prediction depth

3) Expected prediction depth. The prediction depth distribution cannot reflect the actual quality of the prediction depth for an application, so we further introduce the expected prediction depth: the mean value of the prediction depth distribution, which reflects the magnitude of prediction and even the prediction accuracy. Figure 6 shows the expected prediction depth evaluated with the sizes of the local and global PHTs configured to 16384; the type of predictor has a big impact on the expected prediction depth. Figure 7 describes the expected prediction depth with a tournament predictor configured with 16x the size of the old PHT; the configuration of the predictor has less impact on the expected prediction depth. In most applications, low prediction accuracy is not caused by conflicts in the PHT but comes from the characteristics of the application. Although these values are closely related to the compiler, we can see that the values of ammp, art, mcf and vortex exceed 10, while the others are around 5. Consequently, we should consider further improvements to the predictors in order to better adapt to the patterns of applications, and we analyze the expected prediction depth of different applications based on the instruction distribution, which can help us adopt an appropriate prediction depth while studying the aggressive execution model.

Figure 6. Expected prediction depth with differently configured predictors
Figure 7. Expected prediction depth with the old and new PHT configurations

C. Speculative execution on data dependence
1) Numbers of dependent instructions in applications. Data dependence between hyperblocks arises from the load or register-read instructions (the dependent instructions). Figure 8 presents statistics of the numbers of overall instructions, load instructions and register-read instructions within hyperblocks. The number of dependent instructions in a hyperblock is considerable, from approximately 18% in gzip to 52% in ammp.

Figure 8. Instruction distribution in blocks (normal, load and register-read instructions)

2) Distribution of data dependences. The statistics of instruction numbers above only reflect how ubiquitous data dependence is under the block-level execution model; they cannot completely substitute for the data dependence behaviors, so we further analyze the distribution of data dependence (Figure 9). Most of the data dependence of a hyperblock comes from its previous adjacent blocks: our experiments show that 20% or even 40% (parser) of the data dependence is located within two adjacent blocks, and an average of 40% or even 60% (parser) is located within six contiguous blocks.
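Formula (1) can be computed directly from a block trace. In the sketch below, each instruction is encoded as the set of value names it reads plus the one it writes, and each cross-block read contributes its distance i with weight 1/n, where n is the block's instruction count; the trace encoding and the choice of the nearest producer are assumptions made for illustration.

# Sketch: per-application dependence depth, formula (1).
def dependence_depth(blocks):
    producer = {}          # value name -> index of the block that produced it
    total = 0.0
    for b, insts in enumerate(blocks):
        n = len(insts)
        for reads, write in insts:
            dists = [b - producer[v] for v in reads
                     if v in producer and producer[v] < b]
            if dists:
                total += min(dists) / n   # this instruction adds depth i, weight 1/n
            producer[write] = b
    return total / len(blocks)

blocks = [[({"x"}, "a"), (set(), "x")],
          [({"x"}, "y")],                # depends on block 0, distance 1
          [({"a"}, "z")]]                # depends on block 0, distance 2
print(dependence_depth(blocks))          # (0 + 1/1 + 2/1) / 3 = 1.0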

Figure 9. Distribution of data dependence in applications
This unbalanced distribution of data dependences is not what we would hope for: because of the serious dependence between adjacent blocks, we cannot make the subsequent blocks execute even by increasing the speculation depth.

3) Dependence depth and expected prediction depth. Figure 10 shows the dependence depth of the applications, and Figure 11 shows the expected speculation depth under the constraint of a given data dependence proportion. As we observe, the applications of SPECFP have a better speculation depth, up to 64 blocks with 50% data dependence, whereas the applications of SPECINT have low speculation depths of less than 10 (4~8 on average). In equake, for example, the expected speculation depth is only up to 5 with less than 47.5% data dependence, because the data dependence within 6 blocks is 48.696%, which is bigger than 47.5% (Fig. 9). We would not achieve the anticipated speedup if we only increased the speculation depth without considering the impact of data dependence.

Figure 10. Data dependence depth in applications
Figure 11. Expected speculation depth vs. data dependence proportion

IV. RELATED WORK
Speculative execution on hyperblocks mainly focuses on two directions. One is resolving the control flow between hyperblocks, using multiple branch predictors and region predictors. Yeh et al. and Seznec et al. studied multibranch predictors that typically predict 2 or 3 targets at a time; Wallace et al. and Conte et al. then used saturating counters to predict multiple branches. The exit predictor is a region predictor first proposed by Pnevmatikatos et al. [8] in the context of the Multiscalar processor and subsequently refined by Jacobson et al. [5]; in contrast to multibranch predictors, exit predictors predict only the first branch that will leave a code region (such as a Multiscalar task). The local, global and path-based exit predictors, the folding of exit histories and the hysteresis bits in the PHT that we use in this paper were proposed by Jacobson et al. [7]. The other direction is predication, which converts control dependence to data dependence by merging multiple control flows into a single control flow. Predication of individual instructions was first proposed by Allen et al. in 1983 and implemented in the wide-issue Cydra-5. Mahlke et al. first developed the modern notion of the hyperblock, extended by August et al., who proposed a framework to balance control speculation and predication through smart basic block selection at hyperblock formation. Dynamic predication predicates at run time and allows predication without predicated ISA support.

V. CONCLUSIONS
In this paper, we introduce a preliminary evaluation of the aggressive execution model. The hyperblock-based execution model can tremendously improve the size of the instruction window for high ILP and good performance, but whether applications are suitable for aggressive execution depends on the data dependences between adjacent hyperblocks. Our experiments concentrate on the characteristics of the dependence distribution and the prediction depth, in view of speculative execution both on control flow and on data flow. Under this model, most applications have good predictability and high prediction accuracy with control-flow prediction mechanisms; although the prediction accuracy on control flow decreases as the prediction depth increases, it remains high within acceptable bounds, and the expected prediction depth, which reflects the degree of predictability, depends mainly on the applications themselves rather than on the predictors. Since enlarging a hyperblock becomes more difficult under the constraints of compiler technology and the inherent characteristics of applications, we introduce a quantitative method for intuitively denoting the speculative execution potential of distinct applications, and analyze the distributions of data dependences between hyperblocks and their impacts on the depth of prediction.

ACKNOWLEDGEMENT
This research was supported financially by the National Basic Research Program of China under contract 2005CB321601, the National Hi-tech Research and Development Program of China under contract 2006AA01A102-5-2, the Natural Science Foundation of China grant 60633040, and the China Ministry of Education & Intel Special Research Foundation for Information Technology under contract MOE-INTEL-08-07.

REFERENCES
[1] A. Uht, V. Sindagi, and K. Hall. "Disjoint eager execution: An optimal form of speculative execution." In Proceedings of the 28th International Symposium on Microarchitecture (Micro-28), pp. 313-325, Dec. 1995.
[2] S. Mahlke, D. Lin, W. Chen, R. Hank, and R. Bringmann. "Effective Compiler Support for Predicated Execution Using the Hyperblock." In Proceedings of the 25th International Symposium on Microarchitecture, pp. 45-54, Dec. 1992.
[3] Aaron Smith, Jon Gibson, Bertrand A. Maher, Nicholas Nethercote, Bill Yoder, Doug Burger, and Kathryn S. McKinley. "Compiling for EDGE Architectures." In Proceedings of the International Symposium on Code Generation and Optimization, 2006.
[4] T. N. Vijaykumar. "Compiling for the Multiscalar Architecture." Ph.D. thesis, University of Wisconsin, 1998.
[5] Q. Jacobson, S. Bennett, N. Sharma, and J. E. Smith. "Control flow speculation in multiscalar processors." In Proceedings of the 3rd International Symposium on High Performance Computer Architecture, Feb. 1997.
[6] TRIPS toolset, available from http://www.cs.utexas.edu/~trips/dist/
[7] T.-Y. Yeh and Y. N. Patt. "Two-level adaptive branch prediction." In Proceedings of the 24th International Symposium on Microarchitecture, pp. 51-61, 1991.
[8] D. Pnevmatikatos, M. Franklin, and G. Sohi. "Control flow prediction for dynamic ILP processors." In Proceedings of the 26th Annual International Symposium on Microarchitecture, Dec. 1993.

Parallel Method for Discovering Frequent Itemsets Using Weighted Tree Approach

Preetham Kumar
Department of Information and Communication Technology, Manipal Institute of Technology, Manipal, India
prethk@yahoo.com

Ananthanarayana V S
Department of Information Technology, National Institute of Technology Karnataka, Surathkal, India
anvs1967@gmail.com

Abstract: Every element of a transaction in a transaction database may contain components such as the item number, quantity, cost and other relevant information about the customer which lead to profit. However, the existing algorithms discover all frequent itemsets based on a user-defined minimum support, without considering components of the transaction such as weight or quantity. In a large database it is possible that, even if an itemset appears in very few transactions, it may be purchased in a large quantity, which may lead to very high profit; these components therefore carry the most important information, and ignoring them may cause a loss of information. This motivated us to propose a parallel algorithm that discovers all frequent itemsets based on the quantity of the items bought, in a single scan of the database. The algorithm does not depend heavily on repeated I/O scans, is less reliant on the memory size, and does not require large communication costs between nodes.

Keywords: attribute; component; parallel; weight

I. INTRODUCTION
The goal of knowledge discovery is to utilize existing data to find new facts and to uncover new relationships that were previously unknown, in an efficient manner with minimum utilization of space and time. Mining association rules is an important branch of data mining, which describes potential relations among data items (attributes, variants) in databases. The well-known Apriori algorithm [3] was proposed by R. Agrawal et al. in 1993. Mining of association rules is to find all association rules that have support and confidence greater than or equal to the user-specified minimum support and minimum confidence respectively [2].

Mining association rules can be stated as follows. Let I = {i1, i2, …, im} be a set of items, and let D be a set of transactions, where each transaction T is a set of items such that T is a subset of I. Each transaction is assigned an identifier, called its TID. Let A be a set of items; a transaction T is said to contain A if and only if A is a subset of T. An association rule is an implication of the form A => B, where A and B are subsets of I and A and B are disjoint. The rule A => B holds in the transaction set D with support s, where s is the percentage of transactions in D that contain A union B (i.e., both A and B); this is taken to be the probability P(A union B), so Support(A => B) = P(A union B) = s. The rule A => B has confidence c in the transaction set D if c is the percentage of transactions in D containing A that also contain B; this is taken to be the conditional probability P(B|A), so Confidence(A => B) = P(B|A) = Support(A => B) / Support(A) = c.

This problem can be decomposed into the following sub-problems. (1) All itemsets that have support above the user-specified minimum support are discovered; these itemsets are called the frequent itemsets. (2) For each frequent itemset, all the rules that have the user-defined minimum confidence are obtained. The second sub-problem, discovering the rules for all given frequent itemsets and their supports, is relatively straightforward, as described in [1]. Discovering frequent itemsets, considered one of the most important tasks, has been the focus of many studies in the last few years, and new solutions still have to be found. The quantities of the items bought are usually not considered. Moreover, the existing algorithms depend heavily on massive computation that can cause a high dependency on the memory size, or on repeated I/O scans of the datasets, and the published parallel implementations of association rule mining inherited most of these problems, in addition to the costly communication most of them need; the parallel association rule mining algorithms currently proposed in the literature are not sufficient for large datasets. This motivated us to design a new parallel algorithm based on the concept of a Weighted Tree.
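The two measures translate directly into code; a minimal sketch over a toy transaction list follows (the data and itemsets are illustrative, with itemsets held as Python frozensets).

# Sketch: support and confidence of a rule A => B over transactions D.
def support(D, itemset):
    return sum(1 for t in D if itemset <= t) / len(D)

def confidence(D, A, B):
    return support(D, A | B) / support(D, A)

D = [frozenset(t) for t in (["A", "C"], ["A", "C", "D"], ["A", "B"], ["C", "D"])]
A, B = frozenset("A"), frozenset("C")
print(support(D, A | B), confidence(D, A, B))  # 0.5 and 0.666...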

II. PROPOSED METHOD
This algorithm is divided into two phases, and achieves its efficiency by applying two new ideas. Firstly, the transaction database is converted, in only one scan, into an abstraction called the Weighted Tree: a special disk-based data structure that prevents multiple scans of the database during the mining phase. This structure is replicated among the parallel nodes. Secondly, for each frequent item assigned to a parallel node, an item tree is constructed, and frequent itemsets are mined from this tree based on the weighted minimum support.

A. Structure of the Weighted Tree
The idea of this approach is to associate each item with all the transactions in which it occurs. Consider the sample database given in Table 1, in which every element of each transaction represents either the quantity or the cost of the respective attribute (item).

Table 1. Sample database

The Weighted Tree has two different kinds of nodes; a branch is shown in Figure 1. The first type of node, labeled with an attribute, contains the attribute name and two pointers: one pointing to the nodes containing transaction ids and weights, and a child pointer pointing to the next attribute. This node represents the head of its branch. The second type of node has two parts: the first, labeled TID, represents a transaction number, and the second, labeled weight, indicates the quantity purchased in that transaction (or its cost, or another component). This node has one pointer, pointing to the next occurrence of this particular attribute. Figure 2 represents the Weighted Tree corresponding to the sample database given in Table 1.

Figure 1. Branch of a Weighted Tree
Figure 2. Weighted Tree for Table 1

The weighted minimum support (w_min_sup) is the minimum weight an itemset has to satisfy to become frequent; if an itemset satisfies the user-defined weighted support, we say it is a weighted frequent itemset. It may happen that an itemset appears in very few transactions but in a large quantity or at a cost which leads to profit; such an itemset will not qualify as frequent based on the user-defined minimum support, which results in a loss of information. In the sample database given in Table 1, if the user-defined minimum support is 2 transactions, then item D is not frequent and will not appear in the set of frequent itemsets, even though it is bought in a large quantity and leads to more profit than other frequent items.

The parallel algorithm involves the following steps:
A. Construction of the Weighted Tree.
B. Removal of infrequent attributes of the Weighted Tree.
C. Arranging the attribute lists of the Weighted Tree in increasing order of their weights.
D. Parallel mining of frequent itemsets at different nodes for the assigned items, based on weight.

A. Construction of the Weighted Tree
Input: the database D
Output: the Weighted Tree
for each attribute weight w in a transaction t in D do
begin
  create a node labeled with weight w and add this node to the respective attribute node
end
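In software, the structure of Figure 1 can be approximated by keeping one branch per attribute, each holding a list of (TID, weight) nodes; the sketch below builds it in the single database scan the algorithm describes. The dictionary representation is an assumption standing in for the pointer-based nodes, and the toy database is chosen only so that its branch totals match the weights quoted later in the text (A: 19, C: 12, D: 10, B: 2); it is not the paper's Table 1.

# Sketch: single-scan construction of a Weighted Tree.
# weighted_tree[item] is that attribute's branch: a list of (TID, weight).
def build_weighted_tree(database):
    tree = {}
    for tid, transaction in enumerate(database, start=1):
        for item, weight in transaction.items():
            tree.setdefault(item, []).append((tid, weight))
    return tree

# Each transaction records the quantity (or cost) per item bought.
db = [{"A": 7, "C": 5}, {"A": 12, "C": 7, "D": 10}, {"B": 2}]
tree = build_weighted_tree(db)
print(tree["A"])   # [(1, 7), (2, 12)]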

B. Removal of infrequent attributes of the Weighted Tree
Input: w_min_sup (the weighted minimum support)
Output: the reduced Weighted Tree
for each attribute in the Weighted Tree do
begin
  if sum(weights of all its nodes) < w_min_sup then remove that branch from the tree
end

For example, in Figure 2 attribute D has weight 10, C has 12 and A has 19. If we consider a count-based min_sup = 3, only the attributes A and C are frequent in the database, and B and D are found to be infrequent; if instead we consider w_min_sup = 10, the attributes A, C and D will all be frequent. The reduced Weighted Tree of Table 1 is shown in Figure 3.

Figure 3. Reduced Weighted Tree of Table 1

C. Arranging the attribute lists in increasing order of weight
The attribute lists are sorted in increasing order of their weights, as shown in Figure 4; the resulting tree is called the ordered Weighted Tree.

Figure 4. Ordered Weighted Tree

D. Parallel mining of frequent itemsets at different nodes for assigned items based on weight
The ordered Weighted Tree is replicated among all parallel nodes; this replication is executed from a designated master node. By doing so, a full copy of the transaction database D containing only frequent items is available to each processor, so that all globally frequent patterns can be generated with minimum communication cost at the parallel-node level. This approach relies on distributing the frequent items among the parallel nodes. After ordering the frequent items by their support, each processor successively receives one item, starting from the least frequent: if we have m processors and n trees need to be built, with m < n, then processor 1 builds the tree for the least frequent item, processor 2 builds the tree for the next least frequent item, and so on up to processor m; after that, processor 1 takes item m+1, and so on until all n items are distributed.

Each parallel node reads the sub-transactions for each of its frequent items directly from the ordered Weighted Tree, and builds an independent, relatively small tree for each frequent item in the transaction database, called an item tree. Each parallel node mines each of these item trees separately as soon as it is built, and the trees are discarded as soon as they are mined. Finally, all frequent patterns generated at each node are gathered into the master node to produce the full set of frequent patterns. Each node of an item tree consists of an item number, its count, and two pointers called child and sibling: the child pointer points to the following item, and the sibling pointer points to the sibling node.
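Steps B, C and D then prune, order and distribute this structure; a compact sketch continuing the dictionary representation above follows, with w_min_sup and the processor count m as parameters.

# Sketch: steps B-D on the dictionary-based Weighted Tree.
def reduce_and_order(tree, w_min_sup):
    # B: remove branches whose accumulated weight is below w_min_sup
    kept = {item: nodes for item, nodes in tree.items()
            if sum(w for _, w in nodes) >= w_min_sup}
    # C: attribute lists in increasing order of total weight
    return sorted(kept, key=lambda item: sum(w for _, w in kept[item]))

def distribute(ordered_items, m):
    # D: round-robin assignment, least frequent item first
    return {p: ordered_items[p::m] for p in range(m)}

order = reduce_and_order({"A": [(1, 7), (2, 12)], "C": [(1, 5), (2, 7)],
                          "D": [(2, 10)], "B": [(3, 2)]}, w_min_sup=10)
print(order, distribute(order, 2))   # ['D', 'C', 'A'] {0: ['D', 'A'], 1: ['C']}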

The tree for any frequent item f contains only nodes labeled with items that are at least as frequent as f and that share at least one transaction with f. Based on this definition, the higher the support of a frequent item, the smaller its tree is; in other words, if A has a weight greater than B, then the tree for B is larger than the tree for A.

As an example, suppose we mine the ordered Weighted Tree in Figure 4 on a 2-processor machine with a weighted minimum support of 10. The starting item is D, the least frequent item. Processor 1 finds all frequent patterns related to items D and A, and processor 2 generates all frequent patterns related to item C. First, an item tree is built for item D: the tree starts with a root node containing the item D, and all frequent items which are more frequent than D and share transactions with D participate in building the tree. For each sub-transaction containing the item D together with items that are more frequent than D, a branch is formed starting from the root node; if more than one frequent item shares the same prefix, the branches are merged into one and their count fields are incremented. In our example, the tree for D does not contain any node other than itself, so at node 1 the only frequent itemsets for D are one-itemsets; similarly, the tree corresponding to A contains only one node and is also ignored. At node 2, the item tree corresponding to C is built; its maximal path is {A, C}, and the weight of {A, C} = 18 is greater than the user-defined weighted minimum support, hence it is frequent, and all its subsets are frequent by the downward closure property. Hence Fw = {{A}, {C}, {D}, {A, C}}.

The mining algorithm at node i is as follows.
Input: the ordered Weighted Tree, w_min_sup
Output: the set of all frequent itemsets at node i
for every assigned item I do
begin
  construct the item tree for I containing the transactions associated with I at node i
  traverse the item tree for I
  for each maximal path from the root to a leaf in the tree for item I do
  begin
    T = {elements from I to the leaf, with their weights}
    if the sum of the weights of the elements of T >= w_min_sup then add T to MI
  end
end
Apply the downward closure property [1] to obtain all frequent itemsets from MI, which contains all the maximal frequent itemsets.

III. THEORETICAL ANALYSIS
A. Construction of the Weighted Tree
In a transaction database D with G transactions, all G transactions have to be read by the algorithm to construct the Weighted Tree, so the tree can be constructed in O(G) steps. If the average length of a transaction t is m weights, there will be Gm nodes in the tree; in the worst case, with M weights per transaction, there will be GM nodes in the Weighted Tree.

B. Reduction of the Weighted Tree
If there are m attributes, the reduction of the Weighted Tree is in O(m). If there are k infrequent items, the reduced Weighted Tree will contain (m-k) attribute lists with at most G(m-k) nodes.

C. Arranging the attribute lists in increasing order of weight
If there are n attributes in the reduced Weighted Tree, arranging them in increasing order of weight is in O(n^2).

D. Parallel mining of frequent itemsets
If there are n frequent items, there will be n item trees. Each processor constructs one tree at a time; with m processors and n trees, m < n, this step is in O(n/m).

Merits. In our research we have implemented this algorithm and found that it outperforms the Apriori and FP-tree algorithms. (i) This method is more space- and time-efficient than Apriori, since Apriori involves a candidate generation method and requires multiple scans over the database, whereas our method requires only one scan. (ii) The FP-tree algorithm requires 2 scans to discover all frequent itemsets, and mining patterns involves the construction of the conditional-pattern-based tree, which requires a header table; our method uses the item tree to mine maximal frequent itemsets and does not involve a header table.
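Mining one item tree can be approximated as follows: for an assigned item I, collect from each transaction containing I the items at least as frequent as I, accumulate their weights, and keep the combinations that reach w_min_sup, applying the downward closure step afterwards. The sketch below omits the merging of shared prefixes and is only a much-simplified software reading of the algorithm; on the toy database above it reproduces the {C}, {A, C} result quoted for node 2.

from itertools import combinations

# Sketch: mine weighted-frequent itemsets containing one assigned item I.
# Transactions are dicts of item -> weight; `order` lists items by
# increasing total weight (the ordered Weighted Tree).
def mine_item(db, order, I, w_min_sup):
    rank = {item: r for r, item in enumerate(order)}
    weight = {}
    for t in db:
        if I not in t:
            continue
        # I plus the items in t that are at least as frequent as I
        items = frozenset(i for i in t if rank.get(i, -1) >= rank[I])
        weight[items] = weight.get(items, 0) + sum(t[i] for i in items)
    maximal = [s for s, w in weight.items() if w >= w_min_sup]
    frequent = set()
    for s in maximal:                   # downward closure on the maximal sets
        for k in range(1, len(s) + 1):
            for sub in combinations(sorted(s), k):
                if I in sub:
                    frequent.add(frozenset(sub))
    return frequent

db = [{"A": 7, "C": 5}, {"A": 12, "C": 7, "D": 10}, {"B": 2}]
print(mine_item(db, ["D", "C", "A"], "C", w_min_sup=10))
# {frozenset({'C'}), frozenset({'A', 'C'})}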

IV. CONCLUSION
The parallel algorithm for discovering frequent itemsets based on weight is a new method, and it is found to be efficient when compared to Apriori and FP-tree. We are still working on it, with the aim of extending the application of this algorithm to various kinds of databases.

REFERENCES
[1] Arun K Pujari. Data Mining Techniques. Universities Press (India) Private Limited, Hyderabad, India, 2001.
[2] J. Han and M. Kamber. Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers, San Francisco, CA, 2000.
[3] Han J., Pei J., and Yin Y. Mining Frequent Patterns without Candidate Generation. In Proc. of the ACM-SIGMOD International Conference on Management of Data, Dallas, TX, USA, 2000, pp. 1-12.
[4] Han J., Pei Jian, and Runying Mao. Mining Frequent Patterns without Candidate Generation: A Frequent Pattern Tree Approach. Data Mining and Knowledge Discovery, Kluwer Academic Publishers, Netherlands, 2004, pp. 53-87.
[5] Agrawal Charu and Yu Philip. Mining Large Itemsets for Association Rules. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, vol. 21, no. 1, 1998.
[6] Ananthanarayana V. S., Subramanian D. K., and Narasimha Murthy M. Scalable, Distributed and Dynamic Mining of Association Rules. In Proceedings of HIPC'00, Springer-Verlag, Berlin/Heidelberg, pp. 559-566.
[7] O. R. Zaiane, M. El-Hajj, and P. Lu. Fast Parallel Association Rule Mining without Candidacy Generation. In Proc. of the IEEE 2001 International Conference on Data Mining, San Jose, CA, USA, December 2001.

Optimized Design and Implementation of Three-Phase PLL Based on FPGA

Yuan Huimei, Sun Hao, Song Yu
College of Information Engineering, Capital Normal University, Beijing, 100048, China
yuanhmxxxy@263.net, sh_1983@sina.com

Abstract: An optimized method to design and implement a digital three-phase phase-locked loop (PLL) based on FPGA is presented in this paper. At first, the components of this three-phase PLL, such as the phase discriminator, the loop filter and the voltage controlled oscillator (VCO), are designed in VHDL with a block-based design method. A new method which combines the CORDIC algorithm with a look-up table is put forward to generate the sine function, aiming to increase the computing speed and improve the accuracy of the results. In order to save the logic resources of the FPGA, an optimized method called chip-area sharing is adopted. At last, the design is implemented on Altera's FPGA chip EP1C12Q240C8. The results show that this improved scheme is able to increase the computing speed, reduce the usage of logic resources, track the variation of frequency and lock the base phase well.

1. Introduction
Flexible electric power transmission systems, which are widely used in electricity systems, require accurate and real-time voltage phase information; the frequency and phase of the voltage in the electric network are indispensable, so phase tracking is one of the most important components in the system [1]. An analog or digital phase-locked loop (PLL) is widely used in general instruments to achieve zero detection [2]. A three-phase PLL obtains better performance, because it suppresses harmonic waves well [3]. The discrete Fourier transform (DFT) is the most commonly used method for detecting frequency and phase; however, asynchronous sampling, in which the width of the sampling window is not equal to an integer multiple of the signal period, causes measurement error because of spectrum leakage [4]. Moreover, when interference such as asymmetry, harmonics or offset exists in the voltage signal, the methods mentioned above cannot cope; in particular, when a rectifier notch appears in the signal, they cannot do anything about it. [5] has analyzed the influence of such interference on the phase-detection error of a three-phase PLL; the error can be suppressed if the system can separate the positive- and negative-sequence voltages and feed back the positive-sequence voltage [6].

This algorithm is usually implemented in software using DSP technology because of its complexity, but that takes a lot of CPU time and the performance is limited. Implementing a three-phase PLL in hardware using a field programmable gate array (FPGA) is a new design scheme: it is a pure hardware approach that processes signals in parallel and is able to achieve high performance. The PLL fits in the electric power system as well as in other fields.

In this paper, the principle, the system components and the implementation algorithm based on FPGA are presented first. The modules are then designed in VHDL with a block-based design method. In order to save the logic resources of the FPGA, the design is improved by using a chip-area-sharing scheme. At last, the improved design is verified on Altera's FPGA Cyclone EP1C12Q240C8. It is verified that the design can largely save the logic resources of the FPGA, and it is also shown that the PLL can track the variation of frequency and lock the basic phase well.

2. Principle of the three-phase PLL
A PLL is composed of three main components: a phase discriminator (PD), a loop filter (LF) and a voltage controlled oscillator (VCO).

Figure 1. Block diagram of the three-phase PLL

The three-phase balanced voltage can be described as

u_abc = U_s [ sin(theta), sin(theta - 2*pi/3), sin(theta + 2*pi/3) ]^T    (1)

The voltage u_abc = [u_a u_b u_c]^T can be transformed into the two-phase voltage u_alphabeta = [u_alpha u_beta]^T in the alpha-beta-0 coordinate system through the transform matrix A, according to u_alphabeta0 = A * u_abc, where

A = (2/3) * [ 1     -1/2        -1/2
              0     sqrt(3)/2   -sqrt(3)/2 ]    (2)

In this way we obtain

u_alpha = U_s sin(omega0*t + theta1),  u_beta = U_s cos(omega0*t + theta1)    (3)

The PD's feedback inputs u_ualpha and u_ubeta are the cosine and sine of the PLL output phase theta, with gain K_L:

u_ualpha = K_L cos(theta),  u_ubeta = K_L sin(theta)    (4)

The output of the PD is

u_d = u_alpha * u_ualpha - u_beta * u_ubeta
    = U_s K_L sin(omega0*t + theta1 - omega0*t - theta2) = U_s K_L sin(theta1 - theta2)    (5)

Assuming the PLL starts to lock, the phase error (theta1 - theta2) is zero or infinitesimally small, so the sine in (5) is approximately equal to the phase error, and formula (5) can be rewritten as

u_d = U_s K_L (theta1 - theta2) = K_d (theta1 - theta2)    (6)

where K_d = U_s K_L.

To guarantee both the performance of the LF and the stability of the dynamic system [7], a proportional-integral (PI) filter is used in this paper, and the transfer function of the LF is

U_f(s) / U_d(s) = K_p + K_l / s    (7)

where K_p is the proportional gain and K_l is the integral gain.

From Figure 1 we can see that the input of the VCO, u_f(t), is the output of the loop filter, and the VCO module should generate the output phase signal theta2. The output phase of the PLL can be stated as

theta(t) = integral from -inf to t of [omega0 + delta_omega(lambda)] dlambda
         = omega0*t + integral from -inf to t of K_v u_f(lambda) dlambda    (8)

where omega0, which is assumed constant here, is the center frequency of the VCO. Substituting theta2(t) = theta(t) - omega0*t into formula (8), we obtain

theta2(t) = integral from -inf to t of K_v u_f(lambda) dlambda    (9)

The VCO can therefore be expressed simply as an integrator with a gain K_v, and the transfer function of this integrator is

theta2(s) / U_f(s) = K_v / s    (10)
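Formulas (1)-(10) can be checked with a short software model that runs one loop iteration per sample: Clarke-transform the three phase voltages, form the phase error of (5), filter it with the PI of (7), and integrate through the VCO of (10). The sketch below is a floating-point model for observing lock behaviour, not the fixed-point VHDL design; the gains, the 50 Hz grid and the sample rate are assumptions, and K_v is folded into the gains.

import math

# Floating-point model of one PLL iteration per sample.
Kp, Kl, T = 200.0, 2.0e4, 1.0 / 12800.0   # PI gains and sample period (assumed)
w0 = 2 * math.pi * 50.0                   # assumed 50 Hz center frequency
theta, integ = 0.0, 0.0

def pll_step(ua, ub, uc):
    global theta, integ
    ualpha = (2.0/3.0) * (ua - 0.5*ub - 0.5*uc)   # Clarke transform, formula (2)
    ubeta = (ub - uc) / math.sqrt(3.0)
    # phase detector, formula (5); sign chosen so ud ~ sin(theta1 - theta2)
    ud = ualpha * math.cos(theta) + ubeta * math.sin(theta)
    integ += Kl * ud * T                          # integral part of the PI (7)
    uf = Kp * ud + integ
    theta = (theta + (w0 + uf) * T) % (2*math.pi) # VCO integration, formula (10)
    return theta

# track a balanced 50 Hz input with a 0.3 rad phase offset for 0.2 s
for n in range(2560):
    x = w0 * n * T + 0.3
    pll_step(math.sin(x), math.sin(x - 2*math.pi/3), math.sin(x + 2*math.pi/3))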

After analyzing the design described above, we find that if we use the d,q coordinate transformation directly,

[u_d u_q]^T = T_dq · u_abc   (11)

where

T_dq = [ cos θ   cos(θ − 2π/3)   cos(θ + 2π/3)
         sin θ   sin(θ − 2π/3)   sin(θ + 2π/3) ]

we can obtain

u_d = u_a·cos θ + u_b·[−(1/2)cos θ + (√3/2)sin θ] + u_c·[−(1/2)cos θ − (√3/2)sin θ]   (12)

We only need to implement formula (12), and the program then only needs to do multiplication 5 times, addition 5 times and trigonometric calculation twice to complete the calculation. In this way, the workload and complexity of the computation are greatly reduced, and a lot of logic resource of the FPGA is saved at the same time. The design of this PLL system is shown in Fig.2.

Fig.2 Block diagram of FPGA design of three-phase PLL system

3.2. FPGA design of LF and VCO

The input phase θ_1 and the output phase θ_2 of the system in (6) implement a unit negative feedback of the system's output. The closed-loop transfer function of the system is

Φ(z) = K_d·K_v·K_p(z) / (1 + K_d·K_v·K_p(z)) = K_c · z(c − z)/(z² − a·z + b)   (13)

where K_c = K_d·K_v·K_p/(1 + K_d·K_v·K_p), a = (K_d·K_v·K_p·c + 2)/(K_d·K_v·K_p + 1), b = 1/(1 + K_d·K_v·K_p), c = 1 − (K_l/K_p)·T, and T is the sampling period. Analyzing formula (13) shows that when T > 2·K_p/K_l this PLL system is unstable, because the open-loop zero point lies outside the unit circle. K_p, K_l and T decide the dynamic performance of the system. At the end of the system, a sine and cosine function generator is designed in order to supply the feedback for the system, according to the phase information locked by the PLL.

4. Optimization of design

As we described above, to complete all of the computation would use nine multipliers, and every multiplication has a fixed slice in which to complete its computation. This takes up a lot of logic resource and is a waste of the FPGA. To resolve this problem, we note from the analysis of the design above that the process has its own time sequence, so we use time division multiplexing (TDM) technology to reuse one multiplier iteratively. In this way, the whole system only needs one hardware multiplier, and the logic resource of the FPGA is able to be saved greatly.

Another important part of this system is the sine and cosine function generator, which is usually implemented by the method of a look-up table. However, a look-up table takes up a lot of memory resource of the FPGA. One of the solutions is the CORDIC algorithm [8]. CORDIC works by rotating the coordinate system through constant angles until the angle reduces to zero; the angle offsets are selected such that the operations on X and Y are only shifts and adds. CORDIC would not take up any memory of the FPGA, but it also has a flaw: its accuracy is inherently limited. A new scheme which combines CORDIC and a look-up table is proposed in this paper in order to improve on CORDIC or a look-up table used individually. Because of the symmetrical characteristic of the sine and cosine functions, we only need to build up a table containing the information of a quarter of one period to minimize the table [9].
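The two ingredients of the combined generator can be sketched as follows. The paper does not spell out exactly how the two are fused, so this Python sketch (table size and iteration count are illustrative assumptions) only demonstrates the shift-and-add CORDIC rotation [8] and the quarter-wave table folding [9].

import math

ITER = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(ITER)]   # rotation angles atan(2^-i)
GAIN = 1.0
for i in range(ITER):
    GAIN *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))     # CORDIC gain compensation

def cordic_sin_cos(theta):
    # Rotate (GAIN, 0) by theta using only shifts and adds; valid for
    # |theta| <= ~1.74 rad, which quarter-wave range reduction guarantees.
    x, y, z = GAIN, 0.0, theta
    for i in range(ITER):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x                                        # (sin, cos)

# Quarter-wave look-up table: only 0..pi/2 is stored; the other three
# quadrants are recovered from the symmetry of the sine function [9].
QUARTER = [math.sin(2 * math.pi * k / 1024) for k in range(257)]

def sin_lut(idx):                                      # idx in [0, 1024)
    q, r = divmod(idx % 1024, 256)
    if q == 0: return QUARTER[r]
    if q == 1: return QUARTER[256 - r]
    if q == 2: return -QUARTER[r]
    return -QUARTER[256 - r]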

Fig.3 shows the design of the three-phase PLL after optimization.

Fig.3 Block diagram of FPGA design of three-phase PLL after optimization

5. Results of experiment

We use VHDL to implement this design and verify it on Altera's FPGA Cyclone EP1C12Q240C8. The basic modules of the PLL, including PD, LF and VCO, are designed in the VHDL language as IP cores. Here we set K_v = 128, K_p = 0.5 and the sampling frequency T = 12.8 kHz.

Table 1 lists the usage of logic resources on Altera's FPGA Cyclone EP1C12Q240C8 for the look-up table, CORDIC and combined methods. The results show that this new scheme is able to both increase calculating speed and guarantee the accuracy of the results, and it can also improve the accuracy of CORDIC.

Table 1. Usage of FPGA's logic resources
Method           LEs    Memory (ROM) (bit)   M4K
Look-up table    242    180224               4
CORDIC           1335   0                    0
Combined method  675    9216                 3

Table 2 lists the usage of resources on the EP1C12Q240C8 before and after optimization of this three-phase PLL. The results show that the utilization of resources on the FPGA is greatly reduced after optimization; the usage of logic resources in this system is greatly reduced by the improvement scheme mentioned above, and the processing speed also meets the requirement of the system.

Table 2. Usage of resources on EP1C12Q240C8 before and after optimization of three-phase PLL
Resource                      Available   Used    Utilization (%)
LEs (look-up table)           12060       2546    21
Memory/bit (look-up table)    239616      13824   6
LEs (CORDIC)                  12060       1210    10
Memory/bit (CORDIC)           239616      0       0
LEs (combined method)         12060       983     8
Memory/bit (combined method)  239616      9216    4

Fig.4 shows the result of the hardware simulation. In the picture, the numbers 1, 2, 3, ..., 9 mark the slices taken up by the calculation; u1, u2, u3 represent the three-phase voltages; clk is the system clock, here set to 25 kHz; and dataout is the output of this three-phase PLL. From Fig.4 we can see that, if the input has a steady phase, this three-phase PLL stably outputs the phase information by the third period of the fundamental wave.

Fig.4 Hardware simulation result of this system

6. Conclusion

This paper has proposed an optimized method to design and implement a digital three-phase PLL based on FPGA. A new method which combines the CORDIC algorithm with a look-up table algorithm has been put forward to generate the sine function, and an optimized method called chip area sharing is adopted in order to save the logic resources on the chip. The results show that this new scheme is able to both increase the calculating speed and guarantee the accuracy of the results. It is also proved that the PLL can track the variation of frequency and lock the basic phase well.

Acknowledgment

The authors would like to thank the Beijing Science & Technology Government for financially supporting this work through the project of Beijing Technology New Star (No. 2006B58).

References

[1] Sang-Joon Lee, Jun-Koo Kang, Seung-Ki Sul. A New Phase Detecting Method for Power Conversion Systems Considering Distorted Conditions in Power System. Industry Applications Conference, Thirty-Fourth IAS Annual Meeting, Conf. Record of the IEEE '99 [C], 1999: 2167-2172.
[2] G. C. Hsieh, J. C. Hung. Phase-Locked Loop Techniques: A Survey. IEEE Transactions on Industrial Electronics, 1996, 43(6): 609-615.
[3] IEEE Working Group Report. Synchronized Sampling and Phasor Measurements for Relaying and Control. IEEE Transactions on Power Delivery, 1994, 9(1): 442-452.
[4] Phadke A. G., Thorp J. S., Adamiak M. G. A New Measurement Technique for Tracking Voltage Phasors, Local System Frequency, and Rate of Change of Frequency. IEEE Transactions on Power Apparatus and Systems, 1983.
[5] S. K. Chung. A Phase Tracking System for Three Phase Utility Interface Inverters. IEEE Trans. on Power Electronics, 2000, 15(3): 431-438.
[6] S. J. Lee, J. K. Kang, S. K. Sul. A New Phase Detecting Method for Power Conversion Systems Considering Distorted Conditions in Power System. IEEE Industry Applications Conference Record, 1999.
[7] YANG Shi-Zhong, WEN Liang-Yu. Foundation of Phase Lock Technology. Beijing: People's Post Office Press, 1996.
[8] Uwe Meyer-Baese. Digital Signal Processing with Field Programmable Gate Arrays, Second Edition. Springer Press Ltd.
[9] WU Po, GUO Yu-Hua, LI Xiao-Chun. Improvement and Implementation of One Phase Power Phase-Locked Loop Based on FPGA. 2007, (2): 79-82.

Research on the Data Storage and Access Model in Distributed Environment (1)

Wuling Ren, College of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou, 310018, Zhejiang, China, rwl@zjgsu.edu.cn
Pan Zhou, College of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou, 310018, Zhejiang, China, zhoupan@pop.zjgsu.edu.cn

(1) Foundation item: Project supported by the National Science & Technology Pillar Program, China (No. 2006BAF01A22), and the Science & Technology Research Program of Zhejiang Province, China (No. 2006C11239).

Abstract: In order to improve data access efficiency and enhance security in IT systems, we study the architecture of the data center according to the characteristics of the distributed computing environment. In this paper, we discuss the data storage architecture of the data center system. The storage architecture is logically organized into four classes: active data storage, static data storage, backup data storage and data warehouse. Data access efficiency is increased because most applications concentrate on the active data storage or the static data storage, where the most recently and frequently used data are stored. A security access control model is presented based on the above study. Data security access is controlled through a 3-layered access control model based on the RBAC model [3][4]. The three layers include the operation layer, the data logic security shield layer and the application layer. The operation layer and application layer control the authorized access for system managers and application end users respectively, while the data logic security shield layer shields the differences among the database safety control mechanisms. Data is well protected by good security design and implementation: the model keeps the privacy and consistency of data in the system, prevents the invasion of illegal users, whether the request comes from inside or outside, and maintains integrity and uniformity in the procedures of data gathering, storage, transmission and processing in each subsystem of the distributed computing environment. The encrypted centralized authorization provides trustable login. The model has been well received after being applied at Zhejiang Fuchunjiang Limited Company.

1. Introduction

Information security is particularly important in a distributed computing system. The security system is the critical part of the data center in the distributed computing environment: it guarantees that data can be accessed by, and only by, authorized users. In the distributed data management environment, information is stored in a number of physically distributed storage nodes, so we must adopt effective methods to maintain the integrity and uniformity of access to these data across the various subsystems. Backing up and restoring quickly and reliably is also a mandatory requirement of a digital system. In this paper, a layered access control model with classified distributed storage in a distributed environment is presented [1][2].

2. The architecture of data storage in the distributed computing environment

The design of the data center of a distributed computing environment must consider the security of information. The data center collects the massive primary data and all kinds of regenerated, summarized information from each subsystem; therefore the design of the data center must realize security control over each kind of database information.

According to its characteristics, the information in the distributed computing environment may be roughly divided into four kinds [6]. The first kind is the foundation data, including system parameters, material information, product information, customer information, supplier information, staff codes, department codes in each subsystem and so on. This information is shared by the various subsystems and is used with high frequency; it is the main information in the distributed computing execution environment. The second kind is the model meta-data of each subsystem [2], including the models of the process control standards and their execution logs. This kind of information is so voluminous

that we must control their depositing cycle. It deposits the information which is used presently and in low frequency in the system. The former avoids the unintentional destruction of information (for 135 . The data central data storage system is composed by five parts. If the database management system has not provided this kind of tool. It means information transferred among the database or tables. system configure file. The fourth kind of information passes through the distributed computing environment processing. the primary data and so on. Another part is the static data storage. and the storage medium is the hard disk. The design of database security The data security including two aspects. other operators can only be able to visit the distributed computing environmental information through the application subsystem. one is the data protection (or we call anti-damage nature). the information of active data storage and static data storage is the protection object[7]. Figure 1.2 Information Access Control In the distributed computing environment system. The third kind of information is the process information. One of the tasks of distributed computing system is to process this information effectively. In the regular circumstances. The primary data gathers from each related subsystem. The latter refers to the authority of data. It can use the file copy tool to realize the procedure. That information is the basis of dynamic adjusting the system procedure. It can use the hard disk or the magnetic tape as the reserve medium to backup the information by the copy/restore tool which provided by the database management system or the operation system. It deposits information which id used in the lowest frequency in the system. These two parts constitute the active data storage. The last part is the backup data storage. 3. The information in the reserve data storage is their backup. the magnetic tape or CD-ROM. such as incautious deletion. hard disk expiration and so on). the computer hardware breakdown. We can use the database tool EXPORT/IMPORT which the database management system provides to realize. which we call the data warehouse. In figure 2. We must set up the backup system for the above data storage in order to prevent the system from accident collapses and lose of data. another aspect is the data security. and guarantee the security of information. 3. the complete safety control model is as figure 2 shows. It deposits the information which is used presently and in high frequency in the system. The active data storage is organized by the database form.1 Information protection In the distributed computing environment. modeling the executing parameters and the management control standard. parameter file refer to the operation system-level file which need security control and should be controlled by the application system. example: the operation mistakes. Except administrator. Another part is the historical data storage. we only record and retain the information which exceptionally occurs or not yet completely processing in the system. Data export/load:Information transfers between the static data storage and historical data storage. The method of data transfer includes various date storages as followings: Data transfer:Information transfers between the active data storage and historical data storage. then it need program to realize this function Data backup/recovery: (1) Information transfers between the active data storage and reserve data storage . as figure 1 shows. 
We can use programs to realize the procedure. One of the parts is the current data storage. and the medium is the hard disk. in order to prevent the system from accident collapses. such as some original documents processing in several years ago. such as the system foundation information. The architecture of data storage in distributed computing environment 3. namely who can use the information and how to use it. compiles processing the history data material. and guarantee the security of information. Thus the backup information is stored in the hard disk as files. They will dump the data to text file. (2) Information transfers between the static data storage and reserve data storage. File system safety and security control is to carry out the security control of system configure file. Static data storage and reserve data storage is organized by file form.

The data center management system provides the data center safety control layer. Security control is distributed in three layers, namely the management layer, the operation layer and the application layer. The management layer takes the role of privilege granting, which delegates privileges to other operators or groups; an operation on data center information authorized at this primary level has the same effect as the data center manager's own security control. The security control layer is responsible for the authority of the application system: it decides which functions the operator can use and which data the operator can access or modify, with the encryption method defined by the user. Application operations are granted by the security control layer. A triple is used to describe the three objects: (operator, operation, operated object). The security control layer determines its granting by matching the relevant triple in the triple database. The application system defines other 2-tuples and triples, such as (operator, operation, manager of operator), to extend flexibility, and an effective time can also be added to the triples to solve the temporary authorization problem, i.e., how long the privilege remains available. The main function of the data logic security shield layer is to shield the differences among the database safety control mechanisms, the file system safety control and the security mechanism, so as to provide a consistent supporting interface for the application system's security control mechanism.

Figure 2. The 3-layered access control model in the distributed computing environment

4. Summary

This paper discussed the architecture of the data centre in the distributed computing environment and presented a layered security access control model. In the classified distributed storage model, the data distributed in each physical storage node is organized into four components: active data storage, static data storage, backup data storage and data warehouse. The layered access control model is presented based on the RBAC model, with security control distributed in three layers. The system has been applied to Zhejiang Fuchunjiang Limited Company, where it was well evaluated.

References

[1] J. Kwon, C.-J. Moon. Visual modeling and formal specification of RBAC constraints using semantic web technology. Knowledge-Based Systems (2008). doi:10.1016/j.knosys.2008.02.007
[2] Bruce Lownsbery, Helen Newton, et al. The Key to Enduring Access: Multi-organizational Collaboration on the Development of Metadata for Use in Archiving Nuclear Weapons Data.
[3] David F. Ferraiolo, et al. Role-Based Access Control. Artech House, 2003.
[4] J. Hammer. Web Services and Enterprise Integration. EAI Journal.
[5] Soomi Yang. An Efficient Access Control Model for Highly Distributed Computing Environment. Distributed Computing, IWDC 2005. Springer Berlin/Heidelberg. doi:10.1007/11603771
[6] Yu Hwanjo, Yang Jiong. Classifying Large Data Sets Using SVMs with Hierarchical Clusters. SIGKDD'03, Washington, DC, USA, 2003.
[7] Shigeki Yamada, et al. Access Control for Security and Privacy in Ubiquitous Computing Environments. IEICE Transactions on Communications, 2005, E88-B(3): 846-856. doi:10.1093/ietcom/e88-b.3.846
[8] Ninghui Li, Mahesh V. Tripunitara. Security analysis in role-based access control. ACM Transactions on Information and System Security (TISSEC), v.9 n.4, Nov 2006.

An Effective Classification Model for Cancer Diagnosis Using Micro Array Gene Expression Data

R. Mallika, Department of Computer Science, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, India, mallikapanneer@hotmail.com
Dr. V. Saravanan, Department of Computer Applications, Karunya School of Science and Humanities, Karunya University, Coimbatore, India, tvsaran@hotmail.com

Abstract: Data mining algorithms are commonly used for cancer classification. This paper focuses on finding a small number of genes that can best predict the type of cancer. The paper uses a classical statistical technique for gene ranking and an SVM classifier for gene selection and classification. The SVM one-against-all and one-against-one methods were used with two different kernel functions, their performances were compared, and promising results were achieved. The methodology was applied on the two publicly available cancer datasets Lymphoma and Liver.

Keywords: Classification, SVM one-against-all, SVM one-against-one, Gene Selection, Gaussian, RBF

I. INTRODUCTION

Microarray technology has transformed modern biological research by permitting the simultaneous study of genes comprising a large part of the genome [1]. With the help of gene expression data obtained from microarray technology, heterogeneous cancers can be classified into appropriate subtypes; the challenge of effective cancer prediction lies in the high dimensionality of the data. Prediction models are widely used to classify cancer cells in the human body, and supervised machine learning can be used for cancer prediction. The input to such models is a set of objects (i.e., the training data), the classes which these objects belong to (i.e., the dependent variables), and a set of variables describing different characteristics of the objects (i.e., the independent variables). Once such a predictive model is built, it can be used to predict the class of objects for which class information is not known: from the samples taken from several groups of individuals with known classes, the group to which a new individual belongs is determined accurately. The key advantage of supervised learning methods over unsupervised methods like clustering is that, by having explicit knowledge of the classes the different objects belong to, these algorithms can perform an effective feature selection that leads to better prediction accuracy.

In response to the rapid development of DNA microarray technology, classification methods and gene selection techniques have been developed for the better use of classification algorithms on microarray gene expression data [2][3]. In microarray classification analyses, gene selection is one of the critical aspects [4][5][6][7]: selecting a compact subset of discriminative genes from thousands of genes is a critical step for accurate classification. Efficient gene selection can drastically ease the computational burden of the subsequent classification task, and can yield a much smaller and more compact gene set without loss of classification accuracy [8][9]. The main objective of gene selection is to search for the genes which keep the maximum amount of information about the class and minimize the classification error [10]. Selection of important genes using statistical techniques has been carried out in various papers, using for example the Fisher criterion, signal-to-noise ratio, the traditional t-test, the chi-squared test, Euclidean distance [13] and the Mann-Whitney rank sum statistic [12]; some of the classification algorithms used have been SVMs, genetic algorithms (GA) [16], naive Bayes (NB) [15] and k-nn [14]. Lipo Wang et al. [11] in 2007 proposed an algorithm for finding a minimum number of genes, down to 3 genes, with the best classification accuracy, using CSVM and FNN. In 2003, Tibshirani [17] successfully classified the lymphoma data set with only 48 genes by using a statistical method called nearest shrunken centroids, and used 43 genes for the SRBCT data. Tzu-Tsung Wong [27] in 2008 proposed a two-stage classification method for classifying the causality of a disease: a gene selection mechanism with individual or subset gene ranking as the first stage, and a classification tool with or without dimensionality reduction as the second stage. The effectiveness of a classifier is commonly evaluated with a scheme which uses a part of the dataset as a training set and then uses the trained classifier to predict the samples in the rest of the dataset. This paper proposes an efficient methodology using a statistical model for individual gene ranking and data mining models for finding a minimum number of genes rather than thousands of genes, and gives an effective methodology for classifying a multi-class problem.

II. METHODOLOGY

A. Gene Ranking: ANOVA p-values

ANOVA is a technique frequently used in the analysis of microarray data, e.g., to assess the significance of treatment effects and to select interesting genes based on p-values. The approach chosen in this paper is the one-way ANOVA, which compares two or more groups (classes) for each gene and returns a single p-value that is significant if one or more groups are different from the others. The ANOVA test is known to be robust; it assumes that all sample populations are normally distributed with equal variance and that all observations are mutually independent. The probability of the F-value arising from two identical distributions gives a measure of the significance of the between-sample variation as compared to the within-sample variation. Small p-values indicate a low probability of the between-sample variation being due to sampling of the within-sample distribution, so the most significantly varying genes have the smallest p-values, and small p-values indicate interesting genes. Of all the information presented in the ANOVA table, if the p-value for the F-ratio is less than the critical value (α), the effect is said to be significant. In this paper the α value is set at 0.05: any p-value less than this results in significant effects, while any value greater than this results in non-significant effects. A very small p-value indicates that the differences between the column means are highly significant. The paper uses the p-values to rank the important genes by their small values, and the sorted genes are used for further processing.

B. Gene Selection: SVM one-against-all and one-against-one

SVMs are among the most modern methods applied to classify gene expression data, and are powerful tools widely used for this purpose [21][22]. An SVM works by separating the space into two regions by a straight line, or by a hyperplane in higher dimensions, and is able to find the optimal hyperplane that minimizes the boundaries between patterns [19]. SVMs were formulated for binary classification (2 classes) and cannot naturally extend to more than two classes; how to effectively extend SVMs for multi-class classification is still an ongoing research issue [23]. To extend SVMs to multi-class classification, the SVM one-against-all (SVM-OAA) and SVM one-against-one (SVM-OAO) designs were introduced. This paper uses these varieties of SVM with the heavy-tailed RBF and Gaussian kernel functions on two databases, for lymphoma cancer and liver cancer, and compares their performances with respect to training time and accuracy.

The SVM-OAA approach constructs n binary SVM classifiers, with the ith class separated from all the other classes; each binary SVM classifier creates a decision boundary which separates the group it represents from the remaining groups. For training data t = (X_1, Y_1), (X_2, Y_2), ..., (X_t, Y_t), where X_n ∈ R^n, n = 1...t, and Y_n ∈ {1, ..., K} are the class labels corresponding to X_n, the ith SVM solves

min over w^i, b^i, ξ^i of  (1/2)(w^i)^T w^i + C Σ_{j=1..t} ξ_j^i
subject to  (w^i)^T φ(x_j) + b^i ≥ 1 − ξ_j^i,  if y_j = i,
            (w^i)^T φ(x_j) + b^i ≤ −1 + ξ_j^i,  if y_j ≠ i,
            ξ_j^i ≥ 0,  j = 1...t   (1)

where each data point x_j is mapped to the feature space by the function φ, and C is the penalty parameter.

The radial basis function (RBF) is the most popular choice of kernel function used in support vector machines; an RBF kernel is able to give the same decision as an RBF network. It can be represented by

K(X_i, X_j) ≡ exp(−γ |X_i − X_j|²)   (2)

where X_i is a support vector of the ith class, X_j is the support vector for the new higher dimensional space, and γ is the tuning parameter.

The SVM-OAO method was first used by [28]. It constructs n(n−1)/2 classifiers, each of which is trained on the data of 2 classes. The classification problem for the training data of the ith and jth classes is

min over w^ij, b^ij, ξ^ij of  (1/2)(w^ij)^T w^ij + C Σ_t ξ_t^ij
subject to  (w^ij)^T φ(x_t) + b^ij ≥ 1 − ξ_t^ij,  if y_t = i,
            (w^ij)^T φ(x_t) + b^ij ≤ −1 + ξ_t^ij,  if y_t = j,
            ξ_t^ij ≥ 0   (3)
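Assuming a SciPy/scikit-learn environment (an illustrative choice, not the paper's stated implementation), the ranking of section II-A and the two multi-class decompositions of equations (1) and (3) with the RBF kernel of equation (2) can be sketched as follows; C and gamma are illustrative values.

import numpy as np
from scipy.stats import f_oneway
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier

def rank_genes_anova(X, y, alpha=0.05):
    """One-way ANOVA p-value per gene (column of X); the genes with the
    smallest p-values vary most significantly across the classes in y."""
    classes = np.unique(y)
    pvals = np.array([f_oneway(*(X[y == c, g] for c in classes)).pvalue
                      for g in range(X.shape[1])])
    order = np.argsort(pvals)                 # ascending p-value
    return order, pvals[order] < alpha        # ranked genes, significance mask

# One-against-all (eq. 1) and one-against-one (eq. 3) multi-class SVMs,
# both built on binary SVMs with the RBF kernel of eq. (2).
def svm_oaa(C=1.0, gamma=0.5):
    return OneVsRestClassifier(SVC(kernel="rbf", C=C, gamma=gamma))

def svm_oao(C=1.0, gamma=0.5):
    return OneVsOneClassifier(SVC(kernel="rbf", C=C, gamma=gamma))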
III. RESULTS AND DISCUSSION

This section reports the experimental results on all the datasets for the SVM varieties one-against-all and one-against-one with the different kernel functions RBF and Gaussian. The proposed methodology was applied to the publicly available cancer databases for liver and lymphoma cancer.

A. Lymphoma dataset

The methodology was applied to the lymphoma dataset with 4026 genes and 62 samples in 3 classes, namely DLBCL, FL and CLL, the subtypes of lymphoma cancer. The dataset contains a few missing values; the k-nearest neighbour algorithm, as used by [18], with k = 3 was used to fill in the missing data. Half of the samples were picked randomly for training, and all the samples were used for testing. For the training dataset with 4026 x 31 dimensions, the ANOVA p-value was calculated for each gene and the top ranked genes were selected. It should be noted that all the genes after ranking were given numbers in ascending order. All possible combinations of the top n genes were generated; for the top n genes, the possible gene pairs number n(n+1)/2. Using these combinations, the classifiers were trained with SVM one-against-all and one-against-one. The performance of the classifiers was validated using the cross-validation (CV) technique with 5 folds: the samples in the training dataset were randomly divided into 5 equal parts, with 4 parts used for training and the other for testing; classification was performed for 5 runs, and each time the classifier was trained a different test set was used, so that over the 5 runs all the samples were used as the test set. The average 5-fold accuracy and the average training error rate of each run were calculated. The method then took the gene subsets that achieved 100% CV accuracy on the training samples, retrained the classifier, and used it to predict the samples in the testing dataset.
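The exhaustive gene-pair search with 5-fold cross-validation described above can be sketched as follows; this again assumes scikit-learn, and the kernel parameters are illustrative.

from itertools import combinations
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def gene_pairs_with_perfect_cv(X_train, y_train, ranked_genes, top_n=20):
    """Evaluate every gene pair drawn from the top_n ranked genes with
    5-fold CV; return the pairs reaching 100% CV accuracy, which are then
    retrained and applied to the held-out test samples."""
    winners = []
    for g1, g2 in combinations(ranked_genes[:top_n], 2):
        clf = OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma=0.5))
        scores = cross_val_score(clf, X_train[:, [g1, g2]], y_train, cv=5)
        if np.isclose(scores.mean(), 1.0):
            winners.append((g1, g2))
    return winners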

Table 1 shows the average training accuracy over all possible gene pairs.

TABLE 1. AVERAGE 5-FOLD CROSS-VALIDATION ACCURACY OF ALL ALGORITHMS: LYMPHOMA DATA
Top genes   OAA-RBF   OAA-GAU   OAO-RBF   OAO-GAU
10          91.02%    91.04%    91.18%    91.32%
20          91.43%    91.33%    91.49%    91.62%
30          88.02%    88.50%    88.43%    88.33%
50          88.62%    88.70%    88.90%    88.98%
100         90.97%    90.99%    91.70%    91.95%

Fig. 1 plots the gene pair (4, 6), which shows a good separation of the 3 classes, DLBCL, CLL and FL, with clear boundaries.

Fig. 1. Gene expression levels of a gene pair showing good separation of the different classes for the lymphoma dataset

Table 2 shows the testing accuracy. The best prediction accuracy was achieved for the gene pair (4, 6) using SVM one-against-all with the RBF kernel function (OAA-RBF) from the top 20 genes, which enables a doctor to predict the 3 subtypes of lymphoma. In all cases the superiority of SVM one-against-all over all the other methods was well proved.

TABLE 2. COMPARATIVE RESULTS OF TESTING ACCURACY OF ALL ALGORITHMS: LYMPHOMA DATA
Top genes   OAA-RBF   OAA-GAU   OAO-RBF   OAO-GAU
10          96.77     91.94     95.16     91.94
20          98.39     95.16     96.77     95.16
30          98.39     96.77     96.77     95.16
50          98.39     96.77     98.39     96.77
100         98.39     98.39     98.39     96.77

A comparison of the results with previous work is shown in Table 3.

In comparison with previous works, the proposed method does well on the number of genes needed to achieve the best accuracy.

TABLE 3. RESULT COMPARISON: LYMPHOMA DATA
Method                          Accuracy   No. of genes
Proposed method                 98.39%     2
Extreme learning machine [24]   97.33%     10
Bayes with local SVM [26]       93%        30
SVM-KNN [25]                    96%        50

B. Liver dataset

The liver dataset (http://genome-www.stanford.edu/hcc) has 2 classes, HCC and non-tumor liver, with 1648 genes for 156 observations, of which 82 are from HCC and 74 from non-tumor livers. Unlike the lymphoma dataset, which concerned the prediction of subtypes of lymphoma cancer, the liver dataset samples are from cancerous and non-cancerous tissue, and the problem is to predict whether a sample is from cancerous or non-cancerous tissue. The methodology adopted for the lymphoma dataset was applied: half of the observations were randomly selected for training and all the samples for testing. As with the lymphoma dataset, SVM one-against-all achieved the best prediction accuracy; it was well proven that SVM one-against-all with the RBF kernel achieves better prediction accuracy than all the other varieties for the liver dataset as well. This is depicted in Table 4.

TABLE 4. COMPARATIVE RESULTS OF TESTING ACCURACY OF ALL ALGORITHMS FOR LIVER DATA
Top genes   OAA-RBF   OAA-GAU   OAO-RBF   OAO-GAU
10          94.87     94.51     94.87     94.44
20          96.15     94.87     95.95     94.87
30          96.87     92.31     96.15     92.31
50          96.87     95.87     96.87     95.87
100         97.44     96.87     96.79     96.67

From the plot in Fig. 2 of the gene pair (13, 23) for the liver dataset, which achieved the best test accuracy, a doctor is able to diagnose that a patient has HCC if and only if the expression level is less than 0.75 and greater than -0.3; otherwise the tissue is of a patient without a tumor.

Fig. 2. Plot showing the best separation for the liver dataset, for the gene pair that achieved the best test accuracy

Fig. 3 shows the number of gene pairs that achieved 100% CV accuracy using the top 100 genes for the liver and lymphoma datasets. As observed in the figure, SVM one-against-all with the RBF kernel function has the largest number of gene pairs with 100%, or nearly 100%, training accuracy for both datasets.

Fig. 3. Maximum number of gene pairs (top 100 genes) that achieved 100% CV accuracy for the lymphoma and liver datasets

IV. CONCLUSION

In this paper an efficient classification method for the cancer diagnosis problem using microarray data was presented. The paper used a classical statistical technique for gene ranking. On application of SVM one-against-all and one-against-one to the lymphoma and liver datasets, it was found that the SVM one-against-all classifier with the RBF kernel function (SVM OAA-RBF) achieves the better classification results, although its computational time for training was longer. The results were also promising when compared with previous works. Future work shall extend this with different classifiers and gene ranking methods.

REFERENCES

[1] Per Broberg. Statistical methods for ranking differentially expressed genes. Molecular Sciences, AstraZeneca Research and Development Lund, S-221 87 Lund, Sweden, 7 May 2003.
[2] Hong Chai, Carlotta Domeniconi. An Evaluation of Gene Selection Methods for Multiclass Microarray Data Classification. Proceedings of the Second European Workshop on Data Mining and Text Mining in Bioinformatics.
[3] J. Jaeger, R. Sengupta, W. L. Ruzzo. Improved Gene Selection for Classification of Microarrays. Pacific Symposium on Biocomputing, 8: 53-64 (2003).
[4] Li Y., Campbell C., Tipping M. Bayesian automatic relevance determination algorithms for classifying gene expression data. Bioinformatics, 2002, 18: 1332-1339.
[5] Diaz-Uriarte R. Supervised methods with genomic data: a review and cautionary view. In: Data Analysis and Visualization in Genomics and Proteomics, 2005: 193-214.
[6] Hua J., Xiong Z., Lowey J., Suh E., Dougherty E. R. Optimal number of features as a function of sample size for various classification rules. Bioinformatics, 2005, 21: 1509-1515.
[7] Jirapech-Umpai T., Aitken S. Feature selection and classification for microarray data analysis: Evolutionary methods for identifying predictive genes. BMC Bioinformatics, 2005, 6: 148.
[8] Ben-Dor A., Bruhn L., Friedman N., Nachman I., Schummer M., Yakhini Z. Tissue classification with gene expression profiles. Proceedings of the Fourth Annual International Conference on Computational Molecular Biology, 2000: 54-64.
[9] Blanco R., Larranaga P., Inza I., Sierra B. Gene selection for cancer classification using wrapper approaches. International Journal of Pattern Recognition and Artificial Intelligence, 2004, 18(8): 1373-1390.
[10] Ji-Gang Zhang, Hong-Wen Deng. Gene selection for classification of microarray data based on the Bayes error. BMC Bioinformatics, 2007, 8: 370. doi:10.1186/1471-2105-8-370
[11] Lipo Wang, Feng Chu, Wei Xie. Accurate Cancer Classification Using Expressions of Very Few Genes. IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 4, no. 1, January-March 2007.
[12] Venu Satuluri. A survey of parallel algorithms for classification.
[13] Yvan Saeys, Inaki Inza, Pedro Larranaga. A review of feature selection techniques in bioinformatics. Bioinformatics Advance Access, published August 24, 2007.
[14] Yeo Lee Chin, Safaai Deris. A study on gene selection and classification algorithms for classification of microarray gene expression data. Jurnal Teknologi, 43(D). ISSN 0127-9696.
[15] Andrew D. Keller, Michele Schummer, Lee Hood, Walter L. Ruzzo. Bayesian Classification of DNA Array Expression Data. Technical Report UW-CSE-2000-08-01, August 2000.
[16] Juan Liu, Hitoshi Iba. Selecting Informative Genes with Parallel Genetic Algorithms in Tissue Classification. Genome Informatics, 12: 14-23 (2001).
[17] R. Tibshirani, T. Hastie, B. Narasimhan, G. Chu. Class Prediction by Nearest Shrunken Centroids, with Applications to DNA Microarrays. Statistical Science, 2003, Vol. 18, No. 1: 104-117.
[18] O. Troyanskaya, et al. Missing values estimation methods for DNA microarrays. Bioinformatics, 2001, 17: 520-525.
[19] B. Scholkopf, C. Burges, A. Smola (eds.). Advances in Kernel Methods: Support Vector Learning. MIT Press, Cambridge, MA, 1999.
[20] Mingjun Song, Sanguthevar Rajasekaran. A greedy correlation-incorporated SVM-based algorithm for gene selection. 21st International Conference on Advanced Information Networking and Applications Workshops, 2007.
[21] Elena Marchiori, Michele Sebag. Bayesian learning with support vector machines for cancer classification with gene expression data. EvoWorkshops 2005: 74-83.
[22] Yoonkyung Lee, Cheol-Koo Lee. Classification of multiple cancer types by multicategory support vector machines using gene expression data. Bioinformatics, 2003, 19(9).
[23] Chih-Wei Hsu, Chih-Jen Lin. A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks, 2002.
[24] Runxuan Zhang, Guang-Bin Huang, Narasimhan Sundararajan, P. Saratchandran. Multicategory classification using an extreme learning machine for microarray gene expression cancer diagnosis. IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 4, no. 3, July-September 2007.
[25] Sung-Bae Cho, Hong-Hee Won. Machine learning in DNA microarray analysis for cancer classification. First Asia-Pacific Bioinformatics Conference, 2003.
[26] Elena Marchiori, Michele Sebag. Bayesian learning with local support vector machines for cancer classification with gene expression data. LNCS, Springer.
[27] Tzu-Tsung Wong, Ching-Han Hsu. Two-stage classification methods for microarray data. Expert Systems with Applications, 34 (2008): 375-383.
[28] U. Kressel. Pairwise classification and support vector machines. In: Advances in Kernel Methods: Support Vector Learning, MIT Press, Cambridge, MA, 1999: 255-268.

Study and Experiment of Blast Furnace Measurement and Control System Based on Virtual Instrument

Li Shufen, Beijing Union University, Beijing, China, zdhtshufen@buu.com.cn
Liu Zhihua, Beijing University of Chemical Technology, Beijing, China, Liuzhihua714@sina.com

Abstract: This article mainly introduces a blast furnace measurement and control system based on the virtual instrument development platform LabView. The hardware takes the ADAM4000 series data acquisition modules of Advantech as the core. The system can replace massive display instruments, finish monitoring the furnace conditions in the production process, show, record and store the measuring information, and realize management of the historical data, so as to ensure the quality and safety of the production.

Keywords: LabView, measurement and control, data acquisition module, furnace condition

I. INTRODUCTION

Virtual instrumentation is a new technology developed in the 1990s. It is the integration of computer and bus technology, microelectronics, and measurement and control technology. It has been widely used in various fields, such as fault diagnosis, process control, detection, equipment design and so on [1]. LabView, developed by the American NI Corporation, is a virtual instrument development platform for many fields. It has a simple graphical programming environment, powerful hardware-driven features, and a friendly visual simulation interface.

II. SYSTEM FUNCTIONS AND HARDWARE DESIGN

This system takes the blast furnace system of the No. 1 iron making plant of Tang gang as the design model.

A. System Functions

In the production process of a blast furnace, it is necessary to measure all the signals of each furnace point, such as the temperature, the pressure and other physical quantities, to finish monitoring the furnace condition. The amount of signal acquisition is large, and the data amount is also great; even worse, the site signal is complex and multi-species. All these make it difficult to measure and control the condition of the blast furnace. This system has the following functions: system parameter setting; real-time measurement and display; real-time control; sound and light alarm; and data storage, browsing and report printing. The operating mode has manual control and PLC control, and the data records have computer records and manual records.

B. Hardware design

The measurement and control system is mainly composed of sensors, measuring transmitters, acquisition cards, computer, printer, annunciator, the executive drives and actuators; the actuator is the drive circuit used to drive the regulating valve. A diagram of the basic system components is shown in Figure 1: the sensors feed the ADAM4018 analog input modules, which communicate with the host through the ADAM4520 converter, while the ADAM4050 digital I/O module and the ADAM4021 analog output module drive the annunciator and the executor.

Figure 1. Diagram of the basic system

The application of sensors should consider two factors: the site environment and the measurement request. Since the hot-air and furnace bottom temperatures are high, we can use the S-type temperature sensor there; its temperature range is from 500 to 1750 and it converts signals below +22 mV. The remaining temperature sensors adopt the K-type, whose temperature range is from 0 to 1000 and which converts signals below +50 mV. The ADAM4000 series data acquisition modules developed by Advantech can accept the standard signals from the sensors, so some of the transmitters can be omitted; the series is characterized by the acquisition of many kinds of signals, working stability and the RS-485 communication function. The temperature sensors are directly connected to the analog input module 4018 through the compensation lead. The 4021 analog output module sets the output range 0~5 V to control the regulating valve, while the 4050 digital input and output module sets the working method as "40". In this system, the serial port COM2 was used and the baud rate was set as 9600 b/s. A high-performance computer is used as the host computer.
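For illustration, polling one ADAM-4018 module from the host over COM2 might look like the following sketch. It assumes the pyserial package and the ADAM-4000 series ASCII command protocol, in which "#AA" reads the analog input channels of the module at address AA; the address 01 is hypothetical, and none of this is the paper's own code.

import serial  # pyserial

def read_adam4018(address="01", port="COM2"):
    """Read the raw analog input channels of an ADAM-4018 over RS-485
    (via the ADAM-4520 converter) at 9600 b/s, as configured above."""
    with serial.Serial(port, baudrate=9600, timeout=1) as com:
        com.write(f"#{address}\r".encode("ascii"))   # "read all inputs" command
        reply = com.read_until(b"\r").decode("ascii")
        return reply   # e.g. ">+0.0512+0.0489..." raw channel readings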

III. SOFTWARE DESIGN

A. Level of Software Function

According to the practical production, the level of the software modules is based on the problems which may happen in the production process and on the requirements of production management.

B. Software Functions

1. Set the system alarm limits: in order to ensure high-quality products and safe production, we can set the quality-assurance and security-assurance limits of the relevant parameters, and set representative parameters (for example, of the four signals T1, T2, T3, T4 of the furnace top temperature, take T1 as the representative parameter) as the alarm limits.

2. Display the parameters of the furnace points: the workers can know every point's pressure, temperature, flow rate and the furnace condition through the host computer screen.

3. Flashing alarm and manual operation: the alarm operates if any parameter of the measurement points exceeds the set limit, and then notifies the workers what the reason is; if necessary, we can switch to manual operation to directly adjust the output signal in an emergency.

4. Data management: the operators need to register in the display interface before they begin to work; the record contains both the worker's number and the working time. The program records the furnace conditions every 10 seconds, and in the event of a fault, or when the system alarms, the current data can be stored and printed. Every day these data form a daily document. The storage of signals also takes the way of representative parameters, which saves a lot of space in the data files, and the scattered data can be displayed in a centralized way by the data analysis function. These measures make it convenient to query, analyse and investigate the production data.

IV. DATA PROCESSING

Through displaying each point's acquisition data, we can use the key parameters to complete the control. The software system reads the key parameters, compares them with the settings, executes PID calculations and then outputs the adjusting signals. After filtering and applying the PID operations to the acquisition signals, we output the control signals to control the air valve opening, and thereby adjust parameters such as temperature and pressure to achieve the objective of adjusting the furnace condition.

V. MAJOR TECHNICAL PROBLEMS AND SOLUTIONS

It is very important to correctly set up the hardware and software [2]. Before operating the system, we must finish the hardware setting as required and complete the hardware detection, because this affects the stability of the system. In the practical production, if there is an error on a hardware module address, the corresponding parameter will have an error too, and this can cause serious consequences for the production. I have also met some problems in the experimental debugging process; the following are some solutions:

1) The 4520 operates normally and the hardware connects correctly, but the computer detects no module. The reasons are: a) the module address is duplicated; b) the serial baud rate is not consistent with the module baud rate; c) there is no power supply for the signal source.

2) The module operates normally, but the collected data shows abnormally. The reasons are: a) an error in the interception of characters or in the operation of the software program; b) the signal source for the data acquisition module has a hardware error.

3) The ADAM drivers can find the modules, but the LabView procedures cannot detect any modules [3]. The reasons are: a) the ADAM driver is not closed and is still occupying the serial port; b) the LabView procedure has errors.

4) The system runs very slowly. The reasons are: a) the data acquisition is delayed too long; b) there is an interruption in the data collection process; c) the data acquisition modules which need to be scanned respond in a disordered way; d) a wrong direction was entered; e) the impact of other procedures.

VI. CONCLUSION

It is a key process to operate and debug the software for the system. If the system were applied directly in the field without debugging, it would inevitably cause great loss, so a quite accurate simulation system must be available to complete the debugging before running in the field. Based on the virtual instrument development platform, the measurement and control equipment can significantly improve the testing efficiency, increase testing functions, improve system performance, and improve the accuracy and precision of control. Virtual instruments have already got a very universal application in developed countries, while in China the traditional instruments are still separated from the computer. Therefore, it will be very conducive to the development of China's measurement and control technology to use virtual instrument technology to transform traditional equipment, and it will greatly improve the production efficiency and reduce the production costs.
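As an illustration of the control step described in section IV above, a minimal position-form PID loop is sketched below. Python is used for illustration; the gains and the 0-5 V output clamp (matching the ADAM-4021 output range) are assumptions, not the paper's parameters.

class PID:
    """Read a key parameter, compare it with its setting, and output an
    adjusting signal, e.g. an air-valve opening sent through the 0-5 V
    analog output. Gains kp, ki, kd are illustrative."""
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=5.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(max(out, self.out_min), self.out_max)   # clamp to 0-5 V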
Virtual instrument has already got a very universal application in developed countries. Storage signals have also taken the ways of representative parameters. Output of the module was automatically selected. it has been occupying the serial. if there is an error on the hardware module address. the measurement and control equipment can significantly improve the testing efficiency. 2. T3. DATA PROCESSING Through displaying each point’s acquisition data.the equipment control. it will be very conducive for the . and in the event of fault or when it alarms the current data can be stored and printed. there will inevitably cause great loss. including setting parameters. d Enter wrong direction. but the computer detects no module. c No power supply for the signal source. 2. and then notify the workers what the reason is. 2) Module operates normally. we can switch to manual operation to directly adjust output signal for emergency. temperature sensor directly connected to the analog input module 4018. take T1 as representative parameter ) as the alarm limits. MAJOR TECHNICAL PROBLEMS AND SOLUTIONS SOFTWARE DESIGN detection because this will affect the stability of the system. flow. And it can also display the scattered data in a centralized way in the function of data analyze. A. Meanwhile. 3. we can use the key parameters to complete the control . while 4050 digital input and output module set the working methods as “40”. the reasons are: a The ADAM driver is not closed. T4 of the furnace top temperature. if directly applying the system. Therefore. the site temperature. Level of Software Function According to the practical production. we can output the control signals. In each of the data records. Through the compensation lead. analysis and investigate the production data. 3.

REFERENCES

[1] Zhao Yong. LabView Programming and Application. Electronics Industry Press.
[2] Yang Leping. The status and development trend of virtual instrument software platform technology. Foreign Electronic Measurement Technology, 2002(1).
[3] Xu Yun. Virtual instrument data acquisition based on serial communication. Instrument Technology and Sensor, 2002(1).

A New Optimization Scheme for Resource Allocation in OFDMA based WiMAX Systems

Arijit Ukil, Jaydip Sen, Debasish Bera
Wireless Innovation Lab, Tata Consultancy Services, BIPL, Saltlake-91, Kolkata, India
arijit.ukil@tcs.com

Abstract: In this paper, an efficient scheme to optimize resource allocation for dynamic OFDMA based WiMAX systems is presented. Our proposed scheme dynamically assigns OFDMA resources to the users to optimize the overall system performance in heterogeneous traffic and diverse QoS based WiMAX systems. The allocation is dependent on each user's derived priority profile, and it also considers the time diversity gain achieved by long-term computation of the Proportional Fair (PF) metric. The proposed PILTPF algorithm emphasizes the individual user's true priority when allocating system resources, in order to maintain as well as optimize the requirements of the different QoS classes. Simulation results show a considerable improvement of performance due to priority based resource allocation with long-term PF calculation with respect to the traditional PF algorithm.

Keywords: WiMAX, OFDMA resource allocation, QoS, time-diversity gain, long-term fairness, scheduling

I. INTRODUCTION

The next generation broadband wireless applications require high data rate, low latency and minimum delay for real-time applications, in short a highly demanding Quality of Service (QoS), which cannot be realistically provided unless the limited system resources, bandwidth and transmitter power, are intelligently used and properly optimized. QoS guarantee is an important feature of the current and next generation broadband wireless networks like WiMAX; without such optimization the limited system resources cannot be properly utilized. A large gain in throughput can be achieved through multiuser raw rate maximization by exploiting multi-user diversity (MUD), but simultaneously fairness must be guaranteed. Optimization approaches basically attempt to dynamically match the requirements of the data-link connections to the available physical layer resources so as to maximize some system metric. A good resource allocator or scheduler generally depends on the business model of the operator, where the system either attains maximum aggregate throughput, or provides fairness among the users, or makes a trade-off between the two. It was shown in [2] that, in order to maximize the total capacity, each subcarrier should be allocated to the user with the best gain on it, and the power should be allocated across the subcarriers using the water-filling algorithm. The principle of the resource allocation taken in [3] is to allow the user with the least proportional capacity to use the particular subcarrier. Proportional Fair (PF) based optimization heuristically tries to balance the fairness among the users in terms of outcome or throughput, while implicitly maximizing the system throughput in a greedy manner. The PF optimization problem, as an instantaneous maximization of the sum of the logarithmic user data rates in multi-carrier systems, is considered in [4], which selects a user-carrier mapping where the logarithmic sum of the user rates is maximized; previous research work [5] proposes low complexity suboptimal algorithms. It is computationally very expensive to satisfy each user's instantaneous data rate requirement in an optimal way, and the approach of resource allocation based on an instantaneous guarantee of QoS does not consider the temporal diversity of the mobile wireless channel. Considering the channel dynamics and the fine granularity of OFDMA systems in both the frequency and temporal domains, it is quite obvious that a mean QoS guarantee in the long term would provide better performance for resource allocation, specifically for delay-tolerant traffic.

Another degree of freedom in QoS guarantee is the priority of the user. Premium users are the users with higher priority, who enjoy the privilege of getting the best and uninterrupted service even with a bad channel condition. It is therefore quite likely that QoS provision should play a major role in deciding resource allocation, and the two should be coupled together; by assigning considerable resources to premium users, like the UGS class in WiMAX, their QoS can be guaranteed. Moreover, the satisfaction level of a well-served user increases only marginally when its QoS level is increased even further; however, if the QoS level declines below some threshold, the satisfaction level drops significantly, which is not what the service provider would like. So an integrated scheme that simultaneously satisfies the individual QoS (the user requirement) and provides system performance maximization (the service provider requirement) is very much required. From this perspective we propose an efficient resource allocation scheme with integrated QoS provisioning, by evaluating and assigning a priority index to each user and performing a biased optimization based on the assigned priority index. To achieve this goal, we have introduced the priority indexed long-term proportional fair (PILTPF) resource allocation algorithm, which provides priority-estimated resource allocation with individual QoS provisioning and dynamically allocates the OFDMA resources to the users to meet their QoS requirements. The proposed PILTPF algorithm attempts to allocate the OFDMA subcarriers to the users so as to achieve each user's minimum mean data-rate requirement within a few frame durations, with a degree of biasness towards the higher priority users in heterogeneous traffic conditions. PILTPF is very suitable and practical for wireless broadband systems like WiMAX, LTE and IMT-A because of its simplicity and mean QoS guarantee feature.

The paper is organized as follows. In section II the system model and problem formulation are presented. In section III the unbiased instantaneous proportional fair optimization problem is discussed. The PILTPF algorithm and the concept of priority indexing of the users are presented in section IV. Simulation results, roughly based on the Mobile WiMAX Scalable OFDMA-PHY, and an analysis of the results are presented in the next section. Section VI provides the summary, conclusion and future scope of work.

II. SYSTEM MODEL AND PROBLEM FORMULATION

OFDMA is the standard multiple access scheme for next generation wireless standards like WiMAX, LTE and IMT-A. OFDMA is basically multi-user OFDM, characterized by a fixed number of orthogonal subcarriers to be allocated to the available users [1]. OFDMA resource allocation algorithms dynamically assign mutually disjoint subcarriers to the users, taking advantage of MUD to meet some specified system performance objective, with the help of the knowledge of the channel condition available from channel state information (CSI). A single cell downlink wireless cellular network with one base station (BS) serving in total K users is considered. The total available bandwidth B is partitioned into N equal narrowband OFDMA subcarriers, denoted s_1, s_2, ..., s_N, where the subcarrier width B/N is chosen to be sufficiently smaller than the coherence bandwidth of the channel in order to overcome frequency-selective fading. Each OFDMA subcarrier n, belonging to user k, is subject to flat fading, path loss and shadowing, with channel gains H = {h_kn}. The signals suffer from AWGN noise, Gaussian distributed with zero mean, with psd N_0 and total noise power N_T. The interference from adjacent cells is treated as background noise. Perfect channel knowledge is assumed in the form of CSI. The total available transmitter power is P, and P_kn is the transmit power on the nth subcarrier when it is transmitted to the kth user. The subcarrier allocation epoch is assumed to be less than the channel coherence time.

Let R_knt = f(h_knt) be the instantaneous achievable rate of the kth user when the nth subcarrier is allocated at the tth time instant, and let R_kt be the achieved data rate of the kth user at allocation epoch t:

R_kt = (B/N) Σ_n ρ_knt · f(h_knt)   (1)

f(h_knt) = log_2( 1 + |h_knt|²·P_kn / (N_T·B/N) )   (2)

Σ_n Σ_k ρ_knt = N   (3)

where ρ_knt is the subcarrier assignment matrix at allocation epoch t: ρ_knt equals 1 if the nth subcarrier is assigned to the kth user at the tth time instant, and 0 otherwise. In practice the achieved data rate is less than what equation (2) suggests, as there exists an SNR gap of a few dB. The SNR gap, in simplified terms [5], can be approximated as Δ_gap = −ln(5·BER)/1.6. Then R_kt becomes

R_kt = (B/N) Σ_n ρ_knt · log_2( 1 + |h_knt|²·P_kn / (Δ_gap·N_T·B/N) )   (4)

In addition, in WiMAX there exist five (M = 5) QoS classes, namely UGS, ertPS, rtPS, nrtPS and BE, each of which requires a certain QoS guarantee. Let the available QoS classes be Q_m, where 1 ≤ m ≤ M. UGS and ertPS class users are given the highest priority and should be satisfied with their QoS requirement all the time. The QoS of the kth user is described by the minimum individual rate requirement Γ = [γ_1, γ_2, ..., γ_K] in bits/sec.

The objective of the PILTPF algorithm is to simultaneously provide long-term fairness, QoS and system optimization, in order to satisfy the requirements of both the user and the service provider. The PILTPF algorithm is simplified by equally distributing the total available transmit power, so that P_kn = P/N, as the performance can hardly be deteriorated by equal power allocation to each subcarrier [6].

The multiuser OFDMA system architecture with the Subcarrier Allocation and Dynamic Priority Index Estimator modules is shown in Fig. 1. As shown in Fig. 1, the OFDMA subcarrier allocation architecture consists of three components: the Subcarrier Allocation Module, the Priority Index Estimator and the Scheduler. The Priority Index Estimator calculates the unique priority of each user and assigns each user a unique priority index. PI_k denotes the priority index of the kth user, and PI_k is a function of the maximum buffer size, minimum data rate requirement, maximum latency and QoS class of the kth user:

PI_k = f(Q_k, bs_k, δ_k, γ_k)   (5)

where bs_k is the maximum buffer size and δ_k the maximum tolerable delay of the kth user.
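As an illustration, equations (2) and (4) can be evaluated for all user-subcarrier pairs as below. Python/NumPy is an illustrative choice, and placing the gap-adjusted SNR inside the logarithm follows the standard SNR-gap approximation that the text's definition of Δ_gap implies.

import numpy as np

def achievable_rates(h, p, B, N_T, ber=1e-3):
    """Rates of eqs. (2) and (4) for every (user k, subcarrier n) pair.
    h: (K, N) complex channel gains; p: (K, N) transmit powers, e.g. P/N
    everywhere under the paper's equal-power simplification."""
    sub_bw = B / h.shape[1]                      # subcarrier bandwidth B/N
    snr = np.abs(h) ** 2 * p / (N_T * sub_bw)    # SNR term of eq. (2)
    gap = -np.log(5 * ber) / 1.6                 # SNR gap Delta_gap
    return sub_bw * np.log2(1 + snr / gap)       # gap-adjusted rate, eq. (4)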
the signals suffer from AWGN noise. is subject to flat fading. PILTPF algorithm is simplified by equally distributing the total available transmitted power as performance can hardly be deteriorated by equal power allocation to each subcarrier [6]. The interference from adjacent cells is treated as background noise. minimum data rate requirement. Rkt = B N ∑ρ n N knt f (hknt ) (1) generation wireless standards like WiMAX. s2 …. then ρ knt equals to 1. nrtps. Then Rkt becomes: Rkt = ∑ρ n =1 N ktn B N   2  hknt × Pkn   x ∆ gap x log 2 1 + B   NT ×   N   (4) Hz and in order to overcome frequency selective fading si is chosen to be sufficiently smaller than the coherence bandwidth of the channel. PI k denotes the priority index of kth user and PI k is function of the maximum buffer size.. SYSTEM MODEL AND PROBLEM FORMULATION B   n =1 NT ×   OFDMA is the standard multiple access scheme for next N   (2) ∑∑ ρ n =1 k =1 N K knt =N (3) where ρ knt is the subcarrier assignment matrix at allocation epoch t. ertps. γK] in bits/sec. The total available bandwidth B is partitioned into N equal narrowband OFDMA subcarriers. LTE. belonging to user k. Let Rknt = f( hknt )be the N instantaneous achievable rate for kth user when nth subcarrier is allocated at tth time instant and Rkt is the achieved data rate of the kth user at allocation epoch t and Rkt is expressed as.
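As a concrete illustration of the rate model of equations (1)-(4), the following minimal Python sketch computes the per-user, per-subcarrier achievable rates under equal power allocation. It is not the authors' code: the parameter values and the Rayleigh channel draw are illustrative assumptions, and the SNR gap multiplies the log term exactly as in equation (4) above.

```python
import numpy as np

# Illustrative parameters (values loosely follow Table I of this paper)
B = 1.25e6           # total bandwidth [Hz]
N = 72               # number of OFDMA subcarriers
K = 20               # number of users
P = 1.0              # total transmit power, split equally: P_kn = P / N
NT = 1e-13           # noise term N_T of equation (2) (assumed value)
ber = 1e-3
snr_gap = -np.log(5 * ber) / 1.6            # Delta_gap of equation (4)

# Rayleigh channel gains h_knt for one allocation epoch (assumed model)
h = (np.random.randn(K, N) + 1j * np.random.randn(K, N)) / np.sqrt(2)

def achievable_rates(h):
    """R_knt of equation (4): rate of user k on subcarrier n [bit/s]."""
    noise = NT * (B / N)
    snr = (np.abs(h) ** 2) * (P / N) / noise
    return (B / N) * snr_gap * np.log2(1.0 + snr)

R = achievable_rates(h)   # K x N matrix; rho_knt selects one user per column
```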

III. UNBIASED INSTANTANEOUS PROPORTIONAL FAIR OPTIMIZATION

Proportional Fairness (PF) in an OFDMA system maximizes the sum of the logarithmic mean user rates [5, 7]:

$PF_t = \max \sum_{n=1}^{N}\sum_{k=1}^{K} \ln \bar{R}_{kt}$   (6)

This can be written more generally when PF is defined as follows [6]:

$PF_t = \prod_{k\in K}\left(1 + \frac{\sum_{n\in N} R_{knt}}{(\Delta\tau - 1)\,\bar{R}_{kt}}\right)$   (7)

where $PF_t$ is the proportional fairness index at the t-th instant, $\Delta\tau$ is the averaging window size, i.e. the period between successive allocations, and $\bar{R}_{kt}$ is the average data rate achieved by user k up to the preceding allocation instant. Equation (7) is the optimal PF subcarrier allocation, but it cannot be implemented due to its high computational complexity. In the suboptimal form [7], subcarrier n is allocated to the user $k^{*}$ for which

$k^{*} = \arg\max_{k} \frac{R_{knt}}{\bar{R}_{kt}}$   (8)

PF calculates the achievable data rate of each user instantaneously and maintains the mean data rate achieved as a moving average, implying the notion of a low-pass filter [7] for the purpose of providing fairness:

$\bar{R}_{kt} = \left(1 - \frac{1}{\Delta\tau}\right)\bar{R}_{k,t-1} + \frac{1}{\Delta\tau}\, R_{k,t-1}$   (9)

The constraint of the system in order to provide QoS is

$\bar{R}_{kt} \ge \gamma_k \quad \forall k$   (10)

The PF scheduler maintains fairness by the law of diminishing returns, which is basically a monotonically increasing assurance of QoS. Traditional PF optimization (6)-(10), however, does not consider the user's priority: it treats every user equally, in an unbiased way. The schemes (6)-(10) deal with maintaining the QoS guarantee instantaneously, without any provision for privileged service to the higher-priority users, and such subcarrier allocation may deprive a higher-priority user with a bad channel condition of its QoS. Traditional PF can only support a loose delay-bound QoS requirement, inherent to the nrtPS and BE classes of traffic, which is not suitable for real-time multimedia services where the delay-bound QoS requirement is stringent; nor is PF the most optimized solution for delay-tolerant applications. Moreover, the wireless channel is normally very dynamic in nature, which demands a modification of the traditional PF approach to handle diverse QoS-based traffic.

IV. PRIORITY BASED PROPORTIONAL FAIR OPTIMIZATION AND ALGORITHM

The unbiased instantaneous Proportional Fair optimization described in (6)-(10) is based on the instantaneous computation of the proportional fair metric, and the resulting optimization considers neither the time diversity nor the priority of the users. Long-term optimization of the system performance yields a time-diversity gain; to take full advantage of this gain, and of parameters like the maximum buffer size and maximum latency that protect the premium users' QoS guarantee, a QoS class-biased proportional fair optimization has to be introduced. The simplest way of incorporating user priority is to give the highest-priority user the chance to be allocated the best of the subcarriers, but that would be strict priority-based subcarrier allocation. Instead, the priority index already introduced in equation (5), which relaxes the strict priority and reflects the true priority of the user, is used. The PILTPF algorithm exploits the time diversity as well as the priority of the user by calculating a priority-index-based PF metric over the long term to meet the constraint of the minimum rate requirement. PILTPF has two parts and operates sequentially: the first part estimates the user's priority based on (5), and the second is the subcarrier allocation, in which the Subcarrier Allocation Module (Fig. 1) allocates the OFDMA subcarriers to the users with the help of the PILTPF algorithm.
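For reference, the unbiased rule of equations (8)-(9), which PILTPF extends with priority weighting, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the array shapes and the window size are assumed.

```python
import numpy as np

def pf_allocate(R, Rbar):
    """Suboptimal PF of equation (8).
    R: K x N achievable rates at epoch t; Rbar: length-K mean rates.
    Returns the 0/1 assignment matrix rho (one user per subcarrier)."""
    K, N = R.shape
    rho = np.zeros((K, N), dtype=int)
    for n in range(N):
        k_star = np.argmax(R[:, n] / Rbar)   # equation (8)
        rho[k_star, n] = 1
    return rho

def update_means(Rbar, R_t, dtau=100.0):
    """Low-pass filter of equation (9): moving average of achieved rates."""
    return (1.0 - 1.0 / dtau) * Rbar + R_t / dtau
```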

Priority estimation evaluates the dynamic priority of the user based on its current QoS class, available buffer size, delay limit and minimum data-rate requirement; the subcarrier allocation part then assigns the appropriate subcarriers to the users to optimize the system performance. The priority of the user is estimated by $PI_k$, which reflects the true priority of the user and is highly dynamic. PI is highest for the UGS class; for the rest, PI is estimated as

$PI_k = \dfrac{Q_k \times \gamma_k}{\left(\dfrac{bs_k^{MAX} - bs_k^{t}}{bs_k^{MAX}}\right)\left(\dfrac{\delta_k^{MAX} - \delta^{t}}{\delta_k^{MAX}}\right)}$   (11)

where $bs_k^{MAX}$, $\delta_k^{MAX}$, $bs_k^{t}$ and $\delta^{t}$ are the maximum buffer size, the maximum delay limit, the used buffer size and the elapsed delay of the k-th user at the current allocation instant t. Equation (11) is based on the notion that the user's priority increases as the urgency of allocating resources to that user becomes higher, so (11) can also be termed urgency-based priority assignment. It can clearly be seen that $PI_k \to \infty$ when $(bs_k^{MAX} - bs_k^{t})/bs_k^{MAX} \to 0$ or $(\delta_k^{MAX} - \delta^{t})/\delta_k^{MAX} \to 0$, whereas when both ratios equal 1, as for non-delay-constrained traffic, the priority depends mostly on the value of $Q_k$ only. The higher the priority of the user, the higher the magnitude of $PI_k$ and the better the chance of being allocated a subcarrier, even under a bad channel condition and a high achieved rate. This converts the traditional PF optimization into a weighted PF optimization: the Subcarrier Allocation Module selects the user $k^{*}$ for the n-th subcarrier based on

$k^{*} = \arg\max_{k} \frac{PI_k \times R_{knt}}{\bar{R}_{kt}}$   (12)

The constraint of the optimization utilizes the time-diversity (TD) gain. In a wireless mobile environment, over a long duration the time-diversity gain becomes high, and the channel condition samples follow a similar distribution according to Bernoulli's Law of Large Numbers; they can well be considered independent and identically distributed (i.i.d.) random variables with mean $\mu_k$, which justifies the intuitive interpretation that the expected value of a random variable is basically its long-term average when sampled repeatedly. Then, theoretically, more time-diversity gain is achieved as $\delta_k^{MAX}$ increases:

$\lim_{\delta_k^{MAX}\to\infty} TD_{gain} = \mu_k$   (13)

PILTPF optimization utilizes this performance gain in its attempt to converge to $\gamma_k$ by relaxing the QoS constraint from an instantaneous minimum data rate to a statistical mean value. PILTPF does not depend on the per-instant $R_{kt}$, nor is the PF metric calculated instantaneously; instead of an instantaneous data-rate guarantee, it assures a long-term average data rate. It calculates both the PF metric and the moving-average value of $\bar{R}_{kt}$ in a window of $T_k$ and allocates the subcarriers to the users in such a way as to maintain each user's minimum average data-rate requirement within the $T_k$ time duration:

$E(R_{kt})\big|_{t=1}^{T_k} \ge \gamma_k \quad \forall k$   (14)

The larger the value of $T_k$, the more ergodic the optimization scheme becomes, and with a higher granularity in $T_f$ better optimization can be obtained. If $T_k = T_f$, where $T_f$ is the allocation interval, the problem is purely proportional fair; maximum system optimization is obtained when, for every user,

$T_k = \delta_k^{MAX}$   (15)

i.e. each user's allocation is complete only when $t = \delta_k^{MAX}$. Under (14) and (15),

$\lim P(\omega_k \to \gamma_k) = 1$

which is the condition of absolute convergence to the QoS requirement with probability one. The optimization is thus a modified PF that exploits the TD gain of each and every user whenever possible, attempts to avoid QoS violation to the maximum extent, and minimizes the outage probability. Equations (11)-(15) describe the PILTPF optimization scheme; based on it, the proposed PILTPF algorithm is as follows:

Step 1: Set the initial mean achievable data rate $E(R_{kt})|_{t=1} = \varepsilon$ and t = 0, where $\varepsilon$ is a small number.
Step 2: Find $PI_k$ for all k as per (11), and find the user $k^{*}$ as per (12) for all the subcarriers.
Step 3: Calculate the achieved data rate $R_{kt} = \sum_{n=1}^{N} \rho_{knt} \times R_{knt}$ for all the users at the t-th instant.
Step 4: Find the mean data rate achieved by the k-th user at the t-th instant.
Step 5: If $E(R_{kt})|_{t=1}^{T_k} \ge \gamma_k$, find the next best k; else continue.
Step 6: $t = t + T_f$ and go to Step 2.
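A minimal Python sketch of the priority index and the weighted allocation rule follows. It is illustrative only: the form of equation (11) is as reconstructed above, and the small epsilons merely guard against division by zero.

```python
import numpy as np

def priority_index(Q, gamma, bs_max, bs_t, d_max, d_t):
    """Equation (11): urgency-based priority; it grows without bound as the
    buffer headroom or the remaining delay budget shrinks to zero."""
    buf = (bs_max - bs_t) / bs_max
    dly = (d_max - d_t) / d_max
    return Q * gamma / (max(buf, 1e-9) * max(dly, 1e-9))

def piltpf_allocate(R, Rbar, PI):
    """Equation (12): subcarrier n goes to k* = argmax PI_k R_knt / Rbar_kt.
    R: K x N achievable rates; Rbar: length-K mean rates; PI: length-K."""
    K, N = R.shape
    rho = np.zeros((K, N), dtype=int)
    for n in range(N):
        rho[np.argmax(PI * R[:, n] / Rbar), n] = 1
    return rho
```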

V. SIMULATION RESULTS AND ANALYSIS

To investigate the performance of the PILTPF algorithm, simulation results under the system parameters and simulation scenario given in Table I are presented in this section. The system parameters are roughly based on the Mobile WiMAX Scalable OFDMA-PHY, with all the QoS-related PHY and MAC parameters taken into account. A frequency reuse factor of 1 is taken, so that all the subcarriers can be assigned to the users. The subcarrier allocation instant is taken to be equal to the frame duration (5 ms), which is assumed to be less than the coherence time of the channel. A random heterogeneous mix of UGS, ertPS, nrtPS and BE traffic with varying QoS metrics (delay, buffer size, minimum data rate) is taken for simulation purposes.

TABLE I. SIMULATION PARAMETERS
Available bandwidth: 1.25 MHz
Number of users: 20/30
Number of sub-carriers: 72
BER: 10^-3
SNR gap: -ln(5 BER)/1.6
Channel model: Rayleigh
Modulation: 16QAM
Frequency reuse factor: 1
Channel sampling frequency: 1.5 MHz
Maximum Doppler: 100 Hz

The simulation results are shown in Figs. 2, 3 and 4. Fig. 2 compares the achievable data rates of the users under the PILTPF and PF algorithms; it also shows the minimum data-rate requirement of each individual user as the QoS profile. Fig. 2 clearly shows that the PILTPF algorithm follows the QoS profile for all the users, whereas under the PF algorithm some users are deprived and achieve far less than their minimum requirement, due to the inherent feature of its instantaneous PF metric computation. If such a deprived user, say the 10th user in Fig. 2, is a high-priority customer, the allocation is quite unacceptable from both the service provider's and the user's perspective. From Figs. 2 and 3 it can be seen that, since PILTPF considers both the priority of the user and the time-diversity gain, it yields better performance in terms of both throughput and QoS guarantee. The unevenness of the conventional PF algorithm's subcarrier assignment becomes more visible as the number of users grows; even in that highly complex scenario of a large number of users, Fig. 4 shows that PILTPF at least attempts to follow the QoS profile in order to preserve the importance of the users' priorities. With a small number of users and good channel conditions throughout the entire cell, however, the difference between PF and PILTPF diminishes.

Fig. 2. User achievable throughput by the PILTPF and PF algorithms when the number of users = 20.
Fig. 3. Chart comparison between PILTPF and PF.
Fig. 4. User achievable throughput by the PILTPF and PF algorithms when the number of users = 30.

VI. SUMMARY AND CONCLUSION

We have proposed an efficient but simple OFDMA resource allocation algorithm that shows better long-term performance and QoS guarantees than the conventional suboptimal PF algorithm under QoS-diversified heterogeneous traffic conditions. PILTPF is of practical importance and can well be implemented in next-generation broadband wireless systems like LTE, WiMAX and IMT-A to improve the overall system performance.

REFERENCES
[1] Ahmad R., et al., Multi-Carrier Digital Communications: Theory and Applications of OFDM, 2nd ed., Springer.
[2] Rhee, et al., "Increase in capacity of multiuser OFDM system using dynamic subchannel allocation," IEEE VTC, May 2000, pp. 1085-1089.
[3] Z. Shen, et al., "Adaptive resource allocation in multiuser OFDM systems with proportional rate constraints," IEEE Trans. Wireless Communication, Nov. 2005, pp. 2726-2737.
[4] Hoon Kim and Youngnam Han, "A Proportional Fair Scheduling for Multicarrier Transmission Systems," IEEE Communication Letters, vol. 9, no. 3, March 2005, pp. 210-212.
[5] Abolfazl Falahati and Majid R. Ardestani, "An Improved Low-Complexity Resource Allocation Algorithm for OFDMA Systems with Proportional Data Rate Constraint," ICACT, 2007, pp. 606-611.
[6] Wei Xu, et al., "Efficient Adaptive Resource Allocation for Multiuser OFDM Systems with Minimum Rate Constraints," IEEE ICC, 2007, pp. 5126-5131.
[7] Christian Wengerter, Jan Ohlhorst, and Alexander Golitschek Edler von Elbwart, "Fairness and Throughput Analysis for Generalized Proportional Fair Frequency Scheduling in OFDMA," IEEE VTC, 2005, pp. 1903-1907.

An Integration of CoTraining and Affinity Propagation for PU Text Classification

Na Luo 1,2, Fuyu Yuan 3, Wanli Zuo 1
1 College of Computer Science and Technology, JiLin University, Changchun, 130012, China
2 Department of Computer, Northeast Normal University, Changchun, JiLin, 130117, China
3 State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, JiLin, 130022, China
Luon110@nenu.edu.cn, yuanfuyu@yahoo.com.cn, wanli@jlu.edu.cn

Abstract

Under the framework of PU (Positive data and Unlabeled data) classification, this paper originally proposes a three-step algorithm. First, CoTraining is employed to filter the likely positive data out of the unlabeled dataset U. Unlike former PU algorithms [2, 3], which exploit a reliable negative dataset, CoTraining iterates to purify the unlabeled dataset by filtering out some likely positive examples; the underlying rationale is that purifying the unlabeled data by filtering out a small fraction of positive data is much easier than extracting a small but reliable negative dataset. Second, the affinity propagation (AP) approach attempts to pick out the strong positives from the likely positive set produced in the first step; the data picked out are added to the positive dataset P. Finally, a linear SVM is trained on both the purified U as negative and the expanded P as positive. Because of the algorithm's characteristic of automatically expanding the positive dataset, it performs especially well in situations where the given positive dataset P is insufficient. A comprehensive experiment has shown that our algorithm is preferable to the existing ones.

1. Introduction

Traditionally, a general binary text classifier is built by employing some algorithm on a positive dataset and a negative dataset; this kind of algorithm is termed a supervised learning algorithm [1]. For instance, in order to construct a "homepage" classifier, one needs to collect a sample of homepages (positive training examples) and a sample of non-homepages (negative training examples). Collecting negative training examples is especially delicate and arduous because (1) negative training examples must uniformly represent the universal set excluding the positive class (e.g., the sample of non-homepages should represent the Internet uniformly, excluding the homepages), and (2) manually collected negative training examples could be biased because of humans' unintentional prejudice, which could be detrimental to classification accuracy. Traditional supervised learning algorithms are thus not directly applicable, because they require both labeled positive and labeled negative documents to build a classifier. For the above reasons, PU classification has become an important problem; by exploiting the unlabeled data, the PU problem is transformed into a supervised learning problem.

This paper has two main contributions. The first is the proposal of employing CoTraining to purify the unlabeled dataset; moreover, the likely positive data that are filtered out can also be used to supplement the positive dataset. Our second contribution is the employment of affinity propagation (AP) on the likely positive data to expand the positive dataset.

The remainder of the paper is organized as follows: some related work is presented in Section 2; details of the proposed algorithm can be found in Section 3; a number of comparative experiments are reported in Section 4; and Section 5 briefly draws some conclusions.

2. Related Works

Probably Approximately Correct (PAC) learning from positive and unlabeled examples has been proposed; it defines a PAC learning model from positive and unlabeled statistical queries. Liu Bing et al. reviewed several two-step PU approaches and proposed the Biased-SVM [2]; Liu Bing also employs a weighted logistic regression for PU and originally defines a measure on the unlabeled dataset. PEBL is another two-step approach to PU [3]: it attempts to find the distinguishing words between the positive and unlabeled corpora and then uses these words to obtain a reliable negative set (RN). Combining the CoTraining framework with PU was first proposed by Blum and Mitchell [4], and another comprehensive study on CoTraining was made by Nigam and Ghani [5]. So far as we know, there is no existing algorithm that supplements the positive dataset.

3. The Proposed Algorithm

As mentioned in the introduction, the proposed algorithm can be divided into three steps: (1) purify the unlabeled dataset with CoTraining by filtering out a likely positive set; (2) employ affinity propagation on the likely positive set to supplement the positive dataset; (3) perform a linear SVM on the purified unlabeled dataset as negative and the expanded positive dataset as positive.

3.1. CoTraining Step

In order to obtain a reliable negative dataset, former algorithms used to first assign a negative "pseudo-label" to all unlabeled data and then train on them together with the positive dataset. Remarkably, such a classifier is trained and then made to classify the very same unlabeled dataset, so it is not likely to assign confident labels, yet it is used again to assign "pseudo-labels" to the unlabeled data. This embarrassment can be avoided by our CoTraining algorithm, which uses two individual classifiers trained on two different datasets to purify U iteratively. Our CoTraining consists of two individual SVM learners (base classifiers) built on the same positive dataset and on two different unlabeled datasets; the way it differs from traditional CoTraining [5] is that the two base classifiers are based not on two different feature spaces but on two different training subsets of the same feature space. The two base classifiers "help", or "co-purify", each other by filtering out likely positive examples from the unlabeled dataset of the counterpart.

In Algorithm 1, the positive dataset is denoted by P, the unlabeled dataset by U, an arbitrary document by d, and a classifier by S; Classify(S, d) signifies the label that S assigns to d. L represents the likely positive dataset filtered out of U, and pU denotes the purified unlabeled dataset. A sketch of this loop in code is given after the listing.

Algorithm 1: CoTraining Algorithm
Input: (P, U)
Output: (L, pU)
1: Randomly split U into two sets U1, U2, subject to U1 ∪ U2 = U, U1 ∩ U2 = {}
2: Build SVM classifier S1_0 with P as positive and U1 as negative
3: Build SVM classifier S2_0 with P as positive and U2 as negative
4: L = {}; Q = {}; R = {}; i = 0
5: while (true)
6:   Q = {d | Classify(S2_i, d) == positive, d ∈ U1}
7:   U1 = U1 - Q
8:   R = {d | Classify(S1_i, d) == positive, d ∈ U2}
9:   U2 = U2 - R
10:  if (Q == {} and R == {})
11:    break
12:  end if
13:  L = L ∪ Q ∪ R; i = i + 1
14:  Build SVM classifier S1_i with P as positive and U1 as negative
15:  Build SVM classifier S2_i with P as positive and U2 as negative
16: end while
17: L = {d | Classify(S1_i, d) == positive and Classify(S2_i, d) == positive, d ∈ L}
18: pU = U1 ∪ U2
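The following Python sketch illustrates the purification loop of Algorithm 1 using scikit-learn's linear SVM. It is a minimal illustration under assumptions: X_p and X_u are pre-computed feature matrices (e.g. tf-idf) for P and U, and the function names are ours, not the paper's.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_pu(X_pos, X_neg):
    """Train a linear SVM with X_pos labeled 1 and X_neg labeled 0."""
    X = np.vstack([X_pos, X_neg])
    y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]
    return LinearSVC().fit(X, y)

def cotrain_purify(X_p, X_u, max_iter=50):
    """Algorithm 1: returns indices of L (likely positive) and pU."""
    idx = np.random.permutation(len(X_u))
    u1, u2 = list(idx[: len(idx) // 2]), list(idx[len(idx) // 2:])
    likely_pos = []
    for _ in range(max_iter):
        s1 = fit_pu(X_p, X_u[u1])   # P vs U1
        s2 = fit_pu(X_p, X_u[u2])   # P vs U2
        q = [i for i in u1 if s2.predict(X_u[[i]])[0] == 1]  # S2 filters U1
        r = [i for i in u2 if s1.predict(X_u[[i]])[0] == 1]  # S1 filters U2
        if not q and not r:
            break
        u1 = [i for i in u1 if i not in q]
        u2 = [i for i in u2 if i not in r]
        likely_pos += q + r
    return likely_pos, u1 + u2      # L and purified pU (as indices)
```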

3.2. Affinity Propagation Step

For the purpose of expanding the positive dataset, affinity propagation is adopted. The advantages of affinity propagation clustering [6-9] over other clustering methods lie in its greater stability with respect to different initializations. In affinity propagation clustering, two kinds of message are exchanged between data points, each of which takes a different kind of competition into account. The "responsibility" r(i, k), sent from data point i to candidate exemplar point k, reflects the accumulated evidence for how well-suited point k is to serve as the exemplar for point i, taking into account other potential exemplars for point i. The "availability" a(i, k), sent from candidate exemplar point k to point i, reflects the accumulated evidence for how appropriate it would be for point i to choose point k as its exemplar, taking into account the support from other points that point k should be an exemplar. Messages can be combined at any stage to decide which points are exemplars and, for every other point, which exemplar it belongs to.

Affinity propagation takes real-valued similarities s(i, k) as its input, where s(i, k) reflects the similarity between data points i and k and can be set to the negative squared Euclidean distance, $s(i,k) = -\|x_i - x_k\|^2$. The availabilities are initialized as a(i, k) = 0 and the responsibilities as r(i, k) = 0. The responsibilities and availabilities are then iteratively computed as:

$r(i,k) \leftarrow s(i,k) - \max_{k' \ne k}\{a(i,k') + s(i,k')\}$

$a(i,k) \leftarrow \min\Big\{0,\; r(k,k) + \sum_{i' \notin \{i,k\}} \max\{0,\, r(i',k)\}\Big\}$ for $i \ne k$, and

$a(k,k) \leftarrow \sum_{i' \ne k} \max\{0,\, r(i',k)\}$

The self-similarities (preferences) s(k, k) are set to a value shared by all k, because all data points are equally suitable as exemplars, and the number of identified exemplars (the number of clusters) is influenced by this initialized value. As reported in the literature, s(k, k) is usually set to the median of the input similarities (resulting in a moderate number of clusters) or to their minimum (resulting in a small number of clusters). In our design, however, the true number of clusters may be a widely changeful value rather than exactly a moderate or small number, so s(k, k) is treated as a varying parameter ranging from $\min_{i,j} s(i,j)$ to $\max_{i,j} s(i,j)$:

$s(k,k) = \min_{i,j} s(i,j) + \alpha\big(\max_{i,j} s(i,j) - \min_{i,j} s(i,j)\big), \quad \alpha \in [0,1]$

(A numpy sketch of these message updates is given at the end of this section.) To quantify the labels, we assign +1 to the positive label and -1 to the negative one. Using AP, the supplementary set sP is picked out of the likely positive dataset as follows:

Algorithm 2: Affinity Propagation Algorithm
Input: (P, pU, L)
Output: (aP)
1: sP = {}
2: for each d ∈ L
3:   Rank the documents in pU and P according to their similarity to d
4:   sum = 0
5:   for each nd among the top-ranked documents
6:     if (nd ∈ P)
7:       sum = sum + similarity(nd, d) × (+1)
8:     else if (nd ∈ pU)
9:       sum = sum + similarity(nd, d) × (-1)
10:    end if
11:  end for
12:  if (sum > 0)
13:    sP = sP ∪ {d}
14:  end if
15: end for
16: aP = sP ∪ P

After the AP step, the positive dataset is expanded with sP, namely aP = sP ∪ P; this technique is especially suited to situations where the given positive dataset is insufficient.

3.3. Semantic-based Feature Extraction in the SVM Algorithm

A word possibly has many meanings; WordNet expresses a meaning with a synonym set. If two features in one category have a common synonym set, we can say that these features represent this category strongly [10]. In the process of semantic feature extraction, multi-sense words are disambiguated and the correct semantics selected; the repeated semantics are the meanings of the multi-sense words in the positive set, and the words corresponding to these semantics constitute the important features representing the positive examples. Document meanings are determined by the overlapping semantics of documents, and those meanings pick out the features that form the document vector; forming the vector of a positive document from these words avoids losing the meaning of the document while reducing the dimension of the document vector.

SVM is the most popular algorithm in text classification; the principle of minimizing the structural risk makes SVM suitable for high-dimensional text data. The One-Class SVM algorithm uses only the positive set to train the classifier. In this paper we have improved the original algorithm to form the document vectors of positive examples by the above method of semantic feature extraction, applied through WordNet as in the following listing:

Algorithm 3: Semantic-based Feature Extraction in One-Class SVM
1: Set hm_allSyn = NULL (all semantics appearing in LP) and hm_crossSyn = NULL (semantics appearing repeatedly in LP)
2: For each document d in LP
3:   For each word w in d
4:     For each semantic s of w
5:       If not s ∈ hm_allSyn
6:         hm_allSyn += s
7:       else if not s ∈ hm_crossSyn
8:         hm_crossSyn += s
9: For each document d in LP
10:  For each word w in d
11:    t = tfidf(w)
12:    For each semantic s of w
13:      if (t ≠ 0 and s ∈ hm_crossSyn)
14:        output w to the file of document vectors of positive examples
15:        break
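Closing this section, the affinity propagation message updates quoted in Section 3.2 can be sketched compactly in numpy. This is a generic illustration (with damping added for numerical stability, a standard practice not discussed above), not the authors' implementation; alpha sets the shared preference s(k, k) between the minimum and maximum similarity.

```python
import numpy as np

def affinity_propagation(S, alpha=0.5, damping=0.5, iters=200):
    """S: n x n similarity matrix, e.g. negative squared distances.
    Returns the indices of the identified exemplars."""
    n = S.shape[0]
    S = S.copy()
    pref = S.min() + alpha * (S.max() - S.min())   # s(k,k)
    np.fill_diagonal(S, pref)
    R = np.zeros((n, n)); A = np.zeros((n, n))
    for _ in range(iters):
        # r(i,k) <- s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) <- min{0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k))}
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())      # keep r(k,k) unclipped
        Anew = Rp.sum(axis=0)[None, :] - Rp     # excludes the i'=i term
        dA = Anew.diagonal().copy()             # a(k,k): no min with 0
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    return np.where(np.diag(A + R) > 0)[0]
```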

4. Empirical Evaluation

4.1. DataSets

We used two popular text collections in our experiments. The first is the Reuters-21578 collection, which has 21578 documents collected from the Reuters newswire; of its 135 categories, only the 10 most populous are used, which gives us 10 datasets. The second collection is the Usenet articles collected by Lang; we use each newsgroup as the positive set and the rest of the 19 groups as the negative set, which creates 20 datasets. Each category is employed as the positive class and the rest as the negative class. For each dataset, 30% of the documents are randomly selected as test documents. The rest (70%) are used to create the training sets as follows: γ percent of the documents from the positive class are selected as the positive set P, and the remaining positive documents, together with the negative documents, are used as the unlabeled set U. In the data preprocessing, stopwords are removed and a stemmer is applied.

4.2. Evaluation Measure

Three measures evaluate the effects of the three steps respectively. For the CoTraining step, whose goal is to purify U by filtering out the positive examples, the popular F measure is a good choice: the precision and recall of the negative data in pU (the purified unlabeled dataset) are calculated and then combined as f = 2 × recall × precision / (recall + precision). For the second step, which expands P with the supplementary set sP extracted from LP (the likely positive dataset), the effect depends entirely on the precision of the positive data in sP. Finally, for the third step, which aims to build an accurate classifier, the F score on the test dataset is employed as the measure of the final classifier's accuracy.

4.3. Experiments

The experiments compare the methods of document frequency and semantic feature extraction for the One-Class SVM algorithm. The process, shown in Fig. 1, is as follows: the data set is expressed as files of word sequences; the training sets U and P are used for training; documents are expressed as tf-idf vectors; and the One-Class SVM classifier using semantic feature extraction (based on document frequency) classifies the test documents represented by SVM vectors.

Fig 1: The system flow chart.

In our experiments we use tf-idf to express the document vector and take two experiment settings, with γ = 0.1 and γ = 0.7. Table 1 provides the average F score over all categories and shows that our method increases the F score by 11.45 percent when γ is 0.1 and by 6.15 percent when γ is 0.7. So we draw the conclusion that our three-step method improves the performance of the One-Class SVM algorithm, especially when the positive examples are few.

Table 1. Average F of the One-Class SVM classifier on the Reuters and 20Newsgroups datasets for γ = 0.1 and γ = 0.7, compared with the previous best F.
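The final training and evaluation step, whose results are reported next, can be sketched as follows. This is an illustration under assumptions: scikit-learn is our choice of tooling (the paper does not specify one), and names such as aP_docs and pU_docs are hypothetical document lists.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def final_classifier(aP_docs, pU_docs):
    """Step three: linear SVM with expanded positives aP (+1) vs purified
    unlabeled pU (-1), over tf-idf features."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(aP_docs + pU_docs)
    y = np.r_[np.ones(len(aP_docs)), -np.ones(len(pU_docs))]
    return vec, LinearSVC().fit(X, y)

def f_score(precision, recall):
    """The F measure of Section 4.2: f = 2 r p / (r + p)."""
    return 2 * recall * precision / (recall + precision)
```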

At last, we trained the final SVM classifier with the purified unlabeled dataset (pU) as negative and the expanded positive dataset as positive. To test the final classifier, it was used to classify the data of the 10 test datasets; the F score corresponding to each γ value on each dataset is listed in Table 2.

Table 2. F score of the 10 datasets after the three steps, for γ = 0.1, 0.3, 0.5 and 0.7 (categories: interest, money, ship, corn, earn, wheat, trade, acq, crude, grain, plus the average).

5. Conclusions

This paper combines CoTraining with PU classification and studies PU text classification based on affinity propagation and semantic feature extraction; the goal is to use these algorithms to improve the performance of the classifier. Affinity propagation can improve the performance of a PU classifier when the positive examples are few: even if the given positive dataset is insufficient, the algorithm still performs well. From the comparison with the document-frequency method, the semantic One-Class SVM increases the F score, an important measure of a classifier, by 10.183 percent. The results prove that the combination of the three steps is superior to the former PU algorithms.

Acknowledgement

This work was supported by the National Nature Science Foundation of China under Grant No. 60803102 and the Science and Technological Development Projects of JiLin Province under Grant No. 20070533.

References
[1] Y. Yang, "An evaluation of statistical approaches to text categorization," Journal of Information Retrieval, vol. 1, no. 1/2, 1999, pp. 67-88.
[2] B. Liu, Y. Dai, X. Li, W. S. Lee, and P. S. Yu, "Building text classifiers using positive and unlabeled examples," in Proc. of the 3rd IEEE Int'l Conf. on Data Mining, Melbourne, 2003, pp. 179-188.
[3] H. Yu, J. Han, and K. C. Chang, "PEBL: Positive example based learning for Web page classification using SVM," in Proc. of the Int'l Conference on Knowledge Discovery and Data Mining, ACM, New York, NY, 2002, pp. 239-248.
[4] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell, "Text Classification from Labeled and Unlabeled Documents using EM," Machine Learning, vol. 39, 2000, pp. 103-134.
[5] T. Joachims, "Making large-Scale SVM Learning Practical," in Advances in Kernel Methods - Support Vector Learning, MIT Press, 1999.
[6] B. J. Frey and D. Dueck, "Clustering by Passing Messages Between Data Points," Science, vol. 315, 2007, pp. 972-976.
[7] K. Song, Y. Tian, W. Gao, and T. Huang, "Diversifying the image retrieval results," in Proc. of the 14th Annual ACM Int'l Conference on Multimedia (MULTIMEDIA'06), ACM, New York, NY, 2006, pp. 707-710.
[8] C. Zhai, W. Cohen, and J. Lafferty, "Beyond independent relevance: methods and evaluation metrics for subtopic retrieval," in Proc. of the 26th Annual Int'l ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'03), ACM, New York, NY, 2003, pp. 10-17.
[9] H. Chen and D. Karger, "Less is more: probabilistic models for retrieving fewer relevant documents," in Proc. of the 29th Annual Int'l ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'06), ACM, New York, NY, 2006, pp. 429-436.
[10] Kulathuramaiyer, N., "Semantic feature selection using WordNet," in Proc. of the Int'l Conference on Web Intelligence (WI 2004), IEEE Computer Society, 2004, pp. 166-172.

International Conference on Computer Engineering and Technology

Session 3


Ergonomic Evaluation of Small-screen Leading Displays on the Visual Performance of Chinese Users

Yu-Hung Chien
Department of Product Design, Ming Chuan University
5 De Ming Rd., Gui Shan District, Taoyuan County 333, Taiwan
roland@mail.mcu.edu.tw

Chien-Cheng Yen
Graduate School of Design Management, Ming Chuan University
5 De Ming Rd., Gui Shan District, Taoyuan County 333, Taiwan
ccyen0706@yahoo.com.tw

Abstract

A leading display is a mechanism for exhibiting temporal information instead of spatial information, to overcome the limitations of small-screen mobile devices. Previous studies examining this area focused only on the information presented by leading displays; however, mobile interaction is often a secondary task performed while doing something else, and the attention of leading-display users cannot always be assumed to be only on reading the leading-display information. Therefore, this investigation performed a dual-task experiment (a search task for static information and a reading task for leading-display information) to examine the effects of leading-display factors on the visual performance of users during different stages of usage (whether the current usage is the first, second, third, fourth, or fifth day of usage) of a small screen. The results showed that the leading-display design factors did not distract participants from the static-information search tasks but did affect participants' reading comprehension on leading displays; speed and presentation mode significantly influenced reading comprehension. Finally, the possible applications of leading displays and the implications of these findings for reading Chinese text are discussed.

1. Introduction

The future is certainly looking mobile and wireless. Ubiquitous computing and personal networks are believed to be the near-future paradigms for our everyday computing needs, essentially giving access to any information, anywhere, at any time. The rapid development of ubiquitous computing has led to the increasing use of mobile devices to read text. However, display space on mobile devices is at a premium, and the amount of information that can be displayed on a small screen at one time is extremely limited. One possibility for overcoming this limitation is to use a leading display, where screen space is traded for time to present temporal information in a dynamic manner: in a leading display used on websites, for instance, a string of text moves from right to left sequentially along a single line within a small screen. In the case of presenting notifying information, a dynamic display is a means of presenting such information to users while they are reading static information on a display, and the leading display is widely used to show additional notifying information.

Previous Chinese dynamic-display studies have examined the effects of several leading-display factors on the visual performance of users, including presentation mode, speed, jump length, font size, text/background color combination, and Chinese typography [1-4, 6-8]. According to the results of these studies, speed and presentation mode were the two most critical factors for visual performance. However, the majority of leading-display studies have been conducted under idealized single-task conditions, resulting in a lack of information on how changes in the context of tasks affect the ability of users to perform effectively. Frequently, the use of such devices is influenced by the context of tasks, and the adaptability of users has somehow been disregarded: in most leading-display studies, the reading comprehension of participants under each set of reading conditions has been measured only once, so an assessment of the adaptability of users to leading displays, that is, the connection between visual performance and previous experience in using leading displays, requires further investigation. Moreover, users may be performing another task while their attention is on the dynamic display: for instance, users can read dynamic information shown on the single-line display of a facsimile machine while the tasks of searching for and pressing keypad buttons have to be carried out simultaneously. Consequently, it is important to address the disconnection between the actual use of leading displays and the effects of leading-display factors on users' visual performance in a dual-task scenario.

2. Method

We conducted a 3 × 2 × 5 repeated-measures dual-task experiment to examine how speed (250, 350, and 450 characters per minute, cpm), presentation mode (character-by-character or word-by-word), and adaptability (days 1-5 of use) determined the visual performance of Chinese users using leading displays to accomplish dual tasks on small screens. The speed settings were based on the studies of Chien & Chen [2], Lin & Shieh [3], and Shieh & Lin [4]; the presentation modes were based on Chien & Chen [2], Lin & Shieh [3], Wang et al. [6], and Wang & Kan [8].

Twelve college students (native Chinese speakers) from Taiwan were selected as participants. The dual tasks, which consisted of a static Chinese-character search task and a leading-display information reading task, were conducted on a Sony Ericsson P910i smartphone (Fig. 1), with all text material displayed on a 208 × 320 resolution touch-screen. An environment involving both leading and static displays was constructed; Fig. 2 shows the interface design in which the leading and static displays are presented simultaneously on the smartphone. Text was presented in black on a light gray background, and the leading display could show at most ten Chinese characters of 14-point Chinese typography on a single line.

Figure 1. Photo of the experimental interface showed on a real-world P910i device.
Figure 2. Example of the experimental interface used in this study.

The smartphone was placed on a 75-cm-high table with a desk synchronization stand and positioned at an incline of approximately 105°. The distance from the participants' eyes to the center of the screen was 40 cm, while the distance from the center of the screen to the desktop was 8 cm.

In each trial, participants searched for 10 occurrences of the same character among 100 Chinese characters on the smartphone screen and simultaneously read a 30-character leading-display passage. Participants were required to finish the search and reading tasks within 30 s; during this period, the leading display repeated continuously. At the end of the time period, the number of discovered target characters was recorded, and participants were asked to respond to two multiple-answer, multiple-choice comprehension questions based on the leading-display content. At each session, participants performed two trials under each condition, and the same experimental procedure was repeated on days 1-5.

3. Results

Analysis of variance (ANOVA) was applied in the statistical analysis of the experimental data, and Tukey's HSD (honestly significant difference) test was used for post hoc comparison; the level of significance was set at α = 0.05. Table 1 shows the means and standard deviations of the static-information search scores and the leading-display reading comprehension, and Table 2 shows the ANOVA results for the mean comprehension of the leading display.

Table 1. Scores of search and comprehension for each independent variable (speed: 250/350/450 cpm; presentation mode: character-by-character/word-by-word; adaptability: days 1-5).

Table 2. ANOVA table for mean comprehension of the leading display.

We found that no factor significantly affected the static-information search task score, and none of the interactions among the factors was significant for that task. For the leading display, presentation mode and speed significantly affected reading comprehension (mode: F(1,11) = 14.92, p < 0.01; speed: F(2,22) = 21.60, p < 0.01), while the adaptability factor and all interactions among factors were not significant. Reading comprehension was significantly higher for the word-by-word presentation mode than for the character-by-character format, and it was also higher at the 250-cpm speed than at 350 or 450 cpm.
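For readers wishing to reproduce this kind of analysis, a minimal sketch using the statsmodels package (not the software used in the paper) is given below; the CSV file name and the column layout are assumed.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed layout: one comprehension score per row, with columns
# subject, speed (250/350/450), mode (char/word), day (1-5), score.
df = pd.read_csv("comprehension_scores.csv")

# Repeated-measures ANOVA as in Table 2; the two trials per cell are
# averaged via aggregate_func before fitting.
aov = AnovaRM(df, depvar="score", subject="subject",
              within=["speed", "mode", "day"],
              aggregate_func="mean").fit()
print(aov)
```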

4. Discussions and Conclusions

Previous studies of visual performance on leading displays focused only on assessing the adequacy of leading-display design [1, 2, 6, 7]. In practice, however, a leading display does not usually appear in isolation; it is designed to accompany static information, so the effect of leading-display design on visual performance for static information is also an important issue. In this study, none of the leading-display design factors distracted participants from the static search task, but they did influence comprehension; the impact of the leading-display factors on the static search task was relatively low. This finding suggests that users can receive more information through a well-designed leading display on a small-screen device without sacrificing visual performance efficiency on a static-information search task.

According to the analytical results presented here, users had the highest reading comprehension with the word-by-word presentation mode at a speed of about 250 cpm, in accordance with previous Chinese leading-display studies [1-4, 6, 7]. Chinese script belongs to the logographic system, in which characters are used as writing units and no salient boundary exists between characters. The character-by-character presentation mode on a leading display therefore required subjects to use part of their cognitive resources to divide the character stream into the smallest meaningful units. In contrast, the word-by-word format allowed the addition of spaces between words, making the boundaries between the words more salient; this helped users to divide the Chinese characters into Chinese words, thereby lowering the cognitive load and promoting reading comprehension of the dynamic text [2-4]. It is also worth noting that 250 cpm is far slower than the average Chinese reading speed of 580 cpm [5].

In terms of user adaptability to leading displays, previous studies have shown no significant improvement in reading comprehension with increasing time of use; since leading is a novel presentation technique to most users, they may need more practice to familiarize themselves with it [3, 4]. Further studies are needed to investigate how users acquaint themselves with and come to accept this novel temporal-domain, dynamic technique as time of use increases, in order to improve reading efficiency for Chinese text dynamically displayed on a small screen.

Leading displays present a trade-off between space and time, and the leading of dynamic information has recently been identified as a means of presenting text on electronic device screens. The results of this study demonstrate user comprehension across a variety of leading-display designs; the suggestions made here may assist interface designers in developing effective leading displays that promote improved user comprehension on small-screen mobile devices.

5. Acknowledgment

The authors gratefully acknowledge the financial support of the National Science Council, Taiwan, Republic of China, under Project NSC 97-2221-E-130-021.

6. References

[1] C.H. Chen and Y.H. Chien, "Effect of dynamic display and speed of display movement on reading Chinese text presented on a small screen," Perceptual and Motor Skills, 100: 865-873, 2005.
[2] Y.H. Chien and C.H. Chen, "The use of dynamic display to improve reading comprehension for the small screen of a wrist watch," Lecture Notes in Computer Science, 4557: 814-823, 2007.
[3] C.C. Lin and K.K. Shieh, "Dynamic Chinese text on a single-line display: Effects of presentation mode," Perceptual and Motor Skills, 100: 1021-1035, 2005.
[4] K.K. Shieh and Y. Lin, "Effects of screen type, Chinese typography, text/background color combination, speed, and jump length for VDT leading display on users' reading performance," International Journal of Industrial Ergonomics, 32 (2): 93-104, 2003.
[5] F. Sun, M. Morita, and L. Stark, "Comparative patterns of reading eye movement in Chinese and English," Perception and Psychophysics, 37 (6): 502-506, 1985.
[6] A.H. Wang, J. Fang, and C.H. Chen, "Effects of VDT leading-display design on visual performance of users in handling static and dynamic display information dual-tasks," International Journal of Advanced Manufacturing Technology, 23 (1-2): 133-138, 2004.
[7] A.H. Wang and C.H. Chen, "Reading a dynamic presentation of Chinese text on a single-line display," Displays, 27 (4-5): 145-152, 2006.
[8] A.H. Wang and Y. Kan, "Effects of display type, speed, and text/background colour-combination of dynamic display on users' comprehension for dual tasks in reading static and dynamic display information," International Journal of Industrial Ergonomics, 31 (4): 249-261, 2003.

Curvature-based Feature Extraction Method for 3D Model Retrieval

Yujie Liu, Xiaolan Yao, Zongmin Li
School of Computer Science and Communication Engineering, China University of Petroleum (East China), Dongying 257061, P.R. China
e-mail: liuyujie@hdpu.edu.cn, yaoxiaolan2002@163.com, lizm@hdpu.edu.cn

Abstract

Curvature on the surface of a 3D mesh model is an important discrete differential-geometric descriptor. In this paper we use the mean curvature, together with the corresponding coordinates of the vertexes on the surface, as the feature descriptor of a model. The descriptor describes models accurately and is invariant under translation and rotation; it captures the curving degree very well, particularly for models made up of curving pieces and for models with extreme points or extended parts. We also bring the EMD (Earth Mover's Distance) method into the similarity-measurement framework. Several retrieval experiments imply that this descriptor can be used to find the exact model from the PSB model library.

Keywords: Mean Curvature; Feature descriptor; EMD; 3D model retrieval

I. INTRODUCTION

Computer graphics technology and 3D scanning devices have all improved greatly over the past decades, and much more 3D modeling software has been developed, so 3D models exist not only in large numbers but also in complex data formats. How to find the exact model that satisfies a practical need has become a heated topic, and 3D model retrieval has been an important research issue during the past decades, implemented prevalently in many practical areas. How to describe a model accurately is the key problem in retrieval: generally, the feature we extract from a model should describe the object accurately. As is well known, shape is the fundamental, lowest-level feature and is received directly by human perception; the traditional classification of model features goes through two main periods, from the model shape to the content of 3D models. In this paper we concentrate on the geometric and topological features of the surface and introduce the curvature on the surface of the model to describe the model well.

The remaining parts of the paper are organized as follows: Section 2 reviews previous work; Section 3 presents our feature extraction method in detail; similarity measurement is described in Section 4; Section 5 shows and analyzes our experimental results; and Section 6 concludes.

II. RELATED WORK

3D model retrieval methods have been on an increasing trend in the past years. Many methods are based on the shape of the 3D model, often using surface geometric attributes to extract features that show the shape characteristics directly. In [8, 9] Osada proposes a descriptor named shape distribution. He adopts shape functions of the model that are related to geometric characters such as the distance between two random points on the surface, the square root of the area of the triangle formed by three random points on the surface, the cube root of the volume of a random tetrahedron built from the model, and so on; he then computes the values of one of these functions and derives a statistic to describe the model. However, this approach is not very accurate and is perhaps suited only to classifying models roughly, particularly for models made up of curving pieces and models with extreme points or parts. There are many other means based on this statistical way of characterizing models, using features such as the normal vector on the surface of models [11, 12], volume [10], the area of pieces [13], cord-based methods [14], histograms [13], distance or geodesic distance on the surface, and so on [15, 16]. Mahmoudi et al. make use of a curvature function to obtain seven views of the model and describe the model with them in [10]; similarly, they also compute a statistic of the function values.

So, in this paper, we present a method based on the curvature of mesh model surfaces: we use the mean curvature and the corresponding vertex coordinates to describe the model and apply the descriptor in retrieval. It obtained good results in a retrieval experiment on 314 models in PSB, where it performed much better than other feature descriptors. Additionally, we bring the EMD (Earth Mover's Distance) method into the similarity-measurement frame; this comparison approach is very efficient and is especially well adapted to the following conditions: different numbers of feature points in the two compared features, and variant or indefinite feature dimensions.

III. CURVATURE FEATURE EXTRACTION

Commonly used geometric characters, such as the normal vector direction on the surface or other surface-angle information, have simple computations, and these features are usually not considered as the feature of an object directly; they are known rather as assistant tools combined with other features. In this paper we use the Meyer method to compute the curvature of each vertex on the mesh surface.

A. Mean curvature computation

For each point P on the triangle mesh surface, connect the circumcenters of its 1-ring neighborhood triangles; the area so constructed is the Voronoi area of the vertex P. Figure 2 shows the definition of the Voronoi area briefly. The Voronoi area depends on the type of the mesh triangles; for non-obtuse triangles it can be computed by the formula below:

$A_{Voronoi} = \frac{1}{8}\sum_{j\in N_1(i)} (\cot\alpha_{ij} + \cot\beta_{ij})\,\|x_i - x_j\|^2$   (1)

We then use the mixed area $A_{Mixed}$ to compute the mean curvature normal vectors by the following formula:

$\mathbf{K}(x_i) = \frac{1}{2A_{Mixed}}\sum_{j\in N_1(i)} (\cot\alpha_{ij} + \cot\beta_{ij})\,(x_i - x_j)$   (2)

where $N_1(i)$ stands for the 1-ring neighbor triangles of vertex $x_i$, $x_j$ is a neighbor of $x_i$, and $\alpha_{ij}$ and $\beta_{ij}$ are the two angles opposite the edge $x_i x_j$; these angle relationships are shown in Figure 2. The mean curvature normal is related to the mean curvature value by

$\mathbf{K}(x_i) = 2\,\kappa_H(x_i)\,\mathbf{n}(x_i)$   (3)

Note that in all the formulas above, bold font denotes a vector. Based on this principle of the finite element method, according to formulas (1)-(3) we can easily get the mean curvature of every vertex on the triangle mesh surface.

B. Our descriptor

Our descriptor is defined by the following formula:

$F(M_i) = (P, \kappa_H), \quad P = \{(x_j, y_j, z_j)\},\ \kappa_H = \{k_j\},\ j = 1, \ldots, N$   (4)

where F is the descriptor of any model $M_i$ in our library. It is made up of two sets: P, the coordinates of the vertexes on the surface of the model, and $\kappa_H$, their mean curvature values, which combine the curvature with the standard coordinate position of each vertex to show the overall model [7]. First, we compute the differential-geometric attribute, the mean curvature, on the model surface; then we choose the N vertexes with the biggest curvature. After an analysis of all the models we chose from PSB, we set N equal to 50, so our descriptor is a feature in 50 dimensions. Because our feature is based not only on the curvature values but also on the vertex coordinates, the descriptor is invariant to translation, rotation, projection and scaling under a canonical coordinate system. It shows the curving degree very well for models with curving pieces and for those with extreme points or extension components.

Figure 1 shows the curvature result of one face model with curving pieces: the left picture presents the change of the curvature on the model surface, with colors from red to blue standing for curvature values from big to small, while the right one shows the vertexes of the whole model, the green ones being the feature points chosen according to the curvature value.

Figure 1. The curvature of one face.
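A minimal Python sketch of the cotangent-formula mean curvature of equations (1)-(3) follows. It is an approximation of the Meyer scheme: it uses the plain Voronoi area of equation (1) rather than the full mixed-area rule for obtuse triangles, and the mesh arrays and N are illustrative.

```python
import numpy as np

def mean_curvature(verts, faces):
    """verts: V x 3 floats; faces: F x 3 vertex indices.
    Returns kappa_H per vertex via the cotangent formula."""
    V = len(verts)
    K = np.zeros((V, 3))    # mean curvature normal accumulator
    area = np.zeros(V)
    for tri in faces:
        for c in range(3):
            i, j, k = tri[c], tri[(c + 1) % 3], tri[(c + 2) % 3]
            # angle at vertex k is opposite the edge (i, j)
            u, w = verts[i] - verts[k], verts[j] - verts[k]
            cot = np.dot(u, w) / (np.linalg.norm(np.cross(u, w)) + 1e-12)
            e = verts[i] - verts[j]
            K[i] += cot * e          # builds (cot a + cot b)(x_i - x_j)
            K[j] -= cot * e
            d2 = np.dot(e, e)
            area[i] += cot * d2 / 8.0   # equation (1)
            area[j] += cot * d2 / 8.0
    K = K / (2.0 * area[:, None])       # equation (2)
    return 0.5 * np.linalg.norm(K, axis=1)   # equation (3): |K| = 2 kappa_H

def descriptor(verts, faces, N=50):
    """Equation (4): coordinates and curvature of the N largest-curvature
    vertices."""
    kH = mean_curvature(verts, faces)
    top = np.argsort(kH)[-N:]
    return verts[top], kH[top]
```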

IV. EMD SIMILARITY MEASURE

In this section, we introduce the EMD method used to measure our descriptor; it is then applied in the retrieval experiment.

A. Method presentation

Traditional comparison methods use the Euclidian distance, the Minkowski distance, the Hausdorff distance and so on. These means are limited to the dimension of the features and are tied to the numbers of feature points in the two models being compared. For the sake of avoiding this problem, we adopt the earth mover's distance (EMD), a means commonly used in engineering implementations. The EMD can be dated back to the transportation problem and has since been implemented in many areas. It can evaluate the dissimilarity between two multi-dimensional distributions in some feature space where a distance measure between single features, which we call the ground distance, is given [12]. Intuitively, given two distributions, one can be seen as a mass of earth properly spread in space, the other as a collection of holes in that same space; the EMD measures the least amount of work needed to fill the holes with earth, where a unit of work corresponds to transporting a unit of earth by a unit of ground distance.

Let P = {(p_1, w_p1), ⋯, (p_m, w_pm)} be the first signature with m clusters, where p_i is the representative of a cluster and w_pi is the weight of the cluster; let Q = {(q_1, w_q1), ⋯, (q_n, w_qn)} be the second signature with n clusters; and let D = [d_ij] be the ground distance matrix, where d_ij is the ground distance between clusters p_i and q_j. The goal of the EMD is to find the flow F = [f_ij] between p_i and q_j that minimizes the overall cost [12]:

EMD(P, Q) = ( ∑_{i=1}^{m} ∑_{j=1}^{n} f_ij d_ij ) / ( ∑_{i=1}^{m} ∑_{j=1}^{n} f_ij )   (5)

B. Our measurement

In our method, the signature is composed of the mean curvature and the coordinate values of each chosen vertex on the model surface. We take the mean curvature as the weight of the signature, and we define the Euclidian distance between the coordinates of the vertexes as the ground distance function. In the experiment we set SIG_MAX_SIZE to 100; since our descriptor keeps the N biggest mean curvature vertexes, it must be ensured that N is smaller than 100. If we need to deal with other large mesh models, we can change this constraint and make SIG_MAX_SIZE much larger. Running the EMD comparison, the dissimilarity between different models can be evaluated and computed quickly.

Table I shows the time our descriptor computation costs. We list only four models with different sizes in vertexes and faces, and record the time spent to compute the mean curvature of all vertexes and of the N biggest ones, respectively. All times are for an Intel Celeron M 430 at 1.73 GHz with 512 MB RAM. The largest model, s1734, spends only about 2.3 seconds computing the N biggest vertexes, while the small model s0661 spends only 0.000858 seconds computing the feature we want. Because of its low time cost and its feasibility for describing different models, this descriptor is suitable for a library with a large number of models.

TABLE I. TIMING OF OUR METHOD (seconds)

Model ID   Vertexes   Faces    Time (all vertexes)   Time (N biggest)
s0661      71         138      0.000796              0.000858
s0838      1981       3907     0.025905              0.022880
s0961      21013      40384    0.237080              0.267323
s1734      160940     316498   1.883898              2.279705
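As a concrete reading of formula (5), the sketch below (ours; the paper gives no code) computes the EMD between two signatures by solving the underlying transportation problem with SciPy's linear-programming solver, using vertex coordinates as cluster representatives and mean curvatures as weights.

import numpy as np
from scipy.optimize import linprog

def emd(P, wP, Q, wQ):
    # P: (m,3) vertex coordinates, wP: (m,) mean-curvature weights; likewise Q, wQ
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # ground distances d_ij
    m, n = D.shape
    A_ub, b_ub = [], []
    for i in range(m):                     # each source sends at most its weight
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_ub.append(row); b_ub.append(wP[i])
    for j in range(n):                     # each sink receives at most its weight
        col = np.zeros(m * n); col[j::n] = 1
        A_ub.append(col); b_ub.append(wQ[j])
    A_eq = np.ones((1, m * n))             # total flow = min of the two total weights
    b_eq = [min(wP.sum(), wQ.sum())]
    res = linprog(D.ravel(), A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, method="highs")
    return res.fun / res.x.sum()           # formula (5): optimal cost / total flow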
V. EXPERIMENT RESULTS

Our experiment is based on the triangle mesh models in the PSB benchmark database [13]. We chose 314 models from the 1814 models in the database; they cover 47 categories and 37 non-empty classes. We applied the EMD method to compare the similarities and used the models in this database to complete the retrieval process. The evaluation tool we used is also from the PSB benchmark, and the experiment results were compared with the Shape Distribution and EGI descriptors.
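The PSB evaluation tool reports precision-recall curves; for readers unfamiliar with the measure, this is how precision and recall at each cut-off of a ranked result list can be computed (a generic sketch of ours, not the PSB tool itself):

def precision_recall(ranked_ids, relevant_ids):
    # precision/recall after each retrieved model in a ranked list
    relevant = set(relevant_ids)
    hits, curve = 0, []
    for k, model_id in enumerate(ranked_ids, start=1):
        if model_id in relevant:
            hits += 1
        curve.append((hits / k, hits / len(relevant)))  # (precision@k, recall@k)
    return curve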

A. Retrieval results

Figure 2 shows part of our results for five models: head, spider, bottle, feline, and face. The first model in each row is the query model, and the following three are the results, shown in order of ascending dissimilarity. We do not list the full retrieval results, because each result contains all 314 models, which are too many to show.

B. The whole efficiency

We applied all 314 models in the retrieval experiment, obtained the precision-recall plot of all the models, and compared our method with Shape Distribution and EGI (Extended Gaussian Image), respectively. From the precision-recall plots of all models (Fig. 3) and all classes (Fig. 4) in our experiment, it is notable that the red line lies above the green and blue ones: the result shows that our descriptor performs much better than the other two. This proves that our method obtains good results, especially for models with curving pieces and those with extreme points or extending parts.

VI. CONCLUSION

In this paper, we propose a new descriptor for 3D triangle mesh models. The descriptor combines the mean curvature of the vertexes on the surface of the model with the coordinate positions of these vertexes, and we applied the EMD method to measure the similarity between different models. Through the retrieval experiment, we found that the descriptor shows good results, especially for models with curving pieces and those with extreme points or extending parts, and that the mean curvature descriptor achieves higher retrieval quality than the Shape Distribution and EGI methods. The biggest virtue of our method is that it describes the curving pieces and the extreme points or parts of a model accurately; on the other hand, the weakness of the descriptor is that it does not adapt well to complex models or models with plain pieces. In future work, we can continue to express the model with other differential geometrical operators, such as the Gaussian curvature or the normal curvature and direction on the mesh surface. We can also combine the descriptor with other features to obtain a better feature expression and to address the inaccurate description of general models with plain pieces.

ACKNOWLEDGEMENTS

This work is supported by the National Key Basic Research Program of China (2004CB318006) and the National Natural Science Foundation of China (60533090, 60573154).

Figure 2. Retrieval results
Figure 3. Average precision and recall of all models
Figure 4. Average precision and recall of all classes

REFERENCES

[1] A. Gray, "The Gaussian and Mean Curvatures" and "Surfaces of Constant Gaussian Curvature," in Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed. Boca Raton, FL: CRC Press, pp. 373-380 and 481-500, 1997.
[2] M. Brady, J. Ponce, A. Yuille, and H. Asada, "Describing Surfaces," Computer Vision, Graphics, and Image Processing, 32(1):1-28, 1985.
[3] P. J. Besl and R. C. Jain, "Invariant surface characteristics for 3D object recognition in range images," Computer Vision, Graphics, and Image Processing, 33:33-80, 1986.
[4] G. Taubin, "Estimating the tensor of curvature of a surface from a polyhedral approximation," Proc. 5th Intl. Conf. on Computer Vision (ICCV'95), 1995.
[5] S. Rusinkiewicz, "Estimating Curvatures and Their Derivatives on Triangle Meshes," Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2004), 2004.
[6] B. Hamann, "Curvature approximation for triangulated surfaces," Computing Supplementum 8, pp. 139-153, 1993.
[7] M. Meyer, M. Desbrun, P. Schroder, et al., "Discrete Differential-Geometry Operators for Triangulated 2-Manifolds," in Visualization and Mathematics, Berlin: Springer-Verlag, 2002.
[8] P. S. Heckbert and M. Garland, "Optimal triangulation and quadric-based surface simplification," Journal of Computational Geometry: Theory and Applications, November 1999.
[9] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, "Matching 3D Models with Shape Distributions," Proceedings of the International Conference on Shape Modeling, 2001.
[10] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, "Shape Distributions," ACM Transactions on Graphics, 21(4):807-832, 2002.
[11] Wan Lili, Hao Aimin, and Zhao Qinping, "A Method of 3D Model Retrieval based on the Spatial Distributions of Components," Journal of Software, 18(11):2902-2913, 2007 (in Chinese).
[12] Y. Rubner, C. Tomasi, and L. J. Guibas, "A Metric for Distributions with Applications to Image Databases," Proceedings of the 6th International Conference on Computer Vision (ICCV), pp. 59-66, 1998.
[13] P. Shilane, P. Min, M. Kazhdan, and T. Funkhouser, "The Princeton Shape Benchmark," Proceedings of the International Conference on Shape Modeling, 2004.
[14] M. Hilaga, Y. Shinagawa, T. Kohmura, and T. L. Kunii, "Topology Matching for Fully Automatic Similarity Estimation of 3D Shapes," in Proceedings of ACM SIGGRAPH, pp. 203-212, 2001.
[15] M. Ankerst, et al., "3D shape histograms for similarity search and classification in spatial databases," Proc. 6th Intl. Symp. on Spatial Databases (SSD'99), 1999.
[16] E. Paquet and M. Rioux, "A query by content system for three-dimensional model and image databases management," Image and Vision Computing, 1997.
[17] E. Paquet and M. Rioux, "A content based search engine for VRML database," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, California, 1998.
[18] E. Paquet, A. Murching, T. Naveen, A. Tabatabai, and M. Rioux, "Description of shape information for 2-D and 3-D objects," Signal Processing: Image Communication, 16:103-122, 2000.
[19] S. Mahmoudi and M. Daoudi, "3D models retrieval by using characteristic views," Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), 2:457-460, 2002.
[20] J.-P. Vandeborre, V. Couillet, and M. Daoudi, "A Practical Approach for 3D Model Indexing by combining Local and Global Invariants," Proceedings of the First International Symposium on 3D Data Processing Visualization and Transmission (3DPVT 2002), 2002.
[21] J. Assfalg, et al., "Retrieval of 3D Objects using Curvature Maps and Weighted Walkthroughs," Proceedings of the 12th International Conference on Image Analysis and Processing, 2003.

A New Method for Vertical Handoff between WLANs and UMTS in Boundary Conditions

Majid Fouladian (a), Morteza Rahimi (b), Alireza Shafieinejad (c), Mahdi M. Bayat (c), Faramarz Hendessi (c)
(a) Azad Islamic University, Saveh, Iran; majidfoladi@yahoo.com
(b) Azad Islamic University, Qazvin, Iran; m18.rahimi@gmail.com
(c) Dept. of Electrical and Computer Engineering, Isfahan University of Technology, Iran; a.shafieinejad@ec.iut.ac.ir, mahdimbayat@gmail.com, hendessi@cc.iut.ac.ir

Abstract

We propose a new vertical handoff method in which the number of signaling and registration processes is lowered, as a result of reduced utilization of the home agent and correspondent nodes while users are mobile. Throughput performance of the integrated network is improved by reducing handoff delays and packet losses. Special conditions, such as boundary conditions, are also considered. Performance and delay evaluations, carried out with ns-2, are presented to demonstrate the effectiveness of the proposed scheme.

1. Introduction

Generally, there are two categories of wireless networks: those such as 3G and 2.5G, which provide low bandwidth services over a wide geographical area, and those such as WLANs, which provide high bandwidth services over a small geographical area. Nonhomogenous networks contain a number of wired networks with different technologies, and the need for transparent services to end users makes vertical handoffs between these networks unavoidable. One of the most challenging problems in these networks is QoS, which must not experience significant changes during the handoff process; thus we need to employ an efficient algorithm for handoff, in terms of initializing and completing the handoff operation [1, 2]. Different algorithms and protocols have been proposed for this process; the handoff process mainly depends on the network architecture and its hardware and software elements. Management of handoff protocols in the network layer is based on MIP (Mobile IP), which introduced two new elements, the Home Agent and the Foreign Agent, to address the problems of mobility [3, 4]. These agents provide synchronization, updating and authorization of user connections and of the CoA of the foreign network.

In horizontal handoff, the only parameter for selecting a network is the channel quality, which is estimated by the received signal strength of each cell. In the vertical case, in addition to channel quality, other parameters, such as service cost, network delay, traffic, bandwidth, and MN speed and movement patterns, must be considered in selecting a suitable network. After selecting the proper network, the MN initiates the handoff and connects to the target network. Vertical handoff can be divided into upward and downward cases [1]: in the downward case, the mobile node (MN) enters a smaller cell from a larger one (i.e., 3G to WLAN); in the upward case, the MN enters a larger cell from a smaller one (i.e., WLAN to 3G). Downward handoff usually has low delay sensitivity, and in the upward case, since the MN switches to the cellular network before disconnecting from the WLAN, delay is not a serious problem either; in upward handoff the MN tries to maintain its connection with the WLAN as long as possible because of its better QoS and lower cost.

1.1 Nonhomogenous integrated networks

ETSI has proposed two methods for connecting WLANs and 3G networks, loose and tight coupling, as depicted in Fig. 1. In a loose connection, a separate path is provided for 3G traffic; the WLAN traffic is thus not transferred via the 3G network, and routing to the Internet is via the WLAN gateway. Moreover, the 3G networks and WLANs can employ different protocols for authentication and accounting [5].
In a tight connection, the WLAN is connected to the 3G network in the same way as the other radio access networks: it becomes a part of the 3G core, and all WLAN traffic is transferred via the 3G network.

Fig. 1: Tight and loose coupling architectures.

2. The Proposed method

In our approach, the mobile node (MN) locations in the UMTS network are estimated as the MN nears the WLAN. We estimate the location of the MN using 3GPP methods, which can be categorized in four major groups [6]: cell identifier, OTDOA (Observed Time Difference Of Arrival), UTDOA (Uplink Time Difference Of Arrival), and GPS (Global Positioning System). Regardless of the method employed, when a CN requests the position of an MN, the request is sent to the SRNC via the Iu interface; the SRNC collects the required information from the LMU, evaluates the MN position, and finally responds to the CN [7]. The SRNC regularly evaluates the MN position and relays this information to the CN.

In our scheme, we employ a database of user and WLAN information, denoted the WUD, attached to the RNC unit. For each WLAN, the WUD contains its geographical position and general specifications, such as bandwidth, delay, cost, the center of the WLAN and its radius; for each user, we store an access and priority list of those WLANs the user can connect to. Furthermore, the UMTS SGSNs are considered as FAs, and the WLAN APs under each SGSN are considered as Node Bs of that SGSN. Fig. 2 shows the UMTS/WLAN vertical handoff architecture.

Fig. 2: UMTS/WLAN vertical handoff architecture.

The SRNC computes the approximate position of the MN and compares it with the WLAN positions stored in the WUD. The comparison can be done based on the inequality

(x − x0)² + (y − y0)² < (R − Rth)²   (1)

where (x, y), (x0, y0), R and Rth denote the current MN position, the center of the WLAN, the radius of the WLAN, and the threshold, respectively. If the inequality shows that the MN nears a WLAN boundary, the SRNC signals the MN, and a message is sent to the MN to turn on its WLAN interface. Note that upward and downward handoffs differ, because it is not possible to use the 3GPP position estimation while the MN is inside a WLAN. In the next sections, we explain the details of our protocol for the downward and upward cases.

2.1 Downward handoff

We assume the MN is connected to the CN via Node B and moves in the UMTS network. At an appropriate time, i.e., while entering a WLAN from the UMTS network, a message is sent to the MN to turn on its WLAN interface. The MN turns on its WLAN interface and searches for advertisement messages from nearby WLAN APs. The MN then determines whether there is an AP with better quality than the UMTS network, according to a weighted cost function [7] of the AP parameters:

f_n = W_n = ∑_s [ ∏_i E^n_{s,i} ] ∑_j w_{s,j} N(Q^n_{s,j})   (2)

If the MN does not find a suitable AP with a cost function better than the current UMTS, it ignores the handoff process and continues its current connection; otherwise, the handoff process shown in Fig. 3 is initiated. The MN gets a new IP address from the WLAN, and the AP sends a registration request containing the new CoA address to the FA; the FA registers the MN and responds to the AP, and then sends a binding update message to Node B to clean up the allocated resources. After the new address is assigned, the FA is informed of the new IP address of the MN (which is valid in the AP domain). All packets from the CN carrying the old IP address, which is valid for the UMTS network, must be tunneled to the MN by means of this new IP address: the packets received from the CN with the old MN IP address are tunneled to the new IP address by the FA, routed to the AP, and finally forwarded to the MN. On the other hand, all packets from the MN to the CN must be sent using the old IP address; since the source address of the packets from the MN to the CN is the old MN address, these packets must first be routed to the AP through the new IP address and then forwarded to the CN.
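The boundary test of formula (1) and the cost function of formula (2) are straightforward to compute; the sketch below is our own illustration (the dictionary keys and list layout are invented, and the QoS values are assumed to be already normalized by N(·)):

import math

def near_wlan_boundary(mn_xy, wlan):
    # inequality (1): the MN lies inside the WLAN circle shrunk by the threshold Rth
    x, y = mn_xy
    x0, y0, R, Rth = wlan["x0"], wlan["y0"], wlan["R"], wlan["Rth"]
    return (x - x0) ** 2 + (y - y0) ** 2 < (R - Rth) ** 2

def network_cost(services):
    # cost function (2) for one candidate network n; services is a list of dicts,
    # one per service s, with "E" (0/1 elimination factors), "w" (weights),
    # and "Q" (normalized QoS values)
    total = 0.0
    for s in services:
        eliminate = math.prod(s["E"])   # Π_i E_{s,i}: drops the service if any factor is 0
        quality = sum(w * q for w, q in zip(s["w"], s["Q"]))  # Σ_j w_{s,j} N(Q_{s,j})
        total += eliminate * quality
    return total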

Fig. 3: The UMTS to WLAN handoff process.
Fig. 4: Signal flow for downward handoff. M1-4: data path between the CN and MN before handoff; M4: sending target network information; M5: MN signaling flow; M11-13: data path for the neighboring WLAN; M14-15: data path from the MN to the CN.

2.2 Upward handoff

We now consider an MN connected to the CN in the WLAN network. While the MN is in the WLAN and connected to the CN via the AP, it must continuously measure the strength of the received signal and compare it with the threshold Sth. If the measured strength stays below Sth for at least a threshold interval (Tth), the MN sends a handoff request to Node B; if the system rejects the request, the MN must find another network, otherwise it will be disconnected. Node B forwards the request to the FA, which registers the MN and then informs Node B. From then on, all packets sent between the MN and the CN are transmitted through Node B, and the FA sends a binding update message to the AP to clean up the allocated resources. In this case, only one upward handoff is required. This process is illustrated in Figs. 5 and 6.

Fig. 5: The WLAN to UMTS handoff process.
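The Sth/Tth rule above is essentially a dwell timer; the following minimal sketch (the names and sampling interface are ours) shows one way to implement it:

def upward_handoff_needed(samples, s_th, t_th):
    # samples: iterable of (time, rssi) measurements from the WLAN AP;
    # returns True once the signal has stayed below s_th for at least t_th
    below_since = None
    for t, rssi in samples:
        if rssi >= s_th:
            below_since = None          # signal recovered: reset the dwell timer
        elif below_since is None:
            below_since = t             # first sample below the threshold
        elif t - below_since >= t_th:
            return True                 # weak for a whole interval: trigger handoff
    return False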

Fig. 6: Signaling flow in an upward handoff starting in the WLAN. M1-5: data path between the CN and MN before handoff.

2.3 Boundary conditions

In this section, we consider the case where the WLAN is not covered by a single Node B. For the situation where the WLAN is covered by two Node Bs, we identify two cases based on the number of FAs related to these Node Bs: either the Node Bs share a common FA, or they are under two different FAs. In both cases the downward handoff is the same as in Section 2.1, and an upward handoff that starts in the WLAN is the same as in Section 2.2; but when the connection begins in the UMTS network, the upward handoff differs from Section 2.2 because of the existence of two different Node Bs. This is discussed in detail below.

2.3.1 Two Node Bs under one FA

According to Fig. 7, the MN connects to the CN via the first Node B, enters the WLAN with a downward handoff and accesses the CN through the AP, and then exits the WLAN and enters the domain of the second Node B. As the strength of the signal received from the AP at the MN becomes less than the threshold, the MN receives an advertisement message from Node B2 and sends a handoff request to Node B2. Since the source and destination Node Bs are under the same FA, the protocol is completely transparent to the HA and the CN, which do not need to know the new address of the MN; it is not necessary to register in this case, and a reconnect request to the SGSN is sufficient. The signaling is shown in Fig. 8.

Fig. 7: Special handoff when the source and destination FAs are similar.
Fig. 8: Signaling for upward handoff when the source and destination FAs are similar.

2.3.2 Two Node Bs under two FAs

When the WLAN is under the coverage of two Node Bs that belong to two different FAs (Fig. 9), the MN sends a registration request to Node B2, which is forwarded to FA2 and then to the HA, respectively; after registration in the HA, the responses are returned to FA2 and then to Node B2. This process also needs a registration in the CN, which causes more delay than the previous handoffs. Thus, to reduce the effects of packet loss during this delay, we propose that FA1 route the packets destined for the MN via Node B1 and, in addition, send a copy of them to FA2; the next paragraphs detail the signaling. This part of the protocol was simulated, and the results show an improvement in handoff delay.
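The duplicated-forwarding idea can be pictured with a small sketch (the data structures and names are ours, not part of the protocol specification): FA2 buffers the copies it receives from FA1 and, once the MN reports the ID of the last packet it received over the old path, flushes only the missing tail.

from collections import OrderedDict

class ForeignAgent2:
    def __init__(self):
        self.buffer = OrderedDict()      # packet_id -> packet, copies duplicated by FA1

    def on_copy_from_fa1(self, packet_id, packet):
        self.buffer[packet_id] = packet  # keep every copy until the MN attaches here

    def on_mn_report(self, last_received_id, send_to_mn):
        # the MN tells FA2 the ID of the last packet it got via the old path;
        # forward only the packets the MN has not seen yet
        for packet_id, packet in self.buffer.items():
            if packet_id > last_received_id:
                send_to_mn(packet)
        self.buffer.clear()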

To eliminate the problem of packets lost during the handoff delay, the MN sends a message containing the ID of the last received packet to FA2 via Node B2, which makes it possible for FA2 to forward the remaining packets destined for the MN. FA1 must keep sending all packets destined for the MN to FA2 until the CN registers the new address of the MN; to correct the MN address in the packets from the CN, FA2 must send an updating message to the CN as well. To clean up the resources allocated for the MN in Node B1, FA2 sends a connection updating message to FA1, which in turn forwards it to Node B1; after this message, the valid address of the MN is the one obtained from FA2, and the packet loss during handoff is minimized. This process is illustrated in Fig. 10.

Fig. 10: Upward handoff signaling for different source and destination FAs. M1-5: data path between the CN and MN before handoff.

3. Performance Results

We use NS-2 for simulation [8]. IEEE 802.11b was selected for the WLAN, in addition to a UMTS network, and the Internet delay was selected randomly between 0 and 55 ms. Handoff delay is computed as the difference in time between receipt of the last packet in the previous network and receipt of the first packet in the new network. Two scenarios were simulated: in the first, the MN initiates the connection with the CN in the WLAN and then enters the UMTS network; in the second, the MN connects to the CN via the SGSN address, enters the WLAN, and then initiates a handoff to UMTS during the connection with the CN. In both scenarios, tight and loose couplings were considered.

Figs. 11 and 12 show the delay in downward handoff. Fig. 11 corresponds to the tight architecture, in which the AP has a wired connection to the SGSN, while Fig. 12 corresponds to the loose architecture, in which the FA and the AP have no wired connection, so packets between them are routed through the Internet. An improvement was achieved relative to the MIP protocol. Similar results for upward handoff are shown in Figs. 13 and 14 for tight and loose connections, respectively. The above results considered a WLAN that is completely located within the UMTS network; the results for the case where the WLAN is located between two Node Bs under one FA are shown in Figs. 15 and 16 for tight and loose connections, respectively.

Fig. 11: Downward handoff delay in tight coupling.
Fig. 12: Downward handoff delay in loose coupling.
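Handoff delay as defined above can be extracted from a packet trace with a few lines (a sketch of ours against a generic trace format, not the actual ns-2 trace parser):

def handoff_delay(trace):
    # trace: list of (recv_time, network_id) for packets received by the MN,
    # in chronological order; delay = time of first packet in the new network
    # minus time of last packet in the previous network
    for prev, cur in zip(trace, trace[1:]):
        if cur[1] != prev[1]:           # network changed between these packets
            return cur[0] - prev[0]
    return None                          # no handoff observed in this trace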

Fig. 13: Upward handoff delay in tight coupling.
Fig. 14: Upward handoff delay in loose coupling.
Fig. 15: Upward handoff delay in tight coupling in the case of two Node Bs with one FA.
Fig. 16: Upward handoff delay in loose coupling in the case of two Node Bs with one FA.

While in the WLAN, the MN has two addresses: the UMTS address, which is used as the source and destination address of the transmitted packets, and the CoA address, which belongs to the WLAN and is used for routing the packets to and from the CN via the AP. The only change required in UMTS is that the SGSN must tunnel packets to the MN after receiving the new MN address.

4. Conclusions

We proposed a new vertical handoff method which does not require any modification of the WLAN structure and reduces the interaction between the CN and the HA; the processing time and the handoff delay are both reduced. Special conditions, such as boundary conditions, were also considered. In both couplings, and especially in the tight one in which the AP has a wired connection with the SGSN, the handoff delay was considerably reduced, leading to a higher network throughput.

References

[1] Q. Zhang, C. Guo, Z. Guo, and W. Zhu, "Efficient mobility management for vertical handoff between WWAN and WLAN," IEEE Communications Magazine, vol. 41, no. 11, pp. 102-108, 2003.
[2] F. Siddiqui and S. Zeadally, "Mobility management across hybrid wireless networks: Trends and challenges," Computer Communications, vol. 29, pp. 1363-1385, May 2006.
[3] C. Perkins, "IP Mobility Support for IPv4," IETF RFC 3344, Aug. 2002.
[4] K. El Malki, et al., "Low Latency Handoffs in Mobile IPv4," IETF Internet Draft, 2005.
[5] A. K. Salkintzis, "Interworking technique and architecture for WLAN/3G integration toward 4G mobile data networks," IEEE Wireless Communications, vol. 12, no. 3, pp. 50-61, June 2005.
[6] 3GPP TS 25.331, "Radio Resource Control (RRC): Protocol Specification."
[7] J. McNair and F. Zhu, "Vertical handoffs in fourth-generation multinetwork environments," IEEE Wireless Communications, vol. 11, no. 3, pp. 8-15, June 2004.
[8] The Network Simulator ns-2, http://www.isi.edu/nsnam/ns/.

Research on Secure Key Techniques of Trustworthy Distributed System

Ming He (1, 2), Aiqun Hu (2), Hangping Qiu (1)
(1) Institute of Command Automation, PLA Science and Technology University, Nanjing, China; e-mail: blue_horse@126.com, qiuhp8887@vip.sina.com
(2) Information Science and Engineering School, Southeast University, Nanjing, China; e-mail: aqhu@seu.edu.cn

Abstract: To arrive at the goal of intensifying the trustworthiness and controllability of distributed systems, the core function of secure algorithms and chips should be fully exerted. Through building a trustworthy model between the distributed system and user behaviors, constructing the architecture of trustworthy distributed systems, intensifying the survivability of services, and strengthening the manageability of distributed systems, the secure problems of distributed systems can be radically solved. In this paper, we present a new scheme for constructing a trustworthy distributed system and research its secure key techniques. The security of services in distributed systems is improved effectively, and the development of Electronic Commerce and Electronic Government is thereby promoted healthily and quickly.

Keywords: trustworthy distributed system; trustworthiness; controllability; manageability; survivability

I. INTRODUCTION

At present, the security capability of distributed systems is insufficient in many respects, such as user behaviors, run-states, and the controllability and manageability of system resources; distributed systems are confronted with a serious crisis of trust [1]. On the one hand, the techniques used in distributed systems are excessive and miscellaneous as far as the architecture is concerned, and this bloat is gradually being revealed; on the other hand, new ideas are needed to resolve problems such as the security and function of distributed systems. The secure problems of distributed systems must be solved, and more reliable and simply controllable means must be provided to construct a trustworthy environment [2]. For example:

1) Distributed systems are built on insecure terminal systems. The most prominent secure problem of terminal systems is that they are prone to suffer from erosion by worm viruses and Trojan horses. Because the bulk of terminal systems do not adopt enough safety precautions, some important programs and files will be destroyed, and other target systems will be attacked by the worm viruses and Trojan horses; this leads to a drop in the performance of the whole distributed system.

2) Distributed systems lack trusted safeguard measures. Practice indicates that worm viruses and Trojan horses cannot be kept out with firewalls, IDS, and antivirus software: these belong to the passive defensive forms and cannot cope with secure menaces that are increasingly variational. Moreover, these products are extra add-ons, their influence on the performance of the distributed system is increasingly complex, and the cost of implementation is great. At the same time, these products make high requirements of administrators, and few users are able to meet this kind of demand.
3) Distributed systems are devoid of controllability and manageability, and most remote behavior is unpredictable at the present time.

Therefore, to arrive at the goal of intensifying the trustworthiness and controllability of distributed systems, the core function of secure algorithms and chips should be fully exerted: through building a trustworthy model between the distributed system and user behaviors, constructing the architecture of trustworthy distributed systems, intensifying the survivability of services, and strengthening the manageability of distributed systems, user behavior and its results become predictable and controllable in the trustworthy distributed system. By setting up a trustworthy computing circumstance and supplying trustworthy validation and active protection based on identity and behavior for the trustworthy distributed system, we will reach the goal of defending against unknown viruses and intrusions.

The remainder of this paper is organized as follows. In Section 1, we analyze the security situation of distributed systems and point out the necessity of building trustworthy distributed systems. In Section 2, we introduce the notion of the trustworthy distributed system, and recent developments and progress are surveyed. In Section 3, the secure key techniques for radically resolving the security problems of distributed systems are studied. The significance of this paper, together with the technical trends and challenges, is briefly discussed in Section 4.

II. OVERVIEW OF TRUSTWORTHY DISTRIBUTED SYSTEMS

At the present time, there are different cognitions of trustworthy distributed systems, which can be divided into three parties: one based on dependable authentication, one based on the conformity of existing security technologies, and one based on the trustworthiness of the services supported by the distributed system. These views are isolated and separated from each other. The divergence of opinions may result in an ambiguous definition of the trustworthy distributed system and increase the difficulty of estimating the feasibility of solutions. In the TCSEC of the US Department of Defense, formal description, verification and covert channel analysis are required from Standard B.

This research insists that security, controllability, and survivability should be the basic properties of a trustworthy distributed system: the trustworthiness of a distributed system is a set of attributes in which security and survivability must be ensured in the user's view, and the manageability of the distributed system must be supported in the designer's view. Although the conceptions of security, controllability and manageability are decentralized and isolated in the traditional sense, the three properties are closely related in a trustworthy distributed system: survivability can be thought of as a special control mechanism, and controllability provides detailed mechanisms to monitor system states and control misbehaviors. Around the maintenance of trustworthiness and the control of behavior between components, these three attributes can be amalgamated into an organic whole to arrive at the goal of trustworthiness of the distributed system, that is, achieving the security strategy of the whole system and trustworthy resource management and scheduling under circumstances where misbehaviors exist. The elements in the analysis of trustworthiness are presented in Fig. 1.

Figure 1. Elements in the analysis of trustworthiness

To reach the trustworthiness of distributed systems, this research takes four steps: (1) a trustworthy model of the distributed system and user behavior is presented, based on existing security technologies, through analyzing the requirements of the trustworthy distributed system; (2) a core chip of security is designed and completed; (3) an archetype is built to intensify the security of the trustworthy distributed system; (4) an estimation theory of trustworthiness is presented to verify the archetype. The first is the main step of the research, in which the effect of user behavior on the security of the distributed system is studied according to the characteristics of user behavior. All the later study is based on the second step. The third step is the difficulty of the whole study: the archetype is built by amalgamating pivotal security technologies such as security algorithms, security protocols and intrusion prevention. The last step is the soul of the study, in which the archetype is verified using trustworthiness estimation to perfect the mechanism of trustworthiness of the distributed system.
From the viewpoints of identity, behavior, content and computing environment, there are four problems in completing the trustworthiness of a distributed system:

1) Trustworthiness of the remote user: the trustworthiness of the user is the precondition of security in a distributed system and requires identity authentication of the terminal system, so that the terminal and the user can be controlled separately by the distributed system.

2) Trustworthiness of the remote platform: this contains the trustworthiness of identity and of the computing environment. A distributed application is secure only when the platform is credible; otherwise, the remote node may be counterfeit or controlled by a Trojan horse.

3) Trustworthiness of the remote task: when a distributed application is executed, it must be partitioned into several independent physical modules, which need to verify each other's identity and trustworthiness of behavior on input, process and output; in that way, the initiator of the task can affirm that the task has been done without error.

4) Trustworthiness of the remote action: compared with the trustworthiness of a task, the trustworthiness of an action requires forbidding some actions and restricting the activity of the remote user; the former is based on identity, while the latter is based on the trustworthiness of content, such as the capability of protection and service. The status of behavior can be supervised, unconventional actions can be managed, and the aftereffect of an action can be estimated; to this end, the authority of the terminal must be restricted. The last two problems are the more difficult ones.

III. KEY TECHNOLOGY OF SECURITY

The role of security is to reduce the vulnerabilities in the chain of trustworthiness gathering, spreading and processing.

A. Trustworthy model

The trustworthy model is the pivotal process in the development of the system: how to build a trustworthy model that effectively analyzes the distributed system and user behavior is the precondition of studying the trustworthy distributed system. The evaluation of a user's behavior is a comprehensive evaluation, reflected by the user's past behaviors. To obtain evaluation results effectively, we use a layered, subdivided and quantitative idea to convert the complicated, comprehensive evaluation of the user's behavior into the measurable and computable evaluation of behavior evidences: we first subdivide the user's behavior into behavior attributes, such as security behavior attributes and performance behavior attributes, and then subdivide the behavior attributes once again into smaller data units, namely behavior evidences. In this way, the evaluation of user behavior in the trustworthy distributed system can be solved effectively.
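As an illustration of this layered, subdivided and quantitative idea, the following sketch is our own (the paper fixes no concrete formula, and the attribute names and weights here are invented): each behavior attribute aggregates its normalized evidence values by weight, and the attributes are in turn combined into one behavior-trustworthiness score.

def evaluate_behavior(attributes):
    # attributes: list of (attribute_weight, evidences), where evidences is a
    # list of (evidence_weight, value in [0, 1]); weights assumed positive
    score = 0.0
    for attr_weight, evidences in attributes:
        total = sum(w for w, _ in evidences)
        attr_score = sum(w * v for w, v in evidences) / total  # weighted evidence mean
        score += attr_weight * attr_score
    return score  # overall trustworthiness of the user's behavior

# example: a security attribute (weight 0.6) and a performance attribute (0.4)
user = [
    (0.6, [(0.5, 0.9), (0.5, 0.7)]),   # security behavior evidences
    (0.4, [(1.0, 0.8)]),               # performance behavior evidences
]
print(evaluate_behavior(user))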

Trustworthiness information, namely behavior evidences, is collected by several methods, such as behavior analysis, state detection and correlation computing; it is stored in an efficient format for quick querying and updating, and is spread to the corresponding components for correlation computing. By amalgamating the method of verifying trustworthiness with the traditional Take-Grant model (introducing trusted subjects, restricting acquirement and authorization to trusted subjects only, and adding a regulation to verify trustworthiness in the model), an analysis model is obtained, and the trustworthy model of the distributed system and user behavior is built. There are two primary advantages in that model: first, the security vulnerabilities of the system can be analyzed by a mathematical model, because the trustworthiness requirement of the system is described abstractly and exactly without implementation details; second, the security trustworthiness of the system is improved by the use of formal description and verification. So this method is feasible in engineering.

B. Enhanced security architecture

Consulting the existing architectures of security operating systems, this paper puts forward an architecture of a trustworthy terminal system built on the trustworthy model, in which the secure kernel chip plays an important role in the controllability of the security system; on this basis, an enhanced security architecture is built into the trustworthy distributed system (Fig. 2). A simple architecture of the trustworthy distributed system, which avoids the disadvantages of the traditional affixed security mechanisms, is presented in Fig. 3.

(1) Enhanced security architecture based on P2P. There is no central manager in a P2P architecture to manage nodes and users, and the physical positions of the nodes in the distributed system are dispersed. Therefore, their trustworthiness of identity must be authenticated by a trusted third party (usually the certification authority, CA), and every node can authenticate a destination node through a proof supported by the trusted third party (usually the CA's signature on the certificate of the platform and the user identity). The secure kernel chip (SKC) supports authentication for the trusted proxy in its own node and verifies its trustworthiness when the computer system starts; every node can then verify the trustworthiness of the trusted proxy on the destination platform through the EK, after the destination has been verified through the authentication with the signature of the CA. Authorization, however, remains a problem in this architecture: accrediting an authorization to a destination node is risky, because another node may be controlled through the authorization once it is accredited. Such behavior is safe in the view of the owner of the authority but unsafe in the view of the other nodes.

(2) Enhanced security architecture based on C/S. In a distributed system based on a P2P topology, the authentication of the destination needs to be verified particularly in the light of the trusted model, due to the lack of a uniform view of authorization. This problem is avoided easily in a distributed system based on C/S, because the database management of users and authority is centralized and the view of authorization is consistent for all nodes. In the C/S architecture, the user information, including authorization and identity, is managed by a trusted centralized management platform on the server, while a trusted proxy is deployed on the client; monitoring of client behavior is completed using the trusted proxy of the client. All authorizations of a client must be distributed by the server, and the client has no prerogative to delegate any authorization to another client. The authorization of the client is encapsulated in the SKC with encryption and requested from the server, which supplies the key for decryption; the server verifies the identity of the client through the authorization with its signature. Because authorization is distributed only by the server and the client must request authority from the server, the server knows which authorizations the client holds at any moment and can grant or retract them; thus the whole view of authority controllability is built on the server, taking advantage of the C/S architecture and simplifying the strategy of authority controllability.
C. Secure kernel chip

Adopting SOC technology to design the secure kernel chip is the hardware foundation of the trustworthy distributed system. A security chip that can be applied to different cryptographic algorithms is needed urgently, because different applications use dissimilar cryptographic algorithms. Centering on this requirement, a secure kernel chip is designed as an SOC constituted of a RISC kernel and multiple processors. The chip can be applied to different cryptographic algorithms while the cost and efficiency of the cryptographic algorithm are affected only lightly; the security algorithms and the key storage are completed inside the chip, and the chip measures up to the TPM standard. Profiting from the chip's flexible ability to support security algorithms, the encryption and decryption of data and digital signature authentication are completed inside the chip, so the security of the key algorithms is ensured and a sound mechanism of key management and security defense is obtained. Based on PKI certification technology, a hardware-encrypted platform like TCP is completed. Since the identity of the platform is the unique 64-bit sequence code of the secure kernel chip in the enhanced security system, the identity of the platform is safe, and the risk that a counterfeit system intrudes into the distributed system is reduced.
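The server-centered authorization of the C/S architecture can be pictured with a toy sketch (ours; in the real scheme the secure kernel chip plays the role of the software key used here, and all names are hypothetical): the server binds an authorization to a client identity with a keyed MAC, so it can later verify, grant, or retract that authorization.

import hashlib, hmac

SERVER_KEY = b"server-secret"   # stands in for the secure kernel chip's protected key

def issue_authorization(client_id: str, rights: str) -> bytes:
    msg = f"{client_id}:{rights}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()

def verify_authorization(client_id: str, rights: str, tag: bytes) -> bool:
    expected = issue_authorization(client_id, rights)
    return hmac.compare_digest(expected, tag)  # constant-time check on the server

tag = issue_authorization("client-42", "read")
assert verify_authorization("client-42", "read", tag)
assert not verify_authorization("client-42", "write", tag)  # unauthorized right fails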

D. Evaluation theory of trustworthiness

It is not possible to build a perfect trustworthy distributed system, so the quantitative estimation of the trustworthiness of a distributed system is valuable: quantitative research on trustworthiness can find the weaknesses and risks in the distributed system and improve them. The evaluation theory of trustworthiness, which contains the estimation of security, survivability and manageability, is the precondition for monitoring and revising the system; it is also an insurance of the performance of the whole trusted distributed system. Trustworthiness evaluation is performed based on some preconfigured models, with logic instructions generated to drive particular control actions. Though there is no absolutely safe system, the ultimate goal of estimating the trustworthiness of a distributed system is not to remove weaknesses completely, but to supply a scheme that balances service and security for the administrator, together with measures for defending against attacks actively, such as building mechanisms to describe attack behavior, attack prewarning, forwarding control, survivability control, and immunity [8]. Quantifying trustworthiness is still at an exploring stage; in this paper, the estimation of trustworthiness is transformed into the method frame of a classic problem, with emphasis on the architecture and the quantitative analysis of the survivability of the distributed system based on the transformation between problem and space. Trusted computing technology and the trustworthiness management technology of networks are integrated to connect with the distributed system and enhance its ability to handle states dynamically, so that an enhanced trustworthy security system is built; that system supplies a basal strategy for intelligent, self-adaptive controllability of system security and service quality.

Figure 3. Architecture sketch map of the trustworthy distributed system

Based on the enhanced terminal system, security controllability is applied in the enhanced architecture of the trustworthy distributed system in five aspects:

1) Security of passwords: using the secure kernel chip of the terminal system, the security of the key algorithms is ensured while the encryption and decryption of data and digital signature authentication are completed inside the chip; also, the computer is used exclusively.

2) Security of identity authentication: a counterfeit user has no possibility of operating the system illegally, because the security level is heightened and certification takes three steps: USB Key + password + biological keystroke characteristic.

3) Security of permission assignment: in the enhanced security system, inner process scheduling and outer process access are verified strictly under the instruction of the trustworthy model, so that the permissions of the system are ensured to be safe.

4) Security of access control: by the trusted monitor in the enhanced terminal system, the trustworthiness of any access is verified to ensure the trustworthiness of identity, behavior and computing environment in the access from subject to object.
In this way, aggressive behaviors are picked out from the plentiful normal behaviors, and the access controllability of the terminal system is completed.

5) Security of system audit: all kinds of behaviors and events logged in the enhanced security system support a history of events and supervision afterwards.

After these measures, security management in the whole distributed system is put into effect for every host and user.

IV. CONCLUSIONS

Trustworthiness is an important aspect of the study of distributed systems, and, to our knowledge, the study in this paper has not been published at home or abroad at present. Its prominent theoretical significance and practical value are presented as follows:

1) It is a helpful reference for designing and completing security systems, and it is also a direction for the study of the theory and methods of security controllability in distributed systems.

2) The user's controllability of the distributed system is enhanced, to ensure that system authentication is used exactly and to reduce the risk of the system getting out of control because of viruses and Trojan horses.

3) Security can be separated from management. The enhanced security of the distributed system is helpful for increasing the authentication of security laws and the deterrence of hackers.

4) Based on the trustworthiness of the distributed system, the trustworthiness of network and grid systems is improved, and the development of remote cooperation and distributed applications is accelerated healthily and quickly.

REFERENCES

[1] Cyber Trust [EB/OL]. http://www.nap.edu/catalog/6161.html.
[2] LIN Chuang, PENG Xuehai. Research on network architecture with trustworthiness and controllability [J]. Journal of Computer Science and Technology, 2006, 21(5): 732-739.
[3] LIN Chuang, REN Fengyuan. A new network with trustworthiness, controllability and expansibility [J]. Journal of Software, 2004, 15(12): 1815-1821.
[4] LIN Chuang, WANG Yang, LI Quanlin. Random model method and evaluation technology of network security [J]. Chinese Journal of Computers, 2005, 28(12): 1943-1956.
[5] LIN Chuang, PENG Xuehai. Research on trustworthiness of networks [J]. Chinese Journal of Computers, 2005, 28(5): 751-758.
[6] TIAN Junfeng, XIAO Bing, MA Xiaoxue, et al. Trustworthy model and analysis in TDDSS [J]. Journal of Computer Research and Development, 2007, 44(4): 598-605.
[7] SHEN Changxiang, ZHANG Huanguo, FENG Dengguo, et al. Survey of information security [J]. Science in China Series E (Information Sciences), 2007, 37(2): 129-150.
[8] LIN Chuang, PENG Xuehai. Research on trustworthy networks [J]. Chinese Journal of Computers, May 2005, 28(5): 751-758.

WebELS: A Multimedia E-Learning Platform for Non-broadband Users

Zheng HE, Jingxia YUE and Haruki UENO
National Institute of Informatics, Japan
hezheng@nii.ac.jp

Abstract

In this paper,
a multimedia-support e-learning platform called WebELS (Web-based E-Learning System) for non-broadband users is introduced. The system includes three main modules: editor, player and manager. Some technologies are implemented to make multimedia contents possible for users without a wideband network: image-based slides synchronized with audio and cursor movement, media streaming-like downloading, and an offline player embedded in the downloaded content. Both asynchronous and synchronous e-learning are supported on this platform.

1. Introduction

The rapid development of computer and Internet technologies has made e-learning an important education method. One of the most attractive features of e-learning is its capability to integrate different media, such as text, picture, audio and video, to create multimedia content, which can provide flexible information associated with instructional design and authoring skills to motivate the learning interest and willingness of students [1-3].

Generally, to support multimedia contents there are three things that a network has to provide: bandwidth, consistent quality of service, and multipoint packet delivery. Today's multimedia applications have a wide range of bandwidth requirements: 64 kbps for telephone-quality audio, 128 kbps to 1 Mbps for video conferencing, 1.54 Mbps for MPEG video, etc. [4] Therefore, for users in non-broadband areas or without a wideband network at hand, multimedia e-learning with acceptable quality seems to be an impossible task.

The WebELS system presented in this paper provides a multimedia e-learning platform for non-broadband users. It includes three main modules: editor, player and manager. Some technologies are implemented to make the multimedia contents possible for users without a wideband network at hand: the multimedia content is converted into slide series synchronized with audio and cursor actions, the download is streaming-like, and an offline player is embedded into the multimedia content.

2. System Overview

2.1. Features

WebELS (Web-based E-Learning System) is an online e-learning platform aiming not only to assist teachers to develop, deliver or archive multimedia contents on the web, but also to provide students with various materials and flexible tools to study in their own learning styles. Students can access those contents anywhere, anytime, without a wideband network, and study as they prefer. Some main features are listed below:

- Browser-based: no download and pre-installation are necessary.
- Always-on and real-time: teachers can archive their lectures or even carry out virtual classrooms over the Internet with WebELS.
- Flexible authoring: all necessary tools to create contents online are provided.
- Broad file type support: text, picture, video, and flash contents with a presentation pointer are supported.
- Unicode: multi-lingual content is supported. [5]

2.2. Functions

2.2.1. Editor. The WebELS editor provides an editing toolset for users to develop their own multimedia contents from raw materials in various formats, such as .DOC, .PPT, .PDF, .JPG and .GIF, so that teachers can create or edit their own multimedia content by transferring or integrating raw materials. The content title and category should be assigned first.

Then those materials are converted into image series, which are called slides in this paper. Moreover, users can make attractive multimedia contents by inserting audio synchronized with cursor movements, or video and flash clips, into one slide; the contents are then archived on the WebELS server, where the course list and access permissions are managed.

2.2.2. Player. The WebELS player provides users with a friendly platform to learn multimedia contents online or offline. The player is designed like a standard media player, so users can easily control their contents, which can be presented in the following delivery modes: pure slides, or audio- and cursor-synchronized slides. A zoom-in function is available so that details on a slide can be displayed clearly.

2.2.3. Manager. The WebELS management toolset was developed to help users manage the archived contents, including content categories and content permissions, to manage user information, and to verify course dependencies.

2.3. Architecture

In the WebELS system, the implementation of the editor, player and manager is based on a typical Browser/Server (B/S) structure, and all user interfaces are implemented as web pages; under this structure, the system can be accessed and used without downloading or installing other tools beforehand. The main tasks are executed on the server side and only minor work is carried out on the browser side, which greatly lightens the load on client-side computers. Figure 1 describes the architecture of the WebELS system. The web server is responsible for users' requests: a customized HTTP server handles the web-based user interface, which includes HTML and JSP pages, while Java Servlets respond to the interactions with the Java tools on the client side and perform data transformation with the database server. The database server provides data storage for the archived contents as well as log and registration information, and all user account information is maintained by an SQL database server.

Figure 1. Architecture of WebELS system

3. Multimedia Contents

3.1. Image-based content

An e-learning course based on multimedia contents may be highly bandwidth-intensive, and students using a non-broadband network may have difficulty in playing and retrieving such contents. The WebELS content is therefore presented as a sequence of image frames instead of a video stream. Not only the contents originally prepared in PDF, PPT or DOC formats, but also the pictures used to describe lecture scenes (such as GIF, JPG and PNG) are automatically converted into a single image format by the WebELS editor before they are uploaded to the WebELS server.

Figure 2 compares the picture quality of the image-based contents provided by the WebELS system and video contents played by Windows Media Player. Obviously, the image-based picture with 800×600 pixels is clearer and brighter than the video picture with 512×342 pixels. Figure 3 is an example of image-based content with enough resolution to ensure complete details in the picture: by freely zooming or dragging such content based on high-quality images, students can see fine details which are difficult to recognize in common video contents.

Figure 2. Comparison of picture quality between image-based and video contents
Figure 3. WebELS contents with zoom-in function
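The conversion of raw pictures into uniform slide images can be sketched as follows (our illustration using Pillow; the actual WebELS converter, its target format and its parameters are not specified in the paper, so the 800×600 slide size is only an assumption taken from the comparison above):

from pathlib import Path
from PIL import Image

SLIDE_SIZE = (800, 600)   # assumed slide resolution

def to_slide(src: str, dst_dir: str) -> Path:
    img = Image.open(src).convert("RGB")
    img.thumbnail(SLIDE_SIZE)              # scale down, keeping the aspect ratio
    out = Path(dst_dir) / (Path(src).stem + ".png")
    img.save(out, "PNG")                   # one image file per slide
    return out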

3.2. SPEEX audio files

Audio is an indispensable part of an e-learning course: high-quality vocal files help students understand lectures more clearly and actively. However, improving the audio quality would result in a large size and heavy load of the contents, which is impractical for non-broadband users, so an audio format with high quality as well as small size is necessary. SPEEX is an audio compression format designed especially for speech. It is well adapted to Internet applications and provides useful features that are not present in most other codecs; it is a free alternative to expensive proprietary speech codecs and lowers the barrier of entry for voice applications.

In WebELS, the audio is embedded into the contents by the WebELS editor. The audio data, either recorded and saved previously in files or transmitted simultaneously through sound pick-up outfits, are decoded to a raw format, encoded into the SPEEX format, and saved into the selected slide file. Figure 4 illustrates how the SPEEX audio files are implemented in the WebELS system.

Figure 4. Implementation of SPEEX audio files

3.3. A synchronized cursor

In the WebELS system, a cursor file is also embedded into each slide of the contents to simulate the lecturer's actions: users can edit the slide and point a cursor onto the corresponding position to mark the topic they are talking about, or to show the points they want to emphasize. As Figure 5 shows, the cursor positions are synchronized with the audio stream in each slide file by the time shaft. Each slide has its own timeline in order to: i) record the cursor positions (x, y) chronologically; and ii) store the information (s, f) for the embedded audio stream simultaneously, where s is the current state (play or stop) and f is the frame number. For example: i) when t = t0, (s, f) = (play, f0) and (x, y) = (0, 0): the audio stream is started, and there is no cursor action; ii) when t = t1, (s, f) = (stop, f1) and (x, y) = (x1, y1): the audio stream is paused and the cursor points to the location (x1, y1) on this slide; iii) when t = ti, (s, f) = (play, fi) and (x, y) = (xi, yi): the audio stream has processed fi frames and the cursor points to the location (xi, yi) on this slide.

Figure 5. Synchronization of cursor positions with the audio stream
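A minimal sketch of such a per-slide timeline follows (the field names and methods are our own; the on-disk format used by WebELS is not specified in the paper):

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SlideTimeline:
    # each event: (time, audio_state, audio_frame, cursor_x, cursor_y)
    events: List[Tuple[float, str, int, int, int]] = field(default_factory=list)

    def record(self, t, state, frame, x, y):
        self.events.append((t, state, frame, x, y))

    def cursor_at(self, t):
        # replay: latest cursor position recorded at or before time t
        pos = (0, 0)
        for et, _, _, x, y in self.events:
            if et > t:
                break
            pos = (x, y)
        return pos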

Also in Figure 5, a continuous audio stream can be divided at different cursor positions according to the frame numbers recorded by the timeline. Because the audio stream is edited on the basis of the timeline, its file size is decreased, and the duration of the audio stream is normally shorter than that of the cursor action, although both actions are recorded simultaneously.

3.4. Streaming multimedia

A streaming multimedia system can not only ensure that teachers create appealing multimedia contents but also allow students to access those contents without a wideband network. The WebELS server supports streaming-like download: the data of the contents are stored in each slide file as separate data packages instead of one complete file, and the audio data as well as the corresponding cursor data are divided into clips, which are also treated as buffering packages. A streaming-like download process is therefore available based on such data packages: after one data package has been downloaded, the client-side player begins to display the slide while the download of the data package for the next slide continues.

4. Comparison

4.1. Implementation

Basically, the WebELS contents consist of an image-based slide series, and each slide includes the following information: image, audio, cursor and timeline (see Figure 7).

[Figure 7. Data structure of WebELS content]

A comparison was carried out between a video-based content and the corresponding WebELS content, as shown in Table 2. The original content was a 42-minute video file in AVI format which had been recorded in real time in a classroom; this video-based content was then edited by the WebELS editor and converted into an image-based content with audio and cursor files attached. As the table shows, compared with the real-time recording system the content size was much decreased while the image resolution was improved.

Table 2. Comparison between video-based and WebELS contents

                    Video-based         WebELS
Size                154 MB              3.9 MB
Image resolution    512×342             800×600
Streaming quality   Time lag and delay  Smooth and stable

4.2. Time delay

Furthermore, both contents were tried out under the same conditions, including over a 56K modem. First, we installed the streaming server GNUMP3d for the video-based content and the WebELS server on the same server machine; then the Bandwidth Controller (a tool for bandwidth control) was installed on the client-side computer to simulate various network bandwidths, and the two kinds of contents were played over networks with bandwidths of 56 Kbps, 256 Kbps, 512 Kbps, 1 Mbps and 10 Mbps to test their sensitivity to the network condition.

Figure 6 shows the time delay that was necessary to open the two kinds of contents under the different network bandwidths. The WebELS contents were much easier to open than the video-based contents: when the bandwidth was less than 256 Kbps it was hardly possible to open the video-based content at all, and only the WebELS contents played smoothly; when the bandwidth was under 10 Mbps, the video-based content could be opened only with some delay during playing.

[Figure 6. Comparison of the time delay under various network bandwidths]
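To make the per-slide packaging of Figure 7 concrete, here is a minimal sketch of the data layout it implies, with the audio and cursor tracks pre-split into M buffering clips. The class and field names are assumptions made for illustration only.

from dataclasses import dataclass
from typing import List

@dataclass
class SlidePackage:
    image: bytes               # downloaded first, lets the slide appear early
    timeline: bytes            # synchronization data for this slide
    audio_clips: List[bytes]   # M buffering packages of SPEEX audio
    cursor_clips: List[bytes]  # M matching packages of cursor samples

def split_into_clips(data: bytes, m: int) -> List[bytes]:
    # Divide a continuous stream into m roughly equal buffering packages.
    size = max(1, -(-len(data) // m))   # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]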

Within the data package of one typical slide, the audio data usually take a higher percentage of the size than the image, timeline and cursor data, so we also apply the streaming technique to the audio data included in one slide. Namely, the first audio clip and its cursor data are downloaded first, then the slide is displayed and the user can start to study; while the first audio clip is playing, the next audio clip is being downloaded, so the clips play as one seamless stream.

The process of streaming download in the WebELS system is described in Figure 8. Suppose a WebELS content consists of N slides (i=1, …, N) and in each slide the cursor and audio data are divided into M clips (j=1, …, M). The player first downloads the image and timeline of slide i, then downloads cursor clip j and audio clip j for j=1, …, M, and then moves on to slide i+1, until all N slides have been processed.

[Figure 8. Implementation of the streaming-like download]

5. Offline player

The WebELS system also provides an offline player for students without a settled network connection: in order to keep the multimedia learning materials accessible even under a network with limited bandwidth, an offline player is embedded into the e-learning content. After a user decides to download an e-learning content, the WebELS server automatically creates a container which includes three main parts: the offline player, the necessary libraries, and the encrypted WebELS contents. Such a container is then converted into a .JAR package by the WebELS server; when the download is finished, the .JAR package is unpacked and saved on the client-side computer. Figure 9 describes the implementation process of the WebELS offline player. Moreover, the offline player has the following features:
x The offline player is embedded into the WebELS contents, so all contents behave like a portable textbook.
x The interface style is the same as that of the online viewer, so the offline learning mode ensures the maximum usability for users without a network.
x The contents bundled with the offline player are encrypted in order to protect their copyright.

[Figure 9. Implementation of the offline player]

6. Conclusions

This paper introduces an e-learning system for non-broadband users. A kind of image-based content combined with synchronized audio and cursor tracks is created by an editor module provided by the web server, and streaming-like download is implemented so that such multimedia contents can be displayed smoothly even on a client-side non-broadband network. Moreover, an offline player is embedded into the e-learning content so that the multimedia learning materials remain accessible without a network connection. The WebELS system has been launched on the Internet at http://webels.ex.nii.ac.jp/; its services have been made available on the Internet and its functions are continuously extended to cover the users' requirements.

References

[1] C.A. Carver Jr., R.A. Howard and W.D. Lane, "Enhancing Student Learning Through Hypermedia Courseware and Incorporation of Student Learning Styles", IEEE Transactions on Education, Vol.42, No.1, pp.33-38.
[2] P.M. Lee and W.G. Sullivan, "Developing and Implementing Interactive Multimedia in Education", IEEE Transactions on Education, Vol.39, No.3, pp.430-435.
[3] R. Jain, "A revolution in education", IEEE Multimedia, pp.1-5.
[4] "Web Design Guidelines for Low Bandwidth", http://www.aptivate.org/webguidelines/Multimedia.html
[5] H. Ueno, "Web-based Distance Learning for Lifelong Engineering Education - A Personal View and Issues", Journal of Information and Systems in Education, Vol.1, No.1, pp.45-52.
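The nested download loop of Figure 8 can be expressed directly in code. The sketch below assumes a download() helper and plays no media; it only reproduces the ordering described above (image and timeline of slide i first, then the M cursor/audio clip pairs).

def stream_content(n_slides: int, m_clips: int, download) -> None:
    # download(kind, i, j=None) is an assumed helper that fetches one
    # data package; the order below mirrors Figure 8.
    for i in range(1, n_slides + 1):
        download("image", i)           # the slide can be shown early
        download("timeline", i)
        for j in range(1, m_clips + 1):
            download("cursor", i, j)   # clip j buffers while clip j-1 plays
            download("audio", i, j)

# Usage (fetch() is an assumed network routine):
# stream_content(N, M, download=lambda kind, i, j=None: fetch(kind, i, j))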

Implementation and Improvement Based on the Shear-Warp Volume Rendering Algorithm

Li Guo
College of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, 610054
dlbasin@163.com

Xie Mei
College of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, 610054
xiemei@ee.uestc.edu.cn

Abstract — The shear-warp algorithm is one of the fastest CPU-based algorithms for volume rendering so far, but it achieves this rendering speed only by sacrificing the resulting image quality: the image usually cannot meet the requirements because of image aliasing. The reason is that the resampling occurs only inside the slices, so a single slice's value alone determines the value of the slab between two adjacent slices and cannot reflect the whole primal data and its specific characteristics. Addressing these problems of the traditional shear-warp algorithm, this paper presents a new method that makes the resulting image quality much improved by mending the resampling values used during the process of image compositing, while having almost no effect on the efficiency of rendering. In this method, two adjacent slices are defined as the front slice and the back slice according to the direction of the viewing ray, and a cost function is defined to describe the effect of the two adjacent slices on the slab between them. The values of the resampling points in the front slice and the back slice are input to the cost function as its parameters, and the resulting value can reflect the information of the slab more effectively; compositing with this resulting value as the resampling value improves the image quality significantly, with nearly no effect on the rendering speed. Experiments have proved the effects of this new method.

1. Introduction

The process of building a screen image from an object's geometric model is called rendering, which is the most important part of scientific computing visualization. Rendering includes surface rendering [1~3] and volume rendering. The surface rendering algorithms can usually reach a fast speed; unfortunately, they cannot reflect the whole primal data and its specific characteristics. Volume rendering, also called direct volume rendering, transforms the 3D data field directly into the final 2D image for display, without needing to construct a geometric description of the object's surfaces, and it is therefore considered the most promising rendering technology in the field of scientific computing visualization.

At present, volume rendering mainly includes the ray-casting algorithm [4] based on image space, the splatting algorithm [5] based on object space, and the shear-warp algorithm [6~7] based on both image and object spaces. Both the ray-casting and splatting methods involve a large amount of calculation and a long computing time, owing to the need for resampling after every change of the viewing direction, so it is difficult to meet the requirements of interactive applications with these two methods. The shear-warp algorithm instead factorizes the projection transformation of the 3D discrete data field into two steps: a shear of the 3D data field and a warp of the intermediate image. As a result, the process of resampling is changed from 3D space to a 2D plane, and a nearly interactive rendering speed is achieved on graphics workstations because of the significantly reduced amount of calculation. However, the image quality usually cannot meet the requirements by reason of image aliasing. This paper therefore presents a new method that makes the resulting image quality much improved by mending the resampling values used for image compositing, with nearly no effect on the rendering speed; experiments have proved the new method's effects.

2. The shear-warp algorithm

The basic principle of the shear-warp algorithm is shown in Figure 1 [8].

[Figure 1: (a) Standard and (b) factorized viewing transformation]

3. The improvement

3.1. Image compositing

In this section we first introduce the foundational theory of image compositing. The shear-warp algorithm is based on the idea of factorizing the view matrix Mview, which transforms points from object space to image space, into a shear component Mshear, a warp component Mwarp, and a principal-axis transformation matrix P:

\[ M_{view} = M_{warp} \cdot M_{shear} \cdot P \]

Briefly, the shear-warp algorithm consists of three primary steps:
1) Transform the volume data to shear space by translating each slice using Mshear, in such a way that the viewing direction becomes perpendicular to the slices.
2) Project the volume into a distorted 2D intermediate image in shear space by compositing the resampled slices together.
3) Transform the distorted intermediate image to the resulting image by the warp operation using Mwarp.

For a parallel projection, since the shear is parallel to the slices, the shear factor Mshear contains only a transformation that simply translates each volume slice:

\[ M_{shear} = \begin{pmatrix} 1 & 0 & s_x & 0 \\ 0 & 1 & s_y & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \]

where sx and sy represent the translating factors along the x-axis and the y-axis respectively, decided by the viewing direction. Let Mview3 be the upper-left 3×3 submatrix of Mview:

\[ M_{view3} = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix} \]

Then the values of sx and sy are computed by

\[ s_x = \frac{m_{22}m_{13} - m_{12}m_{23}}{m_{11}m_{22} - m_{21}m_{12}}, \qquad s_y = \frac{m_{11}m_{23} - m_{21}m_{13}}{m_{11}m_{22} - m_{21}m_{12}} \]

The warp factor Mwarp that transforms the intermediate image to the resulting image can then be derived from Mview:

\[ M_{warp} = M_{view} \cdot P^{-1} \cdot M_{shear}^{-1} \]

Because the warp comes after the projection, it is a 2D operation and much faster to compute.

Image compositing forms the value of each pixel in the image along the corresponding viewing ray, by transforming the resampled volume data of each slice into color and opacity values and compositing them together. The order of the compositing can be front-to-back or back-to-front; in this paper we use the front-to-back order, as shown in Figure 2.

[Figure 2: Front-to-back order compositing]

We specify colors and extinction coefficients for each scalar value s of the volume data by the transfer functions c(s) and τ(s); the color emitted from one point of the volume is determined by c(s)τ(s). The intensity I along a viewing ray parameterized by x from 0 to D is then given by

\[ I = \int_0^D \tau(s(x))\, c(s(x)) \exp\Big(-\int_0^x \tau(s(x'))\, dx'\Big)\, dx \tag{1} \]

[Figure 3: Sampling of s(x) along a viewing ray, with segments of length d between the sample points id and (i+1)d, and si = s(id)]

If the viewing ray is divided into n equal segments of length d = D/n and the value of s is assumed constant on each ray segment, si = s(id), as shown in Figure 3 [8], formula (1) can be approximated as follows.
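As a concrete illustration of the factorization, the following NumPy sketch computes the shear factors and the warp matrix from a given view matrix for a parallel projection. It assumes the principal axis is +z (so that P is the identity) purely for simplicity; this is an illustrative reconstruction, not the authors' code.

import numpy as np

def factorize_view(m_view: np.ndarray):
    # Split a 4x4 view matrix into shear and warp parts (parallel
    # projection; principal axis assumed to be +z so that P = I).
    m = m_view[:3, :3]                        # Mview3, upper-left 3x3
    det = m[0, 0] * m[1, 1] - m[1, 0] * m[0, 1]
    sx = (m[1, 1] * m[0, 2] - m[0, 1] * m[1, 2]) / det
    sy = (m[0, 0] * m[1, 2] - m[1, 0] * m[0, 2]) / det

    m_shear = np.eye(4)
    m_shear[0, 2] = sx                        # slice i is translated by i*sx
    m_shear[1, 2] = sy                        # and i*sy in shear space

    # Mview = Mwarp . Mshear . P  =>  Mwarp = Mview . P^-1 . Mshear^-1
    m_warp = m_view @ np.linalg.inv(m_shear)
    return sx, sy, m_shear, m_warp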

\[ I \approx \sum_{i=0}^{n-1} \tau(s(id))\, c(s(id))\, d\, \exp\Big(-\sum_{j=0}^{i-1} \tau(s(jd))\, d\Big) = \sum_{i=0}^{n-1} \tau(s(id))\, c(s(id))\, d \prod_{j=0}^{i-1} \exp(-\tau(s(jd))\, d) \approx \sum_{i=0}^{n-1} c_i \prod_{j=0}^{i-1} (1-\alpha_j) \]

with the opacity αi of the i-th ray segment defined by

\[ \alpha_i = 1 - \exp\Big(-\int_{id}^{(i+1)d} \tau(s(x))\, dx\Big) \approx 1 - \exp(-\tau(s(id))\, d) \approx \tau(s(id))\, d \]

and the color ci defined by

\[ c_i \approx \int_{id}^{(i+1)d} \tau(s(x))\, c(s(x))\, dx \approx \tau(s(id))\, c(s(id))\, d \approx \alpha_i\, c(s(id)) \]

Then, accumulating in front-to-back order, the final formula of image compositing can be described as follows:

\[ \alpha_{out} = \alpha_{in} + (1-\alpha_{in})\, \alpha_i, \qquad c_{out} = c_{in} + (1-\alpha_{in})\, c_i \tag{2} \]

3.2. Improved method

There are two kinds of resampling in the shear-warp algorithm: one is the resampling in each slice during the process of image compositing in shear space, and the other is the resampling in the intermediate image while warping it to the resulting image. We discuss only the first kind in this paper.

As demonstrated above, compositing an image by formula (2) rests on the precondition that the scalar value of each ray segment is constant. In the traditional shear-warp method, the resampling operation in shear space occurs just inside each slice. Along the direction of the viewing ray, a slab is formed between two adjacent slices, as Figure 4 illustrates; we define these two adjacent slices as the front slice and the back slice. The length of each ray segment, which also represents the length of the slab, is determined by the distance between the two primal adjacent slices and the direction of the viewing ray. In the traditional algorithm, the slab's value is s = sf: it is determined only by the front slice and has nothing to do with the back slice. Since a single slice cannot represent the slab effectively, the resulting image quality is usually too poor for users' requirements. Considering this relatively poor image quality of the traditional shear-warp algorithm, literature [8] introduced a pre-integration method and literature [9] used a resampling method of real-time interpolation in the middle section, but both of them increased the amount of calculation.

[Figure 4: The scalar value of a slab in the traditional shear-warp algorithm]

In the proposed algorithm, since the front slice and the back slice both contribute to the slab between them, we use the two adjacent slices to describe the slab and define a cost function f to describe their effect, as shown in Figure 5. Here we define the cost function as s = f(sf, sb) = w·sf + (1−w)·sb, where w is a scale factor; the scalar values sf and sb are the values of the resampled points in the front slice and the back slice respectively, and the return value s represents the value of the slab, which is used as the scalar value of the ray segment when compositing, instead of sf alone. This function is relatively simple and can be a good reflection of the slab's information; the closer the cost function is to the real situation, the better the effect of rendering.

[Figure 5: Improved resampling for compositing]

3.3. Results

The traditional and the presented shear-warp algorithms were simulated respectively in Matlab 7.0 on a workstation (Pentium(R) 4 2.40GHz CPU, 256M memory), using the same CT slice images coming from the West China Center of Medical Sciences, Sichuan University. The proposed method has little impact on the rendering speed, but the image quality is improved a lot compared with the traditional method, as attested by our experiments. Figure 6 shows the image quality comparison after the correction.
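A short sketch of the improved compositing loop may help. It renders one ray in front-to-back order using formula (2), with the slab value taken from the cost function s = w·sf + (1−w)·sb instead of the front slice alone. The transfer functions and the value of w are placeholders, and the early-termination check is a common optimization added for illustration; this is a sketch under those assumptions, not the authors' Matlab code.

def composite_ray(samples, color_tf, tau_tf, d, w=0.5):
    # Front-to-back compositing of one viewing ray. samples[i] is the
    # resampled scalar value on slice i, so (samples[i], samples[i+1])
    # are the front/back values of slab i.
    c_acc, a_acc = 0.0, 0.0
    for sf, sb in zip(samples, samples[1:]):
        s = w * sf + (1.0 - w) * sb        # cost function for the slab
        alpha_i = min(1.0, tau_tf(s) * d)  # opacity of the ray segment
        c_i = alpha_i * color_tf(s)        # emitted color of the segment
        c_acc += (1.0 - a_acc) * c_i       # formula (2)
        a_acc += (1.0 - a_acc) * alpha_i
        if a_acc > 0.99:                   # early ray termination
            break
    return c_acc, a_acc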

[Figure 6: Image quality comparison between the traditional and the proposed algorithm: (a) traditional elevation, (b) proposed elevation, (c) traditional planform, (d) proposed planform, (e) partial magnified image]

The simulation results show that the images produced by the traditional rendering method have some visible stripes, while with the proposed algorithm the stripes disappear and the image quality is improved; the proposed algorithm is thus a considerable improvement over the traditional one.

4. Conclusions

By using the values of two adjacent slices to define the slab's value, which is more in line with the actual situation, the image quality is improved. Both the principle and the implementation results show that the method is effective in obtaining a higher image quality while almost not affecting the rendering speed.

Acknowledgment

The authors would like to thank Zhen Zheng for helping so much in the daily study, Wu Bingrong for providing the medical images for the simulation, and Peng Gang for instruction in writing papers in English. Finally, thanks go to all the people in the 903 staff room.

References

[1] Lorensen W E, Cline H E, "Marching Cubes: A High Resolution 3D Surface Construction Algorithm", Computer Graphics, 21(4), 1987, pp.163-169.
[2] Doi A, Koide A, "An Efficient Method of Triangulating Equi-Valued Surfaces by Using Tetrahedral Cells", IEICE Transactions, E74(1), 1991, pp.214-224.
[3] Cline H E, Lorensen W E, "Two Algorithms for Three-dimensional Reconstruction of Tomograms", Medical Physics, 15(3), 1988, pp.320-327.
[4] Levoy M, "Display of Surfaces from Volume Data", IEEE Computer Graphics and Applications, 8(3), 1988, pp.29-37.
[5] Westover L, "Footprint Evaluation for Volume Rendering", Computer Graphics, 24(4), 1990, pp.367-376.
[6] Lacroute P, Levoy M, "Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation", Computer Graphics Proceedings, 1994, pp.451-458.
[7] Lacroute P, "Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation", Technical Report CSL-TR-95-678, 1995.
[8] Schulze J P, Kraus M, Lang U, "Integrating Pre-integration into the Shear-Warp Algorithm", Proceedings of the 2003 Eurographics/IEEE TVCG Workshop on Volume Graphics, Tokyo: ACM Press, 2003, pp.109-118.
[9] Sweeney J, Mueller K, "Shear-Warp Deluxe: the Shear-Warp Algorithm Revisited", Proceedings of the Symposium on Data Visualization, Barcelona: Eurographics Association, 2002, pp.95-104.

Conferencing, Paging, Voice Mailing via Asterisk EPBX

Ale Imran1, Mohammed A Qadeer2
1 Dept of Electronics Engg, Aligarh Muslim University, Aligarh, India
2 Dept of Computer Engg, Aligarh Muslim University, Aligarh, India
{aleimran, maqadeer}@zhcet.ac.in

Abstract—This paper is intended to present the theoretical and implementation details of various important features that are generally associated with an Asterisk based Voice Exchange, i.e. Conferencing, Paging and Voice mailing. The development of an Asterisk based Voice Exchange working on VoIP takes into consideration the various complexities associated with a conventional Private Branch Exchange (PBX): a conventional circuit-switching based PBX is not only expensive but limited in terms of functionality as well. Besides this, we concentrate on the web based approach for the configuration of the hard phones that are used at the clients' end. The application is developed in the C language and is compatible with all versions of Linux. Our approach follows the client-server model for all subsidiary procedures.

Keywords—Asterisk, PBX, VoIP, Conferencing, Paging, Voicemail, ST-302 hardphone

I. INTRODUCTION

Over the last few years Voice over Internet Protocol (VoIP) has become an important player in supporting telephony, essentially providing PC-to-PC data and voice communications (which may be one-to-one or many-to-one, depending on the usage requirements), and it can help users save money on long international calls. The contribution to achieving this goal clearly goes to the various advantages that the technology offers. To name a few, these would certainly include a reduction in the bandwidth requirements and the availability of a large number of features such as selective call forwarding and rejection [4]. The backbone of the system generally becomes an IP-enabled network.

Asterisk is a complete phone system in software: instead of switching analog lines in hardware, it routes and manipulates Voice over Internet Protocol (VoIP) packets in software [4]. It can replace large and expensive phone systems powering thousands of extensions; however, it also supports old analog phones using gateway devices. Asterisk is different for several reasons, the most important being its all-software approach. Being implemented in software, it is extremely versatile, easy to customize and easy to extend, and it provides various other features which, one could say, are almost patented with it: paging, music on hold, interactive voice response (IVR), optional web based administration interfaces, configuration in SQL databases or flat files, detailed call logging into a database, and many more.

[Fig 1: Overview of Asterisk based system [4]]

II. COMPLETE ASTERISK SYSTEM

A. Set Up of the Work

Asterisk is an open source converged telecommunications platform designed to allow different types of telephony hardware, middleware and software to interact with each other consistently. It provides multiple layers, managing both TDM and packet voice at the lower layers while offering a highly flexible platform for PBX and telephonic applications. Asterisk can bridge and translate different types of VoIP protocols such as SIP, MGCP, IAX and H.323 [2]. Hence, to summarize, with Asterisk you can:
• Provide basic service to analog and digital phones.
• Route incoming and outgoing voice calls over standard voice lines or the Internet.
• Develop call routing logic in order to choose the least expensive way to route a particular call.
• Provide voicemail and teleconferencing services.
• Develop complex or simple interactive menus.
• Operate small or large queues for call centers, announcing the estimated hold time to the callers.
• Call other programs on the system.
On top of that, we get a variety of features and interfaces to the operating system and programming languages for the extreme in power.

In order to set up an Asterisk based Private Branch Exchange (PBX), we require the following three major components [3]:
• An Asterisk based PBX.
• Phones at the clients' end, which may be soft or hard depending upon the requirements.
• A VoIP gateway service, in order to enable a particular user to call others who might be on the PSTN or on the same IP network.

We begin implementing our voice exchange by compiling the Asterisk system. The following commands help us in compiling Asterisk:

cd /root/ale/asterisk
make
make install
make samples
make progdocs

[Fig 2: Snapshot after the complete installation [4]]

Once it has been successfully installed, we can start Asterisk on the server by running the following command:

/root/ale/asterisk -vvvc

Since here we are interested in having hard phones at the clients' end, we have the choice of selecting a particular hard phone among the various available ones and configuring it. ST-302 is an IP-enabled hard phone which uses the Inter Asterisk Exchange (IAX) protocol. It has two RJ-45 ports, one for connection with a PC and another for connection with the existing IP-enabled network: we put the Ethernet cable from our network into the port labeled RJ-45, and use another Ethernet cable, plugged into the RJ-45 jack labeled PC, to connect our computer with the phone.

[Fig 3: Picture showing ST-302, the IP enabled hardphone]

B. Asterisk PBX configuration

Now we need to create one user in the iax.conf file, because the phone uses the IAX protocol to connect to the Asterisk server. So we create a new user, say by the name of user1; this user is going to be used with the ST-302 IP phone. In the entry (Fig 4), type=friend means that the user can make and receive calls; host=dynamic means that the IP is assigned dynamically through a DHCP server; context=test shows that the user works with the extensions in this context of the configuration file extensions.conf; and allow=all means that the line which this user will be using can support all audio codecs.

[Fig 4: IAX.conf file]
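Putting those parameters together, an iax.conf entry for user1 would look roughly like the following. The secret line is an assumed placeholder (the paper does not show the full file), so treat this as an illustrative sketch of the settings described above rather than the authors' exact configuration.

; iax.conf - a sketched IAX user entry for the ST-302 hard phone
[user1]
type=friend        ; can both make and receive calls
host=dynamic       ; IP obtained through DHCP, registered at runtime
context=test       ; dialplan context in extensions.conf
allow=all          ; permit every audio codec on this line
secret=changeme    ; assumed placeholder; not shown in the paper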

Now let us have a look at the extensions.conf file that we will be using for setting up the various extensions:

[test]
Exten=>100,1,Answer()
Exten=>100,2,Dial(IAX/user1)
Exten=>100,3,Hangup()
Exten=>200,1,Answer()
Exten=>200,2,Dial(IAX/user2)
Exten=>200,3,Hangup()

[Fig 5: Extensions.conf file]

In the extensions.conf file, working in the [test] section we have two phone numbers, 100 and 200, and for each of them we have created three extensions. Each extension belongs to a context, either the default context or a specific context that we have created, like incoming IAX calls, long distance outgoing PSTN calls, local calls, etc. [4]. When somebody dials the number 100, the call will first be answered by the Answer application; the next application to be executed is the Dial application, so the ST-302 phone will start ringing and the call will be connected to this phone. The last extension uses the Hangup application, whose purpose is to hang up the line after the conversation is over; in fact it is always a good idea to use this application in dial plans.

III. CONFIGURATION OF THE ST-302 IP PHONE

The configuration of the ST-302 IP phone can be done with the help of the web interface, and for this we need to know the IP address of the phone.

A. Making connections through the Web Interface

The web interface has two access modes, i.e. ordinary and super. The first does not give access to the settings concerning the IAX protocol, whereas the super mode gives access to all the settings, including the ones for the IAX protocol. The mode implemented for the configuration of the ST-302 is the super access mode, since it allows the IAX settings. Configuring the ST-302 with the help of the web interface allows the following operations to be performed:

1) Audio Settings: gives the user a free hand as far as the selection of the various codecs for the hardphone is concerned; the choice may generally depend on the design aspect, and for our implementation we have set this to G.711. Besides this, it allows various other added features as well, for example voice activity detection (vad), automatic gain control (agc), automatic echo cancellation (aec), jitter size and various others.

2) IAX Settings: the following parameters sum up the complete IAX settings. a) Use service: enabling this particular feature allows calls to be made through the gatekeeper; if this feature is disabled, only IP-to-IP calls can be made. b) Local port: this is the port on which the phone negotiates registration information with the server; generally, by default, it is 4569. Other fields include the ppp id and the ppp pin.

3) Network Settings: includes the features generally associated with networks, such as the local ip, subnet mask, dns, iptype and various others.

4) Dial Plan Settings: in this file, actions are connected to extensions.

[Fig 6: Web interface for the configuration of the hard phone]

IV. FEATURES OF ASTERISK

Asterisk offers both classical PBX functionality and advanced features, and interoperates with traditional standards-based telephony systems and Voice over IP systems. Asterisk based telephony solutions offer a rich and flexible feature set, including the advanced features that are often associated with large, high-end proprietary PBXs. Here in this paper we will concentrate in detail on the paging, voicemail and conferencing applications of the Asterisk enabled voice exchange [3].

[Fig 7: Management & Configuration Modules of Asterisk]

A. Voicemail

This feature enables users to record messages for incoming calls that are not answered within a specified number of rings, that receive busy treatment, or that are transferred directly to the voicemail. Asterisk comes with voicemail storage with a capacity of more than a thousand hours, and a message can be retrieved from any remote phone, attached to an email as a .WAV file, or sent to the voice messaging system repository for retrieval from a phone [4]. By accessing the voice portal from any phone, a user can listen to, save, delete or reply to a message received from one or more group members, with introductory messages; moreover, users have the option of marking a particular message as urgent or confidential.

The first thing we need to do is to create the mailbox for Asterisk to use, with the help of the following utility:

Usr/src/asterisk/addmailbox

We also need to edit the voicemail configuration file, i.e. vi voicemail.conf. Then we locate the IAX section, where we add the following entries for the extensions:

Exten=>1000,1,Dial(IAX/100&IAX/200,20,tr)
Exten=>1000,2,Voicemail(u9999)
Exten=>1000,102,Voicemail(b9999)

What we have done over here is that when extension 1000 rings, the first thing we do is dial phone number 1 and phone number 2, i.e. 100 and 200, and make them ring for 20 seconds. If the extension is not answered in 20 seconds, the second entry will be executed, which is a voicemail; the mailbox is specified by u9999 at the end of the line, and a person can leave a voice message there. There are two more things noticeable over here. The first is that the priority number of the last entry has jumped to 102: this is in fact quite a useful feature of Asterisk, which signifies that when a call comes in and the person is already on the line, the priority jumps to n+101. The second thing to notice is that the mailbox number at the end of that line is preceded by b (b9999), which indicates that a busy message should be played to the caller, who should then be allowed to leave a message.

In order to access the recorded messages, we need to add the following lines:

Exten=>1001,1,Ringing()
Exten=>1001,2,Wait(2)
Exten=>1001,3,VoicemailMain(s9999)

Now when extension 1001 is dialed, the following happens:
• The phone gets a ringing tone.
• There is a two-second wait (the phone is still getting the ringing tone).
• The call is answered and goes straight to the voicemail menu for mailbox 9999.

B. Call Conferencing

The call conference is an Asterisk-solution based PBX feature which provides a conference room system for use by all users. Incoming or outgoing calls may be transferred to a conference, or a conference may be dialed directly; joining a particular conference is as simple as dialing its extension [4]. The conferencing feature provided by Asterisk offers the advantage of doing conferencing without any Zaptel hardware, as well as doing native codec streaming, which means that no mandatory downsampling is required; these two factors in fact provide a rather significant boost to VoIP based conferencing. The call conference provided by the Asterisk based PBX has the following features: security passwords to control access to who can call into a conference bridge, unlimited simultaneous conferences, and unlimited participants.

While installing Asterisk on the server, we need to execute the following commands to enable the conferencing feature via the server:

Cd /usr/src/asterisk
Cd app_conference
Make clean
Make
Make install

Once the conferencing feature has been installed, we need to make the following changes in the configuration files:

[conferences]
Exten=>s,1,Answer()
Exten=>s,2,Wait(1)
Exten=>XXXX,1,DBget(pass=conferences/${EXTEN})
Exten=>XXXX,2,GotoIf($["xxx${pass}"="xxxNONE"]?join)
Exten=>XXXX,3,Read(secret,pls-enter-conf-password,4)
Exten=>XXXX,4,Goto(conf-conferencename,join,1)
Exten=>XXXX,5,Conference(${EXTEN}/MTV)
Exten=>XXXX,6,Hangup()

[confhelper]
Exten=>in,1,Answer()
Exten=>in,2,Background()
Exten=>in,3,ResponseTimeout()
Exten=>out,1,Answer()
Exten=>out,2,Background()
Exten=>out,3,ResponseTimeout()
Exten=>out,4,Hangup()

C. Paging

This feature supports system-wide paging and single-phone intercom, as well as the unlimited parking of calls simultaneously. Call parking enables a user to hold a call and to retrieve it from another station within the group: in order to park a call, the user presses the flash hook and dials the call park feature code; the call is parked and the caller is held. In order to retrieve the call, the user can go to any phone in the group and dial the call retrieve feature code followed by the parked user's extension id [4]; as a result the call is retrieved and connected to the retrieving user.

1) One-to-Many Paging:

[One_way_page_group]
Exten=>1,1,IAXAddheader(Call-Info: answer-after=0)
Exten=>1,2,Page(${One_way_paging_list})
Exten=>1,3,Hangup()

The above configuration allows users to one-way page (broadcast) to all the extensions defined in the variable One_way_paging_list, which can be defined as One_way_paging_list=>IAX/100&IAX/200.

2) One-to-One Intercom: We first need to define a macro and then use it in the one-to-one intercom context:

[macro-pageext]
Exten=>s,1,ChanIsAvail(${ARG1}/JS)   ; where J is for jump and S is for ANY call
Exten=>s,2,IAXAddheader(Call-Info: answer-after=0)
Exten=>s,3,Dial(${ARG1})
Exten=>s,4,Hangup()
Exten=>s,102,NoOp()

[intercom_group]
Exten=>*5XX,1,Macro(pageext,IAX/${EXTEN:1})
Exten=>*5XX,2,Hangup()

The above configuration allows a user to intercom with any extension by dialing *5XX.

3) One-to-Many Intercom:

[Two_way_intercom_GROUP]
Exten=>**2,1,IAXAddheader(Call-Info: answer-after=0)
Exten=>**2,2,Page(${Two_way_intercom_list}/d)
Exten=>**2,3,Hangup()

The above configuration allows a user to run a two-way intercom to all the extensions defined in the variable Two_way_intercom_list, defined as Two_way_intercom_list=>IAX/100&IAX/200.

V. SUMMARY

We expect that the design and implementation aspects presented in this paper will be a valuable development guide for similar kinds of applications. This project has provided us with invaluable experience related to VoIP and collaborative efforts, and Asterisk has proven to be a viable PBX for future research studies. Future work could proceed along the following lines:
• Using an Asterisk based server for connecting two remote locations.
• Implementing IVR, Multiple Auto Attendants, Music-on-hold and various other features of the Asterisk based PBX.

REFERENCES

[1] Taemoor Abbasi, Shekhar Prasad, Nabil Seddigh, Ioannis Lambadaris, "A comparative study of the SIP & IAX voice protocols", in Proc. CCECE/CCGEI, Saskatoon, May 2005.
[2] Md. Zaidul Alam, Saugata Bose, Md. Mhafuzur Rahman, Mohammad Abdullah Al-Mumin, "Small office PBX using Voice over IP", in Proc. ICACT, 12-14 Feb. 2007.
[3] Jim Van Meggelen, Leif Madsen, Jared Smith, "Asterisk: The Future of Telephony", Second Edition, August 2007.
[4] Mohammed A Qadeer, Ale Imran, "Asterisk Voice Exchange: An alternative to conventional EPBX", in Proc. IEEE ICCEE 2008, Dec. 2008.
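As a sketch of how the call parking described above is typically wired up in Asterisk, the snippet below shows the stock features.conf parking lot settings; extension 700 with slots 701-720 and a 45-second ring-back are Asterisk's documented defaults of that era. This is a generic illustration of the feature, not a configuration taken from the paper.

; features.conf - default call parking lot (generic Asterisk example)
[general]
parkext => 700          ; transfer a caller here to park the call
parkpos => 701-720      ; parked calls wait on these slots
context => parkedcalls  ; include this context to allow retrieval
parkingtime => 45       ; seconds before the parked call rings back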

A New Mind Evolutionary Algorithm Based on Information Entropy

Yuxia Qiu
The College of Management Science and Engineering, Shanxi University of Economics and Finance, Taiyuan, P.R. China, 030006
qyxljl@yahoo.com.cn

Keming Xie
The College of Information Engineering, Taiyuan University of Technology, Taiyuan, P.R. China, 030024
kemingxie@tyut.edu.cn

Abstract—A new self-adaptive Mind Evolutionary Algorithm based on information entropy is proposed to improve the algorithmic convergence, especially in the late evolutionary stage. The self-adaptive strategy is realized by building a population entropy computing module to estimate the region containing the global optimal solution. In this way, the exploring of the algorithm becomes more purposeful and sufficient, and the performance is improved. The experimental results show that the algorithm is valid and advanced.

Keywords—Evolutionary computing, Mind Evolutionary Algorithm (MEA), information entropy

I. INTRODUCTION

The Mind Evolutionary Algorithm (MEA) simulates the evolutionary process of the human mind: 'similartax' and 'dissimilation' operators are presented, and monolayer population evolution is improved to multilayer population evolution. As a new type of evolutionary algorithm (EA), MEA inherits the basic properties of traditional EAs; it extracts the virtues of GA and ES and overcomes their disadvantages. Since memory and a directional learning mechanism are introduced, the search efficiency is increased greatly [1]. MEA has been successfully applied to solve some practical problems [2, 3], many scholars take it as their research object, and some delightful achievements have been made [4]. As studied, however, it usually takes such algorithms a short time to find a good solution but an oppositely long time to find the best solution, which means that in the late evolutionary stage the convergence deteriorates. We try to solve this problem by introducing information entropy, and hope to improve other aspects of MEA's performance at the same time.

II. THE POPULATION ENTROPY OF MEA

A. Basic MEA

The whole solution space is divided into some subspaces by MEA, and the 'similartax' and 'dissimilation' operators are used to realize the evolutionary operation. The similartax operator completes the competition inside a subpopulation on a subspace and realizes local optimization; the dissimilation operator completes the competition between subpopulations on the whole solution space and realizes global optimization. As shown in Figure 1, the evolution begins with a stochastic population; then the similartax operator is used for local search while the dissimilation operator guarantees global coverage — a 'widely explore' and 'accurately scan' hand-in-hand search mode.

At the beginning, individuals are randomly scattered in the solution space and their scores are calculated respectively: those with higher scores are kept as the original winners of the superior subpopulations, and those with lower scores become the winners of the temporary subpopulations. When 'learning' starts, individuals are produced by a normal distribution with some variance around each winner, and the one with the highest score becomes the new winner, replacing the old one in the following steps. In parallel, competition occurs between the winners of the subpopulations: some with lower scores are washed out and replaced by new ones scattered at random in the solution space, to keep the global searching ability of the population. In this way, prematurity is avoided and the population evolves toward the global solution.

[Figure 1. Sketch map for the movement of populations]

B. The Population Entropy

When MEA is used to solve an optimization problem, the evolution is a process in which the search region becomes clearer and clearer. The optimization process is quite similar to the communication process in information theory: the former introduces negative entropy into the system just as the latter does. Therefore, the evolution of a population is a process of entropy reduction, and the population entropy can be used to reflect the evolution state.

Based on the definition of information entropy, the entropy reduction process can be described as follows: at the beginning of the evolution, the population is absolutely stochastic and the least knowledge of the optimal solution is available, so the information entropy is at its biggest; then, with the evolutionary operators running, the population moves toward the optimal region, the individuals become more centralized, and the entropy reduces accordingly; in the late stage of the evolution, the population approaches the optimal solution without limit and the individuals are centralized at it, which means that the entropy is at its smallest.

Definition 1. The population entropy, as a measurement of the denseness of the individuals in a population, is defined by the following formula:

\[ H_P = -\sum_{i=1}^{M} p_i \ln p_i \tag{1} \]

Suppose the solution region is cut into M areas, marked as z1, z2, …, zM. It is uncertain which area a given individual falls into; if the probability that an individual x drawn from the population comes from area zi is pi, then \( \sum_{i=1}^{M} p_i = 1 \) and \( p_i \ge 0 \), i = 1, 2, …, M. The minimum and maximum values of H_P are 0 and ln M.

C. The Population Entropy Estimation

The entropy of a population, H_P(t), cannot be computed exactly before the optimal solution is obtained, but it can be estimated, and here a strategy is proposed to accomplish this task. Suppose f_min and f_max respectively describe the minimum and maximum fitness values that the MEA algorithm has explored so far, and let λ = f_max − f_min. The solution region is then plotted into M areas [f_min + αi·λ, f_min + αi+1·λ], i = 1, 2, …, M, where α1 = 0, αM+1 = 1 + ε, αi > αi−1, 0 ≤ αi ≤ 1, and ε is a small positive number. The number of superior subpopulations is NS, and we let M = NS. Drawing each individual x from the population and finding out which area it belongs to, let mi be the number of individuals from the i-th area; the probability that an individual comes from the i-th area can then be approximately calculated as

\[ \hat{p}_i = \frac{m_i}{N}, \quad i = 1, 2, \ldots, M \tag{2} \]

where N is the population size. Accordingly, the estimate of the population entropy can be computed:

\[ \hat{H}_P = -\sum_{i=1}^{M} \hat{p}_i \ln \hat{p}_i \tag{3} \]

III. SELF-ADAPTIVE MEA BASED ON POPULATION ENTROPY (PEMEA)

As analyzed above, in MEA the variance σ is a key factor controlling the population's search zone, so the self-adjustment of σ can realize the self-adaptation of the algorithm, and MEA can be improved by introducing the entropy to adjust the search region: once the population entropy is calculated, it is not difficult to estimate the distance between the population and the optimal solution. Accordingly, a new search strategy can be proposed, and formula (4) describes the parameter self-adjusting strategy of the new algorithm:

\[ \sigma_i^2 = C \cdot \hat{H}_P(i) \tag{4} \]

where C = 1/ln M and σi is the similartax variance of the i-th generation. In the similartax operation, new individuals are then produced around each winner by the normal distribution of formula (5):

\[ x_k = N(x_{k-1}, \sigma_k) = x_{k-1} + \sigma_k \cdot r \tag{5} \]

where r is a random numeral, x_k is the new (or son) individual of the k-th generation, x_{k−1} is the old (or father) individual of the (k−1)-th generation, and σ_k is the similartax variance.

Based on this analysis, the steps of the PEMEA algorithm are described as follows:
Step 1. Set the evolutionary parameters: population size, subpopulation size and the conditions for ending.
Step 2. Initialization: scatter the individuals composing the initial population over the whole solution space.
Step 3. Similartax operation: new individuals are produced by a normal distribution around each winner, where σ_k is calculated according to formula (4) and the material calculation follows formula (5); the individual with the highest score becomes the new winner, replacing the old one in the following steps.
Step 4. Dissimilation operation: realize the global optimization; some subpopulations with lower scores are washed out and replaced by new ones scattered at random in the solution space.
Step 5. Conditions for ending: if the end conditions are fulfilled, the algorithm ends, turn to Step 6; else repeat Step 3 and Step 4.
Step 6. Output the evolutionary result.

IV. SIMULATION EXPERIMENT

Three classic numerical optimization test functions are applied, and a stochastic contrast experiment between MEA and PEMEA is done to analyze how the algorithmic performance is improved.

• De Jong's function 1: \( f_1(x) = \sum_{i=1}^{3} x_i^2, \quad x_i \in [-5.12, 5.12] \)
• Rosenbrock: \( f_2(x) = 100(x_1^2 - x_2)^2 + (1 - x_1)^2, \quad x_i \in [-2.048, 2.048] \)
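To make the estimation concrete, here is a small Python sketch of formulas (2)–(5): it bins the population's fitness values into M areas, estimates the population entropy, and derives the similartax variance used to perturb a winner. Equally spaced αi, a one-dimensional individual, and the function names are all assumptions for illustration; this is a reconstruction of the strategy described above, not the authors' code.

import math, random

def population_entropy(fitness, m):
    # Estimate H_P by binning the fitness values into m equal areas
    # of [f_min, f_max] (formulas (2) and (3); uniform alpha_i assumed).
    f_min, f_max = min(fitness), max(fitness)
    lam = (f_max - f_min) or 1e-12            # lambda = f_max - f_min
    counts = [0] * m
    for f in fitness:
        i = min(int((f - f_min) / lam * m), m - 1)
        counts[i] += 1
    n = len(fitness)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def similartax(winner, fitness, m):
    # Produce one new individual around the winner with the
    # entropy-adapted variance of formulas (4) and (5).
    sigma = math.sqrt(population_entropy(fitness, m) / math.log(m))
    return winner + sigma * random.gauss(0.0, 1.0)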

4. 1162-1165. 29(2): 308-311 [2] [3] [4] Parameter Population size Sub-population size Maximum generation number variance σ k = C1σ k −1 ˆ σ k = 2 C 2 ⋅ H k (i ) TABLE II. EXPERIMENTAL RESULT Test function De Jong's function 1 Rosenbrock Sharffer’s function 6 algorithm Convergence ratio (%) Average convergence generation number Optimal theory solution Optimal experimental solution MEA 100 PEMEA 100 MEA 100 PEMEA 100 MEA 100 PEMEA 100 16 6 39 13 20 11 Figure 2. 0 1.100] 2 [1 + 0.Sharffer’s F6 2 [sin 2 ( x12 + x 2 )1 / 2 − 0. pp. According to the analysis on experimental result.5] f 3 ( x ) = 0. 2000.C (No. ACKNOWLEDGMENT This work was supported by a grant from the Specialized Research Fund for the Doctoral Program of Higher Education. In one word. June. P.001( x12 + x 2 )]2 The parameter value is set as shown in table 1 and the algorithms are respectively run 100 times to optimize test function 1 to 3. The results are showed by figure1-4.78e6 0 2. System Engineering and Electronics.R. Lijun Wei. the performance of MEA is improved largely in PEMEA. If the error between the evolutionary result and optimal solution less then 0. 2006112005)and Visiting scholar foundation of Shanxi province. TABLE I. Thereby the advantage of the new algorithm on parameter adjustment is test. Optimization of Rosenbrock function with PEMEA and MEA.0001. convergence speed and precision of MEA greatly and PEMEA is an efficient and stable optimization algorithm.5 − xi ∈ [−100. but the speed of the former is apparently faster than the latter with absolute precision and arrive theory optimal solution indeed.” Proc. it is accepted that algorithm convergent. “Analysis on the Convergence of MEBML Based on Mind Evolution. 6 and 7. of 6th International Symposium on Test and Measurement. pp.” In Proc. it is the off-line performance but not the on-line performance is improved greatly because PEMEA take one best individual as its representative and ignore the average state. CONCLUSIONS The idea of information entropy in information theory is introduced into the MEA operator design and a new algorithm is proposed as PEMEA. REFERENCES [1] Chenyi Sun.2004-18). 2007.C(No. 1998. P. Yan Sun. 355-359. Chuanlong Wang and Chengyi Shun. otherwise not convergent.. Sept. Run the algorithms to optimize function 4-5 with different coefficient C1 to analyze the affect of the coefficient to MEA. “Multi-model Parallel Tuning for Two Degree-of-Freedom PID Parameters. it can be easily figured out that both PEMEA and MEA has good convergence. CONTROL PARAMETERS OF THE ALGORITHMS MEA N=50 M=10 G=100 PEMEA N=50 M=10 G=100 • V.05e17 1 1 1 193 . Gang Xie and Keming Xie. pp. From table2. Convergence Analysis of Mind Evolutionary Algorithm Based on Sequence Model. “Mind-Evolutionary-based Machine Learning: Framework and the Implementation of Optimization. Keming Xie , Yiuxia Qiu.64e59 1. Of IEEE INES’98. As shown in figure 3. It can be find out from figure 2 and figure 5 that the performance of MEA is precarious because affected by parameter badly but PEMEA overcome the shortcoming. Yuxia Qiu.R.17e7 5. it can be concluded that the research strategy based on population entropy improved the performance including robust. 838-842.” Journal of Computer Research and Development. The experimental result is recorded in table 2. 2005.

[Figure 3. Off-line performance of PEMEA and MEA with the Rosenbrock function]
[Figure 4. On-line performance of PEMEA and MEA with the Rosenbrock function]
[Figure 5. Optimization of Sharffer's F6 with PEMEA and MEA]
[Figure 6. Off-line performance of PEMEA and MEA with Sharffer's F6]
[Figure 7. On-line performance of PEMEA and MEA with Sharffer's F6]

An Encapsulation Structure and Description Specification for Application Level Software Components

Jin Guojie, Yin Baolin
State Key Laboratory of Software Development Environment
Beihang University, School of Computer Science, 100083 Beijing, China
{jinguojie, yin}@nlsde.buaa.edu.cn

Abstract—Traditional research on software components mainly aims at components at the level of program and source code. In this paper a novel component model named Application Level Component (ALC) is proposed. The inherent deficiency of program level components is analyzed, followed by an observation of the unique requirements of application level components, and then a reference model in terms of the encapsulation structure and description specification is presented. The main advantage of the model is that it propels the integration and reuse of legacy software resources by transforming currently existing applications into application level components, with wrappers providing standard interfaces. The generalization and applicability of the model are evaluated by real cases: in the end, we established several demonstrations in real practice to support the conclusion that the model applies to a large scope of domain engineering and can be made the basis of application level component-based design.

Keywords—software component, application level component, reference model, specification

I. INTRODUCTION

Component technology is of growing importance in software engineering and is considered a critical measure for enhancing the reuse ability of software modules. Software components can be classified into two main categories according to their functional granularity: components at the program code level, and others at the application level [1]. Traditional software component research mainly aims at the former category, resulting in the formation of a variety of software standards in the business world, including CORBA, COM/COM+, EJB, etc. [5][7][8]. In all of these standards the component granularity is limited to fundamental software elements like code segments, functions, and objects/classes, which can hardly accommodate a relatively integral business feature. Long-term application practice indicates that the reuse ability of such program level components is limited to a low level because of their inherent deficiencies, as follows:

(1) The component granularity is too fine to undertake a complete item of business function, like the orders management task in an ERP system, which is composed of a large number of small code level components working together and relying on each other. This means that there is a high coupling degree among the code level components, due to which the coupling degree between the components and their runtime environments is hard to lower. Therefore, for a given component, its reuse degree in a new business environment is significantly low, a situation violating the initial objective of component engineering [3][7]. This is currently, as we observed, a bottleneck with no effective solution.

(2) The program level components are developed mainly by skillful programmers with a specific type of programming language, which has a strong dependency on the developing environment and operating platform. As technologies and tools evolve rapidly, few components developed today may remain reusable after a certain number of years [4], thereby limiting the reuse ability of the component.

Application level components, which contain a comparatively integral granularity of business function, are more suitable for larger scales of application. An application level component is defined as a software module at the granularity of a standalone executable software application, consisting of an integral item of business function [7][8]. Compared with program level components, application level components can embody a relatively more complete functional granularity, so the coupling degree among the components becomes looser, which is more suitable for domain modelling, and the dependency between the components and their runtime environments can reasonably be lowered to a certain extent. In this paper the specialty and characteristics of application level components are first explored, and then a novel reference model is proposed.

II. SURVEY ABOUT APPLICATION LEVEL COMPONENTS

As a specific subclass of software components, application level components should meet the requirements that all software components have in common, which involve a structural model, an interface specification, and a description mechanism to represent the necessary information about the component's function and usage [1][4]. Although application level components show advantages in many aspects, as discussed above, there is still a lack of research attention towards them: in the current literature only a few related concept keywords can be found, like "software level components" and "components of complete applications" [2], and there is still a lack of a reference model and description specification in this field; therefore a general reference model should be built to model the components falling into this category. First of all, a survey of ALCs is done by comparing them with traditional software components, and the unique characteristics of ALCs are summarized as follows:

(1) The core issue of the ALC model is to provide the ability to encapsulate and reuse currently existing software and systems. Since large numbers of legacy software systems in business practice have to be abandoned only because they fail to keep up with changeful external requirements, the reuse of the numerous functional modules inside these systems is a key path to reducing duplicate coding and investment waste. From the ALC point of view, applications can be on any scale, from single executable program files, like a primitive "Hello world" printing routine, to complex software suites, like Microsoft Office or an OA/ERP system.

(2) An interprocess mechanism for communication among ALCs is needed. Traditional software components are primarily defined as functions or objects, so only function calls and object requests are needed for the communication of program level components. By contrast, ALCs are standalone executable programs whose life cycle starts from the creation of the corresponding process instance and ends when the process halts; hence the ALCs within a system should communicate with each other in a way that supports interprocess data transfer, and there must be specific instructions to control the startup and termination of an ALC through its interface. This is an essential difference between traditional components and ALCs.

(3) The invocation and assembling style of an ALC has a close link with the business requirements which it undertakes, as ALCs tend to be formed at a higher granularity with more complete domain logic.

Conclusions can be drawn from this comparison that there are essential differences between traditional components and ALCs in many factors, from the internal structure to the interface requirements.

For a unique component, these common requirements include an interface specification and a description mechanism to represent the necessary information about the component's function and usage [1][4]. On this basis, a survey of the ALC is done by comparing it with traditional software components. Conclusions can be drawn from the comparison that there are essential differences between traditional components and ALCs in many factors, from the internal structure to the interface requirements, so ALCs have many unique characteristics which should be deliberately taken into account while designing the reference model. They are summarized as follows.

(1) The core issue of the ALC model is to provide the ability to encapsulate and reuse currently existing software and systems. A large number of legacy software systems in business practice have to be abandoned only because they fail to keep following the changeful external requirements, so the reuse of the numerous functional modules inside these systems is a key path to reduce duplicate coding and investment waste.

(2) An interprocess mechanism for communication among ALCs is needed. Traditional software components are primarily defined as functions or objects, so only function calls and object requests are needed for the communication of program level components. ALCs, however, are executable programs whose life cycle starts from the creation of the corresponding process instance and ends when the process halts; this is an essential difference between traditional components and ALCs. Therefore, the ALCs within a system should communicate with each other in a way that supports interprocess data transfer, and specific instructions must be provided through the interface to control the startup and termination of an ALC. Other necessary instructions besides them are supplied in the same way.

(3) The invocation and assembling style of an ALC has a close link with the business requirements which it undertakes. As ALCs tend to be formed at a higher granularity with more complete domain logic, a conclusion came out after an intensive study that numerous documents and electronic forms in business practice, which contain business information of great use, have been left up for years and years. To make the model more applicable and acceptable, these business elements must be covered.

Considering these characteristics, a novel ALC oriented reference model was developed and is still being improved.

III. REFERENCE MODEL

The design target of the model is to represent a set of necessary description information about ALCs and to facilitate the reuse of legacy software systems through the integration and encapsulation of business elements such as applications and forms with a uniform interface standard. The structure of the reference model is shown in Fig. 1, and the main components are as follows: (1) the applications and forms to be encapsulated; (2) a description file representing the description information; (3) a parser program administrating the execution; (4) a data area maintaining the runtime data environment; (5) the component interfaces exposed to callers.

[Figure 1. Structural Model of Application Level Component: applications A, B and C with forms A, B and C (application C behind an application wrapper), plus the component parser, the component description file and the data area, exposed through the description, data and control interfaces.]

A. Applications and Forms

To encapsulate currently existing software and systems, from the ALC point of view they can be treated as joint entities of business tools, that is, applications and forms. As standalone executable programs, applications can be on any scale, from single executable program files to complex software suites like Microsoft Office or an OA/ERP system; the property Applications is certainly a significant part of the model. The term Forms is used to stand for all sorts of electronic files containing information about visual layouts and manual interactions for a given set of business content; Microsoft Excel spreadsheets (.xls) and Word documents (.doc) are all of this category. Forms are usually used as a companion of a dedicated type of application, and the two work collaboratively to perform business tasks, where the form is mainly responsible for the visual representation of the business data and the application handles the parsing and loading of the form as well as the interactions with the operators. For example, an Excel spreadsheet containing records of staff payment can be parsed and loaded by the main program of Excel from Microsoft Office, so the records can be shown on the screen display and then be viewed and modified through the graphical interface. After manipulation, the user's operation result is saved in the spreadsheet and can then be extracted out.

On this basis, a basic run unit can be defined as an application plus a corresponding form, like applications A and B in Fig. 1. Considering more complicated business tasks involving multiple applications and forms, several run units can be encapsulated in one ALC and be routed according to a specified property named Sequence; the run units then work in a collaborative way to accomplish the business functions. For a more flexible usage, advanced flow structures like choices and loops can be used to indicate the execution sequence of the run units.
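To make the run unit notion concrete, the following minimal Java sketch (the class and path names are hypothetical and not part of the original paper) treats a run unit as an application launched against its companion form, in the spirit of the Excel example above.

import java.io.IOException;

// A run unit pairs one application with one form (illustrative sketch).
public class RunUnit {
    private final String applicationPath; // e.g. a wrapped office application
    private final String formPath;        // e.g. a .xls or .doc file

    public RunUnit(String applicationPath, String formPath) {
        this.applicationPath = applicationPath;
        this.formPath = formPath;
    }

    // Launch the application on the form and block until the operator finishes;
    // the saved form then carries the operation result.
    public int execute() throws IOException, InterruptedException {
        Process p = new ProcessBuilder(applicationPath, formPath).start();
        return p.waitFor();
    }
}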

B. Component Description File

The description file is a repository for a series of properties representing the necessary information about the component. Beside the items of Applications, Forms and Sequence discussed above, some other properties should also be included; all of them are shown in Table 1. The properties in the description file can be classified into two main categories: the descriptive properties and the data-related properties.

TABLE I. CONTENT OF THE COMPONENT DESCRIPTION FILE

Descriptive properties:
ID - a unique identity code
Name - textuary name
Description - brief information about usage
Type - whether the component is UI-driven
Applications - the executable programs to be encapsulated
Forms - the documents/spreadsheets to be encapsulated
Sequence - the execution route for multiple run units

Data-related properties:
Forms Mapping - data coupling between the component and the inside forms
Constraint Conditions for Form Data - the conditions that should be satisfied after each time of form manipulation
Constraint Conditions for Component Data - the conditions that should be satisfied for the input and output data of the component
Input & Output Data Fields - data fields that the component receives and outputs

The descriptive properties are designed to indicate some generic description information as follows. (1) ID: a unique identity code assigned to each component to ensure uniqueness in the designing and running environments; here the UUID protocol can be combined to meet the demand. (2) Name & Description: properties that simply provide the textuary name and brief information about usage; other materials like versions and creation time can also be put here. (3) Type: indicates whether the component has an interactive user interface. Applications with visual forms and dialogs are usually to be manipulated by staff, while others without any user interface are more likely to execute automatically; components providing dialogs for business data browsing and maintenance naturally fall into the first category, and components doing jobs like batch printing or email sending usually refer to the latter. This property gives a strong hint for determining the appropriate running environment of the component. (4) Forms Mapping: indicates the mapping relation between the data fields exposed in the component interface and the data within the inside forms, which describes the rules about how to insert the component data into each form and extract them vice versa. It is defined as a 3-tuple {component data name, form name, data name in the form}. (5) Constraint Conditions for Form Data & Component Data: a set of logic expressions to keep validity during runtime. They describe the constraint conditions to be judged when each form is submitted or when the component ends up. The expressions are composed of data fields representing the component's current state.

The data-related properties are used to describe the input/output data fields of the component. Each data field is designed as a 2-tuple {Name, Type}, where Name is the data field's identification and Type is the data type, including {string, int, float, datetime, recordset}. To enhance the ALC's power of data processing, the "recordset" type is designated to abstract a typical set of relational data records in business practice.

To facilitate the encapsulation of existing applications, a common interface standard should be established for all the applications to be encapsulated. Considering the simplest case, the common tasks that the applications carry out in each run unit can be described as: (1) fill in the form template with initial business data (-write); (2) load and display the working form for browsing and modification (exec); (3) as the user ends operating, extract data from the processed form and output them with a standard style (-read). Therefore, the calling interface of the applications can be summarized as a collection of command params {-write, exec, -read}. Any program in accordance with this standard can be directly integrated into a component. For the applications and legacy systems incompatible with the interface standard, a type of wrapper technology [8] can be used to provide an external layer called an "application wrapper", which dominates the program internally and provides an external interface that meets the demands (shown as application C and its wrapper in Fig. 1). In this way, a broad scale of legacy systems can be transformed into standard ALCs without touching the inside, and then be reused in new business environments.

C. Component Parser

To administrate the run units in an ALC, a specific module is needed to take charge of the scheduling of the run units according to the description file. This is the element called the component parser. Other standard business-independent tasks like data I/O and interface handling can all be undertaken by this module. It works as the call entry of the ALC and conducts the running procedure during the component's whole life cycle as follows: (1) as the component is launching, read in the input data as its original working data; (2) during runtime, schedule all run units according to the logic expressions of the property Sequence; in each step, the form written with the current component data is delivered as a command parameter to the application for manipulation, and when the application ends, the parser reads out data from the form and saves them as intermediate working data to be processed in the next run unit; (3) handle runtime communication with the caller; (4) to ensure validity, evaluate the expressions of Constraint Conditions for Form Data after each run unit step, so the component itself can learn of erroneous situations timely and do error handling or emit exceptions to the caller; (5) when the component ends up, export the working results, which can be manipulated by the caller. If there is no satisfied expression for the next step, the parser outputs the current component data and exits.
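As an illustration of the {-write, exec, -read} calling convention described above, the sketch below drives one encapsulated application through a full run unit step; the exact argument spellings beyond the three parameter names, and the file layout, are assumptions of this sketch rather than something the paper fixes.

import java.io.IOException;

public class RunUnitStep {
    // Drives one application through the three common tasks of a run unit.
    static void runStep(String app, String form, String initData, String outData)
            throws IOException, InterruptedException {
        exec(app, "-write", form, initData); // fill the form template with initial data
        exec(app, "exec", form);             // display the form for manual operation
        exec(app, "-read", form, outData);   // extract the result in a standard style
    }

    static void exec(String... cmd) throws IOException, InterruptedException {
        if (new ProcessBuilder(cmd).start().waitFor() != 0)
            throw new IOException("step failed: " + String.join(" ", cmd));
    }
}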

The above tasks of the component parser are all based on the content of the description file, which is absolutely business-independent, so the parser itself is kept away from any business logic and gains a common sense in all ALCs.

The property Sequence is formed as a collection of expressions like <Source [condition] => Destination>, where: (1) Source is the current run unit, represented as a pair "Application:Form"; (2) Condition is a logic expression indicating the routing path and flow condition between every two run units; (3) Destination is the run unit to be executed next when the condition is satisfied. The condition is composed of the component data fields, which can be evaluated at run time.

D. Data Area

The data area is a region allocated in memory, created and maintained by the parser. It stores all the intermediate working data between multiple run units. As the component launches, the parser allocates a piece of memory and fills it with all the original input data. During runtime, the output of each run unit is taken over by the parser and then updated into the data area. Each item in the region is a 2-tuple {DataName, DataValue}, where DataName corresponds to a component data field's name and DataValue contains the current value. When the component ends running, the values kept in the data area are definitely the final work result.

Furthermore, a particular feature is assigned to the model for dumping the intermediate running state to a disk file, which can later be loaded to resume the previous state, e.g. after a data crash caused by an unexpected system halt. This feature, carried out by the component parser as a standard function, can enable the operators to save multiple versions of work effort at will and revert to one in some situation. Here a platform-independent XML file format is used to facilitate data exchange, which simply contains the data names and values in an intuitionistic format. The data file is assigned as the parameter of the instructions --load-data and --save-data.

E. Component Interfaces

The component interfaces are exclusively the only entry exposed to the users for calling and controlling. In this way, the internal details can all be hidden from the outside, thus giving the component a "black-box" feature. The interfaces of the ALC model are defined as a set of instructions supporting the minimum collection of calling and integration requirements, which are necessary for use. Classified by functional type, there are three categories of instructions, shown in Table 2: (1) Descriptive instructions: to query the basic description properties of the component. (2) Data-related instructions: to provide a uniform rule to handle data I/O between the component and its caller. (3) Controlling instructions: to provide methods for controlling startup and termination of the component. Generally, the descriptive instructions are mainly used in the system assembling stage, for the system developer to identify the usage of the component, while the data-related and controlling instructions are mainly used in the system running stage to invoke the proper execution behavior of the component.

TABLE II. CONTENT OF ALC COMPONENT INTERFACE

Descriptive instructions:
--get-id
--get-name
--get-description
--get-type
--get-input-data / --get-output-data

Data-related instructions:
--load-data <data_filename>
--save-data <data_filename>

Controlling instructions:
--exec <input_datafile> <output_datafile>
--terminate
--dump-image <image_filename>
--load-image <image_filename>

IV. SPECIFICATION OF THE REFERENCE MODEL

A concrete description specification of the ALC model is presented as the UML class diagram shown in Fig. 2. Generally, an ALC is abstracted as a core class ApplicationLevelComponent, and each of the structural elements and interfaces in the model is abstracted as a class in the diagram. The class Sequence is used to describe the execution steps of run units. Other description elements include the class Interface, describing the collection of interface instructions; InputData and OutputData, describing the input/output data fields; FormsMapping, describing the data coupling between the forms and the component; and the classes FormConstraintCondition and Input/OutputDataConstraintCondition, describing the various types of conditions in the model. All these classes and their relations in the diagram form an explicit description specification of the ALC reference model.

[Figure 2. UML class diagram of the ALC description specification.]

V. EVALUATION

The evaluation is done by the development of several typical business systems fully assembled from ALC components built with the method proposed here. Table 3 shows the description file of the "loan forms filling component" in the newly developed Housing Fund Reviewing System (HFRS). The component contains three forms to be filled in by the fund applicants, including a loan contract form (Contract.doc), a credit rating report (Credit.doc), and a house mortgage evaluating report (Eval.doc). Before the development of HFRS, the staff had for a long time manually processed the applications using emails and desktop office suites like Microsoft Word, so a collection of standard business forms had been accumulated.
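A caller drives an ALC only through the instruction set of Table 2. The sketch below shows a plausible assembling-stage and running-stage interaction; the instruction spellings come from the table, while the component executable name is hypothetical.

import java.io.IOException;

public class AlcCaller {
    public static void main(String[] args) throws IOException, InterruptedException {
        String alc = "loan-forms-filling-component"; // hypothetical executable name
        run(alc, "--get-id");                        // descriptive: assembling stage
        run(alc, "--get-type");
        run(alc, "--exec", "in.xml", "out.xml");     // controlling: run with data files
    }

    static void run(String... cmd) throws IOException, InterruptedException {
        new ProcessBuilder(cmd).start().waitFor();   // the ALC stays a black box
    }
}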

TABLE III. Software Engineering Conference.doc. A novel method is proposed to encapsulate and reuse current existing software and system resources. and the forms are encapsulated and transformed into ALCs using wrapper technology at low cost.doc. Wang YH. L. 16(8) Yang L. which is a way to promote the reuse level of software modules. mswordwrapper. “Component-Based Software with Beans and ActiveX”.doc]. AppliciantName. “A Component Based Geo-Workflow Framework: A Discussion on Methodological Issues”. string Address. Address = [Contract. 32nd EUROMICRO Conference on Software Engineering and Advanced Applications.doc]. Name = [Credit. Bastos.. datetime AppDate. SunSoft..doc => End FormConstraintConditions: [Contract. the development of HFRS using traditional method would cost approximately a minimum workload of 1.exe”. CONCLUSION AND FUTURE WORK Figure 3. The model is simple and flexible to use by providing a standard framework clearly figuring out the business specific properties to be focused on.B.html Redolfi.AppliciantName == [Credit. “Component Based Development and Object Modelling”. and M. This is just the situation that the ALC model applies to. R. “Software component specification : a study in perspective of component selection and reuse”. 2005 Geisterfer. professional titles declaring and management systems.doc]. P. Eval.exe Forms: Contract. 5/11/2008) Type: Manual InputData: OutputData: string Name. msword-wrapper. Ribeiro. 2000 Liu Y. string SN. 2006 Graubmann. A wrapper for Microsoft Word is then be developed.exe:Eval. …. Sudha Ponnusamy. The HFRS. Sex = [Credit. 1997. SN = [Credit. Wong..doc]. Montgomery. “Storing and Retrieving Software Components: A Component Description Manager”. “Semantic Annotation of Software Components”. is constructed by totally 24 ALCs. Hemesath. Applications: msword-wrapper.doc FormsMapping: Name = [Contract. string HouseSN. It got a efficiency improvement by 45%. Espindola. SN. Spagnoli.doc]. UI Loan Contract Form Figure 4. http://java. P. msword-wrapper. so a collection of standard business forms had been accumulated then. of 38th Annual Hawaii International Conference on System Sciences.exe:Contract. USA. AppliciantName. . K.. and Roshchin. Wang LF. Gao Y. Considering that the wrappers are reusable resources commonly used in business domain. E.doc].J. string Sex. HouseSN = [Eval. AppliciantAddress. International Conference on Commercial-off-the-Shelf (COTS)-Based Software Systems. VI. As a statistic. ….exe: Eval. M. AppliciantSex.exe: Credit.doc => msword-wrapper. Credit. … Sequence: Begin => msword-wrapper. “Combining Mechanism of a Domain Component-Based Architecture Based on Separate Software Degree Components”.application using emails and some desktop office suites like Microsoft Word. and Ghosh. HouseSN.doc].doc]. 3 and 4).M.doc].doc. UI of House Mortgage Evaluating Report The reference model and description specification for application level components have provided the expected advantages in evaluation. AppliciantName == [Eval.B.exe: Credit.com/javastation/whitepapers/javabeans/javabean_ch1.AppliciantName. D.J.5 man-month. while with our method was done in 25 days (including the development of several wrappers for about 10 days). string Phone. 23(9) ID: 5588ce2e-0808-11dc-9a1b-00a0c90a8199 Name: Loan Forms Filling Component Description: ALC for filling loan forms by applicants(V1. … InputDataConstraintConditions: OutputDataConstraintConditions: Name != NULL && SN != NULL && HouseSN != NULL && …… [4] [5] [6] [7] [8] 199 .doc . 
TABLE III. DESCRIPTION FILE OF LOAN FORMS FILLING COMPONENT

ID: 5588ce2e-0808-11dc-9a1b-00a0c90a8199
Name: Loan Forms Filling Component
Description: ALC for filling loan forms by applicants (V1.0, 5/11/2008)
Type: Manual
Applications: msword-wrapper.exe
Forms: Contract.doc, Credit.doc, Eval.doc
Sequence:
Begin => msword-wrapper.exe:Contract.doc
msword-wrapper.exe:Contract.doc => msword-wrapper.exe:Credit.doc
msword-wrapper.exe:Credit.doc => msword-wrapper.exe:Eval.doc
msword-wrapper.exe:Eval.doc => End
FormsMapping: Name = [Contract.doc].AppliciantName; Name = [Credit.doc].AppliciantName; Address = [Contract.doc].AppliciantAddress; Sex = [Credit.doc].AppliciantSex; SN = [Credit.doc].…; HouseSN = [Eval.doc].…; …
FormConstraintConditions: [Contract.doc].AppliciantName == [Credit.doc].AppliciantName; [Contract.doc].AppliciantName == [Eval.doc].AppliciantName; …
InputData: (none)
OutputData: string Name, string SN, string Sex, string Address, string Phone, string HouseSN, string HouseAddress, datetime AppDate, …
InputDataConstraintConditions: (none)
OutputDataConstraintConditions: Name != NULL && SN != NULL && HouseSN != NULL && …

REFERENCES
[1] Short, K. "Component Based Development and Object Modelling." Texas Instruments Software, 1997.
[2] "Component-Based Software with Beans and ActiveX." SunSoft, http://java.sun.com/javastation/whitepapers/javabeans/javabean_ch1.html
[3] Liu Y., Wu L. "Combining Mechanism of a Domain Component-Based Architecture Based on Separate Software Degree Components." Mini-micro System, 2002, 23(9).
[4] Meling, R., Montgomery, E.J., Sudha Ponnusamy, P., Wong, E.B., Mehandjiska, D. "Storing and Retrieving Software Components: A Component Description Manager." Software Engineering Conference, 2000.
[5] Redolfi, G., Spagnoli, L., Hemesath, P., Bastos, R.M., Ribeiro, M.B., Cristal, M., Espindola, A. "A Reference Model for Reusable Components Description." Proc. of 38th Annual Hawaii International Conference on System Sciences, 2005.
[6] Geisterfer, C.J.M. and Ghosh, S. "Software component specification: a study in perspective of component selection and reuse." International Conference on Commercial-off-the-Shelf (COTS)-Based Software Systems, 2006.
[7] Graubmann, P. and Roshchin, M. "Semantic Annotation of Software Components." 32nd EUROMICRO Conference on Software Engineering and Advanced Applications, 2006.
[8] Yang L., Wang YH. "A Component Based Geo-Workflow Framework: A Discussion on Methodological Issues." Journal of Software, 2005, 16(8).

Fault Detection and Diagnosis of Continuous Process Based on Multiblock Principal Component Analysis

Libo Bie, Xiangdong Wang
School of Information Science and Engineering, Shenyang University of Technology, Shenyang, 110178, China
bielibo81758@yahoo.com.cn, xdwang@sut.edu.cn

Abstract
The fault detection and diagnosis of continuous processes is very important for production safety and product quality. In this paper, a novel continuous process fault detection and diagnosis technique based on MBPCA and block contribution [5] is proposed. The confidence limit of the SPE statistic in the integral PCA model is used to detect a fault, and the block contribution and variable contribution plots are applied to diagnose the fault: when a fault is detected, first judge which block the fault is located in and then determine the fault position within that block, so the fault diagnosis ability is improved. The simulations on the Tennessee Eastman process show that the proposed method can not only detect a fault quickly but also find the fault location exactly.

Keywords: fault detection, fault diagnosis, principal component analysis, multiblock principal component analysis, block contribution

1. Introduction

On-line monitoring of continuous process performance and fault diagnosis are extremely important for plant safety and good product quality. In the last decade, many approaches have been applied to this field, which can be classified into three categories: methods based on a principle model, methods based on knowledge, and data-driven methods. For the model method, many factors, such as high process nonlinearity, high dimensionality, and the complexity of a process, can make it very difficult to develop; for the knowledge method, the lack of knowledge may confine its application. As an alternative, the data-driven method, which only needs historical data of the normal operation condition and does not need to know much about the process mechanism and exact process model, has attracted much interest from chemical engineers. One of the data-driven methods which has obtained widespread availability is principal component analysis (PCA) [1]. Because of its favorable features, PCA has attracted much attention from chemical researchers for process monitoring. The main point of the approach is to use PCA to compress normal process data and extract information by projecting the data onto a low dimensional score space; the method can handle high dimensional and correlated process variables. The confidence limits of the Hotelling T2 and SPE statistics in those subspaces can be used to detect a fault, and the variable contribution plot was introduced for fault diagnosis. Simulation results show these methods are simple and powerful [2].

PCA is powerful in fault detection; however, it has difficulties in diagnosing faults correctly in a complex process, especially when the number of monitoring variables is large, and the fault diagnosis based on the variable contribution plot in the PCA method may then make mistakes. To enhance the PCA model's explanation and fault diagnosis ability, researchers [3,4] proposed the approach of multiblock principal component analysis (MBPCA) or hierarchical principal component analysis (HPCA). The key idea they proposed decomposes the large-scale system into function-independent or site-independent blocks. In this paper, the approach of MBPCA, combined with block contribution and variable contribution, is developed to monitor the process and diagnose faults: the integral PCA is used to detect the fault, and the block contribution and variable contributions are used to diagnose it.

The rest of the paper is organized as follows. In Section 2, the concept of PCA is introduced. In Section 3, the multiblock principal component analysis (MBPCA) is applied for fault detection and diagnosis in continuous processes. The method is demonstrated on the Tennessee Eastman process in Section 4.

2. Principal Component Analysis (PCA)

As a data-driven method, PCA has been used successfully for detecting faults while handling highly correlated variables. The essence of PCA is projecting the process data onto a principal component space of lower dimensionality to compress the data and extract information. The data matrix $X$ ($n \times m$), which has been collected under normal operation and standardized to zero mean and unit variance, comprises $n$ samples and $m$ process variables. PCA is intended to construct a smaller number of independent variables, which are linear combinations of the original $m$ variables, to reflect the information of the original data matrix as much as possible. The information is measured by variances, and PCA actually rests on the eigenvalue decomposition of the covariance matrix $C$ [6]:

$C = \frac{X^T X}{n-1}$    (1)

$C p_i = \lambda_i p_i$    (2)

where $C$ is the covariance matrix of $X$, $\lambda_i$ is the $i$-th eigenvalue in decreasing order, and $p_i$ is the corresponding orthogonal unit eigenvector of $C$. Thus PCA can transform the matrix $X$ into the following form:

$X = t_1 p_1^T + t_2 p_2^T + \dots + t_m p_m^T = \sum_{i=1}^{m} t_i p_i^T$    (3)

where $p_i$ is named the principal component loading vector and $t_i$ is the corresponding score vector,

$t_i = X p_i$    (4)

PCA aims to explain the main variation of the data matrix $X$ by choosing the proper anterior $k$ principal component score vectors. Supposing $k$ principal components ($k < m$) are selected, the matrix $X$ can be reconstructed as

$X = t_1 p_1^T + t_2 p_2^T + \dots + t_k p_k^T + E = \sum_{i=1}^{k} t_i p_i^T + E$    (5)

where $E$ ($n \times m$) is the residual, the difference between the original data and the data reconstruction with the anterior $k$ principal components, usually describing the noise information.

The number of principal components retained in the PCA model is a critical issue in fault detection. If the number is large, the model will be overparameterized and include noise; if the number is small, it will not effectively capture the information of the original data set and may increase the difficulty of analysis and diagnosis. Many approaches have been proposed for selecting the number of principal components to be retained in a PCA model [7]. In this paper, the simple but widely used cumulative percent variance (CPV) method is adopted, which requires that the cumulative percent variance captured by the first $k$ principal components be bigger than a predetermined limit, typically 65%-90%:

$\mathrm{CPV} = \left( \sum_{i=1}^{k} \lambda_i \Big/ \sum_{i=1}^{m} \lambda_i \right) \times 100\%$    (6)

For conciseness, formula (5) can be rewritten in the following form:

$X = T_k P_k^T + T_e P_e^T$    (7)

where $T_k$ ($n \times k$) and $P_k$ ($m \times k$) are the principal component score and loading matrices, and $T_e$ ($n \times (m-k)$) and $P_e$ ($m \times (m-k)$) are the residual score and loading matrices, respectively.

Having established a PCA model based on normal process data, future behavior can be referenced against this "in-control" model. The comparison can be carried out in the principal component space or the residual space, corresponding to the T2 statistic or the SPE statistic. Suppose a new process sample $x$ is collected. The squared prediction error (SPE), which indicates the amount by which a sample deviates from the model, is defined by

$\mathrm{SPE} = e e^T = (x - \hat{x})(x - \hat{x})^T = x (I - P_k P_k^T) x^T$    (8)

with

$t = x P_k$    (9)

$\hat{x} = t P_k^T$    (10)

$e = x - \hat{x}$    (11)

where $t$, $\hat{x}$, $e$ and $I$ are the score, the prediction value, the prediction error and the unit matrix, respectively. As $P_e P_e^T = I - P_k P_k^T$, formula (8) can be rewritten as

$\mathrm{SPE} = e e^T = x P_e P_e^T P_e P_e^T x^T$    (12)

Under the normal condition, the SPE cannot exceed the confidence limit $Q_{\lim}$; otherwise an abnormal state may appear in the process. Assuming the normal distribution of $x$, the confidence limit $Q_{\lim}$ for the SPE statistic can be calculated using the following formulas:

$Q_{\lim} = \theta_1 \left[ \frac{C_a \sqrt{2\theta_2 h_0^2}}{\theta_1} + 1 + \frac{\theta_2 h_0 (h_0 - 1)}{\theta_1^2} \right]^{1/h_0}$    (13)

$\theta_i = \sum_{j=k+1}^{m} \lambda_j^i \quad (i = 1, 2, 3), \qquad h_0 = 1 - \frac{2\theta_1 \theta_3}{3\theta_2^2}$

where $\lambda_j$ are the residual eigenvalues of the covariance matrix $C$ and $C_a$ is the standard normal deviate corresponding to the upper $a$ percentile.

Fault detection can be carried out by the confidence limit of the SPE or T2 statistic, and in a simple process fault diagnosis can be realized by the variable contribution plot.
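To make the detection statistic concrete, the following Java sketch (plain arrays, no external libraries; the variable names are assumptions of this sketch) computes the SPE of formula (12) for one standardized sample x, given the residual loading matrix Pe of the PCA model.

public class SpeMonitor {
    // SPE = || x Pe Pe^T ||^2 for a 1 x m sample x and an m x (m-k) matrix Pe.
    static double spe(double[] x, double[][] pe) {
        int m = x.length, r = pe[0].length;
        double[] t = new double[r];               // t = x Pe
        for (int j = 0; j < r; j++)
            for (int i = 0; i < m; i++) t[j] += x[i] * pe[i][j];
        double sum = 0.0;
        for (int i = 0; i < m; i++) {             // e_i = (x Pe Pe^T)_i
            double e = 0.0;
            for (int j = 0; j < r; j++) e += t[j] * pe[i][j];
            sum += e * e;
        }
        return sum;                               // compare against Q_lim
    }
}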

3. Fault Detection and Diagnosis Based on MBPCA and Block Contribution

The idea behind contribution plots is simple. When highly correlated variables are monitored, each variable will have a unique impact on the SPE and T2 statistics. The larger the contribution of an individual variable, the more likely it is that the variable is the source of the excursion. The contribution to the SPE of the $j$-th process variable at the $i$-th observation is

$Q_{ij} = e_{ij}^2 = (x_{ij} - \hat{x}_{ij})^2$    (14)

When the SPE of a new sample exceeds the confidence limit of the PCA model, the occurrence of an abnormal situation can be judged, but it cannot be assured where in the process the fault appears. That is the task of fault diagnosis, and in a simple process it can be realized by the variable contribution plot. Although variable contribution plots are effective for diagnosing faults in simple processes, they have difficulties in complex processes: significant contributions may appear across several variables, and the variable contribution plot method cannot determine the exact location of the fault. This section resolves the problem by using the concepts of block contribution and multiblock principal component analysis (MBPCA).

Multiblock principal component analysis (MBPCA), or hierarchical principal component analysis (HPCA), divides the large-scale process into several blocks to enhance the model's ability of explaining and diagnosing. Westerhuis et al. [8] showed that the scores in MBPCA can be calculated directly from the regular PCA. Valle et al. [5] proved that the loadings can also be calculated from the regular PCA algorithm, and pointed out further that the equivalence between multiblock analysis algorithms and regular PCA indicates that the scores and loadings in MBPCA can be obtained by simply grouping the scores and loadings of the regular PCA. Valle et al. also proposed the concept of block contribution to assist fault diagnosis.

3.1 Blocking

Suppose the regular PCA mentioned in Section 2 has been established to get the principal component loading matrix $P_k$ ($m \times k$) and the residual loading matrix $P_e$ ($m \times (m-k)$). Prior knowledge can be used to divide the $m$ variables into $B$ blocks, where the $b$-th block contains $m_b$ variables. The loading matrices $P_k$ and $P_e$ can be parsed accordingly:

$P_k = \begin{bmatrix} P_k^1 \\ \vdots \\ P_k^b \\ \vdots \\ P_k^B \end{bmatrix}, \qquad P_e = \begin{bmatrix} P_e^1 \\ \vdots \\ P_e^b \\ \vdots \\ P_e^B \end{bmatrix}$    (15)

where $P_k^b$ ($m_b \times k$) and $P_e^b$ ($m_b \times (m-k)$) are the principal component loading and residual loading matrices of the $b$-th block, respectively.

3.2 Block SPE statistics

Given a new sample $x$, separate the data vector into $B$ blocks, the $b$-th block being $x_b$ ($1 \times m_b$). As defined in [5], similar to equation (12), the block SPE of the $b$-th block is

$\mathrm{SPE}_b = e_b e_b^T = x_b P_e^b (P_e^b)^T P_e^b (P_e^b)^T x_b^T$    (16)

Note that $P_e^b (P_e^b)^T$ is not an idempotent matrix. Since $\mathrm{SPE}_b$ is a quadratic form of a multivariate normal vector, it can be proved that $\mathrm{SPE}_b$ is distributed approximately as $g_b \chi^2(h_b)$. Therefore the confidence limit for $\mathrm{SPE}_b$ is $\delta_b = g_b^{\mathrm{spe}} \chi_a^2(h_b^{\mathrm{spe}})$ for a given confidence $1-a$, where

$g_b^{\mathrm{spe}} = \mathrm{tr}\{((P_e^b)^T P_e^b \Lambda)^2\} \big/ \mathrm{tr}\{(P_e^b)^T P_e^b \Lambda\}$    (17)

$h_b^{\mathrm{spe}} = \big(\mathrm{tr}\{(P_e^b)^T P_e^b \Lambda\}\big)^2 \big/ \mathrm{tr}\{((P_e^b)^T P_e^b \Lambda)^2\}$    (18)

and $\Lambda$ is the diagonal matrix composed of the residual eigenvalues, namely $\Lambda = \mathrm{diag}\{\lambda_{k+1}, \dots, \lambda_m\}$. The block whose SPE is severely out of limit can be considered the source of the fault, or the block mainly affected by the fault.

3.3 Variable contribution in a block

The contribution of the $j$-th variable in the $b$-th block is defined as

$\mathrm{SPE}_{b,j} = (x_{b,j} - \hat{x}_{b,j})^2$    (19)

where $x_{b,j}$ is the $j$-th variable of the $b$-th block and $\hat{x}_{b,j}$ is its prediction value by the PCA model.

The basic procedures can be summarized as follows:
1. Collect process data under the normal condition, pretreat them, and build the regular PCA model of all variables.
2. Block the scores and loadings acquired in procedure 1 according to process knowledge and calculate the confidence limits of the block SPEs.
3. Collect a new sample and monitor the process using the SPE of procedure 1; a fault is detected if the SPE is out of limit continuously.
4. Once a fault has been detected, calculate the block SPEs and determine the most likely block containing the fault.
5. Plot the variable contribution of that block and localize the root cause of the fault with process knowledge.
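A sketch of the block statistics follows, again with plain arrays. The blocks are given as index lists into the monitored variables, an assumption about data layout that the paper does not prescribe; equation (16) is implemented exactly as printed above, and the per-variable contributions of equation (19) use the full-model residual.

public class BlockSpe {
    // Equation (16): SPE_b = || x_b Pe_b Pe_b^T ||^2, Pe_b = the rows of Pe in block b.
    static double blockSpe(double[] x, double[][] pe, int[] block) {
        int r = pe[0].length;
        double[] t = new double[r];                    // t = x_b Pe_b
        for (int j = 0; j < r; j++)
            for (int i : block) t[j] += x[i] * pe[i][j];
        double s = 0.0;
        for (int i : block) {                          // e_b = t Pe_b^T
            double e = 0.0;
            for (int j = 0; j < r; j++) e += t[j] * pe[i][j];
            s += e * e;
        }
        return s;
    }

    // Equation (19): contribution of variable j in the block is (x_j - xhat_j)^2,
    // where xhat comes from the full PCA model, i.e. e = x Pe Pe^T over all m variables.
    static double[] contributions(double[] x, double[][] pe, int[] block) {
        int m = x.length, r = pe[0].length;
        double[] t = new double[r];
        for (int j = 0; j < r; j++)
            for (int i = 0; i < m; i++) t[j] += x[i] * pe[i][j];
        double[] c = new double[block.length];
        for (int b = 0; b < block.length; b++) {
            double e = 0.0;
            for (int j = 0; j < r; j++) e += t[j] * pe[block[b]][j];
            c[b] = e * e;
        }
        return c;
    }
}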

4. Case Study

The proposed approaches are applied to the well-known benchmark, the Tennessee Eastman process, which was developed by Downs and Vogel [9]. As shown in Fig. 1, the process consists of five major units: a reactor, a product condenser, a vapor-liquid separator, a recycle compressor and a product stripper. The process has 12 manipulated variables, 22 continuous process measurements and 19 composition measurements. As the plant is open loop and unstable, many control schemes have been proposed, and in this paper we used the control scheme in [10].

[Fig. 1. Tennessee Eastman process]

In this study, 22 continuous variables are selected for monitoring and the sampling interval is set to 3 minutes. These 22 variables are grouped into five blocks according to their locations in the plant, namely the material block, the reactor block, the separator block, the stripper block and the compressor block, as shown below.

1. Material: 1 A feed; 2 D feed; 3 E feed; 4 A and C feed
2. Reactor: 6 Feed rate; 7 Pressure; 8 Level; 9 Temperature; 21 CW temperature
3. Separator: 11 Temperature; 12 Level; 13 Pressure; 14 Underflow; 22 CW temperature
4. Stripper: 15 Level; 16 Pressure; 17 Underflow; 18 Temperature; 19 Steam flow
5. Compressor: 5 Recycle flow; 10 Purge rate; 20 Compressor power

Data are collected in the normal operation condition during 25 hours of simulation time, which generates 500 samples as the reference set. Ten principal components are selected, capturing 69.87% of the variation in the reference set. The control limit of the SPE statistic is 16.357, based on the 99% confidence limit; the control limits of the block SPE statistics are likewise calculated under the 99% confidence limit.

Then the first abnormal case is considered and the process is simulated for 20 hours. Fault 1, a step change of the A/C composition ratio in feed stream 4, is introduced at the 5th hour and concluded at the 5.5th hour; namely, the fault is present between the 100th and the 110th samples. The SPE chart under fault 1 is shown in Fig. 2. The figure shows the SPE oversteps the 99% confidence limit at sample 105, detecting the fault quickly.

[Fig. 2. SPE chart for fault 1]

To compare the traditional PCA with the proposed method for fault diagnosis, the variable contribution plot of the SPE at sample 105 is presented in Fig. 3. It displays three variables contributing most to the SPE: variable 11 (separator temperature), variable 20 (compressor power) and variable 21 (reactor cooling water inlet temperature). The difference in contribution among them is not obvious, and it is difficult to determine which is most affected by the fault. If we only consider the variable with the maximum contribution as the excursion of the fault, it will be believed that variable 21 is the source of the fault or is most affected by it. Making such a decision will lead to a check around the compressor and obviously make a mistake. In fact, fault 1 is located in the material section. So a fault diagnosis based only on the variable contribution plot cannot correctly determine the position of the fault, especially in a complex process.

[Fig. 3. Variable contribution plot at sample 105 for fault 1]

To resolve the problem, we first draw the block contribution of the SPE for each block and determine the most likely faulty block by comparing their limit violations; then we draw the variable contribution plot of that block and combine it with process knowledge to diagnose the fault.

Fig. 4 is the plot of the block SPEs under fault 1. It shows that fault 1 is serious, causing all the blocks to have obvious limit violations, and that block 1 (the material block) is most likely the source of the fault, as it has the maximum violation.

[Fig. 4. Block SPE chart for fault 1]

The variable contribution plot of the SPE at sample 105 in block 1 is shown in Fig. 5. It is apparent that the largest contribution to the SPE comes from variable 1 (A feed), so the fault is produced through variable 1 or affects it most. Knowing from process knowledge that the A/C ratio should be kept at a fixed value, and that the great excursion of variable 1 just compensates the change of the A/C ratio in stream 4, the operator can easily confirm that the fault is produced through the abnormal variable 4 (A and C feed).

[Fig. 5. Variable contribution of block 1 at sample 105 for fault 1]

The other abnormal case tested is fault 2, a step change of the reactor cooling water inlet temperature. As in case 1, the process is simulated for 20 hours, introducing the fault at the 5th hour and concluding it at the 5.5th hour. The SPE for fault 2 in Fig. 6 shows the limit violation at sample 102, detecting the fault timely. Fig. 7 is the plot of the block SPEs; it displays that fault 2 is not as serious as fault 1. The trend in block 2 appears strongest, so block 2 is most likely the site of the fault. Fig. 8 shows that variable 9 (reactor temperature) is affected most by the fault. With this hint, the empirical operator is inclined to suspect that the reactor cooling water inlet temperature has an abnormal change, thus finding the cause of the fault.

[Fig. 6. SPE chart for fault 2]
[Fig. 7. Block SPE chart for fault 2]
[Fig. 8. Variable contribution of block 2 at sample 102 for fault 2]

5. Conclusions

In this paper, we apply MBPCA to detect and diagnose faults: first build the PCA model of all variables to detect the fault, then determine which block the fault is located in by the block contribution, and finally diagnose the fault according to the variable contribution of that block combined with process knowledge, thus determining the location of the fault exactly. Simulations on the TE process show the effectiveness of this method.

References
[1] Jackson J.E. A User's Guide to Principal Components. New York: Wiley-Interscience, 1991.
[2] Gang Chen and Thomas J. McAvoy. Predictive on-line monitoring of continuous processes. J. Proc. Cont., 1998.
[3] Wangen, L.E. and Kowalski, B.R. A multiblock partial least squares algorithm for investigating complex chemical systems. J. Chemometrics, 1988, 3:3-20.
[4] Rännar, S., MacGregor, J.F. and Wold, S. Adaptive batch monitoring using hierarchical PCA. Chemometrics Intell. Lab. Syst., 1998, 41:73-81.
[5] Qin, S.J., Valle, S. and Piovoso, M.J. On unifying multiblock analysis with applications to decentralized process monitoring. J. Chemometrics, 2001, vol. 15, pp. 715-742.
[6] Jincheng Fan et al. Data Analysis. Science publication.
[7] Li, W., Yue, H., Valle, S. and Qin, S.J. Recursive PCA for adaptive process monitoring. J. Process Control, 2000, vol. 10, pp. 471-486.
[8] Westerhuis, J.A., Kourti, T. and MacGregor, J.F. Analysis of multiblock and hierarchical PCA and PLS models. J. Chemometrics, 1998, 12:301-321.
[9] Downs, J.J. and Vogel, E.F. A plant-wide industrial process control problem. Computers and Chemical Engineering, 1993, 17(3):245-255.
[10] Larsson, T., et al. Self-optimizing control of a large-scale plant: the Tennessee Eastman process. Ind. Eng. Chem. Res., 2001, vol. 40, pp. 4889-4901.

Strong Thread Migration in Heterogeneous Environment

Khandakar Entenam Unayes Ahmed, Khalad Hasan, Tamim Shahriar, Md. Al-mamun Shohag, Md. Mashud Rana
Dept. of Computer Science & Engineering, Shahjalal University of Science & Technology
E-mail: tanvir-cse@sust.edu, khalad-cse@sust.edu, subeen@acm.org, shohag_2002@yahoo.com, masudcse@yahoo.com

Abstract
This paper provides a complete framework for thread migration using JPDA. In this project the JPDA (Java Platform Debugger Architecture) is used to capture and restore the execution state for dynamic thread migration. It is a powerful autonomous system for heterogeneous environments that maintains portability, since the Java Virtual Machine (JVM) is not modified. The focus in this project is on the migration of the execution state rather than on the transfer of objects, since that facility is provided by Java's transport protocol. A system developed based on our framework needs no extra programming involvement to carry out the whole migration process.

Keywords: distributed computing, strong thread migration, capture debugger, restoration debugger, sending agent, receiving agent.

I. INTRODUCTION

In the Java programming language, mobility is considered during design. Source code is compiled into a transient state called bytecode (an instruction set that is very similar to the one of a hardware processor), and the Java Virtual Machine then interprets the bytecode into an executable form. This makes the developers' work easy: a user can compile his code once and then run it in any machine with a JVM. This makes Java convenient for implementing distributed applications. Object Serialization allows Java objects to be transmitted or stored, which enables a process to be started at a new host, but with an initial state that always starts from the beginning of the process. Our concern is to start the process from the point at which it was transmitted; that is, we need to transmit/store the state of the execution, so we have to serialize/deserialize the thread. Thread migration, or thread serialization, has many applications in the areas of persistence and mobility, such as checkpointing and recovery for fault tolerance purposes, dynamic reconfiguration of distributed systems, dynamic load balancing, and user traveling in mobile computing systems. So our goal is to develop a complete framework for thread migration.

II. RELATED WORKS

Bouchenak et al. [1] modify the JVM. When their system serializes a thread, it maintains all information about local variables, the operand stack, and the pc. In their framework they used two techniques named Type Inference and Dynamic De-optimization: in Type Inference they create a user-defined stack, and in the Dynamic De-optimization technique they use user-defined stacks named type stack and type frame to restore the states. Though the full concern was on reducing performance overhead, they lose portability.

Fuad et al. [2] developed a mechanism of strong thread migration. It first identifies the methods that are to be migrated and then adds a migration primitive for those methods. It can suspend the thread only inside the run method, not in any other method.

Sekiguchi et al. [3] utilize the Java exception handling mechanism. Their mechanism is based on a preprocessor that inserts code into the program to be migrated. When migration takes place, a NotifyMigration exception is generated and handled by a try-catch block: when the exception is thrown, it is caught in the current method and propagated to the calling method, and at that time the local variables are stored in a state object. It uses an artificial program counter. The code growth rate is high due to the code insertion by the preprocessor.

Dorman [8] also uses JPDA for state capture and restoration. His complete focus was on capturing and rebuilding the execution context of running objects, and not on a particular means of transportation.

In our framework we neither lose portability nor insert any artificial code.

III. JAVA PLATFORM DEBUGGER ARCHITECTURE (JPDA)

A debugger developer may hook into JPDA at any layer. Since the JDI is the highest level and the easiest to use, developers are encouraged to develop debuggers using JDI. Debuggers are able to "hook into" programs running in the JVM by requesting notification of events fired by the framework, e.g. exception, variable modification, stepping, method entry and method exit events. These events are placed in an EventQueue, from which the debugger may consume them and further query the running program.

While the JPDA provides access to information not normally available to standard Java applications, it also limits access in other areas. In querying the JVM for running objects, the debugger is not able to obtain direct object references; all references and values returned by the JVM are mirrors of the actual values.

[Figure 1: Java Platform Debugger Architecture. The user interface and the JDI (Java Debug Interface) form the front end, which talks over the JDWP (Java Debug Wire Protocol) to the back end in the virtual machine (JVMDI, the Java VM Debug Interface).]

For the most part, the mirror-based access poses few problems, as it is possible to both get and set local instance variables. A notable exception is the current execution point within a stack frame: there exists a location() method for the retrieval of the execution point but, in an effort to enforce security constraints within the Java environment, there is no accompanying setLocation(), limiting the ability to use an object in the same manner as a conventional program.

IV. PROJECT GOALS

Using JPDA and byte code modification, this project builds up a complete framework for dynamic thread migration. The project follows these steps:
• Capture of the execution state of a process using JPDA.
• Transfer of the byte code of the class using the TCP/IP protocol.
• An agent that receives the byte code at the receiver and then executes it under debugging mode.
• Successful suspension of the process, modification of the byte code, restoration of all the variables, and resumption of the process.
• Display of all captured states for research purposes.

V. JVM CHARACTERISTICS

The JVM data areas that are related to our project are described below:
• A Java stack is associated with each thread in the JVM. A new frame is pushed onto the stack each time a Java method is invoked and popped from the stack when the method returns. A frame includes a table with the local variables of the associated method and an operand stack that contains the partial results (operands) of the method. A frame also contains registers such as the program counter (pc) and the top of the stack.
• The heap of the JVM includes all the Java objects created during the lifetime of the JVM. The heap associated with a thread consists of all the objects used by the thread (objects accessible from the thread's Java stack).
• The method area of the JVM includes all the classes that have been loaded by the JVM. The method area associated with a thread contains the classes used by the thread (classes where some methods are referenced by the thread's stack).

We access the Java stack, heap and method area using JPDA, as it provides a convenient way to access them.

VI. OVERALL DESIGN

In this project the design process concentrates on state capture and restoration of running objects, along with the transportation of mobile objects. The capture and restoration are done using JPDA without inserting any code or modifying the JVM. The JPDA allows stopping the execution of any running object, accessing the exact location of the execution state, and then storing the location along with the local variables. The mobile object is then serialized along with the stored values of the local variables. The total capture and storage are done by a debugger developed using JDI, named CaptureDebugger. Both execution and transmission are monitored by an agent named SendingAgent. There is also a receiving agent that receives the serialized data and then runs the mobile object in listening mode, where it is traced by another debugger on the sender side, named RestorationDebugger. RestorationDebugger stops the mobile object in the remote machine, restores the state and the values of the local variables, and starts the process. The system consists of two agents, named SendingAgent and ReceivingAgent, for the sender and the receiver respectively.
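The capture side can be expressed directly against the JDI. The sketch below attaches to a debuggee started with the standard JDWP socket transport, suspends it, and walks the main thread's frames; the port number and the choice of the SocketAttach connector are assumptions of this sketch.

import com.sun.jdi.*;
import com.sun.jdi.connect.*;
import java.util.Map;

public class CaptureSketch {
    public static void main(String[] args) throws Exception {
        VirtualMachineManager vmm = Bootstrap.virtualMachineManager();
        AttachingConnector con = null;
        for (AttachingConnector c : vmm.attachingConnectors())
            if (c.name().equals("com.sun.jdi.SocketAttach")) con = c;
        Map<String, Connector.Argument> arg = con.defaultArguments();
        arg.get("port").setValue("5005");             // assumed debuggee port
        VirtualMachine vm = con.attach(arg);
        vm.suspend();                                 // the debuggee must be passive
        for (ThreadReference t : vm.allThreads()) {
            if (!t.name().equals("main")) continue;
            for (StackFrame f : t.frames()) {         // dump each frame
                Location loc = f.location();          // pc: method and code index
                System.out.println(loc.method() + " @ " + loc.codeIndex());
                for (LocalVariable v : f.visibleVariables())
                    System.out.println("  " + v.name() + " = " + f.getValue(v));
            }
        }
        vm.resume();
    }
}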

Four major components are greatly responsible for making the migration a success:
• SendingAgent: responsible for executing CaptureDebugger and migrating the mobile objects along with the captured information (execution state, values of local variables, etc.). It is also responsible for setting up the connection with the receiver and launching the second debugger, RestorationDebugger, that traces the mobile object in the remote machine (the receiver).
• ReceivingAgent: responsible for receiving the mobile object along with the captured information and executing the mobile object in suspension mode. It also notifies the sender about the start of execution of the mobile object.
• CaptureDebugger: stops the running process and captures the execution state (pc), thread name, signatures and local variables. It dumps all captured information into a stack list named StackFrameList.
• RestorationDebugger: started by the SendingAgent; it restores the captured information and then restarts the mobile object.

A. Sending Agent

The sending agent starts its duty only when a migration decision takes place. At first it sets up a connection with the receiving agent using a stream socket connection. Using this socket, it first reads the class file of the mobile object and then sends the class file along with its name and size. Then the SendingAgent executes the debugger CaptureDebugger using the exec() method of the Runtime class. It then waits for the notification from the receiver about whether the mobile object has started there; if it gets the notification, it executes the second debugger, RestorationDebugger. It always keeps itself in communication with the receiver until the migration is complete. The total migration process is maintained by the SendingAgent, and the user can take the decision to migrate through it.

Algorithm: SendingAgent
1. Start
2. Check for any request for migration.
3. If there is a request to migrate then goto step 4; else goto step 2.
4. Set up a connection with the target machine.
5. Send the class file of the mobile object to the target machine.
6. Execute CaptureDebugger.
7. Check whether the mobile object has started in the target machine.
8. If the mobile object has started then goto step 9; else goto step 7.
9. Execute RestorationDebugger.
10. Check for further migration request(s).
11. If there are more request(s) then goto step 4; else goto step 12.
12. Exit

Figure 2: Algorithm for Sending Agent

B. Receiving Agent

The ReceivingAgent starts its function by opening a socket at a specific port number and then starting a thread named AcceptThread, so the ReceivingAgent is always in listening mode and can receive the mobile object along with the variables. The AcceptThread first accepts the server socket and then gets the class name, thread name, size and class file sent by the SendingAgent of the sender. The ReceivingAgent is then responsible for running the received mobile object in suspend mode; it is restarted only when the second debugger, RestorationDebugger, connects. The ReceivingAgent then sends a notification about the start of the mobile object to the SendingAgent of the sender.

Algorithm: ReceivingAgent
1. Start
2. Open a server socket.
3. Start a thread named AcceptThread.
4. Check for the connection.
5. If the connection is closed then goto step 6; else goto step 4.
6. Exit

Figure 3: Algorithm for Receiving Agent
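The transport between the agents is ordinary stream-socket I/O. A reduced sketch of the sending side follows; the host, port, and message layout are illustrative assumptions, since the paper fixes only that the name, size, and bytes of the class file are sent.

import java.io.*;
import java.net.Socket;
import java.nio.file.*;

public class ClassFileSender {
    // Sends <name, size, bytes> of a mobile object's class file to the receiving agent.
    static void send(String host, int port, Path classFile) throws IOException {
        byte[] code = Files.readAllBytes(classFile);
        try (Socket s = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
            out.writeUTF(classFile.getFileName().toString()); // class name
            out.writeInt(code.length);                        // size
            out.write(code);                                  // byte code
            out.flush();
        }
    }
}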

Algorithm: AcceptThread
1. Start
2. Accept the server socket.
3. Get the class name, size and class file.
4. Create a new class file with the same name and content as the class file received.
5. Run the class file at the specific port address, executing the class with the exec() method of the Runtime class in suspended debugging mode.
6. Send a notification to the SendingAgent.
7. Check the server socket: if the server socket is null then goto step 8; else goto step 2.
8. Exit

Figure 5: Algorithm for AcceptThread

The AcceptThread continues its execution until the server socket becomes null.

C. Capture Debugger

CaptureDebugger connects with the debuggee running in listening mode at a specific port number. It first finds out the connector type by matching the specification provided through its arguments with the connection specifications provided by the VM, and then connects with the debuggee using this connector. To capture information, the debuggee must be passive instead of active; that is why CaptureDebugger first suspends the debuggee. It then extracts all threads that are in the debuggee and finds the main thread, as our first concern is with a single thread (support for multiple threads will see the light of day later). It then dumps the stack from this thread; the stack contains all frames, which are popped recursively, and each frame is dumped too and distinguished according to the method signature and method name. From the frames, the program counter (pc), method name, method signature, local variables and their corresponding values are captured. All information is stored using a user-defined stack named StackFrameList. Since CaptureDebugger acts as an individual process, simply executed by the SendingAgent, a handshake is required for interprocess communication, through which the captured information is transferred to the SendingAgent.

Algorithm: CaptureDebugger
1. Start
2. Find the specific connector.
3. Check whether the connection is open.
4. If the connection is not open then goto step 13; else goto step 5.
5. Check whether the debuggee is present.
6. If the debuggee is present then goto step 7; else goto step 13.
7. Attach the debugger to the debuggee.
8. Suspend the debuggee.
9. Extract all threads in the debuggee.
10. Find the main thread.
11. Extract all frames.
12. Capture all states.
13. Exit

Figure 4: Algorithm for CaptureDebugger

D. Restoration Debugger

RestorationDebugger connects with the debuggee running in suspension mode at a specific port number. It finds the connector type by matching the specification provided through its arguments with the connection specifications provided by the VM, and connects with the debuggee using this connector. After attaching to the TargetVM, it first resumes the TargetVM by default. In the meantime the EventHand thread is started by this debugger. EventHand traces all method call events and searches for a specific method call that is inserted just after the variable declarations in a method. If the specified method is found, the debugger suspends the TargetVM, then updates the local variables using the StackFrameList obtained from CaptureDebugger, and then resumes the TargetVM. Here it is not guaranteed that the process will restart from the exact desired location, but it is ensured that all of the local variables are updated successfully. It should also be noted that the debugger resumes the process after changing the pc.
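On the restoration side, the sentinel-method watch can be realized with a MethodEntryRequest: once the sentinel is seen, the thread is already suspended by the event and the locals can be rewritten with StackFrame.setValue. The sentinel name and the shape of the restored value are assumptions of this sketch.

import com.sun.jdi.*;
import com.sun.jdi.event.*;
import com.sun.jdi.request.MethodEntryRequest;

public class RestoreSketch {
    // Waits for the sentinel method, then rewrites an int local in the top frame.
    static void restore(VirtualMachine vm, String sentinel,
                        String varName, int value) throws Exception {
        MethodEntryRequest req = vm.eventRequestManager().createMethodEntryRequest();
        req.enable();
        while (true) {
            EventSet set = vm.eventQueue().remove();      // blocks for the next events
            for (Event ev : set) {
                if (ev instanceof MethodEntryEvent) {
                    MethodEntryEvent me = (MethodEntryEvent) ev;
                    if (me.method().name().equals(sentinel)) {
                        StackFrame f = me.thread().frame(0);
                        LocalVariable v = f.visibleVariableByName(varName);
                        f.setValue(v, vm.mirrorOf(value)); // restore captured value
                        req.disable();
                        set.resume();
                        return;
                    }
                }
            }
            set.resume();
        }
    }
}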

VIII. 1999. volume 4.” In Coordination Models and Languages. CONCLUSION Algorithm: RestorationDebugger 1. Suspend TargetVM 13. Twenty-Fifth Australian Computer Science Conference. 1999”. we didn’t do so as we want to use our system in heterogeneous environment. Yonezawa. Oudshoorn. “AdJava . Start eventhand thread 10. M. Search for the sentinel method 11. R. Update frame by StackFrameList 17. “Byte Code Engineering Library. Sun Microsystems.com/j2se/1. editor.sun.sun. http://java. Attach debugger with TargetVM 8. Sun Microsystems.html. Java I/O. Though we might develop a faster mechanism for strong thread migration modifying the JVM.” INRIA Technical Report No. 1999. Figure 6: Algorithm for Restoration Debugger Our proposed framework provides a new and different way for strong thread migration. M. pages 211–226. Melbourne. Dahm. May 2002. Fuad and M. Australian Computer Society. Find specific connector 3. Find main thread from all threads 15.VII. REFERENCES [1] [2] Sara Bouchenak and Daniel Hagimont. Andrew Dorman. 2001. Masuhara. “A Simple Extension of Java Language for Controllable Transparent Migration and Its Portable Implementation.” 04/12/2002 2002. Disconnect debugger 18.html. If the debuggee is present then goto step 7 Else goto step 18 7. “Java Platform Debugger Architecture Overview. Start 2. “Zero Overhead Thread Migration. Dump frame from stack 16.” [3] [4] [5] [6] [7] [8] 209 . hence not to sacrifice portability. E. If the method is found then goto step 12 Else goto step 10 12. Check whether the connection is open 4. Harold.net/.3/docs/guide/jpda/jpda. If the connection is open then goto step 18 Else goto step 5 5. Exit. “Execution Context Migration within a Standard Java Virtual Machine Environment.” http://java.com/docs/books/vmspec/2ndedition/html/VMSpecTO C. Check whether the debuggee is present 6.doc. O’Reilly.sourceforge.Automatic Distribution of Java Applications.” In Michael Oudshoorn. Sekiguchi. and A. Get StackFrameList from SendingAgent 14. Using JPDA we provide system independent feature in our program (as JPDA is part of standard Java). http://bcel. Australia. “The JavaTM Virtual Machine Specification Second Edition. pages 65 – 75. Resume TargetVM 9. H. T. 0261. 2000.

Introduction There has been a continuous proliferation of nonlinear type of loads due to the intensive use of electronics control in all branches of industry as well as by general consumers. mains supplies only unity power-factor sinusoidal balanced three-phase currents.2009. This paper also presents the application of DSP-based controllers for SAF for three-phase distribution systems. In Section 3.2009 International Conference on Computer Engineering and Technology A DSP-based active power filter for three-phase power distribution systems Ping Wei. reactive power.1109/ICCET.00 © 2009 IEEE DOI 10. Keywords: Harmonic distortion. digital signal processors (DSP) 978-0-7695-3521-0/09 $25. Finally. 2.com wp620125@yahoo. 1 shows the basic SAF scheme including a set of non-linear loads on a three-phase 1.140 210 . mains to feed harmonics.c. neutral current and unbalancing of non-linear loads locally such that a. Zhixiong Zhan. the proposed technique provides both harmonic elimination and power factor correction. The proposed technique uses a fixed-point DSP to effectively eliminate system harmonics and it also provides reactive power compensation or power factor correction. A DSP-based three-phase active power filtering solution is proposed in this paper. the simulation and experimental result also shows that both controller techniques can reduce harmonics in three-phase electric systems drawn by nonlinear loads and reduce hardware components. create phase displacement and harmonic currents in the main three-phase power distribution system both make the power factor of the system worse. DSP-based controller for active power filters has been proposed in some papers in which general purpose and floating-point DSPs are used.cn Abstract This paper presents a new digital signal processor (DSP)-based control method for shunt active power filter (SAF) for three-phase power distribution systems. Active power filter system The main objective of the SAF is to compensate harmonics. Compared to conventional analog-based methods. The system considered in this paper is shown in Fig.Houquan Chen Department of Information Engineering. Nanchang University.com choq521@163. the DSP-based solution provides a flexible and cheaper method to control the SAF. Section 2 of this paper provides the fundamentals of SAF and the structure of the controller is discussed. zhixiong3090@163. 1. reactive power. Fig. simulation results verifying the concept are presented. The proposed technique requires fewer current sensors than other solutions which require both the source current and the load current information. Non-linear loads. especially power electronics loads. The SAF draws the required currents from the a. Shunt Active power filter.c. and neutral current for balancing of load currents locally and causes balanced sinusoidal unity power-factor supply currents under all operating conditions. Conventional rectifiers are harmonic polluters of the power distribution system. Digital signal processors are being used in a variety of applications that require sophisticated control. Furthermore. In recent years.

if1. This load draws a nonsinusoidal current from the utility. The inductors Lf1. Fig. The DC side of the VSI is connected to a DC capacitor. and the state of the switches G1. The switches of SAF must support unipolar voltage and bipolar current.1 an SAF and nonlinear loads considered in this paper The current which must be supported by each switch is the maximum inductor current. The VSI contains a three-phase isolated gate bipolar transistor (IGBT) with anti-paralleling diodes. Cdc that carries the input ripple current of the inverter and the main reactive energy storage element.2. Then the SAF currents can be written as: Lf1 Lf 2 dif 1 = VS 1 − V f 1 dt dif 2 = VS 2 − V f 2 dt (2) (3) (4) Lf 3 dif 3 = VS 3 − V f 3 dt Where. The load may be either single phase. if2 and if3 are SAF currents and Vs1. the SAF consists of three single phase inverters. the capacitor. i*f2 and i*f3 through the control circuit. G1. 2 are as follows: the voltages Vf1. an IGBT with anti-parallel diode is needed to implement each switch. Vf2 and Vf3 supplied by the inverter as a function of the capacitor voltage. and at the same time act as the low pass filter for the AC source current. Then the SAF must be controlled to produce the compensating currents if1. Fig. Lf2 and Lf3. Vc. The maximum voltage which must be supported by controllable switches is the maximum dc bus voltage. Lf2 and Lf3 perform the voltage boost operation in combination with Where.The proposed shunt active power filter 2. In this paper. The VSI is connected in Parallel with the three-phase supply through three inductors Lf1. Vs2 and Vs3 are the supply voltages. G3 and G5 represent three logic variables of the three legs of the inverter. The DC capacitor provides a constant DC voltage and the real power necessary to cover the losses of the system. which is used to inject the compensating current into the power line. G3 and G5 are: ⎡Vf 1⎤ Vc ⎡−2 1 1 ⎤ ⎡G1⎤ ⎢ 1 −2 1 ⎥ ⎢G3⎥ ⎢Vf 2⎥ = ⎥⎢ ⎥ ⎢ ⎥ 6 ⎢ ⎢ 1 1 −2⎥ ⎢G5⎥ ⎢ ⎥ ⎣ ⎦⎣ ⎦ ⎣Vf 3⎦ (1) 2.distribution system. The voltage in the DC capacitor can be calculated from the SAF currents and switching function as follows: VC = 1 C ∫ [G i 1 f1 + G 3if 2 + G 5if 3] (5) The set point of the storing capacitor voltage must be greater than the peak value of the line 211 .1 Description of proposed filter As shown in Fig. we consider three single phase uncontrolled diode bridge rectifiers with resistive–capacitive loading as non-linear unbalanced loads. In this paper. if2 and if3 following the reference currents i*f1. the SAF system consists of a three phase voltage inverter with current regulation. The inverter conduction state is represented by these logics.2.2 System modeling The representation of a three-phase voltages and currents of the VSI in Fig. two phase or three phase and non-linear in nature.

Simulation results of proposed SAF using PI controller. The three-phase compensating reference current of SAF (i*f1.The proposed control system 3. It is noticed that the supply current in phase with the supply voltage. (b) Source current and source voltage with filter.3. has been inserted before the rectifier. The simulation results in steady state operation are presented.3 Capacitor voltage with PI controller The basic operation of this proposed control method is shown in Fig.5 shows the performance of the SAF system using PI controller. In order to limit maximum slope of the rectifier current. Fig. In this paper.5(c) shows the compensation current. rectifier commutations. The PI controller is applied to regulate the error between the capacitor voltage and its reference. 2. i*f3) are estimated using reference supply currents and sensed load currents.5. The output of PI controller is multiplied by the mains voltage waveform Vs1. Then. Waveform of the source current without SAF is shown in Fig. Fig. i*f2. the three-phase controlled rectifier with resistive load has to be compensated by the SAF. a smoothing inductor. 212 . The estimation of the reference currents from the measured DC bus voltage is the basic idea behind the PI controller based operation of the SAF.5 (d). (d) Capacitor voltage with its reference. Fig. Then the supply reference currents are proportional to the mains voltages.4. and the capacitor voltage follow its reference. V*c. Vs2. 5(a). SAF is connected in parallel with nonlinear load.neutral mains voltage in order to be able to shape properly the mains currents. i*s2. Vs3 in order to obtain the supply reference currents i*s1. The capacitor voltage is compared with its reference value. (c) Compensating current. we give several simulation results with uncontrolled rectifier at α=0°. 4. The capacitor voltage superimposed to its reference is shown in Fig. i*s3. it is prevented the inverter saturation even in correspondence of Fig. In Fig. in order to maintain the energy stored in the capacitor constant. Simulation results A number of simulation results with different operating conditions were developed. SAF connected in parallel with nonlinear loads. Lr. (a) Source current without filter. 3. Fig. Fig. 5(b) shows the source current with SAF superimposed to the supply voltage. Also.

Texas Instruments-Configuring PWM Outputs of TMS320F240 with Dead Band for different Power Devices. its reference and source voltage. 1997 Texas Instruments-TMS320C24x DSP Controllers Reference Set. New trends in active filters for power conditioning. 7(b) shows the supply phase voltage.4. Conclusions performance of the active filter. The XDS510PP is a portable scan path emulator capable of operating with the standard test interface on TI DSPs and Microcontrollers. The operation and modeling of the SAF have been described. Experimental results of proposed SAF. 7 shows experimental waveforms for the load condition of uncontrolled rectifier.6. A three-phase PWM controlled shunt active filter was designed to inject harmonic currents with the same magnitude and opposite phase to that of the nonlinear load currents in order to cancel harmonics at the point of common coupling (PCC) with the utility. IEEE Trans. Fig. Fujita H. Fig. IEEE T Power Electr 1998. It is clear from Fig. From these figures. 7(c) shows the compensating current. Vol. Akagi H. This portable emulator works of the computer parallel port. Revision A. [2] [3] [4] [5] . Fig. (b) Source current. Texas Instruments-Dead-Time Generation on the TMS320C24x. 7(b) that the supply current is almost sin waveform and follows the supply voltage in its waveform shape with almost a unity displacement power factor. The DSP is connected to a computer through a XDS510PP emulator. supply current and its reference. Experimental setup of proposed DSP controlled active filter. The feasibility of the approach was proven through the experimental results.Akagi. The Texas Instruments (TI) TMS320F240 processor is a fixed-point 16-bit processor. (a) Source current and voltage without filter. Fig. Fig. and has the capabilities of using advanced PWM motor control techniques for adjustable speed drives. 13(2):577–84. Control and gating signals for the switches of the active filter are generated on a TMS320F240 DSP. 7(a) shows the supply voltage and current without ASF. The unified power quality conditioner: the integration of series-and shunt-active filters. 1997. Fig. System and Instruction Set). The control scheme using three independent hysteresis current controllers has been implemented. A laboratory prototype has been built to verify the Fig. 6 shows the block diagram of the experimental setup.7. operating at 20 million instructions per second (MIPS). Application Report SPRA289. The focus of this paper is to present a novel DSP controlled active filter to cancel harmonics generated in three-phase industrial and commercial power systems. Application Report SPRA371. March 1997. IA 32 (6) (November=December 1996) 1312–1322. 1 (CPU. (c) Compensation current. 5. it is clear that the effectiveness of the proposed controller for active power filter. DSP implementation and Experimental results A laboratory prototype of the active filter has been built to evaluate the performance of the proposed active filter and its implementation in the TMS320F240 DSP. 213 References [1] H.

15(6):495–503. M. Gosbell V.1024–1027 Texas Instruments-TMS320C24x DSP Controllers Evaluation Modul. 2007 (PESC 2007). Dastfan A. Woo M. A novel real-time detection method of active and reactive currents for single-phase active power filters. 2007. Power Electron. 2004 (PESC 04). Yasushi. Buso S. Control of a new active power filter using 3-d vector control.A. Single phase active power filter controlled with a digital signal processor – DSP. Combined deadbeat control of a series parallel converter combination used as a universal power filter. K. de Souza. Singh B.P. J. Al-Haddad K. I. T. 44(3):329–36. L. 14(4):5–12. Y. Chandra A. Electric Power Applic. Toshihiko. IEEE T Power Electr 1999. IEEE T Power Electr 2000. Nishida. Jeong S. Norio. Comparison of current control techniques for active filter applications. Castilla. power factor correction. Mattavelli P. Technical References. I. Power Electronics Specialists Conference. IEEE Trans. S. Platt D. Malesoni L.M. Feedback linearization of a single-phase active power filter via sliding mode control. Miret. H. Matas. m20–25 June 2004. IEEE. Masayoshi. Mussa. 214 . 1997. J. Rukonuzzman. An improved control algorithm of shunt active filter for voltage regulation. O. 2004) 283–288. IEE Proceedings 151 (3) (8 May. Lindeke. D. DSP-based active power filter with predictive current control. Eiji. F. M.[6] [7] [8] [9] [10] [11] [12] [13] [14] [15] Kamran F. 17–21 June. 23 (1) (2008) 116–125. pp. 2. harmonic elimination. J. 13(1):160–8. 2933–2938. 2004 IEEE 35th Annual vol. and balancing of nonlinear loads. Power Electronics Specialists Conference. M. Advanced current control implementation with robust deadbeat algorithm for shunt single-phase voltage-source type active power filters. Barbi. Nakaoka. Habetler T. Garcia deVicuna. Guerrero. IEEE T Ind Electron 1998. IEEE T Ind Electron 1997. pp. IEEE T Power Electr 1998. 45(5):722–9.

executing purchase(task5). then by su. the figure 1 shows the role layer structure. time constraint and the formalized definition and relevant algorithm. If the system fails to provide sufficient security protection for these cooperative staff.2009. general manager role ma and financial executive role su cooperate to execute task3.workflow. but has some weakness in the workflow area. so it fails to control the interactive relations among tasks (e. therefore effective task control model is needed to manage and control the access of these cooperation staff. and the granularity partition of permission has bigger localization RBAC is a kind of access control model tending to be static. user represents the user collection of executing task. It overcomes the shortcomings of the traditional model based on the role access control that has bad dynamic adaptivity of authority assigning. II. by emphasizing the weakness existing in the present workflow system. and the most remarkable characteristic is to divide big task into smaller tasks which can be finished by many men cooperatively. The widespread mainstream model is based on the RBAC role-based access control model. task flow.com Abstract—Access control is an important security measure in workflow system. Keywords. 2008.(2)the execution series of task is task1 task2 task3 task4 task5. The model can therefore enhance the security and practicability of the workflow system.2009 International Conference on Computer Engineering and Technology Access Control Scheme for Workflow Gao Lijun Zhang Lu Xu Lei computer school Shenyang institute of Aeronautical Engineering Shenyang. however the existing RBAC can’t express complicated workflow access control constraint. scrutinizing purchase bill(task3).). making purchase bill(task2). which can’t be related to the transaction process of application tightly. . The shortage of RBAC model in the workflow system RBAC. that main idea is modify the traditional two-layer authorization structure—user-permission into three-layer authorization structure—user-role-permission. confirming purchase bill(task4). task flow. 2. order. The task executing constraint of a real workflow instance is as follows: (1) is project manager pr executes task1. tasks. the faults of RBAC in the workflow system are pointed out. which determined the granularity of permission can be specified up to the role level. therefore through relating user with permission to simplify the difficulty of authorization management.00 © 2009 IEEE DOI 10. The dynamic property of assignment and revocation of permission is bad. time constraint I. Based on the analysis of many access control models. it can not only forbid users executing unauthorized tasks but also ensure authorized users to execute tasks smoothly. etc. and task2 can be executed from 9 o’clock to 16 o’clock on the 15th every moth. which has wide application in the access control field.1109/ICCET.and if user A has executed task1 or task2 then A is forbidden executing task3 and task5. no flow control and time constraint. China e-mail: gaolijun0610@163. is the basic framework of RBAC96[1] model. a new access control model is proposed with the introduction of task set. 2008 to Oct. role c1 executes task5. Shortage of task process control capability and the periodic constraint capability Take the case of a simplified firm equipment purchasing workflow in this section.g. recurrent time constraint. Furthermore permission is relevant to the role directly in the traditional RBAC model. 
INTRODUCTION Workflow is an efficient method of complex multitask cooperative modeling with extensive application in the enterprises’ informationization. it is inevitable that some personnel can execute illegal operation by the convenience of work. put forward by Sandhu. financial clerk cl executes task2. furthermore the execution time length of task2 is regulated not over 30ms. B. and the minimum permission constraint is the the minimum permission constraint of role[2]. A.(3) The time constraint period is set from Jan. the number on the arc represents the user number needed in this task.120 215 . etc. and can’t satisfy all control demands if the existing RBAC model is introduced into workflow system straightly. RBAC model has its own advantage. ma pr cl Figure 1 s Role Hierarchy Structure 978-0-7695-3521-0/09 $25. but if A has executed task1 then A must execute task4. task3 should activated by ma. That’s to say. so it’s difficult to authorize and revoke the permission timely and rightly according to the need of transaction process dynamically. in the fig. The equipment purchasing workflow consists of five tasks: filing an application for purchasing(task1). especially the shortage of control mechanism of workflow.

StartTime T o TM StartTime taski = ts ts  TM is the beginning time of task. task cooperative constraint. so it’s natural to introduce the concept of task set into RBAC model clearly.which can get the time length of task permitted to be activated. TaskTimeLength T o N TaskTimeLength (taski)= n n  N is natural number(s . R={r1. In order to solve the above-mentioned problem. T-R&TBAC model inherits the characteristics of RBAC3[1]. III. this fault is more obvious because the concept of task is very distinct. so it’s natural to realize the dynamic assignment and revotion of permission. which are all hard to resolve through traditional RBAC.the confilicting task set cti  T and make r  R. on which adding the workflow control mechanism that make the current task can obtain the prescriptive permission only in accordance with the constraint conditions set by the model. In such a kind of access control structure.…cth}. representing the user’s role assigned. CT { ct 1. and make the task execution limited by time constraint so it’s easy to think that modify the original structure into four-level one. Definition 5 T-R&TBAC system status function mapping from ti to real time. Introducing the access control model of time constraint T-R&TBAC based on the role and task Time-Rose & Task Based Access Control The above section analyzes the faults of role access control model in the workflow system but the basic reason is the three-level access control structure which determined these congenital faults of RBAC. and i. just like the following fig. this structure must be changed. it must have the permission during the executing permission task instance period. . cri  R and make u  U . cr2. u2…um} is the set of all users. task flow constraint. which called the role hierarchy. Definition2 Defination of conflicting role and conflicting task set CR={cr1. etc. TSUCC Ž T is the front ordered set of present active task. TP Ž T u P. UA Ž U u R .a many-to-many mapping from user setto role set. a function mapping from session set to user set.4]. representing the role’s task assigned. permission assignment constraint. while having to be related through task. a many-to-many mapping from role set to task set.t2…to} is the set of all divided tasks. GetTaskTR T o TR GetTaskTR taski = tr tr  TR is the time zone of task permitted to activate. S={s1. ct 2. but can set a Currenttime ‡ o TM is a global atomic sub function.…crh}.the conflicting role set.r2…rn} is the set of all roles[3. j  N . P={p1. ct  CT :| task _ set (r ) ˆ ct |d 1 Figure 3 Simplified T-R&TBAC Model The model gives specific role to all users of the whole workflow system.Figure 2 Purchasing workflow The task execution of workflow system is limited by some conditions. TNEXT Ž T is the back ordered task set of present active task. Definition 1 T-R&TBAC model structure U={u1. In the workflow system. which return current time. not to say that the RBAC failed to recognize the task status which results to unable to trace the accomplishment of task and increases the difficulties of calculating the start time of follow-up task. RT Ž R u T. task conflict constraint. Definition 3 Logic control structure of task TSUCC {…tsucc i …tsucc j }. task cooperation constraint. moreover specify the granulites of permission up to the task level. representing the task’s permission assigned. TNEXT { …tnextp …tnextq }. also the minimum access executing permission of every task. No matter which user is given with any role to register . 
The following gives the formalization description of model and the related algorithm realizing the above-mentioned constraints. 216 where task_set is the function assignning task to role. which is the task set must been finished after finishing this task. task time constraint. and improve the dynamic adaptability of workflow system dramatically. and assigns tasks to every role. Definition 4 the time and status function TM={ti |i  N} TM is the set of all time point of visual world. 3 illustrate. SA Ž S u U. including the task order constraint. permission is not directly related to the role.s2…sq}is the set of all sessions. a many-to-many mapping from task set to permission set. ti tj  TM i<j œ ti<tj TR={(ti tj)| ti tj  TM i<j} is the time zone between two time points. which is the task set must been finished before finishing this task. The constraint conditions include time constraint. T={t1.p2…pp} is the set of all access permission authorities. and finally form a access control model with time constraint -----T-R&TBAC which based on the role and task. RH Ž R u R define the partial order on R. representing the affiliation between session and user. cr  CR :| role _ set (u ) ˆ cr |d 1 where the role_set is the function assigning role to user. ti  TM represents one time point which should not be accordance with real time.

P. {True ( finished While Currenttime/ ' T //rejudge the constraint conditions at ' T interval { If Constraint_valid(state ) task GetSuccTask ||GetTaskStation (taski))==false || (StartTime taski IN GetTaskTR (taski)) || (Currenttime StartTime<TaskTimeLength (taski)) PerAccessFlag = False If PerAccessFlag {DelTaskFrom (taski). 1999. Feinstein and Charles E. VA: George Mason University. On the increasing importance of constraints.CT. taskj  T delete the task from active task set. Thesis]. 39-46. and the relevant resources access permission is also been deprived. showing the constraints are not satisfied. task confliction. judging whether meeting the constraints of user. In: Proceedings of the Fourth ACM Workshop on Role-Based Access Control[C].Shanghai Computer Engineering. this 16 variables set illustrates the status of the task at that occasion. the If with the above judgments of PerAccessFlag is still True. USA: ACM. GetTaskPermission taski . Tsucc Ž T is front ordered set obtaining the current active task.UA. role(including role cooperation). DelTaskPermission T o P DelTaskPermission taski DP.2001. P. DelTaskPermission taski . [5] Fang Chen.G.Computers and Security.29(2) 3847. then access to relevant resources. Eloff. [2] Reinhardt A. obtaining the accomplishment of current task. AT is the active task set.2007.T. task flow. Youman.33(9):15 217 . AT Ž T. GetNextTask T o Tnext GetNextTask taski Tnext.Access Control in Document-centric Workflow Systems An Agent-based Approach[J]. DP Ž P delete the permission of the task.CR.} } task. combining with workflow system seamlessly. Virginia. time. if PerAccessFlag is False. GetTaskStation T o {True GetTaskStation (task )= False i .PT. To a real system. AddTaskTo:T o AT: AddTaskTo(taski)=AT.TNEXT. [6] Gao lijun. which makes the workflow constrained by time. Fairfax. confliction. Coyne.TaskStation} False ( unfinished TaskStation is the task accomplishment status. 1996.Currenttim e. 1999: 33-42 [4] Ahn. etc. The reseach on the turning point choose and the security recovery algorithm in TRBAC [J]. MD: ACM Press. permission.} Else {BackWriteNextUser GetNextTask taski)). permission assignment. BackWriteNextUser Tnext o HEADNODE BackWriteNextUser taski =HeadNode which can be empty . Sandhu. Hal L.state={U. 1996. Tnext Ž T is back ordered set obtaining the current active task.RT. Jan H. Role-Based access control models[J]. task cooperation. enhancing the security greatly. Edward J.20(6):525-532 [3] Trent Jaeger. Ravi S. Sandhu.filling the user of the task in the back ordered task assigned user area. inheritance.SA. Algorithm description: PerAccessFlag=True AddTaskTo(taski) GetTaskHeadStation(taski) {True ( ( False [1] Ravi S. DelTaskFrom: T o AT: DelTaskFrom (taski)= AT. Xu Lei. Definition 6 authorizing constraint judgement Authorizing constraint judgement is a one variable function. The extended model overcomes many inherent shortcomings of traditional model.RH.R. further study should been made to solve the time efficiency and recovery of security status.TSUCC.-J.taskj  T add the task into active task set. Constraints for role-based access control. GetTaskPermission: T o P:GetTaskPermission taski =P’ P’ Ž P taskj  T obtain the limited permission of the task. In: Proceedings of the ACM RBAC Workshop.D. which show that the task is feasible and obtain the permission. 
PerAccessFlag={False ( True ( The resolution scheme and relevant algorithm of task flow control and periodic constraint Definition 7 global function definition GetRoleTask: R o T:GetRoleTask r =T’ T’ Ž T get the role’s task. so the task will be deleted.The RCL2000 language for specifying role-based authorization constraints [Ph. etc. we set a task executing flag variable. Conclusion The work flow control mechanism is introduced into the T-R&TBAC model based on the RBAC model. Fairfax. indicating the back ordered task of this task should be accomplished by the user.StartTime.  Constraint_valid:state o {True False Constraint_valid(state)={False ( True ( In order to discuss conveniently. IEEE Computer. Botha. GetSuccTask T o Tsucc GetSuccTask taski Tsucc.

the average delay and the throughput are compared. In [1].4 GHz ISM band.2009. Since the transmitted power of Bluetooth is very limited (1 mW in general).4 GHz indoor path loss model is used for Bluetooth device [3]. The theoretical and simulation result of PER.. Therefore. . 100876. Hangzhou.2009 International Conference on Computer Engineering and Technology A Mathematical Model of Interference between RFID and Bluetooth in Fading Channel Junjie Chen Beijing University of Posts and Telecommunications. The rest parts of this article are organized as follows. the simulation and theoretical results are compared to justify the proposed mathematical model. The interference between 802. Chenjunjie78@gmail. the performance of Bluetooth under the interference of RFID is investigated. The interference related to Bluetooth or RFID is extensively investigated in some previous works.11b and Bluetooth is widely researched and many antiinterference methods are proposed in the literature such 978-0-7695-3521-0/09 $25. the frequency hopping. the average delay and the throughput are selected as the performance metrics and are analyzed in this paper. China.. Packet Error Rate (PER). Furthermore. The proposed model carefully takes PHY (i. A mathematical model is proposed to quantify the performance degradation of Bluetooth. RFID devices emitting on a relatively large power (up to 4000 mW EIRP) bring significant interferences and interrupt the data exchange in Bluetooth piconet. to the author’s knowledge. the average delay and the throughput) are formulated.e.193 218 as [6]-[8]. the transmitted power. in this paper. In section II. the path loss can be figured out easily by the path loss model.e. the distance and the modulation) and MAC (i. Abstract—In this paper. 100876. PER. Therefore. the packet format and the traffic load) into account. Packet Error Rate (PER).2 to research the coexistence problem of WLAN and WPAN operating on 2. PER of 802. a mathematical model is established to quantify the performance of Bluetooth in the presence of RFID’s interference. the average delay and the throughput are selected as the performance metrics. Given the required parameters such as the distance and the frequency. In recent years. China. the MAC sublayer is taken into account and the collision time is figured out. the performance metrics (i. Beijing. the interference in PHY is analyzed and Bit Error Rate (BER) of Bluetooth under the interference is obtained.BER. Then. digital cameras and headsets to facilitate the interdevice data exchanging.00 © 2009 IEEE DOI 10. The inter-piconet interference within the Bluetooth network is illustrated in [10]. Keywords-RFID. In final section. PER I.1109/ICCET. IEEE specifically established Working Group 802. The mutual interference between Bluetooth and Zigbee is presented in [9]. In final.15. the theoretical analysis is justified by the simulation.4 GHz band. Although many works related to Bluetooth are completed. Yuchen Zhou Hangzhou branch of China Telecom.11b under the interference of RFID is analyzed. in section IV.e. Bluetooth uses frequency hopping spread spectrum (FHSS) and hops on 79 channels in 2.. 310000. Based on the BER and the collision time. Bluetooth. INTRODUCTION Bluetooth chips are embedded into diverse products such as notebook computers.com Jianqiu Zeng Beijing University of Posts and Telecommunications. the research result on the interference between Bluetooth and RFID not appears in the public literature. Beijing. 
the shopping mall and the hospital) where many people use Bluetooth-enabled devices. the academia and industry paid much attention to RFID technology and it will be deployed in the place (such as the campus. The following 2. Interference. the channel model. in section III. cellular phones. PHY LAYER INTERFERENCE Path loss is defined as the difference of the signal strength in the unit of decibels between the transmitter and the receiver. II. the performance of Bluetooth under the interference of RFID is worthwhile to be evaluated to pave the way for developing coexistence algorithms. China.

p >0 (8) 10 −2 The Marcum Q function is: 10 −3 10 −4 −2 0 2 4 SIR (dB) 6 8 10 219 . Bluetooth When the distance between the RFID device and the victim receiver is given. according to the parameter a and b is Eq. Both are specified in the following formulas. Path loss follows the free-space propagation (path loss exponent is 2) up to 8 meters and then attenuates more rapidly (path loss exponent is 3. Bluetooth = ⎨ ⎩58. (14) Pb = f (γ ) where SIR can be obtained easily by Eq. The model is not effective below 0. (2). 2 b = γ (1 + 1 − ρ ) 2 (13) So now. the path loss of RFID signal can be calculated by Eq. b) = Q1 (a. It is known that the received power is the difference of the transmitted power and the path loss: (3) Pr = Pt − Lp.15. b = (1 + 1 − ρ ) 2No 2No (10) where Eb/No is the ratio of energy per bit to power spectral density (PSD) of the noise (or the interference).28 Index=0.3). From Eq.5 ≤ d ≤ 8 m (1) Lp. which is used to calculate BER in general. ⎧40. approximates to the SIR and is given by: (5) γ = SINR ≈ SIR = Pr − Pi Bluetooth modulator uses Gaussian Frequency Shift Keying (GFSK) and the envelop detection.5 ⋅ I0(ab) ⋅ exp( −(a 2 + b2 ) 2 ) where Iβ(x) is β order modified Bessel function of the first kind and Q(a. (6) to (11). therefore. And the correlation coefficient ρ in above equations is given as: sin(2πβ ) (11) ρ= 2πβ where β is the modulation index.SIR Index=0.BER Lp. the path loss model of 2. x ≥0 I β ( x) = ∑ k = 0 k !Γ( β + k + 1) where the Gamma function is defined as follows. but BER as the function of SIR is required for the interference analysis. RFID = −147.45 GHz RFID is given by: Q(a. GT and GR are antenna gains of the RFID interferer and Bluetooth receiver.5 meter due to near-field effects. Since the noise power is very weak relative to the interference power. therefore the interference power Pi in Bluetooth receiver is: (4) Pi = Pti − Lp. and IEEE 802. solving the Bessel and Marcum Q function. and it is the function of SIR as follows. therefore the BER is: (6) Pb = Q(a.35 10 −1 Γ( p ) ∫ ∞ 0 t p −1e − t dt . BER of Bluetooth as the function of SIR is illustrated in Fig. BER as the function of Eb/No is obtained.b) − 0.5+33log10(d/8). respectively. RFID The signal-to-interference ratio (SIR) is the ratio of the received power and the interference power. in above equations. (1) to (5). (13) and ρ in Eq. Eb Eb × B Eb × (2 / Tb) 2Pr (12) = = = = 2SIR = 2γ N0 N0× B Pi Pi Using SIR to replace Eb/No in Eq. Similarly.b) = ∫ x ⋅ exp ⎡ −( x 2 + a 2 ) 2 ⎤ ⋅ I 0(ax)dx ⎣ ⎦ b β ∞ (9) ∞ a = exp(−(a 2 + b 2 ) / 2) × ∑ ( ) I β (ab) (b > a > 0 ) β =0 b Besides.6 + 20log(d)+ 20log(f) − 10log(GTGR) (2) where d meters is the distance between the RFID interferer and the Bluetooth receiver.1 standard [2] specifies a minimum modulation index of 0. GFSK modulation of Bluetooth has a bandwidth time (BT) of 0. the path loss of RFID can be obtained by its own path loss model. the noise can be neglected in this case. obtained: a = γ (1 − 1 − ρ ). BER of Bluetooth under the interference of RFID can be figured out. a and b is defined as: Eb Eb 2 2 a= (1 − 1 − ρ ). (10). The Bessel function is: ∞ ( x / 2) β + 2 k (7) .35. 1. According to [5].2+20log10 (d). 0. Hence. Suppose the transmitted power of RFID is Pti dBm.5. (11). d > 8 m where d in the unit of meter denotes the distance between the transmitter and the receiver of Bluetooth. The transmitted power of Bluetooth is denoted by Pt in the unit of dBm and the received power of Bluetooth is denoted by Pr dBm. respectively. 
b) is Marcum Q function. in which the solid and dotted line stand for BER with the minimum and maximum modulation index. 10 0 Bluetooth BER vs. the signal to interference and noise ratio (SINR). Eb/No in previous equations can be converted to SIR as the following formula. and f in the unit of Hertz is the operating frequency.28 and a maximum modulation index of 0.

The collision time. LR ) In the downlink. The packets of them have a time offset which is a random variable and is denoted by X in Fig. Bluetooth and RFID are asynchronous due to its different PHY and MAC technologies. For RFID. can be calculated as the following equations. the mean of BER can be formulated as follows.. TR < X ≤ LR ⎩ (20) λ Figure 2. That is. the probability of each channel being occupied is Pf =1/ 79.. the TB is 366 us long. The Time Domain Collision When Bluetooth piconet collocates with RFID devices. then the probability of packet occurrence in one slot is exactly λ. Hence. the received power based on path loss and shadowing alone. ⎪TB + X − LR. 2 illustrated. ⎪TR − X . As Fig.. RFID reader sends out a modulated carrier to power up the tags as well as to carry the command message. in general. TC. 2. the 2. the idle time (denoted by Tidle in Fig. and only the header of each RFID packet will occupy the time period greater than one time slot of Bluetooth.4835 GHz. However. Hence. the traffic load of Bluetooth is exactly the probability of packet occurrence in one slot. Although BER is drastically changing. The slot time is denoted by TS. According to the standard. It is a continuous variable and is uniformly distributed between 0 and LR (the average inter-arrival time of two consecutive RFID transmissions). RFID also adopts FHSS to avoid the inband 220 . the Bluetooth packet may overlap with the RFID packets in time domain. MAC SUBLAYER INTERFERENCE A. Pb = ∫ +∞ −∞ Pb ( x / Pi ) p ( x)dx 2 2 − ( a +b ) x − 1 +∞ 1 = ∫ (Q(a.b) − I0(ab) ⋅ e 2 )e Pr dx Pr −∞ 2 (16) III. TR − TB < X < TR (19) ⎪ TC = ⎨ TR ≤ X < LR − TB ⎪0. It is noted that the RFID reader still sends out an unmodulated carrier to power up the passive tag when the tag transmit the response packet in the uplink. i. 3 or 5 slots.. LR − TB ≤ X ≤ LR ⎩ [2] TB ≤TR & TB >Tidle 0 ≤ X ≤ TR − TB ⎧TB. k = 0.. According to the theory of probability. the received signal has Rayleigh distributed amplitude in the fading channel. (14) is the result in the AWGN channel. Bluetooth system adopting FHSS can hop on the 79 channels uniformly.e. therefore SIR is also exponentially distributed. the time domain is divided into 625-us-long slots by Bluetooth system and each packet can occupies 1.Figure 1. The time domain packet collision model B. Bluetooth’s BER versus SIR The BER obtained in Eq. The operating frequency is: (21) f = 2402+ k MHz. the RFID command packet in forward link and the response packet in reverse link have an equivalent interference to the Bluetooth packet. the interfering power of RFID is assumed to be constant. To simplify the analysis. In this model. in the real world. but in order to comply with out-of-band regulations in each country. if the traffic load of Bluetooth is λ . the RFID packet is definitely longer than the Bluetooth one. TR − TB < X ≤ LR − T B ⎪ TC = ⎨ TB − Tidle. the BER is the function of the SIR. 2) and the transmission time (denoted by TR in Fig. 2 is the collision model of RFID and Bluetooth packet in time domain. The Frequency Domain Collision Bluetooth system occupies from 2. λ ∈ [0. LR − TB < X ≤ TR ⎪ ⎪TB + X − LR. the BER is also a random variable. To simplify the analysis. The collision time is defined as the time interval in Bluetooth packet which is overlapped by the RFID packet and is the time duration of the interference. ⎪TR − X . (18) X ∼ U (0.1] Since Bluetooth transmits packets in the slot.45 GHz RFID operate at the data rate of up to 40 Kbps. therefore. Fig. 
considering the effect of the multipath. the one slot packet is used. 2) has the following relation. Since the SIR is randomly changing. (14). the average BER of Bluetooth can be determined. 1 (15) p ( x ) = e − x / Pr Pr where Pr is the average received power of the signal.78 Each RF channel is 1 MHz wide and 79 channels are available. the traffic load (denoted by λ) is taken into account. but the Bluetooth packet only occupies TB of the slot time to transmit.4 to 2. a guard band is used at the lower and the upper band edge. The received power of the Rayleigh fading follows the exponential distribution. [1] TB ≤TR & TB ≤Tidle 0 ≤ X ≤ TR − TB ⎧TB . 1 (17) Tidle = ( − 1)Tbusy. As illustrated by Eq. and the x is the instantaneous received power.

Forward Error Correct (FEC) is used to combat the bit error in the packet. which is analyzed in section III. However. The Average Delay and the Throughput The master and the slave of Bluetooth use Time Division Multiplexing (TDD) to realize the bidirectional link.1] (23) ⎪ S=⎨ ⎪ 2 − X. and Tb is the bit duration of Bluetooth. the probability of frequency collision of Bluetooth and RFID is the product of the mean overlapping area and the probability of the Bluetooth channel occupied. Figure 3. i = 1.. The retransmission time should be the multiple of the slot time.2. X ∈ (0. S = ∫ XdX + ∫ (2 − X)dX = 1 0 1 1 2 into account both the time domain and the frequency domain is the product of the collision time and the probability of frequency collision. we assume that the power of Bluetooth and RFID is uniformly distributed in the occupied channel. Suppose the BER with and without the interference is denoted by Pb and Pb0. And the symbol S denotes the ratio of the shadowing area and the power of the whole signal. (18) illustrated.. Since the Bluetooth is transmitting according the time slots.8192 MHz. Even if the errors are uncorrected. X ∈ (1. (0 < Pp < 1) 1 − Pp n =1 (31) Under the assumption that one master and only one slave are present at the network.interference and can hop on 100 channels... the voice packet cannot be retransmitted since the voice is delaysensitive and can tolerate some bit errors. The packet error rate (PER) is: Pp = PER = 1 − (1 − Pb)T ′ / Tb (1 − Pb0)(TB −T ′) / Tb (28) B. 3.. X <0 ⎧0.1. the average transmission times for a successful packet transmission is: ∞ 1 m = ∑ (1 − Pp ) Pp n −1 × n = . So the actual collision time is: (26) T = TC × PC = TC /79 The actual collision time T is a random variable. Therefore. this packet will be retransmitted when next transmitting turn of the node. The retransmissions are required when the data packet is detected to contain some errors in the receiver. 1 LR (27) T ′ = E[T ] = T ( X )dX LR ∫ 0 The PER can be derived from the BER and the average collision time. if the packet of RFID and Bluetooth are transmitted by different frequencies. m = 0. The S can be easily obtained according to the offset X as the following formula illustrated. Since the time offset is uniformly distributed on 0 to LR as Eq. Assumed that the number of retransmission is unlimited. the variable X). if one data packet is detected to be error. The probability of one successful packet transmission after (k-1) retransmission is: (30) P (k ) = (1 − Pp) Pp k −1 Since the required times of packet transmission is Geometric distributed.e. the actual collision time taken 221 . Time Division Multiple Access (TDMA) is adopted to enable the communications between one-to-multiple nodes. The frequency domain collision model Since the channels of RFID and Bluetooth are unaligned. the master and the (24) Hence. which is the function of the time offset (i. each channel of RFID is partially overlapped with its adjacent channels. each of which has a bandwidth of 1 MHz. respectively. PERFORMANCE OF BLUETOOTH UNDER THE INTERFERENCE A. and m is the channel number ranged from 0 to 99. It can be calculated by: (25) PC = Pf × S = 1/79 IV..99 (22) where fCH is the frequency spacing. The operating frequencies of the RFID are: fC = (2931 + m) × fCH .. X >2 ⎩ The mean power of one signal is overlapped can be figured out as follows. ⎪ X. Packet Error Rate The collision time TC is the time interval of one Bluetooth packet overlapped with the interfering RFID packet. fCH = 0. 
if multiple slaves are present in the piconet.2] ⎪0.. At the same time. The shadowing area in the figure is the area which the two channels are overlapped with. (29) τ = iTS.. therefore the retransmission is following the Geometric distribution. the offset of two different channels is a random variable that denoted by X in Fig. To simplify the analysis. the inband interference is negligible. By deconditioning with the random variable. For voice packet of the Bluetooth. the average collision time without the effect of the time offset is given by the following integral. Since the frequency spacing is less than the bandwidth.

a mathematical model is proposed to quantify the performance degradation of Bluetooth system under the interference of 2. John Wiley & Sons. Rome. (29) is the double of the average transmission times m in above formula. In the simulation. “Packet Error Rate Analysis of Zigbee under WLAN and Bluetooth Interference. λ ⋅ Spayload (34) ρ= VI.” August 2003.5 4 Interference distance (meter) 4.6 [4] [5] 0. 4 too. 2825-2830. The theoretical result of the mathematical model is also obtained and is shown in Fig.8 PER of Bluetooth [2] Simulation [3] 0.2 [6] 0 1 1.45 GHz RFID system. Soltanian. 1240-1246.15. “Interference Modeling and Performance of Bluetooth MAC Protocol. The factors that can impact the interference include PHY (the transmitted power.. “Interference of Bluetooth and IEEE 802. June 2001.15. 2nd ed.2. MSWIM’01.” in Proceedings of IEEE ICC’01. the throughput. Van Dyck.2: Coexistence of Wireless Personal Area Networks with Other Wireless Devices Operating in Unlicensed Frequency Bands.4 0. N. Klaus Finkenzeller. November 2003.slave use TDD to realize the duplex. July 2001. vol. The result of this paper can provide the criteria for coexisting between RFID and Bluetooth. “Part 15.11: Simulation modeling and performance evaluation. REFERENCES [1] IEEE Std.E.” Sept. CONCLUSION In this paper. Rebala. R.1: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Wireless Personal Area Networks (WPANs).” IEEE Transactions on Wireless Communications.5 2 2. The anti-interference algorithms can be designed based on the analysis of this paper. the average delay time of Bluetooth data packet is: 2TS (33) τ= 1 − Pp Another performance metric. In below formula. the proposed mathematical model is verified.11b Systems.” Wireless Networks. Carlos de Morais Cordeiro and Dharma P. “Interference in the 2. R. Part 4: Parameters for air interface communications at 2. 9. IEEE Std. RFID Handbook. Glomie. Golmie. the distance between the RFID interferer and the Bluetooth receiver is changed from 1 meter to 5 meters. As Fig.4 GHz ISM Band: Impact on the Bluetooth Access Control Performance. the simulation result is very close to the theoretical result. 802. 4 illustrated.E. Finland. Italy. the average delay and the throughput of Bluetooth are selected as the performance metrics.” in Proceedings of the Fourth ACM International Workshop on Modeling. [7] The simulation is done and the result is illustrated in figure 4. τ V.5 3 3. pp 201-210. 2006. pp. The delayed slot number i in Eq.” 2005. 6. SIMULATION COMPARED WITH THE THEORETICAL RESULT Comparison of theoretical result and simulation 1 Theoretical 0. VII. Simulation results are compared with the theoretical results. Helsinki. A. 2003. Tonnerre and O.5 5 Figure 4. Van Dyck. 2003. Therefore. Nada Golmie and Frederic Mouveaux. “Radio frequency identification for item management. 2. 802. Analysis. ISO/IEC 18000-4. and A.45 GHz. Soltanian. the packet format) are taken into account. the symbol Spayload is the payload length of each packet. [8] [9] 222 . August 2007. “Interference Evaluation of Bluetooth and IEEE 802. The PER is inversely proportional to the interference distance. A. The PER. the distance. and Simulation of Wireless and Mobile Systems. N. vol. Agrawal. can be calculated easily from the average delay as follows. “Part 15. the modulation scheme and the channel model) and MAC (the frequency hopping pattern. vol. pp. Soo Young Shin and Hong Seong Park. 2 (32) i = 2m = 1 − Pp Hence.1.” IEEE Transactions on Wireless Communications.

Therefore. [2].e. signal processing. Signals from the brain. Ideally. Brain-Computer Interface. software architecture will be presented and followed by its optimization strategies. Germany. . ag@iat. Axel Gräser Institute for Automation University of Bremen Bremen. These advantages are important. INTRODUCTION Brain-computer interface (BCI) systems are designed to enable humans to send commands to a computer or machine without using the normal output pathways of the brain [1]. Mean accuracy for the spelling system is 92.2009 International Conference on Computer Engineering and Technology Optimization Strategy for SSVEP-Based BCI in Spelling Program Application Indar Sugiarto Department of Electrical Engineering Petra Christian University Surabaya. We have tested our system on 106 subjects during CeBIT 2008 in Hannover.g. and the application (spelling program) in the same display/screen makes it easier for the subject to concentrate and it also simplify the system configuration. usually nonelectrical signals from the surface of the scalp (EEG activity). signal processing. and application (spelling program). In addition. a certain stimulator is required. especially in a spelling program application. At the beginning.ac.00 © 2009 IEEE DOI 10. The maximum synthesizable frequency of up to 30 Hz with frequency resolution 0. no matter which software technology is applied (DirectX or OpenGL).id Brendan Allison. frequency resolution. Germany e-mail: allison@iat. To enable a signal processing system recognizing specific features of the brain signals in the cue-based BCI system. frequencies of the evoked potentials match the frequencies of stimuli. and frequency stability. and application (spelling program).2009. the software analysis result will be presented and discussed. Ideally. e. spelling program elicited using flickering light [3].11Hz is achieved. i. Indonesia e-mail: indi@petra. those three components should run on different processing units in order to obtain optimum performance. The problem arises when using general purpose computers. we will focus on the display driver technology and programming aspects. flickering light. are classified and then translated into commands using certain digital signal processing (DSP) algorithms.unibremen. Then. II. In the spelling program application.de Abstract— This paper describes an optimization strategy for steady state visual evoked potential (SSVEP)-based braincomputer interface (BCI). because end users need a BCI that does not require elaborate hardware (such as customized LED boards or separate computing systems) or expert help (such as to find working SSVEP frequencies or adapt the system to each user). It can be concluded that using a computer monitor as the stimulator. which are usually 978-0-7695-3521-0/09 $25.g. We tested our program on several computers for the following parameters: frequency range. In this application. and the spelling program. there are at least three components for implementing a complete BCI application: stimulator. The optimization of spelling system will be focused on the layout and representation of the letter matrix. EEG. Software Architecture There are three components required for running an SSVEP-based BCI system in a spelling program application as depicted below: I. But integrating those three components in one computer system also gives advantages: make it easier for the subject to concentrate and simplifies the system configuration. the spelling program also provides mental feedback for the BCI user. 
steady state visual evoked potential (SSVEP)-based BCI. laptops. providing the stimulator.uni-bremen. SSVEP. a signal processing unit. But integrating those three components in one computer system also gives advantages.1109/ICCET. At least.189 223 Figure 1. METHOD A.de. those three components should run on different processing units in order to obtain optimum performance. e. There are two main parts that need to be optimized: the flickering animation and the spelling system. Three components required for running SSVEP-based BCI system in a spelling program application: a stimulator. the DSP algorithm translates those frequency responses into commands for controlling cursor movement and character/letter selection.5%. In an SSVEP-based BCI. the optimization strategy described here led to a stable and reliable system that performed effectively across most subjects without requiring extensive expert help or expensive hardware. This paper will be presented in the following outline. When optimizing the flickering animation. for implementing a complete BCI application including stimulator. the maximum synthesizable stimulator frequency is always half of its minimum refresh-rate.

• The fact that many words composed mainly by vowels. and SELECT. is responsible for signal acquisition. • Since a word formation is achieved first by moving the cursor around. Note that this result can be expanded with additional characters if required. and the spelling program. U. Commands generated by the second program will be sent to the first program through a network connection. E. The spelling system works as follows: • We provide collection of letters arranged in a matrix and a moveable cursor within this matrix. the letter at the cursor position will be selected and displayed on the appropriate position. This cursor can be moved up. we create five flickering boxes on the monitor screen with label: UP. The cursor position is indicated by red-shaded cell. B. the spelling program also provides mental feedback for the BCI user. Characters are arranged in a rhombus layout in order to achieve higher efficiency for cursor movement. Three components required for running SSVEP-based BCI system in a spelling program application: a stimulator. To integrate those three components in a single computer. Based on such probability analysis. One simple way is the square matrix with ascending order as shown below: Figure 3. I. From our previous work [6]. We can arrange the letters in many forms and orders. LEFT. left. down. And if the Signal Processing Program is able to detect this intention. O around the base of the cursor position. In this way. down. and command generation. For this purpose. The second program. The next part that needs to be optimized is the stimulator itself. but these are not modified in this work and are not mentioned further here. Optimization Strategies In the Display Program. feature extraction and classification. In addition. The first program. any BCI also requires a fourth component. Both programs are written in C++.Of course. In this work. it is reasonable to put vowels’ letter such as A. The following figure shows the optimization result from the aforementioned approaches. In addition. namely sensors to acquire the brain signal [1]. it is better if the distance of a letter from the center is kept as near as possible. which is called the Signal Processing Program. the collection of selected letters will form a Figure 4. it is revealed that 224 . DOWN. a signal processing unit. we can construct a letter matrix in an irregular but efficient way. or right according to the command interpreted by the Signal Processing Program from user’s EEG signals. it is better if the home position of the cursor is located at the center of the matrix. we used spatial filter called Minimum Energy Combination described in [4] as the core for the signal processing unit. and right). • When the user wants to select a letter to form a word. One simple way to organize letters is in a square matrix. and in this paper we focus on optimization for stimulator and the spelling program. is responsible for displaying flickering animation as the stimulator and also displaying the letter matrix for the spelling program as well as the visual feedback for the user. since we only utilize four possible movement (up. we created two programs working together and connected via TCP/IP socket. left. which is called the Display Program. there are two main parts that need to be optimized: the flickering animation and the spelling system. he/she must concentrate his/her gaze on the corresponding stimulus. Figure 2. word/phrase. The above configuration can be optimized in the following way. 
Detail analysis using letter probability of words done by Thorsten Lüth [5] shows that letter E is the most commonly used letter. RIGHT. The following diagram shows this architecture.

In order to achieve robust and high-resolution flickering frequencies, we developed our program with the following approaches:
• Each of the five flickering boxes has its own thread and a corresponding timer. The timer intervals are calculated as
    Interval = 1/(2*fLed),    (1)
where fLed is the flickering frequency of the animation.
• All of these threads are synchronized in one process, and we set the process to high priority, above all other Windows processes. In addition, we set a CPU mask-affinity on this process in order to utilize the first CPU core and give the second core to the Signal Processing Program.
• Two graphics display technologies are utilized and compared: DirectX 9.0 and OpenGL 2.0. No matter which software technology is applied (DirectX or OpenGL), using these technologies improves program performance significantly, because they maximize the utilization of the graphics card.
• When developing the program with the OpenGL approach, we used the QGraphics framework from Qt [7]. We optimized the QGraphicsView class with the following parameters:
    o ScrollBarPolicy: ScrollBarAlwaysOff
    o Interactive: false (no interaction with other GUI events)
    o CacheMode: CacheBackground
    o OptimizationFlags: DontClipPainter and DontAdjustForAntialiasing
Figure 6 shows a screenshot of our SSVEP-based BCI for spelling application.

Figure 5. Comparison between the OpenGL and DirectX methods for displaying flickering animations on the computer screen.
Figure 6. Integrated SSVEP-based BCI for spelling application.

III. RESULTS AND DISCUSSION
We tested our program on several computers for the following parameters: frequency range, frequency resolution, and frequency stability. Since we run the program on the Windows platform, we use a high-resolution timer called the Multimedia Timer, which has a resolution down to 1 ms. In order to produce a flickering animation at 17 Hz, we have to set the timer interval to 1/(2*17) = 29.4 ms = 29 ms (integer value), according to equation (1). The maximum synthesizable frequency of up to 30 Hz is adequate, since the optimum flickering frequency for a low-frequency SSVEP-based BCI is around 15 Hz [8].

To measure the frequency resolution, we first have to measure the timer accuracy at run time. Using the Windows Multimedia Timer and executing the program as a normal-priority process, we found that for a timer interval of 29 ms the standard deviation is just 0.46 ms. We then increased the process priority, which yielded a standard deviation of about 0.1 ms; this improves the frequency resolution from 0.55 Hz to 0.11 Hz. The frequency shift is calculated in the following way:
    Frequency shift = dT / T^2,    (2)
where T is the nominal timer interval and dT its measured standard deviation. It means one may expect that the flickering frequency produced using this Multimedia Timer may be shifted about 0.55 Hz above or below the expected frequency (about 0.11 Hz when the process runs with high priority).

To measure the flickering stability against variation of the animation size and the number of animation objects, we conducted experiments with computers of various CPU ratings, varying the animation size and the number of animation objects; Figure 5 shows the comparison between the DirectX and OpenGL approaches. Animations with plain texture elicit a better SSVEP response than animations with checkerboard texture; that is why we use a black-and-white animation with plain texture for the stimulator. We also tested our program with subjects during CeBIT 2008 in Hannover and collected useful information such as spelling speed, accuracy, ITR, and many important neuro-psychological parameters [9].

It can be concluded that when using a computer monitor as the stimulator, no matter which software technology is applied (DirectX or OpenGL), the maximum synthesizable stimulator frequency is always half of its minimum refresh rate (for a CRT monitor) or response time (for an LCD-TFT monitor).
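The timer arithmetic of equations (1) and (2) is easy to reproduce. The following short Python sketch is our own illustration (not the original Windows Multimedia Timer code); the function names are ours, and the jitter figures are the ones reported above:

    def timer_interval_ms(f_led_hz):
        """Equation (1): one timer tick per half-period, rounded to integer ms."""
        return round(1000.0 / (2.0 * f_led_hz))

    def frequency_shift_hz(interval_ms, jitter_ms):
        """Equation (2): expected flicker-frequency shift for a given timer jitter."""
        return 1000.0 * jitter_ms / interval_ms**2

    interval = timer_interval_ms(17.0)          # -> 29 ms for a 17 Hz stimulus
    print(frequency_shift_hz(interval, 0.46))   # ~0.55 Hz at normal process priority
    print(frequency_shift_hz(interval, 0.10))   # ~0.11 Hz at high process priority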

Figure 7. Comparison of the optimum animation (LED) size for three computers in four different procedures: 1) a single non-textured object; 2) a single checkerboard-textured object; 3) three non-textured objects; 4) six checkerboard-textured objects, displayed simultaneously on the screen.

The graph in Figure 7 shows the result of this experiment. It can be seen that the highest performance is achieved by the HP Compaq nx9420, while the lowest performance belongs to the Acer Aspire 5510. According to the Windows Experience Index of Windows Vista, the HP Compaq nx9420 has a rating of about 5, the Acer Extensa 5620 about 4, and the Acer Aspire 5510 about 3. It seems that the software performance, as indicated by the size of the animation objects as well as the number of animation objects displayable simultaneously on the screen without disturbance, is greatly influenced by the computer performance.

During the experiment, the subject was asked to spell five words: 'BCI', 'Chug', 'Siren', 'Brain Computer Interface', and one free spelling. On average, subjects needed about two minutes to spell 'BCI' (122 s) and about eight minutes to spell 'Brain Computer Interface' (475 s); 'Chug' took about 139 s and 'Siren' about 111 s. The mean accuracy for all five words was 92.72%, with a mean ITR of 13.852 bits/minute.

IV. CONCLUSION
The optimization strategy for the Bremen SSVEP-based BCI in a spelling program application has been presented, and the resulting performance of the system has been evaluated. We summarize the optimization strategy as follows: using advanced graphics driver technology (DirectX and OpenGL), using a high-resolution timer (the Windows Multimedia Timer), optimizing the multi-thread feature of a dual-core processor, and selecting a computer with a high CPU rating. The optimization strategy in this paper is focused on the display performance of the stimulator and the speller program, without external LEDs or expensive hardware. We have tested our system on 106 subjects during CeBIT 2008 in Hannover. However, external factors such as light reflection and interference will also affect the overall performance of the spelling system using an SSVEP-based BCI. Future work should address these two concerns and try to further improve SSVEP BCI performance while minimizing the need for expert help.

REFERENCES
[1] J.R. Wolpaw, N. Birbaumer, D. McFarland, G. Pfurtscheller, and T.M. Vaughan, "Brain-Computer Interface for Communication and Control," Clinical Neurophysiology 113, 2002.
[2] B. Allison, D. McFarland, G. Schalk, S.D. Zheng, M. Jackson, and J.R. Wolpaw, "Towards an Independent Brain-Computer Interface Using Steady State Visual Evoked Potentials," Clinical Neurophysiology 119, 2008, pp. 399-408.
[3] J. Ding, G. Sperling, and R. Srinivasan, "Attentional Modulation of SSVEP Power Depends on the Network Tagged by the Flicker Frequency," Cerebral Cortex, Vol. 16, pp. 1016-1029, Oxford University Press, July 2006.
[4] O. Friman, I. Volosyak, and A. Gräser, "Multiple Channel Detection of Steady-State Visual Evoked Potentials for Brain-Computer Interfaces," IEEE Transactions on Biomedical Engineering, Vol. 54, No. 4, April 2007, pp. 742-751.
[5] T. Lüth, "Tools zur Verbesserung der Leistung von Brain-Computer Interfaces" (Tools for improving the performance of brain-computer interfaces), Diplomarbeit, Universität Bremen, 2006.
[6] G. Garcia, "High Frequency SSVEPs for BCI Applications," Computer-Human Interaction 2008, Florence, Italy, April 2008.
[7] "Creating Cross-Platform Visualization UIs with Qt and OpenGL," Trolltech Whitepaper, Trolltech Inc.
[8] Y. Wang, R. Wang, X. Gao, B. Hong, and S. Gao, "A Practical VEP-Based Brain-Computer Interface," IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 14, No. 2, June 2006.
[9] I. Sugiarto, B. Allison, and A. Gräser, "Display Optimization in SSVEP BCIs," Computer-Human Interaction 2008, Florence, Italy, April 2008.

Session 4


A Novel Method for Web Page Segmentation and Identification

Jing Wang (1), Zhijing Liu (1)
(1) School of Computer Science and Technology, Xidian University, Xi'an 710071, China
E-mail: wangjing@mail.xidian.edu.cn, liuprofessor@163.com

Abstract—A method of page segmentation and recognition based on a generalized hidden Markov model (GHMM) is presented in this paper. Considering the limitation of traditional HMM approaches, which take only the semantic term as the observed emission feature, we use multiple emission features (term, layout, and formatting) instead of a single emission feature (term), and the output of each observation is released by the term shift. The experimental results indicate that, compared with the original page segmentation algorithm, the operating efficiency of this algorithm is enhanced by 14.3%, and the quality of segmentation is significantly improved.

Keywords: Web Page Segmentation; Generalized Hidden Markov Model (GHMM); Vector Space Model (VSM); Web Page Identification

I. INTRODUCTION
Along with the rapid and continuous development of Internet technology, the Web has become the biggest information source in the world. How to extract the needed information, however, is a pressing problem, and we can process web information accurately only when a page is segmented precisely. Page segmentation and recognition is therefore an important part of web page processing, and it has become a new research direction in recent years.

The most popular methods of web page segmentation are web DOM tree marking [1], page location coordinates of entities [2], and the VIPS page segmentation algorithm [3]. These methods mainly consider the layout and structure of a web page, but they do not take the features of the web content in different areas into account; they are too reliant on the structure of the web page. Therefore, a method based on a generalized hidden Markov model is presented in this paper, which considers both the structure and the content of web pages and can carry out the division of a homepage effectively. Because it is easy to establish and has advantages such as accurate modeling, good adaptability, and a high recognition rate, this model has attracted the attention of researchers.

II. THE ANALYSIS OF WEB PAGE

A. The structure of a web page
It is a characteristic of web pages that logically interdependent content blocks are organized together: each block expresses a central subject, according to the page content as well as the structural configuration. We may thus regard a page as composed of different content blocks. The so-called page segmentation is to divide a subject page into regions, such as the general navigation area, the theme labels area, and the theme text area.

According to their manifestation and function, web pages can be divided into three categories. One kind is the subject homepage, which is composed of massive non-link text and few hyperlink texts. Another is the label page, also called the leader page, which is mainly composed of text hyperlinks whose role is to generalize the themes of the linked web pages; its labels can be classified into navigation labels (navigation bar) and theme labels. Homepages composed of few texts and many pictures are called picture homepages. After web page segmentation, it becomes easier for machines to identify content and extract it automatically.

B. The text attribute of a web page
For text processing by a traditional HMM, the frequency of each word present in different content documents follows a certain rule: the higher the frequency, the more important the word. So we extract characteristic words and build feature vectors with the VSM model; different characteristic words can distinguish different content texts, and the text in each block is expressed in vector form.

The text displayed in the different blocks is also formatted according to its content. Because there is more embellishment for headings than for body text in a web page, headline fonts will be displayed larger than the text. There are other layout features such as font, size, italics, background color, and background images; for example, if the relevant text is a hyperlink, it will differ in color from the general text. Web information therefore contains other emission features, such as format and layout, which can be observed to help improve the state transition estimation of an HMM; it is more suitable for web pages to take these factors into consideration. When a homepage content block is taken as a condition in the GHMM, the transition probability between blocks is decided by the released observation features, which distinguishes each block of the homepage.
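To make the VSM representation of a text block concrete, here is a small Python sketch (our own illustration, not code from the paper) that builds term-frequency weights over a fixed keyword list; any weighting scheme, e.g. the term-weighting approaches of [4], can be substituted:

    from collections import Counter

    def vsm_vector(tokens, vocabulary):
        """Represent one page block as VSM weights over the keywords t_1..t_n.

        tokens: list of words extracted from the block
        vocabulary: the n keywords spanning the feature space
        """
        counts = Counter(tokens)
        total = max(len(tokens), 1)
        return [counts[t] / total for t in vocabulary]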

III. GHMM
A web page can generally be divided into regions such as Navigation, Theme Text, User Interaction, Copyright, and Website Label. Although their respective positions may differ according to each website's design style, this does not affect the division of the content. So we can establish a hidden Markov model of five states: the five states themselves are not directly visible, but each observation vector belongs to a certain state, and all the observation vectors are divided into these five states. In this way the model records the individual differences between the different regions of a homepage. The state transitions of the GHMM are shown in Figure 1.

Figure 1. The state transition of GHMM.

The Vector Space Model (VSM) [4] is one of the most popular models for representing the features of text contents. A document d is expressed as a vector composed of a series of keywords:
    V(d) = (t1, w1(d); t2, w2(d); ...; tn, wn(d)),
where ti denotes the i-th keyword and wi(d) is the weight of ti in document d; t1, t2, ..., tn refer to a system of n-dimensional coordinates, and w1(d), ..., wn(d) are the corresponding coordinate values.

The related definitions are as follows. Formally, a GHMM λ = (A, B, π) consists of the following five parts [6]:
a) N, the number of states in the model. The states are S = {S1, S2, ..., SN}; the state at time t is qt, with qt ∈ S.
b) M, the number of distinct observable symbols, V = {V1, V2, ..., VM}; the observable symbol at time t is ot, with ot ∈ V. In a GHMM, an observation symbol k is extended from its mere term attribute to a set of attributes k1, k2, ..., kz, where Z is the number of attribute sets of the observation symbols and Ms is the number of distinct observation symbols of attribute s, 1 ≤ s ≤ Z; accordingly B = {B1, B2, ..., BZ}.    (1)
c) π = {π1, π2, ..., πN}, the initial state probability distribution, where πi = P(q1 = Si), 1 ≤ i ≤ N.
d) A = (aij)N×N, the hidden state transition probability distribution, where aij = P(qt+1 = Sj | qt = Si), 1 ≤ i, j ≤ N, is the probability of being in state j at time t+1 given that we were in state i at time t.
e) B = (bj(ks)), the emission probability distribution, where
    bj(ks) = P(ot,s = Vk | qt = Sj), 1 ≤ j ≤ N, 1 ≤ k ≤ Ms,    (2)
is the probability of observing symbol Vk of attribute s given state j. We consider the overall feature as a linear combination of these attributes, that is, bj(k) = Σs αs·bj(ks), where αs is the weight factor for the s-th attribute, 0 ≤ αs ≤ 1 and Σs αs = 1.

A common goal of learning problems that use an HMM is to figure out the state sequence that has the highest probability of having produced an observation sequence, i.e. to find the state transition sequence I = i1, i2, ..., iT that maximizes
    P(O, I | λ) = P(O | I, λ)·P(I | λ) = π_{i1}·b_{i1}(o1)·a_{i1 i2}·b_{i2}(o2) ··· a_{iT-1 iT}·b_{iT}(oT),    (3)
where T is the number of observed symbols. For GHMM-based learning problems, equation (3) becomes
    P(O, I | λ) = π_{i1}·[Σs αs·b_{i1,s}(o_{1,s})]·a_{i1 i2}·[Σs αs·b_{i2,s}(o_{2,s})] ··· a_{iT-1 iT}·[Σs αs·b_{iT,s}(o_{T,s})],    (4)
and the goal is to find the most probable state sequence using the extended Viterbi algorithm [5], which solves this efficiently.
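The extended Viterbi recursion of equation (4) differs from the standard one only in that each emission probability is the weighted sum over the attribute-specific emission tables. The following Python sketch is our own minimal illustration of that idea (all names are ours; log-probabilities and smoothing are omitted for brevity):

    import numpy as np

    def ghmm_viterbi(obs, pi, A, B_list, alpha):
        """Most probable state sequence under a GHMM.

        obs:    T x Z integer array; obs[t, s] is the symbol of attribute s at time t
        pi:     initial state distribution, numpy array of shape (N,)
        A:      state transition matrix, shape (N, N)
        B_list: list of Z emission matrices; B_list[s][j, k] = b_j(k_s)
        alpha:  attribute weights, alpha[s] >= 0, sum(alpha) == 1
        """
        T, Z = obs.shape
        # Combined emission b_j(o_t) = sum_s alpha_s * b_j(o_{t,s}), cf. eq. (4)
        def emit(t):
            return sum(alpha[s] * B_list[s][:, obs[t, s]] for s in range(Z))

        delta = pi * emit(0)                   # best path probability so far
        psi = np.zeros((T, len(pi)), dtype=int)  # back-pointers
        for t in range(1, T):
            cand = delta[:, None] * A          # cand[i, j]: via state i into state j
            psi[t] = cand.argmax(axis=0)
            delta = cand.max(axis=0) * emit(t)
        path = [int(delta.argmax())]           # backtrack
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1]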

The emission probability is thus the linear combination
    bj(k) = α1·bj(k1) + α2·bj(k2) + α3·bj(k3),    (5)
where α1, α2, and α3 are the weights for the term attribute, the format attribute, and the layout attribute, respectively, with 0 ≤ αi ≤ 1 and α1 + α2 + α3 = 1. Based on the above discussion, a new page segmentation method based on the improved GHMM is obtained.

IV. EXPERIMENTAL RESULT AND ANALYSIS
In order to verify the page segmentation and identification method presented in this paper, we randomly selected 200 online pages, of which 150 are used as the training set, while the remaining 50 are used as the test set. We take the recall and precision ratios as evaluation standards, together with the F measure, which combines recall (R) and precision (P). They are defined as:
    R = (number of correct Web segmentations) / (number of all Web segmentations)
    P = (number of correct Web segmentations) / (number of possible Web segmentations)
    F = 2·R·P / (R + P)

The experiment compares the accuracy of three extraction methods: (1) the DOM tagging tree method; (2) the VIPS page segmentation method; and (3) the method based on GHMM. Table 1 gives the comparison of the three methods in average precision (columns: P, R, F for each method); Table 2 gives the precision of the three methods in the different areas of a web page (rows: Navigation, Theme Text, Theme Labels, User Interaction, Copyright, Website Label).

TABLE 1. THE THREE METHODS IN AVERAGE PRECISION.
TABLE 2. THE PRECISION OF THE THREE METHODS IN DIFFERENT AREAS OF A WEB PAGE.

From Table 1 we can see that the method based on GHMM has the highest recognition accuracy; the novel method takes both the layout and the content of pages into account, so it has a higher accuracy rate than the other methods. Comparing the segmentation results for the different areas in Table 2, the results indicate that the accuracy of the improved method is raised by about 20 percent in the Navigation area and the Theme Labels area. This is because these two areas have significant features, such as the texts of related topic links being displayed in the form of hyperlinks. The results also show that the new method improves page segmentation and recognition accuracy markedly.

V. CONCLUSION
According to the characteristics of web pages, a new page segmentation method based on an improved GHMM is presented in this paper. In the future, we will improve the effective learning of the GHMM, so that the model can be fully automated without a great deal of manual intervention.

ACKNOWLEDGEMENTS
This research project was promoted by the National Science & Technology Pillar Program No. 2007BAH08B02.

REFERENCES
[1] Chakrabarti S. Integrating the document object model with hyperlinks for enhanced topic distillation and information extraction. Proc. Tenth International World-Wide Web Conference, 2001, pp. 211-220.
[2] Huang Weitong, Bao Hong, Xue Weimin. Web page classification based on VSM. Proceedings of the 6th World Congress on Intelligent Control and Automation, June 21-23, 2006, pp. 6111-6114.
[3] Osawa Y, Benson NE, Yachida M. KeyGraph: Automatic indexing by co-occurrence graph based on building construction metaphor. Trans. IEICE, J82-D-I:391-400.
[4] Salton G, Buckey C. Term-weighting approaches in automatic text retrieval. In: Readings in Information Retrieval. Morgan Kaufmann, 1997.
[5] Du Shi-ping. The Viterbi algorithm of mixture of HMM2 [J]. Journal of Yunnan University, 2006, 28(2):98-102.
[6] Chen Jinlin, Zhong Ping, Cook Terry. Detecting Web content function using generalized hidden Markov model. IEEE Proceedings of the 5th International Conference on Machine Learning and Applications, 2006, pp. 323-328.

Disturbance Observer-based Variable Structure Control on the Working Attitude Balance Mechanism of Underwater Robot

LI Ming (1), LIU Heping (2)
(1) School of Electronic Information Engineering, Henan University of Science & Technology
(2) Department of Precision Machinery, School of Mechatronics Engineering and Automation, Shanghai University
lim@mail.haust.edu.cn

Abstract
When an underwater robot is to work under the water, the manipulator has to reach out. This changes the center of gravity and causes the robot to pitch, and it is difficult to operate a pitching robot accurately under the water. An automatic mechanism was developed to balance the pose of the robot, and a sliding mode variable structure controller was designed to control the balance device, in which a disturbance observer was used to observe the disturbances. This reduced the switching gain of the control system greatly and consequently weakened the chattering. A practical result was achieved for the control of the pitch change caused by the manipulators and the detecting pan/tilt, indicating the practicability of the balance device and its control approach.

1. Introduction
Nowadays, with the development of our society and the progress of science, the application of underwater robots is becoming more and more popular. Their working modes range from cleaning, kelp reaping, sampling, drilling, cutting, and gripping in civil fields to mining, mine sweeping, anti-terrorism, and explosive disposal in military fields. Intelligent underwater robots with the ability of perceiving, thinking, and decision-making, and with strong multi-functionality, will be developed rapidly, and more missions will be finished by underwater robots substituting for human aquanauts.

A robot is a complicated dynamic system with a serious nonlinear property. For an underwater robot with an open-frame structure, a strong coupling takes place among the movements in each degree of freedom, and a strong nonlinear movement always exists. The viscous resistance and the additional mass influence the movement, and the properties differ between static water and flowing water. It is impractical to set up an accurate dynamic model, and a number of tests are necessary to estimate the numerous hydrodynamic coefficients. The control of the position and attitude of an underwater robot is therefore an arduous task, and it is difficult to maneuver a pitched robot.

According to reference [1], the fixed coordinate system E-ξηζ and the motion coordinate system O-xyz are set up as shown in Fig. 1, with both origins on the center of gravity of the ROV. A distance B0, named the metacentric height, is designed along the z axis to produce an uprighting moment, and the center of gravity coincides with the center of buoyancy in the x-y plane:
    x_G0 = x_F0,  y_G0 = y_F0,  z_F0 - z_G0 = B0.
At this time the robot is in the balance position with tiny buoyancy.

Figure 1. Fixed and motion coordinate systems.

When not working, the manipulators equipped on the robot are usually furled beneath the robot, as shown in Fig. 2. When needed, the manipulators reach out from the bottom of the robot to carry out various works; this can change the center of gravity and cause the robot to pitch. As the balance device described in this paper is a single-axis system, it compensates pitching only; a double-axis system would be needed for the balance of both pitching and rolling.

Figure 2. Underwater robot with two manipulators and detecting pan/tilt.

The underwater robot test platform described in this paper has two manipulators structurally symmetrical to the x axis. During the process of reaching out from the bottom of the robot, the manipulator has only a change of barycenter along the x axis. Although the movement of the manipulator basically follows a cosine law, the move of its barycenter certainly causes a pitching of the robot. In addition, the operating of the pan/tilt with sonar, lights, and camera, and the dragging of the neutrally buoyant tether, can impose inestimable influences on the attitude of the robot, as shown in Fig. 3.

Figure 3. Reaching out of the manipulator and the operating of the detecting pan/tilt.

Why not adjust the center of gravity of the robot according to the change of the barycenter of the manipulator arms? To solve the problem, a balance mechanism was developed, as shown in Fig. 4, to balance and compensate for the move of the gravity center of the robot based on the pitching angle and angular velocity measured by the attitude transducer. The balance body is pulled by a step motor along guiding rails; in this way it is direct and rapid to control the attitude of the robot.

Figure 4. Balance mechanism of the attitude.

2. Modelling
The mass of the balance body of the attitude balance mechanism is m2. The barycenter of the manipulator arms varies from w0(x_m0, y_m0) in the original state to w(x_m, y_m) in the working state; this makes the whole barycenter of the robot transfer from G0(x_G0, y_G0, z_G0) to G(x_G, y_G, z_G). The change of the position X of the balance body causes a shift of the center of gravity of the robot, and the pitch angle θ of the robot changes accordingly. It is assumed that when pitching and being corrected, the robot turns only around the y axis and no other movements exist. The moment of inertia of the robot is J_y, and the turning equation of the robot is
    J_y·θ'' = -M·θ' - m2·X'' + f1,
where M is the hydrodynamic coefficient when turning around the y axis and f1 denotes the disturbances, including the change of the moment of inertia, the viscous resistance, and the additional mass caused by the moves of the manipulators; the disturbance produced by the dragging of the neutrally buoyant tether is also included. When the barycenter of the robot system is at balance, the position of the barycenter of the balance body is taken as the origin.
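As a minimal illustration, the turning equation can be coded directly. The sketch below is our own (Python, not from the paper); J_y and M are placeholder values, while m2 = 2.2 kg is the balance-body mass used later in the paper:

    def pitch_accel(theta_dot, X_ddot, f1, Jy=30.0, M=25.0, m2=2.2):
        """Turning equation: Jy*theta'' = -M*theta' - m2*X'' + f1.

        theta_dot: pitch rate; X_ddot: acceleration of the balance body;
        f1: lumped disturbance. Jy and M are assumed placeholder values.
        """
        return (-M * theta_dot - m2 * X_ddot + f1) / Jy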

3. Disturbance Observer-based Variable Structure Control
According to the theory of sliding mode variable structure control [2], the movement of the system state is forced along a sliding surface by the switching of the control parameters. When the parameters of the controlled object change and external disturbances are applied, the sliding mode is unchanged and robust; but the chattering of sliding mode control is inevitable when the control switches. To reduce the chattering, a disturbance observer is adopted to estimate the disturbances so that the chattering can be weakened [3][4][5][6].

Let a = M/J_y, u = m2·X''/J_y and f = f1/J_y, in which all of the disturbances are considered. The turning equation can be rewritten as
    θ'' = -a·θ' + u + f.    (1)
The desired value of the pitching angle is θd; the error and its derivative are defined as
    e1 = θd - θ,  e2 = e1',  e = [e1, e2]^T.    (2)
Putting (2) into (1) yields the error state equation
    e' = [0 1; 0 -a]·e - [0; 1]·u - [0; 1]·f,
that is, e2' = -a·e2 - u - f.

Let f̂ be the estimate of the disturbance, ê2 the estimate of e2, and write f̃ = f - f̂ and ẽ2 = e2 - ê2. As the disturbances produced by the moves of the manipulators and the pan/tilt, as well as by the dragging of the neutrally buoyant tether, are slow movements, it can be assumed that f' = 0. The disturbance observer is designed as
    [f̂; ê2]' = [0 0; -1 -a]·[f̂; ê2] - [0; 1]·u + [k1; k2]·(e2 - ê2),    (3)
i.e. f̂' = k1·(e2 - ê2) = k1·ẽ2 and ê2' = -f̂ - a·ê2 - u + k2·(e2 - ê2), where the gains k1 and k2 are selected by the pole placement method.

The sliding mode controller is defined as
    u = ζ·e1 + K·sgn(s) - f̂,    (4)
where ζ = α if s·e1 > 0 and ζ = β if s·e1 < 0, and K represents the switching gain. The sliding line is chosen as
    s = c·e1 + e2.    (5)

The analysis of stability is as follows. The Lyapunov function is selected as V = V1 + V2 with
    V1 = (1/2)·s^2,  V2 = (1/(2k1))·f̃^2 + (1/2)·ẽ2^2.    (6)
From (2), (4) and (5),
    s' = c·e2 + e2' = (c - a)·e2 - ζ·e1 - K·sgn(s) - f̃
       = (c - a)·s + e1·(c·(a - c) - ζ) - K·sgn(s) - f̃,
so that
    V1' = s·s' ≤ (c - a)·s^2 + s·e1·(c·(a - c) - ζ) + (|f̃|max - K)·|s|.    (7)
For the observer part, using f̂' = k1·ẽ2 together with the error dynamics of ẽ2, one obtains
    V2' = -(a + k2)·ẽ2^2.    (8)

From equations (7) and (8), V' ≤ 0 is guaranteed provided that
    k1 > 0,  k2 ≥ a,  a ≥ c,    (9)
    ζ = α ≥ c·(a - c) if s·e1 ≥ 0,  ζ = β ≤ c·(a - c) if s·e1 ≤ 0,  K ≥ |f̃|max.    (10)
When equations (9) and (10) are satisfied, V' ≤ 0 is verified, which means that the sliding mode control is stable. It is the adoption of the disturbance observer that makes the condition K ≥ |f̃|max = |f - f̂|max so easy to satisfy; the switching gain K can thus be depressed greatly, which weakens the chattering.

4. Simulation
The control algorithm was simulated in Matlab/Simulink [7][8]. The mass of the robot is 175 kg. The center of gravity of the manipulator arms, which have a mass of 0.5 kg, varies from 0 to 900 mm while working, which produces a maximum pitching moment of up to 0.55 N·m. In order to balance this moment as well as the other disturbances, the balance body is designed with a mass of 2.2 kg and a sliding distance from 50 to 250 mm, which causes a balance moment from 0.11 N·m to 0.45 N·m. Since the pitching of the whole robot is induced by the movements of the manipulators, the detecting pan/tilt, and the dragging of the neutrally buoyant tether, which are all slow, the slow-velocity condition assumed above for the disturbance observer is satisfied. The pitching of high frequency induced by waves is not taken into account.

It is assumed that the disturbance applied to the robot obeys a law of sines: f = 5 - 0.3·sin(0.5πt). The results of the simulations presented here are promising, as shown in Fig. 5 to Fig. 8 [9][10].

Figure 5. Response of the error state.
Figure 6. Control input.
Figure 7. e1-e2 plane trajectory.
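The paper's simulation was carried out in Matlab/Simulink. As a rough stand-in, the following self-contained Python sketch integrates the closed loop described above with the Euler method; all parameter values except the disturbance law are our own assumptions, not the paper's:

    import math

    # Assumed plant/controller parameters (illustrative only)
    a, c, K, k1, k2 = 1.0, 0.8, 0.5, 20.0, 5.0
    alpha, beta = c * (a - c) + 1.0, c * (a - c) - 1.0   # zeta choices satisfying eq. (10)

    def simulate(T=7.0, dt=1e-3, theta_d=0.0):
        theta, theta_dot = 0.3, 0.0        # initial pitch offset
        f_hat, e2_hat = 0.0, 0.0           # disturbance observer state, eq. (3)
        for step in range(int(T / dt)):
            t = step * dt
            f = 5.0 - 0.3 * math.sin(0.5 * math.pi * t)  # disturbance law from the paper
            e1, e2 = theta_d - theta, -theta_dot
            s = c * e1 + e2                               # sliding line, eq. (5)
            zeta = alpha if s * e1 > 0 else beta
            u = zeta * e1 + K * (1 if s > 0 else -1) - f_hat   # control law, eq. (4)
            e2_tilde = e2 - e2_hat                        # observer update, eq. (3)
            f_hat += k1 * e2_tilde * dt
            e2_hat += (-f_hat - a * e2_hat - u + k2 * e2_tilde) * dt
            theta_dot += (-a * theta_dot + u + f) * dt    # plant, eq. (1)
            theta += theta_dot * dt
        return theta

    print(simulate())   # pitch error after 7 s; should settle near theta_d

Because the observer tracks f, the residual |f - f_hat| stays small and a small switching gain K suffices, which is exactly the chattering-reduction mechanism discussed above.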

Figure 8. Estimate of the disturbance.

The amplitude of the switching gain was reduced greatly, and the chattering in the sliding mode control was certainly weakened.

5. Experiment
The operating experiment of the underwater robot was made in a static water pool. The manipulator turned out from the bottom of the robot at an angular velocity of 150°/s, and the detecting pan/tilt pitched at an angular velocity of 60°/s (without rolling movement). While the balance mechanism did not work, the maximum pitching angle of the robot reached about 10.3 degrees, as shown in Fig. 9. With the balance mechanism working, the maximum pitching angle reached only 5.6 degrees (at about 0.4 s), and the pitching was compensated within 4.5 seconds, as shown in Fig. 10. This shows that the balance mechanism as well as the control approach is applicable.

Figure 9. Pitching change caused by the moves of the manipulator and pan/tilt without balance control.
Figure 10. Pitching change of the robot controlled by the balance mechanism.

6. Conclusion
Aimed at the pitching of the robot caused by the moves of the manipulator, the detecting pan/tilt, and the tether, a balance mechanism was developed and a sliding mode controller was designed to control the device. A disturbance observer was included in the controller to estimate the disturbances, so the amplitude of the switching gain was reduced greatly and the chattering in the sliding mode control was weakened. A good effect was achieved on the pitching of the robot induced by the manipulator and the detecting pan/tilt, and the control goal was implemented basically. Nevertheless, as the device is a single-axis system, the pitching can only be compensated around the y axis; for a robot with barycenter disturbances along both axes, a two-DOF balance mechanism would be needed to compensate the pitching around the y axis and the rolling around the x axis.

7. Acknowledgment
The research projects concerned in this paper were financially supported by the State Leading Academic Discipline Fund and the Shanghai Leading Academic Discipline Fund of Shanghai University (Project No. BB 67 and No. Y0102), the National Natural Science Foundation of China (Project No. 60605028), and the National High-tech Research and Development Program (Project No. 2007AA04Z225), which are greatly appreciated by the authors.

References
[1] Li D., Motion and Modelling of Ship, Harbin: Harbin Engineering University Press, 1999.
[2] Yao Q., Variable Structure Control System, Chongqing: Chongqing University Press, 1997.
[3] Chang Jeang-Lin and Wu Tsui-Chou, "Robust disturbance attenuation with unknown input observer and sliding mode controller," Electrical Engineering, v 90, n 7, 2008, pp. 493-502.
[4] Kawamura Atsuo, Itoh Hiroshi, and Sakamoto Kiyoshi, "Chattering reduction of disturbance observer based sliding mode control," IEEE Transactions on Industry Applications, v 30, n 2, Mar-Apr 1994, pp. 456-461.
[5] Oudghiri Mohammed, Chadli M., and El Hajjaji Ahmed, "Lateral vehicle velocity estimation using fuzzy sliding mode observer," 2007 Mediterranean Conference on Control and Automation (MED), 2007, article no. 4433910.
[6] Wang Ying, Huang J., and Zhao Y., "Simulation study on a friction compensation method for the inertial platform based on the disturbance observer," Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, v 222, n 3, 2008, pp. 341-346.
[7] Yu Q., Xiong Zhenhua, and Ding Han, "Robust controller based on friction compensation and disturbance observer for a motion platform driven by a linear motor," Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, v 220, n 1, 2006, pp. 33-39.
[8] Liu J.K., MATLAB Simulation for Sliding Mode Control, Beijing: Tsinghua University Press, 2005.
[9] Heping Liu, Zhenbang Gong, and Wu H.S., "Upper bound adaptive learning of neural network for the sliding mode control of underwater robot," Proceedings of the International Conference on Advanced Computer Theory and Engineering, Phuket, September 2008, pp. 446-451.
[10] Heping Liu and Zhenbang Gong, "The anti-wave control of small open-frame underwater robot," Proceedings of the 3rd International Conference on Intelligent System & Knowledge Engineering, Xiamen, 2008, pp. 653-658.

ADAPTIVE OFDM Vs SINGLE CARRIER MODULATION WITH FREQUENCY DOMAIN EQUALIZATION

Kaur Inderjeet*, Thakur Kamal*, Kulkarni M.#, Gupta Daya$, Arora Prabhjyot*
*Institute of Technology & Management, Gurgaon, India
#National Institute of Technology, Suratkal, India
$Delhi College of Engineering, Delhi, India
inderjeetk@gmail.com, kamalthakur12@gmail.com, mkuldce@gmail.com, daya_gupta2005@yahoo.co.in, aroraprabh@gmail.com

Abstract— The aim of the present paper is to compare multi-carrier and single carrier modulation schemes for wireless communication systems. Different single carrier and multi-carrier transmission systems are simulated with time-variant transfer functions measured with a wideband channel sounder. The performance of single carrier and multi-carrier modulation schemes is compared for a frequency-selective fading channel considering un-coded modulation. The performance of OFDM can be improved significantly by using different modulation schemes for the individual sub-carriers, adapted to the prevailing channel transfer function; each modulation scheme provides a trade-off between spectral efficiency and bit error rate, and the spectral efficiency can be maximized by choosing the highest modulation scheme that still gives an acceptable bit error rate (BER).

Keywords— OFDM, IFFT, QAM, BER, LOS, minimum mean square error

I. INTRODUCTION
In this paper, wideband frequency-selective radio channels are used for investigating the transmission of digital signals. In a multipath radio channel, frequency-selective fading can result in large variations in the received power of each carrier. Frequency-selective fading caused by multipath time delay spread degrades the performance of digital communication channels by causing intersymbol interference, which results in an irreducible BER and imposes an upper limit on the data symbol rate.

Single carrier modulation uses a single carrier instead of the hundreds or thousands typically used in OFDM, so the peak-to-average transmitted power ratio for single-carrier-modulated signals is smaller. This in turn means that an SC system requires a smaller linear range to support a given average power, which enables the use of a cheaper power amplifier than in an OFDM system. Un-coded OFDM, on the other hand, loses all frequency diversity inherent in the channel: a dip in the channel erases the information data on the subcarriers affected by the dip, and this information cannot be recovered from the other carriers. This mechanism results in a poor BER performance. Adding sufficiently strong coding spreads the information over multiple subcarriers. This recovers frequency diversity and improves the BER performance.

Propagation measurements of radio channels with fixed antennas show that the transfer function varies very slowly with time. It is therefore assumed that the instantaneous channel transfer function can be estimated at the receiver and communicated back to the transmitter, so that the modulation schemes can be adapted to the prevailing channel transfer function.

The paper is organized as follows: in section II the fixed and adaptive OFDM transmitters are described; a description of a single carrier system with frequency domain equalization in section III is followed by simulation results in section IV.

II. ADAPTIVE OFDM TRANSMISSION
The block diagram of the OFDM transmitter used is shown in Fig. 1.

Fig. 1. Block diagram of a) an OFDM and b) a single carrier transmission system with frequency domain equalization.

Binary data is fed to a modulator which generates complex symbols at its output. The modulator either uses a fixed signal alphabet (QAM) or adapts the signal alphabets of the individual OFDM sub-carriers. The third block transforms the symbols into the time domain using an inverse fast Fourier transform (IFFT); the inverse FFT maps the complex amplitudes of the individual subcarriers at the transmitter into the time domain. The next block inserts the guard interval, and the output signal is transmitted over the radio channel.

At the receiver the inverse operation is carried out. Prior to demodulation, the cyclic extension is removed and the signal is transformed back into the frequency domain with an FFT. The signal is then equalized in the frequency domain with the inverse of the transfer function of the radio channel, corresponding to a zero-forcing equalizer. In case of single carrier modulation, the FFT and its inverse are used at the input and output of the frequency domain equalizer in the receiver; in both cases the fast Fourier transform (FFT) and its inverse are utilized.

Two different adaptive modulator/demodulator pairs are considered in this paper: in modulator A a frequency-independent power distribution is used, whereas modulator B uses the optimum power distribution. In both cases the distribution of bits on the individual sub-carriers is adapted to the shape of the transfer function of the radio channel.
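The bit- and power-loading algorithms themselves are described in [7]. As a rough illustration of the idea behind modulator A (maximizing the minimum SNR margin with a frequency-independent power distribution), here is a small Python sketch using a greedy, Hughes-Hartogs-style allocation in the spirit of [4]; the SNR thresholds and all names are our own assumptions, not values from the paper:

    import math

    # Assumed SNR (dB) required per bits-per-symbol for a target BER (illustrative)
    REQ_SNR_DB = {1: 7.0, 2: 10.0, 3: 13.5, 4: 16.5, 5: 19.5, 6: 22.5, 7: 25.5, 8: 28.5}

    def load_bits(subcarrier_snr_db, total_bits):
        """Greedy bit loading: repeatedly place one bit where the SNR margin
        (actual SNR minus required SNR) remains largest, as in modulator A."""
        bits = [0] * len(subcarrier_snr_db)
        for _ in range(total_bits):
            best, best_margin = None, -math.inf
            for i, snr in enumerate(subcarrier_snr_db):
                b = bits[i] + 1
                if b in REQ_SNR_DB:
                    margin = snr - REQ_SNR_DB[b]
                    if margin > best_margin:
                        best, best_margin = i, margin
            if best is None:
                break
            bits[best] += 1
        return bits

Sub-carriers in a deep fade simply receive no bits, which is why narrowband notches cost an adaptive system so little.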

The adaptive modulators select from different modulation formats: no modulation, 2-PSK, 4-PSK, 8-QAM, 16-QAM, 32-QAM, 64-QAM, 128-QAM, and 256-QAM; this means that 0 to 8 bits per sub-carrier and FFT block can be transmitted. The algorithm for modulator A maximizes the minimum (with respect to all sub-carriers) SNR margin, i.e. the difference between the actual and the desired SNR for a given error probability; the obtained SNR margin is the maximum possible, so that the error probability becomes minimum. Modulator B optimizes simultaneously both the distribution of bits and the distribution of signal power with respect to frequency: it calculates the optimum distribution of power and bits such that the same SNR margin is achieved for all sub-carriers. In order to get a minimum overall error probability, the error probabilities for all used sub-carriers should be approximately equal; the distribution of bits is therefore carried out in an optimum way so that the overall error probability becomes minimum. The algorithms for the distribution of bits and power are described in [7].

The results of the optimization processes of both modulator A and modulator B are shown in Fig. 2: the upper diagram gives the absolute value of the transfer function, and below it the distribution of bits, the power distribution, and the SNR margin are shown for both modulators. For the specific example presented there, both modulators yield the same distribution of bits.

III. SINGLE CARRIER TRANSMISSION WITH FREQUENCY DOMAIN EQUALIZATION
The lower part of the block diagram in Fig. 1 shows the considered single carrier transmission system. The figure shows that the basic concepts of single carrier modulation with frequency domain equalization and of OFDM transmission are almost similar; the main difference is that the block "inverse FFT" is moved from the transmitter to the receiver [1]. Like in an OFDM system, a blockwise signal transmission has to be carried out, and a periodic extension (guard interval) is required in order to mitigate interblock interference. Since the FFT algorithm is used also in case of single carrier modulation, single carrier modulation and OFDM without adaptation exhibit the same complexity. In contrast to adaptive OFDM, a fixed symbol alphabet is used for single carrier modulation in order to realize a constant bit rate transmission.

There is, however, a basic difference between the single and multi-carrier modulation schemes: in case of the single carrier system the decision is carried out in the time domain, whereas in case of the multi-carrier system the decision is carried out in the frequency domain. Therefore, an inverse FFT operation is located between equalization and decision, and this operation spreads the noise contributions of all the individual subcarriers over all the samples in the time domain. Since the noise contributions of highly attenuated sub-carriers can be rather large, a zero-forcing equalizer shows a poor noise performance. Because of this, a minimum mean square error (MMSE) equalizer is used for the single carrier system. The transfer function of the equalizer H_e(w,t) depends on the SNR of the respective sub-carriers S/N|r(w,t) at the input of the receiver:

    H_e(w,t) = [ S/N|r(w,t) / ( S/N|r(w,t) + 1 ) ] · [ 1 / H(w,t) ],

where H(w,t) denotes the time-variant transfer function of the radio channel. For large SNRs the MMSE equalizer turns into the zero-forcing equalizer, which multiplies by the inverse transfer function.

The main advantage of single carrier modulation compared with multi-carrier modulation is the fact that the energy of the individual symbols is distributed over the whole available frequency range, so narrowband notches in the transfer function have only a small impact on the error probability. Furthermore, the output signal of a single carrier transmitter shows a small crest factor, whereas an OFDM signal exhibits a Gaussian amplitude distribution with a very high crest factor; for OFDM, linear power amplifiers with high power consumption therefore have to be used.
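As an illustration of the receiver processing in Fig. 1b, the following Python sketch (our own, not the paper's simulator; an idealized cyclic channel and perfect channel knowledge are assumed) applies the MMSE frequency domain equalizer from the equation above to one received block:

    import numpy as np

    def sc_fde_mmse(rx_block, H, snr_linear):
        """Single carrier block receiver: FFT -> MMSE equalizer -> IFFT.

        rx_block:   received time-domain block (guard interval already removed)
        H:          channel transfer function on the FFT grid
        snr_linear: per-bin SNR at the receiver input (linear, not dB)
        """
        R = np.fft.fft(rx_block)
        He = (snr_linear / (snr_linear + 1.0)) / H   # MMSE weights; -> 1/H for large SNR
        return np.fft.ifft(He * R)                   # decision is then made in time domain

For an OFDM receiver the same equalized frequency-domain samples He*R would be passed to the decision device directly, without the final inverse FFT.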

IV. SIMULATION RESULTS
In the present paper the following systems are compared using measured transfer functions of the channels:
i) single carrier modulation with a minimum mean square error (MMSE) frequency domain equalizer;
ii) OFDM with fixed modulation of the sub-carriers and frequency-independent power distribution;
iii) OFDM with optimized modulation schemes and frequency-independent power distribution (modulator A);
iv) OFDM with optimized modulation schemes and optimized power distribution (modulator B).

Simulation results for four typical radio channels at a carrier frequency of 1.8 GHz are presented. Table 1 summarizes the parameters of the propagation measurements. In the mobile scenarios (measurements 2 and 4, including the LOS case), the user terminal antenna was moved over a distance of 1 m at a low velocity.

Table 1: Parameters of the radio channel propagation measurements
    Measurement:             1               2               3               4
    Distance of antennas:    100 m           100 m           250 m           250 m
    Propagation conditions:  LOS             LOS             NLOS            NLOS
    Base station antenna:    omnidir., fixed omnidir., fixed sectorial, fixed sectorial, fixed
    User terminal antenna:   omnidir., fixed omnidir., mobile omnidir., fixed omnidir., mobile
    Carrier frequency:       1.8 GHz for all measurements
The measured average attenuations range from about 77 dB for the LOS channels up to 112 dB for the NLOS channels, with delay spreads from 0.34 µs up to 1.43 µs.

The main parameters of the simulations are shown in Table 2. For all transmission systems a complex baseband simulation is carried out with ideal channel estimation and synchronization. No oversampling was used, since only linear components (except the detectors) are assumed in the transmission systems. The temporal location of the FFT interval with respect to the cyclic extension at the receiver (i.e., the time synchronization of the OFDM blocks) is optimized so that the bit error ratio becomes minimum.

Table 2: Simulation parameters
    Length of FFT interval:          256 samples
    Length of guard interval:        50 samples
    RF bandwidth:                    5 MHz
    Average data rate:               16.8 Mbps
    Noise figure of the receiver:    6 dB
    Number of transmitted bits:      2·10^5

In all the examples shown, only transmission systems with the same average data rate are compared. 16-QAM (bandwidth efficiency: 4 bit/symbol) is used for single carrier modulation and fixed OFDM (systems i and ii); in case of adaptive modulation, QAM schemes with different bandwidth efficiencies are used, and the average bandwidth efficiency is the same as in case of fixed modulation.

Examples of the simulation results are presented in Figs. 2 and 3; the figures show the bit error ratio as a function of the average transmitted power. The results show that an enormous improvement in performance is obtained from OFDM with adaptive modulation: depending on the scenario, the gain compared with fixed OFDM ranges from a significant 5 to 6 dB, over a high gain of 7 to 9 dB, up to 12 to 14 dB, with the highest gains occurring for the NLOS channels. Adaptive OFDM also shows a significant gain compared with single carrier modulation, but this gain is smaller in the LOS than in the NLOS case; this results from the higher coherence bandwidth of the LOS radio channel transfer function. In case of the LOS channels, single carrier modulation itself yields only a signal gain of 1 to 2 dB. Additional simulations show that the gain from adaptive modulation increases when higher-level modulation schemes are used. Furthermore, it has been shown in [1, 2] that, if channel coding is included in the transmission system, OFDM with fixed modulation schemes shows approximately the same performance as single carrier modulation with frequency domain equalization.

Adaptive OFDM also exhibits some disadvantages: the calculation of the distribution of modulation schemes causes a high computational effort. Furthermore, the channel must not vary too fast because of the required channel estimation; a rapidly varying channel also causes a high amount of signaling information, with the effect that the data rate for the communication decreases. On the other hand, adaptive OFDM is less sensitive than fixed OFDM and single carrier modulation to interblock interference due to an insufficiently long guard interval [7]; this can be explained by the fact that in the adaptive system bad sub-channels are not used, or are used only with small signal alphabets, so that a small amount of interblock interference is not critical.

Fig. 2: Simulation results for a line-of-sight (LOS) radio channel with a) a fixed (measurement 1) and b) a mobile (measurement 2) user terminal antenna.
Fig. 3: Simulation results for a non-line-of-sight (NLOS) radio channel with a) a fixed (measurement 3) and b) a mobile (measurement 4) user terminal antenna.

The better performance of adaptive OFDM compared with single carrier modulation results from the capability of adaptive OFDM to adapt the modulation schemes to sub-channels with very different SNRs in an optimum way. In order to improve the performance of single carrier modulation, the latter can be combined with antenna diversity using maximum ratio combining [8]; but adaptive OFDM still outperforms single carrier modulation by 3 to 5 dB. Furthermore, the simulation results yield no significant differences between radio channels with fixed and mobile user terminal antennas.

V. CONCLUSION
By using adaptive modulation schemes for the individual sub-carriers in an OFDM transmission system, the required signal power can be reduced dramatically compared with fixed modulation. Simulations show that, for a given target bit error ratio, a gain of 5 to 14 dB can be achieved depending on the radio propagation scenario. Since NLOS radio channels usually exhibit higher attenuation, higher gains compared with conventional OFDM are obtained for NLOS channels than for LOS channels; this property is of particular advantage. Also with single carrier modulation a significantly better performance is obtained than with OFDM with fixed modulation schemes.

In addition to the modulation schemes (bit distribution), the power distribution of adaptive OFDM can be optimized. But the simulations reveal that the optimum power distribution yields only a small gain of less than 0.5 dB compared with a frequency-independent one. Because of this small difference, it is recommended to refrain from optimizing the power distribution and to use a constant power spectrum, since otherwise either additional computation or additional signaling for the synchronization is needed.

REFERENCES
[1] H. Sari, G. Karam, and I. Jeanclaude, "An analysis of orthogonal frequency-division multiplexing for mobile radio applications," in Proceedings of the VTC '94, Stockholm, 1994, pp. 1635-1639.
[2] H. Sari, G. Karam, and I. Jeanclaude, "Frequency-domain equalization of mobile radio and terrestrial broadcast channels," in Proceedings of the Globecom '94, San Francisco, 1994, pp. 1-5.
[3] B. Hirosaki, S. Hasegawa, Inoue, Tanaka, Yoshida, and Watanabe, "A 19.2 kbps voice band data modem based on orthogonally multiplexed QAM techniques," in IEEE International Conference on Communications, pp. 661-665.
[4] D. Hughes-Hartogs, "Ensemble modem structure for imperfect transmission media," U.S. Patent 4,679,227 (1987).
[5] P. Chow, J. Cioffi, and J. Bingham, "A practical discrete multitone transceiver loading algorithm for data transmission over spectrally shaped channels," IEEE Trans. on Communications 43 (1995), pp. 773-775.
[6] A. Czylwik, "Comparison of the channel capacity of wideband radio channels with achievable data rates using adaptive OFDM," in Proceedings of the 5th European Conference on Fixed Radio Systems and Networks ECRR '96, Bologna, pp. 238-243 (1996).
[7] A. Czylwik, "Adaptive OFDM for wideband radio channels," in Proceedings of the GLOBECOM '96, London, pp. 713-718 (1996).
[8] G. Kadel, "Diversity and equalization in frequency domain - a robust and flexible receiver technology for broadband mobile communication systems," in Proceedings of the IEEE Vehicular Technology Conference '97, Phoenix (1997).

A bivariate C^1 cubic spline space on Wang's refinement

Huan-Wen Liu
Faculty of Mathematics & Computer Science, Guangxi University for Nationalities, Nanning 530006, P.R. China
mengtian29@163.com

Wei-Ping Lu
Department of Computer Science, Guangxi Economic Management Cadre College, Nanning 530006, P.R. China
luweiping06@163.com

Abstract
In this paper, the dimension of the space of bivariate C^1 cubic spline functions on a kind of refined triangulation, called Wang's refinement, is determined by using the technique of the B-net method and the minimal determining set, and a set of dual basis functions with local support is constructed.

1. Introduction
Let Δ be a regular triangulation of a simply connected polygonal domain Ω in R^2, i.e., Δ is a set of closed triangles whose union coincides with Ω, such that the intersection of any two triangles in Δ is either empty, a common edge, or a common vertex. Let V, V_I, V_B, E, E_I, E_B and T denote the sets of vertices, interior vertices, boundary vertices, edges, interior edges, boundary edges and triangles in Δ, respectively.

Given 0 ≤ r < d, the space of bivariate splines over the triangulation Δ is defined by
    S^r_d(Δ) = { s ∈ C^r(Ω) : s|_{T^(l)} ∈ P_d, l = 1, ..., |T| },    (1)
where T^(l) is a triangle in Δ and P_d is the linear space of bivariate polynomials of total degree d. The dimension of the space S^r_d(Δ) with low degree d versus smoothness r is quite difficult to determine and poorly understood, since it depends not only upon the topological invariants but also on the geometrical shape of the triangulation, as pointed out in several references; see Diener [3] and Morgan and Scott [10]. This dependence on the geometric structure results in the fact that the dimension of S^1_3(Δ) over an arbitrary triangulation remains an open problem, though several results have been obtained for some special triangulations, such as Farin [5], Lai [7] and Liu [9]. As we know, for an arbitrary triangulation Δ, the dimension of S^r_d(Δ) is known only for the cases d ≥ 4r + 1 by Alfeld and Schumaker [1], d ≥ 3r + 2 by Hong [6], and d = 4, r = 1 by Alfeld et al. [2].

In this paper, we consider a kind of refined triangulation Δ_W which was first proposed by Wang [13], where the dimension of the bivariate C^2 quintic spline space S^2_5(Δ_W) was given. By using the technique of the minimal determining set, the dimension of the space S^1_3(Δ_W) is determined, and a set of dual basis functions with local support is given.

2. Notation and Preliminaries

2.1 Preliminaries
For any s ∈ S^r_d(Δ) and T^(l) := <v_1^(l), v_2^(l), v_3^(l)>, according to the theory of Bernstein-Bézier polynomials by Farin [4], the restriction of s to T^(l) can be expressed as
    s(x,y)|_{T^(l)} = Σ_{i+j+k=d} c_{ijk} · (d!/(i!j!k!)) · α^i β^j γ^k,    (2)
where (α, β, γ) are the barycentric coordinates of (x,y) with respect to the triangle T^(l), defined by (x,y) = α·v_1^(l) + β·v_2^(l) + γ·v_3^(l) with α + β + γ = 1. The c_{ijk} are called the B-net coefficients of s(x,y) with respect to T^(l). It is clear that each B-net coefficient c_{ijk} is associated with a corresponding domain point
    ξ_{ijk} := (i·v_1^(l) + j·v_2^(l) + k·v_3^(l))/d.    (3)
For convenience, the set of all domain points ξ_{ijk} will be denoted by D_{d,Δ}.

For each ξ ∈ D_{d,Δ}, let λ_ξ be the linear functional such that, for any spline s ∈ S^0_d(Δ), λ_ξ s equals the B-net coefficient c_ξ of s at the point ξ. If S is a linear subspace of S^0_d(Δ), then a subset M ⊆ D_{d,Δ} is said to be a determining set for S provided that s ∈ S and λ_ξ s = 0 for all ξ ∈ M implies s ≡ 0. M is called a minimal determining set (MDS) for S if there is no other determining set for S with smaller cardinality. It is easy to see that if M is an MDS for S, then dim S = |M|, where |M| denotes the cardinality of M.
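As a concrete illustration of the B-net representation (2), the following Python sketch (our own; not part of the paper) evaluates a cubic Bernstein-Bézier patch from its B-net coefficients at a point given in barycentric coordinates:

    from math import factorial

    def eval_bb_patch(coeffs, bary, d=3):
        """Evaluate s at a barycentric point (a, b, g), following eq. (2).

        coeffs: dict mapping index triples (i, j, k) with i+j+k = d
                to the B-net coefficients c_ijk
        bary:   barycentric coordinates (a, b, g), a + b + g = 1
        """
        a, b, g = bary
        s = 0.0
        for (i, j, k), c in coeffs.items():
            bern = factorial(d) / (factorial(i) * factorial(j) * factorial(k)) \
                   * a**i * b**j * g**k      # Bernstein basis polynomial B^d_ijk
            s += c * bern
        return s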

2.2 Wang's refined triangulation
Let v_i (i = 1, ..., |V|) be the vertices of Δ, e_j (j = 1, ..., |E|) the edges of Δ, and T^(l) = <v_1^(l), v_2^(l), v_3^(l)> (l = 1, ..., |T|) the triangles of Δ. Following Wang [13], let T_W^(l), shown in Figure 1, be a refining subdivision of T^(l) formed by the following steps:
Step 1. Take three interior points w_1^(l), w_2^(l), w_3^(l) in each triangle T^(l), where w_j^(l) = (1/7)·v_j^(l) + (4/7)·v_{j+1}^(l) + (2/7)·v_{j+2}^(l), with the cyclic conventions v_{j+3}^(l) := v_j^(l) and w_{j+3}^(l) := w_j^(l).
Step 2. Join v_j^(l) to w_j^(l) and v_j^(l) to w_{j+1}^(l), j = 1, 2, 3, and join the points w_1^(l), w_2^(l), w_3^(l) pairwise, so that T^(l) is divided into seven subtriangles.
We thereby obtain the refined triangulation Δ_W = ∪_{l=1}^{|T|} T_W^(l) of the original triangulation Δ.

Figure 1: The refined triangulation T_W^(l) of a single triangle.

We now introduce the C^1 smoothness condition between two adjacent triangles, which is a special case of the general C^r smoothness condition given by Farin [4].

Lemma 2.1. Let T^(1) = [v1, v2, v3] and T^(2) = [v1, v2, v4] be two adjacent triangles in R^2 with a common edge [v1, v2]. Suppose p(x,y) agrees with p1(x,y) ∈ P_3 in T^(1) and with p2(x,y) ∈ P_3 in T^(2), where
    p1(x,y) = Σ_{i+j+k=3} c_{ijk}^{[v1,v2,v3]} · (3!/(i!j!k!)) · α1^i β1^j γ1^k,    (4)
    p2(x,y) = Σ_{i+j+k=3} c_{ijk}^{[v1,v2,v4]} · (3!/(i!j!k!)) · α2^i β2^j γ2^k,    (5)
and (α1, β1, γ1) and (α2, β2, γ2) are the barycentric coordinates of (x,y) with respect to T^(1) and T^(2), respectively. Then p(x,y) ∈ C^1(T^(1) ∪ T^(2)) if and only if
    c_{ij0}^{[v1,v2,v4]} = c_{ij0}^{[v1,v2,v3]}, i + j = 3,
    c_{ij1}^{[v1,v2,v4]} = α·c_{i+1,j,0}^{[v1,v2,v3]} + β·c_{i,j+1,0}^{[v1,v2,v3]} + γ·c_{ij1}^{[v1,v2,v3]}, i + j = 2,    (6)
where (α, β, γ) are the barycentric coordinates of v4 with respect to the triangle T^(1).

3. A minimal determining set for S^1_3(T_W^(l))
In this section, we establish a theorem concerning an MDS for S^1_3(T_W^(l)).

Theorem 3.1. Let M be the set of domain points associated with the following 16 B-net coefficients of S^1_3(T_W^(l)):
1) c_{ijk} with i + j + k = 3 and i > 1 on the subtriangle <v_t^(l), w_{t+1}^(l), w_{t+2}^(l)>, t = 1, 2, 3 (nine coefficients, three in each disk D_1(v_t^(l)));
2) the center coefficients c_111 of each of the seven subtriangles of T_W^(l) (seven coefficients).
Then M is an MDS for S^1_3(T_W^(l)). The corresponding domain points are marked with • in Figure 2.

Figure 2: A minimal determining set for S^1_3(T_W^(l)), marked with •.

Proof: Suppose that the B-net coefficients of s ∈ S^1_3(T_W^(l)) listed in 1) and 2) are all set to zero.
i) By the C^1 smoothness conditions (Lemma 2.1) along the edges emanating from v_t^(l), all the B-net coefficients associated with the domain points in D_1(v_t^(l)), t = 1, 2, 3, are forced to zero.
ii) It is noted that w_3^(l) = 0·v_1^(l) + 2·w_2^(l) - 1·v_3^(l), i.e. (α1, β1, γ1) = (0, 2, -1); applying Lemma 2.1 across the corresponding interior edge annihilates the coefficients indicated by ♦ in Figure 2 and forces the unknown a of Figure 1 to vanish.
iii) Similarly, from w_2^(l) = -2·v_1^(l) - 1·v_2^(l) + 4·w_3^(l), i.e. (α2, β2, γ2) = (-2, -1, 4), a further group of coefficients is annihilated, and b = 0.
iv) From v_1^(l) = -1·w_1^(l) + 0·w_2^(l) + 2·w_3^(l), i.e. (α3, β3, γ3) = (-1, 0, 2), the remaining marked coefficients vanish, and thus c = 0 and then d = 0.
v) It follows from the above that all the remaining B-net coefficients in each subtriangle are zero, so s vanishes identically inside each subtriangle. Therefore s ≡ 0, and M is a determining set. Since T_W^(l) is a quasi-cross-cut partition, dim S^1_3(T_W^(l)) = 16 according to [12], which equals |M|; hence M is an MDS. This completes the proof of the theorem.
ii)–iv) It is noted that $w_3^{(l)} = \alpha_1 v_1^{(l)} + \beta_1 w_2^{(l)} + \gamma_1 v_3^{(l)}$ with $(\alpha_1, \beta_1, \gamma_1) = (0, 2, -1)$, that $w_2^{(l)} = \alpha_2 v_1^{(l)} + \beta_2 v_2^{(l)} + \gamma_2 w_3^{(l)}$ with $(\alpha_2, \beta_2, \gamma_2) = (-2, -1, 4)$, and that $v_1^{(l)} = \alpha_3 w_1^{(l)} + \beta_3 w_2^{(l)} + \gamma_3 w_3^{(l)}$ with $(\alpha_3, \beta_3, \gamma_3) = (-1, 0, 2)$. By repeatedly using the $C^1$ smoothness conditions (Lemma 2.1) across the interior edges $\langle v_t^{(l)}, w_{t+1}^{(l)} \rangle$ and $\langle w_t^{(l)}, w_{t+1}^{(l)} \rangle$, the remaining B-net coefficients $a$, $b$, $c$ and $d$ at the four interior domain points marked in Figure 1 are expressed linearly through coefficients that vanish by i) and the hypothesis; relations such as $c_{111}^{\langle v_1^{(l)}, w_2^{(l)}, w_3^{(l)} \rangle} = 2a - c_{111}^{\langle w_1^{(l)}, w_2^{(l)}, w_3^{(l)} \rangle} + 4b$ then force $a = 0$, $b = 0$, $c = 0$ and $d = 0$ in turn, and the B-net coefficients $c_{300}$, $c_{120}$, $c_{102}$, $c_{021}$ and $c_{012}$ on the corresponding subtriangles ($t = 2, 3$), indicated by the markers in Figure 2, are annihilated as well.

v) It follows from the above that all the remaining B-net coefficients in each subtriangle are zero, so $s$ must vanish identically inside each subtriangle; therefore $s \equiv 0$, and $M$ is a determining set. Noticing that $T_W^{(l)}$ is a quasi-cross-cut partition, according to [12] we know $\dim S_3^1(T_W^{(l)}) = 16 = |M|$, so the set $M$ is a MDS for $S_3^1(T_W^{(l)})$. This completes the proof of this theorem.

4. Main Results

Theorem 4.1. Let $\Delta$ be an arbitrary triangulation and $\Delta_W$ be the Wang's refined triangulation of $\Delta$. Then

$$\dim S_3^1(\Delta_W) \le 3|V| + |E| + 4|T|. \quad (7)$$

Proof: We choose the following domain points to construct a determining set $P$ for $S_3^1(\Delta_W)$:

a) Three B-net coefficients to determine all the B-net coefficients associated with the domain points in $D_1(v)$ around each vertex $v \in V$ of the original triangulation $\Delta$.

b) For each edge in the original triangulation $\Delta$, choose a domain point associated with that edge. The related domain points are marked in Figure 3.

c) For each triangle $T^{(l)}$ in the original triangulation $\Delta$, we choose four domain points, each of them being an interior domain point located in a subtriangle which does not include any edge of $\Delta$. The related domain points are marked in Figure 3.

It is easy to see that the total number of B-net coefficients in the determining set $P$ is $3|V| + |E| + 4|T|$. We now set all B-net coefficients of $s \in S_3^1(\Delta_W)$ associated with the domain points in $P$ to zero. It then follows from a) that all B-net coefficients in $D_1(v)$ around every vertex $v \in V$ must be zero, and by using the $C^1$ smoothness conditions across the edges $\langle v_t^{(l)}, w_{t+1}^{(l)} \rangle$ and $\langle w_t^{(l)}, w_{t+1}^{(l)} \rangle$ together with Theorem 3.1, all the remaining B-net coefficients in each triangle are zero. So $s$ must vanish identically inside each subtriangle, and therefore $s \equiv 0$. This means that $P$ is a determining set, and then $\dim S_3^1(\Delta_W) \le |P|$. The proof is completed.

Figure 3: A MDS for $S_3^1(\Delta_W)$.

Theorem 4.2.

$$\dim S_3^1(\Delta_W) \ge 3|V| + |E| + 4|T|. \quad (8)$$

Proof: Using the lower bound formula by Schumaker [11], we have

$$\dim S_3^1(\Delta_W) \ge 10 + 3|\bar{E}_I| - 7|\bar{V}_I| + \sum_{i=1}^{|\bar{V}_I|} \sum_{j=1}^{2} (j + 2 - j e_i)_+, \quad (9)$$

where $|\bar{V}_I|$ denotes the number of interior vertices in $\Delta_W$, $|\bar{E}_I|$ the number of interior edges in $\Delta_W$, and $e_i$ the number of distinct slopes assumed by the edges attached to the $i$-th interior vertex. Here

$$\sum_{j=1}^{2} (j + 2 - j e_i)_+ = (3 - e_i)_+ + (4 - 2e_i)_+ = 0.$$

Noticing that $|\bar{E}_I| = 9|T| + |E_I|$ and $|\bar{V}_I| = |V_I| + 3|T|$, we have

$$\dim S_3^1(\Delta_W) \ge 10 + 6|T| + 3|E_I| - 7|V_I|.$$

Using the Euler relations $|T| = |E_I| - |V_I| + 1$, $|E| = 2|E_I| - 3|V_I| + 3$ and $|V| = |E_I| - 2|V_I| + 3$, we obtain $10 + 6|T| + 3|E_I| - 7|V_I| = 3|V| + |E| + 4|T|$, and hence $\dim S_3^1(\Delta_W) \ge 3|V| + |E| + 4|T|$. The proof is completed.

Theorem 4.2 together with Theorem 4.1 yields

$$\dim S_3^1(\Delta_W) = 3|V| + |E| + 4|T|.$$
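As a small consistency check (ours, not from the paper), the following sketch verifies numerically that the Schumaker lower bound of Theorem 4.2, rewritten through the Euler relations, coincides with the upper bound of Theorem 4.1; the function name `dim_S31_wang` is hypothetical:

```python
def dim_S31_wang(V_I, E_I):
    """dim S_3^1(Delta_W) = 3|V| + |E| + 4|T|, checked against
    10 + 6|T| + 3|E_I| - 7|V_I| via the Euler relations of the paper."""
    T = E_I - V_I + 1           # |T| = |E_I| - |V_I| + 1
    E = 2 * E_I - 3 * V_I + 3   # |E| = 2|E_I| - 3|V_I| + 3
    V = E_I - 2 * V_I + 3       # |V| = |E_I| - 2|V_I| + 3
    upper = 3 * V + E + 4 * T               # Theorem 4.1
    lower = 10 + 6 * T + 3 * E_I - 7 * V_I  # Theorem 4.2 (rewritten)
    assert upper == lower
    return upper

# one interior vertex joined to three boundary vertices: |T|=3, |E|=6, |V|=4
print(dim_S31_wang(1, 3))   # prints 30
```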

5. A dual basis with local support

Denote $P = \{\xi_1, \xi_2, \ldots, \xi_{|P|}\}$. It is clear that the set $P$ constructed in Theorem 4.1 is a MDS for $S_3^1(\Delta_W)$. For each $\xi_j \in P$, we define the spline $B_j \in S_3^1(\Delta_W)$ to satisfy

$$\lambda_{\xi_i} B_j = \delta_{ij} \quad \text{for all } \xi_i \in P. \quad (10)$$

The set $\{B_i,\ i = 1, 2, \ldots, |P|\}$ obviously forms a basis of $S_3^1(\Delta_W)$, which is commonly called the dual basis corresponding to $P$.

We now analyze the support properties of the basis functions $B_i$. Suppose $c_i$ is the B-net coefficient of $B_i$ which is set to 1.

Firstly, if $c_i$ is associated with a domain point lying in $D_1(v)$ around a vertex $v$ of $\Delta$, then the support set of $B_i$ is the union of all triangles of $\Delta$ sharing $v$.

Secondly, we consider two situations for the points $\xi_{111}^{\langle v_t^{(l)}, v_{t+1}^{(l)}, w_{t+2}^{(l)} \rangle}$ ($t = 1, 2, 3$). If $\langle v_t^{(l)}, v_{t+1}^{(l)} \rangle$ is an interior edge of $\Delta$, shared with an adjacent triangle whose corresponding interior points are $\tilde{w}_1, \tilde{w}_2, \tilde{w}_3$, then the support set of $B_i$ is the union of the subtriangles of the two triangles of $\Delta$ sharing the common edge $\langle v_t^{(l)}, v_{t+1}^{(l)} \rangle$. If $\langle v_t^{(l)}, v_{t+1}^{(l)} \rangle$ is a boundary edge of $\Delta$, then the support set of $B_i$ is the single subtriangle of $T_W^{(l)}$ containing that domain point.

Thirdly, if $c_i$ is associated with a domain point lying at $\xi_{111}^{\langle w_1^{(l)}, w_2^{(l)}, w_3^{(l)} \rangle}$ or at $\xi_{111}^{\langle v_t^{(l)}, w_{t+1}^{(l)}, w_{t+2}^{(l)} \rangle}$ ($t = 1, 2, 3$), then the support set of $B_i$ is the union of those subtriangles located in the triangle $\langle v_1^{(l)}, v_2^{(l)}, v_3^{(l)} \rangle$.

The proof of the theorem is completed.

Acknowledgements. The first author is supported by the Natural Science Foundation of Guangxi (0575029) and the Hunan Key Laboratory for Computation and Simulation in Science and Engineering.

References

[1] Alfeld, P. and Schumaker, L.L., "The dimension of spline spaces of smoothness r for d ≥ 4r + 1", Constr. Approx., Vol. 3, pp. 189–197, 1987.
[2] Alfeld, P., Piper, B. and Schumaker, L.L., "An explicit basis for C1 quartic bivariate splines", SIAM J. Numer. Anal., Vol. 24, pp. 891–911, 1987.
[3] Diener, D., "Instability in the dimension of spaces of bivariate piecewise polynomials of degree 2r and smoothness r", SIAM J. Numer. Anal., Vol. 27, pp. 543–551, 1990.
[4] Farin, G., "Triangular Bernstein-Bézier patches", Comput. Aided Geom. Des., Vol. 3, pp. 83–128, 1986.
[5] Farin, G., "Dimensions of spline spaces over unconstricted triangulations", J. Comput. Appl. Math., Vol. 192, pp. 320–327, 2006.
[6] Hong, D., "Spaces of bivariate spline functions over triangulation", Approx. Theory Appl., Vol. 7, pp. 56–75, 1991.
[7] Lai, M.J., "Scattered data interpolation and approximation by using bivariate C1 piecewise cubic polynomials", Comput. Aided Geom. Des., Vol. 13, pp. 81–88, 1996.
[8] Lai, M.J. and Schumaker, L.L., Spline Functions over Triangulations, Cambridge University Press, 2007.
[9] Liu, H.W., "The dimension of cubic spline space over stratified triangulation", J. Math. Res. Exp., pp. 379–386, 2006.
[10] Morgan, J. and Scott, R., "The dimension of piecewise polynomials", 1975.
[11] Schumaker, L.L., "On the dimension of spaces of piecewise polynomials in two variables", In: Schempp, W. and Zeller, K. (eds.), Multivariate Approximation Theory, Birkhäuser Verlag, pp. 396–411, 1979.
[12] Wang, R.H., "A C2-quintic spline interpolation scheme on triangulation", Comput. Aided Geom. Des., Vol. 9, pp. 199–208, 1992.
[13] Wang, R.H., "The structural characterization and interpolation for multivariate splines", Acta Math. Sinica, Vol. 18, pp. 91–106, 1975.

2009 International Conference on Computer Engineering and Technology

Fast Shape Matching Using a Hybrid Model

Gang Xu, Wenxian Yang
Department of Electrical and Electronic Engineering, North China Electric Power University, Beijing 102206, China
E-mail: xugang@ncepu.edu.cn, ywxzgs81@163.com

Abstract—A hybrid model is proposed to finish image shape matching from coarse to fine, which is composed of three parts: rough matching, accurate matching, and optimum matching search. After edge extracting, a fast strategy for rough matching and a new improved partial Hausdorff distance for accurate matching are presented as the measures of the degree of shape similarity between the template and images. Finally, a new genetic algorithm based on fuzzy logic, which can adaptively regulate the probabilities of crossover and mutation, is used to search the optimum shape matching quickly. The experimental results show that the model achieves shape matching with higher speed and precision compared with the traditional matching algorithms, and it can be used in real-time image matching and pattern recognition.

Keywords—shape matching; Hausdorff distance; genetic algorithm; fuzzy control

I. INTRODUCTION

The combination of Hausdorff distances and genetic algorithms can effectively detect rotation and scaling of objects in image shape matching. The Hausdorff distance (HD) [1] measures the mismatch of two sets and is more tolerant to perturbations in image matching, for it measures proximity rather than exact superposition. However, the conventional Hausdorff distances require high computational complexity and are not suited to practical applications. Some modified Hausdorff distances have been proposed recently: Huttenlocher proposed the partial Hausdorff distance [2], a directed modified Hausdorff distance (MHD) was introduced by Dubuisson [3], a robust Hausdorff distance was used in [4], and two different improved Hausdorff distances were introduced in [5].

The genetic algorithm (GA) [7] is a search and optimization method which was developed to simulate the mechanism of natural evolution and which is powerful in finding the global or near-global optimal solution of optimization problems. The genetic algorithm has found many successful applications and has shown great promise. The Hausdorff distance and genetic algorithm were applied to object detection in images [8], and the applications of Hausdorff distances and genetic algorithms in image shape matching were researched in [6], [9].

Based on the above-mentioned articles, a hybrid model for fast shape matching is introduced in this paper to quickly search, among the given test images, for the image that has the greatest similarity to the template image. In this model, two new measures of the shape similarity between the template and images are proposed to meet the needs of real-time image matching: a fast strategy is given as the rough measure of the similarity between the template and images, and a novel partial Hausdorff distance is proposed to compute the shape similarity accurately. At the same time, a new genetic algorithm based on fuzzy logic, which can adaptively regulate the probabilities of crossover and mutation, achieves the optimum shape matching with higher search speed and quality.

II. THE HAUSDORFF DISTANCE

On the basis of reviewing the conventional and existing improved Hausdorff distances, two new measures of shape similarity are proposed.

A. Conventional HD and Partial HD

Given two finite point sets $A = \{a_1, a_2, \ldots, a_p\}$ and $B = \{b_1, b_2, \ldots, b_q\}$, the HD is defined as follows:

$$H(A, B) = \max(h(A, B), h(B, A)), \quad (1)$$
$$h(A, B) = \max_{a \in A} \min_{b \in B} \|a - b\|, \quad (2)$$
$$h(B, A) = \max_{b \in B} \min_{a \in A} \|b - a\|, \quad (3)$$

where $\|\cdot\|$ is any norm on the points of $A$ and $B$; we employ the Euclidean distance in this paper. The distance between a point and a set is defined as the minimum distance from the point to all points of the set, so the directed distance $h(A, B)$ identifies the largest distance from the points $a \in A$ to $B$, and $H(A, B)$ is defined as the maximum of $h(A, B)$ and $h(B, A)$. Its computational complexity is $O(p \cdot q)$.

The directed partial Hausdorff distance (PHD) is defined as follows:

$$h_l(B, A) = L^{\mathrm{th}}_{b \in B}\, h(b, A), \quad 1 \le l \le q, \quad (4)$$

where $L^{\mathrm{th}}_{b \in B}$ denotes the $l$-th ranked value of $h(b, A)$.
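A minimal sketch (ours, not from the paper) of the three distances defined in Eqs. (1)–(4); `A` and `B` are arrays of edge-point coordinates of shape (p, 2) and (q, 2), and the function names are our own:

```python
import numpy as np

def directed_hd(A, B):
    """h(A, B) = max over a in A of min over b in B of ||a - b||  (Eq. (2))."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # p x q distances
    return d.min(axis=1).max()

def hausdorff(A, B):
    """H(A, B) = max(h(A, B), h(B, A))  (Eq. (1))."""
    return max(directed_hd(A, B), directed_hd(B, A))

def partial_hd(B, A, l):
    """h_l(B, A): the l-th ranked value of h(b, A) over b in B  (Eq. (4))."""
    d = np.linalg.norm(B[:, None, :] - A[None, :, :], axis=2).min(axis=1)
    return np.sort(d)[l - 1]   # l is 1-based, 1 <= l <= q
```

Taking a rank below the maximum in `partial_hd` is what makes the measure robust to outlier edge points.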

B. Two Novel Measures of Shape Similarity Based on the Partial Hausdorff Distance

For fast and effective image matching, two novel similarity measures are given on the basis of [5]. Here $A$ and $B$ are the sets of the edge points of the matched image and of the template, respectively. Whether for the conventional or the partial Hausdorff distance, the distances between every point of the template $B$ and the matched image $A$ must be computed, and this computational cost limits the Hausdorff distance's practical applications. In fact, for each point of $B$, only one point or several points in its neighborhood in the matched image have a small distance to that point, and thus it is unnecessary to compute each distance.

1) A Novel Partial Hausdorff Distance for Accurate Measurement: For the point $b_j$ ($1 \le j \le q$) in $B$, the small neighborhood $M_2$ ($m_2 \times m_2$) is searched to find the feature points which correspond with $b_j$ in $A$, and the directed Hausdorff distance $h(b_j, A)$ from $b_j$ to $A$ is the smallest distance from $b_j$ to these points. If there are no points in the neighborhood, $h(b_j, A)$ is set to a given maximum. Ranking $h(b_j, A)$ from small to large, the $l$-th value is $h_2(B, A)$, which is used for the accurate similarity measure. Compared with [6], this distance has higher accuracy with the same speed.

2) A Fast Strategy for Rough Measurement: Initialize the counter $Mt_0 = 0$. For each point $b_j$ in $B$, the small neighborhood $M_1$ ($m_1 \times m_1$) is searched to find the feature points which correspond with $b_j$ in $A$. If there are points, $Mt_0$ adds one; if there are no points, $Mt_0$ is unchanged. The normalized value $h_1(B, A) = 1 - Mt_0/q$ is called the fast similarity measure between $A$ and $B$, and the two images are considered similar only when $Mt_0 \ge l_1 \times q$, where $0 \le l_1 \le 1$. Generally $M_1$ is a little bigger than $M_2$.

3) The Optimum Model for Shape Matching: In image matching, image rotation (rotation coefficient $\theta$) and scale (scale coefficient $m$) are considered. The optimum model is defined as follows:

$$\min_t\, h(t(B), A), \quad (5)$$

where $t(B) = m \cdot \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \cdot B$ and $t = (\theta, m)$ is the parameter of template rotation and scale. Therefore, the key is to find the optimum parameter $(\theta, m)$ that gives the best matching.

III. THE HYBRID MODEL FOR SHAPE MATCHING

The hybrid model composed of these algorithms finishes shape matching through two parts, the rough matching and the accurate matching, quickly retrieving the given test images by rough and then accurate matching. The process of shape matching is shown in Fig. 1, and the flow diagrams of rough matching and accurate matching are shown in Fig. 2 and Fig. 3.

Fig. 1. The process of shape matching.
Fig. 2. The flow diagram of rough matching.
Fig. 3. The flow diagram of accurate matching.

It is better to adopt a genetic algorithm to search for the optimum in this model: the genetic algorithm is widely used for optimization problems whose objective functions are discontinuous or of high dimension, and the self-organizing genetic algorithm is known to have higher robustness, faster convergence and better global optimization.
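A sketch of the rough measure under our reading of the strategy: count the template points that have at least one feature point within an $m_1 \times m_1$ neighborhood in the matched image. The function name and the binary edge-map representation are assumptions of ours:

```python
import numpy as np

def fast_rough_measure(edge_map, B, m1=3):
    """h1(B, A) = 1 - Mt0/q: fraction of template points b_j with no
    feature point of A inside the m1 x m1 neighborhood of b_j."""
    h, w = edge_map.shape
    r = m1 // 2
    mt0 = 0
    for (x, y) in B:                       # template edge points (integer pixels)
        x0, x1 = max(0, x - r), min(h, x + r + 1)
        y0, y1 = max(0, y - r), min(w, y + r + 1)
        if edge_map[x0:x1, y0:y1].any():   # a feature point exists nearby
            mt0 += 1
    return 1.0 - mt0 / len(B)

# the two images are declared similar only when Mt0 >= l1 * q,
# i.e. when fast_rough_measure(...) <= 1 - l1
```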
IV. THE GENETIC ALGORITHM WITH PROBABILITIES OF CROSSOVER AND MUTATION SELF-ADAPTION BASED ON FUZZY CONTROL

The convergence speed and solution quality of a genetic algorithm are directly influenced by the probabilities of crossover $P_c$ and mutation $P_m$. In view of the shortcomings of the simple genetic algorithm (SGA), such as premature convergence and slow convergence, some $P_c$ and $P_m$ self-adaption algorithms have been proposed, of which the algorithms based on fuzzy logic have better performance. A new genetic algorithm is proposed in [10], which adaptively regulates $P_c$ and $P_m$ by querying a table based on fuzzy control: the differences of the population average fitness and of the standard deviation between two adjacent generations are the inputs, and $P_c$ and $P_m$ are the outputs. Two self-adaption normalized operators are given. This algorithm has better performance, and we use it to search for the optimum.
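The paper regulates $P_c$ and $P_m$ through a fuzzy rule table indexed by the generation-to-generation changes of average fitness and standard deviation; the table itself is not reproduced here, so the sketch below (ours) only shows the shape of the control loop. `fuzzy_table` stands in for the fuzzy lookup of reference [10] and `reproduce` for the selection, crossover and mutation operators; both are caller-supplied:

```python
import statistics

def adaptive_ga(pop, fitness, reproduce, fuzzy_table,
                pc=0.6, pm=0.05, generations=100):
    """GA loop whose Pc and Pm are re-tuned every generation from the
    differences of average fitness and standard deviation (schematic)."""
    prev_avg = prev_std = None
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        avg, std = statistics.mean(fits), statistics.pstdev(fits)
        if prev_avg is not None:
            # fuzzy-controller inputs: changes between adjacent generations
            pc, pm = fuzzy_table(avg - prev_avg, std - prev_std)
        prev_avg, prev_std = avg, std
        pop = reproduce(pop, fits, pc, pm)   # selection + crossover + mutation
    return pop
```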

V. RESULTS AND ANALYSIS

The algorithms are implemented in Matlab 7.0 on the platform of an Intel(R) Core(TM)2 Duo CPU (2.00 GHz).

A. The Implementation Steps

1) Edge Extracting: The Canny operator is employed.

2) Definition of Objective Functions: The functions for rough and accurate matching are minimized and defined as follows:

$$f_1(\theta, m) = h_1(t(B), A) = 1 - Mt_0/q, \quad f_1 \in (0, 1), \quad (6)$$
$$f_2(\theta, m) = h_2(t(B), A), \quad (7)$$

where $t(B) = m \cdot \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \cdot B$.

3) Parameters and Ranges: The population size $N$, the largest iterative number $NM$, the initial values of $P_c$ and $P_m$, etc. The coding lengths of $\theta$ and $m$ are 10 and 5 respectively, with binary coding.

4) Optimum Results Search: The genetic algorithm is adopted.

B. Computational Efficiency Analysis

The computational counts of addition and subtraction (+/−), multiplication and division (×/÷), and comparison for the fast strategy and the Hausdorff distance of this paper are shown in Table I, compared with the Hausdorff distances of [6] and [9]. (The worst situation is considered for the partial Hausdorff distance and the fast strategy; $M_1$ and $M_2$ are much less than $p$.)

TABLE I. COMPUTATIONAL EFFICIENCY ANALYSIS

| Algorithms | +/− | ×/÷ | Comparison |
| HD in reference [9] | 6q×p | 6q×p | 2q×p |
| PHD in reference [6] | 3q | 3q | M2+q |
| PHD in this paper | 3q×M2 | 3q×M2 | M2×q+q |
| The fast strategy (in this paper) | q | 1 | M1×q |

From Table I, the methods proposed in this paper lessen the computational cost to a great degree.

The test images are shown in Table II. NO.1–10 are fishes with various shapes (128×128); NO.11–15 are images obtained by rotating the template by 15, …, and 315 degrees counter-clockwise respectively; NO.16 is the image obtained by narrowing the size of the template one time and rotating it 90 degrees; and NO.17 is the image obtained by doubling the size of the template and rotating it 45 degrees. The template is the same as the NO.2 image.

TABLE II. TEST IMAGES (NO.1–17)
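The similarity transform $t(B)$ used in Eqs. (5)–(7) is a rotation by $\theta$ followed by scaling by $m$, applied to every template edge point; a small sketch of it (our names):

```python
import numpy as np

def transform_template(B, theta_deg, m):
    """Apply t(B) = m * [[cos t, -sin t], [sin t, cos t]] * B pointwise."""
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return m * B @ R.T        # B: (q, 2) array of template edge points
```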

C. The Changes of Population Average Fitness, Standard Deviation, Crossover Probability Pc, and Mutation Probability Pm

Take the rough matching between the template and NO.11 for example. Setting parameters: $N$ is 20, $NM$ is 100, and the initial values of $P_c$ and $P_m$ are 0.6 and 0.05 respectively. The selection, crossover, and mutation operators are adopted as in [11]: multi-point crossover is used, partially matched crossover is employed for the problem, and chosen genes are swapped to perform mutation. Each algorithm is run 10 times to overcome the randomness in GA.

The changes of population average fitness, standard deviation, $P_c$ and $P_m$ of the GA given in this paper, compared with SGA, are shown in Fig. 4, Fig. 5, Fig. 6 and Fig. 7 respectively.

Fig. 4. The changes of population average fitness.
Fig. 5. The changes of standard deviation.
Fig. 6. The changes of crossover probability.
Fig. 7. The changes of mutation probability.

From Fig. 4 and Fig. 5, the optimized function has lower average fitness and higher standard deviation in GA. The lower population average fitness means that the average distance between the individuals of each generation and the template is smaller, so the search quality is better in the process of searching; the higher standard deviation shows that the individuals are more dispersive, which is very helpful for a wider search.

D. Comparison Between Several Hausdorff Distances

Take the matching between the template and NO.11 for example. Adopting the same genetic algorithm (the one proposed in this paper), the Hausdorff distance and the fast strategy proposed in this paper are compared with those in [6] and [9], as shown in Table III.

TABLE III. COMPARISON BETWEEN SEVERAL HAUSDORFF DISTANCES

| Algorithms | Running time | Optimal results (θ, m) |
| HD in reference [9] | 27 min 39 s | (15.…, 0.9839) |
| PHD in reference [6] | 2 min 58 s | (14.…, 0.9839) |
| PHD in this paper | 3 min 1 s | (15.…, …) |
| The fast strategy (in this paper) | 2 min 50 s | (14.4839, …) |

Note that the running time is relative, because it is influenced by various external factors such as the computer's running state. From Table III, the HD given in this paper has higher matching accuracy as well as faster speed, and the fast strategy has lower computational complexity. The advantages become more obvious when the images are larger.

E. Results and Analysis of Rough and Accurate Matching

The results of rough and accurate matching are shown in Table IV and Table V ($n$ denotes the optimal generation, √ denotes that the image is selected, and Y denotes that the image is exactly matched with the template).

TABLE IV. RESULTS OF ROUGH MATCHING

| NO. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
| n | 87 | 71 | 90 | 87 | 98 | 92 | 96 | 90 | 97 | 84 | 51 | 93 | 83 | 61 | 40 | 77 | 76 |
| Selected | × | √ | √ | × | × | × | × | × | × | × | √ | √ | √ | √ | √ | √ | √ |

(For each image, the table also lists the estimated θ (degree), m and the fast measure.) From Table IV, the image NO.3, which is similar to the template, is selected together with NO.2 and NO.11–17; the other selected images are various affine transformations of the template.

On the basis of rough matching, eight images of transformations of the template are obtained by accurate matching, as shown in Table V; the image NO.3, which does not match with the template exactly, is removed.

TABLE V. RESULTS OF ACCURATE MATCHING

| NO. | 2 | 3 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
| n | 62 | 95 | 58 | 67 | 66 | 54 | 79 | 49 | 86 |
| Matched | Y | N | Y | Y | Y | Y | Y | Y | Y |

(For each image, the table also lists the estimated θ (degree), m and the HD value.) The matching results fit with human vision, and the estimated parameters show that the matching has low error and high quality. The average optimum iterative times of SGA and GA are 74 and 51 respectively.

From Fig. 6 and Fig. 7, the changes of $P_c$ and $P_m$ in GA are frequent with the changes of average fitness and standard deviation, whereas $P_c$ and $P_m$ in SGA remain unchanged in the process of searching, which shows that the GA given in this paper has a strong self-adaptive ability.

VI. CONCLUSIONS

The hybrid model produced in this paper has the features shown below:
• Quickly and effectively finishing the shape matching from coarse to fine with the fast strategy and the improved HD.
• The genetic algorithm with the probabilities of crossover and mutation self-adaption based on fuzzy control is adopted to get higher speed, better quality, and stronger self-adaptive ability.

Its stability is not that great yet and will be studied next. In conclusion, compared with other similar matching methods, this model can be used for image recognition and matching in practice.

REFERENCES

[1] Csaszar, A. General Topology. Bristol: Adam Hilger, 1978.
[2] Huttenlocher, D.P., Klanderman, G.A., and Rucklidge, W.J. Comparing images using the Hausdorff distance under translation. In: Proc. 1992 IEEE Comput. Soc. Conf. Comput. Vision Pattern Recogn. (CVPR '92), pp. 654–656.
[3] Dubuisson, M.P. and Jain, A.K. A modified Hausdorff distance for object matching. In: Proc. ICPR, Jerusalem, Israel, 1994, pp. 566–568.
[4] Sim, D.G., Kwon, O.K., and Park, R.H. Object matching algorithms using robust Hausdorff distance measures. IEEE Transactions on Image Processing, 1999, 8(3), pp. 425–429.
[5] Zhang Wenjing, Xu Xiaoming, and Su Jianfeng. A Fast Strategy for Image Matching Using Hausdorff Distance. Journal of Beijing Institute of Technology, 2003, 23(1), pp. 106–109.
[6] Zhang Wei, Gao Xinbo, and Xie Weixin. The Application of Improved Hausdorff Distance and Genetic Algorithm in Image Matching. Journal of Image and Graphics, 2000, 5(2), pp. 79–83.
[7] Holland, J.H. Adaption in Natural and Artificial Systems. Ann Arbor, MI: The University of Michigan Press, 1975.
[8] Zhang Zhijia, Huang Shabai, and Shi Zelin. Hausdorff Distance Based Object Matching with Genetic Algorithms. In: Proceedings of the IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, 2003, pp. 733–737.
[9] Chen Jianjun, Zang Tiefei, and Gu Jianjun. An Improved Algorithm for 2D Shape Matching Based on Hausdorff Distance. Acta Electronica Sinica, 1996, 24(4), pp. 915–919.
[10] Li Qing, Yin Yixin, Wang Zhiliang, and Ma Kun. A new adaptive algorithm for regulating the probabilities of crossover and mutation. Control and Decision, 2005, 20(6), pp. 1–6.
[11] Yang Shuying. Image Pattern Recognition——VC++ Implementation. Beijing: Tsinghua University Press / Beijing Jiaotong University Press, 2005.

2009 International Conference on Computer Engineering and Technology

A Multi-objective Genetic Algorithm for Optimization of Cellular Manufacturing System

H. Kor, H. Iranmanesh, H. Haleh, and S. M. Hatefi
Department of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran
hkor@ut.ac.ir, hiranmanesh@ut.ac.ir, haleh24@hotmail.com, s_m_hatefi@yahoo.com

Abstract—Cellular manufacturing (CM) is an important application of group technology (GT) in which families of parts are produced in manufacturing cells or in groups of various machines. In this paper, a genetic algorithm approach is proposed for solving the multi-objective cell formation problem. The objectives are the minimization of both total moves (intercell as well as intracell moves) and the cell load variation. We use SPEA-II, a well-known and efficient standard evolutionary multi-objective optimization technique. The efficiency of the multi-objective GA-SPEA II is illustrated on a large-sized test problem taken from the literature. This hybrid method presents a large set of non-dominated solutions from which decision makers can choose the best solution.

Keywords—Cellular Manufacturing System; Multi-Objective; Genetic Algorithm; SPEA II; Pareto set

I. INTRODUCTION

Cellular manufacturing (CM) is the application of group technology (GT) in manufacturing systems. Recent surveys [1, 2] indicate that the practical implementation of a cellular manufacturing system involves the optimization of many conflicting objectives. Given the size of the cellular manufacturing research output, it is surprising that relatively few solution methodologies explicitly address the multi-objective version of the problem, as indicated in the reviews by Mansouri et al. [3] and Dimopoulos [4]. Even when multiple objectives are considered, optimization is normally achieved through the aggregation of the objectives into a single composite objective. This technique does not provide the system designer with a set of alternative trade-off solutions, which is the main aim of a typical multi-objective optimization process.

II. DEFINITIONS

A general multi-objective optimization problem can be described as a vector function $f$ that maps a tuple of $m$ parameters (decision variables) to a tuple of $n$ objectives. Formally [5]:

$$\min/\max \ y = f(x) = (f_1(x), f_2(x), \ldots, f_n(x)), \quad x = (x_1, \ldots, x_m) \in X, \quad y = (y_1, \ldots, y_n) \in Y,$$

where $x$ is called the decision vector, $X$ is the parameter space, $y$ is the objective vector, and $Y$ is the objective space.

The concept of Pareto optimality is as follows. Assume, without loss of generality, a maximization problem, and consider two decision vectors $a, b \in X$. Then $a$ is said to dominate $b$ (also written as $a \succ b$) if $f_i(a) \ge f_i(b)$ for all objectives $i$ and $f_j(a) > f_j(b)$ for at least one objective $j$. The set of solutions of a multi-objective optimization problem consists of all decision vectors for which the corresponding objective vectors cannot be improved in any dimension without degradation in another; these vectors are known as Pareto optimal, and together they form the Pareto set.
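As an illustration of the dominance definition above (a sketch of ours, not part of the paper), the following checks dominance between objective vectors of a maximization problem and filters a list down to its non-dominated members:

```python
def dominates(a, b):
    """a > b iff a is no worse in every objective and strictly
    better in at least one (Section II, maximization)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the objective vectors not dominated by any other vector."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

print(pareto_front([(3, 1), (2, 2), (1, 1)]))  # -> [(3, 1), (2, 2)]
```

For the minimization objectives used later in the paper, the inequalities are simply reversed.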

III. MATHEMATICAL MODEL

Several measures have been used to evaluate the objective function of the cell formation problem (CFP), such as grouping efficiency proposed by Chandrasekharan and Rajagopalan [6], group efficacy by Kumar and Chandrasekharan [7], and bond energy by Miltenburg and Zhang [8]. However, these measures are only suitable for those CFPs whose machine–part flow chart is a 0–1 matrix denoting the manufacturing relationship between machines and parts, without considering other important design factors such as processing time, production requirements and the available time on each machine in a given period.

According to the literature, the most fundamental objectives for the CFP are the minimization of intercell flows and of cell load variation. In this paper, we propose to minimize intercell flows and cell load variation in consideration of the processing time and the available time on each machine in a given period. We can benefit from lower parts transfer cost due to the minimization of intercell flows, and from higher within-cell machine utilization due to the minimization of cell load variation. The mathematical model is given as in [9].

The notation of the model is as follows: $m$ denotes the total number of machines, $p$ the number of parts, and $c$ the number of cells; for each part $j$ and machine $i$, the model uses the processing time (hour/piece) of part $j$ on machine $i$, the available time on machine $i$ in a given period of time, and the production requirement of part $j$ in a given period of time; $[a_{ij}]$ is an $m \times p$ machine–part incidence matrix; $[x_{il}]$ is an $m \times c$ cell membership matrix, where $x_{il} = 1$ if machine $i$ is in cell $l$ and 0 otherwise; $w_{ij}$ is the workload on machine $i$ induced by part $j$, computed from the processing time, the production requirement and the available time defined above; and $M$ is a $c \times p$ matrix of average cell load.

The first objective, total moves, is the weighted sum of the intercell and intracell moves of all parts. For each part $i$, the moves are determined from the cell number in which operation $k$ is performed on part $i$ and the cell number in which operation $k+1$ is performed on part $i$, taking into consideration the sequence of operations and the total number of operations to be performed on part $i$ to complete its processing requirements: a pair of consecutive operations performed in different cells contributes an intercell move, while a pair performed in the same cell contributes to the total number of intracell moves performed by part $i$. The fractions representing the weights attributed to the intercell and intracell moves are assumed to be 0.7 and 0.3 respectively, as suggested by Dimopoulos [10] for comparison purposes. The second objective, cell load variation, measures over all cells and parts the deviation of the workloads $w_{ij}$ of the machines assigned to a cell from the average cell load given by $M$.

IV. SPEA II TECHNIQUE

As SPEA (Strength Pareto Evolutionary Algorithm) [5] forms the basis for SPEA2, we give a brief summary of that algorithm here; for a more detailed description the interested reader is referred to Zitzler [5]. SPEA was conceived as a way of integrating different MOEAs (Multi-Objective Evolutionary Algorithms). It uses an external archive containing the nondominated solutions previously found (the so-called external nondominated set). At each generation, nondominated individuals are copied to the external nondominated set. For each individual in this external set, a strength value is computed, which is proportional to the number of solutions that the individual dominates; this strength is similar to the ranking value of MOGA [12]. The fitness of each member of the current population is computed according to the strengths of all external nondominated solutions that dominate it. The fitness assignment process of SPEA thus considers both closeness to the true Pareto front and even distribution of solutions at the same time: instead of using niches based on distance, Pareto dominance is used to ensure that the solutions are properly distributed along the Pareto front. Although this approach does not require a niche radius, its effectiveness relies on the size of the external nondominated set. Since the external nondominated set participates in the selection process of SPEA, if its size grows too large it might reduce the selection pressure, thus slowing down the search. Because of this, the authors decided to adopt a technique that prunes the contents of the external nondominated set so that its size remains below a certain threshold; the approach adopted for this purpose was a clustering technique called the average linkage method [13].

SPEA II is a revised version of SPEA, whose pseudo code is shown in Algorithm 1 [14]. SPEA II has three main differences with respect to its predecessor [14]: (1) it incorporates a fine-grained fitness assignment strategy which takes into account, for each individual, the number of individuals that dominate it and the number of individuals that it dominates; (2) it uses a nearest neighbor density estimation technique which guides the search more efficiently; and (3) it has an enhanced archive truncation method that guarantees the preservation of boundary solutions. The overall algorithm is as follows:

Algorithm 1 (SPEA2 Main Loop)
Input: N (population size), N̄ (archive size), T (maximum number of generations)
Output: A (nondominated set)
Step 1: Initialization: Generate an initial population P0 and create the empty archive (external set) P̄0 = ∅. Set t = 0.

Step 2: Fitness assignment: Calculate the fitness values of the individuals in Pt and P̄t.
Step 3: Environmental selection: Copy all nondominated individuals in Pt and P̄t to P̄t+1. If the size of P̄t+1 exceeds N̄, then reduce P̄t+1 by means of the truncation operator; otherwise, if the size of P̄t+1 is less than N̄, then fill P̄t+1 with dominated individuals from Pt and P̄t.
Step 4: Termination: If t ≥ T or another stopping criterion is satisfied, then set A to the set of decision vectors represented by the nondominated individuals in P̄t+1. Stop.
Step 5: Mating selection: Perform binary tournament selection with replacement on P̄t+1 in order to fill the mating pool.
Step 6: Variation: Apply recombination and mutation operators to the mating pool and set Pt+1 to the resulting population. Increment the generation counter (t = t + 1) and go to Step 2.

V. GENETIC ALGORITHM

The general outline of the GA is summarized below [11]:

Algorithm 2: Genetic algorithm
Step 1: Generate a random population with n chromosomes by using a symbolic representation scheme (a suitable size of solutions for the problem).
Step 2: Evaluate the fitness function of each chromosome x in the population by using the proposed objective functions.
Step 3: Create a new population by iterating the following loop until the new population is complete:
(i) Select two parent chromosomes from the population according to their fitness from Step 2; the chromosomes with the better fitness are chosen.
(ii) With a preset crossover probability, the crossover operation is performed on the selected parents to form new offsprings (children); if no crossover is performed, the offsprings are exact copies of the parents.
(iii) With a preset mutation probability, mutation is performed on the new offsprings at each gene.
(iv) Place the new offsprings in the new population (see the sketch after this section).
Step 4: If the end condition is satisfied, stop, and deliver the best solution in the current population.
Step 5: Go to Step 2.
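A schematic of the SPEA2 main loop above (ours, not the paper's code); the standard SPEA2 fitness assignment, truncation-based environmental selection and variation operators are passed in as caller-supplied functions rather than reproduced here:

```python
import random

def spea2(init_pop, assign_fitness, env_select, vary, N, N_bar, T):
    """Shape of Algorithm 1: population + archive of size N_bar,
    binary tournament on the archive, variation into the next population."""
    pop, archive = init_pop(N), []                             # Step 1
    for t in range(T):
        union = pop + archive
        fit = {id(x): assign_fitness(x, union) for x in union}  # Step 2
        archive = env_select(union, fit, N_bar)                 # Step 3
        if t == T - 1:                                          # Step 4
            return archive          # its nondominated members form A
        mating = [min(random.sample(archive, 2), key=lambda x: fit[id(x)])
                  for _ in range(N)]                            # Step 5
        pop = vary(mating)                                      # Step 6
    return archive
```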
VI. NUMERICAL EXAMPLE

In order to demonstrate the efficiency of our model, we use a large-scale cell formation test problem taken from the literature [9], which was previously solved by the GP-SLCA algorithm; several authors have also used this test problem with different models and objectives. We purposely use this test problem in order to assess the validity of our approach on a large-sized instance, since multi-objective cell formation problems of this type do not exist elsewhere in the literature. In the research of Dimopoulos [10], a corresponding set of non-dominated solutions was provided for comparison purposes: he obtains a set of non-dominated solutions consisting of 43 solutions. The results are presented in the next section.

VII. RESULTS AND DISCUSSION

In this paper, we solve the problem by the GA-SPEA II method and present the resulting set of non-dominated solutions. As indicated in Table I, we obtain 36 solutions, compared with the 43 solutions of Dimopoulos's GP-SLCA. By comparing these solutions, we perceive that solution number 36 in this paper dominates solution numbers 23 and 43 obtained by Dimopoulos's algorithm; in the other solutions obtained by the two algorithms, the solutions are near to each other. The set of non-dominated solutions produced by multi-objective GA-SPEA II provides the decision maker with a reasonably complete picture of the potential design trade-offs: the set of evolved solutions covers the entire trade-off range of the objectives considered and provides a good starting point for other decision-maker activities that would lead to an informed decision.

VIII. CONCLUSION

This research aims at the implementation of the SPEA II algorithm, a state-of-the-art method for multi-objective problems, in the CMS environment: GA-SPEA II automatically produces sets of non-dominated solutions for the multi-objective CFP, which is the aim of a natural multi-objective optimization process. We suggest that researchers study NSGA II and other efficient algorithms in order to compare them and identify the best algorithm for this type of problem.

TABLE I. OBJECTIVE FUNCTION VALUES FOR THE SET OF NON-DOMINATED SOLUTIONS AND COMPARISON WITH THE DIMOPOULOS [10] RESULT

(For each solution the table lists the two objective values, total part moves and cell-load variation: solutions 1–36 of our GA-SPEA II and solutions 1–43 of Dimopoulos's GP-SLCA.)

TABLE II. MACHINE-CELL CONFIGURATION FOR THE SOLUTIONS IN TABLE I

(Each row gives the cell assignment of machine Wi across solutions 1–36.)

W1:  C1 C2 C3 C1 C3 C3 C4 C1 C1 C4 C4 C2 C2 C1 C6 C5 C5 C2 C2 C1 C5 C1 C2 C2 C2 C1 C2 C1 C7 C7 C1 C1 C1 C1 C1 C2
W2:  C1 C1 C3 C2 C1 C1 C3 C2 C2 C1 C1 C1 C1 C2 C1 C1 C1 C1 C1 C2 C1 C2 C1 C1 C1 C2 C1 C2 C1 C1 C2 C2 C2 C2 C2 C1
W3:  C1 C2 C3 C3 C3 C3 C4 C3 C4 C4 C2 C3 C3 C3 C3 C5 C2 C3 C3 C3 C4 C3 C3 C3 C3 C3 C3 C3 C5 C5 C3 C3 C3 C3 C3 C3
W4:  C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C4 C5 C5 C6 C5 C5 C4 C4 C6 C7 C6 C4 C4 C4 C4 C4 C4 C3 C3 C4 C4 C4 C4 C4 C4
W5:  C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C5 C6 C6 C5 C6 C5 C5 C5 C7 C7 C7 C7 C7 C5 C8 C8 C8 C5 C5
W6:  C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C4 C5 C6 C4 C3 C6 C6 C6 C5 C4 C6 C7 C7 C7 C6 C7 C6 C6 C8 C7 C8 C6 C7 C8
W7:  C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9
W8:  C1 C2 C3 C3 C3 C2 C4 C4 C4 C3 C3 C5 C5 C4 C5 C3 C4 C6 C5 C4 C3 C5 C7 C7 C6 C6 C5 C5 C4 C4 C7 C6 C6 C5 C7 C7
W9:  C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9
W10: C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9
W11: C1 C2 C2 C3 C2 C3 C2 C4 C3 C2 C4 C5 C5 C5 C4 C2 C5 C6 C6 C5 C2 C6 C7 C6 C7 C5 C7 C7 C2 C2 C6 C5 C5 C8 C6 C6
W12: C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9
W13: C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C6 C7 C7 C8 C8 C7 C7 C8 C9
W14: C1 C2 C1 C3 C3 C3 C1 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9
W15: C1 C2 C3 C3 C3 C3 C4 C4 C4 C4 C4 C5 C5 C5 C6 C5 C5 C6 C6 C6 C5 C6 C7 C7 C7 C7 C7 C7 C7 C7 C8 C8 C8 C8 C8 C9

REFERENCES

[1] Wemmerlov, U. and Johnson, D.J., "Cellular manufacturing at 46 user plants: implementation experiences and performance improvements", Int. J. Prod. Res., 35, 1997, pp. 29–49.
[2] Wemmerlov, U. and Johnson, D.J., "Empirical findings in manufacturing cell design", Int. J. Prod. Res., 38, 2000, pp. 481–507.
[3] Mansouri, S.A., Moattar Husseini, S.M., and Newman, S.T., "A review of the modern approaches to multi-criteria cell design", Int. J. Prod. Res., 38, 2000, pp. 1201–1218.
[4] Dimopoulos, C. and Mort, N., "Evolving knowledge for the solution of clustering problems in cellular manufacturing", Int. J. Prod. Res., 42, 2004, pp. 4119–4133.
[5] Zitzler, E. and Thiele, L., "Multiobjective evolutionary algorithms: a comparative case study and the Strength Pareto approach", IEEE Trans. Evol. Comput., 3, 1999, pp. 257–271.
[6] Chandrasekharan, M.P. and Rajagopalan, R., "Groupability: analysis of the properties of binary data matrices for group technology", Int. J. Prod. Res., 27, 1989, pp. 1035–1052.
[7] Kumar, C.S. and Chandrasekharan, M.P., "Grouping efficacy: a quantitative criterion for goodness of block diagonal forms of binary matrices in group technology", Int. J. Prod. Res., 28, 1990, pp. 233–243.
[8] Miltenburg, J. and Zhang, W., "A comparative evaluation of nine well-known algorithms for solving the cell formation problem in group technology", J. Oper. Manage., 10, 1991, pp. 44–72.
[9] Gupta, Y., Gupta, M., Kumar, A., and Sundaram, C., "A genetic algorithm-based approach to cell composition and layout design problems", Int. J. Prod. Res., 34, 1996, pp. 447–482.
[10] Dimopoulos, C., "Multi-objective optimization of manufacturing cell design", Int. J. Prod. Res., 44(22), 2006, pp. 4855–4875.
[11] Haupt, R.L. and Haupt, S.E., Practical Genetic Algorithms, second edition, John Wiley, 2004.
[12] Fonseca, C.M. and Fleming, P.J., "Genetic algorithms for multiobjective optimization: formulation, discussion and generalization", In S. Forrest (ed.), Proceedings of the Fifth International Conference on Genetic Algorithms, University of Illinois at Urbana-Champaign, San Mateo, California: Morgan Kaufmann Publishers, 1993, pp. 416–423.
[13] Morse, J.N., "Reducing the size of the nondominated set: pruning by clustering", Computers and Operations Research, 7(1–2), 1980, pp. 55–66.
[14] Zitzler, E., Laumanns, M., and Thiele, L., "SPEA2: Improving the Strength Pareto Evolutionary Algorithm", In K. Giannakoglou, D. Tsahalis, J. Periaux, and P. Papailou (eds.), EUROGEN 2001: Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems, Athens, Greece, 2001, pp. 95–100.

2009 International Conference on Computer Engineering and Technology

A Formal Mapping between Program Slicing and Z Specifications

Fangjun Wu
School of Information Technology, Jiangxi University of Finance and Economics, NanChang, China, 330013
wufangjun@jxufe.edu.cn

Abstract

This paper represents a research effort towards a formal mapping between program slicing and Z specifications. With this approach, we provide not only a precise semantic basis for program slicing but also a sound mechanism for reasoning and verification about program slicing. General aspects of program slicing are considered.

1. Introduction

Program slicing, originally introduced by Weiser M., is an effective technique for narrowing the focus of attention to the relevant parts of a program during the debugging process [1]. It is used to extract statements and predicates from an original program P and make a new program (called the slice) Q, so that we can analyze the original program P by means of the slice. Nowadays, program slicing has been widely used in program analyzing, understanding, debugging, testing, maintenance, measuring, optimizing, transforming, reusing, reverse engineering and reengineering, model checking, and software security. The research, development and application of program slicing techniques have been carried out for more than twenty years, and many researchers have done much work in program slicing: researchers have proposed forward slicing and backward slicing, static slicing and dynamic slicing, intraprocedural slicing [3] and interprocedural slicing [4], sequential program slicing and concurrent program slicing, conditioned program slicing, amorphous slicing, union slicing, relevant slicing, hybrid slicing, denotational slicing, chopping, dicing, object-oriented program slicing, aspect-oriented program slicing, specification slicing [5, 6] and so on, obtained many theoretical results, and developed lots of practical applications [2]. Although program slicing has been widely studied in the literature, very little work is involved in its formalization [7].

To help alleviate this situation, this paper represents a research effort towards the semantic formalization of the program slicing technique using the Z schema calculus [8-10]. The reasons for choosing the Z notation are as follows. The specification language Z includes both a means of specifying data types, based on sets, and a means of specifying constraints using predicate logic. The Z notation is used recently not only in the academic domain, but also for the industrial development of high-integrity systems such as safety-critical software: Z has been used for a number of digital systems in a variety of ways to improve the specifications of computer-based systems. The teaching of Z has also become of increasing interest, and many textbooks on Z are now available. It is thus attractive to consider its use in the formalization of program slicing. Major difficulties in formalizing program slicing lie in how to describe the various definitions of program slicing, how to describe the nodes and edges of program dependence graphs, and how to deal with program slicing algorithms.

2. A formal mapping between program slicing and Z specifications

The most related concepts and definitions of program slicing are known from [1] and [3]. Two typical definitions are considered, here referred to as Weiser's slicing and Ottenstein's slicing: the former is due to Weiser M., while the latter is due to Ottenstein K.

Definition 1. Weiser's slicing problem is stated as follows: For a given program point point and a set of variables V in program P, find a projection Q of P by deleting zero or more statements such that, when P and Q run on the same initial state, the sequences of states that arise at point point in the two programs have identical values for all variables in V. Q is called the slice, and <point, V> is called the slicing criterion.

Generally.Definition 2 Ottenstein’s slicing problem is stated as follows: For a given point point and a set if variables V in program P. Free types are introduced to describe nodes and edges respectively: [Node. to assure that slices contain all of statements that might affect values for all variables in VariableSet. such as: program dependence graphs (PDG). Node_Shapes will normally be an enumerated type which holds the possible shapes that nodes can have on a given graph type. namely control dependence edges. In following section. Several formalisms have been used to represent program dependences. then the sequences of states that arise at point point in OriginalProgram and SliceProgram have identical values for all variables in VariableSet. Definition 4 Horwitz’s program dependence graph is a modification of Ottenstein’s. an original program OriginalProgram must exist. Arc]. architectural dependence graphs. Variable]. and their possible shapes can be described by the enumerated set Arc_Shapes. they share some common features. Although different definitions of program dependence representations have been given. unless explicitly stated. value dependence graphs. deforder dependence edges . while others are extended on it by adding some specifical characteristics. and it is a directed graph whose nodes are connected by several kinds of edges. system dependence graphs (SDG). program dependence graphs imply Horwitz’s. loop independent edges and loop carried edges. For the Ottenstein’s slicing problem. Therefore. unless explicitly stated. where <point. Similarly. From Definition 1 and Definition 2. while Ottenstein’s slice does not necessarily constitute an executable program. A node has a shape. free types Statement and Variable are introduced: [Statement. Henceforth. if original program OriginalProgram and slice SliceProgram have the same initial state. Definition 3 Ottenstein’s program dependence graph is composed of control dependence graph (CDG). States]. program dependence graph is the basic. programs are composed of statements and variables. 258 . dependence edge also has shape. No means not. Two typical definitions of program dependence graphs are Ottenstein’s definition[3] and Horwitz’s[4]. given slicing criterion <point. The Ottenstein’s slicing problem can be described using schema OttensteinProgramSlicing. Thus two free types states of program States and values of variables Value are introduced: [Value. Node_Types::=entry | assignment | controlpredicate | InitialState | FinialUse | others Many of the characteristics of dependence edges are similar to those of a node. For Weiser’s slicing problem. Weiser’s slicing problem can be described using schema WeiserProgramSlicing. a program slice implies Ottenstein’s slice. V> is called slicing criterion. VariableSet>. data dependence graph (DDG) and control flow graph (CFG). we will outline the features of program dependence graphs. we know that Weiser’s slice is executable. Among them. Line. Henceforth. a program slice is a subset of the statements and control predicates of the program P that directly or indirectly influence the variables of V at a special program point point. Name. extended program dependence graphs. a Boolean type Affect::=Yes | No is introduced to decide whether the variable is affected or not. To slice programs. UML class diagram dependence graphs and specification dependence graphs (SpDG). in which Yes means that the variable is affected. Node_Shapes::=circle | ellipse The node will also have a type.

Many of the characteristics of dependence edges are similar to those of a node. A dependence edge also has a shape, and the possible shapes can be described by the enumerated set Arc_Shapes:

Arc_Shapes ::= solid | solid_with_a_hash_mark | dashed | medium_boldface | heavy_boldface

Similarly, dependence edges have types, which can be described by the enumerated set Arc_Types:

Arc_Types ::= loop_carried_flow_dependence | loop_independence_flow_dependence | def_order_dependence | control_dependence

In addition, in order to specify whether an arc is directional or not, we can add an arrowhead to it; there are two possible types of end:

Ends ::= plain | arrow

Although different kinds of edges exist, they also share some common features: name, shape, type, the source and target of the dependence edge, and whether an arrowhead is included or not. We define these common features as a generic arc, represented by the state schema GenericArcs.

All kinds of nodes and edges are then depicted respectively. Firstly, the entry nodes are formalized by the schema EntryNode. Secondly, the assignment nodes and the control-predicate nodes are formalized by the schemas AssignmentNode and ControlPredicateNode respectively. Similarly, the initial state and the final use of variables are formalized by the schemas InitialStateNode and FinalUseNode respectively.
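The schema-based formalization above has a direct operational reading. As an illustration (ours, not the paper's Z text), a PDG can be held as a set of dependence edges, and a slice computed by backward graph reachability, the second of the two slicing methods discussed in the next section:

```python
from collections import defaultdict

def backward_slice(edges, criterion):
    """Nodes from which the criterion node is reachable along dependence
    edges: a single backward traversal, linear in the size of the slice."""
    preds = defaultdict(set)
    for src, dst in edges:           # control/data dependence edges of the PDG
        preds[dst].add(src)
    slice_, work = {criterion}, [criterion]
    while work:
        for p in preds[work.pop()]:
            if p not in slice_:
                slice_.add(p)
                work.append(p)
    return slice_

# tiny PDG: entry -> 1 -> 2 and 1 -> 3; slicing on node 3 keeps {entry, 1, 3}
print(sorted(backward_slice([("entry", 1), (1, 2), (1, 3)], 3), key=str))
```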

On the basis of the formalization of nodes and edges, we formalize program dependence graphs by the schema PDG, and, given the slicing criterion <point, VariableSet>, slicing over a program dependence graph is formalized by the schema SlicingPDG.

So far, two methods of computing program slices have been developed. One is Weiser's method, which iteratively solves dataflow equations derived from inter-statement influence [1]. The other lays its base on graph reachability algorithms [4]; this operation can be performed in time linear in the size of the slice by a single traversal of the PDG starting from the set of points. Many researchers are doing work along the graph reachability line, on which our work in this paper is also based.

3. Conclusions

This paper represents a research effort towards the semantic formalization of the program slicing technique using the Z schema calculus. This formalization could be helpful for a correct understanding of the different types of slicing and also for the correct application of a desired slicing regime in a rigorous way; hence inference and verification of program slicing become possible. All schemas in this paper are checked using the ZTC type-checker package [11] and Z User Studio [12].

Acknowledgements

This research has been supported by the Natural Science Foundation of Jiangxi (Project No. 2007GQS0495) and the Science and Technology Foundation of the Education Department of Jiangxi (Project No. GJJ08353 and [2007] 434).

References

[1] Weiser M., "Program slicing", IEEE Transactions on Software Engineering, vol. 10, no. 4, 1984, pp. 352-357.
[2] Tip F., "A survey of program slicing techniques", Journal of Programming Languages, vol. 3, no. 3, 1995, pp. 121-189.
[3] Ferrante J., Ottenstein K.J., and Warren J.D., "The program dependence graph and its use in optimization", ACM Transactions on Programming Languages and Systems, vol. 9, no. 3, 1987, pp. 319-349.
[4] Horwitz S., Reps T., and Binkley D., "Interprocedural slicing using dependence graphs", ACM Transactions on Programming Languages and Systems, vol. 12, no. 1, 1990, pp. 26-60.
[5] Wu Fangjun and Yi Tong, "Slicing Z specifications", ACM SIGPLAN Notices, vol. 39, no. 8, August 2004, pp. 39-48.
[6] Wu Fangjun, Research on Z formal specifications slicing, Nanchang, Jiangxi: Jiangxi University of Finance & Economics, 2006.
[7] Binkley D., Danicic S., Gyimothy T., Harman M., Kiss A., and Ouarbya L., "Formalizing executable dynamic and forward slicing", Proceedings of the 4th Workshop on Source Code Analysis and Manipulation (SCAM 2004), September 2004, pp. 43-52.
[8] Spivey J.M., The Z Notation: A Reference Manual, second edition, London: Prentice Hall, 1992.
[9] ISO/IEC 13568, Z formal specification notation - syntax, type system and semantics, International Standards Organization, 2002. See also http://web.comlab.ox.ac.uk/oucl/work/andrew.martin/zstandards/
[10] Miao Huaikou and Zhu Guanming, Software Engineering Language Z, Shanghai: Shanghai Science and Technology Information Publishing House, 1999.
[11] Jia X., ZTC: A Type Checker for Z Notation, User's Guide, Version 2.03, Division of Software Engineering, School of Computer Science, Telecommunication, and Information Systems, DePaul University, USA, 1998.
[12] Miao Huaikou, Liu Ling, et al., "Z User Studio: an integrated support tool for Z specifications", Proceedings of the Asia-Pacific Software Engineering Conference (APSEC 2001), December 2001, pp. 437-444.

2009 International Conference on Computer Engineering and Technology

Modified Class-Incremental Generalized Discriminant Analysis

Yunhui He
Department of Communications Engineering, Nanjing University of Information Science and Technology, Nanjing, China, 210044
yunhuihe@163.com

Abstract—In this paper, we propose an efficient method for resolving the optimal discriminant vectors of Generalized Discriminant Analysis (GDA) and point out the drawback of high computational complexity in the traditional class-incremental GDA [W. Zheng, "Class-Incremental Generalized Discriminant Analysis", Neural Computation 18, 979-1006 (2006)]. To overcome this drawback, we propose an efficient method for resolving the optimal discriminant vectors of the batch GDA and the class-incremental GDA respectively, by directly performing the Gram-Schmidt (GS) orthogonalization procedure [11] in the difference space (DS) [12] using the kernel trick. Because there is no need to compute the mean of the classes and the mean of the total samples in the proposed method, as needed in the traditional class-incremental GDA, the computational complexity is reduced greatly. We call the proposed method GS-DS GDA as an abbreviation. The theoretical justifications of the proposed batch GDA and class-incremental GDA are presented in this paper.

Keywords—class-incremental generalized discriminant analysis; difference space; Gram-Schmidt orthogonalization; kernel method

I. INTRODUCTION

The classical linear discriminant analysis (LDA) [1] was nonlinearly extended to Generalized Discriminant Analysis (GDA) [2] by mapping the samples from the input space to a high-dimensional feature space via the kernel trick [3, 4]. However, LDA often encounters the small sample size (SSS) problem [5] when the dimensionality of the samples is greater than the number of samples. Numerous methods have been proposed to deal with the SSS problem of LDA [6-8] by resolving the optimal discriminant vectors in the range space of the total scatter matrix and the null space of the within-class scatter matrix. The GDA methods also suffer from this problem, since the dimensionality of the samples in feature space is much greater than the number of samples. For solving GDA, the traditional approach is the use of the singular value decomposition (SVD) [2]. However, the SVD-based GDA algorithms suffer from a numerical instability problem due to numerical perturbation [9]. Recently, Zheng [10] proposed a numerically stable algorithm for batch GDA and class-incremental GDA in the case of the small sample size problem by applying only the QR decomposition in feature space. However, the algorithms in [10] are not optimal in terms of computational complexity, because the mean of the classes and the mean of the samples must be computed before the QR decomposition is applied.

In the next section, the Gram-Schmidt orthogonalization in feature space is introduced. The batch GS-DS GDA algorithm and the class-incremental algorithm, together with their theoretical justification, are proposed in Sections 3 and 4 respectively. The conclusions are drawn in Section 5.

II. GRAM-SCHMIDT ORTHOGONALIZATION IN FEATURE SPACE

In kernel methods, the samples are transformed into an implicit higher-dimensional feature space $F$ through a nonlinear mapping $\Phi(x)$. The implicit feature vector in $F$ does not need to be computed explicitly; computations are done through the inner product of two vectors in $F$ given by a kernel function $k(x, y) = \Phi(x)^T \Phi(y)$ [3, 4].

Let the sample matrix be $X = [x_1, \ldots, x_n]$, which becomes $X_\Phi = [\Phi(x_1), \ldots, \Phi(x_n)]$ in feature space. By using the QR decomposition, the matrix $X_\Phi$ can be expressed as

$$X_\Phi = QR, \quad (1)$$

where $Q$ is a column orthogonal matrix and $R$ is an upper triangular matrix. Because the columns of the matrix $Q$ are orthonormal, we have

$$K = X_\Phi^T X_\Phi = R^T Q^T Q R = R^T R, \quad (2)$$

where $K$ is the $n \times n$ kernel matrix, which can be computed using the kernel function as $(K)_{ij} = k(x_i, x_j)$, and which is a symmetric positive semi-definite matrix. In implementation, the triangular matrix $R$ can be obtained by performing the Cholesky decomposition of $K$ [11]. If the samples are linearly independent, then the upper triangular matrix $R$ is positive definite, and we obtain

$$X_\Phi R^{-1} = Q. \quad (3)$$

From the above analysis, we can see that performing the Gram-Schmidt orthogonalization in feature space amounts to performing the Cholesky decomposition of the kernel matrix $K$, without computing the feature vectors explicitly.
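A minimal sketch (ours) of Eqs. (1)–(3): the factor R is obtained from a Cholesky decomposition of the kernel matrix, so the orthonormal basis Q is represented implicitly through the coordinates R^{-1} in the sample basis. The function name is hypothetical, and the sketch assumes the mapped samples are linearly independent so that K is positive definite:

```python
import numpy as np

def kernel_gram_schmidt(X, kernel):
    """Implicit Gram-Schmidt in feature space: K = R^T R (Cholesky),
    hence Q = X_Phi @ R^{-1} without ever forming Phi(x)."""
    n = X.shape[0]
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    R = np.linalg.cholesky(K).T     # upper triangular, K = R^T R  (Eq. (2))
    R_inv = np.linalg.inv(R)        # coordinates of Q in the sample basis (Eq. (3))
    return R, R_inv

# example kernel: RBF, k(x, y) = exp(-||x - y||^2 / (2 s^2))
rbf = lambda x, y, s=1.0: np.exp(-np.sum((x - y) ** 2) / (2 * s ** 2))
```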

III. THE BATCH GS-DS GDA

In this section, we propose the optimal vectors on which the orthogonalization is performed for resolving the optimal discriminant vectors of the batch GDA.

Let the training set have $C$ classes, where class $i$ has $n_i$ samples, let $\Phi(x_k^i)$ be the $k$-th sample of class $i$ in feature space, and let $n = \sum_{i=1}^C n_i$ be the total number of samples. The between-class, within-class and total scatter matrices in feature space are defined as

$$S_b^\Phi = \sum_{i=1}^C n_i (\mu_i^\Phi - \mu^\Phi)(\mu_i^\Phi - \mu^\Phi)^T, \quad (4)$$
$$S_w^\Phi = \sum_{i=1}^C \sum_{m=1}^{n_i} (\Phi(x_m^i) - \mu_i^\Phi)(\Phi(x_m^i) - \mu_i^\Phi)^T = \sum_{i=1}^C S_i^\Phi, \quad (5)$$
$$S_t^\Phi = \sum_{i=1}^C \sum_{m=1}^{n_i} (\Phi(x_m^i) - \mu^\Phi)(\Phi(x_m^i) - \mu^\Phi)^T = S_b^\Phi + S_w^\Phi, \quad (6)$$

where $S_i^\Phi = \sum_{m=1}^{n_i} (\Phi(x_m^i) - \mu_i^\Phi)(\Phi(x_m^i) - \mu_i^\Phi)^T$ is the covariance matrix of class $i$, and $\mu_i^\Phi$ and $\mu^\Phi$ are the mean of class $i$ and the mean of all samples, respectively. We assume that all samples are linearly independent; then the ranks of $S_b^\Phi$, $S_w^\Phi$ and $S_t^\Phi$ are $C-1$, $n-C$ and $n-1$, respectively. Because $S_b^\Phi$, $S_w^\Phi$ and $S_t^\Phi$ are positive semi-definite matrices, we have

$$N(S_w^\Phi) = \bigcap_{i=1}^C N(S_i^\Phi). \quad (7)$$

Define the difference vectors of class $i$ as

$$\Phi(b_{k_i}^i) = \Phi(x_{k_i+1}^i) - \Phi(x_1^i), \quad k_i = 1, \ldots, n_i - 1. \quad (8)$$

From the relation in [12], these $n_i - 1$ difference vectors span the difference space of class $i$, which equals the space spanned by all eigenvectors of $S_i^\Phi$ corresponding to the nonzero eigenvalues. From (7) and (8), we obtain

$$R(S_w^\Phi) = \bigcup_{i=1}^C R(S_i^\Phi) = \mathrm{span}\{\Phi(b_1^1), \ldots, \Phi(b_{n_1-1}^1), \Phi(b_1^2), \ldots, \Phi(b_{n_2-1}^2), \ldots, \Phi(b_1^C), \ldots, \Phi(b_{n_C-1}^C)\}. \quad (9)$$

Similarly, define

$$\Phi(d_{k_i}^i) = \Phi(x_{k_i}^i) - \Phi(x_1^1). \quad (10)$$

For class 1 we have

$$\Phi(d_l^1) = \Phi(x_{l+1}^1) - \Phi(x_1^1) = \Phi(b_l^1), \quad l = 1, \ldots, n_1 - 1, \quad (11)$$

and for class $i = 2, \ldots, C$ and $k_i = 2, \ldots, n_i$,

$$\Phi(d_{k_i}^i) = \Phi(x_{k_i}^i) - \Phi(x_1^1) = (\Phi(x_{k_i}^i) - \Phi(x_1^i)) + (\Phi(x_1^i) - \Phi(x_1^1)) = \Phi(b_{k_i-1}^i) + \Phi(d_1^i). \quad (12)$$

From the definition of the difference space [12], we have

$$R(S_t^\Phi) = \mathrm{span}\{\Phi(d_1^1), \ldots, \Phi(d_{n_1-1}^1), \Phi(d_1^2), \ldots, \Phi(d_{n_2}^2), \ldots, \Phi(d_1^C), \ldots, \Phi(d_{n_C}^C)\}. \quad (13)$$

Now we rearrange all the difference vectors $\Phi(d_{k_i}^i)$ in sequence as follows:

$$\{\Phi(d_1^1), \ldots, \Phi(d_{n_1-1}^1),\ \Phi(d_2^2) - \Phi(d_1^2), \ldots, \Phi(d_{n_2}^2) - \Phi(d_1^2),\ \ldots,\ \Phi(d_2^C) - \Phi(d_1^C), \ldots, \Phi(d_{n_C}^C) - \Phi(d_1^C),\ \Phi(d_1^2), \ldots, \Phi(d_1^C)\}. \quad (14)$$

By comparing (11) and (12) with (14), it can be seen that the vectors in (14) equal, in sequence,

$$\{\Phi(b_1^1), \ldots, \Phi(b_{n_1-1}^1), \Phi(b_1^2), \ldots, \Phi(b_{n_2-1}^2), \ldots, \Phi(b_1^C), \ldots, \Phi(b_{n_C-1}^C), \Phi(d_1^2), \ldots, \Phi(d_1^C)\}. \quad (15)$$

From (9) and (15), the leftmost $n - C$ vectors in (14) span $R(S_w^\Phi)$, while from (13) all the $n - 1$ vectors in (14) span $R(S_t^\Phi)$.
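A sketch (ours, with hypothetical names) of how the spanning sets (9) and (13)–(15) can be realized through kernels alone: the inner product of two difference vectors expands into four kernel evaluations, so the Gram matrix needed for the Gram-Schmidt step of Section II is computable without ever forming Φ or any class means:

```python
def diff_kernel(kernel, x_a, x_b, y_a, y_b):
    """<Phi(x_a)-Phi(x_b), Phi(y_a)-Phi(y_b)> expanded via the kernel trick."""
    return (kernel(x_a, y_a) - kernel(x_a, y_b)
            - kernel(x_b, y_a) + kernel(x_b, y_b))

def difference_pairs(classes):
    """Pairs (x_k^i, x_1^i) realizing the vectors Phi(b_k^i) of Eq. (8),
    followed by pairs (x_1^i, x_1^1) realizing Phi(d_1^i), i >= 2,
    in the order of the rearranged sequence (14)."""
    pairs = [(x, c[0]) for c in classes for x in c[1:]]      # n - C  b-vectors
    pairs += [(c[0], classes[0][0]) for c in classes[1:]]    # C - 1  d_1^i vectors
    return pairs
```

Here `classes` is a list of per-class sample lists; applying `diff_kernel` to every pair of entries of `difference_pairs(classes)` yields the Gram matrix of the n − 1 vectors in (14).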